What is an ECU (AWS)? What does it mean? - performance

I ran a performance test on my server (1 ECU), but it only reached 1,000 users in testing. How many ECUs do I need for 15,000 users?

The ECU (Elastic Compute Unit) was a unit of measure designed to provide a relative measure of performance between Amazon EC2 instance types. For example, an m1.small instance had 1 ECU, an m1.large had 2 ECUs, etc.
However, it is no longer possible to summarize the power of an instance in a single number. Some instances have more RAM, some have more CPUs or more powerful CPUs, GPUs, enhanced networking and even burst capabilities.
Therefore, the ECU has slowly disappeared from AWS services and documentation. It can still be viewed as an optional column in the Amazon EC2 Launch Instance console.
The ECU is definitely not a good measure of "the number of users" that a system can support. The number of users a system can support is totally dependent upon the application architecture and its system requirements. When testing the number of users a system can support, closely monitor all system components (e.g. CPU load, RAM utilization, disk queues) to identify the bottleneck. You can then try to modify the application or relieve the bottleneck to provide better application performance.
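As a minimal sketch of that kind of monitoring during a load test, assuming the third-party psutil package (the disk busy_time field is Linux-specific):

```python
# Minimal sketch: sample CPU, RAM and disk activity while the load test runs,
# to see which resource saturates first. Assumes the third-party psutil
# package; busy_time is a Linux-only field.
import time
import psutil

def watch(interval_s: float = 5.0) -> None:
    psutil.cpu_percent(interval=None)            # prime the CPU counter
    prev = psutil.disk_io_counters()
    while True:
        time.sleep(interval_s)
        cpu = psutil.cpu_percent(interval=None)  # % since the previous call
        mem = psutil.virtual_memory().percent
        cur = psutil.disk_io_counters()
        disk_busy = (cur.busy_time - prev.busy_time) / (interval_s * 1000)
        prev = cur
        print(f"cpu={cpu:.0f}%  mem={mem:.0f}%  disk_busy={disk_busy:.0%}")

if __name__ == "__main__":
    watch()
```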

Related

One large machine or cluster of small machines for Jmeter load generator in load testing?

I want to simulate up to 100,000 requests per second, and I know that tools like JMeter and Locust can run in distributed mode to generate load.
But since there are cloud VMs with up to 64 vCPUs and 240GB of RAM on a single VM, is it necessary to run in a cluster of smaller machines, or can I just use 1 large VM?
Will I be able to achieve more "concurrency" with more machines due to a network bottleneck coming from the 1 large machine?
If I just use one big machine, would I be limited by the number of ports there are?
In the load generator, does every simulated "user" that sends a request also require a port on the machine to receive a 200 response? (Sorry, my understanding of how TCP ports work is a bit weak.)
Also, we use Kubernetes pretty heavily, but with JMeter or Locust, I feel like it'd be easier to run on a bare VM, without containerizing (even in distributed mode), while still maintaining reproducibility. Should I be trying to containerize JMeter or Locust and run it in Kubernetes instead?
According to the KISS principle, it is better to go for a single machine, assuming it is capable of generating the required load.
Make sure you're following JMeter Best Practices
Make sure you have monitoring of baseline OS health metrics (CPU, RAM, swap, network and disk IO, JVM statistics, etc.)
Start with a low number of users and gradually increase the load until you reach the desired throughput or the limit of any of the monitored metrics, whichever comes first (see the sketch after this answer). If you run short of CPU, RAM, or another resource, see what can be done to overcome the limitation.
More information: What’s the Max Number of Users You Can Test on JMeter?
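As an illustration of the gradual ramp-up, here is a minimal Locust sketch (Locust is mentioned in the question as an alternative to JMeter); the target host and the "/" endpoint are placeholders for your application.

```python
# locustfile.py -- minimal ramp-up sketch. The "/" endpoint is a placeholder;
# point it at a representative page or API call of your application.
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 3)   # each simulated user waits 1-3 s between requests

    @task
    def index(self) -> None:
        self.client.get("/")
```

Run it headless, e.g. locust -f locustfile.py --headless -u 1000 -r 50 --host https://your-target, and keep raising -u (the number of users) only while the monitored OS metrics stay healthy.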

Sharing of resources on AWS

I am interested in understanding the way in which the hardware resources (CPU, disk, network, etc.) of an AWS physical server are shared between different applications. Do people have experiences of inexplicable performance changes in services running on AWS that you have successfully attributed to another application sharing the physical resources? If so, how did you go about debugging this?
In particular, I am interested in more complicated interactions between the resources, such as CPU->Memory bandwidth. If you run 15 VMs on a single machine, you will surely have worse performance than if you ran 2 VMs.
Perhaps this is a more general question about Xen virtualization, but I don't know if there is some kind of AWS magic happening under the hood that I don't know about.
I am not sure if this is the right forum for this kind of question; if not, it would be helpful if you could point me towards a resource or another forum.
Amazon EC2 instances are not susceptible to "noisy neighbour" problems.
Based upon the Instance Type selected, the EC2 instance receives CPU, Memory and (for some instance types) locally attached disk storage. These resources are dedicated to the instance and will not be impacted by other users nor other virtual machines. (An exception to this is the t1 and t2 instance types.)
Specifically:
The instance is allocated a number of vCPUs. These are provided to the instance and no other instance can use these vCPUs (see note about t1 and t2 below). The EC2 Instance Type page defines a vCPU as:
Each vCPU is a hyperthread of an Intel Xeon core for M4, M3, C4, C3, R3, HS1, G2, I2, and D2.
The instance is allocated an amount of RAM. No other instance can use this RAM. There is no oversubscription of CPU nor RAM.
The instance might be allocated locally-attached disk storage, known as Instance Store or Ephemeral Storage. This disk storage does not persist when the instance is Stopped or Terminated, so only store temporary data or data that is replicated elsewhere.
The instance is allocated network bandwidth that is dedicated to that instance. No other instance can impact this network bandwidth. The network performance is based upon the selected instance type. Basically, larger instances receive more network performance.
None of the above factors are impacted by other instances (virtual machines) running on the same host.
t1 and t2 instance types
Exceptions to the above statement are:
t1.micro instances "provide a small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available".
t2 instances provide burst capacity based upon a system of CPU Credits. CPU Credits are earned at a constant rate depending upon instance type, and these credits can be used to burst the CPU when necessary.
For both these instance types, I would assume that this burst capacity is shared between instances, so it is possible that CPU burst might be impacted by other instances also wishing to burst. The t2 instances, however, would make this 'fair' by only consuming CPU credits when the CPU did actually burst.
Dedicated Instances and Dedicated Hosts
Dedicated instances are "Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a single customer." Basically, your AWS account will be the only account running instances on that host computer.
A Dedicated Host is a "physical server with EC2 instance capacity fully dedicated to your use. Dedicated Hosts allow you to use your existing per-socket, per-core, or per-VM software licenses, including Windows Server, Microsoft SQL Server, SUSE, Linux Enterprise Server, and so on." Basically, you pay for the entire host computer and then launch individual instances on the host (at no additional charge).
The use of a Dedicated Instance or a Dedicated Host has no impact on resources allocated to each instance. They would receive the same resources as when running as a normal Shared Instance.

How to monitor a server's resources/performance

I've been given a task to monitor an Amazon EC2 instance's resources/performance. I do not have access to the Amazon Control Panel/Dashboard, but I'm allowed to install free software on the EC2 instance that can track the stats.
I know you need to pay for in-depth/custom charts and graphs in the Amazon Control Panel. Is this the best approach for accurate stats, or is there preferable free software that can track the following?
Total used memory and free memory in X amount of time
Total requests made in X amount of time
Total CPU usage in X amount of time
You may want to use a good, basic monitoring service like New Relic. They have both server and application monitoring available that, together, could give you the stats you list. Your first and third items are more server-centric, while your second item is specific to the application you're running (e.g. Apache, NGINX, Postfix, etc.). A lightweight, script-based alternative is sketched below.
Here is a list of other monitoring options.
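If installing a full monitoring agent is not an option, a small script covers the first and third items; a minimal sketch, assuming the third-party psutil package (request counts would come from the web server's access log or status module, not from the OS):

```python
# Minimal sketch: append a timestamped CPU/memory sample to a CSV every 60 s.
# Assumes the third-party psutil package is installed.
import csv
import time
import psutil

def log_stats(path: str = "stats.csv", interval_s: int = 60) -> None:
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            mem = psutil.virtual_memory()
            writer.writerow([
                int(time.time()),
                psutil.cpu_percent(interval=1),   # CPU %, averaged over 1 s
                mem.used // (1024 * 1024),        # used RAM in MiB
                mem.available // (1024 * 1024),   # available RAM in MiB
            ])
            f.flush()
            time.sleep(interval_s)

if __name__ == "__main__":
    log_stats()
```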

Best EC2 setup for redis server [closed]

We are deploying a large-scale web application that uses only Redis as a data store. I notice that the benchmark of our Redis master is around 8,000 transactions per second on EC2, far less than the stated benchmarks on dedicated hardware.
I understand that there is a performance penalty for running Redis on a virtual machine like EC2, but I would love some pointers from people who have deployed Redis in production environments on EC2 on what EC2 setup you have found most effective for getting more out of Redis.
Thanks.
Virtualized hardware such as EC2 is probably not the best environment to run Redis on, but it is a popular one, and there are a number of points to know to get the best from Redis on this platform.
I'm one of the authors of http://redis.io/topics/benchmarks and http://redis.io/topics/latency which cover most of the topics I present below. This is just a summary of the main points.
Virtualization toll
It is not specific to EC2, but Redis is significantly slower when running on a VM (in terms of maximum supported throughput). This is because, for basic operations, Redis does not add much overhead to the epoll/read/write system calls required to handle client connections (like memcached, or other efficient key/value stores). System calls are typically more expensive on a VM, and they represent a significant part of Redis activity (especially in benchmarks). Under these conditions, a 50% decrease in maximum throughput compared to bare metal is not uncommon.
Of course, it also depends on the quality of the hypervisor. For EC2, Xen is used.
Benchmarking in good conditions
Benchmarking can be tricky, especially on a platform like EC2. One point often forgotten is to ensure a proper configuration for both the benchmark client and server. For instance, do not run redis-benchmark on a CPU starved micro-instance (which will likely be throttled down by Amazon) while targeting your Redis server. Both machines are equally important to get a good maximum throughput.
Actually, to evaluate Redis performance, you need to:
run redis-benchmark locally (on the same machine as the server), assuming you have more than one vCPU core;
run redis-benchmark remotely (from a different VM), on a machine whose QoS configuration is equivalent to the server machine.
This lets you evaluate and compare the performance of the machines and the network; a rough Python probe along the same lines is sketched below.
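A minimal sketch of such a probe using the redis-py client (the host is a placeholder; run it once on the Redis machine itself and once from another VM to separate machine cost from network cost):

```python
# Rough latency/throughput probe with redis-py, as a cross-check on
# redis-benchmark. The host is a placeholder for your Redis server.
import time
import redis

def probe(host: str = "127.0.0.1", n: int = 10_000) -> None:
    r = redis.Redis(host=host, port=6379)
    r.ping()                                  # establish the connection first
    start = time.perf_counter()
    for i in range(n):
        r.set(f"bench:{i}", "x")
    elapsed = time.perf_counter() - start
    print(f"{n / elapsed:,.0f} SET/s, avg {elapsed / n * 1e6:.0f} µs per call")

if __name__ == "__main__":
    probe()
```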
On EC2, you will have the best results with second-generation M3 instances (or high-memory, or cluster compute instances), so you can benefit from HVM (hardware virtualization) instead of relying on slower para-virtualization.
The fork issue
This is not specific to EC2, but to Xen: forking a large process can be really slow on Xen (it looks better with KVM). For Redis this is a big problem if you plan to use persistence: both persistence options (RDB and AOF) require the main thread to fork and launch background save or rewrite processes.
In some cases, fork latency can freeze the Redis event loop for several seconds. The more memory managed by the Redis instance, the more latency.
On EC2, be sure to use an HVM-enabled instance (M3, high-memory, cluster); it will mitigate the issue.
Then, if you have large memory requirements, and your application can tolerate it, consider running several smaller Redis instances on the same machine, and shard your data. It can decrease the latency due to fork operations to an acceptable level.
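A minimal sketch of such client-side sharding, assuming four redis-server processes on ports 6379-6382 (the ports and the key are placeholders):

```python
# Minimal client-side sharding sketch: a stable hash of the key picks one of
# several small Redis instances, so each instance forks a smaller process
# during background saves. Ports are placeholders for your own layout.
import zlib
import redis

PORTS = [6379, 6380, 6381, 6382]
shards = [redis.Redis(port=p) for p in PORTS]

def shard_for(key: str) -> redis.Redis:
    return shards[zlib.crc32(key.encode()) % len(shards)]

shard_for("user:42").set("user:42", "payload")
print(shard_for("user:42").get("user:42"))
```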
Persistence configuration
This is a key point to get good performance from Redis (both on VM and bare metal). So please take the time to carefully read http://redis.io/topics/persistence
If you use RDB, keep in mind that the memory copy-on-write mechanism will start duplicating pages once the background save process has been forked off. So you need to ensure there is enough memory for Redis itself, plus some margin to cope with the COW. The amount of extra memory depends on your workload: the more you write to the instance, the more extra memory you need.
Please note that writing a file may also consume some memory (because of the filesystem cache), so during a Redis background save you need to account for Redis memory, COW overhead, and the size of the dump file.
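As a back-of-the-envelope illustration of that sizing (every ratio below is an assumption that depends on your write rate, not a measurement):

```python
# Rough sizing sketch for memory headroom during a background save.
# All ratios are assumptions; measure your own workload.
redis_data_gb = 10.0
cow_fraction  = 0.30                  # assumed share of pages rewritten during the save
fs_cache_gb   = 2.0                   # assumed filesystem cache while writing the dump
dump_file_gb  = redis_data_gb * 0.5   # RDB files are usually smaller than RAM (assumed)

peak_gb = redis_data_gb * (1 + cow_fraction) + fs_cache_gb
print(f"Plan for roughly {peak_gb:.1f} GB of RAM plus {dump_file_gb:.1f} GB of disk")
```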
The machine running the Redis server must never swap. If it does, the result will be catastrophic. Contrary to some other stores, Redis is not virtual memory friendly.
With Linux, be sure to set sensible system parameters: vm.overcommit_memory=1 and vm.swappiness=0 (or a very low value anyway). Do not use old kernel versions: they are quite bad at enforcing a low swappiness, which can result in swapping when a large file is written.
If you use AOF, review the fsync options. It is a tradeoff between raw performance and durability of the write operations. You need to make a choice and define a strategy.
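For example, the policy can be inspected and changed at runtime; a small sketch with the redis-py client, assuming a local instance ("everysec" is the usual middle ground between "always" and "no"):

```python
# Sketch: inspect and adjust the AOF fsync policy at runtime with redis-py.
# "everysec" risks at most about one second of writes but performs far better
# than "always"; "no" leaves flushing entirely to the OS.
import redis

r = redis.Redis()                      # assumes a local Redis instance
print(r.config_get("appendfsync"))     # e.g. {'appendfsync': 'everysec'}
r.config_set("appendfsync", "everysec")
```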
You also need to get familiar with the EC2 storage options. On some VM, you have the choice between ephemeral storage and EBS. On some others, you only have EBS.
Ephemeral storage is generally faster, and you will probably have fewer issues than with EBS, but you can easily lose your data in case of a disk failure, a reboot of the host, etc. You could put RDB snapshots on ephemeral storage and then copy the resulting files to EBS directories, as a tradeoff between performance and robustness.
EBS is remote storage: it may eat the standard network bandwidth allocated to the VM, and impact the maximum throughput of Redis. If you plan to use EBS, consider selecting the "EBS-optimized" option to establish a QoS between the standard network and storage links.
Finally, a very common setup for performance-demanding instances on EC2 is to deactivate persistence on the master and only activate it on a slave instance. It is probably less safe for the data, but it may prevent a lot of potential latency issues on the master.

How does the Windows Azure platform scale for my app?

Just a question about Azure.
Yes, I know roughly about Azure and cloud computing. I will put it this way:
Say, in the normal way, I build a program listening on a TCP port. I run this server program on a server. I also build a client program, which connects to the server through the specified port. Once a client is connected, my server program will compute something and return it to the client.
Above is the normal model, or say my program's model.
Now I want to use Azure. I want to use it because my clients are too many, let's say 1 million a day. I don't want to rent 1,000 servers and maintain them. (Just an assumption for the number of clients.)
I have looked at the Azure pricing plan. It talks about CPU and about small, medium, and large instances.
I don't know what they mean. For example, in my assumed case above, how many instances do I need? Or is the most I can get from Azure an extra-large instance (8 small instances)?
How does Azure scale for my program? If I choose a small instance (my server program is very small; it just computes some data and returns it to clients), will Azure scale for me? Or does Azure just give me one virtual server and let it get overloaded?
Please consider the CPU only, not storage or network traffic.
You choose two things: what size of VM to run (small, medium, large) and how many of those VMs to run. That means you could choose a small VM (single processor) and run 100 "instances" of it (100 VMs), or you could choose a large VM (eight processors on the same server) and run 10 instances of it (10 VMs).
Today, Windows Azure doesn't automatically adjust your scale, so it's up to you to use the web portal or the Service Management API to increase the number of instances as your need increases.
One factor to consider is whether your app can take advantage of multi-core environments (multi-threading, shared memory, etc.) to improve its scale. If it can, it may be better to use five 2-core (i.e. medium) VMs than ten 1-core (small) VMs. You may find in some cases that two 4-core VMs perform better than five 2-core VMs.
If your app is not parallel/multi-core, then you could just run some number of small VMs. The charges are linear anyway, i.e. a 2-core VM is twice the cost of a single-core VM.
Other factors would include the scratch disk size & memory available in the VM.
One other suggestion: you may want to look into leveraging Azure queues (i.e. have the client post to the queue and the workers pull from there). This would allow you to transparently (to the client) increase or decrease the workers without worrying about connections, etc. Also, if a processing step failed and crashed your instance, the message would persist and be picked up by one of the others.
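A minimal sketch of that pattern using the current azure-storage-queue Python SDK (which postdates the SDK this answer was written against); the connection string, queue name, and the work done per message are placeholders:

```python
# Minimal sketch of the queue-based decoupling pattern with the
# azure-storage-queue Python SDK. Connection string and queue name are
# placeholders; the "computation" is a stand-in for your real work.
from azure.storage.queue import QueueClient

CONN_STR = "<storage-account-connection-string>"
queue = QueueClient.from_connection_string(CONN_STR, "work-items")

# Client side: enqueue the work instead of holding a TCP connection open.
queue.send_message("compute-job payload")

# Worker side (run on as many instances as the backlog requires):
for msg in queue.receive_messages():
    result = len(msg.content)          # placeholder for the real computation
    queue.delete_message(msg)          # acknowledge only after success
```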
I suggest you also monitor, evaluate, and perfect the results of your Azure configuration.
For "Monitoring Applications in Windows Azure" (and performance) please reference
http://channel9.msdn.com/learn/courses/Azure/Deployment/DeployingApplicationsinWindowsAzure/Exercise-3-Monitoring-Applications-in-Windows-Azure/
There is also a good blog entry titled "Visualizing Windows Azure diagnostic data"
Check out http://www.paraleap.com - simple service for automatically adjusting number of instances that you have according to demand.
