How many EC2 Compute Units would equate to a quad-core?

How many Amazon EC2 Compute Units would equate to an Intel® Core™ i7-920 quad-core?

Asked another way: which Amazon EC2 instance comes closest to the i7-920? The i7-920 scores 5567 on the PassMark CPU Mark; the Amazon High-CPU Extra Large instance scores 7962, and the Amazon Standard Extra Large instance scores 3627.
http://www.cpubenchmark.net/cpu_list.php
http://huanliu.wordpress.com/2010/06/14/amazons-physical-hardware-and-ec2-compute-unit/
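As a rough cross-check, you can interpolate a PassMark-per-ECU figure from the two reference instances. The ECU counts below (8 for the Standard Extra Large, 20 for the High-CPU Extra Large) are the figures AWS advertised for those instance types at the time and are an assumption, not something stated in this thread:

```python
# Rough ECU estimate for the i7-920 from PassMark scores.
# ECU counts are the figures AWS advertised for these instance
# types at the time (assumption, not from this thread).
ECU = {"Standard XL (m1.xlarge)": 8, "High-CPU XL (c1.xlarge)": 20}
PASSMARK = {"Standard XL (m1.xlarge)": 3627, "High-CPU XL (c1.xlarge)": 7962}

# PassMark points per ECU for each reference instance, then averaged
per_ecu = [PASSMARK[k] / ECU[k] for k in ECU]
avg_per_ecu = sum(per_ecu) / len(per_ecu)

i7_920_passmark = 5567
ecu_estimate = i7_920_passmark / avg_per_ecu
print(round(ecu_estimate, 1))  # → 13.1
```

By this crude interpolation the i7-920 lands at roughly 13 ECU, i.e. between the Standard Extra Large (8 ECU) and the High-CPU Extra Large (20 ECU), consistent with its PassMark score sitting between the two.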


How to migrate workload from one ec2 instance to another using Boto3

I have a requirement that says if the CPU utilization of one EC2 instance reaches 70%, it should be scaled up, and if CPU utilization is at 30% and there are at least two EC2 instances, it should be scaled down. This is working fine; I can scale up and scale down as expected.
But for scenarios in which there are at least two EC2 instances, one with 70% CPU utilization and another with 30%, I should migrate the CPU workload from the instance at 70% to the one at 30%.
Does anyone know how can I do this using boto3? I've read the EC2 documentation but did not find anything related to this kind of workload migration.
Does anyone know how can I do this using boto3?
You can't do this in general, and that's why you did not find anything about it. This is not how EC2 Auto Scaling works (assuming you are using Auto Scaling, as it's not stated in your question).
You would have to develop your own custom solution for such a requirement, and it depends on exactly what your "workload" is. That is also not explained in your question, so it's difficult to even begin recommending something.
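Any such custom solution would start from per-instance CPU readings and a rebalancing decision. A minimal sketch, where the decision logic and thresholds are purely illustrative and the CloudWatch fetch assumes standard boto3 usage of `get_metric_statistics` (it is defined but not called here, since it needs AWS credentials):

```python
from datetime import datetime, timedelta

def pick_migration_pair(cpu_by_instance, high=70.0, low=30.0):
    """Given {instance_id: cpu_percent}, return a (source, target)
    pair to rebalance, or None. Purely illustrative logic."""
    if len(cpu_by_instance) < 2:
        return None
    hot = max(cpu_by_instance, key=cpu_by_instance.get)
    cold = min(cpu_by_instance, key=cpu_by_instance.get)
    if cpu_by_instance[hot] >= high and cpu_by_instance[cold] <= low:
        return (hot, cold)
    return None

def fetch_cpu(instance_id, region="us-east-1"):
    """Average CPUUtilization over the last 5 minutes via CloudWatch.
    Not called here; requires boto3 and AWS credentials."""
    import boto3  # deferred import: only needed when actually fetching
    cw = boto3.client("cloudwatch", region_name=region)
    now = datetime.utcnow()
    stats = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - timedelta(minutes=5),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return points[0]["Average"] if points else None

print(pick_migration_pair({"i-aaa": 72.0, "i-bbb": 25.0}))  # ('i-aaa', 'i-bbb')
```

What "migrating" the workload then means (draining connections, moving a queue consumer, re-sharding) is entirely application-specific, which is exactly why EC2 has no API for it.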

Benchmarking amazon EC2 vs Google compute engine

For EC2: t2.micro, host in Oregon
For compute engine: n1-standard-1, in west-us
Both run Ubuntu 16.04, single instance. I installed nvm, then Node.js 7.7.2, and started a simple server:
require('http').createServer(function(req, res) {
  res.end('Hello world')
}).listen(8080)
Then, from my local machine, I used wrk for benchmarking:
wrk -t12 -c400 -d10s http://myinstanceaddress
Result, average of 5:
EC2: ~1200 requests/s
Compute engine: ~500 requests/s
I expected that compute engine n1-standard-1 would perform better than EC2 t2.micro since the former has more CPU and power. However, the result indicated otherwise.
So, my question is: is this simple benchmark accurate, and what other factors do I need to consider? If the benchmark is accurate, does this mean the AWS EC2 t2.micro actually performs better than the Google Compute Engine n1-standard-1, despite the latter having more power (CPU and memory, as advertised)?
Have you checked the network path (e.g. latency) between the two nodes, using a tool like ping?
I'm guessing that you're actually being limited by the network path somewhere along the way.
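One quick way to test that theory is to time raw TCP connect latency to each endpoint from the same machine that ran wrk; if the connect times differ substantially, the benchmark is measuring the network path more than the servers. A minimal sketch (the hostnames in the comments are placeholders):

```python
import socket
import time

def tcp_connect_ms(host, port, timeout=5.0):
    """Return TCP connect time in milliseconds: a rough lower bound
    on round-trip latency to the endpoint."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0

# Example usage (placeholder hostnames):
# print(tcp_connect_ms("ec2-instance.example.com", 8080))
# print(tcp_connect_ms("gce-instance.example.com", 8080))
```

Running this a few times per endpoint and comparing medians filters out one-off spikes; with 400 concurrent wrk connections, even tens of milliseconds of extra path latency compound into a large requests/s difference.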

AWS EC2 high ping and S3 download slow

I am new to AWS and have some questions.
My EC2 instance:
Instance type : t2.micro - windows server
EC2 region : Asia Pacific (Tokyo)
S3 region: AWS says "S3 does not require region selection."
User location : Taiwan
The ping to my EC2 instance is too high for my real-time game, and S3 downloads are sometimes very slow.
My network types are WiFi and cellular (3G/4G).
I have tested with my local server: 5 users connected and everything works fine.
Average is 80 KB/s per user for EC2.
Questions:
1. Why does my client's ping to EC2 always exceed 100 ms?
2. How can I reduce the ping below 100 ms?
3. Why is the S3 download speed so unstable (50 KB/s to 5 MB/s)?
4. How can I keep the download speed stable?
Regarding S3:
It DOES matter where you create your S3 bucket:
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html
"Amazon S3 creates bucket in a region you specify. You can choose any AWS region that is geographically close to you to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you might find it advantageous to create buckets in the EU (Ireland) region."
On the other hand, "S3 does not require region selection" means you can access any bucket from any region, though access is optimized for the region where the bucket is located. Bucket location can be set only at the time the bucket is created.
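With boto3, for example, the region is fixed at creation time: every region except us-east-1 takes a `CreateBucketConfiguration` with a `LocationConstraint` (the us-east-1 special case is part of the S3 API). A sketch, where the bucket name is a placeholder and the actual API call is defined but not executed:

```python
def create_bucket_params(bucket_name, region):
    """Build the kwargs for s3.create_bucket(). us-east-1 is the one
    region that must NOT be passed as a LocationConstraint."""
    params = {"Bucket": bucket_name}
    if region != "us-east-1":
        params["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return params

def create_bucket(bucket_name, region):
    """Not called here; requires boto3 and AWS credentials."""
    import boto3  # deferred import: only needed for the real call
    s3 = boto3.client("s3", region_name=region)
    return s3.create_bucket(**create_bucket_params(bucket_name, region))

# For a user in Taiwan, Tokyo (ap-northeast-1) is the nearby choice:
print(create_bucket_params("my-game-assets", "ap-northeast-1"))
```

So for the latency problem described above, creating the bucket in ap-northeast-1 (the same region as the EC2 instance, and close to Taiwan) is the relevant knob.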
Regarding unstable times:
t2.micro is not designed to provide stable capacity; instead, its capacity is throttled. Any instance type starting with t2 has throttled (non-constant) CPU capacity. I suggest you try a larger instance, like m1.small.
See http://www.ec2instances.info/ ; any instance listed as "burstable" has non-stable performance.

AWS RDS Provisioned IOPS really worth it?

As I understand it, RDS Provisioned IOPS is quite expensive compared to standard I/O rate.
In the Tokyo region, the P-IOPS rate is $0.15/GB and $0.12/IOPS for a standard deployment (double the price for a Multi-AZ deployment...).
For P-IOPS, the minimum required storage is 100 GB and the minimum IOPS is 1000.
Therefore, the starting cost for P-IOPS is $135, excluding instance pricing.
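That starting figure follows directly from the minimums and rates quoted above:

```python
# Minimum P-IOPS cost in the Tokyo region, using the figures from the
# question: $0.15/GB for storage and $0.12 per provisioned IOPS.
storage_gb, storage_rate = 100, 0.15
iops, iops_rate = 1000, 0.12

starting_cost = storage_gb * storage_rate + iops * iops_rate
print(f"${starting_cost:.0f}")  # → $135
```

Note that the IOPS charge ($120) dominates the storage charge ($15), which is why the minimum provisioned 1000 IOPS sets the floor on the price.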
For my case, using P-IOPS costs about 100X more than using standard I/O rate.
This may be a very subjective question, but please give some opinion.
In the most optimized database for RDS P-IOPS, would the performance be worth the price?
or
The AWS site gives some insights on how P-IOPS can benefit the performance. Is there any actual benchmark?
SELF ANSWER
In addition to the answer that zeroSkillz wrote, I did some more research. Please note that I am not an expert at reading database benchmarks; also, the benchmark and the answer were based on EBS.
According to an article written by "Rodrigo Campos", the performance does actually improve significantly.
From 1000 IOPS to 2000 IOPS, read/write (including random read/write) performance doubles. From what zeroSkillz said, a standard EBS volume provides about 100 IOPS. Imagine the performance improvement when 100 IOPS goes up to 1000 IOPS (the minimum IOPS for a P-IOPS deployment).
Conclusion
According to the benchmark, the performance/price seems reasonable. For performance critical situations, I guess some people or companies should choose P-IOPS even when they are charged 100X more.
However, if I were a financial consultant in a small or medium business, I would just scale up (CPU, memory) my RDS instances gradually until the performance/price matches P-IOPS.
OK. This is a bad question because it doesn't mention the size of the allocated storage or any other details of the setup. We use RDS and it has its pluses and minuses. First, you can't use an ephemeral storage device with RDS. You can't even access the storage device directly when using the RDS service.
That being said, the storage medium for RDS is presumed to be a variant of Amazon EBS. Performance for standard IOPS depends on the size of the volume, and many sources state that above 100 GB of storage Amazon starts to "stripe" EBS volumes. This provides better average-case data access on both reads and writes.
We currently run about 300 GB of allocated storage and can get 2k write IOPS and 1k IOPS about 85% of the time over a several-hour period. We use Datadog to log this, so we can actually see it. We've seen bursts of up to 4k write IOPS, but nothing sustained at that level.
The main symptom we see on the application side is lock contention when write IOPS are insufficient. The number and frequency of these in your application logs are your symptoms of exhausting the IOPS of standard RDS. You can also use a service like Datadog to monitor the IOPS.
The problem with provisioned IOPS is that it assumes a steady-state volume of reads/writes in order to be cost-effective, which is almost never a realistic use case and is exactly the kind of problem cloud services were meant to fix. The only assurance you get with P-IOPS is a reserved maximum throughput capability; if you don't use it, you still pay for it.
If you're OK with running replicas, we recommend running a read-only replica as a non-RDS instance on a regular EC2 instance. You can get better read IOPS at a much cheaper price by managing the replica yourself. We even set up replicas outside AWS using stunnel, with SSD drives as the primary block device, and we get ridiculous read speeds for our reporting systems - literally 100 times faster than we get from RDS.
I hope this helps give some real-world details. In short, in my opinion, unless you must ensure a certain level of throughput capability on a constant basis (or your application will fail), there are better alternatives to provisioned IOPS, including read/write splitting with read replicas, memcache, etc.
So, I just got off a call with an Amazon systems engineer, and he had some interesting insights related to this question (i.e., this is second-hand knowledge).
Standard EBS volumes can handle bursty traffic well, but they eventually taper off to about 100 IOPS. The engineer suggested several alternatives:
Some customers use multiple small EBS volumes and stripe them. This improves IOPS and is the most cost-effective option. You don't need to worry about mirroring, because EBS is mirrored behind the scenes.
Some customers use the ephemeral storage on the EC2 (or RDS) instance and run multiple slaves to "ensure" durability. Ephemeral storage is local storage and much faster than EBS. You can even use SSD-provisioned EC2 instances.
Some customers configure the master to use provisioned IOPS or SSD ephemeral storage, then use standard EBS storage for the slave(s). Expected performance is good, but failover performance is degraded (though still available).
Anyway, if you decide to use any of these strategies, recheck with Amazon to make sure you haven't missed any important steps. As I said before, this is second-hand knowledge.

Rapid AWS autoscaling

How do you configure AWS autoscaling to scale up quickly? I've setup an AWS autoscaling group with an ELB. All is working well, except it takes several minutes before the new instances are added and are online. I came across the following in a post about Puppet and autoscaling:
The time to scale can be lowered from several minutes to a few seconds if the AMI you use for a group of nodes is already up to date.
http://puppetlabs.com/blog/rapid-scaling-with-auto-generated-amis-using-puppet/
Is this true? Can the time to scale be reduced to a few seconds? Would using Puppet add any performance boost?
I also read that smaller instances start quicker than larger ones:
Small Instance 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of instance storage, 32-bit platform with a base install of CentOS 5.3 AMI
Amount of time from launch of instance to availability:
Between 5 and 6 minutes us-east-1c
Large Instance 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform with a base install of CentOS 5.3 AMI
Amount of time from launch of instance to availability:
Between 11 and 18 minutes us-east-1c
Both were started via the command line using Amazon's tools.
http://www.philchen.com/2009/04/21/how-long-does-it-take-to-launch-an-amazon-ec2-instance
I note that the article is old and my c1.xlarge instances are certainly not taking 18min to launch. Nonetheless, would configuring an autoscale group with 50 micro instances (with an up scale policy of 100% capacity increase) be more efficient than one with 20 large instances? Or potentially creating two autoscale groups, one of micros for quick launch time and one of large instances to add CPU grunt a few minutes later? All else being equal, how much quicker does a t1.micro come online than a c1.xlarge?
You can increase or decrease an Auto Scaling group's reaction time by adjusting the "--cooldown" value (in seconds).
Regarding the instance types to use, that is mostly driven by the application, and a decision on this topic should be made after close performance monitoring and production tuning.
The time to scale can be lowered from several minutes to a few seconds if the AMI you use for a group of nodes is already up to date. This way, when Puppet runs on boot, it has to do very little, if anything, to configure the instance with the node’s assigned role.
The advice here is talking about having your AMI (The snapshot of your operating system) as up to date as possible. This way, when auto scale brings up a new machine, Puppet doesn't have to install lots of software like it normally would on a blank AMI, it may just need to pull some updated application files.
Depending on how much work your Puppet scripts do (apt-get install, compiling software, etc) this could save you 5-20 minutes.
The two other factors you have to worry about are:
How long it takes your load balancer to determine that you need more resources (e.g., a policy of "add new machines when CPU is above 90% for more than 5 minutes" is less responsive and more likely to lead to timeouts than "add new machines when CPU is above 60% for more than 1 minute").
How long it takes to provision a new EC2 instance (smaller instance types tend to take shorter times to provision).
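That first trade-off shows up directly in the CloudWatch alarm parameters behind a scaling policy. A sketch comparing the two example policies above; the alarm names are hypothetical, and the dicts mirror the kwargs shape of boto3's `put_metric_alarm` (the real call is not made here):

```python
def cpu_alarm_params(name, threshold, period_s, eval_periods):
    """Build kwargs for cloudwatch.put_metric_alarm() on CPU.
    Worst-case seconds before the alarm can first fire is
    period_s * eval_periods."""
    return {
        "AlarmName": name,
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Statistic": "Average",
        "ComparisonOperator": "GreaterThanThreshold",
        "Threshold": threshold,
        "Period": period_s,
        "EvaluationPeriods": eval_periods,
    }

slow = cpu_alarm_params("scale-up-slow", 90.0, 300, 1)  # 90% for 5 minutes
fast = cpu_alarm_params("scale-up-fast", 60.0, 60, 1)   # 60% for 1 minute

# Worst-case seconds before each alarm can trigger a scale-out:
print(slow["Period"] * slow["EvaluationPeriods"])  # 300
print(fast["Period"] * fast["EvaluationPeriods"])  # 60
```

The lower threshold and shorter period make the "fast" policy react five minutes earlier in the worst case, at the cost of more false positives (and therefore more scaling churn).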
How soon an ASG responds depends on three things:
1. Step - how much to increase by, as a percentage or a fixed number. With a large step you can increase rapidly; the ASG will launch the entire step in one go.
2. Cooldown period - this governs how soon the next increase can happen. If the previous increase is still within the defined cooldown period (in seconds), the ASG waits and takes no further action yet. A small cooldown period enables the next step sooner.
3. AMI type - how long an AMI takes to launch depends on the AMI, and many factors come into play. All things being equal, fully baked AMIs launch much faster.
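The cooldown behaviour in point 2 can be sketched as a simple gate: a new scaling action is allowed only once the previous one is more than the cooldown period in the past. The timestamps and periods below are illustrative:

```python
def scaling_allowed(now_s, last_scale_s, cooldown_s):
    """True if enough time has passed since the last scaling activity.
    Mirrors the ASG cooldown described above: during the cooldown
    window, further scale actions are suppressed."""
    return (now_s - last_scale_s) >= cooldown_s

# With the default 300 s cooldown, a second scale-out 2 minutes after
# the first is suppressed; with a 60 s cooldown it would proceed.
print(scaling_allowed(now_s=120, last_scale_s=0, cooldown_s=300))  # False
print(scaling_allowed(now_s=120, last_scale_s=0, cooldown_s=60))   # True
```

This is why a short cooldown makes scaling more "rapid": it shrinks the dead time between consecutive steps, at the risk of overshooting before the previous instances have started absorbing load.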