ec2 instance cpu type - amazon-ec2

How do you select either AMD or Intel CPU for your new ec2 instance?

That's not really how cloud computing works. A Small instance is defined by a benchmark, roughly equivalent to a 1.7 GHz Intel processor, I believe. What hardware Amazon uses to deliver that benchmark is up to them.

You can't select it directly. You can launch an instance and check what you get; if it doesn't have the CPU you'd like, you can shut it down and start a new one. You can also try changing the Availability Zone.

The instances that have guaranteed CPU types are the Cluster Compute instances, which you should be using if you are doing something numerical.

Related

How to migrate workload from one ec2 instance to another using Boto3

I have a requirement that says if one EC2 instance reaches 70% CPU utilization, it should be scaled up, and if CPU utilization is at 30% with at least two EC2 instances running, it should be scaled down. This is working fine; I can scale up and scale down as intended.
But for scenarios in which I have at least two EC2 instances, one at 70% CPU utilization and another at 30%, I should migrate the CPU workload from the instance at 70% to the one at 30%.
Does anyone know how I can do this using boto3? I've read the EC2 documentation but did not find anything related to this kind of workload migration.
Does anyone know how I can do this using boto3?
You can't do this in general, and that's why you did not find anything about it. This is not how EC2 Auto Scaling works, assuming you are using Auto Scaling, which is not stated in your question.
You would have to develop your own custom solution for such a requirement. And that depends on exactly what your "workload" is, which is also not explained in your question, so it's difficult to even begin recommending something.
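The threshold rules that do work (scale up at 70%, scale down at 30% with at least two instances) can at least be expressed as a small decision function. A minimal sketch; the `decide_action` helper is illustrative, not part of boto3 or the Auto Scaling API:

```python
# Hypothetical sketch of the threshold logic described in the question.
# decide_action is an illustrative helper, not a boto3 or AWS API.

def decide_action(cpu_utilizations):
    """Given per-instance CPU utilization percentages, return
    'scale_up', 'scale_down', or 'none' per the stated rules."""
    if any(cpu >= 70 for cpu in cpu_utilizations):
        return "scale_up"
    if len(cpu_utilizations) >= 2 and all(cpu <= 30 for cpu in cpu_utilizations):
        return "scale_down"
    return "none"

print(decide_action([70, 30]))  # the question's "migration" case
```

Note that the question's [70, 30] case simply triggers a scale-up under these rules; there is no built-in notion of moving load between instances.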

Rapid AWS autoscaling

How do you configure AWS autoscaling to scale up quickly? I've set up an AWS autoscaling group with an ELB. All is working well, except it takes several minutes before the new instances are added and online. I came across the following in a post about Puppet and autoscaling:
The time to scale can be lowered from several minutes to a few seconds if the AMI you use for a group of nodes is already up to date.
http://puppetlabs.com/blog/rapid-scaling-with-auto-generated-amis-using-puppet/
Is this true? Can time to scale be reduced to a few seconds? Would using puppet add any performance boosts?
I also read that smaller instances start quicker than larger ones:
Small Instance 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of instance storage, 32-bit platform with a base install of CentOS 5.3 AMI
Amount of time from launch of instance to availability:
Between 5 and 6 minutes us-east-1c
Large Instance 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform with a base install of CentOS 5.3 AMI
Amount of time from launch of instance to availability:
Between 11 and 18 minutes us-east-1c
Both were started via the command line using Amazon's tools.
http://www.philchen.com/2009/04/21/how-long-does-it-take-to-launch-an-amazon-ec2-instance
I note that the article is old and my c1.xlarge instances are certainly not taking 18 minutes to launch. Nonetheless, would configuring an autoscale group with 50 micro instances (with a scale-up policy of a 100% capacity increase) be more efficient than one with 20 large instances? Or potentially creating two autoscale groups, one of micros for quick launch time and one of large instances to add CPU grunt a few minutes later? All else being equal, how much quicker does a t1.micro come online than a c1.xlarge?
You can increase or decrease the reaction time of the autoscaler by adjusting the "--cooldown" value (in seconds).
Regarding the instance types to use: this is mostly driven by the application, and a decision on this topic should be made after close performance monitoring and production tuning.
The time to scale can be lowered from several minutes to a few seconds if the AMI you use for a group of nodes is already up to date. This way, when Puppet runs on boot, it has to do very little, if anything, to configure the instance with the node's assigned role.
The advice here is about keeping your AMI (the snapshot of your operating system) as up to date as possible. This way, when auto scaling brings up a new machine, Puppet doesn't have to install lots of software like it normally would on a blank AMI; it may just need to pull some updated application files.
Depending on how much work your Puppet scripts do (apt-get install, compiling software, etc) this could save you 5-20 minutes.
The two other factors you have to worry about are:
How long it takes your load balancer to determine you need more resources (e.g. a policy that dictates "new machines should be added when CPU is above 90% for more than 5 minutes" would be less responsive, and more likely to lead to timeouts, than "new machines should be added when CPU is above 60% for more than 1 minute")
How long it takes to provision a new EC2 instance (smaller instance types tend to take shorter times to provision)
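The first factor can be made concrete as CloudWatch alarm parameters. A minimal sketch, assuming boto3; the alarm name and the (commented-out) scaling-policy action are placeholders:

```python
# Sketch: the "CPU above 60% for more than 1 minute" trigger expressed
# as boto3 CloudWatch alarm parameters. The alarm name and the scaling
# policy ARN are hypothetical placeholders.
responsive_alarm = {
    "AlarmName": "scale-up-fast",          # hypothetical name
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Statistic": "Average",
    "Period": 60,                # seconds per evaluation window
    "EvaluationPeriods": 1,      # 1 x 60s = react after one minute
    "Threshold": 60.0,
    "ComparisonOperator": "GreaterThanThreshold",
    # "AlarmActions": [scaling_policy_arn],  # placeholder
}
# A "90% for 5 minutes" alarm would instead use Period=300 (or
# EvaluationPeriods=5 with Period=60) and Threshold=90.0, reacting
# several minutes later.
# boto3.client("cloudwatch").put_metric_alarm(**responsive_alarm)
```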
How soon the ASG responds depends on three things:
1. Step: how much to increase by, as a percentage or a fixed number. With a large step you can ramp up rapidly; the ASG will launch the entire step in one go.
2. Cooldown period: this governs how soon the next increase can happen. If the previous increase is still within the defined cooldown period (in seconds), the ASG will wait and not take the next action yet. A small cooldown period enables the next step sooner.
3. AMI type: how much time an AMI takes to launch depends on the AMI; many factors come into play. All things being equal, fully baked AMIs launch much faster.
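A minimal sketch of how points 1 and 2 map onto boto3 Auto Scaling policy parameters; the group and policy names are placeholders:

```python
# Sketch: the step and cooldown knobs as boto3 simple-scaling policy
# parameters. Group and policy names are hypothetical placeholders.
policy = {
    "AutoScalingGroupName": "my-asg",       # hypothetical
    "PolicyName": "rapid-scale-up",         # hypothetical
    "AdjustmentType": "PercentChangeInCapacity",
    "ScalingAdjustment": 100,   # the "Step": double capacity in one go
    "Cooldown": 60,             # short cooldown -> next step allowed sooner
}
# boto3.client("autoscaling").put_scaling_policy(**policy)
```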

online fast computation environment for ruby

I'm writing a Ruby program that needs raw CPU power (I know Ruby is a very bad choice for this!), but I don't have a powerful computer, so I want to rent something online that I pay for per hour.
Any idea? Something simple to use yet very powerful, with multiple cores. I took a look at amazon ec2, that's a possibility. Anything else, more CPU oriented?
As far as on-demand compute power goes, Amazon's EC2 is a good choice. You can either pay the regular on-demand rates or use the discounted spot market, which is similar, except that your instance can and will be terminated without warning when demand picks up again.
It's best to have a system that either uses a persistent EBS drive to save results, or saves them to something like S3 frequently.
If you can parallelize your processing, try to split it across the most cost-effective instance type instead of paying a premium for a single instance. For example, the High-CPU Extra Large On-Demand instance gives you 20 compute units for $0.68/hr, compared with the Quadruple Extra Large Cluster Compute instance, which gives only 33.5 compute units for $1.60/hr.
Remember that a single Ruby process can only use one CPU core unless you're using a combination of JRuby and threads. You will need to support multiple processes in order to make full use of the machine if this is not the case.
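The same one-process-per-core pattern, sketched in Python using the standard-library multiprocessing module (the `crunch` function is a stand-in for your CPU-bound work):

```python
# Sketch of the "one process per core" pattern: CPU-bound work split
# across a pool of worker processes, one per core.
from multiprocessing import Pool, cpu_count

def crunch(n):
    """Stand-in for a CPU-bound task."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=cpu_count()) as pool:
        # Eight chunks of work, spread across however many cores exist.
        results = pool.map(crunch, [10_000] * 8)
    print(len(results))
```

In Ruby the equivalent is forking multiple worker processes (or using JRuby with threads, as noted above).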
SimpleWorker - I think it's simpler than EC2.

Are Amazon's micro instances (Linux, 64bit) good for MongoDB servers?

Do you think using an EC2 instance (Micro, 64bit) would be good for MongoDB replica sets?
It seems like, if that is all they did, then with 600+ MB of RAM one could use them for a nice replica set.
Also, would they make good primary (write) servers too?
My database is only 1-2 gigs now but I see it growing to 20-40 gigs this year (hopefully).
Thanks
They COULD be good, depending on your data set, but likely they will not be very good.
For starters, you don't get much RAM with those instances. Consider that you will be running an entire operating system and all related services; 613 MB of RAM can get filled up very quickly.
MongoDB tries to keep as much data in RAM as possible, and that won't be possible if your data set is 1-2 gigs, and it becomes even more of a problem if your data set grows to 20-40 gigs.
Secondly, they are labeled as "Low I/O performance", so when your data swaps to disk (and it will, given the size of that data set), you are going to suffer from slow disk reads due to low I/O throughput.
Be aware that micro instances are designed for spiky CPU usage, and you will be throttled to the "low background level" if you exceed the allotment.
The AWS Micro Documentation has good information of what they are intended for.
Between the CPU and the poor I/O performance, my experience using micros for development/testing has not been very good (larger instance types have been fine, though), but a micro may work for your use case.
However, there are exceptions for config or arbiter nodes; I believe a micro should be good enough for these types of machines.
There is also some mongodb documentation specific to EC2 which might help.

Can someone explain the concept of an "instance-hour" as used by cloud computing providers?

I am looking at the pricing of various cloud computing platforms, particularly Amazon's EC2, and a lot of the quotes are based on a unit called an Instance-Hour.
I am trying to get a handle on the exact definition of an instance-hour to better compare the costs of continuing to host a web-application versus putting it out on the cloud.
(1) Does it correspond to any of the Windows performance counters in such a way that I could benchmark our current implementation and use it in their pricing calculators?
(2) How does a multi-processor instance figure into the instance-hour calculation?
An instance-hour is simply a regular hour during which the instance was available to you, whether you used it or not. Amazon has priced their different instance types differently, so you pay for the type of resource you are getting, not for how much you use it.
So... 1. No, it's just a regular hour. 2. It doesn't; it's already factored into the price you pay for the instance per hour.
Note also that instance hours are billed rounded up (for Amazon EC2). So starting up an instance and immediately shutting it down again incurs the cost of 1 instance hour.
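The round-up rule is easy to sketch; `billed_hours` here is an illustrative helper, not an AWS API:

```python
# Sketch of hourly round-up billing as described above.
import math

def billed_hours(seconds_running):
    """Classic hourly EC2 billing charges each started hour in full."""
    return max(1, math.ceil(seconds_running / 3600))

print(billed_hours(5))      # start and stop immediately -> 1 hour
print(billed_hours(3601))   # one hour and one second -> 2 hours
```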
If you plan to start and stop an AWS EMR cluster repeatedly within a single hour, and you want to avoid being billed for a full hour each time you do so, then start the cluster with the --alive argument from the CLI, which tells it to keep running, and then rapidly add steps to the same cluster instead:
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/AddingStepstoaJobFlow.html
Don't forget to stop the cluster when you're done! :)
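A minimal sketch of adding a step to the running cluster with boto3 instead of the old CLI; the step name, jar path, arguments, and cluster id are all placeholders:

```python
# Sketch: submitting an extra step to an already-running ("alive")
# EMR cluster. All names, paths, and ids below are hypothetical.
step = {
    "Name": "my-step",                      # hypothetical
    "ActionOnFailure": "CONTINUE",
    "HadoopJarStep": {
        "Jar": "s3://my-bucket/my-job.jar", # hypothetical
        "Args": ["arg1", "arg2"],           # hypothetical
    },
}
# boto3.client("emr").add_job_flow_steps(
#     JobFlowId=cluster_id,   # the running cluster's id (placeholder)
#     Steps=[step],
# )
```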
