How can I perform autoscaling on an Amazon EC2 instance?

I would like to know how I can perform autoscaling on an Amazon EC2 instance. Does Amazon support vertical autoscaling or not?
Is there any option for memory-based or CPU-based autoscaling?
Thanks in advance.

I would like to know if it can increase the CPU or memory of a single instance based on a user-defined threshold.
No, it does not, at least not at the time of writing. They might add that feature in the future.
Currently, the only way to add more CPU/memory to an instance is to shut it down and then change its instance type. This option is available in the AWS Console; I am not sure about the APIs.
When changing the instance type, you can choose a bigger instance type, which will give you more CPU/memory.
There is no way to add more CPU/memory to a running instance at the moment. In fact, there is no way to add CPU/memory to an existing instance without changing its instance type.
Autoscaling does not do this either.

Related

EC2 in ECS keeps restarting in AWS

I have created an ECS cluster that has created an EC2 instance for me. Since it is in the dev phase, I would like to 'stop' the instance when not in use. But it restarts itself whenever I stop it manually.
I have a faint idea that it might be because of the ASG trying to maintain its desired state of 1 instance, but how do I control this? If I set the desired state in the ASG to 0, it shuts down my instance altogether.
I just want to stop it when it is not in use so it isn't billed unnecessarily.
Any help will be appreciated.
Thanks.
Setting an Auto Scaling group's Desired Capacity to zero terminates all of its instances rather than stopping them. Setting it to one, on the other hand, means always keeping one instance running. So it is not possible to have stopped instances in an ASG. However, you can use scheduled scaling for your ASG to save some money when you are not using your test machine. Read more about scheduled scaling here.
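For example, with boto3 the scheduled actions might look something like the sketch below (the group name and times are hypothetical placeholders):

    import boto3

    autoscaling = boto3.client('autoscaling')

    # Scale the dev ASG down to zero every evening (recurrence is a UTC cron expression).
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName='dev-ecs-asg',       # hypothetical group name
        ScheduledActionName='stop-dev-overnight',
        Recurrence='0 19 * * *',                  # 19:00 UTC every day
        MinSize=0,
        MaxSize=0,
        DesiredCapacity=0,
    )

    # Bring one instance back every morning.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName='dev-ecs-asg',
        ScheduledActionName='start-dev-morning',
        Recurrence='0 7 * * *',                   # 07:00 UTC every day
        MinSize=1,
        MaxSize=1,
        DesiredCapacity=1,
    )

Keep in mind that, as noted above, the evening action terminates the instance and the morning action launches a fresh one, so anything you need to keep must live in the AMI or in external storage.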

Change running ec2 instance type

Does Amazon ever plan to offer a feature that lets users change their EC2 instance type while the server is running? So, for example, going from a t1.micro to an m1.large without shutting anything down. I know nothing about VMs or what would be involved, so I'm not sure if this is even possible, how difficult it would be (I'd assume difficult enough if they haven't rolled it out), or whether there are any plans to do so.
No, the instance type cannot be changed while the instance is running. To change the instance type you must stop the instance, change the type, and then start it again.

EC2 for handling demand spikes

I'm writing the backend for a mobile app that does some CPU-intensive work. We anticipate the app will not have heavy usage most of the time, but will have occasional spikes of high demand. I was thinking that we should reserve a couple of 24/7 servers to handle the steady state of low-demand traffic and then add and remove EC2 instances as needed to handle the spikes. The mobile app will first hit a simple load-balancing server that does a simple round-robin user distribution among all the available processing servers. The load balancer will handle bringing new EC2 instances up and turning them back off as needed.
Some questions:
I've never written something like this before; does this sound like a good strategy?
What's the best way to handle bringing new EC2 instances up and back down? I was thinking I could just create X instances ahead of time, set them up as needed (install software, etc.), and then stop each instance. The load balancer would then start and stop the instances as needed (e.g., through boto). I think this should be a lot faster and easier than trying to create new instances and install everything through a script or something. Good idea?
One thing I'm concerned about here is the cost of turning EC2 instances off and back on again. I looked at the AWS Usage Report and had difficulty interpreting it. I could see starting a stopped instance being a potentially costly operation, but it seems like since I'm just starting a stopped instance rather than provisioning a new one from scratch, it shouldn't be too bad. Does that sound right?
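As a point of reference, the start/stop part described above is straightforward with boto; a minimal boto3 sketch, assuming a pool of pre-configured instance IDs (hypothetical values), might look like this:

    import boto3

    ec2 = boto3.client('ec2')

    # Instances that were set up ahead of time (software installed) and then stopped.
    SPARE_INSTANCES = ['i-0123456789abcdef0', 'i-0fedcba9876543210']  # hypothetical IDs

    def scale_out(count):
        """Start up to `count` stopped spare instances and return their IDs."""
        ids = SPARE_INSTANCES[:count]
        ec2.start_instances(InstanceIds=ids)
        # Wait until they are running before the load balancer sends them traffic.
        ec2.get_waiter('instance_running').wait(InstanceIds=ids)
        return ids

    def scale_in(ids):
        """Stop instances that are no longer needed; their EBS volumes are preserved."""
        ec2.stop_instances(InstanceIds=ids)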
This is a very reasonable strategy. I used it successfully before.
You may want to look at Elastic Load Balancing (ELB) in combination with Auto Scaling. Conceptually the two should solve this exact problem.
Back when I did this around 2010, ELB had some problems with certain types of HTTP requests that prevented us from using it. I understand those issues are resolved.
Since ELB was not an option, we manually launched instances from EBS snapshots as needed and manually added them to an nginx load balancer. That certainly could have been automated using the AWS APIs, but our peaks were so predictable (end of month) that we just tasked someone with spinning up the new instances and never got around to automating the task.
When an instance is stopped, I believe the only cost that you pay is for the EBS storage backing the instance and its data. Unless your instances have a huge amount of data associated, the EBS storage charge should be minimal. Perhaps things have changed since I last used AWS, but I would be surprised if this changed much if at all.
First, with regard to cost: whether an instance is started from scratch or started from a stopped state makes no difference. You are billed for the amount of compute you use over time, period.
Second, what you are looking to do is called autoscaling. You set up a launch config that specifies the AMI you are going to use (along with any user-data configuration). You then create a scaling group from that launch config, specifying the ELB, the availability zones it spans, and the min and max number of instances. Next you set up scaling policies that determine what scaling actions are attached to the group, and you attach CloudWatch alarms to each of those policies to trigger the scaling actions.
You don't have servers in reserve that you attach to the ELB or anything like that. Everything is based on creating a single AMI that is used as the template for the servers you need.
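To make that flow concrete, here is a rough boto3 sketch of the pieces described above. The AMI ID, ELB name, zones, and thresholds are placeholders, and newer setups would typically use a launch template rather than a launch configuration:

    import boto3

    autoscaling = boto3.client('autoscaling')
    cloudwatch = boto3.client('cloudwatch')

    # 1. Launch config built from the single AMI that serves as the server template.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName='app-lc',         # hypothetical
        ImageId='ami-0123456789abcdef0',          # hypothetical AMI
        InstanceType='m1.large',
    )

    # 2. Scaling group using that launch config, attached to the ELB.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName='app-asg',
        LaunchConfigurationName='app-lc',
        MinSize=2,                                # the "reserved" steady-state servers
        MaxSize=10,
        AvailabilityZones=['us-east-1a', 'us-east-1b'],
        LoadBalancerNames=['app-elb'],            # hypothetical ELB
    )

    # 3. Scaling policy: the scaling action attached to the group.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName='app-asg',
        PolicyName='scale-out-on-cpu',
        AdjustmentType='ChangeInCapacity',
        ScalingAdjustment=1,
        Cooldown=300,
    )

    # 4. CloudWatch alarm that triggers the policy when average CPU exceeds 70%.
    cloudwatch.put_metric_alarm(
        AlarmName='app-asg-high-cpu',
        Namespace='AWS/EC2',
        MetricName='CPUUtilization',
        Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': 'app-asg'}],
        Statistic='Average',
        Period=300,
        EvaluationPeriods=2,
        Threshold=70.0,
        ComparisonOperator='GreaterThanThreshold',
        AlarmActions=[policy['PolicyARN']],
    )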
You should read up on autoscaling at the link below:
http://aws.amazon.com/autoscaling/

EC2 autoscale: bigger instance, not more instances

Reading http://aws.amazon.com/autoscaling/ it looks like Amazon lets you create more virtual machines (EC2 units) automatically when the load on your existing machine gets high.
However, that's not what I want. I want a single virtual machine that becomes more powerful (more RAM, CPU, etc) when the machine load/memory usage is high. How do I do this?
vps.net appears to offer this:
http://vps.net/product/cloud-servers/
under "scale with demand", but I'd like to find an Amazon equivalent.
You can scale an EC2 instance up and down, but
Any automated trigger for this would need to be written by you, calling the EC2 APIs to perform the scaling
Moving the EC2 instance to a larger or smaller instance type requires stopping and restarting the server.
The basic method is:
stop (not terminate) the instance
modify-instance-attribute to change the type
start the instance
reassociate the Elastic IP address (if any).
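A rough boto3 version of those steps, with a hypothetical instance ID, target type, and Elastic IP:

    import boto3

    ec2 = boto3.client('ec2')
    instance_id = 'i-0123456789abcdef0'           # hypothetical

    # 1. Stop (not terminate) the instance and wait until it is fully stopped.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter('instance_stopped').wait(InstanceIds=[instance_id])

    # 2. Change the instance type attribute while it is stopped.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={'Value': 'm1.large'},       # hypothetical target type
    )

    # 3. Start the instance again.
    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter('instance_running').wait(InstanceIds=[instance_id])

    # 4. Reassociate the Elastic IP address, if the instance uses one.
    #    (VPC addresses use AllocationId/AssociationId instead of PublicIp.)
    ec2.associate_address(InstanceId=instance_id, PublicIp='203.0.113.10')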
I've written an article that provides more information, sample commands, and things to watch out for when performing this resize:
http://alestic.com/2011/02/ec2-change-type

Single instance Amazon EC2

We're running a lightweight web app on a single EC2 server instance, which is fine for our needs, but we're wondering about monitoring and restarting it if it goes down.
We have a separate non-Amazon server we'd like to use to monitor the EC2 and start a fresh instance if necessary and shut down the old one. All our user data is on Elastic Storage, so we're not too worried about losing anything.
I was wondering if anyone has any experience of using EC2 in this way, and in particular of automating the process of starting the new instance? We have no problem creating something from scratch, but it seems like it should be a solved problem, so I was wondering if anyone has any tips, links, scripts, tutorials, etc. to share.
Thanks.
You should have a look at Puppet and its support for AWS. I would also look at the RightScale AWS library, as well as this post about starting a server with the RightScale scripts. You may also find this article on web serving with EC2 useful. I have done something similar to this, but without the external monitoring: the node monitored itself and shut down when it was no longer needed, and a new one would start up later when there was more work to do.
Couple of points:
You MUST MUST MUST back up your Amazon EBS volume.
They claim "better" reliability, but not 100%, and it's SEVERAL orders of magnitude off of S3's eleven 9's of durability. S3 durability >> EBS durability. That's a fact. EBS supports a "snapshots" feature which backs up your storage efficiently and incrementally to S3. Also, with EBS snapshots, you only pay for the compressed deltas, which is typically far less than the allocated volume size. In another life, I've sent lost-volume emails to smaller customers like you who "thought" that EBS was "durable" and trusted it with the only copy of a mission-critical database... it's heartbreaking.
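For reference, taking such a snapshot is essentially a one-liner with boto3 (the volume ID below is a placeholder); scheduling it nightly via cron or similar is up to you:

    import boto3

    ec2 = boto3.client('ec2')

    # Incremental backup of the volume; only changed blocks are stored in S3.
    ec2.create_snapshot(
        VolumeId='vol-0123456789abcdef0',         # hypothetical volume ID
        Description='nightly backup of the app data volume',
    )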
Your Q: automating start-up of a new instance
The design path you mention is relatively untraveled; here's why... Lots of companies run redundant "hot-spare" instances where the second instance is booted and running. This allows rapid failover (seconds) in the event of "failure" (which could be hardware or software). The issue with a "cold spare" is that it's harder to keep the machine up to date and ready to pick up where the old box left off. More importantly, it's tricky to VALIDATE that the spare is capable of successfully recovering your production service. Hardware is more reliable than untested software systems. TEST TEST TEST. If you haven't tested your failover, it doesn't work.
The simple automation of starting a new EBS instance is easy, bordering on trivial. It's just a one-line bash script calling the EC2 command-line tools. What's tricky is everything on top of that. Such a solution pretty much implies a fully 100% automated deployment process, and that is all specific to your application. Can your app pull down all the data it needs to run (maybe it's stored in S3)? Can you kill your instance today and boot a new instance with 0.000 manual setup/install steps?
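For illustration, a boto3 equivalent of that one-liner might look like this; the AMI ID, key pair, and security group are hypothetical:

    import boto3

    ec2 = boto3.client('ec2')

    # Launch one replacement instance from a pre-built AMI of the app.
    ec2.run_instances(
        ImageId='ami-0123456789abcdef0',          # hypothetical AMI
        InstanceType='m1.small',
        KeyName='my-keypair',                     # hypothetical key pair
        SecurityGroups=['web-sg'],                # hypothetical security group
        MinCount=1,
        MaxCount=1,
    )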
Or, you may be talking about a scenario I'll call "re-instancing an EBS volume":
EC2 box dies (root volume is EBS)
Force detach EBS volume
Boot new EC2 instance with the EBS volume
... That mostly works (a rough sketch follows below). The gotchas:
Doesn't protect against EBS failures, either total volume loss or an availability loss
Recovery time is O(minutes) assuming everything works just right
Your services need to be configured to restart automatically. It does no good to bring the box back if Nginx isn't running.
Your DNS routes or other services need to be OK with the IP address changing. This can be worked around with an Elastic IP.
How are your host SSH keys handled? The same hostname with a new host key can break SSH-based automation when it hits the host-key-changed warning.
I don't have proof of this (other than seeing it happen once), but I believe that EC2/EBS already does this automatically for boot-from-EBS instances.
Again, the hard part here is on your plate. Can you stop your production service today and bring it up RELIABLY on a new instance? If so, the EC2 part of the story is really really easy.
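For the re-instancing flow above, a rough boto3 sketch might look like this; it assumes the replacement instance already exists, is stopped, and has had its own root volume removed, and all IDs are hypothetical:

    import boto3

    ec2 = boto3.client('ec2')

    volume_id = 'vol-0123456789abcdef0'       # hypothetical root volume of the dead box
    replacement_id = 'i-0fedcba9876543210'    # hypothetical stopped replacement instance

    # Force-detach the root EBS volume from the dead instance.
    ec2.detach_volume(VolumeId=volume_id, Force=True)
    ec2.get_waiter('volume_available').wait(VolumeIds=[volume_id])

    # Attach it to the stopped replacement as its root device, then boot from it.
    # (The device name depends on the AMI; older ones use /dev/sda1.)
    ec2.attach_volume(VolumeId=volume_id, InstanceId=replacement_id, Device='/dev/xvda')
    ec2.start_instances(InstanceIds=[replacement_id])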
As a side point:
All our user data is on Elastic Storage, so we're not too worried about losing anything.
I'd strongly suggest regularly snapshotting your EBS (Elastic Block Store) volumes to S3 if you are not doing that already.
You can use an Auto Scaling group with a min/max/desired quantity of 1. Place the instance behind an ELB and have the Auto Scaling group be triggered by the ELB healthy host count. This gives you built-in monitoring via CloudWatch and the ELB health check. Any time there is an issue, the instance will be replaced by the Auto Scaling service.
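A minimal boto3 sketch of that setup, assuming the launch configuration and ELB already exist (the names are hypothetical):

    import boto3

    autoscaling = boto3.client('autoscaling')

    # Self-healing single instance: the group keeps exactly one instance alive and
    # replaces it whenever the ELB health check reports it as unhealthy.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName='single-web-asg',    # hypothetical
        LaunchConfigurationName='web-lc',         # hypothetical, created beforehand
        MinSize=1,
        MaxSize=1,
        DesiredCapacity=1,
        AvailabilityZones=['us-east-1a'],
        LoadBalancerNames=['web-elb'],            # hypothetical ELB
        HealthCheckType='ELB',
        HealthCheckGracePeriod=300,
    )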
If you have not checked 'Protect against accidental termination', you might want to do so.
Even if you have disabled 'Detailed Monitoring' for your instance, you should still see the 'StatusCheckFailed' metric for it, over which you can configure an alarm (in the CloudWatch dashboard).
Your application (hosted on a different server) should receive the alarm and start (or reboot) the instance using the AWS API (or CLI).
Since you have protected against accidental termination, you should never need to spawn a new instance.
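As a rough sketch, the alarm could publish to an SNS topic that the external monitor subscribes to, and the monitor could then act on the instance with boto3 (the instance ID and topic ARN are hypothetical):

    import boto3

    cloudwatch = boto3.client('cloudwatch')
    ec2 = boto3.client('ec2')

    INSTANCE_ID = 'i-0123456789abcdef0'                        # hypothetical
    TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:ec2-down'  # hypothetical SNS topic

    # Alarm on the basic (non-detailed) StatusCheckFailed metric.
    cloudwatch.put_metric_alarm(
        AlarmName='web-status-check-failed',
        Namespace='AWS/EC2',
        MetricName='StatusCheckFailed',
        Dimensions=[{'Name': 'InstanceId', 'Value': INSTANCE_ID}],
        Statistic='Maximum',
        Period=300,
        EvaluationPeriods=2,
        Threshold=1.0,
        ComparisonOperator='GreaterThanOrEqualToThreshold',
        AlarmActions=[TOPIC_ARN],
    )

    # On the external monitoring server, after receiving the SNS notification:
    def recover(instance_id=INSTANCE_ID):
        reservations = ec2.describe_instances(InstanceIds=[instance_id])['Reservations']
        state = reservations[0]['Instances'][0]['State']['Name']
        if state == 'stopped':
            ec2.start_instances(InstanceIds=[instance_id])
        else:
            ec2.reboot_instances(InstanceIds=[instance_id])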
