Does it cost more to run an ELB in more than one availability zone? - amazon-ec2

I see that you can specify which availability zones an ELB serves. Right now we're only setting up a single EC2 instance in a single AZ, but someday we'll have multiple instances across AZs for redundancy. Is there any reason I shouldn't just set up the ELB to serve two now, so I don't have to change it later?
Does it cost more to run your load balancer in multiple availability zones? Are there any other drawbacks to running an ELB in multiple AZs?

No, it doesn't cost more to run a load balancer across multiple AZs. ELB pricing is set per region (an hourly rate plus data processed) and doesn't vary with the number of Availability Zones it serves. It's also better to set up the ELB now: as your needs grow and you add more instances to your account, you won't have to change the URL in your application. The ELB provides its own DNS name, and behind it you can add more instances without any change to that URL. If your application points at an instance's DNS name today and you later move to an ELB, you'll have to change the URL your clients connect to.
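To make the "separate URL" point concrete, here is a minimal sketch, assuming boto3 and the classic ELB API, of a load balancer spanning two AZs with an instance registered behind it (the name, zones, and instance ID are placeholders):

```python
import boto3

elb = boto3.client("elb")  # classic ELB API

# One load balancer serving two Availability Zones; pricing is the same
# hourly + data rate regardless of how many AZs are listed here.
elb.create_load_balancer(
    LoadBalancerName="my-elb",  # placeholder name
    Listeners=[{
        "Protocol": "HTTP",
        "LoadBalancerPort": 80,
        "InstanceProtocol": "HTTP",
        "InstancePort": 80,
    }],
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Clients point at the ELB's DNS name, so instances can be added or
# removed behind it without changing the application's URL.
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-elb",
    Instances=[{"InstanceId": "i-0abc123def4567890"}],  # placeholder ID
)
```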

Related

AWS Application Load Balancer stops sending traffic when a new instance is added

I have a problem with auto scaling behind an AWS Application Load Balancer.
I'm running JMeter tests and discovered that whenever a new instance is added to the Auto Scaling group (that is, when it becomes healthy and the ALB starts routing traffic to it), the load balancer forwards fewer requests to targets for a short period, and a lot of requests are apparently stuck at the load balancer itself.
I'm attaching 3 images that show this issue: the JVM CPU of one of the instances drops and then goes back to normal, some requests hang for more than 30 seconds, and the number of requests per target drops and then returns to the trend (see the attached pictures).
I'm using sticky sessions with a 3-minute validity period.
Does anyone know what may cause this temporary "choking" when a new instance is added?
It is quite crucial to our user experience; I can't understand why adding a new instance can have such an adverse effect on traffic routing.
The issue is fully reproducible.
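For reference, the 3-minute stickiness described above corresponds to target-group attributes like these in boto3 (the target group ARN is a placeholder); this shows only how the setting is expressed, not a fix for the choking:

```python
import boto3

elbv2 = boto3.client("elbv2")  # ALB/NLB API

# Duration-based stickiness with a 3-minute cookie, matching the setup
# described in the question.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/app/123",  # placeholder
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "180"},
    ],
)
```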

Questions about AWS EC2 auto scaling and load balancing

I have a little knowledge of AWS EC2. Previously I bought an mx3large instance, did all the necessary installations on the server, and cloned my code there, but didn't map a domain to the system.
Now my client wants to implement auto scaling and load balancing. I have read that auto scaling adds/removes instances in response to traffic, and that load balancing balances load between instances. First, correct me if I am wrong.
Now I want to ask:
If I auto scale some instances, do I need to do the installations on all instances or just one? If one, which will be my primary instance?
As mentioned above, I have already bought an instance where I did all my installations, but that instance is currently stopped. Should I remove that instance when I start using auto scaling?
What are the prices? If I bought an instance at $0.28/hour, will I pay (number of instances) × (cost) for the time they are active?
On which instance will I clone my code, connect over SSH, and do all related operations?
Will load balancing automatically start working with my auto-scaled instances?
Any help will be much appreciated.
Your questions make me think you still need to read a lot of "getting started" guides, but I'll do my best to answer your questions:
1) If I auto scale some instances, do I need to do the installations on all instances or just one? If one, which will be my primary instance?
R: All instances; none will be a "primary instance". They will all be treated equally.
2) As mentioned above, I have already bought an instance where I did all my installations, but that instance is currently stopped. Should I remove that instance when I start using auto scaling?
R: The Elastic Load Balancer (ELB) will detect that your instance is not healthy and will stop sending requests to it.
3) What are the prices? If I bought an instance at $0.28/hour, will I pay (number of instances) × (cost) for the time they are active?
R: You pay for the time they are in the running state and for the time the ELB is running (it can't be stopped, only deleted).
4) On which instance will I clone my code, connect over SSH, and do all related operations?
R: On every instance. The ideal case is when all of them are identical.
5) Will load balancing automatically start working with my auto-scaled instances?
R: Yes, assuming you have correctly configured the Auto Scaling group with the ELB.
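To make that last answer concrete, here is a minimal sketch, assuming boto3, of wiring an Auto Scaling group to a classic ELB; the names, AMI ID, zones, and sizes are placeholders, not values from the question:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# A launch configuration describes what each instance looks like.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="app-launch-config",  # hypothetical name
    ImageId="ami-0123456789abcdef0",              # placeholder AMI with the app installed
    InstanceType="m3.large",                      # placeholder type
)

# The group launches identical instances and registers them with the ELB,
# so load balancing starts working automatically (question 5).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="app-asg",               # hypothetical name
    LaunchConfigurationName="app-launch-config",
    MinSize=2,
    MaxSize=6,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    LoadBalancerNames=["app-elb"],                # the ELB, created separately
    HealthCheckType="ELB",                        # replace instances the ELB marks unhealthy
    HealthCheckGracePeriod=300,
)
```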

EC2 for handling demand spikes

I'm writing the backend for a mobile app that does some CPU-intensive work. We anticipate the app will not have heavy usage most of the time, but will have occasional spikes of high demand. My thinking is that we should reserve a couple of 24/7 servers to handle the steady state of low-demand traffic, and then add and remove EC2 instances as needed to handle the spikes. The mobile app will first hit a simple load-balancing server that does round-robin user distribution among all the available processing servers. The load balancer will handle bringing new EC2 instances up and turning them back off as needed.
Some questions:
I've never built something like this before. Does this sound like a good strategy?
What's the best way to bring new EC2 instances up and down? I was thinking I could create X instances ahead of time, set them up as needed (install software, etc.), and then stop each instance. The load balancer would then start and stop the instances as needed (e.g. through boto). I think this should be a lot faster and easier than trying to create new instances and install everything through a script. Good idea?
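For what it's worth, those start/stop calls are short in boto3 (the modern successor to boto); the instance IDs below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Pre-created, pre-configured instances that sit stopped until needed.
SPIKE_INSTANCES = ["i-0abc123def4567890", "i-0fed987cba6543210"]  # placeholder IDs

def scale_up():
    # Starting a stopped instance reuses its EBS root volume,
    # so the installed software is already in place.
    ec2.start_instances(InstanceIds=SPIKE_INSTANCES)

def scale_down():
    # Stopping (not terminating) keeps the EBS volume around, so you pay
    # only for its storage while the instance is stopped.
    ec2.stop_instances(InstanceIds=SPIKE_INSTANCES)
```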
One thing I'm concerned about is the cost of turning EC2 instances off and back on again. I looked at the AWS Usage Report and had difficulty interpreting it. I could see starting a stopped instance being a potentially costly operation, but since I'm starting a stopped instance rather than provisioning a new one from scratch, it shouldn't be too bad. Does that sound right?
This is a very reasonable strategy; I have used it successfully before.
You may want to look at Elastic Load Balancing (ELB) in combination with Auto Scaling. Conceptually, the two should solve this exact problem.
Back when I did this around 2010, ELB had some problems with certain types of HTTP requests that prevented us from using it. I understand those issues have since been resolved.
Since ELB was not an option, we manually launched instances from EBS snapshots as needed and manually added them to an NGINX load balancer. That certainly could have been automated using the AWS APIs, but our peaks were so predictable (end of month) that we just tasked someone with spinning up the new instances and never got around to automating it.
When an instance is stopped, I believe the only cost you pay is for the EBS storage backing the instance and its data. Unless your instances have a huge amount of data associated with them, the EBS storage charge should be minimal. Things may have changed since I last used AWS, but I would be surprised if this has changed much, if at all.
First, with regard to costs: whether an instance is started from scratch or from a stopped state has no impact on cost. You are billed for the compute you use over time, period.
Second, what you are looking to do is called auto scaling. You set up a launch configuration that specifies the AMI (and any user-data config) you are going to use, then create a scaling group from that launch configuration, specifying the ELB, the Availability Zones, and the minimum and maximum number of instances. Then you set up scaling policies that determine what scaling actions are attached to the group, and attach CloudWatch alarms to each of those policies to trigger the scaling actions.
You don't keep servers in reserve that you attach to the ELB or anything like that. Everything is based on creating a single AMI that serves as the template for the servers you need.
You should read up on autoscaling at the link below:
http://aws.amazon.com/autoscaling/
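For illustration, here is a minimal sketch, assuming boto3, of the scaling-policy and CloudWatch-alarm wiring described above (the group and policy names are placeholders, and it presumes a scaling group already exists):

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# A simple scaling action: add one instance when the policy fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",  # hypothetical group name
    PolicyName="scale-up-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# A CloudWatch alarm triggers the policy when average CPU stays high.
cloudwatch.put_metric_alarm(
    AlarmName="asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "app-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```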

HAProxy load balancer, EC2, writing my own availability script

I've been looking at high-availability solutions such as Heartbeat and Keepalived to fail over when an HAProxy load balancer goes down. I've realised that although we would like high availability, at this point it doesn't justify the expense of keeping two load balancer instances running at all times for instant failover (particularly as one LB would sit idle in our setup).
My alternative solution is to fire up a new load balancer EC2 instance from an AMI if the current load balancer stops working, and associate it with the Elastic IP that our domain name points to. This should limit downtime to the time it takes to fire up the new instance and associate the Elastic IP, which given our current circumstances seems like a reasonably cost-effective route to high availability, particularly as we can easily do it across availability zones. I'm looking to do this with the following steps:
Prepare an AMI of the load balancer
Fire up a single ec2 instance acting as the load balancer and assign the Elastic IP to it
Have a micro server ping the current load balancer at regular intervals (we always have an extra micro server running anyway)
If the ping times out, fire up a new EC2 instance using the load balancer AMI
Associate the elastic ip to the new instance
Shut down the old load balancer instance
Repeat step 3 onwards with the new instance
I know how to run the commands in my script to start up and shut down EC2 instances, associate the elastic IP address to an instance, and ping the server.
My question is: what would be a suitable ping here? Would a standard ping at regular intervals suffice, and what would be a good interval? Or is this approach too simplistic, and is there a smarter health check I should be doing?
Also, if anyone foresees any problems with this approach, please feel free to comment.
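Here is a minimal sketch of the monitor-and-failover loop from the steps above, assuming boto3 and an HTTP check against the HAProxy frontend (which exercises the proxy itself, unlike a bare ICMP ping); the URL, AMI ID, EIP allocation ID, instance IDs, and interval are all placeholders:

```python
import time

import boto3
import requests

ec2 = boto3.client("ec2")

CHECK_URL = "http://lb.example.com/"          # placeholder HAProxy frontend
AMI_ID = "ami-0123456789abcdef0"              # placeholder load balancer AMI
EIP_ALLOCATION_ID = "eipalloc-0abc123def456"  # placeholder Elastic IP allocation
INTERVAL = 15                                 # seconds between checks

def healthy():
    # An HTTP check catches more failure modes than an ICMP ping,
    # since it requires HAProxy itself to answer.
    try:
        return requests.get(CHECK_URL, timeout=5).status_code < 500
    except requests.RequestException:
        return False

def fail_over(old_instance_id):
    # Steps 4-6: launch from the AMI, move the Elastic IP, retire the old LB.
    new = ec2.run_instances(ImageId=AMI_ID, InstanceType="t2.micro",
                            MinCount=1, MaxCount=1)
    new_id = new["Instances"][0]["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[new_id])
    # Re-associating the Elastic IP means DNS never has to change.
    ec2.associate_address(AllocationId=EIP_ALLOCATION_ID, InstanceId=new_id)
    ec2.terminate_instances(InstanceIds=[old_instance_id])
    return new_id

current_id = "i-0abc123def4567890"            # placeholder current LB instance
while True:
    if not healthy():
        current_id = fail_over(current_id)
    time.sleep(INTERVAL)
```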
I understand exactly where you're coming from; my company is in the same position. We care about having a highly available, fault-tolerant system, but the overhead cost simply isn't viable for the traffic we get.
One problem I have with your solution is that you're assuming the micro instance and the load balancer won't both die at the same time. From my experience with Amazon, I can tell you it's definitely possible: however unlikely, whatever causes your load balancer to die could also take down the micro instance.
Another potential problem is that you assume you will always be able to start a replacement instance during the downtime. That is simply not guaranteed. Take, for example, an outage Amazon had in their us-east-1 region a few days ago: a power failure caused one of their zones to lose power, and when power was restored and they began recovering instances, their APIs were not working properly because of the sheer load. It took almost an hour before they were available again. If an outage like this knocks out your load balancer and you're unable to start another, you'll be down.
That being said, I find the ELBs provided by Amazon a better solution for me. I'm not sure what the reasoning is behind using HAProxy, but I recommend investigating ELBs, as they will let you do things such as auto scaling.
For each ELB you create, Amazon runs one load balancer in each zone that has an instance registered. These are still vulnerable to certain problems during severe outages like the one described above. For example, during that downtime I could not add new instances to the load balancers, but my existing instances (the ones not affected by the power outage) were still serving requests.
UPDATE 2013-09-30
Recently we changed our infrastructure to use a combination of ELB and HAProxy. I find that ELB gives the best availability, but the fact that it uses DNS load balancing doesn't work well for my application, so our setup is an ELB in front of a 2-node HAProxy cluster. Using HAProxyCloud, a tool I created for AWS, I can easily add Auto Scaling groups to the HAProxy servers.
I know this is a little old, but the solution you suggest is overcomplicated; there's a much simpler method that does exactly what you're trying to accomplish.
Just put your HAProxy machine, built from your custom AMI, in an Auto Scaling group with a minimum AND maximum of 1 instance. That way, when your instance goes down, the ASG will bring it right back up, EIP and all. No external monitoring is necessary, and the response to downed instances is just as fast, if not faster.
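For reference, a minimal sketch of that self-healing group of one, assuming boto3 (names are placeholders; reattaching the EIP at boot would be handled by the custom AMI or its user-data, as the answer assumes):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# A group with min = max = 1 keeps exactly one HAProxy instance alive:
# if it fails its health check, the ASG terminates and replaces it.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="haproxy-self-heal",         # hypothetical name
    LaunchConfigurationName="haproxy-launch-config",  # built from the custom AMI
    MinSize=1,
    MaxSize=1,
    AvailabilityZones=["us-east-1a"],
)
```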

What are your health check settings for Elastic Load Balancer?

What are your health check settings for Elastic Load Balancer? I'm not really well versed in this; my goal is to find settings that make the ELB fail traffic over to my 2nd EC2 instance immediately when my 1st EC2 instance goes down. Would anyone mind sharing their configuration and knowledge?
Thanks.
James
Health check settings in ELB are important, but usually not that important.
1) ELB doesn't support active/passive application instances - only active/active.
2) If an application stops accepting connections or slows dramatically, load will automatically shift to the available / faster instances. This happens without the help of health checks.
3) Health checks save ELB from having to send a real request to an instance in order to discover it is unwell. This is good because a request sent to an unhealthy back end is sacrificed (an error is returned to the client).
4) If your health check settings are too sensitive (such as a 1-second timeout when some percentage of your requests take longer than that), they can pull instances out of service too easily. Too much of this and your site will appear to be down from time to time.
If you are running a scenario with multiple Availability Zones and only one back end in each zone, then the health checks are more important: if there are NO healthy back ends in a zone, ELB will try to forward requests to another zone that has at least one healthy instance. In that case, the frequency of health checks determines the failover time, so you'll want faster checks.
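As an illustration only, a relatively fast classic-ELB health check might look like this in boto3 (the load balancer name and health endpoint are placeholders; tune the numbers to your own latency profile, per point 4 above):

```python
import boto3

elb = boto3.client("elb")  # classic ELB API

# Check every 10 seconds; two consecutive failures pull the instance out,
# two consecutive successes bring it back. Keep the timeout comfortably
# above your slowest normal response so healthy instances are not
# pulled out of service (point 4 above).
elb.configure_health_check(
    LoadBalancerName="my-elb",       # placeholder name
    HealthCheck={
        "Target": "HTTP:80/health",  # hypothetical health endpoint
        "Interval": 10,
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 2,
    },
)
```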
