I am wondering if ELB can route HTTP requests to different ASGs (or to different instances, if the backend is single-instance rather than ASG-based) based upon the domain name.
Say I am a company owning two domains, and these two domains serve different services. Can I put a single ELB in front of the two ASGs serving the different logic? (See the following diagram for what I have in mind.)
(If the answer to the above question is 'NO', would you please explain why? That may answer the next question as well.) I also have a similar question: can an ELB serve different subdomains from different ASGs (see the next diagram)?
No. An ELB evenly distributes traffic across the instances associated with it. Multiple Auto Scaling groups can indeed be associated with a single ELB; however, it isn't possible to influence the load-balancing algorithm based on any property of the request.
In your case, you need 2 ELBs.
A possible workaround: if all your instances behind the ELB ran Apache with name-based virtual hosts, you could serve different domains or subdomains using a single ELB. However, each of your instances would be identical; you wouldn't have some instances for domain 1 and some for domain 2.
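A minimal sketch of such a setup, where every (identical) instance serves both domains via name-based virtual hosts (the domain names and document roots are hypothetical):

    # Apache name-based virtual hosts so each instance behind the one ELB
    # can serve both domains (names and paths are placeholders).
    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName www.domain-one.com
        DocumentRoot /var/www/domain-one
    </VirtualHost>

    <VirtualHost *:80>
        ServerName www.domain-two.com
        DocumentRoot /var/www/domain-two
    </VirtualHost>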
The moral of the story is that when using ELBs, all of your instances behind the ELB need to be stateless and do the same thing. And, you cannot influence how the ELB distributes traffic to the nodes behind it.
A reading of the documentation would be of benefit to you.
Host-based routing was added to ELB at least as early as this: https://aws.amazon.com/about-aws/whats-new/2017/04/elastic-load-balancing-adds-support-for-host-based-routing-and-increased-rules-on-its-application-load-balancer/ Depending on the specific scenario, such as TLS requirements, this may or may not address your needs.
The Application Load Balancer was extended further, as documented here: https://aws.amazon.com/blogs/aws/new-advanced-request-routing-for-aws-application-load-balancers/ which allows additional features such as routing on custom headers and more powerful boolean logic.
As of this writing, certain limitations apply, for example around TLS and wildcard certificates, though https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/ addresses some of these.
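To illustrate, a host-header rule can be attached to an existing Application Load Balancer listener with the AWS CLI along these lines; the listener and target group ARNs and the hostname below are placeholders:

    # Forward requests for one hostname to its own target group
    # (the ARNs and hostname are hypothetical; substitute your own).
    aws elbv2 create-rule \
        --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/50dc6c495c0c9188/f2f7dc8efc522ab2 \
        --priority 10 \
        --conditions Field=host-header,Values=service-a.example.com \
        --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/service-a/73e2d6bc24d8a067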
I have the same application running on two WAS clusters, each based in a different datacenter. Each cluster has 3 application servers, and in front of each cluster are 3 IHS servers.
Can I specify a primary cluster, and a failover cluster within the plugin-cfg.xml? Currently I have both clusters defined within the plugin, but I'm only hitting 1 cluster for every request. The second cluster is completely ignored.
Thanks!
As noted already, the WAS HTTP server plugin doesn't provide the function you're seeking (assuming that by "failover cluster" what is actually meant is "BackupServers" in the plugin-cfg.xml), as documented in the WAS KC: http://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/rwsv_plugincfg.html?lang=en
The ODR alternative mentioned previously likely isn't an option either, because the ODR isn't supported for use in the DMZ (it hasn't been security-hardened for DMZ deployment): http://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/twve_odoecreateodr.html?lang=en
From an effective HA/DR perspective, what you're seeking to accomplish should be handled at the network layer, using the global load balancer (global site selector, global traffic manager, etc.) that routes traffic into the data centers. This is usually accomplished by setting a "site cookie" at the load balancer.
This is by design. IHS, at least at the 8.5.5 level, does not allow for what you are trying to do. You will have to implement that level of high availability at a higher layer in your topology.
There are a few options.
If the environment is relatively static, you could post-process the plugin-cfg.xml files and combine them into a single ServerCluster, with the "dc2" servers listed under <BackupServers> in the cluster. The "dc1" servers are probably already listed under <PrimaryServers>.
BackupServers are only used when no PrimaryServers are reachable.
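A minimal sketch of what such a merged cluster could look like in plugin-cfg.xml (server names, hostnames, and ports are hypothetical):

    <ServerCluster Name="merged_cluster" LoadBalance="Round Robin">
        <Server Name="dc1_server1">
            <Transport Hostname="dc1-host1.example.com" Port="9080" Protocol="http"/>
        </Server>
        <Server Name="dc2_server1">
            <Transport Hostname="dc2-host1.example.com" Port="9080" Protocol="http"/>
        </Server>
        <!-- dc1 servers take all traffic while at least one is reachable -->
        <PrimaryServers>
            <Server Name="dc1_server1"/>
        </PrimaryServers>
        <!-- dc2 servers are used only when no primary server is reachable -->
        <BackupServers>
            <Server Name="dc2_server1"/>
        </BackupServers>
    </ServerCluster>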
Another option is to use the Java On-Demand Router, which has first-class awareness of applications running in two cells. Rules can be written that dictate the behavior of applications residing in two clusters (load balancing, failover, etc.). I believe these are called "ODR routing rules".
Does anyone have any idea how the ELB will distribute requests if I register multiple EC2 instances of different sizes? Say one m1.medium, one m1.large, and one m1.xlarge.
Will it be different if I register EC2 instances of the same size? If so, how?
That's a fairly complicated topic, mostly because the Amazon ELB routing documentation is close to non-existent, so one needs to assemble some pieces to draw a conclusion. See my answer to the related question Can Elastic Load Balancers correctly distribute traffic to different size instances for a detailed analysis, including all the references I'm aware of.
For the question at hand I think it boils down to the somewhat vague AWS team response from 2009 to ELB Strategy:
ELB loosely keeps track of how many requests (or connections in the case of TCP) are outstanding at each instance. It does not monitor resource usage (such as CPU or memory) at each instance. ELB currently will round-robin amongst those instances that it believes has the fewest outstanding requests. [emphasis mine]
Depending on your application architecture and request variety, larger Amazon EC2 instance types might be able to serve requests faster, thus have fewer outstanding requests and receive more traffic accordingly. Either way, the ELB supposedly distributes traffic appropriately on average, i.e. it should implicitly account for the uneven instance characteristics to some extent. I haven't tried this myself, though, and would recommend both Monitoring Your Load Balancer Using CloudWatch and monitoring your individual EC2 instances, then correlating the results, in order to eventually gain insight into and confidence in such a setup.
Hi, I agree with Steffen Opel. I also met one of the solution architects from AWS recently, and he gave a couple of heads-ups on getting better performance through Elastic Load Balancing:
1) Make sure you have an equal number of instances running in all the availability zones. For example, ap-southeast has two availability zones, 1a and 1b, so make sure you have an equal number of instances attached to the ELB from both zones.
2) Make sure your application is stateless; that's what the cloud model encourages developers to do.
3) Don't use sticky sessions.
4) Reduce the TTL (time to live) as much as possible, e.g. to 10 seconds or so.
5) Health check settings should be as tight as possible so that the ELB doesn't retain unhealthy instances (see the sketch after this list).
6) If you are expecting a lot of traffic to your ELB, make sure you load-test the ELB itself; it doesn't scale as fast as your EC2 instances.
7) If you are caching, think carefully about where in the stack you pick up the data you cache.
All the tips above are just to help you make this better. It's better to have instances of the same size.
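For tip 5), a minimal sketch of tightening a classic ELB health check with the AWS CLI ("my-elb" and the /health path are hypothetical):

    # Mark an instance unhealthy after 2 failed checks taken 10 seconds apart,
    # so the ELB stops routing to it quickly (name and path are placeholders).
    aws elb configure-health-check \
        --load-balancer-name my-elb \
        --health-check Target=HTTP:80/health,Interval=10,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2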
Consider the following scenario:
There is a service that runs 24/7, and downtime is extremely expensive. This service is deployed on Amazon EC2. I am aware of the importance of deploying the application in two different availability zones, and even in two different regions, in order to prevent single points of failure. But...
My question is whether there are any additional configuration issues that may affect the redundancy of an application. I also mean misconfiguration (for example, a DNS misconfiguration that will make the setup fail during a failover).
Just to make sure I am clear - I am trying to create a list of validations that should be tested in order to ensure the redundancy of an application deployed on EC2.
Thank you all!
As a warning: just because you put your services in two availability zones doesn't mean that you're fault tolerant.
For example, one setup I had was 4 servers on a load balancer with us-east-1a and us-east-1b as the two zones. Amazon's outage a few months ago caused some outages for my software because the load balancers weren't working properly. They were still forwarding requests, but the two dead instances I had in one of the zones were also still receiving requests. Part of the load balancer logic is to remove dead instances, but since the load balancer queue was backlogged, those instances were never removed. In my setup there were two load balancers, one in each zone, so all of the requests to one load balancer were timing out because there were no instances left to respond. Luckily for me, the browser retried the request against the second load balancer, so the feeds I had were still loading, but very, very slowly.
My advice: if you choose to go with only two availability zones rather than two regions, make sure your systems are not dependent on any part of another availability zone, not even the load balancers. For me, it's not worth the extra cost to launch two completely independent systems in different zones, so I'm unable to avoid this problem again in the future. But if your software is critical to the point where losing the service for 1 hour would pay for the cost of running the extra hardware, then it's definitely worth the extra servers to set it up correctly.
I also recommend paying for AWS support and working with their engineers to make sure that your design doesn't have any flaws for high-availability.
Recap of the issue I discussed: http://aws.amazon.com/message/67457/
This is a fairly generic question. Suppose I have three EC2 boxes: two app boxes and a box that hosts nginx as a reverse proxy, delegating requests to the two app boxes (my database is hosted elsewhere). Now, the two app machines can absorb a failure between themselves; however, the third one represents a single point of failure. How can I configure my setup so that if the reverse proxy goes down, the site is still available?
I am looking at keepalived and HAProxy. For me this stuff is non-obvious, and any help for the ears of a beginner is appreciated.
If your nginx does not do much more than proxy HTTP requests, please have a look at Amazon Elastic Load Balancer. You can set up your two (or more) app boxes behind it, keep some spare ones (in order to always have two or more up, if you need that), set up health checks, terminate SSL at the balancer, make use of sticky sessions, etc.
There are a lot of people, though, who would like to see the ability to assign Elastic IP addresses to ELBs, and others with good arguments for why that is not needed.
My suggestion is that you take a look at the ELB documentation, as it seems to fit your needs perfectly. I also recommend reading this interesting post for a good discussion of the subject.
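To illustrate, a minimal classic ELB replacing the nginx box might be created like this with the AWS CLI (the name, zones, and instance IDs are placeholders):

    # Create a classic ELB listening on HTTP port 80 across two zones
    # (the load balancer name, zones, and instance IDs are hypothetical).
    aws elb create-load-balancer \
        --load-balancer-name app-lb \
        --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
        --availability-zones us-east-1a us-east-1b

    # Register the two app boxes behind it
    aws elb register-instances-with-load-balancer \
        --load-balancer-name app-lb \
        --instances i-0123456789abcdef0 i-0fedcba9876543210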
I think if you are a beginner with HA and clusters, your best solution is Elastic Load Balancer (ELB), which is maintained by Amazon. It scales up automatically and is implemented as a highly available cluster of balancers, so by using the ELB service you already mitigate the point of failure you mentioned. It's also worth keeping in mind that an ELB is cheaper than 2 instances in AWS, and of course it's easier to launch and maintain.
You don't see the multiple nodes behind an ELB because it is a managed service, so you don't have to take care of its availability yourself.
Another important point is that AWS Elastic IPs aren't assigned to the NIC of your instance's OS, so using virtual IPs as in classical infrastructures is difficult.
After this explanation, if you still want Nginx as a reverse proxy in AWS for your own reasons, I think you can implement an autoscaling group with a layer composed of Nginx instances, along the lines of the sketch below. But if you aren't experienced with autoscaling technology, it can be very tricky.
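For reference, each Nginx instance in such a layer could run a minimal reverse-proxy config along these lines (the upstream addresses and ports are hypothetical):

    # Minimal nginx reverse proxy in front of two app boxes
    # (upstream IPs/ports are placeholders).
    events {}
    http {
        upstream app_servers {
            server 10.0.1.10:8080;   # app box 1
            server 10.0.2.10:8080;   # app box 2
        }
        server {
            listen 80;
            location / {
                proxy_pass http://app_servers;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }
    }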
I'm relaunching a site (~5M+ visits per day) on EC2, and am confused about how to deploy nodes in different data centers. My most basic setup is two nodes behind a Varnish server.
Should I have two Varnish instances in different availability zones, each with WWW nodes that talk to a shared RDS database? Each Varnish instance can be load balanced with Amazon's load balancer.
Something like:
1 load balancer talking to:
Varnish in Virginia, which talks to its own us-east-x nodes
Varnish in California, which talks to its own us-west-x nodes
We use Amazon EC2 extensively for load balancing and fault tolerance. We still don't make much use of the load balancers provided by Amazon; we have our own load balancers (running outside Amazon). Amazon promises that its load balancers will never go down and that they are internally fault tolerant, but I haven't tested that well enough.
In general we host two instances per availability zone, one acting as a mirror of the real server. If one of the servers goes down, we send the customers to the other one. But lately Amazon has shown a pattern where a single availability zone goes down quite often.
So the wise technique, I presume, is to set up servers across availability zones, as you mentioned. We use Postgres, so we can replicate the database contents across the instances. With 9.0 there is built-in binary streaming replication, which works well (note that it is one-directional, from primary to standby). This way both servers can take the load when up, but when an availability zone does go down, all users are sent to one server. Since a common database is available, it does not matter which server the users go to; they will just experience a slight slowness if they go to the wrong server.
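For reference, a minimal sketch of the PostgreSQL 9.0 streaming-replication settings (the hostnames, the standby IP, and the replication user are hypothetical):

    # postgresql.conf on the primary
    wal_level = hot_standby        # generate WAL usable by a hot standby
    max_wal_senders = 3            # allow standby connections
    wal_keep_segments = 32         # keep WAL around so the standby can catch up

    # pg_hba.conf on the primary (standby IP and user are placeholders)
    host  replication  repuser  10.0.2.10/32  md5

    # postgresql.conf on the standby
    hot_standby = on               # allow read-only queries during recovery

    # recovery.conf on the standby
    standby_mode = 'on'
    primary_conninfo = 'host=primary.example.com port=5432 user=repuser'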
With this approach you can do tandem updating of the web sites: update one, ensure that it is running fine, and then update the next. So even if one server fails to upgrade, the whole website is always up.