AWS EC2 or ELB issue - amazon-ec2

I am running my web app (developed in Python & Flask) on AWS ELB with 10 EC2 instances. I am getting a throughput of 600 requests/sec.
But a month ago, with the same configuration, I was getting a throughput of 7000 requests/sec.
I have checked all the configuration in my web server (Nginx) to increase throughput. All is good.
Does anybody have an idea why I am facing this problem?

This could be a problem in AWS, or a problem in your application.
Start debugging by reading the CloudWatch ELB statistics (especially Latency) and then the logs for your application. There must be a bottleneck somewhere.
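As a starting point, the relevant ELB metrics can be pulled with the AWS CLI. This is a sketch only: the load balancer name and the time window are placeholders you would substitute. Besides Latency, SurgeQueueLength and SpilloverCount are the classic ELB metrics that point at a throughput bottleneck (requests queuing or being dropped at the ELB).

```
aws cloudwatch get-metric-statistics \
    --namespace AWS/ELB \
    --metric-name Latency \
    --dimensions Name=LoadBalancerName,Value=my-elb \
    --start-time 2017-01-01T00:00:00Z \
    --end-time 2017-01-01T01:00:00Z \
    --period 300 \
    --statistics Average Maximum
```

Repeating the query with `--metric-name SurgeQueueLength` (statistic: Maximum) and `--metric-name SpilloverCount` (statistic: Sum) will tell you whether the ELB itself is queuing or rejecting requests, or whether the slowdown is behind it.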

Related

Random 502/503 error on Nginx running behind Docker (on ECS cluster + ALB)

So I have set up a Laravel application hosted in a Docker container, which in turn is hosted on an AWS ECS cluster running behind an ALB.
So far I have the application up and running as expected; everything runs just the way it should (e.g. sessions are stored in memcached and working, static assets are in an S3 bucket, etc.).
Right now I have just one problem with stability, and I am not quite sure where exactly the problem is. When I hit my URL / website, it sometimes (randomly) returns a 502/503 HTTP error. When this happens I have to wait for a minute or two before the app returns a 200 HTTP code.
Here's the result of tailing my Docker container's nginx log.
At this point I am totally lost and not sure where else I should check. I've tried the following:
Run it locally, with the same Docker/nginx setup >> works just fine.
Run it without the ALB (i.e. using just one EC2 instance) >> similar problem.
Run it using the ALB on two different EC2 types (i.e. t2.small and t2.micro) >> both have the similar problem.
Run it using the ALB on just one EC2 instance >> similar problem.
According to your logs, nginx is answering 401 Unauthorized to the ALB health check request. You have to answer 200 OK on the / endpoint, or configure a different one such as /ping in your ALB target group.
To check the health of your targets using the console
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
On the navigation pane, under LOAD BALANCING, choose Target Groups.
Select the target group.
On the Targets tab, the Status column indicates the status of each target.
If the status is any value other than Healthy, view the tooltip for more information.
More info: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html
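A minimal sketch of that fix, using only the Python standard library (the /ping path and port are illustrative, not from the original post): serve 200 on a dedicated health-check path, so the ALB target group can point at it even while / requires authentication.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthCheckHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/ping":
            status, body = 200, b"OK"            # ALB marks the target Healthy
        else:
            status, body = 401, b"Unauthorized"  # what the ALB was seeing on /
        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

def serve(port=8080):
    """Run the demo server; the ALB health check would target HTTP:<port>/ping."""
    HTTPServer(("0.0.0.0", port), HealthCheckHandler).serve_forever()
```

The target group's health check path would then be set to /ping instead of /.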
I have had a similar issue in the past, for one of a couple of possible reasons:
Health checks configured for the ALB, i.e. the ALB is waiting for the configured number of checks to go green (e.g. hit an endpoint every 30 seconds and wait for a 200 on 4 out of 5 attempts). During the "unhealthy phase" the instance may be designated offline. This most often happens immediately after a restart or deployment, or if an instance goes unhealthy.
DNS within NGINX. If the DNS records of the downstream service that NGINX is proxying to have changed, it may be that NGINX has cached the old record (either according to the TTL, or for much longer depending on your configuration) and is therefore unable to connect to the downstream.
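For the DNS case, the usual workaround is to make nginx re-resolve the upstream at request time. A hedged sketch (the resolver address and hostname are illustrative; 169.254.169.253 is the Route 53 Resolver address available inside a VPC): using a variable in proxy_pass forces nginx to resolve via the configured resolver on each request instead of caching the address at startup.

```
resolver 169.254.169.253 valid=30s;   # re-resolve at most every 30s

server {
    listen 80;
    location / {
        # a variable here disables nginx's startup-time DNS caching
        set $backend "http://internal-service.example.com";
        proxy_pass $backend;
    }
}
```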
To help fully debug, it is worth determining whether the 502/503 is coming from the ALB or from NGINX. You may be able to determine this from the access log of the ALB, or from /var/log/nginx/access.log and error.log in the container.
It may also help to check whether there was a response body on the response.

Amazon EC2 free tier 503 error

I am using Amazon AWS EC2 to host a Parse Server database. It was working fine for the last couple of weeks, but today I got a 503 error saying: The server is temporarily unable to service your request due to maintenance downtime or capacity problems. My question is: is it because I'm using the free t2.micro tier and I have run out of quota? Or could there be some other problem? I just launched another instance and it seems to be working fine for now.
Have you set up a load balancer? Check out Elastic Beanstalk, which manages EC2 instances to automatically spin servers up and down as your needs require. Your server may have just crashed, and nothing was set up to automatically redeploy it.

Do I need to have HAProxy TCP/HTTP Load Balancer when I already have AWS ELB?

Let's say I have 20 servers at Amazon AWS, and I also have an AWS ELB set up for these servers. I heard that HAProxy is a reliable and fast TCP/HTTP load balancer, so the questions are:
Do I need to have HAProxy installed on each EC2 instance while I have AWS ELB?
What is the benefit of having both ELB and HAProxy at the same time?
Thanks
There are a few scenarios where people choose their own load-balancing solution, such as HAProxy, over ELB:
Financial transactions: ELB is an opaque service and logs are not provided, so if you are missing transactions, you won't know whether ELB dropped them.
Doesn't work well with traffic spikes: ELB scaling takes at least 5 minutes. If your application traffic doubles every 5-10 minutes, it will do well. But if traffic runs at a constant rate and you suddenly get a spike, you will have problems with ELB.
ELB can be slower than running your own load balancing: in my environment, I got a 15% performance boost by using HAProxy/Nginx (for SSL termination) instead. It was roughly 30 ms per call, but keep in mind I was using SSL, so CPU power was a factor.
ELB only does round-robin load balancing, while HAProxy supports a lot more algorithms.
HAProxy also has a ton more configuration options that ELB does not support. It depends on whether you need them for your application.
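As an illustration of the extra algorithms, a minimal HAProxy backend (server names and addresses are placeholders) can switch from round-robin to least-connections with a single directive:

```
backend app
    balance leastconn              # route to the server with the fewest connections
    server app1 10.0.1.10:8080 check
    server app2 10.0.1.11:8080 check
```

Other options include `balance source` (hash of the client IP) and `balance uri`, none of which classic ELB offers.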
In one suite of applications, I have both running: ELB -> HAProxy -> a suite of apps. In my case the following occurs:
The ELB terminates HTTPS and forwards plain HTTP
HAProxy routes to the app servers based on path
The app servers run plain old HTTP
The upside to this is that I can move the apps around without changing their URLs.
The downside is that an ELB doesn't have a fixed IP address, so if you need to point to it from an IP address instead of a CNAME, you can't.
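The path-based routing step in that setup can be sketched as an HAProxy frontend (backend names and paths are illustrative, not from the original answer):

```
frontend http-in
    bind *:80
    acl is_api  path_beg /api
    acl is_blog path_beg /blog
    use_backend api_servers  if is_api
    use_backend blog_servers if is_blog
    default_backend web_servers
```

Moving an app then only means editing the ACLs here, not changing any public URL.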
Short answer: No, you don't need HAProxy. Go with an ELB.
tldr;
Yes, HAProxy is powerful and tested.
First of all, you would need a separate EC2 HAProxy instance (as opposed to having HAProxy installed on every EC2 instance you need to balance). In essence, an ELB is equivalent to an EC2 instance loaded with some kind of load-balancing software.
Second, having both ELBs and HAProxy balancing instances in your environment is a rare use case. You might reach a point where you need more fine-grained access and the ability to configure more on your load balancers. It purely depends on what you're doing and what problems an ELB might be giving you. Google around to read through possible use cases.
I'm using an ELB with HAProxy behind it.
When a customer uses my web services from a single IP, the ELB redirects all of their requests to the same host, which doesn't scale. (I suppose it's a hash of the source IP or something like that.)
HAProxy offers other balancing algorithms.
I keep the ELB for HA (one HAProxy per Availability Zone), and each HAProxy instance redispatches to the backend servers in the region.

Amazon Load balancer not working?

I have an AWS Elastic Load Balancer. Sometimes the Elastic Load Balancer works, sometimes it doesn't: sometimes I am able to hit the app, sometimes it gives me a blank page.
Why is this happening?
You should check the ping path and ping port on the ELB, and whether you get a response. It seems that the ELB is taking the instances down as having failed the health check.
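The ping path and port can be inspected and adjusted from the CLI as well. A sketch with placeholder values (load balancer name, port, and path are illustrative) for a classic ELB:

```
aws elb configure-health-check \
    --load-balancer-name my-elb \
    --health-check Target=HTTP:80/ping,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=3
```

`aws elb describe-instance-health --load-balancer-name my-elb` then shows whether each instance is InService or OutOfService, and why.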
AWS Application Load Balancers are notoriously susceptible to DDoS attacks, a simple SYN flood will bring down a load balancer, and what's worse is that you won't know that it's down, because the AWS Dashboard doesn't expose anything about the load balancer other than some basic HTTP level metrics.

503 error in tomcat deployed in ec2

I have deployed Tomcat 6.0.28 on two Amazon EC2 instances, and they share a common MySQL 5.5 database. I have also made use of the Elastic Load Balancer. When I run the program using the Tomcat on my local machine, everything is fine.
But when I use the ones on EC2, I get the following error:
java.io.IOException: Server returned HTTP response code: 503 for URL:.
Can somebody help me? Thank you in advance.
Well, the problem was with sessions. I had not enabled stickiness, so the load balancer routed the next request to a different instance. I enabled stickiness on the Elastic Load Balancer and everything is fine now.
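For reference, the same stickiness fix can be applied from the CLI on a classic ELB. This is a hedged sketch; the load balancer name, policy name, port, and cookie lifetime are placeholders:

```
# create a duration-based (ELB-generated) sticky-session cookie policy
aws elb create-lb-cookie-stickiness-policy \
    --load-balancer-name my-elb \
    --policy-name sticky-sessions \
    --cookie-expiration-period 300

# attach the policy to the listener on port 80
aws elb set-load-balancer-policies-of-listener \
    --load-balancer-name my-elb \
    --load-balancer-port 80 \
    --policy-names sticky-sessions
```

Note that stickiness papers over the symptom; sharing session state (e.g. in memcached or the shared MySQL database) would let any instance serve any request.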
