504 Gateway Timeout after 50 sec on EC2 - amazon-ec2

I have an application hosted on Amazon EC2. When I make an AJAX request, it times out after 50 seconds.
Does anyone know how to increase this timeout?

It can be increased through the AWS Console, in the Load Balancer section, by raising the load balancer's idle timeout.
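For anyone who wants to script the change instead of using the console, here is a minimal sketch using the AWS SDK for Java against a Classic ELB; the load balancer name and the 120-second value are placeholders, not values from the question:

```java
import com.amazonaws.services.elasticloadbalancing.AmazonElasticLoadBalancing;
import com.amazonaws.services.elasticloadbalancing.AmazonElasticLoadBalancingClientBuilder;
import com.amazonaws.services.elasticloadbalancing.model.ConnectionSettings;
import com.amazonaws.services.elasticloadbalancing.model.LoadBalancerAttributes;
import com.amazonaws.services.elasticloadbalancing.model.ModifyLoadBalancerAttributesRequest;

public class RaiseIdleTimeout {
    public static void main(String[] args) {
        AmazonElasticLoadBalancing elb = AmazonElasticLoadBalancingClientBuilder.defaultClient();

        // Raise the idle timeout past the ~50 s at which the AJAX calls are
        // currently being cut off. "my-load-balancer" is a placeholder name.
        elb.modifyLoadBalancerAttributes(new ModifyLoadBalancerAttributesRequest()
                .withLoadBalancerName("my-load-balancer")
                .withLoadBalancerAttributes(new LoadBalancerAttributes()
                        .withConnectionSettings(new ConnectionSettings()
                                .withIdleTimeout(120))));
    }
}
```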

Related

Performance issue with Spring R2DBC

Problem: slowness is observed during the first minute only; the rest of the run has a constant response time that is 90% better than the max response time.
Server environment details: Spring Boot WebFlux with an r2dbc pool, deployed on ECS Fargate and connecting to a Postgres Aurora cluster.
Pool settings: maxSize and initialSize are 200 (see the sketch below).
Using Spring Data R2DBC, with a proxy listener enabled for debugging.
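For context, here is a minimal sketch of the pool setup described above using plain r2dbc-pool APIs outside Spring's auto-configuration (the connection URL is a placeholder). One detail worth noting: as far as I can tell, r2dbc-pool creates the initial connections lazily unless warmup() is called, which could matter when only the first minute is slow:

```java
import io.r2dbc.pool.ConnectionPool;
import io.r2dbc.pool.ConnectionPoolConfiguration;
import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;

public class PoolSetup {
    public static void main(String[] args) {
        // Placeholder URL; the question uses a Postgres Aurora cluster.
        ConnectionFactory factory =
                ConnectionFactories.get("r2dbc:postgresql://aurora-host:5432/mydb");

        ConnectionPoolConfiguration config = ConnectionPoolConfiguration.builder(factory)
                .initialSize(200)   // settings from the question
                .maxSize(200)
                .build();

        ConnectionPool pool = new ConnectionPool(config);

        // Connections are otherwise opened on demand; warmup() eagerly opens the
        // initialSize connections up front, so first requests don't pay that cost.
        pool.warmup().block();
    }
}
```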
Client:
A Gatling script with a very minimal load of 200, 250, 300, or 500 users and a 50-second ramp time, configured on an AWS EC2 instance in the same VPC.
Scenario
The ECS server is started.
Wait for 4 minutes.
Do a dry run of 5 requests using Postman.
Trigger the load using Gatling.
Shut down ECS.
Repeat the steps with a different number of users.
The behaviour is consistent across different user counts: the first minute always has the slowest responses, including the max response time. Subsequent runs without a server restart perform well, with no delays.
Total | OK  | KO | Cnt/s | Min | 50th pct | 75th pct | 95th pct | 99th pct | Max  | Mean | Std Dev
500   | 500 | 0  | 9.804 | 94  | 184      | 397      | 1785     | 2652     | 2912 | 417  | 556
I also observed in the logs that, for the request with the max response time, the gap between these two consecutive log lines is 168 ms:
-- Executing query: BEGIN
-- io.r2dbc.spi.Connection.beginTransaction callback at ConnectionFactory#create()
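Since a proxy listener is already enabled, it can report per-query execution time directly instead of requiring the gap between adjacent log lines to be measured by hand. A rough sketch of such a listener, assuming the r2dbc-proxy builder API (method names are from my reading of that library; the connection URL is a placeholder):

```java
import io.r2dbc.proxy.ProxyConnectionFactory;
import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;

public class QueryTiming {
    public static ConnectionFactory proxied() {
        // Placeholder URL for illustration.
        ConnectionFactory original =
                ConnectionFactories.get("r2dbc:postgresql://aurora-host:5432/mydb");

        // Log each executed query together with its measured execution time,
        // so slow statements (such as the BEGIN above) show up explicitly.
        return ProxyConnectionFactory.builder(original)
                .onAfterQuery(exec -> System.out.println(
                        exec.getQueries() + " took " + exec.getExecuteTime()))
                .build();
    }
}
```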
Any suggestions on how to approach/fix this issue?
Thanks.

Elasticsearch speed vs. Cloud (localhost to production)

I have a single ELK stack with a single node running in a Vagrant VirtualBox VM on my machine. It has 3 indexes, which are 90 MB, 3.6 GB, and 38 GB.
At the same time, I also have a JavaScript application running on the host machine, consuming data from Elasticsearch. It runs with no problem locally; the speed and everything else are perfect.
The issue comes when I put my JavaScript application into production, as the Elasticsearch endpoint in the application has to change from localhost:9200 to MyDomainName.com:9200. The application runs at a fine speed within the company, but when I access it from home the speed drastically decreases and it often crashes. However, when I go to Kibana from home, running queries there is fine.
The company uses BT broadband with a 60 Mb download speed and 20 Mb upload. It doesn't use a fixed IP, so I have to update the A record manually whenever the IP changes, but I don't think that is relevant to the problem.
Is the internet speed the main issue affecting the loading speed outside of the company? How do I improve this? Is the cloud (a CDN?) the only option that would make things run faster? If so, how much would it cost to host it in the cloud, assuming I would index a lot of documents at first but do at most 10 MB of indexing daily after that?
UPDATE 1: Metrics from sending a request from home, via Chrome > Network:
Queued at 32.77 s
Started at 32.77 s
Resource Scheduling
- Queueing 0.37 ms
Connection Start
- Stalled 38.32 s
- DNS Lookup 0.22 ms
- Initial Connection
Request/Response
- Request sent 48 μs
- Waiting (TTFB) 436.61 ms
- Content Download 0.58 ms
UPDATE 2:
The stalling period seems to be much shorter when I use a VPN.

Why is the latency of requests to Jetty on EC2 Linux so high?

I'm running jetty-distribution-9.3.0.v20150612 on Java(TM) SE Runtime Environment (build 1.8.0_51-b16) on an AWS EC2 m1.small Linux machine.
It communicates with mobile apps at a mean rate of 36 hits per minute, with about 60% of traffic using HTTP/2.0. Mean CPU utilisation is ~15% at peak and network I/O stands at around 5 MB per minute, so it doesn't have any resource choking due to traffic.
Jetty's AsyncNCSARequestLog latency logging shows an average latency of around 2000 ms. As explained in this post, latency is calculated as (now - request.getTimeStamp()), so it does not separate the time Jetty took to handle the request from the time it took to create the HTTP connection.
How do I analyse the request latency in order to find the bottleneck?
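One way to split the two is to time just the servlet/handler portion yourself and compare it against the logged latency; the difference approximates what happens before your code runs (connection setup, TLS handshake, queueing). A minimal sketch using a plain javax.servlet filter, with illustrative logging:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Hypothetical filter: times only the handler portion of each request.
public class HandlerTimingFilter implements Filter {
    @Override public void init(FilterConfig config) { }
    @Override public void destroy() { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long start = System.nanoTime();
        try {
            chain.doFilter(req, res);
        } finally {
            long handlingMs = (System.nanoTime() - start) / 1_000_000;
            // Compare against the NCSA latency (now - request.getTimeStamp());
            // the gap approximates connection setup / queueing before handling.
            System.out.println("handler time: " + handlingMs + " ms");
        }
    }
}
```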

Amazon Load Balancer excessively high latency

I'm having an issue with an AWS load balancer: loading pages through it gives high latency (~5 s).
There are two EC2 instances living behind the load balancer, let's call them p1 and p2.
I'm running Magento on these instances; both are connected to the same database.
When viewing a category page on p1 or p2 directly, the initial load time is < 500 ms, but when I visit via the load balancer (which then points to p1 or p2), the browser spends ~5 seconds waiting for a response from the server.
[Screenshots compared a typical request to p1 or p2 directly with a typical request through the load balancer.]
I initially suspected it might be an issue with Magento trying to re-cache for requests coming from the load balancer, but I then set p1 and p2 to have their caches synchronised, so the cache is unlikely to be the cause.
The stacks on p1 and p2 are fairly regular Apache2 + PHP-FPM + PHP setups that are lightning fast on their own.
AWS has recently released a new ELB feature for exactly this kind of troubleshooting scenario: you can now get ELB access logs. These access logs can help you determine the time taken by a request at its different stages, e.g.:
request_processing_time: Total time elapsed (in seconds) from the time the load balancer receives the request and sends the request to a registered instance.
backend_processing_time: Total time elapsed (in seconds) from the time the load balancer sends the request to a registered instance and the instance begins sending the response headers.
response_processing_time: Total time elapsed (in seconds) from the time the load balancer receives the response header from the registered instance until it starts sending the response to the client. This processing time includes both the queuing time at the load balancer and the connection acquisition time from the load balancer to the backend.
...and a lot more information. You need to configure access logging first; please see the articles below for more background on using ELB access logs:
Access Logs for Elastic Load Balancers
Access Logs
These logs may or may not solve your problem, but they are certainly a good place to start (see the parsing sketch below). Besides, you can always check with AWS Technical Support for a more in-depth analysis.
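As an illustration of the fields above, the three timing values are the 5th-7th whitespace-separated columns of each classic ELB access log entry, so a backend-vs-ELB breakdown is easy to script. A minimal sketch; the sample line is made up, following the documented log format:

```java
// Minimal parser for the timing fields of a classic ELB access log entry.
// Format: timestamp elb client:port backend:port request_processing_time
//         backend_processing_time response_processing_time ...
public class ElbLogTimings {
    public static void main(String[] args) {
        // Made-up sample line for illustration only.
        String line = "2014-02-15T23:39:43.945958Z my-loadbalancer "
                + "192.168.131.39:2817 10.0.0.1:80 0.000073 0.001048 0.000057 "
                + "200 200 0 29 \"GET http://example.com:80/ HTTP/1.1\"";

        String[] f = line.split(" ");
        double request  = Double.parseDouble(f[4]); // ELB received -> sent to instance
        double backend  = Double.parseDouble(f[5]); // instance processing time
        double response = Double.parseDouble(f[6]); // instance responded -> ELB sent to client

        System.out.printf("request=%.6fs backend=%.6fs response=%.6fs%n",
                request, backend, response);
    }
}
```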

Performance with several Web processes running

I'm developing a Rails app and am confused about what I'm seeing. I am new to this, so I may be misinterpreting the information. When I run one web process I get good results, but when I increase the number of web processes I don't get the results I expect. I am trying to calculate how many I will need to run in production so I can determine my costs.
Based on New Relic, I have response times of 40-60 ms per request at 3,000 requests per minute (about 50 requests per second) on one Heroku dyno. Things are working just fine and the web processes are not even being pushed. Some responses are timing out at 1 second on Blitz, but I expect that, because I'm pushing as much as I can through one dyno.
Now I try cranking up the dynos, first to 10 and then to 50. I rush with Blitz again and get the same results as above. With 50 dynos running, I blitz the website with 250 concurrent users and get response times of 2 or 3 seconds. New Relic is reading the same traffic as with one dyno, 3,000 requests per minute with 60 ms request times, while Blitz is seeing 3-second response times and a max of 50 to 60 rps.
In the logs I can see the activity split nicely between the different web processes. I am just testing against the home page, not accessing any external services or making database calls. I'm not using caching.
I don't understand why a single web process can easily handle up to 60 requests per second (per Blitz), but increasing to 10 or even 50 web processes gives no additional performance. I did the Blitz rushes several times, ramping up the concurrent users.
Anyone have any ideas about what's happening?
I ended up removing the app server, Unicorn, from my app and adding it back in again. Now I'm getting 225 hits per second with 250 concurrent users on one dyno, and running multiple dynos seems to work just fine as well. It must have been an error in setting up the app server, though I never did track down the exact cause.
