Increase Timeouts To Avoid 504 Gateway Timeouts - ruby

I've taken over a Rails app that is hosted on EC2 using Passenger. After 1 minute, requests are cut off with a 504 Gateway Timeout error. Where can this be increased? I've set TimeOut: 100000000 in Apache's /etc/http/http.conf, but that doesn't help.
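Two places are worth checking here, as a minimal sketch rather than a definitive fix. First, Apache's directive is Timeout (no colon), set in the main httpd.conf. Second, the 1-minute cutoff matches the default 60-second idle timeout of an Elastic Load Balancer; whether one sits in front of this instance is an assumption, not something stated in the question, and the load balancer name below is a placeholder:

    # Apache request timeout (config path varies by distro)
    Timeout 300

    # Only relevant if a Classic ELB fronts the instance (assumption):
    # its idle timeout defaults to 60 seconds and is changed separately.
    aws elb modify-load-balancer-attributes \
        --load-balancer-name my-load-balancer \
        --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":300}}"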

Related

502 Bad Gateway on Elastic Load Balancer

I have a load balancer that forwards requests to an EC2 server running a MuleSoft application on a port; the load balancer redirects whatever comes through to that port.
I get a 502 Bad Gateway every time I try to POST data through Postman, but as soon as I POST again the request goes through:
1st attempt: fails with 502 Bad Gateway
2nd attempt (immediately after): goes through
I increased the timeout on the load balancer to 5 minutes, but I still get a 502 Bad Gateway on the first try.
Any help/suggestions appreciated.
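The "first request fails, immediate retry succeeds" pattern often points to the load balancer reusing a kept-alive connection that the backend has already closed, so the usual check is that the backend's keep-alive/idle timeout is longer than the load balancer's idle timeout. A minimal sketch, assuming an ALB (elbv2); the ARN is a placeholder and the Mule-side setting is only described in a comment because the exact listener property depends on the Mule configuration:

    # Inspect the current idle timeout on the ALB (<alb-arn> is a placeholder)
    aws elbv2 describe-load-balancer-attributes --load-balancer-arn <alb-arn>

    # Raise it if needed, and make sure the Mule HTTP listener's connection idle /
    # keep-alive timeout is set LONGER than this value so the target never closes first.
    aws elbv2 modify-load-balancer-attributes \
        --load-balancer-arn <alb-arn> \
        --attributes Key=idle_timeout.timeout_seconds,Value=300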

AWS ALB returning 502 without any log entries

We're using Node.js backend servers running in AWS ECS, behind an ALB. We then have AWS API Gateway with a proxy Lambda calling the ALB. This has been running in production for months, when suddenly a few days ago we started seeing 502 errors from some API calls.
I've checked the proxy Lambda's logs and confirmed that the 502 is returned by the ALB. However, when I check my Node application logs, there are no failing requests; in fact, no requests seem to have reached the application at those timestamps. I then enabled access logs on the ALB, which only show 200/201 responses, no 5xx whatsoever. I'm now a bit confused as to where to look next.
What could cause my ALB to return 502 without this showing up in the ALB access logs? And what could cause the requests to never reach my Node app in ECS? Does anyone have an idea of what logs to check next, or what to do to pinpoint the errors? Could some layer within ECS cause these symptoms? I can't see any errors in my Docker containers or anywhere else.
It seems to happen in bursts: up to 50 failed requests within a short period, then everything is fine for several hours.
It could be due to a number of reasons; the following may apply to you:
- The load balancer received a TCP RST from the target when attempting to establish a connection.
- The load balancer received an unexpected response from the target, such as "ICMP Destination unreachable (Host unreachable)", when attempting to establish a connection. Check whether traffic is allowed from the load balancer subnets to the targets on the target port.
- The target closed the connection with a TCP RST or a TCP FIN while the load balancer had an outstanding request to the target. Check whether the keep-alive duration of the target is shorter than the idle timeout value of the load balancer (see the Node.js sketch below).
- The target response is malformed or contains HTTP headers that are not valid.
- The load balancer encountered an SSL handshake error or an SSL handshake timeout (10 seconds) when connecting to a target.
Reference: the AWS troubleshooting docs for ALB HTTP 502 errors.
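Since the backends here are Node.js, the keep-alive cause above is cheap to rule out: Node's default server keep-alive timeout (5 seconds) is shorter than the ALB's default idle timeout (60 seconds), so the target can close idle connections first. A minimal sketch, assuming a plain http server and the default ALB idle timeout; the port and exact values are illustrative:

    const http = require('http');

    const server = http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('ok');
    });

    // Keep idle sockets open longer than the ALB idle timeout (default 60s),
    // so the ALB, not the target, is the side that closes idle connections.
    server.keepAliveTimeout = 65 * 1000;
    // headersTimeout should exceed keepAliveTimeout on recent Node versions.
    server.headersTimeout = 66 * 1000;

    server.listen(8080);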
This turned out to be memory leaks in my container applications. RAM usage grew with every request until the container crashed. At that point it took a while for ECS and the ALB to react, so a bunch of requests were routed to the dead instance.
The problem was resolved by fixing the leak, but I would have liked better built-in support in ECS/CloudWatch for alarms on high memory usage, with triggers to gracefully replace instances when usage gets too high. It seems I have to build that from scratch.
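As a stopgap for the missing built-in alerting, a CloudWatch alarm on the ECS service's memory metric can at least page someone before the container dies. A minimal sketch with the AWS CLI; the cluster, service, threshold, and SNS topic are placeholders, and replacing tasks automatically would still need something wired to the alarm action:

    aws cloudwatch put-metric-alarm \
        --alarm-name ecs-service-high-memory \
        --namespace AWS/ECS \
        --metric-name MemoryUtilization \
        --dimensions Name=ClusterName,Value=my-cluster Name=ServiceName,Value=my-service \
        --statistic Average \
        --period 60 \
        --evaluation-periods 5 \
        --threshold 80 \
        --comparison-operator GreaterThanOrEqualToThreshold \
        --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts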

504 Gateway timeout for ELB

I have an AWS Elastic Load Balancer with two healthy instances. If I make a POST request, it gets accepted, but subsequent requests throw a 504 Gateway Timeout error. After 5-10 minutes it accepts 2-4 requests and then starts throwing 504 errors again. The target is a Spring Boot application hosted on these two instances, and there are no application-level timeouts. Furthermore, the time between failed and accepted requests varies, so I don't believe a fixed timeout setting is causing the issue. How can I resolve this?
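Two quick diagnostics worth running while the 504s are happening, sketched with the AWS CLI under the assumption that this is a Classic ELB; the load balancer name is a placeholder. Instances that report healthy between failures can still flap during the failure window, and the idle timeout (default 60 seconds) is where the ELB gives up on a slow backend response:

    # Per-instance health as currently seen by the ELB
    aws elb describe-instance-health --load-balancer-name my-elb

    # Current ELB attributes, including the idle timeout
    aws elb describe-load-balancer-attributes --load-balancer-name my-elb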

Jersey client gets 504 when server keeps processing request

I have a Jersey client and server, and I see this behavior:
In the client, I POST a request.
On the server, I see the request arrive and start handling it.
Then, all of a sudden, the client receives an empty response with status 504 while the server is still processing the request.
I've set the client's read and connect timeouts much higher than the time at which the empty response arrives.
After further analysis, the gateway timeout turned out to be caused by a load balancer sitting between the client and the server.
Reconfiguring the timeout on the load balancer solved the issue.
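The thread doesn't name the load balancer, so purely as an illustration, assuming an nginx reverse proxy in that role: the directive that usually produces this exact symptom is proxy_read_timeout, which defaults to 60 seconds and causes nginx to return a 504 while the upstream keeps working. The upstream name and values below are placeholders:

    # nginx reverse-proxy timeouts (assumed LB; "backend" is a placeholder upstream)
    location / {
        proxy_pass             http://backend;
        proxy_connect_timeout  75s;
        proxy_send_timeout     300s;
        proxy_read_timeout     300s;   # raise above the longest expected processing time
    }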

Frequent 504 Gateway Time-out on appharbor

I deployed an ASP.NET MVC 4 app on AppHarbor with very low traffic. Each time the application is accessed after a deployment or after a few minutes of inactivity, I get a 504 Gateway Time-out error from nginx. Very annoying; what can I do to work around the error?
EDIT:
From a support ticket on AppHarbor's support site:
The HTTP 504 is returned because the application doesn't respond within the request timeout. Application startup can take a little while, so sometimes a 504 may be returned on the initial request.
Applications on the free plan idle out after 20 minutes of inactivity. You can upgrade to one of the paid plans as they don't idle out after a period of inactivity.
We (AppHarbor) are working on decreasing the time it takes for applications to start up, which will mitigate the issue further. Note that the default request timeout was very recently increased to 120 seconds, so if you continue to experience this you're very welcome to open a ticket and let us know the application name so we can take a closer look.
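A commonly used workaround for cold-start 504s (not an AppHarbor recommendation, just an assumption about the use case) is to keep the app warm by requesting it more often than the idle window, for example from cron on any machine; the URL is a placeholder:

    # Ping the app every 15 minutes so the worker never idles out
    */15 * * * * curl -fsS -o /dev/null https://myapp.apphb.com/

Upgrading to a paid plan, as the support reply suggests, removes the idling behavior entirely and is the cleaner fix.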
