Whenever a request to a dyno times out, it is automatically retried. It looks like this is managed by the dyno, according to this article: https://devcenter.heroku.com/articles/http-routing. How can I set or change this behaviour myself?
It's not possible to modify the behaviour of the router. If you need certain requests to be refused for whatever reason, this logic would need to be carried out within your application.
Details on how Heroku's routing works are available in the Dev Center article: HTTP Routing. The section on Dyno connection behavior on the Common Runtime is probably most relevant to your question.
PS: This was the response from Heroku support when I raised a ticket for the same issue.
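For what it's worth, a rough sketch of enforcing a limit inside the application itself (assuming a Node.js/Express app; this is my illustration, not part of the support reply) could look like:

```js
// Hypothetical sketch: since the router's retry/timeout behaviour can't be changed,
// answer slow requests ourselves before Heroku's 30 s router timeout kicks in.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // If nothing has been sent after 25 s, reply with 503 so the client
  // gets a definite response instead of a router-level timeout.
  const timer = setTimeout(() => {
    if (!res.headersSent) {
      res.status(503).send('Service timed out');
    }
  }, 25 * 1000);
  res.on('finish', () => clearTimeout(timer));
  next();
});

app.listen(process.env.PORT || 3000);
```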
I'm investigating some issues with Stripe webhooks not reaching our test server.
According to their docs they submit requests from the following IPs: https://stripe.com/docs/ips#webhook-notifications
I have added these IPs to the iptables:
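The rules are roughly of this shape (a simplified, hypothetical sketch rather than my actual ruleset; one rule per IP from Stripe's list, shown here only with the IP mentioned below):

```
# Hypothetical illustration: accept webhook traffic from one of Stripe's
# documented IPs. The chain, port and target here are placeholders.
iptables -A INPUT -p tcp -s 54.187.216.72 --dport 443 -j ACCEPT
```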
I'm not an iptables expert, but looking at this it seems that only 54.187.216.72 is matching. Other requests from Stripe fail with a timeout error, and I assume those come from other IPs.
Only the working IP shows up in my Apache logs. I think I can rule out ufw / firewall issues because I also tried temporarily disabling it during testing.
My question: How do I investigate this issue further? Is my iptables setup correct? Is there anything else here, other than iptables and ufw, that could block IPs?
Stripe could not tell me which IP was used for their requests.
I hope I'm providing the correct information here, if not please let me know!
Thanks a lot!
The problem is that you do not have information from network diagnostic tools and cannot get it. A timeout error in the Stripe dashboard does not necessarily mean that your server is blocking incoming requests or failing to respond: such error messages are generated by the Stripe backend, not by network tools. Where are the packets lost? Are they really lost at all? It is quite possible that Stripe has strict time limits for resolving its request Promises and simply rejects them regardless of the result.
You can't run traceroute from their IPs. Those IPs don't respond to ping. You can talk to your hosting provider and get a "no problem here" response. You can turn off the firewall and still see errors. I'm pretty sure your filter rules are not the root of this issue.
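One thing you can still do on your own side is capture traffic at the server to see whether Stripe's packets arrive at all; for example (an illustration using the one IP mentioned in the question; adjust the port to your webhook endpoint):

```
# Hypothetical check: show any packets arriving from the documented Stripe IP.
tcpdump -ni any host 54.187.216.72 and tcp port 443
```

If nothing shows up while Stripe reports a timeout, the packets are being dropped before they ever reach your machine.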
P.S.: We had the same issue. Without any filtering rules, our endpoint was accessible from around the world except from those IPs. The problem appeared without any change to the configuration, and it disappeared without any change either. Neither Stripe nor the hosting provider found any problem.
I am using SSE to push notifications to the client. The architecture for my data services is as follows:
Client -> API Gateway (Spring Cloud API Gateway) -> F5 (load balancer) -> nginx -> data service
When the load balancer is out of the picture, my notifications work perfectly, but when I introduce the F5 load balancer they do not work and the connection breaks.
Does the F5 load balancer support long-lived HTTP connections? What configuration do I need to make this work?
Your question doesn't make clear whether it doesn't work at all, or whether it stops working after a while (and if so, after how long?).
I suppose your F5 VS (Virtual Server) is of type Standard.
First, we can check whether the HTTP Profile is in any way to blame. If your Virtual Server type is Standard virtual server with Layer 7 functionality, change it if possible to plain Standard by removing the HTTP Profile (and maybe some other profiles, such as caching). You can also try the Performance (Layer 4) type. Does that solve the issue? If yes, we need to identify where the problem is, probably in the HTTP Profile or in a timeout setting, as described below.
Check the HTTP Profile configured for your VS: its Response Chunking option should be set to Preserve. See LTM HTTP Profile Option: Response Chunking if you need more details.
Check both the Server and Client TCP Profiles related to your VS; their Time Wait option should be Indefinite if you suspect a timeout issue. There are other ways to solve a timeout; I'm just giving one of them. See K70025261 if you need more details.
As you're running SSE, you should probably also disable Delayed Acks (enabled by default) and Nagle's Algorithm (disabled by default), as they can make your notifications slower. They're both on the TCP Profile screen as well.
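If you prefer the command line, those TCP Profile tweaks would look roughly like this in tmsh (a sketch only; the profile name is a placeholder and option names/values can differ between TMOS versions, so verify them with `tmsh list ltm profile tcp <name> all-properties` before applying):

```
# Sketch: a custom TCP profile with an indefinite Time Wait, Delayed ACKs off
# and Nagle off; attach it to the client and server side of the VS afterwards.
tmsh create ltm profile tcp sse-tcp { defaults-from tcp time-wait-timeout indefinite delayed-acks disabled nagle disabled }
```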
To answer the question:
YES, F5 supports SSE, as I was able to make it work with some configuration tweaks. I cannot paste the configuration snapshot here, but in summary, turning off the **HTTP compression** property seems to have done the trick in my case.
I currently have an application that is using the Pusher API to enable real time messaging and would like to remove my dependency on Pusher.
I am keen to keep my current application as it stands and connect over websockets to a channel on a Phoenix app that is a completely separate application on a separate instance. The reasoning is that this will let me scale the Phoenix app separately when there is a large number of messages.
Is this possible? I have experience with Socket.IO, which supports this by letting you specify the location of the socket application when connecting.
Yes, it's possible: you can set the :check_origin option, as explained in the lib/phoenix/transports/long_poll.ex source code:
https://github.com/phoenixframework/phoenix/blob/master/lib/phoenix/transports/long_poll.ex#L26
> `:check_origin` - if we should check the origin of requests when the origin header is present. It defaults to true and, in such cases, it will check against the host value in `YourApp.Endpoint.config(:url)[:host]`. It may be set to false (not recommended) or to a list of explicitly allowed origins.
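For example, a minimal configuration sketch allowing a separate client origin might look like this (the app name, endpoint module and origin URL are placeholders, not something from the original answer):

```elixir
# config/prod.exs -- illustrative sketch; replace the names with your own.
config :my_app, MyAppWeb.Endpoint,
  check_origin: [
    # origins that are allowed to open the websocket connection
    "https://client.example.com"
  ]
```

Your other application then connects straight to the Phoenix socket URL, e.g. wss://your-phoenix-host/socket/websocket if the endpoint mounts the socket at the default "/socket" path.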
I'm trying to diagnose an issue where an embedded device running an HTTP client issues requests to a Node.js web application on Heroku and receives empty responses with status code 400.
The problem I'm facing is that the presumably failing requests do not even appear in the Heroku logs, so it's certainly not the Web application code returning those 400s.
On the other hand issuing requests to the Web application from a browser works just fine and the requests do appear in the Heroku logs.
I'm trying to figure out whether the embedded client is really sending requests at all and I'm wondering if there are any reasons why Heroku might send back those 400s without the requests even appearing in the logs.
The cause was a badly implemented HTTP client in the device, which was issuing requests without the Host header.
Adding the header solved the problem.
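For reference, HTTP/1.1 requires the Host header, so a minimal valid request looks roughly like this (the path and hostname are placeholders):

```
GET /api/status HTTP/1.1
Host: your-app.herokuapp.com
Connection: close
```

Without the Host line, the request is rejected before it ever reaches the dyno, which would explain why nothing showed up in the application's logs.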
Our Spring application is running on several different servers. For one of those servers POST requests do not seem to be working. All site functionality that uses GET requests works completely fine; however, as soon as I hit something that uses a POST request (ex. form submit) the site just hangs permanently. The server won't give any response. We can see the requests in Tomcat Manager but they don't time out.
Has anyone ever seen this?
We have found the problem. Our DBA accidentally deleted the MySQL database files on that particular server (/sigh). In our Spring application we use GET requests for record retrieval and the records we were trying to retrieve must have been cached by MySQL. This made it seem as if GET requests were working. When trying to add new data to the database, which we use POST requests to do, Tomcat would wait for a response, which never came, from MySQL.
In my experience, if you're getting a timeout error it's almost always due to not having the correct ports open for your application. For example, go into your virtual machine's rules and ensure ports 8080, 8443 or 80, 443 are open for HTTP and HTTPS traffic.
In Google Cloud Platform it's under VPC network -> Firewall rules. Azure and AWS are similar.
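As an illustration (not from the original answer), on GCP a rule opening those ports could also be created from the command line along these lines; the rule name and network are placeholders, and you would normally restrict the source ranges:

```
# Hypothetical example: allow inbound HTTP/HTTPS and the 8080/8443 variants
# on the default network from any source.
gcloud compute firewall-rules create allow-web \
  --network=default \
  --direction=INGRESS \
  --allow=tcp:80,tcp:443,tcp:8080,tcp:8443 \
  --source-ranges=0.0.0.0/0
```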