Did Heroku change its handling of double slash in paths?

Our Heroku-hosted service suddenly started receiving requests for //foo instead of /foo yesterday; the service naturally 404'd the broken //foo version. It looks like a client of the service had been sending //foo all along, but until about noon Eastern on July 23 the service was somehow only seeing /foo, so the client was working fine. It broke once the service started seeing //foo.
Did the Heroku router auto-fix //foo or something until yesterday?
Just trying to figure out what happened and ensure there's nothing else to fix.

I assume there was a Heroku router change, because we noticed the same thing at around the same time. Our application had been receiving "//foo" for at least the last few months, but the router had been logging and handling it as "/foo" up until around the time you mention.
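If the application itself needs to tolerate the doubled slash regardless of what the router does, one option is to normalize the path before routing. A minimal sketch, assuming a Rack-based app (the middleware class name is illustrative):

require "rack"

# Collapse repeated slashes in the request path before it reaches the router,
# so "//foo" is routed the same as "/foo". The class name is illustrative.
class CollapseSlashes
  def initialize(app)
    @app = app
  end

  def call(env)
    env["PATH_INFO"] = env["PATH_INFO"].squeeze("/")  # "//foo" -> "/foo"
    @app.call(env)
  end
end

# config.ru
#   use CollapseSlashes
#   run MyApp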

Related

Inconsistent crashes on Heroku app: where to look?

Disclaimer: Please tell me if the question is too broad, and I will do my best to narrow it down.
We have a Heroku app running two 1X web dynos. This infrastructure has been running for the last 9 months.
However, in the last few weeks we have had several episodes where the app's response times skyrocket for about an hour before returning to normal, without us doing anything about it.
The screenshots below show an extract of Heroku Metrics during one of these episodes, which happened yesterday afternoon.
As you can see, response time goes up and eventually almost any request made to the server times out. During the event, it was barely possible to even load the home page of our website, hosted on this app. Most of the time, we would get the "Application Error" Heroku page.
What I see is:
The number of requests to the server (failed or not) was not unusually high (fewer than 1,000 every 10 minutes), so I think a DDoS attack is out of the picture.
All the Heroku logs show is that the failed requests get a 503 (Service Unavailable) error, which would make me think of an overload.
The dynos do not seem overloaded. The memory usage is low, and the dyno load is reasonable, nothing unusual.
Heroku reported no issue during our crash event, as https://status.heroku.com/ states (last incident was on the 1st of July).
Restarting the dynos through several methods (from the web interface, from the command line, or by triggering an automatic deployment via our GitLab repository) had no effect.
I am quite unsure how to interpret these metrics, and what the solution would be to ensure this kind of episode does not happen again.
So my question is: where should I look? Is there some kind of documentation about how to investigate crashes on Heroku apps?
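One concrete starting point, given that the logs only show 503s, is to tally the Heroku router error codes (code=H12 "Request timeout", code=H10 "App crashed", and so on) around an episode, since they distinguish slow dynos from crashed or unreachable ones. A rough Ruby sketch, assuming router log lines captured from a log drain or from heroku logs into a file (the file name is a placeholder):

# Count Heroku router error codes (H12, H10, ...) in captured log lines to see
# which failure mode dominates during an episode. "router.log" is a placeholder.
counts = Hash.new(0)

File.foreach("router.log") do |line|
  code = line[/\bcode=(H\d+)\b/, 1]
  counts[code] += 1 if code
end

counts.sort_by { |_, n| -n }.each do |code, n|
  puts format("%-5s %d", code, n)
end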

Jetty webserver breaks after being idle

I have a webapp deployed successfully in Jetty webserver.
The webserver responds to requests fine.
When I access the app it renders the home page.
Recently I noticed that when I don't use the app for a certain period of time, it somehow breaks. The period is somewhere around two to three weeks.
When I access the webapp after two to three weeks of idling, I receive this output.
If I try to access any other link, e.g. the login page (/login.faces), I receive:
Problem accessing /error/not-found.faces. Reason:
/error/not-found.xhtml Not Found in ExternalContext as a Resource
which used to work normally before the idle period.
If I restart the webserver, everything returns to normal and works fine. There are scheduled tasks that make the app interact with the database every day (for example, a scheduled task that fetches currency rates via a webservice).
Therefore, my question is: what could cause the site to break and become unavailable after idling? Is this a webserver (Jetty) issue? Am I missing some crucial setting?
FYI, the project uses Java with Spring, Hibernate, JSF (PrimeFaces), and Jetty.
This occurred due to file permissions on CentOS.
If anyone faces the same issue, make sure the log files have the appropriate read and write permissions.

Server cascading failure

This might be a totally noob question.
We just migrated to AWS a week back. We have two separate apps, call them App1 and App2. For every request that App1 receives, it makes a web service call to App2 with a read timeout of 2 seconds, so if the response isn't delivered within 2 seconds, the call is aborted. However, App2 has been having some problems, due to which the App2 server sometimes goes down. The problem is that whenever the App2 server goes down, the App1 server goes down with it. And when App2 comes back up, the App1 server immediately comes back up as well.
This is a weird problem. What do you think is happening?
Any help will be greatly appreciated.
My guess is that requests are piling up on App1 (due to increased latency) as App2 goes down, which eventually causes App1 to become unresponsive as well. I would also look into what actually happens when you abort your request after the two-second timeout. Are you actually making sure the connection is aborted? If not, you may be using up system resources for dead connections.
But the above is just guessing in the dark; I think we need more information to make more educated guesses :).
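To make that point concrete: setting both a connect and a read timeout, and letting the HTTP client close the socket when they fire, keeps a dead App2 from pinning App1's workers. A rough sketch, assuming App1 happens to be a Ruby app (the App2 URL is a placeholder):

require "net/http"
require "uri"

# Placeholder endpoint for App2. Tight timeouts plus the block form of
# Net::HTTP.start (which closes the socket when the block exits) mean a failed
# call releases its connection instead of holding resources.
APP2_URI = URI("http://app2.internal.example.com/service")

def call_app2(body)
  Net::HTTP.start(APP2_URI.host, APP2_URI.port,
                  open_timeout: 1,   # give up quickly if App2 won't accept connections
                  read_timeout: 2) do |http|
    http.post(APP2_URI.path, body, "Content-Type" => "application/json").body
  end
rescue Net::OpenTimeout, Net::ReadTimeout, Errno::ECONNREFUSED
  # Treat the call as failed and move on; App1 is not left waiting on App2.
  nil
end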

Does Heroku change dyno IP during runtime?

I am currently working on a Ruby/Heroku app that makes ~40 consecutive SOAP calls to a server, uploads a file to an FTP server, then sleeps 15 minutes and begins anew.
Strangely, everything worked fine yesterday (in the evening hours), both locally and on the dyno; now, since this morning, I seldom get past the 10th query - it always stops on
D, [2014-03-20T14:18:49] Debug -- : HTTPI POST request to www.XXXX.de (net_http)
with a Connection timed out.
Locally, via Foreman, everything works fine, so I think I can rule out the server refusing 40 queries within about two minutes.
I came to the conclusion that maybe the dyno IP is being changed during runtime; that would explain the timeout during the SOAP call. Do I have to build a new Savon client for every call?
Heroku Dynos are ephemeral application instances. They may come up/down at any time and be replaced by a new one, or have your application restarted.
So, dynos may change often, which will result in new IPs for your app servers. However, the IP is very unlikely to change while a dyno is up and running; the dyno will only be replaced by a new dyno with a different IP.
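Independent of the IP question, intermittent connection timeouts are easier to live with if the client has explicit timeouts and a small retry. A rough sketch, assuming Savon 2.x; the WSDL URL and the :get_data operation are placeholders:

require "savon"
require "net/http"   # so Net::OpenTimeout / Net::ReadTimeout are defined for the rescue

# Placeholder WSDL; explicit timeouts keep one hung socket from stalling the
# whole 40-call run.
client = Savon.client(
  wsdl: "https://www.example.de/service?wsdl",
  open_timeout: 10,   # seconds allowed to establish the connection
  read_timeout: 30    # seconds allowed to wait for the SOAP response
)

def call_with_retry(client, attempts = 3)
  client.call(:get_data)   # :get_data is a placeholder operation name
rescue Savon::Error, Net::OpenTimeout, Net::ReadTimeout, Errno::ETIMEDOUT
  # The exact exception raised for a timed-out connection depends on the HTTPI
  # adapter in use, so widen or narrow this rescue list as needed.
  attempts -= 1
  retry if attempts > 0
  raise
end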

Want to know which dyno at Heroku served a request

I have a Sinatra app deployed at Heroku, and I have scaled the web process to 3 dynos, so requests are being served by
web.1,
web.2, and
web.3 respectively.
I want to know, in Ruby, from within a controller action, which dyno is serving the current request, and then store this in the database. I did a bit of googling but did not find a satisfactory answer.
Any help would be greatly appreciated.
Thanks
There is really no way to know this: you don't get any HTTP headers from Heroku that specify which dyno is handling the request. The best you can do is have Heroku stream all your logs somewhere (a syslog drain) so that you can parse the log files and match request URIs to dynos.
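To illustrate the log-matching route: Heroku router lines carry both the request path and a dyno=web.N field, so a process reading the drain (or heroku logs --tail) can pair them up. A rough sketch; the input file name is a placeholder and the regexes assume the usual router key=value format:

# Extract (path, dyno) pairs from Heroku router log lines, e.g.
# 'heroku[router]: at=info method=GET path="/users/1" ... dyno=web.3 ...'
ROUTER_MARKER = "heroku[router]"

def path_and_dyno(line)
  return nil unless line.include?(ROUTER_MARKER)
  path = line[/path="?([^" ]+)"?/, 1]   # the path may or may not be quoted
  dyno = line[/dyno=(\S+)/, 1]
  [path, dyno] if path && dyno
end

File.foreach("router.log") do |line|   # "router.log" is a placeholder
  pair = path_and_dyno(line)
  puts pair.join(" -> ") if pair
end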
There's probably a very hacky way to do this on boot with a worker process.
You need to be able to monitor the output from heroku logs --tail; see https://github.com/dblock/heroku-commander for an example. The worker reads the logs.
The worker makes an HTTP request to the app, e.g. GET app/dyno?id=uuid1. The response is the MAC address of the dyno that responded, e.g. mac1.
The worker can see in the logs that uuid1 went to web.5, which responded with its mac. Bingo, the worker now knows.
PUT app/dyno?mac1=web.5&mac2=web.6, etc. Each dyno that receives this will compare its mac to one of the macs and respond true/false that it now knows who it is.
Repeat until the worker has reached all dynos and all dynos know.
Watch for dyno restarts periodically.
You have to wonder, though, why you need to know that you're "web.1". Why can't it be a unique UUID, like the MAC address of the machine?
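A rough sketch of the dyno-side half of that hack, using a boot-time UUID instead of a MAC address as the previous paragraph suggests (route names and parameter layout are illustrative, not a fixed protocol):

require "sinatra"
require "securerandom"

# Each dyno generates a unique boot_id at startup and later learns its own
# dyno name from the worker that correlates boot_ids with router log lines.
IDENTITY = { boot_id: SecureRandom.uuid, dyno_name: nil }

get "/dyno" do
  # Worker calls GET /dyno?id=<uuid>; the router log records which dyno
  # (dyno=web.N) served that id, and the body tells the worker our boot_id.
  IDENTITY[:boot_id]
end

put "/dyno" do
  # Worker calls PUT /dyno?<boot_id>=web.N for each pairing it has learned;
  # a dyno adopts the name only if its own boot_id is among the parameters.
  IDENTITY[:dyno_name] ||= params[IDENTITY[:boot_id]]
  IDENTITY[:dyno_name] ? "true" : "false"
end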
