Extend Heroku's 30-second limit - heroku

I have a Sinatra app that looks something like this:
get '/generate' do
  generate_result(params) # method that takes several minutes to complete
end
Unfortunately, generate_result takes several minutes to run, but Heroku limits each request to 30 seconds. I'm on a free Heroku account, so I'm looking for a solution that doesn't require buying a worker dyno.
I tried 'rack-timeout' gem, but the problem still appears on Heroku.

In order to support web sockets and long polling, Heroku provides a way around the timeout. Upon the initial request, you have 30 seconds to respond to the client with at least one byte, after which each subsequent byte will keep the connection open for at least 55 seconds.
https://devcenter.heroku.com/articles/request-timeout#long-polling-and-streaming-responses
As a hack, you can thus repeatedly send "heartbeat" messages over the connection in order to keep it alive. The remaining question is whether it's worth it: usually you'd be better off doing background processing.
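As a rough illustration, the heartbeat trick can be sketched in plain Ruby: run the slow job in a background thread and write a filler byte to the output until the job finishes. Inside Sinatra you would do this within a `stream do |out| ... end` block; the `stream_with_heartbeat` helper and its interval are assumptions for the sketch, not Heroku or Sinatra API.

```ruby
# Sketch: keep a connection alive while a slow job runs in the background.
# In Sinatra, `out` would be the object yielded by `stream`; here it can be
# anything that responds to <<. The interval must stay under Heroku's
# 55-second rolling window.
def stream_with_heartbeat(out, interval: 10)
  worker = Thread.new { yield }          # run the slow job off the request path
  out << " " until worker.join(interval) # emit a heartbeat byte until it finishes
  out << worker.value.to_s               # then send the real result
end

buf = +""
stream_with_heartbeat(buf, interval: 0.01) { sleep 0.05; "result" }
```

The `worker.join(interval)` call returns nil on timeout and the thread itself once it finishes, so the loop emits exactly as many heartbeats as needed.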

Related

Heroku Webhook Fails Sometimes

I'm creating a chatbot with Dialogflow and I have a webhook hosted on Heroku (so that I can use Python scripts). The webhook works fine most of the time. However, when I haven't used it in a while, it always fails on the first use with a request timeout. Has anyone else come across this issue? Is there a way to wake up the webserver before running the script I have written?
Heroku's free dynos will sleep after 30 minutes of inactivity.
Preventing them from sleeping is easy. You need to use any of their paid plans.
See https://www.heroku.com/pricing
Once you use a Hobby dyno, your app will no longer sleep and you shouldn't be getting request timeouts.
Alternatively, you can benchmark what's taking so long to boot your app. With a faster boot time, the first request would be slow but wouldn't time out.
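If you stay on the free tier, a crude workaround is to send a throwaway warm-up request before the real one, so the cold start happens off the critical path. A minimal sketch; the `warm_up` helper, retry count, and delay are assumptions, and the fetch step is injectable so it can be exercised without a network:

```ruby
require 'net/http'
require 'uri'

# Sketch: issue a cheap GET to wake a sleeping dyno before the real request.
# `url` would be your Heroku app's root; the retry count is arbitrary.
def warm_up(url, attempts: 3, delay: 1,
            fetch: ->(u) { Net::HTTP.get_response(URI(u)).is_a?(Net::HTTPSuccess) })
  attempts.times do
    return true if fetch.call(url) # dyno is awake and answering
    sleep delay                    # give it time to finish booting
  end
  false
end
```

Calling something like `warm_up('https://your-app.herokuapp.com/')` before invoking the webhook makes the warm-up request absorb the cold-start delay instead of the real one.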
Heroku times out requests after 30 seconds.

Is it dangerous to run git push heroku master too many times in a short period

I have been testing some viewport issues for mobile and probably ran
git push heroku master
about 50 times in the last 3 hours. I am now seeing from the Google speed tests that:
Reduce server response time: In our test, your server responded in 8.9 seconds. There are many factors that can slow down your server response time. Please read our recommendations to learn how you can monitor and measure where your server is spending the most time.
This wasn't popping up earlier this morning and was under .5 seconds. Did I damage one of my dynos on the Heroku servers? My site isn't really getting any traffic yet so I haven't been doing any stage testing.
What is the best way to test production?
I was reading through this but was wondering if there is a better way to test production quickly.
https://devcenter.heroku.com/articles/multiple-environments
Thanks,
Jeff
There's nothing wrong with pushing many times in a row, but every time you push, your dynos will cycle. This takes something like 5 to 15 seconds depending on the size of your slug.
Generally this means that the first query sent to your app at the moment your dynos are cycling might hang for about that long. If Google checked your server's speed at that time, then that explains the response time. However, there shouldn't be any lasting effects after you finish pushing repeatedly.
If I recall correctly, there is a Heroku Labs option to eliminate this pause by cycling dynos in stages, taking down some of your dynos and cycling them while the others are still up. I don't recommend using it, though: it makes code pushes very unpredictable and can result in two versions of your app being live at the same time.
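If the brief hang during a deploy matters to a client, one option is to retry a call that exceeds a generous deadline once, on the assumption that the second attempt lands on a booted dyno. A sketch using Ruby's stdlib Timeout; the `with_retry` helper and its limits are illustrative, not a Heroku API:

```ruby
require 'timeout'

# Sketch: tolerate the short hang while dynos cycle after a push by
# retrying a slow call once. The yielded block stands in for your HTTP call.
def with_retry(limit: 20, retries: 1)
  attempts = 0
  begin
    attempts += 1
    Timeout.timeout(limit) { yield } # abort the attempt if it exceeds the deadline
  rescue Timeout::Error
    retry if attempts <= retries     # one more try on a (hopefully) booted dyno
    raise
  end
end
```

This is only a client-side mitigation; it doesn't make the cycling itself faster.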

Performance with several Web processes running

I'm in development for a Rails app and I am confused about what I'm seeing. I am new to this, so I may be misinterpreting the information. When I run one web process I get good results, but when I increase the number of web processes I don't get the results I expect. I am trying to calculate how many I will need in production so I can estimate my costs.
Based on New Relic, I have response times of 40-60 ms per request at 3,000 requests per minute (about 50 requests per second) on one Heroku dyno. Things are working just fine and the web processes are not even being pushed. Some responses are timing out at 1 second on Blitz, but I expect that's because I'm pushing as much as I can through one dyno.
Now I try cranking up the dynos, first to 10, then to 50. I rush with Blitz again and get the same results as above. With 50 dynos running, I blitz the website with 250 concurrent users and get response times of 2 or 3 seconds. New Relic is reading the same traffic as with one dyno, 3,000 requests per minute with 60 ms response times. Blitz is seeing 3-second response times and getting a max of 50 to 60 rps.
In the logs I can see the activity split nicely between the different web processes. I am just testing against the home page and not accessing any external services or database calls. I'm not using caching.
I don't understand why a single web process can easily handle up to 60 requests per second (per Blitz), but increasing to 10 or even 50 web processes doesn't give any additional performance. I did the Blitz rushes several times, ramping up concurrent users.
Anyone have any ideas about what's happening?
I ended up removing the app server, Unicorn, from my app and adding it back in again. Now I'm getting 225 hits per second with 250 concurrent users on one dyno. Running multiple dynos seems to work just fine as well. It must have been an error in setting up the app server, though I never did track down the exact cause.
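For reference, a working Unicorn setup on Heroku usually looks something like the following. This is a sketch based on the commonly documented pattern; the worker count and timeout are illustrative values, not tuned recommendations:

```ruby
# config/unicorn.rb -- illustrative values only
worker_processes Integer(ENV['WEB_CONCURRENCY'] || 3) # worker processes per dyno
timeout 15        # kill stuck workers well before Heroku's 30 s request limit
preload_app true  # load the app once in the master, then fork workers

before_fork do |server, worker|
  # shared connections must be closed before forking (e.g. ActiveRecord)
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # each forked worker re-establishes its own connections
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end
```

A misconfigured or missing `worker_processes` would produce exactly the symptom described: one request at a time per dyno, so adding dynos adds little until concurrency within each dyno works.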

How many dynos are required to host a static website on Heroku?

I want to host a static website on Heroku, but I'm not sure how many dynos to start off with.
It says on this page: https://devcenter.heroku.com/articles/dyno-requests that the number of requests a dyno can serve depends on the language and framework used. But I have also read somewhere that one dyno only handles one request at a time.
A little confused here, should 1 web dyno be enough to host a static website with very small traffic (<1000 views/month, <10/hour)? And how would you go about estimating additional use of dynos as traffic starts to increase?
Hope I worded my question correctly. Would really appreciate your input, thanks in advance!
A little miffed since I had a perfectly valid answer deleted but here's another attempt.
Heroku dynos are single-threaded, so they can deal with a single request at a time. If you had a dynamic page (PHP, Ruby, etc.), you would look at how long the page takes to respond at the server; say it took 250 ms, then a single dyno could deal with 4 requests a second. Adding more dynos increases concurrency, NOT performance. So if you had 2 dynos, in this scenario you'd be able to deal with 8 requests per second.
Since you're only talking about static pages, their response time should be much faster than this. The best way to identify whether you need more is to look at your Heroku log output and see if you have sustained levels of the 'queue' value; this means the dynos are unable to keep up and requests are being queued for processing.
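The arithmetic above can be made explicit. A toy calculation, assuming the single-request-at-a-time model described (the helper name is mine):

```ruby
# Toy model from the answer: one single-threaded dyno serving requests
# that each take `response_time_ms` to complete, scaled by dyno count.
def requests_per_second(response_time_ms, dynos: 1)
  (1000.0 / response_time_ms) * dynos
end

requests_per_second(250)           # => 4.0 (one dyno, 250 ms per request)
requests_per_second(250, dynos: 2) # => 8.0 (more dynos add concurrency, not speed)
```

Note that per-request latency stays at 250 ms in both cases; only throughput changes.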
Since most HTTP 1.1 clients will create two TCP connections to the webserver when requesting resources, I have a hunch you'll see better performance for single clients if you start two dynos, so the client's pipelined resource requests can be served in parallel as well.
You'll have to decide if it is worth the extra money for the (potentially slight) improved performance of a single client.
If you ever anticipate having multiple clients requesting information at once, then you'll probably want more than two dynos, just to make sure at least one is readily available for additional clients.
In this situation, you can stay with one dyno: the first one is free, while the second puts you over the monthly minimum and starts to incur costs.
But you should also realize that with one dyno on Heroku, the app will go to sleep if it hasn't been accessed recently (I think this is around 30 minutes). In that case, it can take 5-10 seconds to wake up again, which can give your users a very slow initial experience.
There are web services that will ping your site, testing its response and keeping it awake. http://www.wekkars.com/ for example.

Is there any reason to not reduce Ping Maximum Response Time in IIS 7

IIS includes a worker process health check "ping" function that pings worker processes every 90 seconds by default and recycles them if they don't respond. I have an application that is chronically putting app pools into a bad state, and I'm curious whether there is any reason not to lower this time to force IIS to recycle a failed worker process quicker. Searching the web, all I can find is people increasing the time to allow for debugging. It seems like 90 seconds is far too high for a web application, but perhaps I'm missing something.
Well, the obvious answer is that in some situations requests would take longer than 90 seconds for the worker process to return. If you can't imagine a situation where this would be appropriate, then feel free to lower it.
I wouldn't recommend going much lower than 30 seconds; I can see situations where you get into recycle loops. However, you can do testing and see what makes sense in your situation. I would recommend Siege for load testing to see how your application behaves.
