I have a Heroku worker set up to do a long-running job that iterates over long periods. However, whenever I update and deploy other files in the repo, this worker restarts, which is annoying. Is there any way to avoid this?
No. This behaviour is part of Heroku's Automatic Dyno Restarting.
You can't work around this. Instead, you need to build all parts of your app to be able to function properly despite the fact that all dynos will restart at least once every 24 hours or so, whether or not you deploy updates in your repo.
Most significantly, you need to build support for Graceful Shutdown into all your processes (e.g. web process and worker processes).
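There is no setting to exempt a worker from these restarts; the practical approach is to trap the SIGTERM that Heroku sends before it stops the dyno and wind the job down cleanly. Below is a minimal sketch of that pattern in Go (used purely for illustration, since a later question here uses it; the same idea works in any language). The worker and doOneIteration names are placeholders for whatever your job actually does:

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"
)

// worker is a stand-in for your actual long-running job. It checks the stop
// channel between iterations so a restart never interrupts it mid-step.
func worker(stop <-chan struct{}) {
	for {
		select {
		case <-stop:
			return
		default:
			// doOneIteration() // hypothetical: one unit of the long-running job
			time.Sleep(time.Second)
		}
	}
}

func main() {
	// Heroku sends SIGTERM to every process in the dyno before stopping it,
	// then SIGKILL roughly 30 seconds later.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)

	stop := make(chan struct{})
	go worker(stop)

	<-sigs // block until Heroku asks us to shut down
	log.Println("SIGTERM received: stopping the worker at the next safe point")
	close(stop)
	// ...flush buffers, persist progress, close connections, all inside the window...
}
```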
Related
I have an application that is hosted on Heroku. The application has a process that runs on the server, which the user has the ability to start and stop. Once the user clicks 'Start', the process is supposed to stay running until the user presses 'Stop'. The app should allow the process to run continuously for extended periods of time (6 months or so).
I have deployed my app on a Heroku free dyno. While reading the Heroku documentation, I came across this page that states that Heroku Dynos are restarted automatically every 24 hours. Here is the relevant passage:
Dynos are also restarted (cycled) at least once per day to help maintain the health of applications running on Heroku. Any changes to the local filesystem will be deleted. The cycling happens once every 24 hours (plus up to 216 random minutes, to prevent every dyno for an application from restarting at the same time). Manual restarts (heroku ps:restart) and releases (deploys or changing config vars) will reset this 24 hour period. Cycling happens for all dynos, including one-off dynos, so dynos will run for a maximum of 24 hours + 216 minutes. If you have multiple dynos, they should cycle at different times based on the random 0 to 216 minutes difference. If you continually make changes to your application without a 24 hour gap, you won’t see cycling at all.
Does this mean that the user process that he/she has started will automatically be stopped when the dyno restarts? If yes, does it automatically resume the user process where it left off?
If not, I will have to find a different hosting solution since the process may need to be run 24x7x365.
Does this mean that the user process that he/she has started will automatically be stopped when the dyno restarts?
Yes, it does, just as it will when you change config variables, deploy updates, or add/remove add-ons.
If yes, does it automatically resume the user process where it left off?
No, it doesn't. Any save-and-resume behaviour has to be implemented in your application. When stopping processes, Heroku sends them a SIGTERM signal and gives them 30 seconds to save their work.
If not, I will have to find a different hosting solution since the process may need to be run 24x7x365.
I doubt that there is any hosting solution that will give you what you want. In a cloud environment, restarts happen all the time, if only because you update your application or apply bug fixes and security fixes. Every hosting provider or platform that promises 24x7x365 uptime will still restart and replace your instances from time to time.
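To make the save-and-resume part concrete, here is a rough sketch in Go (used only as an illustration): it saves a checkpoint when SIGTERM arrives and reloads it on the next start. The loadCheckpoint, saveCheckpoint, and processItem names are placeholders; in practice they would read and write a database or other durable store, since the dyno's local filesystem is wiped on restart.

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"
)

// Placeholders: persist progress somewhere durable (Postgres, Redis, S3, ...),
// never to the local filesystem, which is reset every time the dyno restarts.
func loadCheckpoint() int  { return 0 }
func saveCheckpoint(n int) { log.Println("checkpoint saved at item", n) }
func processItem(n int)    { time.Sleep(time.Second) } // one small, resumable unit of work

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)

	// Resume wherever the previous dyno left off.
	i := loadCheckpoint()
	for {
		select {
		case <-sigs:
			// Roughly 30 seconds remain before SIGKILL: save and exit cleanly.
			saveCheckpoint(i)
			return
		default:
			processItem(i)
			i++
		}
	}
}
```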
I am new to Go and I am using goroutines in my app on Heroku. They are long-running (up to 7 minutes) and cannot be interrupted.
I saw that the autoscaler sometimes kills the Heroku dyno that is running the goroutine. I need a way of running this routine independently of the dynos so I know it will not get shut down. I have read articles and still don't understand how to run a goroutine in a background worker. It is hard for me to believe I am the only one experiencing this.
My goroutines use my Redis database.
Could someone please point me to an example of how to set up a background worker on Heroku for Go and how to send my goroutine to that worker?
Thank you very much
I need a way of running this routine independently of the dynos so I know it will not get shut down.
If you don't want to run your worker code on a dyno then you'll need to use a different provider from Heroku, like Amazon AWS, Digital Ocean, Linode etc.
Having said that, you should design your workers, especially those that are mission critical, to be able to recover from a shutdown. Either to be able to continue where they left off or to start over. Heroku's dyno manager restarts the dynos at least once a day but I wouldn't be surprised if the other cloud providers also restart their virtual instances once in a while, probably not once a day but still... And even if you decide to deploy your workers on a physical machine that you control and never turn off, you cannot prevent things like hardware failure or power outage from happening.
If your workers need to perform some task until it's done, you need to make them aware of possible shutdowns and have them handle such scenarios gracefully. Do not ever rely on a machine, physical or virtual, to keep running while your worker is doing its job.
For example, if you're on Heroku, use a worker dyno and have your worker listen for the SIGTERM signal. After your worker receives such a signal...
The application processes have 30 seconds to shut down cleanly (ideally, they will do so more quickly than that). During this time they should stop accepting new requests or jobs and attempt to finish their current requests, or put jobs back on the queue for other worker processes to handle. If any processes remain after that time period, the dyno manager will terminate them forcefully with SIGKILL.
... continue reading here.
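For the Go worker in this question, a sketch of that "put jobs back on the queue" behaviour might look like the following. It assumes jobs arrive on a Redis list named jobs and uses the github.com/redis/go-redis/v9 client; both of those are assumptions, so adapt the key names and the process function to your actual setup:

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/redis/go-redis/v9" // assumption: the go-redis client
)

// process is a placeholder for the real (up to 7 minute) unit of work.
// It should itself watch for SIGTERM and return an error if it has to stop early.
func process(job string) error {
	log.Println("processing", job)
	return nil
}

func main() {
	// Assumption: REDIS_URL is provided by your Redis add-on.
	opts, err := redis.ParseURL(os.Getenv("REDIS_URL"))
	if err != nil {
		log.Fatal(err)
	}
	rdb := redis.NewClient(opts)
	ctx := context.Background()

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)

	for {
		select {
		case <-sigs:
			log.Println("SIGTERM received, exiting before SIGKILL")
			return
		default:
		}

		// Pop one job from the hypothetical "jobs" list; time out so we can re-check for SIGTERM.
		res, err := rdb.BLPop(ctx, 5*time.Second, "jobs").Result()
		if err != nil {
			continue // redis.Nil on timeout, or a transient error
		}
		job := res[1] // BLPop returns [key, value]

		if err := process(job); err != nil {
			// Couldn't finish (e.g. shutdown mid-job): push the job back for another worker.
			rdb.LPush(ctx, "jobs", job)
		}
	}
}
```

(The import list also needs "context" for the ctx value; the key point is the SIGTERM check plus re-queueing unfinished work.)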
But keep in mind, as I mentioned earlier, if there is an outage and Heroku goes down, which is something that happens from time to time, your worker won't even have those 30 seconds to clean up.
I have two web dynos in my Heroku app, and at times I get an automatic dyno restart (as per Heroku policy). Is the function that was going on during the restart automatically restored in the new restarted dyno? If not, is there a way I can control this restart?
Is the function that was going on during the restart automatically restored in the new restarted dyno?
No.
If not, is there a way I can control this restart?
No.
What you can do is trap the SIGTERM signal that is sent to your process 30 seconds before it is SIGKILLed. This gives you time to finish the current computation, stop taking web requests, do cleanup, etc. More details on the process are in the Heroku Dev Center.
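For a web dyno specifically, Go's standard library already supports this pattern: trap SIGTERM and call http.Server.Shutdown, which stops accepting new connections and lets in-flight requests finish. A rough sketch (Go used only for illustration; the port comes from the PORT environment variable Heroku sets for web dynos):

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	port := os.Getenv("PORT") // Heroku injects PORT for web dynos
	if port == "" {
		port = "8080" // fallback for local runs
	}
	srv := &http.Server{Addr: ":" + port}

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Block until Heroku sends the SIGTERM that precedes a restart or deploy.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)
	<-sigs

	// Stop accepting new requests and let in-flight ones finish,
	// staying well inside the window before SIGKILL.
	ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Println("forced shutdown:", err)
	}
}
```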
If I update an application running on Heroku using git push and this application is running on multiple dynos - how is the upgrade process run by Heroku?
All dynos at the same time?
One after another?
...?
In other words: will there be downtime for my "cluster", or will there be a small time frame where different versions of my app are running in parallel, or ...?
Well, I can't speak to Heroku's internal workings, but what I have experienced is:
The code push completes.
The code is compiled (the slug is compiled).
After that, all dynos get the latest code and are restarted (a restart takes up to 30 seconds or so, and during this time all requests get queued).
So there should be no downtime, since during the restart process all requests are queued, and I don't think that multiple versions of your code will be running after the deployment.
Everyone says there's 'no downtime' when updating a Heroku app, but for your app this may not be true.
I've recently worked on a reasonably sized Rails app that takes at least 25 seconds to start, and often fails to start inside the 30 seconds that Heroku allows before returning errors to your clients.
During this entire time, your users are waiting for something to happen. 30 seconds is a long time, and they may not be patient enough to wait.
Someone once told me that if you have more than one dyno, they are restarted individually so that you see no downtime. This is not true: Heroku stops all dynos and then starts all dynos.
At no time will there be two versions of your app running on Heroku.
Is there any potential downtime when I push a commit to a Clojure/Java app running on Heroku?
I am guessing not - but can't find out for sure.
Thanks.
When you push to Heroku, you invoke the slug compiler, which does all the heavy lifting needed to turn your application into a self-contained archive. That can take a little while, as you see whenever you run git push. However, during this time, your application is running normally.
When your slug finishes compiling, Heroku then pushes it out to the dyno grid. This causes existing web dynos to stop and causes new ones to start. Your application will be unresponsive between the time that the old dynos stop and the new ones begin serving requests -- probably only a few seconds. During this interval, Heroku's routing layer will queue incoming requests.
TL;DR: users might notice a pause (but not an error!) as your application is updated. You can simulate this at any time by running heroku restart.