Performing goroutines in the background - Heroku

I am new to Go and I am using goroutines in my app on Heroku. They are long-running (up to 7 minutes) and cannot be interrupted.
I have seen that the autoscaler sometimes kills the Heroku dyno which is running the routine. I need a way of running this routine independently from the dynos so I know that it will not get shut down. I have read articles and still don't understand how to run a goroutine in a background worker. It is hard for me to believe I am the only one experiencing this.
My goroutines use my Redis database.
Could someone please point me to an example of how to set up a background worker on Heroku for Go, and how to send my goroutine to that worker?
Thank you very much

I need a way of running this routine independently from the dynos so I
know that it will not get shutdown.
If you don't want to run your worker code on a dyno then you'll need to use a provider other than Heroku, such as Amazon AWS, DigitalOcean, Linode, etc.
Having said that, you should design your workers, especially those that are mission critical, to be able to recover from a shutdown, either by continuing where they left off or by starting over. Heroku's dyno manager restarts the dynos at least once a day, but I wouldn't be surprised if the other cloud providers also restart their virtual instances once in a while, probably not once a day but still... And even if you decide to deploy your workers on a physical machine that you control and never turn off, you cannot prevent things like hardware failures or power outages from happening.
If your workers need to perform some task until it is done, you need to make them aware of possible shutdowns and have them handle such scenarios gracefully. Do not ever rely on a machine, physical or virtual, to keep running while your worker is doing its job.
For example, if you're on Heroku, use a worker dyno and make your worker listen for the SIGTERM signal. After your worker receives such a signal...
The application processes have 30 seconds to shut down cleanly
(ideally, they will do so more quickly than that). During this time
they should stop accepting new requests or jobs and attempt to finish
their current requests, or put jobs back on the queue for other worker
processes to handle. If any processes remain after that time period,
the dyno manager will terminate them forcefully with SIGKILL.
... continue reading here.
But keep in mind, as I mentioned earlier, if there is an outage and Heroku goes down, which is something that happens from time to time, your worker won't even have those 30 seconds to clean up.
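For illustration, here is a minimal sketch (in Go, since that is what the question uses) of a worker that traps SIGTERM, checks for cancellation between units of work, and exits cleanly within that 30-second window. The runJob function and its step loop are hypothetical placeholders for your own logic:

package main

import (
    "context"
    "log"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    // Cancel the context when Heroku sends SIGTERM (or on Ctrl-C locally).
    ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
    defer stop()

    for {
        select {
        case <-ctx.Done():
            log.Println("shutdown requested, exiting cleanly")
            return
        default:
            if err := runJob(ctx); err != nil {
                log.Printf("job interrupted: %v", err)
            }
        }
    }
}

// runJob does its work in small steps and checks for cancellation between
// steps, so SIGTERM never cuts it off mid-step.
func runJob(ctx context.Context) error {
    for step := 0; step < 100; step++ {
        select {
        case <-ctx.Done():
            // Persist progress here (e.g. back to Redis) so another dyno can resume.
            return ctx.Err()
        default:
            time.Sleep(2 * time.Second) // stand-in for one unit of real work
        }
    }
    return nil
}

Run something like this as a dedicated worker process (a worker: entry in your Procfile) rather than as a goroutine inside your web process, so that web dyno restarts don't take the job down with them.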

Related

What's the recommended way of dealing with Heroku's dyno restarts on a worker?

I'm familiar with Heroku's policy on dyno restarts (once every 24 hours, if underlying hardware fails, etc).
I've also looked elsewhere on Stack Overflow and found questions like Controlling Heroku's random dyno restarts.
Our apps are good from a web perspective: sessions are handled via an external database and multiple dynos balance the load perfectly. Restarts aren't an issue there. The issue is workers. For example, our worker receives a message and begins processing a job. It's 99% done, awaiting some final asynchronous request to return, when it receives SIGTERM. Before it can even clean up the job, the process is killed. The code can handle local cleanup for a job that needs to be restarted, but external services can't really be part of the transaction.
For example, if a report was built and an email was sent, but a third async operation didn't complete before SIGTERM, I can't really roll back that transaction. For hardware failures or other rare events, it's understandable that a multi-step transaction could get truncated, but with Heroku's policy it seems that I need to assume this will happen at least once a day. Can anyone help me get a better grip on this problem?
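External side effects can't be made transactional, but a common mitigation is to make each step idempotent and record completed steps in durable storage, so a job that is killed after SIGTERM can simply be re-run and skip whatever already happened. Below is a rough sketch of that idea in Go (the language of the main question above) using Redis; the go-redis client, the jobID value and the placeholder step functions are all assumptions for illustration, not a prescribed implementation:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/redis/go-redis/v9"
)

// runStep executes fn only if a previous run has not already completed this
// step for the given job, then records the step as done.
func runStep(ctx context.Context, rdb *redis.Client, jobID, step string, fn func() error) error {
    key := fmt.Sprintf("job:%s:done", jobID)

    done, err := rdb.SIsMember(ctx, key, step).Result()
    if err != nil {
        return err
    }
    if done {
        log.Printf("step %q already done, skipping", step)
        return nil
    }

    if err := fn(); err != nil {
        return err
    }

    // Mark the step as done only after it has succeeded.
    return rdb.SAdd(ctx, key, step).Err()
}

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    steps := []struct {
        name string
        fn   func() error
    }{
        {"build-report", func() error { return nil }}, // placeholder work
        {"send-email", func() error { return nil }},
    }

    for _, s := range steps {
        if err := runStep(ctx, rdb, "job-42", s.name, s.fn); err != nil {
            log.Fatalf("step %q failed: %v", s.name, err)
        }
    }
}

With this pattern, re-queueing the job after a restart doesn't rebuild the report or resend the email; only the step that never completed runs again.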

How to keep only the worker alive on Heroku free tier from within the app?

I'm testing an app with a worker and a web dyno on the Heroku free tier, and I'd like to keep the worker alive so it can execute background tasks while letting the web dyno idle. By default they both go idle in 30 minutes, even if I have things queued on the worker.
I understand there are ways to keep the web dyno alive (and with that the worker as well), and there are ways to keep the web dyno alive while scaling down the worker. However, I need the worker alive and the web dyno idle.
I tried running a recurring job on the worker which would
Restart the dyno.
Scale the dyno down and then back up.
Both approaches worked (as in they restarted and scaled the dyno correctly), but the worker dyno would still idle after 30 minutes (as if it were dependent on the web dyno). Edit: yep, that's pretty much the case, as explained here: https://devcenter.heroku.com/articles/free-dyno-hours#dyno-sleeping
I could do this from the outside, but it seems I'd have to constantly check the state, since a new restart doesn't seem to give me a 30-minute head start. I'd also have to expose the API key, which I'd like to avoid.
If I've understood you correctly, you're trying to stop the web dyno and leave the worker dyno alive.
You can do that from the Resources tab: in the 'web' section, press the pencil icon, toggle the dyno off and press 'Confirm'.
As a workaround I currently remove the web dyno and explicitly enable it when I need it. As explained here:
Worker-only Free dynos do not sleep since they do not respond to web
requests.
My workaround was to just create two apps that deploy automatically from the same repository. Then, all you would need to do is enable the worker dyno for one and the web dyno for the other.

Heroku: Prevent worker process from restarting?

I have a Heroku worker set up to do a long-running job which iterates over long periods. However, whenever I update and deploy other files in the repo, this worker restarts, which is annoying. Is there any way to avoid this?
No. This behaviour is part of Heroku's Automatic Dyno Restarting.
You can't work around this. Instead, you need to build all parts of your app to be able to function properly despite the fact that all dynos will restart at least once every 24 hours or so, whether or not you deploy updates in your repo.
Most significantly, you need to build support for Graceful Shutdown into all your processes (e.g. web process and worker processes).

Heroku Scheduler for recurring tasks and Delayed Jobs for asynchronous tasks, all using one web dyno

Very similar question to Is it feasible to run multiple processes on a Heroku dyno?, or Running Heroku background tasks with only 1 web dyno and 0 worker dynos, except I'm talking about a Ruby on Rails app.
Context:
I understand that it's encouraged to separate worker and web dynos... but I'm still testing and don't want to pay the expense. Especially because with my app, all the web requests pretty much happen either in the AM or in the PM, and during the whole middle of the day (and also middle of the night), literally nothing is happening.
I'd like the web dyno to run two types of background processing on the "downtime":
A recurring, long-running task every day (mailings)
An asynchronous, long-running task that is triggered when a user performs a certain action (it's a mailer)
I've done quite a bit of reading on this, but this is my first time doing anything asynchronous, so I wanted to ask the community a couple of questions just to ensure what I'm trying to do is feasible.
Questions
How do I do activity #1 for free?
To put it bluntly... considering my context above, if I use Heroku's Scheduler add-on, it runs a one-off dyno which I'll be charged for, since I use New Relic to constantly ping my web dyno so it never actually sleeps, meaning my one web dyno is my free dyno. Is there another way of doing this with the one web dyno that, in the middle of the night, won't be processing any requests? Alternatively, is there a way to tell New Relic to stop pinging at certain times, which would then let me spin up a one-off dyno while still staying within my free dyno hours?
For activity #2, I'm thinking of using Delayed Job, but how do I tell Delayed Job to wait until the end of user 1's session and then run the mailer for user 1, but pause again if user 2 sends a request, and then, when user 2 is finished, pick up where it left off on user 1's mailer and then do user 2's mailer... and so forth? I think the root of the challenge here is that, from what I've read, Delayed Job needs to be started with a script. But I'm not going to be at my computer starting a script all the time. How do I make the start (and the queue, as illustrated in the question) something that happens automatically?
I would love even just directional pointers on what methods, what considerations, etc.
I'm going to check out a nifty gem https://github.com/brandonhilkert/sucker_punch to do this. According to the author, it was written specifically to use Heroku's single dyno for hobby websites that have no need to spin up another dyno. It basically creates another thread.
FYI, there is also an add-on gem that allows sucker_punch to do recurring tasks: https://github.com/facto/fist_of_fury

Heroku suitable for app based on long running processes?

I have an app which requires long-running processes, typically over 2 hours (recording streaming media). Based on Heroku's website, my worker server running these processes will be restarted randomly, at least once per day.
Is there any way to control/avoid these restarts, so as not to interrupt my long-running processes?
Do other paas providers avoid this issue?
I don't know how to control/avoid these restarts. I was also going through their documentation, and they clearly state that "Dynos are also cycled at least once per day, in addition to being restarted as needed for the overall health of the system and your app."
I think dyno restarts should only take place when the system behaves unexpectedly, when dynos are found in a crashed state, or once a week or month to clear cache memory.
You can try App42 PaaS, which monitors your apps continuously to make sure that they are up and running. If any container is found in a crashed state, the Health Monitor tries to bring it back to a working state; if it is unable to, that particular container is deleted and replaced with a new one.
Disclaimer: I work for App42 PaaS.
