Does maintenance mode turn off the scheduler? If not, is there a way to temporarily disable the scheduler without manually deleting every item?
I recently ran into the same issue of not being able to pause or disable the Heroku Scheduler. I needed to do a DB upgrade, so I couldn't use the delayed-job suggestion. Not finding a better solution, I wrote a little disable rake task:
# Prerequisite task: bails out of any dependent job while the
# PAUSE_SCHEDULER config var is set to 'true'.
task :pause_scheduler => :environment do
  if ENV['PAUSE_SCHEDULER'] == 'true'
    puts 'Scheduler Paused'
    exit
  end
end
Then I just have all of my Heroku Scheduler cron jobs depend on :pause_scheduler instead of :environment, and I set the PAUSE_SCHEDULER environment variable to 'true' when I want to turn them off, as in the sketch below.
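For illustration, a scheduled job wired up this way might look like the following (the :send_daily_digest task and DigestMailer are hypothetical placeholders, not part of my actual setup):

# Hypothetical scheduled job; because it depends on :pause_scheduler
# rather than :environment, it exits early while the pause flag is set.
task :send_daily_digest => :pause_scheduler do
  DigestMailer.deliver_all  # hypothetical mailer call
end

Pausing every job is then a single config change: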
heroku config:set PAUSE_SCHEDULER=true
Seems to work pretty well. Much better than deleting and recreating all my scheduled jobs at any rate.
Just remember to turn it back on after you're done:
heroku config:set PAUSE_SCHEDULER=false
Hope that helps!
Nope. The only thing maintenance mode does is tell the router to refuse to forward new web requests. Everything else still runs, including one-off dynos (which the scheduler uses): https://devcenter.heroku.com/articles/maintenance-mode
One way to work around this is to have your scheduled tasks simply enqueue a job in a background processing system (like delayed_job or resque). That way, you can scale the worker dynos down to 0 during downtime and nothing serious should happen, since the database and external services are no longer being written to.
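A minimal sketch of that pattern, assuming Resque and a hypothetical NightlyReport job class; the scheduled task only enqueues, and the real work happens on a worker dyno you can scale to zero:

# Run by the scheduler; it only pushes a job onto the queue and exits.
task :nightly_report => :environment do
  Resque.enqueue(NightlyReport)  # NightlyReport is a hypothetical job class
end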
Related
I have a Heroku worker set up to do a long-running job which iterates over long periods. However, whenever I update and deploy other files in the repo, this worker restarts, which is annoying. Is there any way to avoid this?
No. This behaviour is part of Heroku's Automatic Dyno Restarting.
You can't work around this. Instead, you need to build all parts of your app to function properly despite the fact that all dynos will restart at least once every 24 hours or so, whether or not you deploy updates to your repo.
Most significantly, you need to build support for Graceful Shutdown into all your processes (e.g. web process and worker processes).
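A minimal sketch of what graceful shutdown can look like for a long-running worker loop; process_next_batch and save_checkpoint are hypothetical stand-ins for your own unit of work and persistence step:

# Heroku sends SIGTERM before restarting a dyno (SIGKILL follows ~30s
# later), so finish the current batch, persist progress, and exit.
shutdown_requested = false
Signal.trap('TERM') { shutdown_requested = true }

until shutdown_requested
  process_next_batch  # hypothetical unit of work
end
save_checkpoint       # hypothetical; lets the replacement dyno resume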
If I have an app on Heroku that consists of one worker and one or no web dynos, will it run? I'm unsure whether absent or idling web dynos will cause the worker dyno not to run.
Heroku doesn't just run web dynos; in fact, it makes no assumptions at all about the processes you're running. There's absolutely nothing wrong with launching a single worker process.
Deploying single cron-like tasks to Heroku is actually a common scenario for me; I've written about it here: http://blog.y3xz.com/blog/2012/11/16/deploying-periodical-tasks-on-heroku/
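For reference, a worker-only app just needs a Procfile without a web entry; a minimal sketch, where worker.rb is a hypothetical entry point:

worker: bundle exec ruby worker.rb

Scale it up with heroku ps:scale worker=1 and it will run happily with zero web dynos.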
If you are looking for cron-like scheduling for simple jobs (like I am), there is now another alternative: Heroku Scheduler. It is easy to configure in the dashboard.
Advantages:
No need to choose and learn a new scheduler library; you can configure it in seconds.
It works the same way across platforms: Python, Ruby, etc.
It saves dyno hours for free-plan users, since only the actual working time counts. Some scheduler libraries (like Rufus Scheduler) keep a process running between launches (so that they don't have to rely on cron).
Disadvantages:
Limited options: you can only choose among "Daily"/"Hourly"/"Every 10 minutes".
Conclusion: Best for basic use.
I have several processes which currently run as rake tasks. Can I somehow use Sidekiq to execute a process in a continuous loop? Is that a best practice with Sidekiq?
These processes, though they run in the background in a continuous loop in their respective rake tasks now, occasionally fail. Then I have to restart the rake task.
I am trying a couple of options, with help from the SO community. One is to figure out how to monitor the rake tasks with monit. But that means each process will have to have its own environment, adding to server load. Since I'm running in a virtualized environment, I want to eliminate that wherever possible.
The other option is just to leverage the Sidekiq option I already have. I use Sidekiq now for background processing, but it's always just one-offs. Is there some way I can have a continuous process in Sidekiq? And also be notified of failures and have the processes restart automatically?
The answer, per Sidekiq author Mike Perham, is to use a cron job for scheduled tasks like this: create a rake task that submits the job to Sidekiq to run in the background, then create a cron job to schedule it.
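A minimal sketch of that split, assuming a hypothetical SyncWorker that wraps the work the rake task used to loop over:

# The worker does the actual work; Sidekiq retries it on failure.
class SyncWorker
  include Sidekiq::Worker

  def perform
    # ... the work previously done inside the rake task loop ...
  end
end

# The rake task (run by cron) only enqueues the job and exits.
task :enqueue_sync => :environment do
  SyncWorker.perform_async
end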
I don't know why you went for Sidekiq; is this project-specific? I previously faced the same problem, but I migrated to delayed_job and it satisfied my needs. If the Active Record objects are transactional, use delayed_job; otherwise go for Resque, which is also a nice one.
I have a Rails app running on heroku with Rufus Scheduler added on.
A 'once a day' task in the scheduler is running more often than once a day.
My guess would be something to do with the heroku app running on different dynos during the day, but I'm at a loss on how to confirm/fix the problem.
Has anyone else seen this/know of a solution?
Edit: I couldn't resolve the problem with the gem and have moved my app over to the Heroku Scheduler add-on, which does not experience this problem.
The Heroku Scheduler isn't guaranteed; it's only a simple scheduling system designed to fill a gap. It's nothing to do with your application moving between dynos, as it's a separate management system spinning up one-off processes.
If timeliness is essential to you, take a look at clockwork, which will let you configure all sorts of things, but also gives you a bit more reliability (at the expense of having a clock process running).
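A minimal sketch of a clock.rb for the clockwork gem; ReportWorker is a hypothetical background job, since the clock process should only enqueue work, not do it inline:

require 'clockwork'

module Clockwork
  # Runs once a day at 03:00; clockwork keeps a dedicated process alive.
  every(1.day, 'daily.report', at: '03:00') do
    ReportWorker.perform_async  # hypothetical worker; enqueue, don't inline
  end
end

You'd run it as its own process, e.g. clock: bundle exec clockwork clock.rb in the Procfile.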
If this won't do, simply rework your job so that it doesn't matter how often it runs.
I am wondering if there is a way to monitor these automatically. Right now, in our production/QA/dev environments, we have a bunch of services running that are critical to the application. We also have automatic ETLs running on Windows Task Scheduler at a set time of day. Currently, I have to log into each server and check that all the services are running fine, check event logs for errors, check Task Scheduler to see if the ETLs ran well, and so on, all manually. I am wondering if there is a tool out there that will do the monitoring for me and send emails only when something needs attention (like ETLs failing to run, a service stopping for whatever reason, or errors in the event log). Thanks for the help.
Paessler PRTG Network Monitor can do all that. We have very good experience with it.
http://www.paessler.com/prtg/features
Nagios is the best tool for monitoring. It checks server status as well as the services defined on it, and if any service or the system itself goes down, it sends mail to the specified address.
Refer to: http://nagios.org/
Thanks for the above information. I looked at the above options, but they have a price. What I did is an inexpensive way to address my concerns.
For my Windows Task Scheduler jobs that run every night, I installed this tool/service from CodePlex, which is working great.
http://motash.codeplex.com/documentation#CommentsAnchor
For Windows services, I am just setting the "Recovery" tab in each service's properties with actions to take when it fails (like restarting the service, rebooting, or running a program, which could send a notification email).
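For what it's worth, the same recovery actions can be scripted with the built-in sc.exe tool instead of clicking through the GUI; a sketch, where MyService and notify.cmd are placeholders:

rem Restart 60s after each of the first two failures, then run a
rem notification program on the third; counters reset after a day.
sc failure MyService reset= 86400 actions= restart/60000/restart/60000/run/120000 command= "C:\scripts\notify.cmd"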
I built a simple tool (https://cronitor.io) for monitoring periodic/scheduled tasks. The name is a play on "cron" from the Unix world, but it is system/task agnostic. All you have to do is make an HTTP request to a unique tracking URL whenever your job runs. If your job doesn't check in according to the rules you define, it will send you an email/SMS message.
It also allows you to track the duration of your jobs by making calls at the beginning and end of your task. This can be really useful for long-running jobs, since you can be alerted if they start taking too long to run. For example, I once had a backup task that was scheduled every hour. About six months after I set it up, it started taking longer than an hour to run!
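The check-in pattern itself is simple enough to sketch; the tracking URL below is a placeholder, not a real Cronitor endpoint, and run_hourly_backup stands in for the actual job:

require 'net/http'

# Ping a monitoring URL at the start and end of the job; a missing or
# late check-in is what triggers the alert.
ping = ->(event) { Net::HTTP.get(URI("https://example.com/ping/abc123/#{event}")) }

ping.call('run')       # mark the start
run_hourly_backup      # hypothetical: the actual work
ping.call('complete')  # mark the end; the gap between pings is the duration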
There is also https://eyewitness.io, which is for monitoring server cron tasks, queues, and websites. It makes sure each of your cron jobs runs when it is supposed to, and alerts you if it fails to run.