Recovering cleanly from Resque::TermException or SIGTERM on Heroku

When we restart or deploy we get a number of Resque jobs in the failed queue with either Resque::TermException (SIGTERM) or Resque::DirtyExit.
We're using the new TERM_CHILD=1 RESQUE_TERM_TIMEOUT=10 settings in our Procfile, so our worker line looks like:
worker: TERM_CHILD=1 RESQUE_TERM_TIMEOUT=10 bundle exec rake environment resque:work QUEUE=critical,high,low
We're also using resque-retry, which I thought might auto-retry on these two exceptions, but it doesn't seem to.
So I guess two questions:
We could manually rescue from Resque::TermException in each job, and use this to reschedule the job. But is there a clean way to do this for all jobs? Even a monkey patch.
Shouldn't resque-retry auto-retry these? Can you think of any reason why it wouldn't?
Thanks!
Edit: Getting all jobs to complete in under 10 seconds seems unreasonable at scale. It seems like there needs to be a way to automatically re-queue these jobs when the Resque::DirtyExit exception is raised.

I ran into this issue as well. It turns out that Heroku sends the SIGTERM signal not just to the parent process but to all forked processes. This is not what Resque expects, which causes the RESQUE_PRE_SHUTDOWN_TIMEOUT to be skipped, forcing jobs to be killed without any time to attempt to finish.
Heroku gives workers 30s to gracefully shut down after a SIGTERM is issued. In most cases this is plenty of time to finish a job, with some buffer left over to requeue the job to Resque if it couldn't finish. However, for all of this time to be used you need to set the RESQUE_PRE_SHUTDOWN_TIMEOUT and RESQUE_TERM_TIMEOUT env vars, as well as patch Resque to correctly respond to SIGTERM being sent to forked processes.
Here's a gem which patches resque and explains this issue in more detail:
https://github.com/iloveitaly/resque-heroku-signals
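For reference, a hedged sketch of what the Procfile line from the question might look like with that gem installed and both timeouts set (the numbers are illustrative; the idea is that the pre-shutdown timeout plus the term timeout stays under Heroku's 30-second window):
worker: TERM_CHILD=1 RESQUE_PRE_SHUTDOWN_TIMEOUT=20 RESQUE_TERM_TIMEOUT=8 bundle exec rake environment resque:work QUEUE=critical,high,low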

This will be a two-part answer, first addressing Resque::TermException and then Resque::DirtyExit.
TermException
It's worth noting that if you are using ActiveJob with Rails 7 or later the retry_on and discard_on methods can be used to handle Resque::TermException. You could write the following in your job class:
retry_on(::Resque::TermException, wait: 2.minutes, attempts: 4)
or
discard_on(::Resque::TermException)
A big caveat here is that if you are using a Rails version prior to 7 you'll need to add some custom code to get this to work.
The reason is that Resque::TermException does not inherit from StandardError (it inherits from SignalException, source: https://github.com/resque/resque/blob/master/lib/resque/errors.rb#L26) and prior to Rails 7 retry_on and discard_on only handle exceptions that inherit from StandardError.
Here's the Rails 7 commit that changes this to work with all exception subclasses: https://github.com/rails/rails/commit/142ae54e54ac81a0f62eaa43c3c280307cf2127a
So if you want to use retry_on to handle Resque::TermException on a Rails version earlier than 7 you have a few options:
Monkey patch TermException so that it inherits from StandardError (a sketch of this follows the list).
Add a rescue statement to your perform method that explicitly looks for Resque::TermException or one of its ancestors (e.g. SignalException, Exception).
Patch the implementation of perform_now with the Rails 7 version (this is what I did in my codebase).
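For the first option, a minimal sketch could look like the following. This reopens Resque's error class, so treat it as a last resort that may break with future Resque versions; the initializer path is just an illustration.
# config/initializers/resque_term_exception.rb (illustrative location)
require 'resque/errors'

module Resque
  # Redefine TermException so it inherits from StandardError instead of
  # SignalException, letting pre-Rails-7 retry_on/discard_on rescue it.
  remove_const(:TermException)
  class TermException < StandardError; end
end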
Here's how you can retry on a TermException by adding a rescue to your job's perform method:
class MyJob < ActiveJob::Base
  # ActiveJob's `retry_on` and `discard_on` methods don't handle `TermException`
  # because it inherits from `SignalException` rather than `StandardError`.
  module RetryOnTermination
    def perform(*args, **kwargs)
      super
    rescue Resque::TermException
      Rails.logger.info("Retrying #{self.class.name} due to Resque::TermException")
      self.class.set(wait: 2.minutes).perform_later(*args, **kwargs)
    end
  end

  prepend RetryOnTermination
end
Alternatively you can use the Rails 7 definition of perform_now by adding this to your job class:
# FIXME: Here we override the Rails 6 implementation of this method with the
# Rails 7 implementation in order to be able to retry/discard exceptions that
# don't inherit from StandardError, such as `Resque::TermException`.
#
# When we upgrade to Rails 7 we should remove this.
# Latest stable Rails (7 as of this writing) source: https://github.com/rails/rails/blob/main/activejob/lib/active_job/execution.rb
# Rails 6.1 source: https://github.com/rails/rails/blob/6-1-stable/activejob/lib/active_job/execution.rb
# Rails 6.0 source (same code as 6.1): https://github.com/rails/rails/blob/6-0-stable/activejob/lib/active_job/execution.rb
#
# NOTE: I've made a minor change to the Rails 7 implementation, I've removed
# the line `ActiveSupport::ExecutionContext[:job] = self`, because `ExecutionContext`
# isn't defined prior to Rails 7.
def perform_now
  # Guard against jobs that were persisted before we started counting executions by zeroing out nil counters
  self.executions = (executions || 0) + 1
  deserialize_arguments_if_needed
  run_callbacks :perform do
    perform(*arguments)
  end
rescue Exception => exception
  rescue_with_handler(exception) || raise
end
DirtyExit
Resque::DirtyExit is raised in the parent process, rather than the forked child process that actually executes your job code. This means that any code you have in your job for rescuing or retrying those exceptions won't work. See these lines of code where that happens:
https://github.com/resque/resque/blob/master/lib/resque/worker.rb#L940
https://github.com/resque/resque/blob/master/lib/resque/job.rb#L234
https://github.com/resque/resque/blob/master/lib/resque/job.rb#L285
But fortunately, Resque provides a mechanism for dealing with this - job hooks, specifically the on_failure hook: https://github.com/resque/resque/blob/master/docs/HOOKS.md#job-hooks
A quote from those docs:
on_failure: Called with the exception and job args if any exception occurs while performing the job (or hooks), this includes Resque::DirtyExit.
And an example from those docs on how to use hooks to retry exceptions:
module RetriedJob
  def on_failure_retry(e, *args)
    Logger.info "Performing #{self} caused an exception (#{e}). Retrying..."
    Resque.enqueue self, *args
  end
end

class MyJob
  extend RetriedJob
end
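Coming back to the resque-retry part of the original question: resque-retry's retrying is itself built on this on_failure hook, so a hedged sketch of whitelisting these exceptions might look like the following (whether Resque::DirtyExit actually reaches the hook also depends on your failure backend configuration):
class MyJob
  extend Resque::Plugins::Retry

  # By default resque-retry retries on any exception; setting this whitelist
  # restricts (and makes explicit) retries for worker-kill exceptions.
  @retry_exceptions = [Resque::DirtyExit, Resque::TermException]
  @retry_limit      = 3
  @retry_delay      = 60
end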

We could manually rescue from Resque::TermException in each job, and use this to reschedule the job. But is there a clean way to do this for all jobs? Even a monkey patch.
The Resque::DirtyExit exception is raised when the job is killed with the SIGTERM signal. The job does not have the opportunity to catch the exception as you can read here.
Shouldn't resque-retry auto-retry these? Can you think of any reason why it wouldn't?
I don't see why it shouldn't. Is the scheduler running? If not, start it with rake resque:scheduler.
I wrote a detailed blog post about some of the problems I had recently with Resque::DirtyExit; maybe it is useful => Understanding the Resque internals – Resque::DirtyExit unveiled

I've also struggled with this for a while without finding a reliable solution.
One of the few solutions I've found is running a rake task on a schedule (a cron job every minute) that looks for jobs that failed with Resque::DirtyExit, retries those specific jobs, and removes them from the failure queue.
Here's a sample of the rake task
https://gist.github.com/CharlesP/1818418754aec03403b3
This solution is clearly suboptimal but to date it's the best solution I've found to retry these jobs.
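A minimal sketch of the idea (not the gist itself; assuming Resque's Failure API and a Rails-style :environment task, with an illustrative task name):
namespace :resque do
  desc 'Requeue jobs that failed with Resque::DirtyExit and remove them from the failed queue'
  task retry_dirty_exits: :environment do
    # Walk the failed queue backwards so removing entries doesn't shift
    # the indices of entries we haven't inspected yet.
    (Resque::Failure.count - 1).downto(0) do |i|
      failure = Resque::Failure.all(i, 1)
      next unless failure && failure['exception'] == 'Resque::DirtyExit'
      Resque::Failure.requeue(i)
      Resque::Failure.remove(i)
    end
  end
end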

Are your Resque jobs taking longer than 10 seconds to complete? If the jobs complete within 10 seconds after the initial SIGTERM is sent, you should be fine. Try to break the jobs up into smaller chunks that finish more quickly.
Also, you can have your worker re-enqueue the job doing something like this: https://gist.github.com/mrrooijen/3719427
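The gist isn't reproduced here, but the general idea can be sketched roughly like this (hypothetical job class; rescue the TermException inside perform and push the same args back onto the queue):
class MyJob
  @queue = :critical

  def self.perform(*args)
    do_work(*args)   # hypothetical placeholder for the real work
  rescue Resque::TermException
    # Put the job back on its queue so the next worker picks it up
    # after the dyno restarts.
    Resque.enqueue(self, *args)
  end
end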

Related

How to handle SIGTERM with resque-status in complex jobs

I've been using resque on Heroku, which will from time to time interrupt your jobs with a SIGTERM.
Thus far I've handled this with a simple:
def process(options)
  do_the_job
rescue Resque::TermException
  self.defer options
end
We've started using resque-status so that we can keep track of jobs, but the method above obviously breaks that, as the job will show completed when actually it's been deferred to another job.
My current thinking is that instead of deferring the current job in resque, there needs to be another job that re-queues jobs that have failed due to SIGTERM.
The catch is that some jobs are more complicated:
def process(options)
  do_part1 unless options['part1_finished']
  options['part1_finished'] = true
  do_part2
rescue Resque::TermException
  self.defer options
end
Simply removing the rescue and retrying those jobs would cause an exception when do_part1 gets repeated.
Looking more deeply into how resque-status works, a possible workaround is to go straight to Resque for the re-queue, using the same parameters that resque-status would use.
def process
  do_part1 unless options['part1_finished']
  options['part1_finished'] = true
  do_part2
rescue Resque::TermException
  Resque.enqueue self.class, uuid, options
  raise DeferredToNewJob
end
Of course, this is undocumented, so it may be incompatible with future releases of resque-status.
There is a drawback: between that job failing and the new job picking it up, the status of the first job will be reported by resque-status.
This is why I re-raise a new exception - otherwise the job status will show completed until the new worker picks up the old job, which may confuse processes that are watching and waiting for the job to finish.
By raising a new exception DeferredToNewJob, the job status will temporarily show failure, which is easier to work around at the front end, and the specific exception can be automatically cleared from the resque failure queue.
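(DeferredToNewJob here is just a custom marker exception I define myself; a bare class along these lines is all it needs to be:)
# Marker exception used purely to flag the "deferred to a new job" state.
class DeferredToNewJob < StandardError; end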
UPDATE
resque-status provides support for an on_failure handler. If a method with this name is defined as an instance method on the class, we can make this even simpler.
Here's my on_failure:
def on_failure(e)
  if e.is_a? DeferredToNewJob
    tick('Waiting for new job')
  else
    raise e
  end
end
With this in place the job spends basically no time in the failed state for processes watching its status.
In addition, if resque-status finds this handler, then it won't raise the exception up to resque, so it won't get added to the failed queue.

Understanding Celluloid Concurrency

Following is my Celluloid code.
client1.rb - one of the two clients (referred to as client 1).
client2.rb - the second of the two clients (referred to as client 2).
Note:
The only difference between the two clients is the text that is passed to the server, i.e. 'client-1' and 'client-2' respectively.
On testing these two clients (by running them side by side) against the following two servers (one at a time), I found very strange results.
server1.rb (a basic example taken from the README.md of celluloid-zmq)
Using this as the server for the two clients above resulted in parallel execution of tasks.
OUTPUT
ruby server1.rb
Received at 04:59:39 PM and message is client-1
Going to sleep now
Received at 04:59:52 PM and message is client-2
Note:
The client2.rb message was processed while the client1.rb request was sleeping (a mark of parallelism).
server2.rb
Using this as the server for the two clients above did not result in parallel execution of tasks.
OUTPUT
ruby server2.rb
Received at 04:55:52 PM and message is client-1
Going to sleep now
Received at 04:56:52 PM and message is client-2
Note:
client-2 had to wait 60 seconds, since client-1 was sleeping (a 60-second sleep).
I ran the above tests multiple times and they all resulted in the same behaviour.
Can anyone explain, from the results of the above tests:
Question: Why does Celluloid wait 60 seconds before it can process the other request, as seen in the server2.rb case?
Ruby version
ruby -v
ruby 2.1.2p95 (2014-05-08 revision 45877) [x86_64-darwin13.0]
Using your gists, I verified this issue can be reproduced in MRI 2.2.1 as well as JRuby 1.7.21 and Rubinius 2.5.8... The difference between server1.rb and server2.rb is the use of the DisplayMessage class and its message class method in the latter.
Use of sleep in DisplayMessage is out of Celluloid scope.
When sleep is used in server1.rb it is actually using Celluloid.sleep, but when used in server2.rb it is using Kernel.sleep... which locks up the mailbox for Server until 60 seconds have passed. This prevents future method calls on that actor from being processed until the mailbox is processing messages (method calls on the actor) again.
There are three ways to resolve this:
Use a defer {} or future {} block.
Explicitly invoke Celluloid.sleep rather than sleep (if not explicitly invoked as Celluloid.sleep, using sleep will end up calling Kernel.sleep, since DisplayMessage does not include Celluloid like Server does).
Bring the contents of DisplayMessage.message into handle_message as in server1.rb; or at least into Server, which is in Celluloid scope and will use the correct sleep.
The defer {} approach:
def handle_message(message)
  defer {
    DisplayMessage.message(message)
  }
end
The Celluloid.sleep approach:
class DisplayMessage
  def self.message(message)
    # ...
    Celluloid.sleep 60
  end
end
Not truly a scope issue; it's about asynchrony.
To reiterate, the deeper issue is not the scope of sleep ... that's why defer and future are my best recommendation. But to post something here that came out in my comments:
Using defer or future pushes a task that would cause an actor to become tied up into another thread. If you use future, you can get the return value once the task is done, if you use defer you can fire & forget.
But better yet, create another actor for tasks that tend to get tied up, and even pool that other actor... if defer or future don't work for you.
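As a hedged sketch of that last suggestion (class and pool names are mine, not from your gists), the slow work could live in its own pooled actor so the Server actor's mailbox never blocks:
class MessagePrinter
  include Celluloid

  def display(message)
    DisplayMessage.message(message)   # the slow, sleeping work
  end
end

printer_pool = MessagePrinter.pool(size: 4)

# Inside Server#handle_message:
#   printer_pool.async.display(message)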
I'd be more than happy to answer follow-up questions brought up by this question; we have a very active mailing list, and IRC channel. Your generous bounties are commendable, but plenty of us would help purely to help you.
Managed to reproduce and fix the issue.
Deleting my previous answer.
Apparently, the problem lies in sleep.
Confirmed by adding "actor/kernel sleeping" logs to a local copy of Celluloid's sleep().
In server1.rb,
the call to sleep is within Server - a class that includes Celluloid.
Thus Celluloid's implementation of sleep overrides the native sleep.
class Server
  include Celluloid::ZMQ
  # ...
  def run
    loop { async.handle_message @socket.read }
  end

  def handle_message(message)
    # ...
    sleep 60
  end
end
Note the "actor sleeping" log from server1.rb (log added to Celluloid's sleep()).
This suspends only the current "actor" in Celluloid, i.e. only the Celluloid thread handling client-1 sleeps.
In server2.rb,
the call to sleep is within a different class, DisplayMessage, that does NOT include Celluloid.
Thus it is the native sleep itself.
class DisplayMessage
  def self.message(message)
    # ...
    sleep 60
  end
end
Note the ABSENCE of any "actor sleeping" log from server2.rb.
This suspends the current Ruby thread, i.e. the server itself sleeps (not just a single Celluloid actor).
The Fix?
In server2.rb, the appropriate sleep must be explicitly specified.
class DisplayMessage
  def self.message(message)
    puts "Received at #{Time.now.strftime('%I:%M:%S %p')} and message is #{message}"
    ## Intentionally added sleep to test whether Celluloid blocks the main process for 60 seconds or not.
    if message == 'client-1'
      puts 'Going to sleep now'.red
      # "sleep 60" will invoke the native sleep.
      # Use Celluloid.sleep to support concurrent execution.
      Celluloid.sleep 60
    end
  end
end

Get sidekiq to execute a job immediately

At the moment, I have a sidekiq job like this:
class SyncUser
  include Sidekiq::Worker

  def perform(user_id)
    # do stuff
  end
end
I am placing a job on the queue like this:
SyncUser.perform_async user.id
This all works of course but there is a bit of a lag between calling perform_async and the job actually getting executed.
Is there anything else I can do to tell sidekiq to execute the job immediately?
There are two questions here.
If you want to execute a job immediately, in the current context you can use:
SyncUser.new.perform(user.id)
If you want to decrease the delay between asynchronous work being scheduled and when it's executed in the sidekiq worker, you can decrease the poll_interval setting:
Sidekiq.configure_server do |config|
  config.poll_interval = 2
end
The poll_interval setting controls how frequently worker backends check the queue for jobs. With a free worker, the average time between a job being scheduled and executed will be poll_interval / 2.
Use the .perform_inline method:
SyncUser.perform_inline(user.id)
If you also need to perform nested jobs, you can use Sidekiq::Testing.inline! in your production console
require 'sidekiq/testing'
Sidekiq::Testing.inline!
SyncUser.perform_inline(user.id)
For those who are using Sidekiq via the Active Job framework, you can do
SyncUser.perform_now(user.id)
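(That assumes SyncUser is defined as an Active Job class rather than a bare Sidekiq worker, i.e. something along these lines:)
class SyncUser < ActiveJob::Base
  queue_as :default

  def perform(user_id)
    # do stuff
  end
end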

Any way to snipe or terminate specific sidekiq workers?

Is it possible to snipe or cancel specific Sidekiq workers/running jobs - effectively raising an exception or something in the worker thread to terminate it?
I have some fairly simple background Ruby (MRI 1.9.3) jobs under Sidekiq (latest) that run fine and are dependent on external systems. The external systems can take varying amounts of time, during which the worker must remain available.
I think I can use Sidekiq's API to get to the appropriate worker - but I don't see any terminate/cancel/quit/exit methods in the docs. Is this possible? Is this something other people have done?
P.S. I know I could use an async loop within the worker's job to trap relevant signals and shut itself down... but that will complicate things a bit due to the nature of the external systems.
An async loop is the best way to do it, as Sidekiq has no way to terminate a running job.
def perform
  main_thread = Thread.new do
    ActiveRecord::Base.connection_pool.with_connection do
      begin
        # ... the actual work goes here ...
      ensure
        # Signal the watcher that this job has finished (or was interrupted),
        # so it stops polling and exits.
        $redis.set some_thread_key, 1
      end
    end
  end

  watcher_thread = Thread.new do
    ActiveRecord::Base.connection_pool.with_connection do
      # DEL returns 1 once the key exists, i.e. when the main thread has
      # finished or when an external process sets the key to cancel the job.
      until $redis.del(some_thread_key) == 1 do
        sleep 1
      end
      main_thread.kill
      # Wait for the main thread to actually die before returning.
      until !!main_thread.status == false do
        sleep 0.1
      end
    end
  end

  [main_thread, watcher_thread].each(&:join)
end
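With that in place, cancelling the job from outside (e.g. from a console) is just a matter of creating the key the watcher polls for - a sketch, assuming you know the some_thread_key value that particular job uses:
# Setting the key makes the watcher's $redis.del(...) call return 1,
# which kills the job's main thread.
$redis.set(some_thread_key, 1)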

resque-status and resque-scheduler for recurring jobs

I have the following configuration for my Resque system (no Rails, just a Sinatra base) where I have a bunch of recurring jobs scheduled from a yml file:
resque (1.23.0)
resque-scheduler (2.0.0)
resque-status (0.4.0)
The recurring schedule appears on the 'Schedule' tab, and when I click the 'Queue Now' button the status also appears on the 'Statuses' tab. The problem is that when the recurring jobs run automatically, they don't appear on the 'Statuses' tab. My resque_schedule.yml looks something like this:
email_notifier:
  every: 5m
  custom_job_class: Process_Notification_Emails
  queue: email_notifier
  args:
  description: "Process mail notifications"
Note: These scheduled jobs are actually running every 5 minutes and are behaving as expected; the only issue I'm having is that they don't appear on the 'Statuses' tab unless I manually enqueue them.
Any ideas what I'm doing wrong here?
Support for resque-status (and other custom jobs)
Some Resque extensions like resque-status use custom job classes with a slightly different API signature. Resque-scheduler isn't trying to support all existing and future custom job classes; instead it supports a schedule flag so you can extend your custom class and make it support scheduled jobs.
Let's pretend we have a JobWithStatus class called FakeLeaderboard
class FakeLeaderboard < Resque::JobWithStatus
  def perform
    # do something and keep track of the status
  end
end
And then a schedule:
create_fake_leaderboards:
  cron: "30 6 * * 1"
  queue: scoring
  custom_job_class: FakeLeaderboard
  args:
  rails_env: demo
  description: "This job will auto-create leaderboards for our online demo and the status will update as the worker makes progress"
If your extension doesn't support scheduled jobs, you would need to extend the custom job class to support the #scheduled method:
module Resque
  class JobWithStatus
    # Wrapper API to forward a Resque::Job creation API call into
    # a JobWithStatus call.
    def self.scheduled(queue, klass, *args)
      create(*args)
    end
  end
end
https://github.com/bvandenbos/resque-scheduler#support-for-resque-status-and-other-custom-jobs
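Applied to the schedule in the question, a hedged sketch (using the class name from your yml; untested) would be to give the job class its own scheduled wrapper rather than patching JobWithStatus globally:
class Process_Notification_Emails < Resque::JobWithStatus
  # Forward resque-scheduler's enqueue call into resque-status's create,
  # so automatically scheduled runs get a UUID and show up on the Statuses tab.
  def self.scheduled(queue, klass, *args)
    create(*args)
  end

  def perform
    # process the mail notifications, calling tick(...) to report progress
  end
end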
