Is there any way we can ensure certain code runs even after the delayed job fails or succeeds, just like we can write an ensure block in exception handling?
What's wrong with the following approach?
def delayed_job_method
  do_the_job
ensure
  something # runs whether do_the_job returns normally or raises
end
Using RSpec and Capybara, after recently adding a debounce to most of my pages, the tests now fail randomly.
Locally these pass fine, but on Semaphore 2.0 I am getting random failures on shorter tests.
We use WebMock to stub the request in remoteFetch(), and it seems the stub is removed on shorter tests. As remoteFetch() is called afterwards, the stub no longer exists and the test fails:
function debouncedFetch(ids) {
store.idsToFetch.push(ids);
$timeout.cancel(store.fetchTimeoutFn);
store.fetchTimeoutFn = $timeout(() => { remoteFetch(store.idsToFetch); }, 200);
}
I have tried setting the debounce/timeout to 0, still with no joy.
Is there a way to check whether the tests/$rootScope have finished or been destroyed, and not run the remoteFetch function? Or to get the test to wait for this function to run?
Assuming you're using the default Capybara configuration, where Capybara manages the running of the app under test, it will wait for all network connections to be closed during the test reset in an after block. Since you're cleaning up your WebMock in an after block, it's possible it's occurring before the Capybara-registered block. To fix that, you can change the order they're defined in, or define your WebMock cleanup with append_after rather than after so it's guaranteed to run after the Capybara session reset.
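For example, a minimal sketch using RSpec's append_after (standard RSpec configuration API; WebMock.reset! is the cleanup webmock/rspec normally registers for you):

RSpec.configure do |config|
  # append_after hooks run after the ordinary after hooks,
  # so this fires only once Capybara has reset the session.
  config.append_after(:each) do
    WebMock.reset!
  end
end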
It turns out that the after(:each) { WebMock.reset! } hook registered by the gem is called before Capybara.reset_sessions!
This causes a race condition. The way around it is to change the order: make sure in your spec_helper that require 'webmock/rspec' is called before require 'rspec/rails'.
This ensures the hooks are set up in the right order.
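In spec_helper.rb that looks like this (the comments reflect RSpec's documented behavior of running after hooks in reverse registration order):

# spec_helper.rb
require 'webmock/rspec' # registers its after(:each) { WebMock.reset! } hook first,
require 'rspec/rails'   # so hooks registered later run before it during cleanup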
Hope this helps someone else.
We have intermittently failing tests due to Net::ReadTimeout errors.
We have yet to figure out a permanent fix. For now we want to try rescuing that specific error and re-running the test. Not an ideal solution or true fix, but we need something in the short term.
How can we re-run the RSpec test that failed? I mean, how can the test suite do this automatically for that test?
We can catch the error like this:
# spec/support/capybara.rb
def rescue_net_read_timeout(max_tries = 1, tries = 0, &block)
yield
rescue Net::ReadTimeout => e
Rails.logger.error e.message
end
but how do we make it try re-running that test?
We want to try re-running the test; if the re-run passes, move on with no error (ideally logging it though), else fail for real and consider the test, and hence the suite, to have failed.
The following method uses the wait gem to retry a test, or just a portion of it, possibly a single assertion.
# requires the `wait` gem
def retry_test(&block)
Wait.new(delay: 0.5, attempts: 10).until do
begin
yield
true # test passed
rescue RSpec::Expectations::ExpectationNotMetError => x
# make sure it failed for the expected reason
expect(x.to_s).to match /Net::ReadTimeout/
false # test failed, will retry
end
end
end
One can verify why it failed, and report a failure immediately if the test ever fails for another reason.
it 'should get a response' do
retry_test do
# The entire test, or just the portion to retry
end
end
I use a Ruby gem called rspec-repeat. Out of a few hundred automated tests, we might run into a flaky one here or there, and this helps us get past false negatives.
Ideally it's best to continue to diagnose these flaky tests, but this helps alleviate issues in the interim.
Side note: rspec-repeat is based off another library named rspec-retry, but I find this one's code base tidier and its configuration easier to use.
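For reference, here's roughly how it's wired up, based on the gem's README (verify against the current docs; the example group here is illustrative):

require 'rspec/repeat'

describe 'a flaky integration' do
  include RSpec::Repeat

  # Re-run each example in this group up to 3 times before failing for real.
  around do |example|
    repeat example, 3.times, verbose: true
  end
end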
When we restart or deploy we get a number of Resque jobs in the failed queue with either Resque::TermException (SIGTERM) or Resque::DirtyExit.
We're using the new TERM_CHILD=1 RESQUE_TERM_TIMEOUT=10 in our Procfile so our worker line looks like:
worker: TERM_CHILD=1 RESQUE_TERM_TIMEOUT=10 bundle exec rake environment resque:work QUEUE=critical,high,low
We're also using resque-retry, which I thought might auto-retry on these two exceptions, but it doesn't seem to.
So I guess two questions:
We could manually rescue from Resque::TermException in each job and use this to reschedule the job. But is there a clean way to do this for all jobs? Even a monkey patch.
Shouldn't resque-retry auto-retry these? Can you think of any reason why it wouldn't?
Thanks!
Edit: Getting all jobs to complete in less than 10 seconds seems unreasonable at scale. It seems like there needs to be a way to automatically re-queue these jobs when the Resque::DirtyExit exception is raised.
I ran into this issue as well. It turns out that Heroku sends the SIGTERM signal not just to the parent process but to all forked processes. This is not the logic that Resque expects, which causes the RESQUE_PRE_SHUTDOWN_TIMEOUT to be skipped, forcing jobs to be killed without any time to attempt to finish.
Heroku gives workers 30s to gracefully shutdown after a SIGTERM is issued. In most cases, this is plenty of time to finish a job with some buffer time left over to requeue the job to Resque if the job couldn't finish. However, for all of this time to be used you need to set the RESQUE_PRE_SHUTDOWN_TIMEOUT and RESQUE_TERM_TIMEOUT env vars as well as patch Resque to correctly respond to SIGTERM being sent to forked processes.
Here's a gem which patches resque and explains this issue in more detail:
https://github.com/iloveitaly/resque-heroku-signals
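With that gem in place, the worker line ends up looking something like this (the values are illustrative; keep the combined timeouts under Heroku's 30-second grace period):

worker: TERM_CHILD=1 RESQUE_PRE_SHUTDOWN_TIMEOUT=20 RESQUE_TERM_TIMEOUT=8 bundle exec rake environment resque:work QUEUE=critical,high,low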
This will be a two part answer, first addressing Resque::TermException and then Resque::DirtyExit.
TermException
It's worth noting that if you are using ActiveJob with Rails 7 or later the retry_on and discard_on methods can be used to handle Resque::TermException. You could write the following in your job class:
retry_on(::Resque::TermException, wait: 2.minutes, attempts: 4)
or
discard_on(::Resque::TermException)
A big caveat here is that if you are using a Rails version prior to 7 you'll need to add some custom code to get this to work.
The reason is that Resque::TermException does not inherit from StandardError (it inherits from SignalException, source: https://github.com/resque/resque/blob/master/lib/resque/errors.rb#L26) and prior to Rails 7 retry_on and discard_on only handle exceptions that inherit from StandardError.
Here's the Rails 7 commit that changes this to work with all exception subclasses: https://github.com/rails/rails/commit/142ae54e54ac81a0f62eaa43c3c280307cf2127a
So if you want to use retry_on to handle Resque::TermException on a Rails version earlier than 7 you have a few options:
Monkey patch TermException so that it inherits from StandardError.
Add a rescue statement to your perform method that explicitly looks for Resque::TermException or one of its ancestors (e.g. SignalException, Exception).
Patch the implementation of perform_now with the Rails 7 version (this is what I did in my codebase).
Here's how you can retry on a TermException by adding a rescue to your job's perform method:
class MyJob < ActiveJob::Base
  # ActiveJob's `retry_on` and `discard_on` methods don't handle
  # `TermException` because it inherits from `SignalException` rather
  # than `StandardError`.
  module RetryOnTermination
    def perform(*args, **kwargs)
      super
    rescue Resque::TermException
      Rails.logger.info("Retrying #{self.class.name} due to Resque::TermException")
      self.class.set(wait: 2.minutes).perform_later(*args, **kwargs)
    end
  end

  # The module must be defined before this line; otherwise the constant
  # lookup raises NameError when the class body is evaluated.
  prepend RetryOnTermination
end
Alternatively you can use the Rails 7 definition of perform_now by adding this to your job class:
# FIXME: Here we override the Rails 6 implementation of this method with the
# Rails 7 implementation in order to be able to retry/discard exceptions that
# don't inherit from StandardError, such as `Resque::TermException`.
#
# When we upgrade to Rails 7 we should remove this.
# Latest stable Rails (7 as of this writing) source: https://github.com/rails/rails/blob/main/activejob/lib/active_job/execution.rb
# Rails 6.1 source: https://github.com/rails/rails/blob/6-1-stable/activejob/lib/active_job/execution.rb
# Rails 6.0 source (same code as 6.1): https://github.com/rails/rails/blob/6-0-stable/activejob/lib/active_job/execution.rb
#
# NOTE: I've made a minor change to the Rails 7 implementation, I've removed
# the line `ActiveSupport::ExecutionContext[:job] = self`, because `ExecutionContext`
# isn't defined prior to Rails 7.
def perform_now
# Guard against jobs that were persisted before we started counting executions by zeroing out nil counters
self.executions = (executions || 0) + 1
deserialize_arguments_if_needed
run_callbacks :perform do
perform(*arguments)
end
rescue Exception => exception
rescue_with_handler(exception) || raise
end
DirtyExit
Resque::DirtyExit is raised in the parent process, rather than the forked child process that actually executes your job code. This means that any code you have in your job for rescuing or retrying those exceptions won't work. See these lines of code where that happens:
https://github.com/resque/resque/blob/master/lib/resque/worker.rb#L940
https://github.com/resque/resque/blob/master/lib/resque/job.rb#L234
https://github.com/resque/resque/blob/master/lib/resque/job.rb#L285
But fortunately, Resque provides a mechanism for dealing with this: job hooks, specifically the on_failure hook: https://github.com/resque/resque/blob/master/docs/HOOKS.md#job-hooks
A quote from those docs:
on_failure: Called with the exception and job args if any exception occurs while performing the job (or hooks), this includes Resque::DirtyExit.
And an example from those docs on how to use hooks to retry exceptions:
module RetriedJob
def on_failure_retry(e, *args)
Logger.info "Performing #{self} caused an exception (#{e}). Retrying..."
Resque.enqueue self, *args
end
end
class MyJob
extend RetriedJob
end
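Note that the docs' example re-enqueues unconditionally, which can loop forever on a permanently failing job. Here's a sketch of bounding the retries (the retry-count-as-first-argument convention is my own illustration, not part of the hook API):

module RetriedJob
  MAX_RETRIES = 3

  # on_failure hooks receive the exception followed by the job's arguments;
  # here we assume the job keeps its retry count as its first argument.
  def on_failure_retry(e, retry_count = 0, *args)
    if retry_count < MAX_RETRIES
      Resque.enqueue(self, retry_count + 1, *args)
    else
      Rails.logger.error("#{self} gave up after #{MAX_RETRIES} retries: #{e}")
    end
  end
end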
We could manually rescue from Resque::TermException in each job and use this to reschedule the job. But is there a clean way to do this for all jobs? Even a monkey patch.
The Resque::DirtyExit exception is raised when the job is killed with the SIGTERM signal. The job does not have the opportunity to catch the exception as you can read here.
Shouldn't resque-retry auto-retry these? Can you think of any reason why it wouldn't?
I don't see why it shouldn't. Is the scheduler running? If not, run rake resque:scheduler.
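For reference, a minimal sketch of wiring up resque-retry (the class name and values are illustrative; resque-scheduler must be running for delayed retries to fire):

require 'resque-retry'

class CriticalJob
  extend Resque::Plugins::Retry

  @queue = :critical
  @retry_limit = 3   # give up after three attempts
  @retry_delay = 60  # seconds between attempts (handled by resque-scheduler)

  def self.perform(*args)
    # ... job logic ...
  end
end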
I wrote a detailed blog post about some of the problems I recently had with Resque::DirtyExit; maybe it is useful => Understanding the Resque internals – Resque::DirtyExit unveiled
I've also struggled with this for a while without finding a reliable solution.
One of the few solutions I've found is running a rake task on a schedule (a cron job every minute) which looks for jobs that failed with Resque::DirtyExit, retries those specific jobs, and removes them from the failure queue.
Here's a sample of the rake task:
https://gist.github.com/CharlesP/1818418754aec03403b3
This solution is clearly suboptimal, but to date it's the best one I've found to retry these jobs.
Are your resque jobs taking longer than 10 seconds to complete? If the jobs complete within 10 seconds after the initial SIGTERM is sent you should be fine. Try to break up the jobs into smaller chunks that finish quicker.
Also, you can have your worker re-enqueue the job doing something like this: https://gist.github.com/mrrooijen/3719427
def perform
refund_log = {
success: refund_retry.success?,
amount: refund_amount,
action: "refund"
}
if refund_retry.success?
refund_log[:reference] = refund_retry.transaction.id
refund_log[:message] = refund_retry.transaction.status
else
refund_log[:message] = refund_retry.message
refund_log[:params] = {}
refund_retry.errors.each do |error|
refund_log[:params][error.code] = error.message
end
order_transaction.message = refund_log[:params].values.join('|')
raise "delayed RefundJob has failed"
end
end
When I raise "delayed RefundJob has failed" in the else branch, it creates an Airbrake error. I want to run the job again if it ends up in the else section.
Is there any way to re-queue the job without raising an exception, and so prevent creating an Airbrake error?
I am using delayed_job version 1.
The cleanest way would be to re-queue, i.e. create a new job and enqueue it, and then exit the method normally.
To elaborate on @Roman's response, you can create a new job with a retry parameter in it and enqueue it.
If you maintain the retry parameter (incrementing it each time you re-enqueue a job), you can track how many retries you've made and thus avoid an endless retry loop.
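A sketch of that approach for delayed_job (the class, helper, and message names are illustrative, adapted from the question's code):

class RefundJob < Struct.new(:order_id, :retry_count)
  MAX_RETRIES = 3

  def perform
    return if process_refund(order_id).success? # hypothetical helper

    if retry_count < MAX_RETRIES
      # Re-enqueue a fresh job with an incremented counter and return
      # normally, so delayed_job sees no exception and Airbrake stays quiet.
      Delayed::Job.enqueue RefundJob.new(order_id, retry_count + 1)
    else
      raise "RefundJob failed after #{MAX_RETRIES} retries" # now worth an Airbrake
    end
  end
end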
DelayedJob expects a job to raise an error to be requeued, by definition.
From there you can either :
Ignore your exception on the Airbrake side (see https://github.com/airbrake/airbrake#filtering), so the job still gets queued again without filling your logs.
Dive into the DelayedJob code, where you can see at https://github.com/tobi/delayed_job/blob/master/lib/delayed/job.rb#L65 that a method named reschedule is available and used by run_with_lock (https://github.com/tobi/delayed_job/blob/master/lib/delayed/job.rb#L99). From there you can call reschedule manually, instead of raising your exception.
About the latter solution, I advise adding some mechanism that still files an Airbrake report on the third or later try; that way you can still detect that something is wrong without the hassle of having your logs filled by the attempts.
This is more of an opinion-oriented question about handling exceptions in nested code.
Assume you have a class that initializes another class to run a job. The job returns a value, which is then processed by the class which initially called it.
Where would you put the exception handling and error logging? Would you define it on the initialization of the job class in the calling class, which will then handle the exception raised during job execution, or on both levels?
If the job handles exceptions, then you don't need to wrap the call to the job in a begin/rescue.
But the class that initializes and runs the job could itself raise exceptions, so you should handle exceptions at that level as well.
Here is an example:
def some_job
begin
# a bunch of logic
rescue
# handle exception
# log it
end
end
It wouldn't make sense, then, to do this:
def some_manager
begin
some_job
rescue
# log
end
end
But something like this makes more sense:
def some_manager
begin
# a bunch of logic
some_job
# some more logic
rescue
# handle exception
# log
end
end
And of course you would want to rescue specific exceptions.
Probably the best answer, in general, for handling exceptions in Ruby is reading Exceptional Ruby. It may change your perspective on error handling.
Having said that, on to your specific case. When I hear "job" I hear "background process", so I'll base my answer on that.
Your job will want to report status while it's doing its thing. This could be states like "in queue", "running", "finished", but it could also be more informative (user-facing) information: "processing first 100 out of 1000 records".
So, if an error happens in your background process, my suggestion is two-fold:
Make sure you catch exceptions before you exit the job. Your background job processor might not like a random exception coming from your code. I personally like the idea of catching the exception and saving it to the database for easy retrieval later (see the sketch after this list). Then again, depending on your background job processor, maybe it handles error reporting for you. (I think Resque does, for example.)
On the front end, use AJAX (or something) to occasionally check on how the job is doing, say every 10 seconds or so. In addition to getting the status of the job, also make sure you return this additional information to the user (if appropriate).
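A minimal sketch of the first point, assuming a hypothetical JobStatus ActiveRecord model for tracking state (none of these names come from the question):

class ReportJob
  def initialize(status_record)
    @status = status_record # hypothetical model tracking the job's state
  end

  def perform
    @status.update!(state: 'running')
    # ... a bunch of logic ...
    @status.update!(state: 'finished')
  rescue StandardError => e
    # Persist the failure for easy retrieval, then decide whether to
    # re-raise depending on how your processor reports failures.
    @status.update!(state: 'failed', error: e.message)
    raise
  end
end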