How can I write code to go through the Resque failure queue and selectively delete jobs? Right now I've got a handful of important failures there, interspersed between thousands of failures from a runaway job that ran repeatedly. I want to delete the ones generated by the runaway job. The only API I'm familiar with is for enqueuing jobs. (I'll continue RTFMing, but I'm in a bit of a hurry.)
I ended up doing it like this:
# Loop over all failure indices, newest first, instantiating each failure as needed
(Resque::Failure.count - 1).downto(0).each do |error_index_number|
  failure = Resque::Failure.all(error_index_number)
  # `failure` is the hash with all the data about the failed job;
  # perform any check you need here
  if failure["error"][/regex_identifying_runaway_job/].present?
    Resque::Failure.remove(error_index_number)
    # or
    # Resque::Failure.requeue(error_index_number)
  end
end
As @Winfield mentioned, having a look at Resque's failure backend is useful.
You can manually modify the failure queue the way you're asking, but it might be better to write a custom failure handler that deletes or re-enqueues jobs as they fail.
You can find the base failure backend here and an implementation that logs failed jobs to the Hoptoad exception tracking service here.
For example:
module Resque
  module Failure
    class RemoveRunaways < Base
      def save
        i = 0
        while job = Resque::Failure.all(i)
          # Selectively remove all MyRunawayJobs from the failure queue whenever they fail
          if job.fetch('payload').fetch('class') == 'MyRunawayJob'
            Resque::Failure.remove(i)
          else
            i += 1
          end
        end
      end
    end
  end
end
EDIT: I forgot to mention how to register this backend so it handles failures.
In your Resque initializer (e.g. config/initializers/resque.rb):
# Use Resque Multi failure handler: standard handler and your custom handler
Resque::Failure::Multiple.classes = [Resque::Failure::Redis, Resque::Failure::RemoveRunaways]
Resque::Failure.backend = Resque::Failure::Multiple
Removing failures with a boolean function
I used a higher-order function approach that evaluates whether a failure should be removed:
def remove_failures(should_remove_failure_func)
  (Resque::Failure.count - 1).downto(0).each do |i|
    failure = Resque::Failure.all(i)
    Resque::Failure.remove(i) if should_remove_failure_func.call(failure)
  end
end
def remove_failed_validation_jobs
  has_failed_for_validation_reason = ->(failure) do
    failure["error"] == "Validation failed: Example has already been taken"
  end
  remove_failures(has_failed_for_validation_reason)
end
Is there a way to push a job back to the queue from Sidekiq server middleware? Or to simply retry it without incrementing the retry count?
UPDATE: Some background: I need to track the status of jobs in Elasticsearch (one job follows after another), but if Elastic is not accessible and I reschedule the same worker again, I lose the chain (the jid changes).
The easiest way would be for the job to re-schedule itself, then exit. For example:
class MyJob < ApplicationJob
  queue_as :default

  def perform(*args)
    if ready_to_perform?
      # Do stuff!
    else
      # Re-enqueue with the same arguments (note the splat)
      MyJob.perform_later(*args)
    end
  end
end
Use with caution. You probably don't want a job to be stuck re-scheduling itself forever!
This isn't quite the same as "retrying without incrementing the retry counter" (which is a little more complicated to implement), but is sufficient for most use cases like this.
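If you really do need the retry counter left untouched, a common workaround is to thread your own attempt counter through the job's arguments and enqueue a fresh job each time. A minimal sketch, where ready_to_perform?, do_work, and MAX_ATTEMPTS are placeholders rather than anything from Sidekiq's API:
class MyWorker
  include Sidekiq::Worker

  MAX_ATTEMPTS = 5

  # `attempt` rides along in the args; enqueueing a fresh job never
  # touches Sidekiq's own retry counter, and MAX_ATTEMPTS bounds the loop.
  def perform(payload, attempt = 1)
    if ready_to_perform?
      do_work(payload)
    elsif attempt < MAX_ATTEMPTS
      MyWorker.perform_async(payload, attempt + 1)
    end
  end
end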
This is not working code, but something like it could help you achieve what you want. You could define a middleware and add it to Sidekiq as below:
Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add Sidekiq::RetryMonitoringMiddleware
  end
end
Now you can define the middleware as shown below:
class Sidekiq::RetryMonitoringMiddleware
  def call(worker, job_params, _queue)
    # Call the worker's perform method to run the job again
    worker.perform(job_params['jid'], *job_params['args']) if should_retry?(job_params)
  rescue StandardError => e
    Rails.logger.error e
  ensure
    yield
  end

  private

  def should_retry?(job)
    # Only retry when the job carries a failure flag;
    # check which key actually holds the failure marker in your payload
    Integer(job['failure']) == 1
  end
end
Hope it helps!!
I'm trying to test a gem I'm creating with RSpec. The gem's purpose is to create queues (using 'bunny'). It will serve to communicate between processes on several servers.
But I cannot find documentation on how to safely create processes inside the RSpec environment without spawning several test processes (each displaying its own example failures and successes).
Here is what I wanted the tests to do:
Spawn child processes that wait on the queue
Push messages from the main RSpec process
Consume the queue in the child processes
Wait for the children to stop and collect the number of messages received by each child.
For now I implemented a simple case where the child consumes only one message and then stops.
Here is my current code:
module Queues
  # Basic CR accepting only jobs of type cmd_line
  class CR
    attr_reader :nb_jobs

    def initialize
      # opening communication pipes
      @rout, @wout = IO.pipe
      @nb_jobs = nil # not yet available.
    end

    def main
      @todo = JobPipe.instance
      job = @todo.pop do |j|
        # accept only CMD_LINE type of jobs.
        j.type == Messages::Job::CMD_LINE
      end
      # run command
      %x{#{job.cmd}}
      @wout.puts "1" # saying that we did one job
    end

    def run
      @pid = Process.fork
      if @pid.nil? then
        # we are in the child
        self.main
        @rout.close
        @wout.close
        exit
      end
    end

    def wait
      @nb_jobs = @rout.gets(nil).to_i
      Process.wait(@pid)
      @rout.close
      @wout.close
      @nb_jobs
    end
  end

  @job = Messages::Job.new({:type => Messages::Job::CMD_LINE, :cmd => "sleep 1"})

  RSpec.describe JobPipe do
    context "one Orchestrator and one CR" do
      before(:each) do
        indalo_queue_pre_configure
      end

      it "can send a job with Orchestrator and be received by CR" do
        cr = CR.new
        cr.run # execute the C.R. process
        todo = JobPipe.instance
        todo.push(@job)
        nb_jobs = cr.wait
        expect(nb_jobs).to eql(1)
      end
    end

    context "one Orchestrator and several CRs" do
      it 'can send one job per CR and get all back' do
        crs = Array.new(rand(2..10)) { CR.new }
        crs.each do |cr|
          cr.run
        end
        todo = JobPipe.instance
        crs.each do |_|
          todo.push(@job)
        end
        nb_jobs = 0
        crs.each do |cr|
          nb_jobs += cr.wait
        end
        expect(nb_jobs).to eql(crs.length)
      end
    end
  end
end
Edit: the actual question is (sorry for not putting it right away, that was a mistake):
Is there a way to use RSpec correctly in a multi-process environment?
I'm not looking for a code review; I just wanted to show a clear example of what I was trying to do. Here I used fork, but this duplicates the whole process (including the RSpec part), and I got numerous RSpec outputs, which is not what you would expect from a test suite.
I would expect only the main program to report the RSpec outputs/stats, with the subprocesses just interacting with it.
The only way I see to do that correctly is not to fork but to start subprocesses through some other means. Maybe I'm answering my own question...
But not knowing RSpec well, I was wondering if someone knew how to do it within RSpec without writing external code. It seems to me that having separate source files tied to a single test example is not a good idea.
What I found about multi-process testing is this plugin to RSpec. The only thing is that I don't know the mock concept; maybe I have to learn about it...
OK, I found an answer, which is to use the &block argument of the Process.fork method. In this case you don't duplicate the whole process: you just execute the block of code in another process, which then exits with status 0 (as stated in the Ruby docs).
This prevents the children from inheriting the whole RSpec environment and printing the state of your tests many times over.
PS: Be careful to redirect the STDOUT/STDERR of the child process if you don't want them to pollute the STDOUT/STDERR of the test.
PS2: Don't forget to close @wout on the parent side if you call @rout.gets(nil) there, because keeping it open in the parent prevents EOF from ever happening (a bug in the code I presented), even if you close it in the child.
PS3: Use two pipes instead of one, so the child and the parent don't talk and listen on the same pipe. A beginner's mistake, but I made it again.
PS4: Use an exit statement at the end of the &block to keep the child from becoming a zombie and to make sure the parent doesn't wait too long for the rest of the child process to die.
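Putting those notes together, here is a minimal sketch of the block form; the one-message pipe protocol mirrors the CR class above, and the details will vary:
# Only the block runs in the child, so the child never falls back
# into RSpec's runner and never prints its own test results.
rout, wout = IO.pipe

pid = Process.fork do
  rout.close                  # PS3: the child only writes on this pipe
  STDOUT.reopen(File::NULL)   # PS: keep child output out of the test output
  STDERR.reopen(File::NULL)
  wout.puts "1"               # report one processed job to the parent
  wout.close
  exit                        # PS4: leave the child explicitly
end

wout.close                    # PS2: close the write end in the parent so EOF can happen
nb_jobs = rout.gets(nil).to_i
Process.wait(pid)             # reap the child to avoid a zombie
rout.close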
Sorry for the long post, but it's good that this is written down, for my own future reference too ^^
I've been using resque on Heroku, which will from time to time interrupt your jobs with a SIGTERM.
Thus far I've handled this with a simple:
def process(options)
  do_the_job
rescue Resque::TermException
  self.defer options
end
We've started using resque-status so that we can keep track of jobs, but the method above obviously breaks that: the job will show as completed when it has actually been deferred to another job.
My current thinking is that instead of deferring the current job in resque, there needs to be another job that re-queues jobs that have failed due to SIGTERM.
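For illustration, such a re-queuing job could walk the failure list with the same API used in the snippets above; a hypothetical sketch:
# Hypothetical sweeper job: requeue failures caused by SIGTERM,
# scanning newest first so removals don't shift unvisited indices.
class RequeueTerminatedJobs
  @queue = :maintenance

  def self.perform
    (Resque::Failure.count - 1).downto(0) do |i|
      failure = Resque::Failure.all(i)
      next unless failure && failure['exception'] == 'Resque::TermException'
      Resque::Failure.requeue(i)
      Resque::Failure.remove(i)
    end
  end
end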
The trick comes in that some jobs are more complicated:
def process(options)
  do_part1 unless options['part1_finished']
  options['part1_finished'] = true
  do_part2
rescue Resque::TermException
  self.defer options
end
Simply removing the rescue and retrying those jobs would cause an exception when do_part1 gets repeated.
Looking more deeply into how resque-status works, a possible workaround is to go straight to resque for the re-queue, using the same parameters that resque-status would use.
def process
  do_part1 unless options['part1_finished']
  options['part1_finished'] = true
  do_part2
rescue Resque::TermException
  Resque.enqueue self.class, uuid, options
  raise DeferredToNewJob
end
Of course, this is undocumented so may be incompatible with future releases of resque-status.
There is a drawback: between that job failing and the new job picking up the work, resque-status will report the status of the first job.
This is why I re-raise a new exception: otherwise the job status would show completed until the new worker picks up the old job, which may confuse processes that are watching and waiting for the job to finish.
By raising a new exception, DeferredToNewJob, the job status will temporarily show failure, which is easier to work around at the front end, and this specific exception can be automatically cleared from the resque failure queue.
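For reference, DeferredToNewJob is just a custom marker exception; a one-line definition is enough:
# Marker exception: the work was handed off to a freshly enqueued job,
# so this "failure" can safely be cleared from the failure queue.
class DeferredToNewJob < StandardError; end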
UPDATE
resque-status provides support for an on_failure handler. If a method with this name is defined as an instance method on the job class, we can make this even simpler.
Here's my on_failure:
def on_failure(e)
  if e.is_a? DeferredToNewJob
    tick('Waiting for new job')
  else
    raise e
  end
end
With this in place, the job spends basically no time in the failed state for processes watching its status.
In addition, if resque-status finds this handler, it won't raise the exception up to resque, so the job won't get added to the failed queue.
def perform
  refund_log = {
    success: refund_retry.success?,
    amount: refund_amount,
    action: "refund"
  }
  if refund_retry.success?
    refund_log[:reference] = refund_retry.transaction.id
    refund_log[:message] = refund_retry.transaction.status
  else
    refund_log[:message] = refund_retry.message
    refund_log[:params] = {}
    refund_retry.errors.each do |error|
      refund_log[:params][error.code] = error.message
    end
    order_transaction.message = refund_log[:params].values.join('|')
    raise "delayed RefundJob has failed"
  end
end
When I raise "delayed RefundJob has failed" in the else branch, it creates an Airbrake notification. I want to run the job again if it ends up in the else branch.
Is there any way to re-queue the job without raising an exception, and so prevent the Airbrake notification?
I am using delayed_job version 1.
The cleanest way would be to re-queue, i.e. create a new job and enqueue it, and then exit the method normally.
To elaborate on @Roman's response, you can create a new job with a retry parameter in it and enqueue that.
If you maintain the retry parameter (incrementing it each time you re-enqueue the job), you can track how many retries you have made and thus avoid an endless retry loop.
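A minimal sketch of that idea; the RefundJob shape and the helper names here are hypothetical, not taken from the question:
# Hypothetical sketch: carry a retry counter in the job's own attributes,
# so a retry is an ordinary enqueue instead of a raised exception.
class RefundJob < Struct.new(:order_id, :retries)
  MAX_RETRIES = 3

  def perform
    return if refund_succeeded?   # placeholder for the real refund logic
    if retries < MAX_RETRIES
      Delayed::Job.enqueue RefundJob.new(order_id, retries + 1)
    else
      # surface the failure only after the retries are exhausted
      Rails.logger.error "RefundJob gave up after #{MAX_RETRIES} tries"
    end
  end
end

# The initial enqueue starts the counter at zero:
Delayed::Job.enqueue RefundJob.new(order.id, 0)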
DelayedJob expects a job to raise an error in order to be re-queued; that is by design.
From there you can either:
Ignore the exception on the Airbrake side (see https://github.com/airbrake/airbrake#filtering), so the job still gets queued again without filling your logs.
Dive into the DelayedJob code, where you can see at https://github.com/tobi/delayed_job/blob/master/lib/delayed/job.rb#L65 that a method named reschedule is available and is used by run_with_lock (https://github.com/tobi/delayed_job/blob/master/lib/delayed/job.rb#L99). From there you can call reschedule manually instead of raising your exception.
About the latter solution, I advise adding a mechanism that still files an Airbrake report on the third or a later try; that way you can still detect that something is wrong without the hassle of having your logs filled by the attempts.
I have an HTTP client written in Ruby that can make synchronous requests to URLs. However, to execute multiple requests quickly, I decided to use EventMachine. The idea is to queue all the requests and execute them using EventMachine.
class EventMachineBackend
  ...
  ...
  def execute(request)
    $q ||= EM::Queue.new
    $q.push(request)
    $q.pop { |req| req.invoke }
    EM.run { EM.next_tick { EM.stop } }
  end
  ...
end
Forgive my use of a global queue variable; I will refactor it later. Is what I am doing in EventMachineBackend#execute the right way to use EventMachine queues?
One problem I see in my implementation is that it is essentially synchronous: I push a request, pop and execute it, and wait for it to complete.
Could anyone suggest a better implementation?
Your request logic has to be asynchronous for it to work with EventMachine; I suggest that you use em-http-request. You can find an example of how to use it here; it shows how to run requests in parallel. An even better interface for running multiple connections in parallel is the MultiRequest class from the same gem.
If you want to queue requests and only run a fixed number of them in parallel you can do something like this:
EM.run do
  urls = [...] # regular array with URLs
  active_requests = 0

  # declared up front so the callback below sees it as a local variable
  launch_next = nil

  # this routine will be used as a callback and will
  # be run when each request finishes
  when_done = proc do
    active_requests -= 1
    if urls.empty? && active_requests == 0
      # if there are no more urls and there are no active
      # requests it means we're done, so shut down the reactor
      EM.stop
    elsif !urls.empty?
      # if there are more urls, launch a new request
      launch_next.call
    end
  end

  # this routine launches a request
  launch_next = proc do
    # get the next url to fetch
    url = urls.pop
    # launch the request, and register the callback
    request = EM::HttpRequest.new(url).get
    request.callback(&when_done)
    request.errback(&when_done)
    # increment the number of active requests, this
    # is important since it will tell us when all requests
    # are done
    active_requests += 1
  end

  # launch three requests in parallel, each will launch
  # a new request when done, so there will always be
  # three requests active at any one time, unless there
  # are no more urls to fetch
  3.times do
    launch_next.call
  end
end
Caveat emptor, there may very well be some detail I've missed in the code above.
If you think it's hard to follow the logic in my example, welcome to the world of evented programming. It's really tricky to write readable evented code. It all goes backwards. Sometimes it helps to start reading from the end.
I've assumed that you don't want to add more requests after you've started downloading; it doesn't look like it from the code in your question. Should you want to, you can rewrite my code to use an EM::Queue instead of a regular array and remove the part that does EM.stop, since you will not be stopping. You can probably also remove the code that keeps track of the number of active requests, since that's no longer relevant. The important part would look something like this:
launch_next = proc do
  urls.pop do |url|
    request = EM::HttpRequest.new(url).get
    request.callback(&launch_next)
    request.errback(&launch_next)
  end
end
Also, bear in mind that my code doesn't actually do anything with the response. The response will be passed as an argument to the when_done routine (in the first example). I also do the same thing for success and error, which you may not want to do in a real application.
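For example, a callback that actually inspects the response might look like this; em-http-request passes the client object to its callbacks:
# Sketch: the client passed to the callback exposes the response
when_done = proc do |http|
  status = http.response_header.status  # integer HTTP status code
  body   = http.response                # response body as a string
  puts "#{status}: #{body.bytesize} bytes"
end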