Testing a Sidekiq worker and the retry set with Rails minitest - ruby

I have a Sidekiq middleware which catches custom exceptions:
require 'celluloid'
require 'sidekiq/middleware/server/retry_jobs'

module Sidekiq
  class RetryMiddleware < Sidekiq::Middleware::Server::RetryJobs
    def call(worker, msg, queue)
      yield
    rescue Sidekiq::Shutdown
      # ignore, will be pushed back onto queue during hard_shutdown
      raise
    rescue Sidekiq::Retries::Retry => e
      # force a retry (for workers that have retries disabled)
      msg['retry'] = e.max_retries
      attempt_retry(worker, msg, queue, e.cause)
      raise e.cause
    rescue Sidekiq::Retries::Fail => e
      # seriously, don't retry this
      raise e.cause
    rescue Exception => e
      # ignore, will be pushed back onto queue during hard_shutdown
      raise Sidekiq::Shutdown if exception_caused_by_shutdown?(e)
      raise e unless msg['retry']
      attempt_retry(worker, msg, queue, e)
      raise e
    end
  end
end
and my worker looks like this:
class SomeWorker
  include Sidekiq::Worker
  sidekiq_options retry: false

  def perform(input_data)
    # logic to insert data into db
  rescue Ione::Io::ConnectionClosedError => e
    raise Sidekiq::Retries::Retry
  end
end
I am trying to test that SomeWorker's perform method adds the job to the retry set, but in my tests I am not seeing the middleware get called.
Thanks in advance

You're making this way too hard on yourself; just call the methods directly:
RetryMiddleware.new.call(MyWorker.new, { ... }, 'default') do
  MyWorker.new.perform(...)
end

Related

Ruby: how to stop execution after rescue

I have a function where, when an exception is raised, I rescue it.
But the program continues to the next line and calls the next function, create_request.
When there is an exception, I do not want to continue.
def validate_request_code(options)
  if check_everything_is_good # pseudocode condition from the question
    # code to validate
  else
    errors << "something is gone bad"
  end
  [errors.size == 0, errors.size == 0 ? options : raise(ArgumentError, "Error while validating #{errors}")]
end
I am trying to catch/rescue the exception:
def validate_request(options)
  begin
    validate_request_code(options)
  rescue ArgumentError => e
    log :error
  rescue Exception => e
    log :error
  end
  sleep 20
  if options['action'] == "create"
    create_request options
  end
end
If by 'not continue' you mean that you want the original error to keep propagating (i.e., you just want to take some action on the way by), you can call raise inside the rescue block, which re-raises the original error.
def foo
  begin
    # stuff
  rescue StandardError => e
    # handle error
    raise
  end
end
You can also simply return from within the rescue block.
def foo
  begin
    # stuff
  rescue StandardError => e
    # handle error
    return some_value
  end
end
As an aside, you generally want to rescue StandardError rather than Exception. Everything you can reasonably handle within your application is covered by StandardError; what lies outside it, like out-of-memory errors, is outside your control.
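Applied to the method from the question, the early return keeps create_request from running when validation fails (a sketch based on the code above):

def validate_request(options)
  begin
    validate_request_code(options)
  rescue ArgumentError => e
    log :error
    return # stop here; don't fall through to create_request
  end
  sleep 20
  create_request(options) if options['action'] == "create"
end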

Trouble Rescuing from Bunny Connection Net::ReadTimeout

I have a ruby script that uses the Bunny gem to connect to a RabbitMQ instance. The script works for a while but eventually dies because of a Net::ReadTimeout:
E, [2017-08-13T08:48:09.671988 #21351] ERROR -- #<Bunny::Session:0x39eca20 scrapes@104.196.154.25:5672, vhost=/, addresses=[104.196.154.25:5672]>: Uncaught exception from consumer #<Bunny::Consumer:32353120 @channel_id=1 @queue=sc_link_queue @consumer_tag=bunny-1502631967000-46739673895>: #<Net::ReadTimeout: Net::ReadTimeout> @ /home/rails/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/net/protocol.rb:158:in `rbuf_fill'
E, [2017-08-13T08:48:32.468023 #23205] ERROR -- #<Bunny::Session:0x42202a0 scrapes@104.196.154.25:5672, vhost=/, addresses=[104.196.154.25:5672]>: Uncaught exception from consumer #<Bunny::Consumer:36695920 @channel_id=1 @queue=sc_link_queue @consumer_tag=bunny-1502631972000-482787698591>: #<Net::ReadTimeout: Net::ReadTimeout> @ /home/rails/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/net/protocol.rb:158:in `rbuf_fill'
My script looks like this:
module Sc
  class Worker
    def initialize
      init()
    end

    def self.start_headless(type)
      Headless.new(display: 50, destroy_at_exit: false, reuse: true).start
      worker = new
      worker.send(type)
    end

    def init
      $conn ||= Bunny.new($rabbitmq_opts)
      $conn.start
      @browser = Sc::Browser.new()
    rescue Timeout::Error, Net::ReadTimeout, Selenium::WebDriver::Error::UnknownError, Errno::ECONNREFUSED, Selenium::WebDriver::Error::JavascriptError, Exception, StandardError => e
      LOGGER.error("[x] Trouble connecting to rabbitmq, retrying...")
      LOGGER.error("[x] #{e}")
      LOGGER.error("[x] #{e.backtrace}")
      retry
    end

    def listen_for_searches
      channel = $conn.create_channel
      channel.prefetch(1)
      queue = channel.queue($rabbitmq_search_queue, durable: true)
      exchange = channel.default_exchange
      queue.subscribe(:manual_ack => true, :block => true) do |delivery_info, properties, payload|
        LOGGER.info "[x] Received #{payload}"
        payload = JSON.parse(payload)
        scrape = Sc::Search.new(browser: @browser.browser, county: payload["name"], type: payload["type"], date_type: payload["date_type"])
        scrape.run
        scrape.close
        channel.ack(delivery_info.delivery_tag)
      end
    rescue Timeout::Error, Net::ReadTimeout, Selenium::WebDriver::Error::UnknownError, Errno::ECONNREFUSED, Selenium::WebDriver::Error::JavascriptError, Exception, StandardError => e
      LOGGER.error("[x] #{e}")
      LOGGER.error("[x] #{e.backtrace}")
      LOGGER.error("[x] Trouble with scrape, retrying...")
      retry
    end
  end
end
end
As you can see, I'm trying to rescue pretty much everything that could happen, but I still can't get it to recover from the Net::ReadTimeout error. Once the worker dies you can still see that it is connected to RabbitMQ, but the last item it took from the queue stays unacknowledged; it is essentially hung.
I have solved this. The issue was that everything that runs inside the Bunny subscribe block is handled in a different thread, so you need to add the rescue statements inside that block.
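Concretely, that means something like this inside listen_for_searches (a sketch based on the question's code; channel.nack with requeue set to true is one option for putting the message back on the queue):

queue.subscribe(manual_ack: true, block: true) do |delivery_info, properties, payload|
  begin
    payload = JSON.parse(payload)
    scrape = Sc::Search.new(browser: @browser.browser, county: payload["name"],
                            type: payload["type"], date_type: payload["date_type"])
    scrape.run
    scrape.close
    channel.ack(delivery_info.delivery_tag)
  rescue StandardError => e
    LOGGER.error("[x] #{e}")
    # reject and requeue so another consumer can pick the message up
    channel.nack(delivery_info.delivery_tag, false, true)
  end
end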

Sidekiq transient vs fatal errors

Is there a way to err from a Sidekiq job in a way that tells Sidekiq that "this error is fatal and unrecoverable, do not retry, send it straight to dead job queue"?
Looking at the Sidekiq Error Handling documentation, it seems like Sidekiq interprets all errors as transient and will retry a job (if retries are enabled) regardless of the error type.
You should rescue those specific errors and not re-raise them:
def perform
  call_something
rescue CustomException
  nil
end
Edit:
Well, if you want to purposely send a message to the DLQ/DJQ, you'd need to write a method that does what #send_to_morgue does. I'm sure Mike Perham is going to come in here and yell at me for suggesting this, but...
def send_to_morgue(msg)
  Sidekiq.logger.info { "Adding dead #{msg['class']} job #{msg['jid']}" }
  payload = Sidekiq.dump_json(msg)
  now = Time.now.to_f
  Sidekiq.redis do |conn|
    conn.multi do
      conn.zadd('dead', now, payload)
      conn.zremrangebyscore('dead', '-inf', now - DeadSet.timeout)
      conn.zremrangebyrank('dead', 0, -DeadSet.max_jobs)
    end
  end
end
The only difference is that you'd have to dig into what msg looks like going into that method, but I suspect it's what normally hits the middleware before parsing.
I found a solution for your problem on GitHub. In that post they suggest writing a custom middleware that handles the exceptions you want to prevent retries for. This is a basic example:
def call(worker, msg, queue)
  begin
    yield
  rescue ActiveRecord::RecordNotFound => e
    msg['retry'] = false
    raise
  end
end
Extending that, you get:
def call(worker, msg, queue)
  begin
    yield
  rescue ActiveRecord::RecordNotFound => e
    msg['retry'] = false
    raise
  rescue Exception => e
    if worker.respond_to?(:handle_error)
      worker.handle_error(e)
    else
      raise
    end
  end
end
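For either variant, remember to register the middleware in the server chain, e.g. in config/initializers/sidekiq.rb (the middleware class name here is hypothetical):

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add NoRetryMiddleware
  end
end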

Handle exceptions in concurrent-ruby thread pool

How do you handle exceptions in concurrent-ruby thread pools (http://ruby-concurrency.github.io/concurrent-ruby/file.thread_pools.html)?
Example:
pool = Concurrent::FixedThreadPool.new(5)
pool.post do
  raise 'something goes wrong'
end
# how to rescue this exception here
Update:
Here is a simplified version of my code:
def process
  pool = Concurrent::FixedThreadPool.new(5)
  products.each do |product|
    new_product = generate_new_product
    pool.post do
      store_in_db(new_product) # an exception is raised here, e.g. connection to db failed
    end
  end
  pool.shutdown
  pool.wait_for_termination
end
What I want to achieve is to stop processing (break the loop) in case of any exception.
The exception is also rescued at a higher level of the application, where some cleanup jobs are executed (like setting the model's state to failed and sending some notifications).
The following answer is from jdantonio, from https://github.com/ruby-concurrency/concurrent-ruby/issues/616:
"
Most applications should not use thread pools directly. Thread pools are a low-level abstraction meant for internal use. All of the high-level abstractions in this library (Promise, Actor, etc.) all post jobs to the global thread pool and all provide exception handling. Simply pick the abstraction that best fits your use case and use it.
If you feel the need to configure your own thread pool rather than use the global thread pool, you can still use the high-level abstractions. They all support an :executor option which allows you to inject your custom thread pool. You can then use the exception handling provided by the high-level abstraction.
If you absolutely insist on posting jobs directly to a thread pool rather than using our high-level abstractions (which I strongly discourage) then just create a job wrapper. You can find examples of job wrappers in all our high-level abstractions, Rails ActiveJob, Sucker Punch, and other libraries which use our thread pools."
So how about an implementation with Promises?
http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Promise.html
In your case it would look something like this:
promises = []
products.each do |product|
  new_product = generate_new_product
  promises << Concurrent::Promise.execute do
    store_in_db(new_product)
  end
end
# .value will wait for the Thread to finish.
# The ! means that all exceptions will be propagated to the main thread.
# .zip will make one Promise which contains all the other promises.
Concurrent::Promise.zip(*promises).value!
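If you would rather collect every failure instead of raising on the first one, you can skip value! and inspect each promise's state instead (a sketch; wait, rejected?, and reason are part of the Promise API):

promises.each do |promise|
  promise.wait # block until this promise is settled
  if promise.rejected?
    # reason holds the exception raised inside store_in_db
    puts promise.reason
  end
end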
There may be a better way, but this does work. You will want to change the error handling within wait_for_pool_to_finish.
def process
  pool = Concurrent::FixedThreadPool.new(10)
  errors = Concurrent::Array.new
  10_000.times do
    pool.post do
      begin
        # do the work
      rescue StandardError => e
        errors << e
      end
    end
  end
  wait_for_pool_to_finish(pool, errors)
end

private

def wait_for_pool_to_finish(pool, errors)
  pool.shutdown
  until pool.shutdown?
    if errors.any?
      pool.kill
      fail errors.first
    end
    sleep 1
  end
  pool.wait_for_termination
end
I've created issue #634. The concurrent-ruby thread pool can support abortable workers without any problems.
require "concurrent"
Concurrent::RubyThreadPoolExecutor.class_eval do
# Inspired by "ns_kill_execution".
def ns_abort_execution aborted_worker
#pool.each do |worker|
next if worker == aborted_worker
worker.kill
end
#pool = [aborted_worker]
#ready.clear
stopped_event.set
nil
end
def abort_worker worker
synchronize do
ns_abort_execution worker
end
nil
end
def join
shutdown
# We should wait for stopped event.
# We couldn't use timeout.
stopped_event.wait nil
#pool.each do |aborted_worker|
# Rubinius could receive an error from aborted thread's "join" only.
# MRI Ruby doesn't care about "join".
# It will receive error anyway.
# We can "raise" error in aborted thread and than "join" it from this thread.
# We can "join" aborted thread from this thread and than "raise" error in aborted thread.
# The order of "raise" and "join" is not important. We will receive target error anyway.
aborted_worker.join
end
#pool.clear
nil
end
class AbortableWorker < self.const_get :Worker
def initialize pool
super
#thread.abort_on_exception = true
end
def run_task pool, task, args
begin
task.call *args
rescue StandardError => error
pool.abort_worker self
raise error
end
pool.worker_task_completed
nil
end
def join
#thread.join
nil
end
end
self.send :remove_const, :Worker
self.const_set :Worker, AbortableWorker
end
class MyError < StandardError; end
pool = Concurrent::FixedThreadPool.new 5
begin
pool.post do
sleep 1
puts "we shouldn't receive this message"
end
pool.post do
puts "raising my error"
raise MyError
end
pool.join
rescue MyError => error
puts "received my error, trace: \n#{error.backtrace.join("\n")}"
end
sleep 2
Output:
raising my error
received my error, trace:
...
This patch works fine on any version of MRI Ruby and Rubinius. It does not work on JRuby, and I don't care; please patch the JRuby executor if you want to support it. It should be easy.

rescue Timeouts with SystemTimer

I'm using the SystemTimer gem to deal with timeout problems.
https://github.com/ph7/system-timer
I can't find a way to catch the exception when a timeout occurs:
begin
  SystemTimer.timeout_after(10.seconds) do
    # facebook api
    rest_graph.fql(query)
  end
rescue RestGraph::Error::InvalidAccessToken
  return nil
rescue Timeout::Error
  # never executed
end
But the last rescue, for Timeout::Error, is never triggered.
Why not use Timeout, which ships with Ruby 1.9.2 and is designed to do this?
require 'timeout'

status = Timeout::timeout(5) {
  # Something that should be interrupted if it takes too much time...
}
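Timeout raises Timeout::Error by default, so you can rescue it just like in the snippet from the question (a sketch; rest_graph is assumed from the question):

require 'timeout'

begin
  Timeout.timeout(10) do
    rest_graph.fql(query)
  end
rescue Timeout::Error
  nil # timed out; handle it here
end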
Try this (based on your link):
class TimedOut < StandardError
end

begin
  SystemTimer.timeout_after(10.seconds, TimedOut) do
    # ...
  end
rescue TimedOut
  # ...
end
