NewRelic Ruby Agent: errors not ignored

In a Sidekiq worker I want to ignore some errors in NewRelic while still raising them so that Sidekiq's retry behaviour kicks in.
I wrote a helper class that calls NewRelic::Agent.ignore_transaction, but NewRelic still reports these exceptions and complains about the error rate. What am I doing wrong?
I guess I may have misunderstood something about the scope of the transaction?
module NewRelic
  # Run a block which rescues some exceptions and ignores them in NewRelic
  class IgnoreExceptions
    class << self
      def ignore(*class_list)
        raise ArgumentError, 'Block required' unless block_given?
        begin
          yield
        rescue errors_matching { |e| class_list.include?(e.class) }
          NewRelic::Agent.ignore_transaction
          raise
        end
      end

      private

      # Build an anonymous module whose === delegates to the block, so it
      # can be used as a rescue matcher.
      def errors_matching(&block)
        Module.new.tap { |m| m.define_singleton_method(:===, &block) }
      end
    end
  end
end

Instead of trying to ignore the transaction, you may be better off targeting the errors themselves. If there is a specific error class you would like the New Relic Ruby agent to ignore, you can use the error_collector.ignore_errors configuration option. Another possible solution is the NewRelic::Agent.ignore_error_filter method, which filters the errors the agent records.
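For illustration, here is a hedged sketch of both approaches. MyApp::ExpectedError is an assumed class name, and the exact config key should be checked against your agent version's documentation:
# In newrelic.yml, ignore_errors takes a comma-separated list of class names:
#   error_collector:
#     ignore_errors: "MyApp::ExpectedError,ActiveRecord::RecordNotFound"

# Or in Ruby: the filter block receives each error; returning nil drops it,
# returning the error records it as usual.
NewRelic::Agent.ignore_error_filter do |error|
  error.is_a?(MyApp::ExpectedError) ? nil : error
end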

Related

Sidekiq transient vs fatal errors

Is there a way to raise an error from a Sidekiq job in a way that tells Sidekiq "this error is fatal and unrecoverable, do not retry, send it straight to the dead job queue"?
Looking at the Sidekiq Error Handling documentation, it seems to interpret all errors as transient and will retry a job (if retry is enabled) regardless of the error type.
You should rescue those specific errors and not re-raise them.
def perform
  call_something
rescue CustomException
  nil
end
Edit:
Well, if you want to purposely send a message to the DLQ/DJQ, you'd need to make a method that does what #send_to_morgue does. I'm sure Mike Perham is going to come in here and yell at me for suggesting this but...
def send_to_morgue(msg)
  Sidekiq.logger.info { "Adding dead #{msg['class']} job #{msg['jid']}" }
  payload = Sidekiq.dump_json(msg)
  now = Time.now.to_f
  Sidekiq.redis do |conn|
    conn.multi do
      conn.zadd('dead', now, payload)
      conn.zremrangebyscore('dead', '-inf', now - DeadSet.timeout)
      conn.zremrangebyrank('dead', 0, -DeadSet.max_jobs)
    end
  end
end
The only difference is that you'd have to dig into what msg looks like going into that method, but I suspect it's what normally hits the middleware before parsing.
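For example, a hedged sketch of calling such a method from a worker. HardFailingWorker, do_work, and FatalError are assumed names, and the msg hash is hand-built to mimic Sidekiq's documented job payload:
class HardFailingWorker
  include Sidekiq::Worker

  def perform(*args)
    do_work(*args)
  rescue FatalError
    # Build a payload shaped like the raw job message; adjust it to
    # whatever send_to_morgue actually expects in your Sidekiq version.
    send_to_morgue('class' => self.class.name, 'jid' => jid, 'args' => args)
  end
end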
I found a solution for your problem on GitHub. In that post they suggest writing a custom middleware that handles the exceptions you want to prevent retries for. This is a basic example:
def call(worker, msg, queue)
  begin
    yield
  rescue ActiveRecord::RecordNotFound => e
    msg['retry'] = false
    raise
  end
end
Extending that, you get:
def call(worker, msg, queue)
  begin
    yield
  rescue ActiveRecord::RecordNotFound => e
    msg['retry'] = false
    raise
  rescue Exception => e
    if worker.respond_to?(:handle_error)
      worker.handle_error(e)
    else
      raise
    end
  end
end
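For completeness, a sketch of wiring such middleware into Sidekiq, assuming the call method above lives in a class named RetryControlMiddleware (the name is an assumption):
Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add RetryControlMiddleware
  end
end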

Handle exceptions in concurrent-ruby thread pool

How to handle exceptions in concurrent-ruby thread pools (http://ruby-concurrency.github.io/concurrent-ruby/file.thread_pools.html)?
Example:
pool = Concurrent::FixedThreadPool.new(5)
pool.post do
  raise 'something goes wrong'
end
# how to rescue this exception here?
Update:
Here is a simplified version of my code:
def process
  pool = Concurrent::FixedThreadPool.new(5)
  products.each do |product|
    new_product = generate_new_product
    pool.post do
      store_in_db(new_product) # here an exception is raised, e.g. connection to db failed
    end
  end
  pool.shutdown
  pool.wait_for_termination
end
So what I want to achieve is to stop processing (break the loop) in case of any exception.
This exception is also rescued at a higher level of the application, where some cleanup jobs are executed (like setting the state of the model to failure and sending some notifications).
The following answer is from jdantonio, from https://github.com/ruby-concurrency/concurrent-ruby/issues/616:
"
Most applications should not use thread pools directly. Thread pools are a low-level abstraction meant for internal use. All of the high-level abstractions in this library (Promise, Actor, etc.) all post jobs to the global thread pool and all provide exception handling. Simply pick the abstraction that best fits your use case and use it.
If you feel the need to configure your own thread pool rather than use the global thread pool, you can still use the high-level abstractions. They all support an :executor option which allows you to inject your custom thread pool. You can then use the exception handling provided by the high-level abstraction.
If you absolutely insist on posting jobs directly to a thread pool rather than using our high-level abstractions (which I strongly discourage) then just create a job wrapper. You can find examples of job wrappers in all our high-level abstractions, Rails ActiveJob, Sucker Punch, and other libraries which use our thread pools."
So how about an implementation with Promises?
http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Promise.html
In your case it would look something like this:
promises = []
products.each do |product|
  new_product = generate_new_product
  promises << Concurrent::Promise.execute do
    store_in_db(new_product)
  end
end

# .zip makes one Promise which contains all the other promises.
# .value! waits for the threads to finish; the ! means that any
# exception will be propagated to the main thread.
Concurrent::Promise.zip(*promises).value!
There may be a better way, but this does work. You will want to change the error handling within wait_for_pool_to_finish.
def process
  pool = Concurrent::FixedThreadPool.new(10)
  errors = Concurrent::Array.new
  10_000.times do
    pool.post do
      begin
        # do the work
      rescue StandardError => e
        errors << e
      end
    end
  end
  wait_for_pool_to_finish(pool, errors)
end

private

def wait_for_pool_to_finish(pool, errors)
  pool.shutdown
  until pool.shutdown?
    if errors.any?
      pool.kill
      fail errors.first
    end
    sleep 1
  end
  pool.wait_for_termination
end
I've created issue #634. The concurrent-ruby thread pool can support an abortable worker without any problems.
require "concurrent"
Concurrent::RubyThreadPoolExecutor.class_eval do
# Inspired by "ns_kill_execution".
def ns_abort_execution aborted_worker
#pool.each do |worker|
next if worker == aborted_worker
worker.kill
end
#pool = [aborted_worker]
#ready.clear
stopped_event.set
nil
end
def abort_worker worker
synchronize do
ns_abort_execution worker
end
nil
end
def join
shutdown
# We should wait for stopped event.
# We couldn't use timeout.
stopped_event.wait nil
#pool.each do |aborted_worker|
# Rubinius could receive an error from aborted thread's "join" only.
# MRI Ruby doesn't care about "join".
# It will receive error anyway.
# We can "raise" error in aborted thread and than "join" it from this thread.
# We can "join" aborted thread from this thread and than "raise" error in aborted thread.
# The order of "raise" and "join" is not important. We will receive target error anyway.
aborted_worker.join
end
#pool.clear
nil
end
class AbortableWorker < self.const_get :Worker
def initialize pool
super
#thread.abort_on_exception = true
end
def run_task pool, task, args
begin
task.call *args
rescue StandardError => error
pool.abort_worker self
raise error
end
pool.worker_task_completed
nil
end
def join
#thread.join
nil
end
end
self.send :remove_const, :Worker
self.const_set :Worker, AbortableWorker
end
class MyError < StandardError; end

pool = Concurrent::FixedThreadPool.new 5

begin
  pool.post do
    sleep 1
    puts "we shouldn't receive this message"
  end
  pool.post do
    puts "raising my error"
    raise MyError
  end
  pool.join
rescue MyError => error
  puts "received my error, trace: \n#{error.backtrace.join("\n")}"
end
sleep 2
Output:
raising my error
received my error, trace:
...
This patch works fine for any version of MRI Ruby and Rubinius. JRuby is not working, and I don't care; please patch the JRuby executor if you want to support it. It should be easy.

How to stub out global logging function in rspec examples

I am working with some code which logs to a global static logging class, e.g.:
GlobalLog.debug("Some message")
However, in my tests I don't want to include the real log, because it introduces a lot of unwanted dependencies. So I want to mock it out:
describe "some function" do
before(:all) do
log = double('log')
GlobalLog = log
log.stub(:debug)
end
...
end
Unfortunately, because doubles are cleared out after each example, this isn't allowed:
https://www.relishapp.com/rspec/rspec-mocks/docs/scope
If I change the before(:all) to before(:each), the code works, but I get a warning:
warning: already initialized constant GlobalLog
This is clogging up my test output, so I'd like to avoid the warning. Is there a clean solution?
Define GlobalLog once in your spec_helper.rb.
class GlobalLog
  class << self
    [:info, :debug, :warn, :error].each do |method|
      define_method(method) { |*| }
    end
  end
end
You could throw it in spec/support if you want to be cleaner about it.
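If you do put it in spec/support, something like this in spec_helper.rb is a common way to load it (the path is an assumption; adjust it to your project layout):
# Require everything under spec/support relative to this file.
Dir[File.expand_path('support/**/*.rb', __dir__)].each { |f| require f }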
Why not stub the original GlobalLog method directly?
before(:each) do
  GlobalLog.stub(:debug)
end
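Note that stub is the old rspec-mocks syntax; with current RSpec the equivalent would be:
before(:each) do
  allow(GlobalLog).to receive(:debug)
end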

How can I detect an API error?

I am developing a Ruby application that depends on an API from another team.
Is there a good way to print an error message indicating that it was generated by their API?
For example, there's a method provided by the API called foo(), so when I do:
api.foo()
it will return an error message: "foo error"
When I develop my code, I want the error message to look like: "api: foo error"
That way, when I see this error message, I know it's an API error, not an error in my own code.
So far the best practice I can think of is to wrap all the methods provided by the API, for example:
class ApiWrap
  def initialize(api)
    @api = api
  end

  def foo
    begin
      @api.foo()
    rescue => e
      raise "api: #{e.message}"
    end
  end
end
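If you don't want to write one wrapper method per API call, a hedged variation is to forward everything through method_missing (a sketch, not the asker's code; ApiWrap as above):
class ApiWrap
  def initialize(api)
    @api = api
  end

  # Forward any unknown method to the wrapped API, prefixing errors.
  def method_missing(name, *args, &block)
    @api.public_send(name, *args, &block)
  rescue => e
    raise "api: #{e.message}"
  end

  def respond_to_missing?(name, include_private = false)
    @api.respond_to?(name, include_private) || super
  end
end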
If the API uses its own exception class, you can redefine it like this:
class APIException
  alias_method :old_exception, :exception

  def exception(message)
    old_exception(message.prepend("api: ")) # Ruby 1.9.3+
    # old_exception("api: " + message)      # for older Rubies
  end
end

RSpec and Open-URI: how do I mock raising a SocketError/TimeoutError?

I want to be able to spec out that when an Open-URI open() call either times out or raises an exception such as SocketError, I handle things as expected; however, I'm having trouble with this.
Here is my spec (for SocketError):
@obj.should_receive(:open).with("some_url").and_raise(SocketError)
And the part of my object where I'm using open-uri:
begin
  resp = open(url)
  resp = resp.read
rescue SocketError
  something = true
end
However, in this situation the spec fails with a nil.read error.
This is the second time this week I've come across this problem. The previous time I was attempting to simulate a TimeoutError when wrapping open() with timeout() {}; that time I gave up and just caused an actual timeout to happen by opening up the class. I could obviously cause this to throw a SocketError by calling an invalid URL, but I'm sure there is a correct way to mock this out with RSpec.
Update: I obviously wasn't thinking clearly that late at night. The error actually occurred when I retried the URL after the SocketError; the and_raise(SocketError) part worked fine.
The line you provided should work, based on the information you've given: I made a tiny test class and spec (see below) with only the described functionality, and things behaved as expected. It might be helpful if you could provide a little more context - the full "it" block from the spec, for instance, might expose some other problem.
As mentioned, the following spec passes, and I believe it captures the logic you were attempting to verify:
require 'rubygems'
require 'spec'

class Foo
  attr_accessor :socket_error

  def get(url)
    @socket_error = false
    begin
      resp = open(url)
      resp = resp.read
    rescue SocketError
      @socket_error = true
    end
  end
end

describe Foo do
  before do
    @foo = Foo.new
  end

  it "should handle socket errors" do
    @foo.should_receive(:open).with("http://www.google.com").and_raise(SocketError)
    @foo.get("http://www.google.com")
    @foo.socket_error.should be_true
  end
end
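For reference, under current RSpec the same expectation would be written with the expect syntax (behavior unchanged):
it "should handle socket errors" do
  expect(@foo).to receive(:open).with("http://www.google.com").and_raise(SocketError)
  @foo.get("http://www.google.com")
  expect(@foo.socket_error).to be true
end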
