I'm building a Sinatra API endpoint that triggers a long-running operation in a subprocess. I'm using the exception_notification gem, but I don't understand how to use it in the forked process.
Sinatra app:
require 'rubygems'
require 'bundler/setup'
require 'sinatra'
require 'exception_notification'

use ExceptionNotification::Rack,
  :email => {
    :email_prefix => "[Example] ",
    :sender_address => %{"notifier" <notifier@example.com>},
    :exception_recipients => %w{me@example.com},
    :delivery_method => :sendmail
  }

get '/error' do
  raise 'Bad!' # Notification gets sent
end

get '/error_async' do
  p1 = fork do
    sleep 10
    raise 'Bad! (async)' # Notification never gets sent
  end
  Process.detach(p1)
end
Got it working, per the docs:
get '/error_async' do
  p1 = fork do
    begin
      sleep 10
      raise 'Bad! (async)'
    rescue Exception => e
      ExceptionNotifier.notify_exception(e)
    end
  end
  Process.detach(p1)
end
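If you also want the child process to keep failing visibly after the notification goes out (so its exit status still reflects the error), one variation is to re-raise after calling notify_exception. A minimal sketch of that, using the same code as above:

get '/error_async' do
  p1 = fork do
    begin
      sleep 10
      raise 'Bad! (async)'
    rescue Exception => e
      # Send the notification first, then re-raise so the forked
      # process still terminates with a non-zero exit status.
      ExceptionNotifier.notify_exception(e)
      raise
    end
  end
  Process.detach(p1)
end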
I have a Ruby script that uses the Bunny gem to connect to a RabbitMQ instance. The script works for a while, but eventually dies because of a Net::ReadTimeout:
E, [2017-08-13T08:48:09.671988 #21351] ERROR -- #<Bunny::Session:0x39eca20 scrapes@104.196.154.25:5672, vhost=/, addresses=[104.196.154.25:5672]>: Uncaught exception from consumer #<Bunny::Consumer:32353120 @channel_id=1 @queue=sc_link_queue @consumer_tag=bunny-1502631967000-46739673895>: #<Net::ReadTimeout: Net::ReadTimeout> @ /home/rails/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/net/protocol.rb:158:in `rbuf_fill'
E, [2017-08-13T08:48:32.468023 #23205] ERROR -- #<Bunny::Session:0x42202a0 scrapes@104.196.154.25:5672, vhost=/, addresses=[104.196.154.25:5672]>: Uncaught exception from consumer #<Bunny::Consumer:36695920 @channel_id=1 @queue=sc_link_queue @consumer_tag=bunny-1502631972000-482787698591>: #<Net::ReadTimeout: Net::ReadTimeout> @ /home/rails/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/net/protocol.rb:158:in `rbuf_fill'
My script looks like this:
module Sc
  class Worker
    def initialize
      init()
    end

    def self.start_headless(type)
      Headless.new(display: 50, destroy_at_exit: false, reuse: true).start
      worker = new
      worker.send(type)
    end

    def init
      $conn ||= Bunny.new($rabbitmq_opts)
      $conn.start
      @browser = Sc::Browser.new()
    rescue Timeout::Error, Net::ReadTimeout, Selenium::WebDriver::Error::UnknownError, Errno::ECONNREFUSED, Selenium::WebDriver::Error::JavascriptError, Exception, StandardError => e
      LOGGER.error("[x] Trouble connecting to rabbitmq, retrying...")
      LOGGER.error("[x] #{e}")
      LOGGER.error("[x] #{e.backtrace}")
      retry
    end

    def listen_for_searches
      channel = $conn.create_channel
      channel.prefetch(1)
      queue = channel.queue($rabbitmq_search_queue, durable: true)
      exchange = channel.default_exchange
      queue.subscribe(:manual_ack => true, :block => true) do |delivery_info, properties, payload|
        LOGGER.info "[x] Received #{payload}"
        payload = JSON.parse(payload)
        scrape = Sc::Search.new(browser: @browser.browser, county: payload["name"], type: payload["type"], date_type: payload["date_type"])
        scrape.run
        scrape.close
        channel.ack(delivery_info.delivery_tag)
      end
    rescue Timeout::Error, Net::ReadTimeout, Selenium::WebDriver::Error::UnknownError, Errno::ECONNREFUSED, Selenium::WebDriver::Error::JavascriptError, Exception, StandardError => e
      LOGGER.error("[x] #{e}")
      LOGGER.error("[x] #{e.backtrace}")
      LOGGER.error("[x] Trouble with scrape, retrying...")
      retry
    end
  end
end
As you can see, I'm trying to rescue pretty much everything that could happen, yet I still can't get it to recover from the Net::ReadTimeout error. Once the worker dies you can still see that it is connected to RabbitMQ, but the last item it took from the queue stays unacknowledged; it is essentially hung.
I have solved this. The issue is that everything inside the Bunny subscribe block runs on a different thread, so the rescue needs to go inside that block, as in the sketch below.
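A minimal sketch of that fix, reusing the listen_for_searches method from the question (Sc::Search, LOGGER, @browser and the queue variables are the asker's; the rescue list is trimmed for brevity, and whether to ack, requeue, or retry on failure is left as a comment because it depends on the app):

def listen_for_searches
  channel = $conn.create_channel
  channel.prefetch(1)
  queue = channel.queue($rabbitmq_search_queue, durable: true)

  queue.subscribe(:manual_ack => true, :block => true) do |delivery_info, properties, payload|
    begin
      # Rescue inside the block: Bunny runs this code on a consumer
      # thread, so exceptions raised here never reach a method-level
      # rescue wrapped around subscribe.
      LOGGER.info "[x] Received #{payload}"
      payload = JSON.parse(payload)
      scrape = Sc::Search.new(browser: @browser.browser, county: payload["name"], type: payload["type"], date_type: payload["date_type"])
      scrape.run
      scrape.close
      channel.ack(delivery_info.delivery_tag)
    rescue Net::ReadTimeout, StandardError => e
      LOGGER.error("[x] #{e}")
      LOGGER.error("[x] Trouble with scrape, skipping this message...")
      # Decide here whether to ack, requeue (channel.nack), or retry.
    end
  end
end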
I am using the AirBnb Nerve service. Its service code looks like this:
require 'logger'
require 'json'
require 'timeout'

require 'nerve/version'
require 'nerve/utils'
require 'nerve/log'
require 'nerve/ring_buffer'
require 'nerve/reporter'
require 'nerve/service_watcher'

module Nerve
  class Nerve
    include Logging

    def initialize(opts={})
      log.info 'nerve: starting up!'

      # set global variable for exit signal
      $EXIT = false

      # ...some code...

      # Any exceptions in the watcher threads should wake the main thread so
      # that we can fail fast.
      Thread.abort_on_exception = true

      log.debug 'nerve: completed init'
    end

    def run
      log.info 'nerve: starting run'

      @services.each do |name, config|
        launch_watcher(name, config)
      end

      begin
        sleep
      rescue StandardError => e
        log.error "nerve: encountered unexpected exception #{e.inspect} in main thread"
        raise e
      ensure
        $EXIT = true
        log.warn 'nerve: reaping all watchers'
        @watchers.each do |name, watcher_thread|
          reap_watcher(name)
        end
      end

      log.info 'nerve: exiting'
    ensure
      $EXIT = true
    end

    def launch_watcher(name, config)
      # ... some code ...
    end

    def reap_watcher(name)
      # ... some code ...
    end
  end
end
I do not see any stop method. What is the right way of stopping such a service? I am using JRuby and intend to write a JSVC adapter for this service.
There is no way to do this via the current API, short of sending it a signal.
If sending a signal isn't going to work and you want to handle stop explicitly, it looks like you will need to change the following things:
Add a #stop method to Nerve that sets $EXIT = true.
Modify #run so that, rather than sleeping forever (sleep), it wakes up periodically and checks $EXIT. A rough sketch of both changes follows.
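Here is a rough sketch of those two changes, assuming the rest of the class stays as shown above (the one-second wake-up interval is an arbitrary choice):

module Nerve
  class Nerve
    # New: lets an embedder (e.g. a JSVC adapter) request shutdown.
    def stop
      log.info 'nerve: stop requested'
      $EXIT = true
    end

    def run
      log.info 'nerve: starting run'

      @services.each do |name, config|
        launch_watcher(name, config)
      end

      begin
        # Wake up periodically instead of sleeping forever, so a call
        # to #stop (which sets $EXIT) is noticed within about a second.
        sleep 1 until $EXIT
      rescue StandardError => e
        log.error "nerve: encountered unexpected exception #{e.inspect} in main thread"
        raise e
      ensure
        $EXIT = true
        log.warn 'nerve: reaping all watchers'
        @watchers.each do |name, watcher_thread|
          reap_watcher(name)
        end
      end

      log.info 'nerve: exiting'
    ensure
      $EXIT = true
    end
  end
end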
I'm trying to use EM-Synchrony for concurrency in an application and have come across an issue with my use of deferred code and Fibers.
Any call to the database within either EM.defer or EM::Synchrony.defer results in the application crashing with the error "can't yield from root fiber".
Below is a very trimmed-down, runnable example of what I'm trying to accomplish. The first print works and displays [:first, 1], but the second crashes with the error mentioned above.
require 'mysql2'
require 'em-synchrony/activerecord'

ActiveRecord::Base.establish_connection(
  :adapter  => 'em_mysql2',
  :username => 'user',
  :password => 'pass',
  :host     => 'localhost',
  :database => 'app_dev',
  :pool     => 60
)

class User < ActiveRecord::Base; end

EM.synchrony do
  p [:first, User.all.count]

  EM::Synchrony.defer do
    p [:second, User.all.count]
  end
end
My first thought was that, given the Fiber.current and Fiber.yield calls within EM::Synchrony.defer, I could fix the problem with an extra Fiber.new call:
EM::Synchrony.defer do
  Fiber.new do
    p [:second, User.all.count]
  end.resume
end
This fails to run as well, but this time I get the error "fiber called across threads".
Starting my mailman app by running rails runner lib/daemons/mailman_server.rb works fine.
When I start it with my daemon script, using the command bundle exec rails runner script/daemon run mailman_server.rb, it generates an error:
.rvm/gems/ruby-1.9.3-p194/gems/mailman-0.5.3/lib/mailman/route/conditions.rb:21:in `match': undefined method `each' for nil:NilClass (NoMethodError)
My code is as follows:
lib/daemons/mailman_server.rb
require 'mailman'

# Config Mailman
Mailman.config.ignore_stdin = false
Mailman.config.graceful_death = true
Mailman.config.poll_interval = 15
Mailman.config.logger = Logger.new File.expand_path("../../../log/mailman.log", __FILE__)
Mailman.config.pop3 = {
  :username => 'alias@mygoogleapp.com',
  :password => 'password',
  :server   => 'pop.gmail.com',
  :port     => 995,
  :ssl      => true
}

# Run the mailman
Mailman::Application.run do
  from('%email%').to('alias+q%id%@mygoogleapp.com') do |email, id|
    begin
      # Get the message body without headers to pass to add_answer_from_email
      if message.multipart?
        reply = message.text_part.body.decoded
      else
        reply = message.body.decoded
      end
      # Call upon the question to add the answer to its set
      Question.find(id).add_answer_from_email(email, reply)
    rescue Exception => e
      Mailman.logger.error "Exception occurred while receiving message:\n#{message}"
      Mailman.logger.error [e, *e.backtrace].join("\n")
    end
  end
end
and my script/daemon file is:
#!/usr/bin/env ruby
require 'rubygems'
require "bundler/setup"
require 'daemons'
ENV["APP_ROOT"] ||= File.expand_path("#{File.dirname(__FILE__)}/..")
script = "#{ENV["APP_ROOT"]}/lib/daemons/#{ARGV[1]}"
Daemons.run(script, dir_mode: :normal, dir: "#{ENV["APP_ROOT"]}/tmp/pids")
Any insight as to why it fails as a daemon?
I'm using the SystemTimer gem to deal with timeout problems.
https://github.com/ph7/system-timer
I can't find a way to catch the exception when a timeout occurs:
begin
  SystemTimer.timeout_after(10.seconds) do
    # facebook api
    rest_graph.fql(query)
  end
rescue RestGraph::Error::InvalidAccessToken
  return nil
rescue Timeout::Error
  # never executed
end
But the last rescue clause, for Timeout::Error, is never triggered.
Why not use Timeout, which comes with 1.9.2 and is designed to do this?
require 'timeout'

status = Timeout::timeout(5) {
  # Something that should be interrupted if it takes too much time...
}
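Timeout::timeout raises Timeout::Error when the block runs too long, so you can catch it with an ordinary rescue. A minimal sketch (the sleep stands in for the Facebook API call from the question):

require 'timeout'

begin
  Timeout::timeout(5) do
    # Stand-in for a slow call such as rest_graph.fql(query).
    sleep 10
  end
rescue Timeout::Error
  # Handle the timeout, e.g. return nil as in the original code.
  nil
end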
Try this (based on your link):
class TimedOut < StandardError
end

begin
  SystemTimer.timeout_after(10.seconds, TimedOut) do
    # ...
  end
rescue TimedOut
  # ...
end