Starting a Sinatra app in a new thread: the thread immediately dies (Ruby)

I have the following Sinatra app defined:
require "sinatra/base"
class App < Sinatra::Base
configure do
set port: 5000
end
get "/" do
"Hello!"
end
end
From inside a Rails app, I am trying to start the Sinatra app in the background:
Thread.new do
  App.run!
end
But it seems that the thread immediately dies; there is nothing keeping it alive.
How can I make it so that the Sinatra app starts up in the new thread and runs indefinitely (or at least for the lifetime of the app)?

Thread.new do
  App.run!
end
I'm willing to bet that App.run! is raising an exception. Thread.new with a block has a nasty habit of swallowing exceptions (see https://bugs.ruby-lang.org/issues/6647).
Do the following:
Thread.new do
  begin
    App.run!
  rescue StandardError => e
    $stderr.puts e.message
    $stderr.puts e.backtrace.join("\n")
  end
end
and see whether you see anything logged to stderr.
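Another option, if you just want the failure to surface instead of being swallowed, is Ruby's thread exception flags. A minimal sketch (Thread.abort_on_exception is standard Ruby; on Ruby 2.5+ threads also report unhandled exceptions to stderr by default via Thread.report_on_exception):

# Re-raise any unhandled thread exception in the main thread, so a
# failing App.run! crashes the process loudly instead of dying silently.
Thread.abort_on_exception = true

Thread.new do
  App.run!
end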

Related

Wait for the EventMachine queue to be empty?

I'm using a Ruby library (Faye) that assumes it's running inside EventMachine, and it starts the EM reactor in a separate thread if it isn't already inside an EM.run context. When the Rails application is started inside a Thin server, there's no problem. But for background jobs, the Rails environment is loaded and then Resque is started, which spawns a new process for every job (the self.perform method below).
So, I know that I have the reactor running, but I need to know when it is safe to return from self.perform, because returning will exit the current process and cut off any pending actions in the EM reactor. Alternatively, I could run the job inside an EM.run block, but I would then need to know when it is safe to exit.
class AsanaPopulateJob
  @queue = :profiles

  def self.logger; Rails.logger; end

  def self.perform(ident_id)
    begin
      logger.debug "==> Start AsanaPopulateJob[#{ident_id}]"
      ident = Identity.find(ident_id)
      conn = AsanaConnector.new(ident)
      Faye.ensure_reactor_running!
      conn.populate_profile!
      logger.debug "=== Sleep 5..."
      sleep(5)
    ensure
      logger.debug "<== Done AsanaPopulateJob[#{ident_id}]"
    end
  end
end
Right now, I make the call that invokes em-hiredis and then sleep for 5 seconds to let things settle down. Surely there's something better.
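One pattern that avoids the fixed sleep is to own the reactor for the lifetime of the job: run everything inside EM.run and only call EM.stop once the work signals completion. A minimal sketch, assuming populate_profile! can take a completion callback (hypothetical; adapt it to however your em-hiredis calls signal completion, e.g. their deferrables):

def self.perform(ident_id)
  ident = Identity.find(ident_id)
  conn  = AsanaConnector.new(ident)
  EM.run do
    # Hypothetical completion callback; wire this to the deferrable(s)
    # returned by the underlying em-hiredis calls.
    conn.populate_profile! do
      EM.stop  # no pending reactor work left for this job
    end
  end
  # EM.run blocks until EM.stop, so it is safe to return here.
end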

Kill all threads on terminate

I'm trying to create an app in Ruby that can be started from the command line and does two things: it runs a continuous job in one thread (a loop with a sleep that runs some action [remote feed parsing]) and Sinatra in a second thread. My code (simplified) looks like this:
require 'sinatra'

class MyApp < Sinatra::Base
  get '/' do
    "Hello!"
  end
end

threads = []

threads << Thread.new do
  loop do
    # do something heavy
    sleep 10
  end
end

threads << Thread.new do
  MyApp.run!
end
threads.each { |t| t.join }
The above code actually does its job very well: the Sinatra app is started and available on port 4567, and the "do something heavy" task fires every 10 seconds. However, I'm not able to kill the script.
I'm running it with ruby app.rb, but killing it with Ctrl+C doesn't work. It kills just the Sinatra thread; the second one keeps running, and to stop the script I need to close the terminal window.
I tried to kill all the threads on SIGINT, but it's also not working as expected:
trap "SIGINT" do
puts "Exiting"
threads.each { |t| Thread.kill t }
exit 130
end
Can you help me with this? Thanks in advance.
To trap Ctrl+C, change "SIGINT" to "INT".
trap("INT") {
puts "trapping"
threads.each{|t|
puts "killing"
Thread.kill t
}
}
To configure Sinatra to skip catching traps:
class MyApp < Sinatra::Base
  configure do
    set :traps, false
  end
  ...
Reference: Ruby Signal module
To list the available Ruby signals: Signal.list.keys
Reference: Sinatra Intro
(When I run your code and trap INT, I do get a Sinatra socket warning "Already in use". I presume that's fine for your purposes, or you can solve it by doing a Sinatra graceful shutdown; see Sinatra - terminate server from request.)
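For the graceful shutdown, a minimal sketch might look like this (assuming Sinatra 1.4+, which provides quit! as a class method; hypothetical for your exact setup):

trap("INT") do
  puts "Exiting"
  MyApp.quit!           # ask Sinatra to shut its server down cleanly
  threads.each(&:kill)  # then stop the background loop
  exit 130
end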
Late to the party, but trap has one big disadvantage: it gets overridden by the web server. For example, Puma sets several traps of its own, which effectively means yours is never called.
The best workaround is to use at_exit, which can be defined multiple times; Ruby makes sure all the blocks are called. I haven't tested whether this works for your case, though.
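A minimal sketch of the at_exit approach (untested, as noted above):

at_exit do
  # at_exit blocks stack, so this runs even if the web server
  # installed its own signal traps.
  puts "Exiting"
  threads.each(&:kill)
end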

Logging from a non-Sidekiq class

I have a worker that delegates the work to another class like this:
class SynJob
  include Sidekiq::Worker
  sidekiq_options queue: :sync

  def perform(user_id)
    OtherClass.new(blah, blah, blah)
  end
end

class OtherClass
  def initialize
    puts "we are in OtherClass"
  end
end
My question is: how do I log to stdout from OtherClass? My puts statements do not show up in the Heroku stdout log.
The literal answer to your question is to use puts or any other Ruby API for writing to stdout. You can call it both in your SynJob and in your OtherClass code, and it will execute the same way, writing to the stdout of the Sidekiq worker process.
However, this probably is not what you want to do. If this is a Rails app, you probably want to write to the Rails logger, which should be available both in your worker and in other code:
Rails.logger.info "I'm a debug message"
This will show up in the appropriate log both locally and when running deployed on Heroku.
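Applied to the example above, that might look like this (a minimal sketch; the argument list is whatever your real OtherClass takes):

class OtherClass
  def initialize(*args)
    # Goes through the Rails logger, which Heroku captures.
    Rails.logger.info "we are in OtherClass with #{args.inspect}"
  end
end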

How to recover a crashed EventMachine loop

I'm using Unicorn on Heroku and I created an EventMachine loop:
(from https://gist.github.com/jonkgrimes/5103321)
after_fork do |server, worker|
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection

  if defined?(EventMachine)
    unless EventMachine.reactor_running? && EventMachine.reactor_thread.alive?
      if EventMachine.reactor_running?
        EventMachine.stop_event_loop
        EventMachine.release_machine
        EventMachine.instance_variable_set("@reactor_running", false)
      end
      Thread.new { EventMachine.run }
    end
  end

  Signal.trap("INT")  { EventMachine.stop }
  Signal.trap("TERM") { EventMachine.stop }
end
The EventMachine loop works great, but at some point my events start failing with "no eventmachine loop is running." I can imagine two possible problems:
the loop is still running but somehow my unicorn forks are no longer bound properly (seems unlikely)
the loop crashed (seems likely)
How can I detect and restart a crashed eventmachine? And/or how should I go about debugging this problem?
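One way to approach both the detection and the debugging, as a minimal sketch: supervise the reactor thread yourself, log whatever killed it, and restart it. (EM.error_handler is also worth setting, since it catches exceptions raised inside reactor callbacks that would otherwise bring the loop down.) The helper name here is hypothetical:

def start_supervised_reactor
  Thread.new do
    loop do
      begin
        EventMachine.run
        break  # EM.stop was called deliberately; don't restart
      rescue => e
        # Log the crash so you can see *why* the loop died,
        # then spin the reactor back up.
        Rails.logger.error "EM reactor crashed: #{e.class}: #{e.message}"
        Rails.logger.error e.backtrace.join("\n")
        sleep 1
      end
    end
  end
end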

How do you test code that forks using RSpec?

I have the following code:
def start_sunspot_server
  unless @server
    pid = fork do
      STDERR.reopen("/dev/null")
      STDOUT.reopen("/dev/null")
      server.run
    end
    at_exit { Process.kill("TERM", pid) }
    wait_until_solr_starts
  end
end
How would I effectively go about testing it using RSpec?
I thought of something along the lines of:
Kernel.should_receive(:fork)
STDERR.should_receive(:reopen).with("/dev/null")
STDOUT.should_receive(:reopen).with("/dev/null")
server.should_receive(:run)
etc.
I'm confused by the @server instance variable and the server method in your example, but here is an example that should help you get where you're trying to go:
class Runner
  def run
    fork do
      STDERR.reopen("/dev/null")
    end
  end
end

describe "runner" do
  it "#run reopens STDERR at /dev/null" do
    runner = Runner.new
    runner.should_receive(:fork) do |&block|
      STDERR.should_receive(:reopen).with("/dev/null")
      block.call
    end
    runner.run
  end
end
The key is that the fork message is sent to the Runner object itself, even though its implementation is in the Kernel module.
HTH,
David
David's solution didn't work for us. Maybe it's because we're not using RSpec 2?
Here's what did work.
def run
  fork do
    blah
  end
end

describe '#run' do
  it 'should create a fork which calls #blah' do
    subject.should_receive(:fork).and_yield do |block_context|
      block_context.should_receive(:blah)
    end
    subject.run
  end
end
I'm not sure how this would apply when calling a constant, such as STDERR, but this was the only way we were able to accomplish fork testing.
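For what it's worth, under RSpec 3's expect syntax the same technique (assuming the Runner class from the first answer) might look like this:

describe Runner do
  it "#run reopens STDERR at /dev/null inside the fork" do
    runner = Runner.new
    # Intercept fork on the object itself and run its block inline,
    # so the expectations execute in the test process.
    expect(runner).to receive(:fork) do |&block|
      expect(STDERR).to receive(:reopen).with("/dev/null")
      block.call
    end
    runner.run
  end
end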
