Sinatra/Thin runs and cannot be stopped with Ctrl-C - ruby

I'm creating an application that has Sinatra running inside EventMachine, and when I run the barebones test app I cannot get the server to end with Ctrl-C; I have to kill it with -9 or -USR2, for example.
I cannot figure out why Sinatra reports it has stopped but continues to serve requests or why I cannot stop the server with Ctrl-C.
Thin 1.6.1 with Sinatra 1.4.4 STOPPED MESSAGE BUT CONTINUES
== Sinatra/1.4.4 has taken the stage on 4567 for development with backup from Thin
Thin web server (v1.6.1 codename Death Proof)
Maximum connections set to 1024
Listening on localhost:4567, CTRL+C to stop
Stopping ...
== Sinatra has ended his set (crowd applauds)
Ping!
^CPing!
Stopping ...
Ping!
^CStopping ...
This is the barebones test app I'm using to generate the output
# Run with 'ruby test.rb'
require 'eventmachine'
require 'sinatra/base'
require 'thin'

class NeverStops < Sinatra::Base
  settings.logging = true

  configure do
    set :threaded, true
  end

  get '/foobar' do
    'Foobar'
  end
end

EM.run do
  # Does nothing
  #trap(:INT)  { EM::stop(); exit }
  #trap(:TERM) { EM::stop(); exit }
  #trap(:KILL) { EM::stop(); exit }

  EventMachine.add_periodic_timer 2 do
    puts 'Ping!'
  end

  NeverStops.run!
end
Downgrading either Thin or Sinatra gives different results.
Thin 1.6.1 with Sinatra 1.4.3 NO STOPPED MESSAGE BUT STILL WON'T DIE (DEATH PROOF INDEED)
== Sinatra/1.4.3 has taken the stage on 4567 for development with backup from Thin
Thin web server (v1.6.1 codename Death Proof)
Maximum connections set to 1024
Listening on localhost:4567, CTRL+C to stop
Ping!
^CPing!
Stopping ...
Ping!
Thin 1.5.1 with Sinatra 1.4.4 JUST STOPS
== Sinatra/1.4.4 has taken the stage on 4567 for development with backup from Thin
>> Thin web server (v1.5.1 codename Straight Razor)
>> Maximum connections set to 1024
>> Listening on localhost:4567, CTRL+C to stop
>> Stopping ...
== Sinatra has ended his set (crowd applauds)
Thin 1.5.1 with Sinatra 1.4.3 WORKS
== Sinatra/1.4.3 has taken the stage on 4567 for development with backup from Thin
>> Thin web server (v1.5.1 codename Straight Razor)
>> Maximum connections set to 1024
>> Listening on localhost:4567, CTRL+C to stop
Ping!
Ping!
Ping!
^C>> Stopping ...
== Sinatra has ended his set (crowd applauds)
I've updated my gems to the latest versions and have also tried downgrading various gems such as EventMachine and Rack, but nothing made a helpful difference.
Versions
OSX 10.8.5 and Ubuntu 12.04.1
Ruby 2.0.0p247 and 1.9.3p194
eventmachine 1.0.3
rack 1.5.2
sinatra 1.4.4
thin 1.6.1
tilt 1.4.1

This issue is specific to newer versions of Thin (notice that v1.5.1 does not exhibit this behavior). This behavior was introduced in 1.6 and a similar issue is documented here.
The code in question follows the same pattern as mentioned in the upstream issue.
TL;DR version of the issue: Thin will stop the server, but will not stop the reactor loop (because it does not "own" the reactor). It is possible to allow Thin to own its reactor loop, in which case you will get back the desired behavior (as seen in 1.5.1). In order to do this you must start Thin without the enclosing EM#run { }, which will allow Thin to bring up (and subsequently tear down) the reactor loop.
In this case, one can imagine the periodic "ping" as a separate application that shares the reactor loop with Thin. Neither of them can claim ownership of the reactor loop, and it would be wrong for Thin to stop all other applications and exit the reactor when it did not start it. It is then the user's responsibility to handle signals, terminate individual applications as necessary, and finally stop the reactor loop (causing the process to quit).
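As a minimal sketch of that second approach applied to the test app above, assuming run! returns right away here (the reactor is already running) so that these traps replace the INT/TERM traps Sinatra installs (which only stop Thin, not the reactor):

EM.run do
  EventMachine.add_periodic_timer 2 do
    puts 'Ping!'
  end

  NeverStops.run!

  # We own the reactor, so we are responsible for tearing everything down:
  # stop Sinatra/Thin first, then stop EventMachine, which ends the process.
  %w[INT TERM].each do |signal|
    trap(signal) do
      NeverStops.quit!
      EM.stop
    end
  end
end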
Hope this explanation helps!

Thin already takes advantage of EM; you should run your app as you would with WEBrick, without wrapping it in EM yourself. Example config.ru:
require 'bundler'
Bundler.require

class UseToStop < Sinatra::Base
  get '/foobar' do
    body 'Foobar'
  end
end

run UseToStop
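You would then boot it with the Thin command line (or plain rackup) instead of ruby test.rb, so Thin brings up and tears down its own reactor, e.g.:

thin start -R config.ru -p 4567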
Are you sure that you need this option? It complicates things, and it is the last thing you need:
set :threaded, true

Related

Setting up resque-pool over a padrino Rakefile throwing errors

I have set up a Padrino bus application using the super-cool Resque for handling background processes and ResqueBus for pub/sub of events.
The ResqueBus setup creates a Resque queue and a worker for it. Everything up to here works fine. But since ResqueBus only creates a single worker per queue, and my bus app publishes and subscribes to many events, the process can go haywire; a single worker per application queue seems inefficient. So I thought of integrating the resque-pool gem to manage the worker processes.
I have followed all the steps that the resque-pool gem specifies, and I have edited my Rakefile:
# Add your own tasks in files placed in lib/tasks ending in .rake,
# for example lib/tasks/capistrano.rake, and they will automatically be available to Rake.
require File.expand_path('../config/application', __FILE__)

Ojus::Application.load_tasks

require 'resque/pool/tasks'

# this task will get called before resque:pool:setup
# and preload the rails environment in the pool manager
task "resque:setup" => :environment do
  # generic worker setup, e.g. Hoptoad for failed jobs
end

task "resque:pool:setup" do
  # close any sockets or files in pool manager
  ActiveRecord::Base.connection.disconnect!
  # and re-open them in the resque worker parent
  Resque::Pool.after_prefork do |job|
    ActiveRecord::Base.establish_connection
  end
end
Now I tried to run this resque-pool command.
resque-pool --daemon --environment production
This throws an error like this.
/home/ubuntu/.rvm/gems/ruby-2.0.0-p451#notification-engine/gems/activerecord-4.1.7/lib/active_record/connection_adapters/connection_specification.rb:257:in `resolve_symbol_connection': 'default_env' database is not configured. Available: [:development, :production, :test] (ActiveRecord::AdapterNotSpecified)
I tried to debug this and found out that it throws the error at this line:
ActiveRecord::Base.connection.disconnect!
For now I have removed this line and everything seems to be working fine. But this may cause a problem: if we restart the Padrino application, the older ActiveRecord connection will still be hanging around.
I just wanted to know if there is any workaround for this problem, so that I can run the resque-pool command while closing all the ActiveRecord connections.
It would have been helpful if you had included your Padrino app's database.rb file. Never mind, you can try
defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
instead of ActiveRecord::Base.connection.disconnect!
and
ActiveRecord::Base.establish_connection(ActiveRecord::Base.configurations[Padrino.env])
instead of ActiveRecord::Base.establish_connection
To establish a connection, ActiveRecord has to be told which environment to connect to; otherwise it looks for 'default_env', which is ActiveRecord's default.
Check out the source code.
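Applied to the Rakefile above, the setup task might then look like this (just a sketch; it assumes Padrino's database.rb fills ActiveRecord::Base.configurations with keys matching Padrino.env):

task "resque:pool:setup" do
  # Close any inherited connection in the pool manager, but only if
  # ActiveRecord has actually been loaded at this point.
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!

  # Re-open a connection in each forked worker, naming the environment
  # explicitly so ActiveRecord does not fall back to 'default_env'.
  Resque::Pool.after_prefork do |job|
    ActiveRecord::Base.establish_connection(
      ActiveRecord::Base.configurations[Padrino.env]
    )
  end
end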

Trouble starting more than one instance of Sinatra in Cucumber env.rb

I'm writing automation tests which depend on two different web services, and I've decided to mock these out using two very basic Sinatra applications. However, I'm having trouble starting more than one Sinatra instance in my Cucumber env file. The second Sinatra instance stops as soon as it's started.
Here's a snippet of the output I get when kicking off a test
== Sinatra/1.4.5 has taken the stage on 9000 for development with backup from Thin
Thin web server (v1.6.2 codename Doc Brown)
Maximum connections set to 1024
Listening on localhost:9000, CTRL+C to stop
== Sinatra/1.4.5 has taken the stage on 8082 for development with backup from Thin
Thin web server (v1.6.2 codename Doc Brown)
Maximum connections set to 1024
Listening on localhost:8082, CTRL+C to stop
Stopping ...
== Sinatra has ended his set (crowd applauds)
As you can see, the first service starts up and runs fine, but the second service starts up OK and then immediately begins stopping.
Sinatra app 1
class MockService1 < Sinatra::Base
  get '/some/endpoint' do
    response =
      {
        enabled: false
      }
  end
end
Sinatra app 2
class MockService2 < Sinatra::Base
  get '/some/endpoint' do
    response =
      {
        enabled: false
      }
  end
end
Inside my Cucumber env file
Thread.new do
  MockService1.run! host: 'localhost', port: '9000'
end

Thread.new do
  MockService2.run! host: 'localhost', port: '8082'
end
I was able to use the childprocess gem to start a new Sinatra server under a new child process.
mock_service = ChildProcess.build('ruby', File.join(File.dirname(__FILE__), 'mock_service.rb'))
mock_service.start
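Depending on your setup you will probably also want to tear the mock service down when the run finishes, for example:

# Stop the child process when the Cucumber run ends.
at_exit { mock_service.stop }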

Ruby: Sinatra and an infinite loop in one program

I have a program which does something in an infinite loop (it's a daemon).
This works fine.
Now I am planning to offer a web interface for that daemon with the help of Sinatra. The Sinatra code itself works fine too. But as soon as I have the loop and the Sinatra code in one script, the Sinatra code is not executed. There are no error messages on startup, but the local web service isn't started.
Here is the code, stripped down to the basics:
#!/usr/bin/env ruby
require 'rubygems'
require 'sinatra'
require_relative 'lib/functions'

do_init_env # (some init steps, no influence on the startup of sinatra)

get '/' do
  erb :web
end

# infinity Loop
loop do
  if File.exists? somefile
    do_something
  end
  sleep 10
end
When disabling the loop, Sinatra starts up fine:
ruby ./mydaemon.rb
[2013-02-26 12:57:24] INFO WEBrick 1.3.1
[2013-02-26 12:57:24] INFO ruby 1.9.3 (2013-02-06) [armv6l-linux-eabi]
== Sinatra/1.3.5 has taken the stage on 4567 for development with backup from WEBrick
[2013-02-26 12:57:24] INFO WEBrick::HTTPServer#start: pid=13457 port=4567
^C
== Sinatra has ended his set (crowd applauds)
[2013-02-26 12:57:36] INFO going to shutdown ...
[2013-02-26 12:57:36] INFO WEBrick::HTTPServer#start done.
When enabling the loop:
Silence, until interrupting the loop:
ruby ./mydaemon.rb
^C./mydaemon.rb:39:in `sleep': Interrupt
from ./mydaemon.rb:39:in `block in <main>'
from ./mydaemon.rb:33:in `loop'
from ./mydaemon.rb:33:in `<main>
Rack runs the script as-is when starting up. The "get" etc commands just stash information for Sinatra to respond to rack later. Any infinite loops will simply get started.
You could possibly solve this by adding threading, and starting the loop on a child thread. This might be worthwhile if the loop is doing something lightweight where you would gain performance by sharing a bit of memory with the web server. However, it is usually a coding headache to work with thread interactions.
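A minimal sketch of that threading approach, reusing do_init_env, do_something and somefile from the original script:

#!/usr/bin/env ruby
require 'rubygems'
require 'sinatra'
require_relative 'lib/functions'

do_init_env

# Run the daemon work on a background thread so the main script can finish
# loading and Sinatra can start its web server as usual.
Thread.new do
  loop do
    do_something if File.exists? somefile
    sleep 10
  end
end

get '/' do
  erb :web
end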
You may be better off separating the web server and your long running loop into different scripts, running in their own processes, and have the loop emit readable data to e.g. a file or database, that the web server can pick up and serve.
If you really want to run the Sinatra process as a daemon, maybe consider running it in its own process (and therefore with its own script). Consider e.g. using the daemons gem: http://daemons.rubyforge.org/
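For the separate-process route, a rough sketch with the daemons gem could wrap just the loop in its own script (again reusing the helpers from lib/functions), leaving the Sinatra script untouched:

# loop_daemon.rb -- control it with 'ruby loop_daemon.rb start' / 'stop'
require 'daemons'
require_relative 'lib/functions'

Daemons.run_proc('mydaemon_loop') do
  do_init_env
  loop do
    do_something if File.exists? somefile
    sleep 10
  end
end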

How to make Thin run on a different port?

I've a very basic test app. When I execute this command the server ignores the port I specify and runs Thin on port 4567. Why is the port I specify ignored?
$ ruby xxx.rb start -p 8000
== Sinatra/1.3.3 has taken the stage on 4567 for production with backup from Thin
>> Thin web server (v1.4.1 codename Chromeo)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:4567, CTRL+C to stop
xxx.rb file
require 'thin'
rackup_file = "config.ru"
argv = ARGV
argv << ["-R", rackup_file ] unless ARGV.include?("-R")
argv << ["-e", "production"] unless ARGV.include?("-e")
puts argv.flatten
Thin::Runner.new(argv.flatten).run!
config.ru file
require 'sinatra'
require 'sinatra/base'

class SingingRain < Sinatra::Base
  get '/' do
    return 'hello'
  end
end

SingingRain.run!
Put this at the top of your config.ru:
#\ -p 8000
Your problem is with the line:
SingingRain.run!
This is Sinatra’s run method, which tells Sinatra to start its own web server which runs on port 4567 by default. This is in your config.ru file, but config.ru is just Ruby, so this line is run as if it was in any other .rb file. This is why you see Sinatra start up on that port.
When you stop this server with CTRL-C, Thin will try to continue loading the config.ru file to determine what app to run. You don’t actually specify an app in your config.ru, so you’ll see something like:
^C>> Stopping ...
== Sinatra has ended his set (crowd applauds)
/Users/matt/.rvm/gems/ruby-1.9.3-p194/gems/rack-1.4.1/lib/rack/builder.rb:129:in `to_app': missing run or map statement (RuntimeError)
from config.ru:1:in `<main>'
...
This error is simply telling you that you didn’t actually specify an app to run in your config file.
Instead of SingingRain.run!, use:
run SingingRain
run is a Rack method that specifies which app to run. You could also do run SingingRain.new – Sinatra takes steps to enable you to use just the class itself here, or an instance.
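Putting that together, the corrected config.ru would look something like this:

require 'sinatra/base'

class SingingRain < Sinatra::Base
  get '/' do
    'hello'
  end
end

run SingingRain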
The output to this should now just be:
>> Thin web server (v1.4.1 codename Chromeo)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:8000, CTRL+C to stop
You don’t get the == Sinatra/1.3.3 has taken the stage on 4567 for production with backup from Thin message because Sinatra isn’t running its built-in server; it’s just your Thin server as you configured it.
In your config.ru add
set :port => 8000
Also, I would highly suggest using Sinatra with something like Passenger + Nginx, which makes deploying to production a breeze. But you need not worry about this if you are going to deploy to Heroku.

Job handler serialization incorrect when running delayed_job in production with Thin or Unicorn

I recently brought delayed_job into my Rails 3.1.3 app. In development everything is fine. I even staged my DJ release on the same VPS as my production app using the same production application server (Thin), and everything was fine. Once I released to production, however, all hell broke loose: none of the jobs were entered into the jobs table correctly, and I started seeing the following in the logs for all processed jobs:
2012-02-18T14:41:51-0600: [Worker(delayed_job host:hope pid:12965)] NilClass# completed after 0.0151
2012-02-18T14:41:51-0600: [Worker(delayed_job host:hope pid:12965)] 1 jobs processed at 15.9666 j/s, 0 failed ...
NilClass and no method name? Certainly not correct. So I looked at the serialized handler on the job in the DB and saw:
"--- !ruby/object:Delayed::PerformableMethod\nattributes:\n id: 13\n event_id: 26\n name: memememe\n api_key: !!null \n"
No indication of a class or method name. And when I load the YAML into an object and call #object on the resulting PerformableMethod I get nil. For kicks I then fired up the console on the broken production app and delayed the same job. This time the handler looked like:
"--- !ruby/object:Delayed::PerformableMethod\nobject: !ruby/ActiveRecord:Domain\n attributes:\n id: 13\n event_id: 26\n name: memememe\n api_key: !!null \nmethod_name: :create_a\nargs: []\n"
And sure enough, that job runs fine. Puzzled, I then recalled reading something about DJ not playing nice with Thin. So, I tried Unicorn and was sad to see the same result. Hours of research later and I think this has something to do with how the app server is loading the YAML libraries Psych and Syck and DJ's interaction with them. I cannot, however, pin down exactly what is wrong.
Note that I'm running delayed_job 3.0.1 official, but have tried upgrading to the master branch and have even tried downgrading to 2.1.4.
Here are some notable differences between my stage and production setups:
In stage I run 1 Thin server on a TCP port -- no web proxy in front
In production I run 2+ Thin servers and proxy to them with Nginx. They talk over a UNIX socket
When I tried unicorn it was 1 app server proxied to by Nginx over a UNIX socket
Could the web proxying/Nginx have something to do with it? Please, any insight is greatly appreciated. I've spent a lot of time integrating delayed_job and would hate to have to shelve the work or, worse, toss it. Thanks for reading.
I fixed this by not using #delay. Instead I replaced all of my "model.delay.method" code with custom jobs. Doing so works like a charm, and is ultimately more flexible. This fix works fine with Thin. I haven't tested with Unicorn.
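For illustration only, a hand-rolled job for the create_a call seen in the handler above could look something like this (class and variable names here are made up from the question's data; any object with a #perform method can be enqueued):

# A custom job object instead of domain.delay.create_a
class CreateAJob < Struct.new(:domain_id)
  def perform
    Domain.find(domain_id).create_a
  end
end

# Enqueue it explicitly, passing only the record id.
Delayed::Job.enqueue CreateAJob.new(domain.id)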
I'm running into a similar problem with Rails 3.0.10 and DJ 2.1.4; it's most certainly a different YAML library being loaded when running from the console vs. from the app server (Thin, Unicorn, Nginx). I'll share any solution I come up with.
OK, so removing these lines from config/boot.rb fixed this issue for me:
require 'yaml'
YAML::ENGINE.yamler = 'syck'
This had been placed there to fix a YAML parsing error, forcing YAML to use 'syck'. Removing it required me to fix the underlying issues with the .yml files. More on this here.
Now my delayed job record handlers match between those created via the server (Unicorn in my case) and the console. Both my server and delayed job workers are kicked off under Bundler.
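If you want to confirm which engine each process ends up with, a quick check like this (run from both the console and the app server's boot) should expose the mismatch on Ruby 1.9:

require 'yaml'
# Prints "psych" or "syck" depending on which YAML engine is active.
puts YAML::ENGINE.yamler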
Unicorn
cd #{rails_root} && bundle exec unicorn_rails -c #{rails_root}/config/unicorn.rb -E #{rails_env} -D
DJ
export LANG=en_US.utf8; export GEM_HOME=/data/reception/current/vendor/bundle/ruby/1.9.1; cd #{rails_root}; /usr/bin/ruby1.9.1 /data/reception/current/script/delayed_job start staging
