Ruby: Sinatra and an infinite loop in one program - ruby

I have a program which does something in an infinite loop (it's a daemon).
This works fine.
Now I am planning to offer a web interface for that daemon with the help of Sinatra. The Sinatra code itself works fine too. But as soon as I have the loop and the Sinatra code in one script, the Sinatra code is not executed. There are no error messages on startup, but the local web service isn't started.
Here is the code, stripped down to the basics:
#!/usr/bin/env ruby
require 'rubygems'
require 'sinatra'
require_relative 'lib/functions'

do_init_env # (some init steps, no influence on the startup of sinatra)

get '/' do
  erb :web
end

# infinite loop
loop do
  if File.exists? somefile
    do_something
  end
  sleep 10
end
When I disable the loop, Sinatra starts up fine:
ruby ./mydaemon.rb
[2013-02-26 12:57:24] INFO WEBrick 1.3.1
[2013-02-26 12:57:24] INFO ruby 1.9.3 (2013-02-06) [armv6l-linux-eabi]
== Sinatra/1.3.5 has taken the stage on 4567 for development with backup from WEBrick
[2013-02-26 12:57:24] INFO WEBrick::HTTPServer#start: pid=13457 port=4567
^C
== Sinatra has ended his set (crowd applauds)
[2013-02-26 12:57:36] INFO going to shutdown ...
[2013-02-26 12:57:36] INFO WEBrick::HTTPServer#start done.
When I enable the loop:
Silence, until I interrupt the loop:
ruby ./mydaemon.rb
^C./mydaemon.rb:39:in `sleep': Interrupt
from ./mydaemon.rb:39:in `block in <main>'
from ./mydaemon.rb:33:in `loop'
from ./mydaemon.rb:33:in `<main>'

The script is run top to bottom at startup. The get (and friends) calls just stash routing information for Sinatra to hand to Rack later; the web server itself is only started after the rest of the script has finished. An infinite loop at the top level therefore never lets it get that far, so the server never comes up.
You could possibly solve this by adding threading and starting the loop on a child thread. This might be worthwhile if the loop is doing something lightweight, where you would gain by sharing a bit of memory with the web server. However, thread interactions are usually a coding headache.
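As a rough illustration of the threading approach (do_something and somefile are the placeholders from the question; error handling omitted), the loop can be moved onto a background thread so the main script finishes and classic Sinatra's at_exit hook can start the server as usual:

#!/usr/bin/env ruby
require 'sinatra'

# run the daemon work off the main thread
Thread.new do
  loop do
    do_something if File.exist? somefile
    sleep 10
  end
end

get '/' do
  erb :web
end
# The script now reaches its end, so Sinatra boots the web server.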
You may be better off separating the web server and your long-running loop into different scripts running in their own processes, and having the loop emit readable data to e.g. a file or database that the web server can pick up and serve.
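A minimal sketch of that split, assuming a status.json file as the hand-off point (the file name, do_something and somefile are illustrative placeholders):

# worker.rb -- runs in its own process
require 'json'

loop do
  if File.exist? somefile
    result = do_something
    # write the latest state where the web process can read it
    File.write('status.json', JSON.generate(result))
  end
  sleep 10
end

# web.rb -- the Sinatra process just serves whatever the worker last wrote
require 'sinatra'

get '/' do
  halt 503, 'no data yet' unless File.exist?('status.json')
  content_type :json
  File.read('status.json')
end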

If you really want a proper daemon, consider giving it its own process (and therefore its own script), e.g. using the daemons gem: http://daemons.rubyforge.org/
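For example, the loop script could be wrapped with the daemons gem roughly like this (a sketch; 'mydaemon', do_something and somefile are placeholders):

# loop_daemon.rb
require 'daemons'

Daemons.run_proc('mydaemon') do
  loop do
    do_something if File.exist? somefile
    sleep 10
  end
end

The daemons gem then gives the script start/stop/status commands (ruby loop_daemon.rb start, ruby loop_daemon.rb stop), while the Sinatra script runs separately.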

Related

OCRA missing submodules of libraries if rack/grape api endpoints aren't called

I'm trying to pack my REST app into an executable with OCRA. I have a few gems required in my script:
require 'rack'
require 'rack/server'
require 'grape'
require 'grape-entity'
require 'rubygems'
I skip starting the server with this:
if not defined?(Ocra)
  Rack::Server.start options
end
When I try to run my server.exe:
Temp/../server.rb:221:in `default_middleware_by_environment':
cannot load such file -- rack/content_length (LoadError)
Which means it doesn't detect submodules of rack that exist but aren't used, and are therefore not included. If I add require 'rack/content_length', it continues with cannot load such file -- rack/chunked, and so on.
When I interrupted the server by hand before, I also had to call a few API endpoints to have everything included.
I think my options are either:
Telling OCRA to include all the submodules of rack and grape, but compiling that list is a bit time-consuming and would increase the file size. I already tried ocra server.rb --gem-full=rack --gem-full=grape, which gets my server started, but when calling the API 'rack/mount/strexp' is missing again.
Calling the API from within my script, but I couldn't figure out how to do that. I can't add a block to Rack::Server.start options, and it only continues when I interrupt the server.
Any ideas to implement either option, or is there another solution?
If we run the rack app with a rack handler (webrick / thin / else), we can shut down the server in another thread so that OCRA can finish packing (not sure how to do the same thing with Rack::Server).
app = Rack::Directory.new ENV['HOME'] # a sample app

handler = Rack::Handler.pick %w/ thin webrick /
handler.run app do |server|
  # handler.run yields a server object,
  # which we shut down when ocra is packing
  if ocra_is_packing # replace with proper condition
    Thread.new { sleep 10; server.shutdown }
  end
end
You may have to do something else (access the server etc.) to have OCRA pack the appropriate dependencies.
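If booting and stopping the server alone doesn't pull everything in, one option is to fire a request at it while OCRA is packing, e.g. by placing something like this before the handler.run call above (the port and path are assumptions, not from the question):

if defined?(Ocra)
  require 'net/http'
  Thread.new do
    sleep 2  # give handler.run a moment to boot the server
    # exercise an endpoint so the lazily-required rack/grape files get loaded
    Net::HTTP.get(URI('http://localhost:9292/api/ping')) rescue nil
  end
end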

Setting up resque-pool over a padrino Rakefile throwing errors

I have set up a Padrino bus application using the super-cool Resque for handling background processes and ResqueBus for pub/sub of events.
The ResqueBus setup creates a Resque queue and a worker for it to work on. Everything up to here works fine. But ResqueBus only creates a single worker for a single queue, and my bus app will publish and subscribe to many events, so a single worker per application queue seems inefficient. I therefore thought of integrating the resque-pool gem to handle the worker processes.
I have followed all the steps the resque-pool gem specifies and edited my Rakefile:
# Add your own tasks in files placed in lib/tasks ending in .rake,
# for example lib/tasks/capistrano.rake, and they will automatically be available to Rake.
require File.expand_path('../config/application', __FILE__)
Ojus::Application.load_tasks

require 'resque/pool/tasks'

# this task will get called before resque:pool:setup
# and preload the rails environment in the pool manager
task "resque:setup" => :environment do
  # generic worker setup, e.g. Hoptoad for failed jobs
end

task "resque:pool:setup" do
  # close any sockets or files in pool manager
  ActiveRecord::Base.connection.disconnect!
  # and re-open them in the resque worker parent
  Resque::Pool.after_prefork do |job|
    ActiveRecord::Base.establish_connection
  end
end
Then I tried to run this resque-pool command:
resque-pool --daemon --environment production
It throws the following error:
/home/ubuntu/.rvm/gems/ruby-2.0.0-p451#notification-engine/gems/activerecord-4.1.7/lib/active_record/connection_adapters/connection_specification.rb:257:in `resolve_symbol_connection': 'default_env' database is not configured. Available: [:development, :production, :test] (ActiveRecord::AdapterNotSpecified)
I tried to debug this and found out that it throws the error at this line:
ActiveRecord::Base.connection.disconnect!
For now I have removed this line and everything seems to work fine. But this may cause a problem: if we restart the Padrino application, the old ActiveRecord connections will still be hanging around.
I just wanted to know if there is any workaround for this problem, so that I can run the resque-pool command with all the ActiveRecord connections closed.
It would have been helpful if you had included your Padrino database.rb file.
Never mind, you can try
defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
instead of ActiveRecord::Base.connection.disconnect!, and
ActiveRecord::Base.establish_connection(ActiveRecord::Base.configurations[Padrino.env])
instead of ActiveRecord::Base.establish_connection().
To establish a connection, ActiveRecord has to be told which environment to connect to; otherwise it looks for 'default_env', which is ActiveRecord's default.
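Putting both suggestions together, the pool setup task from the question would look roughly like this (a sketch, reusing the question's Rakefile structure):

task "resque:pool:setup" do
  # close any sockets or files in the pool manager
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
  # and re-open them in each forked worker, against Padrino's current environment
  Resque::Pool.after_prefork do |job|
    ActiveRecord::Base.establish_connection(ActiveRecord::Base.configurations[Padrino.env])
  end
end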
Check out the source code.

NewRelic transaction traces in a Ruby Gem

I am developing a Ruby gem that I would like to add NewRelic monitoring to. The gem is used in a script that is run as a daemon and monitored by bluepill. I followed "Monitoring Ruby background processes and daemons" to get started.
I confirmed the gem is establishing a connection with NewRelic, as the application shows up in my portal there; however, there are no transaction traces or any metrics breakdown of the code being invoked.
Here's the "entry" point of my gem as I tried to manually start the agent around the invoking method:
require 'fms/parser/version'
require 'fms/parser/core'
require 'fms/parser/env'
require 'mongoid'

ENV['NRCONFIG'] ||= File.dirname(__FILE__) + '/../newrelic.yml'
require 'newrelic_rpm'

module Fms
  module Parser
    def self.prepare_parse(filename)
      ::NewRelic::Agent.manual_start
      Mongoid.load!("#{File.dirname(__FILE__)}/../mongoid.yml", :development)
      Core.prepare_parse(filename)
      ::NewRelic::Agent.shutdown
    end
  end
end
I also tried adding this into the module:
class << self
  include ::NewRelic::Agent::Instrumentation::ControllerInstrumentation
  add_transaction_tracer :prepare_parse, :category => :task
end
I'm not entirely sure what else I can do. I confirmed the agent is able to communicate with the server and transaction traces are enabled. Nothing shows up in the background application tab either.
This is the most useful information I've gotten from the agent log so far:
[12/23/13 21:21:03 +0000 apivm (7819)] INFO : Environment: development
[12/23/13 21:21:03 +0000 apivm (7819)] INFO : No known dispatcher detected.
[12/23/13 21:21:03 +0000 apivm (7819)] INFO : Application: MY-APP
[12/23/13 21:21:03 +0000 apivm (7819)] INFO : Installing Net instrumentation
[12/23/13 21:21:03 +0000 apivm (7819)] INFO : Finished instrumentation
[12/23/13 21:21:04 +0000 apivm (7819)] INFO : Reporting to: https://rpm.newrelic.com/[MASKED_ACCOUNT_NUMBER]
[12/23/13 22:12:06 +0000 apivm (7819)] INFO : Starting the New Relic agent in "development" environment.
[12/23/13 22:12:06 +0000 apivm (7819)] INFO : To prevent agent startup add a NEWRELIC_ENABLE=false environment variable or modify the "development" section of your newrelic.yml.
[12/23/13 22:12:06 +0000 apivm (7819)] INFO : Reading configuration from /var/lib/gems/1.9.1/gems/fms-parser-0.0.6/lib/fms/../newrelic.yml
[12/23/13 22:12:06 +0000 apivm (7819)] INFO : Starting Agent shutdown
The only thing that's really concerning here is "No known dispatcher detected".
Is what I'm trying to do possible?
I work at New Relic and wanted to add some up-to-date details about the latest version of the newrelic_rpm gem. TrinitronX is on the right track, but unfortunately that code sample and blog post are based on a very old version of the gem, and the internals have changed significantly since then. The good news is that newer versions of the agent should make this simpler.
To start off, I should say I'm assuming that your process stays alive for a long time as a daemon, and makes repeated calls to prepare_parse.
Generally speaking, the explicit manual_start and shutdown calls you have inserted into your prepare_parse method should not be necessary - except for a few special cases (certain rake tasks and interactive sessions). The New Relic agent will automatically start as soon as it is required. You can see details about when the Ruby agent will automatically start and how to control this behavior here:
https://docs.newrelic.com/docs/ruby/forcing-the-ruby-agent-to-start
For monitoring background tasks like this, there are conceptually two levels of instrumentation that you might want: transaction tracers and method tracers. You already have a transaction tracer, but you may also want to add method tracers around the major chunks of work that happen within your prepare_parse method. Doing so will give you better visibility into what's happening within each prepare_parse invocation. You can find details about adding method tracers here:
https://docs.newrelic.com/docs/ruby/ruby-custom-metric-collection#method_tracers
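For instance, a method tracer around the inner parsing work could look roughly like this (a sketch; it assumes Core.prepare_parse is a module-level method and newrelic_rpm is already required, and the metric name is arbitrary):

require 'new_relic/agent/method_tracer'

module Fms
  module Parser
    module Core
      class << self
        # including MethodTracer here lets us trace the module-level method
        include ::NewRelic::Agent::MethodTracer
        add_method_tracer :prepare_parse, 'Custom/FmsParser/Core/prepare_parse'
      end
    end
  end
end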
With the way that you are calling add_transaction_tracer, your calls to prepare_parse should show up as transactions on the 'Background tasks' tab in the New Relic UI.
The one caveat here may be the fact that you're running this as a daemon. The Ruby agent uses a background thread to asynchronously communicate with New Relic servers. Since threads are not copied across calls to fork(), this means you will sometimes have to manually re-start the agent after a fork() (note that Ruby's Process.daemon uses fork underneath, so it's included as well). Whether or not this is necessary depends on the relative timing of the require of newrelic_rpm and the call to fork / daemon (if newrelic_rpm isn't required until after the call to fork / daemon, you should be good, otherwise see below).
There are two solutions to the fork issue:
Manually call NewRelic::Agent.after_fork from the forked child, right after the fork (sketched after this list).
If you're using newrelic_rpm 3.7.1 or later, there's an experimental option to automatically re-start the background thread that you can enable in your newrelic.yml file by setting restart_thread_in_children: true. This is off by default at the moment, but may become the default behavior in future versions of the agent.
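A minimal sketch of option 1, assuming Process.daemon (or bluepill's daemonization) is what forks your process:

require 'newrelic_rpm'

Process.daemon
# The agent's reporting thread does not survive the fork, so restart it
# in the child right away and force a reconnect to the collector.
::NewRelic::Agent.after_fork(:force_reconnect => true)

# ... the daemon's normal work loop (calls to prepare_parse etc.) goes here ...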
If you're still having trouble, the newrelic_agent.log file is your best bet to debugging things. You'll want to increase the verbosity by setting log_level: debug in your newrelic.yml file in order to get more detailed output.
For debugging this problem, try the following code:
require 'fms/parser/version'
require 'fms/parser/core'
require 'fms/parser/env'
require 'mongoid'

ENV['NRCONFIG'] ||= File.dirname(__FILE__) + '/../newrelic.yml'
# Make sure NewRelic has the correct log file path
ENV['NEW_RELIC_LOG'] ||= File.dirname(__FILE__) + '/../log/newrelic_agent.log'
require 'newrelic_rpm'
::NewRelic::Agent.manual_start

# For debug purposes: output some dots until we're connected to NewRelic
until NewRelic::Agent.connected? do
  print '.'
  sleep 1
end

module Fms
  module Parser
    def self.prepare_parse(filename)
      Mongoid.load!("#{File.dirname(__FILE__)}/../mongoid.yml", :development)
      Core.prepare_parse(filename)
      # Force the agent to prepare data before we shut down
      ::NewRelic::Agent.load_data
      # NOTE: Ideally you'd want to shut down the agent just before the process
      # exits... not every time you call Fms::Parser#prepare_parse
      ::NewRelic::Agent.shutdown(:force_send => true)
    end

    # The tracer must be added after the method it wraps has been defined
    class << self
      include ::NewRelic::Agent::Instrumentation::ControllerInstrumentation
      add_transaction_tracer :prepare_parse, :category => :task
    end
  end
end
I have a feeling that this probably has something to do with running your gem's code within the daemonized process that bluepill is starting. Ideally, we'd want to start the NewRelic agent within the process as soon after the daemon process is forked as we can get. Putting it after your library's requires should do this when the file is required.
We also would most likely want to stop the NewRelic agent just before the background task process exits, not every time the Fms::Parser#prepare_parse method is called. However, for our purposes this should get you enough debugging info to continue, so you can ensure that the task is contacting New Relic the first time it's run. We can also try using :force_send => true to ensure we send the data.
References:
Blog Post: Instrumenting your monitoring checks with New Relic

Sinatra/Thin runs and cannot be stopped with Ctrl-C

I'm creating an application that has Sinatra running inside of EventMachine, and when I run the barebones test app I cannot get the server to end with Ctrl-C; I have to kill it with -9 or -USR2, for example.
I cannot figure out why Sinatra reports it has stopped but continues to serve requests or why I cannot stop the server with Ctrl-C.
Thin 1.6.1 with Sinatra 1.4.4 STOPPED MESSAGE BUT CONTINUES
== Sinatra/1.4.4 has taken the stage on 4567 for development with backup from Thin
Thin web server (v1.6.1 codename Death Proof)
Maximum connections set to 1024
Listening on localhost:4567, CTRL+C to stop
Stopping ...
== Sinatra has ended his set (crowd applauds)
Ping!
^CPing!
Stopping ...
Ping!
^CStopping ...
This is the barebones test app I'm using to generate the output
# Run with 'ruby test.rb'
require 'eventmachine'
require 'sinatra/base'
require 'thin'

class NeverStops < Sinatra::Base
  settings.logging = true

  configure do
    set :threaded, true
  end

  get '/foobar' do
    'Foobar'
  end
end

EM.run do
  # Does nothing
  #trap(:INT) { EM::stop(); exit }
  #trap(:TERM) { EM::stop(); exit }
  #trap(:KILL) { EM::stop(); exit }

  EventMachine.add_periodic_timer 2 do
    puts 'Ping!'
  end

  NeverStops.run!
end
Downgrading either Thin or Sinatra has different results
Thin 1.6.1 with Sinatra 1.4.3 NO STOPPED MESSAGE BUT STILL WON'T DIE (DEATH PROOF INDEED)
== Sinatra/1.4.3 has taken the stage on 4567 for development with backup from Thin
Thin web server (v1.6.1 codename Death Proof)
Maximum connections set to 1024
Listening on localhost:4567, CTRL+C to stop
Ping!
^CPing!
Stopping ...
Ping!
Thin 1.5.1 with Sinatra 1.4.4 JUST STOPS
== Sinatra/1.4.4 has taken the stage on 4567 for development with backup from Thin
>> Thin web server (v1.5.1 codename Straight Razor)
>> Maximum connections set to 1024
>> Listening on localhost:4567, CTRL+C to stop
>> Stopping ...
== Sinatra has ended his set (crowd applauds)
Thin 1.5.1 with Sinatra 1.4.3 WORKS
== Sinatra/1.4.3 has taken the stage on 4567 for development with backup from Thin
>> Thin web server (v1.5.1 codename Straight Razor)
>> Maximum connections set to 1024
>> Listening on localhost:4567, CTRL+C to stop
Ping!
Ping!
Ping!
^C>> Stopping ...
== Sinatra has ended his set (crowd applauds)
I've updated my gems to the latest versions and have tried downgrading various gems such as EventMachine and Rack to see what results I get; nothing was helpfully different.
Versions
OSX 10.8.5 and Ubuntu 12.04.1
Ruby 2.0.0p247 and 1.9.3p194
eventmachine 1.0.3
rack 1.5.2
sinatra 1.4.4
thin 1.6.1
tilt 1.4.1
This issue is specific to newer versions of Thin (notice that v1.5.1 does not exhibit this behavior). This behavior was introduced in 1.6 and a similar issue is documented here.
The code in question follows the same pattern as mentioned in the upstream issue.
TL;DR version of the issue: Thin will stop the server, but will not stop the reactor loop (because it does not "own" the reactor). It is possible to allow Thin to own its reactor loop, in which case you will get back the desired behavior (as seen in 1.5.1). In order to do this you must start Thin without the enclosing EM#run { }, which will allow Thin to bring up (and subsequently tear down) the reactor loop.
In this case, one can imagine the periodic "ping" as a separate application that shares the reactor loop with Thin. Neither of them can claim ownership of the reactor loop. It would be wrong for Thin to stop all other applications and exit the reactor when it did not start the reactor. It is then the user's responsibility to handle signals and terminate individual applications as necessary, and finally stop the reactor loop (causing the process to quit).
Hope this explanation helps!
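For example, the test app from the question could be restructured so Thin owns the reactor (a sketch; it assumes Sinatra picks Thin as its handler, and relies on EM.next_tick queueing the block until the reactor is running):

require 'sinatra/base'
require 'thin'

class NeverStops < Sinatra::Base
  get '/foobar' do
    'Foobar'
  end
end

# Queued now, executed once Thin has brought the reactor up
EM.next_tick do
  EM.add_periodic_timer(2) { puts 'Ping!' }
end

# No surrounding EM.run: run! lets Thin start (and later tear down) the
# reactor itself, so Ctrl-C stops everything again.
NeverStops.run!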
Thin already takes advantage of EM; you should run your app as you would with WEBrick, with no explicit EM. Example config.ru:
require 'bundler'
Bundler.require

class UseToStop < Sinatra::Base
  get('/foobar') { body "Foobar" }
end

run UseToStop
Are you sure that you need this option? It complicates things, and is the last thing you need:
set :threaded, true

Testing server ruby-application with cucumber

My Ruby application runs a WEBrick server. I want to test it with Cucumber and ensure that it gives the right responses.
Is it normal to run the server in the test environment for testing? Where in my code should I start the server process, and where should I destroy it?
Right now I start the server in a Background step and destroy it in an After hook. It's slow, because the server starts before every scenario and is destroyed afterwards.
I have the idea to start the server in env.rb and destroy it in an at_exit block, also declared in env.rb. What do you think about it?
Do you know any patterns for that problem?
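For reference, what I have in mind for env.rb is roughly this (MyApp, the port and the silenced logging are just placeholders, and the sketch doesn't wait for the server to finish booting):

# features/support/env.rb
require 'rack'
require 'webrick'

# start the app once, on a background thread, before the first scenario
server_thread = Thread.new do
  Rack::Handler::WEBrick.run(MyApp,
                             Port:      9292,
                             AccessLog: [],
                             Logger:    WEBrick::Log.new(File::NULL))
end

# stop it once, when the whole Cucumber run is over
at_exit do
  Rack::Handler::WEBrick.shutdown
  server_thread.join
end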
I use Spork for this. It starts up one or more servers, and has the ability to reload these when needed. This way, each time you run your tests you're not incurring the overhead of firing up Rails.
https://github.com/sporkrb/spork
Check out this RailsCast for the details: http://railscasts.com/episodes/285-spork
Since Cucumber does not support Spork any more (why?), I use the following code in env.rb.
To fork a process I use this library: https://github.com/jarib/childprocess
require 'childprocess'
ChildProcess.posix_spawn = true

wkDir = File.dirname(__FILE__)
server_dir = File.join(wkDir, '../../site/dev/bin')

# Because I use rvm, I have to run the server through a shell
@server = ChildProcess.build("sh", "-c", "ruby pageServer.rb -p 4563")
@server.cwd = server_dir
@server.io.inherit!
@server.leader = true
@server.start

at_exit do
  puts "----------------at exit--------------"
  puts "Killing process " + @server.pid.to_s
  @server.stop
  if @server.alive?
    puts "Server is still alive - kill it manually"
  end
end
