Capistrano remote execution with EventMachine periodic timer - ruby

I am using the Ruby sprinkle gem (with Capistrano as its deployment mechanism) to execute a command that starts a simple Ruby app on a remote Ubuntu server. Below is a small snippet of the app's code, where I think the problem could lie.
...
require 'eventmachine'
...
fork { run }

def run
  EM.run {
    Signal.trap('INT')  { @log.debug("trapped INT signal");  stop(true) }
    Signal.trap('TERM') { @log.debug("trapped TERM signal"); stop(true) }
    begin
      pulsate
      @log.info "still ok here"
      EM.add_periodic_timer(60) do # 1 minute
        @log.info "doesn't get here"
        pulsate
      end
      @log.info "doesn't get here"
    rescue => exc
      # never gets here
      @log.error "Unable to add EM timer due to: #{exc}"
      exit -1
    end
  }
end

def pulsate...
def stop...
etc
...
The strange thing is that it all runs without any problems when I ssh onto the server and run it there. However, when using sprinkle/Capistrano, as soon as the process hits EM.add_periodic_timer it just disappears: no exception is thrown, no signals fire, there is no log output; it never seems to reach the next line.
Also, I am using the latest versions of the gems: eventmachine (1.0.0.rc.4) and capistrano (2.12.0). (sprinkle is a red herring, I think, as it just falls back on Capistrano.)
Any ideas why it could fail during remote execution but not when executed directly on the server? Any ideas of things I can try to get more information?

Try setting the global EM error handler before calling EM.run. See the documentation for EM#error_handler; it may give you a clue as to what's going on.
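For reference, here's a minimal sketch of wiring up the handler (logging to STDERR is just a placeholder; anything visible from the Capistrano session will do, and pulsate stands in for the question's method):

require 'eventmachine'

def pulsate; end # stand-in for the question's method

# Install the handler before entering the reactor, so exceptions raised
# inside EM callbacks get reported instead of silently killing the process.
EM.error_handler do |e|
  STDERR.puts "EM error: #{e.message}"
  STDERR.puts e.backtrace.join("\n")
end

EM.run do
  EM.add_periodic_timer(60) { pulsate }
end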

Related

Testing multiple hosts with the same test using serverspec

The Advanced Tips section of the Serverspec site shows an example of testing multiple hosts with the same test set. I've built an example of my own (https://gist.github.com/neilhwatson/81249ad393800a76a8ad), but there are problems.
The first problem is that the tests stop at the first failure rather than proceeding through the lot and keeping a tally. The second is that the failure output does not indicate on which host the failure occurred. What can I do to fix these problems and produce a final report for all hosts?
For the first issue, ServerSpec by default will run all your tests. However, since you have a loop that executes a Rake task for each environment, the first environment with a failure causes its task to fail, so an exception is raised and the rest of your tasks don't run.
I've forked your gist and updated the Rake task to surround it with a begin/rescue.
...
begin
  desc "Run serverspec to #{host}"
  RSpec::Core::RakeTask.new(host) do |t|
    ENV['TARGET_HOST'] = host
    t.pattern = "spec/base,cfengine3/*_spec.rb"
  end
rescue
end
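A related knob worth knowing about: RSpec::Core::RakeTask has a fail_on_error setting, so a host's failing specs don't raise in the first place. A sketch with the same task body:

RSpec::Core::RakeTask.new(host) do |t|
  ENV['TARGET_HOST'] = host
  t.pattern = "spec/base,cfengine3/*_spec.rb"
  t.fail_on_error = false # report failures without aborting the Rake run
end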
...
For the second problem, it doesn't look like ServerSpec will output which environment the tests are running in. But since the updated gist shows that the host gets set in spec_helper.rb, we can use that to add an RSpec configuration that sets up an after(:each) hook and outputs the host only on errors. The relevant code changes are in a fork of the gist, but basically you just need the snippet below in your spec_helper.rb:
RSpec.configure do |c|
  c.after(:each) do |example|
    if example.exception
      puts "Failed on #{host_run_on}"
    end
  end
end
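The hook assumes a host_run_on helper is available from spec_helper.rb; a minimal version, assuming the target host arrives via the TARGET_HOST variable set by the Rake task above:

# spec_helper.rb (sketch; the gist sets the host similarly)
def host_run_on
  ENV['TARGET_HOST']
end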

Database cleaner not working with vim-rspec plugin

I use the vim-rspec plugin to run RSpec tests from within Vim, and it had been working very well so far. But suddenly the database_cleaner gem stopped working.
Here is my configuration:
# spec/rspec_rails.rb
RSpec.configure do |config|
  config.before(:suite) do
    puts "Setting up the database cleaner."
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.clean_with(:truncation)
  end

  config.around(:each) do |example|
    puts "Cleaning the database"
    DatabaseCleaner.cleaning do
      example.run
    end
  end
end
I put those two messages there to find out whether the two blocks run, but they don't. Even if I stop Spring and run again, it does not fix it. The strange thing is that if I run the rspec command from the command line everything works well: I get both messages and the database is cleaned, the first message once at startup and the second on every example.
The problem might be in Spring itself; remove it and try again. Also take a look at g:rspec_command in your .vimrc; maybe you have bound a specific script to it?

Is it possible to send a notification when a Unicorn master finishes a restart?

I'm running a series of Rails/Sinatra apps behind nginx + unicorn, with zero-downtime deploys. I love this setup, but it takes a while for Unicorn to finish restarting, so I'd like to send some sort of notification when it finishes.
The only callbacks I can find in Unicorn docs are related to worker forking, but I don't think those will work for this.
Here's what I'm looking for from the bounty: the old unicorn master starts the new master, which then starts its workers, and then the old master stops its workers and lets the new master take over. I want to execute some ruby code when that handover completes.
Ideally I don't want to implement any complicated process monitoring in order to do this. If that's the only way, so be it. But I'm looking for easier options before going that route.
I've built this before, but it's not entirely simple.
The first step is to add an API endpoint that returns the git SHA of the currently deployed revision of the code. For example, if you deployed AAAA and then deploy BBBB, the endpoint should now return BBBB. Let's assume you added the endpoint "/checks/version" that returns the SHA.
Here's a sample Rails controller implementing this API. It assumes the Capistrano REVISION file is present, and it reads the current release SHA into memory at app load time:
class ChecksController < ApplicationController
  # Read the release SHA once, when the class is loaded
  VERSION = File.read(File.join(Rails.root, 'REVISION')) rescue 'UNKNOWN'

  def version
    render(:text => VERSION)
  end
end
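And the matching route (a sketch; adjust to your routing conventions):

# config/routes.rb
get '/checks/version' => 'checks#version'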
You can then poll the local unicorn for the SHA via your API and wait for it to change to the new release.
Here's an example using Capistrano, that compares the running app version SHA to the newly deployed app version SHA:
namespace :deploy do
  desc "Compare running app version to deployed app version"
  task :check_release_version, :roles => :app, :except => { :no_release => true } do
    timeout_at = Time.now + 60
    while Time.now < timeout_at do
      expected_version = capture("cat /data/server/current/REVISION")
      running_version  = capture("curl -f http://localhost:8080/checks/version; exit 0")

      if expected_version.strip == running_version.strip
        puts "deploy:check_release_version: OK"
        break
      else
        puts "=[WARNING]==========================================================="
        puts "= Stale Code Version"
        puts "=[Expected]=========================================================="
        puts expected_version
        puts "=[Running]==========================================================="
        puts running_version
        puts "====================================================================="
        Kernel.sleep(10)
      end
    end
  end
end
You will want to tune the timeouts/retries on the polling to match your average app start-up time. This example assumes a Capistrano directory structure, with the app in /data/server/current and a local unicorn on port 8080.
If you have full access to the box, you could wrap the Unicorn startup in a script that launches a watcher script, which loops checking for /proc/<unicorn-pid>/exe, a symlink to the running executable.
See: Detect launching of programs on Linux platform
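A rough sketch of that polling loop; the PID-file path is an assumption and run_notification_hook is a hypothetical callback (Unicorn renames the old master's pid file to *.oldbin during a USR2 restart):

# Poll /proc/<pid>/exe until the old master's entry disappears,
# i.e. the old process has exited and the new master has taken over.
def wait_for_exit(pid, timeout = 60)
  deadline = Time.now + timeout
  while Time.now < deadline
    return true unless File.exist?("/proc/#{pid}/exe")
    sleep 1
  end
  false # timed out; the old master is still running
end

old_pid = File.read('/data/server/shared/pids/unicorn.pid.oldbin').to_i # path assumed
run_notification_hook if wait_for_exit(old_pid)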
Update
Based on the changes to the question, I see two options - neither of which are great, but they're options nonetheless...
You could have a cron job run a Ruby script every minute which monitors the PID directory's mtime and then checks that the PID files exist: the mtime tells you a file in the directory has changed, and the PID file tells you the process is running. The script executes your additional code if both conditions are true (a sketch follows below). Again, this is ugly and it's a cron job running every minute, but it's minimal setup.
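A minimal sketch of that cron-driven script; the paths and the notify_restart_complete hook are assumptions:

PID_DIR  = '/data/server/shared/pids'        # assumed Unicorn pid directory
PID_FILE = File.join(PID_DIR, 'unicorn.pid')

# Did anything in the pid directory change within the last minute?
recently_changed = Time.now - File.mtime(PID_DIR) < 60

# Is the master process from the pid file actually running?
running = begin
  Process.kill(0, File.read(PID_FILE).to_i) # signal 0 only checks existence
  true
rescue Errno::ENOENT, Errno::ESRCH
  false
end

notify_restart_complete if recently_changed && running # hypothetical notifier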
I know you want to avoid complicated monitoring, but this is how I'd try it: I would use monit to monitor those processes and, when they restart, kick off a Ruby script which sleeps (to let start-up finish), then checks the status of the processes (perhaps using monit itself again). If this all returns properly, execute your additional Ruby code.
Option #1 isn't clean, but having written up the monit option, I like that one even better.

Why won't my Rails function abort?

I'm trying to debug an application that someone else wrote. In my production.log, I see:
Processing by Friendster::AppsController#home as HTML
Parameters: {SOMESTUFF}
Completed 500 Internal Server Error in 3ms
So I go to app/controllers/friendster/apps_controller.rb and look at the home action, which is:
def home
  show_app_container
end
So I changed it to:
def home
  puts "container"
  abort "SHAMOON"
  show_app_container
end
Just so I can see some sort of error or log output. But nothing shows up anywhere, and nothing renders differently. I don't know if there's caching going on or whether I'm even in the right function. Any help debugging this would be greatly appreciated.
I also ran a bundle exec rake routes and got:
friendster_app_home POST /publishers/:publisher_id/apps/:app_id/home(.:format) {:action=>"home", :controller=>"friendster/apps"}
GET /publishers/:publisher_id/apps/:app_id/home(.:format) {:action=>"home", :controller=>"friendster/apps"}
There are quite a few other routes with GET /publishers/:publisher_id/apps/:app_id/home(.:format), so I'm not sure what that means, but this is the only friendster one.
EDIT: Adding Base controller parent
class Friendster::BaseController < AppsController
  protected
end
Include the pry gem, call binding.pry inside #home, and Pry will spawn an interactive debugger in the console.
def home
  binding.pry
  show_app_container
end
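If the gem isn't already in your bundle, a typical setup (the group choice is up to you):

# Gemfile
group :development, :test do
  gem 'pry'
end

Then bundle install, trigger the request, and the server's console will drop into the Pry session at the binding.pry line.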

Testing a server Ruby application with Cucumber

My Ruby application runs a WEBrick server. I want to test it with Cucumber and ensure that it gives the right responses.
Is it normal to run the server in the test environment for testing? Where in my code should I start the server process, and where should I destroy it?
Right now I start the server in a Background step and destroy it in an After hook. It's slow, because the server starts before every scenario and is destroyed after it.
I have an idea: start the server in env.rb and destroy it in an at_exit block, also declared in env.rb. What do you think about that?
Do you know any patterns for this problem?
I use Spork for this. It starts up one or more servers and can reload them when needed. This way, each test run doesn't incur the overhead of firing up Rails.
https://github.com/sporkrb/spork
Check out this RailsCast for the details: http://railscasts.com/episodes/285-spork
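For reference, the usual shape of a Spork-aware env.rb (a sketch; what goes in each block depends on your app):

require 'spork'

Spork.prefork do
  # Expensive one-time setup goes here (e.g. loading Rails);
  # this block runs once, when the Spork server boots.
end

Spork.each_run do
  # Per-run setup goes here; it executes on every test run.
end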
Since Cucumber does not support Spork any more (why?), I use the following code in env.rb. To fork a process I use this library: https://github.com/jarib/childprocess
require 'childprocess'

ChildProcess.posix_spawn = true

wkDir = File.dirname(__FILE__)
server_dir = File.join(wkDir, '../../site/dev/bin')

# Because I use rvm, I have to run the server through a shell
@server = ChildProcess.build("sh", "-c", "ruby pageServer.rb -p 4563")
@server.cwd = server_dir
@server.io.inherit!
@server.leader = true
@server.start

at_exit do
  puts "----------------at exit--------------"
  puts "Killing process #{@server.pid}"
  @server.stop
  if @server.alive?
    puts "Server is still alive - kill it manually"
  end
end
