Testing a Ruby server application with Cucumber - ruby

My Ruby application runs a WEBrick server. I want to test it with Cucumber and ensure that it returns the right responses.
Is it normal to run the server in the test environment for testing? Where in my code should I start the server process, and where should I shut it down?
Right now I start the server in a Background step and shut it down in an After hook. That is slow, because the server starts before every scenario and is torn down afterwards.
My idea is to start the server in env.rb and shut it down in an at_exit block, also declared in env.rb. What do you think about that?
Do you know any patterns for this problem?

I use Spork for this. It starts up one or more servers, and has the ability to reload these when needed. This way, each time you run your tests you're not incurring the overhead of firing up Rails.
https://github.com/sporkrb/spork
Check out this RailsCast for the details: http://railscasts.com/episodes/285-spork
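For reference, a Spork-aware setup splits env.rb into a prefork block (loaded once by the Spork server) and an each_run block. This is only a rough sketch for the typical Rails layout the answer mentions, so the requires are assumptions:

require 'rubygems'
require 'spork'

Spork.prefork do
  # Runs once, when the Spork DRb server boots, so the expensive
  # environment loading happens only one time.
  ENV['RAILS_ENV'] ||= 'test'
  require 'cucumber/rails'
end

Spork.each_run do
  # Runs before each test run, after the fork.
end

You would then start the DRb server with spork cucumber in one terminal and run your features with cucumber --drb in another.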

Since Cucumber no longer supports Spork (why?), I use the following code in env.rb.
To spawn a child process I use this library: https://github.com/jarib/childprocess
require 'childprocess'

ChildProcess.posix_spawn = true

wkDir = File.dirname(__FILE__)
server_dir = File.join(wkDir, '../../site/dev/bin')

# Because I use rvm, I have to run the server through a shell
@server = ChildProcess.build("sh", "-c", "ruby pageServer.rb -p 4563")
@server.cwd = server_dir
@server.io.inherit!
@server.leader = true
@server.start

at_exit do
  puts "----------------at exit--------------"
  puts "Killing process " + @server.pid.to_s
  @server.stop
  if @server.alive?
    puts "Server is still alive - kill it manually"
  end
end
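With the server started only once in env.rb, it also helps to block until it is actually accepting connections before the first scenario runs. This is a minimal sketch (not part of the original answer), assuming the host and port from the command above:

require 'socket'

# Poll until the server process started above accepts TCP connections.
def wait_for_server(host, port, timeout = 15)
  deadline = Time.now + timeout
  begin
    TCPSocket.new(host, port).close
  rescue Errno::ECONNREFUSED, Errno::ETIMEDOUT
    raise "Server on #{host}:#{port} did not start within #{timeout}s" if Time.now > deadline
    sleep 0.2
    retry
  end
end

wait_for_server('localhost', 4563)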

Related

OCRA missing submodules of libraries if rack/grape api endpoints aren't called

I'm trying to pack my REST app into an executable with OCRA. I have a few gems required in my script:
require 'rack'
require 'rack/server'
require 'grape'
require 'grape-entity'
require 'rubygems'
I skip starting the server with this:
if not defined?(Ocra)
  Rack::Server.start options
end
When I try to run my server.exe:
Temp/../server.rb:221:in `default_middleware_by_environment':
cannot load such file -- rack/content_length (LoadError)
This means it doesn't detect submodules of rack that exist but aren't used, and that are therefore not included. If I add a require 'rack/content_length' it continues with cannot load such file -- rack/chunked, and so on.
When I interrupted the server by hand before packing, I also had to call a few API endpoints to get everything included.
I think my options are either:
Tell OCRA to include all the submodules of rack and grape, but compiling that list is a bit time-consuming and would increase the file size.
I already tried ocra server.rb --gem-full=rack --gem-full=grape, which gets my server started, but when calling the API, 'rack/mount/strexp' is missing again.
Calling the API from within my script, but I couldn't figure out how to do that. I can't add a block to Rack::Server.start options, and it only continues when I interrupt the server.
Any ideas to implement either option, or is there another solution?
If we run the Rack app with a Rack handler (WEBrick / Thin / etc.), we can shut down the server from another thread so that OCRA can finish packing (not sure how to do the same thing with Rack::Server).
app = Rack::Directory.new ENV['HOME'] # a sample app
handler = Rack::Handler.pick %w/ thin webrick /

handler.run app do |server|
  # handler.run yields a server object,
  # which we shut down while ocra is packing
  if ocra_is_packing # replace with a proper condition
    Thread.new { sleep 10; server.shutdown }
  end
end
You may have to do something else (access the server etc.) to have ocra pack appropriate dependencies.
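The question's own skip-the-server guard suggests what the placeholder condition can be: OCRA defines the Ocra constant while it is packing, so presumably the block above becomes something like:

handler.run app do |server|
  if defined?(Ocra)   # true only while OCRA is packing the script
    Thread.new { sleep 10; server.shutdown }
  end
end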

Database cleaner not working with vim-rspec plugin

I use the vim-rspec plugin to run RSpec tests from within Vim, and it has been working very well so far. But suddenly the database_cleaner gem stopped working.
Here is my configuration:
# spec/rspec_rails.rb
RSpec.configure do |config|
  config.before(:suite) do
    puts "Setting up the database cleaner."
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.clean_with(:truncation)
  end

  config.around(:each) do |example|
    puts "Cleaning the database"
    DatabaseCleaner.cleaning do
      example.run
    end
  end
end
I added those two messages to find out whether the two blocks run, but they don't. Even if I stop Spring and run again, it doesn't fix it. The strange thing is that if I run the rspec command from the command line, everything works: I get both messages and the database is cleaned, the first message once at startup and the second on every example.
The problem might be in Spring itself; remove it and try again. You can also take a look at g:rspec_command in your .vimrc file - maybe you have bound a specific script to it?

Debug a daemon in RubyMine

Is there any way to debug a daemon in RubyMine? I am calling the file like this:
Daemons.run(file, options)
where file is the name of the file and options is the set of options I am passing to it.
I'm also trying to debug a Rails daemon (the daemons gem), the ActiveMessaging poller script in particular. However, ruby-debug-ide fails to attach for various reasons (race conditions, etc.).
The solution I came up with is to use the daemons-rails gem. It works with ActiveMessaging, lets me create all kinds of daemons and, importantly, lets me debug them in RubyMine.
Workaround
Use https://github.com/mirasrael/daemons-rails to create a new daemon. This works with RubyMine 6.3.3, Rails 4, Ruby 2.1.2 on OSX 10.8.x
Sample Daemon
mqpoller.rb (after running rails generate daemon mqpoller)
#!/usr/bin/env ruby

# You might want to change this
ENV["RAILS_ENV"] ||= "development"

root = File.expand_path(File.dirname(__FILE__))
root = File.dirname(root) until File.exists?(File.join(root, 'config'))
Dir.chdir(root)

require File.join(root, "config", "environment")

$running = true
Signal.trap("TERM") do
  $running = false
end

while($running) do
  # Replace this with your code
  Rails.logger.auto_flushing = true
  Rails.logger.info "This daemon is still running at #{Time.now}.\n"

  # ADDED MY CODE BELOW HERE TO START ActiveMessaging
  Rails.logger = Logger.new(STDOUT)
  ActiveMessaging.logger = Rails.logger

  # Load ActiveMessaging
  ActiveMessaging::load_processors

  # Start it up!
  ActiveMessaging::start

  sleep 10
end

Is it possible to send a notification when a Unicorn master finishes a restart?

I'm running a series of Rails/Sinatra apps behind nginx + unicorn, with zero-downtime deploys. I love this setup, but it takes a while for Unicorn to finish restarting, so I'd like to send some sort of notification when it finishes.
The only callbacks I can find in Unicorn docs are related to worker forking, but I don't think those will work for this.
Here's what I'm looking for from the bounty: the old unicorn master starts the new master, which then starts its workers, and then the old master stops its workers and lets the new master take over. I want to execute some ruby code when that handover completes.
Ideally I don't want to implement any complicated process monitoring in order to do this. If that's the only way, so be it. But I'm looking for easier options before going that route.
I've built this before, but it's not entirely simple.
The first step is to add an API endpoint that returns the git SHA of the currently deployed revision: if you deployed AAAA and then deploy BBBB, the endpoint returns BBBB once the new code is serving. Let's assume you add the endpoint "/checks/version" that returns the SHA.
Here's a sample Rails controller to implement this endpoint. It assumes the Capistrano REVISION file is present, and it reads the current release SHA into memory at app load time:
class ChecksController < ApplicationController
  VERSION = File.read(File.join(Rails.root, 'REVISION')) rescue 'UNKNOWN'

  def version
    render(:text => VERSION)
  end
end
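The controller also needs a route; a minimal sketch, assuming the /checks/version path used above:

# config/routes.rb
get '/checks/version', :to => 'checks#version'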
You can then poll the local unicorn for the SHA via your API and wait for it to change to the new release.
Here's an example using Capistrano, that compares the running app version SHA to the newly deployed app version SHA:
namespace :deploy do
  desc "Compare running app version to deployed app version"
  task :check_release_version, :roles => :app, :except => { :no_release => true } do
    timeout_at = Time.now + 60
    while (Time.now < timeout_at) do
      expected_version = capture("cat /data/server/current/REVISION")
      running_version  = capture("curl -f http://localhost:8080/checks/version; exit 0")

      if expected_version.strip == running_version.strip
        puts "deploy:check_release_version: OK"
        break
      else
        puts "=[WARNING]==========================================================="
        puts "= Stale Code Version"
        puts "=[Expected]=========================================================="
        puts expected_version
        puts "=[Running]==========================================================="
        puts running_version
        puts "====================================================================="
        Kernel.sleep(10)
      end
    end
  end
end
You will want to tune the timeouts/retries on the polling to match your average app startup time. This example assumes a capistrano structure, with app in /data/server/current and a local unicorn on port 8080.
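To run the check automatically after each deploy, you could hook the task into the restart flow (Capistrano 2 style, to match the task above):

after "deploy:restart", "deploy:check_release_version"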
If you have full access to the box, you could have the Unicorn startup script launch another script that loops, checking for /proc/<unicorn-pid>/exe, which links to the running executable.
See: Detect launching of programs on Linux platform
Update
Based on the changes to the question, I see two options - neither of which are great, but they're options nonetheless...
You could have a cron job run a Ruby script every minute that monitors the PID directory's mtime and then checks that the PID files exist: a changed mtime tells you something in the directory changed, and the PID files tell you the processes are running. If both conditions hold, execute your additional code. This is ugly, and it's a cron job that runs every minute, but it needs minimal setup (a rough sketch of such a script is included below).
I know you want to avoid complicated monitoring, but this is how I'd try it: use monit to monitor those processes and, when they restart, kick off a Ruby script that sleeps (to allow for start-up) and then checks the status of the processes (perhaps using monit itself again). If all of that returns properly, execute the additional Ruby code.
Option #1 isn't clean, but as I write out the monit option, I like it even better.
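For completeness, here is a rough sketch of the option #1 cron script; the paths, state file, and notification call are assumptions, not something from the answer:

#!/usr/bin/env ruby
# Run from cron every minute: fire a notification once the unicorn PID
# directory has changed and the new master PID file points at a live process.
require 'time'

PID_DIR    = '/data/server/shared/pids'   # assumed location
PID_FILE   = File.join(PID_DIR, 'unicorn.pid')
STATE_FILE = '/tmp/last_unicorn_restart_seen'

dir_mtime = File.mtime(PID_DIR)
last_seen = File.exist?(STATE_FILE) ? Time.parse(File.read(STATE_FILE)) : Time.at(0)

if dir_mtime > last_seen && File.exist?(PID_FILE)
  pid = File.read(PID_FILE).to_i
  begin
    Process.kill(0, pid)   # raises Errno::ESRCH if the master is gone
    # notify_restart_complete   # <- hypothetical: your notification code goes here
    File.open(STATE_FILE, 'w') { |f| f.write(dir_mtime.iso8601) }
  rescue Errno::ESRCH
    # master not up yet; try again on the next cron run
  end
end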

Capistrano remote execution with EventMachine periodic timer

I am using the ruby sprinkle gem (using Capistrano as its deployment mechanism) to execute a command to start a simple ruby app on a remote Ubuntu server. Below is a small snippet of the ruby app code where I think the problem could lie.
...
require 'eventmachine'
...
fork { run }

def run
  EM.run {
    Signal.trap('INT')  { @log.debug("trapped INT signal");  stop(true) }
    Signal.trap('TERM') { @log.debug("trapped TERM signal"); stop(true) }

    begin
      pulsate
      @log.info "still ok here"

      EM.add_periodic_timer 60 do # 1 minute
        @log.info "doesn't get here"
        pulsate
      end
      @log.info "doesn't get here"
    rescue => exc
      # never gets here
      @log.error "Unable to add EM timer due to: #{exc}"
      exit -1
    end
  }
end
def pulsate...
def stop...
etc
...
The strange thing is that it all runs without any problems when I ssh onto the server and run it there. However, when it is run via sprinkle/capistrano, as soon as the process hits EM.add_periodic_timer it just disappears: no exception is thrown, no signals, no log output; it seems to never get to the next line.
Also, I am using the latest version of the EventMachine gem, eventmachine (1.0.0.rc.4), and capistrano (2.12.0). (I think sprinkle is a red herring, as it just falls back on capistrano.)
Any ideas of why it could fail during the remote execution but not when executed in place on the server? Any ideas of things I can try to get more information?
Try setting the global EM error handler before calling EM.run. You can see the documentation at EM#error_handler which may give you a clue as to what's going on.
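For example, a minimal sketch (assuming the @log and pulsate from the question are in scope):

require 'eventmachine'

# Exceptions raised inside EM callbacks and timers go through this handler;
# without it they can take down the reactor silently when the process runs detached.
EM.error_handler do |e|
  @log.error "EventMachine error: #{e.message}\n#{e.backtrace.join("\n")}"
end

EM.run do
  EM.add_periodic_timer(60) { pulsate }
end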
