EM::Synchrony.defer with fiber aware database call causes FiberError exception - ruby

I'm trying to use EM-Synchrony for concurrency in an application and have come across an issue with my use of deferred code and Fibers.
Any call to the database within either EM.defer or EM::Synchrony.defer results in the application crashing with the error "can't yield from root fiber".
Below is a very trimmed-down, runnable example of what I'm trying to accomplish. The first print works and displays [:first, 1], but the second is where I crash with the error mentioned above.
require 'mysql2'
require 'em-synchrony/activerecord'

ActiveRecord::Base.establish_connection(
  :adapter  => 'em_mysql2',
  :username => 'user',
  :password => 'pass',
  :host     => 'localhost',
  :database => 'app_dev',
  :pool     => 60
)

class User < ActiveRecord::Base; end

EM.synchrony do
  p [:first, User.all.count]

  EM::Synchrony.defer do
    p [:second, User.all.count]
  end
end
My first thought was that the Fiber.current and Fiber.yield inside EM::Synchrony.defer meant I could fix the problem with an extra Fiber.new call:
EM::Synchrony.defer do
  Fiber.new do
    p [:second, User.all.count]
  end.resume
end
This fails to run as well, but this time I get the error "fiber called across threads".
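A minimal sketch of one possible workaround, assuming the model and connection setup from the example above: EM::Synchrony.defer hands its block to EventMachine's thread pool, where the fiber-aware em-synchrony mysql2 driver cannot yield or be resumed. Fiber-aware queries can instead be scheduled back onto the reactor thread in their own fiber, keeping defer for genuinely blocking, non-fiber-aware work.

EM.synchrony do
  p [:first, User.all.count]

  # Keep the fiber-aware query on the reactor thread, wrapped in its own fiber,
  # rather than sending it to the thread pool via defer.
  EM.next_tick do
    Fiber.new do
      p [:second, User.all.count]
    end.resume
  end
end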

Related

Handling exceptions in forked process

I'm building a Sinatra API call that will trigger a long-running operation in a subprocess. I'm using the exception_notification gem, but I don't understand how I'd use it in the forked process.
Sinatra app:
require 'sinatra'
require 'rubygems'
require 'bundler/setup'
require 'exception_notification'

use ExceptionNotification::Rack,
  :email => {
    :email_prefix         => "[Example] ",
    :sender_address       => %{"notifier" <notifier@example.com>},
    :exception_recipients => %w{me@example.com},
    :delivery_method      => :sendmail
  }

get '/error' do
  raise 'Bad!' # Notification gets sent
end

get '/error_async' do
  p1 = fork do
    sleep 10
    raise 'Bad! (async)' # Notification never gets sent
  end
  Process.detach(p1)
end
Got it working, per the docs:
get '/error_async' do
  p1 = fork do
    begin
      sleep 10
      raise 'Bad! (async)'
    rescue Exception => e
      ExceptionNotifier.notify_exception(e)
    end
  end
  Process.detach(p1)
end

Creating a variable per instance, rather than per request, with Sinatra modular style

I have a Sinatra app, written in modular style, running on Heroku. It uses Redis, and I have a limited number (10) of Redis connections. I found that it would often throw errors complaining that it had run out of Redis connections, so I started using connection_pool in the hope that would fix things: a single pool of Redis connections from which the app would take one each time, rather than trying to create a new connection on each request.
But I'm still getting the same issue. I can do loads of Redis queries within a single request without complaints. But if I reload a single test page, which just does some Redis queries, several times in fairly quick succession, I get the "Redis::CommandError - ERR max number of clients reached" error again.
So I'm assuming, maybe, that it's creating a new instance of connection_pool on each request... I don't know. But it's not "pooling" as I would expect it to.
I have this kind of thing:
# myapp.rb
$LOAD_PATH.unshift(File.dirname(__FILE__))
$stdout.sync = true
require 'thin'
require 'myapp/frontend'
MyApp::Frontend.run!
And the Sinatra app:
# myapp/frontend.rb
require 'sinatra/base'
require 'redis'
require 'connection_pool'
require 'uuid'

module MyApp
  class Frontend < Sinatra::Base
    helpers do
      def redis_pool
        @redis_pool ||= ConnectionPool.new(:size => 8, :timeout => 5) do
          redis_uri = URI.parse(ENV['REDISCLOUD_URL'])
          ::Redis.new(:host     => redis_uri.host,
                      :port     => redis_uri.port,
                      :password => redis_uri.password)
        end
      end
    end

    get '/tester/' do
      redis_pool.with do |r|
        id = UUID.generate
        r.hset(:user, id, "Some data")
        r.hget(:user, id)
        r.hdel(:user, id)
      end
      p "DONE"
    end
  end
end
The Procfile looks like:
web: ruby myapp.rb
Any ideas? The current site is pretty low traffic, so this should be possible.
A new @redis_pool is created for every GET request to /tester/ because Sinatra creates a fresh instance of the app for each request, so the memoized redis_pool helper runs again every time.
You can use Sinatra's settings helper to initialize a Redis connection only once:
configure do
  redis_uri = URI.parse(ENV['REDISCLOUD_URL'])
  set :redis, Redis.new(:host     => redis_uri.host,
                        :port     => redis_uri.port,
                        :password => redis_uri.password)
end
Now the app has a single Redis connection per process that persists across all requests. Access the setting like so:
get '/tester/' do
  id = UUID.generate
  settings.redis.hset(:user, id, "some data")
  settings.redis.hget(:user, id)
  settings.redis.hdel(:user, id)
  p "DONE"
end
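If you would rather keep pooling instead of sharing a single connection, a variation on the same idea (my sketch, not part of the original answer) is to build the ConnectionPool once inside configure and store it in settings, so it is created per process rather than per request:

configure do
  redis_uri = URI.parse(ENV['REDISCLOUD_URL'])
  # The pool is built once when the app boots and shared by all requests.
  set :redis_pool, ConnectionPool.new(:size => 8, :timeout => 5) {
    ::Redis.new(:host     => redis_uri.host,
                :port     => redis_uri.port,
                :password => redis_uri.password)
  }
end

get '/tester/' do
  settings.redis_pool.with do |r|
    id = UUID.generate
    r.hset(:user, id, "some data")
    r.hget(:user, id)
    r.hdel(:user, id)
  end
  "DONE"
end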

Storage of variables without initializing an object? Ruby Gem 'Mail'

Working with the Ruby gem 'Mail', I am confused as to how variables can be stored without initializing an object. For example:
Mail.defaults do
  retriever_method :pop3, :address    => "pop.gmail.com",
                          :port       => 995,
                          :user_name  => '<username>',
                          :password   => '<password>',
                          :enable_ssl => true
end
After this you are able to call methods such as Mail.first and have it return the first message in the mailbox using the configured defaults.
I realize everything in Ruby is an object, even a class, so when require 'mail' is called, does an object containing the Mail module actually get created and made available to the program? What exactly is happening here?
When require 'mail' is called, Ruby loads and evaluates the gem's mail.rb in your process.
After having a look in the gem, mail.rb contains the Mail module, which in turn contains many other require statements.
mail.rb
module Mail
  ## skipped for brevity

  # Finally... require all the Mail.methods
  require 'mail/mail'
end
mail/mail.rb
module Mail
  ## skipped for brevity

  # Receive the first email(s) from the default retriever
  # See Mail::Retriever for a complete documentation.
  def self.first(*args, &block)
    retriever_method.first(*args, &block)
  end
end
So then the methods are made available to your program.
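To make the mechanics concrete, here is a tiny simplified sketch of the same idea; Mailish and its Configuration class are made-up names for illustration, not the gem's actual implementation. A module can hold configuration in module-level instance variables set from a block, so you never instantiate anything yourself:

module Mailish
  # Evaluate the block against a private configuration object and keep it
  # in an instance variable that belongs to the module itself.
  def self.defaults(&block)
    @configuration = Configuration.new
    @configuration.instance_eval(&block)
  end

  # Module-level methods can then read that stored configuration.
  def self.retriever
    @configuration.retriever
  end

  class Configuration
    attr_reader :retriever

    def retriever_method(name, options = {})
      @retriever = [name, options]
    end
  end
end

Mailish.defaults do
  retriever_method :pop3, :address => "pop.gmail.com", :port => 995
end

p Mailish.retriever # => [:pop3, {:address=>"pop.gmail.com", :port=>995}]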

In selenium ruby tests can I combine the setup and teardown methods into one location for all my tests?

In my Ruby Selenium tests there is a lot of the same code in every test. How can I best share code between tests? For example, my setup and teardown methods are the same in every file. How can I move them out of every file and into one shared file, or is that even possible?
def setup
  @verification_errors = []
  @selenium = Selenium::Client::Driver.new \
    :host => "#$sell_server",
    :port => 4444,
    :browser => "#$browser",
    :url => "http://#$network.#$host:2086/",
    :timeout_in_second => 60
  @selenium.start_new_browser_session
end

def teardown
  @selenium.close_current_browser_session
  assert_equal [], @verification_errors
end
I've tried putting the setup in a shared module and in a required file, but both present different problems with inheritance and with the other methods that need access to the @selenium object that gets started. What would be a good design, if there is one, for sharing this code?
I'm not really sure what test framework you're using, but in RSpec you could place it in your spec_helper file and just do a before(:each) / after(:each). I'd check the callback documentation for your framework of choice.
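As a rough sketch of that RSpec approach, reusing the globals and driver options from the question (treat this as a starting point rather than a drop-in file):

# spec_helper.rb
require 'selenium/client'

RSpec.configure do |config|
  config.before(:each) do
    @verification_errors = []
    @selenium = Selenium::Client::Driver.new \
      :host => "#$sell_server",
      :port => 4444,
      :browser => "#$browser",
      :url => "http://#$network.#$host:2086/",
      :timeout_in_second => 60
    @selenium.start_new_browser_session
  end

  config.after(:each) do
    @selenium.close_current_browser_session
    expect(@verification_errors).to eq([])
  end
end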
For the test/unit framework, it seems to work to create a SharedTest class that inherits from Test::Unit::TestCase and holds the setup and teardown methods, then have each test file subclass SharedTest. The only negative consequence I've found is that I had to add a test_default method that does nothing in SharedTest to get it to work. If I name my test method test_default, that overrides it and seems OK, but it's not very descriptive...
sharedtest.rb
class SharedTest < Test::Unit::TestCase
  def setup
    @verification_errors = []
    @selenium = Selenium::Client::Driver.new \
      :host => "#$sell_server",
      :port => 4444,
      :browser => "#$browser",
      :url => "http://#$network.#$host:2086/",
      :timeout_in_second => 60
    @selenium.start_new_browser_session
  end

  def teardown
    @selenium.close_current_browser_session
    assert_equal [], @verification_errors
  end

  def test_default
    # puts self
  end
end
T01_testcasename.rb
class Test_01_whatever < SharedTest
  def test_default
    # test code
  end
end
I'm still open to better solutions but this seems to be working for me.
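Another option worth considering (my own sketch, not from the answers above) is sharing the setup and teardown through a module mixin instead of a base class, so each test file still inherits directly from Test::Unit::TestCase and no placeholder test_default method is needed:

# selenium_helpers.rb
module SeleniumHelpers
  def setup
    @verification_errors = []
    @selenium = Selenium::Client::Driver.new \
      :host => "#$sell_server",
      :port => 4444,
      :browser => "#$browser",
      :url => "http://#$network.#$host:2086/",
      :timeout_in_second => 60
    @selenium.start_new_browser_session
  end

  def teardown
    @selenium.close_current_browser_session
    assert_equal [], @verification_errors
  end
end

# T01_testcasename.rb
require 'test/unit'
require_relative 'selenium_helpers'

class Test_01_whatever < Test::Unit::TestCase
  include SeleniumHelpers

  def test_whatever
    # test code using @selenium
  end
end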

Selenium Ruby Reporting

I'm trying to set up the environment for testing using Selenium and the selenium-client gem.
I prefer unit test style over RSpec style of tests.
Do I have to build my own system for reporting then?
How can I add exception handling without having begin-rescue-end in each test? Is there any way to do that using mixins?
I'm not sure I understand what your question means in terms of reporting, but the selenium-client gem supports both the BDD (RSpec) and unit-test styles.
Below is code copied from its RubyForge page:
require "test/unit"
require "rubygems"
gem "selenium-client", ">=1.2.16"
require "selenium/client"
class ExampleTest < Test::Unit::TestCase
attr_reader :browser
def setup
#browser = Selenium::Client::Driver.new \
:host => "localhost",
:port => 4444,
:browser => "*firefox",
:url => "http://www.google.com",
:timeout_in_second => 60
browser.start_new_browser_session
end
def teardown
browser.close_current_browser_session
end
def test_page_search
browser.open "/"
assert_equal "Google", browser.title
browser.type "q", "Selenium seleniumhq"
browser.click "btnG", :wait_for => :page
assert_equal "Selenium seleniumhq - Google Search", browser.title
assert_equal "Selenium seleniumhq", browser.field("q")
assert browser.text?("seleniumhq.org")
assert browser.element?("link=Cached")
end
end
As for exception handling, Test::Unit already catches exceptions raised in a test and reports them as errors, so you don't need a begin-rescue-end in each test.
That being said, I may have misunderstood your question.
An initial build of ExtentReports is available for Ruby; the latest source is available on GitHub.
Sample code:
# main extent instance
extent = RelevantCodes::ExtentReports.new('extent_ruby.html')

# extent-test
extent_test = extent.start_test('First', 'description string')

# logs
extent_test.log(:pass, 'step', 'details')
extent.end_test(extent_test)

# flush to write everything to the html file
extent.flush
