Ruby Selenium Capybara not timing out

I'm having issues with my specs not timing out. Some of my specs get to a certain point and just hang. I'm sure something is wrong with one of the specs, resulting in it being broken; what I can't figure out is why they hang indefinitely when I have a timeout defined...
# frozen_string_literal: true

require 'rspec'
require 'capybara/rspec'
require 'capybara/dsl'
require 'selenium-webdriver'
require 'site_prism'

Dir[File.dirname(__FILE__) + '/page_objects/*/*.rb'].each do |page_object|
  require page_object
end

def wait_for_ajax
  Timeout.timeout(Capybara.default_max_wait_time) do
    loop until page.evaluate_script('jQuery.active').zero? && page.has_no_css?(".k-loading-color")
  end
end

def whole_page
  Capybara.current_session
end

Capybara.register_driver :selenium do |app|
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end

Capybara.default_driver = :selenium
Capybara.app_host = #REDACTED
Capybara.default_max_wait_time = 20

RSpec.configure do |config|
  config.before(:each) do
    config.include Capybara::DSL
  end

  config.after(:each) do
    Capybara.reset_sessions!
  end
end

You don't mention which command it hangs on, but my guess is your wait_for_ajax method. If so, the culprit is Timeout.timeout, which is one of the most dangerous methods Ruby provides. It works by starting a second thread that raises an exception in the original thread when the timeout expires. The problem is that the exception can fire anywhere in the original thread, so if the block inside the timeout call is doing anything non-trivial (network comms, etc.) it can be left in a completely unrecoverable state. Timeout.timeout can only ever be used safely with very detailed knowledge of everything happening inside its block, which effectively means it should never be wrapped around calls to a third-party library. Instead, just use a timer and sleep if you need a timeout. Something like
def wait_for_ajax
  start = Time.now
  until page.evaluate_script('jQuery.active').zero? && page.has_no_css?(".k-loading-color", wait: false)
    sleep 0.1
    raise <Some Error> if (Time.now - start) > Capybara.default_max_wait_time
  end
end
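To make the danger concrete, here is a standalone sketch (no Capybara involved, values illustrative) of how Timeout.timeout interrupts its block from a watchdog thread:

```ruby
require 'timeout'

# Timeout.timeout spawns a watchdog that raises Timeout::Error *inside* the
# calling thread. The raise can land anywhere in the block, mid-operation.
interrupted = false
begin
  Timeout.timeout(0.1) do
    # Stands in for non-trivial work (network I/O, driver calls) that may be
    # left half-done when the exception fires.
    sleep 1
  end
rescue Timeout::Error
  interrupted = true
end

puts interrupted   # true: the block never got to finish its "work"
```

Because the raise is asynchronous, any ensure blocks or protocol state inside third-party code in the block may be skipped or corrupted, which is why the timer-and-sleep loop above is safer.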
That being said, you really shouldn't need wait_for_ajax at all with a usable UI and properly written tests.
Additionally, requiring capybara/rspec already arranges for reset_sessions! to be called after every test and for Capybara::DSL to be included into the test types it belongs in - https://github.com/teamcapybara/capybara/blob/master/lib/capybara/rspec.rb#L9 - so by adding your own after block you just end up calling reset_sessions! twice after every test, which is a waste of time.

Related

Running ruby scripts in parallel

Let's say I've got two ruby scripts - a.rb and b.rb. Both are web scrapers used for different pages. They can run for many, many hours and I would like to run them simultaneously. In order to do that I've tried to run them from a third script using the 'promise' gem with the following code:
def method_1
  require 'path to my file\a'
end

def method_2
  require 'path to my file\b'
end

require 'future'

x = future { method_1 }
y = future { method_2 }
x + y
However, this solution throws an error (below) and only one script is executed.
An operation was attempted on something that is not a socket.
(Errno::ENOTSOCK)
I also tried playing with Thread class:
def method_one
  require 'path to my file\a'
end

def method_two
  require 'path to my file\b'
end

x = Thread.new { method_one }
y = Thread.new { method_two }
x.join
y.join
And it gives me the same error as for 'promise' gem.
I've also run those scripts in separate shells - then they do work at the same time, but the performance is much worse (approximately 50% slower).
Is there any way to run them at the same time and keep high performance?
You can use concurrent-ruby for this; here is how you can run both of your scripts in parallel:
require 'concurrent'

# Create a future for running script a
future1 = Concurrent::Promises.future do
  require 'path to file\a'
  :result
end

# Create a future for running script b
future2 = Concurrent::Promises.future do
  require 'path to file\b'
  :result
end

# Combine both futures to run them in parallel
future = Concurrent::Promises.zip(future1, future2)

# Wait until both scripts are completed
future.value!
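For reference, the same fan-out/join shape works with plain stdlib threads (concurrent-ruby futures run on threads underneath). Note that on MRI the global VM lock prevents true parallelism for CPU-bound Ruby code, but scrapers spend most of their time waiting on I/O, where threads overlap fine. A minimal sketch with illustrative stand-in tasks:

```ruby
# Two stand-ins for the long-running scripts; sleep represents I/O waits.
t1 = Thread.new { sleep 0.2; :a }
t2 = Thread.new { sleep 0.2; :b }

start = Time.now
results = [t1, t2].map(&:value)   # Thread#value joins and returns each result
elapsed = Time.now - start

puts results.inspect              # [:a, :b]
puts(elapsed < 0.35)              # true: the waits overlapped (~0.2 s, not 0.4 s)
```

If the original Thread attempt raised the same Errno::ENOTSOCK error, the problem is likely inside the scraper code itself rather than the concurrency mechanism.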

Sinatra app executes during load instead of after run method issued

This is a stripped-down example of a real app I am building. When I execute the app, this is the result I get. You'll notice it says it is running before it says it is starting. You'll also notice it never says it is running after the start is issued.
bundle exec rackup
Using thin;
Sapp::App running.
Starting Sapp::App
== Sinatra (v1.4.7) has taken the stage on 4567 for development with backup from Thin
Thin web server (v1.7.0 codename Dunder Mifflin)
Maximum connections set to 1024
Listening on localhost:4567, CTRL+C to stop
My config.ru is:
# http://www.rubydoc.info/gems/webmachine/Webmachine/Adapters/Rack
$started = false
require 'thin'
require 'sinatra'
set :server, (ENV['RACK_ENV'] == 'production' || ENV['RACK_ENV'] == 'staging' ? 'rack' : 'thin')
puts "Using #{settings.server};"
load 'webmachine/adapters/rack.rb'
load File.join(File.dirname(__FILE__), 'sapp.rb')
$started = true
puts 'Starting Sapp::App'
#Sapp::App.run!
Sinatra::Application.run!
I am setting $started just to try to fix this problem, but it doesn't help: my app is executed before it is set. I could work around that but, and this is the rub, the code does not execute after the run is issued.
sapp.rb is:
ENV['RACK_ENV'] ||= 'development'
Bundler.setup
$: << File.expand_path('../', __FILE__)
$: << File.expand_path('../lib', __FILE__)
require 'dotenv'
Dotenv.load(
  File.expand_path("../.env.#{ENV['RACK_ENV']}", __FILE__),
  File.expand_path("../.env", __FILE__))

module Sapp
  class App < Sinatra::Application
    puts 'Sapp::App has been started.' if $started
    puts 'Sapp::App running.'
  end
end
In the end, once it says "Starting Sapp::App", it should also say "Sapp::App has been started." and "Sapp::App running."
For the record, both these options do the same thing:
Sapp::App.run!
Sinatra::Application.run!
Okay, I get it. I put the code in a class, but not in a method. Both load and require execute top-level and class-body code like this immediately. I need to wrap it in methods, and call those methods, to get the behavior I want.
The Sinatra examples I followed don't make this clear and simply avoid the topic. Many are so simple it doesn't make a difference, and some are coded entirely within the config.ru. Coming from Rails, I knew this in principle, but it rarely mattered there because the vast majority of the code already lives in methods.
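The load-time vs. call-time distinction can be shown without Sinatra at all; this sketch (names are illustrative) records when each piece of code actually runs:

```ruby
$events = []

class MyApp
  # Class-body code runs the moment the file is loaded/required...
  $events << :class_body_evaluated

  def self.run!
    # ...while method bodies run only when the method is called.
    $events << :run_called
  end
end

$events << :after_load
MyApp.run!

puts $events.inspect   # [:class_body_evaluated, :after_load, :run_called]
```

This mirrors the output above: the puts statements in Sapp::App's class body fire during load, before config.ru ever reaches the run! line.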

How do you test Postgres's LISTEN / NOTIFY with ActiveRecord?

Assuming I'm using the pg gem and RSpec, what approach should I take to properly test that my LISTEN and NOTIFY statements are working? pg's wait_for_notify blocks, so it seems like I wouldn't be able to "notify, then listen", or "listen, then notify". Am I overlooking something?
For example:
it "notifies" do
  conn = ActiveRecord::Base.connection
  it_ran = false

  conn.execute "LISTEN my_channel"
  conn.execute "NOTIFY my_channel, 'hello'"

  conn.wait_for_notify(1) do |channel, pid, payload|
    it_ran = true
  end

  expect(it_ran).to eq true
end
Edit:
This works in the controller, and even the rails console, but for some reason it doesn't work in an RSpec test. Strangely, using the pg gem directly does work. Why might ActiveRecord not be working in this scenario?
wait_for_notify() only blocks if it needs to. That is, when there isn't already something in the notification queue.
In your code, there will already be a notification in the queue based on your first NOTIFY, so wait_for_notify() will return immediately and it_ran will be set.
If I rip out the ActiveRecord stuff and just use pg directly, this is exactly what happens.
Turns out the problem was with DatabaseCleaner. I tried strategies other than transaction, but nothing worked; it seems the only way to get LISTEN / NOTIFY working on an ActiveRecord connection is to disable DatabaseCleaner for that one test.
Here's how I ended up disabling DatabaseCleaner for a specific test. In my support configuration file:
RSpec.configure do |config|
  # BEFORE:
  # config.before(:each) do
  #   DatabaseCleaner.strategy = :transaction
  # end

  # AFTER:
  config.before(:each) do |test|
    unless test.metadata[:no_database_cleaner]
      DatabaseCleaner.strategy = :transaction
    end
  end
end
In my spec file:
RSpec.describe "Postgres LISTEN / NOTIFY" do
  it "notifies", :no_database_cleaner => true do
    # [clipped]
  end
end
Now anytime I need to test for LISTEN / NOTIFY, I add :no_database_cleaner => true to the it block.
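The gating itself is just a metadata check; stripped of RSpec and DatabaseCleaner, the decision looks like this (strategy_for is an illustrative helper, not part of either library):

```ruby
# Returns the DatabaseCleaner strategy a test should get, or nil to skip it.
def strategy_for(metadata)
  metadata[:no_database_cleaner] ? nil : :transaction
end

puts strategy_for({}).inspect                          # :transaction
puts strategy_for(no_database_cleaner: true).inspect   # nil
```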

Simulate slow connection with Sinatra

I'm a frontend developer, and when I code I use Sinatra as a static file server backend:
require 'sinatra'

configure do
  set :public_folder, File.dirname(__FILE__)
end

get '/' do
  send_file File.join(settings.public_folder, 'index.html')
end

get '/:name' do
  file = File.join(settings.public_folder, params[:name])
  if File.exist?(file)
    send_file file
  else
    halt 404
  end
end
I was happy with that, but this time I was given a task to create a JS intro that does some complex behavior only while the page is being loaded.
I'm unable to test such JS behavior because in my development sandbox Sinatra serves files immediately.
How do I make Sinatra serve files slowly, at a given max rate, e.g. 10 Kbps? Suggestions for alternative approaches are also appreciated.
It is possible if you split the file into chunks and emit them gradually; here is an example:
require 'sinatra'
require 'sinatra/streaming'

def file_chunks
  [].tap do |chunks|
    File.open("index.html", "rb") do |io|
      chunks << io.read(10) until io.eof?
    end
  end
end

get '/send_file_slowly' do
  stream do |out|
    file_chunks.each do |chunk|
      out.print chunk
      out.flush
      sleep 0.2
    end
  end
end
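With 10-byte chunks and a 0.2 s sleep, the example above serves roughly 50 bytes per second. To hit a specific target rate, you can derive the chunk size from the rate and the sleep interval; throttle_chunk_size below is an illustrative helper, not a Sinatra API:

```ruby
# Bytes to emit per tick so that chunk_size / tick_seconds == bytes_per_second.
def throttle_chunk_size(bytes_per_second, tick_seconds)
  (bytes_per_second * tick_seconds).to_i
end

# 10 Kbps is 1280 bytes per second; with the example's 0.2 s tick,
# that works out to 256-byte chunks.
puts throttle_chunk_size(1280, 0.2)   # 256
```

Swapping the hard-coded `io.read(10)` for `io.read(throttle_chunk_size(1280, 0.2))` would approximate a 10 Kbps stream, ignoring network and buffering overhead.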

How can I change ruby log level in unit tests based on context

I'm new to ruby so forgive me if this is simple or I get some terminology wrong.
I've got a bunch of unit tests (actually they're integration tests for another project, but they use ruby test/unit) and they all include from a module that sets up an instance variable for the log object. When I run the individual tests I'd like log.level to be debug, but when I run a suite I'd like log.level to be error. Is it possible to do this with the approach I'm taking, or does the code need to be restructured?
Here's a small example of what I have so far.
The logging module:
#!/usr/bin/env ruby
require 'logger'

module MyLog
  def setup
    @log = Logger.new(STDOUT)
    @log.level = Logger::DEBUG
  end
end
A test:
#!/usr/bin/env ruby
require 'test/unit'
require 'mylog'

class Test1 < Test::Unit::TestCase
  include MyLog

  def test_something
    @log.info("About to test something")
    # Test goes here
    @log.info("Done testing something")
  end
end
A test suite made up of all the tests in its directory:
#!/usr/bin/env ruby
Dir.foreach(".") do |path|
  if /it-.*\.rb/.match(File.basename(path))
    require path
  end
end
As you noted you're going to need to set something in the suite.
Something like the following can be used in the setup method. Just call MyLog.setInSuite in your suite and it'll set the level to INFO on setup.
module MyLog
  @@use_info = false

  def MyLog.setInSuite()
    @@use_info = true
  end

  def setup
    @log = Logger.new(STDOUT)
    if @@use_info
      @log.level = Logger::INFO
    else
      @log.level = Logger::DEBUG
    end
  end
end
Hmm. I think the way to do it is to change the logging module to test an environment variable, say TEST_SUITE, and set the logging level to INFO if it is set and DEBUG if it is not.
Then update your test suite to set the TEST_SUITE environment variable at the top and unset it at the bottom.
Seems a bit clunky, so I'd be interested to see if anyone else has any other ideas.
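A sketch of that environment-variable idea, matching the levels described above (the TEST_SUITE name is the one suggested; everything else is illustrative):

```ruby
require 'logger'

# The suite runner sets TEST_SUITE; individual test runs leave it unset.
def suite_log_level
  ENV['TEST_SUITE'] ? Logger::INFO : Logger::DEBUG
end

ENV.delete('TEST_SUITE')
puts(suite_log_level == Logger::DEBUG)   # true: individual runs log at DEBUG

ENV['TEST_SUITE'] = '1'
puts(suite_log_level == Logger::INFO)    # true: suite runs log at INFO
```

The setup method in MyLog would then assign `@log.level = suite_log_level`, with no class variable needed.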