I have some JavaScript in my app that detects when the network connection goes away and temporarily caches data in local storage, to be synced with the server when the connection is re-established.
I've been trying to find a way to test this end-to-end using Capybara, but I can't seem to find any way of either temporarily disabling the app server or switching the headless browser into offline mode. FWIW I'm using Poltergeist as the driver.
Does anyone have an idea how this could be tested? (I can test the JavaScript app using sinon to fake the server going away, but I'd like to be able to test it end-to-end with a headless browser if possible).
If you stumbled on this question looking for a way to test offline / progressive web apps with Capybara and Chrome Headless, here's how:
params = {
  cmd: 'Network.emulateNetworkConditions',
  params: {
    offline: true,
    latency: 0,
    downloadThroughput: 0,
    uploadThroughput: 0
  }
}
page.driver.browser.send(:bridge).send_command(params)
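If you need to toggle connectivity repeatedly, it may be worth wrapping the call in a pair of helpers. A minimal sketch, assuming the same private bridge API as above (the helper names are mine, and this may break across selenium-webdriver versions):

# Hypothetical helpers around the CDP call above; these rely on the
# same private bridge API, so they may break between versions.
def set_network_conditions(offline:)
  page.driver.browser.send(:bridge).send_command(
    cmd: 'Network.emulateNetworkConditions',
    params: { offline: offline, latency: 0,
              downloadThroughput: 0, uploadThroughput: 0 }
  )
end

def go_offline
  set_network_conditions(offline: true)
end

def go_online
  set_network_conditions(offline: false)
end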
My team has stubbed out the Rack app to simulate errors from the server. It works well enough (in Firefox). Here are some relevant excerpts from the code:
class NoResponseRack
  attr_reader :requests

  def initialize(disconnected_mode)
    @disconnected_mode = disconnected_mode
    @requests = []
    @sleeping_threads = []
  end

  def call(env)
    @requests.push(env)
    case @disconnected_mode
    when :elb_pool_empty
      @sleeping_threads << Thread.current
      sleep 65
      @sleeping_threads.delete Thread.current
      [504, {}, ['']]
    when :server_maintenance
      [200, {}, ['status_message=Atlas is down for maintenance.']]
    else
      [999, {}, [""]]
    end
  end

  def wakeup_sleeping_threads
    @sleeping_threads.each(&:wakeup)
    @sleeping_threads.clear
  end
end

def go_disconnected_with_proxy(disconnected_mode = :server_error)
  if $proxy_server_disconnected
    puts 'going DISconnected'
    $current_proxy = NoResponseRack.new(disconnected_mode)
    rack_mappings.unshift([nil, "", /^(.*)/n, $current_proxy])
    $proxy_server_disconnected = false
  end
end

def rack_app
  Capybara.current_session.driver.app
end

def rack_mappings
  rack_app.instance_variable_get(:@mapping)
end
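For context, here is a sketch of how this gets driven from a test; the flag handling and expectations are illustrative rather than verbatim from our suite:

# Illustrative only: simulate a maintenance window mid-scenario and
# check that the client-side app falls back to its offline behaviour.
it 'caches edits locally while the server is in maintenance' do
  visit '/editor'
  $proxy_server_disconnected = true
  go_disconnected_with_proxy :server_maintenance
  fill_in 'note', with: 'written while offline'
  click_button 'Save'
  expect(page).to have_content('Will sync when the connection returns')
end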
About the only way I can think of would be to allow the host to be overridden in your tests, and give it a bogus host (something like localhost:31337).
Maybe have a look at http://robots.thoughtbot.com/using-capybara-to-test-javascript-that-makes-http and see if anything jumps out.
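If your JavaScript exposes the API host as a setting, the override can even be done per-test from Capybara. A rough sketch, where App.apiHost is a hypothetical config knob in your JS and port 31337 is assumed to be unbound:

# Rough sketch: load the app normally, then repoint its (hypothetical)
# configurable API host at a port where nothing is listening, so the
# app's sync requests fail as if the server had gone away.
visit '/dashboard'
page.execute_script("App.apiHost = 'http://localhost:31337'")
# ...interact with the page and assert on the offline behaviour...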
I also found another option: start Chrome with
options.add_argument('--host-resolver-rules=MAP * ~NOTFOUND')
I found this flag in the Chromium docs; it was missing from google-chrome --help. Those docs also show that you can point Chrome at your own proxy server to simulate different network conditions.
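To wire that flag into Capybara, a driver registration along these lines should work (the driver name is arbitrary, and you will likely need an EXCLUDE rule so Capybara can still reach its own test server):

# Sketch: register a Chrome driver that fails all DNS lookups except
# localhost, so every external request behaves as if offline.
Capybara.register_driver :offline_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument('--headless')
  options.add_argument('--host-resolver-rules=MAP * ~NOTFOUND, EXCLUDE localhost')
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end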
I'm trying to get some RSpec tests running using a mix of Capybara, Selenium, capybara-webkit, and Poltergeist. I need them to run headless in certain cases and would rather not use xvfb to get webkit working. I am okay using Selenium or Poltergeist as the driver for PhantomJS.

The problem I am having is that my tests run fine with Selenium and Firefox or Chrome, but when I try PhantomJS the elements always show as not found. After looking into it for a while and using page.save_screenshot in Capybara, I found out that the PhantomJS browser wasn't loaded up when the driver told it to find elements, so it wasn't returning anything. I was able to hack in a fix by editing the Poltergeist source in <gem_path>/capybara/poltergeist/driver.rb as follows:
def visit(url)
  if @started
    sleep_time = 0
  else
    sleep_time = 2
  end
  @started = true
  browser.visit(url)
  sleep sleep_time
end
This is obviously not an ideal solution to the problem, and it doesn't work with Selenium as the driver for PhantomJS. Is there any way I can tell the driver to wait for PhantomJS to be ready?
UPDATE:
I was able to get it to run by changing where I included the Capybara::DSL. I added it to the RSpec.configure block as shown below.
RSpec.configure do |config|
  config.include Capybara::DSL
end
I then passed the page object to all the classes I created for interacting with the web page UI.
An example class would now look like this:
module LoginUI
  require_relative 'webpage'

  class LoginPage < WebPages::Pages
    def initialize(page, values = {})
      super(page)
    end

    def visit
      browser.visit(login_url)
    end

    def login(username, password)
      set_username(username)
      set_password(password)
      sign_in_button
    end

    def set_username(username)
      edit = browser.find_element(@selectors[:login_edit])
      edit.send_keys(username)
    end

    def set_password(password)
      edit = browser.find_element(@selectors[:password_edit])
      edit.send_keys(password)
    end

    def sign_in_button
      browser.find_element(@selectors[:sign_in_button]).click
    end
  end
end
The WebPages module looks like this:
module WebPages
  require_relative 'browser'

  class Pages
    def initialize(page)
      @page = page
      @browser = Browser::Browser.new
    end

    def browser
      @browser
    end

    def sign_out
      browser.visit(sign_out_url)
    end
  end
end
The Browser module looks like this:
module Browser
  class Browser
    include Capybara::DSL

    def refresh_page
      page.evaluate_script("window.location.reload()")
    end

    def submit(locator)
      find_element(locator).click
    end

    def find_element(hash)
      page.find(hash.keys.first, hash.values.first)
    end

    def find_elements(hash)
      page.find(hash.keys.first, hash.values.first, match: :first)
      page.all(hash.keys.first, hash.values.first)
    end

    def current_url
      return page.current_url
    end
  end
end
While this works, I don't want to have to include Capybara::DSL inside RSpec or pass the page object into the classes. These classes have had some things removed for the example but show the general structure. Ideally I would like the Browser module to include Capybara::DSL and handle all of the interaction with Capybara.
Your update completely changes the question, so I'm adding a second answer. There is no need to include Capybara::DSL in your RSpec configure if you don't call any Capybara methods from outside your Browser class, just as there is no need to pass page to all your Pages classes if you limit all Capybara interaction to your Browser class. One thing to note is that the page method provided by Capybara::DSL is just an alias for Capybara.current_session, so technically you could always call that instead.
You don't show in your code how you're handling any assertions/expectations on the page content - so depending on how you're doing that you may need to include Capybara::RSpecMatchers in your RSpec config and/or your WebPages::Pages class.
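For instance, if the page classes make their own assertions, a sketch of pulling just the matchers in (without Capybara::DSL anywhere in the RSpec config) might look like:

# Sketch: Capybara::RSpecMatchers gives the Pages classes have_content,
# have_css, etc. without pulling in the full Capybara::DSL.
module WebPages
  class Pages
    include Capybara::RSpecMatchers
  end
end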
Your example code has a couple of issues that immediately pop out. Firstly, your Browser#find_elements (assuming I'm reading your intention for having find first correctly) should probably just be
def find_elements(hash)
  page.all(hash.keys.first, hash.values.first, minimum: 1)
end
Secondly, your LoginPage#login method should end with an assertion/expectation on a visual change that indicates login succeeded (verify some message is displayed, a logged-in menu exists, etc.), to ensure the browser has received the auth cookies before the tests move on. What that line looks like depends on exactly how you're architecting your expectations.
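For example, within the structure above the cheapest option is to end login with a waiting finder for some post-login element; the selector here is illustrative:

# Illustrative: find waits and retries, so this blocks until the
# logged-in UI appears (or raises after the default wait time).
def login(username, password)
  set_username(username)
  set_password(password)
  sign_in_button
  browser.find_element(css: '#account_menu') # hypothetical selector
end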
If this doesn't answer your question, please provide a concrete example of what exactly isn't working for you since none of the code you're showing indicates any need for Capybara::DSL to be included in either of the places you say you don't want it.
Capybara doesn't depend on visit having completed; instead, the finders and matchers will retry for up to a specified period of time until they succeed. You can increase this amount of time by increasing the value of Capybara.default_max_wait_time. The only methods that don't wait by default are first and all, but they can be made to wait/retry by specifying any of the count options:
first('.some_class', minimum: 1) # will wait up to Capybara.default_max_wait_time seconds for the element to exist on the page.
although you should always prefer find over first/all whenever possible
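In practice that usually means raising the global wait and sticking to find; a sketch (the selector is illustrative):

# e.g. in spec_helper.rb: give slow PhantomJS startups more headroom.
Capybara.default_max_wait_time = 10 # seconds (the default is 2)

# find retries until the element appears or the wait time elapses,
# so no manual sleep is needed.
page.find('#login_button').click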
If increasing the maximum wait time doesn't solve your issue, add an example of a test that fails to your question.
I have a test pack where the tests run in selenium-webdriver (Chrome) by default, with an option to run in Poltergeist via ENV['headless'] in the env.rb file. The headless tests get used in deployment (no browser is installed on the box).
I have some tests that are specific to being run headless, such as checking status codes, response headers, etc. I was wondering if there is a way I could use a hook to say: while headless=true, do x, y, or z.
I've tried using tags, but this interferes with the tests when they are run in the browser.
Something like:
AfterStep(ENV['headless']) do
  if page.status_code == 200
    check_xss_settings
  end
end
Is it possible to do what I want?
Thank you
I have identified where I was going wrong.
In my hook, I was using the following:
AfterStep(ENV['headless']) do
  if page.status_code == 200
    check_xss_settings
  end
end
By changing it to the following, I was able to achieve my result:
AfterStep do
  if ENV['headless'] == 'true'
    if page.status_code == 200
      check_xss_script
    end
  end
end
I hope this is helpful for anyone in a similar position.
I have to say I am new both to Ruby and to RSpec. Anyway, I completed one RSpec script, but after refactoring it failed. Here is the original working version:
describe Site do
  browser = Watir::Browser.new :ie
  site = Site.new(browser, "http://localhost:8080/site")

  it "can navigate to any page at the site" do
    site.pages_names.each do |page_name|
      site.goto(page_name)
      site.actual_page.name.should eq page_name
    end
  end

  browser.close
end
and here is the modified version; I wanted all the pages visited during the test to be reported:
describe Site do
  browser = Watir::Browser.new :ie
  site = Site.new(browser, "http://localhost:8080/site")

  site.pages_names.each do |page_name|
    it "can navigate to #{page_name}" do
      site.goto(page_name)
      site.actual_page.name.should eq page_name
    end
  end

  browser.close
end
The problem in the latter case is that site evaluates to nil within the block passed to the it method.
But when I did this:
...
s = site
it "can navigate to #{page_name}" do
  s.goto(page_name)
  s.actual_page.name.should eq page_name
end
...
the nil problem was gone, but the tests failed with the reason "browser was closed".
Apparently I am missing some very basic Ruby knowledge, because the browser reference does not work correctly in the modified script. Where did I go wrong? What refactoring should be applied to make this work?
Thanks for your help!
It's important to understand that RSpec, like many ruby programs, has two runtime stages:
1. During the first stage, RSpec loads each of your spec files and executes each of the describe and context blocks. During this stage, the execution of your code defines your examples, hooks, etc. But your examples and hooks are NOT executed during this stage.
2. Once RSpec has finished loading the spec files (and all examples have been defined), it executes them.
So...trimming down your example to a simpler form, here's what you've got:
describe Site do
  browser = Watir::Browser.new :ie

  it 'does something with the browser' do
    # do something with the browser
  end

  browser.close
end
While visually it looks like the browser instance is instantiated, then used in the example, then closed, here's what's really happening:
1. The browser instance is instantiated.
2. The example is defined (but not run).
3. The browser is closed.
4. (Later, after all examples have been defined...) The example is run.
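You can see the two stages directly by sprinkling in some output; a small sketch:

# Sketch: the outer puts runs while the file is loaded (stage 1);
# the inner puts runs only when the example executes (stage 2).
describe 'two runtime stages' do
  puts 'definition time: runs as the spec file loads'

  it 'runs later' do
    puts 'execution time: runs after all examples are defined'
  end
end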
As O.Powell's answer shows, you can close the browser in an after(:all) hook to delay the closing until after all examples in this example group have run. That said, I'd question if you really need the browser instance at example definition time. Generally you're best off lazily creating resources (such as the browser instance) when examples need them as they are running, rather than during the example definition phase.
I replicated your code above using fake classes for Site and Watir. It worked perfectly. My only conclusion then is that the issue must lie with one of the above classes. I noticed the Site instance only had to visit one page in your first working version, but has to visit multiple pages in the non-working version. There may be an issue there involving the mutation happening inside the instance.
See if this makes a difference:
describe Site do
  uri = "http://localhost:8080/site"
  browser = Watir::Browser.new :ie
  pages_names = Site.new(browser, uri).pages_names

  before(:each) { @site = Site.new(browser, uri) }
  after(:all) { browser.close }

  pages_names.each do |page_name|
    it "can navigate to #{page_name}" do
      @site.goto(page_name)
      @site.actual_page.name.should eq page_name
    end
  end
end
I have the following code (just as a test) to create an HTTP proxy using EventMachine. The code below is an example from the em-proxy GitHub page. However, when I run this and open up a website that has a moderate amount of images, the images load incorrectly. What I mean is that some images are loaded twice, or if I request my icon for the navigation bar, I instead get the profile picture. This is especially evident if I refresh the page a few times.
It seems that the responses do not correspond to the matching requests, causing everything to be jumbled. However, I'm not sure why this is; the code below seems simple enough for this not to be a problem.
require 'rubygems'
require 'em-proxy'
require 'http/parser' # gem install http_parser.rb
require 'uuid'        # gem install uuid

# > ruby em-proxy-http.rb
# > curl --proxy localhost:9889 www.google.com

host = "0.0.0.0"
port = 9889
puts "listening on #{host}:#{port}..."

Proxy.start(:host => host, :port => port) do |conn|
  @p = Http::Parser.new
  @p.on_headers_complete = proc do |h|
    session = UUID.generate
    puts "New session: #{session} (#{h.inspect})"
    host, port = h['Host'].split(':')
    conn.server session, :host => host, :port => (port || 80)
    conn.relay_to_servers @buffer
    @buffer = ''
  end

  @buffer = ''

  conn.on_connect do |data, b|
    puts [:on_connect, data, b].inspect
  end

  conn.on_data do |data|
    @buffer << data
    @p << data
    data
  end

  conn.on_response do |backend, resp|
    # puts [:on_response, backend, resp].inspect
    resp
  end

  conn.on_finish do |backend, name|
    puts [:on_finish, name].inspect
  end
end
Update
I believe I have some insight into what is happening, but still no way of solving my problem. I am creating a server for each request, so when I relay my requests I have multiple servers. Then, in on_response, I should only return the response if it came from the correct server. However, I don't yet have a way to correlate the two.
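One direction I'm considering: the first argument yielded to on_response is the name the backend was registered under in conn.server, which in this code is the session UUID, so in principle the two could be matched up. An untested sketch:

# Untested sketch: tag each backend with its session UUID and only
# relay responses that belong to the most recent session.
@p.on_headers_complete = proc do |h|
  @current_session = UUID.generate
  host, port = h['Host'].split(':')
  conn.server @current_session, :host => host, :port => (port || 80)
  conn.relay_to_servers @buffer
  @buffer = ''
end

conn.on_response do |backend, resp|
  resp if backend == @current_session # backend is the session UUID
end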
Here's a proper response:
Try removing every puts in the example so the main loop can concentrate on doing the actual network I/O; it works for me like that.
I think there may be some kind of timeout at play behind this; maybe the client does not wait long enough for the full answer to come back while the server is stuck outputting text to the console.
That's the downside of using an event reactor, you have to make sure nothing blocks it.
The code doesn't seem to account for persistent HTTP connections. Maybe you could try an HTTP 1.0 browser.
By default Selenium runs as fast as possible through the scenarios I defined using Cucumber.
I would like to set it to run at a lower speed, so I am able to capture a video of the process.
I figured out that an instance of Selenium::Client::Driver has a set_speed method, which corresponds to the Java API.
How can I obtain an instance of the Selenium::Client::Driver class? I can get as far as page.driver, but that returns an instance of Capybara::Driver::Selenium.
Thanks to http://groups.google.com/group/ruby-capybara/msg/6079b122979ffad2 for a hint.
Just a note: this uses Ruby's sleep, so it's somewhat imprecise, but it should do the job for you. Also, execute is called for everything, which is why the waits are sub-second; the intermediate steps (wait until ready, check field, focus, enter text) each pause.
Create a "throttle.rb" in your features/support directory (if using Cucumber) and fill it with:
require 'selenium-webdriver'

module ::Selenium::WebDriver::Firefox
  class Bridge
    attr_accessor :speed

    def execute(*args)
      result = raw_execute(*args)['value']
      case speed
      when :slow
        sleep 0.3
      when :medium
        sleep 0.1
      end
      result
    end
  end
end
def set_speed(speed)
  begin
    page.driver.browser.send(:bridge).speed = speed
  rescue
    # silently ignore drivers that don't expose the bridge
  end
end
Then, in a step definition, call:
set_speed(:slow)
or:
set_speed(:medium)
To reset, call:
set_speed(:fast)
This will work, and is less brittle (for some small value of "less")
require 'selenium-webdriver'

module ::Selenium::WebDriver::Remote
  class Bridge
    alias_method :old_execute, :execute

    def execute(*args)
      sleep(0.1)
      old_execute(*args)
    end
  end
end
As an update, the execute method is no longer available in the Firefox bridge class; it now exists only in module ::Selenium::WebDriver::Remote.
I needed to throttle some tests in IE and this worked.
The methods mentioned in this thread no longer work with Selenium WebDriver v3.
You'll instead need to add a sleep to the execution command.
module Selenium::WebDriver::Remote
  class Bridge
    def execute(command, opts = {}, command_hash = nil)
      verb, path = commands(command) || raise(ArgumentError, "unknown command: #{command.inspect}")
      path = path.dup

      path[':session_id'] = session_id if path.include?(':session_id')

      begin
        opts.each { |key, value| path[key.inspect] = escaper.escape(value.to_s) }
      rescue IndexError
        raise ArgumentError, "#{opts.inspect} invalid for #{command.inspect}"
      end

      Selenium::WebDriver.logger.info("-> #{verb.to_s.upcase} #{path}")
      res = http.call(verb, path, command_hash)
      sleep(0.1) # <--- Add your sleep here.
      res
    end
  end
end
Note this is a very brittle way to slow down the tests since you're monkey patching a private API.
I wanted to slow down page load speeds in my Capybara test suite to see if I could trigger some intermittently failing tests. I achieved this by creating an nginx reverse-proxy container and sitting it between my test container and the PhantomJS container I was using as a headless browser. The speed was limited with the limit_rate directive. It didn't help me achieve my goal in the end, but it did work, and it may be a useful strategy for others!
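For anyone wanting to reproduce this, the nginx side can be as small as a single location block; the hostnames, ports, and rate below are illustrative:

# Illustrative nginx reverse proxy that throttles responses from the
# app server so the headless browser sees slow page loads.
server {
    listen 8080;

    location / {
        limit_rate 50k;              # cap response bandwidth per connection
        proxy_pass http://app:3000;  # the app container under test
    }
}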