Only run selenium test if previous selenium test fails - ruby

I have several 'it' blocks in my selenium test file (using Ruby and rspec) that test various portions of my web application. Each 'it' block stops executing and goes to the next 'it' block if any of the conditions or code fails.
Is there a way to run an 'it' block only if the previous one fails, or to call a function that reacts to the failed test?
Is there a better way to accomplish what I am trying to do that doesn't involve an 'it' block?
Example 'it' block
it "should load example.com" do
page.open("http://example.com")
page.wait_for_page_to_load(25)
end

I don't like my current solution and think there is a better way to accomplish this, but for now I:
1. wrap the original test code that was in the 'it' block in a begin/rescue block
2. put code that responds to the failure in the rescue section
it "should load example.com" do
begin
page.open("http://example.com")
page.wait_for_page_to_load(25)
"wow".should == "cool"
rescue Exception => e
// code that responds to failed test
end
end
Why I don't like this:
1. I feel dirty writing test code like this (it feels wrong!)
2. RSpec reports a pass unless the rescue code also fails
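For the "react to the failed test" part, a minimal sketch of an alternative, assuming a recent RSpec (3.x syntax) where after hooks yield the example: react to the failure from a hook instead of rescuing inside the example, so the example still fails and RSpec reports it correctly.
RSpec.configure do |config|
  config.after(:each) do |example|
    if example.exception
      # react to the failed test here, e.g. capture a screenshot or log state
      puts "Failed: #{example.full_description}"
    end
  end
end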

Related

Mocking a Browser for RSpec, Without Test Doubles Leaking

I find mocking things with RSpec to be entirely problematic, and I often don't know how much code to include to make a question diagnostic. So I'll start with the situation I have and the code I've isolated as causing the problem.
I have tests where I need to mock a browser. I have a mock driver I set up like this:
require "watir"
def mock_driver
browser = double("watir")
allow(browser).to receive(:is_a?).with(Watir::Browser).and_return(true)
allow(browser).to receive(:driver).and_return(true)
browser
end
The only problems I have in my test suite are these two tests:
context "an empiric driver is requested" do
it "a watir browser is provided" do
allow(Watir::Browser).to receive(:new).and_return(Empiric.browser)
Empiric.set_browser mock_driver
end
it "the requested watir browser can be shut down" do
#allow(Empiric.browser).to receive(:quit)
Empiric.quit_browser
#allow(mock_browser).to receive(:new).and_return(Empiric.browser)
#Empiric.set_browser mock_driver
end
end
(The commented out bits in the second test are on purpose to illustrate what's going on.)
With that one line in place in the second test, I get the following error on that test:
<Double "watir"> was originally created in one example but has leaked into another
example and can no longer be used. rspec-mocks' doubles are designed to only last for
one example, and you need to create a new one in each example you wish to use it for.
If I entirely comment out the first test above, that error doesn't happen so I know I've isolated the two tests that are interacting with each other.
Okay, now notice the final line of my second test that is commented out. That seems to be what the error is indicating to me: it's saying I need to create a new double in the other example. Okay, so I'll change my last test:
it "the requested watir browser can be shut down" do
#allow(Empiric.browser).to receive(:quit)
Empiric.quit_browser
#allow(mock_browser).to receive(:new).and_return(Empiric.browser)
Empiric.set_browser mock_driver
end
So here I've uncommented the last line so I'm establishing the mock_driver in that test and not allowing the code to leak.
That, however, returns exactly the same error on exactly the same test.
I'm not sure if it would help to see the methods that are being called in that test, but here they are. First is set_browser:
def set_browser(app = :chrome, *args)
  @browser = Watir::Browser.new(app, *args)
  Empiric.browser = @browser
end
And here is quit_browser:
def quit_browser
  @browser.quit
end
The fact that RSpec thought one test was "leaking" into the other made me think that perhaps my @browser instance was the problem, essentially being what's persisting between the two tests. But I don't see how to get around that. I thought that maybe if I quit the browser in the first test, that would help. So I changed the first test to this:
it "a watir browser is provided" do
Empiric.quit_browser
allow(Watir::Browser).to receive(:new).and_return(Empiric.browser)
Empiric.start_browser mock_driver
end
That, however, led to the above error being shown on both tests now.
My more likely accurate guess is that I simply don't know how to provide a mock in this context.
I think you have to use allow with the mock and not Watir::Browser.
For example, what happens if you allow the mock browser to receive whatever calls the browser would, and have it return the mock browser?
Right now you're allowing the "Watir::Browser" to receive those messages and that's returning an "Empiric.browser". Looking at your code, I understand why you put that in there but I think that might be what's screwing you up here.
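In concrete terms, a rough sketch of that suggestion, reusing the question's mock_driver helper and assuming Empiric.set_browser/quit_browser work as shown above (the double is built fresh inside the example so it can't leak):
it "the requested watir browser can be shut down" do
  browser = mock_driver
  allow(browser).to receive(:quit)                            # the double, not Watir::Browser, receives :quit
  allow(Watir::Browser).to receive(:new).and_return(browser)  # so set_browser hands back our double
  Empiric.set_browser browser
  Empiric.quit_browser
  expect(browser).to have_received(:quit)
end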
Mocks in RSpec are horrible things that rarely if ever work correctly in situations like this. I would entirely recommend not using the mock_driver that you have set up. Rather, for each of your tests just do something similar to what you are doing in the mock_driver. My guess is you're including the mock driver as part of a shared context and that, too, is another thing that is very fragile in RSpec. Not recommended.
Instead you might want to use contexts to break up your tests. Then for each context block have a before block. I'm not sure if you should use before(:all) or before(:each) given that you're simulating a browser. But that way you can set up the browser in the before and tear it down in an after.
But I would recommend getting it working in each test individually first. Even if it's a lot of code duplication. Then once all tests are passing, refactor to put the browser stuff in those before/after blocks.
But, again, don't use mocks. Don't use shared contexts. It never ends well and honestly it makes your tests harder to reason about.
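As a rough illustration of that structure only (a sketch; the double is created per example, and Empiric.set_browser/quit_browser are assumed to behave as in the question):
context "an empiric driver is requested" do
  before(:each) do
    # build a fresh double in every example so nothing leaks between tests
    browser = double("watir")
    allow(browser).to receive(:driver).and_return(true)
    allow(browser).to receive(:quit)
    allow(Watir::Browser).to receive(:new).and_return(browser)
    Empiric.set_browser browser
  end

  after(:each) do
    Empiric.quit_browser
  end

  it "provides a watir browser" do
    expect(Empiric.browser).not_to be_nil
  end
end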
Given some advice from Micah, I wanted to provide an answer with a solution. I ended up doing this:
context "an empiric driver is requested" do
it "a watir browser is provided" do
allow(Watir::Browser).to receive(:new).and_return(Empiric.browser)
allow(Empiric.browser).to receive(:driver).and_return(true)
expect { Empiric.start_browser :some_browser }.not_to raise_error
end
it "the requested watir browser can be shut down" do
allow(Empiric.browser).to receive(:quit)
allow(Watir::Browser).to receive(:new).and_return(Empiric.browser)
allow(Empiric.browser).to receive(:driver).and_return(true)
expect { Empiric.quit_browser }.not_to raise_error
end
end
All of that was needed as it is or I would get some error or other. I removed my mock driver and, per Micah's suggestion, simply tried to incorporate what seemed to work. The above "contraption" is what I ended up with as the sweet spot.
This works in the sense of giving coverage of the methods in question. What was interesting was that I had to add this to my RSpec configuration:
RSpec.configure do |config|
  config.mock_with :rspec do |mocks|
    mocks.allow_message_expectations_on_nil = true
  end
end
I needed to do this because RSpec was reporting that I was allowing something that was nil to receive a message.
This brought up some interesting things, if you think about it. I have a test that is clearly passing. And it adds to my code coverage. But is it actually testing the quit action on a browser? Well, not really since it was testing a quit action on something that it thought was nil.
But -- it does work. And it must be calling the lines of code in question, because the code coverage, as reported by SimpleCov, indicates that the statements in question have been checked.

Don't run a cucumber feature if an element on the page is not available, but exit without failing

I want to check a page for an element before I start running the feature file for it (it's an element that periodically appears with an event, so I only want to run the feature if it's present).
The approach I wanted to use was a tagged Before hook to see if the element was present and, if it wasn't, not run the feature but exit without 'failing' the step, just exiting with a message. I tried variants on the code below, but:
1. If I don't have a rescue clause, it obviously fails the scenario when the element isn't present.
2. If I do have the rescue clause, it handles the error and passes, moving on to the feature, which then fails because the event isn't available.
Is there a way to halt running the feature file when the rescue clause is invoked, without the 'fail'?
Before('@event') do
  begin
    find('.event').visible?
  rescue Capybara::ElementNotFound
    puts 'THE EVENT IS NOT ON'
  end
end
You shouldn't be using find if you want to make a decision based on existence. Instead you should be using the predicate methods provided by Capybara (has_selector?, has_css?, has_xpath?, etc) so you don't have to rescue exceptions.
The other thing to know about is the Cucumber skip_this_scenario method, which means you should end up with something like:
Before('@event') do
  # visit '/some_page' # May not be needed if you have another `Before` already visiting the needed page
  skip_this_scenario('Skipping due to missing event') unless page.has_css?('.event')
end

Padrino rspec controller testing always green

I am testing my controller with rspec on padrino with this code:
https://gist.github.com/anonymous/8d0df4c189e99c7cb7ea
If I run the tests, everything goes fine and all the tests are green.
The problem is that those tests must fail! The sign_in_admin in the before block doesn't allow the user to log in and make the post call, and also if I change the line
last_response.should_not be_ok
with
last_response.should be_ok
the test is still always green.
I don't know where I am going wrong.
Here is my spec_helper.rb
https://gist.github.com/anonymous/6442d02654cbee2cf3b5
The reason your code is always passing is because your test is of the form:
lambda {}.should {}
which is equivalent to
lambda {}.should
since should ignores any block passed to it. That is further equivalent to:
lambda {}.should be_truthy
which always succeeds because lambda {} is a Proc, which is truthy.
You should only send should to a Proc if you want should to execute the proc for purposes of evaluating side effects (e.g. raising errors, interacting with other objects), for which there are matchers of the form raise_error and change. The same is true for passing a block to expect, which is the current syntax.
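For contrast, a small illustration (a hypothetical example using the current expect syntax) of where passing a block is appropriate:
it 'illustrates block-form matchers' do
  counter = 0
  # block form is only meaningful with matchers that observe a side effect
  expect { counter += 1 }.to change { counter }.by(1)
  expect { raise ArgumentError, "bad" }.to raise_error(ArgumentError)
end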
In your case, you can simply execute the code that is in your current lambda expression and then check the value of last_response, as in:
it '...' do
  post ...
  last_response.should be_ok
end
or the more current:
it '...' do
  post ...
  expect(last_response).to be_ok
end

Data driven testing with ruby testunit

I have a very basic problem for which I am not able to find any solution.
So I am using Watir Webdriver with testunit to test my web application.
I have a test method which I would want to run against multiple set of test-data.
While I can surely use old loop tricks to run it, that would show as only one test run, which is not what I want.
I know in TestNG we have @DataProvider; I am looking for something similar in test/unit.
Any help!!
Here is what I have so far:
[1, 2].each do |p|
  define_method :"test_that_#{p}_is_logged_in" do
    # code to log in
  end
end
This works fine. But my problem is how and where to create the data to loop over. I am reading my data from Excel; say I have a list of hashes which I get from Excel, something like:
[{ :name => 'abc', :password => 'test' }, { :name => 'def', :password => 'test' }]
Current Code status:
class RubyTest < Test::Unit::TestCase
  def setup
    @excel_array = util.get_excel_map # This gives me an array of hashes from Excel
  end

  @excel_array.each do |p|
    define_method :"test_that_#{p}_is_logged_in" do
      # code to check login
    end
  end
end
I am struggling to run the loop. I get an error saying "undefined method `each' for nil:NilClass (NoMethodError)" on the class declaration line.
You are wanting to do something like this:
require 'minitest/autorun'

describe 'Login' do
  5.times do |number|
    it "must allow user#{number} to login" do
      assert true # replace this assert with your real code and validation
    end
  end
end
Of course, I am mixing spec and test/unit assert wording here, but in any case, where the assert is, you would place the assertion/expectation.
As this code stands, it will pass 5 times, and if you were to report in story form, the description would change with the user number for the appropriate test.
Where to get the data from is the part of the code that is missing, and where you have yet to try things and run into errors.
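To connect that back to the question's nil error: setup runs once per test, after the class body (and its define_method loop) has already been evaluated, so the data has to be available at class-definition time. A minimal sketch of that idea (the hardcoded rows stand in for whatever your Excel helper, e.g. util.get_excel_map, returns):
require 'test/unit'

class RubyTest < Test::Unit::TestCase
  # Build the data while the class body is evaluated, not in setup:
  # setup runs per test, after the define_method loop below has already run.
  EXCEL_DATA = [
    { :name => 'abc', :password => 'test' },
    { :name => 'def', :password => 'test' }
  ] # in the real suite this would come from your Excel helper

  EXCEL_DATA.each do |row|
    define_method :"test_that_#{row[:name]}_is_logged_in" do
      # log in with row[:name] / row[:password] and assert on the result
      assert true
    end
  end
end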

Why do I not see the thrown exceptions in a Cucumber "Around"?

I have a set of cucumber tests that get run on a build server.
I often want faster feedback than the server directly provides, and so I watch the console output as it runs. I wanted a way of identifying any failing test with a single search term, so I modified our Around hook to print "Failed Test" on any exception, but Ruby doesn't seem to be handing the exception back up to the Around. I've verified this by adding puts statements after the begin ... end.
Does anyone know why this is happening or a way of wrapping any exception thrown from a failing test in a begin?
Around() do |scenario, block|
  begin
    Timeout.timeout(0.1) do
      block.call
    end
  rescue Timeout::Error => e
    puts "Failed Test"
    puts caller
  rescue Exception => e
    puts "Failed Test"
    raise e
  end
end
Looking at Cucumber 1.3.12, it actually rescues any exceptions from scenario steps, so you can't see them without modifying the cucumber gem.
See my answer on how to put a debug hook in that place for more information:
https://stackoverflow.com/a/22654786/520567
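If the goal is just a single, greppable marker for failures in the console output, a commonly used workaround (a sketch, not tied to a specific Cucumber version) is to check the scenario's status in an After hook instead of trying to catch the exception yourself:
After do |scenario|
  # Cucumber records step failures on the scenario object even though the
  # exception itself never reaches your hook's rescue.
  puts "Failed Test: #{scenario.name}" if scenario.failed?
end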
Have you tried disabling Cucumber's exception capturing with the @allow-rescue tag?
@allow-rescue: Turns off Cucumber's exception capturing for the tagged scenario(s). Used when the code being tested is expected to raise and handle exceptions.
https://github.com/cucumber/cucumber/wiki/Tags

Resources