Getting Capybara to wait for debounce

Using RSpec and Capybara, after recently adding a debounce to most of my pages the tests now fail randomly.
Locally these pass fine, but on Semaphore 2.0 I get random failures on the shorter tests.
We use WebMock to stub the request made in remoteFetch(), and it seems the stub is removed before the debounced call fires on shorter tests. Because remoteFetch() is called after the stub has been reset, the request misses the stub and the test fails.
function debouncedFetch(ids) {
  // Queue the ids and reset the debounce window
  store.idsToFetch.push(ids);
  $timeout.cancel(store.fetchTimeoutFn);
  // Fire one batched remoteFetch once no new ids arrive for 200 ms
  store.fetchTimeoutFn = $timeout(() => { remoteFetch(store.idsToFetch); }, 200);
}
I have tried setting the debounce/timeout to 0, still with no joy.
Is there a way to check whether the tests/$rootScope have finished or been destroyed, and not run the remoteFetch function?
Or to get the test to wait for this function to run?

Assuming you're using the default Capybara configuration, where Capybara manages the running of the app under test, it will wait for all network connections to be closed during the test reset in an after block. Since you're cleaning up your WebMock stubs in an after block too, it's possible that cleanup occurs before the Capybara-registered block. To fix that, you can change the order they're defined in, or define your WebMock cleanup with append_after rather than after so it's guaranteed to run after the Capybara session reset.
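If you go the append_after route, a minimal sketch in spec_helper.rb (assuming you manage the WebMock reset yourself rather than relying on webmock/rspec's built-in hook) would be:
RSpec.configure do |config|
  # append_after registers this hook to run after previously defined
  # after hooks, i.e. after Capybara's session reset has let all
  # in-flight requests finish
  config.append_after(:each) do
    WebMock.reset!
  end
end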

It turns out that the after(:each) { WebMock.reset! } hook provided by the gem is called before Capybara.reset_sessions!
This causes a race condition in the code. The way around this is to change the order: make sure in your spec_helper that require 'webmock/rspec' is called before require 'rspec/rails'.
This ensures the hooks are set up in the right order.
Hope this helps someone else.
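For reference, a sketch of the fixed spec_helper.rb ordering; it relies on RSpec running after(:each) hooks in reverse registration order, so the hook registered first fires last:
require 'webmock/rspec' # registers after(:each) { WebMock.reset! } first, so it runs last
require 'rspec/rails'   # Capybara's session reset registers later, so it runs first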

Related

How to control test execution speed in watir-webdriver with the Watir::Browser#speed= method? [duplicate]

This question already has answers here:
Is there a way to slow down execution of Watir Webdriver under Cucumber? (2 answers)
Closed 7 years ago.
Is there any way to control test execution speed in Watir? How can I slow down the speed of test execution?
Someone suggested using the method browser.speed = :slow, but there is no such method on the Watir::Browser class with the browser driver I'm currently using.
Speed Defaults to Slow
According to the documentation, Watir::Browser#speed= already defaults to :slow. However, as far as I can tell, changing this option is only valid for Internet Explorer and has no effect on other browsers. In fact, with Chrome or Firefox the constructor won't accept a speed argument, and provides no public interface for the option.
However, as with most things Ruby, you can always access the variables directly and tweak them. This may or may not work for your use case, but you could certainly do something like this:
browser = Watir::Browser.new :chrome
#=> #<Watir::Browser:0x..fee868224270c4e1c url="data:," title="data:,">
browser.instance_variable_set :@speed, :slow
#=> :slow
browser.instance_variable_get :@speed
#=> :slow
Other Alternatives
In practice, testing browsers often involves a lot of asynchronous JavaScript events, so you probably want to wait for events or elements rather than trying to slow down the test itself. To do that, you can use explicit or implicit waits.
Implicit Waits
You can add an implicit wait in seconds. For example, to wait up to 30 seconds for each event or element:
browser.driver.manage.timeouts.implicit_wait = 30
Explicit Waits
Watir supports both Watir::Wait.until and Watir::Wait.while. For example, to wait until a login field is visible:
Watir::Wait.until { browser.text_field(name: 'login').visible? }
Use Sleep
Under the hood, Watir is Ruby, so you can also put explicit sleeps into your tests with Kernel#sleep. The main downside to doing this is that it isn't responsive. Your code will sleep for the defined time period even if the event or element you are waiting on triggers or changes earlier. This can make your tests unnecessarily slow.
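For comparison, a minimal sketch of both styles (the element selector is hypothetical):
sleep 10  # always pauses the full 10 seconds, even if the page was ready after 1
Watir::Wait.until { browser.div(id: 'results').present? }  # returns as soon as the condition holds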

Restart app within OPA5 test using iTeardownMyAppFrame and iStartMyAppInAFrame timed out

I'm trying to add another test to my existing .opa.qunit.js file which requires a complete restart of my app.
What I tried was to call "iTeardownMyAppFrame" in my test and then "iStartMyAppInAFrame" again to ensure a clean setup.
At first the iFrame is shown, but it is closed immediately, and after some time the test just times out. Both methods below just call "iTeardownMyAppFrame" and "iStartMyAppInAFrame", nothing else.
opaTest("FirstTest", function(Given, When, Then) {
Given.iStartTheSampleApp();
//Testlogic
});
opaTest("TestWithCleanState", function(Given, When, Then) {
Given.iShutdownTheApp();
//Until here everything above works fine
Given.iStartTheSampleApp();
//Testlogic
});
//EOF
There is no error on the console, just two messages repeating every second:
sap-ui-core.js:15219 2015-03-11 10:05:37 Opa check was undefined -
sap-ui-core.js:15219 2015-03-11 10:05:37 Opa is executing the check: function () {
if (!bFrameLoaded) {
return;
}
return checkForUI5ScriptLoaded();
} -
What's the intended functionality of "iTeardownMyAppFrame"?
Should it only be used to tear down the whole test at the end of all tests?
Or can it also be used to reset the app to ensure a clean state at the beginning of a test? If so, how should it work?
Thanks
Teardown removes the iframe, and in the next test you have to bring it up again.
This way you can write separate tests that can be run standalone.
An example is here:
Opa sample with 2 isolated tests
If you press the rerun button on test 2, it will execute standalone with no dependency on test 1.
BR,
Tobias

Spec testing EventMachine-based (Reactor) Code

I'm trying out the whole BDD approach and would like to test the AMQP-based aspect of a vanilla Ruby application I am writing. After choosing Minitest as the test framework for its balance of features and expressiveness as opposed to other aptly-named vegetable frameworks, I set out to write this spec:
# File ./test/specs/services/my_service_spec.rb
# Requirements for test running and configuration
require "minitest/autorun"
require "./test/specs/spec_helper"
# External requires
# Minitest Specs for EventMachine
require "em/minitest/spec"
# Internal requirements
require "./services/distribution/my_service"
# Spec start
describe "MyService", "A Gateway to an AMQP Server" do
  # Connectivity
  it "cannot connect to an unreachable AMQP Server" do
    # This line breaks execution, commented out
    # include EM::MiniTest::Spec
    # ...
    # (abridged) Alter the configuration by specifying
    # an invalid host such as "l0c#alho$t" or such
    # ...
    # Try to connect and expect to fail with an Exception
    MyApp::MyService.connect.must_raise EventMachine::ConnectionError
  end
end
I have commented out the inclusion of the em-minitest-spec gem's functionality which should coerce the spec to run inside the EventMachine reactor, if I include it I run into an even sketchier exception regarding (I suppose) inline classes and such: NoMethodError: undefined method 'include' for #<#<Class:0x3a1d480>:0x3b29e00>.
The code I am testing against, namely the connect method within that Service is based on this article and looks like this:
# Main namespace
module MyApp
  # Gateway to an AMQP Server
  class MyService
    # External requires
    require "eventmachine"
    require "amqp"
    # Main entry method, connects to the AMQP Server
    def self.connect
      # Add debugging, spawn a thread
      Thread.abort_on_exception = true
      begin
        @em_thread = Thread.new {
          begin
            EM.run do
              @connection = AMQP.connect(@settings["amqp-server"])
              AMQP.channel = AMQP::Channel.new(@connection)
            end
          rescue
            raise
          end
        }
        # Fire up the thread
        @em_thread.join
      rescue Exception
        raise
      end
    end # method connect
  end # class MyService
end # module MyApp
The whole "exception handling" is merely an attempt to bubble the exception out to a place where I can catch/handle it, that didn't help either, with or without the begin and raise bits I still get the same result when running the spec:
EventMachine::ConnectionError: unable to resolve server address, which actually is what I would expect, yet Minitest doesn't play well with the whole reactor concept and fails the test on ground of this Exception.
The question then remains: How does one test EventMachine-related code using Minitest's spec mechanisms? Another question has also been hovering around regarding Cucumber, also unanswered.
Or should I focus on my main functionality (e.g. messaging and seeing if the messages get sent/received) and forget about edge cases? Any insight would truly help!
Of course, it can all come down to the code I wrote above, maybe it's not the way one goes about writing/testing these aspects. Could be!
Notes on my environment: ruby 1.9.3p194 (2012-04-20) [i386-mingw32] (yes, Win32 :>), minitest 3.2.0, eventmachine (1.0.0.rc.4 x86-mingw32), amqp (0.9.7)
Thanks in advance!
Sorry if this response is too pedantic, but I think you'll have a much easier time writing the tests and the library if you distinguish between your unit tests and your acceptance tests.
BDD vs. TDD
Be careful not to confuse BDD with TDD. While both are quite useful, it can lead to problems when you try to test every edge case in an acceptance test. For example, BDD is about testing what you're trying to accomplish with your service, which has more to do with what you're doing with the message queue than connecting to the queue itself. What happens when you try to connect to a non-existent message queue fits more into the realm of a unit test in my opinion. It's also worth pointing out that your service shouldn't be responsible for testing the message queue itself, since that's the responsibility of AMQP.
BDD
While I'm not sure what your service is supposed to do exactly, I would imagine your BDD tests should look something like:
start the service (can do this in a separate thread in the tests if you need to)
write something to the queue
wait for your service to respond
check the results of the service
In other words, BDD (or acceptance tests, or integration tests, however you want to think about them) can treat your app as a black box that is supposed to provide certain functionality (or behavior). The tests keep you focused on your end goal, but are more meant for ensuring one or two golden use cases, rather than the robustness of the app. For that, you need to break down into unit tests.
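A compact skeleton of that flow (a sketch; the queue names and the publish_test_message/wait_for_response helpers are hypothetical stand-ins you'd implement against your broker):
it "answers a request placed on the queue" do
  # Run the service in a background thread so the test can drive it
  service = Thread.new { MyApp::MyService.connect }
  publish_test_message("requests", "ping")          # hypothetical helper
  reply = wait_for_response("replies", timeout: 5)  # hypothetical helper
  reply.must_equal "pong"
  service.kill
end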
TDD
When you're doing TDD, let the tests guide you somewhat in terms of code organization. It's difficult to test a method that creates a new thread and runs EM inside that thread, but it's not so hard to unit test either of these individually. So, consider putting the main thread code into a separate function that you can unit test separately. Then you can stub out that method when unit testing the connect method. Also, instead of testing what happens when you try to connect to a bad server (which tests AMQP), you can test what happens when AMQP throws an error (which is your code's responsibility to handle). Here, your unit test can stub out the response of AMQP.connect to throw an exception.
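As a sketch of that last suggestion using Minitest's built-in stubbing (minitest/mock), raising the same error the question reports; the spec wording is hypothetical:
require "minitest/autorun" # autorun also loads minitest/mock

describe MyApp::MyService do
  it "propagates AMQP connection errors" do
    # Simulate the library failing so the test exercises only our
    # error handling, not actual name resolution
    failing_connect = proc { raise EventMachine::ConnectionError, "unable to resolve server address" }
    AMQP.stub(:connect, failing_connect) do
      -> { MyApp::MyService.connect }.must_raise EventMachine::ConnectionError
    end
  end
end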

What is the best way to simulate no Internet connection within a Cucumber test?

Part of my command-line Ruby program involves checking if there is an internet connection before any commands are processed. The actual check in the program is trivial (using Socket::TCPSocket), but I'm trying to test this behaviour in Cucumber for an integration test.
The code:
def self.has_internet?(force = nil)
  # Allow the caller to force the result, e.g. for testing
  return force unless force.nil?
  begin
    TCPSocket.new('www.yelp.co.uk', 80)
    return true
  rescue SocketError
    return false
  end
end

if has_internet? == false
  puts("Could not connect to the Internet!")
  exit 2
end
The feature:
Scenario: Failing to log in due to no Internet connection
  Given the Internet is down
  When I run `login <email_address> <password>`
  Then the exit status should be 2
  And the output should contain "Could not connect to the Internet!"
I obviously don't want to change the implementation to fit the test, and I require all my scenarios to pass. Clearly if there is actually no connection, the test passes as it is, but my other tests fail as they require a connection.
My question: How can I test for this in a valid way and have all my tests pass?
You can stub your has_internet? method and return false in the implementation of the Given the Internet is down step.
YourClass.stub!(:has_internet?).and_return(false)
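In a step definition, that could look like this (a sketch; it assumes the check lives on a class called YourClass and that the code under test runs in the same Ruby process as Cucumber):
Given /^the Internet is down$/ do
  YourClass.stub!(:has_internet?).and_return(false)
end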
There are three alternative solutions I can think of:
have the test temporarily monkeypatch TCPSocket#initialize (or maybe Socket#connect, if that's where it ends up) to pretend the internet is down.
write a (suid) script that adds/removes an iptables firewall rule to disable the internet, and have your test call the script.
use LD_PRELOAD with a specially written .so shared library that overrides the connect C call. This is harder.
Myself, I would probably try option 1, give up after about 5 minutes and go with option 2.
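A rough sketch of option 1, monkeypatching TCPSocket#initialize for the duration of the scenario (assumes the code under test runs in the same process; restoring the original method afterwards is left out for brevity):
require 'socket'

Given /^the Internet is down$/ do
  TCPSocket.class_eval do
    # Every connection attempt now fails as if the network were down
    def initialize(*)
      raise SocketError, 'simulated network outage'
    end
  end
end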
Maybe a bit late for you :), but have a look at
https://github.com/mmolhoek/vcr-uri-catcher
I made this to test network failures, so this should do the trick for you.

How can I implement wait_for_page_to_load in Selenium 2?

I am new to automated web testing and I am currently migrating from an old Selenium RC implementation to Selenium 2 in Ruby. Is there a way to halt the execution of commands until the page gets loaded, similar to "wait_for_page_to_load" in Selenium RC?
I fixed a lot of issues I was having in that department by adding this line after starting my driver:
driver.manage.timeouts.implicit_wait = 20
This basically makes every failed driver call you make retry for up to 20 seconds before throwing an exception, which is usually enough time for your AJAX to finish.
Try using JavaScript to inform you!
I created a couple of methods that check via our JavaScript libraries whether the page has finished loading the DOM and that all AJAX requests are complete. Here's a sample snippet. The JavaScript you will need to use will depend on your library.
Selenium::WebDriver::Wait.new(:timeout => 30).until { @driver.execute_script("[use javascript to return true once loaded, false if not]") }
I then wrapped these methods in a clickAndWait method that clicks the element and calls waitForDomLoad and waitForAjaxComplete. Just for good measure, the very next command after a clickAndWait is usually a waitForVisible element command to ensure that we are on the right page.
# Click element and wait for page elements, ajax to complete, and then run whatever else
def clickElementAndWait(type, selector)
  @url = @driver.current_url
  clickElement(type, selector)
  # If the page changed to a different URL, wait for DOM to complete loading
  if @driver.current_url != @url
    waitForDomLoad
  end
  waitForAjaxComplete
  if block_given?
    yield
  end
end
If you are using Capybara, whenever you test page.should have_content("foo"), Capybara will not fail instantly if the page doesn't have the content (yet) but will wait for a while to see if an AJAX call changes that.
So basically: after you click, you want to check right away for have_content("some content that is a consequence of that click").
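For example (a sketch; the button label and message are hypothetical):
click_button 'Save'
# Capybara retries this matcher until it passes or the default wait time elapses
page.should have_content('Record saved')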
