I've been playing with Capybara and the Selenium WebDriver to learn about web automation. I've been trying to refresh a particular page with Capybara. I've seen a few methods, but each has issues that make it unworkable in certain cases.
session.visit link is just doing nothing, as the session is already at that link.
I can do session.reset! but then I lose the login.
The few other methods I've seen don't use Capybara's inbuilt wait functionality.
This means that under heavy server load - or, in my tests, with a restricted download/upload rate - the 'refresh' happens, but Capybara then immediately tries to look for a field on the next page that doesn't exist yet because the page hasn't loaded.
So my question is, specifically: how do I refresh a page in Capybara, without losing the login session, using Capybara's inbuilt wait functionality?
you can do something like:
visit current_path
or define an RSpec helper:
def reload_page
  visit current_path
end
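If you need the reload to cooperate with Capybara's built-in waiting, one approach is to follow it with a waiting finder or matcher for something you know appears on the refreshed page. A rough sketch (the #main-content selector is just a placeholder for an element on your own page):
visit current_path
# have_css uses Capybara's waiting behaviour, polling up to
# Capybara.default_max_wait_time, so the test doesn't race the reload
expect(page).to have_css('#main-content')   # hypothetical selector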
Capybara now implements a refresh method so you can call it directly in your spec.
Since you're using selenium, you can either use the master branch of Capybara and call
session.refresh
or you can stick with the current release version and call
session.driver.browser.navigate.refresh
If the page you're trying to refresh was the result of a POST, it may pop up an "are you sure you want to resubmit" modal, in which case you'd need something like
session.accept_confirm do
  session.driver.browser.navigate.refresh
end
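If you want a single helper that copes with either version, a small sketch along these lines should work (the respond_to? check is plain Ruby, not part of Capybara's API):
def refresh_page(session = Capybara.current_session)
  if session.respond_to?(:refresh)    # Capybara versions that ship the refresh method
    session.refresh
  else                                # current release: go through the selenium driver
    session.driver.browser.navigate.refresh
  end
end
Pair it with a waiting finder or matcher, as in the reload_page example above, so the test doesn't race the reload.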
I have a script that uses Capybara to publish links on Google+. I would like to have tests covering this functionality. Usually Capybara is used as a tool for writing integration tests; in my case I need to test Capybara itself.
I see 3 possible ways:
stub Capybara's methods (but in that case I'm testing nothing but the stubs)
test Capybara against a saved HTML/JS page (that would help me check that I haven't broken anything during refactoring)
do not test at all (no comments here)
Have you ever faced such a problem?
If you register different drivers for your app code and your test code, possibly manage the sessions manually depending on how you're using Capybara in your app, and are careful with Capybara's settings, you should be able to go with option 2. You have to be careful with the settings because most of them are global, so changing them for your tests will also change them for your app.
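A rough sketch of what that separation might look like, assuming the publishing script drives its own named session (the driver name and fixture path here are made up for illustration):
require 'capybara'
require 'capybara/dsl'

# driver used only by the publishing script, registered under its own name
Capybara.register_driver :publisher_selenium do |app|
  Capybara::Selenium::Driver.new(app, :browser => :firefox)
end

# the script drives an explicitly named session instead of the global one...
publisher = Capybara::Session.new(:publisher_selenium)
publisher.visit('file://' + File.expand_path('spec/fixtures/saved_google_plus.html'))
# ...so the test suite can keep using Capybara.default_driver and the global
# settings without the two stepping on each other.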
I am testing a page which contains a pesky iframe which takes a long time to load; however, I'm not testing the iframe itself. If possible, I'd like to be able to run my tests without having to wait for this iframe each time.
One of the following potential solutions should do the trick, but I'm having difficulty either finding documentation or getting any of them to work:
a) Cancel the page load shortly after the request is placed.
b) Simulate an Escape press while the page is loading using send_keys method.
c) Use javascript - page.execute_script("return window.stop();") while the page is loading.
d) Cancel loading of a specific element
How would one send a cancellation request to a loading page or element using Capybara?
We were having problems with outstanding network requests (both Ajax and assets) at the end of our integration tests, when all the assertions had been met and the test was over and getting torn down.
So we added execute_script("window.stop();") to our integration_test_helper.rb file, in the teardown block, and it worked very well.
So I'd suggest adding an assertion that can be met before your iframe is fully loaded, and running window.stop() in the teardown of the test.
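For reference, a minimal sketch of that teardown hook, assuming Rails integration tests driven through Capybara with a JS-capable driver (the rack_test driver can't execute JavaScript, so it's skipped):
# test/integration_test_helper.rb
class ActionDispatch::IntegrationTest
  include Capybara::DSL

  teardown do
    # stop any outstanding requests (iframes, Ajax, assets) once the
    # assertions have passed, so slow third-party content can't hang teardown
    page.execute_script("window.stop();") unless Capybara.current_driver == :rack_test
  end
end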
Alternatively, you could use the puffing-billy gem to stub out that iframe request.
Hi, is there any way to ignore the page load when running Selenium with Cucumber? It always fails my test, and I just want to check whether the content is present or not.
Please don't say "add a sleep".
The issue I'm having is that the content is present, but Capybara keeps waiting for the page to be fully loaded, and sometimes it gets stuck on an API call to a third-party company.
Here are some approaches you could try:
Change your driver and use capybara-webkit. Set up webkit so it doesn't load external URLs. See http://robots.thoughtbot.com/speed-up-javascript-capybara-specs-by-blacklisting-urls
Ensure you understand and use the has_no_* methods / negative matchers if you are testing that something is not present, e.g. use
expect(page).to have_no_css '.test' # fast
rather than
expect(!page.has_css?('.test')).to be true # slow - has_css? waits for the full timeout before returning false
Change the default timeout to something shorter (perhaps only for this scenario, using a tag) - a rough sketch of this follows below
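As a sketch of that last suggestion, a Cucumber Around hook can shorten the wait only for scenarios carrying a (made-up) @short_wait tag; Capybara.using_wait_time restores the original timeout afterwards:
# features/support/hooks.rb
Around('@short_wait') do |scenario, block|
  Capybara.using_wait_time(2) { block.call }
end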
I'm using capybara-webkit to test integration with a third party website (I need javascript).
I want to use VCR to record requests made during the integration test, but capybara-webkit doesn't go over Net::HTTP, so VCR is unable to record them. How would I go about writing an adapter for VCR that would allow me to record the requests?
Unfortunately, VCR is very much incompatible with capybara-webkit. The fact is that capybara-webkit uses WebKit, which is native (C++) code. WebMock and FakeWeb, which are the basis for VCR, can only intercept web requests made from Ruby. Making the two work together would likely be a monumental task.
I've solved this problem two ways:
The first (hacky, but valid) is to add a new javascript file to the application that is only included in the test environment. This file stubs out the JS classes which make external web requests. Aside from the pure hackatude of this approach, it requires that every time a new request is added or changed you must change the stubs as well.
The second approach is to route all external requests through my own server, effectively proxying all external requests through my server. This has the huge disadvantage that you have to have an action for everything you want to consume (you could genericize it, with some work). It also suffers from the fact that it could as much as double the time for the request to complete. However, since the requests are now being made by Ruby you can use VCR in all it's glory.
In my situation, approach #2 has worked out much better, since I need Ruby to manipulate the data so that I can keep my JavaScript source-agnostic. I was, however, using approach #1 successfully for quite a while.
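For what it's worth, a very rough sketch of approach #2: a hypothetical ProxyController that the page's JavaScript calls instead of the third party directly. Because the outbound request is made with Net::HTTP (plain Ruby), VCR/WebMock can see and record it:
require 'net/http'

class ProxyController < ApplicationController
  # e.g. GET /proxy/fetch?url=https://thirdparty.example/api/resource
  def fetch
    uri = URI(params.require(:url))
    response = Net::HTTP.get_response(uri)    # plain Ruby HTTP, so VCR records it
    render :text => response.body, :status => response.code.to_i   # render plain: on newer Rails
  end
end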
I've written a small ruby library (puffing-billy) for rspec+capybara that does exactly this -- it injects a proxy in between your browser and the outside world and allows you to fake responses to specific requests.
Example:
describe 'fetching badges from stackoverflow API' do
  it 'should show a nice message when you have no badges' do
    # stub some JSONP
    proxy.stub('http://api.stackoverflow.com/1.1/users/1/badges',
               :jsonp => { :badges => [] })
    visit '/my_badges'
    page.should have_content("You don't have any badges :(")
  end
end
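Getting it wired up is mostly a matter of telling Capybara to use one of the billy-enabled drivers; a minimal setup sketch (the require path and driver names vary between puffing-billy versions, so check the gem's README for your release):
# spec_helper.rb / rails_helper.rb
require 'billy/capybara/rspec'
Capybara.javascript_driver = :selenium_billy   # billy also registers e.g. :webkit_billy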
I am new to Capybara testing and am having a few issues.
I have a scenario I am trying to run, and this is the step implementation:
When /^I select the signin link$/ do
  click_link 'Sign in'
end
I have tried to access this link with XPath, CSS, and tried the within implementation as well. Capybara cannot seem to find it, and raises a Capybara::ElementNotFound exception in all cases.
When I load the webpage without JavaScript, the link is not visible, and I'm wondering if this is why Capybara cannot find it. I found a trigger method, but am unsure how it works. Does anyone have a working example of trigger, or any other ideas for what I should do?
Are you using the selenium webdriver to run this test? It sounds like you are trying to run a scenario that requires javascript to see certain elements, without using a driver that supports javascript.
In your .feature file all you have to do is add this line before the scenario:
@javascript
Scenario: My Scenario
  When blah blah blah
  ...
The @javascript tag tells Capybara to use selenium-webdriver to run the test. It'll fire up Firefox and step through the test, allowing all JavaScript functionality to work. This slows tests down considerably, so only use it when it's absolutely necessary to test Ajax-y and JavaScript-y behavior.
If that still doesn't work you can use this step:
Then show me the page
When I select the signin link
Which will open the page up for you in a new browser in that page's current state for your inspecting pleasure.
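If your project doesn't already define that step, a minimal sketch of it would be something like the following (save_and_open_page is Capybara's own debugging helper; it needs the launchy gem to actually open the browser):
# features/step_definitions/debug_steps.rb
Then /^show me the page$/ do
  save_and_open_page
end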