How to create a test that detects if the page was refreshed? - ruby

I want to create a test in Ruby that refreshes the page and also checks that it was refreshed.
it 'Refreshing the page' do
  visit current_path
end
How do I make the assertion for this?
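One possible way to make such an assertion (just a sketch, assuming a JavaScript-capable driver; the data-refresh-marker attribute is an arbitrary name made up for the test): tag the DOM before reloading, then check that the tag is gone afterwards, since a real reload builds a fresh document.
it 'Refreshing the page', js: true do
  visit current_path

  # Tag the current document so we can tell a fresh load apart from the old one
  page.execute_script("document.body.dataset.refreshMarker = 'before-refresh'")

  visit current_path  # reload the same path

  # A real reload builds a new DOM, so the marker from the old document is gone
  expect(page).to have_no_css("body[data-refresh-marker='before-refresh']")
end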

Related

Unit Test for the New Stripe Checkout

This isn't much code since I'm really new to testing. I've done a couple of unit tests using Capybara and RSpec, but all of them were on my existing app. For Stripe, however, once you click the checkout button you are redirected to their own checkout page, and from there I lose any way to control or access the page.
Basic test code is as follows:
it "visits stripe checkout" do
login_as(user)
visit my_page
click_button "Checkout"
sleep(2) // im adding sleep to delay it since there's a bit of loading once checkout is clicked
// From here I cant access anything anymore such as
expect(page).to have_selector(".ProductSummary-totalAmount", text: "$20.00")
//Note: This is just for me to confirm Im in the page and that is the actual class name on the checkout app
end
I would appreciate it if someone could help or point me in the right direction. Even better if anyone can show me an example of how to fill in the credentials once on the checkout page. Links also help.
P.S. I did my research, but most discussions were about the old Stripe Checkout, which is just an iframe/modal.
The general rule of testing is that you should test your own code, not someone else's. Testing Stripe Checkout adds unnecessary complexity to your tests and is brittle: what if Stripe changes something on their Checkout page that breaks your tests?
Instead, you should just test that your button can be clicked and mock the possible responses from stripe.redirectToCheckout. You should also test that the success and cancel URLs you set when creating the Checkout Session render and work as expected.
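For the second part, a rough sketch of testing the URLs you control; checkout_success_path, checkout_cancel_path and the page copy below are placeholders for whatever your app actually uses:
it "renders the success page" do
  login_as(user)
  visit checkout_success_path   # the success_url you pass when creating the Checkout Session
  expect(page).to have_content("Thanks for your order")
end

it "renders the cancel page" do
  login_as(user)
  visit checkout_cancel_path    # the cancel_url you pass when creating the Checkout Session
  expect(page).to have_content("Checkout cancelled")
end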

Ruby Mechanize and changing URL after a login

I have a Mechanize script that currently goes to a login form and properly logs a user in. I'm seeing plenty of documentation on following links, but I'd like to go to an ad-hoc page that isn't linked from the main page after I log in. The page requires authentication, and that's why I force the login first. Is there a way to change to another URL (that's still part of the same site) with Ruby's Mechanize gem and have it retain all of the cookies from the login? I looked up methods such as link_with, but that's for following a link on the current page. I'd like to go to a different URL within the same website.
I believe you just need to make a subsequent get call after your initial transaction is complete.
client = Mechanize.new

client.get('http://example.com/login') do
  # handle login
end

client.get('http://example.com/something-else') do
  # another action
end

Capybara not finding elements on page

I have a landing page that contains 10 links which I need to click through and load. I'm using Capybara with the Selenium web driver to create an RSpec test that will load the website, log in, go to the landing page, click the first dashboard link on the landing page, return to the landing page, click the second link, and so on.
Whenever Capybara returns to the landing page it always raises an ElementNotFound error when attempting to click the 2nd link. My guess is that the JavaScript isn't loading before the element is clicked, but isn't Capybara now smart enough to wait for the page to load?
I am not too familiar with Ruby and Capybara, but Selenium has implicit waits, which should take care of that issue: http://docs.seleniumhq.org/docs/04_webdriver_advanced.jsp
You could also always loop until it returns the element you are looking for.
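For reference, Capybara's finders and matchers (find, click_link, have_css, ...) already retry until Capybara.default_max_wait_time (default_wait_time in older versions) elapses, so an explicit loop is usually unnecessary. A minimal sketch, with the link text, path helper and selector as placeholders:
Capybara.default_max_wait_time = 10     # finders and matchers retry for up to 10 seconds

visit landing_page_path                 # back on the landing page
click_link "Dashboard 2"                # waits for the link to appear before clicking
expect(page).to have_css(".dashboard")  # waits until the dashboard content renders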

How can I make cucumber with capybara and selenium fire Ajax on page load

I have the following as a cucumber story:
@javascript
Scenario: Showing the page
  Given I am a logged in user
  And there is a member with a site
  And I go to the home page
  .....
When that home page is loaded, there is a drop-down that is populated via AJAX on page load. In the Selenium-driven browser test, the
$(document).ready(function(){})
handler is not run, and as such the drop-down select that I need to use is not present. Is there any way I can force Selenium to fire a JavaScript page-load event via a step, maybe
When the page is loaded information about members is retrieved via AJAX
or something like that? I need that drop-down box, but due to some Action Caching in other parts of the application that use the navigation drop-downs but should only sometimes have this extra drop-down, I load it in afterwards with JavaScript, so I can't really change that.
Any ideas?
EDIT:
It turns out I had set up some test data wrongly and that was failing a condition, so as usual the library is fine and I made the mistake; this will be fine.
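For reference, rather than trying to re-fire $(document).ready, a step can simply wait for the result of the AJAX call; a rough sketch with a placeholder selector:
When(/^the members dropdown has loaded$/) do
  # have_css retries until Capybara's wait time elapses, so this step only
  # passes once the AJAX response has filled the select with at least one option
  expect(page).to have_css("select#member option", minimum: 1)
end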

Using a Ruby script to login to a website via https

Alright, so here's the dealio: I'm working on a Ruby app that'll take data from a website, and aggregate that data into an XML file.
The website I need to take data from does not have any APIs I can make use of, so the only thing I can think of is to login to the website, sequentially load the pages that have the data I need (in this case, PMs; I want to archive them), and then parse the returned HTML.
The problem, though, is that I don't know of any way to programmatically simulate a login session.
Would anyone have any advice, or know of any proven methods that I could use to successfully log in to an HTTPS page, and then programmatically load pages from the site using a temporary cookie session from the login? It doesn't have to be a Ruby-only solution -- I just want to know how I can actually do this. And if it helps, the website in question is one that uses Microsoft's .NET Passport service as its login/session mechanism.
Any input on the matter is welcome. Thanks.
Mechanize
Mechanize is a Ruby library which imitates the behaviour of a web browser. You can click links, fill out forms and submit them. It even has a history and remembers cookies. It seems your problem could easily be solved with the help of Mechanize.
The following example is taken from http://docs.seattlerb.org/mechanize/EXAMPLES_rdoc.html:
require 'rubygems'
require 'mechanize'

a = Mechanize.new
a.get('http://rubyforge.org/') do |page|
  # Click the login link
  login_page = a.click(page.link_with(:text => /Log In/))

  # Submit the login form
  my_page = login_page.form_with(:action => '/account/login.php') do |f|
    f.form_loginname = ARGV[0]
    f.form_pw = ARGV[1]
  end.click_button

  my_page.links.each do |link|
    text = link.text.strip
    next unless text.length > 0
    puts text
  end
end
You can try using wget to fetch the pages. You can analyse the login process with this app: www.portswigger.net/proxy/.
For what it's worth, you could check out Webrat. It is meant to be used as a tool for automated acceptance tests, but I think you could use it to simulate filling out the login fields, then click through links by their names, and grab the needed HTML as a string. I haven't tried doing anything like that, though.
