How can I check whether new page is completely loaded or not using Watir - ruby

Is there any way I can check whether a particular page completely loaded or not using Watir?
I tried with browser.status but it's not printing anything.

It depends on what you mean by "completely loaded": all of the HTML, or the HTML plus JavaScript-driven content.
For HTML
browser = Watir::Browser.new
#To check if page has been loaded
ready = browser.ready_state.eql? "complete"
#To wait until page has been loaded
browser.wait
For JavaScript and HTML, you have to wait for each specific element.
This code will wait until the element is present:
browser = Watir::Browser.new
browser.text_field(:id => 'id_of_object').wait_until_present
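If you want both behaviors together, one option is a small polling helper that waits for the document's ready state to report complete, with a timeout. This is a sketch; the helper name is hypothetical and not part of Watir's API, and in real use you would pass it an open Watir::Browser.

```ruby
# Hypothetical helper: polls the browser's ready_state until the document
# reports "complete", or raises after `timeout` seconds.
def wait_for_page_load(browser, timeout: 30, interval: 0.5)
  deadline = Time.now + timeout
  until browser.ready_state == "complete"
    raise "page did not finish loading within #{timeout}s" if Time.now > deadline
    sleep interval
  end
  true
end

# Usage sketch:
#   browser.goto url
#   wait_for_page_load(browser)
```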

Related

Ruby Watir -- Trying to loop through links in cnn.com and click each one of them

I have created this method to loop through the links in a certain div on the web site. The purpose of the method is to collect the links, insert them into an array, and then click each one of them.
require 'watir-webdriver'
require 'watir-webdriver/wait'
site = Watir::Browser.new :chrome
url = "http://www.cnn.com/"
site.goto url
box = Array.new
container = site.div(class: "column zn__column--idx-1")
wanted_links = container.links
box << wanted_links
wanted_links.each do |link|
  link.click
  site.goto url
  site.div(id: "nav__plain-header").wait_until_present
end
site.close
So far it seems like I am only able to click on the first link then I get an error message stating this:
unable to locate element, using {:element=>#<Selenium::WebDriver::Element:0x634e0a5400fdfade id="0.06177683611003881-3">} (Watir::Exception::UnknownObjectException)
I am very new to ruby. I appreciate any help. Thank you.
The problem is that once you navigate to another page, all of the element references (ie those in wanted_links) become stale. Even if you return to the same page, Watir/Selenium does not know it is the same page and does not know where the stored elements are.
If you are going to navigate away, you need to collect all of the data you need first. In this case, you just need the href values.
# Collect the href of each link
wanted_links = container.links.map(&:href)
# You have each page URL, so you can navigate directly without returning to the homepage
wanted_links.each do |link|
  site.goto link
end
In the event that the links do not directly navigate to a page (eg they execute JavaScript when clicked), you will need to collect enough data to re-locate the elements later. What you use as the locator will depend on what is known to be static/unique. As an example, I will assume that the link text is a good locator.
# Collect the text of each link
wanted_links = container.links.map(&:text)
# Iterate through the links
wanted_links.each do |link_text|
  container = site.div(class: "column zn__column--idx-1")
  container.link(text: link_text).click
  site.back
end
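A third variation (my own sketch, not from the answer above): click by position and re-locate the collection on every iteration, so no element reference is ever held across a page load. The helper name is hypothetical; fetch_links would be a lambda that re-finds the links, e.g. -> { site.div(class: "column zn__column--idx-1").links }.

```ruby
# Hypothetical helper: clicks each link by index, re-locating the whole
# collection before every click so references never go stale.
def click_each_by_index(fetch_links)
  count = fetch_links.call.size
  count.times do |i|
    fetch_links.call[i].click
    # In the real script you would also navigate back here, e.g. site.back
  end
end
```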

Watir-webdriver throws 'not clickable' error even when element is visible, present

I am trying to automate tests in Ruby using the latest Watir-Webdriver 0.9.1, Selenium-Webdriver 2.53.0 and Chrome extension 2.21. However the website that I am testing has static headers at the top or sometimes static footers at the bottom. Hence since Watir auto-scrolls an element into view before clicking, the elements get hidden under the static header or the static footer. I do not want to set desired_capabitlites (ElementScrollBehavior) to 1 or 0 as the websites I am testing can have both - static header or static footer or both.
Hence the questions are:
1) Why does Watir throw an "Element is not clickable" exception even when the element is visible and present? See the Ruby code (I picked a random company website as an example) and the results below.
2) How can I resolve this without resorting to ElementScrollBehaviour?
Ruby code:
require 'watir-webdriver'
browser = Watir::Browser.new :chrome
begin
  # Step 1
  browser.goto "shop.coles.com.au/online/mobile/national"
  # Step 2 - click on 'Full Website' link at the bottom
  link = browser.link(text: "Full website")
  # Check whether the link exists, is present and is visible
  puts link.exists?
  puts link.present?
  puts link.visible?
  # Click on the link
  link.click
rescue => e
  puts e.inspect
ensure
  sleep 5
end
puts browser.url
browser.close
Result:
$ ruby link_not_clickable.rb
true
true
true
Selenium::WebDriver::Error::UnknownError: unknown error: Element is not clickable at point (460, 1295). Other element would receive the click: div class="shoppingFooter"...div
(Session info: chrome=50.0.2661.75)
(Driver info: chromedriver=2.21.371459 (36d3d07f660ff2bc1bf28a75d1cdabed0983e7c4),platform=Mac OS X 10.10.5 x86_64)>
http://shop.coles.com.au/online/mobile/national
thanks!
You can fire a click event on an element without it being visible. Check this out:
link.fire_event('click')
BUT this is not a good solution, as it will fire the click even if the element is not actually visible, or when it is simply impossible to click (because of a broken sticky footer, for example).
That's why it is much better to wait for the footer, scroll the page, and then click:
browser.div(id: "footerMessageArea").wait_until_present
browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
link.click
The sticky footer is blocking webdriver from performing the click, hence the message that says 'other element would receive the click'.
There are several different ways you can get around this.
Scroll down to the bottom of the page before the click
Hide/Delete the sticky footer before any/all link clicks
Focus on an element below the element you want to click before you perform the click
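For the second option, one sketch (assuming the footer matches the .shoppingFooter class seen in the error message) is to build a snippet of JavaScript that hides the blocking element before clicking. The helper name is mine, not a Watir API:

```ruby
# Hypothetical helper: returns JavaScript that hides the first element
# matching the given CSS selector.
def hide_element_script(css_selector)
  "document.querySelector('#{css_selector}').style.display = 'none';"
end

# In a live session (assumes `browser` is an open Watir::Browser):
#   browser.execute_script(hide_element_script('.shoppingFooter'))
#   browser.link(text: "Full website").click
```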
I guess your element is not visible on the screen.
Before clicking on the element, first scroll the webpage so that the element is visible, then perform the click. Hope this works.
I had a similar issue.
I just used the following JavaScript code with Watir:
link = browser.link(text: "Full website")
browser.execute_script("arguments[0].focus(); arguments[0].click();", link)
Sometimes I have to use .click!, which I believe is the fire_event equivalent. Basically, something is layered weirdly, and you just have to work around the front-end mess.

When I click a button that leads to another page, how do I get the contents of the new page?

I'm using selenium-webdriver to scrape a website. When the browser clicks the "Next" button, the next page loads, but when I try to find the elements I want, the driver prints contents from the previous page.
Here's my script:
require 'selenium-webdriver'
driver = Selenium::WebDriver.for :firefox
url = 'http://www.airforwarders.org/companies'
page = driver.navigate.to(url)
driver.find_elements(:css => '.item_main').each do |div|
  puts div.text
end
paginationToolbar = driver.find_element(:css => '.pagination-toolbar')
paginationToolbar.find_elements(:css => '.btn')[-2].click # Clicking the "next" button
driver.find_elements(:css => '.item_main').each do |div|
  puts div.text # This shows the same stuff from the previous loop
end
If I can get the contents from the new page, this would be no problem. How do I do this?
If you are sure that Selenium clicked the button and the next page loaded, then I think you should add sleep 1 after the click.
Possibly an AJAX request had not finished at the moment of the click action.
Try waiting 1-3 seconds before performing additional actions.
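A more robust alternative to a fixed sleep is to poll until the first result's text differs from what was on the previous page. The helper below is a hypothetical sketch, not part of selenium-webdriver; in the question's script you would call it with the old text and a block that re-finds the element.

```ruby
# Hypothetical helper: yields repeatedly until the yielded value differs
# from `old_text`, or raises after `timeout` seconds.
def wait_for_content_change(old_text, timeout: 10, interval: 0.5)
  deadline = Time.now + timeout
  loop do
    current = yield
    return current if current != old_text
    raise "timed out waiting for new page content" if Time.now > deadline
    sleep interval
  end
end

# Usage sketch:
#   old = driver.find_element(:css => '.item_main').text
#   paginationToolbar.find_elements(:css => '.btn')[-2].click
#   wait_for_content_change(old) { driver.find_element(:css => '.item_main').text }
```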

Ruby cucumber watir-webdriver is not identifying the object which is placed in a frame

My Watir code below works, but the Add condition link is not being identified:
require 'spec'
require 'watir'
browser = Watir::Browser.new
pages = { "RCM Workspace Homepage" => "http://rcm-bpmt.apmoller.net/workspace/faces/jsf/workspace/workspace.xhtml" }
Given(/^that I am on the (.*?)$/) do |page|
  # Opening a new browser and going to the specified page
  browser.goto(pages[page])
  # Maximizing the opened browser window
  browser.maximize
end

When(/^I search for (.*?)$/) do |text|
  # Ensuring that we have opened the expected page by verifying the page content
  browser.html.include?(text).should == true
end

Then(/^I click on Show Filters link$/) do
  # Opening the Conditions window by clicking on the Show Filters link
  browser.link(:id, "portletComponentWorkList_viewNormalModeWorkList_viewPanel_showFiltersLink").click
  # Clicking on the Add condition link, which is placed in a frame and opens when I click on the Show Filters link
  browser.element(:id, 'portletComponentWorkList_viewNormalModeWorkList_viewPanel_conditionButton').click
end
HTML details:
<A id=portletComponentWorkList_viewNormalModeWorkList_viewPanel_conditionButton onclick="oc.ajax.jsf.doCallback('portletComponentWorkList','portletComponentWorkList:viewNormalModeWorkList:viewPanel:conditionButton');return false;" href="http://rcm-bpmt.apmoller.net/workspace/faces/jsf/workspace/workspace.xhtml#">
Add condition
</A>
In the following line, Watir will look for the element anywhere except inside frames.
browser.element(:id, 'portletComponentWorkList_viewNormalModeWorkList_viewPanel_conditionButton').click
Unlike other elements, you must tell Watir when an element is in a frame. This is done similarly to how you would scope an element search to a specific parent element.
For example, if there is only 1 frame or your element is in the first frame, you can do (noting the addition of the .frame):
browser.frame.element(:id, 'portletComponentWorkList_viewNormalModeWorkList_viewPanel_conditionButton').click
If there are multiple frames, you will need to add parameters to be more specific about which frame to use. For example, if the frame has an id:
browser.frame(:id => 'myframe').element(:id, 'portletComponentWorkList_viewNormalModeWorkList_viewPanel_conditionButton').click

How to capture screen shot using watir webdriver on a web page that does lazy load of images

I am using watir webdriver to do some automated testing of web pages. The pages have many images which are lazy loaded when the user scrolls the content into view (uses jquery lazyload plugin)
I am doing
10.times do
  browser.send_keys :space
end
To scroll items in view and it loads fine
I also do
browser.div(:id => 'footer').wd.location_once_scrolled_into_view
which scrolls it to the bottom
and then I do
browser.screenshot.save
This does not seem to capture any images that are lazy loaded via the jQuery plugin.
What can I do to capture the entire page?
The simplest thing you could do is to scroll to the bottom of the page. Count the images, send space, count the images again. If the number of images increased, send space again. If the number is the same, you have loaded all images.
Something like this (not tested; note that browser.imgs returns a collection, so we compare sizes):
old_image_count = 0
new_image_count = browser.imgs.size
while old_image_count < new_image_count
  old_image_count = new_image_count
  browser.send_keys :space
  new_image_count = browser.imgs.size
end
Instead of:
browser.screenshot.save
try:
browser.driver.save_screenshot("<path>/photo.jpg")