driver.navigate.refresh not working as expected with selenium-webdriver - ruby

Please see the code below:
driver.get "https://example.com/"
driver.find_element(:class, "button").submit
driver.navigate.refresh
wait = Selenium::WebDriver::Wait.new(:timeout => 10) # seconds
element = wait.until { driver.find_element(:name => "username") }
I wrote this code intending that the previous page keeps refreshing until the page containing the username element appears. But my code does not seem to meet that requirement, and the script throws the error below:
Error
C:/Ruby193/lib/ruby/gems/1.9.1/gems/selenium-webdriver-2.27.2/lib/selenium/webdriver/common/wait.rb:57:in `until': timed out after 10 seconds (Unable to locate element: {"method":"name","selector":"username"}) (Selenium::WebDriver::Error::TimeOutError)
Any good ideas to meet my requirement, please?
Thanks,

I have not come across a built-in way to do this in selenium-webdriver, so I would do the following:
# Submit your first page
driver.get "https://example.com/"
driver.find_element(:class, "button").submit

# Refresh the page until your element appears
end_time = Time.now + 10 # seconds
begin
  element = driver.find_element(:name => "username")
rescue Selenium::WebDriver::Error::NoSuchElementError
  if Time.now < end_time
    driver.navigate.refresh
    retry
  end
end
Basically this attempts to find the element. If it is not found, the exception is caught, the page is refreshed and the lookup is retried. This is repeated until the element is found or the time limit has been reached.
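If you need this in more than one place, the same idea could be wrapped in a small helper. This is just a sketch; the refresh_until_present name and its arguments are hypothetical, not part of selenium-webdriver:
# Hypothetical helper (not part of selenium-webdriver): keep refreshing
# until the locator matches or the timeout expires, then re-raise.
def refresh_until_present(driver, locator, timeout = 10)
  end_time = Time.now + timeout
  begin
    driver.find_element(locator)
  rescue Selenium::WebDriver::Error::NoSuchElementError
    raise if Time.now >= end_time
    driver.navigate.refresh
    retry
  end
end

# element = refresh_until_present(driver, :name => "username")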

Related

Ruby : Watir : How to avoid closing browser from Net::ReadTimeout?

I am making an automation program using Watir that reads links from a file links.txt and then opens them one by one in a Chrome browser. When a page takes too much time to load and the browser is still loading, it shows me Net::ReadTimeout. I have tried to rescue, and if it is not rescued, go to the next link in the list.
I have tried the code below, but even with max_retries = 3 it shows the error again. I want the browser to wait a specific amount of time and then, if the page is still loading, close the browser and go to the next link in the list.
file = 'links.txt'
max_retries = 3
times_retried = 0
n = 0

begin
  browser = Watir::Browser.new :chrome
  Watir.default_timeout = 1000
rescue Net::ReadTimeout
  browser.wait
  retry
end

line = File.readlines(file).sample

while n <= 50 do
  n += 1
  begin
    browser.goto "#{line}"
  rescue Net::ReadTimeout => error
    if times_retried < max_retries
      times_retried += 1
      puts "Failed to load page, retry #{times_retried}/#{max_retries}"
      retry
    else
      puts "Exiting script. Timeout Loading Page"
      exit(1)
    end
  end
  break if n == 50
end
You have to increase the page load timeout. By default it waits 60 seconds, but you can increase it with the following code:
client = Selenium::WebDriver::Remote::Http::Default.new
client.read_timeout = 120 # seconds
driver = Selenium::WebDriver.for :chrome, http_client: client
b = Watir::Browser.new driver
Now your code would wait 120 seconds for any page load triggered by #click, and also for URLs loaded with the goto method.
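If the requirement is also to skip to the next link once the retries are exhausted (rather than exiting), one possible shape, building on the client setup above, is sketched below; the retry count and file handling here are illustrative, not part of the original answer:
client = Selenium::WebDriver::Remote::Http::Default.new
client.read_timeout = 120 # seconds
driver = Selenium::WebDriver.for :chrome, http_client: client
browser = Watir::Browser.new driver

File.readlines('links.txt').each do |line|
  retries = 0
  begin
    browser.goto line.strip
  rescue Net::ReadTimeout
    retries += 1
    retry if retries < 3
    puts "Giving up on #{line.strip}, moving to the next link"
    # falls through to the next link in the list
  end
end

browser.close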
This is an old question, but maybe this helps someone:
client = Selenium::WebDriver::Remote::Http::Default.new
client.timeout = 600 # instead of the default 60 (seconds)
Watir::Browser.new :chrome, http_client: client

Browsermob Proxy + Watir not capturing traffic continuously

I have the BrowserMob Proxy set up correctly with Watir and it is capturing traffic and saving the HAR file; however, it is not capturing the traffic continuously. Here is what I'm trying to achieve:
1. Go to the homepage
2. Click on a link to go to another page where I need to wait for some events to happen
3. Once on the second page, start capturing traffic after the event happens and wait for a specific call to occur and capture its contents
What I'm noticing, however, is that it follows all of the above steps, but on step 3 the proxy stops capturing traffic before that call is even made on the page. The HAR that is returned doesn't have that call in it, hence the test fails before it even does its job. This is what the code looks like:
class BMP
  attr_accessor :server, :proxy, :net_har, :sel_proxy

  def initialize
    bm_path = File.path(Support::Paths.cucumber_root + "/browsermob-proxy-2.1.4/bin/browsermob-proxy")
    @server = BrowserMob::Proxy::Server.new(bm_path, {:port => 9999, :log => false, :use_little_proxy => true, :timeout => 100})
    @server.start
    @proxy = @server.create_proxy
    @sel_proxy = @proxy.selenium_proxy
    @proxy.timeouts(:read => 50000, :request => 50000, :dns_cache => 50000)
    @net_har = @proxy.new_har("new_har", :capture_binary_content => true, :capture_headers => true, :capture_content => true)
  end

  def fetch_har_entries(target_url)
    har_logs = File.join(Support::Paths.har_logs, "har_file_#{Time.now.strftime("%m%d%y_%H%M%S")}.har")
    @net_har.save_to har_logs
    index = 0
    while (@net_har.entries.count > index) do
      entry = @net_har.entries[index]
      if entry.request.url.include?(target_url) && entry.request.method.eql?("GET")
        logs = JSON.parse(entry.response.content.text) if not entry.response.content.text.nil?
        har_logs = File.join(Support::Paths.har_logs, "json_file_#{Time.now.strftime("%m%d%y_%H%M%S")}.json")
        File.open(har_logs, "w") do |json|
          json.write(logs)
        end
        break
      end
      index += 1
    end
  end
end
In my test file I have the following:
Then("I navigate to the homepage") do
visit(HomePage) do |page|
page.element.click
end
end
And("I should wait for event to capture traffic") do
visit(SecondPage) do |page|
page.wait_until{page.element2.present?)
BMP.fetch_har_entries("target/url")
end
end
What am I missing that is causing the proxy to not capture traffic in its entirety?
In case anyone gets here from a Google search, I figured out how to resolve this on my own. To resolve the issue, I used a custom retriable loop called the eventually method.
logs = nil
eventually(timeout: 110, interval: 1) do
  @net_har = @proxy.new_har("har", capture_binary_content: true, capture_headers: true, capture_content: true)
  @net_har.entries.each do |entry|
    begin
      break if @net_har.entries.index(entry) == @net_har.entries.count
      next unless entry.request.url.include?(target_url) &&
                  entry.request.post_data.text.include?(target_body_text)
      logs = entry.request.post_data.text
      break
    rescue TypeError
      fail("Response body for the network call came back empty")
    end
  end
  raise EOFError if logs.nil?
end
logs
Basically I'm assuming what was happening was that BMP would only cache or capture about 30 seconds' worth of HAR logs, and if my network event didn't occur during those 30 seconds, I was out of luck. So what the above code does is wait for the logs variable to be non-nil; if it is nil, it raises an EOFError, goes back to the loop, initializes the HAR again and looks for the network call again. It keeps doing that until it finds the call or the 110 seconds are up. The following is the eventually method I'm using:
def eventually(options = {})
  timeout = options[:timeout] || 30
  interval = options[:interval] || 0.1
  time_limit = Time.now + timeout
  loop do
    begin
      yield
    rescue EOFError => error
    end
    return if error.nil?
    raise error if Time.now >= time_limit
    sleep interval
  end
end

Ruby Selenium - find_elements if nil => Net::ReadTimeout

Hi, I get Net::ReadTimeout every time I use find_elements when there are no matching elements, like this. A short example, my feature file (Cucumber):
Feature: Retest Failed Testcases
  Scenario: Simple test find_elements
    Given open website humblebundle.com
    And search for civilization
My Ruby step file:
Given /^open website (.*)$/ do |url|
  $driver.navigate.to "http://www." + url
  $wait = Selenium::WebDriver::Wait.new(:timeout => 180)
  $wait.until {
    $driver.find_element(:css => "div[class='navbar-content']").displayed?
  }
end

And /^search for (.*)$/ do |name|
  element = $driver.find_element(:css => "input[placeholder='Search']")
  element.send_keys name
  sleep(1)
  element.send_keys :enter
  sleep(10)
  # line 199 in the error below
  items = $driver.find_elements(:id => "Games")
  puts items.count
end
and the error I get:
Net::ReadTimeout: Net::ReadTimeout
./features/step_definitions/basic_steps.rb:199:in `/^search for (.*)$/'
./features/a_test.feature:4:in `And search for civilization'
I would be very thankful for any help with this.
By default, the timeout for a request is set to 60 seconds. If your implicit wait setting is set to greater than this value, you will get a Net::ReadTimeout if no elements are located. You must keep your implicit wait setting less than your request timeout.
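For illustration, here is a sketch of one way to keep the two timeouts consistent; the numbers are arbitrary, and on older selenium-webdriver versions the setter may be client.timeout rather than client.read_timeout:
client = Selenium::WebDriver::Remote::Http::Default.new
client.read_timeout = 120 # HTTP read timeout in seconds (default is 60)

$driver = Selenium::WebDriver.for :chrome, http_client: client
$driver.manage.timeouts.implicit_wait = 60 # keep this below the read timeout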
Thanks, that was it.
I changed the implicit wait to be 2 seconds less than my ReadTimeout.

Selenium in ruby/chrome, Selenium wait will not function

No matter what I try, the browser runs the test too fast, before it has a chance to find the element I am looking for. If I put in a simple sleep 2, the drop-down menu has time to drop and load, and the element is found successfully. But I want to learn to use the Selenium wait commands. I have tried numerous combinations of the code below and looked all over the web for documentation or examples. I have found plenty for Firefox, and people say that some of the things below worked perfectly there, but my project teammates and I cannot get any of the waits, implicit or explicit, to pause long enough to detect the element. The element does not exist until the drop-down menu has fully dropped; it doesn't take 2 seconds, but it seems none of my wait commands actually make it wait. As I said, I have tried many different things, and almost all of them are below. If anyone can help guide me, I would appreciate it. Here is some of the code I have tried:
def setup
  @driver = Selenium::WebDriver.for :chrome
  @driver.get "https://website.herokuapp.com/"
  @wait = Selenium::WebDriver::Wait.new(:timeout => 10) # seconds
  # @driver = Selenium::WebDriver.for :chrome
  # @driver.manage.timeouts.implicit_wait = 30
  # @wait = Selenium::WebDriver::Wait.new(:timeout => 15)
  # @wait = Selenium::WebDriver::Wait.new(:timeout => 10)
  # @driver.manage.timeouts.implicit_wait = 10
  # @wait = Selenium::WebDriver::Wait.new(timeout: 10)
  @driver.manage.window.maximize()
  # @driver.navigate.to("https://website.herokuapp.com/")
end

def test_user_name_is_present
  login()
  @driver.find_element(:class, "navbar-toggle").click()
  # user = @driver.find_element(:class, "dropdown-toggle").text
  # @wait.until { @driver.find_element(:class, "dropdown-toggle") }
  @driver.find_element(:class, "dropdown-toggle")
  @wait.until { @driver.find_element(:class => "dropdown-toggle") }
  user = @driver.find_element(:class, "dropdown-toggle").text
  assert_equal(true, user.include?('HEATHER'), "no user")
end
I'm more familiar with JavaScript or Java bindings.
But what about:
def test_user_name_is_present
  login()
  @driver.find_element(:class, "navbar-toggle").click()
  @wait.until { @driver.find_element(:class => "dropdown-toggle").displayed? }
  user = @driver.find_element(:class, "dropdown-toggle").text
  assert_equal(true, user.include?('HEATHER'), "no user")
end
Our team uses this; just add it to your test_helper.rb if you're using Rails:
def wait_for_ajax(sleep_sec = 4)
  assert page.has_no_content?(:css, 'body.loading')
  sleep sleep_sec if sleep_sec
end
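A minimal usage sketch, assuming Capybara-style page methods and that a body.loading class is toggled while AJAX is in flight; the selectors and text here are illustrative, not from the original answer:
# Hypothetical usage: trigger an AJAX action, then wait for the
# 'body.loading' indicator to clear before asserting on the result.
page.find(:css, ".navbar-toggle").click
wait_for_ajax(2)
assert page.has_content?("HEATHER")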

How to determine that element is clickable using Selenium WebDriver with Ruby?

On my web page, the page loads but its sub-tabs aren't clickable for 20 seconds (sometimes more than that). The page contents are:
<nav id="subTabHeaders">
<div class="selected" data-name="ab">AB</div>
<div class="" data-name="cd">CD</div>
<div class="" data-name="ef">EF</div>
<div class="" data-name="gh">GH</div>
</nav>
I have to click on a sub-tab, so I tried the following:
Put in a sleep and then element.click.
But sleep is not an ideal way to deal with this, because the sub-tab element may become clickable before or after the time given to sleep.
Using sleep, I did the following:
element = WAIT.until { driver.find_element(:xpath, ".//*[@id='subTabHeaders']/div[3]") }
sleep 20
element.click
If the element becomes clickable after more than the sleep time and we click on it immediately after the sleep expires (I mean, using the above code, suppose the element becomes clickable after 30 seconds but we click on it right after 20 seconds), the actual click action doesn't happen, and the click doesn't return any error either.
Is there a Ruby method to check whether an element is clickable or not, so that we know when to click?
From the Ruby bindings page (see the driver examples):
# wait for a specific element to show up
wait = Selenium::WebDriver::Wait.new(:timeout => 10) # seconds
wait.until { driver.find_element(:id => "foo") }
So ordinarily you could do something like:
wait = Selenium::WebDriver::Wait.new(:timeout => 40)
wait.until do
  element = driver.find_element(:xpath, ".//*[@id='subTabHeaders']/div[3]")
  element.click
end
Or more succinctly
wait = Selenium::WebDriver::Wait.new(:timeout => 40)
wait.until { driver.find_element(:xpath, ".//*[@id='subTabHeaders']/div[3]").click }
However, since you say that the click doesn't raise an error, it sounds like the click is in fact working, just your page isn't really ready to display that tab. I'm guessing there's some async javascript going on here.
So what you can try is inside the wait block, check that the click caused the desired change. I'm guessing, but you could try something like:
wait = Selenium::WebDriver::Wait.new(:timeout => 40)
wait.until do
  driver.find_element(:xpath, ".//*[@id='subTabHeaders']/div[3]").click
  driver.find_element(:xpath, ".//*[@id='subTabHeaders']/div[3][@class='selected']")
end
The important thing here is that #until will wait and repeat until the block gets a true result or the timeout is exceeded.
How about:
WebDriverWait wait = new WebDriverWait(driver, 30);
wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("subTabHeaders")));
Hope this helps you:
begin
  element = WAIT.until { driver.find_element(:xpath, ".//*[@id='subTabHeaders']/div[3]") }
  sleep 20
  element.click
rescue Selenium::WebDriver::Error::StaleElementReferenceError
  p "Indicates that a reference to an element is now “stale” - the element no longer appears in the DOM of the page."
end
Or you could try this one:
begin
  wait = Selenium::WebDriver::Wait.new(:timeout => 10) # seconds
  wait.until { driver.title.include? "page title" }
  driver.find_element(:xpath, ".//*[@id='subTabHeaders']/div[3]").click
rescue Selenium::WebDriver::Error::StaleElementReferenceError
  p "element is not present"
end
Here is what I use to test if the link is clickable, else go to another URL:
if (logOutLink.Exists() && ExpectedConditions.ElementToBeClickable(logOutLink).Equals(true))
{
    logOutLink.Click();
}
else
{
    Browser.Goto("/");
}
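As far as I know, the Ruby bindings have no built-in "element to be clickable" condition, but waiting until the element is both displayed? and enabled? is a common approximation. A sketch using the locator from the question:
# Wait until the sub-tab is visible and enabled, then click it.
wait = Selenium::WebDriver::Wait.new(:timeout => 40)
wait.until do
  element = driver.find_element(:xpath, ".//*[@id='subTabHeaders']/div[3]")
  element.displayed? && element.enabled?
end
driver.find_element(:xpath, ".//*[@id='subTabHeaders']/div[3]").click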
