Ruby : Watir : How to avoid closing browser from Net::ReadTimeout?

I am writing an automation program using Watir that reads links from a file, links.txt, and opens them one by one in a Chrome browser. When a page takes too much time to load, it raises Net::ReadTimeout. I have tried to rescue the error and, if the retry does not succeed, move on to the next link in the list.
I have tried the code below, but once max_retries = 3 is exhausted it raises the error again. I want the browser to wait for a specific amount of time and, if the page is still loading after that, close it and go to the next link in the list.
file = 'links.txt'
max_retries = 3
times_retried = 0
n = 0
begin
  browser = Watir::Browser.new :chrome
  Watir.default_timeout = 1000
rescue Net::ReadTimeout
  browser.wait
  retry
end
line = File.readlines(file).sample
while n <= 50 do
  n += 1
  begin
    browser.goto "#{line}"
  rescue Net::ReadTimeout => error
    if times_retried < max_retries
      times_retried += 1
      puts "Failed to load page, retry #{times_retried}/#{max_retries}"
      retry
    else
      puts "Exiting script. Timeout Loading Page"
      exit(1)
    end
  end
  break if n == 50
end

You have to increase the page-load timeout. It defaults to 60 seconds, but you can increase it with the following code:
client = Selenium::WebDriver::Remote::Http::Default.new
client.read_timeout = 120 # seconds
driver = Selenium::WebDriver.for :chrome, http_client: client
b = Watir::Browser.new driver
Now your code will wait up to 120 seconds for any page load triggered by #click, and for any URL loaded via the goto method.
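To also skip to the next link instead of exiting when the timeout still fires, a minimal sketch (assuming links.txt holds one URL per line; the rescue-inside-block syntax needs Ruby 2.5+) could look like this:

require 'watir'

# Give slow pages up to 120 seconds before Net::ReadTimeout is raised
client = Selenium::WebDriver::Remote::Http::Default.new
client.read_timeout = 120 # seconds
browser = Watir::Browser.new :chrome, http_client: client

File.readlines('links.txt').each do |line|
  browser.goto line.strip
rescue Net::ReadTimeout
  # Still loading after 120 seconds: log it and move on to the next link
  puts "Timed out loading #{line.strip}, skipping"
end

browser.close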

This is an old question, but maybe this helps someone:
client = Selenium::WebDriver::Remote::Http::Default.new
client.timeout = 600 # instead of the default 60 (seconds)
Watir::Browser.new :chrome, http_client: client

Related

Unable to fetch a link from an email in the gmail when run on jenkins but works and fetches desired result locally (in the WATIR framework using ruby)

The following code, when run on Jenkins, throws the error:
'The set password url is invalid argument(Session info: chrome=100.0.4896.88) (Selenium::WebDriver::Error::InvalidArgumentError)
Backtrace: Ordinal0 [0x00A67413+2389011]'
STEP FILE:
When(/^the user clicks on activate online account link$/) do
  on(CheckoutPage) do |page|
    # sleep for 30 seconds for the email to be received
    sleep 30
    @set_password_link = page.get_password_token
    puts "The set password url is #{@set_password_link}"
    page.navigate_to(@set_password_link)
  end
end
Code FILE:
def get_password_token
  begin
    retries ||= 0
    Gmail.new("xxxxxxx@gmail.com", "xxxxxxxx") do |gmail|
      email = gmail.inbox.emails(:from => 'orders@cottonon.com', :subject => 'Activate your online account').last
      html = email.html_part.body.to_s
      urls = URI.extract(html, %w(https))
      return urls[1]
    end
  rescue
    retry if (retries += 1) < $code_retry
  end
end
It could be a number of things; maybe you just need URI.parse(urls[1]), or the fetched URL is invalid.
Also, your Gmail code always fetches the last email, which can return the wrong one if the expected email has not arrived yet.
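For example, parsing the extracted string first surfaces a malformed URL early (a sketch; urls is the array from the code above):

require 'uri'

begin
  uri = URI.parse(urls[1]) # raises URI::InvalidURIError on a malformed URL
  puts "Navigating to #{uri}"
rescue URI::InvalidURIError => e
  puts "Fetched URL is invalid: #{e.message}"
end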
Here is a gmail_check method that should be more resilient to mail content and arrival time:
def gmail_check(url_part, receiver, timeout = 30)
  time = (Time.now - 5.minutes).to_i
  Gmail.connect("xxxxxxx@gmail.com", "xxxxxxxx") do |gmail|
    puts("Reading emails to: #{receiver}")
    while timeout > 0
      gmail.inbox.find(:gm => "\"after:#{time}\"").each do |mail|
        if mail.message.to.first == receiver
          content = mail.multipart? ? mail.html_part.decoded : mail.message.decoded
          Nokogiri::HTML(content).css("a").each do |a|
            href = a.attributes["href"].to_s
            return href if href.include?(url_part)
          end
        end
      end
      puts("Waiting 5 seconds before reading mail again.")
      timeout = timeout - 5
      sleep 5
    end
  end
end
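Usage would then look something like this (hypothetical arguments; the URL fragment and receiver address are placeholders):

# Look for the activation link sent to the shopper's address
link = gmail_check("set_password", "shopper@example.com", 60)
page.navigate_to(link) if link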
But you should be able to debug the problem easily by ssh-ing into the Jenkins machine:
type irb
type require 'gmail'
paste your code there
check the url
good luck :)

Browsermob Proxy + Watir not capturing traffic continuously

I have BrowserMob Proxy set up correctly with Watir, and it is capturing traffic and saving the HAR file; however, it is not capturing the traffic continuously. Here is what I'm trying to achieve:
Go to homepage
Click on a link to go to another page where I need to wait for some events to happen
Once on the second page, start capturing traffic after the event happens and wait for a specific call to occur and capture its contents.
What I'm noticing, however, is that it follows all of the above steps, but at step 3 the proxy stops capturing traffic before that call is even made on the page. The returned HAR doesn't have that call in it, hence the test fails before it even does its job. Here is what the code looks like:
class BMP
  attr_accessor :server, :proxy, :net_har, :sel_proxy
  def initialize
    bm_path = File.path(Support::Paths.cucumber_root + "/browsermob-proxy-2.1.4/bin/browsermob-proxy")
    @server = BrowserMob::Proxy::Server.new(bm_path, {:port => 9999, :log => false, :use_little_proxy => true, :timeout => 100})
    @server.start
    @proxy = @server.create_proxy
    @sel_proxy = @proxy.selenium_proxy
    @proxy.timeouts(:read => 50000, :request => 50000, :dns_cache => 50000)
    @net_har = @proxy.new_har("new_har", :capture_binary_content => true, :capture_headers => true, :capture_content => true)
  end
  def fetch_har_entries(target_url)
    har_logs = File.join(Support::Paths.har_logs, "har_file_#{Time.now.strftime("%m%d%y_%H%M%S")}.har")
    @net_har.save_to har_logs
    index = 0
    while @net_har.entries.count > index do
      entry = @net_har.entries[index]
      if entry.request.url.include?(target_url) && entry.request.method.eql?("GET")
        logs = JSON.parse(entry.response.content.text) if not entry.response.content.text.nil?
        har_logs = File.join(Support::Paths.har_logs, "json_file_#{Time.now.strftime("%m%d%y_%H%M%S")}.json")
        File.open(har_logs, "w") do |json|
          json.write(logs)
        end
        break
      end
      index += 1
    end
  end
end
In my test file I have the following:
Then("I navigate to the homepage") do
  visit(HomePage) do |page|
    page.element.click
  end
end
And("I should wait for event to capture traffic") do
  visit(SecondPage) do |page|
    page.wait_until { page.element2.present? }
    BMP.fetch_har_entries("target/url")
  end
end
What am I missing that is causing the proxy to not capture traffic in its entirety?
In case anyone gets here from a Google search: I figured out how to resolve this on my own (thanks stackoverflow community for nothing, lol). To resolve the issue, I used a custom retriable loop I call the eventually method.
logs = nil
eventually(timeout: 110, interval: 1) do
  @net_har = @proxy.new_har("har", capture_binary_content: true, capture_headers: true, capture_content: true)
  @net_har.entries.each do |entry|
    begin
      break if @net_har.entries.index(entry) == @net_har.entries.count
      next unless entry.request.url.include?(target_url) &&
        entry.request.post_data.text.include?(target_body_text)
      logs = entry.request.post_data.text
      break
    rescue TypeError
      fail("Response body for the network call came back empty")
    end
  end
  raise EOFError if logs.nil?
end
logs
Basically, I'm assuming what was happening was that BMP would only cache or capture about 30 seconds worth of HAR logs, and if my network event didn't occur during those 30 seconds, I was SOL. What the above code does is wait for the logs variable to be non-nil; if it is nil, it raises an EOFError, goes back into the loop, initializes the HAR again, and looks for the network call again. It keeps doing that until it finds the call or the 110 seconds are up. Here is the eventually method I'm using:
def eventually(options = {})
  timeout = options[:timeout] || 30
  interval = options[:interval] || 0.1
  time_limit = Time.now + timeout
  loop do
    begin
      yield
    rescue EOFError => error
    end
    return if error.nil?
    raise error if Time.now >= time_limit
    sleep interval
  end
end
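For reference, any polling check can be wrapped the same way; raise EOFError to keep polling, and simply fall through to stop (a hypothetical example, with a made-up file path):

eventually(timeout: 60, interval: 2) do
  # Keep polling until the HAR file shows up on disk
  raise EOFError unless File.exist?("har_logs/latest.har")
end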

Firefox Webdriver takes so long to open on multithread

I am trying to open multiple instances of the Firefox browser from my Ruby code.
I use Selenium 2.53.4 and Firefox 47.0.2.
The problem is that after the thread has been created, the driver does not initialize immediately; it takes a very long time until it opens. And the second driver opens only after the first one is almost finished, which makes multithreading useless.
Here is my code
require "selenium-webdriver"
th = Array.new
i = 0
limit = 3
while i < 10
if(Thread.list.count <= 3)
th[i] = Thread.new(i){ |index|
start = Time.new
puts "#{index} - Start Initiate at #{start}"
driver = Selenium::WebDriver.for :firefox
finish = Time.new
puts "#{index} - Finish Initiate at #{finish}"
driver.get "http://google.com"
sleep(10)
driver.quit
puts "#{index} - Finished"
}
i = i + 1
puts "Thread - #{i} Created"
end # end if
end # end while
th.each{|t|
if(!t.nil?)
t.join
end
}
Did I code it properly? Or is it a Firefox limitation? Or Selenium?
Note:
When I remove the threads it takes less time (about 6 s until it navigates to the intended URL).
It works great using the Chrome driver.

Retry testing sites after timeout error in Watir

I am going through a list of sites, visiting each one with Watir to look for something in the source code of each page. However, after about 20 or 30 sites, the browser times out when loading a certain page, my script breaks, and I get this error:
rbuf_fill: execution expired (Timeout::Error)
I am trying to implement a way to detect when it times out and then restart testing the sites from where it left off, but I'm having trouble.
This is my code:
ie = Watir::Browser.new :firefox, :profile => "default"
testsite_array = Array.new
y = 0
File.open('topsites.txt').each do |line|
  testsite_array[y] = line
  y = y + 1
end
total = testsite_array.length
count = 0
begin
  while count <= total
    site = testsite_array[count]
    ie.goto site
    if ie.html.include? 'teststring'
      puts site + ' yes'
    else
      puts site + ' no'
    end
    count = count + 1
  end
rescue
  retry
end
ie.close
Your loop can be:
# Use Ruby's method for iterating through the array
testsite_array.each do |site|
  attempt = 1
  begin
    ie.goto site
    if ie.html.include? 'teststring'
      puts site + ' yes'
    else
      puts site + ' no'
    end
  rescue
    attempt += 1
    # Retry accessing the site or stop trying
    if attempt > MAX_ATTEMPTS
      puts site + ' site failed, moving on'
    else
      retry
    end
  end
end
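Note that MAX_ATTEMPTS is not defined in the snippet above, and the error in the question is Timeout::Error, which a bare rescue may not catch on older Ruby versions. A sketch that makes both explicit (the constant's value is an assumption):

MAX_ATTEMPTS = 3 # assumed value; tune as needed

testsite_array.each do |site|
  attempt = 1
  begin
    ie.goto site
    puts site + (ie.html.include?('teststring') ? ' yes' : ' no')
  rescue Timeout::Error
    # Retry the site a few times, then move on instead of aborting the run
    attempt += 1
    retry if attempt <= MAX_ATTEMPTS
    puts site + ' site failed, moving on'
  end
end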

driver.navigate.refresh not working as expected with selenium-webdriver

Please follow the code below:
driver.get "https://example.com/"
driver.find_element(:class, "button").submit
driver.navigate.refresh
wait = Selenium::WebDriver::Wait.new(:timeout => 10) # seconds
element = wait.until { driver.find_element(:name => "username") }
I wrote the code with the intent that, until the page containing the element named username appears, the previous page keeps refreshing. But it seems my code does not meet that requirement, and the script throws the error below:
Error
C:/Ruby193/lib/ruby/gems/1.9.1/gems/selenium-webdriver-2.27.2/lib/selenium/webdriver/common/wait.rb:57:in `until': timed out after 10 seconds (Unable to locate element: {"method":"name","selector":"username"}) (Selenium::WebDriver::Error::TimeOutError)
Any good idea to meet my requirement, please?
Thanks,
I have not come across a built-in way to do this in selenium-webdriver, so I would do the following:
# Submit your first page
driver.get "https://example.com/"
driver.find_element(:class, "button").submit
# Refresh page until your element appears
end_time = Time.now + 10 # seconds
begin
  element = driver.find_element(:name => "username")
rescue Selenium::WebDriver::Error::NoSuchElementError
  if Time.now < end_time
    driver.navigate.refresh
    retry
  end
end
Basically this attempts to find the element. If it is not found, it catches the exception, refreshes the page, and tries again. This is repeated until the time limit has been reached.
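An equivalent variant keeps the Selenium::WebDriver::Wait helper from the question and refreshes inside the polling block (a sketch; returning false tells until to keep polling until the timeout):

wait = Selenium::WebDriver::Wait.new(:timeout => 10) # seconds
element = wait.until do
  begin
    driver.find_element(:name => "username")
  rescue Selenium::WebDriver::Error::NoSuchElementError
    driver.navigate.refresh
    false # element not there yet; refresh and poll again
  end
end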
