I have a series of scripts that I have developed using Ruby and the Watir gem. They are wrapped by Spinach, but that is beside the point of my question.
The intent of these scripts is to perform functional spot checks or simply alleviate some very repetitive tasks.
They have been running well for a while, but lately I've started to see a lot of failures due to timeouts between the Chromedriver / Geckodriver (I tried both browsers) and the scripts. Of course, I could simply restart the script, but when the success rate goes below 70% it really starts to be aggravating.
What I ended up doing is wrapping all of my calls to Watir in a proc with a begin/rescue that retries in case of a timeout.
This is ugly and violates so many rules that I am nearly ashamed to have had to resort to this solution, but at least my scripts are now completing.
Here is how I worked around the issue:
# Takes a block and retries it whenever a timeout is raised
def execute_block_and_retry_if_needed
  yield
rescue Net::ReadTimeout
  puts 'Read Timeout detected, retrying operation'
  retry
rescue Net::HTTPRequestTimeOut
  puts 'Http Request Timeout detected, retrying operation'
  retry
rescue Errno::ETIMEDOUT
  puts 'Errno::ETIMEDOUT detected, retrying operation'
  retry
end
A sample use would look like this:
execute_block_and_retry_if_needed { @browser.link(name: 'OK').wait_until_present.click } # click the 'OK' button
As you can see, this clearly violates the DRY principle, as I need to call this proc every single time.
My question is: how can I make this a module / feature of Watir so that it is picked up automatically? (Ideally I would also add a maximum number of retries to prevent an infinite loop.)
Version information:
- Chromedriver => 2.29.461585
- GeckoDriver => 0.16.1
- Firefox => ESR 52
- Chrome => 58
- Watir => 6.2.1
Regarding the DRY comment: I was referring to the fact that I have to wrap ALL of my Watir calls in the proc; sorry if this wasn't clear.
execute_block_and_retry_if_needed { @browser.link(name: 'User').wait_until_present.click } # click the 'User' link
execute_block_and_retry_if_needed { @browser.link(name: 'Cancel').wait_until_present.click } # click the 'Cancel' button
execute_block_and_retry_if_needed { @browser.link(name: 'OK').wait_until_present.click } # click the 'OK' button
The above is just an example of what has to happen every time I want to use the retry mechanism.
Given that you want to retry every command sent to the browser, you might want to consider addressing the issue in the underlying Selenium-WebDriver rather than Watir. Watir commands get sent to Selenium-WebDriver, which in turn sends them to the browser/driver.
Each command (or at least most) is currently sent through Selenium::WebDriver::Remote::Http::Default#request. You could patch that method to wrap it in a retry. Not only would your clicks retry on timeouts, but so would every other command - e.g., navigation, setting fields, getting values, etc.
# Patch to retry timeouts during requests
require 'watir'
module Selenium
  module WebDriver
    module Remote
      module Http
        module DefaultExt
          def request(*args)
            tries ||= 3 # maximum number of attempts
            super
          rescue Net::ReadTimeout, Net::HTTPRequestTimeOut, Errno::ETIMEDOUT => ex
            puts "#{ex.class} detected, retrying operation"
            (tries -= 1).zero? ? raise : retry # re-raise once the attempts are exhausted
          end
        end
      end
    end
  end
end
Selenium::WebDriver::Remote::Http::Default.prepend(Selenium::WebDriver::Remote::Http::DefaultExt)
# Then you can use Watir as usual
browser = Watir::Browser.new :chrome # this will retry timeouts
browser.goto('http://www.example.com') # this will also retry timeouts
browser.link.click # this will also retry timeouts
You shouldn't need to use a block for this. You can implement a method that does something like:
def ensure_click(element, retries = 3)
  @retries ||= retries
  element.click
rescue Net::ReadTimeout, Net::HTTPRequestTimeOut, Errno::ETIMEDOUT => ex
  raise unless @retries > 0
  @retries -= 1
  puts "#{ex.class} detected, retrying"
  retry
end
...
ensure_click(#browser.link(name: 'User'))
...
That being said, those exceptions are not typically driver errors, but network issues of some sort. They are not normal.
Related
I've created a program that pulls websites off of Google and then strips them down to their base URL, for example http://google.com/search/owie/weikw => http://google.com. It then saves these to a file.
After that, it runs .each_line on the file and executes a whois command for each line. What I want to do is skip to the next line of the file if the command doesn't respond within a certain amount of time. Is there a way I can do this?
Use the Timeout Module
If your scraper or whois doesn't support timeouts natively, you can use Timeout::timeout to set an upper bound in seconds. For example:
require 'timeout'
MAX_SECONDS = 10
begin
  Timeout::timeout(MAX_SECONDS) do
    # run your whois
  end
rescue Timeout::Error
  # handle the exception
end
By default, this will raise a Timeout::Error exception if the block exceeds the time limit, but you can have the method raise other exceptions if you prefer. How you handle the exceptions is then up to you.
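For instance, passing an exception class as the second argument to Timeout::timeout substitutes it for Timeout::Error (the WhoisTimeout class and the backticked whois call below are hypothetical placeholders):
require 'timeout'

class WhoisTimeout < StandardError; end # hypothetical custom error

MAX_SECONDS = 10

begin
  # Raises WhoisTimeout instead of Timeout::Error when the limit is hit
  Timeout::timeout(MAX_SECONDS, WhoisTimeout) do
    `whois example.com` # placeholder for your whois invocation
  end
rescue WhoisTimeout
  # skip this entry and move on to the next line of the file
end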
I am using a RubyGem (DeathByCaptcha) that makes HTTP calls to deathbycaptcha.com. Every so often the HTTP request times out or fails for some other unknown reason, and my Ruby script exits with an exception. I am trying to automate repeated calls to this method ("decode") and want to determine whether there is a way to prevent an error in this method from exiting the whole script.
EDIT: Since I'm bound to get flamed on here, I will mention upfront that the purpose of this is to determine the effectiveness of different captcha options on my website's registration page with common captcha-breakers, because I have had problems with spam signups.
Here is how to prevent the exception from exiting the script.
tries = 0
begin
  # risky failing code
rescue
  sleep(1) # sleep n seconds
  tries += 1
  retry if tries <= 3 # retry the risky code again
end
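Note that once tries exceeds 3, the rescue simply falls through and the exception is discarded, which is what keeps the script alive; add a raise after the retry line if you would rather surface the final failure.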
You would need to catch the exception that is raised and somehow handle it.
You are looking for something like
begin
  # Send HTTP request
rescue WhateverExceptionClassYouGet => error
  # Do something with the error
end
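If you are not sure which exception class the gem raises, a temporary catch-all that prints the class can identify it (a debugging sketch, not something to leave in production):
begin
  # Send HTTP request
rescue StandardError => error
  puts error.class # shows which class to rescue specifically
  raise
end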
I have a small script which scans all the IPs from 192.168.190.xxx to 192.168.220.xxx on port 411.
The script works fine sometimes, but at other times I get the error "No buffer space available":
dcport.rb:8:in `initialize': No buffer space available - connect(2) (Errno::ENOBUFS)
I have read that this occurs when sockets are not closed properly, but I have used mysocket.close to prevent that, which I suppose is not working properly.
How do I prevent this from happening? I mean, how do I close the socket properly?
My code is as follows:
require 'socket'
require 'timeout'
(190...216).each do |i|
  (0...255).each do |j|
    begin
      # puts "Scanning 192.168.#{i}.#{j}"
      scan = Timeout::timeout(10/1000.0) {
        s = TCPSocket.new("192.168.#{i}.#{j}", 411)
        s.close
        puts "192.168.#{i}.#{j} => Hub running"
      }
    rescue Timeout::Error
    rescue Errno::ENETUNREACH
    rescue Errno::ECONNREFUSED
    end
  end
end
My guess is that, sometimes, the timeout fires between the socket creation and the socket closing, which makes you leak sockets. Since (as far as a quick Google search told me) ENOBUFS happens by default after 1024 open sockets, that could definitely be it.
Timeout, as well as Thread.raise, is very harmful in situations where you need to be sure that something happens (in your case, s.close), as you actually cannot guarantee it anymore: the exception could be raised anywhere, even within an ensure block.
In your case, I think that you could fix it by adding an ensure clause outside the timeout block (untested code follows):
require 'socket'
require 'timeout'
(190...216).each do |i|
  (0...255).each do |j|
    begin
      # puts "Scanning 192.168.#{i}.#{j}"
      s = nil
      scan = Timeout::timeout(10/1000.0) do
        s = TCPSocket.new("192.168.#{i}.#{j}", 411)
        puts "192.168.#{i}.#{j} => Hub running"
      end
    rescue Timeout::Error
    rescue Errno::ENETUNREACH
    rescue Errno::ECONNREFUSED
    ensure
      s.close if s
    end
  end
end
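An alternative that avoids Timeout's asynchronous exceptions entirely, assuming Ruby 2.0+, is Socket.tcp with its connect_timeout option: it fails synchronously with Errno::ETIMEDOUT and closes the socket for you when given a block:
require 'socket'

begin
  # Raises Errno::ETIMEDOUT instead of interrupting the block mid-flight
  Socket.tcp("192.168.190.1", 411, connect_timeout: 10/1000.0) do |s|
    puts "192.168.190.1 => Hub running"
  end
rescue Errno::ETIMEDOUT, Errno::ENETUNREACH, Errno::ECONNREFUSED
end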
I'm implementing a Ruby server for handling sockets created by GPRS modules. The thing is that when a module powers down, there's no indication that the socket has closed.
I'm using threads to handle multiple sockets with the same server. What I'm asking is this: is there a way to use a timer inside a thread, reset it after every socket input, and close the thread if it hits the timeout? Where can I find more information about this?
EDIT: Code example that doesn't detect the socket closing
require 'socket'
server = TCPServer.open(41000)
loop do
  Thread.start(server.accept) do |client|
    puts "Client connected"
    begin
      loop do
        line = client.readline
        open('log.txt', 'a') { |f|
          f.puts line.strip
        }
      end
    rescue
      puts "Client disconnected"
    end
  end
end
I think you need a heartbeat mechanism.
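A minimal sketch of one server-side approximation, using IO.select as a per-read timer so that a silent client gets dropped (the 60-second limit is an assumed value):
require 'socket'
require 'timeout'

READ_TIMEOUT = 60 # seconds of silence before dropping a client (assumed value)

server = TCPServer.open(41000)
loop do
  Thread.start(server.accept) do |client|
    begin
      loop do
        # IO.select returns nil if no data arrives within the timeout
        ready = IO.select([client], nil, nil, READ_TIMEOUT)
        raise Timeout::Error, 'no input before timeout' unless ready
        line = client.readline
        open('log.txt', 'a') { |f| f.puts line.strip }
      end
    rescue EOFError, Timeout::Error, SystemCallError
      puts 'Client disconnected or timed out'
    ensure
      client.close
    end
  end
end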
At a guess, your sockets are inexplicably closing because you're not catching exceptions that are raised when they are closed by the remote end.
You need to wrap the connection handler in an exception-catching block.
Without knowing what module/model you're using I will just fudge it and say you have a process_connection routine. So you need to do something like this:
def process_connection(conn)
  begin
    # do stuff
  rescue Exception => e
    STDERR.print "Caught exception #{e}: #{e.message}\n#{e.backtrace}\n"
  ensure
    conn.close
  end
end
This will catch all exceptions and dump them to stderr with a stack trace. From there you can see what is causing them, and possibly handle them more gracefully elsewhere.
Just check the standard library Timeout API:
require 'timeout'
status = Timeout::timeout(3){sleep(1)} # finishes in time; returns the block's value
puts status.inspect
status = Timeout::timeout(1){sleep(2)} # exceeds the limit; raises Timeout::Error
I read this snippet, and I am trying to understand how I can use retry, but I am unable to think of a use for it. How are others using it?
#!/usr/bin/ruby
for i in 1..5
  retry if i > 2
  puts "Value of local variable is #{i}"
end
There are several use cases. Here's one from the Programming Ruby book:
@esmtp = true
begin
  # First try an extended login. If it fails because the
  # server doesn't support it, fall back to a normal login
  if @esmtp then
    @command.ehlo(helodom)
  else
    @command.helo(helodom)
  end
rescue ProtocolError
  if @esmtp then
    @esmtp = false
    retry
  else
    raise
  end
end
Another common case is email delivery. You might want to retry the SMTP delivery N times, adding a sleep between retries, to ride out temporary network connectivity issues.
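A minimal sketch of that pattern, assuming a hypothetical deliver_email method and message, and rescuing some common transient SMTP/network errors:
require 'net/smtp'

MAX_ATTEMPTS = 3
attempts = 0
begin
  deliver_email(message) # hypothetical delivery call
rescue Net::SMTPServerBusy, Net::ReadTimeout, Errno::ECONNRESET => e
  attempts += 1
  raise if attempts >= MAX_ATTEMPTS # give up after the last attempt
  sleep(attempts * 2) # back off a little longer on each retry
  retry
end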
I am using it in a module that makes an API call to a 3rd-party web API; if the call fails, I retry up to 2 more times.