A use case for retry function? - ruby

I read this snippet and am trying to understand how I can use retry, but I am unable to think of a use. How are others using it?
#!/usr/bin/ruby
for i in 1..5
  retry if i > 2
  puts "Value of local variable is #{i}"
end

There are several use cases. Here's one from the Programming Ruby book:
@esmtp = true
begin
  # First try an extended login. If it fails because the
  # server doesn't support it, fall back to a normal login
  if @esmtp then
    @command.ehlo(helodom)
  else
    @command.helo(helodom)
  end
rescue ProtocolError
  if @esmtp then
    @esmtp = false
    retry
  else
    raise
  end
end
Another common case is email delivery. You might want to retry the SMTP delivery N times, adding a sleep between each retry to ride out temporary network connectivity issues.
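A minimal sketch of that pattern; `with_retries`, `max_tries`, and `delay` are illustrative names, and the block you pass would contain the actual SMTP delivery call:

```ruby
# Bounded retry with a sleep between attempts. The yielded block stands
# in for the real SMTP delivery call (e.g. Net::SMTP.start + send_message).
def with_retries(max_tries: 3, delay: 1)
  tries = 0
  begin
    tries += 1
    yield tries                 # attempt the delivery
  rescue StandardError
    if tries < max_tries
      sleep(delay)              # wait out transient network issues
      retry                     # re-run the begin block from the top
    else
      raise                     # out of attempts: re-raise to the caller
    end
  end
end
```

If the delivery fails twice and succeeds on the third attempt, `with_retries` returns the block's value; once `max_tries` is exhausted it re-raises the last error to the caller.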

I am using it in a module that makes an API call to a 3rd-party web API; if the call fails, I retry twice more.

Related

Is a retry necessary here in a timeout?

I have the following code:
task :wait_for_vault_ssl_up do
  Timeout::timeout(15) do
    begin
      TCPSocket.new('localhost', 8200).close
      puts "Vault ssl up and ready!"
      true
    rescue Errno::ECONNREFUSED, Errno::EHOSTUNREACH, SocketError
      puts "Vault ssl not ready... retrying"
      retry
      false
    end
  end
rescue Timeout::Error
  false
end
In this, I set a timeout of 15 seconds and try to connect to localhost:8200. My question is: if it's not up, I currently do a retry in the rescue. Is that necessary, or will it automatically keep trying to connect for 15 seconds?
The use of retry will retry the execution of the whole block being rescued. So in this case, retry will start execution over again at your call to TCPSocket.new. This will happen immediately after the exception is rescued and there will not be a delay.
As Stefan has mentioned in the comments, you should use Kernel#sleep before calling retry. This will continue retrying until interrupted, so if you found yourself in a situation where the SSL connection will never be ready, the code will loop forever. You can either keep track of a retry count, or continue utilizing Timeout::timeout, using a longer timeout length, at which point the loop should give up entirely.
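To see that restart behavior concretely, here is a tiny self-contained demo; the raising block stands in for the TCPSocket.new call, and the attempt counts and sleep duration are illustrative:

```ruby
# Demonstrates that `retry` re-runs the whole begin body from the top,
# not just the failing line. The raise stands in for a refused connection.
attempts = []

result = begin
  attempts << :connect                       # re-executed on every retry
  raise "refused" if attempts.size < 3       # fail the first two attempts
  :connected
rescue RuntimeError
  sleep(0.01)                                # back off before retrying
  retry
end

puts "connected after #{attempts.size} attempts"
```

Note that without a counter or a guard condition, a bare `retry` like this loops forever if the block never succeeds.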
I think the clearer and simpler solution would be to use a retry counter, and give up after a predefined number of retries.
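A sketch of that counter-based approach, assuming the same readiness check as above (`wait_for_port`, `max_tries`, and `delay` are hypothetical names):

```ruby
require 'socket'

# Give up after max_tries attempts instead of relying on an outer
# Timeout to interrupt the loop. Returns true once the port accepts
# a connection, false after the retries are exhausted.
def wait_for_port(host, port, max_tries: 10, delay: 1)
  tries = 0
  begin
    tries += 1
    TCPSocket.new(host, port).close
    true
  rescue Errno::ECONNREFUSED, Errno::EHOSTUNREACH, SocketError
    if tries < max_tries
      sleep(delay)    # pause between attempts
      retry
    else
      false           # give up instead of retrying forever
    end
  end
end
```

This bounds the total wait at roughly `max_tries * delay` seconds without needing Timeout at all.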

Adding Retry mechanism to Watir in case of Timeout

I have a series of scripts that I have developed using Ruby and the Watir gem. Those are wrapped by Spinach, but that is beside what I am about to ask.
The intent of those scripts is to do some functional spot check or simply alleviate some very repetitive tasks.
They have been running well for a while, but lately I've started to see a lot of failures due to timeouts between Chromedriver / Geckodriver (I tried both browsers) and the scripts. Of course, I could simply restart the script, but when the success rate drops below 70% it really starts to be aggravating.
What I ended up doing is wrapping all of my calls to Watir in a block passed to a method with a begin/rescue that does a retry in case of a timeout.
This is ugly and violates so many rules that I am almost ashamed to have had to resort to this solution, but at least my scripts now complete.
Here is how I worked around the issue:
# takes a block and wraps it in a series of rescues
def execute_block_and_rety_if_needed
  yield
rescue Net::ReadTimeout
  puts 'Read Timeout detected, retrying operation'
  retry
rescue Net::HTTPRequestTimeOut
  puts 'Http Request Timeout detected, retrying operation'
  retry
rescue Errno::ETIMEDOUT
  puts 'Errno::ETIMEDOUT detected, retrying operation'
  retry
end
A sample use would look like this:
execute_block_and_rety_if_needed { @browser.link(name: 'OK').wait_until_present.click } # click the 'OK' button
As you can see, this clearly violates the DRY principle as I need to call this proc every single time.
My question is: how can I move this into a module / feature of Watir so that it is picked up automatically? (Ideally I would add a maximum number of retries to prevent an infinite loop.)
Version information:
- Chromedriver => 2.29.461585
- GeckoDriver => 0.16.1
- Firefox => ESR 52
- Chrome => 58
- Watir => 6.2.1
Regarding the DRY comment: I meant that I have to wrap ALL of my Watir calls in the proc; sorry if that wasn't clear.
execute_block_and_rety_if_needed { @browser.link(name: 'User').wait_until_present.click } # click the 'Edit' button
execute_block_and_rety_if_needed { @browser.link(name: 'Cancel').wait_until_present.click } # click the 'Cancel' button
execute_block_and_rety_if_needed { @browser.link(name: 'OK').wait_until_present.click } # click the 'OK' button
The above is just an example that has to happen if I want to use the retry mechanism.
Given that you want to retry every command sent to the browser, you might want to consider addressing the issue in the underlying Selenium-WebDriver rather than Watir. Watir commands get sent to Selenium-WebDriver, which in turn sends them to the browser/driver.
Each command (or at least most) is currently sent through Selenium::WebDriver::Remote::Http::Default#request. You could patch that method to wrap it in a retry. Not only would your clicks retry on timeouts, but so would every other command - e.g. navigation, setting fields, getting values, etc.
# Patch to retry timeouts during requests
require 'watir'

module Selenium
  module WebDriver
    module Remote
      module Http
        module DefaultExt
          def request(*args)
            tries ||= 3
            super
          rescue Net::ReadTimeout, Net::HTTPRequestTimeOut, Errno::ETIMEDOUT => ex
            puts "#{ex.class} detected, retrying operation"
            (tries -= 1).zero? ? raise : retry
          end
        end
      end
    end
  end
end

Selenium::WebDriver::Remote::Http::Default.prepend(Selenium::WebDriver::Remote::Http::DefaultExt)
# Then you can use Watir as usual
browser = Watir::Browser.new :chrome # this will retry timeouts
browser.goto('http://www.example.com') # this will also retry timeouts
browser.link.click # this will also retry timeouts
You shouldn't need to use a block for this. You can implement a method that does something like:
def ensure_click(element, retries = 3)
  @retries ||= retries
  element.click
rescue Net::ReadTimeout, Net::HTTPRequestTimeOut, Errno::ETIMEDOUT => ex
  raise unless @retries > 0
  @retries = @retries - 1
  puts "#{ex.class} detected, retrying"
  retry
end
...
ensure_click(@browser.link(name: 'User'))
...
That being said, those exceptions are not typically driver errors but network issues of some sort. They are not normal.

Prevent exceptions in Rubygem that cannot communicate with its associated web service?

I am using a RubyGem (DeathByCaptcha) that makes HTTP calls to deathbycaptcha.com. Every so often the HTTP request times out or fails for some other unknown reason, and my Ruby script exits with an exception. I am trying to automate repeated calls to one of its methods ("decode"), and I want to determine whether there is a way to prevent an error in this method from exiting the whole script.
EDIT: Since I'm bound to get flamed on here, I will mention upfront that the purpose of this is to determine the effectiveness of different captcha options on my website's registration page with common captcha-breakers, because I have had problems with spam signups.
Here is how to prevent the exception from exiting the script.
tries = 0
begin
  # risky failing code
rescue
  sleep(1) # sleep n seconds
  tries += 1
  retry if tries <= 3 # retry the risky code again
end
You would need to catch the exception that is raised and somehow handle it.
You are looking for something like
begin
  # Send HTTP request
rescue WhateverExceptionClassYouGet => error
  # Do something with the error
end

Ruby Thread with "watchdog"

I'm implementing a ruby server for handling sockets being created from GPRS modules. The thing is that when the module powers down, there's no indication that the socket closed.
I'm using threads to handle multiple sockets with the same server. What I'm asking is this: is there a way to use a timer inside a thread, reset it after every socket input, and close the thread if it hits the timeout? Where can I find more information about this?
EDIT: Code example that doesn't detect the socket closing
require 'socket'

server = TCPServer.open(41000)
loop do
  Thread.start(server.accept) do |client|
    puts "Client connected"
    begin
      loop do
        line = client.readline
        open('log.txt', 'a') { |f| f.puts line.strip }
      end
    rescue
      puts "Client disconnected"
    end
  end
end
I think you need a heartbeat mechanism.
At a guess, your sockets are inexplicably closing because you're not catching the exceptions that are raised when they are closed by the remote end. You need to wrap the connection handler in an exception-catching block.
Without knowing what module/model you're using I will just fudge it and say you have a process_connection routine. So you need to do something like this:
def process_connection(conn)
  begin
    # do stuff
  rescue Exception => e
    STDERR.print "Caught exception #{e}: #{e.message}\n#{e.backtrace}\n"
  ensure
    conn.close
  end
end
This will catch all exceptions and dump them to stderr with a stack trace. From there you can see what is causing them, and possibly handle them more gracefully elsewhere.
Just check the standard library's Timeout module:
require 'timeout'

status = Timeout::timeout(3) { sleep(1) }
puts status.inspect
status = Timeout::timeout(1) { sleep(2) } # raises Timeout::Error after 1 second
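Putting Timeout together with the socket loop from the question gives a per-connection watchdog sketch; `handle_client`, `IDLE_LIMIT`, and the return symbols are illustrative names, and the idle limit would be tuned to how often the GPRS modules report in:

```ruby
require 'socket'
require 'timeout'

IDLE_LIMIT = 60 # seconds of silence before we assume the module is gone

# Wrap each read in Timeout.timeout so a client that powers down
# without closing the socket is detected after idle_limit seconds.
def handle_client(client, idle_limit: IDLE_LIMIT)
  loop do
    line = Timeout.timeout(idle_limit) { client.readline }
    File.open('log.txt', 'a') { |f| f.puts line.strip }
  end
rescue Timeout::Error
  puts 'Client idle too long, closing socket'
  :idle
rescue EOFError
  puts 'Client disconnected'
  :disconnected
ensure
  client.close
end
```

Each `Thread.start(server.accept)` block would then just call `handle_client(client)`; the `ensure` guarantees the socket is closed whichever way the loop ends.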

'Who's online?' Ruby Network Program

I have several embedded Linux systems for which I want to write a 'Who's Online?' network service in Ruby. Below is the relevant part of my code:
mySocket = UDPSocket.new
mySocket.bind("<broadcast>", 50050)

loop do
  begin
    text, sender = mySocket.recvfrom(1024)
    puts text
    if text =~ /KNOCK KNOCK/ then
      begin
        sock = UDPSocket.open
        sock.send(r.ipaddress, 0, sender[3], 50051)
        sock.close
      rescue
        retry
      end
    end
  rescue Exception => inLoopEx
    puts inLoopEx.message
    puts inLoopEx.backtrace.inspect
    retry
  end
end
I send the 'KNOCK KNOCK' command from a PC. Now, the problem is that since they all receive the message at the same time, they all try to respond at the same time too, which causes a Broken pipe exception (which is the reason for my rescue/retry code). This code works OK sometimes, but other times the rescue/retry part (triggered by the Broken pipe exception from sock.send) causes one or more systems to respond after 5 seconds or so.
Is there a better way of doing this, since I assume I can't escape the Broken pipe exception?
I have found that the exception was caused by the 'r.ipaddress' part of the send command, which is related to my embedded system's internals...