"rescue Exception" not rescuing Timeout::Error in net_http - ruby

We appear to have a situation where rescue Exception is not catching a particular exception.
I'm trying to send an email alert about any exception that occurs, and then continue processing. We've put in the requisite handling of intentional exits; we want the loop to keep going, after alerting us, for anything else.
The exception that is not being caught is ostensibly Timeout::Error, according to the stack trace.
Here is the stack trace, with references to my intermediate code removed (the last line of my code is request.rb:93):
/opt/ruby-enterprise/lib/ruby/1.8/timeout.rb:64:in `rbuf_fill': execution expired (Timeout::Error)
from /opt/ruby-enterprise/lib/ruby/1.8/net/protocol.rb:134:in `rbuf_fill'
from /opt/ruby-enterprise/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
from /opt/ruby-enterprise/lib/ruby/1.8/net/protocol.rb:126:in `readline'
from /opt/ruby-enterprise/lib/ruby/1.8/net/http.rb:2028:in `read_status_line'
from /opt/ruby-enterprise/lib/ruby/1.8/net/http.rb:2017:in `read_new'
from /opt/ruby-enterprise/lib/ruby/1.8/net/http.rb:1051:in `__request__'
from /mnt/data/blueleaf/releases/20150211222522/vendor/bundle/ruby/1.8/gems/rest-client-1.6.7/lib/restclient/net_http_ext.rb:51:in `request'
from /opt/ruby-enterprise/lib/ruby/1.8/net/http.rb:1037:in `__request__'
from /opt/ruby-enterprise/lib/ruby/1.8/net/http.rb:543:in `start'
from /opt/ruby-enterprise/lib/ruby/1.8/net/http.rb:1035:in `__request__'
from /mnt/data/blueleaf/releases/20150211222522/vendor/bundle/ruby/1.8/gems/rest-client-1.6.7/lib/restclient/net_http_ext.rb:51:in `request'
from /mnt/data/blueleaf/releases/20150211222522/app/models/dst/request.rb:93:in `send'
[intermediate code removed]
from script/dst_daemon.rb:49
from script/dst_daemon.rb:46:in `each'
from script/dst_daemon.rb:46
from /opt/ruby-enterprise/lib/ruby/1.8/benchmark.rb:293:in `measure'
from script/dst_daemon.rb:45
from script/dst_daemon.rb:24:in `loop'
from script/dst_daemon.rb:24
from script/runner:3:in `eval'
from /mnt/data/blueleaf/releases/20150211222522/vendor/bundle/ruby/1.8/gems/rails-2.3.14/lib/commands/runner.rb:46
from script/runner:3:in `require'
Here is request.rb#send, with line 93 indicated with a comment:
def send
  build
  uri = URI.parse([DST::Request.configuration[:prefix], @path].join('/'))
  https = Net::HTTP.new(uri.host, uri.port)
  https.use_ssl = true
  https.verify_mode = OpenSSL::SSL::VERIFY_NONE
  https_request = Net::HTTP::Post.new(uri.request_uri.tap{|e| debug_puts "\nURL: #{e}, host:#{uri.host}"})
  # line 93:
  https_request.body = request
  response = https.request(https_request)
  # the rest should be irrelevant
Here is dst_daemon.rb; line 49 is indicated with a comment, and the rescue Exception that should catch anything other than deliberate interrupts is near the end:
DST::Request.environment = :production
class DST::Request::RequestFailed < Exception; end
Thread.abort_on_exception = true
SEMAPHORE = 'import/dst/start.txt' unless defined?(SEMAPHORE)
DEBUG_DST = 'import/dst/debug.txt' unless defined?(DEBUG_DST)
DEBUG_LOG = 'import/dst/debug.log' unless defined?(DEBUG_LOG)

def debug_dst(*args)
  File.open(DEBUG_LOG, 'a') do |f|
    f.print "#{Time.now.localtime}: "
    f.puts(*args)
  end if debug_dst?
end

def debug_dst?
  File.exist?(DEBUG_DST)
end

dst_ids = [Institution::BAA_DST_WS_CLIENT_ID, Institution::BAA_DST_WS_DEALER_ID]
institutions = Institution.find_all_by_baa_api_financial_institution_id(dst_ids)
DST::Collector.prime_key!

loop do
  begin
    if File.exist?(SEMAPHORE)
      debug_dst 'waking up...'
      custodians = InstitutionAccount.acts_as_baa_custodian.
        find_all_by_institution_id(institutions).select(&:direct?)
      good, bad = custodians.partition do |c|
        c.custodian_users.map{|e2| e2.custodian_passwords.count(:conditions => ['expired is not true']) == 1}.all?
      end
      if bad.present?
        msg = " skipping: \n"
        bad.each do |c|
          msg += " #{c.user.full_name_or_email}, custodian id #{c.id}: "
          c.custodian_users.each{|cu| msg += "#{cu.username}:#{cu.custodian_passwords.count(:conditions => ['expired is not true'])}; "}
          msg += "\n"
        end
        AdminSimpleMailer.deliver_generic_mail("DST Daemon skipping #{bad.size} connections", msg)
        debug_dst msg
      end
      Benchmark.measure do
        good.each do |custodian|
          begin
            debug_dst " collecting for: #{custodian.name}, #{custodian.subtitle}, (#{custodian.id.inspect})"
            # line 49:
            DST::Collector.new(custodian, 0).collect!
          rescue DST::Request::PasswordFailed, DST::Request::RequestFailed => e
            message = e.message + "\n\n" + e.backtrace.join("\n")
            AdminSimpleMailer.deliver_generic_mail("DST Daemon Connection Failed #{e.class.name}", message)
            debug_dst " skipping, #{e.class}"
          end
        end
      end.tap{|duration| debug_dst "collection done, duration #{duration.real.to_f/60} minutes. importing" }
      DST::Strategy.new(Date.yesterday, :recompute => true).import!
      debug_dst 'import done.'
      rm SEMAPHORE, :verbose => debug_dst?
    else
      debug_dst 'sleeping.' if Time.now.strftime("%M").to_i % 5 == 0
    end
  rescue SystemExit, Interrupt
    raise
  rescue Exception => e
    message = e.message + "\n\n" + e.backtrace.join("\n")
    AdminSimpleMailer.deliver_generic_mail("DST Daemon Exception #{e.class.name}", message)
  ensure
    sleep 60
  end
end
Shouldn't it be impossible for this loop to exit with a stack trace other than from SystemExit or Interrupt?

As you probably know already, calling raise inside a rescue block will raise the exception to the caller.
Since Timeout::Error is an Interrupt in Ruby 1.8*, the timeout exception raised by Net::HTTP gets handled by the rescue SystemExit, Interrupt clause (which re-raises it) rather than by the following rescue Exception => e.
To verify that Timeout::Error is an Interrupt, just evaluate Timeout::Error.ancestors; that gives you the chain of classes Timeout::Error inherits from.
*This is no longer the case in Ruby 1.9.
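For example, a minimal sketch (do_work and notify_admin are hypothetical stand-ins for the daemon body and the mailer call, not names from the question): rescuing Timeout::Error explicitly before the Interrupt clause keeps the loop alive on 1.8 while still letting deliberate exits through.
# Under Ruby 1.8:
#   Timeout::Error.ancestors
#   # => [Timeout::Error, Interrupt, ...]
# Under Ruby 1.9+ it descends from RuntimeError/StandardError instead.
require 'timeout'

loop do
  begin
    do_work                    # hypothetical: one pass of the daemon loop
  rescue Timeout::Error => e
    notify_admin(e)            # hypothetical mailer; rescued *before* Interrupt
  rescue SystemExit, Interrupt
    raise                      # deliberate exits still propagate
  rescue Exception => e
    notify_admin(e)
  ensure
    sleep 60
  end
end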

Related

Ruby curb (libcurl): testing for time-out in GET request

Using the curb gem (https://github.com/taf2/curb) to GET from a REST API.
resp = Curl.get("http://someurl.com/users.json") do |http|
  http.headers["API-Key"] = ENV["API_KEY"]
end
# do stuff with resp.body_str
I've started encountering occasional time-outs with the Curl.get.
I'd like to add logic where I try the GET and, if the request times out, try it again, i.e.:
loop do
  resp = Curl.get("http://someurl.com/users.json") do |http|
    http.headers["API-Key"] = ENV["API_KEY"]
  end
  # test result of Curl.get
  # if time-out, then try again
end
I haven't been able to figure out how to test for a time-out result.
What am I missing?
UPDATED: added exception details
Curl::Err::TimeoutError: Timeout was reached
/app/vendor/bundle/ruby/2.3.0/gems/curb-0.9.3/lib/curl/easy.rb:73:in `perform'
/app/vendor/bundle/ruby/2.3.0/gems/curb-0.9.3/lib/curl.rb:17:in `http'
/app/vendor/bundle/ruby/2.3.0/gems/curb-0.9.3/lib/curl.rb:17:in `http'
/app/vendor/bundle/ruby/2.3.0/gems/curb-0.9.3/lib/curl.rb:22:in `get'
/app/lib/tasks/redmine.rake:307:in `block (4 levels) in <top (required)>'
Here is the general idea of the rescue approach I mentioned in my comment:
loop do
  begin
    resp = Curl.get("http://someurl.com/users.json") do |http|
      http.headers["API-Key"] = ENV["API_KEY"]
    end
    # process successful response here
  rescue Curl::Err::TimeoutError
    # process error here
  end
end
You would then need to modify this to do the retries. Here is one implementation (not tested, though):
# Returns the response on success, nil on TimeoutError
def get1(url)
  begin
    Curl.get(url) do |http|
      http.headers["API-Key"] = ENV["API_KEY"]
    end
  rescue Curl::Err::TimeoutError
    nil
  end
end

# Returns the response on success, nil on TimeoutErrors after all retry_count attempts.
def get_with_retries(url, retry_count)
  retry_count.times do
    result = get1(url)
    return result if result
  end
  nil
end

response = get_with_retries("http://someurl.com/users.json", 3)
if response
  # handle success
else
  # handle timeout failure
end
We can also do it with a block:
def handle_timeouts
  begin
    yield
  rescue Curl::Err::TimeoutError
    retry
  end
end

handle_timeouts do
  resp = Curl.get("http://someurl.com/users.json") do |http|
    http.headers["API-Key"] = ENV["API_KEY"]
  end
end
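Note that the bare retry above keeps retrying forever if the endpoint never responds in time. A minimal sketch of a bounded variant (the attempts parameter and the re-raise on exhaustion are assumptions, not part of the original answer):
# Retry the block up to `attempts` times on Curl timeouts,
# then re-raise the last Curl::Err::TimeoutError.
def handle_timeouts(attempts = 3)
  tries = 0
  begin
    yield
  rescue Curl::Err::TimeoutError
    tries += 1
    retry if tries < attempts
    raise
  end
end

handle_timeouts(5) do
  Curl.get("http://someurl.com/users.json") do |http|
    http.headers["API-Key"] = ENV["API_KEY"]
  end
end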

Ruby gem 'tcp_timeout' failed to suppress raising error

I wrote a script using 'socket' that connects to a host and port. Because the socket timeout doesn't really work, I tried the 'tcp_timeout' gem, which works properly, but I can't seem to suppress the error raised when a connect/read/write timeout happens. Any idea where I'm going wrong?
begin
  socket = TCPTimeout::TCPSocket.new(server, port, connect_timeout: 6, read_timeout: 6)
  unless socket.read(12) =~ /^SMTH\n$/
    puts "[!] #{server} banner error"
    exit(1)
  end
rescue TCPTimeout::SocketTimeout => err
  puts "[!] #{server} Timeout"
  exit(1)
end
The error raised, as expected is a read timeout error:
/usr/local/rvm/gems/ruby-1.9.3-p551/gems/tcp_timeout-0.1.1/lib/tcp_timeout.rb:160:in `select_timeout': read timeout (TCPTimeout::SocketTimeout)
from /usr/local/rvm/gems/ruby-1.9.3-p551/gems/tcp_timeout-0.1.1/lib/tcp_timeout.rb:108:in `block in read'
from /usr/local/rvm/gems/ruby-1.9.3-p551/gems/tcp_timeout-0.1.1/lib/tcp_timeout.rb:107:in `loop'
from /usr/local/rvm/gems/ruby-1.9.3-p551/gems/tcp_timeout-0.1.1/lib/tcp_timeout.rb:107:in `read'
from ./myhost.rb:67:in `<main>'
I even tried:
rescue TCPTimeout::SocketTimeout, StandardError, Timeout::Error => err
Same thing happens.
Author of tcp_timeout here; your code looks correct. This snippet works as expected (for me):
require 'tcp_timeout'

begin
  socket = TCPTimeout::TCPSocket.new('stackoverflow.com', 80, read_timeout: 1)
  socket.read(100)
rescue TCPTimeout::SocketTimeout => e
  puts 'Rescued!', e
end
If you can find a snippet that fails reliably against a public server please file a bug: https://github.com/lann/tcp-timeout-ruby/issues
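If the rescue still appears to be bypassed, a quick way to see what is actually being raised (and from where) is to log the exception class and its ancestors. This is only a debugging sketch using the question's server and port variables, not something from the gem author's answer:
require 'tcp_timeout'

begin
  socket = TCPTimeout::TCPSocket.new(server, port, connect_timeout: 6, read_timeout: 6)
  socket.read(12)
rescue Exception => e        # cast a deliberately wide net while debugging only
  puts e.class               # what actually got raised?
  puts e.class.ancestors.inspect
  puts e.backtrace.first(5)  # is it even being raised inside this begin block?
  raise
end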

How to test whether an open-uri URL exists before processing any data

I'm trying to process content from a list of links using open-uri in Ruby (1.8.6), but I get an error when a link is broken or requires authentication:
open-uri.rb:277:in `open_http': 404 Not Found (OpenURI::HTTPError)
from C:/tools/Ruby/lib/ruby/1.8/open-uri.rb:616:in `buffer_open'
from C:/tools/Ruby/lib/ruby/1.8/open-uri.rb:164:in `open_loop'
from C:/tools/Ruby/lib/ruby/1.8/open-uri.rb:162:in `catch'
or
C:/tools/Ruby/lib/ruby/1.8/net/http.rb:560:in `initialize': getaddrinfo: no address associated with hostname. (SocketError)
from C:/tools/Ruby/lib/ruby/1.8/net/http.rb:560:in `open'
from C:/tools/Ruby/lib/ruby/1.8/net/http.rb:560:in `connect'
from C:/tools/Ruby/lib/ruby/1.8/timeout.rb:53:in `timeout'
or
C:/tools/Ruby/lib/ruby/1.8/net/protocol.rb:133:in `sysread': An existing connection was forcibly closed by the remote host. (Errno::ECONNRESET)
from C:/tools/Ruby/lib/ruby/1.8/net/protocol.rb:133:in `rbuf_fill'
from C:/tools/Ruby/lib/ruby/1.8/timeout.rb:62:in `timeout'
from C:/tools/Ruby/lib/ruby/1.8/timeout.rb:93:in `timeout'
Is there a way to test the response (URL) before processing any data?
The code is:
require 'open-uri'

smth.css.each do |item|
  open(item[:name], 'wb') do |file|
    file << open(item[:href]).read
  end
end
Many thanks
You could try something along the lines of
require 'open-uri'

smth.css.each do |item|
  begin
    open(item[:name], 'wb') do |file|
      file << open(item[:href]).read
    end
  rescue => e
    case e
    when OpenURI::HTTPError
      # do something
    when SocketError
      # do something else
    when Errno::ECONNRESET
      # do something else again
    else
      raise e
    end
  end
end
I don't know of any way of testing the connection without opening it and trying, so rescuing these errors is the only way I can think of. The thing to be aware of is the class hierarchy: OpenURI::HTTPError and SocketError are direct subclasses of StandardError, while Errno::ECONNRESET is a subclass of SystemCallError, which itself descends from StandardError. A bare rescue => e therefore catches all three, and the case statement lets you handle each one differently.
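You can confirm the hierarchy in irb (the exact tail of each ancestor list varies by Ruby version):
require 'open-uri'   # defines OpenURI::HTTPError and loads the socket errors

OpenURI::HTTPError.ancestors.include?(StandardError)   # => true
SocketError.ancestors.include?(StandardError)          # => true
Errno::ECONNRESET.ancestors
# => [Errno::ECONNRESET, SystemCallError, StandardError, Exception, ...]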
I was able to solve this problem by using a conditional if/else statement to check the return value of the action for "failure":
def controller_action
  url = "some_API"
  response = open(url).read
  parsed = JSON.parse(response)
  data = parsed["data"]
  if parsed["status"] == "failure"
    redirect_to :action => "home"
  else
    do_something_else
  end
end

stream closed (IOError) when closing Ruby TCPSocket client

I've got a Ruby TCPSocket client that works great except when I'm trying to close it. When I call the disconnect method in my code below, I get this error:
./smartlinc.rb:70:in `start_listen': stream closed (IOError)
from ./smartlinc.rb:132:in `initialize'
from ./smartlinc.rb:132:in `new'
from ./smartlinc.rb:132:in `start_listen'
from bot.rb:45:in `initialize'
from bot.rb:223:in `new'
from bot.rb:223
Here's the (simplified) code:
class Smartlinc
  def initialize
    @socket = TCPSocket.new(HOST, PORT)
  end

  def disconnect
    @socket.close
  end

  def start_listen
    # Listen on a background thread
    th = Thread.new do
      Thread.current.abort_on_exception = true

      # Listen for Ctrl-C and disconnect socket gracefully.
      Kernel.trap('INT') do
        self.disconnect
        exit
      end

      while true
        ready = IO.select([@socket])
        readable = ready[0]
        readable.each do |soc|
          if soc == @socket
            buf = @socket.recv_nonblock(1024)
            if buf.length == 0
              puts "The socket connection is dead. Exiting."
              exit
            else
              puts "Received Message"
            end
          end
        end # end each
      end # end while
    end # end thread
  end # end message callback
end
Is there a way I can prevent or catch this error? I'm no expert in socket programming (obviously!), so all help is appreciated.
Your thread is sitting in IO.select() while the trap code happily slams the door in its face with @socket.close, hence the complaining.
Either don't set abort_on_exception to true, or handle the exception properly in your code.
Something along these lines...
Kernel.trap('INT') do
  @interrupted = true
  disconnect
  exit
end

...

ready = nil
begin
  ready = IO.select(...)
rescue IOError
  if @interrupted
    puts "Interrupted, we're outta here..."
    exit
  end
  # Else it was a genuine IOError caused by something else, so propagate it up.
  raise
end

...
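Putting the pieces together, a minimal self-contained sketch of the pattern might look like this (HOST and PORT are placeholders, not the poster's actual device, and the message handling is stubbed out):
require 'socket'

HOST = 'localhost'  # placeholder
PORT = 9761         # placeholder

socket = TCPSocket.new(HOST, PORT)
interrupted = false

# Ctrl-C: remember the shutdown was deliberate, then close the socket.
Kernel.trap('INT') do
  interrupted = true
  socket.close
end

loop do
  begin
    ready = IO.select([socket])
  rescue IOError
    # IO.select raises IOError once the trap handler has closed the socket.
    if interrupted
      puts "Interrupted, shutting down."
      break
    end
    raise
  end

  buf = socket.recv_nonblock(1024)
  if buf.empty?
    puts "The socket connection is dead. Exiting."
    break
  end
  puts "Received: #{buf.inspect}"
end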

Limit to how many errors can be rescued?

I have a program that I'm using as a pentesting tool. I'm in the process of discovering whether websites are vulnerable to SQL injection and came across a Timeout::Error. I have tried to rescue the error, but there are also a few other errors that need to be rescued as well. So my question is: is there a limit to how many errors can be rescued within a rescue block? And if not, why is this Timeout::Error not getting rescued?
Source:
def get_urls
  info("Searching for possible SQL vulnerable sites.")
  @agent = Mechanize.new
  page = @agent.get('http://www.google.com/')
  google_form = page.form('f')
  google_form.q = "#{SEARCH}"
  url = @agent.submit(google_form, google_form.buttons.first)
  url.links.each do |link|
    if link.href.to_s =~ /url.q/
      str = link.href.to_s
      str_list = str.split(%r{=|&})
      urls = str_list[1]
      next if str_list[1].split('/')[2] == "webcache.googleusercontent.com"
      urls_to_log = urls.gsub("%3F", '?').gsub("%3D", '=')
      success("Site found: #{urls_to_log}")
      File.open("#{PATH}/temp/SQL_sites_to_check.txt", "a+") {|s| s.puts("#{urls_to_log}'")}
    end
  end
  info("Possible vulnerable sites dumped into #{PATH}/temp/SQL_sites.txt")
end

def check_if_vulnerable
  info("Checking if sites are vulnerable.")
  IO.read("#{PATH}/temp/SQL_sites_to_check.txt").each_line do |parse|
    Timeout::timeout(5) do
      begin
        @parsing = Nokogiri::HTML(RestClient.get("#{parse.chomp}"))
      rescue Timeout::Error, RestClient::ResourceNotFound, RestClient::SSLCertificateNotVerified
        if RestClient::ResourceNotFound
          warn("URL: #{parse.chomp} returned 404 error, URL dumped into 404 bin")
          File.open("#{PATH}/lib/404_bin.txt", "a+"){|s| s.puts(parse)}
        elsif RestClient::SSLCertificateNotVerified
          err("URL: #{parse.chomp} requires SSL cert, url dumped into SSL bin")
          File.open("#{PATH}/lib/SSL_bin.txt", "a+"){|s| s.puts(parse)}
        elsif Timeout::Error
          warn("URL: #{parse.chomp} failed to load resulting in time out after 10 seconds. URL dumped into TIMEOUT bin")
          File.open("#{PATH}/lib/TIMEOUT_bin.txt", "a+"){|s| s.puts(parse)}
        end
      end
    end
  end
end
Error:
C:/Ruby22/lib/ruby/2.2.0/net/http.rb:892:in `new': execution expired (Timeout::Error)
from C:/Ruby22/lib/ruby/2.2.0/net/http.rb:892:in `connect'
from C:/Ruby22/lib/ruby/2.2.0/net/http.rb:863:in `do_start'
from C:/Ruby22/lib/ruby/2.2.0/net/http.rb:852:in `start'
from C:/Ruby22/lib/ruby/gems/2.2.0/gems/rest-client-1.8.0-x86-mingw32/lib/restclient/request.rb:413:in `transmit'
from C:/Ruby22/lib/ruby/gems/2.2.0/gems/rest-client-1.8.0-x86-mingw32/lib/restclient/request.rb:176:in `execute'
from C:/Ruby22/lib/ruby/gems/2.2.0/gems/rest-client-1.8.0-x86-mingw32/lib/restclient/request.rb:41:in `execute'
from C:/Ruby22/lib/ruby/gems/2.2.0/gems/rest-client-1.8.0-x86-mingw32/lib/restclient.rb:65:in `get'
from whitewidow.rb:94:in `block (2 levels) in check_if_vulnerable'
from C:/Ruby22/lib/ruby/2.2.0/timeout.rb:88:in `block in timeout'
from C:/Ruby22/lib/ruby/2.2.0/timeout.rb:32:in `block in catch'
from C:/Ruby22/lib/ruby/2.2.0/timeout.rb:32:in `catch'
from C:/Ruby22/lib/ruby/2.2.0/timeout.rb:32:in `catch'
from C:/Ruby22/lib/ruby/2.2.0/timeout.rb:103:in `timeout'
from whitewidow.rb:92:in `block in check_if_vulnerable'
from whitewidow.rb:91:in `each_line'
from whitewidow.rb:91:in `check_if_vulnerable'
from whitewidow.rb:113:in `<main>'
As you can see, in the check_if_vulnerable method I have Timeout::Error rescued. So what is causing this to time out without moving on to the next URL? I've tried adding a next to the rescue, but it still doesn't work. Help, please?
By simply moving the Timeout::timeout call inside the begin block, so that the rescue wraps the timeout call instead of sitting inside it, I can rescue the error:
def check_if_vulnerable
  info("Checking if sites are vulnerable.")
  IO.read("#{PATH}/temp/SQL_sites_to_check.txt").each_line do |parse|
    begin
      Timeout::timeout(5) do
        @parsing = Nokogiri::HTML(RestClient.get("#{parse.chomp}"))
      end
    rescue Timeout::Error, RestClient::ResourceNotFound, RestClient::SSLCertificateNotVerified => e
      if e.is_a?(RestClient::ResourceNotFound)
        warn("URL: #{parse.chomp} returned 404 error, URL dumped into 404 bin")
        File.open("#{PATH}/lib/404_bin.txt", "a+"){|s| s.puts(parse)}
      elsif e.is_a?(RestClient::SSLCertificateNotVerified)
        err("URL: #{parse.chomp} requires SSL cert, url dumped into SSL bin")
        File.open("#{PATH}/lib/SSL_bin.txt", "a+"){|s| s.puts(parse)}
      elsif e.is_a?(Timeout::Error)
        warn("URL: #{parse.chomp} failed to load resulting in time out after 10 seconds. URL dumped into TIMEOUT bin")
        File.open("#{PATH}/lib/TIMEOUT_bin.txt", "a+"){|s| s.puts(parse)}
      end
    end
  end
end
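Why moving it helps: on MRI 2.1 and later, Timeout deliberately unwinds the timed block in a way that a rescue Timeout::Error inside the block will not, in general, intercept (via a private exception class or throw/catch, depending on the version); the Timeout::Error you actually see is raised at the boundary of the Timeout.timeout call. A minimal sketch of the two shapes (sleep stands in for the slow RestClient.get):
require 'timeout'

# On MRI 2.1+ this inner rescue is skipped: when the 1-second timer fires,
# the block is unwound past it and Timeout::Error is raised at the
# Timeout.timeout call site instead.
begin
  Timeout.timeout(1) do
    begin
      sleep 5            # stand-in for the slow RestClient.get call
    rescue Timeout::Error
      puts "inner rescue (not reached on 2.1+)"
    end
  end
rescue Timeout::Error
  puts "outer rescue: this is where it lands"
end

# So the fix is exactly what the answer shows: wrap the Timeout.timeout
# call itself in begin/rescue.
begin
  Timeout.timeout(1) { sleep 5 }
rescue Timeout::Error => e
  puts "rescued: #{e.message}"   # => "rescued: execution expired"
end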
