ruby : open `join': No live threads left. Deadlock? (fatal) [duplicate]

What I have is a deadlock, yet I am not using any threads in my program. Worse, the error only happens about once every 1000 to 1500 function calls, which makes it very difficult to pinpoint and correct.
Here is the complete error message when the issue occurs:
/usr/lib/ruby/2.3.0/timeout.rb:95:in `join': No live threads left. Deadlock? (fatal)
from /usr/lib/ruby/2.3.0/timeout.rb:95:in `ensure in block in timeout'
from /usr/lib/ruby/2.3.0/timeout.rb:95:in `block in timeout'
from /usr/lib/ruby/2.3.0/timeout.rb:101:in `timeout'
from /usr/lib/ruby/2.3.0/net/http.rb:878:in `connect'
from /usr/lib/ruby/2.3.0/net/http.rb:863:in `do_start'
from /usr/lib/ruby/2.3.0/net/http.rb:852:in `start'
from /usr/lib/ruby/2.3.0/open-uri.rb:319:in `open_http'
from /usr/lib/ruby/2.3.0/open-uri.rb:737:in `buffer_open'
from /usr/lib/ruby/2.3.0/open-uri.rb:212:in `block in open_loop'
from /usr/lib/ruby/2.3.0/open-uri.rb:210:in `catch'
from /usr/lib/ruby/2.3.0/open-uri.rb:210:in `open_loop'
from /usr/lib/ruby/2.3.0/open-uri.rb:151:in `open_uri'
from /usr/lib/ruby/2.3.0/open-uri.rb:717:in `open'
from /usr/lib/ruby/2.3.0/open-uri.rb:35:in `open'
from /home/mat/travail_perso/RUBY/MangaScrapp_github/sources/utils.rb:85:in `get_pic'
from /home/mat/travail_perso/RUBY/MangaScrapp_github/mangafox/MF_download.rb:87:in `page_link'
from /home/mat/travail_perso/RUBY/MangaScrapp_github/mangafox/MF_download.rb:116:in `chapter_link'
from /home/mat/travail_perso/RUBY/MangaScrapp_github/mangafox/MF_download.rb:142:in `chapter'
from /home/mat/travail_perso/RUBY/MangaScrapp_github/mangafox/MF_update.rb:57:in `block in MF_manga_missing_chapters'
from /home/mat/travail_perso/RUBY/MangaScrapp_github/mangafox/MF_update.rb:45:in `reverse_each'
from /home/mat/travail_perso/RUBY/MangaScrapp_github/mangafox/MF_update.rb:45:in `MF_manga_missing_chapters'
from /home/mat/travail_perso/RUBY/MangaScrapp_github/mangafox/MF_update.rb:80:in `MF_update'
from /home/mat/travail_perso/RUBY/MangaScrapp_github/sources/update.rb:5:in `update_manga'
from /home/mat/travail_perso/RUBY/MangaScrapp_github/sources/update.rb:15:in `block in update_all'
from /home/mat/travail_perso/RUBY/MangaScrapp_github/sources/update.rb:14:in `each'
from /home/mat/travail_perso/RUBY/MangaScrapp_github/sources/update.rb:14:in `update_all'
from /home/mat/travail_perso/RUBY/MangaScrapp_github/sources/update.rb:22:in `update'
from ./MangaScrap.rb:28:in `<main>'
The link to the complete program is https://github.com/Hellfire01/MangaScrap
The issue happens in the three different methods that use open. Here is the one that crashed this time:
# connect to link and download picture
def get_pic(link)
  safe_link = link.gsub(/[\[\]]/) { '%%%s' % $&.ord.to_s(16) }
  tries ||= 20
  begin
    page = open(safe_link, "User-Agent" => "Ruby/#{RUBY_VERSION}")
  rescue URI::InvalidURIError => error
    puts "Warning : bad url"
    puts link
    puts "message is : " + error.message
    return nil
  rescue => error
    if tries > 0
      tries -= 1
      sleep(0.2)
      retry
    else
      puts 'could not get picture ' + safe_link + ' after ' + $nb_tries.to_s + ' tries'
      puts "message is : " + error.message
      return nil
    end
  end
  sleep(0.2)
  return page
end
Here is the link to the file: https://github.com/Hellfire01/MangaScrap/blob/master/sources/utils.rb
I would like to know:
How can I fix this error?
If I cannot fix this error, are there alternatives to OpenURI that I can use?

You're not catching all exceptions here. When nothing is specified after rescue, you're only catching StandardError, which is not at the root of the exception hierarchy.
If you want to make sure you catch everything and retry opening the URL (or whatever behavior you'd like), what you want to do is:
rescue Exception => error
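Applied to the question's get_pic, that means widening only the second rescue clause. Here is a minimal sketch of the retry branch, with the same logic as the original and only the rescued class changed:

begin
  page = open(safe_link, "User-Agent" => "Ruby/#{RUBY_VERSION}")
rescue URI::InvalidURIError => error
  puts "Warning : bad url"
  puts link
  puts "message is : " + error.message
  return nil
rescue Exception => error          # was: rescue => error (StandardError only)
  if tries > 0
    tries -= 1
    sleep(0.2)
    retry
  else
    puts 'could not get picture ' + safe_link + ' after ' + $nb_tries.to_s + ' tries'
    puts "message is : " + error.message
    return nil
  end
end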

Related

How can I catch this Capybara::Poltergeist::ObsoleteNode error?

I am trying to automate some interaction with a web app, but Capybara keeps throwing this error:
{"id":"c0616941-0375-4c42-a6d8-a3a5201c5235","name":"tag_name","args":[3,1]}
{"command_id":"c0616941-0375-4c42-a6d8-a3a5201c5235","error":{"name":"Poltergeist.ObsoleteNode","args":[]}}
Capybara::Poltergeist::ObsoleteNode: Capybara::Poltergeist::ObsoleteNode
from C:/Ruby23-x64/lib/ruby/gems/2.3.0/gems/poltergeist-1.14.0/lib/capybara/poltergeist/node.rb:21:in `rescue in command'
from C:/Ruby23-x64/lib/ruby/gems/2.3.0/gems/poltergeist-1.14.0/lib/capybara/poltergeist/node.rb:17:in `command'
from C:/Ruby23-x64/lib/ruby/gems/2.3.0/gems/poltergeist-1.14.0/lib/capybara/poltergeist/node.rb:111:in `tag_name'
from C:/Ruby23-x64/lib/ruby/gems/2.3.0/gems/capybara-2.13.0/lib/capybara/node/element.rb:258:in `block in tag_name'
from C:/Ruby23-x64/lib/ruby/gems/2.3.0/gems/capybara-2.13.0/lib/capybara/node/base.rb:85:in `synchronize'
from C:/Ruby23-x64/lib/ruby/gems/2.3.0/gems/capybara-2.13.0/lib/capybara/node/element.rb:258:in `tag_name'
from C:/Ruby23-x64/lib/ruby/gems/2.3.0/gems/capybara-2.13.0/lib/capybara/node/element.rb:374:in `inspect'
from C:/Ruby23-x64/bin/irb.cmd:19:in `<main>'
I'd like to catch this exception and throw it away, but when I try
begin
  @session.first(:button, 'Save').click
rescue Capybara::Poltergeist::ObsoleteNode
  puts "Whoops!"
end
It still throws the exception and doesn't print out "Whoops!".
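One quick way to see what is really being raised here (a sketch, assuming @session holds the Capybara session used in the script) is to add a catch-all branch that logs the exception class:

begin
  @session.first(:button, 'Save').click
rescue Capybara::Poltergeist::ObsoleteNode => e
  puts "Whoops! #{e.class}: #{e.message}"
rescue => e
  # if this branch fires, the raised class is not (a subclass of) ObsoleteNode
  puts "caught #{e.class} instead: #{e.message}"
end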

How to fix a deadlock caused by open


Ruby and Https: A socket operation was attempted to an unreachable network

I'm trying to download all of my class notes from Coursera. I figured that since I'm learning Ruby, this would be a good practice exercise: downloading all the PDFs they have for future use. Unfortunately, I'm getting an exception saying Ruby can't connect for some reason. Here is my code:
require 'net/http'

module Coursera
  class Downloader
    attr_accessor :page_url
    attr_accessor :destination_directory
    attr_accessor :cookie

    def initialize(page_url, dest, cookie)
      @page_url = page_url
      @destination_directory = dest
      @cookie = cookie
    end

    def download
      puts @page_url
      request = Net::HTTP::Get.new(@page_url)
      puts @cookie.encoding
      request['Cookie'] = @cookie
      # the line below is where the exception is thrown
      res = Net::HTTP.start(@page_url.hostname, use_ssl=true, @page_url.port) {|http|
        http.request(request)
      }
      html_page = res.body
      pattern = /http[^\"]+\.pdf/
      i = 0
      while (match = pattern.match(html_page, i)) != nil do
        # 0 is the entire string.
        url_string = match[0]
        # make sure that 'i' is updated
        i = match.begin(0) + 1
        # we want just the name of the file.
        j = url_string.rindex("/")
        filename = url_string[j+1..url_string.length]
        destination = @destination_directory + "\\" + filename
        # I want to download that resource to that file.
        uri = URI(url_string)
        res = Net::HTTP.get_response(uri)
        # write that body to the file
        f = File.new(destination, mode="w")
        f.print(res.body)
      end
    end
  end
end

page_url_string = 'https://class.coursera.org/datasci-002/lecture'
puts page_url_string.encoding
dest = 'C:\\Users\\michael\\training material\\data_science'
page_url = URI(page_url_string)
# I copied this from my browsers developer tools, I'm omitting it since
# it's long and has my session key in it
cookie = "..."
downloader = Coursera::Downloader.new(page_url, dest, cookie)
downloader.download
At runtime the following is written to console:
Fast Debugger (ruby-debug-ide 0.4.22, debase 0.0.9) listens on 127.0.0.1:65485
UTF-8
https://class.coursera.org/datasci-002/lecture
UTF-8
Uncaught exception: A socket operation was attempted to an unreachable network. - connect(2)
C:/Ruby200-x64/lib/ruby/2.0.0/net/http.rb:878:in `initialize'
C:/Ruby200-x64/lib/ruby/2.0.0/net/http.rb:878:in `open'
C:/Ruby200-x64/lib/ruby/2.0.0/net/http.rb:878:in `block in connect'
C:/Ruby200-x64/lib/ruby/2.0.0/timeout.rb:52:in `timeout'
C:/Ruby200-x64/lib/ruby/2.0.0/net/http.rb:877:in `connect'
C:/Ruby200-x64/lib/ruby/2.0.0/net/http.rb:862:in `do_start'
C:/Ruby200-x64/lib/ruby/2.0.0/net/http.rb:851:in `start'
C:/Ruby200-x64/lib/ruby/2.0.0/net/http.rb:582:in `start'
C:/Users/michael/Documents/Aptana Studio 3 Workspace/practice/CourseraDownloader.rb:20:in `download'
C:/Users/michael/Documents/Aptana Studio 3 Workspace/practice/CourseraDownloader.rb:52:in `<top (required)>'
C:/Ruby200-x64/bin/rdebug-ide:23:in `load'
C:/Ruby200-x64/bin/rdebug-ide:23:in `<main>'
C:/Ruby200-x64/lib/ruby/2.0.0/net/http.rb:878:in `initialize': A socket operation was attempted to an unreachable network. - connect(2) (Errno::ENETUNREACH)
from C:/Ruby200-x64/lib/ruby/2.0.0/net/http.rb:878:in `open'
from C:/Ruby200-x64/lib/ruby/2.0.0/net/http.rb:878:in `block in connect'
from C:/Ruby200-x64/lib/ruby/2.0.0/timeout.rb:52:in `timeout'
from C:/Ruby200-x64/lib/ruby/2.0.0/net/http.rb:877:in `connect'
from C:/Ruby200-x64/lib/ruby/2.0.0/net/http.rb:862:in `do_start'
from C:/Ruby200-x64/lib/ruby/2.0.0/net/http.rb:851:in `start'
from C:/Ruby200-x64/lib/ruby/2.0.0/net/http.rb:582:in `start'
from C:/Users/michael/Documents/Aptana Studio 3 Workspace/practice/CourseraDownloader.rb:20:in `download'
from C:/Users/michael/Documents/Aptana Studio 3 Workspace/practice/CourseraDownloader.rb:52:in `<top (required)>'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/ruby-debug-ide-0.4.22/lib/ruby-debug-ide.rb:86:in `debug_load'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/ruby-debug-ide-0.4.22/lib/ruby-debug-ide.rb:86:in `debug_program'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/ruby-debug-ide-0.4.22/bin/rdebug-ide:110:in `<top (required)>'
from C:/Ruby200-x64/bin/rdebug-ide:23:in `load'
from C:/Ruby200-x64/bin/rdebug-ide:23:in `<main>'
I was following instructions here to write all the HTTP code. As far as I can see I'm following them verbatim.
I'm using Windows 7, ruby 2.0.0p481, and Aptana Studio 3. When I copy the url into my browser it goes straight to the page without a problem. When I look at the request headers in my browser for that url, I don't see anything else I think I'm missing. I also tried setting the Host and Referer request headers, it made no difference.
I am out of ideas, and have already searched Stack Overflow for similar questions but that didn't help. Please let me know what I'm missing.
So, I had this same error message on a different project, and the problem was that my machine literally couldn't connect to that IP / port. Have you tried connecting with curl? If the URL works in your browser, the browser could be using a proxy or something to actually get there. Testing the URL with curl is what pinpointed the problem for me.
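If a proxy does turn out to be the difference, Net::HTTP can be pointed at it explicitly. A sketch under that assumption (proxy.example.com and 8080 are placeholders for whatever the browser actually uses; note also that use_ssl belongs in the trailing options hash rather than being passed positionally):

require 'net/http'
require 'uri'

uri = URI('https://class.coursera.org/datasci-002/lecture')
# placeholder proxy; substitute the real proxy host and port
res = Net::HTTP.start(uri.hostname, uri.port,
                      'proxy.example.com', 8080,
                      :use_ssl => true) do |http|
  http.request(Net::HTTP::Get.new(uri.request_uri))
end
puts res.code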

Skipping slow websites when looping through an array of URLs using watir-webdriver

I'm trying to loop through an array of websites in Chrome using watir-webdriver, but I always encounter an error on certain websites. Recently, I have had this problem with http://adage.com. The loop will execute perfectly until it reaches http://adage.com and then it will hang until the following error is displayed:
/Users/default/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/protocol.rb:146:in `rescue in rbuf_fill': Timeout::Error (Timeout::Error)
from /Users/default/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/protocol.rb:140:in `rbuf_fill'
from /Users/default/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/protocol.rb:122:in `readuntil'
from /Users/default/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/protocol.rb:132:in `readline'
from /Users/default/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:2562:in `read_status_line'
from /Users/default/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:2551:in `read_new'
from /Users/default/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:1319:in `block in transport_request'
from /Users/default/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:1316:in `catch'
from /Users/default/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:1316:in `transport_request'
from /Users/default/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:1293:in `request'
from /Users/default/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:1286:in `block in request'
from /Users/default/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:745:in `start'
from /Users/default/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:1284:in `request'
from /Users/default/.rvm/gems/ruby-1.9.3-p125/gems/selenium-webdriver-2.25.0/lib/selenium/webdriver/remote/http/default.rb:82:in `response_for'
from /Users/default/.rvm/gems/ruby-1.9.3-p125/gems/selenium-webdriver-2.25.0/lib/selenium/webdriver/remote/http/default.rb:38:in `request'
from /Users/default/.rvm/gems/ruby-1.9.3-p125/gems/selenium-webdriver-2.25.0/lib/selenium/webdriver/remote/http/common.rb:40:in `call'
from /Users/default/.rvm/gems/ruby-1.9.3-p125/gems/selenium-webdriver-2.25.0/lib/selenium/webdriver/remote/bridge.rb:598:in `raw_execute'
from /Users/default/.rvm/gems/ruby-1.9.3-p125/gems/selenium-webdriver-2.25.0/lib/selenium/webdriver/remote/bridge.rb:576:in `execute'
from /Users/default/.rvm/gems/ruby-1.9.3-p125/gems/selenium-webdriver-2.25.0/lib/selenium/webdriver/remote/bridge.rb:536:in `getActiveElement'
from /Users/default/.rvm/gems/ruby-1.9.3-p125/gems/selenium-webdriver-2.25.0/lib/selenium/webdriver/common/target_locator.rb:60:in `active_element'
from /Users/default/.rvm/gems/ruby-1.9.3-p125/gems/watir-webdriver-0.6.1/lib/watir-webdriver/browser.rb:136:in `send_keys'
from /Users/default/Dropbox/beta_scripts/loop_test.rb:16:in `rescue in <main>'
from /Users/default/Dropbox/beta_scripts/loop_test.rb:11:in `<main>'
I have no idea how to avoid this. I have tried setting timeouts and even sending the ESC key during rescue to stop Chrome from loading the page, but have not had any success. Ultimately, I want to be able to reliably load an array of 500+ websites in succession, but this seems impossible given the likelihood that one of the websites will hang. Is there any way to stop a slow page from loading and move on to the next element in the array?
Below is a shortened version of my code that isolates the problem:
#!/usr/bin/env ruby
require 'watir-webdriver'

b = Watir::Browser.new :chrome
sites = ["twitter.com", "cars.com", "autotrader.com", "rolex.com", "newyorker.com", "adage.com", "theatlantic.com", "pcmag.com"]

sites.each do |uri|
  begin
    Timeout::timeout(10) do
      b.goto uri
    end
  rescue Timeout::Error => e_time
    sleep 5
    b.send_keys :escape
    p "#{uri} is taking forever to load (#{e_time})"
  rescue Exception => e_exception
    p e_exception
  end
end
b.close
Well, I can understand your frustration, mate, because I have encountered the same thing when dealing with selenium-webdriver. Here is what you need to do to make sure your script runs robustly to the end for all 500+ of your websites.
sites.each do |uri|
  !30.times { if ((b.goto uri) rescue false) then break else sleep 1; end }
end
The code above retries b.goto up to 30 times for each website, sleeping one second between failed attempts, and then moves on to the next website.
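Written out long-hand, the one-liner above does the following (a sketch that should be behaviorally equivalent):

sites.each do |uri|
  30.times do
    begin
      b.goto uri
      break        # page loaded, move on to the next site
    rescue
      sleep 1      # goto failed, wait a second and try again
    end
  end
end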

Rescue Timeout::Error from Redis Gem (Ruby)

I need to rescue a Timeout::Error raised from the Redis library, but I'm running into a problem: rescuing that specific class doesn't seem to work.
begin
  Redis.new( { :host => "127.0.0.X" } )
rescue Timeout::Error => ex
end
=> Timeout::Error: Timeout::Error from /Users/me/.rvm/gems/ree-1.8.7-2011.03@gowalla/gems/redis-2.2.0/lib/redis/connection/hiredis.rb:23:in `connect'
When I try to rescue Exception it still doesn't work:
begin
  Redis.new( { :host => "127.0.0.X" } )
rescue Exception => ex
end
=> Timeout::Error: Timeout::Error from /Users/me/.rvm/gems/ree-1.8.7-2011.03@gowalla/gems/redis-2.2.0/lib/redis/connection/hiredis.rb:23:in `connect'
If I try to raise the exception manually, I can rescue it, but I don't know why I can't rescue it when it's raised from within the Redis gem (2.2.0).
begin
  raise Timeout::Error
rescue Timeout::Error => ex
  puts ex
end
Timeout::Error
=> nil
Any clue how to rescue this exception?
You ran this code in irb, right? The exception you are getting is not actually being raised by Redis.new. It is being raised by the inspect method, which irb calls to show you the value of the expression you just typed.
Just look at the stack trace (I shortened the paths to make it legible):
ruby-1.8.7-p330 :009 > Redis.new(:host => "google.com")
Timeout::Error: time's up!
from /.../SystemTimer-1.2.3/lib/system_timer/concurrent_timer_pool.rb:63:in `trigger_next_expired_timer_at'
from /.../SystemTimer-1.2.3/lib/system_timer/concurrent_timer_pool.rb:68:in `trigger_next_expired_timer'
from /.../SystemTimer-1.2.3/lib/system_timer.rb:85:in `install_ruby_sigalrm_handler'
from /..../lib/ruby/1.8/monitor.rb:242:in `synchronize'
from /.../SystemTimer-1.2.3/lib/system_timer.rb:83:in `install_ruby_sigalrm_handler'
from /.../redis-2.2.2/lib/redis/connection/ruby.rb:26:in `call'
from /.../redis-2.2.2/lib/redis/connection/ruby.rb:26:in `initialize'
from /.../redis-2.2.2/lib/redis/connection/ruby.rb:26:in `new'
from /.../redis-2.2.2/lib/redis/connection/ruby.rb:26:in `connect'
from /.../SystemTimer-1.2.3/lib/system_timer.rb:60:in `timeout_after'
from /.../redis-2.2.2/lib/redis/connection/ruby.rb:115:in `with_timeout'
from /.../redis-2.2.2/lib/redis/connection/ruby.rb:25:in `connect'
from /.../redis-2.2.2/lib/redis/client.rb:227:in `establish_connection'
from /.../redis-2.2.2/lib/redis/client.rb:23:in `connect'
from /.../redis-2.2.2/lib/redis/client.rb:247:in `ensure_connected'
from /.../redis-2.2.2/lib/redis/client.rb:137:in `process'
... 2 levels...
from /.../redis-2.2.2/lib/redis/client.rb:46:in `call'
from /.../redis-2.2.2/lib/redis.rb:90:in `info'
from /..../lib/ruby/1.8/monitor.rb:242:in `synchronize'
from /.../redis-2.2.2/lib/redis.rb:89:in `info'
from /.../redis-2.2.2/lib/redis.rb:1075:in `inspect'
from /..../lib/ruby/1.8/monitor.rb:242:in `synchronize'
from /.../redis-2.2.2/lib/redis.rb:1074:in `inspect'
from /..../lib/ruby/1.8/irb.rb:310:in `output_value'
from /..../lib/ruby/1.8/irb.rb:159:in `eval_input'
from /..../lib/ruby/1.8/irb.rb:271:in `signal_status'
from /..../lib/ruby/1.8/irb.rb:155:in `eval_input'
from /..../lib/ruby/1.8/irb.rb:154:in `eval_input'
from /..../lib/ruby/1.8/irb.rb:71:in `start'
from /..../lib/ruby/1.8/irb.rb:70:in `catch'
from /..../lib/ruby/1.8/irb.rb:70:in `start'
from /..../bin/irb:17
As you can see above, the exception occurs inside inspect, not Redis.new. When you call inspect on a Redis object, instead of just printing out its state, it actually does a lot of things. In this case, inspect attempts to connect to the server and throws an exception when that times out. This seems like very bad design to me, and maybe we should file a bug report with the maintainers of the Redis gem.
This leads to some interesting behavior in IRB:
Typing Redis.new(:host => "google.com") results in an exception as shown above
Typing Redis.new(:host => "google.com"); 'hello' results in '=> "hello"'
If you want to catch this exception, try calling ensure_connected inside your begin/rescue/end block.
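Another option that avoids the client internals (a sketch; ping is simply a cheap command that forces the actual connection attempt to happen inside the begin block) would be:

begin
  redis = Redis.new(:host => "127.0.0.X")
  redis.ping                      # the connection is made here, within the rescue's reach
  redis
rescue Timeout::Error => ex
  puts "could not connect: #{ex.message}"
  nil
end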
