I wrote a script using 'socket' that connects to a host and port. Because socket.timeout doesn't really work, I tried the 'tcp_timeout' gem, which works properly, but I can't seem to suppress the error raised when a connect/read/write timeout happens. Any idea where I am going wrong?
begin
  socket = TCPTimeout::TCPSocket.new(server, port, connect_timeout: 6, read_timeout: 6)
  unless socket.read(12) =~ /^SMTH\n$/
    puts "[!] #{server} banner error"
    exit(1)
  end
rescue TCPTimeout::SocketTimeout => err
  puts "[!] #{server} Timeout"
  exit(1)
end
The error raised, as expected, is a read timeout error:
/usr/local/rvm/gems/ruby-1.9.3-p551/gems/tcp_timeout-0.1.1/lib/tcp_timeout.rb:160:in `select_timeout': read timeout (TCPTimeout::SocketTimeout)
from /usr/local/rvm/gems/ruby-1.9.3-p551/gems/tcp_timeout-0.1.1/lib/tcp_timeout.rb:108:in `block in read'
from /usr/local/rvm/gems/ruby-1.9.3-p551/gems/tcp_timeout-0.1.1/lib/tcp_timeout.rb:107:in `loop'
from /usr/local/rvm/gems/ruby-1.9.3-p551/gems/tcp_timeout-0.1.1/lib/tcp_timeout.rb:107:in `read'
from ./myhost.rb:67:in `<main>'
I even tried:
rescue TCPTimeout::SocketTimeout, StandardError, Timeout::Error => err
Same thing happens.
Author of tcp_timeout here; your code looks correct. This snippet works as expected (for me):
require 'tcp_timeout'

begin
  socket = TCPTimeout::TCPSocket.new('stackoverflow.com', 80, read_timeout: 1)
  socket.read(100)
rescue TCPTimeout::SocketTimeout => e
  puts 'Rescued!', e
end
If you can find a snippet that fails reliably against a public server please file a bug: https://github.com/lann/tcp-timeout-ruby/issues
Intro
I have a client that makes numerous SSL connections to a third-party service. In certain cases, the third party stops responding during the socket and SSL negotiation process. When this occurs, my current implementation "sits" for hours on end before timing out.
To combat this, I'm trying to implement the following process:
require 'socket'
require 'openssl'
# variables
host = '....'
port = ...
p12 = #OpenSSL::PKCS12 object
# set up socket
addr = Socket.getaddrinfo(host, nil)
sockaddr = Socket.pack_sockaddr_in(port, addr[0][3])
socket = Socket.new(Socket.const_get(addr[0][0]), Socket::SOCK_STREAM, 0)
socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)
begin
  socket.connect_nonblock(sockaddr)
rescue IO::WaitWritable
  if IO.select(nil, [socket], nil, timeout)
    begin
      socket.connect_nonblock(sockaddr)
    rescue Errno::EISCONN
      puts "socket connected"
    rescue
      puts "socket error"
      socket.close
      raise
    end
  else
    socket.close
    raise "Connection timeout"
  end
end
# negotiate ssl
context = OpenSSL::SSL::SSLContext.new
context.cert = p12.certificate
context.key = p12.key
ssl_socket = OpenSSL::SSL::SSLSocket.new(socket, context)
ssl_socket.sync_close = true
puts "ssl connecting"
ssl_socket.connect_nonblock
puts "ssl connected"
# cleanup
ssl_socket.close
puts "socket closed"
ssl_socket.connect_nonblock will eventually be wrapped in a structure similar to the one around socket.connect_nonblock above.
The Problem
The issue I'm running into is that ssl_socket.connect_nonblock raises the following when run:
`connect_nonblock': read would block (OpenSSL::SSL::SSLError)
Instead, I'd expect it to raise an IO::WaitWritable as socket.connect_nonblock does.
I've scoured the internet for information on this particular error but can't find anything of particular use. From what I gather, others have had success using this method, so I'm not sure what I'm missing. For the sake of completeness, I've found the same results with both ruby 2.2.0 and 1.9.3.
Any suggestions are greatly appreciated!
I had the same problem. I tried the code below, and it seems to work correctly in my situation.
ssl_socket = OpenSSL::SSL::SSLSocket.new socket, context
ssl_socket.sync = true
begin
  ssl_socket.connect_nonblock
rescue IO::WaitReadable
  if IO.select([ssl_socket], nil, nil, timeout)
    retry
  else
    # timeout
  end
rescue IO::WaitWritable
  if IO.select(nil, [ssl_socket], nil, timeout)
    retry
  else
    # timeout
  end
end
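Once the handshake succeeds, the same select-and-retry pattern can bound reads on the SSL socket. A minimal sketch (the helper name, buffer size, and the reuse of ssl_socket and timeout from above are mine, not part of the original answer):

# Sketch: bound a read on an already-connected OpenSSL::SSL::SSLSocket.
def ssl_read_with_timeout(ssl_socket, maxlen, timeout)
  begin
    ssl_socket.read_nonblock(maxlen)
  rescue IO::WaitReadable
    if IO.select([ssl_socket], nil, nil, timeout)
      retry
    else
      raise "SSL read timeout"
    end
  rescue IO::WaitWritable
    # TLS may need to write even during a read (e.g. renegotiation)
    if IO.select(nil, [ssl_socket], nil, timeout)
      retry
    else
      raise "SSL read timeout"
    end
  end
end

data = ssl_read_with_timeout(ssl_socket, 4096, 5)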
$ irb
1.9.3-p448 :001 > require 'socket'
=> true
1.9.3-p448 :002 > TCPSocket.new('www.example.com', 111)
gives
Errno::ETIMEDOUT: Operation timed out - connect(2)
Questions:
How can I define the timeout value for TCPSocket.new?
How can I properly catch the timeout (or, in general, socket) exception(s)?
Since at least Ruby 2.0, one can simply use Socket.tcp:
Socket.tcp("www.ruby-lang.org", 10567, connect_timeout: 5) {}
Note the block at the end of the expression; it ensures the connection is closed if one is established.
For older versions, @falstru's answer appears to be the best option.
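For illustration, here is a sketch of a block that actually uses the connection (host, port, and request line are only examples); the socket is closed automatically when the block returns:

require 'socket'

Socket.tcp("www.ruby-lang.org", 80, connect_timeout: 5) do |sock|
  sock.write("HEAD / HTTP/1.0\r\nHost: www.ruby-lang.org\r\n\r\n")
  puts sock.readline   # e.g. "HTTP/1.1 200 OK" or a redirect status line
end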
Use begin .. rescue Errno::ETIMEDOUT to catch the timeout:
require 'socket'

begin
  TCPSocket.new('www.example.com', 111)
rescue Errno::ETIMEDOUT
  p 'timeout'
end
To catch any socket exceptions, use SystemCallError instead.
According to the SystemCallError documentation:
SystemCallError is the base class for all low-level platform-dependent errors.
The errors available on the current platform are subclasses of
SystemCallError and are defined in the Errno module.
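For example, a single rescue clause can catch any of the Errno errors (a sketch reusing the host and port from the question):

require 'socket'

begin
  TCPSocket.new('www.example.com', 111)
rescue SystemCallError => e
  # catches Errno::ETIMEDOUT, Errno::ECONNREFUSED, Errno::EHOSTUNREACH, ...
  p "#{e.class}: #{e.message}"
end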
TCPSocket.new does not support a timeout directly.
Use Socket#connect_nonblock and IO.select to set one.
require 'socket'
def connect(host, port, timeout = 5)
  # Convert the passed host into structures the non-blocking calls
  # can deal with
  addr = Socket.getaddrinfo(host, nil)
  sockaddr = Socket.pack_sockaddr_in(port, addr[0][3])

  Socket.new(Socket.const_get(addr[0][0]), Socket::SOCK_STREAM, 0).tap do |socket|
    socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)

    begin
      # Initiate the socket connection in the background. If it doesn't fail
      # immediately it will raise an IO::WaitWritable (Errno::EINPROGRESS)
      # indicating the connection is in progress.
      socket.connect_nonblock(sockaddr)
    rescue IO::WaitWritable
      # IO.select will block until the socket is writable or the timeout
      # is exceeded - whichever comes first.
      if IO.select(nil, [socket], nil, timeout)
        begin
          # Verify there is now a good connection
          socket.connect_nonblock(sockaddr)
        rescue Errno::EISCONN
          # Good news everybody, the socket is connected!
        rescue
          # An unexpected exception was raised - the connection is no good.
          socket.close
          raise
        end
      else
        # IO.select returns nil when the socket is not ready before timeout
        # seconds have elapsed
        socket.close
        raise "Connection timeout"
      end
    end
  end
end
connect('www.example.com', 111, 2)
The above code comes from "Setting a Socket Connection Timeout in Ruby".
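The helper above only bounds the connect. If you also need to bound reads, the same IO.select pattern works with read_nonblock; here is a sketch (the helper name, defaults, and the example request are illustrative, not from the article):

def read_with_timeout(socket, maxlen, timeout = 5)
  begin
    socket.read_nonblock(maxlen)
  rescue IO::WaitReadable
    if IO.select([socket], nil, nil, timeout)
      retry
    else
      socket.close
      raise "Read timeout"
    end
  end
end

socket = connect('www.example.com', 80, 2)
socket.write("HEAD / HTTP/1.0\r\n\r\n")
puts read_with_timeout(socket, 1024)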
If you like the idea of avoiding the pitfalls of Timeout, but would rather not maintain your own *_nonblock + IO.select implementation, you can use the tcp_timeout gem.
The tcp_timeout gem monkey-patches TCPSocket#connect, #read, and #write so that they use non-blocking I/O and have timeouts that you can enable.
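Usage mirrors the snippets from the first question above; a minimal sketch (host, port, and the request sent are illustrative only):

require 'tcp_timeout'

begin
  socket = TCPTimeout::TCPSocket.new('www.example.com', 80,
                                     connect_timeout: 5, read_timeout: 5)
  socket.write("HEAD / HTTP/1.0\r\n\r\n")
  puts socket.read(15)
rescue TCPTimeout::SocketTimeout => e
  puts "timed out: #{e.message}"
end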
To set a timeout you can use Ruby's Timeout module:
require 'socket'
require 'timeout'

begin
  Timeout.timeout(10) do
    begin
      TCPSocket.new('www.example.com', 111)
    rescue Errno::ENETUNREACH
      retry # or do something on network timeout
    end
  end
rescue Timeout::Error
  puts "timed out"
  # do something on timeout
end
and after 10 seconds you'll get:
# timed out
# => nil
NOTE: Some people consider this a dangerous solution. That opinion deserves a hearing, but no real investigation has been carried out, so for now it is just a hypothesis. These days it is better to use Ruby's built-in timeout support in the Socket class, like the following:
Socket.tcp("www.ruby-lang.org", 80, connect_timeout: 80) do |sock|
...
end
I want to handle timeouts for an IP range taken from the console. I make requests to the IPs within that range and get a timeout error.
I want to make requests to all the IPs and collect responses from them.
For an IP that times out, I want to skip it and move on to the next one. How do I handle this so the loop doesn't die on the exception and the script still sends a request to every IP that can respond, while handling the ones that time out?
Attaching code here:
require 'net/http'
require 'uri'
require 'ipaddr'
puts "Origin IP:"
originip = gets()
(IPAddr.new("209.85.175.121")..IPAddr.new("209.85.175.150")).each do |address|
req = Net::HTTP.get(URI.parse("http://#{address.to_s}"))
puts req
end
Error:
C:/Ruby187/lib/ruby/1.8/net/http.rb:560:in `initialize': A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. - connect(2) (Errno::ETIMEDOUT)
from C:/Ruby187/lib/ruby/1.8/net/http.rb:560:in `open'
from C:/Ruby187/lib/ruby/1.8/net/http.rb:560:in `connect'
from C:/Ruby187/lib/ruby/1.8/timeout.rb:53:in `timeout'
from C:/Ruby187/lib/ruby/1.8/timeout.rb:101:in `timeout'
from C:/Ruby187/lib/ruby/1.8/net/http.rb:560:in `connect'
from C:/Ruby187/lib/ruby/1.8/net/http.rb:553:in `do_start'
from C:/Ruby187/lib/ruby/1.8/net/http.rb:542:in `start'
from C:/Ruby187/lib/ruby/1.8/net/http.rb:379:in `get_response'
from C:/Ruby187/lib/ruby/1.8/net/http.rb:356:in `get'
from IP Range 2.rb:9
from IP Range 2.rb:8:in `each'
Just like Marc says, you should rescue the exception, like so:
begin
  response = Net::HTTP.get(...)
rescue Errno::ECONNREFUSED => e
  # Do what you think needs to be done
end
Also, what you get back from the call to get() is a response, not a request.
Catch the exceptions raised on timeout:
require 'timeout'
(IPAddr.new("209.85.175.121")..IPAddr.new("209.85.175.150")).each do |address|
  begin
    req = Net::HTTP.get(URI.parse("http://#{address.to_s}"))
    puts req
  rescue Timeout::Error => exc
    puts "ERROR: #{exc.message}"
  rescue Errno::ETIMEDOUT => exc
    puts "ERROR: #{exc.message}"
  # uncomment the following two lines, if you are not able to track the exception type.
  #rescue Exception => exc
  #  puts "ERROR: #{exc.message}"
  end
end
Edit: When we rescue Timeout::Error, only exceptions belonging to the Timeout::Error class are caught. We need to rescue each raised exception by its own error class; I've updated the code accordingly.
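Both classes can also be listed in one rescue clause; a minimal sketch of the same loop:

require 'net/http'
require 'uri'
require 'ipaddr'
require 'timeout'

(IPAddr.new("209.85.175.121")..IPAddr.new("209.85.175.150")).each do |address|
  begin
    puts Net::HTTP.get(URI.parse("http://#{address}"))
  rescue Timeout::Error, Errno::ETIMEDOUT => exc
    puts "ERROR: #{address}: #{exc.message}"
  end
end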
I'm trying to process content from a list of links using "open-uri" in Ruby (1.8.6), but things go wrong when I get an error because a link is broken or requires authentication:
open-uri.rb:277:in `open_http': 404 Not Found (OpenURI::HTTPError)
from C:/tools/Ruby/lib/ruby/1.8/open-uri.rb:616:in `buffer_open'
from C:/tools/Ruby/lib/ruby/1.8/open-uri.rb:164:in `open_loop'
from C:/tools/Ruby/lib/ruby/1.8/open-uri.rb:162:in `catch'
or
C:/tools/Ruby/lib/ruby/1.8/net/http.rb:560:in `initialize': getaddrinfo: no address associated with hostname. (SocketError)
from C:/tools/Ruby/lib/ruby/1.8/net/http.rb:560:in `open'
from C:/tools/Ruby/lib/ruby/1.8/net/http.rb:560:in `connect'
from C:/tools/Ruby/lib/ruby/1.8/timeout.rb:53:in `timeout'
or
C:/tools/Ruby/lib/ruby/1.8/net/protocol.rb:133:in `sysread': An existing connection was forcibly closed by the remote host. (Errno::ECONNRESET)
from C:/tools/Ruby/lib/ruby/1.8/net/protocol.rb:133:in `rbuf_fill'
from C:/tools/Ruby/lib/ruby/1.8/timeout.rb:62:in `timeout'
from C:/tools/Ruby/lib/ruby/1.8/timeout.rb:93:in `timeout'
Is there a way to test the response (URL) before processing any data?
The code is:
require 'open-uri'
smth.css.each do |item|
  open(item[:name], 'wb') do |file|
    file << open(item[:href]).read
  end
end
Many thanks
You could try something along the lines of
require 'open-uri'
smth.css.each do |item|
  begin
    open(item[:name], 'wb') do |file|
      file << open(item[:href]).read
    end
  rescue => e
    case e
    when OpenURI::HTTPError
      # do something
    when SocketError
      # do something else
    when Errno::ECONNRESET
      # do something else again
    else
      raise e
    end
  end
end
I don't know of any way of testing the connection without opening it and trying, so rescuing these errors is the only approach I can think of. One thing to be aware of is the class hierarchy: OpenURI::HTTPError and SocketError are subclasses of StandardError, and Errno::ECONNRESET is a subclass of SystemCallError, which itself descends from StandardError, so a bare rescue => e catches all three and the case statement can then dispatch on the specific class.
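You can verify the hierarchy in irb:

require 'open-uri'

OpenURI::HTTPError.ancestors.include?(StandardError)   #=> true
SocketError.ancestors.include?(StandardError)          #=> true
Errno::ECONNRESET.ancestors.include?(SystemCallError)  #=> true
SystemCallError.ancestors.include?(StandardError)      #=> true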
I was able to solve this problem by using an if/else statement to check the parsed response for a "failure" status:
def controller_action
  url = "some_API"
  response = open(url).read
  parsed = JSON.parse(response)
  data = parsed["data"]

  if parsed["status"] == "failure"
    redirect_to :action => "home"
  else
    do_something_else
  end
end
I am trying to run some functional tests on a small server I have created. I am running Ruby 1.9.2 and RSpec 2.2.1 on Mac OS X 10.6. I have verified that the server works correctly and is not causing the problems I am experiencing. In my spec, I am attempting to spawn a process to start the server, run some examples, and then kill the process running the server. Here is the code for my spec:
describe "Server" do
describe "methods" do
let(:put) { "put foobar beans 5\nhowdy" }
before(:all) do
#pid = spawn("bin/server")
end
before(:each) do
#sock = TCPSocket.new "127.0.0.1", 3000
end
after(:each) do
#sock.close
end
after(:all) do
Process.kill("HUP", #pid)
end
it "should be valid for a valid put method" do
#sock.send(put, 0).should == put.length
response = #sock.recv(1000)
response.should == "OK\n"
end
#more examples . . .
end
end
When I run the spec, it appears that the before(:all) and after(:all) blocks are run and the server process is killed before the examples are run, because I get the following output:
F
Failures:
1) Server methods should be valid for a valid put method
Failure/Error: @sock = TCPSocket.new "127.0.0.1", 3000
Connection refused - connect(2)
# ./spec/server_spec.rb:11:in `initialize'
# ./spec/server_spec.rb:11:in `new'
# ./spec/server_spec.rb:11:in `block (3 levels) in <top (required)>'
When I comment out the call to Process.kill, the server is started and the tests are run, but the server remains running, which means I have to go and kill it manually.
It seems like I am misunderstanding what the after(:all) method is supposed to do, because it is not being run in the order I thought it would be. Why is this happening? What do I need to do so that my specs run against the server and the server process is killed only after all the examples have finished?
Are you sure the server is ready to accept connections? Maybe something like this would help:
before(:each) do
  3.times do
    begin
      @sock = TCPSocket.new "127.0.0.1", 3000
      break
    rescue
      sleep 1
    end
  end
  raise "could not open connection" unless @sock
end