I'm using the SerialPort Ruby gem to write to and read from a serial port, using a Ruby "driver" script.
I want to run the application on both Windows and Linux;
by the way, the code below has been tested on Windows 7.
Specifically I have to print ESC/POS text data on a thermal printer,
attached to a host computer with a serial line (COM/USB port).
I do not have any problems WRITING data to the printer (a superb and cheap Epson TM-T20)!
But I have to manage errors/connection status when READING data from the device;
for example, to check whether the printer is powered ON (on-line) I would ask the printer for its status with some ESC/POS commands.
The problem arises because, even if I set the serial port timeouts (I mean the timeout parameters of the SerialPort class initializer), without exception handling, which the gem does not seem to provide :-(, I am not able to tell whether the printer is alive or powered off. See this chunk of code:
require 'serialport'

begin
  printer = SerialPort.new PRINTER_SERIALPORT_NAME, BAUDRATE
  # http://curiosity.roguepenguin.net/?p=35
  # timeouts are in milliseconds
  printer.read_timeout = 2000
  printer.write_timeout = 2000
  puts "Success for SerialPort: #{ printer.inspect }"
rescue => e
  puts "Failed to open as SerialPort: #{ e.inspect }"
end
With the code above, printer.read does in fact return after 2000 milliseconds... but I do not know whether that is because I got fewer bytes back than expected (0), or because the device is powered off.
So I tried using Ruby's Timeout ( http://ruby-doc.org/stdlib-2.0.0/libdoc/timeout/rdoc/Timeout.html ), running the script from the Windows 7 shell, but unfortunately the code below does not run as expected (if the printer is off, printer.read HANGS, I mean it does not return at all :-(, and I do not get the expected Timeout::Error); strange, isn't it?
require 'timeout'

begin
  Timeout.timeout(2) do
    puts printer.read
  end
rescue Timeout::Error
  puts "SerialPort timeout."
rescue => e
  puts "SerialPort ERROR: Failed to open: #{ e.inspect }"
end
Any idea how to solve this?
Thank you
giorgio
From the documentation I gather that when specifying a read_timeout the read operation will return immediately (non-blocking), thus never causing a Timeout::Error as you would expect.
Could you try setting a timeout of 0, which according to the docs will block until a byte is available, and see whether Ruby raises a Timeout::Error?
Or try setting no timeout at all and see whether that works.
Alternatively, you could specify a negative timeout, which returns immediately with whatever data is available, and make a decision based on that.
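For reference, the three variants might look like this (an untested sketch, reusing the printer object from your question):

# Variant 1: per the docs, 0 should block until at least one byte is available
printer.read_timeout = 0

# Variant 2: don't assign read_timeout at all and rely on the gem's default behavior

# Variant 3: a negative value should return immediately with whatever is buffered
printer.read_timeout = -1
data = printer.read(1)
puts data.nil? ? "no data buffered" : "read: #{data.inspect}"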
Sorry for answering my own question; just to share some tests, also following rkhon's suggestions.
Premise:
I'm now testing with Ruby 2.0 on Windows 7, with an Epson TM-T20 printer attached to the PC over a USB connection (via an Epson serial-port-to-USB "matcher").
About the SerialPort (read) timeout tests:
If I set:
printer.read_timeout <= 0
the serialport read hangs (does not return control).
If I set:
printer.read_timeout > 0
# read 1 byte
printer.read(1)
it returns after read_timeout milliseconds if there are no AVAILABLE bytes in the read buffer; otherwise it returns immediately with the bytes read.
That's OK!
BTW, it seems that the serialport gem's timeouts take priority over (win against) Ruby's own Timeout support; that means that if I set printer.read_timeout = 2000 and afterwards do
# 1000 msecs
Timeout.timeout(1) do
  puts printer.read
end
I nevertheless get control back only after 2000 msecs...
To manage errors/connection status:
even if I can't use a timeout exception, it's OK on my side to check the power-on/connection status of the printer even without an explicit timeout exception...
I verified that if I send (write) a DLE EOT m (STATUS request) to the printer,
I get back (read) NO DATA (nil) when the printer is powered off.
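In code, the whole check looks more or less like this (a sketch; "\x10\x04\x01" is DLE EOT 1, "transmit printer status", taken from the Epson ESC/POS reference, so adjust the argument byte for your needs):

printer.read_timeout = 2000          # milliseconds
printer.write "\x10\x04\x01"         # DLE EOT 1: ask for the printer status
status = printer.read(1)             # expect one status byte back

if status.nil?
  puts "printer off-line or powered off (no reply)"
else
  puts "printer on-line, status byte: 0x%02x" % status.ord
end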
Hope the report is useful...
Related
I am writing a Ruby client which will open a TCP socket and stream data.
If I can't open the socket within 20 secs, I want to trigger a Timeout error.
require 'socket'
require 'timeout'

begin
  Timeout::timeout(20) { socket = open_socket(host, port) }
rescue Errno::ECONNREFUSED
  puts "Failed to connect to server"
rescue Timeout::Error
  puts "Timeout error occurred while connecting to the server"
end
My open_socket method is given below.
def open_socket(host, port)
  TCPSocket.new(host, port)
end
The code works fine. My questions are:
What is the standard timeout in seconds in socket programming?
Can the timeout in seconds be set according to our needs?
I found 2 articles that seem to confirm the timeout as 20 seconds:
In Windows
In Linux (not ruby-specific)
The second article seems to imply that the timeout period is defined by the OS.
I do not have an answer for your second question.
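That said, if you need a connect timeout different from the OS default, one common pattern is a non-blocking connect plus IO.select. Here is a sketch (the helper name is mine, and it assumes Ruby 1.9+):

require 'socket'
require 'timeout'

# Sketch: impose our own connect timeout instead of the OS default.
def connect_with_timeout(host, port, timeout)
  addr = Socket.pack_sockaddr_in(port, host)
  sock = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)
  begin
    sock.connect_nonblock(addr)
  rescue IO::WaitWritable
    # Connect in progress: the socket becomes writable when it finishes.
    unless IO.select(nil, [sock], nil, timeout)
      sock.close
      raise Timeout::Error, "connect timed out after #{timeout} seconds"
    end
    begin
      sock.connect_nonblock(addr)  # check how the connect ended
    rescue Errno::EISCONN
      # Already connected: success.
    end
  end
  sock
end

# usage: sock = connect_with_timeout('127.0.0.1', 80, 5)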
That's exactly how you do it. The default value of the timeout is 10 seconds.
timeout(sec) { ... }
Executes the block and returns true if the block execution terminates successfully prior to elapsing of the timeout period,
otherwise immediately terminates execution of the block and raises a TimeoutError exception.
require 'timeout'

status = timeout(5) {
  # something that may take time
}
On Linux, the send/recv timeout can be accessed using setsockopt/getsockopt.
Run man 7 socket and look for the SO_RCVTIMEO and SO_SNDTIMEO options. setsockopt/getsockopt is available on socket objects in Ruby.
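In Ruby that looks roughly like this (a sketch; the option value is a struct timeval packed as two native longs, seconds then microseconds):

require 'socket'

sock = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)

# 2-second receive timeout as a struct timeval (tv_sec, tv_usec)
timeval = [2, 0].pack("l_l_")
sock.setsockopt(Socket::SOL_SOCKET, Socket::SO_RCVTIMEO, timeval)

# Read it back to confirm it was set
p sock.getsockopt(Socket::SOL_SOCKET, Socket::SO_RCVTIMEO).data.unpack("l_l_")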
I'm trying to make sockets timeout in Ruby via the SO_RCVTIMEO socket option however it seems to have no effect on any recent *nix operating system.
Using Ruby's Timeout module is not an option as it requires spawning and joining threads for each timeout which can become expensive. In applications that require low socket timeouts and which have a high number of threads it essentially kills performance. This has been noted in many places including Stack Overflow.
I've read Mike Perham's excellent post on the subject here and in an effort to reduce the problem to one file of runnable code created a simple example of a TCP server that will receive a request, wait the amount of time sent in the request and then close the connection.
The client creates a socket, sets the receive timeout to be 1 second, and then connects to the server. The client tells the server to close the session after 5 seconds then waits for data.
The client should timeout after one second but instead successfully closes the connection after 5.
#!/usr/bin/env ruby
require 'socket'

def timeout
  sock = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)

  # Timeout set to 1 second
  timeval = [1, 0].pack("l_2")
  sock.setsockopt Socket::SOL_SOCKET, Socket::SO_RCVTIMEO, timeval

  # Connect and tell the server to wait 5 seconds
  sock.connect(Socket.pack_sockaddr_in(1234, '127.0.0.1'))
  sock.write("5\n")

  # Wait for data to be sent back
  begin
    result = sock.recvfrom(1024)
    puts "session closed"
  rescue Errno::EAGAIN
    puts "timed out!"
  end
end

Thread.new do
  server = TCPServer.new(nil, 1234)
  while (session = server.accept)
    request = session.gets
    sleep request.to_i
    session.close
  end
end

timeout
I've tried doing the same thing with a TCPSocket as well (which connects automatically) and have seen similar code in redis and other projects.
Additionally, I can verify that the option has been set by calling getsockopt like this:
sock.getsockopt(Socket::SOL_SOCKET, Socket::SO_RCVTIMEO).inspect
Does setting this socket option actually work for anyone?
You can do this efficiently using select from Ruby's IO class.
IO::select takes 4 parameters. The first three are arrays of sockets to monitor and the last one is a timeout (specified in seconds).
The way select works is that it blocks until at least one of the monitored IO objects is ready for a given operation, then returns lists of the objects that are ready to be read from, written to, or that have a pending error.
The first three arguments therefore correspond to the different types of states to monitor:
Ready for reading
Ready for writing
Has pending exception
The fourth is the timeout you want to set (if any). We are going to take advantage of this parameter.
Select returns an array that contains arrays of IO objects (sockets in this case) which are deemed ready by the operating system for the particular action being monitored.
So the return value of select will look like this:
[
  [sockets ready for reading],
  [sockets ready for writing],
  [sockets raising errors]
]
However, select returns nil if the optional timeout value is given and no IO object is ready within timeout seconds.
Therefore, if you want to do performant IO timeouts in Ruby and avoid having to use the Timeout module, you can do the following:
Let's build an example where we wait timeout seconds for a read on socket:
ready = IO.select([socket], nil, nil, timeout)
if ready
  # do the read
else
  # raise something that indicates a timeout
end
This has the benefit of not spinning up a new thread for each timeout (as in the Timeout module) and will make multi-threaded applications with many timeouts much faster in Ruby.
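Wrapped up as a helper, the pattern might look like this (a sketch; read_with_timeout is a name I made up, not a library method):

require 'socket'
require 'timeout'  # only for the Timeout::Error class; no timer thread is used

# Sketch: a select-based read timeout, no Timeout.timeout block needed.
def read_with_timeout(socket, maxlen, timeout)
  ready = IO.select([socket], nil, nil, timeout)
  raise Timeout::Error, "no data within #{timeout} seconds" unless ready
  socket.readpartial(maxlen)
end

# usage: data = read_with_timeout(sock, 1024, 2)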
I think you're basically out of luck. When I run your example with strace (only using an external server to keep the output clean), it's easy to check that setsockopt is indeed getting called:
$ strace -f ruby foo.rb 2>&1 | grep setsockopt
[pid 5833] setsockopt(5, SOL_SOCKET, SO_RCVTIMEO, "\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
strace also shows what's blocking the program. This is the line I see on the screen before the server times out:
[pid 5958] ppoll([{fd=5, events=POLLIN}], 1, NULL, NULL, 8
That means that the program is blocking on this call to ppoll, not on a call to recvfrom. The man page that lists socket options (socket(7)) states that:
Timeouts have no effect for select(2), poll(2), epoll_wait(2), etc.
So the timeout is being set but has no effect. I hope I'm wrong here, but it seems there's no way to change this behavior in Ruby. I took a quick look at the implementation and didn't find an obvious way out. Again, I hope I'm wrong -- this seems to be something basic, how come it's not there?
One (very ugly) workaround is to use dl to call read or recvfrom directly. Those calls are affected by the timeout you set. For example:
require 'socket'
require 'dl'
require 'dl/import'

module LibC
  extend DL::Importer
  dlload 'libc.so.6'
  extern 'long read(int, void *, long)'
end

sock = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)
timeval = [3, 0].pack("l_l_")
sock.setsockopt Socket::SOL_SOCKET, Socket::SO_RCVTIMEO, timeval
sock.connect(Socket.pack_sockaddr_in(1234, '127.0.0.1'))

buf = "\0" * 1024
count = LibC.read(sock.fileno, buf, 1024)
if count == -1
  puts 'Timeout'
end
This code works here. Of course: it's an ugly solution, which won't work on many platforms, etc. It may be a way out though.
Also please note that this is the first time I've done something like this in Ruby, so I'm not aware of all the pitfalls I may be overlooking - in particular, I'm suspicious of the types I specified in 'long read(int, void *, long)' and of the way I'm passing a buffer to read.
Based on my testing, and Jesse Storimer's excellent ebook on "Working with TCP Sockets" (in Ruby), the timeout socket options do not work in Ruby 1.9 (and, I presume, 2.0 and 2.1). Jesse says:
"Your operating system also offers native socket timeouts that can be set via the
SNDTIMEO and RCVTIMEO socket options. But, as of Ruby 1.9, this feature is no longer
functional."
Wow. I think the moral of the story is to forget about these options and use IO.select or Tony Arcieri's NIO library.
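For completeness, the nio4r version of a read timeout looks roughly like this (a sketch based on the nio4r README, assuming socket is an already-connected socket; I haven't benchmarked it):

require 'nio'  # gem install nio4r

selector = NIO::Selector.new
selector.register(socket, :r)  # monitor the socket for readability

if selector.select(1)          # wait up to 1 second
  data = socket.read_nonblock(1024)
else
  puts "timed out after 1 second"
end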
Is there a way to find out how many bytes of data is available on an TCPSocket in Ruby? I.e. how many bytes can be ready without blocking?
The standard library io/wait might be useful here. Requiring it gives stream-based I/O (sockets and pipes) some new methods, among which is ready?. According to the documentation, ready? returns non-nil if there are bytes available without blocking. It just so happens that the non-nil value it returns is the number of bytes that are available in MRI.
Here's an example which creates a dumb little socket server, and then connects to it with a client. The server just sends "foo" and then closes the connection. The client waits a little bit to give the server time to send, and then prints how many bytes are available for reading. The interesting stuff for you is in the client:
require 'socket'
require 'io/wait'

# Server
server_socket = TCPServer.new('localhost', 0)
port = server_socket.addr[1]

Thread.new do
  session = server_socket.accept
  sleep 0.5
  session.puts "foo"
  session.close
end

# Client
client_socket = TCPSocket.new('localhost', port)
puts client_socket.ready? # => nil
sleep 1
puts client_socket.ready? # => 4
Don't use that server code in anything real. It's deliberately short in order to keep the example simple.
Note: According to the Pickaxe book, io/wait is only available if the "FIONREAD feature in ioctl(2)" is present, which it is on Linux. I don't know about Windows and others.
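If you'd rather get the byte count explicitly instead of relying on ready?'s return value, io/wait also provides IO#nread (a sketch reusing client_socket from above):

sleep 1
puts client_socket.nread # number of bytes readable without blocking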
I have this Ruby code that connects to a TCP server (namely, netcat). It loops 20 times and sends "ABCD ". If I kill netcat, it takes TWO iterations of the loop for an exception to be triggered. On the first loop after netcat is killed, no exception is triggered, and the write reports that 5 bytes have been correctly written... which in the end is not true, since the server never received them.
Is there a way to work around this issue? Right now I'm losing data: since I think it's been correctly transferred, I'm not replaying it.
#!/usr/bin/env ruby
require 'rubygems'
require 'socket'

sock = TCPSocket.new('192.168.0.10', 5443)
sock.sync = true

20.times do
  sleep 2
  begin
    count = sock.write("ABCD ")
    puts "Wrote #{count} bytes"
  rescue Exception => myException
    puts "Exception rescued : #{myException}"
  end
end
When you're sending data, your blocking call will return once the data is written to the TCP output buffer. It would only block if the buffer were full, waiting for the server to acknowledge receipt of previously sent data.
Once this data is in the buffer, the network drivers try to send the data. If the connection is lost, on the second attempt to write, your application discovers the broken state of the connection.
Also, how does the connection close? Is the server actively closing the connection? In that case the client socket will be notified at its next socket call. Or has it crashed? Or perhaps there's a network fault which means you can no longer communicate.
Discovering a broken connection only occurs when you try to send or receive data over the socket. This is different from having the connection actively closed. You simply can't determine if the connection is still alive without doing something with it.
So try doing sock.recv(0) after the write - if the socket has failed, this will raise "Errno::ECONNRESET: Connection reset by peer - recvfrom(2)". You could also try sock.sendmsg "", 0 (not sock.write or sock.send), which would report "Errno::EPIPE: Broken pipe - sendmsg(2)".
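In code, those probes might look like this (a sketch):

begin
  sock.write("ABCD ")
  sock.recv(0)         # probe: raises Errno::ECONNRESET if the peer reset
  sock.sendmsg("", 0)  # probe: raises Errno::EPIPE on a broken pipe
  puts "connection still looks alive"
rescue Errno::ECONNRESET, Errno::EPIPE => e
  puts "connection is dead: #{e.class}"
end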
Even if you got your hands on the TCP packets and received acknowledgement that the data had arrived at the other end, there's still no guarantee that the server will have processed this data - it might be in its input buffer but not yet processed.
All of this might help identify a broken connection earlier, but it still won't guarantee that the data was received and processed by the server. The only sure way to know that the application has processed your message is with an application level response.
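For example, the application-level response could be as simple as this (a sketch; the "OK" reply is an invented protocol your server would have to implement):

# Sketch: only treat the data as delivered once the server confirms it.
sock.write("ABCD ")
ack = sock.gets  # assumes the server replies "OK\n" per message
raise "message not acknowledged" unless ack && ack.chomp == "OK"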
I tried without the sleep function (just to make sure it wasn't holding anything up) and still no luck:
#!/usr/bin/env ruby
require 'rubygems'
require 'socket'
require 'activesupport' # Fixnum.seconds

sock = TCPSocket.new('127.0.0.1', 5443)
sock.sync = true

will_restart_at = Time.now + 2.seconds
should_continue = true

while should_continue
  if will_restart_at <= Time.now
    will_restart_at = Time.now + 2.seconds
    begin
      count = sock.write("ABCD ")
      puts "Wrote #{count} bytes"
    rescue Exception => myException
      puts "Exception rescued : #{myException}"
      should_continue = false
    end
  end
end
I analyzed both with Wireshark and the two solutions behave exactly the same.
I think (though I can't be sure) that the socket won't raise any error until you actually call your_socket.write; that call doesn't fail immediately because the socket still appears open - you weren't probing for its possible destruction.
I tried to simulate this with nginx and manual TCP sockets. And look at that:
irb> sock = TCPSocket.new('127.0.0.1', 80)
=> #<TCPSocket:0xb743b824>
irb> sock.write("salut")
=> 5
irb> sock.read
=> "<html>\r\n<head><title>400 Bad Request</title></head>\r\n<body>\r\n</body>\r\n</html>\r\n"
# Here, I kill nginx
irb> sock.write("salut")
=> 5
irb> sock.read
=> ""
irb> sock.write("salut")
Errno::EPIPE: Broken pipe
So what's the conclusion here? Unless you're actually expecting some data from the server, you have no way to detect that you've lost the connection :)
To detect a graceful close, you'll have to read from the socket - a read returning no data (the empty string above) indicates the socket has closed.
If you do need to know whether data got sent successfully though, there's no way other than implementing ACKs of the data at the application level.
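Putting that into a minimal liveness probe, for the case where you don't otherwise expect data (a sketch):

# Sketch: a non-blocking peek to tell "no data yet" apart from "peer closed".
begin
  chunk = sock.read_nonblock(1)  # raises EOFError once the peer has closed
  # real data arrived; handle (or buffer) chunk here
rescue IO::WaitReadable
  # nothing to read; the connection still looks alive
rescue EOFError, Errno::ECONNRESET
  puts "peer closed the connection"
end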
I have several embedded Linux systems for which I want to write a 'Who's Online?' network service in Ruby. Below is the relevant part of my code:
mySocket = UDPSocket.new
mySocket.bind("<broadcast>", 50050)

loop do
  begin
    text, sender = mySocket.recvfrom(1024)
    puts text
    if text =~ /KNOCK KNOCK/ then
      begin
        sock = UDPSocket.open
        sock.send(r.ipaddress, 0, sender[3], 50051)
        sock.close
      rescue
        retry
      end
    end
  rescue Exception => inLoopEx
    puts inLoopEx.message
    puts inLoopEx.backtrace.inspect
    retry
  end
end
I send the 'KNOCK KNOCK' command from a PC. Now, the problem is that since they all receive the message at the same time, they all try to respond at the same time too, which causes a Broken pipe exception (which is the reason for my rescue/retry code). This code works OK sometimes, but at other times the rescue/retry part (triggered by the Broken pipe exception from sock.send) causes one or more systems to respond only after 5 seconds or so.
Is there a better way of doing this, since I assume I can't escape the Broken pipe exception?
I have found that the exception was caused by the 'r.ipaddress' part of the send command, which is related to my embedded system's internals...