An older version of Net::SSH had #send_signal. Since that method no longer seems to be available, I have tried sending "\cC" via #send_data and also tried closing the channel, but the remote command keeps running.
What's the right way to send signals to a Net::SSH::Channel now?
You want to use request_pty to allocate a pseudo-terminal (PTY) on the channel; you can then use that terminal to send an interrupt.
Without a PTY you can't send control characters at all. Getting one can be as easy as calling request_pty.
It took me a while to figure this out, mostly because my setup is a bit more complicated: I have multiple SSH sessions running, and another thread needs to terminate all of their channels, so my code looks something like this:
def do_funny_things_to_server(host, user)
  Net::SSH.start(host, user) do |ssh|
    @connections << ssh
    ssh.open_channel do |chan|
      chan.on_data do |ch, data|
        puts data
      end
      # get a PTY with no echo back, this is mostly to hide the "^C" that
      # is otherwise output when I interrupt the channel
      chan.request_pty(modes: [[Net::SSH::Connection::Term::ECHO, 0]])
      chan.exec "tail -f /some/log/file"
    end
    ssh.loop(0.1)
    @connections.delete ssh
  end
end
def cancel_all
  @connections.each do |ssh|
    ssh.channels.each do |id, chan|
      chan.send_data "\x03"
    end
  end
end
### Rant:
There's so little documentation about the request_pty parameters that it might as well not exist. I had to figure out modes: by reading both the net-ssh source and the SSH RFC, and by making some educated guesses about the meaning of the word "flags" in section 8 of the RFC.
Someone pointed out in another relevant (though not Ruby-specific) answer that there's a "signal" channel message that can be used to send signals over the SSH connection, but also that OpenSSH (the implementation I use for the server) does not support it. If your server supports it, you might want to try it like this:
channel.send_channel_request 'signal', :string, 'INT'
See "signal" channel message in the RFC and the buffer.rb source file to understand the parameters. Insert here the expected rant about complete lack of documentation of how to use send_channel_request. The above suggestion is mostly to document for myself how to use this method.
The answer linked above also mentions an SSH extension called "break" which is supposedly supported by OpenSSH, but I couldn't get it to work to interrupt a session.
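For reference, RFC 4335 defines the "break" request as carrying a single uint32 break length in milliseconds, so (assuming send_channel_request encodes it with the :long buffer type, like the 'signal' example above) the call would presumably look something like the line below. It is untested here, and as said, I never got it to interrupt anything:

channel.send_channel_request 'break', :long, 500  # 500 ms break; didn't work for me against OpenSSH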
Related
I'm putting together a TCPServer in Ruby 3.0.2 and I'm finding that I can't seem to read the entire packet without blocking (until the socket is closed).
Edit: There was some confusion on what I was trying to do - my bad - so just to help clarify: I wanted to read everything that had been sent over the TCP connection so far. (end edit)
My first try was:
#!/snap/bin/ruby
require 'socket'

server = TCPServer.new('localhost', 4200)

loop {
  Thread.start(server.accept) do |connection|
    puts connection.gets # The important line
  end
}
But that hangs until the client closes the connection. Okay, so I took a look at connection.methods and the Ruby docs and tried a bunch of options that seemed promising. Basically, there are two types of read methods: blocking and non-blocking.
The blocking methods I tried are .read, .gets, .readlines, .readline, .recv, and .recvmsg. Now .read, .readlines, and .gets all hang (until the socket is closed) - so that's not helpful. The other ones (e.g. .readline and the recv methods) don't read the entire message. I could read each line until I see an empty line and parse the HTTP header from there, but there's got to be a better way; I don't want to have to worry about getting a corrupted message and hanging because I didn't read an empty line at the end of the header.
So I went looking at the non-blocking options, specifically .recv_nonblock and .recvmsg_nonblock. Both of these raise errors ("Resource temporarily unavailable - recvfrom(2) would block" and "Resource temporarily unavailable - recvmsg(2)", respectively).
Any ideas on what could be going on? I think it has something to do with me using Ruby 3, because when I try the code on Ruby 2.5, client.gets returns a line (doesn't hang), although .readlines does hang - so I'm not sure what's going on.
Ideally, I could just call something along the lines of client.get_message and I would get the entire message that has been sent, but I'd also be okay with working at the TCP level and getting the packet size, reading that size, and reconstructing the message from there.
TCP just transmits the bytes that you write to the socket, and guarantees that they are received in the order they were sent. If you want the concept of a 'message', you'll need to add that framing to your server and client yourself (there's a small sketch of one way to do that after the example below).
.gets specifically will block until it reads a new 'line', or whatever you define as the separator for the string - see the docs IO#gets. This means that until your server receives that byte from the client, it will block.
In your client, have a look at how you're writing your data. If the client is also Ruby, puts will work, since it terminates the string with a newline; write, on the other hand, writes the string without adding a newline.
For example:
# client.rb
c = TCPSocket.new 'localhost', 5000
c.puts "foo"
c.write "bar"
c.write "baz\n"

# server.rb
s = TCPServer.new 5000
loop do
  client = s.accept
  puts client.gets
  puts client.gets
end
will output
foo
barbaz
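One common way to add that 'message' concept is a length prefix: the sender writes the payload size first, then the payload, and the receiver reads exactly that many bytes. This is only a minimal sketch of the idea; the helper names are mine, not from any library:

# sender: 4-byte big-endian length, then the payload
def send_message(sock, payload)
  sock.write([payload.bytesize].pack('N') + payload)
end

# receiver: read the header, then exactly the announced number of bytes
def read_message(sock)
  header = sock.read(4) or return nil   # nil when the peer has closed the socket
  sock.read(header.unpack1('N'))
end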
Thanks to everyone who commented/answered, but I found the solution that I think was intended by the creators of the Socket class!
The recv_nonblock method takes some optional arguments - one of which is a buffer string into which the socket stores what it has read. So a call like client.recv_nonblock(1000, 0, buffer) stores up to 1000 bytes from the socket into buffer and returns immediately instead of blocking.
Just to make life easy, I put together a monkey patch to the TCPSocket class:
class TCPSocket
  def eat_buffer
    contents = ''
    buffer = ''
    begin
      loop {
        recv_nonblock(256, 0, buffer)
        contents += buffer
      }
    rescue IO::EAGAINWaitReadable
      contents
    end
  end
end
The point that Steffen makes in the comments is well taken - TCP isn't designed to be used this way. This is a hacky (in the bad sense) method, and should be avoided.
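If you do go this route anyway, a slightly cleaner variant (assuming Ruby >= 2.3) is to pass exception: false so the non-blocking read returns :wait_readable instead of raising; the method name here is made up for illustration:

class TCPSocket
  # like eat_buffer, but without using an exception for control flow
  def drain_buffer
    contents = +''
    loop do
      chunk = read_nonblock(256, exception: false)
      break if chunk == :wait_readable || chunk.nil?  # nothing left to read, or peer closed
      contents << chunk
    end
    contents
  end
end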
I'm trying to check whether a given host is up, running, and listening on a specific port, and to handle any errors correctly.
I found a number of references on Ruby socket programming, but none of them seems able to handle a socket time-out efficiently. I tried IO.select, which takes four parameters, of which the last one is the timeout value:
IO.select([TCPSocket.new('example.com', 22)], [nil], [nil], 4)
The problem is, it gets stuck, especially if the port number is wrong or the server is not listening on it. So finally I ended up with this, which I don't like much but which does the job:
require 'socket'
require 'timeout'

dns = "example.com"
begin
  Timeout::timeout(3) { TCPSocket.new(dns, 22) }
  puts "Responded!!"
  # do some stuff here...
rescue SocketError
  puts "No connection!!"
  # do some more stuff here...
rescue Timeout::Error
  puts "No connection, timed out!!"
  # do some other stuff here...
end
Is there a better way doing this?
The best test for availability of any resource is to try to use it. Adding extra code to try to predict ahead of time whether the use will work is bound to fail:
You test the wrong thing and get a different answer.
You test the right thing but at the wrong time, and the answer changes between the test and the use; your application then does double the work for nothing, and you've written redundant code.
The code you have to write to handle the test failure is identical to the code you should write to handle the use failure. Why write that twice?
We make extensive use of Net::SSH in one of our systems, and ran into timeout issues.
Probably the biggest fix was to use the select method to set a low-level timeout, rather than the Timeout class, which is thread-based.
"How do I set the socket timeout in Ruby?" and "Set socket timeout in Ruby via SO_RCVTIMEO socket option" have code worth investigating for that. One of them also links to "Socket Timeouts in Ruby", which has useful code; be aware, though, that it was written for Ruby 1.8.6.
The version of Ruby can make a difference too. Pre-1.9, the threading wasn't capable of interrupting a blocking IO call, so the code would hang until the socket timed out and only then would Timeout fire. Both of the above questions go over that.
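To make the select-based approach concrete, here is a rough sketch of a port check built on connect_nonblock plus IO.select, with no Timeout module involved. The method name and the 3-second limit are placeholders, and on reasonably recent Rubies Socket.tcp(host, port, connect_timeout: 3) gets you much the same thing in one call:

require 'socket'

def port_open?(host, port, timeout = 3)
  addr     = Socket.getaddrinfo(host, nil).first
  sockaddr = Socket.pack_sockaddr_in(port, addr[3])
  socket   = Socket.new(Socket.const_get(addr[0]), Socket::SOCK_STREAM, 0)
  begin
    socket.connect_nonblock(sockaddr)
    true
  rescue IO::WaitWritable
    # the connect is in progress; wait up to `timeout` seconds for it to finish
    if IO.select(nil, [socket], nil, timeout)
      begin
        socket.connect_nonblock(sockaddr)  # check the outcome of the connect
        true
      rescue Errno::EISCONN
        true                               # already connected
      rescue SystemCallError
        false                              # refused, unreachable, etc.
      end
    else
      false                                # select timed out
    end
  rescue SystemCallError
    false
  ensure
    socket.close
  end
end

puts port_open?('example.com', 22) ? "Responded!!" : "No connection!!"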
So, I'm trying to simulate some basic HTTP persistent connections using sockets and Ruby - for a college class.
The point is to build a server - able to handle multiple clients - that receives a file path and gives back the file content - just like an HTTP GET.
The current server implementation loops listening for clients, fires a new thread when there's an incoming connection, and reads the file paths from that socket. It's very dumb, but it works fine when working with non-persistent connections - one request per connection.
But they should be persistent.
Which means the client shouldn't worry about closing the connection. In the non-persistent version the server echoes the response and closes the connection - goodbye client, farewell.
But being persistent means the server thread should loop and wait for more incoming requests until... well, until there are no more requests. How does the server know that? It doesn't! Some sort of timeout is needed. I tried to do that with Ruby's Timeout, but it didn't work.
Googling for some solutions - besides being thoroughly advised to avoid using the Timeout module - I've seen a lot of posts about the IO.select method, which should handle this socket-waiting issue much better than using threads and stuff (which really sounds cool, considering how Ruby threads (don't) work). I'm trying to understand how IO.select works, but I still wasn't able to make it work in the current scenario.
So I'm basically asking two things:
how can I efficiently work this timeout issue on the server-side, either using some thread based solution, low-level socket options or some IO.select magic?
how can the client side know that the server has closed its side of the connection?
Here's the current code for the server:
require 'date'

module Sockettp
  class Server
    def initialize(dir, port = Sockettp::DEFAULT_PORT)
      @dir = dir
      @port = port
    end

    def start
      puts "Starting Sockettp server..."
      puts "Serving #{@dir.yellow} on port #{@port.to_s.green}"

      Socket.tcp_server_loop(@port) do |socket, client_addrinfo|
        handle socket, client_addrinfo
      end
    end

    private

    def handle(socket, addrinfo)
      Thread.new(socket) do |client|
        log "New client connected"
        begin
          loop do
            if client.eof?
              puts "#{'-' * 100} end connection"
              break
            end

            input = client.gets.chomp
            body = content_for(input)

            response = {}
            if body
              response.merge!({
                status: 200,
                body: body
              })
            else
              response.merge!({
                status: 404,
                body: Sockettp::STATUSES[404]
              })
            end

            log "#{addrinfo.ip_address} #{input} -- #{response[:status]} #{Sockettp::STATUSES[response[:status]]}".send(response[:status] == 200 ? :green : :red)
            client.puts(response.to_json)
          end
        ensure
          socket.close
        end
      end
    end

    def content_for(path)
      path = File.join(@dir, path)
      return File.read(path) if File.file?(path)
      return Dir["#{path}/*"] if File.directory?(path)
    end

    def log(msg)
      puts "#{Thread.current} -- #{DateTime.now.to_s} -- #{msg}"
    end
  end
end
Update
I was able to simulate the timeout behaviour using the IO.select method, but the implementation doesn't feel good when combined with a couple of threads for accepting new connections and another couple for handling requests. The concurrency makes the situation mad and unstable, and I'm probably not sticking with it unless I can figure out a better way of using this solution.
Update 2
Seems like Timeout is still the best way to handle this. I'm sticking with it till I find a better option.
I still don't know how to deal with zombie client connections.
Solution
I ended up using IO.select (I got inspired by looking at the WEBrick code). You can check the final version here (lib/http/server/client_handler.rb)
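For anyone landing here, the idle-timeout part of that approach boils down to something like the sketch below: wait on the client socket with IO.select and a timeout, and drop the connection if it returns nil. The constant, the handle method, and the exact structure are illustrative guesses, not code from the linked repo:

IDLE_TIMEOUT = 15

def serve(client)
  loop do
    ready = IO.select([client], nil, nil, IDLE_TIMEOUT)
    break unless ready                  # timed out: treat the client as gone
    request = client.gets
    break if request.nil?               # client closed its end of the connection
    client.puts handle(request.chomp)   # handle() stands in for building the response
  end
ensure
  client.close
end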
You should implement something like heartbeat packets. The client side should send a special packet every few seconds/minutes to ensure that the server doesn't time out the connection on the client's end. You just avoid doing anything in response to this call.
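A bare-bones illustration of that idea, with an arbitrary "ping" marker and 10-second interval (both made up here):

# client side: keep the connection looking alive
def start_heartbeat(socket, interval = 10)
  Thread.new do
    loop do
      sleep interval
      socket.puts "ping"
    end
  end
end

# server side, inside the request loop, just ignore heartbeats:
#   next if request == "ping"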
The console application I would like to control is Bluesoleil. It's Bluetooth software/a driver, but the details of the software aren't that important, I think. What I want to do is basically type console commands in a Windows or Linux terminal environment, using a web browser running a Ruby on Rails app.
So the high-level design of the Ruby on Rails app would be something like this:
A web browser shows a page with a UI for Bluesoleil
The Ruby on Rails app renders the page for the UI, takes in commands from the user, and displays the result through the web browser, just like a regular Ruby on Rails app
On the backend, the Ruby on Rails app types commands into the console that is running Bluesoleil, and the result shown in the console is grabbed as a string by Ruby on Rails.
Is something like this possible with Ruby on Rails?
Just to clear possible confusion, when I say console and console application here, I don't mean Rails console or Ruby console. Console here is just a terminal environment running console applications and so on.
Thank you.
If you only need to run a "one-off" command, just use backticks. If you need to maintain a long-running background process, which accepts commands and returns responses, you can do something like this (some of the details have been edited out, since this code is from a proprietary application):
class Backend
  def initialize
    @running = false
    @server  = nil
    # if we forget to call "stop", make sure to close down the background process on exit
    ObjectSpace.define_finalizer(self, lambda { stop if @running })
  end

  def start
    stop if @running
    @server  = IO.popen("backend", "w+")
    @running = true
  end

  def stop
    return if not @running
    @server << "exit\n"
    @server.flush
    @running = false
  end

  def query(*args)
    raise "backend not running" if not @running
    @server << "details edited out\n"
    @server.flush
    loop do
      response = parse_response
      # handle response
      # break loop when backend is finished sending data
    end
  end

  private

  def parse_response
    # details edited out, uses c = @server.getc to read data from backend
    # getc will block if there is nothing to read,
    # so there needs to be an unambiguous terminator telling you where
    # to stop reading
  end
end
You can adapt this to your own needs. Beware of situations where the background process dies and leaves the main process hanging.
Although it doesn't apply to your situation, if you are designing the background program yourself: Build the background process so that if ANYTHING makes it crash, it will send an unambiguous message like "PANIC" or something which tells the main process to either exit with an error message, or try starting another background process. Also, make sure it is completely unambiguous where each "message" begins and ends, and test the coordination between main/background process well -- if there are bugs on either end, it is very easy to get a situation where both processes get stuck waiting for each other. One more thing: design the "protocol" which the 2 processes speak to each other in a way which makes it easy to maintain synchronization between the 2.
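To make the 'unambiguous terminator' point concrete, a parse_response under that kind of protocol might look roughly like this; the "END" and "PANIC" markers are invented for illustration, since the real protocol was edited out of the code above:

def parse_response
  lines = []
  while (line = @server.gets)
    line.chomp!
    raise "backend crashed" if line == "PANIC"  # background process signalled a failure
    break if line == "END"                      # terminator: the reply is complete
    lines << line
  end
  lines
end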
Ryan Tomayko touched off quite a fire storm with this post about using Unix process control commands.
We should be doing more of this. A lot more of this. I'm talking about fork(2), execve(2), pipe(2), socketpair(2), select(2), kill(2), sigaction(2), and so on and so forth. These are our friends. They want so badly just to help us.
I have a bit of code (a delayed_job clone for DataMapper) that I think would fit right in with this, but I'm not clear on how to take advantage of the listed commands. Any ideas on how to improve this code?
def start
  say "*** Starting job worker #{@name}"

  t = Thread.new do
    loop do
      delay = Update.work_off(self)
      break if $exit
      sleep delay
      break if $exit
    end
    clear_locks
  end

  trap('TERM') { terminate_with t }
  trap('INT')  { terminate_with t }

  trap('USR1') do
    say "Wakeup Signal Caught"
    t.run
  end
end
Ahh yes... the dangers of "We should do more of this" without explaining what each of those does and in what circumstances you'd use them. For something like delayed_job you may even be using fork without knowing that you're using fork. That said, it really doesn't matter. Ryan was talking about using fork for preforking servers. delayed_job would use fork for turning a process into a daemon (a sketch of that pattern follows the list below). Same system call, different purposes. Running delayed_job in the foreground (without fork) vs in the background (with fork) will result in a negligible performance difference.
However, if you write a server that accepts concurrent connections, now Ryan's advice is right on the money.
fork: creates a copy of the original process
execve: stops executing the current file and begins executing a new file in the same process (very useful in rake tasks)
pipe: creates a pipe (two file descriptors, one for read, one for write)
socketpair: like a pipe, but for sockets
select: lets you wait, with a timeout, for one or more of several file descriptors to become ready
kill: used to send a signal to a process
sigaction: lets you change what happens when a process receives a signal
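As a rough sketch of the "fork to turn a process into a daemon" use mentioned above, this is the classic detach sequence; on Ruby 1.9+ Process.daemon does essentially the same thing for you:

def daemonize!
  exit if fork                 # parent exits, child carries on
  Process.setsid               # child becomes a session leader, detached from the tty
  exit if fork                 # second fork: the survivor can never reacquire a controlling tty
  Dir.chdir "/"
  $stdin.reopen  "/dev/null"
  $stdout.reopen "/dev/null", "a"
  $stderr.reopen "/dev/null", "a"
end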
5 months later, you can view my solution at http://github.com/antarestrader/Updater. Look at lib/updater/fork_worker.rb