Anything wrong with "throw" inside a PTY.spawn?

This is for a dev machine and testing purposes. I'm trying to start a script that starts a server, then ignores it once the output meets a condition. The process does not end, so I can't use backticks. I will be -QUITing the process independently (somewhere else in the script). So far this is the closest I've got:
require "pty"
catch :ready do
begin
PTY.spawn( "bin/start" ) do |stdin, stdout, pid|
begin
stdin.each { |line|
throw(:ready) if line['worker ready']
}
rescue Errno::EIO
puts "no more output"
end
end
rescue PTY::ChildExited
puts "child process exit"
end
end # === end of catch block
# Continue with the rest of the script...
So far it works, but I am new to Ruby and processes. All the other solutions I've seen online require pipes, forks, and Process.detach. Am I doing anything wrong by using throw while I am in a PTY.spawn?
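For what it's worth, nothing here requires a non-local exit: the block-less form of PTY.spawn returns the IO objects directly, so a plain break does the same job. A minimal sketch, under the same assumptions as above (a bin/start script that eventually prints 'worker ready'):
require "pty"

# Without a block, PTY.spawn returns [output, input, pid] directly,
# so an ordinary `break` replaces the catch/throw pair.
stdout, stdin, pid = PTY.spawn("bin/start")
begin
  stdout.each do |line|
    break if line['worker ready']
  end
rescue Errno::EIO, PTY::ChildExited
  puts "no more output"
end
# The server keeps running; signal it later with e.g. Process.kill("QUIT", pid)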

Related

Fork child process with timeout and capture output

Say I have a function like the one below: how do I capture the output of the Process.spawn call? I should also be able to kill the process if it takes longer than a specified timeout.
Note that the function must also be cross-platform (Windows/Linux).
require 'timeout'

def execute_with_timeout!(command)
  begin
    pid = Process.spawn(command) # How do I capture output of this process?
    status = Timeout::timeout(5) {
      Process.wait(pid)
    }
  rescue Timeout::Error
    Process.kill('KILL', pid)
  end
end
Thanks.
You can use IO.pipe and tell Process.spawn to use the redirected output, without the need for an external gem.
Of course, this only works starting with Ruby 1.9.2 (and I personally recommend 1.9.3).
The following is a simple implementation, used internally by Spinach BDD, to capture both out and err outputs:
# stdout, stderr pipes
rout, wout = IO.pipe
rerr, werr = IO.pipe

pid = Process.spawn(command, :out => wout, :err => werr)

# close the parent's copies of the write ends so the reads below
# see EOF once the child exits (closing them only after wait2 can
# deadlock when the child fills a pipe buffer)
wout.close
werr.close

@stdout = rout.readlines.join("\n")
@stderr = rerr.readlines.join("\n")

# dispose of the read ends of the pipes
rout.close
rerr.close

_, status = Process.wait2(pid)
@last_exit_status = status.exitstatus
The original source is in features/support/filesystem.rb.
It is highly recommended that you read Ruby's own Process.spawn documentation.
Hope this helps.
PS: I left the timeout implementation as homework for you ;-)
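For reference, a minimal sketch of what that homework might look like, combining the same pipe capture with the stdlib timeout library (the 5-second limit and the @output / @status names are illustrative, not part of the original):
require 'timeout'

rout, wout = IO.pipe
pid = Process.spawn(command, :out => wout, :err => wout)
wout.close # parent's copy; the child still holds its own

begin
  Timeout.timeout(5) do
    @output = rout.read # blocks until the child closes its end
    _, @status = Process.wait2(pid)
  end
rescue Timeout::Error
  Process.kill('KILL', pid) # give up on the child...
  Process.wait(pid)         # ...and reap it
ensure
  rout.close
end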
I followed Anselm's advice in his post on the Ruby forum here.
The function looks like this:
require 'timeout'

def execute_with_timeout!(command, timeout = 5)
  begin
    pipe = IO.popen(command, 'r')
  rescue Exception => e
    raise "Execution of command #{command} unsuccessful"
  end
  output = ""
  begin
    status = Timeout::timeout(timeout) {
      Process.waitpid2(pipe.pid)
      output = pipe.gets(nil)
    }
  rescue Timeout::Error
    Process.kill('KILL', pipe.pid)
  end
  pipe.close
  output
end
This does the job, but I'd rather use a third-party gem that wraps this functionality. Anyone have a better way of doing this? I have tried Terminator; it does exactly what I want, but it does not seem to work on Windows.

ruby exit tail and continue to next part of script from keypress

I found this snippet of code, and I am having trouble finding a good way to exit the tail and move on to the next part of the script. Essentially, I want the first part to make a change for the person running the script, show log output using the code below, then move on to the next part of the script on a user keypress. I can't get out of the tail without Ctrl-C.
def do_tail( session, file )
  session.open_channel do |channel|
    channel.on_data do |ch, data|
      puts "[#{file}] -> #{data}"
    end
    channel.exec "tail -f #{file}"
  end
end

Net::SSH.start("server", "user", :keys => ["/user/.ssh/id_dsa"]) do |session|
  do_tail session, "/var/log/apache2/error.log"
  do_tail session, "/var/log/apache2/access.log"
  session.loop
end
UPDATE
The -f takes over I/O and makes it difficult to exit that SSH channel. I decided to follow the suggestions and modify the script. Here is the result, in case someone else would like help on this topic.
require 'rubygems'
require 'net/ssh'

def exit?
  begin
    while input = STDIN.read_nonblock(1)
      return true if input == 'q'
    end
    false
  rescue Errno::EINTR
    false
  rescue Errno::EAGAIN
    false
  rescue EOFError
    true
  end
end

def do_tail( session, file )
  session.open_channel do |channel|
    channel.on_data do |ch, data|
      puts "[#{file}]\n\n#{data}"
    end
    channel.exec "tail -n22 #{file}"
  end
end

def loggy
  iteration = 0
  loop do
    iteration += 1
    Net::SSH.start("server", "user", :keys => ["/user/.ssh/id_dsa"]) do |session|
      do_tail session, "/var/log/apache2/error.log"
    end
    puts "\n\nType 'q' and <ENTER> to exit log stream when you are done!\n\n"
    sleep 5
    break if exit? or iteration == 3
  end
end

loggy

loop do
  puts "\nDo you need to view more of the log? (y/n)\n"
  confirm = gets.chomp
  loggy if confirm == "y"
  break if confirm == "n"
end
puts "Part Deaux!"
You've given the -f command line option to tail(1). That explicitly instructs tail(1) to follow the file forever, so it exits only when the user types ^C or otherwise kills the program. If you just want a specific amount of the file to be shown and not followed, then you might wish to use the -n command line option instead:
channel.exec "tail -n 24 #{file}"
24 will show roughly one terminal's worth of data, though if your terminals are larger or smaller -- or you're interested in different amounts of data -- then you might wish to tweak it further.
tail(1) is powerful; it'd be worth reading its documentation just in case there's an even better way to do what you're trying to accomplish.
The solution I found was to use less instead of tail. Try this:
less +F filename; echo 'after less'
When you hit Ctrl+C and then q in less, it will quit and then run the echo.

How to execute interactive shell program on a remote host from ruby

I am trying to execute an interactive shell program on a remote host from another ruby program. For the sake of simplicity let's suppose that the program I want to execute is something like this:
puts "Give me a number:"
number = gets.chomp()
puts "You gave me #{number}"
The most successful approach so far has been the one I got from here. It is this one:
require 'open3'

Open3.popen3("ssh -tt root@remote 'ruby numbers.rb'") do |stdin, stdout, stderr|
  # stdin  = input stream
  # stdout = output stream
  # stderr = stderr stream
  threads = []
  threads << Thread.new(stderr) do |terr|
    while (line = terr.gets)
      puts "stderr: #{line}"
    end
  end
  threads << Thread.new(stdout) do |tout|
    while (line = tout.gets)
      puts "stdout: #{line}"
    end
  end
  sleep(2)
  puts "Give me an answer: "
  answer = gets.chomp()
  stdin.puts answer
  threads.each { |t| t.join() } # in order to clean up when you're done
end
The problem is that this is not "interactive" enough for me, and the program I would actually like to execute (not the simple numbers.rb) has a lot more input/output. You can think of it as an apt-get install that will ask you for some input to resolve some prompts.
I have read about Net::SSH and PTY, but I couldn't tell whether they would be the (easy/elegant) solution I am looking for.
The ideal solution would make it so the user does not realize that the I/O is being done on a remote host: the local stdin goes to the remote host's stdin, and the stdout from the remote host comes back to me and I show it.
If you have any ideas I could try I will be happy to hear them. Thank you!
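One direction that might fit, sketched against the Net::SSH API (the host, user, and key path are assumptions carried over from the snippets above; untested against a real interactive program): request a PTY on the channel, print whatever the remote side emits, and pump local stdin into the channel from the event loop.
require 'net/ssh'

Net::SSH.start('remote', 'root', :keys => ['/user/.ssh/id_dsa']) do |ssh|
  channel = ssh.open_channel do |ch|
    # A PTY makes the remote program behave as if a user were attached
    ch.request_pty do |_, success|
      raise 'could not obtain a PTY' unless success
    end
    ch.exec "ruby numbers.rb"
    ch.on_data          { |_, data|    $stdout.print data }
    ch.on_extended_data { |_, _, data| $stderr.print data }
  end

  # Run the event loop, forwarding local keystrokes to the channel
  ssh.loop(0.1) do
    if IO.select([$stdin], nil, nil, 0)
      channel.send_data($stdin.readpartial(1024))
    end
    channel.active?
  end
end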
Try this:
require "readline"
require 'open3'
Open3.popen3("ssh -tt root#remote 'ruby numbers.rb'") do |i, o, e, th|
Thread.new {
while !i.closed? do
input =Readline.readline("", true).strip
i.puts input
end
}
t_err = Thread.new {
while !e.eof? do
putc e.readchar
end
}
t_out = Thread.new {
while !o.eof? do
putc o.readchar
end
}
Process::waitpid(th.pid) rescue nil
# "rescue nil" is there in case process already ended.
t_err.join
t_out.join
end
I got it working, but don't ask me why it works. It was mainly trial/error.
Alternatives:
Using Net::SSH, you need to use :on_process and a Thread: see "ruby net/ssh channel dies?". Don't forget to add session.loop(0.1); more info at the link. The Thread/:on_process idea inspired me to write a gem for my own use: https://github.com/da99/Chee/blob/master/lib/Chee.rb
If the last call in your Ruby program is SSH, then you can exec ssh -tt root@remote 'ruby numbers.rb'. But if you still want interactivity between User<->Ruby<->SSH, then the previous alternative is the best.
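That exec variant looks like this; note that exec replaces the current Ruby process, so nothing after it will run:
# Hand the terminal straight over to ssh; Ruby is replaced by this process.
exec "ssh", "-tt", "root@remote", "ruby numbers.rb"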

Why is IO::WaitReadable being raised differently for STDOUT than STDERR?

Given that I wish to test non-blocking reads from a long command, I created the following script, saved it as long, made it executable with chmod 755, and placed it in my path (saved as ~/bin/long where ~/bin is in my path).
I am on a *nix variant with ruby 1.9.2p290 (2011-07-09 revision 32553) [x86_64-darwin11.0.0], compiled with RVM defaults. I do not use Windows, and am therefore unsure whether the test script will work for you if you do.
#!/usr/bin/env ruby

3.times do
  STDOUT.puts 'message on stdout'
  STDERR.puts 'message on stderr'
  sleep 1
end
Why does long_err produce each STDERR message as it is printed by "long"
def long_err( bash_cmd = 'long', maxlen = 4096)
  stdin, stdout, stderr = Open3.popen3(bash_cmd)
  begin
    begin
      puts 'err -> ' + stderr.read_nonblock(maxlen)
    end while true
  rescue IO::WaitReadable
    IO.select([stderr])
    retry
  rescue EOFError
    puts 'EOF'
  end
end
while long_out remains blocked until all STDOUT messages are printed?
def long_out( bash_cmd = 'long', maxlen = 4096)
  stdin, stdout, stderr = Open3.popen3(bash_cmd)
  begin
    begin
      puts 'out -> ' + stdout.read_nonblock(maxlen)
    end while true
  rescue IO::WaitReadable
    IO.select([stdout])
    retry
  rescue EOFError
    puts 'EOF'
  end
end
I assume you will require 'open3' before testing either function.
Why is IO::WaitReadable being raised differently for STDOUT than STDERR?
Workarounds using other ways to start subprocesses also appreciated if you have them.
In most OSes, STDOUT is buffered while STDERR is not. What popen3 does is basically open a pipe between Ruby and the executable you launch.
Any output that is in buffered mode is not sent through this pipe until either:
The buffer is filled (thereby forcing a flush).
The sending application exits (EOF is reached, forcing a flush).
The stream is explicitly flushed.
The reason STDERR is not buffered is that it's usually considered important for error messages to appear instantly, rather than be buffered for the sake of efficiency.
So, knowing this, you can emulate STDERR behaviour with STDOUT like this:
#!/usr/bin/env ruby

3.times do
  STDOUT.puts 'message on stdout'
  STDOUT.flush
  STDERR.puts 'message on stderr'
  sleep 1
end
and you will see the difference.
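Alternatively, you can turn off Ruby's output buffering once at the top of the script instead of flushing after every write:
STDOUT.sync = true # every write is flushed immediately, like STDERR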
You might also want to check "Understanding Ruby and OS I/O buffering".
Here's the best I've got so far for starting subprocesses. I launch a lot of network commands so I needed a way to time them out if they take too long to come back. This should be handy in any situation where you want to remain in control of your execution path.
I adapted this from a Gist, adding code to test the exit status of the command for 3 outcomes:
Successful completion (exit status 0)
Error completion (exit status is non-zero) - raises an exception
Command timed out and was killed - raises an exception
Also fixed a race condition, simplified parameters, added a few more comments, and added debug code to help me understand what was happening with exits and signals.
Call the function like this:
output = run_with_timeout("command that might time out", 15)
output will contain the combined STDOUT and STDERR of the command if it completes successfully. If the command doesn't complete within 15 seconds it will be killed and an exception raised.
Here's the function (you'll need require 'open3' and the 2 constants defined at the top):
require 'open3'

DEBUG = false # change to true for some debugging info
BUFFER_SIZE = 4096 # in bytes, this should be fine for many applications

def run_with_timeout(command, timeout)
  output = ''
  tick = 1
  begin
    # Start task in another thread, which spawns a process
    stdin, stderrout, thread = Open3.popen2e(command)
    # Get the pid of the spawned process
    pid = thread[:pid]
    start = Time.now

    while (Time.now - start) < timeout and thread.alive?
      # Wait up to `tick' seconds for output/error data
      Kernel.select([stderrout], nil, nil, tick)
      # Try to read the data
      begin
        output << stderrout.read_nonblock(BUFFER_SIZE)
        puts "we read some data..." if DEBUG
      rescue IO::WaitReadable
        # No data was ready to be read during the `tick', which is fine
        print "." # give feedback each tick that we're waiting
      rescue EOFError
        # Command has completed, not really an error...
        puts "got EOF." if DEBUG
        # Wait briefly for the thread to exit...
        # We don't want to kill the process if it's about to exit on its
        # own. We decide success or failure based on whether the process
        # completes successfully.
        sleep 1
        break
      end
    end

    if thread.alive?
      # The timeout has been reached and the process is still running, so
      # we need to kill the process, because killing the thread leaves
      # the process alive but detached.
      Process.kill("TERM", pid)
    end
  ensure
    stdin.close if stdin
    stderrout.close if stderrout
  end

  status = thread.value # returns Process::Status when the process ends
  if DEBUG
    puts "thread.alive?: #{thread.alive?}"
    puts "status: #{status}"
    puts "status.class: #{status.class}"
    puts "status.exited?: #{status.exited?}"
    puts "status.exitstatus: #{status.exitstatus}"
    puts "status.signaled?: #{status.signaled?}"
    puts "status.termsig: #{status.termsig}"
    puts "status.stopsig: #{status.stopsig}"
    puts "status.stopped?: #{status.stopped?}"
    puts "status.success?: #{status.success?}"
  end

  # See how the process ended: .success? => true, false, or nil if exited? is not true
  if status.success? == true # process exited (0)
    return output
  elsif status.success? == false # process exited (non-zero)
    raise "command `#{command}' returned non-zero exit status (#{status.exitstatus}), see below output\n#{output}"
  elsif status.signaled? # we killed the process (timeout reached)
    raise "shell command `#{command}' timed out and was killed (timeout = #{timeout}s): #{status}"
  else
    raise "process didn't exit and wasn't signaled. We shouldn't get here."
  end
end
Hope this is useful.

How to proxy a shell process in ruby

I'm creating a script to wrap jdb (java debugger). I essentially want to wrap this process and proxy the user interaction. So I want it to:
start jdb from my script
send the output of jdb to stdout
pause and wait for input when jdb does
when the user enters commands, pass it to jdb
At the moment I really want a pass-through to jdb. The reason for this is to initialize the process with specific parameters and potentially add more commands in the future.
Update:
Here's the shell of what ended up working for me using expect:
PTY.spawn("jdb -attach 1234") do |read,write,pid|
write.sync = true
while (true) do
read.expect(/\r\r\n> /) do |s|
s = s[0].split(/\r\r\n/)
s.pop # get rid of prompt
s.each { |line| puts line }
print '> '
STDOUT.flush
write.print(STDIN.gets)
end
end
end
Use Open3.popen3(). e.g.:
Open3.popen3("jdb args") { |stdin, stdout, stderr|
# stdin = jdb's input stream
# stdout = jdb's output stream
# stderr = jdb's stderr stream
threads = []
threads << Thread.new(stderr) do |terr|
while (line = terr.gets)
puts "stderr: #{line}"
end
end
threads << Thread.new(stdout) do |terr|
while (line = terr.gets)
puts "stdout: #{line}"
end
end
stdin.puts "blah"
threads.each{|t| t.join()} #in order to cleanup when you're done.
}
I've given you an example using threads, but you will of course want to be responsive to what jdb is doing. The above is merely a skeleton for how you open the process and handle communication with it.
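To turn that skeleton into an actual pass-through, a third thread can pump local keystrokes into jdb's stdin. A sketch, keeping the placeholder "jdb args" command from above:
require 'open3'

Open3.popen3("jdb args") do |stdin, stdout, stderr, wait_thr|
  # One pump per stream so that no stream blocks the others
  pumps = [
    Thread.new { IO.copy_stream(STDIN,  stdin)  },
    Thread.new { IO.copy_stream(stdout, STDOUT) },
    Thread.new { IO.copy_stream(stderr, STDERR) },
  ]
  wait_thr.join      # wait for jdb itself to exit
  pumps.each(&:kill) # then stop the pump threads
end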
The Ruby standard library includes expect, which is designed for just this type of problem. See the documentation for more information.
