Exit tail and continue to next part of script from keypress - ruby

I found this snippet of code, and I am having trouble finding a good way to exit the tail and move on to the next part of the script. Essentially I want the first part to make a change for the person running the script, show log output using the code below, and then move on to the next part of the script on a user keypress. I can't get out of the tail without CTRL-C.
def do_tail( session, file )
  session.open_channel do |channel|
    channel.on_data do |ch, data|
      puts "[#{file}] -> #{data}"
    end
    channel.exec "tail -f #{file}"
  end
end

Net::SSH.start("server", "user", :keys => ["/user/.ssh/id_dsa"]) do |session|
  do_tail session, "/var/log/apache2/error.log"
  do_tail session, "/var/log/apache2/access.log"
  session.loop
end
UPDATE
The -f takes over I/O and makes it difficult to exit that SSH channel. I decided to follow the suggestions and modify the approach. Here is the result, in case someone else would like help on this topic.
require 'rubygems'
require 'net/ssh'

def exit?
  begin
    while input = STDIN.read_nonblock(1)
      return true if input == 'q'
    end
    false
  rescue Errno::EINTR
    false
  rescue Errno::EAGAIN
    false
  rescue EOFError
    true
  end
end

def do_tail( session, file )
  session.open_channel do |channel|
    channel.on_data do |ch, data|
      puts "[#{file}]\n\n#{data}"
    end
    channel.exec "tail -n22 #{file}"
  end
end

def loggy
  iteration = 0
  loop do
    iteration += 1
    Net::SSH.start("server", "user", :keys => ["/user/.ssh/id_dsa"]) do |session|
      do_tail session, "/var/log/apache2/error.log"
    end
    puts "\n\nType 'q' and <ENTER> to exit log stream when you are done!\n\n"
    sleep 5
    break if exit? or iteration == 3
  end
end

loggy

loop do
  puts "\nDo you need to view more of the log? (y/n)\n"
  confirm = gets.chomp
  loggy if confirm == "y"
  break if confirm == "n"
end

puts "Part Deaux!"

You've given the -f command line option to tail(1). That explicitly instructs tail(1) to keep following the file; it will not exit until the user types ^C or otherwise kills the program. If you just want a specific amount of the file to be shown and not followed, then you might wish to use the -n command line option instead:
channel.exec "tail -n 24 #{file}"
24 will show roughly one terminal's worth of data, though if your terminals are larger or smaller -- or you're interested in different amounts of data -- then you might wish to tweak it further.
tail(1) is powerful; it'd be worth reading its documentation just in case there's an even better way to do what you're trying to accomplish.

The solution I found was to use less instead of tail. Try this:
less +F filename; echo 'after less'
When you hit Ctrl+C and then q in less, it will quit and then echo what you want.
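The useful property for scripting is that less blocks until the user quits, so whatever follows runs afterwards. A minimal local sketch in Ruby (the log path is borrowed from the question above):

system("less +F /var/log/apache2/error.log")  # returns once the user quits less
puts "Part Deaux!"                            # the next part of the script runs here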

Related

Anything wrong with "throw" inside a PTY.spawn?

This is for a dev machine and testing purposes. I'm trying to start a script that starts a server, then ignores it once the output meets a condition. The process does not end, so I can't use backticks. I will be -QUITing the process independently (somewhere else in the script). So far this is the closest I've got:
require "pty"
catch :ready do
begin
PTY.spawn( "bin/start" ) do |stdin, stdout, pid|
begin
stdin.each { |line|
throw(:ready) if line['worker ready']
}
rescue Errno::EIO
puts "no more output"
end
end
rescue PTY::ChildExited
puts "child process exit"
end
end # === end of catch block
# Continue with the rest of the script...
So far it works, but I am new to Ruby and processes. All the other solutions I've seen online require pipes, forks, and Process.detach. Am I doing anything wrong by using throw while I am in a PTY.spawn?

How to execute interactive shell program on a remote host from ruby

I am trying to execute an interactive shell program on a remote host from another ruby program. For the sake of simplicity let's suppose that the program I want to execute is something like this:
puts "Give me a number:"
number = gets.chomp()
puts "You gave me #{number}"
The most successful approach so far has been the one I got from here. It is this one:
require 'open3'

Open3.popen3("ssh -tt root@remote 'ruby numbers.rb'") do |stdin, stdout, stderr|
  # stdin  = input stream
  # stdout = output stream
  # stderr = stderr stream
  threads = []
  threads << Thread.new(stderr) do |terr|
    while (line = terr.gets)
      puts "stderr: #{line}"
    end
  end
  threads << Thread.new(stdout) do |tout|
    while (line = tout.gets)
      puts "stdout: #{line}"
    end
  end
  sleep(2)
  puts "Give me an answer: "
  answer = gets.chomp()
  stdin.puts answer
  threads.each { |t| t.join() } # in order to clean up when you're done
end
The problem is that this is not "interactive" enough to me, and the program that I would like to execute (not the simple numbers.rb) has a lot more of input / output. You can think of it as an apt-get install that will ask you for some input to solve some problems.
I have read about Net::SSH and PTY, but couldn't tell whether they would be the (easy/elegant) solution I am looking for.
The ideal solution will be to make it in such a way that the user does not realize that the IO is being done on a remote host: the stdin goes to the remote host stdin, the stdout from the remote host comes to me and I show it.
If you have any ideas I could try I will be happy to hear them. Thank you!
Try this:
require "readline"
require 'open3'
Open3.popen3("ssh -tt root#remote 'ruby numbers.rb'") do |i, o, e, th|
Thread.new {
while !i.closed? do
input =Readline.readline("", true).strip
i.puts input
end
}
t_err = Thread.new {
while !e.eof? do
putc e.readchar
end
}
t_out = Thread.new {
while !o.eof? do
putc o.readchar
end
}
Process::waitpid(th.pid) rescue nil
# "rescue nil" is there in case process already ended.
t_err.join
t_out.join
end
I got it working, but don't ask me why it works. It was mainly trial/error.
Alternatives:
Using Net::SSH, you need to use :on_process and a Thread: see "ruby net/ssh channel dies?". Don't forget to add session.loop(0.1); more info at the link, and a rough sketch follows below. The Thread/:on_process idea inspired me to write a gem for my own use: https://github.com/da99/Chee/blob/master/lib/Chee.rb
If the last call in your Ruby program is SSH, then you can exec ssh -tt root@remote 'ruby numbers.rb'. But if you still want interactivity between User<->Ruby<->SSH, then the previous alternative is the best.
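Here is a rough, untested sketch of that channel-plus-thread idea; it uses on_data rather than :on_process, and the host, user, and remote command are placeholders from the question:

require 'net/ssh'

Net::SSH.start("remote", "root") do |ssh|
  ssh.open_channel do |ch|
    ch.request_pty                 # the remote program expects a terminal
    ch.exec "ruby numbers.rb" do |ch, success|
      abort "could not start numbers.rb" unless success
      # Print remote output as it arrives
      ch.on_data { |_c, data| print data }
      # Forward local keyboard input to the remote program
      Thread.new do
        while (line = STDIN.gets)
          ch.send_data(line)
        end
      end
    end
  end
  ssh.loop(0.1)                    # keep the event loop running
end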

Why is IO::WaitReadable being raised differently for STDOUT than STDERR?

Given that I wish to test non-blocking reads from a long command, I created the following script, saved it as ~/bin/long (where ~/bin is in my path), and made it executable with chmod 755.
I am on a *nix variant with ruby 1.9.2p290 (2011-07-09 revision 32553) [x86_64-darwin11.0.0] compiled with RVM defaults. I do not use Windows, and am therefore unsure if the test script will work for you if you do.
#!/usr/bin/env ruby
3.times do
  STDOUT.puts 'message on stdout'
  STDERR.puts 'message on stderr'
  sleep 1
end
Why does long_err produce each STDERR message as it is printed by "long"
def long_err( bash_cmd = 'long', maxlen = 4096)
  stdin, stdout, stderr = Open3.popen3(bash_cmd)
  begin
    begin
      puts 'err -> ' + stderr.read_nonblock(maxlen)
    end while true
  rescue IO::WaitReadable
    IO.select([stderr])
    retry
  rescue EOFError
    puts 'EOF'
  end
end
while long_out remains blocked until all STDOUT messages are printed?
def long_out( bash_cmd = 'long', maxlen = 4096)
  stdin, stdout, stderr = Open3.popen3(bash_cmd)
  begin
    begin
      puts 'out -> ' + stdout.read_nonblock(maxlen)
    end while true
  rescue IO::WaitReadable
    IO.select([stdout])
    retry
  rescue EOFError
    puts 'EOF'
  end
end
I assume you will require 'open3' before testing either function.
Why is IO::WaitReadable being raised differently for STDOUT than STDERR?
Workarounds using other ways to start subprocesses also appreciated if you have them.
In most OSes STDOUT is buffered while STDERR is not. What popen3 does is basically open a pipe between the executable you launch and Ruby.
Any output that is in buffered mode is not sent through this pipe until either:
The buffer is filled (thereby forcing a flush).
The sending application exits (EOF is reached, forcing a flush).
The stream is explicitly flushed.
The reason STDERR is not buffered is that it's usually considered important for error messages to appear instantly, rather than to be buffered for efficiency.
So, knowing this, you can emulate STDERR behaviour with STDOUT like this:
#!/usr/bin/env ruby
3.times do
  STDOUT.puts 'message on stdout'
  STDOUT.flush
  STDERR.puts 'message on stderr'
  sleep 1
end
and you will see the difference.
You might also want to check "Understanding Ruby and OS I/O buffering".
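The question also asks for workarounds using other ways to start subprocesses. One sketch, assuming the same long script is on your PATH: run the child under a pseudo-terminal, so its stdio sees a terminal and line-buffers stdout, and each line arrives as it is printed:

require 'pty'

PTY.spawn('long') do |reader, writer, pid|
  begin
    reader.each { |line| puts "out -> #{line}" }
  rescue Errno::EIO
    # Raised on some platforms when the child closes the pty; treat as EOF.
    puts 'EOF'
  end
end

Note that under a pty the child's stdout and stderr arrive interleaved on the same descriptor.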
Here's the best I've got so far for starting subprocesses. I launch a lot of network commands so I needed a way to time them out if they take too long to come back. This should be handy in any situation where you want to remain in control of your execution path.
I adapted this from a Gist, adding code to test the exit status of the command for 3 outcomes:
Successful completion (exit status 0)
Error completion (exit status is non-zero) - raises an exception
Command timed out and was killed - raises an exception
Also fixed a race condition, simplified parameters, added a few more comments, and added debug code to help me understand what was happening with exits and signals.
Call the function like this:
output = run_with_timeout("command that might time out", 15)
output will contain the combined STDOUT and STDERR of the command if it completes successfully. If the command doesn't complete within 15 seconds it will be killed and an exception raised.
Here's the function (plus the require and 2 constants you'll need defined at the top):
require 'open3'

DEBUG = false       # change to true for some debugging info
BUFFER_SIZE = 4096  # in bytes, this should be fine for many applications

def run_with_timeout(command, timeout)
  output = ''
  tick = 1
  begin
    # Start task in another thread, which spawns a process
    stdin, stderrout, thread = Open3.popen2e(command)
    # Get the pid of the spawned process
    pid = thread[:pid]
    start = Time.now

    while (Time.now - start) < timeout and thread.alive?
      # Wait up to `tick' seconds for output/error data
      Kernel.select([stderrout], nil, nil, tick)
      # Try to read the data
      begin
        output << stderrout.read_nonblock(BUFFER_SIZE)
        puts "we read some data..." if DEBUG
      rescue IO::WaitReadable
        # No data was ready to be read during the `tick', which is fine
        print "." # give feedback each tick that we're waiting
      rescue EOFError
        # Command has completed, not really an error...
        puts "got EOF." if DEBUG
        # Wait briefly for the thread to exit...
        # We don't want to kill the process if it's about to exit on its
        # own. We decide success or failure based on whether the process
        # completes successfully.
        sleep 1
        break
      end
    end

    if thread.alive?
      # The timeout has been reached and the process is still running, so
      # we need to kill the process, because killing the thread leaves
      # the process alive but detached.
      Process.kill("TERM", pid)
    end
  ensure
    stdin.close if stdin
    stderrout.close if stderrout
  end

  status = thread.value # returns Process::Status when process ends
  if DEBUG
    puts "thread.alive?: #{thread.alive?}"
    puts "status: #{status}"
    puts "status.class: #{status.class}"
    puts "status.exited?: #{status.exited?}"
    puts "status.exitstatus: #{status.exitstatus}"
    puts "status.signaled?: #{status.signaled?}"
    puts "status.termsig: #{status.termsig}"
    puts "status.stopsig: #{status.stopsig}"
    puts "status.stopped?: #{status.stopped?}"
    puts "status.success?: #{status.success?}"
  end

  # See how the process ended: .success? => true, false, or nil if exited? != true
  if status.success? == true        # process exited (0)
    return output
  elsif status.success? == false    # process exited (non-zero)
    raise "command `#{command}' returned non-zero exit status (#{status.exitstatus}), see below output\n#{output}"
  elsif status.signaled?            # we killed the process (timeout reached)
    raise "shell command `#{command}' timed out and was killed (timeout = #{timeout}s): #{status}"
  else
    raise "process didn't exit and wasn't signaled. We shouldn't get to here."
  end
end
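A small usage sketch (the command here is just an example), showing how a caller might handle the exceptions raised above:

begin
  output = run_with_timeout("ping -c 3 example.com", 15)
  puts output
rescue RuntimeError => e
  # Raised for a non-zero exit status or a timeout kill, per the branches above.
  warn "command failed: #{e.message}"
end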
Hope this is useful.

How to proxy a shell process in ruby

I'm creating a script to wrap jdb (java debugger). I essentially want to wrap this process and proxy the user interaction. So I want it to:
start jdb from my script
send the output of jdb to stdout
pause and wait for input when jdb does
when the user enters commands, pass it to jdb
At the moment I really want a pass thru to jdb. The reason for this is to initialize the process with specific parameters and potentially add more commands in the future.
Update:
Here's the shell of what ended up working for me using expect:
PTY.spawn("jdb -attach 1234") do |read,write,pid|
write.sync = true
while (true) do
read.expect(/\r\r\n> /) do |s|
s = s[0].split(/\r\r\n/)
s.pop # get rid of prompt
s.each { |line| puts line }
print '> '
STDOUT.flush
write.print(STDIN.gets)
end
end
end
Use Open3.popen3(). e.g.:
Open3.popen3("jdb args") { |stdin, stdout, stderr|
# stdin = jdb's input stream
# stdout = jdb's output stream
# stderr = jdb's stderr stream
threads = []
threads << Thread.new(stderr) do |terr|
while (line = terr.gets)
puts "stderr: #{line}"
end
end
threads << Thread.new(stdout) do |terr|
while (line = terr.gets)
puts "stdout: #{line}"
end
end
stdin.puts "blah"
threads.each{|t| t.join()} #in order to cleanup when you're done.
}
I've given you examples for threads, but you of course want to be responsive to what jdb is doing. The above is merely a skeleton for how you open the process and handle communication with it.
The Ruby standard library includes expect, which is designed for just this type of problem. See the documentation for more information.
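For reference, a minimal sketch of the stdlib expect API against jdb; the prompt regex, the port, and the threads command are assumptions rather than anything from the question:

require 'pty'
require 'expect'

PTY.spawn("jdb -attach 1234") do |reader, writer, pid|
  writer.sync = true
  # IO#expect blocks until the pattern matches (or the timeout expires)
  # and yields an array whose first element is everything read so far.
  if reader.expect(/> /, 10)
    writer.puts "threads"   # example jdb command (an assumption)
    reader.expect(/> /, 10) { |captured| puts captured.first }
  end
end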

How do I display progress bars from a shell command over ssh

I've got a script that's supposed to mimic ffmpeg on my local machine by sending the command off to a remote machine, running it there, and then returning the results.
(see previous stackoverflow question.)
#!/usr/bin/env ruby
require 'rubygems'
require 'net/ssh'
require 'net/sftp'
require 'highline/import'

file = ARGV[ ARGV.index( '-i' ) + 1] if ARGV.include?( '-i' )
puts 'No input file specified' unless file

host = "10.0.0.10"
user = "user"
prod = "new-#{file}"                # product filename (new-<file>)
rpath = "/home/#{user}/.rffmpeg"    # remote computer operating directory
rfile = "#{rpath}/#{file}"          # remote filename
rprod = "#{rpath}/#{prod}"          # remote product
cmd = "ffmpeg -i #{rfile} #{rprod}" # remote command, constructed

pass = ask("Password: ") { |q| q.echo = false } # password from stdin

Net::SSH.start(host, user) do |ssh|
  ssh.sftp.connect do |sftp|
    # upload local 'file' to remote 'rfile'
    sftp.upload!(file, rfile)
    # run remote command 'cmd' to produce 'rprod'
    ssh.exec!(cmd)
    # download remote 'rprod' to local 'prod'
    sftp.download!(rprod, prod)
  end
end
Now my problem is at
ssh.exec!(cmd)
I want to display the cmd's output to the local user in real time. But if I make it
puts ssh.exec!(cmd)
I only get the resulting output after the command has finished running. How would I have to change the code to make this work?
On the display side of your question, you can generate an updating progress bar in Ruby using the "\r" string char. This backs you up to the beginning of the current line allowing you to re-write it. For example:
1.upto(100) { |i| sleep 0.05; print "\rPercent Complete #{i}%"}
Or if you just want a progress bar across the screen you can simply do something similar to this:
1.upto(50) { sleep 0.05; print "|"}
Also, relating to stdout, in addition to flushing output per previous example (STDOUT.flush), you can ask Ruby to automatically sync writes to an IO buffer (in this case STDOUT) with associated device writes (basically turns off internal buffering):
STDOUT.sync = true
Also, I find that sometimes flush doesn't work for me, and I use IO#fsync instead. For me that's mostly been related to file system work, but it's worth knowing.
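Putting the sync flag together with the "\r" trick from above, a small sketch:

STDOUT.sync = true   # turn off Ruby's internal buffering for STDOUT
1.upto(100) do |i|
  sleep 0.05
  print "\rPercent Complete #{i}%"   # "\r" rewrites the current line
end
puts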
From ri Net::SSH::start:
-------------------------------------------------------- Net::SSH::start
Net::SSH::start(host, user, options={}, &block) {|connection| ...}
------------------------------------------------------------------------
The standard means of starting a new SSH connection. When used with
a block, the connection will be closed when the block terminates,
otherwise the connection will just be returned. The yielded (or
returned) value will be an instance of
Net::SSH::Connection::Session (q.v.). (See also
Net::SSH::Connection::Channel and Net::SSH::Service::Forward.)
Net::SSH.start("host", "user") do |ssh|
ssh.exec! "cp /some/file /another/location"
hostname = ssh.exec!("hostname")
ssh.open_channel do |ch|
ch.exec "sudo -p 'sudo password: ' ls" do |ch, success|
abort "could not execute sudo ls" unless success
ch.on_data do |ch, data|
print data
if data =~ /sudo password: /
ch.send_data("password\n")
end
end
end
end
ssh.loop
end
So it looks like you can get more interactive by using #open_channel
Here's some example code:
user@server% cat echo.rb
#! /usr/local/bin/ruby

def putsf s
  puts s
  STDOUT.flush
end

putsf "hello"
5.times do
  putsf gets.chomp
end
putsf "goodbye"
And on your local machine:
user@local% cat client.rb
#! /usr/local/bin/ruby
require 'rubygems'
require 'net/ssh'

words = %w{ earn more sessions by sleaving }
index = 0

Net::SSH.start('server', 'user') do |ssh|
  ssh.open_channel do |ch|
    ch.exec './echo.rb' do |ch, success|
      abort "could not execute ./echo.rb" unless success
      ch.on_data do |ch, data|
        p [:data, data]
        index %= words.size
        ch.send_data( words[index] + "\n" )
        index += 1
      end
    end
  end
end
user@local% ./client.rb
[:data, "hello\n"]
[:data, "earn\n"]
[:data, "more\n"]
[:data, "sessions\n"]
[:data, "by\n"]
[:data, "sleaving\n"]
[:data, "goodbye\n"]
So you can interact with a running process this way.
It's important that the running process flushes its output before requesting input - otherwise the program might hang, since the channel may not have received the unflushed output.
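For instance, in a child script like echo.rb, a prompt printed without a flush can sit in the child's buffer while both sides wait on each other. A sketch of the pitfall:

# Hypothetical child script: without the flush, the prompt may never reach
# the channel, so the client never replies and both programs hang.
print "Give me a number: "
STDOUT.flush   # push the prompt out before blocking on input
number = gets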
