Ruby Open3.popen3: entering input into the subprocess from the command line - ruby

Goal: I am writing a workflow command-line program in ruby that sequentially executes other programs on the UNIX shell, some of which require the user to enter input.
Problem: Although I can successfully handle stdout and stderr thanks to a helpful blog post by Nick Charlton, I am stuck on capturing user input and passing it into the sub-processes via the command line. The code is as follows:
Method
require 'open3'

module CMD
  def run(cmd, &block)
    Open3.popen3(cmd) do |stdin, stdout, stderr, thread|
      Thread.new do # STDOUT
        until (line = stdout.gets).nil? do
          yield nil, line, nil, thread if block_given?
        end
      end
      Thread.new do # STDERR
        until (line = stderr.gets).nil? do
          yield nil, nil, line, thread if block_given?
        end
      end
      Thread.new do # STDIN
        # ????? How to handle
      end
      thread.join
    end
  end
end
Calling the method
This example calls the shell command units, which prompts the user to enter a unit of measurement and then prompts for a unit to convert to. This is how it would look in the shell:
> units
586 units, 56 prefixes # stdout
You have: 1 litre # user input
You want: gallons # user input
* 0.26417205 # stdout
/ 3.7854118 # stdout
When I run this from my program I expect to be able to interact with it in exactly the same way.
unix_cmd = 'units'
run unix_cmd do |stdin, stdout, stderr, thread|
  puts "stdout #{stdout.strip}" if stdout
  puts "stderr #{stderr.strip}" if stderr
  # I'm unsure how I would allow the user to
  # interact with STDIN here?
end
Note: Calling the run method this way allows the user to be able to parse the output, control process flow and add custom logging.
From what I've gathered about STDIN, the snippet below is as close as I've come to understanding how to handle it. There are clearly some gaps in my knowledge, because I'm still unsure how to integrate this into my run method above and pass the input into the child process.
# STDIN: constant declared in Ruby
# stdin: parameter declared in Open3.popen3
Thread.new do
  # Read each line from the console
  STDIN.each_line do |line|
    puts "STDIN: #{line}" # print captured input
    stdin.write line      # write input into stdin
    stdin.sync            # sync the input into the sub-process
    break if line == "\n"
  end
end
Summary: I wish to understand how to handle user input from the command line via the Open3.popen3 method, so that I can allow users to enter data into the various sub-commands called in sequence from my program.

Here's something that should work:
module CMD
  def run(cmd, &block)
    Open3.popen3(cmd) do |stdin, stdout, stderr, thread|
      Thread.new do # STDOUT
        until (line = stdout.gets).nil? do
          yield nil, line, nil, thread if block_given?
        end
      end
      Thread.new do # STDERR
        until (line = stderr.gets).nil? do
          yield nil, nil, line, thread if block_given?
        end
      end
      t = Thread.new { loop { stdin.puts gets } }
      thread.join
      t.kill
    end
  end
end
I've just added two lines to your original run method: t = Thread.new { loop { stdin.puts gets } }, and t.kill.
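As a usage sketch (reusing the units example from the question, and assuming the units binary is installed):
unix_cmd = 'units'
run unix_cmd do |stdin, stdout, stderr, thread|
  puts "stdout #{stdout.strip}" if stdout
  puts "stderr #{stderr.strip}" if stderr
end
# Anything typed at the terminal ("1 litre", "gallons", ...) is now forwarded
# to the child process, one line at a time, by the stdin.puts gets loop.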

After a lot of reading about STDIN, as well as some good old trial and error, I discovered an implementation not too dissimilar to Charles Finkel's answer, but with some subtle differences.
require "open3"
module Cmd
def run(cmd, &block)
Open3.popen3(cmd) do |stdin, stdout, stderr, thread|
# We only need to check if the block is provided once
# rather than every cycle of the loop as we were doing
# in the original question.
if block_given?
Thread.new do
until (line = stdout.gets).nil? do
yield line, nil, thread
end
end
Thread.new do
until (line = stderr.gets).nil? do
yield nil, line, thread
end
end
end
# $stdin.gets reads from the console
#
# stdin.puts writes to child process
#
# while thread.alive? means that we keep on
# reading input until the child process ends
Thread.new do
stdin.puts $stdin.gets while thread.alive?
end
thread.join
end
end
end
include Cmd
Calling the method like so:
run './test_script.sh' do |stdout, stderr, thread|
  puts "#{thread.pid} stdout: #{stdout}" if stdout
  puts "#{thread.pid} stderr: #{stderr}" if stderr
end
Where test_script.sh is as follows:
echo "Message to STDOUT"
>&2 echo "Message to STDERR"
echo "enter username: "
read username
echo "enter a greeting"
read greeting
echo "$greeting $username"
exit 0
Produces the following successful output:
25380 stdout: Message to STDOUT
25380 stdout: enter username:
25380 stderr: Message to STDERR
> Wayne
25380 stdout: enter a greeting
> Hello
25380 stdout: Hello Wayne
Note: You will notice that the stdout and stderr lines don't always appear in order; this is a limitation I've yet to solve.
If you're interested in knowing more about stdin, it's worth reading the following answer to the question - What is the difference between STDIN and $stdin in Ruby?
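As a possible mitigation for the ordering issue above (just a sketch with a hypothetical run_merged variant, not something I've battle-tested), the child's stdout and stderr can be merged into a single stream with Open3.popen2e, which keeps their relative order at the cost of no longer being able to tell them apart:
require 'open3'

module CmdMerged
  # Sketch: same idea as run above, but with a single merged output stream.
  def run_merged(cmd)
    Open3.popen2e(cmd) do |stdin, stdout_and_stderr, thread|
      if block_given?
        Thread.new do
          until (line = stdout_and_stderr.gets).nil?
            yield line, thread
          end
        end
      end
      Thread.new { stdin.puts $stdin.gets while thread.alive? }
      thread.join
    end
  end
end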

Related

Thor: run command without capturing stdout or stderr, and fail on error

I'm writing a Thor script to run some tests from a different tool, i.e. by running a shell command. I'd like the stdout and stderr from the command to continuously stream out into my console.
My first attempt was to just use backticks, but naturally stdout/stderr are not printed (rather, stdout is captured in the return value).
desc "mytask", "my description"
def mytask
`run-my-tests.sh`
end
My next approach was to use Open3 as in:
require "open3"
desc "mytask", "my description"
def mytask
Open3.popen3("run-my-tests.sh") do |stdin, stdout, stderr|
STDOUT.puts(stdout.read())
STDERR.puts(stderr.read())
end
end
However, the above approach will get the whole output from both stdout and stderr and only print it at the end. In my use case, I'd rather see the output of failing and passing tests as it becomes available.
From http://blog.bigbinary.com/2012/10/18/backtick-system-exec-in-ruby.html, I saw that we can read the streams in chunks, i.e. with gets() instead of read(). For example:
require "open3"
desc "mytask", "my description"
def mytask
Open3.popen3(command) do |stdin, stdout, stderr|
while (out = stdout.gets()) || err = (stderr.gets())
STDOUT.print(out) if out
STDERR.print(err) if err
end
exit_code = wait_thr.value
unless exit_code.success?
raise "Failure"
end
end
end
Does it look like the best and cleanest approach? Is it an issue that I have to manually try to print stdout before stderr?
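Another option I've considered (just a sketch, I haven't tested it thoroughly) is to give each stream its own reader thread, so that a quiet stderr can never block reading of stdout:
require "open3"

desc "mytask", "my description"
def mytask
  Open3.popen3("run-my-tests.sh") do |stdin, stdout, stderr, wait_thr|
    stdin.close
    readers = [
      Thread.new { stdout.each_line { |line| STDOUT.print(line) } },
      Thread.new { stderr.each_line { |line| STDERR.print(line) } }
    ]
    readers.each(&:join)
    raise "Failure" unless wait_thr.value.success?
  end
end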
I'm using IO.popen for a similar task, like so:
IO.popen([env, *command]) do |io|
  io.each { |line| puts ">>> #{line}" }
end
To capture stderr I'd just redirect it to stdout. Note that the 2>&1 redirection is interpreted by a shell, so the command has to be passed as a single string rather than an array (the array form of IO.popen bypasses the shell), e.g. command = "run-my-tests.sh 2>&1".
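A minimal sketch of that string form (assuming run-my-tests.sh is on the PATH):
IO.popen("run-my-tests.sh 2>&1") do |io|
  io.each { |line| puts ">>> #{line}" }
end
puts "Exit status: #{$?.exitstatus}" # $? is set once the block closes the pipe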
Update
I've constructed a script using Open3::popen3 to capture stdout and stderr separately. It obviously has a lot of room for improvement, but the basic idea hopefully is clear.
require 'open3'

command = 'for i in {1..5}; do echo $i; echo "$i"err >&2; sleep 0.5; done'
stdin, stdout, stderr, _command_thread = Open3.popen3(command)

reading_thread = Thread.new do
  kilobyte = 1024
  loop do
    begin
      # Non-blocking reads so neither stream can stall the other
      stdout.read_nonblock(kilobyte).each_line { |line| puts "stdout >>> #{line}" }
      stderr.read_nonblock(kilobyte).each_line { |line| puts "stderr >>> #{line}" }
    rescue IO::EAGAINWaitReadable
      next
    rescue EOFError
      break
    end
    sleep 1
  end
end

reading_thread.join
stdin.close
stdout.close
stderr.close
It seems to me that the simplest way to run a shell command without trying to capture stdout or stderr (instead, letting them bubble up as they come) is something like:
def run *args, **options
  pid = spawn(*args, options)
  pid, status = Process.wait2(pid)
  exit(status.exitstatus) unless status.success?
end
The problem with backticks or system() is that the former captures stdout and the latter only returns whether the command succeeded or not. spawn() is a more informative alternative to system(). I'd rather have my Thor script fail as if it were merely a wrapper for those shell commands.
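A hypothetical Thor task using that wrapper (run-my-tests.sh is just an assumed script name); the output streams straight to the console because spawn lets the child inherit the parent's stdout and stderr by default:
desc "mytask", "run the test suite, streaming output as it happens"
def mytask
  run "run-my-tests.sh"
end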

Run shell command inside ruby, attach STDIN to command input

I'm running a long-running shell command inside a ruby script like this:
Open3.popen2(command) do |i, o, t|
  while line = o.gets
    MyRubyProgram.read line
    puts line
  end
end
This way I'm able to watch the command output in the shell window.
How do I attach STDIN to the command's input?
You need to:
Wait for the user input from STDIN
Wait for the output of the command from popen3 (o)
You may need IO.select or another IO multiplexer, or some other multi-tasking mechanism such as Thread (an IO.select sketch follows the Thread demo below).
Here is a demo of the Thread approach:
require 'open3'

Open3.popen3('ruby -e "while line = gets; print line; end"') do |i, o, t|
  tin = Thread.new do
    # here you can manipulate the standard input of the child process
    i.puts "Hello"
    i.puts "World"
    i.close
  end
  tout = Thread.new do
    # here you can fetch and process the standard output of the child process
    while line = o.gets
      print "COMMAND => #{line}"
    end
  end
  tin.join  # wait for the input thread
  tout.join # wait for the output thread
end
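And, for comparison, a rough IO.select-based sketch of the same idea (it ignores some IO buffering subtleties, so treat it as a starting point rather than a drop-in solution):
require 'open3'

Open3.popen3('ruby -e "while line = gets; print line; end"') do |i, o, t|
  done = false
  until done
    # Block until either the terminal or the child's stdout has data
    ready, = IO.select([$stdin, o])
    ready.each do |io|
      line = io.gets
      if io == $stdin
        if line && !i.closed?
          i.puts(line)
        else
          i.close          # EOF at the terminal: close the child's stdin
        end
      elsif line
        print "COMMAND => #{line}"
      else
        done = true        # the child closed its stdout
      end
    end
  end
  t.join
end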

Cannot handle STDOUT and STDERR output properly with popen3?

I'm writing a function to execute a shell command and return its exit code, STDOUT and STDERR.
The problem is that this function cannot capture the STDOUT and STDERR output properly.
def sh(*args)
  options = args[-1].respond_to?(:to_hash) ? args.pop.to_hash : {}
  options = { :timeout => 0, :sudo => false }.merge(options)
  cmd = options[:sudo] == false ? args[0] : "sudo " << args[0]
  begin
    stdin, stdout, stderr, wait_thr = Open3.popen3(cmd)
    pid = wait_thr[:pid]
    out_buf = ""
    err_buf = ""
    start = Time.now
    # Manually poll the process every 0.2 seconds to check whether it is still alive
    begin
      out_buf << stdout.read_nonblock(4096)
      err_buf << stderr.read_nonblock(4096)
      # kill the process if it times out
      if options[:timeout] != 0 && (Time.now - start) > options[:timeout]
        Process.kill("KILL", pid)
        Process.detach(pid)
        raise RuntimeError, "process with pid #{pid} timed out after #{options[:timeout]} seconds."
      end
      sleep 0.2
    rescue IO::WaitReadable, EOFError
    end while wait_thr.alive?
  rescue => e
    NtfLogger.warn("sh '#{args}' executed with failure: #{e}")
  ensure
    if wait_thr.nil?
      return 1, out_buf, err_buf
    else
      return wait_thr.value.exitstatus, out_buf, err_buf
    end
  end
end # end of sh
Could anybody please help me figure out what the problem is?
My understanding of popen3's docs is that it's better to do the processing within a block:
Open3.popen3([env,] cmd... [, opts]) do |stdin, stdout, stderr, wait_thr|
  pid = wait_thr.pid # pid of the started process.
  ...
  exit_status = wait_thr.value # Process::Status object returned.
end
In non-block form, the docs note that the streams must be closed:
stdin, stdout, stderr, wait_thr = Open3.popen3([env,] cmd... [, opts])
pid = wait_thr[:pid] # pid of the started process.
...
stdin.close # stdin, stdout and stderr should be closed explicitly in this form.
stdout.close
stderr.close
exit_status = wait_thr.value # Process::Status object returned.
http://www.ruby-doc.org/stdlib-2.0.0/libdoc/open3/rdoc/Open3.html#method-c-popen3
Lastly, FWIW, here's a wrapper around capture3 which I use on my end. You could easily extend it to add a sudo option, in case the thread-related parts of your sh utility aren't critical:
#
# Identical to Open3.capture3, except that it rescues runtime errors
#
# @param env optional (as `Kernel.system')
# @param *cmd the command and its (auto-escaped) arguments
# @param opts optional a hash of options (as `Kernel.system')
#
# @return [stdout, stderr, success] | [$/, $/, nil] on error
#
def system3(*cmd)
  begin
    stdout, stderr, status = Open3.capture3(*cmd)
    [stdout, stderr, status.success?]
  rescue
    [$/, $/, nil]
  end
end
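Hypothetical usage (the ls invocation is just an example command):
stdout, stderr, ok = system3("ls", "-la")
if ok
  puts stdout
else
  warn "command failed: #{stderr}"
end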

equivalent of backticks operator with ability to display output during execution

I'm looking for something equivalent to the backticks operator (``), with the capability to display output during shell command execution.
I saw a solution in another post:
(Running a command from Ruby displaying and capturing the output)
output = []
IO.popen("ruby -e '3.times{|i| p i; sleep 1}'").each do |line|
  p line.chomp
  output << line.chomp
end
p output
This solution doesn't fit my needs since $? remains nil after the shell command execution. The solution I'm looking for should also set $? (returning the value of $?.exitstatus in another way is also sufficient)
Thanks!
First, I'd recommend using one of the methods in Open3.
I use capture3 for one of my systems where we need to grab the output of STDOUT and STDERR of a lot of command-line applications.
If you need a piped sub-process, try popen3 or one of the other "pipeline" commands.
Here's some code to illustrate how to use popen2, which ignores the STDERR channel. If you want to track that as well, use popen3:
require 'open3'

output = []
exit_status = Open3.popen2(ENV, "ruby -e '3.times{|i| p i; sleep 1}'") { |stdin, stdout, thr|
  stdin.close
  stdout.each_line do |o|
    o.chomp!
    output << o
    puts %Q(Read from pipe: "#{ o }")
  end
  thr.value
}

puts "Output array: #{ output.join(', ') }"
puts "Exit status: #{ exit_status }"
Running that outputs:
Read from pipe: "0"
Read from pipe: "1"
Read from pipe: "2"
Output array: 0, 1, 2
Exit status: pid 43413 exit 0
The example code shows one way to do it.
It's not necessary to use each_line, but that demonstrates how you can read line-by-line until the sub-process closes its STDOUT.
capture3 doesn't accept a block; it waits until the child has closed its output and exits, then it returns the content, which is great when you want a blocking process. popen2 and popen3 have blocking and non-blocking versions, but I show only the non-blocking version here to demonstrate how to read and output the content as it comes in from the sub-process.
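For completeness, a minimal capture3 sketch (everything arrives only after the child exits):
require 'open3'

stdout, stderr, status = Open3.capture3("ruby -e '3.times { |i| p i }; warn :done'")
puts stdout  # "0\n1\n2\n", all at once
warn stderr  # "done\n"
puts "Exit status: #{status.exitstatus}"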
Try the following:
output = []
IO.popen("ruby -e '3.times{|i| p i; sleep 1 }'") do |f|
  f.each do |line|
    p line.chomp
    output << line.chomp
  end
end
p $?
prints
"0"
"1"
"2"
#<Process::Status: pid 2501 exit 0>
Using open3
require 'open3'

output = []
Open3.popen2("ruby -e '3.times{|i| p i; sleep 1}'") do |stdin, stdout, wait_thr|
  stdout.each do |line|
    p line.chomp
    output << line.chomp
  end
  p wait_thr.value
end

How do I redirect stderr and stdout to file for a Ruby script?

How do I redirect stderr and stdout to file for a Ruby script?
From within a Ruby script, you can redirect stdout and stderr with the IO#reopen method.
# a.rb
$stdout.reopen("out.txt", "w")
$stderr.reopen("err.txt", "w")
puts 'normal output'
warn 'something to stderr'
$ ls
a.rb
$ ruby a.rb
$ ls
a.rb err.txt out.txt
$ cat err.txt
something to stderr
$ cat out.txt
normal output
Note: reopening of the standard streams to /dev/null is a good old method of helping a process to become a daemon. For example:
# daemon.rb
$stdout.reopen("/dev/null", "w")
$stderr.reopen("/dev/null", "w")
Another option is to temporarily swap $stdout for a File object and restore it afterwards:
def silence_stdout
  $stdout = File.new('/dev/null', 'w')
  yield
ensure
  $stdout = STDOUT
end
./yourscript.rb > log.txt 2>&1
will redirect both stdout and stderr to the same file. Note that the order matters: the 2>&1 must come after the file redirection, otherwise stderr still goes to the terminal.
A full example with $stdout and $stderr redirected to a file, showing how to restore the initial behavior:
#!/usr/bin/ruby

logfile = "/tmp/testruby.log"
$original_stdout = $stdout.dup
$original_stderr = $stderr.dup
$stdout.reopen(logfile, "w")
$stdout.sync = true
$stderr.reopen($stdout)

def restore_stdout
  $stdout.reopen($original_stdout)
  $stderr.reopen($original_stderr)
end

def fail_exit(msg)
  puts "- #{msg}" # to the logfile
  restore_stdout
  $stderr.puts "+ #{msg}" # to standard error
  exit!
end

def success_exit(msg)
  puts "- #{msg}" # to the logfile
  restore_stdout
  $stdout.puts "+ #{msg}" # to standard output
  exit
end

puts "This message goes to the file"
success_exit "A successful exit message"
