Run multiple commands in the same shell process - ruby

I'm attempting to run a series of commands through Ruby, and capture stdin, stdout, stderr and the exitstatus.
require "open3"
require "pp"

command_list = [
  "export MY_ENV_VAR=foobar",
  "printenv MY_ENV_VAR"
]

executed_commands = []
result = nil

command_list.each do |command|
  stdout, stderr, status = Open3.capture3(command)
  result = status.exitstatus
  executed_commands << [command, stdout, stderr, result]
  break if result != 0
end

pp executed_commands
puts "exited with #{result} exit status."
This process exits with a non-zero status, indicating that the printenv MY_ENV_VAR command fails, and that the commands are not being run in the same process.
How can I execute a series of commands in a single shell process, recording stdin, stdout, stderr and the exitstatus of each command?

I would strongly suggest you don't chain multiple shell commands into a single system call unless you absolutely have to. A major caveat is that you can't individually inspect the return code of each command in the chain, which means you lose control over the command flow. For example, if the commands are joined with ";", the subsequent commands will still attempt to execute even when the first command fails. This is usually undesirable.
I suggest encapsulating the popen functionality into a method and just call the method for each command you want to run. This would allow you to react to any failed execution on a command-by-command basis.
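A minimal sketch of that idea (the helper name and the hash it returns are my own illustrative choices, not from the question):

```ruby
require "open3"
require "pp"

# Sketch of the suggested encapsulation: run one command per call and
# record everything about it, so the caller can stop on the first failure.
def run_command(command)
  stdout, stderr, status = Open3.capture3(command)
  { command: command, stdout: stdout, stderr: stderr, exitstatus: status.exitstatus }
end

results = []
["echo hello", "true"].each do |cmd|
  result = run_command(cmd)
  results << result
  break unless result[:exitstatus].zero?
end
pp results
```

Each command runs in its own child process, but the failure handling now lives in one place and can be changed per command.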

Your code for running a series of commands is fine. The issue is that you were setting the environment variable incorrectly. A child process cannot set the environment of its parent like you were trying to do. Child processes do inherit the environment of their parent, so here is one way to fix your code:
require "open3"
require "pp"

ENV['MY_ENV_VAR'] = 'hi'

command_list = [
  "printenv MY_ENV_VAR"
]

executed_commands = []
result = nil

command_list.each do |command|
  stdout, stderr, status = Open3.capture3(command)
  result = status.exitstatus
  executed_commands << [command, stdout, stderr, result]
  break if result != 0
end

pp executed_commands
puts "exited with #{result} exit status."
The result when I run this on Linux with Ruby 2.3.1 is:
[["printenv MY_ENV_VAR", "hi\n", "", 0]]
exited with 0 exit status.
Now if you wanted to pass an environment variable to the child process without modifying your own process's environment, see the documentation for the arguments of Open3.capture3:
https://ruby-doc.org/stdlib/libdoc/open3/rdoc/Open3.html#method-c-capture3
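For example (a sketch; printenv assumes a Unix-like system):

```ruby
require "open3"

# Pass an environment hash as the first argument to capture3; the variable
# exists only in the child process, not in this Ruby process.
stdout, _stderr, status = Open3.capture3({ "MY_ENV_VAR" => "foobar" }, "printenv MY_ENV_VAR")
puts stdout                    # the child saw the variable
puts status.exitstatus
puts ENV["MY_ENV_VAR"].inspect # nil -- the parent environment is untouched
```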

Related

Ruby script with parallel threads and logging bash output

I have a ruby script I'm making to upload a directory of roles to chef server. Doing this 1 at a time with a .each loop is slow. So I added parallelism by running each command in separate threads. Now I'm trying to figure out how to store the output of the commands so I can read them back in order of the threads that were created. The roles array is already in alphabetical order. We also use bash_profile aliases for running the knife command with different configuration files for dev and prod.
I've tried many different ways to run the bash command and to store the output in an array or a file. Currently the script prints each thread's output as it runs or finishes, so the output is hard to read and it's hard to tell whether everything completed correctly. The files that the bash command's output is supposed to be redirected to do get created, but they are empty.
Sorry if this script isn't the easiest to read. I've only been doing Ruby for a little over a year; I taught myself when we started getting into Chef, and I didn't have a programming background before that.
#!/opt/chefdk/embedded/bin/ruby

def print_usage_and_exit
  puts 'Need to specify 1 or more role.json files or no arguments to upload all roles'
  puts "ruby #{__FILE__} or ruby #{__FILE__} [role1.json] [role2.json] [...]"
  exit(1)
end

def fetch_roles
  roles = []
  current_dir = File.dirname(__FILE__)
  Dir.foreach("#{current_dir}/roles") do |role|
    next if role == '.' || role == '..' || role == 'README.md'
    roles.push(role)
  end
  roles
end

upload = []
i = 0
roles = (ARGV.empty? ? fetch_roles : ARGV[0..-1])
# Probably redundant, but a cheap check to make sure we're only looking at json files
roles.keep_if { |b| b.end_with?('.json') }
print_usage_and_exit if roles.empty?

print "\nSpecify new knife command if you have a separate knife command for dev and prod created with a .bash_profile function."
print "\nLeave blank to use the default 'knife' command"
print "\nWhich knife command to use: "
input = $stdin.gets.chomp
knife = (input.empty? ? 'knife' : input)

print "\n**** Starting upload of roles to chef server ****\n"
roles.each do |role|
  upload[i] = Thread.new {
    system("bash", "-cl", "#{knife} role from file #{role} > /tmp/#{role}.log")
  }
  i += 1
end
upload.each { |t| t.join }

roles.each do |role|
  logfile = "/tmp/#{role}.log"
  print "\n#{File.read(logfile)}\n"
  # FileUtils.rm(logfile)
end
print "\n**** Finished uploading roles to chef server ****\n"
The right way to do this is knife upload roles/. That doesn't actually answer your question per se, but I think you'll find it a lot simpler.
I prefer to use Open3's capture3 function to execute subprocesses, as it makes it easy to handle all the various details (stdin, stdout, stderr, environment variables, etc.).
Pair that with thread-local data, a built-in feature of Ruby threads, and you have a pretty easy way of running subprocesses. I'm a big fan of using threads for this kind of concurrency. The GIL prevents Ruby from running the threads truly in parallel, but the capture3 subprocesses run concurrently anyway, so it doesn't really matter.
require 'open3'

commands = [
  'true',
  'echo "a more complex command from `pwd`" 1>&2 && echo "and stdout"',
]

threads = []
commands.each_with_index do |cmd, i|
  threads[i] = Thread.new do
    stdout, stderr, status = Open3.capture3("bash", stdin_data: cmd)
    Thread.current['stdout'] = stdout
    Thread.current['stderr'] = stderr
    Thread.current['status'] = status
  end
end

threads.each_with_index do |th, i|
  th.join
  puts "Thread # #{i}:"
  %w( stdout stderr status ).each do |s|
    puts "\t#{s}: #{th[s]}"
  end
  puts
end
The results are exactly what you'd expect:
$ ruby ./t.rb
Thread # 0:
stdout:
stderr:
status: pid 34244 exit 0
Thread # 1:
stdout: and stdout
stderr: a more complex command from /Users/dfarrell/t
status: pid 34243 exit 0
You can use the exit status to give a final summary of how many commands failed or succeeded.
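A condensed sketch of such a summary (the commands here are stand-ins):

```ruby
require 'open3'

# Run each command in its own thread, store the Process::Status in
# thread-local data, then tally successes and failures at the end.
commands = ['true', 'false', 'echo ok']

threads = commands.map do |cmd|
  Thread.new do
    _out, _err, status = Open3.capture3('bash', stdin_data: cmd)
    Thread.current['status'] = status
  end
end
threads.each(&:join)

failed = threads.count { |th| !th['status'].success? }
puts "#{threads.size - failed} of #{threads.size} commands succeeded"
```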

Kill a process called using open3 in ruby

I'm using a command line program, it works as mentioned below:
$ ROUTE_TO_FOLDER/app < "long text"
If "long text" is written using the parameters "app" needs, then it will fill a text file with results. If not, it will fill the text file with dots continuously (I can't handle or modify the code of "app" in order to avoid this).
In a ruby script there's a line like this:
text = "long text that will be used by app"
output = system("ROUTE_TO_FOLDER/app < #{text}")
Now, if text is well written, there won't be problems and I will get an output file as mentioned before. The problem comes when text is not well written. What happens next is that my ruby script hangs and I'm not sure how to kill it.
I've found Open3 and I've used the method like this:
irb> cmd = "ROUTE_TO_FOLDER/app < #{text}"
irb> stdin, stdout, stderr, wait_thr = Open3.popen3(cmd)
=> [#<IO:fd 10>, #<IO:fd 11>, #<IO:fd 13>, #<Thread:0x007f3a1a6f8820 run>]
When I do:
irb> wait_thr.value
it also hangs, and :
irb> wait_thr.status
=> "sleep"
How can I avoid these problems? Is it not recognizing that "app" has failed?
wait_thr.pid gives you the pid of the started process. Just do
Process.kill("KILL", wait_thr.pid)
when you need to kill it.
You can combine this with detecting whether the process is hung (continuously outputting dots) in one of two ways.
1) Set a timeout for waiting for the process:
require 'open3'
require 'timeout'

get '/process' do
  text = "long text that will be used by app"
  cmd = "ROUTE_TO_FOLDER/app < #{text}"
  Open3.popen3(cmd) do |i, o, e, w|
    begin
      Timeout.timeout(10) do # timeout set to 10 sec, change if needed
        # process output of the process. it will produce EOF when done.
        until o.eof? do
          # o.read_nonblock(N) ...
        end
      end
    rescue Timeout::Error
      # here you know that the process took longer than 10 seconds
      Process.kill("KILL", w.pid)
      # do whatever other error processing you need
    end
  end
end
2) Check the process output. (The code below is simplified - you probably don't want to read the output of your process into a single String buf first and then process, but I guess you get the idea).
get '/process' do
  text = "long text that will be used by app"
  cmd = "ROUTE_TO_FOLDER/app < #{text}"
  Open3.popen3(cmd) do |i, o, e, w|
    # Process the output of the process; it will produce EOF when done.
    # If you get 16 dots in a row, the process is in the continuous loop.
    # (You may want to deal with stderr instead, depending on where the dots are sent.)
    buf = ""
    error = false
    until o.eof? do
      buf << o.read_nonblock(16)
      if buf.size >= 16 && buf[-16..-1] == '.' * 16
        # ok, the process is hung
        Process.kill("KILL", w.pid)
        error = true
        # you should also get o.eof? the next time you check (or after flushing the pipe buffer),
        # so you will get out of the until o.eof? loop
      end
    end
    if error
      # do whatever error processing you need
    else
      # process buf, it contains all the output
    end
  end
end

Why is IO::WaitReadable being raised differently for STDOUT than STDERR?

Given that I wish to test non-blocking reads from a long command, I created the following script, saved it as long, made it executable with chmod 755, and placed it in my path (saved as ~/bin/long where ~/bin is in my path).
I am on a *nix variant with ruby 1.9.2p290 (2011-07-09 revision 32553) [x86_64-darwin11.0.0] compiled with RVM defaults. I do not use Windows, and am therefore unsure if the test script will work for you if you do.
#!/usr/bin/env ruby
3.times do
  STDOUT.puts 'message on stdout'
  STDERR.puts 'message on stderr'
  sleep 1
end
Why does long_err produce each STDERR message as it is printed by "long"
def long_err(bash_cmd = 'long', maxlen = 4096)
  stdin, stdout, stderr = Open3.popen3(bash_cmd)
  begin
    begin
      puts 'err -> ' + stderr.read_nonblock(maxlen)
    end while true
  rescue IO::WaitReadable
    IO.select([stderr])
    retry
  rescue EOFError
    puts 'EOF'
  end
end
while long_out remains blocked until all STDOUT messages are printed?
def long_out(bash_cmd = 'long', maxlen = 4096)
  stdin, stdout, stderr = Open3.popen3(bash_cmd)
  begin
    begin
      puts 'out -> ' + stdout.read_nonblock(maxlen)
    end while true
  rescue IO::WaitReadable
    IO.select([stdout])
    retry
  rescue EOFError
    puts 'EOF'
  end
end
I assume you will require 'open3' before testing either function.
Why is IO::WaitReadable being raised differently for STDOUT than STDERR?
Workarounds using other ways to start subprocesses also appreciated if you have them.
In most OSes STDOUT is buffered while STDERR is not. What popen3 does is basically open a pipe between the executable you launch and Ruby.
Any output that is in buffered mode is not sent through this pipe until either:
The buffer is filled (thereby forcing a flush).
The sending application exits (EOF is reached, forcing a flush).
The stream is explicitly flushed.
The reason STDERR is not buffered is that it's usually considered important for error messages to appear instantly, rather than be buffered for efficiency.
So, knowing this, you can emulate STDERR behaviour with STDOUT like this:
#!/usr/bin/env ruby
3.times do
  STDOUT.puts 'message on stdout'
  STDOUT.flush
  STDERR.puts 'message on stderr'
  sleep 1
end
and you will see the difference.
You might also want to check "Understanding Ruby and OS I/O buffering".
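If you control the child script, another option (a sketch of the same idea) is to turn off buffering once instead of flushing after every write:

```ruby
#!/usr/bin/env ruby
# With sync enabled, every write to STDOUT goes straight to the pipe,
# so a parent reading via popen3 sees each line as it is produced.
STDOUT.sync = true

3.times do
  STDOUT.puts 'message on stdout'
  STDERR.puts 'message on stderr'
  sleep 1
end
```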
Here's the best I've got so far for starting subprocesses. I launch a lot of network commands so I needed a way to time them out if they take too long to come back. This should be handy in any situation where you want to remain in control of your execution path.
I adapted this from a Gist, adding code to test the exit status of the command for 3 outcomes:
Successful completion (exit status 0)
Error completion (exit status is non-zero) - raises an exception
Command timed out and was killed - raises an exception
Also fixed a race condition, simplified parameters, added a few more comments, and added debug code to help me understand what was happening with exits and signals.
Call the function like this:
output = run_with_timeout("command that might time out", 15)
output will contain the combined STDOUT and STDERR of the command if it completes successfully. If the command doesn't complete within 15 seconds it will be killed and an exception raised.
Here's the function (2 constants you'll need defined at the top):
require 'open3'

DEBUG = false # change to true for some debugging info
BUFFER_SIZE = 4096 # in bytes, this should be fine for many applications

def run_with_timeout(command, timeout)
  output = ''
  tick = 1
  begin
    # Start task in another thread, which spawns a process
    stdin, stderrout, thread = Open3.popen2e(command)
    # Get the pid of the spawned process
    pid = thread[:pid]
    start = Time.now

    while (Time.now - start) < timeout and thread.alive?
      # Wait up to `tick' seconds for output/error data
      Kernel.select([stderrout], nil, nil, tick)
      # Try to read the data
      begin
        output << stderrout.read_nonblock(BUFFER_SIZE)
        puts "we read some data..." if DEBUG
      rescue IO::WaitReadable
        # No data was ready to be read during the `tick', which is fine
        print "." # give feedback each tick that we're waiting
      rescue EOFError
        # Command has completed, not really an error...
        puts "got EOF." if DEBUG
        # Wait briefly for the thread to exit...
        # We don't want to kill the process if it's about to exit on its
        # own. We decide success or failure based on whether the process
        # completes successfully.
        sleep 1
        break
      end
    end

    if thread.alive?
      # The timeout has been reached and the process is still running, so
      # we need to kill the process, because killing the thread leaves
      # the process alive but detached.
      Process.kill("TERM", pid)
    end
  ensure
    stdin.close if stdin
    stderrout.close if stderrout
  end

  status = thread.value # returns Process::Status when process ends
  if DEBUG
    puts "thread.alive?: #{thread.alive?}"
    puts "status: #{status}"
    puts "status.class: #{status.class}"
    puts "status.exited?: #{status.exited?}"
    puts "status.exitstatus: #{status.exitstatus}"
    puts "status.signaled?: #{status.signaled?}"
    puts "status.termsig: #{status.termsig}"
    puts "status.stopsig: #{status.stopsig}"
    puts "status.stopped?: #{status.stopped?}"
    puts "status.success?: #{status.success?}"
  end

  # See how the process ended: .success? => true, false, or nil if exited? is not true
  if status.success? == true # process exited (0)
    return output
  elsif status.success? == false # process exited (non-zero)
    raise "command `#{command}' returned non-zero exit status (#{status.exitstatus}), see below output\n#{output}"
  elsif status.signaled? # we killed the process (timeout reached)
    raise "shell command `#{command}' timed out and was killed (timeout = #{timeout}s): #{status}"
  else
    raise "process didn't exit and wasn't signaled. We shouldn't get to here."
  end
end
Hope this is useful.

How to proxy a shell process in ruby

I'm creating a script to wrap jdb (java debugger). I essentially want to wrap this process and proxy the user interaction. So I want it to:
start jdb from my script
send the output of jdb to stdout
pause and wait for input when jdb does
when the user enters commands, pass it to jdb
At the moment I really want a pass-through to jdb. The reason for this is to initialize the process with specific parameters and potentially add more commands in the future.
Update:
Here's the shell of what ended up working for me using expect:
require 'pty'
require 'expect'

PTY.spawn("jdb -attach 1234") do |read, write, pid|
  write.sync = true
  while true do
    read.expect(/\r\r\n> /) do |s|
      s = s[0].split(/\r\r\n/)
      s.pop # get rid of prompt
      s.each { |line| puts line }
      print '> '
      STDOUT.flush
      write.print(STDIN.gets)
    end
  end
end
Use Open3.popen3(). e.g.:
Open3.popen3("jdb args") { |stdin, stdout, stderr|
  # stdin  = jdb's input stream
  # stdout = jdb's output stream
  # stderr = jdb's stderr stream
  threads = []
  threads << Thread.new(stderr) do |terr|
    while (line = terr.gets)
      puts "stderr: #{line}"
    end
  end
  threads << Thread.new(stdout) do |tout|
    while (line = tout.gets)
      puts "stdout: #{line}"
    end
  end
  stdin.puts "blah"
  threads.each { |t| t.join } # in order to clean up when you're done
}
I've given you examples for threads, but you of course want to be responsive to what jdb is doing. The above is merely a skeleton for how you open the process and handle communication with it.
The Ruby standard library includes expect, which is designed for just this type of problem. See the documentation for more information.
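A minimal sketch of IO#expect from that library (the child one-liner here is a stand-in for jdb, and the prompt patterns are my own):

```ruby
require 'pty'
require 'expect'

# Wait for a prompt, answer it, then wait for the reply. expect returns
# the match data, or nil if the (optional) timeout expires first.
matched = nil
PTY.spawn(%q{ruby -e 'print "name? "; n = gets.chomp; puts "hello #{n}"'}) do |reader, writer, pid|
  writer.sync = true
  reader.expect(/name\? /)                   # block until the prompt appears
  writer.puts 'world'
  matched = reader.expect(/hello world/, 10) # nil after a 10-second timeout
  Process.wait(pid)
end
puts matched ? 'matched' : 'no match'
```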

How do I get the STDOUT of a ruby system() call while it is being run?

Similar to Getting output of system() calls in Ruby , I am running a system command, but in this case I need to output the STDOUT from the command as it runs.
As in the linked question, the answer is again not to use system at all as system does not support this.
However this time the solution isn't to use backticks, but IO.popen, which returns an IO object that you can use to read the input as it is being generated.
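A small sketch of that pattern (the child command is a stand-in for a long-running process):

```ruby
# IO.popen yields an IO you can read line by line while the command is
# still running, unlike system/backticks which only return after exit.
lines = []
IO.popen(['ruby', '-e', '3.times { |i| puts i; STDOUT.flush }']) do |io|
  io.each_line do |line|
    lines << line          # handle each line as it arrives
    print "got: #{line}"
  end
end
puts $?.exitstatus         # Process::Status of the command, set after the block
```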
In case someone might want to read stdout and stderr:
It is important to read them in parallel, not first one then the other. Because programs are allowed to output to stdout and stderr by turns and even in parallel. So, you need threads. This fact isn't even Ruby-specific.
Stolen from here.
require 'open3'

cmd = './packer_mock.sh'
data = { :out => [], :err => [] }

# see: http://stackoverflow.com/a/1162850/83386
Open3.popen3(cmd) do |stdin, stdout, stderr, thread|
  # read each stream from a new thread
  { :out => stdout, :err => stderr }.each do |key, stream|
    Thread.new do
      until (raw_line = stream.gets).nil? do
        parsed_line = { :timestamp => Time.now, :line => raw_line }
        # append new lines
        data[key].push parsed_line
        puts "#{key}: #{parsed_line}"
      end
    end
  end

  thread.join # don't exit until the external process is done
end
Here is my solution:
require 'open3'

def io2stream(shell, &block)
  Open3.popen3(shell) do |_, stdout, stderr|
    # Note: this reads stdout to EOF before touching stderr, so it can
    # deadlock if the command fills the stderr pipe buffer first.
    while line = stdout.gets
      block.call(line)
    end
    while line = stderr.gets
      block.call(line)
    end
  end
end

io2stream("ls -la", &lambda { |str| puts str })
With the following you can capture the stdout of a system command:
output = capture(:stdout) do
  system("pwd") # your system command goes here
end
puts output
Shortened version:
output = capture(:stdout) { system("pwd") }
Similarly, you can also capture standard error with :stderr.
The capture method is provided by active_support/core_ext/kernel/reporting.rb.
Looking at that library's code comments, capture is going to be deprecated, so I'm not sure what the currently supported replacement is.
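If you'd rather not depend on ActiveSupport at all, the same trick can be sketched by hand: reopen the real stdout onto a tempfile so even child processes' output is captured (the helper name is mine):

```ruby
require 'tempfile'

# Redirect file descriptor 1 to a tempfile for the duration of the block.
# Because the redirection happens at the fd level, output from system()
# children is captured too, not just Ruby-level writes.
def capture_stdout
  tempfile = Tempfile.new('stdout')
  original = $stdout.dup       # keep a handle on the real stdout
  $stdout.reopen(tempfile)     # fd 1 now points at the tempfile
  yield
  $stdout.flush
  tempfile.rewind
  tempfile.read
ensure
  $stdout.reopen(original) if original # restore even if the block raised
  tempfile.close! if tempfile
end

output = capture_stdout { system('echo hello') }
puts output.inspect
```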
