Ruby script with parallel threads and logging bash output - ruby

I have a Ruby script I'm writing to upload a directory of roles to a Chef server. Doing this one at a time with a .each loop is slow, so I added parallelism by running each command in a separate thread. Now I'm trying to figure out how to store the output of the commands so I can read them back in the order the threads were created. The roles array is already in alphabetical order. We also use .bash_profile aliases for running the knife command with different configuration files for dev and prod.
I've tried many different ways of running the bash command and storing the output in an array or a file, etc. Currently this displays the output from each thread as it runs or finishes, so the output is hard to read and it's hard to tell whether everything finished correctly. Also, the files that the bash command output is supposed to redirect to get created, but they have empty content.
Sorry if this script isn't the easiest to read. I've only been doing Ruby for a little over a year now, and I taught myself when we started to get into Chef. I didn't have a programming background before that.
#!/opt/chefdk/embedded/bin/ruby

def print_usage_and_exit
  puts 'Need to specify 1 or more role.json files or no arguments to upload all roles'
  puts "ruby #{__FILE__} or ruby #{__FILE__} [role1.json] [role2.json] [...]"
  exit(1)
end

def fetch_roles
  roles = []
  current_dir = File.dirname(__FILE__)
  Dir.foreach("#{current_dir}/roles") do |role|
    next if role == '.' || role == '..' || role == 'README.md'
    roles.push(role)
  end
  roles
end
upload = []
i = 0
roles = (ARGV.empty? ? fetch_roles : ARGV[0..-1])
# Probably redundant, but a cheap check to make sure we're only looking at json files
roles.keep_if { |b| b.end_with?('.json') }
print_usage_and_exit if roles.empty?
print "\nSpecify new knife command if you have seperate knife command for dev and prod created with .bash_profile function."
print "\nLeave blank to use default 'knife' command"
print "\nWhich knife command to use: "
knife = ($stdin.gets.chomp('') ? 'knife' : $stdin.gets.chomp)
print "\n**** Starting upload of roles to chef server ****\n"
roles.each do |role|
upload[i] = Thread.new{
system("bash", "-cl", "#{knife} role from file #{role} > /tmp/#{role}.log")
}
i += 1
end
upload.each {|t| t.join}
roles.each do |role|
  logfile = "/tmp/#{role}.log"
  print "\n#{File.read(logfile)}\n"
  # FileUtils.rm(logfile)
end
print "\n**** Finished uploading roles to chef server ****\n"

The right way to do this is knife upload roles/. That doesn't actually answer your question per se, but I think you'll find it a lot simpler.
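For reference, the whole thread pool above could then collapse to a single call, along these lines (a sketch, assuming knife is configured for the target server and the script runs from the repo root, next to roles/):

require 'open3'

# knife batches the uploads itself, so no per-role threads are needed.
# "bash -cl" keeps the .bash_profile setup from the question available.
stdout, stderr, status = Open3.capture3('bash', '-cl', 'knife upload roles/')
puts stdout
abort("upload failed: #{stderr}") unless status.success?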

I prefer to use Open3's capture3 function to execute subprocesses, as it makes it easy to handle all the various details (stdin, stdout, stderr, environment variables, etc.).
Pair that with the use of thread-local data, a built-in feature of Ruby threads, and you have a pretty easy method of running subprocesses. I'm a big fan of using threads for this kind of concurrency. The GIL prevents Ruby from running the threads in parallel, but the capture3 subprocesses run concurrently anyway, so it doesn't really matter.
require 'open3'

commands = [
  'true',
  'echo "a more complex command from `pwd`" 1>&2 && echo "and stdout"',
]

threads = []
commands.each_with_index do |cmd, i|
  threads[i] = Thread.new do
    stdout, stderr, status = Open3.capture3("bash", stdin_data: cmd)
    Thread.current['stdout'] = stdout
    Thread.current['stderr'] = stderr
    Thread.current['status'] = status
  end
end

threads.each_with_index do |th, i|
  th.join
  puts "Thread # #{i}:"
  %w( stdout stderr status ).each do |s|
    puts "\t#{s}: #{th[s]}"
  end
  puts
end
The results are exactly what you'd expect:
$ ruby ./t.rb
Thread # 0:
stdout:
stderr:
status: pid 34244 exit 0
Thread # 1:
stdout: and stdout
stderr: a more complex command from /Users/dfarrell/t
status: pid 34243 exit 0
You can use the exit status to give a final summary of how many commands failed or succeeded.
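For example, a minimal tally built on the thread-local data stored above (a sketch using the threads array from this answer):

# Process::Status#success? is true only for a zero exit status.
failed = threads.count { |th| !th['status'].success? }
puts "#{threads.size - failed} commands succeeded, #{failed} failed"
exit(1) unless failed.zero?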

Related

Run multiple commands in the same shell process

I'm attempting to run a series of commands through Ruby, and capture stdin, stdout, stderr and the exitstatus.
require "open3"
require "pp"
command_list = [
"export MY_ENV_VAR=foobar",
"printenv MY_ENV_VAR"
]
executed_commands = []
result = nil
command_list.each do |command|
stdout, stderr, status = Open3.capture3(command)
result = status.exitstatus
executed_commands << [command, stdout, stderr, result]
break if result != 0
end
pp executed_commands
puts "exited with #{result} exit status."
This process exits with a non-zero status, indicating that the printenv MY_ENV_VAR command fails, and that the commands are not being run in the same process.
How can I execute a series of commands in a single shell process, recording stdin, stdout, stderr and the exitstatus of each command?
I would strongly suggest you don't chain multiple shell commands together into a single system call if you don't absolutely have to. A major caveat is that you can't individually inspect the return codes of each command in the chain, which leads to a lack of control over the command flow. For example, if the first command in the chain fails for any reason, the subsequent commands will still attempt to execute regardless of the state of the first command. This may be undesirable.
I suggest encapsulating the popen functionality in a method and just calling the method for each command you want to run. This would allow you to react to any failed execution on a command-by-command basis.
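For example, a minimal sketch of that encapsulation (run_command is a name made up here for illustration):

require 'open3'

# Run a single command and hand everything back to the caller,
# so failures can be handled command-by-command.
def run_command(command)
  stdout, stderr, status = Open3.capture3(command)
  [stdout, stderr, status.exitstatus]
end

stdout, stderr, exitstatus = run_command('printenv MY_ENV_VAR')
warn "command failed (#{exitstatus}): #{stderr}" unless exitstatus.zero?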
Your code for running a series of commands is fine. The issue is that you were setting the environment variable incorrectly. A child process cannot set the environment of its parent like you were trying to do. Child processes do inherit the environment of their parent, so here is one way to fix your code:
require "open3"
require "pp"
ENV['MY_ENV_VAR'] = 'hi'
command_list = [
"printenv MY_ENV_VAR"
]
executed_commands = []
result = nil
command_list.each do |command|
stdout, stderr, status = Open3.capture3(command)
result = status.exitstatus
executed_commands << [command, stdout, stderr, result]
break if result != 0
end
pp executed_commands
puts "exited with #{result} exit status."
The result when I run this on Linux with Ruby 2.3.1 is:
[["printenv MY_ENV_VAR", "hi\n", "", 0]]
exited with 0 exit status.
Now if you wanted to pass an environment variable to the child process without modifying your own process's environment, see the documentation for the arguments of Open3.capture3:
https://ruby-doc.org/stdlib/libdoc/open3/rdoc/Open3.html#method-c-capture3
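In short, capture3 accepts an optional environment hash as its first argument, so something like this should work without touching ENV (a small sketch):

require 'open3'

# The env hash applies only to the child; this process's ENV is untouched.
stdout, _stderr, status = Open3.capture3({ 'MY_ENV_VAR' => 'hi' }, 'printenv MY_ENV_VAR')
puts stdout          # => "hi\n"
puts status.success? # => true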

Kill a process called using open3 in ruby

I'm using a command-line program; it works as shown below:
$ ROUTE_TO_FOLDER/app < "long text"
If "long text" is written using the parameters "app" needs, then it will fill a text file with results. If not, it will fill the text file with dots continuously (I can't handle or modify the code of "app" in order to avoid this).
In a ruby script there's a line like this:
text = "long text that will be used by app"
output = system("ROUTE_TO_FOLDER/app < #{text}")
Now, if text is well written, there won't be any problems and I will get an output file as mentioned before. The problem comes when text is not well written. What happens then is that my Ruby script hangs, and I'm not sure how to kill the child process.
I've found Open3 and I've used the method like this:
irb> cmd = "ROUTE_TO_FOLDER/app < #{text}"
irb> stdin, stdout, stderr, wait_thr = Open3.popen3(cmd)
=> [#<IO:fd 10>, #<IO:fd 11>, #<IO:fd 13>, #<Thread:0x007f3a1a6f8820 run>]
When I do:
irb> wait_thr.value
it also hangs, and :
irb> wait_thr.status
=> "sleep"
How can I avoid these problems? Is it not recognizing that "app" has failed?
wait_thr.pid gives you the pid of the started process. Just do
Process.kill("KILL", wait_thr.pid)
when you need to kill it.
You can combine this with detecting whether the process is hung (continuously outputs dots) in one of two ways.
1) Set a timeout for waiting for the process:
require 'open3'
require 'timeout'

text = "long text that will be used by app"
cmd = "ROUTE_TO_FOLDER/app < #{text}"
Open3.popen3(cmd) do |i, o, e, w|
  begin
    Timeout.timeout(10) do # timeout set to 10 sec, change if needed
      # process output of the process. it will produce EOF when done.
      until o.eof?
        # o.read_nonblock(N) ...
      end
    end
  rescue Timeout::Error
    # here you know that the process took longer than 10 seconds
    Process.kill("KILL", w.pid)
    # do whatever other error processing you need
  end
end
2) Check the process output. (The code below is simplified - you probably don't want to read the output of your process into a single String buf first and then process it, but I guess you get the idea.)
require 'open3'

text = "long text that will be used by app"
cmd = "ROUTE_TO_FOLDER/app < #{text}"
Open3.popen3(cmd) do |i, o, e, w|
  # process output of the process. it will produce EOF when done.
  # If you get 16 dots in a row - the process is in the continuous loop
  # (you may want to deal with stderr instead - depending on where these dots are sent to)
  buf = ""
  error = false
  until o.eof?
    buf << o.read_nonblock(16)
    if buf.size >= 16 && buf[-16..-1] == '.' * 16
      # ok, the process is hung
      Process.kill("KILL", w.pid)
      error = true
      # you should also get o.eof? the next time you check (or after flushing the pipe buffer),
      # so you will get out of the until o.eof? loop
    end
  end
  if error
    # do whatever error processing you need
  else
    # process buf, it contains all the output
  end
end

Ruby: Printing system output in real time?

I have a Ruby rake task that calls a bash script via:
Open3.popen3('/path/file_converter.sh', file_list, output_format)
That bash script outputs logs to the command line as it processes (which takes from 30 seconds to 5 hours).
When I call the rake task, the output from bash is returned to the command line, but only as one large message after the entire script has run. Does anyone know of a way to pipe the command-line output directly to Ruby output as it occurs?
According to the documentation you should be able to use the output stream given in the block:
Open3.popen3('/path/file_converter.sh', file_list, output_format) do |_, out, _, _|
  out.each_line do |line|
    puts line
  end
end
Put the output into a file and run the process in the background in a new thread. Afterwards you can parse the file.
class FileConverter
  def initialize
    @output_file = '/tmp/something.txt'
    output_format = 'foo'
    file_list = 'bar foo something'
    @child = Thread.new do
      # redirect stdout to the file first, then send stderr there as well
      `/path/file_converter.sh #{file_list} #{output_format} > #{@output_file} 2>&1`
    end
  end

  def data
    File.readlines(@output_file)
  end

  def parse
    while @child.alive?
      # parse data # TODO: need to implement real parsing
      sleep 0.5
    end
  end
end

fc = FileConverter.new
fc.parse

How to execute interactive shell program on a remote host from ruby

I am trying to execute an interactive shell program on a remote host from another Ruby program. For the sake of simplicity, let's suppose that the program I want to execute is something like this:
puts "Give me a number:"
number = gets.chomp()
puts "You gave me #{number}"
The most successful approach so far has been the one I got from here. It is this one:
require 'open3'

Open3.popen3("ssh -tt root@remote 'ruby numbers.rb'") do |stdin, stdout, stderr|
  # stdin = input stream
  # stdout = output stream
  # stderr = stderr stream
  threads = []
  threads << Thread.new(stderr) do |terr|
    while (line = terr.gets)
      puts "stderr: #{line}"
    end
  end
  threads << Thread.new(stdout) do |terr|
    while (line = terr.gets)
      puts "stdout: #{line}"
    end
  end
  sleep(2)
  puts "Give me an answer: "
  answer = gets.chomp()
  stdin.puts answer
  threads.each { |t| t.join() } # in order to cleanup when you're done.
end
The problem is that this is not "interactive" enough for me, and the program that I would like to execute (not the simple numbers.rb) has a lot more input/output. You can think of it as an apt-get install that will ask you for some input to solve some problems.
I have read about Net::SSH and PTY, but couldn't tell whether they were going to be the (easy/elegant) solution I am looking for.
The ideal solution would be for the user not to realize that the IO is being done on a remote host: stdin goes to the remote host's stdin, and the stdout from the remote host comes back to me and I display it.
If you have any ideas I could try I will be happy to hear them. Thank you!
Try this:
require "readline"
require 'open3'
Open3.popen3("ssh -tt root#remote 'ruby numbers.rb'") do |i, o, e, th|
Thread.new {
while !i.closed? do
input =Readline.readline("", true).strip
i.puts input
end
}
t_err = Thread.new {
while !e.eof? do
putc e.readchar
end
}
t_out = Thread.new {
while !o.eof? do
putc o.readchar
end
}
Process::waitpid(th.pid) rescue nil
# "rescue nil" is there in case process already ended.
t_err.join
t_out.join
end
I got it working, but don't ask me why it works; it was mainly trial and error.
Alternatives:
Using Net::SSH, you need to use :on_process and a Thread: see "ruby net/ssh channel dies?". Don't forget to add session.loop(0.1); more info at the link. The Thread/:on_process idea inspired me to write a gem for my own use: https://github.com/da99/Chee/blob/master/lib/Chee.rb
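A rough sketch of that Net::SSH approach (the host, user, and command are placeholders, and this is untested - treat it as an outline, not a drop-in):

require 'net/ssh' # the net-ssh gem, not stdlib

Net::SSH.start('remote', 'root') do |session|
  channel = session.open_channel do |ch|
    ch.request_pty # interactive programs usually want a pty
    ch.exec("ruby numbers.rb") do |c, success|
      raise 'could not start remote command' unless success
      c.on_data          { |_, data| $stdout.print(data) }        # remote stdout -> local stdout
      c.on_extended_data { |_, _type, data| $stderr.print(data) } # remote stderr -> local stderr
    end
  end
  # Feed local stdin to the remote process from a separate thread.
  stdin_thread = Thread.new do
    while channel.active? && (line = $stdin.gets)
      channel.send_data(line)
    end
  end
  session.loop(0.1) { channel.active? }
  stdin_thread.kill
end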
If the last call in your Ruby program is SSH, then you can exec ssh -tt root@remote 'ruby numbers.rb'. But if you still want interactivity between User<->Ruby<->SSH, then the previous alternative is the best.
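For instance (a one-line sketch; Kernel#exec replaces the current Ruby process, so nothing after it runs):

exec('ssh', '-tt', 'root@remote', "ruby numbers.rb")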

How to proxy a shell process in ruby

I'm creating a script to wrap jdb (java debugger). I essentially want to wrap this process and proxy the user interaction. So I want it to:
start jdb from my script
send the output of jdb to stdout
pause and wait for input when jdb does
when the user enters commands, pass them to jdb
At the moment I really want a pass-through to jdb. The reason for this is to initialize the process with specific parameters and potentially add more commands in the future.
Update:
Here's the shell of what ended up working for me using expect:
PTY.spawn("jdb -attach 1234") do |read,write,pid|
write.sync = true
while (true) do
read.expect(/\r\r\n> /) do |s|
s = s[0].split(/\r\r\n/)
s.pop # get rid of prompt
s.each { |line| puts line }
print '> '
STDOUT.flush
write.print(STDIN.gets)
end
end
end
Use Open3.popen3(). e.g.:
Open3.popen3("jdb args") { |stdin, stdout, stderr|
# stdin = jdb's input stream
# stdout = jdb's output stream
# stderr = jdb's stderr stream
threads = []
threads << Thread.new(stderr) do |terr|
while (line = terr.gets)
puts "stderr: #{line}"
end
end
threads << Thread.new(stdout) do |terr|
while (line = terr.gets)
puts "stdout: #{line}"
end
end
stdin.puts "blah"
threads.each{|t| t.join()} #in order to cleanup when you're done.
}
I've given you examples for threads, but you of course want to be responsive to what jdb is doing. The above is merely a skeleton for how you open the process and handle communication with it.
The Ruby standard library includes expect, which is designed for just this type of problem. See the documentation for more information.
