Long running shell command in Ruby

I want to launch and manage long running (several hours) shell commands on Linux, from a ruby script.
I need to be able to parse the output line by line, as they come. Not when the process is done.
I need to be able to kill the command and restart it if I don't like the output.
I also need to know if the process dies.
I found the right_popen gem but it hasn't been updated for 2 years and has no documentation. What would be the cleanest way to do all this?

I found a solution that allows me to do all the above, so I thought I'd share it with people who come after me :-)
We do it through pseudo-terminals, which are in stdlib (no gem required).
Here is an example of code that does what I need:
require 'pty'

cmd = 'for i in 1 2 3 4 5; do echo $i; sleep 1; done'

PTY.spawn(cmd) do |r, w, pid|
  begin
    r.sync = true
    r.each_line do |l|
      # Process each line immediately
      line = l.strip
      puts line
      if line == '3'
        # We kill the command on 3, because 3s are evil...
        ::Process.kill('SIGINT', pid)
      end
    end
  rescue Errno::EIO => e
    # Linux raises this error when the command ends
    puts "COMMAND DIED"
  ensure
    # Wait for the process to end before we go on
    ::Process.wait pid
    puts "WE ARE CLEAR"
  end
end
Output:
1
2
3
COMMAND DIED
WE ARE CLEAR
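The Errno::EIO rescue covers "knowing when the process dies" on Linux, but you can also poll for it explicitly with PTY.check, which returns a Process::Status once the child has exited and nil before that. A minimal sketch (the command and the 0.5 s poll interval are just placeholders):
require 'pty'

PTY.spawn('sleep 2') do |r, w, pid|
  loop do
    status = PTY.check(pid)  # nil while the child is alive, Process::Status after it exits
    if status
      puts "child exited with status #{status.exitstatus.inspect}"
      break
    end
    sleep 0.5
  end
end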

Related

How to run multiple ruby scripts simultaneously

I am trying to run multiple ruby scripts simultaneously on my mac, and I'm not having any luck. I can see the ruby processes start up, but then they immediately stop. The script works fine as a single process, with no errors. Here are some examples of things I've tried.
10.times do
  system "nohup ruby program.rb \"arg1 arg2\" &"
end

10.times do
  `nohup ruby program.rb \"arg1 arg2\" &`
end

10.times do
  system "ruby program.rb \"arg1 arg2\""
end
Do you need to start it from ruby for any specific reason? Why don't you start it 10 times directly from bash? Like:
$ for i in $(seq 1 10); do nohup ruby foo.rb & done
Let me know..
nohup redirects its output to a file $HOME/nohup.out, unless it is explicitly redirected. You should redirect the output of each invocation to a different file.
Also, for the safe side, I would redirect stdin to /dev/null - just in case the called program reads from stdin.
10.times do |i|
  system "nohup ruby program.rb 'arg1 arg2' </dev/null >#{ENV['HOME']}/nohup#{i}.out &"
end
BTW (and off topic): are you sure that you want to pass arg1 arg2 as a SINGLE argument to program.rb?
You can build a solution with fork, exec and wait of the module Process.
# start child processes
10.times { fork { exec(cmd) } }
# wait for each child process to finish
10.times { Process.wait }
Or a slightly longer version to play around with (tested with Ruby 1.8.7 on Ubuntu). The rescue nil suppresses the error raised when there are no more child processes to wait for.
10.times do |i|
  fork do
    ruby_cmd = "sleep(#{10-i});puts #{i}"
    exec("ruby -e \"#{ruby_cmd}\"")
  end
end

10.times { Process.wait rescue nil }
puts "Finished!"

Show every line of output of an external command in real time

I found that you can run an external command from ruby, like this:
command = "find /home/user/workspace -name *.java"
%x(#{command})
and it works nicely for commands that don't take much time to execute, but for commands like the one above, which take more time and output results progressively, there's no way to see the results until the command completes.
What I would like is to have the same look and feel as when the command is run directly from shell, in this particular case, as soon as a file is found, to show it on console.
Is this possible?
Use IO.popen or Open3.
IO.popen("echo 1; sleep 1; echo 2; sleep 1; echo 3") do |io|
  io.each_line do |line|
    puts line
  end
end
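The same streaming behaviour is available through Open3; a minimal sketch using Open3.popen2e, which merges stdout and stderr into a single stream (the command is just the example from above):
require 'open3'

Open3.popen2e("echo 1; sleep 1; echo 2; sleep 1; echo 3") do |stdin, out_and_err, wait_thr|
  out_and_err.each_line { |line| puts line }  # lines arrive as the command produces them
  puts "exit status: #{wait_thr.value.exitstatus}"
end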

Ruby - Open3 not finishing subprocess

I'm using:
- Ruby 1.9.3-p448
- Windows Server 2008
I have a file that contains commands used by a program; I'm running it this way:
C:\> PATH_TO_FOLDER/program.exe file.txt
file.txt contains some commands, so program.exe will do the following:
- Execute the commands
- Read from a DB using an ODBC method used by the program
- Output the result to a txt file
Run from PowerShell, this command works fine and as expected.
Now I have this in a file (app.rb):
require 'sinatra'
require 'open3'

get '/process' do
  program_path = "path to program.exe"
  file_name = "file.txt"
  Open3.popen3(program_path, file_name) do |i, o, e, w|
    # I have some commands here to execute, but just as an example I'm using o.read
    puts o.read
  end
end
Now, when I access http://localhost/process, Open3 does the following (I'm not 100% sure, but after trying several times I think this is the only explanation):
- Reads the commands and executes them (this is ok)
- Tries to read from the DB using the ODBC method (here is my problem: I need to receive some output from Open3 so I can show it in the browser, but I guess when it tries to read it starts another process that Open3 is not aware of, so Open3 goes on and finishes without waiting for it)
- Exits
I've found out about the following:
- Use Thread.join (in this case, w.join) to wait for the process to finish, but it doesn't work.
- Open4 seems to handle child processes, but it doesn't work on Windows.
- Process.wait(pid), in this case with pid = w.pid, but it also doesn't work.
- Timeout.timeout(n); the problem here is that I'm not sure how long it will take.
Is there any way of handling this? (waiting for Open3 subprocess so I get proper output).
We had a similar problem getting the exit status and this is what we did
require 'open3'
require 'timeout'

Open3.popen3(*cmd) do |stdin, stdout, stderr, wait_thr|
  # print stdout and stderr as it comes in
  threads = [stdout, stderr].collect do |output|
    Thread.new do
      # gets returns nil at end of stream, which ends the loop
      while ((line = output.gets rescue '') != nil) do
        unless line.blank?  # blank? comes from ActiveSupport; use line.strip.empty? in plain Ruby
          puts line
        end
      end
    end
  end

  # get exit code as a Process::Status object
  process_status = wait_thr.value #.exitstatus

  # wait for logging threads to finish before continuing
  # so we don't lose any logging output
  threads.each(&:join)

  # wait up to 5 minutes to make sure the process has really exited
  Timeout::timeout(300) do
    while !process_status.exited?
      sleep(1)
    end
  end rescue nil

  process_status.exitstatus.to_i
end
Using Open3.popen3 is easy only for trivial cases. I do not know the real code for handling the input, output and error channels of your subprocess. Neither do I know the exact behaviour of your subprocess: does it write to stdout? Does it write to stderr? Does it try to read from stdin?
This is why I assume that there are problems in the code that you replaced by puts o.read.
A good summary about the problems you can run into is on http://coldattic.info/shvedsky/pro/blogs/a-foo-walks-into-a-bar/posts/63.
Though I disagree with the author of the article, Pavel Shved, when it comes to finding a solution. He recommends his own solution. I just use one of the wrapper functions for popen3 in my projects: Open3.capture*. They do all the difficult things like waiting for stdout and stderr at the same time.
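For illustration, a minimal sketch of one of those wrappers, Open3.capture3, applied to the command from the question (the program path and file name are the placeholders used above); capture3 drains stdout and stderr concurrently and returns them only after the process has exited, which is exactly what avoids the pipe-handling pitfalls:
require 'open3'

stdout, stderr, status = Open3.capture3("path to program.exe", "file.txt")

puts stdout
warn stderr unless stderr.empty?
puts "exit status: #{status.exitstatus}"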

`exec` kills script

Running sdiff through exec causes my script to exit without errors. Even the ensure block does not get run:
begin
  puts "I occur"
  exec("sdiff onefile.csv anotherfile.csv > filediffs.txt")
rescue Exception => e
  puts "I do not get printed"
  puts e
ensure
  puts "I do not get printed"
end
puts "I used to get printed, repeatedly, now not, repeatedly"
It was working as expected for a while, then it started mysteriously exiting and the conditions are the same. No terminal output after "I occur".
This is the expected behaviour of the exec method; the documentation about it says:
Replaces the current process by running the given external command
You probably want to use system instead of exec.
exec replaces the current process with the command passed as an argument; after exec() has run, the calling process no longer exists.
Check this for reference and alternatives.
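As a sketch of that suggestion, the same snippet with system instead of exec; system runs the command in a child process and returns control to the script (note that sdiff exits with status 1 when the files differ, so a false return value does not necessarily mean failure):
begin
  puts "I occur"
  ok = system("sdiff onefile.csv anotherfile.csv > filediffs.txt")
  # $? holds the Process::Status of the last command; a nil return means it could not be run at all
  puts "sdiff exited with status #{$?.exitstatus}" unless ok.nil?
ensure
  puts "This ensure block now runs"
end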

How to wait for system command to end

I'm converting an XLS file to CSV with a system command in Ruby.
After the conversion I'm processing the CSV files, but the conversion is still running when the program wants to process the files, so at that point they don't exist yet.
Can someone tell me if it's possible to let Ruby wait the right amount of time for the system command to finish?
Right now I'm using:
sleep 20
but if it ever takes longer than that, it obviously isn't right.
What I do specifically is this:
# Call the program to convert the xls
command = "C:/Development/Tools/xls2csv/xls2csv.exe C:/TDLINK/file1.xls"
system(command)
do_stuff

def do_stuff
  # This is where I use file1.csv; however, it isn't there yet
end
Ruby's system("...") method is synchronous, i.e. it waits for the command it calls to exit; system returns true if the command exited with a 0 status and false if it exited with a non-0 status. Ruby's backticks return the output of the command:
a = `ls`
will set a to a string with a listing of the current working directory.
So it appears that xls2csv.exe returns before it finishes what it's supposed to do; maybe this is a Windows issue. It looks like you're going to have to loop until the file exists:
until File.exist?("file1.csv")
  sleep 1
end
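If the file might never appear (for example, because the conversion failed), a bounded variation of the same loop avoids waiting forever; the 60-second cap below is an arbitrary assumption:
waited = 0
until File.exist?("file1.csv")
  raise "xls2csv did not produce file1.csv within 60 seconds" if waited >= 60
  sleep 1
  waited += 1
end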
Try using threads:
command = Thread.new do
  system('ruby programm.rb') # long-running program
end
command.join # the main program waits for the thread
puts "command complete"
