Killing shell processes when thread dies - ruby

I have code like this:
#!/usr/bin/env ruby
Thread.report_on_exception = false
Thread.abort_on_exception = true

Thread.new do
  `shellcommand 1 2 3`
end

do_other_stuff
If do_other_stuff encounters an exception, it kills the thread and the whole ruby process, which is what I want. But, shellcommand 1 2 3 continues running in the background.
How can I have shellcommand 1 2 3 also be killed when the ruby process aborts?

You can't (with a general flag on Thread at least). It's a separate process that you started in a thread. The thread dying doesn't stop the process.
You have to save the pid of the process and terminate it explicitly:
Thread.report_on_exception = false
Thread.abort_on_exception = true

pids = []

at_exit do
  pids.each do |pid|
    `kill -9 #{pid}`
  end
end

Thread.new do
  pids.push Process.spawn('for((i=0; ;++i)); do echo "$i"; sleep 1; done')
end

sleep 5
raise 'Failure'
Output:
0
1
2
3
4
Traceback (most recent call last):
test.rb:17:in `<main>': Failure (RuntimeError)
Needless to say, don't use this code as is in production.
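A gentler variant of that cleanup (a sketch, not the answer's code) signals the child with Process.kill instead of shelling out to `kill -9`. This avoids forking an extra process per signal and lets you rescue the case where the child has already exited; `sleep 60` stands in for the long-running command:

```ruby
pid = Process.spawn('sleep', '60')  # stand-in for the long-running shell command

begin
  Process.kill('TERM', pid)  # SIGTERM lets the child clean up; use 'KILL' only as a last resort
  Process.wait(pid)          # reap the child so it does not linger as a zombie
rescue Errno::ESRCH
  # the child already exited between spawn and kill -- nothing to do
end
```

Wrapped in the answer's at_exit block, this terminates every recorded pid when the Ruby process aborts.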

Non-blocking backticks

I want to run multiple time-consuming shell commands from Ruby in a non-blocking (asynchronous) way.
I want to pass options to commands, receive output in Ruby, and (ideally) handle errors.
The script below will naturally take 15 seconds to execute:
test.rb
3.times do |i|
  puts `sleep 5; echo #{i} | tail -n 1` # some time-consuming complex command
end
$ /usr/bin/time ruby test.rb
0
1
2
15.29 real 0.13 user 0.09 sys
With Thread, it can apparently be executed in parallel, and it takes only 5 seconds, as expected:
threads = []
3.times do |i|
  threads << Thread.new {
    puts `sleep 5; echo #{i} | tail -n 1`
  }
end
threads.each { |t| t.join }
$ /usr/bin/time ruby test.rb
2
0
1
5.17 real 0.12 user 0.06 sys
But is this the best approach? Is there any other way?
I have also written a version using Open3.popen2, but it seems to take 15 seconds to execute, as in the first example (unless wrapped in a Thread):
require 'open3'

3.times do |i|
  Open3.popen2("sleep 5; echo #{i} | tail -n 1") do |stdin, stdout|
    puts stdout.read
  end
end
The documentation describes "block form" and "non-block form", but this "block" refers to anonymous functions, and has nothing to do with concurrency, correct?
Is the Open3 class alone only capable of blocking execution?
The problem with your code is that stdout.read is a blocking call.
You could defer the reading until the command is finished.
At first, create the commands:
commands = Array.new(3) { |i| Open3.popen2("sleep 5; echo hello from #{i}") }
Then, wait for each command to finish:
commands.each { |stdin, stdout, wait_thr| wait_thr.join }
Finally, gather the output and close the IO streams:
commands.each do |stdin, stdout, wait_thr|
  puts stdout.read
  stdin.close
  stdout.close
end
Output: (after 5 seconds)
hello from 0
hello from 1
hello from 2
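Put together, the three steps read as one short script (a sketch, with the delay shortened to one second to keep it quick):

```ruby
require 'open3'

# Start all commands first; popen2 returns [stdin, stdout, wait_thr]
# without blocking on the command itself.
commands = Array.new(3) { |i| Open3.popen2("sleep 1; echo hello from #{i}") }

# Wait for every command to finish...
commands.each { |_stdin, _stdout, wait_thr| wait_thr.join }

# ...then read the buffered output and close the streams.
commands.each do |stdin, stdout, _wait_thr|
  puts stdout.read
  stdin.close
  stdout.close
end
```

Note that deferring the read relies on the output fitting in the pipe buffer (typically 64 KiB on Linux); a command that writes more than that will block before exiting, so for large output you still need to read concurrently.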

Ruby next command after while loop

I am trying to launch a command in a while loop and then continue my script, but the loop never finishes. The condition is true on purpose; I don't want to make it false, because the command has to be executed every 10 minutes.
while true
  pid = spawn('xterm -e command')
  sleep 600
  Process.kill('TERM', pid)
end
The equivalent bash code works fine, because I can continue with the next commands of the script by putting & after done:
while : ; do
  xterm -e command ; sleep 600
done &
echo $! > /tmp/mycommand.pid
In Ruby, does the end statement block my script inside the loop, or is the true value not appropriate here?
If I understand right you want to create a thread:
Thread.new do
  while true
    sleep(1)
    puts 'inside'
  end
end

puts 'outside'
sleep(3)
And output:
outside
inside
inside
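Applied to the question, the restart loop goes inside the thread and the script continues past it. This is only a sketch: `sleep 1` stands in for the asker's `xterm -e command`, and the Errno::ESRCH rescue covers the command exiting on its own before the ten minutes are up:

```ruby
watcher = Thread.new do
  loop do
    pid = spawn('sleep 1')        # stand-in for: spawn('xterm -e command')
    sleep 600                     # let the command run for 10 minutes
    begin
      Process.kill('TERM', pid)   # then kill it; the loop restarts it
    rescue Errno::ESRCH
      # the command already exited on its own
    end
  end
end

puts 'next command runs immediately'
# watcher.join   # uncomment if the script should block here forever
```

Remember that a Ruby process exits when the main thread ends, so a script that should keep restarting the command forever needs the join (or other long-running work) at the bottom.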

trap not working when called as a script, but works in IRB

I am experimenting with multiple processes. I am trapping SIGCLD to execute something when the child is done. It is working on IRB but not when I execute as a ruby script.
pid = fork {sleep 2; puts 'hello'}
trap('CLD') { puts "pid: #{pid} exited with code"}
When I run the above from IRB, both lines are printed; but when I run it as a Ruby script, the line within the trap procedure does not show up.
IRB gives you an outer loop, which means that the ruby process doesn't exit until you decide to kill it. The problem with your ruby script is that the main process is finishing and killing your child (yikes) before it has the chance to trap the signal.
My guess is that this is a test script, and the chances are that your desired program won't have the case where the parent finishes before the child. To see your trap working in a plain ruby script, add a sleep at the end:
pid = fork {sleep 2; puts 'hello'}
trap('CLD') { puts "pid: #{pid} exited with code"}
sleep 3
To populate the $? global variable, you should explicitly wait for the child process to exit:
pid = fork {sleep 2; puts 'hello'}
trap('CLD') { puts "pid: #{pid} exited with code #{$? >> 8}" }
Process.wait
If you do want the child to run after the parent process has died, you want a daemon (double fork).
When you run your code in IRB, the main thread belongs to IRB, so everything you call effectively lives inside an infinite loop.
When you execute a script, the main thread is your own, and it dies before the signal can be trapped. Try this:
pid = fork {sleep 2; puts 'hello'}
trap('CLD') { puts "pid: #{pid} exited with code"}
sleep 5 # keeps the main thread alive long enough to trap the signal
Hope it helps.
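For the daemon (double fork) route mentioned in the other answer, Ruby ships Process.daemon, which performs the double fork plus setsid for you. A minimal sketch; the log path is made up for the demo:

```ruby
require 'tmpdir'

log = File.join(Dir.tmpdir, 'daemon_demo.log')  # hypothetical scratch path for the demo

pid = fork do
  Process.daemon(true)                 # double fork + setsid; stdio detached to /dev/null
  File.write(log, "daemon #{Process.pid} is running\n")
  sleep 2                              # stand-in for the daemon's real work
end
Process.wait(pid)                      # the intermediate child exits straight away
puts 'parent may exit now; the daemon keeps running'
```

After Process.daemon the child is re-parented to init, which is exactly the case where the parent finishing first is intended rather than a bug.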

Read STDOUT and STDERR from subprocess continuously

I'm using IO.popen to start a subprocess, but I only get the result of everything that happened in the time it took for the subprocess to run (sometimes 5 minutes or whatever) when the subprocess exits. I really need to be able to see everything the subprocess writes to stderr and stdout as-and-when it happens.
So far I could not find anything that works like this, but I'm sure it's possible.
If you need to get output in real time, I would recommend using the stdlib PTY instead of popen.
Something like this:
require 'pty'

cmd = 'echo a; sleep 1; cat /some/file; sleep 1; echo b'

PTY.spawn cmd do |r, w, pid|
  begin
    r.sync
    r.each_line { |l| puts "#{Time.now.strftime('%M:%S')} - #{l.strip}" }
  rescue Errno::EIO
    # the PTY raises EIO when the child closes its end; safe to ignore here
  ensure
    ::Process.wait pid
  end
end
exit "#{cmd} failed" unless $? && $?.exitstatus == 0
> 33:36 - a
> 33:37 - cat: /some/file: No such file or directory
> 33:38 - b
This way you get output instantly, just as in a terminal.
You might want to use Open3.popen3 from the standard library; it gives access to stdin, stdout, and stderr as streams.
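A sketch of that popen3 route: one reader thread per pipe streams stdout and stderr line by line, so neither stream can fill its pipe buffer and stall the child. The echo command is just a stand-in:

```ruby
require 'open3'

lines = Queue.new   # collect [stream, line] pairs as they arrive

Open3.popen3('echo out1; echo err1 1>&2; echo out2') do |stdin, stdout, stderr, wait_thr|
  stdin.close       # we send no input
  # One reader thread per pipe, so a burst on either stream cannot block the child.
  readers = [[:out, stdout], [:err, stderr]].map do |name, stream|
    Thread.new do
      stream.each_line do |l|
        puts "#{name}: #{l}"      # printed as soon as the child emits it
        lines << [name, l.chomp]
      end
    end
  end
  readers.each(&:join)
  puts "exit status: #{wait_thr.value.exitstatus}"
end
```

Unlike the PTY version, this keeps stdout and stderr separate, at the cost of the child possibly switching to block-buffered output when it detects a pipe.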

How to run multiple external commands in the background in ruby

Given this Unix shell script:
test.sh:
#!/bin/sh
sleep 2 &
sleep 5 &
sleep 1 &
wait
time ./test.sh
real 0m5.008s
user 0m0.040s
sys 0m0.000s
How would you accomplish the same thing in Ruby on a Unix machine?
The sleep commands are just an example, just assume that they are long running external commands instead.
Straight from the Process.waitall documentation:
fork { sleep 0.2; exit 2 } #=> 27432
fork { sleep 0.1; exit 1 } #=> 27433
fork { exit 0 } #=> 27434
p Process.waitall
Of course, instead of using Ruby's sleep, you can call whatever external command you like using Kernel#system or the backtick operator.
#!/usr/bin/env ruby
pids = []
pids << Kernel.fork { `sleep 2` }
pids << Kernel.fork { `sleep 5` }
pids << Kernel.fork { `sleep 1` }
pids.each { |pid| Process.wait(pid) }
To answer my own question (I just found out about this):
#!/usr/bin/ruby
spawn 'sleep 2'
spawn 'sleep 5'
spawn 'sleep 1'
Process.waitall
On Ruby 1.8, spawn is not built in; you need to install the sfl gem (which backports it) and also require this:
require 'rubygems'
require 'sfl'
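Since Process.waitall returns an array of [pid, status] pairs, the spawn version can also report how each command exited. A sketch, using fractional sleeps (accepted by both GNU and BSD sleep) to keep it quick:

```ruby
# Launch the commands in the background and record their pids.
pids = ['0.2', '0.5', '0.1'].map { |n| spawn('sleep', n) }

# waitall blocks until every child is reaped and
# returns [[pid, Process::Status], ...] pairs.
statuses = Process.waitall
statuses.each do |pid, status|
  puts "#{pid} exited with #{status.exitstatus}"
end
```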