Ruby Wait for SubProcess to Finish

I'm forking another Ruby script because there's no good way to catch exceptions from that script, and it occasionally errors out. When it finishes or errors out, I want to re-spawn it and let it continue where it left off. Here's my script:
def runit(number)
  r = spawn("cmd /c ruby c:\\bot\\myscript.rb", [:out, :err] => ["c:\\bot\\myscript#{number}.log", 'w'], :new_pgroup => true)
  Process.detach(r)
  r
end

def runme
  r = runit(1)
  Process.wait(r)
end

something = "nothing"
while something == 'nothing'
  runme
end
My problem is that when the subprocess errors out so does the main process. This causes the loop to end.
How can I fork a process and restart it in case it dies or errors out?
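One way to do this (a sketch, not from the thread; run_supervised and its arguments are made-up names): wait on the child with Process.wait2 and respawn whenever the exit status is non-zero. The parent survives the child's failures because a failure is just a status value, not an exception.

```ruby
# Sketch: respawn a child command until it exits cleanly.
def run_supervised(log_path, *cmd)
  attempts = 0
  loop do
    attempts += 1
    pid = Process.spawn(*cmd, [:out, :err] => [log_path, 'a'])
    _, status = Process.wait2(pid)   # blocks until the child exits
    break if status.success?         # clean exit: stop respawning
    # non-zero exit (or killed by a signal): loop around and spawn again
  end
  attempts
end
```

On Windows you'd pass the same "cmd /c ruby ..." command line; the 'a' append mode keeps the log across restarts.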

Related

How to terminate a child process as part of terminating the thread that created it

I have a Ruby application that spawns a thread on-demand which in turn does a system call to execute a native binary.
I want to abort this call before the native call completes.
I tried using all options the Thread documentation provided, like kill, raise and terminate, but nothing seems to help.
This is what I'm trying to do:
class Myserver < Grape::API
  @@thr = nil

  post "/start" do
    puts "Starting script"
    @@thr = Thread.new do
      op = `sh chumma_sh.sh`
      puts op
    end
    puts @@thr.status
  end

  put "/stop" do
    @@thr.terminate
    @@thr.raise
    Thread.kill(@@thr)
    puts @@thr.status
  end
end
The thread appears to enter a sleep state while the IO operation is in progress, but how do I kill the thread so that all the child processes it created are terminated too, rather than left attached to root?
Running ps -ef | grep for the script returns the pid, and I could try Process.kill pid, but I wanted to know if there are better options.
I don't have the option at this moment of modifying how the script is executed as it is part of an inherited library.
Using ps is the only approach I've found that works. If you also want to kill child processes, you could use something like this:
def child_pids_recursive(pid)
  # get children: scan ps output for lines whose parent-pid column matches.
  # (This indexing assumes ps lines that start with whitespace, so parts[0]
  # is empty, parts[2] is the PID and parts[3] the PPID.)
  pipe = IO.popen("ps -ef | grep #{pid}")
  child_pids = pipe.readlines.map do |line|
    parts = line.split(/\s+/)
    parts[2] if parts[3] == pid.to_s && parts[2] != pipe.pid.to_s
  end.compact
  pipe.close

  # get grandchildren
  grandchild_pids = child_pids.map do |cpid|
    child_pids_recursive(cpid)
  end.flatten

  child_pids + grandchild_pids
end

def kill_all(pid)
  # kill deepest descendants first
  child_pids_recursive(pid).reverse.each do |p|
    begin
      Process.kill('TERM', p.to_i)
    rescue
      # process already gone -- ignore
    end
  end
end
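An alternative that avoids parsing ps entirely (a sketch, not from the answer; POSIX only): start the command in its own process group and signal the whole group with a negative pid. Here a sleeping ruby one-liner stands in for sh chumma_sh.sh.

```ruby
# Start the child in its own process group...
pid = Process.spawn('ruby', '-e', 'sleep 60', pgroup: true)
sleep 0.2                     # give it a moment to start
# ...then one signal to the (negative) group id reaches the child and
# every process it spawned.
Process.kill('TERM', -pid)
_, status = Process.wait2(pid)
```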

How do I see the output of multiple forked processes simultaneously

I am using the "fork" option in ruby as follows:
pid1 = fork do
  pid1_output = `ruby scrape1.rb`
  puts "#{pid1_output}"
  puts ""
  exit
end

pid2 = fork do
  pid2_output = `ruby scrape2.rb`
  puts "#{pid2_output}"
  puts ""
  exit
end

pid3 = fork do
  pid3_output = `ruby scrape3.rb`
  puts "#{pid3_output}"
  puts ""
  exit
end

pid4 = fork do
  pid4_output = `ruby scrape4.rb`
  puts "#{pid4_output}"
  puts ""
  exit
end
Process.waitall
The problem here is that sometimes one of the processes (eg: ruby scrape1.rb) might fail or end up returning ginormous amounts of text that cannot be captured in a variable... How do I still simultaneously run 4 processes and see all their outputs in one terminal window in realtime? I understand the order of output might be mushed up but that is alright.. I basically want to re-route the STDOUT and STDERR of each forked process to the main program.. That way I can see what is being scraped by each of my scrapers and follow their progress and errors as they happen.
Use exec instead of backticks: each forked child replaces itself with the script and inherits the parent's STDOUT and STDERR, so output streams to your terminal in real time instead of being buffered into a variable:
fork do
  exec("ruby scrape1.rb")
end
fork do
  exec("ruby scrape2.rb")
end
fork do
  exec("ruby scrape3.rb")
end
fork do
  exec("ruby scrape4.rb")
end
Process.waitall
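The same idea works without fork (a sketch; run_all is a made-up name): Process.spawn children also inherit the parent's STDOUT/STDERR by default, and spawn is available on Windows, where fork is not.

```ruby
# Spawn every command concurrently; output is interleaved on the
# parent's own STDOUT/STDERR because the children inherit them.
def run_all(*commands)
  pids = commands.map { |cmd| Process.spawn(*cmd) }
  pids.map { |pid| Process.wait2(pid).last }   # collect exit statuses
end
```

Passing each command as an argument array (e.g. ['ruby', 'scrape1.rb']) skips the shell entirely.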

Fork child process with timeout and capture output

Say I have a function like below, how do I capture the output of the Process.spawn call? I should also be able to kill the process if it takes longer than a specified timeout.
Note that the function must also be cross-platform (Windows/Linux).
require 'timeout'

def execute_with_timeout!(command)
  begin
    pid = Process.spawn(command) # How do I capture output of this process?
    status = Timeout::timeout(5) {
      Process.wait(pid)
    }
  rescue Timeout::Error
    Process.kill('KILL', pid)
  end
end
Thanks.
You can use IO.pipe and tell Process.spawn to write to the redirected ends, with no external gem needed.
This works starting with Ruby 1.9.2 (I'd personally recommend 1.9.3).
The following is a simple implementation used internally by Spinach BDD to capture both out and err outputs:
# stdout, stderr pipes
rout, wout = IO.pipe
rerr, werr = IO.pipe

pid = Process.spawn(command, :out => wout, :err => werr)
_, status = Process.wait2(pid)

# close write ends so we can read them
wout.close
werr.close

@stdout = rout.readlines.join("\n")
@stderr = rerr.readlines.join("\n")

# dispose of the read ends of the pipes
rout.close
rerr.close

@last_exit_status = status.exitstatus
The original source is in features/support/filesystem.rb
It's highly recommended that you read Ruby's own Process.spawn documentation.
Hope this helps.
PS: I left the timeout implementation as homework for you ;-)
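One way to do that homework (a sketch, not from either answer; capture_with_timeout is a made-up name): read the pipe before waiting, so a chatty child can't fill the pipe buffer and deadlock, and kill the child when the deadline passes.

```ruby
require 'timeout'

def capture_with_timeout(command, secs)
  rout, wout = IO.pipe
  pid = Process.spawn(command, [:out, :err] => wout)
  wout.close                     # parent keeps only the read end
  output = ''
  begin
    Timeout.timeout(secs) do
      output = rout.read         # reads until the child closes its end
      Process.wait(pid)
    end
  rescue Timeout::Error
    Process.kill('KILL', pid)
    Process.wait(pid)            # reap the killed child
  end
  rout.close
  output
end
```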
I followed Anselm's advice in his post on the Ruby forum here.
The function looks like this:
require 'timeout'

def execute_with_timeout!(command, timeout = 5)
  begin
    pipe = IO.popen(command, 'r')
  rescue Exception => e
    raise "Execution of command #{command} unsuccessful"
  end

  output = ""
  begin
    status = Timeout::timeout(timeout) {
      # note: waiting before reading can deadlock if the child
      # produces more output than the pipe buffer holds
      Process.waitpid2(pipe.pid)
      output = pipe.gets(nil)
    }
  rescue Timeout::Error
    Process.kill('KILL', pipe.pid)
  end
  pipe.close
  output
end
This does the job, but I'd rather use a third-party gem that wraps this functionality. Anyone have any better ways of doing this? I have tried Terminator, it does exactly what I want but it does not seem to work on Windows.

Change STDIN with a pipe and it's a directory

I have this
pipe_in, pipe_out = IO.pipe

fork do
  # child 1
  pipe_in.close
  STDOUT.reopen pipe_out
  STDERR.reopen pipe_out
  puts "Hello World"
  pipe_out.close
end

fork do
  # child 2
  pipe_out.close
  STDIN.reopen pipe_in
  while line = gets
    puts 'child2:' + line
  end
  pipe_in.close
end

Process.wait
Process.wait
gets will always raise an error saying "gets: Is a directory", which doesn't make sense to me. If I change gets to pipe_in.gets it works. What I want to know is: why don't STDIN.reopen pipe_in and gets work together?
It works for me, with the following change (adding two closes in the parent, before the waits):
  pipe_in.close
end

pipe_in.close    # added
pipe_out.close   # added

Process.wait
Process.wait
Without this change you still have the pipes open in the parent process, so the reader never sees an end of file: the process doing the wait still has the write end open, which leads to a deadlock.

Ruby daemon with clean shutdown

I would like to make a Ruby daemon that shuts down gracefully on a kill command.
I would like to add a signal trap that waits until the loop body (which could take some time to run) finishes before shutting down. How would I add that to something like this:
pid = fork do
  pid_file = "/tmp/pids/daemon6.pid"
  File.open(pid_file, 'w') { |f| f.write(Process.pid) }
  loop do
    begin
      # code that could take some time to run
    rescue Exception => e
      Notifier.deliver_daemon_rescued_notification(e)
    end
    sleep(10)
  end
end
Process.detach pid
Also, would it be better to have that in a separate script, like a separate kill script instead of having it as part of the daemon code? Like something monit or God would call to stop it?
I appreciate any suggestions.
You can catch Interrupt, like this:
pid = fork do
  begin
    loop do
      # do your thing
      sleep(10)
    end
  rescue Interrupt => e
    # clean up
  end
end
Process.detach(pid)
You can do the same with Signal.trap('INT') { ... }, but with sleep involved I think it's easier to catch an exception.
Update: this is a more traditional way of doing it, and it makes sure the loop always finishes a complete go before it stops:
pid = fork do
  stop = false
  Signal.trap('INT') { stop = true }
  until stop
    # do your thing
    sleep(10)
  end
end
The downside is that it will always do the sleep, so there will almost always be a delay until the process stops after you've killed it. You can probably get around that by sleeping in bursts, or doing a combination of the variants (rescuing the Interrupt just around the sleep or something).
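That combination could look like this (a sketch, not from the answer; run_daemon, tick and slice are made-up names): the trap only sets a flag, every iteration of the work runs to completion, and the sleep happens in short slices so shutdown is prompt.

```ruby
# Sketch: a stoppable loop with an interruptible sleep.
def run_daemon(tick: 10, slice: 0.25)
  stop = false
  Signal.trap('INT')  { stop = true }
  Signal.trap('TERM') { stop = true }
  until stop
    yield                          # the work that must finish
    slept = 0.0
    while slept < tick && !stop    # sleep in bursts, checking the flag
      sleep(slice)
      slept += slice
    end
  end
end
```

Because the handler only flips a flag, the current iteration of yield always completes before the daemon exits.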
