Interprocess signal handling in Ruby

I run this script:
t = fork do
  Signal.trap "INT" do
    puts "child"
    exit
  end
  sleep 10
end

Signal.trap "INT" do
  puts "parent"
  Process.kill "INT", t
  Process.waitpid t
  exit
end

Process.waitpid t
When I do CTRL+C, I get
$ ruby sigtest.rb
^Cchild
parent
You can see that the INT was delivered to both processes, so Process.kill "INT", t tries to signal a process that has already died. Is there a way to make the user's INT signal go only to the parent, so that the output becomes:
$ ruby sigtest.rb
^Cparent
child
Solution
Rules:
When you press Ctrl+C, SIGINT is sent to the entire foreground process group, not just to the parent.
Signal handlers are inherited across fork, but are reset to their defaults when the process execs a new program.
So if you want to control the child's signals manually, you have to move the child into its own process group, i.e. change its process group ID (PGID).
See
http://corelib.rubyonrails.org/classes/Process/Sys.html#M001961
http://ruby.runpaint.org/processes (paragraph "Options Hash")
def system cmd
  pid = fork do
    exec cmd, {:pgroup => true}   # start the command in its own process group
  end
  Process.wait pid
  $?.success?
end
def ` cmd
  readme, writeme = IO.pipe
  pid = fork do
    $stdout.reopen writeme
    readme.close
    exec cmd, {:pgroup => true}   # again, a separate process group for the child
  end
  writeme.close
  data = readme.read
  Process.wait pid
  data
end
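For the question's original pure-fork example (no exec), the same isolation can be achieved by moving the child into its own process group with Process.setpgid. A minimal sketch of that idea (setting the PGID from both sides guards against the usual fork race):

t = fork do
  Process.setpgid 0, 0          # put this child in its own process group
  Signal.trap "INT" do
    puts "child"
    exit
  end
  sleep 10
end
Process.setpgid t, t rescue nil # parent sets it too; whichever runs first wins

Signal.trap "INT" do
  puts "parent"                 # Ctrl+C now reaches only the parent's group
  Process.kill "INT", t
  Process.waitpid t
  exit
end
Process.waitpid t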

You could always have the child ignore the INT signal.
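A minimal sketch of that idea; since the child now ignores INT, the parent signals it with TERM instead:

t = fork do
  Signal.trap "INT", "IGNORE"   # Ctrl+C from the terminal no longer affects the child
  Signal.trap "TERM" do         # the parent signals us explicitly with TERM
    puts "child"
    exit
  end
  sleep 10
end

Signal.trap "INT" do
  puts "parent"
  Process.kill "TERM", t
  Process.waitpid t
  exit
end
Process.waitpid t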

Related

Ruby spawn an object method

I'm trying to figure out a way to track the status of a process that I've created. In my script I start out by creating my object:
ov = OpenVASOMP::OpenVASOMP.new({"host" => "localhost", "port" => "9390", "user" => "admin", "password" => "#{ENV["OV"]}"})
Which creates an ov object and exposes a bunch of other methods. In particular: ov.task_start.
I need to be able to track the process and perform other actions while it's running, such as sending a status update to a remote server.
My initial thought was to wrap this in a Process.spawn and track the PID, but that's throwing an error:
TypeError: no implicit conversion of REXML::Element into String
and the stack trace points to this line: pid = Process.spawn(ov.task_start(taskid))
So, I guess you can't pass objects and their methods into spawn?
Here's my whole block of code in case there is something else that I'm missing:
ov = OpenVASOMP::OpenVASOMP.new({"host" => "localhost", "port" => "9390", "user" => "admin", "password" => "#{ENV["OV"]}"})
taskid = ov.task_create({"name" => timestamp, "target" => target, "config" => config})
running = true
pid = Process.spawn(ov.task_start(taskid))
Signal.trap("HUP") { log("#{results_dir}/events.log", "[!] Stop triggered by user!"); exit }

until running == false
  begin
    running = Process.getpgid(pid)
    log("#{results_dir}/events.log", "Scan PID: #{pid}")
    stat = ov.task_get_byid(taskid)
    update_ov_status(stat['progress'])
    log("#{results_dir}/events.log", "[+] Sending progress to server: #{stat['progress']}%")
    scan_status = get_scan_status
    if scan_status == "Stopped"
      ov.task_stop(taskid)
      ov.task_delete(taskid)
      ov.target_delete(target)
      Process.kill("HUP", pid)
      Process.wait
      update_task_id("")
      update_ov_status(0)
      update_scan_status("Idle")
    end
    sleep 60
  rescue Errno::ESRCH
    running = false
    puts "PID: #{pid} done!"
    log("#{results_dir}/events.log", "[!] Scan complete")
  end
end
And task_start looks like:
def task_start (task_id)
  xmlreq = xml_attr("start_task", {"task_id" => task_id}).to_s()
  begin
    xr = omp_request_xml(xmlreq)
  rescue
    raise OMPResponseError
  end
  return xr
end
Am I going about this all wrong?
Just repeating what I said in the comment as an answer, for closure.
Since task_start is not a shell command string but a block of Ruby code that should run asynchronously, use Process.fork { ov.task_start taskid } instead of Process.spawn. Process.spawn evaluated ov.task_start(taskid) immediately and tried to use its return value, a REXML::Element, as the command string, which is what raised the TypeError.
The Process.fork call returns a PID which can be used to stop the process, for example:
# in one terminal
ruby -e "puts Process.fork { loop { puts('tick'); sleep 1 } }"
# it then prints a PID like 20626
# then in another terminal:
kill -9 20626
# the "tick" stops being printed every second
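Applied to the question's code, only the spawn line needs to change:

pid = Process.fork { ov.task_start(taskid) }

The rest of the polling loop (Process.getpgid, Process.kill, Process.wait) should work unchanged with this pid.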

How to terminate a child process as part of terminating the thread that created it

I have a Ruby application that spawns a thread on-demand which in turn does a system call to execute a native binary.
I want to abort this call before the native call completes.
I tried using all options the Thread documentation provided, like kill, raise and terminate, but nothing seems to help.
This is what I'm trying to do:
class Myserver < Grape::API
  @@thr = nil

  post "/start" do
    puts "Starting script"
    @@thr = Thread.new do
      op = `sh chumma_sh.sh`
      puts op
    end
    puts @@thr.status
  end

  put "/stop" do
    @@thr.terminate
    @@thr.raise
    Thread.kill(@@thr)
    puts @@thr.status
  end
end
The thread appears to enter a sleep state while the IO operation is in progress, but how do I kill the thread so that all the child processes it created are terminated too, rather than being reparented to init?
Doing ps -ef | grep for the script returns the pid, and I could try Process.kill pid, but I wanted to know if there are better options.
I don't have the option at this moment of modifying how the script is executed as it is part of an inherited library.
Using ps is the only approach I've found that works. If you also want to kill the child's own descendants, you could use something like this:
def child_pids_recursive(pid)
  # get children: list processes and keep those whose PPID matches `pid`
  pipe = IO.popen("ps -ef | grep #{pid}")
  child_pids = pipe.readlines.map do |line|
    parts = line.split(/\s+/)
    # with ps -ef output (leading blank yields an empty first element),
    # parts[2] is the PID and parts[3] the PPID; skip our own ps pipeline
    parts[2] if parts[3] == pid.to_s && parts[2] != pipe.pid.to_s
  end.compact
  pipe.close

  # get grandchildren, recursively
  grandchild_pids = child_pids.map do |cpid|
    child_pids_recursive(cpid)
  end.flatten

  child_pids + grandchild_pids
end
def kill_all(pid)
  child_pids_recursive(pid).reverse.each do |p|
    begin
      Process.kill('TERM', p.to_i)
    rescue
      # ignore processes that already exited
    end
  end
end
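If you can change how the command is launched (the asker could not), a cleaner alternative is to start it in its own process group and signal the whole group, which avoids parsing ps entirely. A sketch, assuming the shell script may be started with Process.spawn instead of backticks:

pid = Process.spawn("sh chumma_sh.sh", :pgroup => true) # new group, pgid == pid
# ... later, to stop it and every process it spawned:
Process.kill("TERM", -pid)  # a negative pid signals the whole process group
Process.wait(pid)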

Daemonizing a child process consequently changes its PID

pid = Process.fork
#sleep 4
Process.daemon nil, true
if pid.nil? then
  job.exec
else
  #Process.detach(pid)
end
The pid returned by Process.fork changes as soon as Process.daemon(nil, true) is run. Is there a way to preserve/track the pid of a forked child process that is subsequently daemonized?
I want to know the pid of the child process from within the parent process. So far the only way I've been able to communicate the pid is through an IO.pipe, writing Process.pid from the child and reading it from the parent. Less than ideal.
Process.daemon does its own fork; that's why the pid changes. If you need to know the daemon's pid, why not use Process.pid in the forked (child) branch of the if?
pid = Process.fork
Process.daemon nil, true       # note: this daemonizes the parent too, as in the question
if pid.nil? then
  daemon_pid = Process.pid     # the daemon's real pid, available to the child itself
  job.exec
else
  # `pid` is stale here: the daemonized child has a new pid
end
The solutions I've come up with involve using Ruby's IO to pass the pid from the child process to the parent.
r, w = IO.pipe
pid = Process.fork
if pid.nil? then
  Process.daemon nil, true   # this forks again, changing the child's pid
  w.puts Process.pid         # report the daemon's real pid to the parent
  w.close
  job.exec
else
  w.close
  daemon_pid = r.gets.to_i
end
Another solution inspects the pid/ppid of every process on the system via ps and collects all descendants of a given pid:
def Process.descendant_processes(base = Process.pid)
  descendants = Hash.new { |ht, k| ht[k] = [k] }
  Hash[*`ps -eo pid,ppid`.scan(/\d+/).map { |x| x.to_i }].each { |pid, ppid|
    descendants[ppid] << descendants[pid]
  }
  descendants[base].flatten - [base]
end
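For example, to terminate every descendant of the current process:

Process.descendant_processes.each { |pid| Process.kill("TERM", pid) }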

Change STDIN with a pipe and it's a directory

I have this
pipe_in, pipe_out = IO.pipe

fork do
  # child 1: writes into the pipe
  pipe_in.close
  STDOUT.reopen pipe_out
  STDERR.reopen pipe_out
  puts "Hello World"
  pipe_out.close
end

fork do
  # child 2: reads from the pipe
  pipe_out.close
  STDIN.reopen pipe_in
  while line = gets
    puts 'child2:' + line
  end
  pipe_in.close
end
Process.wait
Process.wait
gets will always raise an error saying "gets: Is a directory", which doesn't make sense to me. If I change gets to pipe_in.gets, it works. What I want to know is: why doesn't STDIN.reopen pipe_in followed by gets work?
It works for me, with the following change:
pipe_in.close
end
+pipe_in.close
+pipe_out.close
+
Process.wait
Process.wait
Without this change, you still have both pipe ends open in the original process, so the reader will never see an end of file. That is, the process doing the wait still had the write end of the pipe open, leading to a deadlock.
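The general rule: a reader sees end-of-file only once every copy of the write end, in every process, has been closed. A tiny demonstration:

r, w = IO.pipe
fork do
  w.close          # the child must close its copy of the write end too
  puts r.read      # blocks until all write ends are closed
end
w.puts "hello"
w.close            # without this line the child would hang forever
Process.wait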

Why is IO::WaitReadable being raised differently for STDOUT than STDERR?

Given that I wish to test non-blocking reads from a long command, I created the following script, made it executable with chmod 755, and saved it as ~/bin/long, where ~/bin is in my path.
I am on a *nix variant with ruby 1.9.2p290 (2011-07-09 revision 32553) [x86_64-darwin11.0.0] compiled with RVM defaults. I do not use Windows, and am therefore unsure if the test script will work for you if you do.
#!/usr/bin/env ruby
3.times do
  STDOUT.puts 'message on stdout'
  STDERR.puts 'message on stderr'
  sleep 1
end
Why does long_err produce each STDERR message as it is printed by "long"
def long_err(bash_cmd = 'long', maxlen = 4096)
  stdin, stdout, stderr = Open3.popen3(bash_cmd)
  begin
    begin
      puts 'err -> ' + stderr.read_nonblock(maxlen)
    end while true
  rescue IO::WaitReadable
    IO.select([stderr])
    retry
  rescue EOFError
    puts 'EOF'
  end
end
while long_out remains blocked until all STDOUT messages are printed?
def long_out(bash_cmd = 'long', maxlen = 4096)
  stdin, stdout, stderr = Open3.popen3(bash_cmd)
  begin
    begin
      puts 'out -> ' + stdout.read_nonblock(maxlen)
    end while true
  rescue IO::WaitReadable
    IO.select([stdout])
    retry
  rescue EOFError
    puts 'EOF'
  end
end
I assume you will require 'open3' before testing either function.
Why is IO::WaitReadable being raised differently for STDOUT than STDERR?
Workarounds using other ways to start subprocesses also appreciated if you have them.
On most operating systems STDOUT is buffered while STDERR is not; when STDOUT is connected to a pipe rather than a terminal, as it is here, it is fully buffered. What popen3 does is basically open a pipe between the executable you launch and Ruby.
Any output that is in buffered mode is not sent through this pipe until either:
The buffer is filled (thereby forcing a flush).
The sending application exits (EOF is reached, forcing a flush).
The stream is explicitly flushed.
The reason STDERR is not buffered is that error messages are usually considered important enough to appear immediately, rather than being delayed for efficiency by buffering.
So, knowing this, you can emulate STDERR behaviour with STDOUT like this:
#!/usr/bin/env ruby
3.times do
  STDOUT.puts 'message on stdout'
  STDOUT.flush
  STDERR.puts 'message on stderr'
  sleep 1
end
and you will see the difference.
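Alternatively, set STDOUT.sync = true once at the top; Ruby then flushes after every write, so no explicit flush calls are needed:

#!/usr/bin/env ruby
STDOUT.sync = true
3.times do
  STDOUT.puts 'message on stdout'
  STDERR.puts 'message on stderr'
  sleep 1
end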
You might also want to check "Understanding Ruby and OS I/O buffering".
Here's the best I've got so far for starting subprocesses. I launch a lot of network commands so I needed a way to time them out if they take too long to come back. This should be handy in any situation where you want to remain in control of your execution path.
I adapted this from a Gist, adding code to test the exit status of the command for 3 outcomes:
Successful completion (exit status 0)
Error completion (exit status is non-zero) - raises an exception
Command timed out and was killed - raises an exception
Also fixed a race condition, simplified parameters, added a few more comments, and added debug code to help me understand what was happening with exits and signals.
Call the function like this:
output = run_with_timeout("command that might time out", 15)
output will contain the combined STDOUT and STDERR of the command if it completes successfully. If the command doesn't complete within 15 seconds it will be killed and an exception raised.
Here's the function (you'll need require 'open3' and the two constants defined at the top):
require 'open3'

DEBUG = false       # change to true for some debugging info
BUFFER_SIZE = 4096  # in bytes, this should be fine for many applications

def run_with_timeout(command, timeout)
  output = ''
  tick = 1
  begin
    # Start task in another thread, which spawns a process
    stdin, stderrout, thread = Open3.popen2e(command)
    # Get the pid of the spawned process
    pid = thread[:pid]
    start = Time.now

    while (Time.now - start) < timeout and thread.alive?
      # Wait up to `tick' seconds for output/error data
      Kernel.select([stderrout], nil, nil, tick)
      # Try to read the data
      begin
        output << stderrout.read_nonblock(BUFFER_SIZE)
        puts "we read some data..." if DEBUG
      rescue IO::WaitReadable
        # No data was ready to be read during the `tick', which is fine
        print "."  # give feedback each tick that we're waiting
      rescue EOFError
        # Command has completed, not really an error...
        puts "got EOF." if DEBUG
        # Wait briefly for the thread to exit. We don't want to kill the
        # process if it's about to exit on its own; we decide success or
        # failure based on whether the process completes successfully.
        sleep 1
        break
      end
    end

    if thread.alive?
      # The timeout has been reached and the process is still running, so
      # we need to kill the process, because killing the thread leaves
      # the process alive but detached.
      Process.kill("TERM", pid)
    end
  ensure
    stdin.close if stdin
    stderrout.close if stderrout
  end

  status = thread.value  # returns Process::Status when the process ends
  if DEBUG
    puts "thread.alive?: #{thread.alive?}"
    puts "status: #{status}"
    puts "status.class: #{status.class}"
    puts "status.exited?: #{status.exited?}"
    puts "status.exitstatus: #{status.exitstatus}"
    puts "status.signaled?: #{status.signaled?}"
    puts "status.termsig: #{status.termsig}"
    puts "status.stopsig: #{status.stopsig}"
    puts "status.stopped?: #{status.stopped?}"
    puts "status.success?: #{status.success?}"
  end

  # See how the process ended: .success? => true, false, or nil if exited? is not true
  if status.success? == true        # process exited (0)
    return output
  elsif status.success? == false    # process exited (non-zero)
    raise "command `#{command}' returned non-zero exit status (#{status.exitstatus}), see below output\n#{output}"
  elsif status.signaled?            # we killed the process (timeout reached)
    raise "shell command `#{command}' timed out and was killed (timeout = #{timeout}s): #{status}"
  else
    raise "process didn't exit and wasn't signaled. We shouldn't get to here."
  end
end
Hope this is useful.
