Exiting ruby in subshell without killing parent - ruby

I have Ruby programA that calls Ruby programB with:
system("ruby programB.rb <parameters>")
Under certain conditions, I want programB to terminate its operation (and the associated subshell) but allow programA to continue on to the next set of parameters.
However, exit() and abort() kill both the subshell and the parent, and I am unable to get Process.kill("SIGTERM",0) to work in programB (unfortunately, this is on Windows). I'm running ruby 1.9.2.
How can I terminate programB without also killing programA?

If the regular system call isn't cutting it, the usual way is to do something like this:
pid = fork do
  exec("ruby programB.rb ...")
end
Process.kill("SIGTERM", pid)
The fork operation gives you a process identifier you can signal. system, by contrast, blocks until the child process returns, so the parent has no chance to send the child a signal while it is running.
Unfortunately there's no fork in Windows, but there are alternatives that achieve the same thing.
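One such alternative is Process.spawn, which exists in Ruby 1.9+ and works on Windows; here's a minimal sketch of the same pattern with it (the script name and parameter are placeholders):
# Process.spawn returns immediately with the child's pid, much like fork + exec.
pid = Process.spawn("ruby", "programB.rb", "some-parameter")
# ... the parent keeps running and can later terminate the child:
Process.kill("KILL", pid)  # Windows supports only a handful of signal names; KILL forcibly terminates
Process.wait(pid)          # reap the child so it doesn't linger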

exit() and abort() don't kill the parent, at least not on Mac OS or Linux in my experience.
Try saving this as abort.rb:
puts RUBY_VERSION
puts `date`
puts 'aborting'
abort
and this as exit.rb:
puts RUBY_VERSION
puts `date`
puts 'exiting'
exit
Then save this as test.rb in the same directory and run it:
puts `ruby exit.rb`
puts `ruby abort.rb`
On my system I see:
1.9.3
Fri Dec 21 22:17:12 MST 2012
exiting
1.9.3
Fri Dec 21 22:17:12 MST 2012
aborting
They do exit the script currently running in the sub-shell, and the sub-shell then exits because it's not a login shell. They can also set a return status, which may be important to the calling program, but I have yet to see them kill the parent.
If you need to capture STDERR, backticks and %x won't work. I'd recommend Open3.capture3 for simplicity if you need to know the exit status or whether anything was written to STDERR.
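For example, a minimal sketch that runs the abort.rb script from above and captures everything:
require 'open3'
# Run the child and collect its stdout, stderr, and exit status in one call.
stdout, stderr, status = Open3.capture3('ruby', 'abort.rb')
puts stdout
warn stderr unless stderr.empty?
puts "exit status: #{status.exitstatus}"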

The only thing that works reliably for me is this:
kill -INT $$
It reliably kills the script and only the script, even if it was source'd from the command line. Note that I'm running GNU bash, version 4.4.12(1)-release (x86_64-apple-darwin15.6.0); I can't recall whether this works on bash 3.x.

Related

Tcl and Cygwin and a Background Process which should hangup

I have a bash script server.sh which is maintained by an external source and ideally should not be modified. This script writes to stdout and stderr.
In fact, this server.sh itself is doing an exec tclsh immediately:
#!/bin/sh
# \
exec tclsh "$0" ${1+"$@"}
so in fact, it is just a wrapper around a Tcl script. I just mention this in case you think that this matters.
I need a Tcl script setup.tcl which is supposed to do some preparatory work, then invoke server.sh (in the background), then do some cleanup work (and display the PID of the background process), and terminate.
server.sh is supposed to continue running until explicitly killed.
setup.tcl is usually invoked manually, either from a Cygwin bash shell or from a Windows cmd shell. In the latter case, it is ensured that Cygwin's bash.exe is in the PATH.
The environment is Windows 7 and Cygwin. The Tcl is either Cygwin's (8.5) or ActiveState 8.4.
The first version (omitting error handling) went like this:
# setup.tcl:
# .... preparatory work goes here
set childpid [exec bash.exe server.sh &]
# .... clean up work goes here
puts $childpid
exit 0
While this works when started as ActiveState Tcl from a Windows CMD shell, it does not work in a pure Cygwin setup. The reason is that as soon as setup.tcl ends, a signal is sent to the child process, which is then killed too.
Using nohup would not help here, because I want to see the output of server.sh as soon as it occurs.
My next idea was to create an intermediate bash script, mediator.sh, which uses disown -h to detach the child process and keep it from being killed:
#!/usr/bin/bash
# mediator.sh
server.sh &
child=$!
disown -h $child
and invoke mediator.sh from setup.tcl. But aside from the fact that I don't see an easy way to pass the child PID up to setup.tcl, the main problem is that it doesn't work either: while mediator.sh does keep the child alive when called directly from the Cygwin command line, server.sh is again killed when setup.tcl exits if I call mediator.sh via setup.tcl.
Anybody knowing a solution for this?
You'll want to set a trap handler in your server script so you can handle/ignore certain signals.
For example, to ignore HUP signals, you can do something like the following:
#!/bin/bash
handle_signal() {
  echo "Ignoring HUP signal"
}
trap handle_signal SIGHUP
# Rest of code goes here
In the example case, if the script receives a HUP signal it will print a message and continue as normal. It will still die on Ctrl-C, since that sends the INT signal, which is left unhandled.
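If the long-running program happens to be a Ruby script rather than a shell script, the same idea can be sketched with Signal.trap:
# Ignore HUP; INT (Ctrl-C) and everything else keeps its default behaviour.
Signal.trap('HUP') { puts 'Ignoring HUP signal' }
# rest of the long-running code goes here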

Start a process and keep it running after the ruby script exits

I'm trying to write a ruby script that:
Runs a command/script,
Stores the command's process pid in a file so I can check if it's still running later, and
Keeps the command running after the ruby code exits.
I'm successful in steps 1 and 2, but it looks like the started script (i.e, the child process) terminates once the ruby code is finished.
This is the last version of what I could think about (super simplified):
pid = fork do
exec "/my/fancy/daemon/style/script"
end
File.open('tmp/process.pid', 'w') { |file| file.write(pid.to_s) }
Can you please tell me what am I doing wrong? The ultimate goal is to keep the other script (i.e, the child process) running after the ruby code exits.
You can "detach" your child process:
Process.detach(pid)
See Process#detach for more info.
If you're running your script from a shell, and your script is the last interactive process, your virtual terminal may exit and cause your child process to hang up as well. If you don't need to send output to the terminal, you can use Process.daemon before running exec.
See Process#daemon.
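Putting it together with the question's snippet, a minimal sketch (the script path and pid file are the question's placeholders):
pid = fork do
  exec "/my/fancy/daemon/style/script"
end
# A watcher thread reaps the child for us so it never becomes a zombie;
# the child is not tied to this script's lifetime and keeps running after we exit.
Process.detach(pid)
File.open('tmp/process.pid', 'w') { |file| file.write(pid.to_s) }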

How to run code after ruby Kernel.exec

I have the following ruby shell.
#!/usr/bin/env ruby
$stdin.each_line do |line|
  pid = fork {
    exec line
    puts "after exec -> #{Process.pid}"
  }
  Process.wait pid
end
The puts call after exec is never executed. Based on ri Kernel.exec, it seems that exec replaces the current process by running the given external command, so it replaces the newly forked process with the external process. How am I supposed to run anything after the exec command?
You cannot.
Per the documentation for Kernel#exec, "[it] replaces the current process by running the given external command". That means that you are no longer running your code but instead the code you specified by the command.
If you want to "wrap" a system call then you should use Kernel#system (or the backtick operator) to execute the command in a subshell.
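For example, a sketch of the question's loop rewritten with Kernel#system, so that code after the call does run:
#!/usr/bin/env ruby
$stdin.each_line do |line|
  # system runs the command in a subshell and returns when it finishes,
  # so the following line is reached for every command.
  success = system(line)
  puts "after system -> #{Process.pid}, success: #{success.inspect}"
end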

Ruby kill virtual shell opened with PTY.spawn

In a ruby script, I start several virtual shells, each managed by a shell manager object, like so:
@shell = PTY.spawn 'env PS1="\w>" TERM=dumb COLUMNS=63 LINES=21 sh -i'
At some later point in time, I would like to destroy this instance and also kill the associated shell process. Sadly, I can't get anything to work properly. Here's what I tried, in order of probability to work:
Nothing, that is, expecting the shell process to be closed when the managing object is destroyed.
Killing all processes running on the shell (this works) with the kill command, and then killing the shell itself with system("kill #{@shell[2]}"). This has no effect.
Using -9 in the above. This leaves the shell process defunct.
All the shells get closed when the ruby program exits, but I want to kill them while keeping the program running. Anyone encounter something like this before?
The problem is zombies. Yes, really.
All Unix-style kernels leave the process around until someone waits for it. (That's in order to keep track of the PID, the exit status, and a bit of other stuff.) They are called zombies and show up with a Z state in the ps(1) listing. You can't kill them, because they are already dead. They go away when you wait for them.
So here is how to clean up your @shell object:
@shell[0].close
@shell[1].close
begin
  Process.wait @shell[2]
rescue PTY::ChildExited
end
You may not need the rescue block depending on whether you have higher level layers catching exceptions too broadly. (Sigh, like my irb.)
By the way, the reason your process finally vanished when the Ruby program exited is that the zombie then also became an orphan (no parent process), and either the shell or init(8) eventually waits for all orphans.

how to control (start/kill) a background process (server app) in ruby

I'm trying to set up a server for integration tests (specs actually) via ruby and can't figure out how to control the process.
So, what I'm trying to do is:
run a rake task for my gem that executes the integration specs
the task needs to first start a server (I use webrick) and then run the specs
after executing the specs it should kill the webrick process so I'm not left with some unused background process
webrick is not a requirement, but it's included in the ruby standard library so being able to use it would be great.
Hope anyone is able to help!
PS: I'm running on linux, so having this work for windows is not my main priority (right now).
The standard way is to use the system functions fork (to duplicate the current process), exec (to replace the current process by an executable file), and kill (to send a signal to a process to terminate it).
For example :
pid = fork do
  # this code is run in the child process
  # you can do anything here, like changing current directory or reopening STDOUT
  exec "/path/to/executable"
end
# this code is run in the parent process
# do your stuff
# kill it (other signals than TERM may be used, depending on the program you want
# to kill. The signal KILL will always work but the process won't be allowed
# to clean up anything)
Process.kill "TERM", pid
# you have to wait for its termination, otherwise it will become a zombie process
# (or you can use Process.detach)
Process.wait pid
This should work on any Unix-like system. Windows creates processes in a different way.
I just had to do something similar and this is what I came up with. @Michael Witrant's answer got me started, but I changed some things, like using Process.spawn instead of fork (newer and better).
# start spawns a process and returns the pid of the process
def start(exe)
  puts "Starting #{exe}"
  pid = spawn(exe)
  # need to detach to avoid daemon processes: http://www.ruby-doc.org/core-2.1.3/Process.html#method-c-detach
  Process.detach(pid)
  return pid
end
# This will kill off all the programs we started
def killall(pids)
  pids.each do |pid|
    puts "Killing #{pid}"
    # kill it (other signals than TERM may be used, depending on the program you want
    # to kill. The signal KILL will always work but the process won't be allowed
    # to clean up anything)
    begin
      Process.kill "TERM", pid
      # you have to wait for its termination, otherwise it will become a zombie process
      # (or you can use Process.detach)
      Process.wait pid
    rescue => ex
      puts "ERROR: Couldn't kill #{pid}. #{ex.class}=#{ex.message}"
    end
  end
end
# Now we can start processes and keep the pids for killing them later
pids = []
pids << start('./someprogram')
# Do whatever you want here, run your tests, etc.
# When you're done, be sure to kill off the processes you spawned
killall(pids)
That's about all she wrote, give it a try and let me know how it works.
I have tried fork, but it has some problems when ActiveRecord is involved in both processes. I would suggest the Spawn plugin (http://github.com/tra/spawn). It only does fork, but it takes care of ActiveRecord.
