How to run code after ruby Kernel.exec

I have the following Ruby shell:
#!/usr/bin/env ruby
$stdin.each_line do |line|
  pid = fork {
    exec line
    puts "after exec -> #{Process.pid}"
  }
  Process.wait pid
end
The puts call after exec is never executed. Based on ri Kernel.exec, exec replaces the current process by running the given external command, so it should be replacing each newly forked process with the external one. How am I supposed to run anything after the exec command?

You cannot.
Per the documentation for Kernel#exec, "[it] replaces the current process by running the given external command". That means you are no longer running your code; you are running the command you specified instead.
If you want to "wrap" a system call then you should use Kernel#system (or the backtick operator) to execute the command in a subshell.
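For instance, here is a minimal sketch of the loop from the question rewritten with Kernel#system (the status-reporting puts is illustrative, not part of the original):
#!/usr/bin/env ruby
$stdin.each_line do |line|
  # system runs the command in a subshell and blocks until it finishes,
  # so this process survives and the lines below still execute.
  system(line)
  puts "after system -> #{Process.pid}, exit status: #{$?.exitstatus}"
end
Because system waits for the subshell itself, no explicit fork or Process.wait is needed here.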

Related

Start a process and keep it running after the ruby script exits

I'm trying to write a ruby script that:
Runs a command/script,
Stores the command's pid in a file so I can check later whether it's still running, and
Leaves the command running after the ruby script exits.
I'm successful with steps 1 and 2, but it looks like the started script (i.e., the child process) terminates once the ruby script finishes.
This is the latest version of what I could come up with (greatly simplified):
pid = fork do
  exec "/my/fancy/daemon/style/script"
end
File.open('tmp/process.pid', 'w') { |file| file.write(pid.to_s) }
Can you please tell me what I'm doing wrong? The ultimate goal is to keep the other script (i.e., the child process) running after the ruby script exits.
You can "detach" your child process:
Process.detach(pid)
See Process#detach for more info.
If you're running your script from a shell, and your script is the last interactive process, your virtual terminal may exit and cause your child process to receive a hangup as well. If you don't need to send output to the terminal, you can call Process.daemon before running exec.
See Process#daemon.
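For example, a minimal sketch adding Process.detach to the question's code (the script path and pid file are the asker's placeholders):
pid = fork do
  exec "/my/fancy/daemon/style/script"
end
# detach starts a background thread that reaps the child, so the parent
# can exit cleanly without calling Process.wait and without leaving a zombie.
Process.detach(pid)
File.open('tmp/process.pid', 'w') { |file| file.write(pid.to_s) }
One caveat: Process.daemon forks internally and exits the calling process, so if you call it inside the child, the pid recorded by the outer fork will belong to an intermediate process that has already exited.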

Shell exec and pipes

I'm using bash, and as I understand it, exec followed by a command is supposed to replace the shell without creating a new process. For example,
exec echo hello
has the appearance of printing "hello" and then immediately exiting, because after echo is done, the shell process isn't there to return to anymore.
If I put this as part of a pipeline - for instance,
exec echo hello | sed 's/hell/heck/'
or
echo hello | exec sed 's/hell/heck/'
my expectation is that, similarly, the shell would terminate as a result of its process being replaced away. This is not what happens in reality, though - both the commands above print "hecko" and return to the shell normally, just as if the word "exec" wasn't there. Why is this?
There is a sentence in the bash manual:
Each command in a pipeline is executed as a separate process (i.e., in a subshell).
So in both examples the pipeline first spawns two processes, and exec is executed inside one of the spawned processes, with no impact on the shell executing the pipeline.
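The same principle explains the Ruby examples earlier on this page: exec inside a forked child replaces only that child, never the parent. A minimal Ruby sketch (the echoed string is illustrative):
# The child process is replaced by echo; the parent is untouched.
fork { exec "echo", "hello from the child" }
Process.wait
puts "parent #{Process.pid} is still running"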

Exiting ruby in subshell without killing parent

I have Ruby programA that calls Ruby programB with:
system("ruby programB.rb <parameters>")
Under certain conditions, I want programB to terminate its operation (and the associated subshell) but allow programA to continue on to the next set of parameters.
However, exit() and abort() kill both the subshell and the parent, and I am unable to get Process.kill("SIGTERM",0) to work in programB (unfortunately, this is on Windows). I'm running ruby 1.9.2.
How can I terminate programB without also killing programA?
If the regular system call isn't cutting it, the usual way is to do something like this:
pid = fork do
  exec("ruby programB.rb ...")
end
Process.kill("SIGTERM", pid)
The fork operation gives you a process identifier you can signal with Process.kill. By contrast, system blocks until the child process returns, so the parent never gets a chance to send the child a signal while it is running.
Unfortunately there's no fork in Windows, but there are alternatives that achieve the same thing.
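One such alternative, assuming Ruby 1.9+ (which the asker has), is Process.spawn, which also works on Windows and returns a pid without blocking; a minimal sketch (KILL is chosen because Windows signal support is limited, and programB.rb is the asker's script):
pid = Process.spawn("ruby", "programB.rb")
# ... later, when programB should stop:
Process.kill("KILL", pid)
Process.wait(pid)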
exit() and abort() don't kill the parent, at least not on Mac OS or Linux in my experience.
Try saving this as abort.rb:
puts RUBY_VERSION
puts `date`
puts 'aborting'
abort
and this as exit.rb:
puts RUBY_VERSION
puts `date`
puts 'exiting'
exit
Then save this as test.rb in the same directory and run it:
puts `ruby exit.rb`
puts `ruby abort.rb`
On my system I see:
1.9.3
Fri Dec 21 22:17:12 MST 2012
exiting
1.9.3
Fri Dec 21 22:17:12 MST 2012
aborting
They do exit the currently running script in the sub-shell, which then exits because it's not a login shell. They can also set a return status that matters to the calling program, but I have yet to see them kill the parent.
If you need to capture STDERR, using backticks or %x won't work. I'd recommend using Open3.capture3 for simplicity if you need to know what status code was returned, or whether STDERR returned anything.
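For example, a minimal sketch (the command string is illustrative):
require 'open3'

# capture3 returns stdout and stderr as strings plus a Process::Status object.
stdout_str, stderr_str, status = Open3.capture3('ruby programB.rb')
puts "exit status: #{status.exitstatus}"
puts "stderr: #{stderr_str}" unless stderr_str.empty?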
The only thing that works reliably for me is this:
kill -INT $$
It reliably kills the script and only the script, even if it was sourced from the command line. Note that I'm running GNU bash, version 4.4.12(1)-release (x86_64-apple-darwin15.6.0); I can't recall whether this works on bash 3.x.

How do I open STDIN of process in Ruby?

I have a set of tasks that I need to run from a Ruby script, however one particular task always waits for EOF on STDIN before quitting.
Obviously this causes the script to hang while waiting for the child process to end.
I have the process ID of the child process, but not a pipe or any kind of handle to it. How could I open a handle to the STDIN of a process to send EOF to it?
EDIT: Given that you aren't starting the script, a solution that occurs to me is to put $stdin under your control while using your gem. I suggest something like:
old_stdin = $stdin.dup
# note that old_stdin.fileno is non-0.
# create a file handle you can use to signal EOF
new_stdin = File::open('/dev/null', 'r')
# and make $stdin use it, instead.
$stdin.reopen(new_stdin)
new_stdin.close
# note that $stdin.fileno is still 0, though now it's using /dev/null for input.
# replace with the call that runs the external program
system('/bin/cat')
# "cat" will now exit. restore the old state.
$stdin.reopen(old_stdin)
old_stdin.close
If your ruby script is creating the tasks, it can use IO::popen. For example, cat, when run with no arguments, will wait for EOF on stdin before it exits, but you can run the following:
f = IO::popen('cat', 'w')
f.puts('hello')
# signals EOF to "cat"
f.close

Spawn a background process in Ruby

I'm writing a ruby bootstrapping script for a school project, and part of this bootstrapping process is to start a couple of background processes (which are written and function properly). What I'd like to do is something along the lines of:
`/path/to/daemon1 &`
`/path/to/daemon2 &`
`/path/to/daemon3 &`
However, that blocks on the first call to execute daemon1. I've seen references to a Process.spawn method, but that seems to be a 1.9+ feature, and I'm limited to Ruby 1.8.
I've also tried to execute these daemons from different threads, but I'd like my bootstrap script to be able to exit.
So how can I start these background processes so that my bootstrap script doesn't block and can exit (but still have the daemons running in the background)?
As long as you are working on a POSIX OS you can use fork and exec.
fork = Create a subprocess
exec = Replace current process with another process
You then need to indicate, via Process.detach, that your main process is not interested in the created subprocesses.
job1 = fork do
  exec "/path/to/daemon01"
end
Process.detach(job1)
...
A better way to pseudo-daemonize:
`((/path/to/daemon1 &)&)`
will drop the process into its own shell.
The best way to actually daemonize:
`service daemon1 start`
and make sure the server/user has permission to start the actual daemon. Check out the 'daemonize' tool for Linux to set up your daemon.
