I need to call commands sporadically in my Ruby app. Most commands finish quickly, but some run for longer (like playing a song), and I need to be able to kill such a process on demand.
The commands are called in a sub-thread so as not to block the rest of the app.
I also need to store info on running commands (so I know what to kill if needed), and I need to send out info when a command ends.
The problems:
wait and waitpid don't work (they never return).
I tried to use trap, which detects when a command ends, but it only works for the most recently started command.
Ruby leaves the commands as zombie processes (until the app is killed), and trying to kill them (both with Process.kill and by calling the kill command directly) doesn't remove them. (Perhaps that's also why wait doesn't work?)
# Play sound (player can be either 'mplayer' or 'aplay')
pid = spawn player, sound_dir + file
Eddie::common[:commands][:sound_pids][pid] = file

Signal.trap("CLD") {
  # This runs, but doesn't clean up zombies
  Process.kill "KILL", pid
  p = system 'kill ' + pid.to_s
  Eddie::common[:commands][:sound_pids].delete pid
}
If I run this twice (once while the first command is still running), it looks like this after both commands have ended:
Eddie::common[:commands][:sound_pids] => {3018=>"firstfile.wav"}
and this is the result of ps
chris 3018 0.0 0.0 0 0 pts/5 Z+ 23:50 0:00 [aplay] <defunct>
chris 3486 0.2 0.0 0 0 pts/5 Z+ 23:51 0:00 [aplay] <defunct>
Added note: Using system to call the command doesn't leave a zombie process, but in turn the command isn't killable.
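One way out, sketched here as an assumption rather than taken from the post: spawn as before, but reap each child in its own thread, which removes the zombie and gives a natural place to send the "command ended" notification.

pid = spawn player, sound_dir + file
Eddie::common[:commands][:sound_pids][pid] = file

Thread.new(pid) do |p|
  Process.wait(p)  # blocks only this thread and reaps the child, so no zombie is left
  Eddie::common[:commands][:sound_pids].delete(p)
  # send the "command ended" notification from here
end

If no end-of-command notification were needed, Process.detach(pid) would do the same reaping in a background thread.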
I was running a Python script which launches a Fortran executable several times (with os.system('./executable params.ini')).
Unfortunately, I pressed CTRL+Z to stop the Python script, but it seems that I stopped it during the execution of the Fortran executable.
Now it is impossible to resume the Python script. I tried:
fg %1
and
bg %1
and
kill -CONT pid_of_executable
But nothing happens...
So, is there a way to resume the Python script? I am frustrated... if anyone could save my life... (I am joking)
UPDATE 1: Once the Python script is stopped by CTRL+Z, ps aux | grep compute
gives:
user1 38258 0.0 0.0 6121988 10324 s003 S 3:26PM 0:00.99 /opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python compute_Cl_variable_step_160_between_1e-8_and_1_values_Only_Omega_m_der_to_choose.py
user1 33564 0.0 0.0 6010372 16472 s012 S+ 1:34PM 0:01.44 /opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python compute_Cl_variable_step_160_between_1e-8_and_1_values_Only_Omega_m_der_to_choose.py
user1 96299 0.0 0.0 6509060 12668 s004 S+ 6:06PM 0:01.77 /opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python compute_Cl_variable_step_160_between_1e-8_and_1_values_Only_Omega_m_der_to_choose.py
With
jobs
you can see the suspended jobs.
fg <job-number>
will bring it back to the foreground.
I'm running Ruby in an Alpine docker container (it's a Sidekiq worker, if that matters). At a certain point, my application receives instructions to shell out to a subcommand. I need to be able to stream STDOUT rather than have it buffer. This is why I'm using PTY instead of system() or a similar approach. I'm executing the following line of code:
stdout, stdin, pid = PTY.spawn(my_cmd)
When I connect to the docker container and run ps auxf, I see this:
root 7 0.0 0.4 187492 72668 ? Sl 01:38 0:00 ruby_program
root 12378 0.0 0.0 1508 252 pts/4 Ss+ 01:38 0:00 \_ sh -c my_cmd
root 12380 0.0 0.0 15936 6544 pts/4 S+ 01:38 0:00 \_ my_cmd
Note how the child process of ruby is "sh -c my_cmd", which itself then has a child "my_cmd" process.
"my_cmd" can take a significant amount of time to run. It is designed so that sending a signal USR1 to the process causes it to save its state so it can be resumed later and abort cleanly.
The problem is that the pid returned from "PTY.spawn()" is the pid of the "sh -c my_cmd" process, not the "my_cmd" process. So when I execute:
Process.kill('USR1', pid)
it sends USR1 to sh, not to my_cmd, so it doesn't behave as it should.
Is there any way to get the pid that corresponds to the command I actually specified? I'm open to ideas outside the realm of PTY, but it needs to satisfy the following constraints:
1) I need to be able to stream output from both STDOUT and STDERR as they are written, without waiting for them to be flushed (since STDOUT and STDERR get mixed together into a single stream in PTY, I'm redirecting STDERR to a file and using inotify to get updates).
2) I need to be able to send USR1 to the process to be able to pause it.
I gave up on a clean solution. I finally just executed
pgrep -P #{pid}
to get the child pid, and then I could send USR1 to that process. Feels hacky, but when ruby gives you lemons...
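In Ruby that workaround is only a couple of lines. A minimal sketch, assuming my_cmd is the command string from the question, with error handling omitted:

require 'pty'

stdout, stdin, pid = PTY.spawn(my_cmd)  # pid belongs to the `sh -c` wrapper
child_pid = `pgrep -P #{pid}`.to_i      # PID of the wrapper's first child
Process.kill('USR1', child_pid) if child_pid > 0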
You should send your arguments as arrays. So instead of
stdout, stdin, pid = PTY.spawn("my_cmd arg1 arg2")
You should use
stdout, stdin, pid = PTY.spawn("my_cmd", "arg1", "arg2")
etc. With the array form, no intermediate shell is involved, so the pid returned by PTY.spawn should be that of my_cmd itself.
Also see:
Process.spawn child does not receive TERM
https://zendesk.engineering/running-a-child-process-in-ruby-properly-febd0a2b6ec8
I have a question about UNIX job control in RHEL6
Basically, I am trying to implement passenger debug log rotation using logrotate. I am following the instructions here:
https://github.com/phusion/passenger/wiki/Phusion-Passenger-and-logrotation
I've got everything set up correctly (I think). My problem is this: when I spawn the background job using
nohup pipetool $HOME/passenger.log < $HOME/passenger.pipe &
And then log out and back in. If I inspect the process table, for example by using 'ps aux', and check the pid of the process, it appears with the command 'bash'. I have tried changing the first line of the script to "#!/usr/bin/ruby". Here is an example of this:
[root@server]# nohup pipetool /var/log/nginx/passenger-debug.log < /var/pipe/passenger.pipe &
[1] 63767
[root@server]# exit
exit
[me@server]$ sudo su
[sudo] password for me:
[root@server]# ps aux | grep 63767
root 63767 0.0 0.0 108144 2392 pts/0 S 15:26 0:00 bash
root 63887 0.0 0.0 103236 856 pts/0 S+ 15:26 0:00 grep 63767
[root@server]#
When this occurs, the killall -HUP pipetool line in the supplied logrotate file fails because 'pipetool' is not matched. Again, I've tried changing the first line to #!/usr/bin/ruby; this had no impact. So, my question is basically: is there any good way to have the actual command appear in the process table instead of just 'bash' when spawned using job control? I am using bash as the shell when I invoke the pipetool. I appreciate you taking the time to help me.
This should work for you: edit pipetool to set the global variable $PROGRAM_NAME:
$PROGRAM_NAME = 'pipetool'
The script should then show up as pipetool in the process list.
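A minimal sketch of what the top of pipetool could look like (Process.setproctitle, available since Ruby 2.1, is an alternative that doesn't touch $0):

#!/usr/bin/ruby
# Rename the process so ps and `killall -HUP pipetool` match 'pipetool'
# instead of the interpreter name.
$PROGRAM_NAME = 'pipetool'
# Equivalent on Ruby 2.1+:
# Process.setproctitle('pipetool')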
I am currently reading up on some more details on Bash scripting and especially process management here. In the section on "PIDs and Parents" I found the following statement:
A process's PID will NEVER be freed up for use after the process dies UNTIL the parent process waits for the PID to see whether it ended and retrieve its exit code.
So if I understand this correctly: if I start a process in a bash script and the process terminates, its PID cannot be used by any other process. Wouldn't this mean that if I have a long-running script, which repeatedly starts other sub-processes but never waits on them, I'll eventually have a resource leak, because the used PIDs will not be returned to the system?
How about if I actually wait for the other process, but the wait gets cancelled by a trap? Would this wait somehow still free up the PID, or do I have to wait again after the trap has been caught?
Luckily you won't. I can't tell you exactly why but you can easily test this. Run the following script (stop with Ctrl+C):
#!/bin/bash
while true; do
  sleep 5 &
  sleep 1
done
You can see you get no zombies (leaked PIDs) after 6+ seconds. To see some zombies, use the following Python code (again, stop with Ctrl+C):
#!/usr/bin/python
import subprocess, time
pl = []
while True:
    pl.append(subprocess.Popen(["sleep", "5"]))
    time.sleep(1)
After 6 seconds you'll see one zombie:
ps xaw | grep 'sleep'
...
26470 pts/2 Z+ 0:00 [sleep] <defunct>
...
My guess is that bash does wait and stores the results, reaping the zombie processes with or without the builtin wait command. For the Python script, if you remove the pl.append part, the garbage collection releases the objects and does its magic, again reaping the zombies. Just for info, a child may never become a zombie (from Wikipedia, Zombie process):
...if the parent explicitly ignores SIGCHLD by setting its handler to SIG_IGN (rather
than simply ignoring the signal by default) or has the SA_NOCLDWAIT flag set, all
child exit status information will be discarded and no zombie processes will be left.
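The quoted behaviour is easy to reproduce from Ruby as well. A small sketch (mine, not part of the answer above):

# With SIGCHLD explicitly ignored, the kernel discards the exit status
# and the child never becomes a zombie.
Signal.trap('CHLD', 'SIG_IGN')
pid = spawn('sleep', '5')
sleep 6
system("ps -p #{pid}")  # prints no process line: the child was already reaped

After setting SIG_IGN, Process.wait raises Errno::ECHILD, since there is no exit status left to collect.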
You don't have to explicitly wait on foreground processes because the shell in which your script is running waits on them. The next process won't start until the previous one finishes.
If you start many long running background processes, you could use all available PIDs, but that's subject to the limit of ulimit -u (which could be unlimited).
I have some processes showing up as <defunct> in top (and ps). I've boiled things down from the real scripts and programs.
In my crontab:
* * * * * /tmp/launcher.sh /tmp/tester.sh
The contents of launcher.sh (which is of course marked executable):
#!/bin/bash
# the real script does a little argument processing here
"$#"
The contents of tester.sh (which is of course marked executable):
#!/bin/bash
sleep 27 & # the real script launches a compiled C program in the background
ps shows the following:
user 24257 24256 0 18:32 ? 00:00:00 [launcher.sh] <defunct>
user 24259 1 0 18:32 ? 00:00:00 sleep 27
Note that tester.sh does not appear--it has exited after launching the background job.
Why does launcher.sh stick around, marked <defunct>? It only seems to do this when launched by cron--not when I run it myself.
Additional note: launcher.sh is a common script on the system this runs on, which is not easily modified. The other things (the crontab, tester.sh, even the program that I run instead of sleep) can be modified much more easily.
Because they haven't been the subject of a wait(2) system call.
Since someone may wait for these processes in the future, the kernel can't completely get rid of them or it won't be able to execute the wait system call because it won't have the exit status or evidence of its existence any more.
When you start one from the shell, your shell is trapping SIGCHLD and doing various wait operations anyway, so nothing stays defunct for long.
But cron isn't in a wait state, it is sleeping, so the defunct child may stick around for a while until cron wakes up.
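To see that first hand, here is a small Ruby sketch (mine, not from the original answer) showing a child staying defunct until the parent waits:

pid = spawn('true')                      # child exits immediately
sleep 1
system("ps -p #{pid} -o pid,stat,comm")  # state Z: a zombie, not yet reaped
Process.wait(pid)                        # the wait(2) call reaps it
system("ps -p #{pid} -o pid,stat,comm")  # no process line any more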
Update: Responding to comment...
Hmm. I did manage to duplicate the issue:
PPID PID PGID SESS COMMAND
1 3562 3562 3562 cron
3562 1629 3562 3562 \_ cron
1629 1636 1636 1636 \_ sh <defunct>
1 1639 1636 1636 sleep
So, what happened was, I think:
cron forks and cron child starts shell
shell (1636) starts sid and pgid 1636 and starts sleep
shell exits, SIGCHLD sent to cron 3562
signal is ignored or mishandled
shell turns zombie

Note that sleep is reparented to init, so when the sleep exits, init will get the signal and clean up. I'm still trying to figure out when the zombie gets reaped. Probably, with no active children, cron 1629 figures out it can exit; at that point the zombie will be reparented to init and reaped.

So now we wonder about the missing SIGCHLD that cron should have processed. It isn't necessarily vixie cron's fault. As you can see here, libdaemon installs a SIGCHLD handler during daemon_fork(), and this could interfere with signal delivery on a quick exit by the intermediate 1629. Now, I don't even know if vixie cron on my Ubuntu system is even built with libdaemon, but at least I have a new theory. :-)
In my opinion it's caused by the CROND process (spawned by crond for every task) waiting for input on the pipe connected to the stdout/stderr of the command in the crontab. This is done because cron is able to send the resulting output via mail to the user.
So CROND is waiting for EOF until the user command and all of its spawned child processes have closed the pipe. Once that happens, CROND continues with its wait statement and the defunct user command disappears.
So I think you have to explicitly disconnect every spawned subprocess in your script from the pipe (e.g. by redirecting it to a file or /dev/null).
So the following line should work in the crontab:
* * * * * ( /tmp/launcher.sh /tmp/tester.sh &>/dev/null & )
I suspect that cron is waiting for all subprocesses in the session to terminate. See wait(2) with respect to negative pid arguments. You can see the SESS with:
ps faxo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm
Here's what I see (edited):
STAT EUID RUID TT TPGID SESS PGRP PPID PID %CPU COMMAND
Ss 0 0 ? -1 3197 3197 1 3197 0.0 cron
S 0 0 ? -1 3197 3197 3197 18825 0.0 \_ cron
Zs 1000 1000 ? -1 18832 18832 18825 18832 0.0 \_ sh <defunct>
S 1000 1000 ? -1 18832 18832 1 18836 0.0 sleep
Notice that the sh and the sleep are in the same SESS.
Use the command setsid(1). Here's tester.sh:
#!/bin/bash
setsid sleep 27 # the real script launches a compiled C program in the background
Notice you don't need &, setsid puts it in the background.
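For completeness, roughly the same detachment can be done from inside a Ruby program. A sketch, not from the answer:

pid = fork do
  Process.setsid       # start a new session, so cron no longer waits on it
  exec('sleep', '27')  # replace the child with the long-running command
end
Process.detach(pid)    # reap the intermediate child in the background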
I’d recommend that you solve the problem by simply not having two separate processes: Have launcher.sh do this on its last line:
exec "$#"
This will eliminate the superfluous process.
I found this question while I was looking for a solution to a similar issue. Unfortunately, the answers to this question didn't solve my problem.
Killing a defunct process directly is not an option; you need to find and kill its parent process instead. I ended up killing the defunct processes in the following way:
ps -ef | grep '<defunct>' | grep -v grep | awk '{print "kill -9 ",$3}' | sh
In "grep ''" you can narrow down the search to a specific defunct process you are after.
I have tested the same problem many times, and I finally got a solution.
Just specify /bin/bash before the bash script, as shown below:
* * * * * /bin/bash /tmp/launcher.sh /tmp/tester.sh