I'm working on a piece of code which monitors a directory and performs certain tasks when a new file is created in that directory. I'm using FSSM and my (simplified) code looks like this:
require 'fssm'

class FileWatcher
  def initialize
    FSSM.monitor('./temp/', '**/*', :directories => true) do
      create do |base, relative|
        puts "Create called with #{relative}"
      end
    end
  end
end

FileWatcher.new
The file monitoring aspect works well, but my problem is what happens when I halt this script: the file-watcher process remains running. E.g. this is after running and halting the script 3 times:
$ ps aux | grep file_watcher
root 3272 1.0 5.3 9760 6596 pts/0 Tl 00:11 0:02 ruby file_watcher.rb
root 3302 1.5 5.2 9760 6564 pts/0 Tl 00:14 0:02 ruby file_watcher.rb
root 3314 2.2 5.2 9764 6564 pts/0 Sl+ 00:14 0:02 ruby file_watcher.rb
I.e. there are 3 processes still running. So, how do I clean up when the script exits?
Install a signal handler in the parent process which triggers when it is about to get killed by a signal (SIGINT, SIGTERM, SIGQUIT, SIGHUP) and which then in turn sends an appropriate signal to the child process (the FSSM monitor).
Related
I was running a Python script which launches a Fortran executable several times (with os.system('./executable params.ini')).
Unfortunately, I hit CTRL+Z to stop the Python script, and it seems that I stopped it during the execution of the Fortran executable.
Now it is impossible to resume the Python script. I tried:
fg %1
and
bg %1
and
kill -CONT pid_of_executable
But nothing happens ...
So, is there a way to resume the Python script? I am frustrated... if anyone could save my life... (I am joking)
UPDATE 1: Once the Python script is stopped by CTRL+Z, ps aux | grep compute
gives:
user1 38258 0.0 0.0 6121988 10324 s003 S 3:26PM 0:00.99 /opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python compute_Cl_variable_step_160_between_1e-8_and_1_values_Only_Omega_m_der_to_choose.py
user1 33564 0.0 0.0 6010372 16472 s012 S+ 1:34PM 0:01.44 /opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python compute_Cl_variable_step_160_between_1e-8_and_1_values_Only_Omega_m_der_to_choose.py
user1 96299 0.0 0.0 6509060 12668 s004 S+ 6:06PM 0:01.77 /opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python compute_Cl_variable_step_160_between_1e-8_and_1_values_Only_Omega_m_der_to_choose.py
With
jobs
you can see the suspended jobs
fg jobnumber
will bring it back to the foreground.
I'm running ruby in an Alpine docker container (it's a sidekiq worker, if that matters). At a certain point, my application receives instructions to shell out to a subcommand. I need to be able to stream STDOUT rather than have it buffer, which is why I'm using PTY instead of system() or a similar approach. I'm executing the following line of code:
stdout, stdin, pid = PTY.spawn(my_cmd)
When I connect to the docker container and run ps auxf, I see this:
root 7 0.0 0.4 187492 72668 ? Sl 01:38 0:00 ruby_program
root 12378 0.0 0.0 1508 252 pts/4 Ss+ 01:38 0:00 \_ sh -c my_cmd
root 12380 0.0 0.0 15936 6544 pts/4 S+ 01:38 0:00 \_ my_cmd
Note how the child process of ruby is "sh -c my_cmd", which itself then has a child "my_cmd" process.
"my_cmd" can take a significant amount of time to run. It is designed so that sending a signal USR1 to the process causes it to save its state so it can be resumed later and abort cleanly.
The problem is that the pid returned from "PTY.spawn()" is the pid of the "sh -c my_cmd" process, not the "my_cmd" process. So when I execute:
Process.kill('USR1', pid)
it sends USR1 to sh, not to my_cmd, so it doesn't behave as it should.
Is there any way to get the pid that corresponds to the command I actually specified? I'm open to ideas outside the realm of PTY, but it needs to satisfy the following constraints:
1) I need to be able to stream output from both STDOUT and STDERR as they are written, without waiting for them to be flushed (since STDOUT and STDERR get mixed together into a single stream in PTY, I'm redirecting STDERR to a file and using inotify to get updates).
2) I need to be able to send USR1 to the process to be able to pause it.
I gave up on a clean solution. I finally just executed
pgrep -P #{pid}
to get the child pid, and then I could send USR1 to that process. Feels hacky, but when ruby gives you lemons...
You should pass your command and its arguments as separate array elements rather than one string. So instead of
stdout, stdin, pid = PTY.spawn("my_cmd arg1 arg2")
You should use
stdout, stdin, pid = PTY.spawn("my_cmd", "arg1", "arg2")
etc.
Also see:
Process.spawn child does not receive TERM
https://zendesk.engineering/running-a-child-process-in-ruby-properly-febd0a2b6ec8
I need to call commands sporadically in my ruby app, most commands will end quite quickly, but some will run for longer (like playing a song) and I need to be able to kill that process if wanted.
The commands are called in a sub thread as not to lock the rest of the app.
I also need to store info on running commands (so I know what to kill if needed), but I also need to send out info when the command ended.
The problems:
wait or waitpid doesn't work (they never return)
I tried to use trap, which detects when the command ended, but it only works for the latest started command.
Ruby leaves the commands as zombie processes (until the app is killed), and trying to kill them (both with Process.kill and by shelling out to kill) doesn't remove them. (Perhaps that is why wait doesn't work??)
# Play sound (player can be either 'mplayer' or 'aplay')
pid = spawn player, sound_dir + file
Eddie::common[:commands][:sound_pids][pid] = file

Signal.trap("CLD") {
  # This runs, but doesn't clean up zombies
  Process.kill "KILL", pid
  p = system 'kill ' + pid.to_s
  Eddie::common[:commands][:sound_pids].delete pid
}
If I run this twice (once while the first command is still running), it looks like this after both commands have ended:
Eddie::common[:commands][:sound_pids] => {3018=>"firstfile.wav"}
and this is the result of ps
chris 3018 0.0 0.0 0 0 pts/5 Z+ 23:50 0:00 [aplay] <defunct>
chris 3486 0.2 0.0 0 0 pts/5 Z+ 23:51 0:00 [aplay] <defunct>
Added note: Using system to call the command doesn't leave a zombie process, but then it's not killable.
I have a question about UNIX job control in RHEL6
Basically, I am trying to implement passenger debug log rotation using logrotate. I am following the instructions here:
https://github.com/phusion/passenger/wiki/Phusion-Passenger-and-logrotation
I've got everything setup correctly (I think). My problem is this; when I spawn the background job using
nohup pipetool $HOME/passenger.log < $HOME/passenger.pipe &
And then log out and back in, when I inspect the process table (for example with 'ps aux') the process appears under the command name 'bash'. I have tried changing the first line of the script to "#!/usr/bin/ruby". Here is an example of this:
[root@server]# nohup pipetool /var/log/nginx/passenger-debug.log < /var/pipe/passenger.pipe &
[1] 63767
[root@server]# exit
exit
[me@server]$ sudo su
[sudo] password for me:
[root@server]# ps aux | grep 63767
root 63767 0.0 0.0 108144 2392 pts/0 S 15:26 0:00 bash
root 63887 0.0 0.0 103236 856 pts/0 S+ 15:26 0:00 grep 63767
[root@server]#
When this occurs, the line in the supplied logrotate file (killall -HUP pipetool) fails because 'pipetool' is not matched. Again, changing the first line to #!/usr/bin/ruby had no impact. So, my question is: is there a good way to have the actual command name appear in the process table, instead of just 'bash', when the process is spawned using job control? I am using bash as the shell when I invoke the pipetool. I appreciate you taking the time to help me.
This should work for you: edit pipetool to set the global variable $PROGRAM_NAME:
$PROGRAM_NAME = 'pipetool'
The script should then show up as pipetool in the process list.
I have a plain ruby application (it's not a Web app, so not using a pre-existing platform like rails, sinatra...) processing data continuously.
I plan to deploy it with Capistrano and simply start it with the ruby command. The problem is that I get data in batches, and it can take a few minutes to process them.
When I deploy a new version I would like a soft restart: the app is first notified of the new deploy, so it can finish the current batch and then say 'I'm ready for an update' (the deployment script will wait for that message).
Is there any Gem for that? Maybe Capistrano includes that option?
Allow the application to trap POSIX signals. Take a look at the Signal class.
If you send a kill <signal type> to the process, any registered signal handler will be invoked, regardless of what the process is currently doing. You can, for example, set some sort of flag that is checked at a sensible point in your logic (at the end of a run loop, say), terminating the process if that flag is set. There are many signals you can respond to, but SIGHUP or one of the SIGUSR signals probably makes sense for what you're doing. You can respond to whatever signal you like in whatever way you like, though it makes sense to leave the default behaviour in place for most of the commonly handled ones, such as SIGTERM (note that SIGKILL cannot be caught at all). For really complex stuff, you can even accept a coded series of signals to trigger particular events.
Signal.trap("HUP") do
  puts "Huh?"
end

loop do
  puts "Looping..."
  sleep 2
end
Output
[chris@chipbook:~%] ruby sig_demo.rb
Looping...
Looping...
Looping...
Looping...
Looping...
Huh?
Looping...
Looping...
Looping...
Huh?
Looping...
Looping...
Looping...
Because in another terminal window I'd done:
[chris@chipbook:/usr/local%] ps aux | grep ruby
chris 69487 0.0 0.0 2425480 188 s005 R+ 11:26pm 0:00.00 grep ruby
chris 69462 0.0 0.1 2440224 4060 s004 S+ 11:26pm 0:00.03 ruby sig_demo.rb
[chris@chipbook:/usr/local%] kill -HUP 69462
[chris@chipbook:/usr/local%] kill -HUP 69462