I'm writing a Ruby bootstrapping script for a school project, and part of the bootstrapping process is to start a couple of background processes (which are already written and work properly). What I'd like to do is something along the lines of:
`/path/to/daemon1 &`
`/path/to/daemon2 &`
`/path/to/daemon3 &`
However, that blocks on the first call to execute daemon1. I've seen references to a Process.spawn method, but that seems to be a 1.9+ feature, and I'm limited to Ruby 1.8.
I've also tried to execute these daemons from different threads, but I'd like my bootstrap script to be able to exit.
So how can I start these background processes so that my bootstrap script doesn't block and can exit (but still have the daemons running in the background)?
As long as you are working on a POSIX OS, you can use fork and exec.
fork = Create a subprocess
exec = Replace current process with another process
You then need to tell Ruby that your main process is not interested in the created subprocesses; you do this via Process.detach.
job1 = fork do
exec "/path/to/daemon01"
end
Process.detach(job1)
...
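For all three daemons from the question, the same pattern might look like this (a sketch; paths taken from the question):

%w[/path/to/daemon1 /path/to/daemon2 /path/to/daemon3].each do |path|
  pid = fork do
    exec path           # replace the child process with the daemon
  end
  Process.detach(pid)   # tell Ruby we won't wait on this child, so no zombie is left behind
end
# the bootstrap script can now exit; the daemons keep running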
A better way to pseudo-daemonize:
`((/path/to/daemon1 &)&)`
This will drop the process into its own shell.
The best way to actually daemonize:
`service daemon1 start`
and make sure the server/user has permission to start the actual daemon. Check out the 'daemonize' tool for Linux to set up your daemon.
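From the Ruby bootstrap script, either form can be invoked directly (a sketch; the subshell line and service names are taken from the answers above and assume init scripts exist for the daemons):

# Pseudo-daemonize via a nested subshell (returns immediately; output is not collected)
`((/path/to/daemon1 &)&)`

# Or go through the service manager, if init scripts are set up
%w[daemon1 daemon2 daemon3].each do |name|
  system("service", name, "start") or warn "failed to start #{name}"
end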
I have a couple of CLI-based scripts that run for some time.
I'd like another script to 'restart' those other scripts.
I've checked SO for answers, but the scenarios were not applicable enough to mine, as I'm trying to end Terminal processes using Terminal.
Process:
1. Two CLI-based scripts are running (node, python, etc.).
2. A 3rd script is run and decides whether or not to restart the other two.
3. It can't quit Terminal, but it has to end the current processes.
4. The 3rd script then runs an executable that restarts everything.
Currently none of the terminal windows are named, and from reading the other posts, I can see that it may be helpful to do so.
I can mostly set this up; I just could not find a command that would end all the other Terminal processes and close them.
There are a couple of ways to do this. The most common is to use a pidfile.
This file contains the process ID (pid) of the job you want to kill
later on. A simple way to create the pidfile is:
$ node server &
$ echo $! > /tmp/node.pidfile
`$!` contains the pid of the process that was most recently backgrounded.
Then later on, you kill it like so:
$ kill `cat /tmp/node.pidfile`
You would do similar for the python script.
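If you want the 3rd script itself in Ruby, a minimal sketch of the restart flow might look like this (the commands and pidfile paths are placeholders):

# For each job: stop the old process via its pidfile, relaunch it, record the new pid.
jobs = {
  "node server"      => "/tmp/node.pidfile",
  "python worker.py" => "/tmp/python.pidfile"
}

jobs.each do |cmd, pidfile|
  if File.exist?(pidfile)
    begin
      Process.kill("TERM", File.read(pidfile).to_i)  # end the old process
    rescue Errno::ESRCH
      # it already exited; nothing to kill
    end
  end
  pid = Process.spawn(cmd)        # start a fresh copy
  Process.detach(pid)             # don't keep the restarter waiting on it
  File.write(pidfile, pid.to_s)   # record the new pid for next time
end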
The other less robust way is to do a killall for each process and assume you are not running similar node or python jobs.
Refer to "What is a .pid file and what does it contain?" if you're not familiar with this.
The question headline is quite general, and so is my reply:
`killall bash`
or, generically:
`killall processName`
e.g. `killall chrome`
Based on an external Redis queue, I want a Sinatra application to run a script like this:
ruby fetch_vin.rb vin_number_123
This will fire up watir-webdriver and report to the queue appropriately. When the script is finished, everything but the Sinatra app should close.
It seems, however, that Thread, as well as exec and spawn, all block when run from inside Ruby.
How do I fire & forget?
You can use Process.spawn:
pid = Process.spawn("ruby fetch_vin.rb vin_number_123")
Process.detach(pid)
I think the bit you were missing was calling detach after the process was spawned. This detaches the child and lets both processes continue to run. It will work for any command, not just a Ruby script.
See Process Ruby Docs for more details.
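Inside Sinatra that might look like this (a sketch; the route and parameter names are made up):

require 'sinatra'

post '/fetch/:vin' do
  pid = Process.spawn("ruby", "fetch_vin.rb", params[:vin])
  Process.detach(pid)   # fire & forget: the child keeps running while Sinatra returns
  "started fetch_vin.rb with pid #{pid}"
end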
I am trying to daemonize a Ruby script, running on 2.1.1.
The daemon part of my code looks like this:
case ARGV[0]
when "-start"
  puts "TweetSearcher started."
  Process.daemon
when "-stop"
  Process.kill(9, Process.pid)
else
  puts "Lacks arguments. Use -start/-stop"
  abort
end
However, it looks like Process.kill(9, Process.pid) is not killing what I want it to. I want to kill a previous "ruby tweetsearcher.rb -start" that is already running in the background.
How do I proceed?
Typically, the PID is stored in a file that is read later to stop the process.
Calling Process.kill(9, Process.pid) kills the "stopper" process itself, rather than the one it's trying to stop.
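A minimal sketch of that pidfile approach, reusing the structure from the question (the pidfile path is an assumption, and TERM is used instead of 9 so the daemon gets a chance to clean up):

PIDFILE = "/tmp/tweetsearcher.pid"

case ARGV[0]
when "-start"
  puts "TweetSearcher started."
  Process.daemon                          # note: this also changes the working directory to /
  File.write(PIDFILE, Process.pid.to_s)   # record the daemon's own pid after daemonizing
when "-stop"
  pid = File.read(PIDFILE).to_i           # read the pid of the previously started daemon
  Process.kill("TERM", pid)               # signal that daemon, not the stopper itself
  File.delete(PIDFILE)
else
  puts "Lacks arguments. Use -start/-stop"
  abort
end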
Here's a guide to writing daemons in Ruby: http://codeincomplete.com/posts/2014/9/15/ruby_daemons/
As you can see, it's not a trivial process.
Here is another blog that suggests that you should not try to daemonize at all, but instead rely on a process monitoring system to take care of those concerns: https://www.mikeperham.com/2014/09/22/dont-daemonize-your-daemons/
I'm trying to write a Ruby script that:
1. Runs a command/script,
2. Stores the command's process pid in a file so I can check later if it's still running, and
3. Keeps the command running after the Ruby code exits.
I'm successful with steps 1 and 2, but it looks like the started script (i.e., the child process) terminates once the Ruby code is finished.
This is the latest version I could come up with (super simplified):
pid = fork do
exec "/my/fancy/daemon/style/script"
end
File.open('tmp/process.pid', 'w') { |file| file.write(pid.to_s) }
Can you please tell me what I am doing wrong? The ultimate goal is to keep the other script (i.e., the child process) running after the Ruby code exits.
You can "detach" your child process:
Process.detach(pid)
See Process.detach for more info.
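Putting that together with the code from the question (a sketch; paths unchanged):

pid = fork do
  exec "/my/fancy/daemon/style/script"
end
Process.detach(pid)   # tell Ruby not to wait on the child, so it isn't tied to this script
File.open('tmp/process.pid', 'w') { |file| file.write(pid.to_s) }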
If you're running your script from a shell and your script is the last interactive process, your virtual terminal may exit and cause your child process to receive a hangup as well. If you don't need to send output to the terminal, you can use Process.daemon before running exec.
See Process.daemon.
I have a master-workers architecture where the number of workers is growing on a weekly basis. I can no longer be expected to ssh or remote console into each machine to kill the worker, do a source control sync, and restart. I would like to be able to have the master place a message out on the network that tells each machine to sync and restart.
That's where I hit a roadblock. If I were using any sane platform, I could just do:
exec('ruby', __FILE__)
...and be done. However, I did the following test:
p Process.pid
sleep 1
exec('ruby', __FILE__)
...and on Windows, I get one Ruby instance for each call to exec. None of them die until I hit ^C in the window in question. On every platform I tried, it executes the new version of the file each time, which I have verified by making simple edits to the test script while the test marched along.
The reason I'm printing the pid is to double-check the behavior I'm seeing. On Windows, I am getting a different pid with each execution - which I would expect, considering that I am seeing a new process in the task manager for each run. The Mac is behaving correctly: the pid is the same for every system call, and I have verified with dtrace that each run is triggering a call to the execve syscall.
So, in short, is there a way to get a Windows Ruby script to restart its execution so that it will be running any code, including itself, that has changed during its execution? Please note that this is not a Rails application, though it does use ActiveRecord.
After trying a number of solutions (including the one submitted by Byron Whitlock, which ultimately put me on the path to a satisfactory end), I settled upon:
IO.popen("start cmd /C ruby.exe #{$0} #{ARGV.join(' ')}")
sleep 5
I found that if I didn't sleep at all after the popen and just exited, the spawn would frequently (>50% of the time) fail. This is obviously not cross-platform, so to get the same behavior on the Mac:
IO.popen("xterm -e \"ruby blah blah blah\"&")
The classic way to restart a program is to write another one that does it for you. So you spawn a process to run restart.exe <args>, then die or exit; restart.exe waits until the calling script is no longer running, then starts the script again.
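For example, a tiny "restarter" in Ruby might look like this (a sketch; argument handling is minimal, and the Process.kill(0, ...) liveness check may need adjusting on Windows):

# restart.rb <caller_pid> <script> [args...]
caller_pid = Integer(ARGV[0])
script     = ARGV[1]

loop do
  begin
    Process.kill(0, caller_pid)   # signal 0 only checks whether the process still exists
  rescue Errno::ESRCH
    break                         # the caller has exited; safe to restart it
  end
  sleep 0.5
end

exec("ruby", script, *ARGV[2..-1])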