I have a master-workers architecture where the number of workers is growing on a weekly basis. I can no longer be expected to ssh or remote console into each machine to kill the worker, do a source control sync, and restart. I would like to be able to have the master place a message out on the network that tells each machine to sync and restart.
That's where I hit a roadblock. If I were using any sane platform, I could just do:
exec('ruby', __FILE__)
...and be done. However, I did the following test:
p Process.pid
sleep 1
exec('ruby', __FILE__)
...and on Windows, I get one ruby instance for each call to exec. None of them die until I hit ^C on the window in question. On every platform I tried this on, it executes the new version of the file each time, which I verified by making simple edits to the test script while the test marched along.
The reason I'm printing the pid is to double-check the behavior I'm seeing. On Windows, I am getting a different pid with each execution - which I would expect, considering that I am seeing a new process in the task manager for each run. The Mac is behaving correctly: the pid is the same across every exec, and I have verified with dtrace that each run triggers a call to the execve syscall.
So, in short, is there a way to get a Windows Ruby script to restart its execution so it will be running any code - including itself - that has changed during its execution? Please note that this is not a Rails application, though it does use ActiveRecord.
After trying a number of solutions (including the one submitted by Byron Whitlock, which ultimately put me onto the path to a satisfactory end) I settled upon:
IO.popen("start cmd /C ruby.exe #{$0} #{ARGV.join(' ')}")
sleep 5
I found that if I didn't sleep at all after the popen and just exited, the spawn would frequently (>50% of the time) fail. This is obviously not cross-platform, so in order to get the same behavior on the Mac:
IO.popen("xterm -e \"ruby blah blah blah\"&")
The classic way to restart a program is to write another one that does it for you. So you spawn a process running restart.exe <args>, then die or exit; restart.exe waits until the calling script is no longer running, then starts the script again.
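A minimal Ruby sketch of such a helper, purely for illustration (the polling interval and argument handling are assumptions, and on Windows the behavior of Process.kill(0, ...) can vary by Ruby build):

# restart.rb <pid> <command to relaunch...>
# Wait until the given process has exited, then run the command again.
pid = ARGV.shift.to_i
begin
  sleep 0.5 while Process.kill(0, pid)   # signal 0 only checks that the pid still exists
rescue Errno::ESRCH
  # the original process has exited
end
exec(*ARGV)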
Related
I have a couple of CLI-based scripts that run for some time.
I'd like another script to 'restart' those other scripts.
I've checked SO for answers, but the scenarios were not applicable enough to mine, as I'm trying to end Terminal processes using Terminal.
Process:
Two CLI-based scripts are running (node, python, etc).
A third script is run and decides whether or not to restart the other two.
It can't quit Terminal, but has to end the current processes.
The third script then runs an executable that restarts everything.
Currently none of the terminal windows are named, and from reading the other posts, I can see that it may be helpful to do so.
I can mostly set this up, I just could not find a command that would end all other terminal processes and close them.
There are a couple of ways to do this. The most common is having a pidfile.
This file contains the process ID (pid) of the job you want to kill
later on. A simple way to create the pidfile is:
$ node server &
$ echo $! > /tmp/node.pidfile
$! contains the pid of the process that was most recently backgrounded.
Then later on, you kill it like so:
$ kill `cat /tmp/node.pidfile`
You would do similar for the python script.
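For example, for the python script (worker.py here is just a placeholder name):
$ python worker.py &
$ echo $! > /tmp/python.pidfile
$ kill `cat /tmp/python.pidfile`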
The other less robust way is to do a killall for each process and assume you are not running similar node or python jobs.
Refer to
What is a .pid file and what does it contain? if you're not familiar with this.
The question headline is quite general, so my reply is too:
killall bash
or generically
killall processName
eg. killall chrome
I am running a script on Windows command line that takes multiple hours to finish executing. During this time, I am required to keep my computer open or the script stops. I was wondering if there are any tools that I can use which would keep the script running even if I put my computer to sleep (or shut the computer down). Thanks!
If the computer is put to sleep or shut down, programs cannot run on it, by definition of those states. Possible workarounds include:
Running the script on a permanently running remote machine (i.e. a server)
Preventing the computer from going to sleep (for example via power settings, as sketched below)
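One way to do the latter on Windows (this may require an administrator prompt; the value 0 means "never") is to turn off the AC sleep timeout before running the script:
powercfg /change standby-timeout-ac 0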
Hopefully this should be an easy question to answer. I am attempting to have mumble-ruby run automatically. I have everything up and running, except that after running this simple script it starts and then ends. In short:
Running this from terminal I get "Press enter to terminate script" and it works.
Running this via a cronjob runs the script but ends it and runs cli.disconnect (I assume).
I want the below script to run automatically via a cronjob at a specified time and not end until the server shuts down.
#!/usr/bin/env ruby
require 'mumble-ruby'
cli = Mumble::Client.new('IP Address', Port, 'MusicBot', 'Password')
cli.connect
sleep(1)
cli.join_channel(5)
stream = cli.stream_raw_audio('/tmp/mumble.fifo')
stream.volume = 2.7
print 'Press enter to terminate script';
gets
cli.disconnect
Assuming you are on a Unix/Linux system, you can run it in a screen session. (This is a Unix command, not a scripting function.)
If you don't know what screen is, it's basically a "detachable" terminal session. You can open a screen session, run this script, and then detach from that screen session. That detached session will stay alive even after you log off, leaving your script running. (You can re-attach to that screen session later if you want to shut it down manually.)
screen is pretty neat, and every developer on Unix/Linux should be aware of it.
How to do this without reading any docs:
open a terminal session on the server that will run the script
run screen - you will now be in a new shell prompt in a new screen session
run your script
type ctrl-a and then d (the "d" is pressed on its own, without ctrl, and stands for "detach") to detach from the screen (but still leave it running)
Now you're back in your first shell. Your script is still alive in your screen session. You can disconnect and the screen session will keep on trucking.
Do you want to get back into that screen and shut the app down manually? Easy! Run screen -r (for "reattach"). To kill the screen session, just reattach and exit the shell.
You can have multiple screen sessions running concurrently, too. (If there is more than one screen running, you'll need to provide an argument to screen -r.)
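For example (the session id here is illustrative; use whatever screen -ls reports on your machine):
$ screen -ls        # list running sessions
$ screen -r 12345   # reattach to that particular session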
Check out some screen docs!
Here's a screen howto. Search "gnu screen howto" for many more.
Lots of ways to skin this cat... :)
My thought was to take your script (call it foo) and remove the last 3 lines. In your /etc/rc.d/rc.local file (NOTE: this applies to Ubuntu and Fedora, not sure what you're running - but it has something similar) you'd add nohup /path_to_foo/foo > /dev/null 2>&1 & to the end of the file so that it runs in the background. You can also run that command right at a terminal if you just want to run it and have it running. You have to make sure that foo is made executable with chmod +x /path_to_foo/foo.
Use an infinite loop. Try:
loop do
  sleep(3600)
end
You can use exit to terminate when you need to. This wakes up only once an hour, so it doesn't eat up processing time. An infinite loop before your disconnect method will prevent it from being called until the server shuts down.
I have a Python application which needs to execute a proprietary application (which crashes from time to time) about 20,000 times a day.
The problem is that when the application crashes, Windows automatically triggers WerFault, which keeps the program hanging, so Python's subprocess.call() will wait forever for user input (the application has to run on weekends, on holidays, 24/7... so this is not acceptable).
I thought about using sleep; poll; kill; terminate, but that would mean losing the ability to use communicate(). The application can run from a few milliseconds to 2 hours, so setting a fixed timeout would be ineffective.
I also tried turning on automatic debugging (using a script which would take a crash dump of the application and terminate it), but somehow this howto doesn't work on my server (WerFault still appears and waits for user input).
Several other tutorials like this one didn't have any effect either.
Question:
Is there a way to prevent WerFault from displaying (and waiting for user input)? This is more of a system question than a programming question.
Alternative question: is there a graceful way in Python to detect an application crash (i.e. whether WerFault was displayed)?
Simple (and ugly) answer: monitor for WerFault.exe instances from time to time, specifically the one associated with the PID of the offending application, and kill it. Dealing with WerFault.exe is complicated, but you don't want to disable it -- see the Windows Error Reporting service.
Get a list of processes by name that match WerFault.exe. I use the psutil package. Be careful with psutil because process information is cached; use psutil.get_pid_list() to get a fresh list of PIDs.
Decode its command line using argparse. This might be overkill, but it leverages existing Python libraries.
Identify the process that is holding your application according to its PID.
This is a simple implementation.
# Assumes module-level imports of argparse, psutil and subprocess; written
# against the old psutil (< 2.0) attribute-style API and Python 2 print.
def kill_proc_kidnapper(self, child_pid, kidnapper_name='WerFault.exe'):
    """
    Look among all instances of the 'WerFault.exe' process for the specific one
    that took control of another faulting process.

    When 'WerFault.exe' is launched, the PID of the faulting process is passed
    with the -p argument:

        'C:\\Windows\\SysWOW64\\WerFault.exe -u -p 5012 -s 68'
                                |                  |
                                +-> kidnapper      +-> child_pid

    The function uses `argparse` to properly decode the process command line and
    get that PID. If it matches `child_pid` then we have found the correct parent
    process and can kill it.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument('-u', action='store_false', help='User name')
    parser.add_argument('-p', type=int, help='Process ID')
    parser.add_argument('-s', help='??')

    kidnapper_p = None
    child_p = None

    for pid in psutil.get_pid_list():
        try:
            proc = psutil.Process(pid)
        except psutil.NoSuchProcess:
            continue  # the process disappeared while we were iterating
        if kidnapper_name in proc.name:
            args, unknown_args = parser.parse_known_args(proc.cmdline)
            print proc.name, proc.cmdline
            if args.p == child_pid:
                # We found the kidnapper, aim.
                print 'kidnapper found: {0}'.format(proc.pid)
                kidnapper_p = proc

    if psutil.pid_exists(child_pid):
        child_p = psutil.Process(child_pid)

    if kidnapper_p and child_p:
        print 'Killing "{0}" ({1}) that kidnapped "{2}" ({3})'.format(
            kidnapper_p.name, kidnapper_p.pid, child_p.name, child_p.pid)
        self.taskkill(kidnapper_p.pid)
        return 1
    else:
        if not kidnapper_p:
            print 'Kidnapper process "{0}" not found'.format(kidnapper_name)
        if not child_p:
            print 'Child process "({0})" not found'.format(child_pid)
        return 0
Now, the taskkill function invokes the taskkill command with the correct PID.
def taskkill(self, pid):
    """
    Kill the task and the entire process tree for this process.
    """
    print('Task kill for PID {0}'.format(pid))
    cmd = 'taskkill /f /t /pid {0}'.format(pid)
    subprocess.call(cmd.split())
I see no reason why your program needs to crash; find the offending piece of code and put it into a try statement.
http://docs.python.org/3.2/tutorial/errors.html#handling-exceptions
I'm writing a ruby bootstrapping script for a school project, and part of this bootstrapping process is to start a couple of background processes (which are written and function properly). What I'd like to do is something along the lines of:
`/path/to/daemon1 &`
`/path/to/daemon2 &`
`/path/to/daemon3 &`
However, that blocks on the first call to execute daemon1. I've seen references to a Process.spawn method, but that seems to be a 1.9+ feature, and I'm limited to Ruby 1.8.
I've also tried to execute these daemons from different threads, but I'd like my bootstrap script to be able to exit.
So how can I start these background processes so that my bootstrap script doesn't block and can exit (but still have the daemons running in the background)?
As long as you are working on a POSIX OS you can use fork and exec.
fork = Create a subprocess
exec = Replace current process with another process
You then need to indicate that your main process is not interested in the created subprocesses, via Process.detach.
job1 = fork do
exec "/path/to/daemon01"
end
Process.detach(job1)
...
A better way to pseudo-daemonize:
`((/path/to/daemon1 &)&)`
This will drop the process into its own shell.
The best way to actually daemonize:
`service daemon1 start`
and make sure the server/user has permission to start the actual daemon. Check out the 'daemonize' tool for Linux to set up your daemon.