bash wait for all processes to finish (doesn't work)

I have a directory with several sub-directories with names
1
2
3
4
backup_1
backup_2
I wrote a parallelized bash script to process files in these folders; a minimal working example is as follows:
#!/bin/bash
P=`pwd`
task(){
    dirname=$(basename $dir)
    echo $dirname running >> output.out
    if [[ $dirname != "backup"* ]]; then
        sed -i "s/$dirname running/$dirname is good/" $P/output.out
    else
        sed -i "s/$dirname running/$dirname ignored/" $P/output.out
    fi
}
for dir in */; do
    ((i=i%8)); ((i++==0)) && wait
    task "$dir" &
done
wait
echo all done
The "wait" at the end of the script is supposed to wait for all processes to finish before proceeding to echo "all done". The output.out file, after all processes are finished should show
1 is good
2 is good
3 is good
4 is good
backup_1 ignored
backup_2 ignored
I am able to get this output if I set the script to run in serial with ((i=i%1)); ((i++==0)) && wait. However, if I run it in parallel with ((i=i%2)); ((i++==0)) && wait, I get something like
2 is good
1 running
3 running
4 is good
backup_1 running
backup_2 ignored
Can anyone tell me why wait is not working in this case?
I also know that GNU parallel can do the same thing in parallelizing tasks. However, I don't know how to tell parallel to run this task on all sub-directories in the parent directory. It'll be great if someone can provide a sample script that I can follow.
Many thanks
Jacek

A literal port to GNU Parallel looks like this:
task(){
    dir="$1"
    P=`pwd`
    dirname=$(basename $dir)
    echo $dirname running >> output.out
    if [[ $dirname != "backup"* ]]; then
        sed -i "s/$dirname running/$dirname is good/" $P/output.out
    else
        sed -i "s/$dirname running/$dirname ignored/" $P/output.out
    fi
}
export -f task
parallel -j8 task ::: */
echo all done
As others point out you have race conditions when you run sed on the same file in parallel.
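This is also why wait looks broken in the question: wait does return only after all tasks finish, but sed -i rewrites output.out by writing a new file and renaming it over the original, so a concurrent echo ... >> output.out (or a second sed) can hit the copy that is about to be replaced, and its update is lost, leaving stale "running" lines behind. If you want to keep a single shared file with plain bash, one option (a sketch of my own, not part of the original answer; flock(1) ships with util-linux) is to serialize every read-modify-write:
#!/bin/bash
P=$(pwd)
task(){
    dirname=$(basename "$1")
    (
        # take an exclusive lock on fd 9 so only one task at a time
        # touches output.out; the lock is released when the subshell exits
        flock -x 9
        echo "$dirname running" >> "$P/output.out"
        if [[ $dirname != backup* ]]; then
            sed -i "s/$dirname running/$dirname is good/" "$P/output.out"
        else
            sed -i "s/$dirname running/$dirname ignored/" "$P/output.out"
        fi
    ) 9>"$P/output.lock"
}
for dir in */; do
    task "$dir" &
done
wait
echo all done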
To avoid race conditions you could do:
task(){
    dir="$1"
    P=`pwd`
    dirname=$(basename $dir)
    echo $dirname running
    if [[ $dirname != "backup"* ]]; then
        echo "$dirname is good" >&2
    else
        echo "$dirname ignored" >&2
    fi
}
export -f task
parallel -j8 task ::: */ >running.out 2>done.out
echo all done
You will end up with two files running.out and done.out.
If you really just want to ignore the dirs called backup*:
task(){
    dir="$1"
    P=`pwd`
    dirname=$(basename $dir)
    echo $dirname running
    echo "$dirname is good" >&2
}
export -f task
parallel -j8 task '{=/backup/ and skip()=}' ::: */ >running.out 2>done.out
echo all done
Consider spending 20 minutes reading chapters 1+2 of https://doi.org/10.5281/zenodo.1146014. Your command line will love you for it.

Related

Run a shell script with While condition in an infinite loop based on conditions

I need to create a shell script that places some indicator/flag files in a directory, say /dir1/dir2/flag_file_directory, based on the request flags received from a shell script in /dir1/dir2/req_flag_file_directory and the source files present in /dir1/dir2/source_file_directory. For this I need to run the script in an infinite while loop, as I do not know when the source files will be made available.
My implementation plan is roughly this: JOB1, scheduled to run at some time in the morning, will first place (touch) a request flag (e.g. touch /dir1/dir2/req_flag_file_directory/req1.req) saying that this job is running. The script should then look for source files matching the pattern file_pattern_YYYYMMDD.CSV (the file patterns are different for different jobs) in the source file directory and, if they are present, count them. If the count of the files is correct, it should first delete the request flag for this job and then touch an indicator/flag file in /dir1/dir2/flag_file_directory. This indicator/flag file will then be used as an indicator that the source files are all present and the job can continue to load these files into our system.
I will have all the details related to the jobs and their flag files in a file whose structure is shown below. Based on the request flag, the script should know what other criteria to look for before placing the indicator file:
request_flags|source_name|job_name|file_pattern|file_count|indicator_flag_file
req1.req|Sourcename1|jobname1|file_pattern_1|3|ind1.ind
req2.req|Sourcename2|jobname2|file_pattern_2|6|ind2.ind
req3.req|Sourcename3|jobname3|file_pattern_3|1|ind3.ind
req<n>.req|Sourcename<n>|jobname<n>|file_pattern_<n>|2|ind<n>.ind
Please let me know how this can be achieved, and also if you have other suggestions or solutions.
Rather than have the service daemon script poll in an infinite loop (i.e. wake up periodically to check whether it needs to do work), you could use file locking and a named pipe to create an event queue.
Outline of the service daemon, daemon.sh. This script loops forever, blocking on the read line until a message arrives over the named pipe (i.e., until some other process writes to $RequestPipe).
#!/bin/bash
# daemon.sh
LockDir="/dir1/dir2/req_flag_file_directory"
LockFile="${LockDir}/.MultipleWriterLock"
RequestPipe="${LockDir}/.RequestQueue"
while true ; do
    if read line < "$RequestPipe" ; then
        # ... commands to be executed after message received ...
        echo "$line" # for example
    fi
done
An outline of requestor.sh, the script that wakes up the service daemon when everything is ready. This script does all the preparation necessary, e.g. creating files in req_flag_file_directory and source_file_directory, then wakes the service daemon script by writing to the named pipe. It could even send a message that contains more information for the service daemon, say "Job 1 ready".
#!/bin/bash
# requestor.sh
LockDir="/dir1/dir2/req_flag_file_directory"
LockFile="${LockDir}/.MultipleWriterLock"
RequestPipe="${LockDir}/.RequestQueue"
# ... create all the necessary files ...
(
    flock --exclusive 200
    # Unblock the service daemon/listener by sending a line of text.
    echo Wake up sleepyhead. > "$RequestPipe"
) 200>"$LockFile" # subshell exit releases lock automatically
daemon.sh fleshed out with some error handling:
#!/bin/bash
# daemon.sh
LockDir="/dir1/dir2/req_flag_file_directory"
LockFile="${LockDir}/.MultipleWriterLock"
RequestPipe="${LockDir}/.RequestQueue"
SharedGroup=$(echo need to put a group here 1>&2; exit 1)
#
if [[ ! -w "$RequestPipe" ]] ; then
    # Handle 1st time. Or fix a problem.
    mkfifo --mode=775 "$RequestPipe"
    chgrp "$SharedGroup" "$RequestPipe"
    if [[ ! -w "$RequestPipe" ]] ; then
        echo "ERROR: request queue, can't write to $RequestPipe" 1>&2
        exit 1
    fi
fi
while true ; do
    if read line < "$RequestPipe" ; then
        # ... commands to be executed after message received ...
        echo "$line" # for example
    fi
done
requestor.sh fleshed out with some error handling:
#!/bin/bash
# requestor.sh
LockDir="/dir1/dir2/req_flag_file_directory"
LockFile="${LockDir}/.MultipleWriterLock"
RequestPipe="${LockDir}/.RequestQueue"
SharedGroup=$(echo need to put a group here 1>&2; exit 1)
# ... create all the necessary files ...
#
if [[ ! -w "$LockFile" ]] ; then
    # Handle 1st time. Or fix a problem.
    touch "$LockFile"
    chgrp "$SharedGroup" "$LockFile"
    chmod 775 "$LockFile"
    if [[ ! -w "$LockFile" ]] ; then
        echo "ERROR: write lock, can't write to $LockFile" 1>&2
        exit 1
    fi
fi
if [[ ! -w "$RequestPipe" ]] ; then
    # Handle 1st time. Or fix a problem.
    mkfifo --mode=775 "$RequestPipe"
    chgrp "$SharedGroup" "$RequestPipe"
    if [[ ! -w "$RequestPipe" ]] ; then
        echo "ERROR: request queue, can't write to $RequestPipe" 1>&2
        exit 1
    fi
fi
(
    flock --exclusive 200 || {
        echo "ERROR: write lock, $LockFile flock failed." 1>&2
        exit 1
    }
    # Unblock the service daemon/listener by sending a line of text.
    echo Wake up sleepyhead. > "$RequestPipe"
) 200> "$LockFile" # subshell exit releases lock automatically
Still having some doubts about the contents of the requests file, but I think I've come up with a rather simple solution:
#!/bin/bash
DETAILS_FILE="details.txt"
DETAILS_LINES=$((`wc -l $DETAILS_FILE|awk '{print $1}'`-1)) # to remove banner line (first line)
DETAILS=`tail -$DETAILS_LINES $DETAILS_FILE|tr '\n\r' ' '`
PIDS=()
IFS=' '
waitall () { # PIDS...
    ## Wait for children to exit and indicate whether all exited with 0 status.
    local errors=0
    while :; do
        debug "Processes remaining: $*"
        for pid in "$@"; do
            echo "PID: $pid"
            shift
            if kill -0 "$pid" 2>/dev/null; then
                debug "$pid is still alive."
                set -- "$@" "$pid"
            elif wait "$pid"; then
                debug "$pid exited with zero exit status."
            else
                debug "$pid exited with non-zero exit status."
                ((++errors))
            fi
        done
        (("$#" > 0)) || break
        # TODO: how to interrupt this sleep when a child terminates?
        sleep ${WAITALL_DELAY:-1}
    done
    ((errors == 0))
}
debug () { echo "DEBUG: $*" >&2; }
# function to check for # of sourcefiles matching pattern in dir
# params: req3.req Sourcename3 jobname3 file_pattern_3 1 ind3.ind
check () {
    NOFILES=`find $2 -type f | egrep -c $4`
    if [ $NOFILES -eq "$5" ];then
        echo "Touching file $6. done."
        touch $6
    else
        echo "$NOFILES matching $4 pattern. exiting"
    fi
}
echo "parsing $DETAILS_FILE file..."
read -a lines <<< "$DETAILS"
for line in "${lines[@]}"
do
    IFS='|'
    read -a ARRAY <<< "$line"
    echo "Line processed. Dispatching job ${ARRAY[2]}..."
    check "${ARRAY[@]}" &
    IFS=' '
    PIDS+=("$!")
    #echo $PIDS
done
waitall "${PIDS[@]}"
wait
Although not exactly an infinite loop, this script is intended to be run from crontab.
First it reads the details.txt file, as per your example.
After parsing all details, this script dispatches the check function, whose sole purpose is to count the number of files matching the file_pattern of each source_name folder; if the number of files equals file_count, it touches the indicator_flag_file.
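If it helps, a crontab entry to run it every five minutes could look like this (the script path here is hypothetical; adjust the schedule to taste):
*/5 * * * * /path/to/dispatch_checks.sh >> /var/log/dispatch_checks.log 2>&1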
Hope that helps!

Unix shell scripting, need to adjust my script for performance?

I have a script below that does a few things...
#!/bin/bash
# Script to sync dr-xxxx
# 1. Check for locks and die if exists
# 2. CPIO directories found in cpio.cfg
# 3. RSYNC to remote server
# 4. TRAP and remove lock so we can run again
if ! mkdir /tmp/drsync.lock; then
    printf "Failed to acquire lock.\n" >&2
    exit 1
fi
trap 'rm -rf /tmp/drsync.lock' EXIT # remove the lockdir on exit
# Config specific to CPIO
BASE=/home/mirxx
DUMP_DIR=/usrx/drsync
CPIO_CFG="$BASE/cpio.cfg"
while LINE=: read -r f1 f2
do
    echo "Working with $f1"
    cd $f1
    find . -print | cpio -o | gzip > $DUMP_DIR/$f2.cpio.gz
    echo "Done for $f1"
done <"$CPIO_CFG"
RSYNC=/usr/bin/rsync # use latest version
RSYNC_BW="4500" # 4.5MB/sec
DR_PATH=/usrx/drsync
DR_USER=root
DR_HOST=dr-xxxx
I=0
MAX_RESTARTS=5 # max rsync retries before quitting
LAST_EXIT_CODE=1
while [ $I -le $MAX_RESTARTS ]
do
    I=$(( $I + 1 ))
    echo $I. start of rsync
    $RSYNC \
        --partial \
        --progress \
        --bwlimit=$RSYNC_BW \
        -avh $DUMP_DIR/*gz \
        $DR_USER@$DR_HOST:$DR_PATH
    LAST_EXIT_CODE=$?
    if [ $LAST_EXIT_CODE -eq 0 ]; then
        break
    fi
done
# check if successful
if [ $LAST_EXIT_CODE -ne 0 ]; then
    echo rsync failed for $I times. giving up.
else
    echo rsync successful after $I times.
fi
What I would like to change above is this line...
find . -print | cpio -o | gzip > $DUMP_DIR/$f2.cpio.gz
I am looking to change the above line so that it starts a parallel process for every entry in CPIO_CFG that gets fed in. I believe I have to use & at the end? Should I implement any safety precautions?
Is it also possible to modify the above command to include an exclude list that I can pass in via $f3 in the cpio.cfg file?
For the code below...
while [ $I -le $MAX_RESTARTS ]
do
    I=$(( $I + 1 ))
    echo $I. start of rsync
    $RSYNC --partial --progress --bwlimit=$RSYNC_BW -avh $DUMP_DIR/*gz $DR_USER@$DR_HOST:$DR_PATH
    LAST_EXIT_CODE=$?
    if [ $LAST_EXIT_CODE -eq 0 ]; then
        break
    fi
done
The same thing here: is it possible to run multiple rsync processes, one for each .gz file found in $DUMP_DIR/*.gz?
I think the above would greatly increase the speed of my script; the box is fairly beefy (AIX 7.1, 48 cores and 192GB RAM).
Thank you for your help.
The original code is a traditional batch queue. Let's add a bit of lean thinking...
The actual workflow is the transformation and transfer of a set of directories in compressed cpio format. Assuming that there is no dependency between the directories/archives, we should be able to create a single action for creating the archive and the transfer.
It helps if we break up the script into functions, which should make our intentions more visible.
First, create a function transfer_archive() with archive_name and an optional number_of_attempts as arguments. This contains your second while loop, but replaces $DUMP_DIR/*gz with $archive_name. Details will be left as an exercise.
function transfer_archive {
    typeset archive_name=${1:?"pathname to archive expected"}
    typeset number_of_attempts=${2:-1}
    (
        n=0
        while
            ((n++))
            ((n<=number_of_attempts))
        do
            ${RSYNC:?} \
                --partial \
                --progress \
                --bwlimit=${RSYNC_BW:?} \
                -avh ${archive_name:?} ${DR_USER:?}@${DR_HOST:?}:${DR_PATH:?} && exit 0
        done
        exit 1
    )
}
Inside the function we use a subshell, ( ... ) with two exit statements.
The function will return the exit value of the subshell, either true (rsync succeeded), or false (too many attempts).
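For example, a hypothetical call (assuming RSYNC, RSYNC_BW and the DR_* globals from the question are set; the archive name is made up):
transfer_archive "$DUMP_DIR/home_mirxx.cpio.gz" 5 || echo "transfer failed after 5 attempts" 1>&2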
We then combine that with archive creation:
function create_and_transfer_archive {
    (
        # only cd in a subshell - no confusion upstairs
        cd ${DUMP_DIR:?Missing global setting} || exit
        dir=${1:?directory}
        archive=${2:?archive}
        # cd, find and cpio must be in the same subshell together
        (cd ${dir:?} && find . -print | cpio -o ) |
            gzip > ${archive:?}.cpio.gz || return # bail out
        transfer_archive ${archive:?}.cpio.gz
    )
}
Finally, your main loop will process all directories in parallel:
while LINE=: read -r dir archive_base
do
    (
        create_and_transfer_archive $dir ${archive_base:?} &&
            echo $dir Done || echo $dir failed
    ) &
done <"$CPIO_CFG" | cat
Instead of the pipe into cat, you could just add wait at the end of the script, but the pipe has the nice effect of capturing all output from the background processes.
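You can see the barrier effect in a small sketch: cat only reaches end-of-file when every process holding the write end of the pipe has closed it, backgrounded subshells included, so the final echo runs last.
for i in 1 2 3; do
    ( sleep "$i"; echo "job $i finished" ) &
done | cat
echo all done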
Now, I've glossed over one important aspect, and that is the number of jobs you can run in
parallel. This will scale reasonably well, but it would be better to actually maintain a
job queue. Above a certain number, adding more jobs will start to slow things down, and
at that point you will have to add a job counter and a job limit. Once the job limit is
reached, stop starting more create_and_transfer_archive jobs, until processes have completed.
How to keep track of those jobs is a separate question.
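For completeness, here is a minimal sketch of such a limit (my addition, assuming bash; bash 4.3+ also has wait -n, which makes this cleaner): hold off dispatching new jobs while the count of running ones is at the cap.
MAX_JOBS=8    # assumed cap; tune for your box
while LINE=: read -r dir archive_base
do
    # throttle: wait for a free slot before dispatching the next job
    while (( $(jobs -pr | wc -l) >= MAX_JOBS )); do
        sleep 1
    done
    (
        create_and_transfer_archive $dir ${archive_base:?} &&
            echo $dir Done || echo $dir failed
    ) &
done <"$CPIO_CFG" | cat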

Shell script to poll a directory and stop upon an event

I need a shell script to:
1/ keep polling a directory "receive_dir" irrespective of whether it has files in it or not.
2/ move the files over to another directory "send_dir".
3/ stop polling only when a file "stopfile" is moved into "receive_dir". Thanks!!
My script:
until [ $i = stopfile ]
do
    for i in `ls receive_dir`; do
        time=$(date +%m-%d-%Y-%H:%M:%S)
        echo $time
        mv receive_dir/$i send_dir/;
    done
done
This fails on empty directories. Also, is there a better way to do it?
If you are running on Linux, you might wish to consider inotifywait
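Since no example is given above, here is a minimal sketch (assuming the inotify-tools package is installed; the read blocks until a file actually shows up, so there is no polling interval at all):
#!/bin/bash
mkdir -p send_dir
inotifywait -m -e create -e moved_to --format '%f' receive_dir |
while read -r f; do
    mv "receive_dir/$f" send_dir/
    [[ $f == stopfile ]] && break   # inotifywait dies on SIGPIPE once we stop reading
done
If inotify is not available, a plain polling function like the one below works anywhere: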
$ declare -f tillStopfile
tillStopfile ()
{
    cd receive_dir
    [[ -d ../send_dir ]] || mkdir ../send_dir
    while true; do
        date +%m-%d-%Y-%H:%M:%S
        for f in *
        do
            mv "$f" ../send_dir
            [[ $f == "stopfile" ]] && break 2
        done
        sleep 3
    done
}
$
Improvements:
- while true ... break: easier to control this loop
- cd receive_dir: why not run in the "receive_dir"
- factored date out of the inner loop (unless you need to see each time-stamp?)
- added the suggested "sleep"; pick a suitable interval
Run:
$ tillStopfile 2>/dev/null # suppresses mv error messages (e.g. on an empty directory)

OS X Bash, 'watch' command

I'm looking for the best way to duplicate the Linux 'watch' command on Mac OS X. I'd like to run a command every few seconds to pattern match on the contents of an output file using 'tail' and 'sed'.
What's my best option on a Mac, and can it be done without downloading software?
With Homebrew installed:
brew install watch
You can emulate the basic functionality with the shell loop:
while :; do clear; your_command; sleep 2; done
That will loop forever, clear the screen, run your command, and wait two seconds - the basic watch your_command implementation.
You can take this a step further and create a watch.sh script that can accept your_command and sleep_duration as parameters:
#!/bin/bash
# usage: watch.sh <your_command> <sleep_duration>
while :;
do
    clear
    date
    $1
    sleep $2
done
Use MacPorts:
$ sudo port install watch
The shell loops above will do the trick, and you could even convert them to an alias (you may need to wrap in a function to handle parameters):
alias myWatch='_() { while :; do clear; $2; sleep $1; done }; _'
Examples:
myWatch 1 ls ## Self-explanatory
myWatch 5 "ls -lF $HOME" ## Every 5 seconds, list out home directory; double-quotes around command to keep its arguments together
Alternatively, Homebrew can install watch from http://procps.sourceforge.net/:
brew install watch
It may be that "watch" is not what you want. You probably want to ask for help in solving your problem, not in implementing your solution! :)
If your real goal is to trigger actions based on what's seen from the tail command, then you can do that as part of the tail itself. Instead of running "periodically", which is what watch does, you can run your code on demand.
#!/bin/sh
tail -F /var/log/somelogfile | while read line; do
    if echo "$line" | grep -q '[Ss]ome.regex'; then
        : # do your stuff here
    fi
done
Note that tail -F will continue to follow a log file even if it gets rotated by newsyslog or logrotate. You want to use this instead of the lower-case tail -f. Check man tail for details.
That said, if you really do want to run a command periodically, the other answers provided can be turned into a short shell script:
#!/bin/sh
if [ -z "$2" ]; then
    echo "Usage: $0 SECONDS COMMAND" >&2
    exit 1
fi
SECONDS=$1
shift 1
while sleep $SECONDS; do
    clear
    $*
done
I am going with the answer from here:
bash -c 'while [ 0 ]; do <your command>; sleep 5; done'
But you're really better off installing watch as this isn't very clean...
If watch doesn't want to install via
brew install watch
there is another similar version that installed and worked perfectly for me:
brew install visionmedia-watch
https://github.com/tj/watch
Or, in your ~/.bashrc file:
function watch {
    while :; do clear; date; echo; "$@"; sleep 2; done
}
To prevent flickering when your main command takes perceivable time to complete, you can capture the output and only clear screen when it's done.
function watch { while :; do a=$("$@"); clear; printf '%s\n\n%s\n' "$(date)" "$a"; sleep 1; done; }
Then use it by:
watch istats
Try this:
#!/bin/bash
# usage: watch [-n integer] COMMAND
case $# in
    0)
        echo "Usage $0 [-n int] COMMAND"
        exit 1
        ;;
    *)
        sleep=2
        ;;
esac
if [ "$1" == "-n" ]; then
    sleep=$2
    shift; shift
fi
while :;
do
    clear
    echo "$(date) every ${sleep}s $@"; echo
    "$@"
    sleep $sleep
done
Here's a slightly changed version of this answer that:
- checks for valid args
- shows a date and duration title at the top
- moves the "duration" argument to be the 1st argument, so complex commands can be easily passed as the remaining arguments.
To use it:
- save this to ~/bin/watch
- execute chmod 700 ~/bin/watch in a terminal to make it executable.
- try it by running watch 1 echo "hi there"
~/bin/watch
#!/bin/bash
function show_help()
{
    echo ""
    echo "usage: watch [sleep duration in seconds] [command]"
    echo ""
    echo "e.g. To cat a file every second, run the following"
    echo ""
    echo "    watch 1 cat /tmp/it.txt"
    exit;
}
function show_help_if_required()
{
    if [ "$1" == "help" ]
    then
        show_help
    fi
    if [ -z "$1" ]
    then
        show_help
    fi
}
function require_numeric_value()
{
    REG_EX='^[0-9]+$'
    if ! [[ $1 =~ $REG_EX ]] ; then
        show_help
    fi
}
show_help_if_required $1
require_numeric_value $1
DURATION=$1
shift
while :; do
    clear
    echo "Updating every $DURATION seconds. Last updated $(date)"
    bash -c "$*"
    sleep $DURATION
done
Use the Nix package manager!
Install Nix, then run nix-env -iA nixpkgs.watch; watch should be available for use after completing the install instructions (including sourcing . "$HOME/.nix-profile/etc/profile.d/nix.sh" in your shell).
The watch command that's available on Linux does not exist on macOS. If you don't want to use brew you can add this bash function to your shell profile.
# execute commands at a specified interval of seconds
function watch.command {
    # USAGE: watch.command [seconds] [commands...]
    # EXAMPLE: watch.command 5 date
    # EXAMPLE: watch.command 5 date echo 'ls -l' echo 'ps | grep "kubectl\\\|node\\\|npm\\\|puma"'
    # EXAMPLE: watch.command 5 'date; echo; ls -l; echo; ps | grep "kubectl\\\|node\\\|npm\\\|puma"' echo date 'echo; ls -1'
    local cmds=()
    for arg in "${@:2}"; do
        # split each argument on ";" into individual commands;
        # read from process substitution so cmds+= runs in this shell
        while read -r cmd; do
            cmds+=("$cmd")
        done < <(echo "$arg" | sed 's/; /;/g' | tr \; \\n)
    done
    while true; do
        clear
        for cmd in "${cmds[@]}"; do
            eval "$cmd"
        done
        sleep "$1"
    done
}
https://gist.github.com/Gerst20051/99c1cf570a2d0d59f09339a806732fd3

How to terminate script's process tree in Cygwin bash from bash script

I have a Cygwin bash script that I need to watch and terminate under certain conditions - specifically, after a certain file has been created. I'm having difficulty figuring out how exactly to terminate the script with the same level of completeness that Ctrl+C does, however.
Here's a simple script (called test1) that does little more than wait around to be terminated.
#!/bin/bash
test -f kill_me && rm kill_me
touch kill_me
tail -f kill_me
If this script is run in the foreground, Ctrl+C will terminate both the tail and the script itself. If the script is run in the background, a kill %1 (assuming it is job 1) will also terminate both tail and the script.
However, when I try to do the same thing from a script, I'm finding that only the bash process running the script is terminated, while tail hangs around disconnected from its parent. Here's one way I tried (test2):
#!/bin/bash
test -f kill_me && rm kill_me
(
    touch kill_me
    tail -f kill_me
) &
while true; do
    sleep 1
    test -f kill_me && {
        kill %1
        exit
    }
done
If this is run, the bash subshell running in the background is terminated OK, but tail still hangs around.
If I use an explicitly separate script, like this, it still doesn't work (test3):
#!/bin/bash
test -f kill_me && rm kill_me
# assuming test1 above is included in the same directory
./test1 &
while true; do
    sleep 1
    test -f kill_me && {
        kill %1
        exit
    }
done
tail is still hanging around after this script is run.
In my actual case, the process creating files is not particularly instrumentable, so I can't get it to terminate of its own accord; by finding out when it has created a particular file, however, I can at that point know that it's OK to terminate it. Unfortunately, I can't use a simple killall or equivalent, as there may be multiple instances running, and I only want to kill the specific instance.
/bin/kill (the program, not the bash builtin) interprets a negative PID as “kill the process group” which will get all the children too.
Changing
kill %1
to
/bin/kill -- -$$
works for me.
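Applied to test2 above, the whole script becomes (a sketch; note the script takes itself down along with tail, which is exactly what Ctrl+C does):
#!/bin/bash
test -f kill_me && rm kill_me
(
    touch kill_me
    tail -f kill_me
) &
while true; do
    sleep 1
    test -f kill_me && {
        # in a non-interactive shell, background jobs stay in the
        # script's own process group, so this reaches tail as well
        /bin/kill -- -$$
        exit
    }
done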
Adam's link put me in a direction that will solve the problem, albeit not without some minor caveats.
The script doesn't work unmodified under Cygwin, so I rewrote it, and with a couple more options. Here's my version:
#!/bin/bash
function usage
{
    echo "usage: $(basename $0) [-c] [-<sigspec>] <pid>..."
    echo "Recursively kill the process tree(s) rooted by <pid>."
    echo "Options:"
    echo "  -c          Only kill children; don't kill root"
    echo "  <sigspec>   Arbitrary argument to pass to kill, expected to be signal specification"
    exit 1
}
kill_parent=1
sig_spec=-9
function do_kill # <pid>...
{
    kill "$sig_spec" "$@"
}
function kill_children # pid
{
    local target=$1
    local pid=
    local ppid=
    local i
    # Returns alternating ids: first is pid, second is parent
    for i in $(ps -f | tail +2 | cut -b 10-24); do
        if [ ! -n "$pid" ]; then
            # first in pair
            pid=$i
        else
            # second in pair
            ppid=$i
            (( ppid == target && pid != $$ )) && {
                kill_children $pid
                do_kill $pid
            }
            # reset pid for next pair
            pid=
        fi
    done
}
test -n "$1" || usage
while [ -n "$1" ]; do
    case "$1" in
        -c)
            kill_parent=0
            ;;
        -*)
            sig_spec="$1"
            ;;
        *)
            kill_children $1
            (( kill_parent )) && do_kill $1
            ;;
    esac
    shift
done
The only real downside is the somewhat ugly message that bash prints out when it receives a fatal signal, namely "Terminated", "Killed" or "Interrupted" (depending on what you send). However, I can live with that in batch scripts.
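If the message bothers you, one common workaround (a sketch, not part of this answer) is to reap the killed job with its stderr redirected, since the shell emits the status report when the job is waited on:
#!/bin/bash
sleep 100 &
pid=$!
kill -9 "$pid"
wait "$pid" 2>/dev/null    # the "Killed" report goes to /dev/null
echo "no status message printed"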
This script looks like it'll do the job:
#!/bin/bash
# Author: Sunil Alankar
##
# recursive kill. kills the process tree down from the specified pid
#
# foreach child of pid, recursive call dokill
dokill() {
    local pid=$1
    local itsparent=""
    local aprocess=""
    local x=""
    # next line is a single line
    for x in `/bin/ps -f | sed -e '/UID/d;s/[a-zA-Z0-9_-]\{1,\} \{1,\}\([0-9]\{1,\}\) \{1,\}\([0-9]\{1,\}\) .*/\1 \2/g'`
    do
        if [ "$aprocess" = "" ]; then
            aprocess=$x
            itsparent=""
            continue
        else
            itsparent=$x
            if [ "$itsparent" = "$pid" ]; then
                dokill $aprocess
            fi
            aprocess=""
        fi
    done
    echo "killing $1"
    kill -9 $1 > /dev/null 2>&1
}
case $# in
    1) PID=$1
        ;;
    *) echo "usage: rekill <top pid to kill>";
        exit 1;
        ;;
esac
dokill $PID
