bash: suppress kill message in while loop

I have a progress bar that prints dots as it waits for an external program to finish executing. When it does finish, I get an ugly kill message which I want to suppress.
#!/bin/bash
program < input.file.1 > output.1 &
sim='running simulation'
echo -ne $sim >&2
while kill -0 $!; do
echo -n . >&2
sleep 1
done
Expected: running simulation.........
Actual: running simulation........./run_with_dots.1: line 8: kill: (11872) - No such process

Redirect kill's stderr to /dev/null:
while kill -0 $! 2> /dev/null; do
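A complete minimal sketch of the fix, with a short `sleep` standing in for the external program:

```shell
#!/bin/bash
# minimal sketch: sleep stands in for "program < input.file.1 > output.1"
sleep 2 > /dev/null &
sim='running simulation'
echo -ne "$sim" >&2
while kill -0 $! 2> /dev/null; do  # kill's "No such process" complaint is discarded
    echo -n . >&2
    sleep 1
done
echo ' done' >&2
```

The probe itself still works exactly as before; only its error output is suppressed, so the loop ends quietly once the background process is gone.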

Related

how do I watch for a process to have died in shell script?

I'm running a shell script that displays a progress bar, but when I run it I keep getting a "unary operator expected" error. Is kill -0 a way to kill a subprocess in shell?
Or is there another method to test if my process has died?
Here's my code to run a progress bar until my command ends:
#!/bin/sh
# test my progress bar
spin[0]="-"
spin[1]="\\"
spin[2]="|"
spin[3]="/"
sleep 10 2>/dev/null & # run as background process
pid=$! # grab process id
echo -n "[sleeping] ${spin[0]}"
while [ kill -0 $pid ] # wait for process to end
do
for i in "${spin[@]}"
do
echo -ne "\b$i"
sleep 0.1
done
done
1. Is kill -0 a way to kill a subprocess in shell?
No. kill -0 sends no signal at all: signal 0 only makes the kernel perform its error checking, so it tests whether the process exists and whether you are allowed to signal it.
If the process is running (and you may signal it), kill returns 0; if not, it returns a non-zero status.
ps -p $pid >/dev/null 2>&1 does much the same job.
To actually kill a process, one generally sends SIGTERM/15 (terminate) or SIGKILL/9 (kill): a process can trap SIGTERM and make a clean exit, or even ignore it, whereas SIGKILL cannot be caught or ignored, so the OS terminates the process 'quick and dirty'.
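A quick sketch of both probes side by side, using a short `sleep` as a stand-in process:

```shell
#!/bin/bash
sleep 1 & pid=$!                       # stand-in background process
kill -0 "$pid" 2>/dev/null && echo "alive (kill -0)"
ps -p "$pid" > /dev/null 2>&1 && echo "alive (ps)"
wait "$pid"                            # let it finish
kill -0 "$pid" 2>/dev/null || echo "gone"
```

Both probes agree while the process runs and after it exits; kill -0 has the advantage of not forking a ps process on every loop iteration.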
2. test and '['
The square bracket '[' is a utility (/bin/[, a synonym for test) that expects a test expression, and [ kill -0 $pid ] is not one.
The syntax of while is while list; do list; done, where the exit status of list controls the loop, so you can use kill -0 $pid as the condition directly, with no [ at all.
3. how do I watch for a process to have died in shell script?
Like you did, the code below will do the job:
#!/bin/bash
spin[0]="-"
spin[1]="\\"
spin[2]="|"
spin[3]="/"
sleep 10 2>/dev/null & # run as background process
pid=$! # grab process id
echo -n "[sleeping] ${spin[0]}"
#while ps -p $pid >/dev/null 2>&1 # using ps
while kill -0 $pid >/dev/null 2>&1 # using kill
do
for i in "${spin[@]}"
do
echo -ne "\b$i"
sleep 0.5
done
done
CAVEATS
I use /bin/bash as the interpreter, as some Bourne-shell (sh) implementations do not support arrays (i.e. spin[n]).
It's probably cleaner to run the spinner in the background and kill it when the process (running in the foreground) terminates. Alternatively, you could open another file descriptor, write something into it when the background process terminates, and have the main process block on a read. E.g.:
#!/bin/bash
# test my progress bar
spin[0]='-'
spin[1]='\'
spin[2]='|'
spin[3]='/'
{ { { sleep 10 2>/dev/null; echo >&5; } & # run as background process
} 5>&1 1>&3 | { # wait for process to end
while ! read -t 1; do
printf "\r[sleeping] ${spin[ $(( i = ++i % 4 )) ]}"
done
}
} 3>&1
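The first alternative (spinner in the background, killed when the foreground job finishes) can be sketched like this, with a `sleep` standing in for the real job:

```shell
#!/bin/bash
# sketch: run the spinner as a background job and kill it when the
# foreground job (here a stand-in sleep) terminates
spin=('-' '\' '|' '/')
( i=0
  while :; do
      printf '\r[sleeping] %s' "${spin[i++ % 4]}" >&2
      sleep 0.1
  done ) &
spinner=$!
sleep 2                        # the foreground job
kill "$spinner" 2>/dev/null
wait "$spinner" 2>/dev/null    # reap it quietly, suppressing any "Terminated" notice
printf '\rdone       \n'
```

Waiting on the spinner after killing it keeps the script's stderr clean, which ties back to the stderr-redirection trick from the first question.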

Redirecting output in shell script from subprocess with simulated ctrl-c

I am working on a shell script which sets some parameters and calls a python script. Sometimes the python script hangs, and when I press ctrl-c it generates some error output, which I am trying to write to a file. When I execute the shell script and press ctrl-c, I get the output in the redirected file; but if I simulate the ctrl-c by sleeping for some time and then killing the process, the output is not redirected to the file. I have used some examples from SO to do the sleep-and-terminate, which works, but the output file doesn't have the error that I get from a manual ctrl-c.
I cannot change the python script that this script is executing, so I have to implement this in the calling script.
[ -z "$1" ] && echo "No environment argument supplied" && exit 1
. env_params.txt
. run_params.txt
echo "========================================================== RUN STARTED AT $B ==========================================================" >> $OUTFILE
echo " " >> $OUTFILE
export RUN_COMMAND="$PYTHON_SCRIPT_LOC/pyscript.py PARAM1=VALUE1 PARAM2=VALUE2 PARAM3=value3"
echo "Run command from test.sh $RUN_COMMAND" >> $OUTFILE
echo " " >> $OUTFILE
echo " " >> $OUTFILE
echo "========================================================== Running Python script ==========================================================" >> $OUTFILE
echo "Before python command"
###############################################
( python $RUN_COMMAND >> $OUTFILE 2>&1 ) & pid=$!
SLEEP_TIME=1m
echo "before sleep - sleeping for $SLEEP_TIME $pid"
( sleep $SLEEP_TIME && kill -HUP $pid ) 2>/dev/null & watcher=$!
if wait $pid 2>/dev/null; then
echo "after sleep - sleeping for $SLEEP_TIME $pid"
echo "your_command finished"
pkill -HUP -P $watcher
wait $watcher
else
echo "after sleep - sleeping for $SLEEP_TIME $pid"
echo "your_command interrupted"
fi
### also tried this - did not work either
### python $RUN_COMMAND >> $OUTFILE 2>&1 &; (pythonPID=$! ; sleep 1m ;kill -s 2 $pythonPID)
.
.
What changes do I need to make such that the output is written to the $OUTFILE when the process is killed in the script itself, rather than pressing ctrl-c on the terminal?
Probably you do not want SIGHUP (hangup detected on controlling terminal, or death of controlling process), but rather:
SIGINT 2 Term Interrupt from keyboard
For more info read man 7 signal.
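A runnable sketch of the change, with a `sleep` standing in for the python command. One caveat worth knowing: without job control, a background job started from a script has SIGINT set to ignore (which may be why the commented-out kill -s 2 attempt appeared to do nothing), so the sketch enables job control with set -m:

```shell
#!/bin/bash
# set -m enables job control; without it, background jobs
# started from a script ignore SIGINT entirely
set -m
sleep 30 & pid=$!                # stand-in for the python command
kill -INT "$pid"                 # same signal as Ctrl-C (equivalently: kill -2 "$pid")
wait "$pid" 2>/dev/null
status=$?
echo "exit status: $status"      # 130 = 128 + 2, i.e. killed by SIGINT
```

A real python process that installs a KeyboardInterrupt handler would then produce the same traceback output as a manual ctrl-c, which the existing >> $OUTFILE 2>&1 redirection captures.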

Preventing "Terminated" message when killing subprocess

I am looking for a tool like Linux's timeout command that will allow timeout values of less than 1 second. I am using the solution here, but I am running into one problem.
The problem is that sometimes it prints this message, and I am trying to figure out why.
./tools/utimeout.sh: line 17: 12369 Terminated $(./tools/usleep $TIMEOUT ; kill $PROC &> /dev/null)
I tried modifying the solution so that the subprocess would be killed before the script exits, but this didn't help.
#!/bin/bash
TIMEOUT=400000
#execute command in background
"$#" &
#get process ID
PROC=$!
#sleep for TIMEOUT microseconds (400 ms) then kill command
(./tools/usleep $TIMEOUT ; kill $PROC &> /dev/null) &
CPROC=$!
wait $PROC &> /dev/null
kill $CPROC &> /dev/null
if [ $? -eq 1 ]; then
# echo "Process timed out."
exit 1
else
# echo "Process completed successfully."
exit 0
fi
Since I want to capture stderr from this script, as workaround, I am just removing the message from the error logs with sed. Since this is a hack, I was hoping to find a better solution.
To avoid the message you can disown the process
(./tools/usleep $TIMEOUT ; kill $PROC &> /dev/null) &
disown %
and then you cannot wait on it, or you can put it in a subshell:
( (./tools/usleep $TIMEOUT ; kill $PROC &> /dev/null) & )
but of course then you cannot kill it, as $! won't be right.
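A runnable sketch of the disown variant, with `sleep` commands standing in for the timed command and usleep:

```shell
#!/bin/bash
# sketch of the disown approach; sleeps stand in for "$@" and usleep
sleep 5 & PROC=$!
( sleep 0.2; kill "$PROC" ) 2>/dev/null &
CPROC=$!
disown %%                    # drop the watcher from the job table, so bash
                             # never reports it as "Terminated"
wait "$PROC" 2>/dev/null     # returns once the watcher kills PROC
kill "$CPROC" 2>/dev/null    # harmless if the watcher already exited
```

Killing the watcher by its saved pid still works after disown; only job-table operations like wait %% are lost.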
My Fedora 21 timeout accepts floating-point durations:
$ time timeout .1 sleep 1
real 0m0.103s
user 0m0.002s
sys 0m0.002s

How can I make an external program interruptible in this trap-captured bash script?

I am writing a script which will run an external program (arecord) and do some cleanup if it's interrupted by either a POSIX signal or input on a named pipe. Here's the draft in full
#!/bin/bash
X=`date '+%Y-%m-%d_%H.%M.%S'`
F=/tmp/$X.wav
P=/tmp/$X.$$.fifo
mkfifo $P
trap "echo interrupted && (rm $P || echo 'couldnt delete $P') && echo 'removed fifo' && exit" INT
# this forked process will wait for input on the fifo
(echo 'waiting for fifo' && cat $P >/dev/null && echo 'fifo hit' && kill -s SIGINT $$)&
while true
do
echo waiting...
sleep 1
done
#arecord $F
This works perfectly as it is: the script ends when a signal arrives and a signal is generated if the fifo is written-to.
But instead of the while true loop I want the now-commented-out arecord command. If I run that program instead of the loop, the SIGINT doesn't get caught by the trap and arecord keeps running.
What should I do?
It sounds like you really need this to work more like an init script. So, start arecord in the background and put the pid in a file. Then use the trap to kill the arecord process based on the pidfile.
#!/bin/bash
PIDFILE=/var/run/arecord-runner.pid #Just somewhere to store the pid
LOGFILE=/var/log/arecord-runner.log
#Just one option for how to format your trap call
#Note that this does not use &&, so one failed function will not
# prevent other items in the trap from running
trapFunc() {
echo interrupted
(rm "$P" || echo "couldn't delete $P")
echo 'removed fifo'
kill $(cat $PIDFILE)
exit 0
}
X=`date '+%Y-%m-%d_%H.%M.%S'`
F=/tmp/$X.wav
P=/tmp/$X.$$.fifo
mkfifo $P
trap "trapFunc" INT
# this forked process will wait for input on the fifo
(echo 'waiting for fifo' && cat $P >/dev/null && echo 'fifo hit' && kill -s SIGINT $$)&
arecord $F 1>$LOGFILE 2>&1 & #Run in the background, sending logs to file
echo $! > $PIDFILE #Save pid of the last background process to file
while true
do
echo waiting...
sleep 1
done
Also... you may have your trap written with '&&' clauses for a reason, but as an alternative, you can give a function name as I did above, or a sort of anonymous function like this:
trap "{ command1; command2 args; command3; exit 0; }" INT
Just make sure that each command is followed by a semicolon and there are spaces between the braces and the commands. The risk of using && in the trap is that your script will continue to run past the interrupt if one of the commands before the exit fails to execute (but maybe you want that?).
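A small sketch of the brace-group trap form, exercised by a child shell signalling itself (mktemp just provides a hypothetical scratch file to clean up):

```shell
#!/bin/bash
# the trap body is a brace group of commands separated by semicolons;
# the child sends itself SIGTERM to simulate an interrupt
out=$(bash -c '
    tmp=$(mktemp)
    trap "{ rm -f $tmp; echo cleaned up; exit 0; }" INT TERM
    kill -TERM $$
    echo "not reached"
')
echo "$out"
```

Because the trap body ends with exit 0, the line after the kill never runs, which is the behaviour you usually want from a cleanup handler.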

How to continue execution of background process in this scenario

I have 3 process a.sh, b.sh, c.sh that are executed in background.
./a.sh &
pid_a=$!
./b.sh &
pid_b=$!
./c.sh &
pid_c=$!
I need to ensure that all three processes run until the longest one terminates. If c.sh takes 10 sec, a.sh takes 3 sec, and b.sh takes 5 sec individually, I need to execute a.sh and b.sh again so that they keep running until c.sh finishes.
I was trying this approach which certainly doesn't work in the above scenario
./a.sh &
while ps -p $! > /dev/null; do
./b.sh &
pid_b=$!
./c.sh &
pid_c=$!
wait $pid_c
done
How do I achieve this?
You can use temporary files as flags to indicate when each process completes for the first time. Run each script in a background loop until each of the other two have completed at least once.
flag_dir=$(mktemp -d flagsXXXXX)
flag_a=$flag_dir/a
flag_b=$flag_dir/b
flag_c=$flag_dir/c
( until [[ -f $flag_b && -f $flag_c ]]; do ./a.sh; touch $flag_a; done; ) &
( until [[ -f $flag_a && -f $flag_c ]]; do ./b.sh; touch $flag_b; done; ) &
( until [[ -f $flag_a && -f $flag_b ]]; do ./c.sh; touch $flag_c; done; ) &
# Each until-loop runs until it sees the other two have completed at least one
# cycle. Wait here until each loop finishes.
wait
# Clean up
rm -rf "$flag_dir"
[Note: this works for bash only; ksh93's kill behaves differently.]
As long as there's at least one process you are allowed to kill, kill -0 will return success. Tune the interval as needs be.
#! /bin/bash
interval=1
pids= && for t in 2 3; do
(sleep $t && echo slept $t seconds) & pids=${pids:+$pids }$!
done
while (kill -0 $pids) 2>&-; do
sleep $interval
# optional reporting:
for pid in $pids; do
(kill -0 $pid) 2>&- && echo $pid is alive
done
done
Results in:
6463 is alive
6464 is alive
slept 2 seconds
[1]- Done eval sleeper $t
6464 is alive
slept 3 seconds
[2]+ Done eval sleeper $t
Builtin kill is not consistent regarding errors:
$ ksh -c 'kill -0 571 6133 && echo ok || echo no'
kill: 571: permission denied
no
$ bash -c 'kill -0 571 6133 && echo ok || echo no'
bash: line 0: kill: (571) - Operation not permitted
ok
Firstly, you can use kill -0 to test the status of the process for c.sh, rather than using wait to wait for it to terminate.
Second, you can use 2 separate processes to monitor the state of scripts a.sh and b.sh
Third, this assumes that c.sh is the longest running process.
Thus, monitor process 1 does the following:
# I have pid_c
./a.sh &
pid_a=$!
while wait $pid_a; do
if kill -0 $pid_c; then
./a.sh&
pid_a=$!
fi
done
and monitor process 2 does the following:
# I have pid_c
./b.sh &
pid_b=$!
while wait $pid_b; do
if kill -0 $pid_c; then
./b.sh &
pid_b=$!
fi
done
Thus, you're monitoring the 2 processes separately. However, if you need to monitor them as well, then spawn the monitors as 2 background jobs and a simple wait will wait on c.sh as well as the 2 monitors.
Note: kill -0 $PID returns 0 if $PID is running, and non-zero if it has terminated (or you are not permitted to signal it).
