I want to kill a bash command when I find some string in its output.
To clarify, I want the solution to be similar to a timeout command:
timeout 10s looping_program.sh
Which will execute the script looping_program.sh and kill it after 10 seconds of execution.
Instead I want something like:
regexout "^Success$" looping_program.sh
Which will execute the script until it matches a line that just says Success in the stdout of the program.
Note that I'm assuming that looping_program.sh, for whatever reason, does not exit at the same time it outputs Success, so simply waiting for the program to exit would waste time if I don't care about what happens after that.
So something like:
bash -e looping_program.sh > /tmp/output &
PID="$(ps aux | grep looping_program.sh | head -1 | tr -s ' ' | cut -f 2 -d ' ')"
echo $PID
while :; do
echo "$(tail -1 /tmp/output)"
if [[ "$(tail -1 /tmp/output)" == "Success" ]]; then
kill $PID
exit 0
fi
sleep 1
done
Where looping_program.sh is something like:
echo "Fail"
sleep 1;
echo "Fail"
sleep 1;
echo "Fail"
sleep 1;
echo "Success"
sleep 1;
echo "Fail"
sleep 1;
echo "Fail"
sleep 1;
echo "Fail"
sleep 1;
But that is not very robust (it uses a single temp file... it might kill other programs...), and I want it to be just one command. Does something like this exist? I may just write a C program to do it if not.
P.S.: I provided my code as an example of what I wanted the program to do. It does not use good programming practices. Notes from other commenters:
@KamilCuk: Do not use a temporary file. Use a FIFO.
@pjh: Note that any approach that involves using kill with a PID in shell code runs the risk of killing the wrong process. Use kill in shell programs only when it is absolutely necessary.
There are more suggestions below from other users, I just wanted to make sure no one came across this and thought it would be good to model their code after.
looping_program() {
    for i in 1 2 3; do echo $i; sleep 1; done
    echo Success
    yes
}
# Run the program as a coprocess: its stdout is readable on ${COPROC[0]}
# and its PID is available in $COPROC_PID.
coproc looping_program
while IFS= read -r line; do
    if [[ "$line" =~ Success ]]; then
        break
    fi
done <&${COPROC[0]}
# Close our ends of the coprocess's pipes, then kill it and reap it.
exec {COPROC[0]}>&- {COPROC[1]}>&-
kill ${COPROC_PID}
wait ${COPROC_PID}
Notes:
Do not use a temporary file. Use a FIFO.
Do not use tail -n1 to read the last line. Read from the stream in a loop.
Do not run tail -1 twice. Cache the result.
Wait on the PID after killing it, to synchronize.
When you're using a coprocess, use COPROC_PID to get the PID.
When you're not using a coprocess, use $! to get the PID of a background process started from the current shell (see the sketch after these notes).
When you can't use $! (because the process you're trying to get a PID of was not spawned in the background as a direct child of the current shell), do not use ps aux | grep to get the pid. Use pgrep.
Do not use echo $(stuff). Just run the stuff, no echo.
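As a minimal sketch of the non-coprocess variant these notes describe (the FIFO path and the Success pattern are carried over from the question; the details are assumptions, not a drop-in solution):
#!/usr/bin/env bash
# FIFO instead of a temporary file; $! instead of ps | grep.
fifo=$(mktemp -u) && mkfifo "$fifo" || exit 1
./looping_program.sh > "$fifo" &
pid=$!    # PID of our own background child
while IFS= read -r line; do
    [[ "$line" == "Success" ]] && break
done < "$fifo"
kill "$pid"
wait "$pid" 2>/dev/null    # synchronize after killing
rm -f "$fifo"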
With expect
#!/usr/bin/env -S expect -f
set timeout -1
spawn ./looping_program.sh
expect "Success"
send -- "\x03"
expect eof
Call it looping_killer:
$ ./looping_killer
spawn ./looping_program.sh
Fail
Fail
Fail
Success
^C
To pass the program and pattern:
./looping_killer some_program "some pattern"
You'd change the expect script to:
#!/usr/bin/env -S expect -f
set timeout -1
spawn [lindex $argv 0]
expect -- [lindex $argv 1]
send -- "\x03"
expect eof
Assuming that your looping program exits when it tries to write to a broken pipe, this will print all output up to and including the 'Success' line and then exit:
./looping_program | sed '/^Success$/q'
You may need to disable buffering of the looping program output. See Force line-buffering of stdout in a pipeline and How to make output of any shell command unbuffered? for ways to do it.
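For example, one possibility with GNU coreutils (assuming the program buffers its output through stdio, which stdbuf can influence):
stdbuf -oL ./looping_program | sed '/^Success$/q'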
See Should I save my scripts with the .sh extension? and Erlkonig: Commandname Extensions Considered Harmful for reasons why I dropped the '.sh' suffix.
Note that any approach that involves using kill with a PID in shell code runs the risk of killing the wrong process. Use kill in shell programs only when it is absolutely necessary.
Related
I need to find out whether a value (actually it's more complex than that) is on one of the 20 servers I have, and I need to do it as fast as possible. Right now I am sending the scripts simultaneously to all the servers. My main script is something like this (but with all the servers):
#!/bin/sh
#mainScript.sh
value=$1
c1=`cat serverList | sed -n '1p'`
c2=`cat serverList | sed -n '2p'`
sh find.sh $value $c1 & sh find.sh $value $c2
#!/bin/sh
#find.sh
#some code here .....
if [ $? -eq 0 ]; then
rm $tempfile
else
myValue=`sed -n '/VALUE/p' $tempfile | awk 'BEGIN{FS="="} {print substr($2, 8, length($2)-2)}'`
echo "$myValue"
fi
So the script only returns a response if it finds the value on the server. I would like to know if there is a way to stop executing the other scripts once one of them has already returned a value.
I tried adding an "exit" to the find.sh script, but it doesn't stop the other scripts. Can somebody please tell me if what I want to do is possible?
I would suggest that you use something that can handle this for you: GNU Parallel. From the linked tutorial:
If you are looking for success instead of failures, you can use success. This will finish as soon as the first job succeeds:
parallel -j2 --halt now,success=1 echo {}\; exit {} ::: 1 2 3 0 4 5 6
Output:
1
2
3
0
parallel: This job succeeded:
echo 0; exit 0
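Applied to your case, it could look something like this (a hedged sketch; :::: reads the argument list from the serverList file, and it assumes find.sh exits non-zero on failure, as the next answer suggests):
parallel -j0 --halt now,success=1 ./find.sh "$value" {} :::: serverList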
I suggest you start by modifying your find.sh so that its return code depends on its success; that will let us identify a successful call more easily. For instance:
myValue=`sed -n '/VALUE/p' $tempfile | awk 'BEGIN{FS="="} {print substr($2, 8, length($2)-2)}'`
[ -n "$myValue" ]  # awk exits 0 either way, so test that something was actually extracted
success=$?
echo "$myValue"
exit $success
To terminate all the find.sh processes spawned by your script, you can use pkill with a parent-process-ID criterion and a command-name criterion:
pkill -P $$ find.sh # $$ refers to the current process' PID
Note that this requires that you start the find.sh script directly rather than passing it as a parameter to sh. Normally that shouldn't be a problem, but if you have a good reason to call sh rather than your script, you can replace find.sh in the pkill command by sh (assuming you're not spawning other scripts you wouldn't want to kill).
Now that find.sh exits with success only when it finds the expected string, you can chain the two actions with && and run the whole thing in the background:
{ find.sh $value $c1 && pkill -P $$ find.sh; } &
The first occurrence of find.sh that terminates with success will invoke the pkill command that will terminate all others (those killed processes will have non-zero exit codes and therefore won't run their associated pkill).
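Putting this answer's pieces together as written, the main script could become (a sketch, assuming just the two servers from the question):
#!/bin/sh
#mainScript.sh
value=$1
c1=`sed -n '1p' serverList`
c2=`sed -n '2p' serverList`
{ ./find.sh "$value" "$c1" && pkill -P $$ find.sh; } &
{ ./find.sh "$value" "$c2" && pkill -P $$ find.sh; } &
wait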
For the following bash statement:
tail -Fn0 /tmp/report | while [ 1 ]; do echo "pre"; exit; echo "past"; done
I got "pre", but didn't quit to the bash prompt, then if I input something into /tmp/report, I could quit from this script and get into bash prompt.
I think that's reasonable. the 'exit' make the 'while' statement quit, but the 'tail' still alive. If something input into /tmp/report, the 'tail' will output to pipe, then 'tail' will detect the pipe is close, then 'tail' quits.
Am I right? If not, would anyone provide a correct interpretation?
Is it possible to add anything into 'while' statement to quit from the whole pipe statement immediately? I know I could save the pid of tail into a temporary file, then read this file in the 'while', then kill the tail. Is there a simpler way?
Let me enlarge my question. If use this tail|while in a script file, is it possible to fulfill following items simultaneously?
a. If Ctrl-C is inputed or signal the main shell process, the main shell and various subshells and background processes spawned by the main shell will quit
b. I could quit from tail|while only at a trigger case, and preserve other subprocesses keep running
c. It's better not use temporary file or pipe file.
You're correct. The while loop is executing in a subshell because it is part of a pipeline, and exit just exits from that subshell.
If you're running bash 4.x, you may be able to achieve what you want with a coprocess.
coproc TAIL { tail -Fn0 /tmp/report.txt ;}
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <&${TAIL[0]}
kill $TAIL_PID
http://www.gnu.org/software/bash/manual/html_node/Coprocesses.html
With older versions, you can use a background process writing to a named pipe:
pipe=/tmp/tail.$$
mkfifo $pipe
tail -Fn0 /tmp/report.txt >$pipe &
TAIL_PID=$!
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <$pipe
kill $TAIL_PID
rm $pipe
You can (unreliably) get away with killing the process group:
tail -Fn0 /tmp/report | while :
do
    echo "pre"
    sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
    echo "past"
done
This may send the signal to more processes than you want. If you run the above command in an interactive terminal you should be okay, but in a script it is entirely possible (indeed likely) that the process group will include the script running the command. To avoid signalling the script itself, it would be wise to enable monitoring and run the pipeline in the background to ensure that a new process group is formed for the pipeline:
#!/bin/sh
# In POSIX shells that support the User Portability Utilities option
# (this includes bash & ksh), executing "set -m" turns on job control.
# Background processes run in a separate process group. If the shell
# is interactive, a line containing their exit status is printed to
# stderr upon their completion.
set -m
tail -Fn0 /tmp/report | while :
do
    echo "pre"
    sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
    echo "past"
done &
wait
Note that I've replaced the while [ 1 ] with while : because while [ 1 ] is poor style. (It behaves exactly the same as while [ 0 ]).
Would someone please tell me why the bash statement below cannot be terminated properly by Ctrl+C?
$( { ( tail -fn0 /tmp/a.txt & )| while read line; do echo $line; done } 3>&1 )
When I run this statement, two bash processes and one tail process are launched (seen via ps auxf). Then I press Ctrl+C, and it does not return to the bash prompt; at this moment the two bash processes have stopped while the tail is still running. Then I write something into /tmp/a.txt, and I get back to the bash prompt.
What I want is: press Ctrl+C, then return straight to the bash prompt with no relevant processes left over.
I would also appreciate it if someone could explain exactly how this statement is processed, e.g. where a pipe causes bash to fork, what is redirected where, etc.
Update (Oct 9, 2014):
Here is some update in case it's useful to you.
The solution I adopted involves two parts:
Use a temporary PID file:
( tail -Fn0 ${monitor_file} & echo "$!" >${tail_pid} ) | \
while IFS= read -r line; do
xxxx
done
Use a trap like trap "rm ${tail_pid} 2>/dev/null; kill 0 2>/dev/null; exit;" INT TERM to kill the relevant processes and remove any remaining files.
Please note that kill 0 signals all processes in the current process group.
This solution uses a temporary PID file; I would still prefer a solution that avoids one.
It works to trap the INT signal (sent by Ctrl-C) to kill the tail process.
$( r=$RANDOM; trap '{ kill $(cat /tmp/pid$r.pid);rm /tmp/pid$r.pid;exit; }' SIGINT EXIT; { ( tail -fn0 /tmp/a.txt & echo $! > /tmp/pid$r.pid )| while read line; do echo $line; done } 3>&1 )
(I use a random value on the PID file name to at least mostly allow multiple instances to run)
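As an aside, a hedged sketch that avoids the PID file entirely: recent bash versions set $! to the PID of the most recent process substitution, so the tail can be killed directly (the trigger string here is hypothetical):
while IFS= read -r line; do
    [[ "$line" == "trigger" ]] && break    # hypothetical trigger condition
done < <(tail -Fn0 /tmp/report)
kill "$!" 2>/dev/null    # $! holds the process substitution's PID here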
Suppose I have test.sh as below. The intent is for this script to run some background task(s) that continuously update some file. If the background task is terminated for some reason, it should be restarted.
#!/bin/sh
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi
while true; do
    echo "something" >> somewhere
    sleep 1
done &
echo $! > pidfile
and I want to call it like ./test.sh | otherprogram, e.g. ./test.sh | cat.
The pipe is not being closed because the background process still exists and might produce further output. How can I tell the pipe to close at the end of test.sh? Is there a better way than checking for the existence of the pidfile before calling the piped command?
As a variant I tried using #!/bin/bash and disown at the end of test.sh, but it is still waiting for the pipe to be closed.
What I actually try to achieve: I have a "status" script which collects the output of various scripts (uptime, free, date, get-xy-from-dbus, etc.), similar to this test.sh here. The output of the script is passed to my window manager, which displays it. It's also used in my GNU screen bottom line.
Since some of the scripts that are used might take some time to create output, I want to detach them from output collection. So I put them in a while true; do script; sleep 1; done loop, which is started if it is not running yet.
The problem here is now that I don't know how to tell the calling script to "really" detach the daemon process.
See if this serves your purpose:
(I am assuming that you are not interested in any stderr output from the commands in the while loop. Adjust the code if you are. :-) )
#!/bin/bash
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi
while true; do
    echo "something" >> somewhere
    sleep 1
done >/dev/null 2>&1 &   # detach the loop's output so the pipe to the caller can close
echo $! > pidfile
If you want to explicitly close a file descriptor, for example 1 (standard output), you can do it with:
exec 1<&-
This is valid for POSIX shells.
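Applied to this question, a hedged variant of the background loop that closes the inherited descriptors instead of redirecting them to /dev/null:
while true; do
    echo "something" >> somewhere
    sleep 1
done 1<&- 2<&- &    # close inherited stdout/stderr so the reading end sees EOF
echo $! > pidfile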
When you put the while loop in an explicit subshell and run the subshell in the background, it will give the desired behaviour.
(while true; do
    echo "something" >> somewhere
    sleep 1
done) &
I have a pair of shell programs that talk over a named pipe. The reader creates the pipe when it starts, and removes it when it exits.
Sometimes, the writer will attempt to write to the pipe between the time that the reader stops reading and the time that it removes the pipe.
reader: while condition; do read data <$PIPE; do_stuff; done
writer: echo $data >>$PIPE
reader: rm $PIPE
When this happens, the writer will hang forever trying to open the pipe for writing.
Is there a clean way to give it a timeout, so that it won't stay hung until killed manually? I know I can do
#!/bin/sh
# timed_write <timeout> <file> <args>
# like "echo <args> >> <file>" with a timeout
TIMEOUT=$1
shift;
FILENAME=$1
shift;
PID=$$
(X=0; # don't do "sleep $TIMEOUT", the "kill %1" doesn't kill the sleep
while [ "$X" -lt "$TIMEOUT" ]; do
    sleep 1; X=$(expr $X + 1);
done; kill $PID) &
echo "$@" >>$FILENAME
kill %1
but this is kind of icky. Is there a shell builtin or command to do this more cleanly (without breaking out the C compiler)?
The UNIX "standard" way of dealing with this is to use Expect, which comes with timed-run example: run a program for only a given amount of time.
Expect can do wonders for scripting, well worth learning it. If you don't like Tcl, there is a Python Expect module as well.
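As a lighter-weight aside (assuming GNU coreutils is available, which the question doesn't state): timeout(1) can wrap the write, since tee is the process that performs the blocking open of the FIFO:
# Give up after 5 seconds if no reader ever opens the pipe.
echo "$data" | timeout 5 tee -a "$PIPE" > /dev/null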
This pair of programs works much more nicely after being re-written in Perl using Unix domain sockets instead of named pipes. The particular problem in this question went away entirely, since if/when one end dies the connection disappears instead of hanging.
This question comes up periodically (though I couldn't find it with a search). I've written two shell scripts to use as timeout commands: one for things that read standard input and one for things that don't read standard input. This stinks, and I've been meaning to write a C program, but I haven't gotten around to it yet. I'd definitely recommend writing a timeout command in C once and for all. But meanwhile, here's the simpler of the two shell scripts, which hangs if the command reads standard input:
#!/bin/ksh
# our watchdog timeout in seconds
maxseconds="$1"
shift
case $# in
0) echo "Usage: `basename $0` <seconds> <command> [arg ...]" 1>&2 ;;
esac
"$@" &
waitforpid=$!
{
    sleep $maxseconds
    echo "TIMED OUT: $@" 1>&2
    2>/dev/null kill -0 $waitforpid && kill -15 $waitforpid
} &
killerpid=$!
>>/dev/null 2>&1 wait $waitforpid
# this is the exit value we care about, so save it and use it when we exit
rc=$?
# zap our watchdog if it's still there, since we no longer need it
2>>/dev/null kill -0 $killerpid && kill -15 $killerpid
exit $rc
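Hypothetical usage, assuming the script above is saved as ./watchdog:
$ ./watchdog 5 sleep 100    # prints "TIMED OUT: sleep 100" after 5 seconds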
The other script is online at http://www.cs.tufts.edu/~nr/drop/timeout.
# Another watchdog variant: have at(1) send us signal 30 after the given
# time ($1); on that signal, kill the background job's processes and exit 30.
trap 'kill $(ps -L $! -o pid=); exit 30' 30
echo kill -30 $$ 2\>/dev/null | at $1 2>/dev/null
shift; eval "$@" &
wait