I have a command "command1" that runs indefinitely (must be killed with Ctrl+c), and that at random intervals outputs new lines to stdout. My goal is to run it and see if it outputs a certain "target" line within 10 seconds. If the target output is generated, stop immediately with success, otherwise wait for the 10 seconds and fail.
I came up with this:
timeout 10 bash -c '(while read line; do [[ "$line" == "target" ]] && break; done < <(command1))'
It works, but the problem is that when a match is found, although the timeout command completes and returns successfully, command1 will continue to run indefinitely as a background process. I need it to stop as well when "break" is executed. If a match is not found, and the timeout expires, command1 is stopped correctly.
I also tried this:
timeout 10 bash -c '(command1 | while read line; do [[ "$line" == "target" ]] && exit; done)'
Which does not leave any spurious processes running. The problem is that the exit command does not terminate command1 since it is in a separate process, and the timeout always expires even if the target is found before.
I also explored some alternative options, such as wait -n, but the same problem persists; besides, I must use bash 4.2, so wait -n isn't even an option.
Any suggestions would be greatly appreciated.
When command1 does not terminate on its own, you can kill it manually.
By the way: Instead of while read ... you can use grep.
timeout 10 bash -c 'command1 | (grep -m1 -Fx "target"; pkill -P $PPID command1)'
-P $PPID ensures that only the command1 from this command is killed, and not some other command1 that might run in another shell at the same time.
This assumes that command1 is a single command, and not something like (cmd1; cmd2; ...). For that case, you could simply kill the whole bash process using kill $PPID.
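For example, a minimal sketch of that variant (cmd1 and cmd2 are hypothetical placeholders for the compound command):
# On a match, signal the parent so the rest of the pipeline is torn down as well.
timeout 10 bash -c '(cmd1; cmd2) | (grep -m1 -Fx "target"; kill $PPID)'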
Found what works best for my case:
timeout 10 bash -c 'grep -q -m1 "target" <(command1); pkill -P $!'
All processes terminate gracefully when either the target is found or the timeout expires. If the target is found, the command returns 0; if not, it returns 124.
Thank you @Socowi for some very helpful hints that put me on the right track.
I'm working on a script which needs to detect the first call to FFMPEG in a program and run a script from then on.
The core code looks like:
strace -f -etrace=execve <program> 2>&1 | grep <some_pattern> | <run_some_script>
The desired behaviour is: when the first grepped result comes out, the script should start, and if nothing matches before <program> terminates, the script should never run.
The main problem is how to conditionally execute the script based on the grep's output and how to terminate the script after the program terminates.
I think the first one could be solved using read; since the grepped text is used only as a signal, its contents are irrelevant:
... | read -N 1 && <run_some_script>
and the second could be solved using the broken-pipe mechanism:
<run_some_script> > >(...)
but I don't know how to make them work together. Or is there a better solution?
You could ask grep to match the pattern just once, return immediately, and report success via its exit code. Putting this together in an if conditional:
if strace -f -etrace=execve <program> 2>&1 | grep -q <some_pattern>; then
echo 'run a program'
fi
The -q flag suppresses the usual stdout content returned by the grep command, since, as you've mentioned, you only want to use the grep result to perform an action, not consume the matched text.
Alternatively, you may want to use coproc to run the command in the background and check every line of the output it produces. Just write a wrapper over the command you want to run, as below. The function is not needed for a single command, but for multiple commands it is more relevant.
wrapper() { strace -f -etrace=execve <program> 2>&1 ; }
Using coproc is similar to running the command in the background, but it provides an easy way to capture the output of the command:
coproc outputfd { wrapper; }
Now watch the output of the commands run inside wrapper by reading from the file descriptor provided by coproc. The below code will watch on the output and on the first match of the pattern it starts a background job for the command to run and the process id is stored in pid.
flag=1
while IFS= read -r -u "${outputfd[0]}" output; do
    if [[ $output == *"pattern"* && $flag -eq 1 ]]; then
        flag=0
        command_to_run & pid=$!
    fi
done
When the loop terminates, the background job started by coproc is complete. At that point, kill the script you started. For safety, discard errors in case it is no longer alive:
kill "$pid" >/dev/null 2>&1
Using the ifne util:
strace -f -etrace=execve <program> 2>&1 |
grep <some_pattern> | ifne <some_script>
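ifne (from the moreutils package) runs the given command only if its standard input is not empty, so <some_script> starts on the first match and never runs if grep matches nothing. A quick way to convince yourself, with echo standing in for the script:
# Prints the message, because grep produces output:
printf 'noise\nFFMPEG call\n' | grep FFMPEG | ifne echo "script would start"
# Prints nothing, because grep matches nothing:
printf 'noise\n' | grep FFMPEG | ifne echo "script would start"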
As part of a bash script, I want to run a program repeatedly, and redirect the output to less. The program has an interactive element, so the goal is that when you exit the program via the window's X button, it is restarted via the script. This part works great, but when I use a pipe to less, the program does not automatically restart until I go to the console and press q. The relevant part of the script:
while :
do
program | less
done
I want to make less quit itself when the pipe closes, so that the program restarts without any user intervention. (That way it behaves just as if the pipe was not there, except while the program is running you can consult the console to view the output of the current run.)
Alternative solutions to this problem are also welcome.
Instead of exiting less, could you simply aggregate the output of each run of program?
while :
do
program
done | less
Having less exit when program exits would be at odds with one useful feature of less, which is that it can buffer the output of a program that exits before you finish reading its output.
UPDATE: Here's an attempt at using a background process to kill less when it is time. It assumes that the only program reading the output file is the less to kill.
while :
do
( program > /tmp/$$-program-output; kill $(lsof -Fp /tmp/$$-program-output | cut -c2-) ) &
less /tmp/$$-program-output
done
program writes its output to a file. Once it exits, the kill command uses lsof to find out what process is reading the file, then kills it. Note that there is a race condition: less needs to start before program exits. If that's a problem, it can probably be worked around, but I'll avoid cluttering the answer otherwise.
You may try to kill the process group that program and less belong to, instead of using kill and lsof.
#!/bin/bash
trap 'kill 0' EXIT
while :
do
# The script command gives sh -c its own process group ID (only the sh -c command gets killed, not the entire script!)
# FreeBSD script command
script -q /dev/null sh -c '(trap "kill -HUP -- -$$" EXIT; echo hello; sleep 5; echo world) | less -E -c'
# GNU script command
#script -q -c 'sh -c "(trap \"kill -HUP -- -$$\" EXIT; echo hello; sleep 5; echo world) | less -E -c"' /dev/null
printf '\n%s\n\n' "you now may ctrl-c the program: $0" 1>&2
sleep 3
done
While I agree with chepner's suggestion, if you really want individual less instances, I think this item from the man page will help you:
-e or --quit-at-eof
Causes less to automatically exit the second time it reaches end-of-file. By default,
the only way to exit less is via the "q" command.
-E or --QUIT-AT-EOF
Causes less to automatically exit the first time it reaches end-of-file.
You would make this option visible to less via the LESS environment variable:
export LESS="-E"
while : ; do
program | less
done
IHTH
For following bash statement:
tail -Fn0 /tmp/report | while [ 1 ]; do echo "pre"; exit; echo "past"; done
I got "pre", but did not return to the bash prompt; then, once I wrote something into /tmp/report, the script quit and I got the bash prompt back.
I think that's reasonable: the exit makes the while statement quit, but tail is still alive. Once something is written into /tmp/report, tail writes to the pipe, detects that the pipe is closed, and quits.
Am I right? If not, would anyone provide a correct interpretation?
Is it possible to add anything to the while statement to quit the whole pipeline immediately? I know I could save the pid of tail into a temporary file, then read this file in the while and kill the tail. Is there a simpler way?
Let me expand my question. If I use this tail|while in a script file, is it possible to fulfill the following items simultaneously?
a. If Ctrl-C is pressed or the main shell process is signaled, the main shell and the various subshells and background processes it spawned should all quit.
b. I can quit from tail|while only on a specific trigger, while other subprocesses keep running.
c. Preferably, no temporary file or named pipe is used.
You're correct. The while loop is executing in a subshell because it is part of a pipeline, and exit just exits from that subshell.
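You can see this with a minimal example; the exit only terminates the loop's subshell, and the script carries on:
echo x | while read -r line; do exit 3; done
echo "still here; the subshell exited with status $?"   # prints 3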
If you're running bash 4.x, you may be able to achieve what you want with a coprocess.
coproc TAIL { tail -Fn0 /tmp/report.txt ;}
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <&${TAIL[0]}
kill $TAIL_PID
http://www.gnu.org/software/bash/manual/html_node/Coprocesses.html
With older versions, you can use a background process writing to a named pipe:
pipe=/tmp/tail.$$
mkfifo $pipe
tail -Fn0 /tmp/report.txt >$pipe &
TAIL_PID=$!
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <$pipe
kill $TAIL_PID
rm $pipe
You can (unreliably) get away with killing the process group:
tail -Fn0 /tmp/report | while :
do
    echo "pre"
    sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
    echo "past"
done
This may send the signal to more processes than you want. If you run the above command in an interactive terminal you should be okay, but in a script it is entirely possible (indeed likely) that the process group will include the script running the command. To avoid signaling the script itself, it would be wise to enable monitoring and run the pipeline in the background to ensure that a new process group is formed for the pipeline:
#!/bin/sh
# In POSIX shells that support the User Portability Utilities option
# (this includes bash & ksh), executing "set -m" turns on job control.
# Background processes run in a separate process group. If the shell
# is interactive, a line containing their exit status is printed to
# stderr upon their completion.
set -m
tail -Fn0 /tmp/report | while :
do
    echo "pre"
    sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
    echo "past"
done &
wait
Note that I've replaced the while [ 1 ] with while : because while [ 1 ] is poor style. (It behaves exactly the same as while [ 0 ]).
I want to build a bash script that executes a command and in the meanwhile performs other stuff, with the possibility of killing the command if the script is killed. Say, executes a cp of a large file and in the meanwhile prints the elapsed time since copy started, but if the script is killed it kills also the copy.
I don't want to use rsync, for two reasons: 1) it is slow, and 2) I want to learn how to do it; it could be useful.
I tried this:
until cp SOURCE DEST
do
    # evaluates time, stuff, commands, file dimensions, not important now
    # and echoes something
done
but it doesn't execute the do...done block, as it waits for the copy to finish. Could you please suggest something?
until is the opposite of while; it has nothing to do with doing stuff while another command runs. For that, you need to run your task in the background with &.
cp SOURCE DEST &
pid=$!
# If this script is killed, kill the `cp'.
trap "kill $pid 2> /dev/null" EXIT
# While copy is running...
while kill -0 $pid 2> /dev/null; do
    # Do stuff
    ...
    sleep 1
done
# Disable the trap on a normal exit.
trap - EXIT
kill -0 checks if a process is running. Note that it doesn't actually signal the process and kill it, as the name might suggest. Not with signal 0, at least.
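For example, with sleep standing in for a real task:
sleep 2 & pid=$!
kill -0 "$pid" 2>/dev/null && echo "process $pid is still running"
wait "$pid"
kill -0 "$pid" 2>/dev/null || echo "process $pid has exited"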
There are three steps involved in solving your problem:
Execute a command in the background, so it will keep running while your script does something else. You can do this by following the command with &. See the section on Job Control in the Bash Reference Manual for more details.
Keep track of that command's status, so you'll know if it is still running. You can do this with the special variable $!, which is set to the PID (process identifier) of the last command you ran in the background, or empty if no background command was started. Linux creates a directory /proc/$PID for every process that is running and deletes it when the process exits, so you can check for the existence of that directory to find out if the background command is still running. You can learn more than you ever wanted to know about /proc from the Linux Documentation Project's File System Hierarchy page or Advanced Bash-Scripting Guide.
Kill the background command if your script is killed. You can do this with the trap command, which is a bash builtin command.
Putting the pieces together:
# Look for the 4 common signals that indicate this script was killed.
# If the background command was started, kill it, too.
trap '[ -z "$!" ] || kill $!' SIGHUP SIGINT SIGQUIT SIGTERM
cp $SOURCE $DEST & # Copy the file in the background.
# The /proc directory exists while the command runs.
while [ -e /proc/$! ]; do
    echo -n "." # Do something while the background command runs.
    sleep 1     # Optional: slow the loop so we don't use up all the dots.
done
Note that we check the /proc directory to find out if the background command is still running, because kill -0 will generate an error if it's called when the process no longer exists.
Update to explain the use of trap:
The syntax is trap [arg] [sigspec …], where sigspec … is a list of signals to catch, and arg is a command to execute when any of those signals is raised. In this case, the command is a list:
'[ -z "$!" ] || kill $!'
This is a common bash idiom that takes advantage of the way || is processed. An expression of the form cmd1 || cmd2 will evaluate as successful if either cmd1 OR cmd2 succeeds. But bash is clever: if cmd1 succeeds, bash knows that the complete expression must also succeed, so it doesn't bother to evaluate cmd2. On the other hand, if cmd1 fails, the result of cmd2 determines the overall result of the expression. So an important feature of || is that it will execute cmd2 only if cmd1 fails. That means it's a shortcut for the (invalid) sequence:
if cmd1; then
    # do nothing
else
    cmd2
fi
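A valid way to spell out the same logic is to use the no-op builtin : in the empty branch:
if [ -z "$!" ]; then
    :           # background task was never started; nothing to do
else
    kill $!     # task was started; kill it
fi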
With that in mind, we can see that
trap '[ -z "$!" ] || kill $!' SIGHUP SIGINT SIGQUIT SIGTERM
will test whether $! is empty (which means the background task was never executed). If that fails, which means the task was executed, it kills the task.
Here is the simplest way to do that, using ps -p:
[command_1_to_execute] &
pid=$!
while ps -p $pid &>/dev/null; do
    [command_2_to_be_executed meanwhile command_1 is running]
    sleep 10
done
This will run command_2 every 10 seconds while command_1 is still running in the background.
Hope this helps you :)
What you want is to do two things at once in shell. The usual way to do that is with a job. You can start a background job by ending the command with an ampersand.
cp $SOURCE $DEST &
You can then use the jobs command to check its status.
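For example:
cp $SOURCE $DEST &   # start the copy as a background job
jobs                 # prints something like: [1]+  Running   cp $SOURCE $DEST &
jobs -p              # prints just the job's process ID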
Read more:
Gnu Bash Job Control
I have a bash script to test how a server performs under load.
num=1
if [ $# -gt 0 ]; then
    num=$1
fi
for i in {1 .. $num}; do
    (while true; do
        { time curl --silent 'http://localhost'; } 2>&1 | grep real
    done) &
done
wait
When I hit Ctrl-C, the main process exits, but the background loops keep running. How do I make them all exit? Or is there a better way of spawning a configurable number of logic loops executing in parallel?
Here's a simpler solution -- just add the following line at the top of your script:
trap "kill 0" SIGINT
Killing 0 sends the signal to all processes in the current process group.
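A minimal version of your script with the trap in place (sleep loops stand in for the curl loops):
#!/bin/bash
trap "kill 0" SIGINT
for i in 1 2 3; do
    while true; do sleep 1; done &
done
wait   # Ctrl-C now signals the whole process group, background loops included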
One way to kill subshells, but not self:
kill $(jobs -p)
Bit of a late answer, but for me solutions like kill 0 or kill $(jobs -p) go too far (kill all child processes).
If you just want to make sure one specific child-process (and its own children) are tidied up then a better solution is to kill by process group (PGID) using the sub-process' PID, like so:
set -m
./some_child_script.sh &
some_pid=$!
kill -- -${some_pid}
Firstly, the set -m command will enable job control (if it isn't enabled already). This is important: otherwise all commands, sub-shells, etc. will be assigned to the same process group as your parent script (unlike when you run the commands manually in a terminal), and kill will just give a "no such process" error. This needs to be called before you run the background command you wish to manage as a group (or just call it at script start if you have several).
Secondly, note that the argument to kill is negative; this indicates that you want to kill an entire process group. By default the process group ID is the same as the PID of the first command in the group, so we can get it by simply adding a minus sign in front of the PID we fetched with $!. If you need to get the process group ID in a more complex case, you will need to use ps -o pgid= ${some_pid}, then add the minus sign to that.
Lastly, note the use of the explicit end-of-options marker --; this is important, as otherwise the process group argument would be treated as an option (a signal number) and kill would complain it doesn't have enough arguments. You only need this if the process group argument is the first one you wish to terminate.
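For that more complex case, the lookup might look like this (pgid is a variable introduced here for illustration):
pgid=$(ps -o pgid= "${some_pid}" | tr -d ' ')   # look up the process group ID
kill -- "-${pgid}"                              # signal the entire group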
Here is a simplified example of a background timeout process, and how to cleanup as much as possible:
#!/bin/bash
# Use the overkill method in case we're terminated ourselves
trap 'kill $(jobs -p | xargs)' SIGINT SIGHUP SIGTERM EXIT
# Setup a simple timeout command (an echo)
set -m
{ sleep 3600; echo "Operation took longer than an hour"; } &
timeout_pid=$!
# Run our actual operation here
do_something
# Cancel our timeout
kill -- -${timeout_pid} >/dev/null 2>&1
wait -- -${timeout_pid} >/dev/null 2>&1
printf '' 2>&1
This should cleanly handle cancelling this simplistic timeout in all reasonable cases; the only case that can't be handled is the script being terminated immediately (kill -9), as it won't get a chance to clean up.
I've also added a wait followed by a no-op (printf ''); this is to suppress "terminated" messages that can be caused by the kill command. It's a bit of a hack, but it is reliable enough in my experience.
You need to use job control, which, unfortunately, is a bit complicated. If these are the only background jobs that you expect will be running, you can run a command like this one:
jobs \
| perl -ne 'print "$1\n" if m/^\[(\d+)\][+-]? +Running/;' \
| while read -r ; do kill %"$REPLY" ; done
jobs prints a list of all active jobs (running jobs, plus recently finished or terminated jobs), in a format like this:
[1] Running sleep 10 &
[2] Running sleep 10 &
[3] Running sleep 10 &
[4] Running sleep 10 &
[5] Running sleep 10 &
[6] Running sleep 10 &
[7] Running sleep 10 &
[8] Running sleep 10 &
[9]- Running sleep 10 &
[10]+ Running sleep 10 &
(Those are jobs that I launched by running for i in {1..10} ; do sleep 10 & done.)
perl -ne ... is me using Perl to extract the job numbers of the running jobs; you can obviously use a different tool if you prefer. You may need to modify this script if your jobs has a different output format; but the above output is from bash on Cygwin, so it's very likely identical to yours.
read -r reads a "raw" line from standard input, and saves it into the variable $REPLY. kill %"$REPLY" will be something like kill %1, which "kills" (sends an interrupt signal to) job number 1. (Not to be confused with kill 1, which would kill process number 1.) Together, while read -r ; do kill %"$REPLY" ; done goes through each job number printed by the Perl script, and kills it.
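To see the difference between a job number and a PID:
sleep 100 &
jobs        # shows something like: [1]+  Running   sleep 100 &
kill %1     # terminates job 1 (this sleep); kill 1 would target PID 1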
By the way, your for i in {1 .. $num} won't do what you expect, since brace expansion is handled before parameter expansion, so what you have is equivalent to for i in "{1" .. "$num}". (And you can't have white-space inside the brace expansion, anyway.) Unfortunately, I don't know of a clean alternative; I think you have to do something like for i in $(bash -c "echo {1..$num}"), or else switch to an arithmetic for-loop or whatnot.
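For example, two common alternatives (note that seq is an external utility, not a bash builtin):
# seq-based loop:
for i in $(seq 1 "$num"); do echo "iteration $i"; done
# arithmetic for-loop, pure bash:
for ((i = 1; i <= num; i++)); do echo "iteration $i"; done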
Also by the way, you don't need to wrap your while-loop in parentheses; & already causes the job to be run in a subshell.
Here's my eventual solution. I'm keeping track of the subshell process IDs using an array variable, and trapping the Ctrl-C signal to kill them.
declare -a subs #array of subshell pids
function kill_subs() {
    for pid in "${subs[@]}"; do
        kill $pid
    done
    exit 0
}
num=1
if [ $# -gt 0 ]; then
    num=$1
fi
for ((i=0; i < $num; i++)); do
    while true; do
        { time curl --silent 'http://localhost'; } 2>&1 | grep real
    done &
    subs[$i]=$! # grab the pid of the subshell
done
trap kill_subs 1 2 15
wait
While this is not an answer, I would just like to point out something which invalidates the selected one: using jobs or kill 0 might have unexpected results; in my case it killed unintended processes, which is not an option for me.
It has been highlighted in some of the answers, but I am afraid not with enough stress, or it has gone unconsidered:
"Bit of a late answer, but for me solutions like kill 0 or kill $(jobs -p) go too far (kill all child processes)."
"If these are the only background jobs that you expect will be running, you can run a command like this one:"