How to stop the execution of a bash command after N seconds?

I am writing a simple bash script that analyses the contents of a directory and invokes a specific command for each file. However, in some cases the execution takes longer than expected, and I would like to stop it after a certain number of seconds (or minutes) and move on to the next entry.
For now, the script works in the following way:
#!/bin/bash
for f in /home/Users/Desktop/fdroid_tests/destination_dir/*; do
    if [ -d "$f" ]; then
        qark --java "$f"
    fi
done
How can I solve it?

Besides just backgrounding the tasks, you may also be interested in the timeout command. For example, timeout 5s qark --java "$f" will run your qark application and signal it to exit after 5 seconds if it hasn't already exited.
You can combine the two answers as well (timeout 5s qark --java "$f" &) if it's okay to have all your instances of qark run simultaneously. If you do, you may also want to add a wait after the loop so that the script doesn't exit until all the qark instances finish or time out.
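A minimal sketch of that combined approach, applied to the loop from the question:
#!/bin/bash
# Each qark run is backgrounded and limited to 5 seconds; the final
# wait keeps the script alive until every instance finishes or times out.
for f in /home/Users/Desktop/fdroid_tests/destination_dir/*; do
    if [ -d "$f" ]; then
        timeout 5s qark --java "$f" &
    fi
done
wait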

In bash, it is possible to launch a process in the background by appending an ampersand:
qark --java "$f" &
You wait for five seconds:
sleep 5
You stop all running qark processes, using pkill:
pkill qark
(I didn't try this, but you can give it a try.)
Edit:
According to the comments added to this answer, this might be even better:
qark --java "$f" & pid=$!
sleep 5
kill $pid
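Folded into the loop from the question, that might look like this (an untested sketch; kill's error output is discarded in case qark has already exited):
for f in /home/Users/Desktop/fdroid_tests/destination_dir/*; do
    if [ -d "$f" ]; then
        qark --java "$f" & pid=$!
        sleep 5
        # no-op if qark already finished within the 5 seconds
        kill "$pid" 2> /dev/null
    fi
done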

Related

Can't terminate command from a different process

I have a command "command1" that runs indefinitely (must be killed with Ctrl+c), and that at random intervals outputs new lines to stdout. My goal is to run it and see if it outputs a certain "target" line within 10 seconds. If the target output is generated, stop immediately with success, otherwise wait for the 10 seconds and fail.
I came up with this:
timeout 10 bash -c '(while read line; do [[ "$line" == "target" ]] && break; done < <(command1))'
It works, but the problem is that when a match is found, although the timeout command completes and returns successfully, command1 will continue to run indefinitely as a background process. I need it to stop as well when "break" is executed. If a match is not found, and the timeout expires, command1 is stopped correctly.
I also tried this:
timeout 10 bash -c '(command1 | while read line; do [[ "$line" == "target" ]] && exit; done)'
Which does not leave any spurious processes running. The problem is that the exit command does not terminate command1 since it is in a separate process, and the timeout always expires even if the target is found before.
I was exploring some alternative options, such as wait -n, but the same problem persists, and I must use bash 4.2, so wait -n isn't even an option.
Any suggestions would be greatly appreciated.
If command1 does not terminate on its own, you can kill it manually.
By the way: Instead of while read ... you can use grep.
timeout 10 bash -c 'command1 | (grep -m1 -Fx "target"; pkill -P $PPID command1)'
-P $PPID ensures that only the command1 from this command is killed, and not some other command1 that might run in another shell at the same time.
This assumes that command1 is a single command, and not something like (cmd1; cmd2; ...). For that case, you could simply kill the whole bash process using kill $PPID.
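For that compound case, the variant the answer describes would look something like this (cmd1 and cmd2 are placeholders for your actual commands):
timeout 10 bash -c '(cmd1; cmd2) | (grep -m1 -Fx "target"; kill $PPID)'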
Found what works best for my case:
timeout 10 bash -c 'grep -q -m1 "target" <(command1); pkill -P $!'
All processes terminate gracefully when either the target is found or the timeout expires. If the target is found, the command returns 0; if not, it returns 124.
Thank you @Socowi for some very helpful hints that put me on the right track.

Sleep seems to run first in bash script

I need to terminate a script if it exceeds a specific duration (10 mins)
examplescript.sh &
pid=$!
sleep 600
if ['pgrep $pid']
then
    kill $pid
fi
When I tested it on my test environment, it seems working well. examplescript.sh runs first and if it runs for more than 10 mins, it will be terminated. However, when I tried in our production environment, it seems that sleep runs first. It waits 600s before running the examplescript.sh. Is there something wrong in the script?
There are multiple things you should correct in your code.
pgrep performs a regex search on process names, not PIDs. You can use kill -0 pid to check whether a process with that PID is running.
[ (test) is a command[1] and should be treated as one. That means each argument must be separated by spaces, and when using [ the last argument must be ]:
[ arg1 arg2 ]
In your example you won't need [ at all, since kill -0 exits successfully if the process is still running:
if kill -0 pid; then
And to wrap it up:
examplescript.sh &
pid=$!
sleep 600
if kill -0 "$pid" 2> /dev/null; then
    kill "$pid"
fi
kill -0 will write an error to stderr if the process is not running anymore. So we redirect that to /dev/null.
[1] It's usually a shell built-in these days.
Another thing to note is that your script will always run for 600 seconds, even if examplescript.sh only takes a few seconds to finish.
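If that matters, one sketch is to poll instead of sleeping for the full duration (the 5-second interval here is an arbitrary choice):
examplescript.sh &
pid=$!
for ((t = 0; t < 600; t += 5)); do
    kill -0 "$pid" 2> /dev/null || break  # already finished; stop waiting
    sleep 5
done
kill "$pid" 2> /dev/null  # no-op if the process is already gone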
Are your production machines significantly faster? I do not have an example script to actually run this on my machine, but I think your problem might be solved if you take the code you mention above,
examplescript.sh &
pid=$!
sleep 600
if ['pgrep $pid']
then
    kill $pid
fi
put it in a file called, say, monitor.sh and run that file in the background. i.e.
./monitor.sh &
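For completeness, a sketch of what monitor.sh could contain, with the fix from the answer above applied (kill -0 in place of the pgrep test):
#!/bin/bash
# monitor.sh -- launch the script, wait 600 seconds, then kill it
# if it is still running
examplescript.sh &
pid=$!
sleep 600
if kill -0 "$pid" 2> /dev/null; then
    kill "$pid"
fi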
Hope this helps.

Checking and killing hung background processes in a bash script

Say I have this pseudocode in bash
#!/bin/bash
things
for i in {1..3}
do
    nohup someScript[i] &
done
wait
for i in {4..6}
do
    nohup someScript[i] &
done
wait
otherThings
and say this someScript[i] sometimes ends up hanging.
Is there a way I can take the process IDs (with $!) and periodically check whether a process has been running longer than a specified amount of time, after which I want to kill the hung processes with kill -9?
Unfortunately the answer from @Eugeniu did not work for me; timeout gave an error.
However, I found the following routine useful. I'll post it here so anyone facing the same problem can take advantage of it.
Create another script which goes like this
#!/bin/bash
# monitor.sh
pid=$1
counter=10
while ps -p "$pid" > /dev/null
do
    if [[ $counter -le 0 ]]; then
        # if it's still there after the countdown, kill it
        kill -9 "$pid"
    fi
    counter=$((counter - 1))
    sleep 1
done
then in the main script you just put
things
for i in {1..3}
do
    nohup someScript[i] &
    ./monitor.sh $! &
done
wait
This way, each of your someScripts gets a parallel process that checks at the chosen interval whether it is still running (up to the maximum time set by the counter), and that quits by itself when the associated process finishes (or gets killed).
One possible approach:
#!/bin/bash
# things
mypids=()
for i in {1..3}; do
    # launch the script with a timeout (3600s)
    timeout 3600 nohup someScript[i] &
    mypids[i]=$! # store the PID
done
wait "${mypids[@]}"

Shell Script: kill program after set time if output file is empty

I'm running a command from within terminal that goes through a directory checking for media files and then extracts closed captioning data from those files. Unfortunately, even if there is no closed captioning data present, the program still processes the entire file and this can take a long time. What I would like to be able to do is check the output after 60 seconds and look for data, and if the file is empty, terminate the process and move on to the next file.
My old command is as follows
for i in */*.vob
do
    /home/me/ccextractor/linux/ccextractor -out=srt -utf8 -trim "$i"
done
I've been experimenting with sleep but I can't seem to get it working. Any suggestions?
SOLUTION
With help from the answers below (take note of the comments as well), my final working code is:
for i in */*.vob
do
    /home/me/ccextractor/linux/ccextractor -out=srt -utf8 -trim "$i" &
    pid=$!
    sleep 15
    srtfile=$(expr "${i}" | sed -r 's/.{4}$//') # strip the ".vob" extension
    fgrep -q "1" "${srtfile}.srt" || kill $pid
    wait $pid
done
You can have CCExtractor exit after processing one minute of video; there's no need to kill the process or control it separately.
ccextractor -endat 1:00 [etc]
will do what you want.
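Folded into the original loop, that could look like this (the flag placement is an assumption; check ccextractor's help output for your build):
for i in */*.vob
do
    # stop extracting after the first minute of each file
    /home/me/ccextractor/linux/ccextractor -endat 1:00 -out=srt -utf8 -trim "$i"
done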
Assuming that ccextractor works fine in the background (doesn't require input or a tty), then try:
for i in */*.vob
do
    /home/me/ccextractor/linux/ccextractor -out=srt -utf8 -trim "$i" &
    pid=$!
    sleep 60
    [ -s "${i}_1" ] || kill $pid
    wait $pid
done
where I have also assumed that the output file that ccextractor creates has _1 appended to the name of the input file.
[ -s "${i}_1" ] tests to see if the output file exists and has size greater than zero. If that is false, then the "or" condition is run and the process is killed.
wait $pid causes the shell to wait for one ccextractor to exit before starting another. If you want to run them all in parallel, remove this line.
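Note that removing it alone still leaves the sleep 60 serializing the loop; for fully concurrent processing, one sketch is to background each whole iteration instead:
for i in */*.vob
do
    (
        /home/me/ccextractor/linux/ccextractor -out=srt -utf8 -trim "$i" &
        pid=$!
        sleep 60
        [ -s "${i}_1" ] || kill $pid
        wait $pid
    ) &
done
wait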

How do I terminate all the subshell processes?

I have a bash script to test how a server performs under load.
num=1
if [ $# -gt 0 ]; then
    num=$1
fi
for i in {1 .. $num}; do
    (while true; do
        { time curl --silent 'http://localhost'; } 2>&1 | grep real
    done) &
done
wait
When I hit Ctrl-C, the main process exits, but the background loops keep running. How do I make them all exit? Or is there a better way of spawning a configurable number of logic loops executing in parallel?
Here's a simpler solution -- just add the following line at the top of your script:
trap "kill 0" SIGINT
Killing 0 sends the signal to all processes in the current process group.
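A minimal self-contained demo (press Ctrl-C and the loops die together with the script):
#!/bin/bash
# With the trap installed, SIGINT signals the whole process group.
trap "kill 0" SIGINT
for i in 1 2 3; do
    (while true; do sleep 1; done) &
done
wait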
One way to kill subshells, but not self:
kill $(jobs -p)
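If, as in the question, this should happen on Ctrl-C, it can be wrapped in a trap, e.g.:
trap 'kill $(jobs -p)' SIGINT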
Bit of a late answer, but for me solutions like kill 0 or kill $(jobs -p) go too far (kill all child processes).
If you just want to make sure one specific child-process (and its own children) are tidied up then a better solution is to kill by process group (PGID) using the sub-process' PID, like so:
set -m
./some_child_script.sh &
some_pid=$!
kill -- -${some_pid}
Firstly, the set -m command enables job control (if it isn't already enabled). This is important: otherwise all commands, sub-shells etc. are assigned to the same process group as your parent script (unlike when you run the commands manually in a terminal), and kill will just give a "no such process" error. It needs to be called before you run the background command you wish to manage as a group (or just call it at script start if you have several).
Secondly, note that the argument to kill is negative; this indicates that you want to kill an entire process group. By default the process group ID is the same as the PID of the first command in the group, so we can get it by simply adding a minus sign in front of the PID we fetched with $!. If you need to get the process group ID in a more complex case, you will need to use ps -o pgid= ${some_pid}, then add the minus sign to that.
Lastly, note the use of the explicit end-of-options marker --. This is important, as otherwise the process group argument would be treated as an option (a signal number), and kill would complain it doesn't have enough arguments. You only need this if the process group argument is the first one you wish to terminate.
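For that more complex case, a sketch using ps -o pgid= (assuming set -m is already in effect, as above):
./some_child_script.sh &
some_pid=$!
pgid=$(ps -o pgid= "${some_pid}")
# arithmetic expansion strips the whitespace padding that ps adds
kill -- "-$((pgid))"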
Here is a simplified example of a background timeout process, and how to cleanup as much as possible:
#!/bin/bash
# Use the overkill method in case we're terminated ourselves
trap 'kill $(jobs -p | xargs)' SIGINT SIGHUP SIGTERM EXIT
# Setup a simple timeout command (an echo)
set -m
{ sleep 3600; echo "Operation took longer than an hour"; } &
timeout_pid=$!
# Run our actual operation here
do_something
# Cancel our timeout
kill -- -${timeout_pid} >/dev/null 2>&1
wait -- -${timeout_pid} >/dev/null 2>&1
printf '' 2>&1
This should cleanly handle cancelling this simplistic timeout in all reasonable cases; the only case that can't be handled is the script being killed outright (kill -9), as it won't get a chance to clean up.
I've also added a wait followed by a no-op (printf '') to suppress the "Terminated" messages that the kill command can cause. It's a bit of a hack, but reliable enough in my experience.
You need to use job control, which, unfortunately, is a bit complicated. If these are the only background jobs that you expect will be running, you can run a command like this one:
jobs \
| perl -ne 'print "$1\n" if m/^\[(\d+)\][+-]? +Running/;' \
| while read -r ; do kill %"$REPLY" ; done
jobs prints a list of all active jobs (running jobs, plus recently finished or terminated jobs), in a format like this:
[1] Running sleep 10 &
[2] Running sleep 10 &
[3] Running sleep 10 &
[4] Running sleep 10 &
[5] Running sleep 10 &
[6] Running sleep 10 &
[7] Running sleep 10 &
[8] Running sleep 10 &
[9]- Running sleep 10 &
[10]+ Running sleep 10 &
(Those are jobs that I launched by running for i in {1..10} ; do sleep 10 & done.)
perl -ne ... is me using Perl to extract the job numbers of the running jobs; you can obviously use a different tool if you prefer. You may need to modify this script if your jobs builtin has a different output format; but the above is what I get on Cygwin, so it's very likely identical to yours.
read -r reads a "raw" line from standard input, and saves it into the variable $REPLY. kill %"$REPLY" will be something like kill %1, which "kills" (sends an interrupt signal to) job number 1. (Not to be confused with kill 1, which would kill process number 1.) Together, while read -r ; do kill %"$REPLY" ; done goes through each job number printed by the Perl script, and kills it.
By the way, your for i in {1 .. $num} won't do what you expect, since brace expansion is handled before parameter expansion, so what you have is equivalent to for i in "{1" .. "$num}". (And you can't have white-space inside the brace expansion, anyway.) Unfortunately, I don't know of a clean brace-expansion alternative; I think you have to do something like for i in $(bash -c "echo {1..$num}"), or else switch to an arithmetic for-loop or whatnot.
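For illustration, two alternatives that do work (seq is an external tool not mentioned above; the arithmetic loop is the one alluded to):
num=5
for i in $(seq 1 "$num"); do echo "$i"; done     # external seq
for ((i = 1; i <= num; i++)); do echo "$i"; done # arithmetic for-loop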
Also by the way, you don't need to wrap your while-loop in parentheses; & already causes the job to be run in a subshell.
Here's my eventual solution. I'm keeping track of the subshell process IDs using an array variable, and trapping the Ctrl-C signal to kill them.
declare -a subs # array of subshell pids
function kill_subs() {
    for pid in "${subs[@]}"; do
        kill $pid
    done
    exit 0
}
num=1
if [ $# -gt 0 ]; then
    num=$1
fi
for ((i = 0; i < num; i++)); do
    while true; do
        { time curl --silent 'http://localhost'; } 2>&1 | grep real
    done &
    subs[$i]=$! # grab the pid of the subshell
done
trap kill_subs 1 2 15
wait
While this is not an answer, I would just like to point out something which invalidates the selected one: using jobs or kill 0 might have unexpected results; in my case it killed unintended processes, which is not an option for me.
It has been highlighted in some of the answers, but I am afraid not stressed enough, or not taken into account:
"Bit of a late answer, but for me solutions like kill 0 or kill $(jobs -p) go too far (kill all child processes)."
"If these are the only background jobs that you expect will be running, you can run a command like this one:"
