In bash: processing every command line without using the debug trap?

I have a complicated mechanism built into my bash environment that requires the execution of a couple of scripts when the prompt is generated, but also when the user hits enter to begin processing a command. I'll give an oversimplified description:
The debug trap does this in a fairly limited way: it fires every time a statement is executed.
trap 'echo $BASH_COMMAND' DEBUG # example
Unfortunately, this means that when I type this:
sleep 1; sleep 2; sleep 3
rather than processing a $BASH_COMMAND that contains the entire line, I get the three sleeps in three different traps. Worse yet:
sleep 1 | sleep 2 | sleep 3
fires all three as the pipeline is set up, before sleep 1 even starts executing; the output might lead you to believe that sleep 3 is running.
I need a way to execute a script right at the beginning, processing the entire command, and I'd rather it not fire when the prompt command is run, but I can deal with that if I must.
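For reference, a minimal sketch of the direction I'm leaning (the preexec name and the arming variable are mine; it assumes command history is enabled): arm the DEBUG trap once per command line and re-arm it from PROMPT_COMMAND, so the hook fires a single time with the entire line.
preexec_armed=
preexec() {
    # skip the prompt command itself
    [ "$BASH_COMMAND" = "$PROMPT_COMMAND" ] && return
    # fire only once per entered line
    [ -n "$preexec_armed" ] || return
    preexec_armed=
    # history 1 holds the entire line just entered - pipes, semicolons and all
    local line
    line=$(HISTTIMEFORMAT= history 1 | sed 's/^ *[0-9]* *//')
    echo "about to run: $line"
}
trap 'preexec' DEBUG
PROMPT_COMMAND='preexec_armed=1'   # re-arm after each prompt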

There's a major problem with this solution: commands with pipes (|) will finish executing the trap, but backgrounding a process during the trap will cause the processing of the command to freeze - you'll never get a prompt back without hitting ^C. The trap completes, but $PROMPT_COMMAND never runs. This problem persists even if you disown the process immediately after backgrounding it.
This wound up being a little more interesting than I expected:
LOGFILE=~/logfiles/$BASHPID

start_timer() {
    if [ ! -e "$LOGFILE" ]; then
        # You may have to adjust this to fit your history output format:
        CMD=$(history | tail -1 | tr -s " " | cut -f2-1000 -d" ")
        # timer2 keeps updating the status line with how long the cmd has been running
        timer2 -p "$PROMPT_BNW $CMD" -u -q & echo $! > "$LOGFILE"
    fi
}

stop_timer() {
    # Unfortunately, killing a process always prints that nasty confirmation line,
    # and you can't silence it by redirecting stdout and stderr to /dev/null, so you
    # have to disown the process before killing it.
    disown $(cat "$LOGFILE")
    kill -9 $(cat "$LOGFILE")
    rm -f "$LOGFILE"
}
trap 'start_timer' DEBUG
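The snippet never shows where stop_timer is invoked; given the warning above about $PROMPT_COMMAND never running, the missing wiring is presumably along these lines:
PROMPT_COMMAND='stop_timer'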

Related

Kill bash command when line is found

I want to kill a bash command when I found some string in the output.
To clarify, I want the solution to be similar to a timeout command:
timeout 10s looping_program.sh
Which will execute the script looping_program.sh and kill it after 10 seconds of execution.
Instead I want something like:
regexout "^Success$" looping_program.sh
Which will execute the script until it matches a line that just says Success in the stdout of the program.
Note that I'm assuming that this looping_program.sh does not exit at the same time it outputs Success for whatever reason, so simply waiting for the program to exit would waste time if I don't care about what happens after that.
So something like:
bash -e looping_program.sh > /tmp/output &
PID="$(ps aux | grep looping_program.sh | head -1 | tr -s ' ' | cut -f 2 -d ' ')"
echo $PID
while :; do
    echo "$(tail -1 /tmp/output)"
    if [[ "$(tail -1 /tmp/output)" == "Success" ]]; then
        kill $PID
        exit 0
    fi
    sleep 1
done
Where looping_program.sh is something like:
echo "Fail"
sleep 1;
echo "Fail"
sleep 1;
echo "Fail"
sleep 1;
echo "Success"
sleep 1;
echo "Fail"
sleep 1;
echo "Fail"
sleep 1;
echo "Fail"
sleep 1;
But that is not very robust (uses a single tmp file... might kill other programs...) and I want it to just be one command. Does something like this exist? I may just write a c program to do it if not.
P.S.: I provided my code as an example of what I wanted the program to do. It does not use good programming practices. Notes from other commenters:
@KamilCuk Do not use temporary file. Use a fifo.
@pjh Note that any approach that involves using kill with a PID in shell code runs the risk of killing the wrong process. Use kill in shell programs only when it is absolutely necessary.
There are more suggestions below from other users, I just wanted to make sure no one came across this and thought it would be good to model their code after.
looping_program() {
    for i in 1 2 3; do echo $i; sleep 1; done
    echo Success
    yes
}

coproc looping_program

while IFS= read -r line; do
    if [[ "$line" =~ Success ]]; then
        break
    fi
done <&${COPROC[0]}

exec {COPROC[0]}>&- {COPROC[1]}>&-
kill ${COPROC_PID}
wait ${COPROC_PID}
Notes:
Do not use a temporary file. Use a fifo.
Do not use tail -n1 to read the last line. Read from the stream in a loop.
Do not run tail -1 twice. Cache the result.
Wait for the pid after killing it, to synchronize.
When you're using a coprocess, use COPROC_PID to get the PID.
When you're not using a coprocess, use $! to get the PID of a background process started from the current shell.
When you can't use $! (because the process you're trying to get a PID of was not spawned in the background as a direct child of the current shell), do not use ps aux | grep to get the pid. Use pgrep.
Do not use echo $(stuff). Just run the stuff, no echo.
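Pulling those notes together, a hedged sketch of the regexout command the asker wished for (the name comes from the question; assumes bash 4+ for coproc):
regexout() {
    local pattern=$1; shift
    coproc { "$@" 2>&1; }               # run the program as a coprocess
    local pid=$COPROC_PID line
    while IFS= read -r line <&"${COPROC[0]}"; do
        printf '%s\n' "$line"           # pass the output through
        [[ $line =~ $pattern ]] && break
    done
    exec {COPROC[0]}<&- {COPROC[1]}>&-  # close our ends of the pipe
    kill "$pid" 2>/dev/null
    wait "$pid" 2>/dev/null             # synchronize after the kill
}
# usage: regexout '^Success$' ./looping_program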
With expect
#!/usr/bin/env -S expect -f
set timeout -1
spawn ./looping_program.sh
expect "Success"
send -- "\x03"
expect eof
Call it looping_killer:
$ ./looping_killer
spawn ./looping_program.sh
Fail
Fail
Fail
Success
^C
To pass the program and pattern:
./looping_killer some_program "some pattern"
You'd change the expect script to
#!/usr/bin/env -S expect -f
set timeout -1
spawn [lindex $argv 0]
expect -- [lindex $argv 1]
send -- "\x03"
expect eof
Assuming that your looping program exits when it tries to write to a broken pipe, this will print all output up to and including the 'Success' line and then exit:
./looping_program | sed '/^Success$/q'
You may need to disable buffering of the looping program output. See Force line-buffering of stdout in a pipeline and How to make output of any shell command unbuffered? for ways to do it.
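For example, with stdbuf from GNU coreutils (this works when the program buffers its output via stdio):
stdbuf -oL ./looping_program | sed '/^Success$/q'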
See Should I save my scripts with the .sh extension? and Erlkonig: Commandname Extensions Considered Harmful for reasons why I dropped the '.sh' suffix.
Note that any approach that involves using kill with a PID in shell code runs the risk of killing the wrong process. Use kill in shell programs only when it is absolutely necessary.

Quit less when pipe closes

As part of a bash script, I want to run a program repeatedly, and redirect the output to less. The program has an interactive element, so the goal is that when you exit the program via the window's X button, it is restarted via the script. This part works great, but when I use a pipe to less, the program does not automatically restart until I go to the console and press q. The relevant part of the script:
while :
do
program | less
done
I want to make less quit itself when the pipe closes, so that the program restarts without any user intervention. (That way it behaves just as if the pipe was not there, except while the program is running you can consult the console to view the output of the current run.)
Alternative solutions to this problem are also welcome.
Instead of exiting less, could you simply aggregate the output of each run of program?
while :
do
program
done | less
Having less exit when program exits would be at odds with one useful feature of less: it can buffer the output of a program that exits before you finish reading its output.
UPDATE: Here's an attempt at using a background process to kill less when it is time. It assumes that the only program reading the output file is the less to kill.
while :
do
    ( program > /tmp/$$-program-output; kill $(lsof -Fp /tmp/$$-program-output | cut -c2-) ) &
    less /tmp/$$-program-output
done
program writes its output to a file. Once it exits, the kill command uses lsof to find out what process is reading the file, then kills it. Note that there is a race condition; less needs to start before program exits. If that's a problem, it can probably be worked around, but I'll avoid cluttering the answer otherwise.
You may try to kill the process group that program and less belong to, instead of using kill and lsof.
#!/bin/bash
trap 'kill 0' EXIT
while :
do
# script command gives sh -c own process group id (only sh -c cmd gets killed, not entire script!)
# FreeBSD script command
script -q /dev/null sh -c '(trap "kill -HUP -- -$$" EXIT; echo hello; sleep 5; echo world) | less -E -c'
# GNU script command
#script -q -c 'sh -c "(trap \"kill -HUP -- -$$\" EXIT; echo hello; sleep 5; echo world) | less -E -c"' /dev/null
printf '\n%s\n\n' "you now may ctrl-c the program: $0" 1>&2
sleep 3
done
While I agree with chepner's suggestion, if you really want individual less instances, I think this item from the man page will help you:
-e or --quit-at-eof
Causes less to automatically exit the second time it reaches end-of-file. By default,
the only way to exit less is via the "q" command.
-E or --QUIT-AT-EOF
Causes less to automatically exit the first time it reaches end-of-file.
You would make this option visible to less via the LESS environment variable:
export LESS="-E"
while : ; do
    program | less
done
IHTH

Quit from pipe in bash

For following bash statement:
tail -Fn0 /tmp/report | while [ 1 ]; do echo "pre"; exit; echo "past"; done
I got "pre", but didn't quit to the bash prompt, then if I input something into /tmp/report, I could quit from this script and get into bash prompt.
I think that's reasonable. the 'exit' make the 'while' statement quit, but the 'tail' still alive. If something input into /tmp/report, the 'tail' will output to pipe, then 'tail' will detect the pipe is close, then 'tail' quits.
Am I right? If not, would anyone provide a correct interpretation?
Is it possible to add anything into 'while' statement to quit from the whole pipe statement immediately? I know I could save the pid of tail into a temporary file, then read this file in the 'while', then kill the tail. Is there a simpler way?
Let me enlarge my question. If use this tail|while in a script file, is it possible to fulfill following items simultaneously?
a. If Ctrl-C is inputed or signal the main shell process, the main shell and various subshells and background processes spawned by the main shell will quit
b. I could quit from tail|while only at a trigger case, and preserve other subprocesses keep running
c. It's better not use temporary file or pipe file.
You're correct. The while loop is executing in a subshell because it is part of a pipeline, and exit just exits from that subshell.
If you're running bash 4.x, you may be able to achieve what you want with a coprocess.
coproc TAIL { tail -Fn0 /tmp/report.txt ;}
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <&${TAIL[0]}
kill $TAIL_PID
http://www.gnu.org/software/bash/manual/html_node/Coprocesses.html
With older versions, you can use a background process writing to a named pipe:
pipe=/tmp/tail.$$
mkfifo $pipe
tail -Fn0 /tmp/report.txt >$pipe &
TAIL_PID=$!
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <$pipe
kill $TAIL_PID
rm $pipe
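If the script is interrupted between mkfifo and rm, the fifo and the background tail are left behind. A hedged addition, placed right after TAIL_PID is captured, cleans up both on exit:
trap 'kill $TAIL_PID 2>/dev/null; rm -f $pipe' EXIT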
You can (unreliably) get away with killing the process group:
tail -Fn0 /tmp/report | while :
do
    echo "pre"
    sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
    echo "past"
done
This may send the signal to more processes than you want. If you run the above command in an interactive terminal you should be okay, but in a script it is entirely possible (indeed likely) that the process group will include the script running the command. To avoid signaling the script itself, it would be wise to enable monitoring and run the pipeline in the background, to ensure that a new process group is formed for the pipeline:
#!/bin/sh
# In POSIX shells that support the User Portability Utilities option
# (this includes bash & ksh), executing "set -m" turns on job control.
# Background processes run in a separate process group. If the shell
# is interactive, a line containing their exit status is printed to
# stderr upon their completion.
set -m
tail -Fn0 /tmp/report | while :
do
    echo "pre"
    sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
    echo "past"
done &
wait
Note that I've replaced the while [ 1 ] with while : because while [ 1 ] is poor style. (It behaves exactly the same as while [ 0 ]).

How do I terminate all the subshell processes?

I have a bash script to test how a server performs under load.
num=1
if [ $# -gt 0 ]; then
    num=$1
fi
for i in {1 .. $num}; do
    (while true; do
        { time curl --silent 'http://localhost'; } 2>&1 | grep real
    done) &
done
wait
When I hit Ctrl-C, the main process exits, but the background loops keep running. How do I make them all exit? Or is there a better way of spawning a configurable number of logic loops executing in parallel?
Here's a simpler solution -- just add the following line at the top of your script:
trap "kill 0" SIGINT
Killing 0 sends the signal to all processes in the current process group.
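Applied to the question, a minimal sketch (the loop body is simplified to a sleep):
#!/bin/bash
trap "kill 0" SIGINT
for i in 1 2 3; do
    while true; do sleep 1; done &
done
wait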
One way to kill subshells, but not self:
kill $(jobs -p)
Bit of a late answer, but for me solutions like kill 0 or kill $(jobs -p) go too far (kill all child processes).
If you just want to make sure one specific child process (and its own children) is tidied up, then a better solution is to kill by process group (PGID) using the sub-process's PID, like so:
set -m
./some_child_script.sh &
some_pid=$!
kill -- -${some_pid}
Firstly, the set -m command will enable job management (if it isn't already enabled). This is important: otherwise all commands and sub-shells will be assigned to the same process group as your parent script (unlike when you run the commands manually in a terminal), and kill will just give a "no such process" error. This needs to be called before you run the background command you wish to manage as a group (or just call it at script start if you have several).
Secondly, note that the argument to kill is negative; this indicates that you want to kill an entire process group. By default the process group ID is the same as the PID of the first command in the group, so we can get it by simply adding a minus sign in front of the PID we fetched with $!. If you need to get the process group ID in a more complex case, you will need to use ps -o pgid= ${some_pid}, then add the minus sign to that.
Lastly, note the use of the explicit end of options --; this is important, as otherwise the process group argument would be treated as an option (a signal number) and kill would complain that it doesn't have enough arguments. You only need this if the process group argument is the first one you wish to terminate.
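For that more complex case, a hedged helper (spelled with -p for portability):
pgid=$(ps -o pgid= -p "${some_pid}" | tr -d ' ')
kill -- "-${pgid}"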
Here is a simplified example of a background timeout process, and how to cleanup as much as possible:
#!/bin/bash
# Use the overkill method in case we're terminated ourselves
trap 'kill $(jobs -p | xargs)' SIGINT SIGHUP SIGTERM EXIT
# Setup a simple timeout command (an echo)
set -m
{ sleep 3600; echo "Operation took longer than an hour"; } &
timeout_pid=$!
# Run our actual operation here
do_something
# Cancel our timeout
kill -- -${timeout_pid} >/dev/null 2>&1
wait -- -${timeout_pid} >/dev/null 2>&1
printf '' 2>&1
This should cleanly handle cancelling this simplistic timeout in all reasonable cases; the only case that can't be handled is the script being terminated immediately (kill -9), as it won't get a chance to clean up.
I've also added a wait, followed by a no-op (printf ''); this is to suppress "terminated" messages that can be caused by the kill command. It's a bit of a hack, but reliable enough in my experience.
You need to use job control, which, unfortunately, is a bit complicated. If these are the only background jobs that you expect will be running, you can run a command like this one:
jobs \
| perl -ne 'print "$1\n" if m/^\[(\d+)\][+-]? +Running/;' \
| while read -r ; do kill %"$REPLY" ; done
jobs prints a list of all active jobs (running jobs, plus recently finished or terminated jobs), in a format like this:
[1] Running sleep 10 &
[2] Running sleep 10 &
[3] Running sleep 10 &
[4] Running sleep 10 &
[5] Running sleep 10 &
[6] Running sleep 10 &
[7] Running sleep 10 &
[8] Running sleep 10 &
[9]- Running sleep 10 &
[10]+ Running sleep 10 &
(Those are jobs that I launched by running for i in {1..10} ; do sleep 10 & done.)
perl -ne ... is me using Perl to extract the job numbers of the running jobs; you can obviously use a different tool if you prefer. You may need to modify this script if your jobs has a different output format; but the above output is from bash on Cygwin, so it's very likely identical to yours.
read -r reads a "raw" line from standard input and saves it into the variable $REPLY. kill %"$REPLY" will be something like kill %1, which "kills" (sends a termination signal to) job number 1. (Not to be confused with kill 1, which would kill process number 1.) Together, while read -r ; do kill %"$REPLY" ; done goes through each job number printed by the Perl script and kills it.
By the way, your for i in {1 .. $num} won't do what you expect, since brace expansion is handled before parameter expansion, so what you have is equivalent to for i in "{1" .. "$num}". (And you can't have white-space inside the brace expansion, anyway.) Unfortunately, I don't know of a clean alternative; I think you have to do something like for i in $(bash -c "echo {1..$num}"), or else switch to an arithmetic for-loop or whatnot.
Also by the way, you don't need to wrap your while-loop in parentheses; & already causes the job to be run in a subshell.
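For example, a hedged rewrite of the asker's loop header using the arithmetic form:
for ((i = 1; i <= num; i++)); do
    (while true; do
        { time curl --silent 'http://localhost'; } 2>&1 | grep real
    done) &
done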
Here's my eventual solution. I'm keeping track of the subshell process IDs using an array variable, and trapping the Ctrl-C signal to kill them.
declare -a subs # array of subshell pids

function kill_subs() {
    for pid in "${subs[@]}"; do
        kill $pid
    done
    exit 0
}

num=1
if [ $# -gt 0 ]; then
    num=$1
fi

for ((i=0; i < $num; i++)); do
    while true; do
        { time curl --silent 'http://localhost'; } 2>&1 | grep real
    done &
    subs[$i]=$! # grab the pid of the subshell
done

trap kill_subs 1 2 15

wait
While this is not an answer, I would just like to point out something that invalidates the selected one: using jobs or kill 0 might have unexpected results; in my case it killed unintended processes, which was not acceptable.
It has been highlighted in some of the answers, but I am afraid not with enough stress, or it has not been considered:
"Bit of a late answer, but for me solutions like kill 0 or kill $(jobs -p) go too far (kill all child processes)."
"If these are the only background jobs that you expect will be running, you can run a command like this one:"

How to suppress Terminated message after killing in bash?

How can you suppress the Terminated message that comes up after you kill a process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. (Learn more about $! here.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect so standard error is pointing to /dev/null but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone.
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside over the first one is that it saves a sub-shell invocation, while being more complicated and possibly even altering the behavior of the script, if the script alters file descriptors.
EDIT: For a more appropriate answer, check the answer given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
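A minimal sketch of that suggestion (disown removes the job from the shell's job table, so no status line is printed when the process dies):
sleep 30 &
disown $!   # the shell no longer tracks this job
kill $!     # no "Terminated" message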
The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh

foo() {
    trap 'exit 0' TERM ## here is the key
    while true; do sleep 1; done
}

echo before child
ps aux | grep 'test\.s[h]\|slee[p]'

foo &
pid=$!
sleep 1 # wait until the trap is installed

echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'

kill $pid ## no need to redirect stdout/stderr
sleep 1 # wait until the kill is done

echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here in brackets) you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the pid of your background process back to the current shell if you want to check whether it has terminated, or to evaluate its return code.
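One hedged way to get that pid back out; note the redirection, without which the command substitution would sit waiting for the background sleep to release its stdout:
pid=$( set +m; sleep 30 > /dev/null 2>&1 & echo $! )
# later, from the current shell:
kill "$pid"   # silent: the process was never a job of this shell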
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only silently be killed by sending a ctrl-c (SIGINT) instead of a SIGTERM (default).
disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt
Had success with adding 'jobs 2>&1 >/dev/null' to the script; not certain if it will help anyone else's script, but here is a sample.
while true; do echo $RANDOM; done | while read line
do
    echo Random is $line the last jobid is $(jobs -lp)
    jobs 2>&1 >/dev/null
    sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output:
function killCmd() {
    kill $1
}

killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal, e.g.:
{ kill -9 $PID; } 2>/dev/null
