Ending timestamp not printing in shell script when using trap - shell

I have a shell script I use for deployments. Since I want to capture the output of the entire process, I've wrapped the body in a subshell and tee its output to a log:
#! /usr/bin/env ksh
# deploy.sh
########################################################################
(yadda, yadda, yadda)
########################################################################
# LOGGING WRAPPER
#
dateFormat=$(date +"%Y.%m.%d-%H.%M.%S")
(
print -n "EXECUTING: $0 $*: "
date
#
########################################################################
(yadda, yadda, yadda)
#
# Tail Startup
#
trap 'printf "Stopping Script: "; date; exit 0' INT
print "TAILING LOG: YOU MAY STOP THIS WITH A CTRL-C WHEN YOU SEE THAT SERVER HAS STARTED"
sleep 2
./tailLog.sh
) 2>&1 | tee "deployment.$dateFormat.log"
#
########################################################################
Before I employed the subshell, the trap command worked: when you pressed Ctrl-C, the program would print "Stopping Script: " and the date.
However, I wanted to make sure that no one forgets to save the output of this script, so I employed the subshell to automatically save the output. And now the trap doesn't seem to be working.
What am I doing wrong?
NEW INFORMATION
A little more playing around. I now see the issue isn't the shell or subshell. It's the damn pipe!
If I don't pipe the output to tee, the trap works fine. If I pipe the output to tee, the trap doesn't work.
So, the real question is how do I tee the output and still be able to use trap?
TEST PROGRAM
Before you answer, please, please, try these test programs:
#! /bin/ksh
dateFormat=$(date +"%Y.%m.%d-%H:%M:%S")
(
trap 'printf "The script was killed at: %s\n" "$(date)"' SIGINT
echo "$0 $*"
while sleep 2
do
print -n "The time is now "
date
done
) | tee somefile
And
#! /bin/ksh
dateFormat=$(date +"%Y.%m.%d-%H:%M:%S")
(
trap 'printf "The script was killed at: %s\n" "$(date)"' SIGINT
echo "$0 $*"
while sleep 2
do
print -n "The time is now "
date
done
)
The top one pipes to somefile; the bottom one doesn't. In the bottom one, the trap works; in the top one, it doesn't. See if you can get the pipe to work and the "The script was killed at" line to print into the teed-out file.
The pipe does work. The trap doesn't, but only when I have the pipe. You can move the trap statement all around and put in layers and layers of subshells. There's some minor thing I am doing wrong, and I have no idea what it is.

Since the trap stops the running process (tailLog.sh), I think the pipe doesn't get executed at all. You can't do it this way.
One solution could be editing tailLog.sh to write line by line to your log file. Maybe you could post it and we can discuss how you manage it.
OK, now I've got it. You have to use tee with -i to ignore interrupt signals.
#! /bin/ksh
dateFormat=$(date +"%Y.%m.%d-%H:%M:%S")
(
trap 'printf "The script was killed at: %s\n" "$(date)"' SIGINT
echo "$0 $*"
while sleep 2
do
print -n "The time is now "
date
done
) | tee -i somefile
This one works fine! Ctrl-C sends SIGINT to the entire foreground process group, which includes tee; without -i, tee dies along with the loop, so the trap's final output has no reader. With -i, tee ignores the SIGINT and is still reading the pipe when the trap prints its message.
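For completeness, here is a minimal sketch of the same fix applied to the logging wrapper from the question (my reconstruction; the while loop stands in for ./tailLog.sh):
#! /usr/bin/env ksh
dateFormat=$(date +"%Y.%m.%d-%H.%M.%S")
(
trap 'printf "Stopping Script: "; date; exit 0' INT
print "TAILING LOG: YOU MAY STOP THIS WITH A CTRL-C"
while sleep 2; do date; done   # stand-in for ./tailLog.sh
) 2>&1 | tee -i "deployment.$dateFormat.log"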

Related

In bash: processing every command line without using the debug trap?

I have a complicated mechanism built into my bash environment that requires the execution of a couple scripts when the prompt is generated, but also when the user hits enter to begin processing a command. I'll give an oversimplified description:
The debug trap does this in a fairly limited way: it fires every time a statement is executed.
trap 'echo $BASH_COMMAND' DEBUG # example
Unfortunately, this means that when I type this:
sleep 1; sleep 2; sleep 3
rather than processing a $BASH_COMMAND that contains the entire line, I get the three sleeps in three different traps. Worse yet:
sleep 1 | sleep 2 | sleep 3
fires all three traps as the pipeline is set up. Before sleep 1 even starts executing, the output might lead you to believe that sleep 3 is running.
I need a way to execute a script right at the beginning, processing the entire command, and I'd rather it not fire when the prompt command is run, but I can deal with that if I must.
THERE'S A MAJOR PROBLEM WITH THIS SOLUTION. COMMANDS WITH PIPES (|) WILL FINISH EXECUTING THE TRAP, BUT BACKGROUNDING A PROCESS DURING THE TRAP WILL CAUSE THE PROCESSING OF THE COMMAND TO FREEZE - YOU'LL NEVER GET A PROMPT BACK WITHOUT HITTING ^C. THE TRAP COMPLETES, BUT $PROMPT_COMMAND NEVER RUNS. THIS PROBLEM PERSISTS EVEN IF YOU DISOWN THE PROCESS IMMEDIATELY AFTER BACKGROUNDING IT.
This wound up being a little more interesting than I expected:
LOGFILE=~/logfiles/$BASHPID
start_timer() {
if [ ! -e "$LOGFILE" ]; then
# You may have to adjust this to fit your history output format:
CMD=$(history | tail -1 | tr -s " " | cut -f2-1000 -d" ")
# timer2 keeps updating the status line with how long the cmd has been running
timer2 -p "$PROMPT_BNW $CMD" -u -q & echo $! > "$LOGFILE"
fi
}
stop_timer() {
# Unfortunately, killing a process always prints that nasty confirmation line,
# and you can't silence it by redirecting stdout and stderr to /dev/null, so you
# have to disown the process before killing it.
disown $(cat "$LOGFILE")
kill -9 $(cat "$LOGFILE")
rm -f "$LOGFILE"
}
trap 'start_timer' DEBUG
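The warning above mentions $PROMPT_COMMAND, so stop_timer is presumably run from the prompt; a minimal sketch of that hookup (my assumption, since the wiring isn't shown):
mkdir -p ~/logfiles            # the LOGFILE path assumes this directory exists
PROMPT_COMMAND='stop_timer'    # tear the timer down when the command ends and the prompt redraws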

Quit from pipe in bash

For following bash statement:
tail -Fn0 /tmp/report | while [ 1 ]; do echo "pre"; exit; echo "past"; done
I got "pre", but I didn't get back to the bash prompt. Then, if I wrote something into /tmp/report, the script quit and I got the bash prompt back.
I think that's reasonable: the exit makes the while statement quit, but the tail is still alive. When something is written to /tmp/report, tail writes it to the pipe, detects that the pipe is closed, and quits.
Am I right? If not, could anyone provide a correct interpretation?
Is it possible to add anything to the while statement to quit the whole pipeline immediately? I know I could save the pid of tail in a temporary file, read that file inside the while, and kill the tail. Is there a simpler way?
Let me broaden my question. If I use this tail | while in a script file, is it possible to satisfy all of the following at once?
a. If Ctrl-C is pressed or the main shell process is signaled, the main shell and the various subshells and background processes it spawned all quit.
b. I can quit the tail | while on a specific trigger only, while the other subprocesses keep running.
c. Preferably without a temporary file or named pipe.
You're correct. The while loop is executing in a subshell because it is part of a pipeline, and exit just exits that subshell.
If you're running bash 4.x, you may be able to achieve what you want with a coprocess.
coproc TAIL { tail -Fn0 /tmp/report.txt ;}
while [ 1 ]
do
echo "pre"
break
echo "past"
done <&${TAIL[0]}
kill $TAIL_PID
http://www.gnu.org/software/bash/manual/html_node/Coprocesses.html
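To also cover point (a) from the question, a minimal sketch (my own addition, assuming bash 4.x and the same coprocess) that traps Ctrl-C and cleans up the background tail:
#!/bin/bash
coproc TAIL { tail -Fn0 /tmp/report.txt ;}
# kill the background tail on Ctrl-C so nothing is left behind
trap 'kill "$TAIL_PID" 2>/dev/null; exit 130' INT
while read -r line
do
echo "pre"
break
done <&"${TAIL[0]}"
kill "$TAIL_PID" 2>/dev/null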
With older versions, you can use a background process writing to a named pipe:
pipe=/tmp/tail.$$
mkfifo $pipe
tail -Fn0 /tmp/report.txt >$pipe &
TAIL_PID=$!
while [ 1 ]
do
echo "pre"
break
echo "past"
done <$pipe
kill $TAIL_PID
rm $pipe
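To fold point (a) into the named-pipe variant as well, a sketch (my own addition) that converts Ctrl-C into a normal exit and cleans up the FIFO and the background tail either way:
pipe=/tmp/tail.$$
mkfifo "$pipe"
tail -Fn0 /tmp/report.txt >"$pipe" &
TAIL_PID=$!
# the EXIT trap cleans up on any exit; the INT trap turns Ctrl-C into one
trap 'kill "$TAIL_PID" 2>/dev/null; rm -f "$pipe"' EXIT
trap 'exit 130' INT
while read -r line
do
echo "pre"
break
done <"$pipe"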
You can (unreliably) get away with killing the process group:
tail -Fn0 /tmp/report | while :
do
echo "pre"
sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
echo "past"
done
This may send the signal to more processes than you want. If you run the above command in an interactive terminal you should be okay, but in a script it is entirely possible (indeed likely) that the process group will include the script running the command. To avoid sending the signal to the script itself, it would be wise to enable job monitoring and run the pipeline in the background, ensuring that a new process group is formed for the pipeline:
#!/bin/sh
# In POSIX shells that support the User Portability Utilities option
# (this includes bash and ksh), executing "set -m" turns on job control.
# Background processes run in a separate process group. If the shell
# is interactive, a line containing their exit status is printed to
# stderr upon their completion.
set -m
tail -Fn0 /tmp/report | while :
do
echo "pre"
sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
echo "past"
done &
wait
Note that I've replaced the while [ 1 ] with while : because while [ 1 ] is poor style (it behaves exactly the same as while [ 0 ]: any non-empty string passed to [ is true).

Bash script: `exit 0` fails to exit

So I have this Bash script:
#!/bin/bash
PID=`ps -u ...`
if [ "$PID" = "" ]; then
echo $(date) Server off: not backing up
exit
else
echo "say Server backup in 10 seconds..." >> fifo
sleep 10
STARTTIME="$(date +%s)"
echo nosave >> fifo
echo savenow >> fifo
tail -n 3 -f server.log | while read line
do
if echo $line | grep -q 'save complete'; then
echo $(date) Backing up...
OF="./backups/backup $(date +%Y-%m-%d\ %H:%M:%S).tar.gz"
tar -czhf "$OF" data
echo autosave >> fifo
echo "$(date) Backup complete, resuming..."
echo "done"
exit 0
echo "done2"
fi
TIMEDIFF="$(($(date +%s)-STARTTIME))"
if ((TIMEDIFF > 70)); then
echo "Save took too long, canceling backup."
exit 1
fi
done
fi
Basically, the server takes input from a fifo and outputs to server.log. The fifo is used to send stop/start commands to the server for autosaves. At the end, once it receives the message from the server that the server has completed a save, it tar's the data directory and starts saves again.
It's at the exit 0 line that I'm having trouble. Everything executes fine, but I get this output:
srv:scripts $ ./backup.sh
Sun Nov 24 22:42:09 EST 2013 Backing up...
Sun Nov 24 22:42:10 EST 2013 Backup complete, resuming...
done
But it hangs there. Notice how "done" echoes but "done2" fails. Something is causing it to hang on exit 0.
ADDENDUM: Just to avoid confusion for people looking at this in the future, it hangs at the exit line and never returns to the command prompt. Not sure if I was clear enough in my original description.
Any thoughts? This is the entire script, there's nothing else going on and I'm calling it direct from bash.
Here's a smaller, self-contained example that exhibits the same behavior:
echo foo > file
tail -f file | while read; do exit; done
The problem is that since each part of the pipeline runs in a subshell, exit only exits the subshell running the while read loop, not the entire script.
It will then hang until tail finds a new line, tries to write it, and discovers that the pipe is broken.
To fix it, you can replace
tail -n 3 -f server.log | while read line
do
...
done
with
while read line
do
...
done < <(tail -n 3 -f server.log)
By redirecting from a process substitution instead, the script doesn't have to wait for tail to finish the way it would in a pipeline, and the loop doesn't run in a subshell, so exit actually exits the entire script.
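As an alternative (my own note, not part of the answer above): bash 4.2 and later offer shopt -s lastpipe, which runs the last element of a pipeline in the current shell, provided job control is off, as it is in scripts by default:
#!/bin/bash
shopt -s lastpipe      # bash 4.2+; requires job control to be off
echo foo > file
tail -f file | while read -r; do
exit                   # the loop runs in the current shell, so this exits the script
done
echo "never reached"
# note: the orphaned tail lingers until it next tries to write and gets SIGPIPE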
But it hangs there. Notice how "done" echoes but "done2" fails.
done2 won't be printed at all since exit 0 has already ended your script with return code 0.
I don't know the details of bash subshells inside loops, but normally the appropriate way to exit a loop is the break command. In some cases that's not enough (you really need to exit the program), but refactoring the program may be the easiest (safest, most portable) way to solve that. It may also improve readability, because people don't expect programs to exit in the middle of a loop.
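A sketch of that refactor (my own illustration, combining break with the process substitution fix above): record a status, break out, and decide the exit code after the loop.
#!/bin/bash
status=1
while read -r line
do
if echo "$line" | grep -q 'save complete'; then
status=0
break              # leave the loop; the exit decision happens below
fi
done < <(tail -n 3 -f server.log)
# the tail from the substitution lingers until its next write fails
exit "$status"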

Close pipe even if subprocesses of first command is still running in background

Suppose I have test.sh as below. The intent is to run some background task from this script that continuously updates some file. If the background task is terminated for some reason, it should be started again.
#!/bin/sh
if [ -f pidfile ] && kill -0 $(cat pidfile); then
cat somewhere
exit
fi
while true; do
echo "something" >> somewhere
sleep 1
done &
echo $! > pidfile
I want to call it like ./test.sh | otherprogram, e.g. ./test.sh | cat.
The pipe is not being closed, as the background process still exists and might produce some output. How can I tell the pipe to close at the end of test.sh? Is there a better way than checking for the existence of the pidfile before calling the piped command?
As a variant I tried using #!/bin/bash and disown at the end of test.sh, but it is still waiting for the pipe to be closed.
What I actually try to achieve: I have a "status" script which collects the output of various scripts (uptime, free, date, get-xy-from-dbus, etc.), similar to this test.sh here. The output of the script is passed to my window manager, which displays it. It's also used in my GNU screen bottom line.
Since some of the scripts that are used might take some time to create output, I want to detach them from output collection. So I put them in a while true; do script; sleep 1; done loop, which is started if it is not running yet.
The problem is that I don't know how to tell the calling script to "really" detach the daemon process.
See if this serves your purpose:
(I am assuming that you are not interested in any stderr from the commands in the while loop. Adjust the code if you are. :-))
#!/bin/bash
if [ -f pidfile ] && kill -0 $(cat pidfile); then
cat somewhere
exit
fi
while true; do
echo "something" >> somewhere
sleep 1
done >/dev/null 2>&1 &
echo $! > pidfile
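With the loop's stdout and stderr detached, cat sees EOF as soon as test.sh itself exits; a quick check (assuming the script is saved as test.sh):
$ ./test.sh | cat      # returns immediately now
$ cat somewhere        # the detached loop keeps appending once per second
something
something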
If you want to explicitly close a file descriptor, for example 1 (standard output), you can do it with:
exec 1<&-
This is valid in POSIX shells.
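Applied to the question's test.sh, the same idea closes the background loop's copy of the pipe outright instead of pointing it at /dev/null (my own variant of the redirection shown above):
while true; do
echo "something" >> somewhere
sleep 1
done 1<&- &    # fd 1 closed: the pipe reader sees EOF once test.sh exits
echo $! > pidfile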
When you put the while loop in an explicit subshell and run the subshell in the background, it will give the desired behaviour.
(while true; do
echo "something" >> somewhere
sleep 1
done)&

How to suppress Terminated message after killing in bash?

How can you suppress the Terminated message that comes up after you kill a process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command ($! expands to the pid of the most recently backgrounded command):
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
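A self-contained demo of the idiom (my own sketch; sleep stands in for the real job):
#!/bin/bash
sleep 30 &                # some background job we want to kill quietly
pid=$!
kill "$pid"
wait "$pid" 2>/dev/null   # the job is reaped here, with stderr silenced,
                          # so bash has nowhere to print Terminated
echo "no Terminated message above"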
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies to background jobs, and only in interactive shells, not scripts.
See notify_of_job_status() in jobs.c.
As you say, you can redirect standard error to /dev/null, but then you miss any other error messages. You can make the redirection temporary by doing it in a subshell that runs the script. This leaves the original environment alone:
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error by redirecting a new file descriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this: the only upside over the first approach is that it saves a subshell invocation, while being more complicated and possibly even altering the behavior of the script, if the script itself manipulates file descriptors.
EDIT:
For a more appropriate answer, check the answer given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
The Terminated message is printed by the default signal handling of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
trap 'exit 0' TERM ## here is the key
while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # wait trap is done
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # wait kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here in parentheses) you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the pid of your background process back to the current shell if you want to check whether it has terminated, or to evaluate its return code.
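One way around that disadvantage (my own sketch, assuming bash): have the subshell hand the pid back on stdout. The job's stdout must be detached, or the command substitution would block until the job exits.
#!/bin/bash
# set +m in the subshell, background the job with stdout detached,
# and echo its pid back to the parent shell
pid=$(set +m; sleep 30 >/dev/null 2>&1 & echo $!)
kill "$pid"   # quiet: the job was never one of this shell's jobs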
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message. I was running mpg123 in background mode; it could only be killed silently by sending SIGINT (as with Ctrl-C) instead of the default SIGTERM.
disown did exactly the right thing for me; the exec 3>&2 approach is risky for a lot of reasons, and set +bm didn't seem to work inside a script, only at the command prompt.
I had success adding 'jobs 2>&1 >/dev/null' to the script. I'm not certain it will help anyone else's script, but here is a sample:
while true; do echo $RANDOM; done | while read line
do
echo Random is $line the last jobid is $(jobs -lp)
jobs 2>&1 >/dev/null
sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output:
function killCmd() {
kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal. For example:
{ kill -9 "$PID"; } 2>/dev/null
