Shell script: How to restart a process (with pipe) if it dies - bash

I currently use the technique described in "How do I write a bash script to restart a process if it dies?" by lhunath to restart a dead process:
until myserver; do
    echo "Server 'myserver' crashed with exit code $?. Respawning.." >&2
    sleep 1
done
But rather than just invoking the process myserver, I would like to invoke a pipeline like this:
myserver 2>&1 | /usr/bin/logger -p local0.info &
How can I use the first technique with a process that is part of a pipeline?

The until loop itself can be piped into logger:
until myserver 2>&1; do
    echo "..."
    sleep 1
done | /usr/bin/logger -p local0.info &
since myserver inherits its standard output and error from the loop (which inherits from the shell).

You can use the PIPESTATUS array to get the exit code of a specific command in a pipeline:
while :; do
    myserver 2>&1 | /usr/bin/logger -p local0.info
    status=${PIPESTATUS[0]}   # save it: the next command (even a test) overwrites PIPESTATUS
    if [[ $status != 0 ]]
    then
        echo "Server 'myserver' crashed with exit code $status. Respawning.." >&2
        sleep 1
    else
        break
    fi
done

Related

false | true; echo $?

I currently have a script that does something like
./a | ./b | ./c
I want to modify it so that if any of a, b, or c exit with an error code I print an error message and stop instead of piping bad output forward.
What would be the simplest/cleanest way to do so?
In bash you can use set -e and set -o pipefail at the beginning of your file. A subsequent command ./a | ./b | ./c will then fail when any of the three scripts fails, and the return code of the pipeline will be that of the rightmost script that failed.
Note that pipefail isn't available in standard sh.
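As a minimal sketch of the whole setup (./a, ./b and ./c stand in for any three commands):
#!/bin/bash
set -e           # abort the script as soon as any command fails
set -o pipefail  # make a pipeline fail if any component fails, not just the last one

./a | ./b | ./c
echo "all three succeeded"   # only reached if the entire pipeline succeeded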
You can also check the ${PIPESTATUS[@]} array after the full execution, e.g. if you run:
./a | ./b | ./c
then ${PIPESTATUS[@]} will hold the exit codes of each command in the pipe, so if the middle command failed, echo "${PIPESTATUS[@]}" would print something like:
0 1 0
and something like this, run immediately after the command:
test ${PIPESTATUS[0]} -eq 0 -a ${PIPESTATUS[1]} -eq 0 -a ${PIPESTATUS[2]} -eq 0
will allow you to check that all commands in the pipe succeeded.
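Note that PIPESTATUS is reset by the very next command you run, so if you need the codes later, copy the array first. A minimal sketch of a generalized check (./a, ./b, ./c again stand in for any commands):
#!/bin/bash
./a | ./b | ./c
status=("${PIPESTATUS[@]}")   # copy immediately; any later command resets PIPESTATUS
for i in "${!status[@]}"; do
    if [ "${status[$i]}" -ne 0 ]; then
        echo "command $((i + 1)) in the pipeline exited with ${status[$i]}" >&2
    fi
done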
If you really don't want the second command to proceed until the first is known to be successful, then you probably need to use temporary files. The simple version of that is:
tmp=${TMPDIR:-/tmp}/mine.$$
if ./a > $tmp.1
then
    if ./b < $tmp.1 > $tmp.2
    then
        if ./c < $tmp.2
        then : OK
        else echo "./c failed" 1>&2
        fi
    else echo "./b failed" 1>&2
    fi
else echo "./a failed" 1>&2
fi
rm -f $tmp.[12]
The '1>&2' redirection can also be abbreviated '>&2'; however, an old version of the MKS shell mishandled the error redirection without the preceding '1' so I've used that unambiguous notation for reliability for ages.
This leaks files if you interrupt something. Bomb-proof (more or less) shell programming uses:
tmp=${TMPDIR:-/tmp}/mine.$$
trap 'rm -f $tmp.[12]; exit 1' 0 1 2 3 13 15
...if statement as before...
rm -f $tmp.[12]
trap 0 1 2 3 13 15
The first trap line says to run the commands rm -f $tmp.[12]; exit 1 when any of the signals 1 SIGHUP, 2 SIGINT, 3 SIGQUIT, 13 SIGPIPE or 15 SIGTERM occurs, or on trap 0 (when the shell exits for any reason).
If you're writing a shell script, the final trap only needs to remove the trap on 0, which is the shell exit trap (you can leave the other signals in place since the process is about to terminate anyway).
In the original pipeline, it is feasible for 'c' to be reading data from 'b' before 'a' has finished - this is usually desirable (it gives multiple cores work to do, for example). If 'b' is a 'sort' phase, then this won't apply - 'b' has to see all its input before it can generate any of its output.
If you want to detect which command(s) fail, you can use:
(./a || echo "./a exited with $?" 1>&2) |
(./b || echo "./b exited with $?" 1>&2) |
(./c || echo "./c exited with $?" 1>&2)
This is simple and symmetric - it is trivial to extend to a 4-part or N-part pipeline.
Simple experimentation with 'set -e' didn't help.
Unfortunately, the answer by Jonathan requires temporary files and the answers by Michel and Imron require bash (even though this question is tagged shell). As pointed out by others already, it is not possible to abort the pipe before later processes are started: all processes are started at once and will thus all run before any errors can be communicated. But the title of the question also asks about error codes. These can be retrieved and investigated after the pipe has finished, to figure out whether any of the involved processes failed.
Here is a solution that catches all errors in the pipe and not only errors of the last component. So this is like bash's pipefail, just more powerful in the sense that you can retrieve all the error codes.
res=$( { (./a 2>&1 || echo "1st failed with $?" >&2) |
         (./b 2>&1 || echo "2nd failed with $?" >&2) |
         (./c 2>&1 || echo "3rd failed with $?" >&2); } 2>&1 >/dev/null )
if [ -n "$res" ]; then
    echo pipe failed
fi
To detect whether anything failed, an echo command prints on standard error whenever a command fails. The combined standard error output of the group is captured in $res and investigated later. Note the order of the final redirections: 2>&1 first points standard error at the captured standard output, and only then does >/dev/null discard the pipeline's own data stream. This is also why each process's standard error is merged into its standard output with 2>&1: that way any error chatter from the processes themselves travels down the pipe as data instead of polluting $res. You can replace the final redirect to /dev/null with a file if you need to store the output of the last command somewhere.
To play more with this construct and to convince yourself that it really does what it should, I replaced ./a, ./b and ./c with subshells which execute echo, cat and exit. You can use this to check that the construct really forwards all the output from one process to another and that the error codes get recorded correctly.
res=$( { (sh -c "echo 1st out; exit 0" 2>&1 || echo "1st failed with $?" >&2) |
         (sh -c "cat; echo 2nd out; exit 0" 2>&1 || echo "2nd failed with $?" >&2) |
         (sh -c "echo start; cat; echo end; exit 0" 2>&1 || echo "3rd failed with $?" >&2); } 2>&1 >/dev/null )
if [ -n "$res" ]; then
    echo pipe failed
fi
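For instance, making the middle stage exit non-zero (the exit code 3 here is arbitrary) should leave exactly one message in $res:
res=$( { (sh -c "echo 1st out; exit 0" 2>&1 || echo "1st failed with $?" >&2) |
         (sh -c "cat; exit 3" 2>&1 || echo "2nd failed with $?" >&2) |
         (sh -c "cat; exit 0" 2>&1 || echo "3rd failed with $?" >&2); } 2>&1 >/dev/null )
echo "$res"   # prints: 2nd failed with 3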
This answer is in the spirit of the accepted answer, but uses shell variables instead of temporary files.
if TMP_A="$(./a)"
then
    if TMP_B="$(echo "$TMP_A" | ./b)"
    then
        if TMP_C="$(echo "$TMP_B" | ./c)"
        then
            echo "$TMP_C"
        else
            echo "./c failed"
        fi
    else
        echo "./b failed"
    fi
else
    echo "./a failed"
fi
Note that each stage's entire output is buffered in a variable (and command substitution strips trailing newlines), so the stages no longer run concurrently.

AppleScript blocks shell script cmd when writing to pipe

The following script works as expected when executed from an AppleScript do shell script command.
#!/bin/sh
sleep 10 &
#echo "hello world" > /tmp/apipe &
cpid=$!
sleep 1
if ps -ef | grep $cpid | grep sleep | grep -qv grep ; then
    echo "killing blocking cmd..."
    kill -KILL $cpid
    # non zero status to inform launch script of problem...
    exit 1
fi
But if the sleep command (line 2) is swapped for the commented-out echo command (line 3), with the if statement adjusted to match, the script blocks when run from AppleScript but runs fine from the terminal command line.
Any ideas?
EDIT: I should have mentioned that the script works properly when a consumer/reader is connected to the pipe. It only blocks when nothing is reading from the pipe (opening a FIFO for writing blocks until a reader opens the other end).
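A quick way to see that behavior in a terminal (using the same /tmp/apipe path as the script above):
mkfifo /tmp/apipe
echo "hello world" > /tmp/apipe &   # the writer blocks in open() until a reader appears
cat /tmp/apipe                      # attaching a reader lets the echo complete
rm /tmp/apipe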
OK, the following will do the trick. It basically kills the job using its job ID; since there is only one, it's the current job %%.
I was lucky that I came across this answer, or it would have driven me crazy :)
#!/bin/sh
echo $1 > $2 &
sleep 1
# Following is necessary. Seems to need it or
# job will not complete! Also seen at
# https://stackoverflow.com/a/10736613/348694
echo "Checking for running jobs..."
jobs
kill %% >/dev/null 2>&1
if [ $? -eq 0 ] ; then
    echo "Taking too long. Killed..."
    exit 1
fi
exit 0

How can I make an external program interruptible in this trap-captured bash script?

I am writing a script which will run an external program (arecord) and do some cleanup if it's interrupted by either a POSIX signal or input on a named pipe. Here's the draft in full:
#!/bin/bash
X=`date '+%Y-%m-%d_%H.%M.%S'`
F=/tmp/$X.wav
P=/tmp/$X.$$.fifo
mkfifo $P
trap "echo interrupted && (rm $P || echo 'couldnt delete $P') && echo 'removed fifo' && exit" INT
# this forked process will wait for input on the fifo
(echo 'waiting for fifo' && cat $P >/dev/null && echo 'fifo hit' && kill -s SIGINT $$)&
while true
do
    echo waiting...
    sleep 1
done
#arecord $F
This works perfectly as it is: the script ends when a signal arrives and a signal is generated if the fifo is written-to.
But instead of the while true loop I want the now-commented-out arecord command. However, if I run that program instead of the loop, the SIGINT doesn't get caught by the trap and arecord keeps running.
What should I do?
It sounds like you really need this to work more like an init script. So, start arecord in the background and put the pid in a file. Then use the trap to kill the arecord process based on the pidfile.
#!/bin/bash
PIDFILE=/var/run/arecord-runner.pid #Just somewhere to store the pid
LOGFILE=/var/log/arecord-runner.log
#Just one option for how to format your trap call
#Note that this does not use &&, so one failed function will not
# prevent other items in the trap from running
trapFunc() {
    echo interrupted
    (rm $P || echo "couldn't delete $P")
    echo 'removed fifo'
    kill $(cat $PIDFILE)
    exit 0
}
X=`date '+%Y-%m-%d_%H.%M.%S'`
F=/tmp/$X.wav
P=/tmp/$X.$$.fifo
mkfifo $P
trap "trapFunc" INT
# this forked process will wait for input on the fifo
(echo 'waiting for fifo' && cat $P >/dev/null && echo 'fifo hit' && kill -s SIGINT $$)&
arecord $F 1>$LOGFILE 2>&1 & #Run in the background, sending logs to file
echo $! > $PIDFILE #Save pid of the last background process to file
while true
do
    echo waiting...
    sleep 1
done
Also... you may have your trap written with '&&' clauses for a reason, but as an alternative, you can give a function name as I did above, or a sort of anonymous function like this:
trap "{ command1; command2 args; command3; exit 0; }"
Just make sure that each command is followed by a semicolon and there are spaces between the braces and the commands. The risk of using && in the trap is that your script will continue to run past the interrupt if one of the commands before the exit fails to execute (but maybe you want that?).
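To illustrate that risk with a small sketch (the file names are made up):
# With &&, a failing command short-circuits everything after it, including exit:
trap "rm /tmp/no-such-fifo && echo 'cleaned up' && exit 0" INT   # exit is skipped if rm fails
# With semicolons, each command runs regardless of earlier failures:
trap "rm -f /tmp/no-such-fifo; echo 'cleaned up'; exit 0" INT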

Bash script not exiting immediately when `exit` is called

I have the following bash script:
tail -F -n0 /private/var/log/system.log | while read line
do
    if [ ! `echo $line | grep -c 'launchd'` -eq 0 ]; then
        echo 'launchd message'
        exit 0
    fi
done
For some reason, it is echoing launchd message, waiting for a full 5 seconds, and then exiting.
Why is this happening and how do I make it exit immediately after it echos launchd message?
Since you're using a pipe, the while loop is being run in a subshell. Run it in the main shell instead.
#!/bin/bash
while ...
do
...
done < <(tail ...)
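Filled in for the script in the question, that skeleton might look like this (a sketch; it also uses the Bash pattern match suggested further down instead of grep):
#!/bin/bash
while read -r line
do
    if [[ $line == *launchd* ]]; then
        echo 'launchd message'
        exit 0
    fi
done < <(tail -F -n0 /private/var/log/system.log)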
As indicated by Ignacio, your tail | while creates a subshell. The delay is because it's waiting for the next line to be written to the log file before everything closes.
You can add this line immediately before your exit command if you'd prefer not using process substitution:
kill -SIGPIPE $$
Unfortunately, I don't know of any way to control the exit code using this method. It will be 141 which is 128 + 13 (the signal number of SIGPIPE).
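So a caller can still key off that status if it needs to, e.g. (the wrapper script name here is hypothetical):
./watch-launchd.sh               # a script that exits via kill -SIGPIPE $$
if [ $? -eq 141 ]; then          # 141 = 128 + 13 (SIGPIPE)
    echo "launchd message was seen"
fi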
If you're trying to make the startup of a daemon dependent on another one having started, there's probably a better way to do that.
By the way, if you're really writing a Bash script (which you'd have to be to use <() process substitution), you can write your if like this: if [[ $line == *launchd* ]].
You can also exit the subshell with a tell-tale exit code and then test the value of "$?" to get the same effect you're looking for:
tail -F -n0 /private/var/log/system.log | while read line
do
    if [ ! `echo $line | grep -c 'launchd'` -eq 0 ]; then
        echo 'launchd message'
        exit 10
    fi
done
if [ $? -eq 10 ]; then exit 0; fi

Best way to make a shell script daemon?

I'm wondering if there is a better way to make a daemon that waits for something using only sh than:
#! /bin/sh
trap processUserSig SIGUSR1
processUserSig() {
    echo "doing stuff"
}
while true; do
    sleep 1000
done
In particular, I'm wondering if there's any way to get rid of the loop and still have the thing listen for the signals.
Just backgrounding your script (./myscript &) will not daemonize it. See http://www.faqs.org/faqs/unix-faq/programmer/faq/, section 1.7, which describes what's necessary to become a daemon. You must disconnect it from the terminal so that SIGHUP does not kill it. You can take a shortcut to make a script appear to act like a daemon:
nohup ./myscript 0<&- &>/dev/null &
will do the job. Or, to capture both stderr and stdout to a file:
nohup ./myscript 0<&- &> my.admin.log.file &
Redirection explained (see bash redirection)
0<&- closes stdin
&> file sends stdout and stderr to a file
However, there may be further important aspects that you need to consider. For example:
You will still have a file descriptor open to the script, which means the filesystem it resides on would be unmountable. To be a true daemon you should chdir("/") (or cd / inside your script), and fork so that the parent exits and the original descriptor is closed.
Perhaps run umask 0. You may not want to depend on the umask of the caller of the daemon.
For an example of a script that takes all of these aspects into account, see Mike S' answer.
Some of the top-upvoted answers here are missing some important parts of what makes a daemon a daemon, as opposed to just a background process, or a background process detached from a shell.
This FAQ (http://www.faqs.org/faqs/unix-faq/programmer/faq/) describes what is necessary to be a daemon. And the answer to Run bash script as daemon implements the setsid, though it misses the chdir to root.
The original poster's question was actually more specific than "How do I create a daemon process using bash?", but since the subject and answers discuss daemonizing shell scripts generally, I think it's important to point it out (for interlopers like me looking into the fine details of creating a daemon).
Here's my rendition of a shell script that would behave according to the FAQ. Set DEBUG to true to see pretty output (but it also exits immediately rather than looping endlessly):
#!/bin/bash
DEBUG=false
# This part is for fun, if you consider shell scripts fun- and I do.
trap process_USR1 SIGUSR1
process_USR1() {
    echo 'Got signal USR1'
    echo 'Did you notice that the signal was acted upon only after the sleep was done'
    echo 'in the while loop? Interesting, yes? Yes.'
    exit 0
}
# End of fun. Now on to the business end of things.
print_debug() {
    whatiam="$1"; tty="$2"
    [[ "$tty" != "not a tty" ]] && {
        echo "" >$tty
        echo "$whatiam, PID $$" >$tty
        ps -o pid,sess,pgid -p $$ >$tty
        tty >$tty
    }
}
me_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
me_FILE=$(basename $0)
cd /
#### CHILD HERE --------------------------------------------------------------------->
if [ "$1" = "child" ] ; then # 2. We are the child. We need to fork again.
shift; tty="$1"; shift
$DEBUG && print_debug "*** CHILD, NEW SESSION, NEW PGID" "$tty"
umask 0
$me_DIR/$me_FILE XXrefork_daemonXX "$tty" "$#" </dev/null >/dev/null 2>/dev/null &
$DEBUG && [[ "$tty" != "not a tty" ]] && echo "CHILD OUT" >$tty
exit 0
fi
##### ENTRY POINT HERE -------------------------------------------------------------->
if [ "$1" != "XXrefork_daemonXX" ] ; then # 1. This is where the original call starts.
tty=$(tty)
$DEBUG && print_debug "*** PARENT" "$tty"
setsid $me_DIR/$me_FILE child "$tty" "$#" &
$DEBUG && [[ "$tty" != "not a tty" ]] && echo "PARENT OUT" >$tty
exit 0
fi
##### RUNS AFTER CHILD FORKS (actually, on Linux, clone()s. See strace -------------->
# 3. We have been reforked. Go to work.
exec >/tmp/outfile
exec 2>/tmp/errfile
exec 0</dev/null
shift; tty="$1"; shift
$DEBUG && print_debug "*** DAEMON" "$tty"
# The real stuff goes here. To exit, see fun (above)
$DEBUG && [[ "$tty" != "not a tty" ]] && echo NOT A REAL DAEMON. NOT RUNNING WHILE LOOP. >$tty
$DEBUG || {
    while true; do
        echo "Change this loop, so this silly no-op goes away." >/dev/null
        echo "Do something useful with your life, young padawan." >/dev/null
        sleep 10
    done
}
$DEBUG && [[ "$tty" != "not a tty" ]] && sleep 3 && echo "DAEMON OUT" >$tty
exit # This may never run. Why is it here then? It's pretty.
# Kind of like, "The End" at the end of a movie that you
# already know is over. It's always nice.
Output looks like this when DEBUG is set to true. Notice how the session and process group ID (SESS, PGID) numbers change:
<shell_prompt>$ bash blahd
*** PARENT, PID 5180
PID SESS PGID
5180 1708 5180
/dev/pts/6
PARENT OUT
<shell_prompt>$
*** CHILD, NEW SESSION, NEW PGID, PID 5188
PID SESS PGID
5188 5188 5188
not a tty
CHILD OUT
*** DAEMON, PID 5198
PID SESS PGID
5198 5188 5188
not a tty
NOT A REAL DAEMON. NOT RUNNING WHILE LOOP.
DAEMON OUT
# double background your script to have it detach from the tty
# cf. http://www.linux-mag.com/id/5981
(./program.sh &) &
Use your system's daemon facility, such as start-stop-daemon.
Otherwise, yes, there has to be a loop somewhere.
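For example, a hedged sketch using Debian's start-stop-daemon (the script path and pidfile are made up):
# start the script detached from the terminal, recording its PID
start-stop-daemon --start --background \
    --make-pidfile --pidfile /var/run/myscript.pid \
    --exec /usr/local/bin/myscript.sh
# and stop it again using the recorded PID
start-stop-daemon --stop --pidfile /var/run/myscript.pid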
$ ( cd /; umask 0; setsid your_script.sh </dev/null &>/dev/null & ) &
It really depends on what the binary itself is going to do.
For example, I want to create some listener.
Starting the daemon is a simple task:
lis_deamon:
#!/bin/bash
# We will start the listener as a daemon process
#
LISTENER_BIN=/tmp/deamon_test/listener
test -x $LISTENER_BIN || exit 5
PIDFILE=/tmp/deamon_test/listener.pid
case "$1" in
    start)
        echo -n "Starting Listener Deamon .... "
        startproc -f -p $PIDFILE $LISTENER_BIN
        echo "running"
        ;;
    *)
        echo "Usage: $0 start"
        exit 1
        ;;
esac
This is how we start the daemon (the common way for all /etc/init.d/ stuff).
Now, as for the listener itself: it must contain some kind of loop or alert that will trigger the script to do what you want. For example, if you want your script to sleep 10 minutes, wake up, and ask how you are doing, you would do this with:
while true ; do sleep 600 ; echo "How are you?" ; done
Here is a simple listener that will listen for your commands from a remote machine and execute them locally:
listener:
#!/bin/bash
# Starting listener on some port
# we will run it as a daemon and we will send commands to it.
#
IP=$(hostname --ip-address)
PORT=1024
FILE=/tmp/backpipe
count=0
while [ -a $FILE ] ; do   # if the file exists, assume another program is using it
    FILE=$FILE.$count
    count=$(($count + 1))
done
# Now we know that no such file exists. The removal of these files
# can be done in the daemon itself or in a different part of the program.
mknod $FILE p
while true ; do
    netcat -l -s $IP -p $PORT < $FILE | /bin/bash > $FILE
done
rm $FILE
So to start it up: /tmp/deamon_test/listener start
and to send commands from a shell (or wrap them in a script):
test_host#netcat 10.184.200.22 1024
uptime
20:01pm up 21 days 5:10, 44 users, load average: 0.62, 0.61, 0.60
date
Tue Jan 28 20:02:00 IST 2014
punt! (Ctrl+C)
Hope this will help.
Have a look at the daemon tool from the libslack package:
http://ingvar.blog.linpro.no/2009/05/18/todays-sysadmin-tip-using-libslack-daemon-to-daemonize-a-script/
On Mac OS X, use a launchd script for a shell daemon.
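A hedged sketch of the shell side, assuming you have already installed a job definition at the path below (the label and path are made up):
# load and start the job; launchd itself keeps it alive per the plist
launchctl load /Library/LaunchDaemons/com.example.mydaemon.plist
launchctl start com.example.mydaemon
# and to stop and unload it again
launchctl stop com.example.mydaemon
launchctl unload /Library/LaunchDaemons/com.example.mydaemon.plist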
If I had a script.sh and I wanted to execute it from bash and leave it running even after closing my bash session, I would combine nohup and & at the end.
example: nohup ./script.sh < inputFile.txt > ./logFile 2>&1 &
inputFile.txt can be any file. If your script needs no input, we usually use /dev/null, so the command would be:
nohup ./script.sh < /dev/null > ./logFile 2>&1 &
After that, close your bash session, open another terminal and execute ps aux | egrep "script.sh"; you will see that your script is still running in the background. Of course, if you want to stop it, execute the same ps command and then kill -9 <PID-OF-YOUR-SCRIPT>
See Bash Service Manager project: https://github.com/reduardo7/bash-service-manager
Implementation example
#!/usr/bin/env bash
export PID_FILE_PATH="/tmp/my-service.pid"
export LOG_FILE_PATH="/tmp/my-service.log"
export LOG_ERROR_FILE_PATH="/tmp/my-service.error.log"
. ./services.sh
run-script() {
    local action="$1" # Action
    while true; do
        echo "### Running action '${action}'"
        echo foo
        echo bar >&2
        [ "$action" = "run" ] && return 0
        sleep 5
        [ "$action" = "debug" ] && exit 25
    done
}
before-start() {
    local action="$1" # Action
    echo "* Starting with $action"
}
after-finish() {
    local action="$1" # Action
    local serviceExitCode=$2 # Service exit code
    echo "* Finish with $action. Exit code: $serviceExitCode"
}
action="$1"
serviceName="Example Service"
serviceMenu "$action" "$serviceName" run-script "$workDir" before-start after-finish
Usage example
$ ./example-service
# Actions: [start|stop|restart|status|run|debug|tail(-[log|error])]
$ ./example-service start
# Starting Example Service service...
$ ./example-service status
# Service Example Service is running with PID 5599
$ ./example-service stop
# Stopping Example Service...
$ ./example-service status
# Service Example Service is not running
Here is the minimal change to the original proposal to create a valid daemon in Bourne shell (or Bash):
#!/bin/sh
if [ "$1" != "__forked__" ]; then
    setsid "$0" __forked__ "$@" &
    exit
else
    shift
fi
trap 'siguser1=true' USR1
trap 'echo "Clean up and exit"; kill $sleep_pid; exit' TERM
exec > outfile
exec 2> errfile
exec 0< /dev/null
while true; do
    (sleep 30000000 >/dev/null 2>&1) &
    sleep_pid=$!
    wait
    kill $sleep_pid >/dev/null 2>&1
    if [ -n "$siguser1" ]; then
        siguser1=''
        echo "Wait was interrupted by SIGUSR1, do things here."
    fi
done
Explanation:
Line 2-7: A daemon must be forked so it doesn't have a parent. An artificial argument is used to prevent endless re-forking. setsid detaches from the starting process and terminal.
Line 9: Our desired signal needs to be differentiated from other signals.
Line 10: Cleanup is required to get rid of dangling "sleep" processes.
Line 11-13: Redirect stdout, stderr and stdin of the script.
Line 16: sleep in the background
Line 18: wait waits for end of sleep, but gets interrupted by (some) signals.
Line 19: Kill sleep process, because that is still running when signal is caught.
Line 22: Do the work if SIGUSR1 has been caught.
Guess it does not get any simpler than that.
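A usage sketch, assuming the script above was saved as mydaemon.sh (the name is made up):
./mydaemon.sh             # re-execs itself under setsid and returns immediately
pgrep -f mydaemon.sh      # find the daemon's PID
kill -USR1 <pid>          # interrupts the wait; the loop does its work
kill -TERM <pid>          # runs the cleanup trap and exits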
Like many answers, this one is not a "real" daemonization but rather an alternative to the nohup approach.
echo "script.sh" | at now
There are obviously differences from using nohup. For one, there is no detaching from the parent in the first place. Also, script.sh doesn't inherit the parent's environment.
By no means is this a better alternative. It is simply a different (and somewhat lazy) way of launching processes in the background.
P.S. I personally upvoted carlo's answer, as it seems to be the most elegant and works both from a terminal and inside scripts.
Try executing it using &.
If you save this file as program.sh, you can use:
$ ./program.sh &
