Make script terminate when background zenity dialog is closed - bash

I have written a fairly simple script here that is meant to display a text info dialog using zenity and continuously read data from a remote TCP connection and display it in the dialog. This works... However I would like for the entire script to terminate if I close the zenity dialog.
Is there a way to do this? I don't think I can check for anything in the while loop, because the script could be stalled on reading the data from the remote TCP connection.
#!/bin/bash
on_exit() {
    zenity --display=:0 --error --text="Script has exited." &
}

# Options
while getopts "a:p:t:" OPTION; do case "$OPTION" in
    a) address="$OPTARG";;
    p) port="$OPTARG";;
    t) title="$OPTARG";;
esac; done

exec &> >(zenity --display=:0 --text-info --title="$title" || exit)
# doesn't make a difference? ↑
# also tried &&

trap "on_exit" EXIT

while read data < "/dev/tcp/$address/$port"; do
    echo "$data"
    # ...
    # do some other stuff with the information
    # ...
done
Note: This is going to be run on IGEL Linux. I don't have the option of installing additional packages. So, ideally the solution I'm looking for is native to Bash.
Update
I only had to make this modification to continue using exec: when the dialog is closed, zenity exits and kill $PID terminates the script even while it is blocked on the TCP read. Alternatively, @BachLien's answer using named pipes also works.
PID=$$
exec &> >(zenity --display=:0 --text-info --title="$title"; kill $PID)

I do not have zenity installed, so I tried this script to illustrate the idea.
The program terminal+cat (emulating zenity) is executed by the function _dspMsg, which runs in the background (child process); cat continuously displays messages from a file ($m) which is a named pipe; the parent process is killed when terminal+cat exits.
Meanwhile, another cat process writes messages into the pipe $m (emulating a TCP information feed); it is killed when _dspMsg exits.
#!/bin/bash
# 1) named pipe
m=$(mktemp -u /tmp/msg-XXXX)   # get a temporary filename (for the named pipe)
mkfifo "$m"                    # create that named pipe
trap "echo END; rm $m" EXIT    # remove that file on exit

# 2) zenity
_dspMsg(){                     # continuously display messages
    urxvt -e bash -c "cat <$m" # terminal+cat is used in place of zenity
    kill $1                    # kill parent pid
}                              # to be run in background
_dspMsg $$ &                   # $$ = process id

# 3) TCP info feeds
cat >>"$m"                     # feeding messages using cat
                               # cat is used in place of a TCP data feed
Note:
A named pipe is used as a way of communicating between parent and child processes.
To test that script, you may need to change urxvt to xterm, iTerm, or any other terminal emulator available on your computer.
So, maybe it is what you need (untested):
#!/bin/bash
while getopts "a:p:t:" OPTION; do case "$OPTION" in
    a) address="$OPTARG";;
    p) port="$OPTARG";;
    t) title="$OPTARG";;
esac; done

m=$(mktemp -u /tmp/msg-XXXX)
mkfifo "$m"
trap "zenity --display=:0 --error --text='Script has exited.' & rm $m" EXIT

_dspMsg(){
    zenity --display=:0 --text-info --title="$title" <"$m"
    kill $1
}
_dspMsg $$ &

while read data < "/dev/tcp/$address/$port"; do
    echo "$data" >>"$m"
done
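For reference, an invocation of that script would look something like this (host, port, and title are placeholders):

./script.sh -a 192.168.1.10 -p 5000 -t "TCP feed"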

You can pipe all of the output of the while loop into zenity, getting rid of the need for exec &>. When the dialog is closed, the next echo into the broken pipe kills the loop's subshell with SIGPIPE and the script ends; note that this only takes effect the next time the loop writes.
while read data < "/dev/tcp/$address/$port"; do
    echo "$data"
done | zenity --display=:0 --text-info --title="$title"

Related

Kill next command in pipeline on failure

I have a streaming backup script which I'm running as follows:
./backup_script.sh | aws s3 cp - s3://bucket/path/to/backup
The aws command streams stdin to cloud storage in an atomic way. If the process is interrupted without an EOF, the upload is aborted.
I want the aws process to be killed if ./backup_script.sh exits with a non-zero exit code.
Any bash trick for doing this?
EDIT:
You can test your solution with this script:
#!/usr/bin/env python
import signal
import sys
import functools
def signal_handler(signame, signum, frame):
    print "Got {}".format(signame)
    sys.exit(0)

signal.signal(signal.SIGTERM, functools.partial(signal_handler, 'TERM'))
signal.signal(signal.SIGINT, functools.partial(signal_handler, 'INT'))

for i in sys.stdin:
    pass
print "Got EOF"
Example:
$ grep --bla | ./sigoreof.py
grep: unrecognized option `--bla'
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
Got EOF
I want ./sigoreof.py to be terminated with a signal.
Adopting/correcting a solution originally given by @Dummy00001:
mkfifo aws.fifo
exec 3<>aws.fifo   # open the FIFO read/write *in the shell itself*
aws s3 cp - s3://bucket/path/to/backup <aws.fifo 3>&- & aws_pid=$!
rm aws.fifo        # everyone who needs a handle already has one; can remove the directory entry

if ./backup_script.sh >&3 3>&-; then
    exec 3>&-          # success: close the FIFO and let AWS exit successfully
    wait "$aws_pid"
else
    kill "$aws_pid"    # send a SIGTERM...
    wait "$aws_pid"    # wait for the process to die...
    exec 3>&-          # only close the write end *after* the process is dead
fi
Important points:
The shell opens the FIFO r/w to avoid blocking (opening for write only would block for a reader; this could also be avoided by invoking the reader [that is, the s3 command] in the background prior to the exec opening the write side).
The write end of the FIFO is held by the script itself, so the read end never hits end-of-file until after the script intentionally closes it.
The aws command's handle on the write end of the FIFO is explicitly closed (3>&-), so it doesn't hold itself open (in which case the exec 3>&- done in the parent would not successfully allow it to finish reading and exit).
backup_script.sh should have a non-zero exit status if there is an error, so your script should look something like:
if ./backup_script.sh > output.txt; then
    aws s3 cp output.txt s3://bucket/path/to/backup
fi
rm -f output.txt
A pipe isn't really appropriate here.
If you really need to conserve disk space locally, you'll have to "reverse" the upload; either remove the uploaded file in the event of an error in backup_script.sh, or upload to a temporary location, then move that to the final path once you've determined that the backup has succeeded.
(For simplicity, I'm ignoring the fact that by letting aws exit on its own in the event of an error, you may be uploading more of the partial backup than you need to. See Charles Duffy's answer for a more bandwidth-efficient approach.)
After starting the backup process with
mkfifo data
./backup_script.sh > data & writer_pid=$!
use one of the following to upload the data.
# Upload and remove if there was an error
aws s3 cp - s3://bucket/path/to/backup < data &
if ! wait $writer_pid; then
    aws s3 rm s3://bucket/path/to/backup
fi
or
# Upload to a temporary file and move it into place
# once you know the backup succeeded.
aws s3 cp - s3://bucket/path/to/backup.tmp < data &
if wait $writer_pid; then
    aws s3 mv s3://bucket/path/to/backup.tmp s3://bucket/path/to/backup
else
    aws s3 rm s3://bucket/path/to/backup.tmp
fi
A short script which uses process substitution instead of named pipes would be:
#!/bin/bash
exec 4> >( ./second-process.sh )
./first-process.sh >&4 &
if ! wait $! ; then echo "error in first process" >&2; kill 0; wait; fi
It works much like with a fifo, basically using the fd as the information carrier for the IPC instead of a file name.
Two remarks: I wasn't sure whether it's necessary to close fd 4; I would assume that upon script exit the shell closes all open files.
And I couldn't figure out how to obtain the PID of the process in the process substitution (anybody? at least on my cygwin the usual $! didn't work). I therefore resorted to killing all processes in the group, which may not be desirable (but I'm not entirely sure about the semantics).
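A sketch under the assumption of a reasonably recent bash, where $! is set to the PID of the last process substitution (this appears to be what did not work in the cygwin bash above); first-process.sh and second-process.sh are the same placeholders as in the snippet:

#!/bin/bash
exec 4> >( ./second-process.sh )
subst_pid=$!                 # PID of the process substitution, where bash supports this
./first-process.sh >&4 &
if ! wait $!; then
    echo "error in first process" >&2
    kill "$subst_pid" 2>/dev/null
fi
exec 4>&-                    # close fd 4 so second-process.sh sees EOF

This avoids kill 0 by targeting only the consumer process.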
I think you need to spawn both processes from a third one, and either use the named pipe approach from Lynch in the post mentioned by @tourism (further down in the answers), or keep piping directly but rewrite backup_script.sh so that it stays alive in the error case, keeping stdout open. backup_script.sh would then have to signal the error condition to the calling process (e.g. by sending SIGUSR1 to the parent process ID), which in turn first kills the aws process (producing an atomic abort) and only then backup_script.sh, unless it has already exited because of the broken pipe.
I had a similar situation: a shell script contained a pipeline that used one of its own functions and that function wanted to be able to effect termination. A simple contrived example that finds and displays a file:
#!/bin/sh
a() { find . -maxdepth 1 -name "$1" -print -quit | grep . || exit 101; }
a "$1" | cat
echo done
Here, the function a needs to be able to effect termination which it tries to do by calling exit. However, when invoked through a pipeline (line 3), it only terminates its own (subshell) process. In the example, the done message still appears.
One way to work around this is to detect when in a subshell and send a signal to the parent:
#!/bin/sh
die() { [ "$$" = "$(exec sh -c 'echo $PPID')" ] && exit "$1" || kill $$; }
a() { find . -maxdepth 1 -name "$1" -print -quit | grep . || die 101; }
a "$1" | cat
echo done
When in a subshell the $$ is the pid of the parent and the construct $(exec sh -c 'echo $PPID') is a shell-agnostic way to obtain the pid of the subprocess. If using bash then this can be replaced by $BASHPID.
If the subprocess pid and $$ differ then it sends a SIGTERM signal to the parent (kill $$) instead of calling exit.
The given exit status (101) isn't propagated by kill, so the script exits with a status of 143 (which is 128+15, where 15 is the signal number of SIGTERM).
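In bash specifically, the subshell test can use $BASHPID instead of the exec sh trick; a minimal sketch of the same die function:

die() { [ "$$" = "$BASHPID" ] && exit "$1" || kill $$; }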

Quit from pipe in bash

For following bash statement:
tail -Fn0 /tmp/report | while [ 1 ]; do echo "pre"; exit; echo "past"; done
I got "pre", but didn't quit to the bash prompt, then if I input something into /tmp/report, I could quit from this script and get into bash prompt.
I think that's reasonable. the 'exit' make the 'while' statement quit, but the 'tail' still alive. If something input into /tmp/report, the 'tail' will output to pipe, then 'tail' will detect the pipe is close, then 'tail' quits.
Am I right? If not, would anyone provide a correct interpretation?
Is it possible to add anything into 'while' statement to quit from the whole pipe statement immediately? I know I could save the pid of tail into a temporary file, then read this file in the 'while', then kill the tail. Is there a simpler way?
Let me enlarge my question. If use this tail|while in a script file, is it possible to fulfill following items simultaneously?
a. If Ctrl-C is inputed or signal the main shell process, the main shell and various subshells and background processes spawned by the main shell will quit
b. I could quit from tail|while only at a trigger case, and preserve other subprocesses keep running
c. It's better not use temporary file or pipe file.
You're correct. The while loop is executing in a subshell because its input is redirected, and exit just exits from that subshell.
If you're running bash 4.x, you may be able to achieve what you want with a coprocess.
coproc TAIL { tail -Fn0 /tmp/report.txt ;}
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <&${TAIL[0]}
kill $TAIL_PID   # coproc TAIL sets TAIL_PID to the coprocess PID
http://www.gnu.org/software/bash/manual/html_node/Coprocesses.html
With older versions, you can use a background process writing to a named pipe:
pipe=/tmp/tail.$$
mkfifo $pipe
tail -Fn0 /tmp/report.txt >$pipe &
TAIL_PID=$!
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <$pipe
kill $TAIL_PID
rm $pipe
You can (unreliably) get away with killing the process group:
tail -Fn0 /tmp/report | while :
do
    echo "pre"
    sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
    echo "past"
done
This may send the signal to more processes than you want. If you run the above command in an interactive terminal you should be okay, but in a script it is entirely possible (indeed likely) that the process group will include the script running the command. To avoid signaling the script itself, it would be wise to enable monitoring and run the pipeline in the background, ensuring that a new process group is formed for the pipeline:
#!/bin/sh
# In POSIX shells that support the User Portability Utilities option
# (this includes bash and ksh), executing "set -m" turns on job control.
# Background processes run in a separate process group. If the shell
# is interactive, a line containing their exit status is printed to
# stderr upon their completion.
set -m
tail -Fn0 /tmp/report | while :
do
    echo "pre"
    sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
    echo "past"
done &
wait
wait
Note that I've replaced the while [ 1 ] with while : because while [ 1 ] is poor style. (It behaves exactly the same as while [ 0 ]).

How do I receive notification in a bash script when a specific child process terminates?

I wonder if anyone can help with this?
I have a bash script. It starts a sub-process which is another GUI-based application. The bash script then goes into an interactive mode, getting input from the user. This interactive mode continues indefinitely. I would like it to terminate when the GUI application in the sub-process exits.
I have looked at SIGCHLD but this doesn't seem to be the answer. Here's what I've tried but I don't get a signal when the prog ends.
set -o monitor
"${prog}" &
prog_pid=$!
function check_pid {
    kill -0 $1 2> /dev/null
}
function cleanup {
    ### does cleanup stuff here
    exit
}
function sigchld {
    check_pid $prog_pid
    [[ $? == 1 ]] && cleanup
}
trap sigchld SIGCHLD
Updated following answers. I now have this working using the suggestion from 'nosid'. I have another, related, issue now, which is that the interactive process that follows is a basic menu-driven process that blocks waiting for key input from the user. If the child process ends, the USR1 signal is not handled until after input is received. Is there any way to force the signal to be handled immediately?
The wait loop looks like this:
stty raw                   # set the tty driver to raw mode
max=$1                     # maximum valid choice
choice=$(expr $max + 1)    # invalid choice
while [[ $choice -gt $max ]]; do
    choice=`dd if=/dev/tty bs=1 count=1 2>/dev/null`
done
stty sane                  # restore tty
Updated with solution. I have solved this. The trick was to use nonblocking I/O for the read. Now, with the answer from 'nosid' and my modifications, I have exactly what I want. For completeness, here is what works for me:
#!/bin/bash -bm
{
    "${1}"
    kill -USR1 $$
} &

function cleanup {
    # cleanup stuff
    exit
}
trap cleanup SIGUSR1

while true ; do
    stty raw   # set the tty driver to raw mode
    max=9      # maximum valid choice
    while [[ $choice -gt $max || -z $choice ]]; do
        choice=`dd iflag=nonblock if=/dev/tty bs=1 count=1 2>/dev/null`
    done
    stty sane  # restore tty
    # process choice
done
Here is a different approach. Instead of using SIGCHLD, you can execute an arbitrary command as soon as the GUI application terminates.
{
    some_command args...
    kill -USR1 $$
} &

function sigusr1() { ... }
trap sigusr1 SIGUSR1
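A self-contained sketch of the same pattern, with sleep 5 standing in for the GUI application (the names here are illustrative):

#!/bin/bash
{
    sleep 5          # the watched child process
    kill -USR1 $$    # notify the parent when it exits
} &

on_child_exit() {
    echo "child finished; cleaning up"
    exit 0
}
trap on_child_exit SIGUSR1

while true; do       # stand-in for the interactive part
    sleep 1
done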
Ok. I think I understand what you need. Have a look at my .xinitrc:
xrdb ~/.Xdefaults
source ~/.xinitrc.hw.settings
xcompmgr &
xscreensaver &
# after starting some arbitrary crap we want to start the main gui.
startfluxbox & PIDOFAPP=$! ## THIS IS THE IMPORTANT PART
setxkbmap genja
wmclockmon -bl &
sleep 1
wmctrl -s 3 && aterms sone &
sleep 1
wmctrl -s 0
wait $PIDOFAPP ## THIS IS THE SECOND PART OF THE IMPORTANT PART
xeyes -geometry 400x400+500+400 &
sleep 2
echo im out!
What happens is that after you send a process to the background, you can use wait to wait until the process dies. Whatever comes after the wait will not be executed as long as the application is running. You can use this to exit after the GUI has been shut down.
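Stripped of the .xinitrc specifics, the idea reduces to something like this (some-gui-app is a placeholder):

#!/bin/bash
some-gui-app &
gui_pid=$!
wait "$gui_pid"    # blocks until the GUI process exits
echo "GUI closed; cleaning up and exiting"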
PS: I run bash.
I think you need to do:
set -bm
or
set -o monitor -o notify
As per the bash manual:
-b
Cause the status of terminated background jobs to be reported immediately, rather than before printing the next primary prompt.
The shell's main job is executing child processes, and it needs to catch SIGCHLD for its own purposes. This apparently restricts its ability to pass the signal on to the script itself.
Could you just check for the child pid and send the alert based on that? You can find the child pid as below:
bash_pid=$$
while true
do
    children=`ps -eo ppid | grep -w $bash_pid`
    if [ -z "$children" ]; then
        cleanup
        alert
        exit
    fi
    sleep 1   # poll once per second instead of busy-waiting
done

Best way to make a shell script daemon?

I'm wondering if there is a better way to make a daemon that waits for something using only sh than:
#!/bin/sh
trap processUserSig SIGUSR1
processUserSig() {
    echo "doing stuff"
}
while true; do
    sleep 1000
done
In particular, I'm wondering if there's any way to get rid of the loop and still have the thing listen for the signals.
Just backgrounding your script (./myscript &) will not daemonize it. See http://www.faqs.org/faqs/unix-faq/programmer/faq/, section 1.7, which describes what's necessary to become a daemon. You must disconnect it from the terminal so that SIGHUP does not kill it. You can take a shortcut to make a script appear to act like a daemon:
nohup ./myscript 0<&- &>/dev/null &
will do the job. Or, to capture both stderr and stdout to a file:
nohup ./myscript 0<&- &> my.admin.log.file &
Redirection explained (see bash redirection)
0<&- closes stdin
&> file sends stdout and stderr to a file
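Since &> is a bash extension, the portable POSIX-sh spelling of the same command would be:

nohup ./myscript 0<&- >my.admin.log.file 2>&1 &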
However, there may be further important aspects that you need to consider. For example:
You will still have a file descriptor open on the script, which means the filesystem it lives on could not be unmounted. To be a true daemon you should chdir("/") (or cd / inside your script), and fork so that the parent exits and the original descriptor is closed.
Perhaps run umask 0. You may not want to depend on the umask of the caller of the daemon.
For an example of a script that takes all of these aspects into account, see Mike S' answer.
Some of the top-upvoted answers here are missing some important parts of what makes a daemon a daemon, as opposed to just a background process, or a background process detached from a shell.
This http://www.faqs.org/faqs/unix-faq/programmer/faq/ describes what is necessary to be a daemon. And this "Run bash script as daemon" answer implements setsid, though it misses the chdir to root.
The original poster's question was actually more specific than "How do I create a daemon process using bash?", but since the subject and answers discuss daemonizing shell scripts generally, I think it's important to point it out (for interlopers like me looking into the fine details of creating a daemon).
Here's my rendition of a shell script that would behave according to the FAQ. Set DEBUG to true to see pretty output (but it also exits immediately rather than looping endlessly):
#!/bin/bash
DEBUG=false

# This part is for fun, if you consider shell scripts fun- and I do.
trap process_USR1 SIGUSR1
process_USR1() {
    echo 'Got signal USR1'
    echo 'Did you notice that the signal was acted upon only after the sleep was done'
    echo 'in the while loop? Interesting, yes? Yes.'
    exit 0
}
# End of fun. Now on to the business end of things.

print_debug() {
    whatiam="$1"; tty="$2"
    [[ "$tty" != "not a tty" ]] && {
        echo "" >$tty
        echo "$whatiam, PID $$" >$tty
        ps -o pid,sess,pgid -p $$ >$tty
        tty >$tty
    }
}

me_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
me_FILE=$(basename $0)
cd /

#### CHILD HERE --------------------------------------------------------------------->
if [ "$1" = "child" ] ; then # 2. We are the child. We need to fork again.
    shift; tty="$1"; shift
    $DEBUG && print_debug "*** CHILD, NEW SESSION, NEW PGID" "$tty"
    umask 0
    $me_DIR/$me_FILE XXrefork_daemonXX "$tty" "$@" </dev/null >/dev/null 2>/dev/null &
    $DEBUG && [[ "$tty" != "not a tty" ]] && echo "CHILD OUT" >$tty
    exit 0
fi

##### ENTRY POINT HERE -------------------------------------------------------------->
if [ "$1" != "XXrefork_daemonXX" ] ; then # 1. This is where the original call starts.
    tty=$(tty)
    $DEBUG && print_debug "*** PARENT" "$tty"
    setsid $me_DIR/$me_FILE child "$tty" "$@" &
    $DEBUG && [[ "$tty" != "not a tty" ]] && echo "PARENT OUT" >$tty
    exit 0
fi

##### RUNS AFTER CHILD FORKS (actually, on Linux, clone()s. See strace) ------------->
# 3. We have been reforked. Go to work.
exec >/tmp/outfile
exec 2>/tmp/errfile
exec 0</dev/null
shift; tty="$1"; shift
$DEBUG && print_debug "*** DAEMON" "$tty"

# The real stuff goes here. To exit, see fun (above)
$DEBUG && [[ "$tty" != "not a tty" ]] && echo NOT A REAL DAEMON. NOT RUNNING WHILE LOOP. >$tty
$DEBUG || {
    while true; do
        echo "Change this loop, so this silly no-op goes away." >/dev/null
        echo "Do something useful with your life, young padawan." >/dev/null
        sleep 10
    done
}
$DEBUG && [[ "$tty" != "not a tty" ]] && sleep 3 && echo "DAEMON OUT" >$tty
exit # This may never run. Why is it here then? It's pretty.
     # Kind of like, "The End" at the end of a movie that you
     # already know is over. It's always nice.
Output looks like this when DEBUG is set to true. Notice how the session and process group ID (SESS, PGID) numbers change:
<shell_prompt>$ bash blahd
*** PARENT, PID 5180
PID SESS PGID
5180 1708 5180
/dev/pts/6
PARENT OUT
<shell_prompt>$
*** CHILD, NEW SESSION, NEW PGID, PID 5188
PID SESS PGID
5188 5188 5188
not a tty
CHILD OUT
*** DAEMON, PID 5198
PID SESS PGID
5198 5188 5188
not a tty
NOT A REAL DAEMON. NOT RUNNING WHILE LOOP.
DAEMON OUT
# double background your script to have it detach from the tty
# cf. http://www.linux-mag.com/id/5981
(./program.sh &) &
Use your system's daemon facility, such as start-stop-daemon.
Otherwise, yes, there has to be a loop somewhere.
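For example, with Debian-style start-stop-daemon (flag names as in its man page; the paths are illustrative):

start-stop-daemon --start --background \
    --make-pidfile --pidfile /run/myscript.pid \
    --exec /usr/local/bin/myscript.sh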
$ ( cd /; umask 0; setsid your_script.sh </dev/null &>/dev/null & ) &
It really depends on what the binary itself is going to do.
For example, I want to create some listener.
Starting the daemon is a simple task:
lis_deamon:
#!/bin/bash
# We will start the listener as a daemon process
#
LISTENER_BIN=/tmp/deamon_test/listener
test -x $LISTENER_BIN || exit 5
PIDFILE=/tmp/deamon_test/listener.pid

case "$1" in
    start)
        echo -n "Starting Listener Daemon .... "
        startproc -f -p $PIDFILE $LISTENER_BIN
        echo "running"
        ;;
    *)
        echo "Usage: $0 start"
        exit 1
        ;;
esac
This is how we start the daemon (the common way for all /etc/init.d/ scripts).
Now, as for the listener itself: it must contain some kind of loop, alert, or other trigger that makes the script do what you want. For example, if you want your script to sleep 10 minutes, wake up, and ask how you are doing, you would do that with:
while true ; do sleep 600 ; echo "How are you?" ; done
Here is a simple listener that will listen for your commands from a remote machine and execute them locally:
listener:
#!/bin/bash
# Starting listener on some port
# we will run it as a daemon and we will send commands to it.
#
IP=$(hostname --ip-address)
PORT=1024
FILE=/tmp/backpipe
count=0

while [ -a $FILE ] ; do  # if the file exists, assume it is used by another program
    FILE=$FILE.$count
    count=$(($count + 1))
done

# Now we know that no such file exists; the daemon itself (or another
# part of the program) can take care of removing these files.
mknod $FILE p

while true ; do
    netcat -l -s $IP -p $PORT < $FILE | /bin/bash > $FILE
done
rm $FILE
To start it up: /tmp/deamon_test/listener start
and to send commands from a shell (or wrap it in a script):
test_host# netcat 10.184.200.22 1024
uptime
20:01pm up 21 days 5:10, 44 users, load average: 0.62, 0.61, 0.60
date
Tue Jan 28 20:02:00 IST 2014
punt! (Ctrl+C)
Hope this will help.
Have a look at the daemon tool from the libslack package:
http://ingvar.blog.linpro.no/2009/05/18/todays-sysadmin-tip-using-libslack-daemon-to-daemonize-a-script/
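A hypothetical invocation (option names as listed in libslack's daemon(1) man page; the script path is illustrative):

daemon --name=myscript --respawn -- /usr/local/bin/myscript.sh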
On Mac OS X, use a launchd script for a shell daemon.
If I had a script.sh and I wanted to execute it from bash and leave it running even after closing my bash session, I would combine nohup with & at the end.
example: nohup ./script.sh < inputFile.txt > ./logFile 2>&1 &
inputFile.txt can be any file. If your script needs no input, use /dev/null, so the command would be:
nohup ./script.sh < /dev/null > ./logFile 2>&1 &
After that, close your bash session, open another terminal, and execute ps -aux | egrep "script.sh"; you will see that your script is still running in the background. Of course, if you want to stop it, run the same ps command and then kill -9 <PID-OF-YOUR-SCRIPT>.
See Bash Service Manager project: https://github.com/reduardo7/bash-service-manager
Implementation example
#!/usr/bin/env bash
export PID_FILE_PATH="/tmp/my-service.pid"
export LOG_FILE_PATH="/tmp/my-service.log"
export LOG_ERROR_FILE_PATH="/tmp/my-service.error.log"

. ./services.sh

run-script() {
    local action="$1" # Action
    while true; do
        echo "### Running action '${action}'"
        echo foo
        echo bar >&2
        [ "$action" = "run" ] && return 0
        sleep 5
        [ "$action" = "debug" ] && exit 25
    done
}

before-start() {
    local action="$1" # Action
    echo "* Starting with $action"
}

after-finish() {
    local action="$1" # Action
    local serviceExitCode=$2 # Service exit code
    echo "* Finish with $action. Exit code: $serviceExitCode"
}

action="$1"
serviceName="Example Service"
serviceMenu "$action" "$serviceName" run-script "$workDir" before-start after-finish
Usage example
$ ./example-service
# Actions: [start|stop|restart|status|run|debug|tail(-[log|error])]
$ ./example-service start
# Starting Example Service service...
$ ./example-service status
# Service Example Service is running with PID 5599
$ ./example-service stop
# Stopping Example Service...
$ ./example-service status
# Service Example Service is not running
Here is the minimal change to the original proposal to create a valid daemon in Bourne shell (or Bash):
#!/bin/sh
if [ "$1" != "__forked__" ]; then
    setsid "$0" __forked__ "$@" &
    exit
else
    shift
fi

trap 'siguser1=true' SIGUSR1
trap 'echo "Clean up and exit"; kill $sleep_pid; exit' SIGTERM
exec > outfile
exec 2> errfile
exec 0< /dev/null

while true; do
    (sleep 30000000 >/dev/null 2>&1) &
    sleep_pid=$!
    wait
    kill $sleep_pid >/dev/null 2>&1
    if [ -n "$siguser1" ]; then
        siguser1=''
        echo "Wait was interrupted by SIGUSR1, do things here."
    fi
done
Explanation:
Lines 2-7: A daemon must be forked so it doesn't have a parent. An artificial argument prevents endless re-forking. "setsid" detaches from the starting process and terminal.
Line 9: Our desired signal needs to be differentiated from other signals.
Line 10: Cleanup is required to get rid of dangling "sleep" processes.
Lines 11-13: Redirect stdout, stderr and stdin of the script.
Line 16: sleep in the background.
Line 18: wait waits for the end of sleep, but gets interrupted by (some) signals.
Line 19: Kill the sleep process, because it is still running when a signal is caught.
Line 22: Do the work if SIGUSR1 has been caught.
Guess it does not get any simpler than that.
Like many answers, this one is not "real" daemonization but rather an alternative to the nohup approach.
echo "script.sh" | at now
There are obviously differences from using nohup. For one, there is no detaching from the parent in the first place. Also, "script.sh" doesn't inherit the parent's environment.
By no means is this a better alternative. It is simply a different (and somewhat lazy) way of launching processes in the background.
P.S. I personally upvoted carlo's answer, as it seems to be the most elegant and works both from a terminal and inside scripts.
Try executing it using &.
If you save this file as program.sh, you can use:
$ ./program.sh &

write to fifo/pipe from shell, with timeout

I have a pair of shell programs that talk over a named pipe. The reader creates the pipe when it starts, and removes it when it exits.
Sometimes, the writer will attempt to write to the pipe between the time that the reader stops reading and the time that it removes the pipe.
reader: while condition; do read data <$PIPE; do_stuff; done
writer: echo $data >>$PIPE
reader: rm $PIPE
When this happens, the writer will hang forever trying to open the pipe for writing.
Is there a clean way to give it a timeout, so that it won't stay hung until killed manually? I know I can do
#!/bin/sh
# timed_write <timeout> <file> <args>
# like "echo <args> >> <file>" with a timeout
TIMEOUT=$1
shift
FILENAME=$1
shift
PID=$$

(X=0; # don't do "sleep $TIMEOUT", the "kill %1" doesn't kill the sleep
 while [ "$X" -lt "$TIMEOUT" ];
 do sleep 1; X=$(expr $X + 1);
 done; kill $PID) &

echo "$@" >>$FILENAME
kill %1
but this is kind of icky. Is there a shell builtin or command to do this more cleanly (without breaking out the C compiler)?
The UNIX "standard" way of dealing with this is to use Expect, which comes with timed-run example: run a program for only a given amount of time.
Expect can do wonders for scripting, well worth learning it. If you don't like Tcl, there is a Python Expect module as well.
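On systems with GNU coreutils, the timeout utility can also bound the write directly; a sketch, assuming $data and $PIPE as in the question:

# give up if the FIFO gets no reader within 5 seconds
timeout 5 sh -c 'echo "$1" >> "$2"' _ "$data" "$PIPE"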
This pair of programs works much more nicely after being re-written in Perl using Unix domain sockets instead of named pipes. The particular problem in this question went away entirely, since if/when one end dies the connection disappears instead of hanging.
This question comes up periodically (though I couldn't find it with a search). I've written two shell scripts to use as timeout commands: one for things that read standard input and one for things that don't read standard input. This stinks, and I've been meaning to write a C program, but I haven't gotten around to it yet. I'd definitely recommend writing a timeout command in C once and for all. But meanwhile, here's the simpler of the two shell scripts, which hangs if the command reads standard input:
#!/bin/ksh
# our watchdog timeout in seconds
maxseconds="$1"
shift

case $# in
    0) echo "Usage: `basename $0` <seconds> <command> [arg ...]" 1>&2 ;;
esac

"$@" &
waitforpid=$!

{
    sleep $maxseconds
    echo "TIMED OUT: $@" 1>&2
    2>/dev/null kill -0 $waitforpid && kill -15 $waitforpid
} &
killerpid=$!

>>/dev/null 2>&1 wait $waitforpid
# this is the exit value we care about, so save it and use it when we exit
rc=$?

# zap our watchdog if it's still there, since we no longer need it
2>>/dev/null kill -0 $killerpid && kill -15 $killerpid
exit $rc
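Usage follows the script's own usage message; for example, assuming it is saved as ./timeout (sleep 60 stands in for the real command):

./timeout 5 sleep 60    # sends SIGTERM to sleep after 5 seconds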
The other script is online at http://www.cs.tufts.edu/~nr/drop/timeout.
# Watchdog via at(1): schedule "kill -30 $$" to fire after the timeout in $1,
# trap signal 30, then run the remaining arguments as the command and wait.
trap 'kill $(ps -L $! -o pid=); exit 30' 30
echo kill -30 $$ 2\>/dev/null | at $1 2>/dev/null
shift; eval "$@" &
wait
