How to implement event handling in a shell script?

In my shell script, I'm deleting a file at the end of the script, and I need it to be deleted even if the script is stopped by Ctrl+C or Ctrl+Z. Is there any way to catch that and delete the file?
Thanks in advance

As @pgl said, trap is what you want. The syntax is:
trap <action> <event> [event...]
The action is one and only one argument, but it can run several commands. Each event is either EXIT (which fires when the script exits, e.g. when you call exit manually) or a signal given by its "short" name, i.e. without the SIG prefix (for instance, INT for SIGINT).
Example:
trap "rm -f myfile" INT EXIT
You can change the trap all along the script. And of course, you can use variable interpolation in your action.
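Putting it together, a minimal sketch (the file name is a placeholder) that cleans up both on signals and on normal exit:
#!/bin/sh
myfile=$(mktemp)                 # or any file your script creates

cleanup() {
    rm -f "$myfile"
}
trap 'cleanup; exit 1' INT TERM  # Ctrl+C or kill: clean up, then exit
trap cleanup EXIT                # normal end of script (rm -f makes the
                                 # second call on the signal path harmless)

# ... do the real work with "$myfile" here ...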

You can catch Ctrl+C with the trap builtin. Try this to get started:
help trap

Related

When trapping SIGINT in bash what effect does the -- have in the trap command?

I've been reading various posts about handling SIGINT in bash, but I still don't properly understand it.
I know that trap '_handler_name' SIGINT runs _handler_name when the signal is received. (People always seem to put it in single quotes, but I don't know why. It doesn't seem necessary to me.)
I was hoping to be able to trap SIGINT and handle it without aborting a loop in my script, but that only seems to work when the loop is in its own subshell. (I don't know why that is...)
I had thought that using trap -- '_handler_name' SIGINT might somehow stop other parts of the script from aborting when the signal is received. (This is based upon my reading of this answer.)
So my main question is: what effect does the -- have on trap? I thought that always just meant "that's the end of the switches", but the example I was looking at didn't have anything starting with a - after it, so it looks redundant.
And sub-questions that would help my understanding are: what effect does trap have on subshells? and why do people put the handler name in quotes in the trap command?
For context, what I'm trying to do is spot a SIGINT, politely kill a couple of processes, then wait for a few seconds for everything to finish before exiting manually.
PS This article was interesting, though I didn't manage to get my solution from reading it.
UPDATE: I've moved what was here to a new question, since it turns out that what I'm asking here isn't the cause of the issue I've observed.
what effect does the -- have on trap
Yes, it just means "that's the end of the switches". https://github.com/bminor/bash/blob/master/builtins/trap.def#L110 -> https://github.com/bminor/bash/blob/f3a35a2d601a55f337f8ca02a541f8c033682247/builtins/bashgetopt.c#L85
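A contrived illustration of where that matters: without --, an action string that begins with a dash is parsed as an option:
trap '-p' INT     # trap takes -p as its print option and just prints the INT trap
trap -- '-p' INT  # installs the literal string "-p" as the INT action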
what effect does trap have on subshells?
From bash manual command execution environment https://www.gnu.org/software/bash/manual/bash.html#Command-Execution-Environment :
When a simple command other than a builtin or shell function is to be executed, it is invoked in a separate execution environment that consists of the following. Unless otherwise noted, the values are inherited from the shell.
...
traps caught by the shell are reset to the values inherited from the shell’s parent, and traps ignored by the shell are ignored
...
Command substitution, commands grouped with parentheses, and asynchronous commands are invoked in a subshell environment that is a duplicate of the shell environment, except that traps caught by the shell are reset to the values that the shell inherited from its parent at invocation.
Also below near trap builtin:
Trapped signals that are not being ignored are reset to their original values in a subshell or subshell environment when one is created.
Also, the errexit option is relevant, specifically for the ERR trap.
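A small sketch to observe the reset (bash-specific, since it uses $BASHPID):
#!/bin/bash
trap 'echo "parent trap ran"' USR1
kill -USR1 $$               # the trap fires in the main shell
( kill -USR1 "$BASHPID" )   # in the subshell the trap has been reset, so
                            # the default action terminates the subshell
echo "subshell died of USR1 without running the handler"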
why do people put the handler name in quotes in the trap command?
Cosmetics. Because trap re-evaluates the string when the trap fires, the quotes are a visual cue that the argument is a string to be re-evaluated later, not a list of words.
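That said, the choice of quotes does matter once the action contains a variable, because it controls when the variable is expanded:
file=/tmp/a
trap "rm -f $file" EXIT   # double quotes: $file expands right now, so the
                          # action is frozen as: rm -f /tmp/a
trap 'rm -f $file' EXIT   # single quotes: $file expands when the trap fires,
                          # so it tracks later changes to the variable
file=/tmp/b               # the single-quoted version would now remove /tmp/b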

BASH: special parameter for the last command executed, not the last parameter

I am looking for a workaround for processes with a long duration.
There is the special parameter $_, containing the last argument of the previous command.
Well, I am asking you for something like the reverse.
For example:
/etc/init.d/service stop; /etc/init.d/service start
... could be easier if there were a parameter/variable containing the last binary/script called. Let's define it as $. and we get this:
/etc/init.d/service stop; $. start
Do you have any idea how to get this?
I found this thread on SO.
But I only get output like this:
printf "\033]0;%s#%s:%s\007" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"
But the var $BASH_COMMAND is working well:
# echo $BASH_COMMAND
echo $BASH_COMMAND
# echo $BASH_VERSION
4.1.2(1)-release
Any help is very appreciated!
Thank you,
Florian
You can re-execute the last command by using:
!!
however, this won't help with what you want to do, so you could try using the "search and replace on last command" shortcut:
^<text to search for>^<text to replace with>^
so your problem could be solved using:
/etc/init.d/service stop; ^stop^start^
NOTE: This will only replace the first instance of the search text.
Also, see the comments below by more experienced peeps for other examples and useful sources.
If the primary problem is the duration of the first process, and you know what the next process will be, you can simply issue a wait command against the first process and follow it with the second.
Example with backgrounded process:
./longprocess &
wait ${!}; ./nextprocess # ${!} simply pulls the PID of the last bg process
Example with manual PID entry:
./longprocess
# determine PID of longprocess
wait [PID]; ./nextprocess
Or, if it is always start|stop of init scripts, could make a custom script like below.
#!/bin/bash
# wrapperscript.sh
BASESCRIPT=${1}
"$BASESCRIPT" stop
"$BASESCRIPT" start
Since the commands are wrapped in a shellscript, the default behavior will be for the shell to wait for each command to complete before moving on to the next. So, execution would look like:
./wrapperscript.sh /etc/init.d/service

How to read escape characters in shell script?

How to catch Ctrl+C and Ctrl+Z in a shell script?
Thanks in advance
Added:
What my requirement is: I'm deleting a file at the end of the script. If the script is stopped (by Ctrl+C or Ctrl+Z), I also need to delete that file.
Every time a user presses Ctrl+C (or any other special key combination), a signal is sent to your script.
You will need to catch this signal in your script using the trap command.
It takes a while to explain, but this page contains a good explanation of managing signals: http://linuxcommand.org/wss0160.php
#!/bin/sh
trap 'echo Hi there' INT USR1 TERM
while true; do sleep 1; done
Read man kill for the list of allowed signals that you can put there; note the description field in the SIGNALS section of the kill man page, which mentions which signals can be caught (trapped) by your shell script.
Note: Ctrl+C sends the INT (interrupt) signal; Ctrl+Z sends TSTP (terminal stop), which can also be trapped.
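Applied to the question above, a sketch (the file name is a placeholder; note that trapping TSTP turns Ctrl+Z from "suspend" into "clean up and exit", which is what was asked for):
#!/bin/sh
cleanup() {
    rm -f /tmp/myfile   # placeholder for the real file
    exit 1
}
trap cleanup INT TSTP TERM

# ... script body that uses /tmp/myfile ...

rm -f /tmp/myfile       # normal cleanup at the end of the script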

Shell script that can check if it was backgrounded at invocation

I have written a script that relies on other server responses (uses wget to pull data), and I want it to always be run in the background unquestionably. I know one solution is to just write a wrapper script that will call my script with an & appended, but I want to avoid that clutter.
Is there a way for a bash (or zsh) script to determine if it was called with say ./foo.sh &, and if not, exit and re-launch itself as such?
The definition of a background process (I think) is that it has a controlling terminal but it is not part of that terminal's foreground process group. I don't think any shell, even zsh, gives you any access to that information through a builtin.
On Linux (and perhaps other unices), the STAT column of ps includes a + when the process is part of its terminal's foreground process group. So a literal answer to your question is that you could put your script's content in a main function and invoke it with:
case $(ps -o stat= -p $$) in
  *+*) main "$@" &;;   # + means foreground: relaunch main in the background
  *) main "$@";;       # no +: already backgrounded, run normally
esac
But you might as well run main "$@" & anyway. On Unix, fork is cheap.
However, I strongly advise against doing what you propose. This makes it impossible for someone to run your script and do something else afterwards — one would expect to be able to write your_script; my_postprocessing or your_script && my_postprocessing, but forking the script's main task makes this impossible. Considering that the gain is occasionally saving one character when the script is invoked, it's not worth making your script markedly less useful in this way.
If you really mean for the script to run in the background so that the user can close his terminal, you'll need to do more work — you'll need to daemonize the script, which includes not just backgrounding but also closing all file descriptors that have the terminal open, making the process a session leader and more. I think that will require splitting your script into a daemonizing wrapper script and a main script. But daemonizing is normally done for programs that never terminate unless explicitly stopped, which is not the behavior you describe.
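For completeness, a minimal detaching sketch along those lines, assuming setsid is available (as on Linux); the script name and log path are placeholders:
# Start in a new session with no controlling terminal, stdio detached:
setsid ./myscript.sh </dev/null >>/tmp/myscript.log 2>&1 &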
I do not know how to do this directly, but you can set a variable in the parent invocation and check for it in the child:
if [[ -z "$_BACKGROUNDED" ]] ; then
    _BACKGROUNDED=1 exec "$0" "$@" & exit
fi
# Put code here
Works both in bash and zsh.
The "tty" command says "not a tty" if the script's standard input is not a terminal, or gives the controlling terminal name (/dev/pts/1 for example) if it is. A simple way to tell, though note that a job backgrounded with & usually still has the terminal on its stdin.
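A minimal sketch of that check:
if [ "$(tty)" = "not a tty" ]; then
    echo "stdin is not attached to a terminal"
else
    echo "stdin is attached to $(tty)"
fi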
Remember that you can't (or rather, shouldn't) edit the running script; this question and its answers give workarounds.
I haven't written shell scripts in a long time, but I can give you an idea. You can check the value of $$ (the PID of the process) and compare it with the output of the command "jobs -l". That command returns the PIDs of all the backgrounded processes (jobs), and if the value of $$ is contained in the result of "jobs -l", the current script is running in the background.

Removing created temp files in unexpected bash exit

I am creating temporary files from a bash script. I am deleting them at the end of the processing, but since the script is running for quite a long time, if I kill it or simply CTRL-C during the run, the temp files are not deleted.
Is there a way I can catch those events and clean-up the files before the execution ends?
Also, is there some kind of best practice for the naming and location of those temp files?
I'm currently not sure between using:
TMP1=`mktemp -p /tmp`
TMP2=`mktemp -p /tmp`
...
and
TMP1=/tmp/`basename $0`1.$$
TMP2=/tmp/`basename $0`2.$$
...
Or maybe is there some better solutions?
I usually create a directory in which to place all my temporary files, and then immediately after, create an EXIT handler to clean up this directory when the script exits.
MYTMPDIR="$(mktemp -d)"
trap 'rm -rf -- "$MYTMPDIR"' EXIT
If you put all your temporary files under $MYTMPDIR, then they will all be deleted when your script exits in most circumstances. Killing a process with SIGKILL (kill -9) kills the process right away though, so your EXIT handler won't run in that case.
You could set a "trap" to execute on exit or on a control-c to clean up.
trap '{ rm -f -- "$LOCKFILE"; }' EXIT
Alternatively, one of my favourite unix-isms is to open a file, and then delete it while you still have it open. The file stays on the file system and you can read and write it, but as soon as your program exits, the file goes away. Not sure how you'd do that in bash, though.
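It can in fact be done in the shell with exec and a spare file descriptor; a sketch:
tmpfile=$(mktemp)
exec 3<>"$tmpfile"        # open fd 3 on the file for reading and writing
rm -- "$tmpfile"          # unlink: the name disappears, but the open fd
                          # keeps the data alive
echo "scratch data" >&3   # fd 3 still works
exec 3>&-                 # closing the fd finally releases the storage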
BTW: One argument I'll give in favour of mktemp instead of using your own solution: if the user anticipates your program is going to create huge temporary files, he might want to set TMPDIR to somewhere bigger, like /var/tmp. mktemp recognizes that; your hand-rolled solution (the second option) doesn't. I frequently use TMPDIR=/var/tmp gvim -d foo bar, for instance.
You want to use the trap command to handle exiting the script or signals like CTRL-C. See the Greg's Wiki for details.
For your tempfiles, using basename $0 is a good idea, as well as providing a template with enough room for plenty of temp files:
tempfile() {
    tempprefix=$(basename "$0")
    mktemp "/tmp/${tempprefix}.XXXXXX"
}
TMP1=$(tempfile)
TMP2=$(tempfile)
trap 'rm -f "$TMP1" "$TMP2"' EXIT
Just keep in mind that the chosen answer is a bashism, which means a solution such as
trap "{ rm -f $LOCKFILE; }" EXIT
would work only in bash (it will not catch Ctrl+C if the shell is dash or classic sh); if you want compatibility, you still need to enumerate all the signals that you want to trap.
Also keep in mind that when the script exits, the trap for signal "0" (aka EXIT) is always performed, resulting in double execution of the trap command.
That is the reason not to stack all signals on one line together with the EXIT signal.
To better understand it, look at the following script, which will work across different systems without changes:
#!/bin/sh

on_exit() {
    echo 'Cleaning up... (remove tmp files, etc.)'
}

on_preExit() {
    echo
    echo 'Exiting...'   # Runs just before the actual exit; after this
                        # function finishes, the shell executes the EXIT (0)
                        # trap, which we hook with on_exit above
    exit 2
}

trap on_exit EXIT                  # EXIT = 0
trap on_preExit HUP INT QUIT TERM  # 1 2 3 15 (note: SIGSTOP cannot be
                                   # trapped, and PWR is Linux-specific)

sleep 3  # some actual code...
exit
This solution gives you more control, since you can run some of your code on the occurrence of the actual signal just before the final exit (the on_preExit function), and, if needed, you can run some code at the actual EXIT signal (the final stage of the exit).
GOOD HABITS ARE BEAUTIFUL
Avoid assuming the value of a variable is never going to be changed at some super distant time (especially if such a bug would raise an error).
Do cause trap to expand the value of a variable immediately if applicable to your code. Any variable name passed to trap in single quotes will delay the expansion of its value until after the catch.
Avoid the assumption that a file name will not contain any spaces.
Do use Bash's ${VAR@Q} or $(printf '%q' "$VAR") to overcome issues caused by spaces and other special characters like quotes and carriage returns in file names.
zTemp=$(mktemp --tmpdir "$(basename "$0")-XXX.ps")
trap "rm -f ${zTemp@Q}" EXIT
The alternative of using a predictable file name with $$ is a gaping security hole and you should never, ever, ever think about using it. Even if it is just a simple personal script on your single-user PC, it is a very bad habit you should not pick up. BugTraq is full of "insecure temp file" incidents. See here, here and here for more information on the security aspect of temp files.
I was initially thinking of quoting the insecure TMP1 and TMP2 assignments, but on second thought that would probably not be a good idea.
I prefer using tempfile, which creates a file in /tmp in a safe manner, so you do not have to worry about its naming:
tmp=$(tempfile -s "your_suffix")
trap "rm -f '$tmp'" EXIT
You don't have to bother removing tmp files created with mktemp; they will eventually be deleted anyway (at reboot, or by periodic /tmp cleaners on many systems).
Use mktemp if you can, as it generates more unique names than a '$$' prefix, and it is a more cross-platform way to create temp files than explicitly putting them into /tmp.
