Trap not activated when calling functions - ksh

I am using set -e and a trap handler to produce error messages in my ksh scripts.
#!/bin/ksh
set -e
myexit()
{
if [[ $1 != 0 ]]; then
echo "ERROR: Script $0 failed unexpectedly with signal $1!"
fi
}
settrap()
{
for sig in INT TERM EXIT; do
#echo "setting trap for $sig..."
trap "code=$?;trap - INT TERM EXIT;myexit $code \"$sig\"; [[ $sig == EXIT ]] || kill -$sig $$" $sig
done
}
settrap
Now I have the strange behavior that this works for calling old-style functions, but not for functions calling functions.
test1()
{
echo "test1"
eval test2
}
test2()
{
echo "test2"
return -1
}
test3()
{
settrap
echo "test1"
eval test2
}
What will happen?
test1 will exit but not call myexit
test2 and test3 will call myexit.
Question: Why does test1 not cause myexit to be called when the call to test2 returns -1?
Edit: The problem is not because functions have local traps. As explained here: Old-style POSIX functions (those created using the name() syntax) share traps with the parent script.
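A minimal sketch (plain name()-style syntax, hedged) of what sharing traps means here: the script-level trap is the only one in play, and calling the function neither installs nor removes a private one.
#!/bin/ksh
trap 'echo "script-level EXIT trap fired"' EXIT
f() { echo "inside f"; }   # name()-style definition: no private trap table
f
# the single script-level EXIT trap fires once, when the script exits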

The behavior seems to be a bug with signal bubbling in ksh88.
ksh function (not posix) trap not receiving signals -HUP, -TERM but does receive -INT
I switched to using dtksh, which is a newer ksh version available on my system, and everything works fine.
This shebang solves the issue:
#!/usr/dt/bin/dtksh

Related

Is there some command that would guarantee stop of further processing but not exit terminal?

Is there something between exit and return 1 in bash? Some command that would guarantee a stop of further processing but not exit the terminal?
Meaning: if I use exit in a sourced function in my bash, then any time exit is invoked it will actually quit bash (or log off the ssh connection if I am on a remote host). If I use return 1, then I have to check the value in the calling function.
With return I have to write code like the following:
foo(){
if [[ "$#" -ne 1 ]]; then
echo "Unexpected number of arguments [actual=$#][expected=1]"
return 1
fi
# ... do stuff.
}
bar(){
foo
if [[ "$?" -ne 0 ]]; then
echo "Line:$LINENO failure"
return 1;
fi
# do stuff only when foo() is successful
}
I could use exit, but as described, I will then quit my current bash if the operation is not successful:
foo(){
if [[ "$#" -ne 1 ]]; then
echo "Unexpected number of arguments [actual=$#][expected=1]"
exit
fi
# ... do stuff.
}
bar(){
foo
# do stuff only when foo() is successful
}
What I would like is something like:
foo(){
if [[ "$#" -ne 1 ]]; then
echo "Unexpected number of arguments [actual=$#][expected=1]"
# Simulate CTRL+C press (to cause everything to halt but not exit terminal)
# Like an exception throw or something?
fi
# ... do stuff.
}
bar(){
foo
# do stuff only when foo() is successful
}
With return I have to write code like the following:
bar() {
foo
if [[ "$?" -ne 0 ]]; then
echo "Line:$LINENO failure"
return 1;
fi
# do stuff only when foo() is successful
}
Explicitly checking $? can usually be avoided. You can shorten the above to:
bar() {
foo || return
# do stuff only when foo() is successful
}
You just run a command as the if expression. if checks the return code: if it is 0, the condition evaluates as true; if it is non-zero, it evaluates as false.
Here is some sample code.
$ foo() { return 0; }
$ if foo
> then
> echo hello
> else
> echo good bye
> fi
hello
$ foo() { return 1; }
$ if foo
> then
> echo hello
> else
> echo good bye
> fi
good bye
You really should just check the exit value. But, if you want to return when the shell is interactive and exit otherwise, in bash you could do:
foo() {
case $- in
*i*) return 1;;
*) exit 1;;
esac
}
What I was looking for is kill -INT -$$, which interrupts the current process but does not exit the current shell (unlike exit 1). This allows kill -INT -$$ to be used in an interactive command-line shell.
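For context, a minimal sketch of that idea, assuming the function lives in a file sourced into an interactive bash session (function name and message mirror the example above):
foo() {
    if [[ "$#" -ne 1 ]]; then
        echo "Unexpected number of arguments [actual=$#][expected=1]"
        # Interrupt the current process group: further processing stops,
        # but the interactive shell survives and returns to its prompt.
        kill -INT -$$
    fi
    # ... do stuff.
}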

bash: error handling and functions

I am trying to call a function in a loop and gracefully handle and continue when it throws.
If I omit the || handle_error it just stops the entire script as one would expect.
If I leave || handle_error there it will print foo is fine after the error and will not execute handle_error at all. This is also an expected behavior, it's just how it works.
#!/bin/bash
set -e
things=(foo bar)
function do_something {
echo "param: $1"
# just throw on first loop run
# this statement is just a way to selectively throw
# not part of a real use case scenario where the command(s)
# may or may not throw
if [[ $1 == "foo" ]]; then
throw_error
fi
# this line should not be executed when $1 is "foo"
echo "$1 is fine."
}
function handle_error {
echo "$1 failed."
}
for thing in ${things[@]}; do
do_something $thing || handle_error $thing
done
echo "done"
yields
param: foo
./test.sh: line 12: throw_error: command not found
foo is fine.
param: bar
bar is fine.
done
what I would like to have is
param: foo
./test.sh: line 12: throw_error: command not found
foo failed.
param: bar
bar is fine.
done
Edit:
do_something doesn't really have to return anything. It's just an example of a function that throws. I could potentially remove it from the example source code, because I have no control over its content, nor do I want to, and testing each command in it for failure is not viable.
Edit:
You are not allowed to touch the do_something logic. As I stated before, it's just a function containing a set of instructions that may throw an error. It may be a typo, it may be make failing in a CI environment, it may be a network error.
The solution I found is to save the function in a separate file and execute it in a sub-shell. The downside is that we lose all locals.
do-something.sh
#!/bin/bash
set -e
echo "param: $1"
if [[ $1 == "foo" ]]; then
throw_error
fi
echo "$1 is fine."
my-script.sh
#!/bin/bash
set -e
things=(foo bar)
function handle_error {
echo "$1 failed."
}
for thing in "${things[#]}"; do
./do-something.sh "$thing" || handle_error "$thing"
done
echo "done"
yields
param: foo
./do-something.sh: line 8: throw_error: command not found
foo failed.
param: bar
bar is fine.
done
If there is a more elegant way I will mark that as correct answer. Will check again in 48h.
Edit
Thanks to @PeterCordes' comment and this other answer, I found another solution that doesn't require separate files.
#!/bin/bash
set -e
things=(foo bar)
function do_something {
echo "param: $1"
if [[ $1 == "foo" ]]; then
throw_error
fi
echo "$1 is fine."
}
function handle_error {
echo "$1 failed with code: $2"
}
for thing in "${things[#]}"; do
set +e; (set -e; do_something "$thing"); error=$?; set -e
((error)) && handle_error "$thing" $error
done
echo "done"
correctly yields
param: foo
./test.sh: line 11: throw_error: command not found
foo failed with code: 127
param: bar
bar is fine.
done
#!/bin/bash
set -e
things=(foo bar)
function do_something() {
echo "param: $1"
ret_value=0
if [[ $1 == "foo" ]]; then
ret_value=1
elif [[ $1 == "fred" ]]; then
ret_value=2
fi
echo "$1 is fine."
return $ret_value
}
function handle_error() {
echo "$1 failed."
}
for thing in ${things[@]}; do
do_something $thing || handle_error $thing
done
echo "done"
See my comment above for the explanation. You can't test for a return value without creating a return value, which should be somewhat obvious. And || tests for a non-zero return value, just as && tests for 0. The bash return value must be an integer between 0 and 255; it can't be a string, a float, etc.
http://tldp.org/LDP/abs/html/complexfunct.html
Functions return a value, called an exit status. This is analogous to
the exit status returned by a command. The exit status may be
explicitly specified by a return statement, otherwise it is the exit
status of the last command in the function (0 if successful, and a
non-zero error code if not). This exit status may be used in the
script by referencing it as $?. This mechanism effectively permits
script functions to have a "return value" similar to C functions.
So actually, foo there would have returned a 127 command not found error. I think. I'd have to test to see for sure.
[updated]
No, echo is the last command, as you can see from your output. And the outcome of the last echo is 0, so the function returns 0. So you want to dump this notion and go to something like trap, that's assuming you can't touch the internals of the function, which is odd.
echo fred; echo reval: $?
fred
reval: 0
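To make the quoted rule about explicit and implicit exit statuses concrete, a minimal sketch (function names are illustrative):
explicit() { false; return 3; }   # status forced by the return statement
implicit() { false; }             # status is that of the last command (1)
explicit; echo "explicit -> $?"   # prints: explicit -> 3
implicit; echo "implicit -> $?"   # prints: implicit -> 1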
What does set -e mean in a bash script?
-e Exit immediately if a command exits with a non-zero status.
But it's not very reliable and is considered bad practice; better to use:
trap 'do_something' ERR
http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_12_02.html see trap, that may do what you want. But not ||, unless you add returns.
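As a minimal, hedged sketch of the trap ... ERR suggestion (the handler name on_err is made up):
#!/bin/bash
on_err() { echo "ERROR: '$BASH_COMMAND' exited with status $?" >&2; }
trap on_err ERR
false              # fires on_err, but the script keeps running (no set -e here)
echo "still running"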
try
if [[ "$1" == "foo" ]]; then
foo
fi
I wonder if it was trying to execute the command foo within the condition test?
from bash reference:
-e
Exit immediately if a pipeline (see Pipelines), which may consist of a single simple command (see Simple Commands), a list (see Lists),
or a compound command (see Compound Commands) returns a non-zero
status. The shell does not exit if the command that fails is part of
the command list immediately following a while or until keyword, part
of the test in an if statement, part of any command executed in a &&
or || list except the command following the final && or ||, any
command in a pipeline but the last, or if the command’s return status
is being inverted with !.
as you can see, if the error occurs within the test condition, then the script will continue oblivious and return 0.
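A quick way to see this behaviour (sketch; missing_command is assumed not to exist):
#!/bin/bash
set -e
if missing_command; then   # fails with 127, but set -e ignores failures in an if condition
    echo "not reached"
fi
echo "script continues"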
--
Edit
So in response, I note that the docs continue:
If a compound command other than a subshell returns a non-zero status
because a command failed while -e was being ignored, the shell does
not exit.
Well, because your for loop is followed by the echo, there's no reason for an error to be thrown!

exit from bash script that uses $()

I was under the impression that exit would terminate the current bash script no matter what, and had the following error handler at the top of my script:
function err {
printf "\e[31m$1\e[0m\n" 1>&2
exit 1
}
It worked like a charm for most cases until this line:
item=$(myfunc $1)
Normally, that line works fine, with the STDOUT of myfunc dumped into $item, as intended. The problem arises when myfunc throws an error, via the err function seen above. The $() ends up swallowing the non-zero return and guarding the exit from exiting the script itself. If I understand correctly, the problem is that $() actually spawns a new subshell (just like the deprecated backticks), but I know of no other way to capture the output of a function into a variable in bash that allows the exit to work.
I tried using set -e as well, and had no luck with that either. Can someone suggest how to build my error handler so that it exits the script even in these cases?
You can test the result of the assignment:
if item=$(myfunc "$1")
then : Function worked
else : Function failed
fi
This tests the exit status of the command run in the sub-shell that the $(…) uses.
Without actually using functions, you can experiment with:
$ if item=$(echo Hi; exit 1); then echo "$item - OK"; else echo "$item - OH"; fi
Hi - OH
$ if item=$(echo Hi; exit 0); then echo "$item - OK"; else echo "$item - OH"; fi
Hi - OK
$
Or, if functions are deemed crucial, then:
$ err() { exit $1; }
$ myfunc() { echo Mine; err $1; }
$ if item=$(myfunc 1); then echo "$item - OK"; else echo "$item - OH"; fi
Mine - OH
$ if item=$(myfunc 0); then echo "$item - OK"; else echo "$item - OH"; fi
Mine - OK
$
Tested using Bash 3.2.57 on Mac OS X 10.11 El Capitan.
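If all you need is to abort the caller when the function fails, a compact variant of the same idea (testing the assignment's exit status; hedged sketch) is:
item=$(myfunc "$1") || exit $?   # the assignment's status is myfunc's exit status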
For what it's worth, you also can't use bash short-circuit evaluation with a [[ ... ]] test to check the exit status of the function; the test operates on the resulting string rather than on the exit status.
e.g. while
[[ $(myfunc "$1") ]] && echo success || echo failure
or
[[ myfunc "$1" ]] && echo success || echo failure
are fairly common and might do what's expected,
[[ item=$(myfunc "$1") ]] && echo success || echo failure
always returns success even if myfunc returns non-zero. You have to use if/then/else to test the exit status.
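A quick illustration of why (sketch): inside [[ ]] the text item=$(...) is not an assignment at all, it is just a word, and after the substitution that word is non-empty, so the test always succeeds.
[[ item=$(false) ]] && echo "always success"   # prints "always success"; no assignment happens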

Importing functions from a shell script

I have a shell script that I would like to test with shUnit. The script (and all the functions) are in a single file since it makes installation much easier.
Example for script.sh
#!/bin/sh
foo () { ... }
bar () { ... }
code
I wanted to write a second file (that does not need to be distributed and installed) to test the functions defined in script.sh
Something like run_tests.sh
#!/bin/sh
. script.sh
# Unit tests
Now the problem lies in the . (or source in Bash). It not only parses the function definitions but also executes the code in the script.
Since the script with no arguments does nothing bad I could
. script.sh > /dev/null 2>&1
but I was wondering if there is a better way to achieve my goal.
Edit
My proposed workaround does not work when the sourced script calls exit, so I have to trap the exit:
#!/bin/sh
trap run_tests ERR EXIT
run_tests() {
...
}
. script.sh
The run_tests function is called, but as soon as I redirect the output of the source command, the functions in the script are not parsed and are not available in the trap handler.
This works but I get the output of script.sh:
#!/bin/sh
trap run_tests ERR EXIT
run_tests() {
function_defined_in_script_sh
}
. script.sh
This does not print the output but I get an error that the function is not defined:
#!/bin/sh
trap run_tests ERR EXIT
run_tests() {
function_defined_in_script_sh
}
. script.sh | grep OUTPUT_THAT_DOES_NOT_EXISTS
This does not print the output and the run_tests trap handler is not called at all:
#!/bin/sh
trap run_tests ERR EXIT
run_tests() {
function_defined_in_script_sh
}
. script.sh > /dev/null
According to the “Shell Builtin Commands” section of the bash manpage, . aka source takes an optional list of arguments which are passed to the script being sourced. You could use that to introduce a do-nothing option. For example, script.sh could be:
#!/bin/sh
foo() {
echo foo $1
}
main() {
foo 1
foo 2
}
if [ "${1}" != "--source-only" ]; then
main "${#}"
fi
and unit.sh could be:
#!/bin/bash
. ./script.sh --source-only
foo 3
Then script.sh will behave normally, and unit.sh will have access to all the functions from script.sh but will not invoke the main() code.
Note that the extra arguments to source are not in POSIX, so /bin/sh might not handle it—hence the #!/bin/bash at the start of unit.sh.
Picked up this technique from Python, but the concept works just fine in bash or any other shell...
The idea is that we turn the main code section of our script into a function. Then at the very end of the script, we put an 'if' statement that will only call that function if we executed the script but not if we sourced it. Then we explicitly call the script() function from our 'runtests' script which has sourced the 'script' script and thus contains all its functions.
This relies on the fact that if we source the script, the bash-maintained environment variable $0, which is the name of the script being executed, will be the name of the calling (parent) script (runtests in this case), not the sourced script.
(I've renamed script.sh to just script cause the .sh is redundant and confuses me. :-)
Below are the two scripts. Some notes...
$@ evaluates to all of the arguments passed to the function or
script as individual strings. If instead, we used $*, all the
arguments would be concatenated together into one string.
The RUNNING="$(basename $0)" is required since $0 always includes at
least the current directory prefix as in ./script.
The test if [[ "$RUNNING" == "script" ]].... is the magic that causes
script to call the script() function only if script was run directly
from the commandline.
script
#!/bin/bash
foo () { echo "foo()"; }
bar () { echo "bar()"; }
script () {
ARG1=$1
ARG2=$2
#
echo "Running '$RUNNING'..."
echo "script() - all args: $#"
echo "script() - ARG1: $ARG1"
echo "script() - ARG2: $ARG2"
#
foo
bar
}
RUNNING="$(basename $0)"
if [[ "$RUNNING" == "script" ]]
then
script "$#"
fi
runtests
#!/bin/bash
source script
# execute 'script' function in sourced file 'script'
script arg1 arg2 arg3
If you are using Bash, a similar solution to @andrewdotn's approach (but without needing an extra flag or depending on the script name) can be accomplished by using the BASH_SOURCE array.
script.sh:
#!/bin/bash
foo () { ... }
bar () { ... }
main() {
code
}
if [[ "${#BASH_SOURCE[#]}" -eq 1 ]]; then
main "$#"
fi
run_tests.sh:
#!/bin/bash
. script.sh
# Unit tests
If you are using Bash, another solution may be:
#!/bin/bash
foo () { ... }
bar () { ... }
[[ "${FUNCNAME[0]}" == "source" ]] && return
code
I devised this. Let's say our shell library file is the following file, named aLib.sh:
funcs=("a" "b" "c") # File's functions' names
for((i=0;i<${#funcs[@]};i++)); # Avoid function collision with existing
do
declare -f "${funcs[$i]}" >/dev/null
[ $? -eq 0 ] && echo "!!ATTENTION!! ${funcs[$i]} is already sourced"
done
function a(){
echo function a
}
function b(){
echo function b
}
function c(){
echo function c
}
if [ "$1" == "--source-specific" ]; # Source only specific given as arg
then
for((i=0;i<${#funcs[@]};i++));
do
for((j=2;j<=$#;j++));
do
anArg=$(eval 'echo ${'$j'}')
test "${funcs[$i]}" == "$anArg" && continue 2
done
unset ${funcs[$i]}
done
fi
unset i j funcs
At the beginning it checks for and warns about any detected function name collision.
At the end, bash has already sourced all the functions, so it unsets the unwanted ones from memory and keeps only the selected ones.
Can be used like this:
user@pc:~$ source aLib.sh --source-specific a c
user@pc:~$ a; b; c
function a
bash: b: command not found
function c
~

Multiple Bash traps for the same signal

When I use the trap command in Bash, the previous trap for the given signal is replaced.
Is there a way of making more than one trap fire for the same signal?
Technically you can't set multiple traps for the same signal, but you can add to an existing trap:
Fetch the existing trap code using trap -p
Add your command, separated by a semicolon or newline
Set the trap to the result of #2
Here is a bash function that does the above:
# note: printf is used instead of echo to avoid backslash
# processing and to properly handle values that begin with a '-'.
log() { printf '%s\n' "$*"; }
error() { log "ERROR: $*" >&2; }
fatal() { error "$#"; exit 1; }
# appends a command to a trap
#
# - 1st arg: code to add
# - remaining args: names of traps to modify
#
trap_add() {
trap_add_cmd=$1; shift || fatal "${FUNCNAME} usage error"
for trap_add_name in "$#"; do
trap -- "$(
# helper fn to get existing trap command from output
# of trap -p
extract_trap_cmd() { printf '%s\n' "$3"; }
# print existing trap command with newline
eval "extract_trap_cmd $(trap -p "${trap_add_name}")"
# print the new trap command
printf '%s\n' "${trap_add_cmd}"
)" "${trap_add_name}" \
|| fatal "unable to add to trap ${trap_add_name}"
done
}
# set the trace attribute for the above function. this is
# required to modify DEBUG or RETURN traps because functions don't
# inherit them unless the trace attribute is set
declare -f -t trap_add
Example usage:
trap_add 'echo "in trap DEBUG"' DEBUG
Edit:
It appears that I misread the question. The answer is simple:
handler1 () { do_something; }
handler2 () { do_something_else; }
handler3 () { handler1; handler2; }
trap handler3 SIGNAL1 SIGNAL2 ...
Original:
Just list multiple signals at the end of the command:
trap function-name SIGNAL1 SIGNAL2 SIGNAL3 ...
You can find the function associated with a particular signal using trap -p:
trap -p SIGINT
Note that it lists each signal separately even if they're handled by the same function.
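For illustration, the output looks roughly like this (handler name is made up; the exact signal-name spelling, INT versus SIGINT, may differ between shells and versions):
$ trap handler SIGINT SIGTERM
$ trap -p SIGINT SIGTERM
trap -- 'handler' SIGINT
trap -- 'handler' SIGTERM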
You can add an additional signal given a known one by doing this:
eval "$(trap -p SIGUSR1) SIGUSR2"
This works even if there are other additional signals being processed by the same function. In other words, let's say a function was already handling three signals - you could add two more just by referring to one existing one and appending two more (where only one is shown above just inside the closing quotes).
If you're using Bash >= 3.2, you can do something like this to extract the function given a signal. Note that it's not completely robust because other single quotes could appear.
[[ $(trap -p SIGUSR1) =~ trap\ --\ \'([^\047]*)\'.* ]]
function_name=${BASH_REMATCH[1]}
Then you could rebuild your trap command from scratch if you needed to using the function name, etc.
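For example, a rebuilt trap using the extracted name might look like this (hedged sketch, reusing function_name from above):
trap -- "$function_name" SIGUSR1 SIGUSR2   # re-register the same handler for more signals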
No
About the best you could do is run multiple commands from a single trap for a given signal, but you cannot have multiple concurrent traps for a single signal. For example:
$ trap "rm -f /tmp/xyz; exit 1" 2
$ trap
trap -- 'rm -f /tmp/xyz; exit 1' INT
$ trap 2
$ trap
$
The first line sets a trap on signal 2 (SIGINT). The second line prints the current traps — you would have to capture the standard output from this and parse it for the signal you want.
Then, you can add your code to what was already there — noting that the prior code will most probably include an 'exit' operation. The third invocation of trap clears the trap on 2/INT. The last one shows that there are no traps outstanding.
You can also use trap -p INT or trap -p 2 to print the trap for a specific signal.
I didn't like having to play with these string manipulations which are confusing at the best of times, so I came up with something like this:
(obviously you can modify it for other signals)
exit_trap_command=""
function cleanup {
eval "$exit_trap_command"
}
trap cleanup EXIT
function add_exit_trap {
local to_add=$1
if [[ -z "$exit_trap_command" ]]
then
exit_trap_command="$to_add"
else
exit_trap_command="$exit_trap_command; $to_add"
fi
}
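Hypothetical usage of the accumulator above (the commands are made up):
add_exit_trap 'rm -f /tmp/myscript.lock'
add_exit_trap 'echo "cleanup finished"'
# on EXIT, cleanup evals both commands in the order they were added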
Here's another option:
on_exit_acc () {
local next="$1"
eval "on_exit () {
local oldcmd='$(echo "$next" | sed -e s/\'/\'\\\\\'\'/g)'
local newcmd=\"\$oldcmd; \$1\"
trap -- \"\$newcmd\" 0
on_exit_acc \"\$newcmd\"
}"
}
on_exit_acc true
Usage:
$ on_exit date
$ on_exit 'echo "Goodbye from '\''`uname`'\''!"'
$ exit
exit
Sat Jan 18 18:31:49 PST 2014
Goodbye from 'FreeBSD'!
tap#
I liked Richard Hansen's answer, but I don't care for embedded functions, so an alternative is:
#===================================================================
# FUNCTION trap_add ()
#
# Purpose: appends a command to a trap
#
# - 1st arg: code to add
# - remaining args: names of traps to modify
#
# Example: trap_add 'echo "in trap DEBUG"' DEBUG
#
# See: http://stackoverflow.com/questions/3338030/multiple-bash-traps-for-the-same-signal
#===================================================================
trap_add() {
trap_add_cmd=$1; shift || fatal "${FUNCNAME} usage error"
new_cmd=
for trap_add_name in "$#"; do
# Grab the currently defined trap commands for this trap
existing_cmd=`trap -p "${trap_add_name}" | awk -F"'" '{print $2}'`
# Define default command
[ -z "${existing_cmd}" ] && existing_cmd="echo exiting # `date`"
# Generate the new command
new_cmd="${existing_cmd};${trap_add_cmd}"
# Assign the test
trap "${new_cmd}" "${trap_add_name}" || \
fatal "unable to add to trap ${trap_add_name}"
done
}
I wrote a set of functions for myself to solve this task in a convenient way.
Update: The implementation here is obsolete and left as a demonstration. The new implementation is more complex, has dependencies, supports a wider range of cases and is too big to be placed here.
New implementation: https://sf.net/p/tacklelib/tacklelib/HEAD/tree/trunk/bash/tacklelib/traplib.sh
Here is the list of features of the new implementation:
Pros:
Automatically restores the previous trap handler in nested functions. Originally, the RETURN trap is restored ONLY if ALL functions in the stack set it.
The RETURN signal trap can support other signal traps to achieve the RAII pattern as in other languages, for example, to temporarily disable interrupt handling and automatically restore it at the end of a function while initialization code is executing.
Protection against being called outside a function context in the case of the RETURN signal trap.
The non-RETURN signal trap handlers in the whole stack are invoked together in a bash process, from the bottom to the top, in the reverse order of the tkl_push_trap calls.
The RETURN signal trap handlers are invoked only for a single function, from the bottom to the top, in the reverse order of the tkl_push_trap calls.
Because the EXIT signal does not trigger the RETURN signal trap handler, the EXIT signal trap handler is set up automatically at least once per bash process, the first time a RETURN signal trap handler is set up in that process. That includes all bash processes, for example those created by the (...) or $(...) operators. So the EXIT signal trap handler automatically runs all the RETURN trap handlers before running itself.
A RETURN signal trap handler can still call the tkl_push_trap and tkl_pop_trap functions to manage non-RETURN signal traps.
The tkl_set_trap_postponed_exit function can be called from both the EXIT and RETURN signal trap handlers. If called from a RETURN signal trap handler, the EXIT trap handler runs after all the RETURN signal trap handlers in the bash process. If called from the EXIT signal trap handler, it changes the exit code after the last EXIT signal trap handler has run.
Faster access to the trap stack through a global variable instead of the (...) or $(...) operators, which fork another bash process.
The source command is ignored by the RETURN signal trap handler, so calls to source will not invoke the RETURN signal trap user code (listed in the Pros because the RETURN signal trap handler should only run after returning from a function, not after a script inclusion).
Cons:
You must not use the builtin trap command in a handler passed to the tkl_push_trap function, as the tkl_*_trap functions use it internally.
You must not use the builtin exit command in EXIT signal handlers while the EXIT signal trap handler is running. Otherwise the rest of the RETURN and EXIT signal trap handlers will not be executed. To change the exit code from the EXIT handler, use the tkl_set_trap_postponed_exit function instead.
You must not use the builtin return command in a RETURN signal trap handler while the RETURN signal trap handler is running. Otherwise the rest of the RETURN and EXIT signal trap handlers will not be executed.
Calls to the tkl_push_trap and tkl_pop_trap functions have no effect if made from a trap handler for the signal that handler is handling (a recursive call through the signal).
You have to replace all builtin trap commands in nested or third-party scripts with the tkl_*_trap functions if you are already using the library.
The source command is ignored by the RETURN signal trap handler, so calls to source will not invoke the RETURN signal trap user code (listed in the Cons because backward compatibility is lost here).
Old implementation:
traplib.sh
#!/bin/bash
# Script can be ONLY included by "source" command.
if [[ -n "$BASH" && (-z "$BASH_LINENO" || ${BASH_LINENO[0]} -gt 0) ]] && (( ! ${#SOURCE_TRAPLIB_SH} )); then
SOURCE_TRAPLIB_SH=1 # including guard
function GetTrapCmdLine()
{
local IFS=$' \t\r\n'
GetTrapCmdLineImpl RETURN_VALUES "$@"
}
function GetTrapCmdLineImpl()
{
local out_var="$1"
shift
# drop return values
eval "$out_var=()"
local IFS
local trap_sig
local stack_var
local stack_arr
local trap_cmdline
local trap_prev_cmdline
local i
i=0
IFS=$' \t\r\n'; for trap_sig in "$@"; do
stack_var="_traplib_stack_${trap_sig}_cmdline"
declare -a "stack_arr=(\"\${$stack_var[#]}\")"
if (( ${#stack_arr[#]} )); then
for trap_cmdline in "${stack_arr[#]}"; do
declare -a "trap_prev_cmdline=(\"\${$out_var[i]}\")"
if [[ -n "$trap_prev_cmdline" ]]; then
eval "$out_var[i]=\"\$trap_cmdline; \$trap_prev_cmdline\"" # the last srored is the first executed
else
eval "$out_var[i]=\"\$trap_cmdline\""
fi
done
else
# use the signal current trap command line
declare -a "trap_cmdline=(`trap -p "$trap_sig"`)"
eval "$out_var[i]=\"\${trap_cmdline[2]}\""
fi
(( i++ ))
done
}
function PushTrap()
{
# drop return values
EXIT_CODES=()
RETURN_VALUES=()
local cmdline="$1"
[[ -z "$cmdline" ]] && return 0 # nothing to push
shift
local IFS
local trap_sig
local stack_var
local stack_arr
local trap_cmdline_size
local prev_cmdline
IFS=$' \t\r\n'; for trap_sig in "$@"; do
stack_var="_traplib_stack_${trap_sig}_cmdline"
declare -a "stack_arr=(\"\${$stack_var[@]}\")"
trap_cmdline_size=${#stack_arr[@]}
if (( trap_cmdline_size )); then
# append to the end is equal to push trap onto stack
eval "$stack_var[trap_cmdline_size]=\"\$cmdline\""
else
# first stack element is always the trap current command line if not empty
declare -a "prev_cmdline=(`trap -p $trap_sig`)"
if (( ${#prev_cmdline[2]} )); then
eval "$stack_var=(\"\${prev_cmdline[2]}\" \"\$cmdline\")"
else
eval "$stack_var=(\"\$cmdline\")"
fi
fi
# update the signal trap command line
GetTrapCmdLine "$trap_sig"
trap "${RETURN_VALUES[0]}" "$trap_sig"
EXIT_CODES[i++]=$?
done
}
function PopTrap()
{
# drop return values
EXIT_CODES=()
RETURN_VALUES=()
local IFS
local trap_sig
local stack_var
local stack_arr
local trap_cmdline_size
local trap_cmd_line
local i
i=0
IFS=$' \t\r\n'; for trap_sig in "$@"; do
stack_var="_traplib_stack_${trap_sig}_cmdline"
declare -a "stack_arr=(\"\${$stack_var[@]}\")"
trap_cmdline_size=${#stack_arr[@]}
if (( trap_cmdline_size )); then
(( trap_cmdline_size-- ))
RETURN_VALUES[i]="${stack_arr[trap_cmdline_size]}"
# unset the end
unset $stack_var[trap_cmdline_size]
(( !trap_cmdline_size )) && unset $stack_var
# update the signal trap command line
if (( trap_cmdline_size )); then
GetTrapCmdLineImpl trap_cmd_line "$trap_sig"
trap "${trap_cmd_line[0]}" "$trap_sig"
else
trap "" "$trap_sig" # just clear the trap
fi
EXIT_CODES[i]=$?
else
# nothing to pop
RETURN_VALUES[i]=""
fi
(( i++ ))
done
}
function PopExecTrap()
{
# drop exit codes
EXIT_CODES=()
local IFS=$' \t\r\n'
PopTrap "$#"
local cmdline
local i
i=0
IFS=$' \t\r\n'; for cmdline in "${RETURN_VALUES[@]}"; do
# execute as function and store exit code
eval "function _traplib_immediate_handler() { $cmdline; }"
_traplib_immediate_handler
EXIT_CODES[i++]=$?
unset _traplib_immediate_handler
done
}
fi
test.sh
#!/bin/bash
source ./traplib.sh
function Exit()
{
echo exitting...
exit $@
}
pushd ".." && {
PushTrap "echo popd; popd" EXIT
echo 111 || Exit
PopExecTrap EXIT
}
GetTrapCmdLine EXIT
echo -${RETURN_VALUES[@]}-
pushd ".." && {
PushTrap "echo popd; popd" EXIT
echo 222 && Exit
PopExecTrap EXIT
}
Usage
cd ~/test
./test.sh
Output
~ ~/test
111
popd
~/test
--
~ ~/test
222
exitting...
popd
~/test
In this answer I implemented a simple solution. Here I implement another solution that is based on extracting the previous trap commands from trap -p output. But I don't know how portable it is, because I'm not sure the trap -p output format is standardized. Maybe its format will change in the future (but I doubt that).
trap_add()
{
local new="$1"
local sig="$2"
# Get raw trap output.
local old="$(trap -p "$sig")"
# Extract previous commands from raw trap output.
old="${old#*\'}" # Remove first ' and everything before it.
old="${old%\'*}" # Remove last ' and everything after it.
old="${old//\'\\\'\'/\'}" # Replace every '\'' (escaped ') to just '.
# Combine new and old commands. Separate them by newline.
trap -- "$new
$old" "$sig"
}
trap_add 'echo AAA' EXIT
trap_add '{ echo BBB; }' EXIT
But this solution doesn't work well with subshells. Unfortunately, trap -p prints the commands of the outer shell, and we would execute them in the subshell after extracting them.
trap_add 'echo AAA' EXIT
( trap_add 'echo BBB' EXIT; )
In the above example echo AAA is executed twice: the first time in the subshell and the second time in the outer shell.
We have to check whether we are in a new subshell, and if we are, we must not take the commands from trap -p.
trap_add()
{
local new="$1"
# Avoid inheriting trap commands from outer shell.
if [[ "${trap_subshell:-}" != "$BASH_SUBSHELL" ]]; then
# We are in new subshell, don't take commands from outer shell.
trap_subshell="$BASH_SUBSHELL"
local old=
else
# Get raw trap output.
local old="$(trap -p EXIT)"
# Extract previous commands from trap output.
old="${old#*\'}" # Remove first ' and everything before it.
old="${old%\'*}" # Remove last ' and everything after it.
old="${old//\'\\\'\'/\'}" # Replace every '\'' (escaped ') to just '.
fi
# Combine new and old commands. Separate them by newline.
trap -- "$new
$old" EXIT
}
Note that to avoid a security issue you have to reset the trap_subshell variable at script startup.
trap_subshell=
Unfortunately, the solution above only works with the EXIT signal for now. A generic solution that works with any signal is below.
# Check if argument is number.
is_num()
{
[ -n "$1" ] && [ "$1" -eq "$1" ] 2>/dev/null
}
# Convert signal name to signal number.
to_sig_num()
{
if is_num "$1"; then
# Signal is already number.
kill -l "$1" >/dev/null # Check that signal number is valid.
echo "$1" # Return result.
else
# Convert to signal number.
kill -l "$1"
fi
}
trap_add()
{
local new="$1"
local sig="$2"
local sig_num
sig_num=$(to_sig_num "$sig")
# Avoid inheriting trap commands from outer shell.
if [[ "${trap_subshell[$sig_num]:-}" != "$BASH_SUBSHELL" ]]; then
# We are in new subshell, don't take commands from outer shell.
trap_subshell[$sig_num]="$BASH_SUBSHELL"
local old=
else
# Get raw trap output.
local old="$(trap -p "$sig")"
# Extract previous commands from trap output.
old="${old#*\'}" # Remove first ' and everything before it.
old="${old%\'*}" # Remove last ' and everything after it.
old="${old//\'\\\'\'/\'}" # Replace every '\'' (escaped ') to just '.
fi
# Combine new and old commands. Separate them by newline.
trap -- "$new
$old" "$sig"
}
trap_subshell=
trap_add 'echo AAA' EXIT
trap_add '{ echo BBB; }' 0 # The same as EXIT.
There's no way to have multiple handlers for the same trap, but the same handler can do multiple things.
The one thing I don't like in the various other answers doing the same thing is the use of string manipulation to get at the current trap function. There are two easy ways of doing this: arrays and arguments. Arguments is the most reliable one, but I'll show arrays first.
Arrays
When using arrays, you rely on the fact that trap -p SIGNAL returns trap -- ??? SIGNAL, so whatever is the value of ???, there are three more words in the array.
Therefore you can do this:
declare -a trapDecl
trapDecl=($(trap -p SIGNAL))
currentHandler="${trapDecl[#]:2:${#trapDecl[#]} - 3}"
eval "trap -- 'your handler;'${currentHandler} SIGNAL"
So let's explain this. First, variable trapDecl is declared as an array. If you do this inside a function, it will also be local, which is convenient.
Next we assign the output of trap -p SIGNAL to the array. To give an example, let's say you are running this after having sourced osht (unit testing for shell), and that the signal is EXIT. The output of trap -p EXIT will be trap -- '_osht_cleanup' EXIT, so the trapDecl assignment will be substituted like this:
trapDecl=(trap -- '_osht_cleanup' EXIT)
The parentheses there are a normal array assignment, so trapDecl becomes an array with four elements: trap, --, '_osht_cleanup' and EXIT.
Next we extract the current handler -- that could be inlined in the next line, but for explanation's sake I assigned it to a variable first. Simplifying that line, I'm doing this: currentHandler="${array[@]:offset:length}", which is the syntax used by Bash to say pick length elements starting at element offset. Since it starts counting from 0, element number 2 will be '_osht_cleanup'. Next, ${#trapDecl[@]} is the number of elements inside trapDecl, which will be 4 in the example. You subtract 3 because there are three elements you don't want: trap, -- and EXIT. I don't need to use $(...) around that expression because arithmetic expansion is already performed on the offset and length arguments.
The final line performs an eval, which is used so that the shell will interpret the quoting from the output of trap. If we do parameter substitution on that line, it expands to the following in the example:
eval "trap -- 'your handler;''_osht_cleanup' EXIT"
Do not be confused by the doubled quote in the middle (''). Bash simply concatenates two quoted strings if they are next to each other. For example, '1'"2"'3''4' is expanded to 1234 by Bash. Or, to give a more interesting example, 1" "2 is the same thing as "1 2". So eval takes that string and evaluates it, which is equivalent to executing this:
trap -- 'your handler;''_osht_cleanup' EXIT
And that will handle the quoting correctly, turning everything between -- and EXIT into a single parameter.
To give a more complex example, I'm prepending a directory clean up to the osht handler, so my EXIT signal now has this:
trap -- 'rm -fr '\''/var/folders/7d/qthcbjz950775d6vn927lxwh0000gn/T/tmp.CmOubiwq'\'';_osht_cleanup' EXIT
If you assign that to trapDecl, it will have size 6 because of the spaces on the handler. That is, 'rm is one element, and so is -fr, instead of 'rm -fr ...' being a single element.
But currentHandler will get all three elements (6 - 3 = 3), and the quoting will work out when eval is run.
Arguments
Arguments just skips all the array handling part and uses eval up front to get the quoting right. The downside is that you replace the positional arguments on bash, so this is best done from a function. This is the code, though:
eval "set -- $(trap -p SIGNAL)"
trap -- "your handler${3:+;}${3}" SIGNAL
The first line will set the positional arguments to the output of trap -p SIGNAL. Using the example from the Arrays section, $1 will be trap, $2 will be --, $3 will be _osht_cleanup (no quotes!), and $4 will be EXIT.
The next line is pretty straightforward, except for ${3:+;}. The ${X:+Y} syntax means "expand to Y if the variable X is set and not null". So it expands to ; if $3 is set, or nothing otherwise (if there was no previous handler for SIGNAL).
Simple ways to do it
If all the signal handling functions are known at the same time, then the following is sufficient (as said by Jonathan):
trap 'handler1;handler2;handler3' EXIT
Else, if there is an existing handler(s) that should stay, then new handlers can easily be added like this:
trap "$( trap -p EXIT | cut -f2 -d \' );newHandler" EXIT
If you don't know if there are existing handlers but want to keep them in this case, do the following:
handlers="$( trap -p EXIT | cut -f2 -d \' )"
trap "${handlers}${handlers:+;}newHandler" EXIT
It can be factorized in a function like that:
trap-add() {
local sig="${2:?Signal required}"
hdls="$( trap -p ${sig} | cut -f2 -d \' )";
trap "${hdls}${hdls:+;}${1:?Handler required}" "${sig}"
}
export -f trap-add
Usage:
trap-add 'echo "Bye bye"' EXIT
trap-add 'echo "See you next time"' EXIT
Remark: This works only as long as the handlers are function names or simple instructions that do not contain any single quote (single quotes conflict with cut -f2 -d \').
I add a slightly more robust version of Laurent Simon's trap-add script:
Allows using arbitrary commands as traps, including those with ' characters.
Works only in bash; it could be rewritten with sed instead of bash pattern substitution, but that would make it significantly slower.
Still suffers from causing unwanted inheritance of the traps in subshells.
trap-add () {
local handler=$(trap -p "$2")
handler=${handler/trap -- \'/} # /- Strip `trap '...' SIGNAL` -> ...
handler=${handler%\'*} # \-
handler=${handler//\'\\\'\'/\'} # <- Unquote quoted quotes ('\'')
trap "${handler} $1;" "$2"
}
I would like to propose my solution of multiple trap functions for simple scripts
# Executes cleanup functions on exit
function on_exit {
for FUNCTION in $(declare -F); do
if [[ ${FUNCTION} == *"_on_exit" ]]; then
>&2 echo ${FUNCTION}
eval ${FUNCTION}
fi
done
}
trap on_exit EXIT
function remove_fifo_on_exit {
>&2 echo Removing FIFO...
}
function stop_daemon_on_exit {
>&2 echo Stopping daemon...
}
Just like to add my simple version as an example.
trap -- 'echo "Version 1";' EXIT;
function add_to_trap {
local -a TRAP;
# this will put parts of trap command into TRAP array
eval "TRAP=($(trap -p EXIT))";
# 3rd field is trap command. Concat strings.
trap -- 'echo "Version 2"; '"${TRAP[2]}" EXIT;
}
add_to_trap;
If this code is run, will print:
Version 2
Version 1
Here's how I usually do it. It's not much different from what other people have suggested here but my version seems dramatically simpler and so far it always worked as desired for me.
Somewhere in the code, set a trap:
trap "echo Hello" EXIT
and later on update it:
oldTrap=$(trap -p EXIT)
oldTrap=${oldTrap#"trap -- '"}
oldTrap=${oldTrap%"' EXIT"};
trap "$oldTrap; echo World" EXIT
finally, on exit
Hello
World
This is a simple and compact solution to run multiple traps by executing all functions that start with the name trap_:
trap 'eval $(declare -F | grep -oP "trap_[^ ]+" | tr "\n" ";")' EXIT
Now simply add as many trap functions as you like:
# write stdout and stderr to a log file
exec &> >(tee -a "/var/log/scripts/${0//\//_}.log")
trap_shrink_logs() { echo "$(tail -n 1000 "/var/log/scripts/${0//\//_}.log")" > "/var/log/scripts/${0//\//_}.log"; }
# make script race condition safe
[[ -d "/tmp/${0//\//_}" ]] || ! mkdir "/tmp/${0//\//_}" && echo "Already running!" && exit 1
trap_remove_lock() { rmdir "/tmp/${0//\//_}"; }
A special case of Richard Hansen's answer (great idea). I usually need it for EXIT traps. In such case:
extract_trap_cmd() { printf '%s\n' "${3-}"; }
get_exit_trap_cmd() {
eval "extract_trap_cmd $(trap -p EXIT)"
}
...
trap "echo '1 2'; $(get_exit_trap_cmd)" EXIT
...
trap "echo '3 4'; $(get_exit_trap_cmd)" EXIT
Or this way, if you will:
add_exit_trap_handler() {
trap "$1; $(get_exit_trap_cmd)" EXIT
}
...
add_exit_trap_handler "echo '5 6'"
...
add_exit_trap_handler "echo '7 8'"
Always assuming I remember to pass multiple code snippets in semicolon-delimited fashion (as per bash(1)'s requirement for multiple commands on a single line), it's rare that the following (or something similar to it) fails to fulfil my meagre requirements...
extend-trap() {
local sig=${1:?'Der, you forgot the sig!!!!'}
shift
local code s ; while IFS=\' read code s ; do
code="$code ; $*"
done < <(trap -p $sig)
trap "$code" $sig
}
I'd like something simpler... :)
My humble contrib:
#!/bin/bash
# global vars
TRAPS=()
# functions
function add_exit_trap() { TRAPS+=("$@"); }
function execute_exit_traps() {
local I
local POSITIONS=${!TRAPS[@]} # returns the valid indices of the array
for I in $POSITIONS
do
echo "executing TRAPS[$I]=${TRAPS[I]}"
eval ${TRAPS[I]}
done
}
# M A I N
LOCKFILE="/tmp/lock.me.1234567890"
touch $LOCKFILE
trap execute_exit_traps EXIT
add_exit_trap "rm -f $LOCKFILE && echo lock removed."
add_exit_trap "ls -l $LOCKFILE"
add_exit_trap "echo END"
echo "showing lockfile..."
ls -l $LOCKFILE
add_exit_trap() keeps adding strings (commands) to a bash global array, while
execute_exit_traps() just loops through that array and evals the commands.
Executed script...
showing lockfile...
-rw-r--r--. 1 root root 0 Mar 24 10:08 /tmp/lock.me.1234567890
executing TRAPS[0]=rm -f /tmp/lock.me.1234567890 && echo lock removed.
lock removed.
executing TRAPS[1]=ls -l /tmp/lock.me.1234567890
ls: cannot access /tmp/lock.me.1234567890: No such file or directory
executing TRAPS[2]=echo END
END
A simple solution is to save the trap commands in a variable and, when adding a new trap, restore them from that variable.
trap_add()
{
# Combine new and old commands. Separate them by newline.
trap_cmds="$1
$trap_cmds"
trap -- "$trap_cmds" EXIT
}
trap_add 'echo AAA'
trap_add '{ echo BBB; }'
Unfortunately this solution does not work well with subshells, because a subshell inherits the outer shell's variables, and thus the outer shell's trap commands are executed in the subshell.
trap_add 'echo AAA'
( trap_add 'echo BBB'; )
In the above example echo AAA is executed twice: the first time in the subshell and the second time in the outer shell.
We have to check whether we are in a new subshell, and if we are, we must not take the commands from the trap_cmds variable.
trap_add()
{
# Avoid inheriting trap commands from outer shell.
if [[ "${trap_subshell:-}" != "$BASH_SUBSHELL" ]]; then
# We are in new subshell, don't take commands from outer shell.
trap_subshell="$BASH_SUBSHELL"
trap_cmds=
fi
# Combine new and old commands. Separate them by newline.
trap_cmds="$1
$trap_cmds"
trap -- "$trap_cmds" EXIT
}
Note that to avoid a security issue you have to reset the used variables at script startup.
trap_subshell=
trap_cmds=
Otherwise someone who runs your script can inject their malicious commands via environment variables.
export trap_subshell=0
export trap_cmds='echo "I hacked you"'
./your_script
A generic version that works with arbitrary signals is below.
# Check if argument is number.
is_num()
{
[ -n "$1" ] && [ "$1" -eq "$1" ] 2>/dev/null
}
# Convert signal name to signal number.
to_sig_num()
{
if is_num "$1"; then
# Signal is already number.
kill -l "$1" >/dev/null # Check that signal number is valid.
echo "$1" # Return result.
else
# Convert to signal number.
kill -l "$1"
fi
}
trap_add()
{
local cmd sig sig_num
cmd="$1"
sig="$2"
sig_num=$(to_sig_num "$sig")
# Avoid inheriting trap commands from outer shell.
if [[ "${trap_subshell[$sig_num]:-}" != "$BASH_SUBSHELL" ]]; then
# We are in new subshell, don't take commands from outer shell.
trap_subshell[$sig_num]="$BASH_SUBSHELL"
trap_cmds[$sig_num]=
fi
# Combine new and old commands. Separate them by newline.
trap_cmds[$sig_num]="$cmd
${trap_cmds[$sig_num]}"
trap -- "${trap_cmds[$sig_num]}" $sig
}
trap_subshell=
trap_cmds=
trap_add 'echo AAA' EXIT
trap_add '{ echo BBB; }' 0 # The same as EXIT.
PS In this answer I implemented another solution that gets previous commands from trap -p output.
There are actually two correct answers.
Answer #1: Create a subshell.
## some code that may contain other traps
(
trap 'whatever' EXIT HUP INT ABRT TERM
somecommand
) # 'whatever' executed when leaving the subshell.
# Previous trap(s) restored.
Do note the documentation about trap inheritance:
Signals that were ignored on entry to a non-interactive shell cannot be trapped or reset, although no error need be reported when attempting to do so.
Answer #2: Save and restore traps.
When the trap command is invoked without arguments, it will print the entire list of traps, formatted for shell re-evaluation. Thus, if a subshell is not an option, something like this could be done:
save_traps=$(trap)
trap 'something' HUP INT ABRT TERM
somecommand
eval "$save_traps"
But the root cause of the OP's issue is most likely an XY problem. Suppose you have various commands called in different modules of your script, producing leftover temp files across the filesystem. Being a good citizen, you want to clean up after yourself, and you employ trap to call a cleanup task. Instead of trying to stack traps, stack your cleanup list:
_cleanup_list=
_cleanup() {
case "$1" in
add)
shift
for file; do _cleanup_list="${_cleanup_list:+$_cleanup_list }'$file'"; done
;;
clear)
for file in $_cleanup_list; do eval "[ -f $file ] && rm $file"; done
;;
esac
}
trap '_cleanup clear' EXIT
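Hypothetical usage of the cleanup list (file names are made up):
tmp1=$(mktemp) && _cleanup add "$tmp1"
tmp2=$(mktemp) && _cleanup add "$tmp2"
# ... work with the temp files ...
# on EXIT the single trap runs '_cleanup clear', which removes both files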
Richard Hansen's answer is definitely the best, but in case you don't want to use an embedded function, here's an alternative:
function foo
{
trap "echo outer" EXIT
echo "in foo: $(trap -p EXIT)"
bar
echo "post bar: $(trap -p EXIT)"
}
function bar
{
# this line is the money shot:
trap "echo inner ; $(eval "( set -- $(trap -p EXIT) ; echo \$3 )")" EXIT
echo "in bar: $(trap -p EXIT)"
}
( foo )
which produces this output:
in foo: trap -- 'echo outer' EXIT
in bar: trap -- 'echo inner ; echo outer' EXIT
post bar: trap -- 'echo inner ; echo outer' EXIT
inner
outer
Generalization to a reusable function is left as an exercise for the reader.
Reasons you might not want to use an embedded function:
I suppose it's slightly inefficient, as the function is defined every time its containing function executes. I suspect, however, that this inefficiency is so negligible as to be irrelevant.
More importantly, embedded functions are not localized to the containing function. That means that you're polluting your caller's namespace, and maybe even overwriting one of their functions. That seems like a Bad Idea™.
