Exit code of traps in Bash

This is myscript.sh:
#!/bin/bash
function mytrap {
    echo "Trapped!"
}
trap mytrap EXIT
exit 3
And when I run it:
> ./myscript.sh
> echo $?
3
Why is the exit code of the script the same with the trap as without it? Usually, a function implicitly returns the exit code of the last command it executed. In this case:
echo returns 0
I would expect mytrap to return 0
Since mytrap is the last function executed, the script should return 0
Why is this not the case? Where is my thinking wrong?

Look at this reference from the bash man page:
exit [n]
Cause the shell to exit with a status of n. If n is omitted, the exit status is that of the last command executed. A trap on EXIT is executed before the shell terminates.
Here is the debug (bash -x) trace of the script to prove that:
+ trap mytrap EXIT
+ exit 3
+ mytrap
+ echo 'Trapped!'
Trapped!
Now consider the case you mentioned in the comments, where the trap function itself exits with an error code:
function mytrap {
    echo "Trapped!"
    exit 1
}
Here is the expanded (traced) version of that script:
+ trap mytrap EXIT
+ exit 3
+ mytrap
+ echo 'Trapped!'
Trapped!
+ exit 1
and
echo $?
1
To capture the exit code inside the trap function:
function mytrap {
    echo "$?"
    echo "Trapped!"
}
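As a hedged sketch (my addition, not part of the answer above): if you want the trap to log the status but still let the script exit with its original code, capture $? and re-raise it. As the traces above show, an exit inside the trap overrides the script's exit status, so re-raising the captured value preserves it.
function mytrap {
    rc=$?                       # status the script was exiting with (3 here)
    echo "Trapped with status $rc!"
    exit "$rc"                  # re-raise so the script still exits with 3
}
trap mytrap EXIT
exit 3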

Related

How to perform action on non-zero return code of any command in bash [duplicate]

I want to know whether any commands in a bash script exited with a non-zero status.
I want something similar to set -e functionality, except that I don't want it to exit when a command exits with a non-zero status. I want it to run the whole script, and then I want to know that either:
a) all commands exited with exit status 0
-or-
b) one or more commands exited with a non-zero status
e.g., given the following:
#!/bin/bash
command1 # exits with status 1
command2 # exits with status 0
command3 # exits with status 0
I want all three commands to run. After running the script, I want an indication that at least one of the commands exited with a non-zero status.
Set a trap on ERR:
#!/bin/bash
err=0
trap 'err=1' ERR
command1
command2
command3
test $err = 0 # Return non-zero if any command failed
You might even throw in a little introspection to get data about where the error occurred:
#!/bin/bash
for i in 1 2 3; do
eval "command$i() { echo command$i; test $i != 2; }"
done
err=0
report() {
err=1
printf '%s' "error at line ${BASH_LINENO[0]}, in call to "
sed -n ${BASH_LINENO[0]}p $0
} >&2
trap report ERR
command1
command2
command3
exit $err
You could try to do something with a trap for the DEBUG pseudosignal, such as
trap '(( $? && ++errcount ))' DEBUG
The DEBUG trap is executed
before every simple command, for command, case command, select command, every arithmetic for command, and before the first command executes in a shell function
(quote from manual).
So if you add this trap and as the last command something to print the error count, you get the proper value:
#!/usr/bin/env bash
trap '(( $? && ++errcount ))' DEBUG
true
false
true
echo "Errors: $errcount"
returns Errors: 1 and
#!/usr/bin/env bash
trap '(( $? && ++errcount ))' DEBUG
true
false
true
false
echo "Errors: $errcount"
prints Errors: 2. Beware that that last statement is actually required to account for the second false because the trap is executed before the commands, so the exit status for the second false is only checked when the trap for the echo line is executed.
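One small hardening, not part of the answer above: initialize errcount before setting the trap, so the report prints 0 rather than an empty string when nothing fails.
#!/usr/bin/env bash
errcount=0                            # start at zero so a clean run prints "Errors: 0"
trap '(( $? && ++errcount ))' DEBUG
true
true
echo "Errors: $errcount"              # prints Errors: 0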
I am not sure if there is a ready-made solution for your requirement. I would write a function like this:
function run_cmd_with_check() {
    "$@"
    [[ $? -ne 0 ]] && ((non_zero++))
}
Then, use the function to run all the commands that need tracking:
run_cmd_with_check command1
run_cmd_with_check command2
run_cmd_with_check command3
printf '%s commands exited with a non-zero exit code\n' "${non_zero:-0}"
If required, the function can be enhanced to store all failed commands in an array which can be printed out at the end.
You may want to take a look at this post for more info: Error handling in Bash
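As a sketch of the enhancement mentioned above (my own, with illustrative command names): collect each failing command line into an array and report them all at the end.
#!/usr/bin/env bash
failed_cmds=()

run_cmd_with_check() {
    "$@" || failed_cmds+=("$*")       # remember the command line if it failed
}

run_cmd_with_check true
run_cmd_with_check false
run_cmd_with_check ls /nonexistent

echo "${#failed_cmds[@]} command(s) failed:"
printf '  %s\n' "${failed_cmds[@]}"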
You have the magic variable $? available in bash, which holds the exit code of the last command:
#!/bin/bash
command1 # exits with status 1
C1_output=$? # will be 1
command2 # exits with status 0
C2_output=$? # will be 0
command3 # exits with status 0
C3_output=$? # will be 0
For each command you could do this:
if ! command1; then an_error=1; fi
and repeat this for each command.
At the end, an_error will be 1 if any of them failed.
If you want a count of failures instead, set an_error to 0 at the beginning and use ((an_error++)) instead of an_error=1, as in the sketch below.
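A minimal sketch of that counting variant (command1..command3 stand in for the commands from the question):
#!/bin/bash
an_error=0
if ! command1; then ((an_error++)); fi    # exits with status 1
if ! command2; then ((an_error++)); fi    # exits with status 0
if ! command3; then ((an_error++)); fi    # exits with status 0
echo "$an_error command(s) failed"        # prints: 1 command(s) failed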
You could place your list of commands into an array and then loop over them. For any command that returns an error code, you keep its output for later viewing.
declare -A results
commands=("your" "commands")
for cmd in "${commands[@]}"; do
out=$($cmd 2>&1)
[[ $? -eq 0 ]] || results[$cmd]="$out"
done
Then to see any non zero exit codes:
for cmd in "${!results[@]}"; do echo "$cmd = ${results[$cmd]}"; done
If the length of results is 0, there were no errors on your list of commands.
This requires Bash 4+ (for the associative array)
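For completeness, a one-line check of that condition might look like this (my addition):
(( ${#results[@]} == 0 )) && echo "all commands succeeded" || echo "${#results[@]} command(s) failed"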
You can use the DEBUG trap to accumulate the statuses:
declare -i code=0          # integer attribute, so += sums instead of concatenating
trap 'code+=$?' DEBUG
# run commands here normally
exit $code                 # non-zero if anything failed (a sum that is a multiple of 256 wraps to 0)

BASH Multiple on-exit functions for different exit levels

How can I use multiple on-exit traps in bash?
Say I want to run on-exit1 on exit code 1
and on-exit2 on exit code 2:
function on-exit1 {
echo "do stuff here if code had exit status 1"
}
function on-exit2 {
echo "do stuff here if code had exit status 2"
}
.....
trap on-exit1 EXIT # <--- what do i do here to specify the exit code to trap
trap on-exit2 EXIT # <--- what do i do here to specify the exit code to trap
.....
some bashing up in here
blah...blah
exit 1 # do on-exit1
else blah blah
exit 2 # do on-exit2
else blah blah
exit N # do on-exitNth
Something like the following code sample should work:
exit_check () {
    # $? holds the exit status the script is terminating with
    # this runs on_exit1 if the status is 1, on_exit2 if it is 2, ...
    on_exit$?
}
trap exit_check EXIT
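A slightly fuller sketch of that idea (my own; the handler names use underscores, and a guard avoids a "command not found" error inside the trap when no handler exists for the status):
#!/usr/bin/env bash
on_exit1() { echo "do stuff here if code had exit status 1"; }
on_exit2() { echo "do stuff here if code had exit status 2"; }

exit_check() {
    local rc=$?                               # status the script is exiting with
    if declare -F "on_exit$rc" > /dev/null; then
        "on_exit$rc"                          # dispatch to the matching handler
    fi
}
trap exit_check EXIT

exit 2    # runs on_exit2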
If you really want to use Traps, try this:
#!/usr/bin/env bash
function finish {
echo "exitcode: $?"
}
trap finish EXIT
read -n 1 -s exitcode   # press a single digit; it is used as the exit status
exit $exitcode
But as @123 suggested, you could just call your exit functions directly; there is no need to 'abuse' traps here.
Try to provide a working example next time ;).

KSH - Capture Script's Return Code Before Exit

In KSH how could I trap the EXIT signal and also get the exit code for the script?
The below test outputs "About to exit script with return code 0." I'd like to get it to output the 4 from the return code of the exit command instead.
#!/usr/bin/ksh
trapped_exit() {
typeset rc=$1
echo "(LOG SCRIPT EXECUTION & RETURN CODE)"
echo "About to exit script with return code $rc."
}
trap 'APP_RC=$?; trapped_exit $APP_RC' EXIT
exit 4
I reckon I can alias the exit command to my own function. In that function I'll verify the exit command was called from my process ID and not from a child process by comparing against a previously defined global variable. If it is from my PID, I'll run my cleanup code, and finally call the real exit command with the same args.
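A rough sketch of that idea (my own; it relies on ksh expanding aliases in scripts, which bash does not do by default, and it skips the child-process check for brevity):
#!/usr/bin/ksh
trapped_exit() {
    echo "(LOG SCRIPT EXECUTION & RETURN CODE)"
    echo "About to exit script with return code ${APP_RC:-0}."
}
trap trapped_exit EXIT

my_exit() {
    APP_RC=${1:-0}     # remember the requested status for the trap
    exit "$APP_RC"     # parsed before the alias existed, so this is the real exit
}
alias exit='my_exit'

exit 4                 # expands to: my_exit 4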

trapping shell exit code

I am working on a shell script, and want to handle various exit codes that I might come across. To try things out, I am using this script:
#!/bin/sh
echo "Starting"
trap "echo \"first one\"; echo \"second one\"; " 1
exit 1;
I suppose I am missing something, but it seems I can't trap my own "exit 1". If I try to trap 0 everything works out:
#!/bin/sh
echo "Starting"
trap "echo \"first one\"; echo \"second one\"; " 0
exit
Is there anything I should know about trapping HUP (1) exit code?
trap dispatches on signals the process receives (e.g., from a kill), not on exit codes; trap ... 0 is reserved for the shell exiting. A trap ... 0 will fire whether the script ends with exit 0 or exit 1.
That 1 is just an exit code; it doesn't mean HUP. So your trap ... 1 is waiting for a HUP signal, but the exit is just an exit.
In addition to the system signals which you can list by doing trap -l, you can use some special Bash sigspecs: ERR, EXIT, RETURN and DEBUG. In all cases, you should use the name of the signal rather than the number for readability.
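Applied to the original example, that means trapping the EXIT sigspec by name rather than trying to trap the numeric exit code:
#!/bin/bash
trap 'echo "first one"; echo "second one"' EXIT
exit 1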
You can also use the || operator: with a || b, b gets executed when a fails:
#!/bin/sh
failed() {
    echo "Failed $*"
    exit 1
}
dosomething arg1 || failed "some comments"
