I am working on a shell script, and want to handle various exit codes that I might come across. To try things out, I am using this script:
#!/bin/sh
echo "Starting"
trap "echo \"first one\"; echo \"second one\"; " 1
exit 1;
I suppose I am missing something, but it seems I can't trap my own "exit 1". If I try to trap 0 everything works out:
#!/bin/sh
echo "Starting"
trap "echo \"first one\"; echo \"second one\"; " 0
exit
Is there anything I should know about trapping HUP (1) exit code?
trap dispatches on signals the process receives (e.g., from a kill), not on exit codes; trap ... 0 is the special case reserved for process exit. A trap ... 0 therefore fires on either exit 0 or exit 1.
That's just an exit code; it doesn't mean HUP. So your trap ... 1 is waiting for a HUP signal, while your exit 1 is simply an exit.
In addition to the system signals which you can list by doing trap -l, you can use some special Bash sigspecs: ERR, EXIT, RETURN and DEBUG. In all cases, you should use the name of the signal rather than the number for readability.
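For instance, a minimal sketch (my own, based on the asker's first script) using the EXIT sigspec, which fires on any process exit regardless of status:
#!/bin/sh
trap 'echo "first one"; echo "second one"' EXIT
echo "Starting"
exit 1   # the EXIT trap fires here; the script still exits with status 1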
You can also use the || operator: in a || b, b is executed when a fails.
#!/bin/sh
failed() {
    echo "Failed $*"
    exit 1
}

dosomething arg1 || failed "some comments"
I have the example script below to handle traps on the EXIT and INT signals during some job, and to trigger a clean-up function that cannot be interrupted:
#!/bin/bash

# Our main function to handle some job:
some_job() {
    echo "Working hard on some stuff..."
    for i in $(seq 1 5); do
        #printf "."
        printf '%s' "$i."
        sleep 1
    done
    echo ""
    echo "Job done, but we found some errors!"
    return 2 # to simulate script exit code 2
}

# Our clean temp files function
# - should not be interrupted
# - should not be called twice if interrupted
clean_tempfiles() {
    echo ""
    echo "Cleaning temp files, do not interrupt..."
    for i in $(seq 1 5); do
        printf "> "
        sleep 1
    done
    echo ""
}

# Called on signal EXIT, or indirectly on INT QUIT TERM
clean_exit() {
    # save the return code of the script
    err=$?
    # reset the traps for all signals, so no later signal can interrupt clean_tempfiles()
    trap '' EXIT INT QUIT TERM
    clean_tempfiles
    exit $err # exit the script with the saved $?
}

# Called on signals INT QUIT TERM
sig_cleanup() {
    # save the error code (130 for SIGINT, 143 for SIGTERM, 131 for SIGQUIT)
    err=$?
    # some shells will run the EXIT trap after the INT handler,
    # so we disable the EXIT trap here
    trap '' EXIT
    (exit $err) # run in a subshell just to pass $? to clean_exit()
    clean_exit
}

trap clean_exit EXIT
trap sig_cleanup INT QUIT TERM

some_job
The traps work properly: clean_tempfiles() cannot be interrupted, and the exit code from some_job() is preserved.
Now, if I want to redirect the output from some_job() using process substitution, in the last script line:
# - redirect stdout and stderr to stdout_logfile
# - redirect stderr to stderr_logfile
# - redirect both stdout and stderr to terminal
some_job > >(tee -a stdout_logfile) 2> >(tee -a stderr_logfile | tee -a stdout_logfile >&2)
the normal EXIT trap is properly triggered. However, the SIGINT trap is never triggered: Ctrl+C can no longer be trapped as soon as that logging line is added.
I could work around it using POSIX-sh-compliant redirects with temporary log fifos (a rough sketch follows below); however, I would really like to have it working with the simpler Bash syntax using process substitution.
Is this even possible?
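For reference, a rough sketch of that fifo workaround (the mktemp-based paths are placeholders of mine, and exit-status preservation is omitted):
tmpdir=$(mktemp -d) || exit 1
mkfifo "$tmpdir/out" "$tmpdir/err"
tee -a stdout_logfile < "$tmpdir/out" &
tee -a stderr_logfile < "$tmpdir/err" | tee -a stdout_logfile >&2 &
some_job > "$tmpdir/out" 2> "$tmpdir/err"
wait          # let the background tee readers drain before cleanup
rm -rf "$tmpdir"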
I have a shell script which spawns a bunch of child processes to run in the background. If any of them fails, I want to terminate all child processes and the parent process with kill -- -$$.
As a first step, I wrote a simpler signal handler function, check_exit_code(), in the parent process to see if the approach would work. It is called every time a child process terminates, using trap and the SIGCHLD signal.
#!/bin/sh
set -o monitor

function check_exit_code {
    if [ $1 -eq "0" ]
    then
        echo "Success: $1"
    else
        echo "Fail: $1"
    fi
}

trap "check_exit_code $?" SIGCHLD

mycommand1 &
mycommand2 &
mycommand3 &
...
wait
Unfortunately, this only returns Success: 0 even when mycommand# failed and its exit code was 2, so I changed the function to the following.
#!/bin/sh
set -o monitor

function check_exit_code() {
    local EXIT_STATUS=$?
    if [ "$EXIT_STATUS" -eq "0" ]
    then
        echo "Success: $EXIT_STATUS"
    else
        echo "Fail: $EXIT_STATUS"
    fi
}

trap "check_exit_code" SIGCHLD

mycommand1 &
mycommand2 &
mycommand3 &
...
wait
This only returns Fail: 145, which mycommand# cannot return. I suspect that when I write $? I receive the exit status of another command. What is the problem? How would you fix it?
The problem with your first attempt is the double quotes around check_exit_code $?. The shell expands $? (to 0) at the moment the trap is set, so a SIGCHLD always triggers check_exit_code 0 no matter what, hence the constant Success: 0 output.
Regarding your second attempt: upon entry to the trap action, the special parameter ? holds the exit status of the most recent pipeline as usual, which in this case is wait. Since a trap was set for SIGCHLD, the shell interrupts wait and assigns 128 + 17 (the numeric value of SIGCHLD), i.e. 145, to ? upon receiving that signal.
With bash-5.1.4 or a newer version, you can achieve the desired result like so:
while true; do
    wait -n -p pid
    case $?,$pid in
    ( 0* )  # a job has exited with zero
        continue ;;
    ( *, )  # pid is empty, no jobs left
        break ;;
    ( * )   # a job has exited with non-zero
        trap "exit $?" TERM
        kill 0
    esac
done
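Note the deliberate double quotes in trap "exit $?" TERM: $? is expanded immediately, freezing the failed job's status into the trap action before kill 0 sends SIGTERM to the entire process group, including the script itself, which then exits with that saved status.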
I want to know whether any commands in a bash script exited with a non-zero status.
I want something similar to set -e functionality, except that I don't want it to exit when a command exits with a non-zero status. I want it to run the whole script, and then I want to know that either:
a) all commands exited with exit status 0
-or-
b) one or more commands exited with a non-zero status
e.g., given the following:
#!/bin/bash
command1 # exits with status 1
command2 # exits with status 0
command3 # exits with status 0
I want all three commands to run. After running the script, I want an indication that at least one of the commands exited with a non-zero status.
Set a trap on ERR:
#!/bin/bash
err=0
trap 'err=1' ERR
command1
command2
command3
test $err = 0 # Return non-zero if any command failed
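Keep in mind that, like set -e, the ERR trap is not fired for commands that are part of an if or while test, part of a && or || list (other than the final command), any part of a pipeline except the last, or a command whose status is inverted with !.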
You might even throw in a little introspection to get data about where the error occurred:
#!/bin/bash
for i in 1 2 3; do
    eval "command$i() { echo command$i; test $i != 2; }"
done

err=0
report() {
    err=1
    printf '%s' "error at line ${BASH_LINENO[0]}, in call to "
    sed -n "${BASH_LINENO[0]}p" "$0"
} >&2
trap report ERR

command1
command2
command3
exit $err
You could try to do something with a trap for the DEBUG pseudosignal, such as
trap '(( $? && ++errcount ))' DEBUG
The DEBUG trap is executed
before every simple command, for command, case command, select command, every arithmetic for command, and before the first command executes in a shell function
(quote from manual).
So if you add this trap and as the last command something to print the error count, you get the proper value:
#!/usr/bin/env bash
trap '(( $? && ++errcount ))' DEBUG
true
false
true
echo "Errors: $errcount"
returns Errors: 1 and
#!/usr/bin/env bash
trap '(( $? && ++errcount ))' DEBUG
true
false
true
false
echo "Errors: $errcount"
prints Errors: 2. Beware that the final echo is actually required to account for the second false: the trap executes before each command, so the exit status of the second false is only checked when the trap for the echo line runs.
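One small hardening of my own: initialize errcount up front so the report prints 0 instead of an empty string when nothing fails:
#!/usr/bin/env bash
errcount=0
trap '(( $? && ++errcount ))' DEBUG
true
true
echo "Errors: $errcount"   # prints "Errors: 0" rather than "Errors: "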
I am not sure if there is a ready-made solution for your requirement. I would write a function like this:
function run_cmd_with_check() {
    "$@"
    [[ $? -ne 0 ]] && ((non_zero++))
}
Then, use the function to run all the commands that need tracking:
run_cmd_with_check command1
run_cmd_with_check command2
run_cmd_with_check command3
printf '%s commands exited with a non-zero exit code\n' "$non_zero"
If required, the function can be enhanced to store all failed commands in an array which can be printed out at the end.
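A possible sketch of that enhancement (the failed_cmds array name is my own invention):
failed_cmds=()
function run_cmd_with_check() {
    if ! "$@"; then
        ((non_zero++))
        failed_cmds+=("$*")   # remember the command line that failed
    fi
}
# ... after running everything, report:
for c in "${failed_cmds[@]}"; do
    echo "failed: $c"
done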
You may want to take a look at this post for more info: Error handling in Bash
You have the magic variable $? available in Bash, which holds the exit code of the last command:
#!/bin/bash
command1 # exits with status 1
C1_output=$? # will be 1
command2 # exits with status 0
C2_output=$? # will be 0
command3 # exits with status 0
C3_output=$? # will be 0
For each command you could do this:
if ! Command1 ; then an_error=1; fi
Repeat this for all commands. At the end, an_error will be 1 if any of them failed.
If you want a count of failures instead, set an_error to 0 at the beginning and use ((an_error++)) instead of an_error=1.
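Putting that together, a minimal sketch (command1 through command3 are the question's placeholders):
an_error=0
if ! command1 ; then ((an_error++)); fi
if ! command2 ; then ((an_error++)); fi
if ! command3 ; then ((an_error++)); fi
echo "$an_error command(s) exited with a non-zero status"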
You could place your list of commands into an array and then loop over them. For any that return an error code, you keep the output for later viewing.
declare -A results
commands=("your" "commands")
for cmd in "${commands[@]}"; do
    out=$($cmd 2>&1)
    [[ $? -eq 0 ]] || results[$cmd]="$out"
done
Then, to see any non-zero exit codes:
for cmd in "${!results[@]}"; do echo "$cmd = ${results[$cmd]}"; done
If the length of results is 0, there were no errors on your list of commands.
This requires Bash 4+ (for the associative array)
You can use the DEBUG trap like:
declare -i code=0     # -i makes += do arithmetic addition rather than string appending
trap 'code+=$?' DEBUG
# run commands here normally
exit $code
I have a shell script that executes a number of commands. How do I make the shell script exit if any of the commands exit with a non-zero exit code?
After each command, the exit code can be found in the $? variable so you would have something like:
ls -al file.ext
rc=$?; if [[ $rc != 0 ]]; then exit $rc; fi
You need to be careful with piped commands, since $? only gives you the return code of the last element in the pipe. So the code:
ls -al file.ext | sed 's/^/xx: /'
will not return an error code if the file doesn't exist (since the sed part of the pipeline actually works, returning 0).
The bash shell actually provides an array which can assist in that case: PIPESTATUS. This array has one element for each of the pipeline components, which you can access individually like ${PIPESTATUS[0]}:
pax> false | true ; echo ${PIPESTATUS[0]}
1
Note that this is getting you the result of the false command, not the entire pipeline. You can also get the entire list to process as you see fit:
pax> false | true | false; echo ${PIPESTATUS[*]}
1 0 1
If you wanted to get the largest error code from a pipeline, you could use something like:
true | true | false | true | false
rcs=${PIPESTATUS[*]}; rc=0; for i in ${rcs}; do rc=$(($i > $rc ? $i : $rc)); done
echo $rc
This goes through each of the PIPESTATUS elements in turn, storing it in rc if it was greater than the previous rc value.
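Note that the example copies PIPESTATUS into rcs as the very first thing: the array is overwritten by every subsequent command, even a simple test, so it must be saved immediately if you want to inspect more than one element.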
If you want to work with $?, you'll need to check it after each command, since $? is updated after each command exits. This means that if you execute a pipeline, you'll only get the exit code of the last process in the pipeline.
Another approach is to do this:
set -e
set -o pipefail
If you put this at the top of the shell script, it looks like Bash will take care of this for you. As a previous poster noted, set -e will cause Bash to exit with an error on any failing simple command. set -o pipefail additionally makes a pipeline fail if any command in it fails, not just the last one.
See here or here for a little more discussion on this problem. Here is the Bash manual section on the set builtin.
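A minimal sketch of the combination:
#!/bin/bash
set -e           # exit on any failing simple command
set -o pipefail  # a pipeline fails if any element fails, not just the last
false | true     # fails under pipefail, so set -e aborts here
echo "never reached"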
"set -e" is probably the easiest way to do this. Just put that before any commands in your program.
If you just call exit in Bash without any parameters, it will return the exit code of the last command. Combined with OR, Bash will only invoke exit if the previous command fails. But I haven't tested this.
command1 || exit;
command2 || exit;
Bash will also store the exit code of the last command in the variable $?.
rc=$?; [ $rc -eq 0 ] || exit $rc # Exit with the original nonzero return code
http://cfaj.freeshell.org/shell/cus-faq-2.html#11
How do I get the exit code of cmd1 in cmd1|cmd2
First, note that cmd1's exit code could be non-zero and still not mean an error. This happens for instance in
cmd | head -1
You might observe a 141 (or 269 with ksh93) exit status for cmd, but that's because cmd was interrupted by a SIGPIPE signal when head -1 terminated after having read one line.
To know the exit status of the elements of a pipeline
cmd1 | cmd2 | cmd3
a. with Z shell (zsh):
The exit codes are provided in the pipestatus special array.
cmd1 exit code is in $pipestatus[1], cmd3 exit code in
$pipestatus[3], so that $? is always the same as
$pipestatus[-1].
b. with Bash:
The exit codes are provided in the PIPESTATUS special array.
cmd1 exit code is in ${PIPESTATUS[0]}, cmd3 exit code in
${PIPESTATUS[2]}, so that $? is always the same as
${PIPESTATUS: -1}.
...
For more details, see the Z shell documentation.
For Bash:
function catch_errors() {
    # Do whatever on errors
    echo "script aborted, because of errors";
    exit 1;   # exit non-zero, since we aborted due to an error
}

# This will trap any errors or commands with non-zero exit status
# by calling function catch_errors(); the function is defined first,
# so it already exists when the trap fires
trap catch_errors ERR;
#
# ... the rest of the script goes here
#
In Bash this is easy. Just tie them together with &&:
command1 && command2 && command3
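The list short-circuits: if command2 fails, command3 never runs, and the exit status of the whole list is that of the failing command.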
You can also use the nested if construct:
if command1
then
    if command2
    then
        do_something
    else
        exit
    fi
else
    exit
fi
#
#------------------------------------------------------------------------------
# purpose: to run a command, log cmd output, exit on error
# usage:
# set -e; do_run_cmd_or_exit "$cmd" ; set +e
#------------------------------------------------------------------------------
do_run_cmd_or_exit(){
    cmd="$@"
    do_log "DEBUG running cmd or exit: \"$cmd\""
    msg=$($cmd 2>&1)
    export exit_code=$?
    # if an error occurred during the execution, exit with error
    error_msg="Failed to run the command:
\"$cmd\" with the output:
\"$msg\" !!!"
    if [ $exit_code -ne 0 ] ; then
        do_log "ERROR $msg"
        do_log "FATAL $msg"
        do_exit "$exit_code" "$error_msg"
    else
        # if no errors occurred, just log the message
        do_log "DEBUG : cmdoutput : \"$msg\""
    fi
}