Exit Shell Script Based on Process Exit Code [duplicate] - bash

I have a shell script that executes a number of commands. How do I make the shell script exit if any of the commands exit with a non-zero exit code?

After each command, the exit code can be found in the $? variable, so you would have something like:
ls -al file.ext
rc=$?; if [[ $rc != 0 ]]; then exit $rc; fi
You need to be careful with piped commands, since $? only gives you the return code of the last element in the pipe. So the code:
ls -al file.ext | sed 's/^/xx: /'
will not return an error code if the file doesn't exist (since the sed part of the pipeline actually works, returning 0).
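To see that masking in action (a two-line check, assuming file.ext does not exist):
ls -al file.ext | sed 's/^/xx: /'
echo $?   # prints 0: sed succeeded, so ls's failure is hidden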
The bash shell actually provides an array which can assist in that case, that being PIPESTATUS. This array has one element for each of the pipeline components, that you can access individually like ${PIPESTATUS[0]}:
pax> false | true ; echo ${PIPESTATUS[0]}
1
Note that this is getting you the result of the false command, not the entire pipeline. You can also get the entire list to process as you see fit:
pax> false | true | false; echo ${PIPESTATUS[*]}
1 0 1
If you wanted to get the largest error code from a pipeline, you could use something like:
true | true | false | true | false
rcs=${PIPESTATUS[*]}; rc=0; for i in ${rcs}; do rc=$(($i > $rc ? $i : $rc)); done
echo $rc
This goes through each of the PIPESTATUS elements in turn, storing it in rc if it was greater than the previous rc value.

If you want to work with $?, you'll need to check it after each command, since $? is updated after each command exits. This means that if you execute a pipeline, you'll only get the exit code of the last process in the pipeline.
Another approach is to do this:
set -e
set -o pipefail
If you put this at the top of the shell script, it looks like Bash will take care of this for you. As a previous poster noted, "set -e" will cause Bash to exit with an error on any simple command. "set -o pipefail" will cause Bash to exit with an error on any command in a pipeline as well.
The Bash manual's section on the set builtin covers both options in more detail.
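As a rough sketch of a script protected this way (the commands are placeholders reusing the earlier example):
#!/usr/bin/env bash
set -e            # abort on the first failing simple command
set -o pipefail   # ...and on a failure anywhere in a pipeline
ls -al file.ext | sed 's/^/xx: /'   # the script stops here if file.ext is missing
echo "only reached if everything above succeeded"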

"set -e" is probably the easiest way to do this. Just put that before any commands in your program.

If you call exit in Bash without any parameters, it returns the exit code of the last command. Combined with OR (||), Bash invokes exit only if the previous command fails:
command1 || exit;
command2 || exit;
Bash will also store the exit code of the last command in the variable $?.

rc=$?; [ $rc -eq 0 ] || exit $rc; # Capture $? first: the test itself resets it
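A small, self-contained demonstration of why the capture matters (grep exits with status 2 here because the file does not exist):
#!/usr/bin/env bash
grep -q needle /no/such/file   # fails with status 2
rc=$?
[ $rc -eq 0 ] || exit $rc      # the script exits with 2, not with the test's own status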

http://cfaj.freeshell.org/shell/cus-faq-2.html#11
How do I get the exit code of cmd1 in cmd1|cmd2
First, note that cmd1's exit code could be non-zero and still not mean an error. This happens for instance in
cmd | head -1
You might observe a 141 (or 269 with ksh93) exit status of cmd, but it's because cmd was interrupted by a SIGPIPE signal when head -1 terminated after having read one line.
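You can reproduce this with yes, which writes until its stdout is closed (141 = 128 + 13, SIGPIPE being signal 13):
yes | head -1
echo "${PIPESTATUS[0]}"   # typically 141: yes was killed by SIGPIPE, which is not an error here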
To know the exit status of the elements of a pipeline
cmd1 | cmd2 | cmd3
a. with Z shell (zsh):
The exit codes are provided in the pipestatus special array.
cmd1 exit code is in $pipestatus[1], cmd3 exit code in
$pipestatus[3], so that $? is always the same as
$pipestatus[-1].
b. with Bash:
The exit codes are provided in the PIPESTATUS special array.
cmd1 exit code is in ${PIPESTATUS[0]}, cmd3 exit code in
${PIPESTATUS[2]}, so that $? is always the same as
${PIPESTATUS: -1}.
...
For more details see Z shell.
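One caveat for both shells: the status array is overwritten by every new pipeline (even a lone command), so copy it before doing anything else with it. In Bash, for example:
false | true | false
status=("${PIPESTATUS[@]}")   # copy immediately; the next command clobbers PIPESTATUS
echo "first: ${status[0]}, last: ${status[@]: -1}"   # first: 1, last: 1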

For Bash:
# This will trap any errors or commands with non-zero exit status
# by calling function catch_errors(). Define the handler before
# installing the trap.
function catch_errors() {
    # Do whatever on errors
    echo "script aborted, because of errors";
    exit 1;   # exit non-zero so callers can see the failure
}
trap catch_errors ERR;
#
# ... the rest of the script goes here
#

In Bash this is easy. Just tie them together with &&:
command1 && command2 && command3
You can also use the nested if construct:
if command1
then
    if command2
    then
        do_something
    else
        exit
    fi
else
    exit
fi

#
#------------------------------------------------------------------------------
# purpose: to run a command, log cmd output, exit on error
# usage:
# set -e; do_run_cmd_or_exit "$cmd" ; set +e
#------------------------------------------------------------------------------
do_run_cmd_or_exit(){
    cmd="$*" ;
    do_log "DEBUG running cmd or exit: \"$cmd\""
    msg=$($cmd 2>&1)
    export exit_code=$?
    # If an error occurred during the execution, exit with that error
    error_msg="Failed to run the command:
\"$cmd\" with the output:
\"$msg\" !!!"
    if [ $exit_code -ne 0 ] ; then
        do_log "ERROR $msg"
        do_log "FATAL $msg"
        do_exit "$exit_code" "$error_msg"
    else
        # If no errors occurred, just log the message
        do_log "DEBUG : cmdoutput : \"$msg\""
    fi
}
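The function assumes do_log and do_exit helpers defined elsewhere in the script; minimal stand-ins could look like this:
do_log(){
    # prefix each message with a timestamp and send it to stderr
    echo "$(date '+%Y-%m-%d %H:%M:%S') $*" >&2
}
do_exit(){
    # usage: do_exit <exit_code> <error_message>
    do_log "$2"
    exit "$1"
}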

Related

How to perform action on non-zero return code of any command in bash [duplicate]

I want to know whether any commands in a bash script exited with a non-zero status.
I want something similar to set -e functionality, except that I don't want it to exit when a command exits with a non-zero status. I want it to run the whole script, and then I want to know that either:
a) all commands exited with exit status 0
-or-
b) one or more commands exited with a non-zero status
e.g., given the following:
#!/bin/bash
command1 # exits with status 1
command2 # exits with status 0
command3 # exits with status 0
I want all three commands to run. After running the script, I want an indication that at least one of the commands exited with a non-zero status.
Set a trap on ERR:
#!/bin/bash
err=0
trap 'err=1' ERR
command1
command2
command3
test $err = 0 # Return non-zero if any command failed
You might even throw in a little introspection to get data about where the error occurred:
#!/bin/bash
for i in 1 2 3; do
    eval "command$i() { echo command$i; test $i != 2; }"
done
err=0
report() {
    err=1
    printf '%s' "error at line ${BASH_LINENO[0]}, in call to "
    sed -n "${BASH_LINENO[0]}p" "$0"
} >&2
trap report ERR
command1
command2
command3
exit $err
You could try to do something with a trap for the DEBUG pseudosignal, such as
trap '(( $? && ++errcount ))' DEBUG
The DEBUG trap is executed
before every simple command, for command, case command, select command, every arithmetic for command, and before the first command executes in a shell function
(quote from manual).
So if you add this trap and as the last command something to print the error count, you get the proper value:
#!/usr/bin/env bash
trap '(( $? && ++errcount ))' DEBUG
true
false
true
echo "Errors: $errcount"
returns Errors: 1 and
#!/usr/bin/env bash
trap '(( $? && ++errcount ))' DEBUG
true
false
true
false
echo "Errors: $errcount"
prints Errors: 2. Beware that the last statement is actually required to account for the second false: because the trap is executed before each command, the exit status of the second false is only checked when the trap for the echo line runs.
I am not sure if there is a ready-made solution for your requirement. I would write a function like this:
function run_cmd_with_check() {
    "$@"
    [[ $? -ne 0 ]] && ((non_zero++))
}
Then, use the function to run all the commands that need tracking:
non_zero=0
run_cmd_with_check command1
run_cmd_with_check command2
run_cmd_with_check command3
printf '%d commands exited with non-zero exit code\n' "$non_zero"
If required, the function can be enhanced to store all failed commands in an array which can be printed out at the end.
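A sketch of that enhancement (the failed_cmds array name is made up for illustration):
non_zero=0
failed_cmds=()   # collects the command lines that failed
run_cmd_with_check() {
    "$@" || { ((non_zero++)); failed_cmds+=("$*"); }
}
run_cmd_with_check command1
run_cmd_with_check command2
run_cmd_with_check command3
printf '%d commands exited with non-zero exit code\n' "$non_zero"
(( non_zero )) && printf 'failed: %s\n' "${failed_cmds[@]}"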
You may want to take a look at this post for more info: Error handling in Bash
You have the magic variable $? available in bash, which tells you the exit code of the last command:
#!/bin/bash
command1 # exits with status 1
C1_output=$? # will be 1
command2 # exits with status 0
C2_output=$? # will be 0
command3 # exits with status 0
C3_output=$? # will be 0
For each command you could do this:
if ! command1 ; then an_error=1; fi
And repeat this for all commands.
At the end, an_error will be 1 if any of them failed.
If you want a count of failures, set an_error to 0 at the beginning and do ((an_error++)) instead of an_error=1.
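Spelled out as a minimal sketch (command1 through command3 are the question's placeholders):
an_error=0
if ! command1 ; then ((an_error++)); fi
if ! command2 ; then ((an_error++)); fi
if ! command3 ; then ((an_error++)); fi
echo "$an_error command(s) failed"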
You could place your list of commands into an array and then loop over the commands. Any that return an error code, you keep the results for later viewing.
declare -A results
commands=("your" "commands")
for cmd in "${commands[@]}"; do
    out=$($cmd 2>&1)
    [[ $? -eq 0 ]] || results[$cmd]="$out"
done
Then, to see any non-zero exit codes:
for cmd in "${!results[@]}"; do echo "$cmd = ${results[$cmd]}"; done
If the length of results is 0, there were no errors on your list of commands.
This requires Bash 4+ (for the associative array)
You can use the DEBUG trap like:
declare -i code=0   # integer attribute so += is arithmetic, not string concatenation
trap 'code+=$?' DEBUG
# run commands here normally
exit $code

Process Substitution and Exit Codes

I would like to monitor the output of a process for a given period of time. The following does everything I want except give me the return value of the command that ran.
cmd='cat <<EOF
My
Three
Lines
EOF
exit 2
'
perl -pe "if (/Hello/) { print \$_; exit 1 }" <(echo "$cmd" | timeout
5 bash)
Does anyone have a way to get that return value? I've looked at other questions here, but none of the answers apply in this situation.
Bash 4.4-Only Answer: Use $! to collect PID, and wait on that PID
Bash only made it possible to collect the exit status of a process substitution in version 4.4. Since we need to have that version anyhow, might as well use automatic FD allocation too. :)
exec {psfd}< <(echo "hello"; exit 3); procsub_pid=$!
cat <&$psfd # read from the process substitution so it can exit
exec {psfd}<&- # close the FD
wait "$procsub_pid" # wait for the process to collect its exit status
echo "$?"
...properly returns:
3
In the context of your code, that might look like:
cmd() { printf '%s\n' My Three Lines; exit 2; }
export -f cmd
exec {psfd}< <(timeout 5 bash -c cmd); ps_pid=$!
perl -pe "if (/Hello/) { print \$_; exit 1 }" <&$psfd
echo "Perl exited with status $?"
wait "$ps_pid"; echo "Process substitution exited with status $?"
...emitting as output:
Perl exited with status 0
Process substitution exited with status 2
Easy Answer: Do Something Else
While it's possible to work around this in very recent shell releases, in general, process substitutions eat exit status. More to the point, there's just no need for them in the example given.
If you set the pipefail shell option, exit status from any component in a pipeline -- not just the last -- will be reflected in the pipeline's exit status; thus, you don't need to use a process substitution to have perl's exit status be honored as well.
#!/usr/bin/env bash
set -o pipefail
cmd() { printf '%s\n' My Three Lines; exit 2; }
export -f cmd
timeout 5 bash -c 'cmd' | perl -pe "if (/Hello/) { print \$_; exit 1 }"
printf '%s\n' \
    "Perl exited with status ${PIPESTATUS[1]}" \
    "Process substitution exited with status ${PIPESTATUS[0]}"

Get exit code of process substitution with pipe into while loop

The following script calls another program reading its output in a while loop (see Bash - How to pipe input to while loop and preserve variables after loop ends):
while read -r col0 col1; do
# [...]
done < <(other_program [args ...])
How can I check for the exit code of other_program to see if the loop was executed properly?
Note: ls -d / /nosuch is used as an example command below, because it fails (exit code 1) while still producing stdout output (/) (in addition to stderr output).
Bash v4.2+ solution:
ccarton's helpful answer works well in principle, but by default the while loop runs in a subshell, which means that any variables created or modified in the loop will not be visible to the current shell.
In Bash v4.2+, you can change this by turning the lastpipe option on, which makes the last segment of a pipeline run in the current shell;
as in ccarton's answer, the pipefail option must be set to have $? reflect the exit code of the first failing command in the pipeline:
shopt -s lastpipe # run the last segment of a pipeline in the current shell
shopt -so pipefail # reflect a pipeline's first failing command's exit code in $?
ls -d / /nosuch | while read -r line; do
    result=$line
done
echo "result: [$result]; exit code: $?"
The above yields (stderr output omitted):
result: [/]; exit code: 1
As you can see, the $result variable, set in the while loop, is available, and the ls command's (nonzero) exit code is reflected in $?.
Bash v3+ solution:
ikkachu's helpful answer works well and shows advanced techniques, but it is a bit cumbersome.
Here is a simpler alternative:
while read -r line || { ec=$line && break; }; do # Note the `|| { ...; }` part.
    result=$line
done < <(ls -d / /nosuch; printf $?) # Note the `; printf $?` part.
echo "result: [$result]; exit code: $ec"
By appending the value of $?, the ls command's exit code, to the output without a trailing \n (printf $?), read picks it up in the last loop iteration, but indicates failure (exit code 1), which would normally end the loop.
We can detect this case with ||, assign the exit code (which was still read into $line) to variable $ec, and then exit the loop.
On the off chance that the command's output doesn't have a trailing \n, more work is needed:
while read -r line ||
      { [[ $line =~ ^(.*)/([0-9]+)$ ]] && ec=${BASH_REMATCH[2]} && line=${BASH_REMATCH[1]};
        [[ -n $line ]]; }
do
    result=$line
done < <(printf 'no trailing newline'; ls /nosuch; printf "/$?")
echo "result: [$result]; exit code: $ec"
The above yields (stderr output omitted):
result: [no trailing newline]; exit code: 1
At least one way would be to redirect the output of the background process through a named pipe. This allows you to pick up its PID and then get the exit status by waiting on that PID.
#!/bin/bash
mkfifo pipe || exit 1
(echo foo ; exit 19) > pipe &
pid=$!
while read x ; do echo "read: $x" ; done < pipe
wait $pid
echo "exit status of bg process: $?"
rm pipe
If you can use a direct pipe (i.e. don't mind the loop being run in a subshell), you could use Bash's PIPESTATUS, which contains the exit codes of all commands in the pipeline:
(echo foo ; exit 19) | while read x ; do
    echo "read: $x"
done
echo "status: ${PIPESTATUS[0]}"
A simple way is to use the bash pipefail option to propagate the first error code from a pipeline.
set -o pipefail
other_program | while read x; do
    echo "Read: $x"
done || echo "Error: $?"
Another way is to use coproc (requires 4.0+).
coproc other_program [args ...]
while read -r -u "${COPROC[0]}" col0 col1; do
    # [...]
done
wait $COPROC_PID || echo "Error exit status: $?"
coproc frees you from having to set up the asynchronicity and stdin/stdout redirection yourself, as you'd otherwise need to do with an equivalent mkfifo.

Execute multiple commands in a bash script sequentially and fail if at least one of them fails

I have a bash script that I use to execute multiple commands in sequence and I need to return non-zero exit code if at least one command in the sequence returns non-zero exit code. I know there is a wait command for that but I'm not sure I understand how to use it.
UPD The script looks like this:
#!/bin/bash
command1
command2
command3
All the commands run in foreground. All the commands need to run regardless of which exit status the previous command returned (so it must not behave as "exit on first error"). Basically I need to gather all the exit statuses and return global exit status accordingly.
Just do it:
EXIT_STATUS=0
command1 || EXIT_STATUS=$?
command2 || EXIT_STATUS=$?
command3 || EXIT_STATUS=$?
exit $EXIT_STATUS
Not sure which of the statuses it should return if several of the commands have failed.
If by sequence you mean pipe then you need to set pipefail in your script like set -o pipefail. From man bash:
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully. If the reserved word ! precedes a pipeline, the exit status of that pipeline is the logical negation of the exit status as described above. The shell waits for all commands in the pipeline to terminate before returning a value.
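A two-line illustration of the difference:
false | true; echo $?   # 0: only the last command's status counts
set -o pipefail
false | true; echo $?   # 1: the rightmost non-zero status wins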
If you just mean sequential commands, then just check the exit status of each command and set a flag if the exit status is non-zero. Have your script return the value of the flag, like:
#!/bin/bash
EXIT=0
grep -q A <<< 'ABC' || EXIT=$? # Will exit with 0
grep -q a <<< 'ABC' || EXIT=$? # Will exit with 1
grep -q A <<< 'ABC' || EXIT=$? # Will exit with 0
echo $EXIT # Will print 1
exit $EXIT # Exit status of script will be 1
This uses the logical OR operator || to set EXIT only if a command fails. If multiple commands fail, the exit status from the last failed command will be returned by the script.
If these commands are not running in the background then wait isn't relevant here.
If you wish to know which command failed, but not necessarily its return code, you could use:
#!/bin/bash
rc=0;
counter=0;
command1 || let "rc += 1 << $counter"; let counter+=1;
command2 || let "rc += 1 << $counter"; let counter+=1;
command3 || let "rc += 1 << $counter"; let counter+=1;
exit $rc
This uses bit shifting in bash in order to set the bit corresponding to the command that failed.
Hence if the first command failed you'll get a return code of 1 (=2^0), if the third failed you would get a return code of 4 (=2^2), and if both the first and the third command failed you would get 5 as the return code.
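To decode such a return code afterwards you can test each bit; a sketch assuming the three commands above:
rc=5   # example: bits 0 and 2 set, i.e. commands 1 and 3 failed
for i in 0 1 2; do
    if (( rc & (1 << i) )); then
        echo "command $((i + 1)) failed"
    fi
done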
If you wish to know which command failed:
#!/bin/bash
EXITCODE_RESULT=0
command1
EXIT_CODE_1=$?
command2
EXIT_CODE_2=$?
command3
EXIT_CODE_3=$?
for i in ${!EXIT_CODE_*}
do
    # check whether any of the EXIT_CODE vars is non-zero
    EXITCODE_RESULT=$(($EXITCODE_RESULT || ${!i}))
    if [ ${!i} -ne 0 ]
    then
        var_fail+="'$i' "
    else
        var_succ+="'$i' "
    fi
done
exit $EXITCODE_RESULT # overall status: non-zero if at least one command failed
In $var_fail you get a list of the failed EXIT_CODE vars and in $var_succ a list of the successful ones
