Bash - break conditional clause if one of the statements returns error code != 0

I have the following bash script which runs on my CI and is intended to run my code on a physical macOS machine and on several Linux Docker images:
if [[ "$OS_NAME" == "mac_os" ]]; then
make all;
run_test1;
run_test2;
make install;
else
docker exec -i my_docker_image bash -c "make all";
docker exec -i my_docker_image bash -c "run_test1";
docker exec -i my_docker_image bash -c "run_test2";
docker exec -i my_docker_image bash -c "make install";
fi
If the tests fail (run_test1 or run_test2) they return error code 1. If they pass they return error code 0.
The whole script runs with set -e, so whenever it sees an exit code other than 0 it stops and fails the entire build.
The problem is that currently, with run_test1 and run_test2 inside the conditional clause, the conditional clause doesn't break when they fail with error code 1, and the build succeeds although the tests didn't pass.
So I have 2 questions:
How to break the conditional clause if one of the commands returns an error code other than 0?
How to break the conditional clause in such a way that the entire clause will return an error code (so the whole build will fail)?

Your code should work as expected; let's demonstrate this:
#!/usr/bin/env bash
set -e
var="test"
if [[ $var = test ]]; then
  echo "Hello"
  cat non_existing_file &> /dev/null
  echo "World"
else
  echo "Else hello"
  cat non_existing_file &> /dev/null
fi
echo I am done
This will output only "Hello", as expected. If it works differently for you, it means you didn't provide enough code. Let's change the code above and show some examples where set -e is ignored, just like in your case:
Quoting Bash Reference manual:
If a compound command or shell function executes in a context where -e
is being ignored, none of the commands executed within the compound
command or function body will be affected by the -e setting, even if
-e is set and a command returns a failure status.
Now let's quote the same manual and see some cases where -e is ignored:
The shell does not exit if the command that fails is part of the
command list immediately following a while or until keyword, part of
the test in an if statement, part of any command executed in a && or
|| list except the command following the final && or ||, any command
in a pipeline but the last, or if the command’s return status is being
inverted with !.
From this we can see that, for example, if you had the code above in a function and tested that function with if, set -e would be ignored:
#!/usr/bin/env bash
set -e
f() {
  var="test"
  if [[ $var = test ]]; then
    echo "Hello"
    cat non_existing_file &> /dev/null
    echo "World"
  else
    echo "Else hello"
    cat non_existing_file &> /dev/null
  fi
}
if f; then echo "Function OK!"; fi
echo I am done
Function f is executed in a context where set -e is being ignored (if statement), meaning that set -e doesn't affect any of the commands executed within this function. This code outputs:
Hello
World
Function OK!
I am done
The same rules apply when you execute the function in a && or || list. If you change the line if f; then echo "Function OK!"; fi to f && echo "Function OK", you will get the same output. I believe the latter could be your case.
Even so, your second question can be solved easily by adding || exit:
run_test1 || exit 1;
run_test2 || exit 1;
Your first question is trivial if you are inside a function, for example: then you can simply return. Inside a loop, you can break. If you are in neither, breaking out of the conditional clause is not that easy. Take a look at this answer.
set -e can be a surprising thing as it is being ignored in many cases. Use it with care and be aware of these cases.
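For the CI script in the question, one way to get both behaviours at once is to wrap the per-platform commands in a function and call that function as a plain command; a minimal sketch, assuming the command and image names from the question:
#!/usr/bin/env bash
set -e

run_on_mac() {
  make all || return 1
  run_test1 || return 1
  run_test2 || return 1
  make install || return 1
}

run_in_docker() {
  # a single bash -c with its own set -e keeps the same semantics inside the container
  docker exec -i my_docker_image bash -c "set -e; make all; run_test1; run_test2; make install"
}

if [[ "$OS_NAME" == "mac_os" ]]; then
  run_on_mac      # a non-zero return here aborts the whole build
else
  run_in_docker
fi
Because the functions are called in the body of the if (not in its test, and not in a && or || list), a failure inside them still trips set -e and fails the build.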

Related

How to exit gitlab job when script fails [duplicate]

I have a Bash shell script that invokes a number of commands.
I would like to have the shell script automatically exit with a return value of 1 if any of the commands return a non-zero value.
Is this possible without explicitly checking the result of each command?
For example,
dosomething1
if [[ $? -ne 0 ]]; then
exit 1
fi
dosomething2
if [[ $? -ne 0 ]]; then
exit 1
fi
Add this to the beginning of the script:
set -e
This will cause the shell to exit immediately if a simple command exits with a nonzero exit value, unless that command is part of an if, while, or until test, or part of an && or || list.
See the bash manual on the "set" internal command for more details.
It's really annoying to have a script stubbornly continue when something fails in the middle and breaks assumptions for the rest of the script. I personally start almost all portable shell scripts with set -e.
If I'm working with bash specifically, I'll start with
set -Eeuo pipefail
This covers more error handling in a similar fashion. I consider these as sane defaults for new bash programs. Refer to the bash manual for more information on what these options do.
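A rough illustration of what that header buys you (a sketch, not a complete error-handling framework):
#!/usr/bin/env bash
set -Eeuo pipefail
# -E : ERR traps are inherited by functions, command substitutions and subshells
# -e : exit immediately when a command fails (with the usual documented exceptions)
# -u : expanding an unset variable is an error
# -o pipefail : a pipeline fails if any command in it fails, not just the last one

echo "$NOT_SET"      # aborts here because of -u
false | true         # would abort because of pipefail, if it were reached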
To add to the accepted answer:
Bear in mind that set -e sometimes is not enough, especially if you have pipes.
For example, suppose you have this script
#!/bin/bash
set -e
./configure > configure.log
make
... which works as expected: an error in configure aborts the execution.
Tomorrow you make a seemingly trivial change:
#!/bin/bash
set -e
./configure | tee configure.log
make
... and now it does not work. This is explained here, and a workaround (Bash only) is provided:
#!/bin/bash
set -e
set -o pipefail
./configure | tee configure.log
make
The if statements in your example are unnecessary. Just do it like this:
dosomething1 || exit 1
If you take Ville Laurikari's advice and use set -e then for some commands you may need to use this:
dosomething || true
The || true will make the command pipeline have a true return value even if the command fails, so the -e option will not kill the script.
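For example, grep exits with status 1 when it finds nothing, which would otherwise kill a set -e script even when "no matches" is perfectly acceptable; a small sketch (build.log is just a hypothetical input file):
set -e
warnings=$(grep -c "WARN" build.log || true)   # zero matches is not an error here
echo "found $warnings warnings"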
If you have cleanup you need to do on exit, you can also use 'trap' with the pseudo-signal ERR. This works the same way as trapping INT or any other signal; bash throws ERR if any command exits with a nonzero value:
# Create the trap with
# trap COMMAND SIGNAME [SIGNAME2 SIGNAME3...]
trap "rm -f /tmp/$MYTMPFILE; exit 1" ERR INT TERM
command1
command2
command3
# Partially turn off the trap.
trap - ERR
# Now a control-C will still cause cleanup, but
# a nonzero exit code won't:
ps aux | grep blahblahblah
Or, especially if you're using "set -e", you could trap EXIT; your trap will then be executed when the script exits for any reason, including a normal end, interrupts, an exit caused by the -e option, etc.
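A sketch of the EXIT variant, using a temporary file as the thing to clean up:
set -e
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT   # runs when the script exits for any reason

command1 > "$tmpfile"
command2 < "$tmpfile"
# no explicit cleanup needed at the end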
The $? variable is rarely needed. The pseudo-idiom command; if [ $? -eq 0 ]; then X; fi should always be written as if command; then X; fi.
The cases where $? is required are when it needs to be checked against multiple values:
command
case $? in
(0) X;;
(1) Y;;
(2) Z;;
esac
or when $? needs to be reused or otherwise manipulated:
if command; then
echo "command successful" >&2
else
ret=$?
echo "command failed with exit code $ret" >&2
exit $ret
fi
Run it with -e or set -e at the top.
Also look at set -u.
On error, the below script will print a RED error message and exit.
Put this at the top of your bash script:
# BASH error handling:
# exit on command failure
set -e
# keep track of the last executed command
trap 'LAST_COMMAND=$CURRENT_COMMAND; CURRENT_COMMAND=$BASH_COMMAND' DEBUG
# on error: print the failed command
trap 'ERROR_CODE=$?; FAILED_COMMAND=$LAST_COMMAND; tput setaf 1; echo "ERROR: command \"$FAILED_COMMAND\" failed with exit code $ERROR_CODE"; tput sgr0;' ERR INT TERM
An expression like
dosomething1 && dosomething2 && dosomething3
will stop processing when one of the commands returns with a non-zero value. For example, the following command will never print "done":
cat nosuchfile && echo "done"
echo $?
1
#!/bin/bash -e
should suffice.
I am just throwing in another one for reference, since there was a follow-up question to Mark Edgar's input; here is an additional example that touches on the topic overall:
[[ `cmd` ]] && echo success_else_silence
This behaves similarly to cmd || exit errcode as someone showed, though strictly it tests whether cmd produced any output rather than its exit status.
For example, I want to make sure a partition is unmounted if mounted:
[[ `mount | grep /dev/sda1` ]] && umount /dev/sda1

If errexit is on, how do I run a command that might fail and get its exit code?

All of my scripts have errexit turned on; that is, I run set -o errexit. However, sometimes I want to run commands like grep, but want to continue execution of my script even if the command fails.
How do I do this? That is, how can I get the exit code of a command into a variable without killing my whole script?
I could turn errexit off, but I'd prefer not to.
Your errexit will only cause the script to terminate if the command that fails is "untested". Per man sh on FreeBSD:
Exit immediately if any untested command fails in non-interactive
mode. The exit status of a command is considered to be explicitly
tested if the command is part of the list used to control an if,
elif, while, or until; if the command is the left hand operand of
an ``&&'' or ``||'' operator; or if the command is a pipeline
preceded by the ! keyword.
So .. if you were thinking of using a construct like this:
grep -q something /path/to/somefile
retval=$?
if [ $retval -eq 0 ]; then
do_something # found
else
do_something_else # not found
fi
you should instead use a construct like this:
if grep -q something /path/to/somefile; then
do_something # found
else
do_something_else # not found
fi
The existence of the if keyword makes the grep command tested, thus unaffected by errexit. And this way takes less typing.
Of course, if you REALLY need the exit value in a variable, there's nothing stopping you from using $?:
if grep -q something /path/to/somefile; then
do_something # found
else
unnecessary=$?
do_something $unnecessary # not found
fi
Here's a way to achieve this: you can "turn off" set -o errexit for some lines of code, and then turn it on again when you decide:
set +e #disables set -o errexit
grep -i something file.txt
rc="$?" #capturing the return code for last command
set -e #reenables set -o errexit
Another option would be the following:
grep -i something file.txt || rc="$?"
That would allow you to capture the return code in the variable rc without interrupting your script. You could even extend this last option to capture and process the return code on the same line without risking triggering an exit:
grep -i something file.txt || rc="$?" && echo rc="$?" > somefile.txt && command || :
The last bit ||: will guarantee that the line above always returns a return code = 0 (true).
The most compact and least repetitive form I can think of:
<command> && rc=$? || rc=$?
A form without variable name repetition -- rc=$(<command> && echo $? || echo $?) -- has an expression rvalue but would also capture the stdout of <command>.[1] So, it's only safe if you "know" that <command> has no normal output.
Using the a && b || c construct is safe here because rc=$? and $(echo $?) can never fail.
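For instance, to record grep's status under set -e without stopping the script (file.txt is just a placeholder name):
set -e
grep -q "pattern" file.txt && rc=$? || rc=$?
echo "grep exited with $rc"   # 0 = match found, 1 = no match, 2 = error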
[1] Sure, you can work around this by fiddling with file descriptors, but that'd make the construct long-winded and inconvenient as a standard idiom.

"set -e" in shell and command substitution

In shell scripts set -e is often used to make them more robust by stopping the script when some of the commands executed from the script exits with non-zero exit code.
It's usually easy to specify that you don't care about some of the commands succeeding by adding || true at the end.
The problem appears when you actually care about the return value, but don't want the script to stop on non-zero return code, for example:
output=$(possibly-failing-command)
if [ 0 == $? -a -n "$output" ]; then
...
else
...
fi
Here we want to both check the exit code (thus we can't use || true inside of command substitution expression) and get the output. However, if the command in command substitution fails, the whole script stops due to set -e.
Is there a clean way to prevent the script from stopping here without unsetting -e and setting it back afterwards?
Yes, inline the command substitution in the if-statement:
#!/bin/bash
set -e
if ! output=$(possibly-failing-command); then
...
else
...
fi
Command Fails
$ ( set -e; if ! output=$(ls -l blah); then echo "command failed"; else echo "output is -->$output<--"; fi )
/bin/ls: cannot access blah: No such file or directory
command failed
Command Works
$ ( set -e; if ! output=$(ls -l core); then echo "command failed"; else echo "output is: $output"; fi )
output is: -rw------- 1 siegex users 139264 2010-12-01 02:02 core

Automatic exit from Bash shell script on error [duplicate]

I've been writing some shell script and I would find it useful if there was the ability to halt the execution of said shell script if any of the commands failed. See below for an example:
#!/bin/bash
cd some_dir
./configure --some-flags
make
make install
So in this case, if the script can't change to the indicated directory, it certainly should not go on to run ./configure afterwards.
Now I'm well aware that I could have an if check for each command (which I think is a hopeless solution), but is there a global setting to make the script exit if one of the commands fails?
Use the set -e builtin:
#!/bin/bash
set -e
# Any subsequent(*) commands which fail will cause the shell script to exit immediately
Alternatively, you can pass -e on the command line:
bash -e my_script.sh
You can also disable this behavior with set +e.
You may also want to employ all or some of the -e -u -x and -o pipefail options like so:
set -euxo pipefail
-e exits on error, -u errors on undefined variables, -x prints commands before execution, and -o (for option) pipefail exits on command pipe failures. Some gotchas and workarounds are documented well here.
(*) Note:
The shell does not exit if the command that fails is part of the
command list immediately following a while or until keyword,
part of the test following the if or elif reserved words, part
of any command executed in a && or || list except the command
following the final && or ||, any command in a pipeline but
the last, or if the command's return value is being inverted with
!
(from man bash)
To exit the script as soon as one of the commands failed, add this at the beginning:
set -e
This causes the script to exit immediately when some command that is not part of some test (like in a if [ ... ] condition or a && construct) exits with a non-zero exit code.
Use it in conjunction with pipefail.
set -e
set -o pipefail
-e (errexit): Abort the script at the first error, when a command exits with non-zero status (except in until or while loops, if-tests, and list constructs)
-o pipefail: Causes a pipeline to return the exit status of the last command in the pipe that returned a non-zero return value.
Chapter 33. Options
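A quick sketch of the difference, assuming missing.txt does not exist:
set -e
cat missing.txt | wc -l    # exit status of the pipeline is wc's (0), so the script continues

set -o pipefail
cat missing.txt | wc -l    # now the pipeline reports cat's failure and set -e aborts here
echo "never reached"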
Here is how to do it:
#!/bin/sh
abort()
{
echo >&2 '
***************
*** ABORTED ***
***************
'
echo "An error occurred. Exiting..." >&2
exit 1
}
trap 'abort' 0
set -e
# Add your script below....
# If an error occurs, the abort() function will be called.
#----------------------------------------------------------
# ===> Your script goes here
# Done!
trap : 0
echo >&2 '
************
*** DONE ***
************
'
An alternative to the accepted answer that fits in the first line:
#!/bin/bash -e
cd some_dir
./configure --some-flags
make
make install
One idiom is:
cd some_dir && ./configure --some-flags && make && make install
I realize that can get long, but for larger scripts you could break it into logical functions.
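A sketch of that idea, with hypothetical function names, still relying on && so the chain stops at the first failure:
configure_step() { cd some_dir && ./configure --some-flags; }
build_step()     { make; }
install_step()   { make install; }

configure_step && build_step && install_step || exit 1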
I think that what you are looking for is the trap command:
trap command signal [signal ...]
For more information, see this page.
Another option is to use the set -e command at the top of your script - it will make the script exit if any program / command returns a non true value.
One point missed in the existing answers is how to inherit the error traps. The bash shell provides one such option for that via set:
-E
If set, any trap on ERR is inherited by shell functions, command substitutions, and commands executed in a subshell environment. The ERR trap is normally not inherited in such cases.
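A minimal sketch of the difference -E makes:
#!/usr/bin/env bash
trap 'echo "ERR trap fired in ${FUNCNAME:-main}"' ERR
set -E                  # comment this line out and the trap no longer fires inside f
f() { false; true; }    # the failing false triggers the inherited ERR trap; f itself returns 0
f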
Adam Rosenfield's recommendation to use set -e is right in certain cases, but it has its own potential pitfalls. See GreyCat's BashFAQ - 105 - Why doesn't set -e (or set -o errexit, or trap ERR) do what I expected?
According to the manual, set -e exits
if a simple command exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of an && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted via !.
which means set -e does not work in the following simple cases (detailed explanations can be found on the wiki):
Using the arithmetic operator let or $((..)) (bash 4.1 onwards) to increment a variable value, as below:
#!/usr/bin/env bash
set -e
i=0
let i++ # or ((i++)) on bash 4.1 or later
echo "i is $i"
If the offending command is not the last command executed via && or ||. For example, set -e wouldn't fire below even though you might expect it to:
#!/usr/bin/env bash
set -e
test -d nosuchdir && echo no dir
echo survived
When the failing command is part of an if test: the exit code of an if statement is the exit code of the last executed command, and a test that merely evaluates to false does not trip set -e. In the example below the script survives even though test -d failed:
#!/usr/bin/env bash
set -e
f() { if test -d nosuchdir; then echo no dir; fi; }
f
echo survived
Failures inside command substitution are ignored, unless inherit_errexit is set (bash 4.4 onwards):
#!/usr/bin/env bash
set -e
foo=$(expr 1 - 1; true)
echo survived
When you use commands that look like assignments but aren't, such as export, declare, typeset or local. Here the call to f will not cause an exit, as local has swept away the error code of the failing command substitution (g shows how to keep it, by declaring and assigning separately):
set -e
f() { local var=$(somecommand that fails); }
g() { local var; var=$(somecommand that fails); }
When used in a pipeline, and the offending command is not the last command in it. For example, the command below would still go through. One option is to enable pipefail, which makes the pipeline return the exit code of the last command that exited with a non-zero status (see the sketch after this example):
set -e
somecommand that fails | cat -
echo survived
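With pipefail enabled, the same pipeline does abort the script:
set -e
set -o pipefail
somecommand that fails | cat -
echo survived            # not printed any more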
The ideal recommendation is not to use set -e at all, and instead to implement your own version of error checking. More information on implementing custom error handling can be found in one of my answers to Raise error in a Bash script.
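If you go down that route, one common pattern is a small wrapper that checks every command explicitly; a rough sketch (the run helper is a hypothetical name, not a bash builtin):
#!/usr/bin/env bash

run() {
  "$@"
  local rc=$?
  if [ $rc -ne 0 ]; then
    echo "command failed (exit $rc): $*" >&2
    exit $rc
  fi
}

run make all
run run_test1
run run_test2
run make install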
