I am currently using something like this:
(
false
true
) && echo "OK" || echo "FAILED";
And it doesn't work. I would like the subshell to exit with an error if something fails (false in this case). Currently it only fails if the last command fails.
It should only exit out of the current subshell and not the whole script.
I am giving this script to people and I don't want them to see all the output but still give them some kind of response if the script was successful or not.
Edit: The commands inside the subshell above are only an example. I would like to run multiple commands inside a subshell without checking the return value after each command. Something like set -e for subshells.
Edit2: I tried adding set -e inside a subshell. Maybe I did something wrong but it didn't change the behavior of my script. It didn't stop execution or exit out of the subshell with a non-0 code.
(
set -e
false
echo "test"
) && echo "OK" || echo "FAILED";
First prints test and then OK. It should print FAILED because of false.
This effect of bash's set -e inside a conditional expression like foo || true is well known and considered a problem. I think it is a good reason to hate both set -e and shell scripting in general.
http://david.rothlis.net/shell-set-e/
http://fvue.nl/wiki/Bash:_Error_handling
The first link makes the following suggestion. It looks good in small examples, but maybe less clear in the real world.
Do your own error checking by stringing together a series of commands
with “&&” like this:
mkdir abc &&
cd abc &&
do_something_else &&
last_thing ||
{ echo error >&2; exit 1; }
A few important notes: You don’t need a trailing backslash. You don’t
indent the following line. You must stick to 80 chars per line so that
everyone can see the “&&” or “||” at the end. If you need to mix ANDs
and ORs, group with { these braces } which don’t spawn a sub-shell.
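For instance, a hedged sketch of that grouping advice (fetch_sources, use_cached_sources and build_all are made-up command names):

# The { ... } group lets the || between the two fetch commands sit inside
# the && chain without spawning a subshell.
mkdir abc &&
cd abc &&
{ fetch_sources || use_cached_sources; } &&
build_all ||
{ echo "error" >&2; exit 1; }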
To your edited question:
Something like set -e for subshells.
Well, you can just do set -e for the subshell.
( set -e
my
commands
)
You can't implicitly make just your subshells have the errexit option. You can do some trickery using eval, or use a subprocess as a shell (even though a subprocess is not the same as a subshell), like
errexit_shell() {
bash -e
}
but those options are both inadvisable for various reasons, not the least of which being readability. Your best bet in that case would just be to adapt your entire script to use set -e, and your subshells will come along for the ride.
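A minimal sketch of that (the commands are placeholders); note that, outside of any && / || or if context, the failing subshell then stops the outer script as well:

#!/bin/bash
set -e                        # enabled once for the whole script; subshells inherit it

(
  cd /tmp
  false                       # the subshell stops here with status 1 ...
  echo "never printed"
)
echo "never printed either"   # ... and errexit then stops the outer script too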
To your original question:
Just capture the status of the part that indicates success or failure:
(
cat teisatrt        # the command whose status we actually care about
status=$?           # capture it before the next command overwrites $?
echo "true"
exit "$status"      # leave the subshell with cat's status
) && echo passed || echo failed
(Of course, if all you want to know is if that file is readable, don't cat it, just use test -r.)
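For instance, a small variation of the same subshell using test -r (file name taken from the question):

(
  test -r teisatrt || exit 1   # not readable: leave the subshell with status 1
  echo "true"
) && echo passed || echo failed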
As you have it, you are redirecting output from the whole subshell to /dev/null, so you will never see your "true" echo. You should move the redirect inside the subshell to the command you really want it on. In order to exit the subshell when cat fails, you will need to check for failure after cat runs. If you don't, then as you have noted, its return code is wiped out by the following statement. So something like this:
echo "Installing App"
(
cat teisatrt &> /dev/null || exit 1
echo "true"
) && echo "OK" || echo "FAILED";
Related
#!/bin/bash
set -exuo pipefail
# Run delorean to update the namespaces folder
main() {
if [ !$(yq -r '.random' file_that_doesnt_exist.yaml) = "true" ]; then
echo "yes"
else
echo "no"
fi
}
# shellcheck disable=SC2068
main $@
set -e and pipefail should, based on my understanding, exit the bash script on the first occurrence of an error. However, I get "no" on stdout even though `echo "no"` occurs after the error. How does that happen?
set -e stops the script on error, but not always:
The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test following the if or elif reserved words, part of any command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted with !.
(Quote from man bash).
Basically, don't use -e (or at least don't only use that.)
Use a trap, and carefully design your tests with it in mind.
cf. this whole page for good discussions on alternatives and explanations, and this one for some other examples and elaborations that should give you good inspiration for a better solution.
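As a rough, hedged illustration of the trap approach (the handler name and messages are made up; the failing command is taken from the question):

#!/bin/bash
on_error() {
    local rc=$?
    echo "failed (status $rc) while running: $BASH_COMMAND" >&2
    exit "$rc"
}
trap on_error ERR

# Not wrapped in if/! or a && / || list, so a failure here fires the trap:
random=$(yq -r '.random' file_that_doesnt_exist.yaml)
if [ "$random" = "true" ]; then echo "yes"; else echo "no"; fi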
I have a simple shell script with the following preamble:
#!/usr/bin/env bash
set -eu
set -o pipefail
I also have the following function:
foo() {
printf "Foo working... "
echo "Failed!"
false # point of interest #1
true # point of interest #2
}
Executing foo() as a regular command works as expected:
The script exits at #1, because the return code of false is non-zero and we use set -e.
My goal is to capture the output of the function foo() in a variable, and only print it in case an error occurs during the execution of foo().
This is what I've come up with:
printf "Doing something that could fail... "
if a="$(foo 2>&1)"; then
echo "Success!"
else
code=$?
echo "Error:"
printf "${a}"
exit $code
fi
The script doesn't exit at #1 and the "Success!" path of the if statement is executed.
Commenting out the true at #2 causes the "Error:" path of the if statement to be executed.
It seems like bash just ignores set -e inside the substitution and the if statement is simply checking the return code of the last command in foo().
Q: What causes this weird behaviour?
A: This is just how bash works, it's normal behaviour
Q: Is there any way to make bash respect set -e inside a command substitution and make this work correctly?
A: You shouldn't use set -e for this purpose
Q: How would you go about implementing this without set -e (i.e. print the output of a function only if something went wrong while executing it)?
A: See the accepted answer and my "final thoughts" section.
I am using:
GNU bash, version 5.0.11(1)-release (x86_64-apple-darwin18.6.0)
Final thoughts / takeaway (might be useful for someone else):
Beware that using if ...; then, or even && ... || ..., will disable most kinds of "traditional" bash error handling methods (this includes set -e and trap ... ERR + set -o errtrace) by design.
If you want to do something like I did, you probably should check the return codes inside your function manually and return a non-zero exit code by hand (dangerous_command || return 1) to avoid continuing execution on errors, as sketched after these notes (you can do this whether you use set -e or not).
As answered, set -e does not propagate inside command substitutions by design.
If you wish to implement error handling logic which does, you can use trap ... ERR in combination with set -o errtrace, which will work with functions running inside command substitutions (that is unless you put them inside an if statement, which will disable trap ... ERR as well, so in this case manual return code checking is your only option if you wish to stop your function on errors).
If you think about it, this whole behaviour kind of makes sense: you wouldn't expect your script to terminate on a command "guarded" by an if statement, as the whole point of your if statement is checking whether the command succeeds or not.
Personally I still wouldn't go as far as avoiding set -e and trap ... ERR entirely as they can be really useful, but understanding how they behave in different circumstances is important, because they are no silver bullet either.
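A short sketch of that manual checking (dangerous_command is a placeholder for whatever may fail):

foo() {
    printf "Foo working... "
    dangerous_command || return 1   # stop the function here on failure, no set -e needed
    echo "Done!"
}

if a="$(foo 2>&1)"; then
    echo "Success!"
else
    echo "Error:"
    printf '%s\n' "$a"
fi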
Q: How would you go about implementing this without set -e (i.e. print the output of a function only if something went wrong while executing it)?
You may use this way by checking return value of the function:
#!/usr/bin/env bash
foo() {
local n=$RANDOM
echo "Foo working with random=$n ..."
(($n % 2))
}
echo "Doing something that could fail..."
a="$(foo 2>&1)"
code=$?
if (($code == 0)); then
echo "Success!"
else
printf '{"ErrorCode": %d, "ErrorMessage": "%s"}\n' $code "$a"
exit $code
fi
Now run it as:
$> ./errScript.sh
Doing something that could fail...
Success!
$> ./errScript.sh
Doing something that could fail...
{"ErrorCode": 1, "ErrorMessage": "Foo working with random=27662 ..."}
$> ./errScript.sh
Doing something that could fail...
Success!
$> ./errScript.sh
Doing something that could fail...
{"ErrorCode": 1, "ErrorMessage": "Foo working with random=31864 ..."}
This dummy function returns failure if $RANDOM is an even number and success if it is odd.
Original answer for original question
You need to enable set -e in command substitution as well:
#!/usr/bin/env bash
set -eu
set -o pipefail
foo() {
printf "Foo working... "
echo "Failed!"
false # point of interest #1
true # point of interest #2
}
printf "Doing something that could fail... "
a="$(set -e; foo)"
code=$?
if (($code == 0)); then
echo "Success!"
else
echo "Error:"
printf "${a}"
exit $code
fi
Then use it as:
./errScript.sh; echo $?
Doing something that could fail... 1
However, do note that using set -e is not ideal in shell scripts, and it may fail to exit the script in many scenarios.
Do check this important post on set -e
How would you go about implementing this without set -e (i.e. print the output of a function only if something went wrong while executing it)?
Return a nonzero return status from your function to indicate an error/failure.
foo() {
printf "Foo working... "
echo "Failed!"
return 1 # point of interest #1
return 0 # point of interest #2
}
if a="$(foo 2>&1)"; then
echo "Success!"
else
code=$?
echo "Error:"
printf "${a}"
exit $code
fi
As others have stated, errexit is not a reliable way to deal with errors in programs. Just one of the big problems with it is that it is silently disabled in several common situations, including within command substitution.
If you still want to use errexit, there are a few ways to get the effect that you want.
One way to do it is to temporarily disable errexit in the main code, explicitly enable errexit within the command substitution (as demonstrated in the answer by @anubhava), get the exit code of the command substitution from $?, and re-enable errexit in the main code.
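A sketch of that variant, assuming the preamble and foo definition from the question:

set +e                      # temporarily drop errexit in the main shell
a="$(set -e; foo 2>&1)"     # ...but enable it inside the command substitution
code=$?                     # grab the substitution's exit status
set -e                      # restore errexit for the rest of the script

if (( code == 0 )); then
    echo "Success!"
else
    echo "Error:"
    printf '%s\n' "$a"
    exit "$code"
fi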
Another possible way to do it (after the preamble and foo definition code in the question) is:
shopt -s lastpipe
printf "Doing something that could fail... "
set +o pipefail
foo 2>&1 | { read -r -d '' a || true; }
code=${PIPESTATUS[0]}
set -o pipefail
if (( code == 0 )); then
echo "Success!"
else
echo "Error:"
printf '%s\n' "$a"
exit "$code"
fi
shopt -s lastpipe causes the last command of pipelines to be run in the top-level shell. It means that variables set in commands at the end of pipelines (like a in this case) can be used later in the program. lastpipe was introduced in Bash 4.2 so this code won't work with older versions of Bash.
The set +o pipefail (temporarily) disables pipefail to prevent a failing foo at the start of a pipeline causing the whole pipeline to fail.
The read -r -d '' a reads all of its input (assumed not to contain a NUL character), including internal newlines, into the variable a.
The { ... || true; } around the read hides the non-zero status returned by read when it encounters EOF on its input, thus preventing the pipeline from failing.
code=${PIPESTATUS[0]} captures the status of the first command in the pipeline (foo).
set -o pipefail re-enables pipefail so it is enabled for the rest of the program.
A few tweaks have been made to the code in the question to stop Shellcheck warnings.
set -e (or a script starting with #!/bin/sh -e) is extremely useful to automatically bomb out if there is a problem. It saves me having to error check every single command that might fail.
How do I get the equivalent of this inside a function?
For example, I have the following script that exits immediately on error with an error exit status:
#!/bin/sh -e
echo "the following command could fail:"
false
echo "this is after the command that fails"
The output is as expected:
the following command could fail:
Now I'd like to wrap this into a function:
#!/bin/sh -e
my_function() {
echo "the following command could fail:"
false
echo "this is after the command that fails"
}
if ! my_function; then
echo "dealing with the problem"
fi
echo "run this all the time regardless of the success of my_function"
Expected output:
the following command could fail:
dealing with the problem
run this all the time regardless of the success of my_function
Actual output:
the following command could fail:
this is after the command that fails
run this all the time regardless of the success of my_function
(ie. the function is ignoring set -e)
This presumably is expected behaviour. My question is: how do I get the effect and usefulness of set -e inside a shell function? I'd like to be able to set something up such that I don't have to individually error check every call, but the script will stop on encountering an error. It should unwind the stack as far as is needed until I do check the result, or exit the script itself if I haven't checked it. This is what set -e does already, except it doesn't nest.
I've found the same question asked outside Stack Overflow but no suitable answer.
I eventually went with this, which apparently works. I tried the export method at first, but then found that I needed to export every global (constant) variable the script uses.
Disable set -e, then run the function call inside a subshell that has set -e enabled. Save the exit status of the subshell in a variable, re-enable set -e, then test the var.
f() { echo "a"; false; echo "Should NOT get HERE"; }
# Don't pipe the subshell into anything or we won't be able to see its exit status
set +e ; ( set -e; f ) ; err_status=$?
set -e
## cleaner syntax which POSIX sh doesn't support. Use bash/zsh/ksh/other fancy shells
if ((err_status)) ; then
echo "f returned false: $err_status"
fi
## POSIX-sh features only (e.g. dash, /bin/sh)
if test "$err_status" -ne 0 ; then
echo "f returned false: $err_status"
fi
echo "always print this"
You can't run f as part of a pipeline, or as part of a && or || command list (except as the last command in the pipe or list), or as the condition in an if or while, or in other contexts that ignore set -e. This code also can't be in any of those contexts, so if you use this in a function, callers have to use the same subshell / save-exit-status trickery. This use of set -e for semantics similar to throwing/catching exceptions is not really suitable for general use, given the limitations and hard-to-read syntax.
trap err_handler_function ERR has the same limitations as set -e, in that it won't fire for errors in contexts where set -e won't exit on failed commands.
You might think the following would work, but it doesn't:
if ! ( set -e; f );then ##### doesn't work, f runs ignoring -e
echo "f returned false: $?"
fi
set -e doesn't take effect inside the subshell because it remembers that it's inside the condition of an if. I thought being a subshell would change that, but it doesn't; only putting the code in a separate file and running a whole separate shell on it would work.
From documentation of set -e:
When this option is on, if a simple command fails for any of the reasons listed in Consequences of Shell Errors or returns an exit status value > 0, and is not part of the compound list following a while, until, or if keyword, and is not a part of an AND or OR list, and is not a pipeline preceded by the ! reserved word, then the shell shall immediately exit.
In your case, false is a part of a pipeline preceded by ! and a part of if. So the solution is to rewrite your code so that it isn't.
In other words, there's nothing special about functions here. Try:
set -e
! { false; echo hi; }
You may directly use a subshell as your function definition and set it to exit immediately with set -e. This limits the scope of set -e to the function subshell only and avoids having to switch back and forth between set +e and set -e later.
In addition, you can use a variable assignment in the if test and then echo the result in an additional else statement.
# use subshell for function definition
f() (
set -exo pipefail
echo a
false
echo Should NOT get HERE
exit 0
)
# next line also works for non-subshell function given by agsamek above
#if ret="$( set -e && f )" ; then
if ret="$( f )" ; then
true
else
echo "$ret"
fi
# prints
# ++ echo a
# ++ false
# a
This is a bit of a kludge, but you can do:
export -f f
if sh -ec f; then
...
This will work if your shell supports export -f (bash does).
Note that this will not terminate the script. The echo
after the false in f will not execute, nor will the body
of the if, but statements after the if will be executed.
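A more complete, hedged sketch of the export -f idea (bash-specific; the function body is just an illustration):

#!/bin/bash
f() { echo "step 1"; false; echo "never reached"; }
export -f f                  # make f visible to the child shell

if bash -ec f; then          # run f in a fresh shell that has errexit enabled
    echo "f succeeded"
else
    echo "f failed with status $?"
fi
echo "the script itself keeps going"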
If you are using a shell that does not support export -f, you can
get the semantics you want by running sh in the function:
f() { sh -ec '
echo This will execute
false
echo This will not
'
}
Note/Edit: As a commenter pointed out, this answer uses bash, and not sh like the OP used in his question. I missed that detail when I originally posted an answer. I will leave this answer up anyway since it might be of interest to some passersby.
Y'aaaaaaaaaaaaaaaaaaallll ready for this?
Here's a way to do it by leveraging the DEBUG trap, which runs before each command and sort of makes errors behave like the exception/try/catch idioms from other languages. Take a look. I've made your example one more 'call' deep.
#!/bin/bash
# Get rid of that disgusting set -e. We don't need it anymore!
# functrace allows RETURN and DEBUG traps to be inherited by each
# subshell and function. Plus, it doesn't suffer from that horrible
# erasure problem that -e and -E suffer from when the command
# is used in a conditional expression.
set -o functrace
# A trap to bubble up the error unless our magic command is encountered
# ('catch=$?' in this case), at which point it stops. Also don't try to
# bubble the error if we're not in a function.
trap '{
code=$?
if [[ $code != 0 ]] && [[ $BASH_COMMAND != '\''catch=$?'\'' ]]; then
# If we're in a function, return; else exit.
[[ $FUNCNAME ]] && return $code || exit $code
fi
}' DEBUG
my_function() {
my_function2
}
my_function2() {
echo "the following command could fail:"
false
echo "this is after the command that fails"
}
# the || isn't necessary, but the 'catch=$?' is.
my_function || catch=$?
echo "Dealing with the problem with errcode=$catch (⌐■_■)"
echo "run this all the time regardless of the success of my_function"
and the output:
the following command could fail:
Dealing with the problem with errcode=1 (⌐■_■)
run this all the time regardless of the success of my_function
I haven't tested this in the wild, but off the top of my head, there are a bunch of pros:
It's actually not that slow. I've run the script in a tight loop with and without the functrace option, and the times are very close to each other across 10 000 iterations.
You could expand on this DEBUG trap to print a stack trace without doing that whole looping over $FUNCNAME and $BASH_LINENO nonsense. You kinda get it for free (besides actually doing an echo line).
Don't have to worry about that shopt -s inherit_errexit gotcha.
Join all commands in your function with the && operator. It's not too much trouble and will give the result you want.
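A sketch of that, reusing the function from the question:

my_function() {
    echo "the following command could fail:" &&
    false &&
    echo "this is after the command that fails"
}

if ! my_function; then
    echo "dealing with the problem"
fi
echo "run this all the time regardless of the success of my_function"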
This is by design and required by the POSIX specification. We can read in man bash:
If a compound command or shell function executes in a context where -e is being ignored, none of the commands executed within the compound command or function body will be affected by the -e setting, even if -e is set and a command returns a failure status. If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes.
therefore you should avoid relying on set -e within functions.
Given the following example (from the Austin Group discussion):
set -e
start() {
some_server
echo some_server started successfully
}
start || echo >&2 some_server failed
the set -e is ignored within the function, because the function is a command in an AND-OR list other than the last.
The above behaviour is specified and required by POSIX (see: Desired Action):
The -e setting shall be ignored when executing the compound list following the while, until, if, or elif reserved word, a pipeline beginning with the ! reserved word, or any command of an AND-OR list other than the last.
I know this isn't what you asked, but you may or may not be aware that the behavior you seek is built into "make". Any part of a "make" process that fails aborts the run. It's a wholly different way of "programming", though, than shell scripting.
You will need to call your function in a subshell (inside parentheses ( )) to achieve this.
I think you want to write your script like this:
#!/bin/sh -e
my_function() {
echo "the following command could fail:"
false
echo "this is after the command that fails"
}
(my_function)
if [ $? -ne 0 ] ; then
echo "dealing with the problem"
fi
echo "run this all the time regardless of the success of my_function"
Then the output is (as desired):
the following command could fail:
dealing with the problem
run this all the time regardless of the success of my_function
If a subshell isn't an option (say you need to do something crazy like set a variable) then you can just check every single command that might fail and deal with it by appending || return $?. This causes the function to return the error code on failure.
It's ugly, but it works
#!/bin/sh
set -e
my_function() {
echo "the following command could fail:"
false || return $?
echo "this is after the command that fails"
}
if ! my_function; then
echo "dealing with the problem"
fi
echo "run this all the time regardless of the success of my_function"
gives
the following command could fail:
dealing with the problem
run this all the time regardless of the success of my_function
I'm writing a script in Bash to test some code. However, it seems silly to run the tests if compiling the code fails in the first place, in which case I'll just abort the tests.
Is there a way I can do this without wrapping the entire script inside of a while loop and using breaks? Something like a dun dun dun goto?
Try this statement:
exit 1
Replace 1 with appropriate error codes. See also Exit Codes With Special Meanings.
Use set -e
#!/bin/bash
set -e
/bin/command-that-fails
/bin/command-that-fails2
The script will terminate after the first line that fails (returns nonzero exit code). In this case, command-that-fails2 will not run.
If you were to check the return status of every single command, your script would look like this:
#!/bin/bash
# I'm assuming you're using make
cd /project-dir
make
if [[ $? -ne 0 ]] ; then
exit 1
fi
cd /project-dir2
make
if [[ $? -ne 0 ]] ; then
exit 1
fi
With set -e it would look like:
#!/bin/bash
set -e
cd /project-dir
make
cd /project-dir2
make
Any command that fails will cause the entire script to fail and return an exit status you can check with $?. If your script is very long or you're building a lot of stuff it's going to get pretty ugly if you add return status checks everywhere.
A SysOps guy once taught me the Three-Fingered Claw technique:
yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exit 111; }
try() { "$@" || die "cannot $*"; }
These functions are robust across *NIX OSes and shell flavors. Put them at the beginning of your script (bash or otherwise), try() your statement and code on.
Explanation
(based on flying sheep comment).
yell: print the script name and all arguments to stderr:
$0 is the path to the script;
$* are all arguments.
>&2 means "redirect stdout to file descriptor 2" (stderr); descriptor 1 would be stdout itself.
die does the same as yell, but exits with a non-0 exit status, which means “fail”.
try uses the || (boolean OR), which only evaluates the right side if the left one failed.
$@ is all arguments again, but different (each argument is kept as a separate word).
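For example, the helpers above might be used like this (the commands and paths are made up):

try cd /project-dir
try make
try cp build/output /usr/local/bin/tool   # any failing step prints "cannot cp ..." and exits with 111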
If you invoke the script with source, you can use return <x>, where <x> will be the script exit status (use a non-zero value for error or false). But if you invoke an executable script (i.e., directly with its filename), the return statement will result in a complaint (error message "return: can only `return' from a function or sourced script").
If exit <x> is used instead, when the script is invoked with source, it will result in exiting the shell that started the script, but an executable script will just terminate, as expected.
To handle either case in the same script, you can use
return <x> 2> /dev/null || exit <x>
This will handle whichever invocation may be suitable. That is assuming you will use this statement at the script's top level. I would advise against directly exiting the script from within a function.
Note: <x> is supposed to be just a number.
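A small top-level sketch of that pattern (compile_step stands in for your real build command):

#!/bin/sh
if ! compile_step; then
    echo "compilation failed, aborting tests" >&2
    return 1 2> /dev/null || exit 1   # works whether the script is sourced or executed
fi
echo "running tests..."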
I often include a function called run() to handle errors. Every call I want to make is passed to this function so the entire script exits when a failure is hit. The advantage of this over the set -e solution is that the script doesn't exit silently when a line fails, and can tell you what the problem is. In the following example, the 3rd line is not executed because the script exits at the call to false.
function run() {
cmd_output=$(eval $1)
return_value=$?
if [ $return_value != 0 ]; then
echo "Command $1 failed"
exit -1
else
echo "output: $cmd_output"
echo "Command succeeded."
fi
return $return_value
}
run "date"
run "false"
run "date"
Instead of an if construct, you can leverage short-circuit evaluation:
#!/usr/bin/env bash
echo $[1+1]
echo $[2/0] # division by 0 but execution of script proceeds
echo $[3+1]
(echo $[4/0]) || exit $? # script halted with code 1 returned from `echo`
echo $[5+1]
Note the pair of parentheses, which is necessary because of the precedence of the || operator. $? is a special variable set to the exit code of the most recently executed command.
I have the same question but cannot ask it because it would be a duplicate.
The accepted answer, using exit, does not work when the script is a bit more complicated. If you use a background process to check for the condition, exit only exits that process, as it runs in a sub-shell. To kill the script, you have to explicitly kill it (at least that is the only way I know).
Here is a little script on how to do it:
#!/bin/bash
boom() {
while true; do sleep 1.2; echo boom; done
}
f() {
echo Hello
N=0
while
((N++ <10))
do
sleep 1
echo $N
# ((N > 5)) && exit 4 # does not work
((N > 5)) && { kill -9 $$; exit 5; } # works
done
}
boom &
f &
while true; do sleep 0.5; echo beep; done
This is a better answer, but still incomplete, as I really don't know how to get rid of the boom part.
You can close your program by program name in the following way:
For a soft exit (SIGTERM), do
pkill -15 -x programname # Replace "programname" with your program's name
For a hard exit (SIGKILL), do
pkill -9 -x programname # Replace "programname" with your program's name
If you'd like to know how to evaluate a condition for closing a program, you need to clarify your question.
#!/bin/bash -x
# exit and report the failure if any command fails
exit_trap () { # ---- (1)
local lc="$BASH_COMMAND" rc=$?
echo "Command [$lc] exited with code [$rc]"
}
trap exit_trap EXIT # ---- (2)
set -e # ---- (3)
Explanation:
This question is also about how to write clean code. Let's divide the above script into multiple parts:
Part - 1:
exit_trap is a function that gets called when any step fails; it captures the last executed command using $BASH_COMMAND and the return code of that step. This is the function that can be used for any clean-up, similar to shutdown hooks.
The command currently being executed or about to be executed, unless the shell is executing a command as the result of a trap, in which case it is the command executing at the time of the trap.
Doc.
Part - 2:
trap [action] [signal]
Register the trap action (here the exit_trap function) for the EXIT signal.
Part - 3:
Exit immediately if a sequence of one or more commands returns a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of any command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command’s return status is being inverted with !. If a compound command other than a subshell returns a non-zero status because a command failed while -e was being ignored, the shell does not exit. A trap on ERR, if set, is executed before the shell exits.
Doc.
Part - 4:
You can create a common.sh file and source it in all of your scripts.
source common.sh
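For instance, common.sh might hold exactly the trap/set -e preamble shown above, and every script would then start like this (paths and names are illustrative):

#!/bin/bash
source ./common.sh   # brings in exit_trap, the EXIT trap and set -e

cd /project-dir
make                 # if this fails, exit_trap reports the failing command and its code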