I am using set -e to stop execution of a script on first error.
The problem is that this does not tell me what went wrong.
How can I update a bash script so it will show me the last command that failed?
Instead of set -e, use an ERR trap; you can pass $BASH_LINENO in to get the specific line number on which the error occurred. I provide a script taking advantage of this in my answer at https://stackoverflow.com/a/185900/14122
To summarize:
error() {
    local sourcefile=$1
    local lineno=$2
    # ...logic for reporting an error at line $lineno
    # of file $sourcefile goes here...
}
trap 'error "${BASH_SOURCE}" "${LINENO}"' ERR
Make a file err.sh containing:
set -e
trap 'echo "ERROR: $BASH_SOURCE:$LINENO $BASH_COMMAND" >&2' ERR
Include it (. err.sh) in all your scripts.
Replace any
... | while read X ; do ... ; done
with
while read X ; do ... ; done < <( ... )
in your scripts, so that the trap reports the correct line number/command in the error message.
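For instance, a sourcing script might look like this minimal sketch (the file names and commands here are only illustrations):
#!/bin/bash
. ./err.sh                 # brings in set -e and the ERR trap from above

cp /etc/hostname /tmp/     # any failing command will be reported by the trap

# note the process substitution instead of piping into while,
# so the trap reports the right line if a command inside the loop fails:
while read X ; do
    echo "got: $X"
done < <( ls /tmp )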
Have you tried with --verbose?
bash --verbose script.sh
set -e by itself won't do what you want, because processing simply stops at the first error without reporting which command failed. Take a look at the Set Builtin section of the Bash Reference Manual for more information about the -x and -v options, which you can use for debugging.
Something like:
set -e
set -v
will exit on any error, while showing you each input line as it is read. It will not, however, show you just the line with the error. For that, you will need to do your own explicit error checking.
For example:
set +e
false   # the command that could fail
real_exit_status=$?
if [ $real_exit_status -ne 0 ]; then
    echo 'Some useful error message.' >&2
    exit $real_exit_status
fi
set -ex will show all lines as they are executed and stop at the first command returning a nonzero status (except where the command is part of an if/while/until condition or similar context).
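For instance (my own minimal illustration, not from the original post):
#!/bin/bash
set -ex
echo "first step"
false                 # traced as '+ false'; the script exits here with status 1
echo "never reached"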
Related
I have been looking for a way to print the line number inside the shell script when it errors out.
I came across the '-x' option, which prints each line as the shell script runs, but this is not exactly what I want. Maybe I could print $LINENO before every exit? Is there a cleaner way to do it?
I just wanted the line number so I could open the shell script and directly go to the place where the interpreter realized the error.
Using
PS4=':$LINENO+'
will add the line number to the output of set -x.
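A minimal sketch of how that looks (the trace shown in the comments is roughly what bash prints; exact line numbers depend on the file):
#!/bin/bash
PS4=':$LINENO+'
set -x
echo "hello"   # traced as something like :4+echo hello
false          # traced as something like :5+false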
If you only want to print that on errors, there's some risk of running into bugs in recent interpreters. However, you can try the following (first given in this previous answer):
error() {
    local parent_lineno="$1"
    local message="$2"
    local code="${3:-1}"
    if [[ -n "$message" ]] ; then
        echo "Error on or near line ${parent_lineno}: ${message}; exiting with status ${code}"
    else
        echo "Error on or near line ${parent_lineno}; exiting with status ${code}"
    fi
    exit "${code}"
}
trap 'error ${LINENO}' ERR
Again, this will not work on some recent builds of bash, which don't always have LINENO set correctly inside traps.
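Since the function also accepts an optional message and exit code, it can be called explicitly as well; for instance (the command name here is just a placeholder):
some_command || error ${LINENO} "some_command failed" 2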
Another approach (which will only work on recent shells; the below uses some bash 4.0 and 4.1 features) is to use PS4 to emit the exit status and line number of each command to a dedicated file descriptor, and use tail to print only the last line given to that FD before the shell exits:
exec {BASH_XTRACEFD}> >(tail -n 1) # send set -x output to tail -n 1
PS4=':At line $LINENO; prior command exit status $?+'
set -x
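Assembled into a complete script, that approach might look something like the following sketch; the set -e and the failing command at the end are my own additions for demonstration:
#!/bin/bash
exec {BASH_XTRACEFD}> >(tail -n 1)   # send set -x output to tail -n 1
PS4=':At line $LINENO; prior command exit status $?+'
set -e
set -x
echo "this works"
false   # set -e stops the script here; the last trace line printed points at this command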
I use the set -e option at the top of my bash scripts to stop executing on any error. But I can also use -e with the echo command, like the following:
echo -e "Some text"
I have two questions:
How do I correctly handle errors in bash scripts?
What does the -e option mean in the echo command?
The "correct" way to handle bash errors depends on the situation and what you want to accomplish.
In some cases, the if statement approach that barmar describes is the best way to handle a problem.
The vagaries of set -e
set -e will silently stop a script as soon as there is an uncaught error. It will print no message. So, if you want to know why or what line caused the script to fail, you will be frustrated.
Further, as documented on Greg's FAQ, the behavior of set -e varies from one bash version to the next and can be quite surprising.
In sum, set -e has only limited uses.
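One classic surprise, as a minimal illustration of my own (the rule behind it is the POSIX wording quoted later on this page):
#!/bin/bash
set -e
check() {
    false                  # does NOT abort the script here...
    echo "still running"   # ...because -e is ignored while an if condition is evaluated
}
if check; then
    echo "check reported success"
fi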
A die function
In other cases, when a command fails, you want the script to exit immediately with a message. In Perl, the die function provides a handy way to do this. This feature can be emulated in shell with a function:
die () {
    echo "ERROR: $*. Aborting." >&2
    exit 1
}
A call to die can then be easily attached to commands which have to succeed or else the script must be stopped. For example:
cp file1 dir/ || die "Failed to cp file1 to dir."
Here, due to the use of bash's OR control operator, ||, the die command is executed only if the command which precedes it fails.
If you want to handle an error instead of stopping the script when it happens, use if:
if ! some_command
then
    # Do whatever you want here, for instance...
    echo some_command got an error
fi
echo -e is unrelated. This -e option tells the echo command to process escape sequences in its arguments. See man echo for the list of escape sequences.
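For example, with bash's builtin echo:
echo -e "first\tsecond\nthird"   # -e turns \t into a tab and \n into a newline
echo "first\tsecond\nthird"      # without -e, the backslash sequences print literally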
One way of handling error is to use -e in your shebang at start of your script and using a trap handler for ERR like this:
#!/bin/bash -e
errHandler () {
    d=$(date '+%D %T :: ')
    echo "$d Error, Exiting..." >&2
    # can do more things like print to a log file etc or some cleanup
    exit 1
}
trap errHandler ERR
Now this function errHandler will be called only when an error occurs in your script.
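For illustration, if the script above continued with a failing command (the path is made up):
ls /nonexistent-dir    # fails, so errHandler prints its timestamped message to stderr
echo "never reached"   # not executed: errHandler exits the script with status 1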
I am trying to echo the last command run inside a bash script. I found a way to do it with some history, tail, head, and sed, which works fine when commands represent a specific line in my script from a parser standpoint. However, under some circumstances I don't get the expected output, for instance when the command is inserted inside a case statement:
The script:
#!/bin/bash
set -o history
date
last=$(echo `history |tail -n2 |head -n1` | sed 's/[0-9]* //')
echo "last command is [$last]"
case "1" in
"1")
date
last=$(echo `history |tail -n2 |head -n1` | sed 's/[0-9]* //')
echo "last command is [$last]"
;;
esac
The output:
Tue May 24 12:36:04 CEST 2011
last command is [date]
Tue May 24 12:36:04 CEST 2011
last command is [echo "last command is [$last]"]
[Q] Can someone help me find a way to echo the last run command regardless of how/where this command is called within the bash script?
My answer
Despite the much appreciated contributions from my fellow SO'ers, I opted for writing a run function - which runs all its parameters as a single command and display the command and its error code when it fails - with the following benefits:
- I only need to prepend the commands I want to check with run, which keeps them on one line and doesn't affect the conciseness of my script
- Whenever the script fails on one of these commands, the last output line of my script is a message that clearly displays which command failed along with its exit code, which makes debugging easier
Example script:
#!/bin/bash
die() { echo >&2 -e "\nERROR: $@\n"; exit 1; }
run() { "$@"; code=$?; [ $code -ne 0 ] && die "command [$*] failed with error code $code"; }
case "1" in
"1")
run ls /opt
run ls /wrong-dir
;;
esac
The output:
$ ./test.sh
apacheds google iptables
ls: cannot access /wrong-dir: No such file or directory
ERROR: command [ls /wrong-dir] failed with error code 2
I tested various commands with multiple arguments, bash variables as arguments, quoted arguments... and the run function didn't break them. The only issue I found so far is running an echo, which breaks, but I do not plan to check my echos anyway.
Bash has built in features to access the last command executed. But that's the last whole command (e.g. the whole case command), not individual simple commands like you originally requested.
!:0 = the name of command executed.
!:1 = the first parameter of the previous command
!:4 = the fourth parameter of the previous command
!:* = all of the parameters of the previous command
!^ = the first parameter of the previous command (same as !:1)
!$ = the final parameter of the previous command
!:-3 = all parameters in range 0-3 (inclusive)
!:2-5 = all parameters in range 2-5 (inclusive)
!! = the previous command line
etc.
So, the simplest answer to the question is, in fact:
echo !!
...alternatively:
echo "Last command run was ["!:0"] with arguments ["!:*"]"
Try it yourself!
echo this is a test
echo !!
In a script, history expansion is turned off by default, you need to enable it with
set -o history -o histexpand
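So a scripted version of the interactive experiment above could look like this sketch:
#!/bin/bash
set -o history -o histexpand
echo this is a test
echo !!                 # expands to the previous command: echo this is a test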
The command history is an interactive feature. Only complete commands are entered in the history. For example, the case construct is entered as a whole, when the shell has finished parsing it. Neither looking up the history with the history built-in nor printing it through shell expansion (!:p) does what you seem to want, which is to print invocations of simple commands.
The DEBUG trap lets you execute a command right before any simple command execution. A string version of the command to execute (with words separated by spaces) is available in the BASH_COMMAND variable.
trap 'previous_command=$this_command; this_command=$BASH_COMMAND' DEBUG
…
echo "last command is $previous_command"
Note that previous_command will change every time you run a command, so save it to a variable in order to use it. If you want to know the previous command's return status as well, save both in a single command.
cmd=$previous_command ret=$?
if [ $ret -ne 0 ]; then echo "$cmd failed with error code $ret"; fi
Furthermore, if you only want to abort on a failed commands, use set -e to make your script exit on the first failed command. You can display the last command from the EXIT trap.
set -e
trap 'echo "exit $? due to $previous_command"' EXIT
Note that if you're trying to trace your script to see what it's doing, forget all this and use set -x.
After reading the answer from Gilles, I decided to see if the $BASH_COMMAND var was also available (and the desired value) in an EXIT trap - and it is!
So, the following bash script works as expected:
#!/bin/bash
exit_trap () {
    local lc="$BASH_COMMAND" rc=$?
    echo "Command [$lc] exited with code [$rc]"
}
trap exit_trap EXIT
set -e
echo "foo"
false 12345
echo "bar"
The output is
foo
Command [false 12345] exited with code [1]
bar is never printed because set -e causes bash to exit the script when a command fails and the false command always fails (by definition). The 12345 passed to false is just there to show that the arguments to the failed command are captured as well (the false command ignores any arguments passed to it)
I was able to achieve this by using set -x in the main script (which makes the script print out every command that is executed) and writing a wrapper script which just shows the last line of output generated by set -x.
This is the main script:
#!/bin/bash
set -x
echo some command here
echo last command
And this is the wrapper script:
#!/bin/sh
./test.sh 2>&1 | grep '^\+' | tail -n 1 | sed -e 's/^\+ //'
Running the wrapper script produces this as output:
echo last command
history | tail -2 | head -1 | cut -c8-
tail -2 returns the last two command lines from history
head -1 returns just the first line
cut -c8- returns just the command itself, stripping the leading history event number and spaces.
There is a race condition between the last command ($_) and last error ($?) variables. If you try to store one of them in a variable of your own, both will already hold new values, because the command that sets them counts as a command itself. Actually, the last command has no value at all in this case.
Here is what I did to store (nearly) both pieces of information in my own variables, so my bash script can determine whether there was any error AND set the title to the last run command:
# This construct is needed because of a race condition when trying to obtain
# both the last command and the last error. With this, the information about
# the last error is implied by the corresponding case while the command is retrieved.
if [[ "${?}" == 0 && "${_}" != "" ]] ; then
    # Last command MUST be retrieved first.
    LASTCOMMAND="${_}" ;
    RETURNSTATUS='✓' ;
elif [[ "${?}" == 0 && "${_}" == "" ]] ; then
    LASTCOMMAND='unknown' ;
    RETURNSTATUS='✓' ;
elif [[ "${?}" != 0 && "${_}" != "" ]] ; then
    # Last command MUST be retrieved first.
    LASTCOMMAND="${_}" ;
    RETURNSTATUS='✗' ;
    # Fixme: "$?" not changing state until command executed.
elif [[ "${?}" != 0 && "${_}" == "" ]] ; then
    LASTCOMMAND='unknown' ;
    RETURNSTATUS='✗' ;
    # Fixme: "$?" not changing state until command executed.
fi
This script will retain the information about whether an error occurred, and will obtain the last run command. Because of the race condition I cannot store the actual value. Besides, most commands don't even care about specific error numbers; they just return something other than '0', as you'll notice if you look at the actual exit codes in bash.
It should be possible with something like an internal extension to bash, but I'm not familiar with anything like that, and it wouldn't be portable either.
CORRECTION
I didn't think it was possible to retrieve both variables at the same time. Although I like the style of the code, I assumed it would be interpreted as two commands. This was wrong, so my answer boils down to:
# Because of a race condition, both MUST be retrieved at the same time.
declare RETURNSTATUS="${?}" LASTCOMMAND="${_}" ;
if [[ "${RETURNSTATUS}" == 0 ]] ; then
    declare RETURNSYMBOL='✓' ;
else
    declare RETURNSYMBOL='✗' ;
fi
Although my post might not get any positive rating, I finally solved my problem myself. And this seems appropriate regarding the initial post. :)
set -e (or a script starting with #!/bin/sh -e) is extremely useful to automatically bomb out if there is a problem. It saves me having to error check every single command that might fail.
How do I get the equivalent of this inside a function?
For example, I have the following script that exits immediately on error with an error exit status:
#!/bin/sh -e
echo "the following command could fail:"
false
echo "this is after the command that fails"
The output is as expected:
the following command could fail:
Now I'd like to wrap this into a function:
#!/bin/sh -e
my_function() {
    echo "the following command could fail:"
    false
    echo "this is after the command that fails"
}
if ! my_function; then
    echo "dealing with the problem"
fi
echo "run this all the time regardless of the success of my_function"
Expected output:
the following command could fail:
dealing with the problem
run this all the time regardless of the success of my_function
Actual output:
the following command could fail:
this is after the command that fails
run this all the time regardless of the success of my_function
(ie. the function is ignoring set -e)
This presumably is expected behaviour. My question is: how do I get the effect and usefulness of set -e inside a shell function? I'd like to be able to set something up such that I don't have to individually error check every call, but the script will stop on encountering an error. It should unwind the stack as far as is needed until I do check the result, or exit the script itself if I haven't checked it. This is what set -e does already, except it doesn't nest.
I've found the same question asked outside Stack Overflow but no suitable answer.
I eventually went with this, which apparently works. I tried the export method at first, but then found that I needed to export every global (constant) variable the script uses.
Disable set -e, then run the function call inside a subshell that has set -e enabled. Save the exit status of the subshell in a variable, re-enable set -e, then test the var.
f() { echo "a"; false; echo "Should NOT get HERE"; }
# Don't pipe the subshell into anything or we won't be able to see its exit status
set +e ; ( set -e; f ) ; err_status=$?
set -e
## cleaner syntax which POSIX sh doesn't support. Use bash/zsh/ksh/other fancy shells
if ((err_status)) ; then
echo "f returned false: $err_status"
fi
## POSIX-sh features only (e.g. dash, /bin/sh)
if test "$err_status" -ne 0 ; then
echo "f returned false: $err_status"
fi
echo "always print this"
You can't run f as part of a pipeline, or as part of a && or || command list (except as the last command in the pipe or list), or as the condition in an if or while, or in other contexts that ignore set -e. This code also can't be in any of those contexts, so if you use this in a function, callers have to use the same subshell / save-exit-status trickery. This use of set -e for semantics similar to throwing/catching exceptions is not really suitable for general use, given the limitations and hard-to-read syntax.
trap err_handler_function ERR has the same limitations as set -e, in that it won't fire for errors in contexts where set -e won't exit on failed commands.
You might think the following would work, but it doesn't:
if ! ( set -e; f ); then  ##### doesn't work, f runs ignoring -e
    echo "f returned false: $?"
fi
set -e doesn't take effect inside the subshell because it remembers that it's inside the condition of an if. I thought being a subshell would change that, but only being in a separate file and running a whole separate shell on it would work.
From the documentation of set -e:
When this option is on, if a simple command fails for any of the reasons listed in Consequences of Shell Errors or returns an exit status value > 0, and is not part of the compound list following a while, until, or if keyword, and is not a part of an AND or OR list, and is not a pipeline preceded by the ! reserved word, then the shell shall immediately exit.
In your case, false is part of a pipeline preceded by ! and part of an if. So the solution is to rewrite your code so that it isn't.
In other words, there's nothing special about functions here. Try:
set -e
! { false; echo hi; }
You may directly use a subshell as your function definition and set it to exit immediately with set -e. This limits the scope of set -e to the function subshell only and avoids switching back and forth between set +e and set -e later.
In addition, you can use a variable assignment in the if test and then echo the result in an additional else statement.
# use subshell for function definition
f() (
    set -exo pipefail
    echo a
    false
    echo Should NOT get HERE
    exit 0
)
# next line also works for non-subshell function given by agsamek above
#if ret="$( set -e && f )" ; then
if ret="$( f )" ; then
    true
else
    echo "$ret"
fi
# prints
# ++ echo a
# ++ false
# a
This is a bit of a kludge, but you can do:
export -f f
if sh -ec f; then
...
This will work if your shell supports export -f (bash does).
Note that this will not terminate the script. The echo after the false in f will not execute, nor will the body of the if, but statements after the if will be executed.
If you are using a shell that does not support export -f, you can get the semantics you want by running sh in the function:
f() { sh -ec '
echo This will execute
false
echo This will not
'
}
Note/Edit: As a commenter pointed out, this answer uses bash, and not sh like the OP used in his question. I missed that detail when I originally posted an answer. I will leave this answer up anyway since it might be of interest to some passerby.
Y'aaaaaaaaaaaaaaaaaaallll ready for this?
Here's a way to do it by leveraging the DEBUG trap, which runs before each command, and sort of turns errors into something like the exception/try/catch idioms from other languages. Take a look. I've made your example one more 'call' deep.
#!/bin/bash
# Get rid of that disgusting set -e. We don't need it anymore!
# functrace allows RETURN and DEBUG traps to be inherited by each
# subshell and function. Plus, it doesn't suffer from that horrible
# erasure problem that -e and -E suffer from when the command
# is used in a conditional expression.
set -o functrace
# A trap to bubble up the error unless our magic command is encountered
# ('catch=$?' in this case) at which point it stops. Also don't try to
# bubble the error if we're not in a function.
trap '{
    code=$?
    if [[ $code != 0 ]] && [[ $BASH_COMMAND != '\''catch=$?'\'' ]]; then
        # If we're in a function, return, else exit.
        [[ $FUNCNAME ]] && return $code || exit $code
    fi
}' DEBUG
my_function() {
    my_function2
}
my_function2() {
    echo "the following command could fail:"
    false
    echo "this is after the command that fails"
}
# the || isn't necessary, but the 'catch=$?' is.
my_function || catch=$?
echo "Dealing with the problem with errcode=$catch (⌐■_■)"
echo "run this all the time regardless of the success of my_function"
and the output:
the following command could fail:
Dealing with the problem with errcode=1 (⌐■_■)
run this all the time regardless of the success of my_function
I haven't tested this in the wild, but off the top of my head, there are a bunch of pros:
It's actually not that slow. I've run the script in a tight loop with and without the functrace option, and times are very close to each other under 10 000 iterations.
You could expand on this DEBUG trap to print a stack trace without doing that whole looping over $FUNCNAME and $BASH_LINENO nonsense. You kinda get it for free (besides actually doing an echo line).
Don't have to worry about that shopt -s inherit_errexit gotcha.
Join all commands in your function with the && operator. It's not too much trouble and will give the result you want.
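For example, applied to the question's function (my own sketch, giving the same expected output as in the question):
my_function() {
    echo "the following command could fail:" &&
    false &&
    echo "this is after the command that fails"
}

if ! my_function; then
    echo "dealing with the problem"
fi
echo "run this all the time regardless of the success of my_function"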
This is by design and per the POSIX specification. We can read in man bash:
If a compound command or shell function executes in a context where -e is being ignored, none of the commands executed within the compound command or function body will be affected by the -e setting, even if -e is set and a command returns a failure status. If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes.
therefore you should avoid relying on set -e within functions.
Given the following example (from the Austin Group):
set -e
start() {
    some_server
    echo some_server started successfully
}
start || echo >&2 some_server failed
the set -e is ignored within the function, because the function is a command in an AND-OR list other than the last.
The above behaviour is specified and required by POSIX (see: Desired Action):
The -e setting shall be ignored when executing the compound list following the while, until, if, or elif reserved word, a pipeline beginning with the ! reserved word, or any command of an AND-OR list other than the last.
I know this isn't what you asked, but you may or may not be aware that the behavior you seek is built into "make". Any part of a "make" process that fails aborts the run. It's a wholly different way of "programming", though, than shell scripting.
You will need to call your function in a subshell (inside parentheses ()) to achieve this.
I think you want to write your script like this:
#!/bin/sh -e
my_function() {
    echo "the following command could fail:"
    false
    echo "this is after the command that fails"
}
(my_function)
if [ $? -ne 0 ] ; then
    echo "dealing with the problem"
fi
echo "run this all the time regardless of the success of my_function"
Then the output is (as desired):
the following command could fail:
dealing with the problem
run this all the time regardless of the success of my_function
If a subshell isn't an option (say you need to do something crazy like set a variable) then you can just check every single command that might fail and deal with it by appending || return $?. This causes the function to return the error code on failure.
It's ugly, but it works
#!/bin/sh
set -e
my_function() {
    echo "the following command could fail:"
    false || return $?
    echo "this is after the command that fails"
}
if ! my_function; then
    echo "dealing with the problem"
fi
echo "run this all the time regardless of the success of my_function"
gives
the following command could fail:
dealing with the problem
run this all the time regardless of the success of my_function