Trouble with errexit in bash

I'm writing a bash script and I'd like it to crash on the first error. However, I can't get it to do this in a specific circumstance I simplified below:
#!/bin/bash
set -Exu

bad_command() {
    false
    #exit 1
    echo "NO!!"
}

(set -o pipefail; bad_command | cat; echo "${PIPESTATUS[@]}; $?") || false
echo "NOO!!"
The expected behaviour would be a crash of the bad_command subshell, propagated to a crash of the () subshell, propagated to a crash of the outer shell. But none of them crash, and both NOs get printed(!?)
If I uncomment the exit 1 statement, then the NO is no longer printed, but NOO still is(!?)
I tried using set -e explicitly inside each of the 3 shells (first line in the function, first statement after the opening parenthesis), but there's no change.
Note: I need to execute the pipe inside the () subshell, because this is a simplification of a more elaborate script. Without the () subshell, everything works as expected, no NOs whatsoever with either false or exit 1.

This turns out to be specified behaviour rather than a bash bug: when a command is tested by || or &&, set -e is ignored for the entire command, including any functions and subshells it invokes. See this discussion: https://groups.google.com/forum/?fromgroups=#!topic/gnu.bash.bug/NCK_0GmIv2M
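To see the rule in action, here is a minimal sketch reusing the question's bad_command, contrasting a tested call with an untested one:

#!/bin/bash
set -eu

bad_command() {
    false
    echo "NO!!" # reached only when errexit is suppressed
}

# Tested by ||: set -e is ignored inside the whole subshell, so `false`
# does not abort it, "NO!!" is printed, and the subshell exits 0 --
# the right-hand side of || never runs.
(bad_command) || echo "caught status $?"

# Untested: set -e is active, the subshell dies at `false`,
# and its non-zero status aborts the outer script right here.
(bad_command)

echo "never reached"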

After hitting the same problem, I found a workaround. Actually three, depending on what you want to achieve.
First, a small rewrite of the OP's example code, since handling the exit code requires some extra work down the line:
#!/bin/bash
set -eEu

bad_command_extra() {
    return 42
}

bad_command() {
    bad_command_extra
    echo "NO!!"
}

if bad_command; then echo "NOO!!"; else echo "errexit worked: $?"; fi
If all you need is for errexit to work, the following way of calling bad_command is sufficient. The trick is to launch bad_command in the background:
(bad_command) &
bc_pid=$!
if wait $bc_pid; then echo "NOO!!"; else echo "errexit worked: $?"; fi
If you want to work with the output as well (similar to abc=$(bad_command)), capture it in a temporary file as usual:
tmp_out=$(mktemp)
tmp_err=$(mktemp)
(bad_command >$tmp_out 2>$tmp_err) &
bc_pid=$!
if wait $bc_pid; then echo "NOO!!"; else echo "errexit worked: $?"; fi
cat $tmp_out $tmp_err
rm -f $tmp_out $tmp_err
Finally, I found out in my testing that the wait command returned either 0 or 1, but not the actual exit code of bad_command (bash 4.3.42). This requires some more work:
tmp_out=$(mktemp)
tmp_err=$(mktemp)
tmp_exit=$(mktemp)
echo 0 > $tmp_exit
(
    get_exit() {
        echo $? > $tmp_exit
    }
    trap get_exit ERR
    bad_command >$tmp_out 2>$tmp_err
) &
bc_pid=$!
if wait $bc_pid
then echo "NOO!!"
else
    # Read the exit file only after wait returns, so the subshell
    # is guaranteed to have finished writing it.
    bc_exit=$(cat $tmp_exit)
    echo "errexit worked: $bc_exit"
fi
cat $tmp_out $tmp_err
rm -f $tmp_out $tmp_err $tmp_exit
For some strange reason, putting the if on one line as before got me an exit code of 0 in this case! (Most likely the race noted in the comment above: $tmp_exit must not be read before wait returns.)

Related

Cannot stop BASH when using && and || operators

I would like to stop my BASH if the commands have any errors.
make clean || ( echo "ERROR!!" && echo "ERROR!!" >> log_file && exit 1 )
But it seems like my BASH still keeps going. How do I put exit 1 in the one-line operators?
I am very new to BASH, any help is appreciated!
exit 1 exits from the subshell created by (), not the original shell. Use {} to keep the command group in the same shell.
Don't use && between commands unless you want to stop as soon as one of them fails. Use ; to separate commands on the same line.
make clean || { echo "ERROR!!" ; echo "ERROR!!" >> log_file ; exit 1 ;}
Or just use if to make it easier to understand.
if ! make clean
then
    echo "ERROR!!"
    echo "ERROR!!" >> log_file
    exit 1
fi
You have the direct solution in Barmar's answer. An alternative if you want to check multiple commands in a similar way could be to define a function which could be reused:
die() {
    echo "ERROR: $*"
    echo "ERROR: $*" >> log_file
    exit 1
}
make clean || die "I left it unclean"
make something || die "something went wrong"
or, if you want the script to end at first sign of trouble, you could use set -e
set -e
make clean # stops here unless successful
make something # or here if this line fails etc.
You may want to log an error message too, so you could install a trap on ERR. errfunc would then be called before the script exits, and the line number where it failed would be logged:
errfunc() {
    echo "ERROR on line $1"
    echo "ERROR on line $1" >> log_file
}
trap 'errfunc $LINENO' ERR
set -e
make clean
make something

In bash, either exit script without exiting the shell or export/set variables from within subshell

I have a function that runs a set of scripts that set variables, functions, and aliases in the current shell.
reloadVariablesFromScript() {
    for script in "${scripts[@]}"; do
        . "$script"
    done
}
If one of the scripts has an error, I want to exit the script and then exit the function, but not to kill the shell.
reloadVariablesFromScript() {
    for script in "${scripts[@]}"; do
        {(
            set -e
            . "$script"
        )}
        if [[ $? -ne 0 ]]; then
            >&2 echo $script failed. Skipping remaining scripts.
            return 1
        fi
    done
}
This would do what I want, except that the variables the script sets never land in the current shell, whether the script succeeds or fails.
Without the subshell, set -e causes the whole shell to exit, which is undesirable.
Is there a way I can either prevent the called script from continuing on an error without killing the shell or else set/export variables, aliases, and functions from within a subshell?
The following script simulates my problem:
test() {
    {(
        set -e
        export foo=bar
        false
        echo Should not have gotten here!
        export bar=baz
    )}
    local errorCode=$?
    echo foo="'$foo'". It should equal 'bar'.
    echo bar="'$bar'". It should not be set.
    if [[ $errorCode -ne 0 ]]; then
        echo Script failed correctly. Exiting function.
        return 1
    fi
    echo Should not have gotten here!
}
test
If worst comes to worst, since these scripts don't actually edit the filesystem, I can run each script in a subshell, check the exit code, and if it succeeds, run it outside of a subshell.
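A sketch of that last-resort idea, assuming each script is cheap to run twice and has no side effects beyond defining variables, functions, and aliases:

reloadVariablesFromScript() {
    local script
    for script in "${scripts[@]}"; do
        # Dry run in a throwaway subshell first, to catch errors.
        if ! ( set -e; . "$script" ) >/dev/null 2>&1; then
            >&2 echo "$script failed. Skipping remaining scripts."
            return 1
        fi
        # It passed, so source it for real in the current shell.
        . "$script"
    done
}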
Note that set -e has a number of surprising behaviors -- relying on it is not universally considered a good idea. That caveat having been given, though: we can shuffle environment variables, aliases, and shell functions out as text:
envTest() {
    local errorCode newVars
    newVars=$(
        set -e
        {
            export foo=bar
            false
            echo Should not have gotten here!
            export bar=baz
        } >&2
        # generate code which, when eval'd, recreates our functions and variables
        declare -p | egrep -v '^declare -[^[:space:]]*r'
        declare -f
        alias -p
    ); errorCode=$?
    if (( errorCode == 0 )); then
        eval "$newVars"
    fi
    printf 'foo=%q. It should equal %q\n' "$foo" "bar"
    printf 'bar=%q. It should not be set.\n' "$bar"
    if [[ $errorCode -ne 0 ]]; then
        echo 'Script failed correctly. Exiting function.'
        return 1
    fi
    echo 'Should not have gotten here!'
}
envTest
Note that this code only evaluates either export if the entire script segment succeeds; the question text and comments appear to indicate that this is acceptable, if not desired.

Writing try catch finally in shell

Is there a linux bash command like the java try catch finally?
Or does the linux shell always go on?
try {
    `executeCommandWhichCanFail`
    mv output
} catch {
    mv log
} finally {
    rm tmp
}
Based on your example, it looks like you are trying to do something akin to always deleting a temporary file, regardless of how a script exits. In Bash, to do this, try the trap builtin command to trap the EXIT signal.
#!/bin/bash
trap 'rm tmp' EXIT

if executeCommandWhichCanFail; then
    mv output
else
    mv log
    exit 1 # Exit with failure
fi
exit 0 # Exit with success
The rm tmp statement in the trap is always executed when the script exits, so deletion of the file "tmp" will always be attempted.
Installed traps can also be reset; a call to trap with only a signal name will reset the signal handler.
trap EXIT
For more details, see the bash manual page: man bash
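For instance, a minimal sketch of installing and then resetting an EXIT trap:

#!/bin/bash
tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT # install the cleanup handler

echo "data" > "$tmp" # ... work with the file ...

rm -f "$tmp" # cleaned up by hand, so the trap is no longer needed
trap EXIT    # reset: nothing runs at exit anymore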
Well, sort of:
{ # your 'try' block
    executeCommandWhichCanFail &&
    mv output
} || { # your 'catch' block
    mv log
}
rm tmp # finally: this will always happen
I found success in my script with this syntax:
# Try, catch, finally
(echo "try this") && (echo "and this") || echo "this is the catch statement!"
# this is the 'finally' statement
echo "finally this"
If either try statement throws an error or ends with exit 1, then the interpreter moves on to the catch statement and then the finally statement.
If both try statements succeed (and/or end with exit), the interpreter will skip the catch statement and then run the finally statement.
Example_1:
goodFunction1(){
    # this function works great
    echo "success1"
}

goodFunction2(){
    # this function works great
    echo "success2"
    exit
}

(goodFunction1) && (goodFunction2) || echo "Oops, that didn't work!"
echo "Now this happens!"
Output_1
success1
success2
Now this happens!
Example_2
functionThrowsErr(){
    # this function returns an error
    ech "halp meh"
}

goodFunction2(){
    # this function works great
    echo "success2"
    exit
}

(functionThrowsErr) && (goodFunction2) || echo "Oops, that didn't work!"
echo "Now this happens!"
Output_2
main.sh: line 3: ech: command not found
Oops, that didn't work!
Now this happens!
Example_3
functionThrowsErr(){
    # this function returns an error
    echo "halp meh"
    exit 1
}

goodFunction2(){
    # this function works great
    echo "success2"
}

(functionThrowsErr) && (goodFunction2) || echo "Oops, that didn't work!"
echo "Now this happens!"
Output_3
halp meh
Oops, that didn't work!
Now this happens!
Note that the order of the functions will affect output. If you need both statements to be tried and caught separately, use two try catch statements.
(functionThrowsErr) || echo "Oops, functionThrowsErr didn't work!"
(goodFunction2) || echo "Oops, good function is bad"
echo "Now this happens!"
Output
halp meh
Oops, functionThrowsErr didn't work!
success2
Now this happens!
mv takes two parameters, so maybe you really wanted to cat the output file's contents:
echo `{ execCommand && cat output ; } || cat log`
rm -f tmp
Another way to do it would be:
set -e # stop on errors

mkdir -p "$HOME/tmp/whatevs"

exit_code=0
(
    set +e
    (
        set -e
        echo 'foo'
        echo 'bar'
        echo 'biz'
    )
    exit_code="$?" # NB: assigned inside the subshell, so the test below cannot actually see it
)

rm -rf "$HOME/tmp/whatevs"

if [[ "$exit_code" != '0' ]]; then
    echo 'failed'
fi
although the above doesn't really offer any benefit over:
set -e # stop on errors

mkdir -p "$HOME/tmp/whatevs"

exit_code=0
(
    set -e
    echo 'foo'
    echo 'bar'
    echo 'biz'
    exit 44
    exit 43
) || {
    exit_code="$?" # exit code of last command, which is 44
}

rm -rf "$HOME/tmp/whatevs"

if [[ "$exit_code" != '0' ]]; then
    echo 'failed'
fi
Warning: exit traps are not always executed. Since writing this answer I have run into situations where my exit trap would not be executed, causing loss of files, the reason for which I haven't found yet.
The issue occurred when I stopped a python script with Ctrl+C, which in turn had executed a bash script using exit traps -- which actually should have caused the exit traps to be executed, since exit traps are executed on SIGINT in bash.
So, while trap .. exit is useful for cleanup, there are still scenarios where it won't be executed, the most obvious ones being power outages and receiving SIGKILL.
I often end up with bash scripts becoming quite large, as I add additional options, or otherwise change them. When a bash-script contains a lot of functions, using 'trap EXIT' may become non-trivial.
For instance, consider a script invoked as
dotask TASK [ARG ...]
where each TASK may consist of substeps, where it is desirable to perform cleanup in between.
In this case, it is helpful to work with subshells to produce scoped exit traps, e.g.
function subTask (
    local tempFile=$(mktemp)
    trap "rm '${tempFile}'" exit
    ...
)
However, working with subshells can be tricky, as they can't set global variables of the parent shell.
Additionally, it is often inconvenient to write a single exit trap. For instance, the cleanup steps may depend on how far a function came before encountering an error. It would be nice to be able to make RAII style cleanup declarations:
function subTask (
    ...
    onExit 'rm tmp.1'
    ...
    onExit 'rm tmp.2'
    ...
)
It would seem obvious to use something like
handlers=""
function onExit { handlers+="$1;"; trap "$handlers" exit; }
to update the trap. But this fails for nested subshells, as it would cause premature execution of the parent shell's handlers. The client code would have to explicitly reset the handlers variable at the beginning of the subshell.
Solutions discussed in [multiple bash traps for the same signal], which patch the trap by using the output from trap -p EXIT, will equally fail: even though subshells don't inherit the EXIT trap, trap -p exit will display the parent shell's handler, so, again, manual resetting is needed.
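A sketch of the manual-reset workaround just described; onExit and subTask are the hypothetical helpers from above, and every subshell clears handlers before its first onExit call:

#!/bin/bash
handlers=''
onExit() { handlers+="$1;"; trap "$handlers" EXIT; }

subTask() (                 # subshell body, so the EXIT trap is scoped to it
    handlers=''             # explicit reset; otherwise onExit would re-install
                            # whatever handlers the parent shell accumulated
    tmp1=$(mktemp); onExit "rm -f '$tmp1'"
    tmp2=$(mktemp); onExit "rm -f '$tmp2'"
    echo "working with $tmp1 and $tmp2"
)                           # both temp files are removed when the subshell exits

subTask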

Get the exit code for a command in Bash and KornShell (ksh)

I want to write code like this:
command="some command"
safeRunCommand $command
safeRunCommand() {
cmnd=$1
$($cmnd)
if [ $? != 0 ]; then
printf "Error when executing command: '$command'"
exit $ERROR_CODE
fi
}
But this code does not work the way I want. Where did I make the mistake?
Below is the fixed code:
#!/bin/ksh
safeRunCommand() {
    typeset cmnd="$*"
    typeset ret_code

    echo cmnd=$cmnd
    eval $cmnd
    ret_code=$?
    if [ $ret_code != 0 ]; then
        printf "Error: [%d] when executing command: '$cmnd'" $ret_code
        exit $ret_code
    fi
}

command="ls -l | grep p"
safeRunCommand "$command"
Now if you look into this code, the few things that I changed are:
use of typeset is not necessary, but it is a good practice. It makes cmnd and ret_code local to safeRunCommand
use of ret_code is not necessary, but it is a good practice to store the return code in some variable (and store it ASAP), so that you can use it later like I did in printf "Error: [%d] when executing command: '$command'" $ret_code
pass the command with quotes surrounding the command like safeRunCommand "$command". If you don’t then cmnd will get only the value ls and not ls -l. And it is even more important if your command contains pipes.
you can use typeset cmnd="$*" instead of typeset cmnd="$1" if you want to keep the spaces. You can try both, depending upon how complex your command argument is.
'eval' is used to evaluate the command string, so that a command containing pipes works fine
Note: Do remember some commands give 1 as the return code even though there isn't any error like grep. If grep found something it will return 0, else 1.
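For example:

echo "hello" | grep -q hello; echo $? # 0: a match was found
echo "hello" | grep -q bye; echo $?   # 1: no match, but not an error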
I had tested with KornShell and Bash. And it worked fine. Let me know if you face issues running this.
Try
safeRunCommand() {
    "$@"
    if [ $? != 0 ]; then
        printf "Error when executing command: '$1'"
        exit $ERROR_CODE
    fi
}
It should be $cmd instead of $($cmd). It works fine with that on my box.
Your script works only for one-word commands, like ls. It will not work for "ls cpp". For this to work, replace cmd="$1"; $cmd with "$#". And, do not run your script as command="some cmd"; safeRun command. Run it as safeRun some cmd.
Also, when you have to debug your Bash scripts, execute with '-x' flag. [bash -x s.sh].
There are several things wrong with your script.
Functions (subroutines) should be declared before attempting to call them. You probably want to return() but not exit() from your subroutine to allow the calling block to test the success or failure of a particular command. That aside, you don't capture 'ERROR_CODE' so that is always zero (undefined).
It's good practice to surround your variable references with curly braces, too. Your code might look like:
#!/bin/sh

command="/bin/date -u" #...Example Only

safeRunCommand() {
    cmnd="$@" #...ensure whitespace is passed and preserved
    $cmnd
    ERROR_CODE=$? #...so we have it for the command we want
    if [ ${ERROR_CODE} != 0 ]; then
        printf "Error when executing command: '${command}'\n"
        exit ${ERROR_CODE} #...consider 'return()' here
    fi
}
safeRunCommand $command
command="cp"
safeRunCommand $command
The normal idea would be to run the command and then use $? to get the exit code. However, sometimes you have multiple cases in which you need to get the exit code. For example, you might need to hide its output, but still return the exit code, or print both the exit code and the output.
ec() { [[ "$1" == "-h" ]] && { shift && eval $* > /dev/null 2>&1; ec=$?; echo $ec; } || eval $*; ec=$?; }
This will give you the option to suppress the output of the command you want the exit code for. When the output is suppressed for the command, the exit code will directly be returned by the function.
I personally like to put this function in my .bashrc file.
Below I demonstrate a few ways in which you can use this:
# In this example, the output for the command will be
# normally displayed, and the exit code will be stored
# in the variable $ec.
$ ec echo test
test
$ echo $ec
0
# In this example, the exit code is output
# and the output of the command passed
# to the `ec` function is suppressed.
$ echo "Exit Code: $(ec -h echo test)"
Exit Code: 0
# In this example, the output of the command
# passed to the `ec` function is suppressed
# and the exit code is stored in `$ec`
$ ec -h echo test
$ echo $ec
0
Solution to your code using this function
#!/bin/bash

# capture the echoed exit code; $ec itself is only set in the $( ) subshell
rc="$(ec -h 'ls -l | grep p')"
if [[ "$rc" != "0" ]]; then
    echo "Error when executing command: 'grep p' [$rc]"
    exit $rc
fi
You should also note that the exit code you will be seeing will be for the grep command that's being run, as it is the last command being executed, not the ls.

bash script: how to save return value of first command in a pipeline?

Bash: I want to run a command and pipe the results through some filter, but if the command fails, I want to return the command's error value, not the boring return value of the filter:
E.g.:
if ! (cool_command | output_filter); then handle_the_error; fi
Or:
set -e
cool_command | output_filter
In either case it's the return value of cool_command that I care about -- for the 'if' condition in the first case, or to exit the script in the second case.
Is there some clean idiom for doing this?
Use the PIPESTATUS builtin variable.
From man bash:
PIPESTATUS
    An array variable (see Arrays below) containing a list of exit status values from the processes in the most-recently-executed foreground pipeline (which may contain only a single command).
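Applied to the question (a minimal sketch, with cool_command, output_filter, and handle_the_error standing in for the real commands): capture the first command's status immediately after the pipeline, before any other command overwrites PIPESTATUS:

cool_command | output_filter
status=${PIPESTATUS[0]} # exit status of cool_command, not the filter

if [ "$status" -ne 0 ]; then
    handle_the_error
fi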
If you didn't need to display the error output of the command, you could do something like
if ! echo | mysql $dbcreds mysql; then
    error "Could not connect to MySQL. Did you forget to add '--db-user=' or '--db-password='?"
    die "Check your credentials or ensure server is running with /etc/init.d/mysqld status"
fi
In the example, error and die are functions defined elsewhere in the script. $dbcreds is also defined, though it is built from command line options. If the command generates no error, nothing is returned. If an error occurs, text will be returned by this particular command.
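Purely for illustration, hypothetical minimal versions of those helpers might look like:

error() { >&2 echo "ERROR: $*"; }         # report to stderr and keep going
die()   { >&2 echo "FATAL: $*"; exit 1; } # report to stderr and abort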
Correct me if I'm wrong, but I get the impression you're really looking to do something a little more convoluted than
[ `id -u` -eq '0' ] || die "Must be run as root!"
where you actually grab the user ID prior to the if statement, and then perform the test. Doing it this way, you could then display the result if you choose. This would be
uid=`id -u` # lowercase on purpose: UID itself is readonly in bash
if [ $uid -eq '0' ]; then
    echo "User is root"
else
    echo "User is not root"
    exit 1 ## set an exit code higher than 0 if you're exiting because of an error
fi
The following script uses a fifo to filter the output in a separate process. This has the following advantages over the other answers. First, it is not bash specific. In particular it does not rely on PIPESTATUS. Second, output is not stalled until the command has completed.
$ cat >test_filter.sh <<'EOF'
#!/bin/sh

cmd()
{
    echo $1
    echo $2 >&2
    return $3
}

filter()
{
    while read line
    do
        echo "... $line"
    done
}

tmpdir=$(mktemp -d)
fifo="$tmpdir"/out
mkfifo "$fifo"

filter <"$fifo" &
pid=$!

cmd a b 10 >"$fifo" 2>&1
ret=$?

wait $pid
echo exit code: $ret

rm -f "$fifo"
rmdir "$tmpdir"
EOF
$ sh ./test_filter.sh
... a
... b
exit code: 10
