Bash: get exit status of a command when 'set -e' is active?

I generally have -e set in my Bash scripts, but occasionally I would like to run a command and get the return value.
Without doing the set +e; some-command; res=$?; set -e dance, how can I do that?

From the bash manual:
The shell does not exit if the command that fails is [...] part of any command executed in a && or || list [...].
So, just do:
#!/bin/bash
set -eu

foo() {
  # exit code will be 0, 1, or 2
  return $(( RANDOM % 3 ))
}

ret=0
foo || ret=$?
echo "foo() exited with: $ret"
Example runs:
$ ./foo.sh
foo() exited with: 1
$ ./foo.sh
foo() exited with: 0
$ ./foo.sh
foo() exited with: 2
This is the canonical way of doing it.
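One caveat worth knowing (a small demonstration of my own, not from the answer above): when foo runs on the left-hand side of ||, set -e is suppressed inside foo's body as well, so a failure in the middle of the function no longer aborts it, and the function's status becomes that of its last command:
#!/bin/bash
set -e
bar() {
  false               # under plain set -e this would abort the script...
  echo "still ran"    # ...but it runs here: errexit is off for the whole || list
}
ret=0
bar || ret=$?
echo "ret=$ret"       # prints ret=0, because bar's last command (echo) succeeded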

As an alternative:
ans=0
some-command || ans=$?

Maybe try running the commands in question in a subshell, like this?
res=$(some-command > /dev/null; echo $?)
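This works because bash by default clears errexit inside command substitutions (the inherit_errexit shopt in bash 4.4+ changes that), so echo $? still runs after the failure. A small self-contained illustration, with ls /no/such/dir standing in for the failing command:
#!/bin/bash
set -e
res=$(ls /no/such/dir > /dev/null 2>&1; echo $?)
echo "exit status was: $res"    # prints a non-zero status; the script keeps going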

Given the shell behavior described in this question, it's possible to use the following construct:
#!/bin/sh
set -e
{ custom_command; rc=$?; } || :
echo $rc
Here : is the shell's no-op builtin, which always succeeds, so the || list as a whole never triggers set -e, while rc still captures custom_command's status.

Another option is to use a simple if. It is a bit longer, but fully supported by bash; that is, the command can return a non-zero value, yet the script doesn't exit even with set -e. See it in this simple script:
#! /bin/bash -eu

f () {
  return 2
}

if f; then
  echo Command succeeded
else
  echo Command failed, returned: $?
fi

echo Script still continues.
When we run it, we can see that script still continues after non-zero return code:
$ ./test.sh
Command failed, returned: 2
Script still continues.
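If you need the status after the if/else, capture $? on the first line of the else branch, since the very next command overwrites it; a minimal variant:
if f; then
  rc=0
else
  rc=$?    # must be the first command in this branch
fi
echo "f returned: $rc"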

Use a wrapper function to execute your commands:
function __e {
  set +e
  "$@"
  __r=$?
  set -e
}
__e yourcommand arg1 arg2
And use $__r instead of $?:
if [[ $__r -eq 0 ]]; then
  echo "success"
else
  echo "failed"
fi
Another method calls commands in a pipe; the only catch is that you have to quote the pipe symbol. This does a safe eval.
function __p {
  set +e
  local __A=() __I
  for (( __I = 1; __I <= $#; ++__I )); do
    if [[ "${!__I}" == '|' ]]; then
      __A+=('|')
    else
      __A+=("\"\$$__I\"")
    fi
  done
  eval "${__A[@]}"
  __r=$?
  set -e
}
Example:
__p echo abc '|' grep abc
And I actually prefer this syntax:
__p echo abc :: grep abc
Which I could do with
...
if [[ ${!__I} == '::' ]]; then
...

Related

In bash, either exit script without exiting the shell or export/set variables from within subshell

I have a function that runs a set of scripts that set variables, functions, and aliases in the current shell.
reloadVariablesFromScript() {
  for script in "${scripts[@]}"; do
    . "$script"
  done
}
If one of the scripts has an error, I want to exit the script and then exit the function, but not to kill the shell.
reloadVariablesFromScript() {
  for script in "${scripts[@]}"; do
    (
      set -e
      . "$script"
    )
    if [[ $? -ne 0 ]]; then
      >&2 echo "$script failed. Skipping remaining scripts."
      return 1
    fi
  done
}
This would do what I want, except that the variables set by the sourced script are lost whether it succeeds or fails, since they are set inside the subshell.
Without the subshell, set -e causes the whole shell to exit, which is undesirable.
Is there a way I can either prevent the called script from continuing on an error without killing the shell or else set/export variables, aliases, and functions from within a subshell?
The following script simulates my problem:
test() {
  (
    set -e
    export foo=bar
    false
    echo Should not have gotten here!
    export bar=baz
  )
  local errorCode=$?
  echo foo="'$foo'". It should equal 'bar'.
  echo bar="'$bar'". It should not be set.
  if [[ $errorCode -ne 0 ]]; then
    echo Script failed correctly. Exiting function.
    return 1
  fi
  echo Should not have gotten here!
}
test
If worst comes to worst, since these scripts don't actually edit the filesystem, I can run each script in a subshell, check the exit code, and if it succeeds, run it again outside of a subshell.
Note that set -e has a number of surprising behaviors -- relying on it is not universally considered a good idea. That caveat having been given, though: we can shuffle environment variables, aliases, and shell functions out as text:
envTest() {
  local errorCode newVars
  newVars=$(
    set -e
    {
      export foo=bar
      false
      echo Should not have gotten here!
      export bar=baz
    } >&2
    # print generated code which, when eval'd, recreates our functions and variables
    declare -p | egrep -v '^declare -[^[:space:]]*r'
    declare -f
    alias -p
  ); errorCode=$?
  if (( errorCode == 0 )); then
    eval "$newVars"
  fi
  printf 'foo=%q. It should equal %q\n' "$foo" "bar"
  printf 'bar=%q. It should not be set.\n' "$bar"
  if [[ $errorCode -ne 0 ]]; then
    echo 'Script failed correctly. Exiting function.'
    return 1
  fi
  echo 'Should not have gotten here!'
}
envTest
Note that this code only evaluates either export if the entire script segment succeeds; the question text and comments appear to indicate that this is acceptable, if not desired.
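For reference, declare -p is what makes the eval round-trip possible: it prints shell code that recreates the current variables, for example:
$ foo=bar
$ declare -p foo
declare -- foo="bar"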

How to return from sourced bash script automatically on any error?

I have a bash script which is only meant to be used when sourced.
I want to return from it automatically on any error, similar to what set -e does.
However, setting set -e doesn't work for me because it would also exit the user's shell.
Right now I'm handling returning manually, like this: command || return 1, for each command.
You can also use command || true or command || return.
If your requirement is something different, please update the question more precisely.
You can use trap. E.g.:
# foo.sh
function func() {
  trap 'if [ $? -ne 0 ]; then echo "Trapped!"; return; fi' DEBUG
  echo 'foo'
  find -name "foo" . 2> /dev/null
  echo 'bar'
}
func
Two notes. First, the trap needs to be inside the function as shown; it won't work if it's just at the top level of the script.
Second, there is a significant limitation. Even if you pass a status to the return in the trap (e.g., return 1), while func exits after the bad find command, $? is still zero, no matter what. I'm not sure if there's a way around that, so if it's important to preserve the exit value of the failed command, this may not work.
E.g., if you had:
func
func_return=$?
echo "return value is: $func_return"
func_return will always be zero. I've played around with trying to get the exit value of the failed command to pass out of the function trap and into the function's exit value, but have not found a way to do it.
If you need to preserve the return value, you could update a global variable inside the debug trap.
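A minimal sketch of that idea (the global name FUNC_RC is made up for illustration). The DEBUG trap fires before each command with $? still holding the previous command's status, so the failed status can be stashed before returning:
FUNC_RC=0
func() {
  trap 'st=$?; if [ $st -ne 0 ]; then FUNC_RC=$st; return; fi' DEBUG
  echo 'foo'
  false         # stand-in for a failing command
  echo 'bar'    # never reached
}
func
trap - DEBUG    # traps are global, so clear it once func is done
echo "failed command returned: $FUNC_RC"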
If I understand correctly, you can set -e locally in each function (note that local - requires bash 4.4 or newer).
cat sourced
f1 () {
  local -
  set -e
  [ "$1" -eq "$1" ] 2> /dev/null && echo "$1"
}
cat script.sh
. sourced

param='bad'
ret=$(f1 "$param")
[ $? -eq 0 ] && echo "result = $ret" || \
    echo "error in sourced file with param $param"

param=3
ret=$(f1 "$param")
[ $? -eq 0 ] && echo "result = $ret" || \
    echo "error in sourced file with param $param"

SHELL general function for action state

How can I make the code below into a general function, to be used throughout a bash script?
if [[ $? = 0 ]]; then
  echo "success " >> $log
else
  echo "failed" >> $log
fi
You might write a wrapper for command execution:
function exec_cmd {
  "$@"
  if [[ $? = 0 ]]; then
    echo "success " >> $log
  else
    echo "failed" >> $log
  fi
}
And then execute commands in your script using the function:
exec_cmd command1 arg1 arg2 ...
exec_cmd command2 arg1 arg2 ...
...
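If the caller also needs the exit status, a variant (my own sketch, not from the answer above) can capture and pass it through:
function exec_cmd {
  "$@"
  local rc=$?               # capture the status before any other command overwrites it
  if [[ $rc -eq 0 ]]; then
    echo "success " >> "$log"
  else
    echo "failed" >> "$log"
  fi
  return $rc                # so callers can still react to the status themselves
}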
If you don't want to wrap the original calls you could use an explicit call, like the following
function check_success {
  if [[ $? = 0 ]]; then
    echo "success " >> $log
  else
    echo "failed" >> $log
  fi
}
ls && check_success
ls non-existant
check_success
There's no really clean way to do that. This is clean and might be good enough?
PS4='($?)[$LINENO]'
exec 2>>"$log"
set -x    # xtrace is what actually writes each command, prefixed with PS4, to the log
That will show every command run in the log, and each entry will start with the exit code of the previous command...
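With that in place, the log entries look roughly like this (previous command's status in parentheses, then the line number):
(0)[12]ls /tmp
(0)[13]ls /no/such/dir
(2)[14]echo done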
You could put this in .bashrc and call it whenever
function log_status { [ $? == 0 ] && echo success>>/tmp/status || echo fail>>/tmp/status; }
If you want it after every command you could make the prompt write to the log (note the original PS1 value is appended).
export PS1="\$([ \$? == 0 ] && echo success>>/tmp/status || echo fail>>/tmp/status)$PS1"
(I'm not experienced with this, perhaps PROMPT_COMMAND is a more appropriate place to put it)
Or even get more fancy and see the result with colours.
I guess you could also play with getting the last executed command:
How do I get "previous executed command" in a bash script?
Get name of last run program in Bash
BASH: echoing the last command run

Get the exit code for a command in Bash and KornShell (ksh)

I want to write code like this:
command="some command"
safeRunCommand $command
safeRunCommand() {
cmnd=$1
$($cmnd)
if [ $? != 0 ]; then
printf "Error when executing command: '$command'"
exit $ERROR_CODE
fi
}
But this code does not work the way I want. Where did I make the mistake?
Below is the fixed code:
#!/bin/ksh
safeRunCommand() {
  typeset cmnd="$*"
  typeset ret_code

  echo cmnd=$cmnd
  eval $cmnd
  ret_code=$?
  if [ $ret_code != 0 ]; then
    printf "Error: [%d] when executing command: '$cmnd'" $ret_code
    exit $ret_code
  fi
}

command="ls -l | grep p"
safeRunCommand "$command"
Now if you look into this code, the few things that I changed are:
use of typeset is not necessary, but it is good practice. It makes cmnd and ret_code local to safeRunCommand
use of ret_code is not necessary, but it is good practice to store the return code in some variable (and store it ASAP), so that you can use it later, as I did in printf "Error: [%d] when executing command: '$cmnd'" $ret_code
pass the command with quotes surrounding it, as in safeRunCommand "$command". If you don't, then cmnd will get only the value ls and not ls -l. This matters even more if your command contains pipes.
you can use typeset cmnd="$*" instead of typeset cmnd="$1" if you want to keep the spaces. You can try both, depending upon how complex your command argument is.
eval is used so that a command containing pipes works correctly
Note: do remember that some commands return 1 even though there isn't any error, like grep. If grep found something it will return 0, else 1.
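For example:
$ echo hello | grep hello > /dev/null; echo $?
0
$ echo hello | grep nope > /dev/null; echo $?
1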
I had tested with KornShell and Bash. And it worked fine. Let me know if you face issues running this.
Try
safeRunCommand() {
  "$@"
  if [ $? != 0 ]; then
    printf "Error when executing command: '$1'"
    exit $ERROR_CODE
  fi
}
It should be $cmnd instead of $($cmnd). It works fine with that on my box.
Your script works only for one-word commands, like ls. It will not work for "ls cpp". For this to work, replace cmnd="$1"; $cmnd with "$@". And do not run your script as command="some cmd"; safeRunCommand $command. Run it as safeRunCommand some cmd.
Also, when you have to debug your Bash scripts, execute them with the -x flag: bash -x s.sh.
There are several things wrong with your script.
Functions (subroutines) should be declared before attempting to call them. You probably want to return rather than exit from your subroutine, to allow the calling block to test the success or failure of a particular command. That aside, you don't capture ERROR_CODE, so it is always zero (undefined).
It's good practice to surround your variable references with curly braces, too. Your code might look like:
#!/bin/sh

command="/bin/date -u"      #...example only

safeRunCommand() {
  cmnd="$@"                 #...ensure whitespace is passed and preserved
  $cmnd
  ERROR_CODE=$?             #...so we have it for the command we want
  if [ ${ERROR_CODE} != 0 ]; then
    printf "Error when executing command: '${command}'\n"
    exit ${ERROR_CODE}      #...consider 'return' here
  fi
}

safeRunCommand $command
command="cp"
safeRunCommand $command
The normal idea would be to run the command and then use $? to get the exit code. However, sometimes you have multiple cases in which you need to get the exit code. For example, you might need to hide its output, but still return the exit code, or print both the exit code and the output.
ec() { [[ "$1" == "-h" ]] && { shift && eval $* > /dev/null 2>&1; ec=$?; echo $ec; } || eval $*; ec=$?; }
This will give you the option to suppress the output of the command you want the exit code for. When the output is suppressed for the command, the exit code will directly be returned by the function.
I personally like to put this function in my .bashrc file.
Below I demonstrate a few ways in which you can use this:
# In this example, the output for the command will be
# normally displayed, and the exit code will be stored
# in the variable $ec.
$ ec echo test
test
$ echo $ec
0
# In this example, the exit code is output
# and the output of the command passed
# to the `ec` function is suppressed.
$ echo "Exit Code: $(ec -h echo test)"
Exit Code: 0
# In this example, the output of the command
# passed to the `ec` function is suppressed
# and the exit code is stored in `$ec`
$ ec -h echo test
$ echo $ec
0
Solution to your code using this function
#!/bin/bash
if [[ "$(ec -h 'ls -l | grep p')" != "0" ]]; then
echo "Error when executing command: 'grep p' [$ec]"
exit $ec;
fi
You should also note that the exit code you will be seeing is for the grep command, as it is the last command being executed, not the ls. (Also, because ec runs inside a command substitution here, the $ec variable is set in a subshell; only the substituted output is meaningful in this snippet.)

How to get exit status of piped command from inside the pipeline?

Consider I have the following command line: do-things arg1 arg2 | progress-meter "Doing things...";, where progress-meter is a bash function I want to implement. It should print Doing things... before running do-things arg1 arg2 or in parallel (so it will be printed at the very beginning either way), record the stdout+stderr of the do-things command, and check its exit status. If the exit status is 0, it should print [ OK ]; otherwise it should print [FAIL] and dump the recorded output.
Currently I have this done using progress-meter "Doing things..." "do-things arg1 arg2"; and evaluating the second argument inside, which is clumsy; I don't like it and believe there is a better solution.
The problem with the pipe syntax is that I don't know how to get do-things' exit status from inside the pipeline. $PIPESTATUS seems to be useful only after all commands in the pipeline have finished.
Maybe process substitution like progress-meter "Doing things..." <(do-things arg1 arg2); would be fine, but in that case I also don't know how to get the exit status of do-things.
I'll be happy to hear if there is some other neat syntax that achieves the same task without escaping the command to be executed, as in my example.
I greatly hope for the help of the community.
UPD1: As the question seems not to be clear enough, I'll paraphrase it:
I want a bash function that can be fed a command, which will execute in parallel to the function; the function will receive its stdout+stderr, wait for completion, and get its exit status.
Example implementation using evals:
progress_meter() {
  local output
  local errcode
  echo -n -e "$1"
  output=$( { eval "$2"; } 2>&1 )
  errcode=$?
  if (( errcode )); then
    echo '[FAIL]'
    echo "Output was: ${output}"
  else
    echo '[ OK ]'
  fi
}
So this can be used as progress_meter "Do things..." "do-things arg1 arg2". I want the same without eval.
Why eval things? Assuming you have one fixed argument to progress-meter, you can do something like:
#!/bin/bash
# progress-meter
prompt="$1"
shift
echo "$prompt"
"$@"   # this just executes a command made up of
       # arguments 2, 3, ... of the script;
       # the real script should actually read its input,
       # display the progress meter, etc.
and call it
$ progress-meter "Doing stuff" do-things arg1 arg2
If you insist on putting progress-meter in a pipeline, I'm afraid your best bet is something like
(do-things arg1 arg2 ; echo $?) | progress-meter "Doing stuff"
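To make that concrete, here is a sketch (my own, under the assumption that the producer appends its exit code as the last line, as above) of a progress-meter that consumes that trailing status line; mapfile and negative array subscripts need bash 4.3+:
progress_meter() {
  printf '%s' "$1"
  mapfile -t lines                  # slurp everything arriving on the pipe
  local status=${lines[-1]}         # last line is the exit code echoed by the producer
  unset 'lines[-1]'
  if [ "$status" -eq 0 ]; then
    echo ' [ OK ]'
  else
    echo ' [FAIL]'
    printf '%s\n' "${lines[@]}"     # dump the recorded output
  fi
}

# usage, recording stderr as well:
(do-things arg1 arg2 2>&1; echo $?) | progress_meter "Doing things..."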
I'm not sure I understand what exactly you're trying to achieve,
but you could check the pipefail option:
pipefail
    If set, the return value of a pipeline is the value of the last
    (rightmost) command to exit with a non-zero status, or zero if all
    commands in the pipeline exit successfully. This option is disabled
    by default.
For example:
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ok: 0
bash-4.1 $ set -o pipefail
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ko: 2
Edit: I just read your comment on the other post. Why don't you just handle the error?
bash-4.1 $ ls -d /tmp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
/tmp
bash-4.1 $ ls -d /tmpp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
failed
Have your scripts in the pipeline communicate by proxy (much like the Blackboard Pattern: one guy writes on the blackboard, another guy reads it):
Modify your do-things script so that it reports its exit status to a file somewhere.
Modify your progress-meter script to read that file, using command-line switches if you like (so as not to hardcode the name of the blackboard file), to report the exit status of the program whose progress it is tracking.
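A minimal sketch of that idea (the status-file path /tmp/do-things.status is illustrative):
# producer side: run the command and leave its status on the "blackboard"
{ do-things arg1 arg2; echo $? > /tmp/do-things.status; } | progress-meter "Doing things..."
# consumer side (inside progress-meter, once stdin is exhausted):
status=$(cat /tmp/do-things.status)
if [ "$status" -eq 0 ]; then echo '[ OK ]'; else echo '[FAIL]'; fi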
