Exit status of a command in Bash scripting is always true

I'm trying to run a command (gerrit query) in bash and assign its output to a variable.
I'm using this in a bash script file, and I want to handle the case where the command throws an error (i.e., if the gerrit query fails).
For example:
var=`ssh -p $GERRIT_PORT_NUMBER $GERRIT_SERVER_NAME gerrit query --current-patch-set $PATCHSET_ID`
I do know that I can check the last exit status using $? in bash, but in the above case the assignment to the variable overrides the earlier exit status (i.e., the gerrit query failure status), and the above command never fails; it is always true.
Can you let me know if there is a way to handle the exit status of a command even when it is assigned to a variable in bash?
Update:
My assumption was wrong here: the assignment was not causing the exit status to be overridden, and Charles's example and explanation in his answer are correct.
The real reason the exit status was being overridden was that I was piping the output of the above command to a sed script, which was the culprit. I found the following, which helped me resolve the issue: Pipe output and capture exit status in Bash (https://unix.stackexchange.com/questions/14270/get-exit-status-of-process-thats-piped-to-another/73180#73180).
Complete command that I was trying.
var=`ssh -p $GERRIT_PORT_NUMBER $GERRIT_SERVER_NAME gerrit query --current-patch-set $PATCHSET_ID | sed 's/message//'`
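For reference, here is a sketch of the fix from the linked answer: with pipefail enabled, the pipeline inside the substitution reports ssh's failure, so the assignment's exit status can be tested directly (the variables are the ones from the question).
set -o pipefail
if ! var=$(ssh -p "$GERRIT_PORT_NUMBER" "$GERRIT_SERVER_NAME" gerrit query --current-patch-set "$PATCHSET_ID" | sed 's/message//'); then
echo "gerrit query failed" >&2
fi
set +o pipefail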

The assertion made in this question is untrue; assignments do not modify exit status. You can check this yourself:
var=$(false); echo $?
...will correctly emit 1.
That said, if an assignment is done in the context of a local, declare, or similar keyword, this may no longer hold true:
f() { local var=$(false); echo $?; }; f
...will emit 0, and is worked around by separating out the local from the assignment:
f() { local var; var=$(false); echo $?; }; f
...which correctly returns 1.
SSH itself also returns exit status correctly, as you can similarly test yourself:
ssh localhost false; echo $?
...correctly returns 1.
The reasonable conclusion, then, is that gerrit itself is failing to convey a non-successful exit status. This bug should be addressed through gerrit's support mechanisms, rather than as a bash question.
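A quick way to test that yourself, reusing the question's variables (a sketch to run interactively):
ssh -p "$GERRIT_PORT_NUMBER" "$GERRIT_SERVER_NAME" gerrit query --current-patch-set "$PATCHSET_ID"
echo "exit status: $?"
If this prints 0 even when the query fails, the problem is on the gerrit side, not in the assignment.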

Related

How to make an exception for a bash script with set -ex

I have a bash script that has set -ex, which means the running script will exit once any command in it hits an error.
My use case is that there's a subcommand in this script whose error I want to catch, instead of letting it shut the running script down.
E.g., here's myscript.sh
#!/bin/bash
set -ex
# pseudocode here
error=$(some command)
if [[ -n $error ]] ; then
#do something
fi
Any idea how I can achieve this?
You can override the exit status of a single command
this_will_fail || true
Or for an entire block of code
set +e
this_will_fail
set -e
Beware, however, that if you later decide you don't want to use set -e in the script anymore, the trailing set -e here will still turn it back on.
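If you also want to remember the failure for later handling, a minimal sketch (this_will_fail is the placeholder from above):
set +e
this_will_fail
status=$?   # capture the failure before re-enabling errexit
set -e
if [ "$status" -ne 0 ]; then
echo "command failed with status $status" >&2
fi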
If you want to handle a particular command's error status yourself, you can use it as the condition in an if statement:
if ! some command; then
echo 'An error occurred!' >&2
# handle error here
fi
Since the command is part of a condition, it won't trigger an exit on error. Note that other than the ! (which negates it, so the then clause will run if the command fails rather than if it succeeds), you just include the command directly in the if statement (no brackets, parentheses, etc.).
BTW, in your pseudocode example, you seem to be treating it as an error if the command produces any output; usually that's not what you want, and I'm assuming you actually want to test the exit status to detect errors.
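If you need both the output and the exit status, one sketch (using a placeholder some_command):
if output=$(some_command); then
echo "succeeded: $output"
else
echo "some_command failed with status $?" >&2
# handle error here
fi
Because the substitution is the if condition, set -e will not abort the script when some_command fails.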

Exiting a shell-script at the end with a non-zero code if any command fails

I am making a shell script that runs a bunch of tests as part of a CI pipeline. I would like to run all of the tests (I DO NOT WANT TO EXIT EARLY if one test fails). Then, at the end of the script, I would like to exit with a non-zero code if any of the tests failed.
Any help would be appreciated. I feel like this is a very common use case, but I wasn't able to find a solution with a bit of research. I am pretty sure I don't want set -e, since that exits early.
My current idea is to create a flag to keep track of any failed tests:
flag=0
pytest -s || flag=1
go test -v ./... || flag=1
exit $flag
This seems strange, and like more work than necessary, but I am new to bash scripts. Am I missing something?
One possible way would be to catch the non-zero exit code via a trap on ERR. Assuming your tests don't contain pipelines (|) and return their exit codes straight to the launching shell, you could do
#!/usr/bin/env bash
exitCodeArray=()
onFailure() {
exitCodeArray+=( "$?" )
}
trap onFailure ERR
# Add all your tests here
addNumbers () {
local IFS='+'
printf "%s" "$(( $* ))"
}
Add your tests anywhere after the above snippet. Each time a test returns a non-zero exit code, the trap appends that code to the array. For the final assertion we check whether the sum of the array elements is 0, since in the ideal case every test returns 0. Before the check, we reset the trap set earlier:
trap '' ERR
if (( $(addNumbers "${exitCodeArray[@]}") )); then
printf 'some of your tests failed\n' >&2
exit 1
fi
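Putting it together, the asker's two test commands slot in between the two snippets, before the trap is reset:
# ...trap onFailure ERR and addNumbers as above...
pytest -s
go test -v ./...
# ...then reset the trap and check the sum as above...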
The only way I could imagine using less code is if the shell had some sort of special all compound command that might look something like
# hypothetical all command
all do
pytest -s
go test -v ./...
done
whose exit status is the logical or of the exit statuses of the contained command. (An analogous any command would have the logical and of its commands' exit statuses as its own exit status.)
Lacking such a command, your current approach is what I would use. You could adapt @melpomene's suggestion of a chk function (which I would call after a command rather than having it call your command, so that it works with arbitrary shell commands):
chk () { flag=$(( flag | $? )); }
flag=0
pytest -s; chk
go test -v ./...; chk
exit "$flag"
If you aren't using it for anything else, you could abuse the DEBUG trap to update flag before each command.
trap 'flag=$((flag | $?))' DEBUG
pytest -s
go test -v ./...
exit "$flag"
(Be aware that a debug trap executes before the shell executes another command, not immediately after a command is executed. It's possible that the only time this matters is if you expect the trap to fire between the last command completing and the shell exiting, but it's still worth being aware of.)
I vote for Inian's answer. Traps seem like the perfect way to go.
That said, you might also streamline things by use of arrays.
#!/usr/bin/env bash
testlist=(
"pytest -s"
"go test -v ./..."
)
for this in "${testlist[#]}"; do
$this || flag=1
done
exit $flag
You could of course fetch the contents of the array from another file, if you wanted to make a more generic test harness that could be used by multiple tools. Heck, mapfile could be a good way to populate that array.
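For instance, a sketch assuming a hypothetical tests.txt with one test command per line (e.g. pytest -s):
mapfile -t testlist < tests.txt
flag=0
for this in "${testlist[@]}"; do
$this || flag=1
done
exit "$flag"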

How to detect failure in command substitution

As you know, set -e is very useful for discovering a failure as soon as a command is executed. However, I find it delicate to use, and I don't know how to apply it in the following scenarios:
==============First example================================
set -e
function main() {
local X1="$(exp::notdefined)"
echo "Will reach here : ${LINENO}"
X2="$(exp::notdefined)"
echo "Will not reach here : ${LINENO}"
}
main
==============Second example================================
set -e
function exp::tmp() {
echo "Now '$-' is "$-" : ${LINENO}"
false
return 0
}
function main() {
X1="$(exp::tmp)"
echo "Will reach here : ${LINENO}. '\$X1' : ${X1}"
X2="$(set -e ; exp::tmp)"
echo "Will not reach here : ${LINENO}"
}
main
===============================
The first example shows that if we use command substitution on a local variable, the function will not fail even if the substituted command is not found. I don't know how to detect these kinds of failures.
The second example shows that the bash option (-e) will not propagate into the command substitution unless we call set -e inside it. Is there any better way to do this?
You request immediate exit on failure with -e; from the manual:
-e  Exit immediately if a pipeline (which may consist of a single simple
    command), a list, or a compound command (see SHELL GRAMMAR above),
    exits with a non-zero status.
The reason the bad command substitution does not cause failure within the function is that local provides its own return status.
local [option] [name[=value] ...]
... The return status is 0 unless local is used outside a function, an
invalid name is supplied, or name is a readonly variable.
The assignment of a failed command substitution does not cause local to return non-zero. Therefore, no immediate-exit is triggered.
As for checking for failure of a command substitution that follows local: since the output is assigned to the variable, and the return status will not be non-zero even when the substitution fails, you have to validate by checking the variable's contents against the values expected on success or failure.
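Alternatively, separating the declaration from the assignment lets -e see the failure; a sketch using the question's function names:
set -e
function main() {
local X1
X1="$(exp::notdefined)"
echo "Will not reach here : ${LINENO}"
}
main
Here the substitution fails with status 127 (command not found), the assignment itself returns that status, and -e aborts the script at that line.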
From the bash manual:
Subshells spawned to execute command substitutions inherit the value of
the -e option from the parent shell. When not in posix mode, bash
clears the -e option in such subshells.
Example 2 behaves differently when bash runs with --posix; however, for example 1 I can't find any documentation explaining why local would cause this.
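A quick demonstration of that manual excerpt, assuming a bash recent enough (4.4+) to implement the documented posix-mode inheritance:
bash -c 'set -e; x=$(false; echo survived); echo "x=$x"'
bash --posix -c 'set -e; x=$(false; echo survived); echo "x=$x"'
The first prints x=survived, because -e is cleared inside the substitution; the second prints nothing, because the subshell inherits -e, exits at false, and the non-zero assignment status then exits the parent shell.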

Bash script does not quit on first "exit" call when calling the problematic function using $(func)

Sorry I cannot give a clear title for what's happening, but here is the simplified problem code.
#!/bin/bash
# get the absolute path of .conf directory
get_conf_dir() {
local path=$(some_command) || { echo "please install some_command first."; exit 100; }
echo "$path"
}
# process the configuration
read_conf() {
local conf_path="$(get_conf_dir)/foo.conf"
[ -r "$conf_path" ] || { echo "conf file not found"; exit 200; }
# more code ...
}
read_conf
So basically, what I am trying to do here is read a simple configuration file in a bash script, and I am having some trouble with error handling.
The some_command is a command which comes from a 3rd-party library (e.g. greadlink from coreutils), required to obtain the path.
When running the code above, I expect it to output "command not found", because that's where the FIRST error occurs, but it actually always prints "conf file not found".
I am very confused by this behavior. I think bash probably intends to handle things this way, but I don't know why. And most importantly, how do I fix it?
Any idea would be greatly appreciated.
Do you see your please install some_command first message anywhere? Is it in $conf_path from the local conf_path="$(get_conf_dir)/foo.conf" line? Do you have a $conf_path value of please install some_command first/foo.conf? Which then fails the -r test?
No, you don't. (But feel free to echo the value of $conf_path in that exit 200 block to confirm this fact.) (Also, error messages should, in general, be sent to standard error and not standard output. So they should be echo "..." >&2. That way they won't be caught by the command substitution at all.)
The reason you don't is that the exit 100 block never happens.
You can see this with set -x at the top of your script also. Go try it.
See what I mean?
The reason it isn't happening is that the failure return of some_command is being swallowed by the local path=$(some_command) assignment statement.
Try running this command:
f() { local a=$(false); echo "Returned: $?"; }; f
Do you expect to see Returned: 1? You might, but you won't.
What you will see is Returned: 0.
Now try either of these versions:
f() { a=$(false); echo "Returned: $?"; }; f
f() { local a; a=$(false); echo "Returned: $?"; }; f
Get the output you expected in the first place?
Right. local and export and declare and typeset are statements on their own. They have their own return values. They ignore (and replace) the return value of the commands that execute in their contexts.
The solution to your problem is to split the local path declaration and the path=$(some_command) assignment into two statements.
http://www.shellcheck.net/ catches this (and many other common errors). You should make it your friend.
In addition to the above (if you've managed to follow along this far), even with the changes mentioned so far, your exit 100 won't exit the main script, since it will only exit the sub-shell spawned by the command substitution in the assignment.
If you want that exit 100 to exit your script, then you either need to notice the failure and re-exit with it (check for get_conf_dir failure after the conf_path assignment and exit with the previous exit code), or drop the get_conf_dir function itself and do that work inline in read_conf.
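Putting both fixes together, a sketch of the corrected script (some_command remains the placeholder):
#!/bin/bash
get_conf_dir() {
local path
path=$(some_command) || { echo "please install some_command first." >&2; exit 100; }
echo "$path"
}
read_conf() {
local conf_dir conf_path
conf_dir=$(get_conf_dir) || exit $?   # re-exit with the subshell's status
conf_path="${conf_dir}/foo.conf"
[ -r "$conf_path" ] || { echo "conf file not found" >&2; exit 200; }
# more code ...
}
read_conf
The exit 100 still only exits the command-substitution subshell, but the || exit $? re-exits the main script with that same status.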

Why does bash "set -e" cause an exit after a seemingly error-free command?

Without using set -e, the script runs as expected, with all results correctly generated.
After adding set -e, it exits after this command:
./NameOfATool > result.txt
When I wrap set +e and set -e around that command, the script completes as expected.
Why would it exit, or what might be wrong with the command?
P.S. NameOfATool is an executable compiled from C code. When I manually type that command, it runs OK without giving an error.
set -e will cause the script to exit if any command returns a non-zero exit status. (Well, there are a bunch of exceptions, but that's the general rule.) So, ./NameOfATool apparently returns a non-zero exit status. This might mean that it actually thinks there's an error, or it might mean that the program was poorly written and doesn't report an appropriate exit status for success, or it might mean that it uses special exit-status values to report specific things (much like the standard utility diff, which returns 0 for "same", 1 for "different", and 2 for "error").
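For a tool with meaningful non-zero statuses, diff-style handling might look like this sketch (file1 and file2 are placeholders):
set +e
diff file1 file2 > /dev/null
status=$?
set -e
case $status in
0) echo "files are identical" ;;
1) echo "files differ" ;;
*) echo "diff itself failed" >&2 ;;
esac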
Try set +e in your trap:
set -e;
trap 'x=$?; set +e; echo Hello; false; echo World; exit 22;' ERR
echo Testing
false
echo Never See This
Omit the set +e and you won't see "World", because the non-zero exit code inside the trap exits the script before the trap completes.
As @ruakh said, this indicates that the tool is exiting with a nonzero (= error) status. You can prevent this from exiting the script by putting it in a compound command that always succeeds:
./NameOfATool > result.txt || true
If the tool exits with a nonzero status, it runs true, which succeeds; hence, the entire compound command is considered to have succeeded. If the command's exit status is significant (i.e. you need to be able to tell if it exited with status 0, 1, or 2), you can either record it for later use:
./NameOfATool > result.txt && toolStatus=0 || toolStatus=$?
...or use the status directly:
if ./NameOfATool > result.txt; then
# do things appropriate for exit status = 0
else
toolStatus=$?
# do things appropriate for exit status != 0
fi
