better way to fail if bash `declare var` fails?

Problem
In some bash scripts, I don't want to set -e. So I write variable declarations like
var=$(false) || { echo 'var failed!' 1>&2 ; exit 1 ; }
which prints var failed!.
But with declare, the || branch is never taken.
declare var=$(false) || { echo 'var failed!' 1>&2 ; exit 1 ; }
That will not print var failed!.
Imperfect Solution
So I've come to using
declare var=$(false)
[ -z "${var}" ] || { echo 'var failed!' 1>&2 ; exit 1 ; }
Does anyone know how to turn the two lines of the Imperfect Solution into one neat line?
In other words, is there a bash idiom to make the declare var failure neater?
More Thoughts
This seems like an unfortunate mistake in the design of bash declare.

Firstly, the issue of two lines vs. one line can be solved with a little thing called Mr. semicolon (also note the && vs. ||; pretty sure you meant the former):
declare var=$(false); [ -z "${var}" ] && { echo 'var failed!' 1>&2 ; exit 1 ; }
But I think you're looking for a better way of detecting the error. The problem is that declare always returns an error code based on whether it succeeded in parsing its options and carrying out the assignment. The error you're trying to detect happens inside a command substitution, so it's outside the scope of declare's return code design. Thus, I don't think there's any possible solution for your problem using declare with a command substitution on the RHS. (Actually there are messy things you could do, like redirecting error information to a flat file from inside the command substitution and reading it back in from your main code, but just no.)
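The masking is easy to demonstrate; here is a minimal sketch comparing declare against a plain assignment:

```shell
#!/usr/bin/env bash
# declare reports on the assignment itself, not on the command substitution:
declare var=$(false)
echo "after declare: $?"            # 0 -- the failure inside $() is masked

# A plain assignment passes the substitution's status through:
var=$(false)
echo "after plain assignment: $?"   # 1
```

This is exactly why the || branch after declare never fires: declare itself succeeded.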
Instead, I'd suggest declaring all your variables in advance of assigning them from command substitutions. In the initial declaration you can assign a default value, if you want. This is how I normally do this kind of thing:
declare -i rc=-1;
declare s='';
declare -i i=-1;
declare -a a=();
s=$(give me a string); rc=$?; if [[ $rc -ne 0 ]]; then echo "s [$rc]." >&2; exit 1; fi;
i=$(give me a number); rc=$?; if [[ $rc -ne 0 ]]; then echo "i [$rc]." >&2; exit 1; fi;
a=($(gimme an array)); rc=$?; if [[ $rc -ne 0 ]]; then echo "a [$rc]." >&2; exit 1; fi;
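If that pattern gets repetitive, it can be wrapped in a small helper. This is only a sketch; assign_or_die and its message format are made-up names, and it relies on printf -v to assign to the named variable:

```shell
#!/usr/bin/env bash
# Hypothetical helper: run a command, capture its stdout into the named
# variable, and exit loudly if the command fails.
assign_or_die() {
  local __name=$1; shift
  local __out
  __out=$("$@") || { echo "$__name: command failed [$?]" >&2; exit 1; }
  printf -v "$__name" '%s' "$__out"
}

assign_or_die today date +%F
echo "today is $today"
```

Note that the local declaration and the assignment are on separate lines for the same reason as above: local, like declare, would mask the command substitution's status.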
Edit: Ok, I thought of something that comes close to what you want, but if properly done, it would need to be two statements, and it's ugly, although elegant in a way. And it would only work if the value you want to assign has no spaces or glob (pathname expansion) characters, which makes it quite limited.
The solution involves declaring the variable as an array, and having the command substitution print two words, the first of which being the actual value you want to assign, and the second being the return code of the command substitution. You can then check index 1 afterward (in addition to $?, which can still be used to check the success of the actual declare call, although that shouldn't ever fail), and if success, use index 0, which elegantly can be accessed directly as a normal non-array variable can:
declare -a y=($(echo value-for-y; false; echo $?;)); [[ $? -ne 0 || ${y[1]} -ne 0 ]] && { echo 'error!'; exit 1; }; ## fails, exits
## error!
declare -a y=($(echo value-for-y; true; echo $?;)); [[ $? -ne 0 || ${y[1]} -ne 0 ]] && { echo 'error!'; exit 1; }; ## succeeds
echo $y;
## value-for-y
I don't think you can do better than this. I still recommend my original solution: declare separately from command substitution+assignment.

Related

Can't make shell recursive script work

I'm trying to write a recursive function that, given a process number, prints the process numbers of subprocesses spawned from it, up to k generations. The following code does not seem to work. I want to print the tree depth-first: the subprocesses of a given process directly under it, rather than the whole first generation, then the second, and so on. Somehow I am not managing to keep track of the recursion level so I can stop at a certain depth. If anyone can spot the mistakes, please let me know.
function descendantsRecursive() {
level=$2
if [[ "$level" -ge 4 ]]
then
exit
fi
result=($(pgrep -P "$1" .))
for pid in "${result[@]}"; do
echo "$level"
echo "$pid"
if [[ "$(pgrep -P $pid)" == '' ]]
then
:
else
descendantsRecursive $pid $(( $level+1 ))
fi
done
}
The obvious problem is that your variables are global. That doesn't matter for all of them but the fact that level is modified by the call will create a bug.
Furthermore, calling exit kills the entire script, when you just want to leave the called function. Use return instead.
A good habit is to declare all variables as local:
local level=$2 pid result
But you could also use fewer variables:
function descendantsRecursive() {
local pid
if (( $2 >= 4 )); then return; fi
for pid in $(pgrep -P "$1" .) ; do
echo "$2"
echo "$pid"
# No need to test for children first: if pgrep finds none,
# the loop in the recursive call simply doesn't run.
descendantsRecursive "$pid" $(( $2 + 1 ))
done
}
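The effect of local on recursion can be seen without pgrep at all. In this self-contained sketch (function names are made up), the global counter is clobbered by the recursive call while the local one is not:

```shell
#!/usr/bin/env bash
# Without 'local', every frame shares one variable, so after the
# recursive call returns, the caller's value has been overwritten.
countdown_global() {
  n=$1
  if (( n > 0 )); then countdown_global $(( n - 1 )); fi
  echo "global n: $n"
}
countdown_local() {
  local n=$1
  if (( n > 0 )); then countdown_local $(( n - 1 )); fi
  echo "local n: $n"
}
countdown_global 2   # prints "global n: 0" three times
countdown_local 2    # prints 0, 1, 2 -- each frame kept its own n
```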

bash: set exit status without breaking loop?

I am writing bash script to check the fstab structure.
In a for loop, I have placed a return statement so I can use the exit code later in the script, but the return statement breaks the loop after printing the first requested output.
How can I assign a return code of 1 without breaking the loop, so that I get all the results and not just the first?
for i in $(printf "$child"|awk '/'$new_mounts'/'); do
chid_count=$(printf "$child"|awk '/'$new_mounts'/'|wc -l)
if [[ $chid_count -ge 1 ]]; then
echo -e "\e[96mfstab order check failed: there are child mounts before parent mount:\e[0m"
echo -e "\e[31mError: \e[0m "$child"\e[31m mount point, comes before \e[0m $mounts \e[31m on fstab\e[0m"
return 1
else
return 0
fi
done
If you read the documentation for the language, immediately returning is what return is supposed to do. That's not unique to shell -- I actually can't think of a single language with a return construct where it doesn't behave this way.
If you want to set a value to be used as a return value later, use a variable:
yourfunc() {
local retval=0
for i in ...; do
(( child_count >= 1 )) && retval=1
done
return "$retval"
}
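A runnable variant of the same pattern, checking that every argument names an existing file before reporting the combined status (the function name and paths are only examples):

```shell
#!/usr/bin/env bash
# Remember the worst status seen, but keep looping so every item is checked.
check_all_exist() {
  local retval=0 f
  for f in "$@"; do
    [[ -e $f ]] || { echo "missing: $f" >&2; retval=1; }
  done
  return "$retval"
}

check_all_exist /bin/sh /no/such/file /also/missing
echo "combined status: $?"   # 1, yet both missing files were reported
```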

How do I set $? or the return code in Bash?

I want to set a return value once so it goes into the while loop:
#!/bin/bash
while [ $? -eq 1 ]
do
#do something until it returns 0
done
In order to get this working I need to set $? = 1 at the beginning, but that doesn't work.
You can set an arbitrary exit code by executing exit with an argument in a subshell.
$ (exit 42); echo "$?"
42
So you could do:
(exit 1) # or some other value > 0 or use false as others have suggested
while (($?))
do
# do something until it returns 0
done
Or you can emulate a do while loop:
while
# do some stuff
# do some more stuff
# do something until it returns 0
do
continue # just let the body of the while be a no-op
done
Either of those guarantees that the loop runs at least once, which I believe is your goal.
For completeness, exit and return each accept an optional argument which is an integer (positive, negative or zero) which sets the return code as the remainder of the integer after division by 256. The current shell (or script or subshell*) is exited using exit and a function is exited using return.
Examples:
$ (exit -2); echo "$?"
254
$ foo () { return 2000; }; foo; echo $?
208
* This is true even for subshells which are created by pipes (except when both job control is disabled and lastpipe is enabled):
$ echo foo | while read -r s; do echo "$s"; exit 333; done; echo "$?"
77
Note that it's better to use break to leave loops, but its argument is for the number of levels of loops to break out of rather than a return code.
Job control is disabled using set +m, set +o monitor or shopt -u -o monitor. To enable lastpipe, do shopt -s lastpipe. If you do both of those, the exit in the preceding example will cause the while loop and the containing shell to both exit, and the final echo there will not be performed.
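To see the lastpipe behaviour without killing your interactive shell, the experiment can be run in a child bash (job control is off by default in non-interactive shells, so only shopt -s lastpipe is needed):

```shell
#!/usr/bin/env bash
# With lastpipe set, the last element of a pipeline runs in the current
# shell, so the 'exit' inside the while loop terminates that shell with
# status 7 and the trailing echo never runs.
bash -c '
  shopt -s lastpipe
  echo foo | while read -r s; do exit 7; done
  echo "not reached"
'
echo "inner script exited with $?"   # 7
```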
false always returns an exit code of 1.
#!/bin/bash
false
while [ $? -eq 1 ]
do
#do something until it returns 0
done
#!/bin/bash
RC=1
while [ $RC -eq 1 ]
do
#do something until it returns 0
RC=$?
done
Some of the answers rely on rewriting the code. In some cases it might be foreign code that you have no control over.
Although for this specific question it is enough to set $? to 1, if you need to set $? to any value, the only helpful answer is Dennis Williamson's.
A slightly more efficient approach, which does not spawn a new child process (but is also less terse), is:
function false() { echo "$$"; return ${1:-1}; }
false 42
Note: the echo part is there just to verify that the function runs in the current process.
I think you can do this implicitly by running a command that is guaranteed to fail, before entering the while loop.
The canonical such command is, of course, false.
Didn't find anything lighter than just a simple function:
function set_return() { return ${1:-0}; }
All other solutions like (...) or [...] or false might contain an external process call.
Old question, but there's a much better answer:
#!/bin/bash
until
#do something until it returns success
do
:;
done
If you're looping until something is successful, then just do that something in the until section. You can put exactly the same code in the until section you were thinking you had to put in the do/done section. You aren't forced to write the code in the do/done section and then transfer its results back to the while or until.
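For example, with a counter-based function standing in for the flaky command (the name flaky is made up):

```shell
#!/usr/bin/env bash
# The work goes in the 'until' condition; the body is a no-op.
attempts=0
flaky() { (( ++attempts >= 3 )); }   # fails twice, then succeeds

until flaky
do
  :
done
echo "succeeded after $attempts attempts"   # succeeded after 3 attempts
```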
$? can contain a byte value between 0 and 255. Return values outside this range are remapped to it as if a bitwise AND with 255 had been applied.
exit value - can be used to set the value, but is brutal since it will terminate a process/script.
return value - when used in a function is somewhat traditional.
[[ ... ]] - is good for evaluating boolean expressions.
Here is an example of exit:
# Create a subshell, but, exit it with an error code:
$( exit 34 ); echo $? # outputs: 34
Here are examples of return:
# Define a `$?` setter and test it:
set_return() { return $1; }
set_return 0; echo $? # outputs: 0
set_return 123; echo $? # outputs: 123
set_return 1000; echo $? # outputs: 232
set_return -1; echo $? # outputs: 255
Here are examples of [[ ... ]]:
# Define and use a boolean test:
lessthan() { (( $1 < $2 )); }   # arithmetic; [[ $1 < $2 ]] would compare as strings
lessthan 3 8 && echo yes # outputs: yes
lessthan 8 3 && echo yes # outputs: nothing
Note, when using $? as a conditional, zero (0) means success, non-zero means failure.
Would something like this be what you're looking for?
#!/bin/bash
TEMPVAR=1
while [ $TEMPVAR -eq 1 ]
do
#do something until it returns 0
#construct the logic which will cause TEMPVAR to be set to 0, then
#check for it in the if statement
if [ yourcodehere ]; then
TEMPVAR=0
fi
done
You can use until to handle cases where #do something until it returns 0 returns something other than 1 or 0:
#!/bin/bash
false
until [ $? -eq 0 ]
do
#do something until it returns 0
done
This is what I'm using:
allow_return_code() {
local LAST_RETURN_CODE=$?
if [[ $LAST_RETURN_CODE -eq $1 ]]; then
return 0
else
return $LAST_RETURN_CODE
fi
}
# it converts 2 to 0,
my-command-returns-2 || allow_return_code 2
echo $?
# 0
# and it preserves the return code other than 2
my-command-returns-8 || allow_return_code 2
echo $?
# 8
Here is an example using both until and the : no-op:
until curl -k "sftp://$Server:$Port/$Folder" --user "$usr:$pwd" -T "$filename";
do :;
done
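One caveat with that retry loop: if the upload never succeeds, it spins forever. A capped variant might look like this sketch, where flaky stands in for the curl command and the limits are arbitrary:

```shell
#!/usr/bin/env bash
# Retry until success, but give up after max attempts.
tries=0
max=5
flaky() { (( ++tries >= 3 )); }   # stand-in: fails twice, then succeeds

until flaky; do
  if (( tries >= max )); then
    echo "giving up after $max tries" >&2
    exit 1
  fi
  sleep 0   # a real back-off (sleep 1, 2, 4 ...) would go here
done
echo "done after $tries tries"   # done after 3 tries
```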

Add (collect) exit codes in bash

I need to depend on a few separate executions in a script and don't want to bundle them all in an ugly 'if' statement. I would like to take the exit code '$?' of each execution and add it; at the end, if this value is over a threshold, I would like to execute a command.
Pseudo code:
ALLOWEDERROR=5
run_something
RESULT=$?
..other things..
run_something_else
RESULT=$RESULT + $?
if [ $RESULT -gt ALLOWEDERROR ]
then echo "Too many errors"
fi
Issue: even though the Internet claims otherwise, bash refuses to treat RESULT and $? as integers. What is the correct syntax?
Thanks.
You might want to take a look at the trap builtin to see if it would be helpful:
help trap
or
man bash
you can set a trap for errors like this:
#!/bin/bash
AllowedError=5
SomeErrorHandler () {
(( errcount++ )) # or (( errcount += $? ))
if (( errcount > $AllowedError ))
then
echo "Too many errors"
exit $errcount
fi
}
trap SomeErrorHandler ERR
for i in {1..6}
do
false
echo "Reached $i" # "Reached 6" is never printed
done
echo "completed" # this is never printed
If you count the errors (and only when they are errors) like this instead of using "$?", then you don't have to worry about return values that are other than zero or one. A single return value of 127, for example, would throw you over your threshold immediately. You can also register traps for other signals in addition to ERR.
A quick experiment and a dip into the bash info pages says:
declare -i RESULT=$RESULT+$? # note: no spaces around +, so the whole expression is one word
since you are adding to the result several times, you can use declare at the start, like this:
declare -i RESULT=0
true
RESULT+=$?
false
RESULT+=$?
false
RESULT+=$?
echo $RESULT
2
which looks much cleaner.
declare -i says that the variable is integer.
Alternatively you can avoid declare and use arithmetic expression brackets:
RESULT=$(($RESULT+$?))
Use the $(( ... )) construct.
$ cat st.sh
RESULT=0
true
RESULT=$(($RESULT + $?))
false
RESULT=$(($RESULT + $?))
false
RESULT=$(($RESULT + $?))
echo $RESULT
$ sh st.sh
2
$
For how to add numbers in Bash also see:
help let
If you want to use ALLOWEDERROR in your script, prefix it with a $, e.g. $ALLOWEDERROR.
As mouviciel mentioned, collecting the sum of return codes is rather senseless. Instead, you can use an array to accumulate non-zero result codes and check its length. An example of this approach is below:
#!/bin/bash
declare -a RESULT=()
declare -i index=0
declare -i ALLOWED_ERROR=1
function write_result {
if [ $1 -gt 0 ]; then
RESULT[index++]=$1
fi
}
true
write_result $?
false
write_result $?
false
write_result $?
echo ${#RESULT[*]}
if [ ${#RESULT[*]} -gt $ALLOWED_ERROR ]
then echo "Too many errors"
fi
Here are some ways to perform an addition in bash or sh:
RESULT=`expr $RESULT + $?`
RESULT=`dc -e "$RESULT $? + pq"`
And some others in bash only:
RESULT=$((RESULT + $?))
RESULT=`bc <<< "$RESULT + $?"`
Anyway, an exit status on error is not always 1, and its value does not reflect error severity, so in the general case there is not much sense in checking a sum of statuses against a threshold.
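Given that, counting failures rather than summing statuses is usually closer to what you want; a minimal sketch:

```shell
#!/usr/bin/env bash
# Count how many commands failed, so one exotic status (e.g. 127)
# cannot cross the threshold by itself.
failures=0
for cmd in true false true false; do
  "$cmd" || (( failures++ ))
done
echo "failures: $failures"   # failures: 2
```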

What is the best way to write a wrapper function that runs commands and logs their exit code

I currently use this function to wrap executing commands, logging their execution and return code, and exiting in case of a non-zero return code.
However, this is problematic: it does double interpolation, so commands containing single or double quotes break the script.
Can you recommend a better way?
Here's the function:
do_cmd()
{
eval $*
if [[ $? -eq 0 ]]
then
echo "Successfully ran [ $1 ]"
else
echo "Error: Command [ $1 ] returned $?"
exit $?
fi
}
"$#"
From http://www.gnu.org/software/bash/manual/bashref.html#Special-Parameters:
#
Expands to the positional parameters, starting from one. When the
expansion occurs within double quotes, each parameter expands to a
separate word. That is, "$#" is equivalent to "$1" "$2" .... If the
double-quoted expansion occurs within a word, the expansion of the
first parameter is joined with the beginning part of the original
word, and the expansion of the last parameter is joined with the last
part of the original word. When there are no positional parameters,
"$#" and $# expand to nothing (i.e., they are removed).
This means spaces in the arguments are re-quoted correctly.
do_cmd()
{
"$@"
ret=$?
if [[ $ret -eq 0 ]]
then
echo "Successfully ran [ $@ ]"
else
echo "Error: Command [ $@ ] returned $ret"
exit $ret
fi
}
In addition to the "$@" that Douglas suggests, I would use
return $?
and not exit: exit would terminate your shell instead of just returning from the function. If there are cases where you want to exit your shell when something goes wrong, you can do that in the caller:
do_cmd false i will fail executing || exit
# commands in a row. exit as soon as the first fails
do_cmd one && do_cmd two && do_cmd three || exit
(That way, you can handle failures and then exit gracefully).
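To see why "$@" matters here, contrast it with the eval $* version on an argument containing a space (show_args is just an illustrative helper):

```shell
#!/usr/bin/env bash
show_args() { echo "argc=$#"; }

run_eval()   { eval $*; }    # re-parses the command line: quoting is lost
run_direct() { "$@"; }       # preserves each argument as one word

run_eval   show_args "two words"   # argc=2 -- the argument was split
run_direct show_args "two words"   # argc=1 -- the argument survived
```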
