bash: set exit status without breaking loop? - bash

I am writing a bash script to check the fstab structure.
In a for loop, I have placed a return statement so I can use the exit code later in the script, but the return statement breaks the loop after printing only the first requested output.
How can I assign a return code of 1 without breaking the loop, so that I get all the results and not just the first?
for i in $(printf "$child"|awk '/'$new_mounts'/'); do
    child_count=$(printf "$child"|awk '/'$new_mounts'/'|wc -l)
    if [[ $child_count -ge 1 ]]; then
        echo -e "\e[96mfstab order check failed: there are child mounts before parent mount:\e[0m"
        echo -e "\e[31mError: \e[0m "$child"\e[31m mount point, comes before \e[0m $mounts \e[31m on fstab\e[0m"
        return 1
    else
        return 0
    fi
done

If you read the documentation for the language, immediately returning is what return is supposed to do. That's not unique to shell -- I actually can't think of a single language with a return construct where it doesn't behave this way.
If you want to set a value to be used as a return value later, use a variable:
yourfunc() {
    local retval=0
    for i in ...; do
        (( child_count >= 1 )) && retval=1
    done
    return "$retval"
}
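Applied to a full loop, the pattern looks like this; a minimal runnable sketch with placeholder data (the item names and the failing case are made up for illustration, not the original fstab data):

```shell
#!/usr/bin/env bash
# Accumulate a failure status across all iterations instead of
# returning on the first failure. Items and the failing case are
# placeholders, not the original fstab data.
check_all() {
    local retval=0
    for item in one two three; do
        if [[ $item == two ]]; then
            echo "check failed for: $item"
            retval=1    # remember the failure, but keep looping
        fi
    done
    return "$retval"    # report the accumulated status once
}

check_all
echo "exit status: $?"  # prints: exit status: 1
```

The loop still visits every item, so every failing entry is reported before the non-zero status is finally returned.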

Related

Assigning the output of a C program in unix shell script and checking the value

Let's say I have a C program that exits with either a zero or a non-zero status; basically a program that evaluates to a boolean value.
I wish to write a shell script that can find out whether the C program exits with zero or not. I am currently trying to assign the return value of the C program to a variable in a shell script but seem to be unable to do so. I currently have:
#!/bin/sh
variable=/path/to/executable input1
I know that assignments in shell scripts must not have spaces around the =, but I do not know another way around this, since running the line above produces an error: the shell interprets input1 as a command, not an argument. Is there a way I can do this?
I am also unsure how to check the return value of the C program. Should I just use an if statement and check whether the C program's value is equal to zero or not?
This is very basic
#!/bin/sh
variable=`/path/to/executable input1`
or
#!/bin/sh
variable=$(/path/to/executable input1)
and to get the return code from the program use
echo $?
You can assign with backticks or $(...) as shown in iharob's answer.
Another way is to interpret a zero return value as success and evaluate that directly (see manual):
if /path/to/executable input1; then
    echo "The return value was 0"
else
    echo "The return value was not 0"
fi
Testing with a little dummy program that exits with 0 if fed "yes" and exits with 1 else:
#!/bin/bash
var="$1"
if [[ $var == yes ]]; then
    exit 0
else
    exit 1
fi
Testing:
$ if ./executable yes; then echo "Returns 0"; else echo "Doesn't return 0"; fi
Returns 0
$ if ./executable no; then echo "Returns 0"; else echo "Doesn't return 0"; fi
Doesn't return 0
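The two approaches combine naturally: you can capture the output and branch on the exit status in one step, since the assignment's exit status is that of the captured command. A small sketch, using grep as a stand-in for the C program:

```shell
# The assignment's exit status is that of the captured command,
# so `if` can test it directly. grep stands in for the C program.
if output=$(echo "hello world" | grep "hello"); then
    echo "matched: $output"   # prints: matched: hello world
else
    echo "no match"
fi
```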
If not using Bash: if [ "$var" = "yes" ]; then

better way to fail if bash `declare var` fails?

Problem
In some bash scripts, I don't want to set -e. So I write variable declarations like
var=$(false) || { echo 'var failed!' 1>&2 ; exit 1 ; }
which will print var failed!.
But using declare, the || is never taken.
declare var=$(false) || { echo 'var failed!' 1>&2 ; exit 1 ; }
That will not print var failed!.
Imperfect Solution
So I've come to using
declare var=$(false)
[ -z "${var}" ] || { echo 'var failed!' 1>&2 ; exit 1 ; }
Does anyone know how to turn the Imperfect Solution two lines into a neat one line ?
In other words, is there a bash idiom to make the declare var failure neater?
More Thoughts
This seems like an unfortunate mistake in the design of bash declare.
Firstly, the issue of two lines vs. one line can be solved with a little thing called Mr. semicolon (also note the && vs. ||; pretty sure you meant the former):
declare var=$(false); [ -z "${var}" ] && { echo 'var failed!' 1>&2 ; exit 1 ; }
But I think you're looking for a better way of detecting the error. The problem is that declare always returns an error code based on whether it succeeded in parsing its options and carrying out the assignment. The error you're trying to detect is inside a command substitution, so it's outside the scope of declare's return code design. Thus, I don't think there's any possible solution for your problem using declare with a command substitution on the RHS. (Actually there are messy things you could do like redirecting error information to a flat file from inside the command substitution and reading it back in from your main code, but just no.)
Instead, I'd suggest declaring all your variables in advance of assigning them from command substitutions. In the initial declaration you can assign a default value, if you want. This is how I normally do this kind of thing:
declare -i rc=-1;
declare s='';
declare -i i=-1;
declare -a a=();
s=$(give me a string); rc=$?; if [[ $rc -ne 0 ]]; then echo "s [$rc]." >&2; exit 1; fi;
i=$(give me a number); rc=$?; if [[ $rc -ne 0 ]]; then echo "i [$rc]." >&2; exit 1; fi;
a=($(gimme an array)); rc=$?; if [[ $rc -ne 0 ]]; then echo "a [$rc]." >&2; exit 1; fi;
Edit: Ok, I thought of something that comes close to what you want, but if properly done, it would need to be two statements, and it's ugly, although elegant in a way. And it would only work if the value you want to assign has no spaces or glob (pathname expansion) characters, which makes it quite limited.
The solution involves declaring the variable as an array, and having the command substitution print two words, the first of which being the actual value you want to assign, and the second being the return code of the command substitution. You can then check index 1 afterward (in addition to $?, which can still be used to check the success of the actual declare call, although that shouldn't ever fail), and if success, use index 0, which elegantly can be accessed directly as a normal non-array variable can:
declare -a y=($(echo value-for-y; false; echo $?;)); [[ $? -ne 0 || ${y[1]} -ne 0 ]] && { echo 'error!'; exit 1; }; ## fails, exits
## error!
declare -a y=($(echo value-for-y; true; echo $?;)); [[ $? -ne 0 || ${y[1]} -ne 0 ]] && { echo 'error!'; exit 1; }; ## succeeds
echo $y;
## value-for-y
I don't think you can do better than this. I still recommend my original solution: declare separately from command substitution+assignment.
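A quick demonstration of the masking effect described above: declare reports its own success and hides the failed command substitution, while a plain assignment passes the status through:

```shell
#!/usr/bin/env bash
# declare's exit status reflects the declare itself, not the
# command substitution on its right-hand side:
declare var=$(false)
echo "after declare: $?"          # prints: after declare: 0

# A plain assignment propagates the substitution's exit status:
var=$(false)
echo "after plain assignment: $?" # prints: after plain assignment: 1
```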

Why does "if 0;" not work in shell scripting?

I wrote the following shell script, just to see if I understand the syntax to use if statements:
if 0; then
    echo yes
fi
This doesn't work. It yields the error
./iffin: line 1: 0: command not found
what am I doing wrong?
use
if true; then
    echo yes
fi
if expects the return code from a command. 0 is not a command. true is a command.
The bash manual doesn't say much on the subject, but here it is:
http://www.gnu.org/software/bash/manual/bashref.html#Conditional-Constructs
You may want to look into the test command for more complex conditional logic.
if test foo = foo; then
    echo yes
fi
AKA
if [ foo = foo ]; then
    echo yes
fi
To test for numbers being non-zero, use the arithmetic expression:
if (( 0 )) ; then
    echo Never echoed
else
    echo Always echoed
fi
It makes more sense to use variables than literal numbers, though:
count_lines=$( wc -l < input.txt )
if (( count_lines )) ; then
    echo File has $count_lines lines.
fi
Well, from the bash man page:
if list; then list; [ elif list; then list; ] ... [ else list; ] fi
The if list is executed. If its exit status is zero, the then list is executed.
Otherwise, each elif list is executed in turn, and if its exit status is zero,
the corresponding then list is executed and the command completes.
Otherwise, the else list is executed, if present.
The exit status is the exit status of the last command executed,
or zero if no condition tested true.
Which means that the argument to if gets executed to get the return code, so in your example you're trying to execute a command named 0, which apparently does not exist.
What do exist are the commands true, false and test, the last of which is also aliased as [. It lets you write more complex expressions for if. Read man test for more info.
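Since if just runs a command and tests its exit status, any command can serve as the condition. A small sketch:

```shell
# `if` tests the exit status of whatever command follows it,
# so grep -q (quiet, status-only) works directly as a condition.
if echo "hello world" | grep -q world; then
    echo "found"   # runs because grep exited with status 0
fi
```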

Bash: can't call a function; No such file or directory

I am having huge issues that have blocked me all day.
I have a number of function files with various functions grouped by use, i.e. SSH, logging, etc.
I have a script that runs and adds errors to a global error array. I have an error trap that calls a function in another 'exit_functions' file.
In this exit_functions file I do various things, but the part I am having issues with is a loop that runs over this array and pulls out the array elements, which are actually commands. These commands are functions within one of the function files that have been included in the calling script using the normal . /path/to/functions_file syntax.
All my other functions, such as logging, work fine from the exit_functions file; it is just these particular function calls that fail. I get an error:
/functions/exit_functions: line 109: closeSSHTunnel /tmp/ssh_tunnel_iA0yj.lck: No such file or directory
Now, closeSSHTunnel is a function that sits in a functions file that has been included previously, just as other function files have been included. I only get this error when the function calls come out of an array.
I am not sure if I have described it properly - please let me know if I can provide any other info.
EDIT full function code below:
/functions/exit_functions file:
exitTrap (){
    local LCL_SCRIPT_ERROR
    if [ $ERR_COUNT_EXIT_FUNCT -gt 0 ]; then
        for LCL_SCRIPT_ERROR in "${ERROR_ARRAY[@]}"; do
            case $LCL_SCRIPT_ERROR in
                SVN_IMPORT ) # match the SVN_IMPORT error name and search for specific commands to run
                    exitMsg "$LCL_SCRIPT_ERROR; 111"
                    executeErrorActions $LCL_SCRIPT_ERROR
                    exitScript 111
                    ;;
                * ) exitMsg "$LCL_SCRIPT_ERROR; 255"
                    exitScript 255
                    ;;
            esac
        done
    fi
}
So my test code introduces an error state that ends up calling the function above (exitTrap). It has already populated an array named ERROR_ARRAY.
So the executeErrorActions function, also in /functions/exit_functions, loops over the associative array, ERROR_ACTION_ARRAY, looking for a key named (in this test) SVN_IMPORT. When it finds this key it is looking for a ':'-delimited string. Each delimited section is a separate command that I want to run when this error condition arises. In my test I have a single entry (no delimiters) that is simply: *closeSSHTunnel /tmp/ssh_tunnel_iA0yj.lck*
Now, with the function below, it works fine when I do not include the section between IFS= and unset IFS and instead just use the single line with the double comment hashes below. As soon as I introduce the IFS separator to break the string up, I get the error:
/functions/exit_functions: line 109: closeSSHTunnel /tmp/ssh_tunnel_iA0yj.lck: No such file or directory
executeErrorActions (){
    if [ ! $# -eq 1 ]; then
        return 1
    fi
    local ERROR_NAME=$1
    local ERROR_ACTION
    #ERROR_KEY="SVN_IMPORT"
    for ERROR_ACTION in ${!ERROR_ACTION_ARRAY[@]}; do
        #echo KEY: $i VALUE: ${ERROR_ACTION_ARRAY[$i]}
        if [ $ERROR_NAME == $ERROR_ACTION ]; then
            ##$(${ERROR_ACTION_ARRAY[$ERROR_ACTION]})
            IFS=":"
            for i in ${ERROR_ACTION_ARRAY[$ERROR_ACTION]}; do
                $i
            done
            unset IFS
            return 0
        fi
    done
}
Here is the set -x output of the test run using two ':'-delimited commands: the same closeSSHTunnel function above and a simple ls -la call. I add these to the ERROR_ACTION_ARRAY using the addError function call as below:
addError SVN_IMPORT closeSSHTunnel $SSH_TUNNEL_LCK_FILE:ls -la
This is the set -x output:
+ executeErrorActions SVN_IMPORT
+ '[' '!' 1 -eq 1 ']'
+ local ERROR_NAME=SVN_IMPORT
+ local ERROR_ACTION
+ for ERROR_ACTION in '${!ERROR_ACTION_ARRAY[@]}'
+ '[' SVN_IMPORT == SVN_IMPORT ']'
+ IFS=:
+ for i in '${ERROR_ACTION_ARRAY[$ERROR_ACTION]}'
+ 'closeSSHTunnel /tmp/ssh_tunnel_iA0yj.lck'
/functions/exit_functions: line 116: closeSSHTunnel /tmp/ssh_tunnel_iA0yj.lck: No such file or directory
+ for i in '${ERROR_ACTION_ARRAY[$ERROR_ACTION]}'
+ 'ls -la'
/functions/exit_functions: line 116: ls -la: command not found
+ unset IFS
+ return 0
Note how one of them says 'No such file or directory' and the other says 'command not found'. This is now day 2 trying to sort this out!
My conclusion is that the IFS change and the string breakup is causing issues.
EDIT 2
If I move the IFS calls around a bit, terminate the IFS on the ':' char inside the loop, and then establish it again after the execute call, it all works!
for ERROR_ACTION in ${!ERROR_ACTION_ARRAY[@]}; do
    #echo KEY: $i VALUE: ${ERROR_ACTION_ARRAY[$i]}
    if [ $ERROR_NAME == $ERROR_ACTION ]; then
        #$(${ERROR_ACTION_ARRAY[$ERROR_ACTION]})
        IFS=$':'
        for i in ${ERROR_ACTION_ARRAY[$ERROR_ACTION]}; do
            unset IFS
            $i
            IFS=$':'
        done
        unset IFS
        return 0
    fi
done
Can anyone explain this ?
It's because you have IFS set to ":" -- bash uses IFS to split the variable into words (i.e. the function name vs. its arguments), not into separate commands. Thus, it's trying to run a command named closeSSHTunnel /tmp/ssh_tunnel_iA0yj.lck, not a command named closeSSHTunnel with the argument /tmp/ssh_tunnel_iA0yj.lck.
$ ERROR_ACTION="echo something"
$ $ERROR_ACTION
something
$ IFS=":"
$ $ERROR_ACTION
-bash: echo something: command not found
$ ERROR_ACTION="echo:something"
$ $ERROR_ACTION
something
If you want to be able to include multiple commands, you'll have to split them yourself.
EDIT: Given the more complete script excerpt, what you need is for IFS to be set to ":" when the for splits the command, but be back to normal when you execute the command:
...
saveIFS="$IFS"
IFS=":"
for i in ${ERROR_ACTION_ARRAY[$ERROR_ACTION]}; do
    IFS="$saveIFS"
    $i
done
IFS="$saveIFS"
This resets IFS more than is strictly necessary, but it's easier to do that than add the logic to figure out when it actually needs to be reset (and BTW the final reset is there in case the loop never executes).
I don't know if this was the right way to do it, but I split the string into an array with IFS=":" and then reset the IFS. I then loop over that array of commands to execute them. All working.
executeErrorActions (){
    if [ ! $# -eq 1 ]; then
        return 1
    fi
    local ERROR_NAME=$1
    local ERROR_ACTION
    local ACTION_ARRAY
    local ACTION
    if [ ${#ERROR_ACTION_ARRAY[@]} -gt 0 ]; then
        for ERROR_ACTION in ${!ERROR_ACTION_ARRAY[@]}; do
            if [ $ERROR_NAME == $ERROR_ACTION ]; then
                if [ ${#ERROR_ACTION_ARRAY[$ERROR_ACTION]} -gt 0 ]; then
                    IFS=$':'
                    ACTION_ARRAY=( ${ERROR_ACTION_ARRAY[$ERROR_ACTION]} )
                    unset IFS
                    for ACTION in ${!ACTION_ARRAY[@]}; do
                        exitMsg "Running action for $ERROR_ACTION"
                        ${ACTION_ARRAY[$ACTION]}
                        if [ $? -eq 0 ]; then
                            exitMsg "Error action succeeded"
                        else
                            exitMsg "Error action failed"
                            return 1
                        fi
                    done
                    return 0
                else
                    return 1 # no contents in the array element; no actions to run; maybe there should have been; return error
                fi
            else
                return 0 # no matching actions for this error; still return success
            fi
        done
    else
        return 0 # action array is empty
    fi
}
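A more compact variant of the same split-first-then-execute idea uses read -a with a per-command IFS, which never touches the global IFS at all (the action string here is illustrative, not the original):

```shell
#!/usr/bin/env bash
# Split a ':'-delimited action string into an array. Prefixing the
# assignment to `read` scopes IFS to that one command, so the global
# IFS is never modified. ACTIONS is a made-up example value.
ACTIONS="echo first:echo second"
IFS=':' read -r -a action_array <<< "$ACTIONS"

# Each element is now a whole command string, which gets word-split
# normally (on spaces) when executed:
for action in "${action_array[@]}"; do
    $action
done
# prints:
# first
# second
```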

How do I set $? or the return code in Bash?

I want to set a return value once so it goes into the while loop:
#!/bin/bash
while [ $? -eq 1 ]
do
    #do something until it returns 0
done
In order to get this working I need to set $? = 1 at the beginning, but that doesn't work.
You can set an arbitrary exit code by executing exit with an argument in a subshell.
$ (exit 42); echo "$?"
42
So you could do:
(exit 1) # or some other value > 0, or use false as others have suggested
while (($?))
do
    # do something until it returns 0
done
Or you can emulate a do while loop:
while
    # do some stuff
    # do some more stuff
    # do something until it returns 0
do
    continue # just let the body of the while be a no-op
done
Either of those guarantee that the loop is run at least one time which I believe is what your goal is.
For completeness, exit and return each accept an optional argument which is an integer (positive, negative or zero) which sets the return code as the remainder of the integer after division by 256. The current shell (or script or subshell*) is exited using exit and a function is exited using return.
Examples:
$ (exit -2); echo "$?"
254
$ foo () { return 2000; }; foo; echo $?
208
* This is true even for subshells which are created by pipes (except when both job control is disabled and lastpipe is enabled):
$ echo foo | while read -r s; do echo "$s"; exit 333; done; echo "$?"
77
Note that it's better to use break to leave loops, but its argument is for the number of levels of loops to break out of rather than a return code.
Job control is disabled using set +m, set +o monitor or shopt -u -o monitor. To enable lastpipe, do shopt -s lastpipe. If you do both of those, the exit in the preceding example will cause the while loop and the containing shell to both exit and the final echo there will not be performed.
false always returns an exit code of 1.
#!/bin/bash
false
while [ $? -eq 1 ]
do
    #do something until it returns 0
done
#!/bin/bash
RC=1
while [ $RC -eq 1 ]
do
    #do something until it returns 0
    RC=$?
done
Some of the answers rely on rewriting the code. In some cases it might be foreign code that you have no control over.
Although for this specific question it is enough to set $? to 1, if you need to set $? to any value, the only helpful answer here is the one from Dennis Williamson.
A slightly more efficient approach, which does not spawn a new child (but is also less terse), is:
function false() { echo "$$"; return ${1:-1}; }
false 42
Note: the echo part is there just to verify that it runs in the current process.
I think you can do this implicitly by running a command that is guaranteed to fail, before entering the while loop.
The canonical such command is, of course, false.
Didn't find anything lighter than just a simple function:
function set_return() { return ${1:-0}; }
All other solutions like (...) or [...] or false might contain an external process call.
Old question, but there's a much better answer:
#!/bin/bash
until
    #do something until it returns success
do
    :
done
If you're looping until something is successful, then just do that something in the until section. You can put exactly the same code in the until section you were thinking you had to put in the do/done section. You aren't forced to write the code in the do/done section and then transfer its results back to the while or until.
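For example, a retry loop where the command itself is the until condition; attempt here is a made-up stand-in for a command that succeeds on the third try:

```shell
#!/usr/bin/env bash
# Put the retried work in the `until` condition; the body is a no-op.
# `attempt` is a dummy that fails twice and then succeeds.
tries=0
attempt() {
    tries=$((tries + 1))
    [ "$tries" -ge 3 ]   # this test's status is the function's status
}

until attempt; do
    :   # nothing to do here; the work happens in the condition
done
echo "succeeded after $tries tries"   # prints: succeeded after 3 tries
```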
$? can contain a byte value between 0..255. Return numbers outside this range will be remapped to this range as if a bitwise and 255 was applied.
exit value - can be used to set the value, but is brutal since it will terminate a process/script.
return value - when used in a function is somewhat traditional.
[[ ... ]] - is good for evaluating boolean expressions.
Here is an example of exit:
# Create a subshell, but, exit it with an error code:
$( exit 34 ); echo $? # outputs: 34
Here are examples of return:
# Define a `$?` setter and test it:
set_return() { return $1; }
set_return 0; echo $? # outputs: 0
set_return 123; echo $? # outputs: 123
set_return 1000; echo $? # outputs: 232
set_return -1; echo $? # outputs: 255
Here are examples of [[ ... ]]:
# Define and use a boolean test:
lessthan() { [[ $1 < $2 ]]; }
lessthan 3 8 && echo yes # outputs: yes
lessthan 8 3 && echo yes # outputs: nothing
Note, when using $? as a conditional, zero (0) means success, non-zero means failure.
Would something like this be what you're looking for?
#!/bin/bash
TEMPVAR=1
while [ $TEMPVAR -eq 1 ]
do
    #do something until it returns 0
    #construct the logic which will cause TEMPVAR to be set to 0, then check
    #for it in the if statement
    if [ yourcodehere ]; then
        TEMPVAR=0
    fi
done
You can use until to handle cases where #do something until it returns 0 returns something other than 1 or 0:
#!/bin/bash
false
until [ $? -eq 0 ]
do
    #do something until it returns 0
done
This is what I'm using
allow_return_code() {
    local LAST_RETURN_CODE=$?
    if [[ $LAST_RETURN_CODE -eq $1 ]]; then
        return 0
    else
        return $LAST_RETURN_CODE
    fi
}
# it converts 2 to 0,
my-command-returns-2 || allow_return_code 2
echo $?
# 0
# and it preserves the return code other than 2
my-command-returns-8 || allow_return_code 2
echo $?
# 8
Here is an example using both "until" and the ":"
until curl -k "sftp://$Server:$Port/$Folder" --user "$usr:$pwd" -T "$filename"
do
    :
done
