How to store return value from function with argument in a variable? [duplicate] - bash

This question already has answers here:
Return value in a Bash function
(11 answers)
Closed 3 years ago.
I'm struggling with storing the return value (0 or 1) of my function in a variable. Whatever I try, $var ends up being empty or I run into error messages.
Here is my specific code:
function screen_exists() {
    if screen -list | grep -q "${1}"; then
        return 0
    else
        return 1
    fi
}
VAR=$(screen_exists "${PLAYER_SCREEN_NAME}")
echo ${VAR}
I've also tried with a super simple function that always returns 0, but same outcome.

$(...) is command substitution: it captures the standard output of a command, not its exit status. If you want to store the return value of a function, use $? instead:
screen_exists() {
    screen -list | grep -q "$1"
    # implicit here is: return $?
}
screen_exists "${PLAYER_SCREEN_NAME}"
ret=$?
Also note that this function returns 1 if grep doesn't find the search pattern and 0 if it does, which is the standard convention for shell utilities (0 means success).
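A minimal, self-contained sketch of the difference between the two (the screen call is replaced with a plain grep so it runs anywhere; the function name and data are illustrative):

```shell
# Hypothetical stand-in for screen_exists: succeeds if the pattern occurs in the list.
name_exists() {
    printf '%s\n' alice bob carol | grep -q "$1"
}

name_exists bob
ret=$?                     # exit status: 0 here, because grep found a match
out=$(name_exists bob)     # command substitution: captures stdout, which grep -q leaves empty

echo "status=$ret output=[$out]"
```

This prints `status=0 output=[]`: the status lives in `$?`, while `$(...)` only ever sees what the function writes to stdout.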

How to convert bash shell string to command [duplicate]

This question already has answers here:
How can I store a command in a variable in a shell script?
(12 answers)
Dynamic variable names in Bash
(19 answers)
Closed 1 year ago.
I am running different programs with different configs. I tried to convert the strings (kmeans and bayes) in the inner loop into the variables I defined at the beginning, so I can run the programs and capture the console output. kmeans_time and bayes_time are used to record the execution time of each program.
#!/bin/bash
kmeans="./kmeans -m40 -n40 -t0.00001 -p 4 -i inputs/random-n1024-d128-c4.txt"
bayes="./bayes -t 4 -v32 -r1024 -n2 -p20 -s0 -i2 -e2"
kmeans_time=0
bayes_time=0
for n in {1..10}
do
for prog in kmeans bayes
do
output=($(${prog} | tail -1))
${$prog + "_time"}=$( echo $kmeans_time + ${output[1]} | bc)
echo ${output[1]}
done
done
However, I got the following errors. It seems that prog is treated as a string instead of the command I defined. Also, the concatenation for the time variable name failed. I've tried various ways. How is this accomplished in Bash?
./test.sh: line 11: kmeans: command not found
./test.sh: line 12: ${$app + "_time"}=$( echo $kmeans_time + ${output[1]} | bc): bad substitution
What I am trying to do is to execute the following, which works properly.
kmeans="./kmeans -m40 -n40 -t0.00001 -p 4 -i inputs/random-n1024-d128-c4.txt"
output=($($kmeans | tail -1))
# output[1] is the execution time
echo "${output[1]}"
kmeans_times=$kmeans_times+${output[1]}
I want to iterate over different programs and calculate each of their average execution time
I am vaguely guessing you are looking for printf -v.
The string in bayes is not a valid command, nor a valid sequence of arguments to another program, so I really can't guess what you are hoping for it to do.
Furthermore, output is not an array, so ${output[1]} is not well-defined. Are you trying to get the first token from the line? You seem to have misplaced the parentheses to make output into an array; but you can replace the tail call with a simple Awk script to just extract the token you want.
Your code would always add the value of kmeans_time to output; if you want to use the variable named by $prog you can use indirect expansion to specify the name of the variable, but you will need a temporary variable for that.
Mmmmaybe something like this? Hopefully this should at least show you what valid Bash syntax looks like.
kmeans_time=0
bayes_time=0
for n in {1..10}; do
    for prog in kmeans bayes; do
        case $prog in
            kmeans) cmd=(./kmeans -m40 -n40 -t0.00001 -p 4 -i inputs/random-n1024-d128-c4.txt);;
            bayes) cmd=(./bayes -t 4 -v32 -r1024 -n2 -p20 -s0 -i2 -e2);;
        esac
        output=$("${cmd[@]}" | awk 'END { print $2 }')
        var=${prog}_time
        printf -v "$var" %i $(( ${!var} + output ))
        echo "$output"
    done
done
As an alternative to the indirect expansion, maybe use an associative array for the accumulated time. (Bash 4.0+ only, though.)
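A sketch of that variant, with hypothetical stand-ins for the real programs (each just prints a fake "label time" line, and the loop count is shortened so the numbers are easy to check):

```shell
#!/bin/bash
declare -A total                      # associative array: program name -> summed time

# Hypothetical stand-ins for the real kmeans/bayes invocations.
kmeans() { echo "time 3"; }
bayes()  { echo "time 5"; }

for n in {1..2}; do
    for prog in kmeans bayes; do
        output=$("$prog" | awk 'END { print $2 }')
        total[$prog]=$(( ${total[$prog]:-0} + output ))
    done
done
echo "kmeans=${total[kmeans]} bayes=${total[bayes]}"
```

No name concatenation or indirection is needed: the program name itself is the array key.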
If running the two programs in alternation is not important, your code can probably be simplified.
kmeans () {
    ./kmeans -m40 -n40 -t0.00001 -p 4 -i inputs/random-n1024-d128-c4.txt
}
bayes () {
    ./bayes -t 4 -v32 -r1024 -n2 -p20 -s0 -i2 -e2
}
get_output () {
    awk 'END { print $2 }'
}
loop () {
    local time=0
    for n in {1..10}; do
        output=$("$1" | get_output)
        time=$((time + output))
        echo "$output"
    done
    printf -v "${1}_time" %i "$time"
}
loop kmeans
loop bayes
Maybe see also http://mywiki.wooledge.org/BashFAQ/050 ("I'm trying to put a command in a variable, but the complex cases always fail").

Bash function returns an unexpected value

I wrote this function in BASH, and it returns an unexpected value:
checkIfUserFound()
{
    isUserFound=$( cat user_uid.txt | grep $ADMIN_TO_UPDATE -B 1 | grep "uid" )
    return $isUserFound
}
the user_uid.txt file is empty (I enter invalid admin).
but for some reason, it returns "1":
checkIfUserFound
isUserFound=$?
if [ "$isUserFound" -eq "0" ];
then
echo "Searching..."
else
echo "User found..."
fi
This prints "User found..."
Does anyone know how come that is the returning value ? Shouldn't I get "0" when returning from the function ?
The argument to return needs to be a number (a small integer -- the range is 0 through 255). But grep already sets its return code to indicate whether it found a match, and functions already return the return code of the last command in the function; so all you really need is
checkIfUserFound()
{
    grep "$ADMIN_TO_UPDATE" -B 1 user_uid.txt |
        grep -q "uid"
}
(Notice also how we get rid of the useless use of cat).
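The 0-255 restriction is easy to see in isolation: return values wrap modulo 256, which is also why returning a string can never work. A tiny sketch:

```shell
wraps() { return 300; }    # 300 does not fit in one byte
wraps
echo $?                    # 300 % 256 = 44, not 300
```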
The script could probably usefully be refactored to Awk. Perhaps I am guessing correctly what you want:
checkIfUserFound()
{
    awk -v user="$ADMIN_TO_UPDATE" '/uid/ { p=1; next }
        p { if ($0 ~ user) found=1; p=0 }
        END { exit 1-found }' user_uid.txt
}
Finally, the code which calls this function should check whether it succeeded, not whether it printed 0.
if ! checkIfUserFound
then
echo "Searching..."
else
echo "User found..."
fi
Notice perhaps that [ is not part of the if command's syntax; it is an ordinary command, albeit one whose result code if is very often used to check.
The main problem is that you don't check for failure of any of the commands in the pipe.
The failure (exit code) of the last command in the pipe will be the exit value of the whole pipe. That exit code is available in the $? variable.
And since you want to return the success/failure of the pipe from the function, that's the value you need to return:
return $?
If the pipe doesn't fail, its output on stdout will be put into the $isUserFound variable. Since that output is not a valid exit code (which must be numeric), it can't be returned.
Since you don't need the $isUserFound variable, and you want to return the result of the pipe, your function could be simplified as
checkIfUserFound() {
    cat user_uid.txt | grep $ADMIN_TO_UPDATE -B 1 | grep "uid" >/dev/null 2>&1
}
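To see which stage of a pipeline actually failed, Bash records each command's status in the PIPESTATUS array, and set -o pipefail makes the whole pipeline report failure if any stage fails. A small sketch:

```shell
false | true
status=("${PIPESTATUS[@]}")   # save immediately; every command overwrites PIPESTATUS
echo "stages: ${status[0]} ${status[1]}"   # 1 0: first stage failed, last succeeded

set -o pipefail
false | true
echo "with pipefail: $?"      # now 1: any failing stage fails the pipeline
```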

what is ":" called in bash? Colon usage as a pass statement [duplicate]

This question already has answers here:
What is the purpose of the : (colon) GNU Bash builtin?
(12 answers)
Closed 8 years ago.
See: What is the Bash equivalent of Python's pass statement
What is this ":" colon usage called? For example:
if [[ -n $STRING ]]; then
#printf "[INFO]:STRING: if -n string: STRING:$STRING \n"
:
else
printf "[INFO]:Nothing in the the string\n"
fi
To see what that is, run help : in the shell. It gives:
$ help :
:: :
Null command.
No effect; the command does nothing.
Exit Status:
Always succeeds.
Very useful in one-liner infinite loops, for example:
while :; do date; sleep 1; done
Again, you could write the same thing with true instead of :, but this is shorter.
Interestingly:
$ help true
true: true
Return a successful result.
Exit Status:
Always succeeds.
According to this, the difference is that : is "Null command",
while true is "Returns a successful result".
Another difference is that true is usually a real binary:
$ which true
/usr/bin/true
On the other hand, which : gives nothing. (Which makes sense, being a "null command".)
Anyway, @Andy is right, this is a duplicate of this other post, which explains it much better.
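Besides empty branches and infinite loops, : is also handy as a no-op host for parameter-expansion side effects, e.g. assigning a default value (a common idiom, shown here with a throwaway variable name):

```shell
unset COUNT
: "${COUNT:=10}"   # the expansion assigns the default; : discards the resulting argument
echo "$COUNT"      # prints 10
```

Without the leading :, the shell would try to execute the expanded value (here, 10) as a command.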

Use variables outside the subprocess in bash

There's a getStrings() function that calls a getPage() function that returns some HTML page. That HTML is piped through an egrep and sed combination to get only 3 strings. Then I try to put each string into a separate variable (link, profile, gallery respectively) using a while read... construction. But it works only inside the while...done loop because it runs in a subprocess. What should I do to use those variables outside the getStrings() function?
getStrings() {
    local i=2
    local C=0
    getPage $(getPageLink 1 $i) |
    egrep *some expression that results in 3 strings* |
    while read line; do
        if (( (C % 3) == 0 )); then
            link=$line
        elif (( (C % 3) == 1 )); then
            profile=$line
        else
            gallery=$line
        fi
        C=$((C+1)) #Counter
    done
}
Simple: don't run the loop in a subprocess :)
To actually accomplish that, you can use process substitution.
while read line; do
...
done < <(getPage $(getPageLink 1 $i) | egrep ...)
For the curious, a POSIX-compatible way is to use a named pipe (and it's possible that bash uses named pipes to implement process substitution):
mkfifo pipe
getPage $(getPageLink 1 $i) | egrep ... > pipe &
while read line; do
...
done < pipe
Starting in bash 4.2, you can just set the lastpipe option, which causes the last command in a pipeline to run in the current shell, rather than a subshell.
shopt -s lastpipe
getPage $(getPageLink 1 $i) | egrep ... | while read line; do
...
done
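Note that lastpipe only takes effect when job control is off, which is the default in non-interactive scripts (interactively you would need set +m first). A minimal sketch, with a stand-in pipeline:

```shell
#!/bin/bash
shopt -s lastpipe             # run the last pipeline stage in the current shell
echo "hello world" | read first second
echo "$first-$second"         # prints hello-world: the read's variables survived the pipe
```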
However, using a while loop is not the best way to set the three variables. It's easier to just call read three times within a command group, so that they all read from the same stream. In any of the three scenarios above, replace the while loop with
{ read link; read profile; read gallery; }
If you want to be a little more flexible, put the names of the variables you might want to read in an array:
fields=( link profile gallery )
then replace the while loop with this for loop instead:
for var in "${fields[@]}"; do read $var; done
This lets you easily adjust your code, should the pipeline ever return more or fewer lines, by just editing the fields array to have the appropriate field names.
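Putting the pieces together, with a hypothetical stand-in for the real getPage | egrep pipeline that just emits three known lines:

```shell
# Stand-in for the real pipeline: emits exactly three lines (illustrative data).
get_strings() {
    printf '%s\n' "http://example.com/a" "profile-1" "gallery-1"
}

# Process substitution keeps the reads in the current shell, so the
# variables remain visible after this line.
{ read -r link; read -r profile; read -r gallery; } < <(get_strings)
echo "$link $profile $gallery"
```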
One more solution, using an array:
getStrings() {
    array_3=( $(getPage |    # some function
        egrep ...) )         # pipe conveyor
}

multiple returns from a function in a shell script

function get_arguments()
{
    read -p 'data : ' data
    read -p 'lambda: ' lambda
    echo $data $lambda
}
data,lambda=$(get_arguments)
But i am getting an error
data : /home/wolfman/Downloads/data
lambda value: 2
./shell_script.sh: line 25: data,lambda,= /home/wolfman/Downloads/data: No such file or directory
But
1) Why is it even evaluating whether that file exists or not? It's just a string.
2) What am I doing wrong :(
Thanks
sh syntax does not allow that. But, the variables in the function are global, so you can just invoke the function and data and lambda will be set in the caller.
functions return an integer value, but they can print arbitrary data which can be read by the caller. For example, you could do:
get_arguments | { read data lambda; echo $data $lambda; }
The drawback is that the values are only available in that block. (The pipe creates a subshell, and the values read by read are only valid in that subshell.)
Just for fun here are a couple of other possible methods.
read -r data lambda <<< "$(get_arguments)"
or
set -- $(get_arguments)
data=$1
lambda=$2
shells don't allow direct assignment to lists of variables; you have to manage that with shell string parsing (or possibly other methods). Try
data_lambda=$(get_arguments)
data=${data_lambda% *}
#-----------------^^space char
lambda=${data_lambda#* }
#------------------^^space char
$ d=123 l=345
$ data_lambda=$(echo $d $l)
$ echo $data_lambda
123 345
$ data=${data_lambda% *}
$ lambda=${data_lambda#* }
$ echo $data
123
$ echo $lambda
345
(Substituting $(echo $d $l) for data_lambda=$(get_arguments).)
See my write-up on shell parameter modifiers
IHTH
