Evaluate variable in if statement - bash

So I have an array like:
al_ap_version=('ap_version' '[[ $data -ne $version ]]')
And the condition gets evaluated inside a loop like:
for alert in alert_list; do
data=$(tail -1 somefile)
condition=$(eval echo \${$alert[1]})
if eval "$condition" ; then
echo SomeAlert
fi
done
Whilst this generally works with many scenarios, if $data returns something like "-/-" or "4.2.9", I get errors as it doesn't seem to like complex strings in the variable.
Obviously I can't enclose the variable in single quotes as it won't expand so I'm after any ideas to expand the $data variable (or indeed the $version var which suffers the same possible fate) in a way that the evaluation can handle?

Ignoring the fact that eval is probably super dangerous to use here (unless the data in somefile is controlled by you and only you), there are a few issues to fix in your example code.
In your for loop, alert_list needs to be $alert_list.
Also, as pointed out by @choroba, you should be using != instead of -ne since your input isn't always an integer.
Finally, while debugging, you can add set -x to the top of your script, or add -x to the end of your shebang line to enable verbose output (helps to determine how bash is expanding your variables).
This works for me:
#!/bin/bash -x
data=2.2
version=1
al_ap_version=('ap_version' '[[ $data != $version ]]')
alert_list='al_ap_version'
for alert in $alert_list; do
condition=$(eval echo \${$alert[1]})
if eval "$condition"; then
echo "alert"
fi
done

You could try a more functional approach, even though bash is only just barely capable of such things. On the whole, it is usually a lot easier to pack an action to be executed into a bash function and refer to it with the name of the function, than to try to maintain the action as a string to be evaluated.
But first, the use of an array of names of arrays is awkward. Let's get rid of it.
It's not clear to me what the point of element 0, ap_version, in the array al_ap_version is, but I suppose it has something to do with error messages. If the order of alert processing isn't important, you could replace the list of names of arrays with a single associative array:
declare -A alert_list
alert_list[ap_version]=... # see below
alert_list[os_dsk]=...
and then process them with:
for alert_name in ${!alert_list[@]}; do
alert=${alert_list[$alert_name]}
...
done
Having done that, we can get rid of the eval, with its consequent ugly necessity for juggling quotes, by creating a bash function for each alert:
check_ap_version() {
(($version != $1))
}
Edit: It seems that $1 is not necessarily numeric, so it would be better to use a non-numeric comparison, although exact version match might not be what you're after either. So perhaps it would be better to use:
check_ap_version() {
[[ $version != $1 ]]
}
Note the convention that the first argument of the function is the data value.
Now we can insert the name of the function into the alert array, and call it indirectly in the loop:
declare -A alert_list
alert_list[ap_version]=check_ap_version
alert_list[os_dsk]=check_op_dsk
check_alerts() {
local alert_name alert
local data=$(tail -1 somefile)
for alert_name in ${!alert_list[@]}; do
alert=${alert_list[$alert_name]}
if $alert "$data"; then
signal_alert $alert_name
fi
done
}
If you're prepared to be more disciplined about the function names, you can avoid the associative array, and thereby process the alerts in order. Suppose, for example, that every function has the name check_<alert_name>. Then the above could be:
alert_list=(ap_version os_dsk)
check_alerts() {
local alert_name
local data=$(tail -1 somefile)
for alert_name in "${alert_list[@]}"; do
if check_$alert_name "$data"; then
signal_alert $alert_name
fi
done
}
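For completeness, here is one way the pieces might be wired together. Note that signal_alert is never defined in the answer above, so the version below is only an illustrative placeholder, and check_os_dsk is a made-up second check:
#!/bin/bash

version=4.2.9

# one check_<alert_name> function per alert; by convention $1 is the data value
check_ap_version() { [[ $version != $1 ]]; }
check_os_dsk()     { [[ $1 == "-/-" ]]; }      # invented check, for illustration only

# placeholder: a real script would log, mail, or page here
signal_alert() { echo "ALERT: $1"; }

alert_list=(ap_version os_dsk)

check_alerts() {
    local alert_name
    local data=$(tail -1 somefile)
    for alert_name in "${alert_list[@]}"; do
        if check_$alert_name "$data"; then
            signal_alert "$alert_name"
        fi
    done
}

check_alerts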

Related

Iterate between two arrays within a single loop

I have these variables:
bridge_xa_name_list=( "$br_int0_srxa" "$br_int1_srxa" "$br_int2_srxa" "$br_int6_srxa" "$br_int7_srxa" "$br_wan3_srxa1" "$br_wan3_srxa2" )
bridge_xb_name_list=( "$br_int0_srxb" "$br_int1_srxb" "$br_int2_srxb" "$br_int6_srxb" "$br_int7_srxb" "$br_wan3_srxb1" "$br_wan3_srxb2" )
I am trying to use a single loop to iterate all the elements for each array.
At the moment I have a functioning loop but only by referencing the $bridge_xa_name_list
for a in "${bridge_xa_name_list[#]}"; do
shell_echo_textval_green "Network Bridge Name Detected" "$a"
sleep 1
shell_echo_text "Verifying $a network State"
virsh_net_list=$(virsh net-list | grep active | grep $a)
if [[ ! $virsh_net_list == *"active" ]]
then
shell_echo "[Inactive]"
else
shell_echo "[Active]"
shell_echo_green "$a.xml found. Undefining anyway."
virsh net-undefine $a
fi
shell_echo_text "File $a.xml is at $srxa_fld_path"
if [[ -f ${srxa_fld_path}${a}.xml ]]
then
shell_echo "[Yes]"
else
shell_echo "[Not Found]"
shell_echo_text "Attempting to copy $a.xml template to ~/config/$srxa_nm"
cp $xml_file_path $srxa_fld_path${a}.xml
shell_echo ["Copied"]
# Check if the copy was successful
if [[ -f $srxa_fld_path${a}.xml ]]
then
:
else
shell_echo_red "[Failed]"
shell_echo_red "There was an error when trying to copy ${a}.xml"
shell_echo_error_banner "Script Aborted! 1 error(s)"
exit 1
fi
fi
done
$a in my script is iterating all the elements from the 1st array. However, I would like to include the second array as part of the same loop.
These are indexed arrays so you can iterate over the indexes:
for (( i = 0; i < ${#bridge_xa_name_list[@]}; i++ )); do
echo "${bridge_xa_name_list[i]}"
echo "${bridge_xb_name_list[i]}"
done
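If there is any chance the two arrays differ in length (the question doesn't say), you could guard the second lookup; this is only a sketch of that idea:
for (( i = 0; i < ${#bridge_xa_name_list[@]}; i++ )); do
    echo "${bridge_xa_name_list[i]}"
    # only touch the xb entry if that index actually exists
    if (( i < ${#bridge_xb_name_list[@]} )); then
        echo "${bridge_xb_name_list[i]}"
    fi
done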
$a in my script is iterating all the elements from the 1st array. However, I would like to include the second array as part of the same loop.
I think you mean that you want to execute the loop body once for each element of bridge_xa_name_list and also once, separately, for each element of bridge_xb_name_list, without duplicating the body of the loop. Yes, there are at least two easy ways to do that:
Absolutely easiest would be to just specify the additional elements in the loop header:
for a in "${bridge_xa_name_list[#]}" "${bridge_xb_name_list[#]}"; do
# loop body ...
What you need to understand here is that the for loop syntax has nothing in particular to do with accessing an array. The in list of such a command designates zero or more individual values (shell "words") to iterate over, which in the case of your original code are produced by a parameter expansion involving the array-valued parameter bridge_xa_name_list. But this is just a special case of the shell's general procedure of expanding each command (pathname expansion, parameter expansion, command substitution, etc.) before executing it. You can use that however you like.
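As an illustration of that point (not part of the original answer), the in list can mix literal words, array expansions and command substitutions; the loop only ever sees the resulting words:
# every item below is expanded to zero or more words before the loop starts
for a in literal-word "${bridge_xa_name_list[@]}" $(echo one two); do
    echo "$a"
done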
OR
Make a function around the loop that executes it once for every function argument. Then call that function once for each array:
my_loop() {
for a in "$#"; do
# loop body
done
}
# ...
my_loop "${bridge_xa_name_list[#]}"
my_loop "${bridge_xb_name_list[#]}"
Note that this still exhibits the same expand-then-execute behavior described in the previous item, which is why you have to pass the expansion of each array (to one word per element). There is no direct way to pass the whole array as a single argument.
Note also that the shell supports a special shortcut for iterating over all the elements of "$@". For that particular case, you can omit the in list altogether:
my_loop() {
for a; do
# loop body
done
}
Of course, you can also combine the above, by providing the function and calling it once with the elements of both arrays:
my_loop "${bridge_xa_name_list[#]}" "${bridge_xb_name_list[#]}"

Conditional on non-instantiated variable

I am new to Bash scripting, having a lot more experience with C-type languages. I have written a few scripts with a conditional that checks the value of a non-instantiated variable and if it doesn't exist or match a value sets the variable. On top of that the whole thing is in a for loop. Something like this:
for i in ${!my_array[@]}; do
if [ "${my_array[i]}" = true ]
then
#do something
else
my_array[i]=true;
fi
done
This would fail through a null pointer in Java since my_array[i] is not instantiated until after it is checked. Is this good practice in Bash? My script is working the way I designed, but I have learned that just because a kluge works now doesn't mean it will work in the future.
Thanks!
You will find this page on parameter expansion helpful, as well as this one on conditionals.
An easy way to test a variable is to check it for nonzero length.
if [[ -n "$var" ]]
then : do stuff ...
I also like to make it fatal to access a nonexisting variable; this means extra work, but better safety.
set -u # unset vars are fatal to access without exception handling
if [[ -n "${var:-}" ]] # handles unset during check
then : do stuff ...
By default, referencing undefined (or "unset") variable names in shell scripts just gives the empty string. But there is an exception: if the shell is run with the -u option, or set -u has been run in it, expansions of unset variables are treated as errors and (if the shell is not interactive) cause the shell to exit. Bash applies this principle to array elements as well:
$ array=(zero one two)
$ echo "${array[3]}"
$ echo "array[3] = '${array[3]}'"
array[3] = ''
$ set -u
$ echo "array[3] = '${array[3]}'"
-bash: array[3]: unbound variable
There are also modifiers you can use to control what expansions do if a variable (or array element) is undefined and/or empty (defined as the empty string):
$ array=(zero one '')
$ echo "array[2] is ${array[2]-unset}, array[3] is ${array[3]-unset}"
array[2] is , array[3] is unset
$ echo "array[2] is ${array[2]:-unset or empty}, array[3] is ${array[3]:-unset or empty}"
array[2] is unset or empty, array[3] is unset or empty
There are a bunch of other variants, see the POSIX shell syntax standard, section 2.6.2 (Parameter Expansion).
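For quick reference, the most commonly used variants behave like this (a short illustration, not an exhaustive list):
unset var
echo "${var-fallback}"     # "fallback"  - var is unset
echo "${var:=assigned}"    # "assigned"  - := also assigns the value to var
echo "${var:+alternate}"   # "alternate" - var is now set and non-empty
var=""
echo "${var-fallback}"     # ""          - var is set (to empty), plain - does not trigger
echo "${var:-fallback}"    # "fallback"  - :- triggers on unset or empty
# ${var:?message} would instead abort the (non-interactive) shell with "message"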
BTW, you do need to use curly braces (as I did above) around anything other than a plain variable reference. $name[2] is a reference to the plain variable name (or element 0 if it's an array), followed by the string "[2]"; ${name[2]}, on the other hand, is a reference to element 2 of the array name. Also, you pretty much always want to wrap variable references in double-quotes (or include them in double-quoted strings), to prevent the shell from "helpfully" splitting them into words and/or expanding them into lists of matching files. For example, this test:
if [ $my_array[i] = true ]
is (mostly) equivalent to:
if [ ${my_array[0]}[i] = true ]
...which isn't what you want at all. But this one:
if [ ${my_array[i]} = true ]
still doesn't work, because if my_array[i] is unset (or empty) it'll expand to the equivalent of:
if [ = true ]
...which is bad test expression syntax. You want this:
if [ "${my_array[i]}" = true ]

Accessing function-definition-time, not evaluation-time, value for a variable in bash

I hope that I can do something like this, and the output would be "hello"
#!/bin/bash
foo="hello"
dummy() {
local local_foo=`echo $foo`
echo $local_foo
}
foo=''
dummy
What I mean is that I would like to capture the value of some global variables at definition time. This would typically be used via source blablabla.bash, where I would like the sourced file to define a function that captures the current value of a variable.
The Sane Way
Functions are evaluated when they're run, not when they're defined. Since you want to capture a variable as it exists at definition time, you'll need a separate variable assigned at that time.
foo="hello"
# By convention, global variables prefixed by a function name and double underscore are for
# the exclusive use of that function.
readonly dummy__foo="$foo" # capture foo as of dummy definition time, and prevent changes
dummy() {
local local_foo=$dummy__foo # ...and refer to that captured copy
echo "$local_foo"
}
foo=""
dummy
The Insane Way
If you're willing to commit crimes against humanity, however, it is possible to do code generation to capture a value. For instance:
# usage: with_locals functionname k1=v1 [k2=v2 [...]]
with_locals() {
local func_name func_text assignments
func_name=$1; shift || return ## fail if out of arguments
(( $# )) || return ## noop if not given at least one assignment
func_text=$(declare -f "$func_name")
for arg; do
if [[ $arg = *=* ]]; then ## if we already look like an assignment, leave be
printf -v arg_q 'local %q; ' "$arg"
else ## otherwise, assume we're a bare name and run a lookup
printf -v arg_q 'local %q=%q; ' "$arg" "${!arg}"
fi
assignments+="$arg_q"
done
# suffix first instance of { in the function definition with our assignments
eval "${func_text/{/{ $assignments}"
}
...thereafter:
foo=hello
dummy() {
local local_foo="$foo"
echo "$local_foo"
}
with_locals dummy foo ## redefine dummy to always use the current value of "foo"
foo=''
dummy
Well, you can comment out or remove the foo='' line, and that will do it. The function dummy does not execute until you call it, which is after you've blanked out the foo value, so it makes sense that you would get a blank line echoed. Hope this helps.
There is no way to execute the code inside a function until that function is called by bash. The only alternative is to have some other function define the function you want to call afterwards.
That is what a dynamic function definition is.
I don't believe that you want that.
An alternative is to store the value of foo (calling the function) and then calling it again after the value has changed. Something hack-sh like this:
#!/bin/bash
foo="hello"
dummy() {
${global_foo+false} &&
global_foo="$foo" ||
echo "old_foo=$global_foo new_foo=$foo"
}
dummy
foo='new'
dummy
foo="a whole new foo"
dummy
Calling it will print:
$ ./script
old_foo=hello new_foo=new
old_foo=hello new_foo=a whole new foo
As I am not sure this addresses your real problem: hope this helps.
Inspired by @CharlesDuffy, I think using eval might solve some of the problems, and the example can be modified as follows:
#!/bin/bash
foo="hello"
eval "
dummy() {
local local_foo=$foo
echo \$local_foo
}
"
foo=''
dummy
Which will give the result 'hello' instead of nothing.
@CharlesDuffy pointed out that such a solution is quite dangerous:
local local_foo=$foo is dangerously buggy: If your foo value contains
an expansion such as $(rm -rf $HOME), it'll be executed
Using eval performs well, but it is bad for security, so I would suggest @CharlesDuffy's answer instead.
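If you do go the eval route anyway, one way to blunt the injection problem @CharlesDuffy describes (my own sketch, not from either answer) is to pre-quote the captured value with printf %q before pasting it into the function body:
#!/bin/bash
foo='hello $(date)'          # value containing an expansion, for demonstration

printf -v foo_q '%q' "$foo"  # foo_q is a shell-quoted version of the current value

eval "
dummy() {
    local local_foo=$foo_q   # pasted in as a quoted literal, never re-expanded
    echo \"\$local_foo\"
}
"

foo=''
dummy                        # prints: hello $(date)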

Get length of an empty or unset array when “nounset” option is in effect

Due to the fact that Bash, when running in set -o nounset mode (aka set -u), may consider empty arrays as unset regardless of whether they have actually been assigned an empty value, care must be taken when attempting to expand an array — one of the workarounds is to check whether the array length is zero. Not to mention that getting the number of elements in an array is a common operation by itself.
While developing with Bash 4.2.47(1)-release on openSUSE 42.1, I got used to the fact that getting the array size with ${#ARRAY_NAME[@]} succeeds when the array is either empty or unset. However, while checking my script with Bash 4.3.46(1)-release on FreeBSD 10.3, it turned out that this operation may fail with the generic "unbound variable" error message. Providing a default value for the expansion does not seem to work for the array length. Providing alternative command chains seems to work, but not inside a function called through command substitution: the function just exits after the first failure. What else can be of any help here?
Consider the following example:
function Size ()
{
declare VAR="$1"
declare REF="\${#${VAR}[@]}"
eval "echo \"${REF}\" || echo 0" 2>/dev/null || echo 0
}
set -u
declare -a MYARRAY
echo "size: ${#MYARRAY[#]}"
echo "size: ${#MYARRAY[#]-0}"
echo "Size: $(Size 'MYARRAY')"
echo -n "Size: "; Size 'MYARRAY'
In openSUSE environment, all echo lines output 0, as expected. In FreeBSD, the same outcome is only possible when the array is explicitly assigned an empty value: MYARRAY=(); otherwise, both inline queries in the first two lines fail, the third line just outputs Size: (meaning that the expansion result is empty), and only the last line succeeds completely thanks to the outer || echo 0 — however passing the result through to the screen is not what is usually intended when trying to obtain array length.
Here is the summary of my observations:
                                        Bash 4.2    Bash 4.3
                                        openSUSE    FreeBSD
counting elements of unset array        OK          FAILED
counting elements of empty array        OK          OK
content expansion of unset array        FAILED      FAILED
content expansion of unset array (*)    OK          OK
content expansion of empty array        FAILED      FAILED
content expansion of empty array (*)    OK          OK

(* with fallback value supplied)
To me, that looks pretty inconsistent. Is there any real future-proof and cross-platform solution for that?
There are known (documented) differences between the Linux and BSD flavors of bash. I would suggest writing your code as per the POSIX standard. You can start here for more information -> www2.opengroup.org.
With that in mind, you can start bash with the --posix command-line option or you can execute the command set -o posix while bash is running. Either will cause bash to conform to the POSIX standard.
The above suggestion will increase the probability of cross-platform consistency.
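For example, either of these enables POSIX mode (shown here only as a usage illustration; note that it changes a number of bash behaviours, not just this one):
#!/bin/bash --posix
# ...or, from within an already-running script:
set -o posix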
As a temporary solution, I followed the route suggested by @william-pursell and just unset the nounset option during the query:
function GetArrayLength ()
{
declare ARRAY_NAME="$1"
declare INDIRECT_REFERENCE="\${#${ARRAY_NAME}[@]}"
case "$-" in
*'u'*)
set +u
eval "echo \"${INDIRECT_REFERENCE}\""
set -u
;;
*)
eval "echo \"${INDIRECT_REFERENCE}\""
;;
esac
}
(Using if instead of case leads to negligibly slower execution on my test machines. Moreover, case allows matching additional options easily if that would become necessary sometime.)
I also tried exploiting the fact that content expansion (with fallback or replacement value) usually succeeds even for unset arrays:
function GetArrayLength ()
{
declare ARRAY_NAME="$1"
declare INDIRECT_REFERENCE="${ARRAY_NAME}[@]"
if [[ -z "${!INDIRECT_REFERENCE+isset}" ]]; then
echo 0
else
INDIRECT_REFERENCE="\${#${ARRAY_NAME}[@]}"
eval "echo \"${INDIRECT_REFERENCE}\""
fi
}
However, it turns out that Bash does not optimize the ${a[@]+b} expansion, as execution time clearly increases for larger arrays, although it is the smallest for empty or unset arrays.
Nevertheless, if anyone has a better solution, feel free to post other answers.

Having problems calling function within bash script

I've been working on our intro scripting assignment, and am having issues calling functions within the script. I am in the second portion of the assignment, and I am just testing to make sure what I have is (hopefully) going to work. I have gathered some directories, and ask a yes or no question. When I get a 'y', I wrote a little function that I call, and when I get a 'n' I have another function, both simple echoes. What is the issue?
part_two(){
answer=""
for value in "$#";do
echo "$value"
while [ "$answer" != "y" -a "$answer" != "n" ]
do
echo -n "Would you like to save the results to a file? (y/n): "
read answer
done
if [ "$answer" = "n" ]
then
part_six
elif [ "$answer" = "y" ]
then
part_five
fi
done
}
part_two $@
part_five(){
echo -n "working yes";
}
part_six(){
echo -n "working no";
}
Any help would be greatly appreciated, as always.
Much like in C, a function must be defined before it is used. In your code snippet you are calling part_two (which calls part_five and part_six) before declaring those two functions.
Have you tried moving their definitions to the start of the script?
EDIT:
In most cases, the best way to deal with this in Bash is to simply define all functions at the start of the script before executing any actual commands. The order of the definitions does not really matter - the shell only looks up a function when it's about to use it - so generally there are no dependency issues etc. that you may have to think about.
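Applied to the snippet in the question, that would look something like this (a sketch; the function bodies are unchanged from the question):
#!/bin/bash

# define everything first...
part_five() {
    echo -n "working yes"
}

part_six() {
    echo -n "working no"
}

part_two() {
    # ...body exactly as in the question...
    :
}

# ...only then start executing commands
part_two "$@"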
EDIT 2:
There are cases where you may not be able to just define a function at the start of the script. A common case is when you use conditional constructs to dynamically select or modify the declaration of a function, e.g.:
if [[ "$1" = 0 ]]; then
function show() {
echo Zero
}
else
function show() {
echo Not-zero
}
fi
In these cases you have to make sure that each function call happens after that function (and any others that it calls) is declared.
EDIT 3:
In bash a function declaration is actually the function foo() { ... } block where you define its implementation - and yes, the function keyword is not strictly necessary. There are no function prototypes as in C - they would not make sense anyway because shell scripts are generally parsed as they are executed. Newer Bash versions do read a script at once, but they mostly check for syntax errors and not for logical errors such as this one.
BTW the official term is "function declaration", but even the Bash info page uses "declaration" and "definition" interchangeably.
