I am confused about a bash script.
I have the following code:
function grep_search() {
    magic_way_to_define_magic_variable_$1=`ls | tail -1`
    echo $magic_variable_$1
}
I want to be able to create a variable name containing the first argument of the command and bearing the value of e.g. the last line of ls.
So to illustrate what I want:
$ ls | tail -1
stack-overflow.txt
$ grep_search open_box
stack-overflow.txt
So, how should I define/declare $magic_way_to_define_magic_variable_$1 and how should I call it within the script?
I have tried eval, ${...}, \$${...}, but I am still confused.
I've been looking for a better way of doing this recently. An associative array sounded like overkill to me. Look what I found:
suffix=bzz
declare prefix_$suffix=mystr
...and then...
varname=prefix_$suffix
echo ${!varname}
From the docs:
The ‘$’ character introduces parameter expansion, command substitution, or arithmetic expansion. ...
The basic form of parameter expansion is ${parameter}. The value of parameter is substituted. ...
If the first character of parameter is an exclamation point (!), and parameter is not a nameref, it introduces a level of indirection. Bash uses the value formed by expanding the rest of parameter as the new parameter; this is then expanded and that value is used in the rest of the expansion, rather than the expansion of the original parameter. This is known as indirect expansion. The value is subject to tilde expansion, parameter expansion, command substitution, and arithmetic expansion. ...
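Applied to the grep_search function from the question, this technique might look like the following sketch (note that declare inside a function makes the variable local to that function):
function grep_search() {
    # create a variable named magic_variable_<first argument>
    declare "magic_variable_$1=$(ls | tail -1)"
    # read it back through indirect expansion
    varname="magic_variable_$1"
    echo "${!varname}"
}

grep_search open_box    # prints the last entry of "ls", e.g. stack-overflow.txt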
Use an associative array, with command names as keys.
# Requires bash 4, though
declare -A magic_variable=()

function grep_search() {
    magic_variable[$1]=$(ls | tail -1)
    echo "${magic_variable[$1]}"
}
If you can't use associative arrays (e.g., you must support bash 3), you can use declare to create dynamic variable names:
declare "magic_variable_$1=$(ls | tail -1)"
and use indirect parameter expansion to access the value.
var="magic_variable_$1"
echo "${!var}"
See BashFAQ: Indirection - Evaluating indirect/reference variables.
Beyond associative arrays, there are several ways of achieving dynamic variables in Bash. Note that all these techniques present risks, which are discussed at the end of this answer.
In the following examples I will assume that i=37 and that you want to alias the variable named var_37 whose initial value is lolilol.
Method 1. Using a “pointer” variable
You can simply store the name of the variable in an indirection variable, not unlike a C pointer. Bash then has a syntax for reading the aliased variable: ${!name} expands to the value of the variable whose name is the value of the variable name. You can think of it as a two-stage expansion: ${!name} expands to $var_37, which expands to lolilol.
name="var_$i"
echo "$name" # outputs “var_37”
echo "${!name}" # outputs “lolilol”
echo "${!name%lol}" # outputs “loli”
# etc.
Unfortunately, there is no counterpart syntax for modifying the aliased variable. Instead, you can achieve assignment with one of the following tricks.
1a. Assigning with eval
eval is evil, but is also the simplest and most portable way of achieving our goal. You have to carefully escape the right-hand side of the assignment, as it will be evaluated twice. An easy and systematic way of doing this is to evaluate the right-hand side beforehand (or to use printf %q).
And you should check manually that the left-hand side is a valid variable name, or a name with index (what if it was evil_code # ?). By contrast, all other methods below enforce it automatically.
# check that name is a valid variable name:
# note: this code does not support variable_name[index]
shopt -s globasciiranges
[[ "$name" == [a-zA-Z_]*([a-zA-Z_0-9]) ]] || exit
value='babibab'
eval "$name"='$value' # carefully escape the right-hand side!
echo "$var_37" # outputs “babibab”
Downsides:
does not check the validity of the variable name.
eval is evil.
eval is evil.
eval is evil.
1b. Assigning with read
The read builtin lets you assign a value to a variable whose name you give, a fact which can be exploited in conjunction with here-strings:
IFS= read -r -d '' "$name" <<< 'babibab'
echo "$var_37" # outputs “babibab\n”
The IFS part and the -r option make sure that the value is assigned as-is, while the -d '' option allows assigning multi-line values. Because of this last option, the command returns with a non-zero exit code.
Note that, since we are using a here-string, a newline character is appended to the value.
Downsides:
somewhat obscure;
returns with a non-zero exit code;
appends a newline to the value.
1c. Assigning with printf
Since Bash 3.1 (released 2005), the printf builtin can also assign its result to a variable whose name is given. By contrast with the previous solutions, it just works; no extra effort is needed to escape things, prevent splitting and so on.
printf -v "$name" '%s' 'babibab'
echo "$var_37" # outputs “babibab”
Downsides:
Less portable (but, well).
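As an aside, printf -v also accepts a subscripted name (array-subscript support for printf -v needs bash 4.1 or later); a small sketch:
i=37
declare -a arr=()
printf -v "arr[$i]" '%s' 'babibab'
echo "${arr[37]}"    # outputs "babibab"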
Method 2. Using a “reference” variable
Since Bash 4.3 (released 2014), the declare builtin has an option -n for creating a variable which is a “name reference” to another variable, much like C++ references. Just as in Method 1, the reference stores the name of the aliased variable, but each time the reference is accessed (either for reading or assigning), Bash automatically resolves the indirection.
In addition, Bash has a special and very confusing syntax for getting the value of the reference itself; judge for yourself: ${!ref}.
declare -n ref="var_$i"
echo "${!ref}" # outputs “var_37”
echo "$ref" # outputs “lolilol”
ref='babibab'
echo "$var_37" # outputs “babibab”
This does not avoid the pitfalls explained below, but at least it makes the syntax straightforward.
Downsides:
Not portable.
Risks
All these aliasing techniques present several risks. The first one is executing arbitrary code each time you resolve the indirection (either for reading or for assigning). Indeed, instead of a scalar variable name, like var_37, you may as well alias an array subscript, like arr[42]. But Bash evaluates the contents of the square brackets each time it is needed, so aliasing arr[$(do_evil)] will have unexpected effects… As a consequence, only use these techniques when you control the provenance of the alias.
function guillemots {
declare -n var="$1"
var="«${var}»"
}
arr=( aaa bbb ccc )
guillemots 'arr[1]' # modifies the second cell of the array, as expected
guillemots 'arr[$(date>>date.out)1]' # writes twice into date.out
# (once when expanding var, once when assigning to it)
The second risk is creating a cyclic alias. As Bash variables are identified by their name and not by their scope, you may inadvertently create an alias to itself (while thinking it would alias a variable from an enclosing scope). This may happen in particular when using common variable names (like var). As a consequence, only use these techniques when you control the name of the aliased variable.
function guillemots {
# var is intended to be local to the function,
# aliasing a variable which comes from outside
declare -n var="$1"
var="«${var}»"
}
var='lolilol'
guillemots var # Bash warnings: “var: circular name reference”
echo "$var" # outputs anything!
Source:
BashFaq/006: How can I use variable variables (indirect variables, pointers, references) or associative arrays?
BashFAQ/048: eval command and security issues
The example below returns the value of $name_of_var:
var=name_of_var
echo $(eval echo "\$$var")
Use declare
There is no need to use prefixes as in the other answers, nor arrays. Just use declare, double quotes, and parameter expansion.
I often use the following trick to parse argument lists containing one to n arguments formatted as key=value otherkey=othervalue etc=etc, like:
# brace expansion just to exemplify
for variable in {one=foo,two=bar,ninja=tip}
do
declare "${variable%=*}=${variable#*=}"
done
echo $one $two $ninja
# foo bar tip
The same can be done by expanding the argv list, like:
for v in "$@"; do declare "${v%=*}=${v#*=}"; done
Extra tips
# parse argv's leading key=value parameters
for v in "$#"; do
case "$v" in ?*=?*) declare "${v%=*}=${v#*=}";; *) break;; esac
done
# consume argv's leading key=value parameters
while test $# -gt 0; do
case "$1" in ?*=?*) declare "${1%=*}=${1#*=}";; *) break;; esac
shift
done
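For example, a script using the "consume" variant above could be invoked with leading key=value arguments (the script name and keys here are made up for illustration):
#!/bin/bash
# consume leading key=value arguments, then treat whatever is left as positional
while test $# -gt 0; do
  case "$1" in ?*=?*) declare "${1%=*}=${1#*=}";; *) break;; esac
  shift
done
echo "user=$user host=$host remaining: $*"

# ./example.sh user=alice host=example.com file1 file2
# prints: user=alice host=example.com remaining: file1 file2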
Combining two highly rated answers here into a complete example that is hopefully useful and self-explanatory:
#!/bin/bash
intro="You know what,"
pet1="cat"
pet2="chicken"
pet3="cow"
pet4="dog"
pet5="pig"
# Setting and reading dynamic variables
for i in {1..5}; do
pet="pet$i"
declare "sentence$i=$intro I have a pet ${!pet} at home"
done
# Just reading dynamic variables
for i in {1..5}; do
sentence="sentence$i"
echo "${!sentence}"
done
echo
echo "Again, but reading regular variables:"
echo $sentence1
echo $sentence2
echo $sentence3
echo $sentence4
echo $sentence5
Output:
You know what, I have a pet cat at home
You know what, I have a pet chicken at home
You know what, I have a pet cow at home
You know what, I have a pet dog at home
You know what, I have a pet pig at home
Again, but reading regular variables:
You know what, I have a pet cat at home
You know what, I have a pet chicken at home
You know what, I have a pet cow at home
You know what, I have a pet dog at home
You know what, I have a pet pig at home
This will work too
my_country_code="green"
x="country"
eval z='$'my_"$x"_code
echo $z ## o/p: green
In your case
eval final_val='$'magic_way_to_define_magic_variable_"$1"
echo $final_val
This should work:
function grep_search() {
declare magic_variable_$1="$(ls | tail -1)"
echo "$(tmpvar=magic_variable_$1 && echo ${!tmpvar})"
}
grep_search var # calling grep_search with argument "var"
An extra method that doesn't rely on which shell/bash version you have is to use envsubst. For example:
newvar=$(echo '$magic_variable_'"${dynamic_part}" | envsubst)
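Note that envsubst is an external tool (from the gettext package) and substitutes only environment variables, so the dynamically named variable has to be exported first. A minimal sketch (the variable names are illustrative):
export magic_variable_foo="stack-overflow.txt"   # must be exported for envsubst to see it
dynamic_part=foo
newvar=$(echo '$magic_variable_'"${dynamic_part}" | envsubst)
echo "$newvar"   # prints stack-overflow.txt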
For zsh (the default shell on newer macOS versions), you should use
real_var="holaaaa"
aux_var="real_var"
echo ${(P)aux_var}
holaaaa
Instead of "!"
As per BashFAQ/006, you can use read with here string syntax for assigning indirect variables:
function grep_search() {
read "$1" <<<$(ls | tail -1);
}
Usage:
$ grep_search open_box
$ echo $open_box
stack-overflow.txt
Even though it's an old question, I still had a hard time fetching dynamic variable names while avoiding the eval (evil) command.
I solved it with declare -n, which creates a reference to a dynamically named variable. This is especially useful in CI/CD processes, where the required secret names of the CI/CD service are not known until runtime. Here's how:
# Bash v4.3+
# -----------------------------------------------------------
# Secrets in CI/CD service, injected as environment variables
# AWS_ACCESS_KEY_ID_DEV, AWS_SECRET_ACCESS_KEY_DEV
# AWS_ACCESS_KEY_ID_STG, AWS_SECRET_ACCESS_KEY_STG
# -----------------------------------------------------------
# Environment variables injected by CI/CD service
# BRANCH_NAME="DEV"
# -----------------------------------------------------------
declare -n _AWS_ACCESS_KEY_ID_REF=AWS_ACCESS_KEY_ID_${BRANCH_NAME}
declare -n _AWS_SECRET_ACCESS_KEY_REF=AWS_SECRET_ACCESS_KEY_${BRANCH_NAME}
export AWS_ACCESS_KEY_ID=${_AWS_ACCESS_KEY_ID_REF}
export AWS_SECRET_ACCESS_KEY=${_AWS_SECRET_ACCESS_KEY_REF}
echo $AWS_ACCESS_KEY_ID $AWS_SECRET_ACCESS_KEY
aws s3 ls
Wow, most of the syntax is horrible! Here is one solution with some simpler syntax if you need to indirectly reference arrays:
#!/bin/bash
foo_1=(fff ddd)
foo_2=(ggg ccc)

for i in 1 2; do
    eval "mine=( \${foo_$i[@]} )"
    echo "${mine[@]} "
done
For simpler use cases I recommend the syntax described in the Advanced Bash-Scripting Guide.
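If you have bash 4.3 or later, a nameref can express the same loop without eval; a minimal sketch based on the arrays above:
foo_1=(fff ddd)
foo_2=(ggg ccc)

for i in 1 2; do
    unset -n mine              # drop any previous nameref before re-pointing it
    declare -n mine="foo_$i"   # "mine" now refers to foo_1, then foo_2
    echo "${mine[@]}"
done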
KISS approach:
a=1
c="bam"
let "$c$a"=4
echo $bam1
results in 4
I want to be able to create a variable name containing the first argument of the command
script.sh file:
#!/usr/bin/env bash
function grep_search() {
eval $1=$(ls | tail -1)
}
Test:
$ source script.sh
$ grep_search open_box
$ echo $open_box
script.sh
As per help eval:
Execute arguments as a shell command.
You may also use Bash ${!var} indirect expansion, as already mentioned, however it doesn't support retrieving array indices.
For further read or examples, check BashFAQ/006 about Indirection.
We are not aware of any trick that can duplicate that functionality in POSIX or Bourne shells without eval, which can be difficult to do securely. So, consider this a use-at-your-own-risk hack.
However, you should re-consider using indirection as per the following notes.
Normally, in bash scripting, you won't need indirect references at all. Generally, people look at this for a solution when they don't understand or know about Bash Arrays or haven't fully considered other Bash features such as functions.
Putting variable names or any other bash syntax inside parameters is frequently done incorrectly and in inappropriate situations to solve problems that have better solutions. It violates the separation between code and data, and as such puts you on a slippery slope toward bugs and security issues. Indirection can make your code less transparent and harder to follow.
For indexed arrays, you can reference them like so:
foo=(a b c)
bar=(d e f)
for arr_var in 'foo' 'bar'; do
    declare -a 'arr=("${'"$arr_var"'[@]}")'
    # do something with $arr
    echo "\$$arr_var contains:"
    for char in "${arr[@]}"; do
        echo "$char"
    done
done
Associative arrays can be referenced similarly but need the -A switch on declare instead of -a.
POSIX compliant answer
For this solution you'll need to have r/w permissions to the /tmp folder.
We create a temporary file holding our variables and leverage the -a flag of the set built-in:
$ man set
...
-a Each variable or function that is created or modified is given the export attribute and marked for export to the environment of subsequent commands.
Therefore, if we create a file holding our dynamic variables, we can use set to bring them to life inside our script.
The implementation
#!/bin/sh
# Give the temp file a unique name so you don't mess with any other files in there
ENV_FILE="/tmp/$(date +%s)"
MY_KEY=foo
MY_VALUE=bar
echo "$MY_KEY=$MY_VALUE" >> "$ENV_FILE"
# Now that our env file is created and populated, we can use "set"
set -a; . "$ENV_FILE"; set +a
rm "$ENV_FILE"
echo "$foo"
# Output is "bar" (without quotes)
Explaining the steps above:
# Enables the -a behavior
set -a
# Sources the env file
. "$ENV_FILE"
# Disables the -a behavior
set +a
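Applied to the question's dynamically named variable, the same idea might look like this sketch (the suffix and file layout are illustrative; it assumes the value contains no spaces or special characters):
#!/bin/sh
ENV_FILE="/tmp/$(date +%s)"
suffix="open_box"    # e.g. taken from a script argument

# write the dynamically named variable into the temp file
echo "magic_variable_${suffix}=$(ls | tail -1)" >> "$ENV_FILE"

set -a; . "$ENV_FILE"; set +a
rm "$ENV_FILE"

echo "$magic_variable_open_box"    # the last entry of "ls"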
While I think declare -n is still the best way to do it, there is another way nobody has mentioned, very useful in CI/CD:
function dynamic(){
export a_$1="bla"
}
dynamic 2
echo $a_2
This function does not support spaces, so dynamic "2 3" will return an error.
For the varname=$prefix_suffix case (building a name from a variable prefix and a literal suffix), just use:
varname=${prefix}_suffix
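A minimal illustration of why the braces matter:
prefix=magic
varname=$prefix_suffix     # wrong: expands a variable literally named "prefix_suffix"
echo "$varname"            # prints an empty line

varname=${prefix}_suffix   # right: expands $prefix, then appends the literal "_suffix"
echo "$varname"            # prints "magic_suffix"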
Let's say I have these variables:
VAR_A=resultA
VAR_B=resultB
X=A
I want to get the value of VAR_A or VAR_B based on the value of X.
This is working and gives resultA:
VAR="VAR_$X"
RESULT=${!VAR}
My question is, is there a one-liner for this?
Because indirect expansion doesn't seem to work unless the whole name of the variable is what gets expanded. I tried:
RESULT=${!VAR_$X}
RESULT=${!"VAR_$X"}
...and a lot of other combinations, but it always writes "bad substitution"...
There doesn't appear to be a shorter way when using the bash 2 notation of ${!var}.
For reference, the "new" bash 2 notation is value=${!string_name_var} and the "old" notation would be: eval value=\$$string_name_var.
You could just use the old notation to make it work like you wanted:
eval RESULT=\$"VAR_$X"
Reference: https://www.tldp.org/LDP/abs/html/bashver2.html#BASH2REF
You'd be better off using an associative array, at least on recent versions of Bash:
declare -A var=(
[A]=resultA
[B]=resultB
)
x=A
result="${var[$x]}"
echo "$result" # -> resultA
This should work.
RESULT=$(eval echo \$VAR_$X)
The \ in front of the $ keeps the outer shell from expanding it, so eval sees the literal $ and performs the expansion itself.
I have 2 large arrays with hash values stored in them. I'm trying to find the best way to verify all of the hash values in array_a are also found in array_b. The best I've got so far is
Import the Hash files into an array
Sort each array
For loop through array_a
Inside of array_a's for loop, do another for loop over array_b (seems inefficient).
If found unset value in array_b
Set "found" value to 1 and break loop
If array_a doesn't have a match output to file.
I have large images that I need to verify have been uploaded to the site and the hash values match. I've created a file from the original files and scraped the website ones to create a second list of hash values. I'm trying to keep this as vanilla as possible, so only using typical bash functionality.
#!/bin/bash
array_a=($(< original_sha_values.txt))
array_b=($(< sha_values_after_downloaded.txt))
# Sort to speed up.
IFS=$'\n' array_a_sorted=($(sort <<<"${array_a[*]}"))
unset IFS
IFS=$'\n' array_b_sorted=($(sort <<<"${array_b[*]}"))
unset IFS
for item1 in "${array_a_sorted[#]}" ; do
found=0
for item2 in "${!array_b_sorted[#]}" ; do
if [[ $item1 == ${array_b_sorted[$item2]} ]]; then
unset 'array_b_sorted[item2]'
found=1
break
fi
done
if [[ $found == 0 ]]; then
echo "$item1" >> hash_is_missing_a_match.log
fi
done
Sorting sped it up a lot:
IFS=$'\n' array_a_sorted=($(sort <<<"${array_a[*]}"))
unset IFS
IFS=$'\n' array_b_sorted=($(sort <<<"${array_b[*]}"))
unset IFS
Is this really the best way of doing this?
for item1 in "${array_a_sorted[#]}" ; do
...
for item2 in "${!array_b_sorted[#]}" ; do
if ...
unset 'array_b_sorted[item2]'
break
Both arrays have 12,000 lines of 64bit hashes, taking 20+ minutes to compare. Is there a way to improve the speed?
You're doing it the hard way.
If the task is to find the entries in file1 that are not in file2, here is a shorter approach:
$ comm -23 <(sort f1) <(sort f2)
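With the file names from the question, that might look like this (assuming one hash per line in each file):
# hashes present in the original list but missing from the downloaded list
comm -23 <(sort original_sha_values.txt) <(sort sha_values_after_downloaded.txt) \
    > hash_is_missing_a_match.log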
I think karakfa's answer is probably the best approach if you just want to get it done and not worry about optimizing bash code.
However, if you still want to do it in bash, and you are willing to use some bash-specific features, you could shave off a lot of time using an associative array instead of two regular arrays:
# Read the original hash values into a bash associative array
declare -A original_hashes=()
while read hash; do
original_hashes["$hash"]=1
done < original_sha_values.txt
# Then read the downloaded values and check each one to see if it exists
# in the associative array. Lookup time *should* be O(1)
while read hash; do
if [[ -z "${original_hashes["$hash"]+x}" ]]; then
echo "$hash" >> hash_is_missing_a_match.log
fi
done < sha_values_after_downloaded.txt
This should be a lot faster than the nested loop implementation using regular arrays. Also, I didn't need any sorting, and all of the insertions and lookups on the associative array should be O(1), assuming bash implements associative arrays as hash tables. I couldn't find anything authoritative to back that up though, so take that with a grain of salt. Either way, it should still be faster than the nested loop method.
If you want the output sorted, you can just change the last line to:
done < <(sort sha_values_after_downloaded.txt)
in which case you're still only having to sort one file, not two.
Problem
Sourcing the result of declare -p for a valid Bash associative array in which keys contain square brackets results in a bad array subscript error.
Testing Procedure
Do:
$ declare -A array
$ key='var[0]'
$ array["$key"]=37
$ echo ${array["$key"]}
37
$ declare -p array > def.sh
$ cat def.sh
declare -A array='(["var[0]"]="37" )'
$ . def.sh
bash: [var[0]]=37: bad array subscript
In the above code, note:
I am able to specify a key that contains square brackets: var[0]
The key is quoted for setters and getters
I am able to do an assignment using this key
I am able to get the value from the associative array using this key
Using declare -p I am able to save this definition to a file: def.sh
When sourcing the file def.sh an error is emitted.
My Environment
The version of Bash I'm using is 4.2.46(1)-release (x86_64-redhat-linux-gnu).
I am on a CentOS 7.3.1611 (Core) server
Workarounds
If instead of doing declare -p array > def.sh I do instead:
{
  echo 'declare -A array'
  for Key in "${!array[@]}"; do
    EscapedKey="$(sed 's|"|\\"|g' <<<"$Key")"
    echo "array[\"$EscapedKey\"]=${array["$Key"]}"
  done
} > def.sh
then sourcing the def.sh file works. Note that in the above example, I'm also escaping quote characters that might be a part of the key. I do understand that what I have above is not exhaustive. Because of these complications, I would prefer a solution that doesn't involve such workarounds, if at all possible.
Question
Is there some shopt, set -o <option>, or something else I can do to enable me to persist an associative array whose keys may contain square brackets or other special characters to a file, and later be able to source that file successfully? I need a solution that works in my environment described above.
It's a bug
This is a bug in bash 4.2. It's fixed in 4.3.
I tested this by compiling bash 4.2, 4.2.53 and 4.3 from http://ftp.gnu.org/gnu/bash/ and replicated the steps above. 4.3 behaves like 4.4 - there is no such issue. In bash 4.3 however, declare will print
declare -A array='(["var[0]"]="37" )'
just as 4.2 does. 4.4 does not add the quotes around the right-hand side, instead printing this:
declare -A array=(["var[0]"]="37" )
This makes no difference as far as the testing showed.
There is a possibly related option, complete_fullquote, but it was added in 4.4 so it can't be used as a workaround.
It seems that, short of using a version >= 4.3, this needs to be worked around, and the workaround you used is the most straightforward way of doing it.
A workaround
There is an alternative if you want to avoid the sed calls though (tested using bash 4.2):
function array2file {
# local variable for the keys
declare -a keys
# check if the array exists, to protect against injection
# by passing a crafted string
declare -p "$1" >/dev/null || return 1;
printf "declare -A %s\n" "$1"
# create a string with all the keys so we can iterate
# because we can't use eval at for's declaration
# we do it this way to allow for spaces in the keys, since that's valid
eval "keys=(\"\${!$1[#]}\")"
for k in "${keys[#]}"
do
printf "%s[\"${k//\"/\\\\\"}\"]=" "$1"
# the extra quoting here protects against spaces
# within the element's value - injection doesn't work here
# but we still need to make sure there's consistency
eval "printf \"\\\"%s\\\"\n\" \"\${$1[\"${k//\"/\\\"}\"]}\""
done
}
This will properly add quotes around the key and also escape all double quotes within the key itself. You can place this in a file, which you source. Then use:
array2file array > ./def.sh
where array is whatever array name you've chosen. By redirecting the output you'll get properly quoted keys and you can define your associative array as you did before, then pass it to this function for storage.
Extra credit
If you change the variable provided to the first printf inside the for loop from $1 to ${2:-$1} and do the same at the printf at the top, then you can optionally create the definition of a new array with the 2nd argument as its name, allowing renaming of sorts. This will only happen if you provide 2 strings instead of one (quoted of course). The setup allows for this to be done easily, so I've added it here.
This would let you work around cases where interfacing with existing code can be difficult with a predefined function.
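With that change in place, the optional renaming might be used like this (a sketch; "copy" is an arbitrary new name):
# dump the associative array "array" under the new name "copy"
array2file array copy > ./def.sh
. ./def.sh
# "copy" now holds the same keys and values as "array"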
I'm trying to compare values of two variables both containing strings-as-numbers. For example:
var1="5.4.7.1"
var2="6.2.4.5"
var3="1-4"
var4="1-5"
var5="2.3-3"
var6="2.3.4"
Sadly, I don't even know where to start... Any help will be appreciated!
What I meant is how would I go about comparing the value of $var5 to $var6 and determining which one of them is higher.
EDIT: Better description of the problem.
You can use a [[ ${str1} < ${str2} ]]-style test. This should work:
function max()
{
[[ "$1" < "$2" ]] && echo $2 || echo $1
}
max=$(max ${var5} ${var6})
echo "max=${max}."
It depends on the required portability of the solution. If you don't care about that and you use a deb-based distribution, you can use the dpkg --compare-versions feature.
However, if you need to run your script on distros without dpkg I would use following approach.
The values you need to compare consist of a first (the first element) and a rest (all the others). The first is usually called the head and the rest the tail, but I deliberately use the names first and rest, so as not to confuse them with the head(1) and tail(1) tools available on Unix systems.
In case first($var1) is not equal to first($var2), you just compare those first elements. If the firsts are equal, recursively run the compare function on rest($var1) and rest($var2). As a border case you need to decide what to do when the values are like:
var1 = "2.3.4"
var2 = "2.3"
and at some step you will compare an empty first with a non-empty one.
Hint for implementing first and rest functions:
foo="2.3-4.5"
echo ${foo%%[^0-9][0-9]*}
echo ${foo#[0-9]*[^0-9]}
If those are unclear to you, read the man bash section titled Parameter Expansion. Searching the manual for the ## string will show you the exact section immediately.
Also, make sure, you are comparing elements numerically not in lexical order. For example compare the result of following commands:
[[ 9 > 10 ]]; echo $?
[[ 9 -gt 10 ]]; echo $?
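Putting the first/rest hints together, a recursive comparison might look like this sketch (the function name and the -1/0/1 output convention are illustrative, and a missing component is treated as lower):
# compare two version strings component by component (components are runs of digits)
# prints -1, 0 or 1 depending on whether $1 is lower than, equal to or higher than $2
compare_versions() {
    local a=$1 b=$2
    [[ -z $a && -z $b ]] && { echo 0; return; }         # both exhausted: equal
    local first_a=${a%%[^0-9]*} first_b=${b%%[^0-9]*}   # leading digit runs
    local rest_a=${a#"$first_a"} rest_b=${b#"$first_b"}
    rest_a=${rest_a#[^0-9]} rest_b=${rest_b#[^0-9]}     # drop one separator, if any
    if   (( ${first_a:-0} < ${first_b:-0} )); then echo -1
    elif (( ${first_a:-0} > ${first_b:-0} )); then echo 1
    else compare_versions "$rest_a" "$rest_b"
    fi
}

compare_versions "2.3-3" "2.3.4"       # prints -1 (the second is higher)
compare_versions "5.4.7.1" "6.2.4.5"   # prints -1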