bash: pass associative array as a parameter to another script

I want to be able to do:
Script1.sh
declare -A map=(
['A']=1
['B']=2
['C']=3
['D']=4
)
sh script2.sh ???
Script2.sh
params = ...
echo ${params['A']}
i.e., access parameters by keys. I have seen related questions for normal arrays, and the answer to them has been to pass the array as:
sh script2.sh "${AR[@]}"
which I believe translates to:
sh script2.sh "${map[0]}" "${map[1]}" "${map[2]}"
But with that, I can only access the elements based on their order.
Is there a clever trick to achieve what I want? Perhaps something that passes on "A=1" "B=2" "C=3" "D=4" instead and has script2.sh parse them? Or is there a neater solution?

If you are only calling script2.sh from inside script1.sh, then all you need to do (as @markp-fuso pointed out) is source script2.sh and it will run in the current context with all the data already loaded.
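For example, a minimal sketch of the sourcing route (file names taken from the question):
# script1.sh
declare -A map=( ['A']=1 ['B']=2 ['C']=3 ['D']=4 )
. ./script2.sh              # runs script2.sh in this shell, so map is still in scope
# script2.sh
echo "${map['A']}"          # prints 1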
If you really want it to be on the command line, then pass it as key=val and have your code in script2.sh check each of its args for that format and set them in an associative array.
declare -A map=()
for arg in "$@"
do if [[ "$arg" =~ ^[A-Z]=[0-9]$ ]] # more complex k/v will get ugly
then map[${arg/=?}]=${arg/?=} # as will the assignment w/o eval
fi
done
# And finally, just to see what got loaded -
declare -p map
$: script2.sh C=3 A=1
declare -A map=([A]="1" [C]="3" )
As mentioned above, a more complicated possible set of key names and/or values will require a suitably more complex test as well as assignment logic. Clearly, for anything but the simplest cases, this is going to quickly get problematic.
Even better, set up a full getopts loop, and pass your args with proper indicators. This takes more design and more implementation, but that's what it takes to get more functionality.
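A hedged sketch of that getopts route (the -k flag and the KEY=VALUE format are invented for illustration):
# script2.sh
declare -A map=()
while getopts 'k:' opt; do                  # each -k expects a KEY=VALUE argument
    case $opt in
        k) map[${OPTARG%%=*}]=${OPTARG#*=} ;;
        *) echo "usage: $0 [-k KEY=VALUE ...]" >&2; exit 1 ;;
    esac
done
declare -p map
# invoked as: ./script2.sh -k A=1 -k B=2 -k C=3 -k D=4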

Assumptions:
the array is the only item being passed to script2 (this could be relaxed but would likely require adding some option flag processing to script2)
the array name will always be map (could probably make this dynamic but that's for another day)
the array indices and values do not contain any special/control characters (eg, line feeds) otherwise passing the array structure on the command line to script2 gets mucho complicated really quick (there are likely some workarounds for this scenario, too)
Some basic components:
The array named map:
$ declare -A map=(
['A']=1
['B']=2
['C']=3
['D']=4
)
Use typeset to generate a command to (re)produce the contents of array map:
$ typeset -p map
declare -A map=([A]="1" [B]="2" [C]="3" [D]="4" )
From here we can pass the typeset output to script2 and then have script2 evaluate the input, eg:
$ cat script1
echo "in script1"
declare -A map=(
['A']=1
['B']=2
['C']=3
['D']=4
)
./script2 "$(typeset -p map)"
$ cat script2
echo "in script2"
echo " \$@ = $@"
eval "$@"
for i in "${!map[@]}"
do
echo "${i} : ${map[${i}]}"
done
Running script1 generates:
$ ./script1
in script1
in script2
$@ = declare -A map=([A]="1" [B]="2" [C]="3" [D]="4" )
A : 1
B : 2
C : 3
D : 4
I know, I know, I know ... eval == evil. I'll have to think about a replacement for eval ... also open to suggestions.
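One possible eval-free variation (a sketch under the same no-newlines assumption, not part of the original answer): stream the keys and values on separate lines and rebuild the array with read.
# script1
for k in "${!map[@]}"; do printf '%s\n%s\n' "$k" "${map[$k]}"; done | ./script2
# script2
declare -A map=()
while IFS= read -r key && IFS= read -r val; do map[$key]=$val; done
declare -p map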

Related

Appending command line arguments to a Bash array

I am trying to write a Bash script that appends a string to a Bash array, where the string contains the path to a Python script together with the arguments passed into the Bash script, enclosed in double quotes.
If I call the script using ./script.sh -o "a b", I would like a CMD_COUNT of 1, but I am getting 2 instead.
script.sh:
#!/bin/bash
declare -a COMMANDS=()
COMMANDS+=("/path/to/myscript.py \"${@}\"")
CMD_COUNT=${#COMMANDS[*]}
echo $CMD_COUNT
How can I ensure that the appended string is /path/to/myscript.py "-o" "a b"?
EDIT: The full script is actually like this:
script.sh:
#!/bin/bash
declare -a COMMANDS=()
COMMANDS+=("/path/to/myscript2.py")
COMMANDS+=("/path/to/myscript.py \"${@}\"")
CMD_COUNT=${#COMMANDS[*]}
echo $CMD_COUNT
for i in ${!COMMANDS[*]}
do
echo "${0} - command: ${COMMANDS[${i}]}"
${COMMANDS[${i}]}
done
It's a bad idea, but if it's what you really want, printf %q can be used to generate a string that, when parsed by the shell, will result in a given list of arguments. (The exact escaping might not be identical to what you'd write by hand, but the effect of evaluating it -- using eval -- will be).
#!/bin/bash
declare -a COMMANDS=( )
printf -v command '%q ' "/path/to/myscript" "$@"
COMMANDS+=( "$command" )
CMD_COUNT=${#COMMANDS[@]}
echo "$CMD_COUNT"
...but, as I said, this is all a bad idea.
Best-practice ways to encapsulate code as data in bash involve using functions, or arrays with one element per argument.
eval results in code that's prone to security bugs.
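For instance, a rough sketch of the array-per-argument pattern, reusing the question's placeholder paths:
#!/bin/bash
# Each command is its own array with one element per argument, so "a b"
# survives as a single argument without printf %q or eval.
cmd1=( /path/to/myscript2.py )
cmd2=( /path/to/myscript.py "$@" )
"${cmd1[@]}"       # runs myscript2.py with no arguments
"${cmd2[@]}"       # with ./script.sh -o "a b": runs myscript.py -o "a b"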

Bad Substitution - Variable name inside other variable name - in Bash

I have a problem in one of my scripts, here it is simplified.
I want to name a variable using another variable in it. My script is:
#! /bin/bash
j=1
SAMPLE${j}_CHIP=5
echo ${SAMPLE${j}_CHIP}
This script echoes:
line 3: SAMPLE1_CHIP=5: command not found
line 4: ${SAMPLE${j}_CHIP}: bad substitution
I'm trying to do that in order to name several samples in a while loop changing the "j" parameter.
Anyone knows how to name a variable like that?
It's possible with eval, but don't use dynamic variable names. Arrays are much, much better.
$ j=1
$ SAMPLES[j]=5
$ echo ${SAMPLES[j]}
5
You can initialize an entire array at once like so:
$ SAMPLES=(5 10 15 20)
And you can append with:
$ SAMPLES+=(25 30)
Indices start at 0.
To read the value of the variable, you may use indirection: ${!var}:
#! /bin/bash
j=1
val=get_5
var=SAMPLE${j}_CHIP
declare "$var"="$val"
echo "${!var}"
The trickier part is assigning the value to the dynamically named variable.
I used declare above, and the known options are:
declare "$var"="$val"
printf -v "$var" '%s' "$val"
eval $var'=$val'
export "$var=$val"
The most risky option is to use eval. If the contents of var or val can be set by an external user, you have opened the door to code injection. It may seem safe today, but after someone edits the code for some reason, it may give an attacker a chance to "get in".
Probably the best solution is to avoid all the above.
Associative Array
One alternative is to use Associative Arrays:
#! /bin/bash
j=1
val=get_5
var=SAMPLE${j}_CHIP
declare -A array
array[$var]=$val
echo "${array[$var]}"
Considerably less risky, and you get a similarly named index.
Plain array
But it is clear that the safest solution is to use the simplest of solutions:
#! /bin/bash
j=1
val=get_5
array[j]=$val
echo "${array[j]}"
All done, little risk.
If you really want to use variable variables:
#! /bin/bash
j=1
var="SAMPLE${j}_CHIP"
export ${var}=5
echo "${!var}" # prints 5
However, there are other approaches to solving the parent issue, which are likely less confusing than this approach.
j=1
eval "SAMPLE${j}_CHIP=5"
echo "${SAMPLE1_CHIP}"
Or
j=1
var="SAMPLE${j}_CHIP"
eval "$var=5"
echo "${!var}"
As others said, it's normally not possible. Here is a workaround if you wish. Note that you have to use eval when declaring a nested variable, and ⭗ instead of $ when accessing it (I use ⭗ as a function name, because why not).
#!/bin/bash
function ⭗ {
if [[ ! "$*" = *\{*\}* ]]
then echo $*
else ⭗ $(eval echo $(echo $* | sed -r 's%\{([^\{\}]*)\}%$(echo ${\1})%'))
fi
}
j=1
eval SAMPLE${j}_CHIP=5
echo `⭗ {SAMPLE{j}_CHIP}`
c=CHIP
echo `⭗ {SAMPLE{j}_{c}}`

Pass bash array to stdin

I currently have a bash script like this, which successfully calls the program prog:
#!/bin/bash
var1=hello
var2=world
prog <<EOF
$var1
$var2
EOF
Instead of var1 and var2, how would I pass each element within an array (of unknown length, since I am using $@) to prog in the same manner?
Strictly speaking, you would want something like
for line in "$@"; do
echo "$line"
done | prog
It's not a here document, but it has the same effect. Here documents and arrays were developed for two different use cases.
Even more strictly speaking, $@ is not an array, although it tries very hard to behave like one. :)
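If the loop feels heavy, printf gives the same one-line-per-element stream (assuming prog just reads lines from stdin):
printf '%s\n' "$@" | prog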
You could loop over each element of the array and echo each value into the program:
vars=('foo' 'foo bar' 'bar')
for var in "${vars[@]}"; do echo "$var"; done | prog
FAIRNESS UPDATE: @chepner beat me to this answer by a few seconds :)
As far as I know, you cannot pass variables, but you can pass arguments, so here's a fix:
prog $VAR1 $VAR2 <<EOF
And inside prog you could use:
ARR=("$@")
to save all the positional parameters to the variable ARR.

Expanding variables Bash Scripting

I have two variables in bash that complete the name of another one, and I want to expand it, but I don't know how to do it.
I have:
echo $sala
a
echo $i
10
and I want to expand ${a10} in this form ${$sala$i}, but apparently the {} escape the $ signs.
There are a few ways, with different advantages and disadvantages. The safest way is to save the complete parameter name in a single parameter, and then use indirection to expand it:
tmp="$sala$i" # sets $tmp to 'a10'
echo "${!tmp}" # prints the parameter named by $tmp, namely $a10
A slightly simpler way is a command like this:
eval echo \${$sala$i}
which will run eval with the arguments echo and ${a10}, and therefore run echo ${a10}. This way is less safe in general — its behavior depends a bit more chaotically on the values of the parameters — but it doesn't require a temporary variable.
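A small illustration of that chaotic behavior (the value here is made up): the expansion inside eval is unquoted, so it is subject to word splitting and globbing, while the indirect form is not.
a10='*'
sala=a; i=10
tmp="$sala$i"
echo "${!tmp}"            # prints the literal *
eval echo \${$sala$i}     # re-expands * unquoted, so it globs to the files in the current directory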
Use the eval.
eval "echo \${$sala$i}"
Put the value in another variable.
result=$(eval "echo \${$sala$i}")
The usual answer is eval:
sala=a
i=10
a10=37
eval echo "\$$sala$i"
This echoes 37. You can use "\${$sala$i}" if you prefer.
Beware of eval, especially if you need to preserve spaces in argument lists. It is vastly powerful, and vastly confusing. It will work with old shells as well as Bash, which may or may not be a merit in your eyes.
You can do it via indirection:
$ a10=blah
$ sala=a
$ i=10
$ ref="${sala}${i}"
$ echo $ref
a10
$ echo ${!ref}
blah
However, if you have indexes like that... an array might be more appropriate:
$ declare -a a
$ i=10
$ a[$i]="test"
$ echo ${a[$i]}
test

How to wrap another shell still passing $OPTIND as-is?

I'm trying to wrap a bash script b with a script a.
However I want to pass the options passed to a also to b as they are.
#!/bin/bash
# script a
./b ${@:$OPTIND}
This will also print $1 (if any). What's the simplest way not to?
So calling:
./a -c -d 5 first-arg
I want b to execute:
./b -c -d 5 # WITHOUT first-arg
In bash, you can build an array containing the options, and use that array to call the auxiliary program.
call_b () {
typeset -i i=0
typeset -a a; a=()
while ((++i <= OPTIND)); do # for i=1..$OPTIND
a+=("${!i}") # append parameter $i to $a
done
./b "${a[@]}"
}
call_b "$@"
In any POSIX shell (ash, bash, ksh, zsh under sh or ksh emulation, …), you can build a list with "$1" "$2" … and use eval to set different positional parameters.
call_b () {
i=1
while [ $i -le $OPTIND ]; do
a="$a \"\$$i\""
i=$(($i+1))
done
eval set -- $a
./b "$@"
}
call_b "$@"
As often, this is rather easier in zsh.
./b "${(@)@[1,$OPTIND]}"
Why are you using ${@:$OPTIND} and not just $@ or $*?
The ${parameter:index} syntax says to use index to parse $parameter. If you're using $@, it'll use index as an index into the parameters.
$ set one two three four #Sets "$@"
$ echo $@
one two three four
$ echo ${@:0}
one two three four
$ echo ${@:1}
one two three four
$ echo ${@:2}
two three four
$OPTIND is really only used if you're using getopts. This counts the number of times getopts processes the parameters in $@. According to the bash manpage:
OPTIND is initialized to 1 each time the shell or a shell script is invoked.
Which may explain why you're constantly getting the value of 1.
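A quick illustration of how getopts advances OPTIND (the option letters here are hypothetical):
set -- -a foo -b bar baz
while getopts a:b: opt; do :; done
echo "$OPTIND"            # 5: the first four words were options and their arguments
echo "${@:OPTIND}"        # baz, i.e. only the non-option operands remain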
EDITED IN RESPONSE TO EDITED QUESTION
@David - "./b $@" still prints the arguments passed to a (see Q edit). I want to pass only the options of a and not the args.
So, if I executed:
$ a -a foo -b bar -c fubar barfu barbar
You want to pass to b:
$ b -a foo -b bar -c fubar
but not
$ b -a foo -b bar -c fubar barfu barbar
That's going to be tricky...
Is there a reason why you can't pass the whole line to b and just ignore it?
I believe it might be possible to use regular expressions:
$ echo "-a bar -b foo -c barfoo foobar" | sed 's/\(-[a-z] [^- ][^- ]*\) *\([^-][^-]*\)$/\1/'
-a bar -b foo -c barfoo
I can't vouch that this regular expression will work in all situations (i.e. what if there are no parameters?). Basically, I'm anchoring it to the end of the line, and then matching for the last parameter and argument and the rest of the line. I do a replace with just the last parameter and argument.
I've tested it in a few situations, but you might simply be better off using getopts to capture the arguments and then passing those to b yourself, or simply have b ignore those extra arguments if possible.
In order to separate the command options from the regular arguments, you need to know which options take arguments, and which stand alone.
In the example command:
./a -c -d 5 first-arg
-c and -d might be standalone options and 5 first-arg the regular arguments
5 might be an argument to the -d option (this seems to be what you mean)
-d might be an argument to the -c option and (as in the first case) 5 first-arg the regular arguments.
Here's how I'd handle it, assuming -a, -b, -c and -d are the only options, and that -b and -d are the only ones that take an option argument. Note that it is necessary to parse all of the options in order to figure out where they end.
#!/bin/bash
while getopts ab:cd: OPT; do
case "$OPT" in
a|b|c|d) : ;; # Don't do anything, we're just here for the parsing
?) echo "Usage: $0 [-ac] [-b something] [-d something] [args...]" >&2
exit 1 ;;
esac
done
./b "${@:1:$((OPTIND-1))}"
The entire while loop is there just to compute OPTIND. The ab:cd: in the getopts command defines what options are allowed and which ones take arguments (indicated by colons). The cryptic final expression means "elements 1 through OPTIND-1 of the argument array, passed as separate words".
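Tracing it with the invocation from the question (a worked check, not output from the original answer):
# ./a -c -d 5 first-arg
# getopts ab:cd: consumes -c, then -d 5, and stops at first-arg, leaving OPTIND=4,
# so "${@:1:$((OPTIND-1))}" is "${@:1:3}" and b is invoked as:  ./b -c -d 5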
