zsh vs bash: how do parentheses alter variable assignment behavior?

I am having some trouble understanding how variable assignment and parentheses are handled in the different existing shells.
What is currently puzzling me is the following:
Always using the following command
./script.sh a b c d
when running the following code
#!/bin/zsh
bar=$@
for foo in $bar
do
echo $foo
done
the output is
a b c d
and with
#!/bin/zsh
bar=($@)
for foo in $bar
do
echo $foo
done
it is (what I initially wanted)
a
b
c
d
but using bash or sh
#!/bin/bash
bar=$@
for foo in $bar
do
echo $foo
done
gives
a
b
c
d
and
#!/bin/bash
bar=($@)
for foo in $bar
do
echo $foo
done
it is just
a
What is going on there?

Joint operations
For both of the shells involved, the examples given will assume an explicitly set argv list:
# this sets $1 to "first entry", $2 to "second entry", etc
$ set -- "first entry" "second entry" "third entry"
In both shells, declare -p can be used to emit the value of a variable in unambiguous form, though how each shell represents that form can vary.
In bash
Expansion rules in bash are generally compatible with ksh and, where applicable, POSIX sh semantics. Being compatible with these shells requires that unquoted expansion perform string-splitting and glob expansion (replacing * with a list of files in the current directory, for instance).
Using parenthesis in a variable assignment makes it an array. Compare these three assignments:
# this sets arr_str="first entry second entry third entry"
$ arr_str=$@
$ declare -p arr_str
declare -- arr_str="first entry second entry third entry"
# this sets arr=( first entry second entry third entry )
$ arr=( $@ )
$ declare -p arr
declare -a arr='([0]="first" [1]="entry" [2]="second" [3]="entry" [4]="third" [5]="entry")'
# this sets arr=( "first entry" "second entry" "third entry" )
$ arr=( "$@" )
$ declare -p arr
declare -a arr='([0]="first entry" [1]="second entry" [2]="third entry")'
Similarly, on expansion, quotes and sigils matter:
# quoted expansion, first item only
$ printf '%s\n' "$arr"
first entry
# unquoted expansion, first item only: that item is string-split into two separate args
$ printf '%s\n' $arr
first
entry
# unquoted expansion, all items: each word expanded into its own argument
$ printf '%s\n' ${arr[@]}
first
entry
second
entry
third
entry
# quoted expansion, all items: original arguments all preserved
$ printf '%s\n' "${arr[@]}"
first entry
second entry
third entry
In zsh
zsh does a great deal of magic to try to do what the user means, rather than what's compatible with historical shells (ksh, POSIX sh, etc). However, even there, doing the wrong thing can have results other than what you want:
# Assigning an array to a string still flattens it in zsh
$ arr_str=$@
$ declare -p arr_str
typeset arr_str='first entry second entry third entry'
# ...but quotes aren't needed to keep arguments together on array assignments.
$ arr=( $@ )
$ declare -p arr
typeset -a arr
arr=('first entry' 'second entry' 'third entry')
# in zsh, expanding an array always expands to all entries
$ printf '%s\n' $arr
first entry
second entry
third entry
# ...and unquoted string expansion doesn't do string-splitting by default:
$ printf '%s\n' $arr_str
first entry second entry third entry

When you do this:
bar=($@)
You're actually creating a bash shell array. To iterate a bash array use:
bar=( "$@" ) # safer way to create array
for foo in "${bar[@]}"
do
echo "$foo"
done

Related

On returning array list in shell script, getting 1 as value rather than actual array [duplicate]

I have a function that creates an array and I want to return the array to the caller:
create_array() {
local my_list=("a", "b", "c")
echo "${my_list[@]}"
}
my_algorithm() {
local result=$(create_array)
}
With this, I only get an expanded string. How can I "return" my_list without using anything global?
With Bash version 4.3 and above, you can make use of a nameref so that the caller can pass in the array name and the callee can use a nameref to populate the named array, indirectly.
#!/usr/bin/env bash
create_array() {
local -n arr=$1 # use nameref for indirection
arr=(one "two three" four)
}
use_array() {
local my_array
create_array my_array # call function to populate the array
echo "inside use_array"
declare -p my_array # test the array
}
use_array # call the main function
Produces the output:
inside use_array
declare -a my_array=([0]="one" [1]="two three" [2]="four")
You could make the function update an existing array as well:
update_array() {
local -n arr=$1 # use nameref for indirection
arr+=("two three" four) # update the array
}
use_array() {
local my_array=(one)
update_array my_array # call function to update the array
}
This is a more elegant and efficient approach since we don't need command substitution $() to grab the standard output of the function being called. It also helps if the function needs to return more than one output - we can simply use as many namerefs as there are outputs.
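For instance, here is a minimal sketch (the function and variable names are mine, not from the answer) of one call populating two outputs through two namerefs:
#!/usr/bin/env bash
split_outputs() {
    local -n out_arr=$1      # nameref to the caller's array variable
    local -n out_count=$2    # nameref to the caller's scalar variable
    out_arr=(one "two three" four)
    out_count=${#out_arr[@]}
}
caller() {
    local my_array my_count
    split_outputs my_array my_count
    declare -p my_array my_count   # both were filled in by the callee
}
caller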
Here is what the Bash Manual says about nameref:
A variable can be assigned the nameref attribute using the -n option
to the declare or local builtin commands (see Bash Builtins) to create
a nameref, or a reference to another variable. This allows variables
to be manipulated indirectly. Whenever the nameref variable is
referenced, assigned to, unset, or has its attributes modified (other
than using or changing the nameref attribute itself), the operation is
actually performed on the variable specified by the nameref variable’s
value. A nameref is commonly used within shell functions to refer to a
variable whose name is passed as an argument to the function. For
instance, if a variable name is passed to a shell function as its
first argument, running
declare -n ref=$1 inside the function creates a nameref variable ref
whose value is the variable name passed as the first argument.
References and assignments to ref, and changes to its attributes, are
treated as references, assignments, and attribute modifications to the
variable whose name was passed as $1.
What's wrong with globals?
Returning arrays is really not practical. There are lots of pitfalls.
That said, here's one technique that works if it's OK that the variable have the same name:
$ f () { local a; a=(abc 'def ghi' jkl); declare -p a; }
$ g () { local a; eval $(f); declare -p a; }
$ f; declare -p a; echo; g; declare -p a
declare -a a='([0]="abc" [1]="def ghi" [2]="jkl")'
-bash: declare: a: not found
declare -a a='([0]="abc" [1]="def ghi" [2]="jkl")'
-bash: declare: a: not found
The declare -p commands (except for the one in f()) are used to display the state of the array for demonstration purposes. In f() it's used as the mechanism to return the array.
If you need the array to have a different name, you can do something like this:
$ g () { local b r; r=$(f); r="declare -a b=${r#*=}"; eval "$r"; declare -p a; declare -p b; }
$ f; declare -p a; echo; g; declare -p a
declare -a a='([0]="abc" [1]="def ghi" [2]="jkl")'
-bash: declare: a: not found
-bash: declare: a: not found
declare -a b='([0]="abc" [1]="def ghi" [2]="jkl")'
-bash: declare: a: not found
Bash can't pass around data structures as return values. A return value must be a numeric exit status between 0 and 255. However, you can certainly use command or process substitution to pass commands to an eval statement if you're so inclined.
This is rarely worth the trouble, IMHO. If you must pass data structures around in Bash, use a global variable--that's what they're for. If you don't want to do that for some reason, though, think in terms of positional parameters.
Your example could easily be rewritten to use positional parameters instead of global variables:
use_array () {
for idx in "$@"; do
echo "$idx"
done
}
create_array () {
local array=("a" "b" "c")
use_array "${array[@]}"
}
This all creates a certain amount of unnecessary complexity, though. Bash functions generally work best when you treat them more like procedures with side effects, and call them in sequence.
# Gather values and store them in FOO.
get_values_for_array () { :; }
# Do something with the values in FOO.
process_global_array_variable () { :; }
# Call your functions.
get_values_for_array
process_global_array_variable
If all you're worried about is polluting your global namespace, you can also use the unset builtin to remove a global variable after you're done with it. Using your original example, let my_list be global (by removing the local keyword) and add unset my_list to the end of my_algorithm to clean up after yourself.
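A rough sketch of that cleanup variant, assuming the rest of the OP's code stays the same:
create_array() {
    my_list=("a" "b" "c")        # global now: the 'local' keyword removed
}
my_algorithm() {
    create_array
    printf '%s\n' "${my_list[@]}"
    unset my_list                # clean up the global when done with it
}
my_algorithm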
You were not so far out with your original solution. You had a couple of problems: you used a comma as a separator, and you failed to capture the returned items into a list. Try this:
my_algorithm() {
local result=( $(create_array) )
}
create_array() {
local my_list=("a" "b" "c")
echo "${my_list[@]}"
}
Considering the comments about embedded spaces, a few tweaks using IFS can solve that:
my_algorithm() {
oldIFS="$IFS"
IFS=','
local result=( $(create_array) )
IFS="$oldIFS"
echo "Should be 'c d': ${result[1]}"
}
create_array() {
IFS=','
local my_list=("a b" "c d" "e f")
echo "${my_list[*]}"
}
Use the technique developed by Matt McClure:
http://notes-matthewlmcclure.blogspot.com/2009/12/return-array-from-bash-function-v-2.html
Avoiding global variables means you can use the function in a pipe. Here is an example:
#!/bin/bash
makeJunk()
{
echo 'this is junk'
echo '#more junk and "b@d" characters!'
echo '!#$^%^&(*)_^&% ^$##:"<>?/.,\\"'"'"
}
processJunk()
{
local -a arr=()
# read each input and add it to arr
while read -r line
do
arr+=('"'"$line"'" is junk')
done;
# output the array as a string in the "declare" representation
declare -p arr | sed -e 's/^declare -a [^=]*=//'
}
# processJunk returns the array in a flattened string ready for "declare"
# Note that because of the pipe processJunk cannot return anything using
# a global variable
returned_string="$(makeJunk | processJunk)"
# convert the returned string to an array named returned_array
# declare correctly manages spaces and bad characters
eval "declare -a returned_array=${returned_string}"
for junk in "${returned_array[@]}"
do
echo "$junk"
done
Output is:
"this is junk" is junk
"#more junk and "b#d" characters!" is junk
"!#$^%^&(*)_^&% ^$##:"<>?/.,\\"'" is junk
A pure bash, minimal and robust solution based on the 'declare -p' builtin — without insane global variables
This approach involves the following three steps:
Convert the array with 'declare -p' and save the output in a variable.
myVar="$( declare -p myArray )"
The output of the declare -p statement can be used to recreate the array.
For instance the output of declare -p myVar might look like this:
declare -a myVar='([0]="1st field" [1]="2nd field" [2]="3rd field")'
Use the echo builtin to pass the variable to a function or to pass it back from there.
In order to preserve whitespace in array fields when echoing the variable, IFS is temporarily set to a control character (e.g. a vertical tab).
Only the right-hand-side of the declare statement in the variable is to be echoed - this can be achieved by parameter expansion of the form ${parameter#word}. As for the example above: ${myVar#*=}
Finally, recreate the array where it is passed to using the eval and the 'declare -a' builtins.
Example 1 - return an array from a function
#!/bin/bash
# Example 1 - return an array from a function
function my-fun () {
# set up a new array with 3 fields - note the whitespaces in the
# 2nd (2 spaces) and 3rd (2 tabs) field
local myFunArray=( "1st field" "2nd field" "3rd field" )
# show its contents on stderr (must not be output to stdout!)
echo "now in $FUNCNAME () - showing contents of myFunArray" >&2
echo "by the help of the 'declare -p' builtin:" >&2
declare -p myFunArray >&2
# return the array
local myVar="$( declare -p myFunArray )"
local IFS=$'\v';
echo "${myVar#*=}"
# if the function would continue at this point, then IFS should be
# restored to its default value: <space><tab><newline>
IFS=' '$'\t'$'\n';
}
# main
# call the function and recreate the array that was originally
# set up in the function
eval declare -a myMainArray="$( my-fun )"
# show the array contents
echo ""
echo "now in main part of the script - showing contents of myMainArray"
echo "by the help of the 'declare -p' builtin:"
declare -p myMainArray
# end-of-file
Output of Example 1:
now in my-fun () - showing contents of myFunArray
by the help of the 'declare -p' builtin:
declare -a myFunArray='([0]="1st field" [1]="2nd field" [2]="3rd field")'
now in main part of the script - showing contents of myMainArray
by the help of the 'declare -p' builtin:
declare -a myMainArray='([0]="1st field" [1]="2nd field" [2]="3rd field")'
Example 2 - pass an array to a function
#!/bin/bash
# Example 2 - pass an array to a function
function my-fun () {
# recreate the array that was originally set up in the main part of
# the script
eval declare -a myFunArray="$( echo "$1" )"
# note that myFunArray is local - from the bash(1) man page: when used
# in a function, declare makes each name local, as with the local
# command, unless the ‘-g’ option is used.
# IFS has been changed in the main part of this script - now that we
# have recreated the array it's better to restore it to the its (local)
# default value: <space><tab><newline>
local IFS=' '$'\t'$'\n';
# show contents of the array
echo ""
echo "now in $FUNCNAME () - showing contents of myFunArray"
echo "by the help of the 'declare -p' builtin:"
declare -p myFunArray
}
# main
# set up a new array with 3 fields - note the whitespaces in the
# 2nd (2 spaces) and 3rd (2 tabs) field
myMainArray=( "1st field" "2nd field" "3rd field" )
# show the array contents
echo "now in the main part of the script - showing contents of myMainArray"
echo "by the help of the 'declare -p' builtin:"
declare -p myMainArray
# call the function and pass the array to it
myVar="$( declare -p myMainArray )"
IFS=$'\v';
my-fun $( echo "${myVar#*=}" )
# if the script would continue at this point, then IFS should be restored
# to its default value: <space><tab><newline>
IFS=' '$'\t'$'\n';
# end-of-file
Output of Example 2:
now in the main part of the script - showing contents of myMainArray
by the help of the 'declare -p' builtin:
declare -a myMainArray='([0]="1st field" [1]="2nd field" [2]="3rd field")'
now in my-fun () - showing contents of myFunArray
by the help of the 'declare -p' builtin:
declare -a myFunArray='([0]="1st field" [1]="2nd field" [2]="3rd field")'
I recently discovered a quirk in Bash: a function has direct access to the variables declared in the functions higher in the call stack. I've only just started to contemplate how to exploit this feature (it promises both benefits and dangers), but one obvious application is a solution to the spirit of this problem.
I would also prefer to get a return value rather than using a global variable when delegating the creation of an array. There are several reasons for my preference, among which are to avoid possibly disturbing a preexisting value and to avoid leaving a value that may be invalid when later accessed. While there are workarounds to these problems, the easiest is to have the variable go out of scope when the code is finished with it.
My solution ensures that the array is available when needed and discarded when the function returns, and leaves undisturbed a global variable with the same name.
#!/bin/bash
myarr=(global array elements)
get_an_array()
{
myarr=( $( date +"%Y %m %d" ) )
}
request_array()
{
declare -a myarr
get_an_array "myarr"
echo "New contents of local variable myarr:"
printf "%s\n" "${myarr[@]}"
}
echo "Original contents of global variable myarr:"
printf "%s\n" "${myarr[@]}"
echo
request_array
echo
echo "Confirm the global myarr was not touched:"
printf "%s\n" "${myarr[@]}"
When function request_array calls get_an_array, get_an_array can directly set the myarr variable that is local to request_array. Since myarr is created with declare, it is local to request_array and thus goes out of scope when request_array returns.
Although this solution does not literally return a value, I suggest that taken as a whole, it satisfies the promises of a true function return value.
Useful example: return an array from function
function Query() {
local _tmp=`echo -n "$*" | mysql 2>> zz.err`;
echo -e "$_tmp";
}
function StrToArray() {
oIFS=$IFS; IFS=$'\t'; set $1; for item; do echo $item; done; IFS=$oIFS;
}
sql="SELECT codi, bloc, requisit FROM requisits ORDER BY codi";
qry=$(Query $sql);
IFS=$'\n';
for row in $qry; do
r=( $(StrToArray $row) );
echo ${r[0]} - ${r[1]} - ${r[2]};
done
I tried various implementations, and none preserved arrays that had elements with spaces ... because they all had to use echo.
# These implementations only work if no array items contain spaces.
use_array() { eval echo '(' \"\${${1}\[@\]}\" ')'; }
use_array() { local _array="${1}[@]"; echo '(' "${!_array}" ')'; }
Solution
Then I came across Dennis Williamson's answer. I incorporated his method into the following functions so they can a) accept an arbitrary array and b) be used to pass, duplicate and append arrays.
# Print array definition to use with assignments, for loops, etc.
# varname: the name of an array variable.
use_array() {
local r=$( declare -p $1 )
r=${r#declare\ -a\ *=}
# Strip keys so printed definition will be a simple list (like when using
# "${array[@]}"). One side effect of having keys in the definition is
# that when appending arrays (i.e. `a1+=$( use_array a2 )`), values at
# matching indices merge instead of pushing all items onto array.
echo ${r//\[[0-9]\]=}
}
# Same as use_array() but preserves keys.
use_array_assoc() {
local r=$( declare -p $1 )
echo ${r#declare\ -a\ *=}
}
Then, other functions can return an array using catchable output or indirect arguments.
# catchable output
return_array_by_printing() {
local returnme=( "one" "two" "two and a half" )
use_array returnme
}
eval test1=$( return_array_by_printing )
# indirect argument
return_array_to_referenced_variable() {
local returnme=( "one" "two" "two and a half" )
eval $1=$( use_array returnme )
}
return_array_to_referenced_variable test2
# Now both test1 and test2 are arrays with three elements
I needed a similar functionality recently, so the following is a mix of the suggestions made by RashaMatt and Steve Zobell.
echo each array/list element as separate line from within a function
use mapfile to read all array/list elements echoed by a function.
As far as I can see, strings are kept intact and whitespaces are preserved.
#!/bin/bash
function create-array() {
local somearray=("aaa" "bbb ccc" "d" "e f g h")
for elem in "${somearray[@]}"
do
echo "${elem}"
done
}
mapfile -t resa <<< "$(create-array)"
# quick output check
declare -p resa
Some more variations…
#!/bin/bash
function create-array-from-ls() {
local somearray=("$(ls -1)")
for elem in "${somearray[@]}"
do
echo "${elem}"
done
}
function create-array-from-args() {
local somearray=("$@")
for elem in "${somearray[@]}"
do
echo "${elem}"
done
}
mapfile -t resb <<< "$(create-array-from-ls)"
mapfile -t resc <<< "$(create-array-from-args 'xxx' 'yy zz' 't s u' )"
sentenceA="create array from this sentence"
sentenceB="keep this sentence"
mapfile -t resd <<< "$(create-array-from-args ${sentenceA} )"
mapfile -t rese <<< "$(create-array-from-args "$sentenceB" )"
mapfile -t resf <<< "$(create-array-from-args "$sentenceB" "and" "this words" )"
# quick output check
declare -p resb
declare -p resc
declare -p resd
declare -p rese
declare -p resf
Here is a solution with no external array references and no IFS manipulation:
# add one level of single quotes to args, eval to remove
squote () {
local a=("$@")
a=("${a[@]//\'/\'\\\'\'}") # "'" => "'\''"
a=("${a[@]/#/\'}") # add "'" prefix to each word
a=("${a[@]/%/\'}") # add "'" suffix to each word
echo "${a[@]}"
}
create_array () {
local my_list=(a "b 'c'" "\\\"d
")
squote "${my_list[@]}"
}
my_algorithm () {
eval "local result=($(create_array))"
# result=([0]="a" [1]="b 'c'" [2]=$'\\"d\n')
}
[Note: the following was rejected as an edit of this answer for reasons that make no sense to me (since the edit was not intended to address the author of the post!), so I'm taking the suggestion to make it a separate answer.]
A simpler implementation of Steve Zobell's adaptation of Matt McClure's technique uses the bash built-in (since version == 4) readarray as suggested by RastaMatt to create a representation of an array that can be converted into an array at runtime. (Note that both readarray and mapfile name the same code.) It still avoids globals (allowing use of the function in a pipe), and still handles nasty characters.
For some more-fully-developed (e.g., more modularization) but still-kinda-toy examples, see bash_pass_arrays_between_functions. Following are a few easily-executable examples, provided here to avoid moderators b!tching about external links.
Cut the following block and paste it into a bash terminal to create /tmp/source.sh and /tmp/junk1.sh:
FP='/tmp/source.sh' # path to file to be created for `source`ing
cat << 'EOF' > "${FP}" # suppress interpretation of variables in heredoc
function make_junk {
echo 'this is junk'
echo '#more junk and "b@d" characters!'
echo '!#$^%^&(*)_^&% ^$##:"<>?/.,\\"'"'"
}
### Use 'readarray' (aka 'mapfile', bash built-in) to read lines into an array.
### Handles blank lines, whitespace and even nastier characters.
function lines_to_array_representation {
local -a arr=()
readarray -t arr
# output array as string using 'declare's representation (minus header)
declare -p arr | sed -e 's/^declare -a [^=]*=//'
}
EOF
FP1='/tmp/junk1.sh' # path to script to run
cat << 'EOF' > "${FP1}" # suppress interpretation of variables in heredoc
#!/usr/bin/env bash
source '/tmp/source.sh' # to reuse its functions
returned_string="$(make_junk | lines_to_array_representation)"
eval "declare -a returned_array=${returned_string}"
for elem in "${returned_array[@]}" ; do
echo "${elem}"
done
EOF
chmod u+x "${FP1}"
# newline here ... just hit Enter ...
Run /tmp/junk1.sh: output should be
this is junk
#more junk and "b@d" characters!
!#$^%^&(*)_^&% ^$##:"<>?/.,\\"'
Note lines_to_array_representation also handles blank lines. Try pasting the following block into your bash terminal:
FP2='/tmp/junk2.sh' # path to script to run
cat << 'EOF' > "${FP2}" # suppress interpretation of variables in heredoc
#!/usr/bin/env bash
source '/tmp/source.sh' # to reuse its functions
echo '`bash --version` the normal way:'
echo '--------------------------------'
bash --version
echo # newline
echo '`bash --version` via `lines_to_array_representation`:'
echo '-----------------------------------------------------'
bash_version="$(bash --version | lines_to_array_representation)"
eval "declare -a returned_array=${bash_version}"
for elem in "${returned_array[@]}" ; do
echo "${elem}"
done
echo # newline
echo 'But are they *really* the same? Ask `diff`:'
echo '-------------------------------------------'
echo 'You already know how to capture normal output (from `bash --version`):'
declare -r PATH_TO_NORMAL_OUTPUT="$(mktemp)"
bash --version > "${PATH_TO_NORMAL_OUTPUT}"
echo "normal output captured to file @ ${PATH_TO_NORMAL_OUTPUT}"
ls -al "${PATH_TO_NORMAL_OUTPUT}"
echo # newline
echo 'Capturing L2AR takes a bit more work, but is not onerous.'
echo "Look @ contents of the file you're about to run to see how it's done."
declare -r RAW_L2AR_OUTPUT="$(bash --version | lines_to_array_representation)"
declare -r PATH_TO_COOKED_L2AR_OUTPUT="$(mktemp)"
eval "declare -a returned_array=${RAW_L2AR_OUTPUT}"
for elem in "${returned_array[@]}" ; do
echo "${elem}" >> "${PATH_TO_COOKED_L2AR_OUTPUT}"
done
echo "output from lines_to_array_representation captured to file @ ${PATH_TO_COOKED_L2AR_OUTPUT}"
ls -al "${PATH_TO_COOKED_L2AR_OUTPUT}"
echo # newline
echo 'So are they really the same? Per'
echo "\`diff -uwB "${PATH_TO_NORMAL_OUTPUT}" "${PATH_TO_COOKED_L2AR_OUTPUT}" | wc -l\`"
diff -uwB "${PATH_TO_NORMAL_OUTPUT}" "${PATH_TO_COOKED_L2AR_OUTPUT}" | wc -l
echo '... they are the same!'
EOF
chmod u+x "${FP2}"
# newline here ... just hit Enter ...
Run /tmp/junk2.sh @ commandline. Your output should be similar to mine:
`bash --version` the normal way:
--------------------------------
GNU bash, version 4.3.30(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
`bash --version` via `lines_to_array_representation`:
-----------------------------------------------------
GNU bash, version 4.3.30(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
But are they *really* the same? Ask `diff`:
-------------------------------------------
You already know how to capture normal output (from `bash --version`):
normal output captured to file @ /tmp/tmp.Ni1bgyPPEw
-rw------- 1 me me 308 Jun 18 16:27 /tmp/tmp.Ni1bgyPPEw
Capturing L2AR takes a bit more work, but is not onerous.
Look @ contents of the file you're about to run to see how it's done.
output from lines_to_array_representation captured to file @ /tmp/tmp.1D6O2vckGz
-rw------- 1 me me 308 Jun 18 16:27 /tmp/tmp.1D6O2vckGz
So are they really the same? Per
`diff -uwB /tmp/tmp.Ni1bgyPPEw /tmp/tmp.1D6O2vckGz | wc -l`
0
... they are the same!
There's no need to use eval or to change IFS to \n. There are at least 2 good ways to do this.
1) Using echo and mapfile
You can simply echo each item of the array in the function, then use mapfile to turn it into an array:
outputArray()
{
for i
{
echo "$i"
}
}
declare -a arr=( 'qq' 'www' 'ee rr' )
mapfile -t array < <(outputArray "${arr[@]}")
for i in "${array[@]}"
do
echo "i=$i"
done
To make it work using pipes, add (( $# == 0 )) && readarray -t temp && set "${temp[@]}" && unset temp to the top of outputArray. It converts stdin to parameters.
2) Using declare -p and sed
This can also be done using declare -p and sed instead of mapfile.
outputArray()
{
(( $# == 0 )) && readarray -t temp && set "${temp[@]}" && unset temp
for i; { echo "$i"; }
}
returnArray()
{
local -a arr=()
(( $# == 0 )) && readarray -t arr || for i; { arr+=("$i"); }
declare -p arr | sed -e 's/^declare -a [^=]*=//'
}
declare -a arr=( 'qq' 'www' 'ee rr' )
declare -a array=$(returnArray "${arr[@]}")
for i in "${array[@]}"
do
echo "i=$i"
done
declare -a array=$(outputArray "${arr[@]}" | returnArray)
echo
for i in "${array[@]}"
do
echo "i=$i"
done
declare -a array < <(outputArray "${arr[@]}" | returnArray)
echo
for i in "${array[@]}"
do
echo "i=$i"
done
This can also be done by simply passing the array variable's name to the function, assigning the array values to that name, and then using the variable outside the function. For example:
create_array() {
local __resultArgArray=$1
local my_list=("a" "b" "c")
eval $__resultArgArray="("${my_list[@]}")"
}
my_algorithm() {
create_array result
echo "Total elements in the array: ${#result[@]}"
for i in "${result[@]}"
do
echo $i
done
}
my_algorithm
The easiest way I found
my_function()
{
array=(one two three)
echo ${array[@]}
}
result=($(my_function))
echo ${result[0]}
echo ${result[1]}
echo ${result[2]}
You can also use the declare -p method more easily by taking advantage of declare -a's double-evaluation when the value is a string (no true parens outside the string):
# return_array_value returns the value of array whose name is passed in.
# It turns the array into a declaration statement, then echos the value
# part of that statement with parentheses intact. You can use that
# result in a "declare -a" statement to create your own array with the
# same value. Also works for associative arrays with "declare -A".
return_array_value () {
declare Array_name=$1 # namespace locals with caps to prevent name collision
declare Result
Result=$(declare -p $Array_name) # dehydrate the array into a declaration
echo "${Result#*=}" # trim "declare -a ...=" from the front
}
# now use it. test for robustness by skipping an index and putting a
# space in an entry.
declare -a src=([0]=one [2]="two three")
declare -a dst="$(return_array_value src)" # rehydrate with double-eval
declare -p dst
> declare -a dst=([0]="one" [2]="two three") # result matches original
Verifying the result, declare -p dst yields declare -a dst=([0]="one" [2]="two three"), demonstrating that this method correctly deals with sparse arrays as well as entries containing an IFS character (space).
The first thing is to dehydrate the source array by using declare -p to generate a valid bash declaration of it. Because the declaration is a full statement, including "declare" and the variable name, we strip that part from the front with ${Result#*=}, leaving the parentheses with the indices and values inside: ([0]="one" [2]="two three").
It then rehydrates the array by feeding that value to your own declare statement, one where you choose the array name. It relies on the fact that the right side of the dst array declaration is a string with parentheses that are inside the string, rather than true parentheses in the declare itself, e.g. not declare -a dst=( "true parens outside string" ). This triggers declare to evaluate the string twice, once into a valid statement with parentheses (and quotes in the value preserved), and another for the actual assignment. I.e. it evaluates first to declare -a dst=([0]="one" [2]="two three"), then evaluates that as a statement.
Note that this double evaluation behavior is specific to the -a and -A options of declare.
Oh, and this method works with associative arrays as well, just change -a to -A.
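For example, a sketch of the associative-array variant (the map names here are illustrative, and this assumes a bash whose declare -p output matches the unquoted form shown above):
declare -A src_map=([alpha]="one" [beta]="two three")
declare -A dst_map="$(return_array_value src_map)"   # same double-eval trick, -A instead of -a
declare -p dst_map                                    # should show both keys with spacing intact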
Because this method relies on stdout, it works across subshell boundaries like pipelines, as others have noted.
I discuss this method in more detail in my blog post
If your source data is formatted with each list element on a separate line, then the mapfile builtin is a simple and elegant way to read a list into an array:
$ list=$(ls -1 /usr/local) # one item per line
$ mapfile -t arrayVar <<<"$list" # -t trims trailing newlines
$ declare -p arrayVar | sed 's#\[#\n[#g'
declare -a arrayVar='(
[0]="bin"
[1]="etc"
[2]="games"
[3]="include"
[4]="lib"
[5]="man"
[6]="sbin"
[7]="share"
[8]="src")'
Note that, as with the read builtin, you would not ordinarily* use mapfile in a pipeline (or subshell) because the assigned array variable would be unavailable to subsequent statements (* unless bash job control is disabled and shopt -s lastpipe is set).
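If you really do want mapfile at the end of a pipeline in a script, here is a sketch under that lastpipe assumption:
#!/usr/bin/env bash
# in a non-interactive shell job control is off, so lastpipe takes effect
shopt -s lastpipe
ls -1 /usr/local | mapfile -t arrayVar
declare -p arrayVar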
$ help mapfile
mapfile: mapfile [-n count] [-O origin] [-s count] [-t] [-u fd] [-C callback] [-c quantum] [array]
Read lines from the standard input into an indexed array variable.
Read lines from the standard input into the indexed array variable ARRAY, or
from file descriptor FD if the -u option is supplied. The variable MAPFILE
is the default ARRAY.
Options:
-n count Copy at most COUNT lines. If COUNT is 0, all lines are copied.
-O origin Begin assigning to ARRAY at index ORIGIN. The default index is 0.
-s count Discard the first COUNT lines read.
-t Remove a trailing newline from each line read.
-u fd Read lines from file descriptor FD instead of the standard input.
-C callback Evaluate CALLBACK each time QUANTUM lines are read.
-c quantum Specify the number of lines read between each call to CALLBACK.
Arguments:
ARRAY Array variable name to use for file data.
If -C is supplied without -c, the default quantum is 5000. When
CALLBACK is evaluated, it is supplied the index of the next array
element to be assigned and the line to be assigned to that element
as additional arguments.
If not supplied with an explicit origin, mapfile will clear ARRAY before
assigning to it.
Exit Status:
Returns success unless an invalid option is given or ARRAY is readonly or
not an indexed array.
You can try this
my_algorithm() {
create_array list
for element in "${list[@]}"
do
echo "${element}"
done
}
create_array() {
local my_list=("1st one" "2nd two" "3rd three")
eval "${1}=()"
for element in "${my_list[@]}"
do
eval "${1}+=(\"${element}\")"
done
}
my_algorithm
The output is
1st one
2nd two
3rd three
I'd suggest piping to a code block to set values of an array. The strategy is POSIX compatible, so it works in both Bash and Zsh, and doesn't run the risk of side effects like the posted solutions.
i=0 # index for our new array
declare -a arr # our new array
# pipe from a function that produces output by line
ls -l | { while read data; do i=$i+1; arr[$i]="$data"; done }
# example of reading that new array
for row in "${arr[@]}"; do echo "$row"; done
This will work for zsh and bash, and won't be affected by spaces or special characters. In the case of the OP, the output is transformed by echo, so it is not actually outputting an array, but printing it (as others mentioned, shell functions return a status, not values). We can change it to a pipeline-ready mechanism:
create_array() {
local my_list=("a", "b", "c")
for row in "${my_list[@]}"; do
echo "$row"
done
}
my_algorithm() {
i=0
declare -a result
create_array | { while read data; do i=$i+1; result[$i]="$data"; done }
}
If so inclined, one could remove the create_array pipeline process from my_algorithm and chain the two functions together
create_array | my_algorithm
A modern Bash implementation using @Q to safely output array elements:
#!/usr/bin/env bash
return_array_elements() {
local -a foo_array=('1st one' '2nd two' '3rd three')
printf '%s\n' "${foo_array[@]@Q}"
}
use_array_elements() {
local -a bar_array="($(return_array_elements))"
# Display the declaration of bar_array,
# which is local to this function, but whose elements
# have been returned by the return_array_elements function
declare -p bar_array
}
use_array_elements
Output:
declare -a bar_array=([0]="1st one" [1]="2nd two" [2]="3rd three")
While the declare -p approach is elegant indeed, you can still create a global array using declare -g within a function and have it visible outside the scope of the function:
create_array() {
declare -ag result=("a", "b", "c")
}
my_algorithm() {
create_array
echo "${result[@]}"
}

Reverse Command Line Parameters in a bash script

I have to write a simple bash script for my programming class. The idea is to use a for loop with $* (names of Files as Command Line Parameters). The task is to reverse and print out the Command Line parameters while still using the for inFile in $*; do loop.
I have no idea how to do this.
#!/bin/bash
for inFile in $*;do
echo $inFile
done
I know this doesn't work it just prints out the command line parameters in order.
The idea to loop over $* to reverse command line arguments is broken when any command line argument contains whitespace. For example, when the command line arguments are foo and "bar baz", the output of the script in the question will be:
foo
bar
baz
When the correct output should be:
foo
bar baz
The exact wording of the task is important.
For example, if the task is to print the arguments in reverse, and it doesn't mention $*, then you can use a counting loop in reverse, and ${!i} to expand to the value of the numbered positional parameters:
# nice clean solution
for ((i = $#; i > 0; i--)); do
echo "${!i}"
done
Another example, if the task insists that you must use $* and accepts that the command line arguments will only have supported characters, then you could collect the parameters into an array, and then print the content of the array in reverse, again with a counting loop:
args=()
# not recommended, unsafe due to shell expansion of $*
for arg in $*; do
args+=("$arg")
done
for ((i = ${#args[@]} - 1; i >= 0; i--)); do
echo "${args[i]}"
done
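As an aside (not part of the stated task, which requires $*): if that constraint is relaxed, the same collect-then-reverse idea is safe with "$@", since the quoting preserves arguments that contain spaces. A sketch:
# safe variant using "$@" instead of $*
args=( "$@" )
for ((i = ${#args[@]} - 1; i >= 0; i--)); do
    printf '%s\n' "${args[i]}"
done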
If you are not allowed to use arrays, then you can prepend values to a string, and then iterate over that string:
# dirtiest solution, with unsafe expansions and unquoted variables, not recommended
args=
for arg in $*; do
args="$arg $args"
done
for arg in $args; do
echo "$arg"
done

Pass an array via command-line to be handled with getopts()

I try to pass some parameters to my .bash file.
terminal:
arr=("E1" "E2" "E3")
param1=("foo")
param2=("bar")
now I want to call my execute.bash file.
execute.bash -a ${arr[@]} -p $param1 -c param2
this is my file:
execute.bash:
while getopts ":a:p:c:" opt; do
case $opt in
a) ARRAY=${OPTARG};;
p) PARAM1=${OPTARG};;
c) PARAM2=${OPTARG};;
\?) exit "Invalid option -$OPTARG";;
esac
done
for a in "${ARRAY[@]}"; do
echo "$a"
done
echo "$PARAM1"
echo "$PARAM2"
But my file only prints:
E1
foo
bar
What's the problem with my script?
Expanding all values in the array using ${arr[@]} expands each value as a separate command-line argument, so getopts only sees the first value as the parameter to the "-a" option.
If you expand using ${arr[*]} then all of the array values are expanded into a single command-line argument, so getopts can see all of the values in the array as a single argument to the "-a" option.
There are a couple of other issues: you need to quote the values on the command line:
< execute.bash -a ${arr[@]} -p $param1 -c param2
> execute.bash -a "${arr[*]}" -p $param1 -c $param2
and use parentheses around ${OPTARG} in the getopts processing to make it an array assignment:
< a) ARRAY=${OPTARG};;
> a) ARRAY=(${OPTARG});;
after making these changes, I get this output:
E1
E2
E3
foo
bar
which I think is what you are expecting.
You have a problem with passing the array as one of the parameters for the -a flag. Arrays in bash get expanded on the command line before the actual script is invoked. The "${array[@]}" expansion outputs words separated by whitespace.
So your script is passed as
-a "E1" "E2" "E3" -p foo -c bar
So with the getopts call, the argument OPTARG for -a won't be populated with more than the first value, i.e. only E1. One way to achieve what you want is to use the array expansion of the form "${array[*]}", which concatenates the elements with the default IFS (whitespace), so that -a now sees one string containing the words of the array, i.e. as if passed as
-a "E1 E2 E3" -p foo -c bar
I've emphasized the quotes to show how the argument for -a will be received by getopts.
#!/usr/bin/env bash
while getopts ":a:p:c:" opt; do
case $opt in
a) ARRAY="${OPTARG}";;
p) PARAM1="${OPTARG}";;
c) PARAM2="${OPTARG}";;
\?) exit "Invalid option -$OPTARG";;
esac
done
# From the received string ARRAY we are basically re-constructing another
# array splitting on the default IFS character which can be iterated over
# as in your input example
read -r -a splitArray <<<"$ARRAY"
for a in "${splitArray[@]}"; do
echo "$a"
done
echo "$PARAM1"
echo "$PARAM2"
and now call the script with args as shown below. Note that you are using param1 and param2 as plain variables, but your definitions create one-element arrays. Your initialization should just look like
arr=("E1" "E2" "E3")
param1="foo"
param2="bar"
and invoked as
-a "${arr[*]}" -p "$param1" -c "$param2"
A word of caution: make sure the words in the array arr don't themselves contain spaces. Reading them back as above would then split those words, because of the way bash handles IFS. In that case, use a different delimiter, say | or #, when passing the array expansion.
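A sketch of that delimiter idea (reusing the names from this answer, with | as the separator):
# caller side: join the array elements with '|' instead of a space
arr=("E1 x" "E2" "E3")
./execute.bash -a "$(IFS='|'; echo "${arr[*]}")" -p "$param1" -c "$param2"
# script side: split on '|' instead of the default IFS
IFS='|' read -r -a splitArray <<< "$ARRAY"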
If I want to export the array MY_ARRAY, I use this at the caller side:
[[ $MY_ARRAY ]] && export A_MY_ARRAY=$(declare -p MY_ARRAY)
... and this at the sub-script side:
[[ $A_MY_ARRAY =~ ^declare ]] && eval $A_MY_ARRAY
The concept works for parameters too. At the caller side:
SUB_SCRIPT "$(declare -p MY_ARRAY)"
... and at the sub-script side:
[[ $1 =~ ^declare ]] && eval $1
The only issue with both solutions is that the variable names must be the same on both sides. This can be changed by replacing the variable name before expanding it.
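For example, one way to rename on the receiving side (a sketch; B_MY_ARRAY is just an illustrative name) is to keep only the value part of the received declaration:
# sub-script side: rebuild the passed array under a different name
[[ $1 =~ ^declare ]] && eval "declare -a B_MY_ARRAY=${1#*=}"
declare -p B_MY_ARRAY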

How to use bash substitution to append a newline at the end of each element of a list

I am looking for a bash one liner that appends a newline after each element of a list. If I call the script as:
./script arg1 arg2 arg3
I want the output to be
arg1
arg2
arg3
I tried different variations of the following. The newline does not get added. Any ordinary char gets added.
# pretty much works except for an extra space
list=${@/%/x}
echo "$list"
# appends 'n'
list=${@/%/\n}
echo "$list"
# appends nothing
list=${@/%/$'\n'}
echo "$list"
# appends nothing, \x78 would append 'x'
list=${@/%/$'\x0D'}
echo "$list"
# appends nothing
CR=$'\n'
list=${@/%/$CR}
echo "$list"
# same issues with arrays
tmp=($@)
list=${tmp/%/\n}
echo "$list"
What fix or alternative do you suggest? I obviously could write a loop or call tr but that's precisely what I thought I could avoid with a bash substitution.
You can use this function with "$@":
f() { printf "%s\n" "$@"; }
f arg1 arg2 arg3
arg1
arg2
arg3
As per man bash
@ Expands to the positional parameters, starting from one. When the expansion occurs
within double quotes, each parameter expands to a separate word. That is, "$@" is
equivalent to "$1" "$2" ...
printf would have been my answer as well. Another technique is to use IFS:
$ IFS=$'\n'
$ list="$*"
$ echo "$list"
arg1
arg2
arg3
Notes:
uses ANSI-C quoting for the newline sequence
"$*" (with the quotes, crucial) joins the positional params using the first char of $IFS
quote the shell variable for the echo command to preserve the inner newlines.
That redefines the IFS value for the current shell. You can save the old value first and restore it after:
oldIFS=$IFS; IFS=$'\n'; list="$*"; IFS=$oldIFS
or you can use a subshell so the modification is discarded for you:
$ list=$( IFS=$'\n'; echo "$*" )

What is the difference between ${var}, "$var", and "${var}" in the Bash shell?

What the title says: what does it mean to encapsulate a variable in {}, "", or "{}"? I haven't been able to find any explanation of this online - I can't search for these constructs except by the symbols themselves, which doesn't yield anything.
Here's an example:
declare -a groups
groups+=("CN=exampleexample,OU=exampleexample,OU=exampleexample,DC=example,DC=com")
groups+=("CN=example example,OU=example example,OU=example example,DC=example,DC=com")
This:
for group in "${groups[@]}"; do
echo $group
done
Proves to be much different than this:
for group in $groups; do
echo $group
done
and this:
for group in ${groups}; do
echo $group
done
Only the first one accomplishes what I want: to iterate through each element in the array. I'm not really clear on the differences between $groups, "$groups", ${groups} and "${groups}". If anyone could explain it, I would appreciate it.
As an extra question - does anyone know the accepted way to refer to these encapsulations?
Braces ($var vs. ${var})
In most cases, $var and ${var} are the same:
var=foo
echo $var
# foo
echo ${var}
# foo
The braces are only needed to resolve ambiguity in expressions:
var=foo
echo $varbar
# Prints nothing because there is no variable 'varbar'
echo ${var}bar
# foobar
Quotes ($var vs. "$var" vs. "${var}")
When you add double quotes around a variable, you tell the shell to treat it as a single word, even if it contains whitespaces:
var="foo bar"
for i in "$var"; do # Expands to 'for i in "foo bar"; do...'
echo $i # so only runs the loop once
done
# foo bar
Contrast that behavior with the following:
var="foo bar"
for i in $var; do # Expands to 'for i in foo bar; do...'
echo $i # so runs the loop twice, once for each argument
done
# foo
# bar
As with $var vs. ${var}, the braces are only needed for disambiguation, for example:
var="foo bar"
for i in "$varbar"; do # Expands to 'for i in ""; do...' since there is no
echo $i # variable named 'varbar', so loop runs once and
done # prints nothing (actually "")
var="foo bar"
for i in "${var}bar"; do # Expands to 'for i in "foo barbar"; do...'
echo $i # so runs the loop once
done
# foo barbar
Note that "${var}bar" in the second example above could also be written "${var}"bar, in which case you don't need the braces anymore, i.e. "$var"bar. However, if you have a lot of quotes in your string these alternative forms can get hard to read (and therefore hard to maintain). This page provides a good introduction to quoting in Bash.
Arrays ($var vs. $var[@] vs. ${var[@]})
Now for your array. According to the bash manual:
Referencing an array variable without a subscript is equivalent to referencing the array with a subscript of 0.
In other words, if you don't supply an index with [], you get the first element of the array:
foo=(a b c)
echo $foo
# a
Which is exactly the same as
foo=(a b c)
echo ${foo}
# a
To get all the elements of an array, you need to use @ as the index, e.g. ${foo[@]}. The braces are required with arrays because without them, the shell would expand the $foo part first, giving the first element of the array followed by a literal [@]:
foo=(a b c)
echo ${foo[@]}
# a b c
echo $foo[@]
# a[@]
This page is a good introduction to arrays in Bash.
Quotes revisited (${foo[@]} vs. "${foo[@]}")
You didn't ask about this but it's a subtle difference that's good to know about. If the elements in your array could contain whitespace, you need to use double quotes so that each element is treated as a separate "word:"
foo=("the first" "the second")
for i in "${foo[@]}"; do # Expands to 'for i in "the first" "the second"; do...'
echo $i # so the loop runs twice
done
# the first
# the second
Contrast this with the behavior without double quotes:
foo=("the first" "the second")
for i in ${foo[@]}; do # Expands to 'for i in the first the second; do...'
echo $i # so the loop runs four times!
done
# the
# first
# the
# second
TL;DR
All the examples you give are variations on Bash Shell Expansions. Expansions happen in a particular order, and some have specific use cases.
Braces as Token Delimiters
The ${var} syntax is primarily used for delimiting ambiguous tokens. For example, consider the following:
$ var1=foo; var2=bar; var12=12
$ echo $var12
12
$ echo ${var1}2
foo2
Braces in Array Expansions
The braces are required to access the elements of an array and for other special expansions. For example:
$ foo=(1 2 3)
# Returns first element only.
$ echo $foo
1
# Returns all array elements.
$ echo ${foo[*]}
1 2 3
# Returns number of elements in array.
$ echo ${#foo[*]}
3
Tokenization
Most of the rest of your questions have to do with quoting, and how the shell tokenizes input. Consider the difference in how the shell performs word splitting in the following examples:
$ var1=foo; var2=bar; count_params () { echo $#; }
# Variables are interpolated into a single string.
$ count_params "$var1 $var2"
1
# Each variable is quoted separately, created two arguments.
$ count_params "$var1" "$var2"
2
The @ symbol interacts with quoting differently than *. Specifically:
$@ "[e]xpands to the positional parameters, starting from one. When the expansion occurs within double quotes, each parameter expands to a separate word."
In an array, "[i]f the word is double-quoted, ${name[*]} expands to a single word with the value of each array member separated by the first character of the IFS variable, and ${name[@]} expands each element of name to a separate word."
You can see this in action as follows:
$ count_params () { echo $#; }
$ set -- foo bar baz
$ count_params "$@"
3
$ count_params "$*"
1
The use of a quoted expansion matters a great deal when variables refer to values with spaces or special characters that might prevent the shell from word-splitting the way you intend. See Quoting for more on how quoting works in Bash.
You need to distinguish between arrays and simple variables — and your example is using an array.
For plain variables:
$var and ${var} are exactly equivalent.
"$var" and "${var}" are exactly equivalent.
However, the two pairs are not 100% identical in all cases. Consider the output below:
$ var=" abc def "
$ printf "X%sX\n" $var
XabcX
XdefX
$ printf "X%sX\n" "${var}"
X abc def X
$
Without the double quotes around the variable, the internal spacing is lost and the expansion is treated as two arguments to the printf command. With the double quotes around the variable, the internal spacing is preserved and the expansion is treated as one argument to the printf command.
With arrays, the rules are both similar and different.
If groups is an array, referencing $groups or ${groups} is tantamount to referencing ${groups[0]}, the zeroth element of the array.
Referencing "${groups[@]}" is analogous to referencing "$@"; it preserves the spacing in the individual elements of the array, and returns a list of values, one value per element of the array.
Referencing ${groups[@]} without the double quotes does not preserve spacing and can introduce more values than there are elements in the array if some of the elements contain spaces.
For example:
$ groups=("abc def" " pqr xyz ")
$ printf "X%sX\n" ${groups[@]}
XabcX
XdefX
XpqrX
XxyzX
$ printf "X%sX\n" "${groups[@]}"
Xabc defX
X pqr xyz X
$ printf "X%sX\n" $groups
XabcX
XdefX
$ printf "X%sX\n" "$groups"
Xabc defX
$
Using * instead of @ leads to subtly different results.
See also How to iterate over the arguments in a bash script.
The second sentence of the first paragraph under Parameter Expansion in man bash says,
The parameter name or symbol to be expanded may be enclosed in braces, which are optional but serve to protect the variable to be expanded from characters immediately following it which could be interpreted as part of the name.
Which tells you that the name is simply braces, and the main purpose is to clarify where the name begins and ends:
foo='bar'
echo "$foobar"
# nothing
echo "${foo}bar"
barbar
If you read further you discover,
The braces are required when parameter is a positional parameter with more than one digit…
Let's test:
$ set -- {0..100}
$ echo $22
12
$ echo ${22}
21
Huh. Neat. I honestly didn't know that before writing this (I've never had more than 9 positional parameters before.)
Of course, you also need braces to do the powerful parameter expansion features like
${parameter:-word}
${parameter:=word}
${parameter:?word}
… [read the section for more]
as well as array expansion.
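A quick illustration of the first of those forms (the others follow the same pattern):
unset name
echo "${name:-fallback}"   # prints "fallback": name is unset, so the default word is used
name=value
echo "${name:-fallback}"   # prints "value"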
A related case not covered above. Quoting an empty variable seems to change things for test -n. This is specifically given as an example in the info text for coreutils, but not really explained:
16.3.4 String tests
-------------------
These options test string characteristics. You may need to quote
STRING arguments for the shell. For example:
test -n "$V"
The quotes here prevent the wrong arguments from being passed to
`test' if `$V' is empty or contains special characters.
I'd love to hear the detailed explanation. My testing confirms this, and I'm now quoting my variables for all string tests, to avoid having -z and -n return the same result.
$ unset a
$ if [ -z $a ]; then echo unset; else echo set; fi
unset
$ if [ -n $a ]; then echo set; else echo unset; fi
set # highly unexpected!
$ unset a
$ if [ -z "$a" ]; then echo unset; else echo set; fi
unset
$ if [ -n "$a" ]; then echo set; else echo unset; fi
unset # much better
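My reading of why (an addition on my part, not from the info text): with an empty, unquoted $a the word vanishes entirely after expansion, so test is left with a single argument and applies its one-argument rule (true if that argument is a non-empty string), never performing the -n test at all:
a=
# what the shell actually runs after expansion and word splitting
[ -n $a ]      # becomes [ -n ]: one argument, true because "-n" itself is non-empty
[ -n "$a" ]    # becomes [ -n "" ]: two arguments, the real -n test, which is false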
Well, I know that encapsulation of a variable helps you to work with something like:
${groups%example}
or syntax like that, where you want to do something with your variable before returning the value.
Now, if you see your code, all the magic is inside
${groups[@]}
the magic is in there because you can't write just: $groups[@]
You're putting your variable inside the {} because you want to use special characters [] and @. You can't name or call your variable just: @ or something[] because these are reserved characters for other operations and names.
$var and ${var} are the same, if var is the name of the variable.
The braces are required when parameter is a positional parameter with more than one digit, or when parameter is followed by a character that is not to be interpreted as part of its name.
Thus, "$var" and "${var}" are the same.
However, $var and "$var" are different.
Bash will do Word Splitting for $var but not for "$var".
The shell scans the results of parameter expansion, command substitution, and arithmetic expansion that did not occur within double quotes for word splitting.
Note: Word splitting won't be performed in variable assignment:
https://www.gnu.org/software/bash/manual/html_node/Shell-Parameters.html
A variable may be assigned to by a statement of the form
name=[value]
If value is not given, the variable is assigned the null string. All values undergo tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, and quote removal (see Shell Parameter Expansion). If the variable has its integer attribute set, then value is evaluated as an arithmetic expression even if the $((…)) expansion is not used (see Arithmetic Expansion). Word splitting and filename expansion are not performed.
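A small illustration of that last point:
spaced="a    b   c"
copy=$spaced              # no quotes needed here: assignments do not word-split or glob
printf '[%s]\n' "$copy"   # prints [a    b   c]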

Resources