Is there a way to set the positional parameters of a bash script from within a function?
In the global scope one can use set -- <arguments> to change the positional arguments, but it doesn't work inside a function because it changes the function's positional parameters.
A quick illustration:
# file name: script.sh
# called as: ./script.sh -opt1 -opt2 arg1 arg2
function change_args() {
set -- "a" "c" # this doesn't change the global args
}
echo "original args: $#" # original args: -opt1 -opt2 arg1 arg2
change_args
echo "changed args: $#" # changed args: -opt1 -opt2 arg1 arg2
# desired outcome: changed args: a c
As stated before, the answer is no, but if someone needs this there's the option of setting an external array (_ARGS), modifying it from within the function and then using set -- "${_ARGS[@]}" after the fact. For example:
#!/bin/bash
_ARGS=()
shift_globally () {
shift
_ARGS=("$@")
}
echo "before: " "$#"
shift_globally "$#"
set -- "${_ARGS[#]}"
echo "after: " "$#"
If you test it:
./test.sh a b c d
> before: a b c d
> after: b c d
It's not technically what you're asking for but it's a workaround that might help someone who needs a similar behaviour.
Not really. A function actually has its own scope. A variable assigned in a function is global by default, but if you explicitly declare it, it has local scope:
foo() {
x=3 # global
declare y # local
...
}
The positional parameters are always local to the function, so anything you do to manipulate them will only take effect within the function scope.
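A minimal sketch of that scoping (the function name is arbitrary): even shift inside a function only consumes the function's own copy of the parameters.
eat_one() {
    shift                         # drops $1 of the function's own parameters only
    echo "inside:  $# left ($*)"
}
set -- a b c
eat_one "$@"                      # inside:  2 left (b c)
echo "outside: $# left ($*)"      # outside: 3 left (a b c)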
Technically, you can always use recursion to solve this issue. If you can confidently call the original script again, you can reorder the arguments that way. E.g.
declare -i depth
outer() {
if (( depth++ < 1 )); then
outer "${#:4:$#}" "${#:1:3}"
return
fi
main "$#"
}
main() {
printf '%s\n' "$@"
}
outer "$#"
This seems like an error prone and confusing solution to a problem I don't understand, but it essentially works.
I was searching for this myself and came up with the approach below: return a space-separated string of the array (sorry, positional parameters) from the function by printing it to stdout, capture it with command substitution, then parse the string word by word and append each word back onto the positional parameters. Clear as mud!
Obviously this won't work for args with spaces in them, but that's of course possible to work around too (see the sketch after the code).
#!/bin/sh
fn() {
set -- d e f
res=""
for i; do
res="${res:+${res} }${i}"
done
printf "%s\n" "$res"
}
set -- a b c
rv=$(fn)
set --
for word in $rv; do
set -- "$#" "$word"
done
for i; do
echo positional parms "$i"
done
I'm setting up my shell environments and I want to be able to use some of the same functions/aliases in zsh as in bash. One of these functions opens either .bashrc or .zshrc in an editor (whichever file is relevant), waits for the editor to close, then reloads the rc file.
# a very simplified version of this function
editrc() {
local rcfile=".$(basename $SHELL)rc"
code -w ~/$rcfile
. ~/$rcfile
}
I use the value of rcfile in a few other functions, so I've pulled it out of the function declaration.
_rc=".$(basename $SHELL)rc"
editrc() {
code -w ~/$_rc
. ~/$_rc
}
# ... other functions that use it ...
unset _rc
However, because I'm a neat freak, I want to unset _rc at the end of my script, but I still want my functions to run correctly. Is there a clever way to evaluate $_rc at the time the function is declared?
I know I could use eval and place everything except $_rc instances within single quotes, but that seems like a pain, since the full version of my function uses both single-quotes and double-quotes.
_rc=".$(basename $SHELL)rc"
eval 'editrc() {
echo Here'"'"'s a thing that uses single quotes. As you can see it'"'"'s a pain.
code -w ~/'$_rc'
. ~/'$_rc'
}'
# ... other functions using `_rc`
unset _rc
I'm guessing I could declare my functions, then do some magic with eval "$(declare -f editrc | awk)". It may very well be more pain than it's worth, but I'm always interested in learning new things.
Note: I'd love to generalize this into a utility function that does this.
_myvar=foo
anothervar=bar
myfunc() {
echo $_myvar $anothervar
}
# redeclares myfunc with `$_myvar` expanded, but leaves `$anothervar` as-is
expandfunctionvars myfunc '$_myvar'
Is there a clever way to evaluate $_rc at the time the function is declared?
_rc=".$(basename "$SHELL")rc"
# while you could eval here, source lets you work with a stream
source <(
cat <<EOF
editrc() {
local _rc
# first safely transfer context
$(declare -p _rc)
EOF
# use a quoted here document to do anything inside without caring.
cat <<'EOF'
# do anything else
echo "Here's a thing that uses single quotes. As you can see it's not a pain, just choose proper quoting."
code -w "~/$_rc"
. "~/$_rc"
}
EOF
)
unset _rc
Generally first use declare -p to transfer variables as strings to be evaluated. Then after you "import" variables, use a quoted here document to do anything as in a normal script.
References to read:
<<EOF is a here document. Note the difference in parsing when the here delimiter is quoted vs unquoted.
<(..) is a process substitution
The source command reads a pipe created by process substitution. Inside the process substitution I output the function to be sourced. With the first here document I output the function name definition, with a local of the variable so that it doesn't pollute the global namespace. Then with declare -p I output the variable definition as a properly quoted string later to be sourced by source. Then with a quoted here document I output the rest of the function, so that I do not need to care about quoting.
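As a minimal standalone sketch of the declare -p trick (the variable name here is just illustrative): declare -p prints an assignment statement that can later be re-evaluated to recreate the variable with its exact value.
name="a value with spaces and 'quotes'"
declare -p name                  # e.g. declare -- name="a value with spaces and 'quotes'"
saved=$(declare -p name)         # capture that re-evaluable assignment
unset name
eval "$saved"                    # or: source <(echo "$saved")
echo "$name"                     # the exact original value is back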
The code is bash specific, I know nothing about zsh and don't use it.
You could do it with eval too:
eval '
editrc() {
local _rc
# first safely transfer context
'"$(declare -p _rc)"'
# the rest stays inside the outer single quotes, so do anything inside without caring.
# do anything else
echo "Here'\''s a thing that uses single quotes. As you can see it'\''s not a pain, just choose proper quoting."
code -w "~/$_rc"
. "~/$_rc"
}'
But for me using a quoted here document delimiter allows for easier writing.
While KamilCuk was working on their answer, I devised a function that will take in any function name and a set of variable names, expand just those variables, and redeclare the function.
expandFnVars() {
if [[ $# -lt 2 ]]; then
>&2 echo 'expandFnVars requires at least two arguments: the function name and the variable(s) to be expanded'
return 1
fi
local fn="$1"
shift
local vars=("$#")
if [[ -z "$(declare -F $fn 2> /dev/null)" ]]; then
>&2 echo $fn is not a function.
return 1
fi
foundAllVars=true
for v in "${vars[@]}"; do
if [[ -z "$(declare -p $v 2> /dev/null)" ]]; then
>&2 echo $v is not a declared value.
foundAllVars=false
fi
done
[[ $foundAllVars != true ]] && return 1
fn="$(declare -f $fn)"
for v in "${vars[@]}"; do
local val="$(eval 'echo $'$v)" # get the value of the varable represented by $v
val="${val//\"/\\\"}" # escape any double-quotes
val="${val//\\/\\\\\\}" # escape any backslashes
fn="$(echo "$fn" | sed -r 's/"?\$'$v'"?/"'"$val"'"/g')" # replace instances of "$$v" and $$v with $val
done
eval "$fn"
}
Usage:
foo="foo bar"
bar='$foo'
baz=baz
fn() {
echo $bar $baz
}
expandFnVars fn bar
declare -f fn
# prints:
# fn ()
# {
# echo "$foo" $baz
# }
expandFnVars fn foo
declare -f fn
# prints:
# fn ()
# {
# echo "foo bar" $baz
# }
Looking at it now, I see one flaw. Suppose $bar in the original function was in single quotes. We probably would not want its value to be replaced. This could be fixed by some clever regex lookbehinds to count the number of unescaped single quotes, but I'm happy with it as-is.
I have written shell script with different functions, that I can invoke from the command line like so:
bash script.sh -i -b
So that will run those two functions and not the other ones in the script. However, I want to reverse this logic, by have the script run every function by default if I just do
bash script.sh
And if I pass arguments like -i -b I would like to skip over those functions instead. Any help is appreciated!
To implement these two behaviors:
If no arguments at all, run all functions
For each argument passed, skip some function execution
You can do it like this:
#!/bin/bash
function func_i {
echo "I am i."
}
function func_b {
echo "I am b."
}
function main {
# Check if there are no arguments, run all functions and exit.
if [ $# -eq 0 ]; then
func_i
func_b
exit 0
fi
# Parse arguments -i and -b, marking them for no execution if they are passed to the script.
proc_i=true
proc_b=true
while getopts "ib" OPTION; do
case $OPTION in
i)
proc_i=false
;;
b)
proc_b=false
;;
*)
echo "Incorrect options provided"
exit 1
;;
esac
done
# Execute whatever function is marked for run.
if $proc_i; then func_i; fi
if $proc_b; then func_b; fi
}
main "$#"
Some explanations:
$# returns the number of arguments passed to the script. If $# is equal to 0, then no arguments were passed to the script.
getopts accepts the switches -i and -b; any other switch results in an error handled by the *) case.
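If you later also need an option that takes a value, getopts supports that with a trailing colon and the OPTARG variable; a hedged sketch (the -o option and outfile variable are hypothetical additions, not part of the script above):
while getopts "ibo:" OPTION; do
    case $OPTION in
        i) proc_i=false ;;
        b) proc_b=false ;;
        o) outfile="$OPTARG" ;;          # hypothetical option taking a value
        *) echo "Incorrect options provided"; exit 1 ;;
    esac
done
shift $((OPTIND - 1))                    # discard the parsed options from "$@"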
You could black list items from the list of functions that are called by default. Something like:
#!/bin/bash
list='a b c d e f g h i'
# define some functions
for name in $list; do
eval "func_$name() { echo func_$name called with arg \$1; }"
done
# black list items from list
for x; do
list=$(echo "$list" | tr -d ${x#-})
done
for name in $list; do
func_$name $name
done
But frankly it makes more sense to do something like:
$ cat script.sh
#!/bin/bash
list='a b c d e f g h i'
test $# = 0 && set -- $list # set default list of functions to call
# define some function
for name in $list; do
eval "func_$name() { echo func_$name called with arg \$1; }"
done
for name; do
func_$name $name
done
$ bash ./script.sh
func_a called with arg a
func_b called with arg b
func_c called with arg c
func_d called with arg d
func_e called with arg e
func_f called with arg f
func_g called with arg g
func_h called with arg h
func_i called with arg i
$ bash ./script.sh c g
func_c called with arg c
func_g called with arg g
I want to ask if it is possible to pass arguments to a script function by reference:
i.e. to do something that would look like this in C++:
void boo(int &myint) { myint = 5; }
int main() {
int t = 4;
printf("%d\n", t); // t->4
boo(t);
printf("%d\n", t); // t->5
}
So then in BASH I want to do something like:
function boo ()
{
var1=$1 # now var1 is global to the script but using it outside
# this function makes me lose encapsulation
local var2=$1 # so i should use a local variable ... but how to pass it back?
var2='new' # only changes the local copy
#$1='new' this is wrong of course ...
# ${!1}='new' # can i somehow use indirect reference?
}
# call boo
SOME_VAR='old'
echo $SOME_VAR # -> old
boo "$SOME_VAR"
echo $SOME_VAR # -> new
Any thoughts would be appreciated.
It's 2018, and this question deserves an update. At least in Bash, as of Bash 4.3-alpha, you can use namerefs to pass function arguments by reference:
function boo()
{
local -n ref=$1
ref='new'
}
SOME_VAR='old'
echo $SOME_VAR # -> old
boo SOME_VAR
echo $SOME_VAR # -> new
The critical pieces here are:
Passing the variable's name to boo, not its value: boo SOME_VAR, not boo $SOME_VAR.
Inside the function, using local -n ref=$1 to declare a nameref to the variable named by $1, meaning it's not a reference to $1 itself, but rather to a variable whose name $1 holds, i.e. SOME_VAR in our case. The value on the right-hand side should just be a string naming an existing variable: it doesn't matter how you get the string, so things like local -n ref="my_var" or local -n ref=$(get_var_name) would work too. declare can also replace local in contexts that allow/require that. See chapter on Shell Parameters in Bash Reference Manual for more information.
The advantage of this approach is (arguably) better readability and, most importantly, avoiding eval, whose security pitfalls are many and well-documented.
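Namerefs are not limited to scalars; as a small sketch (the function and variable names are made up), a function can fill a caller-supplied array by name without any eval:
fill_array() {
    local -n target=$1               # nameref to whatever array name the caller passed
    target=(one "two words" three)
}
declare -a results
fill_array results
printf '%s\n' "${results[@]}"        # one / two words / three, each on its own line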
From the Bash man-page (Parameter Expansion):
If the first character of parameter is an exclamation point (!), a
level of variable indirection is introduced. Bash uses the value of
the variable formed from the rest of parameter as the name of the
variable; this variable is then expanded and that value is used in
the rest of the substitution, rather than the value of parameter
itself. This is known as indirect expansion.
Therefore a reference is the variable's name. Here is a swap function using
variable indirection that does not require a temporary variable:
function swap()
{ #
# #param VARNAME1 VARNAME2
#
eval "$1=${!2} $2=${!1}"
}
$ a=1 b=2
$ swap a b
$ echo $a $b
2 1
Use a helper function upvar:
# Assign variable one scope above the caller.
# Usage: local "$1" && upvar $1 value [value ...]
# Param: $1 Variable name to assign value to
# Param: $* Value(s) to assign. If multiple values, an array is
# assigned, otherwise a single value is assigned.
# NOTE: For assigning multiple variables, use 'upvars'. Do NOT
# use multiple 'upvar' calls, since one 'upvar' call might
# reassign a variable to be used by another 'upvar' call.
# See: http://fvue.nl/wiki/Bash:_Passing_variables_by_reference
upvar() {
if unset -v "$1"; then # Unset & validate varname
if (( $# == 2 )); then
eval $1=\"\$2\" # Return single value
else
eval $1=\(\"\${#:2}\"\) # Return array
fi
fi
}
And use it like this from within Newfun():
local "$1" && upvar $1 new
For returning multiple variables, use another helper function upvars. This allows passing multiple variables within one call, thus avoiding possible conflicts if one upvar call changes a variable used in another subsequent upvar call.
See: http://www.fvue.nl/wiki/Bash:_Passing_variables_by_reference for helper function upvars and more information.
The problem with:
eval $1=new
is that it's not safe if $1 happens to contain a command:
set -- 'ls /;true'
eval $1=new # Oops
It would be better to use printf -v:
printf -v "$1" %s new
But printf -v cannot assign arrays.
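For scalars that is usually all you need; a small sketch of printf -v in action (the function and variable names are illustrative), including the array-element form that newer bash accepts:
set_var() {
    printf -v "$1" '%s' "$2"         # assigns to the variable named in $1, no eval involved
}
set_var greeting 'hello world'
echo "$greeting"                     # hello world
printf -v 'arr[0]' '%s' first        # bash 4.1+ can target a single array element...
echo "${arr[0]}"                     # first
# ...but there is no printf -v form that assigns a whole array in one go.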
Moreover, both eval and printf won't work if the variable happens to be declared local:
g() { local b; eval $1=bar; } # WRONG
g b # Conflicts with `local b'
echo $b # b is empty unexpected
The conflict stays there even if local b is unset:
g() { local b; unset b; eval $1=bar; } # WRONG
g b # Still conflicts with `local b'
echo $b # b is empty unexpected
I have found a way to do this but I am not sure how correct this is:
Newfun()
{
local var1="$1"
eval $var1=2
# or can do eval $1=2 if no local var
}
var=1
echo var is $var # $var = 1
Newfun 'var' # pass the name of the variable…
echo now var is $var # $var = 2
So we pass the variable name as opposed to the value and then use eval ...
Bash doesn't have anything like references built into it, so basically the only way you would be able to do what you want is to pass the function the name of the global variable you want it to modify. And even then you'll need an eval statement:
boo() {
eval ${1}="new"
}
SOME_VAR="old"
echo $SOME_VAR # old
boo "SOME_VAR"
echo $SOME_VAR # new
I don't think you can use indirect references here because Bash automatically accesses the value of the variable whose name is stored in the indirect reference. It doesn't give you the chance to set it.
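A quick illustration of that limitation (variable names are arbitrary): indirect expansion can read through a name, but it cannot appear on the left-hand side of an assignment, so writing still needs printf -v, eval, or a nameref.
target=old
name=target
echo "${!name}"                      # indirect *read* works: prints "old"
# ${!name}=new                       # does not assign; after expansion bash just
                                     # looks for a command literally named "old=new"
printf -v "$name" '%s' new           # writing needs printf -v, eval, or local -n
echo "$target"                       # new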
Ok, so this question has been waiting for a 'real' solution for some time now, and I am glad to say that we can now accomplish this without using eval at all.
The key to remember is to declare a reference in both the caller and the callee, at least in my example:
#!/bin/bash
# NOTE this does require a bash version >= 4.3
set -o errexit -o nounset -o posix -o pipefail
passedByRef() {
local -n theRef
if [ 0 -lt $# ]; then
theRef=$1
echo -e "${FUNCNAME}:\n\tthe value of my reference is:\n\t\t${theRef}"
# now that we have a reference, we can assign things to it
theRef="some other value"
echo -e "${FUNCNAME}:\n\tvalue of my reference set to:\n\t\t${theRef}"
else
echo "Error: missing argument"
exit 1
fi
}
referenceTester() {
local theVariable="I am a variable"
# note the absence of quoting and escaping etc.
local -n theReference=theVariable
echo -e "${FUNCNAME}:\n\tthe value of my reference is:\n\t\t${theReference}"
passedByRef theReference
echo -e "${FUNCNAME}:\n\tthe value of my reference is now:\n\t\t${theReference},\n\tand the pointed to variable:\n\t\t${theVariable}"
}
Output when run:
referenceTester:
the value of my reference is:
I am a variable
passedByRef:
the value of my reference is:
I am a variable
passedByRef:
value of my reference set to:
some other value
referenceTester:
the value of my reference is now:
some other value,
and the pointed to variable:
some other value
Eval should never be used on a string that a user can set because it's dangerous. Something like "string; rm -rf ~" will be bad. So generally it's best to find solutions where you don't have to worry about it.
However, eval will be needed to set the passed variables, as the comment noted.
$ y=four
$ four=4
$ echo ${!y}
4
$ foo() { x=$1; echo ${!x}; }
$ foo four
4
#!/bin/bash
append_string()
{
if [ -z "${!1}" ]; then
eval "${1}='$2'"
else
eval "${1}='${!1}''${!3}''$2'"
fi
}
PETS=''
SEP='|'
append_string "PETS" "cat" "SEP"
echo "$PETS"
append_string "PETS" "dog" "SEP"
echo "$PETS"
append_string "PETS" "hamster" "SEP"
echo "$PETS"
Output:
cat
cat|dog
cat|dog|hamster
Structure for calling that function is:
append_string name_of_var_to_update string_to_add name_of_var_containing_sep_char
The names of the variables PETS and SEP are passed to the function, while the string to append is passed the usual way, by value. "${!1}" refers to the contents of the global PETS variable. In the beginning that variable is empty, and content is added each time we call the function. The separator character can be chosen as needed. The lines starting with "eval" update the PETS variable.
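On bash 4.3 or newer the same idea can be written with namerefs instead of eval; a hedged sketch (keeping the same calling convention):
append_string() {
    local -n target=$1               # nameref to the variable being built up
    local -n sep=$3                  # nameref to the variable holding the separator
    if [ -z "$target" ]; then
        target=$2
    else
        target="${target}${sep}$2"
    fi
}
PETS=''
SEP='|'
append_string PETS cat SEP
append_string PETS dog SEP
append_string PETS hamster SEP
echo "$PETS"                         # cat|dog|hamster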
This is what works for me in the Ubuntu bash shell:
#!/bin/sh
iteration=10
increment_count()
{
local i
i=$(($1+1))
eval ${1}=\$i
}
increment_count iteration
echo $iteration #prints 11
increment_count iteration
echo $iteration #prints 12
I'm attempting to write a function in bash that will access the script's command line arguments, but they are replaced with the positional arguments to the function. Is there any way for the function to access the command line arguments if they aren't passed in explicitly?
# Demo function
function stuff {
echo $0 $*
}
# Echo's the name of the script, but no command line arguments
stuff
# Echo's everything I want, but trying to avoid
stuff $*
If you want to have your arguments C style (array of arguments + number of arguments) you can use $# and $@.
$# gives you the number of arguments.
$@ gives you all arguments. You can turn this into an array by args=("$@").
So for example:
args=("$#")
echo $# arguments passed
echo ${args[0]} ${args[1]} ${args[2]}
Note that here ${args[0]} actually is the 1st argument and not the name of your script.
My reading of the Bash Reference Manual says this stuff is captured in BASH_ARGV,
although it talks about "the stack" a lot.
#!/bin/bash
shopt -s extdebug
function argv {
for a in ${BASH_ARGV[*]} ; do
echo -n "$a "
done
echo
}
function f {
echo f $1 $2 $3
echo -n f ; argv
}
function g {
echo g $1 $2 $3
echo -n g; argv
f
}
f boo bar baz
g goo gar gaz
Save in f.sh
$ ./f.sh arg0 arg1 arg2
f boo bar baz
fbaz bar boo arg2 arg1 arg0
g goo gar gaz
ggaz gar goo arg2 arg1 arg0
f
fgaz gar goo arg2 arg1 arg0
#!/usr/bin/env bash
echo name of script is $0
echo first argument is $1
echo second argument is $2
echo seventeenth argument is ${17}
echo number of arguments is $#
Edit: please see my comment on question
Ravi's comment is essentially the answer. Functions take their own arguments. If you want them to be the same as the command-line arguments, you must pass them in. Otherwise, you're clearly calling a function without arguments.
That said, you could if you like store the command-line arguments in a global array to use within other functions:
my_function() {
echo "stored arguments:"
for arg in "${commandline_args[#]}"; do
echo " $arg"
done
}
commandline_args=("$#")
my_function
You have to access the command-line arguments through the commandline_args variable, not $@, $1, $2, etc., but they're available. I'm unaware of any way to assign directly to the argument array, but if someone knows one, please enlighten me!
Also, note the way I've used and quoted $@ - this is how you ensure special characters (whitespace) don't get mucked up.
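A quick way to see the difference (the argument values are arbitrary), comparing the quoted and unquoted forms when one argument contains a space:
show_args() {
    echo "got $# argument(s)"
    printf '  <%s>\n' "$@"           # prints each argument on its own line
}
set -- "first arg" second
show_args "$@"                       # got 2 argument(s): <first arg>, <second>
show_args $@                         # got 3 argument(s): <first>, <arg>, <second>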
# Save the script arguments
SCRIPT_NAME=$0
ARG_1=$1
ARGS_ALL=$*
function stuff {
# use script args via the variables you saved
# or the function args via "$@"
echo $0 $*
}
# Call the function with arguments
stuff 1 2 3 4
One can do it like this as well
#!/bin/bash
# script_name function_test.sh
function argument(){
for i in "$@"; do
echo $i
done;
}
argument "$@"
Now call your script like
./function_test.sh argument1 argument2
This is @mcarifio's response with several comments incorporated:
#!/bin/bash
shopt -s extdebug
function stuff() {
local argIndex="${#BASH_ARGV[@]}"
while [[ argIndex -gt 0 ]] ; do
argIndex=$((argIndex - 1))
echo -n "${BASH_ARGV[$argIndex]} "
done
echo
}
stuff
I want to highlight:
The shopt -s extdebug is important. Without it the BASH_ARGV array will be empty unless you use it in the top-level part of the script (that is, outside of the stuff function). Details here: Why does the variable BASH_ARGV have a different value in a function, depending on whether it is used before calling the function
BASH_ARGV is a stack, so arguments are stored there in reverse order. That's why I decrement the index inside the loop, so we get the arguments back in the right order.
Double quotes around ${BASH_ARGV[@]} and @ as the index instead of * are needed so arguments with spaces are handled properly. Details here: bash arrays - what is the difference between ${#array_name[*]} and ${#array_name[@]}
You can use the shift builtin to iterate through them.
Example:
#!/bin/bash
function print()
{
while [ $# -gt 0 ]
do
echo "$1"
shift 1
done
}
print "$#"
I do it like this:
#! /bin/bash
ORIGARGS="$#"
function init(){
ORIGOPT= "- $ORIGARGS -" # tacs are for sed -E
echo "$ORIGOPT"
}
The simplest and likely the best way to get arguments passed from the command line to a particular function is to include the arguments directly in the function call.
# first you define your function
function func_ImportantPrints() {
printf '%s\n' "$1"
printf '%s\n' "$2"
printf '%s\n' "$3"
}
# then when you make your function call you do this:
func_ImportantPrints "$#"
This works whether you are sending the arguments to main, to a function like func_parseArguments (a function containing a case statement, as seen in previous examples), or to any other function in the script.
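For instance, a minimal sketch of forwarding the script's arguments into such a parsing function (the option names and helper names here are made up):
#!/usr/bin/env bash
func_parseArguments() {
    # sees the script's own arguments because main forwards "$@"
    for arg in "$@"; do
        case "$arg" in
            -v) verbose=true ;;
            -h) echo "usage: $0 [-v] [-h] [file...]"; exit 0 ;;
            *)  files+=("$arg") ;;
        esac
    done
}
main() {
    verbose=false
    files=()
    func_parseArguments "$@"
    if "$verbose"; then echo "verbose mode on"; fi
    if (( ${#files[@]} )); then printf 'file: %s\n' "${files[@]}"; fi
}
main "$@"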