Bash indirect variable assignment inside a function

I have a script where the user input needs to be evaluated several times. The solution I'm working on is to put the evaluation bits into a function and simply call the function every time I need to evaluate the input.
The problem, though, is that when I try to update the $1 variable (which refers to the first parameter of the function) I get the error message "$VARIABLE: command not found".
Here is the code:
function input_handler() {
    if is_integer $1; then
        selid="$1 -1"
        if [[ "$1" -le "0" ]]; then
            echo "Please use a simple positive number!"
        else
            if [[ "$1" -le "${#array[*]}" ]]; then
                eval $1="${array[selid]}"
                echo "Ok, moving on..."
            else
                echo "That number seems too large, try again?"
            fi
        fi
    else
        if [ -e $2/$1 ]; then
            echo "Ok, moving on..."
        else
            echo "That item is not on the list, try again!"
        fi
    fi
}
And this command:
input_handler $doctype $docpath
Gives this output:
5
./test: line 38: 5=sun: command not found
Ok, moving on...
Now this is almost correct, but what I'm after is doctype=sun, not 5=sun; in other words, I need the $1 variable's name, not its value. Changing the line eval $1="${array[selid]}" to eval doctype="${array[selid]}" fixes this particular instance, but that does not solve my problem, as I need to run this function on different variables with different names.

Maybe I don't fully understand what you want to achieve, but check the next example:
weirdfunc () {
    echo " weirdfunc: variable name is: $1"
    echo " weirdfunc: variable value is: ${!1}"
    eval "$1=$(( ${!1} + 1 ))" # assign
}
myvar="5"
echo "the value of myvar before: $myvar"
weirdfunc myvar # call with the NAME, not with the value, so NOT weirdfunc $myvar
echo "the value of myvar after: $myvar"
In short: when you want to do anything with a variable's NAME in a called function, you should pass the NAME of the variable and NOT its value. So call the function
somefunc NAME
instead of
somefunc $NAME
and use the above constructs to get the name and value inside the function.
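If your bash is 4.3 or newer, a nameref gives the same effect without eval; a minimal sketch (the function name bump is mine, not from the question):
bump() {
    local -n ref=$1      # ref is now an alias for the variable whose NAME was passed
    ref=$(( ref + 1 ))   # assigning through ref assigns to that variable
}
myvar=5
bump myvar               # pass the NAME, just as above
echo "$myvar"            # prints 6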

You can't update the value of $1 with a traditional assignment, but you can update the positional parameters with the set builtin.
$ f() { echo "$@"; set -- a b c; echo "$@"; echo $2; }
$ f 1 2 3
1 2 3
a b c
b
Just keep in mind this will wipe out all the positional parameters you don't re-set each time, so you'll need to set $2 if you want to keep it around.
Your best bet is probably to assign the values in the positional parameters to names and just use names from then on.
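For example, with the names from the question:
input_handler() {
    local doctype=$1 docpath=$2   # copy the positionals to names once
    # ...work with $doctype and $docpath from here on...
}
input_handler "$doctype" "$docpath"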

If you protect the variable name, Bash will evaluate and assign to $1 instead of trying to execute $1=value.
eval "$1"=${array[selid]}

You can't assign to a positional parameter directly (only set can change them, as noted above), so what you want to do is not possible that way. You should do something like
foo=$1
and then work with $foo instead of $1

Related

Update value of a variable from a function in shell script

a=master
b="9876"
secondfunction(){
    a=develop
    b="1234"
    echo "Inside second function"
    echo $1
    echo $2
}
secondfunction $a $b
In the above shell script the echo commands print a=master and b=9876. I want to print a=develop and b=1234.
Here you are indeed changing the values of a and b as you want. To see this, you can print a and b after the function has been called:
echo $a $b
secondfunction $a $b
echo $a $b
The output will be master 9876 for the first echo and develop 1234 for the second. It's just that you cannot change the value of $1 and $2 by changing the values of a and b; for that you'll need to explicitly change them using set, as @william has pointed out.
Also, variables in shell scripts are globally scoped unless explicitly declared otherwise. So if all you want to do is change the values of those variables, you need not explicitly pass them into the function as arguments.
a=master
b="9876"
secondfunction(){
    a=develop
    b="1234"
    echo "Inside second function"
    echo $a
    echo $b
}
secondfunction
will also work
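The flip side: if you want a function to use those names without touching the caller's variables, declare them local. A minimal sketch (the function name is mine):
a=master
thirdfunction(){
    local a=develop   # shadows the global only inside the function
    echo "inside: $a"
}
thirdfunction   # prints "inside: develop"
echo $a         # still prints "master"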

`set -u` (nounset) vs checking whether I have arguments

I'm trying to improve this nasty old script. I found an undefined variable error to fix, so I added set -u to catch any similar errors.
I get an undefined variable error for "$1", because of this code
if [ -z "$1" ]; then
process "$command"
It just wants to know if there are arguments or not. (The behaviour when passed an empty string as the first argument is not intended. It won't be a problem if we happen to fix that as well).
What's a good way to check whether we have arguments, when running with set -u?
The code above won't work if we replace "$1" with "$@", because of the special way "$@" is expanded when there is more than one argument.
$# contains the number of arguments, so you can test for $1, $2, etc. to exist before accessing them.
if (( $# == 0 )); then
    # no arguments
else
    # have arguments
fi
You can ignore the automatic exit due to set -u by setting a default value in the parameter expansion:
#!/bin/sh
set -u
if [ -z "${1-}" ] ; then
    echo "\$1 not set or empty"
    exit 1
fi
echo "$2" # this will crash if $2 is unset
The syntax is ${parameter-default}, which gives the string default if the named parameter is unset, and the value of parameter otherwise. Similarly, ${parameter:-default} gives default if the named parameter is unset or empty. Above, we just used an empty default value. (${1:-} would be the same here, since we'd just change an empty value to an empty value.)
That's a feature of the POSIX shell and works with other variables too, not just the positional parameters.
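A quick illustration of the difference between the two forms:
v=""
echo "${v-default}"    # prints an empty line: v is set (empty), so '-' keeps it
echo "${v:-default}"   # prints "default": ':-' also replaces an empty value
unset v
echo "${v-default}"    # prints "default": v is unset, so both forms substitute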
If you want to tell the difference between an unset variable and an empty value, use ${par+x}:
if [ "${1+x}" != x ] ; then
echo "\$1 is not set"
fi
My personal favorite:
if (($#)); then
    # We have at least one argument
fi
Or:
if ((!$#)); then
    # We have no argument
fi
If each positional argument has a fixed meaning, you can also use this construct:
: ${1:?Missing first argument}
If the first positional argument isn't set, the shell will print "Missing first argument" as an error message and exit. Otherwise, the rest of the script can continue, safe in the knowledge that $1 does, indeed, have a non-empty value.
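For example, in a script (the file name check.sh is mine, and the exact error text varies by shell):
#!/bin/sh
: ${1:?Missing first argument}
echo "got: $1"
Run with no arguments it prints something like ./check.sh: 1: Missing first argument and exits; run with an argument it prints got: followed by that argument.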
Use $#, the number of arguments. This provides the most consistent handling for empty arguments.
You might also see the use of "$*". It is similar to "$@", but it is expanded differently when there are multiple arguments.
a() {
    for i in "$@"; do echo $i; done
}
b() {
    for i in "$*"; do echo $i; done
}
c() {
    echo $#
}
echo "a()"
a "1 2" 3
echo "b()"
b "1 2" 3
echo "c()"
c "1 2" 3
# Result:
a()
1 2
3
b()
1 2 3
c()
2

Getting piped data to functions

Example output
Say I have a function, a:
function a() {
    read -r VALUE
    if [[ -n "$VALUE" ]]; then # empty variable check
        echo "$VALUE"
    else
        echo "Default value"
    fi
}
So, to demonstrate piping to that function:
nick@nick-lt:~$ echo "Something" | a
Something
However, piping data to this function should be optional. So, this should also be valid and give the following output:
nick@nick-lt:~$ a
Default value
However, the function hangs, as the read command waits for data from stdin.
What I've tried
Honestly not a lot, because I don't know much about this, and searching on Google returned very little.
Conceptually, I thought there might be a way to "push" an empty (or whitespace, whatever works) value to the stdin stream, so that even empty stdin at least has this value appended/prepended, triggering read and then simply trim off that first/last character. I didn't find a way to do this.
Question
How can I, if possible, make both of the above scenarios work for function a, so that piping is optional?
EDIT: Apologies, quickly written question. Should work properly now.
One way is to check whether standard input (fd 0) is a terminal. If so, don't read, because that will cause the user to have to enter something.
function a() {
    value=""
    if [ ! -t 0 ] ; then # read only if fd 0 is a pipe (not a tty)
        read -r value
    fi
    if [ "$value" ] ; then # if nonempty, print it!
        echo "$value"
    else
        echo "Default value"
    fi
}
I checked this on cygwin: a prints "Default value" and echo 42 | a prints "42".
Two issues with the original version of the question:
Syntactic: you need a space before the closing ]].
Algorithmic: you need the -n (non-zero length) variable test, not -z (zero length).
So:
if [[ -n "$VALUE" ]]; then
Or simply:
if [[ "$VALUE" ]]; then
As [[ is a shell keyword that does no word splitting, you don't strictly need the double quotes:
if [[ $VALUE ]]; then
Also refrain from using all-uppercase variable names, as those are conventionally reserved for environment variables, and yours might overwrite an existing one. So use a lowercase variable name:
if [[ $value ]]; then
unless you are export-ing the variable and strictly need it uppercased; in that case, make sure it is not overwriting an existing one.
Also, I would add a timeout to read, e.g. -t 5 for 5 seconds, so that if no input is entered the default value is printed. And change the function name to something more meaningful.
Do:
function myfunc () {
    read -rt5 value
    if [[ "$value" ]]; then
        echo "$value"
    else
        echo "Default value"
    fi
}
Example:
$ function myfunc () { read -rt5 value; if [[ "$value" ]]; then echo "$value"; else echo "Default value"; fi ;}
$ myfunc
Default value
$ echo "something" | myfunc
something
$ myfunc
foobar
foobar

In a function Bash: how to check if an argument is a set variable?

I want to implement a bash function which tests whether its 1st argument is the name of a variable defined somewhere.
For instance, in my .bashrc:
customPrompt='yes';
syntaxOn='no';
[...]
function my_func {
    [...]
    # I want to test if the string $1 is the name of a variable defined up above
    # so something like:
    if [[ $$1 == 'yes' ]]; then
        echo "$1 is set to yes";
    else
        echo "$1 is not set or != to yes";
    fi
    # but of course $$1 doesn't work
}
Output needed:
$ my_func customPrompt
> customPrompt is set to yes
$ my_func syntaxOn
> syntaxOn is set but != to yes
$ my_func foobar
> foobar is not set
I tried a lot of tests, like -v "$1", -z "$1", -n "$1", but all of them test $1 as a string, not as a variable.
(Please correct me if I haven't made myself clear enough.)
In bash you can use indirect variable substitution.
t1=some
t2=yes
fufu() {
    case "${!1}" in
        yes) echo "$1: set to yes. Value: ${!1}";;
        '')  echo "$1: not set. Value: ${!1:-UNDEF}";;
        *)   echo "$1: set to something other than yes. Value: ${!1}";;
    esac
}
fufu t1
fufu t2
fufu t3
prints
t1: set to something other than yes. Value: some
t2: set to yes. Value: yes
t3: not set. Value: UNDEF
The ${!variablename} construct in bash means indirect variable expansion, as described at e.g. https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html
Where:
The basic form of parameter expansion is ${parameter}. The value of
parameter is substituted. The braces are required when parameter is a
positional parameter with more than one digit, or when parameter is
followed by a character that is not to be interpreted as part of its
name.
If the first character of parameter is an exclamation point (!), a
level of variable indirection is introduced. Bash uses the value of
the variable formed from the rest of parameter as the name of the
variable; this variable is then expanded and that value is used in the
rest of the substitution, rather than the value of parameter itself.
This is known as indirect expansion. The exceptions to this are the
expansions of ${!prefix*} and ${!name[@]} described below. The
exclamation point must immediately follow the left brace in order to
introduce indirection.
Also, check this answer for how to modify, inside a function, the value of a variable passed indirectly: https://stackoverflow.com/a/16131829/632407
You can check whether a variable is set simply like this:
if [[ $var ]]; then
    echo "$var"
else
    echo "Sorry, set the variable first"
fi
You can do something like this for your script
customPrompt='yes';
syntaxOn='no';
function my_func
{
    if [[ ${!1} ]]; then
        echo "$1 is set to ${!1}";
    else
        echo "$1 is not set";
    fi
}
my_func customPrompt
my_func syntaxOn
my_func foobar
Output:
customPrompt is set to yes
syntaxOn is set to no
foobar is not set
You can customize the function to your requirements by adding comparison conditions.
For more details you can check this answer
If you really want to check if your variable is set or unset (not just empty), use this format:
function my_func {
    if [[ -z ${!1+.} ]]; then
        echo "$1 is not set."
    elif [[ ${!1} == yes ]]; then
        echo "$1 is set to yes"
    else
        echo "$1 is set to \"${!1}\"."
    fi
}
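With the variables from the question, this behaves as requested:
$ customPrompt='yes'; syntaxOn='no'
$ my_func customPrompt
customPrompt is set to yes
$ my_func syntaxOn
syntaxOn is set to "no".
$ my_func foobar
foobar is not set.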
You're going to have problems...
The Bash shell is a very wily creature. Before you execute anything, Bash comes in and interpolates your command. Your command or shell script never sees whether or not you have a variable as a parameter.
$ set -x
set -x
$ foo=bar
+ foo=bar
$ echo "$foo"
+ echo bar
bar
$ set +x
The set -x turns on debugging mode in the shell. It shows you what a command actually executes. For example, I set foo=bar and then do echo $foo. My echo command doesn't see $foo. Instead, before echo executes, the shell interpolates $foo to bar. All echo sees at this point is that it's supposed to take bar as its argument (not $foo).
This is awesomely powerful. It means that your program doesn't have to sit there and interpret the command line. If you typed echo *.txt, echo doesn't have to expand *.txt because the shell has already done the dirty work.
For example, here's a test shell script:
#! /bin/sh
if [[ $1 = "*" ]]
then
    echo "The first argument was '*'"
else
    echo "I was passed in $# parameters"
fi
Now, I'll run my shell script:
$ test.sh *
I was passed in 24 parameters
What? Wasn't the first parameter of my script a *? No. The shell grabbed * and expanded it to be all of the files and directories in my directory. My shell script never saw the *. However, I can do this:
$ test.sh '*'
The first argument was '*'
The single quotes tell the shell not to interpolate anything. (Double quotes prevent globbing, but still allow for environment variable expansion).
Thus, if I wanted to see if my first parameter is a variable, I would have to pass it in single quotes:
$ test.sh '$foo'
And, I can do this as a test:
if [[ $1 != ${1#$} ]]
then
    echo "The first parameter is the variable '$1'"
fi
The ${1#$} looks a bit strange, but it's just ${var#pattern}. This removes pattern from the leftmost side of $var. I am taking $1 and removing the $ if it exists. With $1 holding the string $foo, the test expands in the shell to:
if [[ '$foo' != foo ]]
which is true.
So, several things:
First, you've got to stop the shell from interpolating your variable. That means you have to use single quotes around the name.
Second, you have to use pattern matching to verify that the first parameter starts with a $.
Once you do that, you can strip the leading $ and use indirect expansion, ${!name}, to get at the variable's value, as sketched below.
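Putting the two steps together, a minimal sketch (the function name show_var is mine):
show_var() {
    if [[ $1 != ${1#$} ]]; then   # first parameter starts with '$'
        local name=${1#$}         # strip the leading '$'
        echo "value of $1 is: ${!name}"
    else
        echo "'$1' does not look like a variable"
    fi
}
foo=bar
show_var '$foo'   # single quotes keep the shell from expanding it first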

Is there a way to avoid positional arguments in bash?

I have to write a function in bash. The function will take about 7 arguments. I know that I can call a function like this:
To call a function with parameters:
function_name $arg1 $arg2
And I can refer to my parameters like this inside the function:
function_name () {
    echo "Parameter #1 is $1"
}
My question is: is there a better way to refer to the parameters inside the function? Can I avoid the $1, $2, $3, ... thing and simply use $arg1, $arg2, ...?
Is there a proper method for this, or do I need to re-assign these parameters to some other variables inside the function? E.g.:
function_name () {
    ARG1=$1
    echo "Parameter #1 is $ARG1"
}
Any example would be much appreciated.
The common way of doing that is assigning the arguments to local variables in the function, i.e.:
copy() {
    local from=${1}
    local to=${2}
    # ...
}
Another solution may be getopt-style option parsing.
copy() {
    local arg from to
    while getopts 'f:t:' arg
    do
        case ${arg} in
            f) from=${OPTARG};;
            t) to=${OPTARG};;
            *) return 1 # illegal option
        esac
    done
}
copy -f /tmp/a -t /tmp/b
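One gotcha worth noting: getopts keeps its position in the global OPTIND, which is not reset between function calls, so a second call to copy in the same shell may start parsing in the wrong place. Declaring it local alongside the other variables (commonly local OPTIND=1) avoids that stale state.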
Sadly, bash's getopts can't handle long options, which would be more readable, i.e.:
copy --from /tmp/a --to /tmp/b
For that, you either need to use the external getopt program (which I think has long option support only on GNU systems) or implement the long option parser by hand, i.e.:
copy() {
    local from to
    while [[ ${1} ]]; do
        case "${1}" in
            --from)
                from=${2}
                shift
                ;;
            --to)
                to=${2}
                shift
                ;;
            *)
                echo "Unknown parameter: ${1}" >&2
                return 1
        esac
        if ! shift; then
            echo 'Missing parameter argument.' >&2
            return 1
        fi
    done
}
copy --from /tmp/a --to /tmp/b
Also see: using getopts in bash shell script to get long and short command line options
You can also be lazy, and just pass the 'variables' as arguments to the function, i.e.:
copy() {
    local "${@}"
    # ...
}
copy from=/tmp/a to=/tmp/b
and you'll have ${from} and ${to} in the function as local variables.
Just note that the same issue as below applies — if a particular variable is not passed, it will be inherited from parent environment. You may want to add a 'safety line' like:
copy() {
    local from to # reset first
    local "${@}"
    # ...
}
to ensure that ${from} and ${to} will be unset when not passed.
And if you don't mind something a bit dangerous, you could also assign the arguments as global variables when invoking the function, i.e.:
from=/tmp/a to=/tmp/b copy
Then you could just use ${from} and ${to} within the copy() function. Just note that you should then always pass all parameters. Otherwise, a random variable may leak into the function.
from= to=/tmp/b copy # safe
to=/tmp/b copy # unsafe: ${from} may be declared elsewhere
If you have bash 4.1 (I think), you can also try using associative arrays. It will allow you to pass named arguments but it will be ugly. Something like:
declare -A args=( [from]=/tmp/a [to]=/tmp/b )
copy args
And then in copy(), you'd need to grab the array.
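A minimal sketch of "grabbing" the array inside the function, assuming bash >= 4.3 for namerefs:
copy() {
    local -n opts=$1   # nameref to the caller's associative array
    echo "copying ${opts[from]} to ${opts[to]}"
}
declare -A args=( [from]=/tmp/a [to]=/tmp/b )
copy args   # pass the array's name, not its contents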
You can always pass things through the environment:
#!/bin/sh
foo() {
    echo arg1 = "$arg1"
    echo arg2 = "$arg2"
}
arg1=banana arg2=apple foo
All you have to do is name variables on the way in to the function call.
function test() {
    echo $a
}
a='hello world' test
# prove the variable didn't leak
echo $a .
This isn't just a feature of functions; you could have that function in its own script and call a='hello world' test.sh and it would work just the same.
As an extra little bit of fun, you can combine this method with positional arguments (say you were making a script and some users mightn't know the variable names).
Heck, why not let it have defaults for those arguments too? Well sure, easy peasy!
function test2() {
    [[ -n "$1" ]] && local a="$1"; [[ -z "$a" ]] && local a='hi'
    [[ -n "$2" ]] && local b="$2"; [[ -z "$b" ]] && local b='bye'
    echo $a $b
}
# see the defaults
test2
# use positional as usual
test2 '' there
# use named parameter
a=well test2
# mix it up
b=one test2 nice
# prove variables didn't leak
echo $a $b .
Note that if test was its own script, you don't need to use the local keyword.
Shell functions have full access to any variable available in their calling scope, except for those variable names that are used as local variables inside the function itself. In addition, any non-local variable set within a function is available on the outside after the function has been called. Consider the following example:
A=aaa
B=bbb
echo "A=$A B=$B C=$C"
example() {
    echo "example(): A=$A B=$B C=$C"
    A=AAA
    local B=BBB
    C=CCC
    echo "example(): A=$A B=$B C=$C"
}
example
echo "A=$A B=$B C=$C"
This snippet has the following output:
A=aaa B=bbb C=
example(): A=aaa B=bbb C=
example(): A=AAA B=BBB C=CCC
A=AAA B=bbb C=CCC
The obvious disadvantage of this approach is that functions are not self-contained any more and that setting a variable outside a function may have unintended side-effects. It would also make things harder if you wanted to pass data to a function without assigning it to a variable first, since this function is not using positional parameters any more.
The most common way to handle this is to use local variables for arguments and any temporary variable within a function:
example() {
    local A="$1" B="$2" C="$3" TMP="/tmp"
    ...
}
This avoids polluting the shell namespace with function-local variables.
I think I have a solution for you.
With a few tricks you can actually pass named parameters to functions, along with arrays.
The method I developed allows you to access parameters passed to a function like this:
testPassingParams() {
    @var hello
    l=4 @array anArrayWithFourElements
    l=2 @array anotherArrayWithTwo
    @var anotherSingle
    @reference table # references only work in bash >= 4.3
    @params anArrayOfVariedSize

    test "$hello" = "$1" && echo correct
    #
    test "${anArrayWithFourElements[0]}" = "$2" && echo correct
    test "${anArrayWithFourElements[1]}" = "$3" && echo correct
    test "${anArrayWithFourElements[2]}" = "$4" && echo correct
    # etc...
    #
    test "${anotherArrayWithTwo[0]}" = "$6" && echo correct
    test "${anotherArrayWithTwo[1]}" = "$7" && echo correct
    #
    test "$anotherSingle" = "$8" && echo correct
    #
    test "${table[test]}" = "works"
    table[inside]="adding a new value"
    #
    # I'm using * just in this example:
    test "${anArrayOfVariedSize[*]}" = "${*:10}" && echo correct
}

fourElements=( a1 a2 "a3 with spaces" a4 )
twoElements=( b1 b2 )
declare -A assocArray
assocArray[test]="works"

testPassingParams "first" "${fourElements[@]}" "${twoElements[@]}" "single with spaces" assocArray "and more... " "even more..."

test "${assocArray[inside]}" = "adding a new value"
In other words, not only can you call your parameters by their names (which makes for more readable code), you can actually pass arrays (and references to variables; this feature works only in bash 4.3, though)! Plus, the mapped variables are all in the local scope, just as $1 (and the others) are.
The code that makes this work is pretty light and works both in bash 3 and bash 4 (these are the only versions I've tested it with). If you're interested in more tricks like this that make developing with bash much nicer and easier, you can take a look at my Bash Infinity Framework; the code below was developed for that purpose.
Function.AssignParamLocally() {
    local commandWithArgs=( $1 )
    local command="${commandWithArgs[0]}"

    shift

    if [[ "$command" == "trap" || "$command" == "l="* || "$command" == "_type="* ]]
    then
        paramNo+=-1
        return 0
    fi

    if [[ "$command" != "local" ]]
    then
        assignNormalCodeStarted=true
    fi

    local varDeclaration="${commandWithArgs[1]}"
    if [[ $varDeclaration == '-n' ]]
    then
        varDeclaration="${commandWithArgs[2]}"
    fi
    local varName="${varDeclaration%%=*}"

    # var value is only important if making an object later on from it
    local varValue="${varDeclaration#*=}"

    if [[ ! -z $assignVarType ]]
    then
        local previousParamNo=$(expr $paramNo - 1)

        if [[ "$assignVarType" == "array" ]]
        then
            # passing array:
            execute="$assignVarName=( \"\${@:$previousParamNo:$assignArrLength}\" )"
            eval "$execute"
            paramNo+=$(expr $assignArrLength - 1)

            unset assignArrLength
        elif [[ "$assignVarType" == "params" ]]
        then
            execute="$assignVarName=( \"\${@:$previousParamNo}\" )"
            eval "$execute"
        elif [[ "$assignVarType" == "reference" ]]
        then
            execute="$assignVarName=\"\$$previousParamNo\""
            eval "$execute"
        elif [[ ! -z "${!previousParamNo}" ]]
        then
            execute="$assignVarName=\"\$$previousParamNo\""
            eval "$execute"
        fi
    fi

    assignVarType="$__capture_type"
    assignVarName="$varName"
    assignArrLength="$__capture_arrLength"
}

Function.CaptureParams() {
    __capture_type="$_type"
    __capture_arrLength="$l"
}

alias @trapAssign='Function.CaptureParams; trap "declare -i \"paramNo+=1\"; Function.AssignParamLocally \"\$BASH_COMMAND\" \"\$@\"; [[ \$assignNormalCodeStarted = true ]] && trap - DEBUG && unset assignVarType && unset assignVarName && unset assignNormalCodeStarted && unset paramNo" DEBUG; '
alias @param='@trapAssign local'
alias @reference='_type=reference @trapAssign local -n'
alias @var='_type=var @param'
alias @params='_type=params @param'
alias @array='_type=array @param'
I was personally hoping to see some sort of syntax like
func(a b){
    echo $a
    echo $b
}
But since that's not a thing, and I see quite a few references to global variables (not without the caveat of scoping and naming conflicts), I'll share my approach.
Using the copy function from Michal's answer:
copy(){
    cp $from $to
}
from=/tmp/a
to=/tmp/b
copy
This is bad, because from and to are such broad words that any number of functions could use this. You could quickly end up with a naming conflict or a "leak" on your hands.
letter(){
    echo "From: $from"
    echo "To: $to"
    echo
    echo "$1"
}
to=Emily
letter "Hello Emily, you're fired for missing two days of work."
# Result:
# From: /tmp/a
# To: Emily
# Hello Emily, you're fired for missing two days of work.
So my approach is to "namespace" them. I name the variable after the function and delete it after the function is done with it. Of course, I only use it for optional values that have default values. Otherwise, I just use positional args.
copy(){
    if [[ $copy_from ]] && [[ $copy_to ]]; then
        cp $copy_from $copy_to
        unset copy_from copy_to
    fi
}
copy_from=/tmp/a
copy_to=/tmp/b
copy # Copies /tmp/a to /tmp/b
copy # Does nothing, as it ought to
letter "Emily, you're 'not' re-hired for the 'not' bribe ;)"
# From: (no /tmp/a here!)
# To:
# Emily, you're 'not' re-hired for the 'not' bribe ;)
I would make a terrible boss...
In practice, my function names are more elaborate than "copy" or "letter".
The most recent example to my memory is get_input(), which has gi_no_sort and gi_prompt.
gi_no_sort is a true/false value that determines whether the completion suggestions are sorted or not. Defaults to true
gi_prompt is a string that is...well, that's self-explanatory. Defaults to "".
The actual arguments the function takes are the source of the aforementioned 'completion suggestions' for the input prompt. Since that list is taken from "$@" inside the function, the "named args" are optional[1], and there's no obvious way to distinguish a string meant as a completion from a boolean or a prompt message, or really anything space-separated in bash, for that matter[2]; the above solution ended up saving me a lot of trouble.
notes:
[1] So a hard-coded shift and $1, $2, etc. are out of the question.
[2] E.g., is "0 Enter a command: {1..9} $(ls)" a value of 0, "Enter a command:", and a set of 1 2 3 4 5 6 7 8 9 <directory contents>? Or are "0", "Enter", "a", and "command:" part of that set as well? Bash will assume the latter whether you like it or not.
Arguments get sent to functions as a tuple of individual items, so they have no names as such, just positions. This allows some interesting possibilities, like below, but it does mean that you are stuck with $1, $2, etc. As to whether to map them to better names, the question comes down to how big the function is and how much clearer it will make the code. If it's complex, then mapping meaningful names ($BatchID, $FirstName, $SourceFilePath) is a good idea. For simple stuff, though, it probably isn't necessary. I certainly wouldn't bother if you are using names like $arg1.
Now, if you just want to echo back the parameters, you can iterate over them:
for arg in "$@"
do
    echo "$arg"
done
Just a fun fact; unless you are processing a list, you are probably interested in something more useful.
This is an older topic, but I'd still like to share the function below (requires bash 4). It parses named arguments and sets the variables in the script's environment. Just make sure you have sane default values for all parameters you need. The export statement at the end could also just be an eval. It's great in combination with shift for extending existing scripts which already take a few positional parameters, where you don't want to change the syntax but still want to add some flexibility.
parseOptions()
{
    args=("$@")
    for opt in "${args[@]}"; do
        if [[ ! "${opt}" =~ .*=.* ]]; then
            echo "badly formatted option \"${opt}\" should be: option=value, stopping..."
            return 1
        fi
        local var="${opt%%=*}"
        local value="${opt#*=}"
        export ${var}="${value}"
    done
    return 0
}
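A hypothetical invocation, to illustrate (the option names are mine):
batchId=0           # sane defaults first
sourcePath=/tmp/in
parseOptions batchId=42 sourcePath=/data/in || exit 1
echo "$batchId $sourcePath"   # prints: 42 /data/in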
