I'm creating a Unix shell script that calls a function and passes arguments to it. Once called, the function should identify how many parameters were passed to it. I tried the normal way of calling a function and passing arguments, and it works. However, I noticed that the function counts the arguments word by word. My problem is: what if a single argument contains spaces, or I have multiple arguments where some of them should be treated as a single argument containing spaces? Is it possible for the function to treat such an argument as a single argument? I already used double quotes and it didn't work.
Here is the relevant portion of my script.
#!/usr/bin/ksh
ARG_CNT() {
SCRIPT_AR_CNT=$#
if [ "$SCRIPT_AR_CNT" -lt 3 ]; then
echo "Error. Incorrect number of arguments specified."
echo "Error. Execute \"./script_template.ksh -h\" for help."
exit 1
fi
}
echo "Specify the Arguments: "
read SCRIPT_AR
if [ "${SCRIPT_AR}" = "" ] || [ "${SCRIPT_AR}" = "." ]; then
exit
else
ARG_CNT $SCRIPT_AR
fi
Your problem is that you're not quoting your variables:
ARG_CNT $SCRIPT_AR
If you don't quote regular variables, they'll be split on $IFS. You should only leave out quotes if you explicitly want this kind of splitting, and that should be rare (so comment it). Quoting also slightly improves performance.
ARG_CNT "$SCRIPT_AR"
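To see the splitting in action, here is a minimal sketch (the `count` helper and variable names are mine, not from the post): the same string produces a different `$#` depending on whether the expansion is quoted.

```shell
# count simply reports how many arguments it received.
count() { echo "$#"; }

input="one two three"

unquoted=$(count $input)     # unquoted: split on $IFS into three words
quoted=$(count "$input")     # quoted: passed through as one argument

echo "unquoted: $unquoted"   # prints: unquoted: 3
echo "quoted: $quoted"       # prints: quoted: 1
```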
If I may suggest more edits:
#!/usr/bin/ksh
arg_cnt() {
# ALL_CAPS should be reserved for environment variables (exported vars)
# and shell configuration variables.
script_ar_cnt=$#
[ "$script_ar_cnt" -lt 3 ] && {
echo "Error. Incorrect number of arguments specified."
echo "Error. Execute \"./script_template.ksh -h\" for help."
exit 1
} >&2
}
echo "Specify the Arguments: "
read script_ar
ex_dataerr=65 # data format error
{ [ -z "$script_ar" ] || [ "$script_ar" = "." ]; } && exit "$ex_dataerr"
arg_cnt "$script_ar"
I'm trying to improve this nasty old script. I found an undefined variable error to fix, so I added set -u to catch any similar errors.
I get an undefined variable error for "$1", because of this code
if [ -z "$1" ]; then
process "$command"
It just wants to know whether there are arguments or not. (The behaviour when an empty string is passed as the first argument is not intended; it won't be a problem if we happen to fix that as well.)
What's a good way to check whether we have arguments, when running with set -u?
The code above won't work if we replace "$1" with "$@", because of the special way "$@" is expanded when there is more than one argument.
$# contains the number of arguments, so you can test for $1, $2, etc. to exist before accessing them.
if (( $# == 0 )); then
# no arguments
else
# have arguments
fi;
You can ignore the automatic exit due to set -u by setting a default value in the parameter expansion:
#!/bin/sh
set -u
if [ -z "${1-}" ] ; then
echo "\$1 not set or empty"
exit 1
fi
echo "$2" # this will crash if $2 is unset
The syntax is ${parameter-default}, which gives the string default if the named parameter is unset, and the value of parameter otherwise. Similarly, ${parameter:-default} gives default if the named parameter is unset or empty. Above, we just used an empty default value. (${1:-} would be the same here, since we'd just change an empty value to an empty value.)
That's a feature of the POSIX shell and works with other variables too, not just the positional parameters.
If you want to tell the difference between an unset variable and an empty value, use ${par+x}:
if [ "${1+x}" != x ] ; then
echo "\$1 is not set"
fi
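A minimal sketch contrasting the three expansions side by side (`var` and the `r*` names are throwaway names of mine):

```shell
unset var
r1=${var-default}    # "-"  fires when unset           -> "default"
r2=${var+x}          # "+"  does not fire when unset   -> ""
var=""
r3=${var-default}    # set (though empty): "-" silent  -> ""
r4=${var:-default}   # ":-" also fires when empty      -> "default"
r5=${var+x}          # "+"  fires once the var is set  -> "x"
printf '%s|%s|%s|%s|%s\n' "$r1" "$r2" "$r3" "$r4" "$r5"   # default|||default|x
```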
My personal favorite:
if
(($#))
then
# We have at least one argument
fi
Or:
if
((!$#))
then
# We have no argument
fi
If each positional argument has a fixed meaning, you can also use this construct:
: ${1:?Missing first argument}
If the first positional argument isn't set, the shell will print "Missing first argument" as an error message and exit. Otherwise, the rest of the script can continue, safe in the knowledge that $1 does, indeed, have a non-empty value.
Use $#, the number of arguments. This provides the most consistent handling for empty arguments.
You might also see the use of "$*". It is similar to "$@", but it is expanded differently when there are multiple arguments.
a() {
for i in "$@"; do echo $i; done
}
b() {
for i in "$*"; do echo $i; done
}
c() {
echo $#
}
echo "a()"
a "1 2" 3
echo "b()"
b "1 2" 3
echo "c()"
c "1 2" 3
# Result:
a()
1 2
3
b()
1 2 3
c()
2
I have a bash statement to test a command line argument. If the argument passed to the script is "clean", then the script removes all .o files. Otherwise, it builds a program. However, no matter what is passed (if anything), the script still thinks that the argument "clean" is being passed.
#!/bin/bash
if test "`whoami`" != "root" ; then
echo "You must be logged in as root to build (for loopback mounting)"
echo "Enter 'su' or 'sudo bash' to switch to root"
exit
fi
ARG=$1
if [ $ARG == "clean" ] ; then
echo ">>> cleaning up object files..."
rm -r src/*.o
echo ">>> done. "
echo ">>> Press enter to continue..."
read
else
: # build commands elided (an else branch containing only a comment is a syntax error)
fi
Answer for first version of question
In bash, spaces are important. Replace:
[ $ARG=="clean" ]
With:
[ "$ARG" = "clean" ]
bash interprets $ARG=="clean" as a single string. If a single string is placed in a test statement, test returns false if the string is empty and true if it is non-empty. $ARG=="clean" will never be empty, so [ $ARG=="clean" ] will always return true.
Second, $ARG should be quoted. Otherwise, if it is empty, the statement reduces to [ == "clean" ], which is an error ("unary operator expected").
Third, it is best practice to use lower or mixed case for your local variables. The system uses upper-case shell variables and you don't want to accidentally overwrite one of them.
Lastly, with [...], the POSIX operator for string equality is =. Bash will accept either = or ==, but = is more portable.
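A short sketch (variable names are mine) of what the quotes buy you when the variable is empty:

```shell
arg=""
if [ "$arg" = "clean" ]; then   # quoted: "" = "clean" is simply false
    result="clean"
else
    result="build"
fi
echo "$result"                  # prints: build
# Unquoted, [ $arg = "clean" ] collapses to [ = "clean" ],
# and test complains: "unary operator expected".
```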
First:
Every string must be double-quoted, or you will get an error when an argument is absent.
Second:
For strings use only = or != (never ==); there are also the -n and -z operators.
Third:
You may combine conditions with -a and -o, but never enclose your conditions in bare parentheses (they must be escaped), or you will get an error. The operators act through precedence: -a binds more tightly than -o. For example
[ -n "$1" -a "$1" = '-h' -o "$1" = '--help' ] && { usage; exit 0; }
works when at least one argument is passed to the script and it is -h or --help. All of the spaces are required. If your argument may contain spaces, you need to double-quote it inside the test, and on the command line as well; otherwise you get an error in the script, or your argument is split into two or more parts.
The == operator isn't used in test. For numbers (not strings) use the -eq and -ne operators. See man 1 test for full descriptions. test EXPRESSION... is equivalent to [ EXPRESSION... ]; [ is just a shorter form of test.
I am having a problem with testing $? within a function, regardless of how it is passed.
__myretval () {
if [ $1 -ne 0 ]; then
printf -- "%s" "$1"
fi
}
PS1="$(__myretval '$?') $"
The goal is to have return values show when they are not 0. The real function is MUCH more detailed than this and the logic must stay in the function, so please do not suggest pulling it out of the function.
$ false
1 $ true
$
I have tried every combination I can think of, but nothing seems to work, including but not limited to combinations of the following. I've tried putting the value in quotes, and without quotes; I've tried doing the same for the 0, with quotes and without.
if [ $1 -ne 0 ]; then
if [ $1 != 0 ]; then
if [ $? -ne 0 ]; then
if [ $? != 0 ]; then
PS1="$(__myretval "$?") $"
PS1="$(__myretval "\$?") $"
Either the value always prints or it never prints.
This works for me:
__myretval () {
if (($1)); then
printf -- "%s" "$1"
fi
}
PS1='$(__myretval "$?") $'
It seems your problem was with the quotes.
When you state:
PS1="$(__myretval '$?') $"
what you're doing (because of the double quotes) is setting PS1 once, to the output of the function __myretval evaluated at assignment time. So your PS1 never changes.
What you want instead is PS1 to contain the string:
$(__myretval "$?") $
so that this string is expanded (evaluated) at each new prompt. That's why you should use single quotes to define your PS1.
You need to alternate the use of single ' and double " quotes to control when
the contained expression and variables get evaluated.
Also, why pass $? as a parameter? Add local status=$? as the first statement in the function, and you won't need to pass anything:
__myretval () {
local status=$?
[[ $status -eq 0 ]] && return
echo $status
}
Also, because $status is now guaranteed to contain a valid string with no surprises, we won't need printf, as $( ... ) will drop the final newline.
How you set PS1 will require appropriate quoting:
PS1="${OTHER_VARIABLE}"'$(__myretval) $ '
Double quotes evaluate when PS1 gets set, single quotes postpone that.
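The timing difference is easy to demonstrate with an ordinary variable (names below are mine; the eval stands in for what the shell does to PS1 at each prompt):

```shell
who="world"
now="hello $who"     # double quotes: $who expanded immediately
later='hello $who'   # single quotes: stored literally, like a single-quoted PS1
who="everyone"
echo "$now"          # prints: hello world
eval "echo $later"   # expanded only now: prints hello everyone
```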
I'm writing a bash script to automate some sysadmin stuff. I start with checking that a number of variables are the defined. The way I'm doing that now is like so:
function is_defined {
if [ -z "$2" ]; then
echo "$1 is not defined"
exit
fi
}
is_defined "PROJECTNAME" $PROJECTNAME
What I would love to have is a function that only takes one argument: the variable name as a string, checks that it is defined and if it's not defined tell the user so and exit.
What's the right substitution magic to do this in bash?
Something like this:
function is_defined {
if [ -z "${!1}" ]; then
echo "$1 is not defined"
exit 1
fi
}
${!1}, as @sehe already stated, will expand to the value of the variable whose name is given in $1.
It is possible, with an exotic parameter expansion: ${!var}, which expands to the value of the variable whose name is stored in $var.
Version 1
is_defined() {
if [ -z "${!1}" ]; then
echo "$1 is not defined"
exit 1
fi
}
But we can simplify it further:
bash has the ${var?errormsg} parameter expansion among its lesser-known features. It basically means "if var is defined, expand to its value; otherwise, print errormsg, set $? nonzero and skip to next command". errormsg is optional, and defaults to parameter null or not set (but the ? is required). As usual with exotic parameter expansions, it can be modified with a colon (${var:?errormsg}) to also error if the variable has an empty value.
In a non-interactive shell, an error generated by this kind of parameter expansion will abort the shellscript.
Version 2
is_defined() {
: ${!1:?"parameter $1 null or unset"}
}
Tested on my MinGW bash just now. The : command ignores all its arguments, does nothing, and returns success. (This does have the annoying side-effect of polluting your error message by prefixing it with sh: !1:; use at your own discretion.)
I currently use this function to wrap executing commands and logging their execution, and return code, and exiting in case of a non-zero return code.
However, this is problematic because it apparently does double interpolation, so commands containing single or double quotes break the script.
Can you recommend a better way?
Here's the function:
do_cmd()
{
eval $*
if [[ $? -eq 0 ]]
then
echo "Successfully ran [ $1 ]"
else
echo "Error: Command [ $1 ] returned $?"
exit $?
fi
}
"$@"
From http://www.gnu.org/software/bash/manual/bashref.html#Special-Parameters:
@
Expands to the positional parameters, starting from one. When the
expansion occurs within double quotes, each parameter expands to a
separate word. That is, "$@" is equivalent to "$1" "$2" .... If the
double-quoted expansion occurs within a word, the expansion of the
first parameter is joined with the beginning part of the original
word, and the expansion of the last parameter is joined with the last
part of the original word. When there are no positional parameters,
"$@" and $@ expand to nothing (i.e., they are removed).
This means spaces in the arguments are re-quoted correctly.
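A sketch (helper names are mine) of why this matters here: eval $* loses the original word boundaries, while "$@" preserves them.

```shell
# show prints each argument it receives wrapped in <>.
show() { for a in "$@"; do printf '<%s>' "$a"; done; printf '\n'; }

via_at()   { show "$@"; }     # arguments passed through intact
via_eval() { eval show $*; }  # flattened and re-split, as in the question

via_at   "hello world" second   # prints: <hello world><second>
via_eval "hello world" second   # prints: <hello><world><second>
```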
do_cmd()
{
"$@"
ret=$?
if [[ $ret -eq 0 ]]
then
echo "Successfully ran [ $@ ]"
else
echo "Error: Command [ $@ ] returned $ret"
exit $ret
fi
}
In addition to the "$@" fix Douglas describes, I would use
return $?
rather than exit, which would exit your whole script instead of returning from the function. If you want to exit when something goes wrong, you can do that in the caller:
do_cmd false i will fail executing || exit
# commands in a row. exit as soon as the first fails
do_cmd one && do_cmd two && do_cmd three || exit
(That way, you can handle failures and then exit gracefully).