I have a simple script where the first argument is reserved for the filename, and all other optional arguments should be passed to other parts of the script.
Using Google I found this wiki, but it provided a literal example:
echo "${#: -1}"
I can't get anything else to work, like:
echo "${#:2,1}"
I get "Bad substitution" from the terminal.
What is the problem, and how can I process all but the first argument passed to a bash script?
Use this:
echo "${#:2}"
The following syntax:
echo "${*:2}"
would work as well, but is not recommended because, as @Gordon already explained, using * runs all of the arguments together as a single argument with spaces, while @ preserves the breaks between them (even if some of the arguments themselves contain spaces). It doesn't make a difference with echo, but it matters for many other commands.
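For example, here is a minimal demonstration of the difference (show_args is just a hypothetical helper for illustration, not part of the original answer):
show_args() {
    printf 'with @: '; printf '[%s] ' "${@:2}"; echo
    printf 'with *: '; printf '[%s] ' "${*:2}"; echo
}
show_args one "two words" three
# with @: [two words] [three]
# with *: [two words three]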
If you want a solution that also works in /bin/sh try
first_arg="$1"
shift
echo First argument: "$first_arg"
echo Remaining arguments: "$@"
shift [n] shifts the positional parameters n times. A shift sets the value of $1 to the value of $2, the value of $2 to the value of $3, and so on, decreasing the value of $# by one.
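For example, a small sketch of shift with a count:
set -- a b c d   # pretend the script was called with four arguments
shift 2
echo "$#"        # 2
echo "$@"        # c d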
Works in bash 4 or higher:
#!/bin/bash
echo "$0"; #"bash"
bash --version; #"GNU bash, version 5.0.3(1)-release (x86_64-pc-linux-gnu)"
In function:
echo $#; #"p1" "p2" "p3" "p4" "p5"
echo ${#: 0}; #"bash" "p1" "p2" "p3" "p4" "p5"
echo ${#: 1}; #"p1" "p2" "p3" "p4" "p5"
echo ${#: 2}; #"p2" "p3" "p4" "p5"
echo ${#: 2:1}; #"p2"
echo ${#: 2:2}; #"p2" "p3"
echo ${#: -2}; #"p4" "p5"
echo ${#: -2:1}; #"p4"
Notice the space between ':' and '-', otherwise it means something different:
${var:-word} If var is null or unset,
word is substituted for var. The value of var does not change.
${var:+word} If var is set,
word is substituted for var. The value of var does not change.
Which is described in: Unix / Linux - Shell Substitution
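A short illustration of both forms (a sketch using a throwaway variable name):
unset var
echo "${var:-word}"   # prints "word"; var stays unset
echo "${var:+word}"   # prints nothing, because var is not set
var="value"
echo "${var:-word}"   # prints "value"
echo "${var:+word}"   # prints "word"; var still contains "value"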
http://wiki.bash-hackers.org/scripting/posparams
It explains the use of shift (if you want to discard the first N parameters) and then implementing Mass Usage
Came across this looking for something else.
While the post looks fairly old, the easiest solution in bash is illustrated below (at least bash 4) using set -- "${@:N}" where N is the starting number of the array element we want to preserve forward:
#!/bin/bash
someVar="${1}"
someOtherVar="${2}"
set -- "${#:3}"
input=${#}
[[ "${input[*],,}" == *"someword"* ]] && someNewVar="trigger"
echo -e "${someVar}\n${someOtherVar}\n${someNewVar}\n\n${#}"
Basically, the set -- "${@:3}" just pops off the first two elements in the array, like Perl's shift, and preserves all remaining elements including the third. I suspect there's a way to pop off the last elements as well.
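On that last point, a sketch (bash-only, same set -- trick) of dropping trailing arguments:
set -- "${@:1:$#-1}"   # drop the last argument, keep everything before it
set -- "${@:1:$#-2}"   # or drop the last two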
Related
I've got a few Unix shell scripts where I need to check that certain environment variables are set before I start doing stuff, so I do this sort of thing:
if [ -z "$STATE" ]; then
echo "Need to set STATE"
exit 1
fi
if [ -z "$DEST" ]; then
echo "Need to set DEST"
exit 1
fi
which is a lot of typing. Is there a more elegant idiom for checking that a set of environment variables is set?
EDIT: I should mention that these variables have no meaningful default value - the script should error out if any are unset.
Parameter Expansion
The obvious answer is to use one of the special forms of parameter expansion:
: ${STATE?"Need to set STATE"}
: ${DEST:?"Need to set DEST non-empty"}
Or, better (see section on 'Position of double quotes' below):
: "${STATE?Need to set STATE}"
: "${DEST:?Need to set DEST non-empty}"
The first variant (using just ?) requires STATE to be set, but STATE="" (an empty string) is OK — not exactly what you want, but the alternative and older notation.
The second variant (using :?) requires DEST to be set and non-empty.
If you supply no message, the shell provides a default message.
The ${var?} construct is portable back to Version 7 UNIX and the Bourne Shell (1978 or thereabouts). The ${var:?} construct is slightly more recent: I think it was in System III UNIX circa 1981, but it may have been in PWB UNIX before that. It is therefore in the Korn Shell, and in the POSIX shells, including specifically Bash.
It is usually documented in the shell's man page in a section called Parameter Expansion. For example, the bash manual says:
${parameter:?word}
Display Error if Null or Unset. If parameter is null or unset, the expansion of word (or a message to that effect if word is not present) is written to the standard error and the shell, if it is not interactive, exits. Otherwise, the value of parameter is substituted.
The Colon Command
I should probably add that the colon command simply has its arguments evaluated and then succeeds. It is the original shell comment notation (before '#' to end of line). For a long time, Bourne shell scripts had a colon as the first character. The C Shell would read a script and use the first character to determine whether it was for the C Shell (a '#' hash) or the Bourne shell (a ':' colon). Then the kernel got in on the act and added support for '#!/path/to/program' and the Bourne shell got '#' comments, and the colon convention went by the wayside. But if you come across a script that starts with a colon, now you will know why.
Position of double quotes
blong asked in a comment:
Any thoughts on this discussion? https://github.com/koalaman/shellcheck/issues/380#issuecomment-145872749
The gist of the discussion is:
… However, when I shellcheck it (with version 0.4.1), I get this message:
In script.sh line 13:
: ${FOO:?"The environment variable 'FOO' must be set and non-empty"}
^-- SC2086: Double quote to prevent globbing and word splitting.
Any advice on what I should do in this case?
The short answer is "do as shellcheck suggests":
: "${STATE?Need to set STATE}"
: "${DEST:?Need to set DEST non-empty}"
To illustrate why, study the following. Note that the : command doesn't echo its arguments (but the shell does evaluate the arguments). We want to see the arguments, so the code below uses printf "%s\n" in place of :.
$ mkdir junk
$ cd junk
$ > abc
$ > def
$ > ghi
$
$ x="*"
$ printf "%s\n" ${x:?You must set x} # Careless; not recommended
abc
def
ghi
$ unset x
$ printf "%s\n" ${x:?You must set x} # Careless; not recommended
bash: x: You must set x
$ printf "%s\n" "${x:?You must set x}" # Careful: should be used
bash: x: You must set x
$ x="*"
$ printf "%s\n" "${x:?You must set x}" # Careful: should be used
*
$ printf "%s\n" ${x:?"You must set x"} # Not quite careful enough
abc
def
ghi
$ x=
$ printf "%s\n" ${x:?"You must set x"} # Not quite careful enough
bash: x: You must set x
$ unset x
$ printf "%s\n" ${x:?"You must set x"} # Not quite careful enough
bash: x: You must set x
$
Note how the value in $x is expanded to first * and then a list of file names when the overall expression is not in double quotes. This is what shellcheck is recommending should be fixed. I have not verified that it doesn't object to the form where the expression is enclosed in double quotes, but it is a reasonable assumption that it would be OK.
Try this:
[ -z "$STATE" ] && echo "Need to set STATE" && exit 1;
Your question is dependent on the shell that you are using.
Bourne shell leaves very little in the way of what you're after.
BUT...
It does work, just about everywhere.
Just try and stay away from csh. It was good for the bells and whistles it added compared to the Bourne shell, but it is really creaking now. If you don't believe me, just try and separate out STDERR in csh! (-:
There are two possibilities here. The first is the example above, namely using:
${MyVariable:=SomeDefault}
the first time you need to refer to $MyVariable. This takes the env. var MyVariable and, if it is currently not set, assigns the value SomeDefault to the variable for later use.
You also have the possibility of:
${MyVariable:-SomeDefault}
which just substitutes SomeDefault for the variable where you are using this construct. It doesn't assign the value SomeDefault to the variable, and the value of MyVariable will still be null after this statement is encountered.
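A small demonstration of the two behaviours (sketch with a throwaway variable):
unset MyVariable
echo "${MyVariable:-SomeDefault}"   # prints SomeDefault
echo "${MyVariable}"                # still empty: :- did not assign anything
echo "${MyVariable:=SomeDefault}"   # prints SomeDefault and assigns it
echo "${MyVariable}"                # now prints SomeDefault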
Surely the simplest approach is to add the -u switch to the shebang (the line at the top of your script); this works in plain sh as well as bash:
#!/bin/sh -u
This will cause the script to exit if any unbound variables lurk within.
${MyVariable:=SomeDefault}
If MyVariable is set and not null, it will reset the variable value (= nothing happens).
Else, MyVariable is set to SomeDefault.
The above will attempt to execute ${MyVariable}, so if you just want to set the variable do:
MyVariable=${MyVariable:=SomeDefault}
In my opinion the simplest and most compatible check for #!/bin/sh is:
if [ "$MYVAR" = "" ]
then
echo "Does not exist"
else
echo "Exists"
fi
Again, this is for /bin/sh and is compatible also on old Solaris systems.
bash 4.2 introduced the -v operator which tests if a name is set to any value, even the empty string.
$ unset a
$ b=
$ c=
$ [[ -v a ]] && echo "a is set"
$ [[ -v b ]] && echo "b is set"
b is set
$ [[ -v c ]] && echo "c is set"
c is set
I always used:
if [ "x$STATE" == "x" ]; then echo "Need to set State"; exit 1; fi
Not that much more concise, I'm afraid.
Under CSH you have $?STATE.
For future people like me, I wanted to go a step further and parameterize the variable name, so I can loop over a variably sized list of variable names:
#!/bin/bash
declare -a vars=(NAME GITLAB_URL GITLAB_TOKEN)
for var_name in "${vars[@]}"
do
if [ -z "$(eval "echo \$$var_name")" ]; then
echo "Missing environment variable $var_name"
exit 1
fi
done
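As a variant, bash's indirect expansion ${!var_name} avoids the eval; a sketch of the same loop (bash-only):
#!/bin/bash
declare -a vars=(NAME GITLAB_URL GITLAB_TOKEN)
for var_name in "${vars[@]}"; do
  if [ -z "${!var_name}" ]; then
    echo "Missing environment variable $var_name"
    exit 1
  fi
done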
We can write a nice assertion to check a bunch of variables all at once:
#
# assert if variables are set (to a non-empty string)
# if any variable is not set, exit 1 (when -f option is set) or return 1 otherwise
#
# Usage: assert_var_not_null [-f] variable ...
#
function assert_var_not_null() {
local fatal var num_null=0
[[ "$1" = "-f" ]] && { shift; fatal=1; }
for var in "$#"; do
[[ -z "${!var}" ]] &&
printf '%s\n' "Variable '$var' not set" >&2 &&
((num_null++))
done
if ((num_null > 0)); then
[[ "$fatal" ]] && exit 1
return 1
fi
return 0
}
Sample invocation:
one=1 two=2
assert_var_not_null one two
echo test 1: return_code=$?
assert_var_not_null one two three
echo test 2: return_code=$?
assert_var_not_null -f one two three
echo test 3: return_code=$? # this code shouldn't execute
Output:
test 1: return_code=0
Variable 'three' not set
test 2: return_code=1
Variable 'three' not set
More such assertions here: https://github.com/codeforester/base/blob/master/lib/assertions.sh
This can be a way too:
if (set -u; : $HOME) 2> /dev/null
...
...
http://unstableme.blogspot.com/2007/02/checks-whether-envvar-is-set-or-not.html
None of the above solutions worked for my purposes, in part because I was checking the environment for an open-ended list of variables that need to be set before starting a lengthy process. I ended up with this:
mapfile -t arr < variables.txt
EXITCODE=0
for i in "${arr[#]}"
do
ISSET=$(env | grep ^${i}= | wc -l)
if [ "${ISSET}" = "0" ];
then
EXITCODE=-1
echo "ENV variable $i is required."
fi
done
exit ${EXITCODE}
Rather than using external shell scripts I tend to load in functions in my login shell. I use something like this as a helper function to check for environment variables rather than any set variable:
is_this_an_env_variable ()
{
local var="$1"
if env | grep -q "^$var"; then
return 0
else
return 1
fi
}
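A possible usage, assuming the function above has been loaded into your shell:
if is_this_an_env_variable HOME; then
  echo "HOME is an environment variable"
else
  echo "HOME is not an environment variable"
fi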
The $? syntax is pretty neat:
if [ $?BLAH == 1 ]; then
echo "Exists";
else
echo "Does not exist";
fi
Or anything in shell script to implement the same thing?
I was doing an assignment that requires us to write a Bourne shell script that shows the last argument of a bunch, e.g.:
lastarg arg1 arg2 arg3 ..... argN
which would show:
argN
I was not sure if there's any equivalencies to hasNext in Java as it's easy to implement.
Sorry if I was rude and unclear.
#!/bin/bash
all=($@)
# to make things short:
# you can use what's in a variable as a variable name
last=$(( $# )) # get number of arguments
echo ${!last} # use that to get the last argument. notice the !
# while the number of arguments is not 0
# put what is in argument $1 into next
# move all arguments to the left
# $1=foo $2=bar $3=moo
# shift
# $1=bar $2=moo
while [ $# -ne 0 ]; do
next=$1
shift
echo $next
done
# but the problem was the last argument...
# all=($@): put all arguments into an array
# ${all[n]}: get argument number n
# $(( 1+2 )): do math
# ${#all[@]}: get the count of elements in an array
echo -e "all:\t ${all[@]}"
echo -e "second:\t ${all[1]}"
echo -e "fifth:\t ${all[4]}"
echo -e "# of elements:\t ${#all[#]}"
echo -e "last element:\t ${all[ (( ${#all[#]} -1 )) ]}"
ok, last edit (omg :p)
$ sh unix-java-hasnext.sh one two three seventyfour sixtyeight
sixtyeight
one
two
three
seventyfour
sixtyeight
all: one two three seventyfour sixtyeight
second: two
fifth: sixtyeight
# of elements: 5
last element: sixtyeight
POSIX-based shell languages don't implement iterators.
The only things you have are for V in words ; do ... ; done or implementing the loop with while and manual stuff to update and test the loop variable.
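A sketch of that while/shift pattern, which is about as close to an iterator as a POSIX shell gets:
while [ $# -gt 0 ]; do
  current="$1"    # the "next" element
  echo "processing: $current"
  shift           # advance the "iterator"
done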
This is not the place to make wild guesses; still, Bash provides a shift operator, for loops and more.
(If this is for argument processing you have getopt library. More info in Using getopts in bash shell script to get long and short command line options )
I found this function:
findsit()
{
OPTIND=1
local case=""
local usage="findsit: find string in files.
Usage: fstr [-i] \"pattern\" [\"filename pattern\"] "
while getopts :it opt
do
case "$opt" in
i) case="-i " ;;
*) echo "$usage"; return;;
esac
done
shift $(( $OPTIND - 1 ))
if [ "$#" -lt 1 ]; then
echo "$usage"
return;
fi
find . -type f -name "${2:-*}" -print0 | \
xargs -0 egrep --color=always -sn ${case} "$1" 2>&- | more
}
I understand the output and what it does, but there are some terms I still don't understand and find it hard to find a reference, but believe they would be useful to learn in my programming. Can anyone quickly explain them? Some don't have man pages.
local
getopts
case
shift
$#
${2:-*}
2>&-
Thank you.
local: Local variable. Let's say you had a variable called foo in your program. You call a function that also has a variable foo. Let's say the function changes the value of foo.
Try this program:
testme()
{
foo="barfoo"
echo "In function: $foo"
}
foo="bar"
echo "In program: $foo"
testme
echo "After function in program: $foo"
Notice that the value of $foo has been changed by the function even after the function has completed. By declaring local foo="barfoo" instead of just foo="barfoo", we could have prevented this from happening.
case: A case statement is a way of specifying a list of options and what you want to do with each of those options. It is sort of like an if/then/else statement.
These two are more or less equivalent:
if [[ "$foo" == "bar" ]]
then
echo "He said 'bar'!"
elif [[ "$foo" == "foo" ]]
then
echo "Don't repeat yourself!"
elif [[ "$foo" == "foobar" ]]
then
echo "Shouldn't it be 'fubar'?"
else
echo "You didn't put anything I understand"
fi
and
case $foo in
bar)
echo "He said 'bar'!"
;;
foo)
echo "Don't repeat yourself!"
;;
foobar)
echo "Shouldn't it be 'fubar'?"
;;
*)
echo "You didn't put anything I understand"
;;
esac
The ;; ends the case option. Otherwise, it'll drop down to the next one and execute those lines too. I have each option in three lines, but they're normally combined like
foobar) echo "Shouldn't it be 'fubar'?";;
shift: The command line arguments are held in the positional parameters ($1, $2, and so on; $* expands to all of them). When you say shift, the first value is removed and the remaining ones each move down by one position.
getopts: Getopts is a rather complex command. It's used to parse single-letter options out of $@ (which contains the parameters and arguments from the command line). Normally, you employ getopts in a while loop and use a case statement to parse the output. The format is getopts <options> var. The var is the variable that will contain each option, one at a time. The <options> string specifies the single-letter options and which of them require an argument. The best way to explain it is to show you a simple example.
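For instance, a minimal sketch (the -a and -f options here are made up for illustration):
while getopts "af:" opt; do
  case "$opt" in
    a) all=1 ;;             # -a takes no argument
    f) file="$OPTARG" ;;    # -f requires an argument, delivered in $OPTARG
    *) echo "usage: $0 [-a] [-f file]"; exit 1 ;;
  esac
done
shift $((OPTIND - 1))       # discard the options that have been parsed
echo "all=$all file=$file remaining: $*"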
$#: The number of parameters/arguments on the command line.
${var:-alternative}: This says to use the value of the environment variable $var. However, if this environment variable is not set or is null, use the value alternative instead. In this program ${2:-*} is used instead. The $2 represents the second parameter of what's left in the command line parameters/arguments after everything has been shifted out due to the shift command.
2>&-: This moves Standard Error to Standard Output. Standard Error is where error messages are put. Normally, they're placed on your terminal screen just like Standard Output. However, if you redirect your output into a file, error messages are still printed to the terminal window. In this case, redirecting the output to a file will also redirect any error messages too.
Those are bash built-ins. You should read the bash man page or, for getopts, try help getopts
One at a time (it's really annoying to type on ipad hence switched to laptop):
local lets you define local variables (within the scope of a function)
getopts is a bash builtin which implements getopt-style argument processing (the -a, -b... type arguments)
case is the bash form for a switch statement. The syntax is
case: case WORD in [PATTERN [| PATTERN]...) COMMANDS ;;]... esac
shift shifts all of the arguments by 1 (so that the second argument becomes the first, third becomes second, ...) similar to perl shift. If you specify an argument, it will shift by that many indices (so shift 2 will assign $3 -> $1, $4 -> $2, ...)
$# is the number of arguments passed to the function
${2:-*} is a default argument form. Basically, it looks at the second argument ($2 is the second arg) and if it is not assigned, it will replace it with *.
2>&- is output redirection (in this case, for standard error)
$1 is the first argument.
$@ is all of them.
How can I find the last argument passed to a shell
script?
This is Bash-only:
echo "${#: -1}"
This is a bit of a hack:
for last; do true; done
echo $last
This one is also pretty portable (again, should work with bash, ksh and sh) and it doesn't shift the arguments, which could be nice.
It uses the fact that for implicitly loops over the arguments if you don't tell it what to loop over, and the fact that for loop variables aren't scoped: they keep the last value they were set to.
$ set quick brown fox jumps
$ echo ${*: -1:1} # last argument
jumps
$ echo ${*: -1} # or simply
jumps
$ echo ${*: -2:1} # next to last
fox
The space is necessary so that it doesn't get interpreted as a default value.
Note that this is bash-only.
The simplest answer for bash 3.0 or greater is
_last=${!#} # *indirect reference* to the $# variable
# or
_last=$BASH_ARGV # official built-in (but takes more typing :)
That's it.
$ cat lastarg
#!/bin/bash
# echo the last arg given:
_last=${!#}
echo $_last
_last=$BASH_ARGV
echo $_last
for x; do
echo $x
done
Output is:
$ lastarg 1 2 3 4 "5 6 7"
5 6 7
5 6 7
1
2
3
4
5 6 7
The following will work for you.
@ is for the array of arguments.
: means at
$# is the length of the array of arguments.
So the result is the last element:
${@:$#}
Example:
function afunction {
echo ${@:$#}
}
afunction -d -o local 50
#Outputs 50
Note that this is bash-only.
Use indexing combined with length of:
echo ${@:${#@}}
Note that this is bash-only.
Found this when looking to separate the last argument from all the previous one(s).
Whilst some of the answers do get the last argument, they're not much help if you need all the other args as well. This works much better:
heads=${@:1:$#-1}
tail=${@:$#}
Note that this is bash-only.
This works in all POSIX-compatible shells:
eval last=\${$#}
Source: http://www.faqs.org/faqs/unix-faq/faq/part2/section-12.html
Here is my solution:
pretty portable (should work in all POSIX shells: sh, bash, ksh, zsh)
does not shift original arguments (shifts a copy).
does not use evil eval
does not iterate through the whole list
does not use external tools
Code:
ntharg() {
shift $1
printf '%s\n' "$1"
}
LAST_ARG=`ntharg $# "$@"`
From oldest to newer solutions:
The most portable solution, even for older sh (works with spaces and glob characters; no loop, faster):
eval printf "'%s\n'" "\"\${$#}\""
Since version 2.01 of bash
$ set -- The quick brown fox jumps over the lazy dog
$ printf '%s\n' "${!#} ${#:(-1)} ${#: -1} ${#:~0} ${!#}"
dog dog dog dog dog
For ksh, zsh and bash:
$ printf '%s\n' "${#: -1} ${#:~0}" # the space beetwen `:`
# and `-1` is a must.
dog dog
And for "next to last":
$ printf '%s\n' "${#:~1:1}"
lazy
Using printf to workaround any issues with arguments that start with a dash (like -n).
For all shells, including older sh (works with spaces and glob characters):
$ set -- The quick brown fox jumps over the lazy dog "the * last argument"
$ eval printf "'%s\n'" "\"\${$#}\""
the * last argument
Or, if you want to set a last var:
$ eval last=\${$#}; printf '%s\n' "$last"
the * last argument
And for "next to last":
$ eval printf "'%s\n'" "\"\${$(($#-1))}\""
dog
If you are using Bash >= 3.0
echo ${BASH_ARGV[0]}
For bash, this comment suggested the very elegant:
echo "${#:$#}"
To silence shellcheck, use:
echo "${*:$#}"
As a bonus, both also work in zsh.
shift `expr $# - 1`
echo "$1"
This shifts the arguments by the number of arguments minus 1, and returns the first (and only) remaining argument, which will be the last one.
I only tested in bash, but it should work in sh and ksh as well.
I found @AgileZebra's answer (plus @starfry's comment) the most useful, but it sets heads to a scalar. An array is probably more useful:
heads=( "${@: 1: $# - 1}" )
tail=${@:${#@}}
Note that this is bash-only.
Edit: Removed unnecessary $(( )) according to @f-hauri's comment.
A solution using eval:
last=$(eval "echo \$$#")
echo $last
If you want to do it in a non-destructive way, one way is to pass all the arguments to a function and return the last one:
#!/bin/bash
last() {
if [[ $# -ne 0 ]] ; then
shift $(expr $# - 1)
echo "$1"
#else
#do something when no arguments
fi
}
lastvar=$(last "$#")
echo $lastvar
echo "$#"
pax> ./qq.sh 1 2 3 a b
b
1 2 3 a b
If you don't actually care about keeping the other arguments, you don't need to do this in a function. But I have a hard time thinking of a situation where you would never want to keep the other arguments, unless they've already been processed, in which case I'd use the process/shift/process/shift/... method of handling them sequentially.
I'm assuming here that you want to keep them because you haven't followed the sequential method. This method also handles the case where there's no arguments, returning "". You could easily adjust that behavior by inserting the commented-out else clause.
For tcsh:
set X = `echo $* | awk -F " " '{print $NF}'`
somecommand "$X"
I'm quite sure this would be a portable solution, except for the assignment.
After reading the answers above I wrote a Q&D shell script (should work on sh and bash) to run g++ on PGM.cpp to produce executable image PGM. It assumes that the last argument on the command line is the file name (.cpp is optional) and all other arguments are options.
#!/bin/sh
if [ $# -lt 1 ]
then
echo "Usage: `basename $0` [opt] pgm runs g++ to compile pgm[.cpp] into pgm"
exit 2
fi
OPT=
PGM=
# PGM is the last argument, all others are considered options
for F; do OPT="$OPT $PGM"; PGM=$F; done
DIR=`dirname $PGM`
PGM=`basename $PGM .cpp`
# put -o first so it can be overridden by -o specified in OPT
set -x
g++ -o $DIR/$PGM $OPT $DIR/$PGM.cpp
The following will set LAST to last argument without changing current environment:
LAST=$({
shift $(($#-1))
echo $1
})
echo $LAST
If other arguments are no longer needed and can be shifted it can be simplified to:
shift $(($#-1))
echo $1
For portability reasons, the following:
shift $(($#-1));
can be replaced with:
shift `expr $# - 1`
Replacing also $() with backquotes we get:
LAST=`{
shift \`expr $# - 1\`
echo $1
}`
echo $LAST
echo $argv[$#argv]
This is part of my copy function:
eval echo $(echo '$'"$#")
To use in scripts, do this:
a=$(eval echo $(echo '$'"$#"))
Explanation (most nested first):
$(echo '$'"$#") returns $[nr] where [nr] is the number of parameters. E.g. the string $123 (unexpanded).
echo $123 returns the value of the 123rd parameter, when evaluated.
eval just expands $123 to the value of the parameter, e.g. last_arg. This is interpreted as a string and returned.
Works with Bash as of mid 2015.
To return the last argument of the most recently used command use the special parameter:
$_
In this instance it will work if it is used within the script before another command has been invoked.
#! /bin/sh
next=$1
while [ -n "${next}" ] ; do
last=$next
shift
next=$1
done
echo $last
Try the script below to find the last argument:
# cat arguments.sh
#!/bin/bash
if [ $# -eq 0 ]
then
echo "No Arguments supplied"
else
echo $* > .ags
sed -e 's/ /\n/g' .ags | tac | head -n1 > .ga
echo "Last Argument is: `cat .ga`"
fi
Output:
# ./arguments.sh
No Arguments supplied
# ./arguments.sh testing for the last argument value
Last Argument is: value
Thanks.
There is a much more concise way to do this. Arguments to a bash script can be brought into an array, which makes dealing with the elements much simpler. The script below will always print the last argument passed to a script.
argArray=( "$#" ) # Add all script arguments to argArray
arrayLength=${#argArray[#]} # Get the length of the array
lastArg=$((arrayLength - 1)) # Arrays are zero based, so last arg is -1
echo ${argArray[$lastArg]}
Sample output
$ ./lastarg.sh 1 2 buckle my shoe
shoe
Using parameter expansion (delete matched beginning):
args="$#"
last=${args##* }
It's also easy to get all before last:
prelast=${args% *}
$ echo "${*: -1}"
That will print the last argument
With GNU bash version >= 3.0:
num=$# # get number of arguments
echo "${!num}" # print last argument
Just use !$.
$ mkdir folder
$ cd !$ # will run: cd folder