I found this code in an autoconf configure script. What is the following code trying to do?
if ${am_cv_autoconf_installed+:} false; then :
$as_echo_n "(cached) " >&6
else
Lots of stuff going on here. Let's break it down.
First of all, the syntax ${var+foo} is a common idiom for checking whether the variable var has been defined. If var is defined, then ${var+foo} will expand to the string foo. Otherwise, it will expand to an empty string.
Most commonly (in bash, anyway), this syntax is used as follows:
if [ -n "${var+foo}" ]; then
echo "var is defined"
else
echo "var is not defined"
fi
Note that foo is just any arbitrary text. You could just as well use x or abc or ilovetacos.
However, in your example, there are no brackets. So whatever ${am_cv_autoconf_installed+:} expands to (if anything) will be evaluated as a command. As it turns out, : is actually a shell command. Namely, it's the "null command". It has no effect, other than to set the command exit status to 0 (success). Likewise, false is a shell command that does nothing, but sets the exit status to 1 (failure).
So depending on whether the variable am_cv_autoconf_installed is defined, the script will then execute one of the following commands:
: false
-OR-
false
In the first case, it calls the null command with the string "false" as an argument, which is simply ignored, causing the if statement to evaluate to true. In the second case, it calls the false command, causing the if statement to evaluate to false.
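If you want to see this for yourself, a quick check in an interactive bash session (using the same variable name as the configure script) exercises both cases:
unset am_cv_autoconf_installed
if ${am_cv_autoconf_installed+:} false; then echo defined; else echo undefined; fi   # prints "undefined"
am_cv_autoconf_installed=yes
if ${am_cv_autoconf_installed+:} false; then echo defined; else echo undefined; fi   # prints "defined"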
So all this is really doing is checking whether am_cv_autoconf_installed is defined. If this were just an ordinary bash script and didn't require any particular level of portability, it would have been a lot simpler to just do:
if [ -n "${am_cv_autoconf_installed+x}" ]; then
However, since this is a configure script, it was no doubt written this way for maximum portability. Not all shells will have the -n test. Some may not even have the [ ] syntax.
The rest should be fairly self-explanatory. If the variable is defined, the if statement evaluates to true (or more accurately, it sets the exit status to 0), causing the $as_echo_n "(cached) " >&6 line to execute. Otherwise, it does whatever is in the else clause.
I'm guessing $as_echo_n is just the environment-specific version of echo -n, which means it will print "(cached) " with no trailing newline. The >&6 means the output will be redirected to file descriptor 6 which presumably is set up elsewhere in the script (probably a log file or some such).
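(For what it's worth, autoconf-generated configure scripts typically set up file descriptor 6 near the top of the script as a copy of standard output reserved for user-facing messages, with something along the lines of
exec 6>&1
so the "(cached) " text normally ends up on the terminal rather than in a log file.)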
My apologies for not being able to find such a seemingly trivial thing myself.
I need to pass more than one boolean parameter to a shell script (Bash) as follows:
./script --parameter1 --parameter2
and so on.
All are to be considered true if set.
In the beginning of the script, I use set -u.
Normal parameters with values I currently pass as follows:
# this script accepts the following arguments:
# 1. mode
# 2. window
while [[ $# > 1 ]]
do
cmdline_argument="$1"
case $cmdline_argument in
-m|--mode)
mode="$2"
shift
;;
-w|--window)
window="$2"
shift
;;
esac
shift
done
I would like to add something like
-r|--repeat)
repeat=true
shift
;;
I do not understand why it does not work as expected.
It exits immediately with error:
./empire: line 450: repeat: unbound variable
Where the line 450 is:
if [ "$repeat" == true ];
When you use set -u, it's an error to dereference any variable that hasn't had a value explicitly assigned.
Thus, you need to set repeat=0 (or repeat=false) at the top of your script, or use an expansion that supplies an explicit default when the variable is unset; see BashFAQ #112.
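A minimal sketch of both approaches, reusing the variable name from the question (the expansion-with-default form is the BashFAQ-style fix):
repeat=false                          # explicit initialization near the top of the script
...
if [ "$repeat" == true ]; then        # the test at line 450 now always sees a value
Or, without initializing, supply a default at the point of expansion:
if [ "${repeat:-false}" == true ]; then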
Just saw this in a script at my internship. I've googled but for the life of me
I cannot figure out what kind of for loop this is.
# Declare variables for use
args=$*
noofargs=$#
isjavapassed="false"
isjavavalid="false"
argsarray=""
d64option=""

varcount=0;
for tempvar
do ....
How does this for loop work? It is literally just for tempvar? Where is the terminating condition, or the list it iterates over? There is nothing above this but comments. Any help is much appreciated. Thank you.
From the Bash Reference Manual section on Looping Constructs:
The syntax of the for command is:
for name [ [in [words …] ] ; ] do commands; done
Expand words, and execute commands once for each member in the resultant list, with name bound to the current member. If ‘in words’ is not present, the for command executes the commands once for each positional parameter that is set, as if ‘in "$@"’ had been specified (see Special Parameters). The return status is the exit status of the last command that executes. If there are no items in the expansion of words, no commands are executed, and the return status is zero.
So that's a case where "in words" is missing and so it loops over the positional parameters.
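In other words, the loop in your script (variable name taken from your snippet) behaves exactly as if it had been written:
for tempvar in "$@"
do ....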
Basic for syntax is:
for arg in [list]
do
command(s)...
done
Omitting the in [list] part of a for loop causes the loop to operate on "$@" -- the positional parameters.
Try running the below script with and without arguments:
for a
do
echo -n "$a "
done
SOURCE Example 11-6
Can anyone tell me what this line, found in a .sh file, means:
[ ! -n "$T_R" ] && echo "Message Appear" && exit 1;
Edit: Correcting for misinformation pointed out by tripleee
The brackets [ ]
are an alias for 'test', which tests whether a condition is met. Not to complicate matters, but do note that this is distinct from the bash shell keyword [[ ]] (thanks, tripleee, for clearing that up!). See this post for further details. These days, most people seem to use the latter due to its more robust feature set.
Between the brackets, the script is testing to determine whether the variable "$T_R" is an empty string.
The -n operator returns true if the string passed to it as an argument is non-empty (i.e. its length is non-zero).
The ! inverts the sense of the test (the test succeeds if the result is not true). So in this case, test succeeds (returns 0) if the length of the string variable "$T_R" is zero (i.e. if the variable is an empty string, or is unset).
The double-ampersand (&&) operator means "only execute the subsequent code in the event of success", so the message "Message Appear" will only be echoed if the test succeeds (again, if "$T_R" is empty or unset).
Finally, the && exit 1 says to exit returning status 1 after successfully echoing the Message Appear message.
The bash and test man pages are extremely helpful on all of these topics and should be consulted for further details.
The chained && is a common short-circuit idiom.
Instead of writing
if true; then
if true; then
echo moo
fi
fi
you can abbreviate to just true && true && echo moo.
echo will usually succeed so true && echo moo && exit 1 will execute both the echo and the exit if true succeeds (which obviously it always will).
(There are probably extreme corner cases where echo could fail, but if that happens, you are toast anyways so I don't think it makes sense to try to guard against those.)
The [ is an alias for test which is a general comparison helper for shell scripts (in Bash, it's even a built-in). test -n checks whether a string is non-empty.
! is the general negation operator, so it inverts the test to checking for an empty string.
(This is slightly unidiomatic, because there is a separate test -z "$T_R" which checks specifically for the string being empty.)
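For reference, a sketch of the more idiomatic equivalent (same variable and message as in the question):
[ -z "$T_R" ] && echo "Message Appear" && exit 1
or, spelled out as an if statement:
if [ -z "$T_R" ]; then
    echo "Message Appear"
    exit 1
fi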
What is the purpose of a command that does nothing, being little more than a comment leader, but is actually a shell builtin in and of itself?
It's slower than inserting a comment into your scripts by about 40% per call, which probably varies greatly depending on the size of the comment. The only possible reasons I can see for it are these:
# poor man's delay function
for ((x=0;x<100000;++x)) ; do : ; done
# inserting comments into string of commands
command ; command ; : we need a comment in here for some reason ; command
# an alias for `true'
while : ; do command ; done
I guess what I'm really looking for is what historical application it might have had.
Historically, Bourne shells didn't have true and false as built-in commands. true was instead simply aliased to :, and false to something like let 0.
: is slightly better than true for portability to ancient Bourne-derived shells. As a simple example, consider having neither the ! pipeline operator nor the || list operator (as was the case for some ancient Bourne shells). This leaves the else clause of the if statement as the only means for branching based on exit status:
if command; then :; else ...; fi
Since if requires a non-empty then clause and comments don't count as non-empty, : serves as a no-op.
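For comparison, in a shell that does have those operators, the same branch would normally be written with one of them instead, e.g.:
if ! command; then ...; fi
# or
command || ...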
Nowadays (that is: in a modern context) you can usually use either : or true. Both are specified by POSIX, and some find true easier to read. However there is one interesting difference: : is a so-called POSIX special built-in, whereas true is a regular built-in.
Special built-ins are required to be built into the shell; regular built-ins are only "typically" built in, but it isn't strictly guaranteed. Most systems do ship an external true program in PATH, but there usually isn't an external program named : providing the same function.
Probably the most crucial difference is that with special built-ins, a variable assignment placed in the command's environment (as in x=hi :) persists after the command completes, as demonstrated here using ksh93:
$ unset x; ( x=hi :; echo "$x" )
hi
$ ( x=hi true; echo "$x" )
$
Note that Zsh ignores this requirement, as does GNU Bash except when operating in POSIX compatibility mode, but all other major "POSIX sh derived" shells observe this including dash, ksh93, and mksh.
Another difference is that regular built-ins must be compatible with exec - demonstrated here using Bash:
$ ( exec : )
-bash: exec: :: not found
$ ( exec true )
$
POSIX also explicitly notes that : may be faster than true, though this is of course an implementation-specific detail.
I use it to easily enable/disable variable commands:
#!/bin/bash
if [[ "$VERBOSE" == "" || "$VERBOSE" == "0" ]]; then
vecho=":" # no "verbose echo"
else
vecho=echo # enable "verbose echo"
fi
$vecho "Verbose echo is ON"
Thus
$ ./vecho
$ VERBOSE=1 ./vecho
Verbose echo is ON
This makes for a clean script. This cannot be done with '#'.
Also,
: >afile
is one of the simplest ways to guarantee that 'afile' exists but is 0 length.
A useful application for : is if you're only interested in using parameter expansions for their side-effects rather than actually passing their result to a command.
In that case, you use the parameter expansion as an argument to either : or false depending upon whether you want an exit status of 0 or 1. An example might be
: "${var:=$1}"
Since : is a builtin, it should be pretty fast.
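A small sketch of that side-effect in action (hypothetical variable and value, run in a throwaway shell):
set -- /tmp/output.txt        # pretend this was the script's first argument
unset var
: "${var:=$1}"                # the expansion assigns $1 to var because var was unset; : discards the result
echo "$var"                   # prints /tmp/output.txt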
: can also be used as a block comment (similar to /* */ in the C language). For example, if you want to skip a block of code in your script, you can feed it to : as a here-document; quoting the delimiter (as in 'SKIP' below) prevents parameter expansion and command substitution inside the skipped block:
: << 'SKIP'
your code block here
SKIP
Two more uses not mentioned in other answers:
Logging
Take this example script:
set -x
: Logging message here
example_command
The first line, set -x, makes the shell print out the command before running it. It's quite a useful construct. The downside is that the usual echo Log message type of statement now prints the message twice. The colon method gets round that. Note that you'll still have to escape special characters just like you would for echo.
Cron job titles
I've seen it being used in cron jobs, like this:
45 10 * * * : Backup for database ; /opt/backup.sh
This is a cron job that runs the script /opt/backup.sh every day at 10:45. The advantage of this technique is that it makes for better-looking email subjects when /opt/backup.sh prints some output.
It's similar to pass in Python.
One use would be to stub out a function until it gets written (a function body cannot be empty, so the : gives the stub a valid no-op body):
future_function () { :; }
If you'd like to truncate a file to zero bytes, useful for clearing logs, try this:
:> file.log
You could use it in conjunction with backticks (``) to execute a command without displaying its output, like this:
: `some_command`
Of course you could just do some_command > /dev/null, but the :-version is somewhat shorter.
That being said I wouldn't recommend actually doing that as it would just confuse people. It just came to mind as a possible use-case.
It's also useful for polyglot programs:
#!/usr/bin/env sh
':' //; exec "$(command -v node)" "$0" "$@"
~function(){ ... }
This is now both an executable shell-script and a JavaScript program: meaning ./filename.js, sh filename.js, and node filename.js all work.
(Definitely a little bit of a strange usage, but effective nonetheless.)
Some explication, as requested:
Shell scripts are evaluated line by line; and the exec command, when run, terminates the shell and replaces its process with the resultant command. This means that to the shell, the program looks like this:
#!/usr/bin/env sh
':' //; exec "$(command -v node)" "$0" "$@"
As long as no parameter expansion or aliasing is occurring in the word, any word in a shell script can be wrapped in quotes without changing its meaning; this means that ':' is equivalent to : (we've only wrapped it in quotes here to achieve the JavaScript semantics described below)
... and as described above, the first command on the first line is a no-op (it translates to : //, or if you prefer to quote the words, ':' '//'. Notice that the // carries no special meaning here, as it does in JavaScript; it's just a meaningless word that's being thrown away.)
Finally, the second command on the first line (after the semicolon), is the real meat of the program: it's the exec call which replaces the shell-script being invoked, with a Node.js process invoked to evaluate the rest of the script.
Meanwhile, the first line, in JavaScript, parses as a string-literal (':'), and then a comment, which is deleted; thus, to JavaScript, the program looks like this:
':'
~function(){ ... }
Since the string-literal is on a line by itself, it is a no-op statement, and is thus stripped from the program; that means that the entire line is removed, leaving only your program-code (in this example, the function(){ ... } body.)
Self-documenting functions
You can also use : to embed documentation in a function.
Assume you have a library script mylib.sh, providing a variety of functions. You could either source the library (. mylib.sh) and call the functions directly after that (lib_function1 arg1 arg2), or avoid cluttering your namespace and invoke the library with a function argument (mylib.sh lib_function1 arg1 arg2).
Wouldn't it be nice if you could also type mylib.sh --help and get a list of available functions and their usage, without having to manually maintain the function list in the help text?
#!/bin/bash
# all "public" functions must start with this prefix
LIB_PREFIX='lib_'
# "public" library functions
lib_function1() {
: This function does something complicated with two arguments.
:
: Parameters:
: ' arg1 - first argument ($1)'
: ' arg2 - second argument'
:
: Result:
: " it's complicated"
# actual function code starts here
}
lib_function2() {
: Function documentation
# function code here
}
# help function
--help() {
echo MyLib v0.0.1
echo
echo Usage: mylib.sh [function_name [args]]
echo
echo Available functions:
declare -f | sed -n -e '/^'$LIB_PREFIX'/,/^}$/{/\(^'$LIB_PREFIX'\)\|\(^[ \t]*:\)/{
s/^\('$LIB_PREFIX'.*\) ()/\n=== \1 ===/;s/^[ \t]*: \?['\''"]\?/ /;s/['\''"]\?;\?$//;p}}'
}
# main code
if [ "${BASH_SOURCE[0]}" = "${0}" ]; then
# the script was executed instead of sourced
# invoke requested function or display help
if [ "$(type -t - "$1" 2>/dev/null)" = function ]; then
"$@"
else
--help
fi
fi
A few comments about the code:
All "public" functions have the same prefix. Only these are meant to be invoked by the user, and to be listed in the help text.
The self-documenting feature relies on the previous point, and uses declare -f to enumerate all available functions, then filters them through sed to only display functions with the appropriate prefix.
It is a good idea to enclose the documentation in single quotes, to prevent undesired expansion and whitespace removal. You'll also need to be careful when using apostrophes/quotes in the text.
You could write code to internalize the library prefix, i.e. the user only has to type mylib.sh function1 and it gets translated internally to lib_function1. This is an exercise left to the reader.
The help function is named "--help". This is a convenient (i.e. lazy) approach that uses the library invoke mechanism to display the help itself, without having to code an extra check for $1. At the same time, it will clutter your namespace if you source the library. If you don't like that, you can either change the name to something like lib_help or actually check the args for --help in the main code and invoke the help function manually.
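Typical invocations, assuming the script above is saved as mylib.sh and marked executable, would then look like:
./mylib.sh --help                       # list the available lib_* functions and their documentation
./mylib.sh lib_function1 arg1 arg2      # run a library function through the dispatcher
. mylib.sh && lib_function1 arg1 arg2   # or source the library and call its functions directly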
I saw this usage in a script and thought it was a good substitute for invoking basename within a script.
oldIFS=$IFS
IFS=/
for basetool in $0 ; do : ; done
IFS=$oldIFS
...
This is a replacement for the code basetool=$(basename $0): with IFS set to /, the unquoted $0 is split on slashes, the : loop body does nothing, and when the loop finishes basetool is left holding the last component of the path.
Another use, not yet mentioned here, is the initialisation of parameters in infinite while-loops. Below is not the cleanest example, but it serves its purpose.
#!/usr/bin/env bash
[ "$1" ] && foo=0 && bar="baz"
while : "${foo=2}" "${bar:=qux}"; do
echo "$foo"
(( foo == 3 )) && echo "$bar" && break
(( foo=foo+1 ))
done