Many questions about part of a shell script generated by autoconf - bash

Here's a code snippet from a shell script. (It's from the MPFR library's configure script, which starts with #!/bin/sh; the original script is over 17,000 lines long. It's used when building gcc.)
Because I have so many questions about a short piece of code, I have embedded my questions in the code. Can somebody please explain to me why the code is written like this? Also, though I have a vague idea, I would appreciate it if someone could explain what this code is doing (I understand it will be difficult, because it's only a part of a big script).
if { { ac_try="$ac_link"
# <---- question 1 : why is the first curly brace used in the if condition? (probably just for grouping and using the last return code)
# <---- question 2 : is this second brace for a locally grouped list of commands (probably)?
case "(($ac_try" in # <---- question 3 : what is this "((" symbol?
*\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
*) ac_try_echo=$ac_try;;
esac
eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\""
$as_echo "$ac_try_echo"; } >&5 # <---- question 4 : what is this >&5 redirection? I know >&{1,2,3} but not 5.
(eval "$ac_link") 2>&5
# <----- question 5 : why use a sub-shell here? Is it to keep the eval from affecting the current shell?
ac_status=$?
$as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
test $ac_status = 0; }; then : # <---- question 6 : why is this ':' (no-op) here?
....
some commands
....
else
....
some commands
....
fi

From Bash man page:
{ list; }
list is simply executed in the current shell environment. list
must be terminated with a newline or semicolon. This is known
as a group command. The return status is the exit status of
list. Note that unlike the metacharacters ( and ), { and } are
reserved words and must occur where a reserved word is permitted
to be recognized. Since they do not cause a word break, they
must be separated from list by whitespace or another shell
metacharacter.
{ } simply lists a few commands to run, very much like cmd1; cmd2; cmd3. It matters for constructs such as pipes: if you write cmd1 ; cmd2 | cmd3, do you mean { cmd1; cmd2; } | cmd3 or cmd1; { cmd2 | cmd3; }? Grouping makes the intent explicit.
{ { } } is just a nested command list, e.g. { cmd1; cmd2; { cmd3; cmd4; }; }
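Here's a minimal sketch of why the grouping matters:
echo one; echo two | tr a-z A-Z      # prints "one" then "TWO": only echo two feeds the pipe
{ echo one; echo two; } | tr a-z A-Z # prints "ONE" then "TWO": both feed the pipe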
For question 3, the (( is just part of the source string that gets matched against the following patterns; inside the case word it has no syntactic meaning of its own. If you are asking why it is added, we would need the possible values of $ac_try to analyze that. Honestly, I don't see many shell scripts purposely adding (( in front of a string that is to be matched against patterns.
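To see what the case statement actually checks, here is a sketch (the sample value is made up): it simply tests whether $ac_try contains a double quote, backtick, or backslash, i.e. characters that need careful handling before the string is echoed:
ac_try='gcc -o conftest conftest.c'
case "(($ac_try" in
*\"* | *\`* | *\\*) echo "needs careful quoting";;
*) echo "plain command line";;
esac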
For question 4,
>&5 redirects output to file descriptor 5. A descriptor does not appear by itself: it must have been opened earlier in the script (and be careful about scope, since some code runs in a sub-shell, which has its own copies of the open descriptors). In autoconf-generated configure scripts, descriptor 5 is opened near the top with something like exec 5>>config.log, so everything written to >&5 ends up in the log file.
For example, see the part mentioning "exchanges STDIN and STDOUT" in my answer to another question here.
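As a sketch of the mechanism (the file name here is illustrative):
exec 5>>build.log         # open descriptor 5, appending to build.log
echo "build started" >&5  # this line ends up in build.log
exec 5>&-                 # close descriptor 5 again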
For question 5, the eval, I am not quite sure; here is a quick guess (it depends on what command it evals), with an example of why a sub-shell makes a difference:
cmd="Foo=1; ls"
(eval $cmd) # runs in a sub-shell, so $Foo in the current shell is not changed
eval $cmd   # runs in the current shell, so $Foo is changed, and it affects all subsequent commands
For question 6, look carefully at the man page I quoted at the top of the answer: the { } list syntax requires a final ;, i.e. { cmd1; cmd2; }. The last ; is required.
--- UPDATE ---
Question 6: sorry for not seeing the colon... :-)
It's a no-op: the : builtin does nothing and simply returns success.
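In the configure snippet, the : simply fills the then branch, so the script only needs to do real work in the else branch. A minimal sketch of the same pattern:
if test $ac_status = 0; then : # success: nothing to do
else
echo "link failed"
fi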

Related

Get the contents of an expanded expression given to eval through Bash internals

I'm writing some shell functions that allow printing stack traces when errors occur. For this I'm using the BASH_LINENO array, which contains the line number for each frame. Then I retrieve the line from the file using the BASH_SOURCE array and a subprocess like line="$(tail -n+$lineno "$file" | head -n1)".
Anyway, it works well, except when an error occurs within an eval. The problem is that the line number corresponds to the line after the expression given to eval has been expanded. Therefore, when I retrieve the line with head and tail, it's obviously the wrong one, or it's not a line at all (lineno is greater than the number of lines in the file).
So I wonder how I could get the actual expanded line. I looked at the variables provided by Bash, but none seems to help in this case.
Example, script1.sh:
#!/usr/bin/env bash
eval "$(./script2.sh)"
script2.sh:
#!/usr/bin/env bash
echo
echo
echo
echo false
When I hit the false line when executing script1.sh, the line number I get is 4, and the file source I get is script1.sh, so it's wrong.
When the line is out of the file, I could detect that and print the first preceding eval line instead, but it's very hacky and I'm sure there are a few different cases to handle. And if the line is within the file, then I cannot even tell whether it's the right one or not.
eval is hell :'(
Ideally, the BASH_COMMAND would be an array as well, and I could retrieve the commands from it instead of reading the files.
Another idea I just had would be to force the user to pipe the result of the expression into a command that compresses it onto one line. Any ideas how, or programs that do that? A simple join on ";" seems too naive (again, lots of edge cases).
P.S.: sorry for the title, I have difficulty giving a meaningful title to this one :/
Eventually I found a workaround: by overriding the eval command with my own function, I was able to change the way I print the stack trace for errors happening in eval statements.
eval() {
# pre eval logic
command eval "$@"
# post eval logic
}
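For example, the pre/post logic might record where the eval was invoked, so that the stack trace can point at the call site instead of at the expanded line. A sketch (the variable names are purely illustrative):
eval() {
# remember the call site before running the real eval
local eval_source=${BASH_SOURCE[1]} eval_line=${BASH_LINENO[0]}
command eval "$@"
}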
Anyway, please don't use eval, or if you do, use only one line arguments:
# GOOD: "easy" to deal with
for i in ...; do
eval "$(some command)"
done
# BAD: this will mess up your line numbers
eval "$(for i in ...; do
some command $i
done)"

Way to create multiline comments in Bash?

I have recently started studying shell script and I'd like to be able to comment out a set of lines in a shell script. I mean like it is in case of C/Java :
/* comment1
comment2
comment3
*/
How could I do that?
Use : ' to open and ' to close.
For example:
: '
This is a
very neat comment
in bash
'
Multiline comment in bash
: <<'END_COMMENT'
This is a heredoc (<<) redirected to a NOP command (:).
The single quotes around END_COMMENT are important,
because they disable variable expansion and command substitution
within these lines. Without the single quotes around END_COMMENT,
the following two commands, $() and ``, would get executed:
$(gibberish command)
`rm -fr mydir`
comment1
comment2
comment3
END_COMMENT
Note: I updated this answer based on comments and other answers, so comments prior to May 22nd 2020 may no longer apply. Also, I noticed today that some IDEs, like VS Code and PyCharm, do not recognize a heredoc marker that contains spaces, whereas bash has no problem with it, so I'm updating this answer again.
Bash does not provide a builtin syntax for multi-line comments, but there are hacks using existing bash syntax that "happen to work now".
Personally I think the simplest (i.e. least noisy, least weird, easiest to type, most explicit) is to use a quoted heredoc, but make it obvious what you are doing, and use the same heredoc marker everywhere:
<<'###BLOCK-COMMENT'
line 1
line 2
line 3
line 4
###BLOCK-COMMENT
Single-quoting the heredoc marker avoids some shell parsing side effects, such as weird substitutions that would cause a crash or stray output, and even parsing of the marker itself. So the single quotes give you more freedom in choosing the open/close comment markers.
For example, the following uses a triple hash, which kind of suggests a multi-line comment in bash. This would crash the script if the single quotes were absent. Even if you removed the ###, the ${FOO{}} would crash the script (or cause a bad substitution to be printed if there were no set -e) were it not for the single quotes:
set -e
<<'###BLOCK-COMMENT'
something something ${FOO{}} something
more comment
###BLOCK-COMMENT
ls
You could of course just use
set -e
<<'###'
something something ${FOO{}} something
more comment
###
ls
but the intent of this is definitely less clear to a reader unfamiliar with this trickery.
Note that my original answer used '### BLOCK COMMENT', which is fine if you use vanilla vi/vim, but today I noticed that PyCharm and VS Code don't recognize the closing marker if it has spaces.
Nowadays any good editor allows you to press ctrl-/ or similar, to un/comment the selection. Everyone definitely understands this:
# something something ${FOO{}} something
# more comment
# yet another line of comment
although admittedly, this is not nearly as convenient as the block comment above if you want to re-fill your paragraphs.
There are surely other techniques, but there doesn't seem to be a "conventional" way to do it. It would be nice if ###> and ###< could be added to bash to indicate the start and end of a comment block; it seems like it could be pretty straightforward.
After reading the other answers here I came up with the below, which IMHO makes it really clear it's a comment. Especially suitable for in-script usage info:
<< ////
Usage:
This script launches a spaceship to the moon. It's doing so by
leveraging the power of the Fifth Element, AKA Leeloo.
Will only work if you're Bruce Willis or a relative of Milla Jovovich.
////
As a programmer, the sequence of slashes immediately registers in my brain as a comment (even though slashes are normally used for line comments).
Of course, "////" is just a string; the number of slashes in the prefix and the suffix must be equal.
I tried the chosen answer, but found that when I ran a shell script containing it, the whole thing was printed to the screen (similar to how Jupyter notebooks print out everything in '''xx''' quotes) and there was an error message at the end. It wasn't doing anything, but it was scary. Then I realised while editing it that single quotes can span multiple lines. So... let's just assign the block to a variable.
x='
echo "these lines will all become comments."
echo "just make sure you don_t use single-quotes!"
ls -l
date
'
What's your opinion on this one?
function giveitauniquename()
{
so this is a comment
echo "there's no need to further escape apostrophes/etc if you are commenting your code this way"
the drawback is it will be stored in memory as a function as long as your script runs unless you explicitly unset it
only valid-ish bash allowed inside for instance these would not work without the "pound" signs:
1, for #((
2, this #wouldn't work either
function giveitadifferentuniquename()
{
echo nestable
}
}
Here's how I do multiline comments in bash.
This mechanism has two advantages that I appreciate. One is that comments can be nested. The other is that blocks can be enabled by simply commenting out the initiating line.
#!/bin/bash
# : <<'####.block.A'
echo "foo {" 1>&2
fn data1
echo "foo }" 1>&2
: <<'####.block.B'
fn data2 || exit
exit 1
####.block.B
echo "can't happen" 1>&2
####.block.A
In the example above the "B" block is commented out, but the parts of the "A" block that are not the "B" block are not commented out.
Running that example will produce this output:
foo {
./example: line 5: fn: command not found
foo }
can't happen
A simple solution, not very smart:
Temporarily block a part of a script:
if false; then
while you respect syntax a bit, please
do write here (almost) whatever you want.
but when you are
done # write
fi
A bit sophisticated version:
time_of_debug=false # Let's set this variable at the beginning of a script
if $time_of_debug; then # in the middle of the script
echo I keep this code aside until there is the time of debug!
fi
In plain bash, to comment out a block of code, I do:
:||{
block
of code
}
Since : always succeeds, the || never executes its right-hand side, so the block is skipped (though it must still parse as valid shell).

How Does Bash Tokenize Scripts?

Coming from a C++: it always seems like magic to me that some whitespace has an effect on the validity or semantics of the script. Here's an example:
echo a 2 > &1
bash: syntax error near unexpected token `&'
echo a 2 >&1
a 2
echo a 2>&1
a
echo a 2>& 1
a
Looking at this didn't help much. My main problem is that it does not feel consistent, and I am in a state of confusion.
I'm trying to find out how bash tokenizes its scripts. A general description thereof to clear up any confusion would be appreciated.
Edit:
I am NOT looking for redirections specifically. They just came up as an example. Other examples:
A="something"
A = "something"
if [$x = $y];
if [ $x = $y ];
Why isn't a space necessary between ] and ;? Why does assignment require the equals sign to be immediately adjacent? ...
2>&1 is a single operator token, so any whitespace that breaks it up will change the meaning of the command. It just happens to be a parameterized token, which means the shell will further tokenize it to determine what exactly the operator does. The general form is n>&m, where n is the file descriptor you are redirecting, and m is the descriptor you are copying to. In this case, you are saying that the standard error (2) of the command should be copied to whatever standard output (1) is currently open on.
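A minimal sketch of how whitespace changes the parse:
ls /nonexistent > out.txt 2>&1  # both stdout and stderr land in out.txt
ls /nonexistent 2 > out.txt     # the 2 is now just an argument to ls; only stdout is redirected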
The examples you gave have the behavior they do for good reason.
Redirection sources default to FD 1. Thus, >&1 is legitimate syntax on its own -- it redirects FD 1 to FD 1 -- meaning that allowing whitespace before the > would make the syntax ambiguous: the parser couldn't tell whether the preceding token was its own word or a redirection source.
Nothing other than a FD number is valid under >&, unless you're in a very new bash which allows a variable to be dereferenced to retrieve a FD number. In any event, anything immediately following >& is known to be a file descriptor, so allowing optional whitespace creates no ambiguity there.
a = 1 is parsed as a legitimate command, not a syntax error: It runs the command a with the first argument = and the second argument 1. Disallowing whitespace within assignments eliminates this ambiguity. Similarly, a= foo has a separate and distinct meaning: It exports an environment variable a with an empty value while running the command foo. Relaxing the whitespace rules would disallow both of these legitimate commands.
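A quick transcript illustrating the difference:
$ a=1; echo "$a"   # an assignment: no whitespace allowed around =
1
$ a = 1            # runs the command a with arguments = and 1
bash: a: command not found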
[ is a command, not special syntax known to the parser; thus, [foo tries to find a command (named, say, /usr/bin/[foo), requiring whitespace.
; takes precedence in the parser as a statement separator, rather than being treated as part of a word, unless quoted or escaped. The same is true of & (another separator), or a newline.
The thing is, there's no single general rule which will explain all this; you need to read and learn the language syntax. Fortunately, there's not very much syntax: Almost all commands are "simple commands", which follow very simple and clear rules. You're asking about, and we're explaining, some of the exceptions to that; there are other exceptions, such as [[ ]] in bash, but they're small enough in total that they can be learned.
Other suggested resources:
http://aosabook.org/en/bash.html (The Architecture of Open Source Applications; chapter on bash)
http://mywiki.wooledge.org/BashParser (Wooledge wiki high-level description of the parser -- though this focuses more on expansion rules than tokenization)
http://mywiki.wooledge.org/BashGuide (an introductory guide to bash syntax in general, written with more of a focus on accuracy and best practices than some competing materials).

bash while loop with command as part of the expression?

I am trying to read part of a file and stop at a particular line, using bash. I am not very familiar with bash, but I've been reading the manual and various references, and I don't understand why something like the following does not work (but instead produces a syntax error):
while { read -u 4 line } && (test "$line" != "$header_line")
do
echo in loop, line=$line
done
I think I could write a loop that tests a "done" variable, and then do my real tests inside the loop and set "done" appropriately, but I am curious as to 1) why the above does not work, and 2) whether there is some small correction that would make it work. I am still fairly confused about when to use [, (, {, or ((, so perhaps some other combination would work, though I have tried several.
(Note: The "read -u 4 line" works fine when I call it above the loop. I have opened a file on file descriptor 4.)
I think what you want is more like this:
while read -u 4 line && test "$line" != "$header_line"
do
...
done
Braces (the {} characters) are used to separate variables from other parts of a string when whitespace cannot be used. For example, echo "${var}x" will print the value of the variable var followed by an x, but echo "$varx" will print the value of the variable varx. (Braces are also used for command grouping, as in { cmd1; cmd2; }, which is what the original code attempted; in that use they must be surrounded by whitespace and the list must end with a ;.)
Brackets (the [] characters) are used as a shortcut for the test program. [ is another name for test, but when test detects that it was called as [, it requires a ] as its last argument. The point is clarity.
Parentheses (the () characters) are used in a number of different situations. They generally start subshells, although not always (I'm not really certain about case #3 below):
Retrieving a single exit code from a series of processes, or a single output stream from a sequence of commands. For example, (echo "Hi" ; echo "Bye") | sed -e "s/Hi/Hello/" will print two lines, "Hello" and "Bye". It is the easiest way to get multiple echo statements to produce a single stream.
Evaluating commands as if they were variables: $(expr 1 + 1) will act like a variable, but will produce the value 2.
Performing math: $((5 * 4 / 3 + 2 % 1)) will evaluate like a variable, but will compute the result of that mathematical expression.
The && operator is a list operator: it separates two commands and executes the second only when the first succeeds. But in the original code, the first thing the parser sees is the while keyword, so it is expecting the do part; by the time it reaches do, the while condition has already been consumed.
Your intention was to put the whole expression into the condition, so you group it with ( ). E.g. here is a solution with just a small change:
while ( read -u 4 line && test "$line" != "$header_line" )
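Be aware, though, that the parentheses start a sub-shell, so the $line read inside them is not visible in the loop body; the brace-free version above avoids this. If you want explicit grouping without a sub-shell, braces work as well, provided you add the whitespace and terminating semicolon they require (a sketch):
while { read -u 4 line; } && test "$line" != "$header_line"
do
echo "in loop, line=$line"
done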

What is the purpose of the : (colon) GNU Bash builtin?

What is the purpose of a command that does nothing, being little more than a comment leader, but is actually a shell builtin in and of itself?
It's slower than inserting a comment into your scripts by about 40% per call, which probably varies greatly depending on the size of the comment. The only possible reasons I can see for it are these:
# poor man's delay function
for ((x=0;x<100000;++x)) ; do : ; done
# inserting comments into string of commands
command ; command ; : we need a comment in here for some reason ; command
# an alias for `true'
while : ; do command ; done
I guess what I'm really looking for is what historical application it might have had.
Historically, Bourne shells didn't have true and false as built-in commands. true was instead simply aliased to :, and false to something like let 0.
: is slightly better than true for portability to ancient Bourne-derived shells. As a simple example, consider having neither the ! pipeline operator nor the || list operator (as was the case for some ancient Bourne shells). This leaves the else clause of the if statement as the only means for branching based on exit status:
if command; then :; else ...; fi
Since if requires a non-empty then clause and comments don't count as non-empty, : serves as a no-op.
Nowadays (that is: in a modern context) you can usually use either : or true. Both are specified by POSIX, and some find true easier to read. However there is one interesting difference: : is a so-called POSIX special built-in, whereas true is a regular built-in.
Special built-ins are required to be built into the shell; regular built-ins are only "typically" built in, and that isn't strictly guaranteed. There also usually isn't a regular external program named : with the function of true in the PATH of most systems.
Probably the most crucial difference is that with special built-ins, any variable set by the built-in - even in the environment during simple command evaluation - persists after the command completes, as demonstrated here using ksh93:
$ unset x; ( x=hi :; echo "$x" )
hi
$ ( x=hi true; echo "$x" )
$
Note that Zsh ignores this requirement, as does GNU Bash except when operating in POSIX compatibility mode, but all other major "POSIX sh derived" shells observe this, including dash, ksh93, and mksh.
Another difference is that regular built-ins must be compatible with exec - demonstrated here using Bash:
$ ( exec : )
-bash: exec: :: not found
$ ( exec true )
$
POSIX also explicitly notes that : may be faster than true, though this is of course an implementation-specific detail.
I use it to easily enable/disable variable commands:
#!/bin/bash
if [[ "$VERBOSE" == "" || "$VERBOSE" == "0" ]]; then
vecho=":" # no "verbose echo"
else
vecho=echo # enable "verbose echo"
fi
$vecho "Verbose echo is ON"
Thus
$ ./vecho
$ VERBOSE=1 ./vecho
Verbose echo is ON
This makes for a clean script. This cannot be done with '#'.
Also,
: >afile
is one of the simplest ways to guarantee that 'afile' exists but is 0 length.
A useful application for : is if you're only interested in using parameter expansions for their side-effects rather than actually passing their result to a command.
In that case, you use the parameter expansion as an argument to either : or false depending upon whether you want an exit status of 0 or 1. An example might be
: "${var:=$1}"
Since : is a builtin, it should be pretty fast.
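For example, a short transcript (the variable names are illustrative):
$ set -- default_value
$ unset var
$ : "${var:=$1}"   # the expansion assigns $1 to var; : then discards the expanded text
$ echo "$var"
default_value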
: can also be for block comment (similar to /* */ in C language). For example, if you want to skip a block of code in your script, you can do this:
: << 'SKIP'
your code block here
SKIP
Two more uses not mentioned in other answers:
Logging
Take this example script:
set -x
: Logging message here
example_command
The first line, set -x, makes the shell print out the command before running it. It's quite a useful construct. The downside is that the usual echo Log message type of statement now prints the message twice. The colon method gets round that. Note that you'll still have to escape special characters just like you would for echo.
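With set -x enabled, the resulting trace looks something like this:
+ : Logging message here
+ example_command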
Cron job titles
I've seen it being used in cron jobs, like this:
45 10 * * * : Backup for database ; /opt/backup.sh
This is a cron job that runs the script /opt/backup.sh every day at 10:45. The advantage of this technique is that it makes for better-looking email subjects when /opt/backup.sh prints some output.
It's similar to pass in Python.
One use would be to stub out a function until it gets written:
future_function () { :; }
If you'd like to truncate a file to zero bytes, useful for clearing logs, try this:
:> file.log
You could use it in conjunction with backticks (``) to execute a command without displaying its output, like this:
: `some_command`
Of course you could just do some_command > /dev/null, but the :-version is somewhat shorter.
That being said I wouldn't recommend actually doing that as it would just confuse people. It just came to mind as a possible use-case.
It's also useful for polyglot programs:
#!/usr/bin/env sh
':' //; exec "$(command -v node)" "$0" "$@"
~function(){ ... }
This is now both an executable shell-script and a JavaScript program: meaning ./filename.js, sh filename.js, and node filename.js all work.
(Definitely a little bit of a strange usage, but effective nonetheless.)
Some explication, as requested:
Shell scripts are evaluated line-by-line, and the exec command, when run, terminates the shell and replaces its process with the resultant command. This means that to the shell, the program looks like this:
#!/usr/bin/env sh
':' //; exec "$(command -v node)" "$0" "$@"
As long as no parameter expansion or aliasing is occurring in the word, any word in a shell script can be wrapped in quotes without changing its meaning; this means that ':' is equivalent to : (we've only wrapped it in quotes here to achieve the JavaScript semantics described below)
... and as described above, the first command on the first line is a no-op (it translates to : //, or if you prefer to quote the words, ':' '//'. Notice that the // carries no special meaning here, as it does in JavaScript; it's just a meaningless word that's being thrown away.)
Finally, the second command on the first line (after the semicolon), is the real meat of the program: it's the exec call which replaces the shell-script being invoked, with a Node.js process invoked to evaluate the rest of the script.
Meanwhile, the first line, in JavaScript, parses as a string-literal (':'), and then a comment, which is deleted; thus, to JavaScript, the program looks like this:
':'
~function(){ ... }
Since the string-literal is on a line by itself, it is a no-op statement, and is thus stripped from the program; that means that the entire line is removed, leaving only your program-code (in this example, the function(){ ... } body.)
Self-documenting functions
You can also use : to embed documentation in a function.
Assume you have a library script mylib.sh, providing a variety of functions. You could either source the library (. mylib.sh) and call the functions directly after that (lib_function1 arg1 arg2), or avoid cluttering your namespace and invoke the library with a function argument (mylib.sh lib_function1 arg1 arg2).
Wouldn't it be nice if you could also type mylib.sh --help and get a list of available functions and their usage, without having to manually maintain the function list in the help text?
#!/bin/bash
# all "public" functions must start with this prefix
LIB_PREFIX='lib_'
# "public" library functions
lib_function1() {
: This function does something complicated with two arguments.
:
: Parameters:
: ' arg1 - first argument ($1)'
: ' arg2 - second argument'
:
: Result:
: " it's complicated"
# actual function code starts here
}
lib_function2() {
: Function documentation
# function code here
}
# help function
--help() {
echo MyLib v0.0.1
echo
echo Usage: mylib.sh [function_name [args]]
echo
echo Available functions:
declare -f | sed -n -e '/^'$LIB_PREFIX'/,/^}$/{/\(^'$LIB_PREFIX'\)\|\(^[ \t]*:\)/{
s/^\('$LIB_PREFIX'.*\) ()/\n=== \1 ===/;s/^[ \t]*: \?['\''"]\?/ /;s/['\''"]\?;\?$//;p}}'
}
# main code
if [ "${BASH_SOURCE[0]}" = "${0}" ]; then
# the script was executed instead of sourced
# invoke requested function or display help
if [ "$(type -t - "$1" 2>/dev/null)" = function ]; then
"$#"
else
--help
fi
fi
A few comments about the code:
All "public" functions have the same prefix. Only these are meant to be invoked by the user, and to be listed in the help text.
The self-documenting feature relies on the previous point, and uses declare -f to enumerate all available functions, then filters them through sed to only display functions with the appropriate prefix.
It is a good idea to enclose the documentation in single quotes, to prevent undesired expansion and whitespace removal. You'll also need to be careful when using apostrophes/quotes in the text.
You could write code to internalize the library prefix, i.e. the user only has to type mylib.sh function1 and it gets translated internally to lib_function1. This is an exercise left to the reader.
The help function is named "--help". This is a convenient (i.e. lazy) approach that uses the library invoke mechanism to display the help itself, without having to code an extra check for $1. At the same time, it will clutter your namespace if you source the library. If you don't like that, you can either change the name to something like lib_help or actually check the args for --help in the main code and invoke the help function manually.
I saw this usage in a script and thought it was a good substitute for invoking basename.
oldIFS=$IFS
IFS=/
for basetool in $0 ; do : ; done
IFS=$oldIFS
...
this is a replacement for the code basetool=$(basename $0). It works because IFS=/ makes the for loop split $0 on slashes; the : body does nothing, and when the loop finishes, basetool holds the last path component.
Another way, not yet mentioned here, is the initialisation of parameters in infinite while loops. Below is not the cleanest example, but it serves its purpose. (Recall that ${foo=2} assigns only if foo is unset, while ${bar:=qux} assigns if bar is unset or null.)
#!/usr/bin/env bash
[ "$1" ] && foo=0 && bar="baz"
while : "${foo=2}" "${bar:=qux}"; do
echo "$foo"
(( foo == 3 )) && echo "$bar" && break
(( foo=foo+1 ))
done
