What is the purpose of the EOL semicolon in this bash script? - bash

I'm trying to adapt some code I found for a monitor layout script.
...
while read l
do
dir=$(dirname $l);
status=$(cat $l);
dev=$(echo $dir | cut -d\- -f 2-);
if [ $(expr match $dev "HDMI") != "0" ]
...
As per the Bash man page: A semicolon can either be a metacharacter or a control operator.
I understand the metacharacter use is for consecutive commands on the same line. Is it being used as a control operator in this case? I haven't used it this way before and the script functions without it. I don't want to remove or keep it without understanding its purpose.

There might have been some thinking process going on along the lines of
"this should execute even if concatenated into a single line", or
"this was created from a single line originally".
But in that case you would need additional semicolons at the end of the while and if lines to make it work.
So, no, there is no purpose to them other than a bad habit by the programmer in question.

No, you do not need a semicolon in this case. You would only need a semicolon in bash if there is more than one command on a single line. Here is an example of where a semicolon is often used in bash to make it more readable:
if [ $a -gt 12 ]; then
You see here, the [ ... ] test and the then have to be separated (by a newline or a semicolon), and the semicolon lets you put them on the same line, which makes the code easier to read in my opinion.
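A minimal illustration of the two equivalent spellings (the variable a is carried over from the line above, and the echo is just a placeholder):
# newline as the separator:
if [ "$a" -gt 12 ]
then
    echo "a is big"
fi
# semicolon as the separator -- same meaning, fewer lines:
if [ "$a" -gt 12 ]; then echo "a is big"; fi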

As per @anubhava's comment on the question:
No, there is no apparent good reason for this...

How to create new names for files with problematic characters for use in an existing bash scripted environment?

The goal is to get rid of (by changing) filenames that give headaches for scripting, by translating them to something else. The reason is that in this nearly 30-year-old Unix/Linux environment, with a lot of existing scripts that may not be "written correctly", a new, large and important cache of files has arrived that has to be managed, and a colleague has asked me to write a script to help with the "problematic filenames" by translating them. They've got a list of characters to turn into dots (such as the comma) and another list to turn into underscores (such as whitespace), as but two examples, and I ran into problems which I asked about over here.
I was using tr to do it, but commenters there said I should perhaps ask just about this instead of about how to get tr to work. So, I have!
Parameter expansion can do this for you.
Note that unlike when using tr (as requested on your other question), when using parameter expansion you don't need to use backslashes inside your character class definitions: put the expansion in double quotes and bash will treat the results of that expansion as literal.
#!/usr/bin/env bash
toDots='\,;:|+##$%^&*~'
toUnderscores='}{]['"'"'="()`!'
# requires bash 4.4+ (for the ${*@Q} quoting expansion): if debug=1,
# print the command we would run instead of running it
runOrDebug() {
  if (( debug )); then
    printf '%s\n' "${*@Q}"
  else
    "$@"
  fi
}
renameFiles() {
  local name subDots subBoth
  for name; do
    subDots=${name//["$toDots"]/.}
    subBoth=${subDots//["$toUnderscores"]/_}
    if [[ $subBoth != "$name" ]]; then
      runOrDebug mv -- "$name" "$subBoth"
    fi
  done
}
debug=1 renameFiles '[/a],/;[p:r|o\b+lem#a#t$i%c]/#(%$^!/(e^n&t*ry)~='
Note that both assignments are (except for the single quote in the middle of toUnderscores) in single quotes, so any backslashes in them are part of the variable's data rather than being syntax; and because glob bracket expressions borrow their character-class syntax from regular expressions, the [...] sets above are parsed as POSIX character-class syntax.
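As a quick sanity check of that quoting point, with made-up sample values (not part of the script above):
chars='\,'                       # a literal backslash and a comma; no escaping needed
name='weird\name,with,commas'
echo "${name//["$chars"]/.}"     # prints: weird.name.with.commas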
See a demonstration of the technique running at https://ideone.com/kKE7IJ

Why is a separator not required in `for i do cmd; done` [duplicate]

This question already has an answer here:
Is a semicolon prohibited after NAME in `for NAME do ...`?
(1 answer)
Closed 4 years ago.
for i do echo $i; done
How is this even legal? (I would expect it to be written with an extra semi-colon for i; do echo $i; done) It works in bash, dash, zsh, and ksh. The standard (by which I mean http://pubs.opengroup.org/onlinepubs/9699919799/) states:
The for loop requires that the reserved words do and done be used to
delimit the sequence of commands. The format for the for loop is as follows:
for name [ in [word ... ]]
do
compound-list
done
So clearly when "in word" is omitted, do is serving as a separator. So the implication seems to be that the separator (the newline) after the [ in [word .. ]] actually belongs inside the closing right bracket. Can someone point to anything in the standard which justifies this (IMO) horrible abuse of the language?
If you look at the GNU man page, you see the loop has this syntax:
for
The syntax of the for command is:
for name [ [in [words …] ] ; ] do commands; done
So as you can see the extra semi-colon is part of the optional section.
The linux-die man page states the same.
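To see the two spellings side by side (a small sketch; the values one two three are arbitrary):
set -- one two three
# no "in list": the loop iterates over the positional parameters,
# and no separator is needed before do
for i do echo "$i"; done
# equivalent long form with the optional semicolon
for i in "$@"; do echo "$i"; done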
Perhaps someone else can fill you in on the history of the syntax, but in shell scripting, whitespace (by default) is the delimiter between words. The ; character is just a statement separator; it's often used instead of a newline to write logic as a "one-liner". Semicolons at the end of a line in a shell script are superfluous, since both the semicolon and the newline act as command separators.
Also worth noting is the bracket syntax above is meant to indicate that those parts of the syntax are optional (i.e. you don't actually use brackets in the script there).
Finally, I find it's good to think of shells as... well, shells. You can write scripts with them and as "languages" they're Turing complete, but the syntax is often kind of funky.

Way to create multiline comments in Bash?

I have recently started studying shell script and I'd like to be able to comment out a set of lines in a shell script. I mean like it is in case of C/Java :
/* comment1
comment2
comment3
*/
How could I do that?
Use : ' to open and ' to close.
For example:
: '
This is a
very neat comment
in bash
'
Multiline comment in bash
: <<'END_COMMENT'
This is a heredoc (<<) redirected to a NOP command (:).
The single quotes around END_COMMENT are important,
because it disables variable resolving and command resolving
within these lines. Without the single-quotes around END_COMMENT,
the following two $() `` commands would get executed:
$(gibberish command)
`rm -fr mydir`
comment1
comment2
comment3
END_COMMENT
Note: I updated this answer based on comments and other answers, so comments prior to May 22nd 2020 may no longer apply. Also, I noticed today that some IDEs like VS Code and PyCharm do not recognize a HEREDOC marker that contains spaces, whereas bash has no problem with it, so I'm updating this answer again.
Bash does not provide a builtin syntax for multi-line comment but there are hacks using existing bash syntax that "happen to work now".
Personally I think the simplest (ie least noisy, least weird, easiest to type, most explicit) is to use a quoted HEREDOC, but make it obvious what you are doing, and use the same HEREDOC marker everywhere:
<<'###BLOCK-COMMENT'
line 1
line 2
line 3
line 4
###BLOCK-COMMENT
Single-quoting the HEREDOC marker avoids some shell parsing side-effects, such as weird substitutions that would cause a crash or unwanted output, and even parsing of the marker itself. So the single-quotes give you more freedom over the open-close comment marker.
For example, the following uses a triple hash, which kind of suggests a multi-line comment in bash. This would crash the script if the single quotes were absent. Even if you removed the ###, the ${FOO{}} would crash the script (or cause a "bad substitution" error to be printed if set -e were not in effect) if it weren't for the single quotes:
set -e
<<'###BLOCK-COMMENT'
something something ${FOO{}} something
more comment
###BLOCK-COMMENT
ls
You could of course just use
set -e
<<'###'
something something ${FOO{}} something
more comment
###
ls
but the intent of this is definitely less clear to a reader unfamiliar with this trickery.
Note my original answer used '### BLOCK COMMENT', which is fine if you use vanilla vi/vim but today I noticed that PyCharm and VS Code don't recognize the closing marker if it has spaces.
Nowadays any good editor allows you to press ctrl-/ or similar, to un/comment the selection. Everyone definitely understands this:
# something something ${FOO{}} something
# more comment
# yet another line of comment
although admittedly, this is not nearly as convenient as the block comment above if you want to re-fill your paragraphs.
There are surely other techniques, but there doesn't seem to be a "conventional" way to do it. It would be nice if ###> and ###< could be added to bash to indicate the start and end of a comment block; it seems like it could be pretty straightforward.
After reading the other answers here I came up with the below, which IMHO makes it really clear it's a comment. Especially suitable for in-script usage info:
<< ////
Usage:
This script launches a spaceship to the moon. It's doing so by
leveraging the power of the Fifth Element, AKA Leeloo.
Will only work if you're Bruce Willis or a relative of Milla Jovovich.
////
As a programmer, the sequence of slashes immediately registers in my brain as a comment (even though slashes are normally used for line comments).
Of course, "////" is just an arbitrary heredoc delimiter; the opening and closing markers must match exactly, so use the same number of slashes on both ends.
I tried the chosen answer, but found that when I ran a shell script containing it, the whole block was printed to the screen (similar to how Jupyter notebooks print everything in '''xx''' quotes) and there was an error message at the end. It wasn't doing any harm, but it was scary. Then I realised while editing it that single-quotes can span multiple lines. So... let's just assign the block to a variable.
x='
echo "these lines will all become comments."
echo "just make sure you don_t use single-quotes!"
ls -l
date
'
what's your opinion on this one?
function giveitauniquename()
{
so this is a comment
echo "there's no need to further escape apostrophes/etc if you are commenting your code this way"
the drawback is it will be stored in memory as a function as long as your script runs unless you explicitly unset it
only valid-ish bash allowed inside for instance these would not work without the "pound" signs:
1, for #((
2, this #wouldn't work either
function giveitadifferentuniquename()
{
echo nestable
}
}
Here's how I do multiline comments in bash.
This mechanism has two advantages that I appreciate. One is that comments can be nested. The other is that blocks can be enabled by simply commenting out the initiating line.
#!/bin/bash
# : <<'####.block.A'
echo "foo {" 1>&2
fn data1
echo "foo }" 1>&2
: <<'####.block.B'
fn data2 || exit
exit 1
####.block.B
echo "can't happen" 1>&2
####.block.A
In the example above the "B" block is commented out, but the parts of the "A" block that are not the "B" block are not commented out.
Running that example will produce this output:
foo {
./example: line 5: fn: command not found
foo }
can't happen
A simple solution, not very smart:
Temporarily block a part of a script:
if false; then
while you respect syntax a bit, please
do write here (almost) whatever you want.
but when you are
done # write
fi
A slightly more sophisticated version:
time_of_debug=false # Let's set this variable at the beginning of a script
if $time_of_debug; then # in a middle of the script
echo I keep this code aside until there is the time of debug!
fi
In plain bash, to comment out a block of code, I do:
:||{
block
of code
}
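Spelled out with some placeholder lines, that looks like this; the block is still parsed (so it must be valid bash), but it never runs because : always succeeds and || short-circuits:
: || {
    echo "this line is parsed but never executed"
    echo "so it behaves like a comment, as long as the contents are valid bash"
}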

Error: =: command not found (Bash Script)

I found a nifty little shell script that I wanted to use from this website here. I have followed everything step-by-step, but receive the following error when running this on my CentOS box.
./deploy: line 3: =: command not found
Line 3 only contains...
$ERRORSTRING = "Error. Please make sure you've indicated correct parameters"
I've tried toying around with it a bit, but don't understand why it won't accept the "=" character. Is there something wrong with the script, or is it merely something different in the way that my server processes the script?
Thanks!
Gah, that script is full of bad scripting practices (in addition to the outright error you're running into). Here's the outright error:
$ERRORSTRING = "Error. Please make sure you've indicated correct parameters"
As devnull pointed out, bash expands the (empty) $ERRORSTRING, leaving = as the first word of the line, and then tries to run = as a command, hence the error. An assignment has no $ on the left-hand side and no spaces around the =, so this should be:
ERRORSTRING="Error. Please make sure you've indicated correct parameters"
A couple of lines down (and again near the end), we have:
echo $ERRORSTRING;
...which works, but contains two bad ideas: a variable reference without double-quotes around it (which will sometimes be parsed in unexpected ways), and a semicolon at the end of the line (which is a sign that someone is trying to write C or Java or something in a shell script). Use this instead:
echo "$ERRORSTRING"
The next line is:
elif [ $1 == "live" ]
...which might work, depending on whether the value of $1 has spaces, or is defined-but-blank, or anything like that (again, use double-quotes to prevent misparsing!). Also, the == comparison operator is nonstandard -- it'll work, because bash supports it in its [ ... ] builtin syntax, but if you're counting on having bash extensions available, why not use the much cleaner [[ ... ]] replacement? Any of these would be a better replacement for that line:
elif [ "$1" = "live" ]
elif [[ $1 == "live" ]]
elif [[ "$1" == "live" ]]
Personally, I prefer the last. The double-quotes aren't needed in this particular case, but IMO it's safest to just double-quote all variable references unless there's a specific reason not to. A bit further down, there's an elif [ $2 == "go" ] to which the same comments apply.
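A quick sketch of the blank-argument failure mode described above ("live" is just the literal from the script; try these in an interactive shell):
set --                    # leave $1 unset/blank
[ $1 == "live" ]          # expands to [ == live ] and complains: unary operator expected
[[ $1 == "live" ]]        # no word splitting inside [[ ]], so this simply evaluates to false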
BTW, there's a good sanity-checking tool for shell scripts at www.shellcheck.net. It's not quite as picky as I am (e.g. it doesn't flag semicolons at the ends of lines), but it pointed out all of the actual errors in this script...
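For example, with shellcheck installed locally, you can point it straight at the script from the question (the website accepts pasted scripts as well):
shellcheck ./deploy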
"Devnulls" answer was correct -- I had to remove the spaces around the "=" and remove the "$" from that line as well. The end result was...
ERRORSTRING="Error. Please make sure you've indicated correct parameters"
I've upvoted devnull's and gniourf_gniourf's comments.
Thank you to all who have assisted!

BASH Expression to replace beginning and ending of a string in one operation?

Here's a simple problem that's been bugging me for some time. I often find I have a number of input files in some directory, and I want to construct output file names by replacing beginning and ending portions. For example, given this:
source/foo.c
source/bar.c
source/foo_bar.c
I often end up writing BASH expressions like:
for f in source/*.c; do
a="obj/${f##*/}"
b="${a%.*}.obj"
process "$f" "$b"
done
to generate the commands
process "source/foo.c" "obj/foo.obj"
process "source/bar.c" "obj/bar.obj"
process "source/foo_bar.c" "obj/foo_bar.obj"
The above works, but it's a lot wordier than I like, and I would prefer to avoid the temporary variables. Ideally there would be some command that could replace the beginning and end of a string in one shot, so that I could just write something like:
for f in source/*.c; do process "$f" "obj/${f##*/%.*}.obj"; done
Of course, the above doesn't work. Does anyone know something that will? I'm just trying to save myself some typing here.
Not the prettiest thing in the world, but you can use a regular expression to group the content you want to pick out, and then refer to the BASH_REMATCH array:
if [[ $f =~ ^source/(.*)\.c$ ]]; then f="obj/${BASH_REMATCH[1]}.obj"; fi
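Wired into the original loop, that would look roughly like this (process is the same placeholder command as in the question):
for f in source/*.c; do
    if [[ $f =~ ^source/(.*)\.c$ ]]; then
        process "$f" "obj/${BASH_REMATCH[1]}.obj"
    fi
done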
You shouldn't have to worry about your code being "wordier" or not. In fact, being a bit verbose does no harm; consider how much it will improve your (or someone else's) understanding of the script. Besides, for performance, using bash's internal string manipulation is much faster than calling external commands. Lastly, you are not going to retype these commands every time you use them, right? So why worry that it's "wordier" when these commands are already in your script?
Not directly in bash. You can use sed, of course:
b="$(sed -E 's|^source/(.*)\.c$|obj/\1.obj|' <<< "$f")"
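Inlined into the loop to avoid the temporary variable, that becomes (just a sketch):
for f in source/*.c; do
    process "$f" "$(sed -E 's|^source/(.*)\.c$|obj/\1.obj|' <<< "$f")"
done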
Why not simply use cd to remove the "source/" part?
This way we can avoid the temporary variables a and b:
for f in $(cd source; printf "%s\n" *.c); do
echo process "source/${f}" "obj/${f%.*}.obj"
done
