Confusing syntax error near unexpected token 'done' [duplicate] - bash

This question already has answers here:
Empty Body For Loop Linux Shell
(4 answers)
Closed 6 months ago.
I am trying to learn shell scripting, so I created a simple script with a loop that does nothing:
#!/bin/bash
names=(test test2 test3 test4)
for name in ${names[@]}
do
#do something
done
however, when I run this script I get the following errors:
./test.sh: line 6: syntax error near unexpected token `done'
./test.sh: line 6: `done'
What have I missed here? Are shell scripts 'tab sensitive'?

No, shell scripts are not tab sensitive (unless you do something really crazy, which you are not doing in this example).
You can't have an empty do ... done block (comments don't count).
Try printing $name in the body instead:
#!/bin/bash
names=(test test2 test3 test4)
for name in ${names[@]}
do
printf "%s " $name
done
printf "\n"
output
test test2 test3 test4

dash and bash are a bit brain-dead in this case: they do not allow an empty loop, so you need to add a no-op command to make this run, e.g. true or :. My tests suggest : is a bit faster, although they should be the same; I'm not sure why:
time (i=100000; while ((i--)); do :; done)
on average takes 0.262 seconds, while:
time (i=100000; while ((i--)); do true; done)
takes 0.293 seconds. Interestingly:
time (i=100000; while ((i--)); do builtin true; done)
takes 0.356 seconds.
All measurements are an average of 30 runs.

Bash has a built-in no-op, the colon (:), which is more lightweight
than spawning another process to run true.
#!/bin/bash
names=(test test2 test3 test4)
for name in "${names[#]}"
do
:
done
EDIT: William correctly points out that true is also a shell built-in, so take this answer as just another option FYI, not a better solution than using true.
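You can confirm this with the type builtin, which reports how bash resolves a name:
$ type : true
: is a shell builtin
true is a shell builtin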

You could put 'true' in the body instead of nothing.

You need to have something in your loop otherwise bash complains.

This error is expected with some versions of bash where the script was edited on Windows and so the script actually looks as follows:
#!/bin/bash^M
names=(test test2 test3 test4)^M
for name in ${names[@]}^M
do^M
printf "%s " $name^M
done^M
printf "\n"^M
where the ^M represents the carriage-return character (0x0D). This can easily be seen in vi by using the binary option as in:
vi -b script.sh
To remove those carriage-return characters simply use the vi command:
1,$s/^M//
(note that the ^M above is a single carriage-return character, to enter it in the editor use sequence Control-V Control-M)
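If you'd rather fix the file without opening an editor, either of these one-liners does the same job (assuming GNU sed, or dos2unix if it is installed):
sed -i 's/\r$//' script.sh
dos2unix script.sh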

What does the shellenv command do? [duplicate]

After reading the Bash man pages and with respect to this post, I am still having trouble understanding what exactly the eval command does and which would be its typical uses.
For example, if we do:
$ set -- one two three # Sets $1 $2 $3
$ echo $1
one
$ n=1
$ echo ${$n} ## First attempt to echo $1 using braces fails
bash: ${$n}: bad substitution
$ echo $($n) ## Second attempt to echo $1 using parentheses fails
bash: 1: command not found
$ eval echo \${$n} ## Third attempt to echo $1 using 'eval' succeeds
one
What exactly is happening here and how do the dollar sign and the backslash tie into the problem?
eval takes a string as its argument, and evaluates it as if you'd typed that string on a command line. (If you pass several arguments, they are first joined with spaces between them.)
${$n} is a syntax error in bash. Inside the braces, you can only have a variable name, with some possible prefix and suffixes, but you can't have arbitrary bash syntax and in particular you can't use variable expansion. There is a way of saying “the value of the variable whose name is in this variable”, though:
echo ${!n}
one
$(…) runs the command specified inside the parentheses in a subshell (i.e. in a separate process that inherits all settings such as variable values from the current shell), and gathers its output. So echo $($n) runs $n as a shell command, and displays its output. Since $n evaluates to 1, $($n) attempts to run the command 1, which does not exist.
eval echo \${$n} runs the parameters passed to eval. After expansion, the parameters are echo and ${1}. So eval echo \${$n} runs the command echo ${1}.
Note that you should put double quotes around variable substitutions and command substitutions anywhere a $ appears ("$foo", "$(foo)"), unless you know you need to leave them off. Without the double quotes, the shell performs field splitting (i.e. it splits the value of the variable or the output of the command into separate words) and then treats each word as a wildcard pattern. For example:
$ ls
file1 file2 otherfile
$ set -- 'f* *'
$ echo "$1"
f* *
$ echo $1
file1 file2 file1 file2 otherfile
$ n=1
$ eval echo \${$n}
file1 file2 file1 file2 otherfile
$ eval echo \"\${$n}\"
f* *
$ echo "${!n}"
f* *
eval is not used very often. In some shells, the most common use is to obtain the value of a variable whose name is not known until runtime. In bash, this is not necessary thanks to the ${!VAR} syntax. eval is still useful when you need to construct a longer command containing operators, reserved words, etc.
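As a small sketch of that last point (the variable name and the pipeline here are made up for illustration): a pipe stored in a string is not honored when the string is expanded normally, but eval re-parses it so the operator takes effect.
filter='sort | uniq -c'
eval "cut -d: -f7 /etc/passwd | $filter" # counts how many users share each login shell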
Simply think of eval as "evaluating your expression one additional time before execution"
eval echo \${$n} becomes echo $1 after the first round of evaluation. Three changes to notice:
The \$ became $ (the backslash is needed; otherwise the shell would try to expand ${$n} right away, which is a bad substitution, since you can't put a variable expansion inside the braces)
$n was evaluated to 1
The eval disappeared
In the second round, it is basically echo $1 which can be directly executed.
So eval <some command> will first evaluate <some command> (by evaluate here I mean substitute variables, replace escaped characters with the correct ones etc.), and then run the resultant expression once again.
eval is used when you want to dynamically create variables, or to read outputs from programs specifically designed to be read like this. See Eval command and security issues for examples. The link also contains some typical ways in which eval is used, and the risks associated with it.
In my experience, a "typical" use of eval is for running commands that generate shell commands to set environment variables.
Perhaps you have a system that uses a collection of environment variables, and you have a script or program that determines which ones should be set and their values. Whenever you run a script or program, it runs in a forked process, so anything it does directly to environment variables is lost when it exits. But that script or program can send the export commands to standard output.
Without eval, you would need to redirect standard output to a temporary file, source the temporary file, and then delete it. With eval, you can just:
eval "$(script-or-program)"
Note the quotes are important. Take this (contrived) example:
# activate.sh
echo 'I got activated!'
# test.py
print("export foo=bar/baz/womp")
print(". activate.sh")
$ eval $(python test.py)
bash: export: `.': not a valid identifier
bash: export: `activate.sh': not a valid identifier
$ eval "$(python test.py)"
I got activated!
The eval statement tells the shell to take eval’s arguments as commands and run them through the command-line. It is useful in a situation like below:
In your script if you are defining a command into a variable and later on you want to use that command then you should use eval:
a="ls | more"
$a
Output:
bash: command not found: ls | more
The above command didn't work, as ls tried to list files named | (pipe) and more, but those files don't exist. eval makes it work:
eval $a
Output:
file.txt
mailids
remote_cmd.sh
sample.txt
tmp
Update: Some people say one should -never- use eval. I disagree. I think the risk arises when corrupt input can be passed to eval. However there are many common situations where that is not a risk, and therefore it is worth knowing how to use eval in any case. This stackoverflow answer explains the risks of eval and alternatives to eval. Ultimately it is up to the user to determine if/when eval is safe and efficient to use.
The bash eval statement allows you to execute lines of code calculated or acquired, by your bash script.
Perhaps the most straightforward example would be a bash program that opens another bash script as a text file, reads each line of text, and uses eval to execute them in order. That's essentially the same behavior as the bash source statement, which is what one would use, unless it was necessary to perform some kind of transformation (e.g. filtering or substitution) on the content of the imported script.
I rarely have needed eval, but I have found it useful to read or write variables whose names were contained in strings assigned to other variables. For example, to perform actions on sets of variables, while keeping the code footprint small and avoiding redundancy.
eval is conceptually simple. However, the strict syntax of the bash language, and the bash interpreter's parsing order can be nuanced and make eval appear cryptic and difficult to use or understand. Here are the essentials:
The argument passed to eval is a string expression that is calculated at runtime. eval will execute the final parsed result of its argument as an actual line of code in your script.
Syntax and parsing order are stringent. If the result isn't an executable line of bash code, in scope of your script, the program will crash on the eval statement as it tries to execute garbage.
When testing you can replace the eval statement with echo and look at what is displayed. If it is legitimate code in the current context, running it through eval will work.
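A tiny sketch of that testing technique (the command string is arbitrary):
cmd='date > /tmp/now.txt'
echo "$cmd" # inspect the line eval would execute
eval "$cmd" # run it once it looks right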
The following examples may help clarify how eval works...
Example 1:
eval statement in front of 'normal' code is a NOP
$ eval a=b
$ eval echo $a
b
In the above example, the first eval serves no purpose and can be eliminated: there is no dynamic aspect to the code, it already parses to its final form, so it behaves identically to the same statement without eval. The second eval is pointless too: although there is a parsing step converting $a to its literal string equivalent, there is no indirection (e.g. no referencing via the string value of an actual bash variable), so it behaves identically to the same line of code without the eval prefix.
Example 2:
Perform var assignment using var names passed as string values.
$ key="mykey"
$ val="myval"
$ eval $key=$val
$ echo $mykey
myval
If you were to echo $key=$val, the output would be:
mykey=myval
That, being the final result of string parsing, is what will be executed by eval, hence the result of the echo statement at the end...
Example 3:
Adding more indirection to Example 2
$ keyA="keyB"
$ valA="valB"
$ keyB="that"
$ valB="amazing"
$ eval eval \$$keyA=\$$valA
$ echo $that
amazing
The above is a bit more complicated than the previous example, relying more heavily on the parsing-order and peculiarities of bash. The eval line would roughly get parsed internally in the following order (note the following statements are pseudocode, not real code, just to attempt to show how the statement would get broken down into steps internally to arrive at the final result).
eval eval \$$keyA=\$$valA # substitution of $keyA and $valA by interpreter
eval eval \$keyB=\$valB # convert '$' + name-strings to real vars by eval
eval $keyB=$valB # substitution of $keyB and $valB by interpreter
eval that=amazing # execute string literal 'that=amazing' by eval
If the assumed parsing order doesn't sufficiently explain what eval is doing, the next example may describe the parsing in more detail to help clarify what is going on.
Example 4:
Discover whether vars, whose names are contained in strings, themselves contain string values.
a="User-provided"
b="Another user-provided optional value"
c=""
myvarname_a="a"
myvarname_b="b"
myvarname_c="c"
for varname in "myvarname_a" "myvarname_b" "myvarname_c"; do
eval varval=\$$varname
if [ -z "$varval" ]; then
read -p "$varname? " $varname
fi
done
In the first iteration:
varname="myvarname_a"
Bash expands the argument before eval runs, so at runtime eval literally sees this:
varval=$myvarname_a
The following pseudocode attempts to illustrate how bash interprets the above line of real code, to arrive at the final value executed by eval. (the following lines descriptive, not exact bash code):
1. eval varval="\$" + "$varname" # This substitution resolved in eval statement
2. .................. "$myvarname_a" # $myvarname_a previously resolved by for-loop
3. .................. "a" # ... to this value
4. eval "varval=$a" # This requires one more parsing step
5. eval varval="User-provided" # Final result of parsing (eval executes this)
Once all the parsing is done, the result is what is executed, and its effect is obvious, demonstrating there is nothing particularly mysterious about eval itself, and the complexity is in the parsing of its argument.
varval="User-provided"
The remaining code in the example above simply tests to see if the value assigned to $varval is null, and, if so, prompts the user to provide a value.
I intentionally never learned how to use eval, because most people will recommend staying away from it like the plague. However, I recently discovered a use case that made me facepalm for not recognizing it sooner.
If you have cron jobs that you want to run interactively to test, you might view the contents of the file with cat, and copy and paste the cron job to run it. Unfortunately, this involves touching the mouse, which is a sin in my book.
Let's say you have a cron job at /etc/cron.d/repeatme with the contents:
*/10 * * * * root program arg1 arg2
You can't execute this as a script with all the junk in front of it, but you can use cut to get rid of the junk, wrap the result in a command substitution, and execute the string with eval:
eval $( cut -d ' ' -f 6- /etc/cron.d/repeatme)
The cut command prints fields 6 onward of the file, delimited by spaces. eval then executes that text as a command.
I used a cron job here as an example, but the concept is to format text from stdout, and then evaluate that text.
The use of eval in this case is not insecure, because we know exactly what we will be evaluating before hand.
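That said, if you would rather avoid eval here entirely, piping the extracted text to a shell achieves the same effect; the trade-off is that the command runs in a child process, so it cannot modify the current shell's variables:
cut -d ' ' -f 6- /etc/cron.d/repeatme | bash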
I've recently had to use eval to force multiple brace expansions to be evaluated in the order I needed. Bash does multiple brace expansions from left to right, so
xargs -I_ cat _/{11..15}/{8..5}.jpg
expands to
xargs -I_ cat _/11/8.jpg _/11/7.jpg _/11/6.jpg _/11/5.jpg _/12/8.jpg _/12/7.jpg _/12/6.jpg _/12/5.jpg _/13/8.jpg _/13/7.jpg _/13/6.jpg _/13/5.jpg _/14/8.jpg _/14/7.jpg _/14/6.jpg _/14/5.jpg _/15/8.jpg _/15/7.jpg _/15/6.jpg _/15/5.jpg
but I needed the second brace expansion done first, yielding
xargs -I_ cat _/11/8.jpg _/12/8.jpg _/13/8.jpg _/14/8.jpg _/15/8.jpg _/11/7.jpg _/12/7.jpg _/13/7.jpg _/14/7.jpg _/15/7.jpg _/11/6.jpg _/12/6.jpg _/13/6.jpg _/14/6.jpg _/15/6.jpg _/11/5.jpg _/12/5.jpg _/13/5.jpg _/14/5.jpg _/15/5.jpg
The best I could come up with to do that was
xargs -I_ cat $(eval echo _/'{11..15}'/{8..5}.jpg)
This works because the single quotes protect the first set of braces from expansion while the command substitution is parsed, leaving them to be expanded later when eval re-parses its arguments.
There may be some cunning scheme involving nested brace expansions that allows this to happen in one step, but if there is I'm too old and stupid to see it.
You asked about typical uses.
One common complaint about shell scripting is that you (allegedly) can't pass by reference to get values back out of functions.
But actually, via "eval", you can pass by reference. The callee can pass back a list of variable assignments to be evaluated by the caller. It is pass by reference because the caller can allowed to specify the name(s) of the result variable(s) - see example below. Error results can be passed back standard names like errno and errstr.
Here is an example of passing by reference in bash:
#!/bin/bash
isint()
{
re='^[-]?[0-9]+$'
[[ $1 =~ $re ]]
}
#args 1: name of result variable, 2: first addend, 3: second addend
iadd()
{
if isint ${2} && isint ${3} ; then
echo "$1=$((${2}+${3}));errno=0"
return 0
else
echo "errstr=\"Error: non-integer argument to iadd $*\" ; errno=329"
return 1
fi
}
var=1
echo "[1] var=$var"
eval $(iadd var A B)
if [[ $errno -ne 0 ]]; then
echo "errstr=$errstr"
echo "errno=$errno"
fi
echo "[2] var=$var (unchanged after error)"
eval $(iadd var $var 1)
if [[ $errno -ne 0 ]]; then
echo "errstr=$errstr"
echo "errno=$errno"
fi
echo "[3] var=$var (successfully changed)"
The output looks like this:
[1] var=1
errstr=Error: non-integer argument to iadd var A B
errno=329
[2] var=1 (unchanged after error)
[3] var=2 (successfully changed)
There is almost unlimited bandwidth in that text output! And there are more possibilities if multiple output lines are used: e.g., the first line could be used for variable assignments, the second for a continuous 'stream of thought', but that's beyond the scope of this post.
In the question:
who | grep $(tty | sed s:/dev/::)
outputs errors claiming that files a and tty do not exist. I understood this to mean that tty is not being interpreted before execution of grep, but instead that bash passed tty as a parameter to grep, which interpreted it as a file name.
There is also a situation of nested redirection, which you might expect to be handled by matching parentheses specifying a child process first; but bash is primarily a word separator, creating parameters to be sent to a program, so parentheses are not matched first but interpreted as they are seen.
I got specific with grep, and specified the file as a parameter instead of using a pipe. I also simplified the base command, passing output from a command as a file, so that i/o piping would not be nested:
grep $(tty | sed s:/dev/::) <(who)
works well.
who | grep $(echo pts/3)
is not really desired, but eliminates the nested pipe and also works well.
In conclusion, bash does not seem to like nested piping. It is important to understand that bash is not a new-wave program written in a recursive manner. Instead, bash is an old 1-2-3 program, which has been appended with features. For purposes of assuring backward compatibility, the initial manner of interpretation has never been modified. If bash were rewritten to first match parentheses, how many bugs would be introduced into how many bash programs? Many programmers love to be cryptic.
As clearlight has said, "(p)erhaps the most straightforward example would be a bash program that opens another bash script as a text file, reads each line of text, and uses eval to execute them in order". I'm no expert, but the textbook I'm currently reading (Shell-Programmierung by Jürgen Wolf) points to one particular use of this that I think would be a valuable addition to the set of potential use cases collected here.
For debugging purposes, you may want to go through your script line by line (pressing Enter for each step). You can use eval to execute every line by trapping the DEBUG signal (which fires before each simple command):
trap 'printf "$LINENO :-> " ; read line ; eval $line' DEBUG
I like the "evaluating your expression one additional time before execution" answer, and would like to clarify with another example.
var="\"par1 par2\""
echo $var # prints nicely "par1 par2"
function cntpars() {
echo " > Count: $#"
echo " > Pars : $*"
echo " > par1 : $1"
echo " > par2 : $2"
if [[ $# = 1 && $1 = "par1 par2" ]]; then
echo " > PASS"
else
echo " > FAIL"
return 1
fi
}
# Option 1: Will Pass
echo "eval \"cntpars \$var\""
eval "cntpars $var"
# Option 2: Will Fail, with curious results
echo "cntpars \$var"
cntpars $var
The curious results in option 2 are that we would have passed two parameters as follows:
First parameter: "par1
Second parameter: par2"
How is that for counterintuitive? The additional eval will fix that.
It was adapted from another answer on How can I reference a file for variables using Bash?

Receive values directly from the command line

What I'm attempting to do is receive values from the command line (instead of using the read method and asking the user to enter the values and/or file names in multiple steps).
./hello.sh 5 15 <file_name.txt
I have heard that simply using an array can help do the same, but I am not able to avoid printing 5 15 on the next line.
Since 5 and 15 are being printed, I'd expect the string 'abcdefgh' (contents of file_name.txt) to be printed; however, the output stops at
5 15
I would really appreciate it if someone could point out why my code isn't sufficient, and if possible, point me in the direction of some learning resources to broaden my knowledge of this concept.
Here is the code:
#! /usr/bin/bash
echo "$#"
I am simply testing things out (wanted to print out the variables before doing anything with and to them).
<file_name.txt is a redirection. It is not passed as a parameter. The parameters of the script are 5 and 15. The < redirects the file file_name.txt to standard input stdin of the script. You can read from stdin with for example cat.
#!/usr/bin/bash
echo "$#" # outputs parameters of the script joined with spaces
cat # redirects standard input to standard output, i.e. reads from the fiel
why my code isn't sufficient
Your script is not reading from the file, so the content of the file is ignored.
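For example, with the cat line added, a run like the one in the question should print both the parameters and the file contents (assuming file_name.txt contains abcdefgh, as described):
$ ./hello.sh 5 15 <file_name.txt
5 15
abcdefgh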
point me in the direction of some learning resources
File descriptors and redirections and standard streams are basic tools in shell - you should learn about them in any shell and linux introduction. My 5 min google search resulted in this link https://www.digitalocean.com/community/tutorials/an-introduction-to-linux-i-o-redirection , which looks like some introduction to the topic.
Will this work?
./hello.sh 5 15 `cat file_name.txt`
And update hello.sh to:
#! /usr/bin/bash
shift 2
echo "$@"
Here is a more generic solution. It looks at each input parameter in turn. If it is a valid file, it outputs the contents of the file. Otherwise it just prints the parameter.
#! /usr/bin/bash
for parameter in "${@}"; do # Quotation marks avoid splitting parameters with spaces.
if [ -f "$parameter" ]; then # '-f {value}' tests if {value} is a file.
cat "$parameter"
else
echo "$parameter" # You could also use 'echo -n ...' to skip newlines.
fi
done

bash "$#" not working with arguments starting with '-'

I am working on an option driven bash script that will use getopts. The script has cases where it can accept multiple options and specific cases where only one option is accepted. While testing a few cases out I ran into this issue which I have reduced down to pseudo-code for this question.
for arg in "$#"; do
echo ${arg}
done
echo "end"
Running below returns:
$ ./test.sh -a -b
-a
end
I am running bash 4.1.2; why isn't the -b printed on the empty line? I assume this has to do with the '-'.
I cannot reproduce your exact error, but this is the risk of using echo: if $arg is a valid option, it will be treated as such, not as a string to print. Use printf instead:
printf '%s\n' "$arg"
Also check whether you have applied any "shift" commands that might remove the arguments before you test them (typical in an argument-collection block that might include a case statement).

How to comment out particular lines in a shell script

Can anyone suggest how to comment particular lines in the shell script other than #?
Suppose I want to comment five lines. Instead of adding # to each line, is there any other way to comment the five lines?
You can comment out a section of a script using a conditional.
For example, the following script:
DEBUG=false
if ${DEBUG}; then
echo 1
echo 2
echo 3
echo 4
echo 5
fi
echo 6
echo 7
would output:
6
7
In order to uncomment the section of the code, you simply need to comment the variable:
#DEBUG=false
(Doing so would print the numbers 1 through 7: with DEBUG unset, ${DEBUG} expands to nothing, and bash treats the resulting empty command as a success. Setting DEBUG=true works as well.)
Yes (although it's a nasty hack). You can use a heredoc thus:
#!/bin/sh
# do valuable stuff here
touch /tmp/a
# now comment out all the stuff below up to the EOF
echo <<EOF
...
...
...
EOF
What's this doing? A heredoc feeds all the following input up to the terminator (in this case, EOF) into the nominated command. So you can surround the code you wish to comment out with
echo <<EOF
...
EOF
and it'll take all the code contained between the two EOFs and feed them to echo (echo doesn't read from stdin so it all gets thrown away).
Note that with the above you can put anything in the heredoc. It doesn't have to be valid shell code (i.e. it doesn't have to parse properly).
This is very nasty, and I offer it only as a point of interest. You can't do the equivalent of C's /* ... */ comments.
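A slightly cleaner variant of the same hack uses the : builtin with a quoted delimiter; quoting the delimiter stops the shell from expanding anything inside, so stray $(...) or backticks in the commented-out block cannot run:
: <<'COMMENT'
echo this is never executed
COMMENT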
For a single-line comment, add # at the start of the line.
For multiple lines, some suggest adding a ' (single quote) where you want the comment to start and another ' where you want it to end; be aware that this actually creates one big quoted string that the shell will still try to execute as a command name, so it is not a true comment.
You have to rely on '#' but to make the task easier in vi you can perform the following (press escape first):
:10,20 s/^/#
with 10 and 20 being the start and end line numbers of the lines you want to comment out
and to undo when you are complete:
:10,20 s/^#//
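If you'd rather not open the file at all, the same edit can be scripted with GNU sed's in-place mode:
sed -i '10,20 s/^/#/' script.sh
and to undo:
sed -i '10,20 s/^#//' script.sh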

BASH Variables with multiple commands and reentrant

I have a bash script that sources contents from another file. The contents of the other file are commands I would like to execute and compare the return value. Some of the entries have multiple commands separated by either a semicolon (;) or by ampersands (&&), and I can't seem to make this work. To work on this, I created some test scripts as shown:
test.conf is the file being sourced by test.sh
Example-1 (this works): the two outputs are 2 seconds apart
test.conf
CMD[1]="date"
test.sh
. test.conf
i=2
echo "$(${CMD[$i]})"
sleep 2
echo "$(${CMD[$i]})"
Example-2 (this does not work)
test.conf (same script as above)
CMD[1]="date;date"
Example-3 (tried this, it does not work either)
test.conf (same script as above)
CMD[1]="date && date"
I don't want my variable, CMD, to be inside backticks because then the commands would be executed when the file is sourced, and I see no way of re-evaluating the variable.
This script essentially calls CMD on pass-1 to check something, if on pass-1 I get a false reading, I do some work in the script to correct the false reading and re-execute & re-evaluate the output of CMD; pass-2.
Here is an example. Here I'm checking to see if SSHD is running. If it's not running when I evaluate CMD[1] on pass-1, I will start it and re-evaluate CMD[1] again.
test.conf
CMD[1]=`pgrep -u root -d , sshd 1>/dev/null; echo $?`
So if I modify this for my test script, then test.conf becomes:
CMD[1]=`date;date` or `date && date`
My script looks like this (to handle the tick marks)
. test.conf
i=2
echo "${CMD[$i]}"
sleep 2
echo "${CMD[$i]}"
I get the same date/time printed twice despite the 2-second delay. As such, CMD is not getting re-evaluated.
First of all, you should never use backticks unless you need to be compatible with an old shell that doesn't support $() - and only then.
Secondly, I don't understand why you're setting CMD[1] but then calling CMD[$i] with i set to 2.
Anyway, this is one way (and it's similar to part of Barry's answer):
CMD[1]='$(date;date)' # no backticks (remember - they carry Lyme disease)
eval echo "${CMD[1]}" # or $i instead of 1
From the couple of lines of your question, I would have expected some approach like this:
#!/bin/bash
while read -r line; do
# munge $line
if eval "$line"; then
# success
else
# fail
fi
done
Where you have backticks in the source, you'll have to escape them to avoid evaluating them too early. Also, backticks aren't the only way to evaluate code - there is eval, as shown above. Maybe it's eval that you were looking for?
For example, this line:
CMD[1]=`pgrep -u root -d , sshd 1>/dev/null; echo $?`
should probably look more like this:
CMD[1]='`pgrep -u root -d , sshd 1>/dev/null; echo $?`'
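To see the deferred evaluation at work with the date example from the question, a minimal sketch: store the command as a plain string, then eval it at each use.
CMD[1]='date; sleep 2; date' # nothing runs at assignment time
eval "${CMD[1]}" # runs here; the two timestamps differ by 2 seconds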
