Will bash script pre-parse the syntax? - bash

I am running a bash script that needs to run different code for SunOS and Linux, and I am getting a syntax error from the part of the code that is not supposed to be executed. I did not expect that, since I thought that Bash works as an interpreter.
The bash version on SunOS is 2.5 and on Linux it is 4.1. The syntax it complains about is only supported from version 3.1 onward.
I tried to disable the newer code with an "else" clause, but it looks like it still gets pre-parsed.
Also, my script has ":" instead of "#! /bin/sh" as its first line.
test.sh:
:
echo "`uname`"
if [ `uname` = "SunOS" ]
then
echo "do old stuff"
else
echo "new stuff"
arr=($(grep "^X1" ../foo.txt | sed 's/.*=//'))
fi
The error is
> ./test.sh
SunOS
./test.sh: syntax error at line 8: `arr=' unexpected
If I comment out the error line, then it works fine:
:
echo "`uname`"
if [ `uname` = "SunOS" ]
then
echo "do old stuff"
else
echo "new stuff"
#arr=($(grep "^X1" ../foo.txt | sed 's/.*=//'))
fi
The result is
> ./test.sh
SunOS
do old stuff
My question is: how do I fix this syntax error without commenting the line out? I have to have the "if/else" to be able to run this script on different machines.

That array syntax has been supported in bash since at least version 2; if you're getting errors there, it's because your script is not running under bash at all, but under some other shell. This probably has a lot to do with your script starting with : instead of a shebang line, meaning it's up to whatever runs the script to figure out what to run it with, with inconsistent results. I'd strongly recommend using a proper shebang line. If bash doesn't exist in a predictable location, you could use #!/usr/bin/env bash. If bash might not be in the PATH, you could use something like the script prologue here -- a #!/bin/sh shebang followed by commands to find and switch to bash.
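A minimal sketch of such a prologue might look like this (the candidate paths are just examples; adjust them for your hosts):
#!/bin/sh
# If we are not already running under bash, look for one and re-exec this script with it.
if [ -z "$BASH_VERSION" ]; then
  for candidate in /bin/bash /usr/bin/bash /usr/local/bin/bash; do
    if [ -x "$candidate" ]; then
      exec "$candidate" "$0" "$@"
    fi
  done
  echo "bash not found" >&2
  exit 1
fi
# ... rest of the script, now guaranteed to be running under bash ...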
As for the question about pre-parsing: yes, bash and other shells will parse all the way to the fi keyword before executing the if construct. They need to find the then, else, and fi keywords in order to figure out what they're going to execute and what they're going to skip, and in order to find those they have to parse their way to them.
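If you cannot change how the script is invoked (i.e. it may keep running under the old SunOS shell), one way around the pre-parsing, sketched below, is to keep the newer syntax in a separate file (the name linux-stuff.sh is just an example) so the old shell never has to parse it:
# test.sh -- only old syntax here, so even an old shell can parse the whole file
if [ "`uname`" = "SunOS" ]
then
  echo "do old stuff"
else
  echo "new stuff"
  . ./linux-stuff.sh   # parsed only when actually sourced, i.e. never on SunOS
fi

# linux-stuff.sh -- contains the newer syntax
arr=($(grep "^X1" ../foo.txt | sed 's/.*=//'))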

You could stick the command in a temporary variable, and then execute the variable if your condition is true. I just ran the following on my system:
> if [ true ]; then echo hi; else [blah]=(--4); fi
-bash: syntax error near unexpected token `--4'
I get a syntax error as you describe. If I then do:
> if [ true ]; then echo hi; else var="[blah]=(--4)" && eval "${var}"; fi
hi
then it echoes hi (no error). Finally, if I do:
> if [ ]; then echo hi; else var="[blah]=(--4)" && eval "${var}"; fi
-bash: syntax error near unexpected token `--4'
Then it attempts to run the code and generates an error from actually executing it.
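Applied to the script from the question, that workaround might look like this (a sketch only; the old shell just sees an ordinary string assignment and never parses the array syntax):
if [ `uname` = "SunOS" ]
then
  echo "do old stuff"
else
  echo "new stuff"
  cmd='arr=($(grep "^X1" ../foo.txt | sed "s/.*=//"))'   # just a string to the parser
  eval "$cmd"                                            # parsed and run only on Linux
fi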

Related

How to reassign a variable (zsh) when using source utility

The code below tests whether the first character of a string matches a regex.
str=")Y"
c="${str:0:1}"
if [[ $c =~ [A-Za-z0-9_] ]]; then
echo "YES"
output=$c
else
echo "NO"
output="-"
fi
echo $output
I am running it with
source script-name.sh
However, instead of the expected printout
NO
-
I am getting an empty line without the dash:
NO
I understand the issue is somehow around the way I (re-)assign the output variable, which brings me to these questions:
How do I do it properly?
Why does the source utility have this implication?
UPD_1: this is for Mac's zsh, not bash.
UPD_2: the issue occurs only when running the script via the 'source' utility, like "source script-name.sh".
Running it with "./script-name.sh" yields the correct result.
Your problem can be reduced to doing, on the zsh command line, a
echo -
which also doesn't output anything. Similarly, a
echo - x
would output simply x and not - x.
This does not depend on whether or not you are on a Mac. Doing a
echo - -
or a
=echo -
(the latter using the external echo program) would print a dash.
Therefore, in your script-name.sh you can change the final echo to a
=echo $output
or a
echo - $output
and you should be fine.
The zshbuiltins man-page explains this when describing the echo command:
the first dash, possibly following options, is not printed, but everything following it is printed as an argument.
Therefore, in zsh, at least when printing a variable, it is better to also pass a lone dash, just to be on the safe side.
Your code gives the expected output for bash 4.2.46 on RHEL7.
Are you maybe using zsh?
See "echo the character - (dash) in the unix command line".
EDIT: Ok, if it's zsh, you probably have to use a hack:
if [[ ${output} == '-' ]]; then
echo - ${output}
else
echo ${output}
fi
or use printf:
printf '%s\n' "$output"

Default test expression behaves different in zsh vs bash - why?

Here is a simple test case script which behaves differently in zsh vs bash when I run it with $ source test_script.sh from the command line. I don't see why there is a difference when my shebang clearly states that I want bash to run my script, other than the fact that the which command is a built-in in zsh and a program in bash. (FYI: the shebang directory is where my bash program lives, which may not be the same as yours; I installed a new version using Homebrew.)
#!/usr/local/bin/bash
if [ "$(which ls)" ]; then
echo "ls command found"
else
echo "ls command not found"
fi
if [ "$(which foo)" ]; then
echo "foo command found"
else
echo "foo command not found"
I run this script with source ./test-script.sh from zsh and Bash.
Output in zsh:
ls command found
foo command found
Output in bash:
ls command found
foo command not found
My understanding is that by default test or [ ] (which are the same thing) evaluates a string as true if it's not empty/null. To illustrate:
zsh:
$ which foo
foo not found
bash:
$ which foo
$
Moreover if I redirect standard error in zsh like:
$ which foo 2> /dev/null
foo not found
zsh still seems to send foo not found to standard output, which is why (I am guessing) my test case passed for both under zsh: the expansion of "$(which xxx)" returned a non-empty string in both cases (e.g. /some/directory and foo not found). Does zsh ALWAYS return a string?
Lastly, if I remove the double quotes (e.g. $(which xxx)), zsh gives me an error. Here is the output:
ls command found
test_scritp.sh:27: condition expected not:
I am guessing zsh wanted me to use [ ! "$(which xxx)" ], but I don't understand why. It never gave that error when running in bash (and isn't this supposed to run in bash anyway?!).
Why isn't my script using bash? Why is something as trivial as this not working? I understand how to make it work in both using the -e option, but I simply want to understand why this is all happening. It's driving me bonkers.
There are two separate problems here.
First, the proper command to use is type, not which. Like you note, the command which is a zsh built-in, whereas in Bash, it will execute whatever which command happens to be on your system. There are many variants with different behaviors, which is why POSIX opted to introduce a replacement instead of trying to prescribe a particular behavior for which -- then there would be yet one more possible behavior, and no way to easily root out all the other legacy behaviors. (One early common problem was with a which command which would examine the csh environment, even if you actually used a different shell.)
Secondly, examining a command's string output is a serious antipattern, because strings differ between locales ("not found" vs. "nicht gefunden" vs. "ei löytynyt" vs. etc etc) and program versions -- the proper solution is to examine the command's exit code.
if type ls >/dev/null 2>&1; then
echo "ls command found"
else
echo "ls command not found"
fi
if type foo >/dev/null 2>&1; then
echo "foo command found"
else
echo "foo command not found"
fi
(A related antipattern is to examine $? explicitly. There is very rarely any need to do this, as it is done naturally and transparently by the shell's flow control statements, like if and while.)
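For example (a sketch; foo is just a placeholder command name):
# Antipattern: run the command, then inspect $? by hand
type foo >/dev/null 2>&1
if [ $? -eq 0 ]; then
  echo "foo command found"
fi

# Preferred: let if test the exit status directly
if type foo >/dev/null 2>&1; then
  echo "foo command found"
fi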
Regarding quoting, the shell performs whitespace tokenization and wildcard expansion on unquoted values, so if $string is command not found, the expression
[ $string ]
without quotes around the value evaluates to
[ command not found ]
which looks to the shell like the string "command" followed by some cruft which isn't syntactically valid.
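You can see the effect directly (a sketch run in an interactive shell):
string="command not found"
[ $string ]    # unquoted: [ sees the three words "command", "not", "found" and reports an error
[ "$string" ]  # quoted: [ sees one non-empty string and returns true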
Lastly, as we uncovered in the chat session (linked from comments) the OP was confused about the precise meaning of source, and ended up running a Bash script in a separate process instead. (./test-script instead of source ./test-script). For the record, when you source a file, you cause your current shell to read and execute it; in this setting, the script's shebang line is simply a comment, and is completely ignored by the shell.
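In other words (using the script from the question):
./test-script.sh         # child process: the #!/usr/local/bin/bash shebang picks the interpreter
source ./test-script.sh  # current shell executes the file: the shebang is just a comment,
                         # so under zsh this runs as zsh code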

Passing argument to script invoked by exec producing undesired result

I'm trying to pass an argument to a shell script via exec, within another shell script. However, I get an error that the script does not exist in the path - but that is not the case.
$ ./run_script.sh
$ blob has just been executed.
$ ./run_script.sh: line 8: /home/s37syed/blob.sh test: No such file or directory
For some reason it's treating the entire execution as one whole absolute path to a script - it isn't reading the string as an argument for blob.sh.
Here is the script that is being executed.
#!/bin/bash
#run_script.sh
blobPID="$(pgrep "blob.sh")"
if [[ -z "$blobPID" ]]
then
echo "blob has just been executed."
#execs as absolute path - carg not read at all
( exec "/home/s37syed/blob.sh test" )
#this works fine, as expected
#( exec "/home/s37syed/blob.sh" )
else
echo "blob is currently running with pid $blobPID"
ps $blobPID
fi
And the script being invoked by run_script.sh, not doing much, just emulating a long process/task:
#!/bin/bash
#blob.sh
i=0
carg="$1"
if [[ -z "$carg" ]]
then
echo "nothing entered"
else
echo "command line arg entered: $carg"
fi
while [ $i -lt 100000 ];
do
echo "blob is currently running" >> test.txt
let i=i+1
done
Here is the version of Bash I'm using:
$ bash --version
GNU bash, version 4.2.37(1)-release (x86_64-pc-linux-gnu)
Any advice/comments/help on why this is happening would be much appreciated!
Thanks in advance,
s37syed
Replace
exec "/home/s37syed/blob.sh test"
(which tries to execute a command named "/home/s37syed/blob.sh test" with no arguments)
by
exec /home/s37syed/blob.sh test
(which executes "/home/s37/syed/blob.sh" with a single argument "test").
Aside from the quoting problem Cyrus pointed out, I'm pretty sure you don't want to use exec. What exec does is replace the current shell with the command being executed (rather than running the command as a subprocess, as it would without exec). Putting parentheses around it makes it execute that section in a subshell, thus effectively cancelling out the effect of exec.
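To illustrate the difference (paths as in the question):
# Without exec: blob.sh runs as a child process and run_script.sh continues afterwards
/home/s37syed/blob.sh test
echo "still in run_script.sh"

# With exec and no surrounding subshell: the current shell is replaced by blob.sh,
# so this next echo would never run
exec /home/s37syed/blob.sh test
echo "never reached"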
As chepner said, you might be thinking of the eval command, which performs an extra parsing pass before executing the command. But eval is a huge bug magnet. It's incredibly easy to use eval in unsafe ways (see BashFAQ #48). If you need to construct a command, see BashFAQ #50 for better ways to do it.
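If the command genuinely has to be built up at run time, the usual alternative to eval is to keep it in an array (a sketch, reusing the path from the question):
cmd=(/home/s37syed/blob.sh test)   # one array element per argument
"${cmd[@]}"                        # runs blob.sh with the single argument "test"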

Shell script - check the syntax

How can I check the correctness of the syntax in a ksh shell script without executing it? To make my point clear: in Perl we can execute the command:
perl -c test_script.pl
to check the syntax. Is something similar to this available in ksh?
ksh -n
Most of the Bourne shell family accepts -n; tcsh does as well.
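For example (the filename is just a placeholder):
ksh -n test_script.ksh   # prints nothing if the script parses cleanly, reports syntax errors otherwise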
I did a small test with the following code:
#!/bin/bash
if [ -f "buggyScript.sh" ; then
echo "found this buggy script"
fi
Note the missing ] in the if. Now I entered
bash -n buggyScript.sh
and the missing ] was not detected.
The second test script looked like this:
#!/bin/bash
if [ -f "buggyScript.sh" ]; then
echo "found this buggy script"
Note the missing fi at the end of the if. Testing this with
bash -n buggyScript.sh
returned
buggyScript.sh: line 5: syntax error: unexpected end of file
Conclusion:
Testing the script with the -n option detects some errors, but by no means all of them. So I guess you can only really find all errors by actually executing the script.
The tests that you say failed to detect syntax errors were not, in fact, syntax errors...
echo is a command (OK, a builtin, but still a command), so ksh/bash are not going to check the spelling/syntax of your command.
Similarly, "[" is effectively an alias for the test command, and that command expects the closing bracket "]" as part of its own syntax, not ksh/bash's.
So -n does what it says on the tin, you just haven't read the tin correctly! :-)
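You can confirm this by actually running the first script: it parses fine, and the complaint comes from the [ command at run time (the exact wording may vary by shell and version):
$ bash -n buggyScript.sh   # no output: the file is syntactically fine
$ bash buggyScript.sh
buggyScript.sh: line 2: [: missing `]'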

Cygwin bash syntax error - but script run perfectly well in Ubuntu

#!/bin/bash
if test "$#" == "4"; then echo "$*"; else echo "args-error" >&2; fi;
This little code snippet troubled me a lot when I tried to run it on both Ubuntu and Cygwin.
Ubuntu runs bash version 4.0+ whereas Cygwin runs 3.2.49; but I reckon the version difference is not the cause of this, since the code runs fine under Fedora 10, which also uses bash version 3.x.
So basically I am wondering if there is a way to code my script once and for all so that I don't run into this awful issue later on.
Many thanks in advance.
Edited: I don't have Cygwin at hand at the moment, but from memory it kept saying something like couldn't resolve undefined token "fi", or words to that effect.
Edited: well, the original form is like this, just found it on the server:
#!/bin/bash
if ["$#" == "4"];
then echo "$*";
else echo "args-error" >&2;
fi;
Console complains :
$ ./test.sh 1 2 3
./test.sh: line 2: [3: command not found
args-error
I am also wondering how come stderr says something went wrong - command not found - but the script can still print out the answer?
You need whitespace around the [ and ].
#!/bin/bash
if [ "$#" == "4" ];
then echo "$*";
else echo "args-error" >&2;
fi;
Updated answer:
You need a space after [, otherwise ["$#" expands to, for example, [3, which is not a command that exists. (That is also why you still see args-error: since the command [3 fails, the if condition is false and the else branch runs.) Try this:
if [ "$#" == "4" ];
then echo "$*";
else echo "args-error" >&2;
fi;
It works for me. I would guess that you are getting an error like this:
test.sh: line 2: $'\r': command not found
test.sh: line 3: $'\r': command not found
This can happen because you have edited the file using Windows-style line endings but Bash expects Unix-style line endings. To fix the file, try running this command:
dos2unix test.sh
You of course need to change the filename to the actual filename of your script.
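If you want to check first whether CRLF line endings really are the problem, one way (assuming the usual Cygwin/GNU tools are installed) is:
file test.sh      # mentions "CRLF line terminators" if the file has Windows line endings
cat -A test.sh    # a trailing ^M before the $ at each line end means CRLF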
In your edit showing the 'original form', the problem would seem to be that you're missing a space between the [ and the ".
Try using the operator '=' instead of '=='. Also add one space after [ and another before ], as follows:
if [ "$#" = "4" ];
then echo "$*";
else echo "args-error" >&2;
fi;
Or even try '-eq' instead of '=='
if [ "$#" -eq "4" ];
Because on some systems, test does not accept the '==' operator.
