I am trying to run the physusr.sh file in Git Bash on Windows. I am trying to set JAVA_HOME in the physusr.sh file as below.
JAVA_HOME=C:/Program Files (x86)/IBM/WebSphere/AppServer
JAVA_EXE=$JAVA_HOME/bin/java
cd /H/US_L3/MLAdminBatchLocal/original
but I am facing this error when I run the file in Git Bash:
./physusr.sh: line 1: syntax error near unexpected token `('
./physusr.sh: line 1: `JAVA_HOME=C:/Program Files (x86)/IBM/WebSphere/AppServer'
I have tried using double quotes and backslashes, but then I was getting a "no such file or directory" error. How do I make this work? Should I run this .sh file using some other tool?
Most probably the '(' character in '(x86)' - together with the unquoted spaces in the path - is causing the problem. When bash executes the script, it treats the parentheses and spaces as shell syntax instead of as part of the path. To fix this, tell the shell that the whole thing is one literal path by putting it inside single quotes, which disables the special treatment of characters such as brackets and spaces.
So, change it to:
JAVA_HOME='C:/Program Files (x86)/IBM/WebSphere/AppServer'
JAVA_EXE="$JAVA_HOME/bin/java"
cd /H/US_L3/MLAdminBatchLocal/original
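Because the path contains spaces, any later use of these variables also needs to be double-quoted, or the shell will split the expanded value into several words. A minimal sketch (the commands themselves are just illustrations, not part of the original script):

ls "$JAVA_HOME"        # quoted, so the spaces and parentheses stay part of one path
"$JAVA_EXE" -version   # same rule when invoking the Java launcher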
I am trying to turn the following into an executable bash script
#!/bin/bash
cd ~/mlpractical
source activate mlp
jupyter notebook
After creating a .rtf file with the above, I then execute the following from the correct directory:
chmod u+x filename
but every time I then try to open the file, I get output telling me that on line 1 the command is not found, on line 2 there is a syntax error, and so on.
How do I make the script executable (double-clickable) and resolve this error?
I'm not sure about making the script double-clickable (that depends on your OS, and you didn't mention what OS you're using). But it sounds like the script file is in RTF format, and that will certainly cause trouble. Shell scripts must be in absolutely plain Unix-style text files.
They can't have any formatting info, as in RTF, DOC, DOCX, etc. files. At best, the shell will try to interpret the formatting info as shell commands, and get lots of errors.
They must have Unix-style line endings. If you use a text editor that saves in DOS/Windows format, you'll have trouble.
They must use a plain enough character encoding that the shell can get away with treating it as plain ASCII. That means no UTF-16. UTF-8 is ok, but don't use fancy characters like curly quotes (“ ” and ‘ ’) -- stick to plain ASCII quotes (" " and ' '). And no byte order marks!
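A quick way to check what kind of file you actually saved is the file utility (available on most Unix-like systems; the exact wording of its output varies by platform):

file filename    # e.g. "Rich Text Format data" vs. "ASCII text" or "Bourne-Again shell script"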
Thanks Gordon, you are right: the errors all made clear that text formatting strings were being passed to the shell instead of the plain text strings I intended.
I am using a macOS environment and was creating the files in TextEdit, which offered no .txt option, only .rtf.
I solved the issue by:
1) using the command line itself to echo text to a file without any extension, e.g.
echo 'text' > filename
2) I then couldn't figure out the newline syntax, so I had to keep appending text (see the here-document sketch after this list), e.g.
echo 'more text' >> filename
3) I then did the following, from the relevant directory, to make it executable:
chmod u+x filename
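For what it's worth, a here-document lets you write the whole multi-line script in one go instead of appending line by line. A minimal sketch reusing the script and filename from the question:

cat > filename <<'EOF'
#!/bin/bash
cd ~/mlpractical
source activate mlp
jupyter notebook
EOF
chmod u+x filename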
I am getting confused about Linux shells. It may be that I am overlooking something obvious as a Linux noob.
All I want is the following script to run:
#!/bin/bash
echo "Type some Text:"
read var
echo "You entered: "$var
Here is the situation:
Installed Ubuntu Server 14.04 in a VirtualBox on windows
Installed it with these packages
A Samba share mounted on /media/buff
The script is on /media/buff/ShellScript/test.sh
Made it executable with "sudo chmod a+x /media/buff/ShellScript/test.sh"
The rest is default
I am using PSPad on Windows to edit the script file
I read about the dash but I'm not getting it.
Here are some variations:
Using sh to launch
user@Ubuntu:/media/buff/ShellScript$ sh test.sh
Type some Text:
:bad variable nameread var
You entered:
Using bash to launch:
user@Ubuntu:/media/buff/ShellScript$ bash test.sh
Type some Text:
': Ist kein gültiger Bezeichner.var (German for "not a valid identifier")
You entered:
Changed the Shebang in the script to "#!/bin/sh", Using sh to launch
user@Ubuntu:/media/buff/ShellScript$ sh test.sh
Type some Text:
:bad variable nameread var
You entered:
I have searched across the Internet for hours now and I assume that the script itself is OK, but that some environment settings are missing.
I used "sudo dpkg-reconfigure dash" to set dash as default shell (which I think is ubuntu default anyway)
sadface panda :)
There are most probably carriage returns (CR, '\r') at the ends of the lines in your shell script, so the shell is trying to read into a variable named var\r instead of var. This is why your error message looks garbled. A comparable error would normally look like this:
run.sh: 3: export: [... the variable name]: bad variable name
In your case bash throws an error because var\r is illegal for a variable name due to the carriage return, so it prints
test.sh: 3: read: var\r: bad variable name
but the \r jumps to the beginning of the line, overwriting the start of the error message.
Remove the carriage returns from the ends of the lines, for example by using the dos2unix utility.
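A minimal sketch of diagnosing and fixing this from the Ubuntu command line (file, dos2unix and GNU sed are assumed to be available or installable):

file test.sh                 # reports "... with CRLF line terminators" if carriage returns are present
dos2unix test.sh             # convert the file in place, if dos2unix is installed
sed -i 's/\r$//' test.sh     # alternative using GNU sed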
Here is a list of editors that support the Unix newline character.
Brackets has an extension for newline/end-of-line support, and the feature is built into Notepad++: go to the 'Edit' menu, find 'EOL Conversion' and select Unix (LF).
That should get it done.
I've got a text file composed of different commands:
set +e; false; echo $?
(false; echo foo); echo \ $?
for i in .; do (false; echo foo); echo \ 1:$?; done; echo \ 2:$?
and so on.
I want to test these commands line-by-line, as a whole, in a bash child process using set -e, then check the error code from the parent bash process to see if the child exited with an error. I'm stuck on how to process the simpler commands (e.g. the first one listed above). Here's one of my recent attempts:
mapfile file < "$dir"/errorData
bash -c "${file[0]}"
echo $?
returns
bash: set +e: command not found
bash: false: command not found
bash: echo 127: command not found
I've tried several variants without luck. Using bash -c \'"${file[1]}"\' leads to one error message about command not found, with the above three strings now combined into one long string. Of course, simply wrapping the file commands in strong quotes and feeding them directly to bash in an interactive shell works fine.
Edit: I'm using bash 4.4.12(1)
For anyone who finds that the shell is parsing their text in ways that seem to violate the reference manual, make sure to check any copied input data for invisible or look-alike characters. E.g. I found a non-breaking space in place of what looked like a simple space in the input file with the commands to be executed. This is especially common when copying from web pages into an editor or onto the command line. bash tokenizes on ordinary spaces, tabs, and newlines, not on the non-breaking space (U+00A0 in the Unicode standard, whereas the ordinary space is U+0020). A hex editor, e.g. Hex Fiend, can help troubleshoot similar issues.
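One way to spot such characters without a hex editor (a sketch; errorData is the file name from the question, and 0xC2 0xA0 is the UTF-8 encoding of the no-break space):

cat -v "$dir"/errorData                        # no-break spaces show up as M-BM-
LC_ALL=C grep -n $'\xc2\xa0' "$dir"/errorData  # list the lines that contain them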
I am new to bash scripting and I'm just trying out new things and getting to grips with it.
Basically I am writing a small script to store the content of a file in a variable and then use that variable in an if statement.
Step by step, I have figured out how to store variables and how to store the content of files in variables. I am now working on if statements.
My test if statement is very, VERY basic. I just wanted to grasp the syntax before moving on to more complicated if statements for my program.
My if statement is:
if [ "test" = "test" ]
then
echo "This is the same"
fi
Simple, right? However, when I run the script I am getting the error:
syntax error near unexpected token `fi'
I have tried a number of things from this site as well as others, but I am still getting this error and I am unsure what is wrong. Could it be an issue on my computer stopping the script from running?
Edit with the full code. Note that I also deleted all the commented-out code and just used the if statement; I am still getting the same error.
#!/bin/bash
#this stores a simple variable with the content of the file testy1.txt
#DATA=$(<testy1.txt)
#This echos out the stored variable
#echo $DATA
#simple if statement
if [ "test" = "test" ]
then
echo "has value"
fi
If a script looks ok (you've balanced all your quotes and parentheses, as well as backticks), but issues an error message, this may be caused by funny characters, i.e. characters that are not displayed, such as carriage returns, vertical tabs and others. To diagnose these, examine your script with
od -c script.sh
and look for \r, \v or other unexpected characters. You can get rid of \r for example with the dos2unix script.sh command.
BTW, what operating system and editor are you using?
To complement Jens's helpful answer, which explains the symptoms well and offers a utility-based solution (dos2unix): sometimes installing a third-party utility is undesired, so here's a solution based on the standard utility tr:
tr -d '\r' < script > script.tmp && mv script.tmp script
This removes all \r (CR) characters from the input, saves the output to a temporary file, and then replaces the original file.
While this blindly removes \r instances even if they're not part of \r\n (CRLF) pairs, it's usually safe to assume that \r instances indeed only occur as part of such pairs.
Solutions with other standard utilities (awk, sed) are possible too - see this answer of mine.
If your sed implementation offers the -i option for in-place updating, it may be the simpler choice.
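For example (a sketch assuming GNU sed, which understands the \r escape; BSD/macOS sed would need -i '' and a literal carriage return instead):

sed -i 's/\r$//' script    # strip a trailing CR from every line, updating the file in place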
To diagnose the problem I suggest using cat -v script, as its output is easy to parse visually: if you see ^M (which represents \r) at the end of the output lines, you know you're dealing with a file with Windows line endings.
Why Your Script Failed So Obscurely
Normally, a shell script that mistakenly has Windows-style CRLF line endings (\r\n) rather than the required Unix-style LF-only endings (\n), and that starts with the shebang line #!/bin/bash, fails in a manner that does indicate the cause of the problem:
/bin/bash^M: bad interpreter
as a quick SO search can attest. The ^M indicates that the CR was considered part of the interpreter path, which obviously fails.
(If, by contrast, the script's shebang line is env-based, such as #!/usr/bin/env bash, the error message differs, but still points to the cause: env: bash\r: No such file or directory)
The reason you did not see this problem is that you're running in the Windows Unix-emulation environment Cygwin, which - unlike on Unix - allows a shebang line to end in CRLF (presumably to also support invoking other interpreters on Windows that do expect CRLF endings).
The CRLF problem therefore didn't surface until later in your script, and the fact that you had no empty lines after the shebang line further obfuscated the problem:
An empty CRLF-terminated line would cause Bash (4.x) to complain as follows: "bash: line <n>: $'\r': command not found", because Bash tries to execute the CR as a command (since it doesn't recognize it as part of the line ending).
The comment lines directly following the shebang line are unproblematic, because a comment line ending in a CR is still syntactically valid.
Finally, it was the if statement that broke the script, in an obscure manner:
If your file were to end with a line break, as is usually the case, you would have gotten syntax error: unexpected end of file:
The line-ending then and fi tokens are seen as then\r and fi\r (i.e., the CR is appended) by Bash, and are therefore not recognized as keywords. Bash therefore never sees the end of the if compound command, and complains about encountering the end of the file before seeing the if statement completed.
Since your file did not end with a line break, you got syntax error near unexpected token 'fi':
The final fi, due to not being followed by a CR, is recognized as a keyword by Bash, whereas the preceding then wasn't (as explained). In this case, Bash therefore saw keyword fi before ever seeing then, and complained about this out-of-place occurrence of fi.
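You can reproduce this behavior with a tiny hypothetical test file (printf is used so that the CRLF endings and the missing final newline are explicit):

printf 'if [ test = test ]\r\nthen\r\necho hi\r\nfi' > crlf-demo.sh
bash crlf-demo.sh    # -> syntax error near unexpected token `fi'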
Optional Background Information
Shell scripts that look OK but break due to characters that are either invisible or only look the same as the required characters are a common problem that usually has one of the following causes:
Problem A: The file has Windows-style CRLF (\r\n) line endings rather than Unix-style LF-only (\n) line endings - which is the case here.
Copying a file from a Windows machine or using an editor that saves files with CRLF sequences are among the possible causes.
Problem B: The file has non-ASCII Unicode whitespace or punctuation that looks like its regular ASCII counterpart, but is technically distinct.
A common cause is source code copied from websites that use non-ASCII whitespace and punctuation for formatting code for display purposes;
an example is use of the no-break space Unicode character (U+00A0; UTF-8 encoding 0xc2 0xa0), which is visually indistinguishable from a normal (ASCII) space (U+0020).
Diagnosing the Problem
The following cat command visualizes:
all normally invisible ASCII control characters, such as \r as ^M.
all non-ASCII characters (assuming the now prevalent UTF-8 encoding), such as the non-break space Unicode char. as M-BM- .
^M is an example of caret notation; the representation of multi-byte characters, such as M-BM- , is less obvious, but it's usually not necessary to know exactly what a given sequence stands for - you just need to note whether any ^M sequences appear at the ends of lines (problem A), or whether any M- sequences appear in unexpected places (problem B).
The last point is important: non-ASCII characters can be a legitimate part of source code, such as in string literals and comments. They're only a problem if they're used in place of ASCII punctuation.
LC_ALL=C cat -v script
Note: If you're using GNU utilities, the LC_ALL=C prefix is optional.
Solutions to Problem A: translating line endings from CRLF to LF-only
For solutions based on standard or usually-available-by-default utilities (tr, awk, sed, perl), see this answer of mine.
A more robust and convenient option is the widely used dos2unix utility, if it is already installed (typically, it is not) or if installing it is an option.
How you install it depends on your platform; e.g.:
on Ubuntu: sudo apt-get install dos2unix
on macOS, with Homebrew installed: brew install dos2unix
dos2unix script would convert the line endings to LF and update file script in place.
Note that dos2unix also offers additional features, such as changing the character encoding of a file.
Solutions to Problem B: translating Unicode punctuation to ASCII punctuation
Note: By punctuation I mean both whitespace and characters such as -
The challenge in this case is that only Unicode punctuation should be targeted, whereas other non-ASCII characters should be left alone; thus, use of character-transcoding utilities such as iconv is not an option.
nws is a utility (that I wrote) that offers a Unicode-punctuation-to-ASCII-punctuation translation mode while leaving non-punctuation Unicode chars. alone; e.g.:
nws -i --ascii script # translate Unicode punct. to ASCII, update file 'script' in place
Installation:
If you have Node.js installed, install it by simply running [sudo] npm install -g nws-cli, which will place nws in your path.
Otherwise: See the manual installation instructions.
nws has several other functions focused on whitespace handling, including CRLF-to-LF and vice-versa translations (--lf, --crlf).
syntax error near unexpected token `fi'
means that an if statement was not opened and closed correctly; you need to check from the beginning that every if, for, or while statement is opened and closed correctly.
Don't forget to add this at the beginning of the script:
#!/bin/bash
Here, "$test" should be a variable that stores the content of that file.
if [ "$test" = "test" ]; then
echo "This is the same"
fi
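Tying this back to the commented-out code in the question, a minimal sketch of the original goal, comparing a file's content against a fixed string (testy1.txt comes from the question; the expected value is just a placeholder):

#!/bin/bash
DATA=$(<testy1.txt)               # store the file's content in a variable (bash-specific)
if [ "$DATA" = "expected text" ]; then
    echo "This is the same"
fi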