": command not found" error when running a shell script - macOS

I'm writing a shell script for Mac.
Here's my script:
echo "Bienvenido";
/Applications/sdk/platform-tools/adb devices;
sudo /Applications/sdk/platform-tools/adb shell input text 'sp.soporte#gmail.com';
It performs the correct operation, but here is the output:
$ /Users/julien/Desktop/dominio.sh
Bienvenido
: command not foundop/dominio.sh: line 1:
List of devices attached
4790057be1803096 device
: command not foundop/dominio.sh: line 2:
: command not foundop/dominio.sh: line 3:
julien$
If I erase the ; it doesn't work any more. What should I do?

I think you have Windows-style line endings in your script.
Unix-like systems, including macOS, use a single LF character to terminate a line; Windows uses a CR-LF pair.
A Windows-style text file looks, on a Unix-like system, like ordinary text with an extra CR character at the end of each line.
Since you have a semicolon at the end of each line, this line:
echo "Bienvenido";
appears to the shell as two commands: echo "Bienvenido" and the CR character (which could actually be a command name if it existed). Note that the echo command was executed.
The shell prints an error message, something like:
/path/to/script: 1: CR: command not found
except that it prints the actual CR (carriage return) character, which moves the cursor to the beginning of the current line, overwriting part of the error message.
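To make the hidden character visible, you can run the script and pass both its output and error messages through cat -v, which renders a carriage return as ^M (a quick diagnostic, using the path from the question):
/Users/julien/Desktop/dominio.sh 2>&1 | cat -v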
Translate your script to use Unix-style line endings. You can use dos2unix for this if you have it. (Read the man page; unlike most filter programs, it overwrites its input file by default.)
Incidentally, you don't need a semicolon on the end of each line of a shell script. Semicolons are needed only when you have multiple commands on one line.
Also, you should probably have a "shebang" as the first line of your script, either #!/bin/sh or #!/bin/bash (use the latter if your script uses bash-specific features).
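Putting those points together, a cleaned-up version of the script (after converting the file itself to LF line endings) might look like this; the commands are taken verbatim from the question:
#!/bin/bash
echo "Bienvenido"
/Applications/sdk/platform-tools/adb devices
sudo /Applications/sdk/platform-tools/adb shell input text 'sp.soporte#gmail.com'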

Related

unable to solve "syntax error near unexpected token `fi'" - hidden control characters (CR) / Unicode whitespace

I am new to bash scripting and I'm just trying out new things and getting to grips with it.
Basically I am writing a small script to store the content of a file in a variable and then use that variable in an if statement.
Step by step, I have figured out how to store variables and how to store the content of files in variables. I am now working on if statements.
My test if statement is very VERY basic. I just wanted to grasp the syntax before moving on to more complicated if statements for my program.
My if statement is:
if [ "test" = "test" ]
then
echo "This is the same"
fi
Simple, right? However, when I run the script I am getting the error:
syntax error near unexpected token `fi'
I have tried a number of things from this site as well as others, but I am still getting this error and I am unsure what is wrong. Could it be an issue on my computer stopping the script from running?
Edit for full code. Note I also deleted all the commented-out code and just used the if statement; still getting the same error.
#!/bin/bash
#this stores a simple variable with the content of the file testy1.txt
#DATA=$(<testy1.txt)
#This echos out the stored variable
#echo $DATA
#simple if statement
if [ "test" = "test" ]
then
echo "has value"
fi
If a script looks ok (you've balanced all your quotes and parentheses, as well as backticks), but issues an error message, this may be caused by funny characters, i.e. characters that are not displayed, such as carriage returns, vertical tabs and others. To diagnose these, examine your script with
od -c script.sh
and look for \r, \v or other unexpected characters. You can get rid of \r for example with the dos2unix script.sh command.
BTW, what operating system and editor are you using?
To complement Jens's helpful answer, which explains the symptoms well and offers a utility-based solution (dos2unix): sometimes installing a third-party utility is undesired, so here's a solution based on the standard utility tr:
tr -d '\r' < script > script.tmp && mv script.tmp script
This removes all \r (CR) characters from the input, saves the output to a temporary file, and then replaces the original file.
While this blindly removes \r instances even if they're not part of \r\n (CRLF) pairs, it's usually safe to assume that \r instances indeed only occur as part of such pairs.
Solutions with other standard utilities (awk, sed) are possible too - see this answer of mine.
If your sed implementation offers the -i option for in-place updating, it may be the simpler choice.
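For example (a sketch; note that GNU sed and the BSD sed shipped with macOS spell the in-place option differently, and BSD sed needs a literal CR rather than the \r escape):
sed -i 's/\r$//' script          # GNU sed (Linux): strip a trailing CR from each line, in place
sed -i '' $'s/\r$//' script      # BSD sed (macOS): -i takes a (possibly empty) backup suffix; $'...' supplies a real CR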
To diagnose the problem I suggest using cat -v script, as its output is easy to parse visually: if you see ^M (which represents \r) at the end of the output lines, you know you're dealing with a file with Windows line endings.
Why Your Script Failed So Obscurely
Normally, a shell script that mistakenly has Windows-style CRLF line endings, \r\n, (rather than the required Unix-style LF-only endings, \n) and starts with shebang line #!/bin/bash fails in a manner that does indicate the cause of the problem:
/bin/bash^M: bad interpreter
as a quick SO search can attest. The ^M indicates that the CR was considered part of the interpreter path, which obviously fails.
(If, by contrast, the script's shebang line is env-based, such as #!/usr/bin/env bash, the error message differs, but still points to the cause: env: bash\r: No such file or directory)
The reason you did not see this problem is that you're running in the Windows Unix-emulation environment Cygwin, which - unlike on Unix - allows a shebang line to end in CRLF (presumably to also support invoking other interpreters on Windows that do expect CRLF endings).
The CRLF problem therefore didn't surface until later in your script, and the fact that you had no empty lines after the shebang line further obfuscated the problem:
An empty CRLF-terminated line would cause Bash (4.x) to complain as follows: bash: line <n>: $'\r': command not found, because Bash tries to execute the CR as a command (since it doesn't recognize it as part of the line ending).
The comment lines directly following the shebang line are unproblematic, because a comment line ending in CR is still syntactically valid.
Finally, the if statement broke the command, in an obscure manner:
If your file were to end with a line break, as is usually the case, you would have gotten syntax error: unexpected end of file:
The line-ending then and if tokens are seen as then\r and if\r (i.e., the CR is appended) by Bash, and are therefore not recognized as keywords. Bash therefore never sees the end of the if compound command, and complains about encountering the end of the file before seeing the if statement completed.
Since your file did not, you got syntax error near unexpected token 'fi':
The final fi, due to not being followed by a CR, is recognized as a keyword by Bash, whereas the preceding then wasn't (as explained). In this case, Bash therefore saw keyword fi before ever seeing then, and complained about this out-of-place occurrence of fi.
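If you want to reproduce both failure modes directly, something along these lines should do it (a sketch; the file names are made up, and running the files with an explicit bash command sidesteps the shebang issue discussed above):
printf 'if true\r\nthen\r\necho hi\r\nfi\r\n' > with-newline.sh   # CRLF lines, file ends with a line break
bash with-newline.sh    # should report: syntax error: unexpected end of file
printf 'if true\r\nthen\r\necho hi\r\nfi' > no-newline.sh         # CRLF lines, no final line break
bash no-newline.sh      # should report: syntax error near unexpected token `fi'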
Optional Background Information
Shell scripts that look OK but break due to characters that are either invisible or only look the same as the required characters are a common problem that usually has one of the following causes:
Problem A: The file has Windows-style CRLF (\r\n) line endings rather than Unix-style LF-only (\n) line endings - which is the case here.
Copying a file from a Windows machine or using an editor that saves files with CRLF sequences are among the possible causes.
Problem B: The file has non-ASCII Unicode whitespace and punctuation that looks like regular whitespace, but is technically distinct.
A common cause is source code copied from websites that use non-ASCII whitespace and punctuation for formatting code for display purposes;
an example is use of the no-break space Unicode character (U+00A0; UTF-8 encoding 0xc2 0xa0), which is visually indistinguishable from a normal (ASCII) space (U+0020).
Diagnosing the Problem
The following cat command visualizes:
all normally invisible ASCII control characters, such as \r as ^M.
all non-ASCII characters (assuming the now prevalent UTF-8 encoding), such as the non-break space Unicode char. as M-BM- .
^M is an example of caret notation, which is not obvious, especially with multi-byte characters; but, beyond ^M, it's usually not necessary to know exactly what the notation stands for - you just need to note whether such sequences are present at all (^M at the end of lines indicates problem A) or are present in unexpected places (problem B).
The last point is important: non-ASCII characters can be a legitimate part of source code, such as in string literals and comments. They're only a problem if they're used in place of ASCII punctuation.
LC_ALL=C cat -v script
Note: If you're using GNU utilities, the LC_ALL=C prefix is optional.
Solutions to Problem A: translating line endings from CRLF to LF-only
For solutions based on standard or usually-available-by-default utilities (tr, awk, sed, perl), see this answer of mine.
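For instance, an awk-based variant in the same spirit as the tr command shown earlier (a sketch, not necessarily identical to the linked answer; most awk implementations accept \r in a regex):
awk '{ sub(/\r$/, "") } 1' script > script.tmp && mv script.tmp script    # strip a trailing CR from every line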
A more robust and convenient option is the widely used dos2unix utility, if it is already installed (typically, it is not) or if installing it is an option.
How you install it depends on your platform; e.g.:
on Ubuntu: sudo apt-get install dos2unix
on macOS, with Homebrew installed: brew install dos2unix
dos2unix script would convert the line endings to LF and update file script in place.
Note that dos2unix also offers additional features, such as changing the character encoding of a file.
Solutions to Problem B: translating Unicode punctuation to ASCII punctuation
Note: By punctuation I mean both whitespace and characters such as -
The challenge in this case is that only Unicode punctuation should be targeted, whereas other non-ASCII characters should be left alone; thus, use of character-transcoding utilities such as iconv is not an option.
nws is a utility (that I wrote) that offers a Unicode-punctuation-to-ASCII-punctuation translation mode while leaving non-punctuation Unicode chars. alone; e.g.:
nws -i --ascii script # translate Unicode punct. to ASCII, update file 'script' in place
Installation:
If you have Node.js installed, install it by simply running [sudo] npm install -g nws-cli, which will place nws in your path.
Otherwise: See the manual installation instructions.
nws has several other functions focused on whitespace handling, including CRLF-to-LF and vice-versa translations (--lf, --crlf).
syntax error near unexpected token `fi'
means that the if statement is not opened and closed correctly; you need to check from the beginning that every if, for or while statement was opened and closed correctly.
Don't forget to add in the beginning of script :
#!/bin/bash
Here, "$test" should be a variable that stores the content of that file.
if [ "$test" = "test" ]; then
echo "This is the same"
fi
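Combining that with the commented-out lines from the question, the variable-based version might look like this (a sketch; testy1.txt comes from the question, and the comparison string is purely illustrative):
#!/bin/bash
# store the content of testy1.txt in a variable
DATA=$(<testy1.txt)
# compare it against a fixed string
if [ "$DATA" = "test" ]; then
    echo "This is the same"
fi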

If-then-else syntax in tcsh

I'm trying to write a simple script in tcsh (version 6.12.00 (Astron) 2002-07-23), but I am getting tripped up by the if-then-else syntax. I am very new to script writing.
This script works:
#!/bin/tcsh -f
if (1) echo "I disagree"
However, this one does not:
#!/bin/tcsh -f
if ( 1 ) then
echo "I disagree"
else
echo "I agree"
endif
For one thing, this code, when run, echoes both statements. It seems to me it should never see the else. For another, the output also intersperses those echoes with three iterations of ": Command not found."
Edited to add: here is the verbatim output:
: Command not found.
I disagree
: Command not found.
I agree
: Command not found.
I know that the standard advice is to use another shell instead, but I am not really in a position to do that (new job, new colleagues, everyone else uses tcsh, want my scripts to be portable).
When I copy-and-paste your script and run it on my system, it correctly prints I disagree.
When I change the line endings to Windows-style, I get:
: Command not found.
I disagree
: Command not found.
I agree
: Command not found.
So, your script very likely has Windows-style line endings. Fix the line endings, and it should work. The dos2unix command is one way to do that (man dos2unix first; unlike most UNIX text-processing commands, it replaces its input file.)
The problem is that tcsh doesn't recognize ^M ('\r') as an end-of-line character. It sees the then^M at the end of the line as a single command, and prints an error message then^M: Command not found. The ^M causes the cursor to return to the beginning of the line, and the rest of the message overwrites the then.
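If dos2unix isn't available, the diagnosis and fix can be done with standard tools (a sketch; the script name is a placeholder):
cat -v myscript.csh                                               # Windows endings show up as ^M at the end of each line
tr -d '\r' < myscript.csh > tmp.csh && mv tmp.csh myscript.csh    # strip the CRs and replace the original file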

Lines ending with ^M in my config files on Ubuntu?

In my bash script, I am trying to change a configuration line in one of my configuration files.
Here is the bash script I used:
#!/bin/bash
jdbcURL(){
ssh ppuser#10.101.5.84 "sed -i \"s|\(\"jdbc.url\" *= *\).*|\1$2|\" $1"
}
jdbcURL $4 $5
After running this script, the configuration file is changed, but the problem is that every line in the configuration file now ends with ^M. Is anything wrong in my bash script? I hope someone can help me. Thank you.
The ^M character is a carriage return - an extra character that Windows puts before each line feed. It is usually rendered as \r; ^M is another visual representation.
You can strip them with the dos2unix utility:
$ dos2unix myfile
For reference, *nix operating systems (including OS X) use \n to delimit lines; Windows uses \r\n. Mac operating systems up to Mac OS 9 used \r alone.
You're encountering an issue with different line termination between the Unixoid and Windoid worlds. Where Unix and consorts use a single 0x0a (line feed) character, Microsoft's world prefers 0x0d 0x0a (carriage return, line feed). So if a file whose lines end with both carriage return AND line feed is looked at on a Unixoid system, it interprets the line feed as the line terminator and leaves the carriage return as part of the line. This is what you see as ^M.
Conversion utilities to convert line terminators between these different conventions exist, but you ought to be able to let your sed expression take care of it.
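For example, the sed expression could strip a trailing CR from every line while making its substitution (a sketch run against the file on the remote host; GNU sed understands \r, and the file name and NEW_VALUE are placeholders for the $1 and $2 arguments in the question's script):
sed -i -e 's/\r$//' -e 's|\("jdbc.url" *= *\).*|\1NEW_VALUE|' /path/to/config.properties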
As a side note, Apple used to use another representation of line end, namely a single carriage return. I don't know whether they still do.

Why is this error happening in bash?

Using Cygwin, I have
currentFold="`dirname $0`"
echo ${currentFold}...
This outputs ...gdrive/c/ instead of /cygdrive/c/...
Why is that?
Your script is stored in DOS format, with a carriage return followed by linefeed (sometimes written "\r\n") at the end of each line; unix uses just a linefeed ("\n") at the end of lines, and so bash is mistaking the carriage return for part of the command. When it sees
currentFold="`dirname $0`"\r
it dutifully sets currentFold to "/cygdrive/c/\r", and when it sees
echo ${currentFold}...\r
it prints "/cygdrive/c/\r...\r". The final carriage return doesn't really matter, but the one in the middle means that the "..." gets printed on top of the "/cy", and you wind up with "...gdrive/c/".
Solution: convert the script to unix format; I believe you'll have the dos2unix command available for this, but you might have to look around for alternatives. In a pinch, you can use
perl -pi -e 's/\r\n?/\n/g' /path/to/script
(see http://www.commandlinefu.com/commands/view/5088/convert-files-from-dos-line-endings-to-unix-line-endings). Then switch to a text editor that saves in unix format rather than DOS.
I would like to add to Gordon Davisson's answer.
I am also using Cygwin. In my case this happened because my Git for Windows was configured to Checkout Windows-style, commit Unix-style line endings.
This is the default option, and it was breaking all my cloned shell scripts.
I reran my Git setup and changed to Checkout as-is, commit Unix-style line endings, which prevented the problem from happening at all.
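For reference, those installer choices map onto Git's core.autocrlf setting, so the same change can be made from the command line (a sketch):
git config --global core.autocrlf input    # "Checkout as-is, commit Unix-style line endings"
# the default "Checkout Windows-style, commit Unix-style line endings" corresponds to core.autocrlf=true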

Bash for loop error

I am trying out a simple bash script using a for loop, and I keep getting the following error:
'/test.sh: line 2: syntax error near unexpected token `do
'/test.sh: line 2: `do
The following is the code that is being used...
for animal in dog cat elephant
do
echo "There are ${animal}s.... "
done
However, when I tried it on other machines, it worked with no problem.
Please help.
Your test.sh script has Windows-style line endings. The shell sees each \r\n sequence as a \r character at the end of the line.
The shell is seeing do\r rather than do. Since \r sends the cursor back to the beginning of the line, the closing quotation mark gets printed there, which is why you're seeing it at the start of the line. Try
./test.sh 2>&1 | cat -A
to see what's actually in the error message.
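With Windows line endings in test.sh, that would show something like the following (an illustration rather than captured output; cat -A renders the CR as ^M and marks the real end of each line with $):
./test.sh: line 2: syntax error near unexpected token `do^M'$
./test.sh: line 2: `do^M'$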
Filter your test.sh script through something like dos2unix to correct the line endings. (Be sure to read the man page first; dos2unix, unlike most text filters, overwrites its input file.)
This is a common problem on Cygwin. Did you use a Windows editor to create the script?
