remove CR line terminators - shell

Firstly, I should say that I have read this post; however, I still have problems with CR line terminators.
There is a file called build_test.sh, I edited in leafpad and it can be displayed right in Vim:
cp ~/moonbox/llvm-2.9/lib/Transforms/MY_TOOL/$1 test.cpp
cd ~/moonbox/llvm-obj/tools/TEST_TOOL/
make
make install
cd -
However:
Running cat build_test.sh outputs nothing.
Running more build_test.sh outputs: cd - install/llvm-obj/tools/TEST_TOOL/Y_TOOL/$1 test.cpp
Running less build_test.sh outputs: cp ~/moonbox/llvm-2.9/lib/Transforms/MY_TOOL/$1 test.cpp^Mcd ~/moonbox/llvm-obj/tools/TEST_TOOL/^Mmake^Mmake install^Mcd -
The result of file build_test.sh is:
build_test.sh: ASCII text, with CR line terminators
Following this post, the ^M characters no longer exist; however, there are no line breaks at all anymore :-(
The result of file build_test_nocr.sh is now:
build_test_nocr.sh: ASCII text, with no line terminators
The solution can be seen here.
However, I would still like to know why cat displays nothing and why more displays such an odd result. In addition, why do dos2unix and set fileformat=unix in Vim fail in this case?
PS: I guess that my editor (Vim or Leafpad?) generates only \r rather than \n for newlines. How can that be?

Bare \r terminators are "old Mac" line endings; it is strange that an editor in 2012+ still generates files with them... Anyway, you can use the mac2unix command, which is part of the dos2unix distribution:
# Edits thefile in place
mac2unix thefile
# Takes origfile as an input, outputs to dstfile
mac2unix -n origfile dstfile
This command will not munge files that already have the expected line terminators, which is a bonus. The reverse (unix2mac) also exists.
Note that mac2unix is the same as dos2unix -c mac.

Also, if you work with vim, you can enforce UNIX line endings by executing
:set fileformat=unix
:w
or just add
set fileformat=unix
to your .vimrc file

I finally figured out that I could use this command:
tr '^M' '\n' <build_test.sh >build_test_nocr.sh
where ^M is entered by pressing Ctrl+V and then Enter. Alternatively, this has the same effect:
tr '\r' '\n' <build_test.sh >build_test_nocr.sh
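As to why cat appeared to print nothing: a bare CR tells the terminal to move the cursor back to column 0, so each "line" is drawn on top of the previous one, and the shell prompt then redraws over whatever was left. A small demonstration, using a throwaway file name:

```shell
# Build a file with bare-CR ("old Mac") line endings
printf 'first\rsecond\r' > demo.txt

# Plain cat sends the CRs straight to the terminal, so the lines
# overwrite each other; depending on line lengths you may see only
# the last line, or nothing once the prompt redraws over it.
cat demo.txt

# cat -v makes the carriage returns visible as ^M instead:
cat -v demo.txt    # prints: first^Msecond^M
```

This also explains the odd output from more and less: each pager has its own way of handling the control character.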

Related

How to solve "bad interpreter: No such file or directory"

I'm trying to run a sh script and get the following error on Mac:
/usr/bin/perl^M: bad interpreter: No such file or directory
How can I fix this?
Remove ^M control chars with
perl -i -pe 'y|\r||d' script.pl
/usr/bin/perl^M:
Remove the ^M at the end of /usr/bin/perl in the #! line at the beginning of the script. That is a spurious ASCII 13 (carriage return) character that is confusing the shell.
Possibly you would need to inspect the file with a binary editor if you do not see the character.
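You don't strictly need a binary editor; cat -v or od -c will expose the stray carriage return. A sketch, with an illustrative file name:

```shell
# Recreate a script whose shebang line ends in CR, the way a
# Windows editor would have saved it
printf '#!/usr/bin/perl\r\nprint "hi";\r\n' > script.pl

# cat -v renders the invisible carriage return as ^M
head -n 1 script.pl | cat -v    # prints: #!/usr/bin/perl^M

# od -c shows the same byte as \r
head -n 1 script.pl | od -c
```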
You could do it like this to convert the file to Unix line-ending format:
$ vi your_script.sh
once in vi type:
:set ff=unix
:x
The problem is that you're trying to use DOS/Windows text format on Linux/Unix/OSX machine.
In DOS/Windows text files a line break, also known as newline, is a combination of two characters: a Carriage Return (CR) followed by a Line Feed (LF). In Unix text files a line break is a single character: the Line Feed (LF).
dos2unix can convert the line endings of files for you, for example:
dos2unix yourfile
For help, run: man dos2unix.
You seem to have weird line endings in your script: ^M is a carriage return \r. Transform your script to Unix line endings (just \n instead of \r\n, which is the line ending on Windows systems).
If you prefer Sublime Text, simply go to View -> Line Endings and check Unix.
An alternative approach:
sudo ln -s /usr/bin/perl /usr/local/bin/perl`echo -e '\r'`

Unix script appends ^M at end of each line

I have a Unix shell script which does the following:
creates a backup of a file
appends some text to a file
Now in #2 if I insert a text, ^M gets appended on all the lines of the file.
For example:
echo " a" >> /cust/vivek.txt
echo " b" >> /cust/vivek.txt
vi vivek.txt
abc^M
bcd^M
a^M
b^M
Any way to avoid this?
I'm not sure how echo could be producing ^M characters but you can remove them by running dos2unix on your file, like this:
dos2unix /cust/vivek.txt
Only
sed -e "s/\r//g" file
worked for me
^M is a carriage return, and is commonly seen when files are copied from Windows. Use:
od -xc filename
that should give a low-level list of what your file looks like. If your file does not come from Windows, then another possibility is that your terminal settings are not translating correctly. Check that the TERM environment variable is correct.
If the file has come from Windows, then use dos2unix or sed 's/\r//' file > file.new
I suspect this may be an artifact of your vi settings, rather than the concatenation.
What does
cat -v -e filename
show? This command will dump out your file and mark the control characters so it's clear what's really in your file. See also this Superuser question/answer set.
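For example, on a file with CRLF endings, cat -v -e marks each carriage return as ^M and each true end of line as $ (the file content here is made up for illustration):

```shell
# A two-line file with DOS (CRLF) endings
printf 'abc\r\nbcd\r\n' > vivek.txt

cat -v -e vivek.txt
# abc^M$
# bcd^M$
```

If the $ signs sit flush against the text with no ^M, the file already has plain Unix endings.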
^M is the carriage-return character that got into your file when it was edited on Windows.
the dos2unix command can fix this.
dos2unix <filename>
If you are using Windows, you can use Notepad++ and try Edit > EOL Conversion > Unix/OSX Format before uploading the file to the server.

How to convert Windows end of line in Unix end of line (CR/LF to LF)

I'm a Java developer and I'm using Ubuntu to develop. The project was created in Windows with Eclipse and it's using the Windows-1252 encoding.
To convert to UTF-8 I've used the recode program:
find Web -iname \*.java | xargs recode CP1252...UTF-8
This command gives this error:
recode: Web/src/br/cits/projeto/geral/presentation/GravacaoMessageHelper.java failed: Ambiguous output in step `CR-LF..data
I've searched about it and get the solution in Bash and Windows, Recode: Ambiguous output in step `data..CR-LF' and it says:
Convert line endings from CR/LF to a
single LF: Edit the file with Vim,
give the command :set ff=unix and save
the file. Recode now should run
without errors.
Nice, but I have many files to remove the CR/LF characters from, and I can't open each one to do it. Vim doesn't provide a command-line option for this kind of batch operation.
Can sed be used to do this? How?
There should be a program called dos2unix that will fix line endings for you. If it's not already on your Linux box, it should be available via the package manager.
sed cannot match \n because the trailing newline is removed before the line is put into the pattern space, but it can match \r, so you can convert \r\n (DOS) to \n (Unix) by removing \r:
sed -i 's/\r//g' file
Warning: this will change the original file
However, you cannot change from Unix EOL to DOS or old Mac (\r) this way. Further reading:
How can I replace a newline (\n) using sed?
Actually, Vim does allow what you're looking for. Enter Vim, and type the following commands:
:args **/*.java
:argdo set ff=unix | update | next
The first of these commands sets the argument list to every file matching **/*.java, which is all Java files, recursively. The second of these commands does the following to each file in the argument list, in turn:
Sets the line-endings to Unix style (you already know this)
Writes the file out iff it's been changed
Proceeds to the next file
I'll take a little exception to jichao's answer. You can actually do everything he talked about fairly easily. Instead of looking for a \n, just look for the carriage return at the end of the line.
sed -i 's/\r$//' "${FILE_NAME}"
To change from Unix back to DOS, simply look for the last character on the line and add a carriage return after it. (I'll add -r to make this easier with extended regular expressions.)
sed -ri 's/(.)$/\1\r/' "${FILE_NAME}"
Theoretically, the file could be changed to Mac style by adding code to the last example that also appends the next line of input to the first line until all lines have been processed. I won't try to make that example here, though.
Warning: -i changes the actual file. If you want a backup to be made, add a string of characters after -i. This will move the existing file to a file with the same name with your characters added to the end.
Update: The Unix to DOS conversion can be simplified and made more efficient by not bothering to look for the last character. This also allows us to not require using -r for it to work:
sed -i 's/$/\r/' "${FILE_NAME}"
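A round-trip sanity check of the two one-liners, using Bash's $'...' quoting so the \r is understood even by seds without the \r escape (the file name is illustrative; -i without a suffix is the GNU sed form):

```shell
# Start with DOS (CRLF) endings
printf 'one\r\ntwo\r\n' > sample.txt

# DOS -> Unix: strip the CR at the end of each line
sed -i $'s/\r$//' sample.txt
cat -v sample.txt    # one / two, no ^M

# Unix -> DOS: append a CR to every line
sed -i $'s/$/\r/' sample.txt
cat -v sample.txt    # one^M / two^M
```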
The tr command can also do this:
tr -d '\15\32' < winfile.txt > unixfile.txt
and should be available to you.
You'll need to run tr from within a script, since it cannot work with file names. For example, create a file myscript.sh:
#!/bin/bash
for f in $(find . -iname '*.java'); do
    echo "$f"
    tr -d '\15\32' < "$f" > "$f.tr"
    mv "$f.tr" "$f"
    recode CP1252...UTF-8 "$f"
done
Running myscript.sh would process all the java files in the current directory and its subdirectories.
In order to overcome
Ambiguous output in step `CR-LF..data'
the simple solution might be to add the -f flag to force the conversion.
Try the Python script by Bryan Maupin found here (I've modified it a little bit to be more generic; note that it is Python 2 code):
#!/usr/bin/env python2
import sys

input_file_name = sys.argv[1]
output_file_name = sys.argv[2]

input_file = open(input_file_name)
output_file = open(output_file_name, 'w')

line_number = 0
for input_line in input_file:
    line_number += 1
    try:  # first try to decode it using cp1252 (Windows, Western Europe)
        output_line = input_line.decode('cp1252').encode('utf8')
    except UnicodeDecodeError, error:  # if there's an error
        sys.stderr.write('ERROR (line %s):\t%s\n' % (line_number, error))  # write to stderr
        try:  # then if that fails, try to decode using latin1 (ISO 8859-1)
            output_line = input_line.decode('latin1').encode('utf8')
        except UnicodeDecodeError, error:  # if there's an error
            sys.stderr.write('ERROR (line %s):\t%s\n' % (line_number, error))  # write to stderr
            sys.exit(1)  # and give up
    output_file.write(output_line)

input_file.close()
output_file.close()
You can use that script with
$ ./cp1252_utf8.py file_cp1252.sql file_utf8.sql
Go back to Windows, tell Eclipse to change the encoding to UTF-8, then back to Unix and run d2u on the files.

How to convert DOS/Windows newline (CRLF) to Unix newline (LF)

How can I programmatically (not using vi) convert DOS/Windows newlines to Unix newlines?
The dos2unix and unix2dos commands are not available on certain systems.
How can I emulate them with commands such as sed, awk, and tr?
You can use tr to convert from DOS to Unix; however, you can only do this safely if CR appears in your file only as the first byte of a CRLF byte pair. This is usually the case. You then use:
tr -d '\015' <DOS-file >UNIX-file
Note that the name DOS-file is different from the name UNIX-file; if you try to use the same name twice, you will end up with no data in the file.
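That warning is worth seeing once: the shell truncates the output file before tr ever reads a byte, so reusing the name destroys the data. A sketch with an illustrative file name:

```shell
printf 'hello\r\n' > data.txt

# WRONG: '>' truncates data.txt to zero bytes before tr starts,
# so tr reads an empty file and the contents are lost.
tr -d '\015' < data.txt > data.txt
wc -c < data.txt    # 0

# Safe pattern: write to a temporary file, then move it into place.
printf 'hello\r\n' > data.txt
tr -d '\015' < data.txt > data.txt.tmp && mv data.txt.tmp data.txt
cat -v data.txt    # hello (no ^M)
```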
You can't do it the other way round (with standard 'tr').
If you know how to enter carriage return into a script (control-V, control-M to enter control-M), then:
sed 's/^M$//' # DOS to Unix
sed 's/$/^M/' # Unix to DOS
where the '^M' is the control-M character. You can also use the bash ANSI-C Quoting mechanism to specify the carriage return:
sed $'s/\r$//' # DOS to Unix
sed $'s/$/\r/' # Unix to DOS
However, if you're going to have to do this very often (more than once, roughly speaking), it is far more sensible to install the conversion programs (e.g. dos2unix and unix2dos, or perhaps dtou and utod) and use them.
If you need to process entire directories and subdirectories, you can use zip:
zip -r -ll zipfile.zip somedir/
unzip zipfile.zip
This will create a zip archive with line endings changed from CRLF to LF. unzip will then put the converted files back in place (and ask you file by file; you can answer Yes-to-all). Credits to @vmsnomad for pointing this out.
Use:
tr -d "\r" < file
Take a look here for examples using sed:
# In a Unix environment: convert DOS newlines (CR/LF) to Unix format.
sed 's/.$//' # Assumes that all lines end with CR/LF
sed 's/^M$//' # In Bash/tcsh, press Ctrl-V then Ctrl-M
sed 's/\x0D$//' # Works on ssed, gsed 3.02.80 or higher
# In a Unix environment: convert Unix newlines (LF) to DOS format.
sed "s/$/`echo -e \\\r`/" # Command line under ksh
sed 's/$'"/`echo \\\r`/" # Command line under bash
sed "s/$/`echo \\\r`/" # Command line under zsh
sed 's/$/\r/' # gsed 3.02.80 or higher
Use sed -i for in-place conversion, e.g., sed -i 's/..../' file.
You can use Vim programmatically with the option -c {command}:
DOS to Unix:
vim file.txt -c "set ff=unix" -c ":wq"
Unix to DOS:
vim file.txt -c "set ff=dos" -c ":wq"
"set ff=unix/dos" means change fileformat (ff) of the file to Unix/DOS end of line format.
":wq" means write the file to disk and quit the editor (which allows the command to be used in a loop).
Install dos2unix, then convert a file in-place with
dos2unix <filename>
To output converted text to a different file use
dos2unix -n <input-file> <output-file>
You can install it on Ubuntu or Debian with
sudo apt install dos2unix
or on macOS using Homebrew
brew install dos2unix
Using AWK you can do:
awk '{ sub("\r$", ""); print }' dos.txt > unix.txt
Using Perl you can do:
perl -pe 's/\r$//' < dos.txt > unix.txt
This problem can be solved with standard tools, but there are sufficiently many traps for the unwary that I recommend you install the flip command, which was written over 20 years ago by Rahul Dhesi, the author of zoo.
It does an excellent job converting file formats while, for example, avoiding the inadvertent destruction of binary files, which is a little too easy if you just race around altering every CRLF you see...
If you don't have access to dos2unix, but can read this page, then you can copy/paste dos2unix.py from here.
#!/usr/bin/env python
"""\
convert dos linefeeds (crlf) to unix (lf)
usage: dos2unix.py <input> <output>
"""
import sys

if len(sys.argv[1:]) != 2:
    sys.exit(__doc__)

content = b''
outsize = 0
with open(sys.argv[1], 'rb') as infile:
    content = infile.read()
with open(sys.argv[2], 'wb') as output:
    for line in content.splitlines():
        outsize += len(line) + 1
        output.write(line + b'\n')

print("Done. Saved %s bytes." % (len(content) - outsize))
(Cross-posted from Super User.)
The solutions posted so far only deal with part of the problem, converting DOS/Windows' CRLF into Unix's LF; the part they're missing is that DOS uses CRLF as a line separator, while Unix uses LF as a line terminator. The difference is that a DOS file (usually) won't have anything after the last line in the file, while a Unix file will. To do the conversion properly, you need to add that final LF (unless the file is zero-length, i.e., has no lines in it at all). My favorite incantation for this (with a little added logic to handle Mac-style CR-separated files, and not molest files that are already in Unix format) is a bit of Perl:
perl -pe 'if ( s/\r\n?/\n/g ) { $f=1 }; if ( $f || ! $m ) { s/([^\n])\z/$1\n/ }; $m=1' PCfile.txt
Note that this sends the Unixified version of the file to stdout. If you want to replace the file with a Unixified version, add perl's -i flag.
It is super duper easy with PCRE;
Use it as a script, passing your files as arguments (or replace "$@" with explicit file names).
#!/usr/bin/env bash
perl -pi -e 's/\r\n/\n/g' -- "$@"
This will overwrite your files in place!
I recommend only doing this with a backup (version control or otherwise)
An even simpler AWK solution without a program:
awk -v ORS='\r\n' '1' unix.txt > dos.txt
Technically, '1' is your program: AWK requires one, and a lone '1' is a condition that is always true, so every record is printed with the new ORS.
Alternatively, an internal solution is:
while IFS= read -r line;
do printf '%s\n' "${line%$'\r'}";
done < dos.txt > unix.txt
Interestingly, in my Git Bash on Windows, sed "" did the trick already:
$ echo -e "abc\r" >tst.txt
$ file tst.txt
tst.txt: ASCII text, with CRLF line terminators
$ sed -i "" tst.txt
$ file tst.txt
tst.txt: ASCII text
My guess is that sed ignores them when reading lines from the input and always writes Unix line endings to the output.
For Mac OS X if you have Homebrew installed (http://brew.sh/):
brew install dos2unix
for csv in *.csv; do dos2unix -c mac "$csv"; done
Make sure you have made copies of the files, as this command will modify the files in place.
The -c mac option makes the switch to be compatible with OS X.
I just had to ponder that same question (on the Windows side, but equally applicable to Linux).
Surprisingly, nobody mentioned a very much automated way of doing CRLF <-> LF conversion for text-files using the good old zip -ll option (Info-ZIP):
zip -ll textfiles-lf.zip files-with-crlf-eol.*
unzip textfiles-lf.zip
NOTE: this would create a ZIP file preserving the original file names, but converting the line endings to LF. Then unzip would extract the files as zip'ed, that is, with their original names (but with LF-endings), thus prompting to overwrite the local original files if any.
The relevant excerpt from the zip --help:
zip --help
...
-l convert LF to CR LF (-ll CR LF to LF)
sed -i.bak -e $'s/\r$//' <file_path>
Since the question mentions sed, this is the most straightforward way to use sed to achieve this. Note that sed never sees the \n itself (the trailing newline is stripped before each line enters the pattern space), so the expression matches and deletes the carriage return at the end of every line, which is what you need when you go from Windows to Unix.
Just complementing Jonathan Leffler's excellent answer: if you have a file with mixed line endings (LF and CRLF) and you need to normalize to CRLF (DOS), use the following commands in sequence...
# DOS to Unix
sed -i $'s/\r$//' "<YOUR_FILE>"
# Unix to DOS (normalized)
sed -i $'s/$/\r/' "<YOUR_FILE>"
NOTE: If you have a file with mixed line endings (LF and CRLF), the second command above alone will cause a mess.
If you need to convert to LF (Unix) the first command alone will be enough...
# DOS to Unix
sed -i $'s/\r$//' "<YOUR_FILE>"
Thanks! 🤗
[Ref(s).: https://stackoverflow.com/a/3777853/3223785 ]
TIMTOWTDI!
perl -pe 's/\r\n/\n/; s/([^\n])\z/$1\n/ if eof' PCfile.txt
Based on Gordon Davisson's answer.
One must consider the possibility of [noeol]...
You can use AWK. Set the record separator (RS) to a regular expression that matches all possible newline character, or characters. And set the output record separator (ORS) to the Unix-style newline character.
awk 'BEGIN{RS="\r|\n|\r\n|\n\r";ORS="\n"}{print}' windows_or_macos.txt > unix.txt
This worked for me
tr "\r" "\n" < sampledata.csv > sampledata2.csv
On Linux, it's easy to convert ^M (Ctrl + M) to *nix newlines (^J) with sed.
It will look something like this on the command line; there will actually be a line break in the middle of the command, and the backslash passes that newline (^J) along to sed:
sed 's/^M/\
/g' < ffmpeg.log > new.log
You get this by using ^V (Ctrl + V), ^M (Ctrl + M) and \ (backslash) as you type:
sed 's/^V^M/\^V^J/g' < ffmpeg.log > new.log
As an extension to Jonathan Leffler's Unix to DOS solution, to safely convert to DOS when you're unsure of the file's current line endings:
sed '/^M$/! s/$/^M/'
This checks that the line does not already end in CRLF before converting to CRLF.
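To see the guard in action on a file with mixed endings, here is a sketch using the $'...' form so the CR doesn't have to be typed literally (the file name is illustrative):

```shell
# One LF-only line and one CRLF line
printf 'unix line\ndos line\r\n' > mixed.txt

# Append CR only to lines that don't already end with one
sed -i $'/\r$/! s/$/\r/' mixed.txt

cat -v mixed.txt
# unix line^M
# dos line^M
```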
I made a script based on the accepted answer, so you can convert a file directly without being left with an extra file that you then have to remove and rename afterwards.
convert-crlf-to-lf() {
file="$1"
tr -d '\015' <"$file" >"$file"2
rm -rf "$file"
mv "$file"2 "$file"
}
Just make sure that if you have a file like "file1.txt", a file named "file1.txt2" doesn't already exist, or it will be overwritten; the script uses it as a temporary place to store the converted file.
With Bash 4.2 and newer you can use something like this to strip the trailing CR, which only uses Bash built-ins:
if [[ "${str: -1}" == $'\r' ]]; then
str="${str:: -1}"
fi
I tried
sed 's/^M$//' file.txt
on OS X as well as several other methods (Fixing Dos Line Endings or http://hintsforums.macworld.com/archive/index.php/t-125.html). None worked, and the file remained unchanged (by the way, Ctrl + V, Enter was needed to reproduce ^M). In the end I used TextWrangler. It's not strictly command line, but it works and it doesn't complain.

Setting grep end of line character

I have a fairly simple bash script that does some grep to find all the text in a file that does not match a pattern.
grep -v "$1" original.txt >trimmed.txt
The input file ends each line with the Windows line end characters, i.e., with a carriage return and a line feed CR LF.
The output of this command (run in Cygwin) ends each line with an extra carriage return, i.e., CR CR LF.
How do I tell grep to just use CR LF?
I think you can only configure the EOL setting during Cygwin install.
If you run your original file through dos2unix first, grep should be able to process it properly (you may wish to run the result through unix2dos afterwards to restore the CRLF EOLs).
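If converting the files on disk isn't an option, you can also strip the CRs on the way into grep and re-add them on the way out, leaving original.txt untouched (the $'...' is Bash quoting for a literal carriage return):

```shell
# Strip CRs, filter, then restore the CRLF endings
tr -d '\r' < original.txt | grep -v "$1" | sed $'s/$/\r/' > trimmed.txt
```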
