Variable and string substitution is not working for parameters [duplicate] - bash

The intent of this question is to provide an answer to the daily questions whose answer is "you have DOS line endings" so we can simply close them as duplicates of this one without repeating the same answers ad nauseam.
NOTE: This is NOT a duplicate of any existing question. The intent of this Q&A is not just to provide a "run this tool" answer but also to explain the issue such that we can just point anyone with a related question here and they will find a clear explanation of why they were pointed here as well as the tool to run to solve their problem. I spent hours reading all of the existing Q&A and they are all lacking in the explanation of the issue, alternative tools that can be used to solve it, and/or the pros/cons/caveats of the possible solutions. Also some of them have accepted answers that are just plain dangerous and should never be used.
Now back to the typical question that would result in a referral here:
I have a file containing 1 line:
what isgoingon
and when I print it using this awk script to reverse the order of the fields:
awk '{print $2, $1}' file
instead of seeing the output I expect:
isgoingon what
I get the field that should be at the end of the line appear at the start of the line, overwriting some text at the start of the line:
whatngon
or I get the output split onto 2 lines:
isgoingon
what
What could the problem be and how do I fix it?

The problem is that your input file uses DOS line endings of CRLF instead of UNIX line endings of just LF and you are running a UNIX tool on it so the CR remains part of the data being operated on by the UNIX tool. CR is commonly denoted by \r and can be seen as a control-M (^M) when you run cat -vE on the file while LF is \n and appears as $ with cat -vE.
So your input file wasn't really just:
what isgoingon
it was actually:
what isgoingon\r\n
as you can see with cat -v:
$ cat -vE file
what isgoingon^M$
and od -c:
$ od -c file
0000000 w h a t i s g o i n g o n \r \n
0000020
so when you run a UNIX tool like awk (which treats \n as the line ending) on the file, the \n is consumed by the act of reading the line, but that leaves the 2 fields as:
<what> <isgoingon\r>
Note the \r at the end of the second field. \r means Carriage Return which is literally an instruction to return the cursor to the start of the line so when you do:
print $2, $1
awk will print isgoingon and then will return the cursor to the start of the line before printing what which is why the what appears to overwrite the start of isgoingon.
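You can see the stray CR in that output by piping it through cat -v (a quick check, using the sample file above):
$ awk '{print $2, $1}' file | cat -v
isgoingon^M what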
To fix the problem, do either of these:
dos2unix file
sed 's/\r$//' file
awk '{sub(/\r$/,"")}1' file
perl -pe 's/\r$//' file
Apparently dos2unix is also available as fromdos (from the tofrodos package) in some UNIX variants (e.g. Ubuntu).
Be careful if you decide to use tr -d '\r' as is often suggested as that will delete all \rs in your file, not just those at the end of each line.
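Note that dos2unix modifies the file in place, while the sed, awk, and perl commands above write the fixed text to stdout, so to keep the result you would redirect to a new file or edit in place, e.g. (a sketch):
sed 's/\r$//' file > file.unix && mv file.unix file
sed -i 's/\r$//' file      # GNU sed
perl -i -pe 's/\r$//' file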
Note that GNU awk will let you parse files that have DOS line endings by simply setting RS appropriately:
gawk -v RS='\r\n' '...' file
but other awks will not allow that as POSIX only requires awks to support a single character RS and most other awks will quietly truncate RS='\r\n' to RS='\r'. You may need to add -v BINMODE=3 for gawk to even see the \rs though as the underlying C primitives will strip them on some platforms, e.g. cygwin.
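For example, with the sample file above:
$ gawk -v RS='\r\n' '{print $2, $1}' file
isgoingon what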
One thing to watch out for is that CSVs created by Windows tools like Excel will use CRLF as the line endings but can have LFs embedded inside a specific field of the CSV, e.g.:
"field1","field2.1
field2.2","field3"
is really:
"field1","field2.1\nfield2.2","field3"\r\n
so if you just convert \r\ns to \ns then you can no longer tell linefeeds within fields from linefeeds acting as line endings. If you want to do that anyway, I recommend converting all of the intra-field linefeeds to something else first, e.g. this would convert all intra-field LFs to tabs and convert all line-ending CRLFs to LFs:
gawk -v RS='\r\n' '{gsub(/\n/,"\t")}1' file
Doing the same without GNU awk is left as an exercise, but with other awks it involves combining lines that do not end in CR as they're read (a rough sketch follows).
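A sketch of that approach in POSIX awk, assuming every real record ends in CR and intra-field LFs should become tabs as in the gawk example above:
awk '
{
    # keep appending physical lines (joined with a tab) until we see one ending
    # in CR, which marks the real end of a CRLF-terminated record
    rec = (rec == "" ? $0 : rec "\t" $0)
    if (sub(/\r$/, "", rec)) { print rec; rec = "" }
}' file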
Also note that though CR is part of the [[:space:]] POSIX character class, it is not one of the whitespace characters that awk uses to separate fields when the default FS of " " is used; those are only tab, blank, and newline. This can lead to confusing results if your input can have blanks before CRLF:
$ printf 'x y \n'
x y
$ printf 'x y \n' | awk '{print $NF}'
y
$
$ printf 'x y \r\n'
x y
$ printf 'x y \r\n' | awk '{print $NF}'
$
That's because trailing field separator white space is ignored at the beginning/end of a line that has LF line endings, but \r is the final field on a line with CRLF line endings if the character before it was whitespace:
$ printf 'x y \r\n' | awk '{print $NF}' | cat -Ev
^M$

You can use the \R shorthand character class in PCRE for files with unknown line endings. There are even more line endings to consider with Unicode or other platforms. The \R form is a recommended character class from the Unicode consortium to represent all forms of a generic newline.
So if you have an 'extra' line ending you can find and remove it: the regex s/\R$/\n/ will normalize any combination of line endings at the end of each line into \n. Alternatively, you can use s/\R/\n/g to capture any notion of 'line ending' and standardize it into a \n character.
Given:
$ printf "what\risgoingon\r\n" > file
$ od -c file
0000000 w h a t \r i s g o i n g o n \r \n
0000020
Perl and Ruby and most flavors of PCRE implement \R combined with the end of string assertion $ (end of line in multi-line mode):
$ perl -pe 's/\R$/\n/' file | od -c
0000000 w h a t \r i s g o i n g o n \n
0000017
$ ruby -pe '$_.sub!(/\R$/,"\n")' file | od -c
0000000 w h a t \r i s g o i n g o n \n
0000017
(Note the \r between the two words is correctly left alone)
If you do not have \R you can use the equivalent of (?>\r\n|\v) in PCRE.
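For example, a sketch of the same normalization without \R (assuming a Perl that supports atomic groups):
$ perl -pe 's/(?>\r\n|\v)$/\n/' file | od -c
0000000 w h a t \r i s g o i n g o n \n
0000017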
With straight POSIX tools, your best bet is likely awk like so:
$ awk '{sub(/\r$/,"")} 1' file | od -c
0000000 w h a t \r i s g o i n g o n \n
0000017
Things that kinda work (but know your limitations):
tr deletes all \r even if used in another context (granted the use of \r is rare, and XML processing requires that \r be deleted, so tr is a great solution):
$ tr -d "\r" < file | od -c
0000000 w h a t i s g o i n g o n \n
0000016
GNU sed works, but not POSIX sed, since the \r and \x0D escapes are not specified by POSIX.
GNU sed only:
$ sed 's/\x0D//' file | od -c # also sed 's/\r//'
0000000 w h a t \r i s g o i n g o n \n
0000017
The Unicode Regular Expression Guide is probably the best bet for a definitive treatment of what a "newline" is.

Run dos2unix. While you can manipulate the line endings with code you wrote yourself, there are utilities which exist in the Linux / Unix world which already do this for you.
If on a Fedora system dnf install dos2unix will put the dos2unix tool in place (should it not be installed).
There is a similar dos2unix deb package available for Debian based systems.
From a programming point of view, the conversion is simple. Search all the characters in a file for the sequence \r\n and replace it with \n.
This means there are dozens of ways to convert from DOS to Unix using nearly every tool imaginable. One simple way is to use the command tr where you simply replace \r with nothing!
tr -d '\r' < infile > outfile

Related

awk adds undesired newline at the end of last detected parameter [duplicate]

AWK printing single array indices with other text [duplicate]

Read in a file AS a single line [duplicate]

How can I replace a newline ("\n") with a space (" ") using the sed command?
I unsuccessfully tried:
sed 's#\n# #g' file
sed 's#^$# #g' file
How do I fix it?
sed is intended to be used on line-based input, although it can do what you need.
A better option here is to use the tr command as follows:
tr '\n' ' ' < input_filename
or remove the newline characters entirely:
tr -d '\n' < input.txt > output.txt
or if you have the GNU version (with its long options)
tr --delete '\n' < input.txt > output.txt
Use this solution with GNU sed:
sed ':a;N;$!ba;s/\n/ /g' file
This will read the whole file in a loop (':a;N;$!ba'), then replace the newline(s) with a space (s/\n/ /g). Additional substitutions can simply be appended if needed.
Explanation:
sed starts by reading the first line excluding the newline into the pattern space.
Create a label via :a.
Append a newline and next line to the pattern space via N.
If we are before the last line, branch to the created label $!ba ($! means not to do it on the last line. This is necessary to avoid executing N again, which would terminate the script if there is no more input!).
Finally the substitution replaces every newline with a space on the pattern space (which is the whole file).
Here is cross-platform compatible syntax which works with BSD and OS X's sed (as per Benjie's comment):
sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/ /g' file
As you can see, using sed for this otherwise simple problem is problematic. For a simpler and adequate solution see this answer.
Fast answer
sed ':a;N;$!ba;s/\n/ /g' file
:a create a label 'a'
N append the next line to the pattern space
$! if not the last line, ba branch (go to) label 'a'
s substitute, /\n/ regex for new line, / / by a space, /g global match (as many times as it can)
sed will loop through steps 1 to 3 until it reaches the last line, getting all lines to fit in the pattern space, where sed will substitute all \n characters
Alternatives
All alternatives, unlike sed, do not need to reach the last line before they begin producing output
with bash, slow
while read line; do printf "%s" "$line "; done < file
with perl, sed-like speed
perl -p -e 's/\n/ /' file
with tr, faster than sed, can replace by one character only
tr '\n' ' ' < file
with paste, tr-like speed, can replace by one character only
paste -s -d ' ' file
with awk, tr-like speed
awk 1 ORS=' ' file
Other alternatives like "echo $(< file)" are slow, work only on small files, and need to process the whole file before producing output.
Long answer from the sed FAQ 5.10
5.10. Why can't I match or delete a newline using the \n escape
sequence? Why can't I match 2 or more lines using \n?
The \n will never match the newline at the end-of-line because the
newline is always stripped off before the line is placed into the
pattern space. To get 2 or more lines into the pattern space, use
the 'N' command or something similar (such as 'H;...;g;').
Sed works like this: sed reads one line at a time, chops off the
terminating newline, puts what is left into the pattern space where
the sed script can address or change it, and when the pattern space
is printed, appends a newline to stdout (or to a file). If the
pattern space is entirely or partially deleted with 'd' or 'D', the
newline is not added in such cases. Thus, scripts like
sed 's/\n//' file # to delete newlines from each line
sed 's/\n/foo\n/' file # to add a word to the end of each line
will NEVER work, because the trailing newline is removed before
the line is put into the pattern space. To perform the above tasks,
use one of these scripts instead:
tr -d '\n' < file # use tr to delete newlines
sed ':a;N;$!ba;s/\n//g' file # GNU sed to delete newlines
sed 's/$/ foo/' file # add "foo" to end of each line
Since versions of sed other than GNU sed have limits to the size of
the pattern buffer, the Unix 'tr' utility is to be preferred here.
If the last line of the file contains a newline, GNU sed will add
that newline to the output but delete all others, whereas tr will
delete all newlines.
To match a block of two or more lines, there are 3 basic choices:
(1) use the 'N' command to add the Next line to the pattern space;
(2) use the 'H' command at least twice to append the current line
to the Hold space, and then retrieve the lines from the hold space
with x, g, or G; or (3) use address ranges (see section 3.3, above)
to match lines between two specified addresses.
Choices (1) and (2) will put an \n into the pattern space, where it
can be addressed as desired ('s/ABC\nXYZ/alphabet/g'). One example
of using 'N' to delete a block of lines appears in section 4.13
("How do I delete a block of specific consecutive lines?"). This
example can be modified by changing the delete command to something
else, like 'p' (print), 'i' (insert), 'c' (change), 'a' (append),
or 's' (substitute).
Choice (3) will not put an \n into the pattern space, but it does
match a block of consecutive lines, so it may be that you don't
even need the \n to find what you're looking for. Since GNU sed
version 3.02.80 now supports this syntax:
sed '/start/,+4d' # to delete "start" plus the next 4 lines,
in addition to the traditional '/from here/,/to there/{...}' range
addresses, it may be possible to avoid the use of \n entirely.
A shorter awk alternative:
awk 1 ORS=' '
Explanation
An awk program is built up of rules which consist of conditional code-blocks, i.e.:
condition { code-block }
If the code-block is omitted, the default is used: { print $0 }. Thus, the 1 is interpreted as a true condition and print $0 is executed for each line.
When awk reads the input it splits it into records based on the value of RS (Record Separator), which by default is a newline, thus awk will by default parse the input line-wise. The splitting also involves stripping off RS from the input record.
Now, when printing a record, ORS (Output Record Separator) is appended to it, default is again a newline. So by changing ORS to a space all newlines are changed to spaces.
GNU sed has an option, -z, for null-separated records (lines). You can just call:
sed -z 's/\n/ /g'
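For example (a sketch, assuming GNU sed; note that the trailing newline is also replaced, so the output ends with a space and has no final newline):
$ printf 'a\nb\nc\n' | sed -z 's/\n/ /g'
a b c 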
The Perl version works the way you expected.
perl -i -p -e 's/\n//' file
As pointed out in the comments, it's worth noting that this edits in place. -i.bak will give you a backup of the original file before the replacement in case your regular expression isn't as smart as you thought.
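For example (a sketch):
perl -i.bak -p -e 's/\n//' file    # the original is saved as file.bak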
Who needs sed? Here is the bash way:
cat test.txt | while read line; do echo -n "$line "; done
In order to replace all newlines with spaces using awk, without reading the whole file into memory:
awk '{printf "%s ", $0}' inputfile
If you want a final newline:
awk '{printf "%s ", $0} END {printf "\n"}' inputfile
You can use a character other than space:
awk '{printf "%s|", $0} END {printf "\n"}' inputfile
tr '\n' ' '
is the command.
Simple and easy to use.
Three things.
tr (or cat, etc.) is absolutely not needed. (GNU) sed and (GNU) awk, when combined, can do 99.9% of any text processing you need.
stream != line based. ed is a line-based editor. sed is not. See sed lecture for more information on the difference. Most people confuse sed to be line-based because it is, by default, not very greedy in its pattern matching for SIMPLE matches - for instance, when doing pattern searching and replacing by one or two characters, it by default only replaces on the first match it finds (unless specified otherwise by the global command). There would not even be a global command if it were line-based rather than STREAM-based, because it would evaluate only lines at a time. Try running ed; you'll notice the difference. ed is pretty useful if you want to iterate over specific lines (such as in a for-loop), but most of the times you'll just want sed.
That being said,
sed -e '{:q;N;s/\n/ /g;t q}' file
works just fine in GNU sed version 4.2.1. The above command will replace all newlines with spaces. It's ugly and a bit cumbersome to type in, but it works just fine. The {}'s can be left out, as they're only included for sanity reasons.
Why didn't I find a simple solution with awk?
awk '{printf $0}' file
printf will print every line without a newline; if you want to separate the original lines with a space or something else:
awk '{printf $0 " "}' file
The answer with the :a label ...
How can I replace a newline (\n) using sed?
... does not work in freebsd 7.2 on the command line:
( echo foo ; echo bar ) | sed ':a;N;$!ba;s/\n/ /g'
sed: 1: ":a;N;$!ba;s/\n/ /g": unused label 'a;N;$!ba;s/\n/ /g'
foo
bar
But it does if you put the sed script in a file or use -e to "build" the sed script...
> (echo foo; echo bar) | sed -e :a -e N -e '$!ba' -e 's/\n/ /g'
foo bar
or ...
> cat > x.sed << 'eof'
:a
N
$!ba
s/\n/ /g
eof
> (echo foo; echo bar) | sed -f x.sed
foo bar
Maybe the sed in OS X is similar.
Easy-to-understand Solution
I had this problem. The kicker was that I needed the solution to work on BSD's (Mac OS X) and GNU's (Linux and Cygwin) sed and tr:
$ echo 'foo
bar
baz

foo2
bar2
baz2' \
| tr '\n' '\000' \
| sed 's:\x00\x00.*:\n:g' \
| tr '\000' '\n'
Output:
foo
bar
baz
(has trailing newline)
It works on Linux, OS X, and BSD - even without UTF-8 support or with a crappy terminal.
Use tr to swap the newline with another character.
NULL (\000 or \x00) is nice because it doesn't need UTF-8 support and it's not likely to be used.
Use sed to match the NULL
Use tr to swap back extra newlines if you need them
You can use xargs:
seq 10 | xargs
or
seq 10 | xargs echo -n
or, for the sake of completeness:
cat file | xargs
If you are unfortunate enough to have to deal with Windows line endings, you need to remove the \r and the \n:
tr '\r\n' ' ' < $input > $output
I'm not an expert, but I guess in sed you'd first need to append the next line into the pattern space, by using "N". From the section "Multiline Pattern Space" in "Advanced sed Commands" of the book sed & awk (Dale Dougherty and Arnold Robbins; O'Reilly 1997; page 107 in the preview):
The multiline Next (N) command creates a multiline pattern space by reading a new line of input and appending it to the contents of the pattern space. The original contents of pattern space and the new input line are separated by a newline. The embedded newline character can be matched in patterns by the escape sequence "\n". In a multiline pattern space, the metacharacter "^" matches the very first character of the pattern space, and not the character(s) following any embedded newline(s). Similarly, "$" matches only the final newline in the pattern space, and not any embedded newline(s). After the Next command is executed, control is then passed to subsequent commands in the script.
From man sed:
[2addr]N
Append the next line of input to the pattern space, using an embedded newline character to separate the appended material from the original contents. Note that the current line number changes.
I've used this to search (multiple) badly formatted log files, in which the search string may be found on an "orphaned" next line.
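For example, a sliding-window search for a message whose two halves ended up on consecutive lines might look like this (a sketch; the 'ERROR:' / 'timed out' strings are made up):
$ printf 'ok\nERROR:\ntimed out\nok\n' | sed -n 'N;/ERROR:\ntimed out/p;D'
ERROR:
timed out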
In response to the "tr" solution above, on Windows (probably using the Gnuwin32 version of tr), the proposed solution:
tr '\n' ' ' < input
was not working for me; it would either error out or actually replace the \n with '' for some reason.
Using another feature of tr, the "delete" option -d did work though:
tr -d '\n' < input
or '\r\n' instead of '\n'
I used a hybrid approach to get around the newline thing by using tr to replace newlines with tabs, then replacing tabs with whatever I want. In this case, " <br> ", since I'm trying to generate HTML breaks.
echo -e "a\nb\nc\n" |tr '\n' '\t' | sed 's/\t/ <br> /g'
You can also use this method:
sed 'x;G;1!h;s/\n/ /g;$!d'
Explanation
x - which is used to exchange the data from both space (pattern and hold).
G - which is used to append the data from hold space to pattern space.
h - which is used to copy the pattern space to hold space.
1!h - on the first line, don't copy the pattern space to the hold space, because a \n is at the start of the pattern space.
$!d - delete the pattern space (so nothing is printed) and start the next cycle, on every line except the last.
Flow
When the first line is read from the input, the exchange puts line 1 into the hold space and leaves the (empty) previous hold space in the pattern space; G then appends the hold space (line 1) after a \n, the substitution turns that \n into a space, and the pattern space is deleted.
On the second line, the exchange puts line 2 into the hold space and brings line 1 back into the pattern space, G appends the hold space (line 2) to it, h copies the combined pattern space back to the hold space, the substitution is made, and the pattern space is deleted. This operation continues until EOF is reached, at which point the last cycle prints the accumulated result.
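To see it in action (a quick check with GNU sed):
$ printf 'a\nb\nc\n' | sed 'x;G;1!h;s/\n/ /g;$!d'
a b c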
Bullet-proof solution. Binary-data-safe and POSIX-compliant, but slow.
POSIX sed
requires input according to the
POSIX text file
and
POSIX line
definitions, so NULL-bytes and too long lines are not allowed and each line must end with a newline (including the last line). This makes it hard to use sed for processing arbitrary input data.
The following solution avoids sed and instead converts the input bytes to octal codes and then to bytes again, but intercepts octal code 012 (newline) and outputs the replacement string in place of it. As far as I can tell the solution is POSIX-compliant, so it should work on a wide variety of platforms.
od -A n -t o1 -v | tr ' \t' '\n\n' | grep . |
while read x; do [ "0$x" -eq 012 ] && printf '<br>\n' || printf "\\$x"; done
POSIX reference documentation:
sh,
shell command language,
od,
tr,
grep,
read,
[,
printf.
read, [, and printf are built-ins in at least bash, but that is probably not guaranteed by POSIX, so on some platforms it could be that each input byte will start one or more new processes, which will slow things down. Even in bash this solution only reaches about 50 kB/s, so it's not suited for large files.
Tested on Ubuntu (bash, dash, and busybox), FreeBSD, and OpenBSD.
In some situations maybe you can change RS to some other string or character. This way, \n is available for sub/gsub:
$ gawk 'BEGIN {RS="dn" } {gsub("\n"," ") ;print $0 }' file
The power of shell scripting is that if you do not know how to do it in one way you can do it in another way. And many times you have more things to take into account than make a complex solution on a simple problem.
Regarding the thing that gawk is slow... and reads the file into memory, I do not know this, but to me gawk seems to work with one line at the time and is very very fast (not that fast as some of the others, but the time to write and test also counts).
I process MB and even GB of data, and the only limit I found is line size.
Find and replace allowing \n (GNU sed):
sed -i -z 's/Marker\n/# Marker Comment\nMarker\n/g' myfile.txt
Marker
Becomes
# Marker Comment
Marker
You could use xargs — it will replace \n with a space by default.
However, it would have problems if your input has any case of an unterminated quote, e.g. if the quote signs on a given line don't match.
On Mac OS X (using FreeBSD sed):
# replace each newline with a space
printf "a\nb\nc\nd\ne\nf" | sed -E -e :a -e '$!N; s/\n/ /g; ta'
printf "a\nb\nc\nd\ne\nf" | sed -E -e :a -e '$!N; s/\n/ /g' -e ta
To remove empty lines:
sed -n "s/^$//;t;p;"
Using Awk:
awk "BEGIN { o=\"\" } { o=o \" \" \$0 } END { print o; }"
A solution I particularly like is to append all the file in the hold space and replace all newlines at the end of file:
$ (echo foo; echo bar) | sed -n 'H;${x;s/\n//g;p;}'
foobar
However, someone told me the hold space can be finite in some sed implementations.
Replace newlines with any string, and replace the last newline too
The pure tr solutions can only replace with a single character, and the pure sed solutions don't replace the last newline of the input. The following solution fixes these problems, and seems to be safe for binary data (even with a UTF-8 locale):
printf '1\n2\n3\n' |
sed 's/%/%p/g;s/#/%a/g' | tr '\n' '#' | sed 's/#/<br>/g;s/%a/#/g;s/%p/%/g'
Result:
1<br>2<br>3<br>
It is sed that introduces the new-lines after "normal" substitution. First, it trims the new-line char, then it processes according to your instructions, then it introduces a new-line.
Using sed you can replace "the end" of a line (not the new-line char) after being trimmed, with a string of your choice, for each input line; but, sed will output different lines. For example, suppose you wanted to replace the "end of line" with "===" (more general than a replacing with a single space):
PROMPT~$ cat <<EOF |sed 's/$/===/g'
first line
second line
3rd line
EOF
first line===
second line===
3rd line===
PROMPT~$
To replace the new-line char with the string, you can, though inefficiently, use tr, as pointed out before, to replace the newline chars with a "special char" and then use sed to replace that special char with the string you want.
For example:
PROMPT~$ cat <<EOF | tr '\n' $'\x01'|sed -e 's/\x01/===/g'
first line
second line
3rd line
EOF
first line===second line===3rd line===PROMPT~$

sed is adding character to start of word vs end

So I'm trying to add a "!" to the end of every word in my placesCapEx file from my placesCap file
This is what it looks like:
Yugoslavian
Zambia
Zambian
Zomba
This is what I want it to look like:
Yugoslavian!
Zambia!
Zambian!
Zomba!
I've tried sed 's/$/\!/' Wordlists/placesCap > Wordlists/placesCapEx and just sed 's/$/!/' Wordlists/placesCap > Wordlists/placesCapEx
What happens is when I run this and then cat Wordlists/placesCapEx it outputs
!ugoslavian
!ambia
!ambian
!omba
I've done some research and someone stated something about it being a Unix thing but they never went into detail
Your simpler sed command should work fine for a text file where end-of-line is a single newline character. You likely have "dos" format files here (carriage return / linefeed).
Consider:
$ cat zippy
Zippy
$ od -c zippy
0000000 Z i p p y \r \n
0000007
$ sed 's/$/!/' zippy
!ippy
$ sed 's/$/!/' zippy | od -c
0000000 Z i p p y \r ! \n
0000010
You're seeing the effect of \r displayed on a terminal: move the cursor to start of line, print the '!', newline goes to next line.
To handle the presence of \r\n pairs as your end-of-line character, you might try:
$ sed 's/\r*$/!/' zippy
Zippy!
...assuming your sed honors the \r as mine (GNU sed 4.2.2) does.
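If your sed doesn't honor \r, a more portable sketch is to splice a literal CR into the script with printf:
$ sed "s/$(printf '\r')*\$/!/" zippy
Zippy!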

Converting CR to LF using sed

I had a file containing CR and CRLF on Windows.
I ran this command on it:
$ sed -i 's \x0d \x0a ' foo
What I got back was that:
All CR that were not followed by LF were converted to LF
But
Those CR that were part of CRLF were left unchanged.
Why is that?
Assuming that you're running this on a Unix platform, using GNU sed:
sed -i 's/\r/\n/g; s/\n$//' foo
This replaces all isolated CR (\r, \x0d) instances as well as CRLF (\r\n, \x0d\x0a) sequences with one LF (\n, \x0a) each - see bottom for an explanation.
As for what you tried (again, assuming that you're running this on a Unix platform, using GNU sed):
sed reads everything up to, but not including, a LF (\n) as a single line, and, on output, terminates that line with LF.
In your case that means that a single line read would end in CR (\r) (due to sed reading up to CRLF, stripping the LF), possibly containing isolated CR instances in that line.
's \x0d \x0a ', due to not using option g, replaces at most 1 CR character per line with LF.
What that should have resulted in:
The first CR (\r, \x0d) instance on each line should have been replaced with LF (\n, \x0a)
Any additional CR instances on the current line - including one that is part of the line-ending CRLF sequence - would have been left alone.
Why does a correct solution need two s calls?
's/\r/\n/g' globally (g) replaces all CR (\r) instances in the current line with LF \n.
Since the CR that was part of the line-ending CRLF was therefore also replaced with \n, the in-memory line (the pattern space, in sed speak) now ends in \n.
Because sed invariably appends an LF (\n) on output, the extra trailing \n must be removed, which is what s/\n$// does.
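A quick check of the combined command (a sketch with GNU sed, on input containing both an isolated CR and CRLFs):
$ printf 'a\r\nb\rc\r\n' | sed 's/\r/\n/g; s/\n$//' | od -c
0000000 a \n b \n c \n
0000006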
The reason for this behavior is that lines ending with \r on Unix appear as ONE line together with the next line that ends in \n:
$ echo -e "line1\rline2\r\nline3" |cat -A
line1^Mline2^M$
line3$
As a result your sed, without the g option, will replace only the first \r in this "concatenated" line:
$ echo -e "line1\rline2\r\nline3" |sed 's \x0d \x0a ' |cat -A
line1$
line2^M$ #this is same input line as line1 and thus \r is not replaced the second time in the same line without g
line3$
You need to include g for global replacements of \r when found more than once in the same whatever considered to be input line:
$ echo -e "line1\rline2\r\nline3\rline4\r\nline5\r\nline6" |cat -A
line1^Mline2^M$ #line2 \r will not be replaced without g
line3^Mline4^M$ #line4 \r will not be replaced without g
line5^M$ # This \r will be replaced since it is unique on input line
line6$
$ echo -e "line1\rline2\r\nline3\rline4\r\nline5\r\nline6" |sed 's \r \n ' |cat -A
line1$
line2^M$
line3$
line4^M$
line5$ #the \r is removed from here even without g , since input line5 was alone
$
line6$
$ echo -e "line1\rline2\r\nline3\rline4\r\nline5\r\nline6" |sed 's \r \n g' |cat -A
line1$
line2$
$
line3$
line4$
$
line5$
$
line6$
Attention:
As is obvious from the above tests, replacing \r with \n will turn CRLF into LFLF = \n\n, and this will generate an extra blank line. This may or may not be desirable. This extra line can be removed as advised, e.g. by mklement0's answer above.
