using sed to move a string in a multi line pattern - bash

How can I use sed to change this:
typedef struct
{
uint8_t foo;
uint8_t bar;
} a_somestruct_b;
to
pre_somestruct_post = restruct.
int8lu('foo').
int8lu('bar')
I have many "somestruct" structs to convert.

An awk solution to get you started:
$ cat tst.awk
/typedef struct/ { p=1; next }                            # start capturing
p && $1=="}" {
    split($2,a,"_")                                       # capture "somestruct" in a[2]
    printf "%s_%s_%s = restruct.\n", "pre", a[2], "post"  # possibly "pre" and "post" should be "a" and "b" here?
    for (j=1;j<=i;j++) printf "%s%s\n", s[j], (j<i?".":"")  # print saved struct fields
    delete s; i=0; p=0                                    # reinitialize
}
p && NF==2 {
    split($1, b, "_")                                     # capture type
    sub(/;/,"",$2)                                        # remove ";"
    s[++i]=sprintf(" %slu('%s')", b[1], $2)               # save struct field in array s
}
Testing this with file input.txt:
$ cat input.txt
typedef struct
{
uint8_t foo;
uint8_t bar;
} a_atruct_b;
typedef struct {
uint8_t foo;
uint8_t bar;
} a_bstruct_b;
typedef struct
{
uint8_t foo;
uint8_t bar;
} a_cstruct_b;
gives:
$ awk -f tst.awk input.txt
pre_atruct_post = restruct.
uint8lu('foo').
uint8lu('bar')
pre_bstruct_post = restruct.
uint8lu('foo').
uint8lu('bar')
pre_cstruct_post = restruct.
uint8lu('foo').
uint8lu('bar')
Same thing, as a one-liner:
$ awk '/typedef struct/{p=1;next} p && $1=="}" {split($2,a,"_");printf "%s_%s_%s = restruct.\n", "pre", a[2], "post";for (j=1;j<=i;j++) printf "%s%s\n", s[j], (j<i?".":"");delete s; i=0; p=0} p && NF==2 {split($1, b, "_");sub(/;/,"",$2);s[++i]=sprintf(" %slu('%s')", b[1], $2)}' input.txt

$ cat sed_script
/typedef struct/{ # find the line with "typedef struct"
n;n; # go to the next two lines
/uint8_t/{ # Find the line with "uint8_t"
s/uint8_t (.*);/int8lu(\x27\1\x27)./; # substitute the line, i.e. int8lu('foo').
h;n; # copy the pattern space to the hold space,
# then go to next line
s/uint8_t (.*);/int8lu(\x27\1\x27)/; # substitute the line, i.e. int8lu('bar')
H;n # append the pattern space to the hold space
# then go to next line
};
s/.*_(.*)_.*/pre_\1_post = restruct./p; # substitute and print the line,
# i.e., pre_somestruct_post = restruct.
g;p # copy the hold space to the pattern space
# and then print
}
$ sed -rn -f sed_script input
pre_somestruct_post = restruct.
int8lu('foo').
int8lu('bar')
After checking that the output is what you desired, add the -i option to sed to edit the file in place.
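That workflow might look like this (a sketch recreating the script and sample input from above; GNU sed assumed, and -i.bak keeps a backup copy):

```shell
# Recreate the answer's sed script and a sample input, verify, then edit in place.
cat > sed_script <<'EOF'
/typedef struct/{
n;n;
/uint8_t/{
s/uint8_t (.*);/int8lu(\x27\1\x27)./;
h;n;
s/uint8_t (.*);/int8lu(\x27\1\x27)/;
H;n
};
s/.*_(.*)_.*/pre_\1_post = restruct./p;
g;p
}
EOF
cat > input <<'EOF'
typedef struct
{
uint8_t foo;
uint8_t bar;
} a_somestruct_b;
EOF
sed -rn -f sed_script input         # dry run: check the output first
sed -rn -i.bak -f sed_script input  # then rewrite the file, keeping input.bak
```

Note that because of -n, the in-place edit keeps only the printed lines, which is the desired converted form here.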


Bash script to compare and generate csv datafile

I have two CSV files, DATA1.csv and DATA2.csv; the content is something like this (with headers):
DATA1.csv
Client Name;strnu;addr;fav
MAD01;HDGF;11;V PO
CVOJF01;HHD-;635;V T
LINKO10;DH--JDH;98;V ZZ
DATA2.csv
USER;BINin;TYPE
XXMAD01XXXHDGFXX;11;N
KJDGD;635;M
CVOJF01XXHHD;635;N
Issues:
The values of the 1st and 2nd columns of DATA1.csv exist randomly in the first column of DATA2.csv.
For example MAD01;HDGF exists in the first column of DATA2 as ***MAD01***HDGF** (* can be alphanumeric and/or symbol characters), and MAD01;HDGF might not be in the same order in the column USER of DATA2.
The value of strnu in DATA1 is equal to the value of the column BINin in DATA2.
The column fav in DATA1 is the same as TYPE in DATA2, where V T = M and V PO = N (some other values may exist but we won't need them; for example, line 3 of DATA1 should be ignored).
N.B.: some data may exist in one file but not the other.
My bash script needs to generate a new CSV file that should contain:
The column USER from DATA2
Client Name and strnu from DATA1
BINin from DATA2, only if it's equal to the corresponding line's strnu value in DATA1
TYPE in M/N format, making sure to respect the condition that V T = M and V PO = N
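The V T = M / V PO = N requirement is just a two-entry lookup table; in awk it could be sketched as:

```shell
# hypothetical fav -> TYPE mapping taken from the requirements above
awk 'BEGIN { type["V T"]="M"; type["V PO"]="N"; print type["V T"], type["V PO"] }'
# prints: M N
```

Any of the answers below could index such a table with the fav field from DATA1.csv to emit M/N instead of the raw fav value.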
The first thing I tried was using grep to search for lines that exist in both files:
#!/bin/sh
DATA1="${1}"
DATA2="${2}"
for i in $(cat $DATA1 | awk -F";" '{print $1".*"$2}' | sed 1d) ; do
grep "$i" $DATA2
done
Result :
$ ./script.sh DATA1.csv DATA2.csv
MAD01;HDGF;11;V PO
XXMAD01XXXHDGFXX;11;N
CVOJF01;HHD-;635;V T
LINKO10;DH--JDH;98;V PO
Using grep and awk I could find lines that are present in both the DATA1 and DATA2 files, but it doesn't work for all the lines; I guess that's because of the - and other special characters present in column 2 of DATA1, but they can be ignored.
I don't know how I can generate a new csv that would mix the lines present in both files, but the expected generated CSV should look like this:
USER;Client Name;strnu;BINin;TYPE
XXMAD01XXXHDGFXX;MAD01;HDGF;11;N
CVOJF01XXHHD;CVOJF01;HHD-;635;M
This can be done in a single awk program. This is join.awk:
BEGIN {
FS = OFS = ";"
print "USER", "Client Name", "strnu", "BINin", "TYPE"
}
FNR == 1 {next}
NR == FNR {
strnu[$1] = $2
next
}
{
for (client in strnu) {
strnu_pattern = strnu[client]
gsub(/-/, "", strnu_pattern)
if ($1 ~ client && $1 ~ strnu_pattern) {
print $1, client, strnu[client], $2, $3
break
}
}
}
and then
awk -f join.awk DATA1.csv DATA2.csv
outputs
USER;Client Name;strnu;BINin;TYPE
XXMAD01XXXHDGFXX;MAD01;HDGF;11;N
CVOJF01XXHHD;CVOJF01;HHD-;635;N
Assumptions/understandings:
ignore lines from DATA1.csv where the fav field is not one of V T or V PO
when matching fields we need to ignore any hyphens in the DATA1.csv fields
when matching fields the strings from DATA1.csv can show up in either order in DATA2.csv
the last line of the expected output should end with 635;N
One awk idea:
awk '
BEGIN { FS=OFS=";"
print "USER","Client Name","strnu","BINin","TYPE" # print new header
}
FNR==1 { next } # skip input headers
FNR==NR { if ($4 == "V PO" || $4 == "V T") { # only process if fav is one of "V PO" or "V T"
cnames[FNR]=$1 # save client name
strnus[FNR]=$2 # save strnu
}
next
}
{ for (i in cnames) { # loop through array indices
cname=cnames[i] # make copy of client name ...
strnu=strnus[i] # and strnu so that we can ...
gsub(/-/,"",cname) # strip hyphens from both ...
gsub(/-/,"",strnu) # in order to perform the comparisons ...
if (index($1,cname) && index($1,strnu)) { # if cname and strnu both exist in $1 then index()>=1 in both cases so ...
print $1,cnames[i],strnus[i],$2,$3 # print to stdout
next # we found a match so break from loop and go to next line of input
}
}
}
' DATA1.csv DATA2.csv
This generates:
USER;Client Name;strnu;BINin;TYPE
XXMAD01XXXHDGFXX;MAD01;HDGF;11;N
CVOJF01XXHHD;CVOJF01;HHD-;635;N

awk substitution ascii table rules bash

I want to perform a hierarchical set of (non-recursive) substitutions in a text file.
I want to define the rules in an ascii file "table.txt" which contains lines of whitespace-separated pairs of strings:
aaa 3
aa 2
a 1
I have tried to solve it with an awk script "substitute.awk":
BEGIN { while (getline < file) { subs[$1]=$2 } }
{
    line=$0
    for (i in subs) { gsub(i,subs[i],line) }
    print line
}
When I call the script giving it the string "aaa":
echo aaa | awk -v file="table.txt" -f substitute.awk
I get
21
instead of the desired "3". Permuting the lines in "table.txt" doesn't help. Who can explain what the problem is here, and how to circumvent it? (This is a simplified version of my actual task, where I have a large file containing ascii-encoded phonetic symbols which I want to convert into LaTeX code. The ascii encoding of the symbols contains $, &, -, %, [a-z], [0-9], ...)
Any comments and suggestions are welcome!
PS:
Of course in this application for a substitution table.txt:
aa ab
a 1
an original string "aa" should be converted into "ab" and not "1b". That means a string which was yielded by applying a rule must be left untouched.
How to account for that?
The order of the loop for (i in subs) is undefined by default.
In newer versions of awk you can use PROCINFO["sorted_in"] to control the sort order. See section 12.2.1 Controlling Array Traversal and (the linked) section 8.1.6 Using Predefined Array Scanning Orders for details about that.
Alternatively, if you can't or don't want to do that you could store the replacements in numerically indexed entries in subs and walk the array in order manually.
To do that you will need to store both the pattern and the replacement in the value of the array and that will require some care to combine. You can consider using SUBSEP or any other character that cannot be in the pattern or replacement and then split the value to get the pattern and replacement in the loop.
Also note the caveats etc. with getline listed on http://awk.info/?tip/getline and consider not using it manually but instead using NR==1{...} and just listing table.txt as the first file argument to awk.
Edit: Actually, for the manual loop version you could also just keep two arrays one mapping input file line number to the patterns to match and another mapping patterns to replacements. Then looping over the line number array will get you the pattern and the pattern can be used in the second array to get the replacement (for gsub).
Instead of storing the replacements in an associative array, put them in two arrays indexed by integer (one array for the strings to replace, one for the replacements) and iterate over the arrays in order:
BEGIN {i=0; while (getline < file) { subs[i]=$1; repl[i++]=$2}
n = i}
{ for(i=0;i<n;i++) { gsub(subs[i],repl[i]); }
print $0;
}
It seems like perl's zero-width word boundary is what you want. It's a pretty straightforward conversion from the awk:
#!/usr/bin/env perl
use strict;
use warnings;
my %subs;
BEGIN{
open my $f, '<', 'table.txt' or die "table.txt:$!";
while(<$f>) {
my ($k,$v) = split;
$subs{$k}=$v;
}
}
while(<>) {
while(my($k, $v) = each %subs) {
s/\b$k\b/$v/g;
}
print;
}
Here's an answer pulled from another StackExchange site, from a fairly similar question: Replace multiple strings in a single pass.
It's slightly different in that it does the replacements in inverse order by length of target string (i.e. longest target first), but that is the only sensible order for targets which are literal strings, as appears to be the case in this question as well.
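That longest-first ordering can be reproduced with standard tools by decorating each rule with the length of its target, sorting numerically in reverse, and stripping the decoration (a sketch using the table from the question):

```shell
printf 'a 1\naaa 3\naa 2\n' |
awk '{ print length($1), $0 }' |  # prefix each rule with its target length
sort -rn |                        # longest targets first
cut -d' ' -f2-                    # drop the length prefix
# prints "aaa 3", "aa 2", "a 1", one per line
```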
If you have tcc installed, you can use the following shell function, which processes the file of substitutions into a lex-generated scanner, which it then compiles and runs using tcc's compile-and-run option.
# Call this as: substitute replacements.txt < text_to_be_substituted.txt
# Requires GNU sed because I was too lazy to write a BRE
substitute () {
tcc -run <(
{
printf %s\\n "%option 8bit noyywrap nounput" "%%"
sed -r 's/((\\\\)*)(\\?)$/\1\3\3/;
s/((\\\\)*)\\?"/\1\\"/g;
s/^((\\.|[^[:space:]])+)[[:space:]]*(.*)/"\1" {fputs("\3",yyout);}/' \
"$1"
printf %s\\n "%%" "int main(int argc, char** argv) { return yylex(); }"
} | lex -t)
}
With gcc or clang, you can use something similar to compile a substitution program from the replacement list, and then execute that program on the given text. Posix-standard c99 does not allow input from stdin, but gcc and clang are happy to do so provided you tell them explicitly that it is a C program (-x c). In order to avoid excess compilations, we use make (which needs to be gmake, Gnu make).
The following requires that the list of replacements be in a file with a .txt extension; the cached compiled executable will have the same name with a .exe extension. If the makefile were in the current directory with the name Makefile, you could invoke it as make repl (where repl is the name of the replacement file without a text extension), but since that's unlikely to be the case, we'll use a shell function to actually invoke make.
Note that in the following file, the whitespace at the beginning of each line starts with a tab character:
substitute.mak
.SECONDARY:
%: %.exe
@$(<D)/$(<F)
%.exe: %.txt
@{ printf %s\\n "%option 8bit noyywrap nounput" "%%"; \
sed -r \
's/((\\\\)*)(\\?)$$/\1\3\3/; #\
s/((\\\\)*)\\?"/\1\\"/g; #\
s/^((\\.|[^[:space:]])+)[[:space:]]*(.*)/"\1" {fputs("\3",yyout);}/' \
"$<"; \
printf %s\\n "%%" "int main(int argc, char** argv) { return yylex(); }"; \
} | lex -t | c99 -D_POSIX_C_SOURCE=200809L -O2 -x c -o "$@" -
Shell function to invoke the above:
substitute() {
gmake -f/path/to/substitute.mak "${1%.txt}"
}
You can invoke the above command with:
substitute file
where file is the name of the replacements file. (The filename must end with .txt but you don't have to type the file extension.)
The format of the input file is a series of lines consisting of a target string and a replacement string. The two strings are separated by whitespace. You can use any valid C escape sequence in the strings; you can also \-escape a space character to include it in the target. If you want to include a literal \, you'll need to double it.
If you don't want C escape sequences and would prefer to have backslashes not be metacharacters, you can replace the sed program with a much simpler one:
sed -r 's/([\\"])/\\\1/g' "$<"; \
(The ; \ is necessary because of the way make works.)
a) Don't use getline unless you have a very specific need and fully understand all the caveats, see http://awk.info/?tip/getline
b) Don't use regexps when you want strings (yes, this means you cannot use sed).
c) The while loop needs to constantly move beyond the part of the line you've already changed or you could end up in an infinite loop.
You need something like this:
$ cat substitute.awk
NR==FNR {
if (NF==2) {
strings[++numStrings] = $1
old2new[$1] = $2
}
next
}
{
for (stringNr=1; stringNr<=numStrings; stringNr++) {
old = strings[stringNr]
new = old2new[old]
slength = length(old)
tail = $0
$0 = ""
while ( sstart = index(tail,old) ) {
$0 = $0 substr(tail,1,sstart-1) new
tail = substr(tail,sstart+slength)
}
$0 = $0 tail
}
print
}
$ echo aaa | awk -f substitute.awk table.txt -
3
$ echo aaaa | awk -f substitute.awk table.txt -
31
and adding some RE metacharacters to table.txt to show they are treated just like every other character and showing how to run it when the target text is stored in a file instead of being piped:
$ cat table.txt
aaa 3
aa 2
a 1
. 7
\ 4
* 9
$ cat foo
a.a\aa*a
$ awk -f substitute.awk table.txt foo
1714291
Your new requirement requires a solution like this:
$ cat substitute.awk
NR==FNR {
if (NF==2) {
strings[++numStrings] = $1
old2new[$1] = $2
}
next
}
{
delete news
for (stringNr=1; stringNr<=numStrings; stringNr++) {
old = strings[stringNr]
new = old2new[old]
slength = length(old)
tail = $0
$0 = ""
charPos = 0
while ( sstart = index(tail,old) ) {
charPos += sstart
news[charPos] = new
$0 = $0 substr(tail,1,sstart-1) RS
tail = substr(tail,sstart+slength)
}
$0 = $0 tail
}
numChars = split($0, olds, "")
$0 = ""
for (charPos=1; charPos <= numChars; charPos++) {
$0 = $0 (charPos in news ? news[charPos] : olds[charPos])
}
print
}
$ cat table.txt
1 a
2 b
$ echo "121212" | awk -f substitute.awk table.txt -
ababab

How to get specific data from block of data based on condition

I have a file like this:
[group]
enable = 0
name = green
test = more

[group]
name = blue
test = home

[group]
value = 48
name = orange
test = out
There may be one or more spaces/tabs between the label, the =, and the value.
The number of lines may vary in every block.
I'd like to have the name, but only if enable = 0 is not present in the block.
So output should be:
blue
orange
Here is what I have managed to create:
awk -v RS="group" '!/enable = 0/ {sub(/.*name[[:blank:]]+=[[:blank:]]+/,x);print $1}'
blue
orange
There are several faults with this:
I am not able to set RS to [group]; both RS="[group]" and RS="\[group\]" fail. It will also fail if name or other labels contain group.
I prefer not to use RS with multiple characters, since this is gnu awk only.
Does anyone have another suggestion? sed or awk, and not a long chain of commands.
If you know that groups are always separated by empty lines, set RS to the empty string:
$ awk -v RS="" '!/enable = 0/ {sub(/.*name[[:blank:]]+=[[:blank:]]+/,x);print $1}'
blue
orange
@devnull explained in his answer that GNU awk also accepts regular expressions in RS, so you could only split at [group] if it is on its own line:
gawk -v RS='(^|\n)[[]group]($|\n)' '!/enable = 0/ {sub(/.*name[[:blank:]]+=[[:blank:]]+/,x);print $1}'
This makes sure we're not splitting at evil names like
[group]
enable = 0
name = [group]
name = evil
test = more
Your problem seems to be:
I am not able to set RS to [group], both this fails RS="[group]" and
RS="\[group\]".
Saying:
RS="[[]group[]]"
should yield the desired result.
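A quick sketch of that (the groups.txt filename is made up here; the NF guard skips the empty record before the first [group], and a multi-character RS needs gawk or mawk):

```shell
cat > groups.txt <<'EOF'
[group]
enable = 0
name = green
test = more
[group]
name = blue
test = home
EOF
awk -v RS='[[]group[]]' '!/enable = 0/ && NF {
    sub(/.*name[[:blank:]]+=[[:blank:]]+/,""); print $1
}' groups.txt
# prints: blue
```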
In these situations where there's clearly name = value statements within a record, I like to first populate an array with those mappings, e.g.:
map["<name>"] = <value>
and then just use the names to reference the values I want. In this case:
$ awk -v RS= -F'\n' '
{
delete map
for (i=1;i<=NF;i++) {
split($i,tmp,/ *= */)
map[tmp[1]] = tmp[2]
}
}
map["enable"] !~ /^0$/ {
print map["name"]
}
' file
blue
orange
If your version of awk doesn't support deleting a whole array then change delete map to split("",map).
Compared to using REs and/or sub()s etc., it makes the solution much more robust and extensible in case you want to compare and/or print the values of other fields in future.
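For instance, printing a second value from the same map needs no extra parsing (a sketch on a single record):

```shell
printf '[group]\nname = blue\ntest = home\n' |
awk -v RS= -F'\n' '
{
    delete map
    for (i=1;i<=NF;i++) { split($i,tmp,/ *= */); map[tmp[1]] = tmp[2] }
    print map["name"], map["test"]
}'
# prints: blue home
```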
Since you have blank-line-separated records, you should consider putting awk in paragraph mode. If you must test for the [group] identifier, simply add code to handle that. Here's some example code that should fulfill your requirements. Run it like:
awk -f script.awk file.txt
Contents of script.awk:
BEGIN {
    RS=""
}
{
    for (i=2; i<=NF; i+=3) {
        if ($i == "enable" && $(i+2) == 0) {
            f = 1
        }
        if ($i == "name") {
            r = $(i+2)
        }
    }
}
!(f) && r {
    print r
}
{
    f = 0
    r = ""
}
Results:
blue
orange
This might work for you (GNU sed):
sed -n '/\[group\]/{:a;$!{N;/\n$/!ba};/enable\s*=\s*0/!s/.*name\s*=\s*\(\S\+\).*/\1/p;d}' file
Read the [group] block into the pattern space then substitute out the colour if the enable variable is not set to 0.
sed -n '...' sets sed to run in silent mode: no output unless specified, i.e. by a p or P command.
/\[group\]/{...} when we have a line which contains [group] do what is found inside the curly braces.
:a;$!{N;/\n$/!ba} to do a loop we need a place to loop to; :a is the place to loop to. $ is the end-of-file address and $! means not the end of file, so $!{...} means do what is found inside the curly braces when it is not the end of file. N means append a newline and the next line to the current line, and /\n$/!ba means: while the pattern space does not end with an empty line, branch (b) back to a. So this collects all lines from a line that contains [group] to an empty line (or end of file).
/enable\s*=\s*0/!s/.*name\s*=\s*\(\S\+\).*/\1/p if the lines collected contain enable = 0 then do not substitute out the colour. Or to put it another way, if the lines collected so far do not contain enable = 0 do substitute out the colour.
If you don't want to use the record separator, you could use a dummy variable like this:
#!/usr/bin/awk -f
function endgroup() {
    if (e == 1) {
        print n
    }
}
$1 == "name" {
    n = $3
}
$1 == "enable" && $3 == 0 {
    e = 0;
}
$0 == "[group]" {
    endgroup();
    e = 1;
}
END {
    endgroup();
}
You could actually use Bash for this.
n=0
while read line; do
    if [ $n -eq 0 ] && [[ $line =~ name[[:space:]]+=[[:space:]]([a-z]+) ]]; then
        echo ${BASH_REMATCH[1]}
    fi
    if [[ $line == "enable = 0" ]]; then
        n=1
    else
        n=0
    fi
done < file
This will only work, however, if enable = 0 is always exactly one line above the line with name.

Append to the previous line for a match

Can I use sed or awk to append to the previous line if a match is found?
I have a file which has the format:
INT32
FSHL (const TP Buffer)
{
INT32
FSHL_lm (const TP Buffer)
{ WORD32 ugo = 0; ...
What I am trying to do is scan for independent open braces { and append them to the previous non-blank line. The match should not occur for an open brace preceded by anything else on the same line.
The expected output:
INT32
FSHL (const TP Buffer){
INT32
FSHL_lm (const TP Buffer)
{ WORD32 ugo = 0; ...
Thanks for the replies.
This might work for you (GNU sed):
sed '$!N;s/\n\s*{\s*$/{/;P;D' file
Explanation:
$!N unless the last line append the next line to the pattern space.
s/\n\s*{\s*$/{/ replace a linefeed followed by no or any amount of white space followed by an opening curly brace followed by no or any amount of white space to the end of the string, by an opening curly brace.
P prints up to and including the first newline.
D deletes up to and including the first newline (and if there was one, does not start a new cycle).
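Running the command on a two-line fragment shows the join (GNU sed, because of \s):

```shell
printf 'FSHL (const TP Buffer)\n   {\n' | sed '$!N;s/\n\s*{\s*$/{/;P;D'
# prints: FSHL (const TP Buffer){
```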
One way using perl. I read the whole file in slurp mode and use a regular expression to search for lines containing only a curly brace and remove their leading spaces.
perl -ne '
do {
local $/ = undef;
$data = <>;
};
$data =~ s/\n^\s*(\{\s*)$/\1/mg;
print $data
' infile
Assuming infile with the content of the question, output will be:
INT32
FSHL (const TP Buffer){
INT32
FSHL_lm (const TP Buffer)
{ WORD32 ugo = 0; ...
One way using awk:
awk '!(NF == 1 && $1 == "{") { if (line) print line; line = $0; next; } { sub(/^[ \t]+/, "", $0); line = line $0; } END { print line }' file.txt
Or broken out on multiple lines:
!(NF == 1 && $1 == "{") {
if (line) print line
line = $0
next
}
{
sub(/^[ \t]+/, "", $0)
line = line $0
}
END {
print line
}
Results:
INT32
FSHL (const TP Buffer){
INT32
FSHL_lm (const TP Buffer)
{ WORD32 ugo = 0; ...
HTH
[shyam@localhost ~]$ perl -lne 's/^/\n/ if $.>1 && /^\d+/; printf "%s",$_' appendDateText.txt
That will work.
i/p:
06/12/2016 20:30 Test Test Test
TestTest
06/12/2019 20:30 abbs abcbcb abcbc
06/11/2016 20:30 test test
i123312331233123312331233123312331233123312331233Test
06/12/2016 20:30 abc
o/p:
06/12/2016 20:30 Test Test TestTestTest
06/12/2019 20:30 abbs abcbcb abcbc
06/11/2016 20:30 test testi123312331233123312331233123312331233123312331233Test

sed: how to replace CR and/or LF with "\r" "\n", so any file will be in one line

I have files like
aaa
bbb
ccc
I need to sed them into aaa\r\nbbb\r\nccc
It should work for both unix and windows files, replacing the line endings with \r or \r\n accordingly.
The problem is that sed adds \n at the end of the line but keeps lines separated. How can I fix it?
These two commands together should do what you want:
sed ':a;N;$!ba;s/\r/\\r/g'
sed ':a;N;$!ba;s/\n/\\n/g'
Pass your input file through both to get the output you want. There's probably a way to combine them into a single expression.
Stolen and Modified from this question:
How can I replace a newline (\n) using sed?
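For the record, the two passes can be combined into one expression (a sketch; on a unix file the \r substitution is simply a no-op):

```shell
printf 'aaa\nbbb\nccc\n' | sed ':a;N;$!ba;s/\r/\\r/g;s/\n/\\n/g'
# prints one line: aaa\nbbb\nccc (with literal backslashes)
```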
It's possible to merge lines in sed, but personally, I consider needing to change line breaks a sign that it's time to give up on sed and use a more powerful language instead. What you want is one line of perl:
perl -e 'undef $/; while (<>) { s/\n/\\n/g; s/\r/\\r/g; print $_, "\n" }'
or 12 lines of python:
#! /usr/bin/python
import fileinput
from sys import stdout
first = True
for line in fileinput.input(mode="rb"):
    if fileinput.isfirstline() and not first:
        stdout.write("\n")
    if line.endswith("\r\n"): stdout.write(line[:-2] + "\\r\\n")
    elif line.endswith("\n"): stdout.write(line[:-1] + "\\n")
    elif line.endswith("\r"): stdout.write(line[:-1] + "\\r")
    first = False
if not first: stdout.write("\n")
or 10 lines of C to do the job, but then a whole bunch more because you have to process argv yourself:
#include <stdio.h>
void process_one(FILE *fp)
{
    int c;
    while ((c = getc(fp)) != EOF)
        if (c == '\n') fputs("\\n", stdout);
        else if (c == '\r') fputs("\\r", stdout);
        else putchar(c);
    fclose(fp);
    putchar('\n');
}
int main(int argc, char **argv)
{
    FILE *cur;
    int i, consumed_stdin = 0, rv = 0;
    if (argc == 1) /* no arguments */
    {
        process_one(stdin);
        return 0;
    }
    for (i = 1; i < argc; i++)
    {
        if (argv[i][0] == '-' && argv[i][1] == 0)
        {
            if (consumed_stdin)
            {
                fputs("cannot read stdin twice\n", stderr);
                rv = 1;
                continue;
            }
            cur = stdin;
            consumed_stdin = 1;
        }
        else
        {
            cur = fopen(argv[i], "rb");
            if (!cur)
            {
                perror(argv[i]);
                rv = 1;
                continue;
            }
        }
        process_one(cur);
    }
    return rv;
}
awk '{printf("%s\\r\\n",$0)} END {print ""}' file
tr -s '\r' '\n' <file | unix2dos
EDIT (it's been pointed out that the above misses the point entirely!)
tr -s '\r' '\n' <file | perl -pe 's/\s+$/\\r\\n/'
The tr gets rid of empty lines and dos line endings. The pipe means two processes, which is fine on modern hardware.
