using awk to print every second field - for-loop

I have the following input file and I wish to print every second field:
A=1=B=2=C=3
To get the following output:
1 2 3
I have tried:
awk 'BEGIN {FS="="; OFS=" "} {for (i=2; i<=NF; i+=2); print ($i) }' input_file
and it clearly doesn't work. I think I have the for loop portion correct, but something is wrong with my print portion.
Thank you.

$ awk -v RS== -v ORS=" " '0==NR%2' input_file
1 2 3
How it works
-v RS==
Set the input record separator to =.
-v ORS=" "
Set the output record separator to a space.
0==NR%2
Print every other record.
NR is the record number. NR%2 is the record number modulo 2. Thus, the condition 0==NR%2 is true on every second (even-numbered) record. When the condition is true, the action is performed. Since no action is specified, the default action is performed, which is to print the record.
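To see how the splitting works, here is a quick check (not part of the answer) that prints each record with its number; note the last record also carries the file's trailing newline:
$ awk -v RS== '{print NR": "$0}' input_file
1: A
2: 1
3: B
4: 2
5: C
6: 3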
Alternative
The key issue in the original code was a misplaced semicolon. Consider:
for (i=2; i<=NF; i+=2); print ($i)
In this case, the print command is executed only after the for loop exits.
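Because the semicolon gives the loop an empty body, by the time print runs the counter has already passed NF, so $i refers to a field that does not exist. A quick check (assuming the same input_file):
$ awk 'BEGIN {FS="="} {for (i=2; i<=NF; i+=2); print i, "["$i"]"}' input_file
8 []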
Try:
$ awk 'BEGIN {FS="="; OFS=" "} {for (i=2; i<=NF; i+=2) print $i }' input_file
1
2
3
Or, if you want the output on one line:
$ awk 'BEGIN {FS="="} {for (i=2; i<=NF; i+=2) printf "%s ", $i; print "" }' input_file
1 2 3

How to make column values to rows in unix

Below is sample data in a file:
4 columns, TAB separated, with the last column holding values separated by commas.
The 3rd column actually shows the number of values in the 4th column.
6338838 ESR 3 173812,10547556,10518181
6338822 ESR 2 7219086,12761162
Expected output :
6338838 ESR 3 173812
6338838 ESR 3 10547556
6338838 ESR 3 10518181
6338822 ESR 2 7219086
6338822 ESR 2 12761162
I tried with AWK, but was not able to make it work.
EDIT: How about simply using gsub to get rid of commas here :)
awk -F" +" '{gsub(",",ORS $1 OFS $2 OFS $3 OFS,$4)} 1' Input_file | column -t
Change -F to -F"\t" in case your Input_file is TAB delimited.
How about simply using -F of awk and printing the fields' values:
awk -F" +|," '{for(i=4;i<=NF;i++){print $1,$2,$3,$i}}' Input_file
Append | column -t to the above code in case you need TAB-delimited output.
As per Cyrus and Ghoti's comments, adding the following too, in case your Input_file is TAB delimited.
awk -F '[\t,]' -v OFS='\t' '{for(i=4; i<=NF; i++) print $1,$2,$3,$i}' Input_file
This should work:
awk '{n = split($4,x,","); for (i = 1; i <= n; ++i) {printf "%s %s %s %s\n", $1, $2, $3, x[i]} }' yourfile
With GNU awk:
awk 'BEGIN{FS=OFS="\t"} {c1to3=$1 FS $2 FS $3; columns=split($4,array,","); for(i=1; i<=columns; i++) print c1to3,array[i]}' file
or shorter:
awk -v OFS='\t' '{columns=split($4,array,","); for(i=1; i<=columns; i++) print $1,$2,$3,array[i]}' file
or
awk 'BEGIN{OFS="\t"} {c=split($4,a,","); NF=3; for(i=1; i<=c; i++) print $0,a[i]}' file
Output:
6338838 ESR 3 173812
6338838 ESR 3 10547556
6338838 ESR 3 10518181
6338822 ESR 2 7219086
6338822 ESR 2 12761162
I love these "who can do it shorter" contests. :-)
If we cared to use the item count from $3, we could do this:
awk '{split($4,a,",");for(i=1;i<=$3;i++){$4=a[i];print}}' OFS='\t' input.txt
But the following produces similar results in fewer bytes of code. Output is in the reverse order of subfields in $4.
awk '{for(i=split($4,a,",");i;i--){$4=a[i];print}}' OFS='\t' input.txt
Not bothering to set FS because your sample input doesn't appear to include spaces within the fields.
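If the file really is TAB delimited and the first three columns might contain spaces, the same idea works with an explicit FS and OFS (a sketch, with the same reverse-order caveat as above):
$ awk -F'\t' -v OFS='\t' '{for(i=split($4,a,",");i;i--){$4=a[i];print}}' input.txt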
In native bash:
while IFS=$'\t' read -r one two three four; do
IFS=, read -r -a pieces <<<"$four"
for piece in "${pieces[@]}"; do
printf '%s\t%s\t%s\t%s\n' "$one" "$two" "$three" "$piece"
done
done <yourfile
Here is another awk, without referencing unused fields.
$ awk '{n=split($NF,a,",");
for(i=1;i<=n;i++)
{sub($NF"$",a[i]);
print}}' file.t

Ignore delimiters in quotes and excluding columns dynamically in csv file

I have an awk command to read a csv file with | as the separator. I am using this command as part of my shell script, where the columns to exclude will be removed from the output. The list of columns is input as 1 2 3.
Command Reference: http://wiki.bash-hackers.org/snipplets/awkcsv
awk -v FS='"| "|^"|"$' '{for i in $test; do $(echo $i=""); done print }' test.csv
$test is 1 2 3
I want to set $1="" $2="" $3="" before printing all the columns. I am getting this error:
awk: {for i in $test; do $(echo $i=""); done {print }
awk: ^ syntax error
This command works properly and prints all the columns:
awk -v FS='"| "|^"|"$' '{print }' test.csv
File 1
"first"| "second"| "last"
"fir|st"| "second"| "last"
"firtst one"| "sec|ond field"| "final|ly"
Expected output if I want to exclude columns 2 and 3 dynamically:
first
fir|st
firtst one
I need help getting the for loop right.
With GNU awk for FPAT:
$ awk -v FPAT='"[^"]+"' '{print $1}' file
"first"
"fir|st"
"firtst one"
$ awk -v flds='1' -v FPAT='"[^"]+"' 'BEGIN{n=split(flds,f,/ /)} {for (i=1;i<=n;i++) printf "%s%s", $(f[i]), (i<n?OFS:ORS)}' file
"first"
"fir|st"
"firtst one"
$ awk -v flds='2 3' -v FPAT='"[^"]+"' 'BEGIN{n=split(flds,f,/ /)} {for (i=1;i<=n;i++) printf "%s%s", $(f[i]), (i<n?OFS:ORS)}' file
"second" "last"
"second" "last"
"sec|ond field" "final|ly"
$ awk -v flds='3 1' -v FPAT='"[^"]+"' 'BEGIN{n=split(flds,f,/ /)} {for (i=1;i<=n;i++) printf "%s%s", $(f[i]), (i<n?OFS:ORS)}' file
"last" "first"
"last" "fir|st"
"final|ly" "firtst one"
If you don't want your output fields separated by a blank char then set OFS to whatever you do want with -v OFS='whatever'. If you want to get rid of the surrounding quotes you can use gensub() (since we're using gawk anyway) or substr() on every field, e.g.:
$ awk -v OFS=';' -v flds='1 3' -v FPAT='"[^"]+"' 'BEGIN{n=split(flds,f,/ /)} {for (i=1;i<=n;i++) printf "%s%s", substr($(f[i]),2,length($(f[i]))-2), (i<n?OFS:ORS)}' file
first;last
fir|st;last
firtst one;final|ly
$ awk -v OFS=';' -v flds='1 3' -v FPAT='"[^"]+"' 'BEGIN{n=split(flds,f,/ /)} {for (i=1;i<=n;i++) printf "%s%s", gensub(/"/,"","g",$(f[i])), (i<n?OFS:ORS)}' file
first;last
fir|st;last
firtst one;final|ly
In GNU awk (for FPAT):
$ test="2 3" # fields to exclude in bash var $test
$ awk -v t="$test" ' # taken to awk var t
BEGIN { # first
FPAT="([^|]+)|( *\"[^\"]+\")" # instead of FS, use FPAT
split(t,a," ") # process t to e:
for(i in a) # a[1]=2 -> e[2], etc.
e[a[i]]
}
{
for(i=1;i<=NF;i++) # for each field
if((i in e)==0) { # if field # not in e
gsub(/^\"|\"$/,"",$i) # remove leading and trailing "
b=b (b==""?"":OFS) $i # put to buffer b
}
print b; b="" # output and reset buffer
}' file
first
fir|st
firtst one
FPAT is used because FS can't handle a separator inside quotes.
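A quick way to see why (a demonstration, not part of the answer): with a plain FS the | inside the quotes splits the field:
$ awk -F'|' '{print $1}' file
"first"
"fir
"firtst one"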
Vikram, if your actual Input_file is exactly the same as the sample Input_file shown, then the following may help. I will add an explanation here shortly too (tested this with GNU awk 3.1.7, a little old version of awk).
awk -v num="2,3" 'BEGIN{
len=split(num, val,",")
}
{while($0){
match($0,/.[^"]*/);
if(substr($0,RSTART,RLENGTH+1) && substr($0,RSTART,RLENGTH+1) !~ /\"\| \"/ && substr($0,RSTART,RLENGTH+1) !~ /^\"$/ && substr($0,RSTART,RLENGTH+1) !~ /^\" \"$/){
array[++i]=substr($0,RSTART,RLENGTH+1)
};
$0=substr($0,RLENGTH+1);
};
for(l=1;l<=len;l++){
delete array[val[l]]
};
for(j=1;j<=length(array);j++){
if(array[j]){
gsub(/^\"|\"$/,"",array[j]);
printf("%s%s",array[j],j==length(array)?"":" ")
}
};
print "";
i="";
delete array
}' Input_file
EDIT1: Adding the code with an explanation here too.
awk -v num="2,3" 'BEGIN{ ##creating a variable named num whose value is the comma-separated list of field numbers you want to nullify; starting the BEGIN section here.
len=split(num, val,",") ##creating an array named val, split on commas, and a variable len whose value is the number of elements in val.
}
{while($0){ ##Starting a while loop here which runs on the current line until that line has been fully consumed (becomes null).
match($0,/.[^"]*/);##using the match function, which matches one character and then everything up to (but not including) the next " character.
if(substr($0,RSTART,RLENGTH+1) && substr($0,RSTART,RLENGTH+1) !~ /\"\| \"/ && substr($0,RSTART,RLENGTH+1) !~ /^\"$/ && substr($0,RSTART,RLENGTH+1) !~ /^\" \"$/){##RSTART and RLENGTH are the variables set when the regex given to match() finds a match in the line/variable. In this if condition I am checking, 1st: the substring from RSTART of length RLENGTH+1 is not NULL; 2nd: the substring does not contain the separator sequence "| "; 3rd: the substring is not just a single "; 4th: the substring is not exactly " " (quote, space, quote). If all conditions are TRUE then do the following actions.
array[++i]=substr($0,RSTART,RLENGTH+1) ##creating an array named array whose index is the incrementing variable i and whose value is the substring from RSTART of length RLENGTH+1.
};
$0=substr($0,RLENGTH+1);##Now removing the matched part from the current line, which shortens the line and keeps the while loop from becoming infinite.
};
for(l=1;l<=len;l++){##Starting a loop here, once the while loop above is done, which runs from l=1 to the value of len.
delete array[val[l]] ##Deleting the entries whose field numbers the OP wants removed from the output.
};
for(j=1;j<=length(array);j++){##Starting a for loop from j=1 up to the length of the array.
if(array[j]){ ##Making sure the array value whose index is j is NOT NULL; if so, perform the following statements.
gsub(/^\"|\"$/,"",array[j]); ##Globally substituting the leading " and trailing " with NULL in the array value.
printf("%s%s",array[j],j==length(array)?"":" ") ##Now printing the value of the array, followed by a space, or by nothing when j equals the array length, because we do not want a trailing space at the end of the line.
}
};
print ""; ##Because the printf above does NOT print a newline, printing a newline here.
i=""; ##Nullifying variable i here.
delete array ##Deleting the array here.
}' Input_file ##Mentioning the Input_file here.

awk - how to replace semicolon in string in csv file?

I need to manage smtp logfile handling in my company.
These logfiles need to be imported to MSSQL, so it is my job to provide this data.
I got a strange undeliverable message with a ";" in the string, and I need to replace this with a comma.
So what I got:
Sender;Recipient;Operation;Answer;Error;Servername
bla#bla.com;rockit#sohard.com;RCPT TO;450;+4.2.0+<rockit#sohard.com>:+Recipient+address+rejected:+Policy+restrictions;+try+later;M0641
Note the ";" in the Error field after "restrictions"; dunno why the mail server sends semicolons, maybe to annoy me :P
I tried the following with awk after I did a lot of research:
awk 'BEGIN{FS=OFS=";"} {for (i=5;i<=NF;i++) gsub (";",",",$i)} 1' myfile.csv
This command runs without errors but it seems to do nothing with my file; the ";" in the error field remains. What am I missing here?
Your gsub has nothing to match: with FS=";" every ";" is consumed as a field separator, so no individual field $i ever contains a ";" (and awk only writes to stdout, it does not modify the file in place). Here are some approaches that do work.
Replacing the fifth and later ; with ,
$ awk -F\; '{for (i=1;i<=NF;i++) printf "%s%s",$i,(i==NF?ORS:(i<=4?";":","))}' myfile.csv
Sender;Recipient;Operation;Answer;Error,Servername
bla#bla.com;rockit#sohard.com;RCPT TO;450;+4.2.0+<rockit#sohard.com>:+Recipient+address+rejected:+Policy+restrictions,+try+later,M0641
How it works:
-F\;
This sets the field separator for input to ;.
for (i=1;i<=NF;i++) printf "%s%s",$i,(i==NF?ORS:(i<=4?";":","))
This loops over every field and prints the field followed by (a) ORS if we are on the last field, or (b) , if we are on field 5 or later, or (c) ; if we are on one of the first four fields.
Replacing all ; with ,
Try:
$ awk -F\; '{$1=$1} 1' OFS=, myfile.csv
Sender,Recipient,Operation,Answer,Error,Servername
bla#bla.com,rockit#sohard.com,RCPT TO,450,+4.2.0+<rockit#sohard.com>:+Recipient+address+rejected:+Policy+restrictions,+try+later,M0641
How it works:
-F\;
This sets the field separator on input to a semicolon.
$1=$1
This causes awk to think the line has been changed, so awk rebuilds the output line using the new output field separator (see the quick check after this list).
1
This tells awk to print the line.
OFS=,
This sets the field separator on output to a comma.
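For contrast, a quick check (not part of the answer): without the $1=$1 assignment awk never rebuilds the record, so changing OFS has no visible effect and the lines come out exactly as they went in:
$ awk -F\; '1' OFS=, myfile.csv
Sender;Recipient;Operation;Answer;Error;Servername
bla#bla.com;rockit#sohard.com;RCPT TO;450;+4.2.0+<rockit#sohard.com>:+Recipient+address+rejected:+Policy+restrictions;+try+later;M0641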
Alternative #1
$ awk '{gsub(/;/, ",")} 1' myfile.csv
Sender,Recipient,Operation,Answer,Error,Servername
bla#bla.com,rockit#sohard.com,RCPT TO,450,+4.2.0+<rockit#sohard.com>:+Recipient+address+rejected:+Policy+restrictions,+try+later,M0641
Alternative #2
$ sed 's/;/,/g' myfile.csv
Sender,Recipient,Operation,Answer,Error,Servername
bla#bla.com,rockit#sohard.com,RCPT TO,450,+4.2.0+<rockit#sohard.com>:+Recipient+address+rejected:+Policy+restrictions,+try+later,M0641
I think your problem is replacing the unquoted delimiters in your logical 4th field of a five-field-wide input. Although this script is repetitious, it should be easier to understand:
$ awk '{n=split($0,a,";");
for(i=1; i<4; i++) printf "%s;", a[i];
for(i=4; i<n-1; i++) printf "%s,", a[i];
printf "%s;%s\n", a[n-1], a[n]}' file
A better way to write the same, based on #Ed Morton's comments:
$ awk -F';' '{for(i=1; i<NF-1; i++) printf "%s"(i<4?FS:","), $i;
print $(NF-1) FS $NF}' file
For the input
1;2;3;4a;4b;4c;5
1;2;3;4;5
it generates
1;2;3;4a,4b,4c;5
1;2;3;4;5
If the offending semi-colons only appear in your 5th field then you can do this using GNU awk for the 3rd arg to match():
$ awk 'match($0,/(([^;]+;){4})(.*)(;[^;]+$)/,a){gsub(/;/,",",a[3]); print a[1] a[3] a[4]}' file
bla#bla.com;rockit#sohard.com;RCPT TO;450;+4.2.0+<rockit#sohard.com>:+Recipient+address+rejected:+Policy+restrictions,+try+later;M0641
If the stray fifth ; should be removed, append $6 to $5 and shift the remaining fields down accordingly. This could be done with a for loop (there are examples on SO; a sketch follows at the end of this answer), but since the fault is so near the end, we'll just do this in a simpler way:
$ awk 'BEGIN {FS=OFS=";"} NR==1 {nf=NF} NF==(nf+1) {$5=$5 "," $6; $6=$7; NF=nf} 1' file
Explained:
BEGIN {FS=OFS=";"} # set separator
NR==1 {nf=NF} # get field count from the first record (6)
NF==(nf+1) { # if record is one field longer:
$5=$5 "," $6 # append $6 to $5, comma-separated
$6=$7 # move $7 down to $6
NF=nf # reset NF
} 1 # output
Testing: Running the program and sending the output to cut -d\; -f 5 outputs:
Error
+4.2.0+<rockit#sohard.com>:+Recipient+address+rejected:+Policy+restrictions,+try+later
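As mentioned above, here is a sketch of the more general for-loop variant (assuming, as before, that the first line has the normal field count and the stray semicolons can only appear inside the 5th field): join everything from field 5 up to the next-to-last field with commas, move the last field down, and truncate the record:
$ awk 'BEGIN {FS=OFS=";"} NR==1 {nf=NF} NF>nf {for (i=6; i<NF; i++) $5=$5 "," $i; $(nf)=$NF; NF=nf} 1' file
Sender;Recipient;Operation;Answer;Error;Servername
bla#bla.com;rockit#sohard.com;RCPT TO;450;+4.2.0+<rockit#sohard.com>:+Recipient+address+rejected:+Policy+restrictions,+try+later;M0641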

Chop a row into multiple rows using awk

I am trying to chop a line into multiple lines using awk, after every two words.
Input:
hey there this is a test
Output:
hey there
this is
a test
I am able to achieve it using xargs, as follows:
echo hey there this is a test |xargs -n2
hey there
this is
a test
However, I am curious to know how to achieve this using awk. Here is the command I am using, which of course didn't give the expected result.
echo hey there this is a test | awk '{ for(i=1;i<=NF;i++) if(i%2=="0") ORS="\n" ;else ORS=" "}1'
hey there this is a test
And
echo hey there this is a test | awk '{$1=$1; for(i=1;i<=NF;i++) if(i%2==0) ORS="\n" ;else ORS=" "}{ print $0}'
hey there this is a test
I need to know what is conceptually wrong in the above awk command and how it can be modified to give the correct output. Assume the input is a single line.
Thanks and Regards.
Using awk you can do:
s='hey there this is a test'
awk '{for (i=1; i<=NF; i++) printf "%s%s", $i, (i%2 ? OFS : ORS)}' <<< "$s"
hey there
this is
a test
First, you want OFS (the output field separator), not ORS (the output record separator).
And your for loop ends up setting just one final ORS value: it iterates over all the fields, flipping ORS back and forth between " " and "\n", and only whatever was set last is in effect when the record is actually printed.
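A quick check makes this visible (not part of the fix): with an odd number of words the last assignment leaves ORS as a space, so the single print at the end emits the line unchanged and without a newline:
$ echo one two three | awk '{for(i=1;i<=NF;i++) if(i%2==0) ORS="\n"; else ORS=" "}1'; echo '<-- printed on the same line'
one two three <-- printed on the same line
With an even number of words ORS ends up as "\n" again, which is why your six-word example looked completely untouched.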
So what you really want is to operate on records (normally those are lines) instead of fields (normally spaces separate them).
Here's a version that uses records:
echo hey there this is a test | awk 'BEGIN {RS=" "} {if ((NR-1)%2 == 0) { ORS=" "} else {ORS="\n"}}1'
Result:
hey there
this is
a test
Another flavour of #krzyk's version:
$ awk 'BEGIN {RS=" "} {ORS="\n"} NR%2 {ORS=" "} 1' test.in
hey there
this is
a test
$
Maybe even:
awk 'BEGIN {RS=" "} {ORS=(ORS==RS?"\n":RS)} 1' test.in
They both do leave an ugly extra newline at the end, though.
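If that bothers you, one way around it with GNU awk (a sketch, relying on gawk treating a multi-character RS as a regex) is to fold the final newline into the record separator and terminate each pair explicitly:
$ awk 'BEGIN {RS="[ \n]+"} {printf "%s%s", $0, (NR%2 ? " " : "\n")}' test.in
hey there
this is
a test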

creating a ":" delimited list in bash script using awk

I have the following lines:
380:<CHECKSUM_VALIDATION>
393:</CHECKSUM_VALIDATION>
437:<CHECKSUM_VALIDATION>
441:</CHECKSUM_VALIDATION>
I need to format them as below:
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441
Is it possible to achieve the above output using "awk"? [I'm using bash]
Thank you!
Here you go:
awk -F '[:<>/]+' '{ n = $1; getline; print $2 ":" n ":" $1 }'
Explanation:
Set the field separator with -F to a sequence of any mix of :<>/ characters; this way the first field will be the number, and the second will be CHECKSUM_VALIDATION.
Save the first field in variable n and read the next line (which would overwrite $1)
Print the line: a combination of the number from the previous line, and the fields on the current line
Another approach without using getline:
awk -F '[:<>/]+' 'NR % 2 { n = $1 } NR % 2 == 0 { print $2 ":" n ":" $1 }'
This one uses the record counter NR to determine whether it's time to print: if NR is odd, save the first field in n, if NR is even, then print.
You can try this sed:
sed 'N; s/\([0-9]\+\):<\(.*\)>\n\([0-9]\+\):<\(.*\)>/\2:\1:\3/' file.txt
Test:
sat:~$ sed 'N; s/\([0-9]\+\):<\(.*\)>\n\([0-9]\+\):<\(.*\)>/\2:\1:\3/' file.txt
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441
Another way:
awk -F: '/<C/ {printf "CHECKSUM_VALIDATION:%d:",$1; next} {print $1}'
Here is one in GNU awk:
awk -F"[:\n<>]" 'NR==1{print $3,$1,$5;f=$3;next} $3{print f,$3,$7}' OFS=":" RS="</CH" file
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441
Based on Jonas's post and avoiding getline, this awk should do:
awk -F '[:<>/]+' '/<C/ {f=$1;next} { print $2,f,$1}' OFS=\: file
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441
