Consolidate two tables awk - bash

I have two files (all tab delimited):
database.txt
MAR001;string1;H
MAR002;string2;G
MAR003;string3;H
data.txt
data1;MAR002
data2;MAR003
And I want to consolidate these two tables using the MAR### column. Expected output (tab-delimited):
data1;MAR002;string2;G
data2;MAR003;string3;H
I want to use awk; this is my attempt:
awk 'BEGIN{FS=OFS="\t"} FNR == NR { a[$2] = $1; next } $2 in a { print $0, a[$1] }' data.txt database.txt
but this fails...

I would just use the join command. It's very easy:
join -t \; -1 1 -2 2 database.txt data.txt
MAR002;string2;G;data1
MAR003;string3;H;data2
You can specify output column order using -o. For example:
join -t \; -1 1 -2 2 -o 2.1,2.2,1.2,1.3 database.txt data.txt
data1;MAR002;string2;G
data2;MAR003;string3;H
P.S. I did assume your files are "semicolon separated" and not "tab separated". Also, your files need to be sorted by the key column.
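If your files are not already sorted on the key, process substitution can sort them on the fly. A minimal sketch using the sample data (requires bash for the <( ... ) syntax):

```shell
# Recreate the sample files (semicolon-separated, per the P.S. above).
printf 'MAR001;string1;H\nMAR002;string2;G\nMAR003;string3;H\n' > database.txt
printf 'data1;MAR002\ndata2;MAR003\n' > data.txt

# Sort each input on its join key before handing it to join.
join -t \; -1 1 -2 2 -o 2.1,2.2,1.2,1.3 \
    <(sort -t \; -k1,1 database.txt) \
    <(sort -t \; -k2,2 data.txt)
```

This prints the same two lines as above.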

awk -F '\t' '
FNR==1 && NR == 1 { strt=1 }
FNR==1 && NR != 1 { strt=0 }
strt==1 { dat[$1] = $2";"$3 }
strt==0 { if ( dat[$2] != "" ) { print $1";"$2";"dat[$2] } }
' database.txt data.txt
Read database.txt first, storing its data into an array dat. Then, when we reach data.txt, check for entries in the dat array and print the required data if there is a match.
Output:
data1;MAR002;string2;G
data2;MAR003;string3;H

First of all, ; and \t are different characters. If your real input files are tab-delimited, here is the fix for your code.
Change your script to:
awk '....... $1 in a { print a[$1], $0 }' data.txt database.txt
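Putting the fix together, a sketch assuming the files really are tab-separated as described: read data.txt first, key the array on the MAR### field, then prepend the stored data field when a database.txt key matches.

```shell
# Recreate the sample files with real tabs.
printf 'MAR001\tstring1\tH\nMAR002\tstring2\tG\nMAR003\tstring3\tH\n' > database.txt
printf 'data1\tMAR002\ndata2\tMAR003\n' > data.txt

awk 'BEGIN { FS=OFS="\t" }
     FNR==NR { a[$2] = $1; next }   # data.txt: remember data field, keyed on MAR###
     $1 in a { print a[$1], $0 }    # database.txt: prepend the matching data field
' data.txt database.txt
```

This prints the data field first, then the database line, matching the expected output.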

Related

Split a large gz file into smaller ones filtering and distributing content

I have a gzip file of size 81G; uncompressed, the file is 254G. I want to implement a bash script that takes the gzip file and splits it on the basis of the first column. The first column has values ranging from 1 to 10, and I want to split the file into 10 subfiles: all rows where the value in the first column is 1 go into one file, all rows where the value is 2 go into a second file, and so on. While doing that, I don't want to put column 3 and column 5 in the new subfiles. The file is tab-separated. For example:
col_1  col_2  col_3    col_4  col_5   col_6
1      7464   sam      NY     0.738   28.9
1      81932  Dave     NW     0.163   91.9
2      162    Peter    SD     0.7293  673.1
3      7193   Ooni     GH     0.746   6391
3      6139   Jess     GHD    0.8364  81937
3      7291   Yeldish  HD     0.173   1973
File above will result in three different gzipped files such that col_3 and col_5 are removed from each of the new subfiles. What I did was
#!/bin/bash
#SBATCH --partition normal
#SBATCH --mem-per-cpu 500G
#SBATCH --time 12:00:00
#SBATCH -c 1
awk -F, '{print > $1".csv.gz"}' file.csv.gz
But this is not producing the desired result, and I don't know how to remove col_3 and col_5 from the new subfiles.
Like I said, the gzip file is 81G, so I am looking for an efficient solution. Insights will be appreciated.
You have to decompress and recompress; to get rid of columns 3 and 5, you could use GNU cut like this:
gunzip -c infile.gz \
| cut --complement -f3,5 \
| awk '{ print | ("gzip > " $1 ".csv.gz") }'
Or you could get rid of the columns in awk:
gunzip -c infile.gz \
| awk -v OFS='\t' '{ print $1, $2, $4, $6 | ("gzip > " $1 ".csv.gz") }'
Something like
zcat input.csv.gz | cut -f1,2,4,6- | awk '{ print | ("gzip -c > " $1 ".csv.gz") }'
Uncompress the file, remove fields 3 and 5, save to the appropriate compressed file based on the first column.
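A minimal end-to-end sketch of the awk-only variant, on a tiny sample (hypothetical file names; assumes gzip is on PATH):

```shell
# Build a small tab-separated sample and compress it.
printf '1\t7464\tsam\tNY\t0.738\t28.9\n2\t162\tPeter\tSD\t0.7293\t673.1\n' \
    | gzip > infile.gz

# Split on column 1, dropping columns 3 and 5.
gunzip -c infile.gz \
    | awk -v OFS='\t' '{ print $1, $2, $4, $6 | ("gzip > " $1 ".csv.gz") }'

gunzip -c 1.csv.gz    # 1  7464  NY  28.9
```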
Robustly and portably with any awk, if the file is always sorted by the first field as shown in your example:
gunzip -c infile.gz |
awk '
BEGIN { FS=OFS="\t" }
{ $0 = $1 OFS $2 OFS $4 OFS $6 }
NR==1 { hdr = $0; next }
$1 != prev { close(gzip); gzip="gzip > \047"$1".csv.gz\047"; prev=$1 }
!seen[$1]++ { print hdr | gzip }
{ print | gzip }
'
otherwise:
gunzip -c infile.gz |
awk 'BEGIN{FS=OFS="\t"} {print (NR>1), NR, $0}' |
sort -k1,1n -k3,3 -k2,2n |
cut -f3- |
awk '
BEGIN { FS=OFS="\t" }
{ $0 = $1 OFS $2 OFS $4 OFS $6 }
NR==1 { hdr = $0; next }
$1 != prev { close(gzip); gzip="gzip > \047"$1".csv.gz\047"; prev=$1 }
!seen[$1]++ { print hdr | gzip }
{ print | gzip }
'
The first awk adds a number at the front to ensure the header line sorts before the rest during the sort phase, and adds the line number so that lines with the same original first field value retain their original input order. Then we sort by the first field, and then cut away the 2 fields added in the first step, then use awk to robustly and portably create the separate output files, ensuring that each output file starts with a copy of the header. We close each output file as we go so that the script will work for any number of output files using any awk and will work efficiently even for a large number of output files with GNU awk. It also ensures that each output file name is properly quoted to avoid globbing, word splitting, and filename expansion.

using awk how to merge 2 files, say A & B and do a left outer join function and include all columns in both files

I have multiple files with different numbers of columns. I need to merge the first file and the second file, doing a left outer join in awk with respect to the first file, and print all columns of both files, matched on the first column of each file.
I have tried the code below to get close to my output, but I can't print the empty fields (",,") where no matching number is found in the second file. join needs sorting and takes more time than awk; my files are big, around 30 million records.
awk -F ',' '{
if (NR==FNR){ r[$1]=$0}
else{ if($1 in r)
r[$1]=r[$1]gensub($1,"",1)}
}END{for(i in r){print r[i]}}' file1 file2
file1
number,column1,column2,..columnN
File2
number,column1,column2,..columnN
Output
number,file1.column1,file1.column2,..file1.columnN,file2.column1,file2.column3...,file2.columnN
file1
1,a,b,c
2,a,b,c
3,a,b,c
5,a,b,c
file2
1,x,y
2,x,y
5,x,y
6,x,y
7,x,y
desired output
1,a,b,c,x,y
2,a,b,c,x,y
3,a,b,c,,,
5,a,b,c,x,y
$ cat tst.awk
BEGIN { FS=OFS="," }
NR==FNR {
    tail = gensub(/[^,]*,/,"",1)
    if ( FNR == 1 ) {
        empty = gensub(/[^,]/,"","g",tail)
    }
    file2[$1] = tail
    next
}
{ print $0, ($1 in file2 ? file2[$1] : empty) }
$ awk -f tst.awk file2 file1
1,a,b,c,x,y
2,a,b,c,x,y
3,a,b,c,,
5,a,b,c,x,y
The above uses GNU awk for gensub(), with other awks it's just one more step to do [g]sub() on the appropriate variable after initially assigning it.
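For the record, that extra step might look like this (a sketch using POSIX sub()/gsub() in place of GNU gensub(); same logic, same sample files):

```shell
printf '1,x,y\n2,x,y\n5,x,y\n6,x,y\n7,x,y\n' > file2
printf '1,a,b,c\n2,a,b,c\n3,a,b,c\n5,a,b,c\n' > file1

awk '
BEGIN { FS=OFS="," }
NR==FNR {
    tail = $0
    sub(/[^,]*,/, "", tail)        # strip the key field
    if ( FNR == 1 ) {
        empty = tail
        gsub(/[^,]/, "", empty)    # keep only the commas
    }
    file2[$1] = tail
    next
}
{ print $0, ($1 in file2 ? file2[$1] : empty) }
' file2 file1
```

This prints the same four lines shown above.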
An interesting (to me at least!) alternative you might want to test for a performance difference is:
$ cat tst.awk
BEGIN { FS=OFS="," }
NR==FNR {
    tail = gensub(/[^,]*,/,"",1)
    idx[$1] = NR
    file2[NR] = tail
    if ( FNR == 1 ) {
        file2[""] = gensub(/[^,]/,"","g",tail)
    }
    next
}
{ print $0, file2[idx[$1]] }
$ awk -f tst.awk file2 file1
1,a,b,c,x,y
2,a,b,c,x,y
3,a,b,c,,
5,a,b,c,x,y
but I don't really expect it to be any faster and it MAY even be slower.
You can try:
awk 'BEGIN{FS=OFS=","}
FNR==NR{d[$1]=substr($0,index($0,",")+1); next}
{print $0, ($1 in d?d[$1]:",")}' file2 file1
You get:
1,a,b,c,x,y
2,a,b,c,x,y
3,a,b,c,,
5,a,b,c,x,y
join to the rescue:
$ join -t $',' -a 1 -e '' -o 0,1.2,1.3,1.4,2.2,2.3 file1.txt file2.txt
Explanation:
-t $',': Field separator.
-a 1: Do not discard records from file 1 if they are not present in file 2.
-e '': Missing fields are replaced with the empty string.
-o: Output format.
file1.txt
1,a,b,c
2,a,b,c
3,a,b,c
5,a,b,c
file2.txt
1,x,y
2,x,y
5,x,y
6,x,y
7,x,y
Output
1,a,b,c,x,y
2,a,b,c,x,y
3,a,b,c,,
5,a,b,c,x,y

Left outer join of multiple files data using awk command

I have a base file and multiple files with data keyed on the 1st field of the base file. I need an output file combining all of the data. I have tried many commands; due to the file size they take too much time. awk has often helped me out, but I don't have any idea of awk array programming.
example
Base File
aa
ab
ac
ad
ae
File -1
aa,Apple
ab,Orange
ac,Mango
File -2
aa,1
ab,2
ae,3
Output File expected
aa,Apple,1
ab,Orange,2
ac,Mango,
ad,,
ae,,3
This is what I tried:
awk -F, 'FNR==NR{a[$1]=$0;next}{if(b=a[$1]) print b,$2; else print $1 }' OFS=, test.txt test2.txt
You could try 2 successive joins. Something like the following command should work:
join -a 1 -t, -e '' -o auto <(join -a 1 -t, -e '' -o auto base_file file1) file2
Here, we first join base_file and file1, then join the result with file2.
Explanation :
join -a 1 -t, -e '' -o auto base_file file1 :
-a 1 : displays the fields of base_file even if there is no match in the file1
-t, : we treat the character , as our field separator. This impacts both the input files and the output of the function.
-e '' -o auto : when a field is not present, output the empty string ''. The -e option depends on the -o option; -o auto infers the output format from the first line of each file.
Output :
aa,Apple,1
ab,Orange,2
ac,Mango,
ad,,
ae,,3
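A runnable sketch of the nested join on the sample data (assumes GNU join for -o auto, and bash for process substitution):

```shell
printf 'aa\nab\nac\nad\nae\n' > base_file
printf 'aa,Apple\nab,Orange\nac,Mango\n' > file1
printf 'aa,1\nab,2\nae,3\n' > f ile2

# Join base_file with file1 first, then join that result with file2.
join -a 1 -t, -e '' -o auto <(join -a 1 -t, -e '' -o auto base_file file1) file2
```

This prints the five lines shown in the Output above.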
awk way:
awk -F, -v OFS="," 'NR==FNR{a[$1]=$2}FILENAME==ARGV[2]{b[$1]=$2}
FILENAME==ARGV[3]{print $0,a[$0],b[$0]}' f1 f2 base
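That answer doesn't show its output; a quick sketch with the sample data (file names f1, f2, base as in the command):

```shell
printf 'aa\nab\nac\nad\nae\n' > base
printf 'aa,Apple\nab,Orange\nac,Mango\n' > f1
printf 'aa,1\nab,2\nae,3\n' > f2

# f1 values go into a[], f2 values into b[]; base drives the output order.
awk -F, -v OFS="," 'NR==FNR{a[$1]=$2}FILENAME==ARGV[2]{b[$1]=$2}
    FILENAME==ARGV[3]{print $0,a[$0],b[$0]}' f1 f2 base
```

This prints the five expected lines, aa,Apple,1 through ae,,3.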
This will work in any awk for any number of input files:
$ cat tst.awk
BEGIN { FS=OFS="," }
!seen[$1]++ { keys[++numKeys] = $1 }
FNR==1 { ++numFiles }
{ a[$1,numFiles]=$2 }
END {
    for (keyNr=1; keyNr <= numKeys; keyNr++) {
        key = keys[keyNr]
        printf "%s%s", key, OFS
        for (fileNr=2; fileNr <= numFiles; fileNr++) {
            printf "%s%s", a[key,fileNr], (fileNr<numFiles ? OFS : ORS)
        }
    }
}
$ awk -f tst.awk base file1 file2
aa,Apple,1
ab,Orange,2
ac,Mango,
ad,,
ae,,3

AWK: Compare two CSV files

I have two CSV files and I want to compare them using AWK and generate a new file.
file1.csv:
"no","loc"
"abc121","C:/pro/in"
"abc122","C:/pro/abc"
"abc123","C:/pro/xyz"
"abc124","C:/pro/in"
file2.csv:
"no","loc"
"abc121","C:/pro/in"
"abc122","C:/pro/abc"
"abc125","C:/pro/xyz"
"abc126","C:/pro/in"
output.csv:
"file1","file2","Diff"
"abc121","abc121","Match"
"abc122","abc122","Match"
"abc123","","Unmatch"
"abc124","","Unmatch"
"","abc125","Unmatch"
"","abc126","Unmatch"
One way with awk:
script.awk:
BEGIN {
    FS = ","
}
NR>1 && NR==FNR {
    a[$1] = $2
    next
}
FNR>1 {
    print ($1 in a) ? $1 FS $1 FS "Match" : "\"\"" FS $1 FS "Unmatch"
    delete a[$1]
}
END {
    for (x in a) {
        print x FS "\"\"" FS "Unmatch"
    }
}
Output:
$ awk -f script.awk file1.csv file2.csv
"abc121","abc121",Match
"abc122","abc122",Match
"","abc125",Unmatch
"","abc126",Unmatch
"abc124","",Unmatch
"abc123","",Unmatch
I didn't use awk alone, but if I understood the gist of what you're asking correctly, I think this long one-liner should do it...
join -t, -a 1 -a 2 -o 1.1 2.1 1.2 2.2 file1.csv file2.csv | awk -F, '{ if ( $3 == $4 ) var = "\"Match\""; else var = "\"Unmatch\"" ; print $1","$2","var }' | sed -e '1d' -e 's/^,/"",/' -e 's/,$/,""/' -e 's/,,/,"",/g'
Description:
The join portion takes the two CSV files, joins them on the first column (default behavior of join) and outputs all four fields (-o 1.1 2.1 1.2 2.2), making sure to include rows that are unmatched for both files (-a 1 -a 2).
The awk portion takes that output and maps the combination of the 3rd and 4th columns to either "Match" or "Unmatch", based on whether they do in fact match. I had to make an assumption about this behavior based on your example.
The sed portion deletes the "no","loc" header from the output (-e '1d') and replaces empty fields with open-close quote marks (-e 's/^,/"",/' -e 's/,$/,""/' -e 's/,,/,"",/g'). This last part might not be necessary for you.
EDIT:
As tripleee points out, the above fails if the two initial files are unsorted. Here's an updated command to fix that. It punts the header line and sorts each file before passing them to join...
join -t, -a 1 -a 2 -o 1.1 2.1 1.2 2.2 <( sed 1d file1.csv | sort ) <( sed 1d file2.csv | sort ) | awk -F, '{ if ( $3 == $4 ) var = "\"Match\""; else var = "\"Unmatch\"" ; print $1","$2","var }' | sed -e 's/^,/"",/' -e 's/,$/,""/' -e 's/,,/,"",/g'
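To see the updated command at work on the sample data, here is a sketch; the -o list is written comma-separated for portability, and bash is assumed for process substitution:

```shell
printf '"no","loc"\n"abc121","C:/pro/in"\n"abc122","C:/pro/abc"\n"abc123","C:/pro/xyz"\n"abc124","C:/pro/in"\n' > file1.csv
printf '"no","loc"\n"abc121","C:/pro/in"\n"abc122","C:/pro/abc"\n"abc125","C:/pro/xyz"\n"abc126","C:/pro/in"\n' > file2.csv

# Drop each header, sort, full-outer-join, classify, then quote empty fields.
join -t, -a 1 -a 2 -o 1.1,2.1,1.2,2.2 <( sed 1d file1.csv | sort ) <( sed 1d file2.csv | sort ) \
    | awk -F, '{ if ( $3 == $4 ) var = "\"Match\""; else var = "\"Unmatch\"" ; print $1","$2","var }' \
    | sed -e 's/^,/"",/' -e 's/,$/,""/' -e 's/,,/,"",/g'
```

This reproduces the body of output.csv from the question (without the header line).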

Merge two files using awk and write the output

I have two files with a common field. I want to merge the two files on the common field and write the merged output to another file, using awk on the Linux command line.
file1
412234$name1$value1$mark1
413233$raja$$mark2
414444$$$
file2
412234$sum$file2$address$street
413233$sum2$file32$address2$street2$path
414444$$$$
These sample files are separated by $ and the merged output file should also use $. Some rows have empty fields.
I tried the script using join:
join -t "$" out2.csv out1.csv |sort -un > file3.csv
But the total number of lines did not match.
Tried with awk:
myawk.awk
#!/usr/bin/awk -f
NR==FNR{a[FNR]=$0;next} {print a[FNR],$2,$3}
I ran it
awk -f myawk.awk out2.csv out1.csv > file3.csv
It was also taking too much time and not responding.
Here out2.csv is the master file and we have to compare it with out1.csv.
Could you please help me to write the merged files into another file?
Run the following using bash. This gives you the equivalent of a full outer join:
join -t'$' -a 1 -a 2 <(sort -k1,1 -t'$' out1.csv ) <(sort -k1,1 -t'$' out2.csv )
You were headed in the right direction with the awk solution. The main point was to change FS to split fields on $:
Content of script.awk:
awk '
BEGIN {
    ## Split fields with "$".
    FS = "$"
}

## Save lines from out2.csv: the first field is the index of the
## array, and the rest of the line is the value.
FNR == NR {
    file2[ $1 ] = substr( $0, index( $0, "$" ) )
    next
}

## Print when keys from both files match.
FNR < NR {
    if ( $1 in file2 ) {
        printf "%s$%s\n", $0, file2[ $1 ]
    }
}
' out2.csv out1.csv
Output:
412234$name1$value1$mark1$$sum$file2$address$street
413233$raja$$mark2$$sum2$file32$address2$street2$path
414444$$$$$$$$
