Merging output of two awk scripts into 1 file - bash

I have a large input file with 150+ columns and 50M rows, a sample of which is shown here:
id,c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14
1,0,0,0,0,1,0,0,1,1,0,0,1,0,0
2,0,0,1,0,0,1,1,0,0,0,1,0,0,1
I have a bash shell script:
function awkScript() {
    awk -F, -v cols="$1" -v hdr="$2" '
        BEGIN {OFS=FS}
        NR==1 {n=split(cols,cn);
               for(i=1;i<=NF;i++)
                   for(j=1;j<=n;j++)
                       if($i==cn[j]) c[++k]=i;
               $(NF+1)=hdr}
        NR>1  {v1=$c[1]; v2=$c[2]; v3=$c[3]
               if(!v2 && !v3) $(NF+1) = v1?10:0
               else $(NF+1) = v3?(v1-v3)/v3:0 + v2?(v1-v2)/v2:0}1' "$3"
}
function awkScript1() {
    awk -F, -v cols="$1" -v hdr="$2" '
        BEGIN {OFS=FS}
        NR==1 {n=split(cols,cn);
               for(i=1;i<=NF;i++)
                   for(j=1;j<=n;j++)
                       if($i==cn[j]) c[++k]=i;
               $(NF+1)=hdr}
        NR>1  {v1=$c[1]; v2=$c[2]; v3=$c[3]; v4=$c[4]
               $(NF+1) = v1?(v1/(v1+v2+v3+v4)):0
               }1' "$3"
}
function awkScriptWrapper() {
    awkScript "$1" "$2"
}
function awkScriptWrapper1() {
    awkScript1 "$1" "$2"
}
awkScript "c1,c2,c3" "Header1" "input.txt" | awkScriptWrapper "c4,c5,c6" "Header2" >> output.txt
awkScript1 "c7,c8,c9,c10" "Header3" "input.txt" | awkScriptWrapper1 "c11,c12,c13,c14" "Header4" >> output1.txt
Sample of output.txt is:
id,c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14,Header1,Header2
1,0,0,0,0,1,0,0,1,1,0,0,1,0,0,0,-1
2,0,0,1,0,0,1,1,0,0,0,1,0,0,1,-1,-1
Sample of output1.txt is:
id,c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14,Header3,Header4
1,0,0,0,0,1,0,0,1,1,0,0,1,0,0,0,0
2,0,0,1,0,0,1,1,0,0,0,1,0,0,1,1,0.5
My requirement is that I have to append Header1,Header2,Header3,Header4 to the end of the same input file, i.e. the above script should produce just 1 output file, "finaloutput.txt":
id,c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14,Header1,Header2,Header3,Header4
1,0,0,0,0,1,0,0,1,1,0,0,1,0,0,0,-1,0,0
2,0,0,1,0,0,1,1,0,0,0,1,0,0,1,-1,-1,1,0.5
I tried doing the following statements:
awkScript "c1,c2,c3" "Header1" "input.txt" | awkScriptWrapper "c4,c5,c6" "Header2" >> temp_output.txt
awkScript1 "c7,c8,c9,c10" "Header3" "temp_output.txt" | awkScriptWrapper1 "c11,c12,c13,c14" "Header4" >> finaloutput.txt
But I'm not getting the output I want.
Any help would be much appreciated.

Assuming that you need to join two commands in a pipeline:
$ cmd1 | join --header -j1 -t, -o1.{1..17} -o2.16,2.17 - <(cmd2)
id,c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14,Header1,Header2,Header3,Header4
1,0,0,0,0,1,0,0,1,1,0,0,1,0,0,0,-1,0,0
2,0,0,1,0,0,1,1,0,0,0,1,0,0,1,-1,-1,1,0.5
The above assumes that cmd1 outputs:
id,c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14,Header1,Header2
1,0,0,0,0,1,0,0,1,1,0,0,1,0,0,0,-1
2,0,0,1,0,0,1,1,0,0,0,1,0,0,1,-1,-1
While cmd2 outputs:
id,c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14,Header3,Header4
1,0,0,0,0,1,0,0,1,1,0,0,1,0,0,0,0
2,0,0,1,0,0,1,1,0,0,0,1,0,0,1,1,0.5
How does it work?
--header will treat the first line in each file as field headers
-j1 will join on field one
-t, specifies , as the field delimiter
-o xxx will specify the output columns; 1.1 means column one from file one, in this case cmd1. 2.1 means column one from file two, in this case cmd2
-o1.{1..17} will expand to:
-o1.1 -o1.2 -o1.3 -o1.4 -o1.5 -o1.6 -o1.7 -o1.8 -o1.9 -o1.10 -o1.11 -o1.12 -o1.13 -o1.14 -o1.15 -o1.16 -o1.17
which is a quick way to specify the first 17 columns from cmd1.
- refers to standard input, which in this case is the output from cmd1
<(command) is a process substitution.
You can change to:
join [options] file1 file2
if you need to join two regular files.
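Applied to the script in the question, the whole thing collapses into one pipeline (a sketch, assuming the four functions are defined as shown above in the same script):
# cmd1 produces the first 17 columns (id..c14, Header1, Header2); cmd2 produces the same ids plus Header3/Header4
awkScript "c1,c2,c3" "Header1" "input.txt" | awkScriptWrapper "c4,c5,c6" "Header2" |
  join --header -j1 -t, -o1.{1..17} -o2.16,2.17 - \
    <(awkScript1 "c7,c8,c9,c10" "Header3" "input.txt" | awkScriptWrapper1 "c11,c12,c13,c14" "Header4") \
  > finaloutput.txt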

Related

Bash: Working with CSV file to build a loop and save the result

Using Bash, I want to get a list of email addresses from a CSV file, do a recursive grep search for each of them across a bunch of directories looking for a match in specific metadata XML files, and then tally up how many results I find for each address throughout the directory tree (i.e. update the tally field in the same CSV file).
accounts.csv looks something like this:
updated to more accurately reflect real-world data
email,date,bar,URL,"something else",tally
address#somewhere.com,21/04/2015,1.2.3.4,https://blah.com/,"blah blah",5
something#that.com,17/06/2015,5.6.7.8,https://blah.com/,"lah yah",0
another#here.com,7/08/2017,9.10.11.12,https://blah.com/,"wah wah",1
For example, if we put address#somewhere.com in $email from the list, run
grep -rl "${email}" --include=\*_meta.xml --only-matching | wc -l
on it and then add that result to the tally column.
At the moment I can get the first column of that CSV file (minus the heading/first line) using
awk -F"," '{print $1}' accounts.csv | tail -n +2
but I'm lost on how to do the looping and how to write the result back to the CSV file...
So for instance, with another#here.com if we run
grep -rl "${email}" --include=\*_meta.xml --only-matching | wc -l
and the result is say 17, how can I update that line to become:
another#here.com,7/08/2017,9.10.11.12,https://blah.com/,"wah wah",17
Is this possible with maybe awk or sed?
This is where I'm up to:
#!/bin/bash
# make temporary list of email addresses
awk -F"," '{print $1}' accounts.csv | tail -n +2 > emails.tmp
# loop over each
while read email; do
    # count how many uploads for current email address
    grep -rl "${email}" --include=\*_meta.xml --only-matching | wc -l
done < emails.tmp
XML Metadata looks something like this:
<?xml version="1.0" encoding="UTF-8"?>
<metadata>
<identifier>SomeTitleNameGoesHere</identifier>
<mediatype>audio</mediatype>
<collection>opensource_movies</collection>
<description>example <br /></description>
<subject>testing</subject>
<title>Some Title Name Goes Here</title>
<uploader>another#here.com</uploader>
<addeddate>2017-05-28 06:20:54</addeddate>
<publicdate>2017-05-28 06:21:15</publicdate>
<curation>[curator]email#address.com[/curator][date]20170528062151[/date][comment]checked for malware[/comment]</curation>
</metadata>
how to do the looping and also the writing of the result back to the CSV file
awk does the looping automatically. You can change any field by assigning to it. So to change a tally field (the 6th in each line) you would do $6 = ....
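For example, a minimal illustration of that kind of assignment (this just hard-codes 42 into the 6th column of every data row and prints the result; it is not the final solution):
awk -F, -v OFS=, 'NR > 1 { $6 = 42 } 1' accounts.csv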
awk is a great tool for many scenarios. You can probably save a lot of time in the future by investing a few minutes in a short tutorial now.
The only non-trivial part is getting the output of grep into awk.
The following script sets each tally to the count of *_meta.xml files containing the given email address:
awk -F, -v OFS=, -v q=\' 'NR>1 {
    cmd = "grep -rlFw " q $1 q " --include=\\*_meta.xml | wc -l";
    cmd | getline c;
    close(cmd);
    $6 = c
} 1' accounts.csv
For simplicity we assume that filenames are free of linebreaks and email addresses are free of '.
To reduce possible false positives, I also added the -F and -w options to your grep command.
-F searches literal strings; without it, searching for a.b#c would give false positives for things like axb#c and a-b#c.
-w matches only whole words; without it, searching for b#c would give a false positive for ab#c. This isn't 100% safe, as a-b#c would still give a false positive, but without knowing more about the structure of your xml files we cannot fix this.
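For instance, a small demonstration of the -F point on a throwaway string:
$ echo 'axb#c' | grep -c 'a.b#c'    # '.' is a regex wildcard here, so this counts a match
1
$ echo 'axb#c' | grep -cF 'a.b#c'   # -F searches for the literal string, so it does not
0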
A pipeline to reduce the number of greps:
grep -rHo --include=\*_meta.xml -f <(awk -F, 'NR > 1 {print $1}' accounts.csv) \
| gawk -F, -v OFS=',' '
    NR == FNR {
        # store the filenames for each email
        if (match($0, /^([^:]+):(.+)/, m)) tally[m[2]][m[1]]
        next
    }
    FNR > 1 {$6 = length(tally[$1])}
    1
' - accounts.csv
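If the goal is to update accounts.csv itself, one option (an assumption about the desired workflow; accounts.tmp is just a scratch name, and the gawk program above is assumed to be saved as tally.awk) is to write to a temporary file and then move it over the original:
grep -rHo --include=\*_meta.xml -f <(awk -F, 'NR > 1 {print $1}' accounts.csv) \
  | gawk -f tally.awk - accounts.csv > accounts.tmp \
  && mv accounts.tmp accounts.csv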
Here is a solution using a single awk command. It should be highly performant compared to the other solutions because it scans each XML file only once for all the email addresses found in the first column of the CSV file. It also doesn't invoke any external command or spawn a sub-shell anywhere.
This should work in any version of awk.
cat srch.awk
# function to escape regex metacharacters
function esc(s, tmp) {
    tmp = s
    gsub(/[&+.]/, "\\\\&", tmp)
    return tmp
}

BEGIN {FS=OFS=","}

# while processing the csv file
NR == FNR {
    # save escaped email address in array em, skipping the header row
    if (FNR > 1)
        em[esc($1)] = 0
    # save each row in the rec array
    rec[++n] = $0
    next
}

# this block will execute for each XML file
{
    # loop over each email and save the count of matched emails in array em
    # PS: gsub returns the number of substitutions
    for (i in em)
        em[i] += gsub(i, "&")
}

END {
    # print the header row
    print rec[1]
    # from the 2nd row onwards, split the row into columns on comma
    for (i=2; i<=n; ++i) {
        split(rec[i], a, FS)
        # 6th column is the count of occurrences from array em
        print a[1], a[2], a[3], a[4], a[5], em[esc(a[1])]
    }
}
Use it as:
awk -f srch.awk accounts.csv $(find . -name '*_meta.xml') > tmp && mv tmp accounts.csv
A script that handles accounts.csv line by line and replaces the data in accounts.new.csv for comparison.
#! /bin/bash

file_old=accounts.csv
file_new=${file_old/csv/new.csv}
delimiter=","
x=1

# Copy file
cp ${file_old} ${file_new}

while read -r line; do
    # Skip first line
    if [[ $x -gt 1 ]]; then
        # Read data into variables
        IFS=${delimiter} read -r address foo bar tally somethingelse <<< ${line}
        cnt=$(find . -name '*_meta.xml' -exec grep -lo "${address}" {} \; | wc -l)
        # Reset tally
        tally=$cnt
        # Change line number $x in new file
        sed "${x}s/.*/${address} ${foo} ${bar} ${tally} ${somethingelse}/; ${x}s/ /${delimiter}/g" \
            -i ${file_new}
    fi
    ((x++))
done < ${file_old}
The input and output:
# Input
$ find . -name '*_meta.xml' -exec cat {} \; | sort | uniq -c
2 address#somewhere.com
1 something#that.com
$ cat accounts.csv
email,foo,bar,tally,somethingelse
address#somewhere.com,bar1,foo2,-1,blah
something#that.com,bar2,foo3,-1,blah
another#here.com,bar4,foo5,-1,blah
# output
$ ./test.sh
$ cat accounts.new.csv
email,foo,bar,tally,somethingelse
address#somewhere.com,bar1,foo2,2,blah
something#that.com,bar2,foo3,1,blah
another#here.com,bar4,foo5,0,blah

Left outer join of multiple files data using awk command

I have a base file and multiple files that share common data based on the 1st field of the base file. I need an output file with a combination of all the data. I have tried many commands; due to the file size they take too much time to produce output. awk often helps me out, but I don't have any idea of awk array programming.
example
Base File
aa
ab
ac
ad
ae
File -1
aa,Apple
ab,Orange
ac,Mango
File -2
aa,1
ab,2
ae,3
Output File expected
aa,Apple,1
ab,Orange,2
ac,Mango,
ad,,
ae,,3
This is what I tried:
awk -F, 'FNR==NR{a[$1]=$0;next}{if(b=a[$1]) print b,$2; else print $1 }' OFS=, test.txt test2.txt
You could try 2 successive joins. Something like the following command should work:
join -a 1 -t, -e '' -o auto <(join -a 1 -t, -e '' -o auto base_file file1) file2
Here, we first join base_file and file1, then join the result with file2.
Explanation :
join -a 1 -t, -e '' -o auto base_file file1 :
-a 1 : displays the fields of base_file even if there is no match in file1
-t, : we treat the character , as our field separator. This impacts both the input files and the output of the command.
-e '' -o auto : when a field is not present, output the empty string ''. The -e option depends on the -o option. -o auto determines the output format automatically from the first line of each file.
Output :
aa,Apple,1
ab,Orange,2
ac,Mango,
ad,,
ae,,3
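The same pattern nests for additional files; for a hypothetical file3 in the same key,value layout you could chain a third join (again a sketch):
join -a 1 -t, -e '' -o auto \
  <(join -a 1 -t, -e '' -o auto \
      <(join -a 1 -t, -e '' -o auto base_file file1) file2) file3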
awk way:
awk -F, -v OFS="," 'NR==FNR{a[$1]=$2}FILENAME==ARGV[2]{b[$1]=$2}
FILENAME==ARGV[3]{print $0,a[$0],b[$0]}' f1 f2 base
This will work in any awk for any number of input files:
$ cat tst.awk
BEGIN { FS=OFS="," }
!seen[$1]++ { keys[++numKeys] = $1 }
FNR==1 { ++numFiles }
{ a[$1,numFiles] = $2 }
END {
    for (keyNr=1; keyNr <= numKeys; keyNr++) {
        key = keys[keyNr]
        printf "%s%s", key, OFS
        for (fileNr=2; fileNr <= numFiles; fileNr++) {
            printf "%s%s", a[key,fileNr], (fileNr < numFiles ? OFS : ORS)
        }
    }
}
$ awk -f tst.awk base file1 file2
aa,Apple,1
ab,Orange,2
ac,Mango,
ad,,
ae,,3
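The same script extends to additional files unchanged. With a hypothetical third file in the same key,value layout, a run should look something like:
$ cat file3
aa,x
ad,y
$ awk -f tst.awk base file1 file2 file3
aa,Apple,1,x
ab,Orange,2,
ac,Mango,,
ad,,,y
ae,,3,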

How to find the difference between the values of two fields from two files and print only if there is a difference >10 using shell

Let's say I have two files, a.txt and b.txt. The content of a.txt and b.txt is as follows:
a.txt:
abc|def|ghi|jfkdh|dfgj|hbkjdsf|ndf|10|0|cjhk|00|098r|908re|
dfbk|sgvfd|ZD|zdf|2df|3w43f|ZZewd|11|19|fdgvdf|xz00|00|00
b.txt:
abc|def|ghi|jfkdh|dfgj|hbkjdsf|ndf|11|0|cjhk|00|098r|908re|
dfbk|sgvfd|ZD|zdf|2df|3w43f|ZZewd|22|18|fdgvdf|xz00|00|00
So let's say these files have various fields separated by "|" and can have any number of lines. Also, assume that both files are sorted, so we can match lines between the two files exactly. Now, I want to compare fields 8 and 9 of each pair of corresponding rows, and if the difference in either field is greater than 10, print the lines; otherwise remove the lines from the files.
i.e., in the given example, for the first line I take |10-11| (the respective field no. 8 from a.txt and b.txt), which is 1 in absolute value, and similarly for field no. 9, |0-0|, which is 0; both differences are <10, so we delete this line from the files.
For the second line, the field 8 difference is |11-22| = 11, so we print this line (no need to check 19-18, since we print a line if either of fields 8 or 9 differs by more than 10).
So the output is
a.txt:
dfbk|sgvfd|ZD|zdf|2df|3w43f|ZZewd|11|19|fdgvdf|xz00|00|00
b.txt:
dfbk|sgvfd|ZD|zdf|2df|3w43f|ZZewd|22|18|fdgvdf|xz00|00|00
You can do this with awk:
awk -F\| 'FNR==NR{x[FNR]=$0;eight[FNR]=$8;nine[FNR]=$9;next} {d1=eight[FNR]-$8;d2=nine[FNR]-$9;if(d1>10||d1<-10||d2>10||d2<-10){print x[FNR] >> "newa";print $0 >> "newb"}}' a.txt b.txt
Explanation
The -F sets the field separator to the pipe symbol. The stuff in curly braces after FNR==NR applies only to the processing of a.txt. It says to save the whole line in array x[] indexed by line number (FNR) and also to save the eighth field in array eight[] also indexed by line number. Likewise field 9 is saved in array nine[].
The second set of curly braces applies to processing file b. It calculates the differences d1 and d2. If either exceeds 10, the line is printed to each of the files newa and newb.
You can write a bash shell script that does it:
while true; do
    read -r lineA <&3 || break
    read -r lineB <&4 || break
    vara_8=$(echo "$lineA" | cut -f8 -d "|")
    varb_8=$(echo "$lineB" | cut -f8 -d "|")
    vara_9=$(echo "$lineA" | cut -f9 -d "|")
    varb_9=$(echo "$lineB" | cut -f9 -d "|")
    if (( vara_8-varb_8 > 10 || vara_8-varb_8 < -10
       || vara_9-varb_9 > 10 || vara_9-varb_9 < -10 )); then
        echo "$lineA" >> newA.txt
        echo "$lineB" >> newB.txt
    fi
done 3<a.txt 4<b.txt
For short files
Use the method provided by Mark Setchell. Seen below in an expanded and slightly modified version:
parse.awk
function abs(x) { return x < 0 ? -x : x }

FNR==NR {
    x[FNR] = $0
    m[FNR] = $8
    n[FNR] = $9
    next
}

{
    if (abs(m[FNR] - $8) > 10 || abs(n[FNR] - $9) > 10) {
        print x[FNR] >> "newa"
        print $0     >> "newb"
    }
}
Run it like this:
awk -f parse.awk a.txt b.txt
For huge files
The method above reads a.txt into memory. If the file is very large, this becomes unfeasible and streamed parsing is called for.
It can be done in a single pass, but that requires careful handling of the multiplexed lines from a.txt and b.txt. A less error prone approach is to identify relevant line numbers, and then extract those into new files. An example of the last approach is shown below.
First you need to identify the matching lines:
# Extract fields 8 and 9 from a.txt and b.txt
paste <(awk -F'|' '{print $8, $9}' OFS='\t' a.txt) \
<(awk -F'|' '{print $8, $9}' OFS='\t' b.txt) |
# Check if the fields match the criteria and print the line number
awk '$1 - $3 > n || $3 - $1 > n || $2 - $4 > n || $4 - $2 > n { print NR }' n=10 > linesfile
Now we are ready to extract the lines from a.txt and b.txt, and as the numbers are sorted, we can use the extract.awk script proposed here (repeated for convenience below):
extract.awk
BEGIN {
    getline n < linesfile
    if(length(ERRNO)) {
        print "Unable to open linesfile '" linesfile "': " ERRNO > "/dev/stderr"
        exit
    }
}

NR == n {
    print
    if(!(getline n < linesfile)) {
        if(length(ERRNO))
            print "Unable to open linesfile '" linesfile "': " ERRNO > "/dev/stderr"
        exit
    }
}
Extract the lines (can be run in parallel):
awk -v linesfile=linesfile -f extract.awk a.txt > newa
awk -v linesfile=linesfile -f extract.awk b.txt > newb

AWK: Compare two CSV files

I have two CSV files and I want to compare them using AWK and generate a new file.
file1.csv:
"no","loc"
"abc121","C:/pro/in"
"abc122","C:/pro/abc"
"abc123","C:/pro/xyz"
"abc124","C:/pro/in"
file2.csv:
"no","loc"
"abc121","C:/pro/in"
"abc122","C:/pro/abc"
"abc125","C:/pro/xyz"
"abc126","C:/pro/in"
output.csv:
"file1","file2","Diff"
"abc121","abc121","Match"
"abc122","abc122","Match"
"abc123","","Unmatch"
"abc124","","Unmatch"
"","abc125","Unmatch"
"","abc126","Unmatch"
One way with awk:
script.awk:
BEGIN {
    FS = ","
}
NR>1 && NR==FNR {
    a[$1] = $2
    next
}
FNR>1 {
    print ($1 in a) ? $1 FS $1 FS "Match" : "\"\"" FS $1 FS "Unmatch"
    delete a[$1]
}
END {
    for (x in a) {
        print x FS "\"\"" FS "Unmatch"
    }
}
Output:
$ awk -f script.awk file1.csv file2.csv
"abc121","abc121",Match
"abc122","abc122",Match
"","abc125",Unmatch
"","abc126",Unmatch
"abc124","",Unmatch
"abc123","",Unmatch
I didn't use awk alone, but if I understood the gist of what you're asking correctly, I think this long one-liner should do it...
join -t, -a 1 -a 2 -o 1.1 2.1 1.2 2.2 file1.csv file2.csv | awk -F, '{ if ( $3 == $4 ) var = "\"Match\""; else var = "\"Unmatch\"" ; print $1","$2","var }' | sed -e '1d' -e 's/^,/"",/' -e 's/,$/,""/' -e 's/,,/,"",/g'
Description:
The join portion takes the two CSV files, joins them on the first column (default behavior of join) and outputs all four fields (-o 1.1 2.1 1.2 2.2), making sure to include unmatched rows from both files (-a 1 -a 2).
The awk portion takes that output and replaces the combination of the 3rd and 4th columns with either "Match" or "Unmatch", based on whether they do in fact match. I had to make an assumption on this behavior based on your example.
The sed portion deletes the "no","loc" header from the output (-e '1d') and replaces empty fields with open-close quote marks (-e 's/^,/"",/' -e 's/,$/,""/' -e 's/,,/,"",/g'). This last part might not be necessary for you.
EDIT:
As tripleee points out, the above fails if the two initial files are unsorted. Here's an updated command to fix that. It punts the header line and sorts each file before passing them to join...
join -t, -a 1 -a 2 -o 1.1 2.1 1.2 2.2 <( sed 1d file1.csv | sort ) <( sed 1d file2.csv | sort ) | awk -F, '{ if ( $3 == $4 ) var = "\"Match\""; else var = "\"Unmatch\"" ; print $1","$2","var }' | sed -e 's/^,/"",/' -e 's/,$/,""/' -e 's/,,/,"",/g'

Merge two files using awk and write the output

I have two files with a common field. I want to merge the two files on the common field and write the merged output into another file using an awk command on Linux.
file1
412234$name1$value1$mark1
413233$raja$$mark2
414444$$$
file2
412234$sum$file2$address$street
413233$sum2$file32$address2$street2$path
414444$$$$
These sample files are separated by $ and the merged output file should also be $-separated. Also, these rows can have empty fields.
I tried the script using join:
join -t "$" out2.csv out1.csv |sort -un > file3.csv
But the total number of lines didn't match.
Tried with awk:
myawk.awk
#!/usr/bin/awk -f
NR==FNR{a[FNR]=$0;next} {print a[FNR],$2,$3}
I ran it
awk -f myawk.awk out2.csv out1.csv > file3.csv
It was also taking too much time. Not responding.
Here out2.csv is the master file and we have to compare it with out1.csv
Could you please help me to write the merged files into another file?
Run the following using bash. This gives you the equivalent of a full outer join:
join -t'$' -a 1 -a 2 <(sort -k1,1 -t'$' out1.csv ) <(sort -k1,1 -t'$' out2.csv )
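To write the merged result into another file, as asked, just redirect the output (same command, only adding the redirection):
join -t'$' -a 1 -a 2 <(sort -k1,1 -t'$' out1.csv) <(sort -k1,1 -t'$' out2.csv) > file3.csv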
You were going in the right direction with the awk solution. The main point was to change FS to split fields on $:
Content of script.awk:
awk '
    BEGIN {
        ## Split fields with "$".
        FS = "$"
    }

    ## Save lines from second file, the first field as the index of the
    ## array, and rest of the line as the value.
    FNR == NR {
        file2[ $1 ] = substr( $0, index( $0, "$" ) )
        next
    }

    ## Print when keys from both files match.
    FNR < NR {
        if ( $1 in file2 ) {
            printf "%s$%s\n", $0, file2[ $1 ]
        }
    }
' out2.csv out1.csv
Output:
412234$name1$value1$mark1$$sum$file2$address$street
413233$raja$$mark2$$sum2$file32$address2$street2$path
414444$$$$$$$$
