How to merge two tab-separated files and predefine formatting of missing values? - bash

I am trying to merge two unsorted tab-separated files on a column of partially overlapping identifiers (gene#), with the option of predefining missing values and keeping the order of the first table.
When using paste on my two example tables, missing values end up as empty space.
cat file1
c3 100 300 gene4
c1 300 400 gene1
c13 600 700 gene2
cat file2
gene1 4.2 0.001
gene4 1.05 0.5
paste file1 file2
c3 100 300 gene4 gene1 4.2 0.001
c1 300 400 gene1 gene4 1.05 0.5
c13 600 700 gene2
As you can see, paste simply glues lines together by position: the identifiers are not matched, and the unmatched third line is left short. Is there a way to keep the order of file1, match on the identifier, and fill lines like the third as follows:
c3 100 300 gene4 gene4 1.05 0.5
c1 300 400 gene1 gene1 4.2 0.001
c13 600 700 gene2 NA 1 1
I assume one way could be to build an awk conditional construct. It would be great if you could point me in the right direction.

With awk please try the following:
awk 'FNR==NR {a[$1]=$1; b[$1]=$2; c[$1]=$3; next}
{if (!a[$4]) {a[$4]="N/A"; b[$4]=1; c[$4]=1}
printf "%s %s %s %s\n", $0, a[$4], b[$4], c[$4]}
' file2 file1
which yields:
c3 100 300 gene4 gene4 1.05 0.5
c1 300 400 gene1 gene1 4.2 0.001
c13 600 700 gene2 N/A 1 1
[Explanations]
In the 1st line, FNR==NR { command; next } is an idiom to execute the command only when reading the 1st file in the argument list ("file2" in this case). It then creates maps (aka associative arrays) that associate the values in "file2" with the genes, as:
gene1 => gene1 (with array a)
gene1 => 4.2 (with array b)
gene1 => 0.001 (with array c)
gene4 => gene4 (with array a)
gene4 => 1.05 (with array b)
gene4 => 0.5 (with array c)
It is not necessary that "file2" is sorted.
The following lines are executed only when reading the 2nd file ("file1") because these lines are skipped when reading the 1st file due to the next statement.
The line {if (!a[$4]) ... is a fallback that assigns default values when the associative array a[gene] is undefined (meaning the gene is not found in "file2").
The final line prints the contents of "file1" followed by the associated values via the gene.
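If you want tab-separated output and the literal NA from the question (rather than N/A), a minimal variation of the same script (a sketch, assuming the inputs are truly tab-separated):
awk 'BEGIN {FS=OFS="\t"}
FNR==NR {a[$1]=$1; b[$1]=$2; c[$1]=$3; next}
{if (!($4 in a)) {a[$4]="NA"; b[$4]=1; c[$4]=1}
print $0, a[$4], b[$4], c[$4]}
' file2 file1
Testing membership with ($4 in a) instead of !a[$4] also avoids misclassifying a legitimate empty or "0" value as missing.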

You can use join:
join -e NA -o '1.1 1.2 1.3 1.4 1.5 2.1 2.2 2.3' -a 1 -1 5 -2 1 <(nl -w1 -s ' ' file1 | sort -k 5) <(sort -k 1 file2) | sed 's/NA\sNA$/1 1/' | sort -n | cut -d ' ' -f 2-
-e NA — replace all missing values with NA
-o ... — output format (field is specified using <file>.<field>)
-a 1 — Keep every line from the left file
-1 5, -2 1 — Fields used to join the files
file1, file2 — The files
nl -w1 -s ' ' file1 — file1 with numbered lines
<(sort -k X fileN) — File N ready to be joined on column X
s/NA\sNA$/1 1/ — Replace NA NA at the end of a line with 1 1
| sort -n | cut -d ' ' -f 2- — sort numerically and remove the first column
The example above uses spaces on output. To use tabs, append | tr ' ' '\t':
join -e NA -o '1.1 1.2 1.3 1.4 2.1 2.2 2.3' -a 1 -1 4 -2 1 file1 file2 | sed 's/NA\sNA$/1 1/' | tr ' ' '\t'
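Note that this shorter variant drops the nl bookkeeping, so it assumes both files are already sorted on their join fields and it does not preserve file1's original order; if they are not sorted, sort them on the fly, e.g. (untested):
join -e NA -o '1.1 1.2 1.3 1.4 2.1 2.2 2.3' -a 1 -1 4 -2 1 <(sort -k4 file1) <(sort -k1 file2) | sed 's/NA\sNA$/1 1/' | tr ' ' '\t'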

In the paste output, the broken (unmatched) lines have a TAB as the last character. You can pad them with
paste file1 file2 | sed 's/\t$/\tNA\t1\t1/g'
Note that, like paste itself, this does not match the identifiers; it only gives the desired result when the two files happen to line up row by row.
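You can confirm the trailing TAB with cat -A (GNU coreutils), which renders each TAB as ^I and marks each line end with $:
paste file1 file2 | cat -A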

Related

Bash/Awk Compare two files, print value when it is between coordinates else print 0

I have two files. If the column "chromosome" matches between the two files and the position of File1 is between the Start_position and the End_position of File2, I would like to associate the two cell_frac values. If the Gene (chromosome + position) is not present in File2, I would like both cell_frac values to be equal to 0.
File1:
Hugo_Symbol Chromosome Position
Gene1 1 111111
Gene2 1 222222
Gene3 2 333333
Gene4 2 333337
File2:
Chromosome Start_Position End_Position cell_frac_A1 cell_frac_A2
1 222220 222230 0.12 0.01
2 333330 333340 0.03 0.25
3 444440 444450 0.01 0.01
Desired output:
Hugo_Symbol Chromosome Position cell_frac_A1 cell_frac_A2
Gene1 1 111111 0 0
Gene2 1 222222 0.12 0.01
Gene3 2 333333 0.03 0.25
Gene4 2 333337 0.03 0.25
Edit: Here is the beginning of the code I used for now (not correct output):
awk '
NR==FNR{ range[$1,$2,$3,$4,$5]; next }
FNR==1
{
for(x in range) {
split(x, check, SUBSEP);
if($2==check[1] && $3>=check[2] && $3<=check[3]) { print $1"\t"$2"\t"$3"\t"check[4]"\t"check[5]}
}
}
' File2 File1
However, I did not manage to associate a 0 (with "else") when the gene was not present, and I get the wrong number of lines. Can you give me more hints?
Thanks a lot.
One awk-only idea ...
NOTE: see my other answer for assumptions/understandings and my version of file1
awk ' # process file2
FNR==NR { c=$1 # save chromosome value
$1="" # clear field #1
file2[c]=$0 # use chromosome as array index; save line in array
next
}
# process file1
{ startpos=endpos=-9999999999 # default values in case
a1=a2="" # no match in file2
if ($2 in file2) { # if chromosome also in file2
split(file2[$2],arr) # split file2 data into array arr[]
startpos =arr[1]
endpos =arr[2]
a1 =arr[3]
a2 =arr[4]
}
# if not the header row and file1/position outside of file2/range then set a1=a2=0
if (FNR>1 && ($3 < startpos || $3 > endpos)) a1=a2=0
print $0,a1,a2
}
' file2 file1
This generates:
Hugo_Symbol Chromosome Position cell_frac_A1 cell_frac_A2
Gene2 1 222222 0.12 0.01
Gene3 2 333333 0.03 0.25
Gene1 1 111111 0 0
Gene4 2 333337 0.03 0.25
Gene5 4 444567 0 0
Changing the last line to ' file2 file1 | column -t generates:
Hugo_Symbol  Chromosome  Position  cell_frac_A1  cell_frac_A2
Gene2        1           222222    0.12          0.01
Gene3        2           333333    0.03          0.25
Gene1        1           111111    0             0
Gene4        2           333337    0.03          0.25
Gene5        4           444567    0             0
Presorting file1 by Chromosome and Position by changing last line to ' file2 <(head -1 file1; tail -n +2 file1 | sort -k2,3) | column -t generates:
Hugo_Symbol  Chromosome  Position  cell_frac_A1  cell_frac_A2
Gene1        1           111111    0             0
Gene2        1           222222    0.12          0.01
Gene3        2           333333    0.03          0.25
Gene4        2           333337    0.03          0.25
Gene5        4           444567    0             0
One big issue (same as with my other answer): the actual code may become unwieldy when dealing with 519 total columns, especially if there's a need to intersperse a lot of columns; otherwise OP may be able to use some for loops to more easily print ranges of columns, as sketched below.
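As a rough illustration of the for-loop idea (an untested sketch along the lines of the answer above; n2 and ok are names I made up):
awk '
FNR==NR { c=$1; $1=""; file2[c]=$0; n2=NF-1; next } # save file2 line minus its key; n2 = number of data columns
{ n = split(file2[$2], arr)                         # arr[1]=start, arr[2]=end, arr[3..n]=value columns
  ok = (FNR==1 || (n && $3>=arr[1] && $3<=arr[2]))  # header passes through; data rows must be in range
  out = $0
  for (i=3; i<=n2; i++)                             # loop over the value columns instead of naming each one
      out = out OFS ((ok && i<=n) ? arr[i] : 0)
  print out
}
' file2 file1
This prints the same columns for the sample data but scales to any number of trailing value columns in file2.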
A job for SQL instead of awk, perhaps? First convert the blank-separated files to '|'-separated ones ('|' being the default .import separator in sqlite3's list mode):
tr -s ' ' '|' <File1 >file1.csv
tr -s ' ' '|' <File2 >file2.csv
(
echo 'Hugo_Symbol|Chromosome|Position|cell_frac_A1|cell_frac_A2'
sqlite3 <<'EOD'
.import file1.csv t1
.import file2.csv t2
select distinct
t1.hugo_symbol,
t1.chromosome,
t1.position,
case
when t1.position between t2.start_position and t2.end_position
then t2.cell_frac_a1
else 0
end,
case
when t1.position between t2.start_position and t2.end_position
then t2.cell_frac_a2
else 0
end
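-- note: the plain (inner) join below drops File1 rows whose chromosome is
-- absent from File2; switching to "left join" would keep those rows with 0 0,
-- since BETWEEN is then NULL and each CASE falls through to ELSE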
from t1 join t2 on t1.chromosome=t2.chromosome;
EOD
rm file[12].csv
) | tr '|' '\t'
Assumptions and/or understandings based on sample data and OP comments ...
file1 is not sorted by chromosome
file2 is sorted by chromosome
common headers in both files are spelled the same (not, eg, file1:Chromosome vs file2:Chromosom)
if a chromosome exists in file1 but does not exist in file2 then we keep the line from file1 and the columns from file2 are left blank
both files are relatively small (file1: 5MB, 900 lines; file2: few KB, 50 lines)
NOTE: the number of columns (file1: 500 columns; file2: 20 columns) could be problematic from the point of view of cumbersome coding ... more on that later ...
Sample inputs:
$ cat file1 # scrambled chromosome order; added chromosome=4 line
Hugo_Symbol Chromosome Position
Gene2 1 222222
Gene3 2 333333
Gene1 1 111111
Gene4 2 333337
Gene5 4 444567 # has no match in file2
$ cat file2
Chromosome Start_Position End_Position cell_frac_A1 cell_frac_A2
1 222220 222230 0.12 0.01
2 333330 333340 0.03 0.25
3 444440 444450 0.01 0.01
First issue is to sort file1 by Chromosome and Position and also keep the header line in place:
$ (head -1 file1; tail -n +2 file1 | sort -k2,3)
Hugo_Symbol Chromosome Position
Gene1 1 111111
Gene2 1 222222
Gene3 2 333333
Gene4 2 333337
Gene5 4 444567
We can now join the 2 files based on the Chromosome column:
$ join -1 2 -2 1 -a 1 --nocheck-order <(head -1 file1; tail -n +2 file1 | sort -k2,3) file2
Chromosome Hugo_Symbol Position Start_Position End_Position cell_frac_A1 cell_frac_A2
1 Gene1 111111 222220 222230 0.12 0.01
1 Gene2 222222 222220 222230 0.12 0.01
2 Gene3 333333 333330 333340 0.03 0.25
2 Gene4 333337 333330 333340 0.03 0.25
4 Gene5 444567
Where:
-1 2 -2 1 - join on Chromosome columns: -1 2 == file #1 column #2; -2 1 == file #2 column #1
-a 1 - keep unpairable lines from file #1 (sorted file1)
--nocheck-order - disable verifying input is sorted by join column; optional; may be needed if a locale thinks 1 should be sorted before Chromosome
NOTE: for the sample inputs/outputs we don't need a special output format so we can skip the -o option, but it may be needed depending on OP's output requirements for 519 total columns (though it may also become unwieldy)
From here OP can use bash or awk to do comparisons (is column #3 between columns #4/#5); one awk idea:
$ join -1 2 -2 1 -a 1 --nocheck-order <(head -1 file1; tail -n +2 file1 | sort -k2,3) file2 | awk 'FNR>1{if ($3<$4 || $3>$5) $6=$7=0} {print $2,$1,$3,$6,$7}'
Hugo_Symbol Chromosome Position cell_frac_A1 cell_frac_A2
Gene1 1 111111 0 0 # Position outside of range
Gene2 1 222222 0.12 0.01
Gene3 2 333333 0.03 0.25
Gene4 2 333337 0.03 0.25
Gene5 4 444567 0 0 # no match in file2; if there were other columns from file2 they would be empty
And to match OP's sample output (appears to be a fixed width requirement) we can pass this to column:
$ join -1 2 -2 1 -a 1 --nocheck-order <(head -1 file1; tail -n +2 file1 | sort -k2,3) file2 | awk 'FNR>1{if ($3<$4 || $3>$5) $6=$7=0} {print $2,$1,$3,$6,$7}' | column -t
Hugo_Symbol  Chromosome  Position  cell_frac_A1  cell_frac_A2
Gene1        1           111111    0             0
Gene2        1           222222    0.12          0.01
Gene3        2           333333    0.03          0.25
Gene4        2           333337    0.03          0.25
Gene5        4           444567    0             0
NOTE: Keep in mind this may be untenable with OP's 519 total columns, especially if interspersed columns contain blanks/white-space (ie, column -t may not parse the input properly)
Issues (in addition to any incorrect assumptions and the previous NOTEs):
for relatively small files the performance of the join | awk | column should be sufficient
for larger files all of this code can be rolled into a single awk solution though memory usage could be an issue on a small machine (eg, one awk idea would be to load file2 into memory via arrays so memory would need to be large enough to hold all of file2 ... probably not an issue unless file2 gets to be 100's/1000's of MBytes in size)
for 519 total columns the awk/print will get unwieldy, especially if there's a need to move/intersperse a lot of columns

loop through numeric text files in bash and add numbers row wise

I have a set of text files in a folder, like so:
a.txt
1
2
3
4
5
b.txt
1000
1001
1002
1003
1004
.. and so on (assume a fixed number of rows, but an unknown number of text files). What I am looking for is a results file which is a summation across all rows:
result.txt
1001
1003
1005
1007
1009
How do I go about achieving this in bash, without using Python etc.?
Using awk
Try:
$ awk '{a[FNR]+=$0} END{for(i=1;i<=FNR;i++)print a[i]}' *.txt
1001
1003
1005
1007
1009
How it works:
a[FNR]+=$0
For every line read, we add the value of that line, $0, to partial sum, a[FNR], where a is an array and FNR is the line number in the current file.
END{for(i=1;i<=FNR;i++)print a[i]}
After all the files have been read in, this prints out the sum for each line number.
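One caveat: in the END block, FNR holds the line number of the last file read, so if the files could have different lengths, track the maximum line number instead (a small variation):
$ awk '{a[FNR]+=$0; if (FNR>max) max=FNR} END{for(i=1;i<=max;i++)print a[i]}' *.txt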
Using paste and bc
$ paste -d+ *.txt | bc
1001
1003
1005
1007
1009
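This works because paste -d+ joins corresponding lines with a + sign, turning each row into an arithmetic expression that bc evaluates. With the sample files:
$ paste -d+ a.txt b.txt
1+1000
2+1001
3+1002
4+1003
5+1004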

Subtract corresponding lines

I have two files, file1.csv
3 1009
7 1012
2 1013
8 1014
and file2.csv
5 1009
3 1010
1 1013
In the shell, I want to subtract the count in the first column in the second file from that in the first file, based on the identifier in the second column. If an identifier is missing in the second column, the count is assumed to be 0.
The result would be
-2 1009
-3 1010
7 1012
1 1013
8 1014
The files are huge (several GB). The second columns are sorted.
How would I do this efficiently in the shell?
Assuming that both files are sorted on second column:
$ join -j2 -a1 -a2 -oauto -e0 file1 file2 | awk '{print $2 - $3, $1}'
-2 1009
-3 1010
7 1012
1 1013
8 1014
join will join sorted files.
-j2 will join on the second column.
-a1 will print records from file1 even if there is no corresponding row in file2.
-a2 Same as -a1 but applied for file2.
-oauto is in this case the same as -o 0,1.1,2.1, which will print the join column, and then the remaining columns from file1 and file2.
-e0 will insert 0 instead of an empty column. This works with -a1 and -a2.
The output from join is three columns like:
1009 3 5
1010 0 3
1012 7 0
1013 2 1
1014 8 0
Which is piped to awk, to subtract column three from column 2, and then reformatting.
$ awk 'NR==FNR { a[$2]=$1; next }
{ a[$2]-=$1 }
END { for(i in a) print a[i],i }' file1 file2
7 1012
1 1013
8 1014
-2 1009
-3 1010
It reads the first file into memory, so you should have enough memory available. Also note that for (i in a) visits keys in an unspecified order, as seen above; pipe the output to sort -k2 if the order matters. If you don't have the memory, I would maybe sort -k2 the files first, then sort -m (merge) them and continue with that output:
$ sort -m -k2 -k3 <(sed 's/$/ 1/' file1|sort -k2) <(sed 's/$/ 2/' file2|sort -k2) # | awk ...
3 1009 1
5 1009 2 # previous $2 = current $2 -> subtract
3 1010 2 # previous $2 =/= current and current $3=2 print -$3
7 1012 1
2 1013 1 # previous $2 =/= current and current $3=1 print prev $2
1 1013 2
8 1014 1
(I'm out of time for now, maybe I'll finish it later)
EDIT by Ed Morton
Hope you don't mind me adding what I was working on rather than posting my own extremely similar answer, feel free to modify or delete it:
$ cat tst.awk
{ split(prev,p) }
$2 == p[2] {
print p[1] - $1, p[2]
prev = ""
next
}
p[2] != "" {
print (p[3] == 1 ? p[1] : 0-p[1]), p[2]
}
{ prev = $0 }
END {
split(prev,p)
if (p[2] != "") print (p[3] == 1 ? p[1] : 0-p[1]), p[2]
}
$ sort -m -k2 <(sed 's/$/ 1/' file1) <(sed 's/$/ 2/' file2) | awk -f tst.awk
-2 1009
-3 1010
7 1012
1 1013
8 1014
Since the files are sorted¹, you can merge them line-by-line with the join utility in coreutils:
$ join -j2 -o auto -e 0 -a 1 -a 2 41144043-a 41144043-b
1009 3 5
1010 0 3
1012 7 0
1013 2 1
1014 8 0
All those options are required:
-j2 says to join based on the second column of each file
-o auto says to make every row have the same format, beginning with the join key
-e 0 says that missing values should be substituted with zero
-a 1 and -a 2 include rows that are absent from one file or another
the filenames (I've used names based on the question number here)
Now we have a stream of output in that format, we can do the subtraction on each line. I used this GNU sed command to transform the above output into a dc program:
sed -re 's/.*/c& -n[ ]np/'
This takes the three values on each line and rearranges them into a dc command for the subtraction, ready for dc to execute. For example, the first line becomes (with spaces added for clarity)
c 1009 3 5 -n [ ]n p
which subtracts 5 from 3, prints it, then prints a space, then prints 1009 and a newline, giving
-2 1009
as required.
We can then pipe all these lines into dc, giving us the output file that we want:
$ join -o auto -j2 -e 0 -a 1 -a 2 41144043-a 41144043-b \
> | sed -e 's/.*/c& -n[ ]np/' \
> | dc
-2 1009
-3 1010
7 1012
1 1013
8 1014
¹ The sorting needs to be consistent with LC_COLLATE locale setting. That's unlikely to be an issue if the fields are always numeric.
TL;DR
The full command is:
join -o auto -j2 -e 0 -a 1 -a 2 "$file1" "$file2" | sed -e 's/.*/c& -n[ ]np/' | dc
It works a line at a time, and starts only the three processes you see, so should be reasonably efficient in both memory and CPU.
Assuming the files are blank-separated (if the separator is ",", use the argument -F ','):
awk 'FNR==NR {Inits[$2]=$1; ids[$2]++; next}
{Discounts[$2]=$1; ids[$2]++}
END { for (id in ids) print Inits[id] - Discounts[id] " " id}
' file1.csv file2.csv
If memory is an issue (this could be done in one series of pipes, but I prefer to use a temporary file):
awk 'FNR==NR{print;next}{print -1 * $1 " " $2}' file1 file2 \
| sort -k2 \
> file.tmp
awk 'Last != $2 {
if (NR != 1) print Result " " Last
Last = $2; Result = $1
next
}
Last == $2 { Result+= $1; next}
END { print Result " " Last }
' file.tmp
rm file.tmp
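The same idea as a single pipeline, without the temporary file (an untested sketch):
awk 'FNR==NR{print;next}{print -1 * $1 " " $2}' file1 file2 \
| sort -k2 \
| awk 'Last != $2 {
if (NR != 1) print Result " " Last
Last = $2; Result = $1
next
}
Last == $2 { Result+= $1; next}
END { print Result " " Last }'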

how shell script merging three files by lines and calculating some value, meeting some condition

while read line1
do
while read line2
do
while read line3
do echo "$line1, $line2, $line3" | awk -F , ' $1==$5 && $6==$11 && $10==$12 {print $1,",",$2,",",$3,",",$4,",",$6,",",$7,",",$8,",",$9,",",$10,",",$13,",",$14,",",$15}' >>out.txt
done < grades.csv
done < subjects.csv
done < students.csv
In this code I am merging the three files line by line (a cross product), and if any merged line meets the condition "$1==$5 && $6==$11 && $10==$12", I print it to the output file.
Now my problem is that I want to keep adding the "$13" field values for each iteration that meets the condition.
How can I do this? Please help.
Here are the sample files.
grades.csv contains lines:
1,ARCH,1,90,very good,80
1,ARCH,2,70,good,85
1,PLNG,1,89,very good,85
subjects.csv contains lines:
1,ARCH,Computer Architecture,A,K.Gose
1,PLNG,Programming Languages,A,P.Yang
1,OS,Operating System,B,K.Gopalan
2,ARCH,Computer Architecture,A,K.Gose
students.csv contains lines:
1,pankaj,vestal,986-654-32
2,satadisha,binghamton,879-876-54
5,pankaj,vestal,986-654-32
6,pankaj,vestal,986-654-31
This is the expected output:
ARCH 1 pankaj vestal 986-654-32 Computer Architecture A K.Gose 1 1 90 very good 80
ARCH 1 pankaj vestal 986-654-32 Computer Architecture A K.Gose 1 2 70 good 85
ARCH 2 satadisha binghamton 879-876-54 Computer Architecture A K.Gose 1 1 90 very good 80
ARCH 2 satadisha binghamton 879-876-54 Computer Architecture A K.Gose 1 2 70 good 85
PLNG 1 pankaj vestal 986-654-32 Programming Languages A P.Yang 1 1 89 very good 85
Also I need the sum of (90+70+90+70+89) in another shell variable which can be written to a file.
Assuming you have joined the columns to form a TSV (tab-separated values) file or stream, and that columns $k1, $k2, and $k3 (in that file or stream) form the key, and that you want to sum column $s in the join, here is the awk command you can use to form a TSV listing of the keys and sum:
awk -F\\t -v k1=$k1 -v k2=$k2 -v k3=$k3 -v s=$s '
BEGIN{t=OFS="\t"}
{ key=$k1 t $k2 t $k3; sum[key]+=$s }
END {for (key in sum) {print key, sum[key] } }'
(Using awk to process CSV files that might contain commas is asking for trouble, so I've illustrated how to use awk with tabs.)
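For example, if the joined line has 15 tab-separated fields as in the question, with the key in columns 1, 6, and 10 and the score in column 13 (hypothetical positions; adjust to your layout), and the joined data in a file named, say, joined.tsv:
awk -F\\t -v k1=1 -v k2=6 -v k3=10 -v s=13 '
BEGIN{t=OFS="\t"}
{ key=$k1 t $k2 t $k3; sum[key]+=$s }
END {for (key in sum) {print key, sum[key] } }' joined.tsv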
You can use join to create your expanded data and operate on it with awk.
$ join -t, -1 5 -2 2 <(join -t, -j 1 file3 file2 | sort -t, -k5,5) file1 | column -s, -t
ARCH 1 pankaj vestal 986-654-32 Computer Architecture A K.Gose 1 1 90 very good 80
ARCH 1 pankaj vestal 986-654-32 Computer Architecture A K.Gose 1 2 70 good 85
ARCH 2 satadisha binghamton 879-876-54 Computer Architecture A K.Gose 1 1 90 very good 80
ARCH 2 satadisha binghamton 879-876-54 Computer Architecture A K.Gose 1 2 70 good 85
PLNG 1 pankaj vestal 986-654-32 Programming Languages A P.Yang 1 1 89 very good 85
Alternatively, you can do the join in awk as well, eliminating the while loops.
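Here is one sketch (untested; it assumes the file names from the question, keys students by id, and scans the subject list for each grade line):
awk -F, '
FNR==NR { st[$1]=$2" "$3" "$4; next }          # students.csv: id -> "name city phone"
FILENAME ~ "subjects" { subj[++ns]=$0; next }  # subjects.csv kept as a list (codes repeat)
{                                              # grades.csv
  for (i=1; i<=ns; i++) {
      split(subj[i], s, ",")                   # s[1]=id s[2]=code s[3]=title s[4]=grade s[5]=teacher
      if (s[2]==$2 && (s[1] in st)) {
          print s[2], s[1], st[s[1]], s[3], s[4], s[5], $1, $3, $4, $5, $6
          total += $4                          # running sum of the score column ($13 in the merged line)
      }
  }
}
END { print total }                            # the 90+70+90+70+89 total; redirect or capture it for the shell
' students.csv subjects.csv grades.csv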
If you want to add up the values in $11 (the score column in the joined output):
$ join -t, -1 5 -2 2 <(join -t, -j 1 file3 file2 | sort -t, -k5,5) file1 | awk -F, '{sum+=$11} END{print sum}'
To assign the result to a shell variable
$ sum=$(join ... )

bash- get all lines with the same column value in two files

I have two text files each with 3 fields. I need to get the lines with the same value on the third field. The 3rd field value is unique in each file. Example:
file1:
1 John 300
2 Eli 200
3 Chris 100
4 Ann 600
file2:
6 Kevin 250
7 Nancy 300
8 John 100
output:
1 John 300
7 Nancy 300
3 Chris 100
8 John 100
When I use the following command:
cat file1 file2 | sort -k 3 | uniq -c -f 2
I get only one row from an input file with the duplicate value. I need both!
This one-liner gives you that output (the first block caches each file1 line by its third field; the second prints both the cached file1 line and the matching file2 line):
awk 'NR==FNR{a[$3]=$0;next}$3 in a{print a[$3];print}' file1 file2
My solution is
join -1 3 -2 3 <(sort -k3 file1) <(sort -k3 file2) | awk '{print $2, $3, $1; print $4, $5, $1}'
or
join -1 3 -2 3 <(sort -k3 file1) <(sort -k3 file2) -o "1.1 1.2 0 2.1 2.2 0" | xargs -n3
Here -o "1.1 1.2 0 2.1 2.2 0" emits both records' fields together with the join key, and xargs -n3 reflows them three fields per line.
