Fuse two csv files - bash

I am trying to fuse two CSV files this way using bash.
file1.csv:
Col1;Col2
a;b
b;c
file2.csv:
Col3;Col4
1;2
3;4
result.csv:
Col1;Col2;Col3;Col4
a;b;0;0
b;c;0;0
0;0;1;2
0;0;3;4
The '0's in the result file are just empty cells.
I tried using the paste command but it doesn't fuse them the way I want:
paste -d';' file1 file2
Is there a way to do it using bash?
Thanks.

One in awk:
$ awk -v OFS=";" '
FNR==1 { a[1]=a[1] (a[1]==""?"":OFS) $0; next } # mind headers
FNR==NR { a[NR]=$0 OFS 0 OFS 0; next } # hash file1
{ a[NR]=0 OFS 0 OFS $0 } # hash file2
END { for(i=1;i<=NR;i++)if(i in a)print a[i] } # output
' file1 file2
Col1;Col2;Col3;Col4
a;b;0;0
b;c;0;0
0;0;1;2
0;0;3;4
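If the 0s really are just empty cells, a plain bash pipeline can also build result.csv. A minimal sketch, assuming each file has exactly two columns as in the sample:
{
  # both header rows, joined side by side
  paste -d';' <(head -n1 file1.csv) <(head -n1 file2.csv)
  # file1 data rows, padded with two empty trailing cells
  tail -n +2 file1.csv | sed 's/$/;;/'
  # file2 data rows, padded with two empty leading cells
  tail -n +2 file2.csv | sed 's/^/;;/'
} > result.csv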

Related

intersecting two files by several columns using awk

I have two big CSV files that look like below:
f1.csv
f1_c1,f1_c2
A,B
C,A
B,D
f2.csv
f2_c1,f2_c2,f2_c3
chr1,fail,A
chr1,pass,B
chr1,neutral,C
chr2,fail,D
I want to intersect the two files so that, for each row of f1, the corresponding values from the first and second columns of f2 are written out in separate columns. Based on that, the desired output should be as below:
f1_c1,f1_c2,f2_c1,f2_c1,f2_c2,f2_c2
A,B,chr1,chr1,fail,pass
C,A,chr1,chr1,neutral,fail
B,D,chr1,chr2,pass,fail
I have been trying to make this code work but it gives errors. It would be great if you could give some help to fix it.
awk 'BEGIN{FS=OFS=","}NR==FNR{gene[$3]=$1; type{$3]=$2; next}{ print ($1, $2, gene[$1], gene[$2], type[$1], type[$2] ) }' f2.csv f1.csv
Thank you.
You may use this awk:
awk 'BEGIN{FS=OFS=","} NR==1{print "f1_c1,f1_c2,f2_c1,f2_c1,f2_c2,f2_c2"} FNR==NR {m1[$3]=$1; m2[$3]=$2; next} FNR>1 {print $0, m1[$1], m1[$2], m2[$1], m2[$2]}' f2.csv f1.csv
f1_c1,f1_c2,f2_c1,f2_c1,f2_c2,f2_c2
A,B,chr1,chr1,fail,pass
C,A,chr1,chr1,neutral,fail
B,D,chr1,chr2,pass,fail
Expanded command:
awk '
BEGIN { FS = OFS = "," }
NR == 1 {
print "f1_c1,f1_c2,f2_c1,f2_c1,f2_c2,f2_c2"
}
FNR == NR {
m1[$3]=$1
m2[$3]=$2
next
}
FNR > 1 {
print $0, m1[$1], m1[$2], m2[$1], m2[$2]
}' f2.csv f1.csv
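As an aside, the immediate syntax error in the original attempt is type{$3]=$2, which opens the array subscript with a brace instead of a bracket (it should be type[$3]=$2); with that fixed, the remaining difference is the header handling, done above by the NR == 1 and FNR > 1 conditions.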

awk output to file based on filter

I have a big CSV file that I need to cut into different pieces based on the value in one of the columns. My input file dataset.csv is something like this:
NOTE: edited to clarify that the data is comma-separated, with no spaces.
action,action_type,Result
up,1,stringA
down,1,stringB
left,2,stringC
So, to split by action_type I simply do (I need the whole matching line in the resulting file):
awk -F, '$2 ~ /^1$/ {print}' dataset.csv >> 1_dataset.csv
awk -F, '$2 ~ /^2$/ {print}' dataset.csv >> 2_dataset.csv
This works as expected, but I am basically traversing my original dataset twice. My original dataset is about 5 GB and I have 30 action_type categories. I need to do this every day, so I need to script the thing to run on its own efficiently.
I tried the following but it does not work:
# This is a file called myFilter.awk
{
action_type=$2;
if (action_type=="1") print $0 >> 1_dataset.csv;
else if (action_type=="2") print $0 >> 2_dataset.csv;
}
Then I run it as:
awk -f myFilter.awk dataset.csv
But I get nothing. Literally nothing, not even errors. That sort of tells me that my code is simply not matching anything or my print/redirection statement is wrong.
You may try this awk to do this in a single command:
awk -F, 'NR > 1 {fn = $2 "_dataset.csv"; print >> fn; close(fn)}' dataset.csv
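Calling close() after every record keeps at most one output file open at a time, so this works with any awk no matter how many categories there are; the trade-off is reopening an output file for every input line. Also note that NR > 1 drops the header row rather than copying it into each output file.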
With GNU awk to handle many concurrently open files and without replicating the header line in each output file:
awk -F',' '{print > ($2 "_dataset.csv")}' dataset.csv
or if you also want the header line to show up in each output file then with GNU awk:
awk -F',' '
NR==1 { hdr = $0; next }
!seen[$2]++ { print hdr > ($2 "_dataset.csv") }
{ print > ($2 "_dataset.csv") }
' dataset.csv
or the same with any awk:
awk -F',' '
NR==1 { hdr = $0; next }
{ out = $2 "_dataset.csv" }
!seen[$2]++ { print hdr > out }
{ print >> out; close(out) }
' dataset.csv
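For the sample dataset.csv above, either header-preserving variant leaves you with:
$ cat 1_dataset.csv
action,action_type,Result
up,1,stringA
down,1,stringB
$ cat 2_dataset.csv
action,action_type,Result
left,2,stringC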
As currently coded, the input field separator has not been defined.
Current:
$ cat myFilter.awk
{
action_type=$2;
if (action_type=="1") print $0 >> 1_dataset.csv;
else if (action_type=="2") print $0 >> 2_dataset.csv;
}
Invocation:
$ awk -f myFilter.awk dataset.csv
There are a couple of ways to address this (the output filenames also need to be quoted as strings, otherwise awk cannot parse them):
$ awk -v FS="," -f myFilter.awk dataset.csv
or
$ cat myFilter.awk
BEGIN {FS=","}
{
action_type=$2
if (action_type=="1") print $0 >> "1_dataset.csv";
else if (action_type=="2") print $0 >> "2_dataset.csv";
}
$ awk -f myFilter.awk dataset.csv

Using sed to index files [duplicate]

file1 contains multiple alphabetic sequences:
AETYUIOOILAKSJ
EAYEURIOPOSIDK
RYXURIAJSKDMAO
URITORIEJAHSJD
YWQIAKSJDHFKCM
HAJSUDIDSJSIAJ
AJDHDPFDIXSIBJ
JAQIAUXCNCVUFO
while file2 contains indexes of the sequences which I want to pull out and transfer to another file. For example, 3T means I want the sequence with a T at position 3 from within file1.
In reality both files are very large with thousands of indexes and sequences.
file2:
3T
10K
14D
1J
Desired output:
AETYUIOOILAKSJ
RYXURIAJSKDMAO
URITORIEJAHSJD
JAQIAUXCNCVUFO
Ideally the output should match the order of indexes in file2. In other words the first index "3T" matches sequence "AETYUIOOILAKSJ" and thus this is the first sequence in the new file.
Things I have tried:
grep -f file2 file1
grep -fov file2 file1 # possibly to filter for those non-matching entries
I have also used the command line tool sift but am still having difficulty.
Thanks
$ cat tst.awk
NR==FNR {
lgth = length($0)
pos2char[substr($0,1,lgth-1)] = substr($0,lgth,1)
next
}
{
for (pos in pos2char) {
if ( substr($0,pos,1) == pos2char[pos] ) {
print
next
}
}
}
$ awk -f tst.awk file2 file1
AETYUIOOILAKSJ
RYXURIAJSKDMAO
URITORIEJAHSJD
JAQIAUXCNCVUFO
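Note that this prints the sequences in file1 order, since file1 is the file being streamed, rather than in file2's index order; for the sample data the two happen to coincide.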
With awk + grep pipeline:
awk '{ pat=sprintf("%*s", int($0)-1, ""); gsub(" ", ".", pat);
printf "^%s%s\n", pat, substr($0, length) }' file2 | grep -f- file1
The output:
AETYUIOOILAKSJ
RYXURIAJSKDMAO
URITORIEJAHSJD
JAQIAUXCNCVUFO
Here you go:
awk 'NR==FNR {b[$0]++;next} {for (i in b) {a=match($0,"[A-Z]");n=substr($0,1,(a-1));s=substr($0,a);t=substr(i,n,1);if (t==s) print i}}' file1 file2
AETYUIOOILAKSJ
RYXURIAJSKDMAO
URITORIEJAHSJD
JAQIAUXCNCVUFO
A more readable version:
awk '
NR==FNR {
b[$0]++;
next
}
{
for (i in b) {
a=match($0,"[A-Z]");
n=substr($0,1,(a-1));
s=substr($0,a);
t=substr(i,n,1);
if (t==s)
print i
}
}
' file1 file2
With comments:
awk '
NR==FNR { # For the first file (file1)
b[$0]++; # Store each file1 sequence in array b
next
}
{
for (i in b) { # Loop through the sequences in array b
a=match($0,"[A-Z]"); # Find where the letters start in the file2 line
n=substr($0,1,(a-1)); # Store the number part of the file2 line in n
s=substr($0,a); # Store the letter part of the file2 line in s
t=substr(i,n,1); # From the file1 sequence, take the character at position n
if (t==s) # Test whether that character equals the letter s
print i # If yes, print the sequence
}
}
' file1 file2
awk '(NR==FNR){a[$0]=substr($0,length);next}
{ for(key in a) if (a[key] == substr($0,key+0,1)) { print; break }
}' file2 file1
Here, the array a is an associative array with the following key-value pairs:
key    value
3T     T
10K    K
...    ...
When processing file2 with the line (NR==FNR){a[$0]=substr($0,length);next}, we extract the value beforehand so we don't have to do it later on. The index is easily extracted with a math operation, e.g. "10K"+0 evaluates to 10 in awk.
Processing file1 is done by the second block. There we just check whether the character at each stored position matches any of the entries in the associative array.
With GNU awk and grep:
awk -v FPAT='[0-9]+|[A-Z]+' '{ print "^.{" $1-1 "}" $2 }' file2 | grep -Ef - file1
Output:
AETYUIOOILAKSJ
RYXURIAJSKDMAO
URITORIEJAHSJD
JAQIAUXCNCVUFO

awk - Compare columns from two files and replace text in first file

I have two files. The first has 1 column and the second has 3 columns. I want to compare the first columns of both files. If there is a match, replace columns 2 and 3 with specific values; if not, print the line unchanged.
File 1:
$ cat file1
26
28
30
File 2:
$ cat file2
1,a,0
2,a,0
22,a,0
23,a,0
24,a,0
25,a,0
26,r,1510139756
27,a,0
28,r,1510244156
29,a,0
30,r,1510157364
31,a,0
32,a,0
33,r,1510276164
34,a,0
40,a,0
Desired output:
$ cat file2
1,a,0
2,a,0
22,a,0
23,a,0
24,a,0
25,a,0
26,a,0
27,a,0
28,a,0
29,a,0
30,a,0
31,a,0
32,a,0
33,r,1510276164
34,a,0
40,a,0
I am using gawk to do this (it's inside a shell script, and I am using Solaris) but I can't get the output right. It only prints the lines that match:
fuente="file2"
gawk -v fuente="$fuente" 'FNR==NR{a[FNR]=$1; next}{print $1,$2="a",$3="0" }' $fuente file1 > file3
The output I got:
$ cat file3
26 a 0
28 a 0
30 a 0
awk one-liner:
awk 'NR==FNR{ a[$1]; next }$1 in a{ $2="a"; $3=0 }1' file1 FS=',' OFS=',' file2
The output:
1,a,0
2,a,0
22,a,0
23,a,0
24,a,0
25,a,0
26,a,0
27,a,0
28,a,0
29,a,0
30,a,0
31,a,0
32,a,0
33,r,1510276164
34,a,0
40,a,0
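Two idioms do the heavy lifting here: the trailing 1 is an always-true pattern whose default action is to print the (possibly modified) record, and placing FS=',' between the two file names makes awk read file1 with the default whitespace separator while splitting file2 on commas.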
Really spread out for clarity; saved as fuente.awk and called like so:
awk -F \, -v fuente=file1 -f fuente.awk file2 # -F sets awk's input field separator
BEGIN {
OFS="," # set OFS to make printing easier
while (getline x < fuente > 0) # safe way; read file into array
{
a[++i]=x # stuff indexed array
}
}
{ # For each line in file2
for (k=1 ; k<=i ; k++) # Loop over the array (elements from file1)
{
if (($1==a[k]) && (! flag))
{
print($1,"a",0) # Found: print the new line
flag=1 # print only once
}
}
if (! flag) # Not found
{
print($0) # print original
}
flag=0 # reset flag
}
END { }

Using a command-line utility to perform the following map-updates

I'm a complete newbie to command-line utilities and am wondering how to process information like the following:
mapping.txt:
80 001 002
81 011 012 013 014
82 021 022
...
input.txt:
81 103823044
80 103823054
81 103823064
...
Desired output.txt:
103823044|011|
103823044|012|
103823044|013|
103823044|014|
103823054|001|
103823054|002|
103823064|011|
103823064|012|
103823064|013|
103823064|014|
I've done simple mapping where the column numbers are fixed, but I'm unsure how to map a dynamic number of columns to the desired output.
If order is not important, join and awk can do the job easily.
$ join <(sort input.txt) <(sort mapping.txt) | awk -v OFS="|" '{for (i=3;i<=NF;i++) print $2, $i OFS}'
103823054|001|
103823054|002|
103823044|011|
103823044|012|
103823044|013|
103823044|014|
103823064|011|
103823064|012|
103823064|013|
103823064|014|
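The $i OFS concatenation is what produces the trailing | seen in the desired output; print by itself only inserts OFS between its comma-separated arguments.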
Here's a GNU awk script that uses multi-dimensional arrays to do what you want:
#!/usr/bin/awk -f
BEGIN { OFS="|" }
FNR==NR { for(i=2;i<=NF;i++) a[$1][$i]; next }
$1 in a { for(k in a[$1]) print $2, k, "" }
If you save that to a file like script.awk and then chmod +x script.awk you can run it like:
$ ./script.awk mapping.txt input.txt
103823044|011|
103823044|012|
103823044|013|
103823044|014|
103823054|002|
103823054|001|
103823064|011|
103823064|012|
103823064|013|
103823064|014|
Here's a breakdown of the script:
BEGIN - set the output field separator to |
FNR==NR - process the first file (mapping.txt) and store the data in a multi-dimensional array indexed by $1 first, then by the other fields. next skips any other line processing.
$1 in a - test whether the line has a mapping. If so, print the corresponding mappings (the iteration order of for (k in ...) is unspecified, as the sample output shows; the split() version below preserves the original order). The commas in the print command are converted to the OFS value.
It could be remade into a "one-liner" like:
awk -v OFS="|" 'FNR==NR {for(i=2;i<=NF;i++) a[$1][$i]; next} $1 in a {for(k in a[$1]) print $2, k, ""}' mapping.txt input.txt
Here's a version of the script that uses a single-dimensional array to store $0, then split()s it later to preserve order:
#!/usr/bin/awk -f
BEGIN { OFS="|" }
FNR==NR { a[$1]=$0; next }
$1 in a { c=split(a[$1], b); for(i=2;i<=c;i++) print $2, b[i], "" }
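Saved to an executable file and run the same way as the first script (script2.awk here, name arbitrary), this variant reproduces the desired output.txt exactly, since split() walks each mapping line's fields in their original order:
$ ./script2.awk mapping.txt input.txt
103823044|011|
103823044|012|
103823044|013|
103823044|014|
103823054|001|
103823054|002|
103823064|011|
103823064|012|
103823064|013|
103823064|014|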
