How to keep the header of the file when filtering using awk [duplicate]

I have a file that looks like this with a header. I am just showing the first 8 columns although there are 26 columns.
id Id Study Site CancerType Sex Country unexpected_duplicates
468768 1032 Response Karlburg VN Breast Female Germany
468769 1405 Response Santiago Prostate Male Spain
I want to filter the Cancer type (column 5) by "Breast" using this command which works fine:
awk '($5 == "Breast")' PCA.covar > PCA.covar1
The only problem is that the header (the first line) is missing from the output. So I modified my command to:
awk 'NR==1; NR > 1 ($5 == "Breast")' PCA.covar > PCA.covar1
And I see that while the header is there, it has not filtered by Breast:
id Id Study Site CancerType Sex Country unexpected_duplicates
468768 1032 Response Karlburg VN Breast Female Germany
468769 1405 Response Santiago Prostate Male Spain
68772 RQ56001-9 Response Maastricht Prostate Male Netherlands

This should do it:
awk 'NR==1 || $5 == "Breast"{print}' PCA.covar > PCA.covar1
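For what it's worth, the earlier attempt fails because NR > 1 ($5 == "Breast") is not NR > 1 && $5 == "Breast": awk concatenates the 1 with the result of the parenthesized comparison and then compares NR against that string, so the intended filter is lost. Written with an explicit action, the working filter looks like this (a sketch, assuming the same whitespace-separated PCA.covar layout):
awk 'NR == 1 { print; next }   # always pass the header through
     $5 == "Breast"            # default action prints matching rows
' PCA.covar > PCA.covar1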

Related

Merging two files with unequal lengths based on two keys in linux [duplicate]

I have two txt files with different lengths.
File 1:
Albania 20200305 0
Albania 20200306 0
Albania 20200307 0
Albania 20200308 0
Albania 20200309 3
Albania 20200310 7
Albania 20200311 4
Albania 20200312 2
File 2:
Europe Albania 20200309 2
Europe Albania 20200310 6
Europe Albania 20200311 10
Europe Albania 20200312 11
Europe Albania 20200313 23
Europe Albania 20200314 33
I would like to create a File3 that appends the 3rd column of File1 to the end of the corresponding line of File2 whenever the 1st and 2nd columns of File1 match the 2nd and 3rd columns of File2. It should look like this:
File3:
Europe Albania 20200309 2 3
Europe Albania 20200310 6 7
Europe Albania 20200311 10 4
Europe Albania 20200312 11 2
I have tried
awk 'NR==FNR{A[$1,$2]=$3;next} (($2,$3) in A) {print $0, A[$1,$2]}' file1.txt file2.txt > file3.txt
but it just prints File2; it does not add the third column of File1.
Can you please help me with this problem?
Thanks in advance!
Your approach is correct, but when printing you need to use A[$2,$3]; you are using A[$1,$2], which does not exist in the array (because the 1st and 2nd columns of file1 have to be compared to the 2nd and 3rd columns of file2), hence it prints only the current line of file2 into your file3.
awk 'NR==FNR{a[$1,$2]=$3;next} (($2,$3) in a) {print $0, a[$2,$3]}' file1 file2
Also see this link (thanks to James for providing it): Why we shouldn't use variables in capital letters
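Spelled out with comments, the corrected two-key lookup reads like this (a sketch, assuming both files are whitespace-separated as shown):
awk '
NR == FNR { a[$1,$2] = $3; next }    # file1: key on country+date, remember the count
($2,$3) in a { print $0, a[$2,$3] }  # file2: the same key is fields 2 and 3 here
' file1.txt file2.txt > file3.txt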

How to put pivot table using Shell script

I have data in a CSV file as below...
Emailid Storeid
a#gmail.com 2000
b#gmail.com 2001
c#gmail.com 2000
d#gmail.com 2000
e#gmail.com 2001
I am expecting the output below; basically, I want to find out how many email ids there are for each store.
StoreID Emailcount
2000 3
2001 2
So far I have tried this to solve my issue:
IFS=","
while read f1 f2
do
awk -F, '{ A[$1]+=$2 } END { OFS=","; for (x in A) print x,A[x]; }' > /home/ec2-user/storewiseemials.csv
done < temp4.csv
With the above shell script I am not getting the desired output. Can you please help me?
Using Miller (https://github.com/johnkerl/miller) and starting from this input (I have used CSV, because I do not know whether you use a tab or a space as the separator)
Emailid,Storeid
a#gmail.com,2000
b#gmail.com,2001
c#gmail.com,2000
d#gmail.com,2000
e#gmail.com,2001
and running
mlr --icsv --opprint --barred count-distinct -f Storeid -o Emailcount input >output
you will have
+---------+------------+
| Storeid | Emailcount |
+---------+------------+
| 2000    | 3          |
| 2001    | 2          |
+---------+------------+
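If you would rather stay with plain awk, here is a minimal sketch (assuming the comma-separated input above; the file names are placeholders):
awk -F, '
NR > 1 { count[$2]++ }                 # skip the header, count rows per store
END {
    print "StoreID,Emailcount"
    for (s in count) print s "," count[s]
}
' input.csv > storewiseemails.csv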

shell script inserting "$" into a formatted column and adding new column

Hi guys, pardon my bad English. I managed to display my data nicely and neatly using the column program in the code below. But how do I add a "$" to the price column? And secondly, how do I add a new column with the total (Price * Sold) and display it with "$" as well?
(echo "Title:Author:Price:Quantity:Sold" && cat BookDB.txt) | column -s: -t
Output:
Title Author Price Quantity Sold
The Godfather Mario Puzo 21.50 50 20
The Hobbit J.R.R Tolkien 40.50 50 10
Romeo and Juliet William Shakespeare 102.80 200 100
The Chronicles of Narnia C.S.Lewis 35.90 80 15
Lord of the Flies William Golding 29.80 125 25
Memories of a Geisha Arthur Golden 35.99 120 50
I guess you could do it with awk (line break added before && for readability). Note that %d would truncate the prices to whole dollars, so use %.2f for the money columns:
(echo "Title:Author:Price:Quantity:Sold:Calculated"
&& awk -F: '{printf ("%s:%s:$%.2f:%d:%d:$%.2f\n",$1,$2,$3,$4,$5,$3*$5)}' BookDB.txt) | column -s: -t
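Alternatively, awk can print the header itself, which avoids the subshell (a sketch, assuming BookDB.txt stays colon-separated):
awk -F: '
BEGIN { print "Title:Author:Price:Quantity:Sold:Total" }
{ printf "%s:%s:$%.2f:%d:%d:$%.2f\n", $1, $2, $3, $4, $5, $3 * $5 }
' BookDB.txt | column -s: -t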

Mapping ids for 10 million records [duplicate]

I have two text files,
File 1 with data like
User game count
A Rugby 2
A Football 2
B Volleyball 1
C TT 2
...
File 2
1 Basketball
2 Football
3 Rugby
...
90 TT
91 Volleyball
...
Now what I want to do is add another column to File 2 such that I have the corresponding index of the game from File 2 as an extra column in File 1.
I have 2 million entries in File 1, so I want to add another column specifying the index (basically the line number or order) of the game from file 2. How can I do this efficiently?
Right now I am doing it line by line: reading a line from file 1, grepping file 2 for the corresponding game to get its line number, and writing that to a file.
This will take ages. How can I speed this up if I have 10 million rows in file 2 and 3000 rows in file 1?
With awk, read field 1 from File2 into an array indexed by field 2, then look up the array using field 2 from File1 as you iterate through it:
awk 'NR == FNR{a[$2]=$1; next}; {print $0, a[$2]}' File2 File1
A Rugby 2 3
A Football 2 2
B Volleyball 1 91
C TT 2 90
You can construct an associative array from the second file, with game names as keys and the game indexes as values. Then, for each line in file 1, look up the wanted id in the array and write it back.
Associative array lookups have O(1) average time complexity.
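Spelled out with comments, that lookup-table approach is (a sketch; File2 must be read first so the table exists before File1 is scanned, and the output file name is a placeholder):
awk '
NR == FNR { id[$2] = $1; next }   # File2: game name -> index
{ print $0, id[$2] }              # File1: append the index for this game
' File2 File1 > File1.indexed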
Use the join command:
$ cat file1
A Rugby 2
A Football 2
B Volleyball 1
C TT 2
$ cat file2
1 Basketball
2 Football
3 Rugby
90 TT
91 Volleyball
$ join -1 2 -2 2 -o 1.1,1.2,1.3,2.1 \
<(sort -k 2 file1) <(sort -k 2 file2)
A Football 2 2
A Rugby 2 3
C TT 2 90
B Volleyball 1 91
Here's another approach: only read the small file into memory, and then read the bigger file line by line. Once every ID has been found, bail out early:
awk '
NR == FNR {               # first file (file1): store each line, keyed by game name
    f1[$2] = $0
    n++
    next
}
($2 in f1) {              # second file (file2): is this game one we need?
    print f1[$2], $1      # stored file1 line plus the index from file2
    delete f1[$2]
    if (--n == 0) exit    # every game found; stop reading the big file
}
' file1 file2
Rereading your question, I don't know if I've answered the question: do you want an extra column appended to file1 or file2?

Combine text from two files, output to another [duplicate]

I'm having a bit of a problem and I've been searching all day. This is my first Unix class, so don't be too harsh.
So this may sound fairly simple, but I can't get it.
I have two text files
file1
David 734.838.9801
Roberto 313.123.4567
Sally 248.344.5576
Mary 313.449.1390
Ted 248.496.2207
Alice 616.556.4458
Frank 634.296.1259
file2
Roberto Tuesday 2
Sally Monday 8
Ted Sunday 16
Alice Wednesday 23
David Thursday 10
Mary Saturday 14
Frank Friday 15
I am trying to write a script using a looping structure that will combine both files and produce the output below as a separate file.
output:
Name On-Call Phone Start Time
Sally Monday 248.344.5576 8am
Roberto Tuesday 313.123.4567 2am
Alice Wednesday 616.556.4458 11pm
David Thursday 734.838.9801 10am
Frank Friday 634.296.1259 3pm
Mary Saturday 313.449.1390 2pm
Ted Sunday 248.496.2207 4pm
This is what I tried (I know it's horrible):
echo " Name On-Call Phone Start Time"
file="/home/xubuntu/date.txt"
file1="/home/xubuntu/name.txt"
while read name2 phone
do
while read name day time
do
echo "$name $day $phone $time"
done<"$file"
done<"$file1"
Any help would be appreciated.
First, sort the files using sort and then use this command:
paste file1 file2 | awk '{print $1,$4,$2,$5}'
This will bring you pretty close. After that you have to figure out how to format the time from the 24-hour format to the 12-hour format.
If you want to avoid using sort separately, you can bring in a little more complexity like this:
paste <(sort file1) <(sort file2) | awk '{print $1,$4,$2,$5}'
Finally, if you have not yet figured out how to print the time in 12 hour format, here is your full command:
paste <(sort file1) <(sort file2) | awk '{"date --date=\"" $5 ":00:00\" +%I%P" |& getline $5; print $1 " " $4 " " $2 " " $5 }'
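Note that |& is a GNU awk coprocess feature, and spawning date once per line is slow anyway. Here is a portable sketch that does the 24-hour to 12-hour conversion inside awk itself (assuming field 5 of the pasted lines is an hour from 0 to 23):
paste <(sort file1) <(sort file2) | awk '{
    h = $5 % 12; if (h == 0) h = 12   # 0 and 12 both display as 12
    ampm = ($5 < 12) ? "am" : "pm"
    print $1, $4, $2, h ampm
}'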
You can use tabs (\t) in place of spaces as connectors to get a nicely formatted output.
In this case the join command will also work:
join -1 1 -2 1 <(sort file1) <(sort file2)
Description:
-1 1 -> join on the first field of file1 (the common field)
-2 1 -> join on the first field of file2 (the common field)
$ cat file1
David 734.838.9801
Roberto 313.123.4567
Sally 248.344.5576
Mary 313.449.1390
Ted 248.496.2207
Alice 616.556.4458
Frank 634.296.1259
$ cat file2
Roberto Tuesday 2
Sally Monday 8
Ted Sunday 16
Alice Wednesday 23
David Thursday 10
Mary Saturday 14
Frank Friday 15
Output:
Alice 616.556.4458 Wednesday 23
David 734.838.9801 Thursday 10
Frank 634.296.1259 Friday 15
Mary 313.449.1390 Saturday 14
Roberto 313.123.4567 Tuesday 2
Sally 248.344.5576 Monday 8
Ted 248.496.2207 Sunday 16
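If the column order matters (name, day, phone, time as in the desired output), join's -o flag can reorder the fields (a sketch; the time is still in 24-hour form):
join -1 1 -2 1 -o 1.1,2.2,1.2,2.3 <(sort file1) <(sort file2)
which prints lines like:
Alice Wednesday 616.556.4458 23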
