Cut | Sort | Uniq -d -c | but? - shell

The given file is in the format below.
GGRPW,33332211,kr,P,SUCCESS,systemrenewal,REN,RAMS,SAA,0080527763,on:X,10.0,N,20120419,migr
GBRPW,1232221,uw,P,SUCCESS,systemrenewal,REN,RAMS,ASD,20075578623,on:X,1.0,N,20120419,migr
GLSH,21122111,uw,P,SUCCESS,systemrenewal,REN,RAMS,ASA,0264993503,on:X,10.0,N,20120419,migr
I need to take out the duplicates and count them, where duplicates are defined by fields 1, 2, 5 and 14. Then I want to insert into a database the entire record of the first duplicate occurrence, with the duplicate count tagged in another column. For this I cut the four mentioned fields, sort, find the duplicates using uniq -d, and use -c for the counts. Now, coming back after sorting out the duplicates and their counts, I need the output in the below form.
3,GLSH,21122111,uw,P,SUCCESS,systemrenewal,REN,RAMS,ASA,0264993503,on:X,10.0,N,20120419,migr
Here 3 is the number of duplicate rows for fields 1, 2, 5 and 14, and the rest of the fields can come from any of the duplicate rows.
This way the duplicates should be removed from the original file and shown in the above format.
The remaining lines in the original file are unique ones; they go through as they are.
What I have done so far:
awk '{printf("%5d,%s\n", NR,$0)}' renewstatus_2012-04-19.txt > n_renewstatus_2012-04-19.txt
cut -d',' -f2,3,6,15 n_renewstatus_2012-04-19.txt |sort | uniq -d -c
but this needs a way to point back to the original file again to get the full lines for the duplicate occurrences. Let me not confuse things; this needs a different point of view, and my brain is clinging to my approach. Any thoughts?

sort has an option -k
-k, --key=POS1[,POS2]
start a key at POS1, end it at POS2 (origin 1)
uniq has an option -f
-f, --skip-fields=N
avoid comparing the first N fields
so combine sort and uniq with field numbers (work out NUM for yourself and test this command, please):
awk -F"," '{print $0,$1,$2,...}' file.txt | sort -k NUM,NUM2 | uniq -f NUM3 -c
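One caveat worth noting (my addition, not from the answer above): uniq -f skips blank-separated fields, not comma-separated ones, so comma data would have to be converted first. A minimal sketch on blank-separated data, where the key is everything after the first field:

```shell
# Skip field 1; uniq then compares (and counts) field 2 onward.
printf 'x a\ny a\nz b\n' | sort -k2 | uniq -f1 -c
```

The first two lines share the key a, so uniq reports a group of 2 starting at x a, and a group of 1 for z b.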

Using awk's associative arrays is a handy way to find unique/duplicate rows:
awk '
BEGIN {FS = OFS = ","}
{
    key = $1 FS $2 FS $5 FS $14
    if (key in count)
        count[key]++
    else {
        count[key] = 1
        line[key] = $0
    }
}
END {for (key in count) print count[key], line[key]}
' filename
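A quick runnable check of the idea, condensed to a one-liner (the three 14-field rows are made up; the first two share fields 1, 2, 5 and 14):

```shell
printf '%s\n' \
  'a,b,3,4,e,6,7,8,9,10,11,12,13,n' \
  'a,b,X,4,e,6,7,8,9,10,11,12,13,n' \
  'c,b,3,4,e,6,7,8,9,10,11,12,13,n' |
awk 'BEGIN {FS = OFS = ","}
     {key = $1 FS $2 FS $5 FS $14; if (!(key in count)) line[key] = $0; count[key]++}
     END {for (key in count) print count[key], line[key]}'
```

This prints 2,a,b,3,... for the duplicated key and 1,c,b,3,... for the unique one (the order of a for (key in count) loop is unspecified).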

SYNTAX:
awk -F, '
!(($1,$2,$5,$14) in uniq) { uniq[$1,$2,$5,$14] = $0 }
{ count[$1,$2,$5,$14]++ }
END {
    for (i in count) {
        if (count[i] > 1) file = "dupes"; else file = "uniq"
        print uniq[i] "," count[i] > file
    }
}' renewstatus_2012-04-19.txt
Calculation:
sym#localhost:~$ cut -f16 -d',' uniq | sort | uniq -d -c
124275 1 -----> SUM OF UNIQ (COUNT 1) ENTRIES
sym#localhost:~$ cut -f16 -d',' dupes | sort | uniq -d -c
3860 2
850 3
71 4
7 5
3 6
sym#localhost:~$ cut -f16 -d',' dupes | sort | uniq -u -c
1 7
10614 ------> SUM OF DUPLICATE ENTRIES MULTIPLIED BY THEIR COUNTS
sym#localhost:~$ wc -l renewstatus_2012-04-19.txt
134889 renewstatus_2012-04-19.txt ---> TOTAL LINE COUNTS OF THE ORIGINAL FILE, MATCHED EXACTLY WITH (124275+10614) = 134889
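The bookkeeping above can be cross-checked with plain shell arithmetic: each duplicate group of size k contributes k lines to the original file.

```shell
echo $(( 3860*2 + 850*3 + 71*4 + 7*5 + 3*6 + 1*7 ))   # sum of duplicate lines: 10614
echo $(( 124275 + 10614 ))                            # total: 134889
```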

Related

check if column has more than one value in unix [duplicate]

I have a text file with a large amount of data which is tab delimited. I want to have a look at the data such that I can see the unique values in a column. For example,
Red Ball 1 Sold
Blue Bat 5 OnSale
...............
So, it's like the first column has colors, and I want to know how many different unique values there are in that column; I want to be able to do that for each column.
I need to do this in a Linux command line, so probably using some bash script, sed, awk or something.
What if I wanted a count of these unique values as well?
Update: I guess I didn't put the second part clearly enough. What I wanted is a count of "each" of these unique values, not just how many unique values there are. For instance, in the first column I want to know how many Red, Blue, Green etc. coloured objects there are.
You can make use of cut, sort and uniq commands as follows:
cat input_file | cut -f 1 | sort | uniq
gets the unique values in field 1; replacing 1 by 2 will give you the unique values in field 2.
Avoiding UUOC :)
cut -f 1 input_file | sort | uniq
EDIT:
To count the number of unique occurrences you can make use of the wc command in the chain:
cut -f 1 input_file | sort | uniq | wc -l
awk -F '\t' '{ a[$1]++ } END { for (n in a) print n, a[n] } ' test.csv
You can use awk, sort & uniq to do this, for example to list all the unique values in the first column
awk < test.txt '{print $1}' | sort | uniq
As posted elsewhere, if you want to count the number of instances of something you can pipe the unique list into wc -l
Assuming the data file is actually Tab separated, not space aligned:
<test.tsv awk '{print $4}' | sort | uniq
Where $4 will be:
$1 - Red
$2 - Ball
$3 - 1
$4 - Sold
# COLUMN is integer column number
# INPUT_FILE is input file name
cut -f ${COLUMN} < ${INPUT_FILE} | sort -u | wc -l
Here is a bash script that fully answers the (revised) original question. That is, given any .tsv file, it provides the synopsis for each of the columns in turn. Apart from bash itself, it only uses standard *ix/Mac tools: sed tr wc cut sort uniq.
#!/bin/bash
# Syntax: $0 filename
# The input is assumed to be a .tsv file
FILE="$1"
cols=$(sed -n 1p "$FILE" | tr -cd '\t' | wc -c)   # tabs in the header line
cols=$((cols + 2))                                # number of columns, plus one for the loop bound
for ((i=1; i < cols; i++))
do
    echo "Column $i ::"
    cut -f $i < "$FILE" | sort | uniq -c
echo
done
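A hedged usage sketch (the file name and contents are invented), inlining the same per-column loop so it runs standalone:

```shell
file=$(mktemp)
printf 'Red\tBall\nBlue\tBat\nRed\tCap\n' > "$file"
cols=$(( $(head -n 1 "$file" | tr -cd '\t' | wc -c) + 1 ))   # tabs + 1 = columns
for ((i=1; i<=cols; i++)); do
    echo "Column $i ::"
    cut -f "$i" < "$file" | sort | uniq -c
    echo
done
rm -f "$file"
```

Column 1 here reports 1 Blue and 2 Red; column 2 reports each item once.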
This script outputs, for each column of a given file, every unique value together with its count. It assumes that the first line of the given file is a header line. There is no need to define the number of fields. Simply save the script in a bash file (.sh) and provide the tab-delimited file as a parameter. (Note: the arrays-of-arrays syntax used below requires GNU awk 4.0 or later.)
Code
#!/bin/bash
awk '
(NR==1) {
    for (fi=1; fi<=NF; fi++)
        fname[fi] = $fi
}
(NR!=1) {
    for (fi=1; fi<=NF; fi++)
        arr[fname[fi]][$fi]++
}
END {
    for (fi=1; fi<=NF; fi++) {
        out = fname[fi]
        for (item in arr[fname[fi]])
            out = out "\t" item "_" arr[fname[fi]][item]
        print(out)
    }
}
' "$1"
Execution Example:
bash> ./script.sh <path to tab-delimited file>
Output Example
isRef A_15 C_42 G_24 T_18
isCar YEA_10 NO_40 NA_50
isTv FALSE_33 TRUE_66

AWK : To print data of a file in sorted order of result obtained from columns

I have an input file that looks somewhat like this:
PlayerId,Name,Score1,Score2
1,A,40,20
2,B,30,10
3,C,25,28
I want to write an awk command that checks for players with a sum of scores greater than 50 and outputs the PlayerId and PlayerName in sorted order of their total score.
When I try the following:
awk 'BEGIN{FS=",";}{$5=$3+$4;if($5>50) print $1,$2}' | sort -k5
It does not work and seemingly sorts them on the basis of their ids.
1 A
3 C
Whereas the correct output I'm expecting is (since Player A has a sum of scores of 60, C has 53, and we want the output sorted in ascending order):
3 C
1 A
In addition to this, what confuses me a bit is that when I try to sort on the basis of score1 (i.e. column 3) but intend to print only the corresponding ids and names, it doesn't work either.
awk 'BEGIN{FS=",";}{$5=$3+$4;if($5>50) print $1,$2}' | sort -k3
And outputs :
1 A
3 C
But if $3, the field the data is being sorted on, is included in the print,
awk 'BEGIN{FS=",";}{$5=$3+$4;if($5>50)print $1,$2,$3}' | sort -k3
it produces the correct output (but includes the unwanted score1 field in the display):
3 C 25
1 A 40
But what if one wants to print only the id and name fields?
Actually I'm new to awk commands, and probably I'm not using the sort command correctly. It would be really helpful if someone could explain.
I think this is what you're trying to do:
$ awk 'BEGIN{FS=","} {sum=$3+$4} sum>50{print sum,$1,$2}' file |
sort -k1,1n | cut -d' ' -f2-
3 C
1 A
You have to print the sum so you can sort by it and then the cut removes it.
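The same decorate-sort-undecorate pipeline, made runnable with a here-document standing in for file:

```shell
awk 'BEGIN{FS=","} NR>1 && ($3+$4)>50 {print $3+$4, $1, $2}' <<'EOF' | sort -k1,1n | cut -d' ' -f2-
PlayerId,Name,Score1,Score2
1,A,40,20
2,B,30,10
3,C,25,28
EOF
```

This prints 3 C then 1 A: the sums 53 and 60 decorate the lines for the numeric sort and are cut away afterwards.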
If you wanted the header output too then it'd be:
$ awk 'BEGIN{FS=","} {sum=$3+$4} (NR==1) || (sum>50){print (NR>1),sum,$1,$2}' file |
sort -k1,2n | cut -d' ' -f3-
PlayerId Name
3 C
1 A
If you outsource the sorting, you need to carry the auxiliary value along and cut it out later; some of the complication here comes from preserving the header.
$ awk -F, 'NR==1 {print s "\t" $1 FS $2; next}
(s=$3+$4)>50 {print s "\t" $1 FS $2 | "sort -n" }' file | cut -f2
PlayerId,Name
3,C
1,A

Use of Awk filter to get the students records details in descending order of total score

Student details are stored in a file system as follows:
Roll_no,name,socre1,score2
101,ABC,50,55
102,XYZ,48,54
103,CWE,42,34
104,ZSE,65,72
105,FGR,31,45
106,QWE,68,45
Q. Write the unix command to display the Roll_no and name of the students whose total score is greater than 100. The student details are to be displayed sorted in descending order of the total score.
The total score is to be calculated as follows:
totalscore = score1 + score2
The file also contains the header (Roll_no,name,socre1,score2).
My solution:
awk 'BEGIN {FS=",";OFS=" "} {if(NR>1){if($3+$4>100){s[$1]=$2}}} END{for (i in s) {print i,h[i]}}' stu.txt| sort -rk 2n
I am not getting how to sort according to the total score. Please help, guys!
output:-
104 ZSE
106 QWE
101 ABC
102 XYZ
Could you please try the following. To keep the calculation simple: first get the total for all lines where it is greater than 100, then sort in reverse order by the total as per the OP, then print only the first 2 columns with cut.
awk 'BEGIN{FS=OFS=","} $3+$4>100{print $1,$2,$3+$4}' Input_file |
sort -t, -nr -k3 |
cut -d',' -f 1-2
OR, in case you want space-delimited output, try the following.
awk 'BEGIN{FS=","} $3+$4>100{print $1,$2,$3+$4}' Input_file |
sort -nr -k3 |
cut -d' ' -f 1-2
Explanation: a detailed explanation of the above.
awk 'BEGIN{FS=OFS=","} $3+$4>100{print $1,$2,$3+$4}' Input_file | ##Start the awk program, setting FS and OFS to comma. Check whether the sum of the 3rd and 4th columns is greater than 100; if so, print the 1st and 2nd fields along with that sum. Pass the output as input to the next command.
sort -t, -nr -k3 | ##Sort the output, setting the delimiter to comma, in reverse numeric order on the 3rd column; send the output to the next command.
cut -d',' -f 1-2 ##Take the first 2 fields, with comma as the delimiter, to get the roll number and name.
OR
sort -t, -nr -k3 < <(awk 'BEGIN{FS=OFS=","} $3+$4>100{print $1,$2,$3+$4}' Input_file) |
cut -d',' -f 1-2
OR, in case you need the output space-delimited, try the following.
sort -nr -k3 < <(awk 'BEGIN{FS=","} $3+$4>100{print $1,$2,$3+$4}' Input_file) |
cut -d' ' -f 1-2
$ awk 'BEGIN {OFS=FS=","}
NR==1 {print $0, "total"; next}
{if(($5=$3+$4)>100) print | "sort -t, -k5nr"}' file
Roll_no,name,socre1,score2,total
104,ZSE,65,72,137
106,QWE,68,45,113
101,ABC,50,55,105
102,XYZ,48,54,102
without header and individual scores
$ awk 'BEGIN{OFS=FS=","}
NR>1 && ($3+=$4)>100{print $1,$2,$3}' file | sort -t, -k3nr
104,ZSE,137
106,QWE,113
101,ABC,105
102,XYZ,102
or
$ awk 'BEGIN{OFS=FS=","}
NR>1 && ($3+=$4)>100 && NF--' file | sort -t, -k3nr
104,ZSE,137
106,QWE,113
101,ABC,105
102,XYZ,102
without the final score and not comma delimited
$ awk -F, 'NR>1 && ($3+=$4)>100 && NF--' file | sort -k3nr | cut -d' ' -f1,2
104 ZSE
106 QWE
101 ABC
102 XYZ
This reads as written:
if the line number is greater than one (skip the header), AND
if field 3 + field 4 > 100 (the sum is assigned back to field 3), then,
when both conditions are satisfied, decrement the field count so that the last field won't be printed;
sort the results based on the third field,
and remove the last field.
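A tiny demonstration of the NF-- idiom (GNU awk and mawk rebuild the record when NF is assigned; strictly POSIX awks may behave differently):

```shell
# NF-- is truthy while fields remain, and drops the last field from $0
echo 'a,b,c' | awk 'BEGIN{FS=OFS=","} NF--'
# prints: a,b
```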
You were close:
awk 'BEGIN {FS=OFS=","} {if(NR>1){if($3+$4>100){s[$1]=$2}}} END{for (i in s) {print i,s[i]}}' stu.txt| sort -rk 2n

uniq -c unable to count unique lines

I am trying to count unique occurrences of numbers in the 3rd column of a text file, a very simple command:
awk 'BEGIN {FS = "\t"}; {print $3}' bisulfite_seq_set0_v_set1.tsv | uniq -c
which should say something like
1 10103
2 2093
3 109
but instead puts out nonsense, where the same number is counted multiple times, like
20 1
1 2
1 1
1 2
14 1
1 2
I've also tried
awk 'BEGIN {FS = "\t"}; {print $3}' bisulfite_seq_set0_v_set1.tsv | sed -e 's/ //g' -e 's/\t//g' | uniq -c
I've tried every combination I can think of from the uniq man page. How can I correctly count the unique occurrences of numbers with uniq?
uniq -c counts contiguous repeats, so to count them all you need to sort the input first. However, with awk you don't need to:
$ awk '{count[$3]++} END{for(c in count) print count[c], c}' file
will do
awk-free version with cut, sort and uniq:
cut -f 3 bisulfite_seq_set0_v_set1.tsv | sort | uniq -c
uniq operates on adjacent matching lines, so the input has to be sorted first.
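A minimal illustration of the adjacency rule:

```shell
printf '1\n2\n1\n' | uniq -c          # three groups: each run counted separately
printf '1\n2\n1\n' | sort | uniq -c   # two groups: "1" twice, "2" once
```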

awk loop over all fields in one file

This statement gives me the count of unique values in column 1:
awk -F ',' '{print $1}' infile1.csv | sort | uniq -c | sort -nr > outfile1.csv
It does what I expected (gives the count (left) of unique values (right) in the column):
117 5
58 0
18 4
14 3
11 1
9 2
However, now I want to create a loop, so it will go through all columns.
I tried:
for i in {1..10}
do
awk -F ',' '{print $$i}' infile.csv | sort | uniq -c | sort -nr > outfile$i.csv
done
This does not do the job (it does produce a file, but with much more data). I think that a shell variable in a print statement, as I tried with print $$i, is not something that works, since I have not come across it so far.
I also tried this:
awk -F ',' '{for(i=1;i<=NF;i++) infile.csv | sort | uniq -c | sort -nr}' > outfile$i.csv
But this does not give any result at all (it just produces syntax errors for the infile and the sort command). I am sure I am using the for statement the wrong way.
Ideally, I would like the code to find the count of unique values for each column and print them all in the same output file. However, I am already very happy with a well functioning loop.
Please let me know if this explanation is not good enough, I will do my best to clarify.
Any time you write a loop in shell just to manipulate text you have the wrong approach. Just do it in one awk command, something like this using GNU awk for 2D arrays and sorted_in (untested since you didn't provide any sample input):
awk -F, '
BEGIN { PROCINFO["sorted_in"] = "#val_num_desc" }
{ for (i=1; i<=NF; i++) cnt[i][$i]++ }
END {
for (i=1; i<=NF; i++)
for (val in cnt[i])
print val, cnt[i][val] > ("outfile" i ".csv")
}
' infile.csv
No need for half a dozen different commands, pipes, etc.
You want to loop through the columns and perform the same command on each one of them. So what you are doing is fine: pass the column number to awk. However, you need to pass the value differently, so that it becomes an awk variable:
for i in {1..10}
do
awk -F ',' -v col=$i '{print $col}' infile.csv | sort | uniq -c | sort -nr > outfile$i.csv
           ^^^^^^^^^^^^^^^^^^^^^^^^
done
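A one-line check of the -v mechanism (the data is made up):

```shell
echo 'a,b,c' | awk -F, -v col=2 '{print $col}'
# prints: b
```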
