Sort a file in unix by the absolute value of a field - sorting

I want to sort this file by the absolute value of the Linear regression (p) column in descending order. My attempt to do this didn't quite work, and I'm not sure why it fails. I found this code at http://www.unix.com/shell-programming-and-scripting/168144-sort-absolute-value.html.
awk -F',' '{print ($2>=0)?$2:-$2, $0}' OFS=',' mycsv1.csv | sort -n -k8,8 | cut -d ',' -f2-
X var,Y var,MIC (strength),MIC-p^2 (nonlinearity),MAS (non-monotonicity),MEV (functionality),MCN (complexity),Linear regression (p)
AT1G01030,AT1G32310,0.67958,0.4832027,0.32644996,0.63247,4.0,-0.44314474
AT1G01030,AT3G06520,0.61732,0.17639545,0.23569,0.58557,4.0,0.6640215
AT1G01030,AT5G42580,0.61579,0.5019064,0.30105,0.58143,4.0,0.33746648
AT1G01030,AT1G55280,0.57287,0.20705527,0.19536,0.52857,4.0,0.6048262
AT1G01030,AT5G30490,0.56509,0.37536618,0.16172999,0.51847,4.0,-0.43557298
AT1G01030,AT1G80040,0.56268,0.22935495,0.18583998,0.52728,4.0,-0.5773431
...
Please help me to understand the awk script to sort this file.

You could use sed and sort for this, following hek2mgl's very smart logic of adding and removing a field at the end to retain the original number of fields:
sed -E 's/,([-]?)([0-9.]+)$/,\1\2,\2/' file | sort -t, -k9,9 -nr | cut -f1-8 -d,
sed -E 's/,([-]?)([0-9.]+)$/,\1\2,\2/' => creates field 9 as the absolute value of field 8
sort -t, -k9,9 -nr => sorts by the newly created field, numeric and descending order
cut -f1-8 -d, => removes the 9th field, restoring the output to its original format, with the desired sorting order
Here is the output:
AT1G01030,AT3G06520,0.61732,0.17639545,0.23569,0.58557,4.0,0.6640215
AT1G01030,AT1G55280,0.57287,0.20705527,0.19536,0.52857,4.0,0.6048262
AT1G01030,AT1G80040,0.56268,0.22935495,0.18583998,0.52728,4.0,-0.5773431
AT1G01030,AT1G32310,0.67958,0.4832027,0.32644996,0.63247,4.0,-0.44314474
AT1G01030,AT5G30490,0.56509,0.37536618,0.16172999,0.51847,4.0,-0.43557298
AT1G01030,AT5G42580,0.61579,0.5019064,0.30105,0.58143,4.0,0.33746648
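If you want to keep the header row shown in the question, a small variation prints it first and runs the pipeline only on the data lines (a sketch using head and tail, with the question's file name mycsv1.csv):
(head -n 1 mycsv1.csv; tail -n +2 mycsv1.csv | sed -E 's/,([-]?)([0-9.]+)$/,\1\2,\2/' | sort -t, -k9,9 -nr | cut -f1-8 -d,)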

Take three steps:
(1) Temporarily create a 9th field which contains the abs value of field 8:
LC_COLLATE=C awk -F, 'NR>1{v=$NF;sub(/-/,"",v);printf "%s%s%s%s",$0,FS,v,RS}' file
^ ------ make sure this is set, since sorting (and especially handling of the decimal point) depends on the locale.
(2) Sort that output based on the 9th field:
command_1 | sort -t, -k9,9rn
(3) Pipe that back to awk to remove the last field. NF-- decreases the number of fields, which effectively removes the last one. The trailing 1 is always true, which makes awk print the (rebuilt) line:
command_2 | awk -F, -v OFS=, '{NF--}1'
Output:
AT1G01030,AT3G06520,0.61732,0.17639545,0.23569,0.58557,4.0,0.6640215
AT1G01030,AT1G55280,0.57287,0.20705527,0.19536,0.52857,4.0,0.6048262
AT1G01030,AT1G80040,0.56268,0.22935495,0.18583998,0.52728,4.0,-0.5773431
AT1G01030,AT1G32310,0.67958,0.4832027,0.32644996,0.63247,4.0,-0.44314474
AT1G01030,AT5G30490,0.56509,0.37536618,0.16172999,0.51847,4.0,-0.43557298
AT1G01030,AT5G42580,0.61579,0.5019064,0.30105,0.58143,4.0,0.33746648
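For reference, here are the three steps chained into a single pipeline (a sketch built from the commands above; the NF-- trick relies on awk rebuilding $0 with OFS when NF is assigned, which gawk and mawk both do):
LC_COLLATE=C awk -F, 'NR>1{v=$NF;sub(/-/,"",v);printf "%s%s%s%s",$0,FS,v,RS}' file | sort -t, -k9,9rn | awk -F, -v OFS=, '{NF--}1'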

You could also get GNU awk (gawk) to do it all, since asorti is a gawk extension:
awk -F, 'NR>1{n[substr($NF,1,1)=="-"?substr($NF,2):$NF]=$0}NR==1;END{asorti(n,out);for(i in out)print n[out[i]]}' file
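If you have GNU awk 4 or later, a variant that makes the traversal order explicit (numeric, descending) could look like the sketch below; like the one-liner above, it keeps only one line per distinct absolute value, since that value is used as the array key:
gawk -F, 'NR==1 { print; next }                          # pass the header through untouched
          { k = $NF; sub(/^-/, "", k); n[k] = $0 }       # key on the absolute value (as a string)
          END { PROCINFO["sorted_in"] = "@ind_num_desc"  # traverse keys numerically, descending
                for (k in n) print n[k] }' file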

Related

check if column has more than one value in unix [duplicate]

I have a text file with a large amount of data which is tab delimited. I want to have a look at the data such that I can see the unique values in a column. For example,
Red Ball 1 Sold
Blue Bat 5 OnSale
...............
So, it's like the first column has colors, and I want to know how many different unique values there are in that column, and I want to be able to do that for each column.
I need to do this in a Linux command line, so probably using some bash script, sed, awk or something.
What if I wanted a count of these unique values as well?
Update: I guess I didn't put the second part clearly enough. What I want is a count of "each" of these unique values, not just how many unique values there are. For instance, in the first column I want to know how many Red, Blue, Green, etc. coloured objects there are.
You can make use of cut, sort and uniq commands as follows:
cat input_file | cut -f 1 | sort | uniq
This gets the unique values in field 1; replacing 1 with 2 will give you the unique values in field 2.
Avoiding UUOC :)
cut -f 1 input_file | sort | uniq
EDIT:
To count the number of unique occurrences, you can make use of the wc command in the chain:
cut -f 1 input_file | sort | uniq | wc -l
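For the update in the question (a count of each distinct value, rather than how many distinct values there are), uniq -c prefixes every value with the number of times it occurs:
cut -f 1 input_file | sort | uniq -c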
awk -F '\t' '{ a[$1]++ } END { for (n in a) print n, a[n] } ' test.csv
You can use awk, sort & uniq to do this; for example, to list all the unique values in the first column:
awk < test.txt '{print $1}' | sort | uniq
As posted elsewhere, if you want to count the number of instances of something you can pipe the unique list into wc -l
Assuming the data file is actually Tab separated, not space aligned:
<test.tsv awk '{print $4}' | sort | uniq
Where $4 will be:
$1 - Red
$2 - Ball
$3 - 1
$4 - Sold
# COLUMN is integer column number
# INPUT_FILE is input file name
cut -f ${COLUMN} < ${INPUT_FILE} | sort -u | wc -l
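For instance, to count how many distinct colours appear in the first column of the question's data (the file name test.txt is assumed here):
COLUMN=1
INPUT_FILE=test.txt
cut -f ${COLUMN} < ${INPUT_FILE} | sort -u | wc -l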
Here is a bash script that fully answers the (revised) original question. That is, given any .tsv file, it provides a synopsis of each column in turn. Apart from bash itself, it only uses standard *ix/Mac tools: sed, tr, wc, cut, sort, uniq.
#!/bin/bash
# Syntax: $0 filename
# The input is assumed to be a .tsv file
FILE="$1"
# Count tabs in the header line; columns = tabs + 1 (the loop below uses i < cols).
cols=$(sed -n 1p "$FILE" | tr -cd '\t' | wc -c)
cols=$((cols + 2))
for ((i = 1; i < cols; i++))
do
    echo Column $i ::
    cut -f $i < "$FILE" | sort | uniq -c
    echo
done
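For example, if the script above is saved as colsummary.sh (names chosen here just for illustration) and made executable, you would run it as:
chmod +x colsummary.sh
./colsummary.sh data.tsv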
This script outputs every unique value in each column of a given file, together with its count. It assumes that the first line of the file is a header line, and there is no need to specify the number of fields. Simply save the script in a bash file (.sh) and provide the tab-delimited file as a parameter.
Code
#!/bin/bash
awk '
# Requires GNU awk 4+ (arrays of arrays).
(NR==1){                      # header line: remember the column names
    for(fi=1; fi<=NF; fi++)
        fname[fi]=$fi;
}
(NR!=1){                      # data lines: count each value per column
    for(fi=1; fi<=NF; fi++)
        arr[fname[fi]][$fi]++;
}
END{                          # one output line per column: name, then value_count pairs
    for(fi=1; fi<=NF; fi++){
        out=fname[fi];
        for (item in arr[fname[fi]])
            out=out"\t"item"_"arr[fname[fi]][item];
        print(out);
    }
}
' "$1"
Execution Example:
bash> ./script.sh <path to tab-delimited file>
Output Example
isRef A_15 C_42 G_24 T_18
isCar YEA_10 NO_40 NA_50
isTv FALSE_33 TRUE_66

Sort by length of column

Need help sorting by length of the 4th column with a Unix command.
Example data (all data is made up, and not actual).
5032:Stack:overflows#business.com:123:JamesPeterson
3200:Admin:admin#me.com:12ej3dij23i2j32:AdminAdmin
1024:GregoryJames:greg#admin.com:12329232:GregJames
Preferred format (sorted so that the row with the longest 4th column comes first):
3200:Admin:admin#me.com:12ej3dij23i2j32:AdminAdmin
1024:GregoryJames:greg#admin.com:12329232:GregJames
5032:Stack:overflows#business.com:123:JamesPeterson
Use awk to prepend a column containing the length of the 4th column, sort by that, then remove it.
awk -F: '{printf("%d %s\n", length($4), $0)}' input.txt | sort -nr | cut -d' ' -f2- > output.txt
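If the data itself could ever contain spaces, a variant of the same decorate-sort-undecorate idea reuses the ':' delimiter for the temporary length column (a sketch):
awk -F: -v OFS=: '{print length($4), $0}' input.txt | sort -t: -k1,1nr | cut -d: -f2- > output.txt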

How to remove duplicates by column (inverse ordering)

I've been looking for this here, but did not find the exact case. Sorry if it is a duplicate, but I couldn't find it.
I have a huge file in Debian that contains 4 columns separated by "#", with the following format:
username#source#date#time
For example:
A222222#Windows#2014-08-18#10:47:16
A222222#Juniper#2014-08-07#14:31:40
A222222#Juniper#2014-08-08#09:15:34
A111111#Juniper#2014-08-10#14:32:55
A111111#Windows#2014-08-08#10:27:30
I want to print unique rows based on the first two columns, and if duplicates are found, keep only the latest event based on date/time. With the list above, the result should be:
A222222#Windows#2014-08-18#10:47:16
A222222#Juniper#2014-08-08#09:15:34
A111111#Juniper#2014-08-10#14:32:55
A111111#Windows#2014-08-08#10:27:30
I have tested it using two commands:
cat file | sort -u -t# -k1,2
cat file | sort -r -u -t# -k1,2
But both of them print the following:
A222222#Windows#2014-08-18#10:47:16
A222222#Juniper#2014-08-07#14:31:40 --> Wrong line, it is older than the duplicate one
A111111#Juniper#2014-08-10#14:32:55
A111111#Windows#2014-08-08#10:27:30
Is there any way to do it?
Thanks!
This should work
tac file | awk -F# '!a[$1,$2]++' | tac
Output
A222222#Windows#2014-08-18#10:47:16
A222222#Juniper#2014-08-08#09:15:34
A111111#Juniper#2014-08-10#14:32:55
A111111#Windows#2014-08-08#10:27:30
First, you need to sort the input file to ensure the order of lines, e.g. for each duplicate username#source you will get the times in order. It is best to sort in reverse, so the last event comes first. This can be done with a simple sort, like:
sort -r < yourfile
This will produce the following from your input:
A222222#Windows#2014-08-18#10:47:16
A222222#Juniper#2014-08-08#09:15:34
A222222#Juniper#2014-08-07#14:31:40
A111111#Windows#2014-08-08#10:27:30
A111111#Juniper#2014-08-10#14:32:55
reverse-ordered lines, where for each username#source combination the latest event comes first.
Next, you need to filter the sorted lines to keep only the first event per combination. This can be done with several tools, such as awk, uniq or perl.
So, the solution is:
sort -r <yourfile | uniq -w16
or
sort -r <yourfile | awk -F# '!seen[$1,$2]++'
or
sort -r yourfile | perl -F'#' -lanE 'say $_ unless $seen{"$F[0],$F[1]"}++'
All of the above will print the following:
A222222#Windows#2014-08-18#10:47:16
A222222#Juniper#2014-08-08#09:15:34
A111111#Windows#2014-08-08#10:27:30
A111111#Juniper#2014-08-10#14:32:55
Finally, you can re-sort the unique lines however you need.
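For example, to put the deduplicated lines back into plain ascending order, you could append one more sort:
sort -r <yourfile | awk -F# '!seen[$1,$2]++' | sort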
awk -F\# '{ p = ($1 FS $2 in a ); a[$1 FS $2] = $0 }
!p { keys[++k] = $1 FS $2 }
END { for (k = 1; k in keys; ++k) print a[keys[k]] }' file
Output:
A222222#Windows#2014-08-18#10:47:16
A222222#Juniper#2014-08-08#09:15:34
A111111#Juniper#2014-08-10#14:32:55
A111111#Windows#2014-08-08#10:27:30
If you know for a fact that the first column is always 7 chars long, and second column also 7 chars long, you can extract unique lines considering only the first 16 characters with:
uniq file -w 16
Since you want the latter duplicate, you can reverse the data using tac prior to uniq and then reverse the output again:
tac file | uniq -w 16 | tac
Update: as commented below, uniq needs the duplicate lines to be adjacent (i.e. the input sorted), in which case this starts to become contrived and the awk-based suggestions are better. Something like this would still work though:
sort -s -t"#" -k1,2 file | tac | uniq -w 16 | tac

Unix - Sorting file name with a key but not knowing its position

I would like to sort those files using Unix commands:
MyFile_fdfdsf_20140326.txt
MyFile_4fg5d6_20100301.csv
MyFile_dfgfdklm_19990101.tar.gz
The result I am waiting for here is MyFile_fdfdsf_20140326.txt
So I'd like to get the file with the newest date.
I can't use 'sort -k', as the position of the key (the date) may vary
But in my file name there are always two "_" delimiters and a dot '.' for the file extension
Any help would be appreciated :)
Then use -t to indicate the field separator and set it to _:
sort -t'_' -k3
See an example of sorting the file names if they are in a file. I used -n for numeric sort and -r for reverse order:
$ sort -t'_' -nk3 file
MyFile_dfgfdklm_19990101.tar.gz
MyFile_4fg5d6_20100301.csv
MyFile_fdfdsf_20140326.txt
$ sort -t'_' -rnk3 file
MyFile_fdfdsf_20140326.txt
MyFile_4fg5d6_20100301.csv
MyFile_dfgfdklm_19990101.tar.gz
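And since the question asks for just the newest file, you can take the first line of the reverse-sorted output:
$ sort -t'_' -rnk3 file | head -n 1
MyFile_fdfdsf_20140326.txt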
From man sort:
-t, --field-separator=SEP
use SEP instead of non-blank to blank transition
-n, --numeric-sort
compare according to string numerical value
-r, --reverse
reverse the result of comparisons
Update
Thank you for your answer. It's perfect. But out of curiosity, what if I had an unknown number of delimiters, but the date was always after the last "_" delimiter, e.g. MyFile_abc_def_...20140326.txt? sort -t'_' -nk??? file – user3464809
You can trick it a little bit: print the last field, sort and then remove it.
awk -F_ '{print $NF, $0}' a | sort | cut -d' ' -f2-
See an example:
$ cat a
MyFile_fdfdsf_20140326.txt
MyFile_4fg5d6_20100301.csv
MyFile_dfgfdklm_19990101.tar.gz
MyFile_dfgfdklm_asdf_asdfsadfas_19940101.tar.gz
MyFile_dfgfdklm_asdf_asdfsadfas_29990101.tar.gz
$ awk -F_ '{print $NF, $0}' a | sort | cut -d' ' -f2-
MyFile_dfgfdklm_asdf_asdfsadfas_19940101.tar.gz
MyFile_dfgfdklm_19990101.tar.gz
MyFile_4fg5d6_20100301.csv
MyFile_fdfdsf_20140326.txt
MyFile_dfgfdklm_asdf_asdfsadfas_29990101.tar.gz

bash sort on multiple fields and deduplicating

I want to sort data like the content below on the first field first and then on the date in the third field, then keep only the latest entry for each ID (field 1), irrespective of the second field.
id1,description1,2013/11/20
id2,description2,2013/06/11
id2,description3,2012/10/28
id2,description4,2011/12/04
id3,description5,2014/02/09
id3,description6,2013/12/05
id4,description7,2013/12/05
id5,description8,2013/08/14
So the expected output will be
id1,description1,2013/11/20
id2,description2,2013/06/11
id3,description5,2014/02/09
id4,description7,2013/12/05
id5,description8,2013/08/14
Thanks
Jomon
You can use this awk:
> cat file
id1,description1,2013/11/20
id1,description1,2013/11/19
id2,description2,2013/06/11
id2,description3,2012/10/28
id2,description4,2011/12/04
id3,description5,2014/02/09
id3,description6,2013/12/05
id4,description7,2013/12/05
id5,description8,2013/08/14
> sort -t, -k1,1 -k3,3r file | awk -F, '!a[$1]++'
id1,description1,2013/11/20
id2,description2,2013/06/11
id3,description5,2014/02/09
id4,description7,2013/12/05
id5,description8,2013/08/14
Call sort twice; the first time, sort by the date, newest first. On the second call, sort uniquely and stably on the first field, so that for each id the first line from the previous pass (the one with the latest date) is the one that is kept.
sort -t, -k3,3r data.txt | sort -t, -su -k1,1
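A quick check against the sample data from the question (assuming it is saved as data.txt, as in the command above):
$ sort -t, -k3,3r data.txt | sort -t, -su -k1,1
id1,description1,2013/11/20
id2,description2,2013/06/11
id3,description5,2014/02/09
id4,description7,2013/12/05
id5,description8,2013/08/14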
Try this:
sort -t, -k1,1 -k3,3r file | awk -F, '{if(map[$1] == ""){print $0; map[$1]="printed"}}'
Explanation:
I use sort to order the lines by id and then by date, newest first (so for each id the line to keep comes first).
And I use awk to store in a map whether the first column item was already printed.
If not (map[$1] == "") I print the line and store "printed" into map[$1] (so next time it won't be equal to "" for the current value of $1).
