Why does coreutils sort give a different result when I use a different field delimiter?

When using sort on the command line, why does the sorted order depend on which field delimiter I use? As an example,
$ # The test file:
$ cat test.csv
2,az,a,2
3,a,az,3
1,az,az,1
4,a,a,4
$ # sort based on fields 2 and 3, comma separated. Gives correct order.
$ LC_ALL=C sort -t, -k2,3 test.csv
4,a,a,4
3,a,az,3
2,az,a,2
1,az,az,1
$ # replace , by ~ as field separator, then sort as before. Gives incorrect order.
$ tr "," "~" < test.csv | LC_ALL=C sort -t"~" -k2,3
2~az~a~2
1~az~az~1
4~a~a~4
3~a~az~3
The second case not only gets the ordering wrong, but is inconsistent between field 2 (where az < a) and field 3 (where a < az).

The mistake is in -k2,3. That tells sort to use a single key that starts at the 2nd field and ends at the 3rd field, so the delimiter between the two fields is part of the key and is compared like any other character. In the C locale the comma (0x2C) sorts before the letters while ~ (0x7E) sorts after them, so the combined keys az,a and a,az compare differently than az~a and a~az. That's why you get different orders with different delimiters.
What you want is the following:
LC_ALL=C sort -t"," -k2,2 -k3,3 file
And:
tr "," "~" < file | LC_ALL=C sort -t"~" -k2,2 -k3,3
That means: sort on the 2nd field, and where the 2nd field has duplicates, break the tie on the 3rd field. Each key now ends at a field boundary, so the delimiter is never part of the comparison.
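A quick way to see the difference is to run both key specifications on the same data (a sketch, recreating the test file from above): with -k2,2 -k3,3 the result no longer depends on the delimiter.

```shell
# Recreate the sample file
printf '2,az,a,2\n3,a,az,3\n1,az,az,1\n4,a,a,4\n' > test.csv

# Single spanning key: the delimiter between fields 2 and 3 is compared too
LC_ALL=C sort -t, -k2,3 test.csv
tr ',' '~' < test.csv | LC_ALL=C sort -t'~' -k2,3        # different order

# One key per field: the delimiter never takes part in the comparison
LC_ALL=C sort -t, -k2,2 -k3,3 test.csv
tr ',' '~' < test.csv | LC_ALL=C sort -t'~' -k2,2 -k3,3  # same order
```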

Related

get unique combination of values of two columns

I can get the unique values from a column using the command below:
cut -d',' -f3 file.txt | uniq -c
This gives me unique values in field 3.
But if I want to get unique combination of two fields, how can I get that ?
input
A,B,C
B,C,D
D,B,C
H,C,D
K,C,D
output
2 B,C
3 C,D
You can specify a range of fields using -f 2-3 or -f 2,3:
cut -d',' -f2-3 file.txt | sort | uniq -c
uniq does not detect repeated lines unless they are adjacent, so the input should be sorted before running uniq.
Output
2 B,C
3 C,D
Another option you may find provides greater flexibility in processing the input is awk. You can use a concatenation of the fields at issue as an index for an array to sum the occurrences of each unique combination of fields and then output the results using the END rule, e.g.
awk -F, '{a[$2","$3]++} END{for(i in a)print a[i], i}' file
Example Use/Output
With your example file in input you would have:
$ awk -F, '{a[$2","$3]++} END{for(i in a)print a[i], i}' input
3 C,D
2 B,C
awk arrays are associative rather than indexed, but you can preserve the order of appearance using a 3rd array if needed. Or you can simply pipe the output to sort for whatever order you like.
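For instance, a sketch of an order-preserving variant (the array and variable names are illustrative): a second array records each combination the first time it appears, and the END rule walks that array in order.

```shell
awk -F, '!(($2","$3) in a){order[++n]=$2","$3}   # remember first appearance
         {a[$2","$3]++}                          # count the combination
         END{for(i=1;i<=n;i++) print a[order[i]], order[i]}' input
```

With the input above this prints 2 B,C before 3 C,D, i.e. in order of first appearance.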

How to sort a file by line length and then alphabetically for the second key?

Say I have a file:
ab
aa
c
aaaa
I would like it to be sorted like this
c
aa
ab
aaaa
That is to sort by line length and then alphabetically. Is that possible in bash?
You can prepend the length of the line to each line, then sort numerically, and finally cut the length off again (cut needs -d' ' here because awk's default output separator is a space, not cut's default tab):
< your_file awk '{ print length($0), $0; }' | sort -n | cut -d' ' -f2-
You see that I've accomplished the sorting via sort -n, without doing any multi-key sorting. Honestly, I was lucky that this worked: I didn't think about lines that begin with numbers, and I expected sort -n to work because alphabetic and numeric sorting give the same result when all the strings have the same length, which is exactly the case here since we are sorting by the line length that awk prepends.
It turns out everything works even if your input has lines starting with digits, the reason being that sort -n
sorts numerically on the leading numeric part of the lines;
in case of ties, it uses strcmp to compare the whole lines
Here's some demo:
$ echo -e '3 11\n3 2' | sort -n
3 11
3 2
# the `3 ` on both lines makes them equal for numerical sorting
# but `3 11` comes before `3 2` by `strcmp` because `1` comes before `2`
$ echo -e '3 11\n03 2' | sort -n
03 2
3 11
# the `03 ` vs `3 ` is a numerical tie,
# but `03 2` comes before `3 11` by `strcmp` because `0` comes before `3`
So the lucky part is that the , I included in the awk command inserts a space (actually an OFS), i.e. a non-digit, thus "breaking" the numeric sorting and letting the strcmp sorting kick in (on the whole lines which compare equal numerically, in this case).
Whether this behavior is POSIX or not, I don't know, but I'm using GNU coreutils 8.32's sort. Refer to this question of mine and this answer on Unix for details.
awk could do all itself, but I think using sort to sort is more idiomatic (as in, use sort to sort) and efficient, as explained in a comment (after all, why would you not expect that sort is the best performing tool in the shell to sort stuff?).
Insert a length for the line using gawk (zero-filled to four places so it sorts correctly as text), sort by two keys (first the length, then the rest of the line), then remove the length:
gawk '{printf "%04d %s\n", length($0), $0}' file | sort -k1,1 -k2 | cut -d' ' -f2-
If it must be bash:
while read -r line; do printf "%04d %s\n" ${#line} "${line}"; done | sort -k1 -k2 | (while read -r len remainder; do echo "${remainder}"; done)
For GNU awk:
$ gawk '{
a[length()][$0]++ # hash to 2d array
}
END {
PROCINFO["sorted_in"]="#ind_num_asc" # first sort on length dim
for(i in a) {
PROCINFO["sorted_in"]="#ind_str_asc" # and then on data dim
for(j in a[i])
for(k=1;k<=a[i][j];k++) # in case there are duplicates
print j
# PROCINFO["sorted_in"]="#ind_num_asc" # I don't think this is needed
}
}' file
Output (the test file used here also contains two duplicate aaaaaaaaaa lines):
c
aa
ab
aaaa
aaaaaaaaaa
aaaaaaaaaa

Sort a file in unix by the absolute value of a field

I want to sort this file by the absolute value of the Linear regression (p) column in descending order. My attempt to do this didn't quite work, and I'm not sure why it fails. I found this code at http://www.unix.com/shell-programming-and-scripting/168144-sort-absolute-value.html.
awk -F',' '{print ($2>=0)?$2:-$2, $0}' OFS=',' mycsv1.csv | sort -n -k8,8 | cut -d ',' -f2-
X var,Y var,MIC (strength),MIC-p^2 (nonlinearity),MAS (non-monotonicity),MEV (functionality),MCN (complexity),Linear regression (p)
AT1G01030,AT1G32310,0.67958,0.4832027,0.32644996,0.63247,4.0,-0.44314474
AT1G01030,AT3G06520,0.61732,0.17639545,0.23569,0.58557,4.0,0.6640215
AT1G01030,AT5G42580,0.61579,0.5019064,0.30105,0.58143,4.0,0.33746648
AT1G01030,AT1G55280,0.57287,0.20705527,0.19536,0.52857,4.0,0.6048262
AT1G01030,AT5G30490,0.56509,0.37536618,0.16172999,0.51847,4.0,-0.43557298
AT1G01030,AT1G80040,0.56268,0.22935495,0.18583998,0.52728,4.0,-0.5773431
...
Please help me to understand the awk script to sort this file.
You could use sed and sort for this, following #hek2mgl's very smart logic of adding a field at the end and removing it again afterwards to keep the original lines intact:
sed -E 's/,([-]?)([0-9.]+)$/,\1\2,\2/' file | sort -t, -k9,9 -nr | cut -f1-8 -d,
sed -E 's/,([-]?)([0-9.]+)$/,\1\2,\2/' => creates field 9 as the absolute value of field 8
sort -t, -k9,9 -nr => sorts by the newly created field, numeric and descending order
cut -f1-8 -d, => removes the 9th field, restoring the output to its original format, with the desired sorting order
Here is the output:
AT1G01030,AT3G06520,0.61732,0.17639545,0.23569,0.58557,4.0,0.6640215
AT1G01030,AT1G55280,0.57287,0.20705527,0.19536,0.52857,4.0,0.6048262
AT1G01030,AT1G80040,0.56268,0.22935495,0.18583998,0.52728,4.0,-0.5773431
AT1G01030,AT1G32310,0.67958,0.4832027,0.32644996,0.63247,4.0,-0.44314474
AT1G01030,AT5G30490,0.56509,0.37536618,0.16172999,0.51847,4.0,-0.43557298
AT1G01030,AT5G42580,0.61579,0.5019064,0.30105,0.58143,4.0,0.33746648
Take three steps:
(1) Temporarily create a 9th field which contains the abs value of field 8:
LC_ALL=C awk -F, 'NR>1{v=$NF;sub(/-/,"",v);printf "%s%s%s%s",$0,FS,v,RS}' file
^ ------ make sure a C locale is set here (and for the sort below),
         since the interpretation of the decimal point depends on the locale.
(2) Sort that output based on the 9th field:
command_1 | sort -t, -k9r
(3) Pipe that back to awk to remove the temporary last field. NF-- decreases the number of fields, which effectively drops the last one, and the pattern 1 is always true, which makes awk print the line:
command_2 | awk -F, -v OFS=, '{NF--}1'
Output:
AT1G01030,AT3G06520,0.61732,0.17639545,0.23569,0.58557,4.0,0.6640215
AT1G01030,AT1G55280,0.57287,0.20705527,0.19536,0.52857,4.0,0.6048262
AT1G01030,AT1G80040,0.56268,0.22935495,0.18583998,0.52728,4.0,-0.5773431
AT1G01030,AT1G32310,0.67958,0.4832027,0.32644996,0.63247,4.0,-0.44314474
AT1G01030,AT5G30490,0.56509,0.37536618,0.16172999,0.51847,4.0,-0.43557298
AT1G01030,AT5G42580,0.61579,0.5019064,0.30105,0.58143,4.0,0.33746648
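The three steps can be assembled into one pipeline (a sketch; -n is added to the sort so the comparison on the absolute value is numeric rather than lexical, which only coincidentally agrees for these 0.x values):

```shell
# Step 1: append |field 8| as field 9 (header skipped by NR>1)
# Step 2: sort numerically, descending, on that field
# Step 3: strip the temporary field again
LC_ALL=C awk -F, 'NR>1{v=$NF;sub(/-/,"",v);printf "%s%s%s%s",$0,FS,v,RS}' file \
  | LC_ALL=C sort -t, -k9,9nr \
  | cut -d, -f1-8
```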
GNU awk could do it all itself (asorti sorts the absolute values as strings, which works here because they all share the 0.x format; walking the sorted index array backwards yields descending order):
awk -F, 'NR>1{n[substr($NF,1,1)=="-"?substr($NF,2):$NF]=$0}NR==1;END{c=asorti(n,out);for(i=c;i>=1;i--)print n[out[i]]}' file

Unix - Sorting file name with a key but not knowing its position

I would like to sort those files using Unix commands:
MyFile_fdfdsf_20140326.txt
MyFile_4fg5d6_20100301.csv
MyFile_dfgfdklm_19990101.tar.gz
The result I am waiting for here is MyFile_fdfdsf_20140326.txt
So I'd like to get the file with the newest date.
I can't use 'sort -k', as the position of the key (the date) may vary
But in my file name there are always two "_" delimiters and a dot '.' for the file extension
Any help would be appreciated :)
Use -t to set the field separator to _:
sort -t'_' -k3
See an example of sorting the file names if they are in a file. I used -n for numeric sort and -r for reverse order:
$ sort -t'_' -nk3 file
MyFile_dfgfdklm_19990101.tar.gz
MyFile_4fg5d6_20100301.csv
MyFile_fdfdsf_20140326.txt
$ sort -t'_' -rnk3 file
MyFile_fdfdsf_20140326.txt
MyFile_4fg5d6_20100301.csv
MyFile_dfgfdklm_19990101.tar.gz
From man sort:
-t, --field-separator=SEP
use SEP instead of non-blank to blank transition
-n, --numeric-sort
compare according to string numerical value
-r, --reverse
reverse the result of comparisons
Update
Thank you for your answer. It's perfect. But out of curiosity, what if I had an unknown number of delimiters, but the date was always after the last "_" delimiter? MyFile_abc_def_...20140326.txt sort -t'_' -nk??? file – user3464809
You can trick it a little bit: prepend the last field, sort, and then remove it. Cutting on a space (the separator awk inserts) keeps the complete original names:
awk -F_ '{print $NF, $0}' a | sort | cut -d' ' -f2-
See an example:
$ cat a
MyFile_fdfdsf_20140326.txt
MyFile_4fg5d6_20100301.csv
MyFile_dfgfdklm_19990101.tar.gz
MyFile_dfgfdklm_asdf_asdfsadfas_19940101.tar.gz
MyFile_dfgfdklm_asdf_asdfsadfas_29990101.tar.gz
$ awk -F_ '{print $NF, $0}' a | sort | cut -d' ' -f2-
MyFile_dfgfdklm_asdf_asdfsadfas_19940101.tar.gz
MyFile_dfgfdklm_19990101.tar.gz
MyFile_4fg5d6_20100301.csv
MyFile_fdfdsf_20140326.txt
MyFile_dfgfdklm_asdf_asdfsadfas_29990101.tar.gz
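If the file names might themselves contain spaces, a tab separator is safer, since a tab is very unlikely to appear inside a file name (a sketch):

```shell
# Prepend the last _-field, tab-separated; sort; strip the tab-prefixed key
awk -F_ -v OFS='\t' '{print $NF, $0}' a | sort | cut -f2-
```

cut splits on tabs by default, so -f2- returns the complete original name.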

bash sort on multiple fields and deduplicating

I want to sort data like the below content on the first field first and then on the date in the third field. Then keep only the latest for each ID(field 1) - irrespective of the second field.
id1,description1,2013/11/20
id2,description2,2013/06/11
id2,description3,2012/10/28
id2,description4,2011/12/04
id3,description5,2014/02/09
id3,description6,2013/12/05
id4,description7,2013/12/05
id5,description8,2013/08/14
So the expected output will be
id1,description1,2013/11/20
id2,description2,2013/06/11
id3,description5,2014/02/09
id4,description7,2013/12/05
id5,description8,2013/08/14
Thanks
Jomon
You can use this awk:
> cat file
id1,description1,2013/11/20
id1,description1,2013/11/19
id2,description2,2013/06/11
id2,description3,2012/10/28
id2,description4,2011/12/04
id3,description5,2014/02/09
id3,description6,2013/12/05
id4,description7,2013/12/05
id5,description8,2013/08/14
> sort -t, -k1,1 -k3,3r file | awk -F, '!a[$1]++'
id1,description1,2013/11/20
id2,description2,2013/06/11
id3,description5,2014/02/09
id4,description7,2013/12/05
id5,description8,2013/08/14
Call sort twice; the first time, sort by the date. On the second call, sort uniquely on the first field, but do so stably so that items with the same id remain sorted by date.
sort -t, -k3,3r data.txt | sort -t, -su -k1,1
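The two-pass approach can be checked directly against the sample data (a sketch):

```shell
# Recreate the sample data
cat > data.txt <<'EOF'
id1,description1,2013/11/20
id2,description2,2013/06/11
id2,description3,2012/10/28
id2,description4,2011/12/04
id3,description5,2014/02/09
id3,description6,2013/12/05
id4,description7,2013/12/05
id5,description8,2013/08/14
EOF

# First sort: newest date first across the whole file.
# Second sort: stable (-s) and unique (-u) on the id, so for each id
# the first line seen -- the one with the newest date -- survives.
sort -t, -k3,3r data.txt | sort -t, -su -k1,1
```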
Try this:
cat file | sort -u | awk -F, '{if(map[$1] == ""){print $0; map[$1]="printed"}}'
Explanation:
I use sort to sort (well, it could not be more simple), and I use awk to record in a map whether the first column's value was already printed.
If not (map[$1] == ""), I print the line and store "printed" in map[$1], so the next time the test fails for the current value of $1.
Note that sort -u compares whole lines, so within an id the order is driven by the second field rather than the date; it happens to give the desired result here, but the two-key sort above is more robust.
