Sort according to second column numerically and first alphabetically - bash

I have a file with 2 columns that I want to sort using bash.
I used the command:
sort -k2 -n
c 9
c 11
c 11
sh 11
c 13
c 15
txt 47
txt 94
txt 345
txt 628
sh 3673
This is the result, but I need them sorted like this:
c 9
c 11
c 11
c 13
c 15
sh 11
sh 3673
txt 47
txt 94
txt 345
txt 628
Any ideas?

First sort by column 1, then by 2:
sort -k1,1 -k2,2n file.txt
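The ,1 matters here: -k1,1 ends the first key at field 1, whereas a bare -k1 would extend the key to the end of the line, so the numeric second key would never be consulted. A quick check on sample data:
$ printf 'sh 11\nc 15\nc 9\ntxt 47\n' | sort -k1,1 -k2,2n
c 9
c 15
sh 11
txt 47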

Related

Unix sort function that sorts a space delimited txt file according to specific column by ASCII value

I have tried to come up with an answer, but everything I try fails.
The code below is what I have come up with:
sort -k$field_number "$1".db > temp.txt && cp temp.txt "$1.db"
Shouldn't this line of code sort the .db file by ASCII value (sort should sort by ASCII by default, shouldn't it)? In the code, field_number corresponds to the column I wish to sort the file's lines by. When I use my code to format the file (sorting by column 2), I get the output below.
Textfile (the .db file) format:
a 5 5 5
Green 72 72 72
Smith 84 72 93
Jones 85 73 94
z 9 9 9
Ford 92 64 93
Miller 93 73 87
Maybe your problem is with your collation. Try this, please:
LC_COLLATE=C sort -n --ignore-case -k$field_number "$1".db > temp.txt && cp temp.txt "$1.db"
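Setting LC_COLLATE=C makes sort compare raw byte values (ASCII order, all uppercase letters before all lowercase) instead of the locale's collation rules. The difference is easy to see in isolation; in a typical UTF-8 locale:
$ printf 'Green\na\nz\nFord\n' | sort
a
Ford
Green
z
$ printf 'Green\na\nz\nFord\n' | LC_COLLATE=C sort
Ford
Green
a
z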

How to combine a column from multiple text files?

I want to extract and combine a certain column from a bunch of text files into a single file as shown.
File1_example.txt
A 123 1
B 234 2
C 345 3
D 456 4
File2_example.txt
A 123 5
B 234 6
C 345 7
D 456 8
File3_example.txt
A 123 9
B 234 10
C 345 11
D 456 12
...
..
.
File100_example.txt
A 123 55
B 234 66
C 345 77
D 456 88
How can I loop through the files of interest and paste these columns together so that the final result looks like the below, without having to type out 100 unique file names?
1 5 9 ... 55
2 6 10 ... 66
3 7 11 ... 77
4 8 12 ... 88
Try this:
paste File[0-9]*_example.txt | awk '{i=3;while($i){printf("%s ",$i);i+=3}printf("\n")}'
Example:
File1_example.txt:
A 123 1
B 234 2
C 345 3
D 456 4
File2_example.txt:
A 123 5
B 234 6
C 345 7
D 456 8
Run command as:
$ paste File[0-9]*_example.txt | awk '{i=3;while($i){printf("%s ",$i);i+=3}printf("\n")}'
Output:
1 5
2 6
3 7
4 8
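Each pasted file contributes three fields, so the values of interest land in fields 3, 6, 9, and so on; the while loop walks exactly those fields. Two caveats: the glob File[0-9]*_example.txt expands in lexical order (File10_example.txt comes before File2_example.txt), and while($i) stops early if a value happens to be 0. A bounded for loop, as a sketch of the same idea, avoids the latter:
paste File[0-9]*_example.txt | awk '{for (i = 3; i <= NF; i += 3) printf "%s ", $i; print ""}'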
I tested the code below with the first 3 files:
cat File*_example.txt | awk '{a[$1$2]= a[$1$2] $3 " "} END{for(x in a){print a[x]}}' | sort
1 5 9
2 6 10
3 7 11
4 8 12
1) Use an awk array, a[$1$2] = a[$1$2] $3 " ": the index is column 1 plus column 2, and the value accumulates every column 3 seen for that key.
2) END{for (x in a) print a[x]} traverses array a and prints all values.
3) Pipe through sort to order the output, since for (x in a) visits keys in an unspecified order.
When cat-ing, you need to ensure the file order is preserved; one way is to explicitly list the files:
cat File{1..100}_example.txt | awk '{print $NF}' | pr -100ts' '
This extracts the last column with awk and aligns the values into columns with pr (the column count passed to pr must match the number of files).
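For instance, with only the first three example files and a column count of 3, the pipeline reproduces the expected layout:
$ cat File{1..3}_example.txt | awk '{print $NF}' | pr -3ts' '
1 5 9
2 6 10
3 7 11
4 8 12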

How to merge three text files into three columns on screen

How can I merge three text files into three columns on screen?
1 A 1
2 B 2
3 C 3
  D
  E
I tried...
paste file1.txt file2.txt file3.txt | column -s $'\t' -t
...but I always get
1 A 1
2 B 2
3 C 3
D
E
Thanks in advance for your help!
line 1-2 of file1.txt
USB Device Class ID:
CdRom&Ven_ZALMAN&Prod__Virtual_CD-Rom&Rev_
line 1-2 of file2.txt
USB Instance ID:
______XX00000001&1
line 1-2 of file3.txt
Last updated (Subkey):
2015-01-12 15:08:45 UTC+0000
I don't know your input files, but paste works as intended.
$ paste <(seq 1 4) <(seq 10 17) <(seq 5 9)
1       10      5
2       11      6
3       12      7
4       13      8
        14      9
        15
        16
        17
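Note that paste does emit the empty cells; the fields are separated by tabs, so the apparent misalignment only appears when a tab-unaware tool renders them. Piping through cat -A (GNU coreutils) makes the tabs (^I) and line ends ($) visible:
$ paste <(seq 1 2) <(printf 'A\nB\nC\n') <(seq 1 2) | cat -A
1^IA^I1$
2^IB^I2$
^IC^I$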
:|paste -d ' ' file1 - file2 - file3 | column -ts "| "
This combines many files into a table: the leading :| supplies an empty stdin for the - placeholders, column -t aligns the fields, and -s "| " sets the input separator.
The output will look like this:
1 A 1
2 B 2
3 C 3
  D
  E
If you only have 3 files, or just a few, to deal with, you can do this:
$ paste foo[12].txt | expand -t 45 | paste - foo3.txt | expand -t 12
USB Device Class ID:                         USB Instance ID:           Last updated (Subkey):
CdRom&Ven_ZALMAN&Prod__Virtual_CD-Rom&Rev_   ______XX00000001&1         2015-01-12 15:08:45 UTC+0000
                                             ______XY0000000182
$
You need to choose the tab expansions 45 and 12 depending upon maximum line widths in foo1.txt and foo2.txt.
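The trick works because paste joins lines with a tab, and expand -t N replaces each tab with enough spaces to reach the next multiple of N, so the second column always starts at the same screen position as long as N exceeds the longest line in the first file. A minimal illustration:
$ printf 'short\tX\nlonger-line\tY\n' | expand -t 20
short               X
longer-line         Y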

Removing multiple blocks of lines from a text file in bash

Assume a text file with 40 lines of data. How can I remove lines 3 to 10, 13 to 20, 23 to 30, and 33 to 40, in place, using a bash script?
I already know how to remove lines 3 to 10 with sed, but I wonder whether all the removals can be done in place with a single command. I could use a for loop, but the problem is that with each iteration the line numbers shift, so removing each block would need extra calculation of the line numbers.
Here is an awk one-liner that works for your needs no matter whether your file has 40 lines or 40k lines:
awk 'NR~/[12]$/' file
It keeps every line whose line number ends in 1 or 2 (lines 1, 2, 11, 12, 21, 22, ...), which is exactly what remains after your deletions. For example, with 50 lines:
kent$ seq 50|awk 'NR~/[12]$/'
1
2
11
12
21
22
31
32
41
42
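This prints the kept lines to stdout. To edit the file in place, as the question asks, and assuming GNU awk 4.1 or later is available, the inplace extension does it directly:
gawk -i inplace 'NR~/[12]$/' file   # rewrites file, keeping only line numbers ending in 1 or 2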
sed -i '3,10d;13,20d;23,30d;33,40d' file
This might work for you (GNU sed):
sed '3~10,+7d' file
Starting at line 3 and at every 10th line thereafter, this deletes that line and the 7 lines that follow it.
If the file was longer than 40 lines and you were only interested in the first 40 lines:
sed '41,$b;3~10,+7d' file
The first instruction tells sed to ignore lines 41 to end-of-file.
Could also be written:
sed '1,40{3~10,+7d}' file
@Kent's answer is the way to go for this particular case, but in general:
$ seq 50 | awk '{idx=(NR%10)} idx>=1 && idx<=2'
1
2
11
12
21
22
31
32
41
42
The above will work even if you want to select the 4th through 7th lines out of every 13, for example:
$ seq 50 | awk '{idx=(NR%13)} idx>=4 && idx<=7'
4
5
6
7
17
18
19
20
30
31
32
33
43
44
45
46
It's not constrained to N out of 10.
Or to select just the 3rd, 5th and 6th lines out of every 13:
$ seq 50 | awk 'BEGIN{split("3 5 6",tmp); for (i in tmp) tgt[tmp[i]]=1} tgt[NR%13]'
3
5
6
16
18
19
29
31
32
42
44
45
The point is - selecting ranges of lines is a job for awk, definitely not sed.
awk '{m=NR%10} !(m==0 || m>=3)' file > tmp && mv tmp file   # keep lines 1-2 of each block of 10; writing to a temp file gives the in-place effect

Need help formatting data

I need your help formatting my data. I have data like the below:
Ver1
12 45
Ver2
134 23
Ver3
2345 980
ver4
21 1
ver36
213141222 22
....
...etc
I need my data in the below format:
ver1 12 45
ver2 134 23
ver3 2345 980
ver4 21 1
etc.....
I also want the totals of columns 2 and 3 at the end of the output. I'm not sure how to script this (maybe awk can, but I'm not sure). If possible, please share a detailed answer so I can learn and understand it.
$ awk 'NR%2{printf "%s ", $0; next}
{col1+=$1; col2+=$2} 1;
END{print "TOTAL col1="col1, "col2="col2}' file
Ver1 12 45
Ver2 134 23
Ver3 2345 980
ver4 21 1
ver36 213141222 22
TOTAL col1=213143734 col2=1071
It merges every two lines, as in Kent's solution. It also sums the 1st and 2nd columns of the data lines into the col1 and col2 variables. Finally, it prints the totals in the END {} block.
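As an aside, the line-pairing step on its own can also be done with paste, which here reads two consecutive lines from stdin per output line (the sums would still need awk):
$ paste -d ' ' - - < file
Ver1 12 45
Ver2 134 23
Ver3 2345 980
ver4 21 1
ver36 213141222 22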
