How do I sort one column without sorting the lines? [closed] - bash

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Can someone explain how I do this?
I have a file with three columns and many lines, and I want the second column to be sorted in ascending order (it contains only numbers) while the other columns keep their positions.
Sample file:
7.31937 736 /tmp/ref13
7.3223 5373 /tmp/ref13
7.32816 768 /tmp/ref13
7.32955 5370 /tmp/ref10
I want:
7.31937 736 /tmp/ref13
7.3223 768 /tmp/ref13
7.32816 5370 /tmp/ref13
7.32955 5373 /tmp/ref10
Thanks!!

You can try this:
paste -d' ' <(awk '{print $1}' yourFile) <(awk '{print $2}' yourFile | sort -n) <(awk '{print $3}' yourFile)
or
awk '{print $2}' yourFile | sort -n | paste yourFile - | awk '{print $1"\t"$4"\t"$3}'
E.g.:
user@host:/tmp$ cat t1
7.31937 736 /tmp/ref13
7.3223 5373 /tmp/ref13
7.32816 768 /tmp/ref13
7.32955 5370 /tmp/ref10
user@host:/tmp$ paste -d' ' <(awk '{print $1}' t1) <(awk '{print $2}' t1 | sort -n) <(awk '{print $3}' t1) | column -t
7.31937 736 /tmp/ref13
7.3223 768 /tmp/ref13
7.32816 5370 /tmp/ref13
7.32955 5373 /tmp/ref10
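Both one-liners work the same way: awk peels the file into single columns, sort -n reorders column 2 on its own, and paste stitches the pieces back together, with <( ... ) process substitution feeding each column stream to paste as if it were a file. If reading the file three times bothers you, one awk pass over the sorted column plus one over the file gives the same result; a sketch, reusing the yourFile name from above:
# First pass (NR==FNR): load the sorted column-2 values into an array, keyed by row number.
# Second pass: replace each line's $2 with the sorted value for that row and print.
awk 'NR==FNR { sorted[FNR] = $1; next } { $2 = sorted[FNR]; print }' \
    <(awk '{print $2}' yourFile | sort -n) yourFile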

Related

Why are the RX and TX values the same when executing the network packet statistics script on CentOS 8?

## test1 on rhel8 or centos8
$ for i in 1 2; do cat /proc/net/dev | grep ens192 | awk '{print "RX:"$2"\n""TX:"$10}'; done | awk '{print $0}' | tee 111
##result:
RX:2541598118
TX:1829843233
RX:2541598118
TX:1829843233
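The cause is simple: the per-interface numbers in /proc/net/dev are cumulative counters since boot, and the loop takes both samples within a few milliseconds, so they come out identical (the trailing awk '{print $0}' is a no-op as well). Putting a delay between samples lets the counters advance; a minimal sketch, keeping the ens192 interface name and the 111 output file from the question:
for i in 1 2; do
  awk '/ens192/ {print "RX:"$2"\nTX:"$10}' /proc/net/dev
  sleep 1   # give the counters time to advance between samples
done | tee 111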

Bash Shell: How do I sort by the values in the last column, but ignore the header of a file?

file
ID First_Name Last_Name(s) Average_Winter_Grade
323 Popa Arianna 10
317 Tabarcea Andreea 5.24
326 Balan Ionut 9.935
327 Balan Tudor-Emanuel 8.4
329 Lungu Iulian-Gabriel 7.78
365 Brailean Mircea 7.615
365 Popescu Anca-Maria 7.38
398 Acatrinei Andrei 8
How do I sort it by the last column, except for the header?
This is what file should look like after the changes:
ID First_Name Last_Name(s) Average_Winter_Grade
323 Popa Arianna 10
326 Balan Ionut 9.935
327 Balan Tudor-Emanuel 8.4
398 Acatrinei Andrei 8
329 Lungu Iulian-Gabriel 7.78
365 Brailean Mircea 7.615
365 Popescu Anca-Maria 7.38
317 Tabarcea Andreea 5.24
If it's always the 4th column:
head -n 1 file; tail -n +2 file | sort -n -r -k 4,4
If all you know is that it's the last column:
head -n 1 file; tail -n +2 file | awk '{print $NF,$0}' | sort -n -r | cut -f2- -d' '
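Note that each recipe is two commands (head for the header, then the pipeline for the body); to capture both into a new file, group them with braces (the sorted.txt name here is just an example):
{ head -n 1 file; tail -n +2 file | sort -n -r -k 4,4; } > sorted.txt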
You'd like to just sort by the last column, but sort doesn't allow you to do that easily. So rewrite the data with the column to be sorted at the beginning of each line:
Ignoring the header for the moment (although this will often work by itself):
awk '{print $NF, $0 | "sort -nr" }' input | cut -d ' ' -f 2-
If you do need to strip the added sort key within the same command (e.g., the header is getting mixed into the sort), you can do things like:
< input awk 'NR==1; NR>1 {print $NF, $0 | "sh -c \"sort -nr | cut -d \\\ -f 2-\"" }'
or
awk 'NR==1{ print " ", $0} NR>1 {print $NF, $0 | "sort -nr" }' OFS=\; input | cut -d \; -f 2-

Converting value to MB [bash] [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 4 years ago.
I have this code:
VAL1=`ps auxf | grep httpd | grep ^apache | grep -v grep | wc -l`
VAL2=`ps auxf | grep httpd | grep ^apache | grep -v grep | awk '{s+=$6} END {print s}'`
VAL3=`expr $VAL2 / $VAL1`
echo "servers.value $VAL3"
and then I get values like servers.value 63908. Tell me please, how can I get it in MB?
ps reports RSS in KiB; divide by 1024 and you get MiB (commonly written MB):
VAL3=`expr $VAL2 / $VAL1 / 1024`
echo "servers.value $VAL3 MB"
To get RSS from VAL2 in MB just use:
VAL2=`ps auxf | grep httpd | grep ^apache | grep -v grep | awk '{s+=$6} END {print s/1024}'`
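Bear in mind that expr only does integer division (bash's built-in $(( VAL2 / VAL1 / 1024 )) does the same without the external command); if a decimal result is wanted, awk can compute the average in one pass. A sketch, reusing the question's ps filter:
ps auxf | grep httpd | grep ^apache | grep -v grep \
  | awk '{ s += $6; n++ } END { if (n) printf "servers.value %.1f MB\n", s / n / 1024 }'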

Use sed in order to filter a file [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
I would like to use sed to filter a file and keep only the ID (made up of 3 digits) and the domain (e.g. google.com).
Original File:
451 [04/Jan/1997:03:35:55 +0100] http://www.netvibes.com
448 [04/Jan/1997:03:36:30 +0100] www.google.com:443
450 [04/Jan/1997:03:36:48 +0100] http://84.55.151.142:8080
452 [04/Jan/1997:03:36:51 +0100] http://127.0.0.1:9010
451 [04/Jan/1997:03:36:55 +0100] http://www.netvibes.com
453 [04/Jan/1997:03:37:10 +0100] api.del.icio.us:443
453 [04/Jan/1997:03:37:33 +0100] api.del.icio.us:443
448 [04/Jan/1997:03:37:34 +0100] www.google.com:443
sed commands used: sed -e 's/\[[^]]*\]//g' -e 's/http:\/\///g' -e 's/www.//g' -e 's/^.com//g' -e 's/:[0-9]*//g'
Current Output:
451 netvibes.com
448 google.com
450 84.55.151.142
452 127.0.0.1
451 netvibes.com
453 api.del.icio.us
453 api.del.icio.us
448 google.com
Desired output:
451 netvibes.com
448 google.com
451 netvibes.com
448 google.com
Using grep:
sed ... | grep -F '.com'
or
sed ... | grep '\.com$'
or with sed -n, using p to print only the matching lines:
sed -ne 's/\[[^]]*\]//g;s/http:\/\///g;s/www.//g;s/:[0-9]*//g;/.com$/p'
It looks like you dropped api.del.icio.us from your desired output, so:
cat testfile | awk '{print $1" "$NF}' | sed -r 's/http\:\/\/*//g;s/www\.//g' | awk -F: '{print $1}' | sed -r 's/([0-9]{1,3}) [0-9].*/\1 /g' | sed -r 's/[0-9]{3} $//g' | grep -v '^$' | uniq
If you need only *.com domains:
cat testfile | awk '{print $1" "$NF}' | sed -r 's/http\:\/\/*//g;s/www\.//g' | awk -F: '{print $1}' | sed -r 's/([0-9]{1,3}) [0-9].*/\1 /g' | sed -r 's/[0-9]{3} $//g' | grep -v '^$' | grep com | uniq
Here's one in awk:
$ awk 'match($NF,/[^\.]+\.[a-z]+($|:)/) {
print $1,substr($NF,RSTART,RLENGTH-($NF~/:[0-9]+/?1:0))
}' file
451 netvibes.com
448 google.com
451 netvibes.com
453 icio.us
453 icio.us
448 google.com
If you want just the .coms, replace [a-z]+ in the match regex with com.
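A single sed can also validate each line and emit the desired output directly, keeping just the three-digit ID and the .com domain; a sketch assuming GNU sed (-E for extended regexes) and the input in file:
sed -nE 's|^([0-9]{3}) \[[^]]*\] (https?://)?(www\.)?([^:/ ]*\.com)(:[0-9]+)?$|\1 \4|p' file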

Count how many times the same value appears across many lines [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results.
Closed 9 years ago.
The main file is:
785
785
788
788
883
883
883
921
921
921
921
921
921
925
925
I want to count how often each value occurs and write the results to a new file (as follows):
785 2
788 2
883 3
921 6
925 2
Thank you for your help.
sort myFile.txt | uniq -c | awk '{ print $2 " " $1}' > myNewFile.txt
Edit: added sort and removed cat to take comments into account
And if you want only values which appear at least 4 times:
sort temp.txt | uniq -c | sort -n | egrep -v "^ *[0-3] " | awk '{ print $2 " " $1}'
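The egrep -v "^ *[0-3] " step depends on how uniq -c pads its count column; filtering on the count field in awk is less fragile (a sketch):
sort temp.txt | uniq -c | awk '$1 >= 4 { print $2, $1 }'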
Imagine your file is called t
You can do it with:
cat t | sort -u | while read line  # read each distinct value, in sorted order
do
  echo -n "$line "                 # print the value (no newline yet)
  grep -x -c "$line" t             # count the lines that match it exactly
done
kent$ awk '{a[$0]++}END{for(x in a)print x, a[x]}' f
921 6
925 2
883 3
785 2
788 2
To print only values whose count is >= 4:
kent$ awk '{a[$0]++}END{for(x in a)if(a[x]>=4)print x, a[x]}' f
921 6
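Since for (x in a) visits keys in an unspecified order, pipe through sort -n if the values should come out in ascending order (a sketch):
awk '{a[$0]++} END {for (x in a) print x, a[x]}' f | sort -n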
