grep|sort and display only the first line

I am trying to grep for a pattern, sort by a column, and display only the first line, using the following command:
grep "BEST SCORE" Result.txt | sort -nk 4 | "display only first line"
I don't want to save either the grep output or the sort output to a file.

grep "BEST SCORE" Result.txt|sort -nk 4|head -1

Related

How can I send the output of the last pipe to two different commands?

So, I have a text file with a bunch of numbers, one number per line to be specific, so I do:
cat filename.txt | sort -n | head -1 to get the number at the top, and cat filename.txt | sort -n | tail -1 to get the number at the bottom.
Just to be sure: is there a way to send the output of cat filename.txt | sort -n to two different commands in one line, and have the output (the highest number and the lowest number) next to each other?
You can do interesting things with tee and process substitutions, but the order of the output may not be stable (due to the timing of the processes):
sort -n filename.txt | tee >(tail -1 >/dev/tty) | head -1
In this case, I'd use sed to print the first and last line:
sort -n filename.txt | sed -n '1p; $p'
As @chepner suggests:
... | sed -n '1p; $p' | paste - - # tab separated
or
... | awk 'NR == 1 {first = $0} END {print first, $0}' # space separated
There is also the useful command tee: tee second.txt copies its input to second.txt as well as to stdout.
You can combine that with bash process substitution, e.g. tee >(wc),
so you can feed two or more commands, e.g. tee >(wc) | head
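If you want a stable, race-free result, here's a sketch of an alternative (the temp file name is just for illustration): sort once into a temporary file, then read it twice via input process substitutions:
sort -n filename.txt > /tmp/sorted.$$                        # sort once
paste <(head -1 /tmp/sorted.$$) <(tail -1 /tmp/sorted.$$)    # lowest<TAB>highest
rm /tmp/sorted.$$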

Shell: Counting lines per column while ignoring empty ones

I am trying to simply count the lines in the .CSV per column, while at the same time ignoring empty lines.
I use the below and it works for the 1st column:
cat /path/test.csv | cut -d, -f1 | grep . | wc -l >> ~/Desktop/Output.csv
# Outputs: 8
And the below for the 2nd column:
cat /path/test.csv | cut -d, -f2 | grep . | wc -l >> ~/Desktop/Output.csv
# Outputs: 6
But when I try to count the 3rd column, it simply outputs the total number of lines in the whole .CSV:
cat /path/test.csv | cut -d, -f3 | grep . | wc -l >> ~/Desktop/Output.csv
# Outputs: 33
# Should be: 19?
I've also tried to use awk instead of cut, but I get the same issue.
I have also tried creating a new file, thinking maybe it had some spaces in the lines; still the same.
Can someone clarify what the difference is between reading columns 1-2 and the rest?
20355570_01.tif,,
20355570_02.tif,,
21377804_01.tif,,
21377804_02.tif,,
21404518_01.tif,,
21404518_02.tif,,
21404521_01.tif,,
21404521_02.tif,,
,22043764_01.tif,
,22043764_02.tif,
,22095060_01.tif,
,22095060_02.tif,
,23507574_01.tif,
,23507574_02.tif,
,,23507574_03.tif
,,23507804_01.tif
,,23507804_02.tif
,,23507804_03.tif
,,23509247_01.tif
,,23509247_02.tif
,,23509247_03.tif
,,23527663_01.tif
,,23527663_02.tif
,,23527663_03.tif
,,23527908_01.tif
,,23527908_02.tif
,,23527908_03.tif
,,23535506_01.tif
,,23535506_02.tif
,,23535562_01.tif
,,23535562_02.tif
,,23535636_01.tif
,,23535636_02.tif
That happens when the input file has DOS line endings (\r\n): the trailing \r becomes part of the last field, so an "empty" 3rd column actually contains a \r and grep . matches it. Fix your file using dos2unix and your command will work for the 3rd column too.
dos2unix /path/test.csv
Or, you can remove the \r at the end while counting non-empty columns using awk:
awk -F, '{sub(/\r/,"")} $3!=""{n++} END{print n}' /path/test.csv
The problem is in the grep command: the way you wrote it, it returns 33 lines when you count the 3rd column.
It's better to use the following command to count the number of non-empty lines in the .CSV for each column (the example below is for the 3rd column):
cat /path/test.csv | cut -d , -f3 | grep -cve '^\s*$'
This returns the exact number of lines for each column and avoids piping into wc.
See previous post here:
count (non-blank) lines-of-code in bash
edit: I think oguz ismail found the actual reason in their answer. If they are right and your file has Windows line endings, you can use one of the following commands without having to convert the file:
cut -d, -f3 yourFile.csv | tr -d '\r' | grep -c .
cut -d, -f3 yourFile.csv | grep -c $'[^\r]' # bash only
old answer: Since I cannot reproduce your problem with the provided input, I'll take a wild guess:
The "empty" fields in the last column contain spaces. A field containing a space is not empty, although it looks empty since you cannot see the spaces.
To count only fields that contain something other than a space, adapt your regex from . (any character) to [^ ] (any character other than a space).
cut -d, -f3 yourFile.csv | grep -c '[^ ]'
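For reference, a one-pass awk sketch of my own (assuming the comma-delimited layout shown above) that strips a trailing \r and counts the non-empty cells in every column at once:
awk -F, '{ sub(/\r$/, ""); if (NF > m) m = NF; for (i = 1; i <= NF; i++) if ($i != "") n[i]++ } END { for (i = 1; i <= m; i++) print "column " i ": " n[i]+0 }' /path/test.csv
# column 1: 8
# column 2: 6
# column 3: 19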

BASH script help using TOP, GREP and CUT

Use the top command, repeating 5 times, and pipe the results to the grep and cut commands to print the PID of the init process on your screen.
Hi all, I have my line of code:
top -n 5 | grep "init" | cut -d" " -f3 > topdata
But I cannot see any output to verify that it's working.
Also, the next script asks me to use a one-line command which shows the total memory used in megabytes. I'm supposed to pipe the results from free to grep to select the lines with the pattern "Total:", then pipe that result to cut and display the number representing the total memory used. So far:
free -m -t | grep "total:" | cut -c25-30
I'm not getting any output from that one either. Any help appreciated.
Expanding on my comments:
grep is case sensitive. free says "Total", you grep for "total". So no match! Either grep for "Total" or use grep -i.
Instead of cut, I prefer awk when I need to get a number out of a line. You don't know what length the number will be, but you know it will be the first number after Total:. So:
free -m -t | grep "Total:" | awk '{print $2}'
For your top command: if you have no init process (you should, but it would probably not show in top), just grep for something else to see whether your code works. I used cinnamon (I'm running Mint). The top command is:
top -n 5 | grep "cinnamon" | awk '{print $1}'
Replace "cinnamon" by "init" for your requirement. Why $1 in the awk? My top puts the PID in the first column. Adjust accordingly.
Overall, using cut is good when you have a string that is delimited by some character. Ex. aaa;bbb;ccc, you would cut on -d';'. But here the numbers might have different lengths so using cut is not (IMHO) the best solution.
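One more thing worth checking (my addition, beyond the original answer): when top writes to a pipe instead of a terminal, you generally need batch mode (-b), otherwise the screen-control escape sequences mangle the output:
top -b -n 5 | grep "init" | awk '{print $1}'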
The init process has PID 1, so there's no reason to do it like this.
To find the PID of a process in general, I'd recommend:
pidof <name>
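For example (the output shown is illustrative; on a systemd machine PID 1 is named systemd, so pidof init may print nothing):
pidof init
# 1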

How to filter pipeline data according to column?

I wrote the following pipeline:
for i in `ls c*.txt | sort -V`; do echo $i; grep -v '#' ${i%???}_c_new.txt | grep -v 'seq-name' | cut -f 6 | grep -o '[0-9]*' | awk '{s+=$1} END {print s}'; done
Now, I want to take the 6th column (the cut -f 6 and later code) of only those lines which match a certain pattern in the 13th column.
Like this:
cut -f 13 | grep -o '^A$'
So that I look at the 13th column, and if the grep matches, then I take that line and run the rest of the pipeline: summing the numbers in the 6th column.
Please, how can I do such a thing? Thanks.
Make a grep command that takes the uncut lines and filters by the 13th field, like
grep -E '(\S+\s+){12}A\s'
and then pipe it to cut -f 6 and so on.
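Alternatively, a sketch assuming tab-separated fields (which the cut -f calls suggest) and a numeric 6th column: awk can do the filtering on field 13 and the summing of field 6 in one step, replacing the cut | grep | awk tail of the pipeline (the file name here is a placeholder):
grep -v '#' file_c_new.txt | grep -v 'seq-name' | awk -F'\t' '$13 == "A" { s += $6 } END { print s+0 }'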

Removing a line based on a criterion

I just want to delete the line which contains the number of selected rows in a query, I mean the one in the last line. Please help.
[root@machine-test scripts]# ./hit_ratio.sh
193830 432
185260 125
2 rows selected.
If you know you want to delete the last line, but not other lines which contain similar text, or you don't know what text it will contain, sed is uniquely suitable.
./hit_ratio.sh | sed '$d'
You don't need the power of sed or the super-powers of awk if all you want is to delete a line based on a pattern. You can use:
./hit_ratio.sh | grep -v ' rows selected.'
You can do it with awk and sed but it's a bit like trying to kill a fly with a thermo-nuclear warhead:
pax> ./hit_ratio.sh | sed '/ rows selected./d'
193830 432
185260 125
pax> ./hit_ratio.sh | awk '$2!="rows"{print}'
193830 432
185260 125
Alternatively, do something in your SQL script. Sometimes, enabling the set nocount on statement eliminates the "rows affected" line.
My first recommendation is not to have that line output at all; post hit_ratio.sh here, maybe it can be modified not to output that line.
Anyway, if you have to remove only the last line, the easiest is to use head:
./hit_ratio.sh | head -n -1
Using -n with a negative number makes head print all but the last N lines of its input.
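Note that a negative count for -n is a GNU head extension; a portable sketch of the same idea (print every line except the last) using awk:
./hit_ratio.sh | awk 'NR > 1 { print prev } { prev = $0 }'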
Use head to get the first N - 1 lines of your file, where N is the length of the file (calculated with wc -l):
head -n $(($(wc -l < lipsum.log) - 1)) lipsum.log
Pipe through
sed -e '/\w*[0-9]\+ rows\? selected/d'
