Print only values smaller than a certain threshold in bash
I have a file with more than 10000 lines like this, mostly numbers and some strings:
-40
-50
stringA
100
20
-200
...
I would like to write a bash (or other) script that reads this file and outputs only numbers (no strings), and only those values smaller than zero (or some other predefined number). How can this be done?
In this case the output (sorted) would be
-40
-50
-200
...
cat filename | awk '{if($1==$1+0 && $1<THRESHOLD_VALUE)print $1}' | sort -n
The $1==$1+0 ensures that the field is a number; the script then checks that it is less than THRESHOLD_VALUE (change this to whatever number you wish), prints it if it passes, and sorts the result.
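For example, a minimal sketch of the same idea that passes the threshold in with awk's -v option instead of editing the script (the cutoff of 0 and the file name filename are just placeholders taken from the question):

awk -v threshold=0 '$1 == $1+0 && $1 < threshold { print $1 }' filename | sort -n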
awk '$1 < NUMBER { print }' FILENAME | sort -n
where NUMBER is the number that you want to use as an upper bound and FILENAME is your file with 10000+ lines of numbers. You can drop the | sort -n if you don't want to sort the numbers.
Edit: One small caveat: if your string starts with a number, awk will treat it as that number. Otherwise it should ignore it.
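If that caveat matters for your data, one stricter sketch (an assumption on my part that the values of interest are plain integers, as in the question's sample) is to require the whole field to look like an integer:

awk -v threshold=0 '$1 ~ /^-?[0-9]+$/ && $1 < threshold { print $1 }' FILENAME | sort -n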
Another alternative is as follows:
function compare() {
    if test $1 -lt $MAX_VALUE; then
        echo $1
    fi
} 2> /dev/null
Have a look at help test and man bash for further help on this. The 2> /dev/null redirects errors thrown by test when you try to compare something other than two integers. Call the function like:
compare 1
compare -1
compare stringA
Only the middle line will give output.
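To run the function over the whole file from the question, one possible sketch (assuming MAX_VALUE has been set beforehand and the input file is named filename, as elsewhere in this thread):

MAX_VALUE=0
while read -r value; do
    compare "$value"
done < filename | sort -n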
Related
How to select a specific percentage of lines?
Good morning! I have a file.csv with 140 lines and 26 columns. I need to sort the lines according to the values in column 23. This is an example:

Controller1,NA,ASHEBORO,ASH,B,,3674,4572,1814,3674,4572,1814,1859,#NAME?,0,124.45%,49.39%,19%,1,,"Big Risk, No Spare disk",45.04%,4.35%,12.63%,160,464,,,,,,0,1,1,1,0,410,65%,1.1,1.1,1.3,0.65,0.65,0.75,0.04,0.1,,,,,,,,,
Controller2,EU,FR,URG,D,,0,0,0,0,0,0,0,#NAME?,0,#DIV/0!,#DIV/0!,#DIV/0!,1,,#N/A,0.00%,0.00%,#DIV/0!,NO STATS,-1088,,,,,,#N/A,#N/A,#N/A,#N/A,0,#N/A,65%,1.1,1.1,1.3,0.65,0.65,0.75,0.04,0.1,,,,,,,,,
Controller3,EU,FR,URG,D,,0,0,0,0,0,0,0,#NAME?,0,#DIV/0!,#DIV/0!,#DIV/0!,1,,#N/A,0.00%,0.00%,#DIV/0!,NO STATS,-2159,,,,,,#N/A,#N/A,#N/A,#N/A,0,#N/A,65%,1.1,1.1,1.3,0.65,0.65,0.75,0.04,0.1,,,,,,,,,
Controller4,NA,STARR,STA,D,,4430,6440,3736,4430,6440,3736,693,#NAME?,0,145.38%,84.35%,18%,1,,No more Data disk,65.17%,19.18%,-2.18%,849,-96,,,,,,0,2,1,2,2,547,65%,1.1,1.1,1.3,0.65,0.65,0.75,0.04,0.1,,,,,,,,,

To select the lines according to the values in column 23, I do this:

awk -F "%*," '$23 > 4' myfikle.csv

The result:

Controller1,NA,ASHEBORO,ASH,B,,3674,4572,1814,3674,4572,1814,1859,#NAME?,0,124.45%,49.39%,19%,1,,"Big Risk, No Spare disk",45.04%,4.35%,12.63%,160,464,,,,,,0,1,1,1,0,410,65%,1.1,1.1,1.3,0.65,0.65,0.75,0.04,0.1,,,,,,,,,
Controller4,NA,STARR,STA,D,,4430,6440,3736,4430,6440,3736,693,#NAME?,0,145.38%,84.35%,18%,1,,No more Data disk,65.17%,19.18%,-2.18%,849,-96,,,,,,0,2,1,2,2,547,65%,1.1,1.1,1.3,0.65,0.65,0.75,0.04,0.1,,,,,,,,,

In my example I use the value of 4% in column 23, the goal being to retrieve all the rows whose percentage in column 23 has increased significantly. The problem is that I can't rely on the 4% value because it is only representative of the current table, so I have to find another way to retrieve the rows that have a high value in column 23. I want to sort the Controllers in descending order according to the percentage in column 23, and I prefer to process the first 10% of the sorted lines to make sure I get the controllers with a large percentage. The goal is to be able to vary the percentage according to the number of lines in the table. Do you have any tips for that? Thanks! :)
I could have sworn that this question was a duplicate, but so far I couldn't find a similar question. Whether your file is sorted or not does not really matter. From any file you can extract the first NUMBER lines with head -n NUMBER. There is no built-in way to specify the count as a percentage, but you can compute how many lines PERCENT% of your file is:

percentualHead() {
    percent="$1"
    file="$2"
    linesTotal="$(wc -l < "$file")"
    (( lines = linesTotal * percent / 100 ))
    head -n "$lines" "$file"
}

or, shorter but less readable,

percentualHead() {
    head -n "$(( "$(wc -l < "$2")" * "$1" / 100 ))" "$2"
}

Calling percentualHead 10 yourFile will print the first 10% of lines from yourFile to stdout. Note that percentualHead only works with files because the file has to be read twice. It does not work with FIFOs, <(), or pipes.
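Tying this back to the original question, one possible sketch (my own combination, using the same naive comma-splitting the awk answers rely on, and assuming sort -n is happy reading the leading number of values like 45.04%) is to sort on column 23 in descending order first and then take the top 10%:

sort -t, -k23,23 -rn file.csv > sorted.csv
percentualHead 10 sorted.csv

Here file.csv and sorted.csv are placeholder names.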
If you want to use standard tools, you'll need to read the file twice. But if you're content to use perl, you can simply do:

perl -e 'my @sorted = sort <>; print @sorted[0..$#sorted * .10]' input-file
Here is one for GNU awk to get the top p% from the file, but the lines are output in order of appearance:

$ awk -F, -v p=0.5 '       # 50 % of top $23 records
NR==FNR {                  # first run
    a[NR]=$23              # hash percentages to a, NR as key
    next
}
FNR==1 {                   # second run, at beginning
    n=asorti(a,a,"@val_num_desc")    # sort percentages to descending order
    for(i=1;i<=n*p;i++)    # get only the top p %
        b[a[i]]            # hash their NRs to b
}
(FNR in b)                 # top p % BUT not in order
' file file | cut -d, -f 23          # file processed twice, cut 23rd for demo
45.04%
19.18%

Commenting this in a bit.
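If you also want the selected rows ordered by column 23 rather than by appearance, a hedged follow-up (my addition, assuming the program above has been saved to a file, here called top.awk, and that sort -n reads the leading number of values like 45.04%) is to pipe the result through sort:

awk -F, -v p=0.5 -f top.awk file file | sort -t, -k23,23 -rn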
Get a percentage of randomly chosen lines from a text file
I have a text file (bigfile.txt) with thousands of rows. I want to make a smaller text file with 1% of the rows, randomly chosen. I tried the following:

output=$(wc -l bigfile.txt)
ds1=$(0.01*output)
sort -r bigfile.txt|shuf|head -n ds1

It gives the following error:

head: invalid number of lines: ‘ds1’

I don't know what is wrong.
Even after you fix the issues in your bash script, bash cannot do floating point arithmetic. You need an external tool like awk, which I would use as:

randomCount=$(awk 'END{print int((NR==0)?0:(NR/100))}' bigfile.txt)
(( randomCount )) && sort -r bigfile.txt | shuf | head -n "$randomCount"

E.g. writing a file with 221 lines using the loop below and trying to get random lines:

tmpfile=$(mktemp /tmp/abc-script.XXXXXX)
for i in {1..221}; do echo $i; done >> "$tmpfile"
randomCount=$(awk 'END{print int((NR==0)?0:(NR/100))}' "$tmpfile")

If I print the count, it returns the integer 2, and using that in the next command:

sort -r "$tmpfile" | shuf | head -n "$randomCount"
86
126
Roll a die (with rand()) for each line of the file to get a number between 0 and 1, and print the line if the die shows less than 0.01:

awk 'rand()<0.01' bigFile

Quick test - generate 100,000,000 lines and count how many get through:

seq 1 100000000 | awk 'rand()<0.01' | wc -l
999308

Pretty close to 1%. If you want the order random as well as the selection, you can pass this through shuf afterwards:

seq 1 100000000 | awk 'rand()<0.01' | shuf

On the subject of efficiency, which came up in the comments: this solution takes 24 s on my iMac with 100,000,000 lines:

time { seq 1 100000000 | awk 'rand()<0.01' > /dev/null; }

real    0m23.738s
user    0m31.787s
sys     0m0.490s

The only other solution that works here, heavily based on the OP's original code, takes 13 minutes 19 s.
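One caveat worth adding (mine, not from the answer above): many awk implementations seed rand() the same way on every run, so repeated runs select the same lines. If you want a different sample each time, you can seed it explicitly, for example from the shell:

awk -v seed="$RANDOM" 'BEGIN{srand(seed)} rand()<0.01' bigFile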
Creating histograms in bash
EDIT: I read the question that this is supposed to be a duplicate of (this one). I don't agree. In that question the aim is to get the frequencies of individual numbers in the column. However, if I apply that solution to my problem, I'm still left with my initial problem of grouping the frequencies of the numbers in a particular range into the final histogram. I.e., if that solution tells me that the frequency of 0.45 is 2 and 0.44 is 1 (for my input data), I'm still left with the problem of grouping those two frequencies into a total of 3 for the range 0.4-0.5. END EDIT

QUESTION: I have a long column of data with values between 0 and 1. It looks like this:

0.34
0.45
0.44
0.12
0.45
0.98
.
.
.

A long column of decimal values with repetitions allowed. I'm trying to change it into a histogram sort of output such as (for the input shown above):

0.0-0.1 0
0.1-0.2 1
0.2-0.3 0
0.3-0.4 1
0.4-0.5 3
0.5-0.6 0
0.6-0.7 0
0.7-0.8 0
0.8-0.9 0
0.9-1.0 1

Basically the first column has the lower and upper bounds of each range and the second column has the number of entries in that range. I wrote it (badly) as:

for i in $(seq 0 0.1 0.9)
do
    awk -v var=$i '{if ($1 > var && $1 < var+0.1 ) print $1}' input | wc -l
done

which basically does a wc -l of the entries it finds in each range. Output formatting is not a part of the problem. If I simply get the frequencies corresponding to the different bins, that will be good enough. Also please note that the bin size should be a variable, like in my proposed solution. I already read this answer and want to avoid the loop. I'm sure there's a much faster way in awk that bypasses the for loop. Can you help me out here?
Following the same algorithm as my previous answer, here is a script in awk which is extremely fast:

#!/usr/bin/awk -f
BEGIN{
    bin_width=0.1;
}
{
    bin=int(($1-0.0001)/bin_width);
    if( bin in hist){
        hist[bin]+=1
    }else{
        hist[bin]=1
    }
}
END{
    for (h in hist)
        printf " * > %2.2f  ->  %i \n", h*bin_width, hist[h]
}

bin_width is the width of each channel. To use the script, copy it into a file, make it executable (with chmod +x <namefile>) and run it with ./<namefile> <name_of_data_file>.
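The question also asks for the bin size to be a variable; one hedged tweak (my sketch, not the original script) is to default bin_width only when it was not supplied, so it can be overridden from the command line with -v:

#!/usr/bin/awk -f
# Same counting idea as above; bin_width may be overridden, e.g.:
#   awk -v bin_width=0.05 -f hist.awk data
BEGIN { if (bin_width == "") bin_width = 0.1 }
{ hist[int(($1 - 0.0001) / bin_width)]++ }
END { for (h in hist) printf " * > %2.2f  ->  %i\n", h * bin_width, hist[h] }

hist.awk and data are placeholder names.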
For this specific problem, I would drop the last digit, then count occurrences of sorted data:

cut -b1-3 | sort | uniq -c

which gives, on the specified input set:

  2 0.1
  1 0.3
  3 0.4
  1 0.9

Output formatting can be done by piping through this awk command:

| awk 'BEGIN{r=0.0}
  {while($2>r){printf "%1.1f-%1.1f %3d\n",r,r+0.1,0;r=r+.1}
   printf "%1.1f-%1.1f %3d\n",$2,$2+0.1,$1}
  END{while(r<0.9){printf "%1.1f-%1.1f %3d\n",r,r+0.1,0;r=r+.1}}'
The only loop you will find in this algorithm is over the lines of the file. This is an example of how to do what you asked in bash. Probably bash is not the best language for this since it is slow at math. I use bc; you can use awk if you prefer.

How the algorithm works: imagine you have many bins, where each bin corresponds to an interval. Each bin is characterized by a width (CHANNEL_DIM) and a position. The bins, all together, must cover the entire interval where your data fall. Dividing a value by the bin width gives the position of its bin, so you just add 1 to that bin's count. Here is a much more detailed explanation.

#!/bin/bash

# This is the input: you can use $1 and $2 to read input as cmd line arguments
FILE='bash_hist_test.dat'
CHANNEL_NUMBER=9  # They are actually 10: 0 is already a channel

# Check the max and the min to define the dimension of the channels:
MAX=`sort -n $FILE | tail -n 1`
MIN=`sort -rn $FILE | tail -n 1`

# Define the channel width
CHANNEL_DIM_LONG=`echo "($MAX-$MIN)/($CHANNEL_NUMBER)" | bc -l`
CHANNEL_DIM=`printf '%2.2f' $CHANNEL_DIM_LONG`
# Probably printf is not the best function in this context because
#+the result could be system dependent.

# Determine the channel for a given number
# Usage: find_channel <number_to_histogram> <width_of_histogram_channel>
function find_channel(){
    NUMBER=$1
    CHANNEL_DIM=$2

    # The channel is found by dividing the value by the channel width and
    #+rounding it.
    RESULT_LONG=`echo $NUMBER/$CHANNEL_DIM | bc -l`
    RESULT=`printf '%.0f' $RESULT_LONG`
    echo $RESULT
}

# Read the file and do the computation
while IFS='' read -r line || [[ -n "$line" ]]; do
    CHANNEL=`find_channel $line $CHANNEL_DIM`
    [[ -z ${HIST[$CHANNEL]} ]] && HIST[$CHANNEL]=0
    let HIST[$CHANNEL]+=1
done < $FILE

counter=0
for i in ${HIST[*]}; do
    CHANNEL_START=`echo "$CHANNEL_DIM * $counter - .04" | bc -l`
    CHANNEL_END=`echo "$CHANNEL_DIM * $counter + .05" | bc`
    printf '%+2.1f : %2.1f => %i\n' $CHANNEL_START $CHANNEL_END $i
    let counter+=1
done

Hope this helps. Comment if you have other questions.
Having SUM issues with a bash script
I'm trying to write a script to pull the integers out of 4 files that store temperature readings from 4 industrial freezers. This is a hobby script and it generates the general readouts I wanted; however, when I try to generate a SUM of the temperature readings, I get the following printout in the file. My goal is to print the final SUM only, not the individual digits printed out in a vertical format. Any help would be greatly appreciated; here's my code:

grep -o "[0.00-9.99]" "/location/$value-1.txt" | awk '{ SUM += $1; print $1} END { print SUM }' >> "/location/$value-1.txt"

Here is what I am getting in return:

Morningtemp:17.28
Noontemp:17.01
Lowtemp:17.00
Hightemp:18.72
1
7
.
2
8
1
7
.
0
1
1
7
.
0
0
1
8
.
7
2
53

It does generate the SUM; I don't need the already-listed numbers, just the SUM total.
Why not stick with awk completely?

Code:

$ cat > summer.awk
{
    while(match($0,/[0-9]+\.[0-9]+/))        # while matches on record
    {
        sum+=substr($0, RSTART, RLENGTH)     # extract matches and sum them
        $0=substr($0, RSTART + RLENGTH)      # reset to start after previous match
        count++                              # count matches
    }
}
END {
    print sum"/"count"="sum/count            # print stuff
}

Data:

$ cat > data.txt
Morningtemp:17.28
Noontemp:17.01
Lowtemp:17.00
Hightemp:18.72

Run:

$ awk -f summer.awk data.txt
70.01/4=17.5025

It might work in the winter too.
The regex in grep -o "[0.00-9.99]" "/location/$value-1.txt" is equivalent to [0-9.], but you're probably looking for numbers in the range 0.00 to 9.99. For that, you need a different regex:

grep -o "[0-9]\.[0-9][0-9]" "/location/$value-1.txt"

That looks for a digit, a dot, and two more digits. It was almost tempting to use [.] in place of \.; it would also work. A plain . would not; that would select entries such as 0X87.

Note that the pattern shown ([0-9]\.[0-9][0-9]) will match 192.16.24.231 twice (2.16 and 4.23). If that's not what you want, you have to be a lot more precise. OTOH, it may not matter in the slightest for the actual data you have. If you'd want it to match 192.16 and 24.231 (or .24 and .231), you have to refine your regex.

Your command structure:

grep … filename | awk '…' >> filename

is living dangerously. In the example, it is 'OK' (but there's a huge grimace on my face as I type 'OK') because the awk script doesn't write anything to the file until grep has read it all. But change the >> to > and you have an empty input, or have awk write material before the grep is complete, and suddenly it gets very tricky to determine what happens (it depends, in part, on what awk writes to the end of the file).
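Putting the corrected pattern together with the original goal of printing only the SUM, one possible sketch (mine, not part of this answer; it widens the pattern so two-digit temperatures such as 17.28 are matched whole, and it is still subject to the same caveat about appending to the file being read):

grep -o "[0-9][0-9]*\.[0-9][0-9]" "/location/$value-1.txt" | awk '{ sum += $1 } END { print sum }' >> "/location/$value-1.txt"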
Sorting and printing a file in bash UNIX
I have a file with a bunch of paths that look like so:

7 /usr/file1564
7 /usr/file2212
6 /usr/file3542

I am trying to use sort to pull out and print the path(s) with the most occurrences. Here is what I have so far:

cat temp | sort | uniq -c | sort -rk1 > temp

I am unsure how to print only the highest occurrences. I also want my output to be printed like this:

7 1564
7 2212

7 being the total number of occurrences and the other numbers being the file numbers at the end of the name. I am rather new to bash scripting, so any help would be greatly appreciated!
To emit only the first line of output (which has the highest number, since you're doing a reverse numeric sort immediately prior), pipe through head -n1. To remove all content which is not either a number or whitespace, pipe through tr -cd '0-9[:space:]'.

To filter for only the values with the highest number, allowing there to be more than one:

{
    read firstnum name && printf '%s\t%s\n' "$firstnum" "$name"
    while read -r num name; do
        [[ $num = $firstnum ]] || break
        printf '%s\t%s\n' "$num" "$name"
    done
} < temp
If you want to avoid sort and you are allowed to use awk, then you can do this:

awk '{ if($1>maxcnt) {s=$1" "substr($2,10,4); maxcnt=$1} else if($1==maxcnt) {s=s "\n"$1" "substr($2,10,4)} } END{print s}' temp