Bash - invalid arithmetic operator

I'm trying to study for a test and one of the subjects is bash scripting.
I have the following txt file:
123456 100
654321 50
203374111 86
I need to get the averages of the scores (the numbers in the second column).
This is what I have written :
cat $course_name$end | while read line; do
sum=`echo $line | cut -f2 -d" "`
let total+=$sum
done
I have tried with
while read -a line
and then
let sum+=${line[1]}
But I'm still getting the same error mentioned in the header.

I love AWK:
awk '{sum+=$2} END {print sum/NR}' x.txt
Here x.txt is the file the values are stored in. Please note that many answers don't actually compute the average: they still need to divide by the number of lines at the end, often with a wc -l < x.txt. With this solution you get it almost for free, because awk's NR already holds the line count.
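For example, against the sample file from the question (saved as x.txt) this should print:
$ awk '{sum+=$2} END {print sum/NR}' x.txt
78.6667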

cat your_file_name.txt | cut -f2 -d" " | paste -sd+ | bc
This will give you the sum of the scores.
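To get the average the question asks for, you could then divide by the line count, for example (a sketch using the same tools plus wc and bc):
echo "( $(cut -f2 -d' ' your_file_name.txt | paste -sd+ -) ) / $(wc -l < your_file_name.txt)" | bc -l
which prints the average (78.66...) for the sample data.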

You are very close, this works for me:
while read line; do
sum=$(echo $line | cut -f2 -d" ")
echo "sum is $sum"
let total+=$sum
echo "total is $total"
done < file
echo "total is $total"
As you can see, there is no need to use cat $course_name$end; it is enough to do
while read line
do
done < file
Also, it is recommended to use
sum=$(echo $line | cut -f2 -d" ")
rather than
sum=`echo $line | cut -f2 -d" "`
Or even
sum=$(cut -f2 -d" " <<< "$line")
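One more reason to prefer done < file over piping cat into the loop: the pipe runs the while loop in a subshell, so variables you accumulate inside it are lost when the loop ends. A quick way to see the difference (file stands in for your score file):
total=0
cat file | while read line; do (( total += 1 )); done
echo "$total"    # still 0: the loop ran in a subshell
total=0
while read line; do (( total += 1 )); done < file
echo "$total"    # the number of lines in file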

There's no need to use cat as well as read; you can redirect the contents of the file into the loop. You also don't need to use let for arithmetic.
sum=0
count=0
while read id score; do
(( sum += score )) && (( ++count ))
done < "$course_name$end"
echo $(( sum / count ))
This will give you an integer result, as bash doesn't do floating point arithmetic. To get a floating point result, you could use bc:
bc <<< "scale=2;$a/$b"
This will give you a result correct to 2 decimal places.
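For comparison, with the totals from the sample file (236 over 3 lines):
$ echo $(( 236 / 3 ))
78
$ bc <<< "scale=2; 236 / 3"
78.66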

Related

Bash - Sum numbers in line, line by line

So I have a file which looks likes the following:
9032894 First Last 89 43 100
9423897 First Last 89 20 48
And so on, continuing on in this format. My hope is to get the sum of the last three numbers, (such as 89 43 100 for the first line), and then use echo to display this sum. In my attempts to do this, I just end up with the entire first column stored in one variable, as the following code does:
first=$(echo $INT | cut -d" " -f4 $1)
Would take the entire first column in the file and store it in "first". How would I go through and sum each individual line, instead of by column? (I already know how to do this using awk; I'm attempting an alternative way of coding it in bash.) My full code thus far (which isn't even close to working) is:
#!/bin/bash
filename=$1
while read -a rows
do
first=$(echo $INT | cut -d" " -f4 $1)
second=$(echo $INT | cut -d" " -f5 $1)
third=$(echo $INT | cut -d" " -f6 $1)
echo ${first}
echo ${second}
echo ${third}
done< $filename
Thanks in advance, it's much appreciated.
You can read file columns directly into variables:
while read id firstname lastname first second third; do
echo "${first}"
echo "${second}"
echo "${third}"
echo Sum: $(( first + second + third ))
done< "$filename"
And get in the habit of quoting your variables, unless you specifically need the result to undergo word-splitting and globbing.
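As a quick illustration of what unquoted expansion does (val is just a throwaway example variable):
val="two   words"
printf '[%s]\n' $val      # word-splitting: prints [two] and [words] on separate lines
printf '[%s]\n' "$val"    # quoted: prints [two   words] with the spacing intact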
@Eric: Try:
awk '{print $NF+$(NF-1)+$(NF-2)}' Input_file
Since you mention that it is always the last 3 fields we need to sum, we don't need to go through a loop; we can simply add their values with $NF+$(NF-1)+$(NF-2), where $NF means the last field of a line, $(NF-1) is the second-to-last field, and so on.
If you read a line into an array, just sum the last three elements of the array:
while read -a cells ; do
echo $(( cells[-1] + cells[-2] + cells[-3] ))
done < file
Output:
232
157
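Note that negative array subscripts are a relatively recent bash feature; if your bash is older, you can index from the array length instead (a sketch, assuming the data is in file):
while read -a cells ; do
n=${#cells[@]}
echo $(( cells[n-1] + cells[n-2] + cells[n-3] ))
done < file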
You could also use awk:
$ awk '{s=0; for(i=4;i<=NF;i++) s+=$i; print s}' file
232
157

Convert Floating Point Number To Integer Via File Read

I'm trying to get this to work when the "line" is in the format ###.###
Example line of data:
Query_time: 188.882
Current script:
#!/bin/bash
while read line; do
if [ $(echo "$line" | cut -d: -f2) -gt 180 ];
then
echo "Over 180"
else
echo "Under 180"
fi
done < test_file
Errors I get:
./calculate: line 4: [: 180.39934: integer expression expected
If you have:
line='Query_time: 188.882'
This expression:
$(echo "$line" | cut -d: -f2) -gt 180
Will give an "invalid arithmetic operator" error, since bash cannot handle floating point numbers.
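If you want to stay in the shell, a common workaround is to let bc do the comparison itself, since GNU bc prints 1 for true and 0 for false (a sketch reusing $line from above):
value=$(echo "$line" | cut -d: -f2)
if (( $(bc <<< "$value > 180") )); then
echo "Over 180"
else
echo "Under 180"
fi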
You can use this awk command:
awk -F ':[ \t]*' '{print ($2 > 180 ? "above" : "under")}' <<< "$line"
above
You can use this awk:
$ echo Query_time: 188.882 | awk '{ print ($2>180?"Over ":"Under ") 180 }'
Over 180
It takes the second space-delimited field ($2) and, using the ternary conditional operator, prints whether it was over or under (less than or equal to) 180.

Reading a file in a shell script and selecting a section of the line

This is probably pretty basic: I want to read in an occurrence file.
Then the program should find all occurrences of "CallTilEdb" in the file Hendelse.logg:
CallTilEdb 8
CallCustomer 9
CallTilEdb 4
CustomerChk 10
CustomerChk 15
CallTilEdb 16
and sum up the right column. For this case it would be 8 + 4 + 16, so the output I would want would be 28.
I'm not sure how to do this, and this is as far as I have gotten with vistid.sh:
#!/bin/bash
declare -t filename=hendelse.logg
declare -t occurance="$1"
declare -i sumTime=0
while read -r line
do
if [ "$occurance" = $(cut -f1 line) ] #line 10
then
sumTime+=$(cut -f2 line)
fi
done < "$filename"
so the execution in terminal would be
vistid.sh CallTilEdb
but the error I get now is:
/home/user/bin/vistid.sh: line 10: [: unary operator expected
You have a nice approach, but maybe you could use awk to do the same thing... much faster!
$ awk -v par="CallTilEdb" '$1==par {sum+=$2} END {print sum+0}' hendelse.logg
28
It may look a bit weird if you haven't used awk so far, but here is what it does:
-v par="CallTilEdb" provide an argument to awk, so that we can use par as a variable in the script. You could also do -v par="$1" if you want to use a variable provided to the script as parameter.
$1==par {sum+=$2} this means: if the first field is the same as the content of the variable par, then add the second column's value into the counter sum.
END {print sum+0} this means: once you are done from processing the file, print the content of sum. The +0 makes awk print 0 in case sum was not set... that is, if nothing was found.
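For instance, a minimal vistid.sh built around that idea might look like this (a sketch, assuming the log file name from the question):
#!/bin/bash
# vistid.sh - pass the script's first argument to awk as the variable "par"
awk -v par="$1" '$1 == par { sum += $2 } END { print sum + 0 }' hendelse.logg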
In case you really want to do it in pure bash, you can use read with two parameters, so that you don't need cut to split the values, together with some arithmetic to sum them:
#!/bin/bash
declare -t filename=hendelse.logg
declare -t occurance="$1"
declare -i sumTime=0
while read -r name value # read both values with -r for safety
do
if [ "$occurance" == "$name" ]; then # string comparison
((sumTime+=$value)) # sum
fi
done < "$filename"
echo "sum: $sumTime"
So that it works like this:
$ ./vistid.sh CallTilEdb
sum: 28
$ ./vistid.sh CustomerChk
sum: 25
First of all, you need to change the way you call cut:
$( echo $line | cut -f1 )
On line 10 you are missing the evaluation of the line:
if [ "$occurance" = $( echo $line | cut -f1 ) ]
you can then sum by doing:
sumTime=$[ $sumTime + $( echo $line | cut -f2 ) ]
But you can also use a different approach and put the line values in an array, the final script will look like:
#!/bin/bash
declare -t filename=prova
declare -t occurance="$1"
declare -i sumTime=0
while read -a line
do
if [ "$occurance" = ${line[0]} ]
then
sumTime=$[ $sumTime + ${line[1]} ]
fi
done < "$filename"
echo $sumTime
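As a side note, the $[ ... ] arithmetic syntax is deprecated; the same sum can be written with the standard $(( ... )) form:
sumTime=$(( sumTime + line[1] ))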
For reference,
id="CallTilEdb"
file="Hendelse.logg"
sum=$(echo "0 $(sed -n "s/^$id[^0-9]*\([0-9]*\)/\1 +/p" < "$file") p" | dc)
echo SUM: $sum
prints
SUM: 28
the sed extracts the numbers from the lines containing the given id, such as CallTilEdb,
and prints them in the format number +
the echo prepares a string such as 0 8 + 16 + 4 + p, which is the calculation in RPN format
the dc does the calculation
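For example, with the numbers from the log:
$ echo "0 8 + 16 + 4 + p" | dc
28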
another variant:
sum=$(sed -n "s/^$id[^0-9]*\([0-9]*\)/\1/p" < "$file" | paste -sd+ - | bc)
#or
sum=$(grep -oP "^$id\D*\K\d+" < "$file" | paste -sd+ - | bc)
the sed (or the grep) extracts and prints only the numbers
the paste makes a string like number + number + number (-d+ sets the delimiter)
the bc does the calculation
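The same numbers through paste and bc:
$ printf '8\n4\n16\n' | paste -sd+ - | bc
28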
or perl
sum=$(perl -slanE '$s+=$F[1] if /^$id/}{say $s' -- -id="$id" "$file")
sum=$(ID="CallTilEdb" perl -lanE '$s+=$F[1] if /^$ENV{ID}/}{say $s' "$file")
An awk translation of the script:
#!/bin/bash
declare -t filename=hendelse.logg
declare -t occurance="$1"
declare -i sumTime=0
sumTime=$(awk -v entry="$occurance" '
$1==entry{time+=$NF+0}
END{print time+0}' "$filename")
echo "sum: $sumTime"

Bash escaping and syntax

I have a small bash file which I intend to use to determine my current ping vs my average ping.
#!/bin/bash
output=($(ping -qc 1 google.com | tail -n 1))
echo "`cut -d/ -f1 <<< "${output[3]}"`-20" | bc
This outputs my ping - 20 ms, which is the number I want. However, I also want to prepend a + if the number is positive and append "ms".
This brings me to my overarching problem: bash syntax regarding escaping and this sort of heavy "nesting" is kind of flaky.
While I'll be satisfied with an answer of how to do what I wanted, I'd like a link to, or explanation of how exactly bash syntax works dealing with this sort of thing.
output=($(ping -qc 1 google.com | tail -n 1))
echo "${output[3]}" | awk -F/ '{printf "%+fms\n", $1-20}'
The + modifier in printf tells it to print the sign, whether it's positive or negative.
And since we're using awk, there's no need to use cut or bc to get a field or do arithmetic.
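The + flag behaves the same way in plain bash printf, if you want to try it in isolation:
$ printf '%+d ms\n' 5 -3
+5 ms
-3 ms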
Escaping is pretty awful in bash if you use the deprecated `..` style command substitution. In this case, you have to escape any backticks, which means you also have to escape any other escapes. $(..) nests a lot better, since it doesn't add another layer of escaping.
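For example, nesting one substitution inside another (date is just a stand-in command):
inner=`echo \`date\``      # with backticks the inner pair must be escaped
inner=$(echo $(date))      # with $() the nesting needs no extra escaping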
In any case, I'd just do it directly:
ping -qc 1 google.com | awk -F'[=/ ]+' '{n=$6}
END { v=(n-20); if(v>0) printf("+"); print v}'
Here's my take on it, recognizing that the result from bc can be treated as a string:
output=($(ping -qc 1 google.com | tail -n 1))
output=$(echo "`cut -d/ -f1 <<< "${output[3]}"`-20" | bc)' ms'
[[ "$output" != -* ]] && output="+$output"
echo "$output"
Bash cannot handle floating point numbers. A workaround is to use awk like this:
#!/bin/bash
output=($(ping -qc 1 google.com | tail -n 1))
echo "`cut -d/ -f1 <<< "${output[3]}"`-20" | bc | awk '{if ($1 >= 0) printf "+%fms\n", $1; else printf "%fms\n", $1}'
Note that the + sign is only added when the result from bc is zero or positive; a negative result already carries its own minus sign.
Output:
$ ./testping.sh
+18.209000ms

Problem with floating point comparison

I am trying to check if a value I read from a text file is zero:
[[ $(echo $line | cut -d" " -f5) -gt 0 ]] && [[ $(echo $line | cut -d" " -f7 | bc -l) -eq 0 ]]
With the first condition there is no problem because the f5 values are integers. The problem comes from the second condition. I receive this error message:
[[: 1.235: syntax error: invalid arithmetic operator (error token is ".235")
I have tried several suggestions I found in different forums, such as using echo $line | cut -d" " -f7 | bc -l with and without double quotes, etc. However, the error persists. f7 is a positive number and is given with 3 decimal places. Removing decimals or approximating is not an option because I need the result to be exactly zero (0.000).
Generally, you can't compare floating-point numbers for equality. This is because the binary representation of decimal numbers is not precise and you get rounding errors. This is the standard answer that most others will give you.
In this specific case, you don't actually need to compare floating-point numbers, because you're just testing whether some text represents a specific number. Since you're in shell, you can either use a regular string compare against "0.000" - assuming your data is rounded in that way - or using regular expressions with grep/egrep. Something like
egrep -q '^0(\.0+)?$'
Will match 0, 0.0, 0.00, etc, and will exit indicating success or failure, which you can use in the surrounding if statement:
if cut and pipe soup | egrep ... ; then
...
fi
Use a string comparison instead. Replace:
-eq 0
with:
= '0.000'
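Applied to the test from the question, that could look something like this (dropping the bc -l stage so the field's text is compared as-is):
[[ $(echo $line | cut -d" " -f5) -gt 0 ]] && [[ $(echo $line | cut -d" " -f7) = '0.000' ]]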
TZ:
Script section from comment:
for clus in $(ls *.cluster) ; do
while read line ; do
if [[ $(echo $line | cut -d" " -f11) -gt 0 ]] && [[ "$(echo $line | cut -d" " -f15 | bc -l)" = '0.000' ]] ; then
cat $(echo $line | cut -d" " -f6).pdb >> test/$(echo $line | cut -d" " -f2)_pisa.pdb
fi
done < $clus
done
My pseudo-Python interpretation:
for clus in *.cluster:
for line in clus:
fields = line.split(' ')
# field numbers are counting from 1 as in cut
if int(field 11) > 0 and str(field 15) == '0.000':
fin_name = (field 6) + '.pdb'
fout_name = (field 2) + '_pisa.pdb'
cat fin_name >> fout_name
Is that what you intended?
