This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 6 years ago.
The universal advice for avoiding floating point errors in Ruby is to use BigDecimal. I must be overlooking something, because I think I've found a case where BigDecimal math returns an inexact result and Float math does not:
using Float gives the correct answer of 2.75:
> 50.0 * 0.6 / 360.0 * 33
=> 2.75
using BigDecimal gives the incorrect answer of 2.74999999:
> BigDecimal("50") * BigDecimal("0.6") / BigDecimal("360") * BigDecimal("33")
=> #<BigDecimal:7efe74824c80,'0.2749999999 999999989E1',27(36)>
Can someone please tell me what I'm missing here?
Let's simplify your example, and use this one instead:
BigDecimal(1) / BigDecimal(3) * BigDecimal(3)
# => #<BigDecimal:19289d8,'0.9999999999 99999999E0',18(36)>
How did it get there?
BigDecimal(1) / BigDecimal(3)
# => #<BigDecimal:1921a70,'0.3333333333 33333333E0',18(36)>
BigDecimal does not provide rational numbers, so when you divide 1 by 3, you get 0 followed by a lot of 3s. A lot, but not infinitely many. When you then multiply that by 3, you get 0 followed by equally many 9s.
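You can see the truncation directly. Asking the division for more digits (the 50 below is an arbitrary illustrative precision) pushes the error further out but never removes it:

```ruby
require 'bigdecimal'

# Quotient rounded to 50 significant digits: 0.333...3
third = BigDecimal(1).div(BigDecimal(3), 50)
puts third.to_s('F')

# Multiplying back gives 0.999...9, still strictly less than 1
puts (third * 3).to_s('F')
```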
I believe you misread BigDecimal's advertisement (although I am not sure it is advertised anywhere as the solution to floating point errors). It provides arbitrary precision, but it is still a floating point number. If you really want exact numbers when dividing, take a look at the Rational class:
(Rational(50) * Rational("0.6") / Rational(360) * Rational(33)).to_f
# => 2.75
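A quick sketch of why Rational works here, plus a caveat about how you construct one:

```ruby
# Fractions are stored exactly, so there is no truncation to recover from:
r = Rational(1, 3) * 3
raise "expected exactly 1" unless r == 1

# Construct from strings or integer pairs for exact decimals;
# a Float argument would carry the Float's binary value into the fraction:
Rational("0.6")   # => (3/5)
Rational(6, 10)   # => (3/5)
```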
This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 4 years ago.
I was trying to solve a mathematical problem:
37.9 - 6.05
Ruby gives me:
37.9 - 6.05
#=> 31.849999999999998
37.90 - 6.05
#=> 31.849999999999998
37.90 + 6.05
#=> 43.949999999999996
Why am I getting this?
In a nutshell, computers cannot work with real numbers exactly and use floating point representation to deal with them. Much in the same way you can only represent 256 natural numbers with 8 bits, you can only represent a fixed set of real numbers with 64 bits. For more details on this, read http://floating-point-gui.de/ or search for "floating point arithmetic".
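For instance, the decimal 0.1 has no exact 64-bit binary representation; Float#to_r exposes the value a Ruby Float actually stores:

```ruby
# The stored value is the nearest binary fraction, not 1/10:
p 0.1.to_r                      # a ratio of two large integers
p 0.1.to_r == Rational(1, 10)   # => false
```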
How should I deal with that?
Never store currency values in floating point variables. Use BigDecimal or do your calculations in cents using only integer numbers.
Use round to round your floats to a user-friendly length for display. Rounding errors will occur, especially when adding up a lot of floats.
In SQL systems use the decimal data type, or use integers and divide them by a constant factor in the UI (say you need 3 decimal digits: you could store 1234 as an integer and display 1.234).
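A minimal sketch of the integer-cents approach (the variable names are illustrative):

```ruby
# Keep money in integer cents; only format as a decimal for display.
price_cents = 3790                 # $37.90
fee_cents   = 605                  # $6.05
total_cents = price_cents + fee_cents

puts format('$%.2f', total_cents / 100.0)   # exact arithmetic, formatted output

# For plain floats, round to a user-friendly length for display:
puts (37.90 + 6.05).round(2)                # 43.95 instead of 43.949999999999996
```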
This question already has answers here:
ruby floating point errors
(3 answers)
Closed 7 years ago.
Can someone explain to me why:
33.8 * 100 # => 3379.9999999999995
but
23.8 * 100 # => 2380.0
Floating-point numbers cannot precisely represent all real numbers, and floating-point operations cannot precisely represent true arithmetic operations. This leads to many surprising situations: whether the error is visible, as with 33.8 * 100, or happens to round away, as with 23.8 * 100, depends on the particular binary values involved.
I advise to read: https://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
You may want to use BigDecimal to avoid such problems.
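For example, constructing BigDecimal from strings keeps the decimal value exact in this case:

```ruby
require 'bigdecimal'

# The string "33.8" becomes an exact decimal, so the product is exactly 3380:
product = BigDecimal("33.8") * 100
puts product.to_s('F')   # 3380.0, with no trailing 9s
```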
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 7 years ago.
When I run the code
count = 0
while count < 1
count += 0.1
puts count
end
I would expect
0.1
0.2
0.3
. . .
I have however been getting
0.1
0.2
0.30000000000000004
0.4
0.5
0.6
0.7
0.7999999999999999
0.8999999999999999
0.9999999999999999
1.0999999999999999
can anyone help explain this?
Think of it this way:
Your computer only has 32 or 64 bits to represent a number. That means it can only represent a finite amount of numbers.
Now consider all the decimal values between 0 and 1. There is an infinite amount of them. How can you possibly represent all Real Numbers if your machine can't even represent all the numbers between 0 and 1?
The answer is that your machine needs to approximate decimal numbers. This is what you are seeing.
Of course there are libraries that try to overcome these limitations and make it so that you can still accurately represent decimal numbers. One such library is BigDecimal:
require 'bigdecimal'
count = BigDecimal("0")  # BigDecimal.new was removed in newer versions
while count < 1
  count += BigDecimal("0.1")  # add an exact decimal step, not the Float 0.1
  puts count.to_s('F')
end
The downside is that these libraries are generally slower at arithmetic, because they are a software layer above the CPU doing these calculations.
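If you only need exact tenths, Rational gives the same guarantee without a precision setting; a sketch of the same loop:

```ruby
# Rational(1, 10) is exactly one tenth, so the count never drifts:
count = Rational(0)
while count < 1
  count += Rational(1, 10)
  puts count.to_f   # 0.1, 0.2, ..., 1.0
end
```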
Floating-point numbers cannot precisely represent all real numbers, and floating-point operations cannot precisely represent true arithmetic operations. This leads to many surprising situations.
I advise to read: https://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
You may want to use BigDecimal to avoid such problems.
This is one of the many consequences of how floating point numbers are represented in memory!
Explaining exactly what is happening here would take a long time, and other people have already done it better, so the best thing is to read about it elsewhere:
The very good What Every Computer Scientist Should Know About Floating-Point Arithmetic (an article from 1991, reprinted by Oracle)
Wikipedia pages Floating point and IEEE floating point
IEEE 754 References
You can also have a look at those previous questions on SO:
What is a simple example of floating point/rounding error?
Ruby BigDecimal sanity check (floating point newb)
Strange output when using float instead of double
This question already has answers here:
Why is division in Ruby returning an integer instead of decimal value?
(7 answers)
Closed 8 years ago.
Hi I have the following question:
When I enter (1+7/100) in Ruby, it gives 1.
This is very strange, because this is normally how I calculate a 7% increase in Excel.
But when I enter (1+7.0/100), it gives me 1.07, which is the correct answer I expected.
Why is Ruby doing this? And how do you solve this issue in your calculations in Ruby?
This has nothing to do with rounding.
Ruby does division differently on floats than it does on integers.
If you divide integers, you will always get an integer result.
If you divide with floats (or a mixture of integer and float), you will always get a float result.
Remember your order of operations, too. Ruby is going to handle the division before it handles the addition.
7/100 = 0 so 1+0 = 1
7.0/100 = 0.07 so 1+0.07 = 1.07
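The usual ways to get a fractional result, sketched:

```ruby
p 7 / 100               # => 0, integer division truncates
p 7.0 / 100             # => 0.07, a Float operand forces float division
p 7.fdiv(100)           # => 0.07, fdiv always returns a Float
p 1 + Rational(7, 100)  # => (107/100), an exact 7% increase
```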
This question already has answers here:
Addition error with ruby-1.9.2 [duplicate]
(2 answers)
Closed 4 years ago.
What's wrong here? (Ruby version: 1.9.2p290 (2011-07-09 revision 32553) [x86_64-darwin11.0.0])
x = 523.8
w = 46.9
xm = x + w
assert_equal w, (xm - x) # FAILS with: <46.9> expected but was <46.89999999999998>
From The Floating-Point Guide:
Why don’t my numbers, like 0.1 + 0.2 add up to a nice round 0.3, and
instead I get a weird result like 0.30000000000000004?
Because internally, computers use a format (binary floating-point)
that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already
rounded to the nearest number in that format, which results in a small
rounding error even before the calculation happens.
Read the linked-to site for details and ways to get around this.
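The same effect in Ruby, with two common workarounds (the rational literals need Ruby 2.1+):

```ruby
sum = 0.1 + 0.2
p sum                  # => 0.30000000000000004
p sum == 0.3           # => false
p sum.round(2)         # => 0.3, rounded for display
p 0.1r + 0.2r == 0.3r  # => true, rational literals are exact
```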
This is perfectly normal; it is a fact about the lower-level concept of floating point arithmetic rather than Ruby and therefore can occur in any language.
Floating point arithmetic is not exact. Equality should be replaced with closeness, along the lines of assert(((xm - x) - w).abs < epsilon), where epsilon is some small number like 0.01.
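Applied to the failing assertion from the question (assert_in_delta is the minitest/test-unit built-in for this; the hand-rolled check below is equivalent):

```ruby
epsilon = 1e-9

x  = 523.8
w  = 46.9
xm = x + w

# Passes: the two values agree to within epsilon, even though they are not equal
raise 'not close enough' unless ((xm - x) - w).abs < epsilon

# In minitest or test-unit:
# assert_in_delta w, xm - x, epsilon
```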
Read this. It describes the way binary representation of floating point numbers work in every language, not just Ruby.
The answer to your question is: No.
(Other answers tell you why, but you didn't ask that. :p)