Why does 6.84 - 3.6 == 3.2399999999999998?
And by extension, why does 49.99999999999999 * 1.1 equal 55.0?
I assume this is to do with floating-point arithmetic, but I am somewhat perplexed as to why it occurs with such a simple subtraction, and why it also holds for the multiplication case.
You're correct, it is entirely to do with floating-point arithmetic. Many decimal numbers are only representable to a certain accuracy in binary, which is why you see this behaviour. This isn't restricted to Ruby - I'd suggest reading What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Your computer works in binary, not in decimal. The number 1.1 cannot be exactly represented in a finite binary representation, so it is necessarily an approximation.
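You can see the approximation directly in Ruby (a small sketch; Float#to_r exposes the exact rational value the stored double actually holds):
# 1.1 looks exact when printed with default precision...
puts 1.1                    #=> 1.1
# ...but the stored double is really this exact rational value:
puts 1.1.to_r               #=> 2476979795053773/2251799813685248
# Printing more digits shows the approximation:
puts format("%.20f", 1.1)   #=> 1.10000000000000008882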
Hash functions like SHA-2 are widely used to validate the transfer of data or to validate cryptographic integer calculations.
Is it possible to use hash functions to validate floating-point calculations, e.g. a physics simulation? In other words, is it possible and viable to make floating-point calculations consistent across multiple platforms?
If the answer to the previous question is "no": given that the calculations are carefully crafted, it might be possible to hash the data at reduced precision and still get high confidence in the validation. Is there any ongoing project that tries to achieve that?
Hashes work on binary input, usually bytes. So if you want to use hashes to compare results instead of comparing the results themselves, you need to create a canonical binary encoding of the values. For floating point, the common standard that specifies how numbers can be encoded is IEEE 754, and most languages specify how their floats relate to that standard.
For instance, Java adheres to the "round to nearest" rule of IEEE 754 floating-point arithmetic, and its language specification starts with this part:
The floating-point types are float and double, which are conceptually associated with the single-precision 32-bit and double-precision 64-bit format IEEE 754 values and operations as specified in IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Standard 754-1985 (IEEE, New York).
Please note that you should also understand the strictfp keyword in Java. Similar tricks may be required for other languages as well.
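As a rough illustration of the canonical-encoding idea in Ruby (a sketch, not a cross-platform guarantee: it assumes every platform produces bit-identical IEEE 754 doubles, and the results list is hypothetical):
require 'digest'
# Hypothetical list of simulation results, each an IEEE 754 double.
results = [0.1 + 0.2, 37.9 - 6.05, 1.0 / 3.0]
# 'G*' packs every value as a big-endian 64-bit double, giving a
# canonical byte encoding that does not depend on host endianness.
encoded = results.pack('G*')
puts Digest::SHA256.hexdigest(encoded)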
I was trying to solve a mathematical problem:
37.9 - 6.05
Ruby gives me:
37.9 - 6.05
#=> 31.849999999999998
37.90 - 6.05
#=> 31.849999999999998
37.90 + 6.05
#=> 43.949999999999996
Why am I getting this?
In a nutshell, computers have a problem working with real numbers and use a floating-point representation to deal with them. Much in the same way that 8 bits can only represent 256 different natural numbers, 64 bits can only represent a finite set of real numbers. For more details, read http://floating-point-gui.de/ or search for "floating point arithmetic".
How should I deal with that?
Never store currency values in floating-point variables. Use BigDecimal, or do your calculations in cents using only integers (see the sketch after this list).
Use round to round your floats to a user-friendly length. Rounding errors will occur, especially when adding up a lot of floats.
In SQL systems, use a decimal data type, or use integers and divide them by a constant factor in the UI (say you need 3 decimal digits: you could store 1234 as an integer and display 1.234).
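A minimal sketch of the currency advice, using hypothetical prices, with plain integer cents and BigDecimal side by side:
require 'bigdecimal'
# Option 1: integer cents (hypothetical price of $10.99, quantity 3)
price_cents = 1099
total_cents = price_cents * 3
puts "%.2f" % (total_cents / 100.0)   #=> 32.97
# Option 2: BigDecimal for exact decimal arithmetic
price = BigDecimal("10.99")
puts (price * 3).to_s('F')            #=> 32.97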
When I run the code
count = 0
while count < 1
count += 0.1
puts count
end
I would expect
0.1
0.2
0.3
. . .
I have however been getting
0.1
0.2
0.30000000000000004
0.4
0.5
0.6
0.7
0.7999999999999999
0.8999999999999999
0.9999999999999999
1.0999999999999999
Can anyone help explain this?
Think of it this way:
Your computer only has 32 or 64 bits to represent a number. That means it can only represent a finite amount of numbers.
Now consider all the decimal values between 0 and 1. There are infinitely many of them. How could your machine possibly represent all real numbers if it can't even represent all the numbers between 0 and 1?
The answer is that your machine needs to approximate decimal numbers. This is what you are seeing.
Of course there are libraries that try to overcome these limitations and make it so that you can still accurately represent decimal numbers. One such library is BigDecimal:
require 'bigdecimal'
count = BigDecimal("0")    # BigDecimal.new has been removed in current Ruby
step = BigDecimal("0.1")   # keep the increment exact as well, instead of adding a Float
while count < 1
count += step
puts count.to_s('F')
end
The downside is that these libraries are generally slower at arithmetic, because they are a software layer on top of the CPU performing these calculations.
Floating-point numbers cannot precisely represent all real numbers, and floating-point operations cannot precisely represent true arithmetic operations; this leads to many surprising situations.
I advise reading https://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
You may want to use BigDecimal to avoid such problems.
This is one of the many consequences of how floating-point numbers are represented in memory!
Explaining exactly what is happening would take very long, and other people have already done it better before, so the best thing for you is to read about it elsewhere:
The very good What Every Computer Scientist Should Know About Floating-Point Arithmetic (an article from 1991, reprinted on Oracle's site)
Wikipedia pages Floating point and IEEE floating point
IEEE 754 References
You can also have a look at these previous questions on SO:
What is a simple example of floating point/rounding error?
Ruby BigDecimal sanity check (floating point newb)
Strange output when using float instead of double
What's wrong here? (Ruby version: 1.9.2p290 (2011-07-09 revision 32553) [x86_64-darwin11.0.0])
x = 523.8
w = 46.9
xm = x + w
assert_equal w, (xm - x) # FAILS with: <46.9> expected but was <46.89999999999998>
From The Floating-Point Guide:
Why don’t my numbers, like 0.1 + 0.2 add up to a nice round 0.3, and
instead I get a weird result like 0.30000000000000004?
Because internally, computers use a format (binary floating-point)
that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already
rounded to the nearest number in that format, which results in a small
rounding error even before the calculation happens.
Read the linked-to site for details and ways to get around this.
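In Ruby you can see both effects the quote describes: the literals are already rounded before any arithmetic, and the addition rounds again (a small sketch using Float#to_r to show the exact stored values):
puts 0.1.to_r      #=> 3602879701896397/36028797018963968  -- not exactly 1/10
puts 0.2.to_r      #=> 3602879701896397/18014398509481984  -- not exactly 1/5
puts 0.1 + 0.2     #=> 0.30000000000000004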
This is perfectly normal; it is a fact about the lower-level concept of floating point arithmetic rather than Ruby and therefore can occur in any language.
Floating point arithmetic is not exact. Equality should be replaced with closeness along the lines of assert((xm-x).abs < epsilon), where epsilon is some small number like 0.01.
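For the test in the question, a sketch of that idea using Minitest's assert_in_delta (the class and method names here are just for illustration):
require 'minitest/autorun'
class FloatToleranceTest < Minitest::Test
def test_sum_and_difference_are_close
x  = 523.8
w  = 46.9
xm = x + w
# assert_in_delta(expected, actual, delta) passes when |expected - actual| <= delta
assert_in_delta w, xm - x, 1e-9
end
end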
Read this. It describes the way binary representation of floating point numbers work in every language, not just Ruby.
The answer to your question is: No.
(Other answers tell you why, but you didn't ask that. :p)
Why does this code, 7.30 - 7.20, in Ruby return 0.0999999999999996 and not 0.10?
But if I write 7.30 - 7.16, for example, everything is fine: I get 0.14.
What is the problem, and how can I solve it?
What Every Computer Scientist Should Know About Floating-Point Arithmetic
The problem is that some numbers we can easily write in decimal don't have an exact representation in the particular floating point format implemented by current hardware. A casual way of stating this is that all the integers do, but not all of the fractions, because we normally store the fraction with a 2**e exponent. So, you have 3 choices:
Round off appropriately. The unrounded result is always really, really close, so a rounded result is invariably "perfect". This is what JavaScript does, and lots of people don't even realize that JS does everything in floating point.
Use fixed-point arithmetic. Ruby actually makes this really easy; it's one of the only languages that seamlessly shifts from Fixnum to Bignum as integers get bigger.
Use a class that is designed to solve this problem, like BigDecimal. (All three options are sketched in code after this answer.)
To look at the problem in more detail, we can try to represent your "7.3" in binary. The 7 part is easy, 111, but how do we do .3? 111.1 is 7.5, too big, 111.01 is 7.25, getting closer. Turns out, 111.010011 is the "next closest smaller number", 7.296875, and when we try to fill in the missing .003125 eventually we find out that it's just 111.010011001100110011... forever, not representable in our chosen encoding in a finite bit string.
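A minimal Ruby sketch of the three options, using the 7.3 - 7.2 example from the question:
# 1. Round off appropriately
puts (7.3 - 7.2).round(2)    #=> 0.1
# 2. Fixed point: work in tenths as integers
tenths = 73 - 72
puts tenths / 10.0           #=> 0.1
# 3. Use BigDecimal
require 'bigdecimal'
puts (BigDecimal("7.3") - BigDecimal("7.2")).to_s('F')   #=> 0.1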
The problem is that floating point is inaccurate. You can solve it by using Rational, BigDecimal or just plain integers (for example if you want to store money you can store the number of cents as an int instead of the number of dollars as a float).
BigDecimal can accurately store any number that has a finite number of digits in base 10 and rounds numbers that don't (so three thirds aren't one whole).
Rational can accurately store any rational number and can't store irrational numbers at all.
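A quick sketch of those options in Ruby (Rational literals with the r suffix are exact; BigDecimal rounds only where base 10 can't represent the value):
puts Rational(73, 10) - Rational(72, 10)   #=> 1/10  (exact)
puts 7.3r - 7.2r                           #=> 1/10  (same thing, via Rational literals)
require 'bigdecimal'
puts BigDecimal("1") / 3 * 3               # prints something like 0.999999999999999999e0 -- three thirds aren't one whole
puts Rational(1, 3) * 3                    #=> 1/1   (exact)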
That is a common error that comes from how floating-point numbers are represented in memory.
Use BigDecimal if you need exact results.
require 'bigdecimal'
result = BigDecimal("7.3") - BigDecimal("7.2")
puts "%.2f" % result   #=> 0.10
It is interesting to note that a number that has few digits after the point in one base may have a very large number of them in another. For instance, it takes an infinite number of digits to express 1/3 (= 0.3333...) in base 10, but only one digit in base 3. Similarly, it takes an infinite number of digits to express 1/10 (= 0.1) in base 2.
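You can watch that happen with a small sketch that generates the binary digits of 1/10 by repeated doubling (using Rational so the fraction itself stays exact):
# Print the first 20 binary digits of 1/10 after the point.
frac = Rational(1, 10)
digits = ""
20.times do
frac *= 2
if frac >= 1
digits << "1"
frac -= 1
else
digits << "0"
end
end
puts digits   #=> 00011001100110011001  (the 0011 block repeats forever)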
Since you are doing floating-point math, the number returned reflects the limited precision your computer works with.
If you want a closer answer, to a set precision, just multiply the float by that factor (such as 100), convert it to an integer, do the math, then divide again.
There are other solutions, but I find this to be the simplest since rounding always seems a bit iffy to me.
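A sketch of that scaled-integer approach for the 7.30 - 7.20 example, scaling by 100 (i.e. working in hundredths):
a = (7.30 * 100).round    # 730
b = (7.20 * 100).round    # 720
result = (a - b) / 100.0
puts result               #=> 0.1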
This has been asked here before; you may want to look at some of the answers given previously, such as this one:
Dealing with accuracy problems in floating-point numbers