Ruby and mathematical problems [duplicate]

Duplicate of: Why does floating-point arithmetic not give exact results when adding decimal fractions?
I was trying to solve a mathematical problem:
37.9 - 6.05
Ruby gives me:
37.9 - 6.05
#=> 31.849999999999998
37.90 - 6.05
#=> 31.849999999999998
37.90 + 6.05
#=> 43.949999999999996
Why am I getting this?

In a nutshell, computers have trouble working with real numbers and use a
floating point representation to deal with them. In much the same way that 8 bits can only represent 256 distinct natural numbers, 64 bits can only represent a finite set of real numbers. For more details, read http://floating-point-gui.de/ or search for "floating point arithmetic".
How should I deal with that?
Never store currency values in floating point variables. Use BigDecimal, or do your calculations in cents using only integers (see the sketch after this list).
Use round to round your floats to a user-friendly length. Rounding errors will occur, especially when adding up a lot of floats.
In SQL systems, use a decimal data type, or use integers and divide them by a constant factor in the UI (say you need 3 decimal digits: you could store 1234 as an integer and display it as 1.234).
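For example, a minimal sketch of those suggestions in Ruby, using the numbers from the question (the variable names are illustrative):
require 'bigdecimal'

# Exact decimal arithmetic with BigDecimal (construct from strings, not floats):
(BigDecimal("37.90") - BigDecimal("6.05")).to_s("F")  #=> "31.85"

# Keep money as integer cents and only format at the edges:
total_cents = 3790 - 605             #=> 3185
format("%.2f", total_cents / 100.0)  #=> "31.85"

# Round a float result to a user-friendly precision:
(37.9 - 6.05).round(2)               #=> 31.85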

Related

Ruby issue with floating point iteration [duplicate]

Duplicate of: Is floating point math broken?
When I run the code
count = 0
while count < 1
  count += 0.1
  puts count
end
I would expect
0.1
0.2
0.3
. . .
I have however been getting
0.1
0.2
0.30000000000000004
0.4
0.5
0.6
0.7
0.7999999999999999
0.8999999999999999
0.9999999999999999
1.0999999999999999
Can anyone help explain this?
Think of it this way:
Your computer has only 32 or 64 bits to represent a number. That means it can only represent a finite set of numbers.
Now consider all the decimal values between 0 and 1: there are infinitely many of them. How could your machine possibly represent all real numbers if it can't even represent all the numbers between 0 and 1?
The answer is that your machine has to approximate decimal numbers, and that approximation is what you are seeing.
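You can see the approximation directly by printing more digits than Ruby normally shows (the digits below are those of the 64-bit double nearest to 0.1):
format("%.20f", 0.1)  #=> "0.10000000000000000555"
0.1 + 0.2             #=> 0.30000000000000004 — the same artifact as in the loop above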
Of course, there are libraries that try to overcome these limitations so that you can still accurately represent decimal numbers. One such library is BigDecimal:
require 'bigdecimal'

count = BigDecimal("0")       # BigDecimal.new was removed in Ruby 2.7; call BigDecimal() instead
while count < 1
  count += BigDecimal("0.1")  # add an exact decimal, not the float 0.1
  puts count.to_s('F')
end
The downside is that these libraries are generally slower at arithmetic, because they are a software layer on top of the CPU's native floating point.
Floating-point numbers cannot precisely represent all real numbers, and floating-point operations cannot precisely represent true arithmetic operations; this leads to many surprising situations.
I advise reading: https://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
You may want to use BigDecimal to avoid such problems.
This is one of the many consequences of how floating point numbers are represented in memory!
Explaining exactly what is happening would take very long, and other people have already done it better, so the best thing for you is to read about it elsewhere:
The very good What Every Computer Scientist Should Know About Floating-Point Arithmetic (a 1991 article, reprinted by Oracle)
Wikipedia pages Floating point and IEEE floating point
IEEE 754 References
You can also have a look at those previous questions on SO:
What is a simple example of floating point/rounding error?
Ruby BigDecimal sanity check (floating point newb)
Strange output when using float instead of double

Why does Ruby round like this? [duplicate]

Duplicate of: Why is division in Ruby returning an integer instead of decimal value?
Hi, I have the following question:
When I put (1+7/100) in Ruby, it gives 1.
This is very strange, because this is normally how I calculate a 7% increase in Excel.
But when I put (1+7.0/100), it gives me 1.07, which is the correct answer I expected.
Why is Ruby doing this? And how do you solve this issue in your calculations in Ruby?
This has nothing to do with rounding.
Ruby does division differently on floats than it does on integers.
If you divide integers, you will always get an integer result.
If you divide with floats (or a mixture of integer and float), you will always get a float result.
Remember your order of operations, too. Ruby is going to handle the division before it handles the addition.
7/100 = 0, so 1 + 0 = 1
7.0/100 = 0.07, so 1 + 0.07 = 1.07
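If you want fractional results without sprinkling .0 everywhere, Ruby's core Integer#fdiv and Rational also work (a small sketch):
7.fdiv(100)                  #=> 0.07 — float division regardless of operand types
1 + 7.fdiv(100)              #=> 1.07
Rational(7, 100)             #=> (7/100) — an exact fraction
(1 + Rational(7, 100)).to_f  #=> 1.07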

Why does 55.0 / 1.1 equal 49.99999999999999? [duplicate]

Duplicate of: Why 6.84 - 3.6 == 3.2399999999999998
And by extension, why does 49.99999999999999 * 1.1 equal 55.0?
I assume this is to do with floating point arithmetic, but I am somewhat perplexed as to why this occurs with such a simple sum, and why it is also true for the multiplication case.
You're correct: it is entirely to do with floating point arithmetic. Many decimal numbers are only representable to a certain accuracy in binary, which is why you see the behaviour here. This isn't restricted to Ruby; I'd suggest reading What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Your computer works in binary, not in decimal. The number 1.1 cannot be exactly represented in a finite binary representation, so it is necessarily an approximation.
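You can reproduce both directions in irb (the values echo the ones in the question; the exact printed digits are a property of 64-bit doubles):
55.0 / 1.1               #=> 49.99999999999999
49.99999999999999 * 1.1  #=> 55.0 — the rounding error happens to cancel on the way back
(55.0 / 1.1).round(10)   #=> 50.0 — rounding recovers the intended result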

Ada Digits Confusion

I have been doing some reading, and I'm having a tough time understanding how to interpret something declared with "digits x".
For example:
type something is digits 6
I get that it's 6 digits of precision, but what has me mixed up is what that means. Is it:
1) Y.XXXXXX (six X's), or
2) XXX.XXX (the digits can fall anywhere; there are just always six of them in total, before and after the decimal point)
...
I'm just trying to understand the range of something that is digits 6 (or digits n, to be more generic). Is there a formula I can simply plug into to determine the range of a type declared with some number of digits?
A type declared with digits is a floating-point type, similar to Float or Long_Float.
The 6 is "the minimum number of significant decimal digits required for
the floating point type". For example, all the following will be represented reasonably accurately (but not exactly):
type My_Real is digits 6;
X: My_Real := 1.23456;
Y: My_Real := 12345.6;
Z: My_Real := 1.23456E7;
In practice, there are usually just 2 or 3 underlying floating-point types on a given system. The compiler will choose an appropriate one as the underlying type for your declaration. In practice, two types declared with digits 2 and digits 6 will probably have exactly the same representation and precision.
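For a rough comparison with the Ruby questions above: Ruby's native Float is a 64-bit IEEE 754 double, and it exposes its own precision as constants, which shows why a digits 6 type fits comfortably inside a double:
Float::DIG       #=> 15 — decimal digits a 64-bit double can reliably preserve
Float::MANT_DIG  #=> 53 — bits in the significand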
Understanding the phrase "not exactly" requires an understanding of floating-point that's well beyond the scope of a single question, but if you're familiar with floating-point in other languages, it's the same general idea.
If you want a general understanding of what floating-point is and how it works, the Wikipedia Article isn't bad. A much more advanced treatment is David Goldberg's classic paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic", available here as a web page and here as a PDF.

Arithmetic in Ruby

Why does the code 7.30 - 7.20 in Ruby return 0.0999999999999996, not 0.10?
But if I write 7.30 - 7.16, for example, everything is fine; I get 0.14.
What is the problem, and how can I solve it?
What Every Computer Scientist Should Know About Floating-Point Arithmetic
The problem is that some numbers we can easily write in decimal don't have an exact representation in the particular floating point format implemented by current hardware. A casual way of stating this is that all the integers do, but not all of the fractions, because we normally store the fraction with a 2**e exponent. So, you have 3 choices:
Round off appropriately. The unrounded result is always really really close, so a rounded result is invariably "perfect". This is what Javascript does and lots of people don't even realize that JS does everything in floating point.
Use fixed point arithmetic. Ruby actually makes this really easy; it's one of the only languages that seamlessly shifts from Fixnum to Bignum as integers get bigger (the two were unified into Integer in Ruby 2.4).
Use a class that is designed to solve this problem, like BigDecimal
To look at the problem in more detail, we can try to represent 7.3 in binary. The 7 part is easy: 111. But how do we do .3? 111.1 is 7.5, too big; 111.01 is 7.25, getting closer. It turns out that 111.010011 is the next closest smaller number, 7.296875, and when we try to fill in the missing 0.003125 we eventually find that it's just 111.010011001100110011... forever, not representable in our chosen encoding as a finite bit string.
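You can ask Ruby what a Float actually stores (a small sketch; the digits shown are those of the 64-bit double nearest to 7.3):
format("%.17g", 7.3)  #=> "7.2999999999999998" — the closest double to 7.3
7.3.to_r              # the exact fraction the double stores (not 73/10)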
The problem is that floating point is inaccurate. You can solve it by using Rational, BigDecimal or just plain integers (for example if you want to store money you can store the number of cents as an int instead of the number of dollars as a float).
BigDecimal can accurately store any number that has a finite number of digits in base 10 and rounds numbers that don't (so three thirds aren't one whole).
Rational can accurately store any rational number and can't store irrational numbers at all.
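A quick illustration of that difference, using Ruby's rational literals (the r suffix, available since Ruby 2.1):
7.3r - 7.2r          #=> (1/10) — exact
(7.3r - 7.2r).to_f   #=> 0.1
Rational(1, 3) * 3   #=> (1/1) — with Rational, three thirds are one whole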
That is a common error arising from how floating point numbers are represented in memory.
Use BigDecimal if you need exact results.
require 'bigdecimal'

result = BigDecimal("7.3") - BigDecimal("7.2")  # BigDecimal.new was removed in Ruby 2.7
puts "%2.2f" % result  #=> 0.10
It is interesting to note that a number with few decimals in one base may have a very large number of decimals in another. For instance, it takes infinitely many decimals to express 1/3 (=0.3333...) in base 10, but only one decimal in base 3. Similarly, it takes infinitely many decimals to express the number 1/10 (=0.1) in base 2.
Since you are doing floating point math, the number returned reflects the precision your computer uses.
If you want a closer answer at a set precision, multiply the float by a power of ten (such as 100), convert it to an integer, do the math, then divide (see the sketch below).
There are other solutions, but I find this the simplest, since rounding always seems a bit iffy to me.
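A minimal sketch of that scaled-integer approach (the factor 100 here gives two decimal places):
a = (7.30 * 100).round  #=> 730
b = (7.20 * 100).round  #=> 720
(a - b) / 100.0         #=> 0.1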
This has been asked before; you may want to look at some of the answers given previously, such as this one:
Dealing with accuracy problems in floating-point numbers
