This question already has answers here:
ruby floating point errors
(3 answers)
Closed 6 years ago.
I am using ruby 2.3.0p0.
I have been trying to do a simple addition of two floats in Ruby:
irb(main):001:0> 1.50 + 14.99
=> 16.490000000000002
The desired result should be 16.49 instead of 16.490000000000002
a = 1.5
b = 14.99
c = a + b
How could I fix this so that I get 16.49 for variable c?
Cheers.
Floating-point numbers are not as precise as you might think; they often carry tiny bits of noise that result from various calculations. It's your responsibility to specify what level of precision you want when displaying them:
a = 1.5
b = 14.99
c = a + b
puts '%.2f' % c
The %.2f notation here means two decimal places; %.9f would be nine.
This is how floating point numbers work. Don't expect them to be neat and tidy.
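If you need the rounded value itself rather than a formatted string, Float#round with a digit count also works. A small sketch using the same numbers from the question:

```ruby
a = 1.5
b = 14.99

# Float#round(n) rounds to n decimal places and returns a Float
c = (a + b).round(2)
puts c  # prints 16.49
```

Note that the result is still a Float, so it is only exact to the extent the rounded value is representable; for display purposes, '%.2f' formatting is the safer habit.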
This question already has answers here:
Round to nearest multiple of a number
(3 answers)
Closed last year.
Let's imagine I would like to round a number (e.g. x = 7.4355) to a given arbitrary precision (e.g. p = 0.002). In this case, I would expect to see:
round_arbitrary(x, p) = 7.436
What would be the best approach to designing such a rounding function? Ideas in pseudocode or Rust are welcome.
What would be the best approach to design such a rounding function?
An approach that gets close to OP's goal:
// Pseudo code (p != 0)
round_arbitrary(x, p)
    x /= p
    x = round(x)
    return x*p
A key point is that floating-point numbers are finite in size and so can represent only about 2^64 different values exactly, whereas code values like 7.4355 and 0.002, and mathematical quotients like 1/7.0, come from a much bigger set. Thus the above will get one close to, but not with certainty exactly equal to, the mathematically rounded value.
More advanced code would avoid overflow by not rounding large values which do not need rounding.
// Assume 0 < |p| < 1.0
round_arbitrary_2(x, p)
    if (round(x) != x)
        x /= p
        x = round(x)
        x *= p
    return x
Deeper
This issue lies with floating-point numbers, which are encoded as an integer times a power of 2. The question is then not so much "How to round to an arbitrary (non power-of-ten) precision", but "How to round to an arbitrary (non power-of-2) precision".
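You can see this in Ruby directly: Float#to_r exposes the exact power-of-2 fraction a float actually stores, and for a decimal step like 0.002 it is not the mathematical 1/500:

```ruby
# The exact rational value stored for the literal 0.002:
# an integer over a power of 2, not 1/500
puts 0.002.to_r

puts 0.002.to_r == Rational(1, 500)  # prints false
```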
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 6 years ago.
Below is the small segment of code from my script:
The logic added is to convert any data value as per the scale value:
x = "11400000206633458812"
scale = 2
x = x.to_f/10**scale.to_i
x = x == x.to_i ? x.to_i : x
puts x
Observe that the value comes out to be "114000002066334592", which is different from the initial value of x.
My test-points are failing at this point.
x can be any value. If I use x = 156, the output is correct, but when the length of the integer exceeds 16 digits the problem arises.
The expected output in above case is 114000002066334588
Can anybody help me why the value is changing and how to fix it?
The expected output in above case is 114000002066334588
Not at all.
One cannot expect floating-point operations to be exact; that is inherent to how floats are represented by the processor. Virtually every language offers a workaround for working with decimals precisely. In Ruby it's BigDecimal:
require 'bigdecimal'
BigDecimal("11400000206633458812") / 100
#⇒ 0.11400000206633458812E18
(BigDecimal("11400000206633458812") / 100).to_i
#⇒ 114000002066334588
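An alternative sketch, if you'd rather avoid BigDecimal: Ruby's built-in Rational keeps exact integer precision too (the string and scale here are taken from the question):

```ruby
x = "11400000206633458812"
scale = 2

# Rational keeps the full integer precision; no Float is ever created
result = Rational(x) / 10**scale
puts result.to_i  # prints 114000002066334588
```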
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 6 years ago.
I have some code:
num1 = 1.001
num2 = 0.001
sum = num1 + num2
puts sum
I expected 1.002000 but I am getting 1.0019999999999998. Why is this the case?
This is a commonly-found, fundamental problem with binary representation of fractional values, and is not Ruby-specific.
Because of the way that floating-point numbers are implemented as binary values, there are sometimes noticeable discrepancies between what you'd expect with "normal" decimal math and what actually results. Ruby's default representation of floating-point numbers is no exception--since it is based on the industry-standard double-precision (IEEE 754) format, it internally represents non-integers as binary values, and so its approximations don't quite line up with decimal values.
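You can inspect that binary approximation directly in Ruby with Float#to_r, which returns the exact fraction the double actually stores:

```ruby
# 0.1 cannot be stored exactly; the double holds this nearby
# power-of-2 fraction instead
puts 0.1.to_r  # prints 3602879701896397/36028797018963968

puts 0.1.to_r == Rational(1, 10)  # prints false
```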
If you need to do decimal calculations in Ruby, consider using BigDecimal (documentation):
require 'bigdecimal'
num1 = BigDecimal("1.001")
num2 = BigDecimal("0.001")
puts num1 + num2 #=> 0.1002E1
In a perfect world you would indeed get 1.002000, but the result carries a rounding error from floating-point arithmetic; look up "machine epsilon" or "floating point" on the web for the details.
For example, you can observe the accumulated error like this; the machine epsilon for Ruby's Float is about 2.2e-16:
f = 0.0
100.times { f += 0.1 }
p f #=> 9.99999999999998 # should be 10.0 in the ideal world.
p 10-f #=> 1.9539925233402755e-14 # the floating-point error.
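For reference, Ruby exposes the machine epsilon of its doubles as Float::EPSILON, which is 2^-52:

```ruby
# Machine epsilon: the gap between 1.0 and the next representable Float
puts Float::EPSILON              # prints 2.220446049250313e-16
puts Float::EPSILON == 2.0**-52  # prints true
```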
So this is weird. I'm on Ruby 1.9.3, and float addition is not working as I'd expect.
0.3 + 0.6 + 0.1 = 0.9999999999999999
0.6 + 0.1 + 0.3 = 1
I've tried this on another machine and get the same result. Any idea why this might happen?
Floating point operations are inexact: they round the result to nearest representable float value.
That means that each float operation is:
float(a op b) = mathematical(a op b) + rounding-error( a op b )
As suggested by the above equation, the rounding error depends on the operands a and b.
Thus, if you perform operations in different order,
float(float( a op b) op c) != float(a op (b op c))
In other words, floating point operations are not associative.
They are commutative though...
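Both properties are easy to check in Ruby with the values from the question:

```ruby
left  = (0.3 + 0.6) + 0.1
right = 0.3 + (0.6 + 0.1)

# Float addition is commutative...
puts 0.3 + 0.6 == 0.6 + 0.3  # prints true

# ...but not associative: grouping changes the rounding
puts left == right  # prints false
puts left           # prints 0.9999999999999999
puts right          # prints 1.0
```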
As others have said, transforming a decimal representation like 0.1 (that is, 1/10) into a base-2 representation (1/16 + 1/64 + ...) leads to an infinite series of digits. So float(0.1) is not exactly equal to 1/10; it carries its own rounding error, which explains why the subsequent operations have a nonzero rounding error (the mathematical result is not representable in floating point).
It has been said many times before but it bears repeating: Floating point numbers are by their very nature approximations of decimal numbers. There are some decimal numbers that cannot be represented precisely due to the way the floating point numbers are stored in binary. Small but perceptible rounding errors will occur.
To avoid this kind of mess, you should always format your numbers to an appropriate number of places for presentation:
'%.3f' % (0.3 + 0.6 + 0.1)
# => "1.000"
'%.3f' % (0.6 + 0.1 + 0.3)
# => "1.000"
This is why using floating point numbers for currency values is risky and you're generally encouraged to use fixed point numbers or regular integers for these things.
First, the numerals "0.3", "0.6", and "0.1" in the source text are converted to floating-point numbers, which I will call a, b, and c. These values are near 0.3, 0.6, and 0.1 but not equal to them; however, that is not directly the reason you see different results.
In each floating-point arithmetic operation, there may be a little rounding error, some small number ei. So the exact mathematical results your two expressions calculate are:
(a + b + e0) + c + e1 and
(b + c + e2) + a + e3.
That is, in the first expression, a is added to b, and there is a slight rounding error e0. Then c is added, and there is a slight rounding error e1. In the second expression, b is added to c, and there is a slight rounding error e2. Finally, a is added, and there is a slight rounding error e3.
The reason your results differ is that e0 + e1 ≠ e2 + e3. That is, the rounding that was necessary when a and b were added was different from the rounding that was necessary when b and c were added and/or the roundings that were necessary in the second additions of the two cases were different.
There are rules that govern these errors. If you know the rules, you can make deductions about them that bound the size of the errors in final results.
This is a common limitation of floating point numbers, due to their being encoded in base 2 instead of base 10. It can be difficult to understand, but once you do, you can easily avoid problems like this. I recommend this guide, which goes in depth to explain it.
For this problem specifically, you might try rounding your result to the nearest millionths place:
result = (0.3+0.6+0.1)
=> 0.9999999999999999
(result*1000000.0).round/1000000.0
=> 1.0
As for why the order matters, it has to do with rounding. When those numbers are turned into floats, they are converted to binary, and all of them become repeating fractions, like ⅓ is in decimal. Since the result gets rounded during each addition, the final answer depends on the order of the additions. It appears that in one of those, you get a round-up, where in the other, you get a round-down. This explains the discrepancy.
It is worth noting what the actual difference is between those two answers: approximately 0.0000000000000001.
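You can confirm that figure in Ruby; the gap is exactly one step below 1.0 in the double grid, i.e. 2^-53:

```ruby
# 0.6 + 0.1 + 0.3 rounds to 1.0; 0.3 + 0.6 + 0.1 rounds to the
# representable value just below it
diff = (0.6 + 0.1 + 0.3) - (0.3 + 0.6 + 0.1)

puts diff              # prints 1.1102230246251565e-16
puts diff == 2.0**-53  # prints true
```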
In a Rails view you can also use the number_with_precision helper:
result = 0.3 + 0.6 + 0.1
result = number_with_precision result, :precision => 3
Wouldn't it be better to see an error when dividing an odd integer by 2 than an incorrect calculation?
Example in Ruby (I'm guessing it's the same in other languages, because ints and floats are common datatypes):
39 / 2 => 19
I get that the output isn't 19.5 because we're asking for the value of an integer divided by an integer, not a float (39.0) divided by an integer. My question is, if the limits of these datatypes inhibit it from calculating the correct value, why output the least correct value?
Correct = 19.5
Correct-ish = 20 (rounded up)
Least correct = 19
Wouldn't it be better to see an error?
Throwing an error would usually be extremely counterproductive, and computationally inefficient in most languages.
And consider that this is often useful behaviour:
total_minutes = 563;
hours = total_minutes / 60;
minutes = total_minutes % 60;
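In Ruby, the same split can be written with Integer#divmod, which returns the quotient and remainder together:

```ruby
total_minutes = 563

# divmod returns [quotient, remainder] in one call
hours, minutes = total_minutes.divmod(60)
puts "#{hours}h #{minutes}m"  # prints 9h 23m
```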
Correct = 19.5
Correct-ish = 20 (rounded up)
Least correct = 19
Who said that 20 is more correct than 19?
Among other reasons: to keep the following very useful relationship between the sibling operators of division and modulo.
Quotient: a / b = Q
Remainder: a % b = R
Awesome relationship: a = b*Q + R.
Also so that integer division by two returns the same result as a right shift by one bit and lots of other nice relationships.
But the secret, main reason is that C did it this way, and you simply don't argue with C!
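That identity is easy to check in Ruby, and it holds for negative operands too, because Ruby's / and % both use floored division (note that C, by contrast, truncates toward zero):

```ruby
# Check a == b*Q + R for a few operand signs
[[39, 2], [-39, 2], [39, -2]].each do |a, b|
  q = a / b
  r = a % b
  raise "identity broken" unless a == b * q + r
  puts "#{a} = #{b} * #{q} + #{r}"
end
```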
If you divide by 2.0, you get the correct result in Ruby:
39 / 2.0 => 19.5