Float precision in Ruby

I'm writing a ruby program that uses floats. I'm having trouble with the precision. For example
1.9.3p194 :013 > 113.0 * 0.01
# => 1.1300000000000001
and therefore
1.9.3p194 :018 > 113 * 0.01 == 1.13
# => false
This is exactly the sort of calculation my app needs to get right.
Is this expected? How should I go about handling this?

This is an inherent limitation of binary floating point numbers (even 0.01 has no exact binary floating point representation). You can use the technique provided by Aleksey or, if you need exact decimal arithmetic, use the BigDecimal class bundled with Ruby. It's more verbose and slower, but it gives the right results:
require 'bigdecimal'
=> true
1.9.3p194 :003 > BigDecimal.new("113") * BigDecimal("0.01")
=> #<BigDecimal:26cefd8,'0.113E1',18(36)>
1.9.3p194 :004 > BigDecimal.new("113") * BigDecimal("0.01") == BigDecimal("1.13")
=> true

When calculating with floats you should not compare two values directly; instead, compare their absolute difference against a very small tolerance (an epsilon) - 1e-10, for example:
((113 * 0.01) - 1.13).abs < 1e-10
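The comparison above can be wrapped in a small helper; a minimal sketch, where the method name and the 1e-10 default tolerance are illustrative choices, not a library API:

```ruby
# Compare two floats for approximate equality: true when their
# absolute difference is below a small tolerance (epsilon).
def roughly_equal?(a, b, epsilon = 1e-10)
  (a - b).abs < epsilon
end

roughly_equal?(113 * 0.01, 1.13)  # => true
113 * 0.01 == 1.13                # => false
```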

Related

Why is `9.3 == 9.3.to_d` false?

I just ran into an interesting case during TDD:
Failure/Error: expect(MoneyManager::CustomsCalculator.call(price: 31, weight: 1.12)).to eq 9.3
expected: 9.3
got: 0.93e1
I investigated further and found:
require 'bigdecimal'
=> true
2.4.2 :005 > require 'bigdecimal/util'
=> true
...
2.4.2 :008 > 1 == 1.to_d
=> true
2.4.2 :009 > 2 == 2.to_d
=> true
2.4.2 :010 > 2.0 == 2.0.to_d
=> true
2.4.2 :011 > 1.3 == 1.3.to_d
=> true
2.4.2 :012 > 9.3 == 9.3.to_d
=> false
Why is 9.3 == 9.3.to_d false?
PS, I am well aware of what a Float and a BigDecimal is, but I'm delightfully puzzled by this particular behavior.
This is not really a "ruby problem". This is a floating point representation of numbers problem.
You cannot reliably perform an equality check between floating point numbers and the "exact" value (as represented by BigDecimal).
BigDecimal.new(9.3, 2) is exact. 9.3 is not.
9.3 * 100 #=> 930.0000000000001
1.3 * 100 #=> 130.0
That's just how binary floating point numbers work. They are (sometimes) an inexact representation of the "true" value.
You can either:
Compare like-for-like (bigdecimal1 == bigdecimal2, or float1 == float2) - but note that even float1 == float2 is unreliable if the two values come from different calculations. Or,
Check that the values are equal within an error bound (e.g. in rspec terms, expect(value1).to be_within(1e-12).of(value2)).
Edited due to Eric's comment above.
You can exploit the nature of floats by comparing the absolute difference against a limit you choose, which returns either true or false reliably:
(bigdecimal - float).abs < comparison_limit
In your example that would be (I have added parentheses to improve readability):
((9.3.to_d) - 9.3).abs < 0.000001 # watch out for the limit!
Which yields true and can be used for testing.
Edit, based on Eric's comment (thank you for it): it is important to always check the limits of the tolerance when comparing the two numbers.
You could do it the following way:
9.3.next_float
which would give you
9.300000000000002
so your tolerance should be
0.000000000000002
Note: watch out for the step:
9.3.next_float.next_float
=> 9.300000000000004
Now the code looks like this:
((9.3.to_d) - 9.3).abs < 0.000000000000002
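The tolerance can also be derived from the float's own spacing instead of being typed out by hand; a sketch (the helper name is mine, and next_float needs Ruby 2.2+):

```ruby
require 'bigdecimal'
require 'bigdecimal/util'

# Derive a tolerance from the gap between a float and its successor
# (its "unit in the last place"), scaled by a small number of ulps.
def ulp_tolerance(x, ulps = 2)
  (x.next_float - x) * ulps
end

(9.3.to_d - 9.3).abs < ulp_tolerance(9.3)  # => true
```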

Convert 9999999999999999999999.001 to "9999999999999999999999.001" in ruby

How to convert 9999999999999999999999.001 to "9999999999999999999999.001" in ruby
I have tried
>> 9999999999999999999999.001.to_s
=> "1.0e+22"
>> "%f" % 9999999999999999999999.001
=> "10000000000000000000000.000000"
The truth is you can't. The number you wrote has 22 significant digits, but Floats in Ruby (IEEE 754 doubles) carry only about 15 digits of precision. So by the time the literal has been parsed into a Float, part of its value is already "lost".
Use BigDecimal from the standard library.
1.8.7 :005 > require 'bigdecimal'
=> true
1.8.7 :006 > BigDecimal('9999999999999999999999.001')
=> #<BigDecimal:7fe0cbcead70,'0.9999999999 9999999999 99001E22',36(36)>
1.8.7 :007 > BigDecimal('9999999999999999999999.001').to_s
=> "0.9999999999999999999999001E22"
Of course, this example only shows that BigDecimal can handle numbers that big. Wherever you're initially getting your 9999999999999999999999.001 number from needs to get it into BigDecimal as soon as it's calculated / inputted.
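A minimal sketch of that "into BigDecimal as soon as possible" advice - keep the value as a string until it becomes a BigDecimal, and never let it pass through a Float literal on the way (the input variable here is illustrative):

```ruby
require 'bigdecimal'

# The value stays a string until BigDecimal parses it, so no
# precision is lost to an intermediate Float.
input = "9999999999999999999999.001"  # e.g. read from a form or a file
exact = BigDecimal(input)
exact.to_s("F")  # => "9999999999999999999999.001"
```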
You cannot do that. The reason is simple: the value was never going to be exactly 9999999999999999999999.001 in the first place; a Float has only about 15 digits of precision.
However, you can use other type to achieve what you want:
require 'bigdecimal'
a = BigDecimal("9999999999999999999999.001")
a.to_s("F")
>> "9999999999999999999999.001"
With BigDecimal, precision grows as needed to represent larger numbers - no fixed limit applies. Float is faster in calculations because it is meant to use the processor's FPU directly, but that is also where its precision restriction comes from.
EDIT: especially for #izomorphius and his argument, just a very short code sample:
a = "34.101"
b = BigDecimal(a.to_s)
c = b ** 15
c.to_s("F")
>>> 98063348952510709441484.183684987951811295085234607613193907150561501
Now tell me how else you would get that last string?

Does this seem like a ruby Time equality bug?

Why the false in the 2nd comparison? I am not loading any libraries.
puts RUBY_DESCRIPTION
t = Time.now
t1 = Time.at(t.to_f)
t2 = Time.at(t.to_f)
puts( t1 == t2 )
puts( t == t1 )
puts( t.to_f == t1.to_f )
printf "%.64f\n%.64f\n%.64f\n", t, t1, t2
Output:
ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-darwin11.4.0]
true
false
true
1347661545.4348440170288085937500000000000000000000000000000000000000000000
1347661545.4348440170288085937500000000000000000000000000000000000000000000
1347661545.4348440170288085937500000000000000000000000000000000000000000000
I get all trues on 1.8.7. What's going on?
I updated the script to show that the floats are the same, as far as I can tell. Am I missing something?
From the docs on Time.to_f: "Note that IEEE 754 double is not accurate enough to represent number of nanoseconds from the Epoch." To illustrate #oldrinb's comment:
puts RUBY_DESCRIPTION # ruby 1.9.3p194 (2012-04-20 revision 35410) [i686-linux]
t = Time.now
p t.subsec #=> (40189433/100000000); # a Rational, note the last digits 33
p t.to_f #=> 1347661635.4018943 # last digit missing 3
Time#subsec documentation: "The lowest digit of #to_f and subsec is different because IEEE 754 double is not accurate enough to represent the rational. The accurate value is returned by subsec."
I'm willing to bet that this is a classic float precision issue. Specifically, when you call #to_f, you are likely losing precision present in the original object.
You can see this easily if you compare the #nsec values of each object:
1.9.3p194 :059 > t = Time.now
=> 2012-09-14 15:29:59 -0700
1.9.3p194 :060 > t2 = Time.at(t.to_f)
=> 2012-09-14 15:29:59 -0700
1.9.3p194 :062 > t.nsec
=> 489932427
1.9.3p194 :063 > t2.nsec
=> 489932537
The reason that Time.at(t.to_f) == Time.at(t.to_f) likely succeeds is that both have the same float precision loss in their input, so their inputs are indeed identical.
So, in summary, it's surprising behavior, but it's not a bug per se, because it's tied to a fundamental caveat of floating point arithmetic.
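If a lossless round-trip is what you need, Time#to_r avoids the problem entirely, since it returns the exact Rational number of seconds rather than a lossy double; a small sketch:

```ruby
t  = Time.now
t2 = Time.at(t.to_f)  # lossy: goes through an IEEE 754 double
t3 = Time.at(t.to_r)  # exact: the Rational keeps every nanosecond

t == t3  # => true
```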

How do I round a float to a specified number of significant digits in Ruby?

It would be nice to have an equivalent of R's signif function in Ruby.
For example:
>> (11.11).signif(1)
10
>> (22.22).signif(2)
22
>> (3.333).signif(2)
3.3
>> (4.4).signif(3)
4.4 # It's usually 4.40 but that's OK. R does not print the trailing 0's
# because it returns the float data type. For Ruby we want the same.
>> (5.55).signif(2)
5.6
There is probably a better way, but this seems to work fine:
class Float
  def signif(signs)
    Float("%.#{signs}g" % self)
  end
end
(1.123).signif(2) # => 1.1
(11.23).signif(2) # => 11.0
(11.23).signif(1) # => 10.0
Here's an implementation that doesn't use strings or other libraries (note the abs: Math.log10 raises Math::DomainError for negative numbers):
class Float
  def signif(digits)
    return 0.0 if zero?
    round(-(Math.log10(abs).ceil - digits))
  end
end
I don't see anything like that in Float. Float is mostly a wrapper for the native double type and given the usual binary/decimal issues, I'm not that surprised that Float doesn't allow you to manipulate the significant digits.
However, BigDecimal in the standard library does understand significant digits but again, I don't see anything that allows you to directly alter the significant digits in a BigDecimal: you can ask for it but you can't change it. But, you can kludge around that by using a no-op version of the mult or add methods:
require 'bigdecimal'
a = BigDecimal.new('11.2384')
a.mult(1, 2) # the result is 0.11E2 (i.e. 11)
a.add(0, 4) # the result is 0.1124E2 (i.e. 11.24)
The second argument to these methods:
If specified and less than the number of significant digits of the result, the result is rounded to that number of digits, according to BigDecimal.mode.
Using BigDecimal will be slower but it might be your only choice if you need fine grained control or if you need to avoid the usual floating point problems.
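The mult kludge above can be wrapped in a small helper; a sketch, where the method name is mine, not part of the library:

```ruby
require 'bigdecimal'

# mult(1, n) multiplies by 1 (a no-op) and rounds the result to
# n significant digits, according to BigDecimal.mode.
def signif_bd(value, digits)
  BigDecimal(value.to_s).mult(1, digits)
end

signif_bd(11.2384, 2).to_s("F")  # => "11.0"
signif_bd(11.2384, 4).to_s("F")  # => "11.24"
```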
Some of the previous answers and comments have alluded to this solution, but this is what worked for me:
require 'bigdecimal'

# Takes a float value and returns another float value rounded to the
# given number of significant figures.
def round_to_sig_figs(val, sig_figs)
  BigDecimal(val, sig_figs).to_f
end
You are probably looking for Ruby's Decimal.
You could then write:
require 'decimal/shortcut'
num = 1.23541764
D.context.precision = 2
num_with_2_significant_digits = +D(num.to_s) # => Decimal('1.2')
num_with_2_significant_digits.to_f # => 1.2000000000000002
Or, if you prefer to use the same syntax, add this as a method on class Float:
class Float
  def signif(num_digits)
    require 'decimal/shortcut'
    D.context.precision = num_digits
    (+D(self.to_s)).to_f
  end
end
Usage would then be the same, i.e.
(1.23333).signif 3
# => 1.23
To use it, install the gem
gem install ruby-decimal
#Blou91's answer is nearly there, but it returns a string instead of a float. This below works for me:
(sprintf "%.2f", 1.23456).to_f
So as a function:
def round(val, sig_figs)
  (sprintf "%.#{sig_figs}f", val).to_f
end
(Note that %f rounds to a number of decimal places, not significant figures.)
Use sprintf if you want to print trailing zeros
2.0.0-p353 :001 > sprintf "%.3f", 500
=> "500.000"
2.0.0-p353 :002 > sprintf "%.4f", 500
=> "500.0000"

How do I declare NaN (not a number) in Ruby?

Also, "NaN".to_f returns 0.0 instead of NaN.
Since Ruby 1.9.3 there is a constant to get the NaN value
Float::NAN
=> NaN
If you need to test if a number is NaN, you can use #nan? on it:
ruby-1.8.7-p352 :008 > (0/0.0).nan? #=> true
ruby-1.8.7-p352 :009 > (0/1.0).nan? #=> false
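A related pitfall worth noting: NaN is not equal to anything, including itself, so use #nan? rather than == when testing for it. A small sketch:

```ruby
# IEEE 754 defines NaN as unordered: every comparison with it,
# including against itself, is false.
nan = Float::NAN
nan == nan        # => false
nan.nan?          # => true
(0.0 / 0.0).nan?  # => true
```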
The simplest way is to use 0.0 / 0.0. "NaN".to_f doesn't work, and there's some discussion in this thread about why.
0.0 / 0.0 works for me on ruby 1.8.6.
The thread linked to by Pesto has this function, which should work on platforms where floating-point numbers are implemented according to IEEE 754:
def aNaN
  s, e, m = rand(2), 2047, rand(2**52 - 1) + 1
  [sprintf("%1b%011b%052b", s, e, m)].pack("B*").unpack("G").first
end
Assigning a variable in Rails can be done thus (useful for unit-testing):
o.amount = BigDecimal.new('NaN')
expect(o.valid?).to be false
