Display BigDecimal To Arbitrary Precision - ruby

Somehow I can't find the answer to this using google or SO...
Consider:
require 'bigdecimal'
puts (BigDecimal.new(1)/BigDecimal.new(3)).to_s
#=> 0.333333333333333333E0
I want the ability to specify a precision of 100 or 200 or 1000, which would print out "0." followed by 100 threes, 200 threes, or 1000 threes, respectively.
How can I accomplish this? The answer should also work for non-repeating decimals, in which case the extra digits of precision would be filled with zeros.
Thanks!

I think the problem is that the BigDecimal objects don't have their precision set high enough. I can get 1000 fractional digits printed if I explicitly specify the required precision of the operation by using div instead of /:
require 'bigdecimal'
puts (BigDecimal.new(1).div(BigDecimal.new(3), 1000)).to_s
#=> 0.3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333E0
After that, you can limit the number of fractional digits with round.
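For example, here is one way to combine div and round into a fixed-width printer. to_fixed is just an illustrative helper written for this answer, not part of BigDecimal:

```ruby
require 'bigdecimal'

# Illustrative helper (not part of BigDecimal): divide with some guard
# digits, round to the requested number of fractional places, and pad
# with zeros so non-repeating decimals get the full width too.
def to_fixed(a, b, digits)
  q = BigDecimal(a).div(BigDecimal(b), digits + 10) # 10 guard digits
  int, frac = q.round(digits).to_s('F').split('.')
  "#{int}.#{frac.ljust(digits, '0')}"
end

puts to_fixed(1, 3, 100) # prints "0." followed by 100 threes
puts to_fixed(1, 4, 100) # prints "0.25" padded with 98 zeros
```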

Related

How to prevent BigDecimal from truncating results?

Follow up to this question:
I want to calculate 1/1048576 and get the correct result, i.e. 0.00000095367431640625.
Using BigDecimal's / truncates the result:
require 'bigdecimal'
a = BigDecimal.new(1)
#=> #<BigDecimal:7fd8f18aaf80,'0.1E1',9(27)>
b = BigDecimal.new(2**20)
#=> #<BigDecimal:7fd8f189ed20,'0.1048576E7',9(27)>
n = a / b
#=> #<BigDecimal:7fd8f0898750,'0.9536743164 06E-6',18(36)>
n.to_s('F')
#=> "0.000000953674316406" <- should be ...625
This really surprised me, because I was under the impression that BigDecimal would just work.
To get the correct result, I have to use div with an explicit precision:
n = a.div(b, 100)
#=> #<BigDecimal:7fd8f29517a8,'0.9536743164 0625E-6',27(126)>
n.to_s('F')
#=> "0.00000095367431640625" <- correct
But I don't really understand that precision argument. Why do I have to specify it and what value do I have to use to get un-truncated results?
Does this even qualify as "arbitrary-precision floating point decimal arithmetic"?
Furthermore, if I calculate the above value via:
a = BigDecimal.new(5**20)
#=> #<BigDecimal:7fd8f20ab7e8,'0.9536743164 0625E14',18(27)>
b = BigDecimal.new(10**20)
#=> #<BigDecimal:7fd8f2925ab8,'0.1E21',9(36)>
n = a / b
#=> #<BigDecimal:7fd8f4866148,'0.9536743164 0625E-6',27(54)>
n.to_s('F')
#=> "0.00000095367431640625"
I do get the correct result. Why?
BigDecimal can perform arbitrary-precision floating point decimal arithmetic, however it cannot automatically determine the "correct" precision for a given calculation.
For example, consider
BigDecimal.new(1)/BigDecimal.new(3)
# <BigDecimal:1cfd748, '0.3333333333 33333333E0', 18(36)>
Arguably, there is no correct precision in this case; the right value to use depends on the accuracy required in your calculations. It's worth noting that in a mathematical sense†, almost all whole number divisions result in a number with an infinite decimal expansion, thus requiring rounding. A fraction only has a finite representation if, after reducing it to lowest terms, the denominator's only prime factors are 2 and 5.
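That rule is easy to check mechanically. finite_decimal? is a hypothetical helper written for this answer:

```ruby
# Hypothetical helper: does p/q have a finite decimal expansion? After
# reducing to lowest terms, strip all factors of 2 and 5 from the
# denominator; the expansion terminates exactly when nothing remains.
def finite_decimal?(p, q)
  q /= q.gcd(p)
  [2, 5].each { |f| q /= f while (q % f).zero? }
  q == 1
end

finite_decimal?(1, 1048576) #=> true  (1048576 = 2**20)
finite_decimal?(1, 3)       #=> false
```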
So you have to specify the precision. Unfortunately, the precision argument is a little weird, because it seems to be both the number of significant digits and the number of digits after the decimal point. Here's 1/1048576 for varying precisions:
1 0.000001
2 0.00000095
3 0.000000953
9 0.000000953
10 0.0000009536743164
11 0.00000095367431641
12 0.000000953674316406
18 0.000000953674316406
19 0.00000095367431640625
For any value less than 10, BigDecimal truncates the result to 9 digits, which is why you get a sudden spike in accuracy at precision 10: at that point it switches to truncating to 18 digits (and then rounds to 10 significant digits).
† Depending on how comfortable you are comparing the sizes of countably infinite sets.
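The table above can be regenerated with a short loop, if you want to experiment with other precisions:

```ruby
require 'bigdecimal'

a = BigDecimal(1)
b = BigDecimal(2**20) # 1048576

# Print 1/1048576 at each of the precisions from the table
[1, 2, 3, 9, 10, 18, 19].each do |prec|
  puts "#{prec}\t#{a.div(b, prec).to_s('F')}"
end
```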

Why am I getting rounding errors when using Ruby Rational?

I'm programmatically calculating the frequencies of given musical notes.
Quick intro:
The musical note A4 has a frequency of 440Hz
Notes one octave higher are double the frequency of the same note in a lower octave (so A3 is 220Hz, A2 is 110Hz)
The ratio between one note and the next semitone up is the twelfth root of 2, i.e. 2^(1/12).
C3 = B3 x 2 ^ (1/12)
Writing this formula in Ruby, I came up with the following:
# Starting from the note A4 (440Hz)
A4 = 440.to_f
# I want the frequencies of each note over the next 3 octaves
number_of_octaves = 3
# There are 12 semitones per octave
SEMITONES_PER_OCTAVE = 12
current_freq = A4
(number_of_octaves * SEMITONES_PER_OCTAVE).times do |i|
  puts "-----" if i % 12 == 0 # separate each octave with dashes
  puts current_freq
  current_freq = current_freq * 2 ** Rational('1/12')
end
The results I'm getting back are not perfect though. The A notes seem to have rounded a little higher than expected:
-----
440.0
466.1637615180899
493.8833012561241
523.2511306011974
554.3652619537443
587.3295358348153
622.253967444162
659.2551138257401
698.456462866008
739.988845423269
783.9908719634989
830.6093951598906
-----
880.0000000000003
932.3275230361802
987.7666025122486
1046.502261202395
1108.7305239074888
1174.6590716696307
1244.5079348883241
1318.5102276514804
1396.9129257320162
1479.9776908465383
1567.981743926998
1661.2187903197814
-----
1760.000000000001
1864.6550460723606
1975.5332050244976
2093.0045224047904
2217.4610478149784
2349.3181433392624
2489.0158697766497
2637.020455302962
2793.825851464034
2959.9553816930784
3135.963487853998
3322.4375806395647
Note the A frequencies - instead of being 880, 1760, they are slightly higher.
I thought Ruby's Rational was supposed to give accurate calculations and avoid the rounding errors from using floats.
Can anybody explain:
Why is this result inaccurate?
How can I improve the above code to obtain a truly accurate result?
It's not clear to me whether, in the expression current_freq * 2 ** Rational('1/12'), Ruby keeps the entire calculation in the Rational realm. Within Ruby, you get:
2.0.0p195 :001 > current_freq = 440
=> 440
2.0.0p195 :002 > current_freq * 2 ** Rational('1/12')
=> 466.1637615180899
The calculation produces a float, not a Rational. If we kept it Rational, it would look like:
2.0.0p195 :005 > Rational( current_freq * 2 ** Rational('1/12'))
=> (4100419809895505/8796093022208)
Even if you do this:
2.0.0p195 :010 > Rational(2) ** Rational(1,12)
=> 1.0594630943592953
Ruby goes from Rational to Float. The Ruby doc on Rational doesn't describe this clearly, but its examples show it happening when a Rational is raised to a non-integer exponent. This makes sense: when you raise a rational number to a rational (fractional, non-integer) exponent, chances are you're going to get an irrational number, and 2**(1/12) is one of those cases.
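You can see the cutoff directly: integral exponents keep the result exact, non-integral ones drop to Float:

```ruby
p Rational(2) ** 2               #=> (4/1)  exact Rational
p Rational(2) ** Rational(2)     #=> (4/1)  integral-valued Rational exponent stays exact
p Rational(2) ** Rational(1, 12) # Float: 1.0594630...
p (Rational(2) ** Rational(1, 12)).class #=> Float
```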
So to keep accuracy, you'd need to keep everything in the Rational realm throughout which isn't really possible once you hit an irrational number. You might, as Scott Hunter suggests, be able to narrow the field with some custom functions to control the inaccuracy. It's unclear whether that would be worth the effort in this situation.
While I'm sure it is representing 1/12 exactly (as a fraction), once you use it as an exponent, you're back in floating point, and the potential for round-off returns.
I suppose you could write your own power function, which checks whether the exponent is an integer and uses explicit multiplication in that case; that would take care of your A's at least.
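A minimal sketch of that idea (note_freq is a made-up name for this answer): special-case whole octaves so they stay in exact integer arithmetic, and only fall back to the Float path for true semitone steps.

```ruby
def note_freq(base, semitones)
  if semitones % 12 == 0
    base * 2 ** (semitones / 12)        # Integer exponent: exact arithmetic
  else
    base * 2 ** Rational(semitones, 12) # non-integer exponent: Float
  end
end

p note_freq(440, 12) #=> 880  (exact, no drift)
p note_freq(440, 24) #=> 1760
p note_freq(440, 1)  # Float, roughly 466.1637...
```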
To answer the second part of your question:
How can I improve the above code to obtain a truly accurate result?
You could calculate the frequencies with f = 2^(n/12) × 440:
def freq(n)
  2 ** (n / 12.0) * 440
end
puts (0..12).map { |n| freq(n) }
Output:
440.0
466.1637615180899
493.8833012561241
523.2511306011972
554.3652619537442
587.3295358348151
622.2539674441618
659.2551138257398
698.4564628660078
739.9888454232688
783.9908719634985
830.6093951598903
880.0

Floating point is limited to 16 digits

I have the following code:
def pi
  pivalue = 4 * (4 * Math.atan(1.0/5.0) - Math.atan(1.0/239.0))
  pivaluestring = pivalue.to_s
  puts pivaluestring[0, 20]
end
Why is pivalue limited to only 16 decimal digits? I want a much bigger limit.
Use BigMath and BigDecimal (in the Standard Library):
require "bigdecimal/math"
p BigMath::PI(50).to_s
#=>"0.3141592653589793238462643383279502884197169399375105820974944592309049629352442819E1"
# Or
include BigMath
p PI(100).to_s
BigDecimal provides arbitrary-precision floating point decimal arithmetic.
Ruby floats are 64-bit IEEE 754 doubles. Once you take away the sign bit and the exponent bits, you are left with 52 bits for the mantissa, which is about 16 decimal digits of precision.
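Ruby exposes these limits as constants on Float:

```ruby
p Float::MANT_DIG #=> 53 (mantissa bits, counting the implicit leading 1)
p Float::DIG      #=> 15 (decimal digits guaranteed to survive a round trip)
```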
Ruby does have an arbitrary-precision library: BigDecimal. Converting your code to use it would look a little like:
require "bigdecimal"
require "bigdecimal/math"
def pi(prec = 20)
  pivalue = 4 * (4 * BigMath.atan(BigDecimal.new("0.2", prec), prec) - BigMath.atan(BigDecimal.new(1) / BigDecimal.new(239), prec))
  pivaluestring = pivalue.to_s
  puts pivaluestring[0, 20]
end
You usually have to give BigDecimal a precision to tell it how many decimal digits it has to track.
There is also a built-in BigMath.PI function.
That is because Math.atan is a Float. Since you only have that much precision in the middle of the calculation, you cannot get more precision.
By the way, for float precision, you can get the pi simply by doing:
Math::PI # => 3.141592653589793
Floating point values have limited precision based on the number of bits used to store the value. Read these articles about floating-point arithmetic and limitations:
http://floating-point-gui.de/
http://www.ruby-doc.org/core-2.0/Float.html

ruby to_f bug or multiplication operator bug?

Hi, I just ran into an issue where Ruby's to_f method gives me inconsistent results.
ruby-1.9.2-head :026 > 8.45.to_f * 100
=> 844.9999999999999
ruby-1.9.2-head :027 > 4.45.to_f * 100
=> 445.0
ruby-1.9.2-head :028 > 4.35.to_f * 100
=> 434.99999999999994
My workaround is to simply round the result this way
ruby-1.9.2-head :029 > (4.35.to_f * 100).round
=> 435
After more playing around I realised that the issue might be with the multiplication operator * 100
Welcome to floating point drift. This is a well-understood problem, and you should have a read so you at least understand it yourself. For instance, have a peek at the following article:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
The problems with Float are already mentioned; see the other answers.
Some more remarks:
You wrote 4.35.to_f. The to_f is not necessary in this case.
4.35 is already a Float:
p 4.35.class #-> Float
Where did you notice the problem? When you print the number, the value is already rounded.
With String#% you can control the level of detail of the output:
p 8.45.to_f * 100 #->845.0
p "%.12f" % (8.45.to_f * 100) # -> "845.000000000000"
p "%.13f" % (8.45.to_f * 100) # -> "844.9999999999999"
p "%.14f" % (8.45.to_f * 100) # -> "844.99999999999989"
p "%.16f" % (8.45.to_f * 100) # -> "844.9999999999998900"
Alas, this is part of the curse of floating point math, and not just a problem in Ruby:
http://en.wikipedia.org/wiki/Floating_point#Representable_numbers.2C_conversion_and_rounding
If you need exact arithmetic with decimals, use BigDecimal:
require 'bigdecimal'
(BigDecimal('4.35') * 100).to_f
#=> 435.0
The fundamental problem is that the fraction 45/100 does not have an exact representation as a sum of 1/2^n terms. In fact, most fractions written with a small number of base-10 digits do not have an exact FP representation.
As a result, the actual number you get is a very close but not exact approximation to your base-10 number. The output results will depend on where you round to, but will be correct if you do anything the least bit reasonable when rounding.
If you don't round, the exact number you get will depend on where the fraction gets chopped off and on how many digits you attempt to convert. Where the fraction is chopped will depend on how many bits are needed to represent the mantissa. That's why you get different results for x.45 depending on x.
This question comes up all the time on Stack Overflow. I guess we need a floating-point FAQ.
Ironically, every (in range) integer value does have an exact floating point format representation.
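One way to see the drift directly is Float#to_r, which exposes the exact dyadic rational a Float actually stores:

```ruby
# None of these literals is stored as exactly x.45, so the *exact*
# products miss the integers -- even when the Float product rounds back:
p 8.45.to_r * 100 == 845 #=> false
p 4.45.to_r * 100 == 445 #=> false (yet the Float product rounds to 445.0)
p 4.45 * 100 == 445.0    #=> true
```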

How to compute one's complement using Ruby's bitwise operators?

What I want:
assert_equal 6, ones_complement(9) # 1001 => 0110
assert_equal 0, ones_complement(15) # 1111 => 0000
assert_equal 2, ones_complement(1) # 01 => 10
The size of the input isn't fixed at, say, 4 bits or 8 bits; rather, it's a binary stream.
What I see:
v = "1001".to_i(2) => 9
There's a bit flipping operator ~
(~v).to_s(2) => "-1010"
sprintf("%b", ~v) => "..10110"
~v => -10
I think it's got something to do with one bit being used to store the sign or something... can someone explain this output? How do I get a one's complement without resorting to string manipulations like cutting the last n chars from the sprintf output to get "0110", or replacing 0 with 1 and vice versa?
Ruby just stores a (signed) number. The internal representation of this number is not relevant: it might be a Fixnum, Bignum or something else. Therefore, the number of bits in a number is also undefined: it is just a number, after all. This is contrary to, for example, C, where an int will probably be 32 bits (fixed).
So what does the ~ operator do then? Well, just something like:
class Numeric
  def ~
    return -self - 1
  end
end
...since that's what '~' represents when looking at 2's complement numbers.
So what is missing from your input statement is the number of bits you want to switch: a 32-bits ~ is different from a generic ~ like it is in Ruby.
Now if you just want to bit-flip n-bits you can do something like:
class Numeric
  def ones_complement(bits)
    self ^ ((1 << bits) - 1)
  end
end
...but you do have to specify the number of bits to flip. And this won't affect the sign flag, since that one is outside your reach with XOR :)
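Restating that patch so it runs standalone, the asker's three cases pass once the width is explicit (the width is exactly the piece of information ~ cannot infer):

```ruby
class Numeric
  def ones_complement(bits)
    self ^ ((1 << bits) - 1) # flip only the low `bits` bits
  end
end

p 9.ones_complement(4)  #=> 6  (1001 -> 0110)
p 15.ones_complement(4) #=> 0  (1111 -> 0000)
p 1.ones_complement(2)  #=> 2  (01 -> 10)
```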
It sounds like you only want to flip four bits (the length of your input) - so you probably want to XOR with 1111.
See this question for why.
One problem with your method is that your expected answer is only true if you only flip the four significant bits: 1001 -> 0110.
But the number is stored with leading zeros, and the ~ operator flips all the leading bits too: 00001001 -> 11110110. Then the leading 1 is interpreted as the negative sign.
You really need to specify what the function is supposed to do with numbers like 0b101 and 0b11011 before you can decide how to implement it. If you only ever want to flip 4 bits you can do v^0b1111, as suggested in another answer. But if you want to flip all significant bits, it gets more complicated.
edit
Here's one way to flip all the significant bits:
def maskbits n
  b = 1
  prev = n
  mask = prev | (prev >> 1)
  while mask != prev
    prev = mask
    mask |= (mask >> (b *= 2))
  end
  mask
end

def ones_complement n
  n ^ maskbits(n)
end
This gives
p ones_complement(9).to_s(2) #>>"110"
p ones_complement(15).to_s(2) #>>"0"
p ones_complement(1).to_s(2) #>>"0"
This does not give your desired output for ones_complement(1), because it treats 1 as "1", not "01". I don't know how the function could infer how many leading zeros you want without taking the width as an argument.
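On Ruby 2.1+, Integer#bit_length builds the same all-significant-bits mask without the doubling loop (the same caveat about leading zeros applies):

```ruby
def ones_complement(n)
  n ^ ((1 << n.bit_length) - 1) # XOR with a mask of n.bit_length ones
end

p ones_complement(9).to_s(2)  #=> "110"
p ones_complement(15).to_s(2) #=> "0"
p ones_complement(1).to_s(2)  #=> "0"
```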
If you're working with strings you could do:
s = "0110"
s.gsub(/\d/) { |bit| bit == "1" ? "0" : "1" }
If you're working with numbers, you'll have to define the number of significant bits because:
0110 = 6; 1001 = 9;
110 = 6; 001 = 1;
Even ignoring the sign, you'll probably have to handle this.
What you are doing (using the ~) operator, is indeed a one's complement. You are getting those values that you are not expecting because of the way the number is interpreted by Ruby.
What you actually need to do will depend on what you are using this for. That is to say, why do you need a 1's complement?
Remember that you are getting the one's complement right now with ~ if you pass in a Fixnum: the number of bits which represent the number is a fixed quantity in the interpreter and thus there are leading 0's in front of the binary representation of the number 9 (binary 1001). You can find this number of bits by examining the size of any Fixnum. (the answer is returned in bytes)
1.size #=> 4
2147483647.size #=> 4
~ is also defined over Bignum. In this case it behaves as if all of the bits which are specified in the Bignum were inverted, and then if there were an infinite string of 1's in front of that Bignum. You can, conceivably shove your bitstream into a Bignum and invert the whole thing. You will however need to know the size of the bitstream prior to inversion to get a useful result out after it is inverted.
To answer the question as you pose it right off the bat, you can find the largest power of 2 less than your input, double it, subtract 1, then XOR the result of that with your input and always get a ones complement of just the significant bits in your input number.
def sig_ones_complement(num)
  significant_bits = num.to_s(2).length
  next_smallest_pow_2 = 2**(significant_bits - 1)
  xor_mask = (2 * next_smallest_pow_2) - 1
  return num ^ xor_mask
end