Ruby - mathematics operations

A few minutes ago I tried a simple math operation:
<%=((3+2+1)/100).round(8)%>
Mathematically the result is 0.06, but the Ruby code above returns 0.0. I would expect the result to be 0.060000.
Why not?
Thanks

(3+2+1)/100
is 0 because the division is integer division. Try
(3+2+1)/100.0
You see, if both arguments of / are integers, the result of the division is an integer (the whole part). If at least one of the arguments is a floating-point number, then the result is also floating-point.
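For example (a quick irb check; any recent Ruby behaves this way):
(3 + 2 + 1) / 100        # => 0     both operands are integers, so the fraction is discarded
(3 + 2 + 1) / 100.0      # => 0.06  one float operand makes the whole division floating-point
(3 + 2 + 1).fdiv(100)    # => 0.06  fdiv always performs floating-point division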

The dreadful integer arithmetic attacks again!
When you calculate ((3+2+1)/100), since all the operands are integers, Ruby uses integer arithmetic rather than floating point arithmetic.
If you do 7/100 it will also return 0, as it's rounded down to the nearest integer, which is 0.

Operations involving only integer data are done in integer arithmetic (and so 6/100 is 0). Converting that 0 to a float later (by round) does not bring back the already discarded fractional part.
Change any one of the values to a float (e.g. 3.0) and you are done.
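Applied to the original expression, either of these variants (same arithmetic, just one float operand) gives the expected value:
((3 + 2 + 1) / 100.0).round(8)   # => 0.06
((3.0 + 2 + 1) / 100).round(8)   # => 0.06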

Related

How to get the integer part of a positive real number using only basic arithmetic

This is a question for school, reinventing the wheel as usual.
I'm allowed to use basic arithmetic +, -, *, / and comparison, but I'm obviously not allowed to use cast.
The method has to be efficient, so I thought about multiplying a variable by 2 until it's bigger, then doing a dichotomic search between the two powers of 2 that bracket the real number whose integer part I want to extract.
However, in the next section, I'm not allowed to use these basic arithmetic operations and comparisons between an integer and a float, only between two integers or two floats.
I can't find any solution to this...
You can follow your idea of multiplying by two to surpass the value, then a dichotomic search (aka binary search) to get the desired integer. However, since you are not allowed to compare a float with an integer, start with two values: the float 1.0 and the integer 1. Do all your multiplications and comparisons with the float value, and at each step, whatever you do to the float value you also do to the integer value. At any point your float value and your integer value are equal, and you use the float value for all comparisons with your given value.
So if your given value is 3.1416, you start with your initial guess values of 1.0 and 1. 1.0 is less than 3.1416, so you double both guesses and get 2.0 and 2. The float 2.0 is still less than 3.1416 so you double both guesses again and get 4.0 and 4. Your float guess 4.0 is finally too high, so you use binary search and try 3.0 and 3. The float guess is low. However, your integer guess 3 is just one away from your previous integer guess of 4, so you are done. The final integer result is thus 3.
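Here is a minimal Ruby sketch of that paired-guess idea (the method name int_part is only for illustration; comparisons only ever happen between two floats or two integers):
def int_part(x)
  return 0 if x < 1.0                  # float-to-float comparison; integer part of x in [0, 1) is 0

  hi_f, hi_i = 1.0, 1
  while hi_f <= x                      # double both guesses until the float guess exceeds x
    hi_f *= 2.0
    hi_i *= 2
  end

  lo_f, lo_i = hi_f / 2.0, hi_i / 2    # the last guess that did not exceed x
  while hi_i - lo_i > 1                # binary search keeps lo <= x < hi
    mid_f = (lo_f + hi_f) / 2.0        # lo and hi always differ by a power of two,
    mid_i = (lo_i + hi_i) / 2          # so the integer midpoint is exact
    if mid_f <= x
      lo_f, lo_i = mid_f, mid_i
    else
      hi_f, hi_i = mid_f, mid_i
    end
  end
  lo_i
end

int_part(3.1416)   # => 3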

Why are negative numbers rounded down after division in Ruby?

I am looking through the documentation on divmod. Part of a table showing the difference between the methods div, divmod, modulo, and remainder is displayed below:
Why is 13.div(-4) rounded to -4 and not to -3? Is there any rule or convention in Ruby to round down negative numbers? If so, why is the following code not rounding down?
-3.25.round() #3
13.div(-4) == -4 and 13.modulo(-4) == -3 so that
(-4 * -4) + -3 == 13
and you get the consistent relationship
(b * (a/b)) + a.modulo(b) == a
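A quick irb check of that relationship, with both sign combinations:
13.divmod(-4)        # => [-4, -3]
(-4 * -4) + -3       # => 13
(-13).divmod(4)      # => [-4, 3]
(4 * -4) + 3         # => -13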
Why is 13.div(-4) rounded to -4 and not to -3?
This is a misconception. 13.div(-4) is not really rounded at all. It is integer division, and it follows self-consistent rules for working with integers and modular arithmetic. The rounding logic described in your link fits with it, and is then applied consistently when one or both of the parameters are Floats. Mathematical operations on negative or fractional numbers are often extended from simpler, more intuitive results on positive integers in this kind of way; e.g., this follows similar logic to how fractional and negative powers, or non-integer factorials, are extended from their positive-integer variants.
In this case, it's all about self-consistency of divmod, but not about rounding in general.
Ruby's designers had a choice to make when dealing with negative numbers; not all languages give the same result. However, once it was decided that the sign of Ruby's modulo result would match the divisor (as opposed to matching the dividend, which is what remainder does), that determined how the rest of the numbers work.
Is there any rule or convention in Ruby to round down negative numbers?
Yes. Rounding a float means returning the numerically closest integer; when there are two equally close integers, Ruby rounds to the one furthest from 0. This is an entirely separate design decision from how the integer division and modular arithmetic methods work.
If so, why is the following code not rounding down? -3.25.round() #3
I assume you mean the result to read -3. The round method does not "round down"; it rounds to the closest value, and -3 is the closest integer to -3.25. Ruby's designers did have to make a choice, though, about what to do with ties: -3.5.round() # => -4. Some languages would instead return -3 when rounding that number.
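For comparison, here is the default (argument-free) Float#round behaviour on a few nearby values:
-3.25.round    # => -3   the closest integer
-3.5.round     # => -4   a tie is rounded away from zero
-3.75.round    # => -4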

BigDecimal in 1.8 vs. 1.9

When upgrading to Ruby 1.9, I have a failing test that compares expected vs. actual values for a BigDecimal that is the result of dividing a Float.
expected: '0.495E0',9(18)
got: '0.4950000000 0000005E0',18(27)
googling for things like "bigdecimal ruby precision" and "bigdecimal changes ruby 1.9" isn't getting me anywhere.
How did BigDecimal's behavior change in ruby 1.9?
update 1
> RUBY_VERSION
=> "1.8.7"
> 1.23.to_d
=> #<BigDecimal:1034630a8,'0.123E1',18(18)>
> RUBY_VERSION
=> "1.9.3"
> 1.23.to_d
=> #<BigDecimal:1029f3988,'0.123E1',18(45)>
What do 18(18) and 18(45) mean? Precision, I imagine, but what is the notation/unit?
update 2
The code being run is:
((10 - 0.1) * (5.0/100)).to_d
My test is expecting this to be equal (==) to:
0.495.to_f
This passes under 1.8 but fails under 1.9.2 and 1.9.3.
Equality comparisons rarely succeed on FP values
The short answer is that Float#to_d is more accurate in 1.9 and is correctly failing an equality test that should not have succeeded in 1.8.7.
The long answer involves a basic rule of floating point programming: never do equality comparisons. Instead, fuzzy comparisons like if (abs(x-y) < epsilon) are recommended, or code is written to avoid the need for equality comparison altogether.
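A minimal sketch of such a fuzzy comparison in Ruby (the helper name roughly_equal? and the epsilon are arbitrary illustrative choices, not anything from the original test suite):
def roughly_equal?(x, y, epsilon = 1e-9)
  (x - y).abs < epsilon
end

roughly_equal?((10 - 0.1) * (5.0 / 100), 0.495)   # => true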
Although there are in theory about 2^32 single-precision numbers and 2^64 double-precision numbers that could be exactly compared, there are an infinite number that cannot be so compared. (Note: it is safe to do equality comparisons on FP values that happen to be integral. So, contrary to much advice, they are actually perfectly safe for loop indices and subscripts.)
Worse, the way we write fractional numbers makes it unlikely that a comparison with any specific constant will be successful.
That's because the fractions are binary, that is 1/2 + 1/4 + 1/8 ... but our constants are decimal. So, for example, consider monetary amounts in the range $1.00, $1.01, $1.02 .. $1.99. There are 100 values in this range and yet only 4 of them have exact FP representations: 1.00, 1.25, 1.50, and 1.75.
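You can check that claim directly: Float#to_r gives the exact rational value a Float actually stores, so comparing it against the intended decimal fraction shows which amounts survive unchanged.
exact = (100..199).select { |cents| (cents / 100.0).to_r == Rational(cents, 100) }
exact.map { |cents| cents / 100.0 }   # => [1.0, 1.25, 1.5, 1.75]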
So, back to your problem. Your result of 0.495 has no exact representation and neither does the input constant of 0.1. You begin the calculation with a subtraction of two FP numbers with different magnitudes. The smaller number will be denormalized in order to accomplish the subtraction and so it will lose two or three low-order bits. As a result, the calculation will lead to a slightly larger number than 0.495, because the entire 0.1 was not subtracted from 10. Your constant is actually slightly smaller (internally) than 0.495. And that's why the comparison fails.
Ruby 1.8 must have been accidentally or deliberately losing some low order bits and effectively introducing a rounding step that ended up helping your test.
Remember: the rule of thumb is that you must explicitly program in such rounding for floating point comparisons.
Notes. To answer the question from the comments about simple decimal fraction constants not having exact representations: they don't have exact finite forms because they repeat in binary. Every machine fraction is a rational number of the form x/2^n. The constants, however, are decimal, and every decimal constant is a rational number of the form x/(2^n * 5^m). The 5^m factors are odd, so they contribute no factor of 2. Only when m == 0 is there a finite representation in both the binary and decimal expansions of the fraction. So, 1.25 is exact because it's 5/(2^2 * 5^0), but 0.1 is not because it's 1/(2^1 * 5^1). There is simply no way to express 0.1 as a finite sum of x/2^n components.
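The same Float#to_r trick makes this concrete on a typical 64-bit IEEE-754 double (which Ruby's Float is):
1.25.to_r   # => (5/4)                                  denominator is a power of 2, so 1.25 is exact
0.1.to_r    # => (3602879701896397/36028797018963968)   denominator is 2**55; the stored value is not exactly 1/10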
See the Wikipedia article on floating point accuracy problems. It does a very good job of explaining why numbers like 0.1 and 0.01 cannot be represented exactly using floating point numbers.
The simple explanation is that these numbers, when represented in binary floating-point format, are recurring, just like one third is 0.3333333333... recurring in decimal.
Just as you can never represent one third exactly using a finite set of decimal digits, you cannot represent these numbers exactly using a finite set of binary digits.

How to convert a floating-point type to an integer type without rounding in VB6

What is the recommended way to convert a floating point type to an integer type, truncating everything after the decimal point? CLng rounds, apparently, and the documentation for the = operator doesn't mention the subject.
Use Fix or Int, depending on the treatment you wish for negative numbers.
Microsoft article Q196652 discusses rounding in incredible detail. Here is an excerpt
The VB Fix() function is an example of truncation. For example, Fix(3.5) is 3, and Fix(-3.5) is -3.
The Int() function rounds down to the highest integer less than the value. Both Int() and Fix() act the same way with positive numbers - truncating - but give different results for negative numbers: Int(-3.5) gives -4.
Full disclosure: I referred to this nice answer by elo80ka
See this: Undocumented behavior of the CInt() function
The CInt() function rounds to the nearest integer value. In other words, CInt(2.4) returns 2, and CInt(2.6) returns 3.
This function exhibits an under-documented behavior when the fractional part is equal to 0.5. In this case, this function rounds down if the integer portion of the argument is even, but it rounds up if the integer portion is an odd number. For example, CInt(2.5) returns 2, but CInt(3.5) returns 4.
This behavior shouldn't be considered a bug, because it helps avoid introducing errors when doing statistical calculations. UPDATE: Matthew Wills let us know that this behavior is indeed documented in VB6's help file: When the fractional part is exactly 0.5, CInt and CLng always round it to the nearest even number. For example, 0.5 rounds to 0, and 1.5 rounds to 2. CInt and CLng differ from the Fix and Int functions, which truncate, rather than round, the fractional part of a number. Also, Fix and Int always return a value of the same type as is passed in.
For positive numbers you would use
truncated = Int(value)
If used on negative numbers it would go down, i.e. -7.2 would become -8.

Why is Math.sqrt(i*i).floor == i?

I am wondering if this is true: When I take the square root of a squared integer, like in
f = Math.sqrt(123*123)
I will get a floating point number very close to 123. Due to floating point representation precision, this could be something like 122.99999999999999999999 or 123.000000000000000000001.
Since floor(122.999999999999999999) is 122, I should get 122 instead of 123. So I expect that floor(sqrt(i*i)) == i-1 in about 50% of the cases. Strangely, for all the numbers I have tested, floor(sqrt(i*i)) == i. Here is a small Ruby script to test the first 100 million numbers:
100_000_000.times do |i|
puts i if Math.sqrt(i*i).floor != i
end
The above script never prints anything. Why is that so?
UPDATE: Thanks for the quick reply, this seems to be the solution. According to Wikipedia:
Any integer with absolute value less than or equal to 2^24 can be exactly represented in the single precision format, and any integer with absolute value less than or equal to 2^53 can be exactly represented in the double precision format.
Math.sqrt(i*i) starts to behave as I expected starting from i = 9007199254740993, which is 2^53 + 1.
Here's the essence of your confusion:
Due to floating point representation
precision, this could be something
like 122.99999999999999999999 or
123.000000000000000000001.
This is false. It will always be exactly 123 on an IEEE-754 compliant system, which is almost all systems these days. Floating-point arithmetic does not have "random error" or "noise". It has precise, deterministic rounding, and many simple computations (like this one) do not incur any rounding at all.
123 is exactly representable in floating-point, and so is 123*123 (so are all modest-sized integers). So no rounding error occurs when you convert 123*123 to a floating-point type. The result is exactly 15129.
Square root is a correctly rounded operation, per the IEEE-754 standard. This means that if there is an exact answer, the square root function is required to produce it. Since you are taking the square root of exactly 15129, which is exactly 123, that's exactly the result you get from the square root function. No rounding or approximation occurs.
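A quick irb check of that point:
Math.sqrt(123 * 123)            # => 123.0, exactly; 15129 and 123 are both representable
Math.sqrt(123 * 123).floor      # => 123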
Now, for how large of an integer will this be true?
Double precision can exactly represent all integers up to 2^53. So as long as i*i is less than 2^53, no rounding will occur in your computation, and the result will be exact for that reason. This means that for all i smaller than 94906265, we know the computation will be exact.
But you tried i larger than that! What's happening?
For the largest i that you tried, i*i is just barely larger than 2^53 (1.1102... * 2^53, actually). Because conversions from integer to double (or multiplication in double) are also correctly rounded operations, i*i will be the representable value closest to the exact square of i. In this case, since i*i is 54 bits wide, the rounding will happen in the very lowest bit. Thus we know that:
i*i as a double = the exact value of i*i + rounding
where rounding is either -1,0, or 1. If rounding is zero, then the square is exact, so the square root is exact, so we already know you get the right answer. Let's ignore that case.
So now we're looking at the square root of i*i +/- 1. Using a Taylor series expansion, the infinitely precise (unrounded) value of this square root is:
i * (1 +/- 1/(2i^2) + O(1/i^4))
Now this is a bit fiddly to see if you haven't done any floating point error analysis before, but if you use the fact that i^2 > 2^53, you can see that the:
1/(2i^2) + O(1/i^4)
term is smaller than 2^-54, which means that (since square root is correctly rounded, its own rounding contributes at most half an ulp of additional error) the rounded result of the sqrt function is exactly i.
It turns out that (with a similar analysis), for any exactly representable floating point number x, sqrt(x*x) is exactly x (assuming that the intermediate computation of x*x doesn't over- or underflow), so the only way you can encounter rounding for this type of computation is in the representation of x itself, which is why you see it starting at 2^53 + 1 (the smallest unrepresentable integer).
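Consistent with the update in the question, the first failure shows up exactly at the first integer a Float cannot represent (a hedged check, assuming 64-bit IEEE-754 doubles, which Ruby's Float uses):
i = 2**53                        # 9007199254740992, still exactly representable
Math.sqrt(i * i).floor == i      # => true
i = 2**53 + 1                    # the smallest unrepresentable integer
Math.sqrt(i * i).floor == i      # => false, per the update above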
For "small" integers, there is usually an exact floating-point representation.
It's not too hard to find cases where this breaks down as you'd expect:
Math.sqrt(94949493295293425**2).floor
# => 94949493295293424
Math.sqrt(94949493295293426**2).floor
# => 94949493295293424
Math.sqrt(94949493295293427**2).floor
# => 94949493295293424
Ruby's Float is a double-precision floating point number, which means that it can accurately represent numbers with (as a rule of thumb) about 16 significant decimal digits. Regular single-precision floating point numbers manage only about 7 significant digits.
You can find more information here:
What Every Computer Scientist Should Know About Floating-Point Arithmetic:
http://docs.sun.com/source/819-3693/ncg_goldberg.html

Resources