A better Ruby implementation of round decimal to nearest 0.5 - ruby

This seems horribly inefficient. Can someone show me a better Ruby way?
def round_value
  x = (self.value*10).round/10.0 # rounds to one decimal place
  r = x.modulo(x.floor)          # finds remainder
  f = x.floor
  self.value = case
               when r.between?(0, 0.25)
                 f
               when r.between?(0.26, 0.75)
                 f + 0.5
               when r.between?(0.76, 0.99)
                 f + 1.0
               end
end

class Float
  def round_point5
    (self * 2).round / 2.0
  end
end
A classic problem: this means you're doing integer rounding with a different radix. You can replace '2' with any other number.

Multiply the number by two.
Round to a whole number.
Divide by two.
(x * 2.0).round / 2.0
In a generalized form, you multiply by the number of notches you want per whole number (say round to .2 is five notches per whole value). Then round; then divide by the same value.
(x * notches).round / notches
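As a quick sketch of that generalization (the method name round_to_step below is just for illustration, not from the answer above):

class Float
  # Round to the nearest multiple of step, e.g. 0.5 or 0.2.
  def round_to_step(step)
    notches = 1.0 / step
    (self * notches).round / notches
  end
end

1.26.round_to_step(0.5)  # => 1.5
1.26.round_to_step(0.25) # => 1.25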

You can accomplish this with the modulo operator too, though note that this variant rounds up to the next multiple rather than to the nearest one:
(x + (0.05 - (x % 0.05))).round(2)
If x = 1234.56, this will return 1234.6
I stumbled upon this answer because I am writing a Ruby-based calculator that uses Ruby's Money library to do all the financial calculations. Ruby Money objects do not have the same rounding functions that an Integer or Float does, but they can return the remainder (modulo, %).
Hence, using Ruby Money you can round a Money object up to the next $25 with the following:
x + (Money.new(2500) - (x % Money.new(2500)))
Here, if x = $1234.45 (<#Money fractional:123445 currency:USD>), then it will return $1250.00 (<#Money fractional:125000 currency:USD>).
NOTE: There's no need to round with Ruby Money objects since that library takes care of it for you!

Related

How to compute and store the digits of sqrt(n) up to 10^6 decimal places?

I am doing research work for which I need to compute and store the square root of 2 up to 10^6 decimal places. I have googled this, but I only found a NASA page, and I don't know how they computed it. I used setprecision in C++, but that gives the result only up to around 50 places. What should I do?
NASA page link: https://apod.nasa.gov/htmltest/gifcity/sqrt2.1mil
I have also tried binary search, but it was not fruitful.
long double ans = sqrt(n);
cout<<fixed<<setprecision(50)<<ans<<endl;
You have various options here. You can work with an arbitrary-precision floating-point library (for example MPFR with C or C++, or mpmath or the built-in decimal library in Python). Provided you know what error guarantees that library gives, you can ensure that you get the correct decimal digits. For example, both MPFR and Python's decimal guarantee correct rounding here, but MPFR has the disadvantage (for your particular use-case of getting decimal digits) that it works in binary, so you'd also need to analyse the error induced by the binary-to-decimal conversion.
You can also work with pure integer methods, using an arbitrary-precision integer library (like GMP), or a language that supports arbitrary-precision integers out of the box (for example, Java with its BigInteger class: recent versions of Java provide a BigInteger.sqrt method): scale 2 by 10**2n, where n is the number of places after the decimal point that you need, take the integer square root (i.e., the integer part of the exact mathematical square root), and then scale back by 10**n. See below for a relatively simple but efficient algorithm for computing integer square roots.
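For a Ruby flavour of that pure-integer approach, here's a hedged sketch (it assumes Ruby >= 2.5, which provides Integer.sqrt, and it may take a while at a million digits):

digits = 10**6
scaled = 2 * 10**(2 * digits)   # scale 2 by 10**(2n)
root   = Integer.sqrt(scaled)   # integer part of the exact square root
sqrt2_digits = root.to_s        # truncated (not rounded) decimal digits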
The simplest out-of-the-box option here, if you're willing to use another language, is to use Python's decimal library. Here's all the code you need, assuming Python 3 (not Python 2, where this will be horribly slow).
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 10**6 + 1 # number of significant digits needed
>>> sqrt2_digits = str(Decimal(2).sqrt())
The str(Decimal(2).sqrt()) operation takes less than 10 seconds on my machine. Let's check the length, and the first and last hundred digits (we obviously can't reproduce the whole output here):
>>> len(sqrt2_digits)
1000002
>>> sqrt2_digits[:100]
'1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157'
>>> sqrt2_digits[-100:]
'2637136344700072631923515210207475200984587509349804012374947972946621229489938420441930169048412044'
There's a slight problem with this: the result is guaranteed to be correctly rounded, but that's rounded, not truncated. So that means the final "4" digit could be the result of a final round up - that is, the actual digit in that position could be a "3", with an "8" or "9" (for example) following it.
We can get around this by computing a couple of extra digits, and then truncating them (after double checking that rounding of those extra digits doesn't affect the truncation).
>>> getcontext().prec = 10**6 + 3
>>> sqrt2_digits = str(Decimal(2).sqrt())
>>> sqrt2_digits[-102:]
'263713634470007263192351521020747520098458750934980401237494797294662122948993842044193016904841204391'
So indeed the millionth digit after the decimal point is a 3, not a 4. Note that if the last 3 digits computed above had been "400", we still wouldn't have known whether the millionth digit was a "3" or a "4", since that "400" could again be the result of a round up. In that case, you could compute another two digits and try again, and so on, stopping when you have an unambiguous output. (For further reading, search for "The table maker's dilemma".)
(Note that setting the decimal module's rounding mode to ROUND_DOWN does not work here, since the Decimal.sqrt method ignores the rounding mode.)
If you want to do this using pure integer arithmetic, Python 3.8 offers a math.isqrt function for computing exact integer square roots. In this case, we'd use it as follows:
>>> from math import isqrt
>>> sqrt2_digits = str(isqrt(2*10**(2*10**6)))
This takes a little longer: around 20 seconds on my laptop. Half of that time is for the binary-to-decimal conversion implicit in the str call. But this time, we got the truncated result directly, and didn't have to worry about the possibility of rounding giving us the wrong final digit(s).
Examining the results again:
>>> len(sqrt2_digits)
1000001
>>> sqrt2_digits[:100]
'1414213562373095048801688724209698078569671875376948073176679737990732478462107038850387534327641572'
>>> sqrt2_digits[-100:]
'2637136344700072631923515210207475200984587509349804012374947972946621229489938420441930169048412043'
This is a bit of a cheat, because (at the time of writing) Python 3.8 hasn't been released yet, although beta versions are available. But there's a pure Python version of the isqrt algorithm in the CPython source, that you can copy and paste and use directly. Here it is in full:
import operator

def isqrt(n):
    """
    Return the integer part of the square root of the input.
    """
    n = operator.index(n)
    if n < 0:
        raise ValueError("isqrt() argument must be nonnegative")
    if n == 0:
        return 0
    c = (n.bit_length() - 1) // 2
    a = 1
    d = 0
    for s in reversed(range(c.bit_length())):
        # Loop invariant: (a-1)**2 < (n >> 2*(c - d)) < (a+1)**2
        e = d
        d = c >> s
        a = (a << d - e - 1) + (n >> 2*c - e - d + 1) // a
    return a - (a*a > n)
The source also contains an explanation of the above algorithm and an informal proof of its correctness.
You can check that the results of the two methods above agree (modulo the extra decimal point in the first result). They're computed by completely different methods, so that acts as a sanity check on both.
You could use big integers, e.g. BigInteger in Java, and calculate the square root of 2 scaled by an even power of ten, e.g. 2e12 or 2e14. Note that sqrt(2) = 1.4142... and sqrt(200) = 14.142... Then you can use the Babylonian method to get all the digits, e.g. with S = 2e14: x(n+1) = (x(n) + S / x(n)) / 2. Repeat until x(n) doesn't change. There may be more efficient algorithms that converge faster.
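As an illustrative sketch of that Babylonian iteration on arbitrary-precision integers (written in Ruby here, since Ruby integers are already bignums; the helper name is ours, not from the answer above):

# Integer Babylonian (Newton) iteration: returns floor(sqrt(s)) for s >= 1.
def babylonian_isqrt(s)
  x = s                   # any starting guess >= sqrt(s) works
  loop do
    y = (x + s / x) / 2   # integer division throughout
    break if y >= x       # the sequence decreases until it reaches the root
    x = y
  end
  x
end

digits = 20
babylonian_isqrt(2 * 10**(2 * digits)).to_s
# => "141421356237309504880" (sqrt(2) truncated to 20 decimal places)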
// Input: a positive integer, the number of precise digits after the decimal point
// Output: a string representing the long float square root
function findSquareRoot(number, numDigits) {
  function get_power(x, y) {
    let result = 1n;
    for (let i = 0; i < y; i++) {
      result = result * BigInt(x);
    }
    return result;
  }

  let a = 5n * BigInt(number);
  let b = 5n;
  const precision_digits = get_power(10, numDigits + 1);
  while (b < precision_digits) {
    if (a >= b) {
      a = a - b;
      b = b + 10n;
    } else {
      a = a * 100n;
      b = (b / 10n) * 100n + 5n;
    }
  }

  let decimal_pos = Math.floor(Math.log10(number));
  if (decimal_pos == 0) decimal_pos = 1;
  let result = (b / 100n).toString();
  result = result.slice(0, decimal_pos) + '.' + result.slice(decimal_pos);
  return result;
}

Generate uniform pseudo-random numbers in a closed interval

What's the best way to generate pseudo-random numbers in the closed interval [0,1] instead of the usual [0,1)? One idea I've come up with is to reject values in (1/2,1), then double the number. I wonder if there is a better method.
real x
do
  call random_number(x)
  if (x <= 0.5) exit
end do
x = 2*x
print *, x
end
The most important requirement is that the algorithm should not make a worse distribution (in terms of uniformity and correlation) than that generated by random_number(). Also I'd favour simplicity. A wrapper around random_number() would be perfectly good, I'm not looking to implement a whole new generator.
As @francescalus points out in the comments, with the algorithm above lots of numbers in [0,1] will have zero probability of appearing. The following code implements a slightly different approach: the interval is enlarged a bit, then values in excess of 1 are cut out. It should behave better in that respect.
real x
do
  call random_number(x)
  x = x*(1 + 1e-6)
  if (x <= 1.) exit
end do
print *, x
end
What about swapping x and 1-x? Sorry, my Fortran is rusty
real function RNG()
  real :: x
  logical, save :: swap = .TRUE.
  call random_number(x)
  if (swap .EQV. .TRUE.) then
    RNG = x
    swap = .FALSE.
  else
    RNG = 1.0 - x
    swap = .TRUE.
  end if
end
And if you want to use Box-Muller, use 1-U everywhere and it should work
z0 = sqrt(-2.0*log(1.0-U1))*sin(TWOPI*U2)
z1 = sqrt(-2.0*log(1.0-U1))*cos(TWOPI*U2)
same for rejection version of Box-Muller

Integer division with rounding

I need to do integer division. I expect the following to return 2 instead of the actual 1:
187 / 100 # => 1
This:
(187.to_f / 100).round # => 2
will work, but it doesn't seem like an elegant solution. Isn't there an integer-only operator that gives 187 / 100 = 2?
EDIT
I'll be clearer on my use case since I keep getting down-voted:
I need to calculate taxes on a price. All my prices are in cents. There is nothing below 1 cent in the accounting world, so I need to make sure all my prices are integers (the people checking taxes don't like mistakes... really!)
But on the other hand, the tax rate is 19%.
So I wanted to find the best way to write:
def tax_price(price)
price * TAX_RATE / 100
end
that reliably returns an integer, without any floating-point side effects.
I was afraid of going into the floating-point world because it has weird side effects on number representation, for example:
Ruby strange issue with floating point multiplication
ruby floating point errors
So I found it safer to stay in the integer or the fractional world, hence my question.
You can do it while remaining in the integer world as follows:
def round_div(x,y)
  (x + y / 2) / y
end
If you prefer, you could monkey-patch Fixnum with a variant of this:
class Fixnum
  def round_div(divisor)
    (self + divisor / 2) / divisor
  end
end
187.round_div(100) # => 2
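Applied to the tax example from the question (a sketch, assuming TAX_RATE = 19 and prices in cents):

TAX_RATE = 19

def tax_price(price)
  round_div(price * TAX_RATE, 100)
end

tax_price(12_345) # => 2346 (12_345 * 19 = 234_555, and 234_555 / 100 rounds to 2346)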
No – (a.to_f / b.to_f).round is the canonical way to do it. The behavior of integer / integer is (for example) defined in the C standard as "discarding the remainder" (see http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1124.pdf page 82) and ruby uses the native C function.
This is a lesser-known method, Numeric#fdiv.
You use it like this: 187.fdiv(100).round
Not sure, but this might be what you have in mind.
q, r = 187.divmod(100)
q + (100 > r * 2 ? 0 : 1) # => 2
This should work for you; use syntax like this:
(number.to_f/another_number).round
Example:
(18.to_f/5).round
As @MattW already answered (+1), you'd have to cast your integers to floats.
The only other, less distracting way is to add .0 to your integer:
(187.0 / 100).round
However, we usually operate on variables rather than integer literals, so this method is of no use there.
After some thought, I could:
have used BigDecimal, but it feels like a bazooka to kill a bird
or use a custom method that avoids floating-point division along the way, as @sawa suggests:
def rounded_integer_div(numerator, denominator)
  q, r = numerator.divmod(denominator)
  q + (denominator > r * 2 ? 0 : 1)
end
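Another option, staying in the "fractional world" mentioned above, would be Rational (a sketch, not from the original answers): Rational arithmetic is exact, and Rational#round returns an Integer, so no floating-point representation issues creep in.

def tax_price_rational(price)          # price in cents
  (price * Rational(19, 100)).round
end

tax_price_rational(12_345) # => 2346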
If what you want is to actually only increase the result by 1 if there's any remainder (e.g. for counting paging/batching), you can use % (the modulo operation) to check the remainder.
# to add 1 if it's not an even division
a = 187
b = 100
result = a / b #=> 1
result += 1 if (a % b).positive?
#=> 2
# or in one line
result = (a / b) + ((a % b).zero? ? 0 : 1)
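For positive integers this is just ceiling division, which can also be written as a one-liner (a well-known idiom, not from the original answer):
result = (a + b - 1) / b # => 2 for a = 187, b = 100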

How to fix this floating point square root algorithm

I am trying to compute the IEEE-754 32-bit floating point square root of various inputs, but for one particular input the algorithm below, based on the Newton-Raphson method, won't converge. I am wondering what I can do to fix the problem. For the platform I am designing, I have a 32-bit floating point adder/subtractor, multiplier, and divider.
For input 0x7F7FFFFF (3.4028234663852886E38), the algorithm won't converge to the correct answer of 18446743523953729536.000000; it gives 18446743523953737728.000000 instead.
I am using MATLAB to implement my code before I implement this in hardware. I can only use single precision floating point values (SO NO DOUBLES).
clc; clear; close all;
% Input
R = typecast(uint32(hex2dec(num2str(dec2hex(((hex2dec('7F7FFFFF'))))))),'single')
% Initial estimate
OneOverRoot2 = single(1/sqrt(2));
Root2 = single(sqrt(2));
% Get low and high bits of input R
hexdata_high = bitand(bitshift(hex2dec(num2hex(single(R))),-16),hex2dec('ffff'));
hexdata_low = bitand(hex2dec(num2hex(single(R))),hex2dec('ffff'));
% Change exponent of input to -1 to get Mantissa
temp = bitand(hexdata_high,hex2dec('807F'));
Expo = bitshift(bitand(hexdata_high,hex2dec('7F80')),-7);
hexdata_high = bitor(temp,hex2dec('3F00'));
b = typecast(uint32(hex2dec(num2str(dec2hex(((bitshift(hexdata_high,16)+ hexdata_low)))))),'single');
% If exponent is odd ...
if (bitand(Expo,1))
    % Pretend the mantissa [0.5 ... 1.0) is multiplied by 2 as Expo is odd,
    % so it now has the value [1.0 ... 2.0)
    % Estimate the sqrt(mantissa) as [1.0 ... sqrt(2))
    % IOW: linearly map (0.5 ... 1.0) to (1.0 ... sqrt(2))
    Mantissa = (Root2 - 1.0)/(1.0 - 0.5)*(b - 0.5) + 1.0;
else
    % The mantissa is in range [0.5 ... 1.0)
    % Estimate the sqrt(mantissa) as [1/sqrt(2) ... 1.0)
    % IOW: linearly map (0.5 ... 1.0) to (1/sqrt(2) ... 1.0)
    Mantissa = (1.0 - OneOverRoot2)/(1.0 - 0.5)*(b - 0.5) + OneOverRoot2;
end
newS = Mantissa*2^(bitshift(Expo-127,-1));
S=newS
% S = (S + R/S)/2 method
for j = 1:6
    fprintf('S %u %f %f\n', j, S, (S-sqrt(R)));
    S = single((single(S) + single(single(R)/single(S))))/2;
    S = single(S);
end
goodaccuracy = (abs((single(S)-single(sqrt(single(R)))))) < 2^-23
difference = (abs((single(S)-single(sqrt(single(R))))))
% Get hexadecimal output
hexdata_high = (bitand(bitshift(hex2dec(num2hex(single(S))),-16),hex2dec('ffff')));
hexdata_low = (bitand(hex2dec(num2hex(single(S))),hex2dec('ffff')));
fprintf('FLOAT: T Input: %e\t\tCorrect: %e\t\tMy answer: %e\n', R, sqrt(R), S);
fprintf('output hex = 0x%04X%04X\n',hexdata_high,hexdata_low);
out = hex2dec(num2hex(single(S)));
I took a whack at this. Here's what I came up with:
float mysqrtf(float f) {
    if (f < 0) return 0.0f/0.0f;
    if (f == 1.0f / 0.0f) return f;
    if (f != f) return f;

    // half-ass an initial guess of 1.0.
    int expo;
    float foo = frexpf(f, &expo);
    float s = 1.0;
    if (expo & 1) foo *= 2, expo--;

    // this is the only case for which what's below fails.
    if (foo == 0x0.ffffffp+0) return ldexpf(0x0.ffffffp+0, expo/2);

    // do four Newton iterations.
    for (int i = 0; i < 4; i++) {
        float diff = s*s-foo;
        diff /= s;
        s -= diff/2;
    }

    // do one last Newton iteration, computing s*s-foo exactly.
    float scal = s >= 1 ? 4096 : 2048;
    float shi = (s + scal) - scal; // high 12 bits of significand
    float slo = s - shi;           // rest of significand
    float diff = shi * shi - foo;  // subtraction exact by Sterbenz's theorem
    diff += 2 * shi * slo;         // opposite signs; exact by Sterbenz's theorem
    diff += slo * slo;
    diff /= s;                     // diff == fma(s, s, -foo) / s.
    s -= diff/2;
    return ldexpf(s, expo/2);
}
The first thing to analyse is the formula (s*s-foo)/s in floating-point arithmetic. If s is a sufficiently good approximation to sqrt(foo), Sterbenz's theorem (the difference of two floats within a factor of two of each other is computed exactly) tells us that the numerator is within an ulp(foo) of the right answer --- all of that error is approximation error from computing s*s. Then we divide by s; this gives us at worst another half-ulp of approximation error. So, even without a fused multiply-add, diff is within 1.5 ulp of what it should be. And we divide it by two.
Notice that the initial guess doesn't in and of itself matter as long as you follow it up with enough Newton iterations.
Measure the error of an approximation s to sqrt(foo) by abs(s - foo/s). The error of my initial guess of 1 is at most 1. A Newton iteration in exact arithmetic squares the error and divides it by 4. A Newton iteration in floating-point arithmetic --- the kind I do four times --- squares the error, divides it by 4, and kicks in another 0.75 ulp of error. You do this four times and you find you have a relative error at most 0x0.000000C4018384, which is about 0.77 ulp. This means that four Newton iterations yield a faithfully-rounded result.
I do a fifth Newton step to get a correctly-rounded square root. The reason why it works is a little more intricate.
shi holds the "top half" of s while slo holds the "bottom half." The last 12 bits in each significand will be zero. This means, in particular, that shi * shi and shi * slo and slo * slo are exactly representable as floats.
s*s is within two ulps of foo. shi*shi is within 2047 ulps of s*s. Thus shi * shi - foo is within 2049 ulps of zero; in particular, it's exactly representable and less than 2^-10.
You can check that you can add 2 * shi * slo and get an exactly-representable result that's within 2^-22 of zero and then add slo*slo and get an exactly representable result --- s*s-foo computed exactly.
When you divide by s, you kick in an additional half-ulp of error, which is at most 2^-48 here since our error was already so small.
Now we do a Newton step. We've computed the current error correctly to within 2^-46. Adding half of it to s gives us the square root to within 3*2^-48.
To turn this into a guarantee of correct rounding, we need to prove that there are no floats between 1/2 and 2, other than the one I special-cased, whose square roots are within 3*2^-48 of a midpoint between two consecutive floats. You can do some error analysis, get a Diophantine equation, find all of the solutions of that Diophantine equation, find which inputs they correspond to, and work out what the algorithm does on those. (If you do this, there is one "physical" solution and a bunch of "unphysical" solutions. The one real solution is the only thing I special-cased.) There may be a cleaner way, however.

Large Exponents in Ruby?

I'm just doing some university-related Diffie-Hellman exercises and tried to use Ruby for them.
Sadly, Ruby doesn't seem to be able to deal with large exponents:
warning: in a**b, b may be too big
NaN
[...]
Is there any way around it? (e.g. a special math class or something along those lines?)
p.s. here is the code in question:
generator = 7789
prime = 1017473
alice_secret = 415492
bob_secret = 725193
puts from_alice_to_bob = (generator**alice_secret) % prime
puts from_bob_to_alice = (generator**bob_secret) % prime
puts bobs_key_calculation = (from_alice_to_bob**bob_secret) % prime
puts alices_key_calculation = (from_bob_to_alice**alice_secret) % prime
You need to do what is called modular exponentiation.
If you can use the OpenSSL bindings, then you can do rapid modular exponentiation in Ruby:
puts some_large_int.to_bn.mod_exp(exp,mod)
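A minimal sketch applying that to the numbers from the question (this assumes the openssl standard library; on Ruby >= 2.4 the built-in Integer#pow accepts a modulus and does the same thing):

require 'openssl'

generator    = 7789
prime        = 1017473
alice_secret = 415492

from_alice_to_bob = generator.to_bn.mod_exp(alice_secret, prime).to_i
# or, on Ruby >= 2.4, without OpenSSL:
# from_alice_to_bob = generator.pow(alice_secret, prime)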
There's a nice way to compute a^b mod n without getting these huge numbers.
You're going to walk through the exponentiation yourself, taking the modulus at each stage.
There's a trick where you can break it down into a series of powers of two.
Here's a link with an example using it to do RSA, from a course I took a while ago; specifically, on the second page, you can see a worked example:
http://www.math.uwaterloo.ca/~cd2rober/Math135/RSAExample.pdf
More explanation with some sample pseudocode from wikipedia: http://en.wikipedia.org/wiki/Modular_exponentiation#Right-to-left_binary_method
I don't know Ruby, but even a bignum-friendly math library is going to struggle to evaluate such an expression the naive way (7789 to the power 415492 has approximately 1.6 million digits).
The way to work out a^b mod p without blowing up is to take the modulus at every step of the exponentiation - I would guess that the language isn't working this out on its own and therefore must be helped.
I've made some attempts of my own. Exponentiation by squaring works well so far, but I had the same bignum problem. A recursive version looks like this:
def exponentiation(base, exp, y = 1)
  if exp == 0
    return y
  end
  case exp % 2
  when 0 then
    exp = exp / 2
    base = (base * base) % @@mod
    exponentiation(base, exp, y)
  when 1 then
    y = (base * y) % @@mod
    exp = exp - 1
    exponentiation(base, exp, y)
  end
end
However, it would be, as I'm realizing, a terrible idea to rely on Ruby's prime class for anything substantial. Ruby uses the Sieve of Eratosthenes for its prime generator, but even worse, it uses trial division for gcds and such....
Oh, and @@mod was a class variable, so if you plan on using this yourself, you might want to add it as a param or something.
I've gotten it to work quite quickly for
puts a.exponentiation(100000000000000, 1222555345678)
numbers in that range.
(using @@mod = 80233)
OK, got the squaring method to work for
a = Mod.new(80233788)
puts a.exponentiation(298989898980988987789898789098767978698745859720452521, 12225553456987474747474744778)
output: 59357797
I think that should be sufficient for any problem you might have in your Crypto course
If you really want to go to BIG modular exponentiation, here is an implementation from the wiki page.
# base expansion of a number to the selected base
def baseExpantion(number, base)
  q = number
  k = ""
  while q > 0 do
    a = q % base
    q = q / base
    k = a.to_s() + k
  end
  return k
end

# iterative modular exponentiation
def modular(n, b, m)
  x = 1
  power = baseExpantion(b, 2) # base two
  i = power.size - 1
  if power.split("")[i] == "1"
    x = x * n
    x = x % m
  end
  while i > 0 do
    n *= n
    n = n % m
    if power.split("")[i-1] == "1"
      x *= n
      x = x % m
    end
    i -= 1
  end
  return x
end
Results were tested with Wolfram Alpha.
This is inspired by the right-to-left binary method example on Wikipedia:
def powmod(base, exponent, modulus)
  return modulus == 1 ? 0 : begin
    result = 1
    base = base % modulus
    while exponent > 0
      result = result * base % modulus if exponent % 2 == 1
      exponent = exponent >> 1
      base = base * base % modulus
    end
    result
  end
end
