Ruby - Sqrt on a very large Integer causes rounding issues

I'm trying to solve a Fibonacci problem and am stumbling into rounding issues.
If i = 8670007398507948658051921 then fib1 = 19386725908489880000000000.0.
My code is below - thanks for any help.
def is_fibonacci?(i)
  fib1 = Math.sqrt(5*(i**2) + 4)
  fib2 = Math.sqrt(5*(i**2) - 4)
  fib1 == fib1.round || fib2 == fib2.round ? true : false
end

Doing sqrt like that will not work for such big values, because Math.sqrt returns a Float and its precision will not suffice here. I would advise you to implement your own integer square root. There are several algorithms for this, but I personally think a binary search, inverting the squaring function, is the easiest:
def sqrt(a)
  begv = 1
  endv = a
  while endv > begv + 1
    mid = (endv + begv) / 2
    if mid ** 2 <= a
      begv = mid
    else
      endv = mid
    end
  end
  return begv
end
Alternatively you may try using BigDecimal for the sqrt (simply raise to the power 0.5), but I like the above method better as it does not involve any floating-point calculations.
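For example, a minimal sketch using BigDecimal#sqrt, which takes the number of significant digits to compute (40 here is an assumption; it must comfortably exceed the digit count of the root):

require 'bigdecimal'

x = 5 * 8670007398507948658051921**2 + 4
root = BigDecimal(x).sqrt(40)
puts root == root.round   # exact integer test at the chosen precision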


Avoiding Conditional Statements in Loops

There is a portion of my f90 program that is taking up a significant amount of compute time. I am basically looping through three matrices (of the same size, with dimensions as large as 250-by-250), and trying to make sure values stay bounded within the interval [-1.0, 1.0]. I know that it is best practice to avoid conditionals in loops, but I am having trouble figuring out how to re-write this block of code for optimal performance. Is there a way to "unravel" the loop or use a built-in function of some sort to "vectorize" the conditional statements?
do ind2 = 1, size(u_mat,2)
  do ind1 = 1, size(u_mat,1)
    ! Dot product 1 must be bounded between [-1,1]
    if (b1_dotProd(ind1,ind2) .GT. 1.0_dp) then
      b1_dotProd(ind1,ind2) = 1.0_dp
    else if (b1_dotProd(ind1,ind2) .LT. -1.0_dp) then
      b1_dotProd(ind1,ind2) = -1.0_dp
    end if
    ! Dot product 2 must be bounded between [-1,1]
    if (b2_dotProd(ind1,ind2) .GT. 1.0_dp) then
      b2_dotProd(ind1,ind2) = 1.0_dp
    else if (b2_dotProd(ind1,ind2) .LT. -1.0_dp) then
      b2_dotProd(ind1,ind2) = -1.0_dp
    end if
    ! Dot product 3 must be bounded between [-1,1]
    if (b3_dotProd(ind1,ind2) .GT. 1.0_dp) then
      b3_dotProd(ind1,ind2) = 1.0_dp
    else if (b3_dotProd(ind1,ind2) .LT. -1.0_dp) then
      b3_dotProd(ind1,ind2) = -1.0_dp
    end if
  end do
end do
For what it's worth, I am compiling with ifort.
You can use the intrinsic min and max functions for this.
As they are both elemental, you can use them on the whole array, as
b1_dotProd = max(-1.0_dp, min(b1_dotProd, 1.0_dp))
There are processor instructions that let min and max be implemented without branches, but whether the compiler's implementation of min and max actually uses them, and whether this is actually any faster, is implementation-dependent; it is at least a lot more concise.
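Applied to all three arrays, the whole loop nest collapses to three whole-array statements (a sketch):

b1_dotProd = max(-1.0_dp, min(b1_dotProd, 1.0_dp))
b2_dotProd = max(-1.0_dp, min(b2_dotProd, 1.0_dp))
b3_dotProd = max(-1.0_dp, min(b3_dotProd, 1.0_dp))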
The answer by @veryreverie is definitely correct, but there are two things to consider.
A where statement is another sensible choice. Since it is still a conditional, the same caveat applies: whether this actually avoids branches, and whether it is actually any faster, depends on the compiler, but it is at least a lot more concise.
One example is:
pure function clamp(X) result(res)
  real, intent(in) :: X(:)
  real :: res(size(X))
  where (X < -1.0)
    res = -1.0
  else where (X > 1.0)
    res = 1.0
  else where
    res = X
  end where
end function
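Since clamp takes a rank-1 argument, a hypothetical call site for the rank-2 arrays could flatten and reshape:

b1_dotProd = reshape(clamp(pack(b1_dotProd, .true.)), shape(b1_dotProd))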
If you want to normalize to strictly 1 or -1, I would think about changing the datatype to integer. Then you can use a == 1 etc. without worrying about floating-point equality problems. Depending on your code, I would also think about cases where the dot product gets close to zero. Of course, this point only applies if you are only interested in the sign.
pure function get_sign(X) result(res)
  real, intent(in) :: X(:)
  integer :: res(size(X))
  ! Or use another appropriate choice to test for near-zero
  where (abs(X) < epsilon(X) * 10.)
    res = 0
  else where (X < 0.0)
    res = -1
  else where (X > 0.0)
    res = +1
  end where
end function

Is there a way to refactor the Julia code below in order to avoid the loop/malloc?

m, n = size(l.x)
for batch = 1:m
    l.ly = l.y[batch,:]
    l.jacobian .= -l.ly .* l.ly'
    l.jacobian[diagind(l.jacobian)] .= l.ly .* (1.0 .- l.ly)
    # n x 1 = n x n * n x 1
    l.dldx[batch,:] = l.jacobian * DLDY[batch,:]
end
return l.dldx
l.x is an m by n matrix. l.y is another matrix of the same size as l.x. My goal is to create another m by n matrix, l.dldx, in which each row is the result of the operation inside the for loop. Can anyone spot further optimization for this block of code? The code above is part of https://github.com/stevenygd/NN.jl.
The following should implement the same calculation and is more efficient:
l.dldx = l.y .* (DLDY .- sum(l.y .* DLDY, dims=2))
There might be a slight improvement available by refactoring the sum into a loop.
As the question does not have runnable code, or a test case, it is hard to give definite benchmarks, so feedback would be welcome.
UPDATE
Here is the code above with explicit loops:
function calc_dldx(y, DLDY)
    tmp = zeros(eltype(y), size(y,1))
    dldx = similar(y)
    @inbounds for j = 1:size(y,2)
        for i = 1:size(y,1)
            tmp[i] += y[i,j] * DLDY[i,j]
        end
    end
    @inbounds for j = 1:size(y,2)
        for i = 1:size(y,1)
            dldx[i,j] = y[i,j] * (DLDY[i,j] - tmp[i])
        end
    end
    return dldx
end
The long version should run even faster. A good way to measure the performance of code is using the BenchmarkTools package.
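For instance, a minimal sketch (the matrix sizes here are made up):

using BenchmarkTools

y    = rand(32, 10)
DLDY = rand(32, 10)
@btime calc_dldx($y, $DLDY)   # $ interpolates globals for accurate timing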

Speeding up a nested for loop

I've been working on speeding up the following function, but with no results:
function beta = beta_c(k,c,gamma)
    beta = zeros(size(k));
    E = @(x) (1.453*x.^4)./((1 + x.^2).^(17/6));
    for ii = 1:size(k,1)
        for jj = 1:size(k,2)
            E_int = integral(E,k(ii,jj),10000);
            beta(ii,jj) = c*gamma/(k(ii,jj)*sqrt(E_int));
        end
    end
end
Up to now, I solved it this way:
function beta = beta_calc(k,c,gamma)
    k_1d = reshape(k,[1,numel(k)]);
    E_1d = @(k) 1.453.*k.^4./((1 + k.^2).^(17/6));
    E_int = zeros(1,numel(k_1d));
    parfor ii = 1:numel(k_1d)
        E_int(ii) = quad(E_1d,k_1d(ii),10000);
    end
    beta_1d = c*gamma./(k_1d.*sqrt(E_int));
    beta = reshape(beta_1d,[size(k,1),size(k,2)]);
end
It seems to me that this didn't really improve performance. What do you think?
Would you mind shedding some light?
Thanks in advance.
EDIT
I am going to introduce some theoretical background to my question.
Generally, beta is to be calculated as

beta = c*gamma ./ (k .* sqrt(E_int)),  with E_int(k) = integral of E from k to infinity

Therefore, in the reduced case of a one-dimensional k array, E_int may be calculated as
E = 1.453.*k.^4./((1 + k.^2).^(17/6));
E_int = 1.5 - cumtrapz(k,E);
or, alternatively as
E_int(1) = 1.5;
for jj = 2:numel(k)
    E = @(k) 1.453.*k.^4./((1 + k.^2).^(17/6));
    E_int(jj) = E_int(jj - 1) - integral(E,k(jj-1),k(jj));
end
Nonetheless, k is currently a matrix k(size1,size2).
Here's another approach: parallelize, because it's easy using spmd or parfor. Instead of integral, consider quad; see this link for examples...
I like this question.
The problem: the function integral takes as integration limits only scalars. Hence, it is difficult to vectorize the computation of E_int.
A clue: there seems to be a lot of redundancy in integrating the same function over and over from k(ii,jj) to infinity...
Proposed solution: how about sorting the values of k from smallest to largest and integrating each short segment, E_sort_int(si) = integral( E, sortedK(si), sortedK(si+1) );, with sortedK( numel(k) + 1 ) = 10000;? Then the full value of E_int is a cumulative sum of E_sort_int (you only need to "undo" the sorting and reshape it back to the size of k).
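A hedged sketch of that idea (the function and variable names are made up; note the cumulative sum has to run from the largest k down, because each E_int value is a tail integral):

function beta = beta_sorted(k, c, gamma)
    E = @(x) (1.453*x.^4)./((1 + x.^2).^(17/6));
    [sortedK, idx] = sort(k(:));
    edges = [sortedK; 10000];
    segs = zeros(numel(sortedK), 1);
    for si = 1:numel(sortedK)
        segs(si) = integral(E, edges(si), edges(si+1));  % one short segment each
    end
    E_int = zeros(size(sortedK));
    E_int(idx) = flipud(cumsum(flipud(segs)));  % tail integrals, mapped back to the original order
    beta = reshape(c*gamma ./ (k(:) .* sqrt(E_int)), size(k));
end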

Large Exponents in Ruby?

I'm just doing some University related Diffie-Hellman exercises and tried to use ruby for it.
Sadly, ruby doesn't seem to be able to deal with large exponents:
warning: in a**b, b may be too big
NaN
[...]
Is there any way around it? (e.g. a special math class or something along that line?)
p.s. here is the code in question:
generator = 7789
prime = 1017473
alice_secret = 415492
bob_secret = 725193
puts from_alice_to_bob = (generator**alice_secret) % prime
puts from_bob_to_alice = (generator**bob_secret) % prime
puts bobs_key_calculation = (from_alice_to_bob**bob_secret) % prime
puts alices_key_calculation = (from_bob_to_alice**alice_secret) % prime
You need to do what is called modular exponentiation.
If you can use the OpenSSL bindings then you can do rapid modular exponentiation in Ruby:
puts some_large_int.to_bn.mod_exp(exp, mod)
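For instance, with the numbers from the question (a sketch; require 'openssl' makes Integer#to_bn available):

require 'openssl'

generator = 7789
prime = 1017473
alice_secret = 415492

puts generator.to_bn.mod_exp(alice_secret, prime)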
There's a nice way to compute a^b mod n without getting these huge numbers.
You're going to walk through the exponentiation yourself, taking the modulus at each stage.
There's a trick where you can break it down into a series of powers of two.
Here's a link with an example using it to do RSA, from a course I took a while ago (see the worked example on the second page):
http://www.math.uwaterloo.ca/~cd2rober/Math135/RSAExample.pdf
More explanation with some sample pseudocode from wikipedia: http://en.wikipedia.org/wiki/Modular_exponentiation#Right-to-left_binary_method
I don't know Ruby, but even a bignum-friendly math library is going to struggle to evaluate such an expression the naive way (7789 to the power 415492 has approximately 1.6 million digits).
The way to work out a^b mod p without blowing up is to take the modulus at every multiplication - I would guess that the language isn't working this out on its own and therefore must be helped.
I've made some attempts of my own. Exponentiation by squaring works well so far, but I hit the same problem with Bignum, with such a recursive thing as
def exponentiation(base, exp, y = 1)
  if exp == 0
    return y
  end
  case exp % 2
  when 0 then
    exp = exp / 2
    base = (base * base) % @@mod
    exponentiation(base, exp, y)
  when 1 then
    y = (base * y) % @@mod
    exp = exp - 1
    exponentiation(base, exp, y)
  end
end
However, it would be, as I'm realizing, a terrible idea to rely on Ruby's Prime class for anything substantial. Ruby uses the Sieve of Eratosthenes for its prime generator, but even worse, it uses trial division for gcds and such...
Oh, and @@mod was a class variable, so if you plan on using this yourselves, you might want to add it as a param or something.
I've gotten it to work quite quickly for
puts a.exponentiation(100000000000000, 1222555345678)
numbers in that range.
(using @@mod = 80233)
OK, got the squaring method to work for
a = Mod.new(80233788)
puts a.exponentiation(298989898980988987789898789098767978698745859720452521, 12225553456987474747474744778)
output: 59357797
I think that should be sufficient for any problem you might have in your crypto course.
If you really want to go to BIG modular exponentiation, here is an implementation from the wiki page.
# base expansion: convert a number to a digit string in the selected base
def baseExpansion(number, base)
  q = number
  k = ""
  while q > 0 do
    a = q % base
    q = q / base
    k = a.to_s() + k
  end
  return k
end
# iterative modular exponentiation (right-to-left binary)
def modular(n, b, m)
  x = 1
  power = baseExpansion(b, 2) # base two
  i = power.size - 1
  if power.split("")[i] == "1"
    x = x * n
    x = x % m
  end
  while i > 0 do
    n *= n
    n = n % m
    if power.split("")[i-1] == "1"
      x *= n
      x = x % m
    end
    i -= 1
  end
  return x
end
Results were tested with Wolfram Alpha.
This is inspired by right-to-left binary method example on Wikipedia:
def powmod(base, exponent, modulus)
  return 0 if modulus == 1
  result = 1
  base = base % modulus
  while exponent > 0
    result = result * base % modulus if exponent % 2 == 1
    exponent = exponent >> 1
    base = base * base % modulus
  end
  result
end
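Plugged into the original exercise, powmod replaces the overflowing ** expressions (a sketch using the numbers from the question):

generator = 7789
prime = 1017473
alice_secret = 415492
bob_secret = 725193

puts from_alice_to_bob = powmod(generator, alice_secret, prime)
puts from_bob_to_alice = powmod(generator, bob_secret, prime)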

Can I reduce the computational complexity of this?

Well, I have this bit of code that is slowing down the program hugely: it has linear complexity but is called a lot of times, making the whole program quadratic. If possible I would like to reduce its computational complexity, but otherwise I'll just optimize it where I can. So far I have reduced it down to:
def table(n):
    a = 1
    while 2*a <= n:
        if (-a*a) % n == 1:
            return a
        a += 1
Anyone see anything I've missed? Thanks!
EDIT: I forgot to mention: n is always a prime number.
EDIT 2: Here is my new improved program (thanks for all the contributions!):
def table(n):
    if n == 2: return 1
    if n % 4 != 1: return
    a1 = n - 1
    for a in range(1, n//2 + 1):
        if (a*a) % n == a1:
            return a
EDIT 3: And testing it out in its real context it is much faster! Well this question appears solved but there are many useful answers. I should also say that as well as those above optimizations, I have memoized the function using Python dictionaries...
Ignoring the algorithm for a moment (yes, I know, bad idea), the running time of this can be decreased hugely just by switching from while to for.
for a in range(1, n // 2 + 1)
(Hope this doesn't have an off-by-one error. I'm prone to make these.)
Another thing that I would try is to see whether the step width can be increased.
Take a look at http://modular.fas.harvard.edu/ent/ent_py .
The function sqrtmod does the job if you set a = -1 and p = n.
You missed a small point: the running time of your improved algorithm is still on the order of the square root of n. As long as you only have small primes n (say, less than 2^64), that's ok, and you should probably prefer your implementation to a more complex one.
If the prime n becomes bigger, you might have to switch to an algorithm using a little bit of number theory. To my knowledge, your problem can be solved only with a probabilistic algorithm in time log(n)^3. If I remember correctly, assuming the Riemann hypothesis holds (which most people do), one can show that the running time of the following algorithm (in Ruby - sorry, I don't know Python) is log(log(n))*log(n)^3:
class Integer
  # calculate b to the power of e modulo self
  def power(b, e)
    raise 'power only defined for integer base' unless b.is_a? Integer
    raise 'power only defined for integer exponent' unless e.is_a? Integer
    raise 'power is implemented only for positive exponent' if e < 0
    return 1 if e.zero?
    x = power(b, e>>1)
    x *= x
    (e & 1).zero? ? x % self : (x*b) % self
  end

  # Fermat test (probabilistic prime number test)
  def prime?(b = 2)
    raise "base must be at least 2 in prime?" if b < 2
    raise "base must be an integer in prime?" unless b.is_a? Integer
    power(b, self >> 1) == 1
  end

  # find square root of -1 modulo prime
  def sqrt_of_minus_one
    return 1 if self == 2
    return false if (self & 3) != 1
    raise 'sqrt_of_minus_one works only for primes' unless prime?
    # now just try all numbers (each succeeds with probability 1/2)
    2.upto(self) do |b|
      e = self >> 1
      e >>= 1 while (e & 1).zero?
      x = power(b, e)
      next if [1, self-1].include? x
      loop do
        y = (x*x) % self
        return x if y == self-1
        raise 'sqrt_of_minus_one works only for primes' if y == 1
        x = y
      end
    end
  end
end

# find a prime
p = loop do
  x = rand(1<<512)
  next if (x & 3) != 1
  break x if x.prime?
end
puts "%x" % p
puts "%x" % p.sqrt_of_minus_one
The slow part is now finding the prime (which takes approx. log(n)^4 integer operations); finding the square root of -1 still takes less than a second for 512-bit primes.
Consider pre-computing the results and storing them in a file. Nowadays many platforms have a huge disk capacity. Then, obtaining the result will be an O(1) operation.
(Building on Adam's answer.)
Look at the Wikipedia page on quadratic reciprocity:
x^2 ≡ −1 (mod p) is solvable if and only if p ≡ 1 (mod 4).
Then you can avoid the search of a root precisely for those odd prime n's that are not congruent with 1 modulo 4:
def table(n):
    if n == 2: return 1
    if n % 4 != 1: return None  # or raise an exception
    ...
Based on OP's second edit:
def table(n):
    if n == 2: return 1
    if n % 4 != 1: return
    mod = 0
    a1 = n - 1
    for a in xrange(1, a1, 2):
        mod += a
        while mod >= n: mod -= n
        if mod == a1: return a//2 + 1
It looks like you're trying to find the square root of -1 modulo n. Unfortunately, this is not an easy problem, depending on what values of n are input into your function. Depending on n, there might not even be a solution. See Wikipedia for more information on this problem.
Edit 2: Surprisingly, strength-reducing the squaring reduces the time a lot, at least on my Python2.5 installation. (I'm surprised because I thought interpreter overhead was taking most of the time, and this doesn't reduce the count of operations in the inner loop.) Reduces the time from 0.572s to 0.146s for table(1234577).
def table(n):
    n1 = n - 1
    square = 0
    for delta in xrange(1, n, 2):
        square += delta
        if n <= square: square -= n
        if square == n1: return delta // 2 + 1
strager posted the same idea but I think less tightly coded. Again, jug's answer is best.
Original answer: Another trivial coding tweak on top of Konrad Rudolph's:
def table(n):
    n1 = n - 1
    for a in xrange(1, n // 2 + 1):
        if (a*a) % n == n1: return a
Speeds it up measurably on my laptop. (About 25% for table(1234577).)
Edit: I didn't notice the python3.0 tag; but the main change was hoisting part of the calculation out of the loop, not the use of xrange. (Academic since there's a better algorithm.)
Is it possible for you to cache the results?
When you calculate a large n you are given the results for the lower n's almost for free.
One thing that you are doing is repeating the calculation -a*a over and over again. Create a table of the values once and then look them up in the main loop.
Also, although this probably doesn't apply to you because your function's name is table: if you call a function that takes time to calculate, you should cache the result in a table and just do a table lookup if it is called again with the same value. This doesn't save you the time of calculating the values on the first run, but you don't waste time repeating the calculation more than once.
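A minimal sketch of that idea with a plain dictionary (the wrapper name is made up; the OP's edit mentions doing exactly this):

_cache = {}

def table_cached(n):
    # compute on first use, reuse on every later call with the same n
    if n not in _cache:
        _cache[n] = table(n)
    return _cache[n]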
I went through and fixed the Harvard version to make it work with python 3.
http://modular.fas.harvard.edu/ent/ent_py
I made some slight changes to make the results exactly the same as the OP's function. There are two possible answers and I forced it to return the smaller answer.
import timeit

def table(n):
    if n == 2: return 1
    if n % 4 != 1: return
    a1 = n - 1

    def inversemod(a, p):
        x, y = xgcd(a, p)
        return x % p

    def xgcd(a, b):
        x_sign = 1
        if a < 0: a = -a; x_sign = -1
        x = 1; y = 0; r = 0; s = 1
        while b != 0:
            (c, q) = (a % b, a // b)
            (a, b, r, s, x, y) = (b, c, x - q*r, y - q*s, r, s)
        return (x * x_sign, y)

    def mul(x, y):
        return ((x[0]*y[0] + a1*y[1]*x[1]) % n, (x[0]*y[1] + x[1]*y[0]) % n)

    def pow(x, nn):
        ans = (1, 0)
        xpow = x
        while nn != 0:
            if nn % 2 != 0:
                ans = mul(ans, xpow)
            xpow = mul(xpow, xpow)
            nn >>= 1
        return ans

    for z in range(2, n):
        u, v = pow((1, z), a1 // 2)
        if v != 0:
            vinv = inversemod(v, n)
            if (vinv * vinv) % n == a1:
                vinv %= n
                if vinv <= n // 2:
                    return vinv
                else:
                    return n - vinv

tt = 0
pri = [5, 13, 17, 29, 37, 41, 53, 61, 73, 89, 97, 1234577, 5915587277, 3267000013, 3628273133, 2860486313, 5463458053, 3367900313]
for x in pri:
    t = timeit.Timer('q=table(' + str(x) + ')', 'from __main__ import table')
    tt += t.timeit(number=100)
    print("table(", x, ")=", table(x))
print('total time=', tt / 100)
This version takes about 3ms to run through the test cases above.
For comparison, using the prime number 1234577:

OP's Edit 2:         745 ms
The accepted answer: 522 ms
The function above:  0.2 ms
