Divide and conquer method to compute roots - algorithm

Knowing that we can use a divide-and-conquer algorithm to compute large exponents, for example 2^100 = 2^50 * 2^50, which is much more efficient, is this method also efficient for roots? For example, 2^(1/100) = (2^(1/50))^(1/50)?
In other words, I'm wondering if n^(1/x) is more efficient to compute than n^(1/y) for x < y, where x and y are integers.

I don't think that a divide-and-conquer method is used when you have non-integer exponents. I would assume that a Taylor polynomial is used to compute x^y as e^(y ln(x)). You can compute x raised to the integer part of y using divide and conquer, then multiply by x raised to the fractional part. But it doesn't make sense to divide the exponent in two otherwise. Also:
2^(1/100) = (2^(1/50))^(1/50)
This is not true.
(2^(1/50))^(1/50) = 2^(1/50 * 1/50) = 2^(1/2500) != 2^(1/100)
You would be doing:
2^(1/100) = 2^(1/200) * 2^(1/200)
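To illustrate that split of the exponent, here is a minimal sketch, assuming Python, x > 0 and y >= 0; pow_int and pow_real are hypothetical names:

import math

def pow_int(x, n):
    # square-and-multiply for a non-negative integer exponent
    result = 1.0
    while n > 0:
        if n & 1:
            result *= x
        x *= x
        n >>= 1
    return result

def pow_real(x, y):
    # x^y = x^floor(y) * e^(frac(y) * ln(x))
    n = int(y)        # integer part of the exponent
    frac = y - n      # fractional part
    return pow_int(x, n) * math.exp(frac * math.log(x))

For example, pow_real(2.0, 2.5) gives 4 * sqrt(2) ≈ 5.657.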

Since 1/x and 1/y are floating-point numbers, n^(1/x) might not be more efficient to compute than n^(1/y) for all x < y.
But the point of divide-and-conquer algorithms is that we don't calculate the same subproblem twice: we divide 2^N into two identical problems of smaller size, 2^(N/2) * 2^(N/2), and calculate 2^(N/2) only once.
Similarly, n^(2/x) can be divided into n^(1/x) * n^(1/x), and we have to calculate n^(1/x) only once. This should improve performance. Having a smaller number in the denominator should also help.
So I think this should work fine.

Find prime factors such that the difference is as small as possible

Suppose n, a, b are positive integers where n is not a prime number, such that n=ab with a≥b and (a−b) as small as possible. What would be the best algorithm to find the values of a and b if n is given?
I read a solution where they try to represent n as the difference between two squares, by searching for a square S bigger than n such that S - n is another square. Why would that be better than simply finding the prime factors of n and searching for the combination where a, b are factors of n and a - b is minimized?
Firstly, to answer why your approach
simply finding the prime factors of n and searching for the combination where a, b are factors of n and a - b is minimized
is not optimal:
Suppose your number is n = 2^7 * 3^4 * 5^2 * 7 * 11 * 13 (= 259459200), well within the range of int. From combinatorics, this number has exactly 8 * 5 * 3 * 2 * 2 * 2 = 960 divisors (one more than each prime's exponent, multiplied together). So you first have to obtain the complete prime factorization, then generate all 960 divisors from it, and finally pair each divisor a with b = n/a, keeping the pair that minimizes a - b. None of that works without a full factorization first, and the implementation of this approach is comparatively hard.
Now, why the difference-of-squares approach
represent S - n = Q, such that S and Q are perfect squares
is good:
If you can represent S - n = Q, this implies n = S - Q
=> n = s^2 - q^2
=> n = (s+q)(s-q)
=> so a = s+q, b = s-q, and your required answer is a - b = 2*q.
Now, even if you iterate over all squares S >= n, you will either find your answer or terminate once the difference between two consecutive squares exceeds n.
But I don't think this will work for all n (e.g. if n = 6, there is no solution for (S, Q): a number ≡ 2 (mod 4) is never a difference of two squares).
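A small sketch of that search, assuming Python and odd n (so that a difference-of-squares representation is guaranteed to exist); fermat_pair is a hypothetical name:

import math

def fermat_pair(n):
    # find s with s^2 - n a perfect square q^2; then n = (s+q)(s-q)
    s = math.isqrt(n)
    if s * s < n:
        s += 1
    while True:
        q2 = s * s - n
        q = math.isqrt(q2)
        if q * q == q2:
            return s + q, s - q   # (a, b), with a - b = 2*q
        s += 1

For example, fermat_pair(21) returns (7, 3).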
Another approach:
Iterate from floor(sqrt(n)) down to 1. The first number x such that x | n will be one member of the required pair (a, b). The other will be, obviously, y = n/x. So your answer will be y - x.
This is an O(sqrt(n))-time algorithm.
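A sketch of this approach, assuming Python; closest_factor_pair is a hypothetical name:

import math

def closest_factor_pair(n):
    # scan down from floor(sqrt(n)); the first divisor found
    # gives the pair with the minimal difference
    for x in range(math.isqrt(n), 0, -1):
        if n % x == 0:
            return n // x, x   # (y, x) with y >= x

For example, closest_factor_pair(15) returns (5, 3), so the answer is 5 - 3 = 2.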
A general method could be this:
Find the prime factorization of your number: n = Π p_i^a_i. Except for the worst cases where n is prime or semiprime, this will be substantially faster than the O(n^(1/2)) time of the iteration down from the square root, which doesn't divide the found factors out of the number.
Recall that the simplest prime factorization, trial division, is done by repeatedly trying to divide the number by increasing odd numbers (or by primes) below the number's square root, dividing each factor -- thus prime by construction -- out of the number as it is found (n := n/f).
Then, lazily enumerate the factors of n in order from its prime factorization. Stop after producing half of them. Having thus found n's (not necessarily prime) factor that is closest to its square root, find the second factor by simple division.
In case this must run many times, it will greatly pay off to precalculate the needed primes below the square root of n, to use in the factorizations.
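A compact sketch of the whole method, assuming Python; for brevity it generates all divisors instead of enumerating them lazily in order:

def factorize(n):
    # trial division, dividing each found factor out of n
    factors = {}
    f = 2
    while f * f <= n:
        while n % f == 0:
            factors[f] = factors.get(f, 0) + 1
            n //= f
        f += 1 if f == 2 else 2   # after 2, only odd candidates
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def closest_pair(n):
    divisors = [1]
    for p, a in factorize(n).items():
        divisors = [d * p**e for d in divisors for e in range(a + 1)]
    b = max(d for d in divisors if d * d <= n)   # divisor nearest sqrt(n) from below
    return n // b, b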

Algorithm for exponentiation with an irrational base

I know there is an O(log n) algorithm for calculating a^n, where a is an integer and n is a huge integer (probably with the result reduced modulo another prime MOD).
I am wondering whether there is still an O(log n) algorithm to calculate
(a+sqrt(b))^n + (a-sqrt(b))^n (mod MOD)
The irrational part sqrt(b) does not look easy to handle in the exponentiation. All I can do is calculate the a + sqrt(b) and a - sqrt(b) parts separately and add them together, then do the modular reduction, but if n is huge, it is easy to overflow. Any ideas?
You can do that by computing (in Z_M[x] / ⟨x² - b⟩)
(a+x)^n+(a-x)^n mod (M, x^2-b)
where again you can use modular halving-and-squaring for the powers; the intermediate results are now linear polynomials (over modular integers). Actually, you will only need one of the two powers: the result is twice the constant coefficient.
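A sketch of this in Python, representing an element c0 + c1*x of Z_M[x]/⟨x² - b⟩ as the pair (c0, c1); poly_mul and power_sum are hypothetical names:

def poly_mul(p, q, b, M):
    # (p0 + p1*x)(q0 + q1*x), reducing x^2 to b
    p0, p1 = p
    q0, q1 = q
    return ((p0 * q0 + p1 * q1 * b) % M, (p0 * q1 + p1 * q0) % M)

def power_sum(a, b, n, M):
    # (a + sqrt(b))^n + (a - sqrt(b))^n mod M
    result = (1 % M, 0)   # the constant polynomial 1
    base = (a % M, 1)     # a + x
    while n > 0:
        if n & 1:
            result = poly_mul(result, base, b, M)
        base = poly_mul(base, base, b, M)
        n >>= 1
    return (2 * result[0]) % M   # twice the constant coefficient

With the example below, power_sum(3, 8, 2, 10**9 + 7) returns 34.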
Alternatively, these power combinations are the solution of the linear recursion of order 2
u[n+2] - 2*a*u[n+1] + (a^2-b)*u[n] = 0
where
u[0]=2 and u[1]=2*a
so that you can use fast matrix exponentiation of the system matrix of this recursion, again obtaining an O(log(n)) algorithm (disregarding bitsize).
Example: As per the comment, take a=3, b=8, n=2 (and integers mod M=10^9+7; the example is not large enough for that to matter).
In the first variant, compute u[n]=(a+x)^n mod (M, x^2-b), so
u[0]=1
u[1]=3+x
u[2]=(3+x)^2 mod (x^2-8) = 9+6x+x^2 = 9+6x+8 = 17+6x
and twice the constant term is 2*17=34
In the second variant, the recursion is (with 2*a=6, a^2-b=1)
u[n+2]-6*u[n+1]+u[n]=0
so that the first sequence elements are
u[0]=2
u[1]=6
u[2]=6*u[1]-u[0]=34
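A matching sketch of the second variant, raising the 2x2 system matrix of the recursion to the n-th power by squaring; power_sum_rec is a hypothetical name:

def mat_mul(X, Y, M):
    return [[(X[i][0] * Y[0][j] + X[i][1] * Y[1][j]) % M
             for j in range(2)] for i in range(2)]

def power_sum_rec(a, b, n, M):
    # u[k+2] = 2a*u[k+1] - (a^2-b)*u[k], with [u[k+1], u[k]]^T = T^k [u[1], u[0]]^T
    T = [[(2 * a) % M, (-(a * a - b)) % M], [1, 0]]
    R = [[1, 0], [0, 1]]   # identity
    while n > 0:
        if n & 1:
            R = mat_mul(R, T, M)
        T = mat_mul(T, T, M)
        n >>= 1
    u1, u0 = (2 * a) % M, 2 % M
    return (R[1][0] * u1 + R[1][1] * u0) % M   # u[n]

Again, power_sum_rec(3, 8, 2, 10**9 + 7) returns 34.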
If you expand (a+sqrt(b))^n + (a-sqrt(b))^n you get
( a^n + nC1 a^(n-1) √b + nC2 a^(n-2) b + nC3 a^(n-3) b √b + ... )
+( a^n - nC1 a^(n-1) √b + nC2 a^(n-2) b - nC3 a^(n-3) b √b + ... )
= 2 a^n + 0 + 2 nC2 a^(n-2) b + 0 + 2 nC4 a^(n-4) b^2 + ...
so the terms involving the possibly irrational parts cancel. (nC2 etc are binomial coefficients).
The RHS of the above can be calculated fairly efficiently using integer arithmetic, as you can relate each term in the sum to the previous one. However, there are about n/2 terms, so the calculation is O(n).
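For illustration, a direct O(n) evaluation of that right-hand side with exact integer arithmetic, assuming Python and a != 0; each term is derived from the previous one:

def power_sum_direct(a, b, n):
    # 2 * sum over even k of C(n,k) * a^(n-k) * b^(k/2)
    total = 0
    term = a ** n   # the k = 0 term
    k = 0
    while k <= n:
        total += term
        if k + 2 <= n:
            # C(n,k+2)/C(n,k) = (n-k)(n-k-1)/((k+1)(k+2)); the division is exact
            term = term * (n - k) * (n - k - 1) * b // ((k + 1) * (k + 2) * a * a)
        k += 2
    return 2 * total

For the running example, power_sum_direct(3, 8, 2) returns 34 as well.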
As we know the result will be an integer, we could try running the exponentiation-by-squaring algorithm while keeping track of the integer and fractional components. Write a + sqrt(b) = x + y, where x is an integer and y is the fractional part.
Squaring this gives x^2 + 2xy + y^2. Even though we are only interested in the integer part, there is a problem: 2xy + y^2 contributes an integer part of its own, so to calculate the result's integer part correctly we need to know many digits of y, and for higher powers we need more and more digits of y.
I don't think normal floating-point multiplication would be good enough to calculate the terms for very large n.

Computing sum of linear sequence modulo n

I'm looking to calculate the following sum efficiently:
sum (i=0..max) (i * A mod B)
One may assume that max, A < B and that A and B are co-prime (otherwise an easy reduction is possible). Numbers are large, so simple iteration is way too inefficient.
So far I haven't been able to come up with a polynomial-time algorithm (i.e., polynomial in log(B)); the best I could find is O(sqrt(max)). Is this a known hard problem, or does anyone know of a polynomial-time algorithm?
To be clear, the "mod B" only applies to the i*A, not to the overall sum. So e.g.
sum(i=0..3) (i*7 mod 11) = 0 + 7 + 3 + 10 = 20.
You can shift things around a bit: since i*A mod B = i*A - B*floor(i*A/B), the sum equals
A*(max*(max+1)/2) - B * sum(i=0..max) floor(i*A/B)
The first part needs one (possibly big-int) multiplication (assuming max itself isn't too big). The remaining floor-sum is where the work is, and it can be computed with a Euclid-like recursion in O(log) time, which gives a polynomial-time algorithm overall.
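A Python transcription of that Euclid-like recursion; it mirrors the floor_sum routine found in the AtCoder Library:

def floor_sum(n, m, a, b):
    # sum of floor((a*i + b) / m) for i in range(n)
    ans = 0
    if a >= m:
        ans += (n - 1) * n // 2 * (a // m)
        a %= m
    if b >= m:
        ans += n * (b // m)
        b %= m
    y_max = (a * n + b) // m
    if y_max == 0:
        return ans
    x_max = y_max * m - b
    ans += (n - (x_max + a - 1) // a) * y_max
    ans += floor_sum(y_max, a, m, (a - x_max % a) % a)
    return ans

def mod_linear_sum(mx, A, B):
    # sum(i=0..mx) (i*A mod B)
    return A * mx * (mx + 1) // 2 - B * floor_sum(mx + 1, B, A, 0)

mod_linear_sum(3, 7, 11) returns 20, matching the example above.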

Exponentiation algorithm analysis

The following text is provided about exponentiation:
We have an obvious algorithm to compute X^N that uses N-1 multiplications. A recursive algorithm can do better. N <= 1 is the base case of the recursion. Otherwise, if N is even we have x^N = x^(N/2) * x^(N/2), and if N is odd, x^N = x^((N-1)/2) * x^((N-1)/2) * x.
Specifically, a 200-digit number is raised to a large power (usually
another 200-digit number), with only the low 200 or so digits retained
after each multiplication. Since the calculations require dealing with
200-digit numbers, efficiency is obviously important. The
straightforward algorithm for exponentiation would require about 10 to
power of 200 multiplications, whereas the recursive algorithm presented
requires only about 1,200.
My questions regarding the above text:
1. How did the author arrive at 10 to the power of 200 multiplications for the simple algorithm, and only about 1,200 for the recursive algorithm?
Thanks!
Because the complexity of the first algorithm is linear in N while that of the second is logarithmic.
A 200-digit number is about 10^200, and log2(10^200) is about 664; with up to two multiplications per bit of the exponent, that matches the book's rough figure of about 1,200.
The exponent has 200 digits, thus it is about 10 to the power of 200. Using naive exponentiation you would have to do that many multiplications.
On the other hand, if you use recursive exponentiation, the number of multiplications depends on the exponent's number of bits. Since the exponent is almost 10^200, it has log2(10^200) = 200*log2(10) ≈ 664 bits. The factor of 2 taking this to roughly 1,300 stems from the fact that for each 1 bit you have to do two multiplications.
Here are the 2 possible algorithms (written as runnable Python):

def SimpleExp(a, N):
    # gives a^N using N multiplications
    if N == 0:
        return 1
    return a * SimpleExp(a, N - 1)

so it's N operations; for a^(10^200) that's about 10^200.

def OptimizedAlgo(a, N):
    if N == 0:
        return 1
    half = OptimizedAlgo(a, N // 2)  # recurse once, reuse the result
    if N % 2 == 0:
        return half * half           # 1 multiplication
    else:
        return a * half * half       # 2 multiplications

Note that the recursive call is made only once and its result reused; writing OptimizedAlgo(a, N//2) * OptimizedAlgo(a, N//2) would evaluate the recursion twice and bring the cost right back up to O(N).
Here for a^(10^200) you have between log2(N) and 2*log2(N) multiplications (since 2^(log2(N)) = N),
and log2(10^200) = 200 * log2(10) ≈ 664.39,
and 2*log2(10^200) ≈ 1328.77,
so the number of operations lies between about 664 and 1329.

How to calculate the inverse key matrix in Hill Cipher algorithm?

I am finding it very hard to understand the way the inverse of the matrix is calculated in the Hill Cipher algorithm. I get the idea of it all being done in modular arithmetic, but somehow things are not adding up. I would really appreciate a simple explanation!
Consider the following Hill Cipher key matrix:
5 8
17 3
Please use the above matrix for illustration.
You must study the linear congruence theorem and the extended GCD algorithm, which belong to number theory, in order to understand the maths behind modular arithmetic.
The inverse of a matrix K, for example, is (1/det(K)) * adjugate(K), where det(K) ≠ 0.
I assume that you don't understand how to calculate 1/det(K) in modular arithmetic, and here is where linear congruences and the GCD come into play.
Your K has det(K) = -121. Let's say the modulus m is 26. We want x*(-121) ≡ 1 (mod 26). [a ≡ b (mod m) means that a - b = N*m for some integer N.]
We can easily find that for x = 3 the above congruence holds, because 26 divides (3*(-121) - 1) exactly. Of course, the correct way is to run the GCD in reverse to calculate x; check the extended GCD algorithm :)
Now, inv(K) = 3*([3 -8], [-17 5]) (mod 26) = ([9 -24], [-51 15]) (mod 26) = ([9 2], [1 15]).
Update: check out Basics of Computational Number Theory to see how to calculate modular inverses with the extended Euclidean algorithm. Note that -121 mod 26 = 9, and since gcd(9, 26) = 1 the extended GCD gives 3*9 - 1*26 = 1, i.e. 3 is the inverse of 9 (and hence of -121) mod 26.
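A small Python sketch of exactly this computation for a 2x2 key; inverse_key_2x2 is a hypothetical name and egcd is the extended Euclidean algorithm:

def egcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_key_2x2(K, m=26):
    a, b = K[0]
    c, d = K[1]
    det = (a * d - b * c) % m
    g, inv_det, _ = egcd(det, m)
    if g != 1:
        raise ValueError("key matrix is not invertible mod %d" % m)
    adj = [[d, -b], [-c, a]]   # adjugate of K
    return [[(inv_det * e) % m for e in row] for row in adj]

inverse_key_2x2([[5, 8], [17, 3]]) returns [[9, 2], [1, 15]], matching the result above.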
In my very humble opinion it is much easier to calculate the inverse matrix (modular or otherwise) by using the Gauss-Jordan method. That way you don't have to calculate the determinant, and the method scales simply to arbitrarily large systems.
Just look up 'Gauss-Jordan matrix inverse'. To summarise: you adjoin a copy of the identity matrix to the right of the matrix to be inverted, then use row operations to reduce the matrix being solved until it becomes the identity (in the modular case each pivot must be invertible mod 26, i.e. coprime to 26). At that point, the adjoined identity matrix has become the inverse of the original matrix. Voila!
