Division algorithm provided in Sanjoy Dasgupta's Algorithms

I am reading the division algorithm in the book Algorithms by Sanjoy Dasgupta. The division algorithm is given as below.
function divide(x,y)
Input: Two n-bit integers x and y, where y ≥ 1
Output: The quotient and remainder of x divided by y
if x = 0: return (q,r) = (0,0)
(q,r) = divide(⌊x/2⌋, y)
q = 2·q, r = 2·r
if x is odd: r = r + 1
if r ≥ y: r = r−y, q = q + 1
return (q,r)
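For concreteness, here is a hand trace (mine, not from the book) of the algorithm on x = 11, y = 3:

divide(11, 3): 11 is odd, recurse on divide(5, 3)
    divide(5, 3): 5 is odd, recurse on divide(2, 3)
        divide(2, 3): 2 is even, recurse on divide(1, 3)
            divide(1, 3): 1 is odd, recurse on divide(0, 3), which returns (0, 0)
            doubling gives (0, 0); x is odd, so r = 1; r < 3, return (0, 1)    [1 = 0·3 + 1]
        doubling gives (0, 2); x is even; r < 3, return (0, 2)                 [2 = 0·3 + 2]
    doubling gives (0, 4); x is odd, so r = 5; r ≥ 3, return (1, 2)            [5 = 1·3 + 2]
doubling gives (2, 4); x is odd, so r = 5; r ≥ 3, return (3, 2)                [11 = 3·3 + 2]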
My questions on the above algorithm are:
How do we write a recurrence formulation for the above algorithm in simple terms? It is missing from the book and I am not able to write one.
Why are we performing r = r + 1 if x is odd?
Why are we doing q = 2·q and r = 2·r?
Thanks

Related

Modular arithmetic. How to solve the following equation?

How to solve the following equation?
I am interested in methods of solution.
n^3 mod P = (n+1)^3 mod P
where P is a prime number.
Here is a short example with the answer; could you give a step-by-step solution for it?
n^3 mod 61 = (n + 1)^3 mod 61
Integer solutions:
n = 61m + 4,
n = 61m + 56,
where m ∈ Z, the set of integers.
Another way to state n^3 ≡ (n+1)^3 (mod P) is n^3 ≡ n^3 + 3n^2 + 3n + 1 (just expand the cube of n+1); the cubic terms cancel, leaving the nicer quadratic 3n^2 + 3n + 1 ≡ 0 (mod P).
Then the usual quadratic formula applies, though all of its operations are now modulo P, and the discriminant is not always a quadratic residue, in which case there are no solutions to the original equation (this happens about half the time). This involves finding a square root modulo a prime, which is not hard for a computer to do, for example with the Tonelli–Shanks algorithm, though it is not trivial to implement.
By the way, 3n^2 + 3n + 1 ≡ 0 has the property that if n is a solution, then −n − 1 is too.
For example, with some Python, once all the support functions exist it is pretty simple:
def solve(p):
    # solve 3 n^2 + 3 n + 1 ≡ 0
    D = -3 % p
    sqrtD = modular_sqrt(D, p)
    if sqrtD == 0:
        return None
    else:
        n = (sqrtD - 3) * inverse(6, p) % p
        return (n, -(n+1) % p)
Inverse modulo a prime is really easy,
def inverse(x, p):
    return pow(x, p - 2, p)
I adapted this implementation of Tonelli–Shanks to Python 3 (// instead of / for integer division):
def modular_sqrt(a, p):
    """ Find a square root of 'a' modulo p. p
        must be an odd prime.

        Solve the congruence of the form:
            x^2 = a (mod p)
        and return x. Note that p - x is also a root.

        0 is returned if no square root exists for
        these a and p.

        The Tonelli-Shanks algorithm is used (except
        for some simple cases in which the solution
        is known from an identity). This algorithm
        runs in polynomial time (unless the
        generalized Riemann hypothesis is false).
    """
    # Simple cases
    #
    if legendre_symbol(a, p) != 1:
        return 0
    elif a == 0:
        return 0
    elif p == 2:
        return 0
    elif p % 4 == 3:
        return pow(a, (p + 1) // 4, p)

    # Partition p-1 to s * 2^e for an odd s (i.e.
    # reduce all the powers of 2 from p-1)
    #
    s = p - 1
    e = 0
    while s % 2 == 0:
        s //= 2
        e += 1

    # Find some 'n' with a legendre symbol n|p = -1.
    # Shouldn't take long.
    #
    n = 2
    while legendre_symbol(n, p) != -1:
        n += 1

    # Here be dragons!
    # Read the paper "Square roots from 1; 24, 51,
    # 10 to Dan Shanks" by Ezra Brown for more
    # information.
    #
    # x is a guess of the square root that gets better
    # with each iteration.
    # b is the "fudge factor" - by how much we're off
    # with the guess. The invariant x^2 = ab (mod p)
    # is maintained throughout the loop.
    # g is used for successive powers of n to update
    # both a and b.
    # r is the exponent - decreases with each update.
    #
    x = pow(a, (s + 1) // 2, p)
    b = pow(a, s, p)
    g = pow(n, s, p)
    r = e

    while True:
        t = b
        m = 0
        for m in range(r):
            if t == 1:
                break
            t = pow(t, 2, p)

        if m == 0:
            return x

        gs = pow(g, 2 ** (r - m - 1), p)
        g = (gs * gs) % p
        x = (x * gs) % p
        b = (b * g) % p
        r = m
def legendre_symbol(a, p):
    """ Compute the Legendre symbol a|p using
        Euler's criterion. p is a prime, a is
        relatively prime to p (if p divides
        a, then a|p = 0).

        Returns 1 if a has a square root modulo
        p, -1 otherwise.
    """
    ls = pow(a, (p - 1) // 2, p)
    return -1 if ls == p - 1 else ls
You can see some results on ideone
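For example, for the P = 61 case above (a check I added, not part of the original answer), the expected output is the pair of solutions 4 and 56, in either order depending on which square root modular_sqrt returns:

print(solve(61))   # (4, 56) or (56, 4)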

How to find the number of solutions of modular equation?

Find the number of solutions of x^2 ≡ x (mod m), where m = p·q.
Let p and q be distinct primes.
You can break a modular equation into separate equations if the factors are coprime.
This means that x^2 ≡ x (mod m) is equivalent to the pair x^2 ≡ x (mod p) and x^2 ≡ x (mod q).
Each of these can be factorized as x(x − 1) ≡ 0, so x ≡ 0 or x ≡ 1.
So you know that x is 0 or 1 modulo p, and x is 0 or 1 modulo q. Each choice gives exactly one solution modulo m by the Chinese Remainder Theorem, so there will be 4 solutions.
Two are easy (x = 0 and x = 1). The other two can be found with the extended Euclidean algorithm as follows:
def egcd(a, b):
    # extended Euclid: returns (lx, ly) with lx*a + ly*b = gcd(a, b)
    x, y = 0, 1
    lx, ly = 1, 0
    while b != 0:
        q = a // b
        (a, b) = (b, a % b)
        (lx, x) = (x, lx - q * x)
        (ly, y) = (y, ly - q * y)
    return (lx, ly)

p = 7
q = 11
m = p * q
(lx, ly) = egcd(p, q)
print(lx * p % m, ly * q % m)
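With p = 7 and q = 11 this prints 56 and 22, and a quick check (my addition, not part of the original answer) confirms both really are the non-trivial solutions modulo 77:

print(56 * 56 % 77, 22 * 22 % 77)   # 56 22, i.e. 56^2 ≡ 56 and 22^2 ≡ 22 (mod 77)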

Recursive division algorithm for two n bit numbers

In the below division algorithm, I am not able to understand why multiplying q and r by two works and also why r is incremented if x is odd.
Please give a theoretical justification of this recursive division algorithm.
Thanks in advance.
function divide(x, y)
    if x = 0:
        return (q, r) = (0, 0)
    (q, r) = divide(floor(x/2), y)
    q = 2q, r = 2r
    if x is odd:
        r = r + 1
    if r ≥ y:
        r = r − y, q = q + 1
    return (q, r)
Let's assume you want to divide x by y, i.e. represent x = Q * y + R with 0 ≤ R < y.
Let's assume that x is even. You recursively divide x / 2 by y and get your desired representation for a smaller case: x / 2 = q * y + r.
By multiplying it by two, you would get: x = 2q * y + 2r. Looking at the representation you wanted to get for x in the first place, you see that you have found it! Let Q = 2q and R = 2r and you found the desired Q and R.
If x is odd, you again first get the desired representation for a smaller case: (x - 1) / 2 = q * y + r, multiply it by two: x - 1 = 2q * y + 2r, and send 1 to the right: x = 2q * y + 2r + 1. Again, you have found Q and R you wanted: Q = 2q, R = 2r + 1.
The final part of the algorithm is just normalization so that r < y again: r can become larger than y after the doubling, but since r < y before doubling, afterwards (even with the +1 for odd x) it is at most 2y − 1, so subtracting y once is enough.
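The explanation translates directly into Python; this is my sketch (not the book's code), together with the recurrence it implies: each call strips one bit of x and does a constant number of O(n)-bit operations, so T(n) = T(n − 1) + O(n) = O(n^2) for n-bit inputs.

def divide(x, y):
    if x == 0:
        return (0, 0)
    q, r = divide(x // 2, y)   # solve the smaller instance floor(x/2)
    q, r = 2 * q, 2 * r        # undo the halving
    if x % 2 == 1:             # put back the dropped low bit
        r = r + 1
    if r >= y:                 # restore the invariant 0 <= r < y
        r, q = r - y, q + 1
    return (q, r)

print(divide(11, 3))   # (3, 2), since 11 = 3*3 + 2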
Algorithm PuzzleSolve(k, S, U):
    Input: An integer k, sequence S, and set U
    Output: An enumeration of all k-length extensions to S using elements in U without repetitions
    for each e in U do
        Add e to the end of S
        Remove e from U                     {e is now being used}
        if k == 1 then
            Test whether S is a configuration that solves the puzzle
            if S solves the puzzle then
                return "Solution found: " S
        else
            PuzzleSolve(k-1, S, U)          {a recursive call}
        Remove e from the end of S
        Add e back to U                     {e is now considered as unused}
This algorithm enumerates every possible size-k ordered subset of U, and tests each subset for being
a possible solution to our puzzle. For summation puzzles, U = {0,1,2,3,4,5,6,7,8,9} and each position
in the sequence corresponds to a given letter. For example, the first position could stand for b, the
second for o, the third for y, and so on.
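A minimal Python rendering of PuzzleSolve (the function name and the solves callback are my additions for illustration); it prints every solution instead of returning the first one:

def puzzle_solve(k, S, U, solves):
    for e in list(U):      # iterate over a snapshot so U can be mutated
        S.append(e)        # add e to the end of S
        U.remove(e)        # e is now being used
        if k == 1:
            if solves(S):  # test whether S solves the puzzle
                print("Solution found:", S)
        else:
            puzzle_solve(k - 1, S, U, solves)   # a recursive call
        S.pop()            # remove e from the end of S
        U.add(e)           # e is now considered as unused

# Toy test: sequences of 3 distinct digits from {0..4} with first + second == third
puzzle_solve(3, [], set(range(5)), lambda s: s[0] + s[1] == s[2])   # prints 4 solutions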

Fast factorization

For a given number n (we know that n = p^a * q^b, for some prime numbers p,q and some integers a,b) and a given number φ(n) ( http://en.wikipedia.org/wiki/Euler%27s_totient_function ) find p,q,a and b.
The catch is that n and φ(n) have about 200 digits, so the algorithm has to be very fast.
It seems to be a very hard problem and I completely don't know how to use φ(n).
How to approach this?
For n = p^a * q^b, the totient is φ(n) = (p-1)*p^(a-1) * (q-1)*q^(b-1). Without loss of generality, p < q.
So gcd(n,φ(n)) = p^(a-1) * q^(b-1) if p does not divide q-1 and gcd(n,φ(n)) = p^a * q^(b-1) if p divides q-1.
In the first case, we have n/gcd(n,φ(n)) = p*q and φ(n)/gcd(n,φ(n)) = (p-1)*(q-1) = p*q + 1 - (p+q), thus you have x = p*q = n/gcd(n,φ(n)) and y = p+q = n/gcd(n,φ(n)) + 1 - φ(n)/gcd(n,φ(n)). Then finding p and q is simple: y^2 - 4*x = (q-p)^2, so q = (y + sqrt(y^2 - 4*x))/2, and p = y-q. Then finding the exponents a and b is trivial.
In the second case, n/gcd(n,φ(n)) = q. Then you can easily find the exponent b, dividing by q until the division leaves a remainder, and thus obtain p^a. Dividing φ(n) by (q-1)*q^(b-1) gives you z = (p-1)*p^(a-1). Then p^a - z = p^(a-1) and p = p^a/(p^a-z). Finding the exponent a is again trivial.
So it remains to decide which case you have. You have case 2 if and only if n/gcd(n,φ(n)) is a prime.
For that, you need a decent primality test. Or you can first suppose that you have case 1, and if that doesn't work out, conclude that you have case 2.
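A small Python sketch of case 1 above (the names are mine; this branch assumes p does not divide q − 1, and math.isqrt keeps the square root exact):

from math import gcd, isqrt

def factor_from_totient(n, phi):
    g = gcd(n, phi)
    x = n // g              # = p*q in case 1
    y = x + 1 - phi // g    # = p+q in case 1
    d = y * y - 4 * x       # = (q-p)^2
    s = isqrt(d)
    if s * s != d:
        raise ValueError("not case 1; n // gcd(n, phi(n)) is probably prime")
    q = (y + s) // 2
    p = y - q
    a = b = 0
    while n % p == 0:
        n //= p
        a += 1
    while n % q == 0:
        n //= q
        b += 1
    return p, a, q, b

print(factor_from_totient(1125, 600))   # (3, 2, 5, 3), since 1125 = 3^2 * 5^3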
Try working out what n / (n - φ(n)) is.
Follow up:
n / (n - φ(n)) = pq. You just keep dividing n by pq.

finding a^b^c^... mod m

I would like to calculate:
a^b^c^d^... mod m
Do you know any efficient way to compute this? The number itself is far too big, but a, b, c, ... and m each fit in a simple 32-bit int.
Any Ideas?
Caveat: This question is different from finding a^b mod m.
Also please note that a^(b^c) is not the same as (a^b)^c. The latter is equal to a^(b*c). Exponentiation is right-associative.
a^(b^c) mod m = a^(b^c mod n) mod m, where n = φ(m) is Euler's totient function.
If m is prime, then n = m-1.
Edit: as Nabb pointed out, this only holds if a is coprime to m. So you would have to check this first.
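A quick numeric check of this reduction (my example, coprime case, a = 2 and m = 7):

m = 7
n = m - 1                 # phi(7), since 7 is prime
e = pow(3, 4, n)          # 3^4 mod 6 = 3
print(pow(2, e, m))       # 1
print(pow(2, 3 ** 4, m))  # 1, the same value computed directly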
This answer does not contain a full formal mathematical proof of correctness; I assumed that it is unnecessary here, and it would be very illegible on SO anyway (no MathJax, for example).
I will use a (slightly) specific prime factorization algorithm. It's not the best option, but it is enough.
tl;dr
We want to calculate a^x mod m. We will use a function modpow, described below.
If x is small enough (not in exponential form, or p^x | m holds), just calculate it and return.
Split a into prime powers and calculate p^x mod m separately for each one, using the modpow function:
Calculate c' = gcd(p^x, m) and t' = totient(m/c')
Calculate w = modpow(x.base, x.exponent, t') + t'
Save pow(p, w - log_p c', m) * c' in a table A
Multiply all elements of A together and return the result modulo m.
Here pow should behave like Python's built-in pow.
Main problem:
Because the current best answer covers only the special case gcd(a, m) = 1, and the OP did not make this assumption in the question, I decided to write this answer. I will also use Euler's totient theorem. Quoting Wikipedia:
Euler's totient theorem:
If n and a are coprime positive integers, then a^φ(n) ≡ 1 (mod n),
where φ(n) is Euler's totient function.
The assumption that the numbers are co-prime is very important, as Nabb shows in a comment. So first we need to ensure that the numbers are co-prime. (For greater clarity assume x = b^(c^...).) Because a^x = (p1^alpha * p2^beta * ...)^x = (p1^alpha)^x * (p2^beta)^x * ..., we can factorize a and separately calculate q1 = (p1^alpha)^x mod m, q2 = (p2^beta)^x mod m, ... and then combine the answers easily (q1 * q2 * q3 * ... mod m). The number a has at most O(log a) prime factors, so we will be forced to perform at most O(log a) such calculations.
In fact we don't have to split into every prime factor of a (only if they occur in m with different exponents), and we can combine factors with the same exponent, but that is not important for now.
Now take a look at the (p^z)^x mod m problem, where p is prime. Note an important observation:
If a, b, c are positive integers and a ≡ b (mod m), then a*c ≡ b*c (mod m*c).
Using the above observation, we can derive a solution to the actual problem. We can easily calculate c = gcd((p^z)^x, m); notice (p^z)^x = p^(z*x), so if x*z is big, c is just p raised to the number of times m can be divided by p. Let m' = m / c. Now we can easily (see below) calculate w = p^(z*x − k) mod m', where p^k = c, using Euler's theorem, because p and m' are co-prime! Afterwards, using the above observation and the fact that m'*c = m, we get w*c mod m = p^(z*x) mod m, so the answer is p^(z*x) mod m = w*c, and both w and c are easy to calculate.
Therefore we can easily calculate a^x mod m.
Calculate a^x mod m using Euler's theorem
Now assume a, m are co-prime. If we want to calculate a^x mod m, we can calculate t = totient(m) and note that a^x mod m = a^(x mod t) mod m. This is helpful if x is big and we only know a specific expression for x, for example x = 7^200.
Look at the example x = b^c. We can calculate t = totient(m) and x' = b^c mod t using the exponentiation-by-squaring algorithm in Θ(log c) time, and afterwards (using the same algorithm) a^x' mod m, which equals the solution.
If x = b^(c^(d^...)) we solve it recursively: first calculate t1 = totient(m), then t2 = totient(t1), and so on. For example take x = b^(c^d). If t1 = totient(m), then a^x mod m = a^(b^(c^d) mod t1) mod m, and we can say b^(c^d) mod t1 = b^(c^d mod t2) mod t1, where t2 = totient(t1). Everything is calculated using the exponentiation-by-squaring algorithm.
Note: if at some level the base isn't co-prime to the current modulus, it is necessary to use the same trick as in the main problem (in fact, we should forget that it's an exponent and recursively solve that problem, like in the main problem). In the above example, if c isn't relatively prime to t2, we have to use this trick.
Calculate φ(n)
Notice simple facts:
if gcd(a,b)=1, then φ(ab) = φ(a)*φ(b)
if p is prime φ(p^k)=(p-1)*p^(k-1)
Therefore we can factorize n (i.e. n = p1^k1 * p2^k2 * ...) and separately calculate φ(p1^k1), φ(p2^k2), ... using fact 2, then combine them using fact 1: φ(n) = φ(p1^k1) * φ(p2^k2) * ...
It is worth remembering that if we calculate the totient repeatedly, we may want to use a Sieve of Eratosthenes and store the primes in a table; it will reduce the constant factor.
Python example (it is correct for the same reason as this factorization algorithm):
def totient(n):  # n - unsigned int
    result = 1
    p = 2  # prime numbers - 'iterator'
    while p**2 <= n:
        if n % p == 0:  # * (p-1)
            result *= (p - 1)
            n //= p
            while n % p == 0:  # * p^(k-1)
                result *= p
                n //= p
        p += 1
    if n != 1:
        result *= (n - 1)
    return result  # in O(sqrt(n))
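A quick check of the function above:

print(totient(100))   # 40, since phi(100) = phi(4) * phi(25) = 2 * 20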
Case: a^(b^c) mod m
Because it in fact amounts to doing the same thing many times, I believe this case will show you how to solve the problem in general.
First, we have to split a into prime powers. The best representation is a pair <number, exponent>.
C++11 example:
std::vector<std::tuple<unsigned, unsigned>> split(unsigned n) {
    std::vector<std::tuple<unsigned, unsigned>> result;
    for(unsigned p = 2; p*p <= n; ++p) {
        unsigned current = 0;
        while(n % p == 0) {
            current += 1;
            n /= p;
        }
        if(current != 0)
            result.emplace_back(p, current);
    }
    if(n != 1)
        result.emplace_back(n, 1);
    return result;
}
After the split, we have to calculate (p^z)^(b^c) mod m = p^(z*(b^c)) mod m for every pair. First we should check whether p^(z*(b^c)) | m. If yes, the answer is just (p^z)^(b^c), but that is only possible when z, b, c are very small. I believe I don't have to show a code example for it.
And finally, if p^(z*(b^c)) > m, we have to calculate the answer properly. First we calculate c' = gcd(m, p^(z*(b^c))) and m' = m / c'. Then we can calculate t = totient(m') and the reduced exponent (z*(b^c) − log_p c') mod t. That is an easy way to get the answer.
function modpow(p, z, b, c, m : integers)  # (p^z)^(b^c) mod m
    c' = 0
    m' = m
    while m' % p == 0:
        c' += 1
        m' /= p
    # now m' = m / gcd((p^z)^(b^c), m)
    t = totient(m')
    exponent = (z*(b^c) - c') mod t
    return p^c' * (p^exponent mod m')
And below, a working Python example:
def modpow(p, z, b, c, m):  # (p^z)^(b^c) mod m
    cp = 0
    while m % p == 0:
        cp += 1
        m //= p  # m = m' now
    t = totient(m)
    exponent = ((pow(b, c, t) * z) % t + t - (cp % t)) % t
    # exponent = (z*(b^c) - cp) mod t
    return pow(p, cp) * pow(p, exponent, m)
Using this function we can easily calculate (p^z)^(b^c) mod m; afterwards we just have to multiply all the results together (mod m). We can also do everything on the fly, as in the example below. (I hope I didn't make a mistake while writing it.) The only assumption is that b, c are big enough (b^c > log(m), i.e. each p^(z*(b^c)) doesn't divide m); it's a simple check and I don't see the point of cluttering the code with it.
def solve(a, b, c, m):  # split a and solve
    result = 1
    p = 2  # primes
    while p**2 <= a:
        z = 0
        while a % p == 0:
            # calculate z, the multiplicity of p in a
            a //= p
            z += 1
        if z != 0:
            result *= modpow(p, z, b, c, m)
            result %= m
        p += 1
    if a != 1:  # possible last prime
        result *= modpow(a, 1, b, c, m)
    return result % m
Looks like it works.
DEMO and it's correct!
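For instance, with the Python 3 fixes above (// for integer division), here is a quick check of solve against Python's built-in pow (my example, chosen so that b^c = 81 > log(m)):

print(solve(2, 3, 4, 100))   # 52, i.e. 2^(3^4) mod 100
print(pow(2, 3 ** 4, 100))   # 52, computed directly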
Since for any relationship a=x^y, the relationship is invariant with respect to the numeric base you are using (base 2, base 6, base 16, etc).
Since the mod N operation is equivalent to extracting the least significant digit (LSD) in base N
Since the LSD of the result A in base N can only be affected by the LSD of X in base N, and not digits in higher places. (e.g. 34*56 = 30*50 + 30*6 + 50*4 + 4*6 = 10*(3*50 + 3*6 + 5*4) + 4*6)
Therefore, from LSD(A)=LSD(X^Y) we can deduce
LSD(A)=LSD(LSD(X)^Y)
Therefore
A mod N = ((X mod N) ^ Y) mod N
and
(X ^ Y) mod N = ((X mod N) ^ Y) mod N
Therefore you can do the mod before each power step, which keeps your result in the range of integers.
This assumes X is not negative, and that for any x^y computed along the way, x^y < MAXINT.
This answer answers the wrong question. (alex)
Modular exponentiation is a correct way to solve this problem; here's a little hint:
To find a^b^c^d % m
You have to start with calculating
a % m, then a^b % m, then a^b^c % m and then a^b^c^d % m ... (you get the idea)
To find a^b % m, you basically need two ideas: [Let B = floor(b/2)]
a^b = (a^B)^2 if b is even OR a^b = (a^B)^2 * a if b is odd.
(X*Y)%m = ((X%m) * (Y%m)) % m
(% = mod)
Therefore,
if b is even
a^b % m = (a^B % m)^2 % m
or if b is odd
a^b % m = (((a^B % m)^2) * (a % m)) % m
So if you know the value of a^B % m, you can calculate this value.
To find a^B % m, apply the same approach, dividing B until you reach 1.
e.g. To calculate 16^13 % 11:
16^13 % 11 = (16 % 11)^13 % 11 = 5^13 % 11
= (5^6 % 11) * (5^6 % 11) * (5 % 11) <---- (I)
To find 5^6 % 11:
5^6 % 11 = ((5^3 % 11) * (5^3 % 11)) % 11 <---- (II)
To find 5^3 % 11:
5^3 % 11 = ((5^1 % 11) * (5^1 % 11) * (5 % 11)) % 11
= (((5 * 5) % 11) * 5) % 11 = ((25 % 11) * 5) % 11 = (3 * 5) % 11 = 15 % 11 = 4
Plugging this value into (II) gives
5^6 % 11 = ((4 * 4) % 11) % 11 = 16 % 11 = 5
Plugging this value into (I) gives
5^13 % 11 = ((5 % 11) * (5 % 11) * (5 % 11)) % 11 = (5 * 5 * 5) % 11 = 125 % 11 = 4
This way 5^13 % 11 = 4
With this you can calculate anything of the form a^5^13 % 11 and so on...
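The same squaring recursion in Python (power_mod is my name for it; Python's built-in pow(a, b, m) already does exactly this):

def power_mod(a, b, m):
    if b == 0:
        return 1 % m
    half = power_mod(a, b // 2, m)       # a^(b//2) % m
    if b % 2 == 0:
        return (half * half) % m
    return (half * half * (a % m)) % m

print(power_mod(5, 13, 11))   # 4, matching the worked example
print(pow(16, 13, 11))        # 4 as well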
Look at the behavior of A^X mod M as X increases. It must eventually go into a cycle. Suppose the cycle has length P and starts after N steps. Then X >= N implies A^X = A^(X+P) = A^(X%P + (-N)%P + N) (mod M). Therefore we can compute A^B^C by computing y=B^C, z = y < N ? y : y%P + (-N)%P + N, return A^z (mod m).
Notice that we can recursively apply this strategy up the power tree, because the derived equation either has an exponent < M or an exponent involving a smaller exponent tower with a smaller dividend.
The only question is if you can efficiently compute N and P given A and M. Notice that overestimating N is fine. We can just set N to M and things will work out. P is a bit harder. If A and M are different primes, then P=M-1. If A has all of M's prime factors, then we get stuck at 0 and P=1. I'll leave it as an exercise to figure that out, because I don't know how.
/// Returns equivalent to list.reverse().aggregate(1, (acc, item) => item^acc) % M
func PowerTowerMod(Link<int> list, int M, int upperB = M)
    requires M > 0, upperB >= M
    var X = list.Item
    if list.Next == null: return X
    var P = GetPeriodSomehow(base: X, mod: M)
    var e = PowerTowerMod(list.Next, P, M)
    if e^X < upperB then return e^X   // todo: rewrite e^X < upperB so it doesn't blow up for large X
    return ModPow(X, M + (e - M) % P, M)
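A brute-force Python sketch of this idea (all names are mine, not from the pseudocode above): it finds the preperiod N and period P of A^x mod M by direct search, so it is only practical for small M, but it shows the reduction at work.

def find_cycle(a, m):
    seen = {}             # value -> first exponent at which it appeared
    value, e = 1 % m, 0   # a^0 mod m
    while value not in seen:
        seen[value] = e
        value = (value * a) % m
        e += 1
    n = seen[value]       # preperiod length N
    p = e - n             # period length P
    return n, p

def capped_tower(vals, cap):
    # value of the tower vals[0]^(vals[1]^...), saturated at cap;
    # enough to tell whether the true value is below cap
    if not vals:
        return 1
    rest = capped_tower(vals[1:], cap)
    result = 1
    for _ in range(rest):
        result *= vals[0]
        if result >= cap:
            return cap
    return result

def tower_mod(vals, m):
    # vals[0]^(vals[1]^(vals[2]^...)) mod m
    if m == 1:
        return 0
    if len(vals) == 1:
        return vals[0] % m
    n, p = find_cycle(vals[0] % m, m)
    exponent = capped_tower(vals[1:], n)
    if exponent < n:                          # still in the preperiodic part
        return pow(vals[0], exponent, m)
    e = n + (tower_mod(vals[1:], p) - n) % p  # same residue mod p, and at least n
    return pow(vals[0], e, m)

print(tower_mod([2, 3, 4, 5], 100))   # 52, matching the worked example in the next answer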
Tacet's answer is good, but there are substantial simplifications possible.
The powers of x, mod m, are preperiodic. If x is relatively prime to m, the powers of x are periodic, but even without that assumption the part before the period is not long: at most the maximum of the exponents in the prime factorization of m, which is at most log_2 m. The length of the period divides phi(m), and in fact lambda(m), where lambda is Carmichael's function, the maximum multiplicative order mod m. This can be significantly smaller than phi(m). Lambda(m) can be computed quickly from the prime factorization of m, just as phi(m) can. Lambda(m) is the LCM of lambda(p_i^e_i) over all prime powers p_i^e_i in the prime factorization of m; for odd prime powers, lambda(p_i^e_i) = phi(p_i^e_i), while lambda(2) = 1, lambda(4) = 2, and lambda(2^n) = 2^(n-2) for larger powers of 2.
Define modPos(a, n) to be the representative of the congruence class of a in {0, 1, ..., n-1}. For nonnegative a, this is just a % n. For negative a, many languages define a % n to be negative, so modPos(a, n) is then (a % n) + n.
Define modMin(a, n, min) to be the least positive integer congruent to a mod n that is at least min. For positive a, you can compute this as min + modPos(a - min, n).
If b^c^... is smaller than log_2 m (and we can check whether this inequality holds by recursively taking logarithms), then we can simply compute a^b^c^... Otherwise, a^b^c^... mod m = a^modMin(b^c^..., lambda(m), [log_2 m]) mod m = a^modMin(b^c^... mod lambda(m), lambda(m), [log_2 m]) mod m.
For example, suppose we want to compute 2^3^4^5 mod 100. Note that 3^4^5 only has 489 digits, so this is doable by other methods, but it's big enough that you don't want to compute it directly. However, by the methods I gave here, you can compute 2^3^4^5 mod 100 by hand.
Since 3^4^5 > log_2 100,
2^3^4^5 mod 100
= 2^modMin(3^4^5,lambda(100),6) mod 100
= 2^modMin(3^4^5 mod lambda(100), lambda(100),6) mod 100
= 2^modMin(3^4^5 mod 20, 20,6) mod 100.
Let's compute 3^4^5 mod 20. Since 4^5 > log_2 20,
3^4^5 mod 20
= 3^modMin(4^5,lambda(20),4) mod 20
= 3^modMin(4^5 mod lambda(20),lambda(20),4) mod 20
= 3^modMin(4^5 mod 4, 4, 4) mod 20
= 3^modMin(0,4,4) mod 20
= 3^4 mod 20
= 81 mod 20
= 1
We can plug this into the previous calculation:
2^3^4^5 mod 100
= 2^modMin(3^4^5 mod 20, 20,6) mod 100
= 2^modMin(1,20,6) mod 100
= 2^21 mod 100
= 2097152 mod 100
= 52.
Note that 2^(3^4^5 mod 20) mod 100 = 2^1 mod 100 = 2, which is not correct. You can't reduce down to the preperiodic part of the powers of the base.
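For reference, modPos and modMin are tiny in Python (my sketch; note Python's % is already nonnegative for a positive modulus), and they reproduce the final step of the worked example:

def mod_pos(a, n):
    return a % n            # already in {0, ..., n-1} in Python

def mod_min(a, n, lo):
    # least integer >= lo that is congruent to a mod n
    return lo + mod_pos(a - lo, n)

print(mod_min(1, 20, 6))    # 21, as in the 2^3^4^5 example
print(pow(2, 21, 100))      # 52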
