Modular arithmetic. How to solve the following equation? - number-theory

How do I solve the following equation? I am interested in the methods of solution.
n^3 mod P = (n+1)^3 mod P
P - a prime number
A short example with the answer follows. Could you give a step-by-step solution for my example?
n^3 mod 61 = (n + 1)^3 mod 61
Integer solutions:
n = 61 m + 4,
n = 61 m + 56,
m ∈ Z
Z - the set of integers.

Another way to state n^3 ≡ (n+1)^3 is n^3 ≡ n^3 + 3 n^2 + 3 n + 1 (just work out the cube of n+1); the cubic terms then cancel out to give a nicer quadratic: 3 n^2 + 3 n + 1 ≡ 0.
Then the usual quadratic formula applies, though all of its operations are now modulo P, and the discriminant is not always a quadratic residue, in which case there are no solutions to the original equation (this happens about half the time). This involves finding a square root modulo a prime, which is not hard for a computer to do, for example with the Tonelli–Shanks algorithm, though it is not trivial to implement.
By the way 3 n^2 + 3 n + 1 = 0 has the property that if n is a solution, then -n - 1 is too.
For example, with some Python, once all the support functions exist it is pretty simple:
def solve(p):
    # solve 3 n^2 + 3 n + 1 ≡ 0
    D = -3 % p
    sqrtD = modular_sqrt(D, p)
    if sqrtD == 0:
        return None
    else:
        n = (sqrtD - 3) * inverse(6, p) % p
        return (n, -(n+1) % p)
Inverse modulo a prime is really easy,
def inverse(x, p):
    return pow(x, p - 2, p)
I adapted this implementation of Tonelli–Shanks to Python 3 (// instead of / for integer division):
def modular_sqrt(a, p):
    """ Find a quadratic residue (mod p) of 'a'. p
        must be an odd prime.

        Solve the congruence of the form:
            x^2 = a (mod p)
        And returns x. Note that p - x is also a root.

        0 is returned if no square root exists for
        these a and p.

        The Tonelli-Shanks algorithm is used (except
        for some simple cases in which the solution
        is known from an identity). This algorithm
        runs in polynomial time (unless the
        generalized Riemann hypothesis is false).
    """
    # Simple cases
    #
    if legendre_symbol(a, p) != 1:
        return 0
    elif a == 0:
        return 0
    elif p == 2:
        return 0
    elif p % 4 == 3:
        return pow(a, (p + 1) // 4, p)

    # Partition p-1 to s * 2^e for an odd s (i.e.
    # reduce all the powers of 2 from p-1)
    #
    s = p - 1
    e = 0
    while s % 2 == 0:
        s //= 2
        e += 1

    # Find some 'n' with a legendre symbol n|p = -1.
    # Shouldn't take long.
    #
    n = 2
    while legendre_symbol(n, p) != -1:
        n += 1

    # Here be dragons!
    # Read the paper "Square roots from 1; 24, 51,
    # 10 to Dan Shanks" by Ezra Brown for more
    # information
    #

    # x is a guess of the square root that gets better
    # with each iteration.
    # b is the "fudge factor" - by how much we're off
    # with the guess. The invariant x^2 = ab (mod p)
    # is maintained throughout the loop.
    # g is used for successive powers of n to update
    # both a and b
    # r is the exponent - decreases with each update
    #
    x = pow(a, (s + 1) // 2, p)
    b = pow(a, s, p)
    g = pow(n, s, p)
    r = e

    while True:
        t = b
        m = 0
        for m in range(r):
            if t == 1:
                break
            t = pow(t, 2, p)

        if m == 0:
            return x

        gs = pow(g, 2 ** (r - m - 1), p)
        g = (gs * gs) % p
        x = (x * gs) % p
        b = (b * g) % p
        r = m


def legendre_symbol(a, p):
    """ Compute the Legendre symbol a|p using
        Euler's criterion. p is a prime, a is
        relatively prime to p (if p divides
        a, then a|p = 0)

        Returns 1 if a has a square root modulo
        p, -1 otherwise.
    """
    ls = pow(a, (p - 1) // 2, p)
    return -1 if ls == p - 1 else ls
You can see some results on ideone
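For instance, a quick check against the question's modulus (assuming the three functions above are defined in the same file):

print(solve(61))   # (4, 56) or (56, 4): the residues 4 and 56 match the question's answer
print(solve(5))    # None: -3 is not a quadratic residue mod 5, so there is no solution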

Related

Can I ignore the last k while expanding (a + b) % k?

Today I was trying to solve a problem that involved modular arithmetic. I was not able to solve it. So I looked it up on Geeks for Geeks
The image in that article shows what the author did. I know modular addition for two numbers:
(a + b) % m = (a % m + b % m) % m
This works for any positive values of a and b
Now consider the equation the author used in the image:
a % k + b % k = 0
I substituted some random values for a , b and k to see if it really works. It turns out it fails for the input values a = 2, b = 5 and k = 7.
2 % 7 + 5 % 7 = 7 ≠ 0
When I considered the last equation, it worked:
b % k = (k - a % k) % k
(5 % 7) = (7 - 2 % 7) % 7
5 % 7 = 5 % 7
(a + b) % k = c
When I solved the above equation with the same idea as the author, I got
(a + b) % k = c
a % k + b % k = c
b % k = (c - a % k + k) % k
It works for any positive values of a, b, c and k
In the equation,
(a + b) % k = (a % k + b % k) % k
Can I just ignore the last k and proceed while expanding (a + b) % k? I wonder how the absence of the last k doesn't affect the final result.
No, a = b = 0 is a counterexample.
Indeed, the final formula is incorrect, assuming that % denotes the remainder of truncating division. Let a = 1 and b = -1. (In Python, or for nonnegative integers, it's OK.)
This is why mathematicians prefer to deal in equivalence mod K, which avoids the issue of where to put the mod operator.
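A quick way to see the mismatch described above (a small Python sketch; math.fmod follows C's truncating convention, while Python's % is floored):

import math

def trunc_mod(a, k):
    # C-style remainder: truncated toward zero, so it can be negative
    return int(math.fmod(a, k))

a, b, k = 1, -1, 7
c = (a + b) % k                                          # 0
print(b % k, (c - a % k + k) % k)                        # 6 6   -> the formula holds with floored %
print(trunc_mod(b, k), (c - trunc_mod(a, k) + k) % k)    # -1 6  -> the formula fails with truncating %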

O(log n) solution to 1a + 2a^2 + 3a^3 + ... + na^n

The task is to find the sum of the series given n and a. So for the series 1a + 2a^2 + 3a^3 + ... + na^n, we can find the n-th element with the following formula (from observation):
n-th element = a^n * ((n-(n-2))/(n-(n-1))) * ((n-(n-3))/(n-(n-2))) * ... * (n/(n-1))
I think that it's impossible to simplify the sum of n elements by modifying the above formula into a sum formula. Even if it is possible, I assume that it will involve the exponent n, which will introduce an n-iteration loop, thus causing the solution to not be O(log n). The best solution I can get is simply to find the ratio of consecutive elements, which is a(n+1)/n, and apply that to the (n-1)-th element to find the n-th element.
I think that I may be missing something. Could someone provide me with solution(s)?
You can solve this problem, and lots of problems like it, with matrix exponentiation.
Let's start with this sequence:
A[n] = a + a^2 + a^3 ... + a^n
That sequence can be generated with a simple formula:
A[i] = a*(A[i-1] + 1)
Now if we consider your sequence:
B[n] = a + 2a^2 + 3a^3 ... + na^n
We can generate that with a formula that makes use of the previous one:
B[i] = (B[i-1] + A[i-1] + 1) * a
If we make a sequence of vectors containing all the components we need:
V[n] = (B[n], A[n], 1)
Then we can construct a matrix M so that:
V[i] = M*V[i-1]
And so:
V[n] = (M^(n-1))V[1]
Since the size of the matrix is fixed at 3x3, you can use exponentiation by squaring on the matrix itself to calculate M^(n-1) in O(log n) time, and the final multiplication takes constant time.
Here's an implementation in python with numpy (so I don't have to include matrix multiply code):
import numpy as np
def getSum(a, n):
    # A[n] = a + a^2 + a^3 + ... + a^n
    # B[n] = a + 2a^2 + 3a^3 + ... + na^n
    # V[n] = [B[n], A[n], 1]
    M = np.matrix([
        [a, a, a],  # B[i] = B[i-1]*a + A[i-1]*a + a
        [0, a, a],  # A[i] = A[i-1]*a + a
        [0, 0, 1]
    ])
    # calculate MsupN = M^(n-1) by exponentiation by squaring
    n -= 1
    MsupN = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
    while n > 0:
        if n % 2 > 0:
            MsupN *= M
            n -= 1
        M *= M
        n //= 2
    # calculate V[n] = MsupN * V[1]
    Vn = MsupN * np.matrix([a, a, 1]).T
    return Vn.item(0, 0)
I assume a, n are nonnegative integers. The explicit formula for a > 1 is
a * (n * a^{n + 1} - (n + 1) * a^n + 1) / (a - 1)^2
It can be evaluated efficiently in O(log(n)) using square-and-multiply for a^n.
To derive the formula, you could use the following ingredients:
- the explicit formula for the geometric series,
- the observation that this polynomial looks almost like a derivative of a geometric series,
- the Gaussian sum formula for the special case a = 1.
Now you can simply calculate:
sum_{i = 1}^n i * a^i // [0] ugly sum
= a * sum_{i = 1}^n i * a^{i-1} // [1] linearity
= a * d/da (sum_{i = 1}^n a^i) // [2] antiderivative
= a * d/da (sum_{i = 0}^n a^i - 1) // [3] + 1 - 1
= a * d/da ((a^{n + 1} - 1) / (a - 1) - 1) // [4] geom. series
= a * ((n + 1)*a^n / (a - 1) - (a^{n+1} - 1)/(a - 1)^2) // [5] derivative
= a * (n * a^{n + 1} - (n + 1)a^n + 1) / (a - 1)^2 // [6] explicit formula
This is just a simple arithmetic expression with a^n, which can be evaluated in O(log(n)) time using square-and-multiply.
This doesn't work for a = 0 or a = 1, so you have to treat those cases specially: for a = 0 you just return 0 immediately, for a = 1, you return n * (n + 1) / 2.
Scala snippet to test the formula:
def fast(a: Int, n: Int): Int = {
  def pow(a: Int, n: Int): Int =
    if (n == 0) 1
    else if (n == 1) a
    else {
      val r = pow(a, n / 2)
      if (n % 2 == 0) r * r else r * r * a
    }

  if (a == 0) 0
  else if (a == 1) n * (n + 1) / 2
  else {
    val aPowN = pow(a, n)
    val d = a - 1
    a * (n * aPowN * a - (n + 1) * aPowN + 1) / (d * d)
  }
}
Slower, but simpler version, for comparison:
def slow(a: Int, n: Int): Int = {
  def slowPow(a: Int, n: Int): Int = if (n == 0) 1 else slowPow(a, n - 1) * a
  (1 to n).map(i => i * slowPow(a, i)).sum
}
Comparison:
for (a <- 0 to 5; n <- 0 to 5) {
  println(s"${slow(a, n)} <-> ${fast(a, n)}")
}
Output:
0 <-> 0
0 <-> 0
0 <-> 0
0 <-> 0
0 <-> 0
0 <-> 0
0 <-> 0
1 <-> 1
3 <-> 3
6 <-> 6
10 <-> 10
15 <-> 15
0 <-> 0
2 <-> 2
10 <-> 10
34 <-> 34
98 <-> 98
258 <-> 258
0 <-> 0
3 <-> 3
21 <-> 21
102 <-> 102
426 <-> 426
1641 <-> 1641
0 <-> 0
4 <-> 4
36 <-> 36
228 <-> 228
1252 <-> 1252
6372 <-> 6372
0 <-> 0
5 <-> 5
55 <-> 55
430 <-> 430
2930 <-> 2930
18555 <-> 18555
So, yes, the O(log(n)) formula gives the same numbers as the O(n^2) formula.
a^n can be indeed computed in O(log n).
The method is called Exponentiation by squaring and the main idea is that if you know a^n you also know a^(2*n) which is just a^n * a^n.
So if you want to compute a^n (if n is even) you can just compute a^(n/2) and multiply the result with itself: a^n = a^(n/2) * a^(n/2). So instead of having a loop up to n, now you only have a loop up to n/2. But n/2 is just another number, and can be computed the same way, thus doing only half the operations. Halving the number of operations each time leads to the logarithmic complexity.
As mentioned by Sopel in the comments, the series can be written as a simple closed-form function:
f(a, n) = a * (n * a^(n+1) - (n+1) * a^n + 1) / (a - 1)^2
So to find the answer you only have to compute the above formula, using the fast exponentiation described above to do it in O(logN) complexity.
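As a minimal sketch of the whole approach (my own illustration, not code from the answers above): fast_pow is plain square-and-multiply, and closed_sum plugs its result into the closed-form formula, with the a = 0 and a = 1 special cases handled separately.

def fast_pow(a, n):
    # square-and-multiply: O(log n) multiplications
    result = 1
    while n > 0:
        if n % 2 == 1:
            result *= a
        a *= a
        n //= 2
    return result

def closed_sum(a, n):
    # sum of i * a^i for i = 1..n via the closed-form formula
    if a == 0:
        return 0
    if a == 1:
        return n * (n + 1) // 2
    a_n = fast_pow(a, n)
    return a * (n * a_n * a - (n + 1) * a_n + 1) // ((a - 1) ** 2)

print(closed_sum(3, 5))                     # 1641, matching the table above
print(sum(i * 3**i for i in range(1, 6)))   # 1641, brute-force check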

How to find the number of solutions of modular equation?

Find the number of solutions of x^2 ≡ x (mod m), where m = p*q.
Let p and q be primes.
You can break a modular equation into separate equations if the factors are coprime.
This means that x^2 ≡ x (mod m) is equivalent to x^2 ≡ x (mod p) and x^2 ≡ x (mod q).
Each of these can be factorized as x(x-1)=0 => x=0 or x=1.
So you know that x is 0 or 1 modulo p, and x is 0 or 1 modulo q. Each choice has 1 solution modulo m by the chinese remainder theorem so there will be 4 solutions.
2 are easy (x=0 and x=1). The other two can be found with the extended Euclidean algorithm as follows:
def egcd(a, b):
    x, y = 0, 1
    lx, ly = 1, 0
    while b != 0:
        q = a / b
        (a, b) = (b, a % b)
        (lx, x) = (x, lx - q*x)
        (ly, y) = (y, ly - q*y)
    return (lx, ly)
p=7
q=11
m=p*q
(lx, ly) = egcd(p,q)
print lx*p%m,ly*q%m
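A quick brute-force check for p = 7, q = 11 (a small sketch, independent of the code above):

m = 7 * 11
print([x for x in range(m) if (x * x - x) % m == 0])   # [0, 1, 22, 56]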

General formula for a recurrence relation?

I was solving a coding question, and found out the following relation to find the number of possible arrangements:
one[1] = two[1] = three[1] = 1
one[i] = two[i-1] + three[i-1]
two[i] = one[i-1] + three[i-1]
three[i] = one[i-1] + two[i-1] + three[i-1]
I could have easily used a for loop to find out the values of the individual arrays till n, but the value of n is of the order 10^9, and I won't be able to iterate from 1 to such a huge number.
For every value of n, I need to output the value of (one[n] + two[n] + three[n]) % (10^9 + 7) in O(1) time.
Some results:
For n = 1, result = 3
For n = 2, result = 7
For n = 3, result = 17
For n = 4, result = 41
I was not able to find a general formula for n for the above after spending hours on it. Can someone help me out?
Edit:
n = 1, result(1) = 3
n = 2, result(2) = 7
n = 3, result(3) = result(2)*2 + result(1) = 17
n = 4, result(4) = result(3)*2 + result(2) = 41
So, result(n) = result(n-1)*2 + result(n-2) OR
T(n) = 2T(n-1) + T(n-2)
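A quick brute-force check (my own sketch, not part of the original question) that the reduced recurrence reproduces the values above:

def brute(n):
    one = two = three = 1
    for _ in range(n - 1):
        one, two, three = two + three, one + three, one + two + three
    return one + two + three

def reduced(n):
    # T(n) = 2*T(n-1) + T(n-2), with T(1) = 3 and T(2) = 7
    a, b = 3, 7
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, 2 * b + a
    return b

print([brute(n) for n in range(1, 6)])     # [3, 7, 17, 41, 99]
print([reduced(n) for n in range(1, 6)])   # [3, 7, 17, 41, 99]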
You can use a matrix to represent the recurrence relation. (I've renamed one, two, three to a, b, c).
(a[n+1]) = ( 0 1 1 ) (a[n])
(b[n+1]) ( 1 0 1 ) (b[n])
(c[n+1]) ( 1 1 1 ) (c[n])
With this representation, it's feasible to compute values for large n by matrix exponentiation (modulo your large number), using exponentiation by squaring. That'll give you the result in O(log n) time.
(a[n]) = ( 0 1 1 )^(n-1) (1)
(b[n]) ( 1 0 1 ) (1)
(c[n]) ( 1 1 1 ) (1)
Here's some Python that implements this all from scratch:
# compute a*b mod K where a and b are square matrices of the same size
def mmul(a, b, K):
    n = len(a)
    return [
        [sum(a[i][k] * b[k][j] for k in xrange(n)) % K for j in xrange(n)]
        for i in xrange(n)]

# compute a^n mod K where a is a square matrix
def mpow(a, n, K):
    if n == 0: return [[i == j for i in xrange(len(a))] for j in xrange(len(a))]
    if n % 2: return mmul(mpow(a, n-1, K), a, K)
    a2 = mpow(a, n//2, K)
    return mmul(a2, a2, K)

M = [[0, 1, 1], [1, 0, 1], [1, 1, 1]]

def f(n):
    K = 10**9+7
    return sum(sum(a) for a in mpow(M, n-1, K)) % K

print f(1), f(2), f(3), f(4)
print f(10 ** 9)
print f(10 ** 9)
Output:
3 7 17 41
999999966
It runs effectively instantly, even for the n=10**9 case.

finding a^b^c^... mod m

I would like to calculate:
a^b^c^... mod m
Do you know any efficient way to do this? The number is too big, but a, b, c, ... and m each fit in a simple 32-bit int.
Any ideas?
Caveat: This question is different from finding a^b mod m.
Also please note that a^(b^c) is not the same as (a^b)^c. The latter is equal to a^(b*c). Exponentiation is right-associative.
a^(b^c) mod m = a^(b^c mod n) mod m, where n = φ(m) is Euler's totient function.
If m is prime, then n = m-1.
Edit: as Nabb pointed out, this only holds if a is coprime to m. So you would have to check this first.
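A quick numeric check of both the identity and the coprimality caveat (a small sketch using Python's built-in pow):

from math import gcd

a, b, c, m, phi = 3, 7, 2, 10, 4                     # phi = φ(10)
assert gcd(a, m) == 1
print(pow(a, b**c, m), pow(a, pow(b, c, phi), m))    # 3 3  -> the identity holds

a, m, phi = 2, 4, 2                                  # gcd(2, 4) != 1
print(pow(a, 2, m), pow(a, 2 % phi, m))              # 0 1  -> naive exponent reduction fails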
This answer does not contain a full formal proof of correctness; I assumed it is unnecessary here. Besides, it would be very hard to read on SO (no MathJax, for example).
I will use a (slightly) specific prime factorization algorithm. It's not the best option, but it is enough.
tl;dr
We want to calculate a^x mod m. We will use the function modpow(a, x, m) described below.
- If x is small enough (not in exponential form, or p^x | m), just calculate it and return.
- Split a into primes and calculate p^x mod m separately for each prime, using the modpow function:
  - calculate c' = gcd(p^x, m) and t' = totient(m/c'),
  - calculate w = modpow(x.base, x.exponent, t') + t',
  - save pow(p, w - log_p c', m) * c' in a table A.
- Multiply all elements from A and return the result modulo m.
Here pow should behave like Python's built-in pow.
Main problem:
Because the current best answer covers only the special case gcd(a, m) = 1, and the OP did not state this assumption in the question, I decided to write this answer. I will also use Euler's totient theorem. Quoting Wikipedia:
Euler's totient theorem:
If n and a are coprime positive integers, then a^φ(n) ≡ 1 (mod n),
where φ(n) is Euler's totient function.
The assumption that the numbers are coprime is very important, as Nabb shows in a comment. So, first we need to ensure that the numbers are coprime. (For greater clarity assume x = b^(c^...).) Because a = p1^α * p2^β * ... (its prime factorization), we can factorize a and separately calculate q1 = (p1^α)^x mod m, q2 = (p2^β)^x mod m, ..., and then calculate the answer in an easy way (q1 * q2 * q3 * ... mod m). A number has at most O(log a) prime factors, so we will have to perform at most O(log a) such calculations.
In fact we don't have to split into every prime factor of a (if not all of them occur in m), and we can combine factors with the same exponent, but that is not important for now.
Now take a look at the (p^z)^x mod m problem, where p is prime. Note one important observation:
If a, b are positive integers smaller than m, c is some positive integer, and a ≡ b (mod m), then a*c ≡ b*c (mod m*c).
Using the above observation, we can obtain a solution for the actual problem. We can easily calculate gcd((p^z)^x, m); if x*z is big, it is p raised to the number of times we can divide m by p. Let m' = m / gcd((p^z)^x, m). (Notice (p^z)^x = p^(z*x).) Let c = gcd(p^(z*x), m). Now we can easily (see below) calculate w = p^(z*x)/c mod m' using Euler's theorem, because these numbers are coprime! Afterwards, using the above observation, we can recover p^(z*x) mod m: from the observation, w*c mod m'*c = p^(z*x) mod m, so the answer for now is p^(z*x) mod m = w*c, and w, c are easy to calculate.
Therefore we can easily calculate a^x mod m.
Calculate a^x mod m using Euler's theorem
Now assume a, m are coprime. If we want to calculate a^x mod m, we can calculate t = totient(m) and notice that a^x mod m = a^(x mod t) mod m. This is helpful if x is big and we only know a specific expression for x, for example x = 7^200.
Look at the example x = b^c. We can calculate t = totient(m) and x' = b^c mod t using the exponentiation-by-squaring algorithm in Θ(log c) time, and then (using the same algorithm) a^x' mod m, which is equal to the solution.
If x = b^(c^(d^...)) we solve it recursively. First calculate t1 = totient(m), then t2 = totient(t1), and so on. For example, take x = b^(c^d). If t1 = totient(m), then a^x mod m = a^(b^(c^d) mod t1) mod m, and we can say b^(c^d) mod t1 = b^(c^d mod t2) mod t1, where t2 = totient(t1). Everything is calculated using the exponentiation-by-squaring algorithm.
Note: if at some level the base isn't coprime to the corresponding totient modulus, it is necessary to use the same trick as in the main problem (in fact, we should forget that it's an exponent and recursively solve the problem, like in the main problem). In the above example, if c isn't relatively prime to t2, we have to use this trick.
Calculate φ(n)
Notice simple facts:
if gcd(a,b)=1, then φ(ab) = φ(a)*φ(b)
if p is prime φ(p^k)=(p-1)*p^(k-1)
Therefore we can factorize n (i.e. n = p1^k1 * p2^k2 * ...) and separately calculate φ(p1^k1), φ(p2^k2), ... using fact 2, then combine these using fact 1: φ(n) = φ(p1^k1) * φ(p2^k2) * ...
It is worth remembering that, if we will be calculating the totient repeatedly, we may want to use a Sieve of Eratosthenes and store the primes in a table. It will reduce the constant factor.
Python example (it is correct for the same reason as this factorization algorithm):
def totient(n):  # n - unsigned int
    result = 1
    p = 2  # prime numbers - 'iterator'
    while p**2 <= n:
        if n % p == 0:  # * (p-1)
            result *= (p - 1)
            n //= p
            while n % p == 0:  # * p^(k-1)
                result *= p
                n //= p
        p += 1
    if n != 1:
        result *= (n - 1)
    return result  # runs in O(sqrt(n))
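For example (assuming the totient function above):

print(totient(61))    # 60, since 61 is prime
print(totient(100))   # 40 = 100 * (1 - 1/2) * (1 - 1/5)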
Case: a^(b^c) mod m
Because it is in fact doing the same thing many times, I believe this case will show you how to solve the problem in general.
First, we have to split a into prime powers. The best representation will be a pair <number, exponent>.
C++11 example:
#include <tuple>
#include <vector>

std::vector<std::tuple<unsigned, unsigned>> split(unsigned n) {
    std::vector<std::tuple<unsigned, unsigned>> result;
    for(unsigned p = 2; p*p <= n; ++p) {
        unsigned current = 0;
        while(n % p == 0) {
            current += 1;
            n /= p;
        }
        if(current != 0)
            result.emplace_back(p, current);
    }
    if(n != 1)
        result.emplace_back(n, 1);
    return result;
}
After the split, we have to calculate (p^z)^(b^c) mod m = p^(z*(b^c)) mod m for every pair. First we should check whether p^(z*(b^c)) | m. If yes, the answer is just (p^z)^(b^c), but this is possible only when z, b, c are very small. I believe I don't have to show example code for it.
And finally, if p^(z*b^c) > m, we have to calculate the answer. First, we have to calculate c' = gcd(m, p^(z*b^c)). After that we are able to calculate t = totient(m') and (z*b^c - c') mod t. This is an easy way to get the answer.
function modpow(p, z, b, c, m : integers)   # (p^z)^(b^c) mod m
    c' = 0
    m' = m
    while m' % p == 0 :
        c' += 1
        m' /= p
    # now m' = m / gcd((p^z)^(b^c), m)
    t = totient(m')
    exponent = (z*(b^c) - c') mod t
    return p^c' * (p^exponent mod m')
And below, a working Python example:
def modpow(p, z, b, c, m):  # (p^z)^(b^c) mod m
    cp = 0
    while m % p == 0:
        cp += 1
        m //= p  # m = m' now
    t = totient(m)
    exponent = ((pow(b, c, t) * z) % t + t - (cp % t)) % t
    # exponent = (z*(b^c) - cp) mod t
    return pow(p, cp) * pow(p, exponent, m)
Using this function we can easily calculate (p^z)^(b^c) mod m; afterwards we just have to multiply all the results (mod m). We can also compute everything as we go, as in the example below. (I hope I didn't make a mistake while writing it.) The only assumption is that b, c are big enough (b^c > log(m), i.e. no p^(z*b^c) divides m); it's a simple check and I see no point in cluttering the code with it.
def solve(a, b, c, m):  # split a and solve
    result = 1
    p = 2  # primes
    while p**2 <= a:
        z = 0
        while a % p == 0:
            # calculate z
            a //= p
            z += 1
        if z != 0:
            result *= modpow(p, z, b, c, m)
            result %= m
        p += 1
    if a != 1:  # possible last prime
        result *= modpow(a, 1, b, c, m)
    return result % m
Looks like it works.
DEMO and it's correct!
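A quick sanity check (a small sketch; the inputs are chosen so the assumptions above hold, and the results are compared against Python's built-in three-argument pow):

print(solve(12, 3, 2, 70) == pow(12, 3**2, 70))     # True (both are 62), even though gcd(12, 70) != 1
print(solve(10, 3, 4, 200) == pow(10, 3**4, 200))   # True (both are 0)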
Since for any relationship a=x^y, the relationship is invariant with respect to the numeric base you are using (base 2, base 6, base 16, etc).
Since the mod N operation is equivalent to extracting the least significant digit (LSD) in base N
Since the LSD of the result A in base N can only be affected by the LSD of X in base N, and not by digits in higher places. (e.g. 34*56 = 30*50 + 30*6 + 50*4 + 4*6 = 10*(3*50 + 3*6 + 5*4) + 4*6)
Therefore, from LSD(A)=LSD(X^Y) we can deduce
LSD(A)=LSD(LSD(X)^Y)
Therefore
A mod N = ((X mod N) ^ Y) mod N
and
(X ^ Y) mod N = ((X mod N) ^ Y) mod N
Therefore you can do the mod before each power step, which keeps your result in the range of integers.
This assumes X is not negative, and that each intermediate power stays below MAXINT.
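For example, a one-line check with Python's built-in pow:

print(pow(123456, 789, 1000) == pow(123456 % 1000, 789, 1000))   # True: reducing the base first changes nothing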
This answer answers the wrong question. (alex)
Modular exponentiation is a correct way to solve this problem; here's a little bit of a hint:
To find a^b^c^d % m
you have to start with calculating
a % m, then a^b % m, then a^(b^c) % m and then a^(b^(c^d)) % m ... (you get the idea)
To find a^b % m, you basically need two ideas: [Let B = floor(b/2)]
a^b = (a^B)^2 if b is even OR a^b = (a^B)^2 * a if b is odd.
(X*Y)%m = ((X%m) * (Y%m)) % m
(% = mod)
Therefore,
if b is even
a^b % m = ((a^B % m)^2) % m
or if b is odd
a^b % m = (((a^B % m)^2) * (a % m)) % m
So if you know the value of a^B, you can calculate this value.
To find a^B, apply a similar approach, dividing B until you reach 1.
e.g. To calculate 16^13 % 11:
16^13 % 11 = ((16 % 11)^13) % 11 = 5^13 % 11
= ((5^6 % 11) * (5^6 % 11) * (5 % 11)) % 11 <---- (I)
To find 5^6 % 11:
5^6 % 11 = ((5^3 % 11) * (5^3 % 11)) % 11 <---- (II)
To find 5^3 % 11:
5^3 % 11 = ((5^1 % 11) * (5^1 % 11) * (5 % 11)) % 11
= (((5 * 5) % 11) * 5) % 11 = ((25 % 11) * 5) % 11 = (3 * 5) % 11 = 15 % 11 = 4
Plugging this value into (II) gives
5^6 % 11 = ((4 * 4) % 11) = 16 % 11 = 5
Plugging this value into (I) gives
5^13 % 11 = ((5 % 11) * (5 % 11) * (5 % 11)) % 11 = (5 * 5 * 5) % 11 = 125 % 11 = 4
This way 5^13 % 11 = 4
With this you can calculate anything of the form a^(5^13) % 11 and so on...
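A minimal recursive sketch of the idea above (my own illustration; Python's built-in three-argument pow does the same job):

def modexp(a, b, m):
    # compute a^b % m by repeatedly halving the exponent
    if b == 0:
        return 1 % m
    half = modexp(a, b // 2, m)
    if b % 2 == 0:
        return (half * half) % m
    return (half * half * a) % m

print(modexp(16, 13, 11))   # 4, matching the worked example
print(pow(16, 13, 11))      # 4, the built-in agrees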
Look at the behavior of A^X mod M as X increases. It must eventually go into a cycle. Suppose the cycle has length P and starts after N steps. Then X >= N implies A^X = A^(X+P) = A^(X%P + (-N)%P + N) (mod M). Therefore we can compute A^B^C by computing y=B^C, z = y < N ? y : y%P + (-N)%P + N, return A^z (mod m).
Notice that we can recursively apply this strategy up the power tree, because the derived equation either has an exponent < M or an exponent involving a smaller exponent tower with a smaller dividend.
The only question is if you can efficiently compute N and P given A and M. Notice that overestimating N is fine. We can just set N to M and things will work out. P is a bit harder. If A and M are different primes, then P=M-1. If A has all of M's prime factors, then we get stuck at 0 and P=1. I'll leave it as an exercise to figure that out, because I don't know how.
/// Returns equivalent to list.reverse().aggregate(1, acc, item => item^acc) % M
func PowerTowerMod(Link<int> list, int M, int upperB = M)
    requires M > 0, upperB >= M
    var X = list.Item
    if list.Next == null: return X
    var P = GetPeriodSomehow(base: X, mod: M)
    var e = PowerTowerMod(list.Next, P, M)
    if e^X < upperB then return e^X  // todo: rewrite e^X < upperB so it doesn't blow up for large X
    return ModPow(X, M + (e-M) % P, M)
Tacet's answer is good, but there are substantial simplifications possible.
The powers of x, mod m, are preperiodic. If x is relatively prime to m, the powers of x are periodic, but even without that assumption, the part before the period is not long: at most the maximum of the exponents in the prime factorization of m, which is at most log_2 m. The length of the period divides phi(m), and in fact lambda(m), where lambda is Carmichael's function, the maximum multiplicative order mod m. This can be significantly smaller than phi(m). Lambda(m) can be computed quickly from the prime factorization of m, just as phi(m) can. Lambda(m) is the LCM of lambda(p_i^e_i) over all prime powers p_i^e_i in the prime factorization of m, and for odd prime powers, lambda(p_i^e_i) = phi(p_i^e_i). lambda(2)=1, lambda(4)=2, lambda(2^n)=2^(n-2) for larger powers of 2.
Define modPos(a,n) to be the representative of the congruence class of a in {0,1,...,n-1}. For nonnegative a, this is just a%n. For negative a, in some languages a%n is defined to be negative, so modPos(a,n) is then (a%n)+n.
Define modMin(a,n,min) to be the least positive integer congruent to a mod n that is at least min. For a positive, you can compute this as min+modPos(a-min,n).
If b^c^... is smaller than log_2 m (and we can check whether this inequality holds by recursively taking logarithms), then we can simply compute a^b^c^... Otherwise, a^b^c^... mod m = a^modMin(b^c^..., lambda(m), [log_2 m]) mod m = a^modMin(b^c^... mod lambda(m), lambda(m), [log_2 m]) mod m.
For example, suppose we want to compute 2^3^4^5 mod 100. Note that 3^4^5 only has 489 digits, so this is doable by other methods, but it's big enough that you don't want to compute it directly. However, by the methods I gave here, you can compute 2^3^4^5 mod 100 by hand.
Since 3^4^5 > log_2 100,
2^3^4^5 mod 100
= 2^modMin(3^4^5,lambda(100),6) mod 100
= 2^modMin(3^4^5 mod lambda(100), lambda(100),6) mod 100
= 2^modMin(3^4^5 mod 20, 20,6) mod 100.
Let's compute 3^4^5 mod 20. Since 4^5 > log_2 20,
3^4^5 mod 20
= 3^modMin(4^5,lambda(20),4) mod 20
= 3^modMin(4^5 mod lambda(20),lambda(20),4) mod 20
= 3^modMin(4^5 mod 4, 4, 4) mod 20
= 3^modMin(0,4,4) mod 20
= 3^4 mod 20
= 81 mod 20
= 1
We can plug this into the previous calculation:
2^3^4^5 mod 100
= 2^modMin(3^4^5 mod 20, 20,6) mod 100
= 2^modMin(1,20,6) mod 100
= 2^21 mod 100
= 2097152 mod 100
= 52.
Note that 2^(3^4^5 mod 20) mod 100 = 2^1 mod 100 = 2, which is not correct. You can't reduce down to the preperiodic part of the powers of the base.
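Here is a small Python sketch of this approach (my own illustration; carmichael, mod_min, and tower_mod are names I made up). It assumes the exponent tower above each base is large enough, at least log2 of the corresponding modulus, as in the worked example; otherwise the tower should simply be computed directly.

from math import gcd

def carmichael(n):
    # lambda(n): the LCM of lambda over the prime-power factors of n
    def lam_pp(p, e):
        if p == 2 and e >= 3:
            return 2 ** (e - 2)
        return (p - 1) * p ** (e - 1)
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            l = lam_pp(p, e)
            result = result * l // gcd(result, l)
        p += 1
    if n > 1:
        result = result * (n - 1) // gcd(result, n - 1)
    return result

def mod_min(a, n, lo):
    # least integer >= lo that is congruent to a mod n
    return lo + (a - lo) % n

def tower_mod(xs, m):
    # xs = [a, b, c, ...] stands for a^(b^(c^...)); returns its value mod m
    if m == 1:
        return 0
    if len(xs) == 1:
        return xs[0] % m
    lam = carmichael(m)
    # reduce the exponent mod lambda(m), then lift it past the preperiod bound log2(m)
    e = mod_min(tower_mod(xs[1:], lam), lam, m.bit_length())
    return pow(xs[0], e, m)

print(tower_mod([2, 3, 4, 5], 100))   # 52, matching the worked example above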
