Modulo Arithmetic in Modified Geometric Progression - algorithm

We know that the sum of the first n terms of a geometric progression
a1, a1*r, a1*r^2, ..., a1*r^(n-1) is given by Sn = a1*(1 - r^n)/(1 - r).
Now I have a modified geometric progression whose terms are
a1, (a1*r) mod p, (a1*r^2) mod p, (a1*r^3) mod p, ..., (a1*r^(n-1)) mod p, where a1 is the initial term, p is a prime number and r is the common ratio. The n-th term of this series is given by (a1 * r^(n-1)) mod p.
I am trying to find a summation formula for this modified GP and struggling very hard. If anyone can throw some light on it, or advise on an efficient algorithm for finding the sum without iterating over all n terms, it will be of great help.

Note that if r is a primitive root modulo p, then we can reduce the complexity of computing the sum.
We have to find S = a1 + a1*r + a1*r^2 + ... + a1*r^(n-1). We can write S in the closed form S = a1*(r^n - 1) / (r - 1).
Now it can be reduced to:
a1*(r^n - 1) / (r - 1) = S (mod p)
=> a1*r^n = S*(r - 1) + a1 (mod p)
Now take the discrete logarithm with base r on both sides,
log_r(a1*r^n) = log_r(S*(r - 1) + a1 mod p)
=> log_r(a1) + n*log_r(r) = log_r(S*(r - 1) + a1 mod p) (mod (p - 1))
=> n*1 = log_r(S*(r - 1) + a1 mod p) - log_r(a1) (mod (p - 1))
Note that if a1 is 1, then log_r(a1) = 0 and the last term vanishes.
Let S = 6, r = 3, and m = 7, a1 = 1.
Then, we want to solve for n in the following congruence:
(3^n - 1)/(3 - 1) = 6 (mod 7)
=> 3^n - 1 = (3 - 1) * 6 (mod 7)
=> 3^n = 2 * 6 + 1 (mod 7)
=> 3^n = 6 (mod 7)
Then we take the discrete logarithm of both sides:
log_3(3^n) = log_3(6) (mod (7-1))
=> n * log_3(3) = log_3(6) (mod 6)
=> n * 1 = 3 (mod 6)
=> n = 3 (mod 6)
So, n = 3.
You can use the baby-step giant-step algorithm to solve this in O(sqrt(p)) time and memory. If you want an implementation in code, a sketch follows.
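For completeness, here is a minimal baby-step giant-step sketch in Python 3 (the names bsgs and solve_n are mine, not from any library); it assumes p is prime, r is a primitive root mod p, and a solution exists, and it recovers n from S exactly as in the derivation above:

from math import isqrt

def bsgs(r, y, p):
    """Solve r^n = y (mod p) for n in O(sqrt(p)) time and memory.
    Assumes a solution exists, e.g. r is a primitive root mod p."""
    m = isqrt(p) + 1
    # Baby steps: a table of r^j mod p for j = 0 .. m-1.
    table = {pow(r, j, p): j for j in range(m)}
    # Giant steps: repeatedly multiply by r^(-m) mod p.
    step = pow(r, -m, p)          # modular inverse, needs Python 3.8+
    gamma = y % p
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * step % p
    raise ValueError("no discrete logarithm found")

def solve_n(S, a1, r, p):
    # From a1*(r^n - 1)/(r - 1) = S (mod p):  r^n = (S*(r - 1) + a1) / a1 (mod p)
    y = (S * (r - 1) + a1) * pow(a1, -1, p) % p
    return bsgs(r, y, p)

print(solve_n(6, 1, 3, 7))   # -> 3, matching the worked example above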

The principal relation is the same: the sum x is the solution of
a1*(r^N - 1) = (r - 1)*x mod p.
The difficulty to watch for is that p and r - 1 may have common divisors. That is not really a problem, since r - 1 always divides r^N - 1, but it still requires careful handling.
Modular division can be achieved via multiplication with the inverse and that can be computed via the extended Euclidean algorithm. Any implementation of
d,u,v = XGCD(r-1,p)
returns the largest common divisor d and Bezout factors u,v so that
u*(r-1)+v*p = d
Multiplying by f/d, where f = a1*(r^N - 1), results in
(u*f/d)*(r-1) + (v*f/d)*p = f = a1*(r^N-1)
so that the solution can be identified as x = u*(f/d). An implementation will thus follow the lines of
rN = powmod(r,N,p)
f = a1*(rN-1) mod p
d,u,v = XGCD(r-1,p)
return u*(f/d) mod p
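A direct Python translation of that outline might look as follows (a sketch under the same assumptions; xgcd is written out since it is not in the standard library, and r = 1 mod p would need separate handling, the sum then being simply N*a1 mod p):

def xgcd(a, b):
    """Extended Euclid: returns (d, u, v) with u*a + v*b = d = gcd(a, b)."""
    u0, v0, u1, v1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        u0, u1 = u1, u0 - q * u1
        v0, v1 = v1, v0 - q * v1
    return a, u0, v0

def gp_sum_mod(a1, r, N, p):
    """(a1 + a1*r + ... + a1*r^(N-1)) mod p, tolerating gcd(r - 1, p) > 1."""
    rN = pow(r, N, p)
    f = a1 * (rN - 1) % p
    d, u, v = xgcd(r - 1, p)
    return u * (f // d) % p    # d divides f, because d | r-1 | r^N - 1 and d | p

print(gp_sum_mod(1, 3, 3, 7))  # (1 + 3 + 9) mod 7 = 6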


Rabin-Miller test to Carmichael numbers

I am a computer science student; I am studying the Algorithms course independently.
During the course I saw this question:
Show an efficient randomized algorithm to factor Carmichael numbers
(that is, we want a polynomial time algorithm, that given any
Carmichael number C, with probability at least 3/4 finds a nontrivial
factor of C). Hint: use the Rabin-Miller test.
my solution:
My idea is to use the Rabin-Miller test. I will check whether C is prime using the steps of the Rabin-Miller primality test:
Find n - 1 = 2^k * m
choose a: 1 < a < n - 1
compute b_0 = a^m (mod n), b_i = b_(i-1)^2 (mod n)
If b_0 = +/-1, the test says "probably prime" and I return nothing. If some b_i = -1, again "probably prime" and I return nothing. Otherwise, if some b_i = 1 while b_(i-1) != +/-1, n is composite and I should return a factor of C.
algorithm:
function MillerRabinPrimality(n)
    Input: integer n, a Carmichael number
    Output: with probability 3/4, a nontrivial factor of n
    Find integers k, q > 0, q odd, so that n - 1 = 2^k * q
    Select a random integer a, 1 < a < n - 1
    if a^q mod n = +/-1
        return 'probably prime'
    for j = 0 to k - 1 do
        a = a^2 mod n
        if (a = -1)
            return 'probably prime'
        if (a = 1)
            return 'this is composite, factor is ?'
I am not sure how to return the factor of C. For example, I ran the Rabin-Miller primality test on 561, the first Carmichael number:
n = 561
n - 1 = 2^k * m = 560
560/2^1 = 280 => 560/2^2 = 140 => 560/2^3 = 70 => 560/2^4 = 35
k = 4
m = 35
choose a: 1 < a < 560
a = 2
b_0 = 2^35 mod 561 = 263
b_1 = 263^2 mod 561 = 166
b_2 = 166^2 mod 561 = 67
b_3 = 67^2 mod 561 = 1 --> composite
I found that 561 is composite, but I am not sure how to return its factors (3 / 11 / 17).
If Miller–Rabin fails on a Carmichael number n, then as a byproduct you get some x ≢ ±1 mod n such that x² ≡ 1 mod n. Both gcd(x + 1, n) and gcd(x − 1, n) are proper divisors of n.
The proof: x ≢ 1 mod n is equivalent to x − 1 ≢ 0 mod n, which is equivalent to x − 1 not being divisible by n. Therefore gcd(x − 1, n) ≠ n. Likewise, x ≢ −1 mod n implies that gcd(x + 1, n) ≠ n.
On the other hand, x² ≡ 1 mod n is equivalent to (x + 1) (x − 1) being divisible by n, hence gcd((x + 1) (x − 1), n) = n. We cannot have gcd(x + 1, n) = 1, or else gcd(x − 1, n) = n (since gcd(a b, c) = gcd(a, c) for all b such that gcd(b, c) = 1). Likewise, gcd(x − 1, n) ≠ 1.
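Putting the two pieces together, here is a sketch in Python (the name find_factor_carmichael is mine): run the Miller-Rabin squaring chain with random bases until some intermediate value x is a square root of 1 other than +-1, then take a gcd. For a Carmichael number, at least 3/4 of the bases are witnesses, and every coprime witness must produce such an x, so a handful of tries suffices.

import random
from math import gcd

def find_factor_carmichael(n, tries=20):
    """Given a Carmichael number n, find a nontrivial factor with high probability."""
    q, k = n - 1, 0
    while q % 2 == 0:           # write n - 1 = 2^k * q with q odd
        q //= 2
        k += 1
    for _ in range(tries):
        a = random.randrange(2, n - 1)
        g = gcd(a, n)
        if g > 1:               # lucky draw: a already shares a factor with n
            return g
        x = pow(a, q, n)
        for _ in range(k):
            y = x * x % n
            if y == 1 and x != 1 and x != n - 1:
                # x is a square root of 1 other than +-1, so both
                # gcd(x - 1, n) and gcd(x + 1, n) are proper factors.
                return gcd(x - 1, n)
            x = y
    raise RuntimeError("no factor found (is n really Carmichael?)")

print(find_factor_carmichael(561))   # e.g. 33, via the witness x = 67: gcd(66, 561) = 33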

Calculating 1^X + 2^X + ... + N^X mod 1000000007

Is there any algorithm to calculate (1^x + 2^x + 3^x + ... + n^x) mod 1000000007?
Note: a^b is the b-th power of a.
The constraints are 1 <= n <= 10^16, 1 <= x <= 1000. So the value of N is very large.
I can only solve it in O(m log m) where m = 1000000007, which is too slow because the time limit is 2 seconds.
Do you have any efficient algorithm?
There was a comment that it might be a duplicate of this question, but it is definitely different.
You can sum up the series
1**X + 2**X + ... + N**X
with the help of Faulhaber's formula and you'll get a polynomial with an X + 1 power to compute for arbitrary N.
If you don't want to compute Bernoulli numbers, you can find the polynomial by solving X + 2 linear equations (for N = 1, N = 2, N = 3, ..., N = X + 2), which is a slower method but easier to implement.
Let's have an example for X = 2. In this case we have an X + 1 = 3 order polynomial:
A*N**3 + B*N**2 + C*N + D
The linear equations are
A + B + C + D = 1 = 1
A*8 + B*4 + C*2 + D = 1 + 4 = 5
A*27 + B*9 + C*3 + D = 1 + 4 + 9 = 14
A*64 + B*16 + C*4 + D = 1 + 4 + 9 + 16 = 30
Having solved the equations we'll get
A = 1/3
B = 1/2
C = 1/6
D = 0
The final formula is
1**2 + 2**2 + ... + N**2 == N**3 / 3 + N**2 / 2 + N / 6
Now, all you have to do is put an arbitrarily large N into the formula. So far the algorithm has O(X**2) complexity (since it doesn't depend on N).
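A sketch of this approach in Python, using exact rational arithmetic and plain Gauss-Jordan elimination (all names are mine, and nothing here is tuned for speed):

from fractions import Fraction

def power_sum_poly(x):
    """Coefficients [c0, c1, ..., c_(x+1)] with
    1**x + 2**x + ... + N**x == c0 + c1*N + ... + c_(x+1)*N**(x+1),
    found by solving the X + 2 linear equations for N = 1 .. x + 2."""
    size = x + 2
    rows, s = [], 0
    for N in range(1, size + 1):
        s += N ** x
        rows.append([Fraction(N ** e) for e in range(size)] + [Fraction(s)])
    # Gauss-Jordan elimination with exact rationals.
    for col in range(size):
        piv = next(r for r in range(col, size) if rows[r][col] != 0)
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = rows[col][col]
        rows[col] = [v / inv for v in rows[col]]
        for r in range(size):
            if r != col and rows[r][col] != 0:
                factor = rows[r][col]
                rows[r] = [u - factor * w for u, w in zip(rows[r], rows[col])]
    return [row[-1] for row in rows]

print(power_sum_poly(2))   # [0, 1/6, 1/2, 1/3]: D = 0, C = 1/6, B = 1/2, A = 1/3

For the actual problem one would run the same elimination over the integers modulo 1000000007, replacing every division by a multiplication with a modular inverse.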
There are a few ways of speeding up modular exponentiation. From here on, I will use ** to denote "exponentiate" and % to denote "modulus".
First a few observations. It is always the case that (a * b) % m is ((a % m) * (b % m)) % m. It is also always the case that a ** n is the same as (a ** floor(n / 2)) * (a ** (n - floor(n / 2))). This means that for an exponent <= 1000, we can always complete the exponentiation in at most 20 multiplications (and 21 mods).
We can also skip quite a few calculations, since (a ** b) % m is the same as ((a % m) ** b) % m and if m is significantly lower than n, we simply have multiple repeating sums, with a "tail" of a partial repeat.
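As an illustrative sketch of that last observation (the helper name is mine; with m = 1000000007 a full period is still 10^9 terms, so this alone does not beat the time limit, but it shows the structure):

def power_sum_naive_mod(n, x, m):
    """sum_{i=1}^{n} i^x mod m, using that (i % m)^x % m is periodic in i with period m."""
    full, tail = divmod(n, m)
    block = 0                          # sum over one full period, i = 0 .. m-1
    for i in range(m):
        block = (block + pow(i, x, m)) % m
    partial = 0                        # the leftover "tail", i = 1 .. tail
    for i in range(1, tail + 1):
        partial = (partial + pow(i, x, m)) % m
    return (full % m * block + partial) % m

print(power_sum_naive_mod(10, 2, 7))   # 385 % 7 == 0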
I think Vatine’s answer is probably the way to go, but I already typed
this up and it may be useful, for this or for someone else’s similar
problem.
I don’t have time this morning for a detailed answer, but consider this.
1^2 + 2^2 + 3^2 + ... + n^2 would take O(n) steps to compute directly.
However, it’s equivalent to (n(n+1)(2n+1)/6), which can be computed in
O(1) time. A similar equivalence exists for any higher integral power
x.
There may be a general solution to such problems; I don’t know of one,
and Wolfram Alpha doesn’t seem to know of one either. However, in
general the equivalent expression is of degree x+1, and can be worked
out by computing some sample values and solving a set of linear
equations for the coefficients.
However, this would be difficult for large x, such as 1000 as in your
problem, and probably could not be done within 2 seconds.
Perhaps someone who knows more math than I do can turn this into a
workable solution?
Edit: Whoops, I see Fabian Pijcke had already posted a useful link about Faulhaber's formula before I posted.
If you want something easy to implement and fast, try this:
Function Sum(x: Number, n: Integer) -> Number
    P := PolySum(:x, n)
    return P(x)
End

Function PolySum(x: Variable, n: Integer) -> Polynomial
    C := Sum-Coefficients(n)
    P := 0
    For i from 1 to n + 1
        P += C[i] * x^i
    End
    return P
End

Function Sum-Coefficients(n: Integer) -> Vector of Rationals
    A := Create-Matrix(n)
    R := Reduced-Row-Echelon-Form(A)
    return last column of R
End

Function Create-Matrix(n: Integer) -> Matrix of Integers
    A := New (n + 1) x (n + 2) Matrix of Integers
    Fill A with 0s
    Fill first row of A with 1s
    For i from 2 to n + 1
        For j from i to n + 1
            A[i, j] := A[i-1, j] * (j - i + 2)
        End
        A[i, n+2] := A[i, n]
    End
    A[n+1, n+2] := A[n, n+2]
    return A
End
Explanation
Our goal is to obtain a polynomial Q such that Q(x) = sum i^n for i from 1 to x. Knowing that Q(x) = Q(x - 1) + x^n => Q(x) - Q(x - 1) = x^n, we can then make a system of equations like so:
d^0/dx^0( Q(x) - Q(x - 1) ) = d^0/dx^0( x^n )
d^1/dx^1( Q(x) - Q(x - 1) ) = d^1/dx^1( x^n )
d^2/dx^2( Q(x) - Q(x - 1) ) = d^2/dx^2( x^n )
...
d^n/dx^n( Q(x) - Q(x - 1) ) = d^n/dx^n( x^n )
Assuming that Q(x) = a_1*x + a_2*x^2 + ... + a_(n+1)*x^(n+1), we will then have n+1 linear equations with unknowns a_1, ..., a_(n+1), and it turns out the coefficient c_j multiplying the unknown a_j in equation i follows the pattern (where (k)_p = k!/((k - p)!)):
if j < i, cj = 0
otherwise, cj = (j)_(i - 1)
and the independent value of the ith equation is (n)_(i - 1). Explaining why gets a bit messy, but you can check the proof here.
The above algorithm is equivalent to solving this system of linear equations.
Plenty of implementations and further explanations can be found in https://github.com/fcard/PolySum. The main drawback of this algorithm is that it consumes a lot of memory: even my most memory-efficient version uses almost 1 GB for n = 3000. But it's faster than both SymPy and Mathematica, so I assume it's okay. Compare to Schultz's method, which uses an alternate set of equations.
Examples
It's easy to apply this method by hand for small n. The matrix for n=1 is
| (1)_0  (2)_0  (1)_0 |   | 1 1 1 |
| 0      (2)_1  (1)_1 | = | 0 2 1 |
Applying a Gauss-Jordan elimination we then obtain
| 1 0 1/2 |
| 0 1 1/2 |
Result = {a1 = 1/2, a2 = 1/2} => Q(x) = x/2 + (x^2)/2
Note the matrix is always already in row echelon form, we just need to reduce it.
For n=2:
| (1)_0  (2)_0  (3)_0  (2)_0 |   | 1 1 1 1 |
| 0      (2)_1  (3)_1  (2)_1 | = | 0 2 3 2 |
| 0      0      (3)_2  (2)_2 |   | 0 0 6 2 |
Applying a Gauss-Jordan elimination we then obtain
| 1 1 0 2/3 |    | 1 0 0 1/6 |
| 0 2 0 1   | => | 0 1 0 1/2 |
| 0 0 1 1/3 |    | 0 0 1 1/3 |
Result = {a1 = 1/6, a2 = 1/2, a3 = 1/3} => Q(x) = x/6 + (x^2)/2 + (x^3)/3
The key to the algorithm's speed is that it doesn't calculate a factorial for every element of the matrix, instead it knows that (k)_p = (k)_(p-1) * (k - (p - 1)), therefore A[i,j] = (j)_(i-1) = (j)_(i-2) * (j - (i - 2)) = A[i-1, j] * (j - (i - 2)), so it uses the previous row to calculate the current one.
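For reference, a compact Python rendering of the pseudocode above, with exact rationals via Fraction; since the matrix comes out upper triangular anyway (as noted above), plain back-substitution stands in for the full reduced-row-echelon step:

from fractions import Fraction

def create_matrix(n):
    """The (n+1) x (n+2) augmented matrix from the pseudocode, 0-based."""
    A = [[Fraction(0)] * (n + 2) for _ in range(n + 1)]
    A[0] = [Fraction(1)] * (n + 2)
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            A[i][j] = A[i - 1][j] * (j - i + 2)
        A[i][n + 1] = A[i][n - 1]
    A[n][n + 1] = A[n - 1][n + 1]
    return A

def sum_coefficients(n):
    """Solve the system; the matrix is already upper triangular,
    so back-substitution suffices. Returns [a_1, ..., a_(n+1)]."""
    A = create_matrix(n)
    a = [Fraction(0)] * (n + 1)
    for i in range(n, -1, -1):
        s = A[i][n + 1] - sum(A[i][j] * a[j] for j in range(i + 1, n + 1))
        a[i] = s / A[i][i]
    return a

def poly_sum(x, n):
    """1**n + 2**n + ... + x**n, evaluated via the polynomial."""
    c = sum_coefficients(n)
    return sum(c[j] * Fraction(x) ** (j + 1) for j in range(n + 1))

print(sum_coefficients(2))   # [1/6, 1/2, 1/3]  ->  Q(x) = x/6 + x^2/2 + x^3/3
print(poly_sum(10, 2))       # 385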

What is the runtime of this algorithm? (Recursive Pascal's triangle)

Given the following function:
Function f(n, m)
    if n == 0 or m == 0: return 1
    return f(n-1, m) + f(n, m-1)
What's the runtime complexity of f? I understand how to do it quick and dirty, but how to properly characterize it? Is it O(2^(m*n))?
This is an instance of Pascal's triangle: every element is the sum of the two elements just above it, the sides being all ones.
So f(n, m) = (n + m)! / (n! . m!).
Now to know the number of calls to f required to compute f(n, m), you can construct a modified Pascal's triangle: instead of the sum of the elements above, consider 1 + sum (the call itself plus the two recursive calls).
Draw the modified triangle and you will quickly convince yourself that this is exactly 2*f(n, m) - 1.
You can obtain the asymptotic behavior of the binomial coefficients from Stirling's approximation. http://en.wikipedia.org/wiki/Binomial_coefficient#Bounds_and_asymptotic_formulas
f(n, m) ~ (n + m)^(n + m) / (n^n . m^m)
The runtime of f(n, m) is in O(f(n, m)). This is easily verified by the following observation:
Function g(n, m):
    if n = 0 or m = 0: return 1
    return g(n-1, m) + g(n, m-1) + 1
The function f is called exactly as often as g. Furthermore, the function g is called exactly g(n, m) times to evaluate the result of g(n, m). Likewise, the function f is called exactly g(n, m) = 2*f(n, m) - 1 times in order to evaluate the result of f(n, m).
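A quick empirical check of this claim (an instrumented Python snippet of mine, not part of the original answer):

from math import comb

calls = 0

def f(n, m):
    """The recursion from the question, instrumented with a call counter."""
    global calls
    calls += 1
    if n == 0 or m == 0:
        return 1
    return f(n - 1, m) + f(n, m - 1)

for n, m in [(3, 4), (5, 5), (6, 2)]:
    calls = 0
    value = f(n, m)
    assert value == comb(n + m, m)    # f(n, m) = C(n + m, m)
    assert calls == 2 * value - 1     # number of calls = 2*f(n, m) - 1
    print(n, m, value, calls)         # e.g. "3 4 35 69"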
As @Yves Daoust points out in his answer, f(n, m) = (n + m)!/(n!*m!), therefore you get a non-recursive runtime of O((n+m)!/(n!*m!)) for f.
Understanding the Recursive Function
f(n, 0) = 1
f(0, m) = 1
f(n, m) = f(n - 1, m) + f(n, m - 1)
The values look like a Pascal triangle to me:
m\n   0  1  2  3  4 ..
0     1  1  1  1  1 ..
1     1  2  3  4
2     1  3  6
3     1  4  .
4     1  .
.     .
.     .
Solving the Recursion Equation
The values of Pascal's triangle can be expressed as binomial coefficients. Translating the coordinates one gets the solution for f:
f(n, m) = C(n + m, m)        (writing C(a, b) for the binomial coefficient "a choose b")
        = (n + m)! / (m! (n + m - m)!)
        = (n + m)! / (n! m!)
which is a nice term, symmetric in both arguments n and m. (Final term first given by @Yves Daoust in this discussion.)
Pascal's Rule
The recursion equation of f can be derived by using the symmetry of the binomial coefficients and Pascal's Rule
f(n, m) = C(n + m, n)
        = C(n + m, m)                              ; symmetry
        = C(n + m - 1, m) + C(n + m - 1, m - 1)    ; Pascal's rule
        = C((n - 1) + m, m) + C(n + (m - 1), m - 1)
        = f(n - 1, m) + f(n, m - 1)
Determining the Number of Calls
The "number of calls of f" calculation function F is similar to f, we just have to add the call to f itself and the two recursive calls:
F(0, m) = F(n, 0) = 1, otherwise
F(n, m) = 1 + F(n - 1, m) + F(n, m - 1)
(Given first by @blubb in this discussion.)
Understanding the Number of Calls Function
If we write it down, we get another triangle scheme:
1 1 1 1 1 ..
1 3 5 7
1 5 11
1 7 .
1 .
.
.
Comparing the triangles value by value, one guesses
F(n, m) = 2 f(n, m) - 1 (*)
(Result first suggested by @blubb in this discussion.)
Proof
We get
F(0, m) = 2 f(0, m) - 1    ; using (*)
        = 1                ; yields boundary condition for F
F(n, 0) = 2 f(n, 0) - 1
        = 1
as it should and examining the otherwise clause, we see that
F(n, m) = 2 f(n, m) - 1                                  ; assumption
        = 2 (f(n - 1, m) + f(n, m - 1)) - 1              ; definition of f
        = 1 + (2 f(n - 1, m) - 1) + (2 f(n, m - 1) - 1)  ; algebra
        = 1 + F(n - 1, m) + F(n, m - 1)                  ; assumption, applied twice
Thus if we use (*) and the otherwise clause for f, the otherwise clause for F results.
As the finite difference equation and the start condition for F hold, we know it is F (uniqueness of the solution).
Estimating the Asymptotic Behaviour of the Number of Calls
Now on calculating / estimating the values of F (i.e. the runtime of your algorithm).
As
F = 2 f - 1
we see that
O(F) = O(f).
So the runtime of this algorithm is
O( (n + m)! / (n! m!) )
(Result first given by @Yves Daoust in this discussion.)
Approximating the Runtime
Using the Stirling approximation
n! ~= sqrt(2 pi n) (n / e)^n
one can get a form without hard to calculate factorials. One gets
f(n, m) ~= sqrt((n + m) / (2 pi n m)) [(n + m)^(n + m)] / (n^n m^m)
thus arriving at
O( sqrt((n + m) / (n m)) [(n + m)^(n + m)] / (n^n m^m) )
(Use of Stirling's formula first suggested by @Yves Daoust in this discussion.)

How to evaluate an exponential tower modulo a prime

I want to find a fast algorithm to evaluate an expression like the following, where P is prime.
A ^ B ^ C ^ D ^ E mod P
Example:
(9 ^ (3 ^ (15 ^ (3 ^ 15)))) mod 65537 = 16134
The problem is the intermediate results can grow much too large to handle.
Basically the problem reduces to computing a^T mod m for given a, m and a term T that is ridiculously huge. However, we are able to evaluate T mod n for a given modulus n much faster than T itself. So we ask: "Is there an integer n, such that a^(T mod n) mod m = a^T mod m?"
Now if a and m are coprime, we know that n = phi(m) fulfills our condition according to Euler's theorem:
a^T (mod m)
= a^((T mod phi(m)) + k * phi(m)) (mod m) (for some k)
= a^(T mod phi(m)) * a^(k * phi(m)) (mod m)
= a^(T mod phi(m)) * (a^phi(m))^k (mod m)
= a^(T mod phi(m)) * 1^k (mod m)
= a^(T mod phi(m)) (mod m)
If we can compute phi(m) (which is easy to do for example in O(m^(1/2)) or if we know the prime factorization of m), we have reduced the problem to computing T mod phi(m) and a simple modular exponentiation.
What if a and m are not coprime? The situation is not as pleasant as before, since there might not be a valid n with the property a^T mod m = a^(T mod n) mod m for all T. However, we can show that the sequence a^k mod m for k = 0, 1, 2, ... enters a cycle after some point, that is there exist x and C with x, C < m, such that a^y = a^(y + C) for all y >= x.
Example: For a = 2, m = 12, we get the sequence 2^0, 2^1, ... = 1, 2, 4, 8, 4, 8, ... (mod 12). We can see the cycle with parameters x = 2 and C = 2.
We can find the cycle length via brute-force, by computing the sequence elements a^0, a^1, ... until we find two indices X < Y with a^X = a^Y. Now we set x = X and C = Y - X. This gives us an algorithm with O(m) exponentiations per recursion.
What if we want to do better? Thanks to Jyrki Lahtonen from Mathematics Stack Exchange for providing the essentials for the following algorithm!
Let's evaluate the sequence d_k = gcd(a^k, m) until we find an x with d_x = d_{x+1}. This will take at most log(m) GCD computations, because x is bounded by the highest exponent in the prime factorization of m. Let C = phi(m / d_x). We can now prove that a^{k + C} = a^k for all k >= x, so we have found the cycle parameters in O(m^(1/2)) time.
Let's assume we have found x and C and want to compute a^T mod m now.
If T < x, the task is trivial to perform with simple modular exponentiation. Otherwise, we have T >= x and can thus make use of the cycle:
a^T (mod m)
= a^(x + ((T - x) mod C)) (mod m)
= a^(x + (-x mod C) + (T mod C) + k*C) (mod m) (for some k)
= a^(x + (-x mod C) + k*C) * a^(T mod C) (mod m)
= a^(x + (-x mod C)) * a^(T mod C) (mod m)
Again, we have reduced the problem to a subproblem of the same form ("compute T mod C") and two simple modular exponentiations.
Since the modulus is reduced by at least 1 in every iteration, we get a pretty weak bound of O(P^(1/2) * min (P, n)) for the runtime of this algorithm, where n is the height of the stack. In practice we should get a lot better, since the moduli are expected to decrease exponentially. Of course this argument is a bit hand-wavy, maybe some more mathematically-inclined person can improve on it.
There are a few edge cases to consider that actually make your life a bit easier: you can stop immediately if m = 1 (the result is 0 in this case) or if a is a multiple of m (the result is 0 as well in this case).
EDIT: It can be shown that x = C = phi(m) is valid, so as a quick and dirty solution we can use the formula
a^T = a^(phi(m) + T mod phi(m)) (mod m)
for T >= phi(m) or even T >= log_2(m).
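To make this concrete, here is a minimal Python sketch of that quick-and-dirty variant (all names are mine; it assumes every level of the tower is at least 2, which sidesteps the 0/1 edge cases, and it only applies the phi(m) shift when the remaining tower really is at least phi(m)):

def euler_phi(m):
    """Euler's totient via trial division, O(sqrt(m))."""
    result, k, p = m, m, 2
    while p * p <= k:
        if k % p == 0:
            while k % p == 0:
                k //= p
            result -= result // p
        p += 1
    if k > 1:
        result -= result // k
    return result

def tower_at_least(t, bound):
    """Is t[0]^(t[1]^(...)) >= bound? Exact; assumes every t[i] >= 2."""
    if bound <= 1:
        return True
    if len(t) == 1:
        return t[0] >= bound
    e, v = 0, 1
    while v < bound:              # smallest e with t[0]^e >= bound
        v *= t[0]
        e += 1
    return tower_at_least(t[1:], e)

def tower_mod(t, m):
    """t[0]^(t[1]^(t[2]^...)) mod m, via a^T = a^(phi(m) + T mod phi(m)) for T >= phi(m)."""
    if m == 1:
        return 0
    if len(t) == 1:
        return t[0] % m
    ph = euler_phi(m)
    e = tower_mod(t[1:], ph)       # T mod phi(m)
    if tower_at_least(t[1:], ph):  # shift by phi(m) only when T >= phi(m)
        e += ph
    return pow(t[0], e, m)

print(tower_mod([9, 3, 15, 3, 15], 65537))   # 16134, matching the example above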

Any faster algorithm to compute the number of divisors

The F series is defined as
F(0) = 1
F(1) = 1
F(i) = i * F(i - 1) * F(i - 2) for i > 1
The task is to find the number of different divisors for F(i)
This question is from Timus. I tried the following Python, but it surely gives a time limit exceeded. This brute-force approach also will not work for a large input, since it will cause integer overflow as well.
#!/usr/bin/env python
from math import sqrt

n = int(raw_input())

def f(n):
    if n == 0:
        return 1
    if n == 1:
        return 1
    a = 1
    b = 1
    for i in xrange(2, n + 1):
        k = i * a * b
        a = b
        b = k
    return b

x = f(n)
cnt = 0
for i in xrange(1, int(sqrt(x)) + 1):
    if x % i == 0:
        if x / i == i:
            cnt += 1
        else:
            cnt += 2
print cnt
Any optimization?
EDIT
I have tried the suggestion and rewritten the solution, storing not the value of F(n) directly but a dictionary of its prime factors and their exponents:
#!/usr/bin/env python
#from math import sqrt
T = 10000
primes = range(T)
primes[0] = False
primes[1] = False
primes[2] = True
primes[3] = True
for i in xrange(T):
    if primes[i]:
        j = i + i
        while j < T:
            primes[j] = False
            j += i
p = []
for i in xrange(T):
    if primes[i]:
        p.append(i)

n = int(raw_input())

def f(n):
    global p
    if n == 1:
        return 1
    a = dict()
    b = dict()
    for i in xrange(2, n + 1):
        c = a.copy()
        for y in b.iterkeys():
            if c.has_key(y):
                c[y] += b[y]
            else:
                c[y] = b[y]
        k = i
        for y in p:
            d = 0
            if k % y == 0:
                while k % y == 0:
                    k /= y
                    d += 1
                if c.has_key(y):
                    c[y] += d
                else:
                    c[y] = d
            if k < y:
                break
        a = b
        b = c
    k = 1
    for i in b.iterkeys():
        k = k * (b[i] + 1) % (1000000007)
    return k

print f(n)
It still exceeds the time limit (TL on test 5), so it is not fast enough, but this does solve the problem of overflow for the value F(n).
First see this wikipedia article on the divisor function. In short, if you have a number and you know its prime factors, you can easily calculate the number of divisors (get SO to do TeX math):
$n = \prod_{i=1}^r p_i^{a_i}$
$\sigma_x(n) = \prod_{i=1}^{r} \frac{p_{i}^{(a_{i}+1)x}-1}{p_{i}^x-1}$
Anyway, it's a simple function. For the number of divisors specifically, take x = 0 (in the limit), which degenerates to the product of the (a_i + 1).
Now, to solve your problem, instead of keeping F(n) as the number itself, keep it as a set of prime factors and exponents. Then the function that calculates F(n) simply takes the two sets for F(n-1) and F(n-2), sums the exponents of the same prime factors in both sets (assuming zero for nonexistent ones), and additionally adds the prime factors and exponents of the number i. This means that you need another simple(1) function to find the prime factors of i.
Computing F(n) this way, you just need to apply the above formula (taken from Wikipedia) to the set and there's your value. Note also that F(n) can quickly get very large. This solution also avoids usage of big-num libraries (since no prime factor nor its exponent is likely to go beyond 4 billion(2)).
(1) Of course this is not so simple for arbitrarily large i, otherwise we wouldn't have any form of security right now, but for your application it should be simple enough.
(2) Well, it might. If you happen to figure out a simple formula answering your question given any n, then large ns would also be possible in the test case, for which this algorithm is likely going to give a time limit exceeded.
That is a fun problem.
The F(n) grow extremely fast. Since F(n) <= F(n+1) for all n, we have
F(n+2) > F(n)²
for all n, and thus
F(n) > 2^(2^(n/2-1))
for n > 2. That crude estimate already shows that one cannot store these numbers for any but the smallest n: by it, F(100) requires more than 2^49 bits of storage, and 128 GB is only 2^40 bits. Actually, the prime factorisation of F(100) is
*Fiborial> fiborials !! 100
[(2,464855623252387472061),(3,184754360086075580988),(5,56806012190322167100)
,(7,20444417903078359662),(11,2894612619136622614),(13,1102203323977318975)
,(17,160545601976374531),(19,61312348893415199),(23,8944533909832252),(29,498454445374078)
,(31,190392553955142),(37,10610210054141),(41,1548008760101),(43,591286730489)
,(47,86267571285),(53,4807526976),(59,267914296),(61,102334155),(67,5702887),(71,832040)
,(73,317811),(79,17711),(83,2584),(89,144),(97,3)]
and that would require about 9.6 * 10^20 (roughly 2^70) bits - a little less than half of them are trailing zeros, but even storing the numbers à la floating point numbers with a significand and an exponent doesn't bring the required storage down far enough.
So instead of storing the numbers themselves, one can consider the prime factorisation. That also allows an easier computation of the number of divisors, since
divisors(n) = ∏_{i=1}^{k} (e_i + 1)    if    n = ∏_{i=1}^{k} p_i^e_i
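In code, given such a factorisation as a dictionary, this is a one-liner (modulo 10^9 + 7 as the problem demands; the helper name is mine):

def divisor_count_mod(exponents, MOD=10**9 + 7):
    """Number of divisors from a {prime: exponent} factorisation, mod MOD."""
    result = 1
    for e in exponents.values():
        result = result * (e + 1) % MOD
    return result

print(divisor_count_mod({2: 4, 3: 2, 5: 1}))   # 720 = 2^4 * 3^2 * 5 has 30 divisors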
Now, let us investigate the prime factorisations of the F(n) a little. We begin with the
Lemma: A prime p divides F(n) if and only if p <= n.
That is easily proved by induction: F(0) = F(1) = 1 is not divisible by any prime, and there are no primes <= 1.
Now suppose that n > 1 and
A(k) = The prime factors of F(k) are exactly the primes <= k
holds for k < n. Then, since
F(n) = n * F(n-1) * F(n-2)
the set of prime factors of F(n) is the union of the sets of prime factors of n, F(n-1) and F(n-2).
By the induction hypothesis, the set of prime factors of F(k) is
P(k) = { p | 1 < p <= k, p prime }
for k < n. Now, if n is composite, all prime factors of n are smaller than n, hence the set of prime factors of F(n) is P(n-1), but since n is not prime, P(n) = P(n-1). If, on the other hand, n is prime, the set of prime factors of F(n) is
P(n-1) ∪ {n} = P(n)
With that, let us see how much work it is to track the prime factorisation of F(n) at once, and update the list/dictionary for each n (I ignore the problem of finding the factorisation of n, that doesn't take long for the small n involved).
The entry for the prime p first appears at n = p and is then updated for each further n; altogether it is created/updated N - p + 1 times for F(N). Thus there are
    ∑_{p <= N} (N + 1 - p) = π(N)·(N + 1) − ∑_{p <= N} p ≈ N²/(2·log N)
updates in total. For N = 10^6, about 3.6 * 10^10 updates, that is way more than can be done in the allowed time (0.5 seconds).
So we need a different approach. Let us look at one prime p alone, and follow the exponent of p in the F(n).
Let v_p(k) be the exponent of p in the prime factorisation of k. Then we have
v_p(F(n)) = v_p(n) + v_p(F(n-1)) + v_p(F(n-2))
and we know that v_p(F(k)) = 0 for k < p. So (assuming p is not too small to understand what goes on):
v_p(F(n)) = v_p(n) + v_p(F(n-1)) + v_p(F(n-2))
v_p(F(p)) = 1 + 0 + 0 = 1
v_p(F(p+1)) = 0 + 1 + 0 = 1
v_p(F(p+2)) = 0 + 1 + 1 = 2
v_p(F(p+3)) = 0 + 2 + 1 = 3
v_p(F(p+4)) = 0 + 3 + 2 = 5
v_p(F(p+5)) = 0 + 5 + 3 = 8
So we get Fibonacci numbers for the exponents, v_p(F(p+k)) = Fib(k+1) - for a while, since later multiples of p inject further powers of p,
v_p(F(2*p-1)) = 0 + Fib(p-1) + Fib(p-2) = Fib(p)
v_p(F(2*p)) = 1 + Fib(p) + Fib(p-1) = 1 + Fib(p+1)
v_p(F(2*p+1)) = 0 + (1 + Fib(p+1)) + Fib(p) = 1 + Fib(p+2)
v_p(F(2*p+2)) = 0 + (1 + Fib(p+2)) + (1 + Fib(p+1)) = 2 + Fib(p+3)
v_p(F(2*p+3)) = 0 + (2 + Fib(p+3)) + (1 + Fib(p+2)) = 3 + Fib(p+4)
but the additional powers from 2*p also follow a nice Fibonacci pattern, and we have v_p(F(2*p+k)) = Fib(p+k+1) + Fib(k+1) for 0 <= k < p.
For further multiples of p, we get another Fibonacci summand in the exponent, so
    v_p(F(n)) = ∑_{k=1}^{⌊n/p⌋} Fib(n + 1 − k·p)
-- at least until n >= p², because multiples of p² contribute two to the exponent, and the corresponding summand would have to be multiplied by 2; for multiples of p³, by 3, etc.
One can also split the contributions of multiples of higher powers of p: one Fibonacci summand for being a multiple of p, one for being a multiple of p², one for being a multiple of p³, etc. That yields
    v_p(F(n)) = ∑_{k=1}^{⌊n/p⌋} Fib(n + 1 − k·p) + ∑_{k=1}^{⌊n/p²⌋} Fib(n + 1 − k·p²) + ∑_{k=1}^{⌊n/p³⌋} Fib(n + 1 − k·p³) + ...
Now, in particular for the smaller primes, these sums have a lot of terms, and computing them that way would be slow. Fortunately, there is a closed formula for sums of Fibonacci numbers whose indices are an arithmetic progression, for 0 < a <= s
    ∑_{k=0}^{m} Fib(a + k·s) = (Fib(a + (m+1)·s) − (−1)^s · Fib(a + m·s) − (−1)^a · Fib(s − a) − Fib(a)) / D(s)
where
D(s) = Luc(s) - 1 - (-1)^s
and Luc(k) is the k-th Lucas number, Luc(k) = Fib(k+1) + Fib(k-1).
For our purposes, we only need the Fibonacci numbers modulo 10^9 + 7, then the division must be replaced by a multiplication with the modular inverse of D(s).
Using these facts, the number of divisors of F(n) modulo 10^9+7 can be computed in the allowed time for n <= 10^6 (about 0.06 seconds on my old 32-bit box), although with Python, on the testing machines, further optimisations might be necessary.
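To make the two key ingredients concrete, here is a Python sketch (names mine): Fibonacci numbers modulo 10^9 + 7 by fast doubling, and the closed formula above with the division by D(s) replaced by multiplication with a modular inverse (this assumes D(s) is not divisible by the modulus):

MOD = 10**9 + 7

def fib_pair(n):
    """(Fib(n), Fib(n+1)) mod MOD by fast doubling, O(log n)."""
    if n == 0:
        return 0, 1
    a, b = fib_pair(n >> 1)
    c = a * (2 * b - a) % MOD         # Fib(2k)
    d = (a * a + b * b) % MOD         # Fib(2k+1)
    return (d, (c + d) % MOD) if n & 1 else (c, d)

def fib(n):
    return fib_pair(n)[0]

def fib_ap_sum(a, s, m):
    """sum_{k=0}^{m} Fib(a + k*s) mod MOD for 0 < a <= s, via the closed formula."""
    sign_s = -1 if s % 2 else 1       # (-1)^s
    sign_a = -1 if a % 2 else 1       # (-1)^a
    D = (fib(s + 1) + fib(s - 1) - 1 - sign_s) % MOD    # Luc(s) - 1 - (-1)^s
    num = (fib(a + (m + 1) * s) - sign_s * fib(a + m * s)
           - sign_a * fib(s - a) - fib(a)) % MOD
    return num * pow(D, MOD - 2, MOD) % MOD             # divide by D(s) mod MOD

# Sanity check against the direct sum:
a, s, m = 2, 5, 7
assert fib_ap_sum(a, s, m) == sum(fib(a + k * s) for k in range(m + 1)) % MOD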
