Calculating 1^X + 2^X + ... + N^X mod 1000000007 - algorithm

Is there any algorithm to calculate (1^x + 2^x + 3^x + ... + n^x) mod 1000000007?
Note: a^b is the b-th power of a.
The constraints are 1 <= n <= 10^16, 1 <= x <= 1000, so the value of N is very large.
I can only solve it in O(m log m) time where m = 1000000007, which is far too slow for the 2-second time limit.
Do you have any efficient algorithm?
There was a comment that it might be a duplicate of this question, but it is definitely different.

You can sum up the series
1**X + 2**X + ... + N**X
with the help of Faulhaber's formula and you'll get a polynomial of degree X + 1 to evaluate for arbitrary N.
If you don't want to compute Bernoulli numbers, you can find the polynomial by solving X + 2 linear equations (for N = 1, N = 2, N = 3, ..., N = X + 2), which is a slower method but easier to implement.
Let's have an example for X = 2. In this case we have a polynomial of degree X + 1 = 3:
A*N**3 + B*N**2 + C*N + D
The linear equations are
A + B + C + D = 1
A*8 + B*4 + C*2 + D = 1 + 4 = 5
A*27 + B*9 + C*3 + D = 1 + 4 + 9 = 14
A*64 + B*16 + C*4 + D = 1 + 4 + 9 + 16 = 30
Having solved the equations we'll get
A = 1/3
B = 1/2
C = 1/6
D = 0
The final formula is
1**2 + 2**2 + ... + N**2 == N**3 / 3 + N**2 / 2 + N / 6
Now, all you have to do is to put an arbitrarily large N into the formula, working modulo 1000000007 (the rational coefficients become multiplications by modular inverses). The cost of building and solving the system depends only on X, not on N.
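As a sketch of this approach (the function name and code organization are mine, not from the answer), the polynomial can be recovered exactly with Python's `Fraction` type and plain Gauss-Jordan elimination:

```python
from fractions import Fraction

def power_sum_poly(x):
    """Coefficients [a_(x+1), ..., a_1, a_0] of the polynomial P with
    P(n) = 1**x + 2**x + ... + n**x, found by solving the x + 2 linear
    equations given by the sample points n = 1 .. x + 2."""
    m = x + 2                                  # number of unknown coefficients
    rows, s = [], 0
    for n in range(1, m + 1):
        s += n ** x                            # exact running sum 1**x + ... + n**x
        rows.append([Fraction(n ** d) for d in range(x + 1, -1, -1)]
                    + [Fraction(s)])
    # Gauss-Jordan elimination over exact rationals
    for c in range(m):
        piv = next(r for r in range(c, m) if rows[r][c] != 0)
        rows[c], rows[piv] = rows[piv], rows[c]
        rows[c] = [v / rows[c][c] for v in rows[c]]
        for r in range(m):
            if r != c:
                f = rows[r][c]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[c])]
    return [rows[r][-1] for r in range(m)]
```

For X = 2 this returns [1/3, 1/2, 1/6, 0], i.e. the N**3/3 + N**2/2 + N/6 formula above; to evaluate modulo 1000000007, each coefficient a/b can be mapped to a * pow(b, -1, 10**9 + 7) (Python 3.8+).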

There are a few ways of speeding up modular exponentiation. From here on, I will use ** to denote "exponentiate" and % to denote "modulus".
First a few observations. It is always the case that (a * b) % m is ((a % m) * (b % m)) % m. It is also always the case that a ** n is the same as (a ** floor(n / 2)) * (a ** (n - floor(n / 2))). This means that for an exponent <= 1000, we can always complete the exponentiation in at most 20 multiplications (and 21 mods).
We can also skip quite a few calculations, since (a ** b) % m is the same as ((a % m) ** b) % m and if m is significantly lower than n, we simply have multiple repeating sums, with a "tail" of a partial repeat.
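These two observations can be sketched in Python as follows (the names and the exact splitting of the sum are my own; Python's built-in pow(a, b, m) already implements the first part):

```python
def pow_mod(a, b, m):
    """Square-and-multiply: O(log b) multiplications, each followed by a
    mod, so intermediate values never exceed m*m."""
    result, a = 1, a % m
    while b > 0:
        if b & 1:                 # low bit set: multiply this power in
            result = result * a % m
        a = a * a % m             # square for the next bit
        b >>= 1
    return result

def power_sum_mod(n, x, m):
    """Sum i**x for i = 1..n (x >= 1), mod m, using the fact that the
    bases repeat with period m: q full periods plus a partial "tail"."""
    q, r = divmod(n, m)
    full = sum(pow(i, x, m) for i in range(m)) % m       # one full period
    tail = sum(pow(i, x, m) for i in range(r + 1)) % m   # partial repeat
    return (q * full + tail) % m
```

As the answer notes, the second function only pays off when m is significantly lower than n; for m = 1000000007 it degenerates into the naive loop.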

I think Vatine’s answer is probably the way to go, but I already typed
this up and it may be useful, for this or for someone else’s similar
problem.
I don’t have time this morning for a detailed answer, but consider this.
1^2 + 2^2 + 3^2 + ... + n^2 would take O(n) steps to compute directly.
However, it’s equivalent to (n(n+1)(2n+1)/6), which can be computed in
O(1) time. A similar equivalence exists for any higher integral power
x.
There may be a general solution to such problems; I don’t know of one,
and Wolfram Alpha doesn’t seem to know of one either. However, in
general the equivalent expression is of degree x+1, and can be worked
out by computing some sample values and solving a set of linear
equations for the coefficients.
However, this would be difficult for large x, such as 1000 as in your
problem, and probably could not be done within 2 seconds.
Perhaps someone who knows more math than I do can turn this into a
workable solution?
Edit: Whoops, I see Fabian Pijcke had already posted a useful link about Faulhaber's formula before I posted.

If you want something easy to implement and fast, try this:
Function Sum(x: Number, n: Integer) -> Number
    P := PolySum(:x, n)
    return P(x)
End

Function PolySum(x: Variable, n: Integer) -> Polynomial
    C := Sum-Coefficients(n)
    P := 0
    For i from 1 to n + 1
        P += C[i] * x^i
    End
    return P
End

Function Sum-Coefficients(n: Integer) -> Vector of Rationals
    A := Create-Matrix(n)
    R := Reduced-Row-Echelon-Form(A)
    return last column of R
End

Function Create-Matrix(n: Integer) -> Matrix of Integers
    A := New (n + 1) x (n + 2) Matrix of Integers
    Fill A with 0s
    Fill first row of A with 1s
    For i from 2 to n + 1
        For j from i to n + 1
            A[i, j] := A[i-1, j] * (j - i + 2)
        End
        A[i, n+2] := A[i, n]
    End
    A[n+1, n+2] := A[n, n+2]
    return A
End
Explanation
Our goal is to obtain a polynomial Q such that Q(x) = sum i^n for i from 1 to x. Knowing that Q(x) = Q(x - 1) + x^n => Q(x) - Q(x - 1) = x^n, we can then make a system of equations like so:
d^0/dx^0( Q(x) - Q(x - 1) ) = d^0/dx^0( x^n )
d^1/dx^1( Q(x) - Q(x - 1) ) = d^1/dx^1( x^n )
d^2/dx^2( Q(x) - Q(x - 1) ) = d^2/dx^2( x^n )
...
d^n/dx^n( Q(x) - Q(x - 1) ) = d^n/dx^n( x^n )
Assuming that Q(x) = a_1*x + a_2*x^2 + ... + a_(n+1)*x^(n+1), we will then have n + 1 linear equations with unknowns a_1, ..., a_(n+1), and it turns out the coefficient c_j multiplying the unknown a_j in equation i follows the pattern (where (k)_p = k!/(k - p)!):
if j < i, c_j = 0
otherwise, c_j = (j)_(i - 1)
and the independent value of the ith equation is (n)_(i - 1). Explaining why gets a bit messy, but you can check the proof here.
The above algorithm is equivalent to solving this system of linear equations.
Plenty of implementations and further explanations can be found in https://github.com/fcard/PolySum. The main drawback of this algorithm is that it consumes a lot of memory; even my most memory-efficient version uses almost 1 GB for n = 3000. But it's faster than both SymPy and Mathematica, so I assume it's okay. Compare to Schultz's method, which uses an alternate set of equations.
Examples
It's easy to apply this method by hand for small n. The matrix for n=1 is
| (1)_0  (2)_0  (1)_0 |   | 1  1  1 |
| 0      (2)_1  (1)_1 | = | 0  2  1 |
Applying a Gauss-Jordan elimination we then obtain
| 1  0  1/2 |
| 0  1  1/2 |
Result = {a_1 = 1/2, a_2 = 1/2} => Q(x) = x/2 + (x^2)/2
Note that the matrix is always already in row echelon form; we just need to reduce it.
For n=2:
| (1)_0  (2)_0  (3)_0  (2)_0 |   | 1  1  1  1 |
| 0      (2)_1  (3)_1  (2)_1 | = | 0  2  3  2 |
| 0      0      (3)_2  (2)_2 |   | 0  0  6  2 |
Applying a Gauss-Jordan elimination we then obtain
| 1  1  0  2/3 |    | 1  0  0  1/6 |
| 0  2  0  1   | => | 0  1  0  1/2 |
| 0  0  1  1/3 |    | 0  0  1  1/3 |
Result = {a_1 = 1/6, a_2 = 1/2, a_3 = 1/3} => Q(x) = x/6 + (x^2)/2 + (x^3)/3
The key to the algorithm's speed is that it doesn't calculate a factorial for every element of the matrix. Instead it uses the identity (k)_p = (k)_(p-1) * (k - (p - 1)), therefore A[i,j] = (j)_(i-1) = (j)_(i-2) * (j - (i - 2)) = A[i-1, j] * (j - (i - 2)), so each row is computed from the previous one.
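A minimal Python translation of the pseudocode above, using exact rationals (the helper names are mine):

```python
from fractions import Fraction

def sum_coefficients(n):
    """Coefficients [a_1, ..., a_(n+1)] with Q(x) = a_1*x + ... + a_(n+1)*x^(n+1)
    equal to 1**n + 2**n + ... + x**n; follows the Create-Matrix pseudocode."""
    rows, cols = n + 1, n + 2
    A = [[Fraction(1)] * cols] + [[Fraction(0)] * cols for _ in range(rows - 1)]
    for i in range(1, rows):
        for j in range(i, rows):                   # columns i .. n (0-indexed)
            A[i][j] = A[i - 1][j] * (j - i + 2)    # (j)_(i-1) from (j)_(i-2)
        A[i][cols - 1] = A[i][rows - 2]            # independent term (n)_(i-1)
    A[rows - 1][cols - 1] = A[rows - 2][cols - 1]  # last row: (n)_n = (n)_(n-1)
    # The matrix is already in row echelon form; back-substitute to reduce it.
    for i in range(rows - 1, -1, -1):
        A[i] = [v / A[i][i] for v in A[i]]
        for r in range(i):
            f = A[r][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    return [A[i][cols - 1] for i in range(rows)]

def poly_sum(x, n):
    """Evaluate Q(x) = 1**n + 2**n + ... + x**n via the coefficients."""
    return sum(c * Fraction(x) ** (i + 1)
               for i, c in enumerate(sum_coefficients(n)))
```

sum_coefficients(2) returns [1/6, 1/2, 1/3], matching the n = 2 worked example.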

Related

Finding sum in less than O(N)

Question:
In less than O(n) time, find a number K in the sequence 1, 2, 3, ..., N such that the sum of 1, 2, 3, ..., K is exactly half of the sum of 1, 2, 3, ..., N.
Maths:
I know that the sum of the sequence 1,2,3....N is N(N+1)/2.
Therefore our task is to find K such that:
K(K+1) = 1/2 * (N)(N+1)/2 if such a K exists.
Pseudo-Code:
sum1 = n(n+1)/2
sum2 = 0
for(i=1; i<n; i++)
{
    sum2 += i;
    if(sum2 == sum1)
    {
        index = i
        break;
    }
}
Problem: The solution is O(n) but I need something better, such as O(log(n))...
You're close with your equation, but you dropped the divide by 2 from the K side. You actually want
K * (K + 1) / 2 = N * (N + 1) / (2 * 2)
Or
2 * K * (K + 1) = N * (N + 1)
Plugging that into wolfram alpha gives the real solutions:
K = 1/2 * (-sqrt(2N^2 + 2N + 1) - 1)
K = 1/2 * (sqrt(2N^2 + 2N + 1) - 1)
Since you probably don't want the negative value, the second equation is what you're looking for. That should be an O(1) solution.
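A sketch of that O(1) check in Python (the function name is mine), using math.isqrt (Python 3.8+) so that very large N are handled without floating-point error:

```python
from math import isqrt

def half_sum_k(n):
    """Return K with 1+2+...+K == (1+2+...+N)/2, or None if no integer K
    exists. Uses K = (sqrt(2N^2 + 2N + 1) - 1) / 2 with an exact isqrt."""
    d = 2 * n * n + 2 * n + 1
    r = isqrt(d)
    if r * r != d or (r - 1) % 2 != 0:
        return None                     # discriminant not a perfect square
    return (r - 1) // 2
```

For example, half_sum_k(20) finds K = 14 (1+...+14 = 105, half of 1+...+20 = 210).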
The other answers show the analytical solutions of the equation
k * (k + 1) = n * (n + 1) / 2, where n is given.
The OP needs k to be a whole number, though, and such value may not exist for every chosen n.
We can adapt Newton's method to solve this equation using only integer arithmetic.
sum_n = n * (n + 1) / 2
k = n
repeat indefinitely                       // It usually needs only a few iterations, it's O(log(n))
    f_k = k * (k + 1)
    if f_k == sum_n
        k is the solution, exit
    if f_k < sum_n
        there's no k, exit
    k_n = (f_k - sum_n) / (2 * k + 1)     // Newton step: f(k)/f'(k)
    if k_n == 0
        k_n = 1                           // Avoid infinite loop
    k = k - k_n
Here is a C++ implementation.
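In case the linked implementation is unavailable, here is a Python sketch of the same integer Newton iteration (the function name is mine):

```python
def find_k(n):
    """Integer Newton iteration for k*(k+1) == n*(n+1)//2.
    Returns k, or None when no integer solution exists."""
    sum_n = n * (n + 1) // 2
    k = n
    while True:
        f_k = k * (k + 1)
        if f_k == sum_n:
            return k
        if f_k < sum_n:
            return None                        # overshot: no integer k
        step = (f_k - sum_n) // (2 * k + 1)    # Newton step f(k)/f'(k)
        k -= max(step, 1)                      # always move, avoid infinite loop
```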
We can find all the pairs (n, k) that satisfy the equation for 0 < k < n ≤ N by adapting the algorithm posted in the question.
n = 1          // This algorithm compares 2 * k * (k + 1) and n * (n + 1).
sum_n = 1      // It finds all the pairs (n, k) where 0 < n ≤ N in O(N).
sum_2k = 1
for every n <= N               // Note that n / k → sqrt(2) when n → ∞
    while sum_n < sum_2k
        n = n + 1              // This inner loop requires a couple of
        sum_n = sum_n + n      // iterations, at most.
    if ( sum_n == sum_2k )
        print n and k
    k = k + 1
    sum_2k = sum_2k + 2 * k
Here is an implementation in C++ that can find the first pairs with N < 200,000,000:
N K K * (K + 1)
----------------------------------------------
3 2 6
20 14 210
119 84 7140
696 492 242556
4059 2870 8239770
23660 16730 279909630
137903 97512 9508687656
803760 568344 323015470680
4684659 3312554 10973017315470
27304196 19306982 372759573255306
159140519 112529340 12662852473364940
Of course it becomes impractical for too large values and eventually overflows.
Besides, there's a far better way to find all those pairs (have you noticed the patterns in the sequences of the last digits?).
We can start by manipulating this Diophantine equation:
2k(k + 1) = n(n + 1)
introducing u = n + 1 → n = u - 1
            v = k + 1 → k = v - 1
2(v - 1)v = (u - 1)u
2(v^2 - v) = u^2 - u
2(4v^2 - 4v) = 4u^2 - 4u
2(4v^2 - 4v) + 2 = 4u^2 - 4u + 2
2(4v^2 - 4v + 1) = (4u^2 - 4u + 1) + 1
2(2v - 1)^2 = (2u - 1)^2 + 1
substituting x = 2u - 1 → u = (x + 1)/2
             y = 2v - 1 → v = (y + 1)/2
2y^2 = x^2 + 1
x^2 - 2y^2 = -1
Which is the negative Pell's equation for 2.
It's easy to find its fundamental solutions by inspection, x1 = 1 and y1 = 1. Those would correspond to n = k = 0, a solution of the original Diophantine equation, but not of the original problem (I'm ignoring the sums of 0 terms).
Once those are known, we can calculate all the other ones with two simple recurrence relations
x(i+1) = x(i) + 2y(i)
y(i+1) = y(i) + x(i)
Note that we need to "skip" the even ys as they would lead to non-integer solutions. So we can directly use these:
x(i+2) = 3x(i) + 4y(i)  →  u(i+1) = 3u(i) + 4v(i) - 3  →  n(i+1) = 3n(i) + 4k(i) + 3
y(i+2) = 2x(i) + 3y(i)  →  v(i+1) = 2u(i) + 3v(i) - 2  →  k(i+1) = 2n(i) + 3k(i) + 2
Summing up:
n k
-----------------------------------------------
3* 0 + 4* 0 + 3 = 3 2* 0 + 3* 0 + 2 = 2
3* 3 + 4* 2 + 3 = 20 2* 3 + 3* 2 + 2 = 14
3*20 + 4*14 + 3 = 119 2*20 + 3*14 + 2 = 84
...
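A quick Python sketch of those recurrences (the function name is mine), which reproduces the table above:

```python
def sum_half_pairs(count):
    """Generate the first `count` pairs (n, k) with 2k(k+1) = n(n+1),
    starting from the trivial solution n = k = 0."""
    n, k = 0, 0
    pairs = []
    for _ in range(count):
        n, k = 3 * n + 4 * k + 3, 2 * n + 3 * k + 2
        pairs.append((n, k))
    return pairs
```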
It seems that the problem is asking to solve the diophantine equation
2K(K+1) = N(N+1).
By inspection, K=2, N=3 is a solution !
Note that technically this is an O(1) problem, because N has a finite value and does not vary (and if no solution exists, the dependency on N is even meaningless).
The condition you have is that the sum of 1..N is twice the sum of 1..K
So you have N(N+1) = 2K(K+1) or K^2 + K - (N^2 + N) / 2 = 0
Which means K = (-1 +/- sqrt(1 + 2(N^2 + N)))/2
Which is O(1)

Modulo Arithmetic in Modified Geometric Progression

We know that the sum of the first n terms of a Geometric Progression is given by
S_n = a1*(1 - r^n)/(1 - r) if the series is of the form a1, a1*r, a1*r^2, a1*r^3, ....
Now I have a modified geometric progression where the series is of the form
a1, (a1*r) mod p, (a1*r^2) mod p, (a1*r^3) mod p, ..., (a1*r^n) mod p, where a1 is the initial term, p is a prime number and r is the common ratio. The Nth term of this series is given by (a1 * r^(n-1)) mod p.
I am trying to get a summation formula for the above modified GP and struggling very hard. If anyone can throw some light on it or advise on an efficient algorithm for finding the sum without iterating over all n terms, it will be of great help.
Note that if r is a primitive root modulo p, then we can reduce the complexity of the sum.
We have to find S = a1*1 + a1*r + a1*r^2 + ... + a1*r^(n-1). We can write S in the closed form as S = a1*(r^n - 1) / (r - 1).
Now it can be reduced to:
a1*(r^n - 1) / (r - 1) = S (mod p)
=> a1*r^n = S * (r - 1) + a1 (mod p)
Now take the discrete logarithm with base r on both sides:
log_r(a1*r^n) = log_r((S*(r-1) + a1) mod p)
=> log_r(a1) + n*log_r(r) = log_r((S*(r-1) + a1) mod p) (mod (p-1))
=> n*1 = log_r((S*(r-1) + a1) mod p) - log_r(a1) (mod (p-1))
Note that if a1 is 1 then the last term is 0.
Let S = 6, r = 3, p = 7, and a1 = 1.
Then, we want to solve for n in the following congruence:
(3^n - 1)/(3 - 1) = 6 (mod 7)
=> 3^n - 1 = (3 - 1) * 6 (mod 7)
=> 3^n = 2 * 6 + 1 (mod 7)
=> 3^n = 6 (mod 7)
Then we take the discrete logarithm of both sides:
log_3(3^n) = log_3(6) (mod (7-1))
=> n * log_3(3) = log_3(6) (mod 6)
=> n * 1 = 3 (mod 6)
=> n = 3 (mod 6)
So, n = 3.
You can use the Baby-step Giant-step algorithm to compute the discrete logarithm in O(sqrt(p)).
If you want an implementation in code, I can provide one.
The principal relation is the same, the sum x is the solution of
a1*(r^N-1) = (r-1)*x mod p.
The difficulty to watch out for is that p and r-1 may have common divisors. This is not really a problem, since r-1 divides r^N-1, but it still requires careful handling.
Modular division can be achieved via multiplication with the inverse and that can be computed via the extended Euclidean algorithm. Any implementation of
d,u,v = XGCD(r-1,p)
returns the largest common divisor d and Bezout factors u,v so that
u*(r-1)+v*p = d
Multiplying by f/d, where f = a1*(r^N-1), results in
(u*f/d)*(r-1) + (v*f/d)*p = f = a1*(r^N-1)
so that the solution can be identified as x = u*(f/d). An implementation will thus follow the lines of
rN = powmod(r,N,p)
f = a1*(rN-1) mod p
d,u,v = XGCD(r-1,p)
return u*(f/d) mod p
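Since the question states that p is prime, d = gcd(r-1, p) is 1 unless p divides r-1, so the XGCD step reduces to an ordinary modular inverse. A Python sketch (the function name and the convention of summing n terms a1*r^0 .. a1*r^(n-1) are my assumptions):

```python
def geometric_sum_mod(a1, r, n, p):
    """Sum of the n terms a1*r^0 + a1*r^1 + ... + a1*r^(n-1), mod p.
    Assumes p is prime, so r - 1 is invertible unless p divides r - 1."""
    if r % p == 1:
        # Every term is congruent to a1; the closed form degenerates.
        return a1 * n % p
    f = a1 * (pow(r, n, p) - 1) % p    # a1 * (r^n - 1) mod p
    inv = pow(r - 1, -1, p)            # modular inverse (Python 3.8+)
    return f * inv % p
```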

Any faster algorithm to compute the number of divisors

The F series is defined as
F(0) = 1
F(1) = 1
F(i) = i * F(i - 1) * F(i - 2) for i > 1
The task is to find the number of different divisors for F(i)
This question is from Timus. I tried the following Python but it surely gives a time limit exceeded. This brute-force approach will not work for a large input, since the value of F(n) becomes astronomically large as well.
#!/usr/bin/env python
from math import sqrt

n = int(raw_input())

def f(n):
    if n == 0:
        return 1
    if n == 1:
        return 1
    a = 1
    b = 1
    for i in xrange(2, n + 1):
        k = i * a * b
        a = b
        b = k
    return b

x = f(n)
cnt = 0
for i in xrange(1, int(sqrt(x)) + 1):
    if x % i == 0:
        if x / i == i:
            cnt += 1
        else:
            cnt += 2
print cnt
Any optimization?
EDIT
I have tried the suggestion and rewrote the solution (not storing the F(n) value directly, but a dictionary of prime-factor exponents):
#!/usr/bin/env python
#from math import sqrt

T = 10000
primes = range(T)
primes[0] = False
primes[1] = False
primes[2] = True
primes[3] = True
for i in xrange(T):
    if primes[i]:
        j = i + i
        while j < T:
            primes[j] = False
            j += i
p = []
for i in xrange(T):
    if primes[i]:
        p.append(i)

n = int(raw_input())

def f(n):
    global p
    if n == 1:
        return 1
    a = dict()
    b = dict()
    for i in xrange(2, n + 1):
        c = a.copy()
        for y in b.iterkeys():
            if c.has_key(y):
                c[y] += b[y]
            else:
                c[y] = b[y]
        k = i
        for y in p:
            d = 0
            if k % y == 0:
                while k % y == 0:
                    k /= y
                    d += 1
                if c.has_key(y):
                    c[y] += d
                else:
                    c[y] = d
            if k < y: break
        a = b
        b = c
    k = 1
    for i in b.iterkeys():
        k = k * (b[i] + 1) % (1000000007)
    return k

print f(n)
And it still gives TL5, not fast enough, but this solves the problem of overflow for the value F(n).
First see this wikipedia article on the divisor function. In short, if you have a number and you know its prime factors, you can easily calculate the number of divisors (get SO to do TeX math):
$n = \prod_{i=1}^r p_i^{a_i}$
$\sigma_x(n) = \prod_{i=1}^{r} \frac{p_{i}^{(a_{i}+1)x}-1}{p_{i}^x-1}$
Anyway, it's a simple function.
Now, to solve your problem, instead of keeping F(n) as the number itself, keep it as a set of prime factors and exponent sizes. Then the function that calculates F(n) simply takes the two sets for F(n-1) and F(n-2), sums the exponents of the same prime factors in both sets (assuming zero for nonexistent ones) and additionally adds the set of prime factors and exponent sizes for the number i. This means that you need another simple1 function to find the prime factors of i.
Computing F(n) this way, you just need to apply the above formula (taken from Wikipedia) to the set and there's your value. Note also that F(n) can quickly get very large. This solution also avoids usage of big-num libraries (since no prime factor nor its exponent is likely to go beyond 4 billion2).
1 Of course this is not so simple for arbitrarily large i, otherwise we wouldn't have any form of security right now, but for your application it should be simple enough.
2 Well it might. If you happen to figure out a simple formula answering your question given any n, then large ns would also be possible in the test case, for which this algorithm is likely going to give a time limit exceeded.
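A Python sketch of that idea (helper names are mine; simple trial division stands in for the "simple" function that factorises i):

```python
from collections import Counter

MOD = 10**9 + 7

def factorize(m):
    """Trial-division factorisation of m, as a Counter {prime: exponent}."""
    f = Counter()
    d = 2
    while d * d <= m:
        while m % d == 0:
            f[d] += 1
            m //= d
        d += 1
    if m > 1:
        f[m] += 1
    return f

def divisors_of_F(n):
    """Number of divisors of F(n) mod 10^9+7, where
    F(i) = i * F(i-1) * F(i-2): exponents simply add, so we keep the
    factorisations of the last two values as exponent dictionaries."""
    a, b = Counter(), Counter()      # factorisations of F(0), F(1)
    for i in range(2, n + 1):
        a, b = b, a + b + factorize(i)
    result = 1
    for e in b.values():
        result = result * (e + 1) % MOD
    return result
```

For example, F(4) = 48 = 2^4 * 3, so divisors_of_F(4) gives (4+1)*(1+1) = 10.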
That is a fun problem.
The F(n) grow extremely fast. Since F(n) <= F(n+1) for all n, we have
F(n+2) > F(n)²
for all n, and thus
F(n) > 2^(2^(n/2-1))
for n > 2. That crude estimate already shows that one cannot store these numbers for any but the smallest n. Hence F(100) requires more than 2^49 bits of storage, and 128 GB are only 2^40 bits. Actually, the prime factorisation of F(100) is
*Fiborial> fiborials !! 100
[(2,464855623252387472061),(3,184754360086075580988),(5,56806012190322167100)
,(7,20444417903078359662),(11,2894612619136622614),(13,1102203323977318975)
,(17,160545601976374531),(19,61312348893415199),(23,8944533909832252),(29,498454445374078)
,(31,190392553955142),(37,10610210054141),(41,1548008760101),(43,591286730489)
,(47,86267571285),(53,4807526976),(59,267914296),(61,102334155),(67,5702887),(71,832040)
,(73,317811),(79,17711),(83,2584),(89,144),(97,3)]
and that would require about 9.6 * 10^20 (roughly 2^70) bits - a little less than half of them are trailing zeros, but even storing the numbers à la floating point numbers with a significand and an exponent doesn't bring the required storage down far enough.
So instead of storing the numbers themselves, one can consider the prime factorisation. That also allows an easier computation of the number of divisors, since
divisors(n) = ∏_{i=1}^{k} (e_i + 1)   if   n = ∏_{i=1}^{k} p_i^(e_i)
Now, let us investigate the prime factorisations of the F(n) a little. We begin with the
Lemma: A prime p divides F(n) if and only if p <= n.
That is easily proved by induction: F(0) = F(1) = 1 is not divisible by any prime, and there are no primes <= 1.
Now suppose that n > 1 and
A(k) = The prime factors of F(k) are exactly the primes <= k
holds for k < n. Then, since
F(n) = n * F(n-1) * F(n-2)
the set of prime factors of F(n) is the union of the sets of prime factors of n, F(n-1) and F(n-2).
By the induction hypothesis, the set of prime factors of F(k) is
P(k) = { p | 1 < p <= k, p prime }
for k < n. Now, if n is composite, all prime factors of n are smaller than n, hence the set of prime factors of F(n) is P(n-1), but since n is not prime, P(n) = P(n-1). If, on the other hand, n is prime, the set of prime factors of F(n) is
P(n-1) ∪ {n} = P(n)
With that, let us see how much work it is to track the prime factorisation of F(n) at once, and update the list/dictionary for each n (I ignore the problem of finding the factorisation of n, that doesn't take long for the small n involved).
The entry for the prime p appears first for n = p, and is then updated for each further n, altogether it is created/updated N - p + 1 times for F(N). Thus there are
∑_{p <= N} (N + 1 - p) = π(N)*(N + 1) - ∑_{p <= N} p ≈ N²/(2*log N)
updates in total. For N = 10^6, about 3.6 * 10^10 updates, that is way more than can be done in the allowed time (0.5 seconds).
So we need a different approach. Let us look at one prime p alone, and follow the exponent of p in the F(n).
Let v_p(k) be the exponent of p in the prime factorisation of k. Then we have
v_p(F(n)) = v_p(n) + v_p(F(n-1)) + v_p(F(n-2))
and we know that v_p(F(k)) = 0 for k < p. So (assuming p is not too small to understand what goes on):
v_p(F(n)) = v_p(n) + v_p(F(n-1)) + v_p(F(n-2))
v_p(F(p)) = 1 + 0 + 0 = 1
v_p(F(p+1)) = 0 + 1 + 0 = 1
v_p(F(p+2)) = 0 + 1 + 1 = 2
v_p(F(p+3)) = 0 + 2 + 1 = 3
v_p(F(p+4)) = 0 + 3 + 2 = 5
v_p(F(p+5)) = 0 + 5 + 3 = 8
So we get Fibonacci numbers for the exponents, v_p(F(p+k)) = Fib(k+1) - for a while, since later multiples of p inject further powers of p,
v_p(F(2*p-1)) = 0 + Fib(p-1) + Fib(p-2) = Fib(p)
v_p(F(2*p)) = 1 + Fib(p) + Fib(p-1) = 1 + Fib(p+1)
v_p(F(2*p+1)) = 0 + (1 + Fib(p+1)) + Fib(p) = 1 + Fib(p+2)
v_p(F(2*p+2)) = 0 + (1 + Fib(p+2)) + (1 + Fib(p+1)) = 2 + Fib(p+3)
v_p(F(2*p+3)) = 0 + (2 + Fib(p+3)) + (1 + Fib(p+2)) = 3 + Fib(p+4)
but the additional powers from 2*p also follow a nice Fibonacci pattern, and we have v_p(F(2*p+k)) = Fib(p+k+1) + Fib(k+1) for 0 <= k < p.
For further multiples of p, we get another Fibonacci summand in the exponent, so
v_p(F(n)) = ∑_{k=1}^{n/p} Fib(n + 1 - k*p)
-- until n >= p², because multiples of p² contribute two to the exponent, and the corresponding summand would have to be multiplied by 2; for multiples of p³, by 3 etc.
One can also split the contributions of multiples of higher powers of p, so one would get one Fibonacci summand due to it being a multiple of p, one for it being a multiple of p², one for being a multiple of p³ etc, that yields
v_p(F(n)) = ∑_{k=1}^{n/p} Fib(n + 1 - k*p) + ∑_{k=1}^{n/p²} Fib(n + 1 - k*p²) + ∑_{k=1}^{n/p³} Fib(n + 1 - k*p³) + ...
Now, in particular for the smaller primes, these sums have a lot of terms, and computing them that way would be slow. Fortunately, there is a closed formula for sums of Fibonacci numbers whose indices are an arithmetic progression, for 0 < a <= s
∑_{k=0}^{m} Fib(a + k*s) = (Fib(a + (m+1)*s) - (-1)^s * Fib(a + m*s) - (-1)^a * Fib(s - a) - Fib(a)) / D(s)
where
D(s) = Luc(s) - 1 - (-1)^s
and Luc(k) is the k-th Lucas number, Luc(k) = Fib(k+1) + Fib(k-1).
For our purposes, we only need the Fibonacci numbers modulo 10^9 + 7, then the division must be replaced by a multiplication with the modular inverse of D(s).
Using these facts, the number of divisors of F(n) modulo 10^9+7 can be computed in the allowed time for n <= 10^6 (about 0.06 seconds on my old 32-bit box), although with Python, on the testing machines, further optimisations might be necessary.
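A Python sketch of the two ingredients, fast-doubling Fibonacci numbers mod 10^9+7 and the closed formula for the arithmetic-progression sum (function names are mine):

```python
MOD = 10**9 + 7

def fib_pair(n, m):
    """Fast doubling: return (Fib(n) mod m, Fib(n+1) mod m)."""
    if n == 0:
        return 0, 1
    a, b = fib_pair(n >> 1, m)
    c = a * (2 * b - a) % m          # Fib(2k)
    d = (a * a + b * b) % m          # Fib(2k+1)
    return (c, d) if n & 1 == 0 else (d, (c + d) % m)

def fib(n, m=MOD):
    return fib_pair(n, m)[0]

def fib_ap_sum(a, s, m_terms):
    """Sum of Fib(a + k*s) for k = 0..m_terms, mod MOD, for 0 < a <= s,
    using the closed formula with D(s) = Luc(s) - 1 - (-1)^s."""
    luc = (fib(s + 1) + fib(s - 1)) % MOD          # Lucas number Luc(s)
    sign_s = 1 if s % 2 == 0 else -1
    sign_a = 1 if a % 2 == 0 else -1
    d = (luc - 1 - sign_s) % MOD
    num = (fib(a + (m_terms + 1) * s) - sign_s * fib(a + m_terms * s)
           - sign_a * fib(s - a) - fib(a)) % MOD
    return num * pow(d, -1, MOD) % MOD             # division -> modular inverse
```

For instance, fib_ap_sum(2, 3, 2) computes Fib(2) + Fib(5) + Fib(8) = 1 + 5 + 21 = 27.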

time complexity for the following

int i = 1, s = 1;
while (s <= n)
{
    i++;
    s = s + i;
}
The time complexity for this is O(sqrt(n)).
I do not understand how, since the sum grows like 1 + 2 + ... + k.
Please help.
s(k) = 1 + 2 + 3 + ... + k = (k + 1) * k / 2
The loop runs until s(k) > n, so it takes k steps where k is the smallest such value. Solving n = (k + 1) * k / 2 gives k = -1/2 + sqrt(1 + 8n)/2;
ignoring constants and coefficients, O(-1/2 + sqrt(1 + 8n)/2) = O(sqrt(n)).
Let the loop execute x times. Now, the loop will execute as long as s is at most n.
We have:
After the 1st iteration:
s = s + 1
After the 2nd iteration:
s = s + 1 + 2
As this goes on for x iterations, finally we will have
1 + 2 + ... + x <= n
=> (x * (x + 1)) / 2 <= n
=> x^2 = O(n)
=> x = O(sqrt(n))
This computes the sum s(k)=1+2+...+k and stops when s(k) > n.
Since s(k)=k*(k+1)/2, the number of iterations required for s(k) to exceed n is O(sqrt(n)).
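A quick empirical check in Python (the function name is mine) confirms that the iteration count tracks sqrt(2n):

```python
from math import isqrt

def count_iterations(n):
    """Run the loop from the question and count its iterations."""
    i, s, steps = 1, 1, 0
    while s <= n:
        i += 1
        s += i
        steps += 1
    return steps

# The step count stays within a couple of units of sqrt(2n):
for n in (100, 10**4, 10**6):
    print(n, count_iterations(n), isqrt(2 * n))
```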

Algorithm Recurrence formula

I am reading Algorithms in C++ by Robert Sedgewick. In the section on basic recurrences, it is mentioned that "this recurrence arises for a recursive program that loops through the input to eliminate one item":
C_N = C_(N-1) + N, for N >= 2 with C_1 = 1.
C_N is about N^2/2. Evaluating the sum 1 + 2 + ... + N is elementary. In addition to this, the following statement is mentioned:
"This result - twice the value sought - consists of N terms, each of which sums to N + 1."
I need help understanding the above statement: what are the N terms here, how does each sum to N + 1, and what does "twice the value sought" mean?
Thanks for your help.
I think he refers to this basic mathematical trick to calculate that sum. Although, it's difficult to conclude anything from such short passage you cited.
Let's assume N = 100. E.g., the sum is 1 + 2 + 3 + .. + 99 + 100.
Now, let's group pairs of elements with sum 101: 1 + 100, 2 + 99, 3 + 98, ..., 50 + 51. That gives us 50 (N/2) pairs with sum 101 (N + 1) in each: thus the overall sum is 50*101.
Anyway, could you provide a bit more context to that quote?
The recurrence formula means:
C1 = 1
C2 = C1 + 2 = 1 + 2 = 3
C3 = C2 + 3 = 3 + 3 = 6
C4 = C3 + 4 = 6 + 4 = 10
C5 = C4 + 5 = 10 + 5 = 15
etc.
But you can also write it directly:
C5 = 1 + 2 + 3 + 4 + 5 = 15
And then use the old trick:
    1   +   2   +   3   + ... +   N
+   N   +  N-1  +  N-2  + ... +   1
------------------------------------
  (N+1) + (N+1) + (N+1) + ... + (N+1)  =  (N+1) * N
From there we get: 1 + 2 + ... + N = N * (N+1) / 2
For the anecdote, the above formula was found by the great mathematician Carl Friedrich Gauss, when he was at school.
From there we can deduce that the recursive algorithm is O(N^2), and that is probably what Robert Sedgewick is doing.
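A tiny Python check (the helper name is mine) that the recurrence indeed matches Gauss's closed form N * (N + 1) / 2:

```python
def c_rec(n):
    """C_N = C_(N-1) + N with C_1 = 1, evaluated by unrolling the recurrence."""
    total = 1
    for i in range(2, n + 1):
        total += i
    return total

# The recurrence and the closed form agree for every N:
assert all(c_rec(n) == n * (n + 1) // 2 for n in range(1, 50))
```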
