What is the runtime of this algorithm? (Recursive Pascal's triangle)

Given the following function:
Function f(n, m):
    if n == 0 or m == 0: return 1
    return f(n-1, m) + f(n, m-1)

What's the runtime complexity of f? I understand how to do it quick and dirty, but how do I properly characterize it? Is it O(2^(m*n))?

This is an instance of Pascal's triangle: every element is the sum of the two elements just above it, the sides being all ones.
So f(n, m) = (n + m)! / (n! * m!).
Now to know the number of calls to f required to compute f(n, m), you can construct a modified Pascal's triangle: instead of the sum of the two elements above, take 1 + sum (the call itself plus the two recursive calls).
Draw the modified triangle and you will quickly convince yourself that it is exactly 2*f(n, m) - 1.
You can obtain the asymptotic behavior of the binomial coefficients from Stirling's approximation: http://en.wikipedia.org/wiki/Binomial_coefficient#Bounds_and_asymptotic_formulas
f(n, m) ~ (n + m)^(n + m) / (n^n * m^m)
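As a quick sanity check (a sketch of my own, not from the original answer), one can compare the recursion against the closed form using Python's math.comb:

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n, m):
    # direct transcription of the recursion in the question
    if n == 0 or m == 0:
        return 1
    return f(n - 1, m) + f(n, m - 1)

# the closed form: f(n, m) = C(n + m, m) = (n + m)! / (n! * m!)
for n in range(8):
    for m in range(8):
        assert f(n, m) == math.comb(n + m, m)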

The runtime of f(n, m) is in O(f(n, m)). This is easily verified by the following observation:
Function g(n, m):
    if n == 0 or m == 0: return 1
    return g(n-1, m) + g(n, m-1) + 1
The function f is called exactly as often as g. Furthermore, the function g is called exactly g(n, m) times to evaluate the result of g(n, m). Likewise, the function f is called exactly g(n, m) = 2*f(n, m) - 1 times in order to evaluate the result of f(n, m).
As @Yves Daoust points out in his answer, f(n, m) = (n + m)! / (n! * m!), therefore you get a non-recursive runtime of O((n + m)! / (n! * m!)) for f.
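To make the call-count claim concrete, here is a small Python sketch (my own illustration) that counts the actual invocations and checks them against 2*f(n, m) - 1:

def f_counted(n, m, counter):
    # counter[0] accumulates the total number of calls
    counter[0] += 1
    if n == 0 or m == 0:
        return 1
    return f_counted(n - 1, m, counter) + f_counted(n, m - 1, counter)

for n in range(6):
    for m in range(6):
        counter = [0]
        value = f_counted(n, m, counter)
        assert counter[0] == 2 * value - 1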

Understanding the Recursive Function
f(n, 0) = 1
f(0, m) = 1
f(n, m) = f(n - 1, m) + f(n, m - 1)
The values look like Pascal's triangle to me:

m\n  0  1  2  3  4 ..
 0   1  1  1  1  1 ..
 1   1  2  3  4
 2   1  3  6
 3   1  4  .
 4   1  .
 .   .
 .   .
Solving the Recursion Equation
The values of Pascal's triangle can be expressed as binomial coefficients. Translating the coordinates one gets the solution for f:
f(n, m) = C(n + m, m)
        = (n + m)! / (m! * (n + m - m)!)
        = (n + m)! / (n! * m!)
which is a nice term, symmetric in both arguments n and m. (Final term first given by @Yves Daoust in this discussion.)
Pascal's Rule
The recursion equation of f can be derived by using the symmetry of the binomial coefficients and Pascal's Rule:
f(n, m) = C(n + m, n)
        = C(n + m, m)                                ; symmetry
        = C((n + m) - 1, m) + C((n + m) - 1, m - 1)  ; Pascal's Rule
        = C((n - 1) + m, m) + C(n + (m - 1), m - 1)
        = f(n - 1, m) + f(n, m - 1)
Determining the Number of Calls
The "number of calls of f" function F is similar to f itself; we just have to count the call itself on top of the two recursive calls:
F(0, m) = F(n, 0) = 1, otherwise
F(n, m) = 1 + F(n - 1, m) + F(n, m - 1)
(Given first by @blubb in this discussion.)
Understanding the Number of Calls Function
If we write it down, we get another triangle scheme:
1 1 1 1 1 ..
1 3 5 7
1 5 11
1 7 .
1 .
.
.
Comparing the triangles value by value, one guesses
F(n, m) = 2 f(n, m) - 1 (*)
(Result first suggested by @blubb in this discussion)
Proof
We get
F(0, m) = 2 f(0, m) - 1 ; using (*)
= 1 ; yields boundary condition for F
F(n, 0) = 2 f(n, 0) - 1
= 1
as it should. Examining the otherwise clause, we see that
F(n, m) = 2 f(n, m) - 1 ; assumption
= 2 ( f(n - 1, m) + f(n, m - 1) ) - 1 ; definition f
= 1 + (2 f(n - 1, m) - 1) + (2 f(n, m - 1) - 1) ; algebra
= 1 + F(n - 1, m) + F(n, m - 1) ; 2 * assumption
Thus if we use (*) and the otherwise clause for f, the otherwise clause for F results.
As the finite difference equation and the start condition for F hold, we know it is F (uniqueness of the solution).
Estimating the Asymptotic Behaviour of the Number of Calls
Now on to calculating / estimating the values of F (i.e. the runtime of your algorithm).
As
F = 2 f - 1
we see that
O(F) = O(f).
So the runtime of this algorithm is
O( (n + m)! / (n! m!) )
(Result first given by @Yves Daoust in this discussion)
Approximating the Runtime
Using the Stirling approximation
n! ~= sqrt(2 pi n) (n / e)^n
one can get a form free of hard-to-calculate factorials. One gets
f(n, m) ~= 1/sqrt(2 pi) * sqrt((n + m) / (n m)) * [(n + m)^(n + m)] / (n^n m^m)
thus arriving at
O( sqrt((n + m) / (n m)) * [(n + m)^(n + m)] / (n^n m^m) )
(Use of Stirling's formula first suggested by @Yves Daoust in this discussion)
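To see how tight the approximation is (a check of my own, not part of the original answer), compare it numerically against the exact binomial coefficient:

import math

def stirling_approx(n, m):
    # Stirling-based approximation of C(n + m, m)
    return (math.sqrt((n + m) / (2 * math.pi * n * m))
            * (n + m) ** (n + m) / (n ** n * m ** m))

for n, m in [(5, 5), (10, 10), (10, 20)]:
    exact = math.comb(n + m, m)
    print(n, m, exact, stirling_approx(n, m) / exact)  # ratio tends to 1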

Related

Big(O) for this algorithm

What is the big O of this algorithm?
I know that it looks similar to O(log(n)), but instead of i being halved each time, it shrinks by an exponentially growing factor:
sum = 0
i = n
j = 2
while (i >= 1)
    sum = sum + i
    i = i / j
    j = 2 * j
The denominator d is
d := 2^(k * (k + 1) / 2)
in the k-th iteration of the loop. Thus you have to solve for the k (with n fixed) at which d becomes larger than n, making the fraction drop below 1:
2^(k * (k + 1) / 2) > n
Entering
solve 2^(k * (k + 1) / 2) > n for k
in WolframAlpha gives k > (sqrt(8 log2(n) + 1) - 1) / 2.
Thus, you have a running time of O(sqrt(log n)) for your algorithm, once you remove the irrelevant constants from the formula.
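A quick empirical check (a sketch of my own, assuming the loop uses integer division) confirms the growth rate:

import math

def iterations(n):
    # counts the loop iterations of the algorithm in the question
    i, j, count = n, 2, 0
    while i >= 1:
        count += 1
        i = i // j   # assuming integer division
        j = 2 * j
    return count

for n in (10, 10**3, 10**6, 10**12):
    print(n, iterations(n), math.sqrt(2 * math.log2(n)))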

Modulo Arithmetic in Modified Geometric Progression

We know that sum of n terms in a Geometric Progression is given by
Sn = a1 * (1 - r^n) / (1 - r), if the series is of the form a1, a1*r, a1*r^2, a1*r^3, ..., a1*r^n.
Now I have a modified geometric progression, where the series is of the form
a1, (a1*r) mod p, (a1*r^2) mod p, (a1*r^3) mod p, ..., (a1*r^n) mod p, where a1 is the initial term, p is a prime number and r is the common ratio. The Nth term of this series is given by: (a1 * r^(N-1)) mod p.
I am trying to derive a summation formula for the above modified GP and am struggling. If anyone can throw some light on it, or advise on an efficient algorithm for finding the sum without iterating over all n terms, it would be of great help.
Note that if r is a primitive root modulo p, then we can reduce the complexity of the sum.
We have to find S = a1*1 + a1*r + a1*r^2 + ... + a1*r^n. We can write S in the closed form S = a1*(r^n - 1) / (r - 1).
Now this can be rearranged, working mod p:
a1*(r^n - 1) / (r - 1) = S (mod p)
=> a1*r^n = S*(r - 1) + a1 (mod p)
Now take the discrete logarithm with base r on both sides:
log_r(a1*r^n) = log_r(S*(r - 1) + a1 mod p)
=> log_r(a1) + n*log_r(r) = log_r(S*(r - 1) + a1 mod p)
=> n*1 = log_r(S*(r - 1) + a1 mod p) - log_r(a1) (mod (p - 1))
Note that if a1 is 1, then the last term is 0.
Let S = 6, r = 3, p = 7, a1 = 1.
Then, we want to solve for n in the following congruence:
(3^n - 1)/(3 - 1) = 6 (mod 7)
=> 3^n - 1 = (3 - 1) * 6 (mod 7)
=> 3^n = 2 * 6 + 1 (mod 7)
=> 3^n = 6 (mod 7)
Then we take the discrete logarithm of both sides:
log_3(3^n) = log_3(6) (mod (7-1))
=> n * log_3(3) = log_3(6) (mod 6)
=> n * 1 = 3 (mod 6)
=> n = 3 (mod 6)
So, n = 3.
You can use the Baby-step Giant-step algorithm to solve this in O(sqrt(p)).
If you want an implementation in code, I can provide one.
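For reference, a minimal baby-step giant-step sketch in Python (my own illustration, assuming p is prime; the original answer only offered code on request):

def bsgs(g, h, p):
    # solve g^x = h (mod p) for x, or return None; O(sqrt(p)) time and space
    m = int(p ** 0.5) + 1
    baby = {}
    e = 1
    for j in range(m):               # baby steps: store g^j
        baby.setdefault(e, j)
        e = e * g % p
    factor = pow(g, p - 1 - m, p)    # g^(-m) mod p, by Fermat's little theorem
    gamma = h % p
    for i in range(m):               # giant steps: h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * factor % p
    return None

print(bsgs(3, 6, 7))  # prints 3, matching the worked example above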
The principal relation is the same, the sum x is the solution of
a1*(r^N-1) = (r-1)*x mod p.
The difficulty to observe is that p and r - 1 may have common divisors. This is not really a problem, as r - 1 divides r^N - 1, but it still requires careful handling.
Modular division can be achieved via multiplication with the inverse, which can be computed via the extended Euclidean algorithm. Any implementation of
d, u, v = XGCD(r-1, p)
returns the greatest common divisor d and Bezout factors u, v so that
u*(r-1) + v*p = d
Multiplication with f/d, where f = a1*(r^N - 1), results in
(u*f/d)*(r-1) + (v*f/d)*p = f = a1*(r^N - 1)
so that the solution can be identified as x = u*(f/d). An implementation will thus follow the lines of
rN = powmod(r, N, p)
f = a1*(rN - 1) mod p
d, u, v = XGCD(r-1, p)
return u*(f/d) mod p
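A runnable Python version of this sketch (my own transcription; since p is prime, d = gcd(r - 1, p) is 1 whenever r is not congruent to 1 mod p, and that degenerate case is handled separately):

def xgcd(a, b):
    # extended Euclid: returns (d, u, v) with u*a + v*b = d = gcd(a, b)
    if b == 0:
        return (a, 1, 0)
    d, u, v = xgcd(b, a % b)
    return (d, v, u - (a // b) * v)

def gp_sum_mod(a1, r, N, p):
    # a1 + a1*r + ... + a1*r^(N-1), taken mod p (p prime)
    if r % p == 1:
        return a1 * N % p            # degenerate case: every term is a1 mod p
    f = a1 * (pow(r, N, p) - 1) % p
    d, u, v = xgcd(r - 1, p)         # here d == 1, so u inverts r - 1 mod p
    return u * (f // d) % p

# check against direct summation
a1, r, N, p = 2, 3, 10, 7
assert gp_sum_mod(a1, r, N, p) == sum(a1 * pow(r, k, p) for k in range(N)) % p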

Computing Predecessor and Successor

I came across an interesting question and want to discuss it in order to see how it would be approached by different people:
Let n be a natural number, the task is to implement a function f so that
f(n) = n + 1 if 2 divides n
f(n) = n - 1 if 2 does not divide n
Condition: The implementation must not use conditional constructs
My Answer is f(n) = n xor 1
You could do:
f(n) = n + 1 - 2 * (n % 2)
because
(n % 2) == 0 if 2 divides n and therefore f(n) = n + 1 - 0 and
(n % 2) == 1 if 2 does not divide n and therefore f(n) = n + 1 - 2 = n - 1
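Both answers are easy to check exhaustively; a quick Python sketch (my own illustration):

for n in range(100):
    expected = n + 1 if n % 2 == 0 else n - 1
    assert (n ^ 1) == expected                # n xor 1
    assert n + 1 - 2 * (n % 2) == expected    # the arithmetic variant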

Is there any O(log n) solution for [1 + 2a + 3a^2 + 4a^3 + ... + ba^(b-1)] MOD M?

Suppose we have the series summation
s = 1 + 2a + 3a^2 + 4a^3 + ... + ba^(b-1)
I need to find s MOD M, where M is a prime number and b is a relatively big integer.
I have found an O((log n)^2) divide and conquer solution,
where
g(n) = (1 + a + a^2 + ... + a^n) MOD M
f(a, b) = [f(a, b/2) + a^(b/2) * (f(a, b/2) + (b/2)*g(b/2))] MOD M, where b is an even number
f(a, b) = [f(a, b/2) + a^(b/2) * (f(a, b/2) + (b/2)*g(b/2)) + b*a^(b-1)] MOD M, where b is an odd number
Is there any O(log n) solution for this problem?
Yes. Observe that 1 + 2a + 3a^2 + ... + ba^(b-1) is the derivative in a of 1 + a + a^2 + a^3 + ... + a^b. (The field of formal power series covers a lot of tricks like this.) We can evaluate the latter using automatic differentiation with dual numbers in O(log b) arithmetic ops. Something like this:
def fdf(a, b, m):
    # returns (F, dF) where F = (1 + a + ... + a^b) % m
    # and dF is its derivative in a, also % m
    if b == 0:
        return (1, 0)
    elif b % 2 == 1:
        # 1 + a + ... + a^b = (1 + a) * (1 + a^2 + ... + (a^2)^((b-1)/2))
        f, df = fdf((a ** 2) % m, (b - 1) // 2, m)
        df *= 2 * a   # chain rule: d/da p(a^2) = p'(a^2) * 2a
        return ((1 + a) * f % m, (f + (1 + a) * df) % m)
    else:
        # 1 + a + ... + a^b = 1 + (a + a^2) * (1 + a^2 + ... + (a^2)^((b-2)/2))
        f, df = fdf((a ** 2) % m, (b - 2) // 2, m)
        df *= 2 * a
        return ((1 + (a + a ** 2) * f) % m,
                ((1 + 2 * a) * f + (a + a ** 2) * df) % m)
The answer is fdf(a, b, m)[1]. Note the use of the chain rule when we go from the derivative with respect to a**2 to the derivative with respect to a.
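A quick check of the sketch against the direct summation (my own addition):

a, b, m = 5, 20, 10**9 + 7
s = sum((k + 1) * pow(a, k, m) for k in range(b)) % m  # 1 + 2a + ... + ba^(b-1)
assert fdf(a, b, m)[1] == s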

Determining recurrence relation for number of multiplications of an algorithm

I have an algorithm
R(n)
{
    if (n <= 2) return n;
    else
        sum = 0;
        for i = 1 to n-2
            sum += (n-1) * R(i)
        return sum;
}
I want to find the recurrence for the number of multiplication operations performed by R(n).
If
T(n) = 0 for n <= 2,
what is T(n) for n > 2?
In addition, how can I show an exponential lower bound on the complexity of T(n)?
1) For n > 2:
T(n) = sum(T(i) + 1 for i from 1 to n - 2) =
       n - 2 + sum(T(i) for i from 1 to n - 2)
The first equation is correct because R(1), R(2), ..., R(n - 2) are called recursively, and then one more multiplication is performed after each call.
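A direct Python transcription of this recurrence (my own sketch) makes the exponential growth easy to inspect:

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # number of multiplications performed by R(n)
    if n <= 2:
        return 0
    return (n - 2) + sum(T(i) for i in range(1, n - 1))

print([T(n) for n in range(1, 11)])  # 0, 0, 1, 2, 4, 7, 12, 20, 33, 54: Fibonacci-like growth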
2) The proof of the exponential lower bound:
a) Define a sequence g(i) by:
g(0) = 0
g(1) = 1
g(2) = 1
g(3) = 1
g(i) = g(i - 2) + g(i - 3) for i >= 4
Then g(i) <= T(i + 3) for any valid i, because the recurrence for g is obtained from the recursive formula for T by dropping the term n - 2 and all summands except the last two (and all dropped terms are non-negative).
b) On the other hand, g(i) >= f(i / 2), where f(i) is a Fibonacci sequence
(here I assume that f(0) = 0, f(1) = 1, f(2) = 1 and so on).
Proof:
1) Base case: for i <= 4 this inequality holds true (one can simply check it).
2) Induction:
g(i) = g(i - 2) + g(i - 3) >= f((i - 2) / 2) + f((i - 3) / 2) >= f(i / 2 - 1) + f(i / 2 - 2) = f(i / 2).
c) Thus, f(i / 2) <= g(i) <= T(i + 3), so T is bounded below by the Fibonacci sequence. And the Fibonacci sequence grows exponentially!
