Solution to the equation using the generalization of the Master Theorem - algorithm

I'm asking for help in explaining how the proof works. I've seen examples of it, but have trouble understanding it.
Prove the following:
The solution to the equation
T(n) = a·T(n/b) + Θ(n^k log^p n), where a ≥ 1, b > 1, p ≥ 0, is
T(n) = O(n^(log_b a)) if a > b^k
T(n) = O(n^k log^(p+1) n) if a = b^k
T(n) = O(n^k log^p n) if a < b^k
This is a generalization of the Master Theorem.

For x = log(n)/log(b) one has n = b^x. Dividing the equation by a^x gives
T(b^x)/a^x = T(b^(x-1))/a^(x-1) + Θ((b^k/a)^x · x^p · log^p b)
The summation of terms m^p·q^m for m < x is
bounded by a constant for q < 1,
growing like x^(p+1) for q = 1,
dominated by the last term x^p·q^x for q > 1.
Recognizing q = b^k/a and substituting back gives the result:
for a > b^k (q < 1): T(b^x) = O(a^x), i.e. T(n) = O(n^(log_b a))
for a = b^k (q = 1): T(b^x) = O(x^(p+1)·a^x), i.e. T(n) = O(n^(log_b a)·log^(p+1) n)
for a < b^k (q > 1): T(b^x) = O(x^p·b^(k·x)), i.e. T(n) = O(n^k·log^p n)
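A minimal numeric sanity check of the middle case, under the assumptions that the base case is T(1) = 1 and that a = 4, b = 2, k = 2, p = 1 are taken as sample values (so a = b^k); natural log is used since constant factors don't matter:

/* Evaluate T(n) = a*T(n/b) + n^k * (log n)^p for a = 4, b = 2, k = 2, p = 1
   and print T(n) / (n^k * (log n)^(p+1)); if T(n) = Theta(n^k log^(p+1) n),
   this ratio should level off as n grows. */
#include <stdio.h>
#include <math.h>

double T(double n) {
    if (n <= 1.0) return 1.0;                 /* assumed base case */
    return 4.0 * T(n / 2.0) + n * n * log(n); /* a*T(n/b) + n^k log^p n */
}

int main(void) {
    for (double n = 1024; n <= 1048576; n *= 4) {
        printf("n = %8.0f   T(n)/(n^2 log^2 n) = %.3f\n",
               n, T(n) / (n * n * log(n) * log(n)));
    }
    return 0;
}

The printed ratio decreases slowly toward a constant (roughly 1/((p+1)·ln b) ≈ 0.72 for these sample values), which is what the a = b^k case predicts; the other two cases can be checked the same way.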

Related

What is the complexity of function f(n) with n = f(n)·log(f(n))

What is the complexity of the function f(n), preferably in Big-O notation, where f(n) satisfies the condition n = f(n)·log(f(n)), f(n) > 1? Let's assume log is base 2.
I tried to isolate f(n) from the condition but could not get it done.
After using Excel to plot the function f(n), it seems that f(n) = O(n^2), but I can't figure out how to derive it.
I think the complexity is even lower than O(n) - namely, O(n/ln(n)). Semi-proof:
(substituting n/ln(n) for f(n))
RHS = n/ln(n) · ln(n/ln(n)) = n/ln(n) · (ln(n) − ln(ln(n)))
    = n − n·ln(ln(n))/ln(n) = n·(1 − ln(ln(n))/ln(n)) = n·Θ(1) = Θ(n) = LHS
For clarity, I skipped Theta notation almost everywhere.
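A small numeric check of this claim (assuming log base 2, as in the question): solve n = f·log2(f) for f by bisection and compare f to n/log2(n). The helper name solve_f and the sample points are just illustrative.

/* Solve n = f * log2(f) for f by bisection, then print f(n)*log2(n)/n.
   If f(n) = Theta(n / log n), this ratio should settle near a constant. */
#include <stdio.h>
#include <math.h>

double solve_f(double n) {
    double lo = 1.0, hi = n;              /* f lies between 1 and n for n >= 2 */
    for (int i = 0; i < 200; i++) {       /* bisect the increasing map f -> f*log2(f) */
        double mid = 0.5 * (lo + hi);
        if (mid * log2(mid) < n) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main(void) {
    for (double n = 1e3; n <= 1e15; n *= 1e3) {
        double f = solve_f(n);
        printf("n = %.0e   f(n) = %.3e   f(n)*log2(n)/n = %.3f\n",
               n, f, f * log2(n) / n);
    }
    return 0;
}

The last column creeps toward 1 (slowly, because of the log log correction), consistent with f(n) = Θ(n/log n); in particular f(n) stays far below the O(n^2) guess in the question.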
Comments are getting long, so here's a sketch of a proof for you. This is probably homework, so please make sure you learn something instead of just copying it down.
In order to show that f is O(n), you have to show that there is an M and n1 where f(n) < M|n| for all n > n1.
We know that n = f(n) log(f(n)), so M |n| = M |f(n)| |log(f(n))|.
So what we are trying to find is an M and n1 for which
f(n) < M |n| = M |f(n)| |log(f(n))|
for n > n1.
n, f, and log f are all positive, so we can drop the |·|. Our goal is then to find an M and n1 for which
f(n) < M f(n) log(f(n)) = M n
is true for all n > n1. Pick M = 1, n1 = 10; then
f(n) < f(n) log(f(10)) <= f(n) log(f(n)) = n (where M is now set to 1)
for n > n1. f(n) log(f(10)) <= f(n) log(f(n)) is true because log(f(n)) is monotonic for n>n1 (homework exercise: show that this is true). f(n) < f(n) log(f(10)) is trivially true because log(f(10)) > 1.
This shows, then, that f(n) is O(n).

Difference between solving T(n) = 2T(n/2) + n/log n and T(n) = 4T(n/2) + n/log n using Master Method

I recently stumbled upon a resource where recurrences of the 2T(n/2) + n/log n type were declared unsolvable by the Master Method (MM).
I accepted it as a lemma, until today, when another resource proved to be a contradiction (in some sense).
As per the resource (link below): Q7 and Q18 in it are recurrences 1 and 2 of this question, respectively. The answer to Q7 says it can't be solved by MM, giving the reason 'polynomial difference between f(n) and n^(log_b a)'.
On the contrary, the answer to Q18 solves the second recurrence (in the question here) using case 1.
http://www.csd.uwo.ca/~moreno/CS433-CS9624/Resources/master.pdf
Can somebody please clear the confusion?
If you try to apply the master theorem to
T(n) = 2T(n/2) + n/log n
You consider a = 2, b = 2, which means log_b(a) = 1.
Can you apply case 1? You need 0 < c < log_b(a) = 1 with n/log n = O(n^c). No, because n/log n grows faster than n^c for every c < 1.
Can you apply case 2? Here c = 1, and you would need some k ≥ 0 such that n/log n = Theta(n log^k n). No such k exists (it would have to be k = −1).
Can you apply case 3? You need c > 1 with n/log n = Ω(n^c). No, because n/log n is not even Ω(n).
If you try to apply the master theorem to
T(n) = 4T(n/2) + n/log n
You consider a = 4, b = 2, which means log_b(a) = 2.
Can you apply case 1? You need some c < log_b(a) = 2 with n/log n = O(n^c): is n/log n = O(n^1)? Yes, indeed n/log n = O(n). Thus we have
T(n) = Theta(n^2)
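A rough numeric illustration of this case, assuming T(1) = 1 and log base 2:

/* Evaluate T(n) = 4*T(n/2) + n/log2(n) with T(1) = 1 for powers of two
   and print T(n)/n^2; a ratio that levels off supports T(n) = Theta(n^2). */
#include <stdio.h>
#include <math.h>

double T(double n) {
    if (n <= 1.0) return 1.0;             /* assumed base case */
    return 4.0 * T(n / 2.0) + n / log2(n);
}

int main(void) {
    for (double n = 1024; n <= 1073741824.0; n *= 32) {
        printf("n = %12.0f   T(n)/n^2 = %.4f\n", n, T(n) / (n * n));
    }
    return 0;
}

The ratio settles quickly near a constant (about 1 + ln 2 ≈ 1.69 with this base case), as Θ(n^2) predicts.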
Note: explanation of the 0 < c < 1 claim in case 1.
This part of case 1 is really a question of analysis.
f(x) = x/log(x), g(x) = x^c, 0 < c < 1
f(x) is O(g(x)) if f(x) < M·g(x) beyond some x0, for some finite M, so
f(x) is O(g(x)) if f(x)/g(x) < M (both functions are positive).
This isn't true here. Put y = log x:
f2(y) = e^y/y, g2(y) = e^(c·y), 0 < c < 1
f2(y)/g2(y) = (e^y/y) / e^(c·y) = e^((1−c)·y)/y, 0 < c < 1
and e^((1−c)·y)/y → ∞ as y → ∞, hence
f(x)/g(x) → ∞ as x → ∞.
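To see the divergence numerically, here is a tiny check with c = 0.9 (any c < 1 behaves the same way, only more slowly the closer c is to 1):

/* (x / ln x) / x^c with c = 0.9 keeps growing, illustrating that
   x/log x is not O(x^c) for any c < 1. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double c = 0.9;
    for (double x = 1e6; x < 1e32; x *= 1e6) {
        printf("x = %.0e   (x/ln x)/x^c = %.2f\n", x, (x / log(x)) / pow(x, c));
    }
    return 0;
}

The ratio grows without bound, so x/log x cannot be O(x^c) for c < 1.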
This is because in Q18 we have a = 4 and b = 2, thus we get that n^(log_b a) = n^2, which has an exponent strictly bigger than the exponent of the polynomial part of n/log(n).

Solving Recurrence relation: T(n) = 3T(n/5) + lgn * lgn

Consider the following recurrence
T(n) = 3T(n/5) + lgn * lgn
What is the value of T(n)?
(A) Θ(n^(log_5 3))
(B) Θ(n^(log_3 5))
(C) Θ(n log n)
(D) Θ(log n)
Answer is (A)
My Approach :
lg n · lg n = Θ(n), since c2·lg n < 2·lg lg n < c1·lg n for some n > n0.
The above inequality is shown in this picture for c2 = 0.1 and c1 = 1.
Also, log_5 3 < 1.
Hence by the Master Theorem the answer has to be Θ(n), and none of the answers match. How do I solve this problem?
Your claim that lg n · lg n = Θ(n) is false. Notice that (lg n)^2 / n tends toward 0 as n goes to infinity. You can see this using l'Hopital's rule (dropping constant factors along the way):
lim (n → ∞) (lg n)^2 / n
= lim (n → ∞) 2 lg n / n
= lim (n → ∞) 2 / n
= 0
More generally, using similar reasoning, you can prove that lg n = o(n^ε) for any ε > 0.
Let's try to solve this recurrence using the Master Theorem. We see that there are three subproblems of size n/5 each, so we should look at the value of log_5 3. Since (lg n)^2 = o(n^(log_5 3)), we see that the recursion is bottom-heavy and can conclude that the recurrence solves to Θ(n^(log_5 3)), which is answer (A) in your list up above.
Hope this helps!
To apply the Master Theorem we should check the relation between
n^(log_5 3) ≈ n^0.682 and (lg n)^2
Unfortunately (lg n)^2 ≠ 2·lg n: it is lg(n^2) that equals 2·lg n.
Also, there is a big difference, in the Master Theorem, between f(n) being O(n^(log_b a − ε)) and being Θ(n^(log_b a)): if the former holds we can apply case 1, if the latter holds, case 2 of the theorem.
With just a glance, it looks highly unlikely that (lg n)^2 = Ω(n^0.682), so let's try to prove that (lg n)^2 = O(n^0.682), i.e.:
∃ n0, c ∈ N+ such that for n > n0, (lg n)^2 < c·n^0.682
Let's take the square root of both sides (for n > 1 both sides are positive, so the direction of the inequality is preserved):
lg(n) < c1·n^0.341 (where c1 = sqrt(c))
Now we can assume that lg(n) = log2(n) (otherwise the multiplicative factor can be absorbed by our constant; as you know, constant factors don't matter in asymptotic analysis) and exponentiate both sides:
2^(lg(n)) < 2^(c1·n^0.341) ⟺ n < 2^(c1·n^0.341) ⟺ n < (2^(n^0.341))^c1
which holds with c1 = 1 for all sufficiently large n, since 2^(n^0.341) grows much faster than n.
Therefore it does hold that f(n) = O(n^(log_b a − ε)), so we can apply case 1 of the Master Theorem and conclude that
T(n) = Θ(n^(log_5 3))
Same result, a bit more formally.
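As a quick numeric cross-check of the key fact used by both answers, namely that (lg n)^2 grows more slowly than n^(log_5 3):

/* Print (lg n)^2 / n^(log_5 3) for growing n; the ratio heads to 0,
   which is exactly the f(n) = O(n^(log_b a - eps)) condition of case 1. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double e = log(3.0) / log(5.0);          /* log_5 3, about 0.682 */
    for (double n = 100; n <= 1e12; n *= 100) {
        printf("n = %.0e   (lg n)^2 / n^0.682 = %.6f\n",
               n, pow(log2(n), 2.0) / pow(n, e));
    }
    return 0;
}

The ratio drops steadily toward 0 over the printed range.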

Time complexity of the program using recurrence equation

I want to find out the time complexity of the program using recurrence equations.
That is:
int g(int x);   /* forward declaration, since f and g call each other */

int f(int x)
{
    if (x < 1) return 1;
    else return f(x - 1) + g(x);
}

int g(int x)
{
    if (x < 2) return 1;
    else return f(x - 1) + g(x / 2);
}
I wrote its recurrence equation and tried to solve it, but it kept getting more complex:
T(n) = T(n-1) + g(n) + c
     = T(n-2) + g(n-1) + g(n) + c + c
     = T(n-3) + g(n-2) + g(n-1) + g(n) + c + c + c
     = T(n-4) + g(n-3) + g(n-2) + g(n-1) + g(n) + c + c + c + c
     ...
After the k-th expansion:
     = kc + g(n) + g(n-1) + g(n-2) + ... + g(n-k+1) + T(n-k)
Let the argument reach 1 at the k-th step; then n - k = 1, so k = n - 1.
Now I end up with this:
T(n) = (n-1)c + g(n) + g(n-1) + g(n-2) + ... + g(2) + T(1)
I'm not able to solve it further.
Anyway, if we count the number of function calls in this program, it can easily be seen that the time complexity is exponential, but I want to prove it using the recurrence. How can that be done?
The explanation in Answer 1 looks correct; I did similar work.
The most difficult task for this code is to write its recurrence equation. I have drawn another diagram and identified some patterns; I think the diagram can help suggest what the recurrence equation could be.
And I came up with this equation, though I'm not sure it is right. Please help:
T(n) = 2*T(n-1) + c*log(n)
Ok, I think I have been able to prove that f(x) = Theta(2^x) (note that the time complexity is the same). This also proves that g(x) = Theta(2^x) as f(x) > g(x) > f(x-1).
First as everyone noted, it is easy to prove that f(x) = Omega(2^x).
Now we have the relation that f(x) <= 2 f(x-1) + f(x/2) (since f(x) > g(x))
We will show that, for sufficiently large x, there is some constant K > 0 such that
f(x) <= K*H(x), where H(x) = (2 + 1/x)^x
This implies that f(x) = Theta(2^x), as H(x) = Theta(2^x), which itself follows from the fact that H(x)/2^x -> sqrt(e) as x -> infinity (the limit is easy to check, e.g. with Wolfram Alpha).
Now (warning: heavier math; perhaps cs.stackexchange or math.stackexchange is better suited),
the series expansion near x = infinity (which Wolfram Alpha will happily produce) gives
H(x) = exp(x ln(2) + 1/2 + O(1/x))
And again, expanding the series for x = infinity, we have that
H(x) - 2H(x-1) = [1/(8x^2) + O(1/x^3)]·exp(x ln(2) + 1/2 + O(1/x))
and so
[H(x) - 2H(x-1)]/H(x/2) -> infinity as x -> infinity
Thus, for sufficiently large x (say x > L) we have the inequality
H(x) >= 2H(x-1) + H(x/2)
Now there is some K (dependent only on L (for instance K = f(2L))) such that
f(x) <= K*H(x) for all x <= 2L
Now we proceed by (strong) induction (you can revert to natural numbers if you want to)
f(x+1) <= 2f(x) + f((x+1)/2)
By induction, the right side is
<= 2*K*H(x) + K*H((x+1)/2)
And we proved earlier that
2*H(x) + H((x+1)/2) <= H(x+1)
Thus f(x+1) <= K * H(x+1)
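A small numeric check of the two facts about H(x) = (2 + 1/x)^x used above, treating x as a real variable as in the proof:

/* H(x)/2^x approaches sqrt(e) ~ 1.6487, and for large enough x the
   difference H(x) - 2H(x-1) exceeds H(x/2), i.e. H(x) >= 2H(x-1) + H(x/2). */
#include <stdio.h>
#include <math.h>

double H(double x) { return pow(2.0 + 1.0 / x, x); }

int main(void) {
    for (double x = 10; x <= 160; x *= 2) {
        printf("x = %4.0f   H(x)/2^x = %.4f   (H(x) - 2H(x-1))/H(x/2) = %.3g\n",
               x, H(x) / pow(2.0, x), (H(x) - 2.0 * H(x - 1.0)) / H(x / 2.0));
    }
    return 0;
}

The last column eventually exceeds 1 and then grows rapidly, which is the H(x) >= 2H(x-1) + H(x/2) inequality the induction relies on.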
Using memoisation, both functions can easily be computed in O(n) time. But the program as written takes at least 2^n steps (it is Ω(2^n)), and thus is a very inefficient way of computing f(n) and g(n).
To prove that the program takes at most O((2+epsilon)^n) time for any epsilon > 0:
Let F(n) and G(n) be the number of function calls that are made in evaluating f(n) and g(n), respectively. Clearly (counting the addition as 1 function call):
F(0) = 1; F(n) = F(n-1) + G(n) + 1
G(1) = 1; G(n) = F(n-1) + G(n/2) + 1
Then one can prove:
F and G are monotonic
F > G
Define H(1) = 2; H(n) = 2 * H(n-1) + H(n/2) + 1
clearly, H > F
for all n, H(n) > 2 * H(n-1)
hence H(n/2) / H(n-1) -> 0 for sufficiently large n
hence H(n) < (2 + epsilon) * H(n-1) for all epsilon > 0 and sufficiently large n
hence H in O((2 + epsilon)^n) for any epsilon > 0
(Edit: originally I concluded here that the upper bound is O(2^n). That is incorrect, as nhahtdh pointed out, but see below.)
So this is the best I can prove... Because G < F < H, they are also in O((2 + epsilon)^n) for any epsilon > 0.
Postscript (after seeing Mr Knoothe's solution): because IMHO a good mathematical proof gives insight rather than lots of formulas, and SO exists for all those future generations (hi gals!):
For many algorithms, calculating f(n+1) involves twice (thrice, ...) the amount of work of f(n), plus something more. If this something more becomes relatively smaller with increasing n (which is often the case), then using a fixed epsilon as above is not tight.
Replacing the epsilon above by some decreasing function ε(n) of n will in many cases (if ε decreases fast enough, say ε(n) = 1/n) yield an upper bound O((2 + ε(n))^n) = O(2^n).
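For a concrete look at both points above (the memoised linear-time computation and the call-count recurrences), here is a small bottom-up sketch; the table size N = 50 is arbitrary:

/* Tabulate the call counts F and G defined above, plus the memoised values
   of f and g themselves; each table is filled in one O(n) pass.  The printed
   ratios stay bounded, in line with the roughly-2^n growth discussed here. */
#include <stdio.h>
#include <math.h>

#define N 50

int main(void) {
    double F[N + 1], G[N + 1], f[N + 1], g[N + 1];
    /* base cases from the program: f(x) = 1 for x < 1, g(x) = 1 for x < 2 */
    F[0] = 1; G[0] = 1; G[1] = 1; f[0] = 1; g[0] = 1; g[1] = 1;
    for (int n = 1; n <= N; n++) {
        if (n >= 2) G[n] = F[n - 1] + G[n / 2] + 1;   /* calls made by g(n) */
        F[n] = F[n - 1] + G[n] + 1;                   /* calls made by f(n) */
        if (n >= 2) g[n] = f[n - 1] + g[n / 2];       /* memoised value g(n) */
        f[n] = f[n - 1] + g[n];                       /* memoised value f(n) */
    }
    for (int n = 10; n <= N; n += 10)
        printf("n = %2d   F(n)/2^n = %.3f   f(n)/2^n = %.3f\n",
               n, F[n] / pow(2.0, n), f[n] / pow(2.0, n));
    return 0;
}

Both ratios grow a little and then flatten out, matching f(x) = Theta(2^x) from the other answer and the refined O(2^n) bound sketched in the postscript.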
Let f(0)=0 and g(0)=0
From the function we have,
f(x) = f(x - 1) + g(x)
g(x) = f(x - 1) + g(x/2)
Substituting g(x) in f(x) we get,
f(x) = f(x-1) + f(x -1) + g(x/2)
∴f(x) = 2f(x-1) + g(x/2)
Expanding this we get,
f(x) = 2f(x-1)+f(x/2-1)+f(x/4-1)+ ... + f(1)
Let s(x) be a function defined as follows,
s(x) = 2s(x-1)
Now clearly f(x)=Ω(s(x)).
Since s(x) = Θ(2^x), it follows that f(x) = Ω(2^x).
I think it is clear that f(n) > 2^n, because f(n) > h(n) = 2h(n-1) = 2^n.
Now I claim that for every n there is an ε such that
f(n) < (2+ε)^n. To see this, let's do it by induction; but to make it more approachable, at first I'll use ε = 1 to show f(n) <= 3^n, and then I'll extend it.
We will use strong induction: suppose for every m < n, f(m) < 3^m. Then we have
f(n) = 2f(n-1) + f(n/2 - 1) + f(n/4 - 1) + ... + f(0)
but for this part:
A = f(n/2 - 1) + f(n/4 - 1) + ... + f(0)
we have:
f(n/2) = 2f(n/2 - 1) + f(n/4 - 1) + ... + f(0) ==>
A <= f(n/2) [1]
So we can rewrite f(n):
f(n) = 2f(n-1) + A <= 2f(n-1) + f(n/2),
Now let's get back to our claim:
f(n) <= 2f(n-1) + f(n/2) < 2·3^(n-1) + 3^(n/2) ==>
f(n) < 2·3^(n-1) + 3^(n-1) ==>
f(n) < 3^n. [2]
By [2], the proof of f(n) ∈ O(3^n) is complete.
But if you want to extend this to the form (2+ε)^n, just use [1] with the same inequality; then we have:
for ε > 1/(2+ε)^(n/2 − 1):  f(n) < (2+ε)^n. [3]
Also, by [3] you can say that for every n there is an ε such that f(n) < (2+ε)^n; in fact there is a constant ε such that for n > n0, f(n) ∈ O((2+ε)^n). [4]
Now we can use Wolfram Alpha like @Knoothe, setting ε = 1/n; then we have:
f(n) < (2+1/n)^n, which results in f(n) < e·2^n, and by our simple lower bound at the start we have f(n) ∈ Θ(2^n). [5]
P.S.: I didn't calculate epsilon exactly; you can do it simply with pen and paper. I think this epsilon is not exactly right, but it is easy to find the correct one; if it turns out to be hard, tell me and I'll write it out.

What is the recurrence if the base case is O(n)?

We have to create an algorithm and find and solve its recurrence. Finding the recurrence has me stumped..
foo(A, C)
    if (C.Length = 0)
        Sum(A)
    else
        t = C.Pop()
        A.Push(t)
        foo(A, C)
        foo(A, C)
Initially A is empty and C.Length = n. I can't give the real algorithm because that's not allowed.
My instructor told me that I might try to use 2 variables. This is what I came up with:
T(n, i) = { n                  if i = 0
          { 2*T(n, i-1) + C    if i != 0
I couldn't solve it, so I also tried to solve a recurrence with just one variable:
T(n) = { n0               if n = 0
       { 2*T(n-1) + C     if n != 0
Where n0 is the initial value of n.
How do you form a recurrence from an algorithm where the complexity of the base case is O(n)?
Let f(n) be the complexity if C is of size n. Let N be the original size of C.
Then f(0) = N and f(n) = 2 * f(n - 1) + c.
This has the solution f(n) = N * 2^n + (2^n - 1) * c, and so f(N) = O(N * 2^N).
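A quick sanity check of that closed form against the recurrence, with arbitrary sample values N = 5 and c = 3:

/* Compare f(n) computed from f(0) = N, f(n) = 2*f(n-1) + c
   with the closed form N*2^n + (2^n - 1)*c. */
#include <stdio.h>

int main(void) {
    long long N = 5, c = 3;
    long long f = N;                                   /* f(0) = N */
    for (int n = 1; n <= 20; n++) {
        f = 2 * f + c;                                 /* the recurrence */
        long long closed = N * (1LL << n) + ((1LL << n) - 1) * c;
        printf("n = %2d   recurrence = %lld   closed form = %lld\n", n, f, closed);
    }
    return 0;
}

The two columns agree, and plugging n = N back in gives the O(N·2^N) total.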

Resources