Proving recursive function complexity by induction - asymptotic-complexity

I need to prove by induction that for
T(n) = T(n-1) + c2, T(1) = c1
the running time complexity is T(n) = O(n).
In my induction step, after the base case and the induction assumption, I wrote
T(k+1) = T(k) + c2 = O(k) + c2 = O(k + 1)
but to prove this running time I need to show that there exist constants
C, N0 > 0 making the final inequality true.
Can someone tell me how to find them, or how to correct my induction?
Thanks.

To prove something by induction, you have to do three steps:
1) define the proposition P(n);
2) show that P(n_0) is true for the base case n_0;
3) assume that P(k) is true and show that P(k+1) is also true.
It seems that you don't have a concrete definition of your P(n),
so let P(n) := "there exists a constant c > 0 such that T(n) <= c*n".
Your induction step will then look like this:
Assume that P(k) is true; then T(k+1) = T(k) + c2.
By P(k), there exists a constant c_k > 0 such that T(k) <= c_k*k.
Choosing c_{k+1} = max(c_k, c2), we get
T(k+1) = T(k) + c2 <= c_k*k + c2 <= c_{k+1}*k + c_{k+1} = c_{k+1}*(k+1),
so P(k+1) is true.
This proves by induction that for every n >= n_0 = 1 there is a constant c > 0 with T(n) <= c*n. Moreover, since c_{k+1} = max(c_k, c2), the constants never grow beyond max(c1, c2), so the single choice C = max(c1, c2) (with N0 = 1) works for every n at once, which is exactly what T(n) = O(n) requires.
P.S. At MIT OCW, the course 6.042J provides nice lectures on induction and strong induction; you can practice there and build intuition.
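As an informal sanity check (not a substitute for the proof), you can unroll the recurrence numerically. Here is a small Python sketch, assuming the sample constants c1 = c2 = 1:

# T(n) = T(n-1) + c2 with T(1) = c1; the induction gives T(n) <= C*n for C = max(c1, c2).
c1, c2 = 1, 1            # assumed sample constants
C = max(c1, c2)
T = c1                   # T(1)
for n in range(2, 1001):
    T = T + c2           # apply the recurrence
    assert T <= C * n    # the bound established by the induction
print("T(1000) =", T, " bound C*n =", C * 1000)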

Related

Finding these three algorithms' running times

Hi, I am having a tough time showing the running times of these three recurrences for T(n). Assume T(0) = 0.
1) This one I know is close to Fibonacci, so I know it's close to O(n) time, but I'm having trouble showing it:
T(n) = T(n-1) + T(n-2) + 1
2) This one I am stumped on, but I think it's roughly O(log log n):
T(n) = T([sqrt(n)]) + n, for n >= 1, where [sqrt(n)] denotes sqrt(n) rounded down.
3) I believe this one is roughly O(n*log log n):
T(n) = 2T(n/2) + (n/(log n)) + n.
Thanks for the help in advance.
T(n) = T(n-1) + T(n-2) + 1
Assuming T(0) = 0 and T(1) = a, for some constant a, we notice that T(n) - T(n-1) = T(n-2) + 1. That is, the growth rate of the function is given by the function itself, which suggests this function has exponential growth.
Let T'(n) = T(n) + 1. Then T'(n) = T'(n-1) + T'(n-2), by the above recurrence relation, and we have eliminated the troublesome constant term. T(n) and T'(n) differ only by an additive constant of 1, so assuming they are both non-decreasing (they are), they have the same asymptotic complexity, albeit with different constants n0.
To show T'(n) has asymptotic growth of O(b^n), we would need some base cases, then the hypothesis that the condition holds for all n up to, say, k - 1, and then we'd need to show it for k, that is, c*b^(k-2) + c*b^(k-1) <= c*b^k. We can divide through by c*b^(k-2) to simplify this to 1 + b <= b^2. Rearranging, we get b^2 - b - 1 >= 0; the roots of b^2 - b - 1 are (1 +- sqrt(5))/2, and we must discard the negative one since we cannot use a negative number as the base of our exponential. So for b >= (1+sqrt(5))/2, T'(n) may be O(b^n). A similar argument shows that for b <= (1+sqrt(5))/2, T'(n) may be Omega(b^n). Thus, for b = (1+sqrt(5))/2 only, T'(n) may be Theta(b^n).
Completing the proof by induction that T(n) = O(b^n) is left as an exercise.
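A quick numeric check (assuming the sample base cases T(0) = 0 and T(1) = 1) shows the consecutive ratios T(n)/T(n-1) settling at (1 + sqrt(5))/2, consistent with the exponential bound above:

# T(n) = T(n-1) + T(n-2) + 1 with assumed base cases T(0) = 0, T(1) = 1.
prev, cur = 0, 1
for n in range(2, 41):
    prev, cur = cur, cur + prev + 1   # advance to T(n)
print("T(40)/T(39)   =", cur / prev)
print("(1+sqrt(5))/2 =", (1 + 5 ** 0.5) / 2)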
T(n) = T([sqrt(n)]) + n
Obviously, T(n) is at least linear, assuming the boundary conditions make T(n) nonnegative. We might guess that T(n) is Theta(n) and try to prove it. Base case: let T(0) = a and T(1) = b. Then T(2) = b + 2 and T(4) = b + 6. In both cases, a suitable choice of c (say c >= 1.5 when b is small) makes T(n) <= c*n. Suppose that whatever our fixed value of c is works for all n up to and including k. We must show that T([sqrt(k+1)]) + (k+1) <= c*(k+1). From the induction hypothesis, T([sqrt(k+1)]) <= c*[sqrt(k+1)] <= c*sqrt(k+1). So
T([sqrt(k+1)]) + (k+1) <= c*sqrt(k+1) + (k+1),
and c*sqrt(k+1) + (k+1) <= c*(k+1) can be rewritten as
c*x + x^2 <= c*x^2 (with x = sqrt(k+1));
dividing through by x (fine, since k >= 1 so x > 0) we get c + x <= c*x, and solving this for c gives c >= x/(x-1) = sqrt(k+1)/(sqrt(k+1)-1). This approaches 1 as k grows, so for large enough n, any constant c > 1 will work.
Making this proof totally rigorous by fixing the following points is left as an exercise:
making sure enough base cases are proven so that all assumptions hold
distinguishing the cases where (a) k + 1 is a perfect square (hence [sqrt(k+1)] = sqrt(k+1)) and (b) k + 1 is not a perfect square (hence sqrt(k+1) - 1 < [sqrt(k+1)] < sqrt(k+1)).
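A rough numeric check (assuming the sample base values T(0) = T(1) = 1) shows T(n)/n approaching 1, consistent with the Theta(n) claim:

# T(n) = T(floor(sqrt(n))) + n, with assumed base values T(0) = T(1) = 1.
from functools import lru_cache
from math import isqrt

@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return T(isqrt(n)) + n

for n in (10, 1000, 10 ** 6, 10 ** 12):
    print(n, T(n) / n)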
T(n) = 2T(n/2) + (n/(log n)) + n
Here T(n) > 2T(n/2) + n, which we recognize as the recurrence for the running time of Mergesort; by the Master Theorem that recurrence is Theta(n log n), so we know our complexity is no less than that.
Indeed, by the master theorem: T(n) = 2T(n/2) + (n/(log n)) + n = 2T(n/2) + n(1 + 1/(log n)), so
a = 2
b = 2
f(n) = n(1 + 1/(log n)) is O(n) (for n>2, it's always less than 2n)
f(n) = O(n) = O(n^log_2(2) * log^0 n)
We're in case 2 of the Master Theorem still, so the asymptotic bound is the same as for Mergesort, Theta(n log n).
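A quick numeric check on powers of two (assuming the sample base value T(2) = 1 and base-2 logarithms) shows T(n)/(n log n) staying close to a constant, consistent with the Theta(n log n) bound:

from math import log2

# T(n) = 2*T(n/2) + n/log(n) + n on powers of two; assumed base value T(2) = 1.
T, n = 1, 2
while n < 2 ** 40:
    n *= 2
    T = 2 * T + n / log2(n) + n
    if n in (2 ** 10, 2 ** 20, 2 ** 40):
        print(n, T / (n * log2(n)))   # stays near 1.1 and slowly decreases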

I tried solving a recurrence by guessing and then proving by induction, but something went wrong

For the following recurrence I have to find the Big-O complexity:
T(n) = 2T(n/2) + c*n
T(1) = c1
I guessed that T(n) = O(n).
Induction hypothesis: T(k) <= a*k for all k < n.
T(n) = 2T(n/2) + c*n
T(n) <= 2(a*(n/2)) + c*n
     <= a*n + c*n
      = O(n)
I find it completely correct, but my TA graded it 0. Where do you think I went wrong?
The so-called Master Theorem, case 2, states that
T(n) = Theta(n log n)
for your example. Furthermore, if T(n) = O(n) were true, Mergesort (whose running time satisfies the above recurrence) would have a linear runtime complexity, which is not the case.
Concerning your argument, apparently you state that there exists a constant a such that
T(n) <= a*n
holds. Consequently, the induction hypothesis should be as follows.
T(k) <= a*k for each k < n
But even if this is assumed, the induction step proves that
T(n) <= (a+c)*n
holds; however, this does not prove the desired property as it does not prove that
T(n) <= a*n
holds.
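A quick numeric experiment (assuming the sample constants c = c1 = 1 and restricting to powers of two) shows what goes wrong with the O(n) guess: T(n)/n keeps growing like log n, while T(n)/(n*log n) stays bounded:

from math import log2

# T(n) = 2*T(n/2) + c*n on powers of two; assumed sample constants c = c1 = 1.
c1 = c = 1
T, n = c1, 1
while n < 2 ** 20:
    n *= 2
    T = 2 * T + c * n
print("T(n)/n           =", T / n)               # about 21, i.e. log2(n) + 1
print("T(n)/(n*log2(n)) =", T / (n * log2(n)))   # about 1.05, bounded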

Algorithmic T(n) = T(n-1)+2

How do I apply the Master Theorem to this equation?
If T(1) = 1 and
T(n) = T(n-1) + 2,
what would be the runtime of such a program?
What is the T(1) = 1 for?
Which case is this and why?
Please give a detailed explanation. Thanks.
You cannot use the Master Theorem here (at least not without a variable substitution), since the recurrence is not in the required form.
Your function however is easy to analyze and is in Theta(n).
Proof by induction, with hypothesis T(k) <= 2k for each k < n:
T(n) = T(n-1) + 2 <= 2(n-1) + 2 = 2n - 2 + 2 = 2n
(the inequality uses the induction hypothesis on T(n-1))
The base of the induction is T(1) = 1 <= 2*1.
The above shows that T(n) is in O(n), since we found c = 2 such that T(n) <= c*n for all n >= 1, which is the definition of big-O notation.
Proving similarly that T(n) is in Omega(n) is easy, and from this you can conclude that T(n) is in Theta(n).
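In fact, with T(1) = 1 the recurrence unrolls to exactly T(n) = 1 + 2*(n-1) = 2n - 1, which a short Python check confirms:

# T(n) = T(n-1) + 2, T(1) = 1 unrolls to T(n) = 2n - 1.
T = 1
for n in range(2, 101):
    T = T + 2
    assert T == 2 * n - 1
print("T(100) =", T)     # 199, well within the bound 2*100 used above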
Thank you for your help. With the help of your answer and a new recursive equation I think I finally understood it. Let's say I have T(n) = 1*T(n-1) + n^2. The Master Theorem doesn't apply here either, so I have my base cases:
T(1) = 1
T(2) = 5
T(3) = 14
T(4) = 30
--> Proof by induction, assuming T(k) <= k^3 for each k < n:
T(n) = T(n-1) + n^2 <= n^2*(n-1) + n^2 = n^3 - n^2 + n^2 = n^3
(using the induction hypothesis on T(n-1))
So this leads to O(n^3)? I'm not sure about this. Why not Omega or Theta?
When would my hypothesis/induction use <, > or >=?
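As a quick check of that O(n^3) guess: with T(1) = 1 the recurrence unrolls to the sum of squares 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6, which is indeed Theta(n^3). A small Python sketch:

# T(n) = T(n-1) + n^2, T(1) = 1 unrolls to n(n+1)(2n+1)/6, which is Theta(n^3).
T = 1
for n in range(2, 51):
    T = T + n * n
    assert T == n * (n + 1) * (2 * n + 1) // 6
print("T(50) =", T, " compare n^3/3 =", 50 ** 3 / 3)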

Asymptotic notations

From what I have studied: I have been asked to determine the complexity of a function with respect to another function, i.e. given f(n) and g(n), determine whether f(n) is O(g(n)). In such cases, I substitute values, compare the two, and arrive at a complexity, using the O, Theta and Omega notations.
However, in the substitution method for solving recurrences, every standard document has the following lines:
• [Assume that T(1) = Θ(1).]
• Guess O(n^3). (Prove O and Ω separately.)
• Assume that T(k) ≤ c*k^3 for k < n.
• Prove T(n) ≤ c*n^3 by induction.
How am I supposed to find O and Ω when nothing else (apart from f(n)) is given? I might be wrong (I definitely am), and any information on the above is welcome.
Some of the assumptions above refer to this problem: T(n) = 4T(n/2) + n, while the basic outline of the steps applies to all such problems.
That particular recurrence is solvable via the Master Theorem, but you can get some feedback from the substitution method. Let's try your initial guess of cn^3.
T(n) = 4T(n/2) + n
<= 4c(n/2)^3 + n
= cn^3/2 + n
Assuming that we choose c so that n <= cn^3/2 for all relevant n,
T(n) <= cn^3/2 + n
<= cn^3/2 + cn^3/2
= cn^3,
so T is O(n^3). The interesting part of this derivation is where we used a cubic term to wipe out a linear one. Overkill like that is often a sign that we could guess lower. Let's try cn.
T(n) = 4T(n/2) + n
<= 4cn/2 + n
= 2cn + n
This won't work. The gap between the right-hand side and the bound we want is cn + n, which is big Theta of the bound we want. That usually means we need to guess higher. Let's try cn^2.
T(n) = 4T(n/2) + n
<= 4c(n/2)^2 + n
= cn^2 + n
At first that looks like a failure as well. Unlike our guess of n, though, the deficit is little o of the bound itself. We might be able to close it by considering a bound of the form cn^2 - h(n), where h is o(n^2). Why subtraction? If we used h as the candidate bound, we'd run a deficit; by subtracting h, we run a surplus. Common choices for h are lower-order polynomials or log n. Let's try cn^2 - n.
T(n) = 4T(n/2) + n
<= 4(c(n/2)^2 - n/2) + n
= cn^2 - 2n + n
= cn^2 - n
That happens to be the exact solution to the recurrence, which was rather lucky on my part. If we had guessed cn^2 - 2n instead, we would have had a little credit left over.
T(n) = 4T(n/2) + n
<= 4(c(n/2)^2 - 2n/2) + n
= cn^2 - 4n + n
= cn^2 - 3n,
which is slightly smaller than cn^2 - 2n.
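A short numeric check on powers of two (assuming T(1) = 1, which forces c = 2 in the exact form c*n^2 - n) confirms that guess term by term:

# T(n) = 4*T(n/2) + n on powers of two, assumed base value T(1) = 1.
# The exact form c*n^2 - n with c = 2 should match at every step.
T, n = 1, 1
while n < 2 ** 20:
    n *= 2
    T = 4 * T + n
    assert T == 2 * n * n - n
print("T(2^20) =", T)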

Division operation on asymptotic notation

Suppose
S(n) = O(f(n)) and T(n) = O(f(n)),
where both bounds use the same f(n).
My question is: why is it incorrect to conclude that S(n)/T(n) = O(1)?
Consider S(n) = n^2 and T(n) = n. Then both S and T are O(n^2) but S(n) / T(n) = n which is not O(1).
Here's another example. Consider S(n) = sin(n) and T(n) = cos(n). Then S and T are both O(1), but S(n) / T(n) = tan(n) is not O(1). This second example is important because it shows that even when O(f(n)) is as tight an upper bound as possible for both functions, the conclusion can still fail.
Why is this happening? Because the obvious "proof" completely fails. The obvious "proof" is the following. There are constants C_S and C_T and N_S and N_T where n >= N_S implies |S(n)| <= C_S * f(n) and n >= N_T implies |T(n)| <= C_T * f(n). Let N = max(N_S, N_T). Then for n >= N we have
|S(n) / T(n)| <= (C_S * f(n)) / (C_T * f(n)) = C_S / C_T.
This is completely and utterly wrong. It is not the case that |T(n)| <= C_T * f(n) implies that 1 / |T(n)| <= 1 / (C_T * f(n)). In fact, what is true is that 1 / |T(n)| >= 1 / (C_T * f(n)). The inequality reverses, and that suggests there is a serious problem with the "theorem." The intuitive idea is that if T is "small" (i.e., bounded) then 1 / T is "big." But we're trying to show that 1 / T is "small" and we just can't do that. As our counterexamples show, the "proof" is fatally flawed.
However, there is a theorem here that is true. Namely, if S(n) is O(f(n)) and T(n) is Ω(f(n)), then S(n) / T(n) is O(1). The above "proof" works for this theorem (thanks are due to Simone for the idea to generalize to this statement).
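A tiny numeric illustration of that corrected statement, using hypothetical example functions f(n) = n, S(n) = 3n + 5 (which is O(n)) and T(n) = n/2 (which is Omega(n)):

# S(n) = 3n + 5 is O(n) and T(n) = n/2 is Omega(n), so S(n)/T(n) stays bounded.
for n in (10, 1000, 10 ** 6):
    S, T = 3 * n + 5, n / 2
    print(n, S / T)   # never exceeds 7 for n >= 10 and approaches 6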
Here's one counterexample:
Let's say f(n) = n^2, S(n) = n^2 and T(n) = n. Now both S and T are in O(f(n)) (you have to remember that O(n^2) is a superset of O(n), so everything that's in O(n) is also in O(n^2)), but U(n) = S(n)/T(n) = n^2/n = n is definitely not in O(1).
As the others explained, S(n) / T(n) is not in general O(1).
Your doubt probably derives from confusing O and Θ; in fact, if
S(n) = Θ(n) and T(n) = Θ(n)
then the following is true:
S(n) / T(n) = Θ(1), and thus S(n) / T(n) = O(1).
You can see that the original claim is wrong if you plug some actual functions into it, e.g.
S(n) = n^2
T(n) = n
f(n) = n^2
S(n) / T(n) = n
n != O(1)
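The same point shows up immediately if you tabulate that counterexample numerically:

# S(n) = n^2 and T(n) = n are both O(n^2), yet S(n)/T(n) = n grows without bound.
for n in (10, 1000, 10 ** 6):
    S, T = n * n, n
    print(n, S / T)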
