I'm familiar with solving recurrences with iteration:
t(1) = c1
t(2) = t(1) + c2 = c1 + c2
t(3) = t(2) + c2 = c1 + 2c2
...
t(n) = c1 + (n-1)c2 = O(n)
But what if I had a recurrence with no base case? How would I solve it using the three methods mentioned in the title?
t(n) = 2t(n/2) + 1
For Master Theorem I know the first step, find a, b, and f(n):
a = 2
b = 2
f(n) = 1
But not where to go from here. I'm at a standstill because I'm not sure how to approach the question.
I know of 2 ways to solve this:
(1) T(n) = 2T(n/2) + 1
(2) T(n/2) = 2T(n/4) + 1
now replace T(n/2) from (2) into (1)
T(n) = 2[2T(n/4) + 1] + 1
= 2^2T(n/4) + 2 + 1
T(n/4) = 2T(n/8) + 1
T(n) = 2^2[2T(n/8) + 1] + 2 + 1
= 2^3T(n/8) + 4 + 2 + 1
You would just keep doing this until you can generalize. Eventually you will spot that:
T(n) = 2^k T(n/2^k) + (2^(k-1) + ... + 2 + 1) = 2^k T(n/2^k) + Σ_{i=0}^{k-1} 2^i
You want T(1), so set n/2^k = 1 and solve for k. When you do this you will find that k = lg n.
Substituting lg n for k, you end up with
T(n) = 2^(lg n) T(n/2^(lg n)) + (1 - 2^(lg n)) / (1 - 2)
Since 2^(lg n) = n,
T(n) = n T(1) + n - 1
Treating T(1) as a constant (say 1), T(n) = n + n - 1, where n is the dominant term, so T(n) = O(n).
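As a quick sanity check (not part of the derivation above), here is a small Python sketch; the base case T(1) = 1 is an assumption I'm adding, since the recurrence itself doesn't specify one:

    def t_recursive(n):
        # Evaluate T(n) = 2*T(n/2) + 1 directly, for n a power of two.
        if n == 1:
            return 1  # assumed base case T(1) = 1
        return 2 * t_recursive(n // 2) + 1

    for k in range(1, 11):
        n = 2 ** k
        assert t_recursive(n) == n * 1 + (n - 1)  # matches n*T(1) + (n - 1) from the unrolling
        print(n, t_recursive(n))                  # grows linearly in n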
With the Master Theorem it's really fast.
Consider T(n) = aT(n/b) + n^c for n > 1.
There are three cases (note that b is the log base):
(1) if log_b a < c, T(n) = Θ(n^c),
(2) if log_b a = c, T(n) = Θ(n^c log n),
(3) if log_b a > c, T(n) = Θ(n^(log_b a)).
In this case a = 2, b = 2, and c = 0 (since n^0 = 1).
A quick check shows case 3, because log_2 2 = 1 > 0 = c.
So the answer is n^(log_2 2),
and note that log_2 2 is 1,
so by the Master Theorem this is Θ(n).
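If it helps, the three cases can be wrapped up in a tiny helper for sanity-checking a classification; this is just a sketch of the simplified statement above (the function name is mine, not anything standard):

    import math

    def master_case(a, b, c):
        # Classify T(n) = a*T(n/b) + n^c using the three cases stated above.
        e = math.log(a, b)                # the critical exponent log_b(a)
        if math.isclose(e, c):
            return f"Theta(n^{c} log n)"  # case 2
        if e < c:
            return f"Theta(n^{c})"        # case 1
        return f"Theta(n^{e:g})"          # case 3

    print(master_case(2, 2, 0))  # T(n) = 2T(n/2) + 1  ->  Theta(n^1)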
Apart from the Master Theorem, the Recursion Tree Method and the Iterative Method, there is also the so-called "Substitution Method".
Often you will find people talking about the substitution method when in fact they mean the iterative method (especially on YouTube).
I guess this stems from the fact that in the iterative method you are also substituting something, namely the (n+1)-th recursive call into the n-th one...
The standard reference work about algorithms
(CLRS)
defines it as follows:
Substitution Method
Guess the form of the solution.
Use mathematical induction to find the constants and show that the solution works.
As an example, let's take your recurrence equation: T(n) = 2T(ⁿ/₂)+1
We guess that the solution is T(n) ∈ O(n²), so we have to prove that
T(n) ≤ cn² for some constant c.
Also, let's assume that for n=1 you are doing some constant work c.
Given:
T(1) ≤ c
T(n) = 2T(ⁿ/₂)+1
To prove:
∃c > 0, ∃n₀ ∈ ℕ, ∀n ≥ n₀, such that T(n) ≤ cn² is true.
Base Case:
n=1: T(1) ≤ c
n=2: T(2) = T(1) + T(1) + 1 ≤ c + c + 1 ≤ 4c = c·2² (using T(1) ≤ c; holds for any c ≥ 1)
Induction Step:
As inductive hypothesis we assume T(m) ≤ cm² for all positive numbers m smaller than n,
in particular for m = ⁿ/₂.
Therefore T(ⁿ/₂) ≤ c(ⁿ/₂)², and hence
T(n) ≤ 2c(ⁿ/₂)² + 1 ⟵ Here we're substituting c(ⁿ/₂)² for T(ⁿ/₂)
= (¹/₂)cn² + 1
≤ cn² (for c ≥ 2, and all n ∈ ℕ)
So we have shown that there is a constant c such that T(n) ≤ cn² is true for all n ∈ ℕ.
This means exactly T(n) ∈ O(n²). ∎
(for Ω, and hence Θ, the proof is similar).
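For intuition (this is not a substitute for the induction), you can also check the bound numerically; the base case T(1) = 1 is an assumption, chosen so that T(1) ≤ c with c = 2:

    def T(n):
        # T(n) = 2*T(n/2) + 1 with an assumed base case T(1) = 1.
        return 1 if n <= 1 else 2 * T(n // 2) + 1

    c = 2
    assert all(T(n) <= c * n * n for n in range(1, 2000))  # the proven bound T(n) <= 2*n^2 holds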
I want to know the time complexity of my recursive method:
T(n) = 2T(n/2) + O(1)
I saw a result that says it is O(n), but I don't know why. I solved it like this:
T(n) = 2T(n/2) + 1
T(n-1) = 4T(n-1/4) + 3
T(n-2) = 8T(n-2/8) + 7
...
T(n) = 2^(n+1) T(n/2^(n+1)) + (2^(n+1) - 1)
I think you have got the wrong idea about recurrence relations. You can think of it as follows:
If T(n) represents the value of the function T() at input n, then the relation says that the output is one more than double the value at half of the current input. So for input n-1, the output, i.e. T(n-1), will be one more than double the value at half of that input, that is, T(n-1) = 2*T((n-1)/2) + 1.
The above kind of recurrence relation should be solved as answered by Yves Daoust. For more examples of recurrence relations, you can refer to this.
Consider that n=2^m, which allows you to write
T(2^m)=2T(2^(m-1))+O(1)
or by denoting S(m):= T(2^m),
S(m)=2 S(m-1) + O(1),
and dividing by 2^m,
2^(-m) S(m) = 2^(-(m-1)) S(m-1) + 2^(-m) O(1),
and finally, denoting R(m) := 2^(-m) S(m),
R(m) = R(m-1) + 2^(-m) O(1).
Now by induction,
R(m) = R(0) + (1 - 2^(-m)) O(1),
so that
T(n) = S(m) = 2^m R(m) = 2^m T(1) + (2^m - 1) O(1) = n T(1) + (n - 1) O(1) = O(n).
There are a couple of rules that you need to remember. If you can remember these easy rules, then solving recurrence equations with the Master Theorem is very easy. The following are the basic rules which need to be remembered:
case 1) If n^(log_b a) << f(n) then T(n) = f(n)
case 2) If n^(log_b a) = f(n) then T(n) = f(n) * log n
case 3) If n^(log_b a) >> f(n) then T(n) = n^(log_b a)
Now, let's solve the recurrence using the above rules.
a = 2, b = 2, f(n) = O(1)
n^(log_b a) = n^(log_2 2) = n
This is case 3) in the above rules, since n >> O(1). Hence T(n) = n^(log_b a) = O(n).
How to get big-O for this?
T(N) = 2T(N − 1) + N, T(1) = 2
I got two candidate answers, O(2^N) and O(N^2), but I am not sure how to solve it correctly.
Divide T(N) by 2^N and name the result:
S(N) = T(N)/2^N
From the definition of T(N) we get
S(N) = S(N-1) + N/2^N (eq.1)
meaning that S(N) increases, but quickly converges to a constant (since N/2^N -> 0). So,
T(N)/2^N -> constant
or
T(N) = O(2^N)
Detailed proof
In a comment, Paul Hankin suggests how to complete the proof. Take eq.1 and sum it from N=2 to N=M:
sum_{N=2}^M S(N) = sum_{N=2}^M S(N-1) + sum_{N=2}^M N/2^N
= sum_{N=1}^{M-1} S(N) + sum_{N=2}^M N/2^N
thus, after canceling the terms with indexes N = 2, 3, ..., M-1 on both sides, we get
S(M) = S(1) + sum_{N=2}^M N/2^N
and since the series on the right converges as M grows (its terms are bounded by 1/N^2 for N >> 1, and that series is known to converge), S(M) converges to a finite constant.
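If you want to see the convergence concretely, here is a small Python sketch using the given T(1) = 2; the limit value 2.5 is specific to that base case:

    def T(n):
        # T(N) = 2*T(N-1) + N with the given T(1) = 2.
        return 2 if n == 1 else 2 * T(n - 1) + n

    for N in (1, 5, 10, 20, 40):
        print(N, T(N) / 2 ** N)  # S(N) = T(N)/2^N levels off (at 2.5 for this base case)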
It's a math problem and Leandro Caniglia is right.
let b(n) = T(n) / 2^n
thus b(n) = b(n-1) + n / 2^n = b(n-2) + n / 2^n + (n-1) / 2^(n-1) ....
The series of the terms i / 2^i converges (over all integers i ≥ 1 it sums to 2).
So the sum of those terms has a limit and must be smaller than some constant.
thus b(n) < C.
thus T(n) < 2^n * C.
It is obvious that T(n) >= 2^n.
So T(n) is O(2^n)
Check by plugging the answer in the equation.
2^N = 2·2^(N-1) + N = 2^N + N
or
N^2 = 2 (N-1)^2 + N
Keeping only the dominant terms, you have
2^N ~ 2^N
or
N^2 ~ 2 N^2.
Conclude.
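If it helps, the same plug-in check can be done numerically; a rough sketch, with the two candidates taken from the question:

    # Plug each candidate into T(N) = 2*T(N-1) + N and compare both sides.
    for N in (10, 20, 30):
        for name, f in (("2^N", lambda x: 2 ** x), ("N^2", lambda x: x ** 2)):
            lhs, rhs = f(N), 2 * f(N - 1) + N
            print(name, N, lhs / rhs)  # close to 1 for 2^N (consistent), close to 1/2 for N^2 (not)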
Hi, I am having a tough time showing the running time of these three recurrences for T(n). Assumptions include T(0) = 0.
1) This one I know is close to Fibonacci, so I know it's close to O(n) time, but I'm having trouble showing that:
T(n) = T(n-1) + T(n-2) + 1
2) This one I am stumped on, but I think it's roughly O(log log n):
T(n) = T([sqrt(n)]) + n, for n ≥ 1, where [sqrt(n)] is sqrt(n) rounded down (its floor).
3) I believe this one is roughly O(n * log log n):
T(n) = 2T(n/2) + (n/(log n)) + n.
Thanks for the help in advance.
T(n) = T(n-1) + T(n-2) + 1
Assuming T(0) = 0 and T(1) = a, for some constant a, we notice that T(n) - T(n-1) = T(n-2) + 1. That is, the growth rate of the function is given by the function itself, which suggests this function has exponential growth.
Let T'(n) = T(n) + 1. Then T'(n) = T'(n-1) + T'(n-2), by the above recurrence relation, and we have eliminated the troublesome constant term. T(n) and T'(n) differ only by an additive constant of 1, so assuming they are both non-decreasing (they are), they will have the same asymptotic complexity, albeit possibly for different constants c and n0.
To show T'(n) has asymptotic growth of O(b^n), we would need some base cases, then the hypothesis that the condition holds for all n up to, say, k - 1, and then we'd need to show it for k, that is, cb^(k-2) + cb^(k-1) <= cb^k. We can divide through by cb^(k-2) to simplify this to 1 + b <= b^2. Rearranging, we get b^2 - b - 1 >= 0; the roots are (1 ± sqrt(5))/2, and we must discard the negative one since we cannot use a negative number as the base for our exponent. So for b >= (1+sqrt(5))/2, T'(n) may be O(b^n). A similar thought experiment will show that for b <= (1+sqrt(5))/2, T'(n) may be Omega(b^n). Thus, for b = (1+sqrt(5))/2 only, T'(n) may be Theta(b^n).
Completing the proof by induction that T(n) = O(b^n) is left as an exercise.
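A quick numeric look at the growth (not a proof): with the given T(0) = 0 and an assumed T(1) = 1, the ratio of consecutive values approaches b = (1+sqrt(5))/2, which is consistent with the analysis above.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        # T(n) = T(n-1) + T(n-2) + 1, with T(0) = 0 and an assumed T(1) = 1.
        if n < 2:
            return n
        return T(n - 1) + T(n - 2) + 1

    phi = (1 + 5 ** 0.5) / 2
    for n in (10, 20, 30, 40):
        print(n, T(n) / T(n - 1), phi)  # the growth ratio approaches (1 + sqrt(5))/2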
T(n) = T([sqrt(n)]) + n
Obviously, T(n) is at least linear, assuming the boundary conditions require T(n) to be nonnegative. We might guess that T(n) is Theta(n) and try to prove it. Base case: let T(0) = a and T(1) = b. Then T(2) = b + 2 and T(4) = b + 6. In both cases, a large enough choice of c will work to make T(n) <= cn. Suppose that whatever our fixed value of c is works for all n up to and including k. We must show that T(k+1) = T([sqrt(k+1)]) + (k+1) <= c*(k+1). We know that T([sqrt(k+1)]) <= c*sqrt(k+1) from the induction hypothesis. So T([sqrt(k+1)]) + (k+1) <= c*sqrt(k+1) + (k+1), and c*sqrt(k+1) + (k+1) <= c*(k+1) can be rewritten as cx + x^2 <= cx^2 (with x = sqrt(k+1)); dividing through by x (OK since k > 1) we get c + x <= cx, and solving this for c we get c >= x/(x-1) = sqrt(k+1)/(sqrt(k+1)-1). This eventually approaches 1, so for large enough n, any constant c > 1 will work.
Making this proof totally rigorous by fixing the following points is left as an exercise:
making sure enough base cases are proven so that all assumptions hold
distinguishing the cases where (a) k + 1 is a perfect square (hence [sqrt(k+1)] = sqrt(k+1)) and (b) k + 1 is not a perfect square (hence sqrt(k+1) - 1 < [sqrt(k+1)] < sqrt(k+1)).
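Again just for intuition, here is a numeric check of the Theta(n) guess (the base case T(1) = 1 is an assumption, and [sqrt(n)] is taken as the floor, as discussed above):

    import math

    def T(n):
        # T(n) = T(floor(sqrt(n))) + n, with an assumed base case T(1) = 1.
        return 1 if n <= 1 else T(math.isqrt(n)) + n

    for n in (10, 100, 10_000, 10_000_000):
        print(n, T(n) / n)  # the ratio approaches 1, consistent with Theta(n)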
T(n) = 2T(n/2) + (n/(log n)) + n
This T(n) > 2T(n/2) + n, which we know is the recurrence for the runtime of Mergesort; by the Master Theorem that is Theta(n log n), so we know our complexity is no less than that.
Indeed, by the master theorem: T(n) = 2T(n/2) + (n/(log n)) + n = 2T(n/2) + n(1 + 1/(log n)), so
a = 2
b = 2
f(n) = n(1 + 1/(log n)) is Theta(n) (for n > 2, it is always between n and 2n)
f(n) = Theta(n) = Theta(n^(log_2 2) * log^0 n)
We're in case 2 of the Master Theorem still, so the asymptotic bound is the same as for Mergesort, Theta(n log n).
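And a quick numeric look at this third recurrence (powers of two, log base 2, and an assumed base case T(1) = 1):

    import math

    def T(n):
        # T(n) = 2*T(n/2) + n/log2(n) + n, with an assumed base case T(1) = 1.
        if n <= 1:
            return 1.0
        return 2 * T(n // 2) + n / math.log2(n) + n

    for k in (5, 10, 15, 20):
        n = 2 ** k
        print(n, T(n) / (n * math.log2(n)))  # the ratio stays bounded and slowly decreases toward 1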
I have been trying to solve a recurrence relation.
The recurrence is T(n) = T(n/3)+T(2n/3)+n^2
I solved the recurrence and I got T(n) = nT(1) + [ (9/5)(n^2)( (5/9)^(log n) ) ]
Can anyone tell me the runtime of this expression?
I think this recurrence works out to Θ(n^2). To see this, we'll show that T(n) = Ω(n^2) and that T(n) = O(n^2).
Showing that T(n) = Ω(n^2) is pretty straightforward - since T(n) has an n^2 term in it, it's certainly Ω(n^2).
Let's now show that T(n) = O(n^2). We have that
T(n) = T(n / 3) + T(2n / 3) + n^2
Consider this other recurrence:
S(n) = S(2n / 3) + S(2n / 3) + n^2 = 2S(2n / 3) + n^2
Since T(n) is increasing and T(n) ≤ S(n), any upper bound for S(n) should also be an upper bound for T(n).
Using the Master Theorem on S(n), we have that a = 2, b = 3/2, and c = 2. Since log_b a = log_{3/2} 2 = 1.709511291... < c, the Master Theorem says that this will solve to O(n^2). Since S(n) = O(n^2), we also know that T(n) = O(n^2).
We've shown that T(n) = Ω(n^2) and that T(n) = O(n^2), so T(n) = Θ(n^2), as required.
Hope this helps!
(By the way - (5/9)^(log n) = (2^(log 5/9))^(log n) = 2^(log n · log 5/9) = (2^(log n))^(log 5/9) = n^(log 5/9). That makes it a bit easier to reason about.)
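If you want to convince yourself numerically (again, not a proof), a memoized evaluation with floors and an assumed base case T(n) = 1 for n <= 1 shows T(n)/n^2 staying bounded:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        # T(n) = T(n/3) + T(2n/3) + n^2, floors assumed, base case T(n) = 1 for n <= 1.
        if n <= 1:
            return 1
        return T(n // 3) + T(2 * n // 3) + n * n

    for n in (10, 1_000, 100_000):
        print(n, T(n) / (n * n))  # the ratio stays bounded, consistent with Theta(n^2)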
One can't tell the actual runtime from T(n) or from the time complexity! It is simply an estimate of the running time in terms of the order of the input (n).
One thing which I'd like to add:
I haven't solved your recurrence relation, but keeping in mind that your derived relation is correct, and further putting n = 1 in your given recurrence relation, we get
T(1) = T(1/3) + T(2/3) + 1
So either you'll be provided with the values for T(1/3) and T(2/3) in your question, or you have to work out from the given problem statement what they should be, just like what T(1) should be for the Tower of Hanoi problem!
For a recurrence, the base-case is T(1), now by definition its value is as following:
T(1) = T(1/3) + T(2/3) + 1
Now, since T(n) denotes the runtime function, the runtime for any input that will not be processed is always 0; this includes all terms below the base case, so we have:
T(X < 1) = 0
T(1/3) = 0
T(2/3) = 0
T(1) = T(1/3) + T(2/3) + 1^2
T(1) = 0 + 0 + 1
T(1) = 1
Then we can substitute the value:
T(n) = n T(1) + [ (9/5)(n^2)( (5/9)^(log n) ) ]
T(n) = n + ( 9/5 n^2 (5/9)^(log n) )
T(n) = n^2 (9/5)^(1-log(n)) + n
We can approximate (9/5)^(1-log(n)) by 9/5 for an asymptotic upper bound, since (9/5)^(1-log(n)) <= 9/5 for n >= 1:
T(n) ~ 9/5 n^2 + n
T(n) = O(n^2)
I have this recurrence:
T(n)= 2T(n/2) + (n-1)
My try is as follow:
the tree is like this:
T(n) = 2T(n/2) + (n-1)
T(n/2) = 2T(n/4) + ((n/2)-1)
T(n/4) = 2T(n/8) + ((n/4)-1)
...
the height of the tree: (n / 2^h) - 1 = 1 ⇒ h = lg n - 1 = lg n - lg 2
the cost of the last level: 2^h = 2^(lg n - lg 2) = (1/2) n
the cost of all levels up to level h-1: Σ_{i=0,...,lg(2n)} (n - (2^i - 1)), which is a geometric series and equals (1/2)((1/2)n - 1)
So, T(n) = Θ(n lg n)
my question is: Is that right?
No, it isn't. You have the cost of the last level wrong, so what you derived from that is also wrong.
(I'm assuming you want to find the complexity yourself, so no more hints unless you ask.)
Edit: Some hints, as requested
To find the complexity, one usually helpful method is to recursively apply the equation and insert the result into the first,
T(n) = 2*T(n/2) + (n-1)
= 2*(2*T(n/4) + (n/2-1)) + (n-1)
= 4*T(n/4) + (n-2) + (n-1)
= 4*T(n/4) + 2*n - 3
= 4*(2*T(n/8) + (n/4-1)) + 2*n - 3
= ...
That often leads to a closed formula you can prove via induction (you don't need to carry out the proof if you have enough experience, then you see the correctness without writing down the proof).
Spoiler: You can look up the complexity in almost any resource dealing with the Master Theorem.
This can also be solved quickly with the Master Theorem.
You have a = 2, b = 2, f(n) = n - 1 = Θ(n), and therefore c = log_2(2) = 1. Since f(n) = Θ(n^c), this falls into the second case of the Master Theorem, which means that the complexity is Θ(n^c log n) = Θ(n log n).
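As a quick check of that bound (again, the base case T(1) = 1 is an assumption; the question doesn't fix one):

    def T(n):
        # T(n) = 2*T(n/2) + (n - 1), with an assumed base case T(1) = 1.
        return 1 if n == 1 else 2 * T(n // 2) + (n - 1)

    for k in range(1, 11):
        n = 2 ** k
        print(n, T(n), n * k + 1)  # with T(1) = 1, T(n) works out to n*lg(n) + 1 exactly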