What is the runtime of a recurrence T(n)=3T(2n/3)+1 and how did you get it?
This type of recurrence can be solved with the Master Theorem. Here a = 3, b = 3/2 and f(n) = 1. Your c = log1.5(3) ≈ 2.709, and because n^2.709 grows faster than f(n), you fall into the first case.
So the solution is O(n^2.709)
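As a quick numeric sanity check (my own sketch, not part of the answer), we can expand the recurrence directly, using floor(2n/3) for the subproblem size, and confirm that T(n) lands between n^2 and n^3, consistent with Θ(n^2.709):

```python
from functools import lru_cache

# Numeric check for T(n) = 3*T(2n/3) + 1, rounding subproblem sizes down.
@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return 3 * T(2 * n // 3) + 1

# n^log_1.5(3) ≈ n^2.709 sits strictly between n^2 and n^3
assert 1000 ** 2 < T(1000) < 1000 ** 3
print(T(1000))
```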
Use the Master Theorem. This is a lot easier than trying to solve the recurrence yourself, as you tried in your original question.
OTTOMH this should get you T(n) = Theta(n^2.7) (Case 1 of the Master Theorem).
I've been trying to solve the following recurrence relation: T(n) = T(n/3) + T(n/6) + 1
I don't know where to start. I thought about drawing a recursion tree first and then solving it, but I don't know if that's the right approach.
Can someone please help me with this? Thanks
You could use the Akra-Bazzi method, which is a generalization of the Master Theorem for solving divide-and-conquer recurrences with sub-problems of different sizes. It applies to recurrences of the form:

T(x) = g(x) + sum_{i=1..k} a_i * T(b_i * x + h_i(x))

for positive constants a_i, constants b_i in (0,1), g(x) in O(x^c) and |h_i(x)| in O(x/log^2(x)).

In this case: a_1 = a_2 = 1, b_1 = 1/3, b_2 = 1/6 and g(x) = 1.

Following the method, we need the p value such that:

(1/3)^p + (1/6)^p = 1

Solving this equation numerically for p gives p = 0.48954...

The Akra-Bazzi theorem says that the complexity for the algorithm is then:

T(x) in Theta( x^p * (1 + integral from 1 to x of g(u)/u^(p+1) du) )

which when solved given g(u) = 1 yields (the integral is bounded by a constant, so the x^p factor dominates):

T(n) in Theta(n^0.48954...)
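The exponent p can be found numerically; here is a small bisection sketch (`akra_bazzi_p` is my own helper, not part of the method's statement):

```python
# Akra-Bazzi exponent for T(n) = T(n/3) + T(n/6) + 1:
# find p with (1/3)^p + (1/6)^p = 1 by bisection on [0, 1].
def akra_bazzi_p(lo=0.0, hi=1.0, iters=100):
    f = lambda p: (1 / 3) ** p + (1 / 6) ** p - 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        # f is strictly decreasing in p, so f(mid) > 0 means the root is above mid
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p = akra_bazzi_p()
print(round(p, 5))  # ≈ 0.48954
```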
Master theorem can be used to solve recurrence relations like
T(n)= aT(n/b)+f(n).
So, if f(n) = O(n) or f(n) = cn, are the solutions the same in both cases?
Can I use the Master Theorem for f(n) = cn as well?
Assuming that c is a constant and that I understand your question correctly, the solution will be the same for both f(n) = O(n) and f(n) = cn, since cn = O(n), and thus the Master Theorem can be applied to solve the recurrence.
If I understood the question correctly, f(n)=cn (where c is a constant) is in O(n); the master theorem can be applied.
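A quick illustration of why the constant doesn't matter (my own sketch, picking a = 2, b = 2 as an example): with T(1) = 0 and n a power of two, T(n) = 2T(n/2) + c*n unrolls exactly to c * n * log2(n), so c only scales the Θ(n log n) bound:

```python
import math

# T(n) = 2*T(n/2) + c*n with T(1) = 0; for n = 2^k this is exactly c*k*2^k.
def T(n, c):
    if n == 1:
        return 0
    return 2 * T(n // 2, c) + c * n

for c in (1, 7):
    n = 1024
    # the constant c scales the result but never changes the n*log(n) shape
    assert T(n, c) == c * n * int(math.log2(n))
```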
I am trying to apply the Master's Theorem to a recurrence of this type:
T(n) = T(n/2) + 2^n
However, f(n) = 2^n doesn't seem to fit any of the three cases described in the Master Theorem, which all compare f(n) against a polynomial in n rather than an exponential. How can I solve a recurrence of this type? Could anyone please help? Thanks.
If none of the cases of the theorem applies, then the theorem won't solve your recurrence. It can't solve every single recurrence out there.
To address your issue: what you get by repeatedly substituting the recursive case is T(n) = 2^n + 2^(n/2) + 2^(n/4) + ... + 2. The exponents at least halve at every step, so the entire tail after the first term is smaller than 2^n, the whole sum stays below 2^(n+1), and in total you're in Θ(2^n).
Actually, case 3 of the Master Theorem does apply here: with a = 1, b = 2 we have n^(log_2 1) = n^0 = 1, and f(n) = 2^n is Ω(n^ε) for every ε > 0. The regularity condition a*f(n/b) = 2^(n/2) ≤ c*2^n also holds for large n, so case 3 gives T(n) = Θ(2^n).
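The unrolled sum can also be checked directly with exact integers (a small sketch of my own): T(n) = 2^n + T(n//2) stays within a factor of 2 of 2^n:

```python
# T(n) = T(n/2) + 2^n, unrolled with integer halving and T(0) = 0.
def T(n):
    if n == 0:
        return 0
    return 2 ** n + T(n // 2)

n = 100
# Θ(2^n): the tail 2^(n/2) + 2^(n/4) + ... adds less than another 2^n
assert 2 ** n < T(n) < 2 ** (n + 1)
```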
Given the following recursive equations:
T(n) = 5T(n/5)+(5sin^5(5n^5)+5)*n
T(n) = T(n/4)+2sin^2(n^4)
I can easily see that both equations seem to fit the 2nd case of the Master Theorem,
but because sin is a periodic function, a large enough n
can bring the factor really close to zero.
So for any two constants c1, c2 we will always be able to find an n > n0
(by the definition of Θ) that disproves the bound.
Is it really possible to solve these with the Master Theorem?
thanks
I think you're right, the Master Theorem does not apply here. The reason for this is that the difference between f(n) and n^(log_b(a)) has to be polynomial. (See Master Theorem Recurrences: What is exactly polynomial difference?)
In your case:
((5sin^5(5n^5) + 5) * n) / n^(log_5(5)) = 5sin^5(5n^5) + 5, and
(2sin^2(n^4)) / n^(log_4(1)) = 2sin^2(n^4),
neither of which is polynomial, so the Master Theorem does not apply in this case.
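To see numerically that the coefficient 5sin^5(5n^5) + 5 is not bounded away from 0 (and so f(n) is not Θ(n)), here is a sampling sketch of my own; note that floating-point sin loses precision for huge arguments, so treat this purely as an illustration:

```python
import math

# Sample the coefficient 5*sin(5n^5)^5 + 5 from the first recurrence.
coeffs = [5 * math.sin(5 * n ** 5) ** 5 + 5 for n in range(1, 2000)]

# The coefficient swings across almost the whole range [0, 10]:
# it repeatedly dips toward 0, so no Θ bound with a fixed c1 > 0 survives.
assert min(coeffs) < 1 and max(coeffs) > 9
```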
I'm trying to solve a recurrence relation to find out the complexity of an algorithm I wrote. This is the equation:
T(n) = T(n-1) + Θ(n)
And I found the answer to be O(n^2), but I'm not sure if I did it right. Can someone please confirm?
Update: What if the equation is T(n) = T(n-1) + Θ(n log n)? Will it still be O(n^2)?
It is O(N) + O(N-1) + ... + O(1) = O(N*(N+1)/2). So yes, the total complexity is quadratic. (As for the update: summing Θ(k log k) for k = 1..n gives Θ(n^2 log n), so that variant is not O(n^2).)
Yes, you guessed it right.
However, the form of the recurrence doesn't fit the Master method. Since you have guessed the bound correctly, the substitution method is more suitable here.
Now your job is to find two constants c and n0 to prove that:
T(n) <= c*(n^2) for all n >= n0
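Here is a numeric sketch of that substitution check (my own example, taking the Θ(n) term to be exactly n and T(1) = 1): the closed form is n(n+1)/2, and c = 1, n0 = 1 already satisfy the bound:

```python
# T(n) = T(n-1) + n with T(1) = 1, computed iteratively.
def T(n):
    total = 1
    for k in range(2, n + 1):
        total += k
    return total

c, n0 = 1, 1
for n in range(n0, 500):
    assert T(n) == n * (n + 1) // 2  # closed form of the sum
    assert T(n) <= c * n ** 2        # n(n+1)/2 <= n^2 holds for all n >= 1
```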