I'm trying to solve a recurrence relation to find out the complexity of an algorithm I wrote. This is the equation:
T(n) = T(n-1) + Θ(n)
I worked the answer out to be O(n^2), but I'm not sure if I did it right. Can someone please confirm?
Update: What if the equation is T(n) = T(n-1) + Θ(n log n)? Will it still be O(n^2)?
It is O(N) + O(N-1) + ... + O(1) = O(N*(N+1)/2) = O(N^2). So yes, the total complexity is quadratic.
Yes, you guessed it right.
However, the form of the recurrence doesn't fit the Master method. Since you have guessed the bound correctly, the substitution method is more suitable here.
Now your job is to find two constants c and n0 that prove:
T(n) <= c*n^2 for all n >= n0
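If you want to sanity-check a candidate pair before writing the proof, here is a minimal sketch (the exact cost n per step and the base case T(1) = 1 are assumptions of mine, since Θ(n) only fixes the growth rate):

    # Verify T(n) <= c*n^2 for c = 1, n0 = 1, where T(n) = T(n-1) + n
    # and T(1) = 1 (both assumed, since Theta(n) only fixes growth).
    def check_bound(c=1.0, n0=1, limit=10_000):
        t = 1  # T(1) = 1
        for n in range(2, limit + 1):
            t += n  # unroll T(n) = T(n-1) + n iteratively
            if n >= n0 and t > c * n * n:
                return False  # bound violated at this n
        return True

    print(check_bound())  # True: T(n) = n(n+1)/2 <= n^2 for every n >= 1

Since T(n) = n(n+1)/2 here, c = 1 and n0 = 1 already work.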
I've been trying to solve the following recurrence relation: T(n) = T(n/3) + T(n/6) + 1
I don't know where to start. I thought about drawing a recursion tree first and then solving it, but I don't know if that's the right approach.
Can someone please help me with this? Thanks
You could use the Akra-Bazzi method, which is a generalization of the Master Theorem for solving divide-and-conquer recurrences with sub-problems of different sizes. It applies to recurrences of the form:
T(x) = g(x) + sum_{i=1..k} a_i * T(b_i*x + h_i(x))

for positive constants a_i, constants b_i in (0,1), g(x) in O(x^c), and h_i(x) in O(x/log^2(x)).

In this case: a_1 = a_2 = 1, b_1 = 1/3, b_2 = 1/6, g(x) = 1, and h_1(x) = h_2(x) = 0.

Following the method, we need the p value such that:

(1/3)^p + (1/6)^p = 1

Solving this equation for p gives p = 0.48954...

The Akra-Bazzi theorem says that the complexity for the algorithm is then:

T(x) = Θ( x^p * (1 + ∫_1^x g(u)/u^(p+1) du) )

which when solved given g(u) = 1 yields:

T(x) = Θ( x^p * (1 + (1 - x^(-p))/p) ) = Θ(x^0.48954...)
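If it helps to see it numerically, here is a rough sketch for double-checking (the bisection bounds and the base case T(n) = 1 for n < 2 are my own choices, not part of the theorem):

    from functools import lru_cache

    def solve_p(lo=0.0, hi=1.0, iters=100):
        # Bisection for (1/3)^p + (1/6)^p = 1; the left side decreases in p.
        f = lambda p: (1/3)**p + (1/6)**p - 1
        for _ in range(iters):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid  # still above 1, so the root is to the right
            else:
                hi = mid
        return (lo + hi) / 2

    @lru_cache(maxsize=None)
    def T(n):
        # Integer version of T(n) = T(n/3) + T(n/6) + 1; base case is assumed.
        return 1 if n < 2 else T(n // 3) + T(n // 6) + 1

    p = solve_p()
    print(p)  # ~0.48954
    for n in (10**4, 10**5, 10**6):
        print(n, T(n) / n**p)  # ratio stays bounded, consistent with Theta(n^p)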
I know how to use the Master Theorem, and I managed to apply it for the best and average case.
T(n) = 2T(n/2) + Theta(n)
The worst case equation is
T(n) = T(n-1) + Theta(n)
If I am correct, a is 1, b is n/(n-1), and f(n) is n.
But how do I choose the right case of the master theorem and get a worst-case time complexity of Theta(n^2)?
Thanks!
As @DavidEisenstat pointed out in the comments, the Master Theorem doesn't apply to the recurrence you've come up with here.
To give some context as to why this is - the Master Theorem is specifically designed for the case where
the number of subproblems is a constant, and
the sizes of the subproblems decay geometrically.
In this case, that second requirement doesn’t hold, since your recurrence has the problem size decay linearly rather than geometrically.
You are correct, however, that the recurrence solves to Θ(n^2). To see why, note that if you unroll the recurrence, you get that the cost is
Θ(n + (n-1) + (n-2) + ... + 2 + 1)
= Θ(n(n+1)/2)
= Θ(n^2).
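As a quick numeric sanity check of that unrolling (assuming, purely for illustration, that the Θ(n) term is exactly n):

    # The unrolled cost n + (n-1) + ... + 1 matches the closed form n(n+1)/2.
    for n in (10, 100, 1000):
        assert sum(range(1, n + 1)) == n * (n + 1) // 2
    print("closed form confirmed")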
Hope this helps!
I am trying to apply the Master's Theorem to a recurrence of this type:
T(n) = T(n/2) + 2^n
However, f(n) = 2^n doesn't seem to fit any of the three cases described in the Master Theorem, which all seem to have base n instead of base 2. How can I solve a recurrence of this type? Could anyone please help? Thanks.
If none of the cases of the theorem applies, then the theorem won't solve your recurrence. It can't solve every single recurrence out there.
To address your issue: what you get by repeatedly substituting the recursive case is T(n) = 2^n + 2^(n/2) + 2^(n/4) + ... + 2. There are about log n terms, and every term after the first is at most 2^(n/2), so the whole tail adds up to at most (log n)*2^(n/2), which is far below 2^n for large n. The total is therefore below 2^(n+1), so you're in Θ(2^n).
Alternatively, case 3 of the Master Theorem applies directly: n^(log_2 1) = n^0 = 1, and f(n) = 2^n is polynomially larger than that. The regularity condition holds too, since a*f(n/b) = 2^(n/2) <= (1/2)*2^n for n >= 2. This gives T(n) = Θ(2^n).
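Either way, a quick numeric check supports the Θ(2^n) bound (the floor in n/2 and the base case T(1) = 2 below are assumptions of mine):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        # Integer version of T(n) = T(n/2) + 2^n; base case T(1) = 2 is assumed.
        return 2 if n <= 1 else T(n // 2) + 2**n

    for n in (10, 20, 40, 80):
        print(n, T(n) / 2**n)  # ratio stays between 1 and 2, matching Theta(2^n)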
What is the runtime of the recurrence T(n) = 3T(2n/3) + 1, and how did you get it?
This type of recurrence can be solved with the Master Theorem. Here a = 3, b = 3/2, and f(n) = 1. Then c = log_1.5(3) = 2.709..., and because n^2.709 grows faster than f(n), you fall into the first case.
So the solution is O(n^2.709)
Use the Master Theorem. This is a lot easier than trying to solve the recurrence yourself, as you tried in your original question.
Off the top of my head, this should get you T(n) = Theta(n^2.7) (Case 1 of the Master Theorem).
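A quick numeric check agrees with case 1 (flooring 2n/3 and using T(1) = 1 are my own assumptions):

    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        # Integer version of T(n) = 3*T(2n/3) + 1; base case is assumed.
        return 1 if n <= 1 else 3 * T(2 * n // 3) + 1

    c = math.log(3, 1.5)  # log base 1.5 of 3, ~2.7095
    for n in (10, 100, 1000, 10_000):
        print(n, T(n) / n**c)  # wobbles because of the floors, but stays bounded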
I have a homework question asks
Given f(n) is O(k(n)) and g(n) is O(k(n)), prove f(n)+g(n) is also O(k(n))
I'm not sure where to start with this. Any help to guide me on how to work through this?
Try and work through it logically. f(n) grows no faster than a constant multiple of k(n), and so does g(n). Therefore
O(k(n)) + O(k(n)) = O(2k(n))
When attempting to find the big O classification of a function, constants don't count.
I'll leave the rest (including the why) as an exercise for you. (Getting the full answer on SO would be cheating!)
Refer to the Rules for Big-Oh Notation.
Sum Rule: If f(n) is O(h(n)) and g(n) is O(p(n)), then f(n)+g(n) is O(h(n)+p(n)).
Using this rule for your case, the complexity would be O(k(n) + k(n)) = O(2k(n)), which is nothing but O(k(n)).
So, f(n) is O(g(n)) iff f(n) is less than or equal to some positive constant multiple of g(n) for all sufficiently large values of n (that is: f(n) <= c*g(n) for n >= n_0). Usually, to prove something is O(g(n)), we provide some c and n_0 to show that it is true.
In your case, I would start by using that definition, so you could say f(n) <= ck(n) and g(n) <= dk(n). I don't want to totally answer the question for you, but you are basically just going to try to show that f(n)+g(n) <= tk(n).
*c, d, and t are all just arbitrary, positive constants. If you need more help, just comment, and I will gladly provide more info.
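To make that concrete, here is a toy check with made-up example functions (f, g, k, and the constants below are purely illustrative, not part of the homework):

    # Made-up example: f(n) = 3n^2 + n is O(n^2) with c = 4, g(n) = 5n^2 is
    # O(n^2) with d = 5, and t = c + d = 9 witnesses f(n) + g(n) = O(n^2).
    f = lambda n: 3 * n * n + n
    g = lambda n: 5 * n * n
    k = lambda n: n * n

    c, d = 4, 5
    assert all(f(n) + g(n) <= (c + d) * k(n) for n in range(1, 10_000))
    print("f(n) + g(n) <= 9 * k(n) holds for all tested n >= 1")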