I have to solve this recurrence relation with the tree method, because the Master theorem does not apply:
T(n) = (2+1/log n) T(n/2)
After some thought I cannot come up with an exact solution. The Master theorem does not work here and unrolling the tree has not given me anything reasonable, so I will just estimate the complexity in the following way.
For any reasonably large n you can bound 0 < 1/log n < 1. So you get:
T1(n) = 2 * T1(n/2)
T2(n) = 3 * T2(n/2)
with T1(n) ≤ T(n) ≤ T2(n) for large n. You can find the complexity of both recurrences using the Master theorem: the complexity of T1 is O(n) and that of T2 is O(n^log2(3)).
So you can be sure that the complexity of your recurrence lies between Ω(n) and O(n^1.58), i.e. it is less than quadratic.
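Not part of the original answer, but here is a small Python sanity check of that sandwich; the base case T(1) = 1, the restriction to powers of two, and the use of log base 2 are my own assumptions.

```python
import math

def T(n):
    # Hypothetical base case T(1) = 1; n assumed to be a power of two.
    if n <= 1:
        return 1.0
    # T(n) = (2 + 1/log2(n)) * T(n/2)
    return (2 + 1 / math.log2(n)) * T(n // 2)

for k in (10, 20, 30):
    n = 2 ** k
    t = T(n)
    # T(n)/n keeps growing (so T is more than linear),
    # while T(n)/n^log2(3) keeps shrinking (so T is below n^1.58).
    print(f"n=2^{k}: T(n)/n = {t / n:.2f}, T(n)/n^log2(3) = {t / n ** math.log2(3):.6f}")
```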
Suppose we have been given the recurrence relation T(n) = 2T(n/3) + n and need to find the time complexity.
My problem is that I am getting different answers from the Master theorem and the recursion tree method.
By the Master theorem for an equation of the form T(n) = aT(n/b) + O(n^k log^p n):
Here a = 2, b = 3, p = 0, k = 1, and since a < b^k, i.e. 2 < 3^1, the Master theorem gives T(n) = O(n).
Here is my recursion tree method
And at the end I seriously don't know what I found out. It looks something like O(n · 2^(log n)).
Surely that's not the answer and I have messed it up somewhere, but I don't see where I am wrong. What is the right approach?
Also I wanted to ask: are the recurrence relations T(n) = 2T(n/3) + n and T(n) = 2T(n/3) + C the same, where C is a constant in the second equation?
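No answer is attached here, but the Master-theorem result O(n) quoted above can be checked by summing the recursion-tree levels directly. The sketch below is my own illustration, not from the post.

```python
def tree_total(n):
    # Recursion tree of T(n) = 2 T(n/3) + n:
    # level i has 2^i nodes, each doing n/3^i work, so the level sum is n * (2/3)^i.
    total, nodes, size = 0.0, 1, float(n)
    while size >= 1:
        total += nodes * size      # work at this level
        nodes *= 2
        size /= 3
    return total

for n in (10**3, 10**6, 10**9):
    # The geometric series n * sum (2/3)^i converges, so total/n tends to 3 -- Theta(n) overall.
    print(n, round(tree_total(n) / n, 3))
```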
I'm quite new to algorithms and I encountered a question where I don't know how to apply the Master theorem:
We have an algorithm A, which solves P by creating 3 subproblems, each of size 2n/3, recursively solving each subproblem, and then combining the solutions in O(n log n) time. Would this algorithm have a better running time than O(n)? Prove your answer.
What I know here is a = 3, b = 3/2, but how can I deal with the O(n log n) term?
Hence, the recursive formula is T(n) = 3T(2n/3) + O(n log n). Since n^{\log(3)/\log(1.5)} ~ n^{2.7} and f(n) = O(n log n) is polynomially smaller than that, case 1 of the Master theorem gives T(n) = \Theta(n^{2.7}).
Therefore, T(n) = \Omega(n^{2.7}), so the algorithm certainly does not run in better than O(n) time.
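To make the case-1 conclusion concrete, here is a short Python sketch (my own addition) that sums the recursion-tree levels of T(n) = 3T(2n/3) + n log n and compares the total with n^p, where p = log(3)/log(1.5) ≈ 2.71.

```python
import math

p = math.log(3) / math.log(1.5)   # critical exponent log_{1.5}(3), about 2.71

def level_sum(n):
    # Level i of the recursion tree has 3^i nodes, each of size (2/3)^i * n,
    # and each node pays a combine cost of size * log(size).
    total, nodes, size = 0.0, 1, float(n)
    while size >= 2:
        total += nodes * size * math.log(size)
        nodes *= 3
        size *= 2 / 3
    return total + nodes           # the leaves contribute Theta(n^p)

for n in (10**3, 10**5, 10**7):
    # The ratio stays bounded, consistent with T(n) = Theta(n^p) -- far above O(n).
    print(n, round(level_sum(n) / n ** p, 3))
```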
T(n) = 2T(n/2) + n^2 when n is even, and T(n) = 2T(n/2) + n^3 when n is odd.
I solved the two cases separately and I am getting Theta(n^2) if n is even and Theta(n^3) if n is odd, from case 3 of the Master theorem. But I am not supposed to solve this problem separately.
How do I solve a recurrence relation like this as a whole?
Is it solvable by the Master theorem, or does the Master theorem not apply?
Kindly help me with this.
Suppose n = 2^k for some integer k, so n in binary is 100...00. Then you can apply the master method to the even part of the recurrence and obtain Theta(n^2).
Now suppose there is also a 1 that is not in the most significant bit, e.g. 100100...00. Then you will have at least one level in your recursion tree whose nodes add up to a constant times n^3, and from this you obtain Theta(n^3).
Thus, the answer is Theta(n^2) if n is a power of two and Theta(n^3) otherwise. But if the first odd value we encounter is already a base case, then it might not be cubic.
After some chatting with kelalaka it occurred to me that if the first 1 in n is the k-th bit from the right and k > (2/3) log2 n, then we no longer care about (n/2^k)^3; the result is still O(n^2).
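A quick numeric illustration of this answer (my own sketch; the floor division for odd n and the base case T(1) = 1 are assumptions, since the question does not specify them):

```python
def T(n):
    # Assumed base case T(1) = 1; integer (floor) division for odd n.
    if n <= 1:
        return 1
    extra = n**2 if n % 2 == 0 else n**3
    return 2 * T(n // 2) + extra

# A power of two stays even all the way down: quadratic behaviour.
n = 2**20
print(T(n) / n**2)          # roughly 2, i.e. Theta(n^2)

# A low set bit forces a large odd value after a couple of halvings,
# and its cube dominates everything else.
m = 2**20 + 4
print(T(m) / m**2)          # huge ratio: the n^3 term has taken over
```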
The question is:
T(n) = √2*T(n/2) + log n
I'm not sure whether the Master theorem works here, and I'm kind of stuck.
This looks more like a job for the Akra-Bazzi theorem: http://en.wikipedia.org/wiki/Akra%E2%80%93Bazzi_method#The_formula with k=1, h=0, g(n)=log n, a=2^{1/2}, b=1/2. In that case, p=1/2 and you need to evaluate the integral \int_1^x log(u)/u^{3/2} du. You can use integration by parts, or a symbolic integrator. Wolfram Alpha tells me the indefinite integral is -2(log u + 2)/u^{1/2} + C, so the definite integral is 4 - 2(log x + 2)/x^{1/2}. Adding 1 and multiplying by x^{1/2}, we get T(x) = \Theta(5x^{1/2} - 2 log x - 4), which is simply \Theta(x^{1/2}).
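Not in the original answer, but the closed form of that integral is easy to double-check numerically; the sketch below uses a simple midpoint rule and is only meant as a sanity check.

```python
import math

def akra_bazzi_integral(x, steps=100000):
    # Midpoint-rule approximation of the integral of log(u)/u^(3/2) from 1 to x.
    h = (x - 1) / steps
    return h * sum(math.log(1 + (i + 0.5) * h) / (1 + (i + 0.5) * h) ** 1.5
                   for i in range(steps))

x = 1000.0
closed_form = 4 - 2 * (math.log(x) + 2) / math.sqrt(x)
print(akra_bazzi_integral(x), closed_form)   # the two values agree closely
```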
The Master theorem only has constraints on a and b, and those hold in your case. The fact that a is irrational and that your f(n) is log(n) is irrelevant.
So in your case c = log2(sqrt(2)) = 1/2. Since n^c = sqrt(n) grows polynomially faster than log(n), case 1 applies and the complexity of the recursion is O(sqrt(n)).
P.S. Danyal's solution is wrong, as the complexity is not n log n; Edward Doolittle's solution is correct, although it is overkill in this simple case.
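As a numeric cross-check of both answers (my own sketch; the base case T(1) = 1 and restricting n to powers of two are assumptions not stated in the question):

```python
import math

def T(n):
    # T(n) = sqrt(2) * T(n/2) + log2(n), with an assumed base case T(1) = 1.
    if n <= 1:
        return 1.0
    return math.sqrt(2) * T(n / 2) + math.log2(n)

for k in (10, 20, 40):
    n = 2.0 ** k
    # The ratio settles near a constant, consistent with Theta(sqrt(n)).
    print(k, round(T(n) / math.sqrt(n), 4))
```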
As per the Master theorem, f(n) should be polynomial, but here
f(n) = log n
which is not a polynomial, so it cannot be solved by the Master theorem as per the rules. I read somewhere about a fourth case as well, which I should mention.
It is also discussed here:
Master's theorem with f(n)=log n
However, there is a limited "fourth case" for the master theorem, which allows it to apply to polylogarithmic functions.
If f(n) = O(n^(log_b a) log^k n), then T(n) = O(n^(log_b a) log^(k+1) n).
In other words, suppose you have T(n) = 2T(n/2) + n log n. f(n) isn't a polynomial, but here n^(log_b a) = n and f(n) = n log n = n log^1 n, so k = 1. Therefore, T(n) = O(n log^2 n).
See this handout for more information: http://cse.unl.edu/~choueiry/S06-235/files/MasterTheorem-HandoutNoNotes.pdf
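Here is a small Python sketch (my addition) of that fourth-case example, T(n) = 2T(n/2) + n log n, assuming a base case T(1) = 0 and n a power of two:

```python
import math

def T(n):
    # T(n) = 2 T(n/2) + n log2(n), with an assumed base case T(1) = 0.
    if n <= 1:
        return 0.0
    return 2 * T(n / 2) + n * math.log2(n)

for k in (10, 15, 20):
    n = 2.0 ** k
    # For powers of two this is exactly n*k*(k+1)/2, so the ratio tends to 1/2:
    # T(n) = Theta(n log^2 n), matching the fourth case.
    print(k, round(T(n) / (n * math.log2(n) ** 2), 4))
```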
I'm taking a Data Structures and Algorithms course and I'm stuck on this recursive equation:
T(n) = log n * T(log n) + n
Obviously this can't be handled with the Master theorem, so I was wondering if anybody has any ideas for solving this recursive equation. I'm pretty sure it should be solved with a change of variables, like considering n to be 2^m, but I couldn't manage to find anything that works.
The answer is Theta(n). To prove something is Theta(n), you have to show it is Omega(n) and O(n). Omega(n) in this case is obvious because T(n)>=n. To show that T(n)=O(n), first
Pick a large finite value N such that log(n)^2 < n/100 for all n>N. This is possible because log(n)^2=o(n).
Pick a constant C>100 such that T(n)<Cn for all n<=N. This is possible due to the fact that N is finite.
We will show inductively that T(n)<Cn for all n>N. Since log(n)<n, by the induction hypothesis, we have:
T(n) < n + log(n) C log(n)
= n + C log(n)^2
< n + (C/100) n
= C * (1/100 + 1/C) * n
< C/50 * n
< C*n
In fact, for this function it is even possible to show that T(n) = n + o(n) using a similar argument.
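A numeric sketch of the same claim (my addition; the natural log and a base case of 1 for n ≤ 2 are assumptions):

```python
import math

def T(n):
    # T(n) = log(n) * T(log(n)) + n, with an assumed base case T(n) = 1 for n <= 2.
    if n <= 2:
        return 1.0
    return math.log(n) * T(math.log(n)) + n

for n in (1e3, 1e6, 1e12, 1e18):
    # The ratio tends to 1, consistent with T(n) = n + o(n).
    print(n, T(n) / n)
```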
This is by no means a formal proof, but I think it goes like this.
The key is the + n part. Because of this, T is bounded below by Omega(n). So let's assume that T(n) = O(n) and have a go at that.
Substitute this guess into the original relation:
T(n) = (log n)O(log n) + n
= O(log^2(n)) + O(n)
= O(n)
So it still holds.