It's been a while since my undergraduate class in algorithms. Could you please help me solve this recurrence equation?
T(0)=14
T(n)=4*T(n/2)+n^2 for n>0
I'm interested in an upper bound that is as low as possible.
The exact solution of this equation is difficult to compute, but according to the master theorem, its asymptotic bound is Θ(n² log n).
EDIT 1:
Actually it is possible to compute the exact solution; it is (for n > 0)
n² · (228·log(2) + 4·log(n)) / log(16)
(I obtained this solution by adding constants wherever possible to the master theorem solution and solving a system of 5 equations with a computer algebra application.)
When n is a power of 2, and n > 0, then the following expression gives the solution:
(57 + log2(n)) · n²
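As a quick sanity check of this closed form (my own addition, not part of the original answer), one can iterate the recurrence at powers of 2, where n/2 is exact, and compare:

def T(n):
    # T(0) = 14, T(n) = 4*T(n/2) + n^2, evaluated here only at powers of 2
    return 14 if n == 0 else 4 * T(n // 2) + n * n

for k in range(1, 11):
    n = 2 ** k
    assert T(n) == (57 + k) * n * n   # matches (57 + log2(n)) * n²
print("closed form verified for n = 2 .. 1024")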
I am working on Problem 4-3 from Introduction to Algorithms, 3rd Edition, and I am asked to find the asymptotic upper and lower bounds for T(n):
T(n) = 4T(n/3) + n lg(n)
I have browsed online for the solution and the solution says:
By the master theorem, we get T(n) ∈ Θ(n^(log3(4)))
I believe that the solution assumes that n^(log3(4)) is asymptotically larger than n lg(n)? But why is this true? I would be grateful if someone could help me understand!
In layman's terms:
We need to compare the growth of n*log(n) with n^(log3(4)) ≈ n^1.26; it is enough to compare against the slightly smaller n^1.25.
Divide both functions by n
log(n) vs n^(1/4)
Both are increasing.
Take the derivatives of both functions (ignoring constant factors):
n^(-1) vs n^(-3/4)
The derivative of the second one is eventually larger, so the second function eventually grows faster.
We can see that the plots of these functions intersect and that the power function becomes larger for big values of n; this holds for any power > 1.
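To see the crossover concretely, here is a quick numeric check (my own addition, not part of the original answer):

import math

# compare n*log2(n) with n**(log3(4)); the power function eventually overtakes
p = math.log(4, 3)                       # ~1.2619
for n in (2**10, 2**20, 2**40, 2**80):
    print(n, n * math.log2(n), n ** p)

At n = 2^10 the n*log2(n) column is still larger, but from around n = 2^20 onward the power function wins and the gap keeps widening.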
I am new to data structures and have been trying to grasp the concepts. I understand Big-O notation and am looking for examples related to O(n log n). I searched the internet but haven't found a satisfying example or implementation where I can see the O(n log n) complexity.
Can someone point me to a good example and implementation of this?
Thanks in advance
A classic example of an O(n log n) algorithm is that of Merge Sort. Here you will find a detailed calculation of its complexity. Generally speaking, there are many divide-and-conquer algorithms that have this complexity.
There is a well-known theorem in complexity theory called the Master theorem. A particular case of it says that if the complexity T(n) of an algorithm satisfies the equation
T(n) = a T(n/a) + b*n (1)
then
T(n) = O (n log n) (2)
Equation (1) above can be interpreted as saying that the algorithm works by splitting the problem into a parts, applying itself to each part separately, and then doing a linear amount of work on the complete input. This algorithmic pattern is sometimes called Merge and Recombine.
Consider the following example in Python
def f(x):
    # emulate quicksort: take x[0] as the pivot, partition the rest, recurse on both parts
    if len(x) > 1:
        x1 = [z for z in x[1:] if z <= x[0]]   # elements not larger than the pivot
        x2 = [z for z in x[1:] if z > x[0]]    # elements larger than the pivot
        return f(x1) + [x[0]] + f(x2)
    else:
        return x
This function implements a recursive algorithm that splits the input list into two parts, applies itself to each part independently, and then concatenates the results. If we are lucky and the parts x1 and x2 are of roughly the same length, then recurrence (1) holds with a = 2 and the complexity of the algorithm is given by formula (2) above.
If you are familiar with sorting and the Python language, you will recognize here an algorithm that emulates Quicksort, but without the complexity of performing the sorting in place. A somewhat cleaner example is Merge Sort, mentioned in the answer given by Christos.
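For completeness, here is a minimal Merge Sort sketch (my own illustration, not taken from the answers above); it satisfies recurrence (1) with a = 2 and therefore runs in O(n log n):

def merge_sort(x):
    # split the input in two halves, sort each recursively, then merge in linear time
    if len(x) <= 1:
        return x
    mid = len(x) // 2
    left, right = merge_sort(x[:mid]), merge_sort(x[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]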
I would suggest looking at Cormen's Introduction to Algorithms, page 151 (Heapsort) and page 170 (Quicksort). These are explained in detail in this book. The key idea in each case is that you need to understand the basic operation being performed, which in these two cases is the comparison, and then analyze using the features of the algorithm itself: for Quicksort the analysis around the pivot, and for Heapsort the heapify part.
This book covers everything you need to know in order to analyze complexity.
How do you calculate a tight bound on the running time for these recurrences?
T(n)=T(n-3)+n^2
T(n) = 4T(n/4)+log^3(n)
For the first one I used the substitution method, which gave me n^2, but that wasn't right; for the second one I used the Master Theorem and got n·log^4(n), which also wasn't right. A thorough explanation would be helpful. Thanks!
For the first recurrence, we can solve it by the recursion-tree method:
T(n)=T(n-3)+n^2
a) Here the recursion has about n/3 levels (each step subtracts 3 from n, so after roughly n/3 steps we reach the base case).
b) At each level the cost is at most n^2.
Therefore the running time is roughly (n/3)*n^2 = (n^3)/3, which is O(n^3).
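As a quick numeric illustration of that bound (my own addition; the base value below is arbitrary):

def T(n, base=1):
    # iterate T(n) = T(n-3) + n^2 down to the base case
    total = base
    while n > 0:
        total += n * n
        n -= 3
    return total

for n in (30, 300, 3000, 30000):
    print(n, T(n) / n**3)   # the ratio settles near a constant, so T(n) = Theta(n^3)

(The constant comes out closer to 1/9 than 1/3, but either way the bound is O(n^3).)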
Coming to the second recurrence relation
T(n)=4T(n/4)+log^3(n)
Here we can't apply the Master theorem, because n and log^3(n) are not polynomially comparable.
We could have applied the Master theorem (the corollary for strictly logarithmic differences) if we had something like n·log^3(n), because that is larger strictly by a log factor.
Correct me if I am wrong here.
Given the following recursive equations:
T(n) = 5T(n/5)+(5sin^5(5n^5)+5)*n
T(n) = T(n/4)+2sin^2(n^4)
I can easily see that both equations fit the 2nd case of the master theorem,
but because sin is a circular (periodic) function, it seems that a large enough N
might bring f(N) really close to zero.
So, for any two constants c1, c2 (from the Θ definition), we will always be able to find an N > N0 that violates the bound.
Is it really possible to solve these with the master theorem?
thanks
I think you're right: the Master Theorem does not apply here. The reason is that the difference between f(n) and n^(log_b(a)) has to be polynomial. (See Master Theorem Recurrences: What is exactly polynomial difference?)
In your case:
((5sin^5(5n^5)+5)*n) / (n^(log_5(5))) = 5sin^5(5n^5)+5, and
(2sin^2(n^4)) / (n^(log_4(1))) = 2sin^2(n^4); neither ratio is polynomial, so the Master Theorem does not apply in this case.
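As a small numeric illustration of why the Θ condition fails (my own addition, not part of the original answer):

import math

# sample the ratio f(n)/n^(log_5(5)) = 5*sin(5*n^5)**5 + 5 for the first recurrence
ratios = [5 * math.sin(5 * n**5) ** 5 + 5 for n in range(1, 500)]
print(min(ratios), max(ratios))   # typically prints values close to 0 and close to 10

Because the ratio keeps dipping arbitrarily close to 0, f(n) is not Θ(n^(log_5(5))) = Θ(n), which is exactly the objection raised in the question.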
Say I have an algorithm which operates on an input of size n and I know that the time it takes for n is twice the time it takes for n-1. I can observe in this simple case (assuming it takes, say, 1 second for n = 0) that the algorithm takes 2^n seconds.
Is there a general method for converting such recursive definitions into the more familiar direct (closed-form) type of expression?
Master Theorem
In particular:
With T(n) = aT(n/b) + n^c:
If log_b(a) < c, then T(n) = O(n^c)
If log_b(a) = c, then T(n) = O(n^c log(n))
If log_b(a) > c, then T(n) = O(n^(log_b(a)))
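As a toy illustration of how these three cases are applied (my own sketch, not part of the original answer; the function name is made up):

import math

def master_theorem_bound(a, b, c):
    # classify T(n) = a*T(n/b) + n^c according to the three cases above
    crit = math.log(a, b)            # the critical exponent log_b(a)
    if math.isclose(crit, c):
        return "O(n^%g log n)" % c
    elif crit < c:
        return "O(n^%g)" % c
    else:
        return "O(n^%.3f)" % crit

print(master_theorem_bound(2, 2, 1))   # merge sort, T(n) = 2T(n/2) + n   -> O(n^1 log n)
print(master_theorem_bound(4, 2, 1))   # T(n) = 4T(n/2) + n               -> O(n^2.000)
print(master_theorem_bound(2, 2, 2))   # T(n) = 2T(n/2) + n^2             -> O(n^2)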
That's one useful theorem to know, but doesn't fully answer your question.
What you are looking for is the generating function of the recurrence relation. In general, these are only solvable in closed form for very simple cases, e.g. when f(n) = A·f(n-1) + B·f(n-2) and f(0) = f(1) = 1 (or f(1) = A). Other recurrence relations are very difficult to solve.
See linear recurrence relation for more info.
"Recursive functions" like this are called Recurrence Relations, and their "direct types" are known as the Closed-form solution.
While the Master Theorem listed by Poita is very helpful in computing time-complexity, it has nothing to do with actually solving recurrence relations.
Wikipedia and Wolfram MathWorld (under "See Also") list the closed forms of some common classes of recurrence relations. However, complicated (non-linear) recurrence relations can be very difficult to find closed-form solutions for, if one exists at all. There is no general algorithm for solving them.
If it's linear, you can express the relation as a matrix and find the eigenvalues, decomposing it into a form that lets you raise the eigenvalues to a power, as worked through for Fibonacci here.
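For instance, a minimal numpy sketch of that matrix/eigenvalue approach for the Fibonacci recurrence f(n) = f(n-1) + f(n-2) (my own illustration, not taken from the linked write-up):

import numpy as np

def fib_closed_form(n):
    # companion matrix of f(n) = f(n-1) + f(n-2); its eigenvalues are (1 ± sqrt(5)) / 2
    A = np.array([[1.0, 1.0], [1.0, 0.0]])
    eigvals, eigvecs = np.linalg.eig(A)
    D = np.diag(eigvals ** n)                     # raise the eigenvalues to the n-th power
    An = eigvecs @ D @ np.linalg.inv(eigvecs)     # A^n = V D^n V^-1
    return round(float(An[1, 0].real))            # A^n = [[f(n+1), f(n)], [f(n), f(n-1)]]

print([fib_closed_form(n) for n in range(10)])    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

Expanding A^n = V D^n V^-1 symbolically is what produces the familiar Binet-style closed form.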