Is CLRS completely accurate to state that max-heapify running time is described by the recurrence `T(n) = T(2n/3) + O(1)`?

In CLRS (page 155), in the discussion of max-heaps, the running time of max-heapify is described by the recurrence T(n) = T(2n/3) + O(1).
I understand why the first recursive call is on a subproblem of size at most 2n/3: a heap is a nearly complete binary tree, and the worst case occurs when the deepest level of nodes is exactly half full and we recurse into the child whose subtree contains all of those deepest-level nodes.
What I don't understand is this: after that first recursive call, the subtree we recurse into is a complete binary tree, so the subsequent recursive calls are on problems of size n/2.
So is it accurate to simply state that the running time of max-heapify is described by the recurrence T(n) = T(2n/3) + O(1)?

Converting my comment to an answer: if you assume that T(n), the worst-case time max-heapify takes on a subtree of n nodes, is a nondecreasing function of n, then we know that T(m) ≤ T(n) for any m ≤ n. You're correct that 2n/3 is the worst-case ratio and that it won't be reached again after the first level of the recursion, but under the above assumption you can safely conclude that T(n/2) ≤ T(2n/3), so we can upper-bound the running time by the recurrence
T(n) ≤ T(2n / 3) + O(1)
even if strict equality doesn't hold. That then lets us use the master theorem to conclude that T(n) = O(log n).
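To spell out the master-theorem step (a standard case-2 application, written out here for completeness rather than taken from the original answer): with a = 1, b = 3/2, and f(n) = O(1), we have
n^(log_b a) = n^(log_{3/2} 1) = n^0 = 1, and f(n) = Θ(1) = Θ(n^(log_b a)),
so case 2 gives Θ(n^(log_b a) · lg n) = Θ(lg n); since the recurrence is only an upper bound, we conclude T(n) = O(log n).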

Related

How do you solve this recurrence relation using the recursion tree?

I am having a difficult time understanding and solving recurrence relations. Can someone please help me with this particular one?
T(n)=T(n/3)+T(2n/3)+n
Start by drawing the recursion tree for this recurrence (the original answer shows an image of the first few levels); you can keep expanding it level by level.
It is essentially the same tree as for T(n) = T(n/3) + T(2n/3) + cn, which is worked through in many places; just take c = 1.
The shortest path to a leaf occurs when we take the n/3 branch each time.
If k is the length of that path, then n·(1/3)^k ≤ 1, meaning k ≥ log_3 n.
The longest path to a leaf occurs when we take the 2n/3 branch each time.
If k is the length of that path, then n·(2/3)^k ≤ 1, meaning k ≥ log_{3/2} n.
Now look at the levels of the tree (the original answer shows another image here):
on any full level, the additive terms sum to n, because the children's sizes n/3 and 2n/3 add up to the parent's size.
Now, let's look at what we have.
1. Counting only the log_3 n levels that are guaranteed to be full gives T(n) ≥ n·log_3 n, so T(n) = Ω(n log n).
2. Charging n to every one of the at most log_{3/2} n levels gives T(n) ≤ n·log_{3/2} n, so T(n) = O(n log n).
These two bounds (1 and 2) together lead us to T(n) = Θ(n log n).
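If you want to sanity-check the Θ(n log n) bound numerically, here is a small C sketch (my own illustration, not part of the original answer; the base case T(n) = 1 for n ≤ 2 is an arbitrary choice). It evaluates the recurrence directly and compares it with n·log2(n); the ratio stays within a constant band, which is what Θ(n log n) predicts. Compile with -lm.

#include <math.h>
#include <stdio.h>

/* Directly evaluate T(n) = T(n/3) + T(2n/3) + n, with T(n) = 1 for n <= 2
   (an arbitrary base-case constant, used only for illustration). */
static double T(long n) {
    if (n <= 2)
        return 1.0;
    return T(n / 3) + T(n - n / 3) + (double)n;
}

int main(void) {
    /* Print the ratio T(n) / (n * log2(n)); it stays bounded between
       constants, consistent with T(n) = Theta(n log n). */
    for (long n = 1000; n <= 1000000; n *= 10)
        printf("n = %7ld   T(n)/(n log2 n) = %.3f\n",
               n, T(n) / ((double)n * log2((double)n)));
    return 0;
}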

How to solve such this recurrence to find out time complexity

There is a version of merge sort where the array is divided into parts of size n/3 and 2n/3 each time (instead of n/2 and n/2 as in the original algorithm).
The recurrence here would be:
T(n)=T(n/3)+T(2n/3)+n
Now the problem is, how to solve this to get the time complexity of this implementation?
There is the Akra–Bazzi method, which handles cases more general than those the Master Theorem is intended for.
In this example you get the same Θ(n log n) as for equal parts: the balancing exponent is p = 1, and T(n) = Θ(n·(1 + ∫ from 1 to n of (u/u^2) du)) = Θ(n log n).
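Spelling out the Akra–Bazzi computation (a standard application of the theorem, added here for completeness):

find p with (1/3)^p + (2/3)^p = 1, which gives p = 1;
then T(n) = Θ( n^p · (1 + ∫ from 1 to n of f(u)/u^(p+1) du) )
          = Θ( n · (1 + ∫ from 1 to n of du/u) )
          = Θ( n · (1 + ln n) )
          = Θ(n log n).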
T(n) denotes the total time taken by the algorithm.
We can calculate the time complexity of this recurrence relation using a recursion tree.
T(n)=T(n/3)+T(2n/3)+n ------- 1
The root node of T(n) has cost n, and it expands into 2 parts:
T(n/3) and T(2n/3)
In the next step we find the root-node values of T(n/3) and T(2n/3).
To compute T(n/3) substitute n/3 in place of n in equation 1
T(n/3)=T(n/9)+T(2n/9)+n/3
To compute T(2n/3) substitute 2n/3 in place of n in equation 1
T(2n/3)=T(2n/9)+T(4n/9)+2n/3
The root node of T(n/3) has cost n/3, and it expands into 2 parts:
T(n/9) and T(2n/9)
Keep expanding the nodes until you reach individual elements, i.e. T(1).
Calculation of depth:
To calculate the depth, solve n/(b^i) = 1. Here b is 3 along the left (n/3) branch and 3/2 along the right (2n/3) branch, so the two candidates are n/3^i and n/(3/2)^i.
For example, if n = 9 then n/3 = 3 and 2n/3 = 6;
at the next level, n/9 = 1, 2n/9 = 2, 4n/9 = 4.
The right part of the recursion tree, n → 2n/3 → 4n/9 → ..., is the longest path from the root to a leaf.
If we followed the left part of the tree instead, we would use n/3^i to find where that branch bottoms out.
Since the depth of the whole tree is determined by the longest path, we use the right part, n/(3/2)^i:
n = (3/2)^i
log n = i · log(3/2)
i = log_{3/2} n
Now, calculate the cost of each level.
Since the cost of each full level is the same, namely n:
T(n) ≈ cost per level × depth
T(n) ≈ n · i
T(n) ≈ n · log_{3/2} n, which is O(n log n)
Or we can see it as T(n) = n + n + n + ... (i times), i.e. T(n) ≈ n · i.
You can also find the time complexity using the Akra–Bazzi method.

Why is the Fibonacci Sequence Big O(2^n) instead of O(logn)?

I took discrete math (in which I learned about master theorem, Big Theta/Omega/O) a while ago and I seem to have forgotten the difference between O(logn) and O(2^n) (not in the theoretical sense of Big Oh). I generally understand that algorithms like merge and quick sort are O(nlogn) because they repeatedly divide the initial input array into sub arrays until each sub array is of size 1 before recursing back up the tree, giving a recursion tree that is of height logn + 1. But if you calculate the height of a recursive tree using n/b^x = 1 (when the size of the subproblem has become 1 as was stated in an answer here) it seems that you always get that the height of the tree is log(n).
If you solve the Fibonacci sequence using recursion, I would think that you would also get a tree of size logn, but for some reason, the Big O of the algorithm is O(2^n). I was thinking that maybe the difference is because you have to remember all of the fib numbers for each subproblem to get the actual fib number meaning that the value at each node has to be recalled, but it seems that in merge sort, the value of each node has to be used (or at least sorted) as well. This is unlike binary search, however, where you only visit certain nodes based on comparisons made at each level of the tree so I think this is where the confusion is coming from.
So specifically, what causes the Fibonacci sequence to have a different time complexity than algorithms like merge/quick sort?
The other answers are correct, but they don't make clear where the large difference between the Fibonacci algorithm and divide-and-conquer algorithms actually comes from. Indeed, the shape of the recursion tree for both classes of functions is the same: it's a binary tree.
The trick to understand is actually very simple: consider the size of the recursion tree as a function of the input size n.
In the Fibonacci recursion, the input size n is the height of the tree; for sorting, the input size n is the width of the tree. In the former case, the size of the tree (i.e. the complexity) is exponential in the input size; in the latter, it is the input size multiplied by the height of the tree, which is usually just a logarithm of the input size.
More formally, start by these facts about binary trees:
In a full binary tree (one where every non-leaf node has two children), the number of leaves n is equal to the number of non-leaf nodes plus one. The size of such a binary tree with n leaves is therefore 2n - 1.
In a perfect binary tree, all non-leaf nodes have two children.
The height h of a perfect binary tree with n leaves is log(n); for a random binary tree, h = O(log(n)) in expectation; and for a degenerate binary tree, h = n - 1.
Intuitively:
For sorting an array of n elements with a recursive algorithm, the recursion tree has n leaves. It follows that the width of the tree is n, the height of the tree is O(log(n)) on the average and O(n) in the worst case.
For calculating a Fibonacci sequence element k with the recursive algorithm, the recursion tree has k levels (to see why, consider that fib(k) calls fib(k-1), which calls fib(k-2), and so on). It follows that the height of the tree is k. To estimate a lower bound on the width and the number of nodes in the recursion tree, consider that since fib(k) also calls fib(k-2), there is a perfect binary tree of height k/2 inside the recursion tree. If extracted, that perfect subtree would have 2^(k/2) leaf nodes. So the width of the recursion tree is at least 2^(k/2), i.e. exponential in k.
The crucial difference is that:
for divide-and-conquer algorithms, the input size is the width of the binary tree.
for the Fibonacci algorithm, the input size is the height of the tree.
Therefore the number of nodes in the tree is O(n) in the first case, but 2^O(n) in the second. The Fibonacci tree is much larger compared to the input size.
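To make the 2^O(n) claim concrete (a short side calculation, not part of the original answer): let C(n) be the number of calls made by the naive recursion, counting every invocation of fib as one call, with C(0) = C(1) = 1. Then

C(n) = C(n-1) + C(n-2) + 1 = 2·F(n+1) - 1, and 2^((n-1)/2) ≤ F(n+1) ≤ 2^n for n ≥ 1,

so the total number of calls, and hence the running time, is 2^Θ(n).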
You mention Master theorem; however, the theorem cannot be applied to analyze the complexity of Fibonacci because it only applies to algorithms where the input is actually divided at each level of recursion. Fibonacci does not divide the input; in fact, the functions at level i produce almost twice as much input for the next level i+1.
To address the core of the question, that is "why Fibonacci and not Mergesort", you should focus on this crucial difference:
The tree you get from Mergesort has n elements on each level, and there are log(n) levels.
The tree you get from Fibonacci has n levels because of the presence of F(n-1) in the formula for F(n), and the number of elements on each level can vary greatly: it can be very low (near the root, or near the lowest leaves) or very high. This, of course, is because of repeated computation of the same values.
To see what I mean by "repeated computation", look at the tree for the computation of F(6):
Fibonacci tree picture from: http://composingprograms.com/pages/28-efficiency.html
How many times do you see F(3) being computed?
Consider the following implementation
int fib(int n)
{
    /* Base cases: fib(0) = 0, fib(1) = 1. */
    if (n < 2)
        return n;
    /* Two recursive calls; nothing is cached, so the same values are recomputed. */
    return fib(n - 1) + fib(n - 2);
}
Let's denote by T(n) the number of operations that fib performs to calculate fib(n). Because fib(n) calls fib(n-1) and fib(n-2), T(n) is at least T(n-1) + T(n-2). This in turn means that T(n) > fib(n). There is a closed-form formula for fib(n) (Binet's formula), which grows like a constant raised to the power n. Therefore T(n) is at least exponential. QED.
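For reference, the closed form alluded to above is a standard fact (Binet's formula), not something specific to this answer:

fib(n) = (φ^n − ψ^n) / √5, where φ = (1 + √5)/2 ≈ 1.618 and ψ = (1 − √5)/2 ≈ −0.618,

so fib(n) = Θ(φ^n), and therefore T(n) = Ω(φ^n) ≈ Ω(1.618^n).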
To my understanding, the mistake in your reasoning is assuming that a recursive implementation evaluating f(n), where f denotes the Fibonacci sequence, reduces the input size by a factor of 2 (or by some other factor); that is not the case. Each call (except for the 'base cases' 0 and 1) makes exactly 2 recursive calls, and there is no possibility of reusing previously calculated values. In light of the presentation of the master theorem on Wikipedia, the recurrence
f(n) = f (n-1) + f(n-2)
is a case for which the master theorem cannot be applied.
With the recursive algorithm, you perform on the order of 2^N operations (additions) for fibonacci(N), so it is O(2^N).
With a cache (memoization), you perform roughly N operations, so it is O(N).
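For example, here is a minimal memoized version in C (an illustrative sketch of the idea, not code from the original answer; the fixed cache size is an arbitrary choice for the sketch):

#include <stdio.h>

#define MAXN 93                /* fib(93) would overflow a signed 64-bit integer */

static long long memo[MAXN];   /* 0 means "not computed yet"; fib(0) = 0 returns before the lookup */

long long fib_memo(int n)
{
    if (n < 2)
        return n;
    if (memo[n] != 0)
        return memo[n];        /* each value is computed only once, so O(N) total work */
    memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
    return memo[n];
}

int main(void)
{
    printf("%lld\n", fib_memo(50));   /* prints 12586269025 */
    return 0;
}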
Algorithms with complexity O(N log N) often combine iterating over every item (O(N)) with a split-recurse-merge pattern: splitting in 2 means you do about log N levels of recursion.
Merge sort time complexity is O(n log(n)). Quick sort best case is O(n log(n)), worst case O(n^2).
The other answers explain why naive recursive Fibonacci is O(2^n).
In case you read that Fibonacci(n) can be computed in O(log(n)), this is possible if it is calculated using iteration and repeated squaring, either with the matrix method or the Lucas sequence method. Example code for the Lucas sequence method (note that n is divided by 2 on each loop iteration):
/* lucas sequence method */
int fib(int n) {
    int a, b, p, q, qq, aq;
    a = q = 1;
    b = p = 0;
    while (1) {
        if (n & 1) {
            /* current bit of n is 1: multiply the accumulated result (a, b)
               by the current power (p, q) */
            aq = a * q;
            a = b * q + aq + a * p;
            b = b * p + aq;
        }
        n /= 2;
        if (n == 0)
            break;
        /* square the current power (p, q) for the next bit of n */
        qq = q * q;
        q = 2 * p * q + qq;
        p = p * p + qq;
    }
    return b;
}
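The underlying idea (a standard fact, mentioned here only for context) is fast exponentiation of the Fibonacci matrix: [[1, 1], [1, 0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]], and repeated squaring evaluates that matrix power in O(log n) multiplications.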
Contrary to the other answers, the master theorem can be applied here, but it has to be the master theorem for decreasing functions rather than the one for dividing functions. Without the theorem, the recurrence can also be solved by substitution:
f(n) = f(n-1) + f(n-2)
Since f(n-2) ≤ f(n-1), this is bounded above by
f(n) = 2*f(n-1) + c
Let's assume c equals 1, since it is a constant and doesn't affect the complexity:
f(n) = 2*f(n-1) + 1
and substitute this function k times
f(n) = 2*[2*f(n-2) +1 ] + 1
f(n) = 2^2*f(n-2) + 2 + 1
f(n) = 2^2*[2*f(n-3) + 1] +2 + 1
f(n) = 2^3*f(n-3) + 4 + 2 + 1
.
.
.
f(n) = 2^k*f(n-k) + 2^(k-1) + 2^(k-2) + ... + 4 + 2 + 1
now let's take k = n
f(n) = 2^n*f(0) + 2^(n-1) + 2^(n-2) + ... + 4 + 2 + 1
With f(0) = 1 this sums to f(n) = 2^(n+1) - 1, thus the complexity is O(2^n).

Recurrence for the Worst-Case Running Time of Quicksort

Assume we have a quicksort in which choosing the pivot value takes linear time. Find the recurrence for the worst-case running time.
My answer:
T(n) = T(n-1) + T(1) + Θ(n)
Worst case occurs when the subarrays are completely unbalanced.
There is 1 element in one subarray and (n-1) elements in the other subarray.
Θ(n) because it takes running time proportional to n to find the pivot.
Am I doing this correctly?
Your recurrence is mostly correct, but you don't actually have two recursive calls made. In the worst-case for quicksort, the pivot will be the largest or smallest element in the array, so you'll recur on one giant array of size n - 1. The other subarray has length 0, so no recursive calls are made. To top everything off, the total work done is Θ(n) per level, so the recurrence relation would more appropriately be
T(n) = T(n - 1) + Θ(n)
This in turn then solves to Θ(n^2).
Hope this helps!
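Unrolling the recurrence makes the Θ(n^2) bound explicit (a standard expansion, added here for clarity):

T(n) = T(n-1) + cn
     = T(n-2) + c(n-1) + cn
     = ...
     = T(1) + c(2 + 3 + ... + n)
     = T(1) + c(n(n+1)/2 - 1), which is Θ(n^2).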
You cannot pin it down exactly in general: the recurrence is T(n) = T(n-k) + T(k-1) + n, and we cannot evaluate it until we know the value of k.
More generally, T(n) = T(an/(a+b)) + T(bn/(a+b)) + n,
where a/(a+b) and b/(a+b) are the fractions of the array under consideration on each side of the pivot.

recursion tree and binary tree cost calculation

I've got the following recursion:
T(n) = T(n/3) + T(2n/3) + O(n)
The height of the tree would be log_{3/2} n. Now, the recursion tree for this recurrence is not a complete binary tree: it has missing nodes lower down. This makes sense to me, but I don't understand how the following little-omega statement relates to the cost of all leaves in the tree:
"... the total cost of all leaves would then be Θ(n^(log_{3/2} 2)) which, since log_{3/2} 2 is a constant strictly greater than 1, is ω(n lg n)."
Can someone please help me understand how Θ(n^(log_{3/2} 2)) becomes ω(n lg n)?
OK, to answer your explicit question about why n^(log_1.5(2)) is ω(n lg n):
For all k > 1, n^k grows asymptotically faster than n lg n (polynomial factors eventually dominate logarithmic ones). Since 2 > 1.5, we have log_1.5(2) > 1, and thus n^(log_1.5(2)) grows faster than n lg n. And since our function is in Θ(n^(log_1.5(2))), it must also be in ω(n lg n).
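As a quick numeric check (my own addition, not from the original answer): log_{3/2}(2) = ln 2 / ln 1.5 ≈ 1.71, so the leaf-cost estimate is about n^1.71, and n^1.71 / (n lg n) = n^0.71 / lg n → ∞ as n → ∞, which is exactly what ω(n lg n) means.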
