Give an asymptotic upper bound on the height of an n-node binary search tree in which the average depth of a node is Θ(lg n)

Recently I've been trying to solve all the exercises in CLRS, but there are some I can't figure out. Here is one of them, CLRS Exercise 12.4-2:
Describe a binary search tree on n nodes such that the average depth of a node in the tree is Θ(lg n) but the height of the tree is ω(lg n). Give an asymptotic upper bound on the height of an n-node binary search tree in which the average depth of a node is Θ(lg n).
Can anyone share some ideas or references to solve this problem? Thanks.

So let's suppose that we build the tree this way: given n nodes, take f(n) nodes and set them aside. Then build a tree whose root has a left subtree that's a perfect binary tree of n - f(n) - 1 nodes and a right subtree that's a chain of length f(n). We'll pick f(n) later.
So what's the average depth in the tree? Since we just want an asymptotic bound, let's pick n such that n - f(n) - 1 is one less than a power of two, say 2^k - 1. In that case, the sum of the depths in this part of the tree is 1*2 + 2*3 + 4*4 + 8*5 + ... + 2^(k-1) * k, which is (IIRC) about k 2^k, which is just about (n - f(n)) log (n - f(n)) by our choice of k. In the other part of the tree (the chain), the total depth is about f(n)^2. This means that the average depth is about ((n - f(n)) log (n - f(n)) + f(n)^2) / n. Also, the height of the tree is f(n). So we want to maximize f(n) while keeping the average depth O(log n).
To do this, we need to find f(n) such that:
1. n - f(n) = Θ(n); otherwise the log term in the numerator disappears and the average depth isn't Θ(log n), and
2. f(n)^2 / n = O(log n); otherwise the second term in the numerator gets too big.
If you pick f(n) = Θ(sqrt(n log n)), I think that conditions 1 and 2 are satisfied maximally. So I'd wager (though I could be totally wrong about this) that this is as good as you can get. You get a tree of height Θ(sqrt(n log n)) that has average depth Θ(log n).
Hope this helps! If my math is way off, please let me know. It's late now and I haven't done my usual double-checking. :-)

1. First, maximize the height of the tree (have a tree where each node has only one child, so you get a long chain going downward).
2. Check the average depth (obviously the average depth will be too high).
3. While the average depth is too high, decrease the height of the tree by one.
4. There are many ways to decrease the height of the tree by one. Choose the one that minimizes the average depth (prove by induction that at each step you should select the option minimizing the average depth). Keep going until you fall under the average-depth requirement (e.g. derive, by induction, a formula for the height and the average depth).

If you are trying to maximize the height of a tree while minimizing the average depth of all the nodes of the tree, the unambiguously best shape would be an "umbrella" shape, i.e. a full binary tree with k nodes and height = lg k, where 0 < k < n, along with a single path, or "tail", of n-k nodes coming out of one of the leaves of the full part. The height of this tree is roughly lg k + n - k.
Now let's compute the total depth of all the nodes. The sum of the depths of the nodes of the full part is sum[ j * 2^j ], where the sum is taken from j=0 to j=lg k. By some algebra, the dominant term of the result is 2k lg k.
Next, the sum of the depths of the tail part is given by sum[i + lg k], where the sum is taken from i=0 to i=n-k. By some algebra, the result is approximately (n-k)lg k + (1/2)(n-k)^2.
Hence, summing the two parts above together and dividing by n, the average depth of all the nodes is (1 + k/n) lg k + (n-k)^2 / (2n). Note that because 0 < k < n, the first term here is O(lg n) no matter what k we choose. Hence, we need only make sure the second term is O(lg n). To do so, we require that (n-k)^2 = O(n lg n), or k = n - O(sqrt(n lg n)). With this choice, the height of the tree is
lg k + n - k = O( sqrt(n lg n) )
This is asymptotically larger than the ordinary O(lg n), and is asymptotically the tallest you can make the tree while keeping the average depth O(lg n).
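If you want to sanity-check the arithmetic, here is a small throwaway program (a sketch that just plugs numbers into the approximate closed forms above rather than building the tree) which picks k = n - sqrt(n lg n) and prints the resulting average depth and height:

#include <stdio.h>
#include <math.h>

int main(void) {
    for (double n = 1e3; n <= 1e9; n *= 1e3) {
        double k = n - sqrt(n * log2(n));            /* k = n - sqrt(n lg n) */
        double full = 2.0 * k * log2(k);             /* ~ total depth of the full part */
        double tail = (n - k) * log2(k)
                    + 0.5 * (n - k) * (n - k);       /* ~ total depth of the tail */
        double avg = (full + tail) / n;              /* average depth over all n nodes */
        double height = log2(k) + (n - k);           /* height of the umbrella tree */
        printf("n=%.0e  avg depth=%.1f (about %.2f lg n)  height=%.0f\n",
               n, avg, avg / log2(n), height);
    }
    return 0;
}

The average depth stays within a small constant multiple of lg n while the height grows like sqrt(n lg n), matching the analysis above.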

Related

Algorithm to join two AVL trees together in O(logn) time

So I'm trying to figure out an algorithm to join 2 AVL trees together in O(logn) time, where n is the total number of integers in both trees and is also odd. In this problem, the integers in the trees are distinct from one another. Additionally, each node of the trees store the size of the subtree rooted at it. I was thinking of adding the elements of the smaller tree into the larger one but I wasn't sure how to go about proving that this would take O(logn) time. Does anyone have any suggestions as to how I could go about this?
This is impossible.
Proof: Assume you had an algorithm to join 2 AVL search trees in O(logn), and let it be A(T1,T2)
We now represent a new sorting algorithm: Sort(A)(1)
Sort(A):
    Let T_i be an AVL tree consisting only of A_i    // O(1) n times, total O(n).
    curr_size = 1
    while curr_size < size(A):
        Let T_i, T_j be two trees of size curr_size  // O(1)
        // Assume without loss of generality i < j.
        if there are such T_i, T_j:
            T_i = A(T_i, T_j)                        // O(log(curr_size))
        else:
            curr_size = curr_size * 2                // O(1)
    return in_order(T_0)                             // O(n) by in-order traversal.
The algorithm complexity is:
T(n) = n/2 * log(2) + n/4 * log(4) + n/8 * log(8) + ... + 2*log(n/2) + log(n)
Explanation
First we need to merge all trees of size 1 into trees of size 2. This requires n/2 merges, each taking O(log 2). Next, merge the resulting n/2 trees into trees of size 4. This is done n/4 times, each taking O(log 4), ... Lastly we have two trees of size n/2 and we merge them once, which takes O(log n).
This gives us the formula:
T(n) = sum (n/2^i * log(2^i)) for i=1,2,3,...,logn
We could do some more algebra, but I take a shortcut and feed it to Wolfram Alpha, which gives us:
T(n) = 2n -log(n) -2
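For completeness, here is the algebra behind that result, assuming n = 2^m so the sum runs over m = log n terms, and using the standard partial sum identity sum_{i=1..m} i/2^i = 2 - (m+2)/2^m:

T(n) = n * sum_{i=1..m} i/2^i = n * (2 - (m+2)/2^m) = 2n - (m + 2) = 2n - log(n) - 2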
Since the above is linear, this means our general purpose sorting algorithm Sort(A) is linear.
But comparison-based sorting is Omega(n log n).
This means something is wrong - so the assumption that such an algorithm A(T1,T2) exists, with O(logn) complexity is wrong.
QED
(1) For simplicity, the algorithm assumes size(A) = 2^i for some i in N. This restriction can be relaxed without changing the conclusion; it only makes the algorithm slightly more complicated.

Complexity of array sum with divide and conquer

Consider the following algorithm:
sum(v, i, j) {
    if i == j
        return v[i]
    else {
        k = (i + j) / 2
        return sum(v, i, k) + sum(v, k+1, j)
    }
}
The time complexity of this algorithm is O(n), but how can I prove (in natural language) its complexity? The problem always gets divided into two new subproblems, so that would suggest O(log n), but where does the rest of the complexity come from?
Applying the master theorem yields the expected result, O(n).
Thanks.
From a high level perspective, your algorithm acts as if it is traversing a balanced binary tree, where each node covers a specific interval [i, j]. Its children divide the interval into two roughly equal parts, namely [i, (i+j)/2] and [(i+j)/2 + 1, j].
Let's assume that they are, in this case, exactly equal (in other words, for the sake of the proof, the length of the array n is a power of 2).
Think of it in the following way. There are n leaves in this balanced binary tree your algorithm is traversing. Each is responsible for an interval of length 1. There are n/2 nodes of the tree that are the parents of these n leaves. Those n/2 nodes have n/4 parents. This goes all the way up until you reach the root node of the tree, which covers the entire interval.
Think of how many nodes there are in this tree. n + (n/2) + (n/4) + (n/8) + ... + 2 + 1. Since we initially assumed that n = 2^k, we can formulate this sum as the sum of exponents, for which the summation formula is well known. It turns out that there are 2^(k+1) - 1 = 2 * (2^k) - 1 = 2n - 1 nodes in that tree. So, obviously traversing all nodes of that tree would take O(n) time.
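If you want to see the 2n - 1 count concretely, here is a small instrumented version of the function (a sketch; the call counter and driver are additions, not part of the original question) for an array whose length is a power of 2:

#include <stdio.h>

static long calls = 0;              /* one increment per node of the recursion tree */

int sum(const int v[], int i, int j) {
    calls++;
    if (i == j)
        return v[i];                /* leaf: interval of length 1 */
    int k = (i + j) / 2;
    return sum(v, i, k) + sum(v, k + 1, j);
}

int main(void) {
    enum { N = 16 };                /* n is a power of 2, as assumed above */
    int v[N];
    for (int i = 0; i < N; i++)
        v[i] = i;
    int total = sum(v, 0, N - 1);
    printf("total = %d, calls = %ld, 2n - 1 = %d\n", total, calls, 2 * N - 1);
    return 0;
}

Running it prints calls = 31 for n = 16, i.e. exactly 2n - 1, so the work is linear in n.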
Dividing the problem into two parts does not necessarily mean that the complexity is log(n).
I guess you are referring to the binary search algorithm, but there each division skips one half entirely, because we know the search key can only be on the other side of the division.
Just by looking at the pseudocode, a recursive call is made for every division and nothing is skipped. Why would it be log(n)?
O(n) is correct complexity.

Why is the Fibonacci Sequence Big O(2^n) instead of O(logn)?

I took discrete math (in which I learned about master theorem, Big Theta/Omega/O) a while ago and I seem to have forgotten the difference between O(logn) and O(2^n) (not in the theoretical sense of Big Oh). I generally understand that algorithms like merge and quick sort are O(nlogn) because they repeatedly divide the initial input array into sub arrays until each sub array is of size 1 before recursing back up the tree, giving a recursion tree that is of height logn + 1. But if you calculate the height of a recursive tree using n/b^x = 1 (when the size of the subproblem has become 1 as was stated in an answer here) it seems that you always get that the height of the tree is log(n).
If you solve the Fibonacci sequence using recursion, I would think that you would also get a tree of size logn, but for some reason, the Big O of the algorithm is O(2^n). I was thinking that maybe the difference is because you have to remember all of the fib numbers for each subproblem to get the actual fib number meaning that the value at each node has to be recalled, but it seems that in merge sort, the value of each node has to be used (or at least sorted) as well. This is unlike binary search, however, where you only visit certain nodes based on comparisons made at each level of the tree so I think this is where the confusion is coming from.
So specifically, what causes the Fibonacci sequence to have a different time complexity than algorithms like merge/quick sort?
The other answers are correct, but don't make it clear - where does the large difference between the Fibonacci algorithm and divide-and-conquer algorithms come from? Indeed, the shape of the recursion tree for both classes of functions is the same - it's a binary tree.
The trick to understand is actually very simple: consider the size of the recursion tree as a function of the input size n.
In the Fibonacci recursion, the input size n is the height of the tree; for sorting, the input size n is the width of the tree. In the former case, the size of the tree (i.e. the complexity) is an exponent of the input size, in the latter: it is input size multiplied by the height of the tree, which is usually just a logarithm of the input size.
More formally, start with these facts about binary trees:
The number of leaves n in a binary tree in which every non-leaf node has two children is equal to the number of non-leaf nodes plus one. The size of such a tree is therefore 2n-1.
In a perfect binary tree, all non-leaf nodes have two children.
The height h for a perfect binary tree with n leaves is equal to log(n), for a random binary tree: h = O(log(n)), and for a degenerate binary tree h = n-1.
Intuitively:
For sorting an array of n elements with a recursive algorithm, the recursion tree has n leaves. It follows that the width of the tree is n, the height of the tree is O(log(n)) on the average and O(n) in the worst case.
For calculating a Fibonacci sequence element k with the recursive algorithm, the recursion tree has k levels (to see why, consider that fib(k) calls fib(k-1), which calls fib(k-2), and so on). It follows that the height of the tree is k. To estimate a lower bound on the width and the number of nodes in the recursion tree, consider that since fib(k) also calls fib(k-2), there is a perfect binary tree of height k/2 as part of the recursion tree. If extracted, that perfect subtree would have 2^(k/2) leaf nodes. So the width of the recursion tree is at least 2^(k/2), or, equivalently, 2^O(k).
The crucial difference is that:
for divide-and-conquer algorithms, the input size is the width of the binary tree.
for the Fibonacci algorithm, the input size is the height of the tree.
Therefore the number of nodes in the tree is O(n) in the first case, but 2^O(n) in the second. The Fibonacci tree is much larger compared to the input size.
You mention Master theorem; however, the theorem cannot be applied to analyze the complexity of Fibonacci because it only applies to algorithms where the input is actually divided at each level of recursion. Fibonacci does not divide the input; in fact, the functions at level i produce almost twice as much input for the next level i+1.
To address the core of the question, that is "why Fibonacci and not Mergesort", you should focus on this crucial difference:
The tree you get from Mergesort has n elements at each level, and there are log(n) levels.
The tree you get from Fibonacci has n levels, because of the presence of F(n-1) in the formula for F(n), and the number of elements at each level can vary greatly: it can be very low (near the root, or near the lowest leaves) or very high. This, of course, is because of repeated computation of the same values.
To see what I mean by "repeated computation", look at the tree for the computation of F(6):
Fibonacci tree picture from: http://composingprograms.com/pages/28-efficiency.html
How many times do you see F(3) being computed?
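To put a number on it, here is a tiny instrumented version (my own sketch; the counter is not part of the picture above) that counts how often fib(3) gets recomputed while evaluating fib(6) naively; it prints 3:

#include <stdio.h>

static int fib3_calls = 0;

int fib(int n) {
    if (n == 3)
        fib3_calls++;               /* count every (re)computation of fib(3) */
    if (n < 2)
        return n;
    return fib(n - 1) + fib(n - 2);
}

int main(void) {
    int result = fib(6);            /* evaluate first so the counter is final */
    printf("fib(6) = %d, fib(3) computed %d times\n", result, fib3_calls);
    return 0;
}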
Consider the following implementation
int fib(int n)
{
    if (n < 2)
        return n;                   /* base cases: fib(0) = 0, fib(1) = 1 */
    return fib(n-1) + fib(n-2);     /* two recursive calls per invocation */
}
Let's denote by T(n) the number of operations that fib performs to calculate fib(n). Because fib(n) calls fib(n-1) and fib(n-2), T(n) is at least T(n-1) + T(n-2). This in turn means that T(n) > fib(n). There is a closed-form formula for fib(n) (Binet's formula), which grows roughly like a constant (the golden ratio, about 1.618) raised to the power n. Therefore T(n) is at least exponential. QED.
To my understanding, the mistake in your reasoning is assuming that, in a recursive implementation evaluating f(n) (where f denotes the Fibonacci sequence), the input size is reduced by a factor of 2 (or some other factor) at each call, which is not the case. Each call (except for the 'base cases' 0 and 1) makes exactly 2 recursive calls, and there is no possibility to re-use previously calculated values. In the light of the presentation of the master theorem on Wikipedia, the recurrence
f(n) = f (n-1) + f(n-2)
is a case for which the master theorem cannot be applied.
With the recursive algorithm, you have approximately 2^N operations (additions) for fibonacci(N). Then it is O(2^N).
With a cache (memoization), you have approximately N operations, so it is O(N).
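Here is a minimal sketch of that memoized version (the cap at fib(92) and the driver are my additions, chosen so the results fit in 64 bits):

#include <stdio.h>

#define MAXN 92                     /* fib(93) overflows a signed 64-bit integer */

static long long memo[MAXN + 1];    /* memo[k] == 0 means "not computed yet" */

long long fib_memo(int n) {
    if (n < 2)
        return n;
    if (memo[n] == 0)
        memo[n] = fib_memo(n - 1) + fib_memo(n - 2);   /* each fib(k) computed once */
    return memo[n];
}

int main(void) {
    printf("fib(50) = %lld\n", fib_memo(50));          /* prints 12586269025 */
    return 0;
}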
Algorithms with complexity O(N log N) are often a combination of iterating over every item (O(N)) and splitting, recursing, and merging: splitting by 2 means you do log N levels of recursion.
Merge sort time complexity is O(n log(n)). Quick sort best case is O(n log(n)), worst case O(n^2).
The other answers explain why naive recursive Fibonacci is O(2^n).
In case you read that Fibonacci(n) can be O(log(n)), this is possible if calculated using iteration and repeated squaring, either with the matrix method or the Lucas sequence method. Example code for the Lucas sequence method (note that n is divided by 2 on each loop iteration):
/* lucas sequence method */
int fib(int n) {
    int a, b, p, q, qq, aq;
    a = q = 1;
    b = p = 0;
    while (1) {
        if (n & 1) {              /* current bit of n is set: advance (a, b) */
            aq = a*q;
            a = b*q + aq + a*p;
            b = b*p + aq;
        }
        n /= 2;                   /* n is halved each iteration */
        if (n == 0)
            break;
        qq = q*q;                 /* squaring step: update (p, q) for the next bit */
        q = 2*p*q + qq;
        p = p*p + qq;
    }
    return b;
}
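A quick sanity check of that function (a hypothetical driver, just to confirm it produces the usual sequence):

#include <stdio.h>

int fib(int n);                     /* the Lucas sequence implementation above */

int main(void) {
    for (int i = 0; i <= 10; i++)
        printf("%d ", fib(i));      /* expected: 0 1 1 2 3 5 8 13 21 34 55 */
    printf("\n");
    return 0;
}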
Contrary to the other answers, the master theorem can be applied here, but you need the master theorem for decreasing functions rather than the one for dividing functions. Even without the theorem, the recurrence can be solved by substitution:
f(n) = f(n-1) + f(n-2)
Since f(n-2) <= f(n-1), and each call does a constant amount c of extra work, this is bounded by
f(n) = 2*f(n-1) + c
Let's assume c equals 1, since it is a constant and doesn't affect the complexity:
f(n) = 2*f(n-1) + 1
Now substitute this relation into itself k times:
f(n) = 2*[2*f(n-2) +1 ] + 1
f(n) = 2^2*f(n-2) + 2 + 1
f(n) = 2^2*[2*f(n-3) + 1] +2 + 1
f(n) = 2^3*f(n-3) + 4 + 2 + 1
.
.
.
f(n) = 2^k*f(n-k) + 2^(k-1) + 2^(k-2) + ... + 4 + 2 + 1
Now let k = n:
f(n) = 2^n*f(0) + 2^(n-1) + 2^(n-2) + ... + 4 + 2 + 1
Taking f(0) = 1, this sums to f(n) = 2^n + (2^n - 1) = 2^(n+1) - 1, thus the complexity is O(2^n).
Check this video for master theorem for decreasing functions.

How is it that a binary tree with n! leaves has height Omega(n log n)?

I came across the proposition that a binary tree with n! leaves has height Omega(n log n).
I am unable to understand how this is possible. I understand that the height of a binary tree with n nodes satisfies log n <= h <= n, i.e. the height is at least log n (achieved by a complete binary tree), but I do not see a hint as to how the above proposition could be true or proved correct.
Any suggestions?
You have already stated that the lower bound on the height of a binary tree with n nodes is log n. It is a well-known fact (Stirling's formula) that log(n!) is approximately n log n. See for example here for a derivation.
A tree with n! leaves and minimal height has approximately 2*n! nodes. This gives a height of at least log(2*n!) = log 2 + log(n!), which is approximately log 2 + n log n, and that is in Omega(n log n).
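If you want to avoid Stirling's formula altogether, an elementary bound (a quick sketch) gives the same conclusion by keeping only the largest n/2 factors of n!:

log(n!) = log 1 + log 2 + ... + log n
        >= log(n/2) + log(n/2 + 1) + ... + log n    (drop the first half of the terms)
        >= (n/2) * log(n/2)

which is Omega(n log n), so any binary tree with n! leaves has height at least log(n!) = Omega(n log n).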

recursion tree and binary tree cost calculation

I've got the following recursion:
T(n) = T(n/3) + T(2n/3) + O(n)
The height of the tree would be log_{3/2} n. Now the recursion tree for this recurrence is not a complete binary tree; it has missing nodes lower down. This makes sense to me, however I don't understand how the following small-omega statement relates to the cost of all the leaves in the tree.
"... the total cost of all leaves would then be Θ(n^(log_{3/2} 2)) which, since log_{3/2} 2 is a constant strictly greater than 1, is ω(n lg n)."
Can someone please help me understand how Θ(n^(log_{3/2} 2)) becomes ω(n lg n)?
OK, to answer your explicit question about why n^(log_1.5(2)) is omega(n lg n):
For all k > 1, n^k grows faster than n lg n (powers eventually dominate logs). Therefore, since 2 > 1.5, we have log_1.5(2) > 1, and thus n^(log_1.5(2)) grows faster than n lg n. And since our function is in Theta(n^(log_1.5(2))), it must also be in omega(n lg n).
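For a concrete feel (a throwaway sketch; the sample sizes are arbitrary), you can print the exponent and compare the two growth rates numerically:

#include <stdio.h>
#include <math.h>

int main(void) {
    double k = log(2.0) / log(1.5);                  /* log base 1.5 of 2, about 1.7095 */
    printf("log_1.5(2) = %f\n", k);
    for (double n = 10; n <= 1e6; n *= 10)
        printf("n=%.0e  n^k=%.3e  n*lg(n)=%.3e\n",
               n, pow(n, k), n * log2(n));
    return 0;
}

Already at n = 10^6, n^(log_1.5(2)) is on the order of 10^10 while n lg n is only about 2*10^7, which is the gap the omega bound is describing.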

Resources