I have learned that the height of a random BST, a red-black tree, and some other trees is O(log n).
I wonder how this can be. Let's say I have a tree like this (the picture shows a complete binary tree with 31 nodes):
The height of the tree is essentially the depth of the tree, which in this case is 4 (not counting the root level). But how can people say that the height can be represented by O(log n) notation?
I'm very new to algorithms, and this point is confusing me a lot. Where am I missing the point?
In algorithm complexity, the variable n typically refers to the total number of items in a collection or involved in some calculation. In this case, n is the total number of nodes in the tree, so in the picture you posted, n = 31. Saying the height of the tree is O(log n) means that the height is proportional to the log of n. Since this is a binary tree, you'd use log base 2.
⌊log₂(31)⌋ = 4
Therefore, the height of the tree should be about 4—which is exactly the case in your example.
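To make this concrete, here is a small Python sketch (the helper names are mine, not from the question) that builds a height-balanced BST from 31 sorted keys and compares its measured height against floor(log2(n)):

    import math

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def build_balanced(keys):
        """Build a height-balanced BST from a sorted list of keys."""
        if not keys:
            return None
        mid = len(keys) // 2
        return Node(keys[mid],
                    build_balanced(keys[:mid]),
                    build_balanced(keys[mid + 1:]))

    def height(node):
        """Height in edges; a single node has height 0, an empty tree -1."""
        if node is None:
            return -1
        return 1 + max(height(node.left), height(node.right))

    n = 31
    print(height(build_balanced(list(range(n)))))  # 4
    print(math.floor(math.log2(n)))                # 4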
As I explained in a comment, a binary tree can fall into several cases:
In the degenerate case, a binary tree is simply a chain, and its height is O(n).
In the best case (for most search algorithms), a complete binary tree has the property that, for any node, the heights of its subtrees are equal. In this case the height will be the floor of log(n) (base 2, or base k for k-way branching). You can prove this by induction on the size of the tree (structural induction on the constructors); a quick numeric check appears after this list.
In the general case you will have a mix of these: a tree where a node's subtrees may have different heights.
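Here is a minimal numeric check of the best case above (all names are my own invention): it builds perfect binary trees of increasing height and confirms that a tree of height h has exactly 2^(h+1) - 1 nodes, so h = floor(log2(n)).

    import math

    class Node:
        def __init__(self, left=None, right=None):
            self.left, self.right = left, right

    def perfect(h):
        """Build a perfect binary tree of height h (counted in edges)."""
        if h < 0:
            return None
        return Node(perfect(h - 1), perfect(h - 1))

    def count(node):
        """Number of nodes in the subtree."""
        if node is None:
            return 0
        return 1 + count(node.left) + count(node.right)

    for h in range(6):
        n = count(perfect(h))
        assert n == 2 ** (h + 1) - 1
        assert h == math.floor(math.log2(n))
        print(h, n)   # (0, 1), (1, 3), (2, 7), (3, 15), (4, 31), (5, 63)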
Related
Suppose someone wants to generate a complete binary tree. The tree has h levels, where h can be any positive integer and is given as input to the algorithm. What complexity does the generation lie in, and why?
A complete binary tree is a tree where all levels are full of nodes except possibly the last level. We can define the time complexity in terms of an upper bound.
If we know the height of the tree is h, then the maximum possible number of nodes in the tree is 2^h - 1.
Therefore, time complexity = O(2^h - 1) = O(2^h).
To sell your algorithm in the market, you need tight upper bounds to prove that your algorithm is better than the others'.
A slightly tighter upper bound for this problem can be given once we know exactly how many nodes are in the tree. Let's say there are N.
Then, the time complexity = O(N). Note that N = 2^h - 1, so this is the same bound expressed in terms of the number of nodes rather than the height.
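As a sketch of why the two bounds coincide, here is one way to generate such a tree level by level in Python (the structure and names are mine): it creates exactly 2^h - 1 nodes for h fully filled levels, so the work is O(2^h) = O(N).

    from collections import deque

    class Node:
        def __init__(self):
            self.left, self.right = None, None

    def generate(h):
        """Generate a binary tree with h fully filled levels.

        Returns (root, number of nodes created)."""
        if h <= 0:
            return None, 0
        root, created = Node(), 1
        frontier = deque([root])
        for _ in range(h - 1):                 # fill one more level per iteration
            for _ in range(len(frontier)):
                node = frontier.popleft()
                node.left, node.right = Node(), Node()
                created += 2
                frontier.extend((node.left, node.right))
        return root, created

    for h in range(1, 6):
        print(h, generate(h)[1])   # 1, 3, 7, 15, 31 -- that is, 2^h - 1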
Regarding the time efficiency of inserting into a binary search tree:
I know that the best/average case of insertion is O(log n), whereas the worst case is O(N).
What I'm wondering is whether there is any way to ensure that we always get the best/average case when inserting, besides implementing an AVL tree (a balanced BST)?
Thanks!
There is no guaranteed log n complexity without balancing a binary search tree. While searching/inserting/deleting, you have to navigate through the tree in order to reach the right place and perform the operation. The key question is: what is the number of steps needed to get to the right position? If the BST is balanced, you can expect on average 2^(i-1) nodes at level i. This further means that if the tree has k levels (k is called the height of the tree), the expected number of nodes in the tree is 1 + 2 + 4 + ... + 2^(k-1) = 2^k - 1 = n, which gives k ≈ log n, and that is the average number of steps needed to navigate from the root to a leaf.
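A quick check of that geometric sum, with k = 10 as an arbitrary example value:

    import math

    k = 10                                # number of levels
    n = sum(2 ** i for i in range(k))     # 1 + 2 + 4 + ... + 2^(k-1)
    print(n)                              # 1023 = 2^k - 1
    print(math.log2(n + 1))               # 10.0, i.e. k = log2(n + 1)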
Having said that, there are various implementations of balanced BSTs. You mentioned AVL; the other very popular one is the red-black tree, which is used, e.g., in C++ to implement std::map and in Java to implement TreeMap.
The worst case, O(n), can happen when you don't balance the BST and the tree degenerates into a linked list. It is clear that in order to reach the end of the list (which is the worst case), you have to iterate through the whole list, and this requires n steps.
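To see both behaviours side by side, here is a rough Python sketch (naive insertion, no balancing; all helper names are mine). Inserting keys in sorted order produces the degenerate linked-list shape, while a shuffled order typically stays close to logarithmic height:

    import random

    class Node:
        def __init__(self, key):
            self.key, self.left, self.right = key, None, None

    def insert(root, key):
        """Plain BST insertion with no rebalancing."""
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        else:
            root.right = insert(root.right, key)
        return root

    def height(node):
        """Height in edges; the empty tree has height -1."""
        if node is None:
            return -1
        return 1 + max(height(node.left), height(node.right))

    keys = list(range(100))

    worst = None
    for k in keys:                 # sorted order: the tree degenerates into a list
        worst = insert(worst, k)

    random.shuffle(keys)
    typical = None
    for k in keys:                 # random order: height is O(log n) on average
        typical = insert(typical, k)

    print(height(worst))    # 99, i.e. n - 1
    print(height(typical))  # much smaller, usually in the low teens for n = 100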
I have two questions:
What is the difference between a nearly balanced BST and a nearly complete binary tree? If the definition of the former were clear, we could tell them apart, but I haven't been able to find a relevant article.
Today in class I was taught the following condition for a tree to be balanced:
max(size(root.left), size(root.right)) <= 3n/4        (eqn 1)
Hence H(n), the height of a tree with n nodes that satisfies the above property, obeys H(n) <= 1 + H(3n/4).
Continuing the recursive steps, the subproblem size reaches 1 after about log_{4/3}(n) steps, which gives the O(log n) bound on the height.
My question is: is this a specific type of BST? For example, for AVL trees, as I remember, the condition is that the difference in heights of the left and right children is at most 1. Is eqn 1 a more general result that can be specialized to prove the bound for AVL trees as well? I.e., does every balanced BST have the property that the heights of sibling subtrees differ by at most 1?
If it's different from AVL, how do we manage the insert and delete operations in this new kind of tree?
EDIT: Also, can you explain why 3n/4 specifically?
My thought: it is because we can then surely say that H(n) <= 1 + H(3n/4). If we instead took something smaller than 3n/4, such as 3n/5, then H(3n/5) would not necessarily be larger than H(2n/5), because the ratio of 3n/5 to 2n/5 is less than 2, and as we know, it takes a factor of 2 in the number of nodes to increase the height by 1.
So we couldn't safely write H(n) <= 1 + H(3n/5); it might have to be H(2n/5) in place of H(3n/5). Am I right?
A nearly complete BST is a BST where all levels are filled except the last one. Definitions are kind of messed up here (some call this property perfect); please refer to Wikipedia for the details.
Being balanced is a less strict criterion, i.e., all (nearly) complete BSTs are balanced, but not all balanced BSTs are complete. That Wikipedia article has a definition for this too. In my world a BST is balanced if it leads to O(log n) operation costs.
For example, one can say a BST is balanced if each subtree has at most epsilon * n nodes, where epsilon < 1 (for example epsilon = 3/4, or even epsilon = 0.999, which is practically not balanced at all).
The reason is that the height of such a BST is roughly log_{1/epsilon}(n) = log_2(n) / (-log_2(epsilon)) = O(log n), but 1 / (-log_2 0.99) ≈ 69 is a huge constant. You can try to prove this for the usual ratio of epsilon = 1/2, where both subtrees have roughly the same size.
I don't know of a common BST that uses this 3/4. Common balanced BSTs are, for example, red-black trees, splay trees, or, if you are on a hard disk, the whole family of B-trees. For your example, you can probably implement the operations by augmenting each node with two integers representing the number of nodes in the left and the right subtree respectively. When inserting or deleting something, you update the numbers as you walk from the root to the leaf (or back up), and if the condition is violated, you do a rotation.
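For illustration, here is a minimal Python sketch of that size-augmented idea. One caveat: instead of rotations it restores the invariant by rebuilding the offending subtree, which is what scapegoat trees do with exactly this kind of weight condition; all names and details are my own, not from any textbook implementation.

    class Node:
        def __init__(self, key):
            self.key = key
            self.left = self.right = None
            self.size = 1                      # number of nodes in this subtree

    def size(node):
        return node.size if node else 0

    def flatten(node, acc):
        """Collect the subtree's nodes in sorted (in-order) order."""
        if node:
            flatten(node.left, acc)
            acc.append(node)
            flatten(node.right, acc)

    def rebuild(nodes, lo, hi):
        """Rebuild a perfectly balanced subtree from nodes[lo:hi]."""
        if lo >= hi:
            return None
        mid = (lo + hi) // 2
        root = nodes[mid]
        root.left = rebuild(nodes, lo, mid)
        root.right = rebuild(nodes, mid + 1, hi)
        root.size = hi - lo
        return root

    def insert(root, key):
        """BST insert; rebuild any subtree that violates eqn 1 on the way back up."""
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        else:
            root.right = insert(root.right, key)
        root.size += 1
        if max(size(root.left), size(root.right)) > 3 * root.size / 4:
            nodes = []
            flatten(root, nodes)
            root = rebuild(nodes, 0, len(nodes))
        return root

    root = None
    for k in range(1000):      # even sorted insertions keep the height logarithmic
        root = insert(root, k)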
I have been working on determining the height of a self-balancing binary tree knowing its number of nodes (N), and I came up with the formula:
height = ceiling(log2(N + 1)), where ceiling(x) is the smallest integer not less than x.
The thing is, I can't find this formula on the internet, and it seems pretty accurate.
Is there any kind of self-balancing binary tree for which this formula would fail?
What would be the general formula to determine the height of the tree then?
There is a formula on Wikipedia's Self-balancing binary search tree article.
h >= ceiling(log2(n + 1)) - 1 >= floor(log2(n))
The minimum height is floor(log2(n)). It's worth noting, however, that
the simplest algorithms for BST item insertion may yield a tree with height n in rather common situations. For example, when the items are inserted in sorted key order, the tree degenerates into a linked list with n nodes.
So your formula is not far off from that one (just off by one, since yours counts levels rather than edges), but there can be a pretty wide range between log2(n) and n to take into account.
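A quick check of the off-by-one, with heights counted in edges as in the Wikipedia formula (function names are mine):

    import math

    def min_height(n):
        """Smallest possible height (in edges) of a binary tree with n nodes."""
        return math.floor(math.log2(n))

    def proposed(n):
        """The formula from the question."""
        return math.ceil(math.log2(n + 1))

    for n in (1, 2, 3, 7, 8, 31, 32, 1000):
        print(n, proposed(n), min_height(n))   # proposed(n) == min_height(n) + 1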
In researching complexity for any algorithm that traverses a binary search tree, I see two different ways to express the same thing:
Version #1: The traversal algorithm in the worst case makes one comparison per level of the tree; therefore complexity is O(h).
Version #2: The traversal algorithm in the worst case makes one comparison per level of the tree; therefore complexity is O(log N).
It seems to me that the same logic is at work, yet different authors use either logN or h. Can someone explain to me why this is the case?
The correct value for the worst-case time to search a tree is O(h), where h is the height of the tree. If you are using a balanced search tree (one where the height of the tree is O(log n)), then the lookup time is worst-case O(log n). That said, not all trees are balanced. For example, here's a tree with height n - 1:
1
\
2
\
3
\
...
\
n
Here, h = O(n), so the lookup is O(n). It's correct to say that the lookup time is also O(h), but h ≠ O(log n) in this case and it would be erroneous to claim that the lookup time was O(log n).
In short, O(h) is the correct bound. O(log n) is the correct bound in a balanced search tree when the height is at most O(log n), but not all trees have lookup time O(log n) because they may be imbalanced.
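Here is a small Python sketch of the difference (hand-built trees; the names are mine). Looking up the deepest key in the chain above takes n comparisons, while in a balanced tree over the same keys it takes about log2(n):

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def lookup_steps(root, key):
        """Number of nodes compared against while searching for key."""
        steps, node = 1, root
        while node and node.key != key:
            steps += 1
            node = node.left if key < node.key else node.right
        return steps

    n = 1024

    chain = None                       # the degenerate tree from the answer
    for k in range(n, 0, -1):
        chain = Node(k, None, chain)   # every node has only a right child

    def balanced(lo, hi):
        """A perfectly balanced BST over the keys lo..hi."""
        if lo > hi:
            return None
        mid = (lo + hi) // 2
        return Node(mid, balanced(lo, mid - 1), balanced(mid + 1, hi))

    print(lookup_steps(chain, n))           # 1024 comparisons: O(h), h = n - 1
    print(lookup_steps(balanced(1, n), n))  # 11 comparisons: O(log n)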
Hope this helps!
If your binary tree is balanced so that every internal node has exactly two children and all leaves are on the same level, then the number of nodes in the tree will be exactly N = 2^h - 1, so the height is the logarithm of the number of elements (and similarly for any complete k-ary tree).
An arbitrary, unconstrained tree may look totally different, though; for instance, it could just have one node at every level, so N = h in that case. So the height is the more general measure, as it relates to actual comparisons, but under the additional assumption of balance you can express the height as the logarithm of the number of elements.
O(h) would refer to a binary tree that is sorted but not necessarily balanced.
O(log n) would refer to a tree that is sorted and balanced.
It's sort of two ways of saying the same thing, because your average balanced binary tree of height 'h' will have around 2^h nodes.
Depending on the context, either height or #nodes may be more relevant, and so that's what you'll see referenced.
because the (h)eight of a balanced tree varies as the log of the (N)umber of elements