Balanced Binary Tree - data-structures

What is the name of the binary tree (or family of binary trees) that is balanced and has the minimum number of nodes possible for its height?

balanced binary tree
(data structure)
Definition: A binary tree where no leaf is more than a certain amount farther from the root than any other. After inserting or deleting a node, the tree may be rebalanced with "rotations."
Generalization (I am a kind of ...)
binary tree.
Specialization (... is a kind of me.)
AVL tree, red-black tree, B-tree, balanced binary search tree.
Aggregate child (... is a part of or used in me.)
left rotation, right rotation.
See also BB(α) tree, height-balanced tree.
-- http://www.itl.nist.gov/div897/sqg/dads/HTML/balancedbitr.html

It is called a Fibonacci tree.
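
A Fibonacci tree of height h is the sparsest AVL tree of that height: its minimum node count satisfies N(h) = N(h-1) + N(h-2) + 1, which is why the Fibonacci numbers appear. A minimal sketch (my own, assuming the convention that a single node has height 0):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_avl_nodes(h):
    """Minimum number of nodes in an AVL (Fibonacci) tree of height h."""
    if h < 0:
        return 0          # empty tree
    if h == 0:
        return 1          # single node
    # one root plus a minimal subtree of height h-1 and one of height h-2
    return 1 + min_avl_nodes(h - 1) + min_avl_nodes(h - 2)

# 1, 2, 4, 7, 12, 20: each term is the sum of the previous two, plus 1.
print([min_avl_nodes(h) for h in range(6)])
```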

An AVL tree is a balanced tree with O(log n) height (the lowest height possible for a binary tree).
Another implementation of a similar data structure is Red Black Tree.
Both trees implement all operations in O(log(n)).

An AVL tree is what you have been looking for.
From Wikipedia:
In computer science, an AVL tree is a self-balancing binary search tree, and it is the first such data structure to be invented. In an AVL tree, the heights of the two child subtrees of any node differ by at most one; therefore, it is also said to be height-balanced. Lookup, insertion, and deletion all take O(log n) time in both the average and worst cases, where n is the number of nodes in the tree prior to the operation. Insertions and deletions may require the tree to be rebalanced by one or more tree rotations.
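
To make the "heights differ by at most one" condition concrete, here is a minimal sketch (my own, not from Wikipedia) that checks whether a given binary tree satisfies the AVL height invariant:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def check_avl(node):
    """Return (is_balanced, height); the height of an empty tree is -1."""
    if node is None:
        return True, -1
    left_ok, left_h = check_avl(node.left)
    right_ok, right_h = check_avl(node.right)
    balanced = left_ok and right_ok and abs(left_h - right_h) <= 1
    return balanced, 1 + max(left_h, right_h)

# A three-node tree satisfies the invariant; a left chain of three nodes does not.
print(check_avl(Node(2, Node(1), Node(3)))[0])    # True
print(check_avl(Node(3, Node(2, Node(1))))[0])    # False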

Related

Skewed trees' relation to binary search trees

I know what a binary search tree is and I know how it works, but what does it take for it to become a skewed tree? What I mean is, do all nodes have to go on one side, or is there some other combination?
Is having a tree in this shape (see below) the only way to make it a skewed tree? If not, what are other possible skewed trees?
Skewed tree example:
Also, I searched but couldn't find a good solid definition of a skewed tree. Does anyone have a good definition?
I figured out that a skewed tree is the worst case of a tree:
```
The number of permutations of 1, 2, ..., n = n!
The number of BST shapes = (1/(n+1)) * (2n)! / (n! * n!)   (the nth Catalan number)
The number of skewed trees of 1, 2, ..., n = 2^(n-1)
```
Here is an example I was shown:
http://i61.tinypic.com/4gji9u.png
A good definition for a skew tree is a binary tree such that all the nodes except one have one and only one child. (The remaining node has no children.) Another good definition is a binary tree of n nodes such that its depth is n-1.
A binary tree that is dominated solely by left child nodes or right child nodes is called a skewed binary tree, more specifically a left-skewed or right-skewed binary tree.
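
Putting the definition and the counts together, here is a minimal sketch (my own; the brute-force enumeration is only feasible for small n) that tests the "at most one child per node" property and confirms the Catalan-number and 2^(n-1) counts:

```python
from itertools import permutations
from math import comb

def bst_shape(keys):
    """Insert keys in order into a plain BST; nodes are (key, left, right) tuples."""
    def insert(node, k):
        if node is None:
            return (k, None, None)
        key, left, right = node
        if k < key:
            return (key, insert(left, k), right)
        return (key, left, insert(right, k))
    root = None
    for k in keys:
        root = insert(root, k)
    return root

def is_skewed(node):
    """A tree is skewed if no node has two children."""
    if node is None:
        return True
    _, left, right = node
    if left is not None and right is not None:
        return False
    return is_skewed(left) and is_skewed(right)

for n in range(1, 7):
    shapes = {bst_shape(p) for p in permutations(range(1, n + 1))}
    catalan = comb(2 * n, n) // (n + 1)      # (1/(n+1)) * (2n)! / (n! * n!)
    skewed = sum(1 for s in shapes if is_skewed(s))
    assert len(shapes) == catalan and skewed == 2 ** (n - 1)
    print(n, len(shapes), skewed)
```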

Determining if a binary search tree can be constructed by a sequence of splay tree insertions

A splay tree is a type of self-adjusting binary search tree. Inserting a node into a splay tree involves inserting it as a leaf in the binary search tree, then bringing that node up to the root via a "splay" operation.
Let us say a binary search tree is "splay-constructible" if the tree can be produced by inserting its elements into an initially empty splay tree in some order.
Not all binary search trees are splay-constructible. For example, the following is a minimal non-splay-constructible binary search tree:
What is an efficient algorithm that, given a binary search tree, determines whether it is splay-constructible?
This question was inspired by a related question regarding concordance between AVL and splay trees.
More details: I have code to generate a splay tree from a given sequence, so I could perform a brute-force test in O(n! log(n)) time or so, but I suspect polynomial time performance is possible using some form of dynamic programming over the tree structure. Presumably such an algorithm would exploit the fact that every splay-constructible tree of size n can be produced by inserting the current root into some splay-constructible tree of size n-1, then do something to take advantage of overlapping/isomorphic subproblems.
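
For reference, a brute-force check along the lines described above could look like the following sketch (my own code; the splay is the standard recursive zig/zig-zig/zig-zag formulation, and the search over all n! insertion orders is only practical for small trees):

```python
from itertools import permutations

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def rotate_right(x):
    y = x.left
    x.left = y.right
    y.right = x
    return y

def rotate_left(x):
    y = x.right
    x.right = y.left
    y.left = x
    return y

def splay(root, key):
    """Recursive splay: bring `key` (assumed present) to the root."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                      # zig-zig (left-left)
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                    # zig-zag (left-right)
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return root if root.left is None else rotate_right(root)
    else:
        if root.right is None:
            return root
        if key > root.right.key:                     # zig-zig (right-right)
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                   # zig-zag (right-left)
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return root if root.right is None else rotate_left(root)

def splay_insert(root, key):
    """Insert `key` as an ordinary BST leaf, then splay it to the root."""
    def bst_insert(node, k):
        if node is None:
            return Node(k)
        if k < node.key:
            node.left = bst_insert(node.left, k)
        else:
            node.right = bst_insert(node.right, k)
        return node
    return splay(bst_insert(root, key), key)

def shape(node):
    """Structural fingerprint of a tree, usable for equality tests."""
    return None if node is None else (node.key, shape(node.left), shape(node.right))

def keys_inorder(node):
    return [] if node is None else keys_inorder(node.left) + [node.key] + keys_inorder(node.right)

def is_splay_constructible(target):
    """Brute force over all n! insertion orders, as described in the question."""
    goal = shape(target)
    for order in permutations(keys_inorder(target)):
        t = None
        for k in order:
            t = splay_insert(t, k)
        if shape(t) == goal:
            return True
    return False

# Example: the balanced three-node BST 2 / (1, 3) is reachable, e.g. by inserting 1, 3, 2.
root = Node(2)
root.left, root.right = Node(1), Node(3)
print(is_splay_constructible(root))    # True
```

A polynomial-time algorithm would replace the outer loop over permutations with dynamic programming over the tree structure, as the question suggests; the sketch only covers the brute-force baseline.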

Comparison of balanced binary search trees

I've read some Q&As about self-balancing binary trees, but I'm not quite familiar with all of them.
The first one of them I got to know is AVL, the second is Red-Black tree.
There is something I don't quite understand: according to some books and articles, AVL trees can perform searches a little faster than red-black trees, which is understandable.
Then what is the red-black tree's edge over AVL?
In an AVL tree we probably have to check the balance after each insertion, but in a red-black tree we don't have to do that as frequently, right?
PS:
I searched SO for something similar, but I didn't get a satisfying answer.
Hope some friends can give me a detailed comparison of self-balancing trees.
An AVL tree has the following property: at each node, the difference in height of the left and the right subtree is at most 1.
In a red-black tree, on the other hand, the height of the left or right subtree of any node is at most twice the height of the other subtree. That is, they differ at most by a factor of 2.
This shows intuitively that lookup is indeed faster in an AVL tree on average.
However, when inserting or deleting a node, we have to rebalance the AVL tree more often to preserve the much stricter height invariant (on the other hand, rebalancing in a red-black tree is algorithmically more complicated). This means that in practice, a red-black tree may perform much better than an AVL tree, in particular when it is modified frequently.

Why is an AVL tree faster for searching than a red-black tree?

I have read in a couple of places that AVL trees search faster, but I am not able to understand why. As I understand it:
max height of red-black tree = 2*log(N+1)
max height of AVL tree = 1.44*log(N+1)
Is it because AVL is shorter?
Yes.
The number of steps required to find an item depends on the distance between the item and the root.
Since the AVL tree is packed more tightly (i.e., it has a lower maximum height), more items are closer to the root than in the red-black case.
The tighter packing also means the AVL tree requires more work when inserting elements.
The best choice for any app depends on whether it is insert intensive or search intensive...
An AVL tree is better than a red-black tree if the input keys are mostly ascending/descending, because then we only have to do a single rotation (left-left or right-right case) to add each element. Also, since the tree is tightly balanced, searching is faster.
But for randomly ordered input keys, red-black trees are better, since they require fewer rotations for insertion than AVL trees.
Overall, it depends on the input sequence (which decides how tilted the tree is) and on the operations performed. For insert-intensive workloads use a red-black tree, and for search-intensive workloads use an AVL tree.
AVL trees and red-black trees each have their own advantages and disadvantages. You'll see that better once you've learned how they work.
AVL is slightly faster than a red-black tree for insertion because at most one rotation is involved per insertion, while there may be two for a red-black tree.
A red-black tree requires at most three rotations for deletion, but no such constant bound holds for an AVL tree, so a red-black tree can delete nodes faster.
Above all, though, both have strictly logarithmic tree height.
Pick any subtree: the property that makes an AVL tree "balanced" guarantees that the difference in height between its two child subtrees is at most one, which is to say, intuitively, that the whole tree is rigidly balanced.
But when it comes to a red-black tree, the rule is looser, since the red-black properties only guarantee that the depth of the tree is no more than twice the logarithm of the total number of nodes.
Here're some facts that may be more precise:
An AVL tree's height is strictly less than 1.44*log2(n+2) - 0.328 (approximately).
A red-black tree's height is at most 2*log2(n+1).
See https://en.wikipedia.org/wiki/AVL_tree#Comparison_to_other_structures for detailed information.
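
To put those two bounds side by side numerically, here is a small sketch (my own; it just evaluates the formulas above with base-2 logarithms):

```python
from math import log2

def avl_height_bound(n):
    # AVL: height < 1.44 * log2(n + 2) - 0.328 (approximately)
    return 1.44 * log2(n + 2) - 0.328

def rb_height_bound(n):
    # red-black: height <= 2 * log2(n + 1)
    return 2 * log2(n + 1)

# For a million keys this prints roughly 28 for AVL versus 40 for red-black.
for n in (10**3, 10**6, 10**9):
    print(f"n={n:>10}  AVL < {avl_height_bound(n):5.1f}   RB <= {rb_height_bound(n):5.1f}")
```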

How does a red-black tree work?

There are lots of questions around about red-black trees but none of them answer how they work. Why is it called red-black? How does this keep the tree balanced (thus increasing performance over an unbalanced normal binary search tree)? I'm just looking for an overview of how and why it works.
For searches and traversals, it's the same as any binary tree.
For inserts and deletes, more sophisticated algorithms are applied which aim to ensure that the tree cannot be too unbalanced. These guarantee that all single-item operations will always run in at worst O(log n) time, whereas in a simple binary tree the binary tree can become so unbalanced that it's effectively a linked list, giving O(n) worst case performance for each single-item operation.
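
To see that degeneration concretely, here is a small sketch (my own, not part of the answer) that inserts keys into a plain unbalanced BST: sorted input produces the linked-list shape with height n - 1, while randomly ordered input typically stays close to logarithmic height.

```python
import random

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Plain BST insert with no rebalancing (iterative, so deep trees are fine)."""
    if root is None:
        return Node(key)
    node = root
    while True:
        if key < node.key:
            if node.left is None:
                node.left = Node(key)
                return root
            node = node.left
        else:
            if node.right is None:
                node.right = Node(key)
                return root
            node = node.right

def height(root):
    """Height via level-order traversal; an empty tree has height -1."""
    if root is None:
        return -1
    level, h = [root], -1
    while level:
        h += 1
        level = [c for n in level for c in (n.left, n.right) if c is not None]
    return h

n = 1000
keys = list(range(n))
shuffled = keys[:]
random.shuffle(shuffled)

sorted_tree = random_tree = None
for k in keys:
    sorted_tree = insert(sorted_tree, k)
for k in shuffled:
    random_tree = insert(random_tree, k)

print("height after sorted inserts:", height(sorted_tree))   # n - 1 = 999: a linked list
print("height after random inserts:", height(random_tree))   # typically around 20 for n = 1000
```
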
The basic idea of the red-black tree is to imitate a B-tree with up to 3 keys and 4 children per node. B-trees (or variations such as B+ trees) are mainly used for database indexes and for data stored on hard disk.
Each binary tree node has a "colour" - red or black. Each black node is, in the B-tree analogy, the subtree root for the subtree that fits within that B-tree node. If this node has red children, they are also considered part of the same B-tree node. So it is possible (though not done in practice) to convert a red-black tree to a B-tree and back, with (most) structure preserved. The only possible anomaly is that when a B-tree node has two keys and three children, you have a choice of which key goes in the black node in the equivalent red-black tree.
For example, with red-black trees, every path from root to leaf has the same number of black nodes. This rule is derived from the B-tree rule that all leaf nodes are at the same depth.
Although this is the basic idea from which red-black trees are derived, the algorithms used in practice for inserts and deletes are modified to enforce all the B-tree rules (there might be a minor exception - I forget) during updates, but are tailored for the binary tree form. This means that doing a red-black tree insert or delete may give a different structure for the result than you would expect from doing the corresponding B-tree insert or delete.
For more detail, follow the Wikipedia link that MigDus already supplied.
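
The black-node rule in particular is easy to verify mechanically. A minimal sketch (my own, not from the answer) that checks the "no red node has a red child" rule and the equal-black-count rule on a coloured binary tree:

```python
RED, BLACK = "red", "black"

class Node:
    def __init__(self, key, color, left=None, right=None):
        self.key = key
        self.color = color
        self.left = left
        self.right = right

def check_rb(node):
    """Return (is_valid, black_height); nil leaves count as black with height 1."""
    if node is None:
        return True, 1
    left_ok, left_bh = check_rb(node.left)
    right_ok, right_bh = check_rb(node.right)
    ok = left_ok and right_ok and left_bh == right_bh
    if node.color == RED:
        # a red node must not have a red child
        for child in (node.left, node.right):
            if child is not None and child.color == RED:
                ok = False
        return ok, left_bh
    return ok, left_bh + 1                  # a black node adds one to the black height

# Valid: black root with two red children; invalid: red root with a red child.
print(check_rb(Node(2, BLACK, Node(1, RED), Node(3, RED)))[0])   # True
print(check_rb(Node(2, RED, Node(1, RED)))[0])                   # False
```
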
A red-black tree is an ordered binary tree where each vertex is coloured red or black. The intuition is that a red vertex should be seen as being at the same height as its parent (i.e., an edge to a red vertex is thought of as "horizontal" rather than "descending").
[I don't believe the Wikipedia entry makes this point clear.]
The usual rules for red-black trees require that a red vertex never point to another red vertex. This means that the possible vertex arrangements for any subtree rooted with a black vertex (bbb, bbr, rbb, rbr -- for [left child][root][right child]) correspond to 2-3-4 trees.
Searching a red-black tree is just the same as searching an ordinary binary tree. Insertion and deletion are similar, except that a "fix-up" rotation may be required at some point to preserve the red-black invariant.
Cheers!

Resources