How does a red-black tree work? - algorithm

There are lots of questions around about red-black trees but none of them answer how they work. Why is it called red-black? How does this keep the tree balanced (thus increasing performance over an unbalanced normal binary search tree)? I'm just looking for an overview of how and why it works.

For searches and traversals, it's the same as any binary tree.
For inserts and deletes, more sophisticated algorithms are applied which aim to ensure that the tree cannot become too unbalanced. These guarantee that all single-item operations will always run in at worst O(log n) time, whereas a simple binary search tree can become so unbalanced that it's effectively a linked list, giving O(n) worst-case performance for each single-item operation.
The basic idea of the red-black tree is to imitate a B-tree with up to 3 keys and 4 children per node. B-trees (or variations such as B+ trees) are mainly used for database indexes and for data stored on hard disk.
Each binary tree node has a "colour" - red or black. Each black node is, in the B-tree analogy, the subtree root for the subtree that fits within that B-tree node. If this node has red children, they are also considered part of the same B-tree node. So it is possible (though not done in practice) to convert a red-black tree to a B-tree and back, with (most) structure preserved. The only possible anomaly is that when a B-tree node has two keys and three children, you have a choice of which key goes in the black node in the equivalent red-black tree.
For example, with red-black trees, every path from the root to a leaf has the same number of black nodes. This rule is derived from the B-tree rule that all leaf nodes are at the same depth.
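As a rough illustration of that rule, here is a minimal sketch in Python (the Node class and the colour attribute are invented for this example, not taken from any particular implementation) that computes the black count and reports a violation if two root-to-leaf paths disagree:

```python
# Minimal sketch of the equal-black-count rule. Node/colour names are
# illustrative only.

class Node:
    def __init__(self, key, colour, left=None, right=None):
        self.key = key
        self.colour = colour      # "red" or "black"
        self.left = left
        self.right = right

def black_height(node):
    """Number of black nodes on every root-to-leaf path, or None if the
    paths disagree (i.e. the rule is violated)."""
    if node is None:
        return 1                  # nil leaves count as black
    left = black_height(node.left)
    right = black_height(node.right)
    if left is None or right is None or left != right:
        return None
    return left + (1 if node.colour == "black" else 0)
```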
Although this is the basic idea from which red-black trees are derived, the algorithms used in practice for inserts and deletes are modified to enforce all the B-tree rules (there might be a minor exception - I forget) during updates, but are tailored for the binary tree form. This means that doing a red-black tree insert or delete may give a different resulting structure than you'd expect from doing the equivalent B-tree insert or delete.
For more detail, follow the Wikipedia link that MigDus already supplied.

A red-black tree is an ordered binary tree where each vertex is coloured red or black. The intuition is that a red vertex should be seen as being at the same height as its parent (i.e., an edge to a red vertex is thought of as "horizontal" rather than "descending").
[I don't believe the Wikipedia entry makes this point clear.]
The usual rules for red-black trees require that a red vertex never point to another red vertex. This means that the possible vertex arrangements for any subtree rooted with a black vertex (bbb, bbr, rbb, rbr -- for [left child][root][right child]) correspond to 2-3-4 trees.
Searching a red-black tree is just the same as searching an ordinary binary tree. Insertion and deletion are similar, except that a "fix-up" rotation may be required at some point to preserve the red-black invariant.
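For example, search pays no attention to the colours at all; a short sketch (assuming nodes expose key, left and right attributes):

```python
# Searching a red-black tree is ordinary BST search; colours are ignored.

def search(node, key):
    while node is not None:
        if key == node.key:
            return node
        node = node.left if key < node.key else node.right
    return None
```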
Cheers!

Related

Red Black Tree Insertion & Deletion Uniqueness

I've been learning and working on implementing a red-black tree data structure. I'm following this article on red-black tree deletion examples and looking at example 5 they have:
When I insert the same nodes into my tree, I get the following:
I understand that red black trees are not unique (I think), therefore both of the above trees are valid since they don't violate any of the properties.
In the example article, after deleting node 1, they get the following:
But after deleting node 1 in my code, I get the following:
Since in my case, node 1 is red, I don't call my delete_fix function which takes care of re-arranging the tree and such. The deletion algorithm I was following simply states to call a delete_fix function if the node to be deleted is black.
However, after comparing my tree with the one in the example article I can see that mine is not exactly optimized. It still follows the rules of the red-black tree though. Is this to be expected with red-black trees or am I missing something here?
However, after comparing my tree with the one in the example article I can see that mine is not exactly optimized.
It is optimised. Your tree will be fast at deleting nodes 5, 7, 20 & 28. The other tree will only be fast at deleting 5 & 7.
Bear in mind that Red-Black Trees can be bushy in one direction. If the black height of the tree (counting real nodes) is N, then the minimum path from root to leaf node is N nodes (all black) and the maximum path from root to leaf node is 2 * N nodes (alternating black-red-black-red, etc.). If you try to add a new node to the bushy path that is at maximum height, the tree will recolour and/or rebalance.
If you want a more balanced search tree you should use an AVL tree. Red-Black trees favour minimal insertion/deletion fixups over finding a node. Your tree is fine.
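If you want to convince yourself that both shapes are legal, a property check along these lines (the colour/left/right attribute names are assumed here, not taken from your code) covers the no-red-red rule; combined with an equal-black-count check on every root-to-leaf path, it verifies the red-black invariants:

```python
# Check that no red node has a red child. Together with equal black counts
# on every root-to-leaf path, this is what both trees in the question satisfy.

def no_red_red(node, parent_is_red=False):
    if node is None:
        return True
    if parent_is_red and node.colour == "red":
        return False
    is_red = (node.colour == "red")
    return no_red_red(node.left, is_red) and no_red_red(node.right, is_red)
```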

Keeping avl tree balanced without rotations

A B-tree is a self-balancing tree like an AVL tree. HERE we can see how left and right rotations are used to keep an AVL tree balanced.
And HERE is a link which explains insertion in a B-tree. This insertion technique does not involve any rotations, if I am not wrong, to keep the tree balanced, and therefore it looks simpler.
Question: Is there any similar technique (or any other technique that doesn't use rotations) to keep an AVL tree balanced?
The answer is... yes and no.
B-trees don't need to perform rotations because they have some slack with how many different keys they can pack into a node. As you add more and more keys into a B-tree, you can avoid the tree becoming lopsided by absorbing those keys into the nodes themselves.
Binary trees don't have this luxury. If you insert a key into a binary tree, it will increase the height of some branch in that tree by 1 in all cases because that key needs to go into its own node. Rotations combat the overall growth of the tree by ensuring that if certain branches grow too much, that height is shuffled into the rest of the tree.
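For reference, a rotation is a small, local pointer change; a sketch of a left rotation (node fields left/right assumed):

```python
# Left rotation: y moves up, x becomes y's left child, and y's old left
# subtree is re-attached as x's right subtree. Returns the new subtree root.

def rotate_left(x):
    y = x.right
    x.right = y.left
    y.left = x
    return y
```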
Most balanced BSTs have some sort of rebalancing strategy that involves rotations, but not all do. One notable example of a strategy that doesn't directly involve rotations is the scapegoat tree, which rebalances by tearing huge subtrees out of the master tree, optimally rebuilding them, then gluing the subtree back into the main tree. This approach doesn't technically involve any rotations and is a pretty clean way to implement a balanced tree.
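A sketch of that rotation-free rebuild (flatten the offending subtree in order, then rebuild it by picking medians; node fields key/left/right are assumed):

```python
# Scapegoat-style rebuild: collect the subtree's nodes in sorted order, then
# rebuild a perfectly balanced subtree median-first. No rotations involved.

def flatten(node, out):
    if node is not None:
        flatten(node.left, out)
        out.append(node)
        flatten(node.right, out)

def build_balanced(nodes, lo, hi):
    if lo > hi:
        return None
    mid = (lo + hi) // 2
    root = nodes[mid]
    root.left = build_balanced(nodes, lo, mid - 1)
    root.right = build_balanced(nodes, mid + 1, hi)
    return root

def rebuild(subtree_root):
    nodes = []
    flatten(subtree_root, nodes)
    return build_balanced(nodes, 0, len(nodes) - 1)
```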
That said - the most space-efficient implementations of scapegoat trees do indeed use rotations to convert an imbalanced tree into a perfectly balanced one. You don't have to use rotations to do this, though if space is short it's probably the best way to do so.
Hope this helps!
Rotations can be made simple (if you need only simplicity).
If the insertion traffic is left, the balance -1 is the red-light.
If the insertion traffic is right, the balance 1 is the red-light.
This is a (simplified) coarse-graining (2-adic rounding) of the normalized fundamental AVL balance:
{left,even,right} ~ {low,even,high} ~ {green,green,red}
Walk the insertion route and rotate every red-light (before the insertion). If the next light is green, you can just rotate the red-light 1 or 2 times. You may have to re-balance the next subtrees before each rotation, because inner subtrees are invariant. This is simple, but it takes a very long time. You have to move down the green-light before each rotation. You can always move down the green-light to the root, and you can rotate the tree-top to generate a new green-light.
The red-light rotations naturally move down the green-light.
At this point, you have only the green-lights for the insertion.
The cost structure of this naive method is topologically simplified as
df(h)/dh = ∫ f(h) dh,
such as sin(h), sinh(h), etc.

Comparison about balanced binary search trees

I've read some Q&As about self-balancing binary trees, but I'm not quite familiar with all of them.
The first one of them I got to know is AVL, the second is Red-Black tree.
There is something I don't quite understand: according to some books and articles, AVL can perform searching a little bit faster than a Red-Black tree; well, this is understandable.
Then what's Red-Black tree's edge over AVL?
In an AVL tree, we probably have to check for balance after each insertion, but in a Red-Black tree we don't have to do that as frequently, right?
PS:
I searched SO for something similar, but I didn't get a satisfying answer.
Hope some friends can give me a detailed comparison of self-balancing trees.
An AVL tree has the following property: for each node, the difference in height between the left and the right subtree is at most 1.
In a red-black tree, on the other hand, the height of the left or right subtree of any node is at most twice the height of the other subtree. That is, they differ by at most a factor of 2.
This shows intuitively that lookup is indeed faster in an AVL tree on average.
However, when inserting or deleting a node, we have to rebalance the AVL tree more often, to preserve the much stricter height invariant (on the other hand, rebalancing in a red-black tree is algorithmically more complicated). This means that in practice, a red-black tree may perform much better than an AVL tree, in particular when it is modified often.
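Written as checks, the difference between the two invariants looks roughly like this (a sketch only: real implementations cache subtree heights instead of recomputing them, and the real red-black bound is enforced through colours rather than heights):

```python
# The strict AVL rule vs. the looser "within a factor of two" guarantee.
# Node fields left/right assumed; heights recomputed here for clarity.

def height(node):
    return 0 if node is None else 1 + max(height(node.left), height(node.right))

def avl_balanced(node):
    # AVL: subtree heights differ by at most one
    return abs(height(node.left) - height(node.right)) <= 1

def loosely_balanced(node):
    # red-black-style guarantee: neither subtree is more than twice as tall
    # as the other (empty subtrees treated as height 1 for this comparison)
    hl = max(height(node.left), 1)
    hr = max(height(node.right), 1)
    return max(hl, hr) <= 2 * min(hl, hr)
```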

Why is avl tree faster for searching than red black tree?

I have read in a couple of places that AVL trees search faster, but I am not able to understand why. As I understand it:
max height of red-black tree = 2*log(N+1)
max height of AVL tree = 1.44*log(N+1)
Is it because AVL is shorter?
Yes.
The number of steps required to find an item depends on the distance between the item and the root.
Since the AVL tree is packed tighter (i.e. it has a lower max height) it means more items are closer to the root than in the red-black case.
The extra tight packing also means the AVL tree requires more work when inserting elements.
The best choice for any app depends on whether it is insert intensive or search intensive...
An AVL tree is better than a red-black tree if the input keys are almost ascending/descending, because then we only have to do a single rotation (left-left or right-right case) to add an element. Also, since the tree would be tightly balanced, the search would also be faster.
But for randomly ordered input keys, RBTrees are better, since they require fewer rotations for insertion in comparison to AVL.
Overall, it depends on the input sequence, which decides how tilted our tree is, and on the operations performed. For insert-intensive use a Red-Black Tree and for search-intensive use AVL.
AVL trees and RBTrees each have their respective advantages as well as disadvantages. You'll perceive that better once you've learned how they work.
AVL is slightly faster than an RBTree for insertion because there is at most one rotation involved in an insertion, while there may be two for an RBTree.
An RBTree only requires at most three rotations in deletion, but this is not guaranteed for AVL. So it can delete nodes faster than AVL.
However, above all, they both have strictly logarithmic tree height.
Pick any subtree: the property that makes AVL "balanced" guarantees that the difference in height between the two child subtrees is at most one, which is to say, intuitively, the whole tree is rigidly balanced.
But when it comes to an RBTree, the rule is "looser", since the properties of an RBTree can only guarantee that the depth of the tree is no more than twice the logarithm of the total number of nodes.
Here're some facts that may be more precise:
An AVL tree's height is strictly less than 1.44*log(n+2) - 0.328 (approximately)
A red-black tree's height is at most 2log(n+1)
See https://en.wikipedia.org/wiki/AVL_tree#Comparison_to_other_structures for detailed information.
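Plugging both bounds into a concrete size makes the gap visible; for example (assuming base-2 logarithms, as in the formulas above):

```python
import math

n = 1_000_000
avl_bound = 1.44 * math.log2(n + 2) - 0.328   # roughly 28.4
rb_bound = 2 * math.log2(n + 1)               # roughly 39.9
print(avl_bound, rb_bound)
```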

Balanced Binary Tree

What is the name of the binary tree (or the family of binary trees) that is balanced and has the minimum number of nodes possible for its height?
balanced binary tree
(data structure)
Definition: A binary tree where no leaf is more than a certain amount farther from the root than any other. After inserting or deleting a node, the tree may be rebalanced with "rotations."
Generalization (I am a kind of ...)
binary tree.
Specialization (... is a kind of me.)
AVL tree, red-black tree, B-tree, balanced binary search tree.
Aggregate child (... is a part of or used in me.)
left rotation, right rotation.
See also BB(α) tree, height-balanced tree.
-- http://www.itl.nist.gov/div897/sqg/dads/HTML/balancedbitr.html
It is called a Fibonacci tree.
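A Fibonacci tree of height h is the AVL tree of height h with the fewest possible nodes; the minimum node count follows a Fibonacci-like recurrence. A small sketch (counting a single node as height 0):

```python
# Minimum number of nodes in an AVL (Fibonacci) tree of height h:
# N(0) = 1, N(1) = 2, N(h) = 1 + N(h-1) + N(h-2).

def min_avl_nodes(h):
    if h == 0:
        return 1
    if h == 1:
        return 2
    return 1 + min_avl_nodes(h - 1) + min_avl_nodes(h - 2)

print([min_avl_nodes(h) for h in range(6)])   # [1, 2, 4, 7, 12, 20]
```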
AVL is a balanced tree with O(log(n)) height (this is asymptotically the lowest height possible for a binary tree).
Another implementation of a similar data structure is Red Black Tree.
Both trees implement all operations in O(log(n)).
An AVL Tree is what you have been looking for.
From Wikipedia:
In computer science, an AVL tree is a self-balancing binary search tree, and it is the first such data structure to be invented. In an AVL tree, the heights of the two child subtrees of any node differ by at most one; therefore, it is also said to be height-balanced. Lookup, insertion, and deletion all take O(log n) time in both the average and worst cases, where n is the number of nodes in the tree prior to the operation. Insertions and deletions may require the tree to be rebalanced by one or more tree rotations.

Resources