Updating the balancing factor of AVL tree nodes - data-structures

I am learning about AVL trees, and I know how to do all of the rotations, but the one thing I need to know is how to make it so that after each insertion or rotation the balancing factors of the nodes are updated.
Thanks!

Just have a look at an existing AVL tree implementation. This is one I wrote originally for Hypersonic SQL; it is still used as part of my H2 database:
TreeNode
TreeIndex
TreeCursor
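If it helps to see the mechanics in isolation, here is a minimal sketch (not the H2 code) of one common approach: store each node's height, derive the balance factor as height(left) - height(right), and recompute heights on the way back up from the insertion point and again inside each rotation. Class and method names here are only illustrative.

    // Minimal AVL sketch (illustrative names, not the H2 code): each node stores
    // the height of its subtree, and the balance factor is derived from the
    // heights of the two children.
    class AvlNode {
        int key;
        int height = 1;          // height of the subtree rooted at this node
        AvlNode left, right;
        AvlNode(int key) { this.key = key; }
    }

    class AvlTree {
        static int height(AvlNode n)  { return n == null ? 0 : n.height; }
        static int balance(AvlNode n) { return n == null ? 0 : height(n.left) - height(n.right); }
        static void update(AvlNode n) { n.height = 1 + Math.max(height(n.left), height(n.right)); }

        static AvlNode rotateRight(AvlNode y) {
            AvlNode x = y.left;
            y.left = x.right;
            x.right = y;
            update(y);           // y is now the lower node, so recompute it first
            update(x);
            return x;
        }

        static AvlNode rotateLeft(AvlNode x) {
            AvlNode y = x.right;
            x.right = y.left;
            y.left = x;
            update(x);
            update(y);
            return y;
        }

        static AvlNode insert(AvlNode node, int key) {
            if (node == null) return new AvlNode(key);
            if (key < node.key)      node.left  = insert(node.left, key);
            else if (key > node.key) node.right = insert(node.right, key);
            else return node;        // this sketch ignores duplicates

            update(node);            // recompute the height now that a child changed
            int b = balance(node);
            if (b > 1 && key < node.left.key)   return rotateRight(node);   // left-left
            if (b > 1 && key > node.left.key)   { node.left  = rotateLeft(node.left);   return rotateRight(node); }  // left-right
            if (b < -1 && key > node.right.key) return rotateLeft(node);    // right-right
            if (b < -1 && key < node.right.key) { node.right = rotateRight(node.right); return rotateLeft(node); }   // right-left
            return node;
        }
    }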

Related

Is kd-tree always balanced?

I have used the kd-tree algorithm to build a tree.
But I found that the tree is not balanced, so my question is: if we use the kd-tree algorithm, is the resulting tree always balanced? If not, how can we make it balanced?
Can we use another algorithm, such as AVL or Red-Black, to balance a kd-tree?
I have some sample data for which I used the kd-tree algorithm, but the resulting tree is not balanced.
(14,31), (15,32), (17,42), (16,44), (18,52), (16,62)
This is a fairly broad topic and the questions themselves are kind of general.
Hopefully this will give you some useful insights and material to work with:
Kd tree is not always balanced.
AVL and Red-Black will not work with K-D trees; you will have to either construct some balanced variant, such as a K-D-B-tree, or use other balancing techniques.
K-D trees are commonly used to store geospatial data because they let you search over more than one key, in contrast to a 'traditional' tree, which only supports single-dimensional search. Geospatial data certainly cannot be represented in a single dimension.
Note that there are also specialized databases for working with geospatial data, so it might be worth checking whether that overhead could be shifted to them instead of building your own solution. Although I don't have much experience with it, PostGIS may be worth checking:
postgis
Here are some useful links showing how to build a balanced K-D tree variant and how to use K-D trees with spatial data:
balancing K-D-Tree
K-D-B-tree
spatial data k-d-trees
It depends on how you build the tree.
If built as originally published, the tree will be balanced, i.e. the depths of the leaves will differ by at most 1. If your data set has 2^n-1 elements, the tree will be perfectly balanced.
When it is constructed around the median, half of the objects must end up on either branch of the tree, so it has minimal height and is balanced.
However, such a tree cannot be changed afterwards. I am not aware of an insert or remove algorithm that would preserve this property, but YMMV. I bet there are two dozen kd-tree extensions that aim at rebalancing and making insertions/deletions more effective.
The k-d-tree is not designed for changes, and will quickly lose efficiency. It relies on the median, and thus any change to the tree would worst-case propagate through all of the tree. Therefore, you need to allow some tolerance in the tree quality to support changes. It appears to be a common approach to just keep track of insertions/deletions and rebuild the tree eventually. You cannot combine it with red-black-trees or AVL-trees, because data with more than 1 dimension is not ordered; these trees only work for ordered data. Upon rotation of the tree the splitting axis changes; and there may be elements in either half that suddenly would need to move to the other branch. This does not happen in AVL or red-black trees.
But as you can imagine, people have published several indexes that remain balanced. Such as k-d-b-trees, and R-trees. These also work better for large data that needs to be stored on disk.
In order to make your kd-tree balanced, use the median value.
(14,31), (15,32), (17,42), (16,44), (18,52), (16,62)
At the root, choose the median of the x-coordinates [14,15,16,16,17,18], which is 16,
so all the elements less than 16 go to the left part of the tree and
those greater than or equal to 16 go to the right side.
So far, the left subtree consists of (14,31) and (15,32); now, for the y-axis,
find the median of [31,32], and continue recursively in the same way so that the tree stays balanced.
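To make the median-based construction concrete, here is a minimal Java sketch using the sample points above. The names are illustrative, and the tie-breaking for even-sized sets (which of the two middle elements counts as the median) varies between presentations, so the exact shape may differ slightly from the hand-worked example.

    import java.util.Arrays;

    // Illustrative sketch: build a balanced 2-D kd-tree by sorting on the
    // current axis and putting the median point at the root of each subtree.
    class KdNode {
        int[] point;
        KdNode left, right;
        KdNode(int[] point) { this.point = point; }
    }

    class KdTreeBuilder {
        static KdNode build(int[][] points, int depth) {
            if (points.length == 0) return null;
            int axis = depth % 2;                          // 0 = x, 1 = y
            Arrays.sort(points, (a, b) -> Integer.compare(a[axis], b[axis]));
            int mid = points.length / 2;                   // median index
            KdNode node = new KdNode(points[mid]);
            node.left  = build(Arrays.copyOfRange(points, 0, mid), depth + 1);
            node.right = build(Arrays.copyOfRange(points, mid + 1, points.length), depth + 1);
            return node;
        }

        public static void main(String[] args) {
            int[][] points = { {14,31}, {15,32}, {17,42}, {16,44}, {18,52}, {16,62} };
            KdNode root = build(points, 0);
            System.out.println("root splits on x = " + root.point[0]);   // prints 16
        }
    }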

Keeping avl tree balanced without rotations

A B-tree is a self-balancing tree like an AVL tree. HERE we can see how left and right rotations are used to keep an AVL tree balanced.
And HERE is a link which explains insertion in a B-tree. This insertion technique does not, if I am not wrong, involve any rotations to keep the tree balanced, and therefore it looks simpler.
Question: Is there any similar technique (or any other technique that does not use rotations) to keep an AVL tree balanced?
The answer is... yes and no.
B-trees don't need to perform rotations because they have some slack with how many different keys they can pack into a node. As you add more and more keys into a B-tree, you can avoid the tree becoming lopsided by absorbing those keys into the nodes themselves.
Binary trees don't have this luxury. If you insert a key into a binary tree, it will increase the height of some branch in that tree by 1 in all cases because that key needs to go into its own node. Rotations combat the overall growth of the tree by ensuring that if certain branches grow too much, that height is shuffled into the rest of the tree.
Most balanced BSTs have some sort of rebalancing strategy that involves rotations, but not all do. One notable example of a strategy that doesn't directly involve rotations is the scapegoat tree, which rebalances by tearing huge subtrees out of the master tree, optimally rebuilding them, then gluing the subtree back into the main tree. This approach doesn't technically involve any rotations and is a pretty clean way to implement a balanced tree.
That said - the most space-efficient implementations of scapegoat trees do indeed use rotations to convert an imbalanced tree into a perfectly balanced one. You don't have to use rotations to do this, though if space is short it's probably the best way to do so.
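To make the scapegoat idea concrete, here is a minimal sketch of that rotation-free rebalancing step, assuming a plain BST node with integer keys (names are illustrative): flatten the offending subtree with an in-order walk, then rebuild it perfectly balanced from the sorted list of nodes.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch of the rotation-free rebalance used by scapegoat trees:
    // flatten the offending subtree with an in-order walk, then rebuild it
    // perfectly balanced from the sorted list of nodes.
    class BstNode {
        int key;
        BstNode left, right;
        BstNode(int key) { this.key = key; }
    }

    class SubtreeRebuild {
        // In-order traversal collects the nodes in sorted key order.
        static void flatten(BstNode n, List<BstNode> out) {
            if (n == null) return;
            flatten(n.left, out);
            out.add(n);
            flatten(n.right, out);
        }

        // The middle node becomes the subtree root; the two halves recurse.
        static BstNode rebuild(List<BstNode> nodes, int lo, int hi) {
            if (lo > hi) return null;
            int mid = (lo + hi) / 2;
            BstNode root = nodes.get(mid);
            root.left  = rebuild(nodes, lo, mid - 1);
            root.right = rebuild(nodes, mid + 1, hi);
            return root;
        }

        // Replace the subtree rooted at 'scapegoat' with a perfectly balanced copy.
        static BstNode rebalance(BstNode scapegoat) {
            List<BstNode> nodes = new ArrayList<>();
            flatten(scapegoat, nodes);
            return rebuild(nodes, 0, nodes.size() - 1);
        }
    }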
Hope this helps!
Rotations can be made simple (if you need only simplicity).
If the insertion traffic is left, the balance -1 is the red-light.
If the insertion traffic is right, the balance 1 is the red-light.
This is a (simplified) coarse-graining (2-adic rounding) of the normalized fundamental AVL balance:
{left,even,right} ~ {low,even,high} ~ {green,green,red}
Walk the insertion route and rotate every red-light (before the insertion). If the next light is green, you can just rotate the red-light 1 or 2 times. You may have to re-balance the next subtrees before each rotation, because inner subtrees are invariant. This is simple, but it takes a very long time. You have to move down the green-light before each rotation. You can always move down the green-light to the root, and you can rotate the tree-top to generate a new green-light.
The red-light rotations naturally move down the green-light.
At this point, you have only the green-lights for the insertion.
The cost structure of this naive method is topologically simplified as
df(h)/dh = ∫ f(h) dh,
such as sin(h), sinh(h), etc.

AVL tree delete

Find an example AVL tree such that removing a single (specific) value
from the tree causes rebalancing to occur starting at two different
nodes.
I have this as my homework question. I know what an AVL tree is, but I don't understand the above question. Can someone shed some light?
Does rebalancing at two different nodes mean that two rotations are needed to fix the tree?
An AVL rebalance operation occurs when a particular node needs either a single or a double rotation applied to correct an imbalance in the tree. I think the question is asking you to find a case where doing a single or double rotation within an AVL tree locally fixes the balance, but then requires another rebalance operation to be performed at a node higher up in the tree.
Hope this helps!

Why storing data only in the leaf nodes of a balanced binary-search tree?

I have bought a nice little book about computational geometry. While reading it here and there, I often stumbled over the use of this special kind of binary search tree. These trees are balanced and should store the data only in the leaf nodes, whereas inner nodes should only store values to guide the search down to the leaves.
The following image shows an example of such a tree (where the leaves are rectangles and the inner nodes are circles).
I have two questions:
What is the advantage of not storing data in the inner nodes?
For the purpose of learning, I would like to implement such a tree. Therefore, I thought it might be a good idea to use an AVL tree as the basis, but is it a good idea?
Any kind of helpful resource is very welcome.
What is the advantage of not storing data in the inner nodes?
There are some tree data structures that, by design, require that no data is stored in the inner nodes, such as Huffman code trees and B+ trees. In the case of Huffman trees, the requirement is that no leaf's code is a prefix of another's (e.g. you cannot have node 'A' at path 101 and node 'B' at path 10, because 10 is a prefix of 101). In the case of B+ trees, it comes from the fact that they are optimized for block search (this also means that every internal node has a lot of children, and that the tree is usually only a few levels deep).
For the purpose of learning, I would like to implement such a tree. Therefore, I thought it might be a good idea to use an AVL tree as the basis, but is it a good idea?
Sure! An AVL tree is not extremely complicated, so it's a good candidate for learning.
It is common to have other kinds of binary trees with data at the leaves instead of the interior nodes, but fairly uncommon for binary SEARCH trees.
One reason you might WANT to do this is educational -- it's often EASIER to implement a binary search tree this way than the traditional way. Why? Almost entirely because of deletions. Deleting a leaf is usually very easy, whereas deleting an interior node is harder/messier. If your data is only at the leaves, then you are always in the easy case!
It's worth thinking about where the keys on interior nodes come from. Often they are duplicates of keys that are also at the leaves (with data). Later, if the key at the leaf is deleted, the key at the interior nodes might still hang around.
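As a rough illustration of why deletion gets easier, here is a sketch of deleting from a leaf-oriented BST, assuming the common convention that keys less than or equal to the routing key go left (names are illustrative). Deleting a leaf just replaces its parent with the leaf's sibling, and routing keys equal to the deleted key may indeed remain higher up, as described above.

    // Illustrative sketch of deleting from a leaf-oriented BST, assuming the
    // convention that keys <= the routing key live in the left subtree.
    // Every internal node has exactly two children, so deleting a leaf just
    // replaces the leaf's parent with the leaf's sibling.
    class LeafBst {
        static class Node {
            int key;            // routing key in internal nodes, real key in leaves
            Node left, right;   // both null exactly when this node is a leaf
            Node(int key) { this.key = key; }
            boolean isLeaf() { return left == null && right == null; }
        }

        // Delete 'key' from the subtree rooted at 'root' and return the new root.
        static Node delete(Node root, int key) {
            if (root == null) return null;
            if (root.isLeaf()) return root.key == key ? null : root;   // one-leaf tree
            Node child   = key <= root.key ? root.left  : root.right;
            Node sibling = key <= root.key ? root.right : root.left;
            if (child.isLeaf()) {
                return child.key == key ? sibling : root;   // splice out the parent
            }
            if (key <= root.key) root.left  = delete(root.left, key);
            else                 root.right = delete(root.right, key);
            return root;
        }
    }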
What is the advantage of not storing data in the inner nodes?
In general, there is no advantage in not storing data in the inner nodes. For example, a red-black tree is a balanced tree and it stores data in both its inner and leaf nodes.
For the purpose of learning, I would like to implement such a tree. Therefore, I thought it might be a good idea to use an AVL tree as the basis, but is it a good idea?
In my opinion, it is.
One benefit to only keeping the data in leaf nodes (e.g., B+ tree) is that scanning/reading the data is exceedingly simple. The leaf nodes are linked together. So to read the next item when you are at the "end" (right or left) of the data within a given leaf node, you just read the link/pointer to the next (or previous) node and jump to the next leaf page.
With a B tree where data is in every node, you have to traverse the tree to read the data in order. That is certainly a well-defined process but is arguably more complex and typically requires more state information.
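As a rough sketch of that leaf-chain scan (the fixed array-of-keys layout and the names here are a simplification, not any particular implementation's page format):

    // Illustrative sketch of scanning a B+ tree in order via the leaf chain:
    // walk the entries of one leaf, then follow its 'next' pointer.
    class LeafScan {
        static class Leaf {
            int[] keys;     // keys stored in this leaf, already sorted
            Leaf next;      // right sibling, or null at the end of the chain
            Leaf(int[] keys) { this.keys = keys; }
        }

        // Print every key in ascending order without touching any inner node.
        static void scan(Leaf first) {
            for (Leaf leaf = first; leaf != null; leaf = leaf.next) {
                for (int key : leaf.keys) {
                    System.out.println(key);
                }
            }
        }

        public static void main(String[] args) {
            Leaf a = new Leaf(new int[] {1, 3, 5});
            Leaf b = new Leaf(new int[] {7, 9});
            a.next = b;
            scan(a);    // prints 1 3 5 7 9, one per line
        }
    }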
I am reading the same book, and they say it could be done either way: storing the data at the external nodes or at the internal nodes.
The trees they use are Red-Black.
In any case, here is an article that stores data at internal nodes of a Red Black Tree and then links these data nodes together as a list.
Balanced binary search tree with a doubly linked list in C++
by Arjan van den Boogaard
http://archive.gamedev.net/archive/reference/programming/features/TStorage/default.html

Applications of 2-3-4 Tree

What are the applications of 2-3-4 trees? Are they widely used in applications to provide better performance?
Edit: Which algorithms make the best use of 2-3-4 trees?
2-3-4 trees are self-balancing and usually very efficient for finding, adding and deleting elements, so like all trees they can be used for storing and retrieving elements in non-linear order. Unfortunately they tend to use more memory than other trees, because even a node that currently holds only one data item still needs enough memory to possibly store three of them (plus four child links).
This is why 2-3-4 trees are used as models for red-black trees, which are like standard BSTs except that nodes can be either red or black, and various rules exist about how to choose which colour a node is.
The key is that algorithms for searching/adding/deleting in a 2-3-4 tree are VERY similar to the ones for a red-black tree, so usually 2-3-4 trees are studied as a way of understanding red-black trees. The red-black trees themselves are quite widely used - I believe the standard Java Collections Framework tree is a red-black tree.
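For reference, the Java Collections Framework tree mentioned above is java.util.TreeMap (with TreeSet built on top of it), which is documented as a red-black tree. A tiny usage example:

    import java.util.TreeMap;

    // java.util.TreeMap is the red-black tree in the Java Collections Framework;
    // the balancing is invisible to the caller, keys simply come back in order.
    public class TreeMapDemo {
        public static void main(String[] args) {
            TreeMap<Integer, String> map = new TreeMap<>();
            map.put(30, "thirty");
            map.put(10, "ten");
            map.put(20, "twenty");
            System.out.println(map.keySet());        // [10, 20, 30]
            System.out.println(map.firstKey());      // 10
            System.out.println(map.ceilingKey(15));  // 20
        }
    }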
Some applications of 2-3-4 trees are:
• the Linux kernel
• the Completely Fair Scheduler
• keeping track of the virtual memory segments of a process
This is because they are similar to red-black trees, and, as Adam said, we use them in the Java Collections Framework itself.
