practical use case of Binary tree zig zag traversal - data-structures

Are there any practical use cases of binary tree zig-zag traversal?
Is there any use case where we need the top view, bottom view, left view, or boundary nodes of a binary tree? I know these questions come up in interviews, but are there any other practical uses of knowing them?
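For context, zig-zag (spiral) traversal is usually just a level-order walk that alternates direction on each level. A minimal Python sketch, assuming a simple Node class with val/left/right fields:

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def zigzag_levels(root):
    """Return values level by level, alternating left-to-right and right-to-left."""
    if root is None:
        return []
    result, queue, left_to_right = [], deque([root]), True
    while queue:
        level = [node.val for node in queue]          # the current level, left to right
        result.append(level if left_to_right else level[::-1])
        left_to_right = not left_to_right
        for _ in range(len(queue)):                   # advance to the next level
            node = queue.popleft()
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
    return result
```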

Related

Why do we have to use depth-first traversal for a parse tree?

During my learning of parsing technology, it seems the parse tree is always traversed in a depth-first manner.
The leftmost derivation corresponds to a preorder traversal of the
parse tree, while the rightmost derivation corresponds to the reverse
of a postorder traversal of the parse tree.
[1]
And pre-order and post-order traversals are just 2 specific types of
depth-first tree traversal[2].
I think the reason lies in the difference between a plain tree and a parse tree. A plain tree only records the topology structure among nodes, while a parse tree records more than that. A parse tree further implies that the parent node is built upon the child nodes because a parent node derives into a collection of child nodes. If we want to compute the root node of the parse tree, which is the ultimate goal of creating a parse tree, we have to compute all the prerequisites. So a depth-first traversal is a natural must.
Is my understanding correct? Or is there any other scenario where other ways of traversal of a parse tree are necessary/mandatory?
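To make the reasoning above concrete: computing the root of an arithmetic parse tree forces a post-order (depth-first) visit, because every parent needs its children's values first. A minimal sketch with a hypothetical expression tree:

```python
class ParseNode:
    def __init__(self, symbol, children=()):
        self.symbol = symbol              # an operator or a literal
        self.children = list(children)

def evaluate(node):
    """Post-order evaluation: children are computed before their parent."""
    if not node.children:                 # leaf: a literal value
        return int(node.symbol)
    values = [evaluate(child) for child in node.children]
    if node.symbol == '+':
        return sum(values)
    if node.symbol == '*':
        product = 1
        for v in values:
            product *= v
        return product
    raise ValueError(f"unknown operator {node.symbol!r}")

# (2 + 3) * 4
tree = ParseNode('*', [ParseNode('+', [ParseNode('2'), ParseNode('3')]), ParseNode('4')])
assert evaluate(tree) == 20
```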
You are considering only two of the possible parsing strategies: top-down left-to-right parsing, and bottom-up left-to-right parsing. Those are the two most popular strategies, to be sure. But they are not the only possibilities.
Each of these two strategies corresponds to one parse tree traverse, as the text you quote indicates. And the two traverses are both depth-first, in effect because the two parse strategies are both left-to-right. [Note 1]
Many other parse strategies are available, and they would correspond to other tree traverses. You could, for example, attempt to parse the text by starting in the middle somewhere (say, at a point where you were for some reason certain of the parse, perhaps because you are within all possible parenthetic groupings) and work outwards from there in some manner determined by your parsing algorithm. This strategy is certainly possible, and there is even a certain amount of literature about it (possibly not very current) because it makes sense in the context of doing partial parses of incorrect texts, for example for diagnostic purposes or syntax-highlighted display.
Even if you perform a left-to-right parse, you don't need to choose between top-down and bottom-up parsing. Before the LALR algorithm was discovered, there was quite a bit of investigation of "left corner" (LC) parsing, which switches between top-down and bottom-up parsing at the point where it becomes convenient to do so (the "corner"). The derivation so produced is neither leftmost nor rightmost, and it is hard to characterize the corresponding traverse (as per my footnote), although I think that a reasonable characterization would still result in a depth-first correspondence because the algorithm is still left-to-right.
In all cases, once the parse tree (or abstract syntax tree) has been constructed, you are free to traverse it in any fashion you like, and different semantic analysis algorithms perform different types of traverses. In an optimizing multi-pass compiler, you would expect to find a huge variety of different tree traverses, some depth-first, some breadth-first, and some which bounce around as necessary.
Notes:
I'm not sure whether the word "traverse" is really accurate here. The parse tree is not really being traversed, as such, since it doesn't yet exist; it is being constructed. The top-down strategy can be viewed as a depth-first preorder traverse of a tree which magically springs into existence during the traverse.
On the other hand, the bottom-up strategy starts at the leftmost leaf node, and proceeds to deduce the traverse which arrived at that point, which is why the quoted text calls it "the reverse" of a traverse. Is that really a meaningful concept? It is meaningful as a description of the final result, certainly, but it doesn't really correspond to any intuitive sense of the word "traverse". If you were travelling to London, you couldn't start your trip at the point where you make the final exit from the M40.

Is it always possible to turn one BST into another using at most O(n) tree rotations?

This earlier question asks whether it's always possible to turn one BST for a set of values into another BST for the same set of values purely using tree rotations (the answer is yes). However, is it always possible to do this using at most O(n) total tree rotations?
Yes, it is always possible to turn one BST into another using at most O(n) tree rotations. This answer follows the same general approach as the other answer by picking some canonical tree shape T* and bounding the number of rotations needed to turn an arbitrary tree into our canonical tree. Then you can turn an arbitrary tree T₁ into another tree T₂ by transforming T₁ into T* and then transforming T* into T₂.
As suggested in comments, you can choose your canonical tree to be a degenerate linked list. For trees of n nodes, this upper bounds the number of rotations needed at 2n−2.
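As a sketch of why n−1 rotations suffice in each direction: right-rotating whenever the current node has a left child flattens any BST into a right spine, and each rotation permanently adds one node to that spine. A minimal Python version (the Node class is only for illustration):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(node):
    """Standard right rotation around node; returns the new subtree root."""
    pivot = node.left
    node.left, pivot.right = pivot.right, node
    return pivot

def to_right_spine(root):
    """Flatten a BST into a right-spine 'linked list', counting rotations (at most n-1)."""
    rotations = 0
    dummy = Node(None, right=root)
    parent, current = dummy, root
    while current is not None:
        if current.left is not None:
            current = rotate_right(current)
            parent.right = current
            rotations += 1
        else:
            parent, current = current, current.right
    return dummy.right, rotations
```

Running this on T1 and (conceptually in reverse) on T2 gives the 2n−2 bound mentioned above.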
In the paper Rotation Distance, Triangulation, and Hyperbolic Geometry, Daniel Sleator, Robert Tarjan, and William Thurston proved that the rotation distance between any two binary trees of n nodes is at most 2n−6 (better than the bound we get when transforming into a linked list).
At a high level, they did this by introducing a way to represent any binary tree as a polygon triangulation, where a tree rotation has a corresponding triangulation operation. Then, instead of reasoning about binary trees in their usual representation, the paper picks a canonical triangulation and shows how to transform an arbitrary triangulation into their desired one.
The canonical triangulation they chose is one where all diagonals emanate from a single vertex in a fan-like shape, which ends up corresponding to a somewhat unintuitive binary tree shape (a generalization of linked lists that also includes diamond shaped trees consisting of a root, a left child whose right child is a linked list, and a right child whose left child is a linked list).
It's a very cool technique that illustrates the power of isometries in data structures, showing how changing our representation can give us a new way of approaching a problem. Some friends and I recently put together a writeup walking through Sleator, Tarjan, and Thurston's proof if you would like to explore this in more detail.
Yes, this is always possible. I fear that the best I can do right now is give you a silly algorithm that proves it's possible, though I suspect that there must be a much better way to do this.
The Day-Stout-Warren algorithm is an algorithm that, starting with any BST, uses tree rotations to convert it to a perfectly balanced BST. It runs in time O(n) and does O(n) total rotations.
So suppose that you want to turn one tree T1 into another tree T2 using tree rotations. Run Day-Stout-Warren on both trees to convert them to the same balanced tree T*, and record the rotations that you needed to make in both cases. Then you can turn T1 into T2 by first running all the rotations needed to perfectly balance T1, then running the reverse of the rotations needed to turn T2 into a balanced tree. This turns T1 into T* and then turns T* into T2. Since the Day-Stout-Warren algorithm makes only O(n) total rotations, this too makes only O(n) total rotations.
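A sketch of that bookkeeping, assuming each rotation is recorded as a (subtree-root key, direction) pair and that some rotation-logging version of Day-Stout-Warren produces those logs (the logging helper is hypothetical, not part of the published algorithm):

```python
def compose_via_canonical(t1_to_star, t2_to_star):
    """Rotations turning T1 into T2, given the rotation logs T1 -> T* and T2 -> T*.

    Each log is a list of (subtree_root_key, direction) pairs, direction being
    'left' or 'right'. Undoing a rotation sequence means replaying it in reverse
    order with every direction flipped.
    """
    flip = {'left': 'right', 'right': 'left'}
    undo_t2 = [(key, flip[d]) for key, d in reversed(t2_to_star)]
    return t1_to_star + undo_t2      # T1 -> T*, then T* -> T2

# Example with made-up logs of two rotations each:
print(compose_via_canonical([('b', 'right'), ('d', 'left')],
                            [('x', 'left'), ('y', 'right')]))
# [('b', 'right'), ('d', 'left'), ('y', 'left'), ('x', 'right')]
```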
I feel like there has to be a better way to do this, but I'm not sure off the top of my head how to achieve this. If I think of anything, I'll let you know!

Is kd-tree always balanced?

I have used the kd-tree algorithm to build a tree.
But I found that the tree is not balanced, so my questions are: if we use the kd-tree algorithm, is the resulting tree always balanced, and if not, how can we balance it?
Can we use another algorithm such as AVL or Red-Black to balance a kd-tree?
Here is some sample data for which I used the kd-tree algorithm and got an unbalanced tree:
(14,31), (15,32), (17,42), (16,44), (18,52), (16,62)
This is a fairly broad topic and the questions themselves are kind of general.
Hopefully this will give you some useful insights and material to work with:
A k-d tree is not always balanced.
AVL and Red-Black will not work with k-d trees; you will have to either construct a balanced variant such as the K-D-B-tree or use other balancing techniques.
K-d trees are commonly used to store geospatial data because they let you search over more than one key, contrary to a 'traditional' tree, which only supports single-dimensional search. Geospatial data certainly cannot be represented in a single dimension.
Note that there are also specialized databases that work with geospatial data, so it might be worth checking whether the overhead could be shifted to them instead of building your own solution. Although I don't have much experience with this, PostGIS may be worth checking.
Here are some useful links showing how to build a balanced k-d tree variant and how to use k-d trees with spatial data:
balancing K-D-Tree
K-D-B-tree
spatial data k-d-trees
It depends on how you build the tree.
If built as originally published, the tree will be balanced, i.e. subtree heights differ by at most 1, and only at the leaf level. If your data set has 2^n-1 elements, the tree will be perfectly balanced.
When constructed with the median, then half of the objects must be on either branch of the tree, thus it has minimal height and is balanced.
However, such a tree cannot then be changed efficiently. I am not aware of an insert or remove algorithm that would preserve this property, but YMMV. I bet there are two dozen kd-tree extensions that aim at rebalancing and at making insertions/deletions more effective.
The k-d tree is not designed for changes, and will quickly lose efficiency. It relies on the median, so in the worst case any change would propagate through all of the tree. Therefore, you need to allow some tolerance in tree quality to support changes. A common approach appears to be to just keep track of insertions/deletions and rebuild the tree eventually. You cannot combine it with red-black trees or AVL trees, because data with more than one dimension is not ordered; those trees only work for ordered data. On rotation, the splitting axis changes, and elements in either half might suddenly need to move to the other branch. This does not happen in AVL or red-black trees.
But as you can imagine, people have published several indexes that remain balanced, such as k-d-b-trees and R-trees. These also work better for large data that needs to be stored on disk.
In order to make your kd-tree balanced, split on the median value.
(14,31), (15,32), (17,42), (16,44), (18,52), (16,62)
At the root, choose the median of the x-coordinates [14,15,16,16,17,18], which is 16.
All the elements with x less than 16 go to the left part of the tree, and
those greater than or equal to 16 go to the right side of the tree.
As of now,
the left subtree consists of (14,31), (15,32); now, for the y-axis, find the median of [31,32],
so that the tree stays balanced.
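A minimal sketch of this median-split construction on the sample points, alternating axes per level (the node layout is just for illustration; one common variant, used here, stores the median point in the node itself):

```python
class KDNode:
    def __init__(self, point, left=None, right=None):
        self.point, self.left, self.right = point, left, right

def build_kdtree(points, depth=0):
    """Build a balanced k-d tree by splitting at the median of the current axis."""
    if not points:
        return None
    axis = depth % 2                               # alternate x (0) and y (1)
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                         # median position
    return KDNode(points[mid],
                  build_kdtree(points[:mid], depth + 1),
                  build_kdtree(points[mid + 1:], depth + 1))

tree = build_kdtree([(14, 31), (15, 32), (17, 42), (16, 44), (18, 52), (16, 62)])
print(tree.point)   # (16, 62): the upper median on x ends up at the root
```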

Why is avl tree faster for searching than red black tree?

I have read in a couple of places that AVL trees search faster, but I am not able to understand why. As I understand it:
max height of red-black tree = 2*log(N+1)
max height of AVL tree = 1.44*log(N+1)
Is it because AVL is shorter?
Yes.
The number of steps required to find an item depends on the distance between the item and the root.
Since the AVL tree is packed tighter (i.e. it has a lower max height) it means more items are closer to the root than in the red-black case.
The extra tight packing also means the AVL tree requires more work when inserting elements.
The best choice for any app depends on whether it is insert intensive or search intensive...
An AVL tree is better than a red-black tree if the input keys arrive in almost ascending/descending order, because then we only need single rotations (the left-left or right-right case) to add each element. Also, since the tree is tightly balanced, the search is faster.
But for randomly ordered input keys, red-black trees are better, since they require fewer rotations for insertion in comparison to AVL.
Overall, it depends on the input sequence, which decides how skewed the tree gets, and on the operations performed. For insert-intensive workloads use a Red-Black Tree, and for search-intensive workloads use AVL.
AVL tree and RBTree do have respective advantages as well as disadvantages. You'll perceive that better if you've already learned how they work.
AVL is slightly faster than a red-black tree for insertion, because an insertion involves at most one rebalancing rotation, while a red-black tree may need two.
A red-black tree requires at most three rotations for a deletion, but this is not guaranteed for AVL (a deletion can trigger rotations all the way up to the root). So a red-black tree can delete nodes faster than AVL.
However, above all, they both have strict logarithmic tree height.
Pick up any subtree, the property that makes AVL "balanced" guarantees that the difference of height between two child subtrees is at most one, which is to say, intuitively, the whole tree is rigidly balanced.
But when it comes to a red-black tree, the rule is 'looser': its properties only guarantee that the depth of the tree is no more than twice the logarithm of the total number of nodes.
Here're some facts that may be more precise:
An AVL tree's height is strictly less than 1.44*log(n+2) - 0.328 (approximately).
A red-black tree's height is at most 2log(n+1)
See https://en.wikipedia.org/wiki/AVL_tree#Comparison_to_other_structures for detailed information.
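Plugging a concrete size into those two bounds shows the gap (base-2 logarithms; n chosen arbitrarily):

```python
import math

n = 1_000_000
avl_bound = 1.44 * math.log2(n + 2) - 0.328   # about 28.4
rb_bound = 2 * math.log2(n + 1)               # about 39.9
print(avl_bound, rb_bound)
```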

Can a non-binary tree be traversed in order?

We are dealing with a most-similar-neighbour algorithm here. Part of the algorithm involves searching in order over a tree.
The thing is that, so far, we can't make that tree binary.
Is there an analog of in-order traversal for non-binary trees? In particular, I think there is: just traverse the children from left to right (processing the parent node only once?).
Any thoughts?
update
Each node of this tree will hold a small graph of n objects. Each node will have n children (one per element in the graph), each of which will be another graph. So it's "kind of" a B-tree, without all the overflow/underflow mechanics. So I guess the closest analog would be something like a B-tree in-order traversal?
Thanks in advance.
Yes, but you need to define what the order is. Pre-order and post-order carry over unchanged, but in-order requires a definition of how the branches compare with the nodes.
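For example, if each node keeps its children in sorted order relative to its own keys (as in a B-tree), an in-order visit simply interleaves keys and children. A minimal sketch (the node layout is hypothetical, not the questioner's graph-holding nodes):

```python
class MultiwayNode:
    def __init__(self, keys, children=None):
        self.keys = keys                    # k sorted keys
        self.children = children or []      # either empty (leaf) or k+1 children

def in_order(node):
    """B-tree-style in-order: child[0], key[0], child[1], key[1], ..., child[k]."""
    if node is None:
        return
    if not node.children:                   # leaf node
        yield from node.keys
        return
    for i, key in enumerate(node.keys):
        yield from in_order(node.children[i])
        yield key
    yield from in_order(node.children[-1])

root = MultiwayNode([10, 20],
                    [MultiwayNode([1, 5]), MultiwayNode([12, 15]), MultiwayNode([25])])
print(list(in_order(root)))                 # [1, 5, 10, 12, 15, 20, 25]
```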
There is no simple analog of the in-order sequence for trees other than binary trees (actually in-order is a way to get sorted elements from a binary search tree).
You can find more detail in "The art of computer programming" by Knuth, vol. 1, page 336.
If breadth-first search can serve your purpose then you can use that.
