I need to implement an n-ary tree. The problem is that I'm only allowed to use preorder traversal. What I find hard is writing a function that adds a new node. New nodes are added from left to right, and again, I'm not allowed to use level order, only preorder.
What I thought is to somehow compare the levels of the leaf nodes, and if there are free slots at the maximum level of the tree, that's where I add the new node. Since this is not that easy, and I'm not sure I'm even on the right track, I decided to post the question here to see if anyone has any ideas, or whether there's another way of doing this.
Thank you in advance.
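One way to make the idea above concrete, as a sketch (names like Node, MAX_CHILDREN, and insert are my own, and the arity 3 is an assumption): do one preorder pass to find the minimum depth at which some node still has a free child slot, then a second preorder pass to attach the new node at the first such node. This works because, at a fixed depth, preorder order is exactly left-to-right order.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of level-by-level, left-to-right insertion using only preorder
// traversal. All names here are hypothetical, not from the question.
class PreorderInsertTree {
    static final int MAX_CHILDREN = 3; // assumed arity of the tree

    static class Node {
        int value;
        List<Node> children = new ArrayList<>();
        Node(int value) { this.value = value; }
    }

    Node root;

    void insert(int value) {
        if (root == null) { root = new Node(value); return; }
        // Pass 1 (preorder): minimum depth of a node with a free child slot.
        int minDepth = minFreeDepth(root, 0);
        // Pass 2 (preorder): attach at the first node at that depth with a
        // free slot; at a fixed depth, preorder order is left-to-right order.
        attach(root, 0, minDepth, new Node(value));
    }

    private int minFreeDepth(Node n, int depth) {
        int best = n.children.size() < MAX_CHILDREN ? depth : Integer.MAX_VALUE;
        for (Node c : n.children) best = Math.min(best, minFreeDepth(c, depth + 1));
        return best;
    }

    private boolean attach(Node n, int depth, int target, Node child) {
        if (depth == target) {
            if (n.children.size() < MAX_CHILDREN) { n.children.add(child); return true; }
            return false;
        }
        for (Node c : n.children)
            if (attach(c, depth + 1, target, child)) return true;
        return false;
    }
}
```

This does two full preorder passes per insertion, so it is O(n) per insert, but it only ever uses preorder, which matches the constraint as I understand it.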
Related
Is there a data structure for a sorted set that allows quick lookup of the n-th (i.e. the n-th least) item? That is, something like a hybrid between a rope and a red-black tree.
Seems like it should be possible to either keep track of the size of the left subtree and update it through rotations or do something else clever and I'm hoping someone smart has already worked this out.
Seems like it should be possible to either keep track of the size of the left subtree and update it through rotations […]
Yes, this is quite possible; but instead of keeping track of the size of the left subtree, it's a bit simpler to keep track of the size of the complete subtree rooted at a given node. (You can then get the size of its left subtree by examining its left-child's size.) It's not as tricky as you might think, because you can always re-calculate a node's size as long as its children are up-to-date, so you don't need any extra bookkeeping beyond making sure that you recalculate sizes by working your way up the tree.
Note that, in most mutable red-black tree implementations, 'put' and 'delete' stop walking back up the tree once they've restored the invariants, whereas with this approach you need to walk all the way back up the tree in all cases. That'll be a small performance hit, but at least it's not hard to implement. (In purely functional red-black tree implementations, even that isn't a problem, because those always have to walk the full path back up to create the new parent nodes. So you can just put the size-calculation in the constructor — very simple.)
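A minimal sketch of this bookkeeping, on a plain (unbalanced) BST rather than a red-black tree, since the size recalculation works the same way either way; all names here are mine:

```java
// Size-augmented BST sketch: each node stores the size of the subtree
// rooted at it, recalculated on the way back up after an insert.
class SizedBst {
    static class Node {
        int key, size = 1;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    static int size(Node n) { return n == null ? 0 : n.size; }

    // Insert, recalculating sizes while walking back up the tree.
    static Node insert(Node n, int key) {
        if (n == null) return new Node(key);
        if (key < n.key) n.left = insert(n.left, key);
        else             n.right = insert(n.right, key);
        n.size = 1 + size(n.left) + size(n.right); // children are up to date
        return n;
    }

    // k-th smallest key, 0-based: the left subtree's size tells us which way to go.
    static int select(Node n, int k) {
        int leftSize = size(n.left);
        if (k < leftSize) return select(n.left, k);
        if (k == leftSize) return n.key;
        return select(n.right, k - leftSize - 1);
    }
}
```

In a balanced tree, select runs in O(log n), because each step discards one subtree.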
Edited in response to your comment:
I was vaguely hoping this data structure already had a name, so I could just find some implementations out there, and that there was something clever one could do to minimize the updating; but (while I can find plenty of papers on data structures that are variations of balanced binary trees) I can't figure out a good search term to look for papers that let one look up the nth least element.
The fancy term for the nth smallest value in a collection is order statistic; so a tree structure that enables fast lookup by order statistic is called an order statistic tree. That second link includes some references that may help you — not sure, I haven't looked at them — but regardless, that should give you some good search terms. :-)
Yes, this is fully possible. Self-balancing tree algorithms do not actually need to be search trees, that is simply the typical presentation. The actual requirement is that nodes be ordered in some fashion (which a rope provides).
What is required is to update the subtree weight on insert and erase. Rotations do not require a full update; a local one is enough. For example, a left rotation requires that the weight of the old parent be added to the new parent (since the new parent is the old parent's right child, there is no need to walk down the new parent's right subtree, as that was already part of the old parent's subtree). Similarly, for a right rotation it is only necessary to subtract the weight of the new parent, since the new parent's right subtree will become the left subtree of the old parent.
I suppose it would be possible to create an insert that updates the weights as it does rotations and then adds the weight up any remaining ancestors, but I didn't bother when I was solving this problem. I simply added the new node's weight all the way up the tree, then did rotations as needed. Similarly for erase: I did the fix-up rotations, then subtracted the weight of the node being removed, before finally unhooking the node from the tree.
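Here is a sketch of that local fix-up for a left rotation, assuming the weight is the subtree node count (class and field names are my own):

```java
// Hypothetical size-augmented node: a left rotation only needs a local
// fix-up, because the new parent inherits the old parent's entire subtree.
class RotationSketch {
    static class Node {
        int key, size;
        Node left, right;
        Node(int key) { this.key = key; this.size = 1; }
    }

    static int size(Node n) { return n == null ? 0 : n.size; }

    // Rotate x's right child up; returns the new root of this subtree.
    static Node rotateLeft(Node x) {
        Node y = x.right;
        x.right = y.left;
        y.left = x;
        // Local update only: y now roots exactly the subtree x used to root.
        y.size = x.size;
        x.size = 1 + size(x.left) + size(x.right);
        return y;
    }
}
```

No walk down either subtree is needed: the rotation only moves one subtree between x and y, and both affected sizes can be recomputed from already-correct child sizes.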
Why do nodes of a binary tree have links only from parent to children? I know that there are threaded binary trees, but those are harder to implement. A binary tree with links in both directions would allow traversal in both directions iteratively, without a stack or queue.
I do not know of any such design. If there is one please let me know.
Edit1: Let me conjure a problem for this. I want to do traversal without recursion and without using extra memory in the form of a stack or queue.
PS: I am afraid that I am going to get flak and downvotes for this stupid question.
Some binary trees do require children to keep up with their parent, or even their grandparent, e.g. Splay Trees. However this is only to balance or splay the tree. The reason we only traverse a tree from the parent to the children is because we are usually searching for a specific node, and as long as the binary tree is implemented such that all left children are less than the parent, and all right children are greater than the parent (or vice-versa), we only need links in one direction to find that node. We start the search at the root and then iterate down, and if the node is in the tree, we are guaranteed to find it. If we started at a leaf, there is no guarantee we would find the node we want by going back to the root. The reason we don't have links from the child to the parent is because it is unnecessary for searches. Hope this helps.
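A minimal sketch of that top-down search, showing that only parent-to-child links are needed (names are mine):

```java
// Iterative top-down BST search: every step moves from a parent to one of
// its children, so child links alone are sufficient.
class BstSearch {
    static class Node {
        int key;
        Node left, right;
        Node(int key, Node left, Node right) {
            this.key = key; this.left = left; this.right = right;
        }
    }

    static boolean contains(Node root, int key) {
        Node cur = root;
        while (cur != null) {
            if (key == cur.key) return true;            // found it
            cur = key < cur.key ? cur.left : cur.right; // one step down, never up
        }
        return false; // fell off a leaf: the key is not in the tree
    }
}
```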
It can be done; however, we should consider the balance between memory usage and complexity.
Yes, you can traverse the binary tree with an extra link in each node, but then you are using the same extra memory as you would traversing with a queue, which may even run faster.
What a binary search tree is good at is implementing many searching problems in O(log N). It's fast enough and memory-saving.
Let me conjure a problem for this. I want to do traversal without recursion and without using extra memory in the form of a stack or queue.
Have you considered that the parent pointers in the tree occupy space themselves?
They add O(N) memory to the tree to store parent pointers, in order not to use O(log N) space during recursion.
What parent pointers allow us to do is to support an API whereby the caller can pass a pointer to a node and request an operation on it like "find the next node in order" (for example).
In this situation, we do not have a stack which holds the path to the root; we just receive a node "out of the blue" from the caller. With parent pointers, given a tree node, we can find its successor in amortized constant time O(1).
Implementations which don't require this functionality can save space by not including the parent pointers in the tree, and using recursion or an explicit stack structure for the root to leaf traversals.
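A sketch of the successor operation described above, assuming a node type with a parent pointer (names are mine):

```java
// In-order successor using parent pointers: no stack, no recursion, and
// amortized O(1) per call when iterating over the whole tree.
class ParentPointerTree {
    static class Node {
        int key;
        Node left, right, parent;
        Node(int key) { this.key = key; }
    }

    static Node successor(Node n) {
        if (n.right != null) {                       // one step right ...
            Node cur = n.right;
            while (cur.left != null) cur = cur.left; // ... then all the way left
            return cur;
        }
        // Otherwise climb until we arrive from a left child.
        Node cur = n, p = n.parent;
        while (p != null && cur == p.right) { cur = p; p = p.parent; }
        return p; // null if n was the maximum
    }
}
```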
I'm learning about graphs and DFS, and trying to do something similar to how ANT resolves dependencies. I'm confused about something that all the articles I read seem to assume everyone knows.
I'm thinking of having a Map<File, Set<File>> with key = file and value = the set of files that the key depends on.
The DFS algorithm says I have to change the color of a node once it's visited; that means the reference to the same file node must be identical between the one used as a key and the one inside the Set<>, right?
Therefore, I'm thinking that each time a Node is created (including neighbor nodes), I would add it to one more collection (maybe another Map?); then, whenever a new Node is to be added to the graph (as a key), I'd search that collection and use that reference instead. Am I wasting too much space? How is this usually done? Is there some other, better way?
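One common way to guarantee a single Node instance per file is to route every lookup through one canonical map, sketched here with hypothetical names:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: a canonical map guarantees one Node object per file name, so the
// instance used as a key and the instance inside a Set<> are the same object.
class DependencyGraph {
    static class Node {
        final String file;
        boolean visited; // plays the role of the DFS "color"
        Node(String file) { this.file = file; }
    }

    private final Map<String, Node> canonical = new HashMap<>();
    final Map<Node, Set<Node>> edges = new HashMap<>();

    // Always returns the same Node instance for the same file name.
    Node nodeFor(String file) {
        return canonical.computeIfAbsent(file, Node::new);
    }

    void addDependency(String from, String to) {
        edges.computeIfAbsent(nodeFor(from), k -> new HashSet<>())
             .add(nodeFor(to));
    }
}
```

The extra map costs O(1) space per node (one reference), which is usually negligible next to the nodes themselves.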
During my studies, the DFS algorithm was implemented like this:
Push the starting node onto a stack (a LIFO structure where you can only access and remove the most recently added element).
Pop the top element and set it to seen; this can be done either through coloring or by setting an attribute, let's call it isSeen, to true.
Then look at all the neighbors of that node, and if they have not been seen already, push them onto the stack.
Once you have looked at all the neighbors, pop the next element of the stack and process it the same way, until the stack is empty.
The result will then be that all the nodes that can be reached from the starting node have the attribute set to seen.
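The steps above can be sketched as iterative DFS over an adjacency map (names are mine):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Iterative DFS: the 'seen' set plays the role of the isSeen attribute.
class IterativeDfs {
    static Set<String> reachable(Map<String, List<String>> graph, String start) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            String node = stack.pop();
            if (!seen.add(node)) continue; // already seen: skip it
            for (String neighbor : graph.getOrDefault(node, List.of()))
                if (!seen.contains(neighbor))
                    stack.push(neighbor);
        }
        return seen;
    }
}
```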
Hope this helped.
For dynamic programming, what are some of the ways I can store a tree?
I am working on an assignment that requires me to solve a maze with no left turns while minimizing right turns. The idea I had is to store all possible paths in a tree and then traverse the tree looking for the path with the fewest right turns. To make the code more efficient, any time a path involves either
a) a left turn
b) a solution with more right turns than the current best known solution
I will not add it to the tree. Hopefully I have a clear understanding of what I am doing here. I really do appreciate any input on this.
The tree that I am looking at storing will contain all possible directions in the maze, and the parent of each child will be the previous location. I believe that some parents will have more than 2 children.
I am wondering what is the best way to store this kind of tree?
Thank you in advance.
If the problem is to solve the maze, I suggest using backtracking instead of creating such a tree. If you have to create the tree, you could use a tree in which every junction where you could turn right is represented as a node, and the children would be the next junction if turned right, or the next one if you did not. I'm not sure I understood you correctly, but I hope this gives you some pointers as to how to continue.
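For the storage part of the question, a minimal sketch of a node type that allows more than two children (all field names are my own assumptions):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an n-ary path-tree node: a list of children handles any number
// of branches, and the parent link records the previous location.
class PathTree {
    static class Node {
        final int row, col;       // position in the maze (assumed grid maze)
        final int rightTurns;     // right turns used to reach this node
        final Node parent;        // previous location on the path
        final List<Node> children = new ArrayList<>();

        Node(int row, int col, int rightTurns, Node parent) {
            this.row = row; this.col = col;
            this.rightTurns = rightTurns; this.parent = parent;
            if (parent != null) parent.children.add(this); // register with parent
        }
    }
}
```

Storing the running right-turn count in each node makes the pruning rule (b) a simple comparison against the best solution found so far.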
I am stumped by this question. The following is my approach:
Let's say the two nodes are node1 and node2.
For one node (say node1), find the path from the root to node1 and store it in a HashMap.
For the other node node2, find the path from the root to node2, and while traversing back, check whether any of its nodes are present in the HashMap.
Return the first node found.
Time complexity is O(n) and space complexity is O(h), where n is the number of nodes and h is the height of the tree.
I just wanted to know how good this approach is, or whether there exists a better solution.
EDIT: The given tree is a binary tree, not a BST. So, the time taken to find a node is linear in the number of nodes.
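A sketch of the approach above on a plain binary tree, where finding each root-to-node path takes O(n) since the tree is not ordered (names are mine; a HashSet of the first path's nodes is enough here in place of a HashMap):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// LCA by comparing root-to-node paths: hash the first path, then walk the
// second path backwards and return the first node found in the set.
class LcaByPaths {
    static class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    // Fills 'path' with the nodes from root down to target; true if found.
    static boolean pathTo(Node root, Node target, List<Node> path) {
        if (root == null) return false;
        path.add(root);
        if (root == target || pathTo(root.left, target, path)
                           || pathTo(root.right, target, path)) return true;
        path.remove(path.size() - 1); // backtrack
        return false;
    }

    static Node lca(Node root, Node a, Node b) {
        List<Node> pa = new ArrayList<>(), pb = new ArrayList<>();
        if (!pathTo(root, a, pa) || !pathTo(root, b, pb)) return null;
        Set<Node> onPathA = new HashSet<>(pa);
        for (int i = pb.size() - 1; i >= 0; i--) // traverse back from b
            if (onPathA.contains(pb.get(i))) return pb.get(i);
        return null;
    }
}
```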
If you only need to do this once (or a few times), then this is a good method (you could optimize by going from the nodes towards the root, if possible). If you need to do this a lot of times for different pairs of nodes, it's worth to spend a bit more time precomputing certain things, which will speed up queries.
I think this article explains things very well. Post back if you have any questions please.
How's the tree represented? In particular, is there a reference to the parent node of any tree node? Is it an ordered tree?
Isn't it simpler to calculate the path from each node to the root, then compare the two paths from the root down? The last node that's the same on both paths is the common ancestor.
I think finding the path from root to node (as your approach has it) is O(n) where n is the size of the tree, unless the tree is ordered...
So your approach works, but if I were asking you the question I would have expected you to ask some additional questions about the layout of the tree in order to determine the correct answer...
Here is a tutorial on solving the Lowest Common Ancestor problem:
[Range Minimum Query and Lowest Common Ancestor][1]
[1]: http://www.topcoder.com/tc?module=Static&d1=tutorials&d2=lowestCommonAncestor#Lowest Common Ancestor (LCA)