I came across this picture, and someone had commented that there's a problem with the diagram, but I am not sure what it is.
Here's the picture: (original link)
Now the tree looks alright to me but the heap creates some doubt.
I know that in a binary heap, if the root has two children, the left child must have both of its children before we can move on to the right child. Is that also the case with an n-ary heap? That is, since the root has four children, should the first child have had its four children before we move on to the next child?
In general, a structure is a heap if it satisfies the heap condition, so this heap is OK, because it does satisfy it.
If we're looking for some concrete kind of heap, I guess a pairing heap would be OK.
The problem is that there is a second condition that is generally required: every row of the tree must be full except possibly the last, and the last row must be left-filled. In other words, if there are any nodes missing from the last row, they must all be towards the right. In the diagram, the second node in the fourth row has no children, and the fourth and fifth each have just a right child. Even worse, the first node in the second row doesn't have a right child. There is one more problem, but I'll leave it to you to find it.
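To make the two conditions concrete, here is a minimal sketch for an array-based d-ary min-heap (d = 4 to match the diagram). In the array representation the shape condition (full rows, last row left-filled) holds by construction, so only the heap condition needs checking; the function name and the d=4 default are just illustrative.

```python
def is_dary_min_heap(a, d=4):
    """Check the heap condition of an array-based d-ary min-heap.
    The children of index i live at d*i + 1 ... d*i + d, so the shape
    condition is automatic; we only verify parent <= child everywhere."""
    n = len(a)
    for i in range(n):
        for k in range(1, d + 1):
            child = d * i + k
            if child < n and a[child] < a[i]:
                return False
    return True
```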
In the following image, if I add 14 as the right child of 12, then 14 can replace the 15 without influencing other nodes, just like the correct answer, 16. Why is the successor defined to be the number that is a bit larger, rather than the one that is a bit smaller?
Well, in terms of language, the successor is the one that comes right after, implying that it must be bigger.
In terms of the deletion algorithm, you can use either the successor or the predecessor to replace the deleted node.
Successor: the smallest node in the right subtree of the deleted node, which means it is the smallest node that is bigger than the deleted node. So you are guaranteed that, if you replace the deleted node with it, it will still be smaller than every other node in the right subtree, and it won't break any property.
Predecessor: the biggest node in the left subtree of the deleted node, which means it is the biggest node that is smaller than the deleted node. So you are guaranteed that, if you replace the deleted node with it, it will still be bigger than every other node in the left subtree, and it won't break any property.
In a nutshell, you can use either the successor or the predecessor without any problems; it's not a question of definition, only a question of choice.
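For concreteness, here is a minimal sketch of BST deletion using the successor; the Node class and attribute names are illustrative, and using the predecessor instead only changes which subtree the replacement key is taken from.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def delete(root, key):
    """Delete `key` from a BST. A node with two children is replaced by its
    in-order successor (the smallest key in its right subtree); the
    predecessor (largest key in the left subtree) would work just as well."""
    if root is None:
        return None
    if key < root.key:
        root.left = delete(root.left, key)
    elif key > root.key:
        root.right = delete(root.right, key)
    else:
        if root.left is None:          # zero or one child: splice the node out
            return root.right
        if root.right is None:
            return root.left
        succ = root.right              # two children: find the successor...
        while succ.left is not None:
            succ = succ.left
        root.key = succ.key            # ...copy it up...
        root.right = delete(root.right, succ.key)   # ...and remove it below
    return root
```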
I'm working through the B-Tree example given on the ever wonderful Wikipedia on this page. (I'm using Wikipedia because Stack Overflow tells me to: How do you remove an element from a b-tree?)
I'm happy with the construction of this tree.
...and I find the algorithm elegant.
My issue is that the descriptions on Wikipedia for deleting a node appear to be missing a case. The three cases given for 're-balancing after deletion' are:
If the deficient node's right sibling exists and has more than the minimum number of elements, then rotate left
Otherwise, if the deficient node's left sibling exists and has more than the minimum number of elements, then rotate right
Otherwise, if both immediate siblings have only the minimum number of elements, then merge with a sibling sandwiching their separator taken off from their parent.
None of which turns out to be helpful if the deficient node has no siblings (for example, in the tree above, delete '1'; '3' is now deficient and has no siblings).
My question is, what is the case/cases that are missing (presuming I've understood correctly), and what should the Wikipedia page say?
for example in the tree above, delete '1', '2' is now deficient and has no siblings
Yes, it has a sibling: The node (6,_). If you have no siblings, you are the root.
So in this case, we apply option 3 and end up with a two-level tree.
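To make the three cases concrete, here is a minimal sketch of the rebalancing step, assuming a toy node representation (a keys list and a children list) and a fixed minimum key count; Node, rebalance and MIN_KEYS are illustrative names, not Wikipedia's.

```python
MIN_KEYS = 1   # minimum keys per non-root node in this toy example

class Node:
    def __init__(self, keys, children=None):
        self.keys = keys
        self.children = children or []      # empty list means the node is a leaf

def rebalance(parent, i):
    """Fix the deficient child parent.children[i] after a deletion."""
    child = parent.children[i]
    left  = parent.children[i - 1] if i > 0 else None
    right = parent.children[i + 1] if i + 1 < len(parent.children) else None

    if right and len(right.keys) > MIN_KEYS:       # case 1: rotate left
        child.keys.append(parent.keys[i])          # separator moves down
        parent.keys[i] = right.keys.pop(0)         # right sibling's first key moves up
        if right.children:
            child.children.append(right.children.pop(0))
    elif left and len(left.keys) > MIN_KEYS:       # case 2: rotate right
        child.keys.insert(0, parent.keys[i - 1])
        parent.keys[i - 1] = left.keys.pop()
        if left.children:
            child.children.insert(0, left.children.pop())
    elif right:                                    # case 3: merge child + separator + right
        child.keys += [parent.keys.pop(i)] + right.keys
        child.children += right.children
        parent.children.pop(i + 1)
    else:                                          # case 3: merge left + separator + child
        left.keys += [parent.keys.pop(i - 1)] + child.keys
        left.children += child.children
        parent.children.pop(i)
    # If the parent is now deficient, the same procedure is applied one level
    # up; when the root runs out of keys, its single remaining child becomes
    # the new root, which is how the tree loses a level.
```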
I keep seeing it defined as
A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible.
But... I have no clue as to what it means by "all nodes are as far left as possible." That's literally my question. I can't expand on it any further, because I have no idea what it means by "all nodes are as far left as possible." Like, as far left as possible compared to what? I don't get it.
The "as far left as possible" part applies to the last level. That is, at the last level, you should start filling in nodes from the left.
For example, the following is a valid complete binary tree, since at the last level all the nodes are as far left as possible:
The following is not:
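A level-order traversal makes the rule mechanical: once you hit the first missing child, no further nodes may appear after that gap. A minimal sketch, assuming nodes with left/right attributes:

```python
from collections import deque

def is_complete(root):
    """Check the 'complete' shape of a binary tree: every level full except
    possibly the last, and the last level filled from the left."""
    if root is None:
        return True
    queue, seen_gap = deque([root]), False
    while queue:
        node = queue.popleft()
        for child in (node.left, node.right):
            if child is None:
                seen_gap = True          # first missing slot, in level order
            elif seen_gap:
                return False             # a node appears after a gap: not left-filled
            else:
                queue.append(child)
    return True
```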
I'm currently trying to implement MCTS for a project of mine, but I'm not sure I understand the idea of node selection correctly. At the beginning of the game, after I randomly select one move, roll the game out to the end and then do the backpropagation, this node is obviously seen as better than all the other ones, since it's 1/1 (if we got the win) vs. their 0/0. How does MCTS escape that trap and avoid getting stuck with the one randomly selected node?
I mean, if we use, say, UCB to find the best node to expand, it will always choose the node we selected first (given it resulted in a win), completely ignoring all the other ones, since it will be the only one with a non-zero value. What am I missing here, since that is obviously not what happens?
Each time you are at a node, you expand a child node according to these rules:
if a child node has never been expanded before, then expand one of the unexplored children at random (and you can immediately run a rollout from this child node)
otherwise, every child node has been visited at least once. Compute the "exploration/exploitation" value for all of them and expand the child node with the highest value
The idea of MCTS is to maximize a combined exploration/exploitation score. If a child node has never been explored before, the "exploration" value associated with it is infinite, so you will have to explore it. However, once you have expanded all the child nodes, you will expand the child nodes with higher values more frequently (this is the "exploitation" part).
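A minimal sketch of that selection rule using UCB1, assuming each node carries visits, wins and children attributes (illustrative names); unvisited children get an infinite score, which is exactly why the first lucky 1/1 node cannot monopolize the search.

```python
import math, random

def select_child(node, c=math.sqrt(2)):
    """Pick the child with the highest UCB1 score. Unvisited children score
    infinity, so each one is tried (in random order) at least once before
    exploitation takes over."""
    children = list(node.children)
    random.shuffle(children)             # random tie-breaking among unvisited children

    def ucb1(child):
        if child.visits == 0:
            return float("inf")
        exploit = child.wins / child.visits
        explore = c * math.sqrt(math.log(node.visits) / child.visits)
        return exploit + explore

    return max(children, key=ucb1)
```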
How do you find a loop in a binary tree? I am looking for a solution other than marking the visited nodes as visited or doing address hashing. Any ideas?
Suppose you have a binary tree but you don't trust it and think it might actually be a graph; the general case dictates remembering the visited nodes. That is somewhat the same as the algorithm for constructing a minimum spanning tree from a graph, which means the space and time complexity will be an issue.
Another approach would be to consider the data you store in the tree. Suppose you store numbers, or hashes, so that you can compare them.
Pseudocode would test for these conditions:
Every node must have at most 2 children and 1 parent (at most 3 connections). More than 3 connections => not a binary tree.
The parent must not be a child.
If a node has two children, then the left child has a smaller value than the parent and the right child has a bigger value. Considering this, if a leaf or inner node has as a child some node on a higher level (like its parent's parent), you can detect the loop based on the values. If a child is a right child, then its value must be bigger than its parent's, but if that child forms a loop, it comes from either the left part or the right part above the parent (see the sketch after this answer).
3.a. So if it comes from the left part, then its value is smaller than its sibling's => not a binary tree. The idea is much the same for the other part.
Testing aside, in what form is the tree that you want to test? Remember that every node has a pointer to its parent, and that pointer points to a single parent. So depending on the format your tree is in, you may be able to take advantage of this.
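Here is a minimal sketch of the value-based idea from condition 3, assuming the structure is meant to be a binary search tree with distinct keys and that nodes expose value/left/right attributes (illustrative names). A pointer that climbs back to an ancestor always lands outside the key range valid at that position, so the check flags it; a violation can of course also mean the ordering itself is broken.

```python
def has_loop_by_values(node, lo=float("-inf"), hi=float("inf")):
    """Walk the structure keeping the open interval (lo, hi) that every key
    in the current subtree must lie in. A child pointer leading back to an
    ancestor produces a key outside that interval, so we stop and report it
    instead of recursing forever."""
    if node is None:
        return False
    if not (lo < node.value < hi):
        return True                      # loop back to an ancestor, or broken ordering
    return (has_loop_by_values(node.left, lo, node.value)
            or has_loop_by_values(node.right, node.value, hi))
```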
As mentioned already: A tree does not (by definition) contain cycles (loops).
To test whether your directed graph contains cycles (references to nodes already added to the tree), you can iterate through the tree, add each node to a visited list (or its hash, if you prefer), and check every new node against that list.
Plenty of algorithms for cycle detection in graphs are just a Google search away.
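A minimal sketch of that visited-set walk, assuming nodes with left/right attributes; it uses node identity (id) rather than stored values, and it will also fire on a shared (DAG) node, which likewise means the structure is not a tree.

```python
def contains_cycle(root):
    """Traverse child pointers like a tree, remembering every node seen by
    identity; meeting one again means a cycle (or a shared node)."""
    visited, stack = set(), [root] if root is not None else []
    while stack:
        node = stack.pop()
        if id(node) in visited:
            return True
        visited.add(id(node))
        for child in (node.left, node.right):
            if child is not None:
                stack.append(child)
    return False
```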