Create Binary Tree from Ancestor Matrix - algorithm

The question is how to create a binary tree, given its ancestor matrix. I found a cool solution at http://www.ritambhara.in/build-binary-tree-from-ancestor-matrics/. The problem is that it involves deleting rows and columns from the matrix. How do I do that? Can anybody suggest pseudocode for this? Or is there a better algorithm?

You don't have to actually delete the rows and columns. You can either flag them as deleted in some additional array, or you can make them all zeros, which I think will be effectively the same (actually, you'll still need to know that they are removed, so you don't choose them again in step 4.c - so, flagging the node as deleted should be good enough).
Here are the modifications to the pseudocode from the page:
4.b.
used[temp] = true;
for (i = 0 to N)
    Sum[i] -= matrix[i][temp]; // i.e., decrement the sum if temp is an ancestor of i
    matrix[i][temp] = 0;
4.c. Look for all rows for which Sum[i] == 0 and used[i] == false.

This reminds me of the Dancing Links used by Donald Knuth to implement his Algorithm X.
It's basically a structure of circular doubly linked lists. You could maintain a separate Sum array and update it as rows and columns are removed.
Actually you don't need to maintain a separate Sum array.
Edit:
I meant -
You could use a structure made up of circular 2D linked lists.
The node structure would somewhat look like:
struct node{
    int val;
    struct node *left;
    struct node *right;
    struct node *down;
};
The top-most and left-most lists are the header lists for the vertices (binary tree node values).
If vertex j is an ancestor of vertex i, build a new (empty) node such that column j's current down is assigned this new node and row i's current left is assigned this new node. Note: the structure can be easily built by scanning each row of the ancestor matrix from left to right and inserting rows from 0 to N (assuming N is the number of vertices here).
I borrowed images (Image1 and Image2) to give an idea of the grid; the second image is missing the left-most header, though.
If N is the number of vertices, there can be at worst O(N^2) entries in the ancestor matrix (in case the tree is skewed) or on average O(N log N) entries.
To search for current Root: O(N)
Assuming a dummy node to start with, linearly scan the Leftmost header and choose a node with node->down->right == node->down.
To delete this vertex's information: O(N)
Deleting the row: O(1)
node->down = node->down->down;
Deleting the column: O(N)
Go to the corresponding column header, say p:
node* q = p;
while(q->down != p){
    q->down->left->right = q->down->right;
    q->down->right->left = q->down->left;
    q = q->down;
}
After discovering the current root, you can assign it to its parent node and insert it into a queue to process the next level, as the linked article suggests.
Overall time complexity: N + (N-1) + (N-2) +.... = O(N^2).
Worst case space complexity O(N^2)
Though there is no big improvement in the asymptotic run-time over the solution you already have, I thought it was worth mentioning: this kind of structure is particularly useful for storing sparse matrices and defining operations like multiplication on them, or when working with a backtracking algorithm that removes a row/column and later backtracks and adds it again, like Knuth's Algorithm X.

You don't have to update the matrix. Just decrement the values in the sum array for any descendants of the current node, and check if any of them reaches zero, which means the current node is their last remaining ancestor, i.e. the direct parent:
for (i = 0 to N)
    if matrix[i][temp] == 1:
        Sum[i] = Sum[i] - 1
        if Sum[i] == 0:
            add i as child of temp
            add i to queue
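Putting the pieces together, here is a minimal runnable Python sketch of this approach. It assumes matrix[i][j] == 1 means node j is an ancestor of node i; the deque and the children dictionary are illustrative choices, not part of the original answers:

from collections import deque

def build_tree(matrix):
    n = len(matrix)
    sums = [sum(row) for row in matrix]   # Sum[i] = number of ancestors of i
    used = [False] * n
    children = {i: [] for i in range(n)}  # parent -> children (at most two)
    root = sums.index(0)                  # the root is the only node with no ancestors
    used[root] = True
    queue = deque([root])
    while queue:
        temp = queue.popleft()
        for i in range(n):
            if matrix[i][temp] == 1:
                sums[i] -= 1              # temp is now accounted for as i's ancestor
                if sums[i] == 0 and not used[i]:
                    used[i] = True        # temp was i's last ancestor, i.e. its parent
                    children[temp].append(i)
                    queue.append(i)
    return root, children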

Related

Counting p-cousins on a directed tree

We're given a directed tree to work with. We define the concepts of p-ancestor and p-cousin as follows:
p-ancestor: A node is a 1-ancestor of another if it is its parent. It is a p-ancestor of a node if it is the parent of that node's (p-1)-ancestor.
p-cousin: A node is a p-cousin of another if they share the same p-ancestor.
For example, consider the tree below.
Node 4 has three 1-cousins, i.e. 3, 4 and 5, since they all share the common 1-ancestor, which is 1.
For a particular tree, the problem is as follows. You are given multiple pairs of (node,p) and are supposed to count (and output) the number of p-cousins of the corresponding nodes.
A slow algorithm would be to crawl up to the p-ancestor and run a BFS for each node.
What is the (asymptotically) fastest way to solve the problem?
If an off-line solution is acceptable, two depth-first searches can do the job.
Assume that we can index all of those n queries (node, p) from 0 to n - 1.
We can convert each query (node, p) into another type of query (ancestor, p) as follows:
The answer for query (node, p), where node is at level a (the distance from the root to this node is a), is the number of level-a descendants of the ancestor at level a - p. So, for each query, we can find who that ancestor is:
Pseudo code
dfs(int node, int level, int[] path, int[] ancestorForQuery, List<Query>[] data){
    path[level] = node;
    visit all child nodes (recursively);
    for(Query query : data[node])
        if(query.p <= level)
            ancestorForQuery[query.index] = path[level - query.p];
}
Now, after the first DFS, instead of the original queries, we have a new type of query (ancestor, p).
Assume that we have an array count, where count[i] stores the number of nodes at level i. For a node a at level x, we need to count its descendants at level x + p; the result for this query is:
query result = count[x + p] after we visit a - count[x + p] before we visit a
Pseudo code
dfs2(int node, int level, int[] result, int[] count, List<TransformedQuery>[] data){
    count[level]++;
    for(TransformedQuery query : data[node]){
        result[query.index] -= count[level + query.p];
    }
    visit all child nodes (recursively);
    for(TransformedQuery query : data[node]){
        result[query.index] += count[level + query.p];
    }
}
Result of each query is stored in result array.
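For concreteness, here is a Python sketch of the whole offline procedure. The representation (children adjacency lists over nodes 0..n-1, defaultdicts, recursion) is an assumption for illustration:

from collections import defaultdict

def count_p_cousins(root, children, queries):
    # queries: list of (v, p); returns, for each query, the number of nodes
    # at v's level sharing v's p-ancestor (v itself included, as in the example).
    n_q = len(queries)
    by_node = defaultdict(list)
    for idx, (v, p) in enumerate(queries):
        by_node[v].append((idx, p))
    anc_of = [None] * n_q   # the ancestor in the transformed (ancestor, p) query
    path = []
    def dfs1(v):            # first DFS: find each query's p-ancestor
        path.append(v)
        for idx, p in by_node[v]:
            if p < len(path):          # the p-ancestor exists
                anc_of[idx] = path[-p - 1]
        for c in children[v]:
            dfs1(c)
        path.pop()
    dfs1(root)
    by_anc = defaultdict(list)
    for idx, (v, p) in enumerate(queries):
        if anc_of[idx] is not None:
            by_anc[anc_of[idx]].append((idx, p))
    result = [0] * n_q
    count = defaultdict(int)           # count[d] = nodes seen so far at depth d
    def dfs2(v, depth):                # second DFS: subtract before, add after
        count[depth] += 1
        for idx, p in by_anc[v]:
            result[idx] -= count[depth + p]
        for c in children[v]:
            dfs2(c, depth + 1)
        for idx, p in by_anc[v]:
            result[idx] += count[depth + p]
    dfs2(root, 0)
    return result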
If p is fixed, I suggest the following algorithm:
Let's say that count[v] is the number of p-children of v. Initially all count[v] are set to 0. And pparent[v] is the p-parent of v.
Let's now run a dfs on the tree and keep the stack of visited nodes, i.e. when we visit some v, we put it into the stack. Once we leave v, we pop.
Suppose we've come to some node v in our dfs. Let's do count[stack[size - p]]++, indicating that v is a p-child of stack[size - p]. Also pparent[v] = stack[size - p].
Once your dfs is finished, you can calculate the desired number of p-cousins of v like this:
count[pparent[v]]
The complexity of this is O(n) for the dfs plus O(1) per query, i.e. O(n + m) overall for m queries.
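A minimal Python sketch of this fixed-p approach (children adjacency lists over nodes 0..n-1 are an assumed representation):

def preprocess_fixed_p(root, children, p, n):
    count = [0] * n      # count[u] = number of p-children (p-descendants) of u
    pparent = [-1] * n   # pparent[v] = p-ancestor of v, or -1 if none
    stack = []
    def dfs(v):
        stack.append(v)
        if len(stack) > p:
            anc = stack[-p - 1]   # the node p levels above v
            count[anc] += 1
            pparent[v] = anc
        for c in children[v]:
            dfs(c)
        stack.pop()
    dfs(root)
    return count, pparent

# A query for node v is then answered in O(1):
# count[pparent[v]] if pparent[v] != -1 else 0
# (this total includes v itself, matching the question's example).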
First I'll describe a fairly simple way to answer each query in O(p) time that uses O(n) preprocessing time and space, and then mention a way that query times can be sped up to O(log p) time for a factor of just O(log n) extra preprocessing time and space.
O(p)-time query algorithm
The basic idea is that if we write out the sequence of nodes visited during a DFS traversal of the tree in such a way that every node is written out at a vertical position corresponding to its level in the tree, then the set of p-cousins of a node form a horizontal interval in this diagram. Note that this "writing out" looks very much like a typical tree diagram, except without lines connecting nodes, and (if a postorder traversal is used; preorder would be just as good) parent nodes always appearing to the right of their children. So given a query (v, p), what we will do is essentially:
Find the p-th ancestor u of the given node v. Naively this takes O(p) time.
Find the p-th left-descendant l of u -- that is, the node you reach after repeating the process of visiting the leftmost child of the current node, p times. Naively this takes O(p) time.
Find the p-th right-descendant r of u (defined similarly). Naively this takes O(p) time.
Return the value x[r] - x[l] + 1, where x[i] is a precalculated value that records the number of nodes in the sequence described above that are at the same level as, and at or to the left of, node i. This takes constant time.
The preprocessing step is where we calculate x[i], for each 1 <= i <= n. This is accomplished by performing a DFS that builds up a second array y[] that records the number y[d] of nodes visited so far at depth d. Specifically, y[d] is initially 0 for each d; during the DFS, when we visit a node v at depth d, we simply increment y[d] and then set x[v] = y[d].
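A short Python sketch of this preprocessing, assuming children adjacency lists visited in left-to-right order (names are illustrative):

def preprocess_x(root, children, n):
    x = [0] * n
    y = {}                        # y[d] = nodes visited so far at depth d
    def dfs(v, d):
        for c in children[v]:     # children in left-to-right order
            dfs(c, d + 1)
        y[d] = y.get(d, 0) + 1    # postorder: count v after its children
        x[v] = y[d]
    dfs(root, 0)
    return x

# A query (v, p) walks up to the p-th ancestor u, down-left p times to l,
# down-right p times to r, and returns x[r] - x[l] + 1.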
O(log p)-time query algorithm
The above algorithm should already be fast enough if the tree is fairly balanced -- but in the worst case, when each node has just a single child, O(p) = O(n). Notice that it is the navigating up and down the tree in the first 3 of the above 4 steps that forces O(p) time -- the last step takes constant time.
To fix this, we can add some extra pointers to make navigating up and down the tree faster. A simple and flexible way uses "pointer doubling": For each node v, we will store log2(depth(v)) pointers to successively higher ancestors. To populate these pointers, we perform log2(maxDepth) DFS iterations, where on the i-th iteration we set each node v's i-th ancestor pointer to its (i-1)-th ancestor's (i-1)-th ancestor: this takes just two pointer lookups per node per DFS. With these pointers, moving any distance p up the tree always takes at most log(p) jumps, because the distance can be reduced by at least half on each jump. The exact same procedure can be used to populate corresponding lists of pointers for "left-descendants" and "right-descendants" to speed up steps 2 and 3, respectively, to O(log p) time.
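A sketch of the pointer-doubling table for the upward direction, in Python; representing parents as an array with parent[root] == root is an assumption for illustration:

def build_up_table(parent):
    n = len(parent)
    LOG = max(1, (n - 1).bit_length())
    up = [parent[:]]                 # up[k][v] = v's 2^k-th ancestor
    for k in range(1, LOG):
        prev = up[-1]
        up.append([prev[prev[v]] for v in range(n)])
    return up

def pth_ancestor(up, v, p):
    k = 0
    while p:                         # follow the binary representation of p
        if p & 1:
            v = up[k][v]
        p >>= 1
        k += 1
    return v

The same tables, built over leftmost-child and rightmost-child pointers instead of parent pointers, speed up steps 2 and 3.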

Select a Node at Random from Unbalanced Binary Tree

One of my friends had the following interview question, and neither of us are quite sure what the correct answer is. Does anyone have an idea about how to approach this?
Given an unbalanced binary tree, describe an algorithm to select a node at random such that each node has an equal probability of being selected.
You can do it with a single pass of the tree. The algorithm is the same as with a list.
When you see the first item in the tree, you set it as the selected item.
When you see the second item, you pick a random number in the range (0,2]. If it's 1, then the new item becomes the selected item. Otherwise you skip that item.
For each node you see, you increase the count, and with probability 1/count, you select it. So at the 101st node, you pick a random number in the range (0,101]. If it's 100, that node is the new selected node.
When you're done traversing the tree, return the selected node. The operation is O(n) in time, with n being the number of nodes in the tree, and O(1) in space. No preprocessing required.
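Here is a minimal Python sketch of this reservoir-sampling traversal, assuming nodes with left/right fields (the iterative DFS is an illustrative choice):

import random

def random_node(root):
    count = 0
    selected = None
    stack = [root]
    while stack:
        node = stack.pop()
        count += 1
        if random.randrange(count) == 0:   # keep with probability 1/count
            selected = node
        if node.left is not None:
            stack.append(node.left)
        if node.right is not None:
            stack.append(node.right)
    return selected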
We can do this recursively in one pass by selecting the random node while traversing the tree and counting the number of nodes in the left and right subtrees. At every step in the recursion, we return the number of nodes rooted at the current node and a node selected uniformly at random from the subtree rooted there.
Let's say the number of nodes in the left subtree is n_l and the number in the right subtree is n_r. Also, let the randomly selected nodes from the left and right subtrees be R_l and R_r respectively. Then select a uniform random number in [0,1] and pick R_l with probability n_l/(n_l+n_r+1), the root with probability 1/(n_l+n_r+1), or R_r with probability n_r/(n_l+n_r+1), as sketched below.
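A sketch of that recursion in Python (randrange over the subtree size replaces the random number in [0,1]; the probabilities are the same):

import random

def pick(root):
    # returns (subtree size, uniformly random node of the subtree)
    if root is None:
        return 0, None
    n_l, r_l = pick(root.left)
    n_r, r_r = pick(root.right)
    n = n_l + n_r + 1
    r = random.randrange(n)        # uniform in {0, ..., n-1}
    if r < n_l:
        return n, r_l
    if r == n_l:
        return n, root
    return n, r_r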
Note
If you're only doing a single query, and you don't already have a count at each node, the best time complexity you can get is O(n), so the depth-first-search approach would be the best one.
For repeated queries, the best option depends on the given constraints
(the fastest per-query approach is using a supplementary array).
Supplementary array
O(n) space, O(n) preprocessing, O(1) insert / remove, O(1) query
Have a supplementary array containing all the nodes.
Also have each node store its own index, so you can remove it from the array in O(1): swap it with the last element in the array, update the index of the node that was at the last position appropriately, and decrease the size of the array (removing the last element).
To get a random node, simply generate a random index in the array.
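A sketch of the supplementary-array bookkeeping in Python (the NodePool name and index field are illustrative):

import random

class NodePool:
    def __init__(self):
        self.nodes = []
    def insert(self, node):
        node.index = len(self.nodes)
        self.nodes.append(node)
    def remove(self, node):
        last = self.nodes[-1]
        self.nodes[node.index] = last   # swap with the last element
        last.index = node.index
        self.nodes.pop()                # drop the (now duplicated) tail
    def random_node(self):
        return random.choice(self.nodes)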
Per-node count
Modified tree (O(n) space), N/A (or O(n)) preprocessing, O(depth) insert / remove, O(depth) query
Let each node contain the number of elements in its subtree.
When generating a random node, go left or right based on the value of a random number generated and the counts of the left or right subtrees.
// note that subtreeCount = leftCount + rightCount + 1
val = getRandomNumber(subtreeCount)
if val = 0
    return this node
else if val <= leftCount
    go left
else
    go right
Depth-first-search
O(depth) space, O(1) preprocessing, O(1) insert / remove, O(n) query
Count the number of nodes in the tree (if you don't already have the count).
Generate a random number between 0 and the number of nodes.
Simply do a depth-first-search through the tree and stop when you've processed the desired number of nodes.
This presumes a node doesn't have a parent member - having this will make this O(1) space.
I implemented @jim-mischel's algorithm in C# and it works great:
private void SelectRandomNode(ref int count, Node curNode, ref Node selectedNode)
{
    foreach( var childNode in curNode.Children )
    {
        ++count;
        if( random.Next(count) == count - 1 )
            selectedNode = childNode;
        SelectRandomNode(ref count, childNode, ref selectedNode);
    }
}
Call it like this:
var count = 1;
Node selected = root;
SelectRandomNode(ref count, root, ref selected);

Find whether given sum exists over a path in a BST

The question is to find whether a given sum exists over any path in a BST. The question is damn easy if a path means root to leaf, or easy if the path means a portion of a path from root to leaf that may not include the root or the leaf. But it becomes difficult here, because a path may span both the left and right child of a node. For example, in the given figure, a sum of 132 exists over the circled path. How can I find the existence of such a path? Using a hash to store all possible sums under a node is frowned upon!
You can certainly generate all possible paths, summing incrementally as you go. The fact that the tree is a BST might let you save time by bounding out certain sums, though I'm not sure that will give an asymptotic speed increase. The problem is that a sum formed using the left child of a given node will not necessarily be less than a sum formed using the right child, since the path for the former sum could contain many more nodes. The following algorithm will work for all trees, not just BSTs.
To generate all possible paths, notice that the topmost point of a path is special: it's the only point in a path which is allowed (though not required) to have both children contained in the path. Every path contains a unique topmost point. Therefore the outer layer of recursion should be to visit every tree node, and to generate all paths that have that node as the topmost point.
// Report whether any path whose topmost node is t sums to target.
// Recurses to examine every node under t.
EnumerateTopmost(Tree t, int target) {
    // Get a list of sums for paths containing the left child.
    // Include a 0 at the start to account for a "zero-length path" that
    // does not contain any children. This will be in increasing order.
    a = append(0, EnumerateSums(t.left))

    // Do the same for paths containing the right child. This needs to
    // be sorted in decreasing order.
    b = reverse(append(0, EnumerateSums(t.right)))

    // "List match" to detect any pair of sums that works.
    // This is a linear-time algorithm that takes two sorted lists --
    // one increasing, the other decreasing -- and detects whether there is
    // any pair of elements (one from the first list, the other from the
    // second) that sum to a given value. Starting at the beginning of
    // each list, we compute the current sum, and proceed to strike out any
    // elements that we know cannot be part of a satisfying pair.
    // If the sum of a[i] and b[j] is too small, then we know that a[i]
    // cannot be part of any satisfying pair, since all remaining elements
    // from b that it could be added to are at least as small as b[j], so we
    // can strike it out (which we do by advancing i by 1). Similarly if
    // the sum of a[i] and b[j] is too big, then we know that b[j] cannot
    // be part of any satisfying pair, since all remaining elements from a
    // that b[j] could be added to are at least as big as a[i], so we can
    // strike it out (which we do by advancing j by 1). If we get to the
    // end of either list without finding the right sum, there can be
    // no satisfying pair.
    i = 0
    j = 0
    while (i < length(a) and j < length(b)) {
        if (a[i] + b[j] + t.value < target) {
            i = i + 1
        } else if (a[i] + b[j] + t.value > target) {
            j = j + 1
        } else {
            print "Found! Topmost node=", t
            return
        }
    }

    // Recurse to examine the rest of the tree.
    if (t.left != NULL) EnumerateTopmost(t.left, target)
    if (t.right != NULL) EnumerateTopmost(t.right, target)
}
// Return a list of all sums that contain t and at most one of its children,
// in increasing order.
EnumerateSums(Tree t) {
    if (t == NULL) {
        // We have been called with the "child" of a leaf node.
        return [] // Empty list
    } else {
        // Include a 0 in one of the child sum lists to stand for
        // "just node t" (arbitrarily picking left here).
        // Note that even if t is a leaf node, we still call ourselves on
        // its "children" here -- in C/C++, a special "NULL" value represents
        // these nonexistent children.
        a = append(0, EnumerateSums(t.left))
        b = EnumerateSums(t.right)
        Add t.value to each element in a
        Add t.value to each element in b
        // "Ordinary" list merge that simply combines two sorted lists
        // to produce a new sorted list, in linear time.
        c = ListMerge(a, b)
        return c
    }
}
The above pseudocode only reports the topmost node in the path. The entire path can be reconstructed by having EnumerateSums() return a list of pairs (sum, goesLeft) instead of a plain list of sums, where goesLeft is a boolean that indicates whether the path used to generate that sum initially goes left from the parent node.
The above pseudocode calculates sum lists multiple times for each node: EnumerateSums(t) will be called once for each node above t in the tree, in addition to being called for t itself. It would be possible to make EnumerateSums() memoise the list of sums for each node so that it's not recomputed on subsequent calls, but actually this doesn't improve the asymptotics: only O(n) work is required to produce a list of n sums using the plain recursion, and changing this to O(1) doesn't change the overall time complexity, because the entire list of sums produced by any call to EnumerateSums() must in general be read by the caller anyway, and this requires O(n) time. EDIT: As pointed out by Evgeny Kluev, EnumerateSums() actually behaves like a merge sort, being O(n log n) when the tree is perfectly balanced and O(n^2) when it is a single path. So memoisation will in fact give an asymptotic performance improvement.
It is possible to get rid of the temporary lists of sums by rearranging EnumerateSums() into an iterator-like object that performs the list merge lazily, and can be queried to retrieve the next sum in increasing order. This would entail also creating an EnumerateSumsDown() that does the same thing but retrieves sums in decreasing order, and using this in place of reverse(append(0, EnumerateSums(t.right))). Doing this brings the space complexity of the algorithm down to O(n), where n is the number of nodes in the tree, since each iterator object requires constant space (pointers to left and right child iterator objects, plus a place to record the last sum) and there can be at most one per tree node.
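For reference, here is a compact Python rendering of the plain (non-lazy) version above; like the pseudocode, it assumes non-negative node values so that prepending 0 keeps the lists sorted, and heapq.merge stands in for ListMerge:

import heapq

def enumerate_sums(t):
    # sums of all paths that start at t and go down through at most one
    # child at each step (t alone included), in increasing order
    if t is None:
        return []
    a = [t.value + s for s in [0] + enumerate_sums(t.left)]
    b = [t.value + s for s in enumerate_sums(t.right)]
    return list(heapq.merge(a, b))     # linear-time merge of sorted lists

def path_sum_exists(t, target):
    if t is None:
        return False
    a = [0] + enumerate_sums(t.left)            # increasing
    b = ([0] + enumerate_sums(t.right))[::-1]   # decreasing
    i, j = 0, 0
    while i < len(a) and j < len(b):
        s = a[i] + b[j] + t.value
        if s < target:
            i += 1
        elif s > target:
            j += 1
        else:
            return True    # a path whose topmost node is t sums to target
    return path_sum_exists(t.left, target) or path_sum_exists(t.right, target)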
I would in-order traverse the left subtree and reverse-order traverse the right subtree at the same time, kind of how merge sort works. Each time, move the iterator that brings the sum closer to the target. Like merge sort, almost. It's order n.
Not the fastest, but a simple approach would be to use two nested depth-first searches.
Use normal depth-first search to get starting node. Use second, modified version of depth-first search to check sums for all paths, starting from this node.
Second depth-first search is different from normal depth-first search in two details:
It keeps current path sum. It adds value to the sum each time a new node is added to the path and removes value from the sum when some node is removed.
It traverses edges of the path from root to the starting node only in opposite direction (red edges on diagram). All other edges are traversed in proper direction, as usual (black edges on diagram). To traverse edges in opposite direction, it either uses "parent" pointers of the original BST (if there are any), or peeks into the stack of first depth-first search to obtain these "parent" pointers.
Time complexity of each DFS is O(N), so total time complexity is O(N^2). Space requirements are O(N) (space for both DFS stacks). If the original BST contains "parent" pointers, space requirements are O(1) ("parent" pointers allow traversing the tree in any direction without stacks).
Another approach is based on ideas by j_random_hacker and robert king (maintaining lists of sums, matching them, then merging them together). It processes the tree in bottom-up manner (starting from the leaves).
Use DFS to find some leaf node. Then go back and find the last branch node, that is a grand-...-grand-parent of this leaf node. This gives a chain between branch and leaf nodes. Process this chain:
match1(chain)
sum_list = sum(chain)

match1(chain):
    i = j = sum = 0
    loop:
        while (sum += chain[i]) < target:
            ++i
        while (sum -= chain[j]) > target:
            ++j
        if sum == target:
            success!

sum(chain):
    result = [0]
    sum = 0
    i = chain.length - 1
    loop:
        sum += chain[i]
        --i
        result.append(sum)
    return result
Continue DFS and search other leaf chains. When two chains, coming from the same node are found, possibly preceded by another chain (red and green chains on diagram, preceded by blue chain), process these chains:
match2(parent, sum_list1, sum_list2)
sum_list3 = merge1(parent, sum_list1, sum_list2)
if !chain3.empty:
    match1(chain3)
    match3(sum_list3, chain3)
    sum_list4 = merge2(sum_list3, chain3)

match2(parent, sum_list1, sum_list2):
    i = 0
    j = sum_list2.length - 1
    sum = target - parent.value
    loop:
        while sum > sum_list1[i] + sum_list2[j]:
            ++i
        while sum < sum_list1[i] + sum_list2[j]:
            --j
        if sum == sum_list1[i] + sum_list2[j]:
            success!

merge1(parent, sum_list1, sum_list2):
    result = [0, parent.value]
    i = j = 1
    loop:
        if sum_list1[i] < sum_list2[j]:
            result.append(parent.value + sum_list1[i])
            ++i
        else:
            result.append(parent.value + sum_list2[j])
            ++j
    return result

match3(sum_list3, chain3):
    i = sum = 0
    j = sum_list3.length - 1
    loop:
        sum += chain3[i++]
        while sum_list3[j] + sum > target:
            --j
        if sum_list3[j] + sum == target:
            success!

merge2(sum_list3, chain3):
    result = [0]
    sum = 0
    i = chain3.length - 1
    loop:
        sum += chain3[i--]
        result.append(sum)
    result.append(sum_list3[1...] + sum)
Do the same wherever any two lists of sums or a chain and a list of sums are descendants of the same node. This process may be continued until a single list of sums, belonging to root node, remains.
Are there any complexity restrictions?
As you stated: "easy if a path means root to leaf, or easy if the path means a portion of a path from root to leaf that may not include the root or the leaf".
You can reduce the problem to this statement by setting the root each time to a different node and doing the search n times.
That would be a straightforward approach, not sure if optimal.
Edit: if the tree is unidirectional, something of this kind might work (pseudocode):
findSum(tree, sum)
    if (isLeaf(tree))
        return (sum == tree->data)
    for (i = 0 to sum)
        isfound |= findSum(leftSubTree, i) && findSum(rightSubTree, sum - i)
    return isfound;
Probably lots of mistakes here, but hopefully it clarifies the idea.

Find median value from a growing set

I came across an interesting algorithm question in an interview. I gave my answer but not sure whether there is any better idea. So I welcome everyone to write something about his/her ideas.
You have an empty set. Now elements are put into the set one by one. We assume all the elements are integers and they are distinct (according to the definition of set, we don't consider two elements with the same value).
Every time a new element is added to the set, the set's median value is asked. The median value is defined the same as in math: the middle element of a sorted list. Here, specifically, when the size of the set is even, say size = 2*x, the median is the x-th smallest element of the set.
An example:
Start with an empty set,
when 12 is added, the median is 12,
when 7 is added, the median is 7,
when 8 is added, the median is 8,
when 11 is added, the median is 8,
when 5 is added, the median is 8,
when 16 is added, the median is 8,
...
Notice that, first, elements are added to set one by one and second, we don't know the elements going to be added.
My answer.
Since it is a question about finding the median, sorting is needed. The easiest solution is to use a normal array and keep it sorted. When a new element comes, use binary search to find the position for the element (O(log n)) and insert it there. Since it is a normal array, shifting the rest of the array is needed, whose time complexity is O(n). Once the element is inserted, we can get the median immediately, in constant time.
The worst-case time complexity is: O(log n) + O(n) + O(1) = O(n).
Another solution is to use a linked list. The reason for using a linked list is to remove the need to shift the array. But finding the location for the new element requires a linear search. Adding the element takes constant time, and then we need to find the median by walking through half of the list, which always takes n/2 time.
The worst-case time complexity is: O(n) + O(1) + O(n/2) = O(n).
The third solution is to use a binary search tree. Using a tree, we avoid shifting the array. But using a plain binary search tree to find the median is not very attractive. So I change the binary search tree so that the left and right subtrees are always balanced: at any time, either they have the same number of nodes, or the right subtree has one node more than the left. In other words, it is ensured that at any time the root element is the median. Of course this requires changes in the way the tree is built. The technical detail is similar to rotating a red-black tree.
If the tree is maintained properly, it is ensured that the WORST time complexity is O(n).
So the three algorithms are all linear in the size of the set. If no sub-linear algorithm exists, the three algorithms can be thought of as optimal. Since they don't differ from each other much, the best is the easiest to implement, which is the second one, using a linked list.
So what I really wonder is, will there be a sub-linear algorithm for this problem and if so what will it be like. Any ideas guys?
Steve.
Your complexity analysis is confusing. Let's say that n items total are added; we want to output the stream of n medians (where the ith in the stream is the median of the first i items) efficiently.
I believe this can be done in O(n*lg n) time using two priority queues (e.g. binary or Fibonacci heaps): one queue for the items below the current median (with the largest of them accessible at its root), and the other for items above it (with the smallest of them accessible at its root). Note that in Fibonacci (and other) heaps, insertion is O(1) amortized; it's only popping an element that's O(lg n).
This would be called an "online median selection" algorithm, although Wikipedia only talks about online min/max selection. Here's an approximate algorithm, and a lower bound on deterministic and approximate online median selection (a lower bound means no faster algorithm is possible!)
If there are a small number of possible values compared to n, you can probably break the comparison-based lower bound just like you can for sorting.
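A minimal Python sketch of the two-heap scheme (heapq only provides min-heaps, so the lower half is stored negated; the class name is illustrative):

import heapq

class RunningMedian:
    def __init__(self):
        self.low = []    # max-heap via negation: the smaller half
        self.high = []   # min-heap: the larger half
    def add(self, x):
        heapq.heappush(self.low, -x)
        heapq.heappush(self.high, -heapq.heappop(self.low))
        if len(self.high) > len(self.low):
            heapq.heappush(self.low, -heapq.heappop(self.high))
    def median(self):
        return -self.low[0]   # for size 2*x this is the x-th element,
                              # matching the question's definition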
I received the same interview question and came up with the two-heap solution in wrang-wrang's post. As he says, the time per operation is O(log n) worst-case. The expected time is also O(log n) because you have to "pop an element" 1/4 of the time assuming random inputs.
I subsequently thought about it further and figured out how to get constant expected time; indeed, the expected number of comparisons per element becomes 2+o(1). You can see my writeup at http://denenberg.com/omf.pdf .
BTW, the solutions discussed here all require space O(n), since you must save all the elements. A completely different approach, requiring only O(log n) space, gives you an approximation to the median (not the exact median). Sorry I can't post a link (I'm limited to one link per post) but my paper has pointers.
Although wrang-wrang already answered, I wish to describe a modification of your binary search tree method that is sub-linear.
We use a binary search tree that is balanced (AVL/red-black/etc.), but not super-balanced like you described. So adding an item is O(log n).
One modification to the tree: for every node we also store the number of nodes in its subtree. This doesn't change the complexity. (For a leaf this count would be 1, for a node with two leaf children this would be 3, etc)
We can now access the Kth smallest element in O(log n) using these counts:
def get_kth_item(subtree, k):
    left_size = 0 if subtree.left is None else subtree.left.size
    if k < left_size:
        return get_kth_item(subtree.left, k)
    elif k == left_size:
        return subtree.value
    else: # k > left_size
        return get_kth_item(subtree.right, k - 1 - left_size)
A median is a special case of Kth smallest element (given that you know the size of the set).
So all in all this is another O(log n) solution.
We can define a min-heap and a max-heap to store the numbers. Additionally, we define a class DynamicArray for the number set, with two functions: Insert and GetMedian. Time to insert a new number is O(lg n), while time to get the median is O(1).
This solution is implemented in C++ as follows (assuming the standard headers <vector>, <algorithm>, <functional>, <stdexcept> and using namespace std):
template<typename T> class DynamicArray
{
public:
    void Insert(T num)
    {
        if(((minHeap.size() + maxHeap.size()) & 1) == 0)
        {
            if(maxHeap.size() > 0 && num < maxHeap[0])
            {
                maxHeap.push_back(num);
                push_heap(maxHeap.begin(), maxHeap.end(), less<T>());
                num = maxHeap[0];
                pop_heap(maxHeap.begin(), maxHeap.end(), less<T>());
                maxHeap.pop_back();
            }
            minHeap.push_back(num);
            push_heap(minHeap.begin(), minHeap.end(), greater<T>());
        }
        else
        {
            if(minHeap.size() > 0 && minHeap[0] < num)
            {
                minHeap.push_back(num);
                push_heap(minHeap.begin(), minHeap.end(), greater<T>());
                num = minHeap[0];
                pop_heap(minHeap.begin(), minHeap.end(), greater<T>());
                minHeap.pop_back();
            }
            maxHeap.push_back(num);
            push_heap(maxHeap.begin(), maxHeap.end(), less<T>());
        }
    }

    T GetMedian()
    {
        int size = minHeap.size() + maxHeap.size();
        if(size == 0)
            throw runtime_error("No numbers are available");
        T median = 0;
        if((size & 1) == 1)
            median = minHeap[0];
        else
            median = (minHeap[0] + maxHeap[0]) / 2;
        return median;
    }

private:
    vector<T> minHeap;
    vector<T> maxHeap;
};
For more detailed analysis, please refer to my blog: http://codercareer.blogspot.com/2012/01/no-30-median-in-stream.html.
1) As with the previous suggestions, keep two heaps and cache their respective sizes. The left heap keeps values below the median, the right heap keeps values above the median. If you simply negate the values in the right heap the smallest value will be at the root so there is no need to create a special data structure.
2) When you add a new number, you determine the new median from the size of your two heaps, the current median, and the two roots of the L&R heaps, which just takes constant time.
3) Call a private threaded method to perform the actual work of the insert and update, but return immediately with the new median value. You only need to block until the heap roots are updated. Then, the thread doing the insert just needs to maintain a lock on the traversing grandparent node as it traverses the tree; this will ensure that you can insert and rebalance without blocking other inserting threads working on other sub-branches.
Getting the median becomes a constant time procedure, of course now you may have to wait on synchronization from further adds.
Rob
A balanced tree (e.g. a red-black tree) with an augmented size field should find the median in O(lg n) time in the worst case. I think it is in Chapter 14 of the classic algorithms textbook (CLRS).
To keep the explanation brief, you can efficiently augment a BST to select a key of a specified rank in O(h) by having each node store the number of nodes in its left subtree. If you can guarantee that the tree is balanced, you can reduce this to O(log(n)). Consider using an AVL which is height-balanced (or red-black tree which is roughly balanced), then you can select any key in O(log(n)). When you insert or delete a node into the AVL you can increment or decrement a variable that keeps track of the total number of nodes in the tree to determine the rank of the median which you can then select in O(log(n)).
In order to find the median in linear time you can try this (it just came to my mind). You need to store some values every time you add a number to your set, and you won't need sorting. Here it goes.
typedef struct
{
    int number;
    int lesser;
    int greater;
} record;
int median(record numbers[], int count, int n)
{
    int i;
    int m = VERY_BIG_NUMBER;
    int a, b;
    /* the new number goes at index count, with fresh counters */
    numbers[count].number = n;
    numbers[count].lesser = 0;
    numbers[count].greater = 0;
    for (i = 0; i < count; i++)
    {
        if (n < numbers[i].number)
        {
            numbers[i].lesser++;
            numbers[count].greater++;
        }
        else
        {
            numbers[i].greater++;
            numbers[count].lesser++;
        }
    }
    for (i = 0; i < count + 1; i++)
        if (numbers[i].greater - numbers[i].lesser == 0)
            m = numbers[i].number;
    if (m == VERY_BIG_NUMBER)
    {
        for (i = 0; i < count + 1; i++)
        {
            if (numbers[i].greater - numbers[i].lesser == -1)
                a = numbers[i].number;
            if (numbers[i].greater - numbers[i].lesser == 1)
                b = numbers[i].number;
        }
        m = (a + b) / 2;
    }
    return m;
}
What this does is: each time you add a number to the set, you record how many of the existing numbers are lesser than it and how many are greater. So, if a number has equal "lesser than" and "greater than" counts, it is in the very middle of the set, without having to sort it. In the case that you have an even amount of numbers you have two candidates for the median, so you just return the mean of those two. BTW, this is C code, I hope this helps.

Finding last element of a binary heap

quoting Wikipedia:
It is perfectly acceptable to use a traditional binary tree data structure to implement a binary heap. There is an issue with finding the adjacent element on the last level on the binary heap when adding an element which can be resolved algorithmically...
Any ideas on how such an algorithm might work?
I was not able to find any information about this issue, for most binary heaps are implemented using arrays.
Any help appreciated.
Recently, I have registered an OpenID account and am not able to edit my initial post nor comment answers. That's why I am responding via this answer. Sorry for this.
quoting Mitch Wheat:
@Yse: is your question "How do I find the last element of a binary heap"?
Yes, it is.
Or to be more precise, my question is: "How do I find the last element of a non-array-based binary heap?".
quoting Suppressingfire:
Is there some context in which you're asking this question? (i.e., is there some concrete problem you're trying to solve?)
As stated above, I would like to know a good way to "find the last element of a non-array-based binary heap" which is necessary for insertion and deletion of nodes.
quoting Roy:
It seems most understandable to me to just use a normal binary tree structure (using a pRoot and Node defined as [data, pLeftChild, pRightChild]) and add two additional pointers (pInsertionNode and pLastNode). pInsertionNode and pLastNode will both be updated during the insertion and deletion subroutines to keep them current when the data within the structure changes. This gives O(1) access to both insertion point and last node of the structure.
Yes, this should work. If I am not mistaken, it could be a little bit tricky to find the insertion node and the last node when their locations change to another subtree due to a deletion/insertion. But I'll give this a try.
quoting Zach Scrivena:
How about performing a depth-first
search...
Yes, this would be a good approach. I'll try that out, too.
Still I am wondering if there is a way to "calculate" the locations of the last node and the insertion point. The height of a binary heap with N nodes can be calculated by taking the log (of base 2) of the smallest power of two that is larger than N. Perhaps it is possible to calculate the number of nodes on the deepest level, too. Then it might be possible to determine how the heap has to be traversed to reach the insertion point or the node for deletion.
Basically, the statement quoted refers to the problem of resolving the location for insertion and deletion of data elements into and from the heap. In order to maintain "the shape property" of a binary heap, the lowest level of the heap must always be filled from left to right leaving no empty nodes. To maintain the average O(1) insertion and deletion times for the binary heap, you must be able to determine the location for the next insertion and the location of the last node on the lowest level to use for deletion of the root node, both in constant time.
For a binary heap stored in an array (with its implicit, compacted data structure as explained in the Wikipedia entry), this is easy. Just insert the newest data member at the end of the array and then "bubble" it into position (following the heap rules). Or replace the root with the last element in the array "bubbling down" for deletions. For heaps in array storage, the number of elements in the heap is an implicit pointer to where the next data element is to be inserted and where to find the last element to use for deletion.
For a binary heap stored in a tree structure, this information is not as obvious, but because it's a complete binary tree, it can be calculated. For example, in a complete binary tree with 4 elements, the point of insertion will always be the right child of the left child of the root node. The node to use for deletion will always be the left child of the left child of the root node. And for any given arbitrary tree size, the tree will always have a specific shape with well defined insertion and deletion points. Because the tree is a "complete binary tree" with a specific structure for any given size, it is very possible to calculate the location of insertion/deletion in O(1) time. However, the catch is that even when you know where it is structurally, you have no idea where the node will be in memory. So, you have to traverse the tree to get to the given node which is an O(log n) process making all inserts and deletions a minimum of O(log n), breaking the usually desired O(1) behavior. Any search ("depth-first", or some other) will be at least O(log n) as well because of the traversal issue noted and usually O(n) because of the random nature of the semi-sorted heap.
The trick is to be able to both calculate and reference those insertion/deletion points in constant time, either by augmenting the data structure ("threading" the tree, as mentioned in the Wikipedia article) or by using additional pointers.
The implementation which seems to me to be the easiest to understand, with low memory and extra coding overhead, is to just use a normal simple binary tree structure (using a pRoot and Node defined as [data, pParent, pLeftChild, pRightChild]) and add two additional pointers (pInsert and pLastNode). pInsert and pLastNode will both be updated during the insertion and deletion subroutines to keep them current when the data within the structure changes. This implementation gives O(1) access to both insertion point and last node of the structure and should allow preservation of overall O(1) behavior in both insertion and deletions. The cost of the implementation is two extra pointers and some minor extra code in the insertion/deletion subroutines (aka, minimal).
EDIT: added pseudocode for an O(1) insert()
Here is pseudo code for an insert subroutine which is O(1), on average:
define Node = [T data, *pParent, *pLeft, *pRight]

void insert(T data)
{
    do_insertion( data ); // do insertion, update count of data items in tree

    // assume: pInsert points to the node of the tree where the insertion just
    // took place (aka, either shuffle only data during the insertion or keep
    // pInsert updated during the bubble process)

    int N = this->CountOfDataItems + 1; // note: CountOfDataItems will always be > 0 (and pRoot != null) after an insertion

    p = new Node( <null>, null, null, null ); // new empty node for the next insertion

    // update pInsert (three cases to handle)
    if ( int(log2(N)) == log2(N) )
    {
        // #1 - N is an exact power of two: O(log2(N))
        // tree is currently a full complete binary tree ("perfect");
        // must start a new lower level: traverse from pRoot down through
        // each pLeft until an empty pLeft is found for the insertion
        pInsert = pRoot;
        while (pInsert->pLeft != null) { pInsert = pInsert->pLeft; } // log2(N) iterations
        p->pParent = pInsert;
        pInsert->pLeft = p;
    }
    else if ( isEven(N) )
    {
        // #2 - N is even (and NOT a power of 2): O(1)
        p->pParent = pInsert->pParent;
        pInsert->pParent->pRight = p;
    }
    else
    {
        // #3 - N is odd: O(1)
        p->pParent = pInsert->pParent->pParent->pRight;
        pInsert->pParent->pParent->pRight->pLeft = p;
    }
    pInsert = p;

    // update pLastNode
    // ... [similar process]
}
So, insert(T) is O(1) on average: exactly O(1) in all cases except when the tree must be increased by one level, which is O(log N) and happens only when N reaches a power of two (assuming no deletions). The addition of another pointer (pLeftmostLeaf) could make insert() O(1) for all cases and avoids the possible pathologic case of alternating insertion & deletion in a full complete binary tree. (Adding pLeftmost is left as an exercise [it's fairly easy].)
My first time participating in Stack Overflow.
Yes, the above answer by Zach Scrivena (god, I don't know how to properly refer to other people, sorry) is right. What I want to add is a simplified way if we are given the count of nodes.
The basic idea is:
Given the count N of nodes in this complete binary tree, compute N % 2 and push the result onto a stack, then set N = N / 2 (integer division). Continue until N == 1, then pop the results: 1 means go right, 0 means go left. The sequence is the route from the root to the target position.
Example:
The tree now has 10 nodes; I want to insert another node at position 11. How do I route to it?
11 % 2 = 1 --> right (the quotient is 5, and push right into stack)
5 % 2 = 1 --> right (the quotient is 2, and push right into stack)
2 % 2 = 0 --> left (the quotient is 1, and push left into stack. End)
Then pop the stack: left -> right -> right. This is the path from the root.
You could use the binary representation of the size of the binary heap to find the location of the last node in O(log N). The size can be stored and incremented, which takes O(1) time. The fundamental concept behind this is the structure of the binary tree.
Suppose our heap size is 7. The binary representation of 7 is "111". Now, remember to always omit the first bit, so we are left with "11". Read from left to right: the first bit is '1', so go to the right child of the root node. Then the string left is "1"; its first bit is '1', so again go to the right child of the current node. As you no longer have bits to process, this indicates that you have reached the last node. So the process is: convert the size of the heap into bits, omit the first bit, and according to each remaining bit, go to the right child of the current node if it is '1' and to the left child if it is '0'.
As you always go to the very end of the binary tree, this operation always takes O(log N) time. This is a simple and accurate procedure to find the last node.
You may not understand it on first reading. Try working this method on paper for different sizes of binary heap; I'm sure you'll get the intuition behind it. This knowledge should be enough to solve your problem; if you want more explanation with figures, you can refer to my blog.
Hope my answer has helped you, if it did, let me know...! ☺
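Both this answer and the previous one amount to the same short routine, shown here as a Python sketch for a heap of n nodes with left/right pointers (an assumed node layout):

def last_node(root, n):
    for bit in bin(n)[3:]:   # drop the '0b' prefix and the leading 1-bit
        root = root.right if bit == '1' else root.left
    return root

# The parent of the insertion point for the (n+1)-th node is found the same
# way with bin(n + 1)[3:-1]; the final bit says whether the new node becomes
# a left ('0') or right ('1') child.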
How about performing a depth-first search, visiting the left child before the right child, to determine the height of the tree? Thereafter, the first leaf you encounter with a shorter depth, or a parent with a missing child, would indicate where you should place the new node before "bubbling up".
The depth-first search (DFS) approach above doesn't assume that you know the total number of nodes in the tree. If this information is available, then we can "zoom-in" quickly to the desired place, by making use of the properties of complete binary trees:
Let N be the total number of nodes in the tree, and H be the height of the tree.
Some values of (N,H) are (1,0), (2,1), (3,1), (4,2), ..., (7,2), (8, 3).
The general formula relating the two is H = ceil[log2(N+1)] - 1.
Now, given only N, we want to traverse from the root to the position for the new node, in the least number of steps, i.e. without any "backtracking".
We first compute the total number of nodes M in a perfect binary tree of height H = ceil[log2(N+1)] - 1, which is M = 2^(H+1) - 1.
If N == M, then our tree is perfect, and the new node should be added in a new level. This means that we can simply perform a DFS (left before right) until we hit the first leaf; the new node becomes the left child of this leaf. End of story.
However, if N < M, then there are still vacancies in the last level of our tree, and the new node should be added to the leftmost vacant spot.
The number of nodes that are already at the last level of our tree is just (N - 2^H + 1).
This means that the new node takes spot X = (N - 2^H + 2) from the left, at the last level.
Now, to get there from the root, you will need to make the correct turns (L vs R) at each level so that you end up at spot X at the last level. In practice, you would determine the turns with a little computation at each level. However, I think the following table shows the big picture and the relevant patterns without getting mired in the arithmetic (you may recognize this as a form of arithmetic coding for a uniform distribution):
0 0 0 0 0 X 0 0 <--- represents the last level in our tree, X marks the spot!
^
L L L L R R R R <--- at level 0, proceed to the R child
L L R R L L R R <--- at level 1, proceed to the L child
L R L R L R L R <--- at level 2, proceed to the R child
^ (which is the position of the new node)
this column tells us
if we should proceed to the L or R child at each level
EDIT: Added a description on how to get to the new node in the shortest number of steps assuming that we know the total number of nodes in the tree.
A solution in case you don't have a reference to the parent:
To find the right place for the next node you have 3 cases to handle:
case (1): the tree is perfect, i.e. the last level is completely filled
case (2): the tree's node count is even
case (3): the tree's node count is odd
Insert:
void Insert(Node root, Node n)
{
    Node parent = findRequiredParentToInsertNewNode(root);
    if(parent.left == null)
        parent.left = n;
    else
        parent.right = n;
}
Find the parent of the node in order to insert it
Node findRequiredParentToInsertNewNode(Node root){
    Node last = findLastNode(root);
    //Case 1
    if(2 * Math.pow(2, levelNumber) == NodeCount){
        while(root.left != null)
            root = root.left;
        return root;
    }
    //Case 2
    else if(Even(N)){
        Node n = findParentOfLastNode(root, findParentOfLastNode(root, last));
        return n.right;
    }
    //Case 3
    else {
        Node n = findParentOfLastNode(root, last);
        return n;
    }
}
To find the last node you need to perform a BFS (breadth first search) and get the last element in the queue
Node findLastNode(Node root)
{
    if (root.left == null)
        return root;
    Queue q = new Queue();
    q.enqueue(root);
    Node n = null;
    while(!q.isEmpty()){
        n = q.dequeue();
        if ( n.left != null )
            q.enqueue(n.left);
        if ( n.right != null )
            q.enqueue(n.right);
    }
    return n;
}
Find the parent of the last node, in order to set that parent's child pointer to null when the last node replaces the root during removal:
Node findParentOfLastNode(Node root, Node lastNode)
{
    if(root == null)
        return root;
    if( root.left == lastNode || root.right == lastNode )
        return root;
    Node n1 = findParentOfLastNode(root.left, lastNode);
    Node n2 = findParentOfLastNode(root.right, lastNode);
    return n1 != null ? n1 : n2;
}
I know this is an old thread but I was looking for an answer to the same question. I could not afford an O(log n) solution, as I had to find the last node thousands of times in a few seconds. I did have an O(log n) algorithm, but my program was crawling because of the number of times it performed this operation. So after much thought, I did finally find a fix for this. Not sure if anybody thinks this is interesting.
This solution is O(1) for search. For insertion it is definitely less than O(log n), although I cannot say it is O(1).
Just wanted to add that if there is interest, I can provide my solution as well.
The solution is to add the nodes in the binary heap to a queue. Every queue node has front and back pointers.We keep adding nodes to the end of this queue from left to right until we reach the last node in the binary heap. At this point, the last node in the binary heap will be in the rear of the queue.
Every time we need to find the last node, we dequeue from the rear, and the second-to-last node now becomes the last node in the tree.
When we want to insert, we search backwards from the rear for the first node where we can insert and put it there. It is not exactly O(1) but reduces the running time dramatically.
