What will this B-tree look like?

The B-tree is of order 4, meaning that a node can hold 4 child pointers and 3 keys.
The following is inserted: A G I Y
Since they can't all fit in one node, I know that the node will split, so there will be a root node with 2 child nodes after these insertions, but I don't know exactly what they'll look like.

A
A is inserted
AG
G is inserted
AGI
I is inserted
  G
 / \
A   I
While inserting Y the node is full, so it splits into 2 nodes and the middle key, G, is passed up.
  G
 / \
A   IY
Y is inserted
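If you want to verify this mechanically, here is a minimal Python sketch (my own, assuming the split-the-full-node-then-insert convention used in the trace above) that models just this single leaf split for an order-4 node with at most 3 keys:

def insert_into_leaf(keys, new_key, max_keys=3):
    # Sketch only: a single sorted leaf of an order-4 B-tree (max 3 keys).
    # Returns (keys, None) if the key fits, or ((left, right), middle) after a split.
    if len(keys) < max_keys:
        return sorted(keys + [new_key]), None
    mid = len(keys) // 2                 # index of the middle key (G for A G I)
    middle = keys[mid]                   # this key is passed up to the parent
    left, right = keys[:mid], keys[mid + 1:]
    target = left if new_key < middle else right
    target.append(new_key)
    target.sort()
    return (left, right), middle

print(insert_into_leaf(["A", "G", "I"], "Y"))
# ((['A'], ['I', 'Y']), 'G')  -> G moves up; A and I,Y become the two children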

Here's an animation of the operations:
http://ysangkok.github.io/js-clrs-btree/btree.html#{"actions":[["initTree",{"keys":[]},2],["insert","A"],["insert","G"],["insert","I"],["insert","Y"]]}
The second parameter to "initTree" is the order, but using another definition: the maximum number of keys in this program is 2*order-1. So I set the order to 2 and it matches your example.

Related

Generate all the leaf-to-leaf paths in an n-ary tree

Given an N-ary tree, I have to generate all the leaf-to-leaf paths. The path should also denote the direction. As an example:
Tree:
1
/ \
2 6
/ \
3 4
/
5
Paths:
5 UP 3 UP 2 DOWN 4
4 UP 2 UP 1 DOWN 6
5 UP 3 UP 2 UP 1 DOWN 6
These paths can be in any order, but all paths need to be generated.
I kind of see the pattern:
it looks like I have to do an in-order traversal and
need to save what I have seen so far.
However, I can't really come up with an actual working algorithm.
Can anyone nudge me toward the correct algorithm?
I am not looking for the actual implementation; just the pseudocode and the conceptual idea would be much appreciated.
The first thing I would do is to perform an in-order traversal. As a result, we will accumulate all the leaves in order from the leftmost to the rightmost node (in your case this would be [5,4,6]).
Along the way, I would also record the mapping between nodes and their parents so that we can perform DFS later. We can keep this mapping in a HashMap (or its analogue). Apart from this, we will need the mapping between nodes and their priorities, which we can compute from the result of the in-order traversal. In your example the in-order would be [5,3,2,4,1,6] and the list of priorities would be [0,1,2,3,4,5] respectively.
Here I assume that our node looks like this (we may not have the mapping node -> parent a priori):
class TreeNode {
    int val;
    TreeNode[] nodes;
    TreeNode(int x) {
        val = x;
    }
}
If we have n leaves, then we need to find n * (n - 1) / 2 paths. Obviously, if we have managed to find a path from leaf A to leaf B, then we can easily calculate the path from B to A (by transforming UP -> DOWN and vice versa).
Then we start traversing over the array of leaves we computed earlier. For each leaf in the array, we should be looking for paths to leaves which are situated to the right of the current one (since we have already found the paths from the leftmost nodes to the current leaf).
To perform the DFS, we should be going upwards, and for each encountered node check whether we can go to its children. We should NOT go to a child whose priority is less than the priority of the current leaf (doing so would lead us to paths we already have). In addition to this, we should not visit nodes we have already visited along the way.
As we are performing DFS from some node, we can maintain a structure to keep the nodes we have come across so far (for instance, a StringBuilder if you program in Java). In our case, if we have reached leaf 4 from leaf 5, we have accumulated the path 5 UP 3 UP 2 DOWN 4. Since we have reached a leaf, we can discard the last visited node and proceed with the DFS and the path 5 UP 3 UP 2.
There might be a more advanced technique for solving this problem, but I think it is a good starting point. I hope this approach will help you out.
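If it helps, here is a rough Python sketch of the approach described above; the TreeNode class, the in-order convention for an n-ary node (first child, then the node, then the remaining children), and the exact priority check are my assumptions, so treat it as a sketch rather than a definitive implementation:

class TreeNode:
    def __init__(self, val, children=None):
        self.val = val
        self.children = children or []

def in_order(node, out):
    # first child, then the node itself, then the remaining children
    if node.children:
        in_order(node.children[0], out)
    out.append(node)
    for child in node.children[1:]:
        in_order(child, out)

def leaf_to_leaf_paths(root):
    order = []
    in_order(root, order)
    priority = {node: i for i, node in enumerate(order)}   # position in in-order
    parent = {}
    stack = [root]
    while stack:
        node = stack.pop()
        for child in node.children:
            parent[child] = node
            stack.append(child)
    paths = []

    def descend(node, path):
        # walk down to every leaf of this subtree, recording DOWN moves
        if not node.children:
            paths.append(" ".join(map(str, path)))
            return
        for child in node.children:
            descend(child, path + ["DOWN", child.val])

    for leaf in (n for n in order if not n.children):
        path, child, node = [leaf.val], leaf, parent.get(leaf)
        while node is not None:
            path = path + ["UP", node.val]
            for sibling in node.children:
                # skip the branch we came from and anything left of the current leaf
                if sibling is not child and priority[sibling] > priority[leaf]:
                    descend(sibling, path + ["DOWN", sibling.val])
            child, node = node, parent.get(node)
    return paths

# The tree from the question: 1 -> (2, 6), 2 -> (3, 4), 3 -> (5)
five, four = TreeNode(5), TreeNode(4)
three = TreeNode(3, [five])
two = TreeNode(2, [three, four])
six = TreeNode(6)
one = TreeNode(1, [two, six])
for p in leaf_to_leaf_paths(one):
    print(p)
# 5 UP 3 UP 2 DOWN 4
# 5 UP 3 UP 2 UP 1 DOWN 6
# 4 UP 2 UP 1 DOWN 6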
I didn't manage to create a solution without programming it out in Python. UNDER THE ASSUMPTION that I didn't overlook a corner case, my attempt goes like this:
In a depth-first search every node receives the down-paths and either emits them (plus itself) if the node is a leaf, or passes the down-paths on to its children. The only thing to consider is that a leaf node is also the starting point of an up-path, so up-paths are fed into the children from left to right as well as returned to the parent node.
def print_leaf2leaf(root, path_down):
    for st in path_down:
        st.append(root)
    if all([x is None for x in root.children]):
        # leaf: emit every accumulated down-path, which now ends at this leaf
        for st in path_down:
            for n in st: print(n.d, end=" ")
            print()
        path_up = [[root]]
    else:
        path_up = []
        for child in root.children:
            # hand each child the down-paths plus the up-paths of its left siblings
            path_up += child is not None and [st + [root] for st in print_leaf2leaf(child, path_down + path_up)] or []
    for st in path_down:
        st.pop()
    return path_up
class node:
    def __init__(self, d, *children):
        self.d = d
        self.children = children

##        1
##       / \
##      2   6
##     / \   \
##    3   4   7
##   /      / | \
##  5      8  9  10
five = node(5)
three = node(3, five)
four = node(4)
two = node(2, three, four)
eight = node(8)
nine = node(9)
ten = node(10)
seven = node(7, eight, nine, ten)
six = node(6, None, seven)
one = node(1, two, six)

print_leaf2leaf(one, [])

Algorithm for finding the size of the largest connected region of nodes of the same value in a tree

This is a question I got in an interview, and I'm still not fully sure how to solve it.
Let's say we have a tree of numbers, and we want to find the size of the largest connected region in the tree whose nodes have the same value. For example, in this tree
       3
      / \
     3   3
    / \ / \
   1  2 3  4
The answer is 4, because you have a region of 4 connected 3s.
I would suggest a depth-first search with a function that takes two inputs:
A target value
A start node
and returns two outputs:
the size of the region at the top of start_node's subtree whose values equal the target value
the largest size of a connected region within start_node's subtree
You can then call this function with a dummy target value (e.g. -1) and the root node and it will return the answer in the second output.
In pseudocode:
dfs(target_value, start_node):
    if start_node.value == target_value:
        total = 1
        best = 0
        for each child of start_node:
            x, m = dfs(target_value, child)
            best = max(m, best)
            total += x
        return total, best
    else:
        x, m = dfs(start_node.value, start_node)
        return 0, max(x, m)

_, ans = dfs(-1, root_node)
print ans
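For concreteness, here is a hedged Python rendering of that pseudocode; the Node class and the example tree are assumptions added for the demo:

class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def dfs(target_value, start_node):
    # Returns (size of the same-value run growing down from start_node,
    #          best region size found anywhere in start_node's subtree).
    if start_node.value == target_value:
        total, best = 1, 0
        for child in start_node.children:
            x, m = dfs(target_value, child)
            best = max(m, best)
            total += x
        return total, best
    else:
        x, m = dfs(start_node.value, start_node)
        return 0, max(x, m)

# The tree from the question; the answer should be 4.
root = Node(3, [Node(3, [Node(1), Node(2)]),
                Node(3, [Node(3), Node(4)])])
_, ans = dfs(-1, root)
print(ans)  # 4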
Associate a counter with each node to represent the largest connected region rooted at that node where all the nodes are the same value. Initialize this counter to 1 for every node.
Run DFS on the tree.
When you back up from any node to its parent, if the two nodes have the same value, add the child node's counter to the parent's counter.
When you're done, the largest counter associated with a node is your answer. You can keep track of this as you run the algorithm.
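A small Python sketch of this counter idea, reusing the Node class and the example root from the previous snippet (again, just a sketch):

def largest_region(root):
    best = 0
    def post_order(node):
        # counter = size of the same-value region hanging down from this node
        nonlocal best
        counter = 1
        for child in node.children:
            child_counter = post_order(child)
            if child.value == node.value:
                counter += child_counter
        best = max(best, counter)
        return counter
    post_order(root)
    return best

print(largest_region(root))  # 4, same tree as above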

Do we have to create a tree all the nodes of which have 3 children?

Steps to build Huffman Tree
Input is array of unique characters along with their frequency of occurrences and output is Huffman Tree.
Create a leaf node for each unique character and build a min heap of all leaf nodes (Min Heap is used as a priority queue. The value of frequency field is used to compare two nodes in min heap. Initially, the least frequent character is at root)
Extract two nodes with the minimum frequency from the min heap.
Create a new internal node with frequency equal to the sum of the two nodes frequencies. Make the first extracted node as its left child and the other extracted node as its right child. Add this node to the min heap.
Repeat steps#2 and #3 until the heap contains only one node. The remaining node is the root node and the tree is complete.
In a heap, a node can have at most 2 children, right?
So if we would like to generalize the Huffman algorithm for codewords in a ternary system (i.e. codewords using the symbols 0, 1 and 2), what could we do? Do we have to create a tree all the nodes of which have 3 children?
EDIT:
I think that it would be as follows.
Steps to build Huffman Tree
Input is array of unique characters along with their frequency of occurrences and output is Huffman Tree.
Create a leaf node for each unique character and build a min heap of all leaf nodes
Extract three nodes with the minimum frequency from the min heap.
Create a new internal node with frequency equal to the sum of the three nodes frequencies. Make the first extracted node as its left child, the second extracted node as its middle child and the third extracted node as its right child. Add this node to the min heap.
Repeat steps#2 and #3 until the heap contains only one node. The remaining node is the root node and the tree is complete.
How can we prove that the algorithm yields optimal ternary codes?
EDIT 2: Suppose that we have the frequencies 5,9,12,13,16,45.
Their number is even, so we add a dummy node with frequency 0. Do we put this at the end of the array, and then proceed by building the heap and combining three nodes at a time? Or have I understood it wrong?
Yes, you have to create nodes with 3 children. Why 3? You can also have n-ary Huffman coding using nodes with n children. The tree will look something like this (for n=3):
      *
    / | \
   *  *  *
  /|\
 * * *
Huffman Algorithm for Ternary Codewords
I am giving the algorithm here for easy reference.
HUFFMAN_TERNARY(C)
{
    IF |C| = EVEN
        THEN ADD DUMMY CHARACTER Z WITH FREQUENCY 0.
    N = |C|
    Q = C;                              // WE ARE BASICALLY HEAPIFYING THE CHARACTERS
    FOR I = 1 TO floor(N/2)
    {
        ALLOCATE NEW_NODE;
        LEFT[NEW_NODE]  = U = EXTRACT_MIN(Q)
        MID[NEW_NODE]   = V = EXTRACT_MIN(Q)
        RIGHT[NEW_NODE] = W = EXTRACT_MIN(Q)
        F[NEW_NODE] = F[U] + F[V] + F[W];
        INSERT(Q, NEW_NODE);
    }
    RETURN EXTRACT_MIN(Q);
}   // END-OF-ALGO
Why are we adding an extra node? To make the number of nodes odd. (Why?) Because we want to get out of the for loop with just one node in Q.
Why floor(N/2)?
At first we take 3 nodes and replace them with 1 node, so N-2 nodes remain.
After that we always take 3 nodes (we can never be left with exactly 2 nodes, thanks to the dummy node) and replace them with 1. Each iteration reduces the count by 2 nodes, which is why the loop runs floor(N/2) times.
Check it yourself on paper using some sample character set. You will understand.
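If you would rather try it in code than on paper, here is a hedged Python sketch of the same construction, using heapq as the min-priority queue; the tuple layout and the "dummy" marker are my own choices:

import heapq
from itertools import count

def ternary_huffman(freqs):
    tie = count()                        # tie-breaker so heapq never compares trees
    heap = [(f, next(tie), sym) for sym, f in enumerate(freqs)]
    if len(heap) % 2 == 0:               # make the number of symbols odd
        heap.append((0, next(tie), "dummy"))
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        f3, _, c = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2 + f3, next(tie), (a, b, c)))
    total, _, tree = heap[0]
    return total, tree

print(ternary_huffman([5, 9, 12, 13, 16, 45]))
# (100, (4, (2, 3, ('dummy', 0, 1)), 5))
# symbols are indices into freqs: 4 -> 16, 5 -> 45, 2 -> 12, 3 -> 13, 0 -> 5, 1 -> 9,
# i.e. the same grouping as the step-1/2/3 tree shown in the example further down.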
CORRECTNESS
I am taking the reference here from "Introduction to Algorithms" by Cormen and Rivest.
Proof: The step-by-step mathematical proof is too long to post here, but it is quite similar to the proof given in the book.
Idea
Any optimal tree has the lowest three frequencies at the lowest level. (We have to prove this, using contradiction.) Suppose it were not the case; then we could switch a leaf with a higher frequency from the lowest level with one of the lowest three leaves and obtain a lower average length. Without any loss of generality, we can assume that all three lowest frequencies are the children of the same node (if they are at the same level, the average length does not change irrespective of where the frequencies are). They only differ in the last digit of their codeword (one will be 0, 1 or 2).
Again, as with binary codewords, we have to contract the three nodes and make a new character out of them whose frequency is the total of the three nodes' (characters') frequencies. Like binary Huffman codes, we see that the cost of the optimal tree is the sum of the cost of the tree with the three symbols contracted and the cost of the eliminated sub-tree which held the nodes before contraction. Since it has been proved that the sub-tree has to be present in the final optimal tree, we can optimize on the tree with the newly created contracted node.
Example
Suppose the character set contains frequencies 5, 9, 12, 13, 16, 45.
Now N=6, which is even, so add a dummy character with freq=0.
N=7 now and the frequencies in C are 0, 5, 9, 12, 13, 16, 45.
Now, using the min-priority queue, get 3 values: 0, then 5, then 9.
Add them and insert a new character with freq = 0+5+9 = 14 into the priority queue. Continue this way.
The tree will be like this:
              100
            /  |  \
           /   |   \
          /    |    \
        39     16    45          step-3
      /  |  \
    14   12   13                 step-2
  /  |  \
 0   5   9                       step-1
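As a quick sanity check, the weighted path length of this tree is 45*1 + 16*1 + 12*2 + 13*2 + 0*3 + 5*3 + 9*3 = 153.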
Finally, the proof.
I will now give a straightforward adaptation of the proof in Cormen.
Lemma 1. Let C be an alphabet in which each character c belonging to C has frequency c.freq. Let
x, y and z be the three characters in C having the lowest frequencies. Then there exists
an optimal prefix code for C in which the codewords for x, y and z have the same
length and differ only in the last digit.
Proof:
Idea
First consider any tree T generating an arbitrary optimal prefix code.
Then we will modify it to make a tree representing another optimal prefix code such that the characters x, y, z appear as sibling leaves at maximum depth.
If we can construct such a tree, then the codewords for x, y and z will have the same length and differ only in the last digit.
Proof--
Let a, b, c be three characters that are sibling leaves of maximum depth in T.
Without loss of generality, we assume that a.freq <= b.freq <= c.freq and x.freq <= y.freq <= z.freq.
Since x.freq, y.freq and z.freq are the 3 lowest leaf frequencies, in order (meaning there are no frequencies between them), and a.freq, b.freq and c.freq are three arbitrary frequencies, in order, we have x.freq <= a.freq, y.freq <= b.freq and z.freq <= c.freq.
In the remainder of the proof we can have x.freq = a.freq, or y.freq = b.freq, or z.freq = c.freq.
But if x.freq = b.freq, or x.freq = c.freq, or y.freq = c.freq,
then all of them are the same. WHY??
Let's see. Suppose x != y, y != z, x != z, but z = c, with x < y < z in order and a < b < c.
Also x != a  -->  x < a,
y != b  -->  y < b,
z != c  -->  z < c, but z = c is given. This contradicts our assumption. (Thus proved.)
In that case the lemma would be trivially true. Thus we will assume
x.freq != b.freq and x.freq != c.freq.
T1
          *                        |
        / | \                      |
       *  *  x                     +--- d(x)
     / | \                         |
    y  *  z                        +--- d(y) or d(z)
      /|\                          |
     a b c                         +--- d(a) or d(b) or d(c); actually d(a)=d(b)=d(c)
T2
          *
        / | \
       *  *  a
     / | \
    y  *  z
      /|\
     x b c
T3
          *
        / | \
       *  *  a
     / | \
    b  *  z
      /|\
     x y c
T4
          *
        / | \
       *  *  a
     / | \
    b  *  c
      /|\
     x y z
In case of T1: costt1 = x.freq*d(x) + cost_of_other_nodes + y.freq*d(y) + z.freq*d(z) + a.freq*d(a) + b.freq*d(b) + c.freq*d(c)
In case of T2: costt2 = x.freq*d(a) + cost_of_other_nodes + y.freq*d(y) + z.freq*d(z) + a.freq*d(x) + b.freq*d(b) + c.freq*d(c)
costt1 - costt2 = x.freq*[d(x) - d(a)] + a.freq*[d(a) - d(x)]
                = (a.freq - x.freq)*(d(a) - d(x))
                >= 0
So costt1 >= costt2. ---> (1)
Similarly we can show costt2 >= costt3 ---> (2)
and costt3 >= costt4. ---> (3)
From (1), (2) and (3) we get
costt1 >= costt4. ---> (4)
But T1 is optimal,
so costt1 <= costt4. ---> (5)
From (4) and (5) we get costt1 = costt4.
So T4 is an optimal tree in which x, y and z appear as sibling leaves at maximum depth, from which the lemma follows.
Lemma 2.
Let C be a given alphabet with frequency c.freq defined for each character c belonging to C.
Let x, y, z be three characters in C with minimum frequency. Let C1 be the
alphabet C with the characters x, y and z removed and a new character z1 added,
so that C1 = C - {x,y,z} union {z1}. Define freq for C1 as for C, except that
z1.freq = x.freq + y.freq + z.freq. Let T1 be any tree representing an optimal prefix code
for the alphabet C1. Then the tree T, obtained from T1 by replacing the leaf node
for z1 with an internal node having x, y and z as children, represents an optimal prefix
code for the alphabet C.
Proof:
We are making a transition from T1 -> T,
so we must find a way to express costt (the cost of T) in terms of costt1.
        *                           *
      / | \                       / | \
     *  *  *                     *  *  *
   / | \                       / | \
  *  *  *        ---->        *  z1  *
 /|\
x y z
For c belonging to (C - {x,y,z}), dT(c) = dT1(c)  [the depths of c in the trees T and T1].
Hence c.freq*dT(c) = c.freq*dT1(c).
Since dT(x) = dT(y) = dT(z) = dT1(z1) + 1,
we have x.freq*dT(x) + y.freq*dT(y) + z.freq*dT(z) = (x.freq + y.freq + z.freq)*(dT1(z1) + 1)
                                                   = z1.freq*dT1(z1) + x.freq + y.freq + z.freq
Adding to both sides the cost of the other nodes, which is the same in both T and T1:
x.freq*dT(x) + y.freq*dT(y) + z.freq*dT(z) + cost_of_other_nodes = z1.freq*dT1(z1) + x.freq + y.freq + z.freq + cost_of_other_nodes
So costt = costt1 + x.freq + y.freq + z.freq,
or equivalently
costt1 = costt - x.freq - y.freq - z.freq ----> (1)
We now prove the lemma by contradiction. Suppose that T does not represent
an optimal prefix code for C. Then there exists an optimal tree T2 such that
costt2 < costt. Without loss of generality (by Lemma 1), T2 has x and y and z as
siblings.
Let T3 be the tree T2 with the common parent of x, y and z replaced by a
leaf z1 with frequency z1.freq = x.freq + y.freq + z.freq. Then
costt3 = costt2 - x.freq - y.freq - z.freq
       < costt  - x.freq - y.freq - z.freq
       = costt1   (from (1)),
yielding a contradiction to the assumption that T1 represents an optimal prefix code
for C1. Thus, T must represent an optimal prefix code for the alphabet C.
-Proved.
Procedure HUFFMAN_TERNARY produces an optimal prefix code.
Proof: Immediate from Lemmas 1 and 2.
Note: The terminology is from Introduction to Algorithms, 3rd edition (Cormen et al.).

Print all paths in a tree (Not just root to nodes)

So how would you print all paths in a tree? Here the condition is that we don't only want paths starting from the root, or paths within a sub-tree.
For example:
      2
     / \
    8   10
   / \  /
  5   6 11
So the program should return:
2-8
2-10
2-8-5
2-8-6
8-5
8-6
2-10-11
10-11
5-8-2-10-11
5-8-2-10
and so on...
One approach is to find the LCA between every distinct pair of nodes and then print the path from the LCA to both nodes (reversed in the left subtree and in order in the right subtree). But the complexity here would be O(n^3). Is there a more efficient solution?
If you are only interested in the result, not in the algorithm, create the nodes and relations in neo4j with
merge (n2:node{n:2})-[:down]->(n8:node{n:8})-[:down]->(:node{n:5})
merge (n2)-[:down]->(:node{n:10})-[:down]->(:node{n:11})
merge (n8)-[:down]->(:node{n:6})
then query
match p=(a)-[r:down *]-(b) return nodes(p)
Assuming your tree has distinct nodes, you can:
Create a map having an int key and a vector value. The key stands for each node you encounter, and the vector stores all the nodes that you will traverse under that node.
Pass this map by value to each node. You can have a function like:
void printAllPaths(node *proot, map<int, vector<int> > m)
Whenever you encounter a new node n, do the following:
a) For each key k in the map, add n to the value vector of k.
b) Print each key followed by its value vector.
c) Also insert a new key n into the map with an empty vector as its value.
Note: If your tree has duplicate nodes, a multimap will help you keep track. The C++ STL will serve you well in this case.
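Here is a rough Python sketch of those steps, with a dict standing in for map<int, vector<int>> (the Node class and the copy-on-call are my assumptions). Note that, as written, the steps yield the top-down paths such as 2-8-5 and 8-5; cross paths such as 5-8-2-10-11 would additionally require the reversed direction:

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def print_all_paths(node, m=None):
    if node is None:
        return
    # copy the map so each subtree effectively receives it "by value"
    m = {key: path + [node.val] for key, path in (m or {}).items()}
    for key, path in m.items():
        print("-".join(map(str, [key] + path)))
    m[node.val] = []                     # new key with an empty vector
    print_all_paths(node.left, m)
    print_all_paths(node.right, m)

# The example tree from the question
root = Node(2, Node(8, Node(5), Node(6)), Node(10, Node(11)))
print_all_paths(root)
# 2-8, 2-8-5, 8-5, 2-8-6, 8-6, 2-10, 2-10-11, 10-11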

More localized, efficient Lowest Common Ancestor algorithm given multiple binary trees?

I have multiple binary trees stored as an array. In each slot is either nil (or null; pick your language) or a fixed tuple storing two numbers: the indices of the two "children". No node will have only one child -- it's either none or two.
Think of each slot as a binary node that only stores pointers to its children, and no inherent value.
Take this system of binary trees:
    0              1
   / \            / \
  2   3          4   5
 / \                / \
6   7              8   9
   / \
  10  11
The associated array would be:
   0       1       2      3     4     5       6     7         8     9     10    11
[ [2,3] , [4,5] , [6,7] , nil , nil , [8,9] , nil , [10,11] , nil , nil , nil , nil ]
I've already written simple functions to find the direct parent of a node (simply by searching from the front until there is a node that contains it as a child).
Furthermore, let us say that at relevant times, all trees are anywhere between a few and a few thousand levels deep.
I'd like to find a function
P(m,n)
to find the lowest common ancestor of m and n -- to put it more formally, the LCA is defined as the "lowest", or deepest, node which has both m and n as descendants (children, or children of children, etc.). If there is none, nil would be a valid return.
Some examples, given our given tree:
P( 6,11) # => 2
P( 3,10) # => 0
P( 8, 6) # => nil
P( 2,11) # => 2
The main method I've been able to find is one that uses an Euler trace, which turns the given tree (adding node A as the invisible parent of 0 and 1, with a "value" of -1) into:
A-0-2-6-2-7-10-7-11-7-2-0-3-0-A-1-4-1-5-8-5-9-5-1-A
And from that, simply find the node between your given m and n that has the lowest number; for example, to find P(6,11), look for a 6 and an 11 on the trace. The number between them that is the lowest is 2, and that's your answer. If A (-1) is between them, return nil.
-- Calculating P(6,11) --
A-0-2-6-2-7-10-7-11-7-2-0-3-0-A-1-4-1-5-8-5-9-5-1-A
      ^ ^        ^
      | |        |
      m lowest   n
Unfortunately, I do believe that finding the Euler trace of a tree that can be several thousand levels deep is a bit machine-taxing... and because my tree is constantly being changed throughout the course of the program, every time I wanted to find the LCA I'd have to re-calculate the Euler trace and hold it in memory.
Is there a more memory-efficient way, given the framework I'm using? One that maybe iterates upwards? One way I could think of would be to "count" the generation/depth of both nodes, climb the deeper one up until it matches the depth of the shallower one, and then advance both upwards until they meet.
But that'd involve climbing up from level, say, 3025, back to 0, twice, just to count the generations, using a terribly inefficient climbing-up algorithm in the first place, and then re-climbing back up.
Are there any other better ways?
Clarifications
In the way this system is built, every child will have a number greater than its parent.
This does not guarantee that if n is in generation X, there are no nodes in generation (X-1) that are greater than n. For example:
              0
            /   \
           /     \
          /       \
         1    2    6
        / \  / \  / \
       2  3 9  10 7  8
      / \      / \
     4  5    11  12
is a valid tree system.
Also, an artifact of the way the trees are built are that the two immediate children of the same parent will always be consecutively numbered.
Are the nodes in order like in your example, where the children have a larger id than the parent? If so, you might be able to do something similar to a merge sort to find them. For your example, the parent chains of 6 and 11 are:
6 -> 2 -> 0
11 -> 7 -> 2 -> 0
So perhaps the algorithm would be:
left = left_start
right = right_start
while left > 0 and right > 0
    if left = right
        return left
    else if left > right
        left = parent(left)
    else
        right = parent(right)
Which would run as:
left   right
----   -----
   6      11   (right -> 7)
   6       7   (right -> 2)
   6       2   (left -> 2)
   2       2   (return 2)
Is this correct?
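For what it's worth, here is a hedged Python sketch of this climbing idea over the question's array representation; the linear-scan parent() is an assumption, and I test against None rather than > 0 so that an LCA of 0 (as in P(3,10)) is still returned:

tree = [[2, 3], [4, 5], [6, 7], None, None, [8, 9],
        None, [10, 11], None, None, None, None]

def parent(tree, node):
    # linear scan, as in the question's "search from the front" helper
    for i, children in enumerate(tree):
        if children is not None and node in children:
            return i
    return None                          # node is a root

def lca(tree, left, right):
    while left is not None and right is not None:
        if left == right:
            return left
        if left > right:
            left = parent(tree, left)
        else:
            right = parent(tree, right)
    return None                          # the two nodes live in different trees

print(lca(tree, 6, 11))   # 2
print(lca(tree, 3, 10))   # 0
print(lca(tree, 8, 6))    # None
print(lca(tree, 2, 11))   # 2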
Maybe this will help: Dynamic LCA Queries on Trees.
Abstract:
Richard Cole, Ramesh Hariharan
We show how to maintain a data structure on trees which allows for the following operations, all in worst-case constant time:
1. Insertion of leaves and internal nodes.
2. Deletion of leaves.
3. Deletion of internal nodes with only one child.
4. Determining the Least Common Ancestor of any two nodes.
Conference: Symposium on Discrete Algorithms - SODA 1999
I've solved your problem in Haskell. Assuming you know the roots of the forest, the solution takes time linear in the size of the forest and constant additional memory. You can find the full code at http://pastebin.com/ha4gqU0n.
The solution is recursive, and the main idea is that you can call a function on a subtree which returns one of four results:
The subtree contains neither m nor n.
The subtree contains m but not n.
The subtree contains n but not m.
The subtree contains both m and n, and the index of their least common ancestor is k.
A node without children may contain m, n, or neither, and you simply return the appropriate result.
If a node with index k has two children, you combine the results as follows:
join :: Int -> Result -> Result -> Result
join _ (HasBoth k) _ = HasBoth k
join _ _ (HasBoth k) = HasBoth k
join _ HasNeither r = r
join _ r HasNeither = r
join k HasLeft HasRight = HasBoth k
join k HasRight HasLeft = HasBoth k
After computing this result you have to check the index k of the node itself; if k is equal to m or n, you will "extend" the result of the join operation.
My code uses algebraic data types, but I've been careful to assume you need only the following operations:
Get the index of a node
Find out if a node is empty, and if not, find its two children
Since your question is language-agnostic I hope you'll be able to adapt my solution.
There are various performance tweaks you could put in. For example, if you find a root that has exactly one of the two nodes m and n, you can quit right away, because you know there's no common ancestor. Also, if you look at one subtree and it has the common ancestor, you can ignore the other subtree (that one I get for free using lazy evaluation).
Your question was primarily about how to save memory. If a linear-time solution is too slow, you'll probably need an auxiliary data structure. Space-for-time tradeoffs are the bane of our existence.
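For readers who don't use Haskell, a rough Python transcription of the four-result recursion over the question's array representation might look like the following; the bit-flag encoding and the helper names are my own:

tree = [[2, 3], [4, 5], [6, 7], None, None, [8, 9],
        None, [10, 11], None, None, None, None]

NEITHER, HAS_M, HAS_N = 0, 1, 2          # the fourth outcome is ('both', lca)

def lca_forest(tree, roots, m, n):
    def visit(k):
        state = NEITHER
        children = tree[k]
        if children is not None:
            for c in children:
                result = visit(c)
                if isinstance(result, tuple):
                    return result         # LCA already found deeper down
                state |= result
        if k == m:
            state |= HAS_M                # "extend" the result with the node itself
        if k == n:
            state |= HAS_N
        if state == HAS_M | HAS_N:
            return ("both", k)            # k is the deepest node covering both
        return state

    for root in roots:
        result = visit(root)
        if isinstance(result, tuple):
            return result[1]
    return None

print(lca_forest(tree, [0, 1], 6, 11))   # 2
print(lca_forest(tree, [0, 1], 8, 6))    # None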
I think that you can simply loop backwards through the array, always replacing the higher of the two indices by its parent, until they are either equal or no further parent is found:
(defun lowest-common-ancestor (array node-index-1 node-index-2)
  (cond ((or (null node-index-1)
             (null node-index-2))
         nil)
        ((= node-index-1 node-index-2)
         node-index-1)
        ((< node-index-1 node-index-2)
         (lowest-common-ancestor array
                                 node-index-1
                                 (find-parent array node-index-2)))
        (t
         (lowest-common-ancestor array
                                 (find-parent array node-index-1)
                                 node-index-2))))
