breadth of a binary tree - algorithm

How do we determine the breadth of a binary tree?
A simple binary tree:

      O
     / \
    O   O
     \
      O
       \
        O
         \
          O

The breadth of the above tree is 4.

You could use a recursive function that returns two values for a given node: the extent of the subtree rooted at that node towards the left (a negative number or zero), and the extent towards the right (zero or positive). So for the example tree given in the question it would return -1 and 2.
Finding these extents is easy when you know the extents of the left child and of the right child, and that is where the recursion kicks in; it is in fact a depth-first traversal.
Here is how that function would look in Python:
def extents(tree):
    if not tree:
        # If a tree with just one node has extents 0 and 0, then "nothing" should
        # have a negative extent to the right and a positive one on the left,
        # representing a negative breadth
        return 1, -1
    leftleft, leftright = extents(tree.left)
    rightleft, rightright = extents(tree.right)
    return min(leftleft-1, rightleft+1), max(leftright-1, rightright+1)
The breadth is simply the difference between the two extents returned by the above function, plus 1 (the extents are offsets from the root's column, so the number of columns covered is rightextent - leftextent + 1):
def breadth(tree):
    leftextent, rightextent = extents(tree)
    return rightextent - leftextent + 1
The complete Python code, with the 6-node example tree as input:
from collections import namedtuple

Node = namedtuple('Node', ['left', 'right'])

def extents(tree):
    if not tree:
        return 1, -1
    leftleft, leftright = extents(tree.left)
    rightleft, rightright = extents(tree.right)
    return min(leftleft-1, rightleft+1), max(leftright-1, rightright+1)

def breadth(tree):
    left, right = extents(tree)
    return right - left + 1

# example tree as given in the question
tree = Node(
    Node(
        None,
        Node(None, Node(None, Node(None, None)))
    ),
    Node(None, None)
)

print(breadth(tree))  # outputs 4

Related

Check whether 2 nodes have any common parent(s) in a DAG graph

The input is:
An int[][]; each sub-array contains 2 ints as {parent, child}, meaning there is an edge from parent -> child.
e.g.
{ { 1, 3 }, { 2, 3 }, { 3, 6 }, { 5, 6 }, { 5, 7 }, { 4, 5 }, { 4, 8 }, { 8, 9 } };
Or as a tree structure:
1   2   4
 \ /   / \
  3   5   8
   \ / \   \
    6   7   9
The task is:
Given 2 values (x, y), return a boolean value indicating whether they have any common parent(s).
Sample input and output:
[3, 8] => false
[5, 8] => true
[6, 8] => true
My idea:
Represent the input data as a DAG, stored in a Map<Integer, LinkedList<Integer>>, where the key is a vertex and the value is its adjacency list. The edge direction is reversed (compared to the input data) as child -> parent, so that it is easy to search for parents.
Use a function findAvailableParents(Integer vertex) to find all parents (direct and indirect) of a single vertex, returning a Set<Integer>.
Thus, I only need to call findAvailableParents() once for each input vertex, then check whether the 2 returned Sets have any intersection. If yes, they have a common parent; otherwise, they don't.
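For reference, a minimal Python sketch of the idea described above (the questioner describes Java collections; the helper names build_reversed_graph, find_all_parents and has_common_parent here are purely illustrative):

from collections import defaultdict

def build_reversed_graph(edges):
    # Map each child to its direct parents (edge direction reversed).
    parents = defaultdict(list)
    for parent, child in edges:
        parents[child].append(parent)
    return parents

def find_all_parents(parents, vertex):
    # Collect all direct and indirect parents of a vertex by walking reversed edges.
    seen = set()
    stack = [vertex]
    while stack:
        v = stack.pop()
        for p in parents[v]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def has_common_parent(edges, x, y):
    parents = build_reversed_graph(edges)
    return bool(find_all_parents(parents, x) & find_all_parents(parents, y))

edges = [(1, 3), (2, 3), (3, 6), (5, 6), (5, 7), (4, 5), (4, 8), (8, 9)]
print(has_common_parent(edges, 3, 8))  # False
print(has_common_parent(edges, 5, 8))  # True
print(has_common_parent(edges, 6, 8))  # True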
My questions are:
The time complexity of the solution above is between O(1) and O(E), right? (E is the number of edges in the graph)
Is there a better solution?
A modified BFS might help you to solve the problem
Algorithm: checkCommonParent
from collections import deque

def checkCommonParent(G, v1, v2):
    # Create a queue for level-order traversal
    q1 = deque()
    # Mark all the vertices as not visited.
    # This will be used to cover all the parts of the graph.
    visited = [False] * len(G.Vertices)
    for v in G.Vertices:
        if visited[v] == False:
            q1.append(v)
            visited[v] = True
            # Explore this connected component and see if both vertices occur in it.
            # If they do, that means they have a common ancestor.
            v1Visited = False
            v2Visited = False
            while len(q1) > 0:
                curVertex = q1.popleft()
                for adjV in curVertex.adjacentVertices:
                    if visited[adjV] == False:
                        q1.append(adjV)
                        visited[adjV] = True
                        if adjV == v1:
                            v1Visited = True
                        elif adjV == v2:
                            v2Visited = True
                        if v1Visited and v2Visited:
                            return True
    return False
I guess the idea is clear on the modification of BFS. Hope it helps!
Suppose you have multiple inputs; BFS would take around O(E) time to process each input.
All inputs can be answered in O(log n) if we do some precomputation, which takes about O(n log n) time.
Basically you want to find the least common ancestor (LCA) of those nodes.
This thread on TopCoder discusses the logic for a tree, which can be extended to a DAG.
You can also refer to this question for some further ideas.
If an LCA exists between 2 nodes, then they have a common parent.
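For the tree case covered by the linked discussion, a common way to get the O(n log n) preprocessing and O(log n) queries mentioned above is binary lifting. A minimal sketch, assuming nodes are numbered 0..n-1 with a parent array and a given root (the names up and depth are illustrative, not from the linked thread):

import math

def preprocess_lca(n, parent, root):
    # Binary lifting table: up[k][v] is the 2^k-th ancestor of v (the root is its own parent).
    LOG = max(1, math.ceil(math.log2(n)))
    up = [[root] * n for _ in range(LOG)]
    depth = [0] * n
    children = [[] for _ in range(n)]
    for v in range(n):
        if v != root:
            children[parent[v]].append(v)
    order = [root]
    for v in order:                      # fills depth level by level (list grows as we iterate)
        for c in children[v]:
            depth[c] = depth[v] + 1
            order.append(c)
    for v in range(n):
        up[0][v] = parent[v] if v != root else root
    for k in range(1, LOG):
        for v in range(n):
            up[k][v] = up[k - 1][up[k - 1][v]]
    return up, depth

def lca(up, depth, a, b):
    if depth[a] < depth[b]:
        a, b = b, a
    diff = depth[a] - depth[b]
    for k in range(len(up)):             # lift a up to b's depth
        if (diff >> k) & 1:
            a = up[k][a]
    if a == b:
        return a
    for k in reversed(range(len(up))):   # lift both to just below their lowest common ancestor
        if up[k][a] != up[k][b]:
            a, b = up[k][a], up[k][b]
    return up[0][a]

# tiny example: node 0 is the root; 1 and 2 are children of 0; 3 is a child of 1
parent = [0, 0, 0, 1]
up, depth = preprocess_lca(4, parent, 0)
print(lca(up, depth, 3, 2))  # 0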

How to keep track of depth in breadth first search?

I have a tree as input to the breadth-first search, and I want to know, as the algorithm progresses, which level it is currently on.
# Breadth First Search Implementation
graph = {
    'A': ['B', 'C', 'D'],
    'B': ['A'],
    'C': ['A', 'E', 'F'],
    'D': ['A', 'G', 'H'],
    'E': ['C'],
    'F': ['C'],
    'G': ['D'],
    'H': ['D']
}

def breadth_first_search(graph, source):
    """
    This function is the implementation of the breadth_first_search program
    """
    # Mark each node as not visited
    mark = {}
    for item in graph.keys():
        mark[item] = 0
    queue, output = [], []
    # Initialize the queue with the source node and mark it as explored
    queue.append(source)
    mark[source] = 1
    output.append(source)
    # while queue is not empty
    while queue:
        # remove the first element of the queue and call it vertex
        vertex = queue[0]
        queue.pop(0)
        # for each edge from the vertex do the following
        for vrtx in graph[vertex]:
            # If the vertex is unexplored
            if mark[vrtx] == 0:
                queue.append(vrtx)   # append it to the queue
                mark[vrtx] = 1       # and mark it as explored
                output.append(vrtx)  # fill the output vector
    return output

print(breadth_first_search(graph, 'A'))
It takes a tree as the input graph; what I want is that at each iteration it should print out the current level being processed.
Actually, we don't need an extra queue to store the info on the current depth, nor do we need to add null to tell whether it's the end of the current level. We just need to know how many nodes the current level contains; then we can deal with all the nodes on the same level, and increase the level by 1 after we are done processing all of them.
int level = 0;
Queue<Node> queue = new LinkedList<>();
queue.add(root);
while (!queue.isEmpty()) {
    int level_size = queue.size();
    while (level_size-- != 0) {
        Node temp = queue.poll();
        if (temp.right != null) queue.add(temp.right);
        if (temp.left != null) queue.add(temp.left);
    }
    level++;
}
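Since the question's code is a BFS over a Python adjacency-list graph, a minimal sketch applying the same level-size idea there might look like this (the print of the current level is exactly what was asked for; the names are illustrative):

from collections import deque

def bfs_with_levels(graph, source):
    visited = {source}
    queue = deque([source])
    level = 0
    output = [source]
    while queue:
        print("processing level", level)
        for _ in range(len(queue)):       # exactly the nodes of the current level
            vertex = queue.popleft()
            for neighbour in graph[vertex]:
                if neighbour not in visited:
                    visited.add(neighbour)
                    queue.append(neighbour)
                    output.append(neighbour)
        level += 1
    return output

# example with the graph from the question
graph = {
    'A': ['B', 'C', 'D'], 'B': ['A'], 'C': ['A', 'E', 'F'],
    'D': ['A', 'G', 'H'], 'E': ['C'], 'F': ['C'], 'G': ['D'], 'H': ['D'],
}
print(bfs_with_levels(graph, 'A'))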
You don't need to use an extra queue or do any complicated calculation to achieve what you want to do. The idea is very simple, and it does not use any extra space other than the queue used for BFS.
The idea is to add a null at the end of each level. Then the number of nulls you have encountered plus 1 is the depth you are at (of course, after termination it is just the level).
int level = 0;
Queue<Node> queue = new LinkedList<>();
queue.add(root);
queue.add(null);
while (!queue.isEmpty()) {
    Node temp = queue.poll();
    if (temp == null) {
        level++;
        queue.add(null);
        if (queue.peek() == null) break; // encountering two consecutive nulls means you have visited all the nodes
        else continue;
    }
    if (temp.right != null)
        queue.add(temp.right);
    if (temp.left != null)
        queue.add(temp.left);
}
Maintain a second queue storing the depth of the corresponding node in the BFS queue. Sample code for your information:
queue bfsQueue, depthQueue;
bfsQueue.push(firstNode);
depthQueue.push(0);
while (!bfsQueue.empty()) {
    f = bfsQueue.front();
    depth = depthQueue.front();
    bfsQueue.pop(), depthQueue.pop();
    for (every node adjacent to f) {
        bfsQueue.push(node), depthQueue.push(depth + 1);
    }
}
This method is simple and naive; for O(1) extra space you may want the approach posted by @stolen_leaves.
Try having a look at this post. It keeps track of the depth using the variable currentDepth
https://stackoverflow.com/a/16923440/3114945
For your implementation, keep track of the leftmost node and a variable for the depth. Whenever the leftmost node is popped from the queue, you know you hit a new level and you increment the depth.
So, your root is the leftMostNode at level 0. Then the leftmost child is the next leftMostNode. As soon as you hit it, it becomes level 1. The leftmost child of this node is the next leftMostNode, and so on.
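A minimal Python sketch of this leftmost-node idea for a binary tree (the node fields and function name are illustrative, not from the linked post):

from collections import deque

class N:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def bfs_with_depth(root):
    # Remember the first node enqueued for the next level; when it is dequeued,
    # we know we have just crossed into a new level.
    queue = deque([root])
    depth = 0
    next_leftmost = None
    while queue:
        node = queue.popleft()
        if node is next_leftmost:
            depth += 1
            next_leftmost = None
        print(node.val, "is at depth", depth)
        for child in (node.left, node.right):
            if child is not None:
                if next_leftmost is None:
                    next_leftmost = child
                queue.append(child)

bfs_with_depth(N(1, N(2, N(4)), N(3)))
# prints: 1 at depth 0, 2 and 3 at depth 1, 4 at depth 2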
With this Python code you can maintain the depth of each node from the root by increasing the depth only after you encounter a node of new depth in the queue.
from collections import deque

queue = deque()
marked = set()
marked.add(root)
queue.append((root, 0))
depth = 0
while queue:
    r, d = queue.popleft()
    if d > depth:  # increase depth only when you encounter the first node in the next depth
        depth += 1
    for node in edges[r]:
        if node not in marked:
            marked.add(node)
            queue.append((node, depth + 1))
If your tree is perfectly balanced (i.e. each node has the same number of children) there's actually a simple, elegant solution here with O(1) time complexity and O(1) space complexity. The main use case where I find this helpful is in traversing a binary tree, though it's trivially adaptable to other tree sizes.
The key thing to realize here is that each level of a binary tree contains exactly double the quantity of nodes compared to the previous level. This allows us to calculate the total number of nodes in any tree given the tree's depth. For instance, consider the following tree:

        1
       / \
      2   3
     / \ / \
    4  5 6  7

This tree has a depth of 3 and 7 total nodes. We don't need to count the number of nodes to figure this out though. We can compute this in O(1) time with the formula: 2^d - 1 = N, where d is the depth and N is the total number of nodes. (In a ternary tree this is (3^d - 1) / 2 = N, and in a tree where each node has K children this is (K^d - 1) / (K - 1) = N.) So in this case, 2^3 - 1 = 7.
To keep track of depth while conducting a breadth first search, we simply need to reverse this calculation. Whereas the above formula allows us to solve for N given d, we actually want to solve for d given N. For instance, say we're evaluating the 5th node. To figure out what depth the 5th node is on, we take the equation 2^d - 1 = 5 and simply solve for d, which is basic algebra: d = log2(5 + 1) ≈ 2.58.
If d turns out to be anything other than a whole number, just round up (the last node in a row is always a whole number). With that all in mind, I propose the following algorithm to identify the depth of any given node in a binary tree during breadth first traversal:
Let the variable visited equal 0.
Each time a node is visited, increment visited by 1.
Each time visited is incremented, calculate the node's depth as depth = round_up(log2(visited + 1))
You can also use a hash table to map each node to its depth level, though this does increase the space complexity to O(n). Here's a PHP implementation of this algorithm:
<?php
$tree = [
    ['A', [1,2]],
    ['B', [3,4]],
    ['C', [5,6]],
    ['D', [7,8]],
    ['E', [9,10]],
    ['F', [11,12]],
    ['G', [13,14]],
    ['H', []],
    ['I', []],
    ['J', []],
    ['K', []],
    ['L', []],
    ['M', []],
    ['N', []],
    ['O', []],
];

function bfs($tree) {
    $queue = new SplQueue();
    $queue->enqueue($tree[0]);
    $visited = 0;
    $depth = 0;
    $result = [];
    while ($queue->count()) {
        $visited++;
        $node = $queue->dequeue();
        $depth = ceil(log($visited + 1, 2));
        $result[$depth][] = $node[0];
        if (!empty($node[1])) {
            foreach ($node[1] as $child) {
                $queue->enqueue($tree[$child]);
            }
        }
    }
    print_r($result);
}

bfs($tree);
Which prints:
Array
(
[1] => Array
(
[0] => A
)
[2] => Array
(
[0] => B
[1] => C
)
[3] => Array
(
[0] => D
[1] => E
[2] => F
[3] => G
)
[4] => Array
(
[0] => H
[1] => I
[2] => J
[3] => K
[4] => L
[5] => M
[6] => N
[7] => O
)
)
Set a variable cnt and initialize it to the size of the queue, cnt = queue.size(). Now decrement cnt each time you pop. When cnt reaches 0, increase the depth of your BFS and set cnt = queue.size() again.
In Java it would be something like this.
The idea is to look at the parent to decide the depth.
// Maintain depth for every node based on its parent's depth
Map<Character, Integer> depthMap = new HashMap<>();
Queue<Character> queue = new LinkedList<>();
Set<Character> visited = new HashSet<>();

queue.add('A');
visited.add('A');
depthMap.put('A', 0); // this is where you start your search

while (!queue.isEmpty())
{
    Character parent = queue.remove();
    List<Character> children = adjList.get(parent);
    for (Character child : children)
    {
        if (!visited.contains(child)) {
            visited.add(child);
            queue.add(child);
            depthMap.put(child, depthMap.get(parent) + 1); // parent's depth + 1
        }
    }
}
Use a dictionary to keep track of the level (distance from start) of each node when exploring the graph.
Example in Python:
from collections import deque

def bfs(graph, start):
    queue = deque([start])
    levels = {start: 0}
    while queue:
        vertex = queue.popleft()
        for neighbour in graph[vertex]:
            if neighbour in levels:
                continue
            queue.append(neighbour)
            levels[neighbour] = levels[vertex] + 1
    return levels
I wrote simple, easy-to-read code in Python.
class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

class Solution:
    def dfs(self, root):
        assert root is not None
        queue = [root]
        level = 0
        while queue:
            print(level, [n.val for n in queue if n is not None])
            mark = len(queue)
            for i in range(mark):
                n = queue[i]
                if n.left is not None:
                    queue.append(n.left)
                if n.right is not None:
                    queue.append(n.right)
            queue = queue[mark:]
            level += 1
Usage,
# [3,9,20,null,null,15,7]
n3 = TreeNode(3)
n9 = TreeNode(9)
n20 = TreeNode(20)
n15 = TreeNode(15)
n7 = TreeNode(7)
n3.left = n9
n3.right = n20
n20.left = n15
n20.right = n7

Solution().dfs(n3)
Result
0 [3]
1 [9, 20]
2 [15, 7]
I don't see this method posted so far, so here's a simple one:
You can "attach" the level to the node. For example, in the case of a tree, instead of the typical queue<TreeNode*>, use a queue<pair<TreeNode*,int>> and push pairs of {node, level} into it. The root would be pushed as q.push({root, 0}), its children as q.push({root->left, 1}), q.push({root->right, 1}), and so on...
We don't need to modify the input, append nulls or even (asymptotically speaking) use any extra space just to track the levels.

Improving the time complexity of DFS using recursion such that each node only works with its descendants

Problem
There is a perfectly balanced m-ary tree that is n levels deep. Each inner node has exactly m child nodes. The root is said to be at depth 0 and the leaf nodes are said to be at depth n, so there are exactly n ancestors of every leaf node. Therefore, the total number of nodes in the tree is:
T = 1 + m + m^2 + ... + m^n
  = (m^(n+1) - 1) / (m - 1)
Here is an example with m = 3 and n = 2.
                  a                  (depth 0)
         _________|_________
         |        |        |
         b        c        d        (depth 1)
      ___|___  ___|___  ___|___
      |  |  |  |  |  |  |  |  |
      e  f  g  h  i  j  k  l  m     (depth 2)
I am writing a depth-first search function to traverse the entire tree in a deepest-node-first, leftmost-node-first manner, and insert the value of each node into an output list.
I wrote this function in two different ways and want to compare the time complexity of both functions.
Although this question is language agnostic, I am using Python code below to show my functions because Python code looks almost like pseudocode.
Solutions
The first function is dfs1. It accepts the root node as node argument and an empty output list as output argument. The function descends into the tree recursively, visits every node and appends the value of the node to the output list.
def dfs1(node, output):
    """Visit each node (DFS) and place its value in output list."""
    output.append(node.value)
    for child_node in node.children:
        dfs1(child_node, output)
The second function is dfs2. It accepts the root node as node argument but does not accept any list argument. The function descends into the tree recursively. At every level of recursion, on visiting every node, it creates a list with the value of the current node and all its descendants and returns this list to the caller.
def dfs2(node):
    """Visit nodes (DFS) and return list of values of visited nodes."""
    output = [node.value]
    for child_node in node.children:
        for s in dfs2(child_node):
            output.append(s)
    return output
Analysis
There are two variables that define the problem size.
m -- The number of child nodes each inner node has.
n -- The number of ancestors each leaf node has (height of the tree).
In dfs1, O(1) time is spent while visiting each node, so the total time spent in visiting all nodes is
O(1 + m + m^2 + ... + m^n).
I don't bother about simplifying this expression further.
In dfs2, the time spent while visiting each node is directly proportional to the number of leaf nodes reachable from that node. In other words, the time spent while visiting a node at depth d is O(m^(n - d)). Therefore, the total time spent visiting all nodes is
1 * O(m^n) + m * O(m^(n - 1)) + m^2 * O(m^(n - 2)) + ... + m^n * O(1)
= (n + 1) * O(m^n)
Question
Is it possible to write dfs2 in such a manner that its time complexity is
O(1 + m + m^2 + ... + m^n)
without changing the essence of the algorithm, i.e. each node only creates an output list for itself and all its descendants, and does not have to bother with a list that may have values of its ancestors?
Complete working code for reference
Here is a complete Python code that demonstrates the above functions.
class Node:
    def __init__(self, value):
        """Initialize current node with a value."""
        self.value = value
        self.children = []

    def add(self, node):
        """Add a new node as a child to current node."""
        self.children.append(node)

def make_tree():
    """Create a perfectly balanced m-ary tree with depth n.

    (m = 3 and n = 2)

                      1                  (depth 0)
             _________|_________
             |        |        |
             2        3        4        (depth 1)
          ___|___  ___|___  ___|___
          |  |  |  |  |  |  |  |  |
          5  6  7  8  9 10 11 12 13     (depth 2)
    """
    # Create the nodes
    a = Node( 1)
    b = Node( 2); c = Node( 3); d = Node( 4)
    e = Node( 5); f = Node( 6); g = Node( 7)
    h = Node( 8); i = Node( 9); j = Node(10)
    k = Node(11); l = Node(12); m = Node(13)
    # Create the tree out of the nodes
    a.add(b); a.add(c); a.add(d)
    b.add(e); b.add(f); b.add(g)
    c.add(h); c.add(i); c.add(j)
    d.add(k); d.add(l); d.add(m)
    # Return the root node
    return a

def dfs1(node, output):
    """Visit each node (DFS) and place its value in output list."""
    output.append(node.value)
    for child_node in node.children:
        dfs1(child_node, output)

def dfs2(node):
    """Visit nodes (DFS) and return list of values of visited nodes."""
    output = [node.value]
    for child_node in node.children:
        for s in dfs2(child_node):
            output.append(s)
    return output

a = make_tree()

output = []
dfs1(a, output)
print(output)

output = dfs2(a)
print(output)
Both dfs1 and dfs2 functions produce the same output.
[1, 2, 5, 6, 7, 3, 8, 9, 10, 4, 11, 12, 13]
[1, 2, 5, 6, 7, 3, 8, 9, 10, 4, 11, 12, 13]
If in dfs1 the output list is passed by reference, then the complexity of dfs1 is O(total nodes).
In dfs2, however, the output list is returned and its elements are appended to the parent's output list, which costs O(size of the returned list) at each return, increasing the overall complexity. You can avoid this overhead if both the append and the returning of the output list take constant time.
This can be done if your output list is a linked list that keeps a tail pointer (e.g. a doubly linked list): you can then return a reference to the list and, instead of appending element by element, concatenate two such lists in O(1).
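A minimal Python sketch of that suggestion (the LinkedList class and its method names are illustrative, not part of the question's code); dfs2 then returns a list object that supports O(1) concatenation:

class LinkedList:
    # Singly linked list with a tail pointer: O(1) append and O(1) concatenation.
    class _Node:
        __slots__ = ('value', 'next')
        def __init__(self, value):
            self.value, self.next = value, None

    def __init__(self, value=None):
        self.head = self.tail = None
        if value is not None:
            self.append(value)

    def append(self, value):
        node = self._Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def extend(self, other):
        # Concatenate another list onto this one in O(1); 'other' is consumed.
        if other.head is None:
            return
        if self.tail is None:
            self.head, self.tail = other.head, other.tail
        else:
            self.tail.next = other.head
            self.tail = other.tail

    def to_list(self):
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.next
        return out

def dfs2_linked(node):
    # Like dfs2, but each call returns a LinkedList, so joining child results is O(1).
    output = LinkedList(node.value)
    for child_node in node.children:
        output.extend(dfs2_linked(child_node))
    return output

# usage with the make_tree() from the question:
# print(dfs2_linked(make_tree()).to_list())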

Pre-order to post-order traversal

If the pre-order traversal of a binary search tree is 6, 2, 1, 4, 3, 7, 10, 9, 11, how to get the post-order traversal?
You are given the pre-order traversal of the tree, which is constructed by doing: output, traverse left, traverse right.
As the pre-order traversal comes from a BST, you can deduce the in-order traversal (traverse left, output, traverse right) from the pre-order traversal by sorting the numbers. In your example, the in-order traversal is 1, 2, 3, 4, 6, 7, 9, 10, 11.
From two traversals we can then construct the original tree. Let's use a simpler example for this:
Pre-order: 2, 1, 4, 3
In-order: 1, 2, 3, 4
The pre-order traversal gives us the root of the tree as 2. The in-order traversal tells us 1 falls into the left sub-tree and 3, 4 falls into the right sub-tree. The structure of the left sub-tree is trivial as it contains a single element. The right sub-tree's pre-order traversal is deduced by taking the order of the elements in this sub-tree from the original pre-order traversal: 4, 3. From this we know the root of the right sub-tree is 4 and from the in-order traversal (3, 4) we know that 3 falls into the left sub-tree. Our final tree looks like this:
  2
 / \
1   4
   /
  3
With the tree structure, we can get the post-order traversal by walking the tree: traverse left, traverse right, output. For this example, the post-order traversal is 1, 3, 4, 2.
To generalise the algorithm:
1. The first element in the pre-order traversal is the root of the tree. Elements less than the root form the left sub-tree. Elements greater than the root form the right sub-tree.
2. Find the structure of the left and right sub-trees using step 1 with a pre-order traversal that consists of the elements we worked out to be in that sub-tree, placed in the order they appear in the original pre-order traversal.
3. Traverse the resulting tree in post-order to get the post-order traversal associated with the given pre-order traversal.
Using the above algorithm, the post-order traversal associated with the pre-order traversal in the question is: 1, 3, 4, 2, 9, 11, 10, 7, 6. Getting there is left as an exercise.
Pre-order = outputting the values of a binary tree in the order of the current node, then the left subtree, then the right subtree.
Post-order = outputting the values of a binary tree in the order of the left subtree, then the right subtree, then the current node.
In a binary search tree, the values of all nodes in the left subtree are less than the value of the current node, and likewise for the right subtree. Hence if you know the start of a pre-order dump of a binary search tree (i.e. its root node's value), you can easily decompose the whole dump into the root node value, the values of the left subtree's nodes, and the values of the right subtree's nodes.
To output the tree in post-order, recursion and output reordering is applied. This task is left to the reader.
Based on Ondrej Tucny's answer. Valid for BST only
example:
      20
     /  \
   10    30
   / \     \
  6   15    35
Preorder = 20 10 6 15 30 35
Post = 6 15 10 35 30 20
For a BST, in pre-order traversal the first element of the array is 20. This is the root of our tree. All numbers in the array which are less than 20 form its left subtree, and the greater numbers form its right subtree.
// N = number of nodes in BST (size of traversal array)
int post[N] = {0};
int i = 0;

void PretoPost(int pre[], int l, int r) {
    if (l > r) return;
    if (l == r) { post[i++] = pre[l]; return; }
    // pre[l] is the root.
    // Split the rest into the smaller numbers (left subtree) and the greater
    // numbers (right subtree), then call this function on them recursively.
    int j;
    for (j = l + 1; j <= r; j++)
        if (pre[j] > pre[l])
            break;
    PretoPost(pre, l + 1, j - 1); // add left subtree
    PretoPost(pre, j, r);         // add right subtree
    // the root goes at the end
    post[i++] = pre[l];
}
Please correct me if there is any mistake.
You are given the pre-order traversal results. Then put the values into a binary search tree and simply follow the post-order traversal algorithm on the obtained BST.
This is Python code for pre-order to post-order traversal.
I am constructing the tree, so you can then produce any type of traversal.
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def postorder(root):
    if root == None:
        return
    postorder(root.left)
    postorder(root.right)
    print(root.data, end=" ")

def preordertoposorder(a, n):
    root = Node(a[0])
    stack = []
    stack.append(root)
    for i in range(1, len(a)):
        temp = None
        while len(stack) != 0 and a[i] > stack[-1].data:
            temp = stack.pop()
        if temp != None:
            temp.right = Node(a[i])
            stack.append(temp.right)
        else:
            stack[-1].left = Node(a[i])
            stack.append(stack[-1].left)
    return root

a = [40, 30, 35, 80, 100]
n = 5
root = preordertoposorder(a, n)
postorder(root)
# print(root.data)
# print(root.left.data)
# print(root.right.data)
# print(root.left.right.data)
# print(root.right.right.data)
If you have been given the pre-order and you want to convert it into the post-order, then you should remember that in a BST the in-order traversal always gives the numbers in ascending order. Thus you have both the in-order as well as the pre-order to construct the tree.
preorder: 6, 2, 1, 4, 3, 7, 10, 9, 11
inorder: 1, 2, 3, 4, 6, 7, 9, 10, 11
And its postorder: 1 3 4 2 9 11 10 7 6
I know this is old but there is a better solution.
We don't have to reconstruct a BST to get the post-order from the pre-order.
Here is a simple python code that does it recursively:
import itertools

def postorder(preorder):
    if not preorder:
        return []
    else:
        root = preorder[0]
        left = list(itertools.takewhile(lambda x: x < root, preorder[1:]))
        right = preorder[len(left) + 1:]
        return postorder(left) + postorder(right) + [root]

if __name__ == '__main__':
    preorder = [20, 10, 6, 15, 30, 35]
    print(postorder(preorder))
Output:
[6, 15, 10, 35, 30, 20]
Explanation:
We know that we are in pre-order. This means that the root is at the index 0 of the list of the values in the BST. And we know that the elements following the root are:
first: the elements less than the root, which belong to the left subtree of the root
second: the elements greater than the root, which belong to the right subtree of the root
We then just call the function recursively on both subtrees (which are still in pre-order) and then chain left + right + root (which is the post-order).
Here the pre-order traversal of a binary search tree is given in an array.
The 1st element of the pre-order array will be the root of the BST. We then find the left part and the right part of the BST: all elements in the pre-order array that are less than the root belong to the left subtree, and all elements greater than the root belong to the right subtree.
#include <bits/stdc++.h>
using namespace std;

int arr[1002];
int no_ans = 0;
int n = 1000;
int ans[1002];
int k = 0;

int find_ind(int l, int r, int x) {
    int index = -1;
    for (int i = l; i <= r; i++) {
        if (x < arr[i]) {
            index = i;
            break;
        }
    }
    if (index == -1) return index;
    for (int i = l + 1; i < index; i++) {
        if (arr[i] > x) {
            no_ans = 1;
            return index;
        }
    }
    for (int i = index; i <= r; i++) {
        if (arr[i] < x) {
            no_ans = 1;
            return index;
        }
    }
    return index;
}

void postorder(int l, int r) {
    if (l < 0 || r >= n || l > r) return;
    ans[k++] = arr[l];
    if (l == r) return;
    int index = find_ind(l + 1, r, arr[l]);
    if (no_ans) {
        return;
    }
    if (index != -1) {
        postorder(index, r);
        postorder(l + 1, index - 1);
    }
    else {
        postorder(l + 1, r);
    }
}

int main(void) {
    int t;
    scanf("%d", &t);
    while (t--) {
        no_ans = 0;
        k = 0;
        int n;
        scanf("%d", &n);
        for (int i = 0; i < n; i++) {
            cin >> arr[i];
        }
        postorder(0, n - 1);
        if (no_ans) {
            cout << "NO" << endl;
        }
        else {
            for (int i = n - 1; i >= 0; i--) {
                cout << ans[i] << " ";
            }
            cout << endl;
        }
    }
    return 0;
}
As we know, pre-order follows the parent, left, right sequence.
In order to construct the tree we need to follow a few basic steps. Your question consists of the series 6, 2, 1, 4, 3, 7, 10, 9, 11.
1. The first number of the series will be the root (parent), i.e. 6.
2. Find the first number greater than 6; in this series that is 7, so the right subtree starts from there, and everything to the left of 7 forms the left subtree.

        6
       / \
      2   7
     / \   \
    1   4   10
       /   /  \
      3   9    11

3. In the same way, keep applying the basic BST ordering rule to each subtree.
The post-order series is then L, R, N, i.e. 1, 3, 4, 2, 9, 11, 10, 7, 6.
Here is the full code:
class Tree:
    def __init__(self, data=None):
        self.left = None
        self.right = None
        self.data = data

    def add(self, data):
        if self.data is None:
            self.data = data
        else:
            if data < self.data:
                if self.left is None:
                    self.left = Tree(data)
                else:
                    self.left.add(data)
            elif data > self.data:
                if self.right is None:
                    self.right = Tree(data)
                else:
                    self.right.add(data)

    def inOrder(self):
        if self.data:
            if self.left is not None:
                self.left.inOrder()
            print(self.data)
            if self.right is not None:
                self.right.inOrder()

    def postOrder(self):
        if self.data:
            if self.left is not None:
                self.left.postOrder()
            if self.right is not None:
                self.right.postOrder()
            print(self.data)

    def preOrder(self):
        if self.data:
            print(self.data)
            if self.left is not None:
                self.left.preOrder()
            if self.right is not None:
                self.right.preOrder()

arr = [6, 2, 1, 4, 3, 7, 10, 9, 11]
root = Tree()
for i in range(len(arr)):
    root.add(arr[i])
root.postOrder()
Since it is a binary search tree, the in-order traversal will always be the sorted elements (left < root < right).
So you can easily write its in-order traversal first, which is: 1, 2, 3, 4, 6, 7, 9, 10, 11,
given the pre-order: 6, 2, 1, 4, 3, 7, 10, 9, 11.
In-order : left, root, right
Pre-order : root, left, right
Post-order : left, right, root
Now, from the pre-order we get that the root is 6.
Using the in-order and pre-order results:
Step 1:

           6
          / \
         /   \
        /     \
       /       \
  {1,2,3,4}   {7,9,10,11}

Step 2: The next root, using the in-order traversal, is 2:

           6
          / \
         /   \
        /     \
       /       \
      2     {7,9,10,11}
     / \
    /   \
   /     \
  1     {3,4}

Step 3: Similarly, the next root is 4:

           6
          / \
         /   \
        /     \
       /       \
      2     {7,9,10,11}
     / \
    /   \
   /     \
  1       4
         /
        3

Step 4: The next root is 3, but no other element remains to be placed in the subtree under 3. Considering the next root, 7, now:

           6
          / \
         /   \
        /     \
       /       \
      2         7
     / \         \
    /   \     {9,10,11}
   /     \
  1       4
         /
        3

Step 5: The next root is 10:

           6
          / \
         /   \
        /     \
       /       \
      2         7
     / \         \
    /   \         10
   /     \       /  \
  1       4     9    11
         /
        3

This is how you can construct the tree and finally find its post-order traversal, which is: 1, 3, 4, 2, 9, 11, 10, 7, 6.

Algorithm for converting Binary tree to post-fix mathematical expression?

I have a binary tree for a mathematical expression (infix); I want to convert this tree directly to a postfix expression (on a stack).
Can anybody suggest the algorithm?
What you’re searching for is known as post-order tree traversal:
postorder(node)
    if node.left ≠ null then postorder(node.left)
    if node.right ≠ null then postorder(node.right)
    print node.value
Easy: each node is (Left, Right, Data).
Start with the root node; execute the algorithm for the left subtree if available, then execute the algorithm for the right subtree, and then print the data.
TreeNode = ([TreeNode], Data, [TreeNode])

TreeToPostfix: [TreeNode] -> Data*

TreeToPostfix(nil) = []
TreeToPostfix((left, data, right)) ==
    TreeToPostfix(left) ++ TreeToPostfix(right) ++ [data]
For example:
     +
    / \
   *   -
  / \ / \
 2  3 4  5
Produces: 2 3 * 4 5 - +
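To make this concrete, here is a small runnable Python sketch of the same post-order idea (the Node class and the to_postfix name are illustrative, not from the answers above):

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def to_postfix(node):
    # Post-order walk: left subtree, right subtree, then the node itself.
    if node is None:
        return []
    return to_postfix(node.left) + to_postfix(node.right) + [node.value]

# the example expression tree: (2 * 3) + (4 - 5)
expr = Node('+',
            Node('*', Node('2'), Node('3')),
            Node('-', Node('4'), Node('5')))
print(' '.join(to_postfix(expr)))  # 2 3 * 4 5 - +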
