What is the time complexity of binary tree level-order traversal? Is it O(n) or O(log n)?
void levelorder(Node *n)
{
    if (n == NULL) return;
    std::queue<Node *> q;
    q.push(n);
    while (!q.empty())
    {
        Node *node = q.front();
        q.pop();
        // ... do something with node ...
        if (node->left != NULL)
            q.push(node->left);
        if (node->right != NULL)
            q.push(node->right);
    }
}
It is O(n), or to be exact, Θ(n).
Look at each node in the tree: each node is "visited" at most three times and at least once - when it is discovered (all nodes), when returning from its left son (non-leaves), and when returning from its right son (non-leaves). That is at most 3n and at least n visits in total, and each visit is O(1) (a queue push or pop), which totals to Θ(n).
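To see the count concretely, here is a minimal self-contained C++ sketch (the Node struct, the counters, and the driver are my additions for illustration, not part of the question) that counts queue operations; each node is pushed at most once and popped exactly once, so both counters come out equal to the number of nodes:

#include <iostream>
#include <queue>

struct Node {
    int val;
    Node *left, *right;
    Node(int v) : val(v), left(nullptr), right(nullptr) {}
};

int main() {
    // Build a small tree:   1
    //                      / \
    //                     2   3
    //                    /
    //                   4
    Node n1(1), n2(2), n3(3), n4(4);
    n1.left = &n2; n1.right = &n3;
    n2.left = &n4;

    std::queue<Node *> q;
    int pushes = 0, pops = 0;
    q.push(&n1); ++pushes;
    while (!q.empty()) {
        Node *node = q.front();
        q.pop(); ++pops;
        std::cout << node->val << ' ';            // "visit" the node
        if (node->left)  { q.push(node->left);  ++pushes; }
        if (node->right) { q.push(node->right); ++pushes; }
    }
    std::cout << "\npushes=" << pushes << ", pops=" << pops << '\n'; // both 4
    return 0;
}

Running it prints the level order 1 2 3 4 followed by pushes=4, pops=4, matching the Θ(n) count of O(1) queue operations.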
Another way to approach this problem is to recognize that a level-order traversal is essentially a breadth-first search of a graph. A breadth-first traversal has a time complexity of O(|V| + |E|), where |V| is the number of vertices and |E| is the number of edges.
In a tree, the number of edges is exactly one less than the number of vertices (|E| = |V| − 1). This makes the traversal linear in the number of nodes overall.
The time and space complexities are both O(n), where n is the number of nodes.
Space complexity: O(n), since the queue size is proportional to the number of nodes (at worst, the widest level of the tree).
Time complexity: O(n), as each node is handled exactly twice: once during the enqueue operation and once during the dequeue operation.
This is a special case of BFS. You can read about BFS (breadth-first search) at http://en.wikipedia.org/wiki/Breadth-first_search .
Related
What is the time complexity of this particular implementation of Dijkstra's algorithm?
I know several answers to this question say O(E log V) when you use a min-heap, as do this article and this article. However, the article here says O(V + E log E), and it has similar (but not exactly the same) logic as the code below.
Different implementations of the algorithm can change the time complexity. I'm trying to analyze the complexity of the implementation below, but optimizations like checking visitedSet and ignoring repeated vertices in minHeap are making me doubt myself.
Here is the pseudocode:
// this part is O(V)
for each vertex in graph {
    distanceMap[vertex] = infinity
}

// initialize source vertex
minHeap.add(source vertex and 0 distance)
distanceMap[source] = 0
visitedSet.add(source vertex)

// loop through vertices: O(V)?
while (minHeap is not empty) {
    // removing from the heap is O(log n), but is n the V or the E?
    vertex and distance = minHeap.removeMin
    if (distance > saved distance in distanceMap) continue while loop
    visitedSet.add(vertex)

    // looping through edges: O(E)?
    for (each neighbor of vertex) {
        if (visitedSet contains neighbor) continue for loop
        totalDistance = distance + weight to neighbor
        if (totalDistance < saved distance in distanceMap) {
            // adding to the heap is O(log n), but is n the V or the E?
            minHeap.add(neighbor and totalDistance)
            distanceMap[neighbor] = totalDistance
        }
    }
}
Notes:
Each vertex that is reachable from the source vertex is visited at least once.
Each edge (neighbor) of each vertex is checked, but ignored if the neighbor is in visitedSet.
A neighbor is added to the heap only if it has a shorter distance than the currently known distance. (Unknown distances are assumed to have a default length of infinity.)
What is the actual time complexity of this implementation and why?
Despite the test, this implementation of Dijkstra may put Ω(E) items in the priority queue. This will cost Ω(E log E) with every comparison-based priority queue.
Why not E log V? Well, assuming a connected, simple, nontrivial graph, we have Θ(E log V) = Θ(E log E) since log (V−1) ≤ log E < log V² = 2 log V.
The O(E + V log V)-time implementations of Dijkstra's algorithm depend on an (amortized) constant-time DecreaseKey operation, which avoids multiple heap entries for an individual vertex. The implementation in this question will likely be faster in practice on sparse graphs, however.
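For reference, here is a minimal C++ sketch of the lazy-deletion variant being analyzed (my own illustration, not the asker's code; it drops the visitedSet, since the stale-entry check alone suffices for correctness). Stale heap entries are skipped on removal instead of being decreased in place, which is why the heap can hold Ω(E) entries and each push or pop costs O(log E):

#include <limits>
#include <queue>
#include <utility>
#include <vector>

// adj[u] holds (v, w) pairs; returns shortest distances from src.
std::vector<long long> dijkstra(
        const std::vector<std::vector<std::pair<int, int>>> &adj, int src) {
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(adj.size(), INF);
    // Min-heap of (distance, vertex); may grow to O(E) entries.
    std::priority_queue<std::pair<long long, int>,
                        std::vector<std::pair<long long, int>>,
                        std::greater<>> heap;
    dist[src] = 0;
    heap.push({0, src});
    while (!heap.empty()) {              // up to O(E) pops in total
        auto [d, u] = heap.top();
        heap.pop();                      // O(log E)
        if (d > dist[u]) continue;       // stale entry: skip (lazy deletion)
        for (auto [v, w] : adj[u]) {     // each vertex relaxes its edges once
            if (d + w < dist[v]) {
                dist[v] = d + w;
                heap.push({dist[v], v}); // O(log E); no DecreaseKey needed
            }
        }
    }
    return dist;
}

Each edge can add at most one entry to the heap, so the total work is O(E log E), which by the argument above is the same as O(E log V) on a connected simple graph.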
I am looking for an algorithm to do BFS in O(n) on an n-ary tree. I found the following algorithm, but I have a problem analyzing its time complexity.
I am not sure if it's O(n) or O(n^2).
Can someone explain the time complexity, or give an alternative algorithm which runs in O(n)?
Thanks
breadthFirstSearch = (root, output = []) => {
    if (!root) return output;
    const q = new Queue();
    q.enqueue(root);
    while (!q.isEmpty()) {
        const node = q.dequeue();
        output.push(node.val);
        for (let child of node.children) {
            q.enqueue(child);
        }
    }
    return output;
};
That is indeed a BFS algorithm for a generic tree. If you define 𝑛 as the 𝑛 in 𝑛-ary tree, then the time complexity is not related to that 𝑛.
If however, 𝑛 represents the total number of nodes in the tree, then the time complexity is O(𝑛) because every node is enqueued exactly once, and dequeued exactly once. As queue operations are O(1), the time complexity is O(𝑛).
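For comparison, here is the same algorithm as a minimal self-contained C++ sketch (my illustration; the Node layout with a children vector is an assumption, not taken from the question):

#include <queue>
#include <vector>

struct Node {
    int val;
    std::vector<Node *> children;
};

// Collects values in breadth-first order. Each node is enqueued exactly
// once and dequeued exactly once, and queue operations are O(1), so the
// traversal is O(n) for a tree with n nodes in total.
std::vector<int> breadthFirstSearch(Node *root) {
    std::vector<int> output;
    if (root == nullptr) return output;
    std::queue<Node *> q;
    q.push(root);
    while (!q.empty()) {
        Node *node = q.front();
        q.pop();
        output.push_back(node->val);
        for (Node *child : node->children)
            q.push(child);
    }
    return output;
}

Note that the inner loop over children does not make this quadratic: summed over the whole run, it executes once per edge, and a tree has exactly n − 1 edges.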
We implement a disjoint-set data structure with trees. In this data structure, makeset() creates a set with one element, and merge(i, j) merges the trees of sets i and j in such a way that the tree with the lower height becomes a child of the root of the other tree. If we do n makeset() operations and n−1 merge() operations in some arbitrary order, and then do one find operation, what is the cost of that find operation in the worst case?
I) O(n)
II) O(1)
III) O(n log n)
IV) O(log n)
Answer: IV.
Could anyone give a good hint as to how the author arrived at this solution?
The O(log n) find is only true when you use union by rank (also known as weighted union). With this optimisation, we always place the tree with the lower rank under the root of the tree with the higher rank. If both have the same rank, we choose arbitrarily, but increase the rank of the resulting tree by one. This gives an O(log n) bound on the depth of the tree. We can prove this by showing that a node that is i levels below the root (equivalent to being in a tree of rank >= i) is in a tree of at least 2^i nodes (this is the same as showing that a tree of size n has depth at most log n). This is easily done with induction:
Induction hypothesis: tree size is >= 2^j for all j < i.
Case i == 0: the node is the root, and the size is 1 = 2^0.
Case i + 1: the length of the path is i + 1 if it was i and the tree was then placed underneath another tree. By the induction hypothesis, the node was in a tree of size >= 2^i at that time. It is being placed under another tree, which by our merge rules means that tree has at least rank i as well, and therefore also had >= 2^i nodes. The new tree therefore has >= 2^i + 2^i = 2^(i + 1) nodes.
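To make the scheme concrete, here is a minimal C++ sketch of union by rank (my illustration of the rule described above; deliberately without path compression, matching the question's setup):

#include <vector>

struct DisjointSet {
    std::vector<int> parent, rank_;

    // n makeset() operations: every element starts as its own root of rank 0.
    explicit DisjointSet(int n) : parent(n), rank_(n, 0) {
        for (int i = 0; i < n; ++i) parent[i] = i;
    }

    // O(log n): the depth of any tree is bounded by its rank.
    int find(int x) {
        while (parent[x] != x) x = parent[x]; // no path compression here
        return x;
    }

    void merge(int i, int j) {
        int ri = find(i), rj = find(j);
        if (ri == rj) return;
        if (rank_[ri] < rank_[rj]) {
            parent[ri] = rj;       // lower rank goes under the higher-rank root
        } else if (rank_[rj] < rank_[ri]) {
            parent[rj] = ri;
        } else {
            parent[rj] = ri;       // equal ranks: pick one root arbitrarily...
            ++rank_[ri];           // ...and increase its rank by one
        }
    }
};

After n makeset() and n − 1 merge() operations, any tree has at most n nodes, so by the argument above its rank (and hence its depth) is at most log n, which is what bounds the find.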
I have seen various posts here that compute the diameter of a binary tree. One such solution can be found here (look at the accepted solution, NOT the code highlighted in the problem).
I'm confused about why the time complexity of that code would be O(n^2). I don't see how traversing the nodes of a tree twice (once for the height, via getHeight(), and once for the diameter, via getDiameter()) would be n^2 instead of n + n, which is 2n. Any help would be appreciated.
As you mentioned, the time complexity of getHeight() is O(n).
For each node, the function getHeight() is called, and each such call costs O(n). Hence the complexity of the entire algorithm (over all n nodes) is O(n·n) = O(n^2).
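As a concrete illustration (my sketch with an assumed Node struct; not the exact code from the linked answer), the naive version recomputes the height from scratch inside every recursive diameter call:

#include <algorithm>

struct Node {
    Node *left;
    Node *right;
};

// O(n) per call: visits every node of the subtree.
int getHeight(Node *root) {
    if (root == nullptr) return -1;
    return std::max(getHeight(root->left), getHeight(root->right)) + 1;
}

// Called once per node, and each call performs an O(n) height
// computation, so the total work is O(n^2).
int getDiameter(Node *root) {
    if (root == nullptr) return 0;
    int throughRoot = getHeight(root->left) + getHeight(root->right) + 2;
    return std::max({throughRoot,
                     getDiameter(root->left),
                     getDiameter(root->right)});
}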
It should be O(N). To calculate the height of every subtree rooted at every node, you only have to traverse the tree once, using a post-order traversal.
int treeHeight(Node *root)
{
    if (root == NULL) return -1;
    root->height = max(treeHeight(root->rChild), treeHeight(root->lChild)) + 1;
    return root->height;
}
This visits each node exactly once, so it has order O(N).
Combine this with the result from the linked source, and you will be able to determine which two nodes have the longest path between them in, at worst, one more traversal.
Indeed, this describes the way to do it in O(N).
The difference between this solution (the optimized one) and the referenced one is that the referenced solution re-computes the tree height each time after shrinking the problem by only one node (the root node). Thus, from the above, the complexity will be O(N + (N − 1) + ... + 1).
The sum
1 + 2 + ... + N = N(N + 1)/2
and so the total cost of all the operations from the repeated calls to getHeight() is O(N^2).
For completeness' sake: conversely, in the optimized solution getHeight() has complexity O(1) after the precomputation, because each node stores its height as a data member.
All subtree heights can be precalculated in O(n) time, so the total time complexity of finding the diameter is O(n).
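For illustration, here is one common way to fold the height computation into a single post-order pass (my sketch, again with an assumed Node struct; this shows the idea behind the optimized solution rather than its exact code):

#include <algorithm>

struct Node {
    Node *left;
    Node *right;
};

// Returns the height of the subtree and updates diameter (the longest
// path, measured in edges) as a side effect. Each node is visited
// exactly once, so the whole computation is O(n).
int heightAndDiameter(Node *root, int &diameter) {
    if (root == nullptr) return -1;
    int lh = heightAndDiameter(root->left, diameter);
    int rh = heightAndDiameter(root->right, diameter);
    diameter = std::max(diameter, lh + rh + 2); // longest path through this node
    return std::max(lh, rh) + 1;
}

Usage: initialize int diameter = 0; and call heightAndDiameter(root, diameter); afterwards, diameter holds the length of the longest path in edges.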
Below is an iterative algorithm to traverse a binary search tree in in-order fashion (first the left child, then the parent, finally the right child) without using a stack:
(Idea: find the leftmost child of the tree, then repeatedly find the successor of the node at hand and print its value, until there are no nodes left.)
void In-Order-Traverse(Node root){
    Node current = Min-Tree(root); // start from the leftmost child
    while (current != null){
        print-on-screen(current.key);
        current = Successor(current);
    }
    return;
}

Node Min-Tree(Node root){ // find the leftmost child
    Node current = root;
    while (current.leftChild != null)
        current = current.leftChild;
    return current;
}

Node Successor(Node root){
    if (root.rightChild != null) // if root has a right child, find the leftmost child of the right subtree
        return Min-Tree(root.rightChild);
    else{
        Node current = root;
        // climb while current is a right child; stop at the first left-turn edge (or at the root)
        while (current.parent != null && current.parent.leftChild != current)
            current = current.parent;
        return current.parent;
    }
}
It's been claimed that the time complexity of this algorithm is Θ(n), assuming there are n nodes in the BST, which is surely correct. However, I cannot convince myself of this: I suspect that some nodes are traversed more than a constant number of times, depending on the number of nodes in their subtrees, and that summing up all these visits wouldn't yield a time complexity of Θ(n).
Any idea or intuition on how to prove it?
It is easier to reason with edges rather than nodes. Let us reason based on the code of the Successor function.
Case 1 (then branch)
For all nodes with a right child, we will visit the right subtree once (a "right-turn" edge), then always visit the left subtree ("left-turn" edges) with the Min-Tree function. We can prove that such a traversal creates a path whose edges are unique - the edges will not be repeated in any traversal made from any other node with a right child, since the traversal ensures that you never visit any "right-turn" edge of other nodes in the tree. (Proof by construction.)
Case 2 (else branch)
For all nodes without a right child (the else branch), we visit the ancestors by following "right-turn" edges until we have to make a "left-turn" edge or we encounter the root of the binary tree. Again, the edges in the path generated are unique - they will never be repeated in any other traversal made from any other node without a right child. This is because:
Except for the starting node and the node reached by following the "left-turn" edge, all other nodes in between have a right child (which means those are excluded from the else branch). The starting node, of course, does not have a right child.
Each node has a unique parent (only the root node has no parent), and the path to the parent is either a "left-turn" or a "right-turn" (the node is a left child or a right child). Given any node (ignoring the right-child condition), there is only one path that creates the pattern: many "right-turns" followed by one "left-turn".
Since the nodes in between have a right child, there is no way for an edge to appear in two traversals starting at different nodes. (Recall that we are currently considering only nodes without a right child.)
(The proof here is quite hand-wavy, but I think it can be proven formally by contradiction.)
Since the edges are unique, the total number of edges traversed in case 1 alone (or in case 2 alone) is O(n) (the number of edges in a tree equals the number of vertices minus 1). Summing the two cases, the in-order traversal is therefore O(n).
Note that this only shows each edge is visited at most once - the proof does not establish that every edge is visited - but the number of edges is bounded by the number of vertices, which is exactly what we need.
We can easily see that it is also Omega(n) (each node is visited once), so we can conclude that it is Theta(n).
The given program runs in Θ(N) time. Θ(N) doesn't mean that each node is visited exactly once - remember there is a constant factor. So Θ(N) could actually be bounded by 5N or 10N or even 1000N; as such, it doesn't give you an exact count of the number of times a node is visited.
The time complexity of the iterative in-order traversal of a binary search tree can be analyzed as follows.
Consider a tree with N nodes, and let the execution time be denoted by the complexity function T(N).
Let the left subtree and the right subtree contain X and N−X−1 nodes, respectively. Then the time complexity is
T(N) = T(X) + T(N−X−1) + c
Now consider the two extreme cases of a BST.
CASE 1: A BST which is perfectly balanced, i.e. both subtrees have an equal number of nodes. For example, consider the BST shown below:
10
/ \
5 14
/ \ / \
1 6 11 16
For such a tree, the complexity function is
T(N) = 2 T(⌊N/2⌋) + c
With a = 2, b = 2 and f(N) = c = O(N^(log_b a − ε)) (take ε = 1), case 1 of the Master theorem applies and gives T(N) = Θ(N^(log_b a)) = Θ(N).
CASE 2: A fully unbalanced BST, i.e. either the left subtree or the right subtree is empty, so X = 0. For example, consider the BST shown below:
10
/
9
/
8
/
7
Now T(N) = T(0) + T(N−1) + c; absorbing the constant T(0) into c and unrolling:
T(N) = T(N−1) + c
T(N) = T(N−2) + c + c
T(N) = T(N−3) + c + c + c
.
.
.
T(N) = T(0) + N·c
Since T(0) = K, where K is a constant,
T(N) = K + N·c
Therefore T(N) = Θ(N).
Thus the complexity is Θ(N) for all the cases.
We focus on edges instead of nodes.
(For better intuition, look at this picture: http://i.stack.imgur.com/WlK5O.png)
We claim that in this algorithm every edge is visited at most twice (actually, it's visited exactly twice): the first time when it's traversed downward, and the second time when it's traversed upward.
To visit an edge more than twice, we would have to traverse it downward a second time: down, up, down, ...
We prove that it's not possible to have a second downward visit of an edge.
Let's assume that we traverse an edge (u, v) downward for the second time; this means that one of the ancestors of u has a successor which is a descendant of u.
This is not possible :
We know that when we traverse an edge upward, we are looking for a left-turn edge in order to find a successor. So u lies on the left side of that successor, and the successor of that successor lies on its right side. Since finding a successor only ever moves us to the right or upward, never to the left, we can never reach u again, and therefore we can never traverse the edge (u, v) downward a second time.