I am looking for an algorithm to do BFS in O(n) for an n-ary tree. I found the following algorithm, but I have trouble analysing its time complexity.
I am not sure whether it's O(n) or O(n^2).
Can someone explain the time complexity, or give an alternative algorithm that runs in O(n)?
Thanks
breadthFirstSearch = (root, output = []) => {
  if (!root) return output;
  // Queue is assumed to provide O(1) enqueue, dequeue and isEmpty
  const q = new Queue();
  q.enqueue(root);
  while (!q.isEmpty()) {
    const node = q.dequeue();          // each node is dequeued exactly once
    output.push(node.val);
    for (let child of node.children) { // and each node is enqueued exactly once
      q.enqueue(child);
    }
  }
  return output;
};
That is indeed a BFS algorithm for a generic tree. If you define n as the n in n-ary tree, then the time complexity is not related to that n.
If however n represents the total number of nodes in the tree, then the time complexity is O(n), because every node is enqueued exactly once and dequeued exactly once, and queue operations are O(1).
The sorting algorithm can be described as follows:
1. Create a Binary Search Tree from the array data.
(For multiple occurrences, increment the occurrence counter of the current node.)
2. Traverse the BST in inorder fashion.
(Inorder traversal returns the elements of the array in sorted order.)
3. At each node in the inorder traversal, overwrite the array element at the current index (index beginning at 0) with the current node's value.
Here's a Java implementation for the same:
Structure of Node Class
class Node {
    Node left;
    int data;
    int occurence; // how many times this value has been inserted
    Node right;

    Node(int data) { this.data = data; this.occurence = 1; }
    int getData() { return data; }
    int getOccurence() { return occurence; }
}
inorder function
(The return type is int just to carry the correct index across recursive calls; it serves no other purpose.)
public int inorder(Node root, int[] arr, int index) {
    if (root == null) return index;
    index = inorder(root.left, arr, index);        // fill in all smaller values first
    for (int i = 0; i < root.getOccurence(); i++)  // write back duplicates as well
        arr[index++] = root.getData();
    index = inorder(root.right, arr, index);       // then all larger values
    return index;
}
main()
public static void main(String[] args) {
    int[] arr = new int[]{100, 100, 1, 1, 1, 7, 98, 47, 13, 56};
    BinarySearchTree bst = new BinarySearchTree(new Node(arr[0]));
    for (int i = 1; i < arr.length; i++)
        bst.insert(bst.getRoot(), arr[i]);
    int dummy = bst.inorder(bst.getRoot(), arr, 0); // overwrites arr with the sorted order
    System.out.println(Arrays.toString(arr));
}
The space complexity is terrible, I know, but it should not be such a big issue unless the sort is used on an extremely huge dataset. However, as I see it, isn't the time complexity O(n)? (Insertion and retrieval in a BST are O(log n), and each element is touched once, making it O(n).) Correct me if I am wrong, as I haven't yet studied Big-O well.
Assuming that the amortized (average) complexity of an insertion is O(log n), N inserts (construction of the tree) will take O(log(1) + log(2) + ... + log(N - 1) + log(N)) = O(log(N!)) = O(N log N) (by Stirling's approximation). To read back the sorted array, perform an in-order depth-first traversal, which visits each node once and is hence O(N). Combining the two you get O(N log N).
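As a quick sanity check on the O(log(N!)) = O(N log N) step (my own addition, not part of the answer above), you can bound the sum directly without Stirling:
log(N!) = log(1) + log(2) + ... + log(N) <= N * log(N)
log(N!) >= log(N/2) + log(N/2 + 1) + ... + log(N) >= (N/2) * log(N/2)
so log(N!) is sandwiched between constant multiples of N log N, i.e. it is Theta(N log N).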
However, this requires that the tree is always balanced! This will not be the case in general for the most basic binary tree, as insertions do not check the relative depths of each child subtree. There are many self-balancing variants - the two most famous being red-black trees and AVL trees. However, the implementation of balancing is quite complicated and often leads to a higher constant factor in real-life performance.
The goal was to implement an O(n) algorithm to sort an array of n elements, with each element in the range [1, n^2].
In that case radix sort (counting variation) would be O(n), taking a fixed number of passes, log_b(n^2), where b is the "base" used for the digits and is chosen as a function of n: with b == n it takes two passes, with b == sqrt(n) it takes four passes, and if n is small enough, b == n^2 takes a single pass and counting sort can be used directly. b can be rounded up to the next power of 2 so that division and modulo can be replaced by binary shift and binary AND. Radix sort needs O(n) extra space, but so do the links of a binary tree.
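A minimal C++ sketch of that counting-variation radix sort, assuming the keys lie in [1, n^2] and choosing base b == n so that two stable passes suffice (the function names and the test data are my own, purely illustrative):

#include <iostream>
#include <vector>

void radixSortSquareRange(std::vector<int>& arr)
{
    const int n = static_cast<int>(arr.size());
    if (n <= 1) return;

    // One stable counting-sort pass on the given base-n digit (0 = low, 1 = high).
    auto countingPass = [n](const std::vector<int>& src, std::vector<int>& dst, int digit)
    {
        auto digitOf = [n, digit](int value) {
            int v = value - 1;                       // shift [1, n^2] down to [0, n^2 - 1]
            return digit == 0 ? v % n : (v / n) % n; // both digits now fit in [0, n - 1]
        };
        std::vector<int> count(n, 0);
        for (int v : src) count[digitOf(v)]++;
        for (int i = 1; i < n; i++) count[i] += count[i - 1]; // prefix sums = end positions
        for (int i = n - 1; i >= 0; i--)                      // go backwards to keep the pass stable
            dst[--count[digitOf(src[i])]] = src[i];
    };

    std::vector<int> tmp(n);
    countingPass(arr, tmp, 0); // first pass: sort by the low base-n digit
    countingPass(tmp, arr, 1); // second pass: stable-sort by the high digit
}

int main()
{
    std::vector<int> arr = {9, 1, 16, 4, 4, 2, 7, 13}; // n = 8, all values within [1, 64]
    radixSortSquareRange(arr);
    for (int v : arr) std::cout << v << ' ';           // prints: 1 2 4 4 7 9 13 16
    std::cout << '\n';
}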
Trying to figure out why the complexity is O(n) for this code:
int sum(Node node) {
    if (node == null) {
        return 0;
    }
    return sum(node.left) + node.value + sum(node.right);
}
Node is:
class Node {
    int value;
    Node left;
    Node right;
}
This is from the CCI book. Shouldn't it be O(2^n), since there are two recursive calls at each node?
Yet this one is O(2^n), and it is clear to me why:
int f(int n) {
    if (n <= 1) {
        return 1;
    }
    return f(n - 1) + f(n - 1);
}
Thanks for the help.
An algorithm is said to take linear time, or O(n) time, if its time complexity is O(n). Informally, this means that for large enough input sizes the running time increases linearly with the size of the input. For example, a procedure that adds up all elements of a list requires time proportional to the length of the list.
From Wikipedia
It is very reasonable that the algorithm's complexity is O(n), since the number of recursive calls is proportional to the number of items in the tree: there are n items in the tree and we visit each item only once, which is a linear relation.
In contrast, the other algorithm is very similar to the recursive Fibonacci sequence algorithm: there we process each value from 1 up to n far more than once, not in linear proportion to n, which explains why it has O(2^n) complexity.
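One way to make the contrast concrete (a sketch of my own, not from the answer above) is to count the calls in each case. For sum(node), every node in the tree is the argument of exactly one call, plus one call per null child, so there are at most 2n + 1 calls, each doing O(1) work outside the recursion, giving O(n) in total. For f(n), the number of calls C(n) satisfies
C(n) = 2 * C(n - 1) + 1, with C(1) = 1,
which unrolls to 1 + 2 + 4 + ... + 2^(n - 1) = 2^n - 1 calls, hence O(2^n).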
I have seen various posts here that compute the diameter of a binary tree. One such solution can be found here (look at the accepted solution, NOT the code highlighted in the problem).
I'm confused about why the time complexity of the code would be O(n^2). I don't see how traversing the nodes of a tree twice (once for the height, via getHeight(), and once for the diameter, via getDiameter()) would be n^2 instead of n + n, which is 2n. Any help would be appreciated.
As you mentioned, the time complexity of getHeight() is O(n).
For each node, the function getHeight() is called on that node's subtree, so the work per node is up to O(n). Hence the complexity of the entire algorithm (over all n nodes) is O(n * n) = O(n^2).
It should be O(N) to calculate the height of every subtree rooted at every node; you only have to traverse the tree one time, computing heights bottom-up (a post-order traversal):
int treeHeight(Node *root)
{
    if (root == NULL) return -1; // an empty subtree has height -1
    // Compute and cache this node's height from its children's heights.
    root->height = max(treeHeight(root->rChild), treeHeight(root->lChild)) + 1;
    return root->height;
}
This visits each node once, so it has order O(N).
Combine this with the result from the linked source, and you will be able to determine which 2 nodes have the longest path between them in, at worst, one more traversal.
Indeed this describes the way to do it in O(N).
The difference between this solution (the optimized one) and the referenced one is that the referenced solution re-computes the tree height every time after shrinking the search size by only 1 node (the root node). Thus, from the above, the complexity will be O(N + (N - 1) + ... + 1).
The sum
1 + 2 + ... + N
is equal to
N(N + 1)/2
and so the cost of all the operations from the repeated calls to getHeight() adds up to O(N^2).
For completeness' sake, conversely, in the optimized solution getHeight() has complexity O(1) after the precomputation, because each node stores its height as a data member.
All subtree heights may be precalculated (using O(n) time), so the total time complexity of finding the diameter becomes O(n).
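For concreteness, here is a minimal C++ sketch of that single-pass idea (my own illustration, not the code from the linked answer; the Node members and function names are hypothetical): the recursion returns each subtree's height and updates the best diameter seen so far on the way back up.

#include <algorithm>

struct Node {
    Node *lChild = nullptr;
    Node *rChild = nullptr;
};

// Returns the height of the subtree rooted at root (empty subtree = -1) and
// updates diameter, the longest path measured in edges. Every node is visited
// exactly once, so the whole computation is O(N).
int heightAndDiameter(Node *root, int &diameter)
{
    if (root == nullptr) return -1;
    int hl = heightAndDiameter(root->lChild, diameter);
    int hr = heightAndDiameter(root->rChild, diameter);
    diameter = std::max(diameter, hl + hr + 2); // longest path that passes through this node
    return std::max(hl, hr) + 1;
}

int treeDiameter(Node *root)
{
    int diameter = 0;
    heightAndDiameter(root, diameter);
    return diameter;
}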
What is the time complexity of binary tree level-order traversal? Is it O(n) or O(log n)?
void levelorder(Node *n)
{
    if (n == NULL) return;    // guard against an empty tree
    queue<Node *> q;          // std::queue: push/pop/front are all O(1)
    q.push(n);
    while (!q.empty())
    {
        Node *node = q.front();
        DoSmthwith(node);     // process the current node (placeholder)
        q.pop();
        if (node->left != NULL)
            q.push(node->left);
        if (node->right != NULL)
            q.push(node->right);
    }
}
It is O(n), or to be exact, Theta(n).
Have a look at each node in the tree: each node is "visited" at most 3 times and at least once - when it is discovered (all nodes), when coming back from its left son (non-leaf nodes), and when coming back from its right son (non-leaf nodes) - so there are at most 3n and at least n visits in total. Each visit is O(1) (a queue push/pop), totaling Theta(n).
Another way to approach this problem is to identify that a level-order traversal is very similar to the breadth-first search of a graph. A breadth-first traversal has a time complexity of O(|V| + |E|), where |V| is the number of vertices and |E| is the number of edges.
In a tree with n nodes there are exactly n - 1 edges, so |V| + |E| is linear in the number of nodes.
The time and space complexities are O(n), where n is the number of nodes.
Space complexity - O(n): the queue size is proportional to the number of nodes.
Time complexity - O(n), as each node is handled twice: once during its enqueue operation and once during its dequeue operation.
This is a special case of BFS (breadth-first search); you can read about BFS at http://en.wikipedia.org/wiki/Breadth-first_search
Given the following algorithm for inserting elements into a BST:
void InsertNode(Node* &treeNode, Node *newNode)
{
    if (treeNode == NULL)
        treeNode = newNode;
    else if (newNode->key < treeNode->key)
        InsertNode(treeNode->left, newNode);
    else
        InsertNode(treeNode->right, newNode);
}
This algorithm runs in O(n) in the worst case.
Is it possible to insert elements into a BST using an algorithm with a lower worst-case complexity than O(n)?
Remark 1: this is not homework (I'm preparing for an upcoming exam).
Remark 2: without using AVL trees.
Thanks
An insert is equivalent to a search operation. Clearly, if your tree is not balanced, the worst case will always be a tree in the form of a linked list, so there is no way to avoid the O(n).
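To make that worst case concrete, here is a small sketch reusing InsertNode from the question (the Node struct and the driver are my own, hypothetical additions): inserting keys in increasing order produces a right-going chain, so the n-th insert has to walk past all n - 1 nodes already in the tree.

#include <cstddef>

// Hypothetical Node matching the InsertNode signature used in the question.
struct Node {
    int key;
    Node *left = NULL;
    Node *right = NULL;
    Node(int k) : key(k) {}
};

void InsertNode(Node* &treeNode, Node *newNode)
{
    if (treeNode == NULL)
        treeNode = newNode;
    else if (newNode->key < treeNode->key)
        InsertNode(treeNode->left, newNode);
    else
        InsertNode(treeNode->right, newNode);
}

int main()
{
    Node *root = NULL;
    // Sorted input: every new key is larger than all previous ones, so each call
    // recurses down the entire right spine; the i-th insert makes i comparisons.
    for (int key = 1; key <= 10; key++)
        InsertNode(root, new Node(key));
    // The resulting "tree" is a right-going chain of depth 9 - effectively a
    // linked list - so a single insert (or search) costs Theta(n) in this worst case.
}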