Runtime complexity of brute-force for determining balanced binary tree

I have the following code for the brute-force method of determining whether a binary tree is balanced:
public boolean IsBalanced(Node root)
{
    if (root == null) return true;
    return Math.abs(maxDepth(root.left) - maxDepth(root.right)) <= 1
        && IsBalanced(root.left)
        && IsBalanced(root.right);
}

public int maxDepth(Node root)
{
    if (root == null) return 0;
    return Math.max(maxDepth(root.left), maxDepth(root.right)) + 1;
}
I don't understand why the worst-case runtime complexity is O(n^2) when the tree is skewed. My thinking is that if the tree is skewed, then the line
Math.abs(maxDepth(root.left) - maxDepth(root.right)) <= 1
would immediately find that the height of the root's left subtree is more than 1 greater than the height of its right subtree. The time complexity of the skewed case should then be O(n). What am I missing here? Thanks!

In the method IsBalanced(Node root), for a skewed tree, the first call to maxDepth(root.left) takes n recursive calls. Since root is not null, IsBalanced is then called on root.left, which again calls maxDepth on its left subtree, this time making n-1 recursive calls, and so on. The total work is therefore the sum of the first n natural numbers, n + (n-1) + ... + 1 = n(n+1)/2, i.e. O(n^2).
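Not part of the original answer, but for contrast, here is a sketch of the common single-pass variant (assuming the same Node class as in the question): it computes the height and checks balance in the same traversal, so every node is visited once and the whole check becomes O(n).

// Sketch only, assuming the same Node class as above.
// Returns the height of the subtree, or -1 as a sentinel meaning
// "this subtree is already unbalanced".
public int checkHeight(Node root)
{
    if (root == null) return 0;
    int left = checkHeight(root.left);
    if (left == -1) return -1;                  // left subtree unbalanced
    int right = checkHeight(root.right);
    if (right == -1) return -1;                 // right subtree unbalanced
    if (Math.abs(left - right) > 1) return -1;  // unbalanced at this node
    return Math.max(left, right) + 1;
}

public boolean IsBalancedSinglePass(Node root)
{
    return checkHeight(root) != -1;
}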

Related

Time complexity of determining if two binary trees are swap-equivalent

I have solved this problem about determining whether two binary trees are flip equivalent, using the code snippet below.
The problem is as follows:
For a binary tree T, we can define a flip operation as follows: choose any node, and swap the left and right child subtrees.
A binary tree X is flip equivalent to a binary tree Y if and only if we can make X equal to Y after some number of flip operations.
Given the roots of two binary trees root1 and root2, return true if the two trees are flip equivalent or false otherwise.
My reasoning is that the growth function is T(n) = 4T(n-1), one term for each of the four recursive calls, where n is the height of the larger tree. I would therefore expect a time complexity of O(4^n), but this does not match the reasoning in the solution, which says that the time complexity is O(max(N_1,N_2)).
What is flawed in the O(4^n) reasoning, given that the solution states a different time complexity?
class Solution {
    public boolean flipEquiv(TreeNode root1, TreeNode root2) {
        if (root1 == root2)
            return true;
        if (root1 == null || root2 == null || root1.val != root2.val)
            return false;
        return (flipEquiv(root1.left, root2.left) && flipEquiv(root1.right, root2.right) ||
                flipEquiv(root1.left, root2.right) && flipEquiv(root1.right, root2.left));
    }
}

I am a bit confused about a complexity comparison between two pieces of code: checking whether two binary trees are identical, and computing Fibonacci numbers recursively. The code for both is below.

The code below, which checks whether one binary tree is identical to another, has linear complexity, i.e. O(n), where n is the number of nodes of the binary tree with the fewer nodes.
boolean identical(Node a, Node b)
{
    /* 1. both empty -> true */
    if (a == null && b == null)
        return true;
    /* 2. both non-empty -> compare them */
    if (a != null && b != null)
        return (a.data == b.data
                && identical(a.left, b.left)
                && identical(a.right, b.right));
    /* 3. one empty, one not -> false */
    return false;
}
(Fibonacci series using recursion gives exponential complexity.)
The complexity of the code below is O(2^n).
class Fibonacci {
    static int fib(int n)
    {
        if (n <= 1)
            return n;
        return fib(n-1) + fib(n-2);
    }

    public static void main(String args[])
    {
        int n = 9;
        System.out.println(fib(n));
    }
}
My question is: both look similar, but one has linear complexity and the other exponential. Could anyone clarify the difference between the two algorithms?
Fibonacci Series
If you build a tree of the calls made by the recursive Fibonacci code, it will look like:
                      fib(n)
                   /          \
            fib(n-1)           fib(n-2)
           /        \         /        \
      fib(n-2)  fib(n-3)  fib(n-3)  fib(n-4)
At what level will you encounter fib(1), so that the tree can "stop"?
At the (n-1)th level you will encounter fib(1), and there the recursion stops.
Since there are (n-1) levels and each level can hold up to twice as many calls as the previous one, the number of nodes is of the order of 2^n.
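Not part of the original answer, but one way to see this concretely is to instrument the recursion with a call counter (a quick sketch; the class name is made up):

class FibCount {
    static long calls = 0;

    static int fib(int n) {
        calls++;
        if (n <= 1)
            return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        for (int n = 10; n <= 30; n += 10) {
            calls = 0;
            fib(n);
            // the call count grows geometrically (roughly a factor of 1.6 per
            // step of n), which is the exponential behaviour described above
            System.out.println("fib(" + n + ") made " + calls + " calls");
        }
    }
}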
Binary Tree Comparison
Let's consider your binary tree comparison. Assume both are complete binary trees. Your algorithm visits every node once, and if h is the height of the tree, the number of nodes is of the order of 2^h, so you could state the complexity in that case as O(2^h).
The O(n) here is equivalent to O(2^h), because n itself is about 2^h.
The difference originates in a different definition of n. While the naive recursive algorithm for Fibonacci numbers also performs a kind of traversal in a graph, the value of n is not defined by the number of nodes in that graph, but by the input number.
The binary tree comparison, however, has n defined as the number of nodes.
So n has a completely different meaning in these two algorithms, and it explains why the time complexity in terms of n comes out so differently.

Number of Leaf Nodes in a Binary Tree at a given Level?

Given a binary tree, how can we find the number of leaf nodes at a particular level, where the level of the root is 1, its children are at level 2, and so on?
You can simply use a BFS or DFS algorithm. Something like this (in pseudocode):
Node_counter(root, N):
1. IF root is null or N<1 return 0
2. IF N==1
2.1 if root is leaf return 1
2.2 otherwise return 0
3. Otherwise return Node_counter(root->left, N-1)+Node_counter(root->right, N-1)
The complexity is O(n), where n is the number of nodes in the tree.
private int noOfleafLevel(Node root, int leaflevel) {
    if (root == null)
        return 0;
    if (root.left == null && root.right == null && leaflevel == 1)
        return 1;
    else
        return noOfleafLevel(root.left, leaflevel - 1) + noOfleafLevel(root.right, leaflevel - 1);
}
This is code for counting the leaves at a particular level, descending one level at a time recursively.
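Not from the original answers, but since BFS was also mentioned, here is a sketch of a queue-based, true level-order version of the same count (it assumes the same Node class as above and needs java.util.Queue and java.util.LinkedList):

private int leavesAtLevelBFS(Node root, int level) {
    if (root == null || level < 1) return 0;
    Queue<Node> queue = new LinkedList<>();
    queue.add(root);
    int currentLevel = 1;
    while (!queue.isEmpty() && currentLevel < level) {
        int width = queue.size();
        for (int i = 0; i < width; i++) {       // advance the whole frontier by one level
            Node n = queue.poll();
            if (n.left != null) queue.add(n.left);
            if (n.right != null) queue.add(n.right);
        }
        currentLevel++;
    }
    int count = 0;
    for (Node n : queue)                        // the queue now holds exactly the target level
        if (n.left == null && n.right == null) count++;
    return count;
}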

Time Complexity for Finding the Minimum Value of a Binary Tree

I wrote a recursive function for finding the min value of a binary tree (assume that it is not ordered).
The code is as below.
// assume node values are positive ints
int minValue(Node n) {
    if (n == null) return 0;              // 0 acts as a "no value here" sentinel
    int leftmin = minValue(n.left);
    int rightmin = minValue(n.right);
    return min(n.data, leftmin, rightmin);
}
int min(int a, int b, int c) {
    // b and c are 0 when the corresponding subtree was empty, so ignore zeros
    if (b != 0 && c != 0) {
        int min = (a <= b) ? a : b;
        return (min <= c) ? min : c;
    }
    if (b == 0 && c == 0)                 // both subtrees empty
        return a;
    if (b == 0)                           // only the left subtree empty
        return (a <= c) ? a : c;
    // only the right subtree empty
    return (a <= b) ? a : b;
}
I guess the time complexity of the minValue function is O(n) by intuition.
Is this correct? Can someone show the formal proof of the time complexity of minValue function?
Assuming your binary tree is not ordered, your algorithm has O(N) running time, so your intuition is correct. The reason it takes O(N) is that with no ordering there is nothing that lets you rule out a subtree, so the function has to visit every one of the N nodes.
For a sorted and balanced binary tree, searching will take O(logN). The reason for this is that the search will only ever have to traverse one single path down the tree. A balanced tree with N nodes will have a height of log(N), and this explains the complexity for searching. Consider the following tree for example:
        5
      /   \
     3     7
    / \   / \
   1   4 6   8
There are 7 nodes in the tree, but the height is only 2, which is about the logarithm of the number of nodes. You can convince yourself that you only ever have to traverse one path down this tree to find a value, or to fail to find it.
Note that for a binary tree which is not balanced these complexities may not apply.
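A sketch of the formal argument the question asks for (my addition, not taken from the answers here): write the running time of minValue as a recurrence over the sizes of the two subtrees, T(n) = T(n_left) + T(n_right) + O(1), where n_left + n_right = n - 1. Each call does constant work outside its two recursive calls, and every node (plus each of the n + 1 null children) is the argument of exactly one call, so unrolling the recurrence gives T(n) = O(n).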
The number of comparisons is n-1. The proof is an old chestnut, usually applied to the question of how many matches are needed in a single-elimination tennis tournament. Each comparison removes exactly one number from consideration, so if there are initially n numbers in the tree, you need n-1 comparisons to reduce that to 1.
You can look up and remove the min/max of a BST in constant time O(1), if you implement it yourself and store a reference to the head/tail. Most implementations don't do that, storing only the root node. But if you analyze how a BST works, then given a reference to the min/max (or aliased as head/tail), you can find the next min/max in constant time.
See this for more info:
https://stackoverflow.com/a/74905762/1223975

How to find the first common ancestor of two nodes in a binary tree?

Following is my algorithm to find the first common ancestor of two nodes. But I don’t know how to calculate its time complexity; can anyone help?
public Tree commonAncestor(Tree root, Tree p, Tree q) {
    if (covers(root.left, p) && covers(root.left, q))
        return commonAncestor(root.left, p, q);
    if (covers(root.right, p) && covers(root.right, q))
        return commonAncestor(root.right, p, q);
    return root;
}

private boolean covers(Tree root, Tree p) { /* is p a descendant of root? */
    if (root == null) return false;
    if (root == p) return true;
    return covers(root.left, p) || covers(root.right, p);
}
Ok, so let's start by identifying what the worst case for this algorithm would be. covers searches the tree from left to right, so you get the worst-case behavior if the node you are searching for is the rightmost leaf, or it is not in the subtree at all. At this point you will have visited all the nodes in the subtree, so covers is O(n), where n is the number of nodes in the tree.
Similarly, commonAncestor exhibits worst-case behavior when the first common ancestor of p and q is deep down to the right in the tree. In this case, it will first call covers twice, getting the worst time behavior in both cases. It will then call itself again on the right subtree, which in the case of a balanced tree is of size n/2.
Assuming the tree is balanced, we can describe the run time by the recurrence relation T(n) = T(n/2) + O(n). Using the master theorem, we get the answer T(n) = O(n) for a balanced tree.
Now, if the tree is not balanced, we might in the worst case only reduce the size of the subtree by 1 for each recursive call, yielding the recurrence T(n) = T(n-1) + O(n). The solution to this recurrence is T(n) = O(n^2).
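Spelling out that last step (my addition, not in the original answer): unrolling T(n) = T(n-1) + O(n) gives n + (n-1) + ... + 1 = n(n+1)/2 units of work, which is O(n^2).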
You can do better than this, though.
For example, instead of simply determining which subtree contains p or q with covers, let's determine the entire path to p and q. This takes O(n), just like covers; we're just keeping more information. Now traverse those paths in parallel and stop where they diverge. This is always O(n).
If you have pointers from each node to their parent you can even improve on this by generating the paths "bottom-up", giving you O(log n) for a balanced tree.
Note that this is a space-time tradeoff, as while your code takes O(1) space, this algorithm takes O(log n) space for a balanced tree, and O(n) space in general.
As hammar’s answer demonstrates, your algorithm is quite inefficient, as many operations are repeated.
I would take a different approach: instead of testing, for every potential root node, whether the two given nodes lie in different sub-trees (which would make that node the first common ancestor), I would determine the paths from the root to the two given nodes and compare them. The last common node on the two paths from the root downwards is then also the first common ancestor.
Here’s an (untested) implementation in Java:
private List<Tree> pathToNode(Tree root, Tree node) {
    if (root == null) return null;
    List<Tree> path = new LinkedList<Tree>(), tmp;
    // root is the wanted node
    if (root == node) {
        path.add(root);
        return path;
    }
    // check if the left child of root is the wanted node
    if (root.left == node) {
        path.add(root);
        path.add(root.left);
        return path;
    }
    // check if the right child of root is the wanted node
    if (root.right == node) {
        path.add(root);
        path.add(root.right);
        return path;
    }
    // find the path to node in the left sub-tree
    tmp = pathToNode(root.left, node);
    if (tmp != null) {
        // path found below; prepend the current root to it
        tmp.add(0, root);
        return tmp;
    }
    // find the path to node in the right sub-tree
    tmp = pathToNode(root.right, node);
    if (tmp != null) {
        // path found below; prepend the current root to it
        tmp.add(0, root);
        return tmp;
    }
    return null;
}
public Tree commonAncestor(Tree root, Tree p, Tree q) {
    List<Tree> pathToP = pathToNode(root, p),
               pathToQ = pathToNode(root, q);
    // check whether both paths exist
    if (pathToP == null || pathToQ == null) return null;
    // walk both paths in parallel and remember the last node they share
    Iterator<Tree> iterP = pathToP.iterator(), iterQ = pathToQ.iterator();
    Tree ancestor = null;
    while (iterP.hasNext() && iterQ.hasNext()) {
        Tree nextP = iterP.next(), nextQ = iterQ.next();
        if (nextP != nextQ) break;
        ancestor = nextP;   // last matching node so far
    }
    // the last matching node is the first common ancestor
    return ancestor;
}
Both pathToNode and commonAncestor are in O(n).
