Time complexity of determining if two binary trees are swap-equivalent - algorithm

I have solved this problem about determining whether two binary trees are flip-equivalent with the code snippet below.
The problem is as follows:
For a binary tree T, we can define a flip operation as follows: choose any node, and swap the left and right child subtrees.
A binary tree X is flip equivalent to a binary tree Y if and only if
we can make X equal to Y after some number of flip operations.
Given the roots of two binary trees root1 and root2, return true if
the two trees are flip equivalent or false otherwise.
My reasoning is that the recurrence is T(n) = 4T(n-1), one term for each of the four recursive calls, where n is the height of the larger tree. I would then expect this to have a time complexity of O(4^n), but this does not match the reasoning in the solution, which says that the time complexity is O(max(N_1, N_2)), where N_1 and N_2 are the numbers of nodes in the two trees.
What is flawed in this O(4^n) reasoning, given that the solution states a different time complexity?
class Solution {
    public boolean flipEquiv(TreeNode root1, TreeNode root2) {
        if (root1 == root2)
            return true;
        if (root1 == null || root2 == null || root1.val != root2.val)
            return false;
        return (flipEquiv(root1.left, root2.left) && flipEquiv(root1.right, root2.right) ||
                flipEquiv(root1.left, root2.right) && flipEquiv(root1.right, root2.left));
    }
}
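To see the algorithm in action, here is a self-contained sketch. The TreeNode class and the demo tree are my own additions (the field names follow the usual LeetCode convention, but they are assumptions, not part of the question):

```java
// Minimal sketch: a TreeNode class plus the flipEquiv method from the question.
class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

public class FlipEquivDemo {
    static boolean flipEquiv(TreeNode root1, TreeNode root2) {
        if (root1 == root2) return true;
        if (root1 == null || root2 == null || root1.val != root2.val) return false;
        return (flipEquiv(root1.left, root2.left) && flipEquiv(root1.right, root2.right)) ||
               (flipEquiv(root1.left, root2.right) && flipEquiv(root1.right, root2.left));
    }

    public static void main(String[] args) {
        // Tree X:  1          Tree Y:  1
        //         / \                 / \
        //        2   3               3   2
        TreeNode x = new TreeNode(1);
        x.left = new TreeNode(2);
        x.right = new TreeNode(3);
        TreeNode y = new TreeNode(1);
        y.left = new TreeNode(3);
        y.right = new TreeNode(2);
        // Flipping the children of Y's root makes it equal to X.
        System.out.println(flipEquiv(x, y));  // prints "true"
    }
}
```

Note that each pair of nodes is compared at most a constant number of times, which is the intuition behind the O(max(N_1, N_2)) bound in the official solution.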

Related

I am a bit confused about a complexity comparison between two binary-tree algorithms; the code for both is below.

Checking whether one binary tree is identical to another, with the code below, has linear complexity, i.e. O(n), where n is the number of nodes of the tree with the fewer nodes.
boolean identical(Node a, Node b)
{
    /* 1. both empty -> true */
    if (a == null && b == null)
        return true;
    /* 2. both non-empty -> compare them */
    if (a != null && b != null)
        return (a.data == b.data
                && identical(a.left, b.left)
                && identical(a.right, b.right));
    /* 3. one empty, one not -> false */
    return false;
}
(Computing the Fibonacci series using naive recursion gives exponential complexity.)
The complexity of the code below is O(2^n).
class Fibonacci {
    static int fib(int n)
    {
        if (n <= 1)
            return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args)
    {
        int n = 9;
        System.out.println(fib(n));
    }
}
My question is that both look similar, yet one has linear complexity and the other exponential. Could anyone clarify both algorithms?
Fibonacci Series
If you build the call tree for the recursive code that computes the Fibonacci series, it looks like this:

                    fib(n)
           fib(n-1)        fib(n-2)
      fib(n-2)  fib(n-3)  fib(n-3)  fib(n-4)

At what level do you encounter fib(1), so that the tree can "stop"?
At the (n-1)-th level you encounter fib(1), and there the recursion stops.
The number of nodes is of the order of 2^n because there are (n-1) levels.
Binary Tree Comparison
Let's consider your binary-tree comparison.
Assume both are complete binary trees. According to your algorithm, it visits every node once, and if h is the height
of the tree, the number of nodes is of the order of 2^h. So you could also state the complexity as O(2^h).
The O(n) in this case is equivalent to O(2^h).
The difference originates in a different definition of n. While the naive recursive algorithm for Fibonacci numbers also performs a kind of traversal in a graph, the value of n is not defined by the number of nodes in that graph, but by the input number.
The binary-tree comparison, however, defines n as the number of nodes.
So n has a completely different meaning in these two algorithms, and it explains why the time complexity in terms of n comes out so differently.
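To make the difference concrete, here is a small sketch (the class name and the call counter are my own, not from the question) that counts how many times the naive fib is invoked:

```java
// Counts invocations of the naive recursive fib to show exponential growth in n.
public class FibCallCounter {
    static long calls = 0;

    static int fib(int n) {
        calls++;  // count this invocation
        if (n <= 1) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        for (int n = 5; n <= 20; n += 5) {
            calls = 0;
            fib(n);
            System.out.println("fib(" + n + ") made " + calls + " calls");
        }
        // The call count satisfies C(n) = C(n-1) + C(n-2) + 1, which itself
        // grows like the Fibonacci numbers, i.e. exponentially in the input n,
        // whereas the identical() comparison visits each tree node only once.
    }
}
```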

Find the median of an AVL tree in o(log(n)) time using the given information?

Given that you know the minimum and maximum values of the AVL tree, and that the tree contains only distinct integers, find the median of the tree.
The recursive function takes the parameters k, tree, direction, and currentPosition.
The outermost call solves the problem as f(k, tree, null, null). When direction and currentPosition are still null, this is the initial frame, and the current position is computed from the size of the root's left subtree. On every later frame we arrived by moving left or right, so the current position is adjusted according to the direction taken.
// implementation by Anthony Toorie
// k is the 1-based rank we are looking for; for the median, k = ceil(n / 2)
f(k, tree, direction, currentPosition) {
    // initial frame: the root's rank is the size of its left subtree plus one
    if (!direction && !currentPosition) {
        currentPosition = |tree.left| + 1
    }
    // after moving left, the new node's rank is the parent's rank
    // minus the new node's right subtree minus one
    if (direction === "left") {
        currentPosition = currentPosition - |tree.right| - 1
    }
    // after moving right, the new node's rank is the parent's rank
    // plus the new node's left subtree plus one
    if (direction === "right") {
        currentPosition = currentPosition + |tree.left| + 1
    }
    if (currentPosition === k) {
        return tree.value
    }
    // current rank too small: the k-th value lies in the right subtree
    if (currentPosition < k) {
        return f(k, tree.right, "right", currentPosition)
    }
    // current rank too large: the k-th value lies in the left subtree
    if (currentPosition > k) {
        return f(k, tree.left, "left", currentPosition)
    }
}
Sources :
https://en.wikipedia.org/wiki/Order_statistic_tree
|X| denotes the number of nodes in the tree or subtree X.
This is heavily based on the quickselect algorithm.
This only gives O(log N) in the worst case if the tree is height-balanced. For a regular, non-balanced BST the worst-case runtime is O(N); on such a tree the best case is O(1), but the average and worst cases are O(N). The tree must contain only distinct numbers, and the elements must be comparable.
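Under the same assumptions (a size-augmented, balanced BST with distinct values), the selection step can also be written top-down, passing the remaining rank into each subtree. This is only a sketch; the Node class and method names are mine, not from the answer:

```java
// Sketch of order-statistic selection in a size-augmented BST.
// Each node stores the size of its own subtree, so the k-th smallest
// value (1-based) can be found in O(height) time.
class OrderStatTree {
    static class Node {
        int value, size;
        Node left, right;
        Node(int value, Node left, Node right) {
            this.value = value;
            this.left = left;
            this.right = right;
            this.size = 1 + size(left) + size(right);
        }
    }

    static int size(Node n) { return n == null ? 0 : n.size; }

    // k-th smallest, 1-based; assumes 1 <= k <= size(root)
    static int select(Node root, int k) {
        int rank = size(root.left) + 1;       // rank of root within this subtree
        if (k == rank) return root.value;
        if (k < rank)  return select(root.left, k);
        return select(root.right, k - rank);  // skip the left subtree and root
    }

    // lower median of the stored values
    static int median(Node root) {
        return select(root, (size(root) + 1) / 2);
    }

    public static void main(String[] args) {
        // Balanced tree holding 1..7; the median is 4.
        Node root = new Node(4,
                new Node(2, new Node(1, null, null), new Node(3, null, null)),
                new Node(6, new Node(5, null, null), new Node(7, null, null)));
        System.out.println(median(root));  // prints 4
    }
}
```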

Count number of nodes within range inside Binary Search Tree in O(LogN)

Given a BST and two integers a and b (a < b), how can we find the number of nodes whose values satisfy a < value < b, in O(log n)?
I know one can easily find the positions of a and b in O(log n) time, but how can we count the nodes in between without doing a traversal, which is O(n)?
In each node of your binary search tree, also keep a count of the number of values in the tree that are less than its value (or, in an alternative design, the number of nodes in its left subtree).
Now, first find the node containing the value a and read off the count of values less than a stored in that node. This step is O(log n).
Then find the node containing the value b and read off the count of values less than b stored in that node. This step is also O(log n).
Subtract the two counts (adjusting for the endpoints) and you have the number of nodes between a and b. The total complexity of this search is 2·O(log n) = O(log n).
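A sketch of this idea, using left-subtree sizes to compute ranks (the class and method names are mine; like the answer, it assumes a and b are present in the tree, and it counts the nodes strictly between them):

```java
// Sketch: count values strictly between a and b in a size-augmented BST.
// rank(x) = number of values in the tree that are less than x.
class RankCountBST {
    static class Node {
        int value, leftSize;   // leftSize = number of nodes in the left subtree
        Node left, right;
        Node(int value, Node left, Node right) {
            this.value = value;
            this.left = left;
            this.right = right;
            this.leftSize = count(left);
        }
        static int count(Node n) {
            return n == null ? 0 : 1 + count(n.left) + count(n.right);
        }
    }

    // number of values < x, assuming x is stored in the tree; O(height)
    static int rank(Node root, int x) {
        if (root == null) return 0;
        if (x == root.value) return root.leftSize;
        if (x < root.value) return rank(root.left, x);
        // everything in the left subtree, plus the root itself, is < x
        return root.leftSize + 1 + rank(root.right, x);
    }

    // nodes n with a < n.value < b, assuming a and b are in the tree
    static int countBetween(Node root, int a, int b) {
        return rank(root, b) - rank(root, a) - 1;  // the -1 excludes a itself
    }

    public static void main(String[] args) {
        // BST holding 1..7
        Node root = new Node(4,
                new Node(2, new Node(1, null, null), new Node(3, null, null)),
                new Node(6, new Node(5, null, null), new Node(7, null, null)));
        System.out.println(countBetween(root, 1, 6));  // prints 4 (counts 2, 3, 4, 5)
    }
}
```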
See this video. The professor explains your question here by using Splay Trees.
Simple solution:
Start checking from the root node.
If the node falls within the range, increase the count by 1 and recurse into the left and right children.
If the node is not within the range, compare its value with the range. If both range values are less than the current node, the candidates can only be in the left subtree; otherwise check the right subtree.
Here is some sample code. Hope it is clear.
int nodesWithInRange(Node node, int x, int y) {
    if (node == null) {
        return 0;
    } else if (node.data == x && node.data == y) {
        return 1;
    } else if (node.data >= x && node.data <= y) {
        return 1 + nodesWithInRange(node.left, x, y) + nodesWithInRange(node.right, x, y);
    } else if (node.data > x && node.data > y) {
        return nodesWithInRange(node.left, x, y);
    } else {
        return nodesWithInRange(node.right, x, y);
    }
}
Time complexity: O(log n + K), where K is the number of elements between x and y.
It is not ideal, but it is useful in case you do not want to modify the binary-tree node definition.
Store the inorder traversal of the BST in an array (it will be sorted). Searching for a and b takes O(log n) time; get their indices and take the difference. This gives the number of nodes in the range a to b.
Space complexity: O(n).
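A sketch of this array-based approach (the class name is mine; like the answer, it assumes a and b themselves are stored in the tree, and it counts the nodes inclusively between them):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: flatten a BST to a sorted array via inorder traversal, then use
// binary search to count the nodes whose values lie between a and b.
class InorderRangeCount {
    static class Node {
        int data;
        Node left, right;
        Node(int data, Node left, Node right) {
            this.data = data; this.left = left; this.right = right;
        }
    }

    static void inorder(Node n, List<Integer> out) {
        if (n == null) return;
        inorder(n.left, out);
        out.add(n.data);
        inorder(n.right, out);
    }

    // nodes with a <= value <= b, assuming a and b are present in the tree
    static int countInRange(Node root, int a, int b) {
        List<Integer> sorted = new ArrayList<>();
        inorder(root, sorted);                       // O(n) time and space, done once
        int i = Collections.binarySearch(sorted, a); // O(log n)
        int j = Collections.binarySearch(sorted, b); // O(log n)
        return j - i + 1;
    }

    public static void main(String[] args) {
        // BST holding 1..7
        Node root = new Node(4,
                new Node(2, new Node(1, null, null), new Node(3, null, null)),
                new Node(6, new Node(5, null, null), new Node(7, null, null)));
        System.out.println(countInRange(root, 2, 6));  // prints 5 (counts 2, 3, 4, 5, 6)
    }
}
```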
The idea is simple.
Traverse the BST starting from the root.
For every node, check whether it lies in the range.
If it lies in the range, then count++ and recurse into both of its children.
If the current node is smaller than the low end of the range, recurse into the right child; otherwise recurse into the left child.
The time complexity will be O(height + number of nodes in range).
As to why it is not O(n): we are not traversing the whole tree, i.e. all n nodes. We only descend into the subtrees that can contain values in the range, as decided by each parent's data.
Pseudocode
int findCountInRange(Node root, int a, int b) {
    if (root == null)
        return 0;
    if (root->data >= a && root->data <= b)
        return 1 + findCountInRange(root->left, a, b) + findCountInRange(root->right, a, b);
    else if (root->data < a)
        return findCountInRange(root->right, a, b);
    else
        return findCountInRange(root->left, a, b);
}

Runtime complexity of brute-force for determining balanced binary tree

I have the following code for the brute-force method to determine whether a binary tree is balanced:
public boolean IsBalanced(Node root)
{
    if (root == null) return true;
    return Math.abs(maxDepth(root.left) - maxDepth(root.right)) <= 1
        && IsBalanced(root.left)
        && IsBalanced(root.right);
}

public int maxDepth(Node root)
{
    if (root == null) return 0;
    return Math.max(maxDepth(root.left), maxDepth(root.right)) + 1;
}
I don't understand why the worst case runtime complexity is O(n^2) when the tree is a skewed tree. What I think is that if the tree is skewed, then the line
Math.abs(maxDepth(root.left) - maxDepth(root.right)) <= 1
would immediately find that the height of the left subtree of the root exceeds the height of the root's right subtree by more than 1. The time complexity of the skewed-tree case should then be O(n). What am I missing here? Thanks!
In IsBalanced(Node root) for a skewed tree, the first call to maxDepth(root.left) takes n-1 recursive calls. If the recursion then continues into IsBalanced(root.left), maxDepth is evaluated again over n-2 nodes, and so on, so the total work is the sum of the first n natural numbers, i.e. O(n^2). (Strictly speaking, with the short-circuiting && as written, a fully skewed tree fails the height check at the root and returns false after the first O(n) maxDepth computation; the quadratic behavior requires the balance check to keep passing down a long spine, or the recursive IsBalanced calls to be evaluated before the height comparison.)

How to finding first common ancestor of a node in a binary tree?

Following is my algorithm to find the first common ancestor of two nodes. But I don't know how to calculate its time complexity; can anyone help?
public Tree commonAncestor(Tree root, Tree p, Tree q) {
    if (covers(root.left, p) && covers(root.left, q))
        return commonAncestor(root.left, p, q);
    if (covers(root.right, p) && covers(root.right, q))
        return commonAncestor(root.right, p, q);
    return root;
}

private boolean covers(Tree root, Tree p) { /* is p a descendant of root? */
    if (root == null) return false;
    if (root == p) return true;
    return covers(root.left, p) || covers(root.right, p);
}
Ok, so let's start by identifying what the worst case for this algorithm would be. covers searches the tree from left to right, so you get the worst-case behavior if the node you are searching for is the rightmost leaf, or it is not in the subtree at all. At this point you will have visited all the nodes in the subtree, so covers is O(n), where n is the number of nodes in the tree.
Similarly, commonAncestor exhibits worst-case behavior when the first common ancestor of p and q is deep down to the right in the tree. In this case, it will first call covers twice, getting the worst time behavior in both cases. It will then call itself again on the right subtree, which in the case of a balanced tree is of size n/2.
Assuming the tree is balanced, we can describe the run time by the recurrence relation T(n) = T(n/2) + O(n). Using the master theorem, we get the answer T(n) = O(n) for a balanced tree.
Now, if the tree is not balanced, we might in the worst case only reduce the size of the subtree by 1 for each recursive call, yielding the recurrence T(n) = T(n-1) + O(n). The solution to this recurrence is T(n) = O(n^2).
You can do better than this, though.
For example, instead of simply determining which subtree contains p or q with covers, let's determine the entire path to p and q. This takes O(n) just like covers; we're just keeping more information. Now traverse those paths in parallel and stop where they diverge. This is always O(n).
If you have pointers from each node to their parent you can even improve on this by generating the paths "bottom-up", giving you O(log n) for a balanced tree.
Note that this is a space-time tradeoff, as while your code takes O(1) space, this algorithm takes O(log n) space for a balanced tree, and O(n) space in general.
As hammar's answer demonstrates, your algorithm is quite inefficient, as many operations are repeated.
I would take a different approach: instead of testing, for every potential root node, whether the two given nodes are still in the same subtree (the node where they stop being in the same subtree is the first common ancestor), I would determine the paths from the root to the two given nodes and compare them. The last common node on the two paths from the root downwards is then also the first common ancestor.
Here’s an (untested) implementation in Java:
private List<Tree> pathToNode(Tree root, Tree node) {
    if (root == null) return null;
    // base case: root is the wanted node
    if (root == node) {
        List<Tree> path = new LinkedList<Tree>();
        path.add(root);
        return path;
    }
    // find path to node in left sub-tree
    List<Tree> tmp = pathToNode(root.left, node);
    // not found there? try the right sub-tree
    if (tmp == null)
        tmp = pathToNode(root.right, node);
    // path found; prepend the current root and hand the path upwards
    if (tmp != null) {
        tmp.add(0, root);
        return tmp;
    }
    // node is in neither sub-tree
    return null;
}
public Tree commonAncestor(Tree root, Tree p, Tree q) {
    List<Tree> pathToP = pathToNode(root, p),
               pathToQ = pathToNode(root, q);
    // check whether both paths exist
    if (pathToP == null || pathToQ == null) return null;
    Iterator<Tree> iterP = pathToP.iterator(),
                   iterQ = pathToQ.iterator();
    Tree common = null;
    // walk both paths in parallel until the nodes differ
    while (iterP.hasNext() && iterQ.hasNext()) {
        Tree nextP = iterP.next(), nextQ = iterQ.next();
        if (nextP != nextQ) break;
        common = nextP;
    }
    // return the last matching node
    return common;
}
Both pathToNode and commonAncestor are in O(n).
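A self-contained sketch of this path-based approach (the Tree class and the small test tree are my own, for illustration only):

```java
import java.util.LinkedList;
import java.util.List;

// Sketch: find the first common ancestor of p and q by computing the
// root-to-node paths and walking them in parallel until they diverge.
class PathBasedLca {
    static class Tree {
        int value;
        Tree left, right;
        Tree(int value, Tree left, Tree right) {
            this.value = value; this.left = left; this.right = right;
        }
    }

    static List<Tree> pathToNode(Tree root, Tree node) {
        if (root == null) return null;
        if (root == node) {
            List<Tree> path = new LinkedList<>();
            path.add(root);
            return path;
        }
        List<Tree> tmp = pathToNode(root.left, node);
        if (tmp == null) tmp = pathToNode(root.right, node);
        if (tmp != null) {
            tmp.add(0, root);  // prepend the current root to the found path
            return tmp;
        }
        return null;
    }

    static Tree commonAncestor(Tree root, Tree p, Tree q) {
        List<Tree> pathToP = pathToNode(root, p), pathToQ = pathToNode(root, q);
        if (pathToP == null || pathToQ == null) return null;
        Tree common = null;
        // walk both paths in lockstep; the last shared node is the answer
        for (int i = 0; i < Math.min(pathToP.size(), pathToQ.size()); i++) {
            if (pathToP.get(i) != pathToQ.get(i)) break;
            common = pathToP.get(i);
        }
        return common;
    }

    public static void main(String[] args) {
        //        1
        //       / \
        //      2   3
        //     / \
        //    4   5
        Tree n4 = new Tree(4, null, null), n5 = new Tree(5, null, null);
        Tree n2 = new Tree(2, n4, n5), n3 = new Tree(3, null, null);
        Tree root = new Tree(1, n2, n3);
        System.out.println(commonAncestor(root, n4, n5).value);  // prints 2
        System.out.println(commonAncestor(root, n4, n3).value);  // prints 1
    }
}
```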
