In-order successor of a Binary Tree - data-structures

I was writing code to find the in-order successor in a binary tree (NOT a binary search tree). It's just a practice problem, mostly to brush up on tree concepts.
I do an in-order traversal while keeping track of the previous node. Whenever the previous node becomes equal to the node whose successor we are searching for, I print the current node.
void inOrder(node* root, node* successorFor) {
    static node* prev = NULL;
    if (!root)
        return;
    inOrder(root->left, successorFor);
    if (prev == successorFor)
        print(root);
    prev = root;
    inOrder(root->right, successorFor);
}
I was looking for test cases where my solution might fail, and whether my approach is correct. If it's not, how should I go about it?
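For reference, the same idea can be sketched in Python (names are mine). One failure case worth testing in the C version is calling it twice: a function-level `static` keeps stale state between separate invocations, so here `prev` is threaded explicitly instead:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def inorder_successor(root, target):
    """Return the node visited right after `target` in an in-order walk."""
    prev = None       # last node visited, threaded explicitly (no static state)
    result = None

    def walk(node):
        nonlocal prev, result
        if node is None or result is not None:
            return
        walk(node.left)
        if prev is target:
            result = node
        prev = node
        walk(node.right)

    walk(root)
    return result
```

For the last node in the in-order sequence the function returns `None`, which is also worth covering in a test.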

The basic logic of this algorithm is a tree walk.
You make a call like print(TREE-SUCCESSOR(root, k).key):
TREE-SUCCESSOR(root, k)
    if root == NIL
        return NIL
    left  = TREE-SUCCESSOR(root.left, k)
    right = TREE-SUCCESSOR(root.right, k)
    successor = NIL
    if root.key > k
        successor = root
    if left
        if k < left.key
            if successor
                if left.key < successor.key
                    successor = left
            else
                successor = left
    if right
        if k < right.key
            if successor
                if right.key < successor.key
                    successor = right
            else
                successor = right
    return successor
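Under the assumption that nodes hold comparable keys, the pseudocode above translates to Python directly (the `Node` class is my own minimal one). Since it walks every node, it works on an arbitrary binary tree, not just a BST:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def tree_successor(root, k):
    """Node with the smallest key strictly greater than k, anywhere in the tree."""
    if root is None:
        return None
    # the root itself is a candidate if its key exceeds k
    best = root if root.key > k else None
    # recurse into both subtrees and keep the smallest qualifying key
    for child in (root.left, root.right):
        cand = tree_successor(child, k)
        if cand is not None and (best is None or cand.key < best.key):
            best = cand
    return best
```

The walk visits all n nodes once, so it runs in O(n) time regardless of the tree's shape.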

Related

Split a Binary Tree using specific methods

Given a binary tree, I have to return a tree containing all elements smaller than k, a tree containing all elements greater than k, and a tree containing only one element: k.
Allowed methods to use:
remove node - O(n)
insert - O(n)
find - O(n)
find min - O(n)
I'm assuming these complexities for the methods, because the exercise doesn't say that the tree is balanced.
Required complexity - O(n)
The original tree has to maintain its structure.
I'm completely stuck. Any help is much appreciated!
The given tree is a binary search tree, and the outputs should be binary search trees as well.
I see no way to design an O(n) algorithm with the given black-box functions and their time complexities, given that they could only be called a constant number of times (like 3) to stay within the O(n) constraint.
But if it is allowed to access and create BSTs with basic, standard node manipulations (traversing via left or right child, setting the left or right child to a given subtree), then you could do the following:
Create three new empty BSTs that will be populated and returned. Name them left, mid, and right, where the first one will have all values less than k, the second one will have at the most one node (with value k), and the final one will have all the rest.
While populating left and right, maintain references to the nodes that are closest to value k: in left that will be the node with the greatest value, and in right the node with the least value.
Follow these steps:
Apply the usual binary search to walk from the root towards the node with value k
While doing this: whenever you choose the left child of a node, the node itself and its right subtree belong in right. However, the left child should not be included at this moment, so create a new node that copies the current node, but without its left child. Maintain a reference to the node with the least value in right, as that is the node that may get a new left subtree when this step occurs more than once.
Do the mirrored thing when you choose the right child of a node.
When the node with k is found, the algorithm can add its left subtree to left and the right subtree to right, and create the single-node tree with value k.
Time complexity
The search towards the node with value k could take O(n) in the worst case, as the BST is not given to be balanced. All the other actions (adding a subtree to a specific node in one of the new BSTs) run in constant time, so in total they are executed O(n) times in the worst case.
If the given BST is balanced (not necessarily perfectly, but e.g. according to AVL rules), then the algorithm runs in O(log n) time. However, the output BSTs may not be as balanced and may violate AVL rules, so rotations would be needed to restore balance.
Example Implementation
Here is an implementation in JavaScript. When you run this snippet, a test case runs on a BST with nodes valued 0..19 (inserted in random order) and k=10. The output iterates the three created BSTs in in-order, to verify that they yield 0..9, 10, and 11..19 respectively:
class Node {
    constructor(value, left = null, right = null) {
        this.value = value;
        this.left = left;
        this.right = right;
    }
    insert(value) { // Insert as a leaf, maintaining the BST property
        if (value < this.value) {
            if (this.left !== null) {
                return this.left.insert(value);
            }
            this.left = new Node(value);
            return this.left;
        } else {
            if (this.right !== null) {
                return this.right.insert(value);
            }
            this.right = new Node(value);
            return this.right;
        }
    }
    // Utility generator to iterate the BST values in in-order sequence
    * [Symbol.iterator]() {
        if (this.left !== null) yield* this.left;
        yield this.value;
        if (this.right !== null) yield* this.right;
    }
}

// The main algorithm
function splitInThree(root, k) {
    let node = root;
    // Variables for the roots of the trees to return:
    let left = null;
    let mid = null;
    let right = null;
    // References to the nodes that are lexically closest to k:
    let next = null;
    let prev = null;
    while (node !== null) {
        // Create a copy of the current node
        const newNode = new Node(node.value);
        if (k < node.value) {
            // All nodes at the right go with it, but it gets no left child at this stage
            newNode.right = node.right;
            // Merge this with the tree we are creating for nodes with value > k
            if (right === null) {
                right = newNode;
            } else {
                next.left = newNode;
            }
            next = newNode;
            node = node.left;
        } else if (k > node.value) {
            // All nodes at the left go with it, but it gets no right child at this stage
            newNode.left = node.left;
            // Merge this with the tree we are creating for nodes with value < k
            if (left === null) {
                left = newNode;
            } else {
                prev.right = newNode;
            }
            prev = newNode;
            node = node.right;
        } else {
            // Create the root-only tree for k
            mid = newNode;
            // The left subtree belongs in the left tree
            if (left === null) {
                left = node.left;
            } else {
                prev.right = node.left;
            }
            // ...and the right subtree in the right tree
            if (right === null) {
                right = node.right;
            } else {
                next.left = node.right;
            }
            // All nodes have been allocated to a target tree
            break;
        }
    }
    // Return the three new trees:
    return [left, mid, right];
}

// === Test code for the algorithm ===
// Utility function: Fisher-Yates shuffle
function shuffled(a) {
    for (let i = a.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [a[i], a[j]] = [a[j], a[i]];
    }
    return a;
}

// Create a shuffled array of the integers 0..19
let arr = shuffled([...Array(20).keys()]);
// Insert these values into a new BST:
let root = new Node(arr.pop());
for (let val of arr) root.insert(val);
// Apply the algorithm with k=10
let [left, mid, right] = splitInThree(root, 10);
// Print out the values from the three BSTs:
console.log(...left);  // 0..9
console.log(...mid);   // 10
console.log(...right); // 11..19
Essentially, your goal is to create a valid BST where k is the root node; in this case, the left subtree is a BST containing all elements less than k, and the right subtree is a BST containing all elements greater than k.
This can be achieved by a series of tree rotations:
First, do an O(n) search for the node of value k, building a stack of its ancestors up to the root node.
While there are any remaining ancestors, pop one from the stack, and perform a tree rotation making k the parent of this ancestor.
Each rotation takes O(1) time, so this algorithm terminates in O(n) time, because there are at most O(n) ancestors. In a balanced tree, the algorithm takes O(log n) time, although the result is not a balanced tree.
In your question you write that "insert" and "remove" operations take O(n) time, but that this is your assumption, i.e. it is not stated in the question that these operations take O(n) time. If you are operating only nodes you already have pointers to, then basic operations take O(1) time.
If it is required not to destroy the original tree, then you can begin by making a copy of it in O(n) time.
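The rotate-to-root idea above can also be realized recursively rather than with an explicit ancestor stack; the rotation sequence is the same. Here is a hedged Python sketch (Node, insert, and inorder are my own minimal helpers):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder(n):
    return inorder(n.left) + [n.key] + inorder(n.right) if n else []

def rotate_right(p):
    v = p.left
    p.left, v.right = v.right, p   # pull v above p
    return v

def rotate_left(p):
    v = p.right
    p.right, v.left = v.left, p
    return v

def split_at_k(root, k):
    """Move the node holding k to the root with single rotations.
    Afterwards root.left holds all keys < k and root.right all keys > k."""
    if root is None:
        return None
    if k < root.key:
        root.left = split_at_k(root.left, k)
        if root.left is not None and root.left.key == k:
            return rotate_right(root)
    elif k > root.key:
        root.right = split_at_k(root.right, k)
        if root.right is not None and root.right.key == k:
            return rotate_left(root)
    return root
```

Each level of the search path triggers at most one O(1) rotation, matching the O(n) worst-case bound given above.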
I really don't see a simple and efficient way to split with only the operations that you mention. But I think that achieving a very efficient split is relatively easy if you can manipulate nodes directly.
If the tree were balanced, then you could perform your split in O(log n) by defining a special operation called join exclusive. Let me first define join_ex() as the operation in question:
Node * join_exclusive(Node *& ts, Node *& tg)
{
    if (ts == NULL)
        return tg;
    if (tg == NULL)
        return ts;
    tg->llink = join_exclusive(ts->rlink, tg->llink);
    ts->rlink = tg;
    Node * ret_val = ts;
    ts = tg = NULL; // empty the trees
    return ret_val;
}
join_ex() assumes that you want to build a new tree from two BSTs ts and tg such that every key in ts is less than every key in tg.
Note that if you take any node of any BST, then its subtrees meet this condition: every key in the left subtree is less than every key in the right one. You can design a nice deletion algorithm based on join_ex().
Now we are ready for the split operation:
void split_key_rec(Node * root, const key_type & key, Node *& ts, Node *& tg)
{
    if (root == NULL)
    {
        ts = tg = NULL;
        return;
    }
    if (key <= root->key)
    {
        split_key_rec(root->llink, key, ts, root->llink);
        tg = root;
    }
    else
    {
        split_key_rec(root->rlink, key, root->rlink, tg);
        ts = root;
    }
}
split_key_rec() splits the tree into two trees ts and tg according to a key k. At the end of the operation, ts is a BST whose keys are less than k, and tg is a BST whose keys are greater than or equal to k.
Now, to complete your requirement, you call split_key_rec(t, k, ts, tg) and get in ts a BST with all the keys less than k and in tg a BST with all the keys greater than or equal to k. The last thing is to check whether k is present in tg.
If k is in the original tree, then it is now the minimum of tg, and the node holding it has no left subtree; a find-min on tg followed by one unlink gives you your result in ts, k, and tg' (tg without k).
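The same split can be sketched in Python, returning the pair (ts, tg) instead of using reference parameters, with the convention stated above (ts gets keys < key, tg gets keys >= key). Node, insert, and inorder are my own minimal helpers:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder(n):
    return inorder(n.left) + [n.key] + inorder(n.right) if n else []

def split_key(root, key):
    """Split into (ts, tg): keys < key and keys >= key, reusing the nodes."""
    if root is None:
        return None, None
    if key <= root.key:
        # root belongs in tg; split its left subtree and keep the >= part
        ts, root.left = split_key(root.left, key)
        return ts, root
    else:
        # root belongs in ts; split its right subtree and keep the < part
        root.right, tg = split_key(root.right, key)
        return root, tg
```

Only the nodes on the search path are touched, so the cost is proportional to the height of the tree, as in the C++ version.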

Longest path for each node to a leaf in a BST

For each node in a BST, what is the length of the longest path from the node to a leaf, in the worst case?
I think in the worst case we have a linear path from a node to a leaf. If there are n nodes in the tree, then the running time is O(n*n). Is this right?
You can do this in linear time, assuming it's "a given node to every leaf" or "every node to a given leaf". If it's "every node to every leaf", that's a bit harder.
To do this: walk from the "target" to the root, marking each node with its distance, and color all of these nodes red. (So the root holds the depth of the target, and the target holds 0.) For each red node, walk its non-red children, adding 1 as you descend, starting from the value stored at the red node.
It's not O(n*n) because you're able to re-use a lot of your work; you don't find one path and then start completely over to find the next.
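A hedged Python sketch of that red-marking walk (names are mine; nodes are assumed to carry parent pointers):

```python
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.left = self.right = None

def distances_from(target):
    """Distance from `target` to every node in the tree, in linear time."""
    dist = {}
    # colour the ancestors "red": target holds 0, each ancestor one more
    node, d = target, 0
    while node is not None:
        dist[node] = d
        node, d = node.parent, d + 1
    # from each red node, walk its non-red children, adding 1 per step
    def fill(node, d):
        if node is None or node in dist:
            return
        dist[node] = d
        fill(node.left, d + 1)
        fill(node.right, d + 1)
    for red, d in list(dist.items()):   # snapshot: only the red nodes
        fill(red.left, d + 1)
        fill(red.right, d + 1)
    return dist
```

The longest path from the target to a leaf is then the maximum distance over the leaves; every node is visited once, giving O(n) overall.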
The longest path from a node to a leaf would:
1. Go all the way from the node up to the root.
2. Then go from the root down to the deepest leaf.
3. Make sure never to traverse a node twice, because if this were allowed, the path could be made infinitely long by bouncing between any two nodes.
    x
   / \
  b   a
       \
        c
         \
          d
The longest path from c to a leaf does two things:
1. Go from c up to x (count this length).
2. Go from x to the deepest leaf that does not have c on its path (in this case, that leaf is b).
The code below has a time complexity of O(n) for finding the longest distance from a single node, so finding the distance for all nodes is O(n^2).
public int longestPath(Node n, Node root) {
    int path = 0;
    Node x = n;
    while (x.parent != null) {
        x = x.parent;
        path++;
    }
    int height = maxHeight(root, n);
    return path + height;
}

private int maxHeight(Node x, Node exclude) {
    if (x == exclude)
        return -1;
    if (x == null)
        return -1;
    if (x.left == null && x.right == null)
        return 0;
    int l = maxHeight(x.left, exclude);
    int r = maxHeight(x.right, exclude);
    return l > r ? l + 1 : r + 1;
}
My solution would be:
1. Check if the node has a subtree; if yes, find the height of that subtree.
2. Find the height of the rest of the tree, as mentioned by @abhaybhatia.
Return the max of the two heights.

Generating all binary trees of N nodes in lexicographical order

I was just curious if anyone has an algorithm for generating binary trees of N nodes in lexicographical order.
The very first binary tree would be a right chain of length N. The last tree would be a left chain of length N. I would like to know how to generate the trees in between, in lexicographical order.
There are 14 binary trees with 4 nodes, 5 binary trees with 3 nodes, and so on.
EDIT:
So, when you preorder-traverse a tree, you output a 1 when you hit a non-null node and a 0 when you hit a null. The nodes are thus numbered 1,2,3,...,N in preorder; a tree of 5 nodes that is a right chain is numbered 1,2,3,4,5 in this order. I was thinking theoretically this could work: take the lowest-numbered leaf node, reverse-preorder-traverse one position, and move the node to that position. If this node was originally a child of node-1, and after the reverse-preorder move it is no longer a child of node-1, then I shift all leaf nodes numbered less than this node to the best possible preorder position (which should probably be the rightmost branch).
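This is generate-then-sort rather than an in-place successor rule, but a sketch like the following (helper names are mine) pins down what "lexicographical" means under the preorder 1/0 encoding and gives a reference to test a direct generator against:

```python
def all_shapes(n):
    """All binary-tree shapes with n nodes, as nested (left, right) tuples."""
    if n == 0:
        return [None]
    shapes = []
    for i in range(n):                      # i nodes go to the left subtree
        for l in all_shapes(i):
            for r in all_shapes(n - 1 - i):
                shapes.append((l, r))
    return shapes

def preorder_bits(shape):
    """Encode a shape as the preorder 1/0 string described above."""
    if shape is None:
        return '0'
    return '1' + preorder_bits(shape[0]) + preorder_bits(shape[1])

def trees_in_order(n):
    """All shapes of n nodes, sorted by their preorder encoding."""
    return sorted(all_shapes(n), key=preorder_bits)
```

Sorting the encodings does put the right chain first and the left chain last, consistent with the ordering described in the question.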
I won't give you a direct solution, but I will give a solution here for a similar problem: given n, generate all structurally unique BSTs (binary search trees) that store the values 1..n.
I think it is not hard for you to modify my solution a little bit to solve your problem.
public class Solution {
    public List<TreeNode> generateTrees(int n) {
        return helper(1, n);
    }

    private List<TreeNode> helper(int start, int end) {
        List<TreeNode> result = new ArrayList<TreeNode>();
        if (start > end) {
            result.add(null);
            return result;
        }
        // You just need to understand how the for loop below works.
        for (int i = start; i <= end; i++) {
            List<TreeNode> left = helper(start, i - 1);
            List<TreeNode> right = helper(i + 1, end);
            for (TreeNode l : left) {
                for (TreeNode r : right) {
                    TreeNode root = new TreeNode(i);
                    root.left = l;
                    root.right = r;
                    result.add(root);
                }
            }
        }
        return result;
    }
}

class TreeNode {
    int val;
    TreeNode left;
    TreeNode right;
    TreeNode(int x) { val = x; left = null; right = null; }
}
The solution is straightforward recursion; memoizing helper(start, end) would turn it into dynamic programming.

Deleting all nodes in a binary tree using O(1) auxiliary storage space?

The standard algorithm for deleting all nodes in a binary tree uses a postorder traversal over the nodes along these lines:
if (root is not null) {
    recursively delete left subtree
    recursively delete right subtree
    delete root
}
This algorithm uses O(h) auxiliary storage space, where h is the height of the tree, because of the space required to store the stack frames during the recursive calls. However, it runs in time O(n), because every node is visited exactly once.
Is there an algorithm to delete all the nodes in a binary tree using only O(1) auxiliary storage space without sacrificing runtime?
It is indeed possible to delete all the nodes in a binary tree in O(n) time using O(1) auxiliary storage space, with an algorithm based on tree rotations.
Given a binary tree with the following shape:
    u
   / \
  v   C
 / \
A   B
A right rotation of this tree pulls the node v above the node u and results in the following tree:
  v
 / \
A   u
   / \
  B   C
Note that a tree rotation can be done in O(1) time and space by simply changing the root of the tree to be v, setting u's left child to be v's former right child, then setting v's right child to be u.
Tree rotations are useful in this context because a right rotation will always decrease the height of the left subtree of the tree by one. This is useful because of a clever observation: it is extremely easy to delete the root of the tree if it has no left subchild. In particular, if the tree is shaped like this:
v
 \
  A
Then we can delete all the nodes in the tree by deleting the node v, then deleting all the nodes in its subtree A. This leads to a very simple algorithm for deleting all the nodes in the tree:
while (root is not null) {
    if (root has a left child) {
        perform a right rotation
    } else {
        delete the root, and make the root's right child the new root
    }
}
This algorithm clearly uses only O(1) storage space, because it needs at most a constant number of pointers to do a rotation or to change the root and the space for these pointers can be reused across all iterations of the loop.
Moreover, it can be shown that this algorithm also runs in O(n) time. Intuitively, it's possible to see this by looking at how many times a given edge can be rotated. First, notice that whenever a right rotation is performed, an edge that goes from a node to its left child is converted into a right edge that runs from the former child back to its parent. Next, notice that once we perform a rotation that moves node u to be the right child of node v, we will never touch node u again until we have deleted node v and all of v's left subtree. As a result, we can bound the number of total rotations that will ever be done by noting that every edge in the tree will be rotated with its parent at most once. Consequently, there are at most O(n) rotations done, each of which takes O(1) time, and exactly n deletions done. This means that the algorithm runs in time O(n) and uses only O(1) space.
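A hedged Python transcription of that loop (Python is garbage-collected, so this sketch counts "deleted" nodes instead of actually freeing them; Node and insert are my own minimal helpers):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def delete_all(root):
    """Delete every node with the rotation loop above.
    O(n) time, O(1) auxiliary space; returns how many nodes were deleted."""
    deleted = 0
    while root is not None:
        if root.left is not None:
            # right rotation: pull the left child above the root
            v = root.left
            root.left = v.right
            v.right = root
            root = v
        else:
            # no left child: unlink the root and move to its right child
            nxt = root.right
            root.left = root.right = None   # stand-in for `delete root`
            deleted += 1
            root = nxt
    return deleted
```

Only the two pointers `root` and `v` (plus `nxt`) are live across iterations, matching the O(1) space claim.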
In case it helps, I have a C++ implementation of this algorithm, along with a much more in-depth analysis of the algorithm's behavior. It also includes formal proofs of correctness for all of the steps of the algorithm.
Hope this helps!
Let me start with a serious joke: If you set the root of a BST to null, you effectively delete all the nodes in the tree (the garbage collector will make the space available). While the wording is Java specific, the idea holds for other programming languages. I mention this just in case you were at a job interview or taking an exam.
Otherwise, all you have to do is use a modified version of the DSW algorithm: basically turn the tree into a backbone and then delete as you would a linked list. Space O(1) and time O(n). You should find discussions of DSW in any textbook or online.
Basically DSW is used to balance a BST. But for your case, once you get the backbone, instead of balancing, you delete like you would a linked list.
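A sketch of that two-phase DSW-style version in Python (helper names are mine): first rotate the tree into a right-leaning "backbone", then consume it like a linked list:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def tree_to_vine(root):
    """DSW phase 1: right-rotate until every node lies on the right spine."""
    pseudo = Node(None)          # pseudo-root so the real root can rotate too
    pseudo.right = root
    tail, rest = pseudo, root
    while rest is not None:
        if rest.left is None:
            tail, rest = rest, rest.right
        else:                    # right rotation at `rest`
            temp = rest.left
            rest.left = temp.right
            temp.right = rest
            rest = temp
            tail.right = temp
    return pseudo.right

def delete_backbone(root):
    """Flatten, then free the vine like a linked list. Returns nodes freed."""
    vine, freed = tree_to_vine(root), 0
    while vine is not None:
        nxt = vine.right
        vine.left = vine.right = None   # stand-in for freeing the node
        vine, freed = nxt, freed + 1
    return freed
```

Each right rotation moves one node onto the spine, so phase 1 does O(n) rotations and the whole procedure stays O(n) time, O(1) space.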
Algorithm 1, O(n) time and O(1) space:
Delete each node immediately unless it has both children. Otherwise, walk to the leftmost node, reversing the 'left' links so that all nodes stay reachable; the leftmost node becomes the new root:
void delete_tree(Node *node) {
    Node *p, *left, *right;
    for (p = node; p; ) {
        left = p->left;
        right = p->right;
        if (left && right) {
            Node *prev_p = nullptr;
            do {
                p->left = prev_p;
                prev_p = p;
                p = left;
            } while ((left = p->left) != nullptr);
            p->left = p->right;
            p->right = prev_p; // need it on the right to avoid a loop
        } else {
            delete p;
            p = (left) ? left : right;
        }
    }
}
Algorithm 2, O(n) time and O(1) space: Traverse nodes depth-first, replacing child links with links to parent. Each node is deleted on the way up:
void delete_tree(Node *node) {
    Node *p, *left, *right;
    Node *upper = nullptr;
    for (p = node; p; ) {
        left = p->left;
        right = p->right;
        if (left && left != upper) {
            p->left = upper;
            upper = p;
            p = left;
        } else if (right && right != upper) {
            p->right = upper;
            upper = p;
            p = right;
        } else {
            delete p;
            p = upper;
            if (p)
                upper = (p->left) ? p->left : p->right;
        }
    }
}
I'm surprised by all the answers above that require complicated operations.
Removing nodes from a BST with O(1) additional storage is possible by simply replacing all recursive calls with a loop that searches for the node and also keeps track of the current node's parent. Using recursion is only simpler because the recursive calls automatically store all ancestors of the searched node on a stack. However, it's not necessary to store all ancestors; only the searched node and its parent are needed, so the searched node can be unlinked. Storing all ancestors is simply a waste of space.
A solution in Python 3 is below. Don't be thrown off by the seemingly recursive call to delete: the maximum recursion depth here is 2, since the second call to delete is guaranteed to hit the delete base case (a root node containing the searched value).
class Tree(object):
    def __init__(self, x):
        self.value = x
        self.left = None
        self.right = None

def remove_rightmost(parent, parent_left, child):
    while child.right is not None:
        parent = child
        parent_left = False
        child = child.right
    if parent_left:
        parent.left = child.left
    else:
        parent.right = child.left
    return child.value

def delete(t, q):
    if t is None:
        return None
    if t.value == q:
        if t.left is None:
            return t.right
        else:
            rightmost_value = remove_rightmost(t, True, t.left)
            t.value = rightmost_value
            return t
    rv = t
    while t is not None and t.value != q:
        parent = t
        if q < t.value:
            t = t.left
            parent_left = True
        else:
            t = t.right
            parent_left = False
    if t is None:
        return rv
    if parent_left:
        parent.left = delete(t, q)
    else:
        parent.right = delete(t, q)
    return rv

def deleteFromBST(t, queries):
    for q in queries:
        t = delete(t, q)
    return t

How to find the first common ancestor of two nodes in a binary tree?

Following is my algorithm to find the first common ancestor. But I don't know how to calculate its time complexity; can anyone help?
public Tree commonAncestor(Tree root, Tree p, Tree q) {
    if (covers(root.left, p) && covers(root.left, q))
        return commonAncestor(root.left, p, q);
    if (covers(root.right, p) && covers(root.right, q))
        return commonAncestor(root.right, p, q);
    return root;
}

private boolean covers(Tree root, Tree p) { /* is p a descendant of root? */
    if (root == null) return false;
    if (root == p) return true;
    return covers(root.left, p) || covers(root.right, p);
}
Ok, so let's start by identifying what the worst case for this algorithm would be. covers searches the tree from left to right, so you get the worst-case behavior if the node you are searching for is the rightmost leaf, or it is not in the subtree at all. At this point you will have visited all the nodes in the subtree, so covers is O(n), where n is the number of nodes in the tree.
Similarly, commonAncestor exhibits worst-case behavior when the first common ancestor of p and q is deep down to the right in the tree. In this case, it will first call covers twice, getting the worst time behavior in both cases. It will then call itself again on the right subtree, which in the case of a balanced tree is of size n/2.
Assuming the tree is balanced, we can describe the run time by the recurrence relation T(n) = T(n/2) + O(n). Using the master theorem, we get the answer T(n) = O(n) for a balanced tree.
Now, if the tree is not balanced, we might in the worst case only reduce the size of the subtree by 1 for each recursive call, yielding the recurrence T(n) = T(n-1) + O(n). The solution to this recurrence is T(n) = O(n^2).
You can do better than this, though.
For example, instead of simply determining which subtree contains p or q with covers, determine the entire path to p and q. This takes O(n) just like covers; we're just keeping more information. Now, traverse those paths in parallel and stop where they diverge. This is always O(n).
If you have pointers from each node to their parent you can even improve on this by generating the paths "bottom-up", giving you O(log n) for a balanced tree.
Note that this is a space-time tradeoff, as while your code takes O(1) space, this algorithm takes O(log n) space for a balanced tree, and O(n) space in general.
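The paths-in-parallel idea can be sketched in Python roughly as follows (the minimal Node class is my own): compute both root-to-node paths, then keep the last node the two paths share:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def path_to(root, target):
    """Root-to-target path as a list of nodes, or None if target is absent."""
    if root is None:
        return None
    if root is target:
        return [root]
    for child in (root.left, root.right):
        sub = path_to(child, target)
        if sub is not None:
            return [root] + sub
    return None

def common_ancestor(root, p, q):
    path_p, path_q = path_to(root, p), path_to(root, q)
    if path_p is None or path_q is None:
        return None
    lca = None
    for a, b in zip(path_p, path_q):
        if a is not b:
            break
        lca = a          # deepest node shared by both paths so far
    return lca
```

Building each path is one O(n) walk and the parallel scan is bounded by the tree height, so the whole procedure is O(n) even on a degenerate tree.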
As hammar’s answer demonstrates, your algorithm is quite inefficient, as many operations are repeated.
I would take a different approach: instead of testing for every potential root node whether the two given nodes are not in the same subtree (thus making it the first common ancestor), I would determine the paths from the root to the two given nodes and compare them. The last common node on the paths from the root downwards is then also the first common ancestor.
Here’s an (untested) implementation in Java:
private List<Tree> pathToNode(Tree root, Tree node) {
    if (root == null) return null;
    // root is the wanted node
    if (root == node) {
        List<Tree> path = new LinkedList<Tree>();
        path.add(root);
        return path;
    }
    // find a path to the node in the left sub-tree
    List<Tree> tmp = pathToNode(root.left, node);
    if (tmp != null) {
        // path found; prepend the current root to it
        tmp.add(0, root);
        return tmp;
    }
    // find a path to the node in the right sub-tree
    tmp = pathToNode(root.right, node);
    if (tmp != null) {
        tmp.add(0, root);
        return tmp;
    }
    return null;
}

public Tree commonAncestor(Tree root, Tree p, Tree q) {
    List<Tree> pathToP = pathToNode(root, p),
               pathToQ = pathToNode(root, q);
    // check whether both paths exist
    if (pathToP == null || pathToQ == null) return null;
    // walk both paths in parallel until the nodes differ
    Iterator<Tree> iterP = pathToP.iterator(), iterQ = pathToQ.iterator();
    Tree ancestor = null;
    while (iterP.hasNext() && iterQ.hasNext()) {
        Tree a = iterP.next(), b = iterQ.next();
        if (a != b) break;
        ancestor = a;   // last node the two paths have in common
    }
    return ancestor;
}
Both pathToNode and commonAncestor are in O(n).
