Sorting the elements in Binary trees - algorithm

Here is a question I was recently asked in an interview. A binary tree is given with the condition that each left child is 1 smaller than its parent and each right child is 1 larger. Here is a sample tree.
Sort it in O(1) space and O(n) time complexity.
Here are the approaches I suggested:
Use a count to maintain the count of each element and then emit the sorted result once the entire traversal is done: O(n) time and O(n) space complexity (a rough sketch is below this list).
Use run-length encoding. Form a chain when an element is repeated, with the number as the key and the count as the value. This requires space for a count only when a number is repeated, so it needs no extra space apart from the array, but the time complexity will be O(n log n) since we have to search the array to see whether the number is already there.
Finally I suggested breadth-first traversal. We require O(log n) space for the queue and O(n) time complexity (assuming O(1) insertion into a linked list).
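To make the first approach concrete, here is a rough sketch (not from the original interview answer; the Node class is a placeholder). Because every child differs from its parent by exactly 1, all values fall in a range of at most n distinct integers, so a counting array keeps the whole thing at O(n) time and O(n) space:
import java.util.ArrayDeque;
import java.util.Deque;

class CountingSketch {
    static class Node { int value; Node left, right; }

    static int[] sortValues(Node root) {
        if (root == null) return new int[0];
        // One pass to collect the nodes and the min/max of the value range.
        Deque<Node> stack = new ArrayDeque<Node>();
        Deque<Node> all = new ArrayDeque<Node>();
        stack.push(root);
        int n = 0, min = Integer.MAX_VALUE, max = Integer.MIN_VALUE;
        while (!stack.isEmpty()) {
            Node cur = stack.pop();
            all.push(cur);
            n++;
            min = Math.min(min, cur.value);
            max = Math.max(max, cur.value);
            if (cur.left != null) stack.push(cur.left);
            if (cur.right != null) stack.push(cur.right);
        }
        // Counting sort over the value range, which is at most n wide by the tree property.
        int[] count = new int[max - min + 1];
        for (Node node : all) count[node.value - min]++;
        int[] sorted = new int[n];
        for (int v = min, i = 0; v <= max; v++)
            for (int c = 0; c < count[v - min]; c++) sorted[i++] = v;
        return sorted;
    }
}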
What are your approaches?

Fix some leaf node of the given tree as NewHead.
Write a function Pop() that removes a node from the given tree.
Write Pop() so that it removes a node only when the node is not equal to NewHead.
So pop a value from the tree and insert it into a new binary search tree with NewHead as the head node.
You keep removing an element from the tree and adding it to the new search tree,
until the tree's head points to NewHead.
All your elements are now in a binary search tree rooted at NewHead, which is
obviously in sorted order when read with an in-order traversal.
This approach promises a sort in O(N log N).
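A rough Java sketch of that procedure (my own illustration; the answer gives no code, and the O(N log N) bound holds only if the new BST stays reasonably balanced):
class PopInsertSortSketch {
    static class Node { int value; Node left, right; Node(int v) { value = v; } }

    // Standard BST insert into the new search tree.
    static Node insert(Node root, int value) {
        if (root == null) return new Node(value);
        if (value < root.value) root.left = insert(root.left, value);
        else root.right = insert(root.right, value);
        return root;
    }

    // Pop (detach) some leaf of the original tree; tree[0] holds its current root.
    static Node popLeaf(Node[] tree) {
        Node parent = null, cur = tree[0];
        while (cur.left != null || cur.right != null) {
            parent = cur;
            cur = (cur.left != null) ? cur.left : cur.right;
        }
        if (parent == null) tree[0] = null;              // popped the last remaining node
        else if (parent.left == cur) parent.left = null;
        else parent.right = null;
        return cur;
    }

    // Move every node of the original tree into a new BST; an in-order traversal
    // of the result then yields the values in sorted order.
    static Node rebuildAsBst(Node originalRoot) {
        Node[] tree = { originalRoot };
        Node bst = null;
        while (tree[0] != null) bst = insert(bst, popLeaf(tree).value);
        return bst;
    }
}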

Analysis
Given your definition of a binary tree we have the following.
Each node has a parent P, an L-child, and an R-child, where:
L < N
R > N
P > N
We also can do this:
L < N AND R > N => L < N < R => L < R
L < N AND P > N => L < N < P => L < P
R > N AND P > N => N < MIN(P,R)
N < MIN(P,R) AND L < N => L < N < MIN(P,R)
And now let's try expanding it, N.L = Left-child of N:
N.L < N
N.R > N
N.P > N
N.L.L < N.L < MIN(N, N.L.R)
N.L.R > N.L > N.L.L
N.R.L < N.R < MIN(N, N.R.R)
N.R.R > N.R > N.R.L
IF N IS N.P LEFT-CHILD: N < N.P < MIN(N.P.P, N.P.R)
IF N IS N.P RIGHT-CHILD: N > N.P > N.P.L
Proposed Solution
This problem seems complex, but my solution uses merge sort after inserting the values in Left-Right-Parent traversal order, which helps the merge sort land somewhere between its average and best case, thanks to a small trick using the comparisons above.
First we collect the tree nodes in a list using Left-Right-Parent traversal, given the fact that N.L < N < MIN(N.R, N.P), and giving the parent a higher weight (assuming O(N.R) <= O(N.P)), with values decreasing each time we go to the left side: ... > N.R.R > N.R > N > N.L > N.L.L > ...
After collecting the tree nodes in that traversal order, the list has some sorted chunks, which will help the merge sort that we use next.
This solution works in: Time = O(n log n + n), Space = O(n)
Here is the algorithm written in Java (not tested):
import java.util.*;

class Node implements Comparable<Node>
{
public Node R;
public Node L;
public int value;
public Node (Node L, int val, Node R)
{
this.L = L;
this.value = val;
this.R = R;
}
@Override
public int compareTo(Node other)
{
return (other != null) ? Integer.compare(this.value, other.value) : 0;
}
}
class Main
{
private static Node head;
private static void recursive_collect (Node n, ArrayList<Node> list)
{
if (n == null) return;
if (n.L != null) recursive_collect (n.L, list);
if (n.R != null) recursive_collect (n.R, list);
list.add(n);
}
public static ArrayList<Node> collect ()
{
ArrayList<Node> list = new ArrayList<Node>();
recursive_collect (head, list);
return list;
}
// sorting the tree: O(n log n + n)
public static ArrayList<Node> sortTree ()
{
// Collecting nodes: O(n)
ArrayList<Node> list = collect();
// Merge Sort: O(n log n)
Collections.sort(list);
return list;
}
// The example in the picture you provided
public static void createTestTree ()
{
Node left1 = new Node (new Node(null,-2,null), -1, new Node(null,0,null));
Node left2 = new Node (new Node(null,-1,null), 0, new Node(null,1,null));
Node right = new Node (left2, 1, new Node(null,2,null));
head = new Node (left1, 0, right);
}
// test
public static void main(String [] args)
{
createTestTree ();
ArrayList<Node> list = sortTree ();
for (Node n : list)
{
System.out.println(n.value);
}
}
}

I guess you are looking for DFS (depth-first search).
In depth-first search the idea is to travel as deep as possible from neighbour to neighbour before backtracking. What determines how deep is possible is that you must follow edges, and you don't visit any vertex twice.
boost already provides it: see here

Use quick sort.
The nodes are sorted at the lowermost level into multiple arrays, and these arrays of sorted elements are merged in the end.
E.g.
Function quick_sort(node n)
1. Go to the left node; if it is not null, call quick_sort on it.
2. Go to the right node; if it is not null, call quick_sort on it.
3. Merge the results of the left node sort and the right node sort with the current node.
4. Return the merged array.
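A rough Java sketch of these steps (the Node class here is an assumption, since the answer gives only pseudocode):
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class SubtreeMergeSketch {
    static class Node { int value; Node left, right; }

    // Merge two already-sorted lists in linear time.
    static List<Integer> merge(List<Integer> a, List<Integer> b) {
        List<Integer> out = new ArrayList<Integer>(a.size() + b.size());
        int i = 0, j = 0;
        while (i < a.size() && j < b.size())
            out.add(a.get(i) <= b.get(j) ? a.get(i++) : b.get(j++));
        while (i < a.size()) out.add(a.get(i++));
        while (j < b.size()) out.add(b.get(j++));
        return out;
    }

    // Steps 1-4: sort both subtrees recursively, then merge them with the current node.
    static List<Integer> sortSubtree(Node n) {
        if (n == null) return new ArrayList<Integer>();
        List<Integer> left = sortSubtree(n.left);
        List<Integer> right = sortSubtree(n.right);
        return merge(merge(left, right), Collections.singletonList(n.value));
    }
}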

I'm not getting the question. Aren't binary trees already sorted? If you'd like to print out the items in order (or access them in order), this code would work
/**
* Show the contents of the BST in order
*/
public void show () {
show(root);
System.out.println();
}
private static void show(TreeNode node) {
if (node == null) return;
show(node.lchild);
System.out.print(node.datum + " ");
show(node.rchild);
}
I believe this would be O(n) complexity. To return a list instead of printing, just create one and replace each print statement by adding the item to the list, as in the sketch below.
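A possible list-returning variant, following the structure of the code above (the element type of the list is an assumption, since the datum type isn't shown):
public List<Object> toList() {
    List<Object> items = new ArrayList<Object>();
    collect(root, items);
    return items;
}
private static void collect(TreeNode node, List<Object> items) {
    if (node == null) return;
    collect(node.lchild, items);
    items.add(node.datum);       // add instead of printing
    collect(node.rchild, items);
}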

Related

Split a Binary Tree using specific methods

Given a binary tree, I have to return a tree containing all elements smaller than k, a tree containing all elements greater than k, and a tree containing only one element: k.
Allowed methods to use:
remove node - O(n)
insert - O(n)
find - O(n)
find min - O(n)
I'm assuming these complexities for the methods, because the exercise does not say the tree is balanced.
Required complexity - O(n)
The original tree has to maintain its structure.
I'm completely stuck. Any help is much appreciated!
The given tree is a binary search tree, and the outputs should be binary search trees as well.
I see no way to design an O(n) algorithm with the given black-box functions and their time complexities, given that they could only be called a (maximum) constant number of times (like 3 times) to stay within the O(n) constraint.
But if it is allowed to access and create BSTs with basic, standard node manipulations (traversing via left or right child, setting the left or right child to a given subtree), then you could do the following:
Create three new empty BSTs that will be populated and returned. Name them left, mid, and right, where the first one will have all values less than k, the second one will have at most one node (with value k), and the final one will have all the rest.
While populating left and right, maintain references to the nodes that are closest to value k: in left that will be the node with the greatest value, and in right the node with the least value.
Follow these steps:
Apply the usual binary search to walk from the root towards the node with value k
While doing this: whenever you choose the left child of a node, the node itself and its right subtree belong in right. However, the left child should not be included at this point, so create a new node that copies the current node, but without its left child. Maintain a reference to the node with the least value in right, as that is the node that may get a new left subtree when this step occurs more than once.
Do the same when you choose the right child of a node.
When the node with k is found, the algorithm can add its left subtree to left and the right subtree to right, and create the single-node tree with value k.
Time complexity
The search towards the node with value k could take O(n) in the worst case, as the BST is not given to be balanced. All the other actions (adding a subtree to a specific node in one of the new BSTs) run in constant time, so in total they are executed O(n) times in the worst case.
If the given BST is balanced (not necessarily perfectly, but like with AVL rules), then the algorithm runs in O(logn) time. However, the output BSTs may not be as balanced, and may violate AVL rules so that rotations would be needed.
Example Implementation
Here is an implementation in JavaScript. When you run this snippet, a test case will run on a BST that has nodes with values 0..19 (inserted in random order) and k=10. The output iterates the three created BSTs in-order, to verify that they output 0..9, 10, and 11..19 respectively:
class Node {
constructor(value, left=null, right=null) {
this.value = value;
this.left = left;
this.right = right;
}
insert(value) { // Insert as a leaf, maintaining the BST property
if (value < this.value) {
if (this.left !== null) {
return this.left.insert(value);
}
this.left = new Node(value);
return this.left;
} else {
if (this.right !== null) {
return this.right.insert(value);
}
this.right = new Node(value);
return this.right;
}
}
// Utility function to iterate the BST values in in-order sequence
* [Symbol.iterator]() {
if (this.left !== null) yield * this.left;
yield this.value;
if (this.right !== null) yield * this.right;
}
}
// The main algorithm
function splitInThree(root, k) {
let node = root;
// Variables for the roots of the trees to return:
let left = null;
let mid = null;
let right = null;
// Reference to the nodes that are lexically closest to k:
let next = null;
let prev = null;
while (node !== null) {
// Create a copy of the current node
const newNode = new Node(node.value);
if (k < node.value) {
// All nodes at the right go with it, but it gets no left child at this stage
newNode.right = node.right;
// Merge this with the tree we are creating for nodes with value > k
if (right === null) {
right = newNode;
} else {
next.left = newNode;
}
next = newNode;
node = node.left;
} else if (k > node.value) {
// All nodes at the left go with it, but it gets no right child at this stage
newNode.left = node.left;
// Merge this with the tree we are creating for nodes with value < k
if (left === null) {
left = newNode;
} else {
prev.right = newNode;
}
prev = newNode;
node = node.right;
} else {
// Create the root-only tree for k
mid = newNode;
// The left subtree belongs in the left tree
if (left === null) {
left = node.left;
} else {
prev.right = node.left;
}
// ...and the right subtree in the right tree
if (right === null) {
right = node.right;
} else {
next.left = node.right;
}
// All nodes have been allocated to a target tree
break;
}
}
// return the three new trees:
return [left, mid, right];
}
// === Test code for the algorithm ===
// Utility function
function shuffled(a) {
for (let i = a.length - 1; i > 0; i--) {
const j = Math.floor(Math.random() * (i + 1));
[a[i], a[j]] = [a[j], a[i]];
}
return a;
}
// Create a shuffled array of the integers 0...19
let arr = shuffled([...Array(20).keys()]);
// Insert these values into a new BST:
let root = new Node(arr.pop());
for (let val of arr) root.insert(val);
// Apply the algorithm with k=10
let [left, mid, right] = splitInThree(root, 10);
// Print out the values from the three BSTs:
console.log(...left); // 0..9
console.log(...mid); // 10
console.log(...right); // 11..19
Essentially, your goal is to create a valid BST where k is the root node; in this case, the left subtree is a BST containing all elements less than k, and the right subtree is a BST containing all elements greater than k.
This can be achieved by a series of tree rotations:
First, do an O(n) search for the node of value k, building a stack of its ancestors up to the root node.
While there are any remaining ancestors, pop one from the stack, and perform a tree rotation making k the parent of this ancestor.
Each rotation takes O(1) time, so this algorithm terminates in O(n) time, because there are at most O(n) ancestors. In a balanced tree, the algorithm takes O(log n) time, although the result is not a balanced tree.
In your question you write that "insert" and "remove" operations take O(n) time, but that this is your assumption, i.e. it is not stated in the question that these operations take O(n) time. If you are operating only on nodes you already have pointers to, then basic operations take O(1) time.
If it is required not to destroy the original tree, then you can begin by making a copy of it in O(n) time.
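For illustration, one rotation step of that procedure could look like the sketch below (my own code, assuming a plain Node with left/right fields; the caller re-attaches the returned subtree root to the grandparent, or makes it the overall root):
class RotationSketch {
    static class Node { int key; Node left, right; }

    // Rotate child up into its parent's place and return the new subtree root.
    static Node rotateUp(Node parent, Node child) {
        if (parent.left == child) {      // right rotation
            parent.left = child.right;
            child.right = parent;
        } else {                         // left rotation
            parent.right = child.left;
            child.left = parent;
        }
        return child;
    }
}
Applying this once per popped ancestor moves k one level up each time, so when the stack is empty, k is the root.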
I really don't see a simple and efficient way to split with the operations that you mention. But I think that achieving a very efficient split is relatively easy.
If the tree were balanced, then you can perform your split in O(log n) if you define a special operation called join exclusive. Let me first define join_ex() as the operation in question:
Node * join_exclusive(Node *& ts, Node *& tg)
{
if (ts == NULL)
return tg;
if (tg == NULL)
return ts;
tg->llink = join_exclusive(ts->rlink, tg->llink);
ts->rlink = tg;
Node * ret_val = ts;
ts = tg = NULL; // empty the trees
return ret_val;
}
join_ex() assumes that you want to build a new tree from two BSTs ts and tg such that every key in ts is less than every key in tg.
If you have two exclusive trees T< and T>, then join_ex() combines them into a single BST containing all their keys.
Note that if you take any node for any BST, then its subtrees meet this condition; every key in the left subtree is less than everyone in the right one. You can design a nice deletion algorithm based on join_ex().
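As a side note, a Java analog of such a join-based deletion might look like this (my sketch, assuming a plain Node with left/right; the C++ above remains the reference version):
class JoinSketch {
    static class Node { int key; Node left, right; }

    // Join two BSTs where every key in ts is smaller than every key in tg.
    static Node joinExclusive(Node ts, Node tg) {
        if (ts == null) return tg;
        if (tg == null) return ts;
        tg.left = joinExclusive(ts.right, tg.left);
        ts.right = tg;
        return ts;
    }

    // Delete the root: its left and right subtrees are exclusive by the BST property,
    // so joining them gives the tree without the old root.
    static Node removeRoot(Node root) {
        return root == null ? null : joinExclusive(root.left, root.right);
    }
}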
Now we are ready for the split operation:
void split_key_rec(Node * root, const key_type & key, Node *& ts, Node *& tg)
{
if (root == NULL)
{
ts = tg = NULL;
return;
}
if (key < root->key)
{
split_key_rec(root->llink, key, ts, root->llink);
tg = root;
}
else
{
split_key_rec(root->rlink, key, root->rlink, tg);
ts = root;
}
}
If you set root as the tree T, then split works as follows:
split_key_rec() splits the tree into two trees ts and tg according to a key k. At the end of the operation, ts contains a BST with keys less than k and tg is a BST with keys greater or equal than k.
Now, to complete your requirement, you call split_key_rec(t, k, ts, tg) and you get in ts a BST with all the keys less than k. Almost symmetrically, you get in tg a BST with all the keys greater or equal than k. So, the last thing is to verify if the root of tg is k and, if this is the case, you unlink, and you get your result in ts, k, and tg' (tg' is the tree without k).
If k is in the original tree, then the root of tg will be k, and tg won't have a left subtree.

Extract K largest elements from array of N integers in O(N + K) time

So, we have a list of N integers, from which we want to get K largest integers (unsorted).
The problem is, this needs to be able to run in O(N + K). That's what's stated in the assignment.
I've checked various algorithms and even made my own, but the best I can get is O((N-K)*K), which I don't think is close to O(N + K), unless I'm wrong.
Are there any algorithms out there that can do this in O(N + K), assuming the values in the list are pretty much random and all positive? (We don't know the max value they can achieve.)
Note that I need to find the K largest integers, not just the K-th largest: out of the N integers, I need K of them.
Example: N = 5, K = 2
Input: 5 6 8 9 3
Output: 9 8
A selection algorithm is used to find the kth largest element.
Median of Medians is an O(n) selection algorithm.
Therefore, there's a simple O(n) algorithm for your problem:
Let KTH be the kth largest element as returned by your selection algorithm. This takes O(n) time.
Scan the array and extract all elements >= KTH. This takes O(n) time.
Quickselect is another selection algorithm worth knowing about. It's based on quicksort, so it's only O(n) in the average case.
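To make the two steps concrete, here is a rough sketch that uses quickselect in place of median-of-medians (so the bound is expected O(n) rather than worst case); it partitions in place, so it returns exactly K elements even with duplicates. The class and names are mine:
import java.util.Random;

class TopK {
    // After this call, a[0..k-1] hold the k largest elements (in no particular order).
    static void selectKLargest(int[] a, int k) {
        if (k <= 0 || k >= a.length) return;         // nothing to partition
        Random rnd = new Random();
        int lo = 0, hi = a.length - 1;
        while (lo < hi) {
            int p = partitionDesc(a, lo, hi, lo + rnd.nextInt(hi - lo + 1));
            if (p == k - 1) return;
            if (p < k - 1) lo = p + 1;
            else hi = p - 1;
        }
    }

    // Lomuto partition in descending order; returns the pivot's final index.
    static int partitionDesc(int[] a, int lo, int hi, int pivotIdx) {
        int pivot = a[pivotIdx];
        swap(a, pivotIdx, hi);
        int store = lo;
        for (int i = lo; i < hi; i++)
            if (a[i] > pivot) swap(a, i, store++);
        swap(a, store, hi);
        return store;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] a = {5, 6, 8, 9, 3};
        selectKLargest(a, 2);
        System.out.println(a[0] + " " + a[1]);        // 9 and 8, in some order
    }
}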
The idea is to build a binary search tree, where each insertion can be done in O(log N), though in the worst case O(N) [where N is the total number of nodes/array elements in this case].
Now we can do an inorder traversal to get all elements in sorted order, which can be done in O(N) [Proof: Complexities of binary tree traversals].
Now traverse the last K elements of the sorted list (descending order).
Therefore, apart from building the tree, the overall work is O(N) + O(K) => O(N+K).
IMPLEMENTATION:
import java.util.ArrayList;
import java.util.List;
public class Solution{
static class BST{
int val;
BST left, right;
public BST(int val) {
this.val = val;
this.left = this.right = null;
}
}
// making bst from the array elements
static BST add(BST root, int item) {
if(root == null) return new BST(item);
if(root.val > item)
root.left = add(root.left, item);
else root.right = add(root.right, item);
return root;
}
// doing inorder to get all elements in sorted order
static void inorder(BST root, List<Integer> list) {
if(root.left != null)
inorder(root.left, list);
list.add(root.val);
if(root.right != null)
inorder(root.right, list);
}
public static void main(String[] args) {
//Example: N = 5, K = 2 Input: 5 6 8 9 3 Output: 9 8
int [] a = {1, 9, 2, 7, 3, -1, 0, 5, 11};
BST root = null;
for(int i=0; i<a.length; i++) {
root = add(root, a[i]);
}
List<Integer> list = new ArrayList<Integer>();
inorder(root, list);
// process the list K times, to get K-th largest elements
}
}
NB: in case of duplicate values, you have to make a sub-list for each node!
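For completeness, the commented "process the list K times" step in main could look like this (just a sketch; K=2 here, matching the example in the question):
int K = 2;
List<Integer> largest = new ArrayList<Integer>();
for (int i = list.size() - 1; i >= list.size() - K; i--) {
    largest.add(list.get(i));   // walk the sorted list backwards, K steps
}
System.out.println(largest);    // [11, 9] for the array above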

Finding smallest (or largest) k elements in a given balanced binary search tree

Given a balanced binary search tree with integer nodes, I need to write an algorithm to find the smallest k elements and store them in a linked list or array. The tricky part is that the algorithm is required to run in O(k+log(n)), where n is the number of elements in the tree. I only have an algorithm that runs in O(k*log(n)), which uses the rank function. So my question is how to achieve the required performance?
I've written code for such an algorithm, but I don't know whether it runs in O(k+log(n)):
(The size function returns the number of nodes in the given subtree.)
// find k smallest elements in the tree
public Iterable<Key> kSmallest(int k) {
LinkedList<Key> keys = new LinkedList<Key>();
kSmallest(k, root, keys);
return keys;
}
// find k smallest elements in the subtree given by node and add them to keys
private void kSmallest(int k, Node node, LinkedList<Key> keys) {
if (k <= 0 || node == null) return;
if (node.left != null) {
if (size(node.left) >= k) kSmallest(k, node.left, keys);
else {
keys.add(node.key);
kSmallest(k - 1, node.left, keys);
kSmallest(k - 1 - size(node.left), node.right, keys);
}
}
else {
keys.add(node.key);
kSmallest(k - 1, node.right, keys);
}
}
You just have to do an inorder traversal and stop when you have gone through k nodes.
This would run in O(k+log(n)) time.
code:
int k = nodesRequired;
LinkedList<Key> A = new LinkedList<Key>();
int number_of_nodes = 0;
void traverse_tree(Node l) {
// stop descending once k keys have been collected
if (l == null || number_of_nodes >= k) return;
traverse_tree(l.left);
if (number_of_nodes < k) process_item(l.key);
traverse_tree(l.right);
}
void process_item(Key item) {
A.add(item);
++number_of_nodes;
}

Finding the left-most child for every node in a tree in linear time?

A paper I am reading claims that
It is easy to see that there is a linear time algorithm to compute the function l()
where l() gives the left-most child (both input and output are in postorder traversal of the tree). However, I can only think of a naive O(n^2) implementation where n is the number of nodes in the tree.
As an example, consider the following tree:
a
/ \
c b
In postorder traversal, the tree is c b a. The corresponding function l() should give c b c.
Here is my implementation in O(n^2) time.
public Object[] computeFunctionL(){
ArrayList<String> l= new ArrayList<String>();
l= l(this, l);
return l.toArray();
}
private ArrayList<String> l(Node currentRoot, ArrayList<String> l){
for (int i= 0; i < currentRoot.children.size(); i++){
l= l(currentRoot.children.get(i), l);
}
while(currentRoot.children.size() != 0){
currentRoot= currentRoot.children.get(0);
}
l.add(currentRoot.label);
return l;
}
The tree is made as:
public class Node {
private String label;
private ArrayList<Node> children= new ArrayList<Node>();
...
There is a simple recursive algorithm you can use that can compute this information in O(1) time per node. Since there are n total nodes, this would run in O(n) total time.
The basic idea is the following recursive insight:
For any node n with no left child, l(n) = n.
Otherwise, if n has left child L, then l(n) = l(L).
This gives rise to this recursive algorithm, which annotates each node with its l value:
function computeL(node n) {
if n is null, return.
computeL(n.left)
computeL(n.right)
if n has no left child:
set n.l = n
else
set n.l = n.left.l
}
Hope this helps!
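For a binary tree (the setting this answer assumes), the same recursion can be written in Java roughly like this; the BNode class and its l field are my own illustration:
class BNode {
    BNode left, right;
    BNode l;                                   // leftmost descendant, filled in by computeL

    static void computeL(BNode n) {
        if (n == null) return;
        computeL(n.left);
        computeL(n.right);
        n.l = (n.left == null) ? n : n.left.l; // no left child: n is its own leftmost node
    }
}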
You can find l() for the entire tree in less than O(n^2) time. The idea is to traverse the tree in order, maintaining a stack of the nodes you've visited while traversing the left branch. When you get to a leaf, that is the leftmost node for the entire branch.
Here's an example:
class BTreeNode
{
public readonly int Value;
public BTreeNode LeftChild { get; private set; }
public BTreeNode RightChild { get; private set; }
}
void ShowLeftmost(BTreeNode node, Stack<int> stack)
{
if (node.LeftChild == null)
{
// this is the leftmost node of every node on the stack
while (stack.Count > 0)
{
var v = stack.Pop();
Console.WriteLine("Leftmost node of {0} is {1}", v, node.Value);
}
}
else
{
// push this value onto the stack so that
// we can add its leftmost node when we find it.
stack.Push(node.Value);
ShowLeftmost(node.LeftChild, stack);
}
if (node.RightChild != null)
ShowLeftmost(node.RightChild, stack);
}
The complexity is clearly not O(n^2). Rather, it's O(n).
It takes O(n) to traverse the tree. No node is placed on the stack more than once. The worst case for this algorithm is a tree that contains all left nodes. In that case it's O(n) to traverse the tree and O(n) to enumerate the stack. The best case is a tree that contains all right nodes, in which case there is never any stack to enumerate.
So O(n) time complexity, with O(n) worst case extra space for the stack.
Take a look at section 3.1:
3.1. Notation. Let T[i] be the ith node in the tree according to the left-to-right
postorder numbering, l(i) is the number of the leftmost leaf descendant of the subtree
rooted at T[i].
Given that sentence about notation, I would assume that the function l() is referring to finding a single node in linear time.
There may be a more elegant (better than O(n^2)) way of finding l() for an entire tree but I think it's referring to a single node.

Generating all possible topologies in a full binary tree having n nodes

I want to create all possible topologies of a full binary tree which must have exactly n+1 leaf nodes and n internal nodes.
I want to create them using recursion, and the tree must be a simple binary tree, not a binary search tree (BST).
Kindly suggest an algorithm to achieve this task.
example: with 4 leaf nodes and 3 internal nodes.
N N N
/ \ / \ /\
N N N N N N
/\ /\ /\ /\
N N N N N N N N
/ \ /\
N N N N
PS: There is a similar thread. It would be helpful if anybody could elaborate on the tree generation algorithm suggested by coproc in that thread.
Thanks in advance.
Here is the code for generating all possible topologies for a given n. The total number of nodes (internal + leaf nodes) in a full binary tree is odd.
If a tree is a full binary tree, then its left and right subtrees are also full binary trees, i.e. both the left and right subtrees have an odd number of nodes.
For a given n, we generate all combinations of full binary trees like this:
First Iteration: 1 Node on left hand side, 1 root, n-2 on right hand side.
Second Iteration: 3 Nodes on left hand side, 1 root, n-4 on right hand side.
Third Iteration: 5 Nodes on left hand side, 1 root, n-6 on right hand side.
.
.
.
Last Iteration: n-2 Nodes on left hand side, 1 root, 1 on right hand side
In each iteration, we find all possible left trees and right trees. If L full trees are possible on the left hand side and R full trees are possible on the right hand side, then the total number of trees is L*R.
public void createAllTopologies(int n){
if(n%2 == 0) return;
Map<Integer, List<Node>> topologies = new HashMap<Integer, List<Node>>();
topologies.put(1, new ArrayList<Node>());
topologies.get(1).add(new Node(1));
for(int i=3;i<=n;i+=2){
List<Node> list = new ArrayList<Node>();
for(int j=1;j<i;j+=2){
List<Node> left = topologies.get(j);
List<Node> right = topologies.get(i-j-1);
list.addAll(generateAllCombinations(left,right));
}
topologies.put(i, list);
}
List<Node> result = topologies.get(n);
for(int i=0;i<result.size();i++){
printTree(result.get(i),0);
System.out.println("-----------------------------");
}
}
private List<Node> generateAllCombinations(List<Node> left, List<Node> right) {
List<Node> list = new ArrayList<Node>();
for(int i=0;i<left.size();i++){
for(int j=0;j<right.size();j++){
Node nNode = new Node(1);
nNode.left = left.get(i).clone();
nNode.right = right.get(j).clone();
list.add(nNode);
}
}
return list;
}
/* This prints tree from left to right fashion i.e
root at left,
leftNode at bottom,
rightNode at top
*/
protected void printTree(Node nNode,int pos){
if (nNode==null) {
for(int i=0;i<pos;i++) System.out.print("\t");
System.out.println("*");
return;
}
printTree(nNode.right,pos+1);
for(int i=0;i<pos;i++) System.out.print("\t");
System.out.println(nNode.data);
printTree(nNode.left,pos+1);
}
Please refer to complete code here - http://ideone.com/Wz2Jhm
I am using recursion here. This code can be improved by using dynamic programming. I hope this helps. We start with one node on the left, one node as the root, and N-i-1 nodes on the right; then we move nodes from the right to the left in pairs.
import java.util.ArrayList;
import java.util.List;
public class NumberOfFullBTrees {
public static void main(String[] args) {
int N = 5;
NumberOfFullBTrees numberOfFullBTrees = new NumberOfFullBTrees();
List<TreeNode> result = numberOfFullBTrees.allPossibleFBT(N);
}
/**
* Builds all possible full binary trees. We start with 1 node on the left, 1 node at the root and n-2 nodes on the right.
* We move the nodes in pairs from right to left. This method uses recursion. This method can be improved by
* using dynamic programming.
*
* @param N The number of nodes
* @return The possible full binary trees with N nodes as a list
*/
public List<TreeNode> allPossibleFBT(int N) {
List<TreeNode> result = new ArrayList<>();
if (N % 2 == 0) {
return result;
}
if (N == 1) {
result.add(new TreeNode(0));
return result;
}
for (int i = 1; i < N; i += 2) {
List<TreeNode> lefts = allPossibleFBT(i);
List<TreeNode> rights = allPossibleFBT(N - i - 1);
for (TreeNode l : lefts) {
for (TreeNode r : rights) {
TreeNode root = new TreeNode(0);
root.left = l;
root.right = r;
result.add(root);
}
}
}
return result;
}
}
