Connect nodes at same level using constant extra space - algorithm

Objective: Write a function that connects all adjacent nodes at the same level in a binary tree. The structure of the given binary tree node is as follows.
struct node {
    int data;
    struct node* left;
    struct node* right;
    struct node* nextRight;
};
Initially, all the nextRight pointers point to garbage values. The function should set these pointers so that each node points to its next right node at the same level (NULL if no such node exists).
Solution:
void connectRecur(struct node* p);
struct node *getNextRight(struct node *p);

// Sets the nextRight of root and calls connectRecur() for other nodes
void connect(struct node *p)
{
    // Set the nextRight for root
    p->nextRight = NULL;

    // Set the next right for the rest of the nodes (other than root)
    connectRecur(p);
}

/* Set next right of all descendants of p. This function makes sure that
   nextRight of nodes at level i is set before level i+1 nodes. */
void connectRecur(struct node* p)
{
    // Base case
    if (!p)
        return;

    /* Before setting nextRight of left and right children, set nextRight
       of children of other nodes at the same level (because we can access
       children of other nodes using p's nextRight only) */
    if (p->nextRight != NULL)
        connectRecur(p->nextRight);

    /* Set the nextRight pointer for p's left child */
    if (p->left)
    {
        if (p->right)
        {
            p->left->nextRight = p->right;
            p->right->nextRight = getNextRight(p);
        }
        else
            p->left->nextRight = getNextRight(p);

        /* Recursively call for next level nodes. Note that we call only
           for the left child. The call for the left child will call for the right child */
        connectRecur(p->left);
    }
    /* If the left child is NULL then the first node of the next level will either be
       p->right or getNextRight(p) */
    else if (p->right)
    {
        p->right->nextRight = getNextRight(p);
        connectRecur(p->right);
    }
    else
        connectRecur(getNextRight(p));
}

/* This function returns the first child found among the nodes at the same level as p.
   It is used to get the nextRight of p's right child.
   If the right child of p is NULL then this can also be used for the left child */
struct node *getNextRight(struct node *p)
{
    struct node *temp = p->nextRight;

    /* Traverse nodes at p's level and return the first child found */
    while (temp != NULL)
    {
        if (temp->left != NULL)
            return temp->left;
        if (temp->right != NULL)
            return temp->right;
        temp = temp->nextRight;
    }

    // If all the nodes at p's level are leaf nodes then return NULL
    return NULL;
}
What will be the time complexity of this solution?

It is O(n^2) because of the getNextRight.
The easiest way to see it is to consider a complete binary tree. The number of leaves is about n/2, i.e. O(n), and getNextRight is called for each leaf.
The first getNextRight is going to be for the last leaf on the right. That takes no passes through the while loop.
Next, you call getNextRight for the next to last leaf on the right. That takes 1 pass through the while loop.
For the next leaf you get 2 passes through the while loop. And so on... you get O(1 + 2 + 3 + ... + n/2) which is O(n^2).
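For concreteness, summing those while-loop passes over the roughly n/2 leaves gives 0 + 1 + 2 + ... + (n/2 - 1) = (n/2)((n/2) - 1)/2, which is Θ(n^2).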
Also, the space complexity is not really constant. Because of the recursion it is O(log n) if the tree is balanced (the call stack can be about log n deep), and even worse if the tree is not balanced.

What appears to me is that you are connecting the nodes level by level, starting from the root node.
At any level, since the parents have already been connected, you simply reach the next branch through the parents. Hence you access each node at most once (or twice, through a child).
Hence, to me, the time complexity seems to be O(n), where n is the total number of nodes in the tree.

I provided an answer to a similar question here on Stack Overflow, using level-order traversal. Space and time complexity are both O(n).
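For reference, here is a minimal sketch of such a level-order traversal, written in C++ against the struct node from the question; the function name connectLevelOrder and the explicit std::queue are illustrative, and the queue is what makes the O(n) extra space visible:

#include <cstddef>
#include <queue>

// Links nextRight pointers level by level; O(n) time, O(n) extra space for the queue.
void connectLevelOrder(struct node* root)
{
    if (root == NULL)
        return;
    std::queue<struct node*> q;
    q.push(root);
    while (!q.empty()) {
        std::size_t levelSize = q.size();    // number of nodes on the current level
        struct node* prev = NULL;
        for (std::size_t i = 0; i < levelSize; ++i) {
            struct node* cur = q.front();
            q.pop();
            if (prev != NULL)
                prev->nextRight = cur;       // link the previous node on this level to cur
            prev = cur;
            if (cur->left)  q.push(cur->left);
            if (cur->right) q.push(cur->right);
        }
        prev->nextRight = NULL;              // the last node on each level points to NULL
    }
}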

Related

How do I calculate space complexity for checking if a binary search tree is balanced?

I want to check whether the binary search tree provided to me is balanced or not.
Since the tree already exists in memory, no additional storage is needed while checking whether it is balanced by traversing through the nodes.
So I assume the space complexity would be O(1).
Is that correct?
// Definition for binary tree
class TreeNode {
    int val;
    TreeNode left;
    TreeNode right;
    TreeNode(int x) {
        val = x;
    }
}

public class Solution {
    public boolean isBalanced(TreeNode root) {
        if (root == null)
            return true;
        if (getHeight(root) == -1)
            return false;
        return true;
    }

    public int getHeight(TreeNode root) {
        if (root == null)
            return 0;
        int left = getHeight(root.left);
        int right = getHeight(root.right);
        if (left == -1 || right == -1)
            return -1;
        if (Math.abs(left - right) > 1) {
            return -1;
        }
        return Math.max(left, right) + 1;
    }
}
Your algorithm is O(height) space, not O(1) - you need to store 1 recursive function call in memory for every node as you recurse down the tree, and you can never go deeper than O(height) in the tree (before you return and can get rid of the already-fully-processed nodes).
For a tree like the one in the Wikipedia binary search tree illustration (root 8 with children 3 and 10; 3 has children 1 and 6; 6 has children 4 and 7; 10 has right child 14; 14 has left child 13):
Your calls will go like this:
getHeight(8)
    getHeight(3)
        getHeight(1)
        getHeight(6)
            getHeight(4)
            getHeight(7)
    getHeight(10)
        getHeight(14)
            getHeight(13)
Here, getHeight(8) calls getHeight(3), which calls getHeight(1) and getHeight(6), etc.
After we've finished calling the function for 1 (returning the result to the call for 3), we don't need to keep that in memory any more, thus the maximum number of calls that we keep in memory is equal to the height of the tree.
It can be done in O(1) space, but it's fairly complicated and really slow (much slower than the O(n) that your solution takes).
The key is that we have a binary search tree, so, given a node, we'd know whether that node is in the left or right subtree of any ancestor node, which we can use to determine the parent of a node.
I may create a separate Q&A going into a bit more details at some point.

The Great Tree-List recursion problem

I came across an interesting problem known as the Great Tree-List Problem. The problem is as follows:
In an ordered binary tree, each node contains a single data element and "small" and "large" pointers to sub-trees. All the nodes in the "small" sub-tree are less than or equal to the data in the parent node. All the nodes in the "large" sub-tree are greater than the parent node. A circular doubly linked list consists of previous and next pointers.
The problem is to take an ordered binary tree and rearrange the internal pointers to make a circular doubly linked list out of it. The "small" pointer should play the role of "previous" and the "large" pointer should play the role of "next". The list should be arranged so that the nodes are in increasing order. I have to write a recursive function and return the head pointer of the new list.
The operation should be done in O(n) time.
I understand that recursion will go down the tree, but how do I recursively change the small and large sub-trees into lists, and how do I append those lists together with the parent node?
How should I approach the problem? I just need a direction to solve it.
The idea is to create a method that converts a tree node containing subtrees (children nodes) into a loop. And given a node that has converted children (e.g. after recursive calls came back), you create a new loop by pointing the large pointer (next) of the largest node to the smallest node, and the small pointer of the smallest node to the largest node.
May not be complete, but it will be close to this:
tree_node {
    small
    large
}

convert(node) {
    // base case: leaf node; make it a one-element circular list
    if node.small == null && node.large == null
        node.small = node
        node.large = node
        return (node, node)

    smallest = node
    largest = node

    // recursively convert the "small" subtree and splice it before node
    if node.small != null
        (left_head, left_tail) = convert(node.small)
        left_tail.large = node
        node.small = left_tail
        smallest = left_head

    // recursively convert the "large" subtree and splice it after node
    if node.large != null
        (right_head, right_tail) = convert(node.large)
        node.large = right_head
        right_head.small = node
        largest = right_tail

    // close the ring: the largest node wraps around to the smallest
    largest.large = smallest
    smallest.small = largest

    // return pointers to the absolute smallest and largest of this subtree
    return (smallest, largest)
}
//actually doing it
convert(tree.root)
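A runnable C++ sketch of the same idea, phrased as "convert each subtree to a circular list, then join the rings"; the TreeNode type and the helper names join and treeToList are illustrative, not part of the original problem statement:

struct TreeNode {
    int value;
    TreeNode* small;   // left subtree pointer; becomes "previous" in the list
    TreeNode* large;   // right subtree pointer; becomes "next" in the list
};

// Joins two circular doubly linked lists (either may be null) and returns the head.
TreeNode* join(TreeNode* a, TreeNode* b)
{
    if (a == nullptr) return b;
    if (b == nullptr) return a;
    TreeNode* aTail = a->small;   // in a circular list, head->small is the tail
    TreeNode* bTail = b->small;
    aTail->large = b;             // splice ring b after ring a
    b->small = aTail;
    bTail->large = a;             // close the combined ring
    a->small = bTail;
    return a;
}

// Converts a BST into a sorted circular doubly linked list and returns its head.
TreeNode* treeToList(TreeNode* root)
{
    if (root == nullptr) return nullptr;
    TreeNode* left = treeToList(root->small);
    TreeNode* right = treeToList(root->large);
    root->small = root;           // turn the root into a one-node circular list
    root->large = root;
    return join(join(left, root), right);
}

Each node is handled a constant number of times, so the conversion stays O(n).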
The key to recursive programming is to imagine you already have the solution.
So, you already have a function recLink(Tree t) which receives a pointer to a tree, turns that tree into a doubly-linked circular list and returns a pointer to the list's head (leftmost) node:
recLink( n={Node: left, elt, right}) = // pattern match tree to a full node
rt := recLink( right); // already
lt := recLink( left); // have it
n.right := rt; n.left := lt.left; // middle node
lt.left.right := n; rt.left.right := lt; // right edges
lt.left := rt.left; rt.left := n;
return lt;
Finish up with the edge cases (empty child branches etc.). :)
Assuming you have a simple tree of 3 nodes
B <--- A ---> C
walk down the left and right sides, get the pointers for each node, then have
B -> C
B <- C
Since your tree is binary, it will be composed of 3-node "subtrees" that can recursively use this strategy.

Delete multiple nodes from a binary search tree in O(n) time

Consider a binary search tree where all keys are unique. Nodes do not have parent pointers.
We have up to n/2 marked nodes.
I can delete all of them in O(n^2) time (using a postorder traversal and, whenever a marked node is encountered, deleting it in O(n)). But that is too slow.
I need an algorithm to delete all marked nodes in O(n) time.
Thanks.
EDIT: After deletion, the order of the remaining nodes must be unchanged.
EDIT 2: So it should look as if I had deleted each marked node using the typical deletion procedure (finding the rightmost node in the left subtree and exchanging it with the node to delete).
There are many ways, but here is one that should be easy to get right, and give you a perfectly balanced tree as a side effect. It requires linear extra space, however.
Create an array of size N minus the number of marked nodes (or N, if you don't know how many are marked and don't want to check it).
Place the elements in order in the array, by inorder traversal, skipping the marked elements. This takes linear time, and stack space proportional to the height of the tree.
Rebuild the tree top-down using the array. The mid element in the array becomes the new root, the mid element of the left subarray its new left child, and the mid element of the right subarray its new right child. Repeat recursively. This takes linear time and logarithmic stack space.
Update: forgot to say skipping the marked elements, but that was obvious, right? ;)
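A minimal C++ sketch of those three steps; the Node layout (key, isToDelete, left, right) mirrors the code later in this thread, and the helper names collect and rebuild are just for illustration:

#include <vector>

struct Node {
    int key;
    bool isToDelete;
    Node* left;
    Node* right;
};

// Step 2: inorder traversal that collects the surviving nodes in sorted order.
void collect(Node* n, std::vector<Node*>& out)
{
    if (n == nullptr) return;
    collect(n->left, out);
    if (!n->isToDelete) out.push_back(n);
    collect(n->right, out);
}

// Step 3: rebuild a perfectly balanced BST from the sorted survivors.
Node* rebuild(std::vector<Node*>& v, int lo, int hi)
{
    if (lo > hi) return nullptr;
    int mid = lo + (hi - lo) / 2;
    Node* root = v[mid];
    root->left = rebuild(v, lo, mid - 1);
    root->right = rebuild(v, mid + 1, hi);
    return root;
}

// Usage sketch: collect(oldRoot, v); newRoot = rebuild(v, 0, (int)v.size() - 1);
// Free the marked nodes separately if they were heap-allocated.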
I have found how to do it!
We use an inorder traversal.
We check whether a node needs to be deleted before the recursive calls of the function.
When we find a node to delete, we set the flag toFindMax and search for the rightmost node in the left subtree.
If we encounter another node to delete, we push all our variables onto the stack and pop them once that node has been deleted.
When we find the maximum in the left subtree, we save a reference to it and to its parent.
And then, when we recursively return to the initial node to delete, we delete it (exchange it with the maximum).
static class LinearDeletion {
    public static Node MIN_VALUE = new Node(Integer.MIN_VALUE);
    boolean toFindMax = false;
    Node parentOfMax = null;
    Node max = MIN_VALUE;
    Stack<Object> stack = new Stack<>();

    public void perform(Node x, Node parent) {
        if (x.isToDelete) {
            stack.push(toFindMax);
            stack.push(parentOfMax);
            stack.push(max);
            toFindMax = true;
            parentOfMax = null;
            max = MIN_VALUE;
            if (x.left != null) {
                perform(x.left, x);
            }
            if (x.left == null) { // deletion of the node
                if (parent.left == x) {
                    parent.left = x.right;
                } else {
                    parent.right = x.right;
                }
            } else {
                if (x.right == null) {
                    if (parent.left == x) {
                        parent.left = x.left;
                    } else {
                        parent.right = x.left;
                    }
                } else {
                    x.key = max.key;
                    x.isToDelete = max.isToDelete;
                    if (parentOfMax != x) {
                        parentOfMax.right = max.left;
                    } else {
                        x.left = max.left;
                    }
                }
            } // end of deletion
            max = (Node) stack.pop();
            parentOfMax = (Node) stack.pop();
            toFindMax = (boolean) stack.pop();
            if (toFindMax) { // check if the current node is the maximum
                if (x.key > max.key) {
                    max = x;
                    parentOfMax = parent;
                }
            }
            if (x.right != null) {
                perform(x.right, x);
            }
        } else {
            if (x.left != null) {
                perform(x.left, x);
            }
            if (toFindMax) {
                if (x.key > max.key) {
                    max = x;
                    parentOfMax = parent;
                }
            }
            if (x.right != null) {
                perform(x.right, x);
            }
        }
    }
}
I don't see why a postorder traversal would be O(n^2). The reason that deleting a single node is O(n) is that you need to traverse the tree to find the node, which is an O(n) operation. But once you find a node, it can be deleted in O(1) time.* Thus, you can delete all O(n) marked nodes in a single traversal in O(n) time.
* Unless you need to maintain a balanced tree. However, you don't list that as a requirement.
EDIT As #njlarsson correctly points out in his comment, a delete operation is not normally O(1) even after the node is found. However, since the left and right subtrees are being traversed before visiting a node to be deleted, the minimum (or maximum) elements of the subtrees can be obtained at no additional cost during the subtree traversals. This enables an O(1) deletion.

Efficient algorithm to determine if an alleged binary tree contains a cycle?

One of my favorite interview questions is
In O(n) time and O(1) space, determine whether a linked list contains a cycle.
This can be done using Floyd's cycle-finding algorithm.
My question is whether it's possible to get such good time and space guarantees when trying to detect whether a binary tree contains a cycle. That is, if someone gives you a struct definition along the lines of
struct node {
node* left;
node* right;
};
How efficiently can you verify that the given structure is indeed a binary tree and not, say, a DAG or a graph containing a cycle?
Is there an algorithm that, given the root of a binary tree, can determine whether that tree contains a cycle in O(n) time and better than O(n) space? Clearly this could be done with a standard DFS or BFS, but this requires O(n) space. Can it be done in O(√n) space? O(log n) space? Or (the holy grail) in just O(1) space? I'm curious because in the case of a linked list this can be done in O(1) space, but I've never seen a correspondingly efficient algorithm for this case.
You can't even visit each node of a real, honest-to-god, cycle-free tree in O(1) space, so what you are asking for is clearly impossible. (Tricks with modifying a tree along the way are not O(1) space).
If you are willing to consider stack-based algorithms, then a regular tree walk can be easily modified along the lines of Floyd's algorithm.
it is possible to test in logarithmic space if two vertices of a graph belong to the same connected component (Reingold, Omer (2008), "Undirected connectivity in log-space", Journal of the ACM 55 (4): Article 17, 24 pages, doi:10.1145/1391289.1391291). A connected component is cyclic; hence if you can find two vertices in a graph that belong to the same connected component there is a cycle in the graph. Reingold published the algorithm 26 years after the question of its existence was first posed (see http://en.wikipedia.org/wiki/Connected_component_%28graph_theory%29). Having an O(1) space algorithm sounds unlikely given that it took 25 years to find a log-space solution. Note that picking two vertices from a graph and asking if they belong to a cycle is equivalent to asking if they belong to a connected component.
This algorithm can be extended to a log-space solution for graphs with out-degree 2 (OP: "trees"), as it is enough to check for every pair of a node and one of its immediate siblings whether they belong to the same connected component, and these pairs can be enumerated in O(log n) space using standard recursive tree descent.
If you assume that the cycle points to a node at the same depth or a smaller depth in the "tree", then you can do a BFS (iterative version) with two stacks, one for the turtle (1x speed) and one for the hare (2x speed). At some point, the hare's stack will either be empty (no cycle) or be a subset of the turtle's stack (a cycle was found). The time required is O(n k) and the space is O(lg n), where n is the number of used nodes and k is the time required to check the subset condition, which can be upper bounded by lg(n). Note that the initial assumption about the cycle does not constrain the original problem, since it is assumed to be a tree except for a finite number of arcs that form a cycle with previous nodes; a link to a deeper node in the tree would not form a cycle but would destroy the tree structure.
If it can be further assumed that the cycle points to an ancestor, then, the subset condition can be changed by checking that both stacks are equal, which is faster.
Visited Aware
You would need to redefine the structure as such (I am going to leave pointers out of this):
class node {
node left;
node right;
bool visited = false;
};
And use the following recursive algorithm (obviously re-working it to use a custom stack if your tree could grow big enough):
bool validate(node value)
{
if (value.visited)
return (value.visited = false);
value.visited = true;
if (value.left != null && !validate(value.left))
return (value.visited = false);
if (value.right != null && !validate(value.right))
return (value.visited = false);
value.visited = false;
return true;
}
Comments: It does technically have O(n) space; because of the extra field in the struct. The worst case for it would also be O(n+1) if all the values are on a single side of the tree and every value is in the cycle.
Depth Aware
When inserting into the tree you could keep track of the maximum depth:
struct node {
node left;
node right;
};
global int maximumDepth = 0;
void insert(node item) { insert(root, item, 1); }
void insert(node parent, node item, int depth)
{
if (depth > maximumDepth)
maximumDepth = depth;
// Do your insertion magic, ensuring to pass in depth + 1 to any other insert() calls.
}
bool validate(node value, int depth)
{
if (depth > maximumDepth)
return false;
if (value.left != null && !validate(value.left, depth + 1))
return false;
if (value.right != null && !validate(value.right, depth + 1))
return false;
return true;
}
Comments: The storage space is O(n+1) because we are storing the depth on the stack (as well as the maximum depth); the time is still O(n+1). This would do better on invalid trees.
Managed to get it right!
Runtime: O(n). I suspect it goes through an edge at most a constant number of times. No formal proof.
Space: O(1). Only stores a few nodes. Does not create new nodes or edges, only rearranges them.
Destructive: Yes. It flattens the tree, every node has the inorder successor as its right child, and null as the left.
The algorithm tries to flatten the binary tree by moving the whole left subtree of the current node to above it, making it the rightmost node of the subtree, then updating the current node to find further left subtrees in the newly discovered nodes. If we know both the left child and the predecessor of the current node, we can move the entire subtree in a few operations, in a similar manner to inserting a list into another. Such a move preserves the in-order sequence of the tree and it invariably makes the tree more right slanted.
There are three cases depending on the local configuration of nodes around the current one: left child is the same as the predecessor, left child is different from the predecessor, or no left subtree. The first case is trivial. The second case requires finding the predecessor, the third case requires finding a node on the right with a left subtree. Graphical representation helps in understanding them.
In the latter two cases we can run into cycles. Since we only traverse a chain of right children, we can use Floyd's cycle detection algorithm to find and report loops. Sooner or later every cycle will be rotated into such a form.
#include <cstdio>
#include <iostream>
#include <queue>
#define null NULL
#define int32 int
using namespace std;
/**
* Binary tree node class
**/
template <class T>
class Node
{
public:
/* Public Attributes */
Node* left;
Node* right;
T value;
};
/**
* This exception is thrown when the flattener & cycle detector algorithm encounters a cycle
**/
class CycleException
{
public:
/* Public Constructors */
CycleException () {}
virtual ~CycleException () {}
};
/**
* Binary tree flattener and cycle detector class.
**/
template <class T>
class Flattener
{
public:
/* Public Constructors */
Flattener () :
root (null),
parent (null),
current (null),
top (null),
bottom (null),
turtle (null)
{}
virtual ~Flattener () {}
/* Public Methods */
/**
* This function flattens an alleged binary tree, throwing a new CycleException when encountering a cycle. Returns the root of the flattened tree.
**/
Node<T>* flatten (Node<T>* pRoot)
{
init(pRoot);
// Loop while there are left subtrees to process
while( findNodeWithLeftSubtree() ){
// We need to find the topmost and rightmost node of the subtree
findSubtree();
// Move the entire subtree above the current node
moveSubtree();
}
// There are no more left subtrees to process, we are finished, the tree does not contain cycles
return root;
}
protected:
/* Protected Methods */
void init (Node<T>* pRoot)
{
// Keep track of the root node so the tree is not lost
root = pRoot;
// Keep track of the parent of the current node since it is needed for insertions
parent = null;
// Keep track of the current node, obviously it is needed
current = root;
}
bool findNodeWithLeftSubtree ()
{
// Find a node with a left subtree using Floyd's cycle detection algorithm
turtle = parent;
while( current->left == null and current->right != null ){
if( current == turtle ){
throw new CycleException();
}
parent = current;
current = current->right;
if( current->right != null ){
parent = current;
current = current->right;
}
if( turtle != null ){
turtle = turtle->right;
}else{
turtle = root;
}
}
return current->left != null;
}
void findSubtree ()
{
// Find the topmost node
top = current->left;
// The topmost and rightmost nodes are the same
if( top->right == null ){
bottom = top;
return;
}
// The rightmost node is buried in the right subtree of topmost node. Find it using Floyd's cycle detection algorithm applied to right childs.
bottom = top->right;
turtle = top;
while( bottom->right != null ){
if( bottom == turtle ){
throw new CycleException();
}
bottom = bottom->right;
if( bottom->right != null ){
bottom = bottom->right;
}
turtle = turtle->right;
}
}
void moveSubtree ()
{
// Update root; if the current node is the root then the top is the new root
if( root == current ){
root = top;
}
// Add subtree below parent
if( parent != null ){
parent->right = top;
}
// Add current below subtree
bottom->right = current;
// Remove subtree from current
current->left = null;
// Update current; step up to process the top
current = top;
}
Node<T>* root;
Node<T>* parent;
Node<T>* current;
Node<T>* top;
Node<T>* bottom;
Node<T>* turtle;
private:
Flattener (Flattener&);
Flattener& operator = (Flattener&);
};
template <class T>
void traverseFlat (Node<T>* current)
{
while( current != null ){
cout << dec << current->value << " # 0x" << hex << reinterpret_cast<int32>(current) << endl;
current = current->right;
}
}
template <class T>
Node<T>* makeCompleteBinaryTree (int32 maxNodes)
{
Node<T>* root = new Node<T>();
queue<Node<T>*> q;
q.push(root);
int32 nodes = 1;
while( nodes < maxNodes ){
Node<T>* node = q.front();
q.pop();
node->left = new Node<T>();
q.push(node->left);
nodes++;
if( nodes < maxNodes ){
node->right = new Node<T>();
q.push(node->right);
nodes++;
}
}
return root;
}
template <class T>
void inorderLabel (Node<T>* root)
{
int32 label = 0;
inorderLabel(root, label);
}
template <class T>
void inorderLabel (Node<T>* root, int32& label)
{
if( root == null ){
return;
}
inorderLabel(root->left, label);
root->value = label++;
inorderLabel(root->right, label);
}
int32 main (int32 argc, char* argv[])
{
if(argc||argv){}
typedef Node<int32> Node;
// Make binary tree and label it in-order
Node* root = makeCompleteBinaryTree<int32>(1 << 24);
inorderLabel(root);
// Try to flatten it
try{
Flattener<int32> F;
root = F.flatten(root);
}catch(CycleException*){
cout << "Oh noes, cycle detected!" << endl;
return 0;
}
// Traverse its flattened form
// traverseFlat(root);
}
As Karl said, by definition a "tree" is free of cycles. But I still get the spirit in which the question is asked. Why do you need fancy algorithms for detecting a cycle in any graph? You can simply run a BFS or DFS, and if you visit a node which has already been visited, it implies a cycle. This will run in O(n) time, but the space complexity is also O(n); I don't know if this can be reduced.
As mentioned by ChingPing, a simple DFS should do the trick. You would need to mark each node as visited (which needs some mapping from node reference to integer), and if re-entry is attempted on an already visited node, that means there is a cycle.
This is O(n) in memory though.
At first glance you can see that this problem would be solved by a non-deterministic application of Floyd's algorithm. So what happens if we apply Floyd's in a split-and-branch way?
Well we can use Floyd's from the base node, and then add an additional Floyd's at each branch.
So for each terminal path we have an instance of Floyd's algorithm which ends there. And if a cycle ever arises, there is a turtle which MUST have a corresponding hare chasing it.
So the algorithm finishes. (And as a side effect, each terminal node is only reached by one hare/turtle pair, so there are O(n) visits and thus O(n) time. Store the nodes which have been branched from; this doesn't increase the order of magnitude of memory and prevents memory blowouts in the case of cycles.)
Furthermore, doing this ensures the memory footprint is the same as the number of terminal nodes. The number of terminal nodes is O(log n) but O(n) in worst case.
TL;DR: apply Floyd's and branch at each time you have a choice to make:
time: O(n)
space: O(log n)
I don't believe there exists an algorithm for walking a tree with less than O(N) space. And, for a (purported) binary tree, it takes no more space/time (in "order" terms) to detect cycles than it does to walk the tree. I believe that DFS will walk a tree in O(N) time, so probably O(N) is the limit in both measures.
Okay after further thought I believe I found a way, provided you
know the number of nodes in advance
can make modifications to the binary tree
The basic idea is to traverse the tree with Morris inorder tree traversal, and count the number of visited nodes, both in the visiting phase and individual predecessor finding phases. If any of these exceeds the number of nodes, you definitely have a cycle. If you don't have a cycle, then it is equivalent to the plain Morris traversal, and your binary tree will be restored.
I'm not sure if it is possible without knowing the number of nodes in advance though. Will think about it more.
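A C++ sketch of that counting scheme (the node type and the bound checks are illustrative); as the answer notes, exceeding either counter definitely indicates a cycle, while on a genuine n-node tree neither counter can exceed n, so the traversal finishes and the threads it creates are all removed again:

#include <cstddef>

struct TNode {
    TNode* left;
    TNode* right;
};

// Morris inorder traversal with counters; n is the known number of nodes.
bool hasCycleMorris(TNode* root, std::size_t n)
{
    std::size_t visited = 0;                  // inorder visits performed so far
    TNode* cur = root;
    while (cur != nullptr) {
        if (cur->left == nullptr) {
            if (++visited > n) return true;   // more inorder visits than nodes
            cur = cur->right;
        } else {
            // Find the inorder predecessor of cur within its left subtree.
            TNode* pred = cur->left;
            std::size_t steps = 0;
            while (pred->right != nullptr && pred->right != cur) {
                if (++steps > n) return true; // predecessor chain longer than the tree
                pred = pred->right;
            }
            if (pred->right == nullptr) {
                pred->right = cur;            // create a temporary thread back to cur
                cur = cur->left;
            } else {
                pred->right = nullptr;        // remove the thread, restoring the tree
                if (++visited > n) return true;
                cur = cur->right;
            }
        }
    }
    return false;
}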

Create Balanced Binary Search Tree from Sorted linked list

What's the best way to create a balanced binary search tree from a sorted singly linked list?
How about creating nodes bottom-up?
This solution's time complexity is O(N). Detailed explanation in my blog post:
http://www.leetcode.com/2010/11/convert-sorted-list-to-balanced-binary.html
Two traversals of the linked list are all we need: the first traversal gets the length of the list (which is then passed in as the parameter n to the function), and then we create nodes in the list's order.
BinaryTree* sortedListToBST(ListNode *& list, int start, int end) {
if (start > end) return NULL;
// same as (start+end)/2, avoids overflow
int mid = start + (end - start) / 2;
BinaryTree *leftChild = sortedListToBST(list, start, mid-1);
BinaryTree *parent = new BinaryTree(list->data);
parent->left = leftChild;
list = list->next;
parent->right = sortedListToBST(list, mid+1, end);
return parent;
}
BinaryTree* sortedListToBST(ListNode *head, int n) {
return sortedListToBST(head, 0, n-1);
}
You can't do better than linear time, since you have to at least read all the elements of the list, so you might as well copy the list into an array (linear time) and then construct the tree efficiently in the usual way, i.e. if you had the list [9,12,18,23,24,51,84], then you'd start by making 23 the root, with children 12 and 51, then 9 and 18 become children of 12, and 24 and 84 become children of 51. Overall, should be O(n) if you do it right.
The actual algorithm, for what it's worth, is "take the middle element of the list as the root, and recursively build BSTs for the sub-lists to the left and right of the middle element and attach them below the root".
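A brief C++ sketch of that copy-to-array approach (the ListNode/TreeNode types and the function names here are illustrative):

#include <vector>

struct ListNode { int data; ListNode* next; };

struct TreeNode {
    int data;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int d) : data(d), left(nullptr), right(nullptr) {}
};

// Builds a balanced BST from vals[lo..hi], taking the middle element as the root.
TreeNode* buildFromArray(const std::vector<int>& vals, int lo, int hi)
{
    if (lo > hi) return nullptr;
    int mid = lo + (hi - lo) / 2;
    TreeNode* root = new TreeNode(vals[mid]);
    root->left = buildFromArray(vals, lo, mid - 1);
    root->right = buildFromArray(vals, mid + 1, hi);
    return root;
}

TreeNode* listToBalancedBST(ListNode* head)
{
    std::vector<int> vals;                       // copy the list into an array: O(n)
    for (ListNode* p = head; p != nullptr; p = p->next)
        vals.push_back(p->data);
    return buildFromArray(vals, 0, (int)vals.size() - 1);  // build top-down: O(n)
}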
Best isn't only about asymptotic run time. The sorted linked list has all the information needed to create the binary tree directly, and I think this is probably what they are looking for.
Note that the first and third entries become children of the second, then the fourth node has the second and sixth as children (the sixth in turn has the fifth and seventh as children), and so on...
In pseudocode:
read three elements, make a node from them, mark it as level 1, push it on the stack
loop
    read three elements and make a node of them
    mark it as level 1
    push it on the stack
    loop while the top two entries on the stack have the same level (n)
        make a node of the top two entries, mark it as level n + 1, push it on the stack
while elements remain in the list
(with a bit of adjustment for when there's less than three elements left or an unbalanced tree at any point)
EDIT:
At any point, there is a left node of height N on the stack. Next step is to read one element, then read and construct another node of height N on the stack. To construct a node of height N, make and push a node of height N -1 on the stack, then read an element, make another node of height N-1 on the stack -- which is a recursive call.
Actually, this means the algorithm (even as modified) won't produce a balanced tree. If there are 2^N+1 nodes, it will produce a tree with 2^N-1 values on the left, and 1 on the right.
So I think #sgolodetz's answer is better, unless I can think of a way of rebalancing the tree as it's built.
Trick question!
The best way is to use the STL, and take advantage of the fact that the sorted associative container ADT, of which set is an implementation, requires that insertion of a sorted range take amortized linear time. Any passable set of core data structures for any language should offer a similar guarantee. For a real answer, see the quite clever solutions others have provided.
What's that? I should offer something useful?
Hum...
How about this?
The smallest possible meaningful tree in a balanced binary tree is 3 nodes.
A parent, and two children. The very first instance of such a tree is the first three elements. Child-parent-Child. Let's now imagine this as a single node. Okay, well, we no longer have a tree. But we know that the shape we want is Child-parent-Child.
Done for a moment with our imaginings, we want to keep a pointer to the parent in that initial triumvirate. But it's singly linked!
We'll want to have four pointers, which I'll call A, B, C, and D. So, we move A to 1, set B equal to A and advance it one. Set C equal to B, and advance it two. The node under B already points to its right-child-to-be. We build our initial tree. We leave B at the parent of Tree one. C is sitting at the node that will have our two minimal trees as children. Set A equal to C, and advance it one. Set D equal to A, and advance it one. We can now build our next minimal tree. D points to the root of that tree, B points to the root of the other, and C points to the... the new root from which we will hang our two minimal trees.
How about some pictures?
[A][B][-][C]
With our image of a minimal tree as a node...
[B = Tree][C][A][D][-]
And then
[Tree A][C][Tree B]
Except we have a problem. The node two after D is our next root.
[B = Tree A][C][A][D][-][Roooooot?!]
It would be a lot easier on us if we could simply maintain a pointer to it instead of to it and C. Turns out, since we know it will point to C, we can go ahead and start constructing the node in the binary tree that will hold it, and as part of this we can enter C into it as a left-node. How can we do this elegantly?
Set the pointer of the Node under C to the node Under B.
It's cheating in every sense of the word, but by using this trick, we free up B.
Alternatively, you can be sane, and actually start building out the node structure. After all, you really can't reuse the nodes from the SLL, they're probably POD structs.
So now...
[TreeA]<-[C][A][D][-][B]
[TreeA]<-[C]->[TreeB][B]
And... Wait a sec. We can use this same trick to free up C, if we just let ourselves think of it as a single node instead of a tree. Because after all, it really is just a single node.
[TreeC]<-[B][A][D][-][C]
We can further generalize our tricks.
[TreeC]<-[B][TreeD]<-[C][-]<-[D][-][A]
[TreeC]<-[B][TreeD]<-[C]->[TreeE][A]
[TreeC]<-[B]->[TreeF][A]
[TreeG]<-[A][B][C][-][D]
[TreeG]<-[A][-]<-[C][-][D]
[TreeG]<-[A][TreeH]<-[D][B][C][-]
[TreeG]<-[A][TreeH]<-[D][-]<-[C][-][B]
[TreeG]<-[A][TreeJ]<-[B][-]<-[C][-][D]
[TreeG]<-[A][TreeJ]<-[B][TreeK]<-[D][-]<-[C][-]
[TreeG]<-[A][TreeJ]<-[B][TreeK]<-[D][-]<-[C][-]
We are missing a critical step!
[TreeG]<-[A]->([TreeJ]<-[B]->([TreeK]<-[D][-]<-[C][-]))
Becomes :
[TreeG]<-[A]->[TreeL->([TreeK]<-[D][-]<-[C][-])][B]
[TreeG]<-[A]->[TreeL->([TreeK]<-[D]->[TreeM])][B]
[TreeG]<-[A]->[TreeL->[TreeN]][B]
[TreeG]<-[A]->[TreeO][B]
[TreeP]<-[B]
Obviously, the algorithm can be cleaned up considerably, but I thought it would be interesting to demonstrate how one can optimize as you go by iteratively designing your algorithm. I think this kind of process is what a good employer should be looking for more than anything.
The trick, basically, is that each time we reach the next midpoint, which we know is a parent-to-be, we know that its left subtree is already finished. The other trick is that we are done with a node once it has two children and something pointing to it, even if all of the sub-trees aren't finished. Using this, we can get what I am pretty sure is a linear time solution, as each element is touched only 4 times at most. The problem is that this relies on being given a list that will form a truly balanced binary search tree. There are, in other words, some hidden constraints that may make this solution either much harder to apply, or impossible. For example, if you have an odd number of elements, or if there are a lot of non-unique values, this starts to produce a fairly silly tree.
Considerations:
Render the element unique.
Insert a dummy element at the end if the number of nodes is odd.
Sing longingly for a more naive implementation.
Use a deque to keep the roots of completed subtrees and the midpoints in, instead of mucking around with my second trick.
This is a python implementation:
def sll_to_bbst(sll, start, end):
    """Build a balanced binary search tree from sorted linked list.

    This assumes that you have a class BinarySearchTree, with properties
    'l_child' and 'r_child'.

    Params:
        sll: sorted linked list, any data structure with 'popleft()' method,
            which removes and returns the leftmost element of the list. The
            easiest thing to do is to use 'collections.deque' for the sorted
            list.
        start: int, start index, on initial call set to 0
        end: int, on initial call should be set to len(sll)

    Returns:
        A balanced instance of BinarySearchTree

    This is a python implementation of solution found here:
    http://leetcode.com/2010/11/convert-sorted-list-to-balanced-binary.html
    """
    if start >= end:
        return None

    middle = (start + end) // 2
    l_child = sll_to_bbst(sll, start, middle)
    root = BinarySearchTree(sll.popleft())
    root.l_child = l_child
    root.r_child = sll_to_bbst(sll, middle + 1, end)
    return root
Instead of a sorted linked list, I was asked about a sorted array (logically it doesn't matter, though the run time varies) to create a BST of minimal height. Following is the code I came up with:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Assumes a global sorted array a[] holding the input values. */

typedef struct Node {
    struct Node *left;
    int info;
    struct Node *right;
} Node_t;

Node_t* CreateNode(int info);

Node_t* Bin(int low, int high)
{
    Node_t* node = NULL;
    int mid = 0;

    if (low <= high) {
        mid = (low + high) / 2;
        node = CreateNode(a[mid]);
        printf("DEBUG: creating node for %d\n", a[mid]);
        if (node->left == NULL) {
            node->left = Bin(low, mid - 1);
        }
        if (node->right == NULL) {
            node->right = Bin(mid + 1, high);
        }
        return node;
    } // if (low <= high)
    else {
        return NULL;
    }
} // Bin(low, high)

Node_t* CreateNode(int info)
{
    Node_t* node = malloc(sizeof(Node_t));
    memset(node, 0, sizeof(Node_t));
    node->info = info;
    node->left = NULL;
    node->right = NULL;
    return node;
} // CreateNode(info)

// Call the function for an example array 6 7 8 9 10 11 12; it gets you the desired
// result
Bin(0, 6);
HTH Somebody..
This is the recursive algorithm, in pseudocode, that I would suggest.
createTree(treenode *root, linknode *start, linknode *end)
{
    if (start == end or start == end->next)
    {
        return;
    }
    ptrsingle = start;
    ptrdouble = start;
    while (ptrdouble != end and ptrdouble->next != end)
    {
        ptrsingle = ptrsingle->next;
        ptrdouble = ptrdouble->next->next;
    }
    // ptrsingle will now be at the middle element.
    treenode cur_node = Allocatememory;
    cur_node->data = ptrsingle->data;
    if (root == null)
    {
        root = cur_node;
    }
    else
    {
        if (cur_node->data < root->data)
            root->left = cur_node;
        else
            root->right = cur_node;
    }
    createTree(cur_node, start, ptrsingle);
    createTree(cur_node, ptrsingle, end);
}
Root = null;
The initial call will be createTree(Root, list, null);
We are doing the recursive building of the tree, but without using the intermediate array.
To get to the middle element every time we are advancing two pointers, one by one element, other by two elements. By the time the second pointer is at the end, the first pointer will be at the middle.
The running time will be O(n log n). The extra space will be O(log n). It is not an efficient solution for a real situation, where you could use a red-black tree, which guarantees O(log n) insertion. But it is good enough for an interview.
Similar to #Stuart Golodetz and #Jake Kurzer, the important thing is that the list is already sorted.
In #Stuart's answer, the array he presented is the backing data structure for the BST. The find operation for example would just need to perform index array calculations to traverse the tree. Growing the array and removing elements would be the trickier part, so I'd prefer a vector or other constant time lookup data structure.
#Jake's answer also uses this fact, but unfortunately requires you to traverse the list each time to do a get(index) operation. However, it requires no additional memory.
Unless it was specifically mentioned by the interviewer that they wanted an object structure representation of the tree, I would use #Stuart's answer.
In a question like this you'd be given extra points for discussing the tradeoffs and all the options that you have.
Hope the detailed explanation on this post helps:
http://preparefortechinterview.blogspot.com/2013/10/planting-trees_1.html
A slightly improved implementation from #1337c0d3r in my blog.
// create a balanced BST using #len elements starting from #head & move #head forward by #len
TreeNode *sortedListToBSTHelper(ListNode *&head, int len) {
if (0 == len) return NULL;
auto left = sortedListToBSTHelper(head, len / 2);
auto root = new TreeNode(head->val);
root->left = left;
head = head->next;
root->right = sortedListToBSTHelper(head, (len - 1) / 2);
return root;
}
TreeNode *sortedListToBST(ListNode *head) {
int n = length(head);
return sortedListToBSTHelper(head, n);
}
If you know how many nodes are in the linked list, you can do it like this:
// Gives path to subtree being built. If branch[N] is false, branch
// less from the node at depth N, if true branch greater.
bool branch[max depth];
// If rem[N] is true, then for the current subtree at depth N, it's
// greater subtree has one more node than it's less subtree.
bool rem[max depth];
// Depth of root node of current subtree.
unsigned depth = 0;
// Number of nodes in current subtree.
unsigned num_sub = Number of nodes in linked list;
// The algorithm relies on a stack of nodes whose less subtree has
// been built, but whose right subtree has not yet been built. The
// stack is implemented as linked list. The nodes are linked
// together by having the "greater" handle of a node set to the
// next node in the list. "less_parent" is the handle of the first
// node in the list.
Node *less_parent = nullptr;
// h is root of current subtree, child is one of its children.
Node *h, *child;
Node *p = head of the sorted linked list of nodes;
LOOP // loop unconditionally
LOOP WHILE (num_sub > 2)
// Subtract one for root of subtree.
num_sub = num_sub - 1;
rem[depth] = !!(num_sub & 1); // true if num_sub is an odd number
branch[depth] = false;
depth = depth + 1;
num_sub = num_sub / 2;
END LOOP
IF (num_sub == 2)
// Build a subtree with two nodes, slanting to greater.
// I arbitrarily chose to always have the extra node in the
// greater subtree when there is an odd number of nodes to
// split between the two subtrees.
h = p;
p = the node after p in the linked list;
child = p;
p = the node after p in the linked list;
make h and p into a two-element AVL tree;
ELSE // num_sub == 1
// Build a subtree with one node.
h = p;
p = the next node in the linked list;
make h into a leaf node;
END IF
LOOP WHILE (depth > 0)
depth = depth - 1;
IF (not branch[depth])
// We've completed a less subtree, exit while loop.
EXIT LOOP;
END IF
// We've completed a greater subtree, so attach it to
// its parent (that is less than it). We pop the parent
// off the stack of less parents.
child = h;
h = less_parent;
less_parent = h->greater_child;
h->greater_child = child;
num_sub = 2 * (num_sub - rem[depth]) + rem[depth] + 1;
IF (num_sub & (num_sub - 1))
// num_sub is not a power of 2
h->balance_factor = 0;
ELSE
// num_sub is a power of 2
h->balance_factor = 1;
END IF
END LOOP
IF (num_sub == number of nodes in original linked list)
// We've completed the full tree, exit outer unconditional loop
EXIT LOOP;
END IF
// The subtree we've completed is the less subtree of the
// next node in the sequence.
child = h;
h = p;
p = the next node in the linked list;
h->less_child = child;
// Put h onto the stack of less parents.
h->greater_child = less_parent;
less_parent = h;
// Proceed to creating greater than subtree of h.
branch[depth] = true;
num_sub = num_sub + rem[depth];
depth = depth + 1;
END LOOP
// h now points to the root of the completed AVL tree.
For an encoding of this in C++, see the build member function (currently at line 361) in https://github.com/wkaras/C-plus-plus-intrusive-container-templates/blob/master/avl_tree.h . It's actually more general, a template using any forward iterator rather than specifically a linked list.

Resources