I received this interview question that I didn't know how to solve.
Design a snapshot set functionality.
Once a snapshot is taken, the iterator of the class should only return values that were present in the set at the time of the snapshot.
The class should provide add, remove, and contains functionality. The iterator always returns elements that were present in the snapshot, even if an element is removed from the set after the snapshot.
The snapshot of the set is taken when the iterator function is called.
interface SnapshotSet {
    void add(int num);
    void remove(int num);
    boolean contains(int num);
    Iterator<Integer> iterator(); // the first call to this function should trigger a snapshot of the set
}
The interviewer said that the space requirement is that we cannot create a copy (snapshot) of the entire list of keys when calling iterator.
The first step is to handle only one iterator being created and iterated over at a time. The follow-up question: how do we handle multiple iterators?
An example:
SnapshotSet set = new SnapshotSet();
set.add(1);
set.add(2);
set.add(3);
set.add(4);
Iterator<Integer> itr1 = set.iterator(); // iterator should return 1, 2, 3, 4 (in any order) when next() is called.
set.remove(1);
set.contains(1); // returns false; because 1 was removed.
Iterator<Integer> itr2 = set.iterator(); // iterator should return 2, 3, 4 (in any order) when next() is called.
I came up with an O(n) space solution where I created a copy of the entire list of keys when calling iterator. The interviewer said this was not space efficient enough.
I think it is fine to have a solution that focuses on reducing space at the cost of time complexity (but the time complexity should still be as efficient as possible).
Here is a solution that makes all operations reasonably fast; it is effectively a set that keeps its entire history, all the time.
First we'll need to review the idea of a skip list, without the snapshot functionality.
What we do is start with a linked list on the bottom which will always be kept in sorted order. Draw that in a line. Half the values are randomly selected to also be part of another linked list that you draw above the first. Then half of those are selected to be part of another linked list, and so on. If the bottom layer has size n, the whole structure usually requires around 2n nodes. (Because 1 + 1/2 + 1/4 + 1/8 + ... = 2.) Each node in the entire 2-dimensional structure has the following data:
value: the value of the node
height: the height of the node in the skip list
next: the next node at the current level (is null at the end)
down: the same value node, one level down (is null at height 0)
And now your set is represented by a stack of start nodes whose values are ignored, one per level, linked together by their down pointers.
Here is a basic picture:
set
|
start(3) ------> 2
|                |
start(2) ------> 2 -----------> 5 ----------------> 9
|                |              |                   |
start(1) ------> 2 ------> 4 -> 5 ----------------> 9
|                |         |    |                   |
start(0) -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8 -> 9 -> 10
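To make this concrete, here is a minimal Python sketch of the node and start-node stack described above. All the names here (Node, SkipList, random_height) are illustrative assumptions, not from the original answer:

import math, random

class Node:
    def __init__(self, value, height, next=None, down=None):
        self.value = value    # value of the node
        self.height = height  # level this node lives on (0 = bottom list)
        self.next = next      # next node at the current level, or None
        self.down = down      # same-value node one level down, or None

class SkipList:
    def __init__(self, max_height=32):
        # Stack of start nodes whose values are ignored, one per level,
        # linked together by their down pointers.
        self.starts = [Node(None, h) for h in range(max_height)]
        for h in range(1, max_height):
            self.starts[h].down = self.starts[h - 1]

    def top_start(self):
        return self.starts[-1]

    @staticmethod
    def random_height():
        # The geometric height formula from the description below.
        return int(-math.log(random.random()) / math.log(2))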
Now suppose I want to find whether 8 is in the set. What I do is start from the set, find the topmost start, then:
while True:
    if node.next is null or 8 < node.next.value:
        if node.down is null:
            return False
        else:
            node = node.down
    elif 8 == node.next.value:
        return True
    else:
        node = node.next
In this case we go from set to start(3) to the top 2, down one to 2, forward to 5, down 2x to 5, then go 6, 7, and find 8.
That's contains. To remove we follow the same search idea, but whenever we find that node.next.value equals the value being removed, we assign node.next = node.next.next and then continue searching downward, so the value gets unlinked at every level it appears on.
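In the same Python-style pseudocode, and assuming the Node/start structure sketched above, a remove could look like this (a sketch, not the original author's code):

def remove(skiplist, value):
    node = skiplist.top_start()
    while node is not None:
        if node.next is None or value < node.next.value:
            node = node.down              # nothing to unlink here, drop a level
        elif value == node.next.value:
            node.next = node.next.next    # unlink at this level...
            node = node.down              # ...and continue on the level below
        else:
            node = node.next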
To add we randomly choose a height (which can be int(-log(random())/log(2))). Then we search forward as before until we reach the first node at that height whose next pointer is where our new value belongs, and from there do something a bit more involved:
prev_added = null
while node is not null:
    if node.next is null or new_value < node.next.value:
        if node.height <= desired_height:
            adding_node = Node(new_value, node.height, node.next, null)
            node.next = adding_node
            if prev_added is not null:
                prev_added.down = adding_node
            prev_added = adding_node
        node = node.down
    else:
        node = node.next
You can verify that expected performance of all three operations is O(log(n)).
So, how do we add snapshotting to this?
First we add a version number to the set data structure; this will be tied to snapshots. Next, we replace every single pointer with a linked list of (version, pointer) pairs. Now, instead of directly modifying a pointer, if its top entry has an older version than the one we're inserting at, we add a new entry to the head of the list and leave the older version be.
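One way to realize "every pointer becomes a list of versions" is sketched below; the class name and methods are assumptions for illustration, not from the original answer:

class VersionedPointer:
    def __init__(self, version, target):
        # History of (version, target) pairs, newest first.
        self.history = [(version, target)]

    def set(self, version, target):
        newest_version, _ = self.history[0]
        if newest_version == version:
            self.history[0] = (version, target)        # same version: overwrite in place
        else:
            self.history.insert(0, (version, target))  # leave the older version be

    def get(self, version=None):
        if version is None:
            return self.history[0][1]                  # most recent pointer value
        for v, target in self.history:
            if v <= version:                           # newest entry visible to this snapshot
                return target
        return None                                    # pointer did not exist yet at that version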
And NOW we can implement a snapshot as follows.
set.version = set.version + 1
node = set.start
while node.down is not null:
    node = node.down
snapshot = Snapshot(set, set.version, node)
Now snapshotting is very quick. And to traverse a particular past version of the set (including simply iterating over a snapshot), for any pointer we walk back through its list until we get past any entries that are too new and find one that is old enough. It turns out that any given pointer will tend to have a fairly small number of versions, so this adds only a modest amount of overhead.
Traversal of the current version of the set is just a question of always looking at the most recent version of a pointer. So it is just an additional layer of indirection, but same expected performance.
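As a sketch of what snapshot iteration looks like with the VersionedPointer above (snapshot.node and snapshot.version are the fields captured by the Snapshot constructor; everything else is assumed for illustration):

def iterate_snapshot(snapshot):
    # Walk the bottom level, resolving each next pointer at the snapshot's version.
    node = snapshot.node.next.get(snapshot.version)
    while node is not None:
        yield node.value
        node = node.next.get(snapshot.version)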
And now we have a version of this with all snapshotted versions available forever. It is possible to add garbage collection to reduce how much of a problem that is. But this description is long enough already.
This is a very different but ultimately much better answer than the one I gave at first. The idea is simply to have the data structure be a read-only reasonably well balanced sorted tree. Since it is read-only, it is easy to iterate over it.
But then how do you make modifications? Well, you simply create a new copy of the tree from the modification on up to the root. This will be O(log(n)) new nodes. Better yet the O(log(n)) old nodes that were replaced can be trivially garbage collected if they are not in use.
All operations are O(log(n)) except iteration which is O(n). I also included both an explicit iterator using callbacks, and an implicit one using Python's generators.
And for fun I coded it up in Python.
class TreeNode:
    def __init__ (self, value, left=None, right=None):
        self.value = value
        count = 1
        if left is not None:
            count += left.count
        if right is not None:
            count += right.count
        self.count = count
        self.left = left
        self.right = right

    def left_count (self):
        if self.left is None:
            return 0
        else:
            return self.left.count

    def right_count (self):
        if self.right is None:
            return 0
        else:
            return self.right.count

    def attach_left (self, child):
        # New node for balanced tree with self.left replaced by child.
        if id(child) == id(self.left):
            return self
        elif child is None:
            return TreeNode(self.value).attach_right(self.right)
        elif child.left_count() < child.right_count() + self.right_count():
            return TreeNode(self.value, child, self.right)
        else:
            new_right = TreeNode(self.value, child.right, self.right)
            return TreeNode(child.value, child.left, new_right)

    def attach_right (self, child):
        # New node for balanced tree with self.right replaced by child.
        if id(child) == id(self.right):
            return self
        elif child is None:
            return TreeNode(self.value).attach_left(self.left)
        elif child.right_count() < child.left_count() + self.left_count():
            return TreeNode(self.value, self.left, child)
        else:
            new_left = TreeNode(self.value, self.left, child.left)
            return TreeNode(child.value, new_left, child.right)

    def merge_right (self, other):
        # New node for balanced tree with all of self, then all of other.
        if other is None:
            return self
        elif self.right is None:
            return self.attach_right(other)
        elif other.left is None:
            return other.attach_left(self)
        else:
            child = self.right.merge_right(other.left)
            if self.left_count() < other.right_count():
                child = self.attach_right(child)
                return other.attach_left(child)
            else:
                child = other.attach_left(child)
                return self.attach_right(child)

    def add (self, value):
        if value < self.value:
            if self.left is None:
                child = TreeNode(value)
            else:
                child = self.left.add(value)
            return self.attach_left(child)
        elif self.value < value:
            if self.right is None:
                child = TreeNode(value)
            else:
                child = self.right.add(value)
            return self.attach_right(child)
        else:
            return self

    def remove (self, value):
        if value < self.value:
            if self.left is None:
                return self
            else:
                return self.attach_left(self.left.remove(value))
        elif self.value < value:
            if self.right is None:
                return self
            else:
                return self.attach_right(self.right.remove(value))
        else:
            if self.left is None:
                return self.right
            elif self.right is None:
                return self.left
            else:
                return self.left.merge_right(self.right)

    def __str__ (self):
        if self.left is None:
            left_lines = []
        else:
            left_lines = str(self.left).split("\n")
            left_lines.pop()
            left_lines = [" " + l for l in left_lines]
        if self.right is None:
            right_lines = []
        else:
            right_lines = str(self.right).split("\n")
            right_lines.pop()
            right_lines = [" " + l for l in right_lines]
        return "\n".join(left_lines + [str(self.value)] + right_lines) + "\n"

    # Pythonic iterator.
    def __iter__ (self):
        if self.left is not None:
            yield from self.left
        yield self.value
        if self.right is not None:
            yield from self.right

class SnapshottableSet:
    def __init__ (self, root=None):
        self.root = root

    def contains (self, value):
        node = self.root
        while node is not None:
            if value < node.value:
                node = node.left
            elif node.value < value:
                node = node.right
            else:
                return True
        return False

    def add (self, value):
        if self.root is None:
            self.root = TreeNode(value)
        else:
            self.root = self.root.add(value)

    def remove (self, value):
        if self.root is not None:
            self.root = self.root.remove(value)

    # Pythonic built-in approach
    def __iter__ (self):
        if self.root is not None:
            yield from self.root

    # And explicit approach
    def iterator (self):
        nodes = []
        if self.root is not None:
            node = self.root
            while node is not None:
                nodes.append(node)
                node = node.left
        def next_value ():
            if len(nodes):
                node = nodes.pop()
                value = node.value
                node = node.right
                while node is not None:
                    nodes.append(node)
                    node = node.left
                return value
            else:
                raise StopIteration
        return next_value
s = SnapshottableSet()
for i in range(10):
    s.add(i)
it = s.iterator()
for i in range(5):
    s.remove(2*i)

print("Current contents")
for v in s:
    print(v)

print("Original contents")
try:
    while True:
        print(it())
except StopIteration:
    pass
So I have a binary search tree and need to produce a list with the BSTtoList method, but I'm not sure what the general steps are or what I have to do.
class BinarySearchTree[A](comparator: (A, A) => Boolean) {
  var root: BinaryTreeNode[A] = null

  def search(a: A): BinaryTreeNode[A] = {
    searchHelper(a, this.root)
  }

  def searchHelper(a: A, node: BinaryTreeNode[A]): BinaryTreeNode[A] = {
    if(node == null){
      null
    }else if(comparator(a, node.value)){
      searchHelper(a, node.left)
    }else if(comparator(node.value, a)){
      searchHelper(a, node.right)
    }else{
      node
    }
  }

  def BSTtoList: List[A] = {
    var sortedList = List()
    if (root.left != null) {
      sortedList :+ searchHelper(root.value, root.left).value
    }
    else if (root.right != null){
      sortedList :+ searchHelper(root.value, root.right).value
    }
    sortedList
  }
}
Let's first think about how a BST works. At any given node, say with value x, all the nodes in the left subtree will have values < x and all nodes in the right subtree will have values > x. Thus, to return the sorted list of the subtree rooted at node x, you just have to return [sorted list of left subtree] + [x] + [sorted list of right subtree], so you just have to call BSTtoList recursively on the left and right subtrees, and then return the list described above. From there you just have to handle the base case of returning an empty list at a NULL node.
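As a quick illustration of that recursion (a Python sketch, although the question is in Scala; the node is assumed to have value, left, and right fields):

def bst_to_list(node):
    # In-order traversal: everything smaller, then this node, then everything larger.
    if node is None:
        return []
    return bst_to_list(node.left) + [node.value] + bst_to_list(node.right)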
The above algorithm is O(N^2) time, and there's a better solution using tail recursion that runs in O(N) time, pseudocode for which:
def BSTtoList(root, accumulator):
    if root == NULL:
        return accumulator
    else:
        return BSTtoList(root.left_child, [root.value] + BSTtoList(root.right_child, accumulator))
Where BSTtoList is initially called with an empty list as the accumulator. This second solution works similarly to the first but is optimized by minimizing array merges (this version works best if the language used has O(1) insertion into the front of a list; implementation is a bit different if the language allows O(1) insertion into the back).
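For illustration only, the accumulator version might look like this in Python (Python lists make the prepend O(n), so this is a sketch of the shape of the recursion rather than a truly O(N) implementation):

def bst_to_list_acc(node, accumulator):
    if node is None:
        return accumulator
    right_then_acc = bst_to_list_acc(node.right, accumulator)
    return bst_to_list_acc(node.left, [node.value] + right_then_acc)

It would initially be called as bst_to_list_acc(root, []).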
I have a tree as input to the breadth first search, and I want to know, as the algorithm progresses, which level it is currently at.
# Breadth First Search Implementation
graph = {
    'A':['B','C','D'],
    'B':['A'],
    'C':['A','E','F'],
    'D':['A','G','H'],
    'E':['C'],
    'F':['C'],
    'G':['D'],
    'H':['D']
}

def breadth_first_search(graph,source):
    """
    This function is the Implementation of the breadth_first_search program
    """
    # Mark each node as not visited
    mark = {}
    for item in graph.keys():
        mark[item] = 0

    queue, output = [],[]

    # Initialize an empty queue with the source node and mark it as explored
    queue.append(source)
    mark[source] = 1
    output.append(source)

    # while queue is not empty
    while queue:
        # remove the first element of the queue and call it vertex
        vertex = queue[0]
        queue.pop(0)

        # for each edge from the vertex do the following
        for vrtx in graph[vertex]:
            # If the vertex is unexplored
            if mark[vrtx] == 0:
                queue.append(vrtx)  # append it to the queue
                mark[vrtx] = 1      # and mark it as explored
                output.append(vrtx) # fill the output vector
    return output

print breadth_first_search(graph, 'A')
It takes a tree as the input graph; what I want is for it to print out, at each iteration, the current level being processed.
Actually, we don't need an extra queue to store the info on the current depth, nor do we need to add null to tell whether it's the end of the current level. We just need to know how many nodes the current level contains; then we can deal with all the nodes on that level and increase the level by 1 after we are done processing them.
int level = 0;
Queue<Node> queue = new LinkedList<>();
queue.add(root);
while (!queue.isEmpty()) {
    int level_size = queue.size();
    while (level_size-- != 0) {
        Node temp = queue.poll();
        if (temp.right != null) queue.add(temp.right);
        if (temp.left != null) queue.add(temp.left);
    }
    level++;
}
You don't need to use an extra queue or do any complicated calculation to achieve this. The idea is very simple, and it uses no extra space other than the queue already used for BFS.
The idea is to add a null at the end of each level. Then the number of nulls you have encountered plus 1 is the depth you are at (after termination it is simply the level count).
int level = 0;
Queue<Node> queue = new LinkedList<>();
queue.add(root);
queue.add(null);
while (!queue.isEmpty()) {
    Node temp = queue.poll();
    if (temp == null) {
        level++;
        queue.add(null);
        if (queue.peek() == null) break; // Two consecutive nulls means you have visited all the nodes.
        else continue;
    }
    if (temp.right != null)
        queue.add(temp.right);
    if (temp.left != null)
        queue.add(temp.left);
}
Maintain a queue storing the depth of the corresponding node in BFS queue. Sample code for your information:
queue bfsQueue, depthQueue;
bfsQueue.push(firstNode);
depthQueue.push(0);
while (!bfsQueue.empty()) {
    f = bfsQueue.front();
    depth = depthQueue.front();
    bfsQueue.pop(), depthQueue.pop();
    for (every node adjacent to f) {
        bfsQueue.push(node), depthQueue.push(depth+1);
    }
}
This method is simple and naive; for O(1) extra space you may want the answer posted by #stolen_leaves.
Try having a look at this post. It keeps track of the depth using the variable currentDepth
https://stackoverflow.com/a/16923440/3114945
For your implementation, keep track of the left most node and a variable for the depth. Whenever the left most node is popped from the queue, you know you hit a new level and you increment the depth.
So, your root is the leftMostNode at level 0. Then the left most child is the leftMostNode. As soon as you hit it, it becomes level 1. The left most child of this node is the next leftMostNode and so on.
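A rough Python sketch of that idea, assuming a binary tree node with val, left, and right fields (the names are illustrative, not from the linked post):

from collections import deque

def bfs_with_depth(root):
    queue = deque([root])
    leftmost = root   # left-most node of the next level we will enter
    depth = -1
    while queue:
        node = queue.popleft()
        if node is leftmost:
            depth += 1        # we just hit the left-most node of a new level
            leftmost = None   # the next child enqueued starts the level below
        print(node.val, "is at depth", depth)
        for child in (node.left, node.right):
            if child is not None:
                if leftmost is None:
                    leftmost = child
                queue.append(child)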
With this Python code you can maintain the depth of each node from the root by increasing the depth only after you encounter a node of new depth in the queue.
from collections import deque

queue = deque()
marked = set()
marked.add(root)
queue.append((root, 0))
depth = 0
while queue:
    r, d = queue.popleft()
    if d > depth:  # increase depth only when you encounter the first node in the next depth
        depth += 1
    for node in edges[r]:
        if node not in marked:
            marked.add(node)
            queue.append((node, depth+1))
If your tree is perfectly balanced (i.e. each node has the same number of children) there's actually a simple, elegant solution here with O(1) time complexity and O(1) space complexity. The main use case where I find this helpful is traversing a binary tree, though it's trivially adaptable to other branching factors.
The key thing to realize here is that each level of a binary tree contains exactly double the number of nodes of the previous level. This allows us to calculate the total number of nodes in any tree given the tree's depth. For instance, consider a perfect binary tree of depth 3.
Such a tree has 7 total nodes, and we don't need to count them to figure this out. We can compute this in O(1) time with the formula 2^d - 1 = N, where d is the depth and N is the total number of nodes. (In a ternary tree this is (3^d - 1)/2 = N, and in a tree where each node has K children it is (K^d - 1)/(K - 1) = N.) So in this case, 2^3 - 1 = 7.
To keep track of depth while conducting a breadth first search, we simply need to reverse this calculation. Whereas the above formula lets us solve for N given d, we actually want to solve for d given N. For instance, say we're evaluating the 5th node. To figure out what depth the 5th node is on, we take the equation 2^d - 1 = 5 and solve for d, which is basic algebra: d = log2(5 + 1) ≈ 2.58.
If d turns out to be anything other than a whole number, just round up (the calculation lands exactly on a whole number for the last node of each row). With that all in mind, I propose the following algorithm to identify the depth of any given node in a binary tree during breadth first traversal:
Let the variable visited equal 0.
Each time a node is visited, increment visited by 1.
Each time visited is incremented, calculate the node's depth as depth = round_up(log2(visited + 1))
You can also use a hash table to map each node to its depth level, though this does increase the space complexity to O(n). Here's a PHP implementation of this algorithm:
<?php
$tree = [
    ['A', [1,2]],
    ['B', [3,4]],
    ['C', [5,6]],
    ['D', [7,8]],
    ['E', [9,10]],
    ['F', [11,12]],
    ['G', [13,14]],
    ['H', []],
    ['I', []],
    ['J', []],
    ['K', []],
    ['L', []],
    ['M', []],
    ['N', []],
    ['O', []],
];

function bfs($tree) {
    $queue = new SplQueue();
    $queue->enqueue($tree[0]);
    $visited = 0;
    $depth = 0;
    $result = [];
    while ($queue->count()) {
        $visited++;
        $node = $queue->dequeue();
        $depth = ceil(log($visited+1, 2));
        $result[$depth][] = $node[0];
        if (!empty($node[1])) {
            foreach ($node[1] as $child) {
                $queue->enqueue($tree[$child]);
            }
        }
    }
    print_r($result);
}

bfs($tree);
Which prints:
Array
(
    [1] => Array
        (
            [0] => A
        )
    [2] => Array
        (
            [0] => B
            [1] => C
        )
    [3] => Array
        (
            [0] => D
            [1] => E
            [2] => F
            [3] => G
        )
    [4] => Array
        (
            [0] => H
            [1] => I
            [2] => J
            [3] => K
            [4] => L
            [5] => M
            [6] => N
            [7] => O
        )
)
Set a variable cnt and initialize it to the size of the queue: cnt = queue.size(). Now decrement cnt each time you pop. When cnt reaches 0, increase the depth of your BFS and set cnt = queue.size() again.
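A small Python sketch of that counter idea, assuming a binary tree node with val, left, and right fields (illustrative names only):

from collections import deque

def bfs_levels(root):
    queue = deque([root])
    depth = 0
    cnt = len(queue)               # cnt = queue.size()
    while queue:
        node = queue.popleft()
        cnt -= 1                   # decrement on every pop
        print(node.val, "is at depth", depth)
        for child in (node.left, node.right):
            if child is not None:
                queue.append(child)
        if cnt == 0:               # finished the current level
            depth += 1
            cnt = len(queue)       # whatever is queued now is the next level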
In Java it would be something like this.
The idea is to look at the parent to decide the depth.
// Maintain depth for every node based on its parent's depth
Map<Character, Integer> depthMap = new HashMap<>();
queue.add('A');
depthMap.put('A', 0); // this is where you start your search

while (!queue.isEmpty())
{
    Character parent = queue.remove();
    List<Character> children = adjList.get(parent);
    for (Character child : children)
    {
        if (!depthMap.containsKey(child)) // not visited yet
        {
            queue.add(child);
            depthMap.put(child, depthMap.get(parent) + 1); // parent's depth + 1
        }
    }
}
Use a dictionary to keep track of the level (distance from start) of each node when exploring the graph.
Example in Python:
from collections import deque
def bfs(graph, start):
    queue = deque([start])
    levels = {start: 0}
    while queue:
        vertex = queue.popleft()
        for neighbour in graph[vertex]:
            if neighbour in levels:
                continue
            queue.append(neighbour)
            levels[neighbour] = levels[vertex] + 1
    return levels
Here is simple and easy-to-read code in Python.
class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

class Solution:
    def dfs(self, root):
        assert root is not None
        queue = [root]
        level = 0
        while queue:
            print(level, [n.val for n in queue if n is not None])
            mark = len(queue)
            for i in range(mark):
                n = queue[i]
                if n.left is not None:
                    queue.append(n.left)
                if n.right is not None:
                    queue.append(n.right)
            queue = queue[mark:]
            level += 1
Usage,
# [3,9,20,null,null,15,7]
n3 = TreeNode(3)
n9 = TreeNode(9)
n20 = TreeNode(20)
n15 = TreeNode(15)
n7 = TreeNode(7)
n3.left = n9
n3.right = n20
n20.left = n15
n20.right = n7
Solution().dfs(n3)
Result
0 [3]
1 [9, 20]
2 [15, 7]
I don't see this method posted so far, so here's a simple one:
You can "attach" the level to the node. For e.g., in case of a tree, instead of the typical queue<TreeNode*>, use a queue<pair<TreeNode*,int>> and then push the pairs of {node,level}s into it. The root would be pushed in as, q.push({root,0}), its children as q.push({root->left,1}), q.push({root->right,1}) and so on...
We don't need to modify the input, append nulls or even (asymptotically speaking) use any extra space just to track the levels.
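For completeness, a Python sketch of the same idea, with the level attached to each queue entry (the node is assumed to have val, left, and right fields):

from collections import deque

def bfs_with_levels(root):
    queue = deque([(root, 0)])         # queue of (node, level) pairs
    while queue:
        node, level = queue.popleft()
        print(node.val, "is at level", level)
        if node.left is not None:
            queue.append((node.left, level + 1))
        if node.right is not None:
            queue.append((node.right, level + 1))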
I was given this question during a recent interview: Given a BST whose nodes contain an Integer value, find all subtrees whose nodes fall between integers X (min) and Y (max), where X < Y. These subtrees cannot overlap each other.
I have solved variations of this problem, for example printing the keys of a BST that fall in a given range. But I couldn't figure this one out, since it involves finding all connected sub-graphs of the main graph/tree that satisfy very specific constraints. Any pointers/help/pseudocode is appreciated.
Added notes -
The problem defined the data structure of a node as having a left pointer, a right pointer, and an integer value. There was no way to mark a node.
I was asked to solve this in Java.
When I said subtree/subgraph, I meant a connected set of nodes, not a list of disjoint nodes. Sorry for the confusion.
The concrete solution depends on the definition of a subtree. Consider the following BST:
        5
      /   \
     3     8
    / \     \
   2   4     9
And we want to find the subtrees in the range [4,8]. It is obvious that the 4 node belongs to the output. But what about the other half tree? If a subtree refers to a node with all of its children, then that's the entire result. If a subtree is actually a subset of the input nodes, the nodes 5 and 8 belong to the result but their connections to the 3 and 9 nodes have to be stripped away.
In any case, the following algorithm can handle both. The preprocessor define WHOLE_SUBTREES defines whether subtrees are entire subcomponents with all children.
static List<BSTNode> FindSubtreesInRange(BSTNode root, int rangeMin, int rangeMax)
{
    var result = new List<BSTNode>();
    if (IsTreeWithinRange(root, rangeMin, rangeMax, int.MinValue, int.MaxValue, result))
        result.Add(root);
    return result;
}

static bool IsTreeWithinRange(BSTNode root, int rangeMin, int rangeMax, int treeRangeMin, int treeRangeMax, List<BSTNode> resultList)
{
    if (treeRangeMin >= rangeMin && treeRangeMax <= rangeMax)
        return true;
    if (treeRangeMin > rangeMax || treeRangeMax < rangeMin)
        return false;

    if (root.Key < rangeMin)
    {
        if (root.Right != null && IsTreeWithinRange(root.Right, rangeMin, rangeMax, root.Key + 1, treeRangeMax, resultList))
            resultList.Add(root.Right);
        return false;
    }
    if (root.Key > rangeMax)
    {
        if (root.Left != null && IsTreeWithinRange(root.Left, rangeMin, rangeMax, treeRangeMin, root.Key, resultList))
            resultList.Add(root.Left);
        return false;
    }

    if (root.Left == null && root.Right == null)
        return true;
    if (root.Left == null)
    {
#if WHOLE_SUBTREES
        if (!IsTreeWithinRange(root.Right, rangeMin, rangeMax, root.Key + 1, treeRangeMax, resultList))
            root.Right = null;
        return true;
#else
        return IsTreeWithinRange(root.Right, rangeMin, rangeMax, root.Key + 1, treeRangeMax, resultList);
#endif
    }
    if (root.Right == null)
    {
#if WHOLE_SUBTREES
        if (!IsTreeWithinRange(root.Left, rangeMin, rangeMax, treeRangeMin, root.Key, resultList))
            root.Left = null;
        return true;
#else
        return IsTreeWithinRange(root.Left, rangeMin, rangeMax, treeRangeMin, root.Key, resultList);
#endif
    }

    var leftInRange = IsTreeWithinRange(root.Left, rangeMin, rangeMax, treeRangeMin, root.Key, resultList);
    var rightInRange = IsTreeWithinRange(root.Right, rangeMin, rangeMax, root.Key + 1, treeRangeMax, resultList);

    if (leftInRange && rightInRange)
        return true;

#if WHOLE_SUBTREES
    if (!leftInRange)
        root.Left = null;
    if (!rightInRange)
        root.Right = null;
    return true;
#else
    if (leftInRange)
        resultList.Add(root.Left);
    if (rightInRange)
        resultList.Add(root.Right);
    return false;
#endif
}
The idea is as follows: if only one subtree of a given node lies within the given range, then that subtree must be the root of a new result subtree. If both lie in the range, then neither is the root of a result subtree; instead, the decision is deferred to the parent level.
The algorithm starts with the following: we traverse the tree and remember the range in which the keys may lie (treeRangeMin/Max). This allows a fast check of whether an entire subtree lies in the given range (the first statement of the IsTreeWithinRange method).
The next two statements handle the case where the current node's key lies outside the given range. Then only one of its subtrees might be within the range; if that's the case, that subtree is added to the result list.
Next, we check whether the subtrees exist. If neither does, then the current node is completely contained within the range.
If only one subtree exists, the action differs based on whether we may split trees. If we may split the tree, the following happens: if the subtree is not within the range, we cut it off and return true (because the current node is contained within the given range). If we may not split trees, we just propagate the result of the recursive call.
Lastly, both children may exist. If one of them is not contained within the range, we cut it off (if we are allowed to). If we are not allowed to, we add the child subtree that lies within the given range to the result list.
This is fairly simple to solve. To ensure the subtrees do not overlap, I included a marked field, initialized to false for every node.
The algorithm is as follows:
Traverse the BST beginning from the root using DFS. Now, if a node is encountered in the DFS that is not marked and satisfies the constraint (falls between X and Y), then there is a solution with a subtree rooted at that node, but we do not yet know how big that subtree can be. So we do the following:
Pass its left and right child to another method, check, which will do the following:
Traverse the subtree rooted at that node in DFS fashion as long as the constraints are satisfied and the nodes encountered are unmarked. As soon as either condition is violated, return.
Now, the original DFS method may be called on already-marked vertices, but the if condition will evaluate to false. Hence the target is achieved.
I solved it in Java, for the condition that keys lie between 10 and 21 (exclusive). Here is the code.
One more thing: if nothing is printed after "Subtree rooted at X with childs as", it denotes a subtree with a single node.
class BST
{
    public Node insert(Node x, int key)
    {
        if (x == null)
            return new Node(key, null, null, false);
        else if (key > x.key)
        {
            x.right = insert(x.right, key);
            return x;
        }
        else if (key < x.key)
        {
            x.left = insert(x.left, key);
            return x;
        }
        else { x.key = key; return x; }
    }

    public void DFS(Node x)
    {
        if (x == null)
            return;
        if (x.marked == false && x.key < 21 && x.key > 10)
        {
            System.out.println("Subtree rooted at " + x.key + " with childs as");
            x.marked = true;
            check(x.left);
            check(x.right);
        }
        DFS(x.left);
        DFS(x.right);
    }

    public void check(Node ch)
    {
        if (ch == null)
            return;
        if (ch.marked == false && ch.key < 21 && ch.key > 10)
        {
            System.out.println(ch.key);
            ch.marked = true;
            check(ch.left);
            check(ch.right);
        }
        else return;
    }

    public static void main(String[] args)
    {
        BST tree1 = new BST();
        Node root = null;
        root = tree1.insert(root, 14);
        root = tree1.insert(root, 16);
        root = tree1.insert(root, 5);
        root = tree1.insert(root, 3);
        root = tree1.insert(root, 12);
        root = tree1.insert(root, 10);
        root = tree1.insert(root, 13);
        root = tree1.insert(root, 20);
        root = tree1.insert(root, 18);
        root = tree1.insert(root, 23);
        root = tree1.insert(root, 15);
        tree1.DFS(root);
    }
}
class Node
{
    Node left, right;
    int key;
    boolean marked;

    Node(int key, Node left, Node right, boolean b)
    {
        this.key = key;
        this.left = left;
        this.right = right;
        this.marked = b;
    }
}
Feel free to ask if you have any queries.
This can be done recursively, and we keep a list of subtrees which we append to whenever a compliant subtree is found. The recursive function returns true when the subtree rooted at the argument node is wholly in range. It's the caller's decision (the parent node) what to do when the child's recursive call returns true or false. For example, if the current node value is in the range, and its children's subtrees are also completely in range, then we simply return true. But if only one of the children's subtrees is in the range, and the other is not, then we return false (since not all of the current node's subtree is in the range), but we also append the child that was in range to the list. If the current node value is not in the range, we return false, but we also check either the left or right child, and append it to the list of subtrees if it's compliant:
def subtree_in_range(root, x, y):
    def _subtree_in_range(node):
        in_range = True
        if node:
            if node.val >= x and node.val <= y:
                if not _subtree_in_range(node.left):
                    in_range = False
                    if node.right and _subtree_in_range(node.right):
                        l.append(node.right)
                elif not _subtree_in_range(node.right):
                    in_range = False
                    if node.left:
                        l.append(node.left)
            else:
                in_range = False
                s = node.left
                if node.val < x:
                    s = node.right
                if s and _subtree_in_range(s):
                    l.append(s)
        return in_range

    l = []
    if _subtree_in_range(root):
        l.append(root)
    return l
When doing a range search, the workhorse function for the range, written in some generic pseudocode, might look like this:
function range(node, results, X, Y)
{
    if node is null then return
    if node.key is in [X, Y] then results.add(node.key)
    if node.key < Y then range(node.right, results, X, Y)
    if node.key > X then range(node.left, results, X, Y)
}
For the subtree version of the problem, we need to store subtree root nodes instead of keys and keep track of whether we are inside a subtree or not. The latter can be solved by passing the subtree-wise parent into the range call, which is also required for building the new structure. The desired function is below. As you can see, the main changes are one extra argument and the node.key in [X, Y] branch.
function range_subtrees(node, parent, results, X, Y)
{
    if node is null then return
    node_clone = null
    if node.key is in [X, Y] then
        node_clone = node.clone()
        if parent is null then
            results.add(node_clone)
        else
            parent.add_child(node_clone)
    if node.key < Y then range_subtrees(node.right, node_clone, results, X, Y)
    if node.key > X then range_subtrees(node.left, node_clone, results, X, Y)
}
This should create a collection of subtree root nodes, where each subtree is a copy of original tree's structure.
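A rough Python translation of that pseudocode, under the assumption that each node has key, left, and right fields and that cloning a node copies only its key (the Node class here is just for illustration):

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def range_subtrees(node, parent, results, x, y):
    # Collect clones of the maximal subtrees whose keys all lie in [x, y].
    # `parent` is the clone of the nearest in-range ancestor, or None.
    if node is None:
        return
    node_clone = None
    if x <= node.key <= y:
        node_clone = Node(node.key)        # clone; children get attached below
        if parent is None:
            results.append(node_clone)     # this clone starts a new subtree
        elif node.key < parent.key:
            parent.left = node_clone       # attach to the cloned parent
        else:
            parent.right = node_clone
    if node.key < y:
        range_subtrees(node.right, node_clone, results, x, y)
    if node.key > x:
        range_subtrees(node.left, node_clone, results, x, y)

It would be called as results = []; range_subtrees(root, None, results, X, Y).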
I need to print the different variations of valid tag sequences of "<" and ">" given the number of times each tag should appear; below is a solution in Python using recursion.
def genBrackets(c):
    def genBracketsHelper(r, l, currentString):
        if l > r or r == -1 or l == -1:
            return
        if r == l and r == 0:
            print currentString
        genBracketsHelper(r, l-1, currentString + '<')
        genBracketsHelper(r-1, l, currentString + '>')
        return
    genBracketsHelper(c, c, '')

# display options with 4 tags
genBrackets(4)
I am having a hard time really understanding this and want to try to convert this into a iterative version but I haven't had any success.
As per this thread: Can every recursion be converted into iteration? - it looks like it should be possible and the only exception appears to be the Ackermann function.
If anyone has any tips on how to see the "stack" maintained in Eclipse - that would also be appreciated.
PS. This is not a homework question - I am just trying to understand recursion-to-iteration conversion better.
Edit by Matthieu M.: an example of output, for better visualization:
>>> genBrackets(3)
<<<>>>
<<><>>
<<>><>
<><<>>
<><><>
I tried to keep basically the same structure as your code, but using an explicit stack rather than function calls to genBracketsHelper:
def genBrackets(c=1):
    # genBracketsStack is a list of tuples, each of which
    # represents the arguments to a call of genBracketsHelper
    # Push the initial call onto the stack:
    genBracketsStack = [(c, c, '')]
    # This loop replaces genBracketsHelper itself
    while genBracketsStack != []:
        # Get the current arguments (now from the stack)
        (r, l, currentString) = genBracketsStack.pop()
        # Basically same logic as before
        if l > r or r == -1 or l == -1:
            continue # Acts like return
        if r == l and r == 0:
            print currentString
        # Recursive calls are now pushes onto the stack
        genBracketsStack.append((r-1, l, currentString + '>'))
        genBracketsStack.append((r, l-1, currentString + '<'))
        # This is kept explicit since you had an explicit return before
        continue

genBrackets(4)
Note that the conversion I am using relies on all of the recursive calls being at the end of the function; the code would be more complicated if that wasn't the case.
You asked about doing this without a stack.
This algorithm walks the entire solution space, so it does a bit more work than the original versions, but it's basically the same concept:
each string has a tree of possible suffixes in your grammar
since there are only two tokens, it's a binary tree
the depth of the tree will always be c*2, so...
there must be 2**(c*2) paths through the tree
Since each path is a sequence of binary decisions, the paths correspond to the binary representations of the integers between 0 and 2**(c*2)-1.
So: just loop through those numbers and see if the binary representation corresponds to a balanced string. :)
def isValid(string):
    """
    True if and only if the string is balanced.
    """
    count = { '<': 0, '>': 0 }
    for char in string:
        count[char] += 1
        if count['>'] > count['<']:
            return False # premature closure

    if count['<'] != count['>']:
        return False # unbalanced
    else:
        return True

def genBrackets(c):
    """
    Generate every possible combination and test each one.
    """
    for i in range(0, 2**(c*2)):
        candidate = bin(i)[2:].zfill(c*2).replace('0','<').replace('1','>')
        if isValid(candidate):
            print candidate
In general, a recursion creates a Tree of calls, the root being the original call, and the leaves being the calls that do not recurse.
A degenerate case is when each call performs only one other call; in this case the tree degenerates into a simple list. The transformation into an iteration is then simply achieved by using a stack, as demonstrated by #Jeremiah.
In the more general case, as here, each call performs (strictly) more than one call. You obtain a real tree, and there are therefore several ways to traverse it.
If you use a queue, instead of a stack, you are performing a breadth-first traversal. #Jeremiah presented a traversal for which I know no name. The typical "recursion" order is normally a depth-first traversal.
The main advantage of the typical recursion is that the length of the stack does not grow as much, so you should aim for depth-first in general... if the complexity does not overwhelm you :)
I suggest beginning by writing a depth first traversal of a tree, once this is done adapting it to your algorithm should be fairly simple.
EDIT: Since I had some time, I wrote the Python Tree Traversal, it's the canonical example:
class Node:
    def __init__(self, el, children):
        self.element = el
        self.children = children

    def __repr__(self):
        return 'Node(' + str(self.element) + ', ' + str(self.children) + ')'

def depthFirstRec(node):
    print node.element
    for c in node.children: depthFirstRec(c)

def depthFirstIter(node):
    stack = [([node,], 0), ]
    while stack != []:
        children, index = stack.pop()
        if index >= len(children): continue
        node = children[index]
        print node.element
        stack.append((children, index+1))
        stack.append((node.children, 0))
Note that the stack management is slightly complicated by the need to remember the index of the child we were currently visiting.
And the adaptation of the algorithm following the depth-first order:
def generateBrackets(c):
    # stack is a list of pairs children/index
    stack = [([(c,c,''),], 0), ]
    while stack != []:
        children, index = stack.pop()
        if index >= len(children): continue # no more child to visit at this level
        stack.append((children, index+1))   # register next child at this level

        l, r, current = children[index]
        if r == 0 and l == 0: print current

        # create the list of children of this node
        # (bypass if we are already unbalanced)
        if l > r: continue

        newChildren = []
        if l != 0: newChildren.append((l-1, r, current + '<'))
        if r != 0: newChildren.append((l, r-1, current + '>'))
        stack.append((newChildren, 0))
I just realized that storing the index is a bit "too" complicated, since I never visit back. The simple solution thus consists in removing the list elements I don't need any longer, treating the list as a queue (in fact, a stack could be sufficient)!
This applies with minimum transformation.
def generateBrackets2(c):
    # stack is a list of queues of children
    stack = [[(c,c,''),], ]
    while stack != []:
        children = stack.pop()
        if children == []: continue    # no more child to visit at this level
        stack.append(children[1:])     # register next child at this level

        l, r, current = children[0]
        if r == 0 and l == 0: print current

        # create the list of children of this node
        # (bypass if we are already unbalanced)
        if l > r: continue

        newChildren = []
        if l != 0: newChildren.append((l-1, r, current + '<'))
        if r != 0: newChildren.append((l, r-1, current + '>'))
        stack.append(newChildren)
Yes.
def genBrackets(c):
    stack = [(c, c, '')]
    while stack:
        right, left, currentString = stack.pop()
        if left > right or right == -1 or left == -1:
            pass
        elif right == left and right == 0:
            print currentString
        else:
            stack.append((right, left-1, currentString + '<'))
            stack.append((right-1, left, currentString + '>'))
The output order is different, but the results should be the same.