This code is for deleting a node from a given position in a linked list.
Why do we have to put return head in the end?
def deleteNode(head, position):
    if position == 0:
        return head.next
    else:
        curr = head
        for _ in range(position):
            prev = curr
            curr = curr.next
        prev.next = curr.next
        return head
Because you need to return the list to the function that called this one, so the caller can keep track of it. head is the reference to the beginning of the list, so returning it hands the (possibly changed) start of the list back to the caller.
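For example (a minimal sketch, assuming a simple Node class with value and next attributes, which the question does not define), the caller reassigns its own head reference to whatever deleteNode returns; that is how a deletion at position 0 becomes visible to the caller:

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

# Build the list 1 -> 2 -> 3, then delete position 0.
head = Node(1, Node(2, Node(3)))
head = deleteNode(head, 0)  # without capturing the return value, head would still be the old node 1

node = head
while node is not None:
    print(node.value)  # prints 2, then 3
    node = node.next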
I received this interview question that I didn't know how to solve.
Design a snapshot set functionality.
Once the snapshot is taken, the iterator of the class should only return values that were present in the set at the time the snapshot was taken.
The class should provide add, remove, and contains functionality. The iterator always returns elements that were present in the snapshot even though the element might be removed from set after the snapshot.
The snapshot of the set is taken when the iterator function is called.
interface SnapshotSet {
    void add(int num);
    void remove(int num);
    boolean contains(int num);
    Iterator<Integer> iterator(); // the first call to this function should trigger a snapshot of the set
}
The interviewer said that the space requirement is that we cannot create a copy (snapshot) of the entire list of keys when calling iterator.
The first step is to handle only one iterator being created and being iterated over at a time. The followup question: how to handle the scenario of multiple iterators?
An example:
SnapshotSet set = new SnapshotSet();
set.add(1);
set.add(2);
set.add(3);
set.add(4);
Iterator<Integer> itr1 = set.iterator(); // iterator should return 1, 2, 3, 4 (in any order) when next() is called.
set.remove(1);
set.contains(1); // returns false; because 1 was removed.
Iterator<Integer> itr2 = set.iterator(); // iterator should return 2, 3, 4 (in any order) when next() is called.
I came up with an O(n) space solution where I created a copy of the entire list of keys when calling iterator. The interviewer said this was not space efficient enough.
I think it is fine to have a solution that focuses on reducing space at the cost of time complexity (but the time complexity should still be as efficient as possible).
Here is a solution that makes all operations reasonably fast. It is effectively a set that keeps all of its history available, all the time.
First we'll need to review the idea of a skip list, without the snapshot functionality.
What we do is start with a linked list on the bottom which will always be kept in sorted order. Draw that in a line. Half the values are randomly selected to also be part of another linked list that you draw above the first. Then half of those are selected to be part of another linked list, and so on. If the bottom layer has size n, the whole structure usually requires around 2n nodes. (Because 1 + 1/2 + 1/4 + 1/8 + ... = 2.) Each node in the entire 2-dimensional structure has the following data:
value: the value of the node
height: the height of the node in the skip list
next: the next node at the current level (is null at the end)
down: the same value node, one level down (is null at height 0)
And now your set is represented by a stack of start nodes whose values are ignored: one per level, each pointing at the first real node at its level.
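As a rough sketch (my own Python names, just to make the description concrete), one node of this structure could look like:

class SkipNode:
    def __init__(self, value, height, next=None, down=None):
        self.value = value    # the value of the node
        self.height = height  # which level this node lives on
        self.next = next      # next node at the current level (None at the end)
        self.down = down      # node with the same value, one level down (None at height 0)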
Here is a basic picture:
set
|
start(3) -> 2
| |
start(2) -> 2 -> 5 -> 9
| | | |
start(1) -> 2 -> 4 -> 5 -> 9
| | | | |
start(0) -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8 -> 9 -> 10
Now suppose I want to find whether 8 is in the set. What I do is start from the set, find the topmost start, then:
while True:
    if node.next is null or 8 < node.next.value:
        if node.down is null:
            return False
        else:
            node = node.down
    elif 8 == node.next.value:
        return True
    else:
        node = node.next
In this case we go from set to start(3) to the top 2, down one to 2, forward to 5, down 2x to 5, then go 6, 7, and find 8.
That's contains. To remove we follow the same search idea, but whenever we find that node.next.value equals the value being removed we assign node.next = node.next.next, and then continue searching downward.
To add we randomly choose a height (which can be int(-log(random())/log(2))). Then we search as before, and at every level at or below that height we splice a new node in front of the spot where the value belongs, linking each new node to the one we created on the level above:
prev_added = null
while node is not null:
    if node.next is null or new_value < node.next.value:
        if node.height <= desired_height:
            adding_node = Node(new_value, node.height, node.next, null)
            node.next = adding_node
            if prev_added is not null:
                prev_added.down = adding_node
            prev_added = adding_node
        node = node.down
    else:
        node = node.next
You can verify that expected performance of all three operations is O(log(n)).
So, how do we add snapshotting to this?
First we add a version number to the set data structure; this is what snapshots will be tied to. Next, we replace every single pointer with a linked list of (pointer, version) pairs. Now, instead of directly modifying a pointer, we look at the entry at the head of that list: if it has an older version than the one we're now writing, we add a new entry to the head of the list and leave the older one be.
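Here is a rough sketch of what such a versioned pointer might look like (hypothetical names; the real structure would hang off every next and down link):

class VersionedPointer:
    # Keeps the full history of a single pointer as (version, target) pairs, newest first.
    def __init__(self, version, target):
        self.history = [(version, target)]

    def set(self, version, target):
        if self.history[0][0] == version:
            # Same version as the newest entry: safe to overwrite in place.
            self.history[0] = (version, target)
        else:
            # Newest entry is older: prepend, leaving the old version intact.
            self.history.insert(0, (version, target))

    def get(self, version=None):
        if version is None:
            return self.history[0][1]  # current value
        # Snapshot read: skip entries that are too new, take the first old-enough one.
        for v, target in self.history:
            if v <= version:
                return target
        return None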
And NOW we can implement a snapshot as follows.
set.version = set.version + 1
node = set.start
while node.down is not null:
    node = node.down
snapshot = Snapshot(set, set.version, node)
Now snapshotting is very quick. And to traverse a particular past version of the set (including simply iterating over a snapshot), for any pointer we walk back through its history until we get past any entries that are too new and find one that is old enough. It turns out that any given pointer will tend to accumulate only a fairly small number of versions, so this has only a modest amount of overhead.
Traversal of the current version of the set is just a question of always looking at the most recent version of a pointer. So it is just an additional layer of indirection, but same expected performance.
And now we have a version of this with all snapshotted versions available forever. It is possible to add garbage collection to reduce how much of a problem that is. But this description is long enough already.
This is a very different but ultimately much better answer than the one I gave at first. The idea is simply to have the data structure be a read-only reasonably well balanced sorted tree. Since it is read-only, it is easy to iterate over it.
But then how do you make modifications? Well, you simply create a new copy of the tree from the modification on up to the root. This will be O(log(n)) new nodes. Better yet the O(log(n)) old nodes that were replaced can be trivially garbage collected if they are not in use.
All operations are O(log(n)) except iteration, which is O(n). And for fun I coded it up in Python, including both an explicit iterator using callbacks and an implicit one using Python's generators.
class TreeNode:
    def __init__ (self, value, left=None, right=None):
        self.value = value
        count = 1
        if left is not None:
            count += left.count
        if right is not None:
            count += right.count
        self.count = count
        self.left = left
        self.right = right

    def left_count (self):
        if self.left is None:
            return 0
        else:
            return self.left.count

    def right_count (self):
        if self.right is None:
            return 0
        else:
            return self.right.count
    def attach_left (self, child):
        # New node for balanced tree with self.left replaced by child.
        if id(child) == id(self.left):
            return self
        elif child is None:
            return TreeNode(self.value).attach_right(self.right)
        elif child.left_count() < child.right_count() + self.right_count():
            return TreeNode(self.value, child, self.right)
        else:
            new_right = TreeNode(self.value, child.right, self.right)
            return TreeNode(child.value, child.left, new_right)

    def attach_right (self, child):
        # New node for balanced tree with self.right replaced by child.
        if id(child) == id(self.right):
            return self
        elif child is None:
            return TreeNode(self.value).attach_left(self.left)
        elif child.right_count() < child.left_count() + self.left_count():
            return TreeNode(self.value, self.left, child)
        else:
            new_left = TreeNode(self.value, self.left, child.left)
            return TreeNode(child.value, new_left, child.right)
    def merge_right (self, other):
        # New node for balanced tree with all of self, then all of other.
        if other is None:
            return self
        elif self.right is None:
            return self.attach_right(other)
        elif other.left is None:
            return other.attach_left(self)
        else:
            child = self.right.merge_right(other.left)
            if self.left_count() < other.right_count():
                child = self.attach_right(child)
                return other.attach_left(child)
            else:
                child = other.attach_left(child)
                return self.attach_right(child)
    def add (self, value):
        if value < self.value:
            if self.left is None:
                child = TreeNode(value)
            else:
                child = self.left.add(value)
            return self.attach_left(child)
        elif self.value < value:
            if self.right is None:
                child = TreeNode(value)
            else:
                child = self.right.add(value)
            return self.attach_right(child)
        else:
            return self

    def remove (self, value):
        if value < self.value:
            if self.left is None:
                return self
            else:
                return self.attach_left(self.left.remove(value))
        elif self.value < value:
            if self.right is None:
                return self
            else:
                return self.attach_right(self.right.remove(value))
        else:
            if self.left is None:
                return self.right
            elif self.right is None:
                return self.left
            else:
                return self.left.merge_right(self.right)
    def __str__ (self):
        if self.left is None:
            left_lines = []
        else:
            left_lines = str(self.left).split("\n")
            left_lines.pop()
            left_lines = [" " + l for l in left_lines]
        if self.right is None:
            right_lines = []
        else:
            right_lines = str(self.right).split("\n")
            right_lines.pop()
            right_lines = [" " + l for l in right_lines]
        return "\n".join(left_lines + [str(self.value)] + right_lines) + "\n"

    # Pythonic iterator.
    def __iter__ (self):
        if self.left is not None:
            yield from self.left
        yield self.value
        if self.right is not None:
            yield from self.right
class SnapshottableSet:
    def __init__ (self, root=None):
        self.root = root

    def contains (self, value):
        node = self.root
        while node is not None:
            if value < node.value:
                node = node.left
            elif node.value < value:
                node = node.right
            else:
                return True
        return False

    def add (self, value):
        if self.root is None:
            self.root = TreeNode(value)
        else:
            self.root = self.root.add(value)

    def remove (self, value):
        if self.root is not None:
            self.root = self.root.remove(value)

    # Pythonic built-in approach
    def __iter__ (self):
        if self.root is not None:
            yield from self.root

    # And explicit approach
    def iterator (self):
        nodes = []
        if self.root is not None:
            node = self.root
            while node is not None:
                nodes.append(node)
                node = node.left

        def next_value ():
            if len(nodes):
                node = nodes.pop()
                value = node.value
                node = node.right
                while node is not None:
                    nodes.append(node)
                    node = node.left
                return value
            else:
                raise StopIteration

        return next_value
s = SnapshottableSet()
for i in range(10):
    s.add(i)
it = s.iterator()
for i in range(5):
    s.remove(2*i)

print("Current contents")
for v in s:
    print(v)

print("Original contents")
try:
    while True:
        print(it())
except StopIteration:
    pass
So I have a binary search tree and need to produce a list with the BSTtoList method, but I'm not sure what the general steps are or what I have to do.
class BinarySearchTree[A](comparator: (A, A) => Boolean) {
  var root: BinaryTreeNode[A] = null

  def search(a: A): BinaryTreeNode[A] = {
    searchHelper(a, this.root)
  }

  def searchHelper(a: A, node: BinaryTreeNode[A]): BinaryTreeNode[A] = {
    if(node == null){
      null
    }else if(comparator(a, node.value)){
      searchHelper(a, node.left)
    }else if(comparator(node.value, a)){
      searchHelper(a, node.right)
    }else{
      node
    }
  }

  def BSTtoList: List[A] = {
    var sortedList = List()
    if (root.left != null) {
      sortedList :+ searchHelper(root.value, root.left).value
    }
    else if (root.right != null){
      sortedList :+ searchHelper(root.value, root.right).value
    }
    sortedList
  }
}
Let's first think about how a BST works. At any given node, say with value x, all the nodes in the left subtree will have values < x and all nodes in the right subtree will have values > x. Thus, to return the sorted list of the subtree rooted at node x, you just have to return [sorted list of left subtree] + [x] + [sorted list of right subtree], so you just have to call BSTtoList recursively on the left and right subtrees, and then return the list described above. From there you just have to handle the base case of returning an empty list at a NULL node.
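A direct translation of that idea into Python might look like this (assuming each node exposes value, left, and right; the question's Scala class is slightly different):

def bst_to_list_simple(node):
    # In-order: everything smaller, then this value, then everything larger.
    if node is None:
        return []
    return bst_to_list_simple(node.left) + [node.value] + bst_to_list_simple(node.right)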
The above algorithm is O(N^2) time, and there's a better solution using tail recursion that runs in O(N) time, pseudocode for which:
def BSTtoList(root, accumulator):
    if root == NULL:
        return accumulator
    else:
        return BSTtoList(root.left_child, [root.value] + BSTtoList(root.right_child, accumulator))
Where BSTtoList is initially called with an empty list as the accumulator. This second solution works similarly to the first but is optimized by minimizing array merges (this version works best if the language used has O(1) insertion into the front of a list; implementation is a bit different if the language allows O(1) insertion into the back).
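For comparison, here is the accumulator version as runnable Python (again assuming value/left/right attributes; note that Python lists do not actually have O(1) insertion at the front, so this mirrors the structure of the pseudocode rather than the O(N) bound):

def bst_to_list(root, accumulator=None):
    if accumulator is None:
        accumulator = []
    if root is None:
        return accumulator
    # Build the sorted tail (right subtree + accumulator) first, put this value
    # in front of it, then let the left subtree prepend everything smaller.
    return bst_to_list(root.left, [root.value] + bst_to_list(root.right, accumulator))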
My reverse linked list function seems to be buggy, but even after looking online for solutions, I could not understand why my method fails in a dry run with three nodes :(
In other solutions the starred part is usually written as 'head.next.next = head'. Isn't my line with 'n.next = head' doing the same thing?
Also some other solution had a line before calling the method as:
Node secondElem = head.next;
head.next = NULL;
I didn't understand why this is needed either :(
I came up with this solution and can't seem to proceed from here:
Node reverseLL(Node head){
    if (head == NULL || head.next == NULL) return head;
    Node n = reverseLL(head.next);
    n.next = head; //**
    head.next = NULL;
    return n;
}
Can someone please explain this to me?
When you reverse a linked list recursively, you need to attach the former head to the very end. The node returned by the recursive call (the new head of the reversed rest) doesn't help you do that -- in fact, all you'll do with it is return it at the end.
But there is one thing that is important to be aware of: the second element will become the last element of the reversed rest of the list, and head.next will still hold a reference to it after the rest has been reversed.
Example
Original list
head = 1 -> 2 -> 3 -> NULL
The reversed sublist returned from the recursive call (head will still be 1 and head.next will still point to 2)
3 -> 2 -> NULL
You can use this to append the head to the end after the recursive call that reverses the rest:
Node reverseLL(Node head) {
    if (head == NULL || head.next == NULL) return head;
    Node n = reverseLL(head.next);
    head.next.next = head;
    head.next = NULL;
    return n;
}
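If it helps to trace it, here is the same idea as a small self-contained Python script (hypothetical Node class) that you can run on a three-node list:

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse_ll(head):
    if head is None or head.next is None:
        return head
    new_head = reverse_ll(head.next)
    head.next.next = head  # the old second node (now the tail of the reversed rest) points back at head
    head.next = None       # head becomes the new tail
    return new_head

head = Node(1, Node(2, Node(3)))
node = reverse_ll(head)
while node is not None:
    print(node.value)  # prints 3, 2, 1
    node = node.next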
I want to concatenate two ordered singly linked lists into one linked list. Each list's nodes have a score value, and I want to order and concatenate the two lists.
SinglyLinkedList* conList = list1->concatLists(list2);
SinglyLinkedList* SinglyLinkedList::concatLists(SinglyLinkedList* list2)
{
}
You need to walk through your first list until you reach its very last node, and set that node's next pointer to the head of the second list.
SinglyLinkedList* n = list1;
while(n->next != nullptr){
    n = n->next;
}
n->next = list2;
This is a problem I've encountered a few times, and haven't been convinced that I've used the most efficient logic.
As an example, presume I have two trees: one is a folder structure, the other is an in-memory 'model' of that folder structure. I wish to compare the two trees, and produce a list of nodes that are present in one tree and not the other - and vice versa.
Is there an accepted algorithm to handle this?
Seems like you just want to do a pre-order traversal, essentially, where "visiting" a node means checking for children that are in one version but not the other.
More precisely: start at the root. At each node, get a set of items in each of the two versions of the node. The symmetric difference of the two sets contains the items in one but not the other. Print/output those. The intersection contains the items that are common to both. For each item in the intersection (I assume you aren't going to look further into the items that are missing from one tree), call "visit" recursively on that node to check its contents. It's an O(n) operation, with a little recursion overhead.
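A rough Python sketch of that recursion, under the assumption that each tree node is just a dict mapping child names to child nodes (these names are mine, not from the question):

def visit(path, tree1, tree2):
    # tree1 and tree2 are dicts: {child_name: child_dict}
    names1, names2 = set(tree1), set(tree2)
    for name in sorted(names1 ^ names2):          # symmetric difference: in one but not the other
        side = "first" if name in names1 else "second"
        print("%s/%s only in %s tree" % (path, name, side))
    for name in sorted(names1 & names2):          # intersection: present in both, so recurse
        visit(path + "/" + name, tree1[name], tree2[name])

# Example: 'docs' exists only in the folder structure, 'cache' only in the model.
visit("", {"src": {}, "docs": {}}, {"src": {}, "cache": {}})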
public boolean compareTrees(TreeNode root1, TreeNode root2) {
    if ((root1 == null && root2 != null) ||
        (root1 != null && root2 == null)) {
        return false;
    }
    if (root1 == null && root2 == null) {
        return true;
    }
    if (root1.data != root2.data) {
        return false;
    }
    return compareTrees(root1.left, root2.left) &&
           compareTrees(root1.right, root2.right);
}
If you use a sort tree, like an AVL tree, you can also traverse your tree efficiently in-order. That will return your paths in sorted order from "low" to "high".
Then you can sort your directory array (e.g. using quicksort) using the same compare method as you use in your tree algorithm.
Then start comparing the two side by side, advancing to the next item by traversing your tree in-order and checking the next item in your sorted directory array.
This should be more efficient in practice, but only benchmarking can tell.
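A minimal sketch of that side-by-side pass, assuming both sides have already been reduced to sorted lists of paths (the in-order walk of the tree on one side, the sorted directory array on the other):

def diff_sorted(tree_paths, dir_paths):
    # Like the merge step of merge sort: advance whichever side is behind,
    # recording anything that only shows up on one side.
    only_tree, only_dir = [], []
    i = j = 0
    while i < len(tree_paths) and j < len(dir_paths):
        if tree_paths[i] == dir_paths[j]:
            i += 1
            j += 1
        elif tree_paths[i] < dir_paths[j]:
            only_tree.append(tree_paths[i])
            i += 1
        else:
            only_dir.append(dir_paths[j])
            j += 1
    only_tree.extend(tree_paths[i:])
    only_dir.extend(dir_paths[j:])
    return only_tree, only_dir

print(diff_sorted(["a", "b", "d"], ["a", "c", "d"]))  # (['b'], ['c'])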
A simple code example in Python.
class Node(object):
    def __init__(self, val):
        self.val = val
        self.child = {}

    def get_left(self):
        # if left is not in the child dictionary, the element does not have a left child
        if 'left' in self.child:
            return self.child['left']
        else:
            return None

    def get_right(self):
        # if right is not in the child dictionary, the element does not have a right child
        if 'right' in self.child:
            return self.child['right']
        else:
            return None


def traverse_tree(a):
    if a is not None:
        print('current_node : %s' % a.val)
        if 'left' in a.child:
            traverse_tree(a.child['left'])
        if 'right' in a.child:
            traverse_tree(a.child['right'])


def compare_tree(a, b):
    if (a is not None and b is None) or (a is None and b is not None):
        return 0
    elif a is not None and b is not None:
        print(a.val, b.val)
        # print('currently comparing a : %s, b : %s, left : %s, %s , right : %s, %s' % (a.val, b.val, a.child['left'].val, b.child['left'].val, a.child['right'].val, b.child['right'].val))
        if a.val == b.val and compare_tree(a.get_left(), b.get_left()) and compare_tree(a.get_right(), b.get_right()):
            return 1
        else:
            return 0
    else:
        return 1


# Example
a = Node(1)
b = Node(0)
a.child['left'] = Node(2)
a.child['right'] = Node(3)
a.child['left'].child['left'] = Node(4)
a.child['left'].child['right'] = Node(5)
a.child['right'].child['left'] = Node(6)
a.child['right'].child['right'] = Node(7)

b.child['left'] = Node(2)
b.child['right'] = Node(3)
b.child['left'].child['left'] = Node(4)
#b.child['left'].child['right'] = Node(5)
b.child['right'].child['left'] = Node(6)
b.child['right'].child['right'] = Node(7)

if compare_tree(a, b):
    print('trees are equal')
else:
    print('trees are unequal')

# DFS traversal
traverse_tree(a)
I also pasted an example above that you can run.
You may also want to have a look at how git does it. Essentially whenever you do a git diff, under the hood a tree comparison is done.