Implementing a PriorityQueue Algorithm

Here is my implementation of a priority queue. I have a feeling that my pop function is wrong, but I am not sure where exactly. I have checked multiple times where my logic might have gone wrong, but it seems to be perfectly correct (checked against the CLRS pseudocode).
class PriorityQueue:
    """Array-based priority queue implementation."""

    def __init__(self):
        """Initially empty priority queue."""
        self.queue = []
        self.min_index = None

    def parent(self, i):
        return int(i/2)

    def left(self, i):
        return 2*i+1

    def right(self, i):
        return 2*i+2

    def min_heapify(self, heap_size, i):
        # Min heapify as written in CLRS
        smallest = i
        l = self.left(i)
        r = self.right(i)
        #print([l,r,len(self.queue),heap_size])
        try:
            if l <= heap_size and self.queue[l] < self.queue[i]:
                smallest = l
            else:
                smallest = i
        except IndexError:
            pass
        try:
            if r <= heap_size and self.queue[r] < self.queue[smallest]:
                smallest = r
        except IndexError:
            pass
        if smallest != i:
            self.queue[i], self.queue[smallest] = self.queue[smallest], self.queue[i]
            self.min_heapify(heap_size, smallest)

    def heap_decrease_key(self, i, key):
        # Implemented as specified in CLRS
        if key > self.queue[i]:
            raise ValueError("new key is larger than current key")
        #self.queue[i] = key
        while i > 0 and self.queue[self.parent(i)] > self.queue[i]:
            self.queue[i], self.queue[self.parent(i)] = self.queue[self.parent(i)], self.queue[i]
            i = self.parent(i)

    def __len__(self):
        # Number of elements in the queue.
        return len(self.queue)

    def append(self, key):
        """Inserts an element in the priority queue."""
        if key is None:
            raise ValueError('Cannot insert None in the queue')
        self.queue.append(key)
        heap_size = len(self.queue)
        self.heap_decrease_key(heap_size - 1, key)

    def min(self):
        """The smallest element in the queue."""
        if len(self.queue) == 0:
            return None
        return self.queue[0]

    def pop(self):
        """Removes the minimum element in the queue.

        Returns:
            The value of the removed element.
        """
        if len(self.queue) == 0:
            return None
        self._find_min()
        popped_key = self.queue[self.min_index]
        self.queue[0] = self.queue[len(self.queue)-1]
        del self.queue[-1]
        self.min_index = None
        self.min_heapify(len(self.queue), 0)
        return popped_key

    def _find_min(self):
        # Computes the index of the minimum element in the queue.
        #
        # This method may crash if called when the queue is empty.
        if self.min_index is not None:
            return
        min = self.queue[0]
        self.min_index = 0
Any hint or input will be highly appreciated.

The main issue is that the parent function is wrong. Since it should invert the left and right methods, you should first subtract 1 from i before halving that value:
def parent(self, i):
    return int((i-1)/2)
Other things to note:
You don't really have a good use for the member self.min_index. It is either 0 or None, and the difference is never really used in your code, as it follows directly from whether the heap is empty or not. This also means you don't need the method _find_min (which in itself is strange: you assign to min, but never use it). So drop that method and the line where you call it. Also drop the line where you assign None to self.min_index, and in the only other place where you read the value, just use 0.
You have two ways to protect against index errors in the min_heapify method: the <= heap_size tests and the try blocks. The first protection should really use < instead of <=, but either way you should use only one of the two protections, not both. So either test with less-than, or trap the exception.
The else block with smallest = i is unnecessary, because at that point smallest == i already.
min_heapify has a first parameter that always receives the full size of the heap, so it is an unnecessary parameter; it would never make sense to call this method with a different value for it. Drop that argument from the method definition and all calls, and instead define heap_size = len(self.queue) as a local name in the method.
In heap_decrease_key you commented out the assignment #self.queue[i] = key, which is fine as long as you never call this method to actually decrease a key. But although you never do that from inside the class, a user of the class may well want to use it that way (that is what the method's name suggests). So better uncomment that assignment.
With the above changes, your instance would only have queue as its data property. This is fine, but you could consider letting PriorityQueue inherit from list, so that you don't need this property either and can work directly with the list you inherit. Consequently, you would then replace self.queue with self throughout your code, and you can drop the __init__ and __len__ methods, since the list implementations of those are just what you need. A bit of care is needed when you want to call an original list method that you have overridden, like append; in that case use super().append.
With all of the above changes applied:
class PriorityQueue(list):
    """Array-based priority queue implementation."""

    def parent(self, i):
        return int((i-1)/2)

    def left(self, i):
        return 2*i+1

    def right(self, i):
        return 2*i+2

    def min_heapify(self, i):
        # Min heapify as written in CLRS
        heap_size = len(self)
        smallest = i
        l = self.left(i)
        r = self.right(i)
        if l < heap_size and self[l] < self[i]:
            smallest = l
        if r < heap_size and self[r] < self[smallest]:
            smallest = r
        if smallest != i:
            self[i], self[smallest] = self[smallest], self[i]
            self.min_heapify(smallest)

    def heap_decrease_key(self, i, key):
        # Implemented as specified in CLRS
        if key > self[i]:
            raise ValueError("new key is larger than current key")
        self[i] = key
        while i > 0 and self[self.parent(i)] > self[i]:
            self[i], self[self.parent(i)] = self[self.parent(i)], self[i]
            i = self.parent(i)

    def append(self, key):
        """Inserts an element in the priority queue."""
        if key is None:
            raise ValueError('Cannot insert None in the queue')
        super().append(key)
        heap_size = len(self)
        self.heap_decrease_key(heap_size - 1, key)

    def min(self):
        """The smallest element in the queue."""
        if len(self) == 0:
            return None
        return self[0]

    def pop(self):
        """Removes the minimum element in the queue.

        Returns:
            The value of the removed element.
        """
        if len(self) == 0:
            return None
        popped_key = self[0]
        self[0] = self[-1]
        del self[-1]
        self.min_heapify(0)
        return popped_key

Your parent function is already wrong.
The root element of your heap is stored in array index 0, the children are in 1 and 2. The parent of 1 is 0, that is correct, but the parent of 2 should also be 0, whereas your function returns 1.
Usually the underlying array of a heap does not use index 0; instead the root element is at index 1. This way you can compute parent and children like this:
parent(i):      i // 2
left_child(i):  2 * i
right_child(i): 2 * i + 1
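The two conventions can be checked mechanically; here is a small sketch (the function names are mine) verifying that each parent function inverts its left/right functions:

```python
# 0-based layout: root at index 0 (as in the question's code).
def parent0(i): return (i - 1) // 2
def left0(i):   return 2 * i + 1
def right0(i):  return 2 * i + 2

# 1-based layout: index 0 unused, root at index 1.
def parent1(i): return i // 2
def left1(i):   return 2 * i
def right1(i):  return 2 * i + 1

# In both layouts, parent() must invert left() and right().
for i in range(100):
    assert parent0(left0(i)) == i and parent0(right0(i)) == i
for i in range(1, 100):
    assert parent1(left1(i)) == i and parent1(right1(i)) == i

# The buggy version int(i/2) fails already at i = 2: it returns 1, not 0.
assert int(2 / 2) == 1
```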


Create a space-efficient Snapshot Set

I received this interview question that I didn't know how to solve.
Design a snapshot set functionality.
Once a snapshot is taken, the iterator of the class should only return values that were present in the set at that moment.
The class should provide add, remove, and contains functionality. The iterator always returns elements that were present in the snapshot, even if an element is removed from the set after the snapshot.
The snapshot of the set is taken when the iterator function is called.
interface SnapshotSet {
    void add(int num);
    void remove(int num);
    boolean contains(int num);
    Iterator<Integer> iterator(); // the first call to this function should trigger a snapshot of the set
}
The interviewer said that the space requirement is that we cannot create a copy (snapshot) of the entire list of keys when calling iterator.
The first step is to handle only one iterator being created and being iterated over at a time. The followup question: how to handle the scenario of multiple iterators?
An example:
SnapshotSet set = new SnapshotSet();
set.add(1);
set.add(2);
set.add(3);
set.add(4);
Iterator<Integer> itr1 = set.iterator(); // iterator should return 1, 2, 3, 4 (in any order) when next() is called.
set.remove(1);
set.contains(1); // returns false; because 1 was removed.
Iterator<Integer> itr2 = set.iterator(); // iterator should return 2, 3, 4 (in any order) when next() is called.
I came up with an O(n) space solution where I created a copy of the entire list of keys when calling iterator. The interviewer said this was not space efficient enough.
I think it is fine to have a solution that focuses on reducing space at the cost of time complexity (but the time complexity should still be as efficient as possible).
Here is a solution that makes all operations reasonably fast. So it is like a set that has all history, all the time.
First we'll need to review the idea of a skiplist. Without the snapshot functionality.
What we do is start with a linked list on the bottom which will always be kept in sorted order. Draw that in a line. Half the values are randomly selected to also be part of another linked list that you draw above the first. Then half of those are selected to be part of another linked list, and so on. If the bottom layer has size n, the whole structure usually requires around 2n nodes. (Because 1 + 1/2 + 1/4 + 1/8 + ... = 2.) Each node in the entire 2-dimensional structure has the following data:
value: the value of the node
height: the height of the node in the skip list
next: the next node at the current level (is null at the end)
down: the same value node, one level down (is null at height 0)
And now your set is represented by a stack of nodes whose values are ignored, that points at the starting node at each level.
Here is a basic picture:
set
|
start(3) ------> 2
|                |
start(2) ------> 2 -----------> 5 ----------------> 9
|                |              |                   |
start(1) ------> 2 ------> 4 -> 5 ----------------> 9
|                |         |    |                   |
start(0) -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8 -> 9 -> 10
Now suppose I want to find whether 8 is in the set. What I do is start from the set, find the topmost start, then:
while True:
    if node.next is null or 8 < node.next.value:
        if node.down is null:
            return False
        else:
            node = node.down
    elif 8 == node.next.value:
        return True
    else:
        node = node.next
In this case we go from set to start(3) to the top 2, down one to 2, forward to 5, down 2x to 5, then go 6, 7, and find 8.
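To make the search loop concrete, here is a runnable sketch that hand-builds the example structure above (the Node and build_level helpers are mine, not part of the original answer):

```python
class Node:
    def __init__(self, value, nxt=None, down=None):
        self.value, self.next, self.down = value, nxt, down

def contains(start, target):
    # Same walk as the pseudocode: fall down when the next value is too
    # big (or missing), otherwise move right.
    node = start
    while True:
        if node.next is None or target < node.next.value:
            if node.down is None:
                return False
            node = node.down
        elif target == node.next.value:
            return True
        else:
            node = node.next

def build_level(values, below):
    # Build one sorted level, linking each node down to its counterpart.
    by_value = {}
    n = below
    while n is not None:
        by_value[n.value] = n
        n = n.next
    head = None
    for v in reversed(values):
        head = Node(v, head, by_value.get(v))
    return head

l0 = build_level([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], None)
l1 = build_level([2, 4, 5, 9], l0)
l2 = build_level([2, 5, 9], l1)
l3 = build_level([2], l2)

# The stack of "start" sentinels, one per level, chained by down.
s0 = Node(None, l0, None)
s1 = Node(None, l1, s0)
s2 = Node(None, l2, s1)
s3 = Node(None, l3, s2)

assert contains(s3, 8) and contains(s3, 1) and not contains(s3, 11)
```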
That's contains. To remove, we follow the same search idea, but when we find that node.next.value equals the value being removed, we assign node.next = node.next.next, then continue searching.
To add we randomly choose a height (which can be int(-log(random())/log(2))). And then we search forward until we've arrived at that height at a node whose node.next should be our desired new value. Then we do something complicated.
prev_added = null
while node is not null:
    if node.next is null or new_value < node.next.value:
        if node.height <= desired_height:
            adding_node = Node(new_value, node.height, node.next, null)
            node.next = adding_node
            if prev_added is not null:
                prev_added.down = adding_node
            prev_added = adding_node
        node = node.down
    else:
        node = node.next
You can verify that expected performance of all three operations is O(log(n)).
So, how do we add snapshotting to this?
First we add a version number to the set data structure; it is incremented on each snapshot. Next, we replace every single pointer with a linked list of (pointer, version) entries. Now, instead of directly modifying a pointer, if its top entry has an older version than the one we are inserting at, we add a new entry to the head of the list and leave the older version be.
And NOW we can implement a snapshot as follows.
set.version = set.version + 1
node = set.start
while node.down is not null:
    node = node.down
snapshot = Snapshot(set, set.version, node)
Now snapshotting is very quick. And to traverse a particular past version of the set (including simply iterating over a snapshot), for any pointer we walk back through its list until we get past any too-new entries and find one that is old enough. It turns out that any given pointer tends to accumulate only a fairly small number of versions, so this adds only a modest amount of overhead.
Traversal of the current version of the set is just a question of always looking at the most recent version of a pointer. So it is just an additional layer of indirection, but same expected performance.
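A minimal sketch of the versioned-pointer idea (the names and the list-of-pairs representation are my own, not a full skiplist): each pointer becomes a list of (version, target) entries, newest first, and a snapshot at version v dereferences by scanning for the newest entry whose version is at most v.

```python
# A "fat pointer": list of (version, target) entries, newest first.
def set_pointer(entries, version, target):
    if entries and entries[0][0] == version:
        entries[0] = (version, target)        # same version: overwrite in place
    else:
        entries.insert(0, (version, target))  # keep older versions intact

def deref(entries, version):
    # Newest entry that is not newer than the requested version.
    for v, target in entries:
        if v <= version:
            return target
    return None

ptr = []
set_pointer(ptr, 1, "A")     # at version 1 the pointer goes to A
set_pointer(ptr, 3, "B")     # at version 3 it is redirected to B
assert deref(ptr, 2) == "A"  # a snapshot taken at version 2 still sees A
assert deref(ptr, 3) == "B"  # the current version sees B
```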
And now we have a version of this with all snapshotted versions available forever. It is possible to add garbage collection to reduce how much of a problem that is. But this description is long enough already.
This is a very different but ultimately much better answer than the one I gave at first. The idea is simply to have the data structure be a read-only reasonably well balanced sorted tree. Since it is read-only, it is easy to iterate over it.
But then how do you make modifications? Well, you simply create a new copy of the tree from the modification up to the root. This creates O(log(n)) new nodes. Better yet, the O(log(n)) old nodes that were replaced can be trivially garbage collected if they are no longer in use.
All operations are O(log(n)) except iteration which is O(n). I also included both an explicit iterator using callbacks, and an implicit one using Python's generators.
And for fun I coded it up in Python.
class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        count = 1
        if left is not None:
            count += left.count
        if right is not None:
            count += right.count
        self.count = count
        self.left = left
        self.right = right

    def left_count(self):
        if self.left is None:
            return 0
        else:
            return self.left.count

    def right_count(self):
        if self.right is None:
            return 0
        else:
            return self.right.count

    def attach_left(self, child):
        # New node for balanced tree with self.left replaced by child.
        if id(child) == id(self.left):
            return self
        elif child is None:
            return TreeNode(self.value).attach_right(self.right)
        elif child.left_count() < child.right_count() + self.right_count():
            return TreeNode(self.value, child, self.right)
        else:
            new_right = TreeNode(self.value, child.right, self.right)
            return TreeNode(child.value, child.left, new_right)

    def attach_right(self, child):
        # New node for balanced tree with self.right replaced by child.
        if id(child) == id(self.right):
            return self
        elif child is None:
            return TreeNode(self.value).attach_left(self.left)
        elif child.right_count() < child.left_count() + self.left_count():
            return TreeNode(self.value, self.left, child)
        else:
            new_left = TreeNode(self.value, self.left, child.left)
            return TreeNode(child.value, new_left, child.right)

    def merge_right(self, other):
        # New node for balanced tree with all of self, then all of other.
        if other is None:
            return self
        elif self.right is None:
            return self.attach_right(other)
        elif other.left is None:
            return other.attach_left(self)
        else:
            child = self.right.merge_right(other.left)
            if self.left_count() < other.right_count():
                child = self.attach_right(child)
                return other.attach_left(child)
            else:
                child = other.attach_left(child)
                return self.attach_right(child)

    def add(self, value):
        if value < self.value:
            if self.left is None:
                child = TreeNode(value)
            else:
                child = self.left.add(value)
            return self.attach_left(child)
        elif self.value < value:
            if self.right is None:
                child = TreeNode(value)
            else:
                child = self.right.add(value)
            return self.attach_right(child)
        else:
            return self

    def remove(self, value):
        if value < self.value:
            if self.left is None:
                return self
            else:
                return self.attach_left(self.left.remove(value))
        elif self.value < value:
            if self.right is None:
                return self
            else:
                return self.attach_right(self.right.remove(value))
        else:
            if self.left is None:
                return self.right
            elif self.right is None:
                return self.left
            else:
                return self.left.merge_right(self.right)

    def __str__(self):
        if self.left is None:
            left_lines = []
        else:
            left_lines = str(self.left).split("\n")
            left_lines.pop()
            left_lines = ["  " + l for l in left_lines]
        if self.right is None:
            right_lines = []
        else:
            right_lines = str(self.right).split("\n")
            right_lines.pop()
            right_lines = ["  " + l for l in right_lines]
        return "\n".join(left_lines + [str(self.value)] + right_lines) + "\n"

    # Pythonic iterator.
    def __iter__(self):
        if self.left is not None:
            yield from self.left
        yield self.value
        if self.right is not None:
            yield from self.right


class SnapshottableSet:
    def __init__(self, root=None):
        self.root = root

    def contains(self, value):
        node = self.root
        while node is not None:
            if value < node.value:
                node = node.left
            elif node.value < value:
                node = node.right
            else:
                return True
        return False

    def add(self, value):
        if self.root is None:
            self.root = TreeNode(value)
        else:
            self.root = self.root.add(value)

    def remove(self, value):
        if self.root is not None:
            self.root = self.root.remove(value)

    # Pythonic built-in approach
    def __iter__(self):
        if self.root is not None:
            yield from self.root

    # And explicit approach
    def iterator(self):
        nodes = []
        if self.root is not None:
            node = self.root
            while node is not None:
                nodes.append(node)
                node = node.left

        def next_value():
            if len(nodes):
                node = nodes.pop()
                value = node.value
                node = node.right
                while node is not None:
                    nodes.append(node)
                    node = node.left
                return value
            else:
                raise StopIteration
        return next_value


s = SnapshottableSet()
for i in range(10):
    s.add(i)
it = s.iterator()
for i in range(5):
    s.remove(2*i)
print("Current contents")
for v in s:
    print(v)
print("Original contents")
while True:
    print(it())

Difference between case 1 vs case 2?

I've worked on this question, Convert Sorted Array.
At first, I didn't know what should be returned in the end, so I created another function within the given one to carry out the recursion:
**case 1**
def sortedArrayToBST(self, nums: List[int]) -> TreeNode:
    def bst(num_list):
        # base_case
        if len(nums) < 2:
            return TreeNode(nums[-1])
        # recursive_case
        mid = len(nums) // 2
        node = TreeNode(nums[mid])
        node.left = bst(nums[:mid])
        node.right = bst(nums[mid + 1:])
    ans = bst(nums)
    return ans
but it kept giving me 'time limit exceeded' or 'maximum recursion depth exceeded' as a result.
Then, as soon as I removed the inner bst function and did the same recursion in the given function (sortedArrayToBST) itself, the error was gone just like magic...
**case 2**
def sortedArrayToBST(self, nums: List[int]) -> TreeNode:
    if not nums:
        return None
    if len(nums) == 1:
        return TreeNode(nums[-1])
    # recursive_case
    mid = len(nums) // 2
    node = TreeNode(nums[mid])
    node.left = self.sortedArrayToBST(nums[:mid])
    node.right = self.sortedArrayToBST(nums[mid + 1:])
    return node
However, I can't see what's different between the two. There must be a key difference, but I can't work it out on my own.
Could you please enlighten me on what the difference is between case 1 and case 2, and what causes the error in one but not the other?
In case 1, the length of the processed list does not decrease across recursive calls because, while the parameter of bst is num_list, it is nums that is processed in its body. The error would disappear in case 1 if num_list were processed (instead of nums) inside bst.
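A corrected version of case 1, recursing on num_list and with the missing return added (the TreeNode class here is a minimal stand-in for LeetCode's):

```python
class TreeNode:
    # Minimal stand-in for LeetCode's TreeNode definition.
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def sortedArrayToBST(nums):
    def bst(num_list):
        # Recurse on num_list, the parameter, not on the outer nums.
        if not num_list:
            return None
        mid = len(num_list) // 2
        node = TreeNode(num_list[mid])
        node.left = bst(num_list[:mid])
        node.right = bst(num_list[mid + 1:])
        return node
    return bst(nums)

def inorder(node):
    # In-order traversal; a valid BST yields the sorted input back.
    return inorder(node.left) + [node.val] + inorder(node.right) if node else []

assert inorder(sortedArrayToBST([1, 2, 3, 4, 5])) == [1, 2, 3, 4, 5]
```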

A* Search implementation for the Travelling Salesman Problem

I've been struggling for a long time after writing my A* Search algorithm with the fact that when the number of cities is greater than 8, the algorithm won't return any answer (or is ridiculously slow).
The cities are stored in a 2-d array where point cityList[x][y] is the distance between city x and city y (it is also the same as cityList[y][x]).
It's slightly messy as I had to use both the city class and the beginning of each route to remember route lengths and which routes had already been attempted.
A new route is also created for each city that is added on.
Could anyone help optimize it, or ensure it can work on an increasingly large number of cities?
class city(object):
    def __init__(self, number):
        self.number = number
        global size
        self.possibleChildren = []
        for i in range(0, size):
            if i != number:
                self.possibleChildren.append(i)

    def getNo(self):
        return self.number

    def getSize(self):
        return len(self.possibleChildren)

    def getOptions(self, i):
        self.i = i
        return self.possibleChildren[self.i]

    def deleteNo(self, option):
        self.option = option
        if(self.option in self.possibleChildren):
            self.possibleChildren.remove(self.option)

    def addFinalStep(self, beginning):
        self.beginning = beginning
        self.possibleChildren.append(self.beginning)

def Astar():
    routeList = []
    #routeList[i][0] = distance travelled, routeList[i][1] = number of cities past
    for i in range(0, size):
        newRoute = []
        newCity = city(i)
        newRoute.append(0)
        newRoute.append(1)
        newRoute.append(newCity)
        routeList.append(newRoute)
    while True:
        toUse = 0
        smallest = -2
        #Now search through the routeList to find the shortest route length
        for i in range(0, len(routeList)):
            #getSize() checks if there are any cities that can be visited by
            #this route that have not already been tried; this list is
            #stored in the city class
            if routeList[i][1 + routeList[i][1]].getSize() != 0:
                if routeList[i][0] < smallest or smallest == -2:
                    smallest = routeList[i][0]
                    toUse = i
            elif routeList[i][1 + routeList[i][1]].getSize() == 0:
                routeList.remove(i)
        routeRec = []
        #Creates the new route
        for i in range(0, len(routeList[toUse])):
            routeRec.append(routeList[toUse][i])
        currentCity = routeRec[1 + routeRec[1]]
        possibleChildren = []
        for i in range(0, currentCity.getSize()):
            possibleChildren.append(currentCity.getOptions(i))
        smallest = 0
        newCityNo = 2
        for i in range(0, size):
            if(i in possibleChildren):
                #Finds the closest city
                if smallest > cityList[currentCity.getNo()][i] or smallest == 0:
                    newCityNo = i
                    smallest = cityList[currentCity.getNo()][i]
        #If the new city to visit is the same as the first, the algorithm
        #has looped to the beginning and finished
        if newCityNo == routeRec[2].getNo():
            print("Tour Length")
            print(routeRec[0] + smallest)
            print("Route: ")
            for i in range(2, len(routeRec)):
                print(routeRec[i].getNo())
            return(routeRec[0] + smallest)
        #deletes all cities that have been tried
        routeList[toUse][1 + routeRec[1]].deleteNo(newCityNo)
        nextCity = city(newCityNo)
        #deletes all cities that are already in the route
        for i in range(2, 2 + routeRec[1]):
            nextCity.deleteNo(routeRec[i].getNo())
        #When the route is full, it can return to the first city
        routeRec[1] = routeRec[1] + 1
        print(routeRec[1])
        if routeRec[1] == size:
            #first city added to potential cities to visit
            nextCity.addFinalStep(routeRec[2].getNo())
        routeRec.append(nextCity)
        routeRec[0] = routeRec[0] + smallest
        routeList.append(routeRec)
You need to do some decomposition here.
You need a data structure for saving and retrieving the nodes to expand; in this case a heap.
You also need a heuristic, which you can pass in as a parameter.
Implementation of A*
# A* Algorithm
# heuristic - Heuristic function expecting a state as operand
# world - Global state
# sstate - Start state
# endstate - End state, the state to find
# expandfn - function(world, state) returning an array of tuples (nstate, cost)
#   where nstate is a state reached from the expanded state and cost is the
#   'size' of the edge.
def Astar(heuristic, world, sstate, endstate, expandfn):
    been = set()
    heap = Heap()
    end = None
    # We have tuples (state, pathlen, parent state tuple) in the heap
    heap.push((sstate, 0, None), 0)  # Heap insertion with weight 0
    while not heap.isempty():
        # Get the next state to expand from the heap
        nstate = heap.pop()
        # If the state has already been seen, do not expand it
        if nstate[0] in been:
            continue
        # If the goal is reached
        if nstate[0] == endstate:
            end = nstate
            break
        # Mark the state as expanded
        been.add(nstate[0])
        # For each state reached from the current state
        for expstate in expandfn(world, nstate[0]):
            pathlen = nstate[1] + expstate[1]
            # Push the state with heap weight 'path length + heuristic'
            heap.push((expstate[0], pathlen, nstate), pathlen + heuristic(expstate[0]))
    # End of while loop
    if end is None:
        return None
    # Reconstruct the path from the end node
    pathlen = end[1]
    res = []
    while end[2] is not None:
        res.append(end[0])
        end = end[2]
    res.reverse()
    return res, pathlen
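For reference, a self-contained variant of this scheme using Python's heapq in place of the assumed Heap class (the tiny graph and the zero heuristic in the demo are my own additions):

```python
import heapq
from itertools import count

def astar(heuristic, start, goal, expand):
    # expand(state) -> iterable of (next_state, edge_cost) pairs.
    tie = count()  # tiebreaker so heapq never has to compare states
    heap = [(heuristic(start), next(tie), 0, start, None)]
    seen = set()
    while heap:
        _, _, g, state, parent = heapq.heappop(heap)
        if state in seen:
            continue
        seen.add(state)
        node = (state, g, parent)
        if state == goal:
            # Walk the parent links back to the start.
            path = []
            while node is not None:
                path.append(node[0])
                node = node[2]
            path.reverse()
            return path, g
        for nstate, cost in expand(state):
            if nstate not in seen:
                heapq.heappush(heap, (g + cost + heuristic(nstate),
                                      next(tie), g + cost, nstate, node))
    return None

# Tiny demo: a weighted graph with a zero heuristic (i.e. Dijkstra).
graph = {'a': [('b', 1), ('c', 4)], 'b': [('c', 1)], 'c': []}
path, cost = astar(lambda s: 0, 'a', 'c', lambda s: graph[s])
```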

Why does this code take up so much memory?

I created a solution to this problem on leetcode:
All DNA is composed of a series of nucleotides abbreviated as A, C, G, and T, for example: "ACGAATTCCG". When studying DNA, it is sometimes useful to identify repeated sequences within the DNA.
Write a function to find all the 10-letter-long sequences (substrings) that occur more than once in a DNA molecule.
My solution causes an out of memory exception:
class Solution:
    # @param {string} s
    # @return {string[]}
    def findRepeatedDnaSequences(self, s):
        seen_so_far = set()
        results = set()
        for seq in self.window(10, s):
            if seq in seen_so_far:
                results.add(seq)
            else:
                seen_so_far.add(seq)
        return list(results)

    def window(self, window_size, array):
        window_start = 0
        window_end = window_size
        while window_end < len(array):
            yield array[window_start:window_end+1]
            window_end += window_size
This solution works:
class Solution:
    # @param {string} s
    # @return {string[]}
    def findRepeatedDnaSequences(self, s):
        d = {}
        res = []
        for i in range(len(s)):
            key = s[i:i+10]
            if key not in d:
                d[key] = 1
            else:
                d[key] += 1
        for e in d:
            if d[e] > 1:
                res.append(e)
        return res
They seem essentially the same. What am I missing?
Your window function is incorrect. It yields a sequence of substrings [0:11], [0:21], [0:31], ... (note that window_start remains zero). It can be fixed e.g. as
def window(self, window_size, array):
    window_start = 0
    while window_start < len(array) - window_size + 1:
        yield array[window_start:window_start+window_size]
        window_start += 1
Edit: substring ending indices were off by 1.
In this:
def window(self, window_size, array):
    window_start = 0
    window_end = window_size
    while window_end < len(array):
        yield array[window_start:window_end+1]
        window_end += window_size
the value of window_start is never changed. So, given that window_size is 10, you first yield the slice 0:11 (but wanted 0:10), then the slice 0:21 (but wanted 1:11), then the slice 0:31 (but wanted 2:12), and so on. The total length of all the slices you return grows proportionally to the square of len(array). If array is long, that would explain it. But not enough info was given to be sure about that.
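A corrected sliding window advances the start by one position and always yields exactly window_size characters; with it, the first solution behaves like the second (a standalone sketch outside the Solution class, using the classic LeetCode example as a check):

```python
def window(window_size, array):
    # Advance by one each step; every slice is exactly window_size long.
    for start in range(len(array) - window_size + 1):
        yield array[start:start + window_size]

def findRepeatedDnaSequences(s):
    seen, repeated = set(), set()
    for seq in window(10, s):
        if seq in seen:
            repeated.add(seq)
        seen.add(seq)
    return sorted(repeated)

assert findRepeatedDnaSequences("AAAAACCCCCAAAAACCCCCCAAAAAGGGTTT") == \
    ["AAAAACCCCC", "CCCCCAAAAA"]
```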

Design a data structure that supports min, getLast, append, deleteLast in O(1), memory bounded by n (not O(n)) + O(1)

I need to design a data structure that supports the following:
getMinElement, getLastElement, insertElement, deleteLastElement, all in O(1) run time.
Memory is bounded by n (not O(n)), i.e. you can keep at most n elements at a given moment, plus O(1) memory.
(Important: pointers count as 1 element as well, so a linked list, for instance, is out of the question.)
Example:
insert(6)
min() -> 6
insert(10)
insert(5)
min() -> 5
insert(7)
delete()
min() -> 5
delete()
min() -> 6
We'll store the most recent minimum directly. This is O(1) space.
We'll also use an array of integers, since that seems to be our only option for variable-length space. But we won't store the elements directly. Instead, when we insert an element, we'll store the difference between that element and the (prior) minimum. When we delete an element, we can use that difference to restore the prior minimum if necessary.
In Python:
class MinStorage:
    def __init__(self):
        self.offsets = []
        self.min = None

    def insertElement(self, element):
        offset = 0 if self.min is None else (element - self.min)
        self.offsets.append(offset)
        if self.min is None or offset < 0:
            self.min = element

    def getMinElement(self):
        return self.min

    def getLastElement(self):
        offset = self.offsets[-1]
        if offset < 0:
            # Last element defined a new minimum, so offset is an offset
            # from the prior minimum, not an offset from self.min.
            return self.min
        else:
            return self.min + offset

    def deleteLastElement(self):
        offset = self.offsets[-1]
        self.offsets.pop()
        if len(self.offsets) == 0:
            self.min = None
        if offset < 0:
            self.min -= offset
Here's a version that allows any unsigned 16-bit integer as an element, and only stores unsigned 16-bit integers in the array:
class MinStorage:
    Cap = 65536

    def __init__(self):
        self.offsets = []
        self.min = None

    def insertElement(self, element):
        assert 0 <= element < self.Cap
        offset = 0 if self.min is None else (element - self.min)
        if offset < 0:
            offset += self.Cap
        self.offsets.append(offset)
        if self.min is None or element < self.min:
            self.min = element

    def getMinElement(self):
        return self.min

    def getLastElement(self):
        element = self.__getLastElementUnchecked()
        if element < self.min:
            # Last element defined a new minimum, so offset is an offset
            # from the prior minimum, not an offset from self.min.
            return self.min
        else:
            return element

    def deleteLastElement(self):
        element = self.__getLastElementUnchecked()
        self.offsets.pop()
        if len(self.offsets) == 0:
            self.min = None
        elif element < self.min:
            # Popped element defined a new minimum; restore the prior one.
            # Since the stored offset encodes (element - prior_min), the
            # prior minimum is 2*min - element, not element itself.
            self.min = (2 * self.min - element) % self.Cap

    def __getLastElementUnchecked(self):
        offset = self.offsets[-1]
        element = self.min + offset
        if element >= self.Cap:
            element -= self.Cap
        return element
Note that in a language with unsigned 16-bit arithmetic that wraps on overflow/underflow, you wouldn't need the checks and adjustments involving self.Cap. In C (§6.2.5/9) and C++ (§3.9.1/4), unsigned arithmetic is required to behave as needed. However, Python doesn't support unsigned 16-bit arithmetic.
Use a stack that stores both the inserted value and the current min. The current min is updated when pushing a value, by comparing the value against the current min; when popping a value, the new current min is simply the min stored in the pair now at the top of the stack.
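A quick sketch of that pair-stack approach (the class name is mine); note it stores two numbers per element, so it trades the question's strict memory bound for simplicity:

```python
class MinStack:
    """Stack of (value, min_so_far) pairs; all four operations are O(1)."""
    def __init__(self):
        self.stack = []

    def insertElement(self, element):
        # The running minimum is the smaller of the new value and the
        # minimum recorded in the pair currently on top.
        current_min = element if not self.stack else min(element, self.stack[-1][1])
        self.stack.append((element, current_min))

    def getLastElement(self):
        return self.stack[-1][0]

    def getMinElement(self):
        return self.stack[-1][1]

    def deleteLastElement(self):
        # Popping automatically "restores" the previous minimum.
        self.stack.pop()

# The example from the question:
ms = MinStack()
ms.insertElement(6)
assert ms.getMinElement() == 6
ms.insertElement(10)
ms.insertElement(5)
assert ms.getMinElement() == 5
ms.insertElement(7)
ms.deleteLastElement()
assert ms.getMinElement() == 5
ms.deleteLastElement()
assert ms.getMinElement() == 6
```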
If you can assume something like "data is in the range 0 to 255 (int8)", and you are allowed to store an integer of twice the precision (int16), then you could store the "cumulative minimum" in the upper byte and the data point in the lower byte. Other than something like this, I don't believe this is possible within the constraints you have given.
