Data structure that supports queue-like operations and mode finding

This was an interview question I was asked almost three years back, and I was pondering it again a while ago.
Design a data structure that supports the following operations:
insert_back(), remove_front() and find_mode(). The best possible
complexity is required.
The best solution I could think of was O(log n) for insertion and deletion and O(1) for finding the mode. This is how I solved it: keep a queue to record which element was inserted first and is therefore deleted next.
Also keep an array that is max-heap ordered, plus a hash table.
The hash table maps an integer key to that element's index in the heap array. The heap array contains (count, element) pairs and is ordered on the count.
Insertion: insert the element into the queue. Look up the element's heap-array index in the hash table. If none exists, add the element to the heap with count 1, heapify upwards, and store its final location in the hash table. Otherwise, increment the count at that location and heapify upwards or downwards as needed to restore the heap property.
Deletion: remove the element at the head of the queue. From the hash table, find its index in the heap array. Decrement the count in the heap and re-heapify upwards or downwards as needed to restore the heap property.
Find mode: the element at the root of the heap (getMax()) gives us the mode.
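For concreteness, here is a rough C++ sketch of this layout (the names are my own; elements are ints, and an element whose count drops to 0 simply sinks to the bottom of the heap rather than being removed):

#include <cstddef>
#include <queue>
#include <unordered_map>
#include <utility>
#include <vector>

class ModeQueue {
    std::queue<int> q;                          // arrival order
    std::vector<std::pair<long long, int>> h;   // max-heap of (count, element)
    std::unordered_map<int, std::size_t> slot;  // element -> its index in h

    void swapAt(std::size_t i, std::size_t j) {
        std::swap(h[i], h[j]);
        slot[h[i].second] = i;
        slot[h[j].second] = j;
    }
    void siftUp(std::size_t i) {
        while (i > 0 && h[(i - 1) / 2].first < h[i].first) {
            swapAt(i, (i - 1) / 2);
            i = (i - 1) / 2;
        }
    }
    void siftDown(std::size_t i) {
        for (;;) {
            std::size_t l = 2 * i + 1, r = l + 1, big = i;
            if (l < h.size() && h[l].first > h[big].first) big = l;
            if (r < h.size() && h[r].first > h[big].first) big = r;
            if (big == i) return;
            swapAt(i, big);
            i = big;
        }
    }

public:
    void insert_back(int x) {
        q.push(x);
        auto it = slot.find(x);
        if (it == slot.end()) {            // first occurrence: new heap entry
            h.emplace_back(1, x);
            slot[x] = h.size() - 1;
            siftUp(h.size() - 1);
        } else {                           // count grew: can only move up
            ++h[it->second].first;
            siftUp(it->second);
        }
    }
    void remove_front() {                  // assumes the queue is not empty
        int x = q.front();
        q.pop();
        std::size_t i = slot[x];
        --h[i].first;                      // count shrank: can only move down
        siftDown(i);
    }
    int find_mode() const { return h[0].second; }  // root holds the max count
};

Note that since the heap is ordered on counts, insert_back only ever needs the upward heapify and remove_front only the downward one.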
Can someone please suggest something better? The only optimization I could think of was using a Fibonacci heap, but I am not sure it is a good fit for this problem.

I think there is a solution with O(1) for all operations.
You need a deque, and two hashtables.
The first one is a linked hashtable, where for each element you store its count, the next element in count order, and the previous element in count order. You can then look up the next and previous elements' entries in that hashtable in constant time. For this hashtable you also keep and update the element with the largest count. (element -> count, next_element, previous_element)
In the second hashtable, for each distinct count, you store the first and the last element of the run of elements with that count in the first hashtable. Note that the number of entries in this hashtable will be less than n (it's O(sqrt(n)), I think). (count -> (first_element, last_element))
Basically, when you add an element to or remove an element from the deque, you can find its new position in the first hashtable by examining its next and previous elements, together with the entries for the old and new counts in the second hashtable, all in constant time. You can remove and add elements in the first hashtable in constant time using standard linked-list manipulation, and you can update the second hashtable and the element with the maximum count in constant time as well.
I'll try writing pseudocode if needed, but it seems to be quite complex with many special cases.
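In the meantime, here is a condensed C++ sketch of the idea, though not the exact two-hashtable layout above: equal-count elements are grouped into buckets on a doubly linked list, which plays the role of the next/previous threading, and the tail bucket plays the role of the tracked maximum. All names are mine:

#include <deque>
#include <iterator>
#include <list>
#include <unordered_map>
#include <unordered_set>

class ModeBuckets {
    struct Bucket { long long count; std::unordered_set<int> elems; };
    std::deque<int> q;                                           // arrival order
    std::list<Bucket> buckets;                                   // ascending by count
    std::unordered_map<int, std::list<Bucket>::iterator> where;  // element -> its bucket

    void increment(int x) {
        auto w = where.find(x);
        auto cur = (w == where.end()) ? buckets.end() : w->second;
        long long c = (cur == buckets.end()) ? 0 : cur->count;
        // the count-(c+1) bucket, if present, sits right after 'cur'
        auto nxt = (cur == buckets.end()) ? buckets.begin() : std::next(cur);
        if (nxt == buckets.end() || nxt->count != c + 1)
            nxt = buckets.insert(nxt, Bucket{c + 1, {}});
        nxt->elems.insert(x);
        where[x] = nxt;
        if (cur != buckets.end()) {
            cur->elems.erase(x);
            if (cur->elems.empty()) buckets.erase(cur);
        }
    }
    void decrement(int x) {
        auto cur = where.at(x);
        long long c = cur->count;
        cur->elems.erase(x);
        if (c > 1) {
            // the count-(c-1) bucket, if present, sits right before 'cur'
            auto prv = (cur == buckets.begin()) ? buckets.end() : std::prev(cur);
            if (prv == buckets.end() || prv->count != c - 1)
                prv = buckets.insert(cur, Bucket{c - 1, {}});
            prv->elems.insert(x);
            where[x] = prv;
        } else {
            where.erase(x);
        }
        if (cur->elems.empty()) buckets.erase(cur);
    }

public:
    void insert_back(int x) { q.push_back(x); increment(x); }
    void remove_front()     { int x = q.front(); q.pop_front(); decrement(x); }
    int  find_mode() const  { return *buckets.back().elems.begin(); }  // any max-count element
};

Every operation touches only a constant number of buckets and hashtable entries, so everything is O(1) on average.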

Related

Data structure that supports random access by index and key, insertion, deletion in logarithmic time with order maintained

I'm looking for a data structure that stores an ordered list of E = (K, V) elements and supports the following operations in at most O(log(N)) time, where N is the number of elements. Memory usage is not a problem.
E get(index) // get element by index
int find(K) // find the index of the element whose K matches
delete(index) // delete element at index, the following elements have their indexes decreased by 1
insert(index, E) // insert element at index, the following elements have their indexes increased by 1
I have considered the following incorrect solutions:
Use an array: find, delete, and insert will still cost O(N)
Use an array + a map of K to index: delete and insert will still cost O(N) for shifting elements and updating the map
Use a linked list + a map of K to element address: get and find will still cost O(N)
In my imagination, the last solution is the closest, but instead of a linked list, a self-balancing tree where each node stores the number of elements in its left subtree would make it possible to do get in O(log(N)).
However, I'm not sure if I'm correct, so I want to ask whether my idea works and whether there is a name for this kind of data structure, so I can look for an off-the-shelf solution.
The closest data structure I could think of is the treap.
An implicit treap is a simple modification of the regular treap, which is a very powerful data structure. In fact, an implicit treap can be considered as an array with the following procedures implemented (all in O(log N), in online mode):
Inserting an element in the array in any location
Removal of an arbitrary element
Finding sum, minimum / maximum element etc. on an arbitrary interval
Addition, painting on an arbitrary interval
Reversing elements on an arbitrary interval
Using the modification with implicit keys allows you to do all operations except the second one (find the index of the element whose K matches). I'll edit this answer if I come up with a better idea :)
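For reference, a minimal implicit-treap sketch supporting get, insert, and delete by index, each in expected O(log N). Names are illustrative, and the payload is an int for brevity:

#include <cstdlib>

struct Node {
    int value;                 // the payload E (an int here for brevity)
    int priority, size;
    Node *left = nullptr, *right = nullptr;
    explicit Node(int v) : value(v), priority(std::rand()), size(1) {}
};

int size(Node* t) { return t ? t->size : 0; }
void update(Node* t) { if (t) t->size = 1 + size(t->left) + size(t->right); }

// split t into l = the first k elements, r = the rest
void split(Node* t, int k, Node*& l, Node*& r) {
    if (!t) { l = r = nullptr; return; }
    if (size(t->left) < k) { split(t->right, k - size(t->left) - 1, t->right, r); l = t; }
    else                   { split(t->left, k, l, t->left); r = t; }
    update(t);
}

Node* merge(Node* l, Node* r) {
    if (!l || !r) return l ? l : r;
    if (l->priority > r->priority) { l->right = merge(l->right, r); update(l); return l; }
    r->left = merge(l, r->left); update(r); return r;
}

Node* insert(Node* t, int index, int v) {   // insert v so it lands at 'index'
    Node *l, *r;
    split(t, index, l, r);
    return merge(merge(l, new Node(v)), r);
}

Node* erase(Node* t, int index) {           // delete the element at 'index'
    Node *l, *m, *r;
    split(t, index, l, m);
    split(m, 1, m, r);                      // m is now the single removed node
    delete m;
    return merge(l, r);
}

int get(Node* t, int index) {               // random access by index
    int ls = size(t->left);
    if (index < ls) return get(t->left, index);
    if (index > ls) return get(t->right, index - ls - 1);
    return t->value;
}

find(K) would still need a separate map from K to tree position, which is where this approach falls short of the full requirement, as noted above.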

Delete certain element from heap [duplicate]

As I understand it, a binary heap does not support removing random elements. What if I need to remove random elements from a binary heap?
Obviously, I can remove an element and re-arrange the entire heap in O(N). Can I do better?
Yes and no.
The problem is that a binary heap does not support searching for an arbitrary element; finding it is itself O(n).
However, if you have a pointer to the element (and not only its value), you can swap the element with the right-most leaf, remove that leaf, and then re-heapify the relevant sub-heap (by sifting down the newly placed element as much as needed). This results in O(log n) removal, but requires a pointer to the actual element you are looking for.
Amit is right in his answer, but here is one more nuance:
the item placed at the removed position (where you put the right-most leaf) may need to be bubbled up (compared with its parent and moved up until the parent is larger than it).
Sometimes it needs to be bubbled down (compared with its children and moved down until all children are smaller than it). It all depends on the case.
Depends on what is meant by "random element." If it means that the heap contains elements [e1, e2, ..., eN] and one wants to delete some ei (1 <= i <= N), then this is possible.
If you are using a binary heap implementation from some library, it might be that it doesn't provide you with the API that you need. In that case, you should look for another library that has it.
If you were to implement it yourself, you would need two additional calls:
A procedure deleteAtIndex(heap, i) that deletes the node at index i by positioning the last element in the heap array at i, decrementing the element count, and finally sifting the new i-th element down or up to maintain the heap invariant. The most common use of this procedure is to "pop" the heap by calling deleteAtIndex(heap, 1) -- assuming 1-origin indexing. This operation runs in O(log n) (though, to be complete, I'll note that the bound can be improved up to O(log(log n)) under some assumptions about your elements' keys).
A procedure deleteElement(heap, e) that deletes the element e (your arbitrary element).
Your heap algorithm would maintain an array ElementIndex such that ElementIndex[e] returns
the current index of element e: calling deleteAtIndex(heap, ElementIndex[e])
will then do what you want. It will also run in O(log n) because the array access is constant.
Since binary heaps are often used in algorithms that merely pop the highest (or lowest) priority element (rather than deleting arbitrary elements), I imagine some libraries omit the deleteAtIndex API to save space (the extra ElementIndex array mentioned above).
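As a sketch of what those two calls might look like for a 0-origin max-heap of distinct ints, with a hash map named pos standing in for the ElementIndex array (everything else is illustrative):

#include <unordered_map>
#include <utility>
#include <vector>

class IndexedMaxHeap {
    std::vector<int> a;                // 0-origin heap array
    std::unordered_map<int, int> pos;  // "ElementIndex": element -> index in a

    void swapAt(int i, int j) {
        std::swap(a[i], a[j]);
        pos[a[i]] = i;
        pos[a[j]] = j;
    }
    void siftUp(int i) {
        while (i > 0 && a[(i - 1) / 2] < a[i]) {
            swapAt(i, (i - 1) / 2);
            i = (i - 1) / 2;
        }
    }
    void siftDown(int i) {
        for (;;) {
            int l = 2 * i + 1, r = l + 1, big = i;
            if (l < (int)a.size() && a[l] > a[big]) big = l;
            if (r < (int)a.size() && a[r] > a[big]) big = r;
            if (big == i) return;
            swapAt(i, big);
            i = big;
        }
    }

public:
    void push(int v) {
        a.push_back(v);
        pos[v] = (int)a.size() - 1;
        siftUp((int)a.size() - 1);
    }
    // deleteAtIndex: move the last element into slot i, shrink the array,
    // then sift whichever way the heap invariant demands. O(log n).
    void deleteAtIndex(int i) {
        pos.erase(a[i]);
        a[i] = a.back();
        a.pop_back();
        if (i == (int)a.size()) return;  // we removed the last slot itself
        pos[a[i]] = i;
        if (i > 0 && a[i] > a[(i - 1) / 2]) siftUp(i); else siftDown(i);
    }
    // deleteElement: O(1) map lookup, then O(log n) deleteAtIndex.
    void deleteElement(int v) { deleteAtIndex(pos.at(v)); }
};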

How do I further optimize this Data Structure?

I was recently asked to build a data structure that supports four operations, namely,
Push: Add an element to the DS.
Pop: Remove the last pushed element.
Find_max: Find the maximum element out of the currently stored elements.
Pop_max: Remove the maximum element from the DS.
The elements are integers.
Here is the solution I suggested:
Take a stack.
Store pairs of the form (element, max_so_far) in it, where element is the value at that position and max_so_far is the maximum element seen so far.
While pushing an element onto the stack, check the max_so_far of the topmost stack element. If the current number is greater, use it as the new pair's max_so_far; otherwise carry over the previous max_so_far. This means that pushing is a simple O(1) operation.
For pop, simply pop an element out of the stack. Again, this operation is O(1).
For Find_max, return the value of the max_so_far of the topmost element in the stack. Again, O(1).
Popping the max element would involve going through the stack, explicitly removing the max element, and pushing the elements above it back on after allotting new max_so_far values. This would be linear.
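A small sketch of the pair stack just described (pop_max is omitted, since that is the linear operation in question; names are mine, and pop/find_max assume a non-empty stack):

#include <algorithm>
#include <stack>
#include <utility>

class PairStack {
    std::stack<std::pair<int, int>> s;   // (element, max_so_far)

public:
    void push(int v) {
        int m = s.empty() ? v : std::max(v, s.top().second);
        s.push({v, m});                  // carry the running max along
    }
    int pop() {
        int v = s.top().first;
        s.pop();
        return v;
    }
    int find_max() const { return s.top().second; }
};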
I was asked to improve it, but I couldn't.
In terms of time complexity, the overall time can be improved if all operations run in O(log n), I guess. How to do that is something I'm unable to work out.
One approach would be to store pointers to the elements in a doubly-linked list, and also in a max-heap data structure (sorted by value).
Each element would store its position in the doubly-linked list and also in the max-heap.
In this case all of your operations would require O(1) time in the doubly-linked list, plus O(log(n)) time in the heap data structure.
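A sketch of this idea, assuming integer elements and substituting an ordered std::multimap for the max-heap (it gives the same bounds here; all names are mine, and the destructor is omitted for brevity):

#include <iterator>
#include <map>

class MaxStack {
    struct Node;
    using Heap = std::multimap<int, Node*>;   // ordered by value; max at prev(end())
    struct Node { int value; Node *prev, *next; Heap::iterator heap_it; };

    Node* top_ = nullptr;                     // most recently pushed
    Heap by_value;

    void unlink(Node* n) {                    // detach n from the push-order list
        if (n->prev) n->prev->next = n->next;
        if (n->next) n->next->prev = n->prev;
        if (n == top_) top_ = n->prev;
    }

public:
    void push(int v) {                        // O(log n)
        Node* n = new Node{v, top_, nullptr, {}};
        if (top_) top_->next = n;
        top_ = n;
        n->heap_it = by_value.emplace(v, n);
    }
    int find_max() const {                    // O(1); assumes non-empty
        return std::prev(by_value.end())->first;
    }
    int pop() {                               // remove the most recent element
        Node* n = top_;
        int v = n->value;
        by_value.erase(n->heap_it);
        unlink(n);
        delete n;
        return v;
    }
    int pop_max() {                           // remove the maximum element
        auto it = std::prev(by_value.end());
        Node* n = it->second;
        int v = n->value;
        by_value.erase(it);
        unlink(n);
        delete n;
        return v;
    }
};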
One way to get O(log n)-time operations is to mash up two data structures, in this case a doubly linked list and a priority queue (a pairing heap is a good choice). We have a node structure like
struct Node {
    Node *previous, *next;          // doubly linked list
    Node **back, *child, *sibling;  // pairing heap
    int value;
} list_head, *heap_root;
Now, to push, we push into both structures. To find_max, we return the value at the root of the pairing heap. To pop or pop_max, we pop from the appropriate data structure and then use the other node pointers to delete the node from the other data structure.
Usually, when you need to find elements by quality A (value) and also by quality B (insert order), you start eyeballing a data structure that actually has two data structures inside that reference each other, or are otherwise interleaved.
For instance: two maps whose keys are quality A and quality B, and whose values are a shared pointer to a struct that contains iterators back into both maps, plus the value. Then you have O(log n) to find an element via either quality, and erasure is ~O(log n) to remove the two iterators from either map.
struct hybrid {
    struct entry {   // renamed from 'value' so the member below can keep that name
        std::map<std::string, std::shared_ptr<entry>>::iterator name_iter;
        std::map<int, std::shared_ptr<entry>>::iterator height_iter;
        mountain value;   // 'mountain' is the payload type of this example
    };
    std::map<std::string, std::shared_ptr<entry>> name_map;
    std::map<int, std::shared_ptr<entry>> height_map;
    mountain& find_by_name(const std::string& s) { return name_map[s]->value; }
    mountain& find_by_height(int height) { return height_map[height]->value; }
    void erase_by_name(const std::string& s) {
        entry& e = *name_map[s];
        height_map.erase(e.height_iter);
        name_map.erase(e.name_iter); // note that this invalidates the reference e
    }
};
However, in your case you can do even better than this O(log n), since you only need "the most recent" and "the next highest". To make "pop highest" fast, you need a fast way to find the next highest, which means it needs to be precalculated at insert. To find the "height" position relative to the rest, you need a map of some sort. To make "pop most recent" fast, you need a fast way to find the next most recent, but that's trivially calculated. I'd recommend creating a map or heap of nodes, where the keys are the value used for finding the max, and the values are a pointer to the next most recent value. This gives you O(log n) insert, O(1) find most recent, O(1) or O(log n) find maximum value (depending on implementation), and ~O(log n) erasure by either index.
One more way to do this:
Create a max-heap of the elements. This way we can get the max element in O(1) and delete it in O(log n).
Along with this we can maintain a pointer to the last pushed element. And as far as I know, deletion in a heap can be done in O(log n).

Deleting a node from the middle of a heap

Deleting a node from the middle of the heap can be done in O(lg n), provided we can find the element in the heap in constant time. Suppose each node of the heap contains an id as one of its fields. Now if we provide the id, how can we delete the node in O(lg n) time?
One solution could be to store, at an address kept in each node, the index of that node in the heap, with this array ordered by node id. This requires an additional array to be maintained, though. Is there any other good method to achieve the same?
PS: I came across this problem while implementing Dijkstra's shortest-path algorithm.
The index (id, node) can be maintained separately in a hashtable which has O(1) lookup complexity (on average). The overall complexity then remains O(log n).
Each data structure is designed with certain operations in mind. From Wikipedia, about heap operations:
The operations commonly performed with a heap are:
create-heap: create an empty heap
find-max or find-min: find the maximum item of a max-heap or a minimum item of a min-heap, respectively
delete-max or delete-min: removing the root node of a max- or min-heap, respectively
increase-key or decrease-key: updating a key within a max- or min-heap, respectively
insert: adding a new key to the heap
merge: joining two heaps to form a valid new heap containing all the elements of both.
This means a heap is not the best data structure for the operation you are looking for. I would advise you to look for a better-suited data structure (depending on your requirements).
I've had a similar problem and here's what I've come up with:
Solution 1: if your calls to delete a random item will have a pointer to the item, you can store your individual data items outside of the heap; have the heap hold pointers to these items; and have each item contain its current heap-array index.
Example: the heap contains pointers to items with keys [2 10 5 11 12 6]. The item holding value 10 has a field ArrayIndex = 1 (counting from 0). So if I have a pointer to item 10 and want to delete it, I just look at its ArrayIndex and use that in the heap for a normal delete: O(1) to find the heap location, then the usual O(log n) to delete it via recursive heapify.
Solution 2: if you only have the key field of the item you want to delete, not its address, try this. Switch to a red-black tree, putting your payload data in the actual tree nodes. This is also O(log n) for insert and delete. It can additionally find an item with a given key in O(log n), which keeps delete-by-key at O(log n).
Between these, solution 1 requires the overhead of constantly updating ArrayIndex fields with every swap. It also results in a kind of strange one-off data structure that the next code maintainer would need to study and understand. I think solution 2 would be about as fast, and it has the advantage of being a well-understood algorithm.
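For example, solution 2 falls out almost directly from std::map, which typical standard library implementations build on a red-black tree:

#include <map>
#include <string>

std::map<int, std::string> tree;   // a red-black tree under the hood

void example() {
    tree.emplace(42, "payload");             // insert: O(log n)
    auto it = tree.find(42);                 // find by key: O(log n)
    if (it != tree.end()) tree.erase(it);    // delete by key: O(log n)
    // if used as a priority queue, the min lives at tree.begin()
}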

Data design issue to find insertion, deletion, and getMin in O(1)

As said in the title, I need to define a data structure that takes only O(1) time for insertion, deletion, and getMin. There are no space constraints.
I have searched SO for the same, and all I have found is insertion and deletion in O(1) time; even a stack does that. The previous posts on Stack Overflow all just say hashing...
By my analysis, for getMin in O(1) time we can use a heap data structure,
and for insertion and deletion in O(1) time we have a stack...
So in order to achieve my goal, I think I need to tweak the heap data structure and the stack somehow...
How would I add a hashing technique to this situation?
If I use a hashtable, then what should my hash function look like, and how do I analyze the situation in terms of hashing? Any good references will be appreciated.
If you go with your initial assumption that insertion and deletion are O(1) (if you only want to insert at the top and delete/pop from the top, then a stack works fine), then for getMin to return the minimum value in constant time you need to store the min somehow. If you just had a member variable keeping track of the min, what would happen when that value was deleted off the stack? You would need the next minimum, or rather the minimum relative to what's left in the stack. To do this, you can have each element in the stack record what it believes to be the minimum. The stack is represented in code by a linked list, so the struct of a node in the linked list would look something like this:
struct Node
{
    int value;
    int min;
    Node *next;
};
Let's look at how the example list 7->3->1->5->2 would be built. First you push the value 2 (onto an empty stack); this is the min because it's the first number, so keep track of it and store it in the node when you construct it: {2, 2}. Then you push 5 onto the stack; 5 > 2, so the min is unchanged: push {5, 2}, and now you have {5,2}->{2,2}. Then you push 1; 1 < 2, so the new min is 1: push {1, 1}, and now it's {1,1}->{5,2}->{2,2}, etc. By the end you have:
{7,1}->{3,1}->{1,1}->{5,2}->{2,2}
In this implementation, if you popped off 7, 3, and 1, your new min would be 2, as it should be. And all of your operations are still constant time, because you just added a comparison and another value to the node. (You could use something like std::stack's top(), or just a pointer to the head of the list, to look at the top of the stack and grab the min there; it gives you the min of the stack in constant time.)
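A minimal sketch of push/pop/getMin over this Node struct (it assumes pop and getMin are never called on an empty stack; names are mine):

Node* head = nullptr;

void push(int v) {
    int m = (head && head->min < v) ? head->min : v;  // min including v
    head = new Node{v, m, head};
}

int pop() {
    Node* old = head;
    int v = old->value;
    head = old->next;
    delete old;
    return v;
}

int getMin() { return head->min; }  // the top node always carries the stack's min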
A tradeoff in this implementation is that you'd have an extra integer in every node, and if you only have one or two mins in a very large list, that is a waste of memory. If this is the case, you could keep track of the mins in a separate stack and just compare the value of the node you're deleting to the top of that list, removing it from both lists if it matches. It's more things to keep track of, so it really depends on the situation.
DISCLAIMER: This is my first post in this forum so I'm sorry if it's a bit convoluted or wordy. I'm also not saying that this is "one true answer" but it is the one that I think is the simplest and conforms to the requirements of the question. There are always tradeoffs and depending on the situation different approaches are required.
This is a design problem, which means they want to see how quickly you can augment existing data-structures.
start with what you know:
O(1) update, i.e. insertion/deletion, is screaming hashtable
O(1) getMin is screaming hashtable too, but this time an ordered one.
Here, I am presenting one way of doing it. You may find something else that you prefer.
create a HashMap, call it main, where you store all the elements
create a LinkedHashMap (Java has one), call it mins, where you track the minimum values.
the first time you insert an element into main, add it to mins as well.
for every subsequent insert, if the new value is less than what's at the head of your mins map, add it to the map with something equivalent to addToHead.
when you remove an element from main, also remove it from mins. 2*O(1) = O(1)
Notice that getMin is simply peeking without deleting. So just peek at the head of mins.
EDIT:
Amortized algorithm:
(thanks to #Andrew Tomazos - Fathomling, let's have some more fun!)
We all know that the cost of insertion into a hashtable is O(1). But in fact, if you have ever built a hash table, you know that you must keep doubling the size of the table to avoid overflow. Each time you double the size of a table with n elements, you must re-insert those elements and then add the new one. By this analysis it would
seem that the worst-case cost of adding an element to a hashtable is O(n). So why do we say it's O(1)? Because not all insertions take the worst case! Indeed, only the insertions where doubling occurs do. Inserting n elements therefore takes n + sum(2^i for i = 0 to lg(n)-1) = n + 2n = O(n), so the amortized cost per insertion is O(n)/n = O(1)!
Why not apply the same principle to the LinkedHashMap? You have to reload all the elements anyway! So, each time you double main, put all of main's elements into mins as well, and sort them in mins. Then for all other cases proceed as above (the bulleted steps).
A hashtable gives you insertion and deletion in O(1) (a stack does not, because you can't have holes in a stack). But you can't have getMin in O(1) too, because ordering your elements can't be done faster than O(n*log(n)) (it is a theorem), which means O(log(n)) per element.
You can keep a pointer to the min to get getMin in O(1). This pointer can be updated easily on insertion, but not on deletion of the min. Still, depending on how often you use deletion, it can be a good idea.
You can use a trie. A trie has O(L) complexity for insertion, deletion, and getMin alike, where L is the length of the string (or whatever) you're looking up. It has constant complexity with respect to n (the number of elements).
It requires a huge amount of memory, though. As they emphasized "no space constraints", they were probably thinking of a trie. :D
Strictly speaking, your problem as stated is provably impossible; however, consider the following:
Given a type T, place an enumeration on all possible elements of that type such that value i is less than value j iff T(i) < T(j) (i.e., number all possible values of type T in order).
Create an array of that size.
Make the elements of the array:
struct PT
{
    T t;
    PT* next_higher;
    PT* prev_lower;
};
Insert and delete elements in the array, maintaining the doubly linked list (in order of index, and hence in sorted order).
This will give you constant getMin and delete.
For insertion you need to find the next element in the array in constant time, so I would use a type of radix search.
If the size of the array is 2^x then maintain x "skip" arrays where element j of array i points to the nearest element of the main array to index (j << i).
This will then always require a fixed x number of lookups to update and search so this will give constant time insertion.
This uses exponential space, but this is allowed by the requirements of the question.
In your problem statement: "insertion and deletion in O(1) time we have stack..."
So I am assuming deletion = pop().
in that case, use another stack to track min
algo:
Stack 1 -- normal stack; Stack 2 -- min stack
Insertion
push to stack 1.
if stack 2 is empty or the new item <= stack2.peek(), push it onto stack 2 as well (<= rather than < keeps duplicate minimums correct)
objective: at any point in time, stack2.peek() should give you the min in O(1)
Deletion
pop() from stack 1.
if popped element equals stack2.peek(), pop() from stack 2 as well
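A short C++ sketch of this two-stack scheme (assuming pop and getMin are never called on an empty stack):

#include <stack>

class MinStack {
    std::stack<int> s1;   // stack 1: normal stack
    std::stack<int> s2;   // stack 2: its top is always the current min

public:
    void push(int v) {
        s1.push(v);
        if (s2.empty() || v <= s2.top())   // <= so duplicate mins are tracked
            s2.push(v);
    }
    int pop() {
        int v = s1.top();
        s1.pop();
        if (v == s2.top()) s2.pop();
        return v;
    }
    int getMin() const { return s2.top(); }   // O(1) at any point in time
};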
