Data structure that supports random access by index and key, insertion, and deletion in logarithmic time with order maintained

I'm looking for a data structure that stores an ordered list of E = (K, V) elements and supports the following operations in at most O(log(N)) time, where N is the number of elements. Memory usage is not a problem.
E get(index) // get element by index
int find(K) // find the index of the element whose K matches
delete(index) // delete element at index, the following elements have their indexes decreased by 1
insert(index, E) // insert element at index, the following elements have their indexes increased by 1
I have considered the following incorrect solutions:
Use an array: find, delete, and insert will still cost O(N)
Use an array + map of K to index: delete and insert will still cost O(N) for shifting elements and updating the map
Use a linked list + map of K to element address: get and find will still cost O(N)
In my imagination, the last solution is the closest, but instead of a linked list, a self-balancing tree where each node stores the size of its left subtree would make it possible to do get in O(log(N)).
However, I'm not sure if this is correct, so I want to ask whether my idea works and whether there is a name for this kind of data structure, so I can look for an off-the-shelf solution.

The closest data structure I could think of is the treap.
An implicit treap is a simple modification of the regular treap, which is a very powerful data structure. In fact, an implicit treap can be considered as an array with the following procedures implemented (all in O(log N), online):
Inserting an element in the array in any location
Removal of an arbitrary element
Finding sum, minimum / maximum element etc. on an arbitrary interval
Addition, painting on an arbitrary interval
Reversing elements on an arbitrary interval
Using the modification with implicit keys allows you to do all operations except the second one (find the index of the element whose K matches). I'll edit this answer if I come up with a better idea :)
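That said, find(K) can be recovered along the lines of the OP's own suggestion: pair the implicit treap with a hash map from key to node and climb parent pointers to compute the index. Below is a minimal Python sketch of that combination; the names (OrderedKeyedList, split, merge, pull) are mine, and it assumes distinct keys and valid indexes.

import random

class Node:
    def __init__(self, key, val):
        self.key, self.val = key, val
        self.prio = random.random()       # random heap priority balances the tree in expectation
        self.size = 1                     # number of nodes in this subtree
        self.left = self.right = self.parent = None

def size(t):
    return t.size if t else 0

def pull(t):
    # Recompute the subtree size and repair the children's parent pointers.
    t.size = 1 + size(t.left) + size(t.right)
    if t.left:
        t.left.parent = t
    if t.right:
        t.right.parent = t

def merge(a, b):
    # Concatenate treaps a and b (all of a comes before all of b).
    if not a or not b:
        return a or b
    if a.prio > b.prio:
        a.right = merge(a.right, b)
        pull(a)
        return a
    b.left = merge(a, b.left)
    pull(b)
    return b

def split(t, k):
    # Split t into (first k elements, the rest).
    if not t:
        return None, None
    if size(t.left) < k:
        l, r = split(t.right, k - size(t.left) - 1)
        t.right = l
        pull(t)
        if r:
            r.parent = None
        return t, r
    l, r = split(t.left, k)
    t.left = r
    pull(t)
    if l:
        l.parent = None
    return l, t

class OrderedKeyedList:
    def __init__(self):
        self.root = None
        self.node_of = {}                 # K -> node, so find(K) can start at the node

    def insert(self, index, key, val):
        self.node_of[key] = node = Node(key, val)
        l, r = split(self.root, index)
        self.root = merge(merge(l, node), r)
        self.root.parent = None

    def delete(self, index):
        l, rest = split(self.root, index)
        mid, r = split(rest, 1)           # mid is the single node at `index`
        del self.node_of[mid.key]
        self.root = merge(l, r)
        if self.root:
            self.root.parent = None

    def get(self, index):                 # expected O(log N): descend by left-subtree sizes
        t = self.root
        while index != size(t.left):
            if index < size(t.left):
                t = t.left
            else:
                index -= size(t.left) + 1
                t = t.right
        return (t.key, t.val)

    def find(self, key):                  # expected O(log N): climb from the node to the root
        node = self.node_of[key]
        idx = size(node.left)
        while node.parent:
            if node is node.parent.right:
                idx += size(node.parent.left) + 1
            node = node.parent
        return idx

For example: L = OrderedKeyedList(); L.insert(0, 'a', 1); L.insert(1, 'b', 2); L.insert(1, 'c', 3) produces the order a, c, b, so L.find('b') returns 2 and L.get(1) returns ('c', 3).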

Related

Processing "update elements" & "get min value among all element" queries efficiently

You are given an array a = [a[0], a[1], ..., a[n-1]]. Process Q queries of the following two types:
Given two integers i and x, update a[i] to x
Find the minimum value among all elements in the array
I already know the segment tree algorithm (range minimum query), whose time complexity is O(n log n). But a segment tree can also compute the minimum over any interval, so I think there should be a simpler, better-performing way to process just these two types of queries. Is there any other way to solve this?
Use an array and a min-heap, with references from the array into the heap.
The array holds the elements by index (it's basically the actual array you have) and the heap is ordered by value so that the minimum is always on top. You add a reference (a pointer) from each array element to its corresponding node in the heap so you can find it there easily.
To perform the first query, access the array at index i and set the element's value to x (after index validation and all that). Then update the node in the heap that a[i] points to and sift it up or down to restore the heap property. This costs O(log n).
To perform the second query, just read the minimum from the top of the heap. O(1).
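A minimal Python sketch of this scheme (the class and method names are mine); instead of raw pointers it keeps a pos table that maps each array index to its node's position in the heap:

class IndexedMinHeap:
    def __init__(self, a):
        self.a = list(a)
        self.heap = list(range(len(a)))   # heap of array indices, ordered by a-value
        self.pos = list(range(len(a)))    # pos[i] = position of index i inside self.heap
        for k in range(len(a) // 2 - 1, -1, -1):
            self._sift_down(k)

    def _less(self, k, j):                # compare heap slots k and j by stored value
        return self.a[self.heap[k]] < self.a[self.heap[j]]

    def _swap(self, k, j):
        self.heap[k], self.heap[j] = self.heap[j], self.heap[k]
        self.pos[self.heap[k]] = k
        self.pos[self.heap[j]] = j

    def _sift_up(self, k):
        while k > 0 and self._less(k, (k - 1) // 2):
            self._swap(k, (k - 1) // 2)
            k = (k - 1) // 2

    def _sift_down(self, k):
        n = len(self.heap)
        while True:
            c = 2 * k + 1                 # left child
            if c >= n:
                return
            if c + 1 < n and self._less(c + 1, c):
                c += 1                    # pick the smaller child
            if not self._less(c, k):
                return
            self._swap(k, c)
            k = c

    def update(self, i, x):               # query type 1: set a[i] = x, O(log n)
        self.a[i] = x
        self._sift_up(self.pos[i])
        self._sift_down(self.pos[i])

    def minimum(self):                    # query type 2: global minimum, O(1)
        return self.a[self.heap[0]]

For example, IndexedMinHeap([5, 2, 8, 1]) reports minimum() == 1; after update(3, 9) the array is [5, 2, 8, 9] and minimum() == 2.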

Find the sum of nodes in a binary search tree whose values lie in a certain range by augmenting the BST

I want to augment a binary search tree so that search, insertion, and deletion are still supported in O(h) time, and then I want to implement an algorithm to find the sum of all node values in a given range.
You can add an additional data structure to your BST class, specifically a HashMap or Hashtable. The keys will be the different numbers your BST contains, and the values the number of occurrences of each. The BST's search(...) will not be impacted; however, insert(...) and delete(...) will need slight code changes.
Insert
When adding a node to the BST, check whether its value exists in the HashMap as a key. If it does, increment the occurrence count by 1. If it doesn't, add it to the HashMap with an initial value of 1.
Delete
When deleting, decrement the occurrence count in the HashMap (assuming you aren't being told to delete a node that doesn't exist).
Sum
Now for the sum function
sum(int start, int end)
you can iterate over the range and check your HashMap to see which numbers in it exist as keys and how many times each occurs. Build the sum by adding up every value in the map that falls in the range, multiplied by its number of occurrences.
Complexities
Space: O(n)
Time of sum method: O(range size).
All other method time complexity isn't impacted.
You didn't mention a space constraint, so hopefully this is OK. I am very interested to see if you can somehow use the properties of a BST to solve this more efficiently; nothing comes to mind for me.
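A small Python sketch of the bookkeeping this answer describes (the names OccurrenceIndex, on_insert, on_delete, and range_sum are mine, and integer node values are assumed, as the O(range size) complexity implies); the BST itself is unchanged apart from calling the two hooks:

from collections import Counter

class OccurrenceIndex:
    # value -> occurrence count; lives alongside the BST
    def __init__(self):
        self.counts = Counter()

    def on_insert(self, value):           # call from the BST's insert(...)
        self.counts[value] += 1

    def on_delete(self, value):           # call from the BST's delete(...)
        self.counts[value] -= 1
        if self.counts[value] == 0:
            del self.counts[value]

    def range_sum(self, start, end):
        # O(range size): every value in [start, end] times its occurrence count.
        return sum(v * self.counts[v] for v in range(start, end + 1) if v in self.counts)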

Ordered list with O(1) random access and removal

Does there exist a data structure with the following properties:
Elements are stored in some order
Accessing the element at a given index takes O(1) time (possibly amortized)
Removing an element takes amortized O(1) time, and changes the indices appropriately (so if element 0 is removed, the next access to element 0 should return the old element 1)
For context, I reduced an algorithm question from a programming competition to:
Over m queries, return the kth smallest positive number that hasn't been returned yet. You can assume the returned number is less than some constant n.
If the data structure above exists, then you can do this in O(m) time, by creating a list of numbers 1 to n. Then, for each query, find the element at index k and remove it. During the contest itself, my solution ended up being O(m^2) on certain inputs.
I'm pretty sure you can do this in O(m log m) with binary search trees, but I'm wondering if the ideal O(m) is reachable. Stuff I've found online tends to be close, but not quite there - the tricky part is that the elements you remove can be from anywhere in the list.
Well, O(1) removal is possible with a linked list.
Each element has pointers to the next and previous elements, so removal just deletes the element and rewires its neighbours' pointers:
element[ix-1].next = element[ix+1]
element[ix+1].prev = element[ix-1]
Accessing ordered elements at an index in O(1) can be done with an index array.
So you have an unordered array dat[] and an index array idx[]; accessing element ix is just:
dat[idx[ix]]
Now the problem is having both properties at once:
you can try a linked list with an index array, but removal needs to update the index table, which is O(N) in the worst case
if you have just the index array, then removal is also O(N)
if you keep the index in some form of tree structure, then removal can be close to O(log(N)), but access will then also be about O(log(N))
I believe there is a structure that does both of these in O(r) time, where r is the number of elements removed so far, not the total size. So if the number you're removing is small compared to the size of the array, it's close to O(1).
Basically, all the data is stored in an array. There is also a priority queue for deleted elements. Initialise like so:
Data = [0, 1, 2, ..., m]
removed = new list
Then, to remove an element, you add its original index (see below for how to get this) to the priority queue (which is sorted by element size, smallest at the front), and leave the array as is. So removing the 3rd element:
Data = [0, 1, 2, 3,..., m]
removed = 2
Then what's now the 4th and was the 5th:
Data = [0, 1, 2, 3,..., m]
removed = 2 -> 4
Then what's now the 3rd and was the 4th:
Data = [0, 1, 2, 3,..., m]
removed = 2 -> 3 -> 4
Now, to access an element, you start with its index. You then iterate along the removed list, increasing the index by one each time, until you reach a removed entry which is larger than the increased value of the index. This gives you the original index (i.e. the position in Data) of the element you're looking for, and is the index you need for removal.
This operation of iterating along the queue effectively increases the index by the number of elements before it that were removed.
Sorry if I haven't explained very well, it was clear in my head but hard to write down.
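A short Python sketch of the scheme just described may be clearer than the prose (the names are mine; a sorted list maintained with bisect stands in for the priority queue):

import bisect

class LazyRemovalList:
    def __init__(self, data):
        self.data = list(data)
        self.removed = []                 # sorted original indices of removed items

    def _original_index(self, index):
        # Bump the index past every removed slot that precedes it; O(r),
        # where r is the number of removals so far.
        for r in self.removed:
            if r <= index:
                index += 1
            else:
                break
        return index

    def access(self, index):
        return self.data[self._original_index(index)]

    def remove(self, index):
        bisect.insort(self.removed, self._original_index(index))

Replaying the example above: remove(2), remove(3), remove(2) on Data = [0, 1, 2, 3, ..., m] leaves removed = [2, 3, 4], matching the walkthrough.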
Comments:
Access is O(r), with r the number of removed items
Removal takes roughly twice as long as access, but is still O(r)
A disadvantage is that memory use doesn't shrink with removal.
Could potentially 're-initialise' when the removed list gets large, to reset memory use and access/removal times. This operation takes O(N), with N the total array size.
So it's not quite what OP was looking for but in the right situation could be close.

Data Structure for fast position lookup

Looking for a data structure that logically represents a sequence of elements keyed by unique ids (for simplicity, consider them strings, or at least hashable objects). Each element can appear only once, there are no gaps, and the first position is 0.
The following operations should be supported (demonstrated with single-letter strings):
insert(id, position) - add the element keyed by id into the sequence at offset position. Naturally, the position of each element later in the sequence is now incremented by one. Example: [S E L F].insert(H, 1) -> [S H E L F]
remove(position) - remove the element at offset position. Decrements the position of each element later in the sequence by one. Example: [S H E L F].remove(2) -> [S H L F]
lookup(id) - find the position of element keyed by id. [S H L F].lookup(H) -> 1
The naïve implementation would be either a linked list or an array. Both would give O(n) lookup, remove, and insert.
In practice, lookup is likely to be used the most, with insert and remove happening frequently enough that it would be nice not to be linear (which a simple combination of hashmap + array/list would get you).
In a perfect world it would be O(1) lookup, O(log n) insert/remove, but I actually suspect that wouldn't work from a purely information-theoretic perspective (though I haven't tried it), so O(log n) lookup would still be nice.
A combination of a trie and a hash map allows O(log n) lookup/insert/remove.
Each trie node contains an id, a counter of the valid elements rooted at it, and up to two child pointers. A bit string, determined by the left (0) or right (1) turns taken while traversing the trie from its root to a given node, is the value stored in the hash map for the corresponding id.
The remove operation marks the trie node as invalid and updates the counters of valid elements on the path from the deleted node to the root. It also deletes the corresponding hash map entry.
The insert operation uses the position parameter and the counters of valid elements in each trie node to find the new node's predecessor and successor. If the in-order traversal from predecessor to successor contains any deleted nodes, choose the one with the lowest rank and reuse it. Otherwise choose either the predecessor or the successor and add a new child node to it (a right child for the predecessor, a left child for the successor). Then update the counters of valid elements on the path from this node to the root and add the corresponding hash map entry.
The lookup operation gets the bit string from the hash map and uses it to walk from the trie root to the corresponding node, summing the counters of valid elements to the left of the path.
All this allows O(log n) expected time for each operation if the sequence of inserts/removes is random enough. If not, the worst-case complexity of each operation is O(n). To get back to O(log n) amortized complexity, watch the sparsity and balance factors of the tree: if there are too many deleted nodes, re-create a new, perfectly balanced and dense tree; if the tree is too imbalanced, rebuild the most imbalanced subtree.
Instead of a hash map, it is possible to use a binary search tree or any other dictionary data structure. Instead of the bit string used to identify a path in the trie, the hash map may store a pointer to the corresponding trie node.
Another alternative to the trie in this data structure is an indexable skip list.
O(log N) time for each operation is acceptable, but not perfect. As Kevin explains, it is possible to use an algorithm with O(1) lookup complexity in exchange for a larger complexity of the other operations: O(sqrt(N)). But this can be improved.
If you choose some number of memory accesses (M) for each lookup operation, the other operations may be done in O(M*N^(1/M)) time. The idea of such an algorithm is presented in this answer to a related question. The trie structure described there allows easily converting a position to an array index and back. Each non-empty element of this array contains an id, and each element of the hash map maps this id back to the array index.
To make it possible to insert an element into this data structure, each block of contiguous array elements should be interleaved with some empty space. When one of the blocks exhausts all of its empty space, we should rebuild the smallest group of blocks, related to some element of the trie, that has more than 50% empty space. When the total amount of empty space is less than 50% or more than 75%, we should rebuild the whole structure.
This rebalancing scheme gives O(M*N^(1/M)) amortized complexity only for random and evenly distributed insertions/removals. The worst-case complexity (for example, if we always insert at the leftmost position) is much larger for M > 2. To guarantee O(M*N^(1/M)) worst case, we need to reserve more memory and change the rebalancing scheme so that it maintains an invariant like this: keep the empty space reserved for the whole structure at least 50%, keep the empty space reserved for all data related to the top trie nodes at least 75%, for the next level of trie nodes 87.5%, etc.
With M=2, we have O(1) time for lookup and O(sqrt(N)) time for other operations.
With M=log(N), we have O(log(N)) time for every operation.
But in practice small values of M (like 2 .. 5) are preferable. This may be treated as O(1) lookup time and allows this structure (while performing typical insert/remove operation) to work with up to 5 relatively small contiguous blocks of memory in a cache-friendly way with good vectorization possibilities. Also this limits memory requirements if we require good worst case complexity.
You can achieve everything in O(sqrt(n)) time, but I'll warn you that it's going to take some work.
Start by having a look at a blog post I wrote on ThriftyList. ThriftyList is my implementation of the data structure described in Resizable Arrays in Optimal Time and Space along with some customizations to maintain O(sqrt(n)) circular sublists, each of size O(sqrt(n)). With circular sublists, one can achieve O(sqrt(n)) time insertion/removal by the standard insert/remove-then-shift in the containing sublist followed by a series of push/pop operations across the circular sublists themselves.
Now, to get the index at which a query value falls, you'll need to maintain a map from value to sublist/absolute-index. That is to say, a given value maps to the sublist containing the value, plus the absolute index at which the value falls (the index at which the item would fall were the list non-circular). From these data, you can compute the relative index of the value by taking the offset from the head of the circular sublist and summing with the number of elements which fall behind the containing sublist. To maintain this map requires O(sqrt(n)) operations per insert/delete.
Sounds roughly like Clojure's persistent vectors - they provide O(log_32 n) cost for lookup and update. For smallish values of n, O(log_32 n) is as good as constant...
Basically they are array mapped tries.
I'm not quite sure of the time complexity for remove and insert, but I'm pretty sure you could get a variant of this data structure with O(log n) removes and inserts as well.
See this presentation/video: http://www.infoq.com/presentations/Value-Identity-State-Rich-Hickey
Source code (Java): https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/PersistentVector.java

What is the fastest way of updating an ordered array of numbers?

I need to calculate a 1D histogram that must be dynamically maintained and looked up frequently. One idea I had involves keeping an ordered array with the data (because that way I can determine percentiles in O(1), and this suffices for quickly building a histogram with non-uniform bins that each contain exactly the same number of points).
So, is there a way that is less than O(N) to insert a number into an ordered array while keeping it ordered?
I guess the answer is very well known, but I don't know a lot about algorithms (physicists doing numerical calculations rarely do).
In the general case, you could use a more flexible tree-like data structure. This would allow access, insertion, and deletion in O(log n) time, and such structures are relatively easy to get ready-made from a library (e.g., C++'s STL map).
(Or a hash map...)
An ordered array with binary search does the same things as a tree, but is more rigid. It will probably be faster for access and lighter on memory, but you pay when having to insert or delete things in the middle (O(n) cost).
Note, however, that an ordered array might be enough for you: if your data points are often repeated, you can maintain a list of {key, count} pairs ordered by key, which lets you quickly add another instance of an existing item (though adding a brand-new item still costs more).
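For the "ready-made from a library" route in Python specifically, the third-party sortedcontainers package (an assumption here, it is not in the standard library) keeps a sorted sequence with roughly O(log n) insertion and indexed access, which covers the percentile lookup from the question:

from sortedcontainers import SortedList  # pip install sortedcontainers

data = SortedList()
for x in [0.3, 1.2, 0.7, 2.5, 0.9]:
    data.add(x)                          # stays sorted; roughly O(log n) per add

# Percentile by index, as the question wants: the median here.
median = data[len(data) // 2]
print(median)                            # 0.9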
You could use binary search to find the insertion point. This is O(log(n)).
If you'd like to insert a number x, take the number in the middle of your array and compare it to x; if x is smaller, continue with the number in the middle of the first half, otherwise with the middle of the second half, and so on.
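In Python, for instance, the standard bisect module performs exactly this search; note that the search is O(log n) but the subsequent shift in the array is still O(n):

import bisect

a = [1, 3, 4, 7, 9]
i = bisect.bisect_left(a, 5)   # binary search for the insertion point: O(log n)
a.insert(i, 5)                 # the shift itself is still O(n)
print(a)                       # [1, 3, 4, 5, 7, 9]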
You can perform insertions in O(1) time if you rearrange your array as a bunch of linked-lists hanging off of each element:
keys = Array([0][1][2][3][4]......)
              a  c  b  e  f  .  .
              d  g  i  .  .  .
              h  j  .
             |__|__|__|__|__|__|__/ linked lists
There's also the strategy of keeping two data structures at the same time, if your update workload supports it without increasing the time complexity of common operations.
So, is there a way that is less than O(N) to insert a number into an ordered array while keeping it ordered?
Yes, you can implement a binary search tree inside an array and do the insertion in O(log n) time. How?
Keep index 0 empty; index 1 is the root. If a node is the left child of its parent, its index = 2 * parent's index; if it is the right child, its index = 2 * parent's index + 1.
Insertion will thus be O(log n). Unfortunately, you might notice that a binary search tree built from an ordered list degenerates into a linear search if you don't balance it, i.e. O(n), which is pointless. Here you would have to implement a red-black tree to keep the height balanced. That is quite complicated, BUT insertion can then be done with arrays in O(log n). Note that the array elements will no longer be ints; they'll have to be objects with a colour attribute.
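A minimal Python sketch of this array layout, without the red-black balancing (the helper bst_insert and its growth policy are mine, so it is the degeneration-prone variant described above):

def bst_insert(tree, x):
    # tree[0] is unused and tree[1] is the root; children of node i live
    # at 2*i and 2*i + 1. O(depth) per insert, O(n) worst case when unbalanced.
    i = 1
    while i < len(tree) and tree[i] is not None:
        i = 2 * i if x < tree[i] else 2 * i + 1
    while len(tree) <= i:          # grow the backing array as needed
        tree.extend([None] * len(tree))
    tree[i] = x

tree = [None, None]
for x in [5, 3, 7, 4]:
    bst_insert(tree, x)
print(tree)                        # [None, 5, 3, 7, None, 4, None, None]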
I wouldn't recommend it.
Any particular reason this demands an array? You need a data structure which keeps data ordered and allows you to insert quickly. Why not a binary search tree? Or, better still, a red-black tree. In C++, you could use std::set from the Standard Template Library, which is implemented as a red-black tree. It gives you O(log(n)) insertion time and the ability to iterate over it like an array.
