Interval search algorithm with O(1) lookup speed

I need to design an interval search algorithm that works on 64-bit keys. The match is when key k is between k1 and k2. An important requirement is that the lookup speed is better than O(log n). Researching available literature didn't turn up anything better than interval search trees. I wonder if it's feasible at all.

If your keys have a distribution close to uniform, you can use interpolation search, which runs in O(log log N) expected time - much better than O(log n).
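A minimal sketch of interpolation search over a sorted array of 64-bit keys (the function name and array layout are illustrative, not from the question):

#include <cstdint>
#include <cstddef>

// Returns the index of key in a[0..n-1] (sorted ascending), or -1 if absent.
// Expected O(log log n) probes when the keys are close to uniformly distributed.
long interpolation_search(const uint64_t *a, size_t n, uint64_t key) {
    if (n == 0) return -1;
    size_t lo = 0, hi = n - 1;
    while (lo <= hi && key >= a[lo] && key <= a[hi]) {
        if (a[hi] == a[lo])                      // avoid dividing by zero
            return a[lo] == key ? (long)lo : -1;
        // Probe where the key "should" sit if the values were spread evenly.
        size_t mid = lo + (size_t)((double)(key - a[lo]) /
                                   (double)(a[hi] - a[lo]) * (double)(hi - lo));
        if (a[mid] == key)  return (long)mid;
        if (a[mid] < key)   lo = mid + 1;
        else                hi = mid - 1;        // mid > lo here, so no wrap-around
    }
    return -1;
}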
UPD: Just an idea:
If you have enough extra memory, you can build a trie-like structure with O(1) search time. The idea is the following: for example, build a tree of arrays of 256 entries, where each array is indexed by one byte of the key and the arrays are linked into a trie. The root element of the trie is an array[256] indexed by the high byte of the key. But anyway this is not practical, because in the bottom node you would still need a linear search of ~64 iterations to find the range borders.

You can dispatch by leading bytes until the problem is small. That avoids most of the overhead of an interval tree, while maintaining the flexibility of one.
So you have a table of 256 structs, each of which can point to another table of 256 structs, and so on down as far as needed, until you either hit a flag saying "no match" or are pointed to a small interval tree that handles the exact matching condition. Processing the top levels with straightforward indexed jumps, rather than a chain of comparisons with possible pipeline stalls, may be a significant performance improvement for you.
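A rough sketch of that dispatch structure (the struct layout, names, and the fallback tree type are assumptions made for illustration):

#include <cstdint>

struct IntervalTree;   // small fallback tree for the final, exact interval check
struct Table;          // a 256-entry dispatch table (defined below)

// Each slot either marks "no match", points to the next 256-entry table,
// or points to a small interval tree that resolves the remaining key bits.
struct Slot {
    enum Kind { EMPTY, NEXT_TABLE, LEAF } kind;
    union {
        Table        *child;
        IntervalTree *leaf;
    };
};

struct Table {
    Slot slots[256];
};

// Walk one byte per level, high byte first.
const IntervalTree *dispatch(const Table *t, uint64_t key) {
    for (int shift = 56; shift >= 0; shift -= 8) {
        const Slot &s = t->slots[(key >> shift) & 0xFF];
        if (s.kind == Slot::EMPTY) return nullptr;   // the "no match" flag
        if (s.kind == Slot::LEAF)  return s.leaf;    // finish in the small tree
        t = s.child;                                 // descend to the next table
    }
    return nullptr;   // ran out of bytes without reaching a leaf
}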

Related

Data structure which maps non-overlapping ranges to values?

I need a data structure which maps non-overlapping ranges (eg. 8..15, 16..19) to a pointer to a struct.
I will need to be able to look up any element in that range and retrieve that pointer. For example:
structure[8..15] = 0x12345678;
structure[16..19] = 0xdeadbeef;
structure[7]; // => NULL
structure[12]; // => 0x12345678
structure[18]; // => 0xdeadbeef
I'm considering using a binary search tree at the moment. Since the ranges will never overlap, I can search for indexes relatively easily in logarithmic time.
However, I'm wondering if there are any data structures more suitable for this case. I need to be able to efficiently insert, delete and lookup. All of these operations are O(log n) in a BST, but I'm wondering if there's anything that's faster for this.
If you want something faster than O(log n), use a van Emde Boas tree.
It should be used the same way you would use a binary search tree: the start of each range is the key, and the end of the range is stored as part of the value (together with the pointer) mapped to that key. Time complexity is O(log log M), where M is the size of the key space (INT_MAX, if any integer value is possible as the start of a range).
In some cases a van Emde Boas tree has a large memory overhead. If that is not acceptable, use either a simple trie, as explained by Beni, or a y-fast trie.
I don't think you can do much better.
Non-overlapping ranges are equivalent to a sequence of alternating start/end points. So lookup is just "find the largest element ≤ x" followed by an O(1) check of whether it's a start or an end. I.e. an ordered map.
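For illustration, a sketch of that lookup with an ordered map keyed by range start (std::map here only as a stand-in for whichever balanced tree or trie you pick; Value and the field names are invented):

#include <cstdint>
#include <map>

struct Value;                                 // whatever the stored pointer points to

struct Range { uint64_t end; Value *ptr; };   // keyed by the range start in the map

// ranges[start] = {end, ptr}; the ranges must be non-overlapping.
Value *lookup(const std::map<uint64_t, Range> &ranges, uint64_t x) {
    auto it = ranges.upper_bound(x);          // first range starting strictly after x
    if (it == ranges.begin()) return nullptr; // nothing starts at or before x
    --it;                                     // now it->first is the largest start <= x
    return (x <= it->second.end) ? it->second.ptr : nullptr;
}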
The usual suspects for that - binary trees, B trees, various tries - are all essentially O(log n). Which is best in practice is a matter of tuning, depends on knowing something about the ranges. Are they sparse or dense? Are they of similar size or vary widely? How large is the data (fits in cache / ram / disk)? Do you insert/delete a lot or are lookup dominant? Is access random or with high locality?
One tradeoff applicable to many schemes is splitting ranges, replicating the same pointer in several places. This may speed up lookups at the expense of insert/delete and memory usage. An extreme application is just a flat array indexed by point, where lookup is O(1) but insertion is O(size of range); this begs for a multi-level structure: an array of k uniform subranges, each pointing to a value if it is entirely covered by one range, or to a sub-array if not. Hey, I just described a trie! Lookup is log(maxint)/log(k), very fast if k is a power of 2 (e.g. 256); insertion and memory are k*log(n).
But remember that wasting memory hurts cache performance, so any such "optimization" may actually be counter-productive, even for lookups.

What Big-O equation describes my search?

I have a sorted array of doubles (latitudes actually) that are relatively uniformly spread out over the range -10 to -43. Now, if I do a binary search over that list I get O(log N).
But I can further optimise my search by having a lookup table with 34 keys (-10 to -43) that let me jump straight to the starting point of that number.
E.g. for -23.123424, I first look up key 23 and get the start-end range of all -23 values. I can then binary search from the middle of that.
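A sketch of that two-step lookup (here the buckets are keyed by floor(lat), which differs slightly at the integer boundaries from the question's truncated "key 23"; all names are invented):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// lats:   sorted ascending, all values in [-43.0, -10.0]
// starts: 35 entries; starts[b] = index of the first latitude with
//         floor(lat) >= b - 43, and starts[34] = lats.size()
long find_lat(const std::vector<double> &lats,
              const std::vector<std::size_t> &starts, double lat) {
    int bucket = (int)std::floor(lat) + 43;            // 0..33
    auto first = lats.begin() + starts[bucket];
    auto last  = lats.begin() + starts[bucket + 1];
    auto it = std::lower_bound(first, last, lat);      // binary search the small slice
    return (it != last && *it == lat) ? it - lats.begin() : -1;
}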
What would my Big-O look like?
It's still O(log n). Consider: it takes constant time to look up the starting indices in your integer lookup table, so that part doesn't add anything. Then it's O(log n) to do the binary search. Actually it will take roughly log(n/34), because you expect to search through an array 34 times smaller on average (the values are distributed over 34 intervals with boundaries from -43 to -10), but that only saves a constant number of steps, which disappears in big-O notation.
It would still be O(log N), but for a reduced dataset (think smaller value for N).
Since the lookup table only gains you about a factor of 34, which is close to 32, i.e. about 5 steps of the binary search, you might want to benchmark whether it really helps: the additional code path, with its extra cache misses and the occasional wrong branch prediction / pipeline flush, might make this slower than the direct binary search.
Additionally, if lookup time for an in-memory table is the bottleneck, you might want to consider representing your lats as Int32 values - definitely precise enough, but much faster to search through.
It sounds like your optimization would help, but I think it's still considered O(log N), because you still have to search for the exact value. If it took you directly to the value it would be O(1).
This is a limitation of big-O analysis: it doesn't take into account that you reduced the number of values you have to search.
Your concept is close to that of interpolation search, except instead of only "interpolating" once on the integral part of the key, it recursively uses interpolation to intelligently drive a binary search. Since your domain is relatively uniform, the expected runtime is O(log log n).

Best continuously sorting algorithm?

I have a set of double-precision values and I need the list to always be sorted. What is the best algorithm to sort the data as it is being added?
By best I mean least big-O in the data count on average, least cost in the data count in the worst case, and least space needed, in that order if possible.
The set size is really variable, from small (around 30 elements) to lots of data (10M+).
Building a self-balancing binary tree like a red-black tree or AVL tree will allow for Θ(lg n) insertion and removal, and Θ(n) retrieval of all elements in sorted order (by doing a depth-first traversal), with Θ(n) memory usage. The implementation is somewhat complex, but they're efficient, and most languages will have library implementations, so they're a good first choice in most cases.
Additionally, retrieving the i-th element can be done by annotating each edge (or, equivalently, node) in the tree with the total number of nodes below it. Then one can find the i-th element in Θ(lg n) time and Θ(1) space with something like:
typedef struct node {
    struct node *left, *right;   // children
    int left_count;              // number of nodes in the left subtree
    // ... plus whatever key/value fields the tree stores ...
} node;

// Return the node holding the i-th smallest element (0-based), or NULL.
node *find_index(node *root, int i) {
    while (root) {
        if (i == root->left_count)
            return root;
        else if (i < root->left_count)
            root = root->left;
        else {
            i -= root->left_count + 1;
            root = root->right;
        }
    }
    return NULL; // i >= number of nodes
}
An implementation that supports this can be found in debian's libavl; unfortunately, the maintainer's site seems down, but it can be retrieved from debian's servers.
The structure that is used for indexes of database programs is a B+ Tree. It is a balanced bucketed n-ary tree.
From Wikipedia:
For a b-order B+ tree with h levels of index:
The maximum number of records stored is n = b^h
The minimum number of keys is 2(b/2)^(h−1)
The space required to store the tree is O(n)
Inserting a record requires O(log_b n) operations in the worst case
Finding a record requires O(log_b n) operations in the worst case
Removing a (previously located) record requires O(log_b n) operations in the worst case
Performing a range query with k elements occurring within the range requires O(log_b n + k) operations in the worst case.
I use this in my program. You can add your data to the structure as it comes and you can always traverse it in order, front to back or back to front, or search quickly for any value. If you don't find the value, you will have the insertion point where you can add the value.
You can optimize the structure for your program by playing around with b, the size of the buckets.
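To get a feel for the effect of b (numbers purely illustrative): with b = 128 and n = 10,000,000 records, the height is about log_128(10^7) ≈ 3.3, so a lookup touches only 3-4 nodes, while b = 4 gives log_4(10^7) ≈ 11.6, i.e. around 12 node visits.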
An interesting presentation about B+ trees: Tree-Structured Indexes
You can get the entire code in C++.
Edit: Now I see your comment that your requirement to know the "i-th sorted element in the set" is an important one. All of a sudden, that makes many data structures less than optimal.
You are probably best off with a SortedList or even better, a SortedDictionary. See the article: Squeezing more performance from SortedList. Both structures have a GetKey function that will return the i-th element.
Likely a heap sort. Heaps are only O(log N) to add new data, and you can pop off the results at any time in O(N log N) time.
If you always need the whole list sorted every time, then there aren't many options other than an insertion sort. That will likely be O(N^2), though with the (considerable) hassle of linked skip lists you can make it O(N log N).
I would use a heap/priority queue. Worst case is same as average case for runtime. Next element can be found in O(log n) time.
Here is a templatized C# implementation that I derived from this code.
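A minimal illustration of the heap approach with the standard library (C++ here for consistency with the rest of this page; note that a heap only hands the data back in sorted order as you pop it, it does not maintain a sorted list in place):

#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

int main() {
    // Min-heap of doubles: O(log n) push, O(log n) pop of the current minimum.
    std::priority_queue<double, std::vector<double>, std::greater<double>> pq;
    for (double x : {3.5, 1.25, 2.75, 0.5})
        pq.push(x);                        // data arrives unsorted
    while (!pq.empty()) {                  // popping everything yields sorted order
        std::printf("%g\n", pq.top());
        pq.pop();
    }
}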
If you just need to know the ith smallest element as it says in the comments, use the BFPRT algorithm which is named after the last names of the authors: Blum, Floyd, Pratt, Rivest, and Tarjan and is generally agreed to be the biggest concentration of big computer science brains in the same paper. O(n) worst-case.
Ok, you want your data sorted, but you need to extract it via an index number.
Start with a basic tree such as the aforementioned red-black trees.
Modify the tree algorithm so that, as you insert or delete elements, every node encountered along the way keeps a count of the number of elements under each branch.
Then when you are extracting data from the tree you can calculate the index as you go, and know which branch to take based on whether it is greater or less than the index you are trying to extract.
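A sketch of the insert-side bookkeeping, using a plain unbalanced BST for brevity (the node layout mirrors the find_index snippet earlier, with a key field added; in a real red-black tree the rebalancing rotations would also have to adjust the counts):

struct onode {
    double key;
    onode *left, *right;
    int left_count;              // number of nodes in the left subtree
};

// Plain BST insert that keeps left_count up to date.
onode *insert(onode *root, double key) {
    if (!root) return new onode{key, nullptr, nullptr, 0};
    if (key < root->key) {
        root->left_count++;      // the new node will end up in our left subtree
        root->left = insert(root->left, key);
    } else {
        root->right = insert(root->right, key);
    }
    return root;
}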
One other consideration. 10M+ elements in a tree that uses dynamic memory allocation will suck up a lot of memory overhead. I.e. the pointers may take up more space than your actual data, plus whatever other members are used to implement the data structure. This can lead to serious memory fragmentation and, in the worst cases, degrade the system's overall performance (churning data back and forth from virtual memory). You might want to consider implementing a combination of block and dynamic memory allocation - something where you store the tree in blocks of data, thus reducing the memory overhead.
Check out the comparison of sorting algorithms in Wikipedia.
Randomized jumplists are interesting as well.
They require less space than BSTs and skip lists.
Insertion and deletion are O(log n).
By a "set of double data," do you mean a set of real-valued numbers? One of the more commonly used algorithms for that is a heap sort, I'd check that out. Most of its operations are O( n * log(n) ), which is pretty good but doesn't meet all of your criteria. The advantages of heapsort is that it's reasonably simple to code on your own, and many languages provide libraries to manage a sorted heap.

Fast Algorithm to Quickly Find the Range a Number Belongs to in a Set of Ranges?

The Scenario
I have several number ranges. Those ranges are not overlapping - as they are not overlapping, the logical consequence is that no number can be part of more than one range at any time. Each range is continuous (there are no holes within a single range, so a range 8 to 16 will really contain all numbers between 8 and 16), but there can be holes between two ranges (e.g. one range starts at 64 and goes to 128, the next range starts at 256 and goes to 384), so some numbers may not belong to any range at all (numbers 129 to 255 would not belong to any range in this example).
The Problem
I'm getting a number and need to know which range the number belongs to... if it belongs to any range at all. Otherwise I need to know that it does not belong to any range. Of course speed is important; I cannot simply check all the ranges, which would be O(n), as there might be thousands of ranges.
Simple Solutions
A simple solution was keeping all numbers in a sorted array and run a binary search on it. That would give me at least O(log n). Of course the binary search must be somewhat modified as it must always check against the smallest and biggest number of a range. If the number to look for is in between, we have found the correct range, otherwise we must search ranges below or above the current one. If there is only one range left in the end and the number is not within that range, the number is within no range at all and we can return a "not found" result.
Ranges could also be chained together in some kind of tree structure. This is basically like a sorted list with binary search. The advantage is that it is faster to modify a tree than a sorted array (adding/removing a range), but unless we spend some extra time on keeping the tree balanced, the tree might get very unbalanced over time, and that would lead to much slower searches than a binary search on a sorted array.
One can argue which solution is better or worse as in practice the number of searches and modification operations will be almost balanced (there will be an equal number of searches and add/remove operations performed per second).
Question
Is there maybe a better data structure than a sorted list or a tree for this kind of problem? Maybe one that could be even better than O(log n) in best case and O(log n) in worst case?
Some additional information that might help here is the following: all ranges always start and end at a multiple of a power of two. They all start and end at the same power of two (e.g. they all start/end at a multiple of 4, or at a multiple of 8, or at a multiple of 16, and so on). The power of two cannot change during run time. Before the first range is added, the power of two must be set, and all ranges ever added must start/end at a multiple of this value until the application terminates. I think this can be used for optimization: if they all start at a multiple of e.g. 8, I can ignore the lowest 3 bits in all comparison operations; the remaining bits alone will tell me the range, if any.
I read about segment and range trees. Are these optimal solutions to the problem? Are there possibly better solutions? The problem sounds similar to what a malloc implementation must do (e.g. every freed memory block belongs to a range of available memory and the malloc implementation must find out which one), so how do those commonly solve the issue?
After running various benchmarks, I came to the conclusion that only a tree-like structure can work here. A sorted list of course shows good lookup performance - O(log n) - but horrible update performance (inserts and removals are slower by more than a factor of 10 compared to trees!).
A balanced binary tree also has O(log n) lookup performance, but it is much faster to update, also around O(log n), while a sorted list is more like O(n) for updates (O(log n) to find the position for an insert or the element to delete, but then up to n elements must be moved within the list, and that is O(n)).
I implemented an AVL tree, a red-black tree, a treap, an AA-tree and various variations of B-trees (B means Bayer tree here, not binary). Result: Bayer trees almost never win. Their lookup is good, but their update performance is bad (since within each node of a B-tree you have a sorted list again!). Bayer trees are only superior in cases where reading/writing a node is a very slow operation (e.g. when nodes are read from or written directly to hard disk) - as a B-tree must read/write far fewer nodes than any other tree, in such a case it will win. If the tree is held in memory, though, it stands no chance against other trees; sorry for all the B-tree fans out there.
A Treap was easiest to implement (less than half the lines of code you need for other balanced trees, only twice the code you need for an unbalanced tree) and shows good average performance for lookups and updates... but we can do better than that.
An AA-tree shows amazingly good lookup performance - I have no idea why. They sometimes beat all other trees (not by far, but still enough not to be coincidence)... and the removal performance is okay, however, unless I'm too stupid to implement them correctly, the insert performance is really bad (it performs many more tree rotations on every insert than any other tree - even B-trees have faster insert performance).
This leaves us with two classics, AVL and RB-Tree. They are both pretty similar but after hours of benchmarking, one thing is clear: AVL Trees definitely have better lookup performance than RB-Trees. The difference is not gigantic, but in 2/3 out of all benchmarks they will win the lookup test. Not too surprising, after all AVL Trees are more strictly balanced than RB-Trees, so they are closer to the optimal binary tree in most cases. We are not talking about a huge difference here, it is always a close race.
On the other hand, RB trees beat AVL trees for inserts in almost all test runs, and that is not such a close race. As before, that is expected: being less strictly balanced, RB trees perform far fewer tree rotations on inserts compared to AVL trees.
How about removal of nodes? Here it seems to depend a lot on the number of nodes. For small node counts (everything less than half a million) RB trees again own AVL trees; the difference is even bigger than for inserts. Rather unexpectedly, once the node count grows beyond a million nodes, AVL trees seem to catch up and the difference to RB trees shrinks until they are more or less equally fast. This could be an effect of the system, though. It could have to do with memory usage of the process or CPU caching or the like - something that has a more negative effect on RB trees than on AVL trees, so AVL trees can catch up. The same effect is not observed for lookups (AVL usually faster, regardless of how many nodes) and inserts (RB usually faster, regardless of how many nodes).
Conclusion:
I think the fastest I can get is when using RB-Trees, since the number of lookups will only be somewhat higher than the number of inserts and deletions and no matter how fast AVL is on lookups, the overall performance will suffer from their worse insert/deletion performance.
That is, unless anyone here may come up with a much better data structure that will own RB Trees big time ;-)
Create a sorted list and sort by the lower margin / start. That's easiest to implement and fast enough unless you have millions of ranges (and maybe even then).
When looking for a range, find the last range where start <= position. You can use a binary search here since the list is sorted. The number is in the range if position <= end (see the sketch below).
Since the end of any range is guaranteed to be smaller than start of the next range, you don't need to care about the end until you have found a range where the position might be contained.
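A sketch of that lookup over a vector of ranges sorted by start (the Range struct and inclusive bounds are assumptions for the example):

#include <algorithm>
#include <cstdint>
#include <vector>

struct Range { uint64_t start, end; };      // inclusive bounds, non-overlapping

// ranges must be sorted by start.  Returns the matching range or nullptr.
const Range *find_range(const std::vector<Range> &ranges, uint64_t pos) {
    // First range whose start is > pos; the candidate is the one just before it.
    auto it = std::upper_bound(ranges.begin(), ranges.end(), pos,
        [](uint64_t p, const Range &r) { return p < r.start; });
    if (it == ranges.begin()) return nullptr;
    --it;
    return (pos <= it->end) ? &*it : nullptr;
}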
All other data structures become interesting when you get intersections, or when you have a whole lot of ranges and you build the structure once and query often.
A balanced, sorted tree with ranges on each node seems to be the answer.
I can't prove it's optimal, but if I were you I wouldn't look any further.
If the total range of numbers is low, and you have enough memory, you could create a huge table with all the numbers.
For example, if you have one million numbers, you can create a table with one entry per number that references the corresponding range object.
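As a sketch (assuming the numbers fit comfortably in memory as array indices):

#include <cstddef>
#include <vector>

struct Range;                                   // the range object being referenced

// table[i] points to the range containing i, or is null if i falls in a hole.
std::vector<Range *> table;                     // sized to the total span of numbers

Range *range_of(std::size_t i) {                // O(1) per query
    return (i < table.size()) ? table[i] : nullptr;
}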
As an alternative to O(log n) balanced binary search trees (BST), you could consider building a bitwise (compressed) trie. I.e. a prefix tree on the bits of the numbers you're storing.
This gives you O(w) search, insert and delete performance, where w = number of bits (e.g. 32 or 64, minus whatever power of 2 your ranges were based on).
Not saying that it'll perform better or worse, but it seems like a true alternative in the sense that it is different from a BST but still has good theoretical performance and allows for predecessor queries just like a BST.

Best self-balancing BST for quick insertion of a large number of nodes

I've been able to find details on several self-balancing BSTs through several sources, but I haven't found any good descriptions detailing which one is best to use in different situations (or if it really doesn't matter).
I want a BST that is optimal for storing in excess of ten million nodes. The order of insertion of the nodes is basically random, and I will never need to delete nodes, so insertion time is the only thing that would need to be optimized.
I intend to use it to store previously visited game states in a puzzle game, so that I can quickly check if a previous configuration has already been encountered.
Red-black is better than AVL for insertion-heavy applications. If you foresee relatively uniform look-up, then Red-black is the way to go. If you foresee a relatively unbalanced look-up where more recently viewed elements are more likely to be viewed again, you want to use splay trees.
Why use a BST at all? From your description a dictionary will work just as well, if not better.
The only reason for using a BST would be if you wanted to list out the contents of the container in key order. It certainly doesn't sound like you want to do that, in which case go for the hash table. O(1) insertion and search, no worries about deletion, what could be better?
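For example, assuming a game state can be encoded as (or hashed down to) a 64-bit value - how you encode the state is up to you:

#include <cstdint>
#include <unordered_set>

std::unordered_set<uint64_t> seen;              // previously visited states

// Returns true if the state was already visited; otherwise records it and returns false.
bool visited_before(uint64_t state_hash) {
    return !seen.insert(state_hash).second;     // insert() reports whether the element was new
}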
The two self-balancing BSTs I'm most familiar with are red-black and AVL, so I can't say for certain if any other solutions are better, but as I recall, red-black has faster insertion and slower retrieval compared to AVL.
So if insertion is a higher priority than retrieval, red-black may be a better solution.
[hash tables have] O(1) insertion and search
I think this is wrong.
First of all, if you limit the keyspace to be finite, you could store the elements in an array and do an O(1) linear scan. Or you could shufflesort the array and then do a linear scan in O(1) expected time. When stuff is finite, stuff is easily O(1).
So let's say your hash table will store any arbitrary bit string; it doesn't much matter, as long as there's an infinite set of keys, each of which are finite. Then you have to read all the bits of any query and insertion input, else I insert y0 in an empty hash and query on y1, where y0 and y1 differ at a single bit position which you don't look at.
But let's say the key lengths are not a parameter. If your insertion and search take O(1), in particular hashing takes O(1) time, which means that you only look at a finite amount of output from the hash function (from which there's likely to be only a finite output, granted).
This means that with finitely many buckets, there must be an infinite set of strings which all have the same hash value. Suppose I insert a lot, i.e. ω(1), of those, and start querying. This means that your hash table has to fall back on some other O(1) insertion/search mechanism to answer my queries. Which one, and why not just use that directly?
