I am trying to find out which will be more efficient in terms of search speed: a trie or a B-Tree. I have a dictionary of English words and I want to locate a word in that dictionary efficiently.
If by "more efficient in time of search" you refer to theoretical time complexity, then B Tree offers O(logn * |S|)1 time complexity for search, while a trie offers O(|S|) time complexity, where |S| is the length of the searched string, and n is the number of elements in dictionary.
If by "more efficient in time of search" you refer to actual real life run time, that depends on the actual implementation, actual data and actual search behavior. Some examples that might influence the answer:
Size of data
Storage system (for example: RAM/Flash/disk/distributed filesystem/...)
Distribution of searches
Code optimizations of each implementation
(and much more)
(1) There are O(log n) comparisons, and each comparison takes O(|S|) time, since in the worst case you need to traverse the entire string to decide which of two strings is greater.
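For intuition, here is a minimal sketch of an O(|S|) trie lookup over lowercase English words (the node layout and names are illustrative, not a production implementation):

#include <string>

struct TrieNode {
    TrieNode* child[26] = {nullptr};  // one slot per lowercase letter
    bool is_word = false;             // true if a word ends at this node
};

// Returns true if 'word' is in the trie; visits at most |word| nodes.
bool contains(const TrieNode* root, const std::string& word) {
    const TrieNode* node = root;
    for (char c : word) {
        node = node->child[c - 'a'];
        if (!node) return false;      // no node for this prefix, word is absent
    }
    return node->is_word;
}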
It depends on what you need. If you want to retrieve a whole subtree, a B+ Tree is your best choice because it is space efficient. Also note that the branching factor b of the B+ Tree affects its performance (the number of intermediary nodes): if h is the height of the tree, then n_max ≈ b^h, and therefore h ≈ log(n_max) / log(b).
With n = 1 000 000 000 and b = 100, we have h ≈ 5. That means only 5 pointer dereferences to get from the root to a leaf. It's more cache-friendly than a trie.
But if you want to get the first N children of a subtree, then a trie is the best choice because you simply visit fewer nodes than in the B+ Tree scenario.
Word prefix completion is also handled well by a trie.
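As an illustration of that last point, here is a rough sketch of collecting the first N completions of a prefix from a trie, depth-first and in lexicographic order (the node layout is an assumption for illustration, not taken from any particular library):

#include <string>
#include <vector>

struct Node {
    Node* child[26] = {nullptr};
    bool is_word = false;
};

// Depth-first walk below 'node', appending complete words to 'out'
// until 'limit' results have been collected.
void collect(const Node* node, std::string& prefix,
             std::vector<std::string>& out, size_t limit) {
    if (!node || out.size() >= limit) return;
    if (node->is_word) out.push_back(prefix);
    for (int i = 0; i < 26 && out.size() < limit; ++i) {
        if (node->child[i]) {
            prefix.push_back(static_cast<char>('a' + i));
            collect(node->child[i], prefix, out, limit);
            prefix.pop_back();
        }
    }
}

// Walk down to the node for 'prefix', then collect up to n words below it.
std::vector<std::string> complete(const Node* root, std::string prefix, size_t n) {
    const Node* node = root;
    for (char c : prefix) {
        node = node->child[c - 'a'];
        if (!node) return {};         // no word starts with this prefix
    }
    std::vector<std::string> out;
    collect(node, prefix, out, n);
    return out;
}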
Related
I need to design an interval search algorithm that works on 64-bit keys. A match is when key k is between k1 and k2. An important requirement is that the lookup speed is better than O(log n). Researching the available literature didn't turn up anything better than interval search trees. I wonder if it's feasible at all.
If your keys have a distribution close to uniform, you can use interpolation search, which takes O(log log n) expected time - much better than O(log n).
UPD: Just an idea:
If you have enough extra memory, you can build a trie-like structure with O(1) search time. The idea: build a tree of arrays[256], where each array is indexed by one byte of the key and the arrays are linked into a trie. So the root element of the trie is an array[256] whose index is the high byte of the key. Anyway, this is not really practical, because in the bottom nodes you would still need a linear search of ~64 iterations to find the interval borders.
You can dispatch by leading bytes until the problem is small. That avoids most of the overhead of an interval tree, while maintaining the flexibility of one.
So you have a table of 256 structs that point to 256 structs, on down as far as needed, until you either encounter a flag saying "no match" or you are pointed to a small interval tree for the exact matching condition. Processing the top of this structure with straightforward indexed jumps, rather than multiple comparisons, possible pipeline stalls, etc., may be a significant performance improvement for you.
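A rough sketch of that dispatch idea, assuming 64-bit keys and a hypothetical small interval structure at the leaves (all names here are made up for illustration):

#include <cstdint>

struct IntervalLeaf;                        // hypothetical small interval tree / sorted list

struct DispatchNode {
    // Per byte value, either descend further or jump straight to a small leaf;
    // both null means "no match".
    DispatchNode* child[256] = {nullptr};
    IntervalLeaf* leaf[256]  = {nullptr};
};

// Walk the dispatch table from the most significant byte downwards.
// Returns the leaf that may contain 'key', or nullptr for "no match".
IntervalLeaf* dispatch(const DispatchNode* root, uint64_t key) {
    const DispatchNode* node = root;
    for (int shift = 56; shift >= 0 && node; shift -= 8) {
        unsigned byte = (key >> shift) & 0xFF;
        if (node->leaf[byte]) return node->leaf[byte];  // problem is small: finish in the leaf
        node = node->child[byte];
    }
    return nullptr;
}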
I am storing 10^9 keys in a BST.
How does that compare to having, let's say, multiple BSTs of size 10^6, each containing a chunk of the bigger tree, and searching through all of them in parallel?
I am talking only about search performance here, given that processing power is not a bottleneck.
It depends entirely on your key schema.
For example, let's say your keys are surnames, equally distributed across the twenty-six English letters. If you're looking for Pax Diablo, you can immediately remove 25/26ths of your search space, looking only in the D tree (for Diablo).
With a balanced binary tree, you would have to traverse about 4.7 tree levels on average (log₂ 26 is about 4.700439718).
So, yes, it can be more efficient, provided the up-front operation has minimal complexity. In the given example, the selection of one of the twenty-six trees is O(1): based on the first character of the name, an array lookup finds the tree.
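A minimal sketch of that idea using the standard library's balanced trees (std::set is typically a red-black tree); the names and the uppercase-surname assumption are illustrative:

#include <array>
#include <set>
#include <string>

// One balanced tree per leading letter; the O(1) dispatch is an array lookup.
std::array<std::set<std::string>, 26> trees;

void insert_name(const std::string& name) {
    trees[name[0] - 'A'].insert(name);            // assumes uppercase A..Z surnames
}

bool contains_name(const std::string& name) {
    return trees[name[0] - 'A'].count(name) > 0;  // searches only 1/26th of the data
}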
In the case where the keys are actually numbers from zero to a billion as your comments indicate, you could still have the same efficiency, depending on data distribution. If they're equally distributed (or even close), you could maintain a thousand different trees (from your statement that you want trees of size one million) based on the first three digits of the number and reduce the initial search by a factor of 1000 (about ten tree levels).
Of course, the distribution is important. If all your numbers turn out to be less than a million, they'll all be in the first tree and this scheme will save you nothing (in fact it'll add a useless first step).
Consider using a hash table. Lookup for such a big key set should be noticeably faster. A hash map has constant amortized search complexity, as opposed to the logarithmic complexity of a BST.
Also, as you are talking about a huge tree here, maybe you should take a look at B+ trees.
I doubt the approach you are trying to take will be more efficient than the suggestions above. The depth of a binary tree grows very slowly (assuming it is balanced). On the other hand, with your approach, synchronization when you produce the output will be cumbersome.
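For comparison, a tiny sketch of the hash-table suggestion with the standard containers (amortized O(1) average lookup, versus O(log n) for a tree-based std::set); the key type is an assumption based on the comments:

#include <cstdint>
#include <unordered_set>

std::unordered_set<uint64_t> keys;   // hash table: O(1) average lookup

bool has_key(uint64_t k) {
    return keys.find(k) != keys.end();
}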
How do a trie and a B+ tree compare for indexing lexicographically sorted strings [on the order of some billions]?
It should support range queries as well.
From a performance as well as an implementation-complexity point of view.
I would say it depends on what you mean by Range.
If your range is expressed as "all words beginning with ...", then a trie is the right choice, I'd say. On the other hand, tries are not meant for requests like "all words between XX and ZZ".
Note that the branching factor b of the B+ Tree affects its performance (the number of intermediary nodes): if h is the height of the tree, then n_max ≈ b^h, and therefore h ≈ log(n_max) / log(b).
With n = 1 000 000 000 and b = 100, we have h ≈ 5. That means only 5 pointer dereferences to get from the root to a leaf. It's more cache-friendly than a trie.
Finally, a B+ Tree is admittedly more difficult to implement than a trie: it's more at the Red-Black Tree level of complexity.
Depends on your actual task:
If you want to get the whole subtree, a B+Tree is your best choice because it is space efficient.
But if you want to get the first N children of a subtree, then a trie is the best choice because you simply visit fewer nodes than in the B+ Tree scenario.
The most popular task that is handled well by a trie is word prefix completion.
Wikipedia has some algorithmic complexity facts:
B+ tree (section Characteristics), Trie (unfortunately spread all over the article). Hope that helps.
I have a set of double-precision data and I need the list to always be sorted. What is the best algorithm to sort the data as it is being added?
By best I mean least big-O in data count, least big-O in data count for the worst case, and least big-O in the space needed, in that order if possible.
The set size is really variable, from a small number (30) to lots of data (+10M).
Building a self-balancing binary tree like a red-black tree or AVL tree will allow for Θ(lg n) insertion and removal, and Θ(n) retrieval of all elements in sorted order (by doing a depth-first traversal), with Θ(n) memory usage. The implementation is somewhat complex, but they're efficient, and most languages will have library implementations, so they're a good first choice in most cases.
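For example, a minimal sketch using the standard library's balanced tree (std::multiset is typically implemented as a red-black tree), which keeps the doubles sorted as they arrive:

#include <set>
#include <vector>

std::multiset<double> data;          // stays sorted; duplicates allowed

void add(double x) { data.insert(x); }                  // Θ(lg n) per insertion

std::vector<double> sorted_snapshot() {                  // Θ(n), already in order
    return std::vector<double>(data.begin(), data.end());
}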
Additionally, retrieving the i-th element can be done by annotating each edge (or, equivalently, each node) in the tree with the total number of nodes below it. Then one can find the i-th element in Θ(lg n) time and Θ(1) space with something like:
node *find_index(node *root, int i) {
    // left_count = number of nodes in the left subtree of the current node
    while (root) {
        if (i == root->left_count)
            return root;                   // found the i-th element
        else if (i < root->left_count)
            root = root->left;             // it is somewhere in the left subtree
        else {
            i -= root->left_count + 1;     // skip the left subtree and this node
            root = root->right;
        }
    }
    return NULL; // i > number of nodes
}
An implementation that supports this can be found in Debian's libavl; unfortunately, the maintainer's site seems to be down, but it can be retrieved from Debian's servers.
The structure that is used for the indexes of database programs is the B+ Tree. It is a balanced, bucketed n-ary tree.
From Wikipedia:
For a b-order B+ tree with h levels of index:
The maximum number of records stored is n = b^h
The minimum number of keys is 2(b/2)^(h−1)
The space required to store the tree is O(n)
Inserting a record requires O(log_b n) operations in the worst case
Finding a record requires O(log_b n) operations in the worst case
Removing a (previously located) record requires O(log_b n) operations in the worst case
Performing a range query with k elements occurring within the range requires O(log_b n + k) operations in the worst case.
I use this in my program. You can add your data to the structure as it comes and you can always traverse it in order, front to back or back to front, or search quickly for any value. If you don't find the value, you will have the insertion point where you can add the value.
You can optimize the structure for your program by playing around with b, the size of the buckets.
An interesting presentation about B+ trees: Tree-Structured Indexes
You can get the entire code in C++.
Edit: Now I see your comment that your requirement to know the "i-th sorted element in the set" is an important one. All of a sudden, that makes many data structures less than optimal.
You are probably best off with a SortedList or even better, a SortedDictionary. See the article: Squeezing more performance from SortedList. Both structures have a GetKey function that will return the i-th element.
Likely a heap sort. Heaps are only O(log N) to add new data, and you can pop off the results in sorted order at any time in O(N log N) total time.
If you always need the whole list sorted every time, then there aren't many options other than an insertion sort. It will likely be O(N^2), though with the HUGE hassle of linked skip lists you can make it O(N log N).
I would use a heap/priority queue. Worst case is same as average case for runtime. Next element can be found in O(log n) time.
Here is a templatized C# implementation that I derived from this code.
If you just need to know the i-th smallest element, as it says in the comments, use the BFPRT (median-of-medians) algorithm, which is named after the last names of its authors: Blum, Floyd, Pratt, Rivest, and Tarjan, and is generally agreed to be the biggest concentration of big computer science brains in the same paper. It runs in O(n) worst-case time.
OK, you want your data sorted, but you need to extract it via an index number.
Start with a basic tree, such as the aforementioned red-black tree.
Modify the tree algorithm so that, as you insert or delete elements, every node encountered during the insertion or deletion keeps a count of the number of elements under each branch.
Then when you are extracting data from the tree, you can calculate the index as you go and know which branch to take based on whether it is greater or less than the index you are trying to extract.
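A rough sketch of the counting part, using a plain (unbalanced) BST insert for brevity; in a real red-black tree the rebalancing rotations would also have to adjust the counts, and that part is omitted here:

struct CNode {
    double value;
    int    subtree_size = 1;     // nodes in this subtree, including this node
    CNode* left  = nullptr;
    CNode* right = nullptr;
    explicit CNode(double v) : value(v) {}
};

// Insert 'v', bumping the size of every node on the path down.
CNode* insert(CNode* node, double v) {
    if (!node) return new CNode(v);
    ++node->subtree_size;
    if (v < node->value) node->left  = insert(node->left,  v);
    else                 node->right = insert(node->right, v);
    return node;
}

// i-th smallest (0-based): compare i against the left subtree's size.
CNode* find_index(CNode* node, int i) {
    while (node) {
        int left_size = node->left ? node->left->subtree_size : 0;
        if (i == left_size) return node;
        if (i <  left_size) node = node->left;
        else { i -= left_size + 1; node = node->right; }
    }
    return nullptr;              // i out of range
}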
One other consideration: 10M+ elements in a tree that uses dynamic memory allocation will suck up a lot of memory overhead, i.e. the pointers may take up more space than your actual data, plus whatever other members are used to implement the data structure. This will lead to serious memory fragmentation and, in the worst cases, degrade the system's overall performance (churning data back and forth from virtual memory). You might want to consider implementing a combination of block and dynamic memory allocation, something where you sort the tree into blocks of data, thus reducing the memory overhead.
Check out the comparison of sorting algorithms in Wikipedia.
Randomized Jumplists are interesting as well.
They require less space than BSTs and skip lists.
Insertion and deletion are O(log n).
By a "set of double data," do you mean a set of real-valued numbers? One of the more commonly used algorithms for that is a heap sort, I'd check that out. Most of its operations are O( n * log(n) ), which is pretty good but doesn't meet all of your criteria. The advantages of heapsort is that it's reasonably simple to code on your own, and many languages provide libraries to manage a sorted heap.
The Scenario
I have several number ranges. The ranges are not overlapping - and since they are not overlapping, the logical consequence is that no number can be part of more than one range at any time. Each range is continuous (there are no holes within a single range, so a range 8 to 16 really contains all numbers between 8 and 16), but there can be holes between two ranges (e.g. one range goes from 64 to 128, the next range starts at 256 and goes to 384), so some numbers may not belong to any range at all (numbers 129 to 255 would not belong to any range in this example).
The Problem
I'm getting a number and need to know which range the number belongs to... if it belongs to any range at all. Otherwise I need to know that it does not belong to any range. Of course speed is important; I cannot simply check all the ranges, which would be O(n), as there might be thousands of ranges.
Simple Solutions
A simple solution is keeping all numbers in a sorted array and running a binary search on it. That gives me at least O(log n). Of course, the binary search must be somewhat modified, as it must always check against the smallest and biggest number of a range: if the number to look for is in between, we have found the correct range; otherwise we must search the ranges below or above the current one. If there is only one range left in the end and the number is not within that range, the number is within no range at all and we can return a "not found" result.
Ranges could also be chained together in some kind of tree structure. This is basically like a sorted list with binary search. The advantage is that it will be faster to modify a tree than a sorted array (adding/removing a range), but unless we spend some extra time on keeping the tree balanced, the tree might get very unbalanced over time, and that will lead to much slower searches than a binary search on a sorted array.
One can argue which solution is better or worse, as in practice the numbers of searches and modification operations will be almost balanced (there will be roughly an equal number of searches and add/remove operations performed per second).
Question
Is there maybe a better data structure than a sorted list or a tree for this kind of problem? Maybe one that could be even better than O(log n) in best case and O(log n) in worst case?
Some additional information that might help: all ranges always start and end at a multiple of a power of two, and they all start and end at multiples of the same power of two (e.g. they all start/end at a multiple of 4, or a multiple of 8, or a multiple of 16, and so on). This power of two cannot change at run time: before the first range is added, the power of two must be set, and all ranges ever added must start/end at a multiple of this value until the application terminates. I think this can be used for optimization; if they all start at a multiple of e.g. 8, I can ignore the lowest 3 bits in all comparison operations, as the other bits alone will tell me the range, if any.
I read about segment and range trees. Are these optimal solutions to the problem? Are there possibly better solutions? The problem sounds similar to what a malloc implementation must do (e.g. every freed memory block belongs to a range of available memory and the malloc implementation must find out to which one), so how do those commonly solve the issue?
After running various benchmarks, I came to the conclusion that only a tree-like structure can work here. A sorted list of course shows good lookup performance - O(log n) - but its update performance is horrible (inserts and removals are slower by more than a factor of 10 compared to trees!).
A balanced binary tree also has O(log n) lookup performance, but it is much faster to update, also around O(log n), while a sorted list is more like O(n) for updates (O(log n) to find the position for an insert or the element to delete, but then up to n elements must be moved within the list, and that is O(n)).
I implemented an AVL tree, a red-black tree, a treap, an AA tree and various variations of B-Trees (B means Bayer tree here, not binary). Result: Bayer trees almost never win. Their lookup is good, but their update performance is bad (within each node of a B-Tree you have a sorted list again!). Bayer trees are only superior in cases where reading/writing a node is a very slow operation (e.g. when the nodes are read from or written to hard disk directly) - since a B-Tree must read/write far fewer nodes than any other tree, it will win in such a case. If the tree is in memory, though, it stands no chance against the other trees; sorry for all the B-Tree fans out there.
A treap was the easiest to implement (less than half the lines of code you need for other balanced trees, only twice the code you need for an unbalanced tree) and shows good average performance for lookups and updates... but we can do better than that.
An AA tree shows amazingly good lookup performance - I have no idea why. It sometimes beats all other trees (not by far, but still enough not to be a coincidence)... and the removal performance is okay. However, unless I'm too stupid to implement them correctly, the insert performance is really bad (it performs many more tree rotations on every insert than any other tree - even B-Trees have faster inserts).
This leaves us with the two classics, the AVL tree and the RB tree. They are both pretty similar, but after hours of benchmarking one thing is clear: AVL trees definitely have better lookup performance than RB trees. The difference is not gigantic, but in two thirds of all benchmarks they win the lookup test. Not too surprising: after all, AVL trees are more strictly balanced than RB trees, so they are closer to the optimal binary tree in most cases. We are not talking about a huge difference here; it is always a close race.
On the other hand, RB trees beat AVL trees for inserts in almost all test runs, and that is not such a close race. As before, that is expected: being less strictly balanced, RB trees perform far fewer tree rotations on inserts than AVL trees.
How about removal of nodes? Here it seems to depend a lot on the number of nodes. For small node counts (everything below half a million), RB trees again own AVL trees; the difference is even bigger than for inserts. Rather unexpectedly, once the node count grows beyond a million, AVL trees seem to catch up, and the difference to RB trees shrinks until they are more or less equally fast. This could be an effect of the system, though; it could have to do with memory usage of the process, CPU caching or the like - something that has a more negative effect on RB trees than on AVL trees, so that AVL trees can catch up. The same effect is not observed for lookups (AVL usually faster, regardless of how many nodes) or inserts (RB usually faster, regardless of how many nodes).
Conclusion:
I think the fastest I can get is by using RB trees, since the number of lookups will only be somewhat higher than the number of inserts and deletions, and no matter how fast AVL is on lookups, the overall performance will suffer from its worse insert/deletion performance.
That is, unless anyone here may come up with a much better data structure that will own RB Trees big time ;-)
Create a sorted list and sort by the lower margin / start. That's easiest to implement and fast enough unless you have millions of ranges (and maybe even then).
When looking for a range, find the range with the greatest start such that start <= position. You can use a binary search here, since the list is sorted. The number is in that range if position <= end.
Since the end of any range is guaranteed to be smaller than the start of the next range, you don't need to care about the end until you have found a range in which the position might be contained.
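A small sketch of that lookup with a std::map keyed by range start (the names and 64-bit key type are illustrative); upper_bound finds the first range starting after the position, so the only candidate is the range just before it:

#include <cstdint>
#include <map>

// start -> end (inclusive); the ranges are non-overlapping.
std::map<uint64_t, uint64_t> ranges;

// Returns true and fills start/end if 'pos' lies in some range.
bool find_range(uint64_t pos, uint64_t& start, uint64_t& end) {
    auto it = ranges.upper_bound(pos);      // first range with start > pos
    if (it == ranges.begin()) return false; // every range starts after pos
    --it;                                   // candidate: greatest start <= pos
    if (pos > it->second) return false;     // pos lies past that range's end
    start = it->first;
    end   = it->second;
    return true;
}

Adding and removing a range is then ranges[start] = end and ranges.erase(start), both O(log n).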
All other data structures become interesting when you get intersections, or when you have a whole lot of ranges and you build the structure once and query often.
A balanced, sorted tree with ranges on each node seems to be the answer.
I can't prove it's optimal, but if I were you I wouldn't look any further.
If the total range of numbers is low, and you have enough memory, you could create a huge table with all the numbers.
For example, if you have one million numbers, you can create a table that references the range object for each of them.
As an alternative to O(log n) balanced binary search trees (BST), you could consider building a bitwise (compressed) trie. I.e. a prefix tree on the bits of the numbers you're storing.
This gives you O(w) search, insert and delete performance, where w = the number of bits (e.g. 32 or 64, minus whatever power of 2 your ranges were based on).
Not saying that it'll perform better or worse, but it seems like a true alternative in the sense that it is different from a BST while still having good theoretical performance, and it allows for predecessor queries just like a BST.
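For a feel of the structure, an uncompressed bit-by-bit sketch (a real implementation would compress one-child chains, which is what gives the space savings; all names here are illustrative):

#include <cstdint>

struct BitNode {
    BitNode* child[2] = {nullptr, nullptr};  // 0-bit / 1-bit branch
    bool     is_key   = false;               // a stored key ends here
};

constexpr int W = 64;  // key width in bits; could be reduced by the known
                       // power-of-two alignment of the range boundaries

void insert(BitNode* root, uint64_t key) {
    BitNode* node = root;
    for (int i = W - 1; i >= 0; --i) {       // most significant bit first
        int b = (key >> i) & 1;
        if (!node->child[b]) node->child[b] = new BitNode();
        node = node->child[b];
    }
    node->is_key = true;
}

bool contains(const BitNode* root, uint64_t key) {   // O(w) regardless of n
    const BitNode* node = root;
    for (int i = W - 1; i >= 0 && node; --i)
        node = node->child[(key >> i) & 1];
    return node && node->is_key;
}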