Nested balanced trees inside B-tree nodes? - data-structures

A commonly cited benefit of B-trees is that the degree of branching can be high, which is useful in limiting the number of disk accesses required to reach a node.
However, suppose we have a (k, 2k) B-tree and naively implement the nodes. Search is actually going to be in
log(n) * k / log(k)
One might instead opt to represent the values inside the nodes as nested, balanced trees, so that insertion and deletion of keys within a node stay in log(k) and search remains in log(n) even for very large k.
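To spell out where those bounds come from (a rough derivation, assuming the naive implementation scans each node's keys linearly): a (k, 2k) B-tree has height about $\log_k n$, so

$$\underbrace{\log_k n}_{\text{height}} \cdot \underbrace{O(k)}_{\text{linear scan per node}} = O\!\left(\frac{\log n}{\log k}\, k\right), \qquad \underbrace{\log_k n}_{\text{height}} \cdot \underbrace{O(\log k)}_{\text{balanced tree per node}} = O(\log n).$$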
Are there papers suggesting this approach or implementations following it, or is the branching factor k generally too low to make it worth the hassle?

This is an interesting idea, but typically you don’t see this done. There are a couple of reasons why.
For starters, when you’re doing an insertion or deletion on a B-tree, the main bottleneck is the number of I/O transfers performed rather than the amount of CPU work. In that sense, the cost of shifting over a bunch of array elements is likely not going to be all that large a fraction of the cost of the B-tree operation.
Next, there’s the question of the frequency of these operations. If you use a B+-tree, in which the keys are stored purely in the leaves and each internal node just stores routing information, the overwhelming majority of insertions and deletions won’t actually make changes to the internal nodes of the tree and will just touch the leaves (figure that only roughly one of every 2k insertions needs to split a node, and only one in (2k)² operations needs to split two nodes, etc.). That means that there aren’t that many array edits in the first place, so speeding them up wouldn’t necessarily be worthwhile.
Another major issue you’d run into is how you’d allocate the memory for those tree nodes. B-trees and the like are optimized for storage on disk. This means that a typical B-tree operation would work by paging in a block of raw bytes from disk into some buffer, then treating those bytes as your node. That in turn means that if you were to store a BST this way, the nodes in that BST couldn’t store conventional pointers because the memory addresses of the nodes, once loaded into memory, wouldn’t be consistent from run to run. That precludes using things like malloc or new, and you’d need to implement your own memory manager to divvy up the bytes of the disk page to your nodes. That’s going to introduce a lot of overhead and it’ll be tricky to get this to be time- and space-efficient enough to warrant switching to BSTs over a raw array.
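To make the offset idea concrete, here is a minimal sketch (my own illustration, not a known implementation): BST nodes live inside a fixed-size page buffer and link to each other with 16-bit byte offsets instead of raw pointers, so the page can be written to disk and mapped back in at a different address. The page size, node layout, and bump allocator are assumptions, and alignment and free-space handling are omitted.

```cpp
#include <cstddef>
#include <cstdint>

constexpr std::size_t   kPageSize = 4096;    // assumed disk page size
constexpr std::uint16_t kNull     = 0xFFFF;  // sentinel meaning "no child"

struct InPageNode {
    std::int64_t  key;
    std::uint16_t left;   // byte offset of the left child within the page, or kNull
    std::uint16_t right;  // byte offset of the right child within the page, or kNull
};

struct Page {
    std::uint16_t root = kNull;  // offset of the root node, or kNull
    std::uint16_t used = 0;      // bump allocator: next free byte in `bytes`
    unsigned char bytes[kPageSize - 2 * sizeof(std::uint16_t)];

    InPageNode* at(std::uint16_t off) {
        // Alignment handling omitted for brevity.
        return reinterpret_cast<InPageNode*>(bytes + off);
    }

    // Search within the page; no absolute pointers are ever stored,
    // so the buffer remains valid wherever it is loaded.
    InPageNode* find(std::int64_t key) {
        std::uint16_t cur = root;
        while (cur != kNull) {
            InPageNode* n = at(cur);
            if (key == n->key) return n;
            cur = (key < n->key) ? n->left : n->right;
        }
        return nullptr;
    }
};
```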
There’s also the memory overhead of using BSTs compared to raw arrays. BSTs require two extra pointers / offsets per item compared with raw arrays, which decreases the number of keys you can cram into a node. Since the main speed of a B-tree derives from fitting as many keys as possible into a node, this might eat into the advantage of B-trees in the first place.
Hope this helps!

Related

Are there real-world reasons to employ a Binary Search Tree over a Binary Search of semi-contiguous list?

I'm watching university lectures on algorithms and it seems so many of them rely almost entirely on binary search trees of some particular sort for querying/database/search tasks.
I don't understand this obsession with Binary Search Trees. It seems like in the vast majority of scenarios, a BST could be replaced with a sorted array in the case of static data, or a sorted bucketed list if insertions occur dynamically, and then a binary search could be employed over them.
With this approach, you get the same algorithmic complexity (for querying at least) as a BST, way better cache coherency, way less memory fragmentation (and less gc allocs depending on what language you're in), and are likely much simpler to write.
The fundamental issue is that BSTs are completely memory naïve -- their focus is entirely on big-O complexity and they ignore the very real performance considerations of memory fragmentation and cache coherency... Am I missing something?
Binary search trees (BSTs) are not totally equivalent to the proposed data structure. Their asymptotic complexity is better when it comes to both inserting and removing values dynamically (assuming they are correctly balanced). For example, when you want to build an index of the top-k values dynamically:
    while not end_of_stream(stream):
        value <- stream.pop_value()
        tree.insert(value)
        if tree.size() > k:
            tree.remove_max()
Sorted arrays are not efficient in this case because of the linear-time insertion. The complexity of bucketed lists is not asymptotically better than that of a plain list, and they also suffer from a linear-time search. One can note that a heap can be used in this case, and in fact it is probably better to use a heap here, although they are not always interchangeable.
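For what it's worth, here is a minimal sketch of that heap-based alternative (the function name is my own, and the input vector stands in for the stream):

```cpp
#include <cstddef>
#include <queue>
#include <vector>

// Keep the k smallest values seen so far in a max-heap, so the largest of
// them can be evicted in O(log k) whenever a new value arrives.
std::vector<int> k_smallest(const std::vector<int>& stream, std::size_t k) {
    std::priority_queue<int> heap;  // max-heap
    for (int value : stream) {
        heap.push(value);
        if (heap.size() > k)
            heap.pop();             // drop the current maximum
    }
    std::vector<int> out;
    while (!heap.empty()) { out.push_back(heap.top()); heap.pop(); }
    return out;                     // the k smallest values, in descending order
}
```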
That being said, you are right: BSTs are slow, cause a lot of cache misses and fragmentation, etc. Thus, they are often replaced by more compact variants like B-trees. A B-tree uses a sorted array index to reduce the number of node jumps and to make the data structure much more compact. They can be mixed with some 4-byte pointer optimizations to make them even more compact. B-trees are to BSTs what bucketed linked lists are to plain linked lists. B-trees are very good for building dynamic database indexes of huge datasets stored on a slow storage device (because of the size): they enable applications to fetch the values associated with a key using very few storage-device lookups (which are very slow on an HDD, for example). Another example of a real-world use case is interval trees.
Note that memory fragmentation can be reduced using compaction methods. For BSTs/B-trees, one can reorder the root nodes like in a heap. However, compaction is not always easy to apply, especially in native languages with pointers like C/C++, although some very clever methods exist to do so.
Keep in mind that B-trees shine only on big datasets (especially ones that do not fit in cache). On relatively small ones, a plain array or even a sorted array is often a very good solution.

Best algorithm for sequential access of nodes

I want to know the best algorithm where I can create a "sorted" list based on the key (ranging from 0 to 2^32) and traverse it in sorted order when needed, on an embedded device. I am aware of possible options, namely:
Sorted linked list
As the number of nodes in the linked list increases, searching for the right node for insertion/update operations takes more time, O(n)
Hash
Might be the best choice as long as we do not have collisions with the hashing logic
Table of size 2^32
Wastage of space
Is there another alternative better suited to an embedded device?
There are many design choices to be weighed.
Generalities
Since you're working on an embedded device, it's reasonable to assume that you have limited memory. In such a situation, you'll probably want to choose memory-compact data structures over performant data structures.
Linked lists tend to scatter their contents across memory in a way which can make accesses slow, though this will depend somewhat on your architecture.
The Options You've Proposed
Sorted linked-list. This structure is already slow to access (O(n)), slow to construct (O(n²)), and slow to traverse (because a linked-list scatters memory, which reduces your ability to pre-fetch).
Hash table: This is a fast structure (O(1) access, O(N) construction). There are two problems, though. If you use open addressing, the table must be no more than about 70% full or performance will degrade. That means you'll be wasting some memory. Alternatively, you can use linked list buckets, but this has performance implications for traversal. I have an answer here which shows order-of-magnitude differences in traversal performance between a linked-list bucket design and open addressing for a hash table. More problematically, hash tables work by "randomly" distributing data across memory space. Getting an in-order traversal out of that will require an additional data structure of some sort.
Table of size 2^32: there's significant wastage of space with this solution, but also poor performance since, I expect, most of the entries of this table will be empty, yet they must all be traversed.
An Alternative
Sort before use
If you do not need your list to always be sorted, I'd suggest adding new entries to an array and then sorting just prior to traversal. This gives you tight control over your memory layout, which is contiguous, so you'll get good memory performance. Insertion is quick: just throw your new data at the beginning or end of the array. Traversal, when it happens, will be fast because you just walk along the array. The only potentially slow bit is the sort.
You have several options for sort. You'll want to keep in mind that your array is either mostly sorted (only a few insertions between traversals) or mostly unsorted (many insertions between traversals). In the mostly-sorted case, insertion sort is a good choice. In the mostly-unsorted case, [quicksort](https://en.wikipedia.org/wiki/Quicksort) is solid. Both have the benefit of being in-place, which reduces memory consumption. Timsort balances these strategies.
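A rough sketch of the "sort before use" idea (my illustration; the type name and the lazy "dirty" flag are assumptions, not something from the question):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct SortOnRead {
    std::vector<std::uint32_t> keys;  // contiguous storage, good cache behaviour
    bool dirty = false;

    void insert(std::uint32_t key) {  // O(1) amortized: just append
        keys.push_back(key);
        dirty = true;
    }

    template <typename Visit>
    void traverse_sorted(Visit visit) {  // sort lazily, only when a traversal happens
        if (dirty) {
            std::sort(keys.begin(), keys.end());
            dirty = false;
        }
        for (std::uint32_t k : keys) visit(k);
    }
};
```

If traversals are frequent relative to insertions, keeping the prefix sorted and merging in the new tail (e.g. with std::inplace_merge) would fit the mostly-sorted case better than a full sort.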

Can heaps really only use O(1) auxiliary storage?

One of the biggest advantages of using heaps for priority queues (as opposed to, say, red-black trees) seems to be space-efficiency: unlike balanced BSTs, heaps only require O(1) auxiliary storage.
I.e., the ordering of the elements alone is sufficient to satisfy the invariants and guarantees of a heap, and no child/parent pointers are necessary.
However, my question is: is the above really true?
It seems to me that, in order to use an array-based heap while satisfying the O(log n) running time guarantee, the array must be dynamically expandable.
The only way we can dynamically expand an array in O(1) time is to over-allocate the memory that backs the array, so that we can amortize the memory operations into the future.
But then, doesn't that overallocation imply a heap also requires auxiliary storage?
If the above is true, that seems to imply heaps have no complexity advantages over balanced BSTs whatsoever, so then, what makes heaps "interesting" from a theoretical point of view?
You appear to confuse binary heaps, heaps in general, and implementations of binary heaps that use an array instead of an explicit tree structure.
A binary heap is simply a binary tree with properties that make it theoretically interesting beyond memory use. It can be built in linear time, whereas building a BST necessarily takes n log n time. For example, this can be used to select the k smallest/largest values of a sequence in better-than-n-log-n time.
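As a small illustration of that claim (a sketch with my own naming, using the standard heap algorithms):

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// Build a min-heap in O(n), then pop k times: O(n + k log n) total,
// which beats fully sorting (n log n) when k << n.
std::vector<int> select_k_smallest(std::vector<int> v, std::size_t k) {
    std::make_heap(v.begin(), v.end(), std::greater<int>());    // O(n)
    std::vector<int> out;
    for (std::size_t i = 0; i < k && !v.empty(); ++i) {
        std::pop_heap(v.begin(), v.end(), std::greater<int>()); // min moves to the back
        out.push_back(v.back());
        v.pop_back();
    }
    return out;
}
```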
Implementing a binary heap as an array yields an implicit data structure. This is a topic formalized by theorists, but not actively pursued by most of them. But in any case, the classification is justified: To dynamically expand this structure one indeed needs to over-allocate, but not every data structure has to grow dynamically, so the case of a statically-sized pre-allocated data structure is interesting as well.
Furthermore, there is a difference between space needed for faster growth and space needed because each element is larger than it has to be. The first can be avoided by not growing, and also reduced to arbitrarily small constant factor of the total size, at the cost of a greater constant factor on running time. The latter kind of space overhead is usually unavoidable and can't be reduced much (the pointers in a tree are at least log n bits, period).
Finally, there are many heaps other than binary heaps (Fibonacci, binomial, leftist, pairing, ...), and almost all of them except binary heaps offer better bounds for at least some operations. The most common ones are decrease-key (alter the value of a key already in the structure in a certain way) and merge (combine two heaps into one). The complexity of these operations is important for the analysis of several algorithms using priority queues, and hence the motivation for a lot of research into heaps.
In practice the memory use is important. But with or without over-allocation, the difference is only a constant factor overall, so theorists are not terribly interested in binary heaps. They'd rather get better complexity for decrease-key and merge; most of them are happy if the data structure takes O(n) space. The extremely high memory density, ease of implementation and cache friendliness are far more interesting for practitioners, and it's they who sing the praise of binary heaps far and wide.

Are there O(1) random access data structures that don't rely on contiguous storage?

The classic O(1) random access data structure is the array. But an array relies on the programming language being used supporting guaranteed contiguous memory allocation (since the array relies on being able to take a simple offset from the base to find any element).
This means that the language must have semantics regarding whether or not memory is contiguous, rather than leaving this as an implementation detail. Thus it could be desirable to have a data structure that has O(1) random access, yet doesn't rely on contiguous storage.
Is there such a thing?
How about a trie where the length of keys is limited to some constant K (for example, 4 bytes so you can use 32-bit integers as indices)? Then lookup time will be O(K), i.e. O(1), with non-contiguous memory. Seems reasonable to me.
Recalling our complexity classes, don't forget that every big-O has a constant factor, i.e. O(n) + C. This approach will certainly have a much larger C than a real array.
EDIT: Actually, now that I think about it, it's O(K*A) where A is the size of the "alphabet". Each node has to have a list of up to A child nodes, which will have to be a linked list to keep the implementation non-contiguous. But A is still constant, so it's still O(1).
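A minimal sketch of that fixed-depth trie (my own illustration; it uses a per-node child array for brevity where the answer suggests a linked list of children, and insertion is omitted):

```cpp
#include <array>
#include <cstdint>
#include <memory>

struct TrieNode {
    std::array<std::unique_ptr<TrieNode>, 256> child{};  // alphabet A = 256
    bool present = false;                                 // does a key end here?
};

// Keys are 32-bit integers consumed one byte at a time, so every lookup
// touches at most K = 4 nodes regardless of how many keys are stored,
// and no single large contiguous allocation is needed.
bool contains(const TrieNode& root, std::uint32_t key) {
    const TrieNode* n = &root;
    for (int shift = 24; shift >= 0; shift -= 8) {
        const auto& next = n->child[(key >> shift) & 0xFF];
        if (!next) return false;
        n = next.get();
    }
    return n->present;
}
```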
In practice, for small datasets using contiguous storage is not a problem, and for large datasets O(log(n)) is just as good as O(1); the constant factor is rather more important.
In fact, for REALLY large datasets, O(n^(1/3)) random access is the best you can get in a 3-dimensional physical universe.
Edit:
Assuming log10 and the O(log(n)) algorithm being twice as fast as the O(1) one at a million elements, it will take a trillion elements for them to become even, and a quintillion for the O(1) algorithm to become twice as fast - rather more than even the biggest databases on earth have.
All current and foreseeable storage technologies require a certain physical space (let's call it v) to store each element of data. In a 3-dimensional universe, this means for n elements there is a minimum distance of (3nv/(4π))^(1/3) between at least some of the elements and the place that does the lookup, because that's the radius of a sphere of volume n·v. And then, the speed of light gives a physical lower bound of (3nv/(4π))^(1/3)/c for the access time to those elements - and that's O(n^(1/3)), no matter what fancy algorithm you use.
Apart from a hashtable, you can have a two-level array-of-arrays:
Store the first 10,000 elements in the first sub-array
Store the next 10,000 elements in the next sub-array
etc.
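A small sketch of that layout (my own naming; the outer index of chunk pointers is still contiguous, as a later reply points out, but no single block has to hold all the elements):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

constexpr std::size_t kChunk = 10'000;  // elements per sub-array

struct ChunkedArray {
    std::vector<std::unique_ptr<int[]>> chunks;  // small contiguous index of chunk pointers
    std::size_t size = 0;

    void push_back(int value) {
        if (size % kChunk == 0)
            chunks.push_back(std::make_unique<int[]>(kChunk));
        chunks[size / kChunk][size % kChunk] = value;
        ++size;
    }

    int& operator[](std::size_t i) {  // O(1): one division, one modulo, two loads
        return chunks[i / kChunk][i % kChunk];
    }
};
```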
Thus it could be desirable to have a data structure that has O(1) random access, yet doesn't rely on contiguous storage.
Is there such a thing?
No, there is not. Sketch of proof:
If you have a limit on your contiguous block size, then obviously you'll have to use indirection to get to your data items. Fixed depth of indirection with a limited block size gets you only a fixed-sized graph (although its size grows exponentially with the depth), so as your data set grows, the indirection depth will grow (only logarithmically, but not O(1)).
Hashtable?
Edit:
An array is O(1) lookup because a[i] is just syntactic sugar for *(a+i). In other words, to get O(1), you need either a direct pointer or an easily calculated pointer to every element (along with a good feeling that the memory you're about to look up belongs to your program). In the absence of a pointer to every element, it's not likely you'll have an easily calculated pointer (and know the memory is reserved for you) without contiguous memory.
Of course, it's plausible (if terrible) to have a Hashtable implementation where each lookup's memory address is simply *(a + hash(i)), not backed by an array, i.e. the entry is dynamically created at the specified memory location if you have that sort of control... The point is that the most efficient implementation is going to be an underlying array, but it's certainly plausible to take hits elsewhere and do a WTF implementation that still gets you constant-time lookup.
Edit2:
My point is that an array relies on contiguous memory because it's syntactic sugar, but a Hashtable chooses an array because it's the best implementation method, not because it's required. Of course I must be reading the DailyWTF too much, since I'm imagining overloading C++'s array-index operator to also do it without contiguous memory in the same fashion..
Aside from the obvious nested structures to finite depth noted by others, I'm not aware of a data structure with the properties you describe. I share others' opinions that with a well-designed logarithmic data structure, you can have non-contiguous memory with fast access times to any data that will fit in main memory.
I am aware of an interesting and closely related data structure:
Cedar ropes are immutable strings that provide logarithmic rather than constant-time access, but they do provide a constant-time concatenation operation and efficient insertion of characters. The paper is copyrighted but there is a Wikipedia explanation.
This data structure is efficient enough that you can represent the entire contents of a large file using it, and the implementation is clever enough to keep bits on disk unless you need them.
Surely what you're talking about here is not contiguous memory storage as such, but more the ability to index a containing data structure. It is common to internally implement a dynamic array or list as an array of pointers with the actual content of each element elsewhere in memory. There are a number of reasons for doing this - not least that it enables each entry to be a different size. As others have pointed out, most hashtable implementations also rely on indexing. I can't think of a way to implement an O(1) algorithm that doesn't rely on indexing, but this implies contiguous memory for the index at least.
Distributed hash maps have such a property. Well, actually, not quite: a hash function tells you which storage bucket to look in, and within it you'll probably need to rely on traditional hash maps. It doesn't completely cover your requirements, as the list containing the storage areas / nodes (in a distributed scenario) is usually itself a hash map (essentially making it a hash table of hash tables), although you could use some other algorithm, e.g. if the number of storage areas is known.
EDIT:
Forgot a little tidbit: you'd probably want to use different hash functions for the different levels, otherwise you'll end up with a lot of similar hash values within each storage area.
A bit of a curiosity: the hash trie saves space by interleaving in memory the key-arrays of trie nodes that happen not to collide. That is, if node 1 has keys A,B,D while node 2 has keys C,X,Y,Z, for example, then you can use the same contiguous storage for both nodes at once. It's generalized to different offsets and an arbitrary number of nodes; Knuth used this in his most-common-words program in Literate Programming.
So this gives O(1) access to the keys of any given node, without reserving contiguous storage to it, albeit using contiguous storage for all nodes collectively.
It's possible to allocate a memory block not for the whole data, but only for a reference array pointing to pieces of data. This brings a dramatic decrease in the length of the necessary contiguous memory.
Another option: if the elements can be identified with keys and these keys can be uniquely mapped to the available memory locations, it is possible not to place all the objects contiguously, leaving spaces between them. This requires control over the memory allocation so you can still distribute free memory and relocate 2nd-priority objects to somewhere else when you have to use that memory location for your 1st-priority object. They would still be contiguous in a sub-dimension, though.
Can I name a common data structure which answers your question? No.
Some pseudo-O(1) answers:
A VList is O(1) access (on average), and doesn't require that the whole of the data is contiguous, though it does require contiguous storage in small blocks. Other data structures based on numerical representations are also amortized O(1).
A numerical representation applies the same 'cheat' that a radix sort does, yielding an O(k) access structure - if there is another upper bound on the index, such as it being a 64-bit int, then a binary tree where each level corresponds to a bit in the index takes constant time. Of course, that constant k is greater than ln N for any N which can be used with the structure, so it's not likely to be a performance improvement (radix sort can get performance improvements if k is only a little greater than ln N and the implementation of the radix sort better exploits the platform).
If you use the same representation of a binary tree that is common in heap implementations, you end up back at an array.

B-tree faster than AVL or RedBlack-Tree? [closed]

I know that performance never is black and white, often one implementation is faster in case X and slower in case Y, etc. but in general - are B-trees faster than AVL or RedBlack-Trees? They are considerably more complex to implement than AVL trees (and maybe even RedBlack-trees?), but are they faster (does their complexity pay off)?
Edit: I should also like to add that if they are faster than the equivalent AVL/RedBlack tree (in terms of nodes/content) - why are they faster?
Sean's post (the currently accepted one) contains several incorrect claims. Sorry Sean, I don't mean to be rude; I hope I can convince you that my statement is based in fact.
They're totally different in their use cases, so it's not possible to make a comparison.
They're both used for maintaining a set of totally ordered items with fast lookup, insertion and deletion. They have the same interface and the same intention.
RB trees are typically in-memory structures used to provide fast access (ideally O(logN)) to data. [...]
always O(log n)
B-trees are typically disk-based structures, and so are inherently slower than in-memory data.
Nonsense. When you store search trees on disk, you typically use B-trees. That much is true. When you store data on disk, it's slower to access than data in memory. But a red-black tree stored on disk is also slower than a red-black tree stored in memory.
You're comparing apples and oranges here. What is really interesting is a comparison of in-memory B-trees and in-memory red-black trees.
[As an aside: B-trees, as opposed to red-black trees, are theoretically efficient in the I/O-model. I have experimentally tested (and validated) the I/O-model for sorting; I'd expect it to work for B-trees as well.]
B-trees are rarely binary trees, the number of children a node can have is typically a large number.
To be clear, the size range of B-tree nodes is a parameter of the tree (in C++, you may want to use an integer value as a template parameter).
The management of the B-tree structure can be quite complicated when the data changes.
I remember them to be much simpler to understand (and implement) than red-black trees.
B-tree try to minimize the number of disk accesses so that data retrieval is reasonably deterministic.
That much is true.
It's not uncommon to see something like 4 B-tree accesses necessary to look up a bit of data in a very large database.
Got data?
In most cases I'd say that in-memory RB trees are faster.
Got data?
Because the lookup is binary it's very easy to find something. B-tree can have multiple children per node, so on each node you have to scan the node to look for the appropriate child. This is an O(N) operation.
The size of each node is a fixed parameter, so even if you do a linear scan, it's O(1). If we big-oh over the size of each node, note that you typically keep the array sorted so it's O(log n).
On a RB-tree it'd be O(logN) since you're doing one comparison and then branching.
You're comparing apples and oranges. The O(log n) is because the height of the tree is at most O(log n), just as it is for a B-tree.
Also, unless you play nasty allocation tricks with the red-black trees, it seems reasonable to conjecture that B-trees have better caching behavior (it accesses an array, not pointers strewn about all over the place, and has less allocation overhead increasing memory locality even more), which might help it in the speed race.
I can point to experimental evidence that B-trees (with size parameters 32 and 64, specifically) are very competitive with red-black trees for small sizes, and outperforms it hands down for even moderately large values of n. See http://idlebox.net/2007/stx-btree/stx-btree-0.8.3/doxygen-html/speedtest.html
B-trees are faster. Why? I conjecture that it's due to memory locality, better caching behavior and less pointer chasing (which are, if not the same things, overlapping to some degree).
Actually Wikipedia has a great article that shows every RB-Tree can easily be expressed as a B-Tree. Take the following tree as a sample:
[figure: an example red-black tree]
Now just convert it to a B-Tree (to make this more obvious, the nodes are still colored R/B, which you usually don't have in a B-Tree):
[figure: the same tree represented as a B-Tree]
Same is true for any other RB-Tree. It's taken from this article:
http://en.wikipedia.org/wiki/Red-black_tree
To quote from this article: "The red-black tree is then structurally equivalent to a B-tree of order 4, with a minimum fill factor of 33% of values per cluster with a maximum capacity of 3 values."
I found no data showing that one of the two is significantly better than the other. I guess one of them would already have died out if that were the case. They differ in how much data they must keep in memory and in how complicated it is to add/remove nodes from the tree.
Update:
My personal tests suggest that B-Trees are better when searching for data, as they have better data locality and thus the CPU cache can do compares somewhat faster. The higher the order of a B-Tree (the order is the number of children a node can have), the faster the lookup will get. On the other hand, they have worse performance for adding and removing new entries the higher their order is. This is caused by the fact that adding a value within a node has linear complexity. As each node is a sorted array, you must move lots of elements around within that array when adding an element into the middle: all elements to the left of the new element must be moved one position to the left or all elements to the right of the new element must be moved one position to the right. If a value moves one node upwards during an insert (which happens frequently in a B-Tree), it leaves a hole which must also be filled either by moving all elements from the left one position to the right or by moving all elements to the right one position to the left. These operations (in C usually performed by memmove) are in fact O(n). So the higher the order of the B-Tree, the faster the lookup but the slower the modification. On the other hand, if you choose the order too low (e.g. 3), a B-Tree shows little advantage or disadvantage over other tree structures in practice (in such a case you can as well use something else). Thus I'd always create B-Trees with high orders (at least 4; 8 and up is fine).
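To illustrate that linear within-node cost (a sketch with made-up names and an assumed node layout; splitting full nodes is omitted):

```cpp
#include <cstddef>
#include <cstring>

constexpr std::size_t kOrder = 64;  // assumed maximum number of keys per node

struct Node {
    std::size_t count = 0;
    long keys[kOrder];              // kept sorted at all times
};

// Insert into a non-full node: finding the slot can be a binary search,
// but making room for the new key shifts the tail of the array (memmove),
// which is linear in the order of the tree.
void insert_into_node(Node& n, long key) {
    std::size_t pos = 0;
    while (pos < n.count && n.keys[pos] < key) ++pos;
    std::memmove(&n.keys[pos + 1], &n.keys[pos],
                 (n.count - pos) * sizeof(long));
    n.keys[pos] = key;
    ++n.count;
}
```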
File systems, which are often based on B-Trees, use much higher orders (order 200 and even a lot more) - this is because they usually choose the order high enough so that a node (when containing the maximum number of allowed elements) equals either the size of a sector on the hard drive or a cluster of the file system. This gives optimal performance (since an HD can only write a full sector at a time; even when just one byte is changed, the full sector is rewritten anyway) and optimal space utilization (as each data entry on the drive equals at least the size of one cluster or is a multiple of the cluster size, no matter how big the data really is). Because the hardware sees data as sectors and the file system groups sectors into clusters, B-Trees can yield much better performance and space utilization for file systems than any other tree structure can; that's why they are so popular for file systems.
When your app is constantly updating the tree, adding or removing values from it, an RB-Tree or an AVL-Tree may show better performance on average compared to a B-Tree with a high order. Lookups will be somewhat slower and they might also need more memory, but in return modifications are usually fast. Actually, RB-Trees are even faster for modifications than AVL-Trees, while AVL-Trees are a little bit faster for lookups as they are usually less deep.
So as usual it depends a lot what your app is doing. My recommendations are:
Lots of lookups, little modifications: B-Tree (with high order)
Lots of lookups, lots of modifications: AVL-Tree
Little lookups, lots of modifications: RB-Tree
An alternative to all these trees are AA-Trees. As this PDF paper suggests, AA-Trees (which are in fact a sub-group of RB-Trees) are almost equal in performance to normal RB-Trees, but they are much easier to implement than RB-Trees, AVL-Trees, or B-Trees. Here is a full implementation; look how tiny it is (the main function is not part of the implementation and half of the implementation lines are actually comments).
As the PDF paper shows, a Treap is also an interesting alternative to classic tree implementations. A Treap is also a binary tree, but one that doesn't try to enforce balancing. To avoid the worst-case scenarios that you may get in unbalanced binary trees (causing lookups to become O(n) instead of O(log n)), a Treap adds some randomness to the tree. Randomness cannot guarantee that the tree is well balanced, but it makes it highly unlikely that the tree is extremely unbalanced.
Nothing prevents a B-Tree implementation that works only in memory. In fact, if key comparisons are cheap, an in-memory B-Tree can be faster because its packing of multiple keys in one node will cause fewer cache misses during searches. See this link for performance comparisons. A quote: "The speed test results are interesting and show the B+ tree to be significantly faster for trees containing more than 16,000 items." (A B+Tree is just a variation on a B-Tree.)
The question is old but I think it is still relevant. Jonas Kölker and Mecki gave very good answers but I don't think the answers cover the whole story. I would even argue that the whole discussion is missing the point :-).
What was said about B-Trees is true when entries are relatively small (integers, small strings/words, floats, etc.). When entries are large (over 100 bytes) the differences become smaller/insignificant.
Let me sum up the main points about B-Trees:
They are faster than any Binary Search Tree (BST) due to memory locality (resulting in fewer cache and TLB misses).
B-Trees are usually more space efficient if entries are relatively small or of variable size. Free space management is easier (you allocate larger chunks of memory) and the extra metadata overhead per entry is lower. B-Trees will waste some space as nodes are not always full; however, they still end up being more compact than Binary Search Trees.
The big-O performance (O(log N)) is the same for both. Moreover, if you do binary search inside each B-Tree node, you will even end up with the same number of comparisons as in a BST (it is a nice math exercise to verify this; see the sketch after this list).
If the B-Tree node size is sensible (1-4x cache line size), linear searching inside each node is still faster because of the hardware prefetching. You can also use SIMD instructions for comparing basic data types (e.g. integers).
B-Trees are better suited for compression: there is more data per node to compress. In certain cases this can be a huge benefit. Just think of an auto-incrementing key in a relational database table that is used to build an index: the leaf nodes of a B-Tree contain consecutive integers that compress very, very well.
B-Trees are clearly much, much faster when stored on secondary storage (where you need to do block IO).
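Following up on the binary-search-inside-a-node point, a small sketch (the node layout and kMaxKeys are assumptions for illustration): with q keys per node the height is about log_q(n) and binary search inside a node costs about log2(q) comparisons, so the total is roughly log_q(n) * log2(q) = log2(n), the same as a BST.

```cpp
#include <algorithm>
#include <cstddef>

constexpr std::size_t kMaxKeys = 64;  // assumed node capacity

struct BTreeNode {
    std::size_t count;
    long        keys[kMaxKeys];          // sorted
    BTreeNode*  children[kMaxKeys + 1];  // null in leaves
};

// Binary search inside one node: returns the index of the first key >= `key`,
// which is also the index of the child to descend into on a miss.
std::size_t find_slot(const BTreeNode& node, long key) {
    const long* end = node.keys + node.count;
    return static_cast<std::size_t>(std::lower_bound(node.keys, end, key) - node.keys);
}
```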
On paper, B-Trees have a lot of advantages and close to no disadvantages. So should one just use B-Trees for best performance?
The answer is usually NO -- if the tree fits in memory. In cases where performance is crucial you want a thread-safe tree-like data-structure (simply put, several threads can do more work than a single one). It is more problematic to make a B-Tree support concurrent accesses than to make a BST. The most straight-forward way to make a tree support concurrent accesses is to lock nodes as you are traversing/modifying them. In a B-Tree you lock more entries per node, resulting in more serialization points and more contended locks.
All tree versions (AVL, Red/Black, B-Tree, and others) have countless variants that differ in how they support concurrency. The vanilla algorithms that are taught in a university course or read from some introductory books are almost never used in practice. So, it is hard to say which tree performs best, as there is no official agreement on the exact algorithms behind each tree. I would suggest thinking of the trees mentioned more as classes of data structures that obey certain tree-like invariants rather than as precise data structures.
Take for example the B-Tree. The vanilla B-Tree is almost never used in practice -- you cannot make it scale well! The most common B-Tree variant used is the B+-Tree (widely used in file systems and databases). The main differences between the B+-Tree and the B-Tree: 1) you don't store entries in the inner nodes of the tree (thus you don't need write locks high in the tree when modifying an entry stored in an inner node); 2) you have links between nodes at the same level (thus you do not have to lock the parent of a node when doing range searches).
I hope this helps.
Guys from Google recently released their implementation of STL containers, which is based on B-trees. They claim their version is faster and consumes less memory compared to the standard STL containers, which are implemented via red-black trees.
More details here
For some applications, B-trees are significantly faster than BSTs.
The trees you may find here:
http://freshmeat.net/projects/bps
are quite fast. They also use less memory than regular BST implementations, since they do not require the BST infrastructure of 2 or 3 pointers per node, plus some extra fields to keep the balancing information.
They are used in different circumstances - B-trees are used when the tree nodes need to be kept together in storage - typically because storage is a disk page and so re-balancing could be very expensive. RB trees are used when you don't have this constraint. So B-trees will probably be faster if you want to implement (say) a relational database index, while RB trees will probably be faster for (say) an in-memory search.
They all have the same asymptotic behavior, so the performance depends more on the implementation than which type of tree you are using.
Some combination of tree structures might actually be the fastest approach, where each node of a B-tree fits exactly into a cache-line and some sort of binary tree is used to search within each node. Managing the memory for the nodes yourself might also enable you to achieve even greater cache locality, but at a very high price.
Personally, I just use whatever is in the standard library for the language I am using, since it's a lot of work for a very small performance gain (if any).
On a theoretical note... RB-trees are actually very similar to B-trees, since they simulate the behavior of 2-3-4 trees. AA-trees are a similar structure, which simulates 2-3 trees instead.
Moreover, the height of a red-black tree is O(log_2 N), whereas that of a B-tree is O(log_q N), where q is the branching factor. So if we count the comparisons within each key array of a B-tree (whose size is fixed, as mentioned above), the time complexity of a B-tree is <= the time complexity of a red-black tree (with equality when a single record is as large as a block).
