Can B+ tree search perform better than binary search tree search when all key-data pairs of the leaf nodes are in memory?

Assume that we are implementing a B+ tree in memory: keys are in the internal nodes, and key-data pairs are in the leaf nodes.
With a fan-out of f, the B+ tree will have a height of about log_f N, where N is the number of keys, whereas the corresponding BST will have a height of about log_2 N.
If we are not doing any disk reads or writes, can B+ tree search performance be better than binary search tree performance? How?
After all, at each internal node of the B+ tree we have to make a decision among f choices, instead of the single comparison a BST node needs.

At least when compared to cache, main memory has many of the same characteristics as a disk drive: it has fairly high bandwidth, but much higher latency than cache. It has a fairly large minimum read size, and gives substantially higher bandwidth when reads are predictable (e.g., when you read a number of cache lines at contiguous addresses). As such, it benefits from the same general kinds of optimizations (though the details often vary a bit).
B-trees (and variants like B* and B+ trees) were explicitly designed around the access patterns that disk drives support well. Since you have to read a fairly substantial amount of data anyway, you might as well pack that data to maximize the amount you accomplish with each read. In both cases, you also frequently get a substantial bandwidth gain by reading some multiple of the minimum read size in a predictable pattern (especially a number of successive reads at successive addresses). As such, it often makes sense to increase the size of a single page to something even larger than the minimum you can read at once.
Likewise, in both cases we can plan on descending through a number of layers of nodes in the tree before we find the data we really care about. Much like when reading from disk, we benefit from maximizing the density of keys in the data we read, until we've actually found the data we care about. With a typical binary tree:
template <class T, class U>
struct node {
    T key;
    U data;
    node *left;
    node *right;
};
...we end up reading a number of data items for which we have no real use. It's only when we've found the right key that we need/want to get the associated data. In fairness, we can do that with a binary tree as well, with only a fairly minor modification to the node structure:
template <class T, class U>
struct node {
    T key;
    U *data;
    node *left;
    node *right;
};
Now the node contains only a pointer to the data rather than the data itself. This won't accomplish anything if data is small, but can accomplish a great deal if it's large.
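Combining the two ideas, wide nodes and out-of-line data, gives something like the following. This is a minimal sketch of my own, not from the original answer; the fan-out of 8 is an illustrative guess rather than a tuned value:

template <class T, class U>
struct btree_node {
    static const int FANOUT = 8;    // illustrative; tune so keys fill a cache line or two
    int count;                      // number of keys currently in use
    T keys[FANOUT - 1];             // keys packed contiguously: one linear scan per node
    btree_node *children[FANOUT];   // all null in a leaf node
    U *data[FANOUT - 1];            // payload pointers, as in the modified BST node above
};

Every cache-line fill now delivers several keys instead of one, which is exactly the density argument made above.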
Summary: from the viewpoint of the CPU, reads from main memory have the same basic characteristics as reads from disk; a disk just shows a more extreme version of those characteristics. As such, most of the considerations that led to the design of B-trees (and variants) now apply similarly to data stored in main memory.
B-trees work well and often provide substantial benefits when used for in-memory storage.

Related

Is the benefit of a B-Tree not lost when it is saved to a file?

I was reading about B-Trees, and it was interesting to learn that they were specifically built for storage in secondary memory. But I am a little puzzled by a few points:
If we save the B-Tree to secondary memory (via serialization in Java), is the advantage of the B-Tree not lost? Once a node is serialized, we no longer have references to its child nodes (as we do in primary memory). So we would have to read all the nodes one by one, since no child references are available. And if we have to read all the nodes, what is the advantage of the tree? In that case we are not using binary search on the tree at all. Any thoughts?
When a B-Tree is used on disk, it is not read from a file, deserialized, modified, re-serialized, and written back as a whole.
A B-Tree on disk is a disk-based data structure consisting of blocks of data, and those blocks are read and written one block at a time. Typically:
- Each node in the B-Tree is a block of data (bytes). Blocks have fixed sizes.
- Blocks are addressed by their position in the file, if a file is used, or by their sector address if B-Tree blocks are mapped directly to disk sectors.
- A "pointer to a child node" is just a number that is the node's block address.
- Blocks are large, typically large enough to hold 1000 children or more. That's because reading a block is expensive, but the cost doesn't depend much on the block size. By keeping blocks big enough that there are only 3 or 4 levels in the whole tree, we minimize the number of reads or writes required to access any specific item.
- Caching is usually used so that most accesses only need to touch the lowest level of the tree on disk.
So to find an item in a B-Tree, you would read the root block (it will probably come out of cache), look through it to find the appropriate child block and read that (again probably out of cache), maybe do that again, finally read the appropriate leaf block and extract the data.
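As a concrete illustration of that descent, here is a hedged sketch of my own (not from the answer): the Block layout is an assumption, and the in-memory "disk" map is a stand-in for a real pager and cache:

#include <cstdint>
#include <map>
#include <optional>
#include <string>
#include <vector>

struct Block {
    bool is_leaf;
    std::vector<int64_t> keys;        // sorted keys (separators in internal nodes)
    std::vector<uint64_t> children;   // child block addresses (internal nodes only)
    std::vector<std::string> data;    // payloads (leaf nodes only)
};

// stand-in for the file: block address -> block; a real implementation
// would read a fixed-size block from disk, or serve it from cache
std::map<uint64_t, Block> disk;

Block read_block(uint64_t addr) { return disk.at(addr); }

// one block read per level: root -> internal node(s) -> leaf
std::optional<std::string> find(uint64_t root_addr, int64_t key) {
    Block b = read_block(root_addr);
    while (!b.is_leaf) {
        std::size_t i = 0;
        while (i < b.keys.size() && key >= b.keys[i]) ++i;
        b = read_block(b.children[i]);   // follow a block address, not a memory pointer
    }
    for (std::size_t i = 0; i < b.keys.size(); ++i)
        if (b.keys[i] == key) return b.data[i];
    return std::nullopt;
}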

compare B+tree implementation: storing internal nodes on disk

Is there any implementation where the internal nodes of a B+ tree are also stored on disk? I am just wondering if anyone is aware of such an implementation, or sees a real advantage in doing it this way. Normally, one stores the leaf nodes on disk and builds the B+ tree as needed.
But it is also possible to save the current state of the B+ tree's internal nodes (by replacing each pointer with the number of the disk block it points to). I see there are other challenges, like keeping the internal nodes in memory in sync with the disk blocks, but the B+ tree may be implemented on NVRAM, or say battery-backed DRAM, or kept in sync by some other method.
Just wondering if anyone has already implemented it this way, like Linux's bcache or another implementation?
All persistent B+Tree implementations I've ever seen - as opposed to pure 'transient' in-memory structures - store both node types on disk.
Not doing so would require scanning all the data (the external nodes, a.k.a. the 'sequence set') on every load in order to rebuild the index, something that is feasible only when you're dealing with piddling small amounts of data or very special circumstances.
I've seen single-user implementations that sync the disk image only when the page manager ejects a dirty page and on program shutdown, which has the effect that often-used internal nodes - which are rarely replaced/ejected - can go without sync-to-disk for a long time. This is somewhat justified by the fact that internal ('index') nodes can be rebuilt after a crash, so that only the external ('data') nodes need the full fault-tolerant persistence treatment. The advantage of such schemes is that they eliminate the wasted writes for nodes close to the root whose update frequency is fairly high. Think SSDs, for example.
One way of increasing disk efficiency for persisted in-memory structures is to persist only the log to disk, and to rebuild the whole tree from the log on each restart. One very successful Java package uses this approach to great advantage.
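As a rough sketch of that log-based approach (my own illustration, with a toy text log format, and std::map standing in for the real B+ tree):

#include <fstream>
#include <map>
#include <string>

// std::map stands in for the in-memory B+ tree to keep the sketch short
using Tree = std::map<long long, std::string>;

// append one operation to the log; a real system would also fsync here
void log_put(std::ofstream &log, long long key, const std::string &value) {
    log << "PUT " << key << ' ' << value << '\n';
    log.flush();
}

void log_del(std::ofstream &log, long long key) {
    log << "DEL " << key << '\n';
    log.flush();
}

// on restart, replay the whole log to rebuild the tree in memory
Tree rebuild(const std::string &path) {
    Tree t;
    std::ifstream log(path);
    std::string op;
    long long key;
    while (log >> op >> key) {
        if (op == "PUT") {
            std::string value;
            log >> value;
            t[key] = value;
        } else if (op == "DEL") {
            t.erase(key);
        }
    }
    return t;
}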

How to decide order of a B-tree

B-trees are said to be particularly useful for huge amounts of data that cannot fit in main memory.
My question is then: how do we decide the order of a B-tree, i.e., how many keys to store in a node, or how many children a node should have?
Everywhere I look, people use 4 or 5 keys per node. How does that solve the huge-data and disk-read problem?
Typically, you'd choose the order so that the resulting node is as large as possible while still fitting into the block device page size. If you're trying to build a B-tree for an on-disk database, you'd probably pick the order such that each node fits into a single disk page, thereby minimizing the number of disk reads and writes necessary to perform each operation. If you wanted to build an in-memory B-tree, you'd likely pick either the L2 or L3 cache line sizes as your target and try to fit as many keys as possible into a node without exceeding that size. In either case, you'd have to look up the specs to determine what size to use.
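For example (my arithmetic, with an assumed 4096-byte page and 8-byte keys and child block addresses), the order falls out of the page size:

#include <cstddef>
#include <cstdint>
#include <cstdio>

int main() {
    const std::size_t page_size = 4096;            // assumed disk page size
    const std::size_t key_size = sizeof(int64_t);  // 8-byte keys
    const std::size_t ptr_size = sizeof(uint64_t); // 8-byte child block addresses

    // a node with n children stores n - 1 keys and n child pointers:
    //   (n - 1) * key_size + n * ptr_size <= page_size
    //   n <= (page_size + key_size) / (key_size + ptr_size)
    std::size_t order = (page_size + key_size) / (key_size + ptr_size);
    std::printf("max children per node: %zu\n", order);   // 256 for these sizes
    return 0;
}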
Of course, you could always just experiment and try to determine this empirically as well. :-)
Hope this helps!

B-Tree for on-disk storage

Why is a B-Tree the preferred structure for on-disk storage?
What quality makes it preferable over a binary tree for secondary storage?
Is that specific quality a feature of the algorithm itself, or of the way in which it is implemented?
Any reference or pointer would be much appreciated.
Disk seeks are expensive. The B-Tree structure is designed specifically to avoid disk seeks as much as possible. Therefore a B-Tree packs many more keys/pointers into a single node than a binary tree does. This property makes the tree very flat. Usually B-Trees are only 3 or 4 levels deep, and the root node can easily be cached. This requires only 2-3 seeks to find anything in the tree. Leaves are also "packed" this way, so iterating over the tree (e.g., a full scan or a range scan) is very efficient, because you read hundreds or thousands of data rows per block read (seek).
A binary tree of the same capacity would have several tens of levels, and visiting every single value in sequence would require at least one seek each.
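To put illustrative numbers on that (my arithmetic, not from the answer): with N = 10^9 keys and a fan-out of 1000, the B-Tree height is about log_1000(10^9) = 3 levels, so a lookup costs roughly 3 seeks, and fewer with the upper levels cached. A BST over the same keys is about log_2(10^9) ≈ 30 levels deep, so a lookup can cost up to 30 seeks.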

Knn search for large data?

I'm interested in performing a k-nearest-neighbor (knn) search on a large dataset.
There are some libs, such as ANN and FLANN, but I'm interested in the question: how do you organize the search if you have a database that does not fit entirely into memory (RAM)?
I suppose it depends on how much bigger your index is than the available memory. Here are my first spontaneous ideas:
Supposing it was tens of times the size of the RAM, I would try to cluster my data using, for instance, hierarchical clustering trees (implemented in FLANN). I would modify the implementation of the trees so that they keep the branches in memory and save the leaves (the clusters) on disk. The appropriate cluster would then have to be loaded each time. You could then try to optimize this in different ways.
If it was not that much bigger (say, twice the size of the RAM), I would separate the dataset into two parts and create one index for each. I would then find the nearest neighbor in each part and choose between the two, as in the sketch below.
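A minimal sketch of that second idea (my own illustration: brute force stands in for a real index such as FLANN, and load_partition is a hypothetical loader that brings one half of the dataset into RAM at a time):

#include <cstddef>
#include <limits>
#include <vector>

using Point = std::vector<float>;

float dist2(const Point &a, const Point &b) {
    float d = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        const float t = a[i] - b[i];
        d += t * t;
    }
    return d;
}

// hypothetical loader: a real version would read one half of the dataset
// from disk; here it returns toy data so the sketch is self-contained
std::vector<Point> load_partition(int which) {
    if (which == 0) return {{0.f, 0.f}, {1.f, 1.f}};
    return {{5.f, 5.f}, {2.f, 2.f}};
}

// search each half separately (only one half in RAM at a time),
// then keep the closer of the two winners
Point nearest(const Point &query) {
    Point best;
    float best_d = std::numeric_limits<float>::max();
    for (int part = 0; part < 2; ++part) {
        for (const Point &p : load_partition(part)) {
            const float d = dist2(query, p);
            if (d < best_d) { best_d = d; best = p; }
        }
    }
    return best;
}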
It depends on whether your data is very high-dimensional or not. If it is relatively low-dimensional, you can use an existing on-disk R-Tree implementation, such as SpatiaLite.
If it is higher-dimensional data, you can use X-Trees, but I don't know of any on-disk implementations off the top of my head.
Alternatively, you can implement locality-sensitive hashing with on-disk persistence, for example using mmap.
