Implementing the Rope data structure using binary search trees (splay trees) - data-structures

In a standard implementation of the Rope data structure using splay trees, the nodes would be ordered according to a rank statistic measuring the position of each one from the start of the string, so the keys normally found in a binary search tree would be irrelevant, would they not?
I ask because the keys shown in the graphic below (thanks Wikipedia!) are letters, which would presumably become non-unique once the number of nodes exceeded the size of the chosen alphabet. Wouldn't it be better to use integers, or to avoid using keys altogether?
Separately, can anyone point me to a good implementation of the logic to recompute rank statistics after each operation?
Presumably, if the index for a split falls within the substring attached to a particular node, say, between "Hel" and "lo_" on the node E above, you would remove the substring from E, split it and reattach it as two children of E. Correct?
Finally, after a certain number of such operations, the tree could, I suppose, end up with as many leaves as letters. What would be the best way to keep track of that and prune the tree (by combining substrings) as necessary?
Thanks!

For what it's worth, you can implement a Rope using Splay Trees by attaching a substring to each node of the binary search tree (not just to the leaf nodes as shown above).
The rank of each node is its size plus the size of its left subtree. But when recomputing ranks during splay operations, you need to remember to walk down the node.left.right branch, too.
If each node records a reference to the substring it represents (rather than the actual substring itself), everything runs faster. That way, when a split operation falls within an existing node, you just need to modify the node's attributes to reflect the right part of the substring you want to split, then add another node to represent the left part and merge it with the left subtree.
Done as above, each node records (in addition to its left, right and parent attributes etc.) its rank, its size (in characters) and the location of the first character it represents in the string you're trying to modify. That way, you never actually modify the initial string: you just do your operations on bits of the tree and reproduce the final string when you're ready by walking it in order.
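To make that concrete, here is a minimal sketch of the node layout described above (the names RopeNode, rank and report are my own, and subtree sizes are recomputed on the fly rather than cached, which a real splay-tree rope would not do):

```python
class RopeNode:
    def __init__(self, start, length, left=None, right=None):
        self.start = start      # offset of the first character in the original string
        self.length = length    # number of characters this node itself represents
        self.left = left
        self.right = right

def total(node):
    """Total number of characters stored in a subtree."""
    return 0 if node is None else total(node.left) + node.length + total(node.right)

def rank(node):
    """Rank as described above: the node's own length plus the size of its left subtree."""
    return node.length + total(node.left)

def report(node, original):
    """Reproduce the represented text with an in-order walk; `original` is never modified."""
    if node is None:
        return ""
    return (report(node.left, original)
            + original[node.start:node.start + node.length]
            + report(node.right, original))

original = "Hello_world"
# "Hel" as the left child, "lo_" at the root, "world" as the right child
root = RopeNode(3, 3, left=RopeNode(0, 3), right=RopeNode(6, 5))
assert report(root, original) == "Hello_world"
assert rank(root) == 6      # the root's 3 characters plus the 3 in its left subtree
```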

Related

B-tree insertion: during the descent in the tree, why do we split every node with 2t-1 elements?

In the B-tree insertion algorithm, I see that in order to handle the case in which we need to insert an element into a leaf with 2t-1 elements, we apply the split algorithm to the tree. Something I don't understand is why, during the descent in the tree (to the insertion point), we split every node with 2t-1 elements, even though it seems useless. For example:
(example image)
I understand that there is a case in which a couple of nodes above the leaf have 2t-1 elements, and if we want to move the median up into them we face a problem, but why not give a pinpoint solution for that case, instead of splitting every time?
Correct me if I say something wrong.
We split the full nodes on the way down to the target position because we don't know if we will need to "go back up." You can do it the way you are thinking, where we go down to the target node, split it, and then insert the median of the split into the parent, recursively splitting nodes as needed. But this requires us to go from the root, down to the target, and back up, potentially all the way to the root again. This might be undesirable, e.g. if accessing the nodes twice would be too expensive. In that case, it may be better to go in one pass straight down, where you split any full nodes to anticipate the need for more space.
For a demonstration, you can try inserting 10 into the trees in the middle and on the bottom of your drawing. The tree on the bottom, unsplit, needs to be split all the way to the root in the same way as the middle tree, because the two-pass algorithm didn't leave any space. In the middle tree, inserting 10 still causes a split, but it doesn't extend all the way up because the top two layers of the tree are very spacious.
There is an important caveat, though. Let t be the minimum number of children per node. For the two pass algorithm, the maximum number of children a node can have needs to be at least u = 2t - 1. If it is less, like 2t - 2, then splitting a full node (2t - 3 elements), even with the additional element to insert, will not be able to make two non-deficient nodes. The one pass algorithm requires a higher maximum, u = 2t. This is because the two-pass algorithm always has an element on hand to cancel exactly one deficiency. The one-pass algorithm does not have this ability, as it sometimes splits nodes unnecessarily, so it can't stick the element it's holding into one of the deficiencies. It might not belong there.
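For illustration, here is a rough sketch of the one-pass idea on a toy node layout (minimum degree T; the names are mine, not from CLRS): every full node met on the way down is split immediately, so its parent always has room to absorb the promoted median and we never have to turn around.

```python
T = 2                                   # minimum degree: nodes hold T-1 .. 2T-1 keys

class Node:
    def __init__(self, keys=None, children=None):
        self.keys = keys or []
        self.children = children or []  # empty list => leaf node

def split_child(parent, i):
    """Split the full child parent.children[i]; the parent is known to have room."""
    child = parent.children[i]
    median = child.keys[T - 1]
    right = Node(child.keys[T:], child.children[T:] if child.children else [])
    child.keys = child.keys[:T - 1]
    if child.children:
        child.children = child.children[:T]
    parent.keys.insert(i, median)
    parent.children.insert(i + 1, right)

def insert(root, key):
    """One-pass insertion: split any full node encountered on the way down."""
    if len(root.keys) == 2 * T - 1:     # a full root is split before descending
        root = Node([], [root])
        split_child(root, 0)
    node = root
    while node.children:
        i = sum(1 for k in node.keys if k < key)
        if len(node.children[i].keys) == 2 * T - 1:
            split_child(node, i)        # pre-emptive split: the parent has room by induction
            if key > node.keys[i]:
                i += 1
        node = node.children[i]
    node.keys.insert(sum(1 for k in node.keys if k < key), key)
    return root                         # possibly a brand-new root
```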
I've implemented B-trees several times, and have never split nodes on the way down.
Usually I do the insert recursively, such that node->insert(key,data) can return a new key to insert in the parent. The parent calls insert on the child node, and if the child splits it returns a new key to the parent. If the parent splits then it returns a key to its parent, etc.
I've found that the insert implementation can stay pretty clean this way.
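A rough sketch of that recursive, split-on-the-way-back-up style (a toy node layout with minimum degree T; this is illustrative, not the answerer's actual code) might look like this: each call returns the promoted median and the new right sibling whenever the node it was called on splits.

```python
T = 2                                        # minimum degree: nodes hold T-1 .. 2T-1 keys

class Node:
    def __init__(self, keys=None, children=None):
        self.keys = keys or []
        self.children = children or []       # empty list => leaf node

def insert(node, key):
    """Recursive insert. Returns None, or (median, right_node) when this node
    split and the caller must place the median one level up."""
    i = sum(1 for k in node.keys if k < key)
    if node.children:                        # internal node: descend first
        promoted = insert(node.children[i], key)
        if promoted is None:
            return None
        median, right = promoted             # the child split: absorb its median
        node.keys.insert(i, median)
        node.children.insert(i + 1, right)
    else:                                    # leaf: just place the key
        node.keys.insert(i, key)
    if len(node.keys) <= 2 * T - 1:
        return None                          # no overflow, nothing for the parent to do
    mid = len(node.keys) // 2                # overflow: split and pass the median up
    median = node.keys[mid]
    right = Node(node.keys[mid + 1:],
                 node.children[mid + 1:] if node.children else [])
    node.keys = node.keys[:mid]
    if node.children:
        node.children = node.children[:mid + 1]
    return median, right

def insert_into_tree(root, key):
    """Wrapper that grows a new root when the old one splits."""
    promoted = insert(root, key)
    if promoted is None:
        return root
    median, right = promoted
    return Node([median], [root, right])

root = Node()
for k in "QWERTYUIOPASDFGH":
    root = insert_into_tree(root, k)
```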

B-Tree deletion in a single pass

Is it possible to remove an element from a B-Tree in a single pass?
Wikipedia says "Do a single pass down the tree, but before entering (visiting) a node, restructure the tree so that once the key to be deleted is encountered, it can be deleted without triggering the need for any further restructuring"
but doesn't say anything about how it is done.
Google only gives me the process of removing an element and having to restructure the tree.
Cormen also doesn't say anything about it.
It's possible in a variant of the B+ tree called the PO-B+ tree. In this "preparatory operations B+ tree" the number of keys in a node may be between n-1 and 2n+1, rather than n and 2n in the usual B+ tree (quoted from the paper). For the delete operation (called PO-delete in the paper) you just merge (called "catenate" in the paper) all the nodes (except the root) that could be merged (or take a key from a neighbor) while moving toward the leaf. For the PO-insert operation you split all the nodes (including the root). The description is given in the paper.
This preemptive restructuring only makes sense if the tree is used in a multi-threaded environment, as it reduces the locking and increases the concurrency. It does not pay off if the tree is accessed by only one actor.
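The following is only a sketch of the general "restructure before descending" idea for deletion, on a toy CLRS-style node (it is not the PO-B+ tree from the paper): before stepping into a child that holds the minimum number of keys, borrow from a sibling or catenate with one, so that the eventual removal can never propagate back up.

```python
T = 2   # minimum degree: every non-root node keeps at least T-1 keys

class Node:
    def __init__(self, keys=None, children=None):
        self.keys = keys or []
        self.children = children or []    # empty list => leaf node

def fix_before_descending(parent, i):
    """Ensure parent.children[i] has at least T keys before we walk into it,
    so a later removal can never leave it deficient. Returns the (possibly
    changed) index of the child to descend into."""
    child = parent.children[i]
    if len(child.keys) >= T:
        return i
    left = parent.children[i - 1] if i > 0 else None
    right = parent.children[i + 1] if i + 1 < len(parent.children) else None
    if left and len(left.keys) >= T:          # borrow through the parent from the left
        child.keys.insert(0, parent.keys[i - 1])
        parent.keys[i - 1] = left.keys.pop()
        if left.children:
            child.children.insert(0, left.children.pop())
        return i
    if right and len(right.keys) >= T:        # borrow through the parent from the right
        child.keys.append(parent.keys[i])
        parent.keys[i] = right.keys.pop(0)
        if right.children:
            child.children.append(right.children.pop(0))
        return i
    # Neither sibling can spare a key: merge ("catenate") with one of them,
    # pulling the separating key down from the parent.
    j = i - 1 if left else i
    a, b = parent.children[j], parent.children[j + 1]
    a.keys += [parent.keys.pop(j)] + b.keys
    a.children += b.children
    parent.children.pop(j + 1)
    return j
```

A full delete would call this at every internal node for the child it is about to enter; by the time the key is reached it can be removed with no pass back up, and if the root is left with no keys its single remaining child becomes the new root.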

Insertion In 2-3-4 tree

Consider the following 2-3-4 tree (i.e., B-tree with a minimum degree of two) in
which each data item is a letter. The usual alphabetical ordering of letters is used
in constructing the tree.
What is the result of inserting G in the above tree?
I am getting the answer as
But the answer in the solution key is
Can anyone explain how to get the answer provided by the solution key?
As long as the invariants are not violated, the operation is technically valid. The insertion algorithm in CLRS splits on the way down, so it would split the root like you did.
However, another implementation might observe that the second child is not full while the first is. That means the "rotation" can be done and the root's key count is unaffected. The rotation involves pushing L down into the second child (prepending it) and pulling I up into L's previous place in the root. Now the first child has only two entries and you can insert into it.
Animated insertion using the CLRS method you used
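For what it's worth, here is a sketch of that rotation on a toy node layout (the question's actual tree is in an image not reproduced here, so the shapes below are hypothetical, and the names Node and rotate_right are mine):

```python
class Node:
    def __init__(self, keys=None, children=None):
        self.keys = keys or []
        self.children = children or []   # empty list => leaf node

def rotate_right(parent, i):
    """Move the last key of the full child i up into the parent, and push the old
    separator parent.keys[i] down onto the front of sibling i+1."""
    child, sibling = parent.children[i], parent.children[i + 1]
    sibling.keys.insert(0, parent.keys[i])     # push the separator down (prepend)
    parent.keys[i] = child.keys.pop()          # pull the child's last key up
    if child.children:
        sibling.children.insert(0, child.children.pop())

# Hypothetical shapes: the root holds L, the first child is full, the second has room.
root = Node(keys=['L'],
            children=[Node(keys=['C', 'F', 'I']),
                      Node(keys=['N'])])
rotate_right(root, 0)
# root.keys is now ['I']; the children hold ['C', 'F'] and ['L', 'N'],
# so G can be inserted into the first child without splitting the root.
```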

Disadvantages of top-down node splitting on insertion into B+ tree

For a B+ tree insertion, why would you traverse down the tree and then back upwards, splitting the parents?
Wikipedia suggests this method of insertion:
Perform a search to determine what bucket the new record should go into.
If the bucket is not full (at most b - 1 entries after the insertion), add the record.
Otherwise, split the bucket.
Allocate a new leaf and move half the bucket's elements to the new bucket.
Insert the new leaf's smallest key and address into the parent.
If the parent is full, split it too.
Add the middle key to the parent node.
Repeat until a parent is found that need not split.
If the root splits, create a new root which has one key and two pointers.
Why would you traverse down the tree and then go back up performing the splits? Why not split the nodes as you encounter them on the way down?
To me, the proposed method performs twice the work and requires more bookkeeping as well.
Can anyone explain why this is the preferred method for insertion as opposed to splitting on the way down and what the disadvantages are for inserting during the traversal?
You have to backtrack up the tree because you don't actually know whether a split is required at the lowest level until you get there.
It's all there in the phrase "If the bucket is not full, ...".
You should also be aware that it's nowhere near twice the work. Since you're remembering all sorts of stuff on the way down (node pointers, indexes within the node, and so on), there's not as much calculation or searching on the way back up.
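As a sketch of that bookkeeping (assuming nodes with `keys` and `children` lists; this is illustrative, not a particular library's layout), the descent can simply record a stack of (node, child index) pairs, so the pass back up after a leaf split is a stack unwind rather than a second search:

```python
import bisect

def find_leaf(root, key):
    """Descend to the leaf bucket for `key`, remembering the path taken."""
    path = []                              # (node, index of child we descended into)
    node = root
    while node.children:                   # stop at the leaf bucket
        i = bisect.bisect_right(node.keys, key)
        path.append((node, i))
        node = node.children[i]
    return node, path

# After inserting into the returned leaf, an overflow is handled by popping
# (parent, i) pairs off `path` and splitting upward until a parent has room.
```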

Data structure for range query

I was recently asked a coding question on the below problem.
I have some solutions to this problem but I am not very sure those are the most efficient.
Problem:
Write a program to track a set of text ranges. The start point and end point will be strings.
Text range example: [AbA-Ef]
Aa would fall before this range
AB would fall inside this range
etc.
String comparison would be like 'A' < 'a' < 'B' < 'b' ... 'Z' < 'z'
We need to support the following operations on these ranges:
Add range - this should merge the ranges if applicable
Delete range - this deletes the range from the tracked ranges and recomputes the ranges
Query range - given a character, the function should return whether it is part of any of the tracked ranges or not.
Note that the tracked ranges can be discontinuous.
My solutions:
I came up with two approaches.
Store ranges as a doubly linked list, or
Store ranges as some sort of balanced tree with the leaf nodes holding the actual data, interconnected as a linked list.
Do you think these solutions are good enough, or can you think of a better way of doing this so that those three APIs give the best performance?
You are probably looking for an interval tree.
Use the data structure with your custom comparator to indicate "What's on range", and you will be able to do the required operations efficiently.
Note, an interval tree is actually an efficient way to implement your 2nd idea (storing ranges as some sort of balanced tree).
I'm not clear on what the "delete range" operation is supposed to do. Does it:
Delete a previously inserted range, and recompute the merge of the remaining ranges?
Or stop tracking the deleted range, regardless of how many times parts of it have been added?
That doesn't make a huge difference algorithmically; it's just bookkeeping. But it's important to clarify. Also, are the ranges closed or half-open? (Another detail which doesn't affect the algorithm but does affect the implementation).
The basic approach to this problem is to merge the tracked set into a sorted list of disjoint (non-overlapping) ranges; either as a vector or a binary search tree, or basically any structure which supports O(log n) searching.
One approach is to put both endpoints of every disjoint range into the datastructure. To find out if a target value is in a range, find the index of the smallest endpoint greater than the target. If the index is odd the target is in some range; even means it's outside.
Alternatively, index all the disjoint ranges by their start points; find the target by searching for the largest start-point not greater than the target, and then compare the target with the associated end-point.
I usually use the first approach with sorted vectors, which are plausible if (a) space utilization is important and (b) insert and merge are relatively rare. With binary search trees, I go for the second approach. But they differ only in details and constants.
Merging and deleting are not difficult, but there are an annoying number of cases. You start by finding the ranges corresponding to the endpoints of the range to be inserted/deleted (using the standard find operation), remove all the ranges in between the two, and fiddle with the endpoints to correct the partially overlapping ranges. While the find operation is always O(log n), the tree/vector manipulation is O(n) (if the inserted/deleted range is large, anyway).
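Here is a minimal sketch of the first approach with a sorted vector of endpoints, using Python's bisect. It assumes half-open ranges [lo, hi), and `collate` is just one possible encoding of the question's ordering rule ('A' < 'a' < 'B' < 'b' < ...); neither detail comes from the answer above.

```python
import bisect

def collate(s):
    # Sort primarily by the upper-cased letter, then put upper case before lower case,
    # which yields 'A' < 'a' < 'B' < 'b' < ... as stated in the question.
    return tuple((c.upper(), c.islower()) for c in s)

class RangeSet:
    """Disjoint half-open ranges kept as a flat sorted list of endpoint keys
    [a1, b1, a2, b2, ...]; an odd bisect index means "inside some range"."""
    def __init__(self):
        self.ends = []

    def query(self, s):
        return bisect.bisect_right(self.ends, collate(s)) % 2 == 1

    def add(self, lo, hi):
        lo, hi = collate(lo), collate(hi)
        i = bisect.bisect_left(self.ends, lo)     # endpoints strictly below lo
        j = bisect.bisect_right(self.ends, hi)    # endpoints at or below hi
        # keep lo/hi only when they fall in a gap; everything in between is swallowed
        self.ends[i:j] = ([lo] if i % 2 == 0 else []) + ([hi] if j % 2 == 0 else [])

    def delete(self, lo, hi):
        lo, hi = collate(lo), collate(hi)
        i = bisect.bisect_left(self.ends, lo)
        j = bisect.bisect_right(self.ends, hi)
        # keep lo/hi only when they cut an existing range in two
        self.ends[i:j] = ([lo] if i % 2 == 1 else []) + ([hi] if j % 2 == 1 else [])

ranges = RangeSet()
ranges.add("AbA", "Ef")
ranges.add("Ca", "Fa")                   # overlaps, so the two ranges merge into one
assert ranges.query("D") and not ranges.query("Aa")
```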
Most languages, including Java and C++, have some sort of ordered map or ordered set in which you can both look up a value and find the next value after (or the first value before) a given value. You could use this as a building block: if it contains a set of disjoint ranges, then its elements alternate between the least element of a range and the greatest element of that range, followed by the least element of the next range, and so on. When you add a range you can check whether you have preserved this property; if not, you need to merge ranges. Similarly, you want to preserve the property when you delete. Then you can query by just looking to see whether there is a least element just before your query point and a greatest element just after.
If you want to create your own datastructure from scratch, I would think about some sort of radix trie structure, because this avoids doing lots of repeated string comparisons.
I think you would go for a B+ tree; it's the same as what you mentioned as your second approach.
Here are some properties of a B+ tree:
All data is stored in leaf nodes.
Every leaf is at the same level.
All leaf nodes have links to other leaf nodes.
Here are a few applications of B+ trees:
It reduces the number of I/O operations required to find an element in the tree.
Often used in the implementation of database indexes.
The primary value of a B+ tree is in storing data for efficient retrieval in a block-oriented storage context — in particular, file systems.
NTFS uses B+ trees for directory indexing.
Basically, it helps with range query lookups and minimizes tree traversal.
