Problem:
Given a large (~100 million) list of unsigned 32-bit integers, an unsigned 32-bit integer input value, and a maximum Hamming Distance, return all list members that are within the specified Hamming Distance of the input value.
Actual data structure to hold the list is open, performance requirements dictate an in-memory solution, cost to build the data structure is secondary, low cost to query the data structure is critical.
Example:
For a maximum Hamming Distance of 1 (values typically will be quite small)
And input:
00001000100000000000000001111101
The values:
01001000100000000000000001111101
00001000100000000010000001111101
should match because there is only 1 position in which the bits are different.
11001000100000000010000001111101
should not match because 3 bit positions are different.
My thoughts so far:
For the degenerate case of a Hamming Distance of 0, just use a sorted list and do a binary search for the specific input value.
If the Hamming Distance would only ever be 1, I could flip each bit in the original input and repeat the above 32 times.
How can I efficiently (without scanning the entire list) discover list members when the maximum Hamming Distance is greater than 1?
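For reference, the distance-0/1 idea above might look like this in C (a sketch added for illustration; the helper names are mine, not from the original post):

#include <stdint.h>
#include <stddef.h>

/* Distance 0: binary search in the sorted list. */
static int contains(const uint32_t *a, size_t n, uint32_t key)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key) lo = mid + 1; else hi = mid;
    }
    return lo < n && a[lo] == key;
}

/* Distance <= 1: check the input itself plus its 32 one-bit flips. */
static void query_dist1(const uint32_t *a, size_t n, uint32_t q,
                        void (*report)(uint32_t))
{
    if (contains(a, n, q)) report(q);
    for (int b = 0; b < 32; b++)
        if (contains(a, n, q ^ (1u << b))) report(q ^ (1u << b));
}

The same enumeration idea generalizes to larger distances but blows up combinatorially, which is what the answers below address.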
Question: What do we know about the Hamming distance d(x,y)?
Answer:
It is non-negative: d(x,y) ≥ 0
It is only zero for identical inputs: d(x,y) = 0 ⇔ x = y
It is symmetric: d(x,y) = d(y,x)
It obeys the triangle inequality, d(x,z) ≤ d(x,y) + d(y,z)
Question: Why do we care?
Answer: Because it means that the Hamming distance is a metric for a metric space. There are algorithms for indexing metric spaces.
Metric tree (Wikipedia)
BK-tree (Wikipedia)
M-tree (Wikipedia)
VP-tree (Wikipedia)
Cover tree (Wikipedia)
You can also look up algorithms for "spatial indexing" in general, armed with the knowledge that your space is not Euclidean but it is a metric space. Many books on this subject cover string indexing using a metric such as the Hamming distance.
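To make the metric-tree option concrete, here is a minimal BK-tree sketch in C over 32-bit keys with the Hamming metric (my own illustration, not code from any particular library):

#include <stdint.h>
#include <stdlib.h>

struct bk_node {
    uint32_t key;
    struct bk_node *child[33];        /* child[d] holds keys at distance d */
};

static int hamming(uint32_t a, uint32_t b) { return __builtin_popcount(a ^ b); }

static struct bk_node *bk_insert(struct bk_node *root, uint32_t key)
{
    if (!root) {
        root = calloc(1, sizeof *root);
        root->key = key;
        return root;
    }
    int d = hamming(root->key, key);
    if (d > 0)                        /* d == 0 would be an exact duplicate */
        root->child[d] = bk_insert(root->child[d], key);
    return root;
}

/* Triangle inequality: only children whose edge distance lies in
   [d - maxd, d + maxd] can contain results. */
static void bk_query(const struct bk_node *root, uint32_t q, int maxd,
                     void (*report)(uint32_t))
{
    if (!root) return;
    int d = hamming(root->key, q);
    if (d <= maxd) report(root->key);
    int lo = d - maxd < 1 ? 1 : d - maxd;
    int hi = d + maxd > 32 ? 32 : d + maxd;
    for (int i = lo; i <= hi; i++)
        bk_query(root->child[i], q, maxd, report);
}

The pruning in bk_query is exactly the triangle-inequality argument above; the smaller the maximum distance, the fewer shells need to be visited.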
Footnote: If you are comparing the Hamming distance of fixed width strings, you may be able to get a significant performance improvement by using assembly or processor intrinsics. For example, with GCC (manual) you do this:
static inline int distance(unsigned x, unsigned y)
{
return __builtin_popcount(x^y);
}
If you then inform GCC that you are compiling for a computer with SSE4a, then I believe that should reduce to just a couple opcodes.
Edit: According to a number of sources, this is sometimes/often slower than the usual mask/shift/add code. Benchmarking shows that on my system, a C version outperforms GCC's __builtin_popcount by about 160%.
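For reference, the "usual mask/shift/add code" is typically a SWAR popcount along these lines (standard bit-twiddling, not necessarily the exact version that was benchmarked):

#include <stdint.h>

static inline int popcount32(uint32_t x)
{
    x = x - ((x >> 1) & 0x55555555u);                  /* pairwise sums   */
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);  /* 4-bit sums      */
    x = (x + (x >> 4)) & 0x0F0F0F0Fu;                  /* 8-bit sums      */
    return (int)((x * 0x01010101u) >> 24);             /* add the 4 bytes */
}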
Addendum: I was curious about the problem myself, so I profiled three implementations: linear search, BK tree, and VP tree. Note that VP and BK trees are very similar. The children of a node in a BK tree are "shells" of trees containing points that are each a fixed distance from the tree's center. A node in a VP tree has two children, one containing all the points within a sphere centered on the node's center and the other child containing all the points outside. So you can think of a VP node as a BK node with two very thick "shells" instead of many finer ones.
The results were captured on my 3.2 GHz PC, and the algorithms do not attempt to utilize multiple cores (which should be easy). I chose a database size of 100M pseudorandom integers. Results are the average of 1000 queries for distance 1..5, and 100 queries for 6..10 and the linear search.
Database: 100M pseudorandom integers
Number of tests: 1000 for distance 1..5, 100 for distance 6..10 and linear
Results: Average # of query hits (very approximate)
Speed: Number of queries per second
Coverage: Average percentage of database examined per query
                 -- BK Tree --      -- VP Tree --     -- Linear --
Dist   Results    Speed    Cov       Speed    Cov      Speed   Cov
   1      0.90     3800   0.048%      4200   0.048%
   2        11      300   0.68%        330   0.65%
   3       130       56   3.8%          63   3.4%
   4       970       18   12%           22   10%
   5      5700      8.5   26%           10   22%
   6     2.6e4      5.2   42%          6.0   37%
   7     1.1e5      3.7   60%          4.1   54%
   8     3.5e5      3.0   74%          3.2   70%
   9     1.0e6      2.6   85%          2.7   82%
  10     2.5e6      2.3   91%          2.4   90%
 any                                                    2.2   100%
In your comment, you mentioned:
I think BK-trees could be improved by generating a bunch of BK-trees with different root nodes, and spreading them out.
I think this is exactly the reason why the VP tree performs (slightly) better than the BK tree. Being "deeper" rather than "shallower", it compares against more points rather than using finer-grained comparisons against fewer points. I suspect that the differences are more extreme in higher dimensional spaces.
A final tip: leaf nodes in the tree should just be flat arrays of integers for a linear scan. For small sets (maybe 1000 points or fewer) this will be faster and more memory efficient.
I wrote a solution where I represent the input numbers in a bitset of 2^32 bits, so I can check in O(1) whether a certain number is in the input. Then for a queried number and maximum distance, I recursively generate all numbers within that distance and check them against the bitset.
For example, for maximum distance 5, this is 242,825 numbers (sum over d = 0 to 5 of C(32, d)). For comparison, Dietrich Epp's VP-tree solution, for example, goes through 22% of the 100 million numbers, i.e., through 22 million numbers.
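In outline, the approach might look like this in C (a hedged sketch of the idea, not the tree.c code linked below; the bitset itself costs 2^32 bits = 512 MiB):

#include <stdint.h>
#include <stdlib.h>

static uint64_t *bits;                /* 2^32 bits = 2^26 words = 512 MiB */

static void bitset_build(const uint32_t *data, size_t n)
{
    bits = calloc((size_t)1 << 26, sizeof(uint64_t));
    for (size_t i = 0; i < n; i++)
        bits[data[i] >> 6] |= 1ull << (data[i] & 63);
}

static int bitset_has(uint32_t v)
{
    return (bits[v >> 6] >> (v & 63)) & 1;
}

/* Flip a subset of at most `dist` bit positions >= first_bit; each
   candidate value is generated exactly once, so hits are reported once. */
static void bitset_query(uint32_t cur, int dist, int first_bit,
                         void (*report)(uint32_t))
{
    if (bitset_has(cur))
        report(cur);
    if (dist == 0)
        return;
    for (int b = first_bit; b < 32; b++)
        bitset_query(cur ^ (1u << b), dist - 1, b + 1, report);
}

/* Usage: bitset_build(data, n); then bitset_query(q, maxdist, 0, report); */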
I used Dietrich's code/solutions as the basis to add my solution and compare it with his. Here are speeds, in queries per second, for maximum distances up to 10:
Dist      BK Tree     VP Tree         Bitset   Linear
   1    10,133.83   15,773.69   1,905,202.76     4.73
   2       677.78    1,006.95     218,624.08     4.70
   3       113.14      173.15      27,022.32     4.76
   4        34.06       54.13       4,239.28     4.75
   5        15.21       23.81         932.18     4.79
   6         8.96       13.23         236.09     4.78
   7         6.52        8.37          69.18     4.77
   8         5.11        6.15          23.76     4.68
   9         4.39        4.83           9.01     4.47
  10         3.69        3.94           2.82     4.13

Prepare      4.1s       21.0s          1.52s    0.13s
times    (for building the data structure before the queries)
For small distances, the bitset solution is by far the fastest of the four. Question author Eric commented below that the largest distance of interest would probably be 4-5. Naturally, my bitset solution becomes slower for larger distances, even slower than the linear search (for distance 32, it would go through 2^32 numbers). But for distance 9 it still easily leads.
I also modified Dietrich's testing. Each of the above results is for letting the algorithm solve at least three queries and as many queries as it can in about 15 seconds (I do rounds with 1, 2, 4, 8, 16, etc queries, until at least 10 seconds have passed in total). That's fairly stable, I even get similar numbers for just 1 second.
My CPU is an i7-6700. My code (based on Dietrich's) is here (ignore the documentation there at least for now, not sure what to do about that, but the tree.c contains all the code and my test.bat shows how I compiled and ran (I used the flags from Dietrich's Makefile)). Shortcut to my solution.
One caveat: My query results contain numbers only once, so if the input list contains duplicate numbers, that may or may not be desired. In question author Eric's case, there were no duplicates (see comment below). In any case, this solution might be good for people who either have no duplicates in the input or don't want or need duplicates in the query results (I think it's likely that the pure query results are only a means to an end and then some other code turns the numbers into something else, for example a map mapping a number to a list of files whose hash is that number).
A common approach (at least common to me) is to divide your bit string into several chunks and query on these chunks for an exact match as a pre-filter step. If you work with files, you create as many files as you have chunks (e.g. 4 here), each a permutation of the slices with a different chunk in front, and then sort the files. You can use a binary search, and you can even expand your search above and below a matching chunk for bonus matches.
You can then perform a bitwise Hamming distance computation on the returned results, which should be only a small subset of your overall dataset. This can be done using data files or SQL tables.
So to recap: say you have a bunch of 32-bit strings in a DB or in files and you want to find every hash that is within a Hamming distance of 3 or less of your "query" bit string:
create a table with four columns: each will contain an 8-bit slice (as a string or int) of the 32-bit hashes, islice 1 to 4. Or, if you use files, create four files, each a permutation of the slices with one "islice" at the front of each "row"
slice your query bit string the same way into qslice 1 to 4.
query this table such that any of qslice1=islice1 or qslice2=islice2 or qslice3=islice3 or qslice4=islice4. This is guaranteed to return every string within 3 bits (4 slices - 1) of the query string, plus some more distant candidates that the next step filters out. If using files, do a binary search in each of the four permuted files for the same results.
for each returned bit string, compute the exact Hamming distance pair-wise with your query bit string (reconstructing the index-side bit strings from the four slices, either from the DB or from a permuted file)
The number of operations in step 4 should be much smaller than a full pair-wise Hamming computation against your whole table, and it is very efficient in practice.
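Here is a rough in-memory sketch of steps 1-4 in C (my own illustration of the idea, not the Moz/simhash code; it assumes 4 byte-wide slices and a maximum distance of 3, which is what the pigeonhole guarantee covers):

#include <stdint.h>
#include <stdlib.h>

struct bucket { uint32_t *v; size_t n, cap; };
static struct bucket idx[4][256];                /* idx[slice position][slice value] */

static uint8_t slice(uint32_t x, int pos) { return (x >> (8 * pos)) & 0xFF; }

static void push(struct bucket *b, uint32_t x)
{
    if (b->n == b->cap) {
        b->cap = b->cap ? b->cap * 2 : 16;
        b->v = realloc(b->v, b->cap * sizeof *b->v);
    }
    b->v[b->n++] = x;
}

/* Step 1: index every hash under each of its four byte slices. */
static void build(const uint32_t *data, size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (int pos = 0; pos < 4; pos++)
            push(&idx[pos][slice(data[i], pos)], data[i]);
}

/* Steps 2-4: any hash within distance 3 of q shares at least one byte
   with q (pigeonhole over 4 slices), so scanning those 4 buckets and
   verifying with popcount finds all of them.  For maxd > 3 this
   pre-filter would start missing candidates. */
static void query(uint32_t q, int maxd, void (*report)(uint32_t))
{
    for (int pos = 0; pos < 4; pos++) {
        struct bucket *b = &idx[pos][slice(q, pos)];
        for (size_t i = 0; i < b->n; i++) {
            if (__builtin_popcount(b->v[i] ^ q) > maxd)
                continue;
            int dup = 0;                          /* already reported from an */
            for (int p = 0; p < pos; p++)         /* earlier matching slice?  */
                if (slice(b->v[i], p) == slice(q, p)) { dup = 1; break; }
            if (!dup) report(b->v[i]);
        }
    }
}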
Furthermore, it is easy to shard the files into smaller files as needed for more speed through parallelism.
Now of course in your case, you are looking for a self-join of sorts, that is, all the values that are within some distance of each other. The same approach still works IMHO, though you will have to expand up and down from a starting point for permutations (using files or lists) that share the starting chunk, and compute the Hamming distance for the resulting cluster.
If running in memory instead of from files, your 100M 32-bit strings dataset would be in the range of 4 GB. Hence the four permuted lists may need about 16 GB+ of RAM. Though I get excellent results with memory-mapped files instead, and much less RAM, for similar-size datasets.
There are open source implementations available. The best in the space is IMHO the one done for Simhash by Moz, in C++, but designed for 64-bit strings rather than 32-bit.
This bounded Hamming distance approach was first described, AFAIK, by Moses Charikar in his seminal "simhash" paper and the corresponding Google patent:
APPROXIMATE NEAREST NEIGHBOR SEARCH IN HAMMING SPACE
[...]
Given bit vectors consisting of d bits each, we choose N = O(n^(1/(1+ε))) random permutations of the bits. For each random permutation σ, we maintain a sorted order O_σ of the bit vectors, in lexicographic order of the bits permuted by σ. Given a query bit vector q, we find the approximate nearest neighbor by doing the following:
For each permutation σ, we perform a binary search on O_σ to locate the two bit vectors closest to q (in the lexicographic order obtained by bits permuted by σ). We now search in each of the sorted orders O_σ, examining elements above and below the position returned by the binary search in order of the length of the longest prefix that matches q.
Monika Henzinger expanded on this in her paper "Finding near-duplicate web pages: a large-scale evaluation of algorithms":
3.3 The Results for Algorithm C
We partitioned the bit string of each page into 12 non-overlapping 4-byte pieces, creating 20B pieces, and computed the C-similarity of all pages that had at least one piece in common. This approach is guaranteed to find all pairs of pages with difference up to 11, i.e., C-similarity 373, but might miss some for larger differences.
This is also explained in the paper Detecting Near-Duplicates for Web Crawling by Gurmeet Singh Manku, Arvind Jain, and Anish Das Sarma:
THE HAMMING DISTANCE PROBLEM
Definition: Given a collection of f-bit fingerprints and a query fingerprint F, identify whether an existing fingerprint differs from F in at most k bits. (In the batch-mode version of the above problem, we have a set of query fingerprints instead of a single query fingerprint.)
[...]
Intuition: Consider a sorted table of 2^d f-bit truly random fingerprints. Focus on just the most significant d bits in the table. A listing of these d-bit numbers amounts to "almost a counter" in the sense that (a) quite a few 2^d bit-combinations exist, and (b) very few d-bit combinations are duplicated. On the other hand, the least significant f − d bits are "almost random".
Now choose d′ such that |d′ − d| is a small integer. Since the table is sorted, a single probe suffices to identify all fingerprints which match F in the d′ most significant bit-positions. Since |d′ − d| is small, the number of such matches is also expected to be small. For each matching fingerprint, we can easily figure out if it differs from F in at most k bit-positions or not (these differences would naturally be restricted to the f − d′ least-significant bit-positions).
The procedure described above helps us locate an existing fingerprint that differs from F in k bit-positions, all of which are restricted to be among the least significant f − d′ bits of F. This takes care of a fair number of cases. To cover all the cases, it suffices to build a small number of additional sorted tables, as formally outlined in the next Section.
Note: I posted a similar answer to a related DB-only question
You could pre-compute every possible variation of your original list within the specified hamming distance, and store it in a bloom filter. This gives you a fast "NO" but not necessarily a clear answer about "YES."
For YES, store a list of all the original values associated with each position in the bloom filter, and go through them one at a time. Optimize the size of your bloom filter for speed / memory trade-offs.
Not sure if it all works exactly, but seems like a good approach if you've got runtime RAM to burn and are willing to spend a very long time in pre-computation.
How about sorting the list and then doing a binary search in that sorted list for each of the possible values within your Hamming Distance?
One possible approach to solve this problem is using a Disjoint-set data structure. The idea is to merge list members with Hamming distance <= k into the same set. Here is the outline of the algorithm:
For each list member calculate every possible value with Hamming distance <= k. For k=1, there are 32 values (for 32-bit values). For k=2, 32 + 32*31/2 values.
For each calculated value, test if it is in the original input. You can use an array with size 2^32 or a hash map to do this check.
If the value is in the original input, do a "union" operation with the list member.
Keep track of the number of union operations executed.
You start the algorithm with N disjoint sets (where N is the number of elements in the input). Each time you execute a union operation, you decrease the number of disjoint sets by 1. When the algorithm terminates, the disjoint-set data structure will have all the values with Hamming distance <= k grouped into disjoint sets. This disjoint-set data structure can be calculated in almost linear time.
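A hedged sketch of that outline in C for the k = 1 case, using a sorted copy of the list plus binary search as the membership check (all names here are mine):

#include <stdint.h>
#include <stdlib.h>

static size_t *parent;

static size_t find_root(size_t x)
{
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];          /* path halving */
        x = parent[x];
    }
    return x;
}

static void unite(size_t a, size_t b)
{
    a = find_root(a);
    b = find_root(b);
    if (a != b) parent[a] = b;                  /* union (rank omitted for brevity) */
}

/* a[] must be sorted ascending; returns the index of key or (size_t)-1. */
static size_t index_of(const uint32_t *a, size_t n, uint32_t key)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key) lo = mid + 1; else hi = mid;
    }
    return (lo < n && a[lo] == key) ? lo : (size_t)-1;
}

/* Union members whose Hamming distance is exactly 1 (the k = 1 case). */
static void cluster(const uint32_t *a, size_t n)
{
    parent = malloc(n * sizeof *parent);
    for (size_t i = 0; i < n; i++) parent[i] = i;
    for (size_t i = 0; i < n; i++)
        for (int b = 0; b < 32; b++) {
            size_t j = index_of(a, n, a[i] ^ (1u << b));
            if (j != (size_t)-1) unite(i, j);
        }
}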
Here's a simple idea: do a byte-wise radix sort of the 100m input integers, most significant byte first, keeping track of bucket boundaries on the first three levels in some external structure.
To query, start with a distance budget of d and your input word w. For each bucket in the top level with byte value b, calculate the Hamming distance d_0 between b and the high byte of w. Recursively search that bucket with a budget of d - d_0: that is, for each byte value b', let d_1 be the Hamming distance between b' and the second byte of w. Recursively search into the third layer with a budget of d - d_0 - d_1, and so on.
Note that the buckets form a tree. Whenever your budget becomes negative, stop searching that subtree. If you recursively descend into a leaf without blowing your distance budget, that leaf value should be part of the output.
Here's one way to represent the external bucket boundary structure: have an array of length 16_777_216 (= (2**8)**3 = 2**24), where the element at index i is the starting index of the bucket containing values in range [256*i, 256*i + 255]. To find the index one beyond the end of that bucket, look up at index i+1 (or use the end of the array for i + 1 = 2**24).
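A hedged sketch of the query walk in C, assuming the sorted array data[0..n) and a start[] index with 2^24 + 1 entries (the extra entry holding n), as described above:

#include <stdint.h>

static int hd8(uint8_t a, uint8_t b) { return __builtin_popcount(a ^ b); }

static void prefix_query(const uint32_t *data, const uint32_t *start,
                         uint32_t w, int d, void (*report)(uint32_t))
{
    for (int b0 = 0; b0 < 256; b0++) {
        int r0 = d - hd8(b0, w >> 24);
        if (r0 < 0) continue;                       /* prune this subtree  */
        for (int b1 = 0; b1 < 256; b1++) {
            int r1 = r0 - hd8(b1, (w >> 16) & 0xFF);
            if (r1 < 0) continue;
            for (int b2 = 0; b2 < 256; b2++) {
                int r2 = r1 - hd8(b2, (w >> 8) & 0xFF);
                if (r2 < 0) continue;
                uint32_t i = ((uint32_t)b0 << 16) | (b1 << 8) | b2;
                for (uint32_t j = start[i]; j < start[i + 1]; j++)
                    if (hd8(data[j] & 0xFF, w & 0xFF) <= r2)
                        report(data[j]);            /* leaf within budget  */
            }
        }
    }
}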
Memory budget is 100m * 4 bytes per word = 400 MB for the inputs, and 2**24 * 4 bytes per address = 64 MiB for the indexing structure, or just shy of half a gig in total. The indexing structure is about a 16% overhead on the raw data. Of course, once you've constructed the indexing structure you only need to store the lowest byte of each input word, since the other three are implicit in the index into the indexing structure, for a total of ~(64 + 100) MB.
If your input is not uniformly distributed, you could permute the bits of your input words with a (single, universally shared) permutation which puts all the entropy towards the top of the tree. That way, the first level of pruning will eliminate larger chunks of the search space.
I tried some experiments, and this performs about as well as linear search, sometimes even worse. So much for this fancy idea. Oh well, at least it's memory efficient.
I have an array A (size <= 10^5) of numbers (<= 10^8), and I need to answer some queries (50,000 of them): for a given L, R, how many subsets of the elements in the range [L, R] have a XOR with 0 or 1 bits set (i.e. zero or a power of 2)? Point modifications to the array are also made between the queries, so I can't really do offline processing or use techniques like square root decomposition.
I have an approach where I use DP to calculate the answer for a given range, something along the lines of this:
https://www.geeksforgeeks.org/count-number-of-subsets-having-a-particular-xor-value/
But this is clearly too slow. This feels like a classical segment tree problem, but I can't seem to figure out what data to store at each node so that the left and right children can be combined to answer a query for a given range.
Yeah, that DP won't be fast enough.
What will be fast enough is applying some linear algebra over GF(2), the Galois field with two elements. Each number can be interpreted as a bit-vector; adding/subtracting vectors is XOR; scalar multiplication isn't really relevant.
The data you need for each segment is (1) how many numbers there are in the segment, and (2) a basis for the subspace generated by the numbers in the segment, which will consist of at most 27 numbers because all numbers are less than 2^27. The basis for a one-element segment is just that number if it's nonzero, else the empty set. To find the span of the union of two bases, use Gaussian elimination and discard the zero vectors.
Given the length of an interval and a basis for it, you can count the number of good subsets using the rank-nullity theorem. Basically, for each target number, use your Gaussian elimination routine to test whether the target number belongs to the subspace. If so, there are 2^(length of interval minus size of basis) subsets. If not, the answer is zero.
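A hedged sketch of the per-segment data in C (I keep the basis in reduced "leading bit" form rather than re-running full Gaussian elimination on every merge, which is the same idea; the names, and the use of a modulus for the exponentially growing count, are my own choices):

#include <stdint.h>

#define MAXB 27                      /* numbers are < 2^27 */
#define MOD  1000000007ULL           /* assumed modulus for the count */

struct seg {
    int len;                         /* how many numbers in the segment  */
    uint32_t basis[MAXB];            /* basis[b] has leading bit b, or 0 */
};

/* Try to add v to the basis; returns 1 if it was linearly independent. */
static int basis_insert(uint32_t *basis, uint32_t v)
{
    for (int b = MAXB - 1; b >= 0; b--) {
        if (!((v >> b) & 1)) continue;
        if (!basis[b]) { basis[b] = v; return 1; }
        v ^= basis[b];
    }
    return 0;                        /* v was already in the span */
}

static int basis_rank(const uint32_t *basis)
{
    int r = 0;
    for (int b = 0; b < MAXB; b++) r += basis[b] != 0;
    return r;
}

/* Combine two child segments into their parent (out must be distinct). */
static void seg_merge(struct seg *out, const struct seg *l, const struct seg *r)
{
    *out = *l;
    out->len = l->len + r->len;
    for (int b = 0; b < MAXB; b++)
        if (r->basis[b]) basis_insert(out->basis, r->basis[b]);
}

/* Number of subsets of the segment whose XOR equals target, modulo MOD:
   2^(len - rank) if target is in the span, else 0 (rank-nullity). */
static uint64_t seg_count(const struct seg *s, uint32_t target)
{
    uint32_t v = target;
    for (int b = MAXB - 1; b >= 0; b--)
        if (((v >> b) & 1) && s->basis[b]) v ^= s->basis[b];
    if (v) return 0;                 /* target is not in the span */
    uint64_t res = 1;
    for (int e = s->len - basis_rank(s->basis); e > 0; e--)
        res = (res * 2) % MOD;       /* 2^(len - rank) mod MOD */
    return res;
}

For a full solution you would store these seg values at segment-tree nodes, rebuild a leaf on each point update, merge the O(log n) nodes covering a query range into one seg, and then sum seg_count of that merged seg over target = 0 and each power of two.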
E.g. given an unordered list of N elements, find the medians for the sub-ranges 0..100, 25..200, 400..1000, 10..500, ...
I don't see any better way than going through each sub-range and running the standard median-finding algorithm.
A simple example: [5 3 6 2 4]
The median for 0..3 is 5. (Not 4, since we are asking for the median of the first three elements of the original list)
INTEGER ELEMENTS:
If the type of your elements is integer, the best way is to have a bucket for each number that lies in any of your sub-ranges, where each bucket counts how many times its associated integer occurs in your input elements (for example, bucket[100] stores how many 100s there are in your input sequence). Basically you can achieve it in the following steps:
create a bucket for each number that lies in any of your sub-ranges.
iterate through all elements; for each number n, if bucket[n] exists, then bucket[n]++.
compute the medians based on the aggregated values stored in your buckets.
Put another way, suppose you have a sub-range [0, 10] and you would like to compute the median. The bucket approach basically counts how many 0s there are in your input, how many 1s, and so on. Suppose there are n numbers lying in the range [0, 10]; then the median is the n/2-th largest element, which can be identified by finding the i such that bucket[0] + bucket[1] + ... + bucket[i] is greater than or equal to n/2 while bucket[0] + ... + bucket[i - 1] is less than n/2.
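As a minimal sketch in C for one sub-range (treating the sub-range as a range of values, as this answer does; the function name and the choice of the lower median are mine):

#include <stdlib.h>
#include <stddef.h>

/* Median of the input elements that fall in [lo, hi] (lower median when
   the count is even); assumes at least one element lies in the range. */
static int bucket_median(const int *a, size_t n, int lo, int hi)
{
    size_t width = (size_t)(hi - lo) + 1, total = 0;
    size_t *bucket = calloc(width, sizeof *bucket);
    for (size_t i = 0; i < n; i++)
        if (a[i] >= lo && a[i] <= hi) { bucket[a[i] - lo]++; total++; }

    size_t need = (total + 1) / 2, seen = 0;   /* 1-based median position */
    int median = lo;
    for (size_t v = 0; v < width; v++) {
        seen += bucket[v];
        if (seen >= need) { median = lo + (int)v; break; }
    }
    free(bucket);
    return median;
}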
The nice thing about this is that even if your input elements are stored on multiple machines (i.e., the distributed case), each machine can maintain its own buckets and only the aggregated values need to be passed over the network.
You can also use hierarchical buckets, which involves multiple passes. In each pass, bucket[i] counts the number of input elements that lie in a specific range (for example, [i * 2^K, (i+1) * 2^K]); you then narrow down the problem space by identifying which bucket the median lies in, decrease K in the next pass, and repeat until you can pinpoint the median.
FLOATING-POINT ELEMENTS
All elements fit into memory:
If all your elements fit into memory, first sorting the N elements and then finding the medians for each sub-range is the best option. The linear-time heap solution also works well in this case if the number of sub-ranges is less than log N.
The elements cannot all fit into memory but are stored on a single machine:
Generally, an external sort requires three disk scans. Therefore, if the number of sub-ranges is greater than or equal to 3, then first sorting the N elements and then finding the medians for each sub-range by loading only the necessary elements from disk is the best choice. Otherwise, simply performing a scan for each sub-range and picking up the elements in that sub-range is better.
The elements are stored across multiple machines:
Since finding the median is a holistic operation, meaning you cannot derive the final median of the entire input from the medians of several parts of the input, it is a hard problem whose solution cannot be described in a few sentences, but there is research (see this as an example) focused on this problem.
I think that as the number of sub ranges increases you will very quickly find that it is quicker to sort and then retrieve the element numbers you want.
In practice, because there will be highly optimized sort routines you can call.
In theory, and perhaps in practice too, because since you are dealing with integers you need not pay n log n for a sort - see http://en.wikipedia.org/wiki/Integer_sorting.
If your data are in fact floating point and not NaNs then a little bit twiddling will in fact allow you to use integer sort on them - from - http://en.wikipedia.org/wiki/IEEE_754-1985#Comparing_floating-point_numbers - The binary representation has the special property that, excluding NaNs, any two numbers can be compared like sign and magnitude integers (although with modern computer processors this is no longer directly applicable): if the sign bit is different, the negative number precedes the positive number (except that negative zero and positive zero should be considered equal), otherwise, relative order is the same as lexicographical order but inverted for two negative numbers; endianness issues apply.
So you could check for NaNs and other funnies, pretend the floating point numbers are sign + magnitude integers, subtract when negative to correct the ordering for negative numbers, and then treat as normal 2s complement signed integers, sort, and then reverse the process.
My idea:
Sort the list into an array (using any appropriate sorting algorithm)
For each range, find the indices of the start and end of the range using binary search
Find the median by simply adding their indices and dividing by 2 (i.e. median of range [x,y] is arr[(x+y)/2])
Preprocessing time: O(n log n) for a generic sorting algorithm (like quick-sort) or the running time of the chosen sorting routine
Time per query: O(log n)
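A compact sketch of the static case in C (treating each query range as a range of values, as the steps above do; the example data is the [5 3 6 2 4] list from the question):

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Index of the first element >= key in the sorted array a[0..n). */
static size_t lower_bound(const int *a, size_t n, int key)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key) lo = mid + 1; else hi = mid;
    }
    return lo;
}

int main(void)
{
    int a[] = { 5, 3, 6, 2, 4 };
    size_t n = sizeof a / sizeof *a;
    qsort(a, n, sizeof *a, cmp_int);            /* preprocessing: O(n log n) */

    int lo = 2, hi = 5;                         /* query the value range [2, 5] */
    size_t first = lower_bound(a, n, lo);       /* O(log n) per query */
    size_t end   = lower_bound(a, n, hi + 1);
    if (first < end)
        printf("median of [%d,%d] is %d\n", lo, hi, a[(first + end - 1) / 2]);
    return 0;
}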
Dynamic list:
The above assumes that the list is static. If elements can freely be added or removed between queries, a modified Binary Search Tree could work, with each node keeping a count of the number of descendants it has. This will allow the same running time as above with a dynamic list.
The answer is ultimately going to be "it depends". There are a variety of approaches, any one of which will probably be suitable for most of the cases you may encounter. The problem is that each is going to perform differently for different inputs. Where one may perform better for one class of inputs, another will perform better for a different class of inputs.
As an example, the approach of sorting and then performing a binary search on the extremes of your ranges and then directly computing the median will be useful when the number of ranges you have to test is greater than log(N). On the other hand, if the number of ranges is smaller than log(N) it may be better to move elements of a given range to the beginning of the array and use a linear time selection algorithm to find the median.
All of this boils down to profiling to avoid premature optimization. If the approach you implement turns out to not be a bottleneck for your system's performance, figuring out how to improve it isn't going to be a useful exercise relative to streamlining those portions of your program which are bottlenecks.
Is there an algorithm that can quickly determine if a number is a factor of any number in a given set of numbers?
For example, 12 is a factor of [24,33,52] while 5 is not.
Is there a better approach than linear search O(n)? The set will contain a few million elements. I don't need to find the number, just a true or false result.
If a large number of candidates are checked against a constant list, one possible way to speed up the process is to factorize the numbers in the list into their prime factors first. Then put the list members into a dictionary keyed by their prime factors. When a number (a potential factor) comes in, factorize it into its prime factors and use the dictionary to check it only against those list members that could potentially be multiples of it.
I think in general O(n) search is what you will end up with. However, depending on how large the numbers are in general, you can speed up the search considerably assuming that the set is sorted (you mention that it can be) by observing that if you are searching to find a number divisible by D and you have currently scanned x and x is not divisible by D, the next possible candidate is obviously at floor([x + D] / D) * D. That is, if D = 12 and the list is
5 11 13 19 22 25 27
and you are scanning at 13, the next possible candidate number would be 24. Now depending on the distribution of your input, you can scan forwards using binary search instead of linear search, as you are searching now for the least number not less than 24 in the list, and the list is sorted. If D is large then you might save lots of comparisons in this way.
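A hedged sketch of that skip-ahead scan in C, assuming the set is sorted in an array (the helper name is mine):

#include <stddef.h>

/* Returns 1 if some element of the sorted array a[0..n) is divisible by d. */
static int has_multiple(const unsigned long long *a, size_t n,
                        unsigned long long d)
{
    size_t i = 0;
    while (i < n) {
        if (a[i] % d == 0) return 1;
        unsigned long long next = (a[i] / d + 1) * d;   /* next multiple of d    */
        size_t lo = i + 1, hi = n;                      /* first element >= next */
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (a[mid] < next) lo = mid + 1; else hi = mid;
        }
        i = lo;
    }
    return 0;
}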
However, from a pure computational-complexity point of view, sorting and then searching is going to be O(n log n), whereas a plain linear scan is O(n).
For testing many potential factors against a constant set you should realize that if one element of the set is just a multiple of two others, it is irrelevant and can be removed. This approach is a variation of an ancient algorithm known as the Sieve of Eratosthenes. Trading start-up time for run-time when testing a huge number of candidates:
Pick the smallest number >1 in the set
Remove any multiples of that number, except itself, from the set
Repeat 2 for the next smallest number, for a certain number of iterations. The number of iterations will depend on the trade-off with start-up time
You are now left with a much smaller set to exhaustively test against. For this to be efficient you either want a data structure for your set that allows O(1) removal, like a linked-list, or just replace "removed" elements with zero and then copy non-zero elements into a new container.
I'm not sure of the question, so let me ask another: Is 12 a factor of [6,33,52]? It is clear that 12 does not divide 6, 33, or 52. But the prime factors of 12 are 2*2*3, and the prime factors of 6, 33 and 52, taken together, are 2*2*2*3*3*11*13. All of the factors of 12 are present in the set [6,33,52] in sufficient multiplicity, so you could say that 12 is a factor of [6,33,52].
If you say that 12 is not a factor of [6,33,52], then there is no better solution than testing each number for divisibility by 12; simply perform the division and check the remainder. Thus 6%12=6, 33%12=9, and 52%12=4, so 12 is not a factor of [6,33,52]. But if you say that 12 is a factor of [6,33,52], then to determine if a number f is a factor of a set ns, just multiply the numbers ns together sequentially, after each multiplication take the remainder modulo f, report true immediately if the remainder is ever 0, and report false if you reach the end of the list of numbers ns without a remainder of 0.
Let's take two examples. First, is 12 a factor of [6,33,52]? The first (trivial) multiplication results in 6 and gives a remainder of 6. Now 6*33=198, dividing by 12 gives a remainder of 6, and we continue. Now 6*52=312 and 312/12=26r0, so we have a remainder of 0 and the result is true. Second, is 5 a factor of [24,33,52]? The multiplication chain is 24%5=4, (4*33)%5=2, and (2*52)%5=4, so 5 is not a factor of [24,33,52].
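Under that second reading (f divides the product of the set), the running-remainder test looks like this in C (a sketch; it assumes f*f fits in 64 bits, otherwise arbitrary-precision arithmetic is needed):

#include <stddef.h>

/* Returns 1 if f divides the product of ns[0..n), using a running
   remainder so the full product is never formed. */
static int divides_product(const unsigned long long *ns, size_t n,
                           unsigned long long f)
{
    unsigned long long r = 1;
    for (size_t i = 0; i < n; i++) {
        r = (r * (ns[i] % f)) % f;       /* keep only the remainder mod f */
        if (r == 0) return 1;
    }
    return 0;
}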
A variant of this algorithm was recently used to attack the RSA cryptosystem; you can read about how the attack worked here.
Since the set to be searched is fixed any time spent organising the set for search will be time well spent. If you can get the set in memory, then I expect that a binary tree structure will suit just fine. On average searching for an element in a binary tree is an O(log n) operation.
If you have reason to believe that the numbers in the set are evenly distributed throughout the range [0..10^12] then a binary search of a sorted set in memory ought to perform as well as searching a binary tree. On the other hand, if the middle element in the set (or any subset of the set) is not expected to be close to the middle value in the range encompassed by the set (or subset) then I think the binary tree will have better (practical) performance.
If you can't get the entire set in memory then decomposing it into chunks which will fit into memory and storing those chunks on disk is probably the way to go. You would store the root and upper branches of the set in memory and use them to index onto the disk. The depth of the part of the tree which is kept in memory is something you should decide for yourself, but I'd be surprised if you needed more than the root and 2 levels of branch, giving 8 chunks on disk.
Of course, this only solves part of your problem, finding whether a given number is in the set; you really want to find whether the given number is the factor of any number in the set. As I've suggested in comments I think any approach based on factorising the numbers in the set is hopeless, giving an expected running time beyond polynomial time.
I'd approach this part of the problem the other way round: generate the multiples of the given number and search for each of them. If your set has 10^7 elements then any given number N will have about (10^7)/N multiples in the set. If the given number is drawn at random from the range [0..10^12] the mean value of N is 0.5*10^12, which suggests (counter-intuitively) that in most cases you will only have to search for N itself.
And yes, I am aware that in many cases you would have to search for many more values.
This approach would parallelise relatively easily.
A fast solution which requires some precomputation:
Organize your set in a binary tree with the following rules:
Numbers of the set are on the leaves.
The root of the tree contains r, the minimum of all prime numbers that divide some number in the set.
The left subtree corresponds to the subset of multiples of r (each divided by r, so that r won't be repeated infinitely).
The right subtree corresponds to the subset of numbers that are not multiples of r.
If you want to test if a number N divides some element of the set, compute its prime decomposition and go through the tree until you reach a leaf. If the leaf contains a number then N divides it, else if the leaf is empty then N divides no element in the set.
Simply calculate the product of the set and mod the result with the test factor.
In your example
{24,33,52} P=41184
Tf 12: 41184 mod 12 = 0 True
Tf 5: 41184 mod 5 = 4 False
The set can be broken into chunks if calculating the product would overflow the arithmetic of the calculator, but huge numbers are possible by storing them as strings.
I stumbled across the Wikipedia page for them:
Fusion tree
And I read the class notes pdfs linked at the bottom, but it gets hand-wavy about the data structure itself and goes into a lot of detail about the sketch(x) function. I think part of my confusion is that the papers are trying to be very general, and I would like a specific example to visualize.
Is this data structure appropriate for storing data based on arbitrary 32 or 64 bit integer keys? How does it differ from a B-tree? There is one section that says it's basically a B-tree with a branching factor B = (lg n)^(1/5). For a fully populated tree with 32 bit keys, B would be 2. Does this just become a binary tree? Is this data structure intended to use much longer bit-strings as keys?
My Googling didn't turn up anything terribly useful, but I would welcome any good links on the topic. This is really just a passing curiosity, so I haven't been willing to pay for the PDFs at portal.acm.org yet.
You've asked a number of great questions here:
Is a fusion tree a good data structure for storing 32-bit or 64-bit numbers? Or is it designed to store longer bitstrings?
How does a fusion tree differ from a B-tree?
A fusion tree picks b = w^(1/5), where w is the machine word size. Does this mean that b = 2 on a 32-bit machine, and does that make it just a binary tree?
Why is so much of the discussion of a fusion tree focused on sketching?
Is there a visualization of a fusion tree available to help understand how the structure works?
I'd like to address each of these questions in turn.
Q1: What do you store in a fusion tree? Are they good for 32-bit integers?
Your first question was about what fusion trees are designed to store. The fusion tree data structure is specifically designed to store integers that fit into a single machine word. As a result, on a 32-bit machine, you'd use the fusion tree to store integers of up to 32 bits, and on a 64-bit machine you'd use a fusion tree to store integers of up to 64 bits.
Fusion trees are not designed to handle arbitrarily long bitstrings. The design of fusion trees, which we'll get to in a little bit, is based on a technique called word-level parallelism, in which individual operations on machine words (multiplications, shifts, subtractions, etc.) are performed to implicitly operate on a large collection of numbers in parallel. In order for these techniques to work correctly, the numbers being stored need to fit into individual machine words. (It is technically possible to adapt the techniques here to work for numbers that fit into a constant number of machine words, though.)
But before we go any further, I need to include a major caveat: fusion trees are of theoretical interest only. Although fusion trees at face value seem to have excellent runtime guarantees (O(log_w n) time per operation, where w is the size of the machine word), the actual implementation details are such that the hidden constant factors are enormous and a major barrier to practical adoption. The original paper on fusion trees was mostly geared toward proving that it was possible to surpass the Ω(log n) lower bound on BST operations by using word-level parallelism and without regard to wall-clock runtime costs. So in that sense, if your goal in understanding fusion trees is to use one in practice, I would recommend stopping here and searching for another data structure. On the other hand, if you're interested in seeing just how much latent power is available in humble machine words, then please read on!
Q2: How does a fusion tree differ from a regular B-tree?
At a high level, you can think of a fusion tree as a regular B-tree with some extra magic thrown in to speed up searches.
As a reminder, a B-tree of order b is a multiway search tree where, intuitively, each node stores (roughly) b keys. The B-tree is a multiway search tree, meaning that the keys in each node are stored in sorted order, and the child trees store elements that are ordered relative to those keys. For example, consider this B-tree node:
+-----+-----+-----+-----+
| 103 | 161 | 166 | 261 |
+-----+-----+-----+-----+
/ | | | \
/ | | | \
A B C D E
Here, A, B, C, D, and E are subtrees of the root node. The subtree A consists of keys strictly less than 103, since it's to the left of 103. Subtree B consists of keys between 103 and 161, since subtree B is sandwiched between 103 and 161. Similarly, subtree C consists of keys between 161 and 166, subtree D consists of keys between 166 and 261, and subtree E consists of keys greater than 261.
To perform a search in a B-tree, you begin at the root node and repeatedly ask which subtree you need to descend into to continue the search. For example, if I wanted to look up 137 in the above tree, I'd need to somehow determine that 137 resides in subtree B. There are two "natural" ways that we could do this search:
Run a linear search over the keys to find the spot where we need to go. Time: O(b), where b is the number of keys in the node.
Run a binary search over the keys to find the spot where we need to go. Time: O(log b), where b is the number of keys in the node.
Because each node in a B-tree has a branching factor of b or greater, the height of a B-tree of order b is O(log_b n). Therefore, if we use the first strategy (linear search) to find what tree to descend into, the worst-case work required for a search is O(b log_b n), since we do O(b) work per level across O(log_b n) levels. Fun fact: the quantity b log_b n is minimized when b = e, and gets progressively worse as we increase b beyond this limit.
On the other hand, if we use a binary search to find the tree to descend into, the runtime ends up being O(log b · log_b n). Using the change of base formula for logarithms, notice that
log b · log_b n = log b · (log n / log b) = log n,
so the runtime of doing lookups this way is O(log n), independent of b. This matches the time bounds of searching a regular balanced BST.
The magic of the fusion tree is in finding a way to determine which subtree to descend into in time O(1). Let that sink in for a minute - we can have multiple children per node in our B-tree, stored in sorted order, and yet we can find which two keys our element is between in time O(1)! Doing so is decidedly nontrivial and is the bulk of the magic of the fusion tree. But for now, assuming that we can do this, notice that the runtime of searching the fusion tree would be O(log_b n), since we do O(1) work at each of the O(log_b n) levels of the tree!
The question now is how to do this.
Q3: A fusion tree picks b = w^(1/5), where w is the machine word size. Does this mean that b = 2 on a 32-bit machine, and does that make it just a binary tree?
For technical reasons that will become clearer later on, a fusion tree works by choosing, as the branching parameter for the B-tree, the value b = w^(1/5), where w is the machine word size. On a 32-bit machine, that means that we'd pick
b = w^(1/5) = (2^5)^(1/5) = 2,
and on a 64-bit machine we'd pick
b = w^(1/5) = (2^6)^(1/5) = 2^(6/5) ≈ 2.29,
which we'd likely round down to 2. So does that mean that a fusion tree is just a binary tree?
The answer is "not quite." In a B-tree, each node stores between b - 1 and 2b - 1 total keys. With b = 2, that means that each node stores between 1 and 3 total keys. (In other words, our B-tree would be a 2-3-4 tree, if you're familiar with that lovely data structure). This means that we'll be branching slightly more than a regular binary search tree, but not much more.
Returning to our earlier point, fusion trees are primarily of theoretical interest. The fact that we'd pick b = 2 on a real machine and barely do better than a regular binary search tree is one of the many reasons why this is the case.
On the other hand, if we were working on, say, a machine whose word size was 32,768 bits (I'm not holding my breath on seeing one of these in my lifetime), then we'd get a branching factor of b = 8, and we might actually start seeing something that beats a regular BST.
Q4: Why is so much of the discussion of a fusion tree focused on sketching?
As mentioned above, the "secret sauce" of the fusion tree is the ability to augment each node in the B-tree with some auxiliary information that makes it possible to efficiently (in time O(1)) determine which subtree of the B-tree to descend into. Once you have the ability to get this step working, the remainder of the data structure is basically just a regular B-tree. Consequently, it makes sense to focus extensively (exclusively?) on how this step works.
This is also, by far, the most complicated step in the process. Getting this step working requires the development of several highly nontrivial subroutines that, collectively, give the overall behavior.
The first technique that we'll need is a parallel rank operation. Let's return to the key question about our B-tree search: how do we determine which subtree to descend into? Let's look back to our B-tree node, as shown here:
+-----+-----+-----+-----+
| 103 | 161 | 166 | 261 |
+-----+-----+-----+-----+
/ | | | \
/ | | | \
T0 T1 T2 T3 T4
This is the same drawing as before, but instead of labeling the subtrees A, B, C, D, and E, I've labeled them T0, T1, T2, T3, and T4.
Let's imagine I want to search for 162. That should put me into subtree T2. One way to see this is that 162 is bigger than 161 and less than 166. But there's another perspective we can take here: we want to search T2 because 162 is greater than both 103 and 161, the two keys that come before it. Interesting - we want tree index 2, and we're bigger than two of the keys in the node. Hmmm.
Now, search for 196. That puts us in tree T3, and 196 happens to be bigger than 103, 161, and 166, a total of three keys. Interesting. What about 17? That would be in tree T0, and 17 is greater than zero of the keys.
This hints at a key strategy we're going to use to get the fusion tree to work:
To determine which subtree to descend into, we need to count how many keys our search key is greater than. (This number is called the rank of the search key.)
The key insight of the fusion tree is how to do this in time O(1).
Before jumping into sketching, let's build out a key primitive that we'll need for later on. The idea is the following: suppose that you have a collection of small integers, where, here, "small" means "so small that lots of them can be packed into a single machine word." Through some very clever techniques, if you can pack multiple small integers into a machine word, you can solve the following problem in time O(1):
Parallel rank: Given a key k, which is a small integer, and a fixed collection of small integers x1, ..., xb, determine how many of the xi's are less than or equal to k.
For example, we might have a bunch of 6-bit numbers - say, 31, 41, 59, 26, and 53 - and we could then execute queries like "how many of these numbers are less than or equal to 37?"
To give a brief glimpse of how this technique works, the idea is to pack all of the small integers into a single machine word, each preceded by a single zero bit. That number might look like this:
00111110101001011101100110100110101
0  31  0  41  0  59  0  26  0  53
Now, suppose we want to see how many of these numbers are less than or equal to 37. To do so, we begin by forming an integer that consists of several replicated copies of the number 37, each of which is preceded by a 1 bit. That would look like this:
11001011100101110010111001011100101
1  37  1  37  1  37  1  37  1  37
Something very cool happens if we subtract the first number from this second number. Watch this:
  11001011100101110010111001011100101        1  37  1  37  1  37  1  37  1  37
- 00111110101001011101100110100110101      - 0  31  0  41  0  59  0  26  0  53
  -----------------------------------        ---------------------------------
  10001100111100010101010010110110000        1   6  0  -4  0 -22  1  11  0 -16
  ^      ^      ^      ^      ^              ^      ^      ^      ^      ^
The bits that I've highlighted here are the extra bits that we added to the front of each number. Notice that
if the top number is greater than or equal to the bottom number, then the bit in front of the subtraction result will be 1, and
if the top number is smaller than the bottom number, then the bit in front of the subtraction result will be 0.
To see why this is, if the top number is greater than or equal to the bottom number, then when we perform the subtraction, we'll never need to "borrow" from that extra 1 bit we put in front of the top number, so that bit will stay a 1. Otherwise, the top number is smaller, so to make the subtraction work out we have to borrow from that 1 bit, marking it as a zero. In other words, this single subtraction operation can be thought of as doing a parallel comparison between the original key and each of the small numbers. We're doing one subtraction, but, logically, it's five comparisons!
If we can count up how many of the marked bits are 1s, then we have the answer we want. This turns out to require some additional creativity to work in time O(1), but it is indeed possible.
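Here is a small, self-contained C sketch of that subtraction trick, using the same five 6-bit values packed into one 64-bit word. The field layout, the constants, and the function names are mine, not the paper's, and I cheat at the final counting step by using the compiler's popcount rather than the constant-time word tricks a fusion tree actually relies on:

#include <stdint.h>
#include <stdio.h>

/* Five 6-bit values live in one 64-bit word. Each occupies a 7-bit field:
   one sentinel bit followed by the 6 value bits; field 0 is the least
   significant. (This layout is purely illustrative.) */
#define FIELD_BITS 7
#define NUM_FIELDS 5
#define VALUE_MASK 0x3Fu                 /* the low 6 bits of a field       */
#define SENTINELS  0x408102040ULL        /* the sentinel bit of every field */

/* Pack the values into one word, leaving every sentinel bit as 0. */
static uint64_t pack(const unsigned x[NUM_FIELDS])
{
    uint64_t word = 0;
    for (int i = 0; i < NUM_FIELDS; i++)
        word |= (uint64_t)(x[i] & VALUE_MASK) << (FIELD_BITS * i);
    return word;
}

/* Replicate the key k into every field, with every sentinel bit set to 1. */
static uint64_t spread(unsigned k)
{
    uint64_t word = 0;
    for (int i = 0; i < NUM_FIELDS; i++)
        word |= (uint64_t)(0x40u | (k & VALUE_MASK)) << (FIELD_BITS * i);
    return word;
}

/* How many of the packed values are <= k? One subtraction performs all five
   comparisons at once: a field's sentinel survives as a 1 exactly when no
   borrow was needed there, i.e. when that field's value is <= k. */
static int parallel_rank(uint64_t packed, unsigned k)
{
    uint64_t diff = spread(k) - packed;
    return __builtin_popcountll(diff & SENTINELS);
}

int main(void)
{
    unsigned xs[NUM_FIELDS] = {31, 41, 59, 26, 53};       /* order doesn't matter   */
    printf("rank(37) = %d\n", parallel_rank(pack(xs), 37)); /* prints 2: 31 and 26  */
    return 0;
}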
This parallel rank operation shows that if we have a lot of really small keys - so small that we can pack them into a machine word - we could indeed go and compute the rank of our search key in time O(1), which would tell us which subtree we need to descend into. However, there's a catch - this strategy assumes that our keys are really small, but in general, we have no reason to assume this. If we're storing full 32-bit or 64-bit machine words as keys, we can't pack lots of them into a single machine word. We can fit exactly one key into a machine word!
To address this, fusion trees use another insight. Let's imagine that we pick the branching factor of our B-tree to be very small compared to the number of bits in a machine word (say, b = w^(1/5)). The main insight is that if you only have a small number of machine words, then only a few of the bits in those words are actually relevant for determining their ordering. For example, suppose I have the following 32-bit numbers:
A: 00110101000101000101000100000101
B: 11001000010000001000000000000000
C: 11011100101110111100010011010101
D: 11110100100001000000001000000000
Now, imagine I wanted to sort these numbers. To do so, I only really need to look at a few of the bits. For example, some of the numbers differ in their first bit (the top number A has a 0 there, and the rest have a 1). So I'll write down that I need to look at the first bit of the number. The second bit of these numbers doesn't actually help sort things - anything that differs at the second bit already differs at the first bit (do you see why?). The third bit, on the other hand, does help us rank them, because numbers B, C, and D, which have the same first bit, diverge at the third bit into the groups (B, C) and D. I also would need to look at the fourth bit, which splits (B, C) apart into B and C.
In other words, to compare these numbers against one another, we'd only need to store these marked bits. If we process these bits, in order, we'd never need to look at any others:
A: 00110101000101000101000100000101
B: 11001000010000001000000000000000
C: 11011100101110111100010011010101
D: 11110100100001000000001000000000
   ^ ^^
This is the sketching step you were referring to in your question, and it's used to take a small number of large numbers and turn them into a small number of small numbers. Once we have a small number of small numbers, we can then use our parallel rank step from earlier on to do rank operations in time O(1), which is what we needed to do.
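As a rough illustration of what the sketch computes (though emphatically not how the fusion tree computes it in O(1)), here's a small C program that finds the branching bit positions for the four example numbers by XOR-ing adjacent keys in sorted order, then naively extracts just those bits from each key. The hex constants are the four example numbers above; the real structure does the extraction with a single carefully chosen multiplication and mask rather than a loop:

#include <stdint.h>
#include <stdio.h>

#define NUM_KEYS 4

/* The "interesting" bit positions: for each adjacent pair of keys in sorted
   order, the highest bit where the two keys differ is a branching position.
   Returns a mask with those bits set. */
static uint32_t interesting_bits(const uint32_t sorted[NUM_KEYS])
{
    uint32_t mask = 0;
    for (int i = 0; i + 1 < NUM_KEYS; i++) {
        uint32_t diff = sorted[i] ^ sorted[i + 1];
        mask |= UINT32_C(1) << (31 - __builtin_clz(diff));
    }
    return mask;
}

/* Naively extract the masked bits of x and pack them together, preserving
   their left-to-right order. */
static uint32_t sketch(uint32_t x, uint32_t mask)
{
    uint32_t out = 0;
    for (int bit = 31; bit >= 0; bit--)
        if (mask & (UINT32_C(1) << bit))
            out = (out << 1) | ((x >> bit) & 1u);
    return out;
}

int main(void)
{
    /* The example keys A < B < C < D, written in hex. */
    uint32_t keys[NUM_KEYS] = {
        0x35145105u,   /* A: 00110101000101000101000100000101 */
        0xC8408000u,   /* B: 11001000010000001000000000000000 */
        0xDCBBC4D5u,   /* C: 11011100101110111100010011010101 */
        0xF4840200u    /* D: 11110100100001000000001000000000 */
    };
    uint32_t mask = interesting_bits(keys);   /* bits 1, 3, and 4 from the left */
    for (int i = 0; i < NUM_KEYS; i++)
        printf("sketch(%08X) = %u\n", (unsigned)keys[i], (unsigned)sketch(keys[i], mask));
    /* Prints 3, 4, 5, 7 - the sketches appear in the same order as the keys. */
    return 0;
}

The caveats in the next paragraph still apply: choosing and extracting those bits with O(1) word operations, and comparing a query key that isn't in the set against the sketches, is where the real difficulty lies.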
Of course, there are a lot of steps that I'm skipping over here. How do you determine which bits are "interesting" bits that we need to look at? How do you extract those bits from the numbers? If you're given a number that isn't in the group, how do you figure out how it compares against the numbers in the group, given that it might differ in other bit positions? These aren't trivial questions to answer, and they're what give rise to most of the complexity of the fusion tree.
Q5: Is there a visualization of a fusion tree available to help understand how the structure works?
Yes, and no. I'll say "yes" because there are resources out there that show how the different steps work. However, I'll say "no" because I don't believe there's any one picture you can look at that will cause the whole data structure to suddenly click into focus.
I teach a course in advanced data structures and spent two 80-minute lectures building up to the fusion tree by using techniques from word-level parallelism. The discussion here is based on those lectures, which go into more depth about each step and include visualizations of the different substeps (how to compute rank in constant time, how the sketching step works, etc.), and each of those steps individually might give you a better sense for how the whole structure works. Those materials are linked here:
Part One discusses word-level parallelism, computing ranks in time O(1), building a variant of the fusion tree that works for very small integers, and computing most-significant bits in time O(1).
Part Two explores the full version of the fusion tree, introducing the basics behind the sketching step (which I call "Patricia codes" based on the connection to the Patricia trie).
To Summarize
In summary:
A fusion tree is a modification of a B-tree. The basic structure matches that of a regular B-tree, except that each node has some auxiliary information to speed up searching.
Fusion trees are purely of theoretical interest at this point. The hidden constant factors are too high and the branching factor too low to meaningfully compete with binary search trees.
Fusion trees use word-level parallelism to speed up searches, commonly by packing multiple numbers into a single machine word and using individual operations to simulate parallel processing.
The sketching step is used to reduce the number of bits in the input numbers to a point where parallel processing with a machine word is possible.
There are lecture slides detailing this in a lot more depth.
Hope this helps!
I've read (just a quick pass) the seminal paper, and it seems interesting. It also answers most of your questions on its first page.
You may download the paper from here
HTH!
I've read the fusion tree paper. The ideas are pretty clever, and in big-O terms he can make a case for a win.
It isn't clear to me that it is a win in practice. The constant factor matters a lot, and the chip designers work really hard to manage cheap local references.
He has to have B in his faux B-trees pretty small for real machines (B=5 for 32 bits, maybe 10 for 64 bits). That many pointers pretty much fits in a cache line. After the first cache-line touch (which he can't avoid) of several hundred cycles, you can pretty much do a linear search through the keys in a few cycles per key, which means a carefully coded traditional B-tree implementation seems like it should outrun fusion trees. (I've built such B-tree code to support our program transformation system.)
He claims a list of applications, but there are no comparative numbers.
Anybody have any hard evidence? (Implementations and comparisons?)
The idea behind the fusion tree is actually fairly simple. Suppose you have w-bit (say, 64-bit) keys. The idea is to compress (i.e., sketch) every group of 64 consecutive keys into a 64-element array. The sketching function assures a constant-time mapping between the original keys and the array index for a given group. Searching for a key then becomes searching for the group containing the key, which is O(log(n/64)).
As you can see, the main challenge is the sketching function.