Understanding the Count Sketch data structure and associated algorithms - data-structures

Working on wrapping my head around the CountSketch data structure and its associated algorithms. It seems to be a great tool for finding common elements in streaming data, and its additive nature gives it some fun properties for detecting large changes in frequency, perhaps similar to what Twitter uses for trending topics.
The paper is a little difficult to understand for someone who has been away from more academic approaches for a while. A previous post here did help some, but for me at least it still left quite a few questions.
As I understand it, the Count Sketch structure is similar to a Bloom filter. However, the selection of hash functions has me confused. The structure is an N by M table: there are N hash functions, each with M possible values, determining which "bucket" to alter, plus another hash function s for each of the N rows that is "pairwise independent".
Are the hashes to be selected from a universal hashing family, say something of the form h(x) = ((ax+b) % some_prime) % M?
And if so, where are the s hashes that return either +1 or -1 chosen from? And what is the reason for ever subtracting from one of the buckets?

They subtract from the buckets so that the average effect of the additions/subtractions caused by other elements is 0. If half the time I add the count of 'foo', and half the time I subtract the count of 'foo', then in expectation the count of 'foo' does not influence the estimate of the count for 'bar'.
Picking a universal hash function like you describe will indeed work, but it's mostly important for the theory rather than the practice. Salting your favorite reasonable hash function will work too; you just can't meaningfully write proofs about the expected values when you use a few fixed hash functions.
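To make the mechanics concrete, here is a rough Python sketch of the update/estimate logic described above. It salts Python's built-in hash instead of using formally pairwise-independent families, and the names (CountSketch, depth, width) are my own, not the paper's:
import random

class CountSketch:
    """Rough Count Sketch: depth rows, width buckets per row."""
    def __init__(self, depth=5, width=1000, seed=42):
        rng = random.Random(seed)
        # One salt per row; each row gets a bucket hash h_i and a sign hash s_i.
        self.salts = [rng.getrandbits(64) for _ in range(depth)]
        self.width = width
        self.table = [[0] * width for _ in range(depth)]

    def _bucket(self, row, x):
        return hash((self.salts[row], "bucket", x)) % self.width

    def _sign(self, row, x):
        return 1 if hash((self.salts[row], "sign", x)) & 1 else -1

    def add(self, x, count=1):
        for i in range(len(self.table)):
            self.table[i][self._bucket(i, x)] += self._sign(i, x) * count

    def estimate(self, x):
        # Median of the sign-corrected counters, one per row.
        votes = sorted(self._sign(i, x) * self.table[i][self._bucket(i, x)]
                       for i in range(len(self.table)))
        return votes[len(votes) // 2]
The sign hash is what makes other elements cancel out in expectation, and taking the median across the rows controls the variance of the estimate.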

Related

Why is Counting Sort made harder?

I was reading: https://en.wikipedia.org/wiki/Counting_sort and https://www.geeksforgeeks.org/counting-sort/
There is one little detail which I don't get at all: why complicate things when they can be so much easier? What's the problem with allocating an array of size k, where the range of numbers is [1...k], counting how many times each number appeared, and then walking down the array and printing according to the counter in each cell?
What's the problem with allocating an array of size k, where the range of numbers is [1...k], counting how many times each number appeared, and then walking down the array and printing according to the counter in each cell?
From your phrase "how many times each number appeared", it sounds like you're picturing an array of positive integers, where you want to sort them in increasing order, and where you can use those integers directly as indices in your helper array?
But that's not what the Wikipedia article describes. The algorithm in the Wikipedia article is for an array whose elements can have whatever data-type we choose, provided there's a function key that maps from that data-type to the set of indices in the helper array, with the property that we want to stably sort elements according to the result of key (so, if key(x) < key(y) then we want to sort x before y, and if key(x) = key(y) then we want to keep x and y in the same order they originally had).
In particular, the counting-sort algorithm in the Wikipedia article is useful as a component of radix sort: first you sort by the last digit (using a key function that gives the last digit of a number), then by the second-to-last digit, and so on, until an array of numbers is sorted.
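As a rough sketch of that more general, key-based version (Python rather than the article's pseudocode; counting_sort and key are illustrative names):
def counting_sort(items, k, key):
    """Stable counting sort of items whose key(x) lies in range(k)."""
    count = [0] * k
    for x in items:
        count[key(x)] += 1
    # Prefix sums: count[i] becomes the index where the next element
    # with key i should be placed.
    total = 0
    for i in range(k):
        count[i], total = total, total + count[i]
    out = [None] * len(items)
    for x in items:              # left-to-right pass keeps the sort stable
        out[count[key(x)]] = x
        count[key(x)] += 1
    return out

# Radix sort on two-digit numbers: sort by last digit, then by first digit.
nums = [42, 17, 40, 12, 97]
nums = counting_sort(nums, 10, key=lambda x: x % 10)
nums = counting_sort(nums, 10, key=lambda x: x // 10)
print(nums)    # [12, 17, 40, 42, 97]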
There is one little detail which I don't get at all: why complicate things when they can be so much easier?
A pro tip: we all usually think that our own code is "easier" and that other people are "complicating things", because code is easier to write than to read, so the code that we understand best is the code that we've come up with ourselves.
As it happens, in this case the Wikipedia code really is more complicated, because it serves a much more general use-case than you were picturing; but in general, it's not a good idea to just assume that everyone will agree that your code is the easy version and that others' is unnecessarily complicated.

Hash tables Ω(n^2) runtime?

I am really confused about this. Having read the textbook and done exercises I still don't get how it works, and unfortunately I can't go in person to see the professor and it's somewhat difficult to get in touch (summer online course, different time zones). I feel like it would 'click' if I just understood how to do this problem. The textbook details hash functions and runtime individually but I feel like this question is outside the scope of what we've learned. If someone could point me at anything that might help, that would be great.
1) Consider the process of inserting m keys into a hash table T[0..m − 1], where m is a prime, and we use open addressing. The hash function we use is h(k, i) = (k + i) mod m. Give an example of m keys k1, k2 ... km, such that the following sequence of operations takes Ω(n^2) time:
insert(k1), insert(k2), ..., insert(km)
I understand that insert operations are supposed to take O(1) time or, in some cases, O(n). How exactly am I supposed to come up with keys that will turn that into Ω(n^2) time? I'm hoping to understand this and I feel like I'm missing some huge hint, because the textbook chapter seems simple, makes sense to me, and doesn't help with this at all. In the question it's stated that m is a prime, is this important? I'm just so lost, and Google for once fails me.
The keyword here is hash collision:
In order for a hash function to work well, you need the hash values of your inputs to be well-distributed over all m slots the entries are stored in. If the hash table has about as many slots as elements inserted, you can expect every element to be stored at (or near) its hash value (meaning only a small amount of probing is necessary), making access, insertion and deletion constant-time operations.
If, however, you find different input values for which the hash function maps to the same value every time (collisions), then during insertion the probing step will have to skip over all previously added elements, taking Ω(n) time per element on average. Thus we get a runtime of Ω(n²).
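As a concrete illustration (a small Python sketch, not required by the exercise): with h(k, i) = (k + i) mod m you can force this by choosing keys that are all congruent to 0 mod m, so every insertion starts probing at slot 0 and has to walk past everything inserted before it:
m = 11                                # table size, a prime
keys = [j * m for j in range(m)]      # 0, 11, 22, ... all hash to slot 0
table = [None] * m
probes = 0
for k in keys:
    i = 0
    while table[(k + i) % m] is not None:   # open addressing with h(k, i) = (k + i) mod m
        i += 1
        probes += 1
    table[(k + i) % m] = k
print(probes)    # 0 + 1 + ... + (m-1) = m(m-1)/2 probes, i.e. quadratic in total
(For this particular construction the primality of m isn't what matters; primality matters for probe sequences like double hashing, where it guarantees the whole table gets probed.)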

Efficiently calculating container hash codes

The algorithm I know about for calculating the hash code of containers works by combining the hashes of all the elements in it, recursively. How the hashes are combined is irrelevant to my question. But because the algorithm recurses over every element, the calculation can become very expensive: O(n), where n is the total number of elements reachable.
My question is whether there are any more efficient methods. For example, if you have an array with 100k elements, you could calculate the hash by combining the hashes of only 100 of the elements it contains. That would make the calculation 1000 times faster, while still being a good hash function, wouldn't it?
The 100 elements you pick could be the first 100, or every 1000th (in the above example), or picked using some other deterministic formula.
So: can you either tell me why my idea can't work, or point me to where it has already been investigated? For example, has any programming language implemented "sub-O(n) sequence hashing" like I'm proposing?
In general, designing an appropriate hash function requires trading off computation time against quality, and this will be particularly true for very large objects.
Hashing only a fixed-size subset of a large object is a valid strategy (Lua uses this strategy for hashing large strings, for example), but it can obviously lead to problems if the hashed objects have few differences and it happens that the differences are not in the hashed subset. That opens the possibility of denial-of-service attacks (or inputs which accidentally trigger the same problem), so it is not generally a good idea if you are hashing uncontrolled inputs. (And if you're using the hash as part of a cryptographic exercise, then omitting part of the object makes falsification trivial, so in that context it's a really bad idea.)
Assuming you're using the hash as part of a database indexing strategy (that is, a hash table), remember that in the end you will need to compare the value being looked up with each potential match in the table; those comparisons are necessarily O(n) (unless you believe that almost all lookups will fail). Each false positive requires an additional comparison, so the quality-versus-computation-time tradeoff may turn out to be a false economy.
But, in the end, there is no definitive answer; you will have to decide based on the precise use case you have, including a consideration of what you are using the hash for, what the distribution of the data is (or is likely to be) and so on.
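As a rough Python illustration of the sampled-subset strategy discussed above (the stride and the way the element hashes are combined are arbitrary choices here, not any particular language's scheme):
def sampled_hash(seq, max_samples=100):
    """Hash a long sequence by mixing in at most max_samples evenly spaced
    elements plus the length. Sequences that differ only outside the sampled
    positions will collide, which is the trade-off discussed above."""
    n = len(seq)
    step = max(1, n // max_samples)
    h = hash(n)
    for i in range(0, n, step):
        h = hash((h, seq[i]))
    return h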

Fuzzy matching deduplication in less than exponential time?

I have a large database (potentially millions of records) with relatively short strings of text (on the order of street addresses, names, etc.).
I am looking for a strategy to remove inexact duplicates, and fuzzy matching seems to be the method of choice. My issue: many articles and SO questions deal with matching a single string against all records in a database. I am looking to deduplicate the entire database at once.
The former would be a linear time problem (comparing a value against a million other values, calculating some similarity measure each time). The latter is an exponential time problem (compare every record's values against every other record's value; for a million records, that's approx 5 x 10^11 calculations vs the 1,000,000 calculations for the former option).
I'm wondering if there is another approach than the "brute-force" method I mentioned. I was thinking of possibly generating a string to compare each record's value against, then grouping strings that had roughly equal similarity measures, and then running the brute-force method within these groups. I wouldn't achieve linear time, but it might help. Also, if I'm thinking through this properly, this could miss a potential fuzzy match between strings A and B because their similarity to string C (the generated check-string) is very different, despite A and B being very similar to each other.
Any ideas?
P.S. I realize I may have used the wrong terms for time complexity - it is a concept that I have a basic grasp of, but not well enough that I could drop an algorithm into the proper category on the spot. If I used the terms wrong, I welcome corrections, but hopefully I got my point across at least.
Edit
Some commenters have asked, given fuzzy matches between records, what my strategy was to choose which ones to delete (i.e. given "foo", "boo", and "coo", which would be marked the duplicate and deleted). I should note that I am not looking for an automatic delete here. The idea is to flag potential duplicates in a 60+ million record database for human review and assessment purposes. It is okay if there are some false positives, as long as it is a roughly predictable / consistent amount. I just need to get a handle on how pervasive the duplicates are. But if the fuzzy matching pass-through takes a month to run, this isn't even an option in the first place.
Have a look at http://en.wikipedia.org/wiki/Locality-sensitive_hashing. One very simple approach would be to divide up each address (or whatever) into a set of overlapping n-grams. Thus STACKOVERFLOW becomes the set {STACK, TACKO, ACKOV, CKOVE... , RFLOW}. Then use a large hash-table or a sort-merge to find colliding n-grams and check the collisions with a fuzzy matcher. Thus STACKOVERFLOW and SXACKOVRVLOX will collide because both are associated with the colliding n-gram ACKOV.
A next level up in sophistication is to pick a random hash function - e.g. HMAC with an arbitrary key - and, of the n-grams you find, keep only the one with the smallest hashed value. Then you have to keep track of fewer n-grams, but you will only see a match if the smallest hashed value in both cases is ACKOV. There is obviously a trade-off here between the length of the n-grams and the probability of false hits. In fact, what people seem to do is to make n quite small and get higher precision by concatenating the results from more than one hash function for the same record, so you need a match in multiple different hash functions at the same time - I presume the probabilities work out better this way. Try googling for "duplicate detection minhash".
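A toy Python sketch of the n-gram blocking step (ngrams, candidate_pairs and fuzzy_match are illustrative names, and difflib's ratio stands in for whatever fuzzy matcher you actually use):
from collections import defaultdict
from difflib import SequenceMatcher

def ngrams(s, n=5):
    s = s.upper()
    return {s[i:i + n] for i in range(max(1, len(s) - n + 1))}

def candidate_pairs(strings, n=5):
    # Group record ids by shared n-gram; only these pairs get fuzzy-compared.
    blocks = defaultdict(list)
    for rec_id, s in enumerate(strings):
        for g in ngrams(s, n):
            blocks[g].append(rec_id)
    pairs = set()
    for ids in blocks.values():
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                pairs.add((ids[i], ids[j]))
    return pairs

def fuzzy_match(a, b, threshold=0.8):
    return SequenceMatcher(None, a, b).ratio() >= threshold

records = ["STACKOVERFLOW", "SXACKOVRVLOX", "123 MAIN ST", "123 MAIN STREET"]
dupes = [(i, j) for i, j in candidate_pairs(records)
         if fuzzy_match(records[i], records[j])]
print(dupes)    # only pairs sharing an n-gram were ever compared
Very common n-grams produce very large blocks, so in practice you would typically cap or split those blocks to keep the number of candidate pairs under control.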
I think you may have miscalculated the complexity of comparing all the combinations. If comparing one string with all other strings is linear, then, because the strings are short, each individual comparison is O(1). The process of comparing each string with every other string is not exponential but quadratic, which is not all bad: you are comparing nC2 = n(n-1)/2 pairs of strings, so it's just O(n^2).
I couldn't think of a way to sort them into an order, as you can't write an objective comparator, but even if you could, sorting would take O(n log n) for merge sort, and since you have so many records and would probably prefer to use no extra memory, you would use quicksort, which takes O(n^2) in the worst case - no improvement over the worst-case time of brute force.
You could use a Levenshtein transducer, which "accept[s] a query term and return[s] all terms in a dictionary that are within n spelling errors away from it". Here's a demo.
Pairwise comparison of all the records is O(N^2), not exponential. There are basically two ways to cut down on that complexity.
The first is blocking, where you only compare records that already have something in common that's easy to compute, like the first three letters or a common n-gram. This is basically the same idea as Locality-Sensitive Hashing. The dedupe python library implements a number of blocking techniques, and its documentation gives a good overview of the general approach.
In the worst case, pairwise comparison with blocking is still O(N^2). In the best case it is O(N). Neither the best nor the worst case is really met in practice. Typically, blocking reduces the number of pairs to compare by over 99.9%.
There are some interesting alternative paradigms for record linkage that are not based on pairwise comparisons. These have better worst-case complexity guarantees. See the work of Beka Steorts and Michael Wick.
I assume this is a one-time cleanup. I think the problem won't be having to do so many comparisons, it'll be having to decide what comparisons are worth making. You mention names and addresses, so see this link for some of the comparison problems you'll have.
It's true you have to do almost 500 billion brute-force compares for comparing a million records against themselves, but that's assuming you never skip any records previously declared a match (ie, never doing the "break" out of the j-loop in the pseudo-code below).
My pokey E-machines T6532 2.2gHz manages to do 1.4m seeks and reads per second of 100-byte text file records, so 500 billion compares would take about 4 days. Instead of spending 4 days researching and coding up some fancy solution (only to find I still need another x days to actually do the run), and assuming my comparison routine can't compute and save the keys I'd be comparing, I'd just let it brute-force all those compares while I find something else to do:
for i = 1 to LASTREC-1
    seektorec(i)
    getrec(i) into a
    for j = i+1 to LASTREC
        getrec(j) into b
        if similarrecs(a, b) then [gotahit(); break]
Even if a given run only locates easy-to-define matches, hopefully it reduces the remaining unmatched records to a more reasonable smaller set for which further brute-force runs aren't so time-consuming.
But it seems unlikely that similarrecs() can't independently compute and save the portions of a and b being compared, in which case the much more efficient approach is:
for i = 1 to LASTREC
    getrec(i) into a
    write fuzzykey(a) into scratchfile
sort scratchfile
for i = 1 to LASTREC-1
    if scratchfile(i) = scratchfile(i+1) then gotahit()
Most databases can do the above in one command line, if you're allowed to invoke your own custom code for computing each record's fuzzykey().
In any case, the hard part is going to be figuring out what makes two records a duplicate, per the link above.
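A compact in-memory Python version of the fuzzykey approach above, with a deliberately crude, hypothetical fuzzykey() (uppercase, strip punctuation, expand one abbreviation); the key function is the part you would have to tailor to your data:
import re

def fuzzykey(record):
    # Hypothetical normalization; replace with whatever makes two records "the same".
    s = re.sub(r"[^A-Z0-9]", "", record.upper())
    return s.replace("STREET", "ST")

def flag_duplicates(records):
    # Sort by fuzzy key, then flag adjacent records with equal keys.
    keyed = sorted((fuzzykey(r), i) for i, r in enumerate(records))
    return [(i1, i2) for (k1, i1), (k2, i2) in zip(keyed, keyed[1:]) if k1 == k2]

print(flag_duplicates(["12 Main Street", "12 MAIN ST.", "9 Elm Ave"]))
# [(0, 1)]: the two Main Street records share the same fuzzy key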
Equivalence relations are particularly nice kinds of matching; they satisfy three properties:
reflexivity: for any value A, A ~ A
symmetry: if A ~ B, then necessarily B ~ A
transitivity: if A ~ B and B ~ C, then necessarily A ~ C
What makes these nice is that they allow you to partition your data into disjoint sets such that each pair of elements in any given set are related by ~. So, what you can do is apply the union-find algorithm to first partition all your data, then pick out a single representative element from each set in the partition; this completely de-duplicates the data (where "duplicate" means "related by ~"). Moreover, this solution is canonical in the sense that no matter which representatives you happen to pick from each partition, you get the same number of final values, and each of the final values are pairwise non-duplicate.
Unfortunately, fuzzy matching is not an equivalence relation, since it is presumably not transitive (though it's probably reflexive and symmetric). The result of this is that there isn't a canonical way to partition the data; you might find that any way you try to partition the data, some values in one set are equivalent to values from another set, or that some values from within a single set are not equivalent.
So, what behavior do you want, exactly, in these situations?
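For what it's worth, if you do decide to take the transitive closure of the fuzzy matches (so "foo" ~ "boo" and "boo" ~ "coo" put all three in one group), a small union-find sketch that turns matched pairs into clusters could look like this:
def cluster(num_records, matched_pairs):
    # Union-find with path halving: group ids connected (directly or
    # transitively) by a matched pair.
    parent = list(range(num_records))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in matched_pairs:
        parent[find(a)] = find(b)
    groups = {}
    for i in range(num_records):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

print(cluster(4, [(0, 1), (1, 2)]))    # [[0, 1, 2], [3]]
Whether that behavior is what you want is exactly the question above: transitive closure can chain together records that never matched each other directly.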

Efficiently estimating the number of unique elements in a large list

This problem is a little similar to the one solved by reservoir sampling, but not the same. I think it's also a rather interesting problem.
I have a large dataset (typically hundreds of millions of elements), and I want to estimate the number of unique elements in this dataset. There may be anywhere from a few, to millions of unique elements in a typical dataset.
Of course the obvious solution is to maintain a running hashset of the elements you encounter and count them at the end; this would yield an exact result, but would require me to carry a potentially large amount of state with me as I scan through the dataset (i.e. all unique elements encountered so far).
Unfortunately, in my situation this would require more RAM than is available to me (noting that the dataset may be far larger than available RAM).
I'm wondering if there would be a statistical approach to this that would allow me to do a single pass through the dataset and come up with an estimated unique element count at the end, while maintaining a relatively small amount of state while I scan the dataset.
The input to the algorithm would be the dataset (an Iterator in Java parlance), and it would return an estimated unique object count (probably a floating point number). It is assumed that these objects can be hashed (ie. you can put them in a HashSet if you want to). Typically they will be strings, or numbers.
You could use a Bloom Filter for a reasonable lower bound. You just do a pass over the data, counting and inserting items which were definitely not already in the set.
This problem is well-addressed in the literature; a good review of various approaches is http://www.edbt.org/Proceedings/2008-Nantes/papers/p618-Metwally.pdf. The simplest approach (and most compact for very high accuracy requirements) is called Linear Counting. You hash elements to positions in a bitvector just like you would a Bloom filter (except only one hash function is required), but at the end you estimate the number of distinct elements by the formula D = -total_bits * ln(unset_bits/total_bits). Details are in the paper.
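A minimal Python sketch of Linear Counting as described above (the bit-array size and the tuple-salted built-in hash are arbitrary choices for illustration):
import math

def linear_count(items, total_bits=1 << 20, seed=0):
    # Single pass: set one bit per hashed item, then apply
    # D = -total_bits * ln(unset_bits / total_bits).
    bits = bytearray(total_bits // 8)
    for x in items:
        h = hash((seed, x)) % total_bits
        bits[h >> 3] |= 1 << (h & 7)
    set_bits = sum(bin(b).count("1") for b in bits)
    unset_bits = total_bits - set_bits
    if unset_bits == 0:
        raise ValueError("bit array saturated; use more bits")
    return -total_bits * math.log(unset_bits / total_bits)

data = (i % 10_000 for i in range(100_000))    # stream with 10,000 distinct values
print(round(linear_count(data)))               # close to 10000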
If you have a hash function that you trust, then you could maintain a hashset just like you would for the exact solution, but throw out any item whose hash value is outside of some small range. E.g., use a 32-bit hash, but only keep items where the first two bits of the hash are 0. Then multiply by the appropriate factor at the end to approximate the total number of unique elements.
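A sketch of that sampling idea in Python: keep only items whose salted hash lands in a 1/2^k slice of the hash space, count those exactly, and scale back up (the salt and k = 6, i.e. a 1/64 sample, are arbitrary here):
def sampled_distinct_count(items, k=6, seed=12345):
    # Keep items whose salted hash has its k low bits all zero; the kept
    # set is about 1/2^k the size of the full distinct set, so scale up.
    mask = (1 << k) - 1
    sample = set()
    for x in items:
        if hash((seed, x)) & mask == 0:
            sample.add(x)
    return len(sample) * (1 << k)

data = (i % 10_000 for i in range(100_000))
print(sampled_distinct_count(data))    # roughly 10000, with sampling error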
Nobody has mentioned the approximate algorithm designed specifically for this problem: HyperLogLog.
