Compute the similarity between two lists of objects - algorithm

I'd like to compute the similarity between two lists of different lengths. In particular, the similarity has to satisfy the following conditions:
-Given two lists A and B, if A = B then similarity(A,B) = 1.
-In general, if B contains A, then similarity(A,B) -> 1. However, the measure should also take into account the number of elements in the two lists (e.g. if A contains 1000 objects and B just one, which is also contained in A, then similarity(A,B) -> 0).
-similarity(A,B) also defines a threshold T: values of similarity greater than T indicate that the two lists are similar.
Cosine similarity is probably related to this problem, but I have no idea how to work with subsets and a threshold.
I have also found different approaches, but the threshold parameter is not specified:
-A Similarity Measure for Indefinite Rankings
-Kendall rank correlation coefficient

I think you are looking for some kind of set similarity.
The two most prominent measures for that are the Jaccard index and the Sørensen–Dice coefficient.

In your case, using the Jaccard similarity coefficient might help.
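For example, a minimal Python sketch of both coefficients, with a hypothetical threshold T (the 0.75 value is just an illustration, not something prescribed by either measure):

    def jaccard(a, b):
        """|A ∩ B| / |A ∪ B| -- 1.0 when the lists contain the same elements."""
        sa, sb = set(a), set(b)
        if not sa and not sb:
            return 1.0
        return len(sa & sb) / len(sa | sb)

    def dice(a, b):
        """2|A ∩ B| / (|A| + |B|) -- the Sørensen–Dice coefficient."""
        sa, sb = set(a), set(b)
        if not sa and not sb:
            return 1.0
        return 2 * len(sa & sb) / (len(sa) + len(sb))

    T = 0.75  # hypothetical threshold, to be tuned for your data

    A = list(range(1000))
    B = [0]                      # one element, contained in A
    print(jaccard(A, A))         # 1.0   -> identical lists
    print(jaccard(A, B))         # 0.001 -> tiny subset of a big list, similarity near 0
    print(jaccard(A, B) > T)     # False -> below the threshold, not similar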

Algorithm to best match item pairs given similarity score

I am trying to match two lists of products by name.
Products come from different websites, and their names may vary from one website to the other in many subtle ways, e.g. "iPhone 128 GB" vs "Apple iPhone 128GB".
The product lists intersect, but are not equal and one is not a superset of the other; i.e. some products from list A are not in list B, and vice versa.
Given an algorithm that compares two strings (product names) and returns a similarity score between 0 and 1 (I already have a satisfactory implementation here), I'm looking for an algorithm that performs an optimal match of list A to list B.
In other words, I think I'm looking for an algorithm that maximizes the sum of all similarity scores across the matches.
Note that a product from one list must be matched to at most one product from the other list.
My initial idea
for each product in A, get the similarity with each product in B, and retain the product that yields the highest score, provided that it exceeds a certain threshold, such as 0.75. Match these products.
if the product with the highest score was already matched to another product in A earlier in the loop, take the second-to-highest, provided that it exceeds the threshold above. Match to this one instead.
etc.
My worry with this naive implementation is that if there's a better match later in the loop, but the product from B has already been assigned to another product from A in a previous iteration, the matching is not optimal.
An improved version
To ensure that the product is matched to its highest similarity counterpart, I thought of the following implementation:
pre-compute the similarity scores for all A-B pairs
discard the similarities lower than the threshold used above
order by similarity, highest first
for each pair, if neither product A nor product B has already been matched, match these products.
This algorithm should optimally match product pairs, ensuring that each pair got the highest similarity.
My worry is that it's very compute- and memory-intensive: say I have 5,000 products in both lists, that is 25,000,000 similarity scores to pre-compute and potentially store in memory (or database); in reality it will be lower due to the minimum required threshold, but it can still get very large and is still CPU intensive.
Did I miss something?
Is there a more efficient algorithm that gives the same output as this improved version?
Your model could be reformulated in graph terms: consider a complete weighted bipartite graph, where the vertices of one part are the names from list A, the vertices of the other part are the names from list B, and each edge is weighted with the precomputed similarity score of its endpoints.
Now your problem looks really close to the dense assignment problem, whose optimal solution can be found with the Hungarian algorithm (O(n³) complexity).
If an optimal solution is not your final goal and a good approximation to the optimum would also satisfy your requirements, try heuristic algorithms for the assignment problem; there are other topics with brief overviews of them.
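If SciPy happens to be available, the dense assignment problem is already solved by scipy.optimize.linear_sum_assignment, so a rough sketch could look like this (the similarity matrix here is random placeholder data, and the 0.75 threshold is the one assumed in the question):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Placeholder similarity matrix: rows = products from list A,
    # columns = products from list B, entries in [0, 1].
    rng = np.random.default_rng(0)
    sim = rng.random((5, 5))

    # maximize=True turns the usual min-cost assignment into a max-similarity matching.
    rows, cols = linear_sum_assignment(sim, maximize=True)

    threshold = 0.75  # the assumed threshold from the question
    matches = [(r, c) for r, c in zip(rows, cols) if sim[r, c] >= threshold]
    print(matches)
    print(sim[rows, cols].sum())  # total similarity of the optimal assignment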
Your second algorithm should provide a decent output, but it's not optimal. Check the following case:
Set0 Set1
A C
B D
Similarities:
A-C = 900
A-D = 850
B-C = 850
B-D = 0
Your algorithm's output: [(A,C), (B,D)]. Value 900.
Optimal output: [(A,D), (B,C)]. Value 1700.
The problem you are working with is exactly the assignment problem, which is "finding, in a weighted bipartite graph, a matching in which the sum of weights of the edges is as large as possible". There are many ways to solve this problem optimally and efficiently.
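For what it's worth, a tiny sketch reproducing this counterexample shows the greedy pass from the question stopping at 900 while the alternative matching reaches 1700:

    sims = {('A', 'C'): 900, ('A', 'D'): 850, ('B', 'C'): 850, ('B', 'D'): 0}

    # The greedy pass from the question: highest similarity first,
    # skip a pair if either side is already matched.
    matched_a, matched_b, greedy = set(), set(), []
    for (a, b), s in sorted(sims.items(), key=lambda kv: kv[1], reverse=True):
        if a not in matched_a and b not in matched_b:
            greedy.append((a, b))
            matched_a.add(a)
            matched_b.add(b)

    print(greedy, sum(sims[p] for p in greedy))  # [('A', 'C'), ('B', 'D')] 900

    # The other complete matching does better:
    alt = [('A', 'D'), ('B', 'C')]
    print(alt, sum(sims[p] for p in alt))        # [('A', 'D'), ('B', 'C')] 1700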

Index structure for top-k queries on bitstrings

Given an array of bitstrings (all of the same length) and a query string Q, find the top-k strings most similar to Q, where the similarity between strings A and B is defined as the number of 1s in A AND B (the AND operation is applied bitwise).
I think there should be a classical result for this problem.
k is small (in the hundreds), while the number of vectors is in the hundreds of millions and the length of each vector is 512 or 1024.
One way to tackle this problem is to construct a K-Nearest Neighbor Graph (K-NNG) (digraph) with a Russell-Rao similarity function.
Note that efficient K-NNG construction is still an open problem, and none of the known solutions for this problem is general, efficient and scalable [quoting from Efficient K-Nearest Neighbor Graph Construction for Generic Similarity Measures - Dong, Charikar, Li 2011].
Your distance function is often called Russell-Rao similarity (see for example A Survey of Binary Similarity and Distance Measures - Choi, Cha, Tappert 2010). Note that Russell-Rao similarity is not a metric (see Properties of Binary Vector Dissimilarity Measures - Zhang, Srihari 2003): The "if" part of "d(x, y) = 0 iff x == y" is false.
In A Fast Algorithm for Finding k-Nearest Neighbors with Non-metric Dissimilarity - Zhang, Srihari 2002, the authors propose a fast hierarchical search algorithm to find k-NNs using a non-metric measure in a binary vector space. They use a parametric binary vector distance function D(β). When β=0, this function reduces to the Russell-Rao distance function. I wouldn't call it a "classical result", but this is the only paper I could find that examines this problem.
You may want to check these two surveys: On nonmetric similarity search problems in complex domains - Skopal, Bustos 2011 and A Survey on Nearest Neighbor Search Methods - Reza, Ghahremani, Naderi 2014. Maybe you'll find something I missed.
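For reference, a short sketch of how Russell-Rao similarity relates to the question's raw measure, treating bitstrings as Python integers (illustration only):

    def raw_similarity(a: int, b: int) -> int:
        """The question's measure: number of 1s in a AND b."""
        return bin(a & b).count("1")

    def russell_rao(a: int, b: int, n_bits: int) -> float:
        """Russell-Rao similarity: the same count divided by the vector length."""
        return raw_similarity(a, b) / n_bits

    q, x = 0b1011, 0b1110
    print(raw_similarity(q, x), russell_rao(q, x, 4))  # 2 0.5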
This problem can be solved by writing a simple Map and Reduce job. I'm not claiming that this is the best solution, nor that it is the only one.
Also, you have disclosed in the comments that k is in the hundreds, that there are millions of bitstrings, and that the size of each of them is 512 or 1024.
Mapper pseudo-code:
Given Q;
For every bitstring b, compute similarity = popcount(b & Q)
Emit (similarity, b)
Now, the combiner can consolidate the list of all bitStrings from every mapper that have the same similarity.
Reducer pseudo-code:
Consume (similarity, listOfBitStringsWithThisSimilarity);
Output them in decreasing order of similarity value.
From the output of the reducer you can extract the top-k bitstrings.
So, the MapReduce paradigm is probably the classical solution you are looking for.
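Outside of a cluster, the same scoring idea fits in a few lines; here is a rough single-machine sketch that keeps the k best candidates with a bounded heap, assuming the bitstrings are stored as Python ints:

    import heapq

    def top_k(bitstrings, q, k):
        """Return the k bitstrings maximising popcount(b & q)."""
        # heapq.nlargest keeps only about k candidates in memory at a time.
        return heapq.nlargest(k, bitstrings, key=lambda b: bin(b & q).count("1"))

    data = [0b1100, 0b1010, 0b0111, 0b1111]
    print(top_k(data, 0b1110, k=2))  # [15, 12] -- the two strings sharing the most set bits with Q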

Set distance as similarity metric for MinHashing algorithm

I am currently working on document clustering using the MinHashing technique. However, I am not getting the desired results, as MinHash is a rough estimation of Jaccard similarity and it doesn't suit my requirement.
This is my scenario:
I have a huge set of books, and if a single page is given as a query, I need to find the corresponding book from which this page was taken. The limitation is that I have features for the entire book, and it's impossible to get page-by-page features for the books. In this case, Jaccard similarity gives poor results if the book is too big. What I really want is the distance between the query page and the books (not vice versa). That is:
Given two sets A and B, I want the distance from A to B:
dis(A -> B) = |A ∩ B| / |A|
Is there a similar distance metric that gives the distance from set A to set B? Further, is it still possible to use the MinHashing algorithm with this kind of similarity metric?
We can estimate your proposed distance function using a similar approach as the MinHash algorithm.
For some hash function h(x), compute the minimal values of h over A and B. Denote these values h_min(A) and h_min(B). The MinHash algorithm relies on the fact that the probability that h_min(A) = h_min(B) is |A ∩ B| / |A ∪ B|. We may observe that the probability that h_min(A) <= h_min(B) is |A| / |A ∪ B|. We can then compute |A ∩ B| / |A| as the ratio of these two probabilities.
Like in the regular MinHash algorithm, we can approximate these probabilities by repeated sampling until the desired variance is achieved.
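A rough sketch of that estimator; the salted-hash construction below is only an illustration, not a tuned MinHash implementation:

    import hashlib

    def h(x, seed):
        """One of k cheap 'independent' hash functions (illustrative only)."""
        return int(hashlib.sha1(f"{seed}:{x}".encode()).hexdigest(), 16)

    def containment(A, B, k=256):
        """Estimate |A ∩ B| / |A| as P(hmin(A) = hmin(B)) / P(hmin(A) <= hmin(B))."""
        eq = le = 0
        for seed in range(k):
            hmin_a = min(h(x, seed) for x in A)
            hmin_b = min(h(x, seed) for x in B)
            eq += (hmin_a == hmin_b)
            le += (hmin_a <= hmin_b)
        return eq / le if le else 0.0

    page = set(range(50))           # features of the query page
    book = set(range(1000))         # features of the whole book (a superset here)
    print(containment(page, book))  # close to |page ∩ book| / |page| = 1.0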

"Covering" the space of all possible histogram shapes

There is a very expensive computation I must make frequently.
The computation takes a small array of numbers (with about 20 entries) that sums to 1 (i.e. the histogram) and outputs something that I can store pretty easily.
I have 2 things going for me:
I can accept approximate answers
The "answers" change slowly. For example: [.1 .1 .8 0] and [.1
.1 .75 .05] will yield similar results.
Consequently, I want to build a look-up table of answers off-line. Then, when the system is running, I can look-up an approximate answer based on the "shape" of the input histogram.
To be precise, I plan to look-up the precomputed answer that corresponds to the histogram with the minimum Earth-Mover-Distance to the actual input histogram.
I can only afford to store about 80 to 100 precomputed (histogram, computation result) pairs in my look-up table.
So, how do I "spread out" my precomputed histograms so that, no matter what the input histogram is, I'll always have a precomputed result that is "close"?
Finding N points in M-space that form a best spread-out set is more or less equivalent to hypersphere packing (1,2), and in general answers are not known for M>10. While a fair amount of research has been done to develop faster methods for hypersphere packings or approximations, it is still regarded as a hard problem.
It probably would be better to apply a technique like principal component analysis or factor analysis to as large a set of histograms as you can conveniently generate. The results of either analysis will be a set of M numbers such that linear combinations of histogram data elements weighted by those numbers will predict some objective function. That function could be the “something that you can store pretty easily” numbers, or could be case numbers. Also consider developing and training a neural net or using other predictive modeling techniques to predict the objective function.
Building on jwpat7's answer, I would apply k-means clustering to a huge set of randomly generated (and hopefully representative) histograms. This would ensure that your space is spanned with whatever number of exemplars (precomputed results) you can support, with roughly equal weighting for each cluster.
The trick, of course, will be generating representative data to cluster in the first place. If you can recompute from time to time, you can recluster based on the actual data in the system so that your clusters might get better over time.
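A minimal sketch of that idea, assuming scikit-learn is available and that random Dirichlet draws are an acceptable stand-in for representative histograms (note that k-means uses Euclidean distance here, not EMD):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Placeholder training data: 50,000 random 20-bin histograms, each summing to 1.
    histograms = rng.dirichlet(alpha=np.ones(20), size=50_000)

    # ~100 exemplars, matching the size of the affordable look-up table.
    km = KMeans(n_clusters=100, n_init=10, random_state=0).fit(histograms)
    exemplars = km.cluster_centers_  # precompute the expensive result for each of these

    def lookup(hist):
        """Index of the nearest exemplar (Euclidean distance, not EMD)."""
        return int(km.predict(hist.reshape(1, -1))[0])

    print(lookup(rng.dirichlet(np.ones(20))))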
I second jwpat7's answer, but my very naive approach was to consider the count of items in each histogram bin as a y value, to consider the x values as just 0..1 in 20 steps, and then to obtain parameters a,b,c that describe x vs y as a cubic function.
To get a "covering" of the histograms I just iterated through "possible" values for each parameter.
e.g. to get 27 histograms to cover the "shape space" of my cubic histogram model I iterated the parameters through -1 .. 1, choosing 3 values linearly spaced.
Now, you could change the histogram model to be quartic if you think your data will often be represented that way, or whatever model you think is most descriptive, as well as generate however many histograms to cover. I used 27 because three partitions per parameter for three parameters is 3*3*3=27.
For a more comprehensive covering, like 100, you would have to choose your ranges for each parameter more carefully. 100**(1/3) isn't an integer, so the simple num_covers**(1/num_params) solution wouldn't work, but for 3 parameters 4*5*5 would.
Since the actual values of the parameters could vary greatly and still achieve the same shape, it would probably be best to store ratios of them for comparison instead, e.g. for my 3 parameters, b/a and b/c.
Here is an 81 histogram "covering" using a quartic model, again with parameters chosen from linspace(-1,1,3):
edit: Since you said your histograms are described by arrays of ~20 elements, I figured fitting parameters would be very fast.
edit2: On second thought, I think using a constant term in the model is pointless; all that matters is the shape.
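For illustration, a rough sketch of the cubic-model covering described above; the 20-bin layout and the [-1, 1] parameter ranges are assumptions:

    import numpy as np
    from itertools import product

    x = np.linspace(0.0, 1.0, 20)     # 20 histogram bins mapped onto [0, 1]
    grid = np.linspace(-1.0, 1.0, 3)  # 3 values per parameter -> 3**3 = 27 combinations

    covering = []
    for a, b, c in product(grid, grid, grid):
        y = a * x**3 + b * x**2 + c * x   # cubic shape model without a constant term
        y = y - y.min()                   # shift so the histogram is non-negative
        if y.sum() > 0:
            covering.append(y / y.sum())  # normalise so each histogram sums to 1

    print(len(covering))                  # 26: the all-zero combination is dropped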

Better distance metrics besides Levenshtein for ordered word sets and subsequent clustering

I am trying to solve a problem that involves comparing large numbers of word sets for similarity and then clustering them into distinct groupings. Each set contains a large, ordered sequence of words drawn from a vocabulary of around 600+ words (very high dimensionality!). The solution needs to be as unsupervised as possible.
The data looks like
[Apple, Banana, Orange...]
[Apple, Banana, Grape...]
[Jelly, Anise, Orange...]
[Strawberry, Banana, Orange...]
...etc
The order of the words in each set matters ([Apple, Banana, Orange] is distinct from [Apple, Orange, Banana]).
The approach I have been using so far has been to use Levenshtein distance (limited by a distance threshold) as a metric calculated in a Python script, with each word being a unique identifier, generate a similarity matrix from the distances, and throw that matrix into k-medoids in KNIME for the groupings.
My questions are:
Is Levenshtein the most appropriate distance metric to use for this problem?
Is mean/medoid prototype clustering the best way to go about the groupings?
I haven't yet put much thought into validating the choice for 'k' in the clustering. Would evaluating an SSE curve of the clustering be the best way to go about this?
Are there any flaws in my methodology?
As an extension to the solution in the future, given training data, would anyone happen to have any ideas for assigning probabilities to cluster assignments? For example, set 1 has an 80% chance of being in cluster 1, etc.
I hope my questions don't seem too silly or the answers painfully obvious, I'm relatively new to data mining.
Thanks!
Yes, Levenshtein is a very suitable way to do this. But if the sequences vary in size much, you might be better off normalising these distances by dividing by the sum of the sequence lengths -- otherwise you will find that observed distances tend to increase for pairs of long sequences whose "average distance" (in the sense of the average distance between corresponding k-length substrings, for some small k) is constant.
Example: The pair ([Apple, Banana], [Carrot, Banana]) could be said to have the same "average" distance as ([Apple, Banana, Widget, Xylophone], [Carrot, Banana, Yam, Xylophone]) since every 2nd item matches in both, but the latter pair's raw Levenshtein distance will be twice as great.
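A small sketch of that normalisation over word sequences (plain dynamic-programming Levenshtein; a library implementation would be faster):

    def levenshtein(a, b):
        """Classic dynamic-programming edit distance over sequences of words."""
        prev = list(range(len(b) + 1))
        for i, wa in enumerate(a, 1):
            cur = [i]
            for j, wb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (wa != wb)))  # substitution
            prev = cur
        return prev[-1]

    def normalised(a, b):
        """Divide by the summed lengths so long sequences are not penalised."""
        return levenshtein(a, b) / (len(a) + len(b)) if (a or b) else 0.0

    p1 = (["Apple", "Banana"], ["Carrot", "Banana"])
    p2 = (["Apple", "Banana", "Widget", "Xylophone"],
          ["Carrot", "Banana", "Yam", "Xylophone"])
    print(levenshtein(*p1), levenshtein(*p2))  # 1 2       -- the raw distance doubles
    print(normalised(*p1), normalised(*p2))    # 0.25 0.25 -- the normalised distance stays constant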
Also bear in mind that Levenshtein does not make special allowances for "block moves": if you take a string, and move one of its substrings sufficiently far away, then the resulting pair (of original and modified strings) will have the same Levenshtein score as if the 2nd string had completely different elements at the position where the substring was moved to. If you want to take this into account, consider using a compression-based distance instead. (Although I say there that it's useful for computing distances without respect to order, it does of course favour ordered similarity to disordered similarity.)
Check out SimMetrics on SourceForge for a platform supporting a variety of metrics that you can use to evaluate which one works best for your task.
For a commercial version, check out K-Similarity from K-Now.co.uk.
