I want to compute the Jaccard similarity between two data sets based on the existence/nonexistence of a list of standard codes.
For example (x,y,z are data sets):
Data sets x and y don't have any standard codes (Null), therefore I set the list values as zeroes.
x = [0,0,0]
y = [0,0,0]
z = [0,1,0]
from sklearn.metrics import jaccard_similarity_score
jaccard_similarity_score(x,y),jaccard_similarity_score(x, z)
Jaccard sim between x and z is 0.66 (2/3). Is there any similarity measure that deals with set intersection between two empty sets? In my case, I want to set the similarity between data set x and y as 0, not 1 (as computed using Jaccard).
It depends on the case, but in your case I think you should set the Jaccard similarity of sets x and y to 1, because as you stated:
Data sets x and y don't have any standard codes (Null)
So one could argue that x and y are quite similar (both of them have no standard codes).
In any case, you should check whether the denominator of the fraction is zero and handle that case explicitly (for example, by returning a flag value such as -1).
A Jaccard similarity between two empty sets does not make sense (division by zero). Depending on the problem, Overlap similarity (size of intersection) might be an option. Alternatively, you could wrap your Jaccard similarity function with a check for two empty sets, and return 0 in this case.
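A minimal sketch of that last suggestion, using plain 0/1 lists like x, y, z above (jaccard_binary is a hypothetical helper, not part of scikit-learn). Note that it computes the set-based intersection over union of the codes themselves, which is why jaccard_binary(x, z) is 0 rather than the 0.66 reported above:
def jaccard_binary(a, b):
    # Jaccard index over the positions marked 1, with an explicit empty-set check
    intersection = sum(1 for u, v in zip(a, b) if u == 1 and v == 1)
    union = sum(1 for u, v in zip(a, b) if u == 1 or v == 1)
    if union == 0:
        return 0.0   # both code sets are empty: define the similarity as 0
    return intersection / union

x, y, z = [0, 0, 0], [0, 0, 0], [0, 1, 0]
print(jaccard_binary(x, y))   # 0.0 instead of a division by zero
print(jaccard_binary(x, z))   # 0.0 (no codes in common)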
I have a two-dimensional grid and need a reproducible, random value for every integer coordinate on this grid. This value should be as unique as possible; in a grid of, let's say, 1000 x 1000 it shouldn't occur twice.
To put it more mathematically: I need a function f(x, y) which gives a unique number no matter what x and y are, as long as each is in the range [0, 1000].
f(x, y) has to be reproducible and not have side-effects.
Probably there is some trivial solution but everything that comes to my mind, like multiplying x and y, adding some salt, etc. does not lead anywhere because the resulting number can easily occur multiple times.
One working solution I have is to use a randomizer and simply compute ALL values in the grid, but that is either too computationally heavy (if done every time a value is needed) or requires too much memory in my case (I want to avoid pre-computing all the values).
Any suggestions?
Huge thanks in advance.
I would use the zero-padded concatenation of your x and y as a seed for a built-in random generator. I'm actually using something like this in some of my current experiments.
I.e. x = 13, y = 42 would become int('0013' + '0042') = 130042 to use as random seed. Then you can use the random generator of your choice to get the kind (float, int, etc) and range of values you need:
Example in Python 3.6+:
import numpy as np
from itertools import product
X = np.zeros((1000, 1000))
for x, y in product(range(1000), range(1000)):
    # zero-padded concatenation of the coordinates as the seed
    np.random.seed(int(f'{x:04}{y:04}'))
    X[x, y] = np.random.random()
Each value in the grid is randomly generated, but independently reproducible.
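If precomputing the whole grid is too heavy, the same idea also works lazily, one coordinate at a time. A minimal sketch, assuming the seed x * grid_size + y (a variation on the zero-padded concatenation above, also collision-free for coordinates below grid_size) and a local generator so the global random state is untouched:
import numpy as np

def value_at(x, y, grid_size=1000):
    # One unique seed per coordinate; a fresh generator keeps the call side-effect free
    seed = x * grid_size + y
    return np.random.default_rng(seed).random()

# Reproducible: the same coordinate always yields the same value
assert value_at(13, 42) == value_at(13, 42)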
I have a small question, and I would be very happy if you could give me a solution or any idea for a solution to the probability distribution in the following setup:
I have a random variable x which follows an exponential distribution with parameter lambda1, and another variable y which follows an exponential distribution with parameter lambda2. z is a discrete value. How can I define the probability distribution of k in the following formula?
k=z-x-y
Thank you so much
Ok, let's start by rewriting the formula a bit:
k = z - x - y = -(x + y) + z = -(x + y - z)
The part in the parentheses looks manageable. Let's start with x + y. For random variables x and y, the PDF of their sum is the convolution of their PDFs.
q = x + y
PDF_q(q) = ∫ PDF_x(q - t) PDF_y(t) dt
where ∫ denotes integration over t. For x and y exponential, this convolution is known in closed form: when the lambdas differ it is the hypoexponential density lambda1*lambda2/(lambda2 - lambda1) * (exp(-lambda1*q) - exp(-lambda2*q)), and when the lambdas are equal it is Gamma(2, lambda), Gamma being the Gamma distribution.
If z is some fixed discrete value, then we can express the constant -z as a continuous RV with PDF
PDF(t) = δ(t + z)
where δ is the Dirac delta function; the peak is at -z, as expected. It is normalized, so its integral over t equals 1. This extends easily to a discrete RV: a sum of δ-functions at the corresponding values, multiplied by probabilities that sum to 1.
Again, we have a sum of two RVs with known PDFs, and the solution is a convolution, which is easy to compute thanks to the sifting property of the δ-function. So the final PDF of w = x + y - z would be
PDF_w(t) = PDF_q(t + z)
where PDF_q is the sum density from above (the hypoexponential expression for different lambdas, or the Gamma distribution for equal ones).
Finally, since k = -w, you just have to negate the argument, PDF_k(t) = PDF_w(-t) = PDF_q(z - t), and that's it.
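As a sanity check of this derivation, here is a small Monte Carlo sketch (the rates lambda1 = 1.0, lambda2 = 2.5 and the constant z = 3.0 are arbitrary choices for illustration); it compares a histogram of sampled k = z - x - y with the analytical PDF_k(t) = PDF_q(z - t):
import numpy as np

lam1, lam2, z = 1.0, 2.5, 3.0
rng = np.random.default_rng(0)

# Monte Carlo samples of k = z - x - y (numpy's exponential takes the scale 1/lambda)
x = rng.exponential(1 / lam1, size=1_000_000)
y = rng.exponential(1 / lam2, size=1_000_000)
k = z - x - y

def pdf_k(t):
    # x + y is hypoexponential (different lambdas); flip and shift: s = z - t >= 0
    s = z - t
    f = lam1 * lam2 / (lam2 - lam1) * (np.exp(-lam1 * s) - np.exp(-lam2 * s))
    return np.where(s >= 0, f, 0.0)

hist, edges = np.histogram(k, bins=200, density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - pdf_k(centers))))   # small (histogram noise only)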
I have a question: can we normalize the Levenshtein edit distance by dividing the edit-distance value by the lengths of the two strings?
I am asking this because, if we compare two strings of unequal length, the difference between the lengths of the two will be counted as well.
For example:
ed('has a', 'has a ball') = 4 and ed('has a', 'has a ball the is round') = 15.
If we increase the length of the string, the edit distance increases even though the strings are similar.
Therefore, I cannot set a fixed threshold for what a good edit distance value should be.
Yes, normalizing the edit distance is one way to put the differences between strings on a single scale from "identical" to "nothing in common".
A few things to consider:
Whether or not the normalized distance is a better measure of similarity between strings depends on the application. If the question is "how likely is this word to be a misspelling of that word?", normalization is a way to go. If it's "how much has this document changed since the last version?", the raw edit distance may be a better option.
If you want the result to be in the range [0, 1], you need to divide the distance by the maximum possible distance between two strings of the given lengths. That is, length(str1) + length(str2) for the LCS distance and max(length(str1), length(str2)) for the Levenshtein distance (see the sketch after this list).
The normalized distance is not a metric, as it violates the triangle inequality.
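Here is a minimal sketch of that normalization (the levenshtein helper below is a plain textbook dynamic-programming implementation, added only to keep the example self-contained):
def levenshtein(s1, s2):
    # Standard DP edit distance: insertions, deletions and substitutions cost 1
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        curr = [i]
        for j, c2 in enumerate(s2, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (c1 != c2)))   # substitution (free if equal)
        prev = curr
    return prev[-1]

def normalized_levenshtein(s1, s2):
    # Divide by the maximum possible distance so the result lies in [0, 1]
    if not s1 and not s2:
        return 0.0
    return levenshtein(s1, s2) / max(len(s1), len(s2))

print(normalized_levenshtein('has a', 'has a ball'))   # 0.5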
I used the following successfully:
const std::size_t len = std::max(s1.length(), s2.length());
// normalize by length, high score wins
const float fDist = float(len - levenshteinDistance(s1, s2)) / float(len);
Then chose the highest score. 1.0 means an exact match.
I used a normalized edit distance, or similarity (NES), which I think is very useful. It is defined by Daniel Lopresti and Jiangyin Zhou in Equation (6) of their paper: http://www.cse.lehigh.edu/~lopresti/Publications/1996/sdair96.pdf.
The NES in python is:
import math
def normalized_edit_similarity(m, d):
    # d : edit distance between the two strings
    # m : length of the shorter string
    return 1.0 / math.exp(d / (m - d))
print(normalized_edit_similarity(3, 0))
print(normalized_edit_similarity(3, 1))
print(normalized_edit_similarity(4, 1))
print(normalized_edit_similarity(5, 1))
print(normalized_edit_similarity(5, 2))
1.0
0.6065306597126334
0.7165313105737893
0.7788007830714049
0.513417119032592
More examples can be found in Table 2 in the above paper.
The variable m in the above function can be replaced with the length of the longer string, depending on your application.
Given that I have two lists that each contain a separate subset of a common superset, is there an algorithm to give me a similarity measurement?
Example:
A = { John, Mary, Kate, Peter } and B = { Peter, James, Mary, Kate }
How similar are these two lists? Note that I do not know all elements of the common superset.
Update:
I was unclear and I have probably used the word 'set' in a sloppy fashion. My apologies.
Clarification: Order is of importance.
If identical elements occupy the same position in the list, we have the highest similarity for that element.
The similarity decreases the farther apart the identical elements are.
The similarity is even lower if the element only exists in one of the lists.
I could even add the extra dimension that lower indices are of greater value, so a[1] == b[1] is worth more than a[9] == b[9], but that is mainly because I am curious.
The Jaccard Index (aka Tanimoto coefficient) is used precisely for the use case recited in the OP's question.
The Tanimoto coeff, tau, is equal to Nc divided by Na + Nb - Nc, or
tau = Nc / (Na + Nb - Nc)
Na, number of items in the first set
Nb, number of items in the second set
Nc, the size of the intersection of the two sets, i.e. the number of unique items common to both a and b
Here's Tanimoto coded as a Python function:
def tanimoto(x, y):
    w = [ns for ns in x if ns in y]   # items common to both lists
    return len(w) / float(len(x) + len(y) - len(w))
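For example, with the lists from the question (duplicates would need extra care, since the formula assumes sets):
A = ['John', 'Mary', 'Kate', 'Peter']
B = ['Peter', 'James', 'Mary', 'Kate']
print(tanimoto(A, B))   # 3 shared names / (4 + 4 - 3) = 0.6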
I would explore two strategies:
Treat the lists as sets and apply set ops (intersection, difference)
Treat the lists as strings of symbols and apply the Levenshtein algorithm
If you truly have sets (i.e., an element is simply either present or absent, with no count attached) and only two of them, just adding the number of shared elements and dividing by the total number of elements is probably about as good as it gets.
If you have (or can get) counts and/or more than two of them, you can do a bit better than that with something like cosine similarity or TF-IDF (term frequency * inverse document frequency).
The latter attempts to give lower weight to words that appear in all (or nearly all) of the "documents" -- i.e., the sets of words.
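A minimal sketch of that idea with scikit-learn, treating each list as a little "document" (joining the names into strings is just one convenient way to vectorize them, assumed here purely for illustration):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ['John Mary Kate Peter', 'Peter James Mary Kate']
tfidf = TfidfVectorizer().fit_transform(docs)
print(cosine_similarity(tfidf[0], tfidf[1]))   # cosine similarity in [0, 1]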
What is your definition of "similarity measurement"? If all you want is how many items the two sets have in common, you can add the cardinalities of A and B together and subtract the cardinality of the union of A and B from that sum (|A ∩ B| = |A| + |B| - |A ∪ B|).
If order matters, you can use the Levenshtein distance or another kind of edit distance.
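If order does matter, a quick order-aware option from the Python standard library is difflib.SequenceMatcher (Ratcliff/Obershelp rather than Levenshtein, but in the same spirit), which works directly on lists of names:
from difflib import SequenceMatcher

A = ['John', 'Mary', 'Kate', 'Peter']
B = ['Peter', 'James', 'Mary', 'Kate']
# ratio() = 2 * (matched elements) / (total elements); here 'Mary', 'Kate' match in order
print(SequenceMatcher(None, A, B).ratio())   # 0.5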
Is there a general way to convert between a measure of similarity and a measure of distance?
Consider a similarity measure like the number of 2-grams that two strings have in common.
2-grams('beta', 'delta') = 1
2-grams('apple', 'dappled') = 4
What if I need to feed this to an optimization algorithm that expects a measure of difference, like Levenshtein distance?
This is just an example...I'm looking for a general solution, if one exists. Like how to go from Levenshtein distance to a measure of similarity?
I appreciate any guidance you may offer.
Let d denote distance and s denote similarity. To convert a distance measure to a similarity measure, first normalize d to [0, 1] by using d_norm = d/max(d). Then the similarity measure is given by:
s = 1 - d_norm
where s is in the range [0, 1], with 1 denoting the highest similarity (the items in comparison are identical) and 0 denoting the lowest similarity (the largest distance).
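In code, a trivial sketch with NumPy (d is assumed to be an array of pairwise distances):
import numpy as np

d = np.array([0.0, 2.0, 5.0, 10.0])   # example distances
d_norm = d / d.max()                   # normalized to [0, 1]
s = 1 - d_norm
print(s)                               # [1. 0.8 0.5 0.]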
If your similarity measure (s) is between 0 and 1, you can use one of these:
1-s
sqrt(1-s)
-log(s)
(1/s)-1
Taking 1/similarity is not going to keep the properties of the distribution.
The best way is
distance(a -> b) = highest similarity - similarity(a -> b)
with the highest similarity being the largest similarity value observed. You hence flip your distribution: the highest similarity becomes distance 0, and so on.
Yes, there is a most general way to change between similarity and distance: a strictly monotone decreasing function f(x).
That is, with f(x) you can make similarity = f(distance) or distance = f(similarity). It works in both directions. Such a function works because the relation between similarity and distance is that one decreases when the other increases.
Examples:
These are some well-known strictly monotone decreasing candidates that work for non-negative similarities or distances:
f(x) = 1 / (a + x)
f(x) = exp(- x^a)
f(x) = arccot(ax)
You can choose the parameter a > 0 (e.g., a = 1); a small numerical check of the three candidates follows below.
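With a = 1 and some arbitrary illustration distances:
import numpy as np

a = 1.0
d = np.array([0.0, 0.5, 2.0, 10.0])   # example distances
print(1 / (a + d))                     # 1.0, 0.667, 0.333, 0.091
print(np.exp(-d**a))                   # 1.0, 0.607, 0.135, ~0.00005
print(np.pi / 2 - np.arctan(a * d))    # arccot(a*d): 1.571, 1.107, 0.464, 0.1
All three outputs are strictly decreasing in the distance, as required.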
Edit 2021-08
A very practical approach is to use the function sim2diss from the statistical software R. This function provides up to 13 methods to compute dissimilarities from similarities. Sadly, the methods are not explained at all: you have to look into the code :-\
similarity = 1/difference
and watch out for difference = 0
According to scikit learn:
Kernels are measures of similarity, i.e. s(a, b) > s(a, c) if objects a and b are considered “more similar” than objects a and c. A kernel must also be positive semi-definite.
There are a number of ways to convert between a distance metric and a similarity measure, such as a kernel. Let D be the distance, and S be the kernel:
S = np.exp(-D * gamma), where one heuristic for choosing gamma is 1 / num_features
S = 1. / (D / np.max(D))
In the case of Levenshtein distance, you could increase the sim score by 1 for every time the sequences match; that is, 1 for every time you didn't need a deletion, insertion or substitution. That way the metric would be a linear measure of how many characters the two strings have in common.
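As a rough sketch of the two kernel conversions quoted above (pairwise_distances and random data are used purely for illustration; the small epsilon is an added safeguard, since the second formula divides by zero on the diagonal where D == 0):
import numpy as np
from sklearn.metrics import pairwise_distances

X = np.random.rand(5, 3)                 # 5 samples, 3 features
D = pairwise_distances(X)                # Euclidean distance matrix

gamma = 1.0 / X.shape[1]                 # the 1 / num_features heuristic
S_rbf = np.exp(-D * gamma)               # first conversion
S_inv = 1.0 / (D / np.max(D) + 1e-12)    # second conversion, with epsilon guard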
In one of my projects (based on collaborative filtering) I had to convert correlation (the cosine between vectors), which ranges from -1 to 1 (closer to 1 means more similar, closer to -1 means more diverse), to a normalized distance (close to 0 means the distance is small, close to 1 means it is large).
In this case: distance ~ diversity
My formula was: dist = 1 - (cor + 1)/2
If you are converting similarity to diversity and the domain is [0, 1] in both cases, the simplest way is:
dist = 1 - sim
sim = 1 - dist
Cosine similarity is widely used for n-gram count or TFIDF vectors.
from math import pi, acos
def similarity(x, y):
    # x and y are dicts (e.g. collections.Counter) mapping n-grams to counts
    dot = sum(x[k] * y[k] for k in x if k in y)
    return dot / sum(v**2 for v in x.values())**.5 / sum(v**2 for v in y.values())**.5
According to Wikipedia, cosine similarity can also be turned into a formal distance metric, the angular distance 2 * acos(similarity) / pi, which obeys all the properties of a distance that you would expect (symmetry, non-negativity, the triangle inequality). The function below returns 1 minus that angular distance, i.e. an angular similarity that is 1 for identical vectors and 0 for orthogonal ones:
def angular_similarity(x, y):
    # 2 * acos(sim) / pi is the angular distance; subtracting it from 1 gives a similarity
    return 1 - 2 * acos(similarity(x, y)) / pi
Both of these measures range between 0 and 1 for vectors with non-negative components, such as n-gram counts.
If you have a tokenizer that produces N-grams from a string you could use these metrics like this:
>>> from Tokenizer import Tokenizer
>>> tokenizer = Tokenizer(ngrams=2, lower=True, nonwords_set=set(['hello', 'and']))
>>> from collections import Counter
>>> list(tokenizer('Hello World again and again?'))
['world', 'again', 'again', 'world again', 'again again']
>>> Counter(tokenizer('Hello World again and again?'))
Counter({'again': 2, 'world': 1, 'again again': 1, 'world again': 1})
>>> x = _
>>> Counter(tokenizer('Hi world once again.'))
Counter({'again': 1, 'world once': 1, 'hi': 1, 'once again': 1, 'world': 1, 'hi world': 1, 'once': 1})
>>> y = _
>>> sum(x[k]*y[k] for k in x if k in y) / sum(v**2 for v in x.values())**.5 / sum(v**2 for v in y.values())**.5
0.42857142857142855
>>> angular_similarity(x, y)
0.28196592805724774
I found the elegant inner product of Counter in this SO answer