How to measure and interpret the diversity index of a discrete probability distribution? - entropy

I need to measure the diversity of a discrete probability distribution. For example, suppose a question is asked to 10 students: 9 of them answer yes, and only 1 answers no. This gives the distribution {yes: 0.9, no: 0.1}. Note that the set of categories is variable; it might be multiple choice. Is there a measure to quantify diversity? I used Shannon entropy earlier, but I couldn't interpret the result in an understandable form such as a percentage.
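One common way to make Shannon entropy readable as a percentage is to divide it by its maximum possible value, log(k) for k categories. A minimal sketch in Python (the function name is my own; the log base cancels, so any base works):

```python
import math

def normalized_entropy(probs):
    """Shannon entropy divided by its maximum log(k), where k is the
    number of categories. Returns a score in [0, 1]: 0 means all mass
    on one answer, 1 means a perfectly uniform spread."""
    k = len(probs)
    if k < 2:
        return 0.0  # a single category has no diversity to measure
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(k)

# {yes: 0.9, no: 0.1} -> about 0.47, i.e. roughly 47% of maximum diversity
print(round(normalized_entropy([0.9, 0.1]), 2))
```

Multiplying by 100 gives the percentage interpretation asked about: 0% = everyone gave the same answer, 100% = answers spread evenly across all choices.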

Related

Fast hill climbing algorithm that can stabilize when near optimal [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have a floating point number x in [1, 500] that generates a binary y of 1 with some probability p. I'm trying to find the x that generates the most 1s, i.e. has the highest p. I'm assuming there's only one maximum.
Is there an algorithm that converges quickly to the x with the highest p while making sure it doesn't jump around too much after that x is reached? Specifically, it would be great if it stabilizes once within 0.1% of the optimal x.
I know we can do this with simulated annealing, but I don't think I should hard-code the temperature schedule, because I need to use the same algorithm when x could be from [1, 3000] or when the p distribution is different.
This paper describes a smart hill-climbing algorithm. The idea is basically that you take n samples as starting points. The algorithm is as follows (simplified to one dimension for your problem):
Take n sample points in the search space. In the paper, the authors use Latin Hypercube Sampling, since the data in the paper is assumed to be high-dimensional. In your case, since it is one-dimensional, you can just use ordinary random sampling.
For each sample point, gather points from its "local neighborhood" and fit a best-fit quadratic curve. Take the new maximum candidate from the vertex of the quadratic. If the objective value at the new maximum candidate is actually higher than at the previous point, update the sample point to the new candidate. Repeat this step with a smaller "local neighborhood" size at each iteration.
Use the best point from the sample points
Restart: repeat step 2 and 3, and then compare the maximums. If there is no improvement, stop. If there is improvement, repeat again.
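The steps above can be sketched roughly as follows in Python. The function name, the 7-point neighborhood, and the 0.7 shrink factor are my own illustrative choices, not from the paper:

```python
import numpy as np

def quadratic_fit_climb(f, lo, hi, n_samples=5, iters=20, seed=0):
    """Hill climbing via local quadratic fits, sketching steps 1-3 above.
    f is the (possibly noisy) objective to maximize on [lo, hi]."""
    rng = np.random.default_rng(seed)
    best_x, best_y = lo, f(lo)
    for _ in range(n_samples):                     # step 1: random starting points
        x = rng.uniform(lo, hi)
        radius = (hi - lo) / 4                     # initial "local neighborhood"
        for _ in range(iters):                     # step 2: fit and move
            xs = np.clip(x + rng.uniform(-radius, radius, size=7), lo, hi)
            a, b, _ = np.polyfit(xs, [f(p) for p in xs], 2)
            if a < 0:                              # concave fit -> vertex is a maximum
                cand = float(np.clip(-b / (2 * a), lo, hi))
                if f(cand) > f(x):
                    x = cand
            radius *= 0.7                          # shrink the neighborhood
        if f(x) > best_y:                          # step 3: keep the best sample point
            best_x, best_y = x, f(x)
    return best_x
```

Step 4 (restarting) would just call this again and compare the results. The shrinking neighborhood is what gives the stabilization the question asks for: late iterations only propose candidates very close to the current point.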

How to select the number of cluster centroids in k-means [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am going through a list of algorithms that I found and trying to implement them for learning purposes. Right now I am coding k-means and am confused about the following.
How do you know how many clusters there are in the original data set?
Is there any particular rule I have to follow in choosing the initial cluster centroids, besides all centroids having to be different? For example, does the algorithm converge if I choose cluster centroids that are different but close together?
Any advice would be appreciated
Thanks
With k-means you are minimizing a sum of squared distances. One approach is to try all plausible values of k. As k increases the sum of squared distances should decrease, but if you plot the result you may see that the sum of squared distances decreases quite sharply up to some value of k, and then much more slowly after that. The last value that gave you a sharp decrease is then the most plausible value of k.
k-means isn't guaranteed to find the best possible answer each run, and it is sensitive to the starting values you give it. One way to reduce problems from this is to start it many times, with different starting values, and pick the best answer. It looks a bit odd if an answer for larger k is actually larger than an answer for smaller k. One way to avoid this is to use the best answer found for k clusters as the basis (with slight modifications) for one of the starting points for k+1 clusters.
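The elbow procedure described above can be sketched in Python with a minimal k-means. The farthest-first initialization is my own choice to keep the sketch deterministic; a library implementation with multiple random restarts is what you would use in practice:

```python
import numpy as np

def kmeans_sse(X, k, iters=100):
    """Minimal k-means (Lloyd's algorithm) returning the sum of squared
    distances that the elbow method plots against k."""
    # Farthest-first seeding: start from the first point, then repeatedly
    # add the point farthest from the centroids chosen so far.
    centroids = [X[0]]
    for _ in range(k - 1):
        d2 = ((X[:, None, :] - np.array(centroids)[None]) ** 2).sum(2).min(1)
        centroids.append(X[int(d2.argmax())])
    centroids = np.array(centroids)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centroids[None]) ** 2).sum(2)
        labels = d2.argmin(1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return ((X[:, None, :] - centroids[None]) ** 2).sum(2).min(1).sum()
```

Plotting `[kmeans_sse(X, k) for k in range(1, 8)]` on data with, say, three well-separated blobs shows the sum of squares dropping sharply up to k = 3 and only slowly after that.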
In standard k-means the K value is chosen by you, sometimes based on the problem itself (when you know how many classes exist, or how many classes you want to exist), other times a more or less arbitrary value. Typically the first iteration consists of randomly selecting K points from the dataset to serve as centroids; in the following iterations the centroids are adjusted.
After checking the k-means algorithm, I suggest you also look at k-means++, an improvement of the original that chooses better-spread initial centroids, avoiding the sometimes poor clusterings found by standard k-means with purely random initialization. (Note that k-means++ still requires you to choose K yourself; it improves the initialization, not the choice of K.)
If you need more specific details on implementation of some machine learning algorithm, please let me know.

Difference between Jaro-Winkler and Levenshtein distance? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I want to do fuzzy matching of millions of records from multiple files. I identified two algorithms for that: Jaro-Winkler and Levenshtein edit distance.
I was not able to understand the difference between the two. It seems Levenshtein gives the number of edits between two strings, while Jaro-Winkler provides a normalized score between 0.0 and 1.0.
My questions:
What are the fundamental differences between the two algorithms?
What is the performance difference between the two algorithms?
Levenshtein counts the number of edits (insertions, deletions, or substitutions) needed to convert one string to the other. Damerau-Levenshtein is a modified version that also considers transpositions as single edits. Although the output is the integer number of edits, this can be normalized to give a similarity value by the formula
1 - (edit distance / length of the larger of the two strings)
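As a sketch, here is the standard two-row dynamic-programming Levenshtein in Python, together with the normalization formula above (the function names are mine):

```python
def levenshtein(a, b):
    """Number of single-character insertions, deletions, or substitutions
    needed to turn string a into string b (two-row DP table)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def levenshtein_similarity(a, b):
    """1 - distance / length of the larger string, per the formula above."""
    if not a and not b:
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b))

print(levenshtein("kitten", "sitting"))                       # 3
print(round(levenshtein_similarity("kitten", "sitting"), 2))  # 0.57
```

The two-row version keeps memory at O(min edit table width) rather than storing the full matrix, which matters when comparing millions of strings.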
The Jaro algorithm is a measure of characters in common, where matching characters may be no farther apart than half the length of the longer string, with consideration for transpositions. Winkler modified this algorithm to give extra weight to agreement at the start of the string, supporting the idea that differences near the start are more significant than differences near the end. Jaro and Jaro-Winkler are suited to comparing shorter strings like words and names.
Deciding which to use is not just a matter of performance. It's important to pick a method that is suited to the nature of the strings you are comparing. In general though, both of the algorithms you mentioned can be expensive, because each string must be compared to every other string, and with millions of strings in your data set, that is a tremendous number of comparisons. That is much more expensive than something like computing a phonetic encoding for each string, and then simply grouping strings sharing identical encodings.
There is a wealth of detailed information on these algorithms and other fuzzy string matching algorithms on the internet. This one will give you a start:
A Comparison of Personal Name Matching: Techniques and Practical Issues
According to that paper, the four Jaro and Levenshtein algorithms I've mentioned rank, from fastest to slowest:
Jaro
Jaro-Winkler
Levenshtein
Damerau-Levenshtein
with the slowest taking 2 to 3 times as long as the fastest. Of course, these times depend on the lengths of the strings and on the implementations, and there are ways to optimize these algorithms that may not have been used.

Similarity of two whole texts using Levenshtein distance [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I have two text files which I'd like to compare. What I did is:
I've split both of them into sentences.
I've measured levenshtein distance between each of the sentences from one file with each of the sentences from second file.
I'd like to calculate the average similarity between those two text files, but I am having trouble deriving a meaningful value - the plain arithmetic mean (the sum of all the normalized distances divided by the number of comparisons) is obviously a bad idea.
How to interpret such results?
edit:
Distance values are normalized.
The Levenshtein distance has a maximum value: the length of the longer of the two input strings. It cannot get worse than that. So a normalized similarity index (0 = bad, 1 = match) for two strings a and b can be calculated as 1 - distance(a,b)/max(a.length, b.length).
Take one sentence from File A. You said you'd compare this to each sentence of File B. I guess you are looking for a sentence out of B which has the smallest distance (i.e. the highest similarity index).
Simply calculate the average of all those 'minimum similarity indexes'. This should give you a rough estimation of the similarity of two texts.
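A minimal Python sketch of that averaging scheme, assuming the texts are already split into sentences (the Levenshtein helper is the standard dynamic-programming version):

```python
def levenshtein(a, b):
    """Standard two-row dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def similarity(a, b):
    """Normalized similarity index: 0 = bad, 1 = match."""
    if not a and not b:
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b))

def text_similarity(sentences_a, sentences_b):
    """For each sentence of A, take its best match in B, then average
    those 'minimum similarity indexes' as described above."""
    best = [max(similarity(sa, sb) for sb in sentences_b)
            for sa in sentences_a]
    return sum(best) / len(best)
```

Note this measure is asymmetric: every sentence of A must find a match in B, but not vice versa. Averaging `text_similarity(a, b)` with `text_similarity(b, a)` is one simple way to symmetrize it.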
But what makes you think that two similar texts might have their sentences shuffled? My personal opinion is that you should also introduce stop word lists, synonyms, and the like.
Nevertheless: please also check trigram matching, which might be another good approach for what you are looking for.

Calculate integral of product of normal distributions efficiently [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I've got two normal PDFs, given by μ1, μ2, σ1 and σ2. What I need is the integral over the product of these functions - the solution to the problem: if X occurred at μ1 with an uncertainty expressed by σ1, and Y occurred at μ2 with an uncertainty expressed by σ2, what's the probability P(X=Y)?
x = linspace(-500, 500, 1000)
e1 = normpdf(x, mu1, sigma1)
e2 = normpdf(x, mu2, sigma2)
% element-wise product, integrated with the grid spacing accounted for
solution = trapz(x, e1 .* e2)
To visualise, e1 is blue, e2 green, and e1*e2 is red (magnified by factor 100 for visualisation):
Is there however a more direct way of computing solution given mu1, mu2, sigma1 and sigma2?
Thanks!
You should be able to do the integral easily enough, but it does not mean what you think it means.
A mathematical normal distribution yields a randomly chosen real, which you could think of as containing an infinite number of random digits after the decimal point. The chance of any two numbers from such distributions being the same (even if they are from the same distribution) is zero.
A continuous probability density function p(x) like the normal distribution does not give, at p(x), the probability of the random number being x. Roughly speaking, it says that if you have a small interval of width delta-x at x then the probability of a random number being inside that interval is delta-x times p(x). For exact equality, you have to set delta-x to zero, so again you come out with probability zero.
To compute the integral (whatever it means) you might note that N(x;u,o) is proportional to exp(-(x-u)^2/(2o^2)), neglecting normalizing factors that you can look up at http://en.wikipedia.org/wiki/Normal_distribution, and if you multiply two of these together you can add the stuff inside the exp(). If you do enough algebra you end up with another exponential with a quadratic inside, which turns into another normal distribution, up to some factors which you can pull outside the integral sign.
A better way of approaching this problem is to note that the difference of two normally distributed variables with means M1 and M2 and variances V1 and V2 is itself normally distributed, with mean M1 - M2 and variance V1 + V2. Using this distribution you can easily work out the probability that the difference of your two numbers falls within any range that catches your fancy, for example between -0.0001 and +0.0001.
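Carrying out that algebra gives a closed form: the integral of the product of two normal PDFs equals the density of a normal with mean μ2 and variance σ1² + σ2², evaluated at μ1 (equivalently, the density of the difference distribution evaluated at zero). A sketch in Python:

```python
import math

def product_integral(mu1, sigma1, mu2, sigma2):
    """Integral over x of N(x; mu1, sigma1) * N(x; mu2, sigma2).
    Closed form: the N(mu2, sigma1^2 + sigma2^2) density at mu1."""
    var = sigma1 ** 2 + sigma2 ** 2
    return math.exp(-(mu1 - mu2) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
```

This replaces the linspace/sum approximation entirely: it is exact, and it avoids having to choose grid bounds and account for the step size.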
