Imagine I have a very long list of images and I want to rank them in order of how 'good' people think they are.
I don't want users to assign scores to images outright (1-10, etc.) and order by that; I'd like to try something new.
What I was thinking would be an interesting way of doing it is:
Show a user two random images, they pick the better one
Collect lots of 'comparisons'
Use all the comparisons to come up with some ordering
It turns out this approach is used regularly; for example (with features rather than images), this appears to be the way Uservoice's Smartvote works.
My question is whether there's a well-known way to take this long list of comparisons and build a relative ranking of all the images from them, without the level of complexity found in the research papers.
I've read a bunch of lectures and research papers but I was wondering if there was any sample code out there people might recommend?
Seems like you could just get some kind of numerical ranking system and then just sort based on that. Just borrow the algorithm from a win/loss sport, or chess, and treat each image comparison as a bout.
Did some looking; here's some sample code of what an algorithm like that looks like in Java.
And here's a library you can borrow in Python.
If you search for Elo you'll find a version of it in just about any language. Once you have your numerical image ratings, you can sort them any way you like. There are probably other ranking algorithms you could look into for win/loss competition; that was just the first that came up when I googled chess ranking.
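For concreteness, here's a minimal Elo-style sketch in Python; the starting rating of 1400 and K-factor of 32 are just common defaults, and the comparisons list is assumed to be your collected (winner, loser) pairs:

def elo_update(winner_rating, loser_rating, k=32):
    """Return updated (winner, loser) ratings after one duel."""
    expected_win = 1.0 / (1.0 + 10 ** ((loser_rating - winner_rating) / 400.0))
    winner_rating += k * (1.0 - expected_win)
    loser_rating -= k * (1.0 - expected_win)
    return winner_rating, loser_rating

comparisons = [("img_a", "img_b"), ("img_c", "img_a")]  # example (winner, loser) input
ratings = {}  # image_id -> rating; every image starts at the same default
for winner, loser in comparisons:
    w = ratings.setdefault(winner, 1400.0)
    l = ratings.setdefault(loser, 1400.0)
    ratings[winner], ratings[loser] = elo_update(w, l)

ranked = sorted(ratings, key=ratings.get, reverse=True)  # best-rated image first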
For every image, count the number of times it won a duel, and divide by the number of duels it took part in. This ratio is your ranking score.
Example:
A B, A C, A D, B C, B D
Yields
B: 67%, C: 50%, D: 50%, A: 33%
Unless you perform a huge number of comparisons, there will be many ties.
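For what it's worth, a minimal Python sketch of this ratio, assuming the duels are stored as (winner, loser) pairs:

from collections import Counter

def win_ratios(duels):
    """duels: iterable of (winner, loser) pairs -> {image: fraction of duels won}."""
    wins, played = Counter(), Counter()
    for winner, loser in duels:
        wins[winner] += 1
        played[winner] += 1
        played[loser] += 1
    return {image: wins[image] / played[image] for image in played}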
I am looking for any direction on how to implement the process below; you should not need to understand much about poker at all.
Below is a grid of possible two-card combinations.
Pocket pairs in blue, suited cards in yellow and off-suited in red.
Essentially there is a slider under the matrix which selects a percentage of possible combinations of two cards which a player could be dealt. However, you can see that it moves in a sort of linear fashion, towards the "better" cards.
These selections are also able to be parsed from strings e.g AA-88,AKo-AJo,KQo,AKs-AJs,KQs,QJs,JTs is 8.6% of the matrix.
I've looked around but cannot find questions about this specific selection process. I am not looking for "how to create this grid"; rather, how would I go about the selection process based on the sliding percentage? I am primarily a JavaScript developer, but snippets in any language are appreciated, if applicable.
My initial assumption is that there is some sort of weighting involved (favoured towards pairs over suited, and suited over off-suited), or could it just be predetermined and I'm overthinking this?
In my opinion there should be something along the lines of "grouping(s)" AND "a subsequent weighting" process. It should also be customisable for the user to provide an optimal experience (imo).
For example, if you look at the below:
https://en.wikipedia.org/wiki/Texas_hold_%27em_starting_hands#Sklansky_hand_groups
These are/were standard hand rankings created back in the 1970s/1980s; however, hand selection has become much more complicated since then. These kinds of groupings have changed a lot in 30 years, so poker players will want a custom user experience here.
But let's take a basic preflop scenario.
Combinations: pairs = 6, suited = 4, off-suited = 12
1 (AA:6, KK:6, QQ:6, JJ:6, AKs:4) = 28 combos
2 (AQs:4, TT:6, AK:16, AJs:4, KQs:4, 99:6) = 40
3 (ATs:4, AQ:16, KJs:4, 88:6, KTs:4, QJs:4) = 38
....
9 (87s:4, QT:12, Q8s:4, 44:6, A9:16, J8s:4, 76s:4, JT:16) = 66
Say for instance we only reraise the top 28/1326 of combinations (in theory there should be some deduction here, but for simplicity let's ignore that). We are only 3betting or reraising a very obvious and small percentage of hands; our holdings are obvious at around 2-4% of total hands. So a player may want to disguise their reraise or 3bet range with, say, 50% of the weakest hands from group 9, as a basic example.
Different decision trees and game theory can be used with "range building", so a simple ordered list may not be suitable for what you're trying to achieve. It depends on your program's purpose.
That said, if you're just looking to build an ordered list, you could take the X% of hands that players open with (say the average is 27%) and run a hand equity calculator simulation, tweaking the GitHub project below to get different hand rankings. https://github.com/andrewprock/pokerstove
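To sketch the slider-style selection itself (in Python rather than JavaScript; the ranked list and combo counts here are only illustrative placeholders):

TOTAL_COMBOS = 1326  # 52 choose 2

# Hypothetical ranked list: (hand, combo count), best hand first.
ranked_hands = [("AA", 6), ("KK", 6), ("QQ", 6), ("JJ", 6), ("AKs", 4),
                ("AQs", 4), ("TT", 6), ("AKo", 12)]  # ...and so on

def select_range(ranked_hands, slider_pct):
    """Take hands from the top until their combined combos would exceed the slider percentage."""
    budget = slider_pct / 100.0 * TOTAL_COMBOS
    selected, used = [], 0
    for hand, combos in ranked_hands:
        if used + combos > budget:
            break
        selected.append(hand)
        used += combos
    return selected, 100.0 * used / TOTAL_COMBOS  # the hands plus the percentage actually used

# select_range(ranked_hands, 2.2) -> (['AA', 'KK', 'QQ', 'JJ', 'AKs'], ~2.1%)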
There are also some lists at the bottom of this page.
http://www.propokertools.com/help/simulator_docs
Be lucky!
Say we have a perfume shop that has 100 different perfumes.
Let's say 10,000 customers come in and rate each perfume one through five stars.
Let's say the question is: "how to best construct a pack of 5 perfumes so that 95% customers will give a 4+ star rating for at least one of them"
How to do this algorithmically?
NOTE: I can see that even the question isn't properly formed; there's no guarantee that such a construction even exists. There is a trade-off between 2 parameters.
NOTE: Also (and this makes the perfume analogy slightly artificial), it doesn't matter whether we get one good match or three good matches. So {4.3, 0, 0, 0, 0} would be equivalent to {4.3, 4.2, 4.2, 4.2, 4.2}; in both cases the score is 4.3.
Let's say for the purpose of argument that perfumes 0-19 are sweet, perfumes 20-39 are sour, etc. (similarly salty, bitter, umami).
So there would be very high cross-correlation between perfumes 0-19.
If you modelled this with 100 points in space, then 0-19 would all attract each other very strongly, they would form a cluster.
Similarly you would get 4 other clusters for the other four tastes.
So from just one metric, we have separated out 5 distinct flavours.
But does this technique extend?
PS just giving the names of related techniques would be very helpful, as this would allow me to Google for further information. So any answer that just restates the question in industry accepted terminology would be useful!
This algorithm should find a solution to the problem:
Order the perfumes by the number of customers giving a 4+ rating
Choose the first perfume from the list that has not yet been considered
Delete the ratings from the customers now satisfied.
Repeat the process for perfumes 2 - 5 in the pack.
Backtrack when necessary to obtain a selection satisfying the criterion.
The true problem is NP-hard, but you can make use of a greedy algorithm:
Let C be the whole of your customers.
Assign to each perfume a coverage given by the number of customers in C that gave 4+ to each perfume
Sort by descending coverage. If C is empty and all coverages are zero, choose a perfume at random (actually, if C is nonempty but < 5% of the original, your requirement is already met)
Remove from C all customers (not ratings) satisfied by the perfume just chosen
Repeat from 2 unless you already have 5 perfumes.
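A minimal sketch of that greedy loop in Python, assuming ratings are kept as a dict of {customer: {perfume: stars}} (that structure is just an assumption for illustration):

def pick_pack(ratings, pack_size=5, threshold=4):
    """ratings: {customer: {perfume: stars}}. Greedy cover of customers with a 4+ match."""
    uncovered = set(ratings)                        # C: customers not yet satisfied
    all_perfumes = {p for stars in ratings.values() for p in stars}
    pack = []
    while len(pack) < pack_size:
        # coverage = number of still-uncovered customers a perfume would satisfy
        best = max(all_perfumes - set(pack),
                   key=lambda p: sum(ratings[c].get(p, 0) >= threshold for c in uncovered))
        pack.append(best)
        uncovered -= {c for c in uncovered if ratings[c].get(best, 0) >= threshold}
    return pack, 1 - len(uncovered) / len(ratings)  # the pack and the fraction of customers covered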
This automatically takes care of taste clustering: a customer giving high marks to sweet perfumes will be satisfied by the most voted sweet perfume, and he will then be struck out from C, all his further ratings ignored, and the algorithm will proceed to satisfy other customers.
Also, you should notice that even if you can't satisfy the requirement (95%, 4+) with five perfumes, perfume similarity will ensure that this algorithm maximizes both the coverage and the marks - so you might end up with, say, (93%, 3.9).
Also, suppose that 10% of users do not give any marks above 3. There's no way that you can 4-satisfy 95% of customers, since 10% of total are at most 3-satisfiable. You might want to build C with customers that actually did give at least one 4+ rating.
Or you could change the algorithm and instead of the one in your question, decide on using a knapsack: you want to take home the highest cumulative rating. This also raises the likelihood of a customer being satisfied by the overall package (as is, he is almost guaranteed to very much like one perfume, but he might strongly dislike the other four).
I've got a list of movies for each of which the following factors are known:
Number of people that wish to watch the movie in future
Number of people that have watched the movie
Number of people that have enjoyed the movie
Number of people that watched and disliked the movie
Number of comments on the movie
Number of page hits (directly or from search engines) for the movie page
So based on the above factors, I am looking for a way to calculate popularity for each of the movies. Is there any known formula or algorithm to calculate the popularity value in such case? Preferred algorithms are those which provide a more efficient way to update the previously calculated popularity value for each item.
There are basically infinite ways to do what you are after, depending on how important each factor is.
First, you will need to normalize the data. One way to do it is to assume each feature is normally distributed, and find the standard deviation and mean of each feature (your features are the number of people who watched the movie, the number of people who enjoyed the movie, ...).
Once you have the sd (standard deviation) and mu (mean), you can easily transform the features for each movie to the standard form using norm = (value-mu)/sd.
The estimator for the mean (mu) is the simple average: sum(x_i) / n
The estimator for the standard deviation (sd) is sd = sqrt(Sum((x_i - mu)^2) / (n-1))
Once you have normalized your data, you can simply define the rating as a weighted sum, where each feature will get a boost according to how significant it is:
a1 * #watched + a2 * #liked + ....
If you don't know what the weights are, but are willing to manually grade a set of movies, you can use supervised learning to find (a1, a2, ..., an) for you via linear regression.
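A short sketch of the whole thing in Python; the feature names and weights below are only placeholders:

from statistics import mean, stdev

def zscores(values):
    """Standardize one feature across all movies: (value - mu) / sd."""
    mu, sd = mean(values), stdev(values)
    return [(v - mu) / sd for v in values]

def popularity(movies, weights):
    """movies: list of dicts of raw counts; weights: {feature: importance}."""
    features = list(weights)
    normalized = {f: zscores([m[f] for m in movies]) for f in features}
    return [sum(weights[f] * normalized[f][i] for f in features)
            for i in range(len(movies))]

# popularity(movies, {"watched": 0.5, "liked": 1.0, "disliked": -1.0, "hits": 0.2})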
There is no correct answer, but I think we should try to model it as close to reality as possible.
Let's consider the following:
P1 = Proportion of people who watched and enjoyed it
P2 = Proportion of people who disliked the movie
P3 = Proportion of people who watched and would like to see it again
P4 = Proportion of people who will watch it later but haven't seen it yet
The number of comments simply can't tell how good a movie is, though it can tell how popular it is. Sure, you could leverage the number of positive and negative comments if it's possible to segregate them (possibly by up-votes and down-votes), or you could just use the number of comments as such (C).
Number of page hits should usually give a good indication of the popularity of the movie, so we should give it a good weight in our algorithm. Moreover, we should give recent page hits more weight than, say, page hits from over a year ago. So try and keep the count of page hits in the last three days (N3), in the last week (N7), in the last month (N30), in the last year (N365), and everything else (Nrest).
You can come up with an algorithm using the factors I mentioned.
[Try to use weighted averages and variations of Horner's rule for quick updates. Good luck.]
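Purely to illustrate the shape such a formula might take (the weights below are entirely made up):

def popularity_score(p1, p2, p3, c, n3, n7, n30, n365, nrest):
    """Weighted sum of the factors above; recent page hits count more than old ones."""
    hits = 5 * n3 + 3 * n7 + 2 * n30 + 1 * n365 + 0.5 * nrest
    return 4 * p1 - 2 * p2 + 3 * p3 + 0.5 * c + 0.001 * hits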
Given a list of (say) songs, what's the best way to determine their relative "popularity"?
My first thought is to use Google Trends. This list of songs:
Subterranean Homesick Blues
Empire State of Mind
California Gurls
produces the following Google Trends report: (to find out what's popular now, I restricted the report to the last 30 days)
http://s3.amazonaws.com/instagal/original/image001.png?1275516612
Empire State of Mind is marginally more popular than California Gurls, and Subterranean Homesick Blues is far less popular than either.
So this works pretty well, but what happens when your list is 100 or 1000 songs long? Google Trends only allows you to compare 5 terms at once, so absent a huge round-robin, what's the right approach?
Another option is to just do a Google Search for each song and see which has the most results, but this doesn't really measure the same thing
Excellent question - one song by Britney Spears might be phenomenally popular for 2 months and then (thankfully) forgotten, while another song by Elvis might have sustained popularity for 30 years. How do you quantitatively distinguish the two? We want to say that sustained popularity is more important than a "flash in the pan", but how do we get this result?
First, I would normalize around the release date - Subterranean Homesick Blues might be unpopular now (not in my house, though), but normalizing back to 1965 might yield a different result.
Since most songs climb in popularity, level off, then decline, let's look at the period when they level off. One might assume that during that period the two series are stationary, uncorrelated, and normally distributed. Now you can just apply a test to determine whether the means are different.
There are probably less restrictive tests to determine the magnitude of the difference between two time series, but I haven't run across them yet.
Anyone?
You could search for the item on Twitter and see how many times it is mentioned. Or look it up on Amazon to see how many people have reviewed it and what rating they gave it. Both Twitter and Amazon have APIs.
There is an unofficial Google Trends API. See http://zoastertech.com/projects/googletrends/index.php?page=Getting+Started I have not used it, but perhaps it is of some help.
I would certainly treat Google's API as "restricted".
In general, comparison functions used for sorting algorithms are very "binary":
input: 2 elements
output: true/false
Here you have:
input: 5 elements
output: relative weights of each element
Therefore you will only need a linear number of calls to the API (whereas sorting usually requires O(N log N) calls to comparison functions).
You will need exactly ceil( (N-1)/4 ) calls. You can parallelize those, though do read the user guide closely regarding the number of requests you are authorized to submit.
Then, once all of them are "rated", you can do a simple sort locally.
Intuitively, in order to gather them properly you would:
Shuffle your list
Pop the 5 first elements
Call the API
Insert them sorted in the result (use insertion sort here)
Pick up the median
Pop the 4 first elements (or fewer if fewer are available)
Call the API with the median and those 4 first
Go back to the Insert step until you run out of elements
If your list is 1000 songs long, that's 250 calls to the API, nothing too scary.
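A rough Python sketch of that batching scheme, assuming a hypothetical rate_batch(songs) call that returns relative weights for up to 5 songs at a time; rescaling each batch around the shared median is just one way to stitch the batches together:

import random

def rank_songs(songs, rate_batch):
    """rate_batch: hypothetical callable taking up to 5 songs, returning {song: weight}."""
    songs = list(songs)
    random.shuffle(songs)
    rated = dict(rate_batch(songs[:5]))            # seed with the first 5
    remaining = songs[5:]
    while remaining:
        ordered = sorted(rated, key=rated.get)
        median = ordered[len(ordered) // 2]        # carry the median into the next batch
        batch, remaining = remaining[:4], remaining[4:]
        weights = rate_batch([median] + batch)
        # rescale so the shared median keeps its existing score, making batches comparable
        scale = rated[median] / weights[median] if weights[median] else 1.0
        rated.update({s: w * scale for s, w in weights.items() if s != median})
    return sorted(rated, key=rated.get, reverse=True)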
What is an algorithm to compare multiple sets of numbers against a target set to determine which ones are the most "similar"?
One use of this algorithm would be to compare today's hourly weather forecast against historical weather recordings to find a day that had similar weather.
The similarity of two sets is a bit subjective, so the algorithm really just needs to differentiate between good matches and bad matches. We have a lot of historical data, so I would like to try to narrow down the number of days the users need to look through by automatically throwing out sets that aren't close and trying to put the "best" matches at the top of the list.
Edit:
Ideally the result of the algorithm would be comparable across different data sets. For example, using the mean square error as suggested by Niles produces pretty good results, but the numbers generated when comparing temperature can not be compared to numbers generated with other data such as wind speed or precipitation, because the scale of the data is different. Some of the non-weather data being used is very large, so the mean square error algorithm generates numbers in the hundreds of thousands, compared to the tens or hundreds generated by using temperature.
I think the mean square error metric might work for applications such as weather comparisons. It's easy to calculate and gives numbers that do make sense.
Since you want to compare measurements over time, you can just leave out missing values from the calculation.
For values that are not time-bound or even unsorted, multi-dimensional scatter data it's a bit more difficult. Choosing a good distance metric becomes part of the art of analysing such data.
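A tiny sketch of that in Python, where pairs with a missing value (None here) are simply skipped:

def mean_square_error(forecast, history):
    """Compare two equally long series value by value, ignoring missing entries."""
    pairs = [(f, h) for f, h in zip(forecast, history) if f is not None and h is not None]
    return sum((f - h) ** 2 for f, h in pairs) / len(pairs)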
Use the Pearson correlation coefficient. I figured out how to calculate it in an SQL query, which can be found here: http://vanheusden.com/misc/pearson.php
In finance they use Beta to measure the correlation of 2 series of numbers. EG, Beta could answer the question "Over the last year, how much would the price of IBM go up on a day that the price of the S&P 500 index went up 5%?" It deals with the percentage of the move, so the 2 series can have different scales.
In my example, the Beta is Covariance(IBM, S&P 500) / Variance(S&P 500).
Wikipedia has pages explaining Covariance, Variance, and Beta: http://en.wikipedia.org/wiki/Beta_(finance)
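In Python that ratio is just the following (statistics.covariance needs Python 3.10+); the two series would be, say, daily returns:

from statistics import covariance, variance

def beta(series, benchmark):
    """How strongly `series` moves with `benchmark`, regardless of their scales."""
    return covariance(series, benchmark) / variance(benchmark)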
Look at statistical sites. I think you are looking for correlation.
As an example, I'll assume you're measuring temp, wind, and precip. We'll call these items "features". So valid values might be:
Temp: -50 to 100F (I'm in Minnesota, USA)
Wind: 0 to 120 Miles/hr (not sure if this is realistic but bear with me)
Precip: 0 to 100
Start by normalizing your data. Temp has a range of 150 units, Wind 120 units, and Precip 100 units. Multiply your wind units by 1.25 and Precip by 1.5 to make them roughly the same "scale" as your temp. You can get fancy here and make rules that weigh one feature as more valuable than others. In this example, wind might have a huge range but usually stays in a smaller range so you want to weigh it less to prevent it from skewing your results.
Now, imagine each measurement as a point in multi-dimensional space. This example measures 3d space (temp, wind, precip). The nice thing is, if we add more features, we simply increase the dimensionality of our space but the math stays the same. Anyway, we want to find the historical points that are closest to our current point. The easiest way to do that is Euclidean distance. So measure the distance from our current point to each historical point and keep the closest matches:
import math

def closest_matches(current, history, k=10):
    """Keep the k historical points with the smallest Euclidean distance to the current point."""
    matches = []                          # list of (distance, point)
    for point in history:
        distance = math.sqrt(
            (current['temp'] - point['temp']) ** 2 +
            (current['wind'] - point['wind']) ** 2 +
            (current['precip'] - point['precip']) ** 2)
        matches.append((distance, point))
        matches.sort(key=lambda m: m[0])  # keep the collection ordered by distance
        del matches[k:]                   # drop the farthest match once we have more than k
    return matches
This is a brute-force approach. If you have the time, you could get a lot fancier. Multi-dimensional data can be represented as trees like kd-trees or r-trees. If you have a lot of data, comparing your current observation with every historical observation would be too slow. Trees speed up your search. You might want to take a look at Data Clustering and Nearest Neighbor Search.
Cheers.
Talk to a statistician.
Seriously.
They do this type of thing for a living.
You write that the "similarity of two sets is a bit subjective", but it's not subjective at all-- it's a matter of determining the appropriate criteria for similarity for your problem domain.
This is one of those situations where you are much better off speaking to a professional than asking a bunch of programmers.
First of all, ask yourself if these are sets, or ordered collections.
I assume that these are ordered collections with duplicates. The most obvious algorithm is to select a tolerance within which numbers are considered the same, and count the number of slots where the numbers are the same under that measure.
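For example, a minimal version of that count in Python (the tolerance value is arbitrary):

def similarity(a, b, tolerance=1.0):
    """Count positions where the two ordered collections agree to within the tolerance."""
    return sum(abs(x - y) <= tolerance for x, y in zip(a, b))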
I do have a solution implemented for this in my application, but I'm looking to see if there is something that is better or more "correct". For each historical day I do the following:
from statistics import mean, correlation   # correlation requires Python 3.10+

def calculate_score(historical_set, forecast_set):
    c = correlation(historical_set, forecast_set)   # Pearson correlation, -1 to 1
    avg_history = mean(historical_set)
    avg_forecast = mean(forecast_set)
    penalty = abs(avg_history - avg_forecast) / avg_forecast
    return c - penalty
I then sort all the results from high to low.
Since the correlation is a value from -1 to 1 that says whether the numbers fall or rise together, I then "penalize" that with the percentage difference the averages of the two sets of numbers.
A couple of times, you've mentioned that you don't know the distribution of the data, which is of course true. I mean, tomorrow there could be a day that is 150 degrees F with 2000 km/hr winds, but it seems pretty unlikely.
I would argue that you have a very good idea of the distribution, since you have a long historical record. Given that, you can put everything in terms of quantiles of the historical distribution, and do something with absolute or squared difference of the quantiles on all measures. This is another normalization method, but one that accounts for the non-linearities in the data.
Normalization in any style should make all variables comparable.
As an example, let's say it's a windy, hot day: that might have a temp quantile of .75 and a wind quantile of .75. The .76 quantile for heat might be 1 degree away, and the one for wind might be 3 km/h away.
This focus on the empirical distribution is easy to understand as well, and could be more robust than normal estimation (like Mean-square-error).
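One way to sketch the quantile transform in Python, assuming you keep the full historical sample for each measure:

from bisect import bisect_right

def quantile(value, history):
    """Empirical quantile of `value` within the historical sample `history`."""
    ordered = sorted(history)
    return bisect_right(ordered, value) / len(ordered)

def quantile_distance(current, past_day, history_by_measure):
    """Sum of squared quantile differences across measures such as temp, wind, precip."""
    return sum((quantile(current[m], history_by_measure[m]) -
                quantile(past_day[m], history_by_measure[m])) ** 2
               for m in current)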
Are the two data sets ordered, or not?
If ordered, are the indices the same? equally spaced?
If the indices are common (temperatures measured on the same days but at different locations, for example), you can regress the first data set against the second,
and then test that the slope is equal to 1 and that the intercept is 0.
http://stattrek.com/AP-Statistics-4/Test-Slope.aspx?Tutorial=AP
Otherwise, you can do two regressions, of the y-values against their indices. http://en.wikipedia.org/wiki/Correlation. You'd still want to compare slopes and intercepts.
====
If unordered, I think you want to look at the cumulative distribution functions
http://en.wikipedia.org/wiki/Cumulative_distribution_function
One relevant test is Kolmogorov-Smirnov:
http://en.wikipedia.org/wiki/Kolmogorov-Smirnov_test
You could also look at
Student's t-test,
http://en.wikipedia.org/wiki/Student%27s_t-test
or a Wilcoxon signed-rank test http://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test
to test equality of means between the two samples.
And you could test for equality of variances with a Levene test http://www.itl.nist.gov/div898/handbook/eda/section3/eda35a.htm
Note: it is possible for dissimilar sets of data to have the same mean and variance -- depending on how rigorous you want to be (and how much data you have), you could consider testing for equality of higher moments, as well.
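If you're working in Python, scipy.stats has ready-made versions of several of these; the sample values below are only placeholders:

from scipy import stats

sample_a = [61, 64, 66, 70, 71, 69, 65]   # e.g. today's hourly temperatures
sample_b = [59, 63, 67, 72, 73, 70, 66]   # e.g. one historical day

print(stats.ks_2samp(sample_a, sample_b).pvalue)   # compare distributions
print(stats.ttest_ind(sample_a, sample_b).pvalue)  # compare means
print(stats.levene(sample_a, sample_b).pvalue)     # compare variances

A small p-value suggests the two samples differ on that criterion.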
Maybe you can see your set of numbers as a vector (each number of the set being a component of the vector).
Then you can simply use the dot product to compute the similarity of 2 given vectors (i.e. sets of numbers).
You might need to normalize your vectors.
More : Cosine similarity
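A bare-bones version in Python:

import math

def cosine_similarity(a, b):
    """Dot product of the two vectors divided by the product of their magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))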