Sorting A List Of Songs By Popularity - algorithm

For student council this year, I'm on the "songs" committee: we pick the songs for the dances. Unfortunately, the kids at the dances always end up hating some of the song choices, and I thought I could make it different this year. Last Thursday, I created a simple PHP application so kids could submit songs to the database, supplying a song name, artist, and genre (from a drop-down). I also implemented a voting feature similar to Reddit's: click an upvote button and you've upvoted the song, incrementing its upvote count; same with downvotes.
Anywho, in the database I have three pieces of information I thought I could use to rate these songs: upvotes, downvotes, and a creation timestamp. For a while, the rank was created by simply putting the songs with the higher net vote count (upvotes - downvotes) at the top of the list. That worked for a while, but by Sunday there were about 75 songs on the list, and the songs that had been submitted first simply sat at the top.
On Sunday, I changed the rank algorithm to (upvotes - downvotes) / (CurrentTimestamp - CreationTimestamp); that is, the higher the net vote count in the smaller amount of time, the higher the song appears on the list. This works better, but still not how I'd like it.
What happens now is that the instant a song is created and upvoted to a vote count of 1, it lands somewhere near the top of the list, since its age (the denominator) is still tiny. Songs whose vote counts are negative aren't viewed often, because kids don't usually scroll to the bottom.
I guess I could reverse the sort so the lower songs appear at the top, forcing people to see them. Honestly, I've never had to work on a "popularity" algorithm before, so what are your thoughts?
Website's at http://www.songs.taphappysoftware.com - I don't know if I should put this here or not, might cause some unwanted songs at the dance :0

That's a very good question. There are a few similar questions that have been asked here.
This article is probably a good place to start. Apparently upvotes minus downvotes is a bad way to do it. The better way is to treat the votes as a statistical sample, assign each item the lower bound of the Wilson score confidence interval for its proportion of upvotes, and sort by that.
Here is a scoring function in Ruby from the article:
require 'statistics2'

# Lower bound of the Wilson score confidence interval for the
# proportion of positive ratings out of n total ratings.
def ci_lower_bound(pos, n, power)
  if n == 0
    return 0
  end
  z = Statistics2.pnormaldist(1 - power / 2)
  phat = 1.0 * pos / n
  (phat + z*z/(2*n) - z * Math.sqrt((phat*(1 - phat) + z*z/(4*n)) / n)) / (1 + z*z/n)
end
pos is the number of positive ratings, n is the total number of ratings, and power refers to the statistical power: pick 0.10 to have a 95% chance that your lower bound is correct, 0.05 to have a 97.5% chance, etc.
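Applied to the songs problem, usage might look something like this (a sketch; the song list and field names are invented, and ci_lower_bound is the function above):

songs = [
  { name: 'Song A', upvotes: 40, downvotes: 10 },
  { name: 'Song B', upvotes: 1,  downvotes: 0  }
]

# Sort descending by the Wilson lower bound of the upvote proportion.
ranked = songs.sort_by do |song|
  n = song[:upvotes] + song[:downvotes]
  -ci_lower_bound(song[:upvotes], n, 0.10)
end

ranked.each { |song| puts song[:name] }   # Song A ranks above Song B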
As a usability thing, I would sort the data by the score, but I would not show the score to the user. I would only show the number of upvotes and downvotes.

How about sorting songs by posting time, or by total number of votes (positive + negative)? If your goal is to give every song equal attention, that sounds good enough.

Related

How to give players a score on a ranking/prediction task?

I have a website built with PHP/MySQL, and I am looking for help in communicating to a programmer what I want him to do with a Poll/Prediction game that I am trying to create.
For purposes of discussion, assume a game where perhaps 100 players try to predict the top 5 finishers in a Golf Tournament of perhaps 9 Golfers.
I am looking for help in how to create and assign a score based upon the accuracy of prediction.
The players provide a rank ordering using a drag and drop function to order the players from 1 through 5. This ordering has already been coded, and the ranks are stored somehow in the DB (I do not know how).
My initial thinking is to ask the coder to create a script which will assign a score from 1 to 5 for each Golfer that the player nominated to be in the Top 5.
So, a player who predicted perfectly would be awarded a perfect score of 12345.
Their first golfer receives a 1 for finishing first, the second a 2 for finishing second, the third a 3 for finishing third, and so on.
Anybody less than perfect would have a score higher than 12345.
Players who got the first four positions correct would have to be differentiated on the basis of the finish of their fifth Golfer.
So, one might score 12347 and the other 12348 and the player with the highest score (12348) would be the loser in a matchup of the two players.
A player who did poorly, might have a score of 53419.
Question:
Is this a viable way of creating a score which the players of my game can be ranked upon?
Is it possible to instead simply have something like a Spearman rank-order correlation calculated, comparing the actual finish positions with the predicted finish positions for each player, and then rank players on the basis of the correlation coefficients for their rankings?
Thanks for any help in clarifying how to conceptualize this before approaching a programmer who gets annoyed when I don't really know what I want him to do ahead of time.
It's quite an interesting problem.
It seems that there are three components that need to be considered in the scoring: the number of correct predictions, the order of correct predictions, and the weight of correct predictions.
For example, assume the truth is:
1,5,10,15,20
Here are some predictions:
1,6,7,8,9 : only predicted the first one
2,1,10,21,30 : predicted 1 and 10, but 1 is in the wrong position
20,15,1,5,30 : hit four of the top 5, but the order is incorrect
It depends on what you value most. You might first check how many of the top 5 the user predicted and add a value for each hit, then penalize wrong orders. The weight for each position should also differ, so that 1,5,10,15,20 ranks higher than 1,5,10,20,15, which in turn ranks higher than 1,10,5,20,15.
Spearman correlation might work, but I feel it could be too coarse for your purpose.
This is actually a problem very similar to the one search engines face. E.g., in search engine evaluation, the actual outcomes are preferred results provided by human judges, and the predicted outcomes are the results delivered by the search engine. In both your task and search evaluation, I'd guess you care a lot more about the accuracy of the winner than the accuracy of the 5th-place finisher. If that is the case, then mean average precision is probably a good measure.
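To make that concrete, here is a rough sketch of average precision for a single player (not the full mean over all players, and not your actual data; the lists are the examples above):

# Average precision of a predicted top 5 against the actual top 5.
# A correct golfer near the front of the list counts for much more
# than a correct golfer at the end.
def average_precision(predicted, actual)
  hits = 0
  precisions = []
  predicted.each_with_index do |golfer, i|
    if actual.include?(golfer)
      hits += 1
      precisions << hits.to_f / (i + 1)   # precision at this cut-off
    end
  end
  precisions.sum / actual.size.to_f
end

actual = [1, 5, 10, 15, 20]
puts average_precision([1, 6, 7, 8, 9], actual)     # only the winner right: 0.2
puts average_precision([20, 15, 1, 5, 30], actual)  # four hits up front: 0.8

Note that average precision doesn't penalize reorderings among consecutive hits, so if exact positions matter to you, you'd still want an order penalty on top.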

Total Score ranking algorithm

Background:
I am looking for a way to calculate the score of a piece of audio based on listeners' feedback. Each time a user listens to the track, they must vote on whether they like it: a simple yes or no. Each track then has a score, based on the number of yes and no votes.
Additionally, I would like to decay the value of each vote uniformly over the course of 31 days, so that after that time its value is 0 and no longer contributes to the overall score.
I have found a lot of discussions of the Reddit and Hacker News ranking algorithms, but these seem to decay the total score, not the individual votes themselves. Here, each vote will have a different amount of decay, based on when it was originally cast.
Can anyone help or recommend some material to look at?
Thanks
You could model it as "yes" = 1.0 and "no" = 0.0.
Then the value of a vote on the nth day after it was cast is (31 - n)/31; further, if n > 31, set it to 0.
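A minimal sketch of that scheme (assuming each vote record stores its value and the day it was cast; the field names are invented):

# Score of a track as the sum of its votes, each decayed linearly
# to zero over 31 days. "yes" = 1.0, "no" = 0.0 as described above.
def vote_weight(days_old)
  days_old >= 31 ? 0.0 : (31 - days_old) / 31.0
end

def track_score(votes, today)
  votes.sum { |v| v[:value] * vote_weight(today - v[:cast_on]) }
end

votes = [
  { value: 1.0, cast_on: 100 },  # a "yes" cast on day 100
  { value: 0.0, cast_on: 120 }   # a "no" cast on day 120
]
puts track_score(votes, 125)   # the "yes" is 25 days old: 1.0 * 6/31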
Hope this answers your question.
What acceleration do you want on the degradation? A common, easy-to-implement choice is to divide by age: score a 1 for a like and a -1 for a dislike, then, when adding up the likes/dislikes, divide each vote by the number of days since it was cast. On day 1, the vote will have an absolute value of 1; on day 2, 1/2; on day 3, 1/3, and so on. On day 31, it will be worth 1/31 (about 0.03).
The problem with that kind of degradation is that it drops very quickly. You can use many other methods, such as multiplying by log(11 - d), where d = 1 on the first day, 2 on the second day, and so on; that only allows 11 days of degradation, while log(31 - d) would allow 31 days. Either way, you need to ensure you never take log(0) or log of a negative number.
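A rough sketch of the log(31 - d) variant, with the clamping mentioned above (here age starts at 0 rather than 1; adjust to taste):

def decayed_vote(value, age_in_days)
  # Clamp so Math.log never sees 0 or a negative argument.
  return 0.0 if age_in_days >= 30
  value * Math.log(31 - age_in_days)
end

puts decayed_vote(1, 0)    # fresh like: log(31), about 3.43
puts decayed_vote(-1, 29)  # 29-day-old dislike: -log(2), about -0.69
puts decayed_vote(1, 45)   # fully decayed: 0.0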
Another problem with this entire model is how to handle things that only have old votes. What if something has nothing but likes, but all the likes are old? It will register as not liked much because all the likes have degraded.

Feedback on ranking algorithm options for my website

I am currently working on writing an algorithm for my new site I plan to launch soon. The index page will display the "hottest" posts at the moment.
Variables to consider are:
Number of votes
How controversial the post is (a number between 0 and 1)
Time since post
I have come up with two possible algorithms, the first and most simple is:
controversial * (numVotesThisHour / (numVotesTotal - numVotesThisHour))
(use numVotesThisHour as the denominator if numVotesTotal - numVotesThisHour == 0)
Highest number is hottest
My other option is to use an algorithm similar to Reddit's (except that the score decreases as time goes by):
[controversial * log(x)] - (TimePassed / interval)
x = numVotesTotal if numVotesTotal >= 10, otherwise x = 10 (i.e., x = max(numVotesTotal, 10))
Highest number is hottest
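For comparison, here is a rough sketch of both candidates (the names mirror the variables above; interval is whatever decay period you pick):

# First candidate: controversy times this hour's votes relative to all
# earlier votes, with the divide-by-zero rule from above.
def hotness_v1(controversial, votes_this_hour, votes_total)
  return 0.0 if votes_total == 0
  older = votes_total - votes_this_hour
  older = votes_this_hour if older == 0
  controversial * (votes_this_hour.to_f / older)
end

# Second candidate: Reddit-like, except the score decays as time passes.
def hotness_v2(controversial, votes_total, time_passed, interval)
  x = [votes_total, 10].max
  controversial * Math.log(x) - time_passed.to_f / interval
end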
The first algorithm would allow older posts to become "hot" again in the future while the second one wouldn't.
So my question is, which one of these two algorithms do you think is more effective? Which one do you think will display the truly "hot" topics at the moment? Can you think of any advantages or disadvantages to using one over the other? I just want to make sure I don't overlook anything so that I can ensure the content is as relevant as possible. Any feedback would be great! Thanks!
Am I missing something? In the first formula you have numVotesTotal in the denominator, so a higher all-time vote count means a post will never be very hot, even if it is not very old.
For example if I have two posts - P1 and P2 (both equally controversial). Say P1 has numVotesTotal = 20, and P2 has numVotesTotal = 1000. Now in the last one hour P1 gets numVotesThisHour = 10 and P2 gets numVotesThisHour = 200.
According to the first algorithm, P1 (10/10 = 1.0) comes out hotter than P2 (200/800 = 0.25). That doesn't make sense to me.
I think the first algorithm relies too heavily on the instantaneous trend. Think of NASCAR: the current leader could be going 0 mph because he's at a pit stop. The second one uses a notion of average trend. I think both have their uses.
Consider two posts with the same total votes and controversy rating, where one receives 20 votes in its first hour and zero in the second, while the other receives 10 in each hour. The first post will be buried by the first algorithm, but the second algorithm will rank them equally.
YMMV, but I think the 'hotness' is entirely dependent on the time frame, and not at all on the total votes unless your time frame is 'all time'. Also, it seems to me that the proportion of all votes in the relevant time frame, rather than the absolute number of them, is the important figure.
You might have several categories of hot:
Hottest this hour
Hottest this week
Hottest since your last visit
Hottest all time
So, 'Hottest in the last [whatever]' could be calculated like this:
votes_for_topic_in_timeframe / all_votes_in_timeframe
if you especially want a number between 0 and 1 (useful for comparing across categories); or, if you only want the ones for a specific timeframe, just take the votes_for_topic_in_timeframe values and sort them in descending order.
If you don't want the user explicitly choosing the time frame, you may want to calculate all (say) four versions (or perhaps just the top 3), assign a multiplier to each category to give each category a relative importance, and calculate total values for each topic to take the top n. This has the advantage of potentially hiding from the user that no-one at all has voted in the last hour ;)
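A sketch of that blended calculation (the weights are invented knobs to tune, not recommendations):

# Combine per-timeframe hotness into one score per topic.
WEIGHTS = { hour: 4.0, week: 2.0, since_last_visit: 1.5, all_time: 1.0 }

def blended_hotness(topic_votes, all_votes)
  WEIGHTS.sum do |frame, weight|
    total = all_votes[frame]
    next 0.0 if total.nil? || total.zero?
    weight * topic_votes.fetch(frame, 0) / total.to_f
  end
end

puts blended_hotness({ hour: 3, week: 40, all_time: 90 },
                     { hour: 10, week: 200, all_time: 5000 })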

Rating Algorithm

I'm trying to develop a rating system for an application I'm working on. Basically, the app allows you to rate an object from 1 to 5 (represented by stars), and I know, of course, that just keeping a count of ratings and adding each new rating to a running number is not feasible by itself.
So the first thing that came to mind was dividing each received rating by the total number of ratings given. For example, if the object receives a rating of 2 from a user and has been rated 100 times, maybe add 2/100. However, I believe this method is not good enough, since 1) it's a naive approach, and 2) in order to get the number of times the object has been rated, I have to do a lookup in the DB, which might end up having time complexity O(n).
So I was wondering: what are alternative, and possibly better, ways to approach this problem?
You can keep 2 additional values in the DB: the number of times the object has been rated, and the total sum of all ratings. Then, to update the object's rating, you only need to:
Add the new rating to the total sum.
Divide the total sum by the total number of times it was rated.
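A sketch of that bookkeeping in plain Ruby (in practice the two values would be columns on the object's row, updated in one statement):

# Running count and sum make the average O(1) per new rating,
# with no scan over past ratings.
class RatedObject
  def initialize
    @rating_count = 0
    @rating_sum = 0
  end

  def add_rating(stars)
    @rating_count += 1
    @rating_sum += stars
  end

  def average
    return 0.0 if @rating_count == 0
    @rating_sum.to_f / @rating_count
  end
end

obj = RatedObject.new
[5, 4, 2].each { |stars| obj.add_rating(stars) }
puts obj.average   # => 3.666...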
There are many approaches to this, but before choosing one, check:
Whether all feedback givers are treated as equal, or some have more weight than others (as in a panel review, etc.)
Whether the objective is to provide only an average, or a score band, or something else. Consider a scenario like this website showing a total reputation score.
And yes, if an average is to be computed, you need the total and the count of feedback and then have to compute it; that's plain maths. But if you need any other method, be prepared for more compute cycles. Balance database hits against compute cycles, but that's the next stage of design. First get your requirements and your approach to a solution in place.
I think you should keep separate counters for 1 star, 2 stars, and so on. To calculate the rating, you'd compute rating = (1*numOneStars + 2*numTwoStars + 3*numThreeStars + 4*numFourStars + 5*numFiveStars) / (numOneStars + numTwoStars + numThreeStars + numFourStars + numFiveStars)
This way you can, like Amazon, also show how many people voted 1 star and how many voted 5 stars.
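A quick sketch of the per-star-counter idea (the counts are made-up example data):

# One counter per star level: the mean and the Amazon-style histogram
# both come from the same five integers.
star_counts = { 1 => 4, 2 => 2, 3 => 10, 4 => 25, 5 => 59 }

total    = star_counts.values.sum
weighted = star_counts.sum { |stars, count| stars * count }
puts weighted.to_f / total   # => 4.33 for these counts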
Have you considered a vote up/down mechanism instead of numbers of stars? It doesn't directly solve your problem, but it's worth noting that sites such as YouTube, Facebook, and StackOverflow all use +/- voting, as it is often much more effective than star-based ratings.

Voting algorithm: how to calculate rank?

I am trying to figure out a way to calculate rank. Right now it simply takes the ratio of wins to losses for each individual entry, so e.g. if one won 99 times out of 100, it has a 99% winning rank. BUT if an entry won 1 out of 1 total votes, it will have a 100% winning rank, though it definitely shouldn't rank higher than the one that won 99 times. What would be a better way to do this?
Try something like this:
votes = wins + losses
score = votes * ( wins / votes )
That way, something with 50% wins but a million votes would still be ahead of something with 100% wins but only one vote. (Note that votes * (wins / votes) simplifies to just the raw win count.)
You can add in an extra weight based on age (in days in this example), too, something like
if age < 5:
    score = score + ((highest real score on site) * ((5 - age) / 5))
This will put brand new entries right at the top of the first page, and then they will move slowly down the list over the course of the next 5 days (I'm assuming age is a fractional number, not just an integer). After the 5 days are up, they will be put in the list based solely on the score from the previous bit of pseudo-code.
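Putting the base score and the age boost together as a sketch (highest_score stands in for the highest real score on the site):

# Win ratio weighted by volume, plus a boost that decays to zero
# over an entry's first 5 days.
def score(wins, losses, age_in_days, highest_score)
  votes = wins + losses
  return 0.0 if votes == 0
  base = votes * (wins.to_f / votes)
  boost = age_in_days < 5 ? highest_score * (5 - age_in_days) / 5.0 : 0.0
  base + boost
end

puts score(1, 0, 0.5, 500.0)    # brand-new entry rides the boost: 451.0
puts score(900, 100, 30, 500.0) # older entry ranks purely on its wins: 900.0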
Depending on how complicated you want to make it, the Elo system chess uses (or something similar) may be what you want: http://en.wikipedia.org/wiki/Elo_rating_system
Even if a person has won 1/1 matches, his rating would be far below someone who has won/lost hundreds of matches against tough opponents, for instance.
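For reference, a minimal sketch of the standard Elo update (K = 32 is a common choice; nothing here is specific to your site):

K = 32.0

# Probability that the player rated a beats the player rated b.
def expected(a, b)
  1.0 / (1 + 10.0 ** ((b - a) / 400.0))
end

# Winner gains what the loser gives up, scaled by how surprising the win was.
def update(winner, loser)
  delta = K * (1 - expected(winner, loser))
  [winner + delta, loser - delta]
end

p update(1400.0, 1800.0)   # upset: roughly [1429.1, 1770.9]
p update(1800.0, 1400.0)   # expected win: roughly [1802.9, 1397.1]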
You could always use a point system rather than a win/loss ratio. Winning would always give points, and then you could play around with either removing points for losing, not awarding points at all for losing, or awarding fewer points for losing. It all depends on exactly how you want people to be ranked. For example, you might give 2 points for winning and 1 point for losing if you want to favor people who participate over those who do not (which sounds like what you were getting at in your example of the person playing 100 games vs 1 game). The NHL uses a similar technique (2 points for a win, 1 point for an overtime loss, 0 points for a regulation loss). That might give you some more flexibility.
If I understand the question correctly, then whoever gets more votes has the higher rank.
Would it make sense to award more rank to the winning entry if the losing entry originally had a much higher rank, i.e. was a much stronger competitor?
