Weighted Voting Algorithm/Calculation - ranking

I am creating a 'duel' app and I am at a dead end calculating the results.
Each user either has an upvote or downvote. There is no 1-5 or five-star rating.
For example: If I were displayed 5 times and won 3, I would have 3 'upvotes' and 2 'downvotes'.
If I did straight percentages, anyone who was displayed once and selected once (100%) would always be at the top, whereas someone who was 9/10 (90%) would rank below the 1/1 even though, in theory, they belong on top.
Anyone have any ideas of how to accomplish this?

I, too, have been looking for a suitable algorithm for a voting website.
Whilst what #joshhendo suggested would appear to be a sound method of ranking votes, it doesn't take into account the percentage of positive votes.
For example:
Item 1 has 70 'up' votes and 30 'down' votes.
Item 2 has 400 'up' votes and 300 'down' votes.
For Item 1: 70-30 = 40
For Item 2: 400-300 = 100
Item 2 will appear above Item 1 because it has a larger net positive score. But Item 2 only has about 57% positive votes, whereas Item 1 has 70% positive votes. Item 1 should arguably appear above Item 2, because even though it doesn't have as many overall votes, it has a better 'up' to 'down' ratio of votes.
But then again, one wants to avoid the initial problem of items with 1 vote (positive) appearing above everything else.
I recommend you read this: http://www.evanmiller.org/how-not-to-sort-by-average-rating.html
It suggests a more mathematically sound solution to this problem. It's actually a very interesting read, and I will be implementing something similar into my own website.
[edit]
This is also a very good read: http://blog.linkibol.com/2010/05/07/how-to-build-a-popularity-algorithm-you-can-be-proud-of/
[/edit]
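A sketch of the approach from the Evan Miller article (the lower bound of the Wilson score confidence interval on the fraction of positive votes), in Python; the z value of 1.96 corresponds to a 95% confidence level and is the usual default:

```python
import math

def wilson_lower_bound(up, down, z=1.96):
    """Lower bound of the Wilson score interval for the
    fraction of positive votes (z = 1.96 ~ 95% confidence)."""
    n = up + down
    if n == 0:
        return 0.0
    phat = up / n
    return (phat + z * z / (2 * n)
            - z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)) / (1 + z * z / n)

# 9 up / 1 down ranks above 1 up / 0 down, and
# 70/30 ranks above 400/300, as desired.
```

This penalizes items with few votes: a single positive vote yields a lower bound of only about 0.21, well below 9/10's roughly 0.60.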

Rather than the raw positive-vote percentage, track a Bayesian average of it, e.g.:
(positive votes + prior sample size * prior average positive rate) / (total votes + prior sample size)
http://en.wikipedia.org/wiki/Bayesian_average
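A minimal sketch of that formula in Python; the prior mean of 0.5 and the pseudo-count of 10 are arbitrary choices you would tune for your data:

```python
def bayesian_positive_rate(up, total, prior_mean=0.5, prior_count=10):
    """Shrink the raw positive-vote rate toward a prior mean;
    items with few votes stay close to the prior."""
    return (up + prior_count * prior_mean) / (total + prior_count)

# 9/10 now outranks 1/1: (9 + 5) / 20 = 0.70 vs (1 + 5) / 11 ~ 0.55
```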

You could just tally up the votes, with an up vote counting as +1 and a down vote counting as -1.
For example, let's say someone was 9/10 (that is, had 9 up votes and 1 down vote); their score would be 9 - 1 = 8. This is higher than the 1/1 person, who has 1 up vote and 0 down votes and therefore a score of 1 - 0 = 1. So the person who would have got 90% in your percentage system now has a score of 8, which is higher than the person who would have got 100% with a score of 1.
That's the best and simplest solution I can think of. There may be more complex solutions that would work, but for what you want, I think that should work.
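As a one-liner in Python:

```python
def net_score(up, down):
    """Simple tally: +1 per up vote, -1 per down vote."""
    return up - down

# net_score(9, 1) == 8 ranks above net_score(1, 0) == 1
```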

You can have a weighted score.
Track each user's points and a ranking score.
Let q be your opponent's score (which is in the range 0 to 1, inclusive on both ends).
When you do battle, you gain q points when you win and you lose 1-q points when you lose. This means that if you lose against someone who always wins, that's not going to hurt you much. If you lose to someone who almost always loses, you're going to lose a lot of points for it.
Each (day, hour, whatever) recalculate everyone's q, where the #1 person gets a q of 1 (or 1.5, or 2, whatever, but 1 works the best) and the lowest person gets a q of 0.
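A sketch of this scheme in Python. I implement the rule so that beating a strong opponent (high q) earns a lot and losing to one costs little, matching the stated intent; function and variable names are mine:

```python
def battle(points, q, winner, loser):
    """Apply one duel result: the winner gains the loser's q,
    the loser forfeits 1 - (the winner's q)."""
    points[winner] += q[loser]
    points[loser] -= 1 - q[winner]

def recompute_q(points):
    """Periodic rescale of scores into [0, 1]: the leader
    gets q = 1, the last-place player gets q = 0."""
    lo, hi = min(points.values()), max(points.values())
    span = hi - lo or 1  # avoid division by zero when everyone is tied
    return {name: (p - lo) / span for name, p in points.items()}
```

For example, an underdog (q = 0) beating the leader (q = 1) gains a full point, while the leader, having lost to the weakest player, also loses a full point.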


Algorithm: Eliminating players that no longer have a chance to win the tournament

I have been working on the algorithm for this problem, but can't figure it out. The problem is below:
In a tournament with X players, each player bets on the outcomes of basketball matches in the NBA.
Guessing the correct match outcome earns a player 3 points, guessing the MVP of the match earns 1 point, and guessing neither earns 0 points.
The algorithm needs to be able to determine if a certain player can't reach the number 1 spot in this betting game.
For example, let's say there are a total of 30 games in the league, so the max points a player can get for guessing right is (3+1)*30=120.
In the table below you can see players X,Y and Z.
Player X has guessed 20 matches completely correctly so far (both outcome and MVP), so he has 80 points.
Players Y and Z have 26 and 15 points, and since there are only 10 matches left, even if they guessed all 10 remaining matches correctly it would not be enough to reach the number-1 spot.
Therefore, the algorithm determined that they are eliminated from the game.
Team | Points | Points per match | Total Games | Max Points possible | Games left | Points Available | Eliminated?
X    | 80     | 0-L 1-MVP 3-W    | 30          | 120                 | 10         | 0-40             | N
Y    | 26     | 0-L 1-MVP 3-W    | 30          | 120                 | 10         | 0-40             | Y
Z    | 15     | 0-L 1-MVP 3-W    | 30          | 120                 | 10         | 0-40             | Y
The baseball elimination problem seems to be the most similar to this problem, but it's not exactly it.
How should I build the reduction of the maximum-flow problem to suit this problem?
Thank you.
I don't get why you are looking at very complex max-flow algorithms. Those might be needed for genuinely hard variants (especially when pairings lead to zero-sum results and the order of the remaining pairings starts to matter, which makes worst-case analysis much harder).
Maybe the baseball problem you mention is one of those (I did not check it). But your use case sounds trivial.
1. Get the current leader score LS
2. Get the number of remaining matches N
3. For each player P:
4.     Get the current player score PS
5.     Eliminate iff PS + 4 * N < LS
(This assumes parallel progress: standings are always synced, so all players P have played M games; it is easy to generalize, though.)
This is simple. Given your description, there is nothing preventing us from assuming worst-case performance from every other player, i.e. it is a valid scenario that every other player guesses wrong on all upcoming guesses, so the score S of player P can stay at S for all remaining games. Note that the per-match maximum is 4 points (3 for the outcome plus 1 for the MVP), which is why the bound uses 4 * N.
Things might quickly change into NP-hard decision problems when there are more complex side constraints (e.g. statistical distributions / expectations).
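The check above can be sketched in Python; the per-match maximum of 4 points (3 for the outcome plus 1 for the MVP) comes from the question's own scoring:

```python
def eliminated(player_score, leader_score, matches_left, max_per_match=4):
    """A player is mathematically eliminated iff even a perfect run
    of remaining guesses cannot reach the current leader's score."""
    return player_score + max_per_match * matches_left < leader_score

# From the table: leader X has 80 points and 10 matches remain, so
# Y (26) and Z (15) are eliminated; X obviously is not.
```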

Developing player rankings with ELO

I recently created a tournament system that will soon lead into player rankings. Basically, after players are done with the tournament, they are given a rank based on how they did in the tournament. So the person who won the tournament will have the most points and be ranked #1, while the second will have the second most points and be ranked #2, and so on...
However, after they are ranked in the new rankings, they can challenge other members and have a way to play other members and change their ranks. So basically (using a ranking system), if Player A who is ranked #2 beats Player B who is ranked #1, Player A will now become #1.
I've also decided that if a player wants to compete in the rankings but was not present during the tournament, they can sign up after the tournament, and will be given the lowest possible rank with the lowest points (but they have a chance to move up).
So now, I am wanting to know which way should I go about planning this. When I convert the players from tournament to match rankings, I have to identify them with points in order to rank them. I decided this seems like the best way to do it.
Rank  Points
1     1000
2      900
3      800
4      700
5      600
6      500
7      400
8      300
9      200
10     100
After looking around on the internet, I've decided it would be wise to use Elo to give players their new ratings after they have played matches against each other. I went about it using this page: http://www.lifewithalacrity.com/2006/01/ranking_systems.html
So if I go about it this way, let's say I have rank #10 facing rank #1. According to the website above, my formula is:
R' = R + K * (S - E)
and the rating of #10 is only 100 points, where #1 has 1,000.
So after doing the math, rank #10's expected score against #1 is:
1 / [ 1 + 10 ^ ( [1000 - 100] / 400 ) ]
= 0.0056 (about 0.56%)
So
100 + 32 * (1 - 0.0056)
= 131.82
The problem I have with Elo is that the result makes no sense to me. After a rank such as #10 beats #1, he can gain at most K = 32 points, which seems far too low for beating the #1 player. I'm not sure if I'm doing the math wrong, or if I'm splitting up the points wrong. Or maybe I shouldn't use Elo at all? Any suggestions would be very helpful.
Don't get offended, it is your table that doesn't make sense.
The Elo system is based on the premise that a rating is an accurate estimate of strength, and that the difference in ratings accurately predicts the outcome of a match (a player better by 200 points is expected to score 75%). If an actual outcome does not agree with the prediction, it means the ratings do not reflect strength and hence must be adjusted according to how much the actual outcome differs from the predicted one.
An official (as in FIDE) Elo system has a few arbitrary constants (e.g. the 200/75 gauge, Erf as the predictor, etc.); choosing them (reasonably) differently may lead to different rating values, yet would result (in the long run) in the same ranking. There is some interesting math behind this assertion; this is not the right place to get into details.
Now back to your table. It assigns the rating based on the place, not on the points scored. The champion gets 1000 no matter whether she swept the tournament with an absolute 100% result, or barely made it among equals. These points do not estimate the strength of the participants.
So my advice is to abandon the table altogether, assign each new player an entry rating (say, 1000; the exact value doesn't matter as long as you are consistent), and stick to Elo from the very beginning.
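A minimal Elo update in Python, using the standard logistic expected score and K = 32 as in the question:

```python
def elo_expected(r_a, r_b):
    """Expected score of player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32):
    """Return A's new rating; score_a is 1 for a win, 0.5 for a draw, 0 for a loss."""
    return r_a + k * (score_a - elo_expected(r_a, r_b))

# A 100-rated player who beats a 1000-rated player gains almost the
# full K: elo_update(100, 1000, 1) is about 131.8
```

Note that with properly computed expected scores the upset winner gains nearly the full K = 32, not 15; the 15-point figure in the question comes from misreading the tiny expected score (about 0.0056) as 0.52.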

If you know the future prices of a stock, what's the best time to buy and sell?

Interview Question by a financial software company for a Programmer position
Q1) Say you have an array for which the ith element is the price of a given stock on
day i.
If you were only permitted to buy one share of the stock and sell one share
of the stock, design an algorithm to find the best times to buy and sell.
My Solution :
My solution was to make an array of the differences in stock prices between day i and day i+1 (so arraysize-1 entries) and then use Kadane's algorithm to find the contiguous subarray with the largest sum. I would then buy at the start of that subarray and sell at the end of it.
I am wondering if my solution is correct, and whether there are any better solutions out there.
Upon answering, I was asked a follow-up question, which I answered exactly the same way.
Q2) Given that you know the future closing prices of Company X for the next 10 days, design an algorithm to determine whether you should BUY, SELL or HOLD on every single day (you are allowed to make only one decision per day), with the aim of maximizing profit.
Eg: Day 1 closing price :2.24
Day 2 closing price :2.11
...
Day 10 closing price : 3.00
My Solution: Same as above.
I would like to know whether there's any better algorithm out there to maximize profit, given that I can make a decision every single day.
Q1 If you were only permitted to buy one share of the stock and sell one share of the stock, design an algorithm to find the best times to buy and sell.
In a single pass through the array, determine the index i with the lowest price and the index j with the highest price. You buy at i and sell at j (selling before you buy, by borrowing stock, is in general allowed in finance, so it is okay if j < i). If all prices are the same you don't do anything.
Q2 Given that you know the future closing prices of Company X for the next 10 days, design an algorithm to determine if you should BUY, SELL or HOLD for every single day (you are allowed to make only 1 decision every day), with the aim of maximizing profit.
There are only 10 days, and hence there are only 3^10 = 59049 different possibilities. Hence it is perfectly possible to use brute force. I.e., try every possibility and simply select the one which gives the greatest profit. (Even if a more efficient algorithm were found, this would remain a useful way to test the more efficient algorithm.)
Some of the solutions produced by the brute force approach may be invalid, e.g. it might not be possible to own (or owe) more than one share at once. Moreover, do you need to end up owning 0 stocks at the end of the 10 days, or are any positions automatically liquidated at the end of the 10 days? Also, I would want to clarify the assumption that I made in Q1, namely that it is possible to sell before buying to take advantage of falls in stock prices. Finally, there may be trading fees to be taken into consideration, including payments to be made if you borrow a stock in order to sell it before you buy it.
Once these assumptions are clarified, it could well be possible to design a more efficient algorithm. E.g., in the simplest case, if you can only own one share and you have to buy before you sell, then you would have a "buy" at the first minimum in the series, a "sell" at the last maximum, and buys and sells at the minima and maxima in between.
The more I think about it, the more I think these interview questions are as much about seeing how and whether a candidate clarifies a problem as they are about the solution to the problem.
Here are some alternative answers:
Q1) Work from left to right in the array provided. Keep track of the lowest price seen so far. When you see an element of the array note down the difference between the price there and the lowest price so far, update the lowest price so far, and keep track of the highest difference seen. My answer is to make the amount of profit given at the highest difference by selling then, after having bought at the price of the lowest price seen at that time.
Q2) Treat this as a dynamic programming problem, where the state at any point in time is whether or not you own a share. Work from left to right again. At each point in time, find the highest possible profit in each of the two states: given that you own a share at the end of that point in time, and given that you do not. You can work this out from the results of the previous time step. For the "owning" state, compare buying a share (its price subtracted from the best previous "not owning" profit) with continuing to hold a share you already owned. For the "not owning" state, compare selling a share (its price added to the best previous "owning" profit) with staying pat on the best previous "not owning" profit. As is standard with dynamic programming, you record the decision made at each point in time and recover the full list of decisions at the end by working backwards.
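The dynamic program described above can be sketched in Python: two states per day (holding a share or not), one decision per day, buy before sell. This version returns only the best profit, omitting the decision backtracking:

```python
def max_profit(prices):
    """Best profit holding at most one share at a time, with one
    action (buy / sell / hold) per day."""
    hold = float("-inf")   # best profit so far while holding a share
    cash = 0.0             # best profit so far while not holding
    for p in prices:
        # simultaneous update: today's decision uses yesterday's states
        hold, cash = max(hold, cash - p), max(cash, hold + p)
    return cash
```

For the sequence {1, 3, 5, 4, 6} this yields a profit of 6 (buy at 1, sell at 5, buy at 4, sell at 6).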
Your answer to question 1 was correct.
Your answer to question 2 was not correct. To solve this problem you work backwards from the end, choosing the best option at each step. For example, given the sequence { 1, 3, 5, 4, 6 }, since 4 < 6 your last move is to sell. Since 5 > 4, the previous move to that is buy. Since 3 < 5, the move on 5 is sell. Continuing in the same way, the move on 3 is to hold and the move on 1 is to buy.
Your solution to the first problem is correct. Kadane's algorithm has O(n) runtime complexity, which is optimal for the maximum-subarray problem, and it has the benefit of being easy to implement.
Your solution to the second problem is wrong, in my view. What you can do is store the left and right indices of the maximum-sum subarray you find. Once you have the maximum-sum subarray and its left and right indices, you can call the function again on the left part (0 to left-1) and on the right part (right+1 to Array.size-1). This is basically a recursive process, and you can design the structure of the recursion with a base case to solve the problem. By following this process you can maximize profit.
Suppose the prices are the array P = [p_1, p_2, ..., p_n]
Construct a new array A = [p_2 - p_1, p_3 - p_2, ..., p_n - p_{n-1}]
i.e. A[i] = p_{i+1} - p_i for i = 1, ..., n-1.
Now go find the maximum sum sub-array in this.
OR
Find a different algorithm, and solve the maximum sub-array problem!
The problems are equivalent.
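A compact Python sketch of this reduction combined with Kadane's algorithm for the single buy/sell problem:

```python
def best_single_trade(prices):
    """Max profit from one buy followed by one sell, via Kadane's
    algorithm on the day-to-day price differences."""
    best = cur = 0
    for i in range(1, len(prices)):
        diff = prices[i] - prices[i - 1]
        cur = max(0, cur + diff)   # best subarray ending at day i
        best = max(best, cur)
    return best

# best_single_trade([7, 1, 5, 3, 6, 4]) -> 5 (buy at 1, sell at 6)
```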

Find all possible combinations of a scores consistent with data

So I've been working on a problem in my spare time and I'm stuck. Here is where I'm at. I have the number 40; it represents players. I've been given other numbers 39, 38, ..., 10. These represent the scores of the first 30 players (1-30). The rest of the players (31-40) have unknown scores. What I would like to do is find how many combinations of scores are consistent with the given data.
So for a simpler example: if you have 3 players. One has a score of 1. Then the number of possible combinations of the scores is 3 (0,2; 2,0; 1,1), where (a,b) stands for the number of wins for player one and player two, respectively. A combination of (3,0) wouldn't work because no person can have 3 wins. Nor would (0,0) work because we need a total of 3 wins (and wouldn't get it with 0,0).
I've found the total possible number of games. This is the total number of games played, which means it is the total number of wins. (There are no ties.) Finally, I have a variable for the max wins per player (which is one less than the total number of players. No player can have more than that.)
I've tried finding the number of unique combinations by spreading out N wins over the players and then subtracting the combinations that don't fit the criteria. E.g., to figure out how many ways to give 10 victories to 5 people with no more than 4 victories per person, you would use:
C(14,4) - C(5,1)*C(9,4) + C(5,2)*C(4,4) = 381. C(14,4) comes from the formula C(n+k-1, k-1) ("stars and bars"). The next term picks off the combinations where someone has 5 or more (not allowed), and the last term adds back in the ones we subtracted twice.
Yeah, there has got to be an easier way. Lastly, the numbers get so big that I'm not sure that my computer can adequately handle them. We're talking about C(780, 39), which is 1.15495183 × 10^66. Regardless, there should be a better way of doing this.
To recap: you have 40 people. The scores of the first 30 people are 10-39. The last ten people have unknown scores. How many score assignments can you generate that meet the criteria: all the scores add up to the total number of wins, and each player gets no more than 39 wins?
Thoughts?
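The inclusion-exclusion computation above (10 wins among 5 people, at most 4 each) can be checked with a short Python function; `math.comb` is in the standard library since Python 3.8:

```python
from math import comb

def bounded_compositions(total, parts, cap):
    """Number of ways to write `total` as an ordered sum of `parts`
    non-negative integers, each at most `cap`. Inclusion-exclusion
    over how many parts exceed the cap."""
    return sum(
        (-1) ** j * comb(parts, j) * comb(total - j * (cap + 1) + parts - 1, parts - 1)
        for j in range(parts + 1)
        if total - j * (cap + 1) >= 0
    )

# bounded_compositions(10, 5, 4) -> 381, matching
# C(14,4) - 5*C(9,4) + 10*C(4,4)
```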
Generating functions:
Since the question is more about math, but is still on a programming Q&A site, let me give you a partial solution that works for many of these problems using a symbolic algebra system (like Maple or Mathematica). I highly recommend that you grab an intro combinatorics book; these kinds of questions are answered there.
First of all the first 30 players who score 10-39 (with a total score of 735) are a bit of a red herring - what we would like to do is solve the other problem, the remaining 10 players whose score could be in the range of (0...39).
If we think of the possible scores of the players as the polynomial:
f(x) = x^0 + x^1 + x^2 + ... x^39
Where a value of x^2 is the score of 2 for example, consider what this looks like
f(x)^10
This represents the combined score of all 10 players; e.g. the coefficient of x^385 is 2002, which represents the fact that there are 2002 ways for the 10 players to score a combined total of 385. Wolfram Alpha (a programming language, IMO) can evaluate this for us.
If you'd like to know the total number of possible score combinations, just substitute x = 1 into the expression, giving 40^10 = 10,485,760,000,000,000 (no surprise, since each of the 10 players has 40 possible scores, 0 through 39).
Why is this useful?
While I know it may seem silly to some to set up all this machinery for a simple problem that can be solved on paper - the approach of generating functions is useful when the problem gets messy (and asymptotic analysis is possible). Consider the same problem, but now we restrict the players to only score prime numbers (2,3,5,7,11,...). How many possible ways can the 10 of them score a specific number, say 344? Just modify your f(x):
f(x) = x^2 + x^3 + x^5 + x^7 + x^11 ...
and repeat the process! (I get [x^344]f(x)^10 = 1390).
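The coefficient extraction can also be done directly in Python with integer polynomial convolution, no CAS needed; a sketch:

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# f(x) = x^0 + x^1 + ... + x^39: one term per possible score
f = [1] * 40

g = [1]
for _ in range(10):   # g = f(x)^10
    g = poly_mul(g, f)

# g[k] is the number of ways the 10 players combine for a total score k;
# e.g. g[385] == 2002, and sum(g) == 40**10 since each of the 10 players
# has 40 possible scores.
```

Swapping `f` for a list with 1s only at prime indices gives the restricted variant in exactly the same way.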

Adjusting votes based on different numbers of voters

I have a 1-to-5 voting system and I'm trying to figure out the best way to find the most popular item voted on, taking into consideration the total possible number of votes cast. To get a vote total, I'm counting "1" votes as -3, "2" votes as -2, "3" votes as +1, "4" votes as +2, and "5" votes as +3, so a "1" vote cancels out a "5" vote and vice versa.
For this example, say we have 3 films playing in 3 different size theaters.
Film 1: 800 seats / Film 2: 400 seats / Film 3: 180 seats
In a way, we're limiting the total amount of votes based on seats, so I would like a way for the film in the smaller theater to not get automatically overwhelmed by the film in the larger theater. It's likely that there will be more votes cast in the larger theater, resulting in a higher total score.
Edit 10/18:
Alright, hopefully I can explain this better. I'm working for a film festival, and we're balloting the first screening of each film in the fest. Therefore, each film will have from 0 to a maximum number of votes based on the size of each theater. I'm looking to find the most popular film in 3 categories: narrative, documentary, short film. By popular I mean a combination of highest average vote and number of votes.
It seems like a weighted average is what i'm looking for, giving less weight to votes from a bigger theater and more weight to votes from a smaller theater to even things out.
You're working with weighted averages.
Instead of just adding up and dividing by the total number of elements (arithmetic mean):
a + b + c
---------
3
You are adding weights to each element, as they are not all evenly distributed:
w1*a + w2*b + w3*c
(the weights w1, w2, w3 sum to 1, so no further division is needed)
In your case, the weights could be this:
# of people in current theater
--------------------------------
# of people in all the theaters
Let's try a test case:
Theater 1: 100 people (rating: 1)
Theater 2: 1,000,000 people (rating: 5)
Average = (100 / (100 + 1000000)) * 1 + (1000000 / (100 + 1000000)) * 5
= 4.9996
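A sketch of that weighted mean in Python; because the weights are normalized to sum to 1, no further division is needed:

```python
def weighted_mean(sizes, ratings):
    """Average the ratings, weighting each theater's rating by its
    share of the total audience."""
    total = sum(sizes)
    return sum(n / total * r for n, r in zip(sizes, ratings))

# weighted_mean([100, 1_000_000], [1, 5]) is about 4.9996
```

Note that weighting by audience share pulls the result toward the larger theater; to favor smaller theaters, as the question asks, the weights would need to be inverted.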
Well, depending on your goals it sounds like you are interested in some sort of weighted average.
Continuing your film example, it sounds to me like you are trying to rate how "good" the films are. To do this, you don't want to factor the number of views of any particular film too highly into the final determination. However, you have to take it into account somewhat since a film that only got viewed 5 times and had an average rating of +2.7 has much less credibility than a film with 10,000 views getting the same rating.
You might consider simply not including a film in the results unless it has a minimum number of votes.
Given a uniform (even) distribution of votes across {1,2,3,4,5}, the expected rating of your film is 0.2. This is because the votes {1 and 5} cancel each other out, as do {2 and 4}, while the "3" vote (worth +1) occurs with probability 1/5, for an expected value of 1/5 = 0.2. So if people give ratings of {1,2,3,4,5} with equal probability, then you would expect a film (no matter how many people see it) to have an average rating close to 0.2.
So I think the best option for you would be to add up all the scores received and simply divide by the number of people who have seen each film. This should be a good guess at people's sentiment toward the film as the average of the distribution should not get larger simply because more people see the film.
If I were you, I would also suggest adding a small penalty term to your final result, to take into account the fact that some people didn't even want to go see the movie. If lots of people didn't want to see the movie in the first place, but the 5 or so people that saw it gave it a 5* rating, that doesn't make it a good movie, does it?
So a final solution I would recommend: Add up all the points as you have described, and divide by the total number of people who have gone to the cinema. While not perfect (whatever perfect means), it should give you some indication of what people like and don't like. This essentially means people who chose not to see a movie are adding zero to the points total, but still affect the average because the end result is divided by a larger number.
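The final recommendation (sum the mapped ballot values and divide by the theater's capacity rather than by ballots cast) can be sketched as follows; the vote-to-points mapping is the one from the question:

```python
VOTE_POINTS = {1: -3, 2: -2, 3: 1, 4: 2, 5: 3}

def film_score(votes, capacity):
    """Sum the mapped ballot values and divide by theater capacity,
    so empty seats implicitly contribute zero to the average."""
    return sum(VOTE_POINTS[v] for v in votes) / capacity

# Five 5-star ballots in an 800-seat house score far lower than the
# same five ballots in a 20-seat house: 15/800 vs 15/20.
```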
