Calculating scores from incomplete league tables - algorithm

When I was in high school and learning about matrices, we were shown a technique that would help in a situation like this:
There are a number of chess players in a league, and they need to determine a ranking for all of them, but don't have enough time for every player to play every other player. If it ends up that Player A beats Player B, and Player B beats Player C, you can say with some level of certainty that Player A is better than Player C, and therefore award some points to Player A in lieu of them actually playing each other.
As I said, this was a little while ago and I can't remember how to actually perform the algorithm, but I think it was called something like a "domination matrix". Searching the web for that has been fruitless and scary at times, so I don't think that's right.
Can anyone give me some help? Ideally an algorithm I can use for this program I'm working on, but even just a pointer to some more information about the procedure.

It sounds like you are remembering a presentation of the Perron-Frobenius theorem - which is at least a safer search term :-). One such presentation is at
http://www.math.utah.edu/~keener/lectures/rankings.pdf
Chess players use the Elo system, described at http://en.wikipedia.org/wiki/Elo_rating_system and http://www.chesselo.com/, which would be easier to implement. It is possible that there is no good ranking even if you know everything - see http://en.wikipedia.org/wiki/Nontransitive_dice. People modelling soccer games usually keep track of defensive and offensive strengths separately.
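For a concrete feel of the idea, here is a minimal sketch of the eigenvector-ranking approach from the Keener lecture notes linked above: build a smoothed preference matrix from the game results and take its dominant eigenvector by power iteration. The results matrix is made-up example data.

```python
import numpy as np

# Hypothetical results: wins[i][j] = number of times player i beat player j.
wins = np.array([
    [0, 2, 1, 0],
    [1, 0, 2, 1],
    [0, 1, 0, 2],
    [1, 0, 1, 0],
], dtype=float)

# Keener-style smoothing: a[i][j] = (wins + 1) / (games + 2) for pairs that
# have met, 0 for pairs that never played.
games = wins + wins.T
a = np.where(games > 0, (wins + 1) / (games + 2), 0.0)

# Power iteration: for a non-negative irreducible matrix, Perron-Frobenius
# guarantees a dominant eigenvector with positive entries, which serve as
# the ratings.
r = np.ones(len(a))
for _ in range(100):
    r = a @ r
    r /= r.sum()

print(r)  # higher value = stronger player
```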

What you are describing sounds like a Swiss-system tournament, or a very similar variation, all described in the linked Wikipedia entry. Note, though, that rather than calculating ratings from an incomplete tournament, it is a way to organize a tournament - pairing the best chess players against the best and the worst against the worst - to determine a ranking without the need for everyone to play everyone else.

Maybe some type of PageRank algorithm might work for you.
Imagine every person has a webpage in which they hyperlink to every person who defeated them.
Running the PageRank algorithm on this data would give you the steady state of your link matrix, which might indicate the relative importance of each person (I guess).
For example, a person who played only one game but, in it, defeated someone who defeated lots of people might end up with a higher PageRank than somebody who defeated 10 people who in turn have not won a single game.
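Here is a minimal sketch of that idea, assuming a made-up set of results; each loser "links to" everyone who beat them, and the standard damped PageRank iteration is run on the resulting graph.

```python
import numpy as np

# Hypothetical defeat graph: beat_by[i] lists the players who defeated
# player i, i.e. the players that i's "page" links to.
beat_by = {0: [1], 1: [2, 3], 2: [3], 3: []}
n = len(beat_by)

# Column-stochastic link matrix: loser i splits its "vote" evenly among
# everyone who beat it.
m = np.zeros((n, n))
for loser, winners in beat_by.items():
    if winners:
        for w in winners:
            m[w, loser] = 1.0 / len(winners)
    else:
        m[:, loser] = 1.0 / n  # dangling node: undefeated player votes for all

# Standard PageRank iteration with damping factor 0.85.
d = 0.85
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - d) / n + d * (m @ rank)

print(rank)  # player 3, undefeated, should come out on top
```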

Perhaps the minimax algorithm?

Related

Prevent Genetic algorithm in zero-sum game from cooperating

I have a specific game, which is not literally zero-sum, because points are awarded by the game during a match, but close to it, in the sense that the total number of points has a clear upper limit, so the more points you score, the fewer points are available for your opponents.
The game is played by 5 players, with no teams whatsoever.
I'm making a genetic algorithm play rounds against itself with pseudo-random "mutations" between generations.
But after a couple hundred generations, a pattern always emerges: the algorithm ends up strongly favoring a specific player (for example, the player who plays first). Since the mutations giving the "best results" serve as the base for the next generation, this seems to converge on some version of "If you are the first player, play this way (the way being a very specific yet fairly random technique that gives bad, or at best average, results), and if not, then play in this specific way that indirectly but strongly favors the first player".
Then, for the next generations, the player whose turn is strongly favored starts mutating totally randomly because it wins every round no matter what it does, as long as the part of the algorithm that favors that player is still intact.
I'm looking for a way to prevent this specific evolution route, but I can't figure out how to possibly "reward" victory by your own strategy more than victory because you were helped a lot.
I think this happens because only the winner of the round-robin tournament gets promoted and mutated in each generation. At first players win more or less randomly, but then a strategy comes up that favors a position. Now I guess that slightly diverting from that strategy (pseudo-random mutations) makes you lose only the games where you are in the favoured position without winning any of the others, so you will never divert from that strategy - something like a local Nash equilibrium.
You could try to keep more than one individual per generation and generate mutations from them. But I doubt this will help, and at best it will delay the effect, because soon the code of the best individual will spread to all of them. This seems to be the root cause of the problem.
Therefore my suggestion would be to have t tribes, where each tribe has x/t individuals. Now instead of playing a round-robin tournament, each individual plays only against the individuals of other tribes. Then you keep the best individual per tribe, mutate, and proceed with the next generation, so that the tribes never mix genes.
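A rough sketch of what that could look like; play_game(players) (returning the index of the winning genome) and mutate(genome) are hypothetical stand-ins for your existing game and mutation code.

```python
import random

def evolve(tribes, generations, play_game, mutate):
    """tribes: list of t lists, each holding x/t genomes."""
    for _ in range(generations):
        new_tribes = []
        for i, tribe in enumerate(tribes):
            # Each genome plays only against genomes from OTHER tribes.
            outsiders = [g for j, t in enumerate(tribes) if j != i for g in t]
            scored = []
            for genome in tribe:
                wins = 0
                for _ in range(10):  # several games to reduce noise
                    seats = [genome] + random.sample(outsiders, 4)  # 5 players
                    random.shuffle(seats)  # vary seating order as well
                    if seats[play_game(seats)] is genome:
                        wins += 1
                scored.append((wins, genome))
            # Keep the tribe's best genome and refill the tribe with its
            # mutants, so genes never cross tribe boundaries.
            best = max(scored, key=lambda s: s[0])[1]
            new_tribes.append([best] + [mutate(best)
                                        for _ in range(len(tribe) - 1)])
        tribes = new_tribes
    return tribes
```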
To me, it seems like there is an easy fix: play multiple games each evaluation.
Instead of each generation testing only one game, strongly favouring the starting player, play 5 games and distribute who starts first equally (so every player starts first at least once).
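As a minimal sketch of that rotation, assuming a hypothetical play_game(lineup) that returns the index of the winning seat:

```python
def evaluate(lineup, play_game):
    """Play len(lineup) games, rotating who goes first, and count wins."""
    wins = [0] * len(lineup)
    for offset in range(len(lineup)):
        rotated = lineup[offset:] + lineup[:offset]
        winner = play_game(rotated)                 # index into rotated seats
        wins[(winner + offset) % len(lineup)] += 1  # map back to original slot
    return wins
```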
I suppose your population is larger than 5, right? So how are you testing the genomes against each other? You should definitely not let them play only one game, because you may have paired a mediocre player against 4 weak players, making the mediocre player seem better than it is.

How to rank a set of people based on competition results?

I'm having a bit of a brain melt here.
I have a set of people. They compete against each other in timed events. Each competition yields a set of results showing everyone, ranked by their times.
From this data, I can see that (say) person A has beaten person B 73% of the time in 48 meetings. Simple.
Let's suppose I have people A, B, C, D, E, F and G, though. For any pairing I can see who's the victor by comparing them to each other, but how do I come up with the "most accurate" OVERALL ranking?
Does it need to be some sort of iterative process? Any tips appreciated, I don't know where to start really!
(Each competition is not necessarily a complete set of all of the competitors, if that matters.)
I might like to further improve things by taking into account their relative times, not JUST "A beat B" or "B beat A". "A beat B by 6.3 seconds", etc etc. But let's keep things simple for now, I think!
Happy to give more info if needed, just tell me what!
Many thanks!
As a first step, I'd implement the Elo rating system.
http://en.wikipedia.org/wiki/Elo_rating_system
It will do a decent job. You can get fancier with more complicated systems like Glicko or TrueSkill, but I'd just go with Elo first and see if it is good enough for you.
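A minimal Elo implementation for the win/lose case looks something like this; K = 32 is a common but arbitrary choice for the update speed.

```python
def expected_score(rating_a, rating_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a, rating_b, a_won, k=32):
    """Return the new (rating_a, rating_b) after one game."""
    e_a = expected_score(rating_a, rating_b)
    s_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (s_a - e_a)
    new_b = rating_b + k * ((1.0 - s_a) - (1.0 - e_a))
    return new_a, new_b

# Example: a 1400 player upsets a 1600 player.
print(elo_update(1400, 1600, a_won=True))  # roughly (1424.3, 1575.7)
```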
You can use the Elo Rating System (used in chess to rate players around the world).
I think it works the following way: each player starts with a given number of points. When two players challenge each other they will win or lose a different amount of points, based on the points that each player has.
Losing against someone much stronger than you won't make you lose as many points as losing to someone at your level (or below it). With equal K-factors the exchange is zero-sum, but if the two players have different K-factors (say, a provisional player against an established one), the total number of points in the system can change after a match - for example, one player could win 10 points while the other loses only 5, creating 5 new points in the system.
I believe this algorithm was used in Hot or not.
Some similar alternatives: Glicko Rating System and Chessmetrics

How To Make An Efficient Ludo Game Playing AI Algorithm

I want to develop a Ludo game which will be played by at most 4 players and at least two. One of the players will be an AI. As there are so many conditions, I am not able to decide which pawn the computer should move. I am trying my best, but have yet to develop an efficient algorithm that can compete with a human. If anybody knows of any algorithm, implemented in any language, please let me know. Thanks.
Also, if you want, you can try a general game-playing AI algorithm such as Monte Carlo tree search. The basic idea is this: you simulate many random games from the current position, and then choose the action which statistically guarantees the best outcome.
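A flat Monte Carlo sketch of that idea (full MCTS adds a search tree on top, but this is the core); legal_moves, apply_move and winner_of_random_playout are hypothetical stand-ins for a Ludo engine.

```python
def choose_move(state, me, legal_moves, apply_move, winner_of_random_playout,
                playouts_per_move=200):
    """Pick the legal move with the best win rate over random playouts."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(state, me):
        wins = 0
        for _ in range(playouts_per_move):
            # Play the move, then let every player act randomly to the end.
            if winner_of_random_playout(apply_move(state, move)) == me:
                wins += 1
        rate = wins / playouts_per_move
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```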
Basically, AI depends upon the type of environment.
For Ludo, the environment is stochastic.
There are multiple algorithms to decide which pawn should move next.
For these types of environments, you need to learn algorithms like expectimax or MDPs, or, if you want to do it more professionally, you should go for reinforcement learning.
I think that in most computer card/board games, getting a reasonably good strategy for your AI player is better than trying to get an always-winning, top-notch algorithm. The AI player should be fun to play with.
A pretty reasonable way to do it is to collect a set of empirical rules which your AI should follow, like "If I rolled a 6 on the dice, I should move a pawn from Home before considering any other moves", "If I have a chance to 'eat' another player's pawn, do it", etc. Then rank these rules from most important to least important and implement them in the code. You can combine sets of rules into different strategies and try switching them to see if the AI plays better or worse.
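A sketch of that ranked-rule idea - each rule is a predicate plus a move generator, tried in priority order; all the methods on state here are hypothetical.

```python
# Rules in priority order: (name, applies?, move to make).
RULES = [
    ("leave home on a six",   lambda s: s.roll == 6 and s.has_pawn_at_home(),
                              lambda s: s.move_pawn_from_home()),
    ("capture if possible",   lambda s: s.can_capture(),
                              lambda s: s.capture_move()),
    ("advance furthest pawn", lambda s: True,   # fallback, always applies
                              lambda s: s.advance(s.furthest_pawn())),
]

def pick_move(state):
    for name, applies, move in RULES:
        if applies(state):
            return move(state)  # first applicable rule wins
```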
Start with a simple heuristic - what's the total number of squares each player has to move to get all their pieces home? Now you can make a few adjustments to that heuristic - for instance, what's the additional cost of a piece in the home square? (Hint - what's the expected total of the dice rolls before the player gets a six?). Now you can further adjust the 'expected distance' of pieces from home based on how likely they are to be hit. For instance, if a piece has a 1 in 6 chance of getting hit before the player's next move, then its heuristic distance is 5/6*(current distance)+1/6*(home distance).
You should then be able to choose a move that maximizes your player's advantage (difference in heuristic) over all the opponents.
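That heuristic could be sketched like this; distance_home, hit_probability, full_distance, pieces and opponents are hypothetical stand-ins for the board logic.

```python
def heuristic_distance(piece, state):
    """Expected remaining distance, discounted by the risk of being hit."""
    d = state.distance_home(piece)
    p_hit = state.hit_probability(piece)  # e.g. 1/6 if one opponent can hit it
    # If hit, the piece is sent back and must travel the whole board again.
    return (1 - p_hit) * d + p_hit * state.full_distance()

def evaluate(state, me):
    """My advantage: the best opponent's remaining distance minus mine."""
    mine = sum(heuristic_distance(p, state) for p in state.pieces(me))
    others = [sum(heuristic_distance(p, state) for p in state.pieces(o))
              for o in state.opponents(me)]
    return min(others) - mine  # larger is better for `me`
```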

Player rating for game with random teams

I am working on an algorithm to score individual players in a team-based game. The problem is that no fixed teams exist - every time 10 players want to play, they are divided into two (somewhat) even teams and play each other. For this reason, it makes no sense to score the teams, and instead we need to rely on individual player ratings.
There are a number of problems that I wish to take into account:
New players need some sort of provisional ranking to reach their "real" rating, before their rating counts the same as seasoned players.
The system needs to take into account that a team may consist of a mix of player skill levels - eg. one really good, one good, two mediocre, and one really poor. Therefore a simple "average" of player ratings probably won't suffice and it probably needs to be weighted in some way.
Ratings are adjusted after every game and as such the algorithm needs to be based on a per-game basis, not per "rating period". This might change if a good solution comes up (I am aware that Glicko uses a rating period).
Note that cheating is not an issue for this algorithm, since we have other measures of validating players.
I have looked at TrueSkill, Glicko and Elo (which is what we're currently using). I like the idea of TrueSkill/Glicko where you have a deviation that is used to determine how precise a rating is, but none of the algorithms take the random-teams perspective into account; they seem to be mostly based on 1v1 or FFA games.
It was suggested somewhere that you rate players as if each player from the winning team had beaten all the players on the losing team (25 "duels"), but I am unsure if that is the right approach, since it might wildly inflate the rating when a really poor player is on the winning team and gets a win vs. a very good player on the losing team.
Any and all suggestions are welcome!
EDIT: I am looking for an algorithm for established players + some way to rank newbies, not the two combined. Sorry for the confusion.
There is no AI and players only play each other. Games are determined by win/loss (there is no draw).
Provisional ranking systems are always imperfect, but the better ones (such as Elo) are designed to adjust provisional ratings more quickly than for ratings of established players. This acknowledges that trying to establish an ability rating off of just a few games with other players will inherently be error-prone.
I think you should use the average rating of all players on the opposing team as the input for establishing the provisional rating of the novice player, but handle it as just one game, not as N games vs. N players. Each game is really just one data sample, and the Elo system handles accumulation of these games to improve the ranking estimate for an individual player over time before switching over to the normal ranking system.
For simplicity, I would also not distinguish between established and provisional ratings for members of the opposing team when calculating a new provisional rating for some member of the other team (unless Elo requires this). All of these ratings have implied error, so there is no point in adding unnecessary complications of probably little value in improving ranking estimates.
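A sketch of that suggestion: treat the team game as one game for the player, played against the average rating of the opposing team, with a larger K-factor while the rating is provisional. The specific constants here (40/20, 20 games) are arbitrary assumptions.

```python
def expected_score(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_rating(player_rating, games_played, opposing_team_ratings, won):
    """One Elo-style update against the opposing team's average rating."""
    avg_opponent = sum(opposing_team_ratings) / len(opposing_team_ratings)
    k = 40 if games_played < 20 else 20  # move provisional ratings faster
    s = 1.0 if won else 0.0
    return player_rating + k * (s - expected_score(player_rating, avg_opponent))
```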
First off: It is very very unlikely that you will find a perfect system. Every system will have a flaw somewhere.
And to answer your question: Perhaps the ideas here will help: Lehman Rating on OkBridge.
This rating system is in use (since 1993!) on the internet bridge site called OKBridge. Bridge is a partnership game and is usually played with a team of 2 opposing another team of 2. The rating system was devised to rate the individual players and caters to the fact that many people play with different partners.
Without any background in this area, it seems to me a ranking system is basically a statistical model. A good model will converge to a consistent ranking over time, and the goal would be to converge as quickly as possible. Several thoughts occur to me, several of which have been touched upon in other postings:
Clearly, established players have a track record and new players don't. So the uncertainty is probably greater for new players, although for inconsistent players it could be very high. Also, this probably depends on whether the game primarily uses innate skills or acquired skills. I would think that you would want a "variance" parameter for each player. The variance could be made up of two parts: a true variance and a "temperature". The temperature is like in simulated annealing, where you have a temperature that cools over time. Presumably, the temperature would cool to zero after enough games have been played.
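The temperature idea could be sketched as an update size that decays toward a base value as a player accumulates games; the constants (16, 100, 0.95) are arbitrary assumptions for illustration.

```python
def k_factor(games_played, base_k=16, initial_temp=100.0, cooling=0.95):
    """Update size = base variance + a temperature that cools per game."""
    temperature = initial_temp * (cooling ** games_played)
    return base_k + temperature  # large early updates, settling toward base_k
```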
Are there multiple aspects that come into play? Like in soccer, you may have good shooters, good passers, guys who have good ball control, etc. Basically, these would be the degrees of freedom in your system (in my soccer analogy, they may or may not be truly independent). It seems like an accurate model would take these into account; of course, you could have a black-box model that implicitly handles these. However, I would expect that understanding the number of degrees of freedom in your system would be helpful in choosing the black box.
How do you divide teams? Your teaming algorithm implies a model of what makes equal teams. Maybe you could use this model to create a weighting for each player and/or an expected performance level. If there are different aspects of player skills, maybe you could give extra points for players whose performance in one aspect is significantly better than expected.
Is the game truly win or lose, or could the score differential come in to play? Since you said no ties this probably doesn't apply, but at the very least a close score may imply a higher uncertainty in the outcome.
If you're creating a model from scratch, I would design with the intent to change. At a minimum, I would expect there to be a number of parameters that are tunable, perhaps even auto-tuned. For example, as you have more players and more games, the initial temperature and initial rating values will be better known (assuming you are tracking the statistics). But I would certainly anticipate that the more games have been played, the better the model you could build.
Just a bunch of random thoughts, but it sounds like a fun problem.
There was an article in Game Developer Magazine a few years back by some guys from the TrueSkill team at Microsoft, explaining some of their reasoning behind the decisions there. It definitely mentioned teams games for Xbox Live, so it should be at least somewhat relevant. I don't have a direct link to the article, but you can order the back issue here: http://www.gdmag.com/archive/oct06.htm
One specific point that I remember from the article was scoring the team as a whole, instead of e.g. giving more points to the player that got the most kills. That was to encourage people to help the team win instead of just trying to maximize their own score.
I believe there was also some discussion on tweaking the parameters to try to accelerate convergence to an accurate evaluation of the player skill, which sounds like what you're interested in.
Hope that helps...
How is the "scoring" settled?
If a team scores 25 points in total (the scores of all players on the team combined), you could divide each player's score by the total team score and multiply by 100 to get the percentage of how much that player contributed to the team (or to all points from both teams).
You could calculate a score from this data, and if a player's percentage is lower than, say, 90% of the team members' (or members of both teams), treat the player as a novice and calculate the score with a different weighting factor.
Sometimes an easier concept works out better.
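A sketch of that percentage idea, with "lower than 90% of the team members" read as falling below 90% of the average teammate share (an assumption about the intended rule):

```python
def contributions(team_scores):
    """Each player's share of the team total, as a percentage."""
    total = sum(team_scores)
    return [100.0 * s / total for s in team_scores]

def is_novice_like(player_pct, team_pcts, threshold=0.9):
    avg = sum(team_pcts) / len(team_pcts)
    return player_pct < threshold * avg

team = [9, 6, 5, 3, 2]      # hypothetical per-player scores, 25 in total
pcts = contributions(team)   # [36.0, 24.0, 20.0, 12.0, 8.0]
print([is_novice_like(p, pcts) for p in pcts])
# [False, False, False, True, True] - the two low scorers get flagged
```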
The first question has a very 'gamey' solution: you can create a newbie lobby for the first couple of games, where players can't see their score until they have finished enough games to give you data for an accurate rating.
Another option is a variation on the first, but simpler: give them a single match vs. an AI that is used to determine their starting score (look at Quake Live for an example).
For anyone who stumbles in here years after it was posted: TrueSkill now supports teams made up of multiple players and changing configurations.
Every time 10 players want to play, they are divided into two (somewhat) even teams and play each other.
This is interesting, as it implies both that the average skill level on each team is equal (and thus unimportant) and that each team has an equal chance of winning. If you assume this constraint to hold true, a simple count of wins vs losses for each individual player should be as good a measure as any.

Are there any well-known algorithms or computer models that computer scientists use to predict FIFA World Cup winners?

Occasionally I read news articles mentioning computer models that computer scientists use to predict the winners of sporting events, or the odds for betting, and I figure there must be a mathematical model behind them. I never bothered to think twice, even though I am a "pseudo computer scientist" myself. With the 2010 FIFA World Cup just underway, and since I am also a "pseudo football/soccer player" myself, I just started to wonder about these algorithms.
For example, I know one factor is determining the strength of opponents, so that a win against a strong opponent counts for more than a win against a weak one. But this becomes a circular problem: how does one determine the strength of a team in the first place, before that team can be considered strong or weak? If it's based on historical data, there's no way that could be accurate, because the players of the past are no longer on the field, so their impact is nil (except maybe if they become coaches, like Maradona).
Anyway, long question short: if you happen to be working in this field or have some knowledge of it, please shed some light.
I know of some work, but its basis might surprise you a bit - it's been used to predict (quite accurately) how well various countries do in the Olympics, but it's based purely on the economics of the countries in question, not on the individual athletes at all. I don't believe it's been used specifically for the FIFA World Cup, but I suspect it would apply about as well, or maybe even a bit better.
Some of the large investment banks started a competition for their quants to write models to predict the World Cup winner.
http://kaggle.com/worldcup2010
More info on the models
http://kaggle.com/blog/2010/06/03/predicting-the-2010-fifa-world-cup-can-statisticians-outdo-the-investment-banks/
There's been some modeling to select horse racing winners using logit models here. The general principle can be applied to predicting which teams advance to the Round of 16 and subsequent rounds. Horse racing is at least, if not more, complex with regard to the number of variables that have a statistically significant effect on the outcome. For instance, in the author's model, weight, win rate, jockey characteristics, speed, post position, distance, and winnings were all significant variables. The authors didn't have access to "trip handicapping" at the time which has proven to be another important effect.
Reading this paper might help generate some thoughts around handicapping FIFA.
To tanascius' point, developing a predictive model is the first step. As the authors further explain, developing a betting strategy based on the results is a different problem that's based, in part, on the accuracy of the model.
One guy has been using Google's PageRank for sports. Not sure why he felt the need to rename it:
http://www.physorg.com/news180094320.html
I found that link by a quick search for using PageRank for sports, because I realized it solves the circular references in rankings. I was curious if anyone had tried it, and there it was.
BTW, anyone who can make accurate predictions for things you can bet on is not publishing their methods or results. They should be making money instead.
I think there's too much to take into consideration:
Injured players, form players, form teams, pressure on teams, rivalries, weather, home advantage, past meetings, formations, team styles, expectations....
Dunno if this applies to the FIFA soccer video games, but I know that for the Super Bowl (American football, for those who don't know) they use the latest version of Madden to predict the winner.
Not very scientific, I suppose, but it's there.
