Modelling using the Binomial Distribution Formula - probability

So I'm in my first semester at University studying Actuarial Science and one of my classes is Probability. Needless to say, I have fallen in love with the whole topic and given my passion, I constantly try to solve problems on my own. Except that this time, I really am puzzled.
I first started with a typical American Roulette game at the casino (zero and double zero are possible outcomes). I was interested in finding how many times the casino would need to spin the wheel before being almost sure of turning a profit, assuming there is only 1 bet per spin, the player always bets on red, and the bet amount is always $100.
Well, this problem is not too difficult to solve. Intuitively we know that because the casino has an edge (an expected value of $5.26 per $100 bet), it will beat the player over the long haul and turn a profit. But knowing the player could get lucky and delay the inevitable (going bankrupt), how long could the "long haul" be? Would it be 10 spins? 50 spins? 100 spins?
As it turns out, I did the model for this projection and noticed that after 10 spins, the casino will only turn a profit 44.32% of the time. After 100 spins, it becomes 66.57%, and after 1000 spins it is 94.89%. Conclusion: our intuition was confirmed. Over time, chances are the casino will crush the player.
I was able to do this using the Binomial Distribution Formula below:
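P(X = k) = C(n, k) * p^k * (1 - p)^(n - k)

Here X is the number of spins the player wins out of n, and p = 18/38 is the player's chance of hitting red on an American wheel. Since every bet is $100, the casino is ahead after n spins exactly when the player wins fewer than half of them, so the percentages above are sums of this formula over k < n/2 (at k = n/2, on an even number of spins, the casino merely breaks even).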
Now, what if instead, my new game at the casino was to roll a loaded die. Possible outcomes would be (1, 2, 3, 4, 5, 6) with corresponding probabilities (10%, 15%, 20%, 15%, 10%, 30%). Each number has the following profit for the casino: (-$200, -$100, -$50, -$100, -$200, +$500).
As you can see, the casino only makes money when a 6 is rolled (30% of the time), and although it doesn't win as often as with the Roulette game, its expected value is even higher ($70 per roll).
How would I find out how many events it would take for the casino to be profitable in this game? Intuitively I'm puzzled because the lower chance of winning tells me it would take much longer than the Roulette game, but its expected value being much higher tells me it would take fewer events.
Can I use a Binomial Distribution once again? For instance, after 5 rolls, one possible result could be (5, 4, 6, 3, 1), in which case the casino would have lost $50 (-$200, -$100, +$500, -$50, -$200). And it's confusing because unlike the Roulette game, the casino has 4 distinct outcomes (-$200, -$100, -$50 & +$500). I don't know how to solve this.
Any help? :)
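One way to see it: the total profit is a sum of independent per-roll profits, so its exact distribution can be built by convolving the single-roll distribution with itself (the binomial case is just the special case with two outcomes). A minimal Python sketch, using the probabilities and payoffs given above:

```python
from collections import defaultdict

# Per-roll casino profit distribution for the die game described above:
# faces 1..6 pay (-200, -100, -50, -100, -200, +500) with probabilities
# (10%, 15%, 20%, 15%, 10%, 30%); faces with equal payoffs are merged.
outcomes = {-200: 0.20, -100: 0.30, -50: 0.20, 500: 0.30}

def profit_distribution(n_rolls):
    """Exact distribution of total casino profit after n_rolls (convolution)."""
    dist = {0: 1.0}
    for _ in range(n_rolls):
        nxt = defaultdict(float)
        for total, p in dist.items():
            for profit, q in outcomes.items():
                nxt[total + profit] += p * q
        dist = nxt
    return dist

def prob_casino_ahead(n_rolls):
    return sum(p for total, p in profit_distribution(n_rolls).items() if total > 0)

for n in (5, 10, 50, 100):
    print(f"P(casino ahead after {n} rolls) = {prob_casino_ahead(n):.4f}")
```

Swapping the outcome table for {100: 20/38, -100: 18/38} reproduces the roulette figures, which makes a handy sanity check.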

Related

Evaluate different poker strategies

Not sure if SO is the right place to ask this question, but I am gonna try anyway.
I am playing with neural networks and poker, and I am facing the problem of how to evaluate different players. The poker variant I am talking about is No-limit Hold'em for 6 players. Is there a better way to find out the exact (or at least somewhat exact) win rate of players than to simulate X (ranging from hundreds of thousands to millions of) hands? The problem is that simulating a million hands is quite time-consuming, since each move means calculating a neural network output. Generating all possible hand and board options doesn't seem like a good idea, since there are a LOT of them.
Is it possible to do it better?
Summary:
No way will you want to directly compute this metric.
You will not be able to simulate all possible hands with current computing power.
The main problem is the quantity of variables: not only do you have six two-card hands and five sequential up-cards, but you have to deal with five foreign betting strategies. Unless you know all the details of those strategies, you have no way of directly computing the probability-averaged outcome.
Assuming that you also have adaptive strategies, those adaptations add even more complexity to the computations, such that a 100-hand trial must consider the sequence of hands played -- a "big bang" of combinatorial explosion.
Thus, we seem to be stuck with Monte Carlo methods (e.g. random sampling). Experiment with a few trials to see how many you need to get a reasonable evaluation for your needs. Do you really need 10^6 hands played to do that, or will 100 or 1000 hands give you a good approximation? If you're just trying to train and tune your model, I'm guessing that 20 trials of 100 hands each will be more than you need to get 99% accuracy of your rate of return (win rate).
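As a rough way to size those trials (a sketch, treating each hand as a win/lose Bernoulli outcome; a win rate measured in chips per hand would need the empirical variance instead): the standard error shrinks as 1/sqrt(N), so you can tabulate the 95% confidence half-width directly.

```python
import math

# 95% confidence half-width for an estimated win rate after n_hands
# simulated hands (normal approximation; p = 0.5 is the worst case).
def ci_halfwidth(n_hands, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n_hands)

for n in (100, 1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9} hands: win rate +/- {ci_halfwidth(n):.4f}")
```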

How does software that calculates winning probability of a Texas Hold'em or Omaha hand against 8 random opponent hands work?

So there are Texas Hold'em computer games where you play against up to 8 opponents, and supposedly some of these games tell you your probability of winning assuming your opponents' hands are all random. In case someone doesn't know, in Hold'em each player is dealt 2 private cards and then eventually 5 community cards are dealt in the middle (first 3, then 1, then 1 more), and the winner is the player who can make the best 5-card poker hand using any combination of their 2 private cards and the 5 community cards. In Omaha, each player is dealt 4 private cards and there are still 5 community cards, and the winner is the player who can make the best 5-card poker hand using exactly 2 private cards and 3 community cards.
So, in Hold'em, for any given player's private hand, there are over 10^24 ways that 8 opponents' private hands and the 5 community cards could be dealt. So how do they calculate/estimate your probability of winning at the outset, assuming your 8 opponents' hands are random? In Omaha the situation is even worse, although I've never seen an Omaha computer game that actually gives you your odds against 8 random opponents' hands. But anyway, are there any programming tricks that can get these winning probability calculations done (or, say, correct within 3 or 4 decimal places) faster than brute force? I'm hoping someone can answer here who's written such a program before that runs fast enough, hence why I'm asking here. And I'm hoping the answer doesn't involve random sampling estimation, because there's always a small chance that could be way off.
As you identified, the expected win rate is an intractably large summation and must be approximated. The standard approach is to use the Monte Carlo method, which involves simulating various hands over and over and taking the empirical average: #wins/#games.
Interestingly, the (MSE) error of this approximation approach is independent of the dimensionality (number of combinations). Specifically, letting X = 1 if you win and 0 if you lose, MSE = var(X)/N = p*(1-p)/N, where p = Prob(X=1) (unknown) and N is the number of samples.
There are a whole host of different Monte Carlo techniques that can improve the variance of the vanilla sampling approach, such as importance sampling, common random numbers, Rao-Blackwellization, control variates, and stratified sampling to name only a few.
Edit: just saw you are looking for a non-random approximation approach. I doubt you will have much luck with deterministic approximation approaches; I know that the current state of the art in computer poker research uses Monte Carlo methods to compute these probabilities, albeit with several variance-reduction tricks.
Regarding "because there's always a small chance that could be way off": you can always get a high-probability bound on the error rate with Hoeffding's inequality.
I would use a pre-computed odds table instead of on-the-fly computation. Tables that list these are extremely easy to find, and have existed for quite some time so they are proven tools. It would be fairly simple to match your hole cards + community cards to the percentage listed in a pre-computed table, and return the value to you instantly, skipping on-the-fly computation time.
There are only 52 cards in a deck (classically), so if you simply find all the possible solutions ahead of time, it is much faster to read from those instead of re-computing the odds for every hand on the fly.
Here's a link to an incomplete odds table:
http://www.learn-texas-holdem.com/texas-holdem-odds-probabilities.htm
I'd think about it like password cracking. Instead of brute-forcing every character individually, use a list of common passwords to decrease compute time. The difference in this case is you know every possible combination ahead of time.
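A sketch of what the lookup might look like. The equity values here are made-up placeholders, not numbers from a real table; the canonical-key idea (rank pair plus suited/offsuit) is what keeps a real preflop table down to 169 entries:

```python
# Hypothetical preflop lookup: key is a canonical hand label ("AKs" suited,
# "AKo" offsuit, "TT" pair). The equity values below are PLACEHOLDERS.
PREFLOP_EQUITY_VS_8 = {"AA": 0.31, "KK": 0.26, "AKs": 0.23}

RANKS = "23456789TJQKA"

def canonical(card1, card2):
    """Map e.g. ('Ah', 'Kh') -> 'AKs'."""
    (r1, s1), (r2, s2) = card1, card2
    hi, lo = sorted((r1, r2), key=RANKS.index, reverse=True)
    if hi == lo:
        return hi + lo
    return hi + lo + ("s" if s1 == s2 else "o")

print(canonical("Ah", "Kh"), PREFLOP_EQUITY_VS_8[canonical("Ah", "Kh")])
```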

How to rank a set of people based on competition results?

I'm having a bit of a brain melt here.
I have a set of people. They compete against each other in timed events. Each competition yields a set of results showing everyone, ranked by their times.
From this data, I can see that (say) person A has beaten person B 73% of the time in 48 meetings. Simple.
Let's suppose I have people A B C D E F G though. For any pairing I can see who's the victor by comparing them to each other, but how do I come up with the "most accurate" OVERALL ranking?
Does it need to be some sort of iterative process? Any tips appreciated, I don't know where to start really!
(Each competition is not necessarily a complete set of all of the competitors, if that matters.)
I might like to further improve things by taking into account their relative times, not JUST "A beat B" or "B beat A". "A beat B by 6.3 seconds", etc etc. But let's keep things simple for now, I think!
Happy to give more info if needed, just tell me what!
Many thanks!
As a first step, I'd implement the Elo rating system.
http://en.wikipedia.org/wiki/Elo_rating_system
It will do a decent job. You can get fancier with more complicated systems like Glicko or Trueskill, but I'd just go with Elo first and see if it is good enough for you.
You can use the Elo Rating System (used in chess to evaluate players around the world).
I think it works the following way: each player starts with a given number of points. When two players challenge each other, they will win or lose a different amount of points, based on the points that each player has.
Losing against someone much stronger than you won't cost you as many points as losing to someone at your level (or below it). I think the total points may be different after the match. For example, one player could win 10 points and the other lose 5, creating 5 new points in the system.
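For reference, the textbook Elo update is actually zero-sum: the winner gains exactly what the loser drops. A minimal sketch:

```python
# Standard Elo update for one game. score_a is 1.0 if A wins, 0.5 for a
# draw, 0.0 if A loses; k controls how fast ratings move.
def elo_update(rating_a, rating_b, score_a, k=32):
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# An upset: the 1600 player loses to the 1400 player and drops ~24 points.
print(elo_update(1600, 1400, 0.0))  # -> (1575.7..., 1424.2...)
```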
I believe this algorithm was used in Hot or not.
Some similar alternatives: Glicko Rating System and Chessmetrics

Help on Elo System specifics for large group competitions

I'm trying to implement an Elo-based ranking system for the sport my website deals with.
There's a few thousand competitors, and each competition sees anything from 50 to 500 of them go against the clock. Fastest man wins.
My initial thought was that a race with 50 people can be treated as 50*49/2 = 1225 one-on-one matches.
I do all of these comparisons in one go, and adjust each competitor's rating at the end. I.e. if someone's rating is 1600, it remains that for all 49 comparisons I make, and is adjusted by the sum of all the changes at the end. This doesn't seem right... is this what I should be doing?
The problem I have is that if one (normally strong) competitor has a terrible day (e.g. injury) then he can suddenly be beaten by 40+ people that he would normally beat. They all have lower ratings than him, and as such his rating gets PUMMELLED. With the recommended K-factor of 32 I see swings of thousands of points in a single event... If I drastically reduce the K factor (say, to 1) things are better, but I feel this is flawed.
Instead of summing all of the adjustments, should I be averaging them in some way? Or taking the most extreme value? Got my head in a bit of a twist here!
Any help appreciated, thank you!
Rather than treating a race of 50 as 1225 one-on-one matches I think you would get better results by treating it as 50 one-on-one matches where it's player vs. max(everybody else). Unless there's a reason racing against 10 opponents is any more difficult than racing against the best of those 10 opponents in a winner-takes-all scenario?
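One way to read that suggestion in code (a sketch; the ratings dict and finish order are hypothetical inputs, and "beating the field" is scored winner-takes-all):

```python
def expected(r_a, r_b):
    """Standard Elo expected score for a player rated r_a against r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update_race(ratings, finish_order, k=32):
    """One match per competitor against max(everybody else), not 1225 pairings.

    ratings: dict name -> Elo rating. finish_order: names, fastest first.
    Only the race winner scores 1.0 against the best of the rest; everyone
    else scores 0.0 against the best of the rest.
    """
    new = dict(ratings)
    for i, name in enumerate(finish_order):
        field_best = max(ratings[n] for n in finish_order if n != name)
        score = 1.0 if i == 0 else 0.0
        new[name] = ratings[name] + k * (score - expected(ratings[name], field_best))
    return new
```

Note how this addresses the original complaint: a strong competitor who has a bad day loses only one match (against the field's best) instead of 40-odd separate ones, so the swing stays bounded.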
You might want to look at the underlying math behind Elo. The idea behind Elo is to assume that every player has a given "play strength" represented by a normally-distributed (or, I believe as it's now used, logistically-distributed) random variable, then to try to estimate that variable. Elo is set up so that someone with an Elo rating 200 points higher than another player should have an expected outcome of .75 in a game (where 1 means "win", 0.5 is "draw", and 0 is "loss.") The Elo calculation is designed so that it adjusts the hidden variable based on observable behavior about how the player performed against another player.
Given that you are having a multi-way tournament, it seems that the parameter estimation model would be different. I am not a mathematician or a statistician, but it seems like there must be some way to do parameter estimation from a multi-way tournament just as it could be done from a one-on-one tournament. This approach seems theoretically ideal, though regrettably I don't have anything concrete to suggest along these lines.

Is there a perfect algorithm for chess? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I was recently in a discussion with a non-coder person on the possibilities of chess computers. I'm not well versed in theory, but think I know enough.
I argued that there could not exist a deterministic Turing machine that always won or stalemated at chess. I think that, even if you search the entire space of all combinations of player1/2 moves, the single move that the computer decides upon at each step is based on a heuristic. Being based on a heuristic, it does not necessarily beat ALL of the moves that the opponent could do.
My friend thought, to the contrary, that a computer would always win or tie if it never made a "mistake" move (however you define that). However, being a programmer who has taken CS, I know that even your good choices - given a wise opponent - can force you to make "mistake" moves in the end. Even if you know everything, your next move is greedy in matching a heuristic.
Most chess computers try to match a possible end game to the game in progress, which is essentially a dynamic programming traceback. Again, the endgame in question is avoidable though.
Edit: Hmm... looks like I ruffled some feathers here. That's good.
Thinking about it again, it seems like there is no theoretical problem with solving a finite game like chess. I would argue that chess is a bit more complicated than checkers in that a win is not necessarily by numerical exhaustion of pieces, but by a mate. My original assertion is probably wrong, but then again I think I've pointed out something that is not yet satisfactorily proven (formally).
I guess my thought experiment was that whenever a branch in the tree is taken, then the algorithm (or memorized paths) must find a path to a mate (without getting mated) for any possible branch on the opponent moves. After the discussion, I will buy that given more memory than we can possibly dream of, all these paths could be found.
"I argued that there could not exist a deterministic Turing machine that always won or stalemated at chess."
You're not quite right. There can be such a machine. The issue is the hugeness of the state space that it would have to search. It's finite, it's just REALLY big.
That's why chess falls back on heuristics -- the state space is too huge (but finite). To even enumerate -- much less search for every perfect move along every course of every possible game -- would be a very, very big search problem.
Openings are scripted to get you to a mid-game that gives you a "strong" position. Not a known outcome. Even end games -- when there are fewer pieces -- are hard to enumerate to determine a best next move. Technically they're finite. But the number of alternatives is huge. Even 2 rooks + king has something like 22 possible next moves. And if it takes 6 moves (12 plies) to mate, you're looking at 22^12 = 12,855,002,631,049,216 possible lines of play.
Do the math on opening moves. While there are only about 20 first moves, there are something like 30 or so second moves, so after just two full moves we're looking at roughly 360,000 alternative game states.
But chess games are (technically) finite. Huge, but finite. There's perfect information. There are defined start and end states. There are no coin tosses or dice rolls.
I know next to nothing about what's actually been discovered about chess. But as a mathematician, here's my reasoning:
First we must remember that White gets to go first and maybe this gives him an advantage; maybe it gives Black an advantage.
Now suppose that there is no perfect strategy for Black that lets him always win or draw. Since chess is a finite game of perfect information, it is determined (Zermelo's theorem), so this implies that no matter what Black does, there is a strategy White can follow to win. Wait a minute - this means there is a perfect strategy for White!
This tells us that at least one of the two players does have a perfect strategy which lets that player always win or draw.
There are only three possibilities, then:
White can always win if he plays perfectly
Black can always win if he plays perfectly
One player can win or draw if he plays perfectly (and if both players play perfectly then they always draw)
But which of these is actually correct, we may never know.
The answer to the question is yes: there must be a perfect algorithm for chess, at least for one of the two players.
It has been proven for the game of checkers that a program can always win or tie the game. That is, there is no choice of moves that one player can make which force the other player into losing.
The researchers spent almost two decades going through the 500 billion billion possible checkers positions, which is still an infinitesimally small fraction of the number of chess positions, by the way. The checkers effort included top players, who helped the research team program checkers rules of thumb into software that categorized moves as successful or unsuccessful. Then the researchers let the program run, on an average of 50 computers daily (some days on as many as 200 machines), while they monitored progress and tweaked the program accordingly. In fact, Chinook beat humans to win the checkers world championship back in 1994.
Yes, you can solve chess, no, you won't any time soon.
This is not a question about computers but only about the game of chess.
The question is, does there exist a fail-safe strategy for never losing the game? If such a strategy exists, then a computer which knows everything can always use it and it is not a heuristic anymore.
For example, the game tic-tac-toe is normally played based on heuristics. But there exists a fail-safe strategy: whatever the opponent plays, you can always find a way to avoid losing the game, if you play correctly from the start.
So you would need to prove whether or not such a strategy exists for chess as well. It is basically the same; just the space of possible moves is vastly bigger.
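To make that concrete, here is a minimal minimax search for tic-tac-toe in Python; running it confirms the fail-safe result (value 0: with best play, neither side can force a win):

```python
from functools import lru_cache

# Winning lines on a 3x3 board stored as a 9-character string ('.', 'X', 'O').
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value with `player` to move: +1 = X forces a win, -1 = O, 0 = draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    results = [value(board[:i] + player + board[i + 1:],
                     "O" if player == "X" else "X")
               for i, c in enumerate(board) if c == "."]
    return max(results) if player == "X" else min(results)

print(value("." * 9, "X"))  # 0: perfect play by both sides is a draw
```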
I'm coming to this thread very late, and I see that you've already realised some of the issues. But as an ex-master and an ex-professional chess programmer, I thought I could add a few useful facts and figures. There are several ways of measuring the complexity of chess:
The total number of chess games is approximately 10^(10^50). That number is unimaginably large.
The number of chess games of 40 moves or less is around 10^40. That's still an incredibly large number.
The number of possible chess positions is around 10^46.
The complete chess search tree (Shannon number) is around 10^123, based on an average branching factor of 35 and an average game length of 80.
For comparison, the number of atoms in the observable universe is commonly estimated to be around 10^80.
All endgames of 6 pieces or less have been collated and solved.
My conclusion: while chess is theoretically solvable, we will never have the money, the motivation, the computing power, or the storage to ever do it.
Some games have, in fact, been solved. Tic-Tac-Toe is a very easy one for which to build an AI that will always win or tie. Recently, Connect 4 has been solved as well (and shown to be unfair to the second player, since perfect play will cause him to lose).
Chess, however, has not been solved, and I don't think there's any proof that it is a fair game (i.e., whether the perfect play results in a draw). Speaking strictly from a theoretical perspective though, Chess has a finite number of possible piece configurations. Therefore, the search space is finite (albeit, incredibly large). Therefore, a deterministic Turing machine that could play perfectly does exist. Whether one could ever be built, however, is a different matter.
The average $1000 desktop will be able to solve checkers in a mere 5 seconds by the year 2040 (5x10^20 calculations).
Even at this speed, it would still take 100 of these computers approximately 6.34 x 10^19 years to solve chess. Still not feasible. Not even close.
Around 2080, our average desktops will have approximately 10^45 calculations per second. A single computer will have the computational power to solve chess in about 27.7 hours. It will definitely be done by 2080 as long as computing power continues to grow as it has the past 30 years.
By 2090, enough computational power will exist on a $1000 desktop to solve chess in about 1 second...so by that date it will be completely trivial.
Given checkers was solved in 2007, and the computational power to solve it in 1 second will lag by about 33-35 years, we can roughly estimate chess will be solved somewhere between 2055-2057. Probably sooner, since when more computational power is available (which will be the case in 45 years), more of it can be devoted to projects such as this. However, I would say 2050 at the earliest, and 2060 at the latest.
In 2060, it would take 100 average desktops 3.17 x 10^10 years to solve chess. Realize I am using a $1000 computer as my benchmark, whereas larger systems and supercomputers will probably be available, as their price/performance ratio is also improving. Also, their order of magnitude of computational power increases at a faster pace. Consider that a supercomputer now can perform 2.33 x 10^15 calculations per second, and a $1000 computer about 2 x 10^9. By comparison, 10 years ago the difference was 10^5 instead of 10^6. By 2060 the order-of-magnitude difference will probably be 10^12, and even this may increase faster than anticipated.
Much of this depends on whether or not we as human beings have the drive to solve chess, but the computational power will make it feasible around this time (as long as our pace continues).
On another note, the game of Tic-Tac-Toe, which is much, much simpler, has 2,653,002 possible calculations (with an open board). The computational power to solve Tic-Tac-Toe in roughly 2.5 seconds (at 1 million calculations per second) was achieved in 1990.
Moving backwards, in 1955, a computer had the power to solve Tic-Tac-Toe in about 1 month (1 calculation per second). Again, this is based on what $1000 would get you if you could package it into a computer (a $1000 desktop obviously did not exist in 1955), and that computer would have needed to be devoted to solving Tic-Tac-Toe... which was just not the case in 1955. Computation was expensive and would not have been used for this purpose. I don't believe there is any date at which Tic-Tac-Toe was deemed "solved" by a computer, but I'm sure it lags well behind the available computational power.
Also, take into account that $1000 in 45 years will be worth about a quarter of what it is now, so much more money can go into projects such as this while computational power continues to get cheaper.
It is actually possible for neither player to have a winning strategy in infinite games with no well-ordering; however, chess is well-ordered. In fact, because of the 50-move rule, there is an upper limit to the number of moves a game can have, and thus there are only finitely many possible games of chess (which can be enumerated to solve the game exactly... theoretically, at least :)
Your end of the argument is supported by the way modern chess programs work now. They work that way because it's way too resource-intense to code a chess program to operate deterministically. They won't necessarily always work that way. It's possible that chess will someday be solved, and if that happens, it will likely be solved by a computer.
I think you are dead on. Machines like Deep Blue and Deep Thought are programmed with a number of predefined games, and clever algorithms to parse the trees into the ends of those games. This is, of course, a dramatic oversimplification. There is always a chance to "beat" the computer along the course of a game. By this I mean making a move that forces the computer to make a move that is less than optimal (whatever that is). If the computer cannot find the best path before the time limit for the move, it might very well make a mistake by choosing one of the less-desirable paths.
There is another class of chess programs that uses real machine learning, or genetic programming / evolutionary algorithms. Some programs have been evolved and use neural networks, et al, to make decisions. In this type of case, I would imagine that the computer might make "mistakes", but still end up in a victory.
There is a fascinating book on this type of GP called Blondie24 that you might read. It is about checkers, but it could apply to chess.
For the record, there are computers that can win or tie at checkers. I'm not sure if the same could be done for chess. The number of moves is a lot higher. Also, things change because pieces can move in any direction, not just forwards and backwards. I think, although I'm not sure, that chess is deterministic, but that there are just way too many possible moves for a computer to currently determine all the moves in a reasonable amount of time.
From game theory, which is what this question is about, the answer is yes: Chess can be played perfectly. The game space is known/predictable, and yes, if you had your grandchild's quantum computers you could probably eliminate all heuristics.
You could write a perfect tic-tac-toe machine now-a-days in any scripting language and it'd play perfectly in real-time.
Othello is another game that current computers can easily play perfectly, although the machine's memory and CPU will need a bit of help.
Chess is theoretically possible but not practically possible (in 2008)
i-Go is tricky; its space of possibilities exceeds the number of atoms in the universe, so it might take us some time to make a perfect i-Go machine.
Chess is an example of a matrix game, which by definition has an optimal outcome (think Nash equilibrium). If players 1 and 2 each make optimal moves, a certain outcome will ALWAYS be reached (whether it is a win, tie, or loss is still unknown).
As a chess programmer from the 1970s, I definitely have an opinion on this. What I wrote up about 10 years ago is still basically true today:
"Unfinished Work and Challenges to Chess Programmers"
Back then, I thought we could solve Chess conventionally, if done properly.
Checkers was solved recently (Yay, University of Alberta, Canada!!!) but that was effectively done Brute Force. To do chess conventionally, you'll have to be smarter.
Unless, of course, Quantum Computing becomes a reality. If so, chess will be solved as easily as Tic-Tac-Toe.
In the early 1970's in Scientific American, there was a short parody that caught my attention. It was an announcement that the game of chess was solved by a Russian chess computer. It had determined that there is one perfect move for white that would ensure a win with perfect play by both sides, and that move is: 1. a4!
Lots of answers here make the important game-theoretic points:
Chess is a finite, deterministic game with complete information about the game state
You can solve a finite game and identify a perfect strategy
Chess is however big enough that you will not be able to solve it completely with a brute force method
However these observations miss an important practical point: it is not necessary to solve the complete game perfectly in order to create an unbeatable machine.
It is in fact quite likely that you could create an unbeatable chess machine (i.e. will never lose and will always force a win or draw) without searching even a tiny fraction of the possible state space.
The following techniques for example all massively reduce the search space required:
Tree pruning techniques like Alpha/Beta or MTD-f already massively reduce the search space (see the sketch after this list)
Provably winning positions. Many endings fall into this category: you don't need to search KR vs K, for example; it's a proven win. With some work it is possible to prove many more guaranteed wins.
Almost certain wins - for "good enough" play without any foolish mistakes (say about Elo 2200+?) many chess positions are almost certain wins, for example a decent material advantage (e.g. an extra Knight) with no compensating positional advantage. If your program can force such a position and has good enough heuristics for detecting positional advantage, it can safely assume it will win or at least draw with 100% probability.
Tree search heuristics - with good enough pattern recognition, you can quickly focus on the relevant subset of "interesting" moves. This is how human grandmasters play, so it's clearly not a bad strategy... and our pattern recognition algorithms are constantly getting better.
Risk assessment - a better conception of the "riskiness" of a position will enable much more effective searching by focusing computing power on situations where the outcome is more uncertain (this is a natural extension of Quiescence Search)
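As an illustration of the pruning bullet above, here is a generic alpha-beta minimax (a sketch: `children` and `evaluate` are assumed stand-ins for a real move generator and evaluation function):

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning over an abstract game tree."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        best = -math.inf
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:   # opponent would never allow this line: prune
                break
        return best
    else:
        best = math.inf
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, children, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```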
With the right combination of the above techniques, I'd be comfortable asserting that it is possible to create an "unbeatable" chess playing machine. We're probably not too far off with current technology.
Note that it's almost certainly harder to prove that this machine cannot be beaten. It would probably be something like the Riemann hypothesis - we would be pretty sure that it plays perfectly and would have empirical results showing that it never lost (including a few billion straight draws against itself), but we wouldn't actually have the ability to prove it.
Additional note regarding "perfection":
I'm careful not to describe the machine as "perfect" in the game-theoretic sense because that implies unusually strong additional conditions, such as:
Always winning in every situation where it is possible to force a win, no matter how complex the winning combination may be. There will be situations on the boundary between win/draw where this is extremely hard to calculate perfectly.
Exploiting all available information about potential imperfection in your opponent's play, for example inferring that your opponent might be too greedy and deliberately playing a slightly weaker line than usual on the grounds that it has a greater potential to tempt your opponent into making a mistake. Against imperfect opponents it can in fact be optimal to make a losing move if you estimate that your opponent probably won't spot the forced win and it gives you a higher probability of winning yourself.
Perfection (particularly given imperfect and unknown opponents) is a much harder problem than simply being unbeatable.
It's perfectly solvable.
There are 10^50-odd positions. Each position, by my reckoning, requires a minimum of 40 bytes to store (64 squares at 5 bits each: 2 affiliation bits and 3 piece bits), or 64 bytes rounded to a byte per square. Once they are collated, the positions that are checkmates can be identified, and positions can be compared to form a relationship showing which positions lead to other positions in a large outcome tree.
Then, the program needs only to find the shallowest roots that lead to a checkmate for one side only, if such a thing exists. In any case, Chess was fairly simply "solved" at the end of the first paragraph.
"if you search the entire space of all combinations of player1/2 moves, the single move that the computer decides upon at each step is based on a heuristic."
There are two competing ideas there. One is that you search every possible move, and the other is that you decide based on a heuristic. A heuristic is a system for making a good guess. If you're searching through every possible move, then you're no longer guessing.
"Is there a perfect algorithm for chess?"
Yes, there is. Maybe it's for White to always win. Maybe it's for Black to always win. Maybe it's for both to always force at least a tie. We don't know which, and we'll never know, but it certainly exists.
See also
God's algorithm
I found this article by John MacQuarrie that references work by the "father of game theory" Ernst Friedrich Ferdinand Zermelo. It draws the following conclusion:
In chess either white can force a win, or black can force a win, or both sides can force at least a draw.
The logic seems sound to me.
There are two mistakes in your thought experiment:
If your Turing machine is not "limited" (in memory, speed, ...) you do not need to use heuristics; you can evaluate the final states (win, loss, draw). To find the perfect game you would then just need to use the Minimax algorithm (see http://en.wikipedia.org/wiki/Minimax) to compute the optimal moves for each player, which would lead to one or more optimal games.
There is also no limit on the complexity of the heuristic used. If you can calculate a perfect game, there is also a way to compute a perfect heuristic from it. If needed, it's just a function that maps chess positions in the way "If I'm in situation S, my best move is M".
As others pointed out already, this will end in 3 possible results: white can force a win, black can force a win, one of them can force a draw.
The result of a perfect checkers game has already been "computed". If humanity does not destroy itself first, there will also be such a calculation for chess some day, when computers have evolved enough to have enough memory and speed. Or when we have some quantum computers... Or when someone (a researcher, chess expert, genius) finds an algorithm that significantly reduces the complexity of the game. To give an example: What is the sum of all numbers between 1 and 1000? You can either calculate 1+2+3+4+5...+999+1000, or you can simply calculate N*(N+1)/2 with N = 1000; result = 500500. Now imagine you don't know about that formula, you don't know about mathematical induction, you don't even know how to multiply or add numbers... So, it may be possible that there is a currently unknown algorithm that ultimately reduces the complexity of this game, and it would then take just 5 minutes to calculate the best move with a current computer. Maybe it would even be possible to estimate it as a human with pen & paper, or even in your mind, given some more time.
So, the quick answer is: If humanity survives long enough, it's just a matter of time!
I'm only 99.9% convinced by the claim that the size of the state space makes it impossible to hope for a solution.
Sure, 10^50 is an impossibly large number. Let's call the size of the state space n.
What's the bound on the number of moves in the longest possible game? Since all games end in a finite number of moves there exists such a bound, call it m.
Starting from the initial state, can't you enumerate all n states in O(m) space? Sure, it takes O(n) time, but the arguments from the size of the universe don't directly address that. O(m) space might not even be very much. For O(m) space couldn't you also track, during this traversal, whether the continuation of any state along the path you are traversing leads to EitherMayWin, EitherMayForceDraw, WhiteMayWin, WhiteMayWinOrForceDraw, BlackMayWin, or BlackMayWinOrForceDraw? (There's a lattice depending on whose turn it is; annotate each state in the history of your traversal with the lattice meet.)
Unless I'm missing something, that's an O(n) time / O(m) space algorithm for determining which of the possible categories chess falls into. Wikipedia cites an estimate for the age of the universe at approximately 10^60 Planck times. Without getting into a cosmology argument, let's guess that there's about that much time left before the heat/cold/whatever death of the universe. That leaves us needing to evaluate one state every 10^10 Planck times, or every 10^-34 seconds. That's an impossibly short time (about 16 orders of magnitude shorter than the shortest times ever observed). Let's optimistically say that with a super-duper-good implementation running on top-of-the-line present-or-foreseen non-quantum, P-is-a-proper-subset-of-NP technology, we could hope to evaluate (take a single step forward, categorize the resulting state as an intermediate state or one of the three end states) states at a rate of 100 MHz (once every 10^-8 seconds). Since this algorithm is very parallelizable, this leaves us needing 10^26 such computers, or about one for every atom in my body, together with the ability to collect their results.
I suppose there's always some sliver of hope for a brute-force solution. We might get lucky and, in exploring only one of white's possible opening moves, both choose one with much-lower-than-average fanout and one in which white always wins or wins-or-draws.
We could also hope to shrink the definition of chess somewhat and persuade everyone that it's still morally the same game. Do we really need to require positions to repeat 3 times before a draw? Do we really need to make the running-away party demonstrate the ability to escape for 50 moves? Does anyone even understand what the heck is up with the en passant rule? ;) More seriously, do we really need to force a player to move (as opposed to either drawing or losing) when his or her only move to escape check or a stalemate is an en passant capture? Could we limit the choice of pieces to which a pawn may be promoted if the desired non-queen promotion does not lead to an immediate check or checkmate?
I'm also uncertain about how much allowing each computer hash-based access to a large database of late-game states and their possible outcomes (which might be relatively feasible on existing hardware and with existing endgame databases) could help in pruning the search earlier. Obviously you can't memoize the entire function without O(n) storage, but you could pick a large integer and memoize that many endgames, enumerating backwards from each possible (or even not easily provably impossible, I suppose) end state.
I know this is a bit of a bump, but I have to put my 5 cents' worth in here. It is possible for a computer, or a person for that matter, to end every single chess game that he/she/it participates in with either a win or a draw.
To achieve this, however, you must know precisely every possible move and reaction and so forth, all the way through to each and every single possible game outcome. To visualize this, or to make an easy way of analysing this information, think of it as a mind map that branches out constantly.
The center node would be the start of the game. Each branch out of each node would symbolize a move, each one different from its sibling moves. Presenting it in this manner would take many resources, especially if you were doing this on paper. On a computer, this would take possibly hundreds of terabytes of data, as you would have very many repetitive moves, unless you made the branches loop back.
To memorize such data, however, would be implausible, if not impossible. To make a computer recognize the most optimal move to take out of the (at most) 8 instantly possible moves would be possible, but not plausible... as that computer would need to be able to process all the branches past that move, all the way to a conclusion, count all conclusions that result in a win or a draw, then act on that number of winning conclusions against losing conclusions, and that would require RAM capable of processing data in the terabytes, or more! And with today's technology, a computer like that would require more than the bank balance of the 5 richest men and/or women in the world!
So after all that consideration, it could be done, but no one person could do it. Such a task would require 30 of the brightest minds alive today, not only in chess, but in science and computer technology, and such a task could only be completed on a (let's put it entirely into basic perspective)... extremely, ultimately, hyper super-duper computer... which couldn't possibly exist for at least a century. It will be done! Just not in this lifetime.
Mathematically, chess is solved in principle by the Minimax algorithm, which goes back to the 1920s (Borel and von Neumann). Thus, a Turing machine can indeed play perfect chess.
However, the computational complexity of chess makes it practically infeasible. Current engines use several improvements and heuristics. Top engines today have surpassed the best humans in terms of playing strength, but because of the heuristics they are using, they might not play perfectly when given infinite time (e.g., hash collisions could lead to incorrect results).
The closest that we currently have in terms of perfect play are endgame tablebases. The typical technique to generate them is called retrograde analysis. Currently, all positions with up to six pieces have been solved.
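A toy sketch of the retrograde idea for an abstract two-player game; `states`, `successors`, and `is_terminal_loss` are assumed stand-ins (real tablebase generators iterate over predecessors of already-solved positions rather than rescanning everything, but the fixpoint is the same):

```python
def retrograde(states, successors, is_terminal_loss):
    """Label each state WIN or LOSS for the side to move; unlabelled = draw.

    is_terminal_loss(s): side to move has lost (e.g. is checkmated).
    Terminal states that are not losses (e.g. stalemate) stay unlabelled.
    """
    value = {s: "LOSS" for s in states if is_terminal_loss(s)}
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in value:
                continue
            succ = [value.get(t) for t in successors(s)]
            if "LOSS" in succ:                       # can hand opponent a lost position
                value[s] = "WIN"
                changed = True
            elif succ and all(v == "WIN" for v in succ):
                value[s] = "LOSS"                    # every move loses
                changed = True
    return value
```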
It just might be solvable, but something bothers me:
Even if the entire tree could be traversed, there is still no way to predict the opponent's next move. We must always base our next move on the state of the opponent, and make the "best" move available. Then, based on the next state we do it again.
So, our optimal move might be optimal only if the opponent moves in a certain way. For some moves of the opponent, our last move might have been sub-optimal.
I just fail to see how there could be a "perfect" move in every step.
For that to be the case, there must for every state [in the current game] be a path in the tree which leads to victory, regardless of the opponent's next move (as in tic-tac-toe), and I have a hard time figuring that.
Yes. In math, chess is classified as a determined game, which means it has a perfect strategy for one of the players; this has been proven even for an infinite chess board. So one day, probably, a fast and effective AI will find the perfect strategy, and the game is gone.
More on this in this video: https://www.youtube.com/watch?v=PN-I6u-AxMg
There is also quantum chess, for which there is no mathematical proof that it is a determined game: http://store.steampowered.com/app/453870/Quantum_Chess/
And here is a detailed video about quantum chess: https://chess24.com/en/read/news/quantum-chess
Of course
There are only 10 to the power of fifty possible combinations of pieces on the board. With that in mind, to play through every combination, you would need to make under 10 to the power of fifty moves (to include repetitions, multiply that number by 3). So there are fewer than ten to the power of one hundred moves in chess. Just pick those that lead to checkmate and you're good to go.
64-bit math (the chessboard) and bitwise operators (generating the next possible moves) are all you need. It's that simple: brute force will usually find the best line. Of course, there is no universal algorithm for all positions. In real life the calculation is also limited in time; a timeout will stop it. A good chess program means heavy code (passed pawns, doubled pawns, etc.); small code can't be very strong. Opening and endgame databases just save processing time, as a kind of preprocessed data. The device (the OS, threading possibilities, environment, hardware) defines the requirements. The programming language matters too. Anyway, the development process is interesting.
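For instance, a knight-move generator in exactly that bitboard style (a sketch; squares are bits 0..63 with a1 = bit 0, and the file masks stop moves from wrapping around the board edge):

```python
# Bitboard knight attacks: the board is a 64-bit integer, one bit per square.
FILE_A = 0x0101010101010101
FILE_B = FILE_A << 1
FILE_G = FILE_A << 6
FILE_H = FILE_A << 7
MASK64 = (1 << 64) - 1

def knight_attacks(knights):
    """Bitboard of all squares attacked by the knights in `knights`."""
    a  = ((knights << 17) & ~FILE_A) | ((knights << 15) & ~FILE_H)
    a |= ((knights << 10) & ~(FILE_A | FILE_B)) | ((knights << 6) & ~(FILE_G | FILE_H))
    a |= ((knights >> 17) & ~FILE_H) | ((knights >> 15) & ~FILE_A)
    a |= ((knights >> 10) & ~(FILE_G | FILE_H)) | ((knights >> 6) & ~(FILE_A | FILE_B))
    return a & MASK64

print(hex(knight_attacks(1 << 0)))  # 0x20400: a knight on a1 attacks b3 and c2
```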
