Predicting remaining runtime for minimax algorithm with alpha-beta-pruning - algorithm

Problem
I am trying to solve a perfect-information zero-sum game (like tic-tac-toe or chess) using a negamax algorithm with alpha-beta pruning. The goal is to prove whether one player can force a win or a draw. This means there is no depth limit; the algorithm always evaluates the game tree until a win or draw is reached.
I have spent multiple weeks optimizing my code for my specific game and got the runtime down to, I would estimate, several days. But therein lies the problem:
Because of the alpha-beta pruning, the runtime of the minimax algorithm is highly unpredictable. I can't tell whether it will be done in the next 5 minutes or run for 5 more weeks until I have actually simulated it. I would love to be able to predict the remaining runtime and not be off by several orders of magnitude.
What I tried so far
I am recording the results of all sub- and sub-sub-branches, down to five levels of sub-branches, together with the time it took my machine to evaluate them. Then I simply assume that positions on the same level take the same time to evaluate and call it a day. These predictions are sometimes off by a factor of 10 or more.
I also looked at the recorded data to see whether my assumption holds. The time needed to evaluate a branch five levels down varied from 0.01 s to as much as 180 s. That's why my predictions were off. Who would have guessed.
My Question
As I imagine this applies to all implementations of minimax:
Are there more sophisticated algorithms out there to accurately predict the remaining runtime of a minimax algorithm with alpha-beta pruning? Or is minimax just unpredictable by design?
If so, how do they work?

I have spent a lot of time with Negamax algorithms, which I highly suggest you switch over to. It will give the same results as Minimax, but it is much easier to debug and optimize further since it is just half the code.
I have no clue about the game you are trying to solve, but if it is even the slightest bit complicated, I assume it won't be possible without a supercomputer. To answer your questions, though:
Minimax with alpha-beta pruning relies heavily on the order in which you try your moves (to use board-game terms). You want to try the best moves first; in chess this is done by ordering the generated moves, e.g. putting capture moves higher up than castling.
You can also optimize the algorithm much, much more with different techniques depending on what you are trying to solve, for example transposition tables if the same position can occur in another branch.
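To make the transposition-table idea concrete, here is a minimal sketch of how one can be bolted onto a depth-limited negamax with alpha-beta; hash_position, generate_moves, play, is_terminal and evaluate_terminal are placeholders for whatever your own engine provides, and the whole thing is only an illustration, not a drop-in implementation:

INF = float("inf")
EXACT, LOWER, UPPER = 0, 1, 2
tt = {}                                            # position hash -> (depth, score, flag)

def negamax(pos, depth, alpha, beta):
    key = hash_position(pos)
    entry = tt.get(key)
    if entry and entry[0] >= depth:                # stored result is at least as deep as needed
        _, score, flag = entry
        if flag == EXACT:
            return score
        if flag == LOWER:
            alpha = max(alpha, score)
        elif flag == UPPER:
            beta = min(beta, score)
        if alpha >= beta:
            return score
    if depth == 0 or is_terminal(pos):
        return evaluate_terminal(pos)
    best, alpha_orig = -INF, alpha
    for move in generate_moves(pos):               # move ordering matters a lot here
        best = max(best, -negamax(play(pos, move), depth - 1, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:                          # beta cutoff from alpha-beta pruning
            break
    flag = UPPER if best <= alpha_orig else LOWER if best >= beta else EXACT
    tt[key] = (depth, best, flag)                  # reuse this result if the position reappears in another branch
    return best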
We need to know more about the game you are trying to solve to know what algorithm can work best.
Final words: if you want to get an idea of how long it will take to solve and how far you have gotten after some time, I suggest you use iterative deepening. This will also speed up your search, since you can try the best guesses from the previous iterations first and hence get faster beta cutoffs in the next iteration:
import itertools
for depth in itertools.count(1):            # iterative deepening: depth 1, 2, 3, ...
    score = minimax(alpha, beta, depth)     # full-width search to the current depth
    print(depth, score, elapsed_time())     # log how long each iteration took
Now you can print the elapsed time for each depth and see how far the search gets in a certain period of time. This is also a good way to measure whether your optimizations are giving any results. Since the minimax tree grows exponentially with each depth, you can get an idea of how much time the next depth will take.
So if you know roughly how many moves it takes to reach a win/draw/loss, you can estimate fairly easily with this technique whether solving the game will be feasible.
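As a rough illustration of that estimate (all numbers below are invented), the ratio between the times of successive completed depths gives you an effective branching factor that you can extrapolate with:

# Invented timings for completed iterative-deepening passes, in seconds.
times = {4: 0.8, 5: 5.1, 6: 31.0}
growth = times[6] / times[5]            # effective branching factor, here about 6
estimate = times[6] * growth            # ballpark for depth 7: roughly 190 s, not a guarantee
print(f"depth 7 should take roughly {estimate:.0f} s")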
Hope I make myself clear; English is not my native language :) Feel free to ask in the comments if something is not clear.

Related

making algorithm for getting programmer efficiency

I have some data about programmer actions inside an IDE. From this data I am trying to make a good algorithm to calculate a programmer's efficiency.
If we consider
efficiency = useful energy out / energy in
I made this rough equation:
energy in = active time (run events x code editing time)
Basically, it's the time where stuff is actually being done by the programmer, multiplied by run events (debugging, builds, etc.) times the time where the programmer is actually editing code.
useful energy out = energy in - (#unsuccessful builds + aborted test runs + debugger use time)
Useful energy out is basically energy in minus things that I consider to be inefficient.
Can anyone see how to improve this, particularly from a mathematical point of view? Maths isn't my strong point, and I am not sure whether I should use some sort of weighting for the equations and how to do that correctly. Also, I'm thinking about how to make sure that what is subtracted from energy in, in the useful energy out equation, can't push the result below 0. Can anyone give a hand with these questions?
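Leaving aside whether the metric itself is meaningful (see the answer below), the two mechanical points you ask about, weighting and not going below zero, could look like this sketch; every weight here is an arbitrary placeholder you would still have to justify and tune:

def useful_energy_out(energy_in, failed_builds, aborted_test_runs, debugger_time,
                      w_build=1.0, w_abort=1.0, w_debug=0.1):
    # Weighted penalty for the things considered inefficient; the weights are made up.
    penalty = w_build * failed_builds + w_abort * aborted_test_runs + w_debug * debugger_time
    return max(0.0, energy_in - penalty)   # clamp so the result can never drop below 0

energy_in = 120.0                          # e.g. minutes of active time
efficiency = useful_energy_out(energy_in, failed_builds=3, aborted_test_runs=1, debugger_time=45) / energy_in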
Your "algorithm" is completely arbitrary, making judgement of value over things that are inocuous to whatever you called "efficient/inneficient", and will endup with a completely incoherent final value after being calculated. Compilation time? So the first compilation of a C++ plugin that takes 30+ minutes is good? Debugging time is both efficient and inefficient in your proposal.
A programmer that codes for 10 minutes and make 6 consecutive builds with close to no changes will have the same output as the guy that code for 60 minutes.
I suggest you look firts to what is a good use of a programmers time, how other programs contabilize programmers efficiency. Etc.
Just on a side note, to create a model of efficiency of work of a highly technical and creative field, you must understand quite well math, statistics and project management. Thats why good scrum masters are so sought after.
Anyway, what you propose is not an algorithm, but a scoring system, usually algorithms do make use of scoring systems to help their internal rules work out the best solution based on the scoring. The scoring is just a value, while the algorithm is a process to an end.

Elite\Elitist model in a Genetic Algorithm

When is the right time to use the Elite\Elitist mode in a Genetic Algorithm? I have no idea when to use it. What kinds of problems can be solved using it?
All I know is that in an elitist model you choose the elite (the solutions with the highest fitness), they get a reserved slot in the next generation, and they are the ones up for crossover.
You pretty much always use some form of elitism. What varies is the percentage (p) of best performers that you allow to survive to the next generation. So no elitism is basically saying p=0.
The higher p, the more your algorithm will have a tendency to find local peaks of fitness. i.e. once it finds a chromosome with a good fitness, it'll tend to focus more on optimizing it than trying to find new completely different solutions. On the contrary, if it's smaller, your GA will look for possible solutions all over the place and won't zero in as fast once it finds something close to the optimum solution.
So setting p correctly is going to have a direct impact on your algorithm's performance. But it depends on what you're after and your problem space. Play around with it a bit to adjust properly. I typically use 20% for the problems I work with, to give enough room for innovation. It works ok for me.
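To make the role of p concrete, here is a minimal sketch of one generation with elitism; fitness, crossover and mutate stand in for whatever operators your GA already uses, and the parent-selection rule is deliberately simplistic:

import random

def next_generation(population, fitness, crossover, mutate, p=0.2):
    ranked = sorted(population, key=fitness, reverse=True)
    n_elite = int(p * len(population))     # p = 0 means no elitism at all
    elites = ranked[:n_elite]              # the elite survive unchanged into the next generation
    children = []
    while len(children) < len(population) - n_elite:
        # crude selection: pick two parents at random from the better half
        mom, dad = random.sample(ranked[:max(2, len(ranked) // 2)], 2)
        children.append(mutate(crossover(mom, dad)))
    return elites + children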

Best algorithm for optimizing the decisions in a simulation

I'm looking for the best algorithm to optimise the decisions made in a simulation in order to find a fast result in a reasonable amount of time. The simulation does a number of "ticks" and occasionally needs to make a decision. Eventually a goal state is reached. (It would be possible to never reach a goal state if you make very bad decisions.)
There are many, many goal states. I want to find the goal state reached in the least number of ticks (a tick equates roughly to a second in real life). I basically want to decide which decisions to make to get to the goal in as few seconds as possible.
Some points about the problem domain:
Straight off the bat I can generate a series of choices that will lead to a solution. It won't be optimal.
I have a reasonable heuristic function to determine what would be a good decision
I have a reasonable function to determine the minimum possible time cost from a node to a goal.
Algorithms:
I need to process this problem for about 10 seconds and then give the best answer I can.
I believe A* would find me the optimal solution. The problem is that the decision tree will be so large that I won't be able to calculate it quickly enough.
IDA* would give me a good first few choices in 10 seconds but I need a path all the way to a goal.
At the moment I am thinking that I will start off with the known non-optimal path to a goal and then perhaps use Simulated Annealing to try to improve it over 10 seconds.
What would be a good algorithm to research to try to solve this sort of problem?
Have a look at limited discrepancy search, repeated with increasingly loose limits on the maximum discrepancy, or at beam search.
If you have a good heuristic you should be able to use it to compare individual choices (for the limited discrepancy search) and to compare partial solutions (for the beam search).
See if you can place an upper bound on how good any extension of a partial solution is. Then you can prune out partial solutions that can't possibly be extended to beat the result from the heuristic, or the best result found so far in a series of iterative searches with increasing depth.
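A minimal beam-search sketch over decision sequences, assuming hypothetical choices, apply_choice, heuristic_score and is_goal functions from your simulation; the width parameter is the knob that trades solution quality against time:

def beam_search(start_state, choices, apply_choice, heuristic_score, is_goal,
                width=50, max_steps=10_000):
    beam = [(start_state, [])]                     # (state, decisions taken so far)
    for _ in range(max_steps):
        candidates = []
        for state, path in beam:
            if is_goal(state):
                return path                        # first goal reached with this beam width
            for c in choices(state):
                candidates.append((apply_choice(state, c), path + [c]))
        if not candidates:
            return None                            # dead end everywhere
        candidates.sort(key=lambda sp: heuristic_score(sp[0]), reverse=True)
        beam = candidates[:width]                  # keep only the most promising partial solutions
    return None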
Let's get a few facts out.
1) The only way to know for sure which decision is the best is to test every possible decision and evaluate the outcome based on some criteria.
2) We are highly unlikely to have the time to decide to go through every possible decision, so we have to limit how far in the future we will evaluate the decision.
3) We are highly unlikely to make the best move ~ever~. Not just often, but ever. Unless you have only a couple of decisions, chances are every time you make a decision, there was a better one you didn't get to.
4) We can use how our previous decisions worked out to our advantage.
Put all this together... Let's say that when we have a decision, we evaluate what happens 30 ticks into the future; 30 ticks later we can check whether what actually happened matches what we simulated 30 ticks ago. If it did, we know that decision leads to predictable outcomes and we should use that decision less. If it didn't, or if it turned out better than we hoped, we should use that decision more.
Ideally, you would use your logic in a ... simulation of your simulation ... for purposes of evaluating it. Then when you get to the 'real' simulation, you have a better chance at picking your better decisions earlier. Of course, give a higher weight to the results of your actual simulation results as opposed to your simulated simulation results.

Chess Optimizations

OK, so I have been working on my chess program for a while and I am beginning to hit a wall. I have done all of the standard optimizations (negascout, iterative deepening, killer moves, history heuristic, quiescence search, pawn position evaluation, some search extensions) and I'm all out of ideas!
I am looking to make it multi-threaded soon, and that should give me a good boost in performance, but aside from that, are there any other nifty tricks you have come across? I have considered switching to MTD(f), but I have heard it is a hassle and isn't really worth it.
What I would be most interested in is some kind of learning algorithm, but I don't know if anyone has done that effectively with a chess program yet.
Also, would switching to bitboards be significant? I currently am using 0x88.
Over the last year of development of my chess engine (www.chessbin.com), much of the time has been spent optimizing my code to allow for better and faster move searching. Over that time I have learned a few tricks that I would like to share with you.
Measuring Performance
Essentially you can improve your performance in two ways:
Evaluate your nodes faster
Search fewer nodes to come up with the same answer
Your first problem in code optimization will be measurement. How do you know you have really made a difference? In order to help you with this problem you will need to make sure you can record some statistics during your move search. The ones I capture in my chess engine are:
Time it took for the search to complete.
Number of nodes searched
This will allow you to benchmark and test your changes. The best way to approach testing is to create several save games from the opening position, middle game and the end game. Record the time and number of nodes searched for black and white.
After making any changes I usually perform tests against the above-mentioned save games to see if I have made improvements in the two metrics above: number of nodes searched and speed.
To complicate things further, after making a code change you might run your engine 3 times and get 3 different results each time. Let's say that your chess engine found the best move in 9, 10 and 11 seconds. That is a spread of about 20%. So did you improve your engine by 10%-20%, or was it just varying load on your PC? How do you know? To fight this I have added methods that allow my engine to play against itself, making the moves for both white and black. This way you can test not just the time variance over one move, but over a series of as many as 50 moves in the course of a game. If last time the game took 10 minutes and now it takes 9, you probably improved your engine by 10%. Running the test again should confirm this.
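In code, that bookkeeping amounts to little more than a node counter and a timer around the search; search_root below is only a stand-in for your engine's entry point, and the search itself is assumed to increment the counter once per node:

import time

nodes = 0                                   # incremented once per visited node inside the search

def benchmark(position, depth, search_root):
    global nodes
    nodes = 0
    start = time.perf_counter()
    best_move = search_root(position, depth)
    elapsed = time.perf_counter() - start
    print(f"depth {depth}: {nodes} nodes in {elapsed:.2f} s ({nodes / max(elapsed, 1e-9):.0f} nps)")
    return best_move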
Finding Performance Gains
Now that we know how to measure performance gains, let's discuss how to identify potential performance gains.
If you are in a .NET environment then the .NET profiler will be your friend. If you have a Visual Studio for Developers edition it comes built in for free, however there are other third party tools you can use. This tool has saved me hours of work as it will tell you where your engine is spending most of its time and allow you to concentrate on your trouble spots. If you do not have a profiler tool you may have to somehow log the time stamps as your engine goes through different steps. I do not suggest this. In this case a good profiler is worth its weight in gold. Red Gate ANTS Profiler is expensive but the best one I have ever tried. If you can’t afford one, at least use it for their 14 day trial.
Your profiler will surely identify things for you; however, here are some small lessons I have learned working with C#:
Make everything private.
Whatever you can't make private, make sealed.
Make as many methods static as possible.
Don't make your methods chatty; one long method is better than 4 smaller ones.
A chess board stored as an array [8][8] is slower than an array of [64].
Replace int with byte where possible.
Return from your methods as early as possible.
Stacks are better than lists.
Arrays are better than stacks and lists.
If you can, define the size of a list before you populate it.
Casting, boxing and un-boxing are evil.
Further Performance Gains:
I find move generation and ordering extremely important. However, here is the problem as I see it: if you evaluate the score of each move before you sort and run alpha-beta, you will be able to optimize your move ordering such that you get extremely quick alpha-beta cutoffs, because you will mostly be able to try the best move first.
However, the time you spent evaluating each move will be wasted. For example, you might evaluate the score of 20 moves, sort them, try the first 2 and receive a cut-off on move number 2. In theory the time spent on the other 18 moves was wasted.
On the other hand, if you do a lighter and much faster evaluation, say just captures, your sort will not be as good and you will have to search more nodes (up to 60% more), but you will not do a heavy evaluation on every possible move. As a whole this approach is usually faster.
Finding the perfect balance between having enough information for a good sort and not doing extra work on moves you will not use will allow you to find huge gains in your search algorithm. Furthermore, if you choose the poorer sort approach, you will want to first do a shallower search, say to ply 3, and sort your moves before you go into the deeper search (this is often called iterative deepening). This will significantly improve your sort and let you search far fewer moves.
Answering an old question.
Assuming you already have a working transposition table.
Late Move Reduction. That gave my program about 100 Elo points and it is very simple to implement.
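A sketch of what Late Move Reduction looks like inside the move loop of an alpha-beta search; the thresholds (how many moves to search at full depth, minimum depth, what counts as a quiet move) are only illustrative and vary between engines, and is_quiet, gives_check, play and search are placeholders for your own routines:

for i, move in enumerate(ordered_moves):
    child = play(pos, move)
    if i >= 4 and depth >= 3 and is_quiet(move) and not gives_check(move):
        score = -search(child, depth - 2, -alpha - 1, -alpha)   # reduced-depth, null-window probe
        if score > alpha:                                       # it looked better than expected,
            score = -search(child, depth - 1, -beta, -alpha)    # so re-search at full depth
    else:
        score = -search(child, depth - 1, -beta, -alpha)
    alpha = max(alpha, score)
    if alpha >= beta:                                           # beta cutoff
        break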
In my experience, unless your implementation is very inefficient, then the actual board representation (0x88, bitboard, etc.) is not that important.
Although you can cripple your chess engine with bad performance, a lightning-fast move generator in itself is not going to make a program good.
The search tricks used and the evaluation function are the overwhelming factors determining overall strength.
And the most important parts, by far, of the evaluation are Material, Passed pawns, King Safety and Pawn Structure.
The most important parts of the search are: Null Move Pruning, Check Extension and Late Move reduction.
Your program can come a long, long way, on these simple techniques alone!
Good move ordering!
An old question, but the same techniques apply now as they did 5 years ago. Aren't we all writing our own chess engines? I have my own called "Norwegian Gambit" that I hope will eventually compete with other Java engines on the CCRL. Like many others I use Stockfish for ideas, since it is so nicely written and open. Their testing framework Fishtest and its community also give a ton of good advice. It is worth comparing your evaluation scores with what Stockfish gets, since how to evaluate is probably still the biggest unknown in chess programming, and Stockfish has moved away from many traditional evaluation terms that have become urban legends (like the double-bishop bonus). The biggest difference, however, came after I implemented the same techniques as you mention (Negascout, TT, LMR): once I started using Stockfish for comparison, I noticed that for the same depth Stockfish searched far fewer moves than I did (because of its move ordering).
Move ordering essentials
The one thing that is easily forgotten is good move-ordering. For the Alpha Beta cutoff to be efficient it is essential to get the best moves first. On the other hand it can also be time-consuming so it is essential to do it only as necessary.
Transposition table
Sort promotions and good captures by their gain
Killer moves
Moves that result in check on opponent
History heuristics
Silent moves - sort by PSQT value
The sorting should be done only as needed; usually it is enough to sort the captures, and you run the more expensive sorting of checks and PSQT only if needed.
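As a sketch, that priority order can be expressed as a single sort key; hash_move, killers, history, capture_gain and psqt_gain are placeholders for the tables and helpers your engine already maintains, and the constants only encode the ranking, not tuned values:

def order_moves(moves, hash_move, killers, history, capture_gain, psqt_gain):
    def key(m):
        if m == hash_move:
            return 1_000_000                       # transposition-table move first
        if m.is_capture or m.is_promotion:
            return 100_000 + capture_gain(m)       # promotions and captures sorted by their gain
        if m in killers:
            return 90_000                          # killer moves
        return history.get(m, 0) + psqt_gain(m)    # quiet moves: history score plus PSQT delta
    return sorted(moves, key=key, reverse=True)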
About Java/C# vs C/C++/Assembly
Programming techniques are the same for Java as in the excellent answer by Adam Berent, who used C#. In addition to his list I would mention avoiding object arrays; rather, use many arrays of primitives. But contrary to his suggestion of using bytes, I find that with 64-bit Java there is little to be saved by using byte or int instead of a 64-bit long. I have also gone down the path of rewriting in C/C++/Assembly, and I got no performance gain whatsoever. I used assembly code for bitscan instructions such as LZCNT and POPCNT, but later I found that Java 8 also uses those instead of the methods on the Long object. To my surprise Java is faster; the Java 8 virtual machine seems to do a better job of optimizing than a C compiler can.
I know of one improvement that was talked about in AI courses at university: having a huge database of finishing moves, i.e. a precalculated database for positions with only a small number of pieces left. If you hit a near-endgame position in your search, you stop the search and take the precalculated value, which improves your search results, like extra deepening that you can do for important/critical moves without much computation time spent. I think it also comes with a change of heuristics in the late game state, but I'm not a chess player so I don't know the dynamics of game endings.
Be warned, getting game search right in a threaded environment can be a royal pain (I've tried it). It can be done, but from some literature searching I did a while back, it's extremely hard to get any speed boost at all out of it.
It's quite an old question; I was just searching questions on chess and found this one unanswered. Well, it may not be of any help to you now, but it may prove helpful to other users.
I didn't see null move pruning or transposition tables mentioned. Are you using them? They would give you a big boost...
One thing that gave me a big boost was minimizing conditional branching. A lot of things can be precomputed. Search for such opportunities.
Most modern PCs have multiple cores, so it would be a good idea to make it multithreaded. You don't necessarily need to go MTD(f) for that.
I won't suggest moving your code to bitboards. It's simply too much work, even though bitboards could give a boost on 64-bit machines.
Finally, and most importantly, the chess literature dominates any optimizations we may come up with ourselves, and optimization is a lot of work. Look at open source chess engines, particularly Crafty and Fruit/Toga. Fruit used to be open source initially.
Late answer, but this may help someone:
Given all the optimizations you mentioned, 1450 Elo is very low. My guess is that something is very wrong with your code. Did you:
Write a perft routine and run it through a set of positions? All tests should pass, so you know your move generator is free of bugs. If you don't have this, there's no point in talking about Elo.
Write a mirrorBoard routine and run the evaluation code through a set of positions? The result should be the same for the normal and mirrored positions, otherwise you have a bug in your eval.
Do you have a hash table (aka transposition table)? If not, this is a must. It will help while searching and ordering moves, giving a brutal difference in speed.
How do you implement move ordering? This links back to point 3.
Did you implement the UCI protocol? Is your move-parsing function working properly? I had a bug like this in my engine:
/* Parses a UCI move string (e.g. "e2e4 c7c5 g1f3 ...") and returns a Board object */
Board parseUCIMoves(String moves) {
    // ...
    if (someMove.equals("e1g1") || someMove.equals("e1c1")) {
        // apply proper castling
    }
    // ...
}
Sometimes the engine crashed while playing a match, and I thought it was the GUI's fault, since all perft tests were OK. It took me one week to find the bug, by luck. So, test everything.
For (1) you can search every position to depth 6. I use a file with ~1000 positions. See here https://chessprogramming.wikispaces.com/Perft
For (2) you just need a file with millions of positions (just the FEN string).
Given all the above and a very basic evaluation function (material, piece-square tables, passed pawns, king safety), it should play at around 2000 Elo.
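For reference, the perft routine from point 1 is nothing more than a full-width leaf count to a fixed depth, so its totals can be checked against published tables; generate_moves, make and unmake stand in for whatever your engine provides:

def perft(pos, depth):
    # Counts the leaf nodes of the complete move tree down to the given depth.
    if depth == 0:
        return 1
    total = 0
    for move in generate_moves(pos):
        make(pos, move)
        total += perft(pos, depth - 1)
        unmake(pos, move)
    return total

# From the standard starting position the known counts are 20, 400, 8902 and 197281
# for depths 1 to 4; any mismatch points at a move-generator bug.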
As far as tips go, I know large gains can be found in optimizing your move generation routines before any eval functions. Making that function as tight as possible can give you a 10% or more improvement in nodes/sec.
If you're moving to bitboards, do some digging in the rec.games.chess.computer archives for some of Dr. Robert Hyatt's old posts about Crafty (pretty sure he doesn't post anymore), or grab the latest copy from his FTP and start digging. I'm pretty sure it would be a significant shift for you, though.
Transposition Table
Opening Book
End Game Table Bases
Improved Static Board Evaluation for Leaf Nodes
Bitboards for Raw Speed
Profile and benchmark. Theoretical optimizations are great, but unless you are measuring the performance impact of every change you make, you won't know whether your work is improving or worsening the speed of the final code.
Try to limit the penalty to yourself for trying different algorithms. Make it easy to test various implementations of algorithms against one another. i.e. Make it easy to build a PVS version of your code as well as a NegaScout version.
Find the hot spots. Refactor. Rewrite in assembly if necessary. Repeat.
Assuming "history heuristic" involves some sort of database of past moves, a learning algorithm isn't going to give you much more unless it plays a lot of games against the same player. You can probably achieve more by classifying a player and tweaking the selection of moves from your historic database.
It's been a long time since I've done any programming on any chess program, but at the time, bitboards did give a real improvement. Other than that I can't give you much advice. Do you only evaluate the position of pawns? Some (slight) bonuses for the position or mobility of some key pieces may be in order.
I'm not certain what type of thing you would like it to learn however...

Is there a perfect algorithm for chess? [closed]

I was recently in a discussion with a non-coder person on the possibilities of chess computers. I'm not well versed in theory, but think I know enough.
I argued that there could not exist a deterministic Turing machine that always won or stalemated at chess. I think that, even if you search the entire space of all combinations of player1/2 moves, the single move that the computer decides upon at each step is based on a heuristic. Being based on a heuristic, it does not necessarily beat ALL of the moves that the opponent could do.
My friend thought, to the contrary, that a computer would always win or tie if it never made a "mistake" move (however do you define that?). However, being a programmer who has taken CS, I know that even your good choices - given a wise opponent - can force you to make "mistake" moves in the end. Even if you know everything, your next move is greedy in matching a heuristic.
Most chess computers try to match a possible end game to the game in progress, which is essentially a dynamic programming traceback. Again, the endgame in question is avoidable though.
Edit: Hmm... looks like I ruffled some feathers here. That's good.
Thinking about it again, it seems like there is no theoretical problem with solving a finite game like chess. I would argue that chess is a bit more complicated than checkers in that a win is not necessarily by numerical exhaustion of pieces, but by a mate. My original assertion is probably wrong, but then again I think I've pointed out something that is not yet satisfactorily proven (formally).
I guess my thought experiment was that whenever a branch in the tree is taken, then the algorithm (or memorized paths) must find a path to a mate (without getting mated) for any possible branch on the opponent moves. After the discussion, I will buy that given more memory than we can possibly dream of, all these paths could be found.
"I argued that there could not exist a deterministic Turing machine that always won or stalemated at chess."
You're not quite right. There can be such a machine. The issue is the hugeness of the state space that it would have to search. It's finite, it's just REALLY big.
That's why chess falls back on heuristics -- the state space is too huge (but finite). To even enumerate -- much less search for every perfect move along every course of every possible game -- would be a very, very big search problem.
Openings are scripted to get you to a mid-game that gives you a "strong" position. Not a known outcome. Even end games -- when there are fewer pieces -- are hard to enumerate to determine a best next move. Technically they're finite. But the number of alternatives is huge. Even a 2 rooks + king has something like 22 possible next moves. And if it takes 6 moves to mate, you're looking at 12,855,002,631,049,216 moves.
Do the math on opening moves. While there's only about 20 opening moves, there are something like 30 or so second moves, so by the third move we're looking at 360,000 alternative game states.
But chess games are (technically) finite. Huge, but finite. There's perfect information. There are defined start and end states. There are no coin tosses or dice rolls.
I know next to nothing about what's actually been discovered about chess. But as a mathematician, here's my reasoning:
First we must remember that White gets to go first and maybe this gives him an advantage; maybe it gives Black an advantage.
Now suppose that there is no perfect strategy for Black that lets him always win/stalemate. This implies that no matter what Black does, there is a strategy White can follow to win. Wait a minute - this means there is a perfect strategy for White!
This tells us that at least one of the two players does have a perfect strategy which lets that player always win or draw.
There are only three possibilities, then:
White can always win if he plays perfectly
Black can always win if he plays perfectly
One player can win or draw if he plays perfectly (and if both players play perfectly then they always stalemate)
But which of these is actually correct, we may never know.
The answer to the question is yes: there must be a perfect algorithm for chess, at least for one of the two players.
It has been proven for the game of checkers that a program can always win or tie the game. That is, there is no choice of moves that one player can make which force the other player into losing.
The researchers spent almost two decades going through the 500 billion billion possible checkers positions, which is still an infinitesimally small fraction of the number of chess positions, by the way. The checkers effort included top players, who helped the research team program checkers rules of thumb into software that categorized moves as successful or unsuccessful. Then the researchers let the program run, on an average of 50 computers daily (some days it ran on 200 machines), while they monitored progress and tweaked the program accordingly. In fact, Chinook beat humans to win the checkers world championship back in 1994.
Yes, you can solve chess, no, you won't any time soon.
This is not a question about computers but only about the game of chess.
The question is, does there exist a fail-safe strategy for never losing the game? If such a strategy exists, then a computer which knows everything can always use it and it is not a heuristic anymore.
For example, the game tic-tac-toe normally is played based on heuristics. But, there exists a fail-safe strategy. Whatever the opponent moves, you always find a way to avoid losing the game, if you do it right from the start on.
So you would need to prove whether or not such a strategy exists for chess as well. It is basically the same; just the space of possible moves is vastly bigger.
I'm coming to this thread very late, and that you've already realised some of the issues. But as an ex-master and an ex-professional chess programmer, I thought I could add a few useful facts and figures. There are several ways of measuring the complexity of chess:
The total number of chess games is approximately 10^(10^50). That number is unimaginably large.
The number of chess games of 40 moves or less is around 10^40. That's still an incredibly large number.
The number of possible chess positions is around 10^46.
The complete chess search tree (Shannon number) is around 10^123, based on an average branching factor of 35 and an average game length of 80.
For comparison, the number of atoms in the observable universe is commonly estimated to be around 10^80.
All endgames of 6 pieces or less have been collated and solved.
My conclusion: while chess is theoretically solvable, we will never have the money, the motivation, the computing power, or the storage to ever do it.
Some games have, in fact, been solved. Tic-Tac-Toe is a very easy one for which to build an AI that will always win or tie. Recently, Connect 4 has been solved as well (and shown to be unfair to the second player, since a perfect play will cause him to lose).
Chess, however, has not been solved, and I don't think there's any proof that it is a fair game (i.e., whether the perfect play results in a draw). Speaking strictly from a theoretical perspective though, Chess has a finite number of possible piece configurations. Therefore, the search space is finite (albeit, incredibly large). Therefore, a deterministic Turing machine that could play perfectly does exist. Whether one could ever be built, however, is a different matter.
The average $1000 desktop will be able to solve checkers in a mere 5 seconds by the year 2040 (5x10^20 calculations).
Even at this speed, it would still take 100 of these computers approximately 6.34 x 10^19 years to solve chess. Still not feasible. Not even close.
Around 2080, our average desktops will have approximately 10^45 calculations per second. A single computer will have the computational power to solve chess in about 27.7 hours. It will definitely be done by 2080 as long as computing power continues to grow as it has the past 30 years.
By 2090, enough computational power will exist on a $1000 desktop to solve chess in about 1 second...so by that date it will be completely trivial.
Given checkers was solved in 2007, and the computational power to solve it in 1 second will lag by about 33-35 years, we can probably roughly estimate chess will be solved somewhere between 2055-2057. Probably sooner since when more computational power is available (which will be the case in 45 years), more can be devoted to projects such as this. However, I would say 2050 at the earliest, and 2060 at the latest.
In 2060, it would take 100 average desktops 3.17 x 10^10 years to solve chess. Realize I am using a $1000 computer as my benchmark, whereas larger systems and supercomputers will probably be available as their price/performance ratio is also improving. Also, their order of magnitude of computational power increases at a faster pace. Consider a supercomputer now can perform 2.33 x 10^15 calculations per second, and a $1000 computer about 2 x 10^9. By comparison, 10 years ago the difference was 10^5 instead of 10^6. By 2060 the order of magnitude difference will probably be 10^12, and even this may increase faster than anticipated.
Much of this depends on whether or not we as human beings have the drive to solve chess, but the computational power will make it feasible around this time (as long as our pace continues).
On another note, the game of Tic-Tac-Toe, which is much, much simpler, has 2,653,002 possible calculations (with an open board). The computational power to solve Tic-Tac-Toe in roughly 2.5 seconds (at 1 million calculations per second) was achieved in 1990.
Moving backwards, in 1955, a computer had the power to solve Tic-Tac-Toe in about 1 month (1 calculation per second). Again, this is based on what $1000 would get you if you could package it into a computer (a $1000 desktop obviously did not exist in 1955), and this computer would have been devoted to solving Tic-Tac-Toe....which was just not the case in 1955. Computation was expensive and would not have been used for this purpose, although I don't believe there is any date where Tic-Tac-Toe was deemed "solved" by a computer, but I'm sure it lags behind the actual computational power.
Also, take into account $1000 in 45 years will be worth about 4 times less than it is now, so much more money can go into projects such as this while computational power will continue to get cheaper.
It actually is possible for both players to have winning strategies in infinite games with no well-ordering; however, chess is well-ordered. In fact, because of the 50-move rule, there is an upper limit on the number of moves a game can have, and thus there are only finitely many possible games of chess (which can be enumerated and solved exactly... theoretically, at least :)
Your end of the argument is supported by the way modern chess programs work now. They work that way because it's way too resource-intense to code a chess program to operate deterministically. They won't necessarily always work that way. It's possible that chess will someday be solved, and if that happens, it will likely be solved by a computer.
I think you are dead on. Machines like Deep Blue and Deep Thought are programmed with a number of predefined games, and clever algorithms to parse the trees into the ends of those games. This is, of course, a dramatic oversimplification. There is always a chance to "beat" the computer along the course of a game. By this I mean making a move that forces the computer to make a move that is less than optimal (whatever that is). If the computer cannot find the best path before the time limit for the move, it might very well make a mistake by choosing one of the less-desirable paths.
There is another class of chess programs that uses real machine learning, or genetic programming / evolutionary algorithms. Some programs have been evolved and use neural networks, et al, to make decisions. In this type of case, I would imagine that the computer might make "mistakes", but still end up in a victory.
There is a fascinating book on this type of GP called Blondie24 that you might read. It is about checkers, but it could apply to chess.
For the record, there are computers that can win or tie at checkers. I'm not sure if the same could be done for chess. The number of moves is a lot higher. Also, things change because pieces can move in any direction, not just forwards and backwards. I think although I'm not sure, that chess is deterministic, but that there are just way too many possible moves for a computer to currently determine all the moves in a reasonable amount of time.
From game theory, which is what this question is about, the answer is yes: chess can be played perfectly. The game space is known/predictable, and yes, if you had your grandchild's quantum computers you could probably eliminate all heuristics.
You could write a perfect tic-tac-toe machine now-a-days in any scripting language and it'd play perfectly in real-time.
Othello is another game that current computers can easily play perfectly, but the machine's memory and CPU will need a bit of help
Chess is theoretically possible but not practically possible (in 2008)
i-Go is tricky; its space of possibilities exceeds the number of atoms in the universe, so it might take us some time to make a perfect i-Go machine.
Chess is an example of a matrix game, which by definition has an optimal outcome (think Nash equilibrium). If players 1 and 2 each make optimal moves, a certain outcome will ALWAYS be reached (whether it is a win, tie or loss is still unknown).
As a chess programmer from the 1970s, I definitely have an opinion on this. What I wrote up about 10 years ago is still basically true today:
"Unfinished Work and Challenges to Chess Programmers"
Back then, I thought we could solve Chess conventionally, if done properly.
Checkers was solved recently (yay, University of Alberta, Canada!!!), but that was effectively done by brute force. To do chess conventionally, you'll have to be smarter.
Unless, of course, Quantum Computing becomes a reality. If so, chess will be solved as easily as Tic-Tac-Toe.
In the early 1970's in Scientific American, there was a short parody that caught my attention. It was an announcement that the game of chess was solved by a Russian chess computer. It had determined that there is one perfect move for white that would ensure a win with perfect play by both sides, and that move is: 1. a4!
Lots of answers here make the important game-theoretic points:
Chess is a finite, deterministic game with complete information about the game state
You can solve a finite game and identify a perfect strategy
Chess is however big enough that you will not be able to solve it completely with a brute force method
However these observations miss an important practical point: it is not necessary to solve the complete game perfectly in order to create an unbeatable machine.
It is in fact quite likely that you could create an unbeatable chess machine (i.e. will never lose and will always force a win or draw) without searching even a tiny fraction of the possible state space.
The following techniques for example all massively reduce the search space required:
Tree pruning techniques like Alpha/Beta or MTD-f already massively reduce the search space
Provable winning position. Many endings fall in this category: You don't need to search KR vs K for example, it's a proven win. With some work it is possible to prove many more guaranteed wins.
Almost certain wins - for "good enough" play without any foolish mistakes (say about ELO 2200+?) many chess positions are almost certain wins, for example a decent material advantage (e.g. an extra Knight) with no compensating positional advantage. If your program can force such a position and has good enough heuristics for detecting positional advantage, it can safely assume it will win or at least draw with 100% probability.
Tree search heuristics - with good enough pattern recognition, you can quickly focus on the relevant subset of "interesting" moves. This is how human grandmasters play so it's clearly not a bad strategy..... and our pattern recognition algorithms are constantly getting better
Risk assessment - a better conception of the "riskiness" of a position will enable much more effective searching by focusing computing power on situations where the outcome is more uncertain (this is a natural extension of Quiescence Search)
With the right combination of the above techniques, I'd be comfortable asserting that it is possible to create an "unbeatable" chess playing machine. We're probably not too far off with current technology.
Note that it's almost certainly harder to prove that this machine cannot be beaten. It would probably be something like the Riemann hypothesis: we would be pretty sure that it plays perfectly and would have empirical results showing that it never lost (including a few billion straight draws against itself), but we wouldn't actually have the ability to prove it.
Additional note regarding "perfection":
I'm careful not to describe the machine as "perfect" in the game-theoretic sense because that implies unusually strong additional conditions, such as:
Always winning in every situation where it is possible to force a win, no matter how complex the winning combination may be. There will be situations on the boundary between win/draw where this is extremely hard to calculate perfectly.
Exploiting all available information about potential imperfection in your opponent's play, for example inferring that your opponent might be too greedy and deliberately playing a slightly weaker line than usual on the grounds that it has a greater potential to tempt your opponent into making a mistake. Against imperfect opponents it can in fact be optimal to make a losing move if you estimate that your opponent probably won't spot the forced win and it gives you a higher probability of winning yourself.
Perfection (particularly given imperfect and unknown opponents) is a much harder problem than simply being unbeatable.
It's perfectly solvable.
There are 10^50 odd positions. Each position, by my reckoning, requires a minimum of 64 round bytes to store (each square has: 2 affiliation bits, 3 piece bits). Once they are collated, the positions that are checkmates can be identified and positions can be compared to form a relationship, showing which positions lead to other positions in a large outcome tree.
Then the program only needs to find the lowest roots that lead to a checkmate for one side only, if such a thing exists. In any case, chess was fairly simply solved at the end of the first paragraph.
if you search the entire space of all combinations of player1/2 moves, the single move that the computer decides upon at each step is based on a heuristic.
There are two competing ideas there. One is that you search every possible move, and the other is that you decide based on a heuristic. A heuristic is a system for making a good guess. If you're searching through every possible move, then you're no longer guessing.
"Is there a perfect algorithm for chess?"
Yes there is. Maybe it's for White to always win. Maybe it's for Black to always win. Maybe it's for both to always tie at least. We don't know which, and we'll never know, but it certainly exists.
See also
God's algorithm
I found this article by John MacQuarrie that references work by the "father of game theory" Ernst Friedrich Ferdinand Zermelo. It draws the following conclusion:
In chess either white can force a win, or black can force a win, or both sides can force at least a draw.
The logic seems sound to me.
There are two mistakes in your thought experiment:
If your Turing machine is not "limited" (in memory, speed, ...) you do not need to use heuristics; you can evaluate the final states (win, loss, draw). To find the perfect game you would then just need to use the Minimax algorithm (see http://en.wikipedia.org/wiki/Minimax) to compute the optimal moves for each player, which would lead to one or more optimal games (a minimal sketch of this follows below).
There is also no limit on the complexity of the heuristic used. If you can calculate a perfect game, there is also a way to compute a perfect heuristic from it. If needed, it is just a function that maps chess positions to moves: "If I'm in this situation S my best move is M".
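For a game small enough to enumerate, point 1 really is only a few lines: the sketch below computes the exact game value (+1 win, 0 draw, -1 loss, always from the viewpoint of the player to move) by exhaustive minimax in its negamax form, with result, legal_moves and play as placeholders for the game's rules. Applying the same scheme to chess is what is computationally out of reach:

def solve(state):
    r = result(state)          # +1, 0 or -1 for the side to move, or None if the game is not over
    if r is not None:
        return r
    # The best achievable value is the worst value we can force on the opponent.
    return max(-solve(play(state, m)) for m in legal_moves(state))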
As others pointed out already, this will end in 3 possible results: white can force a win, black can force a win, one of them can force a draw.
The result of a perfect game of checkers has already been computed. If humanity does not destroy itself first, there will also be such a calculation for chess some day, when computers have evolved enough to have enough memory and speed. Or we will have quantum computers... Or someone (a researcher, chess expert, genius) will find an algorithm that significantly reduces the complexity of the game. To give an example: what is the sum of all numbers between 1 and 1000? You can either calculate 1+2+3+4+5...+999+1000, or you can simply calculate N*(N+1)/2 with N = 1000, giving 500500. Now imagine you don't know about that formula, you don't know about mathematical induction, you don't even know how to multiply or add numbers... So it may be possible that there is a currently unknown algorithm that ultimately reduces the complexity of this game, and it would take just 5 minutes to calculate the best move on a current computer. Maybe it would even be possible to estimate it as a human, with pen and paper, or even in your mind, given some more time.
So, the quick answer is: If humanity survives long enough, it's just a matter of time!
I'm only 99.9% convinced by the claim that the size of the state space makes it impossible to hope for a solution.
Sure, 10^50 is an impossibly large number. Let's call the size of the state space n.
What's the bound on the number of moves in the longest possible game? Since all games end in a finite number of moves there exists such a bound, call it m.
Starting from the initial state, can't you enumerate all n moves in O(m) space? Sure, it takes O(n) time, but the arguments from the size of the universe don't directly address that. O(m) space might not even be very much. For O(m) space couldn't you also track, during this traversal, whether the continuation of any state along the path you are traversing leads to EitherMayWin, EitherMayForceDraw, WhiteMayWin, WhiteMayWinOrForceDraw, BlackMayWin, or BlackMayWinOrForceDraw? (There's a lattice depending on whose turn it is, annotate each state in the history of your traversal with the lattice meet.)
Unless I'm missing something, that's an O(n) time / O(m) space algorithm for determining which of the possible categories chess falls into. Wikipedia cites an estimate for the age of the universe at approximately 10^60 Planck times. Without getting into a cosmology argument, let's guess that there's about that much time left before the heat/cold/whatever death of the universe. That leaves us needing to evaluate one move every 10^10 Planck times, or every 10^-34 seconds. That's an impossibly short time (about 16 orders of magnitude shorter than the shortest times ever observed). Let's optimistically say that with a super-duper-good implementation running on top-of-the-line present-or-foreseen non-quantum (P-is-a-proper-subset-of-NP) technology, we could hope to evaluate states (take a single step forward, categorize the resulting state as an intermediate state or one of the three end states) at a rate of 100 MHz (once every 10^-8 seconds). Since this algorithm is very parallelizable, this leaves us needing 10^26 such computers, or about one for every atom in my body, together with the ability to collect their results.
I suppose there's always some sliver of hope for a brute-force solution. We might get lucky and, in exploring only one of white's possible opening moves, both choose one with much-lower-than-average fanout and one in which white always wins or wins-or-draws.
We could also hope to shrink the definition of chess somewhat and persuade everyone that it's still morally the same game. Do we really need to require positions to repeat 3 times before a draw? Do we really need to make the running-away party demonstrate the ability to escape for 50 moves? Does anyone even understand what the heck is up with the en passant rule? ;) More seriously, do we really need to force a player to move (as opposed to either drawing or losing) when his or her only move to escape check or a stalemate is an en passant capture? Could we limit the choice of pieces to which a pawn may be promoted if the desired non-queen promotion does not lead to an immediate check or checkmate?
I'm also uncertain about how much allowing each computer hash-based access to a large database of late-game states and their possible outcomes (which might be relatively feasible on existing hardware and with existing endgame databases) could help in pruning the search earlier. Obviously you can't memoize the entire function without O(n) storage, but you could pick a large integer and memoize that many endgames, enumerating backwards from each possible (or even not easily provably impossible, I suppose) end state.
I know this is a bit of a bump, but I have to put my 5 cents' worth in here. It is possible for a computer, or a person for that matter, to end every single chess game that he/she/it participates in with either a win or a stalemate.
To achieve this, however, you must know precisely every possible move and reaction and so forth, all the way through to each and every possible game outcome. To visualize this, or to make an easy way of analysing this information, think of it as a mind map that branches out constantly.
The centre node would be the start of the game. Each branch out of each node would symbolize a move, each one different from its brethren. Presenting it in this manner would take many resources, especially if you were doing it on paper. On a computer, this would take possibly hundreds of terabytes of data, as you would have very many repetitive moves, unless you made the branches come back together.
To memorize such data, however, would be implausible, if not impossible. To make a computer recognize the most optimal move to take out of the (at most) 8 instantly possible moves would be possible, but not plausible... as that computer would need to be able to process all the branches past that move, all the way to a conclusion, count all conclusions that result in a win or a stalemate, and then act on that number of winning conclusions against losing conclusions, and that would require RAM capable of processing data in the terabytes, or more! With today's technology, a computer like that would require more than the bank balance of the 5 richest men and/or women in the world!
So after all that consideration, it could be done; however, no one person could do it. Such a task would require 30 of the brightest minds alive today, not only in chess but in science and computer technology, and such a task could only be completed on a (let's put it entirely into basic perspective)... extremely, ultimately, hyper super-duper computer... which couldn't possibly exist for at least a century. It will be done! Just not in this lifetime.
Mathematically, chess has been solved by the Minimax algorithm, which goes back to the 1920s (it was found either by Borel or by von Neumann). Thus, a Turing machine can indeed play perfect chess.
However, the computational complexity of chess makes it practically infeasible. Current engines use several improvements and heuristics. Top engines today have surpassed the best humans in terms of playing strength, but because of the heuristics they are using, they might not play perfectly when given infinite time (e.g., hash collisions could lead to incorrect results).
The closest we currently have in terms of perfect play are endgame tablebases. The typical technique used to generate them is called retrograde analysis. Currently, all positions with up to six pieces have been solved.
It just might be solvable, but something bothers me:
Even if the entire tree could be traversed, there is still no way to predict the opponent's next move. We must always base our next move on the state of the opponent, and make the "best" move available. Then, based on the next state we do it again.
So, our optimal move might be optimal iff the opponent moves in a certain way. For some moves of the opponent our last move might have been sub-optimal.
I just fail to see how there could be a "perfect" move in every step.
For that to be the case, there must for every state [in the current game] be a path in the tree which leads to victory, regardless of the opponent's next move (as in tic-tac-toe), and I have a hard time figuring that.
Yes: in mathematics, chess is classified as a determined game, which means it has a perfect strategy for at least one of the players; this has been proven to be true even for an infinite chess board. So one day a fast, effective AI will probably find the perfect strategy, and the game is gone.
More on this in this video: https://www.youtube.com/watch?v=PN-I6u-AxMg
There is also quantum chess, where there is no mathematical proof that it is a determined game: http://store.steampowered.com/app/453870/Quantum_Chess/
And here is a detailed video about quantum chess: https://chess24.com/en/read/news/quantum-chess
Of course
There are only 10 to the power of fifty possible combinations of pieces on the board. With that in mind, to play through every combination you would need to make under 10 to the power of fifty moves (including repetitions, multiply that number by 3). So there are fewer than ten to the power of one hundred moves in chess. Just pick those that lead to checkmate and you're good to go.
64-bit math (= chessboard) and bitwise operators (= next possible moves) are all you need. It's that simple. Brute force will usually find the best way. Of course, there is no universal algorithm for all positions. In real life the calculation is also limited in time; a timeout will stop it. A good chess program means heavy code (passed pawns, doubled pawns, etc.); small code can't be very strong. Opening and endgame databases just save processing time, a kind of preprocessed data. The device (the OS, threading possibilities, environment, hardware) defines the requirements. The programming language is important. Anyway, the development process is interesting.
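To make the "64-bit math" remark concrete, here is the classic wrap-masking trick for a single piece type, assuming bit 0 = a1 and bit 63 = h8; the constants simply clear the files where a shift would otherwise wrap around the edge of the board:

NOT_A  = 0xFEFEFEFEFEFEFEFE   # clears the a-file
NOT_AB = 0xFCFCFCFCFCFCFCFC   # clears the a- and b-files
NOT_H  = 0x7F7F7F7F7F7F7F7F   # clears the h-file
NOT_GH = 0x3F3F3F3F3F3F3F3F   # clears the g- and h-files
M64    = 0xFFFFFFFFFFFFFFFF   # keep results inside 64 bits

def knight_attacks(square):
    b = 1 << square            # square 0 = a1 ... 63 = h8
    return (((b << 17) & NOT_A)  | ((b << 15) & NOT_H)  |
            ((b << 10) & NOT_AB) | ((b <<  6) & NOT_GH) |
            ((b >> 15) & NOT_A)  | ((b >> 17) & NOT_H)  |
            ((b >>  6) & NOT_AB) | ((b >> 10) & NOT_GH)) & M64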
