Need some suggestions on what algorithm to use [closed] - algorithm

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form.
Closed 9 years ago.
It's a simple game in C++.
Five towers are generated at random positions in the coordinate range (0.0f, 0.0f) to (10.0f, 10.0f).
They have random HP, range, and damage, each capped within a certain limit. They can't move.
Ten units are then added to the map with fixed movement speed, HP, and damage.
The number of units and towers is fixed across simulations; only their initial positions are randomized.
1000 simulations are to be run.
The goal is to achieve a win rate of approximately 90% for the units.
A game is won when the units destroy all of the towers. Units move at a predefined speed towards their target tower. Each simulation takes multiple rounds to complete. In each round, every unit moves towards its best selected target and attacks if within a certain range. Similarly, each tower picks one unit within its attack range and keeps attacking it until it dies or moves out of range.
I need some pointers on which algorithms I should invest my time in to achieve this.
Currently, I am able to achieve an 84.2% win rate using a weighted average of the towers' distance from the unit, HP, range, and damage, and selecting the tower that scores lowest on these criteria. Moving towards the tower at the least distance from the unit, without considering the other attributes, achieves a win rate of approximately 72%.
From comment of deleted answer:
There is one more restriction: I can only select a target each round, and the units will make sure to move towards that target. I am not supposed to modify the part where units move towards the target. So there has to be a target tower each round of a simulation towards which the units targeting it will move. That means there is no way I can move my units away from a tower to a safe area, assemble them at a point, and then plan my attack.

I've had a better idea for a formula to select which tower to attack.
For each of the warriors, use it to get a "score" for each tower, then select the tower with the highest score:
a1*totalDmgFromOtherWarriorsAimedAtThatTower - a2*towerRange - a3*towerDamage - a4*towerHP - a5*distance/speed
The weights a1 to a5 should be tuned again and again until you get the optimal result, making some parameters more important than others.
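The score formula above translates almost directly into code. A minimal Python sketch, assuming illustrative unit/tower objects with x, y, hp, range, damage, speed, and target attributes (these names are assumptions, not taken from the asker's code):

```python
import math

def pick_target(unit, towers, units, weights=(1.0, 0.5, 1.0, 0.8, 1.2)):
    """Pick the tower with the highest score per the formula above.

    weights = (a1..a5) are arbitrary starting values to tune across
    simulation batches; all attribute names are illustrative."""
    a1, a2, a3, a4, a5 = weights
    best, best_score = None, -math.inf
    for tower in towers:
        # total damage of the other units already aiming at this tower
        ally_dmg = sum(u.damage for u in units
                       if u is not unit and u.target is tower)
        dist = math.hypot(tower.x - unit.x, tower.y - unit.y)
        score = (a1 * ally_dmg
                 - a2 * tower.range
                 - a3 * tower.damage
                 - a4 * tower.hp
                 - a5 * dist / unit.speed)
        if score > best_score:
            best, best_score = tower, score
    return best
```

The tuning loop the answer describes would re-run the 1000-simulation batch for each candidate weight vector and keep the best-performing one.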

If there is no time limit or advantage for time, I would try to go for a grouped approach - let all units attack the same tower together, and have all units enter the tower's attack range at the exact same time. This may actually end up not taking that much longer, since you'll fire faster while taking less damage and thus not have to account for preventing units from dying as much, also producing a much higher (if not perfect) win rate.
You can possibly have a specific unit (one with the highest HP?) enter the range slightly before the other units so it draws fire and can move out of range when it's close to death. If the strongest unit has moved out of range, you can either move the next attacked unit out of range too (and so on) or simply continue attacking until the tower is destroyed.
You'll have to play around with which tower to attack first. Probably the weakest (lowest HP + damage), but you may not want to send in your strongest unit to draw fire, because you probably want to keep this for the last, strongest tower.
Moving a unit such to avoid the attack range of all the towers to get to the desired tower may be difficult. Some options:
Leave the unit where it is.
Pick towers strategically to 'untrap' the strongest units.
Attack multiple towers.
If all of this sounds like a near-impossible task requiring some really advanced AI, note that it may be a lot simpler than you think: just ignore most of the constraints to start and add them back in one at a time; in other words, start simple and build it up from there. But yes, it's a lot more difficult than your individual approach; the main difficulty lies in the geometry calculations and in playing around a bit to find the best order of attack for the towers and the order of damage-takers.
How I would probably approach this: (test the efficiency at every step and stop when you're happy)
Write a heuristic to determine the best tower. Move all your units there to attack it (ignoring all other towers). Repeat until the game ends. This should be really simple.
Modify to wait until most units are there before entering the tower range. Shouldn't be too difficult.
You can stop here if you want, before any difficult stuff starts happening (maybe hack at it a little to improve), thus it shouldn't have taken you too long, and simply compare this to your current approach.
Write some simple code to have units move around other towers (if possible).
Modify your picking-tower code to redetermine a tower if some units can't get there.
Incrementally make everything more complicated.
Side note - Since the towers are static, you can determine the time it will take to get to a tower ahead of time, so you can just wait at a safe spot (rather than just outside range of the tower, which may be inside range of multiple other towers) if other units will take longer to get there.
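A sketch of that timing idea: because the towers never move, each unit's travel time to the edge of a tower's attack range is known up front, so you can compute per-unit departure delays that make everyone enter range together. Attribute names are assumptions, not the asker's API:

```python
import math

def arrival_delays(units, tower):
    """Per-unit departure delays so that every unit crosses the edge of
    the tower's attack range at the same instant. Towers are static, so
    travel times are known ahead of time."""
    travel_times = []
    for u in units:
        dist = math.hypot(tower.x - u.x, tower.y - u.y)
        travel_times.append(max(0.0, dist - tower.range) / u.speed)
    slowest = max(travel_times)
    return [slowest - t for t in travel_times]  # wait this long before moving
```

Units with a nonzero delay can spend that time at a safe spot outside all towers' ranges rather than just outside the target's range.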
Additional note - If units can be ranged too, if any unit has a longer attack range than any tower, it would be most efficient to have that unit solo that tower until it is destroyed (FREE KILL!).

Related

Techniques to evaluate the "twistiness" of a road in Google Maps?

As per the title. I want to, given a Google maps URL, generate a twistiness rating based on how windy the roads are. Are there any techniques available I can look into?
What do I mean by twistiness? Well, I'm not sure exactly. I suppose it's characterized by a high turn-to-distance ratio, as well as a high angle-change-per-turn number. I'd also say that the elevation change of a road comes into it as well.
I think that once you know exactly what you want to measure, the implementation is quite straightforward.
I can think of several measurements:
the ratio of the road length to the distance between start and end (this would make a long single curve "twisty", so it is most likely not the complete answer)
the number of inflection points per unit length (this would make an almost straight road with a lot of little swaying "twisty", so it is most likely not the complete answer)
These two could be combined by multiplication, so that you would have:
road-length * inflection-points
--------------------------------------
start-end-distance * road-length
You can see that this can be shortened to "inflection-points per start-end-distance", which does seem like a good indicator for "twistiness" to me.
As for taking elevation into account, I think that making the whole calculation in three dimensions is enough for a first attempt.
You might want to handle left-right inflections separately from up-down inflections, though, in order to make it possible to scale the elevation inflections by some factor.
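A rough sketch of the combined measure ("inflection points per start-end distance"), treating the road as a 2D polyline and detecting inflections as sign changes between consecutive turn directions; elevation is ignored here, per the first-attempt suggestion above:

```python
import math

def twistiness(points):
    """Inflection points per unit start-end distance for a 2D polyline.

    `points` is a list of (x, y) vertices. An inflection is counted
    wherever the turn direction flips between left and right."""
    if len(points) < 3:
        return 0.0

    def turn(a, b, c):
        # z-component of the cross product: + for a left turn, - for right
        return (b[0] - a[0]) * (c[1] - b[1]) - (b[1] - a[1]) * (c[0] - b[0])

    turns = [turn(points[i - 1], points[i], points[i + 1])
             for i in range(1, len(points) - 1)]
    inflections = sum(1 for t1, t2 in zip(turns, turns[1:]) if t1 * t2 < 0)
    straight = math.hypot(points[-1][0] - points[0][0],
                          points[-1][1] - points[0][1])
    return inflections / straight if straight else float('inf')
```

Adding elevation would mean working with (x, y, z) vertices and, as suggested above, possibly scaling up-down inflections by a separate factor.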
Try http://www.hardingconsultants.co.nz/transportationconference2007/images/Presentations/Technical%20Conference/L1%20Megan%20Fowler%20Canterbury%20University.pdf as a starting point.
I'd assume that you'd have to somehow capture the road centreline from Google Maps as a vectorised dataset & analyse using GIS software to do what you describe. Maybe do a screen grab then a raster-to-vector conversion to start with.
Cumulative turn angle per Km is a commonly-used measure in road assessment. Vertex density is also useful. Note that these measures depend upon an assumption that vertices have been placed at some form of equal density along the line length whilst they were captured, rather than being manually placed. Running a GIS tool such as a "bendsimplify" algorithm on the line should solve this. I have written scripts in Python for ArcGIS 10 to define these measures if anyone wants them.
Sinuosity is sometimes used for measuring bends in rivers - see the help pages for Hawth's Tools for ArcGIS for a good description. It could be misleading for roads that have major changes in course along their length, though.

How To Make An Efficient Ludo Game Playing AI Algorithm

I want to develop a Ludo game that will be played by at most 4 players and at least 2. One of the players will be an AI. Because there are so many conditions, I am not able to decide which pawn the computer should move. I am trying my best, but I have still not managed to develop an efficient algorithm that can compete with a human. If anybody knows of any algorithm, implemented in any language, please let me know. Thanks.
You can also try a general game-playing AI algorithm such as Monte Carlo tree search. The basic idea is this: you simulate many random games from the current position and then choose the action that statistically gives the best outcome.
Basically, the right AI depends on the type of environment.
For Ludo, the environment is stochastic.
There are multiple algorithms to decide which pawn should move next.
For these types of environments, you need to look at algorithms like expectimax or MDP solvers, or, if you want to do it more professionally, reinforcement learning.
I think that in most computer card/board games, getting a reasonably good strategy for your AI player is better than trying to get an always-winning-top-notch algorithm. The AI player should be fun to play with.
A pretty reasonable way to do it is to collect a set of empirical rules which your AI should follow, like 'If I rolled a 6 on the dice, I should move a pawn from Home before considering any other moves', 'If I have a chance to "eat" another player's pawn, do it', etc. Then rank these rules from most important to least important and implement them in the code. You can combine sets of rules into different strategies and try switching them to see whether the AI plays better or worse.
Start with a simple heuristic - what's the total number of squares each player has to move to get all their pieces home? Now you can make a few adjustments to that heuristic - for instance, what's the additional cost of a piece in the home square? (Hint - what's the expected total of the dice rolls before the player gets a six?). Now you can further adjust the 'expected distance' of pieces from home based on how likely they are to be hit. For instance, if a piece has a 1 in 6 chance of getting hit before the player's next move, then its heuristic distance is 5/6*(current distance)+1/6*(home distance).
You should then be able to choose a move that maximizes your player's advantage (difference in heuristic) over all the opponents.
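That heuristic is easy to sketch. The 5/6 and 1/6 weighting follows the worked example above; the piece and move representations are purely illustrative:

```python
def heuristic_distance(current, home, hit_prob=0.0):
    """Expected squares left to travel: with probability hit_prob the
    piece is hit before our next move and its distance resets to home."""
    return (1 - hit_prob) * current + hit_prob * home

def player_cost(pieces):
    """Total expected distance for one player's pieces, given as
    (current_distance, home_distance, hit_probability) tuples."""
    return sum(heuristic_distance(c, h, p) for c, h, p in pieces)

def best_move(moves):
    """Pick the move maximizing our advantage. `moves` maps a move label
    to the (our_pieces, opponent_pieces) state resulting from that move."""
    return min(moves,
               key=lambda m: player_cost(moves[m][0]) - player_cost(moves[m][1]))
```

With a 1-in-6 chance of being hit, a piece 10 squares along a 52-square track has an expected distance of 5/6*10 + 1/6*52 = 17 squares.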

Fast algorithm for line of sight calculation in an RTS game

I'm making a simple RTS game. I want it to run very fast because it should work with thousands of units and 8 players.
Everything seems to work flawlessly but it seems the line of sight calculation is a bottleneck. It's simple: if an enemy unit is closer than any of my unit's LOS range it will be visible.
Currently I use a quite naive algorithm: for every enemy unit, I check whether any of my units can see it. That's O(n^2).
So if there are 8 players and they have 3000 units each, that would mean 3000 * 21000 = 63,000,000 tests per player in the worst case, which is quite slow.
More details: it's a stupid simple 2D space RTS: no grid, units are moving along a straight lines everywhere and there is no collision so they can move through each other. So even hundreds of units can be at the same spot.
I want to speed up this LOS algorithm somehow. Any ideas?
EDIT:
So additional details:
I meant that one player alone can have 3000 units.
My units have radar, so they see equally in all directions.
Use a spatial data structure to efficiently look up units by location.
Additionally, if you only care whether a unit is visible, but not which unit spotted it, you can do
for each unit
mark the regions this unit sees in your spatial data structure
and have:
isVisible(unit) = isVisible(region(unit))
A very simple spatial data structure is the grid: You overlay a coarse grid over the playing field. The regions are this grid's cells. You allocate an array of regions, and for each region keep of list of units presently in this region.
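A minimal version of that grid, assuming units expose x and y attributes (illustrative names): units are bucketed by cell, so a LOS query only visits the cells overlapping the query circle instead of every unit on the map.

```python
from collections import defaultdict

class Grid:
    """Coarse uniform grid: buckets units by cell so that proximity
    queries only touch nearby cells. Cell size is a tuning knob."""

    def __init__(self, cell_size):
        self.cell = cell_size
        self.buckets = defaultdict(list)

    def key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, unit):
        self.buckets[self.key(unit.x, unit.y)].append(unit)

    def nearby(self, x, y, radius):
        """Yield all units in cells overlapping the circle (x, y, radius)."""
        r = int(radius // self.cell) + 1
        cx, cy = self.key(x, y)
        for i in range(cx - r, cx + r + 1):
            for j in range(cy - r, cy + r + 1):
                yield from self.buckets.get((i, j), ())
```

An enemy unit is then visible if any friendly unit returned by `nearby(enemy.x, enemy.y, max_los_range)` is actually within its own LOS range; candidates from `nearby` still need the exact distance check.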
You may also find Muki Haklay's demonstration of spatial indexes useful.
One of the most fundamental rules in gamedev is to optimize the bejeebers out of your algorithms by exploiting all possible constraints your gameplay defines - this is the main reason that you don't see wildly different games built on top of any given companies game engine, they've exploited their constraints so efficiently that they can't deal with anything that isn't within these constraints.
That said, you said that units move in straight lines, and you say that players can have 3000 units. Even if I assume that's 3000 units across all eight players, that's 375 units per player, so I think I'm safe in assuming that on each step of gameplay (and I am assuming that each step involves the calculation you describe above) more units will keep their direction than will change it.
So, if this is true, then you want to divide all your pieces into two groups - those that did change direction in the last step, and those that did not.
For those that did, you need to do a bit of calculating. For units of any two opposing forces, you want to ask: 'When will unit A see unit B, given that neither unit A nor unit B changes direction or speed?' (You can deal with acceleration/deceleration, but then it gets more complicated.) To calculate this, you first need to determine whether the vectors that unit A and unit B are travelling on will intersect (a simple 2D line intersection calculation, combined with a calculation that tells you when each unit hits this intersection). If they don't intersect, and the units can't see each other now, then they never will see each other unless at least one of them changes direction. If they do intersect, then you need to calculate the time differential between when the first and second unit pass through the point of intersection. If this differential is greater than the LOS range allows, then these units will never see each other unless one changes direction; if it is within the LOS range, then a few more (wave hands vigorously) calculations will tell you when this blessed event will take place.
Now, what you have is a collection of information bifurcated into elements that never will see each other and elements that will see each other at some time t in the future - each step, you simply deal with the units that have changed direction and compute their interactions with the rest of the units. (Oh, and deal with those units that previous calculations told you would come into view of each other - remember to keep these in an insertable ordered structure) What you've effectively done is exploited the linear behavior of the system to change your question from 'Does unit A see unit B' to 'When will unit A see unit B'
Now, all of that said, this isn't to discount the spatial data structure answer - it's a good answer - however, it is also capable of dealing with units in random motion, so you want to consider how to optimize this process further - you also need to be careful about dealing with cross region visibility, i.e. units at the borders of two different regions may be able to see each other - if you have pieces that tend to clump up, using a spatial data structure with variable dimensions might be the answer, where pieces that are not in the same region are guaranteed not to be able to see each other.
I'd do this with a grid. I think that's how commercial RTS games solve the problem.
Discretize the game world for the visibility tracker. (Square grid is easiest. Experiment with the coarseness to see what value works best.)
Record the present units in each area. (Update whenever a unit moves.)
Record the areas each player sees. (This has to be updated as units move. The unit could just poll to determine its visible tiles, or you could analyze the map before the game starts.)
Make a list (or whatever structure is fitting) for the enemy units seen by each player.
Now whenever a unit goes from one area of visibility to another, perform a check:
Went from an unseen to a seen area - add the unit to the player's visibility tracker.
Went from a seen to an unseen area - remove the unit from the player's visibility tracker.
In the other two cases no visibility change occurred.
This is fast but takes some memory. However, with BitArrays and Lists of pointers, the memory usage shouldn't be that bad.
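The area-transition bookkeeping described above might look like this sketch, with areas as (i, j) grid coordinates and enemy units as opaque handles (all names illustrative):

```python
class VisibilityTracker:
    """Per-player visibility: the set of areas the player sees, plus the
    set of enemy units currently inside those areas."""

    def __init__(self, seen_areas):
        self.seen = set(seen_areas)   # updated as the player's units move
        self.visible_enemies = set()

    def on_enemy_moved(self, enemy, old_area, new_area):
        was, now = old_area in self.seen, new_area in self.seen
        if not was and now:
            self.visible_enemies.add(enemy)      # unseen -> seen area
        elif was and not now:
            self.visible_enemies.discard(enemy)  # seen -> unseen area
        # seen -> seen and unseen -> unseen: no visibility change
```

Only boundary crossings cost anything; units moving within one area trigger no work at all.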
There was an article about this in one of the Game Programming Gems books (one of the first three, I think.)

Algorithm for finding the best routes for food distribution in game

I'm designing a city building game and got into a problem.
Imagine Sierra's Caesar III game mechanics: you have many city districts with one market each. There are several granaries over the distance connected with a directed weighted graph. The difference: people (here cars) are units that form traffic jams (here goes the graph weights).
Note: in the Caesar game series, people harvested food and stockpiled it in several big granaries, whereas many markets (small shops) took food from the granaries and delivered it to the citizens.
The task: tell each district where they should be getting their food from while taking least time and minimizing congestions on the city's roads.
Map example
Suppose that yellow districts need 7, 7 and 4 apples accordingly.
Bluish granaries have 7 and 11 apples accordingly.
Suppose edge weights to be proportional to their length. Then the solution should be something like the gray numbers indicated on the edges. E.g., the first district gets 4 apples from the 1st granary and 3 apples from the 2nd, while the last district gets 4 apples from only the 2nd granary.
Here, vertical roads are first occupied to the max, and then the remaining workers are sent to the diagonal paths.
Question
What practical and very fast algorithm should I use? I was looking at some papers (Congestion Games: Optimization in Competition etc.) describing congestion games, but could not get the big picture.
You want to look into the Max-flow problem. Seems like in this case it is a bipartite graph, which should make things easier to visualize.
This is a Multi-source Multi-sink Maximum Flow Problem which can easily be converted into a simple Maximum Flow Problem by creating a super source and a super sink as described in the link. There are many efficient solutions to Maximum Flow Problems.
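A sketch of that conversion: add a super source feeding each granary with its stock and a super sink drawing each district's demand, then run any max-flow algorithm (Edmonds-Karp below). The node names and capacities in the usage example mirror the apples scenario above, with large capacities standing in for uncongested roads:

```python
from collections import deque, defaultdict

def max_flow(edges, source, sink):
    """Edmonds-Karp: repeatedly find a shortest augmenting path by BFS.
    `edges` is a list of (u, v, capacity) triples. Returns total flow."""
    residual = defaultdict(lambda: defaultdict(int))
    for u, v, c in edges:
        residual[u][v] += c
        residual[v][u] += 0  # make sure the reverse (residual) edge exists
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if v not in parent and cap > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total
        # find the bottleneck capacity along the path
        v, bottleneck = sink, float('inf')
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # push flow, updating residual capacities both ways
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        total += bottleneck
```

For the example map: granaries g1/g2 hold 7 and 11 apples, districts d1/d2/d3 need 7, 7, and 4, and a flow of 18 means every demand can be met.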
One thing you could do, which would address the incremental update problem discussed in another answer and which might also be cheaper to computer, is forget about a globally optimal solution. Let each villager participate in something like ant colony optimization.
Consider preventing the people on the bottom-right-hand yellow node in your example from squeezing out those on the far-right-hand yellow node by allowing the people at the far-right-hand yellow node to bid up the "price" of buying resources from the right-hand blue node, which would encourage some of those from the bottom-right-hand yellow node to take the slightly longer walk to the left-hand blue node.
I agree with Larry and mathmike, it certainly seems like this problem is a specialization of network flow.
On another note, the problem may get easier if your final algorithm finds a spanning tree for each market to its resources (granaries), consumes those resources greedily based on shortest path first, then moves onto the next resource pile.
It may help to think about it in terms of using a road to max capacity first (maximizing road efficiency), rather than trying to minimize congestion.
This goes to the root of the problem - in general, it's easier to find close to optimal solutions in graph problems and in terms of game dev, close to optimal is probably good enough.
Edit: Wanted to also point out that mathmike's link to Wikipedia also talks about Maximum Flow Problem with Vertex Capacities where each of your granaries can be thought of as vertices with finite capacity.
Something you have to note, is that your game is continuous. If you have a solution X at time t, and some small change occurs (e.g: the player builds another road, or one of the cities gain more population), the solution that the Max Flow algorithms give you may change drastically, but you'd probably want the solution at t+1 to be similar to X. A totally different solution at each time step is unrealistic (1 new road is built at the southern end of the map, and all routes are automatically re-calculated).
I would use some algorithm to calculate initial solution (or when a major change happens, like an earthquake destroys 25% of the roads), but most of the time only update it incrementally: meaning, define some form of valid transformation on a solution (e.g. 1 city tries to get 1 food unit from a different granary than it does now) - you try the update (simulate the expected congestion), and keep the updated solution if its better than the existing solution. Run this step N times after each game turn or some unit of time.
It's both computationally efficient (you don't need to run full Max Flow every second) and will give you more realistic, smooth changes in behavior.
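The incremental step can be sketched as plain hill climbing: propose one small change, evaluate it, and keep it only if it improves. The cost and neighbor functions here are placeholders for the game's congestion simulation and route-swap transformation:

```python
def improve(solution, cost, neighbor, steps=100):
    """Hill-climbing update: try `steps` random single-step changes
    (e.g. one district switching granary) and keep each candidate only
    if its simulated cost is lower than the current best."""
    best_cost = cost(solution)
    for _ in range(steps):
        candidate = neighbor(solution)
        c = cost(candidate)
        if c < best_cost:
            solution, best_cost = candidate, c
    return solution
```

Run a handful of these steps per game turn; the routes then drift smoothly toward a good solution instead of jumping to a new global optimum every frame.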
It might be more fun to have a dynamic that models a behavior resulting in a good reasonable solution, rather than finding an ideal solution to drive the behavior. Suppose you plan each trip individually. If you're a driver and you need to get from point A to point B, how would you get there? You might consider a few things:
I know about typical traffic conditions at this hour and I'll try to find ways around roads that are usually busy. You might model this as an averaged traffic value at different times, as the motorists don't necessarily have perfect information about the current traffic, but may learn and identify trends over time.
I don't like long, confusing routes with a lot of turns. When planning a trip, you might penalize those with many edges.
If speed limits and traffic lights are included in your model, I'd want to avoid long stretches with low speed limits and/or a lot of traffic lights. I'd prefer freeways or highways for longer trips, even if they have more traffic.
There may be other interesting dynamics that evolve from considering the problem behaviorally rather than as a pure optimization. In real life, traffic rarely converges on optimal solutions, so a big part of the challenge in transportation engineering is coming up with incentives, penalties and designs that encourage a better solution from the natural dynamics playing out in the drivers' decisions.

Is there a perfect algorithm for chess? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I was recently in a discussion with a non-coder person on the possibilities of chess computers. I'm not well versed in theory, but think I know enough.
I argued that there could not exist a deterministic Turing machine that always won or stalemated at chess. I think that, even if you search the entire space of all combinations of player1/2 moves, the single move that the computer decides upon at each step is based on a heuristic. Being based on a heuristic, it does not necessarily beat ALL of the moves that the opponent could do.
My friend thought, to the contrary, that a computer would always win or tie if it never made a "mistake" move (however do you define that?). However, being a programmer who has taken CS, I know that even your good choices - given a wise opponent - can force you to make "mistake" moves in the end. Even if you know everything, your next move is greedy in matching a heuristic.
Most chess computers try to match a possible end game to the game in progress, which is essentially a dynamic programming traceback. Again, the endgame in question is avoidable though.
Edit: Hmm... looks like I ruffled some feathers here. That's good.
Thinking about it again, it seems like there is no theoretical problem with solving a finite game like chess. I would argue that chess is a bit more complicated than checkers in that a win is not necessarily by numerical exhaustion of pieces, but by a mate. My original assertion is probably wrong, but then again I think I've pointed out something that is not yet satisfactorily proven (formally).
I guess my thought experiment was that whenever a branch in the tree is taken, then the algorithm (or memorized paths) must find a path to a mate (without getting mated) for any possible branch on the opponent moves. After the discussion, I will buy that given more memory than we can possibly dream of, all these paths could be found.
"I argued that there could not exist a deterministic Turing machine that always won or stalemated at chess."
You're not quite right. There can be such a machine. The issue is the hugeness of the state space that it would have to search. It's finite, it's just REALLY big.
That's why chess falls back on heuristics -- the state space is too huge (but finite). To even enumerate -- much less search for every perfect move along every course of every possible game -- would be a very, very big search problem.
Openings are scripted to get you to a mid-game that gives you a "strong" position. Not a known outcome. Even end games -- when there are fewer pieces -- are hard to enumerate to determine a best next move. Technically they're finite. But the number of alternatives is huge. Even a 2 rooks + king has something like 22 possible next moves. And if it takes 6 moves to mate, you're looking at 12,855,002,631,049,216 moves.
Do the math on opening moves. While there are only about 20 opening moves, there are something like 30 or so second moves, so by the third move we're looking at 360,000 alternative game states.
But chess games are (technically) finite. Huge, but finite. There's perfect information. There are defined start and end states. There are no coin tosses or dice rolls.
I know next to nothing about what's actually been discovered about chess. But as a mathematician, here's my reasoning:
First we must remember that White gets to go first and maybe this gives him an advantage; maybe it gives Black an advantage.
Now suppose that there is no perfect strategy for Black that lets him always win/stalemate. Since chess is a finite game of perfect information, it is determined (this is Zermelo's theorem), so this implies that no matter what Black does, there is a strategy White can follow to win. Wait a minute - this means there is a perfect strategy for White!
This tells us that at least one of the two players does have a perfect strategy which lets that player always win or draw.
There are only three possibilities, then:
White can always win if he plays perfectly
Black can always win if he plays perfectly
One player can win or draw if he plays perfectly (and if both players play perfectly then they always stalemate)
But which of these is actually correct, we may never know.
The answer to the question is yes: there must be a perfect algorithm for chess, at least for one of the two players.
It has been proven for the game of checkers that a program can always win or tie the game. That is, there is no choice of moves that one player can make which force the other player into losing.
The researchers spent almost two decades going through the 500 billion billion possible checkers positions (which is still an infinitesimally small fraction of the number of chess positions, by the way). The checkers effort included top players, who helped the research team program checkers rules of thumb into software that categorized moves as successful or unsuccessful. Then the researchers let the program run on an average of 50 computers daily (some days on as many as 200 machines), while they monitored progress and tweaked the program accordingly. In fact, Chinook beat humans to win the checkers world championship back in 1994.
Yes, you can solve chess, no, you won't any time soon.
This is not a question about computers but only about the game of chess.
The question is, does there exist a fail-safe strategy for never losing the game? If such a strategy exists, then a computer which knows everything can always use it and it is not a heuristic anymore.
For example, the game tic-tac-toe normally is played based on heuristics. But, there exists a fail-safe strategy. Whatever the opponent moves, you always find a way to avoid losing the game, if you do it right from the start on.
So you would need to prove whether or not such a strategy exists for chess as well. It is basically the same; just the space of possible moves is vastly bigger.
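For comparison, tic-tac-toe's fail-safe strategy can be computed outright by minimax over the complete game tree, which also proves the game is a draw under perfect play; exactly this kind of enumeration is what becomes hopeless at chess scale. A sketch, with boards as 9-character strings:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Minimax value with `player` ('X' or 'O') to move: +1 if X can
    force a win, -1 if O can, 0 if best play from both sides draws."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0  # board full, no winner: draw
    nxt = 'O' if player == 'X' else 'X'
    values = [value(board[:i] + player + board[i + 1:], nxt)
              for i, s in enumerate(board) if s == ' ']
    return max(values) if player == 'X' else min(values)
```

Evaluating `value(' ' * 9, 'X')` returns 0: from the empty board, neither side can force a win, which is the fail-safe strategy made concrete.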
I'm coming to this thread very late, and I see that you've already realised some of the issues. But as an ex-master and an ex-professional chess programmer, I thought I could add a few useful facts and figures. There are several ways of measuring the complexity of chess:
The total number of chess games is approximately 10^(10^50). That number is unimaginably large.
The number of chess games of 40 moves or less is around 10^40. That's still an incredibly large number.
The number of possible chess positions is around 10^46.
The complete chess search tree (Shannon number) is around 10^123, based on an average branching factor of 35 and an average game length of 80.
For comparison, the number of atoms in the observable universe is commonly estimated to be around 10^80.
All endgames of 6 pieces or less have been collated and solved.
My conclusion: while chess is theoretically solvable, we will never have the money, the motivation, the computing power, or the storage to ever do it.
Some games have, in fact, been solved. Tic-Tac-Toe is a very easy one for which to build an AI that will always win or tie. Recently, Connect 4 has been solved as well (and shown to be unfair to the second player, since perfect play will cause them to lose).
Chess, however, has not been solved, and I don't think there's any proof that it is a fair game (i.e., whether the perfect play results in a draw). Speaking strictly from a theoretical perspective though, Chess has a finite number of possible piece configurations. Therefore, the search space is finite (albeit, incredibly large). Therefore, a deterministic Turing machine that could play perfectly does exist. Whether one could ever be built, however, is a different matter.
The average $1000 desktop will be able to solve checkers in a mere 5 seconds by the year 2040 (5x10^20 calculations).
Even at this speed, it would still take 100 of these computers approximately 6.34 x 10^19 years to solve chess. Still not feasible. Not even close.
Around 2080, our average desktops will have approximately 10^45 calculations per second. A single computer will have the computational power to solve chess in about 27.7 hours. It will definitely be done by 2080 as long as computing power continues to grow as it has the past 30 years.
By 2090, enough computational power will exist on a $1000 desktop to solve chess in about 1 second...so by that date it will be completely trivial.
Given checkers was solved in 2007, and the computational power to solve it in 1 second will lag by about 33-35 years, we can probably roughly estimate chess will be solved somewhere between 2055-2057. Probably sooner since when more computational power is available (which will be the case in 45 years), more can be devoted to projects such as this. However, I would say 2050 at the earliest, and 2060 at the latest.
In 2060, it would take 100 average desktops 3.17 x 10^10 years to solve chess. Realize I am using a $1000 computer as my benchmark, whereas larger systems and supercomputers will probably be available as their price/performance ratio is also improving. Also, their order of magnitude of computational power increases at a faster pace. Consider a supercomputer now can perform 2.33 x 10^15 calculations per second, and a $1000 computer about 2 x 10^9. By comparison, 10 years ago the difference was 10^5 instead of 10^6. By 2060 the order of magnitude difference will probably be 10^12, and even this may increase faster than anticipated.
Much of this depends on whether or not we as human beings have the drive to solve chess, but the computational power will make it feasible around this time (as long as our pace continues).
On another note, the game of Tic-Tac-Toe, which is much, much simpler, has 2,653,002 possible calculations (with an open board). The computational power to solve Tic-Tac-Toe in roughly 2.5 seconds (at 1 million calculations per second) was achieved in 1990.
Moving backwards, in 1955 a computer had the power to solve Tic-Tac-Toe in about 1 month (1 calculation per second). Again, this is based on what $1000 would get you if you could package it into a computer (a $1000 desktop obviously did not exist in 1955), and that computer would have had to be devoted to solving Tic-Tac-Toe, which was just not the case in 1955; computation was expensive and would not have been used for this purpose. I don't believe there is any date when Tic-Tac-Toe was officially deemed "solved" by a computer, but I'm sure it lags behind the actual computational power.
Also, take into account that $1000 in 45 years will be worth about a quarter of what it is now, so much more money can go into projects such as this while computational power continues to get cheaper.
It actually is possible for both players to have winning strategies in infinite games with no well-ordering; however, chess is well-ordered. In fact, because of the 50-move rule, there is an upper limit to the number of moves a game can have, and thus there are only finitely many possible games of chess (which can be enumerated to solve exactly... theoretically, at least :)
Your end of the argument is supported by the way modern chess programs work now. They work that way because it's way too resource-intense to code a chess program to operate deterministically. They won't necessarily always work that way. It's possible that chess will someday be solved, and if that happens, it will likely be solved by a computer.
I think you are dead on. Machines like Deep Blue and Deep Thought are programmed with a number of predefined games, and clever algorithms to parse the trees into the ends of those games. This is, of course, a dramatic oversimplification. There is always a chance to "beat" the computer along the course of a game. By this I mean making a move that forces the computer to make a move that is less than optimal (whatever that is). If the computer cannot find the best path before the time limit for the move, it might very well make a mistake by choosing one of the less-desirable paths.
There is another class of chess programs that uses real machine learning, or genetic programming / evolutionary algorithms. Some programs have been evolved and use neural networks and the like to make decisions. In cases of this type, I would imagine that the computer might make "mistakes", but still end up winning.
There is a fascinating book on this type of GP called Blondie24 that you might read. It is about checkers, but it could apply to chess.
For the record, there are computers that can win or tie at checkers. I'm not sure whether the same could be done for chess. The number of moves is a lot higher. Also, things change because pieces can move in any direction, not just forwards and backwards. I think, although I'm not sure, that chess is deterministic, but that there are just way too many possible moves for a computer to currently determine them all in a reasonable amount of time.
From game theory, which is what this question is about, the answer is yes: chess can be played perfectly. The game space is known/predictable, and yes, if you had your grandchild's quantum computers you could probably eliminate all heuristics.
You could write a perfect tic-tac-toe machine nowadays in any scripting language, and it would play perfectly in real time.
Othello is another game that current computers can easily play perfectly, though the machine's memory and CPU will need a bit of help.
Chess is theoretically possible but not practically possible (in 2008).
i-Go is tricky; its space of possibilities exceeds the number of atoms in the universe, so it might take us some time to make a perfect i-Go machine.
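To make the tic-tac-toe point concrete, here is a minimal sketch in C++ of a perfect tic-tac-toe player using negamax-style minimax over the full game tree. All names here are illustrative, not from any particular library:

```cpp
#include <array>

// Board cells: 0 = empty, 1 = X, 2 = O.
// Returns 1 or 2 if that player has completed a line, 0 otherwise.
int winner(const std::array<int, 9>& b) {
    static const int lines[8][3] = {{0,1,2},{3,4,5},{6,7,8},
                                    {0,3,6},{1,4,7},{2,5,8},
                                    {0,4,8},{2,4,6}};
    for (auto& l : lines)
        if (b[l[0]] != 0 && b[l[0]] == b[l[1]] && b[l[1]] == b[l[2]])
            return b[l[0]];
    return 0;
}

// Minimax over the full tree: +1 if 'player' (the side to move) can force
// a win, -1 if it can only lose, 0 if best play on both sides is a draw.
int minimax(std::array<int, 9>& b, int player) {
    int w = winner(b);
    if (w != 0) return (w == player) ? 1 : -1;  // previous move ended the game
    int best = -2;                              // below any real score
    bool moved = false;
    for (int i = 0; i < 9; ++i) {
        if (b[i] != 0) continue;
        moved = true;
        b[i] = player;
        int score = -minimax(b, 3 - player);    // negamax: opponent's loss is our gain
        b[i] = 0;
        if (score > best) best = score;
    }
    return moved ? best : 0;                    // board full, no winner: draw
}
```

From the empty board this returns 0, reflecting the well-known result that tic-tac-toe is a draw with perfect play on both sides.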
Chess is an example of a matrix game, which by definition has an optimal outcome (think Nash equilibrium). If players 1 and 2 each make optimal moves, a certain outcome will ALWAYS be reached (whether it is a win, tie, or loss is still unknown).
As a chess programmer from the 1970's, I definitely have an opinion on this. What I wrote up about 10 years ago, still is basically true today:
"Unfinished Work and Challenges to Chess Programmers"
Back then, I thought we could solve Chess conventionally, if done properly.
Checkers was solved recently (Yay, University of Alberta, Canada!!!) but that was effectively done Brute Force. To do chess conventionally, you'll have to be smarter.
Unless, of course, Quantum Computing becomes a reality. If so, chess will be solved as easily as Tic-Tac-Toe.
In the early 1970's in Scientific American, there was a short parody that caught my attention. It was an announcement that the game of chess was solved by a Russian chess computer. It had determined that there is one perfect move for white that would ensure a win with perfect play by both sides, and that move is: 1. a4!
Lots of answers here make the important game-theoretic points:
Chess is a finite, deterministic game with complete information about the game state
You can solve a finite game and identify a perfect strategy
Chess is however big enough that you will not be able to solve it completely with a brute force method
However these observations miss an important practical point: it is not necessary to solve the complete game perfectly in order to create an unbeatable machine.
It is in fact quite likely that you could create an unbeatable chess machine (i.e. will never lose and will always force a win or draw) without searching even a tiny fraction of the possible state space.
The following techniques for example all massively reduce the search space required:
Tree pruning techniques like Alpha/Beta or MTD-f already massively reduce the search space
Provably winning positions. Many endings fall into this category: you don't need to search KR vs K, for example; it's a proven win. With some work it is possible to prove many more guaranteed wins.
Almost certain wins - for "good enough" play without any foolish mistakes (say about ELO 2200+?) many chess positions are almost certain wins, for example a decent material advantage (e.g. an extra Knight) with no compensating positional advantage. If your program can force such a position and has good enough heuristics for detecting positional advantage, it can safely assume it will win or at least draw with 100% probability.
Tree search heuristics - with good enough pattern recognition, you can quickly focus on the relevant subset of "interesting" moves. This is how human grandmasters play, so it's clearly not a bad strategy... and our pattern recognition algorithms are constantly getting better.
Risk assessment - a better conception of the "riskiness" of a position will enable much more effective searching by focusing computing power on situations where the outcome is more uncertain (this is a natural extension of Quiescence Search)
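As an illustration of the first technique in the list, here is a minimal alpha-beta sketch over a toy game tree. The `Node` type is purely illustrative (not a real chess position); the point is that alpha-beta returns the same value plain minimax would, while skipping branches that provably cannot change the result:

```cpp
#include <vector>
#include <algorithm>

// Toy game tree: a node is either a leaf carrying a value, or an
// internal node whose value is the max/min of its children.
struct Node {
    int value;                   // used only at leaves
    std::vector<Node> children;  // empty => leaf
};

int alphabeta(const Node& n, int alpha, int beta, bool maximizing) {
    if (n.children.empty()) return n.value;
    if (maximizing) {
        int best = -1000000;
        for (const Node& c : n.children) {
            best = std::max(best, alphabeta(c, alpha, beta, false));
            alpha = std::max(alpha, best);
            if (alpha >= beta) break;  // beta cutoff: opponent avoids this line
        }
        return best;
    } else {
        int best = 1000000;
        for (const Node& c : n.children) {
            best = std::min(best, alphabeta(c, alpha, beta, true));
            beta = std::min(beta, best);
            if (alpha >= beta) break;  // alpha cutoff
        }
        return best;
    }
}
```

On a three-branch tree whose minimizing children hold leaves {3,5}, {6,9}, {1,2}, the root value is 6, and the second leaf of the last branch is never visited.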
With the right combination of the above techniques, I'd be comfortable asserting that it is possible to create an "unbeatable" chess playing machine. We're probably not too far off with current technology.
Note that it's almost certainly harder to prove that this machine cannot be beaten. It would probably be something like the Riemann hypothesis: we would be pretty sure that it plays perfectly and would have empirical results showing that it never lost (including a few billion straight draws against itself), but we wouldn't actually have the ability to prove it.
Additional note regarding "perfection":
I'm careful not to describe the machine as "perfect" in the game-theoretic sense because that implies unusually strong additional conditions, such as:
Always winning in every situation where it is possible to force a win, no matter how complex the winning combination may be. There will be situations on the boundary between win/draw where this is extremely hard to calculate perfectly.
Exploiting all available information about potential imperfection in your opponent's play, for example inferring that your opponent might be too greedy and deliberately playing a slightly weaker line than usual on the grounds that it has a greater potential to tempt your opponent into making a mistake. Against imperfect opponents it can in fact be optimal to make a losing move if you estimate that your opponent probably won't spot the forced win and the move gives you a higher probability of winning yourself.
Perfection (particularly given imperfect and unknown opponents) is a much harder problem than simply being unbeatable.
It's perfectly solvable.
There are some 10^50 possible positions. Each position, by my reckoning, requires a minimum of 64 bytes to store (each square needs 2 affiliation bits and 3 piece bits, which rounds up to one byte per square). Once they are collated, the positions that are checkmates can be identified, and positions can be compared to form a relationship showing which positions lead to which others in a large outcome tree.
Then, the program needs only to find the root positions from which one side alone can force checkmate, if such positions exist. In any case, chess was fairly simply solved at the end of the first paragraph.
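The 64-byte estimate can be made concrete. Here is one possible (purely illustrative) C++ packing that spends one byte per square, with 2 affiliation bits and 3 piece bits as described above; the layout and names are assumptions, not a standard format:

```cpp
#include <cstdint>
#include <array>

// Piece type fits in 3 bits (7 values needed: none + 6 piece kinds).
enum Piece { NONE = 0, PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };

// Low 2 bits: affiliation (0 = empty, 1 = white, 2 = black).
// Next 3 bits: piece type. One byte per square, 64 bytes per position.
uint8_t encodeSquare(int affiliation, Piece piece) {
    return static_cast<uint8_t>((affiliation & 0x3) | (piece << 2));
}

int affiliationOf(uint8_t sq) { return sq & 0x3; }
Piece pieceOf(uint8_t sq)     { return static_cast<Piece>(sq >> 2); }

// A full position is then exactly the 64 bytes the answer estimates.
using Position = std::array<uint8_t, 64>;
```

A tighter packing (5 bits per square, 40 bytes) is possible, but the byte-per-square version matches the answer's arithmetic and is simpler to index.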
"If you search the entire space of all combinations of player 1/player 2 moves, the single move that the computer decides upon at each step is based on a heuristic."
There are two competing ideas there. One is that you search every possible move, and the other is that you decide based on a heuristic. A heuristic is a system for making a good guess. If you're searching through every possible move, then you're no longer guessing.
"Is there a perfect algorithm for chess?"
Yes, there is. Maybe it's for White to always win. Maybe it's for Black to always win. Maybe it's for both to always tie at least. We don't know which, and we'll never know, but it certainly exists.
See also
God's algorithm
I found this article by John MacQuarrie that references work by the "father of game theory" Ernst Friedrich Ferdinand Zermelo. It draws the following conclusion:
In chess either white can force a win, or black can force a win, or both sides can force at least a draw.
The logic seems sound to me.
There are two mistakes in your thought experiment:
If your Turing machine is not "limited" (in memory, speed, ...), you do not need to use heuristics; you can evaluate the final states (win, loss, draw). To find the perfect game you would then just need to use the Minimax algorithm (see http://en.wikipedia.org/wiki/Minimax) to compute the optimal moves for each player, which would lead to one or more optimal games.
There is also no limit on the complexity of the heuristic used. If you can calculate a perfect game, there is also a way to compute a perfect heuristic from it. If needed, it's just a function that maps chess positions to moves: "If I'm in situation S, my best move is M".
As others pointed out already, this will end in 3 possible results: white can force a win, black can force a win, one of them can force a draw.
The result of a perfect game of checkers has already been computed. If humanity does not destroy itself first, chess will be solved some day too, once computers have evolved enough to have the necessary memory and speed, or once we have quantum computers, or once someone (a researcher, chess expert, or genius) finds an algorithm that significantly reduces the complexity of the game. To give an example: what is the sum of all numbers between 1 and 1000? You can either calculate 1+2+3+4+5+...+999+1000, or you can simply calculate N*(N+1)/2 with N = 1000; result = 500500. Now imagine you don't know about that formula, you don't know about mathematical induction, you don't even know how to multiply or add numbers... So it may be possible that there is a currently unknown algorithm that ultimately reduces the complexity of this game so far that it would take just five minutes to calculate the best move on a current computer. Maybe it would even be possible to estimate it as a human with pen and paper, or even in your mind, given some more time.
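The sum example above can be checked directly: the brute-force loop and the closed form agree, which is exactly the kind of complexity collapse being hoped for in chess (O(N) work replaced by O(1)):

```cpp
// Brute force: N additions.
long long sumBruteForce(long long n) {
    long long total = 0;
    for (long long i = 1; i <= n; ++i) total += i;
    return total;
}

// Gauss's closed form: a constant number of operations.
long long sumClosedForm(long long n) {
    return n * (n + 1) / 2;
}
```

For N = 1000 both return 500500, matching the answer's figure.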
So, the quick answer is: If humanity survives long enough, it's just a matter of time!
I'm only 99.9% convinced by the claim that the size of the state space makes it impossible to hope for a solution.
Sure, 10^50 is an impossibly large number. Let's call the size of the state space n.
What's the bound on the number of moves in the longest possible game? Since all games end in a finite number of moves there exists such a bound, call it m.
Starting from the initial state, can't you enumerate all n states in O(m) space with a depth-first traversal? Sure, it takes O(n) time, but the arguments from the size of the universe don't directly address that. O(m) space might not even be very much. In O(m) space couldn't you also track, during this traversal, whether the continuation of any state along the path you are traversing leads to EitherMayWin, EitherMayForceDraw, WhiteMayWin, WhiteMayWinOrForceDraw, BlackMayWin, or BlackMayWinOrForceDraw? (There's a lattice depending on whose turn it is; annotate each state in the history of your traversal with the lattice meet.)
Unless I'm missing something, that's an O(n) time / O(m) space algorithm for determining which of the possible categories chess falls into. Wikipedia cites an estimate for the age of the universe at approximately 10^60 Planck times. Without getting into a cosmology argument, let's guess that there's about that much time left before the heat/cold/whatever death of the universe. That leaves us needing to evaluate one state every 10^10 Planck times, or every 10^-34 seconds. That's an impossibly short time (about 16 orders of magnitude shorter than the shortest times ever observed). Let's optimistically say that with a super-duper-good implementation running on top-of-the-line present-or-foreseen non-quantum, P-is-a-proper-subset-of-NP technology, we could hope to evaluate states (take a single step forward, categorize the resulting state as an intermediate state or one of the three end states) at a rate of 100 MHz (once every 10^-8 seconds). Since this algorithm is very parallelizable, this leaves us needing 10^26 such computers, or about one for every atom in my body, together with the ability to collect their results.
I suppose there's always some sliver of hope for a brute-force solution. We might get lucky and, in exploring only one of white's possible opening moves, both choose one with much-lower-than-average fanout and one in which white always wins or wins-or-draws.
We could also hope to shrink the definition of chess somewhat and persuade everyone that it's still morally the same game. Do we really need to require positions to repeat 3 times before a draw? Do we really need to make the running-away party demonstrate the ability to escape for 50 moves? Does anyone even understand what the heck is up with the en passant rule? ;) More seriously, do we really need to force a player to move (as opposed to either drawing or losing) when his or her only move to escape check or a stalemate is an en passant capture? Could we limit the choice of pieces to which a pawn may be promoted if the desired non-queen promotion does not lead to an immediate check or checkmate?
I'm also uncertain how much allowing each computer hash-based access to a large database of late-game states and their possible outcomes (which might be relatively feasible on existing hardware and with existing endgame databases) could help in pruning the search earlier. Obviously you can't memoize the entire function without O(n) storage, but you could pick a large integer and memoize that many endgames, enumerating backwards from each possible (or even not easily provably impossible, I suppose) end state.
I know this is a bit of a bump, but I have to put my 5 cents worth in here. It is possible for a computer, or a person for that matter, to end every single chess game that he/she/it participates in, in either a win or a stalemate.
To achieve this, however, you must know precisely every possible move and reaction and so forth, all the way through to every single possible game outcome. To visualize this, or to make an easy way of analysing this information, think of it as a mind map that branches out constantly.
The center node would be the start of the game. Each branch out of each node would represent a move, each one different from its sibling moves. Presenting it in this manner would take many resources, especially if you were doing it on paper. On a computer it would take possibly hundreds of terabytes of data, as you would have very many repetitive moves, unless you made the branches loop back.
To memorize such data, however, would be implausible, if not impossible. Making a computer recognize the most optimal move out of the immediately possible moves would be possible, but not plausible: that computer would need to process all the branches past that move, all the way to a conclusion, count all conclusions that result in a win or a stalemate, and then act on the ratio of winning to losing conclusions, and that would require RAM capable of processing terabytes of data, or more! With today's technology, a computer like that would cost more than the combined bank balance of the 5 richest men and/or women in the world!
So after all that consideration, it could be done; however, no one person could do it. Such a task would require 30 of the brightest minds alive today, not only in chess but in science and computer technology, and it could only be completed on a (let's put it in basic terms) extremely, ultimately, hyper super-duper computer, which couldn't possibly exist for at least a century. It will be done! Just not in this lifetime.
Mathematically, chess has been solved by the Minimax algorithm, which goes back to the 1920s (found by either Borel or von Neumann). Thus, a Turing machine can indeed play perfect chess.
However, the computational complexity of chess makes it practically infeasible. Current engines use several improvements and heuristics. Top engines today have surpassed the best humans in terms of playing strength, but because of the heuristics they are using, they might not play perfectly even when given infinite time (e.g., hash collisions could lead to incorrect results).
The closest things we currently have to perfect play are endgame tablebases. The typical technique used to generate them is called retrograde analysis. Currently, all positions with up to six pieces have been solved.
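Real tablebase generation is involved, but the backward-induction idea behind retrograde analysis can be shown on a toy game. This sketch (illustrative only, not how chess tablebases are actually built) labels every position of a simple subtraction game as a win or loss for the side to move by working backwards from the terminal position:

```cpp
#include <vector>

// Subtraction game: a player removes 1, 2, or 3 stones; whoever takes the
// last stone wins. We solve every position up to maxStones by backward
// induction: a position is a win for the side to move iff some move
// reaches a position that is a loss for the opponent.
std::vector<bool> solveSubtractionGame(int maxStones) {
    std::vector<bool> winForSideToMove(maxStones + 1, false);
    // 0 stones: the side to move has already lost (opponent took the last stone).
    for (int n = 1; n <= maxStones; ++n)
        for (int take = 1; take <= 3 && take <= n; ++take)
            if (!winForSideToMove[n - take])
                winForSideToMove[n] = true;
    return winForSideToMove;
}
```

As expected for this game, exactly the multiples of 4 come out as losses for the side to move; chess tablebases apply the same "work backwards from terminal positions" idea, just over an enormously larger state space.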
It just might be solvable, but something bothers me:
Even if the entire tree could be traversed, there is still no way to predict the opponent's next move. We must always base our next move on the state of the opponent, and make the "best" move available. Then, based on the next state we do it again.
So, our optimal move might be optimal only if the opponent moves in a certain way. For some moves by the opponent, our last move might have been sub-optimal.
I just fail to see how there could be a "perfect" move in every step.
For that to be the case, there must for every state [in the current game] be a path in the tree which leads to victory, regardless of the opponent's next move (as in tic-tac-toe), and I have a hard time figuring that.
Yes. In math, chess is classified as a determined game: one of the players has a perfect strategy from the start. This has been proven to be true even for an infinite chess board, so one day a fast, effective AI will probably find the perfect strategy, and the game will be done for.
More on this in this video: https://www.youtube.com/watch?v=PN-I6u-AxMg
There is also quantum chess, for which there is no mathematical proof that it is a determined game: http://store.steampowered.com/app/453870/Quantum_Chess/
And here is a detailed video about quantum chess: https://chess24.com/en/read/news/quantum-chess
Of course
There are only about 10^50 possible combinations of pieces on the board. With that in mind, to play out every combination you would need to make fewer than 10^50 moves (including repetitions, multiply that number by 3). So there are fewer than 10^100 move sequences in chess. Just pick those that lead to checkmate and you're good to go.
64-bit math (= the chessboard) and bitwise operators (= the next possible moves) are all you need. It's that simple. Brute force will usually find the best move. Of course, there is no universal algorithm for all positions. In real life the calculation is also limited in time; a timeout will stop it. A good chess program means heavy code (passed pawns, doubled pawns, etc.); small code can't be very strong. Opening and endgame databases just save processing time, as a kind of preprocessed data. The device (meaning the OS, threading possibilities, environment, and hardware) defines the requirements. The programming language is important. Anyway, the development process is interesting.
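A small example of the "64-bit math = chessboard" idea: with one bit per square (bit 0 = a1, bit 7 = h1, bit 63 = h8), all knight moves can be generated with shifts and masks alone. The mask constants are the standard file masks that prevent moves from wrapping around the board edges:

```cpp
#include <cstdint>

// Knight attack generation on a bitboard: each of the eight knight
// directions is a shift, masked so a knight near the a/b or g/h files
// cannot "wrap" onto the other side of the board.
uint64_t knightAttacks(uint64_t knights) {
    const uint64_t notA  = 0xFEFEFEFEFEFEFEFEULL;  // landing square not on file a
    const uint64_t notAB = 0xFCFCFCFCFCFCFCFCULL;  // not on files a or b
    const uint64_t notH  = 0x7F7F7F7F7F7F7F7FULL;  // not on file h
    const uint64_t notGH = 0x3F3F3F3F3F3F3F3FULL;  // not on files g or h
    return ((knights << 17) & notA)   // up 2, right 1
         | ((knights << 15) & notH)   // up 2, left 1
         | ((knights << 10) & notAB)  // up 1, right 2
         | ((knights <<  6) & notGH)  // up 1, left 2
         | ((knights >> 17) & notH)   // down 2, left 1
         | ((knights >> 15) & notA)   // down 2, right 1
         | ((knights >> 10) & notGH)  // down 1, left 2
         | ((knights >>  6) & notAB); // down 1, right 2
}
```

For a knight on b1 this yields exactly the squares a3, c3, and d2; the same function handles any number of knights at once, which is why engines favor this representation.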
