I'm trying to efficiently filter out illegal moves for a game engine for an old board game. Neither the rules nor the game itself really matter.
The board is a 2D grid with walls around its borders. On every move the player moves one of his pawns and places a wall of his own (horizontal or vertical, of length 2, running between any 4 squares). For every possible wall I need to test whether it splits the board into unreachable segments, or rather whether it blocks every path from a pawn to its goal.
For example, if there's a wall running vertically across the whole middle of the board, it splits the board into two segments; and if there's a pawn on one side and its goal is on the other, that's an illegal wall placement. I've tried the graph approach from the answers to my previously posted question, but it's pretty slow, since I need to generate a move tree and it grows exponentially.
I'm not exactly sure how to approach this. My guess is to preprocess the board, then check for every candidate wall whether it connects to an existing wall and whether that creates new sections, and afterward check whether the crucial squares end up in different sections.
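A simpler alternative is a plain reachability test: tentatively add the wall's blocked edges and flood-fill (BFS) from the pawn; the placement is legal only if the goal is still reachable. Below is a minimal sketch of that idea, assuming walls are stored as a set of blocked edges between orthogonally adjacent squares; the function and parameter names are placeholders, not your engine's API.

```python
from collections import deque

def reachable(board_w, board_h, blocked_edges, start, goal):
    """Flood fill (BFS) from `start`; returns True if `goal` is still reachable.

    `blocked_edges` is a set of frozensets {a, b} for orthogonally adjacent
    squares a and b that a wall separates.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            return True
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < board_w and 0 <= ny < board_h \
               and (nx, ny) not in seen \
               and frozenset({(x, y), (nx, ny)}) not in blocked_edges:
                seen.add((nx, ny))
                queue.append((nx, ny))
    return False

def wall_is_legal(board_w, board_h, blocked_edges, wall_edges, pawn, goal):
    """Tentatively place the wall and test that the pawn can still reach its goal."""
    return reachable(board_w, board_h, blocked_edges | wall_edges, pawn, goal)
```

This checks one pawn/goal pair per call; for every candidate wall you would run it once per pawn that must still be able to reach its goal.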
I am developing software for a two-player board game which has 10x4 cells, and each player has 9 pieces. Initially, player 1's pieces are at the top of the board and player 2's pieces are all at the bottom (similar to chess, but a lot less complex!).
I have used the minimax algorithm to calculate the next best move. The algorithm itself seems to work fine; the problem I am facing is in the heuristic value calculation.
If no best move is found within the depth I am searching to (currently 4), my code simply takes the first move it finds in the move list. That is probably because the scores of all the moves are the same down to depth 4!
So it just keeps stalling. For example, it will move piece 1 from position A to B on the first turn, and on the second turn it will move the same piece from B back to A.
It keeps on doing that until the opponent moves closer to my pieces.
What I would love to know is how to make sure that if the opponent is NOT closing in, my pieces close in on the opponent instead of stalling.
Currently, I am calculating the heuristic value from the difference between the number of my pieces and the number of the opponent's pieces.
How do I calculate the values so that moves which lead to positions closer to the opponent's pieces are selected? I appreciate your help, thanks!
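For illustration, a heuristic along the lines being asked about might keep the piece-count difference as the dominant term and subtract a small distance penalty, so that among otherwise equal positions the ones where my pieces have advanced score higher. A minimal sketch, assuming hypothetical `board.my_pieces()` / `board.opp_pieces()` helpers that return piece coordinates; the weights are guesses to be tuned:

```python
def heuristic(board):
    """Piece-count difference plus a small 'advance towards the opponent' term.

    `board.my_pieces()` / `board.opp_pieces()` are assumed placeholders that
    return lists of (row, col) coordinates for each side's pieces.
    """
    mine = board.my_pieces()
    theirs = board.opp_pieces()

    # Dominant term: material difference, scaled so it always outweighs distance.
    material = 1000 * (len(mine) - len(theirs))

    # Tie-breaker: total Manhattan distance from each of my pieces to the nearest
    # opposing piece. Smaller is better, so it is subtracted.
    closeness_penalty = sum(
        min(abs(mr - tr) + abs(mc - tc) for tr, tc in theirs)
        for mr, mc in mine
    ) if theirs else 0

    return material - closeness_penalty
```

With a term like this, two positions with equal material no longer evaluate identically, so the search has a reason to prefer the advancing move over shuffling a piece back and forth.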
Consider an infinite 2D board. We have two players at points P1 and P2 on the board. They need to traverse a sequence of boxes on the board G1, G2, G3 .... Gn.
At the start only G1 is known. The coordinates of each subsequent box become known only after the box before it has been traversed. The players can move one at a time in one of the 8 possible directions on the board in unit time. We need to find the minimum time to traverse all the required boxes using the two players.
The obvious solution is a greedy approach where the player nearer to the box that needs to be traversed moves towards it. Then we calculate the nearer player again for the next G. I feel a better solution exists to this problem that I cannot get my head around right now. Does a better solution exist?
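For reference, with moves allowed in all 8 directions the time for one player to reach a box is the Chebyshev distance max(|dx|, |dy|), so the greedy rule can be written down in a few lines. A minimal sketch (coordinates are placeholder tuples):

```python
def chebyshev(a, b):
    """Minimum number of unit-time king moves between two points."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def greedy_assign(p1, p2, goal):
    """Return which player should walk to the goal, and how long that takes."""
    d1, d2 = chebyshev(p1, goal), chebyshev(p2, goal)
    return ("P1", d1) if d1 <= d2 else ("P2", d2)
```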
I think, since the board is infinite, we should try to cover as much area as possible with both players within n moves (for every n). This way we maximize the number of fields we can reach within n moves.
So my strategy would be:
Who is nearer to the next box?
Let this be P1.
Let P1 go to the box (shortest path) and move the other player P2 in the directly opposite direction. This maximizes the distance between the two players, which minimizes the overlap of the areas they can reach in n steps, and therefore maximizes the total area the two players can cover in n steps for the next box.
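A minimal sketch of that rule for one time step (positions are placeholder tuples, and it is assumed both players may move in the same unit of time):

```python
def sign(x):
    """-1, 0 or +1 depending on the sign of x."""
    return (x > 0) - (x < 0)

def next_positions(p1, p2, goal):
    """One step of the strategy: the nearer player steps towards the goal,
    the other player steps in the directly opposite direction to spread out."""
    def towards(p):
        return (p[0] + sign(goal[0] - p[0]), p[1] + sign(goal[1] - p[1]))

    def away(p):
        return (p[0] - sign(goal[0] - p[0]), p[1] - sign(goal[1] - p[1]))

    d1 = max(abs(p1[0] - goal[0]), abs(p1[1] - goal[1]))
    d2 = max(abs(p2[0] - goal[0]), abs(p2[1] - goal[1]))
    return (towards(p1), away(p2)) if d1 <= d2 else (away(p1), towards(p2))
```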
Choosing the player closest to the next box is the best heuristic you can find.
Explanation: Whenever a new goal appears, there are only two options: Move player 1 or player 2 to the goal at the cost of the distance in fields. We also prefer a situation where players are far apart in contrast to close together. The most extreme case would be that both players are on the same field which is as good as having only one player. Since the playing field is infinite, being far apart is always better.
If this is correct, then you should ask yourself: should I really choose the player that is further away from the goal, and end up with the players closer together than they would have been if I had sent the other one?
Certainly not. On an infinite field, choosing the closest player helps with both minimizing the current cost and improving the situation for the next goal (keeping the players far apart).
Since the problem is non-deterministic, the solution must be heuristic.
The "price" at each turn is the number of moves taken in the turn. This can be PrN1 or PrN2 for the number of moves for player1 or player2 respectively in turn N.
The "score" at each turn can be thought of as the probability that a certain arrangement (the position of both players) after the moves will be a good arrangement for the rest of the turns.
You'd want to use an evaluation function that considers both price and score to make the decision.
The problem is, the only useful scoring function is some function of the distance between the players (the larger the distance, the greater the chance of being close to the next boxes), and this is exactly in sync with the minimum price. Any choice that keeps the players as far apart as possible is necessarily also the cheapest one to make.
What all that means is that the best algorithm is simply to move the closest player, which is the first instinct you had.
Had the board not been infinite, you could create a better scoring function, one that considers where the next boxes are likely to appear and gives lower scores to arrangements that leave a player near the edges of the board.
I implemented a minimax algorithm for TTT. When I make the AI player make the first move, it evaluates all minimax values of the possible moves as 0. That means it can choose any square on the grid as the first move. However, any Tic Tac Toe guide will tell you that choosing a corner or center square when making the first move is the better choice since there is a higher chance of winning.
Why is my algorithm not reflecting this?
EDIT: To clarify, what I'm trying to ask is: is this a limitation of the minimax algorithm or is my implementation incorrect?
Your algorithm does not have to reflect this: with perfect play, tic-tac-toe always ends in a draw, so the minimax value of every opening move really is 0. The corner and center are only "better" against imperfect opponents; if you try out all starting positions, you will find that they lie on more paths to win than the other cells.
With tic-tac-toe's lack of complexity, the minimax can look ahead all the way through the end of the game, starting with the very first move. The number of available moves goes down quickly as you progress through the game, so the full search finishes pretty fast.
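For concreteness, here is a minimal full-depth minimax sketch for tic-tac-toe (a sketch, not your implementation; the board is assumed to be a flat list of 9 cells holding 'X', 'O' or None). Run it and every one of the nine opening moves for X comes back with value 0, matching the behaviour described in the question:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of `board` with `player` to move: +1 if X wins, -1 if O wins, 0 draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full, no winner: draw
    values = []
    for i in moves:
        board[i] = player
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = None
    return max(values) if player == 'X' else min(values)

# Every opening move for X evaluates to 0 (perfect play always ends in a draw):
print([minimax(['X' if j == i else None for j in range(9)], 'O') for i in range(9)])
```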
With more complex games (othello, checkers, chess), the so-called "opening books" become more important. The number of available moves in the beginning of the game is enormous, so a traditional approach is to pick a move from the "book" of openings, and stay with pre-calculated "book moves" for the first three to six plays. Moves outside the book are not considered, saving a lot of CPU on moves that remain essentially the same.
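For illustration only, an opening book can be as simple as a lookup table keyed by the moves played so far; the entries below are made up, not taken from any real book:

```python
# Hypothetical opening book: key = sequence of moves played so far (cell indices),
# value = the pre-calculated reply. The entries are illustrative, not real theory.
OPENING_BOOK = {
    (): 4,        # empty board -> take the center (cell index 4)
    (4, 0): 8,    # center answered by a corner -> take the opposite corner
}

def choose_move(moves_so_far, search):
    """Play from the book while the position is in it, otherwise fall back to search."""
    key = tuple(moves_so_far)
    if key in OPENING_BOOK:
        return OPENING_BOOK[key]
    return search(moves_so_far)
```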
I'm working on a snake game (Nibbles in Linux) that is played on a 60*60 field, with four snakes competing for an apple which is randomly placed.
I've implemented the movement of my snake with the A* (A star) Algorithm.
My problem is this: when I'm not the snake nearest to the apple, I don't want to go after it, because my chance of getting it is lower than at least one other snake's. Instead, I want to move to a spot from which, when the next apple is generated, I'll be the snake nearest to it. In other words, I'm looking for the cell that is nearest to the maximum number of potential apple locations.
Please suggest any good way or any algorithm that can help me to find this place.
Here is an image of the game. The red points are the snakes' heads.
I tested a few approaches and finally decided on this one:
I think the best way is to make a 60*60 2D array and then, for each node x of the array, calculate how many walkable (non-blocked) nodes of the field x is the nearest to.
The answer is then the node with the maximum count, and I set that node as the goal.
But because I must find the next move in less than 0.1 sec, and this takes four nested loops of size 60 (60^4 iterations) with the A* algorithm still to run afterwards, it would never finish in less than 0.1 sec.
So, since the snake can't move diagonally and only goes up, down, right or left, and since in each 0.1 sec cycle I can only move one unit anyway, I decided not to check all the nodes: I only check the 4 neighbouring nodes (up, down, left, right) and move to the one whose count is the maximum.
Now it's working almost right. ;)
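A minimal sketch of that neighbour check, under one possible reading of "nearest to": a candidate cell gets a point for every walkable cell it would reach strictly sooner (by BFS distance, so blocks are respected) than every other snake's head. `grid`, `my_head` and `other_heads` are placeholder names, not the game's actual data structures:

```python
from collections import deque

def bfs_distances(grid, start):
    """BFS distance from `start` to every reachable walkable cell of the grid.
    `grid[y][x]` is True when the cell is walkable; unreachable cells stay None."""
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]
    dist[start[1]][start[0]] = 0
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((nx, ny))
    return dist

def best_neighbour(grid, my_head, other_heads):
    """Of the 4 neighbouring cells, pick the one that would be strictly nearer than
    every other snake's head to the largest number of walkable cells."""
    h, w = len(grid), len(grid[0])
    other_maps = [bfs_distances(grid, head) for head in other_heads]
    x, y = my_head
    best, best_score = None, -1
    for cand in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        cx, cy = cand
        if not (0 <= cx < w and 0 <= cy < h) or not grid[cy][cx]:
            continue  # off the board or blocked
        mine = bfs_distances(grid, cand)
        score = sum(
            1
            for yy in range(h) for xx in range(w)
            if grid[yy][xx] and mine[yy][xx] is not None
            and all(m[yy][xx] is None or mine[yy][xx] < m[yy][xx] for m in other_maps)
        )
        if score > best_score:
            best, best_score = cand, score
    return best
```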
Since you have already implemented A*, after you generate your map, you could use A* to create a map of values for each cell based on the total cost from each cell to visit every other cell. If you generate this after you've placed your blocks, then your weighted map will account for their presence.
To generate this, off the top of my head, I would say you could start from each cell, and assign it one point for each cell it can visit in one turn. For example, a cell in the corner would get two points for the first move, because it can only access two other cells. It would get five points for the second turn, because it can access five cells in two moves. Repeat until you've visited all the other squares, and then you have the score for that square.
From that point, if an apple appears and your snake is not the closest to it, your snake could head for the highest weighted cell, which you've calculated beforehand.
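A minimal sketch of that pre-computation, under the scoring just described (one point per walkable cell reachable within each successive number of moves, until everything reachable has been counted); `grid[y][x]` is assumed True for walkable cells, and the weights would be generated once, right after the blocks are placed:

```python
from collections import deque

def cell_score(grid, start):
    """Score a cell as described above: for each turn n, add one point per walkable
    cell reachable within n moves, until every reachable cell has been counted."""
    h, w = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] and nxt not in dist:
                dist[nxt] = dist[(x, y)] + 1
                queue.append(nxt)
    if len(dist) <= 1:
        return 0  # isolated cell
    max_d = max(dist.values())
    # A cell at distance d is counted on every turn from d up to max_d.
    return sum(max_d - d + 1 for cell, d in dist.items() if cell != start)

def weighted_map(grid):
    """Pre-computed weight for every walkable cell; higher means 'more central'."""
    return {
        (x, y): cell_score(grid, (x, y))
        for y in range(len(grid)) for x in range(len(grid[0]))
        if grid[y][x]
    }
```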
Edit: Please see comments below for a more advantageous solution.
If you are nearest to the apple you should go and get it, but if you are far from the apple your best chance is to walk towards the middle of the map, so you should find a strategy for occupying the middle of the map.
You can divide your map into four zones (clockwise): upper left, upper right, bottom right and bottom left (1, 2, 3, 4). Compare two snakes: suppose the apple is currently in zone 1 (assume it sits at the zone's center, on average) and you are at the center of the map, while your opponent can be in zone 1, 2, 3 or 4 (again assume it is at the center of its zone, to keep the averaging simple). If the opponent is in zone 1, it has the better chance (1-0 in its favour); if it's in zone 2 or 4, your distance is sqrt(2)/2 while your opponent's is 1, so you are nearer; and if your opponent is in zone 3, your distance is sqrt(2)/2 and your opponent's is sqrt(2). So in 3 of the 4 cases against one opponent you have the better chance.
But because your map has some blocks, you should calculate the center position differently: for each point in your grid, calculate its distance to all other points. This takes about 60^2 * 60^2 steps, which can be done quickly, and then find the cells with the minimum total sum (you can select the best 10 cells). These cells can be your centers; you should keep moving from one center to another, except when you are nearest to the apple, or when your snake eats the apple and wants to come back to the nearest center.
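A minimal sketch of that center computation, using BFS distances so the blocks are taken into account (`grid[y][x]` is assumed True for walkable cells; names are placeholders):

```python
from collections import deque
from heapq import nsmallest

def total_bfs_distance(grid, start):
    """Sum of BFS distances from `start` to every other reachable walkable cell."""
    dist = {start: 0}
    queue = deque([start])
    total = 0
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < len(grid[0]) and 0 <= ny < len(grid) and grid[ny][nx] \
               and nxt not in dist:
                dist[nxt] = dist[(x, y)] + 1
                total += dist[nxt]
                queue.append(nxt)
    return total

def best_centers(grid, k=10):
    """The k walkable cells with the smallest total distance to all other cells."""
    cells = [(x, y) for y in range(len(grid)) for x in range(len(grid[0])) if grid[y][x]]
    return nsmallest(k, cells, key=lambda c: total_bfs_distance(grid, c))
```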
Nearest to the maximum number of locations is the center, as others have stated. Nearer to the maximum number of locations than the other snakes is a much different and harder question. In that case, I would A* from the head of each snake to see who has the most squares under control. That's the base score. Next, as I'm drawing a blank, I'd Monte Carlo a random set of points around the map and choose the point that gave the highest score as a destination. If you had the processing power, you could try every point on the grid and choose the best, as K.G. suggested, but that could get pretty intense.
The true test is, once you find your point, to figure out how far into the future it takes you to get there and to run some AI for the other snakes to see whether they will intercept you. You start getting into plies, like chess. :)
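A minimal sketch of that Monte Carlo step, under one reading of "squares under control" (cells my snake would reach strictly sooner than every other head, measured by BFS distance); `grid` and the head positions are placeholders:

```python
import random
from collections import deque

def bfs_map(grid, start):
    """BFS distance from `start` to every reachable walkable cell."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < len(grid[0]) and 0 <= ny < len(grid) and grid[ny][nx] \
               and nxt not in dist:
                dist[nxt] = dist[(x, y)] + 1
                queue.append(nxt)
    return dist

def monte_carlo_destination(grid, other_heads, samples=200):
    """Sample random walkable cells and pick the one controlling the most squares."""
    others = [bfs_map(grid, head) for head in other_heads]
    walkable = [(x, y) for y in range(len(grid)) for x in range(len(grid[0])) if grid[y][x]]
    candidates = random.sample(walkable, min(samples, len(walkable)))

    def control_score(cell):
        # Cells this candidate reaches strictly before every other snake's head.
        mine = bfs_map(grid, cell)
        return sum(1 for c, d in mine.items()
                   if all(c not in o or d < o[c] for o in others))

    return max(candidates, key=control_score)
```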