I want to develop a 5x7 tic-tac-toe game with two players: one computer and one human. The player who first makes four in a row, column, or diagonal wins. If I use a brute-force technique, the computer takes a long time to decide its next move. Can anyone suggest an algorithm with lower time complexity?
Thanks in advance.
Alpha-beta pruning is a good algorithm for searching for the best next move.
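As a sketch of what that looks like for this game, here is a depth-limited negamax search with alpha-beta pruning on a 5x7 board with a four-in-a-row win condition. The zero score at the depth cutoff is a placeholder; a real engine would add a heuristic that rewards open threes and the like:

```python
import math

ROWS, COLS = 5, 7
WIN = 4  # four in a row/column/diagonal wins

def runs(board):
    # yield every horizontal, vertical, and diagonal run of length WIN
    for r in range(ROWS):
        for c in range(COLS):
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                er, ec = r + dr * (WIN - 1), c + dc * (WIN - 1)
                if 0 <= er < ROWS and 0 <= ec < COLS:
                    yield [board[r + dr * i][c + dc * i] for i in range(WIN)]

def winner(board):
    for run in runs(board):
        if run[0] != '.' and all(x == run[0] for x in run):
            return run[0]
    return None

def alphabeta(board, depth, alpha, beta, player, opponent):
    w = winner(board)
    if w == player:
        return 1
    if w == opponent:
        return -1
    empties = [(r, c) for r in range(ROWS) for c in range(COLS)
               if board[r][c] == '.']
    if depth == 0 or not empties:
        return 0                      # draw / heuristic placeholder
    best = -math.inf
    for r, c in empties:
        board[r][c] = player
        # negamax formulation: the opponent's best score is our worst
        score = -alphabeta(board, depth - 1, -beta, -alpha, opponent, player)
        board[r][c] = '.'
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:             # prune: opponent won't allow this line
            break
    return best

def best_move(board, player, opponent, depth=3):
    empties = [(r, c) for r in range(ROWS) for c in range(COLS)
               if board[r][c] == '.']
    def score(move):
        r, c = move
        board[r][c] = player
        s = -alphabeta(board, depth - 1, -math.inf, math.inf, opponent, player)
        board[r][c] = '.'
        return s
    return max(empties, key=score)
```

Even at shallow depths this is far faster than exhaustive brute force because the cutoffs skip branches the opponent would never allow; move ordering (trying promising cells first) improves the pruning further.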
I have a floating-point number x in [1, 500] that generates a binary outcome y = 1 with some probability p. I'm trying to find the x that generates the most 1s, i.e. has the highest p. I'm assuming there is only one maximum.
Is there an algorithm that converges quickly to the x with the highest p, while making sure it doesn't jump around too much once it gets close, e.g. within 0.1% of the optimal x? Specifically, it would be great if it stabilizes once it is within 0.1% of the optimal x.
I know this can be done with simulated annealing, but I don't think I should hard-code a temperature schedule, because I need the same algorithm to work when x ranges over [1, 3000] or when the distribution of p is different.
This paper describes a smart hill-climbing algorithm. The basic idea is to take n samples as starting points. Simplified to one dimension for your problem, the algorithm is as follows:
1. Take n sample points in the search space. The paper uses Latin Hypercube Sampling because the data there is assumed to be high-dimensional; in your one-dimensional case, plain random sampling works fine.
2. For each sample point, gather points from its "local neighborhood" and fit a best-fit quadratic curve to them. Take the new maximum candidate from the quadratic curve. If the objective value of the new candidate is actually higher than the previous one, update the sample point to the new candidate. Repeat this step with a smaller "local neighborhood" on each iteration.
3. Use the best point among the sample points.
4. Restart: repeat steps 2 and 3, then compare the maxima. If there is no improvement, stop; otherwise repeat.
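The steps above can be sketched in one dimension as follows. For clarity the demo objective is deterministic; with a Bernoulli y you would pass an f that averages many draws at each x. All function names and parameter defaults here are illustrative choices, not the paper's:

```python
import random

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the normal equations,
    solved with Cramer's rule (fine for a 3x3 system). Returns (a, b),
    or None if the system is near-singular."""
    s = [sum(x ** k for x in xs) for k in range(5)]
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    m = [[s[4], s[3], s[2]],
         [s[3], s[2], s[1]],
         [s[2], s[1], s[0]]]
    rhs = [t[2], t[1], t[0]]

    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    d = det3(m)
    if abs(d) < 1e-9:
        return None
    def solve(j):  # replace column j with the right-hand side
        return det3([[rhs[i] if k == j else m[i][k] for k in range(3)]
                     for i in range(3)]) / d
    return solve(0), solve(1)

def smart_hill_climb(f, lo, hi, n_starts=4, n_neighbors=7,
                     rounds=10, shrink=0.7, seed=0):
    """1-D sketch: repeatedly fit a quadratic to samples from a shrinking
    neighborhood and jump to its vertex."""
    rng = random.Random(seed)
    best_x, best_y = None, float('-inf')
    for _ in range(n_starts):
        x = rng.uniform(lo, hi)          # random start (no LHS needed in 1-D)
        radius = (hi - lo) / 4
        for _ in range(rounds):
            xs = [min(hi, max(lo, x + rng.uniform(-radius, radius)))
                  for _ in range(n_neighbors)]
            ys = [f(v) for v in xs]
            fit = fit_quadratic(xs, ys)
            if fit is not None and fit[0] < 0:   # concave: vertex is a maximum
                candidate = min(hi, max(lo, -fit[1] / (2 * fit[0])))
            else:                                # fall back to best sample
                candidate = xs[ys.index(max(ys))]
            if f(candidate) > f(x):
                x = candidate
            radius *= shrink                     # tighten the neighborhood
        if f(x) > best_y:
            best_x, best_y = x, f(x)
    return best_x
```

The shrinking radius is what gives you the stabilization you asked for: once the search is near the optimum, the neighborhood is small, so the candidate moves become correspondingly small, and everything scales automatically with the width of the search range, with no temperature schedule to tune.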
I'm looking to create a program to solve a Flow Free-style puzzle for me, in a similar fashion to this one. I have completed the piece-searching portion and can identify each of the pieces, but I'm stuck on where to begin solving the puzzle.
Backtracking is an option, but for very large board sizes it has high complexity and takes too long.
Is there a suitable algorithm to solve the game efficiently? Can heuristics help solve the board?
I'm looking for a direction to start.
It's hard to understand the precise logic of the game you posted. However, if you want something more efficient than DFS/BFS, you should take a look at A* search: http://theory.stanford.edu/~amitp/GameProgramming/
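For illustration, here is a minimal A* on a 4-connected grid with a Manhattan-distance heuristic. The grid encoding with `#` walls is just an assumption for this sketch; you would adapt the neighbor and cost functions to the actual puzzle:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid of strings; '#' cells are
    walls. Returns the number of moves, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible for unit-cost grid moves
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
    best_g = {start: 0}
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            return g
        if g > best_g.get(cell, float('inf')):
            continue                      # stale heap entry, skip it
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                ng = g + 1
                if ng < best_g.get((nr, nc), float('inf')):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

The heuristic is where your domain knowledge goes: as long as it never overestimates the true remaining cost, A* explores far fewer states than plain BFS while still returning an optimal answer.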
I am trying to understand basic chess algorithms. I have not read the literature in depth yet, but after some cogitating, here is my attempt:
1) Assign weight values to the pieces (e.g. a bishop is more valuable than a pawn)
2) Define a heuristic function that attaches a value to a particular move
3) Build a minimax tree to store all possible moves. Prune the tree via alpha-beta pruning.
4) Traverse the tree to find the best move for each player
Is this the core "big picture" idea of chess algorithms? Can someone point me to resources that go into more depth on chess algorithms?
The following is an overview of chess engine development.
1. Create a board representation.
In an object-oriented language, this will be an object that will represent a chess board in memory. The options at this stage are:
Bitboards
0x88
8x8
Bitboards are the recommended representation, for many reasons.
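To make the idea concrete, here is a tiny sketch of bitboards using Python integers as 64-bit sets (the square numbering with a1 as bit 0 is one common convention):

```python
FULL = 0xFFFF_FFFF_FFFF_FFFF  # 64-bit mask; Python ints are unbounded

def square(file, rank):
    """Map file a-h and rank 1-8 to a bit index 0..63 (a1 = 0, h8 = 63)."""
    return (rank - 1) * 8 + (ord(file) - ord('a'))

def bit(file, rank):
    return 1 << square(file, rank)

# One bitboard per piece type and color; here, white pawns on their home rank
white_pawns = 0
for f in "abcdefgh":
    white_pawns |= bit(f, 2)

# Single pawn pushes are one shift of the whole set at once -- this
# per-set (rather than per-piece) parallelism is why bitboards are fast
single_pushes = (white_pawns << 8) & FULL
```

Move generation, attack maps, and evaluation terms all reduce to shifts, ANDs, and ORs over such 64-bit words, which modern CPUs execute in a single instruction each.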
2. Create an evaluation function.
This simply takes a board and the side to evaluate as arguments and returns a score. The method signature will look something like:
int Evaluate(Board boardPosition, int sideToEvaluateFor);
This is where you use the weights assigned to each piece. This is also where you would use any heuristics if you so desire. A simple evaluation function would add weights of sideToEvaluateFor's pieces and subtract weights of the opposite side's pieces. Such an evaluation function is of course too naive for a real chess engine.
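A sketch of such a naive material-count evaluation, using a made-up board representation (a dict from square name to a (side, piece) pair) purely for illustration:

```python
# Common centipawn weights; the king gets 0 because it is never captured
PIECE_VALUES = {'P': 100, 'N': 320, 'B': 330, 'R': 500, 'Q': 900, 'K': 0}

def evaluate(board, side_to_evaluate_for):
    """Naive material count: sum our piece weights, subtract theirs.
    `board` maps square names to (side, piece), e.g. {'e1': ('w', 'K')} --
    an invented representation just for this sketch."""
    score = 0
    for owner, piece in board.values():
        value = PIECE_VALUES[piece]
        score += value if owner == side_to_evaluate_for else -value
    return score
```

A real engine layers positional terms on top of this: piece-square tables, pawn structure, king safety, mobility, and so on.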
3. Create a search function.
This will be, like you said, something on the lines of a MiniMax search with Alpha-Beta pruning. Some of the popular search algorithms are:
NegaMax
NegaScout
MTD(f)
The basic idea is to try all variations to a certain maximum depth and choose the move recommended by the variation that results in the highest score. The score for each variation is the value returned by the evaluation function for the board position at the maximum depth.
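The shape of that search, in NegaMax form with alpha-beta pruning, can be sketched generically. To keep it self-contained and testable, it is demoed here on a toy Nim game (take 1-3 stones; whoever takes the last stone wins) rather than chess, with `evaluate` playing the role of the Evaluate method above:

```python
def negamax(state, depth, alpha, beta, moves, apply_move, evaluate):
    """Generic negamax with alpha-beta pruning. `evaluate` scores `state`
    from the point of view of the side to move, so a child's score is
    simply negated for the parent."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    best = float('-inf')
    for m in legal:
        score = -negamax(apply_move(state, m), depth - 1, -beta, -alpha,
                         moves, apply_move, evaluate)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:   # the opponent already has a better option: prune
            break
    return best

# Toy demo game: state is the number of stones left
def nim_moves(n):
    return list(range(1, min(n, 3) + 1))

def nim_apply(n, m):
    return n - m

def nim_eval(n):
    # the side to move with no stones left has lost (opponent took the last)
    return -1 if n == 0 else 0
```

For chess, `state` would be the board representation from step 1, `moves` the legal move generator, and `evaluate` the function from step 2; NegaScout and MTD(f) are refinements of this same skeleton.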
For an example of a chess engine in C#, have a look at https://github.com/bytefire/shutranj which I put together recently. A better open-source engine to study is Stockfish (https://github.com/mcostalba/Stockfish), which is written in C++.
I've been looking for algorithms/methods for the game called Duchess: http://www.cse.unsw.edu.au/~blair/duchess/rules.html
I'm considering alpha-beta pruning, but I don't know whether it is applicable to a game with more than two players.
The team vs team version can be played using alpha-beta pruning and similar game tree search techniques because it is a zero-sum two-player game. You just need to think of the teams as players.
The version with three players is not amenable to standard alpha-beta-style game tree search methods because it is not a two-player zero-sum game.
The problem is that in a two-player game you can use an "evaluation function" to measure how good a given board configuration is for player 1, e.g. interpreting it as the "probability" that player 1 wins from that configuration assuming "good" play. If the winning probability for player 1 is P, then for player 2 it is obviously 1 - P, so P alone suffices to represent the evaluation of a board configuration. Alpha-beta pruning uses these evaluation values at the core of the algorithm.
When you have three players, this is no longer well-defined, because the probability that a player 1 wins from a given configuration assuming "good" play depends on whether player 2 and player 3 will conspire together against player 1 or not. Also, there are scenarios known as "king making" where player 1 can't win, but player 1 can still decide whether player 2 or player 3 wins.
For three players, you basically have to resort to a scheme where a board configuration is evaluated to three values, P1, P2 and P3, each representing the relative preference of the corresponding player for reaching that configuration. You can then run a game tree search where every player tries to maximize their own preference value at the search frontier. But you need, for example, to be able to answer whether it is preferable for player X to lose by getting checkmated or to lose by not being the winner, and if so, by how much.
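That scheme is essentially max^n search: every node returns a vector of scores, one per player, and the player to move picks the child that maximizes their own entry. A minimal sketch, demoed on an invented toy game where three players take turns picking numbers from a shared pool:

```python
def maxn(state, depth, to_move, num_players, moves, apply_move, evaluate):
    """Max^n search: returns a tuple of scores, one per player; the
    player to move chooses the child maximizing their own component."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    best = None
    for m in legal:
        child = maxn(apply_move(state, m, to_move), depth - 1,
                     (to_move + 1) % num_players, num_players,
                     moves, apply_move, evaluate)
        if best is None or child[to_move] > best[to_move]:
            best = child
    return best

# Toy demo: state = (remaining numbers, per-player scores); a move takes
# one number and adds it to the mover's score
def moves(state):
    remaining, _ = state
    return list(range(len(remaining)))

def apply_move(state, i, player):
    remaining, scores = state
    new_scores = list(scores)
    new_scores[player] += remaining[i]
    return (remaining[:i] + remaining[i + 1:], tuple(new_scores))

def evaluate(state):
    return state[1]
```

Note that max^n admits far weaker pruning than alpha-beta, which is part of why multi-player search is genuinely harder than the two-player case.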
I guess the minimax algorithm can be applied by taking the max ((number of players) - 1) times, once for each opponent's turn, and the min just once for one's own turn. So one round of the tree has depth equal to the number of players.
In this case you have to consider the cumulative max (the maximum sum over all the opponents). How well the algorithm works will depend heavily on how you do the scoring.
I am a long-time lurker, and I just had an interview with Google where they asked me this question:
Given a requested time d which is impossible (i.e. within 5 days of an already scheduled performance), give an O(log n)-time algorithm to find the next available day d2 (d2 > d).
I had no clue how to solve it, and now that the interview is over, I am dying to figure out how. Knowing how smart most of you folks are, I was wondering if you could give me a hand here. This is NOT homework or anything of that sort; I just want to learn how to solve it for future interviews. I tried asking follow-up questions, but he said that was all he could tell me.
Thanks!
This is completely firing from the hip because I'm not sure the question is complete, but if you had the scheduled dates in a sorted array such that d[0] < d[1] < ... < d[n], the simple answer would be a binary search over that array to find the next available day.
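Under that reading, a sketch: binary-search (via `bisect`) for the first performance that could conflict with the candidate day, then step past any cluster of conflicting performances. Note the stepping loop makes the worst case O(n) when performances are heavily clustered; a strictly O(log n) guarantee would need precomputed gap information in the tree:

```python
import bisect

def next_available(scheduled, d, buffer=5):
    """Smallest day d2 > d that is more than `buffer` days away from
    every day in the sorted list `scheduled`."""
    candidate = d + 1
    # first scheduled day that could be within `buffer` of the candidate
    i = bisect.bisect_left(scheduled, candidate - buffer)
    while i < len(scheduled) and abs(scheduled[i] - candidate) <= buffer:
        # conflict: jump just past this performance's exclusion window
        candidate = scheduled[i] + buffer + 1
        i += 1
    return candidate
```

Each conflicting performance is visited at most once, and the initial lookup is O(log n), so the search is O(log n + k) where k is the size of the conflicting cluster.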