If you are given:
A good shuffling algorithm (a good source of randomness plus a method of shuffling not subject to any of the common pitfalls which would bias the result)
A magic function WINNABLE(D), which takes a shuffled deck D and returns True if D is winnable by some playing sequence, or False if it inevitably results in a losing position.
then it would be possible to generate a set of "well distributed" winnable solitaire deals by generating a large set of starting decks with (1) and then filtering them down to the winnable set with (2). This method of randomly generating possibilities and picking from them is always a good starting point when you're trying to avoid having subtle selection bias creep into your result.
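For concreteness, a minimal sketch of that pipeline (Deck and its shuffled() method are hypothetical names standing in for ingredient (1); WINNABLE is the assumed oracle from (2)):

import java.util.ArrayList;
import java.util.List;

// Sketch only: Deck.shuffled() and WINNABLE are the two assumed ingredients.
static List<Deck> generateWinnableDeals(int wanted) {
    List<Deck> result = new ArrayList<>();
    while (result.size() < wanted) {
        Deck d = Deck.shuffled();   // ingredient (1): unbiased random shuffle
        if (WINNABLE(d)) {          // ingredient (2): the magic oracle
            result.add(d);
        }
    }
    return result;
}

Because each candidate is drawn uniformly and merely filtered afterwards, the surviving deals are uniformly distributed over the winnable set.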
The problem with this is that (2) is hard (maybe NP-hard depending on the game) and even approximations of it are computationally expensive (if you're on an iPad, say). However, cheaper algorithms such as starting from a winning position and making random "un-moves" to reverse the game back to a starting point may have biases toward particular deck shuffles that are very hard to quantify or avoid.
Are there any interesting algorithms or research in the area of generating winnable games like this?
Since solitaire games vary so much, reasoning at this level of generality is itself hard. To focus our ideas, let's take a particular example: Forty Thieves. It's a double-pack game starting with empty foundations, to be built ace-upwards; an empty waste pile; and a layout of ten pre-dealt face-up piles of four cards each. The top cards of the waste and layout piles are exposed. At each move, you can:
Move an exposed card to its legal place on a foundation (no moving it back);
Move an exposed card onto a pile in the layout, only legal when building downwards in the same suit;
Move an exposed card to an empty layout slot;
Deal a card from stock to the top of the waste.
A beginner plays these options in the order stated. (The implementation I play actually has a hint button that suggests a move accordingly.) I estimate that fewer than one in ten deals are winnable by that strategy, whereas the actual proportion of winnable deals is about one in three.
Now if you generate winnable deals by random un-moves, there is a hard-to-quantify bias; I don't disagree with that. I think, though, that the deals will tend to be harder than average among deals that happen to be winnable, with almost no deals winnable by the beginner's strategy.
You can, however, deliberately make the un-moves non-random. If you select un-moves in the opposite order to a beginner's strategy, you get a deal on which the beginner's strategy works: e.g. if only as a last resort you un-move from a foundation to waste, then moving from waste to a foundation whenever possible is always right.
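A skeleton of that idea follows; the Forty Thieves engine itself (states and legal un-moves) is assumed, and all names are hypothetical. Each un-move is ranked by where its forward counterpart sits in the beginner's priority list, and the generator always takes the legal un-move whose forward move the beginner would try last, so the beginner's forward strategy retraces the winning line.

import java.util.Comparator;
import java.util.List;

// Skeleton only: the game engine behind these interfaces is assumed.
interface UnMove {
    int forwardPriority();   // 0 = the forward move a beginner tries first
    GameState apply();       // state after taking this un-move
}

interface GameState {
    List<UnMove> legalUnMoves();
    boolean isStartingLayout();   // all cards back in stock/layout/waste
}

final class DealGenerator {
    // Unwind from the solved position, preferring un-moves whose forward
    // counterpart the beginner tries LAST (so e.g. un-moving foundation ->
    // waste happens only as a last resort), which makes the forward
    // beginner strategy retrace the winning line on the resulting deal.
    static GameState unwind(GameState solved) {
        GameState state = solved;
        while (!state.isStartingLayout()) {
            UnMove next = state.legalUnMoves().stream()
                    .max(Comparator.comparingInt(UnMove::forwardPriority))
                    .orElseThrow(IllegalStateException::new);
            state = next.apply();
        }
        return state;
    }
}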
Hmm, I don't know much about Solitaire, but this is how I would tackle the problem. See my pseudocode.
import java.util.ArrayList;
import java.util.List;

// Assuming you have created a "Card" class.
List<Card> deck = new ArrayList<>();  // populate with every card used in Solitaire
List<Card> table = new ArrayList<>();

// Repeatedly move a uniformly random card from deck to table.
while (deck.size() > 0) {
    table.add(deck.remove((int) (Math.random() * deck.size())));
}
// Done: "table" now holds a perfectly shuffled list of cards.
// (Collections.shuffle(deck) does the same in one call.)
// Now divide the list up however you want.
I have no idea for part 2.
I have a specific game which is not literally zero-sum, because points are awarded by the game during a match, but it is close to zero-sum in the sense that the total number of points has a clear upper limit, so the more points you score, the fewer points are available for your opponents.
The game is played by 5 players, with no teams whatsoever.
I'm making a genetic algorithm play rounds against itself with pseudo-random "mutations" between generations.
But after a couple hundred generations, a pattern always emerges: the algorithm ends up strongly favoring a specific player (for example, the player who plays first). Since the mutations giving the "best results" serve as a base for the next generation, this seems to converge toward some version of "if you are the first player, play this way (the way being a very specific yet pretty random technique that gives bad, or at best average, results), and if not, then play in this specific way that indirectly but strongly favors the first player".
Then, for the next generations, the player whose turn is strongly favored starts mutating totally randomly, because it wins every round no matter what it does, as long as the part of the algorithm that favors that player stays intact.
I'm looking for a way to prevent this specific evolution route, but I can't figure out how to reward victory achieved by your own strategy more than victory that came from being heavily helped.
I think this happens because only the winner of the round-robin tournament gets promoted and mutated each generation. At first, players win more or less randomly, but then a strategy comes up that favors one position. My guess is that slightly deviating from that strategy (via the pseudo-random mutations) makes you lose the games where you are in the favoured position without winning any of the others, so you will never move away from that strategy: something like a local Nash equilibrium.
You could try to keep more than one individual per generation and generate mutations from each of them. But I doubt this will help, and at best it will delay the effect, because the code of the best individual will soon spread to all of them. This seems to be the root cause of the problem.
Therefore my suggestion would be to have t tribes, each with x/t individuals. Instead of playing a round-robin tournament, each individual plays only against the individuals of other tribes. Then you keep the best individual per tribe, mutate, and proceed with the next generation, so that the tribes never mix genes.
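A rough skeleton of that loop (Genome and all the helpers here are hypothetical; the game simulator is assumed):

// t tribes of (x / t) individuals each; fitness comes only from
// cross-tribe games, and each tribe's next generation mutates from
// its own champion, so gene lines never mix.
static Genome[][] evolveTribes(int t, int x, int generations) {
    Genome[][] tribes = initTribes(t, x / t);
    for (int gen = 0; gen < generations; gen++) {
        double[][] fitness = playCrossTribeGames(tribes); // no intra-tribe games
        for (int i = 0; i < t; i++) {
            Genome champion = bestOf(tribes[i], fitness[i]);
            tribes[i] = mutateFrom(champion, x / t);      // genes stay in tribe i
        }
    }
    return tribes;
}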
To me, it seems like there is an easy fix: play multiple games each evaluation.
Instead of each generation testing only one game, strongly favouring the starting player, play 5 games and distribute who starts first equally (so every player starts first exactly once).
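A sketch of that evaluation, assuming a hypothetical playOneGame(seating) simulator that returns the points scored by each seat:

// Every genome starts first exactly once, so seat advantage averages out.
static double[] evaluateWithRotation(Genome[] genomes) {
    int n = genomes.length;                 // 5 players in this game
    double[] totals = new double[n];
    for (int rotation = 0; rotation < n; rotation++) {
        Genome[] seating = new Genome[n];
        for (int seat = 0; seat < n; seat++) {
            seating[seat] = genomes[(seat + rotation) % n];
        }
        double[] points = playOneGame(seating);   // assumed game simulator
        for (int seat = 0; seat < n; seat++) {
            totals[(seat + rotation) % n] += points[seat];
        }
    }
    return totals;
}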
I suppose your population is larger than 5, right? So how are you testing the genomes against each other? You should definitely not let them play only one game, because you might have paired up a medium player against 4 easy players, making it seem like the medium player is better.
I want to make a card-battle game. In this game, cards have specific attributes which can increase the player's HP/attack/defense, or attack an enemy to reduce his HP/attack/defense.
I am trying to make an AI for this game. The AI has to decide which card to select based on the current situation: the AI's HP/attack/defense and the enemy's HP/attack/defense. Since the AI cannot see the enemy's cards, it cannot predict future moves.
I looked into a few AI techniques such as minimax, but I think minimax will not be suitable since the AI cannot predict any future moves.
I am searching for a technique which is flexible enough that I can add a large variety of cards later.
Can you please suggest a technique for such a game?
Thanks
This isn't an ActionScript 3 topic per se but I do think it's rather interesting.
First I'd suggest picking up Yu-Gi-Oh! Stardust Accelerator: World Championship 2009 for the Nintendo DS, or a comparable game.
The game has a fairly advanced computer AI system that deals not only with expected advantage or disadvantage in terms of hit points, but also with card advantage and combos. If you're taking on a challenge like this, I definitely recommend you do the required research (plus, when playing video games is research, who can complain?)
My suggestion for building an AI is as follows:
As the computer decides its move, have it create an array of Move objects: one new Move object for each possible move that it can see.
For each Move object, calculate how much HP the opponent will lose, how many cards they will still have, how many creatures, etc.
Have the computer decide what's most important (more damage, or more card advantage) and have it play that move.
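A minimal sketch of that idea (the fields and weights here are made-up placeholders):

import java.util.Comparator;
import java.util.List;

class Move {
    int opponentHpLoss;        // how much HP the opponent loses
    int opponentCardsLeft;     // cards they'd still have
    int opponentCreaturesLeft; // creatures they'd still have

    // "Decide what's most important" = choose these weights.
    double score() {
        return 2.0 * opponentHpLoss
             - 1.0 * opponentCardsLeft
             - 1.5 * opponentCreaturesLeft;
    }
}

static Move choose(List<Move> possibleMoves) {
    return possibleMoves.stream()
            .max(Comparator.comparingDouble(Move::score))
            .orElse(null);
}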
More sophisticated AIs will also think several turns in advance and perhaps "see" moves that others do not.
I suggest you look at this game of Reversi I built a few weeks back for fun in Flash. This has a very basic AI implemented, but the basics could be applied to your situation.
Basically, the way that game works is: after each move (player or CPU, so I can determine whether the player made the right move compared to what the CPU would have made), I create a Vector of every possible legal move. I then decide which move provides the highest score change, and set that as the best move. However, I also check whether the move would give the other player access to a corner (if you've never played: the player who grabs the corners generally wins). If it does, I tell the CPU to avoid that move and check the second-best move, and so on. The end result is a CPU that can actually put up a fight.
Keep in mind that this was just a single afternoon of work (for the entire game, from the crappy GUI to the functionality to the AI), so it is very basic; I could do things like run future possible moves through the check sequence as well. Fun fact, though: my moves (which I based the AI on, obviously) are the ones the CPU would make nearly 80% of the time. The only time that does not hold true is when I play the game like you would chess, where a move is made solely to set up a position four turns down the line.
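In code, that move loop looks roughly like this (all helpers are assumed; here the "avoid the corner-giving move" rule is encoded as a score penalty rather than the explicit re-check described above):

Move best = null;
int bestScore = Integer.MIN_VALUE;
for (Move m : legalMoves(board, cpuPlayer)) {
    int score = scoreChange(board, m);            // e.g. pieces flipped
    if (givesOpponentCorner(board, m)) {
        score -= CORNER_PENALTY;                  // demote, don't forbid outright
    }
    if (score > bestScore) {
        bestScore = score;
        best = m;
    }
}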
For your game, you have a ton of variables to consider rather than the single point scale I had. I would suggest listing out each factor and assigning it a point value, so you can weight each one by importance. I did something similar for a caching system that automatically determines the most important file to keep based on age, usage, size, etc. You then look at each card in the CPU's hand, calculate each card's value, and play the best card (assuming it is legal to do so, of course).
Once you figure that out, you can look into what each move could do on the next turn (i.e., "damage" values for each move). And once that is done, you could add functionality that lets the CPU make strategic moves, setting up a more powerful card or a "finishing" move, or however it works in the end.
Again, though, keep it to a simple point-based system and keep building from there. You need something you can directly compare, and a point-based system makes that simple.
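For the asker's card game, such a point system might look like this (every field and weight here is an assumption to be tuned against real play):

static double cardValue(Card c, Player me, Player enemy) {
    double v = 0;
    v += 1.0 * c.damage;                         // hurts the enemy's HP
    v += (me.hp < 10 ? 2.0 : 0.5) * c.heal;      // healing matters more when low
    v += 0.8 * (c.attackBuff + c.defenseBuff);   // generic stat value
    return v;   // play the legal card in hand with the highest value
}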
I apologize for the length of this answer, but I hope it helps in some way.
I know how algorithms like minimax can be used to play perfect games (in this case, I'm looking at a game similar to Tic-Tac-Toe).
However, I'm wondering how one would go about creating a non-perfect algorithm, or an AI with different skill levels (Easy, Medium, Hard, etc.), that a human player would actually have a chance of defeating.
Cut off the search at various depths to limit the skill of the computer. Change the evaluation function to make the computer favor different strategies.
Non-expert human players play with sub-optimal strategies and limited tactics. These roughly correspond to poor evaluation of game states and limited ability to think ahead.
Regarding randomness: a little is desirable so the computer doesn't always make the same mistakes and can sometimes luck into doing better or worse than usual. For this, just don't always choose the best path; instead, choose among the candidate moves weighted by their scores. You can make the AI even more interesting by having it refine its evaluation function, i.e. update its weightings, based on the results of its games. This way it can learn a better evaluation function at limited search depth through playing, just as a human might.
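A sketch of that score-weighted choice, here done as a softmax over move scores (the temperature parameter is an assumption: it controls how often the AI deviates from the best move):

import java.util.List;
import java.util.Random;

static <M> M pickWeighted(List<M> moves, double[] scores, double temperature, Random rng) {
    double[] weights = new double[scores.length];
    double total = 0;
    for (int i = 0; i < scores.length; i++) {
        // Softmax keeps weights positive; higher temperature = more random.
        // (Subtract the max score first in production code to avoid overflow.)
        weights[i] = Math.exp(scores[i] / temperature);
        total += weights[i];
    }
    double r = rng.nextDouble() * total;   // roulette-wheel selection
    for (int i = 0; i < weights.length; i++) {
        r -= weights[i];
        if (r <= 0) return moves.get(i);
    }
    return moves.get(moves.size() - 1);    // guard against rounding error
}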
One way I use in my games is to utilize a random value. For easy levels, I set the odds of the draw in the human player's favor. Example:
Easy level: only beat the human if you randomly draw a value less than 10 from the range 1 to 100.
Medium level: only beat the human if you randomly draw a value less than 50 from the range 1 to 100.
Hard level: only beat the human if you randomly draw a value less than 90 from the range 1 to 100.
I am sure there are better ways, but this might give you an idea
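Those levels rendered literally as code (a sketch; the surrounding game decides what the weaker fallback move is):

import java.util.Random;

enum Difficulty { EASY, MEDIUM, HARD }

// The AI only gets to play its best move when the roll comes in under
// the level's threshold; otherwise it plays a weaker (e.g. random) move.
static boolean aiPlaysBestMove(Difficulty level, Random rng) {
    int threshold = switch (level) {
        case EASY -> 10;
        case MEDIUM -> 50;
        case HARD -> 90;
    };
    return rng.nextInt(100) + 1 < threshold;   // roll in 1..100
}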
the "simplest" way would be to use a threshold value along with your minmax results, creating a set from those results that exceed the threshold, then randomly select a choice/path for the program to take. the lower the threshold the possibly easier opponent.
I say possibly because even by pure dumb luck the best move could be selected; hence "beginner's luck".
Essentially, you are looking to increase the entropy (randomness) of the possible outcomes. If you want to specifically dumb down the computer opponent, you could limit the number of levels your minimax algorithm traverses, or devalue the scores in some portion of the algorithm.
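A compact sketch of the threshold idea (Move and the score map are placeholders for whatever your minimax search produces):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

static Move pickAboveThreshold(Map<Move, Integer> minimaxScores, int threshold, Random rng) {
    List<Move> pool = new ArrayList<>();
    for (Map.Entry<Move, Integer> e : minimaxScores.entrySet()) {
        if (e.getValue() >= threshold) pool.add(e.getKey());
    }
    if (pool.isEmpty()) pool.addAll(minimaxScores.keySet());  // fall back to any move
    return pool.get(rng.nextInt(pool.size()));  // lower threshold = weaker play
}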
It is not easy for an engine to make human mistakes. Reducing the search depth is a straightforward approach, but it has its limits. For example, chess engines reduced to one ply often give check while one of their valuable pieces is under attack; when the opponent defends the check with a counterattack, both pieces are en prise. It is unlikely that even an inexperienced human would fall for this mistake.
Maybe you can use some ideas from a chess engine called Phalanx:
http://phalanx.sourceforge.net/index.html
It is one of the few open-source engines with a sophisticated difficulty level (the -e option). If I'm not mistaken, it performs a normal search but sometimes ignores non-obvious moves. evaluate.c contains a function called blunder, which evaluates whether a move is likely to be overlooked by a human.
Can someone give me some pointers on how I should implement the artificial intelligence (human vs. computer gameplay) for a Puyo Puyo game? Is this project even worth pursuing?
The point of the game is to form chains of 4 or more beans of the same color, which then trigger other chains. The longer your chain is, the more points you get. My description isn't that great, so here's a simple video of a game in progress: http://www.youtube.com/watch?v=K1JQQbDKTd8&feature=related
Thanks!
You could also design a fairly simple expert system for a low-level difficulty. Something like the following (a code sketch appears after the list):
1) Place the pieces anywhere that would result in clearing blocks
2) Otherwise, place the pieces adjacent to same-color pieces.
3) Otherwise, place the pieces anywhere.
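A minimal sketch of that priority order (all helpers are assumed):

static Placement choose(Board board, Piece piece) {
    for (Placement p : legalPlacements(board, piece)) {
        if (clearsBlocks(board, p)) return p;          // rule 1
    }
    for (Placement p : legalPlacements(board, piece)) {
        if (touchesSameColor(board, p)) return p;      // rule 2
    }
    return anyLegalPlacement(board, piece);            // rule 3
}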
This is the basic strategy a person would employ just after becoming familiar with the game. It will be passable as a computer opponent, but it's unlikely to beat anybody who has played more than 20 minutes. You also won't learn much by implementing it.
The best strategy is not to fire every chain as soon as possible, but to assemble the board so that when you break something on top, everything collapses and you get a lot of combos. So the problem is assembling the board with this combo strategy in mind. It is also probably better to make 4 combos with 5 pieces each than 5 combos with 4 pieces each (but I am not sure; check against the scoring).
You should build a big structure that allows these super-combo moves. While building this structure you have the additional problem that you do not know which piece you will get (except for the next piece), so there is a little probability involved.
This is a very interesting problem.
You can apply:
Dynamic programming to evaluate the current score.
Monte Carlo methods for the probabilistic parts.
A few small heuristics (heuristics usually find a solution faster).
In general I would describe this problem as optimal positioning of pieces to maximise the probability of winning. There is no single strategy, because building a bigger "heap" brings a greater risk of losing the game.
One parameter of how good your heap is can be its "entropy": the number of single/buried pieces left after making a combo.
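One concrete version of that metric (an assumption, not a standard definition): count pieces that have a different-colored piece somewhere above them in the same column, since those are hard to reach for chains.

// grid[row][col]: row 0 is the top, 0 means an empty cell.
static int buriedPieces(int[][] grid) {
    int buried = 0;
    for (int col = 0; col < grid[0].length; col++) {
        for (int row = 1; row < grid.length; row++) {
            if (grid[row][col] == 0) continue;
            for (int above = 0; above < row; above++) {
                if (grid[above][col] != 0 && grid[above][col] != grid[row][col]) {
                    buried++;
                    break;
                }
            }
        }
    }
    return buried;   // lower is better after a combo resolves
}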
The first answer that comes to mind is a lookahead search with alpha-beta pruning. Is it worth doing? Perhaps as a learning exercise.
I was wondering which algorithms are most commonly used to find patterns in puzzle games built on grids of cells.
I know that it depends on many factors, like the kind of patterns you want to detect, or the rules of the game... but I wanted to know which algorithms are most commonly used for that kind of problem...
For example, games like Columns, Bejeweled, even Tetris.
I also want to know whether detecting patterns by "brute force" (e.g., scanning the whole grid looking for three adjacent cells of the same color) is significantly worse than using specialized algorithms on very small grids, like 4x4 (and again, I know that depends on the kind of game and its rules...).
Which data structures are commonly used in this kind of game?
It's always domain-dependent. But there are also two situations where you'd do these kinds of searches. One situation is after a move (a change to the game field made by the player), and the other is if/when the whole board has changed.
In Tetris, you wouldn't need to scan the whole board after a piece is dropped. You'd just have to check the rows the piece is touching.
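For example (a sketch; removeRow is an assumed helper that deletes a row and shifts the rows above it down):

// Only the rows the just-locked piece occupies can have become full.
static void clearCompletedRows(boolean[][] filled, int pieceTopRow, int pieceHeight) {
    for (int row = pieceTopRow; row < pieceTopRow + pieceHeight; row++) {
        boolean full = true;
        for (boolean cell : filled[row]) {
            if (!cell) { full = false; break; }
        }
        if (full) removeRow(filled, row);   // rows below this one are unaffected
    }
}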
In a match-3 game like Bejeweled, where you're swapping two adjacent pieces at a time, you'd first run a localized search in each direction around each square that changed, to see if any pieces have triggered. Then, if they have, the game dumps some new, random pieces onto the board. Now, you could run the same localized search around each square that changed, but that might involve a lot of if statements and might actually be slower than just scanning the whole board from top left to bottom right. It depends on your implementation and would require profiling.
As Adrian says, a simple 2D array suffices. Often, though, you may add a "border" of cells around this array to simplify the searching-for-patterns aspect. Without a border, you'd need if statements at the edge squares that say "if you're in the top row, don't search up (and walk off the array)". With a border around it, you can safely search through everything: saving yourself if statements, saving yourself branching, saving yourself pipeline issues, and searching faster.
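A sketch of the border trick (the cell values are assumptions: 0 = empty, positive = a color, -1 = sentinel border):

static final int EMPTY = 0, BORDER = -1;

// The playable area is board[1..rows][1..cols]; the outer ring is sentinel.
static int[][] newBoard(int rows, int cols) {
    int[][] board = new int[rows + 2][cols + 2];
    for (int r = 0; r < rows + 2; r++) {
        board[r][0] = BORDER;
        board[r][cols + 1] = BORDER;
    }
    for (int c = 0; c < cols + 2; c++) {
        board[0][c] = BORDER;
        board[rows + 1][c] = BORDER;
    }
    return board;
}

// Horizontal three-in-a-row test with no bounds checks at all.
static boolean horizontalMatch3(int[][] board, int r, int c) {
    int v = board[r][c];
    return v > 0 && board[r][c - 1] == v && board[r][c + 1] == v;
}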
To Jon: these kinds of things really do matter in high-performance settings, even on modern machines, if you're writing a search algorithm to play or solve the game. If you are, you want your underlying simulation to run as quickly as possible in order to search as deeply as possible in the fewest cycles.
Regarding algorithms: it certainly depends on the game. For Tetris, for example, you'd only have to check whether each affected row is completely filled. I can't even think of an approach that would beat brute force in this case. But for most casual games brute force should be perfectly fine; pattern recognition should be negligible in comparison to graphics and sound processing.
Regarding structures: a simple 2D array should suffice for representing the board.
Given average computer speeds these days, if this runs in real time as the user plays, it probably won't matter (EDIT: for very small game boards only). Certainly, it depends on the complexity of the game logic, but also on how fast the code will run on the target machine (i.e., is this a JavaScript web page game, or a Windows app written in C++?).
If this is for something like simulating gameplay strategies, then use an algorithm that's more efficient.
A more efficient strategy could involve keeping track of incremental changes to the game board, instead of re-scanning the whole board every time.
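A sketch of that bookkeeping (checkNeighborhood stands in for whatever match logic the game needs):

import java.util.HashSet;
import java.util.Set;

class IncrementalScanner {
    private record Cell(int row, int col) {}

    private final Set<Cell> dirty = new HashSet<>();

    void markChanged(int row, int col) {
        dirty.add(new Cell(row, col));       // called whenever a cell changes
    }

    void scan(int[][] board) {
        for (Cell c : dirty) {
            checkNeighborhood(board, c.row(), c.col());
        }
        dirty.clear();                       // nothing left to re-examine
    }

    private void checkNeighborhood(int[][] board, int row, int col) {
        // Run the usual match test, limited to lines through (row, col).
    }
}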