Another question about TDD from me. I have read some articles and book chapters about TDD, and I understand why you should TDD and I understand the simple examples, but when I try this out in the real world I get stuck very easily.
Could you give me some simple TDD examples if you were to program the well-known Spider Solitaire that comes with Windows Vista? Which tests would you start with?
Solitaire games involve cards.
So, you think of a Card class. You write some tests for individual Card objects. You write your Card class to pass the tests.
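For instance (just a sketch, in C# with NUnit; Rank, Suit and IsOneBelow are guesses at your own design, not anything prescribed):
using NUnit.Framework;

[TestFixture]
public class CardTests
{
    [Test]
    public void Card_remembers_its_rank_and_suit()
    {
        var card = new Card(Rank.Queen, Suit.Spades);
        Assert.AreEqual(Rank.Queen, card.Rank);
        Assert.AreEqual(Suit.Spades, card.Suit);
    }

    [Test]
    public void A_card_knows_whether_it_is_one_rank_below_another()
    {
        // Spider mostly cares whether a card may sit on the next rank up.
        var queen = new Card(Rank.Queen, Suit.Spades);
        var king = new Card(Rank.King, Suit.Spades);
        Assert.IsTrue(queen.IsOneBelow(king));
        Assert.IsFalse(king.IsOneBelow(queen));
    }
}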
You need a deck that shuffles and deals into the layout. You think of the Deck class and the shuffle algorithm and how it maintains state for dealing. You write some tests for a Deck that shuffles and deals. You write your Deck class to pass the tests. [Note, this requires a mock random number generator that isn't actually random.]
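One way to sketch that is to let the Deck take its random source as a constructor argument, so a test can pin the seed (the names below are mine, not a prescribed API):
using NUnit.Framework;

[TestFixture]
public class DeckTests
{
    [Test]
    public void Decks_shuffled_with_the_same_seed_deal_the_same_cards()
    {
        // Injecting the random source makes the "shuffle" repeatable in a test.
        var deck = new Deck(new System.Random(42));
        var sameSeed = new Deck(new System.Random(42));
        deck.Shuffle();
        sameSeed.Shuffle();

        // Assumes Card implements value equality.
        for (int i = 0; i < 10; i++)
        {
            Assert.AreEqual(sameSeed.Deal(), deck.Deal());
        }
    }
}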
Solitaire games involve a layout with empty spaces and cards. Some empty spaces have rules (Kings only or Aces only). Solitaire games sometimes involve a stock, more-or-less the remains of the Deck.
So you think of a Layout class with spaces for cards. You write some tests for the Layout and put in various Cards. You write your Layout class to pass the tests.
Then there are rules on what cards can be moved from the layout. Whole stacks, sub-stacks, top cards, whatever. You have an AllowedMove or GameState or some such class. Same drill. Define roughly what it does, write tests, finish the class.
You have user interface and display stuff. The drill is the same.
Rough out the class.
Define the tests.
Finish the class.
etc.
I cover this in detail in a book on OO Design.
Well, when you're asking about TDD for spider solitaire, you're basically asking about how to design such a game. The tests will be the consequence of design decisions. Solitaire is a simple game, but designing such a game from scratch isn't trivial (there's more than one way to go about it).
You might want to start with something much simpler to design, like a number guessing game (where the system generates a random number and you try to guess it in as few tries as possible).
Some features of such a simple game would be:
Feature to generate the secret number randomly between 1 and 10. Generating a new number starts a new game.
Feature to compare whether a player's input is higher or lower than that number, or if the guess is right on
Feature to count the number of guesses
From that you might try these tests (just as crude examples, but easily coded):
Run your generator 1000 times. Make sure secret_number >= 1 && secret_number <= 10 every time.
For a sample set of numbers (randomly generated), does your comparison function return "HIGH" when number > secret_number, "LOW" when number < secret_number, and "WIN" when number == secret_number?
Repeat the previous test, but track the number of items you test. When "WIN" is returned, make sure your counter feature matches your test's number of items.
This is just a very rough outline, and by no means complete. But you can see from the English descriptions that code examples would be even more verbose. I think if you want more specific answers, you have to ask a more specific question.
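For what it's worth, the first two of those could be sketched roughly like this in NUnit (the Game class and its members are just assumed names):
using NUnit.Framework;

[TestFixture]
public class GuessingGameTests
{
    [Test]
    public void Secret_number_is_always_between_1_and_10()
    {
        for (int i = 0; i < 1000; i++)
        {
            var game = new Game();   // hypothetical class that picks a new secret number
            Assert.IsTrue(game.SecretNumber >= 1 && game.SecretNumber <= 10);
        }
    }

    [Test]
    public void Comparison_reports_high_low_or_win()
    {
        var game = new Game(secretNumber: 5);   // assume a test constructor that fixes the number
        Assert.AreEqual("HIGH", game.Guess(8));
        Assert.AreEqual("LOW", game.Guess(2));
        Assert.AreEqual("WIN", game.Guess(5));
    }
}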
First you should separate the GUI from the engine. TDDing the GUI is the hardest part, so you should keep your GUI layer as thin as you can. Google for "the humble dialog box" and read the tddui list on Yahoo! Groups.
The engine layer will implement the game rules. I'm not sure how Spider Solitaire differs from the classic solitaire (i.e. the one from Windows 3.1), on which I based the following.
Here is the initial test list I would start from:
you can always move a card to an empty stack
a card has a value, and one can compare two cards
you can only move a card to a non-empty stack when the card on the top is lower than the moved card
you can always move several cards to an empty stack
you can only move several cards to a non-empty stack when the card on the top is lower than the moved card
when you move all the face-up cards from a stack, the newly exposed card can be turned over (or should it be turned over automatically?)
you can take the first card from the stock and put it on any stack (?)
I'm getting unsure about the rules, but I think this is enough to get the idea.
Lastly, start with the simplest test, and add tests to the list whenever you have a new test idea or find yourself asking: what happens if ...?
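To illustrate, the very first test on that list can be tiny (Stack, Card and CanAccept here are just guesses at what the engine API might look like):
using NUnit.Framework;

[TestFixture]
public class MoveRulesTests
{
    [Test]
    public void A_card_can_always_be_moved_to_an_empty_stack()
    {
        var emptyStack = new Stack();                   // hypothetical engine classes
        var card = new Card(Rank.Seven, Suit.Hearts);

        Assert.IsTrue(emptyStack.CanAccept(card));
    }
}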
list the features of Spider Solitaire [I don't play card games]
describe how you would test each feature
do it
I am learning BDD and trying to make a very simple game. The player sees some polygon shape and needs to guess its area by expanding a circle spot. To guess, the player holds a finger on the screen (it's a mobile game) for some time to expand the circle to the desired size. If the circle's area is near the shape's area, the player wins.
Now I want to create the first minimal test to start developing, but I can't figure out this test.
This is the simplest test that I can write (in BDD style):
public partial class GuessShapeSize_Feature
{
    [Test]
    public void RightGuess_Scenario()
    {
        Given_expanding_spot_expand_speed_is(5.0f);
        Given_shape_has_area_of(15.0f);
        When_player_holds_finger_for_seconds(3.0f);
        Then_player_guess_result_is(GuessResult.Success);
    }
}
The problem here is that it is a complex test that requires around 5 classes: Level (a container for everything that happens, which also checks for the result), Shape (the thing to guess), PlayerInput (hold and release finger), CircleSpot (which expands over time), and TimeManager (I need to fake 3 seconds elapsing).
I can't really call this a simple first test, but I can't imagine any simpler one. What should I do in this situation?
You don't really need 5 classes to make that first scenario pass. All you need is an empty API or UI, and something which always returns GuessResult.Success.
The time manager will only be used from within the scenario (and is always set to 3 seconds for now).
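In other words, something as dumb as this is enough to make the first scenario pass (the names below simply mirror the scenario steps; they aren't a prescribed design):
// A deliberately naive first implementation: it ignores its inputs and
// always reports a successful guess. The next scenario will force it to grow.
public class GuessShapeGame
{
    public void SetSpotExpandSpeed(float unitsPerSecond) { }
    public void SetShapeArea(float area) { }
    public void HoldFingerForSeconds(float seconds) { }

    public GuessResult Result
    {
        get { return GuessResult.Success; }
    }
}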
If that behaviour isn't rich enough (well, of course it isn't!) then what scenario comes next? Can you think of an example where your game should return something other than GuessResult.Success?
Start by making one scenario at a time pass in the simplest possible way. When it isn't enough, change it.
It's absolutely fine to have the classes of your design in mind, but make it simple, and refactor as you go to meet that design.
As you break out other classes, you'll be delegating parts of the system behaviour to those classes. Some things will be better expressed as behaviour of classes than full system stuff (probably the rules around area calculation, for example) so write an example of how the class behaves from the API / UI's point of view (TDD). This is the "outside-in" of BDD.
This will help you to keep a good Test Pyramid (lots of unit tests, few scenarios).
Don't forget to talk to someone about what it is you're doing, even if it's a rubber duck.
Think smaller, meaning: look at the "building blocks" that your overall experience requires:
You need an expanding spot. What about a test that simply tests that you have such an expanding thing?
Then: that expanding spot will at some point hit a hard limit. So, write a test to ensure the spot grows to that limit, but not beyond.
Next thing: user input. Write a test showing the growing shape and test the required user interactions (without deciding about winning or losing).
And then when all these things work, then you can come back to the test you already have: the full user experience, maybe in different flavors to represent the winning and losing case.
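Going back to the first two building blocks, the spot tests could be as plain as this (CircleSpot and its constructor are assumptions about your design):
using NUnit.Framework;

[TestFixture]
public class CircleSpotTests
{
    [Test]
    public void Spot_grows_while_time_passes()
    {
        var spot = new CircleSpot(expandSpeed: 5.0f, maxRadius: 100.0f);
        spot.Update(elapsedSeconds: 2.0f);
        Assert.AreEqual(10.0f, spot.Radius, 0.001f);
    }

    [Test]
    public void Spot_stops_growing_at_its_limit()
    {
        var spot = new CircleSpot(expandSpeed: 5.0f, maxRadius: 100.0f);
        spot.Update(elapsedSeconds: 60.0f);   // far longer than needed to reach the limit
        Assert.AreEqual(100.0f, spot.Radius, 0.001f);
    }
}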
I am designing a bot to play Texas Hold'Em Poker on tables of up to ten players, and the design includes a few feed forward neural networks (FFNN). These neural nets each have 8 to 12 inputs, 2 to 6 outputs, and 1 or 2 hidden layers, so there are a few hundred weights that I have to optimize. My main issue with training through back propagation is getting enough training data. I play poker in my spare time, but not enough to gather data on my own. I have looked into purchasing a few million hands off of a poker site, but I don't think my wallet will be very happy with me if I do... So, I have decided on approaching this by designing a genetic algorithm. I have seen examples of FFNNs being trained to play games like Super Mario and Tetris using genetic algorithms, but never for a game like poker, so I want to know if this is a viable approach to training my bot.
First, let me give a little background information (this may be confusing if you are unfamiliar with poker). I have a system in place that allows the bot to put its opponents on a specific range of hands so that it can make intelligent decisions accordingly, but it relies entirely on accurate output from three different neural networks:
NN_1) This determines how likely it is that an opponent is a) playing the actual value of his hand, b) bluffing, or c) playing a hand with the potential to become stronger later on.
NN_2) This assumes the opponent is playing the actual value of his hand and outputs the likely strength. It represents option (a) from the first neural net.
NN_3) This does the same thing as NN_2 but instead assumes the opponent is bluffing, representing option (b).
Then I have an algorithm for option (c) that does not use a FFNN. The outputs for (a), (b), and (c) are then combined based on the output from NN_1 to update my opponent's range.
Whenever the bot is faced with a decision (i.e. should it fold, call, or raise?), it calculates which is most profitable based on its opponents' hand ranges and how they are likely to respond to different bet sizes. This is where the fourth and final neural net comes in. It takes inputs based on properties unique to each player and the state of the table, and it outputs the likelihood of the opponent folding, calling, or raising.
The bot will also have a value for aggression (how likely it is to raise instead of call) and its opening range (which hands to play pre-flop). These four neural networks and two values will define each generation of bots in my genetic algorithm.
Here is my plan for training:
I will be simulating multiple large tournaments with 10n initial bots each with random values for everything. For the first few dozen tournaments, they will all be placed on tables of 10. They will play until either one bot is left or they play, say, 1,000 hands. If they reach that hand limit, the remaining bots will instantly go all-in every hand until one is left. After each table has completed, the most accurate FFNNs will be placed in the winning bot that will move on to the next round (even if the bot containing the best FFNN was not the winner). The winning bot will retain its aggression and opening range values. The tournament ends when only 100 bots remain, and random variations on those bots will generate the players for the next tournament. I'm assuming the first few tournaments will be complete chaos, so I don't want to narrow down my options too much early on.
If by some miracle, the bots actually develop a profitable, or at least somewhat coherent, strategy (I will check for this periodically), I will begin decreasing the amount of variation between bots. Anyone who plays poker could tell you that there are different types of players each with different strategies. I want to make sure that I am allowing enough room for different strategies to develop throughout this process. Then I may develop some sort of "super bot" that can switch between those different strategies if one is failing.
So, are there any glaring issues with this approach? If so, how would you recommend fixing them? Do you have any advice for speeding up this process or increasing my chances of success? I just want to make sure I'm not about to waste hundreds of hours on something doomed to fail. Also, if this site is not the correct place to be asking this question, please refer me to another website before flagging this. I would really appreciate it. Thanks all!
It will be difficult to use an ANN for a poker bot. It is better to think about an expert system. You can use an odds calculator to get a numerical evaluation of hand strength, and after that an expert system for money management (risk management). ANNs are better suited to other problems.
I'm a relatively inexperienced programmer, and recently I've been getting interested in making a Checkers game app for a school project. I'm not sure where to start (or if I should even attempt creating this). The project I have in mind probably wouldn't involve much more than a simple AI and a multiplayer mode.
Can anyone give some hints / guidance for me to start learning?
To some extent I agree with some of the comments on the question that suggest 'try something simpler first', but checkers is simple enough that you may be able to get a working program - and you will certainly learn useful things as you go.
My suggestion would be to divide the problem into sections and solve each one in turn. For example:
1) Board representation - perhaps use an 8x8 array to represent the board. You need to be able to fill a square with empty, white piece, black piece, white king, or black king. A more efficient solution might be to have a look at 'bit-boards', in which the occupancy of the board is described by a set of 64-bit integers. You probably want to end up with functions that can load or save a board state, print or display the board, and determine what (if anything) is at some position (see the sketch after this list).
2) Move representation - find a way to calculate legal moves: which pieces can move and where they can move to. You will need to take into account moving off the edges of the board, blocked moves, jumps, multiple jumps, kings moving 'backwards', and so on. You probably want to end up with functions that can calculate all legal moves for a piece, determine if a suggested move is legal, record a game as a series of moves, and maybe interface with the end user so that by mousing or entering text commands you can 'play' a game on your board. Even if you only get that far, you have a 'product' you can demonstrate and people can interact with.
3) Computer play - this is the harder part. You will need to learn about minimax, alpha-beta pruning, iterative deepening and all the associated guff that goes into computer game AI - some of it sounds harder than it actually is. You also need to develop a position evaluation algorithm that measures the value of a position so the computer can decide which is the 'best' move to make. This can be as simple as the naive assumption that taking an opponent's piece is always better than not taking one, that making a king is better than not making one, or that a move that leaves you with more future moves is better than one that leaves you with fewer choices for your next move. In practice, even a very simple 'greedy' board evaluation can work quite well if you can look 2-3 moves ahead.
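Here is a minimal sketch of the 8x8 array idea plus a 'greedy' evaluation along those lines (illustrative only, nothing like a finished engine):
// Contents of one square of the checkers board.
public enum Square { Empty, White, Black, WhiteKing, BlackKing }

public class Board
{
    private readonly Square[,] squares = new Square[8, 8];   // all Empty by default

    public Square At(int row, int col)
    {
        return squares[row, col];
    }

    public void Place(int row, int col, Square piece)
    {
        squares[row, col] = piece;
    }

    // Naive evaluation: count material from White's point of view,
    // with kings worth a little more than ordinary pieces.
    public int Evaluate()
    {
        int score = 0;
        foreach (Square s in squares)
        {
            if (s == Square.White) score += 1;
            else if (s == Square.WhiteKing) score += 2;
            else if (s == Square.Black) score -= 1;
            else if (s == Square.BlackKing) score -= 2;
        }
        return score;
    }
}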
All in all though, it may be simpler to look at something a little less ambitious than checkers - Othello is possibly a good choice and it is not hard to write an Othello player that can thrash a human who hasn't played a lot of the game. 3D tic-tac-toe, or a small dots-and-boxes game might be suitable too. Games like these are simpler as there are no kings or boundaries to complicate things, all (well most) moves are legal and they are sufficiently 'fun' to play to be a worthwhile software demonstration.
First let me state that the task you are talking about is a lot larger than you think it is.
How you should do it is break it down into very small manageable pieces.
The reasons are:
Smaller steps are easier to understand.
Getting fast feedback will help inspire you to continue and will help you fix things as they go wrong.
As you start, think of the smallest possible step of something to do. Here are some ideas of parts to start with:
Make a simple title screen: just the title, and hitting a key makes it go away.
Make the UI for an empty checkerboard grid.
I know those don't sound like much, but they will probably take much more time than you think.
Then add things like adding the checkers, keeping the game board data, etc.
Don't even think about AI until you have a game that two players can play with no UI.
What you should do is think about the smallest increment you can do, add that, and then think about what the next small piece is.
Trust me, this is the best way to go about it. If you try to write everything at once, it will never happen.
If you are given:
1) A good shuffling algorithm (a good source of randomness plus a method of shuffling not subject to any of the common pitfalls which would bias the result)
2) A magic function WINNABLE(D) which takes the shuffled deck and returns True if the deck D is winnable by some playing sequence, or False if it inevitably results in a losing position.
then it would be possible to generate a set of "well distributed" winnable solitaire deals by generating a large set of starting decks with (1) and then filtering them down to the winnable set with (2). This method of randomly generating possibilities and picking from them is always a good starting point when you're trying to avoid having subtle selection bias creep in to your result.
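As a sketch, the generate-and-filter idea is just this loop, where Shuffle stands in for (1) and IsWinnable stands in for the magic (2) you would still have to supply:
using System.Collections.Generic;

public static class DealGenerator
{
    // Keep drawing uniformly random decks and keep only the winnable ones.
    // Because each candidate is generated independently of the filter,
    // the survivors are an unbiased sample of the winnable deals.
    public static List<Deck> WinnableDeals(int count, System.Random rng)
    {
        var result = new List<Deck>();
        while (result.Count < count)
        {
            Deck candidate = Deck.Shuffle(rng);   // (1) an unbiased shuffle
            if (IsWinnable(candidate))            // (2) the expensive "magic" predicate
            {
                result.Add(candidate);
            }
        }
        return result;
    }

    private static bool IsWinnable(Deck deck)
    {
        // Placeholder for the WINNABLE(D) solver discussed above.
        throw new System.NotImplementedException();
    }
}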
The problem with this is that (2) is hard (maybe NP-hard depending on the game) and even approximations of it are computationally expensive (if you're on an iPad, say). However, cheaper algorithms such as starting from a winning position and making random "un-moves" to reverse the game back to a starting point may have biases toward particular deck shuffles that are very hard to quantify or avoid.
Are there any interesting algorithms or research in the area of generating winnable games like this?
Since solitaire games vary so much, reasoning at this level of generality is itself hard. To focus our ideas, let's take a particular example: Forty Thieves. It's a double-pack game starting with empty foundations, to be built ace-upwards; an empty waste pile; and a layout of ten pre-dealt face-up piles of four cards each. The top cards of the waste and layout piles are exposed. At each move, you can:
Move an exposed card to its legal place in a foundation, no worrying back;
Move an exposed card onto a pile in the layout, only legal when building downwards in the same suit;
Move an exposed card to an empty layout slot;
Deal a card from stock to the top of the waste.
A beginner plays these options in the order stated. (The implementation I play actually has a hint button that suggests a move accordingly.) I estimate that fewer than one in ten deals are winnable by that strategy, whereas the actual proportion of winnable deals is about one in three.
Now if you generate winnable deals by random un-moves, there is a hard-to-quantify bias; I don't disagree with that. I think, though, that the deals will tend to be harder than average among deals that happen to be winnable, with almost no deals winnable by the beginner's strategy.
You can, however, deliberately make the un-moves non-random. If you select un-moves in the opposite order to a beginner's strategy, you get a deal on which the beginner's strategy works: e.g. if only as a last resort you un-move from a foundation to waste, then moving from waste to a foundation whenever possible is always right.
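Roughly sketched, that biased selection is just a priority ordering over the candidate un-moves; every type and helper below is made up purely for illustration:
// Unwind a solved position back to a starting deal, preferring the un-moves
// that a beginner's strategy would undo first, and un-moving from a
// foundation only as a last resort.
public static class BiasedUnDealer
{
    public static void UnwindToStart(GameState state, System.Random rng)
    {
        while (!state.IsStartingPosition)
        {
            // Opposite order to the beginner's priority list given above.
            UnMove next =
                state.PickRandomUnMove(UnMoveKind.WasteToStock, rng)
                ?? state.PickRandomUnMove(UnMoveKind.LayoutToEmptySlot, rng)
                ?? state.PickRandomUnMove(UnMoveKind.LayoutToLayout, rng)
                ?? state.PickRandomUnMove(UnMoveKind.FoundationToWaste, rng);

            state.Apply(next);
        }
    }
}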
Hmm, I don't know much about Solitaire but this is how I would tackle the problem. See my pseudo code.
import java.util.ArrayList;
import java.util.List;

// Assuming you have created a Card class.
List<Card> deck = new ArrayList<>();   // a list populated with every card used in Solitaire
List<Card> table = new ArrayList<>();

// This is the real code: repeatedly pull a random card out of the deck.
while (deck.size() > 0) {
    table.add(deck.remove((int) (Math.random() * deck.size())));
}

// And done. You now have a perfectly shuffled list of cards in table.
// Now divide the list up however you want.
I have no idea for part 2.
I want to make a card-battle based game. In this game, cards have specific attributes which can increase the player's HP/attack/defense, or attack an enemy to reduce his HP/attack/defense.
I am trying to make an AI for this game. The AI has to predict which card to select on the basis of the current situation, such as the AI's HP/attack/defense and the enemy's HP/attack/defense. Since the AI cannot see the enemy's cards, it cannot predict future moves.
I looked at a few AI techniques like minimax, but I think minimax will not be suitable since the AI cannot predict any future moves.
I am searching for a technique which is very flexible, so that I can add a large variety of cards later.
Can you please suggest a technique for such a game?
Thanks
This isn't an ActionScript 3 topic per se but I do think it's rather interesting.
First I'd suggest picking up Yu-Gi-Oh's Stardust Accelerator World Championship 2009 for the Nintendo DS or a comparable game.
The game has a fairly advanced computer AI system that not only deals with expected advantage or disadvantage in terms of hit points but also card advantage and combos. If you're taking on a challenge like this, I definitely recommend you do the required research (plus, when playing video games is research, who can complain?).
My suggestion for building an AI is as follows:
As the computer decides its move, create an array of Move objects. Then have it create a new Move object for each possible Move that it can see.
For each Move object, calculate how much less HP the opponent will have, how many cards they will still have, how many creatures, etc.
Have the computer decide what's most important (more damage, more card advantage) and have it play that move.
More sophisticated AIs will also think several turns in advance and perhaps "see" moves that others do not.
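A bare-bones version of that idea could look like this; the fields and the weights are placeholders you'd replace with whatever matters in your game:
using System.Collections.Generic;
using System.Linq;

// One candidate play and the measurable effects it would have.
public class Move
{
    public int DamageToOpponent;    // HP the opponent would lose
    public int CardAdvantage;       // change in card count in our favour
    public int CreatureAdvantage;   // change in creatures on the board
}

public static class SimpleCardAi
{
    // Score each possible move with tunable weights and play the best one.
    public static Move ChooseMove(IEnumerable<Move> possibleMoves)
    {
        return possibleMoves
            .OrderByDescending(m => 3 * m.DamageToOpponent
                                  + 2 * m.CardAdvantage
                                  + 1 * m.CreatureAdvantage)
            .First();
    }
}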
I suggest you look at this game of Reversi I built a few weeks back for fun in Flash. This has a very basic AI implemented, but the basics could be applied to your situation.
Basically, the way that game works is after each move (player or CPU, so I can determine if the player made the right move in comparison to what the CPU would have made), I create a Vector of each possible legal move. I then decide which move provides the highest score change, and set that as best move. However, I also check to see if the move would result in the other player having access to a corner (if you've never played, the player who grabs the corners generally wins). If it does, I tell the CPU to avoid that move and check the second best move and so on. The end result is a CPU who can actually put up a fight.
Keep in mind that this is just a single afternoon of work (for the entire game, from the crappy GUI to the functionality to the AI), so it is very basic, and I could do things like run future possible moves through the check sequence as well. Fun fact, though: my moves (which I based the AI on, obviously) are the ones the CPU would make nearly 80% of the time. The only time that does not hold true is when I play the game like you would chess, where a move is made solely to set up another move four turns down the line.
For your game, you have a ton of variables to consider and not a single point scale as I had. I would suggest listing out each factor and applying a point value to it, so you can weight the importance of each one. I did something similar for a caching system that automatically determines which is the most important file to keep based on age, usage, size, etc. You then look at each card in the CPU's hand, calculate what each card's value is and play that card (assuming it is legal to do so, of course).
Once you figure that out, you can look into things like what each move could do in the next turn (i.e. "damage" values for each move). And once that is done, you could add functionality to it that would let the CPU make strategic moves that would allow them to use a more powerful card or perform a "finishing" move or however it works in the end.
Again, though, keep it to a simple point based system and keep going from there. You need something you can physically compare, so sticking to a point based system makes that simple.
I apologize for the length of this answer, but I hope it helps in some way.