Edit: This question is not a duplicate of 'What is the optimal algorithm for the game 2048?'
That question asks 'what is the best way to win the game?'
This question asks 'how can we work out the complexity of the game?'
They are completely different questions. I'm not interested in which steps are required to move towards a 'win' state - I'm interested in finding out whether the total number of possible steps can be calculated.
I've been reading this question about the game 2048 which discusses strategies for creating an algorithm that will perform well playing the game.
The accepted answer mentions that:
the game is a discrete state space, perfect information, turn-based game like chess
which got me thinking about its complexity. For deterministic games like chess, it's possible (in theory) to work out all the possible moves that lead to a win state and work backwards, selecting the best moves that keep leading towards that outcome. I know this leads to a huge number of possible moves (something in the range of the number of atoms in the universe) - but is 2048 more or less complex?
Pseudocode:
for the current arrangement of tiles
- work out the possible moves
- work out what the board will look like if the program adds a 2 to the board
- work out what the board will look like if the program adds a 4 to the board
- move on to working out the possible moves for the new state
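For what it's worth, here is roughly what that expansion looks like as runnable Python (my own sketch - the board is a flat tuple of 16 values, 0 meaning empty, and all names are just for illustration):

from itertools import product

def slide(row):
    # Slide one row toward the left, merging equal neighbours (2048 rules).
    tiles = [t for t in row if t]
    out, i = [], 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            out.append(tiles[i] * 2)
            i += 2
        else:
            out.append(tiles[i])
            i += 1
    return tuple(out + [0] * (len(row) - len(out)))

def moves(board):
    # Yield each distinct board reachable by one swipe.
    rows = [board[i:i + 4] for i in range(0, 16, 4)]
    cols = list(zip(*rows))
    for grid in (
        [slide(r) for r in rows],                    # left
        [slide(r[::-1])[::-1] for r in rows],        # right
        zip(*[slide(c) for c in cols]),              # up
        zip(*[slide(c[::-1])[::-1] for c in cols]),  # down
    ):
        flat = tuple(t for row in grid for t in row)
        if flat != board:        # a legal move must change the board
            yield flat

def successors(board):
    # All states after one swipe plus one random tile (a 2 or a 4).
    for moved in moves(board):
        for pos, tile in product(range(16), (2, 4)):
            if moved[pos] == 0:
                yield moved[:pos] + (tile,) + moved[pos + 1:]

print(sum(1 for _ in successors((2, 2) + (0,) * 14)))   # 88

Even a board with just two starting tiles branches 88 ways after a single turn (three legal moves, each followed by a 2 or a 4 in each empty cell), which hints at how quickly this blows up.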
At this point I'm thinking I will be here a while waiting on this to run...
So my question is - how would I begin to write this algorithm - what strategy is best for calculating the complexity of the game?
The big difference I see between 2048 and chess is that the program selects randomly between adding a 2 and a 4 when placing new tiles - which seems to add a massive number of additional possible moves.
Ultimately I'd like the program to output a single figure showing the number of possible permutations in the game. Is this possible?!
Let's determine how many possible board configurations there are.
Each tile can be either empty, or contain a 2, 4, 8, ..., 512 or 1024 tile.
That's 12 possibilities per tile. There are 16 tiles, so we get 16^12 = 2^48 possible board states - and this most likely includes a few unreachable ones.
Assuming we could store all of these in memory, we could work backwards from all board states that would generate a 2048 tile in the next move, doing a constant amount of work to link reachable board states to each other, which should give us a probabilistic best move for each state.
To store a board in memory, let's say we'd need 4 bits per tile, i.e. 64 bits = 8 bytes per board state.
2^48 board states would then require 8 * 2^48 = 2,251,799,813,685,248 bytes = 2048 TB (not to mention the added overhead of keeping track of the best boards). That's a bit beyond what a desktop computer these days has, although it might be possible to cleverly limit the number of boards required at any given time so as to get down to something that will fit on, say, a 3 TB hard drive, or perhaps even in RAM.
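The arithmetic is easy to check (a quick sanity-check snippet, nothing more):

states = 2 ** 48                # board states, per the estimate above
print(8 * states)               # 2251799813685248 bytes at 8 bytes each
print(8 * states / 2 ** 40)     # 2048.0 TB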
For reference, chess has an upper bound of 2^155 possible positions.
If we were to actually calculate, from the start, every possible move (in a breadth-first search-like manner), we'd get a massive number.
This isn't the exact number, but rather a rough estimate of the upper bound.
Let's make a few assumptions (which definitely aren't always true, but they keep things simple):
There are always 15 open squares
You always have 4 moves (left, right, up, down)
Once the total sum of all tiles on the board reaches 2048, it will take the minimum number of combinations to get a single 2048 (so, if placing a 2 makes the sum 2048, the combinations will be 2 -> 4 -> 8 -> 16 -> ... -> 2048, i.e. taking 10 moves)
A 2 will always get placed, never a 4 - the algorithm won't assume this, but, for the sake of calculating the upper bound, we will.
We won't consider the fact that there may be duplicate boards generated during this process.
To reach a total of 2048, there need to be 2048 / 2 = 1024 tiles placed.
You start with 2 randomly placed tiles, then repeatedly make a move and another tile gets placed, so there are about 1022 'turns' (a turn consisting of making a move and a tile getting placed) until we get a sum of 2048; then there are another 10 turns to get a 2048 tile.
In each turn, we have 4 moves, and there can be one of two tiles placed in one of 15 positions (30 possibilities), so that's 4*30 = 120 possibilities.
This would, in total, give us 120^1032 possible states.
If we instead assume a 4 will always get placed, we get 120^519 states.
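Since Python has arbitrary-precision integers, these estimates are easy to reproduce:

branching = 4 * 30                    # 4 moves x (2 tile values x 15 open squares)
print(len(str(branching ** 1032)))    # 2146 digits, if a 2 is always placed
print(len(str(branching ** 519)))     # 1080 digits, if a 4 is always placed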
Calculating the exact number will likely involve working our way through all these states, which won't really be viable.
Related
I'm creating a probability assistant for the game Battleship - in essence, for a given game state (field state and available ships), it should produce a field where every free cell is labelled with its probability of being a hit.
My current approach is a Monte Carlo-like computation: pick a random free cell, a random ship, and a random ship rotation, and check whether this placement is valid; if so, continue with the next ship from the available set. When the available set is empty, add that arrangement of ships to an output stack. Repeat this many times, then use the outputs to compute the probability of a hit for each cell.
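For reference, a stripped-down version of that sampler might look like this in Python (my own sketch: it assumes a 10x10 board and a particular fleet, and it ignores the "ships may not touch" rule some variants use):

import random
from collections import Counter

SIZE = 10
FLEET = [5, 4, 4, 3, 3, 3, 2, 2, 2, 2]    # one common variant's ships

def random_fleet(free):
    # Try to place every ship on free cells; return the cells used, or None.
    used = set()
    for length in FLEET:
        for _ in range(100):                     # bounded retries per ship
            r, c = random.randrange(SIZE), random.randrange(SIZE)
            dr, dc = random.choice([(0, 1), (1, 0)])
            cells = [(r + dr * i, c + dc * i) for i in range(length)]
            if all(cell in free and cell not in used for cell in cells):
                used.update(cells)
                break
        else:
            return None                          # dead end; caller retries
    return used

def heatmap(free, samples=2000):
    # Estimate P(ship in cell) for each free cell by sampling whole fleets.
    counts, accepted = Counter(), 0
    while accepted < samples:
        fleet = random_fleet(free)
        if fleet is not None:
            counts.update(fleet)
            accepted += 1
    return {cell: counts[cell] / accepted for cell in free}

free = {(r, c) for r in range(SIZE) for c in range(SIZE)}   # nothing shot yet
probs = heatmap(free)
print(max(probs.values()))      # central cells tend to come out highest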
Is there a sane algorithm to process all possible ship placements for a given field state?
An exact solution is possible. But it does not qualify as sane in my book.
Still, here is the idea.
There are many variants of the game, but let's say that we start with a worst case scenario of 1 ship of size 5, 2 of size 4, 3 of size 3 and 4 of size 2.
The "discovered state" of the board is all spots where shots have been taken, or ships have been discovered, plus the number of remaining ships. The discovered state naively requires 100 bits for the board (10x10, any can be shot) plus 1 bit for the count of remaining ships of size 5, 2 bits for the remaining ships of size 4, 2 bits for remaining ships of size 3 and 3 bits for remaining ships of size 2. This makes 108 bits, which fits in 14 bytes.
Now conceptually the idea is to figure out the map by shooting each square in turn in the first row, the second row, and so on, and recording the game state along with transitions. We can record the forward transitions and counts to find how many ways there are to get to any state.
Then find the end state of everything finished and all ships used and walk the transitions backwards to find how many ways there are to get from any state to the end state.
Now walk the data structure forward, knowing the probability of arriving at any state while on the way to the end, but this time we can figure out the probability of each way of finding a ship on each square as we go forward. Sum those and we have our probability heatmap.
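On a toy DAG, the three passes look like this (my own illustration - in the real problem the nodes would be the 14-byte states above and the edges their transitions):

from collections import defaultdict

EDGES = {"start": ["a", "b"], "a": ["end"], "b": ["a", "end"], "end": []}

def topo_order(edges, root):
    # Depth-first topological order of the DAG reachable from root.
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for nxt in edges[node]:
            visit(nxt)
        order.append(node)
    visit(root)
    return order[::-1]

ORDER = topo_order(EDGES, "start")

# Pass 1: forward counts - number of ways to reach each state.
fwd = defaultdict(int, start=1)
for node in ORDER:
    for nxt in EDGES[node]:
        fwd[nxt] += fwd[node]

# Pass 2: backward counts - number of ways from each state to the end.
bwd = defaultdict(int, end=1)
for node in reversed(ORDER):
    for nxt in EDGES[node]:
        bwd[node] += bwd[nxt]

# Pass 3: probability that a random full path passes through each state.
for node in ORDER:
    print(node, fwd[node] * bwd[node] / fwd["end"])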
Is this doable? In memory, no. In a distributed system it might be though.
Remember that I said that recording a state took 14 bytes? Adding a count to that takes another 8 bytes which takes us to 22 bytes. Adding the reverse count takes us to 30 bytes. My back of the envelope estimate is that at any point in our path there are on the order of a half-billion states we might be in with various ships left, killed ships sticking out and so on. That's 15 GB of data. Potentially for each of 100 squares. Which is 1.5 terabytes of data. Which we have to process in 3 passes.
So I have a non-overlapping set of rectangles and I want to efficiently determine where a rectangle of a given size can fit. Oh, also it will need to be reasonably efficient at updating as I will be “allocating” the space based on some other constraints once I find the possible valid locations.
The “complicated part” is that the rectangles can touch: for example, a rectangle at (0,0) that is 100 units wide and 50 units tall, plus a second rectangle at (0,50) that is 50x50, allow a fit of a 50-wide by 80-tall rectangle anywhere from (0,0) through (0,20). Finding a fit may involve “merging” more than two rectangles.
(Note: I'll be starting with a small number of adjacent rectangles, approximately 3, and removing rectangular areas as I "allocate" them. I expect the vast majority of these allocations will not exactly cover an existing rectangle, and will leave me with 2 or more new rectangles.)
At first I thought I could keep two “views” of my rectangles, one preferring to break in the y-axis to keep the widest possible rectangles and another that breaks in the x-axis to keep the tallest possible, and then I could do...um...something clever to search.
Then I figured “um, people have been working on this stuff for a long time, and just because I can’t figure out how to construct the right Google query doesn’t mean this isn’t some straightforward application of quad trees or r-lists that I have somehow forgotten to know about”.
So is there already a good solution to this problem?
(So what am I really doing? I'm laser cutting features into the floor of a box. Features like 'NxM circles 1.23" in diameter with 0.34" separations'. The floor starts as a rectangle with small rectangles already removed from the corners for supports. I'm currently keeping a list of unallocated rectangles sorted by y with x as the tie breaker, and in some limited cases I can do a merge between 2 rectangles in that list if it produces a large enough result to fit my current target into. That doesn't really work all that well. I could also just place the features manually, but I would rather write a program for it.)
(Also: how many “things” am I doing it to? So far my boxes have had 20 to 40 “features” to place, and computers are pretty quick so some really inefficient algorithm may well work, but this is a hobby project and I may as well learn something interesting as opposed to crudely lashing some code together)
Ok, so absent a good answer to this, I came at it from another angle.
I took a look at all my policies and came up with a pretty short list: "allocate anywhere it fits", "allocate at a specific x,y position", and "allocate anywhere with y>(specific value)".
I decided to test with a pretty simple data structure: a list of non-overlapping rectangles representing space that is allocatable, not sorted, merged, or organized in any specific way. The closest thing to interesting is that I track the extents of the rectangles to make retrieving the min/max X or Y quick.
I made myself a little function that checks an allocation at a specific position and returns a list of rectangles blocking the allocation, each trimmed to its intersection with the prospective allocation (and it produces a new free list if applicable). I used this as a primitive to implement all 3 policies.
"Allocate at a specific x,y position" is trivial: use the primitive, and if you see no blocking rectangles, the allocation is successful.
I implemented "allocate anywhere" as "allocate with y>N" where N is "minimum Y" from the free list.
"allocate where y>N" starts with x=min-X from the free list and checks for an allocation there. If it finds no blockers it is done. If it finds blockers it moves x to the max-X of the blocker list. If that places the right edge of the prospective allocation past max-X for the free list then x is set back to min-X for the free list and y is set to the minimum of all the max-Y's in all the blocking lists encountered since the last Y change.
For my usage patterns I also get some mileage from remembering the size of the last failed allocation (with N=minY), and fast failing any that are at least as wide/tall as the last failure.
Performance is fast enough for my usage patterns (free list starting with one to three items, allocations in the tens to forties).
Currently, I have a pool of basketball players where I have a projected total of points for each player. Additionally, I have a normal distribution function that gives me a random drawing from a normal distribution for each player. Currently, I have an algorithm that calculates n unique random lineups of 8 players based on some constraints. Between each lineup, the normal distribution function runs again to produce new predictions for each player. Then the best lineup is produced for that specific set of predictions.
I would like to tweak this algorithm in the following way. I would like to have 4 tiers of maximum and minimum percentages where each player is assigned a tier. Within the number of lineups generated, I would like each specific player to occur with that frequency. So for example if I wanted to generate 10 lineups and player 1 is in tier 1 which requires the player to be between 50-60%, then the player would occur in 5-6 lineups ideally.
I'm struggling with how to modify my current algorithm to include this stipulation. Any thoughts would be greatly appreciated! I just don't know how to force each player within a specific range of percentages.
There are a lot of ways to do it.
Here is an easy approach. Keep a current relative odds of being picked for each player. The actual probability is the relative odds divided by the sum of the odds over all players. Each person starts with the expected number of times they should be selected. Whenever someone is selected, their relative odds are reduced by 1. If the odds go below 0, that person is out of the pool.
This approach guarantees that each player will not be in more than a maximum number of teams. It makes it unlikely, but not impossible, that any given player will be in fewer teams than you want.
An easy way to solve that is to randomly round people's desired frequencies up or down to integer counts whose total matches the number of slots, so that everything comes out even.
There is yet another problem, though, which is that it is possible you won't succeed in filling all the teams. But if you assign from the most popular player to the least, the odds of such failures should be acceptably low. Doubly so if you widen the ranges slightly by generating a few extra teams, then throwing away the ones that didn't work out.
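A sketch of that relative-odds bookkeeping in Python (player names and counts are hypothetical; note it can still dead-end, as discussed above):

import random

def pick_lineup(remaining, size=8):
    # Draw one lineup of `size` distinct players, weighted by how many
    # appearances each player still "owes" (their relative odds).  Can
    # dead-end if fewer than `size` players have odds left.
    odds = {p: w for p, w in remaining.items() if w > 0}
    lineup = []
    for _ in range(size):
        players = list(odds)
        pick = random.choices(players, weights=[odds[p] for p in players])[0]
        lineup.append(pick)
        remaining[pick] -= 1     # selected: reduce relative odds by 1
        del odds[pick]           # no duplicate players within a lineup
    return lineup

# Hypothetical pool: 16 players, each expected in 5 of 10 lineups of 8.
remaining = {f"player{i}": 5 for i in range(16)}
lineups = [pick_lineup(remaining) for _ in range(10)]
print(lineups[0])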
First draft
So if I understand correctly, you have N players that might appear in the first position of the string. But you want them to be selected not at random, but according to some percentage.
Now the first step is to normalize those percentages:
Alice 20%
Bob 40%
Charlie 10%
Doug 60%
Eric 30%
The sum is 160%, so you generate a random number from 1 to 160; say it's 97.
97 is more than 20, so subtract 20 and ignore Alice.
77 is more than 40, so subtract 40 and ignore Bob.
37 is more than 10, so subtract 10 and ignore Charlie.
27 is less than 60: Doug it is.
You can also pre-populate a 160-element array with 20 "Alice" indexes, 60 "Doug" indexes etc., and your player is players[array[random(160)]].
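Both variants are a few lines of Python (this mirrors the worked example above, including the draw of 97):

import random

WEIGHTS = {"Alice": 20, "Bob": 40, "Charlie": 10, "Doug": 60, "Eric": 30}

def pick(weights, r=None):
    # Walk the cumulative weights; r defaults to a fresh random draw.
    if r is None:
        r = random.randint(1, sum(weights.values()))
    for name, w in weights.items():
        if r <= w:
            return name
        r -= w

print(pick(WEIGHTS, r=97))   # "Doug", exactly as in the walkthrough
print(pick(WEIGHTS))         # a fresh weighted draw

The pre-populated 160-element array trades memory for a constant-time lookup; in Python, random.choices(list(WEIGHTS), weights=WEIGHTS.values()) does the same weighted draw in the standard library.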
I am trying to generate a Settlers of Catan game board and am stuck trying to create an efficient implementation of hex numbers.
The goal is to randomly generate a set of numbers from 2-12 (with only one instance each of 2 and 12, and two instances of every number in between), ensuring that the values 6 and 8 are never hexagonally adjacent to one another. 6 and 8 are special because they are the numbers you are most likely to roll, so the game does not want them next to one another - players would get disproportionately large amounts of that resource. (A 7 means you have to discard resources.)
The expected result: http://imgur.com/Ng7Siy8
Right now I have a working brute force implementation that is very slow and I am hoping to optimize it, but I am not sure how. The implementation is in VBA, which has constrained the data structures I can use.
In pseudo code I am doing something like this:
For Each of the 19 hexes
Loop Until we have a valid number
Generate a random number between 1 and 12
Check
Have we already placed too many of that number?
Is the number equal to 6 or 8?
Is the number being placed on a hex next to another hex with 6 or 8 placed on it?
If valid
Place
If invalid
Regenerate random number
It's very manual and subject to the random generator function, which means a run can be anywhere from really short to really, really long (compounded over 19 hexes).
Note: How my numbers are being placed seems important. I start at the outside of the gameboard (see here http://imgur.com/Ng7Siy8) on the gray hex with number 6, and then move counter clockwise around the board inward. This means that my next hex is 2 light green, 4 light orange...continuing around to 9 dark green and then coming inwards to 4 light orange.
This pattern limits the number of comparisons I need to make.
There are several optimizations you can do. First of all, you know exactly how many of each number are present across the tiles - you have 2,3,3,4,4,5,5,6,6,8,8,9,9,10,10,11,11,12. So start off with this set of numbers - that eliminates the check for whether a number has already been placed too many times. Now you can do a random shuffle of this set and check whether the result is "valid". This will still result in too many negative checks, I think, but it should perform better than your current approach.
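In Python the shuffle-and-check loop is short (a sketch - the question is in VBA, but the idea carries over; I'm using axial hex coordinates and assuming the 19th hex is the desert, which gets no number):

import random

# Axial coordinates of the 19 hexes of a radius-2 hexagonal board.
HEXES = [(q, r) for q in range(-2, 3) for r in range(-2, 3) if abs(q + r) <= 2]
NEIGHBOR_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
TOKENS = [2, 3, 3, 4, 4, 5, 5, 6, 6, 8, 8, 9, 9, 10, 10, 11, 11, 12]

def valid(assignment):
    # No 6 or 8 on two adjacent hexes.
    for (q, r), n in assignment.items():
        if n in (6, 8):
            for dq, dr in NEIGHBOR_DIRS:
                if assignment.get((q + dq, r + dr)) in (6, 8):
                    return False
    return True

def random_board():
    # Shuffle the fixed token multiset until the 6/8 constraint holds.
    tiles = list(HEXES)
    random.shuffle(tiles)
    land = tiles[:18]            # the 19th hex is the desert (no token)
    while True:
        random.shuffle(TOKENS)
        assignment = dict(zip(land, TOKENS))
        if valid(assignment):
            return assignment

print(random_board())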
Place the 8s first, calculate which of the remaining tiles you'd be happy to place the 6s on (i.e. the non-adjacent ones), then choose one of those at random for each 6. Then place the rest.
Here are the facts first.
In the game of bridge there are 4 players, named North, South, East and West.
All 52 cards are dealt, 13 to each player.
There is an honour counting system: Ace = 4 points, King = 3 points, Queen = 2 points and Jack = 1 point.
I'm creating a "Card dealer" with constraints where for example you might say that the hand dealt to north has to have exactly 5 spades and between 13 to 16 Honour counting points, the rest of the hands are random.
How do I accomplish this without affecting the "randomness" in the best way and also having effective code?
I'm coding in C# and .Net but some idea in Pseudo code would be nice!
Since somebody already mentioned my Deal 3.1, I'd like to point out some of the optimizations I made in that code.
First of all, to get the most flexible constraints, I wanted to add a complete programming language to my dealer, so you could generate whole libraries of constraints with different types of evaluators and rules. I used Tcl for that language, because I was already learning it for work, and, in 1994 when Deal 0.0 was released, Tcl was the easiest language to embed inside a C application.
Second, I needed the constraint language to run fairly fast. The constraints are running deep inside the loop. Quite a lot of code in my dealer is little optimizations with lookup tables and the like.
One of the most surprising and simple optimizations was to not deal cards to a seat until a constraint is checked on that seat. For example, if you want north to match constraint A and south to match constraint B, and your constraint code is:
match constraint A to north
match constraint B to south
Then only when you get to the first line do you fill out the north hand. If it fails, you reject the complete deal. If it passes, next fill out the south hand and check its constraint. If it fails, throw out the entire deal. Otherwise, finish the deal and accept it.
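A toy illustration of that reject-early flow in Python (not Deal's actual code - Deal is C plus Tcl - just the shape of the optimization; the constraints here are made up):

import random

def hcp(hand):
    # Cards are 0..51; rank = card % 13, so J=9, Q=10, K=11, A=12.
    return sum(max(0, card % 13 - 8) for card in hand)

def deal_with_constraints(north_ok, south_ok, tries=1_000_000):
    # Lazy dealing: draw North and test it before dealing anyone else.
    deck = list(range(52))
    for _ in range(tries):
        north = random.sample(deck, 13)
        if not north_ok(north):
            continue                  # whole deal rejected; South never dealt
        rest = [c for c in deck if c not in set(north)]
        south = random.sample(rest, 13)
        if not south_ok(south):
            continue                  # reject the entire deal, as described
        remainder = [c for c in rest if c not in set(south)]
        random.shuffle(remainder)
        return north, south, remainder[:13], remainder[13:]
    raise RuntimeError("no matching deal found")

north, south, east, west = deal_with_constraints(
    lambda h: 13 <= hcp(h) <= 16,     # e.g. North: 13-16 honour points
    lambda h: hcp(h) >= 10)           # e.g. South: a decent responding hand
print(hcp(north), hcp(south))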
I found this optimization when doing some profiling and noticing that most of the time was spent in the random number generator.
There is one fancy optimization, which can work in some instances, called "smart stacking":
deal::input smartstack south balanced hcp 20 21
This generates a "factory" for the south hand which takes some time to build but which can then very quickly fill out the one hand to match this criteria. Smart stacking can only be applied to one hand per deal at a time, because of conditional probability problems. [*]
Smart stacking takes a "shape class" - in this case, "balanced" - a "holding evaluator" - in this case, "hcp" - and a range of values for the holding evaluator. A "holding evaluator" is any evaluator which is applied to each suit and then totaled, so hcp, controls, losers, hcp_plus_shape, etc. are all holding evaluators.
For smartstacking to be effective, the holding evaluator needs to take a fairly limited set of values. How does smart stacking work? That might be a bit more than I have time to post here, but it's basically a huge set of tables.
One last comment: If you really only want this program for bidding practice, and not for simulations, a lot of these optimizations are probably unnecessary. That's because, by the very nature of practice, it isn't worth the time to drill bids that are extremely rare. So if you have a condition which only comes up once in a billion deals, you really might not want to worry about it. :)
[Edit: Add smart stacking details.]
Okay, there are exactly 8192=2^13 possible holdings in a suit. Group them by length and honor count:
Holdings(length,points) = { set of holdings with this length and honor count }
So
Holdings(3,7) = {AK2, AK3,...,AKT,AQJ}
and let
h(length,points) = |Holdings(length,points)|
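This table is tiny and trivial to build by brute force over all 2^13 holdings - in Python, for instance (my own sketch, not Deal's code):

from itertools import combinations
from collections import Counter

RANKS = "AKQJT98765432"
POINTS = {"A": 4, "K": 3, "Q": 2, "J": 1}

# h[(length, points)] = number of holdings in one suit with that length
# and that many honour points.
h = Counter()
for length in range(14):
    for holding in combinations(RANKS, length):
        h[(length, sum(POINTS.get(c, 0) for c in holding))] += 1

assert sum(h.values()) == 2 ** 13
print(h[(3, 7)])    # 10: AK2 ... AKT plus AQJ, matching the example above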
Now list all shapes that match your shape condition (spades=5):
5-8-0-0
5-7-1-0
5-7-0-1
...
5-0-0-8
Note that the collection of all possible hand shapes has size 560, so this list is not huge.
For each shape, list the ways you can get the total honor points you are looking for by listing the honor points per suit. For example,
Shape Points per suit
5-4-4-0 10-3-0-0
5-4-4-0 10-2-1-0
5-4-4-0 10-1-2-0
5-4-4-0 10-0-3-0
5-4-4-0 9-4-0-0
...
Using our sets Holdings(length,points), we can compute the number of ways to get each of these rows.
For example, for the row 5-4-4-0 10-3-0-0, you'd have:
h(5,10)*h(4,3)*h(4,0)*h(0,0)
So, pick one of these rows at random, with relative probability based on the count, and then, for each suit, choose a holding at random from the correct Holdings() set.
Obviously, the wider the range of hand shapes and points, the more rows you will need to pre-compute. With a little more code, you can still do this with some cards pre-determined - e.g. if you know where the ace of spades is, or west's whole hand, or whatever.
[*] In theory, you can solve these conditional probability issues for smart stacking with multiple hands, but the solution to the problem would make it effective only for extremely rare types of deals. That's because the number of rows in the factory's table is roughly the product of the number of rows for stacking one hand times the number of rows for stacking the other hand. Also, the h() table has to be keyed on the number of ways of dividing the n cards amongst hand 1, hand 2, and other hands, which changes the number of values from roughly 2^13 to 3^13 possible values, which is about two orders of magnitude bigger.
Since the numbers are quite small here, you could just take the heuristic approach: Randomly deal your cards, evaluate the constraints and just deal again if they are not met.
Depending on how fast your computer is, it might be enough to do this:
Repeat:
do a random deal
Until the board meets all the constraints
As with all performance questions, the thing to do is try it and see!
edit I tried it and saw:
done 1000000 hands in 12914 ms, 4424 ok
This is without giving any thought to optimisation - and it produces 342 hands per second meeting your criteria of "North has 5 spades and 13-16 honour points". I don't know the details of your application but it seems to me that this might be enough.
I would go for this flow, which I think does not affect the randomness (other than by pruning solutions that do not meet constraints):
List in your program all possible combinations of "valued" cards whose total honour point count is between 13 and 16. Then pick randomly one of these combinations, removing the cards from a fresh deck.
Count how many spades you already have among the valued cards, and pick randomly among the remaining spades of the deck until you meet the count.
Now pick from the deck as many non-spade, non-valued cards as you need to complete the hand.
Finally pick the other hands among the remaining cards.
You can write a program that generates the combinations of my first point, or simply hardcode them, accounting for suit symmetries to reduce the number of lines of code :)
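Generating the combinations from the first point is cheap to brute-force, since there are only 16 honour cards (a sketch; names are illustrative only):

from itertools import chain, combinations

HONOURS = [(rank, suit) for suit in "SHDC" for rank in "AKQJ"]
VALUE = {"A": 4, "K": 3, "Q": 2, "J": 1}

def honour_combos(lo=13, hi=16):
    # All sets of honour cards worth lo..hi points (at most 13 cards).
    all_subsets = chain.from_iterable(
        combinations(HONOURS, n) for n in range(14))
    return [s for s in all_subsets
            if lo <= sum(VALUE[rank] for rank, _ in s) <= hi]

combos = honour_combos()
print(len(combos))   # small enough to enumerate up front and pick from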
Since you want to practise bidding, I guess you will likely have various forms of constraints (and not just the 1S opening that I'm guessing underlies this current problem) coming up in the future. Trying to come up with optimal hand generation tailored to each constraint could be a huge time sink and not really worth the effort.
I would suggest you use rejection sampling: Generate a random deal (without any constraints) and test if it satisfies your constraints.
In order to make this feasible, I suggest you concentrate on making the random deal generation (without any constraints) as fast as you can.
To do this, map each deal to a 12-byte integer (the total number of bridge deals fits in 12 bytes). Generating a random 12-byte integer can be done with just three 4-byte random-number calls. Of course, since the number of deals doesn't exactly fill 12 bytes, you may have a bit of extra processing to do here, but I expect it won't be much.
Richard Pavlicek has an excellent page (with algorithms) to map a deal to a number and back.
See here: http://www.rpbridge.net/7z68.htm
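For a sense of scale (and to check the 12-byte claim):

from math import factorial
import secrets

# Number of distinct bridge deals: 52! / (13!)^4, about 5.36e28,
# which indeed fits in 96 bits = 12 bytes.
TOTAL_DEALS = factorial(52) // factorial(13) ** 4
print(TOTAL_DEALS.bit_length())        # 96

# A uniform random deal index; turning the index back into four hands
# is Pavlicek's algorithm from the page linked above.
index = secrets.randbelow(TOTAL_DEALS)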
I would also suggest you look at existing bridge hand dealing software (like Deal 3.1, which is freely available). Deal 3.1 also supports double dummy analysis. Perhaps you could make it work for you without having to roll your own.
Hope that helps.