Get all possible valid positions of ships in battleship game - algorithm

I'm creating a probability assistant for the Battleship game - in essence, for a given game state (field state and available ships), it would produce a field where every free cell is assigned a probability of being a hit.
My current approach is a Monte Carlo-like computation: pick a random free cell, a random ship and a random ship rotation, check if this placement is valid, and if so continue with the next ship from the available set. If the available set is empty, add the resulting ship arrangement to the output stack. Redo this multiple times and use the outputs to compute the probability of each cell, roughly as in the sketch below.
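In Python-ish form, the sampling loop looks roughly like this (a simplified sketch, not my actual code; the fleet, the retry limit and the helper names are just for illustration, and ship-spacing rules are ignored):

    import random

    # Simplified sketch, not my actual code. Assumptions: a 10x10 board, cells
    # addressed as (row, col), a `blocked` set of cells that are known misses,
    # and the classic 1x5, 2x4, 3x3, 4x2 fleet. Ship-spacing rules are ignored.
    SIZE = 10
    FLEET = [5, 4, 4, 3, 3, 3, 2, 2, 2, 2]

    def sample_layout(blocked, tries_per_ship=200):
        """Place the whole fleet at random; return None if we get stuck."""
        occupied = set()
        for ship in FLEET:
            for _ in range(tries_per_ship):
                r, c = random.randrange(SIZE), random.randrange(SIZE)
                dr, dc = random.choice(((0, 1), (1, 0)))  # horizontal or vertical
                cells = [(r + dr * i, c + dc * i) for i in range(ship)]
                if all(0 <= rr < SIZE and 0 <= cc < SIZE
                       and (rr, cc) not in blocked and (rr, cc) not in occupied
                       for rr, cc in cells):
                    occupied.update(cells)
                    break
            else:
                return None  # gave up on this sample
        return occupied

    def heatmap(blocked, samples=20000):
        """Fraction of successful samples in which each cell holds a ship."""
        counts = [[0] * SIZE for _ in range(SIZE)]
        ok = 0
        for _ in range(samples):
            layout = sample_layout(blocked)
            if layout is None:
                continue
            ok += 1
            for r, c in layout:
                counts[r][c] += 1
        return [[n / ok if ok else 0.0 for n in row] for row in counts]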
Is there a sane algorithm to enumerate all possible ship placements for a given field state?

An exact solution is possible, but it does not qualify as sane in my book.
Still, here is the idea.
There are many variants of the game, but let's say that we start with a worst case scenario of 1 ship of size 5, 2 of size 4, 3 of size 3 and 4 of size 2.
The "discovered state" of the board is all spots where shots have been taken, or ships have been discovered, plus the number of remaining ships. The discovered state naively requires 100 bits for the board (10x10, any can be shot) plus 1 bit for the count of remaining ships of size 5, 2 bits for the remaining ships of size 4, 2 bits for remaining ships of size 3 and 3 bits for remaining ships of size 2. This makes 108 bits, which fits in 14 bytes.
Now conceptually the idea is to figure out the map by shooting each square in turn in the first row, the second row, and so on, and recording the game state along with transitions. We can record the forward transitions and counts to find how many ways there are to get to any state.
Then find the end state of everything finished and all ships used and walk the transitions backwards to find how many ways there are to get from any state to the end state.
Now walk the data structure forward, knowing the probability of arriving at any state while on the way to the end, but this time we can figure out the probability of each way of finding a ship on each square as we go forward. Sum those and we have our probability heatmap.
Is this doable? In memory, no. In a distributed system it might be though.
Remember that I said that recording a state took 14 bytes? Adding a count to that takes another 8 bytes which takes us to 22 bytes. Adding the reverse count takes us to 30 bytes. My back of the envelope estimate is that at any point in our path there are on the order of a half-billion states we might be in with various ships left, killed ships sticking out and so on. That's 15 GB of data. Potentially for each of 100 squares. Which is 1.5 terabytes of data. Which we have to process in 3 passes.


How many times do I have to repeat a specific shuffle of playing cards to get back to where I started?

This is my first post on Stack Overflow, so please excuse my mistakes if I'm doing something wrong.
Ok so I'm trying to find an algorithm/function/something that can calculate how many times I have to do the same type of shuffle of 52 playing cards to get back to where I started.
The specific shuffle I'm using goes like this:
- You will have two piles.
- You have the deck with the back facing up. (Let's call this pile 1.)
- You will now alternate between putting a card at the back of pile 1 and putting one in pile 2. Example for the pile-1 step: say you have 4 cards in a pile, back facing up, with card 4 closest to the ground and card 1 closest to the sky (their order is 4,3,2,1). You take card 1 and put it beneath card 4, meaning card 1 is now closest to the ground and card 4 is second closest; the order is now 1,4,3,2.
- Pile 2 will "stack downwards", meaning you will always put the new card at the bottom of that pile (back always facing up).
- The first card will always get put at the back of pile 1.
- Repeat this process until all cards are in pile 2.
- Now take pile 2 and do the exact same thing you just did.
My question is: How many times do I have to repeat this process until I get back where I started?
Side notes:
- If this is a common way of shuffling cards and there already is a solution, please let me know.
- I'm still new to math and coding so if writing up an equation/algorithm/code for this is really easy then don't laugh at me pls ;<.
- Sorry if I'm asking this at the wrong place, I don't know how all this works.
- English isn't my main language and I'm not a native speaker either so please excuse any bad grammar and/or other grammatical errors.
I do, however, have code that does all of this (link here), but I'm unsure if it's the most effective way to do it, and it hasn't given a result yet, so I don't even know if it works. If you want to give tips or suggestions on how to change it then please do, I would really appreciate it. It's done in Scratch, however, because I can't write in any other languages... sorry...
Thanks in advance.
Any fixed shuffle is equivalent to a permutation; what you want to know is the order of that permutation. This can be computed by decomposing the permutation into cycles and then computing the least common multiple of the cycle lengths.
I'm not able to properly understand your algorithm, but here's an example of shuffling 8 elements and then finding the number of times that shuffle needs to be repeated to get back to an unshuffled state.
Suppose the sequence starts as 1,2,3,4,5,6,7,8 and after one shuffle, it's 3,1,4,5,2,8,7,6.
The number 1 goes to position 2, then 2 goes to position 5, then 5 goes to position 4, then 4 goes to position 3, then 3 goes to position 1. So the first cycle is (1 2 5 4 3).
The number 6 goes to position 8, then 8 goes to position 6. So the next cycle is (6 8).
The number 7 stays in position 7, so this is a trivial cycle (7).
The lengths of the cycles are 5, 2 and 1, so the least common multiple is 10. This shuffle takes 10 iterations to get back to the initial state.
If you don't mind sitting down with pen and paper for a while, you should be able to follow this procedure for your own shuffling algorithm.
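If you'd rather let a program do the bookkeeping, here is a small sketch of the same procedure (Python; `shuffled` describes one application of the shuffle, using the 8-element example above):

    from math import gcd

    def shuffle_order(shuffled):
        """Number of repeats of a fixed shuffle needed to restore the deck.

        `shuffled` lists, for each position (0-based), which original position's
        card ends up there after one shuffle. The order of the permutation is
        the least common multiple of its cycle lengths.
        """
        n = len(shuffled)
        seen = [False] * n
        order = 1
        for start in range(n):
            if seen[start]:
                continue
            length = 0            # walk one cycle of the permutation
            i = start
            while not seen[i]:
                seen[i] = True
                i = shuffled[i]
                length += 1
            order = order * length // gcd(order, length)  # lcm so far
        return order

    # The 8-element example above, converted to 0-based indices:
    print(shuffle_order([x - 1 for x in [3, 1, 4, 5, 2, 8, 7, 6]]))  # -> 10

Once your 52-card shuffle is written down as such a permutation, the same function gives the answer for the full deck.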

Algorithm to Make a Set of Random Outcomes Approach a Specific Percentage

I have a pool of basketball players with a projected total of points for each player. Additionally, I have a function that gives me a random drawing from a normal distribution for each player. Currently, I have an algorithm that calculates n unique random lineups of 8 players based on some constraints. Between each lineup, the normal distribution function runs again to produce new predictions for each player, and then the best lineup is produced for that specific set of predictions.
I would like to tweak this algorithm in the following way. I would like to have 4 tiers of maximum and minimum percentages, with each player assigned to a tier. Across the lineups generated, each player should then occur with a frequency inside their tier's range. So, for example, if I wanted to generate 10 lineups and player 1 is in tier 1, which requires the player to appear in 50-60% of lineups, then the player would ideally occur in 5-6 lineups.
I'm struggling with how to modify my current algorithm to include this stipulation. Any thoughts would be greatly appreciated! I just don't know how to force each player within a specific range of percentages.
There are a lot of ways to do it.
Here is an easy approach. Keep a current relative odds of being picked for each player; the actual probability is the relative odds divided by the sum of the odds. Each player starts with the expected number of times they should be selected. Whenever someone is selected, their relative odds are reduced by 1. If they go below 0, that person is out of the pool.
This approach guarantees that each player will not be in more than a maximum number of teams. It makes it unlikely, but not impossible, that any given player will be in fewer teams than you want.
An easy way to solve that is to randomly round people's desired frequencies up or down so that the integer counts add up exactly; now everything has to come out even.
There is yet another problem, though: it is possible that you won't succeed in filling all the teams. But if you assign from the most popular player to the least, the odds of such failures should be acceptably low. Doubly so if you widen the ranges slightly by populating a few extra teams, then throwing away the ones that didn't work out.
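Here is a rough sketch of that bookkeeping (Python; the function names and the simple give-up behaviour are mine, and any lineup constraints beyond the odds themselves are omitted):

    import random

    def build_lineups(expected_counts, num_lineups, lineup_size=8):
        """expected_counts: {player: expected number of lineups they appear in}.

        Each selection costs a player 1 unit of relative odds, so nobody can
        overshoot their target by much; players at 0 or below leave the pool.
        """
        odds = dict(expected_counts)          # current relative odds per player
        lineups = []
        for _ in range(num_lineups):
            pool = {p: w for p, w in odds.items() if w > 0}
            if len(pool) < lineup_size:
                raise ValueError("ran out of eligible players; widen the ranges")
            lineup = []
            for _ in range(lineup_size):
                players, weights = zip(*pool.items())
                pick = random.choices(players, weights=weights, k=1)[0]
                lineup.append(pick)
                odds[pick] -= 1               # spend one unit of this player's odds
                del pool[pick]                # no duplicates within a lineup
            lineups.append(lineup)
        return lineups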
First draft
So if I understand correctly, you have N players that might appear in the first position of the string. But you want them to be selected not at random, but according to some percentage.
Now the first step is to normalize those percentages:
Alice 20%
Bob 40%
Charlie 10%
Doug 60%
Eric 30%
The sum is 160%, so you generate a random number from 1 to 160; say it's 97.
97 is more than 20, so subtract 20 and ignore Alice.
77 is more than 40, so subtract 40 and ignore Bob.
37 is more than 10, so subtract 10 and ignore Charlie.
27 is less than 60: Doug it is.
You can also pre-populate a 160-element array with 20 "Alice" indexes, 60 "Doug" indexes etc., and your player is players[array[random(160)]].
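In code, that walk through the normalized percentages might look like this (a small sketch using the example numbers above):

    import random

    WEIGHTS = [("Alice", 20), ("Bob", 40), ("Charlie", 10), ("Doug", 60), ("Eric", 30)]

    def pick_player(weights):
        """Roulette-wheel selection over integer weights (sum = 160 here)."""
        total = sum(w for _, w in weights)
        roll = random.randint(1, total)   # e.g. 97 in the walkthrough above
        for name, w in weights:
            if roll <= w:
                return name
            roll -= w                     # roll is more than this weight: subtract and move on

    print(pick_player(WEIGHTS))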

Generating Settlers of Catan Numbers?

I am trying to generate a Settlers of Catan game board and am stuck trying to create an efficient implementation of hex numbers.
The goal is to randomly generate a set of numbers from 2-12 (with only one instance each of 2 and 12, and two instances of every number in between, except 7, which is never placed on a hex), ensuring that 6 and 8 are not adjacent to one another on the hex grid. 6 & 8 are special because they are the numbers you are most likely to roll, so the game does not want them next to one another, as players would get disproportionately more resources of that kind. A 7 means you have to discard resources.
The expected result: http://imgur.com/Ng7Siy8
Right now I have a working brute force implementation that is very slow and I am hoping to optimize it, but I am not sure how. The implementation is in VBA, which has constrained the data structures I can use.
In pseudo code I am doing something like this:
For Each of the 19 hexes
    Loop Until we have a valid number
        Generate a random number between 1 and 12
        Check
            Have we already placed too many of that number?
            Is the number equal to 6 or 8?
            Is the number being placed on a hex next to another hex with 6 or 8 placed on it?
        If valid
            Place
        If invalid
            Regenerate random number
It's very manual and at the mercy of the random number generator, which means generating a board can take anywhere from very little time to a really, really long time (compounded over 19 hexes).
Note: How my numbers are being placed seems important. I start at the outside of the gameboard (see here http://imgur.com/Ng7Siy8) on the gray hex with number 6, and then move counter clockwise around the board inward. This means that my next hex is 2 light green, 4 light orange...continuing around to 9 dark green and then coming inwards to 4 light orange.
This pattern limits the number of comparisons I need to make.
There are several optimizations you can do. First of all, you know exactly how many of each number are present across the tiles - you have 2,3,3,4,4,5,5,6,6,8,8,9,9,10,10,11,11,12. So start off with this set of numbers; this eliminates the check for whether a number has been placed too many times. Now you can do a random shuffle of this set of numbers and check if it is "valid". This will still result in too many negative checks, I think, but it should perform better than your current approach.
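A sketch of that shuffle-and-check idea (in Python rather than VBA; the adjacency list for the 18 numbered hexes is left as an input, since it depends on how you index your board):

    import random

    NUMBERS = [2, 3, 3, 4, 4, 5, 5, 6, 6, 8, 8, 9, 9, 10, 10, 11, 11, 12]

    def valid(assignment, adjacency):
        """No 6 or 8 on two adjacent hexes. adjacency[i] lists the neighbors of hex i."""
        hot = {i for i, n in enumerate(assignment) if n in (6, 8)}
        return all(j not in hot for i in hot for j in adjacency[i])

    def generate_numbers(adjacency):
        """Shuffle the fixed multiset until a valid arrangement appears."""
        nums = NUMBERS[:]
        while True:
            random.shuffle(nums)
            if valid(nums, adjacency):
                return nums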
Place the 8 first, calculate which of the remaining tiles you'd be happy to place the 6 on (i.e. non-adjacent), then choose one at random for the 6. Then place the rest.

How to work out the complexity of the game 2048?

Edit: This question is not a duplicate of What is the optimal algorithm for the game 2048?
That question asks 'what is the best way to win the game?'
This question asks 'how can we work out the complexity of the game?'
They are completely different questions. I'm not interested in which steps are required to move towards a 'win' state - I'm interested in finding out whether the total number of possible steps can be calculated.
I've been reading this question about the game 2048 which discusses strategies for creating an algorithm that will perform well playing the game.
The accepted answer mentions that:
the game is a discrete state space, perfect information, turn-based game like chess
which got me thinking about its complexity. For deterministic games like chess, it's possible (in theory) to work out all the possible moves that lead to a win state and work backwards, selecting the best moves that keep leading towards that outcome. I know this leads to a huge number of possible moves (something in the range of the number of atoms in the universe), but is 2048 more or less complex?
Pseudocode:
for the current arrangement of tiles
- work out the possible moves
- work out what the board will look like if the program adds a 2 to the board
- work out what the board will look like if the program adds a 4 to the board
- move on to working out the possible moves for the new state
At this point I'm thinking I will be here a while waiting on this to run...
So my question is - how would I begin to write this algorithm - what strategy is best for calculating the complexity of the game?
The big difference I see between 2048 and chess is that the program can select randomly between a 2 and a 4 when adding new tiles, which seems to add a massive number of additional possible moves.
Ultimately I'd like the program to output a single figure showing the number of possible permutations in the game. Is this possible?!
Let's determine how many possible board configurations there are.
Each tile can be either empty, or contain a 2, 4, 8, ..., 512 or 1024 tile.
That's 11 possibilities per tile. There are 16 tiles, so we get 11^16 ≈ 2^55 possible board states - and this most likely includes a few unreachable ones.
Assuming we could store all of these in memory, we could work backwards from all board states that would generate a 2048 tile in the next move, doing a constant amount of work to link reachable board states to each other, which should give us a probabilistic best move for each state.
To store all of these in memory, let's say we'd need 4 bits per tile, i.e. 64 bits = 8 bytes per board state.
2^55 board states would then require roughly 8 * 2^55 = 2^58 bytes ≈ 260,000 TB (not to mention added overhead to keep track of the best boards). That's far beyond what a desktop computer these days has, although it might be possible to cleverly limit the number of boards required at any given time so as to get down to something that will fit on, say, a 3 TB hard drive, or perhaps even in RAM.
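For illustration, the 4-bits-per-tile packing could look like this (a sketch; storing exponents rather than raw tile values is my own choice here, not something the estimate depends on):

    # Pack a 4x4 board into one 64-bit integer: 4 bits per tile, storing the
    # exponent (0 = empty, 1 = tile "2", 2 = tile "4", ..., 11 = tile "2048").
    def pack(board):                      # board: 4x4 list of tile values, 0 for empty
        state = 0
        for i, row in enumerate(board):
            for j, value in enumerate(row):
                exp = value.bit_length() - 1 if value else 0
                state |= (exp & 0xF) << (4 * (4 * i + j))
        return state

    def unpack(state):
        board = [[0] * 4 for _ in range(4)]
        for i in range(4):
            for j in range(4):
                exp = (state >> (4 * (4 * i + j))) & 0xF
                board[i][j] = 0 if exp == 0 else 1 << exp
        return board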
For reference, chess has an upper bound of about 2^155 possible positions.
If we were to actually calculate, from the start, every possible move (in a breadth-first search-like manner), we'd get a massive number.
This isn't the exact number, but rather a rough estimate of the upper bound.
Let's make a few assumptions (which definitely aren't always true, but will do for the sake of simplicity):
There are always 15 open squares
You always have 4 moves (left, right, up, down)
Once the total sum of all tiles on the board reaches 2048, it will take the minimum number of combinations to get a single 2048 (so, if placing a 2 makes the sum 2048, the combinations will be 2 -> 4 -> 8 -> 16 -> ... -> 2048, i.e. taking 10 moves)
A 2 will always get placed, never a 4 - the algorithm won't assume this, but, for the sake of calculating the upper bound, we will.
We won't consider the fact that there may be duplicate boards generated during this process.
To reach 2048, there needs to be 2048 / 2 = 1024 tiles placed.
You start with 2 randomly placed tiles, then repeatedly make a move and another tile gets placed, so there's about 1022 'turns' (a turn consisting of making a move and a tile getting placed) until we get a sum of 2048, then there's another 10 turns to get a 2048 tile.
In each turn, we have 4 moves, and there can be one of two tiles placed in one of 15 positions (30 possibilities), so that's 4*30 = 120 possibilities.
This would, in total, give us 120^1032 possible states.
If we instead assume a 4 will always get placed, we get 120^519 states.
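For anyone who wants to tweak the assumptions, the arithmetic behind those two figures is easy to reproduce (a small sketch):

    MOVES, OPEN_CELLS, TILE_CHOICES = 4, 15, 2       # assumptions from the list above
    BRANCHING = MOVES * OPEN_CELLS * TILE_CHOICES    # = 120 possibilities per turn

    def turns_needed(placed_value):
        """Turns to reach a 2048 tile if every new tile has this value (2 or 4)."""
        placements = 2048 // placed_value            # tiles needed to reach a sum of 2048
        merges = placements.bit_length() - 1         # doublings from placed_value up to 2048
        return (placements - 2) + merges             # two tiles are already on the board

    for v in (2, 4):
        t = turns_needed(v)
        states = BRANCHING ** t
        print(f"always place {v}s: 120^{t}, a {len(str(states))}-digit number")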
Calculating the exact number will likely involve working our way through all these states, which won't really be viable.

Optimizing Conway's 'Game of Life'

To experiment, I've (long ago) implemented Conway's Game of Life (and I'm aware of this related question!).
My implementation worked by keeping 2 arrays of booleans, representing the 'last state', and the 'state being updated' (the 2 arrays being swapped at each iteration). While this is reasonably fast, I've often wondered about how to optimize this.
One idea, for example, would be to precompute at iteration N the zones that could be modified at iteration (N+1) (so that if a cell does not belong to such a zone, it won't even be considered for modification at iteration (N+1)). I'm aware that this is very vague, and I never took time to go into the details...
Do you have any ideas (or experience!) of how to go about optimizing (for speed) Game of Life iterations?
I am going to quote my answer from the other question, because the chapters I mention have some very interesting and fine-tuned solutions. Some of the implementation details are in c and/or assembly, yes, but for the most part the algorithms can work in any language:
Chapters 17 and 18 of Michael Abrash's Graphics Programmer's Black Book are one of the most interesting reads I have ever had. It is a lesson in thinking outside the box. The whole book is great really, but the final optimized solutions to the Game of Life are incredible bits of programming.
There are some super-fast implementations that (from memory) represent cells of 8 or more adjacent squares as bit patterns and use that as an index into a large array of precalculated values to determine in a single machine instruction if a cell is live or dead.
Check out here:
http://dotat.at/prog/life/life.html
Also XLife:
http://linux.maruhn.com/sec/xlife.html
You should look into Hashlife, the ultimate optimization. It uses the quadtree approach that skinp mentioned.
As mentioned in Abrash's Black Book, one of the simplest and most straightforward ways to get a huge speedup is to keep a change list.
Instead of iterating through the entire cell grid each time, keep a copy of all the cells that you change.
This will narrow down the work you have to do on each iteration.
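A sketch of the change-list idea (Python, using a sparse set of live cells instead of two boolean arrays - a slight departure from the book's version, but it keeps the code short):

    from collections import Counter

    def step(live):
        """One generation. `live` is a set of (x, y) coordinates of live cells.

        Only live cells and their neighbors are ever examined, which is the
        point of keeping a change list instead of rescanning the whole grid.
        """
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for x, y in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {cell for cell, n in neighbor_counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A glider, advanced a few generations:
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        cells = step(cells)
    print(sorted(cells))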
The algorithm itself is inherently parallelizable. Using the same double-buffered method in an unoptimized CUDA kernel, I'm getting around 25ms per generation in a 4096x4096 wrapped world.
Which algorithm is most efficient mainly depends on the initial state.
If the majority of cells are dead, you can save a lot of CPU time by skipping empty parts and not calculating things cell by cell.
In my opinion it can make sense to check for completely dead spaces first, when your initial state is something like "random, but with a chance of life lower than 5%".
I would just divide the matrix up into halves and start checking the bigger ones first.
So if you have a field of 10,000 * 10,000, you'd first accumulate the states of the upper left quarter of 5,000 * 5,000.
If the sum of states is zero in the first quarter, you can ignore this first quarter completely and check the upper right 5,000 * 5,000 for life next.
If its sum of states is >0, you then divide the second quarter into 4 pieces again and repeat this check for life for each of these subspaces.
You could now go down to subframes of 8*8 or 10*10 (not sure what makes the most sense here).
Whenever you find life, you mark these subspaces as "has life".
Only spaces which "have life" need to be divided into smaller subspaces; the empty ones can be skipped.
When you are finished assigning the "has life" attribute to all possible subspaces, you end up with a list of subspaces which you now simply extend by +1 in each direction - with empty cells - and apply the regular (or modified) Game of Life rules to them.
You might think that dividing a 10,000 * 10,000 space into subspaces of 8*8 is a lot of tasks, but accumulating their state values is in fact much, much less computing work than running the GoL algorithm on each cell plus its 8 neighbors, comparing the counts and storing the new state for the next iteration somewhere...
But like I said above, for a random initial state with 30% population this won't make much sense, as there will not be many completely dead 8*8 subspaces to find (let alone dead 256*256 subspaces).
And of course, the perfect optimization will, last but not least, depend on your language.
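A sketch of that recursive empty-region check (Python; the grid is assumed to be a 2D list of 0/1 values, and the 8x8 leaf size is the one suggested above):

    def live_regions(grid, x0, y0, x1, y1, leaf=8, found=None):
        """Collect sub-rectangles that contain at least one live cell.

        Quadrants whose total is zero are skipped entirely; only "has life"
        regions get subdivided. The region sums are recomputed here for
        clarity; a real implementation would accumulate them hierarchically.
        """
        if found is None:
            found = []
        total = sum(sum(row[x0:x1]) for row in grid[y0:y1])
        if total == 0:
            return found                          # completely dead: ignore
        if (x1 - x0) <= leaf and (y1 - y0) <= leaf:
            found.append((x0, y0, x1, y1))        # small live region: run GoL rules on it
            return found
        xm, ym = (x0 + x1) // 2, (y0 + y1) // 2
        for ax0, ay0, ax1, ay1 in ((x0, y0, xm, ym), (xm, y0, x1, ym),
                                   (x0, ym, xm, y1), (xm, ym, x1, y1)):
            if ax0 < ax1 and ay0 < ay1:
                live_regions(grid, ax0, ay0, ax1, ay1, leaf, found)
        return found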
Two ideas:
(1) Many configurations are mostly empty space. Keep a linked list (not necessarily in order, that would take more time) of the live cells, and during an update, only update around the live cells (this is similar to your vague suggestion, OysterD :)
(2) Keep an extra array which stores the # of live cells in each row of 3 positions (left-center-right). Now when you compute the new dead/live value of a cell, you need only 4 read operations (top/bottom rows and the center-side positions), and 4 write operations (update the 3 affected row summary values, and the dead/live value of the new cell). This is a slight improvement from 8 reads and 1 write, assuming writes are no slower than reads. I'm guessing you might be able to be more clever with such configurations and arrive at an even better improvement along these lines.
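For idea (2), a sketch under one reading of the layout: rowsum[y][x] holds the number of live cells among (x-1, y), (x, y) and (x+1, y), and border handling (e.g. padding the grid) is omitted:

    def neighbor_count(grid, rowsum, x, y):
        """Live neighbors of (x, y): 4 reads instead of 8.

        Reads the full 3-cell summaries of the rows above and below, plus the
        two side cells in the center row.
        """
        return (rowsum[y - 1][x] + rowsum[y + 1][x]
                + grid[y][x - 1] + grid[y][x + 1])

    def set_cell(grid, rowsum, x, y, value):
        """Write a cell (0 or 1) and update the 3 affected row summaries: 4 writes."""
        delta = value - grid[y][x]
        grid[y][x] = value
        for xx in (x - 1, x, x + 1):
            rowsum[y][xx] += delta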
If you don't want anything too complex, then you can use a grid to slice it up, and if that part of the grid is empty, don't try to simulate it (please view Tyler's answer). However, you could do a few optimizations:
Set different grid sizes depending on the amount of live cells, so if there's not a lot of live cells, that likely means they are in a tiny place.
When you randomize it, don't use the grid code until the user changes the data: I've personally tested randomizing it, and even after a long time it still fills most of the board (unless the grid is sufficiently small, at which point this won't help that much anymore).
If you are showing it to the screen, don't use rectangles for pixel size 1 and 2: instead set the pixels of the output. Any higher pixel size and I find it's okay to use the native rectangle-filling code. Also, preset the background so you don't have to fill the rectangles for the dead cells (not live, because live cells disappear pretty quickly)
I don't exactly know how this can be done, but I remember some of my friends had to represent this game's grid with a Quadtree for an assignment. I'm guessing it's really good for optimizing the space of the grid since you basically only represent the occupied cells. I don't know about execution speed though.
It's a two dimensional automaton, so you can probably look up optimization techniques. Your notion seems to be about compressing the number of cells you need to check at each step. Since you only ever need to check cells that are occupied or adjacent to an occupied cell, perhaps you could keep a buffer of all such cells, updating it at each step as you process each cell.
If your field is initially empty, this will be much faster. You probably can find some balance point at which maintaining the buffer is more costly than processing all the cells.
There are table-driven solutions for this that resolve multiple cells in each table lookup. A google query should give you some examples.
I implemented this in C#:
All cells have a location, a neighbor count, a state, and access to the rule.
Put all the live cells in array B into array A.
Have all the cells in array A add 1 to the neighbor count of their neighbors.
Have all the cells in array A put themselves and their neighbors into array B.
All the cells in array B update according to the rule and their state.
All the cells in array B set their neighbor counts back to 0.
Pros:
Ignores cells that don't need to be updated
Cons:
4 arrays: a 2D array for the grid, an array for the live cells, and an array for the active cells.
Can't process rule B0.
Processes cells one by one.
Cells aren't just booleans
Possible improvements:
Cells also have an "Updated" value; they are updated only if they haven't updated in the current tick, removing the need for array B as mentioned above.
Instead of array B being the cells with live neighbors, array B could be the cells without, and those check for rule B0.
