Find minimum number of moves for Tower of London task - algorithm

I am looking for a solution to a task similar to the Tower of Hanoi task; however, this is different from Hanoi because the disks are not constrained by size. The Tower of London task I am creating has 8 disks, instead of the traditional 3 or 5 (as shown in the Wikipedia link). I am using PEBL software that is "programmed primarily in C++ (although you do not need to know C++ to use PEBL), but also uses flex and bison (GNU versions of lex and yacc) to handle parsing."
Here is a video of what the task looks like in action: http://www.youtube.com/watch?v=IiBJ94HRpeM&noredirect=1
Each disk is represented by a number, e.g. blue disk = 1, red disk = 2, etc.
1 \
2  ----\
3  ----/      3 1
4 5    /    2 4 5
=========  =========
The left side consists of the disks you have to move, to match the right side. There are 3 columns.
So if I am making it with 8 disks, I would create a trial to look like this:
  1 \
  2  ----\      7 8
6 3 8 ----/     3 6 1
7 4 5     /     2 4 5
=========     =========
How do I figure out the minimum number of moves needed for the left side to look like the right side? I don't need to use PEBL to code this, but I need to know because I am calculating how close to the minimum a person gets on each trial.

The principle is simple, and it's called breadth-first search:
Each state has a certain number of successor states (defined by the moves possible).
1. Start out with a set of states that contains only the initial state, and a step number of 0.
2. If the end state is in the set of states, return the step number.
3. Increment the step number.
4. Rebuild the set of states by replacing the current states with each of their successor states.
5. Go to 2.
So, in each step, compute the successor states of your currently available states and look if you reached the target state.
BUT, be warned, this can take a while and eat up a lot of memory!
You can optimize a bit in our case, since you can leave out the predecessor state.
Still, you will have about 5 possible moves in most states, which means you will have roughly 5^N states to consider after N steps.
For example, your second example will need 10 moves, if I'm not mistaken, which gives you about 10 million states. Most contemporary computers will not be able to search beyond a depth of about 15.
I think an algorithm to find a solution would be easy and fast, but we have no proof that such a solution would be the shortest one.
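To make this concrete, here is a minimal BFS sketch (Python is just an illustrative choice; the function name, the state encoding as three top-to-bottom tuples, and the optional per-column capacity limit are my own, and the start/goal constants below are simply my reading of the first diagram above):

```python
from collections import deque

def min_moves(start, goal, capacity=None):
    """Breadth-first search for the minimum number of moves.

    `start` and `goal` are tuples of 3 tuples, one per column, listing disk
    numbers from top to bottom, e.g. ((1, 2, 3, 4), (5,), ()).
    `capacity` is an optional per-column limit on how many disks fit,
    e.g. (4, 3, 2); leave it as None if your variant has no such limit.
    """
    if start == goal:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, steps = queue.popleft()
        for src in range(3):
            if not state[src]:
                continue
            disk = state[src][0]                 # top disk of the source column
            for dst in range(3):
                if dst == src:
                    continue
                if capacity and len(state[dst]) >= capacity[dst]:
                    continue                     # destination column is full
                cols = list(state)
                cols[src] = cols[src][1:]        # take the disk off the source
                cols[dst] = (disk,) + cols[dst]  # put it on top of the destination
                nxt = tuple(cols)
                if nxt == goal:
                    return steps + 1
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + 1))
    return None  # the goal is not reachable from the start

# First example from the question, columns read top to bottom:
start = ((1, 2, 3, 4), (5,), ())
goal = ((2,), (3, 4), (1, 5))
print(min_moves(start, goal))
```

Swapping in the 8-disk start and goal positions gives the number for the second example as well.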

Related

Get all possible valid positions of ships in battleship game

I'm creating a probability assistant for the Battleship game - in essence, for a given game state (field state and available ships), it would produce a field where every free cell is annotated with its probability of a hit.
My current approach is a Monte Carlo-like computation: pick a random free cell, a random ship and a random rotation, check whether the placement is valid, and if so continue with the next ship from the available set. If the available set is empty, add the resulting ship arrangement to an output stack. Repeat this many times and use the outputs to compute the probability of each cell.
Is there a sane algorithm to process all possible ship placements for a given field state?
An exact solution is possible, but it does not qualify as sane in my book.
Still, here is the idea.
There are many variants of the game, but let's say that we start with a worst case scenario of 1 ship of size 5, 2 of size 4, 3 of size 3 and 4 of size 2.
The "discovered state" of the board is all spots where shots have been taken, or ships have been discovered, plus the number of remaining ships. The discovered state naively requires 100 bits for the board (10x10, any can be shot) plus 1 bit for the count of remaining ships of size 5, 2 bits for the remaining ships of size 4, 2 bits for remaining ships of size 3 and 3 bits for remaining ships of size 2. This makes 108 bits, which fits in 14 bytes.
Now conceptually the idea is to figure out the map by shooting each square in turn in the first row, the second row, and so on, and recording the game state along with transitions. We can record the forward transitions and counts to find how many ways there are to get to any state.
Then find the end state of everything finished and all ships used and walk the transitions backwards to find how many ways there are to get from any state to the end state.
Now walk the data structure forward, knowing the probability of arriving at any state while on the way to the end, but this time we can figure out the probability of each way of finding a ship on each square as we go forward. Sum those and we have our probability heatmap.
Is this doable? In memory, no. In a distributed system it might be though.
Remember that I said recording a state takes 14 bytes? Adding a count to that takes another 8 bytes, which takes us to 22 bytes. Adding the reverse count takes us to 30 bytes. My back-of-the-envelope estimate is that at any point along our path there are on the order of half a billion states we might be in, with various ships left, killed ships sticking out, and so on. That's 15 GB of data, potentially for each of 100 squares, which is 1.5 terabytes of data that we have to process in 3 passes.
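If you want to reproduce these back-of-the-envelope numbers, a few lines suffice (Python purely for illustration; the half-billion-states figure is taken as given from above):

```python
# Back-of-the-envelope sizing for the exact approach sketched above.
board_bits = 10 * 10                  # one bit per cell of the 10x10 board
ship_count_bits = 1 + 2 + 2 + 3       # remaining ships of size 5, 4, 3 and 2
state_bits = board_bits + ship_count_bits      # 108 bits
state_bytes = (state_bits + 7) // 8            # 14 bytes, rounded up

record_bytes = state_bytes + 8 + 8    # state + forward count + reverse count = 30
states_per_step = 500_000_000         # the rough half-billion estimate above

per_step_gb = record_bytes * states_per_step / 10 ** 9   # ~15 GB per square
total_tb = per_step_gb * 100 / 1000                      # ~1.5 TB over 100 squares
print(state_bits, state_bytes, record_bytes, per_step_gb, total_tb)
```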

How many times do I have to repeat a specific shuffle of playing cards to get back to where I started?

This is my first post on Stack Overflow, so please excuse my mistakes if I'm doing something wrong.
Ok so I'm trying to find an algorithm/function/something that can calculate how many times I have to do the same type of shuffle of 52 playing cards to get back to where I started.
The specific shuffle I'm using goes like this:
- You will have two piles.
- You start with the deck back facing up. (Let's call this pile 1.)
- You will now alternate between putting a card at the back of pile 1 and putting one in pile 2. Example: say you have 4 cards in a pile, back facing up, with card 4 closest to the ground and card 1 closest to the sky (their order is 4,3,2,1). You take card 1 and put it beneath card 4, meaning card 1 is now closest to the ground and card 4 is second closest; the order is now 1,4,3,2.
- Pile 2 "stacks downwards", meaning you always put the new card at the bottom of that pile (back always facing up).
- The first card will always get put at the back of pile 1.
- Repeat this process until all cards are in pile 2.
- Now take pile 2 and do the exact same thing you just did.
My question is: How many times do I have to repeat this process until I get back where I started?
Side notes:
- If this is a common way of shuffling cards and there already is a solution, please let me know.
- I'm still new to math and coding so if writing up an equation/algorithm/code for this is really easy then don't laugh at me pls ;<.
- Sorry if I'm asking this at the wrong place, I don't know how all this works.
- English isn't my main language and I'm not a native speaker either so please excuse any bad grammar and/or other grammatical errors.
I do, however, have code that does all of this (Link here), but I'm unsure if it's the most efficient way to do it, and it hasn't given a result yet, so I don't even know if it works. If you want to give tips or suggestions on how to change it, please do; I would really appreciate it. It's written in Scratch, however, because I can't write in any other languages... sorry...
Thanks in advance.
Any fixed shuffle is equivalent to a permutation; what you want to know is the order of that permutation. This can be computed by decomposing the permutation into cycles and then computing the least common multiple of the cycle lengths.
I'm not able to properly understand your algorithm, but here's an example of shuffling 8 elements and then finding the number of times that shuffle needs to be repeated to get back to an unshuffled state.
Suppose the sequence starts as 1,2,3,4,5,6,7,8 and after one shuffle, it's 3,1,4,5,2,8,7,6.
The number 1 goes to position 2, then 2 goes to position 5, then 5 goes to position 4, then 4 goes to position 3, then 3 goes to position 1. So the first cycle is (1 2 5 4 3).
The number 6 goes to position 8, then 8 goes to position 6. So the next cycle is (6 8).
The number 7 stays in position 7, so this is a trivial cycle (7).
The lengths of the cycles are 5, 2 and 1, so the least common multiple is 10. This shuffle takes 10 iterations to get back to the initial state.
If you don't mind sitting down with pen and paper for a while, you should be able to follow this procedure for your own shuffling algorithm.
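Here is a small sketch of that procedure (Python purely for illustration; `shuffle_order` is a made-up name). Given the sequence 1..n as it appears after one shuffle, it decomposes the permutation into cycles and returns the least common multiple of the cycle lengths; for the 52-card question you would feed it the order of the whole deck after one complete pass of your shuffle:

```python
from functools import reduce
from math import gcd

def shuffle_order(shuffled):
    """How many repetitions of a fixed shuffle restore the original order.

    `shuffled` is the sequence 1..n as it appears after one shuffle,
    e.g. [3, 1, 4, 5, 2, 8, 7, 6] for the 8-card example above.
    """
    n = len(shuffled)
    # position[v] = where value v ends up after one shuffle (1-indexed)
    position = {value: i + 1 for i, value in enumerate(shuffled)}

    seen = set()
    cycle_lengths = []
    for start in range(1, n + 1):
        if start in seen:
            continue
        length, current = 0, start
        while current not in seen:          # follow the cycle containing `start`
            seen.add(current)
            current = position[current]
            length += 1
        cycle_lengths.append(length)

    # The order of the permutation is the lcm of its cycle lengths.
    return reduce(lambda a, b: a * b // gcd(a, b), cycle_lengths, 1)

print(shuffle_order([3, 1, 4, 5, 2, 8, 7, 6]))   # -> 10
```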

How to work out the complexity of the game 2048?

Edit: This question is not a duplicate of What is the optimal algorithm for the game 2048?
That question asks 'what is the best way to win the game?'
This question asks 'how can we work out the complexity of the game?'
They are completely different questions. I'm not interested in which steps are required to move towards a 'win' state - I'm interested in finding out whether the total number of possible steps can be calculated.
I've been reading this question about the game 2048 which discusses strategies for creating an algorithm that will perform well playing the game.
The accepted answer mentions that:
the game is a discrete state space, perfect information, turn-based game like chess
which got me thinking about its complexity. For deterministic games like chess, it's possible (in theory) to work out all the possible moves that lead to a win state and work backwards, selecting the best moves that keep leading towards that outcome. I know this leads to a large number of possible moves (something in the range of the number of atoms in the universe)... but is 2048 more or less complex?
Pseudocode:
for the current arrangement of tiles
- work out the possible moves
- work out what the board will look like if the program adds a 2 to the board
- work out what the board will look like if the program adds a 4 to the board
- move on to working out the possible moves for the new state
At this point I'm thinking I will be here a while waiting on this to run...
So my question is - how would I begin to write this algorithm - what strategy is best for calculating the complexity of the game?
The big difference I see between 2048 and chess is that the program can select randomly between a 2 and a 4 when adding new tiles - which seems to add a massive number of additional possible moves.
Ultimately I'd like the program to output a single figure showing the number of possible permutations in the game. Is this possible?!
Let's determine how many possible board configurations there are.
Each tile can be either empty, or contain a 2, 4, 8, ..., 512 or 1024 tile.
That's 12 possibilities per tile. There are 16 tiles, so we get 16^12 = 2^48 possible board states - and this most likely includes a few unreachable ones.
Assuming we could store all of these in memory, we could work backwards from all board states that would generate a 2048 tile in the next move, doing a constant amount of work to link reachable board states to each other, which should give us a probabilistic best move for each state.
To store all of these in memory, let's say we'd need 4 bits per tile, i.e. 64 bits = 8 bytes per board state.
2^48 board states would then require 8 * 2^48 = 2,251,799,813,685,248 bytes = 2048 TB (not to mention the added overhead of keeping track of the best boards). That's a bit beyond what a desktop computer these days has, although it might be possible to cleverly limit the number of boards required at any given time so as to get down to something that will fit on, say, a 3 TB hard drive, or perhaps even in RAM.
For reference, chess has an upper bound of 2^155 possible positions.
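As a quick sanity check on those storage numbers (Python, for illustration only):

```python
# Rough storage estimate for enumerating all board states, as above.
board_states = 16 ** 12            # i.e. 2^48
bytes_per_state = 16 * 4 // 8      # 16 tiles at 4 bits each = 8 bytes
total_bytes = board_states * bytes_per_state
print(total_bytes)                 # 2251799813685248
print(total_bytes / 2 ** 40)       # 2048.0 TB
```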
If we were to actually calculate, from the start, every possible move (in a breadth-first search-like manner), we'd get a massive number.
This isn't the exact number, but rather a rough estimate of the upper bound.
Let's make a few assumptions (which definitely aren't always true, but serve for the sake of simplicity):
There are always 15 open squares
You always have 4 moves (left, right, up, down)
Once the total sum of all tiles on the board reaches 2048, it will take the minimum number of combinations to get a single 2048 (so, if placing a 2 makes the sum 2048, the combinations will be 2 -> 4 -> 8 -> 16 -> ... -> 2048, i.e. taking 10 moves)
A 2 will always get placed, never a 4 - the algorithm won't assume this, but, for the sake of calculating the upper bound, we will.
We won't consider the fact that there may be duplicate boards generated during this process.
To reach 2048, there needs to be 2048 / 2 = 1024 tiles placed.
You start with 2 randomly placed tiles, then repeatedly make a move and another tile gets placed, so there's about 1022 'turns' (a turn consisting of making a move and a tile getting placed) until we get a sum of 2048, then there's another 10 turns to get a 2048 tile.
In each turn, we have 4 moves, and there can be one of two tiles placed in one of 15 positions (30 possibilities), so that's 4*30 = 120 possibilities.
This would, in total, give us 120^1032 possible states.
If we instead assume a 4 will always get placed, we get 120^519 states.
Calculating the exact number will likely involve working our way through all these states, which won't really be viable.
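For completeness, the same back-of-the-envelope arithmetic spelled out (again Python, purely for illustration):

```python
# Rough upper bound on the number of move sequences, per the assumptions above.
branching = 4 * (2 * 15)          # 4 moves x (2 tile values x 15 open squares) = 120

tiles_needed = 2048 // 2          # 1024 two-tiles to reach a board sum of 2048
turns = (tiles_needed - 2) + 10   # 2 tiles pre-placed, then ~10 merges to build 2048
print(f"{branching}^{turns}")     # 120^1032

tiles_needed = 2048 // 4          # if a 4 is always placed instead
turns = (tiles_needed - 2) + 9
print(f"{branching}^{turns}")     # 120^519
```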

Greedy Algorithm Optimization

Consider a DVR recorder that has the duty to record television programs.
Each program has a starting time and ending time.
The DVR has the following restrictions:
It may only record up to two items at once.
If it chooses to record an item, it must record it from start to end.
Given the number of television programs and their starting/ending times, what is the maximum number of programs the DVR can record?
For example: Consider 6 programs:
They are written in the form:
a b c. a is the program number, b is starting time, and c is ending time
1 0 3
2 6 7
3 3 10
4 1 5
5 2 8
6 1 9
The optimal way to record is to have programs 1 and 3 recorded back to back, and programs 2 and 4 recorded back to back; 2 and 4 will be recording alongside 1 and 3.
This means the max number of programs is 4.
What is an efficient algorithm to find the max number of programs that can be recorded?
This is a classic example for a greedy algorithm.
You create an array of (start, end) tuples, one for each program in the input.
Now you sort this array by the end times and walk it from left to right. If you can take the next program (that is, at its start time you are recording at most one other program), you increment the result counter and remember its end time. For each further program, again fill an available recording slot if possible; if not, you can't record it and you discard it.
This way you will get the maximum number of programs that can be recorded, in O(n log n) time.
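Here is a minimal sketch of that greedy (Python and the function name are illustrative choices), assuming programs are given as (start, end) pairs and that a recorder becomes free again exactly when its current program ends:

```python
def max_recordable(programs, recorders=2):
    """Greedy sketch: maximum number of programs recordable with `recorders` tuners.

    `programs` is a list of (start, end) pairs; a recorder that finishes a
    program at time t is assumed to be free for a program starting at time t.
    """
    programs = sorted(programs, key=lambda p: p[1])   # classic: sort by end time
    free_at = [float("-inf")] * recorders             # when each recorder is free again

    count = 0
    for start, end in programs:
        # Recorders that are already free when this program starts.
        candidates = [i for i, t in enumerate(free_at) if t <= start]
        if candidates:
            # Use the recorder that became free latest, keeping the other one
            # available for programs that start earlier.
            best = max(candidates, key=lambda i: free_at[i])
            free_at[best] = end
            count += 1
    return count

# The six programs from the question, as (start, end):
programs = [(0, 3), (6, 7), (3, 10), (1, 5), (2, 8), (1, 9)]
print(max_recordable(programs))   # -> 4, matching the example above
```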

Sorting cards with wildcards

I am programming a card game and I need to sort a stack of cards by their rank, so that they form a gapless sequence.
In this special game the card with value 2 can be used as a wild card, so for example the cards
2 3 5
should be sorted like this
3 2 5
because the 2 replaces the 4; otherwise it would not be a valid sequence.
However, the cards
2 3 4
should stay as they are.
Restriction: there can be only one '2' used as a wildcard.
2 2 3 4
would also stay as it is, because the first 2 would replace the ace (or 1, whatever you call it).
The following would not be a valid input sequence, since one of the 2s would have to be used as a wildcard and the other not; it is not possible to make up a gapless sequence then.
2 4 2 6
Now I have difficulty figuring out whether a 2 is used as a wildcard or not. Once I've got that, I think I can do the rest of the sorting.
Thanks for any algorithmic help on this problem!
EDIT in response to your clarification of your new requirement:
You're implying that you'll never get data for which a gapless sequence cannot be formed. (If only I could have such guarantees in the real world.) So:
- Do you have a 2?
  - No: your sequence must already be gapless.
  - Yes: you need to figure out where to put it. Sort your input. Do you see a gap? (Since you can only use one 2 as a wildcard, there can be at most one gap.)
    - No: treat the 2 as a legitimate number two.
    - Yes: move the 2 to the gap to fill it in.
EDIT in response to your new requirement:
In this case, just look for the highest single gap, and plug it with a 2 if you have a 2 available.
Original answer:
Since your sequence must be gapless, you could count the number of 2s you have and the sizes of all the gaps that are present. Then just fill in the highest gap for which you have a sufficient number of 2s.
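For what it's worth, here is a small sketch of the procedure from the first edit above (Python and the function name are my own choices), assuming the input is a flat list of card values and at most one 2 may act as the wildcard:

```python
def arrange_with_wildcard(cards):
    """Return the cards in gapless order, using one 2 as a wildcard if needed.

    Returns None if no gapless sequence can be formed (invalid input).
    """
    cards = sorted(cards)
    if 2 not in cards:
        # No wildcard available: the sorted sequence must already be gapless.
        return cards if all(b - a == 1 for a, b in zip(cards, cards[1:])) else None

    rest = cards.copy()
    rest.remove(2)              # tentatively set one 2 aside as the wildcard
    gaps = [i for i, (a, b) in enumerate(zip(rest, rest[1:])) if b - a != 1]

    if not gaps:
        return cards            # no gap: the 2 is just a legitimate number two
    if len(gaps) == 1 and rest[gaps[0] + 1] - rest[gaps[0]] == 2:
        i = gaps[0]             # exactly one gap of one missing rank: fill it
        return rest[:i + 1] + [2] + rest[i + 1:]
    return None                 # more than one gap, or a gap too wide to plug

print(arrange_with_wildcard([2, 3, 5]))     # [3, 2, 5]
print(arrange_with_wildcard([2, 3, 4]))     # [2, 3, 4]
print(arrange_with_wildcard([2, 2, 3, 4]))  # [2, 2, 3, 4]
print(arrange_with_wildcard([2, 4, 2, 6]))  # None (invalid input)
```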
