I'm in the process of creating a game where the user will be presented with 2 sets of colored tiles. In order to ensure that the puzzle is solvable, I start with one set, copy it to a second set, then swap tiles from one set to another. Currently, (and this is where my issue lies) the number of swaps is determined by the level the user is playing - 1 swap for level 1, 2 swaps for level 2, etc. This same number of swaps is used as a goal in the game. The user must complete the puzzle by swapping a tile from one set to the other to make the 2 sets match (by color). The order of the tiles in the (user) solved puzzle doesn't matter as long as the 2 sets match.
The problem I have is that as the number of swaps I used to generate the puzzle approaches the number of tiles in each set, the puzzle becomes easier to solve. Basically, you can just drag from one set in whatever order you need for the second set and solve the puzzle with plenty of moves left. What I am looking to do is after I finish building the puzzle, calculate the minimum number of moves required to solve the puzzle. Again, this is almost always less than the number of swaps used to create the puzzle, especially as the number of swaps approaches the number of tiles in each set.
My goal is to calculate the best case scenario and then give the user a "fudge factor" (i.e. 1.2 times the minimum number of moves). Solving the puzzle in under this number of moves will result in passing the level.
A little background as to how I currently have the game configured:
Levels 1 to 10: 9 tiles in each set. 5 different color tiles.
Levels 11 to 20: 12 tiles in each set. 7 different color tiles.
Levels 21 to 25: 15 tiles in each set. 10 different color tiles.
Swapping within a set is not allowed.
For each level, there will be at least 2 tiles of a given color (one for each set in the solved puzzle).
Is there any type of algorithm anyone could recommend to calculate the minimum number of moves to solve a given puzzle?
The minimum moves to solve a puzzle is essentially the shortest path from that unsolved state to a solved state. Your game implicitly defines a graph where the vertices are legal states, and there's an edge between two states if there's a legal move that enables that transition.
Depending on the size of your search space, a simple breadth-first search would be feasible, and would give you the minimum number of steps to reach any given state. In fact, you can generate the problems this way too: instead of making random moves to arrive at a state and checking its "distance" from the initial state, simply explore the search space in breadth-first/level-order, and pick a state at a given "distance" for your puzzle.
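For the tile-swapping puzzle in the question, the state space is small enough that a plain BFS is workable. Here is a minimal sketch (the representation and function name are my own assumptions): a move swaps one tile from set A with one tile from set B, and the goal is that the two sets match as color multisets.

```python
from collections import Counter, deque

def min_swaps(set_a, set_b):
    """BFS over (multiset A, multiset B) states.  A move swaps one tile
    from A with one tile from B; the goal is A == B as color multisets."""
    def key(a, b):
        return (tuple(sorted(a.elements())), tuple(sorted(b.elements())))

    a0, b0 = Counter(set_a), Counter(set_b)
    if a0 == b0:
        return 0
    seen = {key(a0, b0)}
    frontier = deque([((a0, b0), 0)])
    while frontier:
        (a, b), d = frontier.popleft()
        for ca in list(a):
            for cb in list(b):
                if ca == cb:
                    continue  # swapping equal colors changes nothing
                na, nb = a.copy(), b.copy()
                na[ca] -= 1; na[cb] += 1
                nb[cb] -= 1; nb[ca] += 1
                na += Counter(); nb += Counter()  # drop zero counts
                if na == nb:
                    return d + 1
                k = key(na, nb)
                if k not in seen:
                    seen.add(k)
                    frontier.append(((na, nb), d + 1))
    return None  # no sequence of swaps can make the sets match
```

Because states are keyed by color multiset rather than tile order, the search stays small even at 15 tiles per set.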
Related questions
Rush Hour - Solving the Game
BFS is used to solve Rush Hour, with source code in Java
Alternative
IF the search space is too huge for BFS (and I'm not yet convinced that it is), you can use iterative deepening depth-first search instead. It's space-efficient like DFS, but (cumulatively) level-order like BFS. Even though nodes are visited many times, it is still asymptotically identical to BFS while requiring far less space.
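A generic sketch of iterative deepening, with the goal test and move generator left as parameters you would supply (all names here are mine):

```python
def iddfs(start, is_goal, neighbors, max_depth=30):
    """Iterative deepening DFS: repeated depth-limited DFS with an
    increasing bound.  Space usage is O(depth) like DFS, but the first
    solution found is at minimum depth, like BFS."""
    def dls(node, depth, path):
        if is_goal(node):
            return path
        if depth == 0:
            return None
        for nxt in neighbors(node):
            found = dls(nxt, depth - 1, path + [nxt])
            if found is not None:
                return found
        return None

    for limit in range(max_depth + 1):
        result = dls(start, limit, [start])
        if result is not None:
            return result
    return None  # no solution within max_depth
```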
I didn't quite understand the puzzle from your description, but two general ideas often useful in solving this kind of puzzle are backtracking and branch and bound.
The A* search algorithm. The idea is that you have some measure of how close a position is to the solution. A* is then a "best first" search in the sense that at each step it considers moves from the best position found so far. It's up to you to come up with some kind of measure of how close you are to a solution. (It doesn't have to be accurate, it's just a heuristic to guide the search.) In practice it often performs much better than a pure breadth first search because it's always guided by your closeness scoring function. But without understanding your problem description, it's hard to say. (A rule of thumb is that if there's a sense of "making progress" while doing a puzzle, rather than it all suddenly coming together at the end, then A* is a good choice.)
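A bare-bones A* sketch (names are mine); `h` is the closeness measure you come up with. If `h` never overestimates, the first goal popped is optimal; an inadmissible `h` trades optimality for speed.

```python
import heapq

def a_star(start, is_goal, neighbors, h):
    """A*: always expand the state with the lowest f = g + h.
    neighbors(state) yields (next_state, step_cost) pairs."""
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, state = heapq.heappop(open_heap)
        if is_goal(state):
            return g
        if g > best_g.get(state, float('inf')):
            continue  # stale heap entry; a cheaper path was found later
        for nxt, cost in neighbors(state):
            ng = g + cost
            if ng < best_g.get(nxt, float('inf')):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None  # goal unreachable
```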
Related
I'm writing a program that needs to quickly check whether a contiguous region of space is fillable by tetrominoes (any type, any orientation). My first attempt was to simply check if the number of squares was divisible by 4. However, situations like this can still come up:
As you can see, even though these regions have 8 squares each, they are impossible to tile with tetrominoes.
I've been thinking for a bit and I'm not sure how to proceed. It seems to me that the "hub" squares, or squares that lead to more than two "tunnels", are the key to this. It's easy in the above examples, since you can quickly count the spaces in each such tunnel — 3, 1, and 3 in the first example, and 3, 1, 1, and 2 in the second — and determine that it's impossible to proceed due to the fact that each tunnel needs to connect to the hub square to fit a tetromino, which can't happen for all of them. However, you can have more complicated examples like this:
...where a simple counting technique just doesn't work. (At least, as far as I can tell.) And that's to say nothing of more open spaces with a very small number of hub squares. Plus, I don't have any proof that hub squares are the only trick here. For all I know, there may be tons of other impossible cases.
Is some sort of search algorithm (A*?) the best option for solving this? I'm very concerned about performance with hundreds, or even thousands, of squares. The algorithm needs to be very efficient, since it'll be used for real-time tiling (more or less), and in a browser at that.
Perfect matching on a perfect matching
[EDIT 28/10/2014: As noticed by pix, this approach never tries to use T-tetrominoes, so it is even more likely to give an incorrect "No" answer than I thought...]
This will not guarantee a solution on an arbitrary shape, but it will work quickly and well most of the time.
Imagine a graph in which there is a vertex for each white square, and an edge between two vertices if and only if their corresponding white squares are adjacent. (Each vertex can therefore touch at most 4 edges.) A perfect matching in this graph is a subset of edges such that every vertex touches exactly one edge in the subset. In other words, it is a way of pairing up adjacent vertices -- or in yet other words, a domino tiling of the white squares. Later I'll explain how to find a nicely random-looking perfect matching; for now, let's just assume that it can be done.
Then, starting from this domino tiling, we can just repeat the matching process, gluing dominos together into tetrominos! The only differences the second time around are that instead of having a vertex per white square, we have a vertex per domino; and because we must add an edge whenever two dominos are adjacent, a vertex can now have as many as 6 edges.
The first step (domino tiling) cannot fail: if a domino tiling for the given shape exists, then one will be found. However, it is possible for the second step (gluing dominos together into tetrominos) to fail, because it has to work with the already-decided domino tiling, and this may limit its options. Here is an example showing how different domino tilings of the same shape can enable or spoil the tetromino tiling:
AABCDD --> XXXYYY Success :)
BC XY
AABBDD --> Failure.
CC
Solving the maximum matching problems
In order to generate a random pattern of dominos, the edges in the graph can be given random weights, and the maximum weighted matching problem can be solved. The weights should be in the range [1, V/(V-2)), to guarantee that it is never possible to achieve a higher score by leaving some vertices unpaired. The graph is in fact bipartite as it contains no odd-length cycles, meaning that the faster O(V^2*E) algorithm for the maximum weighted bipartite matching problem can be used for this step. (This is not true for the second matching problem: one domino can touch two other dominos that touch each other.)
If the second step fails to find a complete set of tetrominos, then either no solution is possible*, or a solution is possible using a different set of dominos. You can try randomly reweighting the graph used to find the domino tiling, and then rerunning the first step. Alternatively, instead of completely reweighting it from scratch, you could just increase the weights for the problematic dominos, and try again.
* For a plain square with even side lengths, we know that a solution is always possible: just fill it with 2x2 square tetrominos.
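The two-stage approach can be sketched as follows. Note this uses a naive backtracking matcher in place of a proper maximum-matching algorithm (fine for small shapes, far too slow for thousands of squares); all names are mine.

```python
def perfect_matching(nodes, adj):
    """Backtracking search for a perfect matching.  A real implementation
    would use a proper matching algorithm (e.g. blossom) instead."""
    nodes = sorted(nodes)
    def solve(rem):
        if not rem:
            return []
        u, rest = rem[0], rem[1:]
        for v in adj.get(u, ()):
            if v in rest:
                sub = solve([w for w in rest if w != v])
                if sub is not None:
                    return [(u, v)] + sub
        return None
    return solve(nodes)

def tile_with_tetrominoes(cells):
    """Stage 1: match adjacent cells into dominoes.  Stage 2: match
    adjacent dominoes into tetrominoes.  A None result may be a false
    negative, since stage 2 is constrained by stage 1's tiling."""
    cells = set(cells)
    adj = {c: [n for n in ((c[0] + 1, c[1]), (c[0] - 1, c[1]),
                           (c[0], c[1] + 1), (c[0], c[1] - 1)) if n in cells]
           for c in cells}
    dominoes = perfect_matching(cells, adj)
    if dominoes is None:
        return None  # no domino tiling, hence no tetromino tiling
    dominoes = [tuple(sorted(d)) for d in dominoes]
    touches = {d: [e for e in dominoes if e != d and
                   any(abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1
                       for a in d for b in e)]
               for d in dominoes}
    return perfect_matching(dominoes, touches)
```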
If you're not familiar with it, the game consists of a collection of cars of varying sizes, set either horizontally or vertically, on a NxM grid that has a single exit.
Each car can move forward/backward in the directions it's set in, as long as another car is not blocking it. You can never change the direction of a car.
There is one special car, usually it's the red one. It's set in the same row that the exit is in, and the objective of the game is to find a series of moves (a move - moving a car N steps back or forward) that will allow the red car to drive out of the maze.
I've been trying to think how to generate instances for this problem, generating levels of difficulty based on the minimum number of moves needed to solve the board.
Any idea of an algorithm or a strategy to do that?
Thanks in advance!
The board given in the question has at most 4*4*4*5*5*3*5 = 24,000 possible configurations, given the placement of cars.
A graph with 24,000 nodes is not very large for today's computers. So a possible approach would be to
construct the graph of all positions (nodes are positions, edges are moves),
find the minimum number of moves to win from every node (e.g. using Dijkstra), and
select a node with a large distance from the goal.
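The steps above can be sketched with a multi-source BFS (since every move costs 1, Dijkstra reduces to BFS); the position encoding and move generator are left abstract here:

```python
from collections import deque

def distances_from_goals(goals, neighbors):
    """Multi-source BFS over the position graph: for every reachable
    position, the distance to the nearest winning position.  Positions
    with a large distance make good hard levels."""
    dist = {g: 0 for g in goals}
    frontier = deque(goals)
    while frontier:
        state = frontier.popleft()
        for nxt in neighbors(state):
            if nxt not in dist:
                dist[nxt] = dist[state] + 1
                frontier.append(nxt)
    return dist
```

This works because Rush Hour moves are reversible, so distances in the reversed graph equal solution lengths in the forward one.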
One possible approach would be creating it in reverse.
Generate a random board, that has the red car in the winning position.
Build the graph of all reachable positions.
Select a position that has the largest distance from every winning position.
The number of reachable positions is not that big (probably always below 100k), so (2) and (3) are feasible.
How to create harder instances through local search
It's possible that the above approach will not yield hard instances, as most random instances don't give rise to complex interlocking behavior of the cars.
You can do some local search, which requires
a way to generate other boards from an existing one
an evaluation/fitness function
(2) is simple: for example, use the length of the optimal solution (see above), though this is quite costly to compute.
(1) requires some thought. Possible modifications are:
add a car somewhere
remove a car (I assume this will always make the board easier)
Those two are enough to reach all possible boards, but one might want to add other ways, since removing a car only makes the board easier. Here are some ideas:
move a car perpendicularly to its driving direction
swap cars within the same lane (aaa..bb.) -> (bb..aaa.)
Hill climbing/steepest ascent is probably bad because of the large branching factor. One can try to subsample the set of possible neighbouring boards, i.e., don't look at all of them, only at a few random ones.
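A sketch of that subsampled hill climbing, with the board representation, neighbour generator, and fitness function (e.g. optimal-solution length) supplied by you (all names are mine):

```python
import random

def local_search(board, neighbors_fn, fitness_fn, samples=20, iters=100):
    """Stochastic hill climbing: at each step, evaluate only a random
    sample of neighbouring boards (the full branching factor is large)
    and move to the best one if it improves fitness."""
    best, best_fit = board, fitness_fn(board)
    for _ in range(iters):
        candidates = neighbors_fn(best)
        if not candidates:
            break
        sample = random.sample(candidates, min(samples, len(candidates)))
        cand = max(sample, key=fitness_fn)
        fit = fitness_fn(cand)
        if fit > best_fit:
            best, best_fit = cand, fit
    return best
```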
I know this is ancient but I recently had to deal with a similar problem so maybe this could help.
Constructing instances by applying random operators from a terminal state (i.e., reverse) will not work well. This is due to the symmetry in the state space. On average you end up in a state that is too close to the terminal state.
Instead, what worked better was to generate initial states (by placing random cars on the grid) and then to try to solve it with some bounded heuristic search algorithm such as IDA* or branch and bound. If an instance cannot be solved under the bound, discard it.
Plain A* is best avoided here. But if you have a definition of what you consider a "hard" instance (I find 16 moves to be pretty difficult), you can use A* with a pruning rule that prevents expansion of nodes x with g(x)+h(x) > T (T being your threshold, e.g. 16).
Heuristic function - since you don't have to solve optimally, you can use any simple inadmissible heuristic, such as the number of obstacle squares between the red car and the goal. Alternatively, if you need a stronger heuristic function, you can implement a Manhattan distance function by generating the entire set of winning states for the generated puzzle and then using the minimal distance from the current state to any of the terminal states.
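The simple blocking-squares heuristic mentioned above might look like this (the board encoding, a dict mapping occupied (row, col) squares to car ids, is my assumption):

```python
def blockers_heuristic(board, red_row, red_front_col, exit_col):
    """Inadmissible-but-useful heuristic: 1 (the red car must still move)
    plus the number of occupied squares between the red car's front
    and the exit.  board maps (row, col) -> car id."""
    blocking = sum(1 for col in range(red_front_col + 1, exit_col + 1)
                   if (red_row, col) in board)
    return 1 + blocking
```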
Here is the video of the game I am interested in:
http://www.youtube.com/watch?v=UhWeLmSf6pA
I would like to know the algorithm used to generate a challenge pattern like the one shown in the video.
Can anyone tell me which algorithm I should use to make a clone of this game on Windows?
Thanks
This is basically a Hamiltonian path problem. If someone can find a general solution better than brute force, I would be interested.
The board can be translated to a graph. Every piece is a node, the connections are edges. There are many graph packages and algorithms to find a Hamiltonian path. You can easily create your own model and recursive or iterative solution.
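A recursive backtracking search for a Hamiltonian path might look like this (brute force, as discussed, which is fine at this board size; the graph encoding is my assumption):

```python
def hamiltonian_path(adj, start):
    """Backtracking search for a Hamiltonian path from `start` in a
    graph given as {node: [neighbours]}.  Returns a path visiting
    every node exactly once, or None."""
    n = len(adj)
    path = [start]
    visited = {start}

    def extend():
        if len(path) == n:
            return True
        for nxt in adj[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                if extend():
                    return True
                path.pop()
                visited.remove(nxt)
        return False

    return path if extend() else None
```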
An upper bound on the number of paths to search can be estimated: we have 4 nodes with 2 edges, which gives only 1 choice and a factor of 1^4; 8 nodes with 3 edges, which gives 2 choices and a factor of 2^8; and 4 nodes with 4 edges, which gives 3 choices and a factor of 3^4. The starting point has one extra option, giving a factor of 2. Finally, there are 16 different starting points. In total, the upper bound is 1^4 * 2^8 * 3^4 * 2 * 16 = 663,552. The set of actual solutions is smaller, because many of these paths hit dead ends.
For this specific problem, we can reduce it a little more, because we only need three starting points: (0,0), (0,1) and (1,1). If we have all solutions for these three points, we can use a function to generate the mirrored and rotated solutions. That gives an upper bound of 124,416.
After we have all solutions, we can place the 2 and 3 points somewhere in between each solution. We can even create a function to guess the "hardness" of a solution by counting all possibilities to have the 2 and 3 at the same node and same place.
If we just want to create different puzzles, a backtracking with random directions would be totally fine. Easy to implement and fast running time should be expected.
I have already implemented functionality where the keeper automatically moves using a breadth-first search. Now I want it to move boxes automatically as well (if the keeper can move a box from source to destination without moving any other boxes). How do I do it? I've tried modifying BFS, but haven't succeeded yet.
UPDATE: I don't need to solve the puzzle. Instead I want to develop a handy user interface where the user can move boxes with the mouse. For this I need an algorithm to compute the move sequence, but only for moving a single box, and only if no other boxes need to be moved to do so.
Use breadth-first search as before (or A* if you prefer), but search the appropriate set of states.
When you are searching for a path for the keeper, the states correspond to the squares in the grid. But when you are searching for a way for the keeper to move a block, the states correspond to pairs of squares in the grid (one for the keeper, one for the block).
Here's the smallest non-trivial example. Suppose we have a Sokoban level with squares labelled as follows:
The grid contains the keeper and one block. The state space consists of pairs of squares occupied by the keeper and the block. There are 56 such states, drawn as small circles in the diagram below.
The lines show possible transitions within this state space. The thin lines correspond to moves by the keeper (and are bidirectional). The heavy lines correspond to pushing the block (hence go in one direction only). It is this state space that you need to search.
For example, if the block starts at 7 and the keeper at 8, then the keeper can push the block to 8 by following the red path in the state space:
Note that along this path, the block goes through the positions 7–6–5–6–7–8. You couldn't have found this path by just considering positions for the block, as the block passes through positions 6 and 7 twice.
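The (keeper, block) state search can be sketched like this for a single block on an open grid (the wall representation and function name are my assumptions):

```python
from collections import deque

def push_block(walls, width, height, keeper, block, target):
    """BFS over (keeper, block) pairs.  A step onto the block's square
    pushes the block one square further (if that square is free);
    any other step just moves the keeper.  Returns the minimum number
    of moves to get the block to `target`, or None."""
    def free(p):
        x, y = p
        return 0 <= x < width and 0 <= y < height and p not in walls

    start = (keeper, block)
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        ((kx, ky), b), d = frontier.popleft()
        if b == target:
            return d
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nk = (kx + dx, ky + dy)
            if not free(nk):
                continue
            if nk == b:                      # pushing the block
                nb = (b[0] + dx, b[1] + dy)
                if not free(nb):
                    continue
                state = (nk, nb)
            else:                            # plain keeper move
                state = (nk, b)
            if state not in seen:
                seen.add(state)
                frontier.append((state, d + 1))
    return None
```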
From Wikipedia - Sokoban:
Sokoban can be studied using the theory of computational complexity. The problem of solving Sokoban puzzles has been proven to be NP-hard.[3] This is also interesting for artificial intelligence researchers, because solving Sokoban can be compared to designing a robot which moves boxes in a warehouse. Further work has shown that solving Sokoban problems is also PSPACE-complete.[4]

Sokoban is difficult not only due to its branching factor (which is comparable to chess), but also its enormous search tree depth; some levels require more than 1000 "pushes". Skilled human players rely mostly on heuristics; they are usually able to quickly discard futile or redundant lines of play, and recognize patterns and subgoals, drastically cutting down on the amount of search.

Some Sokoban puzzles can be solved automatically by using a single-agent search algorithm, such as IDA*, enhanced by several techniques which make use of domain-specific knowledge.[5] This is the method used by Rolling Stone, a Sokoban solver developed by the University of Alberta GAMES Group. The more complex Sokoban levels are, however, out of reach even for the best automated solvers.
Is this what you wanted to know?
Incidentally, solving Sokoban puzzles in a provably optimal way is NP-hard, which means that if you figure out how to do it efficiently (in polynomial time), you'll have settled P vs NP and there's a $1,000,000 prize waiting for you.
I like playing the puzzle game Flood-It, which can be played online at:
https://www.lemoda.net/javascript/flood-it/game.html
It's also available as an iGoogle gadget. The aim is to fill the whole board with the least number of successive flood-fills.
I'm trying to write a program which can solve this puzzle optimally. What's the best way to approach this problem? Ideally I want to use the A* algorithm, but I have no idea what should be the function estimating the number of steps left. I did write a program which conducted a depth-4 brute force search to maximize the filled area. It worked reasonably well and beat me in solving the puzzle, but I'm not completely satisfied with that algorithm.
Any suggestions? Thanks in advance.
As a heuristic, you could construct a graph where each node represents a set of contiguous, same-colour squares, and each node is connected to those it touches. (Each edge weighted as 1). You could then use a path-finding algorithm to calculate the "distance" from the top left to all other nodes. Then, by looking at the results of flood-filling using each of the other 5 colours, determine which one minimizes the distance to the "furthest" node, since that will likely be your bottleneck.
Add the result of that calculation to the number of fills done so far, and use that as your A* heuristic.
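A sketch of that heuristic (representation details are my own): collapse the board into same-colour regions, then BFS from the top-left region. The distance to the farthest region lower-bounds the remaining fills, since one fill reduces each region's distance by at most one.

```python
from collections import deque

def regions_and_edges(grid):
    """Label same-colour regions by flood fill; return the per-cell
    region id and the region adjacency lists."""
    h, w = len(grid), len(grid[0])
    region = [[-1] * w for _ in range(h)]
    nregions = 0
    for i in range(h):
        for j in range(w):
            if region[i][j] == -1:
                q = deque([(i, j)])
                region[i][j] = nregions
                while q:
                    x, y = q.popleft()
                    for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                        if (0 <= nx < h and 0 <= ny < w
                                and region[nx][ny] == -1
                                and grid[nx][ny] == grid[x][y]):
                            region[nx][ny] = nregions
                            q.append((nx, ny))
                nregions += 1
    adj = [set() for _ in range(nregions)]
    for i in range(h):
        for j in range(w):
            for ni, nj in ((i + 1, j), (i, j + 1)):
                if ni < h and nj < w and region[ni][nj] != region[i][j]:
                    adj[region[i][j]].add(region[ni][nj])
                    adj[region[ni][nj]].add(region[i][j])
    return region, adj

def eccentricity_heuristic(grid):
    """BFS from the top-left region; distance to the farthest region
    is a lower bound on the number of fills still needed."""
    region, adj = regions_and_edges(grid)
    dist = {region[0][0]: 0}
    q = deque([region[0][0]])
    while q:
        r = q.popleft()
        for s in adj[r]:
            if s not in dist:
                dist[s] = dist[r] + 1
                q.append(s)
    return max(dist.values())
```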
A naive 'greedy' algorithm is to pick the next step that maximizes the overall perimeter of the main region.
(A couple of smart friends of mine were thinking about this the other day and decided the optimum may be NP-hard (e.g. you must brute force it) - I do not know if they're correct (I wasn't around to hear the reasoning and haven't thought it through myself).)
Note that for computing steps, I presume the union-find algorithm is your friend, it makes computing 'one step' very fast (see e.g. this blog post).
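For reference, a minimal union-find structure for merging regions after each step (path compression only; union by rank could be added):

```python
class DSU:
    """Union-find (disjoint set union) with path compression --
    handy for merging same-colour regions after each fill."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        # Path halving: point each visited node at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra
        return ra
```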
After playing the game a few times, I noticed that a good strategy is to always go "deep", to go for the colour which goes farthest into the unflooded territory.
A* is just a prioritized graph search. Each node is a game state, you rank nodes based on some heuristic, and always expand the lowest-expected-final-cost node. As long as your heuristic doesn't underestimate costs, the first solution you find is guaranteed to be optimal.
After playing the game a few times, I found that trying to drill to the opposite corner and then to all corners tended to result in a win. So a good starting cost estimate would be (cost so far) + a sufficient number of fills to reach the opposite corner [note: not minimum, just sufficient. Just greedily fill towards the corner to compute the heuristic].
I have been working on this, and after I got my solver working I took a look at the approaches others had taken.
Most of the solvers out there are heuristic and do not guarantee optimality. Heuristics look at the number of squares and distribution of colors left unchosen, or the distance to the "farthest away" square. Combining a good heuristic with bounded DFS (or BFS with lookahead) results in solutions that are quite fast for the standard 14x14 grid.
I took a slightly different approach because I was interested in finding the provably optimal path, not just a 'good' one. I observed that the search space actually grows much slower than the branching factor of the search tree, because there are quite a lot of duplicate positions. (With a depth-first strategy it is therefore important to maintain a history to avoid redundant work.) The effective branching factor seems closer to 3 than to 5.
The search strategy I took is to perform BFS up to a "midpoint" depth where the number of states would become infeasible, somewhere between 11 and 13 moves works best. Then, I examine each state at the midpoint depth and perform a new BFS starting with that as the root. Both of these BFS searches can be pruned by eliminating states found in previous depths, and the latter search can be bounded by the depth of the best-known solution. (A heuristic applied to the order of the subtrees examined in the second step would probably help some, as well.)
The other pruning technique which proved to be key to a fast solver is simply checking whether more than N colors remain when you are N or fewer steps away from the depth of the current best solution; if so, the path can be pruned.
Once we know which midpoint state is on the path to an optimal solution, the program can perform DFS using that midpoint state as a goal (and pruning any path that selects a square not in the midpoint.) Or, it might be feasible to just build up the paths in the BFS steps, at the cost of some additional memory.
My solver is not super-fast but it can find a guaranteed optimal solution in no more than a couple minutes. (See http://markgritter.livejournal.com/673948.html, or the code at http://pastebin.com/ZcrS286b.)
Smashery's answer can be slightly tweaked. For the total number of moves estimate, if there are 'k' colors at maximum distance, add 'k-1' to the number of moves estimate.
More generally, for each color, consider the maximum distance at which that color can be cleared. This gives us a dictionary mapping each such maximum distance to the (non-zero) number of colors clearable at that distance. Sum (value - 1) across the keys and add this to the maximum distance to get a move estimate.
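That estimate can be sketched as follows, taking precomputed region distances and colours as inputs (the data layout and names are my assumptions):

```python
def refined_estimate(dist_by_region, color_by_region, moves_so_far):
    """For each colour, find the maximum distance at which it occurs.
    Colours sharing the same maximum distance each need their own
    clearing move, so add (count - 1) per distinct distance on top
    of the overall maximum distance."""
    far = {}
    for r, d in dist_by_region.items():
        c = color_by_region[r]
        far[c] = max(far.get(c, 0), d)
    by_dist = {}
    for d in far.values():
        by_dist[d] = by_dist.get(d, 0) + 1
    extra = sum(count - 1 for count in by_dist.values())
    return moves_so_far + max(far.values()) + extra
```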
Also, there are certain free cases. If at any point we can clear a color in one move, we can take that move without considering the other moves.
Here's an idea for implementing the graph to support Smashery's heuristic.
Represent each group of contiguous, same-colour squares as a disjoint set, and keep a list of adjacent groups for each. A flood fill merges a set with all its adjacent sets and merges their adjacency lists. This implicit graph structure lets you find the distance from the upper-left corner to the farthest node.
I think you could consider the number of squares that match or don't match the current color. So, your heuristic measure of "distance" would be the number of squares on the board that are -not- the same color as your chosen color, rather than the number of steps.
A naive heuristic could be to use the number of colours left (minus 1) - this is admissible because it will take at least that many clicks to clear off the board.
I'm not certain, but I'm fairly sure that this could be solved greedily. You're trying to reduce the number of color fields to 1, so reducing more color fields earlier shouldn't be any less efficient than reducing fewer earlier.
1) Define a collection of existing like-colored groups.
2) For each collection, count the number of neighboring collections by color. The largest count of neighboring collections with a single color is the weight of this collection.
3) Take the collection with the highest count of neighbors with a single color, and fill it to that color. Merge the collections, and update the sort for all the collections affected by the merge (all the new neighbors of the merged collection).
Overall, I think this should actually compute in O(n log n) time, where n is the number of pixels and the log(n) only comes from maintaining the sorted list of weights.
I'm not sure if there needs to be a tie-breaker for when multiple fields have the same weight though. Maybe the tie-breaker goes to the color that's common to the most groups on the map.
Anyway, note that the goal of the game is to reduce the number of distinct color fields and not to maximize the perimeter, as different color schemes can occasionally make a larger field a sub-optimal choice. Consider the field:
3 3 3 3 3
1 1 1 1 1
1 1 1 1 1
2 2 2 2 2
1 2 2 2 2
The color 1 has the largest perimeter by any measure, but the color 2 is the optimal choice.
EDIT:
Scratch that. The example:
3 1 3 1 3
1 1 1 1 1
1 1 1 1 1
2 2 2 2 2
1 2 2 2 2
Invalidates my own greedy algorithm. But I'm not convinced that this is a simple graph traversal, since changing to a color shared by 2 neighbors visits 2 nodes, and not 1.
Color elimination should probably play some role in the heuristic.
1) It is never correct to fill with a color that is not already on the graph.
2) If there is one color field with a unique color, at least one fill will be required for it. It cannot be bundled with any other fills. I think this means that it's safe to fill it sooner rather than later.
3) The greedy algorithm for neighbor field count makes sense for a 2 color map.