I'm creating a puzzle game that, while playable by hand for easy levels, is meant to be solved by computer programs for harder ones. The puzzle is a flood fill on a hexagonal board. You can try a prototype here.
Here is how the puzzle works: by choosing a color from the top, you perform a flood fill starting from the upper-left tile. This progressively converts the board to a solid color. The challenge is to do this in a certain number of moves.
I have created several puzzles similar to this, and the key is to use an algorithm that generates boards that are hard to solve without knowing how they were created. For example, here we might produce a board by reversing the flood fill: working backwards from a solid board until it has been unflooded. We know how many steps this took, and can set this as a lower bound on a solution.
The problem I'm facing is that when I try this approach, the bound is way too high: it becomes trivial to solve the puzzle within this number of moves, even by moving randomly.
An approach that is not a solution is to generate a random board, solve it optimally, and set that as the target. The point is to create a puzzle where solving it optimally takes exponential time, or is at least a genuinely hard polynomial-time problem.
So what I'm looking for is an algorithm that can generate extremely hard boards where solving them, as they get larger, becomes a serious challenge.
When doing RSA encryption, we do not find prime numbers; we select random numbers and then apply tests that give us an increasingly high probability of the number being prime, without ever proving it.
I suggest the same. Try to find conditions that give a good likelihood of the puzzle having the desired properties, and test for those. Or you could use genetic algorithms/neural networks and train them to recognize the "good" puzzles, which amounts to the same thing.
I would try to prove that it is NP-complete or in P, to get a feel for the configurations which are difficult.
I'd also abstract away the hexagons and use a representation as a graph.
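To make the graph idea concrete, here is a minimal sketch (the region-graph encoding and names are my own assumptions, not the asker's code): each node is a maximal single-color region, edges connect adjacent regions, and a move recolors the start region and absorbs same-colored neighbors.

```python
def flood_move(colors, adj, start, new_color):
    """One move on the region graph.

    colors: dict region_id -> color
    adj:    dict region_id -> set of neighboring region_ids
    start:  id of the region containing the upper-left tile

    Recolors `start`, then merges every neighbor that already has
    new_color into it; the merged region inherits their neighbors.
    """
    colors[start] = new_color
    absorbed = {r for r in adj[start] if colors[r] == new_color}
    for r in absorbed:
        for n in adj[r]:
            if n != start and n not in absorbed:
                adj[start].add(n)
                adj[n].discard(r)
                adj[n].add(start)
        del colors[r]
        del adj[r]
    adj[start] -= absorbed
```

The board is solved when `len(colors) == 1`; since only adjacency matters, the same code serves hexagonal and rectangular boards.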
I've played the rectangular flood puzzle a lot (http://labpixies.com/gadget_page.php?id=10). Excited to see a hex version! I think finding a hard game is as easy as keeping large blocks of the same color from appearing in the puzzle. At least in the rectangular cases I've seen, nearly all the puzzles that can be solved in a small number of steps have large color blocks.
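If you want to act on that observation when generating boards, here is a sketch (square-grid adjacency is my simplification; for hexes, swap in the six neighbor offsets): measure the largest connected single-color block and reject boards where it dominates.

```python
from collections import deque

def largest_block(board):
    """Size of the largest connected same-color block in a grid."""
    rows, cols = len(board), len(board[0])
    seen = [[False] * cols for _ in range(rows)]
    best = 0
    for r in range(rows):
        for c in range(cols):
            if seen[r][c]:
                continue
            color, size = board[r][c], 0
            queue = deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                size += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and board[ny][nx] == color):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            best = max(best, size)
    return best

# reject a candidate board if its biggest block dominates it
# (the 5% threshold is an arbitrary starting point, tune to taste)
def looks_hard(board, threshold=0.05):
    return largest_block(board) <= threshold * len(board) * len(board[0])
```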
P.S. I think your "lower bound" is not valid. When working forwards, if a good strategy is used, you could actually finish in fewer steps. The "lower bound" is really an upper bound for the optimum solution.
I was thinking about maze algorithms recently (mostly because I'm working on a game, but I felt this is a more general question than a game development one). In simple terms, I was wondering if there is a maze algorithm that can generate (a possibly infinite number of) cells without any information about a cell's neighbors. I imagine, if such a thing were possible, it would rely heavily upon noise functions such as Perlin or Simplex.
Each cell has four walls; these are used when actually rendering the maze so that corridors and walls are not the same thickness.
Let's say, for example, I'd like a cell at (32, 15) to generate its walls.
I know of algorithms like Eller's (which requires a limited number of columns, but infinite rows) and the Virtual fractal Mazes algorithm (which needs to know previous cells in order to build upon them infinitely in both x and y directions).
Does anyone know of any algorithm I could look into for this specific request? If not, are there any algorithms that are good for chunk-based mazes that you know of?
(Note: I did search around for a bit through StackOverflow to see if there were any questions with similar requests to mine, but I did not come across any. If you happen to know of one, a link would be greatly appreciated :D)
Thank you in advance.
Seeeeeecreeeets. My preeeeciooouss secretts. But yeah I can understand the frustration so I'll throw this one to you OP/SO. Feel free to update the PCG Wiki if you're not as lazy as me :3
There are actually many ways to do this. Some of the best techniques for procgen are:
Asking what you really want.
Design backwards. Play in reverse. Result is forwards.
Look at a random sampling of your target goal and try to see overall patterns.
But to get back to the question, there are two simple ways, and they both start from asking what you really want. I'll give those first.
The first is to create 2 layers. Both are random noise. You connect the top and the bottom so they're fully connected. No isolated portions. This is asking what you really want which is connected-ness. And then guaranteeing it in a local clean-up step. (tbh I forget the function applied to layer 2 that guarantees connected-ness. I can't find the code atm.... But I remember it was a really simple local function... XOR, Curl, or something similar. Maybe you can figure it out before I fix this).
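Since the exact local layer-2 function is forgotten above, here is a swapped-in global alternative that delivers the same guarantee (a sketch; the dict-of-cells encoding and the bridging rule are my assumptions): label the connected components of the open cells, then open randomly chosen bridging cells until one component remains.

```python
import random

def label_components(open_cells):
    """Map each open cell (x, y) to a component id via iterative DFS."""
    comp, next_id = {}, 0
    for cell in open_cells:
        if cell in comp:
            continue
        stack, comp[cell] = [cell], next_id
        while stack:
            x, y = stack.pop()
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in open_cells and n not in comp:
                    comp[n] = next_id
                    stack.append(n)
        next_id += 1
    return comp

def connect(grid):
    """grid: dict (x, y) -> True if open. Opens bridging cells until
    the open cells form a single connected region."""
    while True:
        open_cells = {c for c, v in grid.items() if v}
        comp = label_components(open_cells)
        if len(set(comp.values())) <= 1:
            return grid
        bridges = []  # closed cells touching two different components
        for (x, y), is_open in grid.items():
            if is_open:
                continue
            ids = {comp[n]
                   for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                   if n in comp}
            if len(ids) > 1:
                bridges.append((x, y))
        if not bridges:
            return grid  # no single-cell bridge exists; give up
        grid[random.choice(bridges)] = True
```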
The second way is using the properties of your functions. As long as your random function is smooth enough you can take the gradient and assign a tile to it. The way you assign the tiles changes the maze structure but you can guarantee connectivity by clever selection of tiles for each gradient (b/c similar or opposite gradients are more likely to be near each other on a smooth gradient function). From here your smooth random can be any form of Perlin Noise, etc. Once again a asking what you want technique.
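A sketch of just the gradient-to-tile step, with a sum-of-sines stand-in for Perlin noise; it does not by itself demonstrate the connectivity guarantee, which depends on the clever tile selection described above. Note it is purely local, which is exactly what the original question asked for:

```python
import math

def smooth(x, y):
    """Stand-in for Perlin/Simplex: any smooth 2D function works here."""
    return math.sin(x * 0.3) + math.cos(y * 0.4) + math.sin((x + y) * 0.17)

def tile_for(x, y):
    """Pick one of four wall tiles from the local gradient direction.
    Purely local: cell (32, 15) needs no knowledge of its neighbors."""
    eps = 0.5
    gx = smooth(x + eps, y) - smooth(x - eps, y)
    gy = smooth(x, y + eps) - smooth(x, y - eps)
    angle = math.atan2(gy, gx)  # in [-pi, pi]
    return "NESW"[int((angle + math.pi) / (math.pi / 2)) % 4]
```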
For backwards-reversed you unfortunately have an NP problem (I'm not sure if it's hard, complete, or whatever; it's been a while since I've worked on this...). So while you could generate a random map of distances down a maze path, and then from there generate the actual obstacles... it's not really advisable. There's also a ton of considerations for different cases, even for small mazes...
012
123
234
Is simple. There's a column in the lower right corner of 0 and the middle 2 has an _| shaped wall.
042
123
234
This one makes less sense. You're still required to have the same basic walls as before on all the non-changed squares... but you can't have that 4. It needs to be within 1 of at least a single neighbor. (I mean, you could give that square a +3 cost by having something like a conveyor belt, but then we're outside the maze problem.) Okay, so...
032
123
234
Makes more sense, but the 2 in the corner is nonsense once again. Flipping that from a trough to a peak would give:
034
123
234
Which makes sense. At any rate, if you can get to this point, then looking at local neighbors gives you the walls: if the difference is +/-1, no wall; otherwise, wall (sketched below). Also note that you can break the rules for the distance map in a consistent way and still make a maze just fine. (Like, instead of allowing a column, picking a wall and throwing it up. This is just loop splitting at this point and should be safe.)
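A sketch of that rule (the grid encoding is my assumption: `dist[r][c]` is the path distance of the cell):

```python
def walls_from_distances(dist):
    """Derive walls from a distance map: neighbors differing by exactly
    1 are connected; any other difference puts a wall between them."""
    rows, cols = len(dist), len(dist[0])
    walls = set()
    for r in range(rows):
        for c in range(cols):
            for nr, nc in ((r, c + 1), (r + 1, c)):  # east, south
                if (nr < rows and nc < cols
                        and abs(dist[r][c] - dist[nr][nc]) != 1):
                    walls.add(((r, c), (nr, nc)))
    return walls

# the corrected grid from above
walls = walls_from_distances([[0, 3, 4],
                              [1, 2, 3],
                              [2, 3, 4]])
```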
For random sampling, as the final one that I'm going to look at... Certain maze generation algorithms in the limit take on some interesting properties, either as an average configuration or after millions of steps. Some form Voronoi regions. Some form concentric circles with a randomly flipped wall to allow a connection between loops. Etc. The loop one is a good example to work off of. Create a set of loops. Flip a random wall on each loop. One flip will delete a wall, which creates access to the next loop. Another will split a path and offer a dead-end and a continuation. For a random flip to be a failure, there has to be an opening and a split made right next to each other (unless you allow diagonals, in which case we're good). So make loops. Generate random noise per loop. XOR together. Replace local failures with a fixed path if no diagonals are allowed.
So how do we get random noise per loop? Or how do we get better loops than just squares? Take a random function and separate out its divergence, and now you have a loop map. If you have the differential equations for the source random function, you can pick one random wall per loop. A simpler way might be to generate concentric circular walls and pick a random point at each radius to flip, then distort the final result. You have to be careful that your distortion doesn't violate any of your path-connectedness conditions at that point, though.
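A tiny sketch of the concentric-walls idea in polar cells, before any distortion (the encoding is my own; one gap per ring wall):

```python
import random

def ring_maze(n_rings, sectors=24):
    """Concentric circular walls with one randomly placed gap each.
    Returns the gap sector of every ring wall, innermost first."""
    return [random.randrange(sectors) for _ in range(n_rings)]

def can_cross(gaps, ring, sector):
    """You may walk freely along an annulus; you can only step from
    annulus `ring` to `ring + 1` at that wall's gap sector."""
    return gaps[ring] == sector
```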
I'm working on an optimization problem and attempting to use simulated annealing as a heuristic. My goal is to optimize placement of k objects given some cost function. Solutions take the form of a set of k ordered pairs representing points in an M*N grid. I'm not sure how to best find a neighboring solution given a current solution. I've considered shifting each point by 1 or 0 units in a random direction. What might be a good approach to finding a neighboring solution given a current set of points?
Since I'm also trying to learn more about SA, what makes a good neighbor-finding algorithm and how close to the current solution should the neighbor be? Also, if randomness is involved, why is choosing a "neighbor" better than generating a random solution?
I would split your question into several smaller ones:
Also, if randomness is involved, why is choosing a "neighbor" better than generating a random solution?
Usually, you pick multiple points from a neighborhood, and you can explore all of them. For example, you generate 10 points randomly and choose the best one. By doing so you can efficiently explore more possible solutions.
Why is it better than a random guess? Good solutions tend to have a lot in common (e.g. they are close to each other in the search space). So by introducing small incremental changes, you would be able to find a good solution, while a random guess could send you to a completely different part of the search space where you'll never find an appropriate solution. And because of the curse of dimensionality, random jumps are no better than brute force: there are too many places to jump to.
What might be a good approach to finding a neighboring solution given a current set of points?
I regret to tell you that this question seems to be unanswerable in general. :( It's a mix of art and science. Choosing the right way to explore a search space is too problem-specific. Even for solving a placement problem under varying constraints, different heuristics may lead to completely different results.
You can try the following:
Random shifts by a fixed number of steps (1, 2, ...). That's your approach.
Swapping two points
You can memorize bad moves for some time (something similar to tabu search), so you will use only 'good' ones for the next 100 steps
Use a greedy approach to generate a suboptimal placement, then improve it with methods above.
Try random restarts. At some stage, drop all of your progress so far (except for the best solution found), raise the temperature and start again from a random initial point. You can do this every 10,000 steps or so
Fix some points. Put an object at point (x,y) and do not move it at all; then search for the best possible solution under this constraint.
Prohibit some combinations of objects, e.g. "distance between p1 and p2 must be larger than D".
Mix all steps above in different ways
Try to understand your problem in all its tiniest details. You can derive useful information/constraints/insights from your problem description. Assume that you can't solve the placement problem in general, so try to reduce it to a more specific (== simpler, == with a smaller search space) problem.
I would say that the last bullet is the most important. Look closely at your problem and consider its practical aspects only. For example, the size of your problems might allow you to enumerate something, or maybe some placements are not possible for you, and so on and so forth. There is no way for SA to derive such domain-specific knowledge by itself, so help it!
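As a concrete starting point, here is a minimal sketch of the first two moves from the list above (the function name, grid clamping, and 50/50 mix are my own choices; solutions are assumed to be lists of (x, y) tuples):

```python
import random

def neighbor(solution, grid_w, grid_h):
    """One neighboring solution: shift a random point one step in a
    random direction, or swap two points (the first two moves above).
    `solution` is a list of (x, y) tuples on an M*N grid."""
    new = list(solution)
    if len(new) >= 2 and random.random() < 0.5:
        i, j = random.sample(range(len(new)), 2)
        new[i], new[j] = new[j], new[i]
    else:
        i = random.randrange(len(new))
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        new[i] = (min(max(new[i][0] + dx, 0), grid_w - 1),
                  min(max(new[i][1] + dy, 0), grid_h - 1))
    return new
```

Note that swapping two points only changes the cost if your objective distinguishes which object sits at which point; if the k points are interchangeable, drop the swap branch.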
How to understand that your heuristic is a good one? Only by practical evaluation. Prepare a decent set of tests with obvious/well-known answers and try different approaches. Use well-known benchmarks if there are any of them.
I hope that this is helpful. :)
I'm dealing with a war game. I have a list of my bases B(x,y) from which I can send attacks on the enemy (they have bases between my own bases). Each base B can attack within a range R (the same radius for all bases). How can I choose which of my bases to use so that I attack as many enemy bases as possible while using the minimum number of my own?
I've reduced the problem to finding the minimum number of bases (and their coordinates) required to cover the largest area possible. I wonder if there is a better way than trying all the possible combinations, because the number of bases could reach thousands.
Example: If the attack radius is 10 and I have five bases in a square and its center: (0,0), (10,0), (10,10), (0,10), (5,5) then the answer is that only the first four would be needed because all the area covered by the one in the center is already covered by the others.
Note 1 The solution must be single-threaded.
Note 2 The solution doesn't have to be perfect if that means a big gain in speed. The number of bases reaches thousands and this needs to use as little time as possible. I would consider a running time greater than 100 ms for 10,000 bases in Python on a modern computer unacceptable, so I was thinking maybe I could start by eliminating the obvious: if there are multiple bases within a distance of R/10 of each other, simply eliminate all except one (whichever).
If I understand you correctly, the enemy bases and your bases are given as well as the (constant) attack radius. I.e. if you select one of your bases, you know exactly which of the enemy bases get attacked due to the selection.
The first step would be to eliminate from the problem those enemy bases which cannot be attacked by any of your bases. Then, selecting all of your bases is guaranteed to attack all attackable enemy bases, so a solution that attacks as many enemy bases as possible certainly exists.
Among all those solutions, you are looking for the one that uses the minimum number of your bases. This problem is equivalent to the set cover problem (https://en.wikipedia.org/wiki/Set_cover_problem), which is unfortunately NP-hard. You can apply all the known solution methods, such as integer linear programming or the already mentioned greedy algorithm / metaheuristics.
If your problem instance is large and runtime is the primary concern, greedy is probably the way to go: always add to the selection the base of yours which adds the highest number of enemy bases that can be attacked and were previously not under attack by your already selected bases, as sketched below.
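A sketch of that greedy rule (the `attackable` mapping is an assumed precomputed input, e.g. built with a spatial index over your bases and radius R):

```python
def greedy_cover(attackable):
    """attackable: dict mapping each of your bases to the set of enemy
    bases it can hit (within radius R). Repeatedly pick the base that
    adds the most not-yet-covered enemies; stop when nothing is gained."""
    uncovered = set().union(*attackable.values()) if attackable else set()
    chosen = []
    while uncovered:
        best = max(attackable, key=lambda b: len(attackable[b] & uncovered))
        gained = attackable[best] & uncovered
        if not gained:
            break
        chosen.append(best)
        uncovered -= gained
    return chosen
```

This is the classic greedy set-cover heuristic with its known ln(n) approximation guarantee; whether it fits the 100 ms budget mostly depends on precomputing `attackable` efficiently.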
Hmm, the solution depends on your needs. If you need a real-time answer, a greedy algorithm could provide a good solution.
Another solution could be using a metaheuristic with a time constraint (http://en.wikipedia.org/wiki/Metaheuristic). I would probably use a genetic algorithm to search for a solution to this problem under a limited time budget.
If you're interested, I can provide a toy example implementation in Python.
EDIT:
When you have to provide a solution quickly, a greedy algorithm is often better. But in your case I have doubts: a particularity of many greedy algorithms is that you need to start from scratch each time you compute a new result.
Speaking again of genetic algorithms: each time you have to take a decision, you could restart the search process from its last result. In fact, you could let it run as a subprocess and, every 100 ms, take the best solution computed during the last loop.
If this is not too greedy in computing resources, it would provide better results than the greedy one in the long run, as the solution will probably need to adapt to changes in the situation while many elements stay unchanged. Just be aware that initializing a meta-search with the solution of a greedy algorithm is a good idea anyway!
I am writing a maze generation algorithm, and this Wikipedia article caught my eye. I decided to implement it in Java, which was a cinch. The problem I am having is that while a maze-like picture is generated, the maze often is not solvable and is often not interesting. By not interesting I mean that there are a vast number of unreachable places and often there are many solutions.
I implemented the 1234/3 rule (although it is easily changeable, see comments for an explanation) with a roughly 50/50 distribution at the start. The mazes always reach an equilibrium where there is no change between time steps.
My question is: is there a way to guarantee the maze's solvability from a fixed start and end point? Also, is there a way to make the maze more interesting to solve (fewer/one solution and few/no unreachable places)? If this is not possible with cellular automata, please tell me. Thank you.
I don't think it's possible to ensure a solvable, interesting maze through simple cellular automata, unless there are some specific criteria that can be placed on the starting state. The problem is that cells have no knowledge of the overall shape, because each cell can't coordinate with the group as a whole.
If you're insistent on using them, you could do some combination of modification and pathfinding after generation is finished, but other methods (like the ones shown in the Wikipedia article or this question) are simpler to implement and won't result in walls that take up a whole cell (unless you want that).
the root of the problem is that "maze quality" is a global measure, but your automaton cells are restricted to a very local knowledge of the system.
to resolve this, you have three options:
add the global information from outside. generate mazes using the automaton and random initial data, then measure the maze quality (eg using flood fill or a bunch of other maze solving techniques) and repeat until you get a result you like (see the sketch after this list).
use a much more complex set of explicit rules and state. you can work out a set of rules / cell values that encode both the presence of walls and the lengths / quality of paths. for example, -1 would be a wall and a positive value would be the sum of all neighbours above and to the left. then positive values encode the path distance from top left, roughly. that's not enough, but it shows the general idea... you need to encode an algorithm about the maze "directly" in the rules of the system.
use a less complex, but still turing complete, set of rules, and encode the rules for maze generation in the initial state. for example, you could use conway's life and construct an initial state that is a "program" that implements maze generation via gliders etc etc.
if it helps any you could draw a parallel between the above and:
ghost in the machine / external user
FPGA
programming a general purpose CPU
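To illustrate the first option, a generate-and-test sketch (the `run_automaton` callback standing in for the asker's 1234/3 implementation is an assumption; 0 = open cell, 1 = wall):

```python
from collections import deque
import random

def solvable(grid, start, goal):
    """Flood fill (BFS): is `goal` reachable from `start`? 0 = open."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= n[0] < rows and 0 <= n[1] < cols
                    and grid[n[0]][n[1]] == 0 and n not in seen):
                seen.add(n)
                queue.append(n)
    return False

def generate(rows, cols, start, goal, run_automaton, tries=1000):
    """Generate-and-test: random seed, run the CA to equilibrium, keep
    the first result whose start and end connect."""
    for _ in range(tries):
        grid = [[random.randint(0, 1) for _ in range(cols)]
                for _ in range(rows)]
        grid = run_automaton(grid)  # caller's 1234/3 rule, to equilibrium
        if solvable(grid, start, goal):
            return grid
    return None
```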
Run a pathfinding algorithm over it. Dijkstra gives you a sure way to compute the shortest solution (and distances to every reachable cell); A* gives you one good solution faster.
The difficulty of a maze can be measured by the speed at which these algorithms solve it.
You can add some dead-ends in order to shut down some solutions.
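A sketch of that measurement (with unit move costs, Dijkstra degenerates to BFS; the expansion count serves as the crude difficulty score):

```python
from collections import deque

def solve_effort(grid, start, goal):
    """BFS (Dijkstra with unit costs). Returns (solution_length,
    cells_expanded); the expansion count is a rough difficulty score."""
    dist, queue, expanded = {start: 0}, deque([start]), 0
    while queue:
        cell = queue.popleft()
        expanded += 1
        if cell == goal:
            return dist[cell], expanded
        r, c = cell
        for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= n[0] < len(grid) and 0 <= n[1] < len(grid[0])
                    and grid[n[0]][n[1]] == 0 and n not in dist):
                dist[n] = dist[cell] + 1
                queue.append(n)
    return None, expanded  # unsolvable
```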
This question refers to the Google-sponsored AI Challenge, a contest that happens every few months and in which the contenders need to submit a bot able to autonomously play a game against other robotic players. The competition that just closed was called "ants" and you can read all its specification here, if you are interested.
My question is specific to one aspect of ants: combat strategy.
The problem
Given a grid of discrete coordinates [like a chessboard] and given that each player has a number of ants that at each turn can either:
stay still
move east / north / west / south,
...an ant will be killed by an enemy ant if some enemy ant in range is surrounded by fewer (or the same number of) its own enemies than the ant is [equivalent to: "An ant will kill an enemy ant if an enemy in range is surrounded by more (or the same number of) enemies than its target"]
A visual example:
In this case the yellow ants are going to move west, and the orange ant, not being able to move away [blue tiles are blocking] will have two yellow ants "in range" and will die (if the explanation is still not clear, I invite you to visit the link above to see more examples and explained scenarios).
The question
My question is substantially about complexity. I have thought about this problem extensively, but I still couldn't come up with an acceptable way to calculate the optimal set of moves in a reasonable time. It seems to me that to find the best possible set of moves for my ants, I should evaluate the outcome of every possible scenario, but since battles can be pretty crowded with ants, this would mean computation time grows exponentially (5^n, with n being the number of ants involved).
Another limitation of this approach is that the solution being worked on doesn't improve proportionally to the time spent computing, so arbitrarily interrupting its execution might leave you with an unacceptable solution.
I suspect that a good solution might be found via some geometrical considerations in combination with linear algebra (maybe calculating some "centres of gravity" for groups of ants?), but I could not get past the level of "intuition" on this...
So, my question really boils down to:
How should this problem be approached to find [nearly] optimal solutions within a computation time of ~50-100 ms on a modern machine (this figure is derived from the official contest rules)?
If you are interested in the problem and need some inspiration, I highly recommend watching some of the available game replays.
I think your problem can be solved by turning the problem around.
Instead of calculating the best moves per ant, you could calculate the best move candidates per discrete position on your playing board, for example:
+1 for a safe place
+2 for a place that results in an enemy dying
-1 point for a position of certain death
That would scale linearly, but with the trade-off of not providing the best individual movement.
Maybe worth a try :)
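A sketch of this per-position scoring (the three input sets are assumptions: they would be precomputed from the combat rules each turn):

```python
def score_map(safe, kills, deaths):
    """Influence-map sketch: score positions, not ants. `safe`, `kills`
    and `deaths` are sets of (x, y) positions precomputed elsewhere."""
    scores = {}
    for p in safe:
        scores[p] = scores.get(p, 0) + 1
    for p in kills:
        scores[p] = scores.get(p, 0) + 2
    for p in deaths:
        scores[p] = scores.get(p, 0) - 1
    return scores

def best_move(ant, scores):
    """Each ant independently picks its best-scoring option
    (stay, east, west, north, south)."""
    x, y = ant
    options = [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return max(options, key=lambda p: scores.get(p, 0))
```

Each ant then chooses its move independently in O(1), so the whole turn stays linear in the number of board positions considered.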
Tricky indeed. You may find some hints in bee algorithms. This is a family of algorithms that use swarm cooperation within 'reasonable computation time'. Bee algorithms can, for instance, be used to roughly (!) solve the traveling salesman problem. I expect that these algorithms can provide you with the best solution attainable in the available computing time.
Of course, the problem can be simplified using geometry: relative positions of ants in a neighbourhood are more important than absolute positions. Also, light_303's solution is complementary to the search pattern I propose.
EDIT FROM THE OP:
I'm selecting this answer as accepted as the winner of the contest published a post-mortem analysis of his code, and indeed he followed the approach suggested by the author of this answer. You can read the winner's blog entry here.
For these kinds of problems, the minimax algorithm with alpha-beta pruning is usually used. (*) [A simple explanation of minimax and alpha-beta pruning is at the end; for more details, the Wikipedia pages should also be read.]
In order to overcome the problem you mentioned about the extremely large number of possible moves, a common improvement is running minimax iteratively (iterative deepening). First you explore all nodes down to depth 1 and find the best solution. If you still have time, you explore all nodes down to depth 2 and choose a new, more informed best solution, and so on...
When out of time, give the best solution you found at the last depth you fully explored.
To further improve your solution, you might want to reorder the nodes you develop: for iteration i, sort the nodes from iteration (i-1) [by the heuristic value of each vertex] and explore each possibility in that order. The idea behind this is that you are more likely to prune more vertices if you investigate the "best" solutions first.
The problem here remains finding a good heuristic function that evaluates "how good a state is".
(*) The minimax algorithm is simple: you explore the game tree and decide what you will do in each state, and what your opponent is most likely to do after each action. This is done down to depth k, where k is given to the algorithm.
Alpha-beta pruning is an addition to minimax which tells you which nodes need not be explored any further, because you are not going to choose them anyway; you already have a better solution.
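A minimal sketch of both pieces, assuming nothing about the contest API: `children(state, maximizing)` and `evaluate(state)` are hypothetical callbacks you would supply, and the coarse deadline check can overshoot mid-iteration (a real bot should also check the clock inside the recursion):

```python
import time

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning. `children(state, maximizing)`
    yields successor states; `evaluate` is the heuristic from above."""
    succ = children(state, maximizing)
    if depth == 0 or not succ:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for s in succ:
            value = max(value, alphabeta(s, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent would never allow this branch
                break
        return value
    value = float("inf")
    for s in succ:
        value = min(value, alphabeta(s, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

def iterative_deepening(state, children, evaluate, budget=0.05):
    """Deepen until the ~50 ms budget is spent; keep the best root
    successor from the last completed depth."""
    deadline = time.time() + budget
    best, depth = None, 1
    while time.time() < deadline:
        moves = children(state, True)
        if not moves:
            break
        best = max(moves, key=lambda s: alphabeta(
            s, depth - 1, float("-inf"), float("inf"),
            False, children, evaluate))
        depth += 1
    return best
```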
My question is substantially about complexity. I have thought about this problem extensively, but I still couldn't come up with an acceptable way to calculate the optimal set of moves in a reasonable time.
Exactly!
It's an AI competition. AI deals with problems which are too complex to be solved with optimal algorithms.
So you have to try "stuff", like your idea about centers of gravity. Even better would be some genetic algorithms where better strategies are found through natural selection (but it's hard to set up some evolving "framework" for that).
BTW: you can see the blog of the current leader and his strategy is surprisingly simple.