Connecting pieces of wires - algorithm

I'm using C#, but in the future I might need to implement this in other languages.
Many games have such puzzles. There is a group of wires (there are two types of wire: straight and curved), there is a place from which a signal enters, and there is a place from which the signal must leave. But the arrangement of the wires doesn't allow that to happen; you must rotate some of the wires in order to create a path for the signal.
Yes, I'm probably reinventing the wheel here, but I'd rather figure it out once now than have to figure it out again in the future.
At some point in the future I will also try the same thing, but with wires that split the signal into 2 or 3 signals.
The problem is that I can't think of an algorithm that I can picture turning into code. I have been thinking for some time and I can't come up with anything good.
So, can you help me? I will be able to understand the algorithm as "what the program has to do", but I basically need help understanding the algorithm as "how to write the code".
Thanks!

Take a look at some maze generation algorithms -- they do the same thing you're looking for, and as such are what you'll need to create the grid. Pick a simple 'cell-carver' from the ones I linked to; randomly rotate all of the wire pieces, et voilà!
A comment on your question notes that turning an algorithm into code involves breaking it down bit by bit -- so that's what we'll do:
You'd start with a grid of potential wire locations (probably 2D, but a 3D game would be amazing).
To produce a solvable level, you could do a couple things -- produce a level, and see if it is solvable (bad), or produce a solved level, then "unsolve" it (better). As I alluded to above, producing a solved level involves algorithms very similar to maze generation -- a solved level would have many traversable "paths", just like a maze. Unsolving the level would just loop through all wire pieces, and rotate them a bit.
But what of maze generation? The resource I linked to contains some algorithms that would be perfect for your use -- a simple randomized DFS should be plenty for your needs.
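To make that concrete, here's a rough sketch of the carve-then-scramble idea (Python for brevity; the grid representation and names are purely illustrative, so adapt them to your C# structures):

    import random

    def generate_level(width, height):
        # Carve a spanning tree of connections with a randomized DFS,
        # then scramble the rotations to "unsolve" the level.
        cells = {(x, y) for x in range(width) for y in range(height)}
        connections = {cell: set() for cell in cells}
        stack, visited = [(0, 0)], {(0, 0)}
        while stack:
            x, y = stack[-1]
            unvisited = [(x + dx, y + dy)
                         for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if (x + dx, y + dy) in cells
                         and (x + dx, y + dy) not in visited]
            if unvisited:
                nxt = random.choice(unvisited)
                connections[(x, y)].add(nxt)  # wire the two cells together
                connections[nxt].add((x, y))
                visited.add(nxt)
                stack.append(nxt)
            else:
                stack.pop()  # dead end: backtrack
        # "Unsolve" the level: give every wire piece a random rotation
        rotations = {cell: random.randrange(4) for cell in cells}
        return connections, rotations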
You'll note that I covered the general case involving branching wires -- as it is, excluding branching means you have to do more coding to "prune" branches intelligently -- that is, to backtrack when your one true path gets stuck, say in a corner, due to the random movement that is probably necessary to make an interesting level.
You should also note that a solution for such a game would, if I understood correctly, require the use of all the given wire (I know some games like this). Otherwise, the game would probably be a lot simpler to play given the above generation tactics.

The Union-Find algorithms (which solve my problem) are explained in this PDF file: http://www.cs.princeton.edu/~rs/AlgsDS07/01UnionFind.pdf
I don't think I would have ever thought of this algorithm, let alone making it fast.
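For reference, a minimal sketch of the weighted union-find with path compression described in that PDF (Python for brevity; porting it to C# is mechanical):

    class UnionFind:
        # Weighted quick-union with path compression, as in the slides.
        def __init__(self, n):
            self.parent = list(range(n))
            self.size = [1] * n

        def find(self, p):
            while self.parent[p] != p:
                self.parent[p] = self.parent[self.parent[p]]  # path compression
                p = self.parent[p]
            return p

        def union(self, p, q):
            root_p, root_q = self.find(p), self.find(q)
            if root_p == root_q:
                return
            if self.size[root_p] < self.size[root_q]:  # attach the smaller
                root_p, root_q = root_q, root_p        # tree under the larger
            self.parent[root_q] = root_p
            self.size[root_p] += self.size[root_q]

        def connected(self, p, q):
            return self.find(p) == self.find(q)

Each time two wire ends line up you union their cells; the signal can get through once connected(source, sink) is true.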

Related

Search space data

I was wondering if anyone knew of a source which provides 2D model search spaces to test a GA against. I believe I read a while ago that there are a number of standard search spaces which are typically used when evaluating these types of algorithms.
If not, is it just a case of randomly generating this data yourself each time?
Edit: [images showing the search space from above and from the side]
The search space is completely dependent on your problem. The idea of a genetic algorithm is that you modify the "genome" of a population of individuals to create the next generation, measure the fitness of the new generation, and modify the genomes again, with some randomness thrown in to try to prevent getting stuck in local minima. The search space, however, is completely determined by what you have in your genome, which in turn is completely determined by what the problem is.
There might be standard search spaces (i.e. genomes) that have been found to work well for particular problems (I haven't heard of any) but usually the hardest part in using GAs is defining what you have in your genome and how it is allowed to mutate. The usefulness comes from the fact that you don't have to explicitly declare all the values for the different variables for the model, but you can find good values (not necessarily the best ones though) using a more or less blind search.
EXAMPLE
One example used quite heavily is the evolved radio antenna (Wikipedia). The aim is to find a configuration for a radio antenna such that the antenna itself is as small and lightweight as possible, with the restrictions that it has to respond to certain frequencies, have low noise, and so on.
So you would build your genome by specifying:
the number of wires to use
the number of bends in each wire
the angle of each bend
maybe the distance of each bend from the base
(something else, I don't know what)
run your GA, see what comes out the other end, and analyse why it didn't work. GAs have a habit of producing results you didn't expect because of bugs in the simulation. Anyhow, you might discover that the genome has to encode the number of bends individually for each of the wires in the antenna, meaning that the antenna isn't going to be symmetric. So you put that in your genome and run the thing again. Simulating things that need to work in the physical world is usually the most expensive, because at some point you have to test the individual(s) in the real world.
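To make the encoding idea concrete, here's a hedged sketch of what such a genome might look like (the representation, ranges, and mutation scheme are illustrative guesses, not the actual antenna encoding):

    import random

    def random_genome(n_wires, max_bends):
        # One gene per wire: a list of bend angles in degrees.
        return [[random.uniform(-90, 90)
                 for _ in range(random.randint(1, max_bends))]
                for _ in range(n_wires)]

    def mutate(genome, rate=0.1):
        # Perturb each bend angle with probability `rate`.
        return [[angle + random.gauss(0, 5) if random.random() < rate else angle
                 for angle in wire]
                for wire in genome]

    def crossover(a, b):
        # Single-point crossover at a wire boundary (assumes >= 2 wires).
        point = random.randint(1, len(a) - 1)
        return a[:point] + b[point:]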
There's a reasonable tutorial on genetic algorithms here, with some useful examples of different encoding schemes for the genome.
One final point: when people say that GAs are simple and easy to implement, they mean that the framework around the GA (generating a new population, evaluating fitness, etc.) is simple. What usually goes unsaid is that setting up a GA for a real problem is very difficult and usually requires a lot of trial and error, because coming up with an encoding scheme that works well is not simple for complex problems. The best way to start is to start simple and make things more complex as you go along. You can of course make another GA to come up with the encoding for the first GA :).
There are several standard benchmark problems out there.
BBOB (Black Box Optimization Benchmarks) -- have been used in recent years as part of a continuous optimization competition
DeJong functions -- pretty old, and really too easy for most practical purposes these days. Useful for debugging perhaps.
ZDT/DTLZ multiobjective functions -- multi-objective optimization problems, but you could scalarize them yourself I suppose.
Many others

Multiple parameter optimization with lots of local minima

I'm looking for algorithms to find a "best" set of parameter values. The function in question has a lot of local minima and changes very quickly. To make matters even worse, testing a set of parameters is very slow - on the order of 1 minute - and I can't compute the gradient directly.
Are there any well-known algorithms for this kind of optimization?
I've had moderate success with just trying random values. I'm wondering if I can improve the performance by making the random parameter chooser have a lower chance of picking parameters close to ones that had produced bad results in the past. Is there a name for this approach so that I can search for specific advice?
More info:
Parameters are continuous
There are on the order of 5-10 parameters. Certainly not more than 10.
How many parameters are there -- eg, how many dimensions in the search space? Are they continuous or discrete - eg, real numbers, or integers, or just a few possible values?
Approaches that I've seen used for these kind of problems have a similar overall structure - take a large number of sample points, and adjust them all towards regions that have "good" answers somehow. Since you have a lot of points, their relative differences serve as a makeshift gradient.
Simulated Annealing: The classic approach. Take a bunch of points and probabilistically move each to a neighbouring point chosen at random, depending on how much better that point is.
Particle Swarm Optimization: Take a "swarm" of particles with velocities in the search space and probabilistically move each particle randomly; if a move is an improvement, let the whole swarm know.
Genetic Algorithms: This is a little different. Rather than using the neighbours information like above, you take the best results each time and "cross-breed" them hoping to get the best characteristics of each.
The wikipedia links have pseudocode for the first two; GA methods have so much variety that it's hard to list just one algorithm, but you can follow links from there. Note that there are implementations for all of the above out there that you can use or take as a starting point.
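For reference, the SA skeleton is only a few lines. A minimal sketch (the neighbour function and cooling schedule are the problem-specific parts you'd have to supply and tune):

    import math
    import random

    def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.95, steps=1000):
        # cost maps a state to a number to minimise;
        # neighbour proposes a random nearby state.
        x, fx = x0, cost(x0)
        best, fbest = x, fx
        t = t0
        for _ in range(steps):
            y = neighbour(x)
            fy = cost(y)
            # Always accept improvements; accept worse moves with
            # probability exp(-delta/t) so we can escape local minima.
            if fy < fx or random.random() < math.exp((fx - fy) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
            t *= cooling  # the cooling schedule: the main tuning knob
        return best, fbest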
Note that all of these -- and really any approach to this kind of large-dimensional search problem -- are heuristics, which means they have parameters which have to be tuned to your particular problem. That tuning can be tedious.
By the way, the fact that the function evaluation is so expensive can be made to work for you a bit; since all the above methods involve lots of independent function evaluations, that piece of the algorithm can be trivially parallelized with OpenMP or something similar to make use of as many cores as you have on your machine.
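In Python terms (assuming the cost function is a picklable top-level function), that parallelization can be as simple as:

    from multiprocessing import Pool

    def evaluate_all(cost, candidates, workers=8):
        # Evaluate independent candidates in parallel; with a ~1 minute
        # cost function the speedup is close to linear in core count.
        with Pool(workers) as pool:
            return pool.map(cost, candidates)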
Your situation seems to be similar to that of the poster of Software to Tune/Calibrate Properties for Heuristic Algorithms, and I would give you the same advice I gave there: consider a Metropolis-Hastings like approach with multiple walkers and a simulated annealing of the step sizes.
The difficulty in using Monte Carlo methods in your case is the expensive evaluation of each candidate. How expensive, compared to the time you have at hand? If you need a good answer in a few minutes, this isn't going to be fast enough. If you can leave it running overnight, it'll work reasonably well.
Given a complicated search space, I'd recommend a random initial distribution. Your final answer may simply be the best individual result recorded during the whole run, or the mean position of the walker with the best result.
Don't be put off that I was discussing maximizing there and you want to minimize: the figure of merit can be negated or inverted.
I've tried Simulated Annealing and Particle Swarm Optimization. (As a reminder, I couldn't use gradient descent because the gradient cannot be computed).
I've also tried an algorithm that does the following:
Pick a random point and a random direction
Evaluate the function
Keep moving along the random direction for as long as the result keeps improving, speeding up on every successful iteration.
When the result stops improving, step back and instead attempt to move into an orthogonal direction by the same distance.
This "orthogonal direction" was generated by creating a random orthogonal matrix (adapted this code) with the necessary number of dimensions.
If moving in the orthogonal direction improved the result, the algorithm just continued with that direction. If none of the directions improved the result, the jump distance was halved and a new set of orthogonal directions would be attempted. Eventually the algorithm concluded it must be in a local minimum, remembered it and restarted the whole lot at a new random point.
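Here's a sketch of that algorithm as described (NumPy's QR decomposition stands in for the linked orthogonal-matrix code; the step sizes, bounds, and restart policy are illustrative):

    import numpy as np

    def random_orthogonal(n, rng):
        # Random orthogonal matrix via QR of a Gaussian matrix.
        q, r = np.linalg.qr(rng.standard_normal((n, n)))
        return q * np.sign(np.diag(r))  # sign fix for a uniform distribution

    def adaptive_search(cost, n_dims, bounds=(-1.0, 1.0), step0=0.5,
                        min_step=1e-4, restarts=10, seed=None):
        rng = np.random.default_rng(seed)
        best_x, best_f = None, np.inf
        for _ in range(restarts):  # restart from a random point each time
            x = rng.uniform(bounds[0], bounds[1], n_dims)
            f = cost(x)
            step = step0
            while step > min_step:
                improved = False
                # Rows of a random orthogonal matrix give mutually
                # orthogonal search directions.
                for d in random_orthogonal(n_dims, rng):
                    for direction in (d, -d):
                        s = step
                        while True:  # accelerate while the result improves
                            fy = cost(x + s * direction)
                            if fy >= f:
                                break
                            x, f = x + s * direction, fy
                            s *= 2.0
                            improved = True
                    if improved:
                        break  # keep working from the improved point
                if not improved:
                    step /= 2.0  # every direction failed: halve the jump
            if f < best_f:  # presumed local minimum; remember the best
                best_x, best_f = x.copy(), f
        return best_x, best_f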
This approach performed considerably better than Simulated Annealing and Particle Swarm: it required fewer evaluations of the (very slow) function to achieve a result of the same quality.
Of course my implementations of S.A. and P.S.O. could well be flawed - these are tricky algorithms with a lot of room for tweaking parameters. But I just thought I'd mention what ended up working best for me.
I can't really help you with finding an algorithm for your specific problem.
However, with regard to the random choosing of parameters, I think what you are looking for are genetic algorithms. Genetic algorithms are generally based on choosing some random inputs, selecting those which are the best fit (so far) for the problem, and randomly mutating/combining them to generate a next generation for which again the best are selected.
If the function is more or less continuous (that is, small mutations of good inputs generally won't produce bad inputs, "small" being somewhat vague here), this would work reasonably well for your problem.
There is no general way to answer your question. There are lots of books/papers on the subject matter, but you'll have to choose your path according to your needs, which you haven't clearly spelled out here.
Some things to know, however: 1 min/test is way too long for any algorithm to handle. I guess that in your case, you really must do one of the following:
get 100 computers to cut your parameter testing time to some reasonable time
really try to work out your parameters by hand and head. There must be some redundancy and at least some sanity checks, so you can test your case in under 1 min
for possible result sets, try to figure out some 'operations' that modify them slightly instead of just randomizing them. For example, in TSP a basic operator is the lambda operator, which swaps two nodes and thus creates a new route. Yours could be shifting some number up or down by some value (see the sketch after this list).
then, find yourself some nice algorithm; your starting point could be somewhere here. The book is an invaluable resource for anyone who is starting with problem-solving.
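For the third point, a neighbour 'operation' can be tiny. A sketch of the two flavours mentioned above (a node swap for TSP, and a small shift for a numeric parameter vector):

    import random

    def swap_two(route):
        # TSP-style neighbour move: swap two random nodes -> a new route.
        i, j = random.sample(range(len(route)), 2)
        neighbour = route[:]
        neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
        return neighbour

    def nudge(params, scale=0.1):
        # Parameter-vector equivalent: shift one value up or down slightly.
        i = random.randrange(len(params))
        neighbour = params[:]
        neighbour[i] += random.choice((-1, 1)) * scale
        return neighbour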

Algorithms for realtime strategy wargame AI

I'm designing a realtime strategy wargame where the AI will be responsible for controlling a large number of units (possibly 1000+) on a large hexagonal map.
A unit has a number of action points which can be expended on movement, attacking enemy units or various special actions (e.g. building new units). For example, a tank with 5 action points could spend 3 on movement then 2 in firing on an enemy within range. Different units have different costs for different actions etc.
Some additional notes:
The output of the AI is a "command" to any given unit
Action points are allocated at the beginning of a time period, but may be spent at any point within the time period (this is to allow for realtime multiplayer games). Hence "do nothing and save action points for later" is a potentially valid tactic (e.g. a gun turret that cannot move waiting for an enemy to come within firing range)
The game is updating in realtime, but the AI can get a consistent snapshot of the game state at any time (thanks to the game state being one of Clojure's persistent data structures)
I'm not expecting "optimal" behaviour, just something that is not obviously stupid and provides reasonable fun/challenge to play against
What can you recommend in terms of specific algorithms/approaches that would allow for the right balance between efficiency and reasonably intelligent behaviour?
If you read Russell and Norvig, you'll find a wealth of algorithms for every purpose, updated to pretty much today's state of the art. That said, I was amazed at how many different problem classes can be successfully approached with Bayesian algorithms.
However, in your case I think it would be a bad idea for each unit to have its own Petri net or inference engine... there's only so much CPU and memory and time available. Hence, a different approach:
While in some ways perhaps a crackpot, Stephen Wolfram has shown that it's possible to program remarkably complex behavior on a basis of very simple rules. He bravely extrapolates from the Game of Life to quantum physics and the entire universe.
Similarly, a lot of research on small robots is focusing on emergent behavior or swarm intelligence. While classic military strategy and practice are strongly based on hierarchies, I think that an army of completely selfless, fearless fighters (as can be found marching in your computer) could be remarkably effective if operating as self-organizing clusters.
This approach would probably fit a little better with Erlang's or Scala's actor-based concurrency model than with Clojure's STM: I think self-organization and actors would go together extremely well. Still, I could envision running through a list of units at each turn, and having each unit evaluating just a small handful of very simple rules to determine its next action. I'd be very interested to hear if you've tried this approach, and how it went!
EDIT
Something else that was on the back of my mind but that slipped out again while I was writing: I think you can get remarkable results from this approach if you combine it with genetic or evolutionary programming; i.e. let your virtual toy soldiers wage war on each other as you sleep, let them encode their strategies and mix, match and mutate their code for those strategies; and let a refereeing program select the more successful warriors.
I've read about some startling successes achieved with these techniques, with units operating in ways we'd never think of. I have heard of AIs working on these principles having had to be intentionally dumbed down in order not to frustrate human opponents.
First, you should aim to make your game turn-based at some level for the AI (i.e. you can somehow model it as turn-based even if it may not be entirely turn-based; in an RTS you may be able to break discrete intervals of time into turns).
Second, you should determine how much information the AI should work with. That is, whether the AI is allowed to cheat and know every move of its opponent (thereby making it stronger), or whether it should know less.
Third, you should define a cost function for a state. The idea is that a higher cost means a worse state for the computer to be in.
Fourth, you need a move generator, generating all valid states the AI can transition to from a given state (this may be homogeneous [state-independent] or heterogeneous [state-dependent]).
The thing is, the cost function will be greatly influenced by what exactly you define the state to be. The more information you encode in the state, the better balanced your AI will be, but the more difficult it will be for it to perform, as it will have to search exponentially more for every additional state variable you include (in an exhaustive search).
If you provide a definition of a state and a cost function, your problem transforms into a general problem in AI that can be tackled with any algorithm of your choice.
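As a purely illustrative sketch of what "a state and a cost function" might look like for an RTS (the variables and weights here are made up for the example and would need tuning):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class GameState:
        # A deliberately small state: add variables sparingly, since the
        # search space grows exponentially with each one.
        own_units: int
        enemy_units: int
        territory_held: float  # fraction of the map controlled
        resources: int

    def cost(state: GameState) -> float:
        # Lower is better for the AI; the weights are guesses to be tuned.
        return (2.0 * state.enemy_units
                - 1.5 * state.own_units
                - 10.0 * state.territory_held
                - 0.01 * state.resources)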
Here is a summary of what I think would work well:
1. Evolutionary algorithms may work well if you put enough effort into them, but they will add a layer of complexity that creates room for bugs, among other things that can go wrong. They will also require extreme amounts of tweaking of the fitness function, etc. I don't have much experience working with these, but if they are anything like neural networks (which I believe they are, since both are heuristics inspired by biological models) you will quickly find they are fickle and far from consistent. Most importantly, I doubt they add any benefit over the option I describe in 3.
2. With the cost function and state defined, it would technically be possible for you to apply gradient descent (with the assumption that the state function is differentiable and the domain of the state variables is continuous); however, this would probably yield inferior results, since the biggest weakness of gradient descent is getting stuck in local minima. To give an example, this method would be prone to something like always attacking the enemy as soon as possible, because there is a non-zero chance of annihilating them. Clearly, this may not be desirable behaviour for a game; however, gradient descent is a greedy method and doesn't know better.
3. This option would be my most highly recommended one: simulated annealing. Simulated annealing would (IMHO) have all the benefits of 1 without the added complexity, while being much more robust than 2. In essence, SA is just a random walk amongst the states. So in addition to the cost and states, you will have to define a way to randomly transition between states. SA is also not prone to getting stuck in local minima, while producing very good results quite consistently. The only tweaking required with SA is the cooling schedule, which decides how fast SA will converge. The greatest advantage of SA, I find, is that it is conceptually simple and produces empirically superior results to most other methods I have tried. Information on SA can be found here, with a long list of generic implementations at the bottom.
3b. (Edit: added much later) SA and the techniques I listed above are general AI techniques and not really specialized for game AI. In general, the more specialized the algorithm, the better its chance of performing well; see the No Free Lunch Theorem [2]. Another extension of 3 is something called parallel tempering, which dramatically improves the performance of SA by helping it avoid local optima. Some of the original papers on parallel tempering are quite dated [3], but others have been updated [4].
Regardless of which method you choose in the end, it's going to be very important to break your problem down into states and a cost function, as I said earlier. As a rule of thumb, I would start with 20-50 state variables, as your state search space is exponential in the number of these variables.
This question is huge in scope. You are basically asking how to write a strategy game.
There are tons of books and online articles for this stuff. I strongly recommend the Game Programming Wisdom series and AI Game Programming Wisdom series. In particular, Section 6 of the first volume of AI Game Programming Wisdom covers general architecture, Section 7 covers decision-making architectures, and Section 8 covers architectures for specific genres (8.2 does the RTS genre).
It's a huge question, and the other answers have pointed out amazing resources to look into.
I've dealt with this problem in the past and found the simple-behavior-manifests-complexly/emergent behavior approach a bit too unwieldy for human design unless approached genetically/evolutionarily.
I ended up instead using abstracted layers of AI, similar to the way armies work in real life. Units would be grouped with nearby units of the same type into squads, which are grouped with nearby squads to create a mini battalion of sorts. More layers could be used here (group battalions in a region, etc.), but ultimately at the top there is the high-level strategic AI.
Each layer can only issue commands to the layers directly below it. The layer below it will then attempt to execute the command with the resources at hand (ie, the layers below that layer).
Examples of commands issued to a single unit are "go here" and "shoot at this target". Higher-level commands issued to higher levels would be "secure this location", which that level would process and then issue the appropriate commands to the lower levels.
The highest-level master AI is responsible for very broad strategic decisions, such as "we need more ____ units" or "we should aim to move towards this location".
The army analogy works here; commanders and lieutenants and chain of command.
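A bare-bones sketch of those layers (class names and orders are illustrative):

    class Unit:
        def command(self, order):
            print("unit executing:", order)  # "go here", "shoot at this target"

    class Squad:
        def __init__(self, units):
            self.units = units

        def command(self, order):
            # Translate a squad-level order into per-unit orders.
            for unit in self.units:
                unit.command("move to your slot for: " + order)

    class StrategicAI:
        def __init__(self, squads):
            self.squads = squads

        def think(self):
            # Broad decisions only; the details are delegated downwards.
            for squad in self.squads:
                squad.command("secure this location")

Each layer only knows how to decompose an order for the layer directly below it, which keeps every individual piece of the AI small.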

How to implement AI for Puyo Puyo game?

Can someone give me some pointers on how I should implement the artificial intelligence (human vs. computer gameplay) for a Puyo Puyo game? Is this project even worth pursuing?
The point of the game is to form chains of 4 or more beans of the same color that trigger other chains. The longer your chain is, the more points you get. My description isn't that great so here's a simple video of a game in progress: http://www.youtube.com/watch?v=K1JQQbDKTd8&feature=related
Thanks!
You could also design a fairly simple expert system for a low-level difficulty. Something like:
1) Place the pieces anywhere that would result in clearing blocks
2) Otherwise, place the pieces adjacent to same-color pieces.
3) Otherwise, place the pieces anywhere.
This is the basic strategy a person would employ just after becoming familiar with the game. It will be passable as a computer opponent, but it's unlikely to beat anybody who has played more than 20 minutes. You also won't learn much by implementing it.
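A sketch of that rule cascade (the two predicates are hypothetical hooks into whatever board model you build):

    def choose_placement(legal_placements, clears_blocks, touches_same_color):
        # Three-rule expert system; the two predicates come from your
        # board model.
        for p in legal_placements:
            if clears_blocks(p):
                return p  # 1) clear blocks whenever possible
        for p in legal_placements:
            if touches_same_color(p):
                return p  # 2) otherwise build on same-coloured pieces
        return legal_placements[0]  # 3) otherwise place anywhere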
The best strategy is not to clear every single chain as soon as possible, but to assemble the pieces in such a way that when you break something on top, everything collapses and you get a lot of combos. So the problem is in assembling with this combo strategy in mind. It is also probably better to make 4 chains with 5 pieces each than 5 chains with 4 pieces each (but I am not sure; check against the scoring).
You should build a big structure that allows these super combo moves. When building this structure you also have the problem that you do not know which piece you will get (except for the next piece), so there is a little probability involved.
This is a very interesting problem.
You can apply:
Dynamic programming to check the current score.
Monte Carlo for probability needs.
A few heuristics (heuristics always solve a problem faster)
In general I would describe this problem as the optimal positioning of pieces to maximise the probability of a win. There is no single strategy, because building a bigger "heap" brings a greater risk of losing the game.
One measure of how good your heap is could be its "entropy": the number of single/buried pieces left after making a combo.
The first answer that comes to mind is a lookahead search with alpha-beta pruning. Is it worth doing? Perhaps as a learning exercise.

Finding patterns in Puzzle games

I was wondering which are the most commonly used algorithms for finding patterns in puzzle games made up of grids of cells.
I know that it depends on many factors, like the kind of patterns you want to detect or the rules of the game, but I wanted to know which algorithms are most commonly used for that kind of problem.
For example, games like Columns, Bejeweled, even Tetris.
I also want to know whether detecting patterns by "brute force" (like scanning the whole grid trying to find three adjacent cells of the same color) is significantly worse than using particular algorithms on very small grids, like 4x4 for example (and again, I know that depends on the kind of game and the rules...).
Which data structures are commonly used in this kind of game?
It's always domain-dependent. But there are also two situations where you'd do these kinds of searches. One situation is after a move (a change to the game field made by the player), and the other would be if/when the whole board has changed.
In Tetris, you wouldn't need to scan the whole board after a piece is dropped. You'd just have to search the rows the piece is touching.
In a match-3 game like Bejeweled, where you're swapping two adjacent pieces at a time, you'd first run a localized search in each direction around each square that changed, to see if any pieces have triggered. Then, if they have, the game will dump some new, random pieces onto the board. Now, you could run the same localized search around each square that's changed, but that might involve a lot of if statements and might actually be slower than just scanning the whole board from top left to bottom right. It depends on your implementation and would require profiling.
As Adrian says, a simple 2D array suffices. Often, though, you may add a "border" of cells around this array to simplify the searching-for-patterns aspect. Without a border, you'd have to have if statements along the edge squares that say "well, if you're in the top row, don't search up (and walk off the array)". With a border around it, you can safely just search through everything: saving yourself if statements, saving yourself branching, saving yourself pipeline issues, searching faster.
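A sketch of the bordered-array idea for a match-3 style scan (illustrative; EMPTY is a sentinel colour that never matches a real piece):

    EMPTY = 0  # sentinel colour used for the border; matches nothing real

    def make_board(width, height):
        # Playfield with a one-cell sentinel border, so neighbour checks
        # never need explicit bounds tests.
        return [[EMPTY] * (width + 2) for _ in range(height + 2)]

    def find_matches(board):
        # Report centres of 3-in-a-row runs; the border keeps every
        # board[y +/- 1][x +/- 1] access in bounds.
        matches = []
        for y in range(1, len(board) - 1):
            for x in range(1, len(board[0]) - 1):
                c = board[y][x]
                if c == EMPTY:
                    continue
                if board[y][x - 1] == c == board[y][x + 1]:
                    matches.append((x, y, "horizontal"))
                if board[y - 1][x] == c == board[y + 1][x]:
                    matches.append((x, y, "vertical"))
        return matches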
To Jon: these kinds of things really do matter in high-performance settings, even on modern machines, if you're making a search algorithm to play/solve the game. If you are, you want your underlying simulation to run as quickly as possible in order to search as deep as possible in the fewest cycles.
Regarding algorithms: it certainly depends on the game. For example, for Tetris you'd only have to scan each row to see whether it is complete. I can't even think of anything that would beat the brute force approach in this case. But for most casual games brute force should be perfectly fine. Pattern recognition should be negligible in comparison to graphics and sound processing.
Regarding structures: A simple 2D-Array should suffice for representing the board.
Given the average computer speed these days, if it's real-time as the user is playing the game, it probably won't matter (EDIT: for very small game boards only). Certainly, it would depend on the complexity of the game logic, but also how fast the code is going to run on the target machine (i.e., is this a JavaScript web page game, or a Windows app written in C++).
If this is for something like simulating gameplay strategies, then use an algorithm that's more efficient.
A more efficient strategy could involve keeping track of incremental changes to the game board, instead of re-scanning the whole board every time.
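For instance, a minimal "dirty cell" tracker (an illustrative sketch; combine it with a sentinel border as discussed above so the neighbour coordinates stay valid):

    class IncrementalBoard:
        # Track changed cells so only their neighbourhoods get re-scanned,
        # instead of the whole grid after every move.
        def __init__(self, grid):
            self.grid = grid
            self.dirty = set()

        def set_cell(self, x, y, value):
            self.grid[y][x] = value
            self.dirty.add((x, y))

        def cells_to_check(self):
            # Changed cells plus their neighbours; cleared after reading.
            todo = {(x + dx, y + dy)
                    for (x, y) in self.dirty
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
            self.dirty.clear()
            return todo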
