I'm conducting a molecular dynamics simulation for silica. Some time ago I turned to the fluctuating dipole model, and after much effort I'm still having problems implementing it.
In short, all oxygen atoms in the system are polarizable, and their dipole moments depend on their position with respect to all the other atoms in the system. More specifically, I use the TS potential (http://digitallibrary.sissa.it/bitstream/handle/1963/2874/tangney.pdf?sequence=2), where the dipoles are found iteratively at each time step.
This means that when evaluating forces acting on atoms, I have to take into account this potential energy dependency on coordinates.
Before, I was using simple pairwise potential models, so I would set my program to compute forces using analytic formulas obtained by differentiating the potential energy expression.
Now I'm at a loss: how do I implement this new potential? In all the articles I've found, they only give you the formulas, not the algorithm. As I see it, when I compute the forces acting on a certain atom, I have to take into account the change of this atom's dipole, the change of the dipoles of all the neighboring atoms, then the change of the dipoles of still more atoms, and so on, since they all depend on each other. After all, it is because of this interdependence that the dipoles are found iteratively at each time step. Clearly, I can't compute the forces iteratively for each atom, because the computational complexity would be far too high. Should I use some simple functions to account for the change of the dipoles? That doesn't look like a good idea either: the dipoles are calculated iteratively, to high precision, and then, where it actually matters (computing forces), we would use crude approximations.
So how do I implement this model? Also, is it possible to compute the forces analytically, as I did before, or is it necessary to compute them using a finite-difference formula for the derivative?
I haven't found the answer to my question in the literature, but if you know of an article, website, or book that covers this material, please direct me to it.
Thank you for your time!
==================================================================================
UPDATE:
Thank you for your answer. Unfortunately, that was not my question. I didn't ask how to compute the dipoles, but how to compute the forces given that those dipoles vary considerably as the atoms move.
I tried to compute the forces in a straightforward manner (ignoring the dipoles' interdependence through the interatomic distances: just compute the dipoles at each step, then compute the forces as if those dipoles were static), but the results I got were not physically correct.
To analyze the situation, I set up a simulation of a system consisting of just two atoms: Si and O. They have opposite charges, and so they oscillate. The time dependence of the energy looks like this:
The top curve represents the kinetic energy, the middle one the potential energy without the dipole interaction, and the bottom one the potential energy of the system with the dipole interaction taken into account.
You can clearly see from this graph that the system is doing what it shouldn't: climbing up the potential slope. So I decided this is because I didn't take the coordinate dependence of the dipole moments into account. For instance, at a given time step we compute the forces, and they are directed so as to move both atoms toward each other. But when we do move them toward each other (even slightly), the dipole moments change, and we find that we have actually ended up with a higher potential energy than before! During the next time step the situation is the same.
So the question is how to take this effect into account, because the few approaches I can think of are either far too computationally expensive or far too crude.
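For reference, this is the kind of finite-difference check I had in mind (a minimal sketch only; `potential_energy` is a placeholder for a routine that re-converges the dipoles self-consistently for the given configuration and returns the total energy):

```python
import numpy as np

def potential_energy(positions):
    """Placeholder: converge the dipoles iteratively for this configuration
    and return the total potential energy of the system."""
    raise NotImplementedError

def numerical_forces(positions, h=1e-5):
    """Central finite-difference forces F = -dU/dx. Because the dipoles are
    re-converged at every displaced geometry, their dependence on the
    coordinates is included automatically."""
    forces = np.zeros_like(positions)
    for i in range(positions.shape[0]):
        for k in range(3):
            plus, minus = positions.copy(), positions.copy()
            plus[i, k] += h
            minus[i, k] -= h
            forces[i, k] = -(potential_energy(plus) - potential_energy(minus)) / (2.0 * h)
    return forces
```

This is far too expensive for a production run (two full energy evaluations per degree of freedom), but it gives reference forces that are consistent with the energy, so it can at least show whether the drift in the plot comes from missing dipole-derivative terms in the analytic forces.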
Not sure I fully understand your question, but it sounds like you might need to implement a Markov chain type solution?
See this nice post for more info: http://freakonometrics.hypotheses.org/6803
EDIT.
The reason I suggest this is that it sounds like you have a system where the state of each atom depends on its neighbors, and in turn the neighbors' states depend on their neighbors, and so on. Conceptually this could be modeled as a huge matrix that you iteratively update, each value based on its neighbors (???). This is intractable, but the linked article shows how to solve a problem with very large transition matrices using Markov chains instead of computing the actual matrix.
Massively edited this question to make it easier to understand.
Given an environment with arbitrary dimensions and arbitrary positioning of an arbitrary number of obstacles, I have an agent exploring the environment with a limited range of sight (obstacles don't block sight). It can move in the four cardinal directions of NSEW, one cell at a time, and the graph is unweighted (each step has a cost of 1). Linked below is a map representing the agent's (yellow guy) current belief of the environment at the instant of planning. Time does not pass in the simulation while the agent is planning.
http://imagizer.imageshack.us/a/img913/9274/qRsazT.jpg
What exploration algorithm can I use to maximise the utility gained per unit of path cost, given that revisiting cells is allowed? Each cell holds a utility value. Ideally, I would seek to maximise the sum of utility of all cells SEEN (not visited) divided by the path length, although if that is too complex for any suitable algorithm then the number of cells seen will suffice. There is a maximum path length, but it is generally in the hundreds or higher. (The actual test environments used on my agent are at least 4x bigger, although theoretically there is no upper bound on the dimensions that can be set, and the maximum path length would thus increase accordingly.)
I consider BFS and DFS to be intractable, A* to be non-optimal given the lack of suitable heuristics, and Dijkstra's inappropriate for generating a single unbroken path. Is there any algorithm you can think of? Also, I need help with loop detection, as I've never done that before; this is the first time I've allowed revisits.
One approach I have considered is to reduce the map into a spanning tree, except that instead of defining it as a tree that connects all cells, it is defined as a tree that can see all cells. My approach would result in the following:
http://imagizer.imageshack.us/a/img910/3050/HGu40d.jpg
In the resulting tree, the agent can go from a node to any adjacent node that is 0-1 turns away at intersections. This is as far as my thinking has gotten so far. A solution generated using this tree may not be optimal, but it should at least be near-optimal, with far fewer cells being processed by the algorithm, so if that makes the algorithm more likely to be tractable, then I guess that is an acceptable trade-off. I'm still stuck on how exactly to generate a path from this tree, however.
Your problem is very similar to a canonical Reinforcement Learning (RL) problem, the Grid World. I would formalize it as a standard Markov Decision Process (MDP) and use any RL algorithm to solve it.
The formalization would be:
States s: your NxM discrete grid.
Actions a: UP, DOWN, LEFT, RIGHT.
Reward r: the value of the cells that the agent can see from the destination cell s', i.e. r(s,a,s') = sum(value(seen(s'))).
Transition function: P(s' | s, a) = 1 if s' is inside the boundaries and not a black cell, 0 otherwise.
Since you are interested in the average reward, the discount factor is 1 and you have to normalize the cumulative reward by the number of steps. You also said that each step costs one, so you could subtract 1 from the immediate reward r at each time step, but this would not add anything, since you will already be averaging by the number of steps.
Since the problem is discrete the policy could be a simple softmax (or Gibbs) distribution.
As the solving algorithm you can use Q-learning, which guarantees the optimality of the solution provided a sufficient number of samples. However, if your grid is too big (and you said that there is no limit) I would suggest policy search algorithms, like policy gradient or relative entropy policy search (although they guarantee convergence only to local optima). You can find material about Q-learning basically everywhere on the Internet. For a recent survey on policy search I suggest this.
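For illustration, a minimal tabular Q-learning sketch for this formalization; the grid size, the tuple states and the environment step function are assumptions, and `env_step` is a placeholder for your own transition/visibility code:

```python
import numpy as np

N, M = 10, 10                                    # grid size (assumption)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]     # UP, DOWN, LEFT, RIGHT
Q = np.zeros((N, M, len(ACTIONS)))
alpha, gamma, eps = 0.1, 1.0, 0.2                # learning rate, discount (=1), exploration

def env_step(state, a):
    """Placeholder: apply ACTIONS[a] (blocked by walls/boundaries) and return
    (next_state, reward), where reward = sum of utilities of cells seen from next_state."""
    raise NotImplementedError

def choose_action(state):
    if np.random.rand() < eps:                   # epsilon-greedy exploration
        return np.random.randint(len(ACTIONS))
    return int(np.argmax(Q[state]))

def q_update(state, a, reward, next_state):
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state][a] += alpha * (td_target - Q[state][a])
```

To compare policies by average reward you would divide an episode's return by its length when evaluating; the update rule itself stays the same.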
The cool thing about these approaches is that they encode the exploration in the policy (e.g., the temperature in a softmax policy, the variance in a Gaussian distribution) and will try to maximize the cumulative long-term reward as described by your MDP. So usually you initialize your policy with high exploration (e.g., a completely random policy) and by trial and error the algorithm will make it deterministic and converge to the optimal one (however, sometimes a stochastic policy is also optimal).
The main difference between all the RL algorithms is how they perform the update of the policy at each iteration and manage the exploration-exploitation tradeoff (how much should I explore vs. how much should I exploit the information I already have).
As suggested by Demplo, you could also use Genetic Algorithms (GA), but they are usually slower and require more tuning (elitism, crossover, mutation...).
I have also tried some policy search algorithms on your problem and they seem to work well, although I initialized the grid randomly and do not know the exact optimal solution. If you provide some additional details (a test grid, the maximum number of steps, and whether the initial position is fixed or random) I can test them more precisely.
I am writing a maze generation algorithm, and this Wikipedia article caught my eye. I decided to implement it in Java, which was a cinch. The problem I am having is that while a maze-like picture is generated, the maze often is not solvable and is often not interesting. What I mean by uninteresting is that there are a vast number of unreachable places and often there are many solutions.
I implemented the 1234/3 rule (although it is easily changeable; see the comments for an explanation) with a roughly 50/50 distribution at the start. The mazes always reach an equilibrium where there is no change between time steps.
My question is, is there a way to guarantee the maze's solvability between a fixed start point and end point? Also, is there a way to make the maze more interesting to solve (fewer/one solution and few/no unreachable places)? If this is not possible with cellular automata, please tell me. Thank you.
I don't think it's possible to ensure a solvable, interesting maze through simple cellular automata, unless there are some specific criteria that can be placed on the starting state. Cells have no knowledge of the overall shape, because each cell can't coordinate with the group as a whole.
If you're insistent on using them, you could do some combination of modification and pathfinding after generation is finished, but other methods (like the ones shown in the Wikipedia article or this question) are simpler to implement and won't result in walls that take up a whole cell (unless you want that).
The root of the problem is that "maze quality" is a global measure, but your automaton cells are restricted to a very local knowledge of the system.
To resolve this, you have three options:
Add the global information from outside. Generate mazes using the automaton and random initial data, then measure the maze quality (e.g. using flood fill or a bunch of other maze-solving techniques) and repeat until you get a result you like.
Use a much more complex set of explicit rules and state. You can work out a set of rules / cell values that encode both the presence of walls and the lengths / quality of paths. For example, -1 would be a wall and a positive value would be the sum of all neighbours above and to the left. Then positive values encode the path distance from the top left, roughly. That's not enough, but it shows the general idea... you need to encode an algorithm about the maze "directly" in the rules of the system.
Use a less complex, but still Turing-complete, set of rules, and encode the rules for maze generation in the initial state. For example, you could use Conway's Life and construct an initial state that is a "program" that implements maze generation via gliders, etc.
If it helps any, you could draw a parallel between the above and:
ghost in the machine / external user
FPGA
programming a general purpose CPU
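For the first option, the "measure the maze quality" step can be as simple as a flood fill from the start; a minimal sketch, assuming the maze is stored as a 2D boolean grid with True meaning wall:

```python
from collections import deque

def is_solvable(walls, start, end):
    """BFS flood fill over open cells; returns True if end is reachable from start."""
    rows, cols = len(walls), len(walls[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == end:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not walls[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False
```

Regenerate with a new random seed until this returns True; counting the visited cells in the same pass also gives a cheap measure of how much of the maze is unreachable.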
Run a path finding algorithm over it. Dijkstra would give you a sure way to compute all solutions. A* would give you one good solution.
The difficulty of a maze can be measured by the speed at which these algorithms solve it.
You can add some dead ends in order to eliminate some of the solutions.
This question refers to the Google-sponsored AI Challenge, a contest that happens every few months and in which the contenders need to submit a bot able to autonomously play a game against other robotic players. The competition that just closed was called "ants" and you can read all its specification here, if you are interested.
My question is specific to one aspect of ants: combat strategy.
The problem
Given a grid of discrete coordinates [like a chessboard] and given that each player has a number of ants that at each turn can either:
stay still
move east / north / west / south,
...an ant will be killed by an enemy ant if some enemy ant in range is surrounded by fewer of its own enemies than the ant is (or the same number) [equivalent to: "An ant will kill an enemy ant in range if that enemy is surrounded by more (or the same number of) enemies than the killing ant is"]
A visual example:
In this case the yellow ants are going to move west, and the orange ant, not being able to move away [the blue tiles are blocking], will have two yellow ants "in range" and will die (if the explanation is still not clear, I invite you to visit the link above to see more examples and explained scenarios).
The question
My question is substantially about complexity. I have thought about this problem extensively, but I still couldn't come up with an acceptable way to calculate the optimal set of moves in a reasonable time. It seems to me that to find the best possible set of moves for my ants, I would have to evaluate the outcome of every possible scenario, but since battles can be pretty crowded with ants, this means the computation time grows exponentially (5^n, with n being the number of ants involved).
Another limitation of this approach is that the solution being worked on doesn't improve proportionally to the time spent computing, so arbitrarily interrupting its execution might leave you with an unacceptable solution.
I suspect that a good solution might be found via some geometrical considerations in combination with linear algebra (maybe calculating some "centres of gravity" for groups of ants?), but I could not get past the level of "intuition" on this...
So, my question really boils down to:
How should this problem be approached to find [nearly] optimal solutions in a computation time of ~50-100 ms on a modern machine (a figure derived from the official contest rules)?
If you are interested in the problem and need some inspiration, I highly recommend watching some of the available game replays.
I think your problem can be solved by turning the problem around.
Instead of calculating the best moves per ant, you could calculate the best move candidates per discrete position on your playing board:
+1 for a safe place
+2 for a place that results in an enemy dying
-1 for a position of certain death
That would scale linearly, but has the trade-off of not providing the best individual movement.
Maybe worth a try :)
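A minimal sketch of that per-position scoring with the values above; `passable_cells`, `is_safe`, `kills_enemy` and `is_certain_death` are placeholders for whatever board representation and combat evaluation you already have:

```python
def score_map(board, my_ants, enemy_ants):
    """Score every cell once instead of searching over joint per-ant move combinations."""
    scores = {}
    for cell in board.passable_cells():              # placeholder board API
        s = 0
        if is_safe(cell, enemy_ants):                # placeholder combat predicates
            s += 1
        if kills_enemy(cell, my_ants, enemy_ants):
            s += 2
        if is_certain_death(cell, enemy_ants):
            s -= 1
        scores[cell] = s
    return scores

# each ant then greedily moves to the adjacent cell with the highest score
```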
Tricky indeed. You may find some hints in Bee algorithms. These are a family of algorithms that use swarm cooperation within a 'reasonable computation time'. Bee algorithms can, for instance, be used to roughly (!) solve the traveling salesman problem. I expect that these algorithms can give you the best solution achievable within the available computing time.
Of course, the problem can be simplified using geometry: relative positions of ants in a neighbourhood are more important than absolute positions. Also, light_303's solution is complementary to the search pattern I propose.
EDIT FROM THE OP:
I'm selecting this answer as accepted because the winner of the contest published a post-mortem analysis of his code, and indeed he followed the approach suggested by the author of this answer. You can read the winner's blog entry here.
For these kinds of problems, the MinMax algorithm with alpha-beta pruning is usually used. (*) [A simple explanation of MinMax and alpha-beta pruning is at the end, but for more details the Wikipedia pages should also be read.]
In order to overcome the problem you mentioned about the extremely large number of possible moves, a common improvement is to run the MinMax algorithm iteratively (iterative deepening). First you explore all nodes up to depth 1 and find the best solution. If you still have time, you explore all nodes up to depth 2 and choose a new, more informed best solution, and so on...
When out of time, return the best solution you found at the last depth you fully explored.
To further improve your solution, you might want to reorder the nodes you expand: for iteration i, sort the nodes from iteration (i-1) by their heuristic value and explore each possibility in that order. The idea behind this is that you are more likely to prune more vertices if you investigate the "best" solutions first.
The remaining problem is finding a good heuristic function that evaluates "how good a state is".
(*) The MinMax algorithm is simple: you explore the game tree and decide what you will do in each state, and what your opponent is most likely to do in response to each action. This is done down to depth k, where k is given to the algorithm.
Alpha-beta pruning is an addition to MinMax which tells you which nodes need not be explored any more, because you are not going to choose them anyway, having already found a better solution.
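A minimal sketch of iterative deepening on top of alpha-beta; the game-state API (moves, apply, is_terminal), the heuristic and the time check are placeholders for the contest-specific parts:

```python
def alphabeta(state, depth, alpha, beta, maximizing):
    if depth == 0 or state.is_terminal():            # placeholder game API
        return heuristic(state)                      # "how good is this state"
    if maximizing:
        value = float('-inf')
        for move in state.moves():
            value = max(value, alphabeta(state.apply(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                        # beta cut-off
                break
        return value
    value = float('inf')
    for move in state.moves():
        value = min(value, alphabeta(state.apply(move), depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:                            # alpha cut-off
            break
    return value

def iterative_deepening(state, time_left):
    best, depth = None, 1
    while time_left():                               # check the turn budget between depths;
        best = max(state.moves(),                    # a real bot would also abort mid-search
                   key=lambda m: alphabeta(state.apply(m), depth - 1,
                                           float('-inf'), float('inf'), False))
        depth += 1
    return best
```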
"My question is substantially about complexity. I have thought about this problem extensively, but I still couldn't come up with an acceptable way to calculate the optimal set of moves in a reasonable time."
Exactly!
It's an AI competition. AI deals with problems which are too complex to be solved with optimal algorithms.
So you have to try "stuff", like your idea about centers of gravity. Even better would be some genetic algorithms where better strategies are found through natural selection (but it's hard to set up some evolving "framework" for that).
BTW, you can look at the blog of the current leader; his strategy is surprisingly simple.
I'm looking for an algorithm that can be used to determine groups of units that move together as a squad in a real-time strategy game like StarCraft. The direction I am currently looking at is a clustering algorithm, but I'm having a hard time finding which one would work best, since units are moving as a group, not just standing still. Any help would be great.
K-means is not the best choice, as it requires you to specify the number of clusters you expect to find; some clusters might then contain only a single object.
I recommend adapting DBSCAN. In particular, the generalized version GDBSCAN.
For this, you need to define what constitutes the neighborhood of a unit - say, any other unit within a range of 2 that is belonging to the same player and moving approximately in the same direction (up to a certain delta threshold in x and y velocity).
Next, you need to specify when units are considered to start forming an initial cluster, i.e. when a unit is a "core point". Say that requires a minimum of 3 units.
Then using DBSCAN is quite straightforward, and it should give you good results; you only need to fine-tune the parameters a bit. Things like the minimum size are clearly input parameters and depend on your use case. So is the neighborhood definition: you are looking for groups that move in the same direction, and this information needs to be put into the algorithm somehow. With GDBSCAN this is trivial, by adjusting the neighborhood definition.
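A minimal sketch of the neighborhood predicate and core-point test described above; the thresholds and the unit fields (player, x, y, vx, vy) are assumptions to be replaced by your game's data:

```python
def neighbors(unit, units, max_dist=2.0, max_dv=0.5):
    """GDBSCAN-style neighborhood: same player, within range, moving in a similar direction."""
    result = []
    for other in units:
        if other is unit or other.player != unit.player:
            continue
        if (other.x - unit.x) ** 2 + (other.y - unit.y) ** 2 > max_dist ** 2:
            continue
        if abs(other.vx - unit.vx) > max_dv or abs(other.vy - unit.vy) > max_dv:
            continue
        result.append(other)
    return result

def is_core_point(unit, units, min_pts=3):
    """A unit can start a cluster if enough units share its neighborhood."""
    return len(neighbors(unit, units)) >= min_pts
```

The rest of DBSCAN (expanding clusters from core points and marking the leftovers as noise) stays unchanged; only this neighborhood definition carries the "moving together" information.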
You may want to look at a number of classification algorithms, like k-Nearest Neighbor or Support Vector Machines
The k-means algorithm is a simple and standard approach. You can check if it works:
I am looking for a general algorithm to help in situations with constraints similar to this example:
I am thinking of a system where images are constructed based on a set of operations. Each operation has a set of parameters. The total "gene" of the image is then the sequential application of the operations with the corresponding parameters. The finished image is then given a vote by one or more real humans according to how "beautiful" it is.
The question is what kind of algorithm would be able to do better than simple random search if you want to find the most beautiful image (and hopefully improve the confidence over time as votes tick in and refine the fitness function)?
Given that the operations will probably be correlated, it should be possible to do better than random search. So for example operation A with parameters a1 and a2 followed by B with parameters b1 could generally be vastly superior to B followed by A. The order of operations will matter.
I have tried googling for research papers on random walks and Markov chains, as those are my best guesses about where to look, but so far I have found no scenarios similar enough. I would really appreciate even just a hint of where to look for such an algorithm.
I think what you are looking for falls into a broad research area called metaheuristics (which includes many non-linear optimization algorithms such as genetic algorithms, simulated annealing, or tabu search).
Then, if your raw fitness function just gives a statistical value that somehow approximates a real (but unknown) fitness function, you can probably still use most metaheuristics by (somehow) smoothing your fitness function (averaging results would do that).
Do you mean the Metropolis algorithm?
This approach uses a random walk, weighted by the fitness function. It is useful for locating local extrema in complicated fitness landscapes, but is generally slower than deterministic approaches where those will work.
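For reference, a minimal Metropolis-style sketch over the operation sequences; `propose` and `fitness` are placeholders (e.g. a small random change to the sequence, and an average of the human votes), and the temperature controls how readily worse candidates are accepted:

```python
import math
import random

def metropolis_search(initial, steps, temperature=1.0):
    """Random walk over operation sequences, biased toward higher fitness."""
    current, current_f = initial, fitness(initial)        # placeholder fitness
    for _ in range(steps):
        candidate = propose(current)                       # placeholder proposal move
        candidate_f = fitness(candidate)
        # always accept improvements; accept worse candidates with a probability
        # that shrinks the worse they are
        if candidate_f >= current_f or \
                random.random() < math.exp((candidate_f - current_f) / temperature):
            current, current_f = candidate, candidate_f
    return current, current_f
```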
You're pretty much describing a genetic algorithm in which the sequence of operations represents the "gene" ("chromosome" would be a better term for this: the parameter[s] passed to each operation represent a single "gene", and multiple genes make up a chromosome), the image produced represents the phenotypic expression of the genome, and the votes from the real humans represent the fitness function.
If I understand your question, you're looking for an alternative algorithm of some sort that will evaluate the operations and produce a "beauty" score similar to what the real humans produce. Good luck with that - I don't think there really is any such thing, and I'm not surprised that you didn't find anything. Human brains, and correspondingly human evaluations of aesthetics, are much too staggeringly complex to be reducible to a simplistic algorithm.
Interestingly, your question seems to encapsulate the bias against using real human responses as the fitness function in genetic-algorithm-based software. This is a subject of relevance to me, since my namesake software is specifically designed to use human responses (or "votes") to evaluate music produced via a genetic process.
Simple Markov Chain
Markov chains, which you mention, aren't a bad way to go. A Markov chain is just a state machine, represented as a graph with edge weights which are transition probabilities. In your case, each of your operations is a node in the graph, and the edges between the nodes represent allowable sequences of operations. Since order matters, your edges are directed. You then need three components:
A generator function to construct the graph of allowed transitions (which operations are allowed to follow one another). If any operation is allowed to follow any other, then this is easy to write: all nodes are connected, and your graph is said to be complete. You can initially set all the edge weights to 1.
A function to traverse the graph, crossing N nodes, where N is your 'gene-length'. At each node, your choice is made randomly, but proportionally weighted by the values of the edges (so better edges have a higher chance of being selected).
A weighting update function which can be used to adjust the weightings of the edges when you get feedback about an image. For example, a simple update function might be to give each edge involved in a 'pleasing' image a positive vote each time that image is nominated by a human. The weighting of each edge is then normalised, with the currently highest voted edge set to 1, and all the others correspondingly reduced.
This graph is then a simple learning network which will be refined by subsequent voting. Over time as votes accumulate, successive traversals will tend to favour the more highly rated sequences of operations, but will still occasionally explore other possibilities.
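A minimal sketch of those three components, assuming any operation may follow any other and applying the normalisation described above per node; all names here are placeholders:

```python
import random

def build_graph(operations):
    """Complete directed graph: every operation may follow every other, all weights 1."""
    return {a: {b: 1.0 for b in operations if b != a} for a in operations}

def traverse(graph, start, length):
    """Random walk of `length` nodes, choices weighted by the edge values."""
    sequence, node = [start], start
    for _ in range(length - 1):
        nodes, weights = zip(*graph[node].items())
        node = random.choices(nodes, weights=weights)[0]
        sequence.append(node)
    return sequence

def reward(graph, sequence, vote=1.0):
    """Give every edge used in a 'pleasing' image a vote, then renormalise so the
    highest-weighted edge out of each node is 1 and the others are reduced accordingly."""
    for a, b in zip(sequence, sequence[1:]):
        graph[a][b] += vote
    for a in graph:
        top = max(graph[a].values())
        for b in graph[a]:
            graph[a][b] /= top
```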
Advantages
The main advantage of this approach is that it's easy to understand and code, and makes very few assumptions about the problem space. This is good news if you don't know much about the search space (e.g. which sequences of operations are likely to be favourable).
It's also easy to analyse and debug - you can inspect the weightings at any time and very easily calculate things like the top 10 best sequences known so far, etc. This is a big advantage - other approaches are typically much harder to investigate ("why did it do that?") because of their increased abstraction. Although very efficient, you can easily melt your brain trying to follow and debug the convergence steps of a simplex crawler!
Even if you implement a more sophisticated production algorithm, having a simple baseline algorithm is crucial for sanity checking and efficiency comparisons. It's also easy to tinker with, by messing with the update function. For example, an even more baseline approach is pure random walk, which is just a null weighting function (no weighting updates) - whatever algorithm you produce should perform significantly better than this if its existence is to be justified.
This idea of baselining is very important if you want to evaluate the quality of your algorithm's output empirically. In climate modelling, for example, a simple test is "does my fancy simulation do any better at predicting the weather than one where I simply predict today's weather will be the same as yesterday's?" Since weather is often correlated on a timescale of several days, this baseline can give surprisingly good predictions!
Limitations
One disadvantage of the approach is that it is slow to converge. A more aggressive choice of update function will push promising results faster (for example, weighting new results according to a power law rather than the simple linear normalisation), at the cost of giving alternatives less credence.
This is equivalent to fiddling with the mutation rate and gene pool size in a genetic algorithm, or the cooling rate of a simulated annealing approach. The tradeoff between 'climbing hills or exploring the landscape' is an inescapable "twiddly knob" (free parameter) which all search algorithms must deal with, either directly or indirectly. You are trying to find the highest point in some fitness search space. Your algorithm is trying to do that in less tries than random inspection, by looking at the shape of the space and trying to infer something about it. If you think you're going up a hill, you can take a guess and jump further. But if it turns out to be a small hill in a bumpy landscape, then you've just missed the peak entirely.
Also note that since your fitness function is based on human responses, you are limited to a relatively small number of iterations regardless of your choice of algorithmic approach. For example, you would see the same issue with a genetic algorithm approach (fitness function limits the number of individuals and generations) or a neural network (limited training set).
A final potential limitation is that if your "gene-lengths" are long, there are many nodes, and many transitions are allowed, then the size of the graph will become prohibitive, and the algorithm impractical.