I'm after some ideas for demonstrating the usefulness of Floyd-Warshall visually. So far all I can think of is generating a random graph, allowing the user to select a start/finish and highlight the shortest path. What are some more fun yet simple demonstrations of the usefulness of path-finding?
Since you will want to show all-pairs shortest paths (Floyd-Warshall) rather than single-pair shortest paths (Dijkstra), a table of minimum distances between all pairs of large cities in a country might be nice. This is not a graphical visualisation, but still a useful one. There used to be such a table in a book of roadmaps that I used, before the days of electronic route planning.
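For instance, here is a minimal Floyd-Warshall sketch in Python that produces exactly such a table; the four cities and road lengths are made up for illustration:

    # All-pairs shortest distances between a few hypothetical cities.
    INF = float('inf')
    cities = ['A', 'B', 'C', 'D']
    # dist[i][j] = direct road length, INF where there is no direct road
    dist = [
        [0,   5,   INF, 10],
        [5,   0,   3,   INF],
        [INF, 3,   0,   1],
        [10,  INF, 1,   0],
    ]
    n = len(cities)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # Print the distance table, like the one in the road atlas.
    for name, row in zip(cities, dist):
        print(name, row)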
Animate a sprite that finds its way through a field of obstacles.
I've used Floyd-Warshall to compute the signal path of cardiac activation, as described in this paper in the paragraph 'shortest path of activation'. It proved very practical, fast, and simple. Fig. 5 gives you a nice visualisation of the resulting time-dependent potentials. In addition, the image below visualises the minimum path lengths computed starting at the sinus node. Blue == short, red == long.
I have a game system that can be represented as an undirected, unweighted graph where each vertex has one (relevant) property: a color. The goal of the game in terms of the graph representation is to reduce it down to one vertex in the fewest "steps" possible. In each step, the player can change the color of any one vertex, and all adjacent vertices of the same color are merged with it. (Note that in the example below I just happened to show the user only changing one specific vertex the whole game, but the user can pick any vertex in each step.)
What I am after is a way to compute the fewest number of steps necessary to "beat" a given graph per the procedure described above, and also the specific moves needed to do so. I'm familiar with the basics of path-finding, BFS, and things of that nature, but I'm having a hard time framing this problem in terms of a "fastest path" solution.
I am unable to find this same problem anywhere on Google, or even a graph-theory term that encapsulates the problem. Does anyone have an idea of at least how to get started approaching this problem? Can anyone point me in the right direction?
EDIT Since this problem seems to be really difficult to solve efficiently, perhaps I could change the aim of my question. Could someone describe how I would even set up a brute-force, breadth-first search for this? (Brute force could possibly be okay, since in practice these graphs will only be 20 vertices at most.) I know how to write a BFS for a normal linked graph data structure... but in this case it seems quite weird, since each vertex would have to contain a whole graph within itself, and the next vertices in the search graph would have to be generated based on the possible moves to make in the graph within the vertex. How would one set up the data structure and search algorithm to accomplish this?
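For concreteness, here is a minimal sketch (Python) of such a brute-force BFS, under two assumptions worth flagging: the input graph is connected, and a move may only use a color already present in the puzzle. The key observation is that a search state never needs to "contain a whole graph": for a connected graph, a state is fully described by a coloring of the original vertices (merged vertices are exactly the connected same-color components), and the game is won once the coloring is uniform.

    from collections import deque

    def component(adj, colors, v):
        # Connected same-color component containing v.
        seen, stack = {v}, [v]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen and colors[w] == colors[u]:
                    seen.add(w)
                    stack.append(w)
        return seen

    def solve(adj, colors):
        # adj: list of neighbour lists; colors: tuple of ints per vertex.
        # Returns a minimal list of moves (vertex, new_color), or None.
        palette = set(colors)
        start = tuple(colors)
        prev = {start: None}               # state -> (parent state, move)
        queue = deque([start])
        while queue:
            state = queue.popleft()
            if len(set(state)) == 1:       # uniform coloring == one vertex
                moves = []
                while prev[state] is not None:
                    state, move = prev[state]
                    moves.append(move)
                return moves[::-1]
            for v in range(len(adj)):
                comp = component(adj, state, v)
                for c in palette - {state[v]}:
                    nxt = tuple(c if u in comp else state[u]
                                for u in range(len(adj)))
                    if nxt not in prev:
                        prev[nxt] = (state, (v, c))
                        queue.append(nxt)
        return None

Because BFS explores states in order of move count, the first uniform coloring it reaches uses the fewest possible steps, and the prev map is what lets you recover the actual moves.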
EDIT 2 This is an old question, but I figured it might help to just state outright what the game was. The game was essentially to be a rip-off of Kami 2 for iOS, except my custom puzzle editor would automatically figure out the quickest possible way to solve your puzzle, instead of having to find the shortest move number by trial and error yourself. I'm not sure if Kami was a completely original game concept, or if there is a whole class of games like it with the same "flood-fill" mechanic that I'm unaware of. If this is a common type of game, perhaps knowing the name of it could allow finding more literature on the algorithm I'm seeking.
EDIT 3 This Stack Overflow question seems like it may have some relevant insights.
Intuitively, the solution seems global. If you take a larger graph, for example, which dot you select first will have an impact on its direct neighbours, which will have an impact on their neighbours, and so on.
It sounds like the same breed of problem as the map colouring problem; not because of the colours, but because of the implications a local selection has for the other end of the graph down the road. In map colouring, you have to decide what colour to give a country and its neighbouring countries so that no two touching countries have the same colour. That first set of selections has an impact on whether there is a solution in the subsequent iterations.
Just to show how complex the problem is:
Let's consider a simpler problem where the graph is replaced with a tree and only the root vertex can change colour. In that case the path to a leaf can be represented as the sequence of colours of the vertices on that path, and a sequence A of colour changes collapses a leaf if the leaf's sequence is a subsequence of A.
The problem can then be stated as: for a given set of sequences, find a minimal-length sequence S such that each initial sequence is a subsequence of S. That is the shortest common supersequence problem, and it is NP-complete.
Your problem is surely more complex than this one :-/
Edit
This is a comment on the question's edit. Check this page for the terms.
The minimal possible number of moves is >= the graph radius. Given that, it seems a good strategy to:
use central vertices for moves,
use moves that reduce the graph radius, or at least reduce the distance from central vertices to a 'large' set of vertices.
I would go with a strategy that keeps track of the central vertices and of the distances from all graph vertices to these central vertices. Each step is to check all meaningful moves and choose the one that reduces the radius, or the distances to the central vertices, the most. BFS can be used to calculate the distances and how a move influences them. There are tricky parts, like when the central vertices change after a move. Maybe it is a good idea to use not only the central vertices but also vertices close to them.
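A sketch (Python) of the bookkeeping this strategy needs: BFS from every vertex gives the eccentricities, and from those the radius and the set of central vertices. In the game you would recompute this on the merged graph after every candidate move.

    from collections import deque

    def bfs_distances(adj, source):
        # Unweighted shortest-path distances from source;
        # adj is a list of neighbour lists.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return dist

    def center_and_radius(adj):
        # Eccentricity of v = greatest distance from v to any vertex;
        # radius = smallest eccentricity; center = vertices attaining it.
        ecc = {v: max(bfs_distances(adj, v).values())
               for v in range(len(adj))}
        radius = min(ecc.values())
        center = [v for v, e in ecc.items() if e == radius]
        return center, radius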
I think the graph term you are looking for is the degree (sometimes called the valence) of a vertex, which is the number of edges incident to it. It looks like you want to change the color of the vertex with the highest degree, then in the resulting graph change the color of the vertex with the highest degree, and so on, until you have just one vertex left.
I have a floorplan with lots of d3.js polygon objects on it that represent booths. I am wondering what the best approach is to finding a path between two objects that doesn't overlap the other objects. The use case is that we have booths and want to show the user the most efficient way to walk from point A to point B. We can assume the path must contain only 90- or 45-degree turns.
We took a shot at using Dijkstra, but the scale of it seems to be getting away from us.
The example snapshot of our system:
Our constraint is that this needs to run in the browser. It would be nice if it worked well with d3.js.
Since the layout is a matrix (or nested matrices), this is not a Dijkstra problem; it is simpler than that. The technical name for the problem is "Manhattan routing". Rather than give a code algorithm, I will show you an example of the optimum route (the blue line) in the following diagram. From this it should be obvious what the algorithm is:
Note that there is a subtle nuance here: you always want to maximize the number of jogs, because even though the overall shape is a matrix, at each corner the person will actually walk diagonally (think of a person cutting diagonally across a four-way intersection). Therefore, simply going north and then west is wrong, because you would only get to cut one corner, whereas on the route shown you get to cut five corners.
This problem is known as finding the shortest path between two points among polygonal obstacles, and it has been studied extensively in the literature. See here for one example. All algorithms for it work by converting the problem to a graph problem and then running Dijkstra. To do this:
Each vertex of each polygon is a vertex in your graph.
The start and end points are also vertices in the graph.
There is an edge between two vertices if they are visible to each other; to determine visibility we can use triangulation algorithms.
The weight of each edge is the Euclidean distance between its two endpoints.
Now we are ready to run any shortest-path algorithm. The hard part is the visibility computation; I think the Triangle library fits your requirements. An easier way is to search the web for the keywords in the first line to find an implementation. I didn't link to any implementation because I think it is better to describe the approach algorithmically, so it stays useful to future readers.
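That said, for readers who want a concrete starting point, here is a deliberately simplified sketch (Python): it builds the visibility graph with a brute-force proper-intersection test (no triangulation) and runs Dijkstra over it. It ignores collinear/degenerate cases and does not reject segments that travel through a polygon's interior, so treat it as illustrative rather than robust.

    import heapq, math

    def crosses(p1, p2, q1, q2):
        # True if segments p1p2 and q1q2 properly intersect.
        def orient(a, b, c):
            return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
        d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
        d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
        return d1*d2 < 0 and d3*d4 < 0

    def visible(a, b, obstacle_edges):
        return not any(crosses(a, b, q1, q2) for q1, q2 in obstacle_edges)

    def shortest_path_length(start, goal, polygons):
        # Graph vertices: start, goal, and every polygon corner.
        edges = [(poly[i], poly[(i+1) % len(poly)])
                 for poly in polygons for i in range(len(poly))]
        nodes = [start, goal] + [v for poly in polygons for v in poly]
        # Dijkstra over the visibility graph; weights are Euclidean.
        dist = {start: 0.0}
        heap = [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == goal:
                return d
            if d > dist.get(u, math.inf):
                continue                      # stale heap entry
            for v in nodes:
                if v != u and visible(u, v, edges):
                    nd = d + math.dist(u, v)
                    if nd < dist.get(v, math.inf):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
        return math.inf

    square = [(2, 1), (4, 1), (4, 5), (2, 5)]
    print(shortest_path_length((0, 3), (6, 3), [square]))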
I'm facing a hard problem:
Imagine I have a map of an entire country, represented by a huge matrix of cells. Each cell represents one square meter of territory and holds a double value between 0 and 1 that represents the cost of traversing it.
The map obviously does not fit in memory.
I am trying to wrap my mind around a way to calculate the optimal path for a robot, from a start point to an end position. The first idea I had was a TCP-like moving window, holding a minimap of the real map around the moving robot and running the A* algorithm inside it, but I'm facing problems with maps with huge walls, bad pathfinding, and so on...
I am searching the literature on A*-like algorithms and have not been able to work out what a good solution to this problem would look like.
I'm wondering if someone has faced a similar problem or can help with an idea for a possible solution!
Thanks in advance :)
Since I do not know your exact data, here is some information that could be useful:
A subpath of a shortest path is itself a shortest path. This means you might split your matrix into submatrices and find (all) shortest paths within them. Note that you do not have to store all the results: you can save memory by not storing a complete path but just the information that a path goes from A to B; the intermediate nodes can be recomputed later or stored in a file. You might even be able to precompute some shortest paths for certain areas.
Another approach is that you might be able to compress your matrix in some way: if you have large areas consisting of one and the same number, it might be good to store just that number and the dimensions of the area.
Another approach (in connection with precomputing some shortest paths) is to generate different levels of detail of your map. For a map of the USA, this might look as follows: the coarsest level of detail contains just the cities New York, Los Angeles, Chicago, Dallas, Philadelphia, Houston and Phoenix. The finer the levels get, the more cities they contain, but the smaller the area of the whole map they show.
Does your problem have any special structure, e.g., does the triangle inequality hold/can you guarantee that the shortest path doesn't jog back and forth? Do you want to perform the query many times? (If so you can do pre-processing that will amortize over multiple queries.) Do you need the exact minimum solution, or will something within an epsilon factor be OK?
One thought was that you can coarsen the matrix: form 100 meter by 100 meter squares, and determine the shortest-path distances through each 100×100 square. Now this will fit in memory (about 1 gigabyte); you can run Dijkstra on the coarse grid and then expand each step through the corresponding 100×100 square.
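A sketch (Python/NumPy) of that coarsening step, using the 100×100 block size suggested above and plain Dijkstra on the coarse grid; the full-resolution refinement inside the blocks along the coarse path is left out, and the random array just stands in for one tile of the real map:

    import heapq
    import numpy as np

    def coarsen(cost, block=100):
        # Average the per-cell traversal costs over block x block tiles.
        h, w = cost.shape
        trimmed = cost[:h // block * block, :w // block * block]
        return trimmed.reshape(h // block, block,
                               w // block, block).mean(axis=(1, 3))

    def dijkstra(grid, start, goal):
        # 4-connected Dijkstra; entering cell (r, c) costs grid[r, c].
        h, w = grid.shape
        dist = {start: 0.0}
        heap = [(0.0, start)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == goal:
                return d
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w:
                    nd = d + grid[nr, nc]
                    if nd < dist.get((nr, nc), float('inf')):
                        dist[(nr, nc)] = nd
                        heapq.heappush(heap, (nd, (nr, nc)))
        return float('inf')

    rng = np.random.default_rng(0)
    full = rng.random((1000, 1000))      # stand-in for one map tile
    coarse = coarsen(full)
    print(dijkstra(coarse, (0, 0), (9, 9)))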
Also, have you tried running a forward-backward version of Dijkstra's algorithm? I.e., expand from the source and search for the sink at the same time, and stop when there's an intersection.
Incidentally, why do you need such a fine level of granularity?
Here are some ideas that may work:
You can model your path as a piecewise linear curve. If you have 31 line segments, then with the two endpoints fixed your curve is fully described by 60 numbers (the coordinates of the 30 intermediate points). Each possible curve has a cost, so the cost is a function of the following form:
cost(x1, x2, x3, ..., x60)
Now your problem is to find the global optimum of a function of 60 variables. You can use standard methods to do this. One idea is to use genetic algorithms. Another idea is to use a Monte Carlo method such as parallel tempering:
http://en.wikipedia.org/wiki/Parallel_tempering
Whenever you have a promising path, you can use it as a starting point to find a local minimum of the cost function. Maybe you can use some interpolation to make your cost function differentiable. Then you can use Newton's method (or rather BFGS) to find local minima of the cost function:
http://en.wikipedia.org/wiki/Local_minimum
http://en.wikipedia.org/wiki/BFGS
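To make that polishing step concrete, a hedged sketch (Python/SciPy): the path is the 60 numbers described above (30 free 2-D points between fixed endpoints), the terrain function is a made-up smooth stand-in for an interpolated cost map, and BFGS refines a straight-line initial guess.

    import numpy as np
    from scipy.optimize import minimize

    start, goal = np.array([0.0, 0.0]), np.array([100.0, 100.0])

    def terrain(p):
        # Hypothetical smooth cost field; replace with an interpolated
        # version of the real per-cell costs.
        return 1.0 + 0.5 * np.sin(p[..., 0] / 10) * np.cos(p[..., 1] / 10)

    def cost(x):
        # Length of each segment, weighted by terrain cost at its midpoint.
        pts = np.vstack([start, x.reshape(-1, 2), goal])
        lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        mids = (pts[:-1] + pts[1:]) / 2
        return float(np.sum(lengths * terrain(mids)))

    x0 = np.linspace(start, goal, 32)[1:-1].ravel()   # straight-line guess
    result = minimize(cost, x0, method='BFGS')
    print(result.fun)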
Your problem is somewhat similar to the problem of finding reaction paths in chemical systems. Maybe you can find some inspiration in the book "Energy Landscapes" by David Wales.
But I also have some questions:
Is it necessary for you to find the optimal path, or are you just looking for a path that is OK?
How much computer power and time do you have at hand?
Can the robot make sharp turns, or do you need extra physics modelling to improve the cost function?
I'm having trouble finding the right pathfinding algorithm for some AI I'm working on.
I have players on a pitch, moving around freely (not stuck to a grid), but they are confined to moving in 8 directions (N, NE, E, etc.).
I was working on using A* and a graph for this. But I realised that every node on the graph is equally far from its neighbours, and all the edges have the same weight, since the pitch is rectangular. And the number of nodes is enormous (it being a large pitch, with players able to move from one pixel to the next).
I figured there must be another algorithm, optimised for this sort of thing?
I would break the pitch down into a grid of 10×10-pixel cells. Your routing does not have to be as finely grained as the rest of your system, and it makes the algorithm take up far less memory.
As Chris suggests above, choosing the right heuristic is the key to getting the algorithm working right for you.
If players move in straight lines between points on your grid, you really don't need to use A*. Bresenham's line algorithm will provide a straight-line path very quickly.
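For reference, a standard Bresenham sketch (Python, all-octant integer version):

    def bresenham(x0, y0, x1, y1):
        # Grid cells on a straight line from (x0, y0) to (x1, y1).
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx + dy
        points = []
        while True:
            points.append((x0, y0))
            if (x0, y0) == (x1, y1):
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy
        return points

    print(bresenham(0, 0, 6, 4))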
You could weight a direction based on another heuristic. So, as opposed to weighting the paths based on actual distance, you could weight or scale them based on another factor, such as "closeness to another player", meaning players will favour routes that will not collide with other players.
The A* algorithm should work well like this.
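A sketch (Python) of the pieces you would plug into A* for 8-direction movement: the octile-distance heuristic, plus a hypothetical extra cost term implementing the "closeness to another player" idea above.

    import math

    SQRT2 = math.sqrt(2)

    def octile(a, b):
        # Admissible heuristic for 8-direction movement on a uniform grid.
        dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
        return max(dx, dy) + (SQRT2 - 1) * min(dx, dy)

    def step_cost(base, cell, players, crowd_weight=2.0):
        # base is 1 for straight moves and SQRT2 for diagonals; the cost
        # grows as the cell gets closer to another player (hypothetical
        # penalty -- tune crowd_weight to taste).
        nearest = min((octile(cell, p) for p in players), default=math.inf)
        return base * (1.0 + crowd_weight / (1.0 + nearest))

Because the penalty only ever increases a step's cost above its base, the plain octile heuristic stays admissible.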
I think you should try Jump Point Search. It is a very fast pathfinding algorithm for 8-direction grid maps.
Here is a blog post describing Jump Point Search briefly.
And this is its academic paper, "Online Graph Pruning for Pathfinding on Grid Maps".
Besides, there are some interesting videos on YouTube.
I hope this helps.
Given are two sets of three-dimensional points, a source set and a destination set. The number of points in each set is arbitrary (it may be zero). The task is to assign one or no source point to every destination point so that the sum of all distances is minimal. If there are more source points than destination points, the additional points are to be ignored.
There is a brute-force solution to this problem, but since the number of points may be large, it is not feasible. I have heard this problem is easy in 2D with equal set sizes, but sadly those preconditions do not hold here.
I'm interested in both approximations and exact solutions.
Edit: Haha, yes, I suppose it does sound like homework. Actually, it's not. I'm writing a program that receives the positions of a large number of cars, and I'm trying to map them to their respective parking cells. :)
One way you could approach this problem is to treat it as the classical assignment problem: http://en.wikipedia.org/wiki/Assignment_problem
You treat the points as the vertices of the graph, and the weight of each edge is the distance between its two points. Because the fastest algorithms assume that you are looking for a maximum matching (and not a minimum one, as in your case) and that the weights are non-negative, you can redefine the weights to be, e.g.:
weight(A, B) = bigNumber - distance(A, B)
where bigNumber is bigger than your longest distance.
Obviously you end up with a bipartite graph. Then you use one of the standard algorithms for maximum weighted bipartite matching (there are lots of resources on the web, e.g. http://valis.cs.uiuc.edu/~sariel/teach/courses/473/notes/27_matchings_notes.pdf, or Wikipedia for an overview: http://en.wikipedia.org/wiki/Perfect_matching#Maximum_bipartite_matchings). This way you will end up with an O(NM·max(N,M)) algorithm, where N and M are the sizes of your sets of points.
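If a library is acceptable, a sketch (Python/SciPy): linear_sum_assignment solves the minimization form directly and accepts rectangular cost matrices, so the bigNumber transformation is only needed when your matching routine insists on maximization. The point sets here are random placeholders.

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    sources = np.random.rand(8, 3)        # hypothetical 3-D source points
    dests = np.random.rand(5, 3)          # fewer destinations than sources

    cost = cdist(sources, dests)          # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)   # surplus sources unmatched
    print(list(zip(rows, cols)), cost[rows, cols].sum())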
Off the top of my head, spatial sort followed by simulated annealing.
Grid the space & sort the sets into spatial cells.
Solve the O(NM) problem within each cell, then within cell neighborhoods, and so on, to get a trial matching.
Finally, run lots of cycles of simulated annealing, in which you randomly alter matches, so as to explore the nearby space.
This is heuristic, getting you a good answer though not necessarily the best, and it should be fairly efficient due to the initial grid sort.
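A minimal sketch (Python) of that annealing stage, assuming for simplicity that the sets have equal size so every source is matched, and that the move is "swap the destinations of two sources":

    import math, random

    def anneal(cost, match, steps=100_000, t0=1.0, cooling=0.9999):
        # cost[i][j]: distance from source i to destination j.
        # match[i]: destination currently assigned to source i (in place).
        t = t0
        for _ in range(steps):
            i, j = random.sample(range(len(match)), 2)
            old = cost[i][match[i]] + cost[j][match[j]]
            new = cost[i][match[j]] + cost[j][match[i]]
            # Metropolis rule: always accept improvements, sometimes accept
            # worsenings so the search can escape local minima.
            if new < old or random.random() < math.exp((old - new) / t):
                match[i], match[j] = match[j], match[i]
            t *= cooling
        return match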
Although I don't really have an answer to your question, I can suggest looking into the following topics. (I know very little about this, but encountered it previously on Stack Overflow.)
Nearest Neighbour Search
kd-tree
Hope this helps a bit.