Path finding with multiple levels - algorithm

I'm dabbling in some path-finding systems (right now A*), but I'm nowhere near experienced enough to fully grasp the concepts behind everything. So please forgive me if this post is riddled with ignorance or false assumptions.
My goal is to be able to have an object traverse multiple planes to reach a destination, with variable entrances to each level, and each level having different designs.
Imagine a cave system that has 8 entrances on top, and your goal is to reach 6 layers down. Some of the caves link up, and therefore paths can be shared, but others are isolated until a certain point. Also, the interconnectivity of paths can be altered on each level.
I know there are some systems that can do this, but my goal is ultimately the best/fastest option, as well as the ability to do the calculation quickly (with a possibility of at least 80+ different paths being calculated at any given time).
An example would be something like this:
Every 'layer' is another 'level', or plane. The green is a path that goes both up and down between layers. During the course of the instance, paths can be placed anywhere, and any divisions inside a layer can be removed (but for this instance, they are organized as shown).
It's been fairly easy to implement A* for some basic pathfinding on a single level (e.g., getting from one position to a ladder going down). But deciding which ladder will lead to the ultimate goal is what is difficult.
My initial thought was to build a data structure linking the paths to each other on the same level, and do some sort of tree traversal to decide which ladder the pathfinding should direct you to when you reach a certain level... but that got complicated quickly, seeing as levels can be altered at any given point.
Like I said, I'm not certain how A* actually works, or the basics behind it, but it's the basic algorithm most people have said will work for multi-layered designs.
I know that it really depends on the engine and the implementation of the algorithm, so I'm not asking for specifics, but rather for some pointers on the best way to approach the situation:
Should I find a working A* implementation that has multi-level calculations built in, and change my level architecture to fit it?
Should I simply utilize a 2d A* implementation, and create a 'path' structure, to give the pathing instructions on each level?
Is there another approach to this style of design that would be more beneficial, and efficient (taking into account the number of calculations/paths to find)?
Thanks!

An approach that comes to mind:
For each level (assuming each level is a 2D grid), calculate the Manhattan distances (perhaps halved if you can move diagonally) from each ladder going up to each ladder going down, and let each floor's value be the shortest of those distances.
Now we can simply let the A* heuristic be the sum of the floor values from the current floor down to the bottom.
To prevent repeated computation of paths on each level, we can precompute every ladder-to-ladder path on every level; we can then also use the shortest of these paths, rather than the Manhattan distances, for the heuristic.
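In code, the heuristic might look like this minimal sketch (Python; it assumes unit-cost grid moves, free ladder climbs, and floors numbered from the top down; all names are illustrative):

```python
from itertools import product

def floor_value(up_ladders, down_ladders):
    """Cheapest Manhattan distance from any ladder arriving on a floor
    to any ladder leaving it downward (halve it if diagonal moves are
    allowed). Ladder positions are (x, y) pairs."""
    return min(abs(ux - dx) + abs(uy - dy)
               for (ux, uy), (dx, dy) in product(up_ladders, down_ladders))

def heuristic(pos, floor, bottom, down_ladders, floor_values):
    """Estimate of the remaining cost: walk to the nearest way down on
    the current floor, then pay at least each lower floor's value."""
    x, y = pos
    to_ladder = min(abs(x - lx) + abs(y - ly)
                    for (lx, ly) in down_ladders[floor])
    return to_ladder + sum(floor_values[f] for f in range(floor + 1, bottom))
```

Since no floor can be crossed more cheaply than its floor value, the estimate never overestimates, which is what A* needs.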
You say there is a possibility of at least 80+ different paths being calculated at any given time.
If these are all on the same map, consider precomputing, for each ladder, which ladder to head for next to obtain the shortest path to the bottom, along with that path's cost (possibly using some derivative of Dijkstra's algorithm starting from the bottom). Then, from any starting point, you can simply check the distance to each ladder on the same floor plus the cost of its precomputed shortest path, and pick the ladder giving the lowest sum.
We could even extend this by storing, for any given point, the next square one should move to on the shortest path.
If the map changes a lot, storing the next point for any given point is probably not viable, but don't discount storing the next ladder for each ladder.
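A minimal sketch of that precomputation (Python; it assumes the intra-floor ladder-to-ladder costs have already been precomputed as described above, and that ladders work in both directions so the graph is undirected; names are illustrative):

```python
import heapq

def next_ladder_table(adj, goal):
    """Single reverse-Dijkstra pass over the ladder graph.

    adj[u] is a list of (v, cost) pairs, where a node is a ladder (or
    the goal) and the cost is the precomputed intra-floor walking
    distance plus the climb. Returns, for every ladder, the remaining
    cost to the goal and the next ladder to head for."""
    dist = {u: float('inf') for u in adj}
    nxt = {u: None for u in adj}
    dist[goal] = 0.0
    pq = [(0.0, goal)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue                    # stale queue entry
        for u, cost in adj[v]:
            if d + cost < dist[u]:
                dist[u] = d + cost
                nxt[u] = v              # from u, head towards v next
                heapq.heappush(pq, (dist[u], u))
    return dist, nxt
```

From any start, the agent then just picks the ladder l on its floor minimising walk(start, l) + dist[l], and follows nxt after each climb. When a level changes, recompute that level's intra-floor costs and rerun this single pass.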

Related

Fastest Path with Acceleration at Points

This is just something I came up with on my own, but it seems like a fun problem and it has me stumped.
You have a set of points in two-dimensional space, with one point designated "Start" and one "End". Each point has coordinates (in meters from the origin), but also an "acceleration number" (in meters/second of delta-V). Upon reaching a point (including the start), you may accelerate by up to that point's acceleration number in any direction. Edge cost is dependent on your current speed, but you also have to be moving in the correct direction.
Is there an efficient algorithm for finding the fastest path through to the end point? I haven't come up with anything better than "try every path and check the results". Dijkstra's and other simple algorithms don't work, because you can't easily say that one path to an intermediate point is better or worse than another if they have you arriving with different velocities.
If that's too easy, what if you add the requirement that you have to stop at the end point? (I.e., your speed must be less than the end point's acceleration value when you reach it.)
EDIT: To be clear, direction matters. You maintain a velocity vector as you traverse the graph, and acceleration means adding a vector to it, whose magnitude is capped at that point's acceleration number. This means there are situations where building up a huge velocity is detrimental, as you will be going too fast to "steer" towards other valuable points/your destination.
I think that the requirement that you only use the acceleration from each point once makes this problem NP-complete in the general case. Consider an input where all the speed-boost points are clustered near the start, with the end point a huge distance away:
If the "huge distance" between the end point and the rest of the points is large enough to dominate the cost of the final solution, finding an optimal solution will boil down to finding a way to pick up as many speed boosts as possible from the start of the graph. If you only allow each point to be passed once, this would be equivalent to to the Hamiltonian path problem, which is NP complete.
That said, your problem has some extra structure on top of it (the distances are Euclidean, the graph is always complete) which might end up making the problem easier.
You can try solving this problem backwards: recursively trace paths from the end to every other node, recording the maximum speed along each line that still lets you turn from that node toward any other. The culling rule: if a path from the current to the next node exists with both lower velocity and less time spent from the end, that path dominates by default, because it can reach more nodes and takes less time. Once a path reaches the start node, it should get recalculated based on the maximum speed achievable at the start and stored. Then you take the stored path with the least time spent.
You have to search every available path here, because which paths are available on your graph depends on past state through indirect mechanics: using less speed allows more choices.
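The "past state" both answers wrestle with can be made explicit by searching over (point, velocity) states instead of points. Below is a minimal Dijkstra sketch under two big simplifications, so treat it as an illustration only: departure speeds are discretized to the values in speeds, and a point's boost may be reused on every visit (forbidding reuse is exactly what the Hamiltonian-path argument above says makes things hard). All names are illustrative.

```python
import heapq
import math

def fastest_path(points, accel, start, end, speeds):
    """Dijkstra over (point, arrival-velocity) states.

    points: list of (x, y); accel[i]: delta-V available at point i;
    speeds: iterable of positive candidate departure speeds. Leaving
    point p for q, the new velocity must aim straight at q, and the
    change from the old velocity may not exceed accel[p]."""
    def unit_and_dist(a, b):
        dx, dy = points[b][0] - points[a][0], points[b][1] - points[a][1]
        d = math.hypot(dx, dy)
        return (dx / d, dy / d), d

    best = {(start, (0.0, 0.0)): 0.0}
    pq = [(0.0, start, (0.0, 0.0))]
    while pq:
        t, p, v = heapq.heappop(pq)
        if p == end:
            return t                      # first settled arrival is fastest
        if t > best.get((p, v), math.inf):
            continue                      # stale queue entry
        for q in range(len(points)):
            if q == p:
                continue
            u, d = unit_and_dist(p, q)
            for s in speeds:
                w = (s * u[0], s * u[1])  # candidate departure velocity
                needed = math.hypot(w[0] - v[0], w[1] - v[1])
                if needed > accel[p]:
                    continue              # more delta-V than p provides
                nt = t + d / s
                if nt < best.get((q, w), math.inf):
                    best[(q, w)] = nt
                    heapq.heappush(pq, (nt, q, w))
    return None                           # end not reachable
```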

Reduce number of nodes in 3D A* pathfinding using (part of a) uniform grid representation

I am calculating paths inside a mesh around which I have built a uniform grid. The nodes (cells in the 3D grid) close to what I deem a "standable" surface are marked as accessible and used in my pathfinding. To get a lot of detail (like being able to pathfind up small staircases), the number of accessible cells in my grid has grown quite large, several thousand in larger buildings (every grid cell is 0.5x0.5x0.5 m and the meshes are rooms with real-world dimensions). Even though I only use a fraction of the actual cells in my grid for pathfinding, the huge number slows the algorithm down. Other than that it works fine and finds the correct path through the mesh, using a weighted Manhattan distance heuristic.
Imagine a big cube built from many small cubes, with the mesh inside it (the subdivision can be finer or coarser, but it's always cubical); the pathfinding is not calculated on all the small cubes, just the few marked as accessible (usually at the bottom of the grid, though that depends on how many floors the mesh has).
I am looking to reduce the search space for the pathfinding. I have looked at clustering, like how HPA* does it, and at other clustering algorithms like Markov clustering, but they all seem best suited to node graphs rather than grids. One obvious solution would be to just increase the size of the small cubes building the grid, but then I would lose a lot of detail in the pathfinding and it would not be as robust. How could I cluster these small cubes? In a typical search space (accessible cells in blue, the found path in green) there are a lot of cubes to search through, because the distance between them is quite small!
Never mind, for now, that the grid is a suboptimal representation for pathfinding.
Does anyone have an idea of how to reduce the number of cubes in the grid I have to search through, and how would I access the neighbors after I reduce the space? :) Right now it only looks at the closest neighbors while expanding the search space.
A couple possibilities come to mind.
Higher-level Pathfinding
The first is that your A* search may be searching the entire problem space. For example, you live in Austin, Texas, and want to get into a particular building somewhere in Alberta, Canada. A simple A* algorithm would search a lot of Mexico and the USA before finally searching Canada for the building.
Consider creating a second layer of A* to solve this problem. You'd first find out which states to travel between to get to Canada, then which provinces to reach Alberta, then Calgary, and then the Calgary Zoo, for example. In a sense, you start with an overview, then fill it in with more detailed paths.
If you have enormous levels, such as Skyrim's, you may need to add pathfinding layers between towns (multiple buildings), regions (multiple towns), and even countries (multiple regions). If you were making a GPS system, you might even need continents. If we become interstellar, our spaceships might need pathfinding layers for planets, sectors, and even galaxies.
By using layers, you narrow down your search area significantly, especially if different areas don't use the same coordinate system! (It's fairly hard to estimate distance for one A* pathfinder if one region needs latitude-longitude, another 3D Cartesian coordinates, and the next requires pathfinding through a time dimension.)
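A minimal two-layer sketch (Python), assuming you already have some region decomposition; region_of maps a cell to its region, the neighbour callables yield (node, cost) pairs, and names are illustrative:

```python
import heapq

def astar(start, goal, neighbours, heuristic):
    """Generic A*; neighbours(n) yields (next_node, step_cost) pairs."""
    g = {start: 0.0}
    came = {}
    pq = [(heuristic(start), start)]
    while pq:
        _, cur = heapq.heappop(pq)
        if cur == goal:
            path = [cur]
            while cur in came:          # walk back to reconstruct
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for nxt, cost in neighbours(cur):
            ng = g[cur] + cost
            if ng < g.get(nxt, float('inf')):
                g[nxt] = ng
                came[nxt] = cur
                heapq.heappush(pq, (ng + heuristic(nxt), nxt))
    return None

def hierarchical_path(start, goal, region_of, region_neighbours,
                      cell_neighbours, h_region, h_cell):
    """Coarse pass over regions, then a fine pass confined to them."""
    coarse = astar(region_of[start], region_of[goal],
                   region_neighbours, h_region)
    if coarse is None:
        return None                     # not even coarsely connected
    corridor = set(coarse)
    def confined(cell):                 # prune cells outside the corridor
        return [(n, c) for n, c in cell_neighbours(cell)
                if region_of[n] in corridor]
    return astar(start, goal, confined, h_cell)
```

The coarse pass is cheap because there are few regions; the fine pass is cheap because it refuses to leave the corridor the coarse pass found (the confinement trades a little optimality for a lot of speed).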
More efficient algorithms
Finding efficient algorithms becomes more important in three dimensions because there are more nodes to expand while searching. A Dijkstra search that expands on the order of x^2 nodes in 2D would expand on the order of x^3 in 3D, with x being the distance between the start and the goal. A 4D game would require yet more efficiency in pathfinding.
One of the benefits of grid-based pathfinding is that you can exploit topographical properties like path symmetry. If two paths consist of the same movements in a different order, you don't need to find both of them. This is where a very efficient algorithm called Jump Point Search comes into play.
Here is a side-by-side comparison of A* (left) and JPS (right). Expanded/searched nodes are shown in red with walls in black:
Notice that they both find the same path, but JPS easily searched less than a tenth of what A* did.
As of now, I haven't seen an official 3-dimensional implementation, but I've helped another user generalize the algorithm to multiple dimensions.
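To give a flavour of the rule, here is a sketch (Python) of just the horizontal jump on an 8-connected grid; full JPS also needs the vertical and diagonal jump rules plus neighbour pruning in the main search loop, so treat this as an illustrative fragment:

```python
def walkable(grid, x, y):
    """grid[y][x] is True for free cells; out of bounds counts as wall."""
    return 0 <= y < len(grid) and 0 <= x < len(grid[0]) and grid[y][x]

def jump_horizontal(grid, x, y, dx, goal):
    """Scan right (dx=1) or left (dx=-1) from (x, y), returning the
    next jump point on this ray or None."""
    while True:
        x += dx
        if not walkable(grid, x, y):
            return None                  # hit a wall: nothing on this ray
        if (x, y) == goal:
            return (x, y)                # the goal is always a jump point
        # Forced neighbour: a wall beside us ends just ahead, so an
        # optimal path may need to turn here; stop and expand this node.
        for dy in (-1, 1):
            if (not walkable(grid, x, y + dy)
                    and walkable(grid, x + dx, y + dy)):
                return (x, y)
```

Everything between two jump points is skipped without ever entering the open list, which is where the order-of-magnitude savings come from.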
Simplified Meshes (Graphs)
Another way to get rid of nodes during the search is to remove them before the search. For example, do you really need nodes in wide-open areas, where you could trust a much dumber AI to find its way? If you are building levels that don't change, create a script that parses them into the simplest grid containing only the important nodes.
This is actually called 'offline pathfinding': finding ways to calculate paths before you need them. If your level will remain the same, running the script for a few minutes each time you update the level will easily cut 90% of your pathfinding time. After all, you've done most of the work before it became urgent. It's like finding your way around a new city compared to the one you grew up in; knowing the landmarks means you don't really need a map.
Similar approaches to the 'symmetry-breaking' that Jump Point Search uses were introduced by Daniel Harabor, the creator of the algorithm. They are mentioned in one of his lectures, and allow you to preprocess the level to store only jump-points in your pathfinding mesh.
Clever Heuristics
Many academic papers state A*'s cost function as f(x) = g(x) + h(x), which doesn't make it obvious that you may use other functions, weight the cost terms, or even feed in heatmaps of territory or recent deaths as functions. These may create sub-optimal paths, but they greatly improve the apparent intelligence of your search. Who cares about the shortest path when your opponent holds a choke point on it and has been easily dispatching anybody travelling through? Better to be certain the AI can reach the goal safely than to let it be stupid.
For example, you may want to prevent the algorithm from letting enemies path through secret areas, so that they avoid revealing them to the player and the AI seems unaware of them. All you need to achieve this is a uniformly high cost for any point within those 'off-limits' regions. In a game like this, enemies would simply give up hunting the player once the path grew too costly. Another cool option is to 'scent' regions the player has visited recently (by temporarily increasing the cost of unvisited locations, since many algorithms dislike negative costs).
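As a sketch of that kind of cost shaping (Python; all numbers and names are illustrative):

```python
def step_cost(base_cost, cell, off_limits, scented):
    """Shape the per-step cost fed to A*/Dijkstra. Penalties are kept
    positive and finite: 'off-limits' cells become prohibitively (but
    not infinitely) expensive, and cells the player has NOT recently
    visited get a surcharge, which rewards 'scented' ground without
    using the negative costs many algorithms dislike."""
    cost = base_cost
    if cell in off_limits:
        cost += 1000.0   # enemies reveal secrets only as a last resort
    if cell not in scented:
        cost += 5.0      # nudge the search toward the player's trail
    return cost
```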
If you know of places that won't need searching but can't encode that in your algorithm's logic, a simple increase to their cost will prevent unnecessary searching. There are a lot of ways to take advantage of heuristics to simplify and inform your pathfinding, but your biggest gains will come from Jump Point Search.
EDIT: Jump Point Search implicitly selects the pathfinding direction using the same heuristics as A*, so you may be able to implement heuristics to a small degree, but the cost function won't be the cost of a node, but rather the cost of travelling between two nodes. (A* generally searches adjacent nodes, so the distinction between a node's cost and the cost of travelling to it tends to break down.)
Summary
Although octrees/quadtrees/B-trees can be useful in collision detection, they aren't as applicable to searches, because they partition a graph based on its coordinates rather than its connections. Layering your graph (mesh, in your vocabulary) into super-graphs (regions) is a more effective solution.
Hopefully I've covered anything you'll find useful.
Good luck!

Pathfinding through four dimensional data

The problem is finding an optimal route for a plane through four-dimensional wind data (winds at differing heights that change as you travel; a predictive wind model).
I've used the traditional A* search algorithm and hacked it to get it to work in 3 dimensions and with wind vectors.
It works in a lot of the cases but is extremely slow (I'm dealing with huge numbers of data nodes) and doesn't work for some edge cases.
I feel like I've got it working "well", but it feels very hacked together.
Is there a better, more efficient way of pathfinding through data like this (maybe a genetic algorithm or neural network), or something I haven't even considered? Maybe fluid dynamics? I don't know.
Edit: further details.
Data is wind vectors (direction, magnitude).
Data is spaced 15x15km at 25 different elevation levels.
By "doesnt always work" I mean it will pick a stupid path for an aircraft because the path weight is the same as another path. Its fine for path finding but sub-optimal for a plane.
I take many things into account for each node change:
Cost of ascending versus descending.
Wind resistance.
Ignoring nodes with too high of a resistance.
Cost of diagonal travel vs straight, etc.
I use Euclidean distance as my heuristic, or H value.
I use various factors for my weight or G value (the list above).
Thanks!
You can always trade optimality for time by using weighted A*.
Weighted A* [or A*-epsilon] is expected to find a path faster than A*, but the path won't be optimal. [However, it gives you a bound on the suboptimality, as a parameter of your epsilon/weight.]
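In code the change from plain A* is one line, sketched here (Python):

```python
def weighted_f(g, h, w=1.5):
    """Weighted A* node evaluation, f = g + w*h with w >= 1. Larger w
    makes the search greedier and typically much faster; with an
    admissible h, the found path costs at most w times the optimum,
    and w = 1 recovers ordinary A*."""
    return g + w * h
```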
A* isn't advertised to be the fastest search algorithm; it does guarantee that the first solution it finds will be the best (assuming you provide an admissible heuristic). If yours isn't working for some cases, then something is wrong with some aspect of your implementation (maybe with the mechanics of A*, maybe the domain-specific parts; given you haven't provided many details, I can't say more than that).
If it is too slow, you might want to reconsider the heuristic you are using.
If you don't need an optimal solution, then some other technique might be more appropriate. Again, given how little you have provided about the problem, hard to say more than that.
Are you planning offline or online?
Typically, for these problems, you don't know what the winds are until you're actually flying through them. If this really is an online problem, you may want to consider constructing a near-optimal policy. There is quite a lot of research in this area already; one of the best works is "Autonomous Control of Soaring Aircraft by Reinforcement Learning" by John Wharington.

Optimal algorithm for path-finding in a matrix that does not fit entirely into memory

I'm facing a hard problem:
Imagine I have a map of an entire country, represented by a huge matrix of cells. Each cell represents one square meter of territory and holds a double value between 0 and 1: the cost of traversing that cell.
The map obviously does not fit in memory.
I am trying to wrap my mind around a way to calculate the optimal path for a robot from a start point to an end position. The first idea I had was a TCP-like moving window: keep a minimap of the real map around the moving robot and execute the A* algorithm inside it. But I'm facing problems with maps with huge walls, bad pathfinding, etc.
I am searching the literature on A*-like algorithms and I cannot even visualize an approximation of a good solution to this problem.
I'm wondering if someone has faced a similar problem or can help with an idea for a possible solution!
Thanks in advance :)
Since I don't know your exact data, here's some information that could be useful:
A subpath of a shortest path is itself a shortest path, so you might split your matrix into submatrices and find (all) shortest paths within them. Note that you do not have to store all results: you can save memory by storing not the complete path but just the information "a path goes from A to B"; the intermediate nodes can be recomputed later or stored in a file. You might even be able to precompute some shortest paths for certain areas.
Another approach is that you might be able to compress your matrix in some way, i.e., if you have large areas consisting only of one and the same value, it might be good to store just that value and the dimensions of the area.
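One simple instance of that idea is row-wise run-length encoding, sketched below (Python); a quadtree would exploit uniform 2D blocks even better:

```python
def compress_rows(matrix):
    """Each row becomes a list of (value, run_length) pairs, so large
    uniform areas collapse to a handful of entries."""
    out = []
    for row in matrix:
        runs, prev, count = [], row[0], 0
        for v in row:
            if v == prev:
                count += 1
            else:
                runs.append((prev, count))
                prev, count = v, 1
        runs.append((prev, count))
        out.append(runs)
    return out
```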
Another approach (in connection with precomputing some shortest paths) is to generate different levels of detail of your map. For a map of the USA, this might look as follows: the coarsest level of detail contains just the cities New York, Los Angeles, Chicago, Dallas, Philadelphia, Houston, and Phoenix. The finer the levels get, the more cities they contain, but, on the other hand, the smaller the area of the whole map each covers.
Does your problem have any special structure? E.g., does the triangle inequality hold, or can you guarantee that the shortest path doesn't jog back and forth? Do you want to perform the query many times? (If so, you can do preprocessing that will amortize over multiple queries.) Do you need the exact minimum solution, or will something within an epsilon factor be OK?
One thought was that you can coarsen the matrix: form 100-meter-by-100-meter squares, and determine the shortest-path distances through each 100x100 square. Now this will fit in memory (about 1 gigabyte); you can run Dijkstra on the coarse grid, and then expand each step through its 100x100 square.
Also, have you tried running a forward-backward version of Dijkstra's algorithm? I.e., expand from the source and search from the sink at the same time, and stop when the frontiers intersect.
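A sketch of the forward-backward idea (Python; neighbours(n) yields (node, cost) pairs and the grid is assumed undirected). In 2D it roughly halves the work: two search discs of half the radius instead of one full disc.

```python
import heapq

def bidir_dijkstra(neighbours, source, sink):
    """Bidirectional Dijkstra; returns the shortest-path cost or None."""
    INF = float('inf')
    dist = [{source: 0.0}, {sink: 0.0}]      # forward / backward labels
    pq = [[(0.0, source)], [(0.0, sink)]]
    best = INF
    while pq[0] and pq[1]:
        # Stop once the two frontiers together can't beat the best meet.
        if pq[0][0][0] + pq[1][0][0] >= best:
            break
        side = 0 if pq[0][0][0] <= pq[1][0][0] else 1
        d, u = heapq.heappop(pq[side])
        if d > dist[side].get(u, INF):
            continue                          # stale queue entry
        for v, c in neighbours(u):
            nd = d + c
            if nd < dist[side].get(v, INF):
                dist[side][v] = nd
                heapq.heappush(pq[side], (nd, v))
                if v in dist[1 - side]:       # frontiers touched at v
                    best = min(best, nd + dist[1 - side][v])
    return best if best < INF else None
```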
Incidentally, why do you need such a fine level of granularity?
Here are some ideas that may work
You can model your path as a piecewise linear curve. If you have 31 line segments, the curve has 30 interior points and is fully described by 60 numbers (the endpoints are fixed). Each possible curve has a cost, so the cost is a function of the following form:
cost(x1, x2, ..., x60)
Now your problem is to find the global optimum of a function of 60 variables. You can use standard methods to do this. One idea is to use genetic algorithms. Another is to use a Monte Carlo method such as parallel tempering:
http://en.wikipedia.org/wiki/Parallel_tempering
Whenever you have a promising path, you can use it as a starting point to find a local minimum of the cost function. Maybe you can use some interpolation to make your cost function differentiable; then you can use Newton's method (or rather BFGS) to find local minima of the cost function:
http://en.wikipedia.org/wiki/Local_minimum
http://en.wikipedia.org/wiki/BFGS
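A crude sketch of the whole recipe (Python): the cost of a candidate curve is approximated by sampling the cell-cost field along each segment, and a simulated-annealing-style random perturbation searches the 60 coordinates; parallel tempering or a GA would slot into the same frame. All names and constants are illustrative.

```python
import math
import random

def path_cost(cell_cost, pts):
    """Approximate the line integral of the cell-cost field along a
    piecewise linear curve by sampling each segment."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        seglen = math.hypot(x1 - x0, y1 - y0)
        n = max(2, int(seglen))
        avg = sum(cell_cost(x0 + (x1 - x0) * i / n,
                            y0 + (y1 - y0) * i / n)
                  for i in range(n + 1)) / (n + 1)
        total += seglen * avg
    return total

def optimise(cell_cost, start, end, n_interior=30, iters=20000):
    """Random-perturbation search over the 2*n_interior coordinates
    (60 for 31 segments); the endpoints stay fixed."""
    ts = [i / (n_interior + 1) for i in range(1, n_interior + 1)]
    pts = [start] + [(start[0] + (end[0] - start[0]) * t,
                      start[1] + (end[1] - start[1]) * t)
                     for t in ts] + [end]
    cost = path_cost(cell_cost, pts)
    for k in range(iters):
        temp = 50.0 * (1.0 - k / iters) + 1e-3   # cooling schedule
        i = random.randrange(1, len(pts) - 1)
        cand = list(pts)
        cand[i] = (pts[i][0] + random.gauss(0, temp),
                   pts[i][1] + random.gauss(0, temp))
        c = path_cost(cell_cost, cand)
        # Accept improvements always, worse moves with Boltzmann odds.
        if c < cost or random.random() < math.exp((cost - c) / temp):
            pts, cost = cand, c
    return pts, cost
```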
Your problem is somewhat similar to the problem of finding reaction paths in chemical systems. Maybe you can find some inspiration in the book "Energy Landscapes" by David Wales.
But I also have some questions:
Is it necessary for you to find the optimal path, or are you just looking for a path that is OK?
How much computer power and time do you have at hand?
Can the robot make sharp turns, or do you need extra physics modelling to improve the cost function?

Beating a greedy algo

For class, we have a grid and a bunch of squares on the grid that we need to detect and travel to. We start at (0,0). We scan tiny regions of the grid at a time (for reasons regarding our data structure, it's mandatory), and when we detect squares we need to visit, we travel to them. There are 32 locations on the grid, but we only need to visit 16 of them, as fast as possible (we're racing someone else to them).
Dijkstra's algorithm would find the shortest path from our current location to our next one. This is sub-optimal, however, because our next location may be really far from the location after that. It would be more beneficial if we could somehow figure out the density of locations as we scan, then go to a very dense area and visit all the locations within it.
Is there an algorithm best suited to a situation like this? I know the greedy heuristic isn't optimal. A* and Dijkstra's are the ones we thought of first, but we are hoping there's a completely different solution.
PS this is being done in assembly, unfortunately.
Finding dense clumps of points (e.g. locations you have to visit) is called cluster analysis. See the link for several classes of algorithms.
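Even in assembly, a crude density pass is feasible; here is a toy stand-in for real cluster analysis (k-means, DBSCAN, ...), sketched in Python with illustrative names:

```python
def densest_cluster(targets, cell=8):
    """Bucket target squares into cell x cell blocks and return the
    locations in the fullest block: go there first and sweep it."""
    buckets = {}
    for (x, y) in targets:
        buckets.setdefault((x // cell, y // cell), []).append((x, y))
    return max(buckets.values(), key=len)
```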
Assembly language is a really painful way to experiment with high-level algorithms. Is your prof a sadist??
