How to optimize a large graph for pathfinding algorithms? - algorithm

I'm developing a small program where a user can create simple diagrams with abstract blocks connected by lines, for example flow charts or structural diagrams. One clause of the statement of work is that lines have to route around other blocks/lines and must not intersect them while being moved.
Illustration
I tried pathfinding algorithms like A* and Lee's algorithm, treating the workspace (a window with diagram elements) as a graph in which one pixel is one graph node. However, moving blocks/lines causes a significant delay (for example, pathfinding in a 500x500 workspace takes about 320-360 ms). It seems the graph is too big for those algorithms.
Could you please tell me how to reduce the number of nodes in this case? Or is there a way to speed up those algorithms, or something else I could use instead?

Don't think of this as a graph theory problem, think of it as a physics problem.
Visualize it as follows. Each block has a specified force pulling it towards the last place that it was put. Line segments, blocks, and the edge of the graph repel each other with an inverse square law (except that the end of the line you are drawing doesn't repel blocks in front of it). Under sufficient stress, a line segment can be broken into smaller line segments that have a pull towards returning to being a straight line.
The dynamics are complicated, but the number of entities is the number of objects you see on screen, not the number of pixels the diagram is drawn on. Therefore you'll be able to do updates relatively quickly.
You'll need to play with the dynamics a bit to get a good experience, but this should be a more tractable approach.
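To make that concrete, here is a minimal sketch of the relaxation step for blocks only (the force constants, the time step, and the Block fields are assumptions for illustration, not part of the answer; line joints would get the same treatment plus a pull back toward a straight line):

    import math
    from dataclasses import dataclass

    @dataclass
    class Block:
        x: float
        y: float
        home_x: float  # where the user last dropped the block
        home_y: float

    def relax(blocks, spring_k=0.1, repel_k=500.0, dt=0.1):
        """One relaxation step: pull each block toward its 'home' position and
        push nearby blocks apart with an inverse-square repulsion."""
        forces = []
        for b in blocks:
            fx = spring_k * (b.home_x - b.x)
            fy = spring_k * (b.home_y - b.y)
            for other in blocks:
                if other is b:
                    continue
                dx, dy = b.x - other.x, b.y - other.y
                d2 = dx * dx + dy * dy + 1e-6          # avoid division by zero
                inv = repel_k / (d2 * math.sqrt(d2))   # inverse-square law
                fx += dx * inv
                fy += dy * inv
            forces.append((fx, fy))
        for b, (fx, fy) in zip(blocks, forces):
            b.x += fx * dt
            b.y += fy * dt

Running a few dozen relax() iterations after each user move is the kind of update described above; the cost scales with the number of blocks and joints, not with the number of pixels.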

Related

Reduce number of nodes in 3D A* pathfinding using (part of a) uniform grid representation

I am calculating pathfinding inside a mesh around which I have built a uniform grid. The nodes (cells in the 3D grid) close to what I deem a "standable" surface are marked as accessible and used in my pathfinding. To get a lot of detail (like being able to pathfind up small staircases), the number of accessible cells in my grid has grown quite large, several thousand in larger buildings (every grid cell is 0.5x0.5x0.5 m and the meshes are rooms with real-world dimensions). Even though I only use a fraction of the actual cells in my grid for pathfinding, the huge number slows the algorithm down. Other than that it works fine and finds the correct path through the mesh, using a weighted Manhattan distance heuristic.
Imagine my grid looks like that with the mesh inside it (it can be more or fewer cubes, but it is always cubical); however, the pathfinding is not calculated on all the small cubes, just the few marked as accessible (usually at the bottom of the grid, but that can depend on how many floors the mesh has).
I am looking to reduce the search space for the pathfinding... I have looked at clustering like HPA* does it, and at other clustering algorithms like Markov, but they all seem best suited to node graphs rather than grids. One obvious solution would be to just increase the size of the small cubes building the grid, but then I would lose a lot of detail in the pathfinding and it would not be as robust. How could I cluster these small cubes? This is how a typical search space looks when I do my pathfinding (blue are accessible, green is the path):
and as you can see there are a lot of cubes to search through because the distance between them is quite small!
Never mind that the grid is a suboptimal choice for pathfinding for now.
Does anyone have an idea how to reduce the number of cubes in the grid I have to search through, and how would I access the neighbors after I reduce the space? :) Right now it only looks at the closest neighbors while expanding the search space.
A couple possibilities come to mind.
Higher-level Pathfinding
The first is that your A* search may be searching the entire problem space. For example, you live in Austin, Texas, and want to get into a particular building somewhere in Alberta, Canada. A simple A* algorithm would search a lot of Mexico and the USA before finally searching Canada for the building.
Consider creating a second layer of A* to solve this problem. You'd first find out which states to travel between to get to Canada, then which provinces to reach Alberta, then Calgary, and then the Calgary Zoo, for example. In a sense, you start with an overview, then fill it in with more detailed paths.
If you have enormous levels, such as Skyrim's, you may need to add pathfinding layers between towns (multiple buildings), regions (multiple towns), and even countries (multiple regions). If you were making a GPS system, you might even need continents. If we ever become interstellar, our spaceships might contain pathfinding layers for planets, sectors, and even galaxies.
By using layers, you help to narrow down your search area significantly, especially if different areas don't use the same co-ordinate system! (It's fairly hard to estimate distance for one A* pathfinder if one of the regions needs latitude-longitude, another 3d-cartesian, and the next requires pathfinding through a time dimension.)
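As a rough sketch of the layered idea (the region_of, region_neighbors, and cell_neighbors callbacks and both heuristics are hypothetical names you would supply; this is a corridor search in the spirit of HPA*, not the only way to layer):

    import heapq, itertools

    def astar(start, goal, neighbors, heuristic):
        """Generic A*: neighbors(n) yields (next_node, step_cost) pairs."""
        tie = itertools.count()          # tie-breaker so the heap never compares nodes
        frontier = [(heuristic(start, goal), next(tie), 0.0, start)]
        came_from, best_g = {}, {start: 0.0}
        while frontier:
            _, _, g, node = heapq.heappop(frontier)
            if g > best_g.get(node, float("inf")):
                continue                 # stale entry, a better path was already found
            if node == goal:
                path = [node]
                while node in came_from:
                    node = came_from[node]
                    path.append(node)
                return list(reversed(path))
            for nxt, step in neighbors(node):
                ng = g + step
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = node
                    heapq.heappush(frontier, (ng + heuristic(nxt, goal), next(tie), ng, nxt))
        return None

    def layered_path(start, goal, region_of, region_neighbors, cell_neighbors, h_region, h_cell):
        """Coarse pass over regions first, then a fine pass restricted to cells
        whose region lies on the coarse path (a corridor search)."""
        corridor = astar(region_of(start), region_of(goal), region_neighbors, h_region)
        if corridor is None:
            return None
        allowed = set(corridor)

        def restricted(cell):
            return [(n, c) for n, c in cell_neighbors(cell) if region_of(n) in allowed]

        return astar(start, goal, restricted, h_cell)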
More efficient algorithms
Finding efficient algorithms becomes more important in 3 dimensions because there are more nodes to expand while searching. A Dijkstra search that expands on the order of x^2 nodes in 2D would expand on the order of x^3 in 3D, with x being the distance between the start and the goal. A 4D game would require yet more efficiency in pathfinding.
One of the benefits of grid-based pathfinding is that you can exploit topographical properties like path symmetry. If two paths consist of the same movements in a different order, you don't need to find both of them. This is where a very efficient algorithm called Jump Point Search comes into play.
Here is a side-by-side comparison of A* (left) and JPS (right). Expanded/searched nodes are shown in red with walls in black:
Notice that they both find the same path, but JPS easily searched less than a tenth of what A* did.
As of now, I haven't seen an official 3-dimensional implementation, but I've helped another user generalize the algorithm to multiple dimensions.
Simplified Meshes (Graphs)
Another way to get rid of nodes during the search is to remove them before the search. For example, do you really need nodes in wide-open areas where you can trust a much more stupid AI to find its way? If you are building levels that don't change, create a script that parses them into the simplest grid which only contains important nodes.
This is actually called 'offline pathfinding'; basically finding ways to calculate paths before you need to find them. If your level will remain the same, running the script for a few minutes each time you update the level will easily cut 90% of the time you pathfind. After all, you've done most of the work before it became urgent. It's like trying to find your way around a new city compared to one you grew up in; knowing the landmarks means you don't really need a map.
Similar approaches to the 'symmetry-breaking' that Jump Point Search uses were introduced by Daniel Harabor, the creator of the algorithm. They are mentioned in one of his lectures, and allow you to preprocess the level to store only jump-points in your pathfinding mesh.
Clever Heuristics
Many academic papers state that A*'s cost function is f(x) = g(x) + h(x), which doesn't make it obvious that you may use other functions, multiply the weight of the cost functions, and even implement heatmaps of territory or recent deaths as functions. These may create sub-optimal paths, but they greatly improve the intelligence of your search. Who cares about the shortest path when your opponent has a choke point on it and has been easily dispatching anybody travelling through it? Better to be certain the AI can reach the goal safely than to let it be stupid.
For example, you may want to prevent the algorithm from letting enemies access secret areas so that they avoid revealing them to the player, and so that the AI seems to be unaware of them. All you need to achieve this is a uniformly high cost for any point within those 'off-limits' regions. In a game like this, enemies would simply give up on hunting the player after the path grew too costly. Another cool option is to 'scent' regions the player has visited recently (by temporarily increasing the cost of unvisited locations, because many algorithms dislike negative costs).
If you know which places you won't need to search but can't encode that in your algorithm's logic, a simple increase to their cost will prevent unnecessary searching. There are a lot of ways to take advantage of heuristics to simplify and inform your pathfinding, but your biggest gains will come from Jump Point Search.
EDIT: Jump Point Search implicitly selects pathfinding direction using the same heuristics as A*, so you may be able to implement heuristics to a small degree, but their cost function won't be the cost of a node, but rather, the cost of traveling between the two nodes. (A* generally searches adjacent nodes, so the distinction between a node's cost and the cost of traveling to it tends to break down.)
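For illustration, here is one small way to fold such information into the costs; the danger map and the 1.5 inflation factor are made-up examples, not something from the answer:

    def make_costs(danger, weight=1.5):
        """danger: dict mapping a grid cell (x, y) to an extra penalty, e.g. a
        heatmap of recent deaths or an 'off-limits' region. Returns a step-cost
        function and a weighted Manhattan heuristic for use in A*."""
        def step_cost(a, b):
            base = abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan move cost
            return base + danger.get(b, 0.0)             # entering a risky cell costs more
        def heuristic(n, goal):
            return weight * (abs(n[0] - goal[0]) + abs(n[1] - goal[1]))
        return step_cost, heuristic

    # usage sketch: step_cost, h = make_costs({(4, 7): 50.0})   # cell (4, 7) is a choke point

The inflated heuristic trades strict optimality for a faster, more purposeful-looking search, which matches the trade-off described above.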
Summary
Although octrees/quadtrees/B-trees can be useful in collision detection, they aren't as applicable to searches because they section a graph based on its coordinates, not on its connections. Layering your graph (mesh in your vocabulary) into super graphs (regions) is a more effective solution.
Hopefully I've covered anything you'll find useful.
Good luck!

Simple algorithm to compute vertical map of set of segments?

The paper An optimal algorithm for intersecting line segments in the plane by Chazelle and Edelsbrunner defines the "vertical map" of a set of segments as
the planar subdivision obtained by drawing a rectangular frame around the segments and connecting by a vertical line segment every endpoint to the edges immediately above and below
(It really is easier to understand looking at the diagram on the top of page 3).
They make the comment
we do not know of any simple algorithm, no matter how slow, for computing the vertical map of a set of segments
To me this seems confusing, because I immediately thought of a simple algorithm to do so:
Iterate through every endpoint p:
    Iterate through every other segment s:
        t = intersection(s, vertical line through p)
        keep track of the closest t above and below p
    add the closest point above and the closest point below and connect them with a segment
And then we're done? I mean, this would only be O(N^2) running time, not even that bad.
What more could they mean that this algorithm doesn't satisfy? Do they expect to find the intersection points as well? Wouldn't an O(N^2) pairwise intersection check do that? Do they want to enumerate the subdivisions of the plane? Couldn't that just be done very simply using standard algorithms to go from points+edges to faces? e.g. by walking around the border of the face?
Is there something I'm missing?
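For concreteness, here is a minimal sketch of the brute-force O(N^2) construction described above, assuming segments are given as ((x1, y1), (x2, y2)) tuples and ignoring the bounding frame and degenerate cases:

    def vertical_map_links(segments):
        """For every endpoint, find the segment crossings directly above and
        below it and emit the connecting vertical segments (O(N^2) overall)."""
        links = []
        for seg in segments:
            for (px, py) in seg:
                above = below = None
                for (x1, y1), (x2, y2) in segments:
                    if ((x1, y1), (x2, y2)) == seg or x1 == x2:
                        continue                  # skip the segment itself and vertical segments
                    if not (min(x1, x2) <= px <= max(x1, x2)):
                        continue                  # vertical line through p misses this segment
                    y = y1 + (y2 - y1) * (px - x1) / (x2 - x1)
                    if y > py and (above is None or y < above):
                        above = y
                    if y < py and (below is None or y > below):
                        below = y
                if above is not None:
                    links.append(((px, py), (px, above)))
                if below is not None:
                    links.append(((px, py), (px, below)))
        return links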
The paper is from 1992, and, as an example, the author works with C code.
At that time there were no collections, sort routines, or other convenient, optimized tools like we have today (except perhaps in specific languages, and those were not fast or easy to use and code).
So if you try to do it with only arrays and simple types, you will probably need a lot of arrays, indexes, and loops, and that will not be very easy to understand.
Note also that the paper pays a lot of attention to processing time and memory usage.

Algorithm for building a graph from a set of points

I need some input on solving the following problem:
Given a set of unordered (X,Y) points, I need to reduce/simplify the points and end up with a connected graph representation.
The following image shows an example of an actual data set and the corresponding desired output (hand-drawn by me in MSPaint, sorry for the shitty drawing, but the basic idea should be clear enough).
Some other things:
The input size will be between 1000 and 20000 points
The algorithm will be run by a user, who can see the input/output visually, tweak input parameters, etc. So automatically finding a solution is not a requirement, but the user should be able to achieve one within a fairly limited number of retries (and parameter tweaks). This also means that the distance between the nodes on the resulting graph can be a parameter and does not need to be derived from the data.
The time/space complexity of the algorithm is not important, but in practice it should be possible to finish a run within a few seconds on a standard desktop machine.
I think it boils down to two distinct problems:
1) Running a filtering pass, reducing the number of points (including some noise filtering for removing stray points)
2) Some kind of connect-the-dots graph problem afterwards. A very problematic area can be seen in the bottom/center part of the example data. It's very easy to end up connecting the wrong parts of the graph.
Could anyone point me in the right direction for solving this? Cheers.
K-nearest neighbors (or, perhaps more accurately, a sigma neighborhood) might be a good starting point. If you're working in strictly Euclidean space, you may be able to achieve 90% of what you're looking for by specifying some L2 distance threshold beyond which points are not connected.
The next step might be some sort of spectral graph analysis where you can define edges between points using some sort of spectral algorithm in addition to a distance metric. This would give the user a lot more knobs to turn with regards to the connectivity of the graph.
Both of these approaches should be able to handle outliers, e.g. "noisy" points that simply won't be connected to anything else. That said, you could probably combine them for the best possible performance (as spectral clustering performs a lot better when there are no 1-point clusters): run a basic KNN to identify and remove outliers, then a spectral analysis to more robustly establish edges.
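A minimal sketch of the L2-threshold step using SciPy (the radius is the knob the user would tweak; the spectral analysis is left out):

    import numpy as np
    from scipy.spatial import cKDTree

    def threshold_graph(points, radius):
        """Connect every pair of points closer than `radius` (Euclidean/L2).
        points: (N, 2) array. Returns the edge set and the indices of
        isolated points, which are natural candidates for noise removal."""
        tree = cKDTree(points)
        edges = tree.query_pairs(r=radius)       # set of (i, j) index pairs, i < j
        degree = np.zeros(len(points), dtype=int)
        for i, j in edges:
            degree[i] += 1
            degree[j] += 1
        outliers = np.flatnonzero(degree == 0)
        return edges, outliers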

How to fit more than one line to data points

I am trying to fit more than one line to a list of points in 2D. My points are quite low in number (16 or 32).
These points are coming from a simulated environment of a robot with laser range finders attached to its side. If the points lie on a line it means that they detected a wall, if not, it means they detected an obstacle. I am trying to detect the walls and calculate their intersection, and for this I thought the best idea is to fit lines on the dataset.
Fitting one line to a set of points is not a problem if we know all those points lie on or around a line.
My problem is that I don't know how to detect which sets of points should be classified for fitting to the same line and which should not be, for each line. Also, I don't even know the number of points on a line, while naturally it would be best to detect the longest possible line segment.
How would you solve this problem? If I look at all the possibilities for example for groups of 5 points for all the 32 points then it gives 32 choose 5 = 201376 possibilities. I think it takes way too much time to try all the possibilities and try to fit a line to all 5-tuples.
So what would be a better algorithm that would run much faster? I could connect points within a distance limit and create polylines. But even connecting the points is a hard task, as the edge distances change even within a single line.
Do you think it is possible to do some kind of Hough transform on a discrete dataset with such a low number of entries?
Note: if this problem is too hard to solve, I was thinking about using the order of the sensors for filtering. That way the algorithm could be easier, but if there is a small obstacle in front of a wall, it would disrupt the continuity of the line and thus break the wall into two halves.
A good way to find lines in noisy point data like this is to use RANSAC. Standard RANSAC usage is to pick the best hypothesis (= line in this case), but you can just as easily pick the best 2 or 4 lines given your data. Have a look at the example here:
http://www.janeriksolem.net/2009/06/ransac-using-python.html
Python code is available here
http://www.scipy.org/Cookbook/RANSAC
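In case it helps, here is a bare-bones single-line RANSAC sketch (the tolerance, iteration count, and minimum inlier count are arbitrary defaults); for several walls you would run it repeatedly, removing each line's inliers before the next pass:

    import random
    import numpy as np

    def ransac_line(points, n_iter=200, tol=0.05, min_inliers=5):
        """Repeatedly pick two points, hypothesize the line through them, and
        keep the hypothesis with the most points within `tol` of it."""
        pts = np.asarray(points, dtype=float)
        best = None
        for _ in range(n_iter):
            i, j = random.sample(range(len(pts)), 2)
            p1, p2 = pts[i], pts[j]
            dx, dy = p2 - p1
            norm = np.hypot(dx, dy)
            if norm == 0:
                continue                          # identical sample points, useless hypothesis
            # perpendicular distance of every point to the candidate line
            dist = np.abs(dx * (pts[:, 1] - p1[1]) - dy * (pts[:, 0] - p1[0])) / norm
            inliers = np.flatnonzero(dist < tol)
            if len(inliers) >= min_inliers and (best is None or len(inliers) > len(best[2])):
                best = (p1, p2, inliers)
        return best    # (point_a, point_b, inlier_indices) or None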
The first thing I would point out is that you seem to be ignoring a vital aspect of the data, which is that you know which sensors (or readings) are adjacent to each other. If you have N laser sensors, you know where they are bolted to the robot, and if you are rotating a sensor, you know the order (and position) in which the measurements are taken. So connecting points together to form a piecewise linear fit (polylines) should be trivial. Once you had these correspondences you could take each set of four points and determine whether they can be modeled effectively by only 2 lines, or something.
Secondly, it's well known that finding a globally optimal fit of even two lines to an arbitrary set of points is NP-hard, as it can be reduced to k-means clustering, so I would not expect to find an efficient algorithm for this. When I used the Hough transform it was for finding corners based on pixel intensities; in theory it is probably applicable to this sort of problem, but it is just an approximation and it is probably going to take a fair bit of work to figure out and implement.
I hate to suggest it but it seems like you might benefit by looking at the problem in a slightly different way. When I was working in autonomous navigation with a laser range finder we solved this problem by discretizing the space into an occupancy grid, which is the default approach. Of course, this just assumes the walls are just another object, which is not a particularly outrageous idea IMO. Beyond that, can you assume you have a map of the walls and you are just trying to find the obstacles and localize (find the pose of) the robot? If so, there are a large number of papers and tech reports on this problem.
Part of the solution could be (it's where I would start) to investigate the robot itself. Ask questions such as:
How fast is the robot turning/moving?
At what interval is the robot shooting the laser?
Where was the robot and what was its orientation when a point was found?
The answers to these questions might help you more than trying to find some algorithm that relies on the points only, especially when there are so few.
The answers to these questions also give you more information/points. E.g., if you know the robot was at a specific position when it detected a point, there are no points in between the position of the robot and the detected point. That is very valuable.
For all the triads of points, fit a line through them and compute how much the points deviate from that line.
Then use only the good (little-deviating) triads and merge them if they have two points in common, and then grow the sets by appending every triad that has (at least) two points in the set.
As the last step, you can ditch the triads that haven't been merged with any other but have some element in common with a result set of at least 4 elements (to cope with crossing lines), but keep those triads that did not merge with anyone and have no element in common with the sets (this would yield the three points on the right side of your example as one of the lines).
I presume this would find the left line and bottom line in the second step, and the right line in the third.
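A rough sketch of the triad idea (the collinearity tolerance is an assumption, and the final "ditch or keep leftover triads" rule is omitted for brevity):

    from itertools import combinations
    import numpy as np

    def collinearity_error(p, q, r):
        """Perpendicular distance of r from the line through p and q."""
        dx, dy = np.subtract(q, p)
        norm = np.hypot(dx, dy)
        return abs(dx * (r[1] - p[1]) - dy * (r[0] - p[0])) / norm if norm else float("inf")

    def merge_triads(points, tol=0.05):
        """Keep near-collinear triples of point indices, then repeatedly union
        any two sets that share at least two points."""
        sets = [set(t) for t in combinations(range(len(points)), 3)
                if collinearity_error(*(points[i] for i in t)) < tol]
        merged = True
        while merged:
            merged = False
            for a in range(len(sets)):
                for b in range(a + 1, len(sets)):
                    if len(sets[a] & sets[b]) >= 2:
                        sets[a] |= sets[b]
                        del sets[b]
                        merged = True
                        break
                if merged:
                    break
        return sets                     # each surviving set is a candidate line

With 32 points there are only 4960 triads, so the brute-force enumeration stays cheap.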

Sparse (Pseudo) Infinite Grid Data Structure for Web Game

I'm considering trying to make a game that takes place on an essentially infinite grid.
The grid is very sparse: certain small regions of relatively high density, and relatively few isolated nonempty cells elsewhere.
The amount of the grid in use is too large to implement naively but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that)
This needs to be easy to persist.
Here are the operations I may want to perform (reasonably efficiently) on this grid:
Ask for some small rectangular region of cells and all their contents (a player's current neighborhood)
Set individual cells or blit small regions (the player is making a move)
Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview)
Find some regions with approximately a given density (player spawning location)
Approximate shortest path through gaps of at most some small constant empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction searching)
Approximate convex hull for a region
Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively little external dependency (preferably avoiding the need for a persistent process).
Guys, what advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were?
Thanks a lot, everyone!
I think you can do everything using quadtrees, as others have suggested, and maybe a few additional data structures. Here's a bit more detail:
Asking for cell contents, setting cell contents: these are the basic quadtree operations.
Rough shape/outline: Given a rectangle, go down sufficiently many steps within the quadtree that most cells are empty, and make the nonempty subcells at that level black, the others white.
Region with approximately given density: if the density you're looking for is high, then I would maintain a separate index of all objects in your map. Take a random object and check the density around that object in the quadtree. Most objects will be near high density areas, simply because high-density areas have many objects. If the density near the object you picked is not the one you were looking for, pick another one.
If you're looking for low-density, then just pick random locations on the map - given that it's a sparse map, that should typically give you low density spots. Again, if it doesn't work right try again.
Approximate shortest path: if this is a not-too-frequent operation, then create a rough graph of the area "between" the starting point A and end point B, for some suitable definition of between (maybe the square containing the circle with the midpoint of AB as center and 1.5*AB as diameter, except if that diameter is less than a certain minimum, in which case... experiment). Make the same type of grid that you would use for the rough shape / outline, then create (say) a Delaunay triangulation of the black points. Do a shortest path on this graph, then overlay that on the actual map and refine the path to one that makes sense given the actual map. You may have to redo this at a few different levels of refinement - start with a very rough graph, then "zoom in" taking two points that you got from the higher level as start and end point, and iterate.
If you need to do this very frequently, you'll want to maintain this type of graph for the entire map instead of reconstructing it every time. This could be expensive, though.
Approx convex hull: again start from something like the rough shape, then take the convex hull of the black points in that.
I'm not sure if this would be easy to put into a relational database; a file-based storage could work but it would be impractical to have a write operation be concurrent with anything else, which you would probably want if you want to allow this to grow to a reasonable number of players (per world / map, if there are multiple worlds / maps). I think in that case you are probably best off keeping a separate process alive... and even then making this properly respect multithreading is going to be a headache.
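For reference, a minimal sparse quadtree sketch along these lines (fixed root square, at most one stored cell per leaf; persistence, blitting, and the density/path queries would be layered on top):

    class QuadTree:
        """Stores only the nonempty cells of a sparse grid."""
        def __init__(self, x, y, size):
            self.x, self.y, self.size = x, y, size   # lower-left corner and side length
            self.cell = None                         # (cx, cy, value) if this leaf holds one cell
            self.children = None                     # four subtrees once this node splits

        def set(self, cx, cy, value):
            """Store the contents of one nonempty grid cell (must lie in the root square)."""
            if self.children is None:
                if self.cell is None or self.cell[:2] == (cx, cy):
                    self.cell = (cx, cy, value)
                    return
                old, self.cell = self.cell, None     # split: push the existing cell down
                self.children = [None] * 4
                self._child(old[0], old[1]).set(*old)
            self._child(cx, cy).set(cx, cy, value)

        def _child(self, cx, cy):
            half = self.size / 2
            i = (cx >= self.x + half) + 2 * (cy >= self.y + half)
            if self.children[i] is None:
                self.children[i] = QuadTree(self.x + (half if i & 1 else 0),
                                            self.y + (half if i >= 2 else 0), half)
            return self.children[i]

        def query(self, x0, y0, x1, y1, out=None):
            """Collect every stored cell inside the rectangle [x0, x1] x [y0, y1]."""
            if out is None:
                out = []
            if x0 > self.x + self.size or x1 < self.x or y0 > self.y + self.size or y1 < self.y:
                return out                           # rectangle misses this subtree entirely
            if self.cell is not None and x0 <= self.cell[0] <= x1 and y0 <= self.cell[1] <= y1:
                out.append(self.cell)
            for child in (self.children or ()):
                if child is not None:
                    child.query(x0, y0, x1, y1, out)
            return out

    # usage sketch:
    # qt = QuadTree(0, 0, 1 << 20)          # root square must be big enough to hold the world
    # qt.set(5, 7, "rock")
    # neighborhood = qt.query(0, 0, 64, 64)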
A kd-tree or a quadtree is a good data structure for your problem. The latter in particular is a clever way to address the grid and to reduce the 2D complexity to a 1D complexity. Quadtrees are also used in many map applications like Bing and Google Maps. Here is a good start: Nick's quadtree spatial index Hilbert curve blog.
