I'm trying to link points in the plane, i.e., draw a graph, but using only axis-aligned lines. I found the KDTree algorithm to be quite promising and close to what I need, but it does not keep the total length of the segments as small as possible.
The result I'm looking for is closer to this:
I have also read up on
https://en.wikipedia.org/wiki/Delaunay_triangulation
because initially I thought that would be it, but it turns out it's way off:
- based on circles and triangles
- traces a perimeter
- nodes have multiple connections (>=3)
Can you point me towards an existing algorithm, or help me draft a new one?
PS: There are only 1000-1100 points, so efficiency is not super important.
In terms of goals and costs: reaching all nodes is the goal, and the total length of all segments is the cost.
Thanks to MBo, I now know that this is known as the Steiner tree problem, the subject of a 1992 book (of the same name) demonstrating that it is NP-hard.
What I need is the rectilinear variant of that problem. A few approximation and heuristic algorithms are known to (help) solve it.
(HAN, HAN4, LBH, HWAB, HWAD, BEA are listed in
https://www.sciencedirect.com/science/article/pii/0166218X9390010L )
I haven't found anything yet that a "practitioner" could actually use. Still looking.
It seems like a 'good' way forward is:
Compute edges using Delaunay triangulation.
Label each edge with its length or rectilinear distance.
Find the minimum spanning tree (with algorithms such as Borůvka's, Prim's, or Kruskal's).
Add Steiner points restricted to the Hanan grid.
Move edges to the grid.
Still unclear about that last step. How?
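In case it's useful, here is a minimal sketch of the first three steps (Delaunay edges, rectilinear edge weights, minimum spanning tree) in Python with SciPy; the random point set is just a stand-in for real data:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

points = np.random.rand(1000, 2)          # stand-in for the ~1000 real points
tri = Delaunay(points)

# Collect the unique edges of the triangulation.
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

# Weight each edge with its rectilinear (L1 / Manhattan) length.
n = len(points)
rows, cols, weights = [], [], []
for a, b in edges:
    rows.append(a)
    cols.append(b)
    weights.append(np.abs(points[a] - points[b]).sum())

graph = csr_matrix((weights, (rows, cols)), shape=(n, n))
mst = minimum_spanning_tree(graph)   # sparse matrix; mst.nonzero() gives the tree edges
```

As for the last step, my (unverified) understanding is that each diagonal MST edge is replaced by an axis-aligned L-shaped path through a corner point on the Hanan grid, after which overlapping segments are merged and the junctions become Steiner points.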
I'm working on my bachelor thesis (in Computer Science), and right now I'm stuck on finding the shortest path between two points on a manifold 3D triangular mesh. I have already read about MMP, but it computes a distance function $d(x)$ between a given point and every vertex $x$ on the mesh.
I've learned that the problem I'm solving is called computing geodesics, but what I really couldn't find is a good algorithm that uses A* to find the shortest path between two given vertices.
I also 'invented' an algorithm that uses A*, with a Euclidean-distance heuristic and a correction step after finding a new point on any edge.
I also have the edges stored in a half-edge structure.
So my main idea is this:
We find the closest edge with the A* algorithm, and on this edge we find the point minimizing $f(x) + g(x)$, where $f$ is our current distance and $g$ is the heuristic (Euclidean distance).
Every time we find a new edge, we unfold the current mesh into the plane and find the closest path to our starting point.
So now my questions:
Do you know of any research papers on this problem?
Why has nobody written about an algorithm that uses A*?
What is your opinion of the algorithm I proposed?
Here are some papers and tools related to finding geodesics (or approximations) on a surface mesh:
A Survey of Algorithms for Geodesic Paths and Distances
You Can Find Geodesic Paths in Triangle Meshes by Just Flipping Edges (code)
The Vector Heat Method (code)
You can find more papers in the survey paper.
I implemented the algorithm you mentioned (MMP) a long time ago, and it's quite difficult to get right and quite time-consuming, since the number of splits along an edge grows quickly.
I am no expert in the matter, so read critically. Also, sorry, this is more of a comment than an answer...
First, you should clarify some things:
Is the mesh convex or concave?
Must the path always stay on the surface, or can it fly between faces on the outside (if the mesh is concave), though never inside?
Are the start/end points on the edges of faces, or can they be inside a face?
Assuming a concave mesh, points on edges, and surface-only paths...
I think the graph A* approach is unusable as-is, since there are infinitely many possible paths between a point and any edge of the same face, so how would you test all of them?
If you really want A*, you can do something similar to raster A*:
resample all your edges into more points,
either n points per edge, or some density such as 10 points per average edge length, or some detail size;
run graph A* on the resampled points (do not handle them as edges anymore).
However, this will only produce something close to the shortest path, so to improve the accuracy you should recursively resample the edges near each used point with higher and higher density, until the distance between resampled points gets smaller than your accuracy.
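A tiny sketch of that resampling step (the density parametrization and the function name are my own; building the A* graph over the samples, by connecting samples whose edges share a triangle, is left out):

```python
import numpy as np

def resample_edge(p0, p1, density=10.0):
    """Sample points along edge p0-p1, roughly `density` samples per unit length."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    n = max(2, int(np.ceil(np.linalg.norm(p1 - p0) * density)) + 1)
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1.0 - t) * p0 + t * p1   # includes both endpoints
```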
Another option would be something similar to CCD (cyclic coordinate descent):
create a plane that goes through your 2 points and the center of your mesh;
create a path that goes through all intersections of the plane and the faces between the 2 points (use the shorter of the 2 options);
iteratively move the intersection points back and forth, using a greedy approach, to shorten the path.
However, this might get stuck in a local minimum... You could use search/fitting approaches instead, but those get very slow as the number of faces increases.
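Here is a rough sketch of that greedy refinement, assuming each interior path point is constrained to a known mesh edge; the sampling-based 1D minimization and all names are my own illustration:

```python
import numpy as np

def refine_path(path, edges, samples=32, iters=100):
    # path:  list of 3D points; path[0] and path[-1] are the fixed endpoints
    # edges: edges[i] = (a, b), the endpoints of the mesh edge that path[i] lies on
    path = [np.asarray(p, float) for p in path]
    for _ in range(iters):
        moved = False
        for i in range(1, len(path) - 1):
            a = np.asarray(edges[i][0], float)
            b = np.asarray(edges[i][1], float)
            # Sample candidate positions along the edge and keep the best one.
            t = np.linspace(0.0, 1.0, samples)[:, None]
            cands = (1.0 - t) * a + t * b
            costs = (np.linalg.norm(cands - path[i - 1], axis=1)
                     + np.linalg.norm(cands - path[i + 1], axis=1))
            best = cands[np.argmin(costs)]
            if not np.allclose(best, path[i]):
                path[i], moved = best, True
        if not moved:
            break   # converged (possibly to a local minimum)
    return path
```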
I have a feeling you might also be able to do this using RANSAC...
From my point of view, the first A* approach is the most promising. You just need a linked list of points per edge and one cost counter per point; from this you can easily encode even the recursive improvement of accuracy. It can even be done in-place, so no reallocation is needed during the recursion. The algorithm is not complicated, so you should have no problem implementing it, and the result is guaranteed, which is not the case with the other approaches I mention. Another plus is that it can be used even if the start/end point does not lie on an edge.
I have a floorplan with lots of d3.js polygon objects on it that represent booths. I am wondering what the best approach is to finding a path between 2 objects that doesn't overlap other objects. The use case is that we have booths and want to show the user how to walk from point A to point B most efficiently. We can assume the path must contain only 90- or 45-degree turns.
We took a shot at using Dijkstra, but the scale of the problem seems to be getting away from us.
An example snapshot of our system:
Our constraint is that this needs to run in the browser. It would be nice if it worked well with d3.js.
Since the layout is a matrix (or nested matrices), this is not a Dijkstra problem; it is simpler than that. The technical name for the problem is "Manhattan routing". Rather than give a code algorithm, I will show you an example of the optimal route (the blue line) in the following diagram. From this it should be obvious what the algorithm is:
Note that there is a subtle nuance here: you always want to maximize the number of jogs, because even though the overall shape is a matrix, at each corner the person will actually walk diagonally (think of a person cutting diagonally across a four-way intersection). Therefore, simply going north and then west is wrong, because you would only get to cut one corner; on the route shown you get to cut 5 corners.
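To make the jog-maximizing idea concrete, here is a toy sketch that interleaves the horizontal and vertical unit moves as evenly as possible (grid coordinates and the function name are my own illustration, not part of the original answer):

```python
def staircase(a, b):
    (x0, y0), (x1, y1) = a, b
    sx = 1 if x1 >= x0 else -1
    sy = 1 if y1 >= y0 else -1
    nx, ny = abs(x1 - x0), abs(y1 - y0)
    path, x, y = [(x0, y0)], x0, y0
    rx = ry = 0   # unit moves taken so far on each axis
    while (x, y) != (x1, y1):
        # Step on whichever axis is behind its proportional share;
        # this spreads the turns out, maximizing the corners to cut.
        if ny == 0 or (rx < nx and rx * ny <= ry * nx):
            x += sx; rx += 1
        else:
            y += sy; ry += 1
        path.append((x, y))
    return path
```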
This problem is known as finding the shortest path between two points among polygonal obstacles, and it has been studied extensively in the literature; see here for one example. The standard approach converts the problem into a graph problem and then runs Dijkstra. To do this:
Each vertex of each polygon is a vertex in your graph.
The start and end points are also vertices in the graph.
There is an edge between two vertices if they are visible to each other; to test this, you can use triangulation algorithms.
The weight of each edge is the Euclidean distance between its two endpoints.
Now you are ready to run any shortest-path algorithm. The hard part is the triangulation; I think the Triangle library fits your requirements. An easier route is to search the web with the keywords from the first line to find an implementation. I didn't link to any implementation because I think it is better to describe it algorithmically, to stay useful to future readers.
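As a rough illustration of the construction above (not production code), here is a sketch that uses Shapely for the visibility test and a plain Dijkstra; the obstacle representation and all names are my own assumptions:

```python
import heapq
import math
from itertools import combinations
from shapely.geometry import LineString

def shortest_route(start, goal, obstacles):
    # obstacles: list of shapely Polygons (the booths); start/goal: (x, y) tuples
    nodes = [start, goal]
    for poly in obstacles:
        nodes.extend(poly.exterior.coords[:-1])   # polygon corners
    nodes = list(dict.fromkeys(nodes))            # drop duplicates

    def visible(p, q):
        seg = LineString([p, q])
        # Blocked if the segment passes through any obstacle's interior.
        return not any(seg.crosses(poly) or seg.within(poly) for poly in obstacles)

    # Build the visibility graph with Euclidean edge weights.
    graph = {p: [] for p in nodes}
    for p, q in combinations(nodes, 2):
        if visible(p, q):
            d = math.dist(p, q)
            graph[p].append((q, d))
            graph[q].append((p, d))

    # Plain Dijkstra (assumes a route exists).
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue   # stale queue entry
        for v, w in graph[u]:
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))

    route, u = [goal], goal
    while u != start:
        u = prev[u]
        route.append(u)
    return route[::-1]
```

The brute-force O(n²) visibility loop is the bottleneck here; the triangulation-based tests mentioned above are the faster way to do it.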
I have a detailed 2D polygon (representing a geographic area) that is defined by a very large set of vertices. I'm looking for an algorithm that will simplify and smooth the polygon (reducing the number of vertices), with the constraint that the resulting polygon must contain all the vertices of the detailed polygon.
For context, here's an example of the edge of one complex polygon:
My research:
I found the Ramer–Douglas–Peucker algorithm, which will reduce the number of vertices, but the resulting polygon will not contain all of the original polygon's vertices. See the Ramer–Douglas–Peucker article on Wikipedia.
I considered expanding the polygon (I believe this is also known as outward polygon offsetting). I found these questions: Expanding a polygon (convex only) and Inflating a polygon. But I don't think this will substantially reduce the detail of my polygon.
Thanks for any advice you can give me!
Edit
As of 2013, most links below are no longer functional. However, I've found the cited paper, algorithm included, still available on this (very slow) server.
Here you can find a project dealing with exactly this issue. Although it works primarily with an area "filled" by points, it can be set to work with a "perimeter"-type definition like yours.
It uses a k-nearest-neighbors approach for calculating the region.
Samples:
Here you can request a copy of the paper.
Apparently they planned to offer an online service for requesting calculations, but I didn't test it, and it probably isn't running.
HTH!
I think Visvalingam's algorithm can be adapted for this purpose: skip any removal that would reduce the polygon's area.
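A minimal sketch of that adaptation, assuming a simple counter-clockwise polygon: it only removes concave vertices (whose removal can only grow the polygon, so every original vertex stays inside), and it ignores the self-intersection checks a robust version would need:

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def simplify_outward(points, target):
    # points: simple polygon in counter-clockwise (CCW) order
    pts = list(points)
    while len(pts) > target:
        best = None
        for i in range(len(pts)):
            u, v, w = pts[i - 1], pts[i], pts[(i + 1) % len(pts)]
            c = cross(u, v, w)
            # In a CCW polygon, c < 0 means v is concave: removing it moves
            # the boundary outward, so v stays inside the simplified polygon.
            if c < 0:
                area = -c / 2.0          # Visvalingam's "effective area"
                if best is None or area < best[0]:
                    best = (area, i)
        if best is None:
            break   # only convex vertices remain; nothing safe to remove
        del pts[best[1]]
    return pts
```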
I had a very similar problem: I needed an inflating simplification of polygons.
I used a simple algorithm: remove a concave point (this will increase the polygon's area), or remove a convex edge (between 2 convex points) and extend the adjacent edges. Either of these 2 operations removes one point from the polygon.
I chose to remove the point or edge that leads to the smallest area variation. You can repeat this process until the simplification is OK for you (for example, no more than 200 points).
The 2 main difficulties were making the algorithm fast (by avoiding computing the vertex/edge removal variation twice, and by keeping the candidate operations sorted) and avoiding the introduction of self-intersections in the process (not very easy to do, or to explain, but possible with limited computational complexity).
In fact, after looking more closely, it is a similar idea to Visvalingam's, adapted for edge removal.
That's an interesting problem! I've never tried anything like this, but here's an idea off the top of my head... apologies if it makes no sense or wouldn't work :)
Calculate a convex hull, that might be way too big / imprecise
Divide the hull into N slices, for example joining each one of the hull's vertices to the center
Calculate the intersection of your object with each slice
Repeat recursively for each intersection (calculating the intersection's hull, etc)
Each level of recursion should give a better approximation... when you've reached a satisfying level, merge all the hulls from that level to get the final polygon.
Does that sound like it could do the job?
To some degree I'm not sure what you are trying to do, but it seems you have two very good answers. One is Ramer–Douglas–Peucker (DP) and the other is computing the alpha shape (also called a concave hull, non-convex hull, etc.). I found a more recent paper describing alpha shapes and linked it below.
I personally think DP with polygon expansion is the way to go. I'm not sure why you think it won't substantially reduce the number of vertices. With DP you supply a tolerance, and you can make it anything you want, to the point where you end up with a triangle no matter what your input is. Picking this tolerance can be hard, but in your case I think it's the best method. You should be able to determine it from the size of the largest detail you want to disappear, either by direct testing or by calculating it from your source data.
http://www.it.uu.se/edu/course/homepage/projektTDB/ht13/project10/Project-10-report.pdf
I've written a simple modification of Douglas-Peucker that might be helpful to anyone having this problem in the future: https://github.com/prakol16/rdp-expansion-only
It's identical to DP, except that it pushes a line segment outwards a bit whenever the points it would remove fall outside the simplified polygon. This guarantees that the simplified polygon contains the entire original polygon, yet it has almost the same number of line segments as the standard DP output and is usually reasonably good at approximating the original shape.
I'm having trouble finding the right pathfinding algorithm for some AI I'm working on.
I have players on a pitch, moving around freely (not stuck to a grid), but confined to moving in 8 directions (N, NE, E, etc.).
I was working on using A* and a graph for this. But I realised that every node on the graph is equally far apart and all the edges have the same weight, since the pitch is rectangular. And the number of nodes is enormous (it's a large pitch, and they can move from any pixel to another).
I figured there must be another algorithm optimised for this sort of thing?
I would break the pitch down into a 10x10-pixel grid. Your routing does not have to be as finely grained as the rest of your system, and the coarser grid makes the algorithm use far less memory.
As Chris suggests above, choosing the right heuristic is the key to getting the algorithm working right for you.
If players move in straight lines between points on your grid, you really don't need A* at all: Bresenham's line algorithm will produce a straight-line path very quickly.
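For reference, here is the classic integer Bresenham line (error-accumulation form); note that when both error branches fire in one step it moves diagonally, which happens to match the 8-direction movement constraint:

```python
def bresenham(x0, y0, x1, y1):
    """Integer grid cells along the line from (x0, y0) to (x1, y1)."""
    dx, sx = abs(x1 - x0), 1 if x0 < x1 else -1
    dy, sy = -abs(y1 - y0), 1 if y0 < y1 else -1
    err = dx + dy
    cells = []
    while True:
        cells.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy and x0 != x1:   # horizontal step is due
            err += dy
            x0 += sx
        if e2 <= dx and y0 != y1:   # vertical step is due (may combine into a diagonal)
            err += dx
            y0 += sy
    return cells
```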
You could also weight a direction based on another heuristic: instead of weighting paths purely by actual distance, scale the weight by another factor, such as closeness to another player, so that players favour routes that will not collide with other players.
The A* algorithm should work well like this.
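If you do go the coarse-grid A* route, a minimal sketch might look like this; the octile distance is the natural admissible heuristic for 8-direction movement (grid size, the `blocked` set, and function names are my own assumptions):

```python
import heapq
import math

def octile(a, b):
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return max(dx, dy) + (math.sqrt(2) - 1.0) * min(dx, dy)

def astar(start, goal, blocked, width, height):
    # 8-connected moves: straight steps cost 1, diagonal steps cost sqrt(2).
    steps = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    g = {start: 0.0}
    came = {}
    heap = [(octile(start, goal), 0.0, start)]
    while heap:
        _, d, cur = heapq.heappop(heap)
        if cur == goal:
            path = [cur]                 # reconstruct by walking parents back
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        if d > g.get(cur, math.inf):
            continue                     # stale queue entry
        for dx, dy in steps:
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height) or nxt in blocked:
                continue
            nd = d + (math.sqrt(2) if dx and dy else 1.0)
            if nd < g.get(nxt, math.inf):
                g[nxt], came[nxt] = nd, cur
                heapq.heappush(heap, (nd + octile(nxt, goal), nd, nxt))
    return None   # no path
```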
I think you should try Jump Point Search. It is a very fast pathfinding algorithm for 8-direction grids.
Here is a blog post that briefly describes Jump Point Search.
And this is its academic paper: "Online Graph Pruning for Pathfinding on Grid Maps".
Besides that, there are some interesting videos on YouTube.
I hope this helps.
Given are two sets of three-dimensional points: a source set and a destination set. The number of points in each set is arbitrary (it may be zero). The task is to assign one or no source point to every destination point so that the sum of all distances is minimal. If there are more source points than destination points, the additional points are to be ignored.
There is a brute-force solution to this problem, but since the number of points may be large, it is not feasible. I've heard this problem is easy in 2D with equal set sizes, but sadly those preconditions do not hold here.
I'm interested in both approximations and exact solutions.
Edit: Haha, yes, I suppose it does sound like homework. Actually, it's not: I'm writing a program that receives the positions of a large number of cars, and I'm trying to map them to their respective parking cells. :)
One way you could approach this problem is to treat it as the classical assignment problem: http://en.wikipedia.org/wiki/Assignment_problem
You treat the points as the vertices of a graph, and the weight of each edge is the distance between its two points. Because the fastest algorithms assume you are looking for a maximum matching (not a minimum, as in your case) and that the weights are non-negative, you can redefine the weights as, e.g.:
weight(A, B) = bigNumber - distance(A, B)
where bigNumber is bigger than your longest distance.
Obviously you end up with a bipartite graph. Then you use one of the standard algorithms for maximum weighted bipartite matching (lots of resources on the web, e.g. http://valis.cs.uiuc.edu/~sariel/teach/courses/473/notes/27_matchings_notes.pdf, or Wikipedia for an overview: http://en.wikipedia.org/wiki/Perfect_matching#Maximum_bipartite_matchings). This way you end up with an O(NM·max(N,M)) algorithm, where N and M are the sizes of your sets of points.
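In practice, SciPy's implementation of the Hungarian method solves the minimum-cost version directly, so the bigNumber trick is unnecessary there. A sketch (the random arrays are stand-ins for real data):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

sources = np.random.rand(50, 3)        # e.g., car positions
destinations = np.random.rand(40, 3)   # e.g., parking cells

cost = cdist(sources, destinations)    # pairwise Euclidean distances
rows, cols = linear_sum_assignment(cost)
# sources[rows[i]] is assigned to destinations[cols[i]]; with a rectangular
# cost matrix, only min(N, M) pairs are matched, so extra points are ignored.
total = cost[rows, cols].sum()
```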
Off the top of my head: a spatial sort followed by simulated annealing.
Grid the space and sort the sets into spatial cells.
Solve the O(NM) problem within each cell, then within cell neighborhoods, and so on, to get a trial matching.
Finally, run lots of cycles of simulated annealing in which you randomly alter matches, so as to explore the nearby space.
This is a heuristic: it gets you a good answer, though not necessarily the best, and it should be fairly efficient thanks to the initial grid sort.
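A hedged sketch of that annealing pass, starting from any trial matching such as the grid-based one above (the `dist` callback and the cooling parameters are my own illustration):

```python
import math
import random

def anneal(match, dist, iters=100_000, t0=1.0, cooling=0.9999):
    # match[i] = index of the destination assigned to source i
    # dist(i, j) = distance from source i to destination j
    cur = sum(dist(i, j) for i, j in enumerate(match))
    t = t0
    for _ in range(iters):
        i, k = random.sample(range(len(match)), 2)
        # Cost change from swapping the two assignments.
        delta = (dist(i, match[k]) + dist(k, match[i])
                 - dist(i, match[i]) - dist(k, match[k]))
        # Accept improvements always; accept worsenings with a
        # temperature-dependent probability to escape local minima.
        if delta < 0 or random.random() < math.exp(-delta / t):
            match[i], match[k] = match[k], match[i]
            cur += delta
        t *= cooling
    return match, cur
```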
Although I don't really have an answer to your question, I can suggest looking into the following topics. (I know very little about this, but I encountered it previously on Stack Overflow.)
Nearest Neighbour Search
kd-tree
Hope this helps a bit.