We have some polylines (list of points, has start and end point, not cyclic) and polygons (list of points, cyclic, no such thing as endpoints).
We want to map each polyline to a new polyline and each polygon to a new polygon so the total number of edges is small enough.
Let's say the number of edges originally is N, and we want our result to have M edges. N is much larger than M.
Polylines need to keep their start and end points, so each contributes at least 1 edge (a polyline's edge count is one less than its vertex count). Polygons need to remain polygons, so each contributes at least 3 edges (a polygon's edge count equals its vertex count). M will be at least large enough to satisfy this requirement.
The outputs should be as close as possible to the inputs. This would end up being an optimization problem of minimizing some metric to within some small tolerance of the true optimal solution. Originally I'd have used the area of the symmetric difference of the original and result (area between), but if another metric makes this easier to do I'll gladly take that instead.
It's okay if the results only include vertices in the original, the fit will be a little worse but it might be necessary to keep the time complexity down.
Since I'm asking for an algorithm, it'd be nice to also see an implementation. I'll likely have to re-implement it for where I'll be using it anyway, so details like what language or what data structures won't matter too much.
As for how good the approximation needs to be: about what you'd expect from getting a vector image out of a bitmap image. The actual use is a tool for a game, though; there are some strange details specific to that game, which is why the output edge count is fixed rather than the tolerance.
It's pretty hard to find any information on this kind of thing, so without even providing a full workable algorithm, just some pointers would be very much appreciated.
The Ramer–Douglas–Peucker algorithm (mentioned in the comments) is definitely good, but it has some disadvantages:
It requires an open polyline as input; for a closed polygon one has to fix an arbitrary start point, which either decreases the final quality or forces you to test many start points and hurts performance.
The vertices of the simplified polyline are a subset of the original polyline's vertices; other placements are not considered. This permits very fast implementations, but again reduces the precision of the simplified polyline.
An alternative is to take the well-known algorithm for simplifying triangular meshes, Surface Simplification Using Quadric Error Metrics, and adapt it to polylines:
distances to the planes containing triangles are replaced by distances to the lines containing polyline segments,
the quadratic forms lose one dimension if the polyline is two-dimensional.
The rest of the algorithm is kept, including the priority queue (min-heap) of edge contractions ordered by the estimated distortion each contraction introduces in the polyline.
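For concreteness, here is a small 2D sketch of that scheme in Python (my own illustration, not MeshLib's code). To keep it short it rescans all edges for the cheapest contraction on every step instead of maintaining the min-heap described above, and it pins the two endpoints of an open polyline; the function names are mine.

```python
# Sketch of quadric-style edge contraction for an open 2D polyline (not MeshLib's code).
# Each vertex carries a quadric = sum of squared distances to the lines of its incident
# segments; contracting edge (i, i+1) adds the two quadrics and places the merged vertex
# where the combined error is minimal. The endpoints of the polyline are kept fixed.
import numpy as np

def segment_quadric(p, q):
    """Quadric (A, b, c) with error(x) = x.A.x - 2*b.x + c = squared distance
    from x to the infinite line through p and q."""
    d = q - p
    n = np.array([-d[1], d[0]])
    norm = np.linalg.norm(n)
    if norm == 0.0:
        return np.zeros((2, 2)), np.zeros(2), 0.0
    n /= norm
    A = np.outer(n, n)
    return A, A @ p, p @ A @ p

def simplify_polyline(points, target_vertices):
    pts = [np.asarray(p, dtype=float) for p in points]
    # accumulate per-vertex quadrics from the incident segments
    quads = [(np.zeros((2, 2)), np.zeros(2), 0.0) for _ in pts]
    for i in range(len(pts) - 1):
        A, b, c = segment_quadric(pts[i], pts[i + 1])
        for j in (i, i + 1):
            qA, qb, qc = quads[j]
            quads[j] = (qA + A, qb + b, qc + c)

    while len(pts) > max(target_vertices, 2):
        best = None
        for i in range(len(pts) - 1):            # candidate contraction of edge (i, i+1)
            A1, b1, c1 = quads[i]
            A2, b2, c2 = quads[i + 1]
            A, b, c = A1 + A2, b1 + b2, c1 + c2
            if i == 0:                           # keep the start point fixed
                x = pts[0]
            elif i + 1 == len(pts) - 1:          # keep the end point fixed
                x = pts[-1]
            elif np.linalg.cond(A) < 1e8:        # minimiser of the combined quadric
                x = np.linalg.solve(A, b)
            else:                                # (near-)degenerate quadric: use the midpoint
                x = 0.5 * (pts[i] + pts[i + 1])
            cost = x @ A @ x - 2 * b @ x + c
            if best is None or cost < best[0]:
                best = (cost, i, x, (A, b, c))
        _, i, x, q = best
        pts[i], quads[i] = x, q                  # contract: merge vertex i+1 into vertex i
        del pts[i + 1]
        del quads[i + 1]
    return pts
```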
Here is an example of this algorithm application:
Red is the original polyline and blue is the simplified one; note that not all of its vertices lie on the original polyline, while the general shape is preserved as well as possible with so few line segments.
it'd be nice to also see an implementation.
One can find an implementation in MeshLib, see MRPolylineDecimate.h/.cpp
I am certain there is already some algorithm that does what I need, but I am not sure what phrase to Google, or what is the algorithm category.
Here is my problem: I have a polyhedron made up of several contacting blocks (hyperslabs), i.e., the edges are axis-aligned and the angles between edges are 90°. There may be holes inside the polyhedron.
I want to break up this concave polyhedron into as few convex rectangular axis-aligned whole blocks as possible (if the original polyhedron is convex and has no holes, then it already is such a block and is therefore the solution). To illustrate, here are some 2-D images I made (but I need the solution for 3-D, and preferably N-D):
I have this geometry:
One possible breakup into blocks is this:
But the one I want is this (with as few blocks as possible):
I have the impression that an exact algorithm may be too expensive (is this problem NP-hard?), so an approximate algorithm is suitable.
One detail that may make the problem easier, so that there could be a more specialized algorithm for it, is that all edge lengths are multiples of some fixed value (you may think of all edge lengths as integers, or of the geometry as being made up of uniform tiny squares, or voxels).
Background: this is the structured grid discretization of a PDE domain.
What algorithm can solve this problem? What class of algorithms should I search for?
Update: Before you upvote this answer, I want to point out that it is slightly off-topic. The original poster has a question about the decomposition of a polyhedron whose faces are axis-aligned: given such a polyhedron, decompose it into convex parts, in 3D and possibly nD. My answer is about the decomposition of a general polyhedron. So when I give an answer with a given implementation, that answer applies to the special case of axis-aligned polyhedra, but there may exist a better implementation for the axis-aligned case. And when my answer says that a problem for generic polyhedra is NP-complete, there may still exist a polynomial solution for the special case of axis-aligned polyhedra. I do not know.
Now here is my (slightly off-topic) answer, below the horizontal rule...
The CGAL C++ library has an algorithm that, given a 2D polygon, can compute the optimal convex decomposition of that polygon. The method is described in the 2D Polygon Partitioning chapter of the manual and is named CGAL::optimal_convex_partition_2. I quote the manual:
This function provides an implementation of Greene's dynamic programming algorithm for optimal partitioning [2]. This algorithm requires O(n^4) time and O(n^3) space in the worst case.
In the bibliography of that CGAL chapter, the article [2] is:
[2] Daniel H. Greene. The decomposition of polygons into convex parts. In Franco P. Preparata, editor, Computational Geometry, volume 1 of Adv. Comput. Res., pages 235–259. JAI Press, Greenwich, Conn., 1983.
It seems to be exactly what you are looking for.
Note that the same chapter of the CGAL manual also mentions an approximation algorithm, hence not optimal, that runs in O(n): CGAL::approx_convex_partition_2.
Edit, about the 3D case:
In 3D, CGAL has another chapter about Convex Decomposition of Polyhedra. The second paragraph of the chapter says "this problem is known to be NP-hard [1]". The reference [1] is:
[1] Bernard Chazelle. Convex partitions of polyhedra: a lower bound and worst-case optimal algorithm. SIAM J. Comput., 13:488–507, 1984.
CGAL has a method CGAL::convex_decomposition_3 that computes a non-optimal decomposition.
I have the feeling your problem is NP-hard. I suggest a first step might be to break the figure into sub-rectangles along all hyperplanes. So in your example there would be three hyperplanes (lines) and four resulting rectangles. Then the problem becomes one of recombining rectangles into larger rectangles to minimize the final number of rectangles. Maybe 0-1 integer programming?
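To make the 0-1 integer programming idea concrete, here is a hedged sketch of the 2D voxel case, assuming the PuLP library. Enumerating every candidate rectangle, as done here, is only viable for small grids; in practice you would prune the candidate set (e.g. to maximal rectangles).

```python
# Sketch: minimum rectangle partition of a 2D boolean grid posed as a 0-1 integer program.
# One binary variable per candidate rectangle; every filled cell must be covered exactly
# once; minimise the number of chosen rectangles.
import numpy as np
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

def rectangle_partition(grid):
    """grid: 2D boolean array, True where the figure is filled.
    Returns half-open rectangles (r0, c0, r1, c1) partitioning the filled cells."""
    rows, cols = grid.shape
    candidates = []                              # every fully-filled axis-aligned rectangle
    for r0 in range(rows):
        for c0 in range(cols):
            for r1 in range(r0 + 1, rows + 1):
                for c1 in range(c0 + 1, cols + 1):
                    if grid[r0:r1, c0:c1].all():
                        candidates.append((r0, c0, r1, c1))

    prob = LpProblem("rectangle_partition", LpMinimize)
    x = [LpVariable(f"rect_{i}", cat=LpBinary) for i in range(len(candidates))]
    prob += lpSum(x)                             # objective: number of rectangles used
    for r in range(rows):
        for c in range(cols):
            if grid[r, c]:                       # each filled cell is covered exactly once
                prob += lpSum(x[i] for i, (r0, c0, r1, c1) in enumerate(candidates)
                              if r0 <= r < r1 and c0 <= c < c1) == 1
    prob.solve()
    return [candidates[i] for i, var in enumerate(x) if var.value() > 0.5]

# toy example: an L-shaped figure of three unit cells needs two rectangles
figure = np.array([[1, 0],
                   [1, 1]], dtype=bool)
print(rectangle_partition(figure))
```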
I think dynamic programming might be your friend.
The first step I see is to divide the polyhedron into a trivial collection of blocks such that every possible face is available (i.e. slice and dice it into the smallest pieces possible). This should be trivial because everything is an axis-aligned box, so k-tree-like solutions should be sufficient.
This seems reasonable because I can look at its cost. The cost of doing this is that I "forget" the original configuration of hyperslabs, choosing to replace it with a new set of hyperslabs. The only way this could lead me astray is if the original configuration had something to offer for the solution. Given that you want an "optimal" solution for all configurations, we have to assume that the original structure isn't very helpful. I don't know if it can be proven that this original information is useless, but I'm going to make that assumption in this answer.
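For reference, here is a small sketch of that first slicing step, under my own assumption that the input is a list of N-D boxes given as (lower corner, upper corner) pairs:

```python
# Sketch of the "slice and dice" step: cut the union of axis-aligned boxes along every
# boundary hyperplane of every box, keeping only the atomic cells that lie inside the union.
# Works in any dimension; boxes are (lower_corner, upper_corner) tuples of equal length.
from itertools import product

def atomic_cells(boxes):
    dim = len(boxes[0][0])
    # all boundary coordinates per axis define the slicing hyperplanes
    coords = [sorted({lo[d] for lo, hi in boxes} | {hi[d] for lo, hi in boxes})
              for d in range(dim)]

    def inside(cell_lo, cell_hi):
        # a cell is kept if it lies inside at least one input box
        return any(all(lo[d] <= cell_lo[d] and cell_hi[d] <= hi[d] for d in range(dim))
                   for lo, hi in boxes)

    cells = []
    # iterate over every combination of consecutive slab intervals along each axis
    for idx in product(*[range(len(c) - 1) for c in coords]):
        lo = tuple(coords[d][idx[d]] for d in range(dim))
        hi = tuple(coords[d][idx[d] + 1] for d in range(dim))
        if inside(lo, hi):
            cells.append((lo, hi))
    return cells

# example: the two 2D blocks [0,2]x[0,3] and [2,5]x[0,2] produce three atomic cells
print(atomic_cells([((0, 0), (2, 3)), ((2, 0), (5, 2))]))
```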
The problem has now been reduced to a graph problem similar to a constrained spanning forest problem. I think the most natural way to view the problem is to think of it as a graph coloring problem (as long as you can avoid confusing it with the more famous graph coloring problem of trying to color a map without two states of the same color sharing a border). I have a graph of nodes (small blocks), each of which I wish to assign a color (which will eventually be the "hyperslab" which covers that block). I have the constraint that I must assign colors in hyperslab shapes.
Now a key observation is that not all possibilities must be considered. Take the final colored graph we want to see. We can partition this graph in any way we please by breaking any hyperslab which crosses the partition into two pieces. However, not every partition is meaningful. The only partitions that make sense are axis aligned cuts, which always break a hyperslab into two hyperslabs (as opposed to any more complicated shape which could occur if the cut was not axis aligned).
Now, cutting is the reverse of the problem we're really trying to solve; it is in fact what we did in the first step, whereas what we want is the optimal merging that undoes those cuts. However, this shows a key feature we will use in dynamic programming: the only features that matter for merging are on the exposed surface of a cut. Once we find the optimal way of forming the central region, its interior plays no further part in the algorithm.
So let's start by building a collection of hyperslab-spaces, which can define not just a plain hyperslab, but any configuration of hyperslabs such as those with holes. Each hyperslab-space records:
The number of leaf hyperslabs contained within it (this is the number we are eventually going to try to minimize)
The internal configuration of hyperslabs.
A map of the surface of the hyperslab-space, which can be used for merging.
We then define a "merge" rule to turn two or more adjacent hyperslab-spaces into one:
Hyperslab-spaces may only be combined into new hyperslab-spaces (so you need to combine enough pieces to create a new hyperslab, not some more exotic shape)
Merges are done simply by comparing the surfaces. If there are features with matching dimensionalities, they are merged (because it is trivial to show that, if the features match, it is always better to merge hyperslabs than not to)
Now this is enough to solve the problem with brute force, though brute force will certainly be intractable. However, we can add an additional rule which drops this cost dramatically: one hyperslab-space is deemed "better" than another if they cover the same space and have exactly the same features on their surface; in this case, the one with fewer hyperslabs inside it is the better choice.
Now the idea here is that, early on in the algorithm, you will have to keep track of all sorts of combinations, just in case they are the most useful. However, as the merging algorithm makes things bigger and bigger, it will become less likely that internal details will be exposed on the surface of the hyperslab-space. Consider
+===+===+===+---+---+---+---+
| : : A | X : : : :
+---+---+---+---+---+---+---+
| : : B | Y : : : :
+---+---+---+---+---+---+---+
| : : | : : : :
+===+===+===+ +---+---+---+
Take a look at the left-side box, which I have taken the liberty of marking with stronger lines. When it comes to merging this box with the rest of the world, the AB:XY surface is all that matters. As such, there are only a handful of merge patterns which can occur at this surface:
No merges possible
A:X allows merging, but B:Y does not
B:Y allows merging, but A:X does not
Both A:X and B:Y allow merging (two independent merges)
We can merge a larger square, AB:XY
There are many ways to cover the 3x3 square (at least a few dozen). However, we only need to remember the best way to achieve each of those merge processes. Thus once we reach this point in the dynamic programming, we can forget about all of the other combinations that can occur, and only focus on the best way to achieve each set of surface features.
In fact, this sets up the problem for an easy greedy algorithm which explores whichever merges provide the best promise for decreasing the number of hyperslabs, always remembering the best way to achieve a given set of surface features. When the algorithm is done merging, whatever that final hyperslab-space contains is the optimal layout.
I don't know if it is provable, but my gut instinct thinks that this will be an O(n^d) algorithm where d is the number of dimensions. I think the worst case solution for this would be a collection of hyperslabs which, when put together, forms one big hyperslab. In this case, I believe the algorithm will eventually work its way into the reverse of a k-tree algorithm. Again, no proof is given... it's just my gut instinct.
You can try a constrained Delaunay triangulation. It gives very few triangles.
Are you able to determine the equations for each line?
If so, maybe you can compute the intersection points between those lines. Then, sweeping along one axis, look for a coordinate value shared by more than two points; there you should "draw" a cut. (At the beginning of the sweep there will be zero points, then two (your first pair); when you find more than two points, you will be able to determine which points belong to the first polygon and which to the second.)
E.g., if you have these lines:
verticals (red):
x = 0, x = 2, x = 5
horizontals (yellow):
y = 0, y = 2, y = 3, y = 5
If you start sweeping along the X axis, you first get p1 and p2 (and we know which line equation each belongs to), then p3, p4, p5 and p6. Here you can check which of those points share a line with p1 and p2; in this case p4 and p5. So your first new polygon is p1, p2, p4, p5.
Now we save the "new" pair of points (p3, p6) and continue the sweep until the next points. Here we get p7, p8, p9 and p10; looking for the points which share a line with the previous points (p3 and p6), we get p7 and p10. Those are the points of your second polygon.
When we repeat the exercise along the Y axis, we first get two points (p3, p7) and then just three (p1, p2, p8). In that case we should use the farthest point (p8) on the same line as the newly discovered point.
Since we are only using line equations and points, in two or more dimensions the procedure should be very similar.
ps, sorry for my english :S
I hope this helps :)
Given two 3d objects, how can I find if one fits inside the second (and find the location of the object in the container).
The object should be translated and rotated to fit the container - but not modified otherwise.
Additional complications:
The same situation - but look for the best fit solution, even if it's not a proper match (minimize the volume of the object that doesn't fit in the container)
Support for elastic objects - find the best fit while minimizing the "distortion" in the objects
This is a pretty general question - and I don't expect a complete solution.
Any pointers to relevant papers / articles / libraries / tools would be useful.
Here is one perhaps less than ideal method.
You could try fixing the position (in 3D space) of one shape, placing the other shape on top of it, and then creating links that connect a point on one shape to a point on the other shape. Then simulate what happens when the links are pulled equally tight, causing the shape that isn't fixed to rotate and translate until it's stable.
If the fit is loose enough, you could use only 3 links (the bare minimum number for 3D) and try every possible combination. However, for tighter fits you'll need more links, perhaps enough to place them on every point of the shape with the fewest points. That means you'll need some method to determine how to place the links, which is not trivial.
This seems like quite a hard problem. A probable approach is to have some heuristic that suggests a transformation and then to check whether it is a good one. If the transformation moves the object only slightly outside the container's interior (e.g. in one place), slightly adjust the transformation and test it again. If the object is far outside (e.g. on the same or all axes, on both sides), make a new heuristic guess.
Just a general idea for a heuristic: rasterise both objects with the same pixel (voxel) size; this can be an octree of the object's volume. Build a connectivity graph between the pixels and check subgraph isomorphism between the two graphs. If a subgraph is found, that position is a candidate for testing.
This approach also supports 90° rotations.
Some tests can even be done on the graphs themselves: if all volume neighbours of the subgraph are present in the larger graph, then the object fits inside.
In general this is a "refined" bounding-box approach.
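As a rough illustration of the rasterisation idea (my own sketch, in Python, translations only; the 90° rotations would be handled by repeating the search on rotated copies of the object grid):

```python
# Sketch of the voxel-grid heuristic: rasterise container and object into boolean 3D
# arrays with the same voxel size, then brute-force all integer translations and accept
# a placement if no object voxel falls outside the container.
import numpy as np

def find_placement(container, obj):
    """container, obj: 3D boolean arrays (True = occupied). Returns an offset
    (dz, dy, dx) at which obj fits inside container, or None."""
    cz, cy, cx = container.shape
    oz, oy, ox = obj.shape
    if oz > cz or oy > cy or ox > cx:
        return None
    for dz in range(cz - oz + 1):
        for dy in range(cy - oy + 1):
            for dx in range(cx - ox + 1):
                window = container[dz:dz + oz, dy:dy + oy, dx:dx + ox]
                # every occupied object voxel must land on an occupied container voxel
                if np.all(window[obj]):
                    return (dz, dy, dx)
    return None

# toy example: a 1x1x2 bar inside a 2x2x2 cube with one voxel missing
cube = np.ones((2, 2, 2), dtype=bool)
cube[0, 0, 0] = False
bar = np.ones((1, 1, 2), dtype=bool)
print(find_placement(cube, bar))   # (0, 1, 0)
```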
Another solution is to sample an equal number of points on both objects and do a least-squares best fit of the point sets. The point sets probably will not be ordered the same, so iterate between the least-squares fit and a reordering of the points, so that the points on both objects end up in roughly the same order. The equation development for this involves a lot of algebra but is not conceptually complicated.
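That iterate-between-fit-and-reordering loop is essentially the classic iterative closest point (ICP) scheme. A hedged sketch, assuming numpy and scipy and leaving out the point sampling on the meshes:

```python
# Sketch of the iterate-between-fit-and-reordering idea, essentially ICP:
# repeatedly match each source point to its nearest target point, then solve the
# least-squares rigid transform (Kabsch/SVD) for that correspondence.
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (paired points)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(source_pts, target_pts, iterations=50):
    src = np.array(source_pts, dtype=float)
    tree = cKDTree(np.asarray(target_pts, dtype=float))
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)         # "reorder": nearest target point for each source point
        R, t = best_fit_transform(src, tree.data[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```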
Consider one polygon (triangle) in the target object. For this polygon, find the equivalent polygon in the other geometry (the source), i.e. the side lengths, the angles between the edges and the area should all be the same. If there is just one match, find the rigid transform matrix that maps the vertices: X' = M*X. Since X' and X are known for all the points on the matched polygons, this should be doable with linear algebra.
If you want a one-to-one mapping between the vertices of the polygons, traverse the edges of the polygons in the same order and build a lookup table that maps each vertex on one poly to a vertex on the other. If you have a half-edge data structure for your 3D object, that will simplify this process a great deal.
If you find more than one matching polygon, traverse the source geometry from each of the matches and keep matching their neighbouring polygons with the target's polygons. Continue until all but one candidate break, after which you can do the same steps as in the one-match case.
There are more rigorous solutions listed here, but I think the method above will work as well.
What a juicy problem! As is typical in computational geometry, this problem can become very complicated with a mismatched geometric abstraction, with all kinds of if-else cases and so on. But pick the right abstraction and the solution becomes trivial, with few sub-cases.
Compute the Distance Transform of your shapes and Voilà! Your solution is trivial.
Allow me to elaborate.
The distance map of a shape on a grid (pixels) encodes, for each pixel, the distance from that pixel to the closest point on the shape's border. It can be computed in both directions, outwards from or inwards into the shape. In this problem, the outward distance map suffices.
Step 1: Compute the distance map of both shapes D_S1, D_S2
Step 2: Subtract the distance maps. Diff = D_S1-D_S2
Step 3: If Diff has only positive values (or only negative values), then one shape can be contained in the other (+ve => S1 bigger than S2, -ve => S2 bigger than S1).
If the Diff has both positive and negative values, the shapes intersect.
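A minimal 2D sketch of these steps, assuming both shapes are already rasterised as boolean masks on the same grid and using scipy's Euclidean distance transform. Note that which sign means which shape depends on the distance-map convention used, and that this compares the shapes in place; aligning them (translation/rotation) is a separate problem.

```python
# Sketch of the three steps in 2D: both shapes rasterised as boolean numpy masks on the
# same grid. The outward distance map gives, for each pixel, the distance to the nearest
# pixel of the shape (zero on the shape itself).
import numpy as np
from scipy.ndimage import distance_transform_edt

def outward_distance_map(mask):
    # distance_transform_edt measures the distance to the nearest zero element,
    # so feed it the complement of the shape mask
    return distance_transform_edt(~mask)

def compare_shapes(mask1, mask2):
    d1 = outward_distance_map(mask1)        # Step 1
    d2 = outward_distance_map(mask2)
    diff = d1 - d2                          # Step 2
    if np.all(diff >= 0):                   # Step 3: with this convention every pixel
        return "S1 fits inside S2"          # is at least as close to S2 as to S1
    if np.all(diff <= 0):
        return "S2 fits inside S1"
    return "neither contains the other"
```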
There you have it. Enjoy !
This is a question similar to the one here, but I figure it would be helpful if I recast it in more general terms.
I have a set of polygons; these polygons can touch one another, overlap, and take on any shape. My question is: given a list of points, how do I devise an efficient algorithm that finds which polygons each point is located in?
One interesting restriction on the location of the points, if it helps, is that all the points lie on the edges of the polygons.
I understand that R-trees can help, but given that I am querying a whole series of points, is there a more efficient algorithm than processing each point one by one?
The key search term here is point location. Under that name, there are many algorithms in the computational geometry literature for various cases, from special to general. For example, this link lists various software packages, including my own. (A somewhat out-of-date list now.)
There is a significant tradeoff between speed and program complexity (and therefore implementation effort). The easiest-to-program method is to check each point against each polygon, using standard point-in-polygon code. But this could be slow depending on how many polygons you have.
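For reference, the "standard point-in-polygon code" is usually the even-odd ray-casting test; a minimal sketch of the brute-force approach:

```python
# Standard even-odd (ray casting) point-in-polygon test: count how many polygon edges a
# horizontal ray from the point crosses. Points exactly on an edge may go either way here;
# treat them separately if that matters (as it does in the question above).
def point_in_polygon(pt, polygon):
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses the horizontal line through the point
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def locate(points, polygons):
    """Brute force: for each query point, list the indices of the polygons containing it."""
    return [[i for i, poly in enumerate(polygons) if point_in_polygon(p, poly)]
            for p in points]
```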
More difficult is to build a point-location data structure by sweeping the plane and finding all the edge-edge intersection points. See this Wikipedia article for some of your options.
I think you are bumping up against intuition about the problem (which is a quasi-analog perception) versus a computational approach which is of necessity O(n).
Given a plane, a degenerate polygon (a line), and an arbitrary set of points on the plane, do the points intersect the line or fall "above" or "below" it? I cannot think of an approach that is smaller than O(n) even for this degenerate case.
Either each point would have to be checked for its relation to the line, or you'd have to partition the points into some tree-like structure, which would require at least O(n) operations and very likely more.
If I were better at computational geometry, I might be able to say with authority that you've just restated Klee's measure problem but as it is I just have to suggest it.
If points can only fall on the edges, then you can find the polygons in O(n) by just examining the edges.
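A small sketch of that edge test (the tolerance handling is my own choice and would need to be scaled to your coordinates):

```python
# Sketch: since every query point lies on some polygon edge, it suffices to test each point
# against each edge with a point-on-segment check (collinearity plus a bounding-box test).
def on_segment(p, a, b, eps=1e-9):
    (px, py), (ax, ay), (bx, by) = p, a, b
    cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    if abs(cross) > eps:                       # not collinear with the edge
        return False
    return (min(ax, bx) - eps <= px <= max(ax, bx) + eps and
            min(ay, by) - eps <= py <= max(ay, by) + eps)

def polygons_touching_point(p, polygons):
    hits = []
    for i, poly in enumerate(polygons):
        if any(on_segment(p, poly[j], poly[(j + 1) % len(poly)]) for j in range(len(poly))):
            hits.append(i)
    return hits
```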
If it were otherwise, you'd have to triangulate the polygons in O(n log n) to test against the triangles in O(n).
You could also divide the space by the line extended from each segment, noting which side is inside/outside of the corresponding polygon. A point is inside a polygon if it falls on an edge or if it's on the inside part of every edge of the polygon (this holds as stated for convex polygons; concave ones would need to be split into convex pieces first). It's O(n) in the number of edges in the worst case, but tends to O(m) in the number of polygons in the average case.
An R-tree would help in both cases, but only if you have several points to test. Otherwise, constructing the R-tree would be more expensive than searching through the list of triangles.
I have polygons that define the contour of counties in the UK. These shapes are very detailed (10k to 20k points each), which makes the related computations (is point X in polygon P?) quite computationally expensive.
Thus, I would like to "subsample" my polygons, to obtain a similar shape but with less points. What are the different techniques to do so?
The trivial one would be to take one point in every N (thus subsampling by a factor of N), but this feels too "crude". I would rather do some averaging of points, or something of that flavor. Any pointer?
Two solutions spring to mind:
1) since the map of the UK is reasonably squarish, you could choose to render a bitmap with the counties. Assign each a specific colour, and then render the borders with a 1 or 2 pixel thick black line. This means you'll only have to perform the expensive interior/exterior calculation if a sample happens to lie on the border. The larger the bitmap, the less often this will happen.
2) simplify the county outlines. You can use the recursive Ramer–Douglas–Peucker algorithm to simplify the boundaries (a sketch follows below). Just make sure you cache the results. You may also have to apply it not to entire county boundaries but only to shared boundary segments, to ensure no gaps appear between neighbouring counties. This might be quite tricky.
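A standard recursive Ramer–Douglas–Peucker sketch for an open boundary piece (the caching and the splitting into shared boundary segments are left to the caller):

```python
# Standard recursive Ramer-Douglas-Peucker simplification of an open polyline.
# For county outlines, run it on each shared boundary piece separately (between the points
# where three or more counties meet) so that neighbouring counties stay stitched together.
import math

def perpendicular_distance(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / length

def rdp(points, epsilon):
    if len(points) < 3:
        return list(points)
    # find the point farthest from the chord between the endpoints
    d_max, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > d_max:
            d_max, index = d, i
    if d_max <= epsilon:                       # everything is close enough to the chord
        return [points[0], points[-1]]
    left = rdp(points[:index + 1], epsilon)    # keep the far point and recurse on both halves
    right = rdp(points[index:], epsilon)
    return left[:-1] + right
```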
Here you can find a project dealing exactly with your issues. Although it works primarily with an area "filled" by points, you can set it to work with a "perimeter" type definition as yours.
It uses a k-nearest neighbors approach for calculating the region.
Samples:
Here you can request a copy of the paper.
Seemingly they planned to offer an online service for requesting calculations, but I didn't test it, and probably it isn't running.
HTH!
Polygon triangulation should help here. You'll still have to check many polygons, but these are triangles now, so they are easier to check and you can use some optimizations to determine only a small subset of polygons to check for a given region or point.
Since it seems you already have all the algorithms you need for general polygons, not only for triangles, you can also merge several triangles after triangulation if they are too small or if the triangle count gets too high.
I have a detailed 2D polygon (representing a geographic area) that is defined by a very large set of vertices. I'm looking for an algorithm that will simplify and smooth the polygon, (reducing the number of vertices) with the constraint that the area of the resulting polygon must contain all the vertices of the detailed polygon.
For context, here's an example of the edge of one complex polygon:
My research:
I found the Ramer–Douglas–Peucker algorithm, which will reduce the number of vertices, but the resulting polygon will not contain all of the original polygon's vertices. See the Ramer–Douglas–Peucker article on Wikipedia.
I considered expanding the polygon (I believe this is also known as outward polygon offsetting). I found these questions: Expanding a polygon (convex only) and Inflating a polygon. But I don't think this will substantially reduce the detail of my polygon.
Thanks for any advice you can give me!
Edit
As of 2013, most links below are not functional anymore. However, I've found the cited paper, algorithm included, still available at this (very slow) server.
Here you can find a project dealing exactly with your issues. Although it works primarily with an area "filled" by points, you can set it to work with a "perimeter" type definition as yours.
It uses a k-nearest neighbors approach for calculating the region.
Samples:
Here you can request a copy of the paper.
Seemingly they planned to offer an online service for requesting calculations, but I didn't test it, and probably it isn't running.
HTH!
I think Visvalingam's algorithm can be adapted for this purpose by skipping the removal of triangles that would reduce the area.
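A hedged sketch of that adaptation: for a counter-clockwise polygon, only reflex vertices are candidates for removal (dropping one fills in a notch rather than cutting a corner), and among those the vertex spanning the smallest triangle goes first, as in Visvalingam-Whyatt. This local rule alone does not rule out self-intersections.

```python
# Sketch of Visvalingam-Whyatt simplification restricted to removals that do not shrink
# the polygon: for a counter-clockwise polygon, only reflex vertices are removable, since
# dropping one fills in a notch (adds area) rather than cutting a corner (losing area).
# This is a local criterion only; it does not, by itself, prevent self-intersections.

def signed_area2(a, b, c):
    """Twice the signed area of triangle abc (positive if counter-clockwise)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def simplify_outward(polygon, target_vertices):
    pts = list(polygon)                    # assumed counter-clockwise
    while len(pts) > max(target_vertices, 3):
        best = None
        for i in range(len(pts)):
            a, b, c = pts[i - 1], pts[i], pts[(i + 1) % len(pts)]
            area2 = signed_area2(a, b, c)
            if area2 >= 0:
                continue                   # convex vertex: removing it would shrink the polygon
            if best is None or -area2 < best[0]:
                best = (-area2, i)         # smallest added area first (Visvalingam's ordering)
        if best is None:
            break                          # no reflex vertices left; cannot simplify further
        del pts[best[1]]
    return pts
```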
I had a very similar problem: I needed an inflating simplification of polygons.
I used a simple algorithm: remove a concave point (this will increase the polygon's area), or remove a convex edge (between two convex points) and extend the adjacent edges. Either of these two operations removes one point from the polygon.
I chose to remove the point or edge that leads to the smallest area variation. You can repeat this process until the simplification is good enough for you (for example, no more than 200 points).
The two main difficulties were making the algorithm fast (by avoiding computing the area variation of a vertex/edge removal twice and by keeping the candidates sorted) and avoiding the introduction of self-intersections in the process (not very easy to do or to explain, but possible with limited computational complexity).
In fact, after looking more closely, it is a similar idea to Visvalingam's, adapted to also handle edge removal.
That's an interesting problem! I never tried anything like this, but here's an idea off the top of my head... apologies if it makes no sense or wouldn't work :)
Calculate a convex hull, that might be way too big / imprecise
Divide the hull into N slices, for example joining each one of the hull's vertices to the center
Calculate the intersection of your object with each slice
Repeat recursively for each intersection (calculating the intersection's hull, etc)
Each level of recursion should give a better approximation; when you reach a satisfying level, merge all the hulls from that level to get the final polygon.
Does that sound like it could do the job?
To some degree I'm not sure what you are trying to do, but it seems you have two very good answers. One is Ramer–Douglas–Peucker (DP) and the other is computing the alpha shape (also called a concave hull, non-convex hull, etc.). I found a more recent paper describing alpha shapes and linked it below.
I personally think DP with polygon expansion is the way to go. I'm not sure why you think it won't substantially reduce the number of vertices. With DP you supply a factor, and you can make it anything you want, to the point where you end up with a triangle no matter what your input is. Picking this factor can be hard, but in your case I think it's the best method. You should be able to determine the factor based on the size of the largest bit of detail you want to disappear. You can do this by direct testing or by calculating it from your source data.
http://www.it.uu.se/edu/course/homepage/projektTDB/ht13/project10/Project-10-report.pdf
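One way to combine the two steps (DP, then expansion) with off-the-shelf tools, assuming the shapely library; the buffer distance used here, the Hausdorff distance between the original and the simplified polygon, is my own heuristic rather than anything from the paper above. Since buffering adds vertices at rounded corners, you may want one more light simplify pass afterwards.

```python
# Sketch of "DP plus expansion" with shapely: simplify with Douglas-Peucker, then buffer the
# result outwards just enough that it contains the original polygon again. Buffering by the
# Hausdorff distance between original and simplified guarantees containment (every original
# point lies within that distance of the simplified polygon), at the cost of over-expansion.
from shapely.geometry import Polygon

def simplify_and_expand(coords, tolerance):
    original = Polygon(coords)
    simplified = original.simplify(tolerance, preserve_topology=True)
    if simplified.contains(original):
        return simplified
    margin = original.hausdorff_distance(simplified)
    return simplified.buffer(margin)   # round joins: covers everything within `margin`
```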
I've written a simple modification of Douglas-Peucker that might be helpful to anyone having this problem in the future: https://github.com/prakol16/rdp-expansion-only
It's identical to DP except that it pushes a line segment outwards a bit if the points that it would remove are outside the polygon. This guarantees that the resulting simplified polygon contains all of the original polygon, yet it has almost the same number of line segments as the standard DP output and is usually reasonably good at approximating the original shape.