Testing whether a polygon is simple or complex - algorithm

For a polygon defined as a sequence of (x, y) points, how can I detect whether it is complex or not? A complex polygon has edges that intersect each other.
Is there a better solution than checking every pair of edges, which would have a time complexity of O(N^2)?

There are sweep methods which can determine this much faster than a brute force approach. In addition, they can be used to break a non-simple polygon into multiple simple polygons.
For details, see this article, in particular, this code to test for a simple polygon.

See the Bentley–Ottmann algorithm for a sweep-based O((N + I) log N) method, where N is the number of line segments and I is the number of intersection points.

In fact, this can be done in linear time using Chazelle's triangulation algorithm: it either triangulates the polygon or reports that the polygon is not simple.
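For completeness, here is a minimal brute-force sketch in Python of the O(N^2) pairwise test from the question. It only detects proper crossings; collinear overlaps and repeated vertices would need extra care, and all names are illustrative.

```python
def segments_cross(p1, p2, p3, p4):
    """True if segments p1-p2 and p3-p4 properly cross (no touching endpoints)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def is_simple(polygon):
    """O(N^2) check: a polygon is simple if no two non-adjacent edges cross."""
    n = len(polygon)
    edges = [(polygon[i], polygon[(i + 1) % n]) for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if j == i + 1 or (i == 0 and j == n - 1):
                continue  # adjacent edges share an endpoint; skip them
            if segments_cross(*edges[i], *edges[j]):
                return False
    return True
```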

Related

Determine whether the two classes are linearly separable (algorithmically in 2D)

There are two classes, let's call them X and O. A number of elements belonging to these classes are spread out in the xy-plane. The two classes are not linearly separable when it is not possible to draw a straight line that perfectly divides the Xs and the Os, with each class on its own side of the line.
How can one determine, in general, whether the two classes are linearly separable? I am interested in an algorithm where no assumptions are made regarding the number of elements or their distribution. An algorithm of the lowest computational complexity is of course preferred.
If you found the convex hull for the X points and the O points separately (i.e. you have two separate convex hulls at this stage), you would then just need to check whether any segments of the hulls intersect or whether either hull is enclosed by the other.
If the two hulls are found to be totally disjoint, the two data sets are geometrically separable.
Since the hulls are convex, any two disjoint convex regions can be separated by a straight line (the separating hyperplane theorem).
There are efficient algorithms both for finding the convex hull (the qhull algorithm is based on an O(n log n) quickhull approach, I think) and for performing line-line intersection tests on a set of segments (sweep line at O(n log n)), so overall an efficient O(n log n) algorithm should be possible.
This type of approach should also generalise to k-way separation tests (where you have k groups of objects) by forming the convex hull and performing the intersection tests for each group.
It should also work in higher dimensions, although the intersection tests would start to become more challenging...
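A minimal sketch of this hull-based test in Python, assuming SciPy and Shapely are available (each class needs at least three non-collinear points; all names are illustrative):

```python
import numpy as np
from scipy.spatial import ConvexHull
from shapely.geometry import Polygon

def linearly_separable(xs, os):
    """Two point sets are linearly separable iff their convex hulls are disjoint."""
    hull_x = Polygon(np.asarray(xs)[ConvexHull(xs).vertices])
    hull_o = Polygon(np.asarray(os)[ConvexHull(os).vertices])
    # disjoint() is False if the hulls touch, overlap, or one contains the other.
    return hull_x.disjoint(hull_o)
```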
Hope this helps.
Computationally, the most effective way to decide whether two sets of points are linearly separable is by applying linear programming. GLPK is perfect for that purpose, and pretty much every high-level language offers an interface for it: R, Python, Octave, Julia, etc.
Let's say you have a set of points A and a set of points B. Then you look for a vector w and a scalar t such that

    w · x + t >= 1   for every point x in A
    w · x + t <= -1  for every point x in B

and you have to minimize 0 subject to these conditions. (Stacked together, the constraints take the matrix form A z <= b, where this A is a constraint matrix, not the set of points from above.)
"Minimizing 0" effectively means that you don't need to actually optimize an objective function, because that is not necessary to find out whether the sets are linearly separable.
In the end, (w, t) defines the separating plane.
In case you are interested in a working example in R or the math details, then check this out.
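Along the same lines, a hedged sketch of this feasibility LP with SciPy's linprog (the function name here is illustrative, not from the linked example):

```python
import numpy as np
from scipy.optimize import linprog

def separable_lp(A_pts, B_pts):
    """Feasibility LP: find (w, t) with w.x + t >= 1 on A and w.x + t <= -1 on B."""
    A_pts, B_pts = np.asarray(A_pts, float), np.asarray(B_pts, float)
    d = A_pts.shape[1]
    # Variables z = (w_1, ..., w_d, t); all constraints written as A_ub @ z <= b_ub.
    rows_a = np.hstack([-A_pts, -np.ones((len(A_pts), 1))])  # -(w.x + t) <= -1
    rows_b = np.hstack([B_pts, np.ones((len(B_pts), 1))])    #   w.x + t  <= -1
    A_ub = np.vstack([rows_a, rows_b])
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=-np.ones(len(A_ub)),
                  bounds=[(None, None)] * (d + 1))           # unbounded variables
    return res.success  # feasible iff the point sets are linearly separable
```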
Here is a naïve algorithm that I'm quite sure will work (and, if so, shows that the problem is not NP-complete, as another post claims), though I wouldn't be surprised if it can be done more efficiently: if a separating line exists, it will be possible to move and rotate it until it hits two of the Xs or one X and one O. Therefore, we can simply look at all the possible lines that pass through two Xs or through one X and one O, and check whether any of them is a dividing line. So, for each of the O(n^2) pairs, iterate over all the n - 2 other elements to see whether all the Xs are on one side and all the Os on the other. Total time complexity: O(n^3).
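A small sketch of that O(n^3) enumeration (points as tuples; this tests weak separation, and tie-breaking for points lying exactly on a candidate line is glossed over):

```python
from itertools import combinations

def separable_naive(xs, os):
    """Try every line through two Xs or through one X and one O."""
    def side(p, q, r):
        # Cross product: > 0 left of line p->q, < 0 right, == 0 on the line.
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    candidates = list(combinations(xs, 2)) + [(x, o) for x in xs for o in os]
    for p, q in candidates:
        if p == q:
            continue
        sx = [side(p, q, r) for r in xs]
        so = [side(p, q, r) for r in os]
        if max(sx) <= 0 <= min(so) or max(so) <= 0 <= min(sx):
            return True
    return False
```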
A linear perceptron is guaranteed to find such a separation if one exists.
See: http://en.wikipedia.org/wiki/Perceptron.
You can probably apply linear programming to this problem. I'm not sure of its computational complexity in formal terms, but the technique is successfully applied to very large problems covering a wide range of domains.
Computing a linear SVM and then determining which side of the computed plane, with optimal margin, each point lies on will tell you if the points are linearly separable.
This is overkill, but if you need a quick one off solution, there are many existing SVM libraries that will do this for you.
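As a concrete example, a hedged sketch with scikit-learn, approximating a hard margin with a large C (the library choice and parameters are assumptions, not part of the original answer):

```python
import numpy as np
from sklearn.svm import SVC

def separable_svm(xs, os):
    """If a (nearly) hard-margin linear SVM classifies the training data
    perfectly, the two sets are linearly separable (up to numerics)."""
    X = np.vstack([xs, os])
    y = np.r_[np.ones(len(xs)), -np.ones(len(os))]
    clf = SVC(kernel="linear", C=1e10)  # huge C approximates a hard margin
    clf.fit(X, y)
    return clf.score(X, y) == 1.0
```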
As mentioned by ElKamina, a linear perceptron is guaranteed to find a solution if one exists. This approach is not efficient for large dimensions. Computationally, the most effective way to decide whether two sets of points are linearly separable is by applying linear programming.
Code with an example solving this with a perceptron in Matlab is here.
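In the same spirit, a minimal pure-Python perceptron sketch (not the linked Matlab code). Note that if the sets are not separable the perceptron never converges, so hitting the epoch cap is only suggestive, not a proof:

```python
import numpy as np

def perceptron_separable(xs, os, max_epochs=1000):
    """Run the perceptron rule; an epoch with no mistakes proves separability."""
    X = np.hstack([np.vstack([xs, os]).astype(float),
                   np.ones((len(xs) + len(os), 1))])  # absorb the bias term
    y = np.r_[np.ones(len(xs)), -np.ones(len(os))]
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:    # misclassified (or exactly on the boundary)
                w += yi * xi
                mistakes += 1
        if mistakes == 0:
            return True               # w now strictly separates the two sets
    return False                      # inconclusive: likely not separable
```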
In general that problem is NP-hard but there are good approximate solutions like K-means clustering.
Well, both the Perceptron and the SVM (Support Vector Machine) can tell whether two data sets are linearly separable, but the SVM can also find the optimal separating hyperplane. Besides, it can work with n-dimensional vectors, not only points.
It is used in applications such as face recognition. I recommend going deeper into this topic.

Algorithm for completing a partial triangulation (Constrained Triangulation)

Given a set of points in a plane and an incomplete triangulation of the convex hull of the points (only some edges are given), I'm looking for an algorithm to complete the triangulation (the initial given edges should remain fixed). You can assume that it's possible to complete the partial triangulation but it'd be great if you could also suggest an algorithm for checking that too.
UPDATE" You're given a convex hull of a set of points R^2, which is basically a polygon with some points inside it. We want to triangulate the set of points which is a straightforward matter on itself, but you're also given some edges that any triangulation that you come up with should use those edges."
Perhaps this is a naive answer, but can't you just use a constrained Delaunay triangulation? Add the known edges as constraints.
CGAL has a nice implementation. The tool Triangle has similar features and is easier to get started with, but has (perhaps) a little less flexibility.
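A small sketch using the Python bindings for Triangle (pip install triangle); the square-with-a-fixed-diagonal input here is purely illustrative:

```python
import numpy as np
import triangle  # Python bindings for Shewchuk's Triangle

# A unit square whose diagonal 0-2 must appear in the triangulation.
pslg = {
    "vertices": np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]),
    "segments": np.array([[0, 2]]),  # the constrained (fixed) edge
}
# The 'p' switch triangulates a planar straight-line graph, honoring the segments.
result = triangle.triangulate(pslg, "p")
print(result["triangles"])           # index triples into result["vertices"]
```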
I found out that the book "Computational Geometry: An Introduction" has a detailed treatment of the subject, though it doesn't give ready-to-implement pseudocode.
The easiest algorithm is a greedy one which enumerates all the possible edges and then adds them one by one, avoiding intersections with previously added edges. There is a long discussion in the book about how to reduce the running time to O(n^2 log n).
Then there is an O(n log n) algorithm which first regularizes the convex hull with the given edges and then individually triangulates each monotone polygon.
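A rough sketch of that greedy idea, written naively (the book's refinements sort the candidates and use cleverer intersection tests; points are tuples, and collinear degeneracies are ignored):

```python
from itertools import combinations

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def properly_cross(p1, p2, q1, q2):
    """True if the open segments p1-p2 and q1-q2 cross; shared endpoints are OK."""
    if {p1, p2} & {q1, q2}:
        return False
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def complete_triangulation(points, fixed_edges):
    """Greedily grow a maximal crossing-free edge set around the fixed edges;
    for points in general position this is the edge set of a triangulation."""
    edges = list(fixed_edges)
    for a, b in combinations(points, 2):
        if (a, b) in edges or (b, a) in edges:
            continue
        if not any(properly_cross(a, b, c, d) for c, d in edges):
            edges.append((a, b))
    return edges
```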

Given a set of polygons and a series of points, find which polygons the points are located in

This is a question similar to the one here, but I figure that it would be helpful if I can recast it in more general terms.
I have a set of polygons; these polygons can touch one another, overlap, and take on any shape. My question is: given a list of points, how do I devise an efficient algorithm that finds which polygons each point is located in?
One interesting restriction on the location of the points is that all the points are located on the edges of the polygons, if this helps.
I understand that R-trees can help, but given that I am testing a whole series of points, is there a more efficient algorithm than handling each point one by one?
The key search term here is point location. Under that name, there are many algorithms in the computational geometry literature for various cases, from special to general. For example, this link lists various software packages, including my own. (A somewhat out-of-date list now.)
There is a significant tradeoff between speed and program complexity (and therefore implementation effort). The easiest-to-program method is to check each point against each polygon, using standard point-in-polygon code. But this could be slow depending on how many polygons you have.
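For reference, a minimal sketch of that standard point-in-polygon test (even-odd ray casting; points exactly on an edge may go either way in this simple version):

```python
def point_in_polygon(pt, polygon):
    """Even-odd rule: count edge crossings of a rightward horizontal ray."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                       # crossing is to the right of pt
                inside = not inside
    return inside
```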
More difficult is to build a point-location data structure by sweeping the plane and finding all the edge-edge intersection points. See this Wikipedia article for some of your options.
I think you are bumping up against intuition about the problem (which is quasi-analog perception) versus a computational approach, which is of necessity O(n).
Given a plane, a degenerate polygon (a line), and an arbitrary set of points on the plane, do the points intersect the line or fall "above" or "below" it? I cannot think of an approach that is smaller than O(n) even for this degenerate case.
Either each point would have to be checked for its relation to the line, or you'd have to partition the points into some tree-like structure, which would require at least O(n) operations but very likely more.
If I were better at computational geometry, I might be able to say with authority that you've just restated Klee's measure problem, but as it is I just have to suggest it.
If points can only fall on the edges, then you can find the polygons in O(n) by just examining the edges.
If it were otherwise, you'd have to triangulate the polygons in O(n log n) to test against the triangles in O(n).
You could also divide the space by the line extended from each segment, noting which side is inside/outside of the corresponding polygon. A point is inside a polygon if it falls on an edge or if it's on the inside part of every edge of the polygon. It's O(n) in the number of edges in the worst case, but tends to O(m) in the number of polygons in the average case.
An R-tree would help in both cases, but only if you have several points to test. Otherwise, constructing the R-tree would be more expensive than searching through the list of triangles.
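For the batched case, a hedged sketch with Shapely's STRtree (the Shapely 2.x API is assumed, where query() returns integer indices into the input list):

```python
from shapely.geometry import Point, Polygon
from shapely.strtree import STRtree

def locate_points(points, polygons):
    """For each query point, return the indices of the polygons containing it."""
    tree = STRtree(polygons)            # bulk-loaded R-tree over the polygons
    hits = []
    for p in map(Point, points):
        candidates = tree.query(p)      # bounding-box candidates only
        # covers() includes the boundary, which matters since points lie on edges.
        hits.append([i for i in candidates if polygons[i].covers(p)])
    return hits
```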

Simplified (or smooth) polygons that contain the original detailed polygon

I have a detailed 2D polygon (representing a geographic area) that is defined by a very large set of vertices. I'm looking for an algorithm that will simplify and smooth the polygon, (reducing the number of vertices) with the constraint that the area of the resulting polygon must contain all the vertices of the detailed polygon.
My research:
I found the Ramer–Douglas–Peucker algorithm, which will reduce the number of vertices, but the resulting polygon will not contain all of the original polygon's vertices. See the Ramer–Douglas–Peucker article on Wikipedia.
I considered expanding the polygon (I believe this is also known as outward polygon offsetting). I found these questions: Expanding a polygon (convex only) and Inflating a polygon. But I don't think this will substantially reduce the detail of my polygon.
Thanks for any advice you can give me!
Edit
As of 2013, most links below are not functional anymore. However, I've found the cited paper, algorithm included, still available at this (very slow) server.
Here you can find a project dealing exactly with your issues. Although it works primarily with an area "filled" by points, you can set it to work with a "perimeter"-type definition like yours.
It uses a k-nearest neighbors approach for calculating the region.
Here you can request a copy of the paper.
Seemingly they planned to offer an online service for requesting calculations, but I didn't test it, and probably it isn't running.
HTH!
I think Visvalingam's algorithm can be adapted for this purpose by skipping the removal of triangles that would reduce the area.
I had a very similar problem: I needed an inflating simplification of polygons.
I used a simple algorithm that removes a concave point (this will increase the polygon's area) or removes a convex edge (between two convex points) while extending the adjacent edges. In either case, each of these two operations removes one point from the polygon.
I chose to remove the point or the edge that leads to the smallest area variation. You can repeat this process until the simplification is acceptable to you (for example, no more than 200 points); a sketch of the concave-point case follows below.
The two main difficulties were obtaining a fast algorithm (by avoiding computing the vertex/edge removal variation twice and keeping the candidates sorted) and avoiding the introduction of self-intersections in the process (not very easy to do or to explain, but possible with limited computational complexity).
In fact, after looking more closely, it is an idea similar to Visvalingam's, adapted for edge removal.
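A minimal sketch of the concave-point half of this idea, assuming a counterclockwise simple polygon (the convex-edge case and the self-intersection checks are omitted):

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def simplify_outward(poly, target):
    """Repeatedly delete the reflex (concave) vertex whose removal adds the
    least area; deleting a reflex vertex can only grow a CCW polygon."""
    poly = list(poly)
    while len(poly) > target:
        best, best_area = None, None
        for i in range(len(poly)):
            prev, cur, nxt = poly[i - 1], poly[i], poly[(i + 1) % len(poly)]
            turn = cross(prev, cur, nxt)
            if turn < 0:                          # reflex vertex in a CCW polygon
                added = -turn / 2.0               # area of the filled-in notch
                if best_area is None or added < best_area:
                    best, best_area = i, added
        if best is None:                          # no reflex vertices remain
            break
        del poly[best]
    return poly
```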
That's an interesting problem! I never tried anything like this, but here's an idea off the top of my head... apologies if it makes no sense or wouldn't work :)
Calculate a convex hull; that might be way too big / imprecise
Divide the hull into N slices, for example joining each one of the hull's vertices to the center
Calculate the intersection of your object with each slice
Repeat recursively for each intersection (calculating the intersection's hull, etc)
Each level of recursion should give a better approximation... when you reach a satisfying level, merge all the hulls from that level to get the final polygon.
Does that sound like it could do the job?
To some degree I'm not sure what you are trying to do but it seems you have two very good answers. One is Ramer–Douglas–Peucker (DP) and the other is computing the alpha shape (also called a Concave Hull, non-convex hull, etc.). I found a more recent paper describing alpha shapes and linked it below.
I personally think DP with polygon expansion is the way to go. I'm not sure why you think it won't substantially reduce the number of vertices. With DP you supply a factor and you can make it anything you want to the point where you end up with a triangle no matter what your input. Picking this factor can be hard but in your case I think it's the best method. You should be able to determine the factor based on the size of the largest bit of detail you want to go away. You can do this with direct testing or by calculating it from your source data.
http://www.it.uu.se/edu/course/homepage/projektTDB/ht13/project10/Project-10-report.pdf
I've written a simple modification of Douglas-Peucker that might be helpful to anyone having this problem in the future: https://github.com/prakol16/rdp-expansion-only
It's identical to DP except that it pushes a line segment outwards a bit if the points it would remove are outside the polygon. This guarantees that the resulting simplified polygon contains all of the original polygon, but it has almost the same number of line segments as the original DP algorithm and is usually reasonably good at approximating the original shape.

Fast algorithm for calculating union of 'local convex hulls'

I have a set of 2D points from which I want to generate a polygon (or collection of polygons) outlining the 'shape' of those points, using the following concept:
For each point in the set, calculate the convex hull of all points within radius R of that point. After doing this for each point, take the union of these convex hulls to produce the final shape.
A brute force approach of actually constructing all these convex hulls is something like O(N^2 + R^2 log R). Is there a known, more efficient algorithm to produce the same result? Or perhaps a different way of expressing the problem?
Note: I am aware of alpha shapes, they are different; I am looking for an algorithm to perform what is described above.
Update: I had a proposed solution, but it turned out not to work (disproved experimentally in MATLAB):
Proposition: take the Delaunay triangulation of the set of points, remove all triangles having circumradius greater than R, then take the union of the remaining triangles.
A sweep line algorithm can improve searching for the R-neighbors. Alternatively, you can consider only pairs of points that lie in neighboring squares of a square grid of cell width R. Both of these ideas can get rid of the N^2 factor, though of course only if the points are relatively sparse.
I believe that a clever combination of sweeping and convex hull finding can get rid of the N^2 even if the points are not sparse (as in Olexiy's example), but I cannot come up with a concrete algorithm.
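A quick sketch of the grid idea for gathering R-neighbors (points as tuples; the rest of the pipeline is omitted):

```python
from collections import defaultdict

def neighbors_within_R(points, R):
    """Bucket points into an R x R grid; a point's R-neighbors can only live
    in its own cell or one of the 8 surrounding cells."""
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // R), int(p[1] // R))].append(p)
    result = {}
    for p in points:
        cx, cy = int(p[0] // R), int(p[1] // R)
        result[p] = [q
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     for q in grid[(cx + dx, cy + dy)]
                     if q is not p
                     and (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= R * R]
    return result
```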
Yes, using rotating calipers. My prof wrote some material on this; it starts on page 19.
Please let me know if I misunderstood the problem.
I don't see how you get N^2 time for brute-forcing all the convex hulls in the worst case (1). What if almost any two points are closer than R? In this case you need at least N^2 log N just to construct the convex hulls, let alone compute their union.
Also, where does the R^2 log R in your estimate come from?
(1) The worst case, as I see it, for a huge N: take a circle of radius R/2 and randomly place points on its border and just outside it.
