Sutherland–Hodgman Algorithm Variation for Non-Degenerate Polygon Output

I need a fast polygon clipping implementation against an axis-aligned rectangle. I've implemented the Sutherland–Hodgman algorithm, tuned to my particular data set, and it's 40 to 100 times faster than ClipperLib and other generic polygon clipping libraries. In some cases, however, the output is a degenerate polygon: clipping this input

[image: input polygon]

results in:

[image: degenerate output polygon]

Is there a variation of the Sutherland–Hodgman algorithm that generates multiple polygons instead of a degenerate one, or any fast and simple way to split the degenerate polygon into multiple polygons?
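For reference, the basic algorithm can be sketched as follows. This is a minimal Python illustration (not the asker's optimized implementation): the polygon is clipped against each of the four rectangle edges in turn. For concave input it produces exactly the kind of degenerate output described, with the pieces joined by coincident edges.

```python
def clip_to_rect(polygon, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman clip of a polygon (list of (x, y) vertices)
    against an axis-aligned rectangle. Concave inputs may yield a
    degenerate output polygon whose parts are joined by coincident edges."""
    def clip_edge(points, inside, intersect):
        out = []
        n = len(points)
        for i in range(n):
            cur, nxt = points[i], points[(i + 1) % n]
            if inside(cur):
                out.append(cur)
                if not inside(nxt):           # leaving the half-plane
                    out.append(intersect(cur, nxt))
            elif inside(nxt):                 # entering the half-plane
                out.append(intersect(cur, nxt))
        return out

    def ix(p, q, x):  # intersection of edge pq with the vertical line at x
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def iy(p, q, y):  # intersection of edge pq with the horizontal line at y
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    pts = polygon
    pts = clip_edge(pts, lambda p: p[0] >= xmin, lambda p, q: ix(p, q, xmin))
    pts = clip_edge(pts, lambda p: p[0] <= xmax, lambda p, q: ix(p, q, xmax))
    pts = clip_edge(pts, lambda p: p[1] >= ymin, lambda p, q: iy(p, q, ymin))
    pts = clip_edge(pts, lambda p: p[1] <= ymax, lambda p, q: iy(p, q, ymax))
    return pts
```

For convex input the result is always a single simple polygon; the degenerate cases only arise when a concave polygon is split into several pieces that the algorithm stitches together with overlapping edges.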

Related

Efficient algorithm to search convex polygons in a 2D space

I have a 2D space in which there is a convex polygon (in this case the red square) and a large number of triangles. My goal is to select the triangles that are
belonging to,
intersecting, or
surrounding
the red square (for clarity, the triangles I am searching for are the ones in black).
I am currently using a brute force approach by checking the listed conditions on each triangle.
Is there a more efficient algorithm or any kind of heuristic which can be applied to reduce the time complexity?
Improve efficiency with a pre-filter
Since checking orthogonal coordinates is relatively cheap (perhaps via a quad-tree), consider first computing a bounding box for each object. It is then easy to eliminate objects that cannot possibly meet the search criteria; the remaining objects can then be tested with a more time-intensive approach.
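A minimal sketch of the bounding-box pre-filter in Python (the names `bbox`, `boxes_overlap` and `prefilter` are illustrative, not from any library):

```python
def bbox(points):
    """Axis-aligned bounding box (xmin, ymin, xmax, ymax) of a point list."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def boxes_overlap(a, b):
    """True if two axis-aligned boxes overlap (including touching)."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def prefilter(triangles, query_polygon):
    """Keep only triangles whose bounding box overlaps the query's box;
    only these survivors need the expensive exact tests."""
    qb = bbox(query_polygon)
    return [t for t in triangles if boxes_overlap(bbox(t), qb)]
```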
From a comment of yours, I infer that preprocessing the triangles is not an option. So unless the triangles are known to have some special property that can be exploited, you can't avoid an exhaustive comparison of all triangles against the polygon (using the separating axis theorem, among other tests).
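The separating-axis test could look like the following Python sketch. It returns True whenever the two convex shapes have a non-empty intersection, which covers all three cases above (belonging, intersecting, surrounding), since each of them implies a non-empty intersection:

```python
def convex_overlap(poly_a, poly_b):
    """Separating axis theorem for two convex polygons given as vertex
    lists. Returns True if they intersect (including touching and full
    containment). If no edge normal separates the projections, the
    convex shapes must overlap."""
    for poly in (poly_a, poly_b):
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            # Edge normal; orientation does not matter for SAT.
            ax, ay = y1 - y2, x2 - x1
            proj_a = [ax * x + ay * y for x, y in poly_a]
            proj_b = [ax * x + ay * y for x, y in poly_b]
            if max(proj_a) < min(proj_b) or max(proj_b) < min(proj_a):
                return False  # found a separating axis
    return True
```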

Calculating the similarity of 2 sets of convex polygons?

I have generated 2 sets of convex polygons with two different algorithms. Every polygon in each set is described by an array of coordinates [n_points, xy_coords], so a square is described by a [4, 2] array, while a pentagon with rounded corners has shape [80, 2], with the extra 75 points describing the curvature.
My goal is to quantify how similar the two sets of geometries are.
Can anyone recommend any methods of doing so?
So far I've come across:
Hamming Distance
Hausdorff distance
I would like to know what other robust measures of similarity exist for 2D polygons. The method ideally needs to be robust across convex polygons and give a measure of similarity between large sets (10,000+ polygons each).
As I understand it, you are looking for shape similarity or shape analysis algorithms. You will find more robust methods here: https://www.cs.princeton.edu/courses/archive/spr00/cs598b/lectures/polygonsimilarity/polygonsimilarity.pdf
and
https://student.cs.uwaterloo.ca/~cs763/Projects/phil.pdf
Turning function;
Graph matching;
Shape signature.
Assuming that both polygons are aligned, centered and convex, you could try to assess similarity by computing the ratio of the area of the smaller polygon to the area of the convex hull of both polygons.
Ratio = Min(Area(A), Area(B)) / Area(ConvexHull(A, B))
The ratio will be 1 if both polygons are equal, and close to 0 if they differ severely, like a point and a square.
Area of the polygon can be computed in O(N) time. See Area of polygon.
The convex hull can be computed in O(N log N). See Convex Hull Computation. This could be sped up to O(N) by merge-sorting the already-sorted vertices of both polygons and applying only the second phase of the Graham scan algorithm.
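Put together, the ratio measure could be sketched like this in Python, using the shoelace formula for the O(N) area and Andrew's monotone chain (a Graham-scan variant) for the hull; the function names are illustrative:

```python
def shoelace_area(poly):
    """O(N) polygon area via the shoelace formula (absolute value)."""
    s = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def convex_hull(points):
    """Andrew's monotone chain, O(N log N); returns CCW hull vertices."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def similarity(a, b):
    """Min(Area(A), Area(B)) / Area(ConvexHull(A, B)) in [0, 1]."""
    hull = convex_hull(list(a) + list(b))
    return min(shoelace_area(a), shoelace_area(b)) / shoelace_area(hull)
```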

calculating bounded polygon from intersecting linestrings

I am using Boost Geometry and I am trying to calculate a "bounded" polygon (see image below) from intersecting polylines (2D linestrings in Boost Geometry). Currently, my approach is i) to get all the intersection points between these lines and then ii) "split" each line at the intersection points. However, this approach is rather brute-force. Does anyone know if Boost Geometry has something more efficient for this?
Moreover, how could I get the segment (or vector of points) of each linestring that lies between two intersection points? For example, for the green linestring, if I have the two red intersection points, how can I get the linestring between these two points (a vector of points containing the two red intersection points and the two interior blue points)? Is there any "split"-like functionality in Boost Geometry?
Any suggestion is much appreciated. Thanks a lot in advance.
From the given description, it seems that the (poly)lines intersect in pairs to form a single loop, so that the inner polygon is well defined. If this is not true, the solution is not unique.
As the number of lines is small, exhaustive search for the pairwise intersections will not be a big sin. For 5 (poly)lines, there are 10 pairs to be tried, while you expect 5 intersections. Forming the loop from the intersections is not a big deal.
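The exhaustive pairwise search is also simple to write by hand, should Boost Geometry prove inconvenient. A plain Python sketch using the parametric segment-intersection formula (collinear overlaps are not handled):

```python
def seg_intersection(p1, p2, p3, p4):
    """Intersection point of segments p1p2 and p3p4, or None.
    Solves p1 + t*(p2-p1) == p3 + u*(p4-p3) for t, u in [0, 1]."""
    d1x, d1y = p2[0] - p1[0], p2[1] - p1[1]
    d2x, d2y = p4[0] - p3[0], p4[1] - p3[1]
    denom = d1x * d2y - d1y * d2x
    if denom == 0:
        return None  # parallel; collinear overlap not handled
    t = ((p3[0] - p1[0]) * d2y - (p3[1] - p1[1]) * d2x) / denom
    u = ((p3[0] - p1[0]) * d1y - (p3[1] - p1[1]) * d1x) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (p1[0] + t * d1x, p1[1] + t * d1y)
    return None

def polyline_intersections(a, b):
    """All intersection points between two polylines (vertex lists),
    by exhaustive comparison of their segments."""
    hits = []
    for i in range(len(a) - 1):
        for j in range(len(b) - 1):
            p = seg_intersection(a[i], a[i + 1], b[j], b[j + 1])
            if p is not None:
                hits.append(p)
    return hits
```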
What matters most is if Boost Geometry uses an efficient algorithm to intersect polylines. Unsure if this is documented. For moderate numbers of vertices (say below 100), this is not so important.
If the number of points is truly large, you can resort to an efficient line segment intersection algorithm, which lowers the complexity from O(n²+k) down to O((n+k) log n). See https://en.wikipedia.org/wiki/Bentley%E2%80%93Ottmann_algorithm. You can process all polylines in a single go.
If your polylines have specific properties, they can be exploited to ease the task.
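For the "split" sub-question, one straightforward approach is to locate each intersection point on the polyline and slice between the two locations. A Python sketch, assuming the first cut point precedes the second along the line (the names `locate` and `split_between` are illustrative, not Boost Geometry API):

```python
def locate(points, p, eps=1e-9):
    """Index i such that p lies on segment points[i]..points[i+1],
    or None. Uses a cross-product collinearity test with tolerance."""
    for i in range(len(points) - 1):
        ax, ay = points[i]
        bx, by = points[i + 1]
        cross = (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)
        dot = (p[0] - ax) * (bx - ax) + (p[1] - ay) * (by - ay)
        length2 = (bx - ax) ** 2 + (by - ay) ** 2
        if abs(cross) <= eps and -eps <= dot <= length2 + eps:
            return i
    return None

def split_between(points, p, q):
    """Sub-polyline from p to q: both cut points plus the interior
    vertices between them. Assumes p occurs before q along the line."""
    i, j = locate(points, p), locate(points, q)
    assert i is not None and j is not None and i <= j
    return [p] + points[i + 1 : j + 1] + [q]
```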

Distance from ellipses to static set of polygons

I have a static set of simple polygons (they may be nonconvex but are not self-intersecting) and a large number of query ellipses. Assume that this is all being done in 2D. I need to find the distance between each ellipse and the closest polygon to that ellipse. Distance is defined as the shortest distance between any two points on the ellipse and polygon respectively. If the ellipse intersects a polygon, then we can say the distance is 0 or assign some negative value.
A brute force approach would simply compute the distance between each ellipse and each polygon and return the lowest distance in O(mn) time, where m is the number of polygons and n is the average number of vertices per polygon. I would like to reduce the m term here because I think I can cull the number of polygons being considered through some spatial analysis.
I've considered a few approaches, including Voronoi diagrams, R-trees and k-d trees. However, most of these seem to involve points and I'm not sure how to extend them to polygons. I think the most promising approach involves computing bounding boxes for each polygon and the ellipse and using an R-tree to find some set of nearby polygons. However, I'm not quite sure about the best way to find this close set of polygons. Or perhaps there's a better way that I'm overlooking.
Using bounding boxes or disks has the benefit of reducing the cost of an approximate ellipse/polygon distance test to O(1), and it gives you both a lower and an upper bound on the true distance.
Assume you use disks and also enclose the ellipse in a disk. You will need to perform a modified nearest neighbors search, that enumerates the disks such that the lower bound of their distance to the query disk is smaller than the best upper bound found so far.
This can be accelerated by means of a k-D tree (D=2) built on the disk centers. You can enhance every node in the k-D tree with the radius of the largest and smallest disks in the subtree it roots. During the search you will use this information to evaluate the bounds without knowing the exact radii of the disks, but the deeper you go in the tree, the better you know them.
Perform a search to obtain the tightest upper bound on the distance, then a second search to enumerate all the disks with a lower bound smaller than the tightest upper bound. This will reduce the number of disks to be considered.
You can also use bounding boxes, and store the min/max width/height of the boxes in the tree nodes.
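The disk bounds themselves are cheap to compute. A minimal Python sketch for shapes enclosed in disks given as (center, radius):

```python
import math

def disk_distance_bounds(c1, r1, c2, r2):
    """Lower and upper bounds on the distance between two shapes,
    each enclosed in a disk (center c, radius r). Any point of a shape
    lies within its disk, so the shape distance is at least
    d - r1 - r2 (clamped at 0) and at most d + r1 + r2, where d is
    the center-to-center distance."""
    d = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
    lower = max(0.0, d - r1 - r2)
    upper = d + r1 + r2
    return lower, upper
```

During the tree search described above, a candidate disk can be discarded as soon as its lower bound exceeds the best upper bound found so far.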

Polygon adding algorithm

I want to do the following: I have some faces in 3D space as polygons. I have a projection direction and a projection plane. I have a convex clipping polygon in the projection plane. I want to get a polygon representing the shadow of all the faces clipped on the plane.
What I do till now: I calculate the projections of the faces as polygons in the projection plane.
I could use the Sutherland–Hodgman algorithm to clip each single projected polygon to the desired area.
Now my question: How can I combine the projected (maybe clipped) polygons together? Do I have to use algorithms like Margalit/Knott?
The algorithm should be quite efficient because it has to run quite often. So which algorithm do you suggest?
Is it maybe possible to modify the Sutherland–Hodgman algorithm to solve the merging problem?
I'm currently implementing this algorithm (union of n concave polygons) using Bentley–Ottmann to find all edge intersections, while keeping track of the polygon nesting level on both sides of each edge segment (how many overlapping polygons each side of the segment touches). Edges that have a nesting level of 0 on one side are output to the result polygon. It's fairly tricky to get right. An existing solution with a different algorithm design can be found at:
http://sourceforge.net/projects/polyclipping/
