Best nesting algorithm

I have spent time looking for information on the best algorithm for nesting irregular polygons in 2D, using both manual and automatic positioning. I need such an algorithm in the context of CAD/CAM software. Here are the real possibilities I've found so far:
Separating Axis Theorem: a fairly quick and simple algorithm to implement, but the drawback I find with it is that it only works with convex polygons. To handle concave polygons, a convex decomposition would need to be done first, which increases the run-time and requires implementing another algorithm to decompose the concave polygon into convex pieces.
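For concreteness, here is a minimal sketch of the SAT overlap test for two convex polygons (Python; the counter-clockwise vertex order is an assumption):

```python
# Minimal SAT sketch: two convex polygons, given as CCW lists of (x, y).

def edge_normals(poly):
    """Yield one (unnormalized) outward normal per polygon edge."""
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        yield (y2 - y1, x1 - x2)

def project(poly, axis):
    """Project every vertex onto the axis; return the covered interval."""
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def convex_overlap(a, b):
    """True if the convex polygons overlap: no edge normal separates them."""
    for axis in list(edge_normals(a)) + list(edge_normals(b)):
        min_a, max_a = project(a, axis)
        min_b, max_b = project(b, axis)
        if max_a < min_b or max_b < min_a:
            return False  # found a separating axis
    return True
```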
Nesting by a potential (energy) function: by calculating the partial derivatives along the X and Y axes, you can obtain the escape direction a polygon should take when there is a collision between the two polygons. I tested this energy function, and the three major problems I encountered are: first, it gets trapped in local minima; second, handling nesting when the collision occurs on top of a piece; and finally, the execution time is very high.
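To make this concrete, here is a hypothetical sketch of the escape-direction idea, where overlap_area is a stand-in for any routine that returns the overlap between the two pieces as a function of one piece's offset (e.g. via a polygon-clipping library); the step size and stopping rule are assumptions:

```python
def escape_direction(overlap_area, x, y, h=1e-3):
    """Numerical partial derivatives of the overlap 'energy' at offset (x, y);
    the negative gradient points towards less overlap."""
    dx = (overlap_area(x + h, y) - overlap_area(x - h, y)) / (2 * h)
    dy = (overlap_area(x, y + h) - overlap_area(x, y - h)) / (2 * h)
    return -dx, -dy

def separate(overlap_area, x, y, step=0.1, max_iter=1000):
    """Slide the piece downhill until the collision is resolved."""
    for _ in range(max_iter):
        if overlap_area(x, y) == 0:
            return x, y  # pieces no longer collide
        gx, gy = escape_direction(overlap_area, x, y)
        x, y = x + step * gx, y + step * gy
    return x, y  # gave up: possibly stuck in a local minimum, as noted above
```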
Using the no-fit polygon: using the no-fit polygon for nesting could be quite interesting. I have read several papers on the subject, although there is very little online documentation about it. I'm not sure whether it can really be a useful choice, and I still have several doubts about the details of this approach.
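For two convex pieces, the no-fit polygon is the Minkowski sum of one polygon with the point-reflection of the other; a minimal sketch of that convex Minkowski sum is below (concave pieces would first need a decomposition or an orbiting construction):

```python
def minkowski_sum_convex(p, q):
    """Minkowski sum of two convex polygons given as CCW (x, y) lists.
    The no-fit polygon of A and B is minkowski_sum_convex(A, reflected B),
    where reflected B is [(-x, -y) for (x, y) in B]."""
    def from_bottom(poly):
        # Rotate the list so it starts at the bottom-most, then left-most vertex.
        i = min(range(len(poly)), key=lambda k: (poly[k][1], poly[k][0]))
        return poly[i:] + poly[:i]
    p = from_bottom(p); p += [p[0], p[1]]
    q = from_bottom(q); q += [q[0], q[1]]
    result, i, j = [], 0, 0
    while i < len(p) - 2 or j < len(q) - 2:
        result.append((p[i][0] + q[j][0], p[i][1] + q[j][1]))
        # Compare edge directions by cross product and advance along
        # whichever polygon's edge turns less.
        cross = ((p[i + 1][0] - p[i][0]) * (q[j + 1][1] - q[j][1])
                 - (p[i + 1][1] - p[i][1]) * (q[j + 1][0] - q[j][0]))
        if cross >= 0 and i < len(p) - 2:
            i += 1
        if cross <= 0 and j < len(q) - 2:
            j += 1
    return result
```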
Any idea which of these algorithms to choose? Or do you know of any other options that could be used? I'm a little confused :-).
Thank you very much.

Related

How does the rotational plane sweep algorithm work?

My task is to implement the construction of a visibility graph from a given set of simple polygons. The polygons have positive integer coordinates and can be non-convex. I wanted to implement the rotational plane sweep algorithm, which was mentioned in the lecture but not explained very well.
The only other source I could find about this algorithm was this page, which did not make it fully clear either: https://tanergungor.blogspot.com/2015/04/robot-navigation-rotational-sweep.html
I would appreciate it if someone could explain the rotational plane sweep algorithm to an extent that I can understand.
Here is a screenshot of an example obstacle arrangement with my attempted solution, which is not yet working and is based more on trial and error than on understanding and implementing the actual algorithm. In this example the algorithm uses just a single vertex that is not located on any polygon.

Reliable test for intersection of two Bezier curves

How can I reliably find out whether two Bézier curves intersect? By "reliably" I mean the test will answer "yes" only when the curves intersect, and "no" only when they don't. I don't need to know the parameter values at which the intersection occurs. I would also like to use floating-point numbers in the implementation.
I found several answers here that use the curves' bounding boxes for the test: this is not what I'm after, as such a test may report an intersection even when the curves don't intersect.
The closest thing I have found so far is the "bounding wedge" of Sederberg and Meyers, but it "only" distinguishes between at-most-one and two-or-more intersections, whereas I want to distinguish between zero and one-or-more intersections.
I am assuming cubic Bézier curves.
The most reliable method for reporting intersections using floating-point computation is probably to actually find them, combined with error analysis.
The main problem when floating-point computations are involved is inconsistency of the computed results with respect to topology. Unfortunately this is unavoidable if you need to compute anything in computational geometry within a reasonable amount of time.
So instead of stressing over the right algorithm for intersection calculation, picking a simple one and implementing error analysis is probably the solution.
I would try to implement an efficient subdivision algorithm like Bézier clipping (or a variant of quadratic clipping, e.g. Nicholas North's Geo-clip), with running error analysis to compute tight error bounds so that we don't "miss" intersections.
To elaborate, the main sources of floating-point (double precision) error in these subdivision-based algorithms are:
Truncation error: especially the error in the input coefficients, which are also finite; we can't do much about this within the algorithm.
Roundoff error during De Casteljau subdivision and point evaluation.
I have used the running error bounds for De Casteljau's algorithm, explained here, along with the Geo-clip algorithm. It is fast and robust. (By the way, this thesis is a good read in general if you want to make polynomial/Bézier algorithms more robust.)
Assuming you know the basics of the Bézier clipping algorithm, the general idea is to expand the hybrid Bézier curve (in the first paper linked) and the fat line appropriately with the error bounds at each clip.
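For reference, the subdivision step those running bounds are attached to is De Casteljau's algorithm; a bare sketch without the error tracking (2D control points assumed):

```python
def de_casteljau_split(ctrl, t=0.5):
    """Split a Bézier curve, given by its control points, at parameter t.
    Returns the control points of the two halves."""
    left, right = [ctrl[0]], [ctrl[-1]]
    pts = list(ctrl)
    while len(pts) > 1:
        # One round of linear interpolation between consecutive points.
        pts = [((1 - t) * px + t * qx, (1 - t) * py + t * qy)
               for (px, py), (qx, qy) in zip(pts, pts[1:])]
        left.append(pts[0])
        right.append(pts[-1])
    return left, right[::-1]
```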
Some other unrelated ideas:
You could try a variant of the Bentley-Ottmann sweepline algorithm. First you have to split the Bézier curves into x-monotone segments, then look at their y-orderings as you sweep across them. This method has a few disadvantages, since Bézier curves can also intersect with multiplicity greater than one; think of tangential intersections. Doing an error analysis may be difficult here (whenever you compute a y value, some floating-point error is involved).
Interval Projected Polyhedron algorithm: this uses rounded interval arithmetic for robustness, but the algorithm gets quite complicated for 2D Bézier curves.
There are a few cases you might come across:
Self intersections
Overlapping (coincident) curves: Subdivision algorithms will keep going in this case. This can be easy to check though.
Good luck :)
Assuming cubic Béziers, the intersection points are real roots of a 9th-degree polynomial. The existence of such roots within an interval (from negative to positive infinity for infinitely long curves, or 0 to 1 for your typical piecewise cubic Béziers) can be checked robustly using a Sturm sequence. Note that this only works if we allow extending one of the curves to infinity. The algorithm has no loops and uses only basic arithmetic operations (add, subtract and multiply; division should be avoidable).
For maximum robustness, you could use arbitrary-precision math. Since the number of steps is constant, the maximum possible number of digits in all temporary results is bounded. That way, your algorithm will always return the correct result, no matter how pathological the input (e.g. curves barely touching).
It might be possible to use ordinary floating point first and detect potential pathological cases (intermediate results becoming zero when adding or subtracting previous intermediate results).
The formulas for getting the polynomial coefficients from the Bézier control points are truly a sight to behold, but luckily you don't have to work them out;
they're right here on GitHub.
There's a thread from 2004, Closed-form of Bezier intersection test, on comp.graphics.algorithms with more details.
If you're dealing with quadratic Béziers, the polynomial will only be 4th degree.
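A compact sketch of the Sturm-sequence root count, assuming the coefficients (highest degree first) have already been obtained from formulas like those linked above, and assuming the polynomial is square-free; a truly robust version would use the arbitrary-precision arithmetic discussed:

```python
def poly_eval(p, x):
    """Horner evaluation; coefficients are highest degree first."""
    r = 0.0
    for c in p:
        r = r * x + c
    return r

def poly_rem(a, b):
    """Remainder of the polynomial division a / b (synthetic division)."""
    a = list(a)
    while len(a) >= len(b):
        f = a[0] / b[0]
        for i in range(1, len(b)):
            a[i] -= f * b[i]
        a.pop(0)  # the leading term has been cancelled
    return a or [0.0]

def sturm_sequence(p):
    """p, p', then negated remainders until a constant is reached."""
    n = len(p) - 1
    seq = [list(p), [(n - k) * c for k, c in enumerate(p[:-1])]]
    while len(seq[-1]) > 1:
        seq.append([-c for c in poly_rem(seq[-2], seq[-1])])
    return seq

def sign_changes(seq, x):
    vals = [v for v in (poly_eval(p, x) for p in seq) if v != 0]
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

def real_roots_in(p, lo=0.0, hi=1.0):
    """Number of distinct real roots of p in the half-open interval (lo, hi]."""
    seq = sturm_sequence(p)
    return sign_changes(seq, lo) - sign_changes(seq, hi)
```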
An idea off the top of my head: map the curves so they are in the same domain, such that you can subtract them, and then just do root finding. There are lots and lots of numerical methods for root finding.
If you need to see if two curves intersect visually, say real-time screen graphics in a game or something, then the easiest way to do so, by far, is to simply compare their pixels.
Get the bounding box and pixel lookup tables (LUTs) for both curves, and check whether the bounding boxes overlap: no overlap? Done, there is no intersection. Overlap? Sort the LUTs with a fast sorter and then just run a comparison. The moment you find a single match, you're done: there is an overlap, and you don't care where.
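A minimal sketch of that comparison for two cubic Béziers, using Python sets in place of sorted LUTs (the crude sampling rasterizer and its step count are assumptions):

```python
def bezier_pixels(ctrl, steps=1000):
    """Crudely rasterize a cubic Bézier into a set of integer pixels
    by dense parameter sampling."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = ctrl
    pixels = set()
    for i in range(steps + 1):
        t = i / steps
        m = 1 - t
        x = m*m*m*x0 + 3*m*m*t*x1 + 3*m*t*t*x2 + t*t*t*x3
        y = m*m*m*y0 + 3*m*m*t*y1 + 3*m*t*t*y2 + t*t*t*y3
        pixels.add((int(x), int(y)))
    return pixels

def curves_touch(a, b):
    """Visual-accuracy test: do the two curves share any pixel?"""
    return not bezier_pixels(a).isdisjoint(bezier_pixels(b))
```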
If you have to do this for lots of curves: use a library that does this for you; don't waste your time implementing it yourself, as you're not going to be as efficient (for large collections, things like oct-trees and scanline checks become far more efficient).
However, if you need to know if there is absolute, mathematically precise overlap, say for as-correct-as-possible design work, then you can't cut corners: actually run a real intersection detection algorithm to find all possible intersection points. Real-time is mostly irrelevant in this setting, you can spend the few more cycles to run a proper detection algorithm.

Looking for a "closing curves connecting with respect to points" algorithm

I am looking for an algorithm that can connect points together with a continuous curved line. Imagine drawing from point a to b to c until the last point; as you draw from point to point, the line must be a curve that is continuous with respect to the previous and next points, as if the given points were just samples of a closed loop. Please see the figure below for an illustration.
Is there an algorithm for something like this?
*The circles in the figure are my list of points.
Given that your points are ordered, spline interpolation is definitely the best way to go here (as indicated by bo1024's comment). I highly recommend the following notes:
http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/
And specifically the section here would be most relevant to getting a closed loop like you asked for:
http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/spline/B-spline/bspline-curve-closed.html
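Assuming SciPy is available, a closed interpolating B-spline like the one in those notes is only a few lines; per=True makes the spline periodic, and repeating the first point marks the period:

```python
import numpy as np
from scipy.interpolate import splev, splprep

# Ordered sample points of the loop (hypothetical data).
points = np.array([(0, 0), (2, 1), (3, 3), (1, 4), (-1, 2)], dtype=float)
closed = np.vstack([points, points[:1]])  # repeat the first point

# s=0 forces interpolation through the points; per=True closes the curve.
tck, u = splprep(closed.T, s=0, per=True)

# Sample the closed curve at 200 parameter values, e.g. for drawing.
x, y = splev(np.linspace(0, 1, 200), tck)
```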
EDIT: If the curve has to pass through the points, then the unique degree-n solution is the Lagrange interpolating polynomial. You can just build one polynomial for each component of your point vectors using the formula on the wiki page:
http://en.wikipedia.org/wiki/Lagrange_polynomial
Unfortunately, Lagrange interpolation can be pretty noisy if you have too many points. As a result, I would still recommend using some fixed-degree spline interpolation. Instead of B-splines, another option is Hermite polynomials:
http://en.wikipedia.org/wiki/Cubic_Hermite_spline
These will guarantee that the curve passes through the points. To get a closed curve, you need to repeat the first d points of your curve when solving for the coefficients, where d is the degree of the Hermite spline you are using to approximate your points.
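One concrete way to realize this is a closed Catmull-Rom spline, i.e. a cubic Hermite spline whose tangents are taken from the neighbouring points, with indices wrapped so the loop closes; a sketch:

```python
def closed_catmull_rom(points, samples=20):
    """Sample a closed Catmull-Rom spline through a list of (x, y) points."""
    n, curve = len(points), []
    for i in range(n):
        # Wrapping the indices is what closes the curve.
        p0, p1, p2, p3 = (points[(i - 1) % n], points[i],
                          points[(i + 1) % n], points[(i + 2) % n])
        for s in range(samples):
            t = s / samples
            t2, t3 = t * t, t * t * t
            # Standard uniform Catmull-Rom basis.
            curve.append(tuple(
                0.5 * (2 * p1[k]
                       + (p2[k] - p0[k]) * t
                       + (2 * p0[k] - 5 * p1[k] + 4 * p2[k] - p3[k]) * t2
                       + (3 * p1[k] - p0[k] - 3 * p2[k] + p3[k]) * t3)
                for k in range(2)))
    return curve
```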
The problem is very similar to the travelling salesman problem; you may be able to extend some of the algorithms used to solve it to suit your case.
For instance, evolutionary algorithms are easy to adapt, and you will find lots of references about using them to solve the TSP.

Simplified (or smooth) polygons that contain the original detailed polygon

I have a detailed 2D polygon (representing a geographic area) that is defined by a very large set of vertices. I'm looking for an algorithm that will simplify and smooth the polygon, (reducing the number of vertices) with the constraint that the area of the resulting polygon must contain all the vertices of the detailed polygon.
For context, here's an example of the edge of one complex polygon:
My research:
I found the Ramer–Douglas–Peucker algorithm, which will reduce the number of vertices, but the resulting polygon will not contain all of the original polygon's vertices. See the Ramer-Douglas-Peucker article on Wikipedia.
I considered expanding the polygon (I believe this is also known as outward polygon offsetting). I found these questions: Expanding a polygon (convex only) and Inflating a polygon. But I don't think this will substantially reduce the detail of my polygon.
Thanks for any advice you can give me!
Edit
As of 2013, most links below are not functional anymore. However, I've found the cited paper, algorithm included, still available at this (very slow) server.
Here you can find a project dealing with exactly this issue. Although it works primarily with an area "filled" by points, you can set it to work with a "perimeter"-type definition like yours.
It uses a k-nearest neighbors approach for calculating the region.
Samples:
Here you can request a copy of the paper.
Apparently they planned to offer an online service for requesting calculations, but I didn't test it, and it probably isn't running.
HTH!
I think Visvalingam's algorithm can be adapted for this purpose, by skipping the removal of triangles that would reduce the area.
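As a sketch of that adaptation: in Visvalingam's method each vertex is weighted by the area of the triangle it forms with its neighbours, and here only reflex (concave) vertices are removed, so every removal can only grow the polygon. Counter-clockwise vertex order is assumed, and the self-intersection issue mentioned in the next answer is ignored:

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def simplify_outward(poly, target):
    """Repeatedly delete the reflex vertex whose removal adds the least
    area, until the CCW polygon has at most `target` vertices."""
    pts = list(poly)
    while len(pts) > target:
        best, best_area = None, None
        for i in range(len(pts)):
            u, v, w = pts[i - 1], pts[i], pts[(i + 1) % len(pts)]
            c = cross(u, v, w)
            if c < 0:  # reflex vertex: dropping it expands the polygon
                area = -c / 2.0
                if best is None or area < best_area:
                    best, best_area = i, area
        if best is None:
            break  # only convex vertices remain; cannot expand any further
        del pts[best]
    return pts
```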
I had a very similar problem: I needed an inflating simplification of polygons.
I used a simple algorithm that either removes a concave point (this will increase the polygon's area) or removes a convex edge (between two convex points) and extends the adjacent edges. In either case, the operation removes one point from the polygon.
I chose to remove the point or edge that leads to the smallest area variation. You can repeat this process until the simplification is acceptable to you (for example, no more than 200 points).
The two main difficulties were making the algorithm fast (by avoiding computing a vertex/edge removal's variation twice and by keeping the candidates sorted) and avoiding the introduction of self-intersections in the process (not very easy to do or to explain, but possible with limited computational complexity).
In fact, after looking more closely, it is an idea similar to Visvalingam's, adapted for edge removal.
That's an interesting problem! I never tried anything like this, but here's an idea off the top of my head... apologies if it makes no sense or wouldn't work :)
Calculate a convex hull; it might be way too big / imprecise
Divide the hull into N slices, for example by joining each of the hull's vertices to the center
Calculate the intersection of your object with each slice
Repeat recursively for each intersection (calculating the intersection's hull, etc.)
Each level of recursion should give a better approximation... when you've reached a satisfying level, merge all the hulls from that level to get the final polygon.
Does that sound like it could do the job?
To some degree I'm not sure what you are trying to do, but it seems you have two very good answers. One is Ramer–Douglas–Peucker (DP) and the other is computing the alpha shape (also called a concave hull, non-convex hull, etc.). I found a more recent paper describing alpha shapes and linked it below.
I personally think DP with polygon expansion is the way to go. I'm not sure why you think it won't substantially reduce the number of vertices. With DP you supply a factor, and you can make it anything you want, to the point where you end up with a triangle no matter what your input is. Picking this factor can be hard, but in your case I think it's the best method. You should be able to determine the factor based on the size of the largest piece of detail you want to eliminate. You can do this by direct testing or by calculating it from your source data.
http://www.it.uu.se/edu/course/homepage/projektTDB/ht13/project10/Project-10-report.pdf
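For reference, a minimal sketch of the classic DP recursion on an open polyline, where epsilon is the factor discussed above (for a closed polygon you would run it on the ring split at two extreme vertices):

```python
def rdp(points, epsilon):
    """Ramer-Douglas-Peucker simplification of an open polyline."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]

    def chord_dist(p):
        # Perpendicular distance from p to the chord (first -> last point).
        num = abs((y2 - y1) * p[0] - (x2 - x1) * p[1] + x2 * y1 - y2 * x1)
        den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5
        return num / den if den else ((p[0] - x1) ** 2 + (p[1] - y1) ** 2) ** 0.5

    i, d = max(((i, chord_dist(p)) for i, p in enumerate(points[1:-1], 1)),
               key=lambda item: item[1])
    if d <= epsilon:
        return [points[0], points[-1]]  # everything in between is within tolerance
    # Keep the farthest point and recurse on both halves.
    return rdp(points[:i + 1], epsilon)[:-1] + rdp(points[i:], epsilon)
```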
I've written a simple modification of Douglas-Peucker that might be helpful to anyone having this problem in the future: https://github.com/prakol16/rdp-expansion-only
It's identical to DP, except that it pushes a line segment outwards a bit if the points it would remove are outside the polygon. This guarantees that the resulting simplified polygon contains the entire original polygon, yet it has almost the same number of line segments as the output of the original DP algorithm and is usually reasonably good at approximating the original shape.

How do I find the most complex convex polygon enclosing a set of points?

I have a list of (about 200-300) 2D points. I need to find the polygon that encloses all of them. The polygon has to be convex, and it should be as complex as possible (i.e. not a rectangular bounding box). It should find this in as little time as possible, but there are no restrictions on memory.
You may answer in pseudocode or any language you want to use.
Sounds like you're looking for a convex hull algorithm? It's been more than a decade since I was taught about these, but the name Graham Scan sticks in my mind and would probably be where I'd start.
Take a look at Graham's Algorithm.
Qhull is good software for computing 2D convex hulls.
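Since any language is allowed, here is a sketch of Andrew's monotone chain, a close relative of the Graham scan; for 200-300 points its O(n log n) running time is negligible:

```python
def convex_hull(points):
    """Return the convex hull of 2D points in counter-clockwise order
    (Andrew's monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        chain = []
        for p in seq:
            # Pop while the last two points and p do not make a left turn.
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain[:-1]  # last point is the first point of the other half

    return half(pts) + half(reversed(pts))
```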
If it is a real world problem - as in, not an academic one - there's never really a reason to solve such a generic problem yourself.
