Reliable test for intersection of two Bezier curves - computational-geometry

How to reliably find out whether two Bezier curves intersect? By "reliably" I mean the test will answer "yes" only when the curves intersect, and "no" only when they don't intersect. I don't need to know what parameters the intersection was found at. I also would like to use floating-point numbers in the implementation.
I found several answers here which use the curves' bounding boxes for the test: this is not what I'm after, as such a test may report an intersection even when the curves don't intersect.
The closest thing I found so far is the "bounding wedge" by Sederberg and Meyers, but it "only" distinguishes between at-most-one and two-or-more intersections, whereas I want to distinguish between zero and one-or-more intersections.

I am assuming cubic bezier curves.
The most reliable method for reporting intersections, using floating point computation, is probably to find them, combined with error analysis.
The main problem, when floating-point computations are involved, is inconsistency of the computed results with respect to topology. Unfortunately this is unavoidable if you need to compute anything in computational geometry within a reasonable amount of time.
So instead of stressing over the right algorithm for intersection calculation, pick a simple one and implement error analysis; that is probably the solution.
I would try to implement an efficient subdivision algorithm like bezier clipping (or a variant of quadratic clipping, Nicholas North's Geo-clip), combined with running error analysis to compute tight error bounds so that we don't "miss" intersections.
To elaborate, the main sources of floating-point (double precision) error in these subdivision-based algorithms are:
Truncation error: especially the error in the input coefficients etc., which are themselves finite; we can't do much about this within the algorithm.
Roundoff error during De Casteljau subdivision and point evaluation.
I have used the running error bounds for De Casteljau's algorithm, explained here, along with the Geo-clip algorithm. It is fast and robust. (By the way, this thesis, in general, is a good read if you want to make polynomial/bezier algorithms more robust.)
Assuming you know the basics of the bezier clipping algorithm, the general idea is to expand the hybrid bezier curve (in the first paper linked) and the fat line appropriately with the error bounds for each clip.
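To make the De Casteljau part concrete, here is a minimal sketch of the subdivision step with a very crude running bound on the accumulated rounding error (one epsilon-scale term per level of lerps). The bound is only illustrative; the thesis linked above derives the proper running error bounds you would use to fatten the clipped regions.

    import numpy as np

    EPS = np.finfo(float).eps

    def decasteljau_split(ctrl, t=0.5):
        """Split a cubic Bezier at t. ctrl is a (4, 2) array of control points.
        Returns the two halves plus a pessimistic rounding-error estimate."""
        pts = np.asarray(ctrl, dtype=float)
        left, right = [pts[0]], [pts[-1]]
        err = 0.0
        while len(pts) > 1:
            pts = (1.0 - t) * pts[:-1] + t * pts[1:]   # one level of lerps
            err += EPS * np.max(np.abs(pts))           # crude per-level error term
            left.append(pts[0])
            right.append(pts[-1])
        return np.array(left), np.array(right)[::-1], err

    left, right, err = decasteljau_split([(0, 0), (1, 2), (3, 2), (4, 0)])
    # 'err' would be used to expand the fat line / clipped interval so that
    # an intersection is never discarded by mistake.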
Some other unrelated ideas:
You can try a variant of the Bentley-Ottmann sweepline algorithm. First you have to split the bezier curves into x-monotone segments, and then look at their y-orderings as you sweep across them (a small sketch of the x-monotone splitting step is given after these ideas). This method has a few disadvantages, since bezier curves can also intersect with multiplicity greater than one - think of tangential intersections. Doing an error analysis may be difficult here (when you compute a y value, there is some floating-point error involved).
Interval Projected Polyhedron algorithm: this uses rounded interval arithmetic for robustness, but the algorithm for 2D Bezier curves gets quite complicated.
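As promised, here is a sketch of the x-monotone splitting step for a cubic: the split parameters are the roots of x'(t) = 0 inside (0, 1), which is just a quadratic. The helper names are mine.

    import numpy as np

    def x_monotone_splits(ctrl):
        """Parameters t in (0, 1) where x'(t) = 0 for a cubic Bezier."""
        x = np.asarray(ctrl, dtype=float)[:, 0]
        d0, d1, d2 = x[1] - x[0], x[2] - x[1], x[3] - x[2]
        # x'(t)/3 = (d0 - 2*d1 + d2)*t^2 + 2*(d1 - d0)*t + d0
        roots = np.roots([d0 - 2 * d1 + d2, 2 * (d1 - d0), d0])
        return sorted(t.real for t in roots
                      if abs(t.imag) < 1e-12 and 0.0 < t.real < 1.0)

Each returned t is then fed to a De Casteljau split (as in the sketch above) to cut the curve into x-monotone pieces for the sweep.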
There are a few cases you might come across:
Self intersections
Overlapping (coincident) curves: Subdivision algorithms will keep going in this case. This can be easy to check though.
Good luck :)

Assuming cubic beziers, the intersection points are real roots of a 9th degree polynomial. The existence of such roots within an interval (from negative to positive infinity for infinitely long curves, or 0 to 1 for your typical piecewise cubic beziers) can be checked robustly using a Sturm sequence. This will only work if we allow extending one of the curves to infinity. The algorithm will have no loops, and only use basic arithmetic operations (add, subtract and multiply, while division should be avoidable).
For maximum robustness, you could use arbitrary precision math. Since the number of steps is constant, the maximum possible number of digits in all temporary results is bounded. That way, your algorithm will always return the correct result, no matter how pathological the input (e.g. curves barely touching).
It might be possible to use ordinary floating point first, and detect potential pathological cases (intermediate results becoming zero, when adding/subtracting previous intermediate results).
The formulas for getting the polynomial terms from Bezier control points are truly a sight to behold, but luckily you don't have to work them out; they're right here on Github.
There's a thread from 2004, Closed-form of Bezier intersection test on comp.graphics.algorithms with more details.
If you're dealing with quadratic beziers, the polynomial will only be 4th degree.
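For illustration, here is a minimal floating-point sketch of the Sturm-sequence root count described above, using numpy's polynomial helpers. It takes the 9th-degree coefficients as given (see the formulas linked above); per the robustness remarks, you would redo the same steps in arbitrary-precision arithmetic for a guaranteed answer.

    import numpy as np

    def sturm_chain(coeffs):
        p = np.trim_zeros(np.asarray(coeffs, dtype=float), 'f')
        chain = [p, np.polyder(p)]
        while len(chain[-1]) > 1:
            _, rem = np.polydiv(chain[-2], chain[-1])
            rem = np.trim_zeros(np.atleast_1d(rem), 'f')
            if rem.size == 0:          # remainder vanished: chain ends here
                break
            chain.append(-rem)         # Sturm chain uses the negated remainder
        return chain

    def sign_changes(chain, x):
        vals = [np.polyval(p, x) for p in chain]
        signs = [v for v in vals if v != 0.0]
        return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

    def roots_in_interval(coeffs, a=0.0, b=1.0):
        # number of distinct real roots in (a, b], assuming p(a) != 0
        chain = sturm_chain(coeffs)
        return sign_changes(chain, a) - sign_changes(chain, b)

roots_in_interval(p, 0, 1) > 0 then means the parameter interval contains at least one intersection parameter (in exact arithmetic).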

An idea off the top of my head.
Map them so they are in the same domain, such that you can subtract them. Then just do root finding. There are lots and lots of numeric methods for root finding.

If you need to see if two curves intersect visually, say real-time screen graphics in a game or something, then the easiest way to do so, by far, is to simply compare their pixels.
Get the bbox and pixel lookup tables (LUTs) for both curves, check if there's bbox overlap: no overlap? done. There is no intersection. Overlap? sort the LUTs with a fast sorter, and then just run a compare. The moment you find a single match, you're done. There is an overlap, and you don't care where.
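A minimal sketch of that idea, with the sampled pixels stored in hash sets instead of sorted LUTs (the sampling density of 256 steps is an arbitrary choice; pick it from the curves' on-screen size):

    import numpy as np

    def cubic_point(ctrl, t):
        p = np.asarray(ctrl, dtype=float)
        u = 1.0 - t
        return u**3 * p[0] + 3 * u**2 * t * p[1] + 3 * u * t**2 * p[2] + t**3 * p[3]

    def pixel_lut(ctrl, steps=256):
        return {tuple(np.round(cubic_point(ctrl, t)).astype(int))
                for t in np.linspace(0.0, 1.0, steps)}

    def pixels_overlap(ctrl_a, ctrl_b):
        lut_a, lut_b = pixel_lut(ctrl_a), pixel_lut(ctrl_b)
        (ax0, ay0), (ax1, ay1) = np.min(list(lut_a), axis=0), np.max(list(lut_a), axis=0)
        (bx0, by0), (bx1, by1) = np.min(list(lut_b), axis=0), np.max(list(lut_b), axis=0)
        if ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0:
            return False                      # bounding boxes don't even touch
        return not lut_a.isdisjoint(lut_b)    # any shared pixel means overlap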
If you have to do this for lots of curves: use a library that does this for you, don't waste your time implementing it yourself, you're not going to be as efficient (for large collections things like oct-trees and scanline checks become far more efficient)
However, if you need to know if there is absolute, mathematically precise overlap, say for as-correct-as-possible design work, then you can't cut corners: actually run a real intersection detection algorithm to find all possible intersection points. Real-time is mostly irrelevant in this setting, you can spend the few more cycles to run a proper detection algorithm.

Related

Given an unordered list of n-dimensional points, how can I best find the smallest volume defined by n+1 of those points that encloses a given point?

Note: I'm very much from a programming background, not a mathematics background. This will become obvious very quickly.
Assume I have a bounded n-dimensional space - for example here I'll use n=2. In that space, I have a set of pre-defined points. (As it happens, I'm doing something questionable with genetic algorithms for finding minima in non-mathematically-solvable equations, but that's not directly relevant).
Next, I have defined a new point within that 2D space. I want to know which three (n+1) points form the smallest (or possibly nearest?) triangle that contains that point. Illustration here:
Now, as that illustration shows, I'm not entirely sure what I'm doing, in that I've failed to adequately describe the criteria by which the candidate triangles are judged - points 10, 5, and 8 would enclose the point within a triangle of smaller area, for example. This is because those specifics are somewhat flexible. What I really care about is:
Computational efficiency: I could potentially end up scaling this algorithm up to thousands of points in a hundred dimensions. As such, I need a solution that's better than exhaustively testing every potential convex hull against a specific equation, and ideally one with a decent big-O complexity. Presumably I'm going to need a more intelligent structure than a big unordered list to do this.
If I had a preferred criterion, it would be proximity of the hull vertices to the test point. However, if it's easier to build an algorithm which judges by area/volume or something else, I can live with that.
I need to be able to handle edge-cases, where the test-point is, for example, on the edge between two vertices.
Where do I even start with this? Some cursory googling suggests Voronoi diagrams might be a step in the right direction, is that correct? What are the right tools for this job? Any help would be greatly appreciated.

How to break a geometry into blocks?

I am certain there is already some algorithm that does what I need, but I am not sure what phrase to Google, or what the algorithm category is.
Here is my problem: I have a polyhedron made up of several contacting blocks (hyperslabs), i.e. the edges are axis-aligned and the angles between edges are 90°. There may be holes inside the polyhedron.
I want to break up this concave polyhedron into as few convex rectangular axis-aligned whole blocks as possible (if the original polyhedron is convex and has no holes, then it is already such a block, and therefore the solution). To illustrate, some 2-D images I made (but I need the solution for 3-D, and preferably, N-D):
I have this geometry:
One possible breakup into blocks is this:
But the one I want is this (with as few blocks as possible):
I have the impression that an exact algorithm may be too expensive (is this problem NP-hard?), so an approximate algorithm is suitable.
One detail that may make the problem easier, so that there could be a more appropriate/specialized algorithm for it, is that all edges have sizes that are multiples of some fixed value (you may assume all edge sizes are integer numbers, or that the geometry is made up of uniform tiny squares, or voxels).
Background: this is the structured grid discretization of a PDE domain.
What algorithm can solve this problem? What class of algorithms should I search for?
Update: Before you upvote this answer, I want to point out that it is slightly off-topic. The original poster has a question about the decomposition of a polyhedron with faces that are axis-aligned: given such a polyhedron, the question is how to decompose it into convex parts, in 3D and possibly nD. My answer is about the decomposition of a general polyhedron. So when I give an answer with a given implementation, that answer applies to the special case of an axis-aligned polyhedron, but there might be a better implementation for axis-aligned polyhedra. And when my answer says that a problem for generic polyhedra is NP-complete, there might still be a polynomial solution for the special case of axis-aligned polyhedra. I do not know.
Now here is my (slightly off-topic) answer, below the horizontal rule...
The CGAL C++ library has an algorithm that, given a 2D polygon, can compute the optimal convex decomposition of that polygon. The method is mentioned in the part 2D Polygon Partitioning of the manual. The method is named CGAL::optimal_convex_partition_2. I quote the manual:
This function provides an implementation of Greene's dynamic programming algorithm for optimal partitioning [2]. This algorithm requires O(n^4) time and O(n^3) space in the worst case.
In the bibliography of that CGAL chapter, the article [2] is:
[2] Daniel H. Greene. The decomposition of polygons into convex parts. In Franco P. Preparata, editor, Computational Geometry, volume 1 of Adv. Comput. Res., pages 235–259. JAI Press, Greenwich, Conn., 1983.
It seems to be exactly what you are looking for.
Note that the same chapter of the CGAL manual also mentions an approximation, hence not optimal, that runs in O(n): CGAL::approx_convex_partition_2.
Edit, about the 3D case:
In 3D, CGAL has another chapter about Convex Decomposition of Polyhedra. The second paragraph of the chapter says "this problem is known to be NP-hard [1]". The reference [1] is:
[1] Bernard Chazelle. Convex partitions of polyhedra: a lower bound and worst-case optimal algorithm. SIAM J. Comput., 13:488–507, 1984.
CGAL has a method CGAL::convex_decomposition_3 that computes a non-optimal decomposition.
I have the feeling your problem is NP-hard. I suggest a first step might be to break the figure into sub-rectangles along all hyperplanes. So in your example there would be three hyperplanes (lines) and four resulting rectangles. Then the problem becomes one of recombining rectangles into larger rectangles to minimize the final number of rectangles. Maybe 0-1 integer programming?
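As a sketch of that first step in 2D (the input block coordinates supply the cutting hyperplanes; the cell and block representations are my choice), the resulting elementary sub-rectangles would then become the 0-1 variables of the recombination step:

    from itertools import product

    def elementary_cells(blocks):
        """blocks: list of ((x0, y0), (x1, y1)) rectangles whose union is the figure.
        Returns the cells obtained by cutting along every block boundary."""
        xs = sorted({x for (x0, _), (x1, _) in blocks for x in (x0, x1)})
        ys = sorted({y for (_, y0), (_, y1) in blocks for y in (y0, y1)})
        cells = []
        for (x0, x1), (y0, y1) in product(zip(xs, xs[1:]), zip(ys, ys[1:])):
            cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0          # cell centre
            if any(bx0 <= cx <= bx1 and by0 <= cy <= by1       # centre inside a block?
                   for (bx0, by0), (bx1, by1) in blocks):
                cells.append(((x0, y0), (x1, y1)))
        return cells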
I think dynamic programming might be your friend.
The first step I see is to divide the polyhedron into a trivial collection of blocks such that every possible face is available (i.e. slice and dice it into the smallest pieces possible). This should be trivial because everything is an axis aligned box, so k-tree like solutions should be sufficient.
This seems reasonable because I can look at its cost. The cost of doing this is that I "forget" the original configuration of hyperslabs, choosing to replace it with a new set of hyperslabs. The only way this could lead me astray is if the original configuration had something to offer for the solution. Given that you want an "optimal" solution for all configurations, we have to assume that the original structure isn't very helpful. I don't know if it can be proven that this original information is useless, but I'm going to make that assumption in this answer.
The problem has now been reduced to a graph problem similar to a constrained spanning forest problem. I think the most natural way to view the problem is to think of it as a graph coloring problem (as long as you can avoid confusing it with the more famous graph coloring problem of trying to color a map without two states of the same color sharing a border). I have a graph of nodes (small blocks), each of which I wish to assign a color (which will eventually be the "hyperslab" which covers that block). I have the constraint that I must assign colors in hyperslab shapes.
Now a key observation is that not all possibilities must be considered. Take the final colored graph we want to see. We can partition this graph in any way we please by breaking any hyperslab which crosses the partition into two pieces. However, not every partition is meaningful. The only partitions that make sense are axis aligned cuts, which always break a hyperslab into two hyperslabs (as opposed to any more complicated shape which could occur if the cut was not axis aligned).
Now this cutting is the reverse of the problem we're really trying to solve; it is actually what we did in the first step, whereas what we want is the optimal merging that undoes those cuts. However, this shows a key feature we will use in dynamic programming: the only features that matter for merging are on the exposed surface of a cut. Once we find the optimal way of forming the central region, its interior generally doesn't play a part in the algorithm anymore.
So let's start by building a collection of hyperslab-spaces, which can define not just a plain hyperslab, but any configuration of hyperslabs such as those with holes. Each hyperslab-space records:
The number of leaf hyperslabs contained within it (this is the number we are eventually going to try to minimize)
The internal configuration of hyperslabs.
A map of the surface of the hyperslab-space, which can be used for merging.
We then define a "merge" rule to turn two or more adjacent hyperslab-spaces into one:
Hyperslab-spaces may only be combined into new hyperslab-spaces (so you need to combine enough pieces to create a new hyperslab, not some more exotic shape)
Merges are done simply by comparing the surfaces. If there are features with matching dimensionalities, they are merged (because it is trivial to show that, if the features match, it is always better to merge hyperslabs than not to)
Now this is enough to solve the problem with brute force. The solution will be NP-complete for certain. However, we can add an additional rule which will drop this cost dramatically: "One hyperslab-space is deemed 'better' than another if they cover the same space, and have exactly the same features on their surface. In this case, the one with fewer hyperslabs inside it is the better choice."
Now the idea here is that, early on in the algorithm, you will have to keep track of all sorts of combinations, just in case they are the most useful. However, as the merging algorithm makes things bigger and bigger, it will become less likely that internal details will be exposed on the surface of the hyperslab-space. Consider
+===+===+===+---+---+---+---+
|   :   : A | X :   :   :   :
+---+---+---+---+---+---+---+
|   :   : B | Y :   :   :   :
+---+---+---+---+---+---+---+
|   :   :   |   :   :   :   :
+===+===+===+   +---+---+---+
Take a look at the left side box, which I have taken the liberty of marking in stronger lines. When it comes to merging boxes with the rest of the world, the AB:XY surface is all that matters. As such, there are only a handful of merge patterns which can occur at this surface
No merges possible
A:X allows merging, but B:Y does not
B:Y allows merging, but A:X does not
Both A:X and B:Y allow merging (two independent merges)
We can merge a larger square, AB:XY
There are many ways to cover the 3x3 square (at least a few dozen). However, we only need to remember the best way to achieve each of those merge processes. Thus once we reach this point in the dynamic programming, we can forget about all of the other combinations that can occur, and only focus on the best way to achieve each set of surface features.
In fact, this sets up the problem for an easy greedy algorithm which explores whichever merges provide the best promise for decreasing the number of hyperslabs, always remembering the best way to achieve a given set of surface features. When the algorithm is done merging, whatever that final hyperslab-space contains is the optimal layout.
I don't know if it is provable, but my gut instinct thinks that this will be an O(n^d) algorithm where d is the number of dimensions. I think the worst case solution for this would be a collection of hyperslabs which, when put together, forms one big hyperslab. In this case, I believe the algorithm will eventually work its way into the reverse of a k-tree algorithm. Again, no proof is given... it's just my gut instinct.
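To make the merge rule concrete, here is a tiny sketch of the simplest possible version: repeatedly fuse two cells that share a full edge, so their union is again a box. This is the merge operation only (applied in first-found order), without the surface-feature bookkeeping or the best-of-equals pruning that the dynamic-programming argument above relies on.

    def mergeable(a, b):
        """If boxes a and b share a full edge, return their union, else None."""
        (ax0, ay0), (ax1, ay1) = a
        (bx0, by0), (bx1, by1) = b
        if (ax0, ax1) == (bx0, bx1) and (ay1 == by0 or by1 == ay0):   # stacked
            return ((ax0, min(ay0, by0)), (ax1, max(ay1, by1)))
        if (ay0, ay1) == (by0, by1) and (ax1 == bx0 or bx1 == ax0):   # side by side
            return ((min(ax0, bx0), ay0), (max(ax1, bx1), ay1))
        return None

    def greedy_merge(cells):
        cells = list(cells)
        merged = True
        while merged:
            merged = False
            for i in range(len(cells)):
                for j in range(i + 1, len(cells)):
                    m = mergeable(cells[i], cells[j])
                    if m is not None:
                        cells[i] = m          # fuse and drop the second box
                        del cells[j]
                        merged = True
                        break
                if merged:
                    break
        return cells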
You can try a constrained Delaunay triangulation. It gives very few triangles.
Are you able to determine the equations for each line?
If so, maybe you can get the intersection points between those lines. Then if you take one axis and start to look for a value which has more than two points sharing this value, you should "draw" a line. (At the beginning of the sweep there will be zero points, then two (your first pair), and when you find more than two points, you will be able to determine which points belong to the first polygon and which to the second one.)
Eg, if you have those lines:
verticals (red):
x = 0, x = 2, x = 5
horizontals (yellow):
y = 0, y = 2, y = 3, y = 5
and you start to sweep along the X axis, you will get p1 and p2 (and we know which line equation they belong to), then you will get p3, p4, p5 and p6. Here you can check which of those points share the same line as p1 and p2; in this case p4 and p5. So your first new polygon is p1, p2, p4, p5.
Now we save the 'new' pair of points (p3, p6) and continue with the sweep until the next points. Here we have p7, p8, p9 and p10; looking for the points which share the line of the previous points (p3 and p6), we get p7 and p10. Those are the points of your second polygon.
When we repeat the exercise for the Y axis, we will get two points (p3, p7) and then just three (p1, p2, p8)! In this case we should use the farthest point (p8) on the same line as the newly discovered point.
As we are using line equations and points, in 2 or more dimensions the procedure should be very similar.
P.S. Sorry for my English :S
I hope this helps :)

Determine whether the two classes are linearly separable (algorithmically in 2D)

There are two classes, let's call them X and O. A number of elements belonging to these classes are spread out in the xy-plane. Here is an example where the two classes are not linearly separable. It is not possible to draw a straight line that perfectly divides the Xs and the Os on each side of the line.
How to determine, in general, whether the two classes are linearly separable? I am interested in an algorithm where no assumptions are made regarding the number of elements or their distribution. An algorithm of the lowest computational complexity is of course preferred.
If you found the convex hull for both the X points and the O points separately (i.e. you have two separate convex hulls at this stage) you would then just need to check whether any segments of the hulls intersected or whether either hull was enclosed by the other.
If the two hulls were found to be totally disjoint the two data-sets would be geometrically separable.
Since the hulls are convex by definition, any separator would be a straight line.
There are efficient algorithms that can be used both to find the convex hull (the qhull algorithm is based on an O(nlog(n)) quickhull approach I think), and to perform line-line intersection tests for a set of segments (sweepline at O(nlog(n))), so overall it seems that an efficient O(nlog(n)) algorithm should be possible.
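As a concrete 2D sketch of this approach (SciPy for the hulls, a brute-force pairwise edge test instead of a sweepline, and no handling of boundary contact; it assumes each class has at least three points in general position):

    import numpy as np
    from scipy.spatial import ConvexHull, Delaunay

    def hull_edges(points, hull):
        v = hull.vertices
        return [(points[v[i]], points[v[(i + 1) % len(v)]]) for i in range(len(v))]

    def segments_cross(p, q, r, s):
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        return ((cross(r, s, p) > 0) != (cross(r, s, q) > 0)) and \
               ((cross(p, q, r) > 0) != (cross(p, q, s) > 0))

    def separable(xs, os):
        xs, os = np.asarray(xs, float), np.asarray(os, float)
        hx, ho = ConvexHull(xs), ConvexHull(os)
        for a, b in hull_edges(xs, hx):            # any crossing edge pair?
            for c, d in hull_edges(os, ho):
                if segments_cross(a, b, c, d):
                    return False
        # no edge crossings: the hulls are either disjoint or nested
        if Delaunay(xs[hx.vertices]).find_simplex(os[0]) >= 0:
            return False
        if Delaunay(os[ho.vertices]).find_simplex(xs[0]) >= 0:
            return False
        return True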
This type of approach should also generalise to general k-way separation tests (where you have k groups of objects) by forming the convex hull and performing the intersection tests for each group.
It should also work in higher dimensions, although the intersection tests would start to become more challenging...
Hope this helps.
Computationally the most effective way to decide whether two sets of points are linearly separable is by applying linear programming. GLPK is perfect for that purpose and pretty much every high-level language offers an interface for it - R, Python, Octave, Julia, etc.
Let's say you have a set of points A and a set of points B. Then you have to minimize 0 subject to conditions of the following form, where the unknowns are a weight vector w and an offset b (this is one standard way to write the strict-separation constraints):

    w . p_i + b >=  1   for every point p_i in A
    w . q_j + b <= -1   for every point q_j in B

(If you stack these rows into a single matrix inequality, keep in mind that this constraint matrix is not the same thing as the set of points A above.)
"Minimizing 0" effectively means that you don't need to actually optimize an objective function, because this is not necessary to find out if the sets are linearly separable.
In the end, the pair (w, b) defines the separating plane.
In case you are interested in a working example in R or the math details, then check this out.
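For a rough Python counterpart of the same feasibility test (using scipy.optimize.linprog instead of GLPK; the function name and the margin-of-1 encoding are my choices):

    import numpy as np
    from scipy.optimize import linprog

    def linearly_separable(A, B):
        """A, B: arrays of 2D points. A feasible (w, b) exists iff separable."""
        A, B = np.asarray(A, float), np.asarray(B, float)
        #  w.p + b >= 1  for p in A   ->   -p.w - b <= -1
        #  w.q + b <= -1 for q in B   ->    q.w + b <= -1
        A_ub = np.vstack([np.hstack([-A, -np.ones((len(A), 1))]),
                          np.hstack([ B,  np.ones((len(B), 1))])])
        b_ub = -np.ones(len(A) + len(B))
        res = linprog(c=np.zeros(3),               # "minimize 0": feasibility only
                      A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * 3)   # w1, w2, b are unbounded
        return res.status == 0                     # 0 = a feasible optimum was found

    print(linearly_separable([(0, 0), (1, 0)], [(5, 5), (6, 5)]))   # True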
Here is a naïve algorithm that I'm quite sure will work (and, if so, shows that the problem is not NP-complete, as another post claims), but I wouldn't be surprised if it can be done more efficiently: If a separating line exists, it will be possible to move and rotate it until it hits two of the X'es or one X and one O. Therefore, we can simply look at all the possible lines that intersect two X'es or one X and one O, and see if any of them are dividing lines. So, for each of the O(n^2) pairs, iterate over all the n-2 other elements to see if all the X'es are on one side and all the O's on the other. Total time complexity: O(n^3).
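A direct transcription of that scan (it treats points falling exactly on the candidate line as acceptable on either side; tighten that to your definition of "separating"):

    from itertools import combinations

    def side(p, q, r):
        # >0 / <0: r is left/right of the directed line p -> q; 0: on the line
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    def separable_naive(xs, os):
        for p, q in combinations(list(xs) + list(os), 2):
            if tuple(p) == tuple(q):
                continue
            sx = [side(p, q, r) for r in xs]
            so = [side(p, q, r) for r in os]
            if max(sx) <= 0 <= min(so) or max(so) <= 0 <= min(sx):
                return True
        return False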
Linear perceptron is guaranteed to find such separation if one exists.
See: http://en.wikipedia.org/wiki/Perceptron .
You can probably apply linear programming to this problem. I'm not sure of its computational complexity in formal terms, but the technique is successfully applied to very large problems covering a wide range of domains.
Computing a linear SVM and then determining which side of the computed plane with optimal margin each point lies on will tell you if the points are linearly separable.
This is overkill, but if you need a quick one-off solution, there are many existing SVM libraries that will do this for you.
As mentioned by ElKamina, Linear Perceptron is guaranteed to find a solution if one exists. This approach is not efficient for large dimensions. Computationally the most effective way to decide whether two sets of points are linearly separable is by applying linear programming.
A code with an example to solve using Perceptron in Matlab is here
In general that problem is NP-hard but there are good approximate solutions like K-means clustering.
Well, both the Perceptron and SVM (Support Vector Machines) can tell you whether two data sets are linearly separable, but SVM can also find the optimal separating hyperplane. Besides, it can work with n-dimensional vectors, not only points.
It is used in applications such as face recognition. I recommend going deeper into this topic.

Continuous Physics Engine's Collision Detection Techniques

I'm working on a purely continuous physics engine, and I need to choose algorithms for broad and narrow phase collision detection. "Purely continuous" means I never do intersection tests, but instead want to find ways to catch every collision before it happens, and put each into a "planned collisions" stack that is ordered by TOI (time of impact).
Broad Phase
The only continuous broad-phase method I can think of is encasing each body in a circle and testing if each circle will ever overlap another. This seems horribly inefficient however, and lacks any culling.
I have no idea what continuous analogs might exist for today's discrete collision culling methods such as quad-trees either. How might I go about preventing inappropriate and pointless broad tests, the way a discrete engine does?
Narrow Phase
I've managed to adapt the narrow-phase SAT to a continuous check rather than a discrete one, but I'm sure there are other, better algorithms out there in papers or sites you guys might have come across.
What various fast or accurate algorithms do you suggest I use, and what are the advantages/disadvantages of each?
Final Note:
I say techniques and not algorithms because I have not yet decided on how I will store different polygons which might be concave, convex, round, or even have holes. I plan to make a decision on this based on what the algorithm requires (for instance if I choose an algorithm that breaks down a polygon into triangles or convex shapes I will simply store the polygon data in this form).
You said circles, so I'm assuming you have 2D objects. You could extend your 2D object (or their bounding shapes) into 3D by adding a time dimension, and then you can use the normal techniques for checking for static collisions among a set of 3D objects.
For example, if you have a circle in (x, y) moving to the right (+x) with constant velocity, then, when you extend that with a time dimension, you have a diagonal cylinder in (x, y, t). By doing intersections between these 3D objects (just treat time as z), you can see if two objects will ever intersect. If point P is a point of intersection, then you know the time of that intersection simply by looking at P.t.
This generalizes into higher dimensions, too, though the math gets hard (for me anyway).
The collision detection might be tricky if objects have complex paths. For example, if your circle is influenced by gravity, then the extruded space-time object is a parabolic sphere sweep rather than a simple cylinder. You could pad the bounding objects a bit and use linear approximations over shorter periods of time and iterate, but I'm not sure if that violates what you mean by continuous.
I am going to assume you want things like gravity or other conservative forces in your simulation. If that's the case the trajectories of your objects are most likely not going to be lines, in which case, just like Adrian pointed out, the math will be somewhat harder. I can't think of a way to avoid checking all possible combinations of curves for collisions, but you can calculate the minimum distance between two curves rather easily, as long as both are solutions to linear systems (or, in general, if you have a closed form solution for the curves). If you know that x1(t) = f(t) and x2(t) = g(t), then what you'll want to do is calculate the distance ||x1(t) - x2(t)|| and set its derivative to zero. This should be an expression that depends on f(t), g(t) and their derivatives, and will give you a time tmin (or maybe a few possible ones) at which you then evaluate the distance and check to see if it is greater or smaller than r1 + r2, the sum of the radii of the two bounding circles. If it is smaller, then you have a potential collision at that time so you run the narrow phase algorithm.
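Here is a small sketch of that closest-approach test for the simplest case, two circles moving with constant velocities (so x1(t) and x2(t) are straight lines); the names are mine:

    import numpy as np

    def closest_approach(p1, v1, r1, p2, v2, r2):
        """Time of minimum distance and whether the bounding circles touch then."""
        dp = np.asarray(p1, float) - np.asarray(p2, float)   # relative position
        dv = np.asarray(v1, float) - np.asarray(v2, float)   # relative velocity
        # d/dt ||dp + t*dv||^2 = 0  ->  t_min = -(dp . dv) / (dv . dv)
        denom = float(dv @ dv)
        t_min = 0.0 if denom == 0.0 else max(0.0, -float(dp @ dv) / denom)
        dist = np.linalg.norm(dp + t_min * dv)
        return t_min, dist <= r1 + r2

    t, hit = closest_approach((0, 0), (1, 0), 0.5, (5, 1), (-1, 0), 0.5)
    # 'hit' only flags a potential collision; the narrow phase then runs at t.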

Algorithms to normalize finger touch data (reduce the number of points)

I'm working on an app that lets users select regions by finger painting on top of a map. The points then get converted to a latitude/longitude and get uploaded to a server.
The touch screen is delivering way too many points to be uploaded over 3G. Even small regions can accumulate up to ~500 points.
I would like to smooth this touch data (approximate it within some tolerance). The accuracy of drawing does not really matter much as long as the general area of the region is the same.
Are there any well-known algorithms to do this? Is this a job for a Kalman filter?
There is the Ramer–Douglas–Peucker algorithm (wikipedia).
The purpose of the algorithm is, given a curve composed of line segments, to find a similar curve with fewer points. The algorithm defines 'dissimilar' based on the maximum distance between the original curve and the simplified curve. The simplified curve consists of a subset of the points that defined the original curve.
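A compact sketch of the algorithm (recursive form; the tolerance is in the same units as the points, e.g. degrees of latitude/longitude here):

    import numpy as np

    def rdp(points, tolerance):
        """Ramer-Douglas-Peucker: keep the point farthest from the chord and
        recurse on both halves if it exceeds the tolerance."""
        pts = np.asarray(points, dtype=float)
        if len(pts) < 3:
            return pts.tolist()
        start, end = pts[0], pts[-1]
        chord = end - start
        norm = np.linalg.norm(chord)
        diff = pts[1:-1] - start
        if norm == 0.0:
            dists = np.linalg.norm(diff, axis=1)
        else:
            # perpendicular distance from each interior point to the chord
            dists = np.abs(chord[0] * diff[:, 1] - chord[1] * diff[:, 0]) / norm
        i = int(np.argmax(dists)) + 1
        if dists[i - 1] > tolerance:
            return rdp(pts[:i + 1], tolerance)[:-1] + rdp(pts[i:], tolerance)
        return [start.tolist(), end.tolist()]

On typical finger-paint input this cuts a few hundred samples down to a handful while staying within the chosen tolerance of the drawn path.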
You probably don't need anything too exotic to dramatically cut down your data.
Consider something as simple as this:
Construct some sort of error metric. An easy one would be a normalized sum of the distances from the omitted points to the line that was approximating them. Decide what a tolerable error using this metric is.
Then starting from the first point construct the longest line segment that falls within the tolerable error range. Repeat this process until you have converted the entire path into a polyline.
This will not give you the globally optimal approximation but it will probably be good enough.
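A sketch of that greedy scheme, using the maximum point-to-segment distance of the skipped points as the error metric (simpler than the normalized sum suggested above):

    import numpy as np

    def point_to_segment(p, a, b):
        ab = b - a
        t = 0.0 if ab @ ab == 0 else np.clip((p - a) @ ab / (ab @ ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def greedy_simplify(points, tol):
        pts = np.asarray(points, dtype=float)
        keep, anchor, i = [0], 0, 2
        while i < len(pts):
            # would extending the current segment to pts[i] violate the tolerance?
            if any(point_to_segment(pts[k], pts[anchor], pts[i]) > tol
                   for k in range(anchor + 1, i)):
                keep.append(i - 1)     # close the segment at the previous point
                anchor = i - 1
            i += 1
        keep.append(len(pts) - 1)
        return pts[keep]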
If you want the approximation to be more "curvey" you might consider using splines or bezier curves rather than straight line segments.
You want to subdivide the surface into a grid with a quadtree or a space-filling curve. An SFC reduces the 2D complexity to a 1D complexity. You want to look for Nick's hilbert curve quadtree spatial index blog.
I was going to do something like this in an app, but was intending to generate a path from the points on the fly. I was going to use a technique mentioned in this Point Sequence Interpolation thread.
