I'm working on a purely continuous physics engine, and I need to choose algorithms for broad- and narrow-phase collision detection. "Purely continuous" means I never do intersection tests; instead, I want to find ways to catch every collision before it happens, and put each into a "planned collisions" stack ordered by time of impact (TOI).
Broad Phase
The only continuous broad-phase method I can think of is encasing each body in a circle and testing whether each circle will ever overlap another. This seems horribly inefficient, however, and lacks any culling.
I have no idea what continuous analogs might exist for today's discrete collision-culling methods such as quad-trees, either. How might I go about preventing inappropriate and pointless broad tests the way a discrete engine does?
Narrow Phase
I've managed to adapt the narrow-phase SAT into a continuous check rather than a discrete one, but I'm sure there are other, better algorithms out there in papers or on sites you guys might have come across.
What fast or accurate algorithms do you suggest I use, and what are the advantages/disadvantages of each?
Final Note:
I say techniques and not algorithms because I have not yet decided on how I will store different polygons which might be concave, convex, round, or even have holes. I plan to make a decision on this based on what the algorithm requires (for instance if I choose an algorithm that breaks down a polygon into triangles or convex shapes I will simply store the polygon data in this form).
You said circles, so I'm assuming you have 2D objects. You could extend your 2D object (or their bounding shapes) into 3D by adding a time dimension, and then you can use the normal techniques for checking for static collisions among a set of 3D objects.
For example, if you have a circle in (x, y) moving to the right (+x) with constant velocity, then, when you extend that with a time dimension, you have a diagonal cylinder in (x, y, t). By doing intersections between these 3D objects (just treat time as z), you can see if two objects will ever intersect. If point P is a point of intersection, then you know the time of that intersection simply by looking at P.t.
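For the constant-velocity circle case, the extruded-cylinder intersection collapses to solving one quadratic in t: the circles touch when the relative distance equals the sum of the radii. A minimal sketch in Python (the function name and tuple layout are my own, and it assumes constant velocities with no acceleration):

```python
import math

def circle_toi(p1, v1, r1, p2, v2, r2):
    """Earliest t >= 0 at which two constant-velocity circles touch,
    or None if they never do. Positions/velocities are (x, y) tuples."""
    # Work in relative coordinates: circle 1 fixed, circle 2 moving.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    r = r1 + r2
    # |d + dv*t|^2 = r^2  expands to  a*t^2 + b*t + c = 0
    a = dvx * dvx + dvy * dvy
    b = 2.0 * (dx * dvx + dy * dvy)
    c = dx * dx + dy * dy - r * r
    if a == 0.0:                            # no relative motion at all
        return 0.0 if c <= 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                          # paths never come within r1 + r2
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # smaller root = first contact
    if t >= 0.0:
        return t
    return 0.0 if c <= 0.0 else None        # overlapping now, or all in the past
```

The returned t is exactly the P.t of the first intersection point of the two space-time cylinders.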
This generalizes into higher dimensions, too, though the math gets hard (for me anyway).
The collision detection might be tricky if objects have complex paths. For example, if your circle is influenced by gravity, then the extruded space-time object is a parabolic sphere sweep rather than a simple cylinder. You could pad the bounding objects a bit and use linear approximations over shorter periods of time and iterate, but I'm not sure if that violates what you mean by continuous.
I am going to assume you want things like gravity or other conservative forces in your simulation. If that's the case, the trajectories of your objects are most likely not going to be lines, in which case, just as Adrian pointed out, the math will be somewhat harder. I can't think of a way to avoid checking all possible combinations of curves for collisions, but you can calculate the minimum distance between two curves rather easily, as long as both are solutions to linear systems (or, in general, if you have a closed-form solution for the curves). If you know that x1(t) = f(t) and x2(t) = g(t), then what you'll want to do is calculate the distance ||x1(t) - x2(t)|| and set its derivative to zero. This will be an expression that depends on f(t), g(t) and their derivatives, and it will give you a time tmin (or maybe a few candidates) at which you evaluate the distance and check whether it is greater or smaller than r1 + r2, the sum of the radii of the two bounding circles. If it is smaller, then you have a potential collision at that time, so you run the narrow-phase algorithm.
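To make that concrete under a specific assumption, constant-acceleration (gravity-style) trajectories x(t) = p + v*t + a*t^2/2: the squared distance is then a quartic in t, its derivative a cubic, and numpy can find the candidate times. A sketch; the names and the numpy dependency are my choice:

```python
import numpy as np

def min_separation(p1, v1, a1, p2, v2, a2):
    """Time t >= 0 minimizing |x1(t) - x2(t)| for x(t) = p + v*t + 0.5*a*t^2.
    Returns (t_min, distance at t_min)."""
    dp = np.asarray(p1, float) - p2
    dv = np.asarray(v1, float) - v2
    da = 0.5 * (np.asarray(a1, float) - np.asarray(a2, float))
    # r(t) = dp + dv*t + da*t^2;  D(t) = r(t).r(t) is a quartic in t.
    D = np.array([da @ da, 2 * da @ dv, dv @ dv + 2 * da @ dp,
                  2 * dv @ dp, dp @ dp])      # coefficients, highest first
    roots = np.roots(np.polyder(D))           # zeros of the cubic derivative
    cands = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0.0]
    cands.append(0.0)                         # boundary of the time range
    t_min = min(cands, key=lambda t: np.polyval(D, t))
    return t_min, float(np.sqrt(np.polyval(D, t_min)))
```

Comparing the returned distance against r1 + r2 then gives the broad-phase verdict described above.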
Related
How to reliably find out whether two Bezier curves intersect? By "reliably" I mean the test will answer "yes" only when the curves intersect, and "no" only when they don't intersect. I don't need to know what parameters the intersection was found at. I also would like to use floating-point numbers in the implementation.
I found several answers here that use the curves' bounding boxes for the test: this is not what I'm after, as such a test may report an intersection even when the curves don't intersect.
The closest thing I found so far is the "bounding wedge" by Sederberg and Meyers, but it "only" distinguishes between at-most-one and two-or-more intersections, whereas I want to distinguish between zero and one-or-more intersections.
I am assuming cubic bezier curves.
The most reliable method for reporting intersections, using floating point computation, is probably to find them, combined with error analysis.
The main problem, when floating point computations are involved, is inconsistency in computed results w.r.t. topology. Unfortunately this is unavoidable, if you need to compute anything in computational geometry within a reasonable amount of time.
So instead of stressing over the right algorithm for intersection calculation, picking a simple one and implementing error analysis is probably the solution.
I would try to implement an efficient subdivision algorithm like bezier clipping (or a variant of quadratic clipping, Nicholas North's Geo-clip), with running error analysis to compute tight error bounds so that we don't "miss" intersections.
To elaborate, the main sources of floating-point (double precision) error in these subdivision-based algorithms are:
Truncation error: especially the error in the input coefficients etc., which are themselves finite precision; we can't do much about this within the algorithm.
Roundoff error during De Casteljau subdivision and point evaluation.
I have used the running error bounds for De Casteljau's algorithm, explained here, along with the Geo-clip algorithm. It is fast and robust. (By the way, this thesis is, in general, a good read if you want to make polynomial/bezier algorithms more robust.)
Assuming you know the basics of the bezier clipping algorithm, the general idea is to expand the hybrid bezier curve (in the first paper linked) and the fat line appropriately with the error bounds for each clip.
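For reference, the subdivision step those error bounds attach to is just De Casteljau's algorithm. A minimal sketch of a single split (running error bounds omitted; in the robust version each computed point would carry one):

```python
def de_casteljau_split(ctrl, t):
    """Split a Bezier curve with control points ctrl at parameter t,
    returning the control polygons of the two halves."""
    left, right = [ctrl[0]], [ctrl[-1]]
    pts = list(ctrl)
    while len(pts) > 1:
        # One round of linear interpolation between consecutive points.
        pts = [((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
               for p, q in zip(pts, pts[1:])]
        left.append(pts[0])      # first point of each level -> left half
        right.append(pts[-1])    # last point of each level -> right half
    return left, right[::-1]

# Example: split a cubic at t = 0.5
cubic = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
left_half, right_half = de_casteljau_split(cubic, 0.5)
```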
Some other unrelated ideas:
You can try a variant of the Bentley-Ottmann sweep-line algorithm. First you have to split the bezier curves into x-monotone segments, then look at their y-orderings as you sweep across them. This method has a few disadvantages: bezier curves can intersect with multiplicity greater than one (think of a tangential intersection), and doing an error analysis may be difficult here (whenever you compute a y value, there is some floating-point error involved).
Interval Projected Polyhedron algorithm: this uses rounded interval arithmetic for robustness, but the algorithm for 2D bezier curves gets quite complicated.
There are a few cases you might come across:
Self intersections
Overlapping (coincident) curves: subdivision algorithms will recurse forever in this case. This is easy to check for, though.
Good luck :)
Assuming cubic beziers, the intersection points are real roots of a 9th-degree polynomial. The existence of such roots within an interval (from negative to positive infinity for infinitely long curves, or 0 to 1 for your typical piecewise cubic beziers) can be checked robustly using a Sturm sequence. This only works if we allow extending one of the curves to infinity. The algorithm has no loops and uses only basic arithmetic operations (addition, subtraction and multiplication; division should be avoidable).
For maximum robustness, you could use arbitrary-precision math. Since the number of steps is constant, the maximum possible number of digits in all temporary results is bounded. That way, your algorithm will always return the correct result, no matter how pathological the input (e.g. curves barely touching).
It might be possible to use ordinary floating point first, and detect potential pathological cases (intermediate results becoming zero, when adding/subtracting previous intermediate results).
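A minimal sketch of the Sturm-sequence count in plain floating point (numpy's polynomial helpers do the divisions; the pathological-case detection mentioned above would sit on top of this, and note it counts distinct roots in the half-open interval (a, b]):

```python
import numpy as np

def sturm_chain(p):
    """Sturm sequence of polynomial p (coefficients highest degree first).
    Assumes deg(p) >= 1."""
    p = np.asarray(p, float)
    chain = [p, np.polyder(p)]
    while len(chain[-1]) > 1:                 # stop once a constant remains
        _, rem = np.polydiv(chain[-2], chain[-1])
        rem = np.trim_zeros(rem, 'f')
        if rem.size == 0:                     # zero remainder: gcd reached
            break
        chain.append(-rem)                    # negated remainder, per Sturm
    return chain

def sign_changes(chain, x):
    signs = [np.sign(np.polyval(q, x)) for q in chain]
    signs = [s for s in signs if s != 0]      # zeros are skipped by convention
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def num_roots_in(p, a, b):
    """Number of distinct real roots of p in (a, b]."""
    chain = sturm_chain(p)
    return sign_changes(chain, a) - sign_changes(chain, b)

# Existence test for the 9th-degree intersection polynomial on (0, 1]:
# num_roots_in(coeffs, 0.0, 1.0) > 0
```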
The formulas for getting the polynomial terms from the Bezier control points are truly a sight to behold, but luckily you don't have to work them out; they're right here on GitHub.
There's a thread from 2004, Closed-form of Bezier intersection test on comp.graphics.algorithms with more details.
If you're dealing with quadratic beziers, the polynomial will only be 4th degree.
An idea off the top of my head: map the curves so they are in the same domain, such that you can subtract them, then just do root finding. There are lots and lots of numerical methods for root finding.
If you need to see whether two curves intersect visually, say for real-time screen graphics in a game or something, then the easiest way to do so, by far, is to simply compare their pixels.
Get the bbox and pixel lookup tables (LUTs) for both curves, then check whether the bboxes overlap. No overlap? Done, there is no intersection. Overlap? Sort the LUTs with a fast sorter and run a compare; the moment you find a single match, you're done. There is an overlap, and you don't care where.
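A minimal sketch of that pipeline, assuming each curve is given as a callable t -> (x, y) (the sampling density n and the use of Python sets in place of sorted LUTs are my simplifications; n must be high enough that consecutive samples land under a pixel apart):

```python
def curve_pixels(curve, n=1000):
    """Set of integer pixel coordinates touched by sampling the curve."""
    return {(int(round(x)), int(round(y)))
            for x, y in (curve(i / (n - 1)) for i in range(n))}

def curves_touch(curve_a, curve_b):
    """True if the rasterized curves share at least one pixel."""
    pa, pb = curve_pixels(curve_a), curve_pixels(curve_b)
    # Cheap bbox rejection first (in practice you'd compute the bboxes
    # from the control points, before rasterizing anything at all).
    ax, ay = zip(*pa)
    bx, by = zip(*pb)
    if max(ax) < min(bx) or max(bx) < min(ax) \
       or max(ay) < min(by) or max(by) < min(ay):
        return False
    return not pa.isdisjoint(pb)    # a single shared pixel is enough
```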
If you have to do this for lots of curves, use a library that does it for you; don't waste your time implementing it yourself, as you're not going to be as efficient (for large collections, things like oct-trees and scanline checks become far more efficient).
However, if you need to know whether there is absolute, mathematically precise overlap, say for as-correct-as-possible design work, then you can't cut corners: actually run a real intersection-detection algorithm to find all possible intersection points. Real time is mostly irrelevant in this setting; you can spend the few extra cycles to run a proper detection algorithm.
With an emphasis on finding the time (when the intersection starts), although the position is also important. The bounding boxes (not axis-aligned) have a position, rotation, velocity, and angular velocity (rate of rotation). NO accelerations, which should really simplify things... and I could probably remove the angular-velocity component as well if necessary. Either a closed-form or an iterative solution would work, but unless the iterative function actively converges toward a solution (or the lack of one), it would probably be too slow.
I looked at the SAT, but it doesn't seem to be built to find the actual time of collision of moving objects. It seems to work only with non-moving snapshots, and it is designed for more complicated objects than rectangles, so it actually seems ill-suited to this problem.
I've considered drawing out the trajectory of each of the 8 corner points, then somehow having a function for whether a point is inside or outside the other shape and extracting the time range over which that occurs, but I'm pretty lost on how to go about it. One nice feature would be that it operates entirely in time and ignores the idea of discrete "steps", but it also strikes me as an inefficient approach.
No worries about broad phase (determining if it's worth seeing if these two bounding boxes may overlap), I already have that tackled.
Finding an exact collision time is essentially a nonlinear root-finding problem. This means that you will ultimately need an iterative approach to determine the final collision time; the clever bit in designing a collision solver is to avoid the root-solving when it isn't actually necessary...
The SAT is a theorem, not an algorithm: it can be used to guide the design of a collision solver, but it is not one itself. Briefly, it says that if you can demonstrate that a separating axis exists, the objects have not collided; conversely, if you can show that there is no such axis, then the objects currently overlap. As you point out, you can use this principle more or less directly to design a binary yes/no query as to whether two objects in given positions overlap or not.
The difference with a collision solver is that the problem is animated, or kinetic: object position is a function of time. One way to solve this problem is to start with a valid "yes/no" collision test, treat all the inequalities as functions of time, and use root-finding methods to look for the actual collision times on that basis.
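A minimal sketch of that recipe, with the time-dependent SAT test abstracted into a single separation(t) callable (positive means a separating axis exists at time t): sample forward for a sign change, then bisect. The callable, step count, and tolerance are assumptions, and coarse sampling can miss very brief contacts, which is one reason published solvers prefer conservative advancement with distance bounds:

```python
def earliest_collision(separation, t_max, steps=64, tol=1e-9):
    """First t in [0, t_max] where separation(t) drops to zero, or None."""
    if separation(0.0) <= 0.0:          # already in contact at t = 0
        return 0.0
    dt = t_max / steps
    t0 = 0.0
    for i in range(1, steps + 1):
        t1 = i * dt
        if separation(t1) <= 0.0:       # sign change inside (t0, t1]: bisect
            while t1 - t0 > tol:
                tm = 0.5 * (t0 + t1)
                if separation(tm) > 0.0:
                    t0 = tm
                else:
                    t1 = tm
            return t1
        t0 = t1
    return None
```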
There is a variety of existing methods in published academic literature. I recommend some library research: the best option probably depends on the details of your application.
First of all, instead of thinking of two rectangles moving with velocities (x1, y1) and (x2, y2) respectively, you may fix one of them (set its velocity to (0, 0)) and think of the other one moving with velocity (x2 - x1, y2 - y1).
This way, the situation looks like one rectangle is immovable while the other passes by, possibly hitting the first.
Assuming you don't have any angular velocity:
It's not hard to see that you can then intersect the 4 trajectories of the second rectangle (rays starting from the corners of its bounding box in the (x2 - x1, y2 - y1) direction) with the 4 sides of the first rectangle, which stands still. Then you do the same vice versa: find the intersections of the first rectangle moving in the reverse direction, (-(x2 - x1), -(y2 - y1)), with the 4 sides of the second rectangle. Choose the minimum distance among all the intersection points you've found (there may be 0-8 of them) and you're done.
Don't forget to consider the many special cases: when the sides of the two rectangles are parallel, when there's no intersection at all, etc.
Note that this is all done in O(1) time, though the calculations are quite involved: 32 intersections of a ray and a segment.
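Each of those 32 tests is a ray-segment intersection. A minimal sketch via 2D cross products (the parallel case returns None here and would need the separate handling mentioned above; if d is the relative velocity, the returned t is directly the time of impact):

```python
def cross(ax, ay, bx, by):
    return ax * by - ay * bx

def ray_hits_segment(ox, oy, dx, dy, ax, ay, bx, by):
    """Smallest t >= 0 with (ox, oy) + t*(dx, dy) on segment a-b, or None."""
    ex, ey = bx - ax, by - ay                     # segment direction
    denom = cross(dx, dy, ex, ey)
    if denom == 0.0:                              # parallel: special-case it
        return None
    t = cross(ax - ox, ay - oy, ex, ey) / denom   # parameter along the ray
    u = cross(ax - ox, ay - oy, dx, dy) / denom   # parameter along the segment
    return t if t >= 0.0 and 0.0 <= u <= 1.0 else None
```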
If you really want your rectangles to rotate with some angular speed, I would suggest considering what #comingstorm said: this is a problem of finding roots of a non-linear equation. However, even in that case, if the angular speed of your rectangles is bounded, you may split the task into a series of ternary-search subtasks, though I suppose this is just one of many possible methods for solving non-linear problems.
I am solving a fourth order non-linear partial differential equation in time and space (t, x) on a square domain with periodic or free boundary conditions with MATHEMATICA.
WITHOUT using conformal mapping, what boundary conditions at the edges or corners could I use to make the square domain "seem" like a circular domain for my non-linear partial differential equation, which is Cartesian?
The options I would NOT like to use are:
Conformal mapping
Changing my equation to polar/cylindrical coordinates.
This is something I am pursuing purely out of interest, stated here just in case someone screams bloody murder at what might be misconstrued as a homework problem! :P
That question was being asked back when people first found out that the world was spherical: they wanted to make rectangular maps of the surface of the world...
It is not possible.
The reason it is not possible is that the sphere has an intrinsic curvature, while the cube/parallelepiped has none. It can be shown that two surfaces with different intrinsic curvatures cannot be mapped onto one another while preserving infinitesimal distances, i.e., there is no local isometry between them.
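In standard differential-geometry terms this is Gauss's Theorema Egregium: Gaussian curvature is preserved by any local isometry, and the two curvatures here differ, so no distance-preserving map can exist:

```latex
K_{\mathrm{sphere}} = \frac{1}{R^{2}} \neq 0 = K_{\mathrm{plane}}
```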
The easiest way to understand this problem is to pick up a rectangular piece of paper and try to make a sphere out of it without locally stretching or compressing it (folding is allowed). You can't. On the other hand, you can make a cylinder surface, because the cylinder also has no intrinsic curvature.
In maps, people normally use one of two options:
approximate the local surface of the sphere by a tangent plane and make a rectangle out of it (a local map of some region);
make world maps, but draw curved grid lines everywhere to indicate that distances must be measured along those lines.
This is also the main reason why, when traveling from Europe to North America, airplanes seem to follow a curve that passes near Canada. If we measured the distance on the rectangular map, it would look like they should fly in a straight line to minimize the distance. However, because we are mapping between two different intrinsic curvatures, the real distance must be measured in a different way (and not via a straight line on the map).
For 2D (in fact for nD) the same reasoning applies.
I have polygons that define the contours of counties in the UK. These shapes are very detailed (10k to 20k points each), which makes the related computations (is point X in polygon P?) quite computationally expensive.
Thus, I would like to "subsample" my polygons to obtain a similar shape but with fewer points. What are the different techniques for doing so?
The trivial one would be to take every Nth point (thus subsampling by a factor of N), but this feels too "crude". I would rather do some averaging of points, or something of that flavour. Any pointers?
Two solutions spring to mind:
1) Since the map of the UK is reasonably squarish, you could choose to render a bitmap of the counties. Assign each a specific colour, and then render the borders with a 1- or 2-pixel-thick black line. This means you'll only have to perform the expensive interior/exterior calculation when a sample happens to lie on the border; the larger the bitmap, the less often this will happen.
2) Simplify the county outlines. You can use the Ramer–Douglas–Peucker algorithm to recursively simplify the boundaries, as sketched below. Just make sure you cache the results. You may also have to apply it not to entire county boundaries but to the shared boundaries only, to ensure no gaps appear; this might be quite tricky.
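A minimal recursive sketch of Ramer–Douglas–Peucker (epsilon is the maximum allowed deviation, in the same units as your coordinates):

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from p to the line through a and b."""
    if a == b:
        return math.hypot(p[0] - a[0], p[1] - a[1])
    num = abs((b[0] - a[0]) * (a[1] - p[1]) - (a[0] - p[0]) * (b[1] - a[1]))
    return num / math.hypot(b[0] - a[0], b[1] - a[1])

def rdp(points, epsilon):
    """Simplify a polyline: keep endpoints, recurse on the farthest point."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    i, dmax = max(((i, point_line_dist(p, a, b))
                   for i, p in enumerate(points[1:-1], 1)),
                  key=lambda x: x[1])
    if dmax <= epsilon:              # whole run is close to the chord: drop it
        return [a, b]
    return rdp(points[:i + 1], epsilon)[:-1] + rdp(points[i:], epsilon)
```

For closed county outlines you'd split the ring at two points first, and (as noted) run it per shared boundary so neighbouring counties stay stitched together.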
Here you can find a project dealing with exactly your issue. Although it works primarily with an area "filled" by points, you can set it to work with a "perimeter"-type definition like yours.
It uses a k-nearest neighbors approach for calculating the region.
Here you can request a copy of the paper.
Apparently they planned to offer an online service for requesting calculations, but I didn't test it, and it probably isn't running.
HTH!
Polygon triangulation should help here. You'll still have to check many polygons, but they are triangles now, so they are easier to check, and you can use some optimizations to select only a small subset of triangles to check for a given region or point.
Since it seems you have all the algorithms you need for general polygons, not only for triangles, you can also merge several triangles after triangulation if they are too small or if the triangle count gets too high.
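The per-triangle point test then reduces to three sign checks. A minimal sketch:

```python
def point_in_triangle(p, a, b, c):
    """True if point p lies inside (or on the edge of) triangle abc."""
    def side(p, q, r):              # sign of the 2D cross product (p-r) x (q-r)
        return (p[0] - r[0]) * (q[1] - r[1]) - (q[0] - r[0]) * (p[1] - r[1])
    d1, d2, d3 = side(p, a, b), side(p, b, c), side(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # p on the same side of all three edges
```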
I have a set of 3D points that approximate a surface. Each point, however, is subject to some error. Furthermore, the set contains many more points than are actually needed to represent the underlying surface.
What I am looking for is an algorithm to create a new (much smaller) set of points representing a simplified, smoother version of the surface (pardon for not having a better definition than "simplified, smoother"). The underlying surface is not a mathematical one so I'm not hoping to fit the data set to some mathematical function.
Instead of dealing with it as a point cloud, I would recommend triangulating a mesh using Delaunay triangulation: http://en.wikipedia.org/wiki/Delaunay_triangulation
Then decimate the mesh. You can research decimation algorithms, but you can get pretty good quick-and-dirty results with an algorithm that just merges adjacent tris that have similar normals.
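A sketch of the triangulate-then-inspect-normals start, under the assumption that the points are roughly a heightfield (z single-valued over x, y) so that a 2D Delaunay of the projection gives a sensible mesh; scipy provides the triangulation:

```python
import numpy as np
from scipy.spatial import Delaunay

points = np.loadtxt("points.xyz")        # placeholder: your (n, 3) samples
tri = Delaunay(points[:, :2])            # triangulate the (x, y) projection
# tri.simplices is an (m, 3) array of vertex indices: the mesh connectivity.
v = points[tri.simplices]                # (m, 3, 3): the triangle corners
normals = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
# Adjacent triangles whose normals have a dot product near 1 are the
# "similar normals" candidates for the quick-and-dirty merge.
```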
I think you are looking for 'Level of detail' algorithms.
A simple one to implement is to break your volume (surface) into some number of sub-volumes. From the points in each sub-volume, choose a representative point (such as the one closest to the centre, the one closest to the average, or the average itself). Use these points to redraw your surface.
You can tweak the number of sub-volumes to increase/decrease detail on the fly.
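A minimal sketch of that sub-volume scheme with a uniform grid and the per-cell average as the representative (the cell size is the detail knob):

```python
import numpy as np

def grid_downsample(points, cell_size):
    """One representative point (the centroid) per occupied grid cell."""
    points = np.asarray(points, float)
    cells = np.floor(points / cell_size).astype(int)   # sub-volume index
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, points.shape[1]))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)                   # accumulate per cell
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]                      # per-cell averages
```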
I'd approach this by looking for vertices (points) that contribute little to the curvature of the surface. Find all the edges emerging from each vertex and take the dot products of pairs of them. The points representing very shallow "hills" will subtend huge angles (near 180 degrees) and have small dot products.
Those vertices with the smallest numbers would then be candidates for removal. The vertices around them will then form a plane.
Or something like that.
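A minimal sketch of the dot-product score, assuming you already know each vertex's neighbours from the mesh connectivity:

```python
import numpy as np
from itertools import combinations

def flatness_score(vertex, neighbours):
    """Most-negative pairwise dot product of the unit edges leaving vertex.
    Near -1 means some pair of edges is almost a straight line through it
    (a very shallow "hill"), making the vertex a removal candidate.
    Needs at least two neighbours."""
    edges = [np.asarray(n, float) - vertex for n in neighbours]
    edges = [e / np.linalg.norm(e) for e in edges]
    return min(a @ b for a, b in combinations(edges, 2))
```

Sorting vertices by this score (most negative first) gives a removal order.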
Google for Hugues Hoppe and his "surface reconstruction" work.
Surface reconstruction is used to fit a meshed surface to the point cloud; however, this method yields lots of triangles. You can then apply a mesh reduction technique to reduce the polygon count in a way that minimizes error. As an example, you can look at OpenMesh's decimation methods.
OpenMesh
Hugues Hoppe
There exist several different techniques for point-based surface model simplification, including:
clustering;
particle simulation;
iterative simplification.
See the survey:
M. Pauly, M. Gross, and L. P. Kobbelt. Efficient simplification of point-sampled surfaces. In Proceedings of the Conference on Visualization '02, pages 163–170, Washington, DC, 2002. IEEE.
Unless you parametrise your surface in some way, I'm not sure how you can decide which points carry similar information (and can thus be thrown away).
I guess you could choose a bunch of points at random to get rid of, but that doesn't sound like what you want to do.
Maybe points near each other (for some definition of "near") can be considered to contain similar information, and each such group could be reduced to a single representative.
Could you give some more details?
It's simpler to simplify a point cloud without the constraints of mesh triangles and indices.
Smoothing and simplification are different tasks, though. To simplify the cloud, you should first get rid of noise artefacts by building a profile of the kind of noise you have (its frequency and directional characteristics) and doing a reduction based on that noise profile. Good normal vectors are helpful for that.
Here is a document about 5-6 simplification methods using Delaunay, Voronoi, and k-nearest-neighbour maths:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.10.9640&rep=rep1&type=pdf
A later version from 2008:
http://www.wseas.us/e-library/transactions/research/2008/30-705.pdf
Here is a recent C++ version:
https://github.com/tudelft3d/masbcpp/blob/master/src/simplify.cpp