The problem at hand is part of a scientific simulation concerned with 2D growth within 3D space. The 2D shape grows by adding (triangular) segments to the previously grown shape.
Note that the actual segments in 3D have a thickness, thus, my code actually works with triangular prisms.
At one point, these 2D shapes (with whatever relative orientation and position) collide.
If one of the new triangular prisms intersects with previously inserted segments, I only want to insert the "part" of the segment which does not intersect with the previously inserted segments, as shown below for the segments labelled T1 and T2.
In the first step, I calculated all edge-to-face intersections. I then used the CGAL Delaunay Triangulation package in 3D to mesh the resulting point set into a tetrahedral mesh. As a last step, I throw away all tetrahedra which intersect with previously inserted segments.
This works beautifully in most cases - but I am convinced by now that the idea cannot work for fundamental reasons.
What's a more reliable way to compute this?
So you're looking to find the intersection of 2 convex hulls?
I think you're correct that your approach won't work in all cases, but it fails only in certain degenerate cases. For example, if one convex hull is entirely inside the other then there are no face/edge intersections.
A more obviously robust approach is to start off with a convex hull big enough to enclose both your hulls (so basically representing all of space), and then for each plane in both your convex hulls partition this hull by that plane, throwing away the part on the "outside".
The result will be the intersection - which may be nothing.
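For what it's worth, here is a minimal sketch of that idea in Python, assuming both shapes are available as 3D point arrays. Rather than clipping a big starting hull plane by plane, it collects the facet half-spaces of both hulls and intersects them all at once with scipy's HalfspaceIntersection, which gives the same result; the interior point it needs is found with a small linear program (Chebyshev center). The function name is illustrative.

```python
# Sketch: intersection of two convex hulls as a half-space intersection.
# Assumes both shapes are given as 3D point arrays; names are illustrative.
import numpy as np
from scipy.optimize import linprog
from scipy.spatial import ConvexHull, HalfspaceIntersection

def convex_hull_intersection(points_a, points_b):
    # Each hull's facets give half-spaces of the form n.x + d <= 0.
    halfspaces = np.vstack([ConvexHull(points_a).equations,
                            ConvexHull(points_b).equations])

    # Find a strictly interior point (Chebyshev center) via linear programming:
    # maximize r subject to n.x + d + r*|n| <= 0.
    norms = np.linalg.norm(halfspaces[:, :-1], axis=1, keepdims=True)
    c = np.zeros(halfspaces.shape[1])
    c[-1] = -1.0                      # maximize r  <=>  minimize -r
    A = np.hstack([halfspaces[:, :-1], norms])
    b = -halfspaces[:, -1]
    res = linprog(c, A_ub=A, b_ub=b, bounds=(None, None))
    if not res.success or res.x[-1] <= 0:
        return None                   # hulls do not overlap (or only touch)

    interior = res.x[:-1]
    hs = HalfspaceIntersection(halfspaces, interior)
    return hs.intersections           # vertices of the intersection polytope
```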
This question is an extension of some computation details of this question.
Suppose one has a set of (potentially overlapping) circles, and one wishes to compute the area this set of circles covers. (For simplicity, one can assume some precomputation steps have been made, such as getting rid of circles included entirely in other circles, as well as that the circles induce one connected component.)
One way to do this is mentioned in Ants Aasma's and Timothy Shields' answers: the area covered by overlapping circles is just a collection of circle slices and polygons, whose areas are both easy to compute.
The trouble I'm encountering however is the computation of these polygons. The nodes of the polygons (consisting of circle centers and "outer" intersection points) are easy enough to compute:
And at first I thought a simple algorithm of picking a random node and visiting neighbors in clockwise order would be sufficient, but this can result in the following "outer" polygon being constructed, which is not one of the correct polygons.
So I thought of different approaches: a Breadth First Search to compute minimal cycles, for example. But I think the previous counterexample can easily be modified so that this approach results in the "inner" polygon containing the hole (which is thus not a correct polygon).
I was thinking of maybe running a Las Vegas style algorithm, taking random points and if said point is in an intersection of circles, try to compute the corresponding polygon. If such a polygon exists, remove circle centers and intersection points composing said polygon. Repeat until no circle centers or intersection points remain.
This would avoid ending up computing the "outer" polygon or the "inner" polygon, but it would introduce new problems (besides the potentially high running time): e.g. if more than 2 circles intersect in a single point, computing one polygon could remove that intersection point even though it is still needed for the next.
Ultimately, my question is: How to compute such polygons?
PS: As a bonus question for after having computed the polygons, how to know which angle to consider when computing the area of some circle slice, between theta and 2PI - theta?
Once we have the points of the polygons in the right order, computing the area is not too difficult.
The way to achieve that is by exploiting planar duality. See the Wikipedia article on the doubly connected edge list representation for diagrams, but the gist is, given an oriented edge whose right face is inside a polygon, the next oriented edge in that polygon is the reverse direction of the previous oriented edge with the same head in clockwise order.
Hence we've reduced the problem to finding the oriented edges of the polygonal union and determining the correct order with respect to each head. We actually solve the latter problem first. Each intersection of disks gives rise to a quadrilateral. Let's call the centers C and D and the intersections A and B. Assume without loss of generality that the disk centered at C is not smaller than the disk centered at D. The interior angle formed by A→C←B is less than 180 degrees, so the signed area of that triangle is negative if and only if A→C precedes B→C in clockwise order around C, which in turn holds if and only if B→D precedes A→D in clockwise order around D.
Now we determine which edges are actually polygon boundaries. For a particular disk, we have a bunch of angle intervals around its center from before (each sweeping out the clockwise sector from the first endpoint to the second). What we need amounts to a more complicated version of the common interview question of computing the union of segments. The usual sweep line algorithm that increases the cover count whenever it scans an opening endpoint and decreases the cover count whenever it scans a closing endpoint can be made to work here, with the adjustment that we need to initialize the count not to 0 but to the proper cover count of the starting angle.
There's a way to do all of this with no trigonometry, just subtraction and determinants and comparisons.
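As a concrete illustration of that determinant-only ordering test, here is a small Python sketch following the statement above; the point names mirror the description, and `signed_area2` returns twice the signed area of the triangle.

```python
# Sketch of the trig-free ordering test: the sign of a 2x2 determinant
# (twice the signed area of triangle A, C, B) decides clockwise order.
def signed_area2(a, c, b):
    # > 0: a -> c -> b turns counter-clockwise; < 0: clockwise; 0: collinear
    return (c[0] - a[0]) * (b[1] - a[1]) - (c[1] - a[1]) * (b[0] - a[0])

def a_precedes_b_around_c(a, b, c):
    # True if the oriented edge A->C comes before B->C in clockwise order
    # around C (assuming the interior angle at C is below 180 degrees).
    return signed_area2(a, c, b) < 0
```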
I have a 3D mesh consisting of triangle polygons. My mesh can be either oriented left or right:
I'm looking for a method to detect mesh direction: right vs left.
So far I have tried using the mesh centroid:
Compare centroid to bounding-box (b-box) center
See if centroid is located left of b-box center
See if centroid is located right of b-box center
But the problem is that in most cases there is no reliable difference between the centroid and the b-box center.
I wonder what is a quick algorithm to detect my mesh direction.
Update
An idea proposed by @collapsar is to order the Convex Hull points clockwise and investigate the longest edge:
Update
Another approach, as suggested by @YvesDaoust, is to investigate two specific regions of the mesh:
Count the vertices in two predefined regions of the bounding box. This is a fairly simple O(N) procedure.
Unless your dataset is sorted in some way, you can't be faster than O(N). But if the point density allows it, you can subsample by taking, say, every tenth point while applying the procedure.
You can as well keep your idea of the centroid, but apply it to a subpart too.
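A minimal sketch of that counting idea, assuming the vertices are available as an N x 2 (or N x 3) NumPy array, that the opening lies along the x axis, and that a band of 25% of the bounding-box width is an arbitrary choice for the two regions:

```python
import numpy as np

def opens_left(vertices, band=0.25):
    # Count vertices in a band near the left and right edges of the
    # bounding box; the side with fewer vertices is the open side.
    x = np.asarray(vertices)[:, 0]
    xmin, xmax = x.min(), x.max()
    width = xmax - xmin
    left_count = np.count_nonzero(x < xmin + band * width)
    right_count = np.count_nonzero(x > xmax - band * width)
    return left_count < right_count   # True: mesh opens to the left
```

Subsampling as suggested is just a matter of passing `vertices[::10]` instead of the full array.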
The efficiency of an algorithm to solve your problem will depend on the data structures that represent your mesh. You might need to be more specific about them in order to obtain a sufficiently performant procedure.
The algorithms are presented in an informal way. For a more rigorous analysis, math.stackexchange might be a more suitable place to ask (or another contributor may be more adept at answering ...).
The algorithms are heuristic by nature. Proposals 1 and 3 will work fine for meshes whose boundary curvature is mostly locally convex (skipping a rigorous mathematical definition here). Proposal 2 should be less dependent on the mesh shape (and can be easily tuned to cater for ill-behaved shapes).
Proposal 1 (Convex Hull, 2D)
Let M be the set of mesh points, projected onto a 'suitable' plane as suggested by the graphics you supplied.
Compute the convex hull CH(M) of M.
Order the n points of CH(M) in clockwise order relative to any point inside CH(M) to obtain a point sequence seq(P) = (p_0, ..., p_(n-1)), with p_0 being an arbitrary element of CH(M). Note that this is usually a by-product of the convex hull computation.
Find the longest edge of the convex polygon implied by CH(M).
Specifically, find k, such that the distance d(p_k, p_((k+1) mod n)) is maximal among all d(p_i, p_((i+1) mod n)); 0 <= i < n;
Consider the vector (p_k, p_((k+1) mod n)).
If the y coordinate of its head is greater than that of its tail (ie. its projection onto the line ((0,0), (0,1)) is oriented upwards) then your mesh opens to the left, otherwise to the right.
Step 3 exploits the condition that the mesh boundary be mostly locally convex. Thus the convex hull polygon sides are basically short, with the exception of the side that spans the opening of the mesh.
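A sketch of Proposal 1 in Python, assuming the points are already projected to 2D; scipy returns the 2D hull vertices counter-clockwise, so they are reversed here to obtain the clockwise order used above:

```python
import numpy as np
from scipy.spatial import ConvexHull

def opens_left_convex_hull(points_2d):
    # Proposal 1: the longest convex-hull edge spans the opening; its
    # orientation (taken in clockwise hull order) tells the direction.
    hull = ConvexHull(points_2d)
    cw = np.asarray(points_2d)[hull.vertices[::-1]]  # clockwise order
    nxt = np.roll(cw, -1, axis=0)
    lengths = np.linalg.norm(nxt - cw, axis=1)
    k = int(np.argmax(lengths))
    tail, head = cw[k], nxt[k]
    return head[1] > tail[1]          # True: mesh opens to the left
```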
Proposal 2 (bisector sampling, 2D)
1. Order the mesh points by their x coordinates into a sequence seq(M).
2. Split seq(M) into 2 halves; let seq_left(M), seq_right(M) denote the partition elements.
3. Repeat the following steps for both point sets.
3.1. Select randomly 2 points p_0, p_1 from the point set.
3.2. Find the point p_01 bisecting the line segment (p_0, p_1) (its midpoint).
3.3. Test whether p_01 lies within the mesh.
3.4. Keep a count on failed tests.
Statistically, the mesh point subset that 'contains' the opening will produce more failures for the same given number of tests run on each partition. Alternative test criteria will work as well, e.g. recording the average distance d(p_0, p_1) or the average length of the portions of (p_0, p_1) outside the mesh (both higher on the mesh point subset with the opening). Cut off the repetition of step 3 once the difference in test results between the two halves is sufficiently pronounced. For ill-behaved shapes, increase the number of repetitions.
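A rough sketch of Proposal 2, where `inside_mesh(p)` is an assumed user-supplied point-in-mesh test and the failure count is the test criterion:

```python
import numpy as np

def opening_side_by_sampling(points_2d, inside_mesh, trials=200, rng=None):
    # Proposal 2: the half whose sampled midpoints fall outside the mesh
    # more often is the half that contains the opening.
    # `inside_mesh(p)` is an assumed user-supplied point-in-mesh test.
    rng = np.random.default_rng() if rng is None else rng
    pts = np.asarray(points_2d, dtype=float)
    pts = pts[np.argsort(pts[:, 0])]
    halves = {"left": pts[: len(pts) // 2], "right": pts[len(pts) // 2:]}
    failures = {}
    for side, subset in halves.items():
        fails = 0
        for _ in range(trials):
            i, j = rng.choice(len(subset), size=2, replace=False)
            midpoint = (subset[i] + subset[j]) / 2.0
            if not inside_mesh(midpoint):
                fails += 1
        failures[side] = fails
    return max(failures, key=failures.get)   # side containing the opening
```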
Proposal 3 (Convex Hull, 3D)
For the sake of completeness only, as your problem description suggests that the analysis effectively takes place in 2D.
Similar to Proposal 1, the computations can be performed in 3D. The convex hull of the mesh points then implies a convex polyhedron whose faces should be ordered by area. Select the face with the maximum area and compute its outward-pointing normal which indicates the direction of the opening from the perspective of the b-box center.
The computation gets more complicated if there is much variation in the side lengths of the minimal bounding box of the mesh points, i.e. if there is a plane in which most of the variation of the mesh point coordinates occurs. In the graphics you've supplied that would be the plane in which the mesh points are rendered, assuming that their coordinates do not vary much along the axis perpendicular to that plane.
The solution is to identify such a plane and project the mesh points onto it, then resort to proposal 1.
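One way to identify such a plane is a principal component analysis of the mesh points; a sketch via NumPy's SVD, where the first two principal directions span the dominant plane:

```python
import numpy as np

def project_to_dominant_plane(points_3d):
    # Find the plane capturing most of the coordinate variation (PCA via SVD)
    # and project the mesh points onto it, ready for Proposal 1.
    pts = np.asarray(points_3d, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    # The first two right-singular vectors span the dominant plane.
    return centered @ vt[:2].T        # N x 2 projected coordinates
```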
I've got some 2D polygons, each as a list of clockwise coordinates. The polygons are
simple (i.e. they may be concave but they don't intersect themselves) and they don't overlap each other.
I need to subdivide these polygons into smaller polygons to fit a size constraint. Just like the original polygons, the smaller ones should be simple (non-self-intersecting) and the constraint is they should each fit within one 'unit square' (which, for sake of simplicity, I can assume to be 1x1).
The thing is, I need to do this as efficiently as possible, where 'efficient' means the lowest number of resulting (small) polygons possible. Computation time is not important.
Is there some smart algorithm for this? At first I thought about recursively subdividing each polygon (splitting it in half, horizontally or vertically, whichever direction is larger), which works, but the results don't seem very close to optimal. Any ideas?
Draw a circle centered at one of the initial points of the initial polygon, with a radius equal to your desired length constraint.
The circle will intersect at least two edges, at two points. Now you have your first triangle, as big as possible. Then choose those intersection points as the next targets. Repeat until no initial points are left outside. This gives you triangles that are as large as possible (and hence as few as possible).
Do not treat the already-created triangle edges as sources of intersection points.
The resulting polygons are not always triangles; they can be quads, or polygons with even more vertices.
They are all just about the desired size.
Fine-tuning the interior parts would need some calculation.
I suggest you use the following:
Triangulate the polygon, e.g. using a sweep line algorithm.
Make sure none of the triangles violates the constraint. If one does, first try edge flips to fix it; otherwise subdivide it on its longest edge (a sketch of this check-and-split step follows the list).
Use dynamic programming to join the triangles, while maintaining the constraint and only joining adjacent polygons.
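A small sketch of step 2's check-and-split part, assuming "fits within a unit square" means the triangle's axis-aligned bounding box is at most 1x1, and skipping the edge-flip optimization and the dynamic-programming join:

```python
import numpy as np

def fits_unit_square(tri):
    # Assumption: a triangle fits iff its axis-aligned bounding box is <= 1x1.
    tri = np.asarray(tri)
    extent = tri.max(axis=0) - tri.min(axis=0)
    return extent[0] <= 1.0 and extent[1] <= 1.0

def split_to_unit_squares(tri):
    # Recursively bisect the longest edge until the pieces satisfy the
    # constraint (step 2 above, without the edge-flip optimization).
    if fits_unit_square(tri):
        return [tri]
    tri = np.asarray(tri, dtype=float)
    edges = [(0, 1), (1, 2), (2, 0)]
    lengths = [np.linalg.norm(tri[a] - tri[b]) for a, b in edges]
    a, b = edges[int(np.argmax(lengths))]
    c = 3 - a - b                      # the vertex opposite the longest edge
    mid = (tri[a] + tri[b]) / 2.0
    return (split_to_unit_squares(np.array([tri[a], mid, tri[c]])) +
            split_to_unit_squares(np.array([mid, tri[b], tri[c]])))
```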
What is a fast algorithm for determining whether or not a point is inside a 3D mesh? For simplicity you can assume the mesh is all triangles and has no holes.
What I know so far is that one popular way of determining whether or not a ray has crossed a mesh is to count the number of ray/triangle intersections. It has to be fast because I am using it for a haptic medical simulation. So I cannot test all of the triangles for ray intersection. I need some kind of hashing or tree data structure to store the triangles in to help determine which triangle are relevant.
Also, I know that if I have any arbitrary 2D projection of the vertices, a simple point/triangle intersection test is all that is necessary. However, I'd still need to know which triangles are relevant and, in addition, which triangles lie in front of the point, and only test those triangles.
I solved my own problem. Basically, I take an arbitrary 2D projection (throw out one of the coordinates), and hash the AABBs (Axis Aligned Bounding Boxes) of the triangles into a 2D array. (A set of 3D cubes, as mentioned by titus, is overkill, as it only gives you a constant factor speedup.) Use the 2D array and the 2D projection of the point you are testing to get a small set of triangles, run a 3D ray/triangle intersection test against them (see Intersections of Rays, Segments, Planes and Triangles in 3D), and count the number of triangles the ray intersects where the z-coordinate (the coordinate thrown out) is greater than the z-coordinate of the point. An even number of intersections means the point is outside the mesh; an odd number means it is inside. This method is not only fast, but very easy to implement (which is exactly what I was looking for).
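For illustration, a minimal Python sketch of that scheme, with a dict standing in for the 2D array, an arbitrary fixed cell size, and a vertical (+z) ray; degenerate cases such as the ray grazing an edge or a vertex are ignored here:

```python
import numpy as np
from collections import defaultdict

def build_grid(triangles, cell=1.0):
    # Hash each triangle's 2D (x, y) bounding box into grid cells.
    grid = defaultdict(list)
    for idx, tri in enumerate(triangles):
        t = np.asarray(tri)
        lo = np.floor(t[:, :2].min(axis=0) / cell).astype(int)
        hi = np.floor(t[:, :2].max(axis=0) / cell).astype(int)
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                grid[(i, j)].append(idx)
    return grid

def inside_mesh(point, triangles, grid, cell=1.0):
    # Cast a ray along +z and count the parity of crossings above the point.
    px, py, pz = point
    key = (int(np.floor(px / cell)), int(np.floor(py / cell)))
    crossings = 0
    for idx in grid.get(key, ()):
        a, b, c = np.asarray(triangles[idx], dtype=float)
        # 2D barycentric coordinates of (px, py) in the projected triangle.
        d = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
        if abs(d) < 1e-12:
            continue                  # degenerate projection, skipped here
        w0 = ((b[1] - c[1]) * (px - c[0]) + (c[0] - b[0]) * (py - c[1])) / d
        w1 = ((c[1] - a[1]) * (px - c[0]) + (a[0] - c[0]) * (py - c[1])) / d
        w2 = 1.0 - w0 - w1
        if min(w0, w1, w2) < 0:
            continue                  # projected point lies outside the triangle
        z_hit = w0 * a[2] + w1 * b[2] + w2 * c[2]
        if z_hit > pz:
            crossings += 1
    return crossings % 2 == 1         # odd = inside
```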
This algorithm is efficient only if you have many queries to justify the time for constructing the data structure.
Divide the space into cubes of equal size (we'll figure out the size later). For each cube, record which triangles have at least a point in it. Discard the cubes that don't contain anything. Do a ray casting algorithm as presented on Wikipedia, but instead of testing the line against every triangle, get all the cubes that intersect the line, and then do ray casting only with the triangles in those cubes. Watch out not to test the same triangle more than once if it is present in several cubes.
Finding the proper cube size is tricky; it should be neither too big nor too small, and it can only be found by trial and error.
Let's say the number of cubes is c and the number of triangles is t.
The mean number of triangles in a cube is t/c.
Let k be the mean number of cubes that intersect the ray.
The total work, line-cube intersections plus line-triangle intersections in those cubes, should be minimal:
minimize f(c) = c + k*t/c; setting f'(c) = 1 - k*t/c^2 = 0 gives c = sqrt(k*t).
You'll have to test values for the size of the cubes until c = sqrt(k*t) roughly holds.
A good starting guess for the size of the cube would be sqrt(mesh width)
To put that in perspective: for 1M triangles you'd test on the order of 1k intersections.
Ray Triangle Intersection appears to be a good algorithm when it comes to accuracy. The Wiki has some more algorithms. I am linking it here, but you might have seen this already.
Can you perhaps improvise by maintaining a matrix of the relationship between the points and the planes to which they belong as vertices? This subject appears to be a topic of investigation in academia. Not sure how to access more discussions related to this.
I have a 3D polygon mesh and a corresponding 2D polygon mesh (actually from a UV map) which I'm using to map the geometry onto a 2D plane. Given a point on the plane, how can I efficiently find the polygon on which it's resting in order to map that 2D point back into 3D?
The best approach I can think of is to store the polygons in a 2D interval tree, and use that to get candidate polygons. Is there a simpler approach?
To clarify, this is not for a shader. I'm actually taking a 2D physical simulation and rendering it wrapped around a 3D mesh. For drawing each object, I need to figure out what point in 3D corresponds to its real 2D position.
One approach I've seen for triangle meshes goes as follows: choose a triangle, and imagine that each of the sides defines a half space. For a given edge, the half space boundary is the line containing the edge, and the half space does not contain the triangle. Choose an edge whose corresponding half space contains your target point. Then select the triangle on the other side of edge, and repeat the process.
Using this method, you will eventually end up at the triangle that contains your target point.
This method is arguably simpler than implementing a 2D interval tree, although the search is less efficient (if n is the number of triangles, it is O(√n) rather than O(log n)). Also, it should work for a polygon mesh, as long as the polygons are convex.
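A sketch of that walk in Python, assuming consistently counter-clockwise triangles given as vertex-index triples and an assumed `neighbors[t][e]` table giving the triangle across edge e of triangle t (or -1 on the boundary); it has no protection against revisiting triangles:

```python
def locate_by_walking(point, start_tri, vertices, triangles, neighbors):
    # Walk the 2D mesh: if the point lies in the outer half-plane of some
    # edge, step to the triangle across that edge; otherwise we're done.
    # `neighbors[t][e]` (assumed) is the triangle sharing edge e of t, or -1.
    px, py = point
    t = start_tri
    while True:
        ia, ib, ic = triangles[t]
        corners = (vertices[ia], vertices[ib], vertices[ic])
        moved = False
        for e in range(3):
            ax, ay = corners[e]
            bx, by = corners[(e + 1) % 3]
            # Negative cross product: the point is to the right of edge
            # a->b, i.e. in the half-plane not containing the triangle.
            if (bx - ax) * (py - ay) - (by - ay) * (px - ax) < 0:
                nxt = neighbors[t][e]
                if nxt < 0:
                    return None        # walked off the mesh boundary
                t = nxt
                moved = True
                break
        if not moved:
            return t                   # containing triangle
```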
So, if I were trying to just get the thing implemented, I'd probably start with a global search of all triangles - compute the barycentric coordinates of that 2d point for each triangle, find the triangle where the barycentric coordinates are all positive, and then use those to map to 3d (multiply the stu position by the 3d points). I'd do this first, and only if it's not fast enough would I try something more complex.
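A sketch of that global search, assuming the UV triangles and their 3D counterparts are available as parallel lists of corner coordinates (names are illustrative):

```python
import numpy as np

def map_2d_to_3d(p2d, uv_triangles, world_triangles):
    # Global search: find the 2D triangle whose barycentric coordinates for
    # p2d are all non-negative, then apply the same weights to the 3D points.
    px, py = p2d
    for uv, xyz in zip(uv_triangles, world_triangles):
        (ax, ay), (bx, by), (cx, cy) = uv
        d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
        if abs(d) < 1e-12:
            continue                   # degenerate UV triangle
        s = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
        t = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
        u = 1.0 - s - t
        if s >= 0 and t >= 0 and u >= 0:
            a3, b3, c3 = np.asarray(xyz, dtype=float)
            return s * a3 + t * b3 + u * c3
    return None                        # point is not covered by the UV mesh
```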
If it's possible to iterate by triangle rather than by 2d points, then the barycentric method would probably be fast enough. But it seems like you've got a bunch of 2d points at arbitrary positions that need to be mapped, and the points change position from frame to frame?
If you've got this kind of situation, you could probably get a big speedup by implementing a local update per frame. Each 2d point would remember which triangle it was within. Set that as the current triangle. Test if the new position is within the current triangle. If not, then you want to walk the mesh to the adjacent triangle which is closest to the target 2d point. Each edge-adjacent triangle is composed of the two common points on the edge, plus another point. Find which edge-adjacent triangle's other point is closest to the target, and set that as current. Then iterate - seems like it should find it pretty quickly? You could also cache a max size for each triangle, so if the point has moved a lot you can just iterate to the next neighbor without doing the barycentric computation (the max size would need to be the distance such that if you are farther than that distance from any triangle point there is no chance you're inside the triangle. This is the length of the largest edge).
But as you mention in your comments, you can run into problems with meshes that have concavities, holes, or separate connected components, where you may fall into a local minimum. There are a couple of ways to deal with this. I think the simplest is to keep a list of all visited triangles (maybe as a flag on the triangle, vector< bool > or set< triangle index >) and refuse to revisit a triangle. If you find that you've visited all the neighbors of your current triangle, then fall back to a global search. Such failures are likely to be uncommon, so it shouldn't hurt your performance too much.
This kind of per-frame updating can be very fast, and might even be a decent approach for computing the initial containing triangles - just choose a random triangle and walk from there (changes from checking all n triangles to only those that are in roughly a straight line to the target). If it's not fast enough, what you could do is keep a k-d tree (or something similar) of the 2d mesh points as well as a single touching triangle index for each mesh point. To seed the iteration, find the closest point to the target 2d point in the k-d tree, set the adjacent triangle to be current, and then iterate.
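For the k-d tree seeding in particular, a small sketch with scipy, where `point_to_triangle` is an assumed precomputed mapping from each 2D mesh-point index to one triangle touching it:

```python
import numpy as np
from scipy.spatial import cKDTree

def make_seed_lookup(uv_points, point_to_triangle):
    # Build a k-d tree over the 2D mesh points; `point_to_triangle` (assumed)
    # maps each mesh-point index to one triangle touching it.
    tree = cKDTree(np.asarray(uv_points))

    def seed_triangle(p2d):
        _, nearest = tree.query(p2d)   # index of the closest mesh point
        return point_to_triangle[nearest]

    return seed_triangle
```

The returned seed triangle can then be fed into a local walk like the one sketched above.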