Algorithm for determining whether a point is inside a 3D mesh

What is a fast algorithm for determining whether or not a point is inside a 3D mesh? For simplicity you can assume the mesh is all triangles and has no holes.
What I know so far is that one popular way of determining whether or not a point is inside a mesh is to cast a ray from the point and count the number of ray/triangle intersections. It has to be fast because I am using it for a haptic medical simulation, so I cannot test all of the triangles for ray intersection. I need some kind of hashing or tree data structure to store the triangles in, to help determine which triangles are relevant.
Also, I know that if I have an arbitrary 2D projection of the vertices, a simple point/triangle intersection test is all that is necessary. However, I'd still need to know which triangles are relevant and, in addition, which triangles lie in front of the point, and only test those triangles.

I solved my own problem. Basically, I take an arbitrary 2D projection (throw out one of the coordinates) and hash the AABBs (axis-aligned bounding boxes) of the triangles into a 2D array. (A set of 3D cubes, as mentioned by titus, is overkill, as it only gives you a constant-factor speedup.) Use the 2D array and the 2D projection of the point you are testing to get a small set of triangles, do a 3D ray/triangle intersection test on each of them (see Intersections of Rays, Segments, Planes and Triangles in 3D), and count the number of triangles the ray intersects where the z-coordinate of the intersection (the coordinate thrown out) is greater than the z-coordinate of the point. An even number of intersections means the point is outside the mesh; an odd number means it is inside. This method is not only fast, but very easy to implement (which is exactly what I was looking for).
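Below is a minimal sketch of that approach, assuming the mesh is given as a vertex list plus index triples; the grid layout, cell size, and function names are illustrative choices, not part of the original solution.

    # Minimal sketch of the approach above: project to XY (drop z), hash triangle
    # AABBs into a uniform 2D grid, then count ray/triangle crossings along +z
    # and use parity. Names and the grid resolution are illustrative assumptions.
    def build_grid(vertices, triangles, cell_size):
        """Map each grid cell (ix, iy) to the indices of triangles whose
        projected AABB overlaps that cell."""
        grid = {}
        for t_idx, (a, b, c) in enumerate(triangles):
            xs = [vertices[i][0] for i in (a, b, c)]
            ys = [vertices[i][1] for i in (a, b, c)]
            ix0, ix1 = int(min(xs) // cell_size), int(max(xs) // cell_size)
            iy0, iy1 = int(min(ys) // cell_size), int(max(ys) // cell_size)
            for ix in range(ix0, ix1 + 1):
                for iy in range(iy0, iy1 + 1):
                    grid.setdefault((ix, iy), []).append(t_idx)
        return grid

    def point_in_mesh(p, vertices, triangles, grid, cell_size):
        """Cast a ray from p straight up along +z; an odd crossing count means inside."""
        px, py, pz = p
        cell = (int(px // cell_size), int(py // cell_size))
        crossings = 0
        for t_idx in grid.get(cell, []):
            a, b, c = (vertices[i] for i in triangles[t_idx])
            # 2D barycentric test in the XY projection.
            d = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
            if abs(d) < 1e-12:            # degenerate projection, skip
                continue
            w0 = ((b[1] - c[1]) * (px - c[0]) + (c[0] - b[0]) * (py - c[1])) / d
            w1 = ((c[1] - a[1]) * (px - c[0]) + (a[0] - c[0]) * (py - c[1])) / d
            w2 = 1.0 - w0 - w1
            if w0 < 0 or w1 < 0 or w2 < 0:
                continue                  # (x, y) falls outside the projected triangle
            # z of the triangle at (x, y); count it only if it lies above the point.
            z_hit = w0 * a[2] + w1 * b[2] + w2 * c[2]
            if z_hit > pz:
                crossings += 1
        return crossings % 2 == 1         # odd = inside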

This algorithm is efficient only if you have many queries to justify the time spent constructing the data structure.
Divide the space into cubes of equal size (we'll figure out the size later). For each cube, record which triangles have at least one point in it, and discard the cubes that don't contain anything. Do a ray-casting algorithm as presented on Wikipedia, but instead of testing whether the line intersects each triangle, get all the cubes that intersect the line and do ray casting only with the triangles in those cubes. Watch out not to test the same triangle more than once, since it may be present in more than one cube.
Finding the proper cube size is tricky; it should be neither too big nor too small, and it can only be found by trial and error.
Let's say the number of cubes is c and the number of triangles is t, so the mean number of triangles per cube is t/c. Let k be the mean number of cubes that intersect the ray. The cost per query is roughly the c line-cube intersection tests plus the k*t/c line-triangle tests in the intersected cubes, and this sum has to be minimal: minimizing c + k*t/c over c gives c = sqrt(k*t). You'll have to try cube sizes until c = sqrt(k*t) roughly holds; a good starting guess for the size of a cube would be sqrt(mesh width). To put this in perspective, for 1M triangles you'll do on the order of 1k intersection tests per query.
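As a rough illustration of the sizing heuristic (treating c as the total number of cells in the bounding box, which is only a crude proxy for the number of non-empty cubes), something like the following turns c = sqrt(k*t) into a cube edge length; the k estimate and the names are assumptions.

    # Illustrative helper: pick a total cube count c ~ sqrt(k * t) and convert it
    # into a cube edge length for a given bounding box. k must be estimated (or
    # measured after a first pass).
    import math

    def suggest_cube_size(bbox_min, bbox_max, num_triangles, k_estimate=10):
        extents = [hi - lo for lo, hi in zip(bbox_min, bbox_max)]
        volume = extents[0] * extents[1] * extents[2]
        c = math.sqrt(k_estimate * num_triangles)    # target cube count
        return (volume / c) ** (1.0 / 3.0)           # edge length of one cube

    # Example: 1M triangles in a 100x100x100 box, assuming a ray crosses ~10 cubes.
    print(suggest_cube_size((0, 0, 0), (100, 100, 100), 1_000_000))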

Ray/triangle intersection appears to be a good algorithm when it comes to accuracy. The wiki has some more algorithms; I am linking it here, but you might have seen it already.
Can you perhaps improvise by maintaining a matrix of relationships between the points and the planes whose vertices they form? This subject appears to be a topic of investigation in academia; I am not sure how to access more discussions related to it.
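For reference, one widely used ray/triangle test is the Möller-Trumbore algorithm; the sketch below is a generic illustration of it (not necessarily the formulation in the linked reference) and returns the distance along the ray, or None for a miss.

    # Moller-Trumbore ray/triangle intersection sketch. orig/direc are the ray
    # origin and direction, v0/v1/v2 the triangle vertices (3-element sequences).
    def moller_trumbore(orig, direc, v0, v1, v2, eps=1e-9):
        def sub(a, b): return [a[i] - b[i] for i in range(3)]
        def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                                 a[2]*b[0] - a[0]*b[2],
                                 a[0]*b[1] - a[1]*b[0]]
        def dot(a, b): return sum(a[i] * b[i] for i in range(3))

        e1, e2 = sub(v1, v0), sub(v2, v0)
        p = cross(direc, e2)
        det = dot(e1, p)
        if abs(det) < eps:                 # ray parallel to the triangle plane
            return None
        inv = 1.0 / det
        s = sub(orig, v0)
        u = dot(s, p) * inv
        if u < 0.0 or u > 1.0:
            return None
        q = cross(s, e1)
        v = dot(direc, q) * inv
        if v < 0.0 or u + v > 1.0:
            return None
        t = dot(e2, q) * inv
        return t if t > eps else None      # hit only if it lies in front of the ray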

Related

3D mesh direction detection

I have a 3D mesh consisting of triangle polygons. My mesh can be oriented either left or right.
I'm looking for a method to detect mesh direction: right vs left.
So far I tried to use mesh centroid:
Compare centroid to bounding-box (b-box) center
See if centroid is located left of b-box center
See if centroid is located right of b-box center
But the problem is that the centroid and b-box center don't have a reliable difference in most cases.
I wonder what is a quick algorithm to detect my mesh direction.
Update
An idea proposed by @collapsar is ordering the convex hull points clockwise and investigating the longest edge.
Update
Another approach, as suggested by @YvesDaoust, is to investigate two specific regions of the mesh:
Count the vertices in two predefined regions of the bounding box. This is a fairly simple O(N) procedure.
Unless your dataset is sorted in some way, you can't be faster than O(N). But if the point density allows it, you can subsample by taking, say, every tenth point while applying the procedure.
You can also keep your idea of the centroid, but apply it to a subpart as well.
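A minimal sketch of the region-counting suggestion above, assuming the "two predefined regions" are the outer left and right bands of the bounding box and that the opening is the sparser side; both choices are assumptions to be tuned for the actual shapes.

    # Count vertices in the left and right bands of the bounding box and compare.
    # The 25% band width and the "sparser side is the opening" rule are assumptions.
    def detect_direction(points_2d, band=0.25):
        xs = [p[0] for p in points_2d]
        xmin, xmax = min(xs), max(xs)
        width = xmax - xmin
        left = sum(1 for x in xs if x <= xmin + band * width)
        right = sum(1 for x in xs if x >= xmax - band * width)
        # The side with fewer vertices is treated as the open side.
        return "opens right" if left > right else "opens left"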
The efficiency of an algorithm to solve your problem will depend on the data structures that represent your mesh. You might need to be more specific about them in order to obtain a sufficiently performant procedure.
The algorithms are presented in an informal way. For a more rigorous analysis, math.stackexchange might be a more suitable place to ask (or another contributor here may be more adept at answering).
The algorithms are heuristic by nature. Proposals 1 and 3 will work fine for meshes whose boundary is mostly locally convex (skipping a rigorous mathematical definition here). Proposal 2 should be less dependent on the mesh shape (and can easily be tuned to cater for ill-behaved shapes).
Proposal 1 (Convex Hull, 2D)
Let M be the set of mesh points, projected onto a 'suitable' plane as suggested by the graphics you supplied.
Compute the convex hull CH(M) of M.
Order the n points of CH(M) in clockwise order relative to any point inside CH(M) to obtain a point sequence seq(P) = (p_0, ..., p_(n-1)), with p_0 being an arbitrary element of CH(M). Note that this is usually a by-product of the convex hull computation.
Find the longest edge of the convex polygon implied by CH(M).
Specifically, find k, such that the distance d(p_k, p_((k+1) mod n)) is maximal among all d(p_i, p_((i+1) mod n)); 0 <= i < n;
Consider the vector (p_k, p_((k+1) mod n)).
If the y coordinate of its head is greater than that of its tail (ie. its projection onto the line ((0,0), (0,1)) is oriented upwards) then your mesh opens to the left, otherwise to the right.
Step 3 exploits the condition that the mesh boundary be mostly locally convex. Thus the convex hull polygon sides are basically short, with the exception of the side that spans the opening of the mesh.
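A sketch of Proposal 1 in Python, using a simple monotone-chain convex hull; the final comparison follows step 3 literally, so whether it reads "left" or "right" for a particular mesh depends on the projection conventions.

    # Proposal 1 sketch: convex hull of the projected points, longest hull edge,
    # then the y-comparison from step 3. All names are illustrative.
    def convex_hull(pts):
        pts = sorted(set(map(tuple, pts)))
        if len(pts) <= 2:
            return list(pts)
        def cross(o, a, b):
            return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
        lower, upper = [], []
        for p in pts:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]          # counter-clockwise hull

    def mesh_opens_left(points_2d):
        hull = convex_hull(points_2d)[::-1]     # clockwise order, as in the proposal
        n = len(hull)
        # Find k such that edge (p_k, p_{(k+1) mod n}) is the longest.
        k = max(range(n), key=lambda i: (hull[(i+1) % n][0] - hull[i][0]) ** 2 +
                                        (hull[(i+1) % n][1] - hull[i][1]) ** 2)
        tail, head = hull[k], hull[(k + 1) % n]
        return head[1] > tail[1]                # head above tail => opens to the left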
Proposal 2 (bisector sampling, 2D)
Order the mesh points by their x coordinates into a sequence seq(M).
Split seq(M) into 2 halves; let seq_left(M), seq_right(M) denote the partition elements.
Repeat the following steps for both point sets.
3.1. Select randomly 2 points p_0, p_1 from the point set.
3.2. Find the bisector p_01 of the line segment (p_0, p_1).
3.3. Test whether p_01 lies within the mesh.
3.4. Keep a count on failed tests.
Statistically, the mesh point subset that 'contains' the opening will produce more failures for the same given number of tests run on each partition. Alternative test criteria will work as well, eg. recording the average distance d(p_0, p_1) or the average length of (p_0, p_1) portions outside the mesh (both higher on the mesh point subset with the opening). Cut off repetition of step 3 if the difference of test results between both halves is 'sufficiently pronounced'. For ill-behaved shapes, increase the number of repetitions.
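A sketch of Proposal 2, assuming the projected mesh is available as a list of 2D triangles so that "lies within the mesh" can be tested as "lies inside some triangle"; the sample count is an arbitrary illustrative choice.

    # Proposal 2 sketch: sample segment midpoints in each half and count how many
    # fall outside the mesh; the half with more failures contains the opening.
    import random

    def point_in_triangle(p, a, b, c):
        def sign(p1, p2, p3):
            return (p1[0]-p3[0])*(p2[1]-p3[1]) - (p2[0]-p3[0])*(p1[1]-p3[1])
        d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
        has_neg = d1 < 0 or d2 < 0 or d3 < 0
        has_pos = d1 > 0 or d2 > 0 or d3 > 0
        return not (has_neg and has_pos)

    def failure_count(points, triangles, samples=200):
        fails = 0
        for _ in range(samples):
            p0, p1 = random.sample(points, 2)
            mid = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)   # midpoint of (p0, p1)
            if not any(point_in_triangle(mid, *t) for t in triangles):
                fails += 1
        return fails

    def opening_is_left(points_2d, triangles):
        points = sorted(points_2d)                  # sort by x coordinate
        half = len(points) // 2
        left, right = points[:half], points[half:]
        # The half containing the opening produces more midpoints outside the mesh.
        return failure_count(left, triangles) > failure_count(right, triangles)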
Proposal 3 (Convex Hull, 3D)
For the sake of completeness only, as your problem description suggests that the analysis effectively takes place in 2D.
Similar to Proposal 1, the computations can be performed in 3D. The convex hull of the mesh points then implies a convex polyhedron whose faces should be ordered by area. Select the face with the maximum area and compute its outward-pointing normal which indicates the direction of the opening from the perspective of the b-box center.
The computation gets more complicated if there is much variation in the side lengths of the minimal bounding box of the mesh points, ie. if there is a plane in which most of the variation of the mesh point coordinates occurs. In the graphics you've supplied that would be the plane in which the mesh points are rendered, assuming that their coordinates do not vary much along the axis perpendicular to that plane.
The solution is to identify such a plane and project the mesh points onto it, then resort to proposal 1.

calculating bounded polygon from intersecting linestrings

I am using boost geometry and I am trying to calculate a "bounded" polygon (see image below) from intersecting polylines (linestrings 2d in Boost geometry). Currently, my approach is i) to get all the intersection points between these lines and then ii) "split" each line at the intersection points. However, this algorithm is a little bit exhaustive. Does anyone know if boost geometry has something more efficient for this?
Moreover, how could I get the segment (or vector of points) of each linestring that lies between two intersection points? For example, for the green linestring, if I have the two red intersection points, how can I get the linestring between these two points (a vector of points containing the two red intersection points and the two interior blue points)? Is there any "split"-like functionality in boost geometry?
Any suggestion is much appreciated. Thanks a lot in advance.
From the given description, it seems that the (poly)lines intersect in pairs to form a single loop, so that the inner polygon is well defined. If this is not true, the solution is not unique.
As the number of lines is small, exhaustive search for the pairwise intersections will not be a big sin. For 5 (poly)lines, there are 10 pairs to be tried, while you expect 5 intersections. Forming the loop from the intersections is not a big deal.
What matters most is if Boost Geometry uses an efficient algorithm to intersect polylines. Unsure if this is documented. For moderate numbers of vertices (say below 100), this is not so important.
If the number of points is truly large, you can resort to an efficient line segment intersection algorithm, which lowers the complexity from O(n²+k) down to O((n+k) log n). See https://en.wikipedia.org/wiki/Bentley%E2%80%93Ottmann_algorithm. You can process all polylines in a single go.
If your polylines have specific properties, they can be exploited to ease the task.
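For illustration, the exhaustive pairwise approach is only a few lines in plain Python (Boost.Geometry's intersection() would do the per-pair work on the C++ side); the segment-index bookkeeping here is an illustrative addition that also helps with the "split" step.

    # Brute-force pairwise intersection of two polylines (lists of 2D points).
    # Returns (segment index on line_a, intersection point) pairs.
    def seg_intersection(p, q, r, s, eps=1e-12):
        """Intersection point of segments p-q and r-s, or None."""
        d1 = (q[0] - p[0], q[1] - p[1])
        d2 = (s[0] - r[0], s[1] - r[1])
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < eps:                       # parallel or collinear
            return None
        t = ((r[0] - p[0]) * d2[1] - (r[1] - p[1]) * d2[0]) / denom
        u = ((r[0] - p[0]) * d1[1] - (r[1] - p[1]) * d1[0]) / denom
        if 0 <= t <= 1 and 0 <= u <= 1:
            return (p[0] + t * d1[0], p[1] + t * d1[1])
        return None

    def polyline_intersections(line_a, line_b):
        hits = []
        for i in range(len(line_a) - 1):
            for j in range(len(line_b) - 1):
                hit = seg_intersection(line_a[i], line_a[i + 1],
                                       line_b[j], line_b[j + 1])
                if hit is not None:
                    hits.append((i, hit))          # remember where on line_a it occurred
        return hits

The stored segment index tells you where on line_a each hit occurred, so the piece between two hits is the two intersection points plus the original vertices whose indices lie strictly between them.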

Polygon congruence algorithm

Does anyone know an algorithm which checks for congruence between two sets of polygons? To be more specific, see the figure below.
I'm looking for a way to check whether a given set of colored triangles is congruent to another set, i.e. whether a given set (e.g. the blue triangles) via a number of translations, rotations or reflections can be superimposed on another set (e.g. the red triangles). In the example above, all 3 sets of triangles (blue, red and green) are congruent.
The actual triangle I'm working on is larger than this and has more sets.
I've googled and found this paper, but it concerns 3-D polygons and isn't directly (in my view) implementable.
Any constructive ideas or links would be welcome.
Edit
Just to clarify, each set of triangles must be treated as a whole connected figure, i.e. each triangle in the set is fixed in its position relative to the other triangles in the set.
Also, I only need an algorithm which could determine whether one set of triangles is congruent to another set, but with a much larger triangle than the one above and with many more sets. Imagine a triangle with side length N and a total of N^2 smaller triangles, divided into N differently colored sets of N triangles.
A combination of rotations and reflections can be represented by a rotation and at most one reflection, so you can ignore reflections if you run a rotation-only algorithm twice, once with the original figure and once with a reflected figure.
The centre of gravity of the triangles (or, easier, the centre of gravity of a figure which has mass only at the vertices of the triangles) is not affected by rotation, so I would start by computing the centre of gravity of each figure. Now represent the figure by a list giving the direction and distance of each point in the figure from its centre of gravity.
If the sets of distances are different, the figures cannot be rotations of each other, and I guess most non-congruencies will be spotted at this stage. For total cost N^2 you can consider rotating a vertex in one figure to each possible vertex of the other figure, then applying this calculated rotation to all of the other vertices and seeing if they match up. Possibly some version of https://en.wikipedia.org/wiki/Lexicographically_minimal_string_rotation could be used to speed this up. It may help to represent directions by the angle between the directions to vertices after sorting them into order.
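A rough sketch of that test, assuming each figure is given as a list of 2D vertices listed consistently across figures; the tolerances and the rounding used for the multiset comparison are illustrative, not production-grade.

    # Congruence-by-rotation sketch: work relative to the centre of gravity, use
    # complex numbers for rotations, reject early on mismatched distance multisets,
    # then try mapping one anchor vertex onto every candidate vertex (O(N^2)).
    def normalize(points):
        """Return points as complex numbers relative to their centre of gravity."""
        zs = [complex(x, y) for x, y in points]
        cg = sum(zs) / len(zs)
        return [z - cg for z in zs]

    def congruent_by_rotation(fig_a, fig_b, tol=1e-6):
        a, b = normalize(fig_a), normalize(fig_b)
        if len(a) != len(b):
            return False
        # Quick reject: the multisets of distances from the centre must match.
        da, db = sorted(abs(z) for z in a), sorted(abs(z) for z in b)
        if any(abs(x - y) > tol for x, y in zip(da, db)):
            return False
        anchor = max(a, key=abs)                  # farthest point from the centre
        if abs(anchor) < tol:
            return True                           # every point sits at the centre
        ref = sorted((round(z.real, 4), round(z.imag, 4)) for z in b)
        for target in b:
            if abs(abs(target) - abs(anchor)) > tol:
                continue                          # moduli differ; anchor cannot map here
            rot = target / anchor                 # candidate rotation (unit modulus)
            rotated = sorted((round((rot * z).real, 4), round((rot * z).imag, 4)) for z in a)
            if rotated == ref:
                return True
        return False

    def congruent(fig_a, fig_b):
        mirrored = [(-x, y) for (x, y) in fig_b]  # one reflection covers all reflections
        return (congruent_by_rotation(fig_a, fig_b) or
                congruent_by_rotation(fig_a, mirrored))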

How to compute the intersection between a convex polyhedron and another polyhedron?

The problem at hand is part of a scientific simulation concerned with 2D growth within 3D space. The 2D shape grows by adding (triangular) segments to the previously grown shape.
Note that the actual segments in 3D have a thickness, thus, my code actually works with triangular prisms.
At one point, these 2D shapes (with whatever relative orientation and position) collide.
If one of the new triangular prisms intersects with previously inserted segments, I only want to insert the "part" of the segment which does not intersect with the previously inserted segments, as shown below for the segments labelled T1 and T2.
In the first step, I calculated all edge-to-face intersections. I then used the CGAL Delaunay Triangulation package in 3D to mesh the resulting point set into a tetrahedral mesh. As a last step, I throw away all those tetrahedra which intersect with previously inserted segments.
This works beautifully in most cases - but I am convinced by now that the idea cannot work for fundamental reasons.
What's a more reliable way to compute this?
So you're looking to find the intersection of 2 convex hulls?
I think you're correct that your approach won't work in all cases, though it fails only in certain degenerate cases. For example, if one convex hull is entirely inside the other, then there are no face/edge intersections.
A more obviously robust approach is to start off with a convex hull big enough to enclose both your hulls (so basically representing all of space), and then for each plane in both your convex hulls partition this hull by that plane, throwing away the part on the "outside".
The result will be the intersection - which may be nothing.
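The same plane-clipping idea is easiest to show in 2D; the sketch below clips a huge box against the inward half-planes of two convex (counter-clockwise) polygons, Sutherland-Hodgman style. A 3D version clips a polyhedron's faces against planes in the same spirit, just with more bookkeeping; all names here are illustrative.

    # 2D analogue of the suggested approach: start from a box "big enough to
    # enclose everything" and clip it by every half-plane of both convex shapes.
    def halfplanes(convex_ccw):
        """Inward half-planes (a, b, c), meaning a*x + b*y + c >= 0, for a CCW polygon."""
        hp = []
        n = len(convex_ccw)
        for i in range(n):
            (x0, y0), (x1, y1) = convex_ccw[i], convex_ccw[(i + 1) % n]
            a, b = -(y1 - y0), (x1 - x0)           # left (inward) normal of the edge
            hp.append((a, b, -(a * x0 + b * y0)))
        return hp

    def clip_by_halfplane(poly, a, b, c):
        """Keep the part of the convex polygon where a*x + b*y + c >= 0."""
        out = []
        n = len(poly)
        for i in range(n):
            p, q = poly[i], poly[(i + 1) % n]
            fp = a * p[0] + b * p[1] + c
            fq = a * q[0] + b * q[1] + c
            if fp >= 0:
                out.append(p)
            if (fp >= 0) != (fq >= 0):             # the edge crosses the clipping line
                t = fp / (fp - fq)
                out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
        return out

    def intersect_convex(poly_a_ccw, poly_b_ccw):
        """Clip a very large box by every half-plane of both polygons."""
        result = [(-1e9, -1e9), (1e9, -1e9), (1e9, 1e9), (-1e9, 1e9)]
        for hp in halfplanes(poly_a_ccw) + halfplanes(poly_b_ccw):
            result = clip_by_halfplane(result, *hp)
            if not result:
                break                              # the intersection is empty
        return result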

Finding the polygon in a 2D mesh which contains a point

I have a 3D polygon mesh and a corresponding 2D polygon mesh (actually from a UV map) which I'm using to map the geometry onto a 2D plane. Given a point on the plane, how can I efficiently find the polygon on which it's resting in order to map that 2D point back into 3D?
The best approach I can think of is to store the polygons in a 2D interval tree, and use that to get candidate polygons. Is there a simpler approach?
To clarify, this is not for a shader. I'm actually taking a 2D physical simulation and rendering it wrapped around a 3D mesh. For drawing each object, I need to figure out what point in 3D corresponds to its real 2D position.
One approach I've seen for triangle meshes goes as follows: choose a triangle, and imagine that each of the sides defines a half space. For a given edge, the half space boundary is the line containing the edge, and the half space does not contain the triangle. Choose an edge whose corresponding half space contains your target point. Then select the triangle on the other side of edge, and repeat the process.
Using this method, you will eventually end up at the triangle that contains your target point.
This method is arguably simpler than implementing a 2D interval tree, although the search is less efficient (if n is the number of triangles, it is O(√n) rather than O(log n)). Also, it should work for a polygon mesh, as long as the polygons are convex.
So, if I were trying to just get the thing implemented, I'd probably start with a global search of all triangles - compute the barycentric coordinates of that 2d point for each triangle, find the triangle where the barycentric coordinates are all positive, and then use those to map to 3d (multiply the stu position by the 3d points). I'd do this first, and only if it's not fast enough would I try something more complex.
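A sketch of that global search, assuming the 2D (UV) triangles and the 3D triangles are stored as corresponding vertex triples; names are illustrative.

    # Global barycentric search: find the UV triangle containing p, then use the
    # same weights to interpolate the matching 3D triangle.
    def barycentric(p, a, b, c, eps=1e-12):
        """Barycentric coordinates of 2D point p in triangle (a, b, c), or None."""
        d = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
        if abs(d) < eps:                            # degenerate (zero-area) UV triangle
            return None
        u = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / d
        v = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / d
        return u, v, 1.0 - u - v

    def map_2d_to_3d(p, triangles_2d, triangles_3d):
        """triangles_2d[i] and triangles_3d[i] are corresponding vertex triples."""
        for tri2, tri3 in zip(triangles_2d, triangles_3d):
            bc = barycentric(p, *tri2)
            if bc is None:
                continue
            u, v, w = bc
            if u >= 0 and v >= 0 and w >= 0:        # p lies inside this UV triangle
                return tuple(u * x0 + v * x1 + w * x2
                             for x0, x1, x2 in zip(*tri3))
        return None                                 # p falls outside the UV mesh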
If it's possible to iterate by triangle rather than by 2d points, then the barycentric method would probably be fast enough. But it seems like you've got a bunch of 2d points at arbitrary positions that need to be mapped, and the points change position from frame to frame?
If you've got this kind of situation, you could probably get a big speedup by implementing a local update per frame. Each 2d point would remember which triangle it was within. Set that as the current triangle. Test if the new position is within the current triangle. If not, then you want to walk the mesh to the adjacent triangle which is closest to the target 2d point. Each edge-adjacent triangle is composed of the two common points on the edge, plus another point. Find which edge-adjacent triangle's other point is closest to the target, and set that as current. Then iterate - seems like it should find it pretty quickly? You could also cache a max size for each triangle, so if the point has moved a lot you can just iterate to the next neighbor without doing the barycentric computation (the max size would need to be the distance such that if you are farther than that distance from any triangle point there is no chance you're inside the triangle. This is the length of the largest edge).
But as you mention in your comments, you can run into problems with meshes that have concavities, holes, or separate connected components, where you may fall into a local minimum. There are a couple of ways to deal with this. I think the simplest is to keep a list of all visited triangles (maybe as a flag on the triangle, vector< bool > or set< triangle index >) and refuse to revisit a triangle. If you find that you've visited all the neighbors of your current triangle, then fall back to a global search. Such failures are likely to be uncommon, so it shouldn't hurt your performance too much.
This kind of per-frame updating can be very fast, and might even be a decent approach for computing the initial containing triangles - just choose a random triangle and walk from there (changes from checking all n triangles to only those that are in roughly a straight line to the target). If it's not fast enough, what you could do is keep a k-d tree (or something similar) of the 2d mesh points as well as a single touching triangle index for each mesh point. To seed the iteration, find the closest point to the target 2d point in the k-d tree, set the adjacent triangle to be current, and then iterate.
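A sketch of the walking update, assuming a precomputed map from each triangle index to its edge-adjacent neighbours. Stepping to the neighbour whose centroid is closest to the target is a minor variation of the "other point closest to the target" rule described above, and the visited-set fallback follows the same answer; all names are illustrative.

    # Local walk: start from the last known triangle and step towards the target
    # 2D point, refusing to revisit triangles; return None to trigger a global search.
    def point_in_tri(p, a, b, c):
        def s(p1, p2, p3):
            return (p1[0]-p3[0])*(p2[1]-p3[1]) - (p2[0]-p3[0])*(p1[1]-p3[1])
        d1, d2, d3 = s(p, a, b), s(p, b, c), s(p, c, a)
        return not ((d1 < 0 or d2 < 0 or d3 < 0) and (d1 > 0 or d2 > 0 or d3 > 0))

    def walk_to_containing_triangle(p, start_tri, triangles_2d, neighbours):
        """triangles_2d: index -> 3 points; neighbours: index -> edge-adjacent indices."""
        current, visited = start_tri, {start_tri}
        while True:
            if point_in_tri(p, *triangles_2d[current]):
                return current                       # found the containing triangle
            # Otherwise step to the unvisited neighbour closest to the target point.
            best, best_d = None, float("inf")
            for nb in neighbours[current]:
                if nb in visited:
                    continue
                cx = sum(v[0] for v in triangles_2d[nb]) / 3.0
                cy = sum(v[1] for v in triangles_2d[nb]) / 3.0
                d = (cx - p[0]) ** 2 + (cy - p[1]) ** 2
                if d < best_d:
                    best, best_d = nb, d
            if best is None:
                return None                          # dead end: fall back to a global search
            visited.add(best)
            current = best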
