Mapping points of a solid box to a tetrahedral meshed box - algorithm

Given a 3D solid box with points in it, and a box meshed with tetrahedra. The dimensions of both boxes are the same.
I need to find an algorithm, that maps points of the solid to respective tetrahedra in the mesh.
I used the following algorithm:
Refine the solid with an octree.
Iterate over the tetrahedra in the mesh and check whether each one intersects a branch or a leaf of the octree (Ratschek & Rockne's algorithm).
If it intersects, map the points from the octree to the tetrahedron.
But the algorithm is very slow; moreover, I have serious trouble checking the intersection between a box and a tetrahedron.
I could still stick with an octree, but I definitely need something reasonable to check the intersections. Any comment will be highly appreciated.
UPDATE: I have 2 million solid points and 200k tetrahedra
UPDATE 2: I am trying to implement Walking in a triangulation

One standard simplification would be to first compute approximate octree-tetrahedron intersections using axis-aligned bounding boxes. The resulting intersection tests are then very simple.
Then, once you've traversed to the leaf level of the tree, you can use an exact test to determine which points are contained within a given tetrahedron.
To summarise:
Form an octree T for your points X
for (all tetrahedrons ti in mesh M)
    Form a minimal axis-aligned bounding-box Bi for tetrahedron ti
    Traverse T from the root, accumulating a list Li of all leaf nodes that overlap with box Bi
    for (all leaf nodes li in list Li)
        for (all points pi in leaf node li)
            if (point pi is inside tetrahedron ti /* exact test */)
                Associate point pi with tetrahedron ti
            endif
        endfor
    endfor
endfor
This algorithm is efficient if: (i) the points X are well distributed within the mesh M, and (ii) the tetrahedrons in M have reasonable aspect ratios.
The key to achieving good performance is to ensure that the tree-traversal step is implemented as efficiently as possible.
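For concreteness, the pruning test during the tree traversal is just an axis-aligned box overlap check; a minimal C++ sketch (the AABB struct and function names here are illustrative, not from any particular library):

    #include <algorithm>

    struct AABB { double lo[3], hi[3]; };   // min/max corner of an axis-aligned box

    // Two axis-aligned boxes overlap iff their extents overlap on every axis.
    bool boxes_overlap(const AABB& a, const AABB& b) {
        for (int k = 0; k < 3; ++k)
            if (a.hi[k] < b.lo[k] || b.hi[k] < a.lo[k]) return false;
        return true;
    }

    // Minimal axis-aligned bounding box Bi of a tetrahedron with vertices v[0..3].
    AABB tet_bbox(const double v[4][3]) {
        AABB b;
        for (int k = 0; k < 3; ++k) {
            b.lo[k] = b.hi[k] = v[0][k];
            for (int i = 1; i < 4; ++i) {
                b.lo[k] = std::min(b.lo[k], v[i][k]);
                b.hi[k] = std::max(b.hi[k], v[i][k]);
            }
        }
        return b;
    }

The traversal then descends into an octree node only when boxes_overlap(node_box, Bi) is true, which is far cheaper than an exact box-tetrahedron test.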
A point-in-tetrahedron test can be done by checking whether a given point pi lies on the "inner" side of the 4 faces of a tetrahedron. Given a tetrahedron [i,j,k,l], the point pi is on the "inner" side of the face [i,j,k] if it lies on the same side of the plane [i,j,k] as the "opposing" vertex l.
These orientation tests can be performed robustly using adaptive precision arithmetic. Jonathan Shewchuk offers such an implementation here.
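A plain floating-point version of that test looks as follows; this sketch is mine and is not robust near degenerate configurations, where you would substitute Shewchuk's adaptive orient3d predicate for the determinant below:

    // Sign of the determinant of the edge vectors (b-a, c-a, d-a):
    // this is six times the signed volume of tetrahedron (a, b, c, d),
    // and tells on which side of plane (a, b, c) the point d lies.
    double orient3d(const double* a, const double* b, const double* c, const double* d) {
        double m[3][3];
        for (int k = 0; k < 3; ++k) {
            m[0][k] = b[k] - a[k];
            m[1][k] = c[k] - a[k];
            m[2][k] = d[k] - a[k];
        }
        return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    }

    // p lies inside (or on the boundary of) tetrahedron (a, b, c, d) iff, for each
    // face, p is on the same side as the opposing vertex, i.e. each determinant
    // below carries the same sign as the reference orientation d0.
    bool point_in_tet(const double* p, const double* a, const double* b,
                      const double* c, const double* d) {
        const double d0 = orient3d(a, b, c, d);
        const double d1 = orient3d(p, b, c, d);
        const double d2 = orient3d(a, p, c, d);
        const double d3 = orient3d(a, b, p, d);
        const double d4 = orient3d(a, b, c, p);
        auto same_side = [&](double s) { return s == 0.0 || (s > 0.0) == (d0 > 0.0); };
        return same_side(d1) && same_side(d2) && same_side(d3) && same_side(d4);
    }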

Assuming you know the vertices of the tetrahedrons, you can check whether a point is inside a tetrahedron by checking whether it lies to the left or to the right of each of its face planes; say "left" is the side the normal points along.
The math for determining whether a point is to the left or right of a plane is straightforward.
There's another method I found, but it looks like a variant of my answer.
Of course, if a point is inside a tet, it gets mapped to that tet. This method can be implemented as a vertex shader or as an OpenCL/CUDA kernel, which makes it highly parallel.

Related

Efficient line segment-triangle intersection

I have a collection of triangles that compose an arbitrary geometry (read from an OFF file). Each triangle is defined by three vertexes, which are 3D points. I have an observation point, and I want to remove from the object all vertexes that are not visible, where a vertex is visible when the line segment joining it to the observation point does not intersect any triangle.
It is important to note that the number of vertexes and triangles in a single object is on the order of 10^4. A simple approach would be to follow the accepted answer of [1], which uses signed volumes. I have implemented it, but it involves two nested for loops: for each vertex, I need to check whether the line to the observation point intersects ANY of the triangles. Of course, I break the loop as soon as it intersects one, but this still leads to 192M calls to the signed-volume function.
Some efficient algorithms [2] assume an infinite line rather than a line segment, which leads to deleting a vertex even when the intersection is with a farther-away triangle.
Any ideas on how to solve this efficiently?
[1] Intersection between line and triangle in 3D
[2] Möller and Trumbore, "Fast, Minimum Storage Ray-Triangle Intersection", Journal of Graphics Tools, vol. 2, 1997, pp. 21–28
Your problem statement is literally a ray-tracing problem: find whether a bunch of points are occluded by the geometry or not. Therefore you can apply any ray-tracing tricks to make it fast, i.e. build some spatial index over your model and trace the rays using that spatial index. Some of the most common spatial indexes are BVHs, BSP trees, and kd-trees.
If an approximate solution is good enough, you can rasterize the geometry from the vantage point and then test all your vertices against the generated Z-buffer.
Notice that either of these methods will filter the vertices (which is what you asked for). However, some triangles may be visible even if all their vertices are occluded. Those triangles will be removed too, even though they are visible. The rasterization method is easy to modify to filter triangles instead of vertices.
There are two questions in one:
efficiency
segment-triangle intersection instead of ray-triangle intersection
efficiency
You can use a Bounding Volume Hierarchy (BVH)
The idea is to construct a hierarchy of boxes. Each box contains either other boxes or a bunch of triangles. The boxes are organized as a tree, and the root of the tree corresponds to a box that contains everything.
The function that computes an intersection works as follows:
Compute the intersection between the ray and the box. If there is an intersection, open the box and recursively compute the intersections between the ray and what's in the box.
In more formal terms:
Intersect_Ray_Node(Ray, node)
    if (Intersect_Ray_Box(Ray, node.box)) {
        if (node.is_leaf) {
            for each triangle t in node
                Intersect_Ray_Triangle(Ray, t)
        } else {
            for each child in children(node)
                Intersect_Ray_Node(Ray, child)
        }
    }
Now, the "boxes" can be different things. The simplest thing to use is an AABB (Axis Aligned Bounding Box), that is, simply the minimum and maximum values of the coordinates of what's in the box.
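The corresponding Intersect_Ray_Box test is usually done with the "slab" method; a minimal sketch, assuming the ray is given by an origin and a direction (these struct and function names are mine, not geogram's):

    #include <algorithm>

    struct Ray  { double orig[3], dir[3]; };   // dir need not be normalized
    struct AABB { double lo[3], hi[3]; };

    // Slab test: clip the ray parameter t against the three pairs of axis-aligned
    // planes and check that a non-empty interval [tmin, tmax] remains.
    bool intersect_ray_box(const Ray& r, const AABB& b,
                           double tmin = 0.0, double tmax = 1e30) {
        for (int k = 0; k < 3; ++k) {
            const double inv = 1.0 / r.dir[k];   // +/- infinity for axis-parallel rays is fine
            double t0 = (b.lo[k] - r.orig[k]) * inv;
            double t1 = (b.hi[k] - r.orig[k]) * inv;
            if (inv < 0.0) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
            if (tmax < tmin) return false;
        }
        return true;
    }

For segment queries, pass tmax = 1.0 so that only boxes hit within the segment's parameter range are kept.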
There are several ways of implementing the tree. The most efficient implementations use no data structure and simply reorder the facets in the mesh, in such a way that facets that are in the same node have contiguous indices. Then you just need a couple of additional arrays: one that stores the bounds of the box for each node, and another one that indicates for each node how many triangles there are in the left child (the number of triangles in the right child is then implicit).
An implementation is available here, in my geogram library. Constructing the AABB tree is just a matter of sorting the triangles geometrically here and constructing the bounding boxes recursively.
I also made a ShaderToy implementation here (a big comment at the end of BufferA contains the C++ source of my "MeshCompiler" that creates the AABB tree).
Note that if you want to be even more compact in memory, Pierre Terdiman proposed the Zero Byte AABB-tree (see here and here). The idea is to encode everything in the order of the triangles and the order of the vertices, based on the interesting remark that the bounds of the boxes are always coordinates of existing points in the mesh.
line segment versus ray
If what you want to intersect is a segment (with two endpoints) instead of a ray (a line that starts from a point and extends towards infinity in a direction), you can do that with a simple modification of my answer here. Replace the last line of the intersect_triangle() function with:
return (det >= 1e-6 && t >= 0.0 && t <= 1.0 && u >= 0.0 && v >= 0.0 && (u+v) <= 1.0);
(this additionally tests that t is between 0 and 1, which means that the intersection point lies within the segment)
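For reference, a self-contained segment-triangle test in that spirit; this is my own sketch of the Möller-Trumbore formulation with the extra t <= 1.0 clamp, not the exact code from the linked answer:

    #include <array>
    #include <cmath>

    using Vec3 = std::array<double, 3>;

    static Vec3   sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
    static double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    static Vec3   cross(const Vec3& a, const Vec3& b) {
        return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
    }

    // Moller-Trumbore restricted to the segment [p0, p1]: solve
    // p0 + t*(p1 - p0) = (1-u-v)*v0 + u*v1 + v*v2 and require
    // 0 <= t <= 1 with (u, v) inside the triangle.
    bool segment_intersects_triangle(const Vec3& p0, const Vec3& p1,
                                     const Vec3& v0, const Vec3& v1, const Vec3& v2) {
        const Vec3 dir  = sub(p1, p0);
        const Vec3 e1   = sub(v1, v0);
        const Vec3 e2   = sub(v2, v0);
        const Vec3 pvec = cross(dir, e2);
        const double det = dot(e1, pvec);
        if (std::fabs(det) < 1e-12) return false;    // segment (nearly) parallel to triangle
        const double inv = 1.0 / det;
        const Vec3 tvec = sub(p0, v0);
        const double u = dot(tvec, pvec) * inv;
        if (u < 0.0 || u > 1.0) return false;
        const Vec3 qvec = cross(tvec, e1);
        const double v = dot(dir, qvec) * inv;
        if (v < 0.0 || u + v > 1.0) return false;
        const double t = dot(e2, qvec) * inv;
        return t >= 0.0 && t <= 1.0;                 // hit lies within the segment
    }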

How to compute the set of polygons from a set of overlapping circles?

This question is an extension of some computational details of this question.
Suppose one has a set of (potentially overlapping) circles, and one wishes to compute the area this set of circles covers. (For simplicity, one can assume some precomputation steps have been made, such as getting rid of circles included entirely in other circles, as well as that the circles induce one connected component.)
One way to do this is mentioned in Ants Aasma's and Timothy Shields' answers: the area covered by overlapping circles is just a collection of circle slices and polygons, both of which have areas that are easy to compute.
The trouble I'm encountering however is the computation of these polygons. The nodes of the polygons (consisting of circle centers and "outer" intersection points) are easy enough to compute:
At first I thought a simple algorithm of picking a random node and visiting neighbors in clockwise order would be sufficient, but this can result in the following "outer" polygon being constructed, which is not one of the correct polygons.
So I thought of different approaches, such as a breadth-first search to compute minimal cycles, but I think the previous counterexample can easily be modified so that this approach results in the "inner" polygon containing the hole (which is thus not a correct polygon).
I was thinking of maybe running a Las Vegas style algorithm, taking random points and if said point is in an intersection of circles, try to compute the corresponding polygon. If such a polygon exists, remove circle centers and intersection points composing said polygon. Repeat until no circle centers or intersection points remain.
This would avoid ending up computing the "outer" polygon or the "inner" polygon, but would introduce new problems (outside of the potentially high running time) e.g. more than 2 circles intersecting in a single intersection point could remove said intersection point when computing one polygon, but would be necessary still for the next.
Ultimately, my question is: How to compute such polygons?
PS: As a bonus question for after having computed the polygons, how to know which angle to consider when computing the area of some circle slice, between theta and 2PI - theta?
Once we have the points of the polygons in the right order, computing the area is not too difficult.
The way to achieve that is by exploiting planar duality. See the Wikipedia article on the doubly connected edge list representation for diagrams, but the gist is, given an oriented edge whose right face is inside a polygon, the next oriented edge in that polygon is the reverse direction of the previous oriented edge with the same head in clockwise order.
Hence we've reduced the problem to finding the oriented edges of the polygonal union and determining the correct order with respect to each head. We actually solve the latter problem first. Each intersection of disks gives rise to a quadrilateral. Let's call the centers C and D and the intersections A and B. Assume without loss of generality that the disk centered at C is not smaller than the disk centered at D. The interior angle formed by A→C←B is less than 180 degrees, so the signed area of that triangle is negative if and only if A→C precedes B→C in clockwise order around C, in turn if and only if B→D precedes A→D in clockwise order around D.
Now we determine which edges are actually polygon boundaries. For a particular disk, we have a bunch of angle intervals around its center from before (each sweeping out the clockwise sector from the first endpoint to the second). What we need amounts to a more complicated version of the common interview question of computing the union of segments. The usual sweep line algorithm that increases the cover count whenever it scans an opening endpoint and decreases the cover count whenever it scans a closing endpoint can be made to work here, with the adjustment that we need to initialize the count not to 0 but to the proper cover count of the starting angle.
There's a way to do all of this with no trigonometry, just subtraction and determinants and comparisons.
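That determinant is the standard 2D orientation test; a minimal sketch, assuming plain double precision is acceptable for your data (use a robust predicate otherwise):

    // Cross product (b - a) x (c - a), i.e. twice the signed area of triangle (a, b, c):
    //   > 0  =>  a, b, c make a counter-clockwise turn
    //   < 0  =>  a, b, c make a clockwise turn
    //   = 0  =>  the three points are collinear
    double orient2d(double ax, double ay, double bx, double by, double cx, double cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

Both the clockwise comparisons around each center and the signed-area test above reduce to the sign of this expression.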

3D mesh direction detection

I have a 3D mesh consisting of triangle polygons. My mesh can be either oriented left or right:
I'm looking for a method to detect mesh direction: right vs left.
So far I tried to use mesh centroid:
Compare centroid to bounding-box (b-box) center
See if centroid is located left of b-box center
See if centroid is located right of b-box center
But the problem is that the centroid and the b-box center don't show a reliable difference in most cases.
I wonder what is a quick algorithm to detect my mesh direction.
Update
An idea proposed by @collapsar is to order the Convex Hull points clockwise and investigate the longest edge:
Update 2
Another approach, suggested by @YvesDaoust, is to investigate two specific regions of the mesh:
Count the vertices in two predefined regions of the bounding box. This is a fairly simple O(N) procedure.
Unless your dataset is sorted in some way, you can't be faster than O(N). But if the point density allows it, you can subsample by taking, say, every tenth point while applying the procedure.
You can also keep your idea of the centroid, but apply it to a subpart as well.
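A minimal sketch of the counting idea, assuming the opening shows up as a vertex-density difference between, say, the left and right quarters of the bounding box (the choice of regions is mine and should be tuned to your meshes):

    #include <algorithm>
    #include <vector>

    struct P3 { double x, y, z; };

    // Returns +1 if the left quarter of the x-range holds more vertices than
    // the right quarter, -1 for the opposite, 0 if the counts tie.
    int denser_side(const std::vector<P3>& pts) {
        double xmin = pts[0].x, xmax = pts[0].x;
        for (const P3& p : pts) { xmin = std::min(xmin, p.x); xmax = std::max(xmax, p.x); }
        const double q = (xmax - xmin) / 4.0;
        long left = 0, right = 0;
        for (const P3& p : pts) {
            if      (p.x < xmin + q) ++left;
            else if (p.x > xmax - q) ++right;
        }
        if (left == right) return 0;
        return left > right ? +1 : -1;
    }

Whether the denser side is the closed side or the open side depends on how your meshes are built, so calibrate the interpretation on a few known examples.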
The efficiency of an algorithm to solve your problem will depend on the data structures that represent your mesh. You might need to be more specific about them in order to obtain a sufficiently performant procedure.
The algorithms are presented in an informal way. For a more rigorous analysis, math.stackexchange might be a more suitable place to ask (or another contributor may be better placed to answer ...).
The algorithms are heuristic by nature. Proposals 1 and 3 will work fine for meshes whose boundary is mostly locally convex (skipping a rigorous mathematical definition here). Proposal 2 should be less dependent on the mesh shape (and can easily be tuned to cater for ill-behaved shapes).
Proposal 1 (Convex Hull, 2D)
Let M be the set of mesh points, projected onto a 'suitable' plane as suggested by the graphics you supplied.
Compute the convex hull CH(M) of M.
Order the n points of CH(M) in clockwise order relative to any point inside CH(M) to obtain a point sequence seq(P) = (p_0, ..., p_(n-1)), with p_0 being an arbitrary element of CH(M). Note that this is usually a by-product of the convex hull computation.
Find the longest edge of the convex polygon implied by CH(M).
Specifically, find k, such that the distance d(p_k, p_((k+1) mod n)) is maximal among all d(p_i, p_((i+1) mod n)); 0 <= i < n;
Consider the vector (p_k, p_((k+1) mod n)).
If the y coordinate of its head is greater than that of its tail (ie. its projection onto the line ((0,0), (0,1)) is oriented upwards) then your mesh opens to the left, otherwise to the right.
Step 3 exploits the condition that the mesh boundary be mostly locally convex. Thus the convex hull polygon sides are basically short, with the exception of the side that spans the opening of the mesh.
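A sketch of steps 3 and 4, assuming the projected points are 2D and the hull is already ordered clockwise (the types and names are mine):

    #include <vector>

    struct P2 { double x, y; };

    // Given the convex hull as a clockwise-ordered point sequence, find the
    // longest edge and report the opening direction from its orientation:
    // an upward-pointing edge means the mesh opens to the left.
    bool opens_to_the_left(const std::vector<P2>& hull) {
        const size_t n = hull.size();
        size_t best = 0;
        double best_len2 = -1.0;
        for (size_t i = 0; i < n; ++i) {
            const P2& a = hull[i];
            const P2& b = hull[(i + 1) % n];
            const double len2 = (b.x - a.x) * (b.x - a.x) + (b.y - a.y) * (b.y - a.y);
            if (len2 > best_len2) { best_len2 = len2; best = i; }
        }
        const P2& tail = hull[best];
        const P2& head = hull[(best + 1) % n];
        return head.y > tail.y;   // edge oriented upwards => opening on the left
    }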
Proposal 2 (bisector sampling, 2D)
Order the mesh points by their x coordinates into a sequence seq(M).
Split seq(M) into 2 halves; let seq_left(M), seq_right(M) denote the partition elements.
Repeat the following steps for both point sets.
3.1. Select 2 points p_0, p_1 at random from the point set.
3.2. Find the midpoint p_01 of the line segment (p_0, p_1).
3.3. Test whether p_01 lies within the mesh.
3.4. Keep a count on failed tests.
Statistically, the mesh point subset that 'contains' the opening will produce more failures for the same number of tests run on each partition. Alternative test criteria will work as well, e.g. recording the average distance d(p_0, p_1) or the average length of the portions of (p_0, p_1) outside the mesh (both are higher on the mesh point subset with the opening). Cut off the repetition of step 3 once the difference in test results between the two halves is sufficiently pronounced. For ill-behaved shapes, increase the number of repetitions.
Proposal 3 (Convex Hull, 3D)
For the sake of completeness only, as your problem description suggests that the analysis effectively takes place in 2D.
Similar to Proposal 1, the computations can be performed in 3D. The convex hull of the mesh points then implies a convex polyhedron whose faces should be ordered by area. Select the face with the maximum area and compute its outward-pointing normal which indicates the direction of the opening from the perspective of the b-box center.
The computation gets more complicated if there is much variation in the side lengths of the minimal bounding box of the mesh points, i.e. if there is a plane in which most of the variation of the mesh point coordinates occurs. In the graphics you've supplied, that would be the plane in which the mesh points are rendered, assuming that their coordinates do not vary much along the axis perpendicular to that plane.
The solution is to identify such a plane and project the mesh points onto it, then resort to proposal 1.

Finding the polygon in a 2D mesh which contains a point

I have a 3D polygon mesh and a corresponding 2D polygon mesh (actually from a UV map) which I'm using to map the geometry onto a 2D plane. Given a point on the plane, how can I efficiently find the polygon on which it's resting in order to map that 2D point back into 3D?
The best approach I can think of is to store the polygons in a 2D interval tree, and use that to get candidate polygons. Is there a simpler approach?
To clarify, this is not for a shader. I'm actually taking a 2D physical simulation and rendering it wrapped around a 3D mesh. For drawing each object, I need to figure out what point in 3D corresponds to its real 2D position.
One approach I've seen for triangle meshes goes as follows: choose a triangle, and imagine that each of the sides defines a half space. For a given edge, the half space boundary is the line containing the edge, and the half space does not contain the triangle. Choose an edge whose corresponding half space contains your target point. Then select the triangle on the other side of edge, and repeat the process.
Using this method, you will eventually end up at the triangle that contains your target point.
This method is arguably simpler than implementing a 2D interval tree, although the search is less efficient (if n is the number of triangles, it is O(√n) rather than O(log n)). Also, it should work for a polygon mesh as long as the polygons are convex.
So, if I were trying to just get the thing implemented, I'd probably start with a global search of all triangles - compute the barycentric coordinates of that 2d point for each triangle, find the triangle where the barycentric coordinates are all positive, and then use those to map to 3d (multiply the stu position by the 3d points). I'd do this first, and only if it's not fast enough would I try something more complex.
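A sketch of that global search in 2D, with the barycentric test written out (the types and the linear scan are illustrative):

    #include <vector>

    struct P2  { double x, y; };
    struct Tri { P2 a, b, c; };   // triangle in the 2D (UV) mesh

    // Barycentric coordinates (s, t, u) of p in triangle tr, with s + t + u = 1.
    // p is inside (or on an edge of) tr iff all three are non-negative.
    bool barycentric(const Tri& tr, const P2& p, double& s, double& t, double& u) {
        const double d = (tr.b.y - tr.c.y) * (tr.a.x - tr.c.x)
                       + (tr.c.x - tr.b.x) * (tr.a.y - tr.c.y);
        if (d == 0.0) return false;   // degenerate triangle
        s = ((tr.b.y - tr.c.y) * (p.x - tr.c.x) + (tr.c.x - tr.b.x) * (p.y - tr.c.y)) / d;
        t = ((tr.c.y - tr.a.y) * (p.x - tr.c.x) + (tr.a.x - tr.c.x) * (p.y - tr.c.y)) / d;
        u = 1.0 - s - t;
        return s >= 0.0 && t >= 0.0 && u >= 0.0;
    }

    // Global search: index of the containing triangle, or -1 if none contains p.
    // The same (s, t, u) weights then map p onto the corresponding 3D triangle.
    int find_containing_triangle(const std::vector<Tri>& tris, const P2& p,
                                 double& s, double& t, double& u) {
        for (size_t i = 0; i < tris.size(); ++i)
            if (barycentric(tris[i], p, s, t, u)) return static_cast<int>(i);
        return -1;
    }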
If it's possible to iterate by triangle rather than by 2d points, then the barycentric method would probably be fast enough. But it seems like you've got a bunch of 2d points at arbitrary positions that need to be mapped, and the points change position from frame to frame?
If you've got this kind of situation, you could probably get a big speedup by implementing a local update per frame. Each 2d point would remember which triangle it was within. Set that as the current triangle. Test if the new position is within the current triangle. If not, then you want to walk the mesh to the adjacent triangle which is closest to the target 2d point. Each edge-adjacent triangle is composed of the two common points on the edge, plus another point. Find which edge-adjacent triangle's other point is closest to the target, and set that as current. Then iterate - seems like it should find it pretty quickly? You could also cache a max size for each triangle, so if the point has moved a lot you can just iterate to the next neighbor without doing the barycentric computation (the max size would need to be the distance such that if you are farther than that distance from any triangle point there is no chance you're inside the triangle. This is the length of the largest edge).
But as you mention in your comments, you can run into problems with meshes that have concavities, holes, or separate connected components, where you may fall into a local minimum. There are a couple of ways to deal with this. I think the simplest is to keep a list of all visited triangles (maybe as a flag on the triangle, vector< bool > or set< triangle index >) and refuse to revisit a triangle. If you find that you've visited all the neighbors of your current triangle, then fall back to a global search. Such failures are likely to be uncommon, so it shouldn't hurt your performance too much.
This kind of per-frame updating can be very fast, and might even be a decent approach for computing the initial containing triangles - just choose a random triangle and walk from there (changes from checking all n triangles to only those that are in roughly a straight line to the target). If it's not fast enough, what you could do is keep a k-d tree (or something similar) of the 2d mesh points as well as a single touching triangle index for each mesh point. To seed the iteration, find the closest point to the target 2d point in the k-d tree, set the adjacent triangle to be current, and then iterate.

fast sphere-grid intersection

Given a 3D grid, a 3D point as sphere center, and a radius, I'd like to quickly calculate all cells contained in or intersected by the sphere.
Currently I take the (grid-aligned) bounding box of the sphere and calculate the two cells for the min and max points of this bounding box. Then, for each cell between those two cells, I do a box-sphere intersection test.
It would be great if there were something more efficient.
Thanks!
There's a version of the Bresenham algorithm for drawing circles. Consider the two-dimensional plane at z = 0 (assume the sphere is at (0,0,0) for now) and look at only the x-y plane of grid points. Starting at x = R, y = 0, follow the Bresenham algorithm up to x = 0, y = R, except instead of drawing, you use the result to know that all grid points with lower x coordinates, down to x = x_center, are inside the circle. Put those in a list, count them, or otherwise make note of them. When done with the two-dimensional problem, repeat with varying z, using a reduced radius R(z) = sqrt(R^2 - z^2) in place of R, until z = R.
If the sphere center is indeed located on a grid point, you know that every grid point inside or outside the right half of the sphere has a mirror partner on the left side, and likewise top/bottom, so you can do half the counting/listing per dimension. You can also save time by running Bresenham only to the 45-degree line, because any x,y point relative to the center has a partner y,x. If the sphere can be anywhere, you will have to compute results for each octant.
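A plain (non-Bresenham) version of the same slice-by-slice idea, shown only to make the structure concrete; it collects the cells whose centers fall inside the sphere, with two sqrt calls per row that a midpoint/Bresenham variant would replace by incremental updates (grid resolution and indexing are assumptions of mine):

    #include <cmath>
    #include <vector>

    struct Cell { int x, y, z; };

    // Enumerate unit cells whose centers lie inside the sphere of radius r
    // centered at (cx, cy, cz), slice by slice along z, row by row along y.
    std::vector<Cell> cells_in_sphere(double cx, double cy, double cz, double r) {
        std::vector<Cell> out;
        const int z0 = static_cast<int>(std::ceil(cz - r));
        const int z1 = static_cast<int>(std::floor(cz + r));
        for (int z = z0; z <= z1; ++z) {
            const double rz2 = r * r - (z - cz) * (z - cz);    // squared slice radius R(z)^2
            if (rz2 < 0.0) continue;
            const double rz = std::sqrt(rz2);
            const int y0 = static_cast<int>(std::ceil(cy - rz));
            const int y1 = static_cast<int>(std::floor(cy + rz));
            for (int y = y0; y <= y1; ++y) {
                const double rx2 = rz2 - (y - cy) * (y - cy);  // squared half-width of this row
                if (rx2 < 0.0) continue;
                const double rx = std::sqrt(rx2);
                const int x0 = static_cast<int>(std::ceil(cx - rx));
                const int x1 = static_cast<int>(std::floor(cx + rx));
                for (int x = x0; x <= x1; ++x) out.push_back({x, y, z});
            }
        }
        return out;
    }

Cells that are only grazed by the sphere (intersected but with their center outside) still need the box-sphere test you already have, applied to the one-cell border of each row.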
No matter how efficiently you calculate an individual cell being inside or outside the sphere, your algorithm will always be O(radius^3) because you have to mark that many cells. DarenW's suggestion of the midpoint (aka Bresenham) circle algorithm could give a constant factor speedup, as could simply testing for intersection using the squared radius to avoid the sqrt() call.
If you want better than O(r^3) performance, then you may be able to use an octree instead of a flat grid. Each node of the tree could be marked as being entirely inside, entirely outside, or partially inside the sphere. For partially inside nodes, you recurse down the tree until you get to the finest-grained cells. This will still require marking O(r^2 log r) nodes [O(r^2) nodes on the boundary, O(log r) steps through the tree to get to each of them], so it might not be worth the trouble in your application.
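For the octree approach, each node can be classified against the sphere without any square roots by comparing the squared distances from the sphere center to the nearest and farthest points of the node's box (an Arvo-style test; the struct names are mine):

    #include <algorithm>
    #include <cmath>

    struct Box { double lo[3], hi[3]; };   // axis-aligned bounds of an octree node

    enum class Classify { Outside, Inside, Partial };

    // Compare the squared distances from the sphere center c to the closest
    // and farthest points of the box against r^2.
    Classify classify_box(const Box& b, const double c[3], double r) {
        double dmin2 = 0.0, dmax2 = 0.0;
        for (int k = 0; k < 3; ++k) {
            const double lo = b.lo[k] - c[k];
            const double hi = b.hi[k] - c[k];
            const double n = std::max(0.0, std::max(lo, -hi));       // distance to nearest point on this axis
            const double f = std::max(std::fabs(lo), std::fabs(hi)); // distance to farthest point on this axis
            dmin2 += n * n;
            dmax2 += f * f;
        }
        const double r2 = r * r;
        if (dmin2 > r2)  return Classify::Outside;   // node entirely outside the sphere
        if (dmax2 <= r2) return Classify::Inside;    // node entirely inside the sphere
        return Classify::Partial;                    // recurse into the children
    }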
