Mesh to mesh intersections - computational-geometry

I'm looking for a library or a paper that describes how to determine if one triangular mesh intersects another.
Interestingly I am coming up empty. If there is some way to do it in CGAL, it is eluding me.
It seems like it clearly should be possible, because triangle intersection is possible and because each mesh contains a finite number of triangles. But I assume there must be a better way to do it than the obvious O(n*m) approach where one mesh has n triangles and the other has m triangles.

The way we usually do it using CGAL is with CGAL::box_intersection_d.
You can build it by combining this example with this one.
EDIT:
Since CGAL 4.12 there is now the function CGAL::Polygon_mesh_processing::do_intersect().
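For illustration, here is a minimal, untested sketch of the do_intersect() route, assuming CGAL >= 4.12 with Surface_mesh; it builds two overlapping tetrahedra instead of loading files:

// Untested sketch; requires CGAL >= 4.12.
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/boost/graph/generators.h>
#include <CGAL/Polygon_mesh_processing/intersection.h>
#include <iostream>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                       Mesh;

int main()
{
  Mesh a, b;
  // Two small tetrahedra; the second is shifted so that their surfaces cross.
  CGAL::make_tetrahedron(K::Point_3(0,0,0), K::Point_3(1,0,0),
                         K::Point_3(0,1,0), K::Point_3(0,0,1), a);
  CGAL::make_tetrahedron(K::Point_3(0.5,0,0), K::Point_3(1.5,0,0),
                         K::Point_3(0.5,1,0), K::Point_3(0.5,0,1), b);

  bool overlap = CGAL::Polygon_mesh_processing::do_intersect(a, b);
  std::cout << (overlap ? "meshes intersect" : "no intersection") << "\n";
  return 0;
}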

The book Real-Time Collision Detection has some good suggestions for implementing such algorithms. The basic approach is to use spatial partitioning or bounding volumes to reduce the number of tri-tri intersection tests that you need to perform.
There are a number of good academic packages that address this problem, including the Proximity Query Package and the other work of the GAMMA research group at the University of North Carolina; SWIFT, I-COLLIDE, and RAPID are all well known. Check that the licenses on these libraries are acceptable.
The Open Dynamics Engine (ODE) is a physics engine that contains optimized implementations of a large number of intersection primitives. You can check out the documentation for the triangle-triangle intersection test on their wiki.
While it isn't exactly what you're looking for, I believe this is also possible with CGAL - Tree of Triangles, for Intersection and Distance Queries.

I think the search term you are missing is overlay. For example, here is a web page on Surface Mesh Overlay. That site has a short bibliography, all by the same authors.
Here is another paper on the topic: "Overlay mesh construction using interleaved spanning trees," INFOCOM 2004: Twenty-third Annual Joint Conference of the IEEE Computer and Communications Societies.
See also the GIS SE question, "Performing Overlay of Two Triangulated Irregular Networks (TIN)."

To add to the other answers, there are also techniques involving the 3D Minkowski sum of convex polyhedra - concave polyhedra can be decomposed into convex parts. Check out this.

In libigl, we wrap up CGAL's CGAL::box_intersection_d to intersect a mesh with vertices V and faces F with another mesh with vertices U and faces G, storing pairs of intersecting facets as rows in IF:
igl::intersect_other(V,F,U,G,false,IF);
This will ignore self-intersections. For completeness, I'll mention that we also support self-intersections in a separate function:
igl::self_intersect(V,F,...,IF);

One approach is to construct a bounding volume hierarchy (BVH), e.g. an AABB tree, for each mesh.
Then, to find whether there is any pair of intersecting triangles from the two meshes, traversing the two hierarchies together is much faster (at best logarithmic time complexity) than checking every possible pair of triangles.
For example, you can look at the open-source library MeshLib, where this algorithm is implemented in the findCollidingTriangles function; call it with the firstIntersectionOnly=true argument to detect just the fact of collision instead of enumerating all colliding triangle pairs.
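To illustrate the idea (this is not MeshLib's actual code), here is a rough sketch of the simultaneous traversal of two AABB trees; the Node layout and the triTriIntersect callback are assumptions made for the example, and the callback would be something like Möller's triangle-triangle test:

#include <functional>
#include <vector>

struct AABB { float lo[3], hi[3]; };

inline bool boxesOverlap(const AABB& a, const AABB& b) {
    for (int i = 0; i < 3; ++i)
        if (a.hi[i] < b.lo[i] || b.hi[i] < a.lo[i]) return false;
    return true;
}

struct Node {                 // leaf when left == right == -1
    AABB box;
    int left = -1, right = -1;
    int tri  = -1;            // triangle index stored in a leaf
};

// Returns true as soon as one intersecting pair is found
// (the firstIntersectionOnly behaviour mentioned above).
bool treesCollide(const std::vector<Node>& ta, int ia,
                  const std::vector<Node>& tb, int ib,
                  const std::function<bool(int, int)>& triTriIntersect)
{
    const Node& a = ta[ia];
    const Node& b = tb[ib];
    if (!boxesOverlap(a.box, b.box))
        return false;                              // prune both subtrees at once

    const bool leafA = a.left < 0;
    const bool leafB = b.left < 0;
    if (leafA && leafB)
        return triTriIntersect(a.tri, b.tri);      // narrow phase on two triangles

    if (!leafA)                                    // descend the first tree
        return treesCollide(ta, a.left,  tb, ib, triTriIntersect) ||
               treesCollide(ta, a.right, tb, ib, triTriIntersect);

    return treesCollide(ta, ia, tb, b.left,  triTriIntersect) ||   // descend the second tree
           treesCollide(ta, ia, tb, b.right, triTriIntersect);
}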

Related

Identification of the most relevant vertices of a 2D polygon

I've been searching for a known algorithm that identifies the "most relevant" vertices of a 2D polygon. I may be using the wrong keywords (I've been trying to search for mesh simplification algorithms), but I haven't yet found anything useful.
I should define what I mean by "most relevant" vertices with some context. I want to take a 2D polygon, apply a geometrical transformation, and render both the pre-transformed and post-transformed polygons with a mapping between the vertices to visualize the effects of the transformation. However, with small highly detailed polygons (high vertex count per area), there is a lot of "visual clutter".
The idea is that there should be an algorithm that could identify which vertices would be eligible for mapping and which ones wouldn't. I can design such an algorithm by taking into account two things:
Edge length: ignore a vertex if the edge between it and the previous vertex is shorter than a threshold. An accumulator would be needed to avoid ignoring multiple subsequent vertices.
Internal angle: ignore a vertex if the internal angle at the vertex is higher than a threshold. An "accumulator" would be needed to avoid ignoring multiple subsequent vertices.
Although I could probably implement such a thing, I don't like reinventing the wheel, so I decided to ask whether you have come across something like this that could also solve problems I haven't thought of (e.g., complex polygons).
It sounds like you're looking for the Ramer-Douglas-Peucker algorithm, which does "path simplification" but can be extended for use with polygons. It works by starting with only a couple of endpoints, then greedily adding back whichever vertices are necessary to approximate the original shape to within a certain tolerance. There are a variety of other algorithms and heuristics, but none of them has a reputation for reliably producing significantly better results than RDP, and RDP is easy to understand and implement.
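For illustration, here is a rough sketch of RDP for an open 2D polyline (tol is the distance tolerance; for a closed polygon you would first split it, e.g. at its two farthest-apart vertices, and simplify each half):

#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance from p to the line through a and b.
static double perpDist(const Pt& p, const Pt& a, const Pt& b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::hypot(dx, dy);
    if (len == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
    return std::fabs(dx * (a.y - p.y) - dy * (a.x - p.x)) / len;
}

static void rdp(const std::vector<Pt>& pts, std::size_t first, std::size_t last,
                double tol, std::vector<Pt>& out)
{
    // Find the vertex farthest from the chord first-last.
    double maxDist = 0.0;
    std::size_t index = first;
    for (std::size_t i = first + 1; i < last; ++i) {
        double d = perpDist(pts[i], pts[first], pts[last]);
        if (d > maxDist) { maxDist = d; index = i; }
    }
    if (maxDist > tol) {            // keep that vertex and recurse on both halves
        rdp(pts, first, index, tol, out);
        rdp(pts, index, last, tol, out);
    } else {
        out.push_back(pts[last]);   // everything in between is within tolerance
    }
}

std::vector<Pt> simplify(const std::vector<Pt>& pts, double tol) {
    if (pts.size() < 3) return pts;
    std::vector<Pt> out;
    out.push_back(pts.front());
    rdp(pts, 0, pts.size() - 1, tol, out);
    return out;
}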

Finding vertices in a mesh that are within certain proximity of each other

I have a 3D mesh that is composed of a certain number of vertices.
I know that there are some vertices that are really close to one another. I want to find groups of these, so that I can normalize them.
I could build a k-d tree and do basic nearest-neighbour searches, but that doesn't scale well when I don't have a reference point.
I want to find these groups in relation to all points.
In my searches I also found k-means, but I cannot seem to wrap my head around its scientific descriptions to figure out whether that's really what I need.
I'm not well versed in spatial algorithms in general. I know where one can apply them, for instance in this case, but I lack the actual know-how to even have the correct keywords.
So, what algorithms are suited to such a task?
Simple idea that might work:
Compute a slightly enlarged bounding volume for each vertex in the mesh. For instance, if you use a sphere, give it a small radius, e.g. equal to the length of the shortest edge of the mesh.
Compute the intersections of the bounding volumes. Use a collision detection algorithm for that, such as I-COLLIDE. Use a disjoint-set data structure to group the points that are in collision.
Merge all the points residing in the same set.
You can fine-tune the algorithm by changing the size of the bounding volumes. You can also use this algorithm as a starting point for a k-means algorithm or another sound clustering technique.
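As a toy version of the grouping and merging steps above (replacing the bounding-volume collision detection with a brute-force pair test, just to show the disjoint-set grouping and the merge; not production code):

#include <array>
#include <numeric>
#include <vector>

using Vec3 = std::array<double, 3>;

struct DisjointSet {
    std::vector<int> parent;
    explicit DisjointSet(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int i) { return parent[i] == i ? i : parent[i] = find(parent[i]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

// Groups vertices closer than `radius` and snaps every group to its centroid.
// For large meshes, replace the O(n^2) pair loop with a grid or k-d tree.
void mergeCloseVertices(std::vector<Vec3>& v, double radius)
{
    const double r2 = radius * radius;
    DisjointSet ds(static_cast<int>(v.size()));
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j) {
            double d2 = 0;
            for (int k = 0; k < 3; ++k) { double d = v[i][k] - v[j][k]; d2 += d * d; }
            if (d2 <= r2) ds.unite(static_cast<int>(i), static_cast<int>(j));
        }
    // Accumulate each group, then move every member onto the group average.
    std::vector<Vec3> sum(v.size(), Vec3{0, 0, 0});
    std::vector<int>  cnt(v.size(), 0);
    for (std::size_t i = 0; i < v.size(); ++i) {
        int r = ds.find(static_cast<int>(i));
        for (int k = 0; k < 3; ++k) sum[r][k] += v[i][k];
        ++cnt[r];
    }
    for (std::size_t i = 0; i < v.size(); ++i) {
        int r = ds.find(static_cast<int>(i));
        for (int k = 0; k < 3; ++k) v[i][k] = sum[r][k] / cnt[r];
    }
}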

Minimum Quadrilateralization of Polygons - existing algorithms?

I'm currently trying to find a way to take irregularly shaped polygons and divide them into as few quadrilaterals as possible.
I can't find an obvious out-of-the-box algorithm anywhere that does this, so I'm thinking of going down two possible routes:
1. Getting an optimal triangulation first, and then converting the triangles to quadrilaterals
2. Trying to alter CGAL's optimal_convex_partitions function from its 2D polygon partitioning package to create quadrilateral partitions https://doc.cgal.org/latest/Partition_2/group__PkgPolygonPartitioning2.html#ga3ca9fb1f363f9f792bfbbeca65ad5cc5
I'm a total beginner in computational geometry, so I'd just like to know whether either of these approaches seems like a fool's errand before I try to learn C++. If anyone knows anything about the best possible approach to this, that'd be even better. Thanks!
(Edit) Including a sample polygon - none of them should have holes, though they may have complex exteriors and concavities.
I assume that if you start with triangles and then try to merge two adjacent triangles into one quadrilateral in a greedy fashion, you may end up with many isolated triangles.
I'm not sure how the convex partition would come in handy.
You may find useful information in the articles below. As far as I know, finite element analysis requires that the input object be composed of triangles or quads, so there has been some research done in this direction. Here are two papers that might be relevant:
Ted D. Blacker and Michael B. Stephenson, “Paving: a new approach to automated quadrilateral mesh generation,” Int. J. Numer. Meth. Engng, Vol. 32, pp. 811-847 (1991)
Jinwoo Choi and Yohngjo Kim, “Development of a New Algorithm for Automatic Generation of a Quadrilateral Mesh,” International Journal of CAD/CAM, Vol. 10, No. 2, pp. 00~00 (2011)
I'm far from being an expert on this subject, but I'm certain that these algorithms can be implemented using CGAL...

Ray tracing: Bresenham's vs Siddon's algorithm

I'm developing a tool for radiotherapy inverse planning based on a pencil-beam approach. An important step in these methods (particularly in dose calculation) is ray tracing from many sources, and one of the most used algorithms is Siddon's (there is a nice short description here: http://on-demand.gputechconf.com/gtc/2014/poster/pdf/P4218_CT_reconstruction_iterative_algebraic.pdf). Now, I will try to simplify my question:
The input data is a CT image (a 3D matrix of values) and some source positions around the image. You can imagine a cube and many points around it, all at the same distance but at different orientation angles, where the radiation rays come from. Each ray goes through the volume, and a value is assigned to each voxel according to its distance from the source. The advantage of Siddon's algorithm is that the lengths are computed on the fly during the iterative ray-tracing process. However, I know that Bresenham's algorithm is an efficient way to evaluate the path from one point to another in a matrix, so the length from the source to a specific voxel could easily be calculated as the Euclidean distance between two points, even during Bresenham's iterative process.
So then, knowing that both methods are quite old and efficient, is there a definitive advantage to using Siddon instead of Bresenham? Maybe I'm missing an important detail here, but it seems strange to me that in these dose calculation procedures Bresenham is never really an option and Siddon always appears as the gold standard.
Thanks for any comment or reply!
Good day.
It seems to me that in most applications involving medical ray tracing, you want not only the distance from a source to a particular voxel, but also the intersection lengths of that path with every single voxel on its way. Now, Bresenham gives you the voxels on that path, but not the intersection lengths, while Siddon does.
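To make that concrete, here is a toy Siddon-style traversal that reports the chord length of a segment in every voxel it crosses, assuming a unit-spaced axis-aligned grid and ignoring clipping to the volume bounds (a real implementation handles both):

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

int main()
{
    Vec3 p0{0.2, 0.3, 0.1}, p1{3.7, 2.4, 1.9};
    Vec3 d{p1.x - p0.x, p1.y - p0.y, p1.z - p0.z};
    double length = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);

    // Parametric values in [0,1] where the segment crosses integer grid planes.
    std::vector<double> alphas{0.0, 1.0};
    auto addCrossings = [&](double o, double dir) {
        if (std::fabs(dir) < 1e-12) return;
        int kmin = static_cast<int>(std::ceil (std::min(o, o + dir)));
        int kmax = static_cast<int>(std::floor(std::max(o, o + dir)));
        for (int k = kmin; k <= kmax; ++k) alphas.push_back((k - o) / dir);
    };
    addCrossings(p0.x, d.x);
    addCrossings(p0.y, d.y);
    addCrossings(p0.z, d.z);
    std::sort(alphas.begin(), alphas.end());

    // Each interval [alpha_i, alpha_i+1] lies inside exactly one voxel.
    for (std::size_t i = 0; i + 1 < alphas.size(); ++i) {
        double a0 = alphas[i], a1 = alphas[i + 1];
        if (a1 - a0 <= 0) continue;                   // skip duplicate crossings
        double mid = 0.5 * (a0 + a1);
        int ix = static_cast<int>(std::floor(p0.x + mid * d.x));
        int iy = static_cast<int>(std::floor(p0.y + mid * d.y));
        int iz = static_cast<int>(std::floor(p0.z + mid * d.z));
        std::printf("voxel (%d,%d,%d): length %.4f\n", ix, iy, iz, (a1 - a0) * length);
    }
    return 0;
}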

Algorithm for simplifying 3d surface?

I have a set of 3D points that approximate a surface. Each point, however, is subject to some error. Furthermore, the set contains many more points than are actually needed to represent the underlying surface.
What I am looking for is an algorithm to create a new (much smaller) set of points representing a simplified, smoother version of the surface (pardon me for not having a better definition than "simplified, smoother"). The underlying surface is not a mathematical one, so I'm not hoping to fit the data set to some mathematical function.
Instead of dealing with it as a point cloud, I would recommend triangulating a mesh using Delaunay triangulation: http://en.wikipedia.org/wiki/Delaunay_triangulation
Then decimate the mesh. You can research decimation algorithms, but you can get pretty good quick and dirty results with an algorithm that just merges adjacent tris that have similar normals.
I think you are looking for 'Level of detail' algorithms.
A simple one to implement is to break your volume (surface) into some number of sub-volumes. From the points in each sub-volume, choose a representative point (such as the one closest to the center, the one closest to the average, or the average itself). Use these points to redraw your surface.
You can tweak the number of sub-volumes to increase/decrease detail on the fly.
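A quick sketch of that sub-volume idea as uniform-grid downsampling, with the cell size as the detail knob (names and structure are just for illustration):

#include <array>
#include <cmath>
#include <map>
#include <vector>

using Vec3 = std::array<double, 3>;

// Buckets points into a grid of the given cell size and keeps one average point per cell.
std::vector<Vec3> gridSimplify(const std::vector<Vec3>& pts, double cell)
{
    std::map<std::array<long, 3>, std::pair<Vec3, int>> cells;
    for (const Vec3& p : pts) {
        std::array<long, 3> key{static_cast<long>(std::floor(p[0] / cell)),
                                static_cast<long>(std::floor(p[1] / cell)),
                                static_cast<long>(std::floor(p[2] / cell))};
        auto& acc = cells[key];               // value-initialised to {{0,0,0}, 0}
        for (int k = 0; k < 3; ++k) acc.first[k] += p[k];
        ++acc.second;
    }
    std::vector<Vec3> out;
    out.reserve(cells.size());
    for (const auto& kv : cells) {
        const auto& acc = kv.second;
        out.push_back({acc.first[0] / acc.second,
                       acc.first[1] / acc.second,
                       acc.first[2] / acc.second});
    }
    return out;
}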
I'd approach this by looking for vertices (points) that contribute little to the curvature of the surface. Find all the edges emerging from each vertex and take the dot products of pairs (?) of them. The points representing very shallow "hills" will subtend huge angles (near 180 degrees) and have small dot products.
Those vertices with the smallest numbers would then be candidates for removal. The vertices around them will then form a plane.
Or something like that.
Google for Hugues Hoppe and his "surface reconstruction" work.
Surface reconstruction is used to find a meshed surface that fits the point cloud; however, this method yields lots of triangles. You can then apply a mesh reduction technique to reduce the polygon count in a way that minimizes error. As an example, you can look at OpenMesh's decimation methods.
OpenMesh
Hugues Hoppe
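For reference, this is roughly what quadric-based decimation looks like with OpenMesh's DecimaterT, adapted from memory of the OpenMesh documentation (the exact headers and API may differ between versions, so treat it as a sketch):

// Sketch adapted from the OpenMesh decimation example; check your version's API.
#include <OpenMesh/Core/IO/MeshIO.hh>
#include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
#include <OpenMesh/Tools/Decimater/DecimaterT.hh>
#include <OpenMesh/Tools/Decimater/ModQuadricT.hh>

typedef OpenMesh::TriMesh_ArrayKernelT<>                Mesh;
typedef OpenMesh::Decimater::DecimaterT<Mesh>           Decimater;
typedef OpenMesh::Decimater::ModQuadricT<Mesh>::Handle  HModQuadric;

int main(int argc, char** argv)
{
    Mesh mesh;
    if (argc < 2 || !OpenMesh::IO::read_mesh(mesh, argv[1])) return 1;

    Decimater decimater(mesh);
    HModQuadric hModQuadric;                         // quadric error metric module
    decimater.add(hModQuadric);                      // register the module
    decimater.initialize();
    decimater.decimate_to(mesh.n_vertices() / 10);   // keep roughly 10% of the vertices
    mesh.garbage_collection();                       // actually remove the deleted elements

    OpenMesh::IO::write_mesh(mesh, "decimated.off");
    return 0;
}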
There exist several different techniques for point-based surface model simplification, including:
clustering;
particle simulation;
iterative simplification.
See the survey:
M. Pauly, M. Gross, and L. P. Kobbelt. Efficient simplification of point-sampled surfaces. In Proceedings of the conference on Visualization '02, pages 163–170, Washington, DC, 2002. IEEE.
Unless you parametrise your surface in some way, I'm not sure how you can decide which points carry similar information (and can thus be thrown away).
I guess you could choose a bunch of points at random to get rid of, but that doesn't sound like what you want to do.
Maybe points near each other (for some definition of 'near') can be considered to contain similar information, and so be reduced to single representatives for each such group.
Could you give some more details?
It's simpler to simplify a point cloud without the constraints of mesh triangles and indices.
Smoothing and simplification are different tasks, though. To simplify the cloud, you should first get rid of noise artefacts by making a profile of the kind of noise you have, its frequency and directional characteristics, and doing a noise-profile-based reduction. Good normal vectors are helpful for that.
Here is a document about 5-6 simplification methods using Delaunay, Voronoi, and k-nearest-neighbour maths:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.10.9640&rep=rep1&type=pdf
A later version from 2008:
http://www.wseas.us/e-library/transactions/research/2008/30-705.pdf
Here is a recent C++ version:
https://github.com/tudelft3d/masbcpp/blob/master/src/simplify.cpp
