Algorithm for simplifying a 3D surface?

I have a set of 3D points that approximate a surface. Each point, however, is subject to some error. Furthermore, the set contains many more points than are actually needed to represent the underlying surface.
What I am looking for is an algorithm to create a new (much smaller) set of points representing a simplified, smoother version of the surface (pardon the lack of a better definition than "simplified, smoother"). The underlying surface is not a mathematical one, so I'm not hoping to fit the data set to some mathematical function.

Instead of dealing with it as a point cloud, I would recommend triangulating a mesh using Delaunay triangulation: http://en.wikipedia.org/wiki/Delaunay_triangulation
Then decimate the mesh. You can research decimation algorithms, but you can get pretty good quick-and-dirty results with an algorithm that simply merges adjacent triangles that have similar normals.
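For illustration, here is a hedged sketch (plain C++, illustrative names) of just the selection criterion behind that quick-and-dirty approach: compute face normals and list pairs of edge-adjacent triangles whose normals are nearly parallel. A real decimator would then collapse the shared edges and re-check the mesh; that step is omitted here.

    // Sketch (not a full decimator): flag pairs of adjacent triangles whose
    // normals are nearly parallel; those are the candidates a quick-and-dirty
    // decimation pass would merge (e.g. by collapsing their shared edge).
    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <map>
    #include <utility>
    #include <vector>

    struct Vec3 { double x, y, z; };

    static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }
    static Vec3 normalize(const Vec3& v) {
        double n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return {v.x / n, v.y / n, v.z / n};
    }
    static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // faces are index triples into `verts`
    std::vector<std::pair<int, int>> nearlyCoplanarFacePairs(
        const std::vector<Vec3>& verts,
        const std::vector<std::array<int, 3>>& faces,
        double cosThreshold = 0.99)  // roughly 8 degrees between normals
    {
        // per-face normals
        std::vector<Vec3> normals(faces.size());
        for (size_t f = 0; f < faces.size(); ++f)
            normals[f] = normalize(cross(sub(verts[faces[f][1]], verts[faces[f][0]]),
                                         sub(verts[faces[f][2]], verts[faces[f][0]])));
        // map each undirected edge to the faces that use it
        std::map<std::pair<int, int>, std::vector<int>> edgeToFaces;
        for (size_t f = 0; f < faces.size(); ++f)
            for (int k = 0; k < 3; ++k) {
                int a = faces[f][k], b = faces[f][(k + 1) % 3];
                edgeToFaces[{std::min(a, b), std::max(a, b)}].push_back((int)f);
            }
        // adjacent faces with nearly parallel normals are merge candidates
        std::vector<std::pair<int, int>> candidates;
        for (const auto& e : edgeToFaces)
            if (e.second.size() == 2 &&
                dot(normals[e.second[0]], normals[e.second[1]]) > cosThreshold)
                candidates.emplace_back(e.second[0], e.second[1]);
        return candidates;
    }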

I think you are looking for 'Level of detail' algorithms.
A simple one to implement is to break your volume (surface) into some number of sub-volumes. From the points in each sub-volume, choose a representative point (such as the one closest to the centre, the one closest to the average, or the average itself). Use these points to redraw your surface.
You can tweak the number of sub-volumes to increase/decrease detail on the fly.
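A minimal sketch of the sub-volume idea, assuming axis-aligned cubic sub-volumes and the average point as the representative (function and parameter names are illustrative):

    // Bucket points into a uniform grid and keep one representative
    // (here: the average) per occupied cell. `cellSize` is the sub-volume
    // size and controls the level of detail.
    #include <cmath>
    #include <map>
    #include <tuple>
    #include <utility>
    #include <vector>

    struct Point { double x, y, z; };

    std::vector<Point> gridSimplify(const std::vector<Point>& cloud, double cellSize)
    {
        std::map<std::tuple<long, long, long>, std::pair<Point, int>> cells;
        for (const Point& p : cloud) {
            auto key = std::make_tuple((long)std::floor(p.x / cellSize),
                                       (long)std::floor(p.y / cellSize),
                                       (long)std::floor(p.z / cellSize));
            auto& acc = cells[key];          // value-initialized to {{0,0,0}, 0}
            acc.first.x += p.x; acc.first.y += p.y; acc.first.z += p.z;
            acc.second += 1;
        }
        std::vector<Point> out;
        out.reserve(cells.size());
        for (const auto& c : cells)
            out.push_back({c.second.first.x / c.second.second,
                           c.second.first.y / c.second.second,
                           c.second.first.z / c.second.second});
        return out;
    }

Shrinking cellSize increases detail; enlarging it simplifies more aggressively.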

I'd approach this by looking for vertices (points) that contribute little to the curvature of the surface. Find all the edges emerging from each vertex and take the dot products of pairs of them. The points representing very shallow "hills" will subtend angles close to 180 degrees and thus have strongly negative dot products (near -1).
The vertices with the smallest (most negative) values would then be candidates for removal; the vertices around them will approximately form a plane.
Or something like that.
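A rough sketch of that heuristic, assuming the mesh connectivity is available as a per-vertex adjacency list (all names are illustrative): for each vertex, take unit vectors along its incident edges and record the most negative pairwise dot product; values close to -1 mark nearly flat vertices, the removal candidates mentioned above.

    // Rough per-vertex flatness score: unit vectors along incident edges,
    // pairwise dot products; a most-opposed pair close to -1 (angle near
    // 180 degrees) means the vertex sits on a nearly flat patch.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };

    static Vec3 unitDir(const Vec3& from, const Vec3& to) {
        Vec3 d{to.x - from.x, to.y - from.y, to.z - from.z};
        double n = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        return {d.x / n, d.y / n, d.z / n};
    }

    // neighbors[v] = indices of vertices connected to v by an edge
    std::vector<double> flatnessScore(const std::vector<Vec3>& verts,
                                      const std::vector<std::vector<int>>& neighbors)
    {
        std::vector<double> score(verts.size(), 1.0);
        for (size_t v = 0; v < verts.size(); ++v) {
            double minDot = 1.0;
            const auto& nb = neighbors[v];
            for (size_t i = 0; i < nb.size(); ++i)
                for (size_t j = i + 1; j < nb.size(); ++j) {
                    Vec3 a = unitDir(verts[v], verts[nb[i]]);
                    Vec3 b = unitDir(verts[v], verts[nb[j]]);
                    minDot = std::min(minDot, a.x * b.x + a.y * b.y + a.z * b.z);
                }
            score[v] = minDot;  // close to -1 => candidate for removal
        }
        return score;
    }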

Google for Hugues Hoppe and his "surface reconstruction" work.
Surface reconstruction is used to find a meshed surface that fits the point cloud; however, this method yields lots of triangles. You can then apply a mesh reduction technique to reduce the polygon count in a way that minimizes error. As an example, you can look at OpenMesh's decimation methods.
OpenMesh
Hugues Hoppe
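For reference, a hedged sketch of what quadric-based decimation looks like with OpenMesh's Decimater framework (check the module names against the OpenMesh documentation for your version; the file names and the target vertex count are placeholders):

    // Sketch of quadric decimation with OpenMesh's Decimater framework.
    // "reconstructed.obj", "decimated.obj" and the target count are placeholders.
    #include <OpenMesh/Core/IO/MeshIO.hh>
    #include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
    #include <OpenMesh/Tools/Decimater/DecimaterT.hh>
    #include <OpenMesh/Tools/Decimater/ModQuadricT.hh>

    typedef OpenMesh::TriMesh_ArrayKernelT<>               Mesh;
    typedef OpenMesh::Decimater::DecimaterT<Mesh>          Decimater;
    typedef OpenMesh::Decimater::ModQuadricT<Mesh>::Handle HModQuadric;

    int main()
    {
        Mesh mesh;
        if (!OpenMesh::IO::read_mesh(mesh, "reconstructed.obj")) return 1;

        Decimater decimater(mesh);
        HModQuadric hModQuadric;         // quadric error metric module
        decimater.add(hModQuadric);      // register the module
        decimater.initialize();
        decimater.decimate_to(5000);     // keep roughly 5000 vertices
        mesh.garbage_collection();       // actually remove collapsed elements

        return OpenMesh::IO::write_mesh(mesh, "decimated.obj") ? 0 : 1;
    }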

There exist several different techniques for point-based surface model simplification, including:
clustering;
particle simulation;
iterative simplification.
See the survey:
M. Pauly, M. Gross, and L. P. Kobbelt. Efficient simplification of point-sampled surfaces. In Proceedings of the Conference on Visualization '02, pages 163–170, Washington, DC, 2002. IEEE.

Unless you parametrise your surface in some way, I'm not sure how you can decide which points carry similar information (and can thus be thrown away).
I guess you could choose a bunch of points at random to get rid of, but that doesn't sound like what you want to do.
Maybe points near each other (for some definition of 'near') can be considered to contain similar information, and so be reduced to a single representative for each such group.
Could you give some more details?

It's simpler to simplify a point cloud without the constraints of mesh triangles and indices.
Smoothing and simplification are different tasks, though. To simplify the cloud you should first get rid of noise artefacts: build a profile of the kind of noise you have (its frequency and directional characteristics) and apply a reduction that compares against that noise profile. Good normal vectors are helpful for that.
Here is a document describing five or six simplification methods using Delaunay, Voronoi and k-nearest-neighbour maths:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.10.9640&rep=rep1&type=pdf
A later version from 2008:
http://www.wseas.us/e-library/transactions/research/2008/30-705.pdf
Here is a recent C++ implementation:
https://github.com/tudelft3d/masbcpp/blob/master/src/simplify.cpp
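To illustrate the "denoise before simplifying" point, here is a deliberately crude O(n^2) sketch that nudges each point toward the centroid of its k nearest neighbours, a simple Laplacian-style smoothing. It ignores normals and any measured noise profile, and a real implementation would use a spatial index instead of the brute-force neighbour search; all names are illustrative.

    // Crude pre-smoothing of a point cloud before simplification: move each
    // point part of the way toward the centroid of its k nearest neighbours.
    // Illustrative only; real code would use a k-d tree and the estimated
    // normals / noise profile.
    #include <algorithm>
    #include <utility>
    #include <vector>

    struct Point { double x, y, z; };

    std::vector<Point> smoothOnce(const std::vector<Point>& cloud, int k = 8, double lambda = 0.5)
    {
        std::vector<Point> out(cloud.size());
        for (size_t i = 0; i < cloud.size(); ++i) {
            // squared distances to all other points (brute force)
            std::vector<std::pair<double, size_t>> d;
            d.reserve(cloud.size());
            for (size_t j = 0; j < cloud.size(); ++j) {
                if (j == i) continue;
                double dx = cloud[j].x - cloud[i].x,
                       dy = cloud[j].y - cloud[i].y,
                       dz = cloud[j].z - cloud[i].z;
                d.push_back({dx * dx + dy * dy + dz * dz, j});
            }
            size_t kk = std::min<size_t>(k, d.size());
            if (kk == 0) { out[i] = cloud[i]; continue; }
            std::partial_sort(d.begin(), d.begin() + kk, d.end());
            Point c{0, 0, 0};
            for (size_t n = 0; n < kk; ++n) {
                c.x += cloud[d[n].second].x;
                c.y += cloud[d[n].second].y;
                c.z += cloud[d[n].second].z;
            }
            c.x /= kk; c.y /= kk; c.z /= kk;
            out[i] = {cloud[i].x + lambda * (c.x - cloud[i].x),
                      cloud[i].y + lambda * (c.y - cloud[i].y),
                      cloud[i].z + lambda * (c.z - cloud[i].z)};
        }
        return out;
    }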

Related

Finding vertices in a mesh that are within certain proximity of each other

I have a 3D mesh that is composed of a certain number of vertices.
I know that there are some vertices that are really close to one another. I want to find groups of these, so that I can normalize them.
I could build a k-d tree and do a basic nearest-neighbour search (NNS), but that doesn't scale well when I don't have a reference point.
I want to find these groups in relation to all points.
In my searches I also found k-means, but I cannot seem to wrap my head around its scientific descriptions to work out whether that's really what I need.
I'm not well versed in spatial algorithms in general. I know where one can apply them, for instance in this case, but I lack the actual know-how to even have the correct keywords.
So, yeah, what algorithms are meant for such a task?
Simple idea that might work:
Compute a slightly enlarged bounding volume for each vertex in the mesh. For instance, if you use a sphere, give it a small radius, e.g. equal to the length of the smallest edge of the mesh.
Compute the intersections of the bounding volumes for each vertex, using a collision detection algorithm such as I-COLLIDE. Use a disjoint-set data structure to group the points that are in collision.
Merge all the points residing in the same set.
You can fine-tune the algorithm by changing the size of the bounding volumes. You can also use this algorithm as a starting point for k-means or another sound clustering technique.
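A brute-force sketch of that pipeline (illustrative names, no broad phase): union any two vertices closer than the chosen radius with a disjoint-set structure, then collapse each set to its centroid. The O(n^2) pair loop is exactly where a collision-detection broad phase such as I-COLLIDE, a uniform grid, or a k-d tree would go.

    // Sphere + disjoint-set sketch: union vertices closer than `radius`,
    // then merge each resulting group into its centroid.
    #include <numeric>
    #include <vector>

    struct Vec3 { double x, y, z; };

    struct DisjointSet {
        std::vector<int> parent;
        explicit DisjointSet(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
        int find(int a) { return parent[a] == a ? a : parent[a] = find(parent[a]); }
        void unite(int a, int b) { parent[find(a)] = find(b); }
    };

    std::vector<Vec3> mergeCloseVertices(const std::vector<Vec3>& verts, double radius)
    {
        const double r2 = radius * radius;
        DisjointSet ds((int)verts.size());
        // broad-phase structure would replace this O(n^2) loop
        for (size_t i = 0; i < verts.size(); ++i)
            for (size_t j = i + 1; j < verts.size(); ++j) {
                double dx = verts[i].x - verts[j].x,
                       dy = verts[i].y - verts[j].y,
                       dz = verts[i].z - verts[j].z;
                if (dx * dx + dy * dy + dz * dz <= r2) ds.unite((int)i, (int)j);
            }
        // accumulate a centroid per set
        std::vector<Vec3> sum(verts.size(), {0, 0, 0});
        std::vector<int> count(verts.size(), 0);
        for (size_t i = 0; i < verts.size(); ++i) {
            int root = ds.find((int)i);
            sum[root].x += verts[i].x; sum[root].y += verts[i].y; sum[root].z += verts[i].z;
            ++count[root];
        }
        std::vector<Vec3> out;
        for (size_t i = 0; i < verts.size(); ++i)
            if (count[i] > 0)
                out.push_back({sum[i].x / count[i], sum[i].y / count[i], sum[i].z / count[i]});
        return out;
    }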

Ray tracing: Bresenham's vs Siddon's algorithm

I'm developing a tool for radiotherapy inverse planning based on a pencil-beam approach. An important step in these methods (particularly in dose calculation) is ray-tracing from many sources, and one of the most widely used algorithms is Siddon's (there is a nice short description here: http://on-demand.gputechconf.com/gtc/2014/poster/pdf/P4218_CT_reconstruction_iterative_algebraic.pdf). Now, I will try to simplify my question:
The input data is a CT image (a 3D matrix of values) and some source positions around the image. You can imagine a cube and many points around it, all at the same distance but at different orientation angles, from which the radiation rays come. Each ray goes through the volume and a value is assigned to each voxel according to the distance from the source. The advantage of Siddon's algorithm is that the length is calculated on the fly during the iterative ray-tracing process. However, I know that Bresenham's algorithm is an efficient way to evaluate the path from one point to another in a matrix. Thus, the length from the source to a specific voxel could easily be calculated as the Euclidean distance between two points, even during Bresenham's iterative process.
So then, knowing that both methods are quite old and efficient, is there a definitive advantage to using Siddon instead of Bresenham? Maybe I'm missing an important detail here, but it seems odd to me that in these dose calculation procedures Bresenham is never really an option and Siddon always appears as the gold standard.
Thanks for any comment or reply!
Good day.
It seems to me that in most applications involving medical ray tracing, you want not only the distance from a source to a particular voxel, but also the intersection lengths of that path with every single voxel on its way. Now, Bresenham gives you the voxels on that path, but not the intersection lengths, while Siddon does.
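To make that difference concrete, here is a hedged sketch of a Siddon-style traversal (not the exact published pseudo-code, just the core idea, with illustrative names): collect the parametric values at which the ray crosses every grid plane, sort them, and read off one voxel plus its intersection length per consecutive pair. Bresenham would only give you the voxel indices.

    // Siddon-style traversal of a regular grid: the parametric crossings with
    // all x/y/z planes are merged and sorted; each consecutive pair of values
    // identifies one voxel and the length of the ray segment inside it.
    // Illustrative only: rays are given as segments p1->p2, planes parallel to
    // the ray are skipped, and voxels outside the grid are simply dropped.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };
    struct VoxelHit { int i, j, k; double length; };

    std::vector<VoxelHit> siddonTrace(Vec3 p1, Vec3 p2, Vec3 origin, Vec3 spacing, int dims[3])
    {
        Vec3 d{p2.x - p1.x, p2.y - p1.y, p2.z - p1.z};
        double rayLen = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);

        std::vector<double> alphas = {0.0, 1.0};
        auto addPlanes = [&](double pc, double dc, double o, double s, int n) {
            if (std::fabs(dc) < 1e-12) return;     // ray parallel to these planes
            for (int i = 0; i <= n; ++i) {
                double a = (o + i * s - pc) / dc;  // parametric crossing of plane i
                if (a > 0.0 && a < 1.0) alphas.push_back(a);
            }
        };
        addPlanes(p1.x, d.x, origin.x, spacing.x, dims[0]);
        addPlanes(p1.y, d.y, origin.y, spacing.y, dims[1]);
        addPlanes(p1.z, d.z, origin.z, spacing.z, dims[2]);
        std::sort(alphas.begin(), alphas.end());

        std::vector<VoxelHit> hits;
        for (size_t n = 1; n < alphas.size(); ++n) {
            double len = (alphas[n] - alphas[n - 1]) * rayLen;
            if (len <= 0.0) continue;
            double amid = 0.5 * (alphas[n] + alphas[n - 1]);  // midpoint picks the voxel
            int i = (int)std::floor((p1.x + amid * d.x - origin.x) / spacing.x);
            int j = (int)std::floor((p1.y + amid * d.y - origin.y) / spacing.y);
            int k = (int)std::floor((p1.z + amid * d.z - origin.z) / spacing.z);
            if (i >= 0 && i < dims[0] && j >= 0 && j < dims[1] && k >= 0 && k < dims[2])
                hits.push_back({i, j, k, len});
        }
        return hits;
    }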

What "boundary conditions" can make a rectangle "look" like a circle?

I am solving a fourth-order non-linear partial differential equation in time and space (t, x) on a square domain with periodic or free boundary conditions with MATHEMATICA.
WITHOUT using conformal mapping, what boundary conditions at the edges or corners could I use to make the square domain "seem" like a circular domain for my non-linear partial differential equation, which is Cartesian?
The options I would NOT like to use are:
Conformal mapping
changing my equation to polar/cylindrical coordinates.
This is something I am pursuing purely out of interest, just in case someone screams bloody murder thinking it is a homework problem! :P
That question was being asked around the time people found out that the world was spherical: they wanted to make rectangular maps of the surface of the world...
It is not possible.
The reason it is not possible is that the sphere has intrinsic curvature, while the cube/parallelepiped has none. It can be shown that two surfaces with different intrinsic curvatures cannot be mapped onto each other while preserving infinitesimal distances, i.e. with the distance between nearby points given by the Euclidean distance.
The easiest way to understand the problem is to take a rectangular piece of paper and try to make a sphere out of it without locally stretching or compressing it (folding is allowed). You can't. On the other hand, you can make a cylindrical surface, because the cylinder also has no intrinsic curvature.
In maps, people normally use one of two options:
approximate the local surface of the sphere by a tangent plane and make a rectangle out of it (a local map of some region);
make world maps but draw curved lines everywhere, indicating that distances must be measured along those lines.
This is also the main reason why, when travelling from Europe to North America, airplanes seem to follow a curve, always trying to pass near Canada. If we measured the distance on the rectangular map, it would look as if a straight line minimizes the distance. However, because we are mapping between two different intrinsic curvatures, the real distance must be measured in a different way (and not via a straight line on the map).
For 2D (in fact for nD) the same reasoning applies.
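For reference, the obstruction can be stated in one line via Gauss's Theorema Egregium (Gaussian curvature is preserved by any map that preserves infinitesimal distances):

    % Gauss's Theorema Egregium: Gaussian curvature K is invariant under any
    % local isometry (distance-preserving map). A sphere of radius R and the
    % flat plane differ, so no such map between them exists.
    \[
      K_{\text{sphere}} = \frac{1}{R^{2}} \;\neq\; 0 = K_{\text{plane}}
      \quad\Longrightarrow\quad
      \text{no distance-preserving map between the two surfaces exists.}
    \]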

Mesh to mesh intersections

I'm looking for a library or a paper that describes how to determine if one triangular mesh intersects another.
Interestingly I am coming up empty. If there is some way to do it in CGAL, it is eluding me.
It seems like it clearly should be possible, because triangle intersection is possible and because each mesh contains a finite number of triangles. But I assume there must be a better way to do it than the obvious O(n*m) approach where one mesh has n triangles and the other has m triangles.
The way we usually do it in CGAL is with CGAL::box_intersection_d.
You can make it by mixing this example with this one.
EDIT:
Since CGAL 4.12 there is now the function CGAL::Polygon_mesh_processing::do_intersect().
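A hedged usage sketch of that function (as documented in CGAL's Polygon Mesh Processing package; the file names are placeholders, and both inputs are assumed to be clean triangle meshes):

    // Sketch: testing two surface meshes for intersection with
    // CGAL::Polygon_mesh_processing::do_intersect (CGAL >= 4.12).
    // File names are placeholders; Surface_mesh reads OFF from a stream.
    #include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
    #include <CGAL/Surface_mesh.h>
    #include <CGAL/Polygon_mesh_processing/intersection.h>
    #include <fstream>
    #include <iostream>

    typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
    typedef CGAL::Surface_mesh<K::Point_3>                      Mesh;
    namespace PMP = CGAL::Polygon_mesh_processing;

    int main()
    {
        Mesh a, b;
        std::ifstream ia("mesh_a.off"), ib("mesh_b.off");
        ia >> a;
        ib >> b;

        bool intersecting = PMP::do_intersect(a, b);
        std::cout << (intersecting ? "meshes intersect" : "no intersection") << "\n";
        return 0;
    }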
The book Real-Time Collision Detection has some good suggestions for implementing such algorithms. The basic approach is to use spatial partitioning or bounding volumes to reduce the number of tri-tri intersection tests that you need to perform.
There are a number of good academic packages that address this problem, including the Proximity Query Package and other work from the GAMMA research group at the University of North Carolina; SWIFT, I-COLLIDE, and RAPID are all well known. Check that the licenses on these libraries are acceptable for your use.
The Open Dynamics Engine (ODE), is a physics engine that contains optimized implementations of a large number of intersection primitives. You can check out the documentation for the triangle-triangle intersection test on their wiki.
While it isn't exactly what you're looking for, I believe that this is also possible with CGAL - Tree of Triangles, for Intersection and Distance Queries
I think the search term you are missing is overlay. For example, here is a web page on Surface Mesh Overlay. That site has a short bibliography, all by the same authors.
Here is another paper on the topic: "Overlay mesh construction using interleaved spanning trees," INFOCOM 2004: Twenty-third Annual Joint Conference of the IEEE Computer and Communications Societies.
See also the GIS SE question, "Performing Overlay of Two Triangulated Irregular Networks (TIN)."
To add to the other answers, there are also techniques involving the 3D Minkowski sum of convex polyhedra - concave polyhedra can be decomposed into convex parts. Check out this.
In libigl, we wrap up CGAL's CGAL::box_intersection_d to intersect a mesh with vertices V and faces F with another mesh with vertices U and faces G, storing pairs of intersecting facets as rows in IF:
igl::intersect_other(V,F,U,G,false,IF);
This will ignore self-intersections. For completeness, I'll mention that we also support self-intersections in a separate function:
igl::self_intersect(V,F,...,IF);
One approach is to construct a bounding volume hierarchy (BVH, e.g. an AABB tree) for each mesh.
Then one needs to find whether there is a pair of intersecting triangles from the two meshes, and this is much faster (at best logarithmic time complexity) using the constructed hierarchies than checking every possible pair of triangles.
For example, you can look at the open-source library MeshLib, where this algorithm is implemented in the findCollidingTriangles function; call it with the firstIntersectionOnly=true argument to detect just the fact of a collision instead of enumerating all colliding triangle pairs.
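As a generic illustration of the BVH idea (this is not MeshLib's or CGAL's code, just a sketch with illustrative names), here is the broad phase only: a tiny median-split AABB tree per mesh and a recursive tree-vs-tree descent. Pairs that reach two leaves would then be handed to an exact triangle-triangle test (e.g. Möller's), which is omitted.

    // Broad-phase sketch: median-split AABB tree per mesh, then a recursive
    // tree-vs-tree descent collecting candidate triangle pairs.
    #include <algorithm>
    #include <memory>
    #include <numeric>
    #include <utility>
    #include <vector>

    struct Vec3 { double x, y, z; };
    struct Box  { Vec3 lo, hi; };

    static bool overlaps(const Box& a, const Box& b) {
        return a.lo.x <= b.hi.x && b.lo.x <= a.hi.x &&
               a.lo.y <= b.hi.y && b.lo.y <= a.hi.y &&
               a.lo.z <= b.hi.z && b.lo.z <= a.hi.z;
    }
    static Box merge(const Box& a, const Box& b) {
        return {{std::min(a.lo.x, b.lo.x), std::min(a.lo.y, b.lo.y), std::min(a.lo.z, b.lo.z)},
                {std::max(a.hi.x, b.hi.x), std::max(a.hi.y, b.hi.y), std::max(a.hi.z, b.hi.z)}};
    }

    struct Node {
        Box box;
        int tri = -1;                    // leaf: triangle index, inner node: -1
        std::unique_ptr<Node> left, right;
    };

    // boxes[i] = AABB of triangle i of one mesh
    static std::unique_ptr<Node> build(std::vector<int>& ids, const std::vector<Box>& boxes,
                                       int begin, int end) {
        auto node = std::make_unique<Node>();
        node->box = boxes[ids[begin]];
        for (int i = begin + 1; i < end; ++i) node->box = merge(node->box, boxes[ids[i]]);
        if (end - begin == 1) { node->tri = ids[begin]; return node; }
        int mid = (begin + end) / 2;     // crude median split on x-centroid
        std::nth_element(ids.begin() + begin, ids.begin() + mid, ids.begin() + end,
                         [&](int a, int b) { return boxes[a].lo.x + boxes[a].hi.x <
                                                    boxes[b].lo.x + boxes[b].hi.x; });
        node->left  = build(ids, boxes, begin, mid);
        node->right = build(ids, boxes, mid, end);
        return node;
    }

    std::unique_ptr<Node> buildTree(const std::vector<Box>& triBoxes) {
        if (triBoxes.empty()) return nullptr;
        std::vector<int> ids(triBoxes.size());
        std::iota(ids.begin(), ids.end(), 0);
        return build(ids, triBoxes, 0, (int)ids.size());
    }

    // collect candidate triangle pairs whose boxes overlap; an exact
    // triangle-triangle test would then filter this list
    void descend(const Node* a, const Node* b, std::vector<std::pair<int, int>>& out) {
        if (!a || !b || !overlaps(a->box, b->box)) return;
        if (a->tri >= 0 && b->tri >= 0) { out.push_back({a->tri, b->tri}); return; }
        if (a->tri >= 0) { descend(a, b->left.get(), out); descend(a, b->right.get(), out); }
        else             { descend(a->left.get(), b, out); descend(a->right.get(), b, out); }
    }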

How to subsample a 2D polygon?

I have polygons that define the contours of counties in the UK. These shapes are very detailed (10k to 20k points each), which makes the related computations (is point X in polygon P?) quite computationally expensive.
Thus, I would like to "subsample" my polygons, to obtain a similar shape but with less points. What are the different techniques to do so?
The trivial approach would be to take every Nth point (thus subsampling by a factor of N), but this feels too "crude". I would rather do some averaging of points, or something of that flavour. Any pointers?
Two solutions spring to mind:
1) Since the map of the UK is reasonably squarish, you could render a bitmap of the counties. Assign each one a specific colour, then render the borders with a 1- or 2-pixel-thick black line. This means you'll only have to perform the expensive interior/exterior calculation when a sample happens to lie on the border. The larger the bitmap, the less often this will happen.
2) Simplify the county outlines. You can use the recursive Ramer–Douglas–Peucker algorithm to simplify the boundaries (see the sketch below). Just make sure you cache the results. You may also have to apply it not to entire county boundaries but to shared boundaries only, to ensure no gaps appear. This might be quite tricky.
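A self-contained sketch of Ramer–Douglas–Peucker for one open polyline (for a closed county ring you would first split it at two far-apart points, or at the endpoints of shared boundary segments); epsilon is the maximum allowed deviation in map units, and all names are illustrative.

    // Ramer-Douglas-Peucker: keep the point farthest from the chord; if it
    // deviates more than `epsilon`, recurse on both halves, otherwise drop
    // everything strictly between the endpoints.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Pt { double x, y; };

    static double distToSegment(const Pt& p, const Pt& a, const Pt& b) {
        double dx = b.x - a.x, dy = b.y - a.y;
        double len2 = dx * dx + dy * dy;
        if (len2 == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
        double t = std::max(0.0, std::min(1.0, ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2));
        return std::hypot(p.x - (a.x + t * dx), p.y - (a.y + t * dy));
    }

    static void rdp(const std::vector<Pt>& pts, size_t first, size_t last,
                    double epsilon, std::vector<Pt>& out) {
        double maxDist = 0.0;
        size_t index = first;
        for (size_t i = first + 1; i < last; ++i) {
            double d = distToSegment(pts[i], pts[first], pts[last]);
            if (d > maxDist) { maxDist = d; index = i; }
        }
        if (maxDist > epsilon) {
            rdp(pts, first, index, epsilon, out);   // emits points up to and including `index`
            rdp(pts, index, last, epsilon, out);    // emits the rest up to `last`
        } else {
            out.push_back(pts[last]);               // everything strictly in between is dropped
        }
    }

    std::vector<Pt> simplify(const std::vector<Pt>& pts, double epsilon) {
        if (pts.size() < 3) return pts;
        std::vector<Pt> out{pts.front()};
        rdp(pts, 0, pts.size() - 1, epsilon, out);
        return out;
    }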
Here you can find a project dealing with exactly this issue. Although it works primarily with an area "filled" by points, you can set it to work with a "perimeter"-type definition like yours.
It uses a k-nearest neighbors approach for calculating the region.
Here you can request a copy of the paper.
Apparently they planned to offer an online service for requesting calculations, but I didn't test it, and it probably isn't running.
HTH!
Polygon triangulation should help here. You'll still have to check many polygons, but these are now triangles, so they are easier to check, and you can use some optimizations to narrow the check down to a small subset of triangles for a given region or point.
Since it seems you already have all the algorithms you need for general polygons, not only triangles, you can also merge several triangles after triangulation if they are too small or if the triangle count gets too high.
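For completeness, the per-triangle containment test after triangulation is only a few lines; a sketch using cross-product signs (the point is inside if it lies on the same side of all three edges):

    // Point-in-triangle test via cross-product signs: P is inside (or on the
    // border of) triangle ABC if it is on the same side of all three edges.
    struct P2 { double x, y; };

    static double cross2(const P2& o, const P2& a, const P2& b) {
        return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
    }

    bool pointInTriangle(const P2& p, const P2& a, const P2& b, const P2& c) {
        double d1 = cross2(a, b, p);
        double d2 = cross2(b, c, p);
        double d3 = cross2(c, a, p);
        bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
        bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
        return !(hasNeg && hasPos);   // works for either winding order
    }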
