I know B-Rep (ParaSolid) is the popular solid representation. In my past experience I have always worked with triangle mesh representations such as the OBJ and STL file formats. I am wondering why B-Rep is better than a mesh representation. What's the main difference?
A boundary representation (b-rep) solid modeler uses a combination of precise geometry and boundary topology to represent objects such as solids (3d manifolds), surfaces (2d manifolds) and wires (1d manifolds).
The salient property of a b-rep is that it represents geometry precisely. Faces of the b-rep are defined by the equations of the surfaces associated with each face. Edges are represented with precise curves, often the curve of intersection of the edge's adjacent faces. (Sometimes approximate curves are used when precise curves are too difficult to compute or when faces don't fit together exactly; this is called a "tolerant" model.)
Because the underlying geometry of a b-rep is precise, the model can be queried (in principle) to arbitrary precision. For example, if you have a b-rep of a box with a cylindrical hole through it, you can query the volume of the box to an arbitrary precision. With a tessellated model you can only compute the volume to the precision of the tessellation, which can never represent the cylindrical hole exactly.
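For instance, for a w-by-d-by-h box with a radius-r hole through its full height, the exact volume w*d*h - pi*r^2*h can be evaluated from the b-rep geometry directly, while a tessellation that replaces the cylinder by an n-sided prism can only approach it. A minimal numeric sketch of that gap (the dimensions are illustrative):

    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI = std::acos(-1.0);
        const double w = 4, d = 3, h = 2, r = 1;          // illustrative sizes
        const double exact = w * d * h - PI * r * r * h;  // b-rep style query
        for (int n : {8, 64, 512}) {
            // Area of a regular n-gon inscribed in a circle of radius r.
            double holeArea = 0.5 * n * r * r * std::sin(2 * PI / n);
            double approx = w * d * h - holeArea * h;     // tessellated query
            std::printf("n=%4d  volume=%.9f  error=%.3e\n",
                        n, approx, approx - exact);
        }
        std::printf("exact  volume=%.9f\n", exact);
        return 0;
    }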
Another benefit of b-reps is they tend to be much more compact than tessellated models. As a simple example, a sphere represented as a b-rep has a single face associated with the geometry of the sphere. It only takes a center and radius to define that sphere, and a few bytes more for the b-rep data structure to support it. A tessellated model of a sphere may have many vertices, each with 3 coordinates.
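To make the size difference concrete, here is a rough sketch of the two storage schemes (the field names are illustrative, not any particular kernel's layout):

    #include <vector>

    // B-rep style: the sphere face stores only its defining geometry,
    // queryable later to arbitrary precision.
    struct AnalyticSphere {
        double center[3];
        double radius;      // a few dozen bytes in total
    };

    // Tessellated style: storage grows with the resolution chosen up front.
    struct TriangleMesh {
        std::vector<double> vertices;  // 3 coordinates per vertex
        std::vector<int>    triangles; // 3 vertex indices per triangle
    };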
Diving a little deeper, Boolean operations on a tessellation are problematic, since the facets on one of the bodies may not line up with the facets on the other. There needs to be some sort of rectification process which will add complexity and inaccuracy to the combined model. No such problem occurs with b-reps, since new curves can be computed as intersections of the surfaces that underlie the intersecting faces.
On the other hand, tessellated models are becoming more popular now that the technology for manipulating them is maturing. For example, with discrete differential geometry and discrete spectral methods we can manipulate the meshes in a Boolean operation in a way that minimizes the local changes to discrete curvature, or we can manipulate regions of the tessellation with simple controls that move many points.
Another benefit of tessellated models is that they are better for scanned data. If you scan a human face, there is no need to try to find precise surfaces to represent the data; the tessellated image is good enough.
First of all, better for what?
For example, for 3D printing or pure visualization purposes, a mesh representation is better suited.
B-Rep preserves the underlying geometry (surfaces, curves, points) as well as the connectivity between the model's topological items (faces, edges, vertices), thus allowing a richer operation (feature) set: filleting, blending, etc.
I'm working on a real-time ballistics simulation of many particles under the effect of highly non-uniform wind. The wind data is obtained from CFD in the form of a discretized 2D vector field (an unstructured mesh; each grid point has a vector associated with it that gives the direction and magnitude of the air velocity).
The problem is that I need to be able to extract the wind vector at any position that a particle occupies, so that aerodynamic drag can be computed and injected into ballistics physics. This is trivial if the wind data can be approximated by an analytical/numerical vector field where a vector can be computed with an algebraic expression. However, the wind data I'm working with is quite complex and there doesn't seem to be any way to approximate it.
I have two ideas:
Find a way to interpolate the vector field every time each particle's position is updated. This sounds computationally expensive, so I'm not sure if it can be done in real time. Also, the mesh is unstructured, and I'm not sure if 2D interpolation can be done on this kind of mesh (one way to do it on triangles is sketched after this question).
Just pick the grid point closest to the particle's position and get the vector from there (given that the mesh is fine enough for this to accurately represent the actual vector field). This will then turn into a real-time nearest-neighbor problem with rapid and numerous queries.
I'm not sure if these are the only two solutions for this problem, and if these can be done in real-time at all. How should I go about solving this?
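For reference, option 1 is usually done on an unstructured triangle mesh by locating the triangle containing the query point and blending the three vertex wind vectors with barycentric weights. A minimal sketch, with brute-force point location for clarity (a real-time version would find the containing triangle, or the nearest node for option 2, through a spatial index such as a k-d tree or a uniform grid; all types are illustrative):

    #include <optional>
    #include <vector>

    struct Vec2 { double x, y; };

    struct Tri {
        Vec2 a, b, c;      // vertex positions
        Vec2 wa, wb, wc;   // wind vectors stored at the vertices
    };

    // Barycentric interpolation of the wind at p, if p lies inside the mesh.
    std::optional<Vec2> sampleWind(const std::vector<Tri>& mesh, Vec2 p) {
        for (const Tri& t : mesh) {
            double det = (t.b.y - t.c.y) * (t.a.x - t.c.x)
                       + (t.c.x - t.b.x) * (t.a.y - t.c.y);
            if (det == 0) continue;                   // degenerate triangle
            double l1 = ((t.b.y - t.c.y) * (p.x - t.c.x)
                       + (t.c.x - t.b.x) * (p.y - t.c.y)) / det;
            double l2 = ((t.c.y - t.a.y) * (p.x - t.c.x)
                       + (t.a.x - t.c.x) * (p.y - t.c.y)) / det;
            double l3 = 1.0 - l1 - l2;
            if (l1 < 0 || l2 < 0 || l3 < 0) continue; // p outside this triangle
            return Vec2{l1 * t.wa.x + l2 * t.wb.x + l3 * t.wc.x,
                        l1 * t.wa.y + l2 * t.wb.y + l3 * t.wc.y};
        }
        return std::nullopt;                          // p outside the mesh
    }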
I have seen some applications create a geometric structure apparently from just a set of touch points, like this example:
I wonder which algorithms could help me recreate such geometric structures?
UPDATE
In 3D printing, sometimes a support structure is needed:
The need for support is due to the collapse of some regions of the 3D object, i.e. overhangs, while printing. The support structure is supposed to connect overhangs either to the print floor or to the 3D object itself. The geometric structure shown in the screenshot above is actually a sample support structure.
I am not a specialist in that matter and I may be missing important issues. So here is what I would naively do.
The triangles with an outward normal pointing downward will reveal the overhangs. When projected vertically and merged along common edges, they define polygonal regions of the base plane. You first have to build those projected polygons, find their intersections, and order the intersections by Z. (You might also want to consider the facing polygons to take the surface thickness into account.)
Now for every intersection polygon, you draw verticals to the one just below. The projections of the verticals might be sampled from a regular grid or in some other way, to tune the density. You might also consider running those pillars continuously from the base plate to the upper surface, possibly stopping some of them earlier.
The key ingredient in this procedure is a good polygon intersection algorithm.
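A minimal sketch of the first step, collecting the downward-facing triangles that need support (the types and the 45-degree default are illustrative):

    #include <cmath>
    #include <vector>

    struct V3 { double x, y, z; };
    struct Facet { V3 n; V3 v[3]; };   // outward unit normal + vertices, as in STL

    // A facet overhangs when the angle between its normal and the straight-down
    // direction (0,0,-1) is below maxAngleDeg, i.e. when n.z < -cos(maxAngleDeg).
    std::vector<Facet> overhangs(const std::vector<Facet>& mesh,
                                 double maxAngleDeg = 45.0) {
        const double PI = std::acos(-1.0);
        const double threshold = -std::cos(maxAngleDeg * PI / 180.0);
        std::vector<Facet> out;
        for (const Facet& f : mesh)
            if (f.n.z < threshold) out.push_back(f);
        return out;
    }

The projected outlines of these facets are then what feeds the polygon intersection step.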
Contour lines (aka isolines) are curves that trace constant values across a 2D scalar field. For example, in a geographical map you might have contour lines to illustrate the elevation of the terrain by showing where the elevation is constant. In this case, let's store contour lines as lists of points on the map.
Suppose you have a map that has several contour lines at known elevations, and otherwise you know nothing about the elevations of the map. What algorithm would you use to fill in additional contour lines to approximate the unknown elevations of the map, assuming the landscape is continuous and doesn't do anything surprising?
It is easy to find advice about interpolating the elevation of an individual point using contour lines. There are also algorithms like Marching Squares for turning point elevations into contour lines, but none of these exactly captures this use case. We don't need the elevation of any particular point; we just want the contour lines. Certainly we could solve this problem by filling an array with estimated elevations and then using Marching Squares to estimate the contour lines based on the array, but the two steps of that process seem unnecessarily expensive and likely to introduce artifacts. Surely there is a better way.
IMO, almost all methods will amount to somehow reconstructing the 3D surface by interpolation, even if implicitly.
You may try flattening the curves (turning them into polylines) and triangulating the resulting polygons that they will define. (There will be a step of closing the curves that end on the border of the domain.)
By intersecting the triangles with a new level (using linear interpolation along the sides), you will obtain new polylines corresponding to the new isocurves. Notice that the intersections with the old levels recreate the old polylines, which is sound.
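A minimal sketch of that intersection step for a single triangle (types are illustrative; the per-triangle segments are chained afterwards into polylines):

    #include <vector>

    struct Pt { double x, y, z; };   // planar vertex with its known elevation

    // Returns the 0 or 2 points where the triangle's sides cross level z;
    // two points form one segment of the new isocurve.
    std::vector<Pt> isoSegment(const Pt tri[3], double z) {
        std::vector<Pt> pts;
        for (int i = 0; i < 3; ++i) {
            const Pt& a = tri[i];
            const Pt& b = tri[(i + 1) % 3];
            if ((a.z < z) != (b.z < z)) {             // this side crosses the level
                double t = (z - a.z) / (b.z - a.z);   // linear interpolation
                pts.push_back({a.x + t * (b.x - a.x),
                               a.y + t * (b.y - a.y), z});
            }
        }
        return pts;
    }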
You may apply a post-smoothing to the curves, but you will have no guarantee of retrieving the original curves and cannot prevent close curves from crossing each other.
Beware that increasing the density of points along the curves will give you a false feeling of accuracy, as the error due to the spacing of the isolines will remain (indeed the reconstructed surface will be cone-like, with one of the curvatures being null; the surface inside the bottommost and topmost lines will be flat).
Alternatively to using flat triangles, one may think of a scheme where you compute a gradient vector at every vertex (for instance from a least-squares fit of a plane to the vertex and its neighbors), and use this information to generate a bivariate polynomial surface in the triangle. You must do this in such a way that the values along a side will coincide for the two triangles that share it. (Unfortunately, I have no formula to give you.)
The isolines are then obtained by a further subdivision of the triangle into smaller triangles, with a flat approximation.
Actually, this is not very different from getting sample points, (Delaunay) triangulating them and fitting piecewise continuous patches to the triangles.
Whatever method you use, be it 2D or 3D, it is useful to reason about what happens if you sweep the range of z values in a continuous way. This thought experiment does reconstruct a 3D surface, which will possess continuity and smoothness properties.
A possible improvement over the crude "flat triangulation" model could be to extend every triangle side between two iso-polylines with sides leading to the next iso-polylines. This way, higher-order (cubic) interpolation can be achieved, giving a smoother reconstruction.
Anyway, you can be sure that this will introduce discontinuities or other types of artifacts.
A mixed method:
flatten the isolines to polylines;
triangulate the polygons formed by the polylines and the borders;
on every node, estimate the surface gradient (least-squares fit of a plane to the node and its neighbors);
in every triangle, consider the two sides along which you need to interpolate and compute the derivative at the endpoints (from the known gradients and the side directions);
use Hermite interpolation along these sides and solve for the desired iso-levels (this step is sketched below);
join the points obtained on both sides.
This method should be a good tradeoff between complexity and smoothness. It does reconstruct a continuous surface (except maybe for the remark below).
Note that in some cases you will obtain three solutions of the cubic. If there are three on each side, join them in order. Otherwise, make a decision on which to join and use the remaining two to close the curve.
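A minimal sketch of the per-side Hermite step (endpoint elevations z0, z1 and endpoint derivatives d0, d1 are assumed to be already projected onto the side direction from the estimated gradients; root-finding is done by sampling plus bisection for simplicity, though a closed-form cubic solver would expose the up-to-three roots directly):

    #include <vector>

    // Cubic Hermite elevation along one side, parameterized by t in [0,1].
    double hermite(double z0, double z1, double d0, double d1, double t) {
        double t2 = t * t, t3 = t2 * t;
        return (2*t3 - 3*t2 + 1) * z0 + (t3 - 2*t2 + t) * d0
             + (-2*t3 + 3*t2) * z1 + (t3 - t2) * d1;
    }

    // All parameters t in [0,1] where the side crosses the given iso-level.
    std::vector<double> isoParams(double z0, double z1, double d0, double d1,
                                  double level) {
        std::vector<double> roots;
        const int N = 64;                            // sampling resolution
        double prev = hermite(z0, z1, d0, d1, 0.0) - level;
        for (int i = 1; i <= N; ++i) {
            double cur = hermite(z0, z1, d0, d1, double(i) / N) - level;
            if ((prev < 0) != (cur < 0)) {           // bracketed a root: bisect
                double lo = double(i - 1) / N, hi = double(i) / N;
                for (int k = 0; k < 40; ++k) {
                    double mid = 0.5 * (lo + hi);
                    double f = hermite(z0, z1, d0, d1, mid) - level;
                    ((f < 0) == (prev < 0) ? lo : hi) = mid;
                }
                roots.push_back(0.5 * (lo + hi));
            }
            prev = cur;
        }
        return roots;
    }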
I am attempting to use Three.js to morph one geometry into another. Here's what I've done so far (see http://stemkoski.github.io/Three.js/Morph-Geometries.html for a live example).
I am attempting to morph from a small polyhedron to a larger cube (both triangulated and centered at the origin). The animating is done via shaders. Each vertex on the smaller polyhedron has two associated attributes, its final position and its final UV coordinate. To calculate the final position of each vertex, I raycasted from the origin through each vertex of the smaller polyhedron and found the point of intersection with the larger cube. To calculate the final UV value, I used barycentric coordinates and the UV values at the vertices of the intersected face of the larger cube.
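For reference, the raycasting step amounts to a ray/triangle intersection test; the Moller-Trumbore formulation is convenient here because it returns the barycentric pair (u, v) directly, which can be reused to interpolate the UVs stored at the triangle's vertices. A minimal sketch with an illustrative vector type (the live example above uses Three.js, but the math is the same):

    #include <cmath>

    struct Vec3 {
        double x, y, z;
        Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    };

    Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }
    double dot(const Vec3& a, const Vec3& b) {
        return a.x*b.x + a.y*b.y + a.z*b.z;
    }

    // Moller-Trumbore: on a hit, t is the distance along the ray and (u, v)
    // are barycentric coordinates w.r.t. (v1, v2); the weight of v0 is 1-u-v.
    bool rayTriangle(const Vec3& orig, const Vec3& dir,
                     const Vec3& v0, const Vec3& v1, const Vec3& v2,
                     double& t, double& u, double& v) {
        const double EPS = 1e-12;
        Vec3 e1 = v1 - v0, e2 = v2 - v0;
        Vec3 p = cross(dir, e2);
        double det = dot(e1, p);
        if (std::fabs(det) < EPS) return false;   // ray parallel to triangle
        double inv = 1.0 / det;
        Vec3 s = orig - v0;
        u = dot(s, p) * inv;
        if (u < 0 || u > 1) return false;
        Vec3 q = cross(s, e1);
        v = dot(dir, q) * inv;
        if (v < 0 || u + v > 1) return false;
        t = dot(e2, q) * inv;
        return t > EPS;                           // hit in front of the origin
    }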
That led to a not awful but not great first attempt. Since (usually) none of the vertices of the larger cube were the final position of any of the vertices of the smaller polyhedron, big chunks of the surface of the cube were missing. So next I refined the smaller polyhedron by adding more vertices as follows: for each vertex of the larger cube, I raycasted toward the origin, and where each ray intersected a face of the smaller polyhedron, I removed that triangular face and added the point of intersection and three smaller faces to replace it. Now the morph is better (this is the live example linked to above), but the morph still does not fill out the entire volume of the cube.
My best guess is that in addition to projecting the vertices of the larger cube onto the smaller polyhedron, I also need to project the edges: if A and B are vertices connected by an edge on the larger cube, then the projections of these vertices on the smaller polyhedron should also be connected by an edge. But then, of course, it is possible that the projected edge will cross over multiple pre-existing triangles in the mesh of the smaller polyhedron, requiring multiple new vertices to be added, retriangulation, etc. It seems that what I actually need is an algorithm to calculate a common refinement of two triangular meshes. Does anyone know of such an algorithm and/or examples (with code) of morphing (between two meshes with different triangulations) as described above?
As it turns out, this is an intricate question. In the technical literature, the algorithm I am interested in is sometimes called the "map overlay algorithm"; the mesh I am constructing is sometimes called the "supermesh".
Some useful works I have been reading about this problem include:
Morphing of Meshes: The State of the Art and Concept, PhD thesis by Jindrich Parus, http://herakles.zcu.cz/~skala/MSc/Diploma_Data/REP_2005_Parus_Jindrich.pdf (chapter 4 especially helpful)
Computational Geometry: Algorithms and Applications (book) by Mark de Berg et al. (chapter 2 especially helpful)
Shape Transformation for Polyhedral Objects (article) by James R. Kent et al., Computer Graphics, 26, 2, July 1992, http://www.cs.uoi.gr/~fudos/morphing/structural-morphing.pdf
I have started writing a series of demos to build up the machinery needed to implement the algorithms discussed in the literature referenced above to solve my original question. So far, these include:
Spherical projection of a mesh: http://stemkoski.github.io/Three.js/Sphere-Project.html
Topological data structure of a THREE.Geometry: http://stemkoski.github.io/Three.js/Topology-Data.html
There is still more work to be done; I will update this answer periodically as I make additional progress, and still hope that others have information to contribute!
If I construct a shape using constructive solid geometry techniques, how can I construct a wireframe mesh for rendering?
I'm aware of algorithms for directly rendering CSG shapes, but I want to convert the shape into a wireframe mesh just once so that I can render it "normally".
To add a little more detail: given a description of a shape such as "a cube here, intersected with a sphere here, subtract a cylinder here", I want to be able to calculate a polygon mesh.
There are two main approaches. If you have a set of polygonal shapes, it is possible to create a BSP tree for each shape, then the BSP trees can be merged. From Wikipedia,
In 1990, Naylor, Amanatides, and Thibault provided an algorithm for merging two BSP trees to form a new BSP tree from the two original trees. This provides many benefits including: combining moving objects represented by BSP trees with a static environment (also represented by a BSP tree), very efficient CSG operations on polyhedra, exact collision detection in O(log n * log n), and proper ordering of transparent surfaces contained in two interpenetrating objects (this has been used for an x-ray vision effect).
The paper can be found here: Merging BSP trees yields polyhedral set operations.
Alternatively, each shape can be represented as a function over space (for example, the signed distance to the surface, taken positive inside the solid). As long as the surface is defined as where the function is equal to zero, the functions can then be combined using (MIN == intersection), (MAX == union), and (NEGATION == complement) operators to mimic the set operations. The resulting surface can then be extracted as the positions where the combined function is equal to zero using a technique like Marching Cubes. Better surface extraction methods like Dual Marching Cubes or Dual Contouring can also be used. This will, of course, result in a discrete approximation of the true CSG surface. I suggest using Dual Contouring, because it is able to reconstruct sharp features like the corners of cubes.
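A minimal sketch of this implicit approach, following the positive-inside convention above (all names are illustrative; the combined field would then be sampled on a voxel grid and polygonized):

    #include <algorithm>
    #include <cmath>

    struct P { double x, y, z; };

    // Positive inside a sphere of radius r centered at c, zero on the surface.
    double sphere(P p, P c, double r) {
        double dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
        return r - std::sqrt(dx*dx + dy*dy + dz*dz);
    }

    // Positive inside an axis-aligned box centered at the origin with
    // half-extents (hx, hy, hz): the min (intersection) of six half-spaces.
    double box(P p, double hx, double hy, double hz) {
        return std::min({hx - std::fabs(p.x),
                         hy - std::fabs(p.y),
                         hz - std::fabs(p.z)});
    }

    // The operators from the text: MIN == intersection, MAX == union,
    // NEGATION == complement, so subtraction is A intersected with not-B.
    double csgUnion(double a, double b)     { return std::max(a, b); }
    double csgIntersect(double a, double b) { return std::min(a, b); }
    double csgSubtract(double a, double b)  { return std::min(a, -b); }

    // Example scene: a box with a spherical bite taken out of one corner.
    double scene(P p) {
        return csgSubtract(box(p, 1, 1, 1), sphere(p, {1, 1, 1}, 0.5));
    }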
These libraries seem to do what you want:
www.solidgraphics.com/SolidKit/
carve-csg.com/
gts.sourceforge.net/
See also "Constructive Solid Geometry for Triangulated Polyhedra" (1990) Philip M. Hubbard doi:10.1.1.34.9374
Here are some Google Scholar links which may be of use.
From what I can tell of the abstracts, the basic idea is to generate a point cloud from the volumetric data available in the CSG model, and then use some more common algorithms to generate a mesh of faces in 3D to fit that point cloud.
Edit: Doing some further research, this kind of operation is called "conversion from CSG to B-Rep (boundary representation)". Searches on that string lead to a useful PDF:
http://www.scielo.br/pdf/jbsmse/v29n4/a01v29n4.pdf
And, for further information, the key algorithm is called the "Marching Cubes Algorithm". Essentially, the CSG model is used to create a volumetric model of the object with voxels, and then the Marching Cubes algorithm is used to create a 3D mesh out of the voxel data.
You could try to triangulate (tetrahedralize) each primitive, then perform the boolean operations on the tetrahedral mesh, which is "easier" since you only need to worry about tetrahedron-tetrahedron operations. Then you can perform boundary extraction to get the B-rep. Since you know the shapes of your primitives analytically, you can construct custom tetrahedralizations of your primitives to suit your needs instead of relying on a mesh generation library.
For example, suppose your object was the union of a cube and a cylinder, and suppose you have a tetrahedralization of both objects. In order to compute the boundary representation of the resulting object, you first label all the boundary facets of the tetrahedra of each primitive object. Then, you perform the union operation: if two tetrahedra are disjoint, then nothing needs to be done; both tetrahedra must exist in the resulting polyhedron. If they intersect, then there are a number of cases (probably on the order of a dozen or so) that need to be handled. In each of these cases, the volume of the two tetrahedra needs to be re-triangulated in a way that respects the surface constraints. This is made somewhat easier by the fact that you only need to worry about tetrahedra, as opposed to more complicated shapes. The boundary facet labels need to be maintained in the process so that in the final collection of tetrahedra, the boundary facets can be extracted to form a triangle mesh of the surface.
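The final boundary extraction step is straightforward once the labels are maintained: a triangular face lies on the boundary iff it belongs to exactly one tetrahedron. A minimal sketch (the outward-facing vertex order in the face table assumes consistently oriented tets; types are illustrative):

    #include <algorithm>
    #include <array>
    #include <map>
    #include <vector>

    using Tri = std::array<int, 3>;
    struct Tet { int v[4]; };   // four vertex indices

    std::vector<Tri> boundaryFaces(const std::vector<Tet>& tets) {
        // Count occurrences of each face, keyed by its sorted vertex triple,
        // while remembering one original (oriented) copy of the face.
        std::map<Tri, std::pair<int, Tri>> faces;
        static const int f[4][3] = {{1,2,3},{0,3,2},{0,1,3},{0,2,1}};
        for (const Tet& t : tets) {
            for (const auto& idx : f) {
                Tri tri = {t.v[idx[0]], t.v[idx[1]], t.v[idx[2]]};
                Tri key = tri;
                std::sort(key.begin(), key.end());
                auto it = faces.find(key);
                if (it == faces.end()) faces[key] = {1, tri};
                else it->second.first++;
            }
        }
        std::vector<Tri> boundary;
        for (const auto& kv : faces)
            if (kv.second.first == 1) boundary.push_back(kv.second.second);
        return boundary;   // a triangle mesh of the surface
    }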
I've had some luck with the BRL-CAD application MGED where I can construct a convex polyhedron by intersecting planes using CSG then extract the boundary representation using the command-line g-stl command. Check http://brlcad.org/
Malcolm
If you can convert your input primitives to polyhedral meshes, then you could use libigl's C++ mesh boolean routines. The following computes the union of a mesh (VA,FA) and another mesh (VB,FB):
igl::mesh_boolean(VA,FA,VB,FB,"union",VC,FC);
where VA is a #VA by 3 matrix of vertex positions and FA is a #FA by 3 matrix of triangle indices into VA, and so on. The technique used in libigl is different from those two mentioned in Joe's answer. All pairs of triangles are intersected against each other (using spatial acceleration) and then resulting sub-triangles are categorized as belonging to the output surface or not.
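For completeness, a minimal sketch of a full call site, assuming a libigl build with CGAL support (the routine has moved between namespaces across libigl versions, living at igl::copyleft::cgal::mesh_boolean in recent releases, so check the headers of your version):

    #include <igl/readOFF.h>
    #include <igl/writeOFF.h>
    #include <igl/copyleft/cgal/mesh_boolean.h>
    #include <Eigen/Core>

    int main() {
        Eigen::MatrixXd VA, VB, VC;   // #V by 3 vertex positions
        Eigen::MatrixXi FA, FB, FC;   // #F by 3 triangle indices
        igl::readOFF("A.off", VA, FA);
        igl::readOFF("B.off", VB, FB);
        igl::copyleft::cgal::mesh_boolean(
            VA, FA, VB, FB, igl::MESH_BOOLEAN_TYPE_UNION, VC, FC);
        igl::writeOFF("union.off", VC, FC);
        return 0;
    }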