I want to implement boolean operations on nonconvex polyhedral objects and render them with OpenGL. I have read about the two predominant techniques for boolean operations on polyhedra: Boundary Representation (BRep) and Constructive Solid Geometry (CSG).
According to some papers, implementing booleans with CSG should be easier, so I am thinking about using CSG rather than BReps.
I know that BReps describe geometry by vertices and polygons, whereas CSG uses basic primitive objects like cylinders or spheres that are combined within a tree structure.
I know that booleans on BReps are implemented by cutting the polygons that intersect and removing the polygons that are not needed (depending on whether the operation is a union, a difference, etc.).
But how are boolean operations implemented in terms of CSG? How can I implement CSG boolean operations? I've already looked on the internet and found, for example, http://evanw.github.io/csg.js/ and https://www.andrew.cmu.edu/user/jackiey/resources/CSG/CSG_report.pdf
The curious thing is that these implementations just use BReps for their booleans. So I don't understand where the advantage of CSG lies, or why CSG booleans should be easier to implement.
You are, in a way, comparing apples and oranges.
CSG is a general way to describe complex solids from primitive ones, an "arithmetic" on solids if you want. This process is independent of the exact representation of those solids. Examples of alternative/complementary modeling techniques are free-form surface generation, generalized cylinders, and algebraic methods.
BRep is one of the possible representations of a solid, based on a vertex/edge/face graph structure. Alternative representations include space-occupancy models such as voxels and octrees.
Usually, a CSG expression is evaluated using the representation at hand; in some cases, the original CSG tree is kept as such, with basic primitives at the leaves.
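To make the tree-kept-as-such idea concrete, here is a minimal sketch (all names are illustrative, not from any real library): primitives sit at the leaves, operations at the internal nodes, and nothing is evaluated until you query the solid.

```python
# Minimal CSG-tree sketch (illustrative names, no real library): primitives
# at the leaves, boolean operations at the internal nodes. The tree stores
# only the expression; membership is evaluated on demand.
from dataclasses import dataclass

@dataclass
class Sphere:
    center: tuple
    radius: float
    def contains(self, p):
        dx, dy, dz = (p[i] - self.center[i] for i in range(3))
        return dx*dx + dy*dy + dz*dz <= self.radius**2

@dataclass
class Node:
    op: str          # 'union', 'intersection' or 'difference'
    left: object     # Node or a primitive such as Sphere
    right: object
    def contains(self, p):
        a, b = self.left.contains(p), self.right.contains(p)
        return {'union': a or b,
                'intersection': a and b,
                'difference': a and not b}[self.op]

# (sphere A) minus (sphere B): no geometry is computed, only the expression.
solid = Node('difference', Sphere((0, 0, 0), 1.0), Sphere((0.5, 0, 0), 0.7))
print(solid.contains((-0.8, 0, 0)))   # True: inside A, outside B
```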
A polyhedral BRep model is conceptually simple to implement; even so, CSG expression evaluation is arduous (polyhedron intersection raises thorny numerical and topological problems).
Rendering of a BRep requires the triangulation of the faces, which can then be handled by a standard rendering pipeline.
A voxel model is simple to implement and makes CSG expressions trivial to process; on the other hand, it gives only a crude approximation of the shapes.
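For instance, with occupancy grids the three booleans are elementwise logic, one line each (NumPy assumed here; any array library works):

```python
# With an occupancy-grid (voxel) representation, CSG booleans reduce to
# elementwise logic (NumPy assumed; resolution limits the accuracy).
import numpy as np

n = 64
idx = np.indices((n, n, n))                    # voxel coordinates
a = ((idx - 24)**2).sum(axis=0) <= 15**2       # sphere A as a boolean grid
b = ((idx - 40)**2).sum(axis=0) <= 15**2       # sphere B

union        = a | b
intersection = a & b
difference   = a & ~b                          # A minus B
```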
A raw CSG tree can be used for direct rendering by the ray-tracing technique: after intersecting the ray with all primitives, you combine the resulting ray sections using the CSG expression. This approach combines a relatively simple implementation with accuracy, at the expense of a high computational cost (everything needs to be repeated on every pixel of the image, and for every view).
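A hedged sketch of that per-ray combination step (pure Python, illustrative names): each primitive reports its entry/exit parameters along the ray, and the boolean merges the spans.

```python
# Merge two span lists along one ray according to a boolean operation.
# Each span is (t_enter, t_exit); span lists are assumed non-overlapping.
def combine(spans_a, spans_b, op):
    def inside(spans, t):
        return any(t0 < t < t1 for t0, t1 in spans)
    # Every span endpoint is a potential boundary of the result; classify
    # the midpoint of each segment between consecutive boundaries.
    cuts = sorted({t for spans in (spans_a, spans_b)
                     for t0, t1 in spans for t in (t0, t1)})
    result = []
    for lo, hi in zip(cuts, cuts[1:]):
        mid = 0.5 * (lo + hi)
        a, b = inside(spans_a, mid), inside(spans_b, mid)
        keep = {'union': a or b,
                'intersection': a and b,
                'difference': a and not b}[op]
        if keep:
            if result and result[-1][1] == lo:
                result[-1] = (result[-1][0], hi)   # extend previous span
            else:
                result.append((lo, hi))
    return result

# A occupies [1, 5] along the ray, B occupies [3, 8]; A minus B leaves [1, 3].
print(combine([(1.0, 5.0)], [(3.0, 8.0)], 'difference'))   # [(1.0, 3.0)]
```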
A CSG model just represents the desired operations (unions, intersections, etc.) applied to transformed primitives. It does not really modify the primitives (such as trimming the corner of a cube). The reason you can see a complex model displayed on the screen is that the rendering engine is doing the trick. There are typically two ways to display a CSG model: the first is to convert it to a BRep model on the fly; the second is to use a direct CSG display algorithm, often based on a scanline algorithm.
So, if you already have a good CSG rendering engine, then you don't have to worry about trimming the BRep model and your life is indeed easier. But if you have to write the rendering engine yourself, you will not save as much time by going with CSG.
In my opinion, CSG is not easier at all, but it is more precise.
Primitives are not stored as cylinders, spheres, and so on; instead, they are stored as surfaces of revolution of curves.
Operations on CSG
If you perform boolean operations on surfaces of revolution sharing the same axis, then you just perform the operation on the generating curves; this is where CSG is better than BRep (see the sketch below).
When the axes are not the same, you have to create a new entry in the CSG tree.
Operations on compatible objects (like boxes joined/cut along the same joining/intersecting surface) can also be done by simply updating their sizes.
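To make the same-axis case concrete, a minimal sketch (illustrative only): union and intersection reduce to max/min of the radius profiles, while a difference generally yields annular cross-sections that a single profile cannot represent.

```python
# Same-axis special case: solids of revolution around one axis are fully
# described by a radius profile r(z), so these booleans act on curves alone.
def union_profile(r_a, r_b):
    return lambda z: max(r_a(z), r_b(z))        # union: larger radius wins

def intersect_profile(r_a, r_b):
    return lambda z: min(r_a(z), r_b(z))        # intersection: smaller wins

# Example: a cylinder (radius 2 over z in [0, 4]) and a sphere (radius 3,
# centered at z = 2), both revolved around the same axis.
cyl    = lambda z: 2.0 if 0.0 <= z <= 4.0 else 0.0
sphere = lambda z: (9.0 - (z - 2.0)**2) ** 0.5 if abs(z - 2.0) <= 3.0 else 0.0

combined = union_profile(cyl, sphere)
print(combined(2.0))   # 3.0: the sphere dominates at its equator
```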
Most implementations do not take such shortcuts, though; instead, every operation is stored in the tree, which makes rendering slow after any change to the tree. The renderer must also perform all the operations, not during the operation itself but in the rendering or pre-rendering stage, which makes a CSG implementation more complicated.
The only advantage I see is that the model can have an analytic representation, which is far more accurate, especially when multiple operations are stacked on top of it.
Related
I'm working on an OpenGL visualisation for navigating a 3D dataset. Briefly, the visualisation takes in a large (~1 million data points) array of matrices, which are then eigendecomposed and visualised as ellipsoids.
I have found that performance improves significantly when I calculate ellipsoid vertex transformations "up front" (i.e., calculate all model transformations once, on the CPU), rather than in shaders (where the model transformations have to be calculated on each draw). For scene navigation, lighting, etc., view and projection transformations are calculated as normal and passed as uniforms to the relevant shaders.
The result of this approach is that the program takes longer to initialise (because the CPU is tied up calculating all the model transformations), but achieves significantly higher frame rates.
I understand from this that it is common to decompose matrices to avoid unnecessary shader computations; however, I haven't come across anything describing this practice of completely pre-calculating the world space.
I understand that this approach is only appropriate for my narrow use case (i.e., where the scene is static, so a vertex's position in world space will never change while the program is running). Apart from that, are there any significant reasons I should avoid doing this?
It's a common optimization to remove redundant transformations from static objects. Your objects are static in the world, so you've collapsed all the redundant transformations right up to the root of your scene, which is not a problem.
Having said that, the performance gain you're seeing probably comes not from the cost of doing the model transform in the shader, but from passing that transform to the shader for each object. You haven't said much about how you organize the ellipsoids, but if you are updating a model-matrix uniform and issuing a DrawElements call for each ellipsoid, that is very slow indeed. Even doing something more exotic, like using instancing and passing each transform in a VBO, you would still have the overhead of updating them, which you can now avoid. If you are not doing this already, you can group your ellipsoid vertices into large arrays and draw them with only a few DrawElements calls.
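For illustration, a minimal NumPy sketch of the "bake once, draw in bulk" idea (function names are made up):

```python
# Sketch (NumPy assumed, names made up): bake each ellipsoid's model matrix
# into its vertices once on the CPU, then concatenate everything into one
# large array so it can be drawn with very few DrawElements calls.
import numpy as np

def bake_vertices(unit_verts, model_matrices):
    """unit_verts: (N, 3) template mesh; model_matrices: list of (4, 4)."""
    homog = np.hstack([unit_verts, np.ones((len(unit_verts), 1))])   # (N, 4)
    world = [(m @ homog.T).T[:, :3] for m in model_matrices]
    return np.concatenate(world)        # upload once as a single static VBO

def bake_indices(unit_indices, n_instances, verts_per_instance):
    """Offset the template's index buffer once per instance."""
    return np.concatenate([unit_indices + i * verts_per_instance
                           for i in range(n_instances)])
```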
Let's say I have a static object and a movable object which can be moved and rotated, what is the best way to very quickly calculate the difference of those two meshes?
Precision here is not so important, speed is though, since I have to use it in the update phase of the main loop.
Given the strict time limit, maybe modifying the static object's vertices and triangles directly is to be preferred. Or should voxels be preferred here instead?
EDIT: The use case is an interactive viewer of a wood panel (a parallelepiped) and a milling tool (a revolved contour, something like these).
The milling tool can be rotated and can work at varying orientations (5 axes).
EDIT 2: The milling tool may not pierce the wood.
EDIT 3: The panel can be as large as 6000x2000mm and the milling tool can be as little as 3x3mm.
If you need the best possible performance, then the generic CSG approach may be too slow for you (though it still depends on the meshes and target hardware).
You may try to find a specialized algorithm, coded for your specific meshes. Let's say you have two cubes, one a 'wall' and the other a 'window': then it's much easier/faster to compute the resulting mesh with custom code than with full CSG. Unfortunately, you don't say anything about your meshes.
You may also try to reduce it to a 2D problem, or use simplified meshes to compute a result that will 'look as expected'.
If the movement of your meshes is somehow limited, you may be able to precompute full or partial results for different mesh combinations to use at runtime.
You may use space partitioning like BSP trees or octrees to divide your meshes during a precomputation stage. This way you could split one big problem into many smaller ones that may be faster to compute, or that at least make the solution multi-threaded.
You mentioned voxels: if you're fine with their look and limits, you may voxelize both meshes and just read and mix the two voxel values instead of one. You would then triangulate the result using an algorithm like Marching Cubes.
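A rough sketch of that voxel route (NumPy assumed; sizes and names are made up):

```python
# Rough sketch of the voxel route (NumPy assumed): the panel is a boolean
# occupancy grid; each frame we carve the tool's own grid out of it in place.
import numpy as np

# Note: the question's 6000x2000 mm panel with a 3 mm tool implies a large
# grid, so a coarse global grid plus finer local grids may be needed.
panel = np.ones((256, 128, 64), dtype=bool)       # solid block of wood

def carve(panel, tool_mask, offset):
    """Boolean difference: remove the tool's voxels at an integer offset."""
    x, y, z = offset
    sx, sy, sz = tool_mask.shape
    region = panel[x:x+sx, y:y+sy, z:z+sz]        # a view into the panel
    region &= ~tool_mask                          # carve in place

# Afterwards, something like skimage.measure.marching_cubes could
# re-triangulate just the modified neighbourhood for display.
```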
Those are all just some general ideas but we'll need better info to help you more.
EDIT:
With your description it looks like you're modeling a bas-relief, so you may use Relief Mapping to fake the effect. It's based on a height map stored as a texture, so you'd only need to update a few pixels of the texture and render a plane. It should be quite fast compared to other approaches; the downside is that, being based on a height map, it can't produce the shapes that a T-slot or dovetail cutter would create.
If you want real geometry, then I'd start with a simple plane as your panel (you don't need full 3D yet, just the front surface) and divide it with a 2D grid. Each grid element should be slightly bigger than the tool size, and every element is a separate mesh. In each frame update you'd cut one, or at most four, elements touched by the tool (see the sketch below). Thanks to this grid, all your cutting operations run against a very simple mesh, so they may reach your intended speed. You can also cut the affected elements in separate threads. After the cutting is done, you upload to the GPU only the modified elements, so you may end up with quite a complex mesh overall but only small modifications per frame.
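A tiny sketch of the grid bookkeeping this implies (assuming a round tool and a uniform grid; all names are made up):

```python
# Which cells does the tool touch this frame? At most 4 when the cell size
# exceeds the tool diameter, so only those small meshes get cut/re-uploaded.
def touched_cells(tool_x, tool_y, tool_radius, cell_size):
    cells = set()
    for cx in (tool_x - tool_radius, tool_x + tool_radius):
        for cy in (tool_y - tool_radius, tool_y + tool_radius):
            cells.add((int(cx // cell_size), int(cy // cell_size)))
    return cells

# a 3 mm tool (radius 1.5) at (1234.5, 777.0) on a 10 mm grid
print(touched_cells(1234.5, 777.0, 1.5, 10.0))   # {(123, 77)}
```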
Recently I've been looking into some different methods of polygon simplification.
Popular methods include the Ramer-Douglas-Peucker path simplification algorithm and Visvalingam's algorithm. While both are good algorithms, in some cases they give poor results because they only ever remove points, never placing points in new locations (both a pro and a con, depending on the usage).
I've been looking into using a simplified segment collapsing method, common for 3D geometry, see: Surface simplification using quadric error metrics.
From some quick tests this works reasonably well; however, I suspect this isn't all that novel, and possibly there are better methods for 2D polygons too.
I also looked into Potrace's method of polygon simplification, which is excellent but focused on simplifying polygons extracted from bitmap images.
Are there well known algorithms for polygon simplification using segment collapsing?
Asking because I'm about to write my own function that uses quadric error metrics, but I suspect this may already exist, possibly under a different name.
If not, I'll link the code once it's done.
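To make the idea concrete, here is roughly the kind of naive greedy collapsing I mean (with a placeholder squared-length error, not a true quadric metric):

```python
# Naive greedy segment collapsing (placeholder error: squared segment
# length; a real quadric metric would accumulate per-vertex line quadrics).
def collapse(poly, target_count):
    """poly: list of (x, y) vertices of a closed polygon; target_count >= 3."""
    pts = list(poly)
    while len(pts) > target_count:
        n = len(pts)
        # the shortest segment is the cheapest to replace by its midpoint
        best = min(range(n), key=lambda i:
                   (pts[i][0] - pts[(i + 1) % n][0])**2 +
                   (pts[i][1] - pts[(i + 1) % n][1])**2)
        j = (best + 1) % n
        a, b = pts[best], pts[j]
        pts[best] = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # new vertex
        del pts[j]
    return pts
```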
The CGAL library provides an implementation of a polyline simplification algorithm.
It is based on the work of Dyken et al.
Let's say I want to create a simple physics object shaped like a matryoshka or a plain snowman. As I see it, I have two options: 1. create two circle (or maybe custom) bodies and connect them with a weld joint, or 2. create one body with two circle (or maybe custom) shapes in it.
So the question is: which is more expensive for the CPU, bodies connected by joints or complex-shaped bodies? With one object I may not feel any difference in performance, but what if I have many objects of that type?
I know that joints are expensive, but maybe custom-shaped bodies are even more expensive?
I'm working with Box2dFlash.
Since the question is about CPU use: joints use more CPU than shapes alone with no joint. Circle shapes can be more efficient than polygons in many cases, but not all.
For CPU optimization, use as few bodies and as simple polygons as possible. For every normal defined in a polygon, a calculation may need to be performed if an object overlaps with another. For circles, a maximum of one calculation is needed.
As an aside, unless you are already experiencing performance problems, you should not worry about whether your shapes give the ideal CPU use. Instead, you should ask whether the simulation they create is the one you want to happen. Box2D contains many special-case optimizations to make it run smoothly. You can also decrease its accuracy per tick by lowering the velocity and position iteration variables, as sketched below. This will have a far greater effect on efficiency than geometry, unless your geometry is extremely complex.
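For illustration, here is how the iteration tuning looks in pybox2d, the Python port (Box2dFlash's b2World.Step takes the same three arguments):

```python
# Iteration tuning in pybox2d: fewer iterations means cheaper but less
# accurate constraint solving per tick.
from Box2D import b2World

world = b2World(gravity=(0, -10))

TIME_STEP = 1.0 / 60
VELOCITY_ITERATIONS = 6    # the Box2D manual suggests 8; lower is cheaper
POSITION_ITERATIONS = 2    # suggested default is 3

world.Step(TIME_STEP, VELOCITY_ITERATIONS, POSITION_ITERATIONS)
```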
If you don't want the two sections of the snowman's body to move in relation to each other, then a single body is the way to go. You will ideally be defining the collision shape(s) manually anyway, so there is absolutely no gain to be had using a weld.
Think of it like this: if you use welds, the collision shapes will be no less complicated than if you simply approximated the collision geometry for a single body; either way you need multiple collision shapes, or a single complex collision shape. The weld simply adds an extra step.
If you need the snowman's body to be dynamic in any way (to break apart or flex), then a weld or regular joint is the way to go.
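A sketch of the single-body snowman in pybox2d terms (the Box2dFlash structure should be analogous; treat the exact keyword arguments as assumptions for your port):

```python
# One dynamic body, two circle fixtures, no joint to solve.
from Box2D import b2World, b2CircleShape

world = b2World(gravity=(0, -10))
snowman = world.CreateDynamicBody(position=(0, 10))
snowman.CreateFixture(shape=b2CircleShape(pos=(0, 0.0), radius=1.0), density=1.0)
snowman.CreateFixture(shape=b2CircleShape(pos=(0, 1.5), radius=0.6), density=1.0)
# Both circles now move rigidly together as one snowman.
```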
I need a matrix representation of a set of geometric primitives (i.e., line, curve, circle, rectangle, and their filled forms). For simplicity you may suppose we are dealing only with lines, for which the answer is already on [SO]; rectangles can therefore be pixelated easily. For the rest of the primitives, however, two questions arise:
1) How do you pixelate a curve, including a circle (a closed curve)?
2) How do you pixelate a filled simple/complex shape (rectangle, multi-patch)?
The simplest way (currently in use) is to utilize a visualization library (such as Matplotlib for Python) to save the result (a map of geometric primitives) as a pixelated image on disk (or in RAM) and then reuse it for the purpose of interest. This method can apparently handle any complexity since, whatever the visualizer does in the background, the output is a 2D image, i.e., a 2D matrix. Some serious problems, however, emerge with this approach:
1) the procedure is very slow!
2) the procedure is not standard but heavily dependent on the settings of the visualizer; low-level configuration is often impossible or difficult. In other words, the black box in use does not give the control over the procedure that is required.
What you are doing is called "scan conversion" of geometric primitives.
For line segments, you already know about the Bresenham algorithm.
There is a similar one for circles, a bit trickier (as regards handling of the endpoints).
General curves are a broader topic. You can think of conics, splines, or hand-drawn curves. One approach is to approximate them with a polyline.
To fill polygons, there is a scanline algorithm (consider a horizontal line sweeping the shape and fill between its intersections with the polygon outline; a sketch appears below).
To fill arbitrary shapes, an option is to draw the outline and use seed filling (from a given internal point).
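To illustrate the scanline fill mentioned above, a minimal Python sketch (even-odd rule, no clipping, and the polygon is assumed to lie inside the grid):

```python
# Scanline fill, even-odd rule: for each pixel row, find where the row
# crosses the polygon outline, sort the crossings, fill between pairs.
def fill_polygon(grid, poly):
    """grid: 2D list of 0/1 pixels; poly: list of (x, y) vertices."""
    for y in range(len(grid)):
        yc = y + 0.5                              # sample at the row center
        xs = []
        for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
            if (y0 <= yc) != (y1 <= yc):          # edge crosses this row
                xs.append(x0 + (yc - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        for left, right in zip(xs[::2], xs[1::2]):
            for x in range(int(left + 0.5), int(right + 0.5)):
                grid[y][x] = 1

grid = [[0] * 20 for _ in range(10)]
fill_polygon(grid, [(2, 1), (17, 3), (9, 8)])     # rasterize a triangle
```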
You will find relevant material at http://www.cse.ohio-state.edu/~gurari/course/cis681/cis681Ch5.html