I'm trying to develop a 3D object editor with Three.js that lets the user perform CSG operations such as union, intersection, and subtraction. I used source code from the "Three.js Editor" for the main functions of loading the STL file and rendering the object.
Now I'd like to add CSG functions so users can perform CSG operations dynamically. I found "CSG.js" and "JSModeler.js" for performing the CSG operations. My intention is to load the STL files, then customise (union, subtract) the objects using CSG operations on the go. My ideal is something like "123D Design", at least for the Boolean operations: select the objects on the canvas and perform the operation.
So, my question is: how should I go about selecting multiple objects on the canvas to perform the CSG operation? I'm new to three.js and still learning, so where should I look for literature or references to achieve this kind of functionality?
Thank you in advance.
You should be able to use ThreeCSG. Here's a tutorial:
http://learningthreejs.com/blog/2011/12/10/constructive-solid-geometry-with-csg-js/
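For the selection part, a common pattern is to pick meshes with THREE.Raycaster on mouse clicks and keep a small list of picked objects; once two are chosen, hand them to the CSG library. A minimal sketch, assuming the ThreeBSP flavor of ThreeCSG (method names vary between forks):

```javascript
// Minimal sketch: click-select two meshes, then union them with ThreeCSG.
// Assumes the ThreeBSP build of ThreeCSG.js; adapt to your build's API.
const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();
let selected = []; // the user's current picks

renderer.domElement.addEventListener('click', (event) => {
  // Convert the click position to normalized device coordinates (-1..+1).
  pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
  pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;
  raycaster.setFromCamera(pointer, camera);

  const hits = raycaster.intersectObjects(scene.children);
  if (hits.length > 0) selected.push(hits[0].object);

  if (selected.length === 2) {
    // Boolean union via BSP trees; subtract()/intersect() work the same way.
    const result = new ThreeBSP(selected[0])
      .union(new ThreeBSP(selected[1]))
      .toMesh(selected[0].material);
    scene.remove(selected[0]);
    scene.remove(selected[1]);
    scene.add(result);
    selected = [];
  }
});
```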
Related
I was looking at three.js as a replacement for deck.gl in our existing WebGL software, for reasons not relevant to this question. One of the input data sources is large vector data exported from CAD systems. One scene integrates about 5 collections of linear features and areas; each such collection is a 10-50 MB SVG. In deck.gl, we did a crazy but very effective hack - we converted the vectors to geo coordinates and used lazy loading via the deck.gl tile layer. It improved the rendering performance tremendously but required additional tweaking, because the majority of the data is still in cartesian coordinates.
I haven't found comparable lazy loading of such large vector data in three.js. There are plenty of format-specific loaders, but they are still just different means of upfront loading. While vector tiles were created in a geographical context, the principle of pre-rendering and pre-tiling data for lazy loading should be universally applicable - yet I could not find any non-geo implementation, let alone one with three.js support. Our geo-hacking was effective, but it never felt correct because the model is naturally cartesian. The internet suggests that Cesium may be more flexible than deck.gl for our new requirements, but we would prefer to avoid the geo-hacking, not dive deeper into it.
What did I miss?
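(For context, the pre-tiling principle itself is not tied to geo coordinates and can be reproduced by hand in plain cartesian space. A minimal sketch of the idea in three.js - the grid layout, tile URLs, and loadTileGeometry() are all invented for illustration, not an existing three.js feature:)

```javascript
// Hypothetical cartesian tiling: lazily load pre-split chunks near the camera.
// Assumes the vector data was pre-tiled offline into files like "tiles/3_-2.json".
const TILE_SIZE = 1000;  // model units per tile (an assumption)
const loaded = {};       // "x_y" -> THREE.Group

function updateTiles(camera, scene) {
  const cx = Math.floor(camera.position.x / TILE_SIZE);
  const cy = Math.floor(camera.position.y / TILE_SIZE);
  for (let dx = -1; dx <= 1; dx++) {
    for (let dy = -1; dy <= 1; dy++) {
      const key = `${cx + dx}_${cy + dy}`;
      if (loaded[key]) continue;
      loaded[key] = new THREE.Group(); // placeholder until the fetch resolves
      scene.add(loaded[key]);
      fetch(`tiles/${key}.json`)
        .then((r) => r.json())
        .then((data) => loaded[key].add(loadTileGeometry(data))); // your own parser
    }
  }
}
// Call updateTiles(camera, scene) from the render loop; tiles far from the
// camera could be removed and disposed the same way to bound memory use.
```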
I'm reading up on Direct2D before I migrate my GDI code to it, and I'm trying to figure out how paths work. I understand most of the work involved with geometries and geometry sinks, but there's one thing I don't understand: the D2D1_FIGURE_BEGIN type and its parameter to BeginFigure().
First, why is this value even needed? Why does a geometry need to know ahead of time whether it's filled or hollow? I don't know of any other drawing API that cares whether path objects are filled ahead of time; you just define the endpoints of the shapes and then call fill() or stroke() to draw your path. So how are geometries any different?
And if this parameter is necessary, how does choosing one value over the other affect the shapes I draw?
Finally, if I understand the usage of this enumeration correctly, you're supposed to use filled paths only with FillGeometry() and hollow paths only with DrawGeometry(). However, the hourglass example here, cited by several method documentation pages (like the BeginFigure() one), creates a filled figure and draws it with both DrawGeometry() and FillGeometry()! Is this undefined behavior? Does it have anything to do with the blue border around the gradient in the example picture, which I don't see anywhere in the code?
Thanks.
EDIT: Okay, I think I understand what's going on with the gradient's weird outline: the gradient is also transitioning alpha values, and the fill is overlapping the stroke because the stroke is centered on the line and the fill is drawn after the stroke. That still doesn't explain why I can fill and stroke with a filled geometry, or what the difference between hollow and filled geometries is...
Also I just realized that hollow geometries are documented as not having bounds. Does this mean that hollow geometries are purely an optimization for stroke-only geometries and otherwise behave identically to a filled geometry?
If you want to better understand Direct2D's geometry system, I recommend studying the WPF geometry system. WPF, XPS, Direct2D, Silverlight, and the newer "XAML" frameworks all use the same building blocks (the same "language", if you will). I found it easier to understand the declarative object-oriented API in WPF, and after that it was a breeze to work with the imperative API in Direct2D. You can think of WPF's mutable geometry system as an implementation of the "builder" pattern from Java, where the build() method is behind the scenes (hidden from you) and spits out an immutable Direct2D geometry when it comes time to render things on-screen. (WPF uses something called "MIL", which, IIRC/AFAICT, Direct2D was forked from. They really are the same thing!) It is also straightforward to write code that converts between the two representations, e.g. walking a WPF PathGeometry and streaming it into a Direct2D geometry sink, and you can also use ID2D1PathGeometry::Stream with a custom ID2D1GeometrySink implementation to reconstitute a WPF PathGeometry.
(BTW this is not theoretical :) It's exactly what I do in Paint.NET 4.0+: I use a WPF-esque declarative, mutable object model that spits out immutable Direct2D geometries at render time. It works really well!)
Okay, anyway, to get directly to your specific question: BeginFigure() and D2D1_FIGURE_BEGIN map directly to the PathFigure.IsFilled property in WPF. In order to get an intuitive understanding of what effect this has, you can use something like KaXAML to play around with some geometries from WPF or Silverlight samples and see what the results look like. And the documentation is definitely better for WPF and Silverlight than for Direct2D.
Another key concept is that DrawGeometry is basically a helper method. You can accomplish the same thing by first widening your geometry with ID2D1Geometry::Widen and then using FillGeometry ("widening" seems like a misnomer to me, btw: in Photoshop or Illustrator you'd probably use a verb like "stroke"). That's not to say that either one always performs better or worse ... be sure to benchmark; I've seen it go both ways. The reason you can think of this as a helper method comes down to the fact that the lowest level of the rasterization engine can only do one thing: fill a triangle. All other drawing "primitives" must be converted to triangle lists or strips (this is also why ID2D1Mesh is so fast: it bypasses all sorts of processing code!). Filling a geometry requires tessellating its interior into a list of triangle strips, which can then be filled by Direct3D. "Drawing" a geometry requires applying a stroke (width and/or style): even a simple 1-pixel-wide straight line must first be converted to 2 filled triangles.
Oh, also, if you want to compute the "real" bounds of a geometry with hollow figures, use ID2D1Geometry::GetWidenedBounds with a strokeWidth of zero. This is a discrepancy between Direct2D and WPF that puzzles me. Geometry.Bounds (in WPF) is equivalent to ID2D1Geometry::GetWidenedBounds(0.0f).
I want to implement boolean operations on nonconvex polyhedral objects and want to render them with OpenGL. I have read about the two predominant techniques to do boolean operations on polyhedra: Boundary Representation (BReps) and Constructive Solid Geometry (CSG).
According to some papers, implementing booleans with CSG should be easier, so I am thinking about using CSG rather than BReps.
I know that BReps describe geometry by vertices and polygons, whereas CSG uses basic primitive objects like cylinders or spheres that are combined within a tree structure.
I know that boolean operations on BReps are implemented by cutting the polygons that intersect and removing the polygons that are not needed (depending on whether the operation is a union or a difference or ...).
But how are boolean operations implemented in terms of CSG? How can I implement CSG boolean operations? I've already looked on the internet and found, for example, http://evanw.github.io/csg.js/ and https://www.andrew.cmu.edu/user/jackiey/resources/CSG/CSG_report.pdf
The curious thing is that these algorithms just use BReps for their booleans. So I don't understand where the advantage of CSG should be or why CSG booleans should be easier to implement.
You are, in a way, comparing apples and pears.
CSG is a general way to describe complex solids from primitive ones, an "arithmetic" of solids if you want. This process is independent of the exact representation of these solids. Examples of alternative/complementary modeling techniques are free-form surface generation, generalized cylinders, algebraic methods...
BRep is one of the possible representations of a solid, based on a vertex/edge/face graph structure. Some alternative representations are space-occupancy models such as voxels and octrees.
Usually, a CSG expression is evaluated using the representation on hand; in some cases, the original CSG tree is kept as such, with basic primitives at the leaves.
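To make the "tree structure" concrete: a CSG model is just an expression tree with primitives at the leaves and set operations at the inner nodes. A sketch of such a structure (the field names are illustrative):

```javascript
// (cube ∪ cylinder) − sphere, as a plain expression tree.
const csgTree = {
  op: 'difference',
  left: {
    op: 'union',
    left:  { primitive: 'cube',     size: [2, 2, 2] },
    right: { primitive: 'cylinder', radius: 0.5, height: 3 }
  },
  right: { primitive: 'sphere', radius: 1.2 }
};
// Evaluation walks the tree and combines whatever representation is on hand.
```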
A polyhedral BRep model is conceptually simple to implement; however, CSG expression evaluation on it is arduous (polyhedron intersection raises thorny numerical and topological problems).
Rendering of a BRep requires the triangulation of the faces, which can then be handled by a standard rendering pipeline.
A voxel model is both simple to implement and makes the CSG expressions trivial to process; on the other hand it gives a crude approximation of the shapes.
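For example, with two occupancy grids of the same dimensions, the booleans reduce to one bitwise operation per voxel:

```javascript
// a and b are flat Uint8Array occupancy grids (1 = solid) of equal length.
const voxelUnion     = (a, b) => a.map((v, i) => v | b[i]);
const voxelIntersect = (a, b) => a.map((v, i) => v & b[i]);
const voxelSubtract  = (a, b) => a.map((v, i) => v & ~b[i] & 1);
```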
A raw CSG tree can be used for direct rendering by the ray-tracing technique: after traversing all primitives by a ray, you combine the ray sections using the CSG expression. This approach combines a relatively simple implementation with accuracy, at the expense of a high computational cost (everything needs to be repeated on every pixel of the image, and for every view).
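The per-ray combination step is interval arithmetic on the [tIn, tOut] spans where the ray is inside each solid. A simplified sketch for convex primitives, which contribute at most one span each (general solids need sorted span lists rather than single spans):

```javascript
// Roth-style CSG ray classification, simplified to one span per operand.
// A span is { tIn, tOut }, or null when the ray misses the solid.
function combine(op, a, b) {
  if (op === 'union') {
    if (!a || !b) return a || b;
    if (b.tOut < a.tIn || a.tOut < b.tIn)        // disjoint: keep nearest span
      return a.tIn < b.tIn ? a : b;
    return { tIn: Math.min(a.tIn, b.tIn), tOut: Math.max(a.tOut, b.tOut) };
  }
  if (op === 'intersection') {
    if (!a || !b) return null;
    const tIn = Math.max(a.tIn, b.tIn);
    const tOut = Math.min(a.tOut, b.tOut);
    return tIn < tOut ? { tIn, tOut } : null;
  }
  if (op === 'difference') {
    if (!a || !b || b.tOut <= a.tIn || a.tOut <= b.tIn) return a; // no overlap
    if (b.tIn > a.tIn) return { tIn: a.tIn, tOut: b.tIn };    // front part survives
    if (b.tOut < a.tOut) return { tIn: b.tOut, tOut: a.tOut }; // back part survives
    return null; // b completely covers a
  }
}
// For shading, only the nearest surviving tIn along each pixel's ray is needed.
```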
A CSG model just represents the desired operations (unions, intersections, etc.) applied to transformed primitives. It does not really modify the primitives (such as trimming the corner of a cube). The reason you can see a complex model displayed on the screen is that the rendering engine is doing the trick. When displaying a CSG model there are typically two ways: the first is to convert it to a BRep model on the fly, and the second is to use a direct CSG display algorithm, which is often based on a scanline algorithm.
So, if you already have a good CSG rendering engine, then you don't have to worry about trimming the BRep model, and your life is indeed easier. But if you have to write the rendering engine yourself, you will not save much time by going with CSG.
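For instance, csg.js (linked in the question) takes the first route: every boolean is evaluated eagerly via BSP-tree clipping and immediately yields a new polygon set that a renderer can consume. Its use looks roughly like this:

```javascript
// csg.js evaluates booleans eagerly into a BRep-like polygon soup.
const cube   = CSG.cube({ center: [0, 0, 0], radius: 1 });
const sphere = CSG.sphere({ center: [0, 0, 0], radius: 1.3 });

const result   = cube.subtract(sphere);  // also union(), intersect()
const polygons = result.toPolygons();    // plain polygons, ready to triangulate
```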
In my opinion, CSG is not easier at all, but it is more precise. The primitives are not meshed cylinders, spheres, ...; instead, surfaces of revolution of curves are used.
Operations on CSG
If you do (bool) operations on surfaces of revolution with the same axis, then you just do the operation on the curves ... (this is where CSG is better than BRep). When the axes are not the same, you have to create a new entry in the CSG tree. Also, operations on compatible objects (like boxes joined/cut along the same joining/intersecting surface) can be done by simply updating their sizes ...
But most implementations do not do such things; instead, every operation is stored in the tree, which makes rendering slow after any change to the tree. Also, the renderer must do all the operations not during the operation itself but in the rendering or pre-rendering stage, which makes the CSG implementation more complicated.
The only advantage I see in it is that the model can have an analytic representation, which is far more accurate, especially if multiple operations are stacked on top of it...
I'm a structural engineering master's student working on a seismic evaluation of a temple structure in Portugal. For the evaluation, I have created a 3D block model of the structure and will use a discrete element code to analyze the behaviour of the structure under a variety of seismic (earthquake) records. The software I will use for the analysis can produce snapshots of the structure at regular intervals, which can then be put together to make a movie of the response. However, producing the images slows down the analysis. Furthermore, since the pictures are 2D images from a specified angle, there is no way to rotate and view the response from other angles without re-running the model (a process that currently takes 3 days of computer time).
I am looking for an alternative method for creating a movie of the response of the structure. What I want is a very lightweight solution, where I can just bring in the block model which I have and then produce the animation by feeding in the location and the three principal axis of each block at regular intervals to produce the animation on the fly. The blocks are described as prisms with the top and bottom planes defining all of the vertices. Since the model is produced as text files, I can modify the output so that it can be read and understood by the animation code. The model is composed of about 180 blocks with 24 vertices per block (so 4320 vertices). The location and three unit vectors describing the block axis are produced by the program and I can write them out in a way that I want.
The main issue is that the quality of the animation should be decent. If the system is vector based and allows for scaling, that would be great. I would like to be able to rotate the model in real time with simple mouse dragging without too much lag or other issues.
I have very limited time (in fact, I am already very behind). That is why I wanted to ask the experts here, so that I don't waste my time on something that will not work in the end. I have been using Rhino and Grasshopper to generate my model, but I don't think they are the right tools for this purpose. I was thinking that Processing might be able to handle this, but I don't have any experience with it. Another thing that I would like is to perhaps have a 3D PDF file for distribution, but I'm not sure if this can be done with 3D PDF.
Any insight or guidance is greatly appreciated.
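(To illustrate how little machinery the data described above needs: in a scene-graph library such as three.js, a position plus three unit axis vectors per block per frame maps directly onto a basis matrix. The per-frame record layout below is invented for illustration; the API calls are standard three.js:)

```javascript
// Drive rigid blocks from the analysis output, one THREE.Mesh per prism.
// Assumed record per block per frame: { p: [x,y,z], x: [...], y: [...], z: [...] }
// where x/y/z are the block's unit axes -- a layout invented for illustration.
const basis = new THREE.Matrix4();

function applyFrame(blocks, frame) {
  frame.forEach((rec, i) => {
    const mesh = blocks[i]; // built once at load time from the block model
    basis.makeBasis(
      new THREE.Vector3().fromArray(rec.x),
      new THREE.Vector3().fromArray(rec.y),
      new THREE.Vector3().fromArray(rec.z)
    );
    mesh.quaternion.setFromRotationMatrix(basis);
    mesh.position.fromArray(rec.p);
  });
  // Render after each frame update; OrbitControls gives free mouse rotation.
}
```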
Don't let the name fool you: BluffTitler DX9, a commercial product, may be what you're looking for.
Its simple interface makes for a fast learning curve, and there are many quick tutorials to either watch or dissect. Depending on how fast your GPU is, real-time previews are scalable.
Reference:
Model Layer Page
User Submitted Gallery (3D models)
Jim Merry from tetra4D here. We make the 3D CAD conversion tools for Acrobat X to generate 3D PDFs. Acrobat has a 3D JavaScript API that enables you to manipulate objects, i.e., you could drive translations, rotations, etc. of objects from your animation information after translating your model to 3D PDF. I'm not sure I would recommend this approach if you are in a hurry, however. Also, I don't think there are any commercial 3D PDF generation tools for the formats you are using (Rhino, Grasshopper, Processing).
If you are trying to animate geometric deformations, 3D PDF won't really help you at all. You could capture the animation, encode it as Flash video, and embed it in a PDF, but this is a function of the multimedia tooling in Acrobat Pro, i.e., it is not specific to 3D.
I've been trying to figure out how you'd take a mesh generated in a program like 3ds max and bring that into your game with animations, textures, etc.
I've looked at FBX and Collada, but from what I've read, they're used as an intermediate step between the modelling software and some final format that may be custom to the game. What I'm looking for is a book or tutorial that would go over in a general way what you would store in your custom file, how you would store animation data, etc.
Right now I don't really have a general plan of attack and all of the guides I've seen stick to rendering a few triangles.
It doesn't have to be implementation specific to OpenGL, although that is what I'll be using.
Yes, Collada is an interchange format.
What that means is that it is very generic. And if I am right, that is exactly what you are looking for!
You can use a library such as Assimp to load Collada into a generic scene graph, and then have your game/renderer use it directly, or preprocess it and then consume it.