CGAL - Preserve attributes on vertices using corefine boolean operations

I'm relatively new to CGAL, so please pardon me if there is an obvious fix to this that I'm missing.
I was following this example on preserving attributes on facets of a polyhedral mesh after a corefine+boolean operation:
https://github.com/CGAL/cgal/blob/master/Polygon_mesh_processing/examples/Polygon_mesh_processing/corefinement_mesh_union_with_attributes.cpp
I wanted to know if it was possible to construct a Visitor struct which similarly operates on vertices of a polyhedral mesh. Ideally, I'd like to interpolate property values (a vector of doubles) from the original mesh onto the new boolean output vertices, but I can settle for imputing nearest neighbor values.
The difficulty I was running into is that the after_subface_created and after_face_copy functions overridden in the Visitor are called before the halfedge structure of the target face is set up, and hence I'm not sure how to access the vertices of the target face. Is there a way to use the Visitor structure within corefinement to do this?

In an older version of the code I used to have a visitor handling vertex creation/copy, but it has not been backported (due to lack of time). What you are trying to do is a good workaround, but you should use the visitor only to collect the information (say, fill a map [input face] -> std::vector<output face>) and process that map once the algorithm is done.
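A minimal sketch of that workaround, assuming CGAL::Surface_mesh and a reasonably recent CGAL release (the visitor base class and the exact member signatures may differ between versions): the visitor only records face descriptors, so it does not matter that the halfedge structure is not yet set when the callbacks fire, and all per-vertex work happens after the boolean operation.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/corefinement.h>

#include <map>
#include <utility>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                      Mesh;
typedef boost::graph_traits<Mesh>::face_descriptor          face_descriptor;

namespace PMP = CGAL::Polygon_mesh_processing;

// (mesh, face) pairs so that faces coming from the two input meshes cannot collide
typedef std::pair<const Mesh*, face_descriptor> Tagged_face;

struct Face_tracking_visitor
  : public PMP::Corefinement::Default_visitor<Mesh>
{
  // filled during the operation: input face -> faces produced from it
  std::map<Tagged_face, std::vector<Tagged_face> >* descendants;
  Tagged_face current_split;

  // a face of an input mesh is about to be split by the corefinement
  void before_subface_creations(face_descriptor f_split, Mesh& tm)
  { current_split = Tagged_face(&tm, f_split); }

  // only the face descriptor is recorded; the halfedge structure of
  // f_new is not needed at this point
  void after_subface_created(face_descriptor f_new, Mesh& tm)
  { (*descendants)[current_split].push_back(Tagged_face(&tm, f_new)); }

  // a face of an input mesh has been copied into the output mesh
  void after_face_copy(face_descriptor f_src, Mesh& tm_src,
                       face_descriptor f_tgt, Mesh& tm_tgt)
  { (*descendants)[Tagged_face(&tm_src, f_src)].push_back(Tagged_face(&tm_tgt, f_tgt)); }
};

You would then pass the visitor via a named parameter, e.g. PMP::corefine_and_compute_union(mesh1, mesh2, out, CGAL::parameters::visitor(vis)), and afterwards walk the recorded map: for each output face, loop over its vertices (for instance with vertices_around_face(halfedge(f, out), out)) and either copy the property vector of the nearest vertex of the corresponding input face, or blend the input values using the barycentric coordinates of the new vertex inside the old face.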

Related

Why should I use the node structure for a non-rigged model?

I've been working with Vulkan for a few weeks. I have a question about the data structure of a model:
I'm using Assimp for loading fbx and dae files. The loaded model then usually contains several nodes (a RootNode and its ChildNodes).
Should I keep this structure for non-rigged models? Or could I transform all the meshes (or rather their vertices) into world space at load time by multiplying with the offset matrix of the node (and then drop the node structure in my program)? I've never seen anyone transform a node (and its meshes) after loading if the model isn't rigged.
Or is there any other reason why I should keep this structure?
Assimp also offers the flag aiProcess_PreTransformVertices to flatten the transformation hierarchy. You can also do it manually, of course, by using the aiNode::mTransformation matrices and multiplying them in the right order.
A potential problem with these approaches (especially with the flag) is that the material properties of sub-meshes can get lost. Assimp doesn't care whether sub-meshes have different material properties and just merges them, so the material properties of a particular sub-mesh can get lost. The same applies to other properties such as sub-mesh names: if sub-meshes get merged, only one (arbitrarily chosen?) sub-mesh name remains.
In other words, you'll want to avoid flattening the node hierarchy if you would like to use specific properties (names, material properties) of the sub-meshes.
If your concern is only about transforming them to world space: if you are never going to do anything with the nodes in object space (like, e.g., transforming a sub-node relative to a parent node), then I don't see a reason not to transform them into world space.
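If you go the manual route, a rough sketch looks like this (C++ against the Assimp API; FlattenNode and FlatVertex are made-up names, and normals and texture coordinates are left out for brevity; normals would additionally need the inverse-transpose of the accumulated matrix):

#include <assimp/scene.h>

#include <vector>

struct FlatVertex { aiVector3D position; };

void FlattenNode(const aiScene* scene,
                 const aiNode* node,
                 const aiMatrix4x4& parentTransform,
                 std::vector<FlatVertex>& out)
{
    // child transforms are relative to their parent, so accumulate them
    const aiMatrix4x4 globalTransform = parentTransform * node->mTransformation;

    for (unsigned m = 0; m < node->mNumMeshes; ++m)
    {
        const aiMesh* mesh = scene->mMeshes[node->mMeshes[m]];
        for (unsigned v = 0; v < mesh->mNumVertices; ++v)
            out.push_back({ globalTransform * mesh->mVertices[v] }); // bake into world space
    }

    for (unsigned c = 0; c < node->mNumChildren; ++c)
        FlattenNode(scene, node->mChildren[c], globalTransform, out);
}

// usage: FlattenNode(scene, scene->mRootNode, aiMatrix4x4(), vertices);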

Understanding of NurbsSurface

I want to create a NURBS surface in OpenGL. I use a grid of control points of size 40x48. In addition, I create indices in order to determine the order of the vertices.
In this way I created my surface out of triangles.
Just to avoid misunderstandings, I have
float[] vertices = x1,y1,z1,x2,y2,z2,x3,y3,z3,... and
float[] indices = 1,6,2,7,3,8,...
Now I don't want to draw triangles. I would like to interpolate the surface points. I thought about NURBS or B-splines.
The point is:
in order to apply the NURBS algorithms I have to interpolate patch by patch. In my understanding, one patch is defined by, for example, points 1,6,2,7 or 2,7,3,8 (please see the picture).
First of all I created the vertices and indices in order to use a vertex shader.
But actually it would be enough to draw it the old way. In that case I would define vertices and indices as follows:
float[] vertices = v1,v2,v3,... with v = x,y,z
and
float[] indices = 1,6,2,7,3,8,...
In GLU there is a ready-to-use NURBS interface (gluNewNurbsRenderer), so I can render a patch easily.
Unfortunately, I fail at the point of how to stitch the patches together. I found an explanation (the Teapot example) but (maybe I have become obsessed by this) I can't transfer the solution to my case. Can you help?
You have a set of control points from which you want to draw a surface. There are two ways you can go about this.
Option 1: Calculate the vertices from the control points on the CPU and pass them down the graphics pipeline with GL_TRIANGLES as the topology; this is what the Teapot example you linked describes. Remember that the graphics hardware needs triangulated data in order to draw. This chapter shows how to evaluate vertices from control points:
http://www.glprogramming.com/red/chapter12.html
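A minimal sketch of that CPU-side evaluation, assuming a single bicubic Bezier patch (4x4 control points); a full NURBS evaluator additionally needs knot vectors and weights, as the chapter above describes, and the names (Vec3, B, EvaluatePatch) are just for illustration:

#include <array>

struct Vec3 { float x, y, z; };

static Vec3 operator*(float s, const Vec3& v) { return { s * v.x, s * v.y, s * v.z }; }
static Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// cubic Bernstein basis function i evaluated at t
static float B(int i, float t)
{
    const float s = 1.0f - t;
    switch (i) {
        case 0:  return s * s * s;
        case 1:  return 3.0f * t * s * s;
        case 2:  return 3.0f * t * t * s;
        default: return t * t * t;
    }
}

// evaluate the patch at parameters (u, v) in [0,1]^2
Vec3 EvaluatePatch(const std::array<std::array<Vec3, 4>, 4>& cp, float u, float v)
{
    Vec3 p{ 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            p = p + B(i, u) * B(j, v) * cp[i][j];
    return p;
}

Sampling (u, v) on a regular grid and connecting neighbouring samples into triangles produces exactly the kind of vertex/index data you already have, ready to be drawn with GL_TRIANGLES.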
Option 2: Treat each set of control points as a patch and let the tessellation shaders triangulate and stitch those points together. For this you submit the control points with the GL_PATCHES primitive and pass them to the tessellation control shader, where you specify the inner and outer tessellation levels you want. Depending on those levels, the patch is subdivided by a fixed-function stage known as the primitive generator, and the generated vertices are passed to the tessellation evaluation shader, where you compute the actual surface position for each of them.
I would suggest you keep your VBO and IBO of control points as they are and draw them with the GL_PATCHES primitive, following a tutorial on how to use tessellation shaders to draw NURBS surfaces.
Note: the second method is kind of tricky and you will have to read a lot of research papers. If you don't want to go with the modern pipeline, I suggest going with option 1.
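A minimal sketch of the option-2 draw setup, assuming desktop GL 4.x (or GLES 3.2 bindings), bicubic patches of 16 control points, and an already-linked program that contains vertex, tessellation control, tessellation evaluation and fragment shaders; the NURBS evaluation itself lives in those shaders and is not shown here:

#include <GL/glew.h>  // or whatever loader exposes the GL 4.x entry points

// program must already contain vertex, tessellation control,
// tessellation evaluation and fragment shaders
void DrawBicubicPatches(GLuint program, GLuint vao, GLsizei patchCount)
{
    glUseProgram(program);
    glBindVertexArray(vao);

    // 16 control points make up one bicubic patch
    glPatchParameteri(GL_PATCH_VERTICES, 16);

    // the tessellation control shader sets gl_TessLevelInner/gl_TessLevelOuter,
    // the primitive generator subdivides the abstract patch, and the
    // tessellation evaluation shader computes the surface position of
    // every generated vertex
    glDrawArrays(GL_PATCHES, 0, patchCount * 16);
}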

Obtain indices from .obj

I don't know OpenGL ES well, but I have to use a .obj 3D model in my Android app.
In the .obj file I can find vertices, texcoords and normals, but there are no indices; instead there are face elements.
Can anyone clearly explain to me how to obtain indices from a .obj file?
This is possibly a little hard to explain clearly because what OBJ considers a vertex is not what OpenGL considers a vertex. Let's find out...
An OBJ file establishes lists of obj-vertices (v), texture coordinates (vt) and normals (vn). You probably don't ever want to hand these to OpenGL directly (but skip to the end for the caveat). They're just a way for your loading code to establish the meaning of v1, vt3, etc.
The only place that OpenGL-vertices are specified is within the f statement. E.g. v1/vt1/vn1 means "the OpenGL vertex with the location, texture coordinate and normal specified back in those lists".
So a workable loading approach is, in pseudocode:
instantiate an empty hash map from v/vt/vn triples to opengl-vertex indices, an empty opengl-vertex list, and an empty list of indices for later submission to glDrawElements;
for each triple in the OBJ file:
look into the hash map to determine whether it is already in the opengl-vertex list and, if so, get its index and add it to your elements list;
if not, assign the next available index to the triple (so this is just an incrementing number), put that index into the hash map and the elements list, and combine the triple into a new entry in the opengl-vertex list.
You can try to do better than that by minimising the potentially highly random access this implies to your opengl-vertex list at drawing time, but don't prematurely optimise.
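A compact C++ sketch of that scheme, purely for illustration (on Android you would typically write the equivalent in Java or Kotlin). Parsing the f lines into index triples is assumed to have been done already, OBJ's 1-based indices are assumed to have been converted to 0-based, and std::map stands in for a real hash map simply to avoid writing a hash function for the triple:

#include <array>
#include <cstdint>
#include <map>
#include <vector>

struct GlVertex { float px, py, pz, u, v, nx, ny, nz; };

struct ObjData {
    std::vector<std::array<float, 3>> positions;  // from "v" lines
    std::vector<std::array<float, 2>> texcoords;  // from "vt" lines
    std::vector<std::array<float, 3>> normals;    // from "vn" lines
    std::vector<std::array<int, 3>>   triples;    // v/vt/vn per face corner (0-based)
};

void BuildBuffers(const ObjData& obj,
                  std::vector<GlVertex>& vertices,      // for glBufferData
                  std::vector<std::uint32_t>& elements) // for glDrawElements
{
    std::map<std::array<int, 3>, std::uint32_t> seen;   // triple -> opengl-vertex index

    for (const auto& t : obj.triples) {
        auto it = seen.find(t);
        if (it == seen.end()) {
            // first time we meet this v/vt/vn combination: emit a new OpenGL vertex
            const auto& p  = obj.positions[t[0]];
            const auto& uv = obj.texcoords[t[1]];
            const auto& n  = obj.normals[t[2]];
            it = seen.emplace(t, static_cast<std::uint32_t>(vertices.size())).first;
            vertices.push_back({ p[0], p[1], p[2], uv[0], uv[1], n[0], n[1], n[2] });
        }
        elements.push_back(it->second); // always add the index to the element list
    }
}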
Caveat:
If your GPU supports vertex texture fetch (i.e. texture sampling within the vertex shader) then you could just supply the triples directly to OpenGL, having accumulated the obj-vertices, etc, into texture maps, and do the indirect lookup in your vertex shader. With vertex texture fetch, textures really just become random access 2d arrays. However, many Android GPUs don't support vertex texture fetch (even if they support ES 3 which ostensibly makes it a requirement, as it allows an implementation to specify that it supports a maximum of zero samplers).

How to implement batches using webgl?

I am working on a small game using WebGL. Within this game I have some kind of forest which consists of many (100+) tree objects. Because I only have a few different tree models, I rotate and scale these models in different ways before I display them.
At the moment I loop over all trees to display them:
for (var i = 0; i < trees.length; i++) {
    trees[i].display();
}
While the display() method of tree looks like:
display : function() { // tree
    this.treeModel.setRotation(this.rotation);
    this.treeModel.setScale(this.scale);
    this.treeModel.setPosition(this.position);
    this.treeModel.display();
}
Many tree objects share the same treeModel object, so I have to set the rotation/scale/position of the model every time before I display it. The rotation/scale/position values are different for every tree.
The display method of treeModel does all the gl stuff:
display : function() { // treeModel
    // bind texture
    // set uniforms for projection/modelview matrix based on rotation/scale/position
    // bind buffers
    // drawArrays
}
All tree models use the same shader but can use different textures.
Because a single tree model consists of only a few triangles, I want to combine all trees into one VBO and display the whole forest with one drawArrays() call.
Some assumptions to make talking about numbers easier:
There are 250 trees to display
There are 5 different tree models
Every tree model has 50 triangles
Questions I have:
At the moment I have 5 buffers that are 50 * 3 * 8 (position + normal + texCoord) * floatSize bytes large. When I want to display all trees with one VBO, I would have a buffer of 250 * 50 * 3 * 8 * floatSize bytes. I think I can't use an index buffer because I have different position values for every tree (computed from the position value of the tree model and the tree position/scale/rotation). Is this correct, or is there still a way I can use index buffers to reduce the buffer size at least a bit? Maybe there are other ways to optimize this?
How to handle different textures of the tree models? I can bind all textures to different texture units but how can I decide within the shader which texture should be used for the fragment that is currently displayed?
When I want to add a new tree (or any other kind of object) to this buffer at runtime: Do I have to create a new buffer and copy the content? I think new values can't be added by using glMapBuffer. Is this correct?
Element index buffers can only address vertices at indices up to 65535, so you would need to use drawArrays instead. It's usually not a big loss.
You can append trees to the end of the buffers using gl.bufferSubData.
If your textures are of reasonable size (like 128x128 or 256x256), you can probably merge them into one big texture and handle the whole thing with the UV coordinates. If not, you can add another attribute saying which texture a vertex belongs to and branch on it in the shader, or alternatively use an array of sampler2Ds (not sure that works, never tried it). Remember that conditions in shaders are pretty slow.
If you decide to stick with your current solution, make sure to sort the trees so the ones using the same texture are rendered after each other - keeping state switching down is essential, always.
A few thoughts:
Once you plant a tree in your world, do you ever modify it? Will it animate at all? Or is it just static geometry? If it's truly static, you could always build a single buffer with several copies of each tree. As you append trees, first apply (in JavaScript) that instance's world transform to the vertices. If you use triangle strips, you can link trees together using degenerate triangles.
You could roll your own pseudo-instanced drawing:
Encode an instance ID in the array buffer. Just set this to the same value for all vertices that are part of the same tree instance. I seem to recall that you can't have non-float vertex attributes in ES GLSL (maybe that's a Chrome limitation), so you will need to bring it in as a float but use it as an int. Since it's coming in as a float, you will have to deal with the fact that it's interpolated across your triangle, so the value will have minor fluctuations - but simply rounding to the nearest integer fixes that right up.
Use a separate texture (which I will call the data texture) to encode all the per-instance information. In your vertex shader, look at the instance ID of the current vertex and use that to compute a texture coordinate in the data texture. Pull out whatever you need to transform the current vertex, and apply it. I think this is called a "dependent texture read", which is generally frowned upon because it can cause performance issues, but it might help you batch your geometry, which can help solve performance issues. If you're interested, you'll have to try it and see what happens.
Hope for an extension to support real instanced drawing.
Your current approach isn't so bad. I'd say: Stick with it until you hit some wall.
50 triangles is already a reasonable batch size for a single drawElements/drawArrays call. It's not optimal, but also not so bad. So for every tree, change parameters like location, texture and maybe shape through uniforms, then issue a draw call. A total of 250 drawElements calls isn't so bad either.
So I'd use one single VBO that contains all the used tree geometry variants. I'd actually split the trees up into building blocks, so that I could recombine them for added variety, and for each tree set appropriate offsets into the VBO before calling drawArrays or drawElements.
Also don't forget that you can do very cheap field-of-view culling of each tree.

Quickest Way to create Face and Edge objects in Sketchup

I have to render a mesh of a few thousand polygons in Google SketchUp. I find that add_face tends to get slower as the number of faces in the model increases. I believe this is due to some edge-detection algorithm that SketchUp runs behind the scenes. Hopefully there is some way to suppress this edge detection, or whatever other processing SketchUp is doing, until all faces have been added to the model.
I found add_faces_from_mesh and fill_from_mesh to be much faster, but I end up with a mesh consisting of Surface instances instead of the Face and Edge objects I am looking for.
So, what is the fastest way to generate a model consisting of Face and Edge objects in Sketchup? Is there a way to generate Edge and Face objects from a Surface object?
Update: I just read here that Model::start_transaction and Model::commit_transaction can be used to speed things up, but I found that the improvements are not very significant. Anything else I can do?
"I found add_faces_from_mesh and fill_from_mesh to be much faster but I end up with a mesh consisting of Surface instances instead of the Face and Edge objects I am looking for."
Calling add_faces_from_mesh or fill_from_mesh with the smooth_flags parameter explicitly set to zero correctly constructs Face and Edge objects. The SketchUp documentation claims that smooth_flags defaults to zero... my trials show otherwise.
Just to clarify - add_faces_from_mesh and fill_from_mesh do add Edges and Faces - however, the default behaviour is to create a mesh with soft and smooth edges. When you have a set of Faces connected by soft edges, they are treated as a surface by SketchUp and the Entity Info window will say "Surface" when you select them.
However, internally it's still just a set of edges and faces - SketchUp has no Surface entity.
As for Model::start_transaction - you must specify true for the second disable_ui argument in order to see any speed gains. But as you have noticed, SketchUp is very slow at adding entities - the more there already is in the entities collection you add to, the slower it gets. The absolute fastest way to add entities is fill_from_mesh.
