Difference between buffer geometry and geometry - three.js

I'm new to three.js and have researched all the core topics such as camera, renderer, scene, and geometry. Coming to geometry, there are both Geometry and BufferGeometry classes (say, ConeGeometry and ConeBufferGeometry), and the features seem the same in both. So what's the difference between Geometry and BufferGeometry? Does it influence performance or anything else?

The difference is essentially in the underlying data structures (how the geometry stores and handles vertices, faces, etc. in memory).
For learning purposes you shouldn't worry about it; just use ConeGeometry until you run into performance issues. Then revisit the topic, and you will be better prepared to understand the difference between the two.
Please check BufferGeometry:
An efficient representation of mesh, line, or point geometry. Includes
vertex positions, face indices, normals, colors, UVs, and custom
attributes within buffers, reducing the cost of passing all this data
to the GPU.
To read and edit data in BufferGeometry attributes, see
BufferAttribute documentation.
For a less efficient but easier-to-use representation of geometry, see
Geometry.
On the other side, Geometry:
Geometry is a user-friendly alternative to BufferGeometry. Geometries
store attributes (vertex positions, faces, colors, etc.) using objects
like Vector3 or Color that are easier to read and edit, but less
efficient than typed arrays.
Prefer BufferGeometry for large or serious projects.
BufferGeometry performance explained here: why-is-the-geometry-faster-than-buffergeometry
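To make the difference concrete, here is a minimal sketch of the same triangle in both representations (Geometry being the pre-r125 class; setAttribute assumes r110 or later, earlier releases used addAttribute):

// Legacy Geometry: vertices stored as an array of Vector3 objects,
// faces as Face3 objects. Easy to read and edit, but it must be
// converted before it can be sent to the GPU.
const geometry = new THREE.Geometry();
geometry.vertices.push(
  new THREE.Vector3( -1, -1, 0 ),
  new THREE.Vector3( 1, -1, 0 ),
  new THREE.Vector3( 0, 1, 0 )
);
geometry.faces.push( new THREE.Face3( 0, 1, 2 ) );

// BufferGeometry: the same triangle as one flat typed array,
// which can be handed to the GPU as-is.
const bufferGeometry = new THREE.BufferGeometry();
const positions = new Float32Array( [
  -1, -1, 0,
   1, -1, 0,
   0,  1, 0
] );
bufferGeometry.setAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );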

From 2021
This is now a moot point: Geometry was removed from three.js in r125.
The *BufferGeometry names are now just aliases for the *Geometry classes (which are all BufferGeometry-based); source here:
export { BoxGeometry, BoxGeometry as BoxBufferGeometry };

Geometry is converted to BufferGeometry internally, so if you don't have any performance issues, stick to Geometry if it's convenient for you.
Here you can see that ConeGeometry calls the CylinderGeometry constructor:
CylinderGeometry.call( this, 0, radius, height, radialSegments, heightSegments, openEnded, thetaStart, thetaLength );
https://github.com/mrdoob/three.js/blob/dev/src/geometries/ConeGeometry.js
Then CylinderGeometry is created from a CylinderBufferGeometry:
this.fromBufferGeometry( new CylinderBufferGeometry( radiusTop, radiusBottom, height, radialSegments, heightSegments, openEnded, thetaStart, thetaLength ) );
https://github.com/mrdoob/three.js/blob/dev/src/geometries/CylinderGeometry.js

Related

What is Buffer Geometry of three.js?

The explanation presented in the three.js documentation of BufferGeometry is quite hard for me to understand.
It says:
BufferGeometry is a representation of mesh, line, or point geometry.
Includes vertex positions, face indices, normals, colors, UVs, and
custom attributes within buffers, reducing the cost of passing all
this data to the GPU.
I didn't quite understand what those sentences meant.
What is the purpose of BufferGeometry? How do you visualize BufferGeometry in real life?
Thank you!
An instance of this class holds the geometry data intended for rendering.
If you want to visualize this data, you have to define a material and the type of 3D object (mesh, lines, or points). The code example on the documentation page shows the respective JavaScript statements.
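For illustration, a minimal sketch along the lines of the documentation's example, assuming a scene already exists:

// Geometry data: three vertices as one flat typed array.
const geometry = new THREE.BufferGeometry();
const vertices = new Float32Array( [
  -1.0, -1.0, 0.0,
   1.0, -1.0, 0.0,
   0.0,  1.0, 0.0
] );
geometry.setAttribute( 'position', new THREE.BufferAttribute( vertices, 3 ) );

// The data only becomes visible when paired with a material
// and a 3D object type: Mesh, Line or Points.
const material = new THREE.MeshBasicMaterial( { color: 0xff0000 } );
const mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );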

What is more efficient: new mesh with cloned material or cloned mesh with cloned material?

For a large number of objects (graph nodes in my case) which all have same geometry but different color, what is more efficient:
1) creating one mesh and then cloning both the mesh and its material (with mesh.traverse) for each node, or
2) creating a new mesh with a cloned material for each node?
The only difference between creating an object via new and via .clone() is that the latter also copies all properties from the source object. If you don't need that, it's better to just create your meshes via the new operator.
Since you have a large number of objects, you might want to consider using instanced rendering to reduce the number of draw calls in your application. Right now, each node has to be drawn separately; with instanced rendering, all nodes are drawn at once. There are a couple of examples that demonstrate instanced rendering with three.js:
https://threejs.org/examples/?q=instancing
Using instanced rendering is not a must, but it can be helpful if you run into performance issues. Another option, which is easier to implement but less flexible, is to merge all meshes into one large geometry and use vertex colors. You will have a single geometry, material, and mesh object, but an additional vertex color attribute will ensure that your nodes are colored differently.
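As a rough sketch of the instancing approach (note: THREE.InstancedMesh and setColorAt() were added in releases after the r103 noted below, so this uses a newer API; the count and random positions are made up):

// One geometry and one material shared by all nodes; the whole
// set is rendered in a single draw call.
const geometry = new THREE.SphereGeometry( 0.5, 16, 16 );
const material = new THREE.MeshStandardMaterial();
const count = 1000;
const nodes = new THREE.InstancedMesh( geometry, material, count );

const matrix = new THREE.Matrix4();
const color = new THREE.Color();
for ( let i = 0; i < count; i ++ ) {
  // Per-instance transform and color instead of per-node mesh clones.
  matrix.setPosition( Math.random() * 10, Math.random() * 10, Math.random() * 10 );
  nodes.setMatrixAt( i, matrix );
  nodes.setColorAt( i, color.setHex( Math.random() * 0xffffff ) );
}
scene.add( nodes );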
three.js R103

Three.js Merge objects and textures

My question is related to this article:
http://blog.wolfire.com/2009/06/how-to-project-decals/
If my understanding is correct, a mesh made from the intersection of the original mesh and a cube is added to the scene to make a decal appear.
I need to save the final texture. So I was wondering if there is a way to 'merge' the texture of the original mesh and the added decal mesh?
You'd need to do some tricky stuff to convert from the model geometry space into UV coordinate space so you could draw the new pixels into the texture map. If you want to be able to use more than one material that way, you'd also probably need to implement some kind of "material map" similar to how some deferred rendering systems work. Otherwise you're limited to at most one material per face, which wouldn't work for detailed decals with alpha.
I guess you could copy the UV coordinates from the original mesh into the decal mesh, and then use that information to reproject the decal texture into the original texture.
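A rough sketch of that idea using a 2D canvas: draw the original texture into a canvas, paint the decal at pixel coordinates derived from its UV position, and use the result as the new texture. Everything here except CanvasTexture (originalTexture, decalImage, uvX, uvY, decalWidth, decalHeight) is a hypothetical placeholder:

// Copy the original texture's image into a canvas we can draw on.
const canvas = document.createElement( 'canvas' );
canvas.width = originalTexture.image.width;
canvas.height = originalTexture.image.height;
const ctx = canvas.getContext( '2d' );
ctx.drawImage( originalTexture.image, 0, 0 );

// Map the decal's UV position (0..1) to pixel coordinates;
// V runs bottom-to-top, hence the flip.
const x = uvX * canvas.width;
const y = ( 1 - uvY ) * canvas.height;
ctx.drawImage( decalImage, x, y, decalWidth, decalHeight );

// Use the merged result as the mesh's texture.
mesh.material.map = new THREE.CanvasTexture( canvas );
mesh.material.needsUpdate = true;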

Blender can add vertices dynamically, why can't Three.js?

In Blender, or any of a multitude of modeling programs, you can add vertices on the fly.
However, in Three.js this would be a very expensive operation: you would have to replace the geometry with a new one every time.
What is the difference?
Blender ultimately does the same thing. When adding geometry (vertices, faces, edges, etc.) you effectively have to re-upload a buffer containing the vertices / faces / texture UV coordinates to the GPU whenever they change. This is why it seems complex. You can easily wrap these concepts in functions to make it easier on yourself.
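For example, a common three.js pattern is to pre-allocate a buffer larger than you currently need and "add" vertices by writing into it and re-uploading (MAX_VERTICES and addVertex are hypothetical names):

// Pre-allocate room for more vertices than we start with.
const MAX_VERTICES = 1000;
const positions = new Float32Array( MAX_VERTICES * 3 );
const geometry = new THREE.BufferGeometry();
geometry.setAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );
geometry.setDrawRange( 0, 0 ); // draw nothing yet

let vertexCount = 0;
function addVertex( x, y, z ) {
  positions[ vertexCount * 3 ] = x;
  positions[ vertexCount * 3 + 1 ] = y;
  positions[ vertexCount * 3 + 2 ] = z;
  vertexCount ++;
  geometry.setDrawRange( 0, vertexCount );
  // Flag the attribute so three.js re-uploads the buffer to the GPU.
  geometry.attributes.position.needsUpdate = true;
}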

Fixed texture size in Three.js

I am building quite a complex 3D environment in Three.js (FPS-like). For this purpose I wanted to structure the loading of textures and materials in an object-oriented way. For example, materials.wood.brownplank is a reusable material with a certain texture and other properties. Below is a simplified visualisation of the process, where models use materials and materials use textures.
loadTextures();
loadMaterials();
loadModels();
//start doing stuff in the scene
I want to use that material on differently sized objects. However, in Three.js you can't (AFAIK) set a fixed texture scale; you have to set the repeat to scale it appropriately for your object. But I don't want to do that for every plane of every object I use.
Here is how it looks now:
As you can see, the textures are not uniform in size.
Is there an easy way to achieve this? Cloning the texture and/or material every time and setting the repeat according to the geometry won't do :)
I hope someone can help me.
Conclusion:
There is no real easy way to do this. I ended up changing my loading methods: things like materials.wood.brownplank are now, for example, getMaterial('wood', 'brownplank'). Inside that function, new objects are instantiated.
You should be able to do this by modifying your geometry's UV coordinates according to the "real" dimensions of each face.
In Three.js, UV coordinates are relative to the face and texture (0.0 = one edge, 1.0 = the other edge), no matter what the actual size of the texture or face is. But by modifying the UVs in the geometry (multiplying them by some factor based on the face's physical size), you can use the same material and texture at different sizes (and orientations) per face.
You just need to figure out the mapping between UVs, geometry scale, and your desired working units (e.g. mm or m). Sorry, I don't have or know a ready algorithm for it, but that's the approach you probably need to take. It should be quite doable with a bit of experimentation and google-fu.
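As a sketch of that approach with the current BufferGeometry API, for a plane whose width, height, and texture are hypothetical inputs, and with a made-up TEXTURE_SIZE constant for how many world units one texture tile should cover:

const TEXTURE_SIZE = 1; // one texture tile per world unit
const geometry = new THREE.PlaneGeometry( width, height );

// Scale the UVs by the plane's physical size, so the texture
// density stays the same regardless of the object's dimensions.
const uv = geometry.attributes.uv;
for ( let i = 0; i < uv.count; i ++ ) {
  uv.setXY( i, uv.getX( i ) * width / TEXTURE_SIZE, uv.getY( i ) * height / TEXTURE_SIZE );
}
uv.needsUpdate = true;

// UVs may now exceed 0..1, so the texture must repeat.
texture.wrapS = texture.wrapT = THREE.RepeatWrapping;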
