What is Buffer Geometry of three.js? - three.js

The explanation presented in the three.js documentation for the BufferGeometry class is quite hard for me to understand.
It says:
BufferGeometry is a representation of mesh, line, or point geometry.
Includes vertex positions, face indices, normals, colors, UVs, and
custom attributes within buffers, reducing the cost of passing all
this data to the GPU.
I didn't quite understand what those sentences meant.
What is the purpose of BufferGeometry? How do you visualize BufferGeometry in real life?
Thank you!

An instance of this class holds the geometry data intended for rendering.
If you want to visualize this data, you have to define a material and the type of 3D object (mesh, lines, or points). The code example on the documentation page shows the respective JavaScript statements.
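A minimal sketch along the lines of the documentation's example (assuming a scene already exists):

import * as THREE from 'three';

// Three vertices of a single triangle, as a flat typed array (x, y, z per vertex).
const positions = new Float32Array([
  -1, -1, 0,
   1, -1, 0,
   0,  1, 0,
]);

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

// The same geometry data could also be visualized as lines (THREE.Line) or points (THREE.Points).
const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0xff0000 }));
scene.add(mesh);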

Related

Is it more performant for three.js to load a mesh that's already been triangulated than a mesh using quads?

I've read that Three.js triangulates all mesh faces, is that correct?
Then I realized that most of the glTF models I've been using have quad faces. It's very easy to triangulate faces in Blender, so I'm curious whether pre-triangulating the faces will result in a quicker load of the mesh?
Thanks in advance, and if you have any other performance tips on three.js and gltf's (besides those listed at https://discoverthreejs.com/tips-and-tricks/) that would be super helpful!
glTF, in its current form, does not support quad faces, only triangles. Current glTF exporters (including Blender) triangulate the model when creating the glTF file. Some will automatically try to merge things back together on import.
By design, glTF stores its data in a similar manner to WebGL's vertex attributes, such that it can render efficiently, with minimal pre-processing. But there are some things you can do when creating a model, to help it reach these goals:
Combine materials when possible, to reduce the number of draw calls.
Combine meshes/primitives when possible, also to reduce draw calls (see the merge sketch after this list).
Be aware that discontinuous normals/UVs increase vertex count (again because of vertex attributes).
Avoid creating textures filled with solid colors. Use Blender's default color/value node inputs instead.
Keep texture sizes web-friendly, and power-of-two. Mobile clients sometimes can't handle anything larger than 2048x2048. Might also try 1024x1024, etc.
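For the mesh-combining tip, a hedged sketch using the BufferGeometryUtils addon (the helper is named mergeGeometries in recent three.js releases; older releases call it mergeBufferGeometries), assuming a scene already exists:

import * as THREE from 'three';
import { mergeGeometries } from 'three/addons/utils/BufferGeometryUtils.js';

// Many small geometries sharing one material can be collapsed into a
// single geometry, and therefore a single draw call.
const parts = [];
for (let i = 0; i < 100; i++) {
  const box = new THREE.BoxGeometry(1, 1, 1);
  box.translate(Math.random() * 50, 0, Math.random() * 50);
  parts.push(box);
}

const merged = mergeGeometries(parts);
scene.add(new THREE.Mesh(merged, new THREE.MeshStandardMaterial()));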

Difference between buffer geometry and geometry

I'm new to three.js and have researched the main topics such as camera, renderer, scene, and geometry. Within geometry there are both plain and buffer variants (say ConeGeometry and ConeBufferGeometry), and the features seem to be the same in both. So what's the difference between geometry and buffer geometry? Does it influence performance in any way?
The difference is essentially in the underlying data structures (how the geometry stores and handles vertices, faces, etc. in memory).
For learning purposes you should not care about it; just use ConeGeometry until you come across performance issues. Then return to the topic; next time you will be better prepared to grasp the difference between the two.
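For instance, a minimal sketch (assuming a scene already exists):

// For learning purposes the convenience classes are all you need:
const cone = new THREE.Mesh(
  new THREE.ConeGeometry(1, 2, 32), // radius, height, radialSegments
  new THREE.MeshStandardMaterial({ color: 0x44aa88 })
);
scene.add(cone);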
Please check BufferGeometry
An efficient representation of mesh, line, or point geometry. Includes
vertex positions, face indices, normals, colors, UVs, and custom
attributes within buffers, reducing the cost of passing all this data
to the GPU.
To read and edit data in BufferGeometry attributes, see
BufferAttribute documentation.
For a less efficient but easier-to-use representation of geometry, see
Geometry.
On the other side, Geometry:
Geometry is a user-friendly alternative to BufferGeometry. Geometries
store attributes (vertex positions, faces, colors, etc.) using objects
like Vector3 or Color that are easier to read and edit, but less
efficient than typed arrays.
Prefer BufferGeometry for large or serious projects.
BufferGeometry performance explained here: why-is-the-geometry-faster-than-buffergeometry
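As a concrete illustration, a minimal sketch of what the quoted docs mean by typed arrays and BufferAttribute accessors:

const bufferGeometry = new THREE.BufferGeometry();

// Vertex positions live in one flat typed array, three floats per vertex.
const positions = new Float32Array([
  0, 0, 0, // vertex 0
  1, 0, 0, // vertex 1
  0, 1, 0, // vertex 2
]);
bufferGeometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

// Reading and editing go through the BufferAttribute accessors:
const pos = bufferGeometry.attributes.position;
pos.setY(2, 2);         // move vertex 2 upward
pos.needsUpdate = true; // tell the renderer to re-upload the buffer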
From 2021
This is now a moot point; Geometry was removed from three.js in r125.
The *BufferGeometry classes are now just aliases for the corresponding *Geometry classes, source here.
export { BoxGeometry, BoxGeometry as BoxBufferGeometry };
Geometry is converted to BufferGeometry in the end, so if you don't have any performance issues, stick to Geometry if it's convenient for you.
Here you can see that ConeGeometry calls CylinderGeometry constructor.
CylinderGeometry.call( this, 0, radius, height, radialSegments, heightSegments, openEnded, thetaStart, thetaLength );
https://github.com/mrdoob/three.js/blob/dev/src/geometries/ConeGeometry.js
Then CylinderGeometry is created using CylinderBufferGeometry.
this.fromBufferGeometry( new CylinderBufferGeometry( radiusTop, radiusBottom, height, radialSegments, heightSegments, openEnded, thetaStart, thetaLength ) );
https://github.com/mrdoob/three.js/blob/dev/src/geometries/CylinderGeometry.js

InstancedBufferGeometry lookatcamera

I'm using Three.js to create a spiral galaxy. I've gone down the InstancedBufferGeometry route so I can render lots of stars with great performance.
For now, I'm using a plane as my object. The trouble I have is that when I orbit around the galaxy, these planes don't look at the camera.
I have tried using the lookAt function, but that doesn't seem to work.
Does anyone know how to get InstancedBufferGeometry to look at the camera?
Many thanks in advance.
The lookAt method belongs to THREE.Object3D, and it makes the entire object rotate towards a point, not each of its geometry's instances. If you're using InstancedBufferGeometry, you could perform these calculations in the vertex shader, but that can be computationally expensive given the quantity of planes you're rendering.
If you're using InstancedBufferGeometry for planes only, I recommend using THREE.Points instead, which automatically generates planes that always face the camera, as demonstrated in these examples:
https://threejs.org/examples/?q=point#webgl_points_sprites
https://threejs.org/examples/?q=point#webgl_custom_attributes_points
All you'd need to worry about is their positions, and the rotations will always "billboard" towards the camera without the need of manually calculating rotations.
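A minimal star-field sketch with THREE.Points (assuming a scene already exists):

const starCount = 10000;
const positions = new Float32Array(starCount * 3);
for (let i = 0; i < starCount * 3; i++) {
  positions[i] = (Math.random() - 0.5) * 1000; // random placement in a cube
}

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

const material = new THREE.PointsMaterial({
  size: 2,
  sizeAttenuation: true, // points shrink with distance
});

scene.add(new THREE.Points(geometry, material)); // every point billboards toward the camera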

Three.js doesn't use different shader programs for different mesh objects, why?

I've tried to figure out how three.js works and have used a shader debugger on it.
I've added two simple planes with a basic material (a single color, without any shading model), which rotate during the rendering process.
First of all, my question was: why is three.js using a single shader program (see the WebGL context function .useProgram()) for both meshes?
I supposed that it was because the objects were the same, and that for performance reasons a single shader program is used for similar objects.
But... I have changed my three.js application source code, and now there are a plane and a cube in scene, which are rotating.
And let's look in shader debugger again:
Here you can see that three.js is again using one shader program, even though the objects are now different. This is the part that is not clear to me.
If you look at that shader, it seems to be a very generic and huge shader program, and there are also two other shader programs which were compiled but not used.
So, why is three.js using a single shader program? What are those correct (or maybe not) reasons?
Most of the work done in a shader is related to the material part of the mesh, not the geometry.
In webgl (or opengl for that matter) the geometry as you understand it (if it is a cube, a sphere, or whatever) is pretty irrelevant.
It would be a little bit more relevant if you talked about how the geometry is constructed. But these days, when faces of more than 3 vertices are gone and triangle strips are seldom used, there are few different kinds of geometry: Face3 geometries, line geometries, particle geometries, and buffer geometries.
Most of the time, the key difference to use a different shader will be in the material.
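To see this for yourself, a small sketch (assuming scene, camera, and renderer already exist): two different geometries sharing one material end up with a single compiled program, which renderer.info.programs reports after a render:

const material = new THREE.MeshBasicMaterial({ color: 0xff0000 });

scene.add(new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material));
scene.add(new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), material));

renderer.render(scene, camera);
console.log(renderer.info.programs.length); // expect 1: same material, one shared program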

Fixed texture size in Three.js

I am building quite a complex 3D environment in Three.js (an FPS-like game). For this purpose I wanted to structure the loading of textures and materials in an object-oriented way. For example, materials.wood.brownplank is a reusable material with a certain texture and other properties. Below is a simplified visualisation of the process, where models use materials and materials use textures.
loadTextures();
loadMaterials();
loadModels();
//start doing stuff in the scene
I want to use that material on differently sized objects. However, in Three.js you can't (AFAIK) set a certain texture scale. You have to set the repeat to scale it appropriately for your object. But I don't want to do that for every plane of every object I use.
Here is how it looks now: as you can see, the textures are not uniform in size.
Is there an easy way to achieve this? Cloning the texture and/or material every time and setting the repeat according to the geometry won't do :)
I hope someone can help me.
Conclusion:
There is no real easy way to do this. I ended up changing my loading methods, where things like materials.wood.brownplank are now, for example, getMaterial('wood', 'brownplank'). In that function, new objects are instantiated.
You should be able to do this by modifying your geometry UV coordinates according to the "real" dimensions of each face.
In Three.js, UV coordinates are relative to the face and texture (as in, 0.0 = one edge, 1.0 = other edge), no matter what the actual size of texture or face is. But by modifying the UVs in geometry (multiply them by some factor based on face physical size), you can use the same material and texture in different sizes (and orientations) per face.
You just need to figure out the mapping between UVs, geometry scale, and your desired working units (e.g. mm or m). Sorry, I don't have or know of a ready algorithm to do it, but that's the approach you probably need to take. Should be quite doable with a bit of experimentation and google-fu.
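A hedged sketch of that approach; fitUVsToWorldUnits is a hypothetical helper, and it assumes planar faces whose default UVs run 0..1 across a face of the given world size (the texture also needs THREE.RepeatWrapping):

// Hypothetical helper: rescale a geometry's UVs so one texture tile covers
// `tileSize` world units, regardless of the face's actual dimensions.
function fitUVsToWorldUnits(geometry, width, height, tileSize) {
  const uv = geometry.attributes.uv;
  for (let i = 0; i < uv.count; i++) {
    uv.setXY(i, uv.getX(i) * (width / tileSize), uv.getY(i) * (height / tileSize));
  }
  uv.needsUpdate = true;
}

// Usage: two differently sized planes share one material with a uniform texture scale.
// `woodTexture`, `smallPlane`, and `bigPlane` are assumed to exist.
woodTexture.wrapS = woodTexture.wrapT = THREE.RepeatWrapping;
fitUVsToWorldUnits(smallPlane.geometry, 2, 2, 1);  // 2x2 world-unit plane
fitUVsToWorldUnits(bigPlane.geometry, 10, 4, 1);   // 10x4 world-unit plane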
