ThreeJS: Use BufferGeometry with Line

For performance reasons, I would like to migrate to BufferGeometry instead of Geometry. It works great for Mesh and ParticleSystem objects, but when setting the geometry of a Line to a BufferGeometry, initLineBuffers() expects a geometry.vertices Vector3 array, which a BufferGeometry does not have.
The call stack:
render --> initWebGLObjects --> addObject --> initLineBuffers
Is there a simple solution for this, or do I need to hack? ;)

BufferGeometry does support lines now. Look at webgl_buffergeometry_lines.html example.
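For reference, a minimal sketch of a line built on BufferGeometry, in the spirit of that example (this assumes the modern setAttribute API; older revisions used addAttribute, and `scene` is assumed to already exist):

    import * as THREE from 'three';

    // Three points of a polyline, packed as x, y, z triples in a typed array.
    const positions = new Float32Array([
      -1, 0, 0,
       0, 1, 0,
       1, 0, 0,
    ]);

    const geometry = new THREE.BufferGeometry();
    geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

    const material = new THREE.LineBasicMaterial({ color: 0xffffff });
    const line = new THREE.Line(geometry, material); // or THREE.LineSegments for disjoint pairs
    scene.add(line);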

It looks like right now BufferGeometry will only draw as triangles, so you cannot have it draw lines (unless you do something like make two of the three triangle vertices the same, but that is pretty hacky). It supports meshes and particles, so maybe soon it will support lines. Here is the line in the source


What is Buffer Geometry of three.js?

The explanation in the three.js documentation for the BufferGeometry class is quite hard for me to understand.
It says:
BufferGeometry is a representation of mesh, line, or point geometry.
Includes vertex positions, face indices, normals, colors, UVs, and
custom attributes within buffers, reducing the cost of passing all
this data to the GPU.
I didn't quite understand what those sentences meant.
What is the purpose of BufferGeometry? How do you visualize BufferGeometry in real life?
Thank you!
An instance of this class holds the geometry data intended for rendering.
If you want to visualize this data, you have to define a material and the type of 3D object (mesh, lines, or points). The code example on the documentation page shows the respective JavaScript statements.
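As a rough sketch of that idea (modelled on the documentation's example, with made-up vertex data): the same BufferGeometry can be drawn as a mesh, as lines, or as points, depending on the object type and material you pair it with.

    import * as THREE from 'three';

    // The vertex data lives in a typed array on the geometry; this buffer is what
    // gets passed to the GPU, instead of an array of Vector3/Face3 objects.
    const vertices = new Float32Array([
      0, 0, 0,
      1, 0, 0,
      0, 1, 0,
    ]);

    const geometry = new THREE.BufferGeometry();
    geometry.setAttribute('position', new THREE.BufferAttribute(vertices, 3));

    // The same data, visualized three different ways:
    const mesh   = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0x00ff00 }));
    const line   = new THREE.Line(geometry, new THREE.LineBasicMaterial({ color: 0xff0000 }));
    const points = new THREE.Points(geometry, new THREE.PointsMaterial({ size: 0.1 }));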

Implement lights and shadows in ShaderMaterial in Three.js r136

What is the current solution in r136 for blending lights, shadows, and color in a ShaderMaterial? I already found the solution for fog support.
I found some examples from a previous revision (r108), like this codesandbox.
Actually, I'm looking for this kind of result: codesandbox.
Should I copy the MeshPhongMaterial shaders as a code base for my own shaders?
Custom shaders are mandatory in my projects; that's why I'm not using the built-in materials.
Any idea or example?
Thanks!
This question is huge, and does not have a single answer. Creating lights, shadows, and color varies from material to material, and involves so many elements that it would take a full course to learn.
However, you can look at the segments of shader code that three.js uses, in the folder called /ShaderChunk. If you search for "light", you'll see shader segments (or "chunks") for each material, like toon, lambert, physical, etc. Some materials need parameters to be defined at the beginning of the shader code (those are the _pars files), some calculations happen in the vertex shader, some in the fragment shader, and some chunks split the code between _begin and _end.
Shadows are even more complex because they require a separate render pass to build the shadow map. Like I said, re-building your own lights, shadows, and color is a huge undertaking, and it would need a full course to learn. I hope this answer at least points you in the right direction.
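To illustrate the general pattern (not a full re-implementation): a ShaderMaterial can opt into the renderer's light uniforms with lights: true and pull the relevant chunks in with #include. This is only a sketch assuming r136-era chunk and uniform names (lights_pars_begin, ambientLightColor, directionalLights), which do change between releases; shadows would additionally need the shadowmap chunks and a worldPosition in the vertex shader.

    import * as THREE from 'three';

    const material = new THREE.ShaderMaterial({
      lights: true, // ask the renderer to feed this material the scene's light uniforms
      uniforms: THREE.UniformsUtils.merge([
        THREE.UniformsLib.lights,
        { uColor: { value: new THREE.Color(0xff8800) } }, // our own custom uniform
      ]),
      vertexShader: `
        varying vec3 vNormal;
        void main() {
          vNormal = normalize(normalMatrix * normal); // view-space normal
          gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }
      `,
      fragmentShader: `
        #include <common>
        #include <lights_pars_begin>
        uniform vec3 uColor;
        varying vec3 vNormal;
        void main() {
          // Very rough Lambert-style accumulation using the injected light uniforms.
          vec3 lighting = ambientLightColor;
          #if NUM_DIR_LIGHTS > 0
          for (int i = 0; i < NUM_DIR_LIGHTS; i++) {
            lighting += directionalLights[i].color *
              max(dot(vNormal, directionalLights[i].direction), 0.0);
          }
          #endif
          gl_FragColor = vec4(uColor * lighting, 1.0);
        }
      `,
    });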

InstancedBufferGeometry lookAt camera

I'm using Three.js to create a spiral galaxy. I've gone down the InstancedBufferGeometry route so I can render lots of stars with great performance.
For now, I'm using a plane as my object; the trouble I have is that when I orbit around the galaxy, these planes don't look at the camera.
I have tried using the lookAt function, but that doesn't seem to work.
Does anyone know how to get InstancedBufferGeometry to look at the camera?
Many thanks in advance.
The lookAt method belongs to THREE.Object3D, and it makes the entire object rotate towards a point, not each of its geometry's instances. If you're using InstancedBufferGeometry, you could perform these calculations in the vertex shader, but that can be computationally expensive given the quantity of planes you're rendering.
If you're using InstancedBufferGeometry for planes only, I recommend you use THREE.Points instead, which is made to automatically generate planes that always look towards the camera, as demonstrated in these examples:
https://threejs.org/examples/?q=point#webgl_points_sprites
https://threejs.org/examples/?q=point#webgl_custom_attributes_points
All you'd need to worry about is their positions, and the rotations will always "billboard" towards the camera without the need of manually calculating rotations.
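A rough sketch of that Points-based approach (the star count, size, and the 'star.png' sprite texture below are made up for illustration; `scene` is assumed to exist):

    import * as THREE from 'three';

    const STAR_COUNT = 10000;
    const positions = new Float32Array(STAR_COUNT * 3);

    for (let i = 0; i < STAR_COUNT; i++) {
      // Placeholder positions; substitute your spiral-galaxy distribution here.
      positions[i * 3 + 0] = (Math.random() - 0.5) * 100;
      positions[i * 3 + 1] = (Math.random() - 0.5) * 10;
      positions[i * 3 + 2] = (Math.random() - 0.5) * 100;
    }

    const geometry = new THREE.BufferGeometry();
    geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

    const material = new THREE.PointsMaterial({
      size: 0.5,
      map: new THREE.TextureLoader().load('star.png'), // hypothetical sprite texture
      transparent: true,
      depthWrite: false,
    });

    // Each point is rasterized as a screen-aligned sprite, so it always faces the camera.
    const stars = new THREE.Points(geometry, material);
    scene.add(stars);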

How to detect interaction with a line expanded in vertex shader in three.js?

I'm working on a three.js project that requires crisp, thick 2D lines.
Because of limitations in the ANGLE layer, the WebGL renderer on Windows doesn't allow thick lines with LineBasicMaterial.
To get around this, I'm expanding polylines in a vertex shader using three-line-2d. This works by pairing BufferGeometry with a simple ShaderMaterial.
Visually, I'm happy with the result.
Now, I'd like to detect mouse interactions with these lines. The usual Raycaster techniques don't work. I suspect that this is because my lines lack geometry that three.js understands (because I'm expanding in a shader).
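For reference, by "the usual Raycaster techniques" I mean roughly the following simplified sketch (`camera` and `lineMesh` stand in for my actual objects):

    import * as THREE from 'three';

    const raycaster = new THREE.Raycaster();
    const pointer = new THREE.Vector2();

    window.addEventListener('pointermove', (event) => {
      // Convert the mouse position to normalized device coordinates (-1 to +1).
      pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
      pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;

      raycaster.setFromCamera(pointer, camera);

      // The intersection test runs against the CPU-side geometry, which is still the
      // thin centerline; the thickness added in the vertex shader is invisible to it.
      const hits = raycaster.intersectObject(lineMesh);
      console.log(hits);
    });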
My question: What are my options for picking these lines? Do I need to extrude outside of the shader, or are there other good options?

Why doesn't Three.js use different shader programs for different mesh objects?

I've been trying to figure out how three.js works, and I've been using a shader debugger to inspect it.
I added two simple planes with a basic material (a single color, without any shading model), which rotate during rendering.
First of all, my question was: why is three.js using a single shader program (see the WebGL context function .useProgram()) for both meshes?
I supposed that, since the objects are the same, a single shader program is used for similar objects for performance reasons.
But... I changed my three.js application source code, and now there are a plane and a cube in the scene, which are rotating.
Let's look at the shader debugger again:
Here you can see that three.js is again using one shader program, even though the objects are now different. This is the part that is not clear to me.
Looking at that shader, it seems to be a very generic and huge shader program, and there are also two other shader programs that were compiled but not used.
So why is three.js using a single shader program? What are the actual reasons for this?
Most of the work done in a shader is related to the material part of the mesh, not the geometry.
In WebGL (or OpenGL, for that matter), the geometry as you understand it (whether it is a cube, a sphere, or whatever) is pretty irrelevant.
It would be a bit more relevant if you were talking about how the geometry is constructed. But these days, when faces of more than 3 vertices are gone and triangle strips are seldom used, there are few different kinds of geometry: face3 geometries, line geometries, particle geometries, and buffer geometries.
Most of the time, the deciding factor for using a different shader will be the material.
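A small sketch that makes this visible: two different geometries sharing one material compile to a single program, which you can check via renderer.info.programs.

    import * as THREE from 'three';

    const renderer = new THREE.WebGLRenderer();
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(60, 1, 0.1, 100);
    camera.position.z = 5;

    // A plane and a cube, drawn with the same material.
    const material = new THREE.MeshBasicMaterial({ color: 0x2194ce });
    scene.add(new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material));
    scene.add(new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), material));

    renderer.render(scene, camera);

    // The plane/cube difference only changes the vertex buffers bound before each draw
    // call, not the compiled shader, so only one program exists.
    console.log(renderer.info.programs.length); // expected: 1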
