Three.js indexed BufferGeometry vs. InstancedBufferGeometry - three.js

I'm trying to learn more about performant geometries in THREE.js, and have come to understand that an indexed BufferGeometry and InstancedBufferGeometry are the two most performant geometry types.
My understanding so far is that in an indexed BufferGeometry, vertices that are re-used in a geometry are only stored once, and each use of such a vertex is referenced by its index position in the vertex array.
My understanding of the InstancedBufferGeometry is that this geometry allows one to create a "blueprint" of an object, send one copy of that object's vertices to a shader, then use custom attributes to modify each copy of the blueprint's position, rotation, scale, etc. [source]
I'd like to better understand: are there cases in which an indexed BufferGeometry will be more performant than an InstancedBufferGeometry?
Also, in the InstancedBufferGeometry, are there WebGL maximum parameters (such as maximum vertices per mesh) that one must consider so as to avoid making a mesh too large? How are the vertices in an InstancedBufferGeometry counted?
If anyone could help clarify the situations in which indexed BufferGeometry and InstancedBufferGeometry should be used, and the performance ceilings of InstancedBufferGeometry, I'd be very grateful.

[...] IndexedBufferGeometry and InstancedBufferGeometry are the two most performant geometry types.
Yes, BufferGeometries in general are the most performant way to deal with geometry-data as they store data in exactly the format that is used in the communication with the GPU via WebGL. Any plain Geometry is internally converted to a BufferGeometry before rendering.
You are also correct in your descriptions of the indexed and instanced geometries, but I'd like to note one more detail: in an indexed geometry, the instructions telling the GPU how to assemble the triangles are separated from the vertex data and presented to the GPU in a special index attribute (as opposed to being an implied part of the vertices for non-indexed arrays).
I'd like to better understand: are there cases in which an IndexedBufferGeometry will be more performant than the InstancedBufferGeometry.
They do different things at different levels, so I don't think there are many use-cases where a choice between them makes much sense.
In fact, you can even create an instanced geometry whose "blueprint"-geometry is itself an indexed BufferGeometry.
Let's dive a bit into the details to explain. An instanced geometry allows you to render multiple "clones" of the same "blueprint"-geometry in a single draw-call.
The first part of this, the creation of the blueprint, is identical to rendering a single geometry. For this, the attributes (positions, normals, uv-coordinates and possibly the index for an indexed geometry) need to be transferred to the GPU.
The special thing for instanced geometries are some extra attributes (in three.js InstancedBufferAttribute). These control how many times the geometry will be rendered and provide some instance-specific values. A typical use-case would be to have an additional vec3-attribute for the instance-position and a vec4-attribute for the quaternion per instance. But it really could be anything else as well.
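On the JavaScript side, a minimal sketch of such a setup might look like the following, using just a per-instance offset (the box blueprint, the instance count and the attribute name instanceOffset are illustrative assumptions that match the shader shown below):
// one "blueprint" box, drawn many times with a per-instance offset
const blueprint = new THREE.BoxBufferGeometry(1, 1, 1); // indexed blueprint (BoxGeometry in recent releases)
const geometry = new THREE.InstancedBufferGeometry();
geometry.index = blueprint.index;
geometry.attributes.position = blueprint.attributes.position;

const instanceCount = 1000;
const offsets = new Float32Array(instanceCount * 3); // xyz per instance
for (let i = 0; i < instanceCount; i++) {
  offsets[3 * i + 0] = Math.random() * 100 - 50;
  offsets[3 * i + 1] = Math.random() * 100 - 50;
  offsets[3 * i + 2] = Math.random() * 100 - 50;
}
// setAttribute is the current name; older releases use addAttribute
geometry.setAttribute('instanceOffset', new THREE.InstancedBufferAttribute(offsets, 3));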
In the vertex-shader, these special attributes look just like any other attribute and you need to manually apply the instance-specific updates per vertex. So instead of this:
attribute vec3 position;
void main() {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
You would have something like this:
attribute vec3 position;
attribute vec3 instanceOffset; // this is the InstancedBufferAttribute
void main() {
  gl_Position =
      projectionMatrix
      * modelViewMatrix
      * vec4(position + instanceOffset, 1.0);
}
What you don't see here is that the vertex-shader in the instanced version will not only be called once per vertex of your geometry (as is the case for regular rendering) but once per vertex and instance.
So there isn't actually any magic going on, instanced geometries are in fact nothing but a very efficient way to express duplication of entire geometries.
Also, in the InstancedBufferGeometry, are there WebGL maximum parameters (such as maximum vertices per mesh) that one must consider so as to avoid making a mesh too large?
I am not sure about that, but I haven't encountered any so far. If you are aware that rendering 1000 instances of an object with 1000 vertices will invoke the vertex-shader a million times, that should help you judge the performance implications.
If anyone could help clarify the situations in which IndexedBufferGeometry and InstancedBufferGeometry should be used, and the performance ceilings of InstancedBufferGeometry, I'd be very grateful.
You can (and maybe should) use indexed geometries for almost any kind of geometry. But they are not free of drawbacks:
when using indices, all attributes will get the same treatment. So for instance, you can't use per-face colors in indexed geometries (see Access to faces in BufferGeometry)
for point-clouds or geometries with few repeated vertices, they will do more harm than good (due to the extra amount of memory/bandwidth needed for the index)
most of the time, though, they will give you a performance benefit (see the sketch after this list):
less memory/bandwidth required for vertex-data
GPUs can cache results of vertex-shaders and re-use them for repeated vertices (so, in an optimal case you'd end up with one VS-invocation per stored vertex, not per index)
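To make the indexed case concrete, here is a minimal sketch of a quad that stores four vertices and describes its two triangles via the index (setAttribute is the current three.js name; older releases use addAttribute):
// four unique vertices, six indices -> two triangles sharing an edge
const geometry = new THREE.BufferGeometry();
const positions = new Float32Array([
  0, 0, 0,   // vertex 0
  1, 0, 0,   // vertex 1
  1, 1, 0,   // vertex 2
  0, 1, 0    // vertex 3
]);
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
// without the index, the shared vertices 0 and 2 would have to be stored twice
geometry.setIndex([0, 1, 2,  2, 3, 0]);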
For instanced geometries:
if you have a larger number of somewhat similar objects where the differences can be expressed in just a few numbers, go for instanced geometries (simple case: render copies of the same object at different locations, complex case: render a forest by changing the tree's geometry based on some instance-attribute or render a crowd of people by changing the individual persons pose with an instance-attribute)
another thing I found quite inspiring: rendering of fat lines using instancing: use instancing to render a bunch of line-segments where each line-segment consists of 6 triangles (see https://github.com/mrdoob/three.js/blob/dev/examples/js/lines/LineSegmentsGeometry.js by @WestLangley)
Drawbacks:
as it is right now, there is no built-in support for using the regular materials together with instanced geometries; you have to write your shaders yourself (to be precise: there is a way to do it, but it requires some intimate knowledge of how the three.js shaders work). A sketch of the manual route follows.
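For completeness, a hedged sketch of that manual route: pairing the instanced geometry set up earlier with a THREE.ShaderMaterial whose vertex shader declares the extra attribute itself (ShaderMaterial injects position, projectionMatrix and modelViewMatrix automatically; the fragment shader here is just a placeholder):
const material = new THREE.ShaderMaterial({
  vertexShader: [
    'attribute vec3 instanceOffset; // the InstancedBufferAttribute',
    'void main() {',
    '  gl_Position = projectionMatrix * modelViewMatrix *',
    '      vec4(position + instanceOffset, 1.0);',
    '}'
  ].join('\n'),
  fragmentShader: 'void main() { gl_FragColor = vec4(1.0); }'
});
const mesh = new THREE.Mesh(geometry, material); // geometry: the InstancedBufferGeometry from above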

Related

rendering millions of voxels using 3D textures with three.js

I am using three.js to render a voxel representation as a set of triangles. I have got it to render 5 million triangles comfortably, but that seems to be the limit. You can view it online here.
Select the Dublin model at resolution 3 to see a lot of triangles being drawn.
I have used every trick to get it this far (buffer geometry, voxel culling, multiple buffers), but I think it has hit the limit of what can be accomplished with OpenGL triangles.
Large amounts of voxels are normally rendered as a set of images in a 3D texture, and while there are several posts on how to hack 2D textures into 3D textures, they seem to have a maximum limit on the texture size.
I have searched for tutorials or examples using this approach but haven't found any. Has anyone used this approach before with three.js?
Your scene is rendered twice, because SSAO needs a depth texture. You could use the WEBGL_depth_texture extension, which has pretty good support, so you only need a single render pass. You can still fall back to the low-performance double pass if the extension is unavailable.
Your voxel material is double-sided. It may be on purpose, but it may create a huge overdraw.
In your demo, you use a MeshPhongMaterial and directional lights. It's a needlessly complex material. Your geometries don't have any normals so you can't have any lighting. Try to use a simpler unlit material.
Your goal is to render a huge amount of vertices, so assuming the framerate is bound by the vertex shader:
try stuff like https://github.com/GPUOpen-Tools/amd-tootle to preprocess your geometries, focusing on the vertex prefetch cache and the post-transform vertex cache.
reduce the bandwidth used by your vertex buffers. Since your vertices are aligned on a "grid", you could store vertex positions as three shorts instead of three floats, halving your VBO size (see the sketch after this list). You could use the same trick if you had normals, since all normals should be axis-aligned (cubes).
generally, reduce the number of varyings needed by the fragment shader.
if you need more attributes than just vec3 position, use one single interleaved VBO instead of one per attrib.
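As an illustration of the short-position idea above, a hedged three.js sketch (the Int16 grid values and the voxelSize scale are made-up for the example):
const geometry = new THREE.BufferGeometry();
// grid-aligned voxel positions stored as 16-bit integers instead of 32-bit floats
const gridPositions = new Int16Array([
  0, 0, 0,
  1, 0, 0,
  1, 1, 0
  // ...
]);
// non-normalized: the vertex shader receives the integer grid values as floats (0.0, 1.0, ...)
geometry.setAttribute('position', new THREE.BufferAttribute(gridPositions, 3));

// scale the grid units back to world units without a custom shader
const voxelSize = 0.1; // assumed voxel edge length
const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial());
mesh.scale.setScalar(voxelSize);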

Three.js "Raycasting Points" example – why are there different types of point cloud?

In the example Interactive Raycasting Points there are 4 different functions to generate the point cloud:
1. generatePointcloud (with buffer geometry)
2. generateIndexedPointcloud (buffer geometry with indices)
3. generateIndexedWithOffsetPointcloud (buffer geometry with added drawcall)
4. generateRegularPointcloud (with normal geometry)
Could someone explain what the difference is between these 4 types, and if there are any performance benefits/certain situations where one is suited more than the others?
Thanks!
The purpose of the example Interactive Raycasting Points is to demonstrate that raycasting against THREE.Points works for a variety of geometry types.
So-called "regular geometry", THREE.Geometry, is the least memory-efficient geometry type, and in general, has longer load times than THREE.BufferGeometry.
BufferGeometry can be "indexed" or "non-indexed". Indexed BufferGeometry, when used with meshes, allows for vertices to be reused; that is, faces that share an edge can share a vertex. In the case of point clouds, however, I do not see a benefit to the "indexed" type.
BufferGeometry with draw calls -- now called groups -- allows for only a subset of the geometry to be rendered, and also allows for a different material index to be associated with each group.
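For reference, this is roughly how such groups are declared on a BufferGeometry (the helper generatePointGeometry, the ranges and the material indices are assumptions for illustration; when the object's material is an array, each group is drawn with the material at its materialIndex):
const geometry = generatePointGeometry(); // assumed helper that builds the point BufferGeometry
// group 0 covers the first 3000 points with material index 0,
// group 1 covers the next 3000 points with material index 1
geometry.addGroup(0, 3000, 0);
geometry.addGroup(3000, 3000, 1);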
The function generateIndexedWithOffsetPointcloud appears to have been named when draw calls, a.k.a. groups, were called "offsets".
I do not believe raycasting in three.js honors groups. I believe it raycasts against the entire geometry. In fact, I am not sure groups are working correctly at all in the example you reference.
three.js r.73

Approach to write a fragment shader for each triangle in a mesh

I have a mesh that consists of several triangles (on the order of 100). I would like to define a different fragment shader for each of them, so as to be able to show a different kind of reflection behaviour for each triangle.
How should I approach this problem? Should I start defining a GLSL program and try to distinguish between different triangles? This answer suggests that this is not the right approach: glDrawElements and flat shading. Even Using a different vertex and fragment shader for each object in webgl does not seem like the right approach, since I do not want multiple objects, but just one with different materials (fragment shaders) on it.
My suggestion would be to create a super shader which can handle all the different scenarios you desire.
In order to set this up you'll need attributes that dictate which part of the shader to use.
e.g. pass a per-vertex switch from the vertex shader to the fragment shader (GLSL has no bool attributes, so use a float):
// vertex shader
attribute float shadingMode; // 0.0 = flat, 1.0 = phong
varying float vShadingMode;
// in main(): vShadingMode = shadingMode;

// fragment shader
varying float vShadingMode;
if (vShadingMode < 0.5) {
  // perform flat shading
} else {
  // perform phong shading
}
Then set up your buffers so that the vertices in each triangle carry the desired shading-attribute value.
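A hedged sketch of feeding such a per-vertex switch from three.js, assuming geometry is the (non-indexed) BufferGeometry of your mesh and using the shadingMode name from the snippet above (the vertex ranges are made-up):
// one float per vertex: 0.0 = flat shading, 1.0 = phong shading
// all three vertices of a triangle should carry the same value
const vertexCount = geometry.attributes.position.count;
const modes = new Float32Array(vertexCount);
modes.fill(0.0);       // default: flat shading
modes.fill(1.0, 30);   // vertices 30 and up (from the 11th triangle on): phong
geometry.setAttribute('shadingMode', new THREE.BufferAttribute(modes, 1));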

OpenGL ES 2.0 Per Fragment Lighting for Untextured Generated Geometry

I'm currently generating geometry rather than importing it as a model. This makes it necessary to calculate all normals within the application.
I've implemented Gouraud shading (per vertex lighting) successfully, and now wish to implement Phong shading (per fragment/pixel).
I've had a look at relevant tutorials online and there are two camps: one offers a simple Gouraud-to-Phong reshuffling of shader code which, while offering improved lighting, isn't truly per-pixel. The second does things the right way by utilising normal maps embedded within textures, but these are generated within a modelling toolkit such as RenderMonkey.
My questions are:
1. How should I go about programmatically generating normals for my generated geometry, considering it as a vertex set? In other words, given a set of discrete polygonal points, will it be necessary to manually calculate interpolated normals?
2. Should I store generated normals within a texture as exemplified online, and if so, how would I go about doing this in code rather than through modelling software?
Computing the lighting in the fragment shader on the interpolated per-vertex normals will definitely yield better results (assuming you properly re-normalize the interpolated normals in the fragment shader) and it is truly per-pixel, although the strength of the difference may vary depending on the model tessellation and the lighting variation. Have you just tried it out?
As long as you don't have any changing normals inside a face (think of bump mapping) and only interpolate the per-vertex normals, a normal map is completely unnecessary, as you get interpolated normals from the rasterizer anyway. Whereas normal mapping can give nicer effects if you really have per-pixel normal variations (like a very rough surface), it is not necessarily the right way to do per-pixel lighting.
It makes a huge difference if you compute the lighting per-vertex and interpolate the colors or if you compute the lighting per fragment (even if you just interpolate the per-vertex normals, that's what classical Phong shading is about), especially when you have quite large triangles or very shiny surfaces (very high frequency lighting variation).
As said, if you don't have high-frequency normal variations (changing normals inside a triangle), you don't need a normal map, nor do you need to interpolate the per-vertex normals yourself. You just generate per-vertex normals like you did for the per-vertex lighting (e.g. by averaging adjacent face normals). The rasterizer does the interpolation for you.
You should first try out simple per-pixel lighting before delving into techniques like normal mapping. If you got not so finely tessellated geometry or very shiny surfaces, you will surely see the difference to simple per-vertex lighting. Then when this works you can try normal mapping techniques, but for them to work you surely need to first understand the meaning of per-pixel lighting and Phong shading in contrast to Gouraud shading.
Normal maps are not a requirement for per-pixel lighting.
The only requirement, by definition, is that the lighting solution is evaluated for every output pixel/fragment. You can store the normals on the vertexes just as well (and more easily).
Normal maps can either provide full normal data (rgb maps) or simply modulate the stored vertex normals (du/dv maps, appear red/blue). The latter form is perhaps more common and relies on vertex normals to function.
How to generate the normals depends on your code and geometry. Typically, you compute face normals with cross products of edge vectors and average them over the surrounding faces or vertices for smooth normals, or just create a unit vector pointing in whatever direction is "out" for your geometry.
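In three.js terms (using the three.js math helpers purely for illustration; the same cross-product logic applies in plain code), both routes look roughly like this:
// smooth per-vertex normals, averaged from the adjacent faces:
geometry.computeVertexNormals(); // geometry: your generated BufferGeometry

// or, for a single hard-edged face, a normal from the cross product of two edges:
const a = new THREE.Vector3(0, 0, 0);
const b = new THREE.Vector3(1, 0, 0);
const c = new THREE.Vector3(0, 1, 0);
const normal = new THREE.Vector3()
  .subVectors(c, b)
  .cross(new THREE.Vector3().subVectors(a, b))
  .normalize(); // (0, 0, 1) for this counter-clockwise triangle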

Array of texture identifiers to OpenGL DrawElements/DrawArrays?

An OpenGL ES sequence like this can be used to render multiple objects in one pass:
glVertexPointer(...params..., vertex_Array );
glTexCoordPointer(...params..., texture_Coordinates_Array );
glBindTexture(...params..., one_single_texture_ID );
glDrawArrays( GL_TRIANGLES, 0, vertex_Count );  // count = number of vertices (3 per triangle)
Here, the vertex array and texture coordinates array can refer to innumerable primitives that can be described in one step to OpenGL.
But do all these primitives' texture coordinates have to reference the one, single texture in the glBindTexture command?
It would be nice to pass in an array of texture identifiers:
glBindTexture(...params..., texture_identifier_array[] );
Here, there would be a texture ID in the array for every primitive shape described in the preceding calls. So, each shape's texture coordinates would pertain to the texture identified in "texture_identifier_array[]".
I can see one option is to place all textures of interest on one large texture that can be referenced as a single entity in the drawing calls. On my platform, this creates an intermediate step with a large bitmap that might cause memory issues.
It would be best for me to be able to pass an array of texture identifiers to the OpenGL ES drawing calls. Can this be done?
No, that's not possible. You could perhaps emulate it by using a texture array and giving your vertices a texture index; then in the fragment shader you could look up the right texture with that index, but I doubt that ES supports texture arrays. And even then I don't know if this really works, or whether a texture atlas solution would be much more efficient anyway.
If you want to render multiple versions of the same geometry (which I doubt), you're looking for instanced rendering, which also isn't supported on ES devices, I think.
So the way to go at the moment will be a texture atlas (multiple textures in one) or just calling glDrawArrays multiple times.
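For illustration, the "call it multiple times" route in WebGL terms (the texture handles and vertex ranges are made-up; the buffer and pointer setup from the question is assumed to be in place):
// draw two ranges of the same vertex arrays with different textures bound
gl.bindTexture(gl.TEXTURE_2D, textureA);
gl.drawArrays(gl.TRIANGLES, 0, 300);    // first 100 triangles use textureA

gl.bindTexture(gl.TEXTURE_2D, textureB);
gl.drawArrays(gl.TRIANGLES, 300, 300);  // next 100 triangles use textureB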
