How to specify a face on a mesh in gltf? - three.js

I want to be able to reference a specific face on a specific mesh in a glTF file, but I am confused by the notion of primitives. Normally (i.e., in three.js) I would use the face index, and I would always be able to reference the same face. However, meshes in glTF sometimes have multiple primitives. Do these use the same face buffer? Do they at least use consecutive face buffers? I am wondering if I can reference a face in a mesh using just one number (i.e., a face index) or if I also need a primitive index.
Do mesh primitives share a pool of vertices?

Two glTF primitives in a single mesh could be related, or unrelated, in the same ways as two glTF meshes each containing a single primitive. Two primitives could have:
same vertex attributes but different indices
same vertex attributes AND indices, but different materials
no shared vertex attributes or indices
entirely different draw modes (POINTS, LINES, TRIANGLES, ...)
So unless you're fully in control of the files you're loading, the default and safest assumption would be to treat each primitive as a completely separate mesh. If there are more specific cases you want to check for (like the first two bullets above), you can always add that as a later optimization.
If you're loading a glTF file into three.js, each primitive will become a separate THREE.Mesh under a common THREE.Group.
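For example, here is a minimal sketch of enumerating those meshes after load, assuming the GLTFLoader addon and a hypothetical model path:

import * as THREE from 'three';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

new GLTFLoader().load('model.gltf', (gltf) => {
  // Each glTF primitive arrives as its own THREE.Mesh, so a face
  // index is only meaningful relative to one primitive's geometry.
  gltf.scene.traverse((object) => {
    if (object.isMesh) {
      const index = object.geometry.index;
      const vertexCount = index ? index.count : object.geometry.attributes.position.count;
      console.log(object.name, 'triangles:', vertexCount / 3);
    }
  });
});

In other words, a stable face reference generally needs a primitive (mesh) identifier plus a face index, not a single number.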
For further details, see the glTF specification section on Meshes.

Related

Is it more performant for three.js to load a mesh that's already been triangulated than a mesh using quads?

I've read that Three.js triangulates all mesh faces, is that correct?
Then I realized that most of the glTF models I've been using have quad faces. It's very easy to triangulate faces in Blender, so I'm curious whether pre-triangulating the faces will result in a quicker load of the mesh.
Thanks in advance, and if you have any other performance tips on three.js and gltf's (besides those listed at https://discoverthreejs.com/tips-and-tricks/) that would be super helpful!
glTF, in its current form, does not support quad faces, only triangles. Current glTF exporters (including Blender's) triangulate the model when creating the glTF file. Some importers will automatically try to merge triangles back into quads on import.
By design, glTF stores its data in a similar manner to WebGL's vertex attributes, such that it can render efficiently, with minimal pre-processing. But there are some things you can do when creating a model, to help it reach these goals:
Combine materials when possible, to reduce the number of draw calls.
Combine meshes/primitives when possible, also to reduce draw calls (see the sketch after this list).
Be aware that discontinuous normals/UVs increase vertex count (again because of vertex attributes).
Avoid creating textures filled with solid colors. Use Blender's default color/value node inputs instead.
Keep texture sizes web-friendly and power-of-two. Mobile clients sometimes can't handle anything larger than 2048x2048; you might also try 1024x1024, etc.
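As a sketch of the "combine meshes" tip, assuming a recent three.js build where BufferGeometryUtils exports mergeGeometries (older builds call it mergeBufferGeometries):

import * as THREE from 'three';
import { mergeGeometries } from 'three/addons/utils/BufferGeometryUtils.js';

// Two static parts that share one material can be merged into a
// single geometry, and therefore a single draw call.
const partA = new THREE.BoxGeometry(1, 1, 1);
const partB = new THREE.SphereGeometry(0.5, 16, 12).translate(2, 0, 0);

const merged = mergeGeometries([partA, partB]);
const mesh = new THREE.Mesh(merged, new THREE.MeshStandardMaterial());

This only pays off when the parts never need to move or be styled independently.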

Three.js indexed BufferGeometry vs. InstancedBufferGeometry

I'm trying to learn more about performant geometries in THREE.js, and have come to understand that an indexed BufferGeometry and InstancedBufferGeometry are the two most performant geometry types.
My understanding so far is that in an indexed BufferGeometry, vertices that are re-used in a geometry are only added to the geometry once, and each instance of a given re-used vertex is referenced by its index position in the vertex array.
My understanding of the InstancedBufferGeometry is that this geometry allows one to create a "blueprint" of an object, send one copy of that object's vertices to a shader, then use custom attributes to modify each copy of the blueprint's position, rotation, scale, etc. [source]
I'd like to better understand: are there cases in which an indexed BufferGeometry will be more performant than the InstancedBufferGeometry?
Also, in the InstancedBufferGeometry, are there WebGL maximum parameters (such as maximum vertices per mesh) that one must consider so as to avoid making a mesh too large? How are the vertices in an InstancedBufferGeometry counted?
If anyone could help clarify the situations in which indexed BufferGeometry and InstancedBufferGeometry should be used, and the performance ceilings of InstancedBufferGeometry, I'd be very grateful.
[...] IndexedBufferGeometry and InstancedBufferGeometry are the two most performant geometry types.
Yes, BufferGeometries in general are the most performant way to deal with geometry data, as they store the data in exactly the format used for communication with the GPU via WebGL. Any plain Geometry is internally converted to a BufferGeometry before rendering.
You are also correct in your descriptions of the indexed and instanced geometries, but I'd like to note one more detail: in an indexed geometry, the instructions telling the GPU how to assemble the triangles are separated from the vertex data and handed to the GPU in a special index attribute (as opposed to being an implied part of the vertex order, as in non-indexed arrays).
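For illustration, a minimal sketch of that split in plain three.js (the quad data is made up):

import * as THREE from 'three';

// Four unique vertices describe a quad...
const positions = new Float32Array([
  0, 0, 0,   1, 0, 0,   1, 1, 0,   0, 1, 0,
]);

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

// ...and the index attribute tells the GPU how to assemble two
// triangles from them, re-using vertices 0 and 2.
geometry.setIndex([0, 1, 2, 2, 3, 0]);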
I'd like to better understand: are there cases in which an IndexedBufferGeometry will be more performant than the InstancedBufferGeometry?
They do different things at different levels, so I don't think there are many use-cases where a choice between them makes much sense.
In fact, you can even create an instanced geometry whose "blueprint" geometry is an indexed BufferGeometry.
Let's dive into the details a bit. An instanced geometry allows you to render multiple "clones" of the same "blueprint" geometry in a single draw call.
The first part of this, the creation of the blueprint, is identical to rendering a single geometry. For this, the attributes (positions, normals, uv-coordinates and possibly the index for an indexed geometry) need to be transferred to the GPU.
What makes instanced geometries special is a set of extra attributes (in three.js, InstancedBufferAttribute). These control how many times the geometry is rendered and provide instance-specific values. A typical use-case would be an additional vec3 attribute for the instance position and a vec4 attribute for the instance quaternion, but it could really be anything else as well.
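A minimal sketch of that setup in three.js (the instanceOffset name is chosen to match the shader below; older builds use maxInstancedCount instead of instanceCount):

import * as THREE from 'three';

// Copy the blueprint's vertex data into an instanced wrapper.
const blueprint = new THREE.BoxGeometry(1, 1, 1);
const geometry = new THREE.InstancedBufferGeometry();
geometry.index = blueprint.index;
geometry.attributes.position = blueprint.attributes.position;

// One vec3 offset per instance, laid out in a 10x10 grid.
const count = 100;
const offsets = new Float32Array(count * 3);
for (let i = 0; i < count; i++) {
  offsets[i * 3 + 0] = (i % 10) * 2;            // x
  offsets[i * 3 + 1] = Math.floor(i / 10) * 2;  // y
}
geometry.setAttribute('instanceOffset', new THREE.InstancedBufferAttribute(offsets, 3));
geometry.instanceCount = count;

The geometry is then rendered with a ShaderMaterial whose vertex shader reads instanceOffset, as shown next.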
In the vertex-shader, these special attributes look just like any other attribute, and you need to manually apply the instance-specific updates per vertex. So instead of this:
attribute vec3 position;

void main() {
    // modelViewMatrix and projectionMatrix are uniforms that
    // three.js provides to every ShaderMaterial
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
You would have something like this:
attribute vec3 position;
attribute vec3 instanceOffset; // this is the InstancedBufferAttribute

void main() {
    // same blueprint vertex, shifted by the per-instance offset
    gl_Position = projectionMatrix * modelViewMatrix
        * vec4(position + instanceOffset, 1.0);
}
What you don't see here is that the vertex-shader in the instanced version will not only be called once per vertex of your geometry (as is the case for regular rendering) but once per vertex and instance.
So there isn't actually any magic going on, instanced geometries are in fact nothing but a very efficient way to express duplication of entire geometries.
Also, in the InstancedBufferGeometry, are there WebGL maximum parameters (such as maximum vertices per mesh) that one must consider so as to avoid making a mesh too large?
I am not sure about that, but I haven't encountered any so far. If you keep in mind that rendering 1000 instances of an object with 1000 vertices will invoke the vertex-shader a million times, that should help you judge the performance implications.
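If you want to inspect the actual limits on a given device, here is a minimal sketch using plain WebGL (these parameters are standard WebGL, not three.js-specific):

const gl = document.createElement('canvas').getContext('webgl');

// Hard caps that can matter for big instanced meshes:
console.log('max vertex attributes:', gl.getParameter(gl.MAX_VERTEX_ATTRIBS));
console.log('max vertex uniform vectors:', gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS));

// WebGL1 needs this extension for 32-bit indices, i.e. for indexed
// geometries with more than 65535 vertices.
console.log('32-bit indices:', gl.getExtension('OES_element_index_uint') !== null);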
If anyone could help clarify the situations in which IndexedBufferGeometry and InstancedBufferGeometry should be used, and the performance ceilings of InstancedBufferGeometry, I'd be very grateful.
You can (and maybe should) use indexed geometries for almost any kind of geometry. But they are not free of drawbacks:
when using indices, all attributes get the same treatment, so you can't, for instance, use per-face colors in indexed geometries (see Access to faces in BufferGeometry)
for point-clouds or geometries with few repeated vertices, an index will do more harm than good (due to the extra memory/bandwidth needed for the index)
most of the time, though, indices bring a performance benefit:
less memory/bandwidth required for vertex data
GPUs can cache vertex-shader results and re-use them for repeated vertices (so in the optimal case you end up with one VS-invocation per stored vertex, not per index)
For instanced geometries:
if you have a large number of somewhat similar objects whose differences can be expressed in just a few numbers, go for instanced geometries (simple case: render copies of the same object at different locations; complex case: render a forest by varying the tree geometry based on an instance-attribute, or render a crowd of people by varying each person's pose with an instance-attribute)
another thing I found quite inspiring: the rendering of fat lines using instancing, where each line-segment is an instance consisting of 6 triangles (see https://github.com/mrdoob/three.js/blob/dev/examples/js/lines/LineSegmentsGeometry.js by @WestLangley)
Drawbacks:
as it is right now, there is no built-in support for using the regular three.js materials together with instanced geometries; you have to write the shaders yourself (to be precise, there is a way to do it, but it requires some intimate knowledge of how the three.js shaders work).

Three.js "Raycasting Points" example – why are there different types of point cloud?

In the example Interactive Raycasting Points there are 4 different functions to generate the point cloud:
1. generatePointcloud (with buffer geometry)
2. generateIndexedPointcloud (buffer geometry with indices)
3. generateIndexedWithOffsetPointcloud (buffer geometry with added drawcall)
4. generateRegularPointcloud (with normal geometry)
Could someone explain what the difference is between these 4 types, and if there are any performance benefits/certain situations where one is suited more than the others?
Thanks!
The purpose of the example Interactive Raycasting Points is to demonstrate that raycasting against THREE.Points works for a variety of geometry types.
So-called "regular geometry", THREE.Geometry, is the least memory-efficient geometry type, and in general, has longer load times than THREE.BufferGeometry.
BufferGeometry can be "indexed" or "non-indexed". Indexed BufferGeometry, when used with meshes, allows for vertices to be reused; that is, faces that share an edge can share a vertex. In the case of point clouds, however, I do not see a benefit to the "indexed" type.
BufferGeometry with draw calls -- now called groups -- allows for only a subset of the geometry to be rendered, and also allows for a different material index to be associated with each group.
The function generateIndexedWithOffsetPointcloud appears to have been named when draw calls, a.k.a. groups, were called "offsets".
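For reference, a minimal sketch of groups in the current API (the geometry and split sizes are illustrative):

import * as THREE from 'three';

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array([
  0, 0, 0,   1, 0, 0,   1, 1, 0,   0, 1, 0,
]), 3));
geometry.setIndex([0, 1, 2, 2, 3, 0]);

// First triangle uses materials[0], second triangle materials[1].
geometry.addGroup(0, 3, 0);
geometry.addGroup(3, 3, 1);

const materials = [
  new THREE.MeshBasicMaterial({ color: 0xff0000 }),
  new THREE.MeshBasicMaterial({ color: 0x0000ff }),
];
const mesh = new THREE.Mesh(geometry, materials);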
I do not believe raycasting in three.js honors groups. I believe it raycasts against the entire geometry. In fact, I am not sure groups are working correctly at all in the example you reference.
three.js r.73

three.js, clothing and shape keys

How can I dress a human body? I have imported the body model and the t-shirt as two separate meshes. The human body includes shape keys.
But when I modify the morphTargetInfluences key of the body, the t-shirt doesn't fit the new body shape.
How can I make the t-shirt fit when the key changes its value? How can I do that using three.js?
I'm using version 1.4.0 of the three.js exporter (three.js r71) and Blender 2.75a.
The point is, your morph targets are only present in your character model and unfortunately won't magically fit the cloth. Apply the morph to the cloth too in your editing tool and morph both equally; this works without extra effort.
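A minimal sketch of the "morph both equally" approach, assuming a scene that already contains both meshes, exported with matching shape keys (the object names are hypothetical):

// Both meshes carry the same morph targets from Blender.
const body = scene.getObjectByName('Body');
const shirt = scene.getObjectByName('Shirt');

function setShape(influence) {
  // Drive both meshes with the same value so the shirt follows the body.
  body.morphTargetInfluences[0] = influence;
  shirt.morphTargetInfluences[0] = influence;
}

setShape(0.5);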
I'm actually also working on a solution for wearable cloth. I'll give shared vertex buffers a try, where the cloth vertices "connect" to the vertices of the base model with a relative offset, so you would only have to assign the cloth once instead of applying and exporting whole morph-target sets.
The downside would be that the vertices have to stay the same: once you modify the mesh, you'd have to export all the related cloth again. This could basically be solved by an automated process, such as one that searches for the nearest vertices, but cloth is usually extruded from the base mesh to fit perfectly without intersections, so this isn't really surprising.

In GLGE, is it possible to specify the face of a mesh that a texture should be mapped to? (WebGL as well)

I'm trying to make an environment map in the form of a cube that has images mapped onto particular faces to give the illusion of being in the area (sort of like Google's Street View).
I'm trying to do it in GLGE; however, with my limited experience, I only know how to map one texture to a whole mesh (which is what I'm doing at the moment). If I were to create 6 different textures, would it be possible for me to specify the faces that those textures should be loaded onto?
You could generate the six faces of the cube as separate objects and use a different texture for each. An alternative is to set different texture coordinates for the different faces of the cube.
If you want ready-to-run code, three.js has a couple of skybox examples. E.g. http://mrdoob.github.com/three.js/examples/webgl_panorama_equirectangular.html
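In three.js terms, here is a hedged sketch of the first suggestion, using a box with one material per face (the texture paths are hypothetical):

import * as THREE from 'three';

const loader = new THREE.TextureLoader();

// BoxGeometry defines one group per face, so an array of six
// materials maps one texture onto each face.
const materials = ['px', 'nx', 'py', 'ny', 'pz', 'nz'].map((name) =>
  new THREE.MeshBasicMaterial({
    map: loader.load(name + '.jpg'),
    side: THREE.BackSide, // we view the cube from the inside
  })
);

const skybox = new THREE.Mesh(new THREE.BoxGeometry(100, 100, 100), materials);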
You should look at "UV Mapping". Check this example. Roughly, UVs describe how the polygons are mapped (in x,y) on the texture.
Sounds like you want a cube map texture: it takes six separate images, and you look it up with a direction vector rather than (u, v) coordinates. Cube maps are the usual way to do environments, and they are available in WebGL.
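In three.js, that looks roughly like this (again, the six image paths are hypothetical):

import * as THREE from 'three';

// A cube map is sampled with a direction vector, so it needs no UVs.
const cubeTexture = new THREE.CubeTextureLoader().load([
  'px.jpg', 'nx.jpg', // +x, -x
  'py.jpg', 'ny.jpg', // +y, -y
  'pz.jpg', 'nz.jpg', // +z, -z
]);

const scene = new THREE.Scene();
scene.background = cubeTexture; // rendered as the environment backdrop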