What is the best way to group a bunch of vertices in Blender? After exporting, I want to manipulate only this group of vertices in ThreeJS (e.g. change their position).
I tried:
Using the integrated .obj exporter from Blender (v2.73) and the .json exporter from ThreeJS (add-on "io_three"), but both ignored, for example, Blender's "Vertex Groups" and did not separate the vertices into groups.
With the .obj exporter I somehow managed to get a node (type "Object3D") with 3 children (type "Mesh") instead of 2. The last child contained some separated vertices of the mesh (but not the ones I had selected as a Vertex Group in Blender). This would probably be a good solution if I knew how those vertices ended up in the extra child node.
Wavefront (.obj) has no concept of grouping or nesting objects.
There is, however, a grouping mechanism that lets you group faces with different materials using the g annotation (http://www.fileformat.info/format/wavefrontobj/egff.htm#WAVEOBJ-DMYID.3.1). Something roughly comparable exists in Blender in the form of Vertex Groups, but the default Wavefront exporter in Blender doesn't support that kind of grouping.
Once a usemtl attribute (I don't know which Blender setting caused it to appear in my example) shows up anywhere in the face definitions of an .obj file, ThreeJS will separate the faces after that attribute into a new child node.
My solution: put all Blender objects that should belong to one group into a single object. The exporter then creates an object o for each Blender object, and ThreeJS creates a child node for each of these objects.
The only problem: manipulating the original objects in Blender (now combined into one object) becomes more difficult.
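A minimal sketch of how those per-object children can then be addressed on the three.js side, assuming the export produced one o entry named "my_group" for the joined Blender object and that a scene already exists (file name, object name and loader setup are placeholders, not taken from the question):

    const loader = new THREE.OBJLoader();
    loader.load('model.obj', (root) => {
      scene.add(root);

      // each `o` entry in the .obj becomes one child of the loaded root node,
      // so the joined Blender object can be looked up by its name
      const group = root.getObjectByName('my_group');
      if (group) {
        group.position.y += 1; // manipulate only this part of the export
      }
    });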
I want to be able to reference a specific face on a specific mesh in a glTF file. I am confused by the notion of primitives, however. Normally, I would use the face index (i.e. in three.js) and I would always be able to reference the same face. However, meshes in glTF sometimes have multiple primitives. Do these use the same face buffer? Do they at least use consecutive face buffers? I am wondering if I can reference a face in a mesh using just one number (i.e. a face index) or if I also need a primitive index.
Do mesh primitives share a pool of vertices?
Two glTF primitives in a single mesh could be related, or unrelated, the same ways as two glTF meshes each containing a single primitive. Two primitives could have:
same vertex attributes but different indices.
same vertex attributes AND indices, but different materials.
no shared vertex attributes or indices
entirely different draw modes (POINTS, LINES, TRIANGLES, ...)
So unless you're fully in control of the files you're loading, the default and safest assumption would be to treat each primitive as a completely separate mesh. If there are more specific cases you want to check for (like the first two bullets above), you can always add that as a later optimization.
If you're loading a glTF file into threejs, each primitive will become a separate THREE.Mesh under a common THREE.Group.
For further details, see the glTF specification section on Meshes.
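As a rough illustration of the three.js case above (file name, the surrounding scene, and the way the loader add-on is included are assumptions, not taken from the question):

    const loader = new THREE.GLTFLoader();
    loader.load('model.gltf', (gltf) => {
      gltf.scene.traverse((child) => {
        if (child.isMesh) {
          // one THREE.Mesh per glTF primitive; its index buffer (if any)
          // refers only to this primitive's own vertex attributes
          console.log(child.name, child.geometry.index);
        }
      });
      scene.add(gltf.scene);
    });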
I'm new to three.js, but I have managed to make a polyhedron with one texture. Doing it with multiple textures and with a caption, however, is somewhat more advanced.
Applying multiple diffuse textures in three.js requires the use of multiple materials. THREE.DodecahedronGeometry, as well as all other geometry classes derived from THREE.PolyhedronGeometry, does not support multiple materials out of the box.
If you still want to use such a geometry with multiple materials, you need to define so-called group data. But since you are a beginner in three.js, it might be easier to create your mesh in a DCC tool like Blender, export it to glTF and then import it into your application via THREE.GLTFLoader.
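A hedged sketch of that group-based approach, assuming a current three.js release where THREE.DodecahedronGeometry is a non-indexed BufferGeometry and two textures (textureA, textureB) have already been loaded; the 50/50 split below is arbitrary, purely for illustration:

    const geometry = new THREE.DodecahedronGeometry(1);
    const vertexCount = geometry.attributes.position.count;

    // assign the first half of the triangles to material 0, the rest to material 1
    geometry.clearGroups();
    geometry.addGroup(0, vertexCount / 2, 0);
    geometry.addGroup(vertexCount / 2, vertexCount / 2, 1);

    const materials = [
      new THREE.MeshBasicMaterial({ map: textureA }),
      new THREE.MeshBasicMaterial({ map: textureB })
    ];

    const mesh = new THREE.Mesh(geometry, materials);
    scene.add(mesh);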
three.js R107
For a large number of objects (graph nodes in my case) which all have the same geometry but different colors, what is more efficient:
1) creating a mesh and then cloning the mesh and material of the mesh (with mesh.traverse) for each node
2) create new mesh with cloned material for each node
The only difference between creating an object via new and via .clone() is that the latter also copies all properties from the source object. If you don't need this, it's better to just create your meshes via the new operator.
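A minimal sketch of option 2, sharing one geometry and cloning only the material (the nodes array and its fields are made up for illustration):

    const baseGeometry = new THREE.SphereGeometry(0.5, 16, 16);
    const baseMaterial = new THREE.MeshStandardMaterial();

    for (const node of nodes) {
      const material = baseMaterial.clone(); // independent color per node
      material.color.set(node.color);

      const mesh = new THREE.Mesh(baseGeometry, material); // geometry stays shared
      mesh.position.set(node.x, node.y, node.z);
      scene.add(mesh);
    }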
Since you have a large number of objects, you might want to consider using instanced rendering in order to reduce the number of draw calls in your application. Right now, you have to draw each node separately; with instanced rendering, the nodes are all drawn at once. There are a couple of examples that demonstrate instanced rendering with three.js:
https://threejs.org/examples/?q=instancing
Using instanced rendering is not a must, but it can be helpful if you run into performance issues. Another option, which is easier to implement but less flexible, is to merge all meshes into one large geometry and use vertex colors. You will have a single geometry, material and mesh object, but an additional vertex color attribute will ensure that your nodes are colored differently.
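A hedged sketch of the instancing route, assuming a newer three.js release that provides THREE.InstancedMesh with setMatrixAt()/setColorAt() (the answer above targets R103, which predates this helper class), and again a made-up nodes array:

    const geometry = new THREE.SphereGeometry(0.5, 16, 16);
    const material = new THREE.MeshStandardMaterial();
    const instanced = new THREE.InstancedMesh(geometry, material, nodes.length);

    const matrix = new THREE.Matrix4();
    const color = new THREE.Color();

    for (let i = 0; i < nodes.length; i++) {
      matrix.setPosition(nodes[i].x, nodes[i].y, nodes[i].z);
      instanced.setMatrixAt(i, matrix);
      instanced.setColorAt(i, color.set(nodes[i].color));
    }

    scene.add(instanced); // all nodes rendered in one draw call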
three.js R103
In the example Interactive Raycasting Points there are 4 different functions to generate the point cloud:
1. generatePointcloud (with buffer geometry)
2. generateIndexedPointcloud (buffer geometry with indices)
3. generateIndexedWithOffsetPointcloud (buffer geometry with added drawcall)
4. generateRegularPointcloud (with normal geometry)
Could someone explain what the difference is between these 4 types, and if there are any performance benefits/certain situations where one is suited more than the others?
Thanks!
The purpose of the example Interactive Raycasting Points is to demonstrate that raycasting against THREE.Points works for a variety of geometry types.
So-called "regular geometry", THREE.Geometry, is the least memory-efficient geometry type, and in general, has longer load times than THREE.BufferGeometry.
BufferGeometry can be "indexed" or "non-indexed". Indexed BufferGeometry, when used with meshes, allows for vertices to be reused; that is, faces that share an edge can share a vertex. In the case of point clouds, however, I do not see a benefit to the "indexed" type.
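A small sketch contrasting the two BufferGeometry variants for points, written with the current API names (setAttribute/setIndex) rather than the r73-era ones; the positions are arbitrary example data:

    const positions = new Float32Array([
      0, 0, 0,
      1, 0, 0,
      0, 1, 0
    ]);

    // non-indexed: every stored vertex is drawn once, in order
    const nonIndexed = new THREE.BufferGeometry();
    nonIndexed.setAttribute('position', new THREE.BufferAttribute(positions, 3));

    // indexed: a separate index buffer decides which vertices are drawn;
    // valuable for meshes with shared vertices, of little benefit for points
    const indexed = new THREE.BufferGeometry();
    indexed.setAttribute('position', new THREE.BufferAttribute(positions, 3));
    indexed.setIndex([0, 1, 2]);

    const pointsA = new THREE.Points(nonIndexed, new THREE.PointsMaterial());
    const pointsB = new THREE.Points(indexed, new THREE.PointsMaterial());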
BufferGeometry with draw calls -- now called groups -- allows for only a subset of the geometry to be rendered, and also allows for a different material index to be associated with each group.
The function generateIndexedWithOffsetPointcloud appears to have been named when draw calls, a.k.a. groups, were called "offsets".
I do not believe raycasting in three.js honors groups. I believe it raycasts against the entire geometry. In fact, I am not sure groups are working correctly at all in the example you reference.
three.js r.73
How can I dress a human body? I have imported the body model and the t-shirt as two separate meshes. The human body includes shape keys.
But when I modify the morphTargetInfluences key of the body, the t-shirt doesn't fit the new body shape.
How can I make the t-shirt fit when the key changes its value? How can I do that using three.js?
I'm using the version 1.4.0 of the Three.js exporter (three.js r71) and Blender 2.75a
The point is, your morph targets are only present in your character model and unfortunately won't magically fit the cloth. Apply the morph to the cloth too in your editing tool and morph both equally; this works without extra effort.
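A minimal sketch of that idea, assuming the t-shirt was given the same shape keys as the body in Blender and exported with them (body, shirt and the key index 0 are placeholders):

    function setBodyShape(influence) {
      body.morphTargetInfluences[0] = influence;
      shirt.morphTargetInfluences[0] = influence; // keep the cloth in sync with the body
    }

    setBodyShape(0.75);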
I'm actually also working on a solution for wearable cloth. I'll give shared vertex buffers a try, where the vertices "connect" to the vertices of the body model with a relative offset, so you would only have to assign the cloth once instead of applying and exporting whole morph target sets.
The downside: your vertices have to stay the same. Once you modify the mesh, you'd have to export all related cloth again. This could basically be solved by an automated process, for example one that searches for the nearest vertices, but cloth is usually extruded from the base mesh so that it "fits" perfectly without intersections, so this isn't really surprising.