three.js, clothing and shape keys

How can I dress a human body? I have imported the body model and the T-shirt as two separate meshes. The human body includes shape keys.
But when I modify the morphTargetInfluences key of the body, the T-shirt doesn't fit the new body shape.
How can I make the T-shirt fit when the key changes value? How can I do that with three.js?
I'm using version 1.4.0 of the three.js exporter (three.js r71) and Blender 2.75a.

The point is, your morph targets are only present in your character model and unfortunately won't magically fit the cloth. Apply the morph to the cloth too in your editing tool and morph both equally; this works without extra effort.
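For example, here is a minimal sketch of keeping both meshes in sync at runtime, assuming the shirt was exported with the same shape keys as the body (the key name 'muscular' is a hypothetical placeholder):

```js
// Assumes body and shirt were exported with identical shape keys, so
// both morph target dictionaries contain the same key names.
function setShapeKey(body, shirt, name, value) {
  const i = body.morphTargetDictionary[name];
  const j = shirt.morphTargetDictionary[name];
  body.morphTargetInfluences[i] = value;
  shirt.morphTargetInfluences[j] = value; // keep the cloth in sync
}

setShapeKey(bodyMesh, shirtMesh, 'muscular', 0.7); // hypothetical key name
```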
I'm actually also working on a solution for wearable cloth. I'll give shared vertex buffers a try, where the cloth vertices "connect" to the vertices of the body model with a relative offset, so you would only have to assign the cloth once instead of applying and exporting whole morph target sets at all.
The downside would be that your vertices have to stay the same: once you modify the base mesh, you'd have to export all related cloth again. This can basically be solved by an automated process, such as one that searches for the nearest vertices, but cloth is usually extruded from the base mesh so it "fits" perfectly without intersections, so this isn't really a surprising requirement.

Related

three.js: When to move / rotate geometry and when mesh?

I compose multiple STLs for 3D printing / milling. For that I also use CSG and need some raytracing for detecting features of the models.
My scene is pretty much static; I just have to move the models around to arrange them. For this use case I'm not sure which approach to moving / rotating the models is right.
Currently I manipulate the BufferGeometries directly, so everything in the geometry is as it is in the real world: each position, each normal, with no conversion between local and world coordinates.
On the other hand, I could achieve the same thing by transforming the meshes, which means changing just a matrix.
To me, working with the mesh feels more suited to animation, while working with the geometry manipulates the real object, which is my intention.
I'm wondering when one would translate / rotate the geometry and when the mesh. I know that manipulating the geometry is harder on the CPU, but that's not a problem for my use case.
Geometry can be translated so that subsequent transformations (such as scale or rotation) originate from a preferred pivot point. Meshes can share a geometry. There are unique use cases for either, if you care to memorize the list. Sometimes I integrate preexisting code samples; sometimes the decision is made for me by some aspect of the process. As for the properties that behave similarly, it comes down to which is more convenient. I like the pattern of modifying an Object3D dummy with those methods and then updating the geometry from its matrix. There's a whole book on normals, but I didn't write it, sadly...
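As an illustration of that dummy pattern, here is a rough sketch; it assumes a recent three.js release where the method is BufferGeometry.applyMatrix4 (older versions call it applyMatrix):

```js
import * as THREE from 'three';

// Bake a transform into the geometry itself, so every vertex position
// ends up in world coordinates and the mesh keeps an identity matrix.
const dummy = new THREE.Object3D();
dummy.position.set(10, 0, 0);
dummy.rotation.y = Math.PI / 4;
dummy.updateMatrix();
geometry.applyMatrix4(dummy.matrix);

// The mesh-level alternative: vertices stay local, only the matrix changes.
mesh.position.set(10, 0, 0);
mesh.rotation.y = Math.PI / 4;
```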

Is it more performant for three.js to load a mesh that's already been triangulated than a mesh using quads?

I've read that three.js triangulates all mesh faces; is that correct?
Then I realized that most of the glTF models I've been using have quad faces. It's very easy to triangulate faces in Blender, so I'm curious whether pre-triangulating the faces will result in a quicker load of the mesh.
Thanks in advance, and if you have any other performance tips on three.js and gltf's (besides those listed at https://discoverthreejs.com/tips-and-tricks/) that would be super helpful!
glTF, in its current form, does not support quad faces, only triangles. Current glTF exporters (including Blender's) triangulate the model when creating the glTF file, so the file contains triangles either way. Some importers will automatically try to merge triangles back into quads on import.
By design, glTF stores its data in a similar manner to WebGL's vertex attributes, such that it can render efficiently, with minimal pre-processing. But there are some things you can do when creating a model, to help it reach these goals:
Combine materials when possible, to reduce the number of draw calls.
Combine meshes/primitives when possible, also to reduce draw calls (see the sketch after this list).
Be aware that discontinuous normals/UVs increase vertex count (again because of vertex attributes).
Avoid creating textures filled with solid colors. Use Blender's default color/value node inputs instead.
Keep texture sizes web-friendly, and power-of-two. Mobile clients sometimes can't handle anything larger than 2048x2048. Might also try 1024x1024, etc.
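For the mesh-combining tip, a minimal sketch using three.js's BufferGeometryUtils helper (named mergeBufferGeometries in older releases):

```js
import * as THREE from 'three';
import { mergeGeometries } from 'three/addons/utils/BufferGeometryUtils.js';

// Geometries that share one material can be collapsed into a single
// geometry, and therefore a single draw call.
const merged = mergeGeometries([geomA, geomB, geomC]);
scene.add(new THREE.Mesh(merged, sharedMaterial));
```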

Creating Heatmap Over 3D Model From Vector 3 Point Data

I am attempting to render a flat, dynamically created heatmap on top of a 3D model that is loaded from an OBJ (or STL).
I am currently loading and rendering an OBJ with three.js. I have Vector3 points that I am currently drawing as simple red cubes (image below). These data points have all been raycast onto my OBJ's mesh and lie on its surface. The Vector3 points are loaded from an external data source and will change depending on what data is being viewed/collected.
I would like to render my Vector3 point data as a heatmap on the surface of my OBJ. Here are some examples illustrating the type of visual effect I am trying to achieve:
I feel like vertex coloring would be the way to achieve this, but my issue is that my OBJ model does not have enough tessellation for it. As you can see, many red dots fall on each face. I am struggling to find a way to paint over my object's mesh with colors exactly where my red point data lies. I assumed I would need to convert my scattered Vector3 points into a mesh, but cannot find a method to do so.
I've looked at the possibility of generating a texture, but 1) I do not have a UV map for my OBJs and do not see a way to programmatically generate them and 2) I am a bit lost on how I would correlate vector3 point data to UV points.
I've looked at using shaders, but my Vector3 point data appears to be too large to pass to a shader (it could be hundreds of thousands of points). I also feel it is not the right approach to re-render the heatmap every frame; I would rather render it only once on load.
I've looked into isosurfaces with point clouds and the marching cubes algorithm, but I don't think this is the right direction, since my data is only loosely a point cloud and I am unsure how I would keep the result smooth along the surface of my OBJ mesh.
Although I would prefer to keep everything in JavaScript for viewing in the browser, I am open to doing server side processing in any language/program with REST so long as it can be automated without human intervention, and pushed back to the browser for rendering.
Any suggestions or guidance is appreciated.
I'm only guessing, but it seems like you first need UV coordinates that map every triangle to a texture. Rather than do this by hand, I'd suggest using a modeling package; most have some way of automatically and uniformly mapping every triangle to a texture. In Blender, for example, Smart UV Project can do this.
Next, put the heatmap into the texture: compute which triangles are affected by each dot (your raycasting), look up their texture coordinates, project each dot into texture space, and write the colors into that part of the texture. I'm also guessing you can't treat the points as exact; heat that lands near the edge of a triangle probably needs to bleed over into the adjacent triangle, but that adjacent triangle might be using a completely different part of the texture.
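A rough sketch of that texture-space splatting in three.js, assuming the model now has UVs (e.g. from Blender's Smart UV Project) so each raycast intersection carries a uv property. It deliberately ignores the seam-bleed problem described above:

```js
// Back the model's texture with a canvas we can draw heat splats into.
const canvas = document.createElement('canvas');
canvas.width = canvas.height = 2048;
const ctx = canvas.getContext('2d');

const texture = new THREE.CanvasTexture(canvas);
mesh.material.map = texture;

// Splat one raycast hit as a soft red dot at its UV location.
function splat(intersection, radius = 8) {
  const { x, y } = intersection.uv;    // 0..1 UV of the hit point
  const px = x * canvas.width;
  const py = (1 - y) * canvas.height;  // canvas Y runs downward
  const g = ctx.createRadialGradient(px, py, 0, px, py, radius);
  g.addColorStop(0, 'rgba(255, 0, 0, 0.35)');
  g.addColorStop(1, 'rgba(255, 0, 0, 0)');
  ctx.fillStyle = g;
  ctx.fillRect(px - radius, py - radius, radius * 2, radius * 2);
  texture.needsUpdate = true;
}
```

Since the points are splatted once into the texture rather than evaluated per frame, this also satisfies the "render it only once on load" requirement.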

Fit and rotate two 3D mesh nodes in parallel

I'm trying to fit a human skeleton completely inside a human body, then rotate both meshes together, but I'm not getting the expected result. I need your help.
The human integument 3D model was obtained from MakeHuman, I then bought a different 3D human skeleton from elsewhere to fit it inside the human integument model. The skeleton model is significantly larger than the integument model, so I used Blender to scale down the skeleton. Within Blender, the skeleton fit nicely inside the integument.
My problems start when I integrate those two models into iOS.
First problem: with both the skeleton and integument models loaded, the skeleton mesh node still appears much bigger than the human integument, although it was already scaled down in Blender. I had to scale it down again using Cocos3D's uniformScale property in order to fit it inside the integument model. Note that both mesh nodes are positioned at exactly the same distance from the camera.
Second problem: as I rotate both mesh nodes, the skeleton mesh node begins to surface and bleed through the integument mesh node. Both have the exact same rotation vector and the same origin.
Help is much needed and appreciated.
Thanks to Bill Hollings, this problem was solved by adding the skeleton as a child node of the integument model.
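For anyone hitting the same problem in three.js rather than Cocos3D, the equivalent of that fix is plain scene-graph parenting; the child then inherits the parent's scale and rotation automatically:

```js
integument.add(skeleton);      // skeleton becomes a child node
integument.rotation.y += 0.1;  // both meshes now rotate together
```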

How to remove a section of a CubeGeometry?

At the moment I'm using ThreeCSG/CSG to subtract a small cube from a much larger cube. This works fine, but only the look of it changes, not the actual geometry. So when I use PhysiJS (a physics engine) on another cube, it doesn't fall into the hole but acts as it normally would.
Is there any way I can actually remove a section from a CubeGeometry so that objects can fall into it - not just for display purposes? Thanks!
ThreeCSG does change the geometry, by which I mean geometry in the sense of three.js: the collection of vertices, faces, and so on. I think what you mean is that ThreeCSG does not change the physics-based properties of your object.
According to https://github.com/chandlerprall/Physijs/wiki/Basic-Shapes, it appears you have to use Physijs.ConcaveMesh, as it "matches any concave geometry you have, i.e. arbitrary mesh", and it is the only shape with a chance of supporting a non-convex physical object.
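A sketch of what that could look like, assuming resultGeometry is the geometry produced by the ThreeCSG subtraction:

```js
// Physijs.createMaterial wraps a three.js material with friction and
// restitution. ConcaveMesh builds its collision shape from the actual
// triangles, so the carved hole participates in physics; mass 0 keeps
// the ground cube static.
const material = Physijs.createMaterial(
  new THREE.MeshLambertMaterial({ color: 0x888888 }),
  0.8, // friction
  0.2  // restitution
);
const ground = new Physijs.ConcaveMesh(resultGeometry, material, 0);
scene.add(ground);
```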
