How to render only the surface geometry of a Collada model in three.js.
I have a 40 MB Collada (.dae) file that I import into my scene using the three.js ColladaLoader. I then show the same model from four camera views and render all four views with a basic rotation animation. The problem is that rendering is slow (due to the low-performance mini PC I am using).
Here's a folder with the optimized models (in .dae and .obj). Following an answer on Blender's Stack Exchange, I managed to significantly reduce the model: 73.4926% fewer vertices and 56.0847% fewer faces.
*I scaled the models to 1000× their original size; watch out for that.
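For the four-views part of this setup, one common pattern is to render the same scene four times into quadrants of a single canvas using the renderer's scissor and viewport. The sketch below assumes `renderer`, `scene`, and a four-element `cameras` array already exist (they are not from the question); only the quadrant math is concrete.

```javascript
// Pure helper: viewport rectangle for quadrant i (0..3) of a w×h canvas.
function quadrantRect(i, w, h) {
  var halfW = Math.floor(w / 2);
  var halfH = Math.floor(h / 2);
  return {
    x: (i % 2) * halfW,
    y: Math.floor(i / 2) * halfH,
    width: halfW,
    height: halfH
  };
}

// Sketch: render one scene from four cameras into the four quadrants.
// `renderer` is a THREE.WebGLRenderer, `cameras` holds four cameras.
function renderFourViews(renderer, scene, cameras, w, h) {
  renderer.setScissorTest(true);
  for (var i = 0; i < 4; i++) {
    var r = quadrantRect(i, w, h);
    renderer.setViewport(r.x, r.y, r.width, r.height);
    renderer.setScissor(r.x, r.y, r.width, r.height);
    renderer.render(scene, cameras[i]);
  }
}
```

Note that this still costs four full render passes per frame, so reducing geometry and draw calls (as done above) matters more than anything in the render loop itself.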
Related
I've read that three.js triangulates all mesh faces — is that correct?
Then I realized that most of the glTF models I've been using have quad faces. It's very easy to triangulate faces in Blender, so I'm curious whether pre-triangulating the faces will make the mesh load faster.
Thanks in advance, and if you have any other performance tips on three.js and gltf's (besides those listed at https://discoverthreejs.com/tips-and-tricks/) that would be super helpful!
glTF, in its current form, does not support quad faces, only triangles. Current glTF exporters (including Blender's) triangulate the model when creating the glTF file. Some importers will automatically try to merge the triangles back into quads on import.
By design, glTF stores its data in a similar manner to WebGL's vertex attributes, such that it can render efficiently, with minimal pre-processing. But there are some things you can do when creating a model, to help it reach these goals:
Combine materials when possible, to reduce the number of draw calls.
Combine meshes/primitives when possible, also to reduce draw calls.
Be aware that discontinuous normals/UVs increase vertex count (again because of vertex attributes).
Avoid creating textures filled with solid colors. Use Blender's default color/value node inputs instead.
Keep texture sizes web-friendly, and power-of-two. Mobile clients sometimes can't handle anything larger than 2048x2048. Might also try 1024x1024, etc.
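A rough way to see why the first two tips matter: each mesh with its own material costs at least one draw call, while meshes sharing a material can be merged into one. The sketch below only counts draw calls over a hypothetical list of plain placeholder objects (not real `THREE.Mesh` instances); in three.js you would actually merge the geometries (e.g. with `BufferGeometryUtils`) once they share a material.

```javascript
// Sketch: estimate draw calls before/after merging meshes that share a
// material. Mesh objects here are plain placeholders for illustration.
function drawCallEstimate(meshes) {
  var before = meshes.length; // one call per separate mesh
  var materials = new Set(meshes.map(function (m) { return m.material; }));
  var after = materials.size; // one call per merged material group
  return { before: before, after: after };
}

var est = drawCallEstimate([
  { name: 'wall_a', material: 'brick' },
  { name: 'wall_b', material: 'brick' },
  { name: 'roof',   material: 'tile'  }
]);
// est.before === 3, est.after === 2
```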
I'm trying to load a skinned mesh from Blender into a THREE.js scene, but it... looks a little odd:
In the scene there is a skeleton that was loaded from the animation only, the mesh modified for the game, and a small one that is loaded directly with the three.js ObjectLoader.
It's supposed to look more like this (from blender):
I found it!
When exporting from Blender, I set the number of bone influences to 4. It appears that some vertices were being influenced by more than two bones, so when only two influencing bones per vertex were exported, the mesh distorted.
I'm wrapping a 3D model with texture in three js. But in some areas the image is stretching even though my texture is in good resolution.
This is a UV-mapping issue, not a three.js issue. Ask over on the 3D modelling or game development sites.
I have a basic THREE.js scene, created in Blender, including cubes and rotated planes. Is there any way that I can automatically convert this THREE.js scene into a CANNON.js world ?
Thanks
Looking at the three.js Blender exporter, it appears to export only mesh data, with no information about the mathematical shapes (boxes, planes, spheres, etc.) that Cannon.js needs to work. You could try importing your meshes directly into Cannon.js using its Trimesh class, but this would sadly only work for collisions against spheres and planes.
What you need to feed Cannon.js is mathematical geometry data, telling it which triangles in your mesh represent a box (or plane) and where its center of mass is.
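To make that concrete, here is a minimal sketch of deriving box data from a mesh's axis-aligned bounds (min/max corners). Only the math below is executed; the comment names the Cannon.js calls (`CANNON.Box` taking a `CANNON.Vec3` of half-extents) you would feed the result into.

```javascript
// Sketch: derive half-extents and center from an axis-aligned bounding
// box. In Cannon.js you would then construct
//   new CANNON.Box(new CANNON.Vec3(hx, hy, hz))
// and position the body at the center.
function boxFromBounds(min, max) {
  return {
    halfExtents: {
      x: (max.x - min.x) / 2,
      y: (max.y - min.y) / 2,
      z: (max.z - min.z) / 2
    },
    center: {
      x: (min.x + max.x) / 2,
      y: (min.y + max.y) / 2,
      z: (min.z + max.z) / 2
    }
  };
}

var b = boxFromBounds({ x: -1, y: 0, z: -2 }, { x: 1, y: 4, z: 2 });
// b.halfExtents → { x: 1, y: 2, z: 2 }, b.center → { x: 0, y: 2, z: 0 }
```

This is only exact when the mesh really is an axis-aligned box; for rotated boxes you would compute the extents in the mesh's local space before applying its world transform.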
A common (manual) workflow for creating 3D WebGL physics, is importing the 3D models into a WebGL-enabled game engine (like Unity, Goo Create or PlayCanvas). In the game engine you can add collider shapes to your models (boxes, planes, spheres, etc), so the physics engine can work efficiently. You can from there preview your physics simulation and export a complete WebGL experience.
Going to post another answer since there are a few new options to consider here...
I wrote a simple mesh2shape(...) helper that can convert (one object at a time) from THREE.Mesh to CANNON.Shape objects. It doesn't support certain features, such as heightmaps/terrain.
Example:
var shape = mesh2shape(object3D, { type: mesh2shape.Type.BOX });
There is an (experimental!) BLENDER_physics extension for the glTF format that includes physics data with a model. You could add physics data in Blender, export to glTF, and then modify THREE.GLTFLoader to pass the physics data along to your application, helping you construct CANNON.js objects.
I'm trying to fit the human skeleton completely inside the human body then rotate both meshes, but I'm not getting the result expected. I need your help.
The human integument 3D model was obtained from MakeHuman, I then bought a different 3D human skeleton from elsewhere to fit it inside the human integument model. The skeleton model is significantly larger than the integument model, so I used Blender to scale down the skeleton. Within Blender, the skeleton fit nicely inside the integument.
My problems start when I integrate those two models into iOS.
First problem: when both the skeleton and integument models are loaded, the skeleton mesh node still appears much larger than the human integument, although it was already scaled down in Blender. I had to scale it down again using Cocos3D's uniformScale property in order to fit it inside the integument model. Note that both mesh nodes are positioned at exactly the same distance from the camera.
Second problem: as I rotate both mesh nodes, the skeleton mesh node begins to surface and bleed through the integument mesh node. Both have the exact same rotation vector and the same origin.
Help is much needed and appreciated.
Thanks to Bill Hollings, this problem was solved by adding the skeleton as a child node of the integument model.