I have successfully used three.js to display 3D models. I used OBJLoader and MTLLoader to complete this task (the snippet shows a portion of the working code), but loading large model files takes too much time, and I can't afford that. Is there a way to solve this issue?
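The snippet itself isn't included above, but a minimal sketch of the usual OBJLoader/MTLLoader flow looks like the following; the file names `model.mtl`/`model.obj` and the `scene` variable are assumptions, not taken from the post:

```js
// Minimal sketch of the OBJ + MTL loading flow described above.
// Assumptions: files named model.mtl / model.obj, and an existing `scene`.
import { MTLLoader } from 'three/examples/jsm/loaders/MTLLoader.js';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';

new MTLLoader().load('model.mtl', (materials) => {
  materials.preload(); // prepare the parsed materials before use
  new OBJLoader()
    .setMaterials(materials)
    .load('model.obj', (object) => {
      scene.add(object); // the parsed OBJ arrives as a THREE.Group
    });
});
```

For large files, note that OBJ is a verbose text format; converting the asset to a binary format such as glTF (.glb), optionally with Draco compression, typically shrinks both download and parse time.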
I've designed a 3D model in SketchUp without using any textures, and I'm facing an issue with lagging during mouse move and rotate. When I exported the model in DAE format and imported it into the three.js online editor, mouse movement became very slow; I think an fps drop is occurring. I can't work out what the problem is with the model I designed. I need your suggestions and ideas on how to resolve this issue. Thanks for your support. I've uploaded an image of the 3D model; please take a look.
Object Count: 98,349; Vertices: 2,107,656; Triangles: 702,552
Object Count: 98,349
The object count results in an equal number of draw calls. Such a high value will degrade performance no matter how simple each individual geometry is.
I suggest you redesign the model and merge individual objects as much as possible. Also try to lower the number of vertices and faces.
Keep in mind that three.js does not automatically merge or batch items for rendering, so it's your responsibility to optimize assets. It's best to do this right when designing the model, or in code via methods like BufferGeometryUtils.mergeBufferGeometries() or via instanced rendering.
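As a minimal sketch of the merge approach, assuming a loaded object tree `root`, a `sharedMaterial` all parts can use, and an existing `scene` (all three are stand-ins, not from the original post):

```js
import * as THREE from 'three';
import * as BufferGeometryUtils from 'three/examples/jsm/utils/BufferGeometryUtils.js';

root.updateMatrixWorld(true); // make sure world matrices are current

// Bake each mesh's world transform into a cloned geometry so the
// merged result keeps every part in place.
const geometries = [];
root.traverse((child) => {
  if (child.isMesh) {
    const geometry = child.geometry.clone();
    geometry.applyMatrix4(child.matrixWorld);
    geometries.push(geometry);
  }
});

// One geometry + one material = one draw call. Note: the geometries must
// have matching attribute sets, and newer three.js releases rename this
// helper to BufferGeometryUtils.mergeGeometries().
const merged = BufferGeometryUtils.mergeBufferGeometries(geometries);
scene.add(new THREE.Mesh(merged, sharedMaterial));
```

This trades flexibility for speed: merged parts can no longer be moved or frustum-culled individually.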
I was hoping to display as many as 3,000 instances of the same model at the same time in my app, but it's really slowing down my computer. It's just too much.
I know InstancedMesh is the way to go for something like this so I’ve been following THREE.js’s examples here: https://threejs.org/docs/#api/en/objects/InstancedMesh
The examples are fantastic, but they seem to use very small models, which makes it hard to get a good feel for what the model size limits should be.
For example:
-The spheres used here aren't imported custom 3D models; they're just instances of IcosahedronGeometry
-The Flower.glb model used in this example is tiny: it only has 218 vertices
-And the “Flying Monkeys” here come from a ".json" file, so I can't tell how many vertices that model has
My model, by comparison, has 4,832 vertices - which, by the way, was listed in the "low-poly" category where I found it, so it's not really considered particularly big.
In terms of file size, it's exactly 222 kB.
I tried scaling it down in Blender and re-exporting it; it still came out at 222 kB.
Obviously I can take some “drastic measures”, like:
-Try to re-design my 3D model and make it smaller - but that would greatly reduce its beauty and the overall aesthetics of the project
-I can re-imagine or re-architect the project to display maybe 1,000 models at the same time instead of 3,000
etc.
But being new to THREE.js - and 3D modeling in general - I just wanted to ask the community first whether there are any suggestions or tricks to try before making such radical changes.
-The model I'm importing is in the .glTF format - is that the best format to use or should I try something else?
-All the meshes in it come into the browser as instances of BufferGeometry which I believe is the lightest in terms of memory demands - is that correct?
Are there any other things I need to be aware of to optimize performance?
Some setting in Blender or other 3D modeling software that can reduce model-size?
Some general rules of thumb to follow when embarking on something like this?
Would really appreciate any and all help.
Thanks!
glTF is fine for transmitting geometry and materials - I'd say it's the standard right now. If there's only geometry, I'd look at the OBJ or PLY formats.
The model size only blocks the initial load if we employ instancing on its geometry and material: after that, we simply re-use the already generated geometry and its material.
At the GPU level, instancing means drawing a single mesh with a single material shader, many times. You can override certain inputs to the material for each instance, but it sort of has to be a single material.
— Don McCurdy
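A minimal sketch of that idea, assuming the model is already loaded and that `sourceMesh`, `scene`, and the placement logic are stand-ins (none of them come from the original thread):

```js
import * as THREE from 'three';

// sourceMesh is assumed: one THREE.Mesh pulled out of the loaded glTF scene.
const COUNT = 3000;
const instanced = new THREE.InstancedMesh(sourceMesh.geometry, sourceMesh.material, COUNT);

const dummy = new THREE.Object3D();
for (let i = 0; i < COUNT; i++) {
  // Placeholder placement: scatter instances on a 100x100 plane.
  dummy.position.set((Math.random() - 0.5) * 100, 0, (Math.random() - 0.5) * 100);
  dummy.updateMatrix();
  instanced.setMatrixAt(i, dummy.matrix); // write this instance's transform
}
instanced.instanceMatrix.needsUpdate = true;

scene.add(instanced); // 3,000 copies of the model, one draw call
```

The vertex count of the source mesh still matters (every instance runs the vertex shader), but the per-object CPU overhead collapses to a single draw call.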
Our biggest worry here would be the number of triangles or faces rendered: lower triangle counts are more performant, and so is showing fewer models at a time. For this, you can use some degree of LOD to progressively increase and decrease your models' detail with distance, until you stop rendering them entirely far away (see the sketch after the list below).
Some examples/resources to get you started:
LOD
Instancing Models
Modifying Instances
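As a minimal LOD sketch, assuming three pre-built versions of the same model at decreasing triangle counts (the mesh names and switch distances below are illustrative, not from the thread):

```js
import * as THREE from 'three';

const lod = new THREE.LOD();
lod.addLevel(highDetailMesh, 0);   // shown when the camera is close
lod.addLevel(midDetailMesh, 50);   // swapped in beyond 50 units
lod.addLevel(lowDetailMesh, 200);  // swapped in beyond 200 units
scene.add(lod);

// With lod.autoUpdate left at its default (true), the renderer picks the
// appropriate level each frame based on camera distance; no manual update
// call is needed in the render loop.
```

The reduced-detail meshes are typically produced offline, e.g. with Blender's decimate modifier.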
Recently I started working with models from Mixamo.
To give those models several animations, I first extracted animations from some meshes, put them together, and gave them to all the FBX meshes.
That worked most of the time.
Then I became aware that these animations can simply be downloaded, so I did, using a similar procedure to collect some animations and share them among the meshes.
BUT now the meshes look zombie-like.
Is this a known issue? I know the odds are against me, as many people use Mixamo. If someone has already stumbled upon it, please be so kind and tell me a solution.
Thanks for your time.
I have issues loading some 3D glTF models using three.js on iPad. Loading actually works fine - the models load up fine on desktop computers and Android tablets - but in my specific case the page needs to run on an iPad, and it keeps crashing because it uses up all of the memory trying to render the model (I guess Android gives the browser more memory to use).
My question is how to optimize the model so that it can run on an iPad. My first thought was that the number of vertices/indices etc. affects rendering, but it turned out that a model with more vertices and indices was able to load while the "optimized" model couldn't. We threw the model into the Babylon.js online previewer to see its info, and the thing I noticed is that the older model with more vertices and indices had fewer meshes and fewer draw calls than the new one that doesn't work. So is that something we should focus on optimizing instead of the number of vertices and indices?
The problem is that we need to optimize the model to render on iPad, but I can't figure out which part of the model needs to be optimized, so any help would be much appreciated!
P.S. I tried implementing DRACO compression and DRACOLoader, but it doesn't help: it only compresses the file for transfer, and once the model needs to be rendered on screen the compression no longer matters, because it's essentially still the same 3D data being rendered. I can share code if needed, but I don't think it matters, because there are no issues with the loading; it's just that the model is not optimized.
Oversized textures were the problem. We had textures that were 2048x2048 px but contained just a single flat color, so I reduced all of those textures to 1x1 px and it worked perfectly.
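A minimal sketch for spotting such textures in a loaded scene; the map slot names are the common ones on standard materials, and `gltf.scene` is assumed to come from GLTFLoader:

```js
// Log the resolution of every texture in the loaded scene to find
// oversized maps (e.g. a 2048x2048 image holding a single color).
gltf.scene.traverse((object) => {
  if (!object.isMesh) return;
  const materials = Array.isArray(object.material) ? object.material : [object.material];
  for (const material of materials) {
    for (const slot of ['map', 'normalMap', 'roughnessMap', 'metalnessMap', 'aoMap', 'emissiveMap']) {
      const texture = material[slot];
      if (texture && texture.image) {
        console.log(object.name, slot, `${texture.image.width}x${texture.image.height}`);
      }
    }
  }
});
```

Texture memory is uncompressed on the GPU (roughly width x height x 4 bytes, plus mipmaps), which is why a handful of 2048x2048 maps can exhaust an iPad's budget even when the geometry is small.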
When I load a large (100 MB) FBX model in three.js, Chrome crashes, though it doesn't crash when I load a small FBX model. Can anyone tell me how to fix this?
Load less than 100MB?
100 MB is an extremely large single asset for a page, three.js or otherwise. I highly recommend optimizing your asset; even if you get the contents into memory, three.js may not be able to render them.
Assuming everything in there is absolutely required, you could break the model into smaller chunks and load a whole bunch of assets; that might be possible. You also might have better luck with other 3D solutions (a non-JS webpage) that could handle such a (still quite fat) asset.
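If you do split the asset offline, loading the pieces is straightforward; a minimal sketch, assuming hypothetical chunk files part0.glb through part3.glb exported in glTF form:

```js
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const loader = new GLTFLoader();
const parts = ['part0.glb', 'part1.glb', 'part2.glb', 'part3.glb']; // hypothetical chunks

// Kick off all downloads in parallel and add each chunk to the scene
// as soon as it finishes parsing.
for (const url of parts) {
  loader.loadAsync(url).then((gltf) => {
    scene.add(gltf.scene);
  });
}
```

Splitting also lets the first chunks appear while the rest are still downloading, instead of blocking on a single 100 MB parse.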