THREE JS Imported Model Size and Performance - three.js

I was hoping to display in my app as many as 3,000 instances of the same model at the same time - but it’s really slowing down my computer. It's just too much.
I know InstancedMesh is the way to go for something like this so I’ve been following THREE.js’s examples here: https://threejs.org/docs/#api/en/objects/InstancedMesh
The examples are fantastic, but they seem to use really small models, which makes it hard to get a feel for what the model size limits should be.
For example:
-The Spheres used here aren't imported custom 3D models, they're just instances of IcosahedronGeometry
-The Flower.glb model used in this example is tiny: it only has 218 Vertices on it.
-And the “Flying Monkeys” here come from a ".json" file so I can’t tell how many vertices that model has.
My model by comparison has 4,832 Vertices - which by the way, was listed in the "low-poly" category where I found it, so it's not really considered particularly big.
In terms of file-size, it's exactly 222kb.
I tried scaling it down in Blender and re-exporting it - still came out at 222kb.
Obviously I can take some “drastic measures”, like:
-Try to re-design my 3D model and make it smaller - but that would greatly reduce its beauty and the overall aesthetics of the project
-I can re-imagine or re-architect the project to display maybe 1,000 models at the same time instead of 3,000
etc.
But since I'm new to THREE.js - and to 3D modeling in general - I just wanted to first ask the community if there are any suggestions or tricks to try before making such radical changes.
-The model I'm importing is in the .glTF format - is that the best format to use or should I try something else?
-All the meshes in it come into the browser as instances of BufferGeometry which I believe is the lightest in terms of memory demands - is that correct?
Are there any other things I need to be aware of to optimize performance?
Some setting in Blender or other 3D modeling software that can reduce model-size?
Some general rules of thumb to follow when embarking on something like this?
Would really appreciate any and all help.
Thanks!

glTF is fine for transmitting geometry and materials; I'd say it's the standard right now. If there's only geometry, I'd look at the OBJ or PLY formats.
Model size only blocks the initial load if we employ instancing on its geometry and material. That way we simply re-use the already-generated geometry and its material.
At the GPU level, instancing means drawing a single mesh with a single material shader, many times. You can override certain inputs to the material for each instance, but it sort of has to be a single material.
— Don McCurdy
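The instancing idea can be sketched without any library code. THREE.InstancedMesh keeps one geometry, one material, and a flat Float32Array of per-instance 4x4 matrices (16 floats each, column-major, matching Matrix4.elements). The plain-JS helper below, with the illustrative name makeInstanceMatrices, builds exactly that kind of array for a grid layout:

```javascript
// One geometry + one material + N transform matrices = one draw call.
// Column-major 4x4 layout, as three.js Matrix4.elements uses.
function makeInstanceMatrices(count, spacing) {
  const matrices = new Float32Array(count * 16);
  const side = Math.ceil(Math.sqrt(count)); // arrange instances on a square grid
  for (let i = 0; i < count; i++) {
    const o = i * 16;
    // identity rotation/scale on the diagonal
    matrices[o] = 1; matrices[o + 5] = 1; matrices[o + 10] = 1; matrices[o + 15] = 1;
    // translation lives in the last column (indices 12..14)
    matrices[o + 12] = (i % side) * spacing;           // x
    matrices[o + 13] = 0;                              // y
    matrices[o + 14] = Math.floor(i / side) * spacing; // z
  }
  return matrices;
}
```

With three.js you would normally call mesh.setMatrixAt(i, matrix) on a new THREE.InstancedMesh(geometry, material, 3000), which fills the same kind of buffer (mesh.instanceMatrix). The whole batch of 3,000 copies then renders in a single draw call, so a 4,832-vertex model is uploaded to the GPU only once.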
Our biggest worry here is the number of triangles, or faces, rendered: the fewer of them, and the fewer models on screen at a time, the better the performance. To that end, you can use some degree of LOD to progressively decrease your models' detail as they recede, until you stop rendering them entirely at a distance.
Some examples/resources to get you started:
LOD
Instancing Models
Modifying Instances
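The LOD suggestion works like this: each level has a distance threshold, and the level with the largest threshold not exceeding the camera distance is the one drawn. A plain-JS sketch of that selection rule (selectLodLevel is an illustrative name, not a three.js API):

```javascript
// levels must be sorted by ascending distance threshold;
// the entry at distance 0 is the most detailed one.
function selectLodLevel(levels, distance) {
  let chosen = levels[0];
  for (const level of levels) {
    if (distance >= level.distance) chosen = level; // passed this threshold
    else break;                                     // remaining levels are farther
  }
  return chosen.detail;
}
```

In three.js the equivalent setup is lod.addLevel(highPolyMesh, 0); lod.addLevel(lowPolyMesh, 50); lod.addLevel(new THREE.Object3D(), 200) - the empty object at the far threshold effectively stops rendering distant copies (the distances here are made up).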

Related

Autodesk forge custom geometry

I am currently facing the problem that rendering custom geometry into my Forge viewer is allocating a huge amount of memory. I am using the technique suggested on the Autodesk website: https://forge.autodesk.com/en/docs/viewer/v7/developers_guide/advanced_options/custom-geometry/.
I need to render up to 300 custom geometries in my viewer, but when I try, the site simply crashes and memory usage jumps over 5 GB. Is there a good way to render a large amount of custom geometry in Forge while keeping performance at a useful level?
Thx, JT
I'm afraid this is not something Forge Viewer will be able to help with. Here's why:
The viewer contains many interesting optimizations for efficient loading and rendering of a single complex model. For example, it builds a special BVH of all the geometries in a model so that they can be traversed and rendered in an efficient way. The viewer can also handle very complex models by moving their geometry data in and out of the GPU to make room for others. But again, all this assumes that there's just one (or only a few) models in the scene. If you try and add hundreds of models into the scene instead, many of these optimizations cannot be applied anymore, and in some cases they can even make things worse (imagine that the viewer suddenly has to traverse 300 BVHs instead of a single one).
So what I would recommend would be: try and avoid situations where you would have hundreds of separate models in the scene. If possible, consider "consolidating" them into a single model. For example, if the 300 models were Inventor assemblies that you need to place at specific positions, you could:
aggregate all the assemblies using Design Automation for Inventor, and convert the result into a single Forge model, or
create a single Forge model with all 300 geometries, and then move them around during runtime using the Viewer APIs
If none of these options work for you, you could also take a look at the new format called SVF2 (https://forge.autodesk.com/blog/svf2-public-beta-new-optimized-viewer-format) which significantly reduces the memory footprint.
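The second option (one consolidated model, repositioned at runtime) usually relies on the viewer's fragment API. This is only a pseudocode sketch of the pattern used in Autodesk's fragment-proxy samples; exact calls can differ between viewer versions:

```
// For each placement of the shared geometry, grab a fragment proxy,
// change its transform, and ask the viewer to redraw once at the end.
for each placement p:
    fragProxy = viewer.impl.getFragmentProxy(model, p.fragId)
    fragProxy.getAnimTransform()          // load the current transform
    fragProxy.position = p.position      // set the new translation
    fragProxy.updateAnimTransform()      // write it back
viewer.impl.sceneUpdated(true)           // single redraw for all moves
```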

Is the JSON model format better for THREE.js?

I am using Three.js to create a simple game. I load about 100 low-poly models in OBJ format, but the performance is not smooth. All the models together are no more than 18 MB. If I use the JSON format, will it be faster, even though the size will be more than double?
I tried Collada, but for simple objects like mine OBJ is faster. If JSON is not the best solution, what is the best one?
No single file format is better overall; it depends on your needs and requirements, the external software used, and whether the model contains animation. Personally I don't use JSON that much, I use OBJ, but JSON is heavily supported by three.js... though that's more of an opinion.
There are many factors as to why your application can be heavy.
Without the source code or the model files themselves I can only speculate.
Few things to consider:
Are your models optimized as well as they can be? 100 models in one scene at 18 MB is quite a lot at one time - does that include textures?
Are textures compressed and reused? This will improve performance.
Shadows, lighting, and animation types all have an impact; Google has plenty of resources to offer you.
There are several techniques to keep your poly count down; subdivision surfaces are a good example of this, and there is a really useful article on them:
http://www.kadrmasconcepts.com/blog/2011/11/06/subdivision-surfaces-with-three-js/
Also, with LOD (Level Of Detail) the visible detail depends on how far or near an object is.
A really useful explanation here:
http://www.pheelicks.com/2014/03/rendering-large-terrains/
Three.js supports this without any added libs.
Detail, and how you render it, is the key to best performance.
Even how you have set up your project can have a major influence. Take a look at your functions and how you use them; for example, mousemove and DOM element click handlers can slow your three.js app dramatically if they are not optimized and used efficiently.
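The mousemove point deserves a concrete sketch: a raw mousemove handler can fire far more often than your frame rate, so coalescing events to at most one handler run per frame keeps raycasting/picking work bounded. Plain JavaScript; throttlePerFrame is an illustrative name, and in the browser you would pass requestAnimationFrame as the scheduler:

```javascript
// Wrap a handler so it runs at most once per scheduled tick,
// always seeing the most recent event.
function throttlePerFrame(handler, schedule) {
  let pending = null;
  return (event) => {
    const firstThisFrame = pending === null;
    pending = event;                // always remember the latest event
    if (firstThisFrame) {
      schedule(() => {
        const e = pending;
        pending = null;
        handler(e);                 // one run per frame, latest data
      });
    }
  };
}

// Browser usage (illustrative):
//   canvas.addEventListener('mousemove',
//     throttlePerFrame(doRaycast, requestAnimationFrame));
```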
Reuse and share is your best option. There is no point in loading the same model twice because one is blue and the other is green...
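That "reuse and share" advice boils down to object references: load the geometry once and have every mesh point at the same object, varying only the material. The sketch below uses plain objects with the illustrative helper makeMesh; with three.js you would simply do new THREE.Mesh(sharedGeometry, blueMaterial):

```javascript
// A three.js Mesh holds references to its geometry and material,
// not copies - so sharing one geometry costs nothing extra.
function makeMesh(geometry, material) {
  return { geometry, material };
}

const sharedGeometry = { vertices: new Float32Array(3 * 1000) }; // loaded once
const blueMesh = makeMesh(sharedGeometry, { color: 'blue' });
const greenMesh = makeMesh(sharedGeometry, { color: 'green' });
```

The heavy vertex data exists once in memory (and once on the GPU); only the tiny material objects differ.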
I don't think there is a single better format; it really depends on what you need and what you don't. For me, though, I'd go with OBJ!

What is the best approach for making large number of 2d rectangles using Three.js

Three.JS noob here trying to do 2d visualization.
I used d3.js to make an interactive visualization involving thousands of nodes (rectangle shaped). Needless to say there were performance issues during animation because Browsers have to create an svg DOM element for every one of those 10 thousand nodes.
I wish to recreate the same visualization using WebGl in order to leverage hardware acceleration.
Now, ThreeJS is a library I have chosen because of its popularity (btw, I did look at PixiJS and its API didn't appeal to me). I want to know the best approach to doing 2d graphics in three.js.
I tried creating one PlaneGeometry for every rectangle. But it seems that 10 thousand plane geometries are not the way to go (animation becomes super duper slow).
I am probably missing something. I just need to know what is the best primitive way to create 2d rectangles and still identify them uniquely so that I can interact with them once drawn.
Thanks for any help.
EDIT: Would you guys suggest to use another library by any chance?
I think you're on the right track with looking at WebGL, but depending on what you're doing in your visualization you might need to get closer to the metal than "out of the box" threejs.
I recommend taking a look at GLSL and taking a look at how you can implement your visualization using vertex and fragment shaders. You can still use threejs for a lot of the WebGL plumbing.
The reason you'll probably need to get directly into GLSL shader work is because you want to take most of the poly manipulation logic out of javascript, at least as much as is possible. Any time you ask js to do a tight loop over tens of thousands of polys to update position, etc... you are going to struggle with CPU usage.
It is going to be much more performant to have js pass in data parameters to your shaders and let the vertex manipulation happen there.
Take a look here: http://www.html5rocks.com/en/tutorials/webgl/shaders/ for a nice shader tutorial.
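Before custom GLSL, the usual first step is to stop creating 10,000 separate PlaneGeometry meshes (10,000 draw calls) and pack every rectangle into one buffer, so the whole visualization is a single mesh and a single draw call; a vertex shader can then animate it on the GPU. A plain-JS sketch, where buildRectBuffer and the rectId attribute are illustrative names, not three.js APIs:

```javascript
// rects: [{ x, y, w, h }, ...]; 2 triangles = 6 vertices per rectangle.
// ids carries a per-vertex rectangle index so picking can identify rects.
function buildRectBuffer(rects) {
  const positions = new Float32Array(rects.length * 6 * 3);
  const ids = new Float32Array(rects.length * 6);
  rects.forEach((r, i) => {
    const corners = [
      [r.x, r.y], [r.x + r.w, r.y], [r.x + r.w, r.y + r.h], // triangle 1
      [r.x, r.y], [r.x + r.w, r.y + r.h], [r.x, r.y + r.h], // triangle 2
    ];
    corners.forEach(([cx, cy], j) => {
      const v = i * 6 + j;
      positions.set([cx, cy, 0], v * 3);
      ids[v] = i;
    });
  });
  return { positions, ids };
}
```

In three.js you would feed these into one BufferGeometry via geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3)) and a custom rectId attribute, and a ShaderMaterial vertex shader can read rectId to move rectangles per frame without touching JavaScript loops.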

ThreeJS: is it possible to simplify an object / reduce the number of vertexes?

I'm starting to learn ThreeJS. I have some very complex models to display.
These models come from Autocad files that my customer provides.
But sometimes the amount of details in the model is just way too much for the purpose of the website.
I would like to reduce the amount of vertexes in the model to simplify the display and enhance performance.
Is this possible from within ThreeJS? Or is there maybe an other solution for this?
There's a modifier called SimplifyModifier that works very well. You'll find it in the Three.js examples
https://threejs.org/examples/#webgl_modifier_simplifier
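One detail worth knowing from that example: SimplifyModifier's second argument is the number of vertices to remove, not to keep, so it helps to derive it from a "keep" ratio. The helper below is plain JS; the commented lines show the actual example-module usage from three.js:

```javascript
// keepRatio 0.5 keeps roughly half the vertices.
function verticesToRemove(vertexCount, keepRatio) {
  return Math.floor(vertexCount * (1 - keepRatio));
}

// import { SimplifyModifier } from 'three/examples/jsm/modifiers/SimplifyModifier.js';
// const modifier = new SimplifyModifier();
// const count = verticesToRemove(geometry.attributes.position.count, 0.5);
// geometry = modifier.modify(geometry, count); // returns a new, simplified BufferGeometry
```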
If you can import the model into Blender, you could try Decimate Modifier. In the latest version of Blender, it features three different methods with configurable "amount" parameters. Depending on your mesh topology, it might reduce the poly count drastically with virtually no visual changes, or it might completely break the model with even a slight reduction attempt. Other 3d packages should include a similar functionality, but I've never used those.
Another thing that came into mind: Sometimes when I've encountered a too high-poly Blender model, a good start has been checking if it has a Subdivision Modifier applied and removing that if so. While I do not know if there's something similar in Autocad, it might be worth investigating.
I updated the SimplifyModifier function; it works with textured models. Here is an example:
https://rigmodels.com/3d_LOD.php?view=5OOOEWDBCBR630L4R438TGJCD
You can extract the JS code and use it in your project.

Animation and Instancing performances

Talking about the storage and loading of models and animations, which would be better for a game engine:
1 - Have a mesh and a skeleton for each model, both in the same file, each bone system with 10~15 animations (so each model has its own animations).
2 - Have a lot of meshes and a low number of skeletons, but in separate files, so the same skeleton (and its animations) can be used for more than one mesh; each bone set can have a lot of animations. (Notice that in this case, using the same bone set and the same animations causes a loss of uniqueness.)
And now, if I need to show 120~150 models in each frame (animated and skinned on the GPU), 40 of them of the same type, is it better to:
1 - Use an instancing system for all models in the game, even if I only need 1 model of a given type.
2 - Detect which models need instancing (if they appear more than once) and use a different render path (other shader programs) for them, with non-instanced rendering for the other models.
3 - Not use instancing at all, because the "gain" would be very low for this number of models.
All the "models" discussed here are animated models. Currently I use the MD5 format with GPU skinning but without instancing, and I would like to know if there are better ways to do the whole animation process.
If someone knows a good tutorial or can put me on the right path... I don't know how I could create an interpolated skeleton and use instancing with it. Let me explain:
I can pack all the bone transformations (matrices) for every animation and every frame into a single texture and send it to the vertex shader, then read, for each vertex of each model, the respective animation/frame transformation. This is fine, and I can use instancing here because I will always send the same data for the same model type. But when I need an interpolated skeleton, should I do the interpolation in the vertex shader too? (More texture loads could cost some performance.)
I would need to calculate the interpolated skeleton on the CPU anyway, because I need it for collision...
Any solutions/ideas?
I'm using DirectX, but I think this applies to other systems.
=> Now I just need an answer to the first question; the second is solved (but if anyone wants to give other suggestions, that's OK).
The best example I can think of, and one I have personally used, is one by NVIDIA called Skinned Instancing. The example describes a way to render many instances of the same skinned mesh. There is code and a whitepaper available too :)
Skinned Instancing by NVidia
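The bone-matrix texture the question describes can be sketched with plain indexing math; the layout below (one row of texels per animation frame, each 4x4 matrix stored as 4 RGBA float texels) is illustrative, not taken from the NVIDIA sample, and the helper names are made up. Written in plain JavaScript just to show the addressing; the same arithmetic applies in an HLSL/GLSL fetch:

```javascript
const BONES = 32, FRAMES = 60;
const TEXELS_PER_MATRIX = 4; // one RGBA texel per matrix column

// Texel at which the matrix for (frame, bone) starts.
function texelIndex(frame, bone) {
  return frame * BONES * TEXELS_PER_MATRIX + bone * TEXELS_PER_MATRIX;
}

// Write one 4x4 matrix (16 floats) into the backing store; 4 floats per texel.
function packMatrix(texture, frame, bone, matrix16) {
  texture.set(matrix16, texelIndex(frame, bone) * 4);
}

const texture = new Float32Array(FRAMES * BONES * TEXELS_PER_MATRIX * 4);
```

For the interpolation question: with this layout the vertex shader can fetch the matrices for frame and frame+1 and blend them, which doubles the texture reads per bone; doing the blend on the CPU instead breaks instancing, since each instance would need its own uploaded skeleton. Profiling both is the only way to know which costs more on a given GPU.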
