Is the JSON model format better for THREE.js? - three.js

I am using Three.js to create a simple game. I load about 100 low-poly models in OBJ format, but the performance is not smooth; all of the models together are no more than 18 MB. If I switch to the JSON format, will it be faster, even though the size will more than double?
I tried Collada, but for simple objects like mine OBJ is faster. If JSON is not the best solution, what is?

No single file format is better overall; it depends on your needs and requirements, the external software you use, and whether the models contain animation. Personally I don't use JSON that much, I use OBJ, but JSON is heavily supported by three.js. That's more of an opinion, though.
There are many factors as to why your application can be heavy; without the source code or the model files themselves I can only speculate.
A few things to consider:
Are your models as optimized as they can be? 100 models in one scene is quite a lot at one time at 18 MB. Does that figure include textures?
Are textures compressed and reused? This will improve performance.
Shadows, lighting, and animation types all have an impact; Google has plenty of resources to offer you.
There are several techniques to keep your poly count down; shipping a low-poly base mesh and subdividing it at runtime is a good example. There is a really useful article on this:
http://www.kadrmasconcepts.com/blog/2011/11/06/subdivision-surfaces-with-three-js/
Also consider LOD (Level Of Detail): objects are rendered with more or less detail depending on how far or near they are.
A great explanation is here:
http://www.pheelicks.com/2014/03/rendering-large-terrains/
Three.js supports this without any added libraries.
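A minimal sketch of the built-in THREE.LOD object, assuming you already have a scene and three versions of a geometry at decreasing detail (the names here are placeholders):

```js
// One object, three detail levels, switched by camera distance.
// `highGeo`, `midGeo`, `lowGeo`, `material`, and `scene` are assumed to exist.
const lod = new THREE.LOD();
lod.addLevel(new THREE.Mesh(highGeo, material), 0);    // shown when close
lod.addLevel(new THREE.Mesh(midGeo, material), 50);    // mid distance
lod.addLevel(new THREE.Mesh(lowGeo, material), 200);   // far away
scene.add(lod);
// Recent three.js versions update LOD objects automatically during render;
// on older versions, call lod.update(camera) in your render loop.
```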
How much detail you render, and how you render it, is the key to the best performance.
Even how you have set up your project can have a major influence. Take a look at your functions and how you use them; for example, mousemove and DOM click handlers can slow your three.js app dramatically if they are not optimized and used efficiently.
Reuse and share wherever you can; that is your best option. There is no point in loading the same model twice just because one is blue and the other is green (see the sketch below).
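A minimal sketch of that idea, sharing one loaded geometry between two meshes that differ only in material (the file path is a placeholder):

```js
// OBJLoader ships with the three.js examples; import it to match your setup.
const loader = new THREE.OBJLoader();
loader.load('model.obj', (obj) => {     // 'model.obj' is a placeholder path
  const geometry = obj.children[0].geometry;
  // Share the geometry; only the materials differ.
  const blue  = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial({ color: 0x0000ff }));
  const green = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial({ color: 0x00ff00 }));
  scene.add(blue, green);               // `scene` is assumed to exist
});
```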

I don't think there is a single best format; it really depends on what you need and what you don't. For my part, I would go with OBJ!

Related

Autodesk forge custom geometry

I am currently facing the problem that rendering custom geometry into my Forge viewer allocates a huge amount of memory. I am using the technique suggested on the Autodesk website: https://forge.autodesk.com/en/docs/viewer/v7/developers_guide/advanced_options/custom-geometry/.
I need to render up to 300 custom geometries in my viewer, but when I try, the site simply crashes and my memory use jumps over 5 GB. Is there a good way to render a large amount of custom geometry in Forge while keeping performance at a useful level?
Thx, JT
I'm afraid this is not something Forge Viewer will be able to help with. Here's why:
The viewer contains many interesting optimizations for efficient loading and rendering of a single complex model. For example, it builds a special BVH (bounding volume hierarchy) of all the geometries in a model so that they can be traversed and rendered in an efficient way. The viewer can also handle very complex models by moving their geometry data in and out of the GPU to make room for others. But again, all this assumes that there's just one (or only a few) models in the scene. If you try to add hundreds of models into the scene instead, many of these optimizations can no longer be applied, and in some cases they can even make things worse (imagine that the viewer suddenly has to traverse 300 BVHs instead of a single one).
So what I would recommend is: try to avoid situations where you would have hundreds of separate models in the scene. If possible, consider "consolidating" them into a single model. For example, if the 300 models were Inventor assemblies that you need to place at specific positions, you could:
aggregate all the assemblies using Design Automation for Inventor, and convert the result into a single Forge model, or
create a single Forge model with all 300 geometries, and then move them around at runtime using the Viewer APIs (see the sketch after this list)
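A minimal sketch of the second approach, using the fragment-proxy pattern that is common in Forge Viewer samples; `viewer`, `model`, and `fragId` are assumed to exist already, and you should verify the calls against your viewer version:

```js
// Move one fragment of the loaded model at runtime.
const fragProxy = viewer.impl.getFragmentProxy(model, fragId);
fragProxy.getAnimTransform();        // read the current transform
fragProxy.position.set(10, 0, 0);    // new position for this fragment
fragProxy.updateAnimTransform();     // write it back
viewer.impl.sceneUpdated(true);      // ask the viewer to re-render
```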
If none of these options work for you, you could also take a look at the new format called SVF2 (https://forge.autodesk.com/blog/svf2-public-beta-new-optimized-viewer-format) which significantly reduces the memory footprint.

THREE JS Imported Model Size and Performance

I was hoping to display in my app as many as 3,000 instances of the same model at the same time - but it’s really slowing down my computer. It's just too much.
I know InstancedMesh is the way to go for something like this so I’ve been following THREE.js’s examples here: https://threejs.org/docs/#api/en/objects/InstancedMesh
The examples are fantastic, but they seem to use really small models which makes it really hard to get a good feel for what the model size-limits should be.
For example:
-The Spheres used here aren't imported custom 3D models, they're just instances of IcosahedronGeometry
-The Flower.glb model used in this example is tiny: it only has 218 Vertices on it.
-And the “Flying Monkeys” here come from a ".json" file so I can’t tell how many vertices that model has.
My model by comparison has 4,832 Vertices - which by the way, was listed in the "low-poly" category where I found it, so it's not really considered particularly big.
In terms of file-size, it's exactly 222kb.
I tried scaling it down in Blender and re-exporting it - still came out at 222kb.
Obviously I can take some “drastic measures”, like:
-Try to re-design my 3D model and make it smaller - but that would greatly reduce its beauty and the overall aesthetics of the project
-I can re-imagine or re-architect the project to display maybe 1,000 models at the same time instead of 3,000
etc.
But being that I’m new to THREE.js - and 3D modeling in general, I just wanted to first ask the community if there are any suggestions or tricks to try out first before making such radical changes.
-The model I'm importing is in the .glTF format - is that the best format to use or should I try something else?
-All the meshes in it come into the browser as instances of BufferGeometry which I believe is the lightest in terms of memory demands - is that correct?
Are there any other things I need to be aware of to optimize performance?
Some setting in Blender or other 3D modeling software that can reduce model-size?
Some general rules of thumb to follow when embarking on something like this?
Would really appreciate any and all help.
Thanks!
glTF is fine for transmitting geometry and materials; I might even call it the standard right now. If there's only geometry, I'd look at the OBJ or PLY formats.
The model size only matters for the initial load if we employ instancing on its geometry and material; after that, we simply re-use the already generated geometry and its material.
At the GPU level, instancing means drawing a single mesh with a single material shader, many times. You can override certain inputs to the material for each instance, but it sort of has to be a single material.
— Don McCurdy
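A minimal sketch of instancing a loaded glTF 3,000 times; the file path and the single-mesh assumption are placeholders for your own asset:

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

const loader = new GLTFLoader();
loader.load('model.glb', (gltf) => {            // placeholder path
  // Assumes the file contains a single mesh; traverse if yours has more.
  const source = gltf.scene.getObjectByProperty('type', 'Mesh');
  const count = 3000;
  const instanced = new THREE.InstancedMesh(source.geometry, source.material, count);

  const dummy = new THREE.Object3D();
  for (let i = 0; i < count; i++) {
    dummy.position.set(Math.random() * 200 - 100, 0, Math.random() * 200 - 100);
    dummy.updateMatrix();
    instanced.setMatrixAt(i, dummy.matrix);     // one transform per instance
  }
  instanced.instanceMatrix.needsUpdate = true;
  scene.add(instanced);                         // `scene` is assumed to exist
});
```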
Our biggest worry here is the number of triangles (faces) rendered: the lower the count, the better the performance, which is also why fewer models at a time render faster. To help with this, you can use some degree of LOD to progressively decrease your models' detail with distance, until you stop rendering them altogether when they are far away.
Some examples/resources to get you started:
LOD
Instancing Models
Modifying Instances

Object Instancing using XTK?

I have seen an instancing example in the following link.
http://www.vasava.es/lab/webgl/instancing_batch/
Is it possible to do this using XTK or three.js?
Regarding XTK: XTK is optimized for fewer, larger objects rather than for a lot of small objects, due to its focus on medical imaging data. If you look at the example code you posted, you can see that they use custom shaders and other optimizations to get that great performance. Three.js might be the right solution for something like this.
Nevertheless, I quickly did an animation with 1000 cubes in XTK: http://jsfiddle.net/haehn/jjzuD/
If you compare the FPS, you see that XTK is pretty slow in this use case.
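For comparison, the three.js equivalent of that stress test is short; a sketch, assuming `scene`, `camera`, and `renderer` are already set up:

```js
// 1000 cubes sharing one geometry and one material.
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshNormalMaterial();
const cubes = [];
for (let i = 0; i < 1000; i++) {
  const cube = new THREE.Mesh(geometry, material);
  cube.position.set(Math.random() * 40 - 20, Math.random() * 40 - 20, Math.random() * 40 - 20);
  scene.add(cube);
  cubes.push(cube);
}
(function animate() {
  requestAnimationFrame(animate);
  for (const c of cubes) c.rotation.y += 0.01;  // spin every cube
  renderer.render(scene, camera);
})();
```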

Lightweight 3D animation driven by external data

I'm a structural engineering master's student working on a seismic evaluation of a temple structure in Portugal. For the evaluation, I have created a 3D block model of the structure and will use a discrete element code to analyze the behaviour of the structure under a variety of seismic (earthquake) records. The software that I will use for the analysis has the ability to produce snapshots of the structure at regular intervals, which can then be put together to make a movie of the response. However, producing the images slows down the analysis. Furthermore, since the pictures are 2D images from a specified angle, there is no possibility to rotate and view the response from other angles without re-running the model (a process that currently takes 3 days of computer time).
I am looking for an alternative method for creating a movie of the response of the structure. What I want is a very lightweight solution, where I can just bring in the block model I have and then produce the animation on the fly by feeding in the location and the three principal axes of each block at regular intervals. The blocks are described as prisms, with the top and bottom planes defining all of the vertices. Since the model is produced as text files, I can modify the output so that it can be read and understood by the animation code. The model is composed of about 180 blocks with 24 vertices per block (so 4,320 vertices). The location and the three unit vectors describing each block's axes are produced by the program, and I can write them out in whatever way I want.
The main issue is that the quality of the animation should be decent. If the system is vector based and allows for scaling, that would be great. I would like to be able to rotate the model in real time with simple mouse dragging without too much lag or other issues.
I have very limited time (in fact, I am already very behind). That is why I wanted to ask the experts here, so that I don't waste my time on something that will not work in the end. I have been using Rhino and Grasshopper to generate my model, but I don't think they are the right tools for this purpose. I was thinking Processing might be able to handle this, but I don't have any experience with it. Another thing I would like is to maybe produce a 3D PDF file for distribution, but I'm not sure whether this kind of animation can be done in 3D PDF.
Any insight or guidance is greatly appreciated.
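As an aside: the data described above (a position plus three unit axis vectors per block) maps directly onto a 4x4 transform matrix, so a lightweight WebGL library such as three.js could replay the frames on the fly. A minimal sketch, assuming one THREE.Mesh per block and hypothetical per-frame vectors `pos`, `ax`, `ay`, `az` parsed from the analysis output:

```js
// Build a block's world transform from a position and three unit axis vectors.
// `block` is the THREE.Mesh for this block; pos/ax/ay/az are THREE.Vector3s
// parsed from one frame of the text output (hypothetical names).
const m = new THREE.Matrix4();
m.makeBasis(ax, ay, az);          // columns = the block's three principal axes
m.setPosition(pos);               // translation component
block.matrixAutoUpdate = false;   // we set the matrix directly
block.matrix.copy(m);
```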
Don't let the name fool you: BluffTitler DX9, a commercial program, may be what you're looking for.
Its simple interface provides a fast learning curve, with many quick tutorials to either watch or dissect. Depending on how fast your GPU is, real-time previews are scalable.
Reference:
Model Layer Page
User Submitted Gallery (3D models)
Jim Merry from tetra4D here. We make the 3D CAD conversion tools for Acrobat X to generate 3D PDFs. Acrobat has a 3D JavaScript API that enables you to manipulate objects, i.e., you could drive translations, rotations, etc. of objects from your animation information after translating your model to 3D PDF. I'm not sure I would recommend this approach if you are in a hurry, however. Also, I don't think there are any commercial 3D PDF generation tools for the formats you are using (Rhino, Grasshopper, Processing).
If you are trying to animate geometric deformations, 3D PDF won't really help you at all. You could capture the animation and encode it as Flash video embedded in a PDF, but that is a function of the multimedia tool in Acrobat Pro, i.e., it is not specific to 3D.

Overall strategy for storing animated meshes

I've been trying to figure out how you'd take a mesh generated in a program like 3ds Max and bring it into your game with animations, textures, etc.
I've looked at FBX and Collada, but from what I've read, they're used as an intermediate step between the modelling software and some final format that may be custom to the game. What I'm looking for is a book or tutorial that would go over in a general way what you would store in your custom file, how you would store animation data, etc.
Right now I don't really have a general plan of attack and all of the guides I've seen stick to rendering a few triangles.
It doesn't have to be implementation specific to OpenGL, although that is what I'll be using.
Yes, Collada is an interchange format.
That means it is very much generic, and if I'm right, that is exactly what you are looking for!
You can use a library such as Assimp to load Collada into a generic scene graph, and then have your game/renderer either use it directly or preprocess and then consume it.
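If your renderer happens to be three.js rather than raw OpenGL, the analogous step is its ColladaLoader, which already yields a generic scene graph. A sketch, with the file path as a placeholder:

```js
import * as THREE from 'three';
import { ColladaLoader } from 'three/addons/loaders/ColladaLoader.js';

const loader = new ColladaLoader();
loader.load('model.dae', (collada) => {   // 'model.dae' is a placeholder path
  scene.add(collada.scene);               // use the loaded scene graph directly
});
```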
