I was looking at three.js as a replacement for deck.gl in our existing WebGL software, for reasons not relevant to this question. One of the input data sources is large vector data exported from CAD systems. A single scene integrates about 5 collections of linear features and areas, and each collection is a 10-50 MB SVG. In deck.gl we used a crazy but very effective hack: we converted the vectors to geo coordinates and lazy-loaded them via the deck.gl tile layer. It improved rendering performance tremendously but required additional tweaking, because the majority of the data is still in cartesian coordinates.
I haven't found comparable lazy loading of such large vector data in three.js. There are plenty of format-specific loaders, but they are still just different means of upfront loading. Vector tiles were created in a geographical context, but the principle of pre-rendering and pre-tiling data for lazy loading should be universally applicable, shouldn't it? Yet I could not find any non-geo implementation, let alone one with support in three.js. Our geo-hacking was effective but never felt correct, because the model is naturally cartesian. The internet suggests that Cesium may be more flexible than deck.gl for our new requirements, but we would prefer to avoid the geo-hacking, not dive deeper into it.
What did I miss?
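To be concrete, the kind of thing we were hoping to find (or build) would look roughly like the sketch below. This is only an illustration, assuming the vectors are pre-cut into grid cells and exported as individual glTF chunks; the tile size, file names and camera-driven update are all made up.

```js
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const TILE_SIZE = 1000;        // world units per tile (made up)
const LOAD_RADIUS = 2;         // how many tiles around the camera stay resident
const loader = new GLTFLoader();
const tiles = new Map();       // "x_z" -> loaded THREE.Object3D (null while pending)

function updateTiles(camera, scene) {
  const cx = Math.floor(camera.position.x / TILE_SIZE);
  const cz = Math.floor(camera.position.z / TILE_SIZE);

  // load tiles that entered the radius
  for (let x = cx - LOAD_RADIUS; x <= cx + LOAD_RADIUS; x++) {
    for (let z = cz - LOAD_RADIUS; z <= cz + LOAD_RADIUS; z++) {
      const key = `${x}_${z}`;
      if (tiles.has(key)) continue;
      tiles.set(key, null); // mark as pending
      loader.load(`tiles/${key}.glb`, (gltf) => { // hypothetical pre-tiled export
        if (!tiles.has(key)) return;              // tile was dropped while loading
        tiles.set(key, gltf.scene);
        scene.add(gltf.scene);
      });
    }
  }

  // unload tiles that left the radius
  for (const [key, object] of tiles) {
    const [x, z] = key.split('_').map(Number);
    if (Math.abs(x - cx) > LOAD_RADIUS || Math.abs(z - cz) > LOAD_RADIUS) {
      if (object) scene.remove(object);
      tiles.delete(key);
    }
  }
}
```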
I was hoping to display as many as 3,000 instances of the same model at the same time in my app - but it's really slowing down my computer. It's just too much.
I know InstancedMesh is the way to go for something like this so I’ve been following THREE.js’s examples here: https://threejs.org/docs/#api/en/objects/InstancedMesh
The examples are fantastic, but they seem to use really small models which makes it really hard to get a good feel for what the model size-limits should be.
For example:
-The Spheres used here aren't imported custom 3D models, they're just instances of IcosahedronGeometry
-The Flower.glb model used in this example is tiny: it only has 218 Vertices on it.
-And the “Flying Monkeys” here come from a ".json" file so I can’t tell how many vertices that model has.
My model, by comparison, has 4,832 vertices - and, by the way, it was listed in the "low-poly" category where I found it, so it's not really considered particularly big.
In terms of file-size, it's exactly 222kb.
I tried scaling it down in Blender and re-exporting it - still came out at 222kb.
Obviously I can take some “drastic measures”, like:
-Try to re-design my 3D model and make it smaller - but that would greatly reduce its beauty and the overall aesthetics of the project
-I can re-imagine or re-architect the project to display maybe 1,000 models at the same time instead of 3,000
etc.
But being that I’m new to THREE.js - and 3D modeling in general, I just wanted to first ask the community if there are any suggestions or tricks to try out first before making such radical changes.
-The model I'm importing is in the .glTF format - is that the best format to use or should I try something else?
-All the meshes in it come into the browser as instances of BufferGeometry which I believe is the lightest in terms of memory demands - is that correct?
Are there any other things I need to be aware of to optimize performance?
Some setting in Blender or other 3D modeling software that can reduce model-size?
Some general rules of thumb to follow when embarking on something like this?
Would really appreciate any and all help.
Thanks!
glTF is fine for transmitting geometry and materials - I'd say it's the standard right now. If there's only geometry, I'd look at the OBJ or PLY formats.
The model size is a cost, but only for the initial load, provided we employ instancing on its geometry and material. That way we simply re-use the already generated geometry and its material.
At the GPU level, instancing means drawing a single mesh with a single material shader, many times. You can override certain inputs to the material for each instance, but it sort of has to be a single material.
— Don McCurdy
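To make that concrete, here is a minimal sketch of instancing a loaded glTF - assuming the file contains a single mesh and a `scene` object already exists; the model path and the random placement are only placeholders.

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

new GLTFLoader().load('myModel.gltf', (gltf) => {   // hypothetical model path
  // grab the first mesh of the glTF; its geometry/material are parsed only once
  const source = gltf.scene.getObjectByProperty('type', 'Mesh');

  const count = 3000;
  const instanced = new THREE.InstancedMesh(source.geometry, source.material, count);

  // each instance only carries its own transformation matrix
  const dummy = new THREE.Object3D();
  for (let i = 0; i < count; i++) {
    dummy.position.set(Math.random() * 200 - 100, 0, Math.random() * 200 - 100);
    dummy.rotation.y = Math.random() * Math.PI * 2;
    dummy.updateMatrix();
    instanced.setMatrixAt(i, dummy.matrix);
  }
  instanced.instanceMatrix.needsUpdate = true;

  scene.add(instanced); // `scene` is assumed to already exist
});
```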
Our biggest worry here would be the number of triangles or faces rendered. Lower counts are more performant, and so is rendering fewer models at a time. For this, you can use some degree of LOD to progressively increase and decrease your models' detail, until you stop rendering them entirely at a distance.
Some examples/resources to get you started:
LOD
Instancing Models
Modifying Instances
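And a minimal sketch of the LOD idea, assuming you already have a detailed and a decimated copy of your geometry (the distance thresholds are arbitrary):

```js
import * as THREE from 'three';

// Build an LOD object from two versions of the same model; three.js picks the
// level each frame based on camera distance (lod.autoUpdate is true by default).
function makeLOD(highDetailGeometry, lowDetailGeometry, material) {
  const lod = new THREE.LOD();
  lod.addLevel(new THREE.Mesh(highDetailGeometry, material), 0);   // close up
  lod.addLevel(new THREE.Mesh(lowDetailGeometry, material), 50);   // mid distance
  lod.addLevel(new THREE.Object3D(), 200);                         // effectively hidden far away
  return lod;
}

// Usage: scene.add(makeLOD(detailedGeometry, decimatedGeometry, material));
```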
Can anyone help me out with the difference between InstancedMesh and InterleavedBuffer in three.js? I'm kind of confused by both topics - can anyone let me know which is the optimal way to render a large amount of geometry?
Thanks in advance.
Instanced rendering and interleaved buffers are two separate things. You can use both techniques on their own or in combination.
THREE.InstancedMesh provides a convenient interface for instanced rendering. This approach is useful when you have to render a huge number of objects with the same material and geometry but with different world transformations. THREE.InstancedMesh allows you to improve the performance of your app by reducing the number of draw calls. So instead of drawing each object with a separate draw call, you can draw them all at once.
InterleavedBuffer provides the possibility to manage your vertex data in an interleaved fashion. The motivation for doing this is to improve the number of cache hits on the GPU. If you are more interested in the theory behind this approach, I suggest you google "structure of arrays vs. array of structures". The latter one applies to InterleavedBuffer.
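For illustration, a minimal sketch of an interleaved layout for a single triangle carrying position and uv data (the vertex values are made up):

```js
import * as THREE from 'three';

// position (3 floats) and uv (2 floats) packed per vertex into one buffer
// -> stride of 5 floats
const vertexData = new Float32Array([
//  x,  y, z,   u, v
   -1, -1, 0,   0, 0,
    1, -1, 0,   1, 0,
    0,  1, 0, 0.5, 1,
]);

const interleavedBuffer = new THREE.InterleavedBuffer(vertexData, 5);

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.InterleavedBufferAttribute(interleavedBuffer, 3, 0));
geometry.setAttribute('uv', new THREE.InterleavedBufferAttribute(interleavedBuffer, 2, 3));

const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial());
```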
In general, the performance benefits of both techniques depend on the specific use case. In my personal experience, the benefit of interleaved buffers is hard to measure, since the performance improvement depends on the respective GPU. In many cases, I've seen no difference in FPS when using interleaved buffers. However, it's much easier to see a performance improvement when the number of draw calls is high and you lower it by using instanced rendering.
three.js provides examples for both techniques. webgl_buffergeometry_instancing_interleaved demonstrates a combination.
three.js R114
I have a sphere with a texture of the Earth that I generate on the fly from an SVG file with a canvas element, and then manipulate.
The texture size is 16384x8192; anything less than this looks blurry on close zoom.
But this is a huge texture size and it's causing memory problems... (it looks very good when it is working, though)
I think a better approach would be to split the sphere into 32 separate textures, each 2048x2048 in size.
A few questions:
How can I split the sphere and assign the right textures?
Is this approach better in terms of memory and performance than a single huge texture?
Is there a better solution?
Thanks
You could subdivide a cube, and cubemap this.
Instead of having one texture per face, you would have NxN textures per face. 32 doesn't sound like a good number, but 24, for example, does (6x2x2).
You will still use the same amount of memory. If the shape actually needs to be spherical you can further subdivide the segments and normalize the entire shape (spherify it).
You probably can't even use such a big texture anyway.
Notice the top sphere in the reference image (cubemap; ignore the isocube).
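A rough sketch of that idea for one face of the cube, assuming each face is split into 2x2 tiles and you have one texture per tile - the texture names and `scene` are placeholders:

```js
import * as THREE from 'three';

// Build one "spherified" tile of the +Z cube face: a subdivided plane placed
// on the unit cube, with every vertex normalized onto the unit sphere.
function makeFaceTile(u, v, tilesPerFace, texture) {
  const size = 2 / tilesPerFace; // the cube spans -1..1 on each axis
  const geometry = new THREE.PlaneBufferGeometry(size, size, 16, 16); // PlaneGeometry in current releases
  geometry.translate(-1 + size * (u + 0.5), -1 + size * (v + 0.5), 1);

  const position = geometry.attributes.position;
  const vertex = new THREE.Vector3();
  for (let i = 0; i < position.count; i++) {
    vertex.fromBufferAttribute(position, i).normalize(); // spherify
    position.setXYZ(i, vertex.x, vertex.y, vertex.z);
  }
  geometry.computeVertexNormals();

  return new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));
}

// One face split into 2x2 tiles; the other five faces work the same way,
// just rotated into place.
const loader = new THREE.TextureLoader();
for (let u = 0; u < 2; u++) {
  for (let v = 0; v < 2; v++) {
    scene.add(makeFaceTile(u, v, 2, loader.load(`earth_pz_${u}_${v}.png`))); // hypothetical tile textures
  }
}
```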
Typically, that's not something you'd do programmatically, but in a 3D program like Blender or 3ds Max. It involves some trivial mesh separation, UV mapping and material assignment. One other approach that's worth experimenting with would be to have multiple materials but only one mesh - you'd still get (somewhat) progressive loading. BUT
Are you sure you'd be better off with "chunks" loading sequentially rather than one big texture taking a huge amount of time? Sure, it'll improve things a bit in terms of timeouts and caching, but the tradeoff is having big chunks of your mesh be textureless, which is noticeable and unaesthetic.
There are a few approaches that would mitigate your problem. First, it's important to understand that texture loading optimization techniques - while common in game engines - aren't really part of threejs or what it's built for. You'll never get the near-seamless LODs or GPU optimization techniques that you'll get with UE4 or Unity. Furthermore webGL - while having made many strides over the past decade - is not ideal for handling vast texture sizes, not at the GPU level (since it's based on OpenGL ES, suited primarily for mobile devices) and certainly not at the caching level - we're still dealing with browsers here. You won't find a lot of webGL work done with vast textures of the dimensions you refer to.
Having said that,
A. A loader will let you do other things while your textures are loading, so your user isn't staring at an 'unfinished mesh'. It lets you be pretty clever with dynamic loading times and UX design. Additionally, take a look at this gist to get an idea of what a progressive texture loader could look like (a rough sketch of the idea also follows at the end of this answer). A much more involved, JPEG-specific technique can be found here, but I wouldn't approach it unless you're comfortable with low-level graphics programming.
B. Threejs does have a basic implementation of LOD, although I haven't tinkered with it myself and am not sure it's useful for textures; that said, the basic premise to look into is whether you can load progressively higher-resolution files on an as-needed basis, just like Google Earth does, for example.
C. This is out of the scope of your question - but I'd look into what happens under the hood in Unity's webGL export, and what kind of clever tricks are being employed there for similar purposes.
Finally, does your project have to be in webGL? For something ambitious and demanding, sometimes "proper" OpenGL / DX makes much more sense.
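A rough sketch of the progressive idea from A/B, assuming you can export the same map at a couple of resolutions (the file names are placeholders):

```js
import * as THREE from 'three';

const loader = new THREE.TextureLoader();

// Start with a small placeholder so the globe is never textureless...
const material = new THREE.MeshBasicMaterial({
  map: loader.load('earth_512.jpg'),
});

// ...then swap in the full-resolution map once it has arrived.
loader.load('earth_16384.jpg', (highRes) => {
  const placeholder = material.map;
  material.map = highRes;
  material.needsUpdate = true;
  placeholder.dispose(); // free the placeholder's GPU memory
});
```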
I am using Three.js to create a simple game. I load about 100 low-poly models in OBJ format, but the performance is not smooth; all the models together are no more than 18 MB. If I use the JSON format, will it be faster, even though the size will be more than double?
I tried Collada, but for simple objects like mine OBJ is faster. If JSON is not the best solution, what is?
No single file format is better overall; it depends on your needs and requirements, the external software used, and whether the model contains animation. Personally, I don't use JSON that much - I use OBJ - but JSON is heavily supported by three.js. That's more of an opinion, though.
There are many factors as to why your application can be heavy.
Without source code or the model files themselves, I can only speculate.
Few things to consider:
Are your models optimized as best they can be? 100 models in one scene is quite a lot at one time at 18 MB - does this include textures?
Are textures compressed and reused? This will increase performance.
Shadows, lighting and animation types all have an impact; Google has plenty of resources to offer you.
There are several techniques for keeping your poly count down - subdivision surfaces are a good example, and there is a really useful article on this:
http://www.kadrmasconcepts.com/blog/2011/11/06/subdivision-surfaces-with-three-js/
There is also LOD (Level Of Detail), which adjusts how much detail is visible depending on how far or near an object is.
A very useful explanation here:
http://www.pheelicks.com/2014/03/rendering-large-terrains/
Three.js supports this without any added libs.
Detail, and how you render it, is the key to the best performance.
Even how you have set up your project can have a major influence. Take a look at your functions and how you use them: for example, mousemove and DOM element click handlers can slow your three.js app dramatically if they are not optimized and used efficiently.
Reuse and share is your best option - there is no point in loading the same model twice just because one is blue and the other is green...
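A minimal sketch of that reuse, assuming a single-mesh OBJ and an existing `scene` (the path, colours and counts are made up):

```js
import * as THREE from 'three';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';

new OBJLoader().load('chair.obj', (obj) => {        // hypothetical model path
  const sharedGeometry = obj.children[0].geometry;  // parsed once, shared by all meshes

  const blue = new THREE.MeshStandardMaterial({ color: 0x2244ff });
  const green = new THREE.MeshStandardMaterial({ color: 0x22aa44 });

  for (let i = 0; i < 50; i++) {
    const mesh = new THREE.Mesh(sharedGeometry, i % 2 === 0 ? blue : green);
    mesh.position.set(i * 2, 0, 0);
    scene.add(mesh); // `scene` is assumed to already exist
  }
});
```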
I don't think there is a single best format - it really depends on what you need and what you don't. Personally, I would go for OBJ!
I'm trying to get an idea of the practicality of WebGL for rendering large interior scenes, consisting of hundreds of thousands of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts. And the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture map sharing - for example, all the chairs in the scene will share a common map. There is also some multitexturing - up to three textures overlaid in a material.
I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will have significant performance differences, depending on whether there are 10 or 1000 objects, assuming that each time an object is displayed a new material is set up.
So it seems that if performance is important, the scene should be sorted by materials so as to minimize material switching. What I'm looking for is guidelines on how to think about the overhead of various state changes, and where I get the biggest bang for the buck. For example:
what are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), and gl.drawElements()?
should I try to write ubershaders to minimize shader switching?
should I try to aggregate geometry to minimize the number of gl.drawElements() calls?
I realize that mileage may vary depending on browser, OS, and graphics hardware. And I'm also not looking for heroic measures. Just some guidelines from people who have already had some experience in making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.
Have you read Batch, Batch, Batch? Admittedly, it focuses on DirectX, but the reasoning applies, to a lesser extent, to OpenGL/WebGL as well: each API call has significant overhead on the CPU. The advice is to use all of the API's options to share textures, use instancing (if available), and write complex shaders to avoid many draw calls. So if you can draw the whole house as a single mesh in a single call, that would be better than 1,000 calls for the individual rooms. Writing ubershaders is recommended, but mostly because it may allow you to remove draw calls, not because GPU state switching is expensive.
This assumes recent hardware. For low-end platforms (iPad?) or Intel GMA chips, the bottlenecks will be elsewhere (e.g. in software vertex processing).
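In three.js terms, that kind of batching usually means merging geometries that share a material into one mesh. A rough sketch - the helper's exact name and import path have shifted between releases (mergeBufferGeometries vs. mergeGeometries), and all merged geometries must have matching attribute sets, so treat this as an outline rather than a recipe:

```js
import * as THREE from 'three';
import { BufferGeometryUtils } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

// Bake each mesh's world transform into a clone of its geometry and merge
// everything that shares one material into a single mesh -> one draw call.
function batchMeshes(meshes, sharedMaterial) {
  const geometries = meshes.map((mesh) => {
    mesh.updateMatrixWorld(true);
    const geometry = mesh.geometry.clone();
    geometry.applyMatrix4(mesh.matrixWorld); // applyMatrix in older builds
    return geometry;
  });
  const merged = BufferGeometryUtils.mergeBufferGeometries(geometries);
  return new THREE.Mesh(merged, sharedMaterial);
}

// Usage: scene.add(batchMeshes(roomMeshes, wallMaterial));
```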