I have a map definition with many light sources (about 350). When I try to create THREE.PointLight objects and add them to the scene, I get the following error along with low FPS:
three.js:29438 THREE.WebGLProgram: shader error: 0 gl.VALIDATE_STATUS false gl.getProgramInfoLog Fragment shader active uniforms exceed MAX_FRAGMENT_UNIFORM_VECTORS (1024).
What does it mean? Is there a limit on the number of THREE.PointLight objects in a scene? Are there any good practices for keeping performance high when you have many light sources?
For now, the only idea that comes to mind is to somehow reduce the number of light sources, leaving only those I really need.
This error means you've exceeded the maximum number of uniforms in your fragment shader. This limit is determined by your graphics card and/or driver. You can check by going to http://webglreport.com/.
Looks like on your system the limit is 1024. A Three.js light typically uses 6-10 uniforms depending on the type of light and the material. Given you're using ~350 lights, it makes sense that you're blowing way past this limit.
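You can also query the limit at run time; a minimal sketch in plain WebGL, assuming nothing beyond a browser environment:

const gl = document.createElement('canvas').getContext('webgl');
// The same value webglreport.com displays for your GPU/driver.
console.log(gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS));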
Generally speaking, 350 discrete lights is a lot, probably way more than you need. Using more lights is also computationally intensive. A typical WebGL scene has no more than a handful. You might want to consider other techniques to achieve what you want.
Related
Is there a way to do impostors in three.js - or is that not going to help with performance at all for a scene with >10,000 objects, most of them being the same model?
If you have thousands of the same object (with variations of position/size/rotation and perhaps color), then your first priority should be to make sure you don't have thousands of GPU draw calls. A couple of options:
(a) static batching — apply the objects' positions to their geometries (geometry.applyMatrix( mesh.matrixWorld )), then merge them with THREE.BufferGeometryUtils.mergeBufferGeometries(). The result can be drawn as a single large mesh. This takes up more memory, but is easier to set up (see the sketch after this list).
(b) gpu instancing — more memory-efficient, but harder to do. See https://threejs.org/examples/webgl_interactive_instances_gpu.html or https://www.npmjs.com/package/three-instanced-mesh.
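To make option (a) concrete, here is a minimal sketch of static batching. It assumes all meshes share one material, a three.js version that exports mergeBufferGeometries (older versions spell geometry.applyMatrix4 as applyMatrix), and a hypothetical meshes array:

import * as THREE from 'three';
import { mergeBufferGeometries } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

function batchMeshes(meshes, material) {
  const geometries = meshes.map((mesh) => {
    mesh.updateMatrixWorld(true);
    // Bake each mesh's world transform into a clone of its geometry.
    const geometry = mesh.geometry.clone();
    geometry.applyMatrix4(mesh.matrixWorld);
    return geometry;
  });
  // One merged geometry means one draw call instead of thousands.
  return new THREE.Mesh(mergeBufferGeometries(geometries), material);
}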
Once you've reduced the number of draw calls, profile the application again. If performance is still poor, you can reduce the total vertex count with impostors (or, really, just simpler meshes...). threejs does not generate impostors for you, per Spherical Impostors in three.js.
I have a sphere with an earth texture that I generate on the fly from an SVG file via a canvas element, and then manipulate.
The texture size is 16384x8192; anything smaller looks blurry on close zoom.
But this is a huge texture size and it is causing memory problems... (it does look very good when it works, though)
I think a better approach would be to split the sphere into 32 separate textures, each 2048x2048.
A few questions:
How can I split the sphere and assign the right textures?
Is this approach better in terms of memory and performance from a single huge texture?
Is there a better solution?
Thanks
You could subdivide a cube, and cubemap this.
Instead of having one texture per face, you would have NxN textures per face. 32 doesn't sound like a good number, but 24, for example, does (6 faces x 2 x 2).
You will still use the same amount of memory. If the shape actually needs to be spherical, you can further subdivide the segments and normalize the entire shape (spherify it).
You probably can't even use such a big texture anyway.
Notice the top sphere in the illustration (cubemap; ignore the isocube).
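If you go the spherified-cube route, a minimal sketch of the normalization step, assuming a recent three.js with BufferGeometry-based BoxGeometry:

import * as THREE from 'three';

function makeSpherifiedCube(radius, segmentsPerFace) {
  const geometry = new THREE.BoxGeometry(
    1, 1, 1, segmentsPerFace, segmentsPerFace, segmentsPerFace
  );
  const position = geometry.attributes.position;
  const v = new THREE.Vector3();
  for (let i = 0; i < position.count; i++) {
    // Push every vertex onto the sphere of the given radius.
    v.fromBufferAttribute(position, i).normalize().multiplyScalar(radius);
    position.setXYZ(i, v.x, v.y, v.z);
  }
  geometry.computeVertexNormals();
  return geometry;
}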
Typically, that's not something you'd do programmatically, but in a 3D program like Blender or 3ds Max. It involves some trivial mesh separation, UV mapping and material assignment. One other approach that's worth experimenting with would be to have multiple materials but only one mesh - you'd still get (somewhat) progressive loading. But are you sure you'd be better off with "chunks" loading sequentially rather than one big texture taking a long time to load? Sure, it'll improve things a bit in terms of timeouts and caching, but the tradeoff is having big chunks of your mesh be textureless, which is noticeable and unaesthetic.
There are a few approaches that would mitigate your problem. First, it's important to understand that texture loading optimization techniques - while common in game engines - aren't really part of threejs or what it's built for. You'll never get the near-seamless LODs or GPU optimization techniques that you get with UE4 or Unity. Furthermore, WebGL - while it has made many strides over the past decade - is not ideal for handling vast texture sizes: not at the GPU level (since it's based on OpenGL ES, suited primarily for mobile devices), and certainly not at the caching level - we're still dealing with browsers here. You won't find a lot of WebGL work done with textures of the dimensions you refer to.
Having said that,
A. A loader will let you do other things while your textures are loading, so your user isn't staring at an 'unfinished mesh'. It lets you be pretty clever with dynamic loading times and UX design. Additionally, take a look at this gist to get an idea of what a progressive texture loader could look like. A much more involved, JPEG-specific technique can be found here, but I wouldn't approach it unless you're comfortable with low-level graphics programming.
B. Threejs does have a basic implementation of LOD, although I haven't tinkered with it myself and am not sure it's useful for textures; that said, the basic premise to inquire into is whether you can load progressively higher-resolution files on a per-need basis, just like Google Earth does, for example (see the sketch after this list).
C. This is out of the scope of your question - but I'd look into what happens under the hood in Unity's webgl export (which is based on threejs), and what kind of clever tricks are being employed there for similar purposes.
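For point B, a minimal sketch of three.js's built-in THREE.LOD; the sphere geometries are placeholders standing in for per-resolution versions of your mesh, and scene is assumed to exist:

import * as THREE from 'three';

const mat = new THREE.MeshBasicMaterial();
const lod = new THREE.LOD();
// Each level is used once the camera is at least that far away.
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 64, 64), mat), 0);
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 16, 16), mat), 50);
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 6, 6), mat), 200);
scene.add(lod); // levels switch automatically while rendering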
Finally, does your project have to be in WebGL? For something ambitious and demanding, sometimes "proper" OpenGL / DX makes much more sense.
Does anyone know how to make a WebGL texture of lower precision? I want to push quite a lot of values onto the GPU (it's tiled 3D data for ray tracing). The volume rendering is animated, so the data is updated each frame. This is currently a performance bottleneck.
I'd like to reduce the precision of each of the texel values as much as possible, as I don't think it will affect the visualisation too much. What is the default precision of a texel value? How can I specify a lower one? I'd be happy with 4-bit precision if it helped performance.
When I google, I find lots of material about setting the precision of variables once they are in the shader, but I want to do it to the texture before it gets sent to the GPU.
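For context: in three.js, a texture's storage precision is fixed when the texture is created, via its format and type, before anything is uploaded. A minimal sketch with illustrative sizes:

import * as THREE from 'three';

const size = 64;
const data = new Uint8Array(size * size * 4);
// UnsignedByteType is the default: 8 bits per channel.
const texture = new THREE.DataTexture(
  data, size, size, THREE.RGBAFormat, THREE.UnsignedByteType
);
texture.needsUpdate = true;
// Packed lower-precision types also exist, e.g. THREE.UnsignedShort4444Type,
// which maps to gl.UNSIGNED_SHORT_4_4_4_4 - 4 bits per channel.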
Thanks everyone. Code is here if you want a look - or if you need to know something specific about the code then let me know
Thanks
When I change the texture of my mesh, on some computers the application freezes for about half a second. I do this on 100 different meshes. In the Chrome profiler, I see that the three.js method setTexture is at the top of the CPU usage.
The method I use to apply the next texture is the simplest:
this.materials.map = this.nextTexture;
This works, but I have no idea how to optimize it.
If I used a particle system instead, would it improve anything?
Thanks a lot
Are you really using 100 different textures?
Try sorting your objects according to texture, to minimize texture swapping.
Texture-change is one of the more expensive GPU operations.
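A minimal sketch of that sorting idea in three.js, using renderOrder as a coarse grouping key; meshes is a hypothetical array of your 100 meshes:

// Meshes sharing a texture get the same renderOrder, so draws that share
// a texture run back-to-back and the GPU rebinds textures less often.
const byTexture = new Map();
for (const mesh of meshes) {
  const tex = mesh.material.map;
  if (!byTexture.has(tex)) byTexture.set(tex, byTexture.size);
  mesh.renderOrder = byTexture.get(tex);
}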
To minimize the number of state changes, I should sort the drawing order of my meshes. If I have multiple meshes using multiple shaders, though, I need to choose between sorting by vertex attribute bindings and sorting by shader uniform parameters.
Which should I choose? I'm inclined to minimize vertex attribute changes for the sake of GPU cache hit rate, but I have no idea about the cost of changing shaders. What are the general considerations when deciding drawing order? Is there some basis for making this choice?
PS. I'm targeting iOS/PowerVR SGX chips.
Edit
I decided to go with sort-by-material, because many meshes will use just a few materials, while there are a bunch of meshes to draw. This means I will have more opportunities to share materials than to share meshes, so there is more chance of decreasing the state-change count. I'm not sure, though, so if you have a better opinion, please let me know.
You don't need to depth-sort opaque objects on the PowerVR SGX, since it uses order-independent, pixel-perfect hidden surface removal.
Depth sort only to achieve proper transparency/translucency rendering.
The best practice on SGX is to sort by state, in the following order:
Viewport
Framebuffer
Shader
Textures
Clipping, Blending etc.
Texture state change can be significantly reduced by using texture atlases.
The number of draw calls can be reduced by batching.
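To illustrate the atlas point, a minimal sketch that remaps a geometry's UVs into one tile of an N x N atlas grid; tileX, tileY, and tilesPerRow are hypothetical:

// After this, the mesh samples only its tile of the shared atlas texture,
// so consecutive draws never need to rebind a different texture.
function remapUVsToAtlasTile(geometry, tileX, tileY, tilesPerRow) {
  const uv = geometry.attributes.uv;
  for (let i = 0; i < uv.count; i++) {
    uv.setXY(
      i,
      (tileX + uv.getX(i)) / tilesPerRow,
      (tileY + uv.getY(i)) / tilesPerRow
    );
  }
  uv.needsUpdate = true;
}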
Those are just the golden rules; remember that you should profile first and then optimize :)
See:
http://www.imgtec.com/powervr/insider/docs/PowerVR.Performance%20Recommendations.1.0.28.External.pdf
Use something like this: http://realtimecollisiondetection.net/blog/?p=86 - then you can change which parts are sorted first in your code at run time, to achieve the best speed per device.
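A minimal sketch of the sort-key idea from that article: pack each piece of state into a bitfield, sort once per frame, and reorder the field layout at run time to prioritize different state changes per device. The field widths and the drawCalls array are illustrative assumptions:

// | shader (8 bits) | texture (10 bits) | material (10 bits) |
function makeSortKey(draw) {
  return (draw.shaderId << 20) | (draw.textureId << 10) | draw.materialId;
}
drawCalls.sort((a, b) => makeSortKey(a) - makeSortKey(b));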