I have some models being loaded in that I want to all render using the same shader.
Since I have 100+ model chunks, each of which has its own texture, I would like to configure things so that I can reuse the same material for multiple Meshes. The problem, however, is that a texture is assigned to the material, not the mesh, so the architecture seems to prevent me from having a per-mesh texture without creating a whole new Material for each of them.
Everything still works, but the performance of a large scene composed of hundreds of meshes is problematic because of all the state changes and the hundreds of program switches made every frame. Of course, I should be building one big mesh instead of many little ones, as that would reduce the actual number of draw calls... but for the time being I'm trying to optimize a little without addressing issues arising from other parts of the data pipeline. The main point is that, regardless of how many draw calls are involved, all of the shader program changes and uniform assignments (aside from the texture sampler) are unnecessary.
Are there any tricks I can use, or easy ways to hack the library, to make it recycle the same shader? One of the problems is that, because of the way texture assignment works, I do have to create a new ShaderMaterial for each of my Meshes, and it's unclear how I could avoid doing that and still get the different textures working.
I'm creating a game in three.js. It has a lot of objects, and I'm using hashtables and chunk data structures to increase performance.
However, every object (simple cubes/planes) is added to the scene with scene.add(mesh);
So three.js stores the objects in a data structure that may not be the best one for my case.
My questions are:
Does it matter how three.js stores the added objects?
And is there a way to render objects manually in the render loop, without having to add them to the scene via scene.add(...)?
How do other games, like voxel.js, solve this problem? voxel.js seems to have great performance compared to my game, and a voxel game really consists of nothing but simple planes. My game is essentially a Minecraft-like block world, where all blocks are made of planes and only the visible planes are added to the scene. Yet I don't get the same performance as a voxel.js game, and I want to figure out what I can do to make it faster.
I'm looking at the three.js code and notice that it iterates over all objects while drawing, updating the GL context for each object. But if I have a bunch of objects sharing a material, this is highly inefficient, since those objects might be interleaved with others.
How can I put my objects in an order to minimize the gl calls? I know which objects share properties, I just don't know how to tell three.js that information.
Update: I modified the three.js code and counted the updates. It is quite wasteful. Given one logical object with two materials, for each one I add to the scene it needs to swap programs twice. So for 100 such objects it will swap 200 times as opposed to the desired 2 swaps!
What is "optimal" is case-specific, so your question is too general to be answered. State changes are not the only issue of concern.
three.js sorts opaque objects from front to back, and transparent ones from back to front. It renders transparent objects last.
If you set
renderer.sortObjects = false;
then objects will be rendered in the order they are added to the scene. Since you know what your objects are, this is your work-around.
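For example, a minimal sketch of that work-around (the meshes array and its shared materials are hypothetical): sort your own meshes by material before adding them, so that draws sharing a program land next to each other:

renderer.sortObjects = false;

// Hypothetical list of meshes, each already assigned one of a few shared materials.
// Grouping by material id means consecutive draws can reuse the same program.
meshes.sort( function ( a, b ) { return a.material.id - b.material.id; } );

for ( var i = 0; i < meshes.length; i++ ) {
    scene.add( meshes[ i ] );
}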
You can also merge your geometry, or use BufferGeometry to reduce the number of draw calls.
You can get info about the renderer by inspecting renderer.info in the console (or see https://github.com/spite/rstats). That way, you don't have to hack the source.
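For example, assuming the r.6x layout of that object:

console.log( renderer.info.memory.programs ); // number of compiled shader programs
console.log( renderer.info.render.calls );    // draw calls issued for the last rendered frame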
three.js r.64
I am currently learning how to apply more materials with lighting in my application, but I'm confused about how to scale it. I'm using WebGL and learning from learningwebgl.com (which is said to mirror the NeHe OpenGL tutorials), and it only shows simple shader programs where every sample has one program with the lighting embedded in it.
Say I have a lighting setup with multiple point lights and spot lights, and multiple meshes with different materials, where every mesh needs to react to those lights. What should I do? Make individual shader programs that apply colors/textures to the meshes and then switch to a lighting program? Or always keep every shader string in my application with those lights included as default functions, append them to loaded shaders, and simply pass variables to enable them?
Also, I'm focusing on per-fragment lighting, so presumably everything happens in the fragment shader.
There are generally two approaches.
Have an uber shader
In this case you make one big shader with every possible option, with lots of branching or ways to effectively nullify parts of the shader (like multiplying by 0).
A simple example might be to have an array of light uniforms in the shader. For lights that you don't want to have an effect, you just set their color to (0, 0, 0, 0) or their power to 0, so they are still computed but contribute nothing to the final scene.
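A minimal sketch of that idea as a three.js ShaderMaterial (the uniform names and light count are made up for illustration, the real shading math is elided, and the default vertex shader is relied on):

var MAX_LIGHTS = 4;

var lightColors = [];
var lightPositions = [];
for ( var i = 0; i < MAX_LIGHTS; i++ ) {
    lightColors.push( new THREE.Vector4( 0, 0, 0, 0 ) ); // an "off" light contributes nothing
    lightPositions.push( new THREE.Vector3() );
}

var uberMaterial = new THREE.ShaderMaterial( {
    uniforms: {
        lightColors:    { type: "v4v", value: lightColors },
        lightPositions: { type: "v3v", value: lightPositions }
    },
    fragmentShader: [
        "uniform vec4 lightColors[" + MAX_LIGHTS + "];",
        "uniform vec3 lightPositions[" + MAX_LIGHTS + "];",
        "void main() {",
        "    vec3 lit = vec3( 0.0 );",
        "    for ( int i = 0; i < " + MAX_LIGHTS + "; i++ ) {",
        "        lit += lightColors[ i ].rgb; // real shading math would go here",
        "    }",
        "    gl_FragColor = vec4( lit, 1.0 );",
        "}"
    ].join( "\n" )
} );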
Generate shaders on the fly
In this case, for each model you figure out what options it needs and generate a shader with exactly those features.
A variation of #2 is the same but all the various shaders needed are generated offline.
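A rough sketch of the on-the-fly variant, prepending GLSL #define flags to a shader body (the option names and fragmentBody are hypothetical):

// Hypothetical feature flags; the real options depend on your materials.
function buildFragmentSource( options, body ) {
    var defines = "";
    if ( options.useTexture ) defines += "#define USE_TEXTURE\n";
    if ( options.numLights )  defines += "#define NUM_LIGHTS " + options.numLights + "\n";
    return defines + body;
}

// The shader body then compiles only the branches it was built with, e.g.:
//     #ifdef USE_TEXTURE
//         color *= texture2D( map, vUv );
//     #endif
var fragmentSource = buildFragmentSource( { useTexture: true, numLights: 2 }, fragmentBody );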
Most game engines use technique #2, as it's far more efficient to run the smallest shader possible for each situation than to run an uber shader, but many smaller projects and even game prototypes use an uber shader because it's easier than generating shaders, especially if you don't yet know all the options you'll need.
In a three.js project (viewable here) I have 500 cubes, all of the same size and all statically positioned. On each of these cubes, five of the faces always remain the same color; however, the color of the sixth face can be dynamically updated, and this modification occurs across many of the cubes in a single frame and also occurs across most frames.
I've been able to implement this scene several different ways, but I have not been completely satisfied with the performance of anything I've tried. I know I must not have hit upon the right technique yet or maybe I'm not implementing one quite right. From a performance standpoint, what is the best way to change the color of these cube faces while maintaining independence across each of the cubes?
Here is what I have tried so far:
Create 500 individual CubeGeometry and Mesh instances. Change the color of a geometry face as described in the answer here: Change the colors of a cube's faces. So far this method has performed the best for me, but 500 identical geometries seems less than ideal, especially because I'm not able to achieve a regular 60fps with a good GPU. Rendering takes about 11-20ms here.
Create one CubeGeometry and use it across 500 Mesh instances. Create an array of MeshBasicMaterials to make a MeshFaceMaterial for each Mesh. Five of the MeshBasicMaterial instances are the same, representing the five statically colored sides of each cube, and a unique MeshBasicMaterial is added to the MeshFaceMaterial of each Mesh. The color of this unique material is updated with thisMesh.material.materials[3].uniforms.diffuse.value.copy(newColor). This method renders much slower than the first, 90-110ms, which surprises me. Maybe it's because 500 cubes with 6 materials each = 3000 materials to process???
Any advice you can offer would be much appreciated!
I discovered that three.js performs a WebGL draw call for each mesh in your scene, and this is what was really hurting my performance. I looked into yaku's suggestion of using BufferGeometry, which I'm sure would be a great route, but BufferGeometry appears relatively difficult to use unless you have a good amount of experience with WebGL/OpenGL.
However, I came across an alternative solution that was incredibly effective. I still created individual meshes for each of my 500 cubes, but then I used GeometryUtils.merge() to merge each of those meshes into a generic geometry to represent the entire group of cubes. I then used that group geometry to create a group mesh. An explanation of GeometryUtils.merge() is here.
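For reference, the merge step looks roughly like this (a sketch against the old Geometry API of that era; the cube size, grid layout, and material are placeholders):

var groupGeometry = new THREE.Geometry();
var material = new THREE.MeshBasicMaterial( { vertexColors: THREE.FaceColors } );

for ( var i = 0; i < 500; i++ ) {
    var cubeMesh = new THREE.Mesh( new THREE.CubeGeometry( 10, 10, 10 ) );
    cubeMesh.position.x = ( i % 25 ) * 12;              // placeholder grid layout
    cubeMesh.position.z = Math.floor( i / 25 ) * 12;
    cubeMesh.updateMatrix();                            // merge() applies the mesh's matrix
    THREE.GeometryUtils.merge( groupGeometry, cubeMesh );
}

var _mergedCubesMesh = new THREE.Mesh( groupGeometry, material );
scene.add( _mergedCubesMesh );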
What's especially nice about this tactic is that you still have access to all the faces that were part of the underlying geometries/meshes that you merge. In my project, this allowed me to still have full control over the face colors that I wanted control over:
// For 500 merged cubes, there will be 3000 faces in the geometry.
// This code will get the fourth face (index 3) of any cube.
_mergedCubesMesh.geometry.faces[(cubeIdx * 6) + 3].color
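One caveat worth adding, based on how the old Geometry pipeline handles face colors: after changing a face color you need to flag the geometry so three.js re-uploads the color buffer, e.g.:

_mergedCubesMesh.geometry.faces[ ( cubeIdx * 6 ) + 3 ].color.setHex( 0xff0000 );
_mergedCubesMesh.geometry.colorsNeedUpdate = true;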
As the title says, I would like to reuse a given ShaderMaterial for different meshes, but with a different set of uniforms for each mesh (in fact, some uniforms may vary between meshes, but not necessarily all of them): is it possible?
It seems a waste of resources to me to have to create a full ShaderMaterial for each mesh in this situation; the idea is to have a single vertex/fragment shader program but to configure it through different uniforms, whose values would change depending on the mesh. If I create a new ShaderMaterial for each mesh, I will end up with a lot of duplication (vertex + fragment programs plus all the other data members of the Material/ShaderMaterial classes).
If the engine were able to call a callback before drawing a mesh, I could change the uniforms and achieve what I want. Another possibility would be a "LiteShaderMaterial" which would hold a pointer to the shared ShaderMaterial plus only the uniforms specific to my mesh.
Note that my question is related to this one, Many meshes with the same geometry and material, can I change their colors?, but is still different, as I'm mostly concerned about the waste of resources. Performance-wise, I don't think it would differ much between having multiple ShaderMaterials or a single one, as the engine should be smart enough to notice that all the materials have the same programs and not resend them to the graphics card.
Thanks
When cloning a ShaderMaterial, the attributes and vertex/fragment programs are copied by reference. Only the uniforms are copied by value, which is what you want.
This should work efficiently.
You can prove it to yourself by creating a ShaderMaterial and then using ShaderMaterial.clone() to clone it for each mesh. Then assign each material unique uniform values.
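A minimal sketch of that pattern (vertexSrc, fragmentSrc, and the meshes array stand in for your real shader sources and objects):

var baseMaterial = new THREE.ShaderMaterial( {
    uniforms: { tint: { type: "c", value: new THREE.Color( 0xffffff ) } },
    vertexShader: vertexSrc,
    fragmentShader: fragmentSrc
} );

for ( var i = 0; i < meshes.length; i++ ) {
    var material = baseMaterial.clone();    // programs are copied by reference
    material.uniforms.tint.value.setHSL( i / meshes.length, 1.0, 0.5 ); // unique per mesh
    meshes[ i ].material = material;
}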
In the console, type "renderer.info". It should show 1 program.
three.js r.64
You can safely create multiple ShaderMaterial instances with the same parameters, with clone or otherwise. Three.js will do some extra checks as a consequence of material.needsUpdate being initially true for each instance, but then it will be able to reuse the same program for all instances.
In newer releases, another option is to use a single ShaderMaterial and apply the uniform changes in each object's onBeforeRender callback. This avoids unnecessary calls to initMaterial in the renderer, but whether or not this makes it a faster solution overall would have to be tested. It may be a risky solution if you push too much of what is being modified into that pre-render step, since in the worst case the single material could have to be recompiled multiple times during the render. I recommend this guide for further tips.
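A sketch of that approach (the uniform name and per-object color are illustrative; vertexSrc/fragmentSrc are your real shader sources):

var sharedMaterial = new THREE.ShaderMaterial( {
    uniforms: { tint: { value: new THREE.Color() } },
    vertexShader: vertexSrc,
    fragmentShader: fragmentSrc
} );

mesh.material = sharedMaterial;
mesh.userData.tint = new THREE.Color( 0xff8800 );

mesh.onBeforeRender = function ( renderer, scene, camera, geometry, material ) {
    // Patch the shared material's uniform just before this mesh is drawn.
    material.uniforms.tint.value.copy( this.userData.tint );
};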