I am currently learning to apply multiple materials with lighting in my application, but I'm confused about how to scale this. I'm using WebGL and learning from learningwebgl.com (which they say is the same as the NeHe OpenGL tutorials), and it only shows simple shader programs: every sample has a single program with the lighting embedded in it.
Say I have a setup with multiple lights, like some point lights and spot lights, and I have multiple meshes with different materials, but every mesh needs to react to those lights. What should I do? Make individual shader programs that apply colors/textures to meshes and then switch to a lighting program? Or keep the lighting code (as functions) in my application as default shader strings, append it to every loaded shader, and simply pass variables to enable or disable each light?
Also, I am focusing on per-fragment lighting, so presumably everything would happen in the fragment shader.
There are generally two approaches:
1. Have an uber shader
In this case you make one big shader with every possible option and lots of branching, or ways to effectively nullify parts of the shader (like multiplying by 0).
A simple example might be to have an array of light uniforms in the shader. For lights you don't want to have an effect, you set their color to 0,0,0,0 or their power to 0, so they are still calculated but they contribute nothing to the final scene.
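A rough illustration of that idea (the uniform names and light count here are made up, not taken from any particular engine): an uber-style fragment shader loops over a fixed-size light array, and zeroed-out entries simply add nothing.

```glsl
precision mediump float;
#define MAX_LIGHTS 8

// Hypothetical uniform layout: set a light's color to vec4(0.0) or its power
// to 0.0 to "disable" it; it is still evaluated but contributes nothing.
uniform vec3 u_lightPositions[MAX_LIGHTS];
uniform vec4 u_lightColors[MAX_LIGHTS];
uniform float u_lightPowers[MAX_LIGHTS];

varying vec3 v_worldPos;
varying vec3 v_normal;

void main() {
  vec3 n = normalize(v_normal);
  vec3 lighting = vec3(0.0);

  // Every light is always computed; disabled lights just add zero.
  for (int i = 0; i < MAX_LIGHTS; i++) {
    vec3 toLight = u_lightPositions[i] - v_worldPos;
    float diffuse = max(dot(n, normalize(toLight)), 0.0);
    lighting += u_lightColors[i].rgb * u_lightPowers[i] * diffuse;
  }

  gl_FragColor = vec4(lighting, 1.0);
}
```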
2. Generate shaders on the fly
In this case for each model you figure out what options you need for that shader and generate the appropriate shader with the exact features you need.
A variation of #2 is the same but all the various shaders needed are generated offline.
Most game engines use technique #2, as it's far more efficient to run the smallest shader possible for each situation than to run an uber shader. But many smaller projects, and even game prototypes, often use an uber shader because it's easier than generating shaders, especially if you don't know all the options you'll need yet.
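A hand-wavy sketch of technique #2 in JavaScript (the option names, buildFragmentShader, compileProgram, and the base shader sources are all placeholders, just to show the idea of assembling source from #define flags and compiling one program per feature combination):

```js
// Hypothetical helper: assemble fragment shader source from the features a model needs.
function buildFragmentShader(options) {
  const defines = [];
  if (options.useTexture) defines.push('#define USE_TEXTURE');
  if (options.numPointLights > 0) defines.push(`#define NUM_POINT_LIGHTS ${options.numPointLights}`);
  if (options.useSpecular) defines.push('#define USE_SPECULAR');

  // baseFragmentSource would contain #ifdef USE_TEXTURE / #if NUM_POINT_LIGHTS > 0 blocks.
  return 'precision mediump float;\n' + defines.join('\n') + '\n' + baseFragmentSource;
}

// Cache compiled programs so models that need the same feature set share one program.
const programCache = new Map();
function getProgram(gl, options) {
  const key = JSON.stringify(options);
  if (!programCache.has(key)) {
    programCache.set(key, compileProgram(gl, baseVertexSource, buildFragmentShader(options)));
  }
  return programCache.get(key);
}
```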
What is the current solution in r136 to blend lights, shadows and color in a ShaderMaterial? I have already found the solution for fog support.
I found some examples from a previous revision (r108), like this codesandbox.
Actually, I'm looking for this kind of result: codesandbox.
Should I copy the MeshPhongMaterial shaders as a code base for my own shaders?
Using custom shaders is mandatory in my projects, which is why I'm not using the built-in materials.
Any idea or example?
Thanks!
This question is huge, and does not have a single answer. Creating lights, shadows, and color varies from material to material, and includes so many elements that it would require a full course to learn.
However, you can look at the segments of shader code used by Three.js in the /ShaderChunk folder. If you search for "light", you'll see shader segments (or "chunks") for each material, like toon, lambert, physical, etc. Some materials need parameters to be defined at the beginning of the shader code (those are the _pars files), some lighting is calculated in the vertex shader, some in the fragment shader, and some chunks split the code between _begin and _end, etc.
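For example (a minimal, untested sketch; it assumes the ShaderLib/UniformsLib structure present around r136, and the exact chunks and uniforms a material needs will vary), you can inspect how a built-in material stitches its chunks together and reuse its uniforms in a ShaderMaterial that declares lights: true:

```js
// See how MeshPhongMaterial assembles its chunks:
console.log(THREE.ShaderLib.phong.fragmentShader);

// Reuse the built-in lighting uniforms in a custom ShaderMaterial.
const material = new THREE.ShaderMaterial({
  lights: true, // ask the renderer to fill in the scene's light uniforms
  uniforms: THREE.UniformsUtils.merge([
    THREE.ShaderLib.phong.uniforms,
    { diffuse: { value: new THREE.Color(0xff8800) } },
  ]),
  vertexShader: THREE.ShaderLib.phong.vertexShader,     // or your own code built from chunks
  fragmentShader: THREE.ShaderLib.phong.fragmentShader, // using #include <...> statements
});
```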
Shadows are even more complex because they require a separate render pass to build the shadowmap. Like I said, re-building your own lights, shadows, and color is a huge undertaking, and it would need a full course to learn. I hope this answer at least points you in the right direction.
Hello, I'm trying to achieve the effect in the image below (it's like a shine of light, but only on top of the raw image).
Unfortunately I cannot figure out how to do it. I tried some shaders and assets from the Asset Store, but so far none of them has worked, and I don't know much about shaders.
The raw image is a UI element, and it displays a render texture that is being captured by a camera.
I'm totally lost here; any kind of help would be appreciated. How can I create that effect?
Fresnel shaders use the difference between the surface normal and the view vector to detect which pixels are facing the viewer and which aren't. A UI plane will always face the user, so no luck there.
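For context, the core of a fresnel term looks roughly like the GLSL below (the names are illustrative; a Unity shader would express the same thing in HLSL). On a flat UI quad the normal is constant and always points at the camera, so this factor comes out the same for every pixel:

```glsl
// Illustrative fresnel factor: near 0 where the surface faces the viewer,
// approaching 1 at grazing angles.
float fresnelFactor(vec3 normal, vec3 viewDir, float power) {
  float facing = max(dot(normalize(normal), normalize(viewDir)), 0.0);
  return pow(1.0 - facing, power);
}
```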
Solving this with shaders can be done in two ways: either you bake a normal map of the imagined "curvature" of the outer edge (example), or you create a signed distance field (example) or some similar mapping of the distance to the edge. A normal map would probably allow for the most complex effects, and I am sure that some fresnel shaders could work with it too. It does, however, require you to make a model of the shape and bake the normals from that.
A signed distance field, on the other hand, can be generated by a script from an image, so if you have a lot of images, it might be the fastest approach. Computing the edge distance in real time inside the shader would not really work, since you'd have to sample a very large number of neighboring pixels, which could make the shader 10-20 times slower depending on how thick you need the edge to be.
If you don't need the image to be that dynamic, then maybe just creating a black/white inner-glow texture in Photoshop and overlaying it with an additive shader would work better for you. If you don't know how to write shaders, then the two approaches above may be a bit of a tall order.
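To give an idea of that last option (an untested GLSL sketch; a Unity UI shader would be written in HLSL, but the math is the same, and the texture/tint names here are made up), the fragment step is essentially the base color plus a baked glow mask times a tint:

```glsl
precision mediump float;

uniform sampler2D u_baseTexture; // the render texture shown in the UI
uniform sampler2D u_glowMask;    // baked black/white inner-glow texture
uniform vec3 u_glowColor;        // tint of the shine
varying vec2 v_uv;

void main() {
  vec4 base = texture2D(u_baseTexture, v_uv);
  float glow = texture2D(u_glowMask, v_uv).r;
  // Additive overlay: white parts of the mask brighten the image.
  gl_FragColor = vec4(base.rgb + u_glowColor * glow, base.a);
}
```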
I'd like to have a dynamic GLSL shader texture to be used as a reference map (for displacement and other stuff) on multiple, different materials on different Meshes.
My approach would be to do the computation once using a THREE.WebGLRenderTarget: set up an orthographic camera and a 1x1 plane with a THREE.ShaderMaterial, then grab the WebGLRenderTarget.texture, which I'd embed in a "master" object and use whenever and wherever I need it.
Is there any "official" object I can or should use for this? I've seen that the post-processing objects (e.g. ShaderPass) are pretty similar, but I'm unsure if and how to use them.
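Roughly, what I have in mind looks like this (an untested sketch; renderer, scene, and the materials at the end are assumed to exist elsewhere in my app, and the placeholder shader just fills the target with a pattern):

```js
// Render a procedural texture once into a render target...
const rt = new THREE.WebGLRenderTarget(512, 512);
const rtScene = new THREE.Scene();
const rtCamera = new THREE.OrthographicCamera(-0.5, 0.5, 0.5, -0.5, 0.1, 10);
rtCamera.position.z = 1;

const proceduralMaterial = new THREE.ShaderMaterial({
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }`,
  fragmentShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      gl_FragColor = vec4(vUv, 0.5, 1.0); // placeholder pattern
    }`,
});
rtScene.add(new THREE.Mesh(new THREE.PlaneGeometry(1, 1), proceduralMaterial));

renderer.setRenderTarget(rt);
renderer.render(rtScene, rtCamera);
renderer.setRenderTarget(null);

// ...then reuse rt.texture on several different materials.
someStandardMaterial.displacementMap = rt.texture;
someShaderMaterial.uniforms.referenceMap.value = rt.texture;
```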
Thank you.
I have some models being loaded that I want to render all with the same shader.
Since I have 100+ model chunks, each of which has its own texture, I would like to configure things in a way that lets me reuse the same material for multiple Meshes. The problem, however, is that a texture is assigned to the material rather than the mesh; that is to say, the architecture seems to prevent me from having a per-mesh texture without creating a completely new Material for each one.
So, everything still works, but the performance of a large scene composed of hundreds of meshes suffers because of all the state changes and calls made to switch programs hundreds of times each frame. Of course, I should be building one big mesh instead of many little ones, as that would reduce the actual number of draw calls... but for the time being I'm trying to optimize a little without addressing those issues, which arise from other parts of the data pipeline. The main point is that, regardless of how many draw calls are involved, all of the shader program changes and uniform assignments (aside from the texture sampler) are unnecessary.
Are there any tricks I can use, or ways to easily hack the library, to make it recycle the same shader? One of the problems is that, because of the way assigning a texture works, I currently have to create a new ShaderMaterial for each of my Meshes. It's totally unclear how I could avoid doing that and still get the different textures working.
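For reference, this is roughly what I'm doing now (simplified; loadTexture and the shared shader sources are placeholders): one ShaderMaterial per mesh, even though only the texture differs between them.

```js
// Simplified version of my current setup: a new ShaderMaterial for every chunk,
// although only the map uniform is different from one mesh to the next.
const sharedVertexShader = myVertexShaderSource;     // placeholder
const sharedFragmentShader = myFragmentShaderSource; // placeholder

for (const chunk of modelChunks) {
  const material = new THREE.ShaderMaterial({
    uniforms: {
      map: { value: loadTexture(chunk.textureUrl) }, // the only per-mesh difference
    },
    vertexShader: sharedVertexShader,
    fragmentShader: sharedFragmentShader,
  });
  scene.add(new THREE.Mesh(chunk.geometry, material));
}
```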
I've tried to figure out how three.js works and have run a shader debugger on it.
I've added two simple planes with a basic material (a single color without any shading model), which rotate during the rendering process.
First of all, my question was: why is three.js using a single shader program (look at the WebGL context function .useProgram()) for both meshes?
I supposed that the objects are identical, and that's why, for performance reasons, a single shader program is used for similar objects.
But... I have changed my three.js application source code, and now there are a plane and a cube in the scene, both rotating.
And let's look at the shader debugger again:
Here you can see that three.js is again using one shader program, but the objects are different this time. And this is the part that isn't clear to me.
Looking at that shader, it seems to be a very generic and huge shader program, and there are also two other shader programs which were compiled but never used.
So, why is three.js using a single shader program? Is my assumption about the reason correct, or is there another explanation?
Most of the work done in a shader is related to the material part of the mesh, not the geometry.
In WebGL (or OpenGL, for that matter), the geometry as you understand it (whether it is a cube, a sphere, or whatever) is pretty irrelevant.
It would be a little more relevant if you talked about how the geometry is constructed. But these days, when faces of more than 3 vertices are gone and triangle strips are seldom used, there are few different kinds of geometry: face3 geometries, line geometries, particle geometries, and buffer geometries.
Most of the time, the key difference that calls for a different shader will be in the material.
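You can check this yourself with something like the following (an untested sketch, assuming a three.js version that exposes renderer.info.programs, and a scene, camera, and renderer already set up): two meshes with different geometries but the same material end up drawn with one compiled program.

```js
const material = new THREE.MeshBasicMaterial({ color: 0xff0000 });

// Different geometries, same material.
const plane = new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material);
const cube = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), material);
scene.add(plane, cube);

renderer.render(scene, camera);

// Programs are compiled per material configuration, not per geometry,
// so both meshes are rendered with the same program.
console.log(renderer.info.programs);
```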