I'd like to have a dynamic GLSL shader texture to use as a reference map (for displacement and other things) in multiple, different materials on different meshes.
My approach would be to do the computation once, using a THREE.WebGLRenderTarget: set up an ortho camera and a 1x1 plane with a THREE.ShaderMaterial, then access WebGLRenderTarget.texture, which I'd embed in a "master" object, whenever and wherever I need it.
Is there any "official" object I can / may use for this? I've seen that the post-processing objects are pretty similar (e.g. ShaderPass), but I'm unsure if and how to use them.
Thank you.
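Roughly what I have in mind, as a minimal sketch (the names and the placeholder pattern are mine, just to illustrate the idea):

```js
import * as THREE from 'three';

// Render the procedural pattern once (or whenever it changes) into a render target,
// then share target.texture with every material that needs it.
const target = new THREE.WebGLRenderTarget(512, 512);

const rttScene = new THREE.Scene();
const rttCamera = new THREE.OrthographicCamera(-0.5, 0.5, 0.5, -0.5, 0.1, 10);
rttCamera.position.z = 1;

const rttMaterial = new THREE.ShaderMaterial({
  uniforms: { uTime: { value: 0 } },
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    varying vec2 vUv;
    uniform float uTime;
    void main() {
      // placeholder pattern; the real displacement/reference logic goes here
      gl_FragColor = vec4(vUv, 0.5 + 0.5 * sin(uTime), 1.0);
    }
  `,
});
rttScene.add(new THREE.Mesh(new THREE.PlaneGeometry(1, 1), rttMaterial));

// Call this before the main render whenever the map needs to be refreshed.
function updateReferenceMap(renderer, time) {
  rttMaterial.uniforms.uTime.value = time;
  renderer.setRenderTarget(target);
  renderer.render(rttScene, rttCamera);
  renderer.setRenderTarget(null);
}

// target.texture is a regular THREE.Texture and can be reused anywhere:
// someStandardMaterial.displacementMap = target.texture;
// someShaderMaterial.uniforms.uReferenceMap.value = target.texture;
```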
What is the current solution in r136 to blend lights, shadows and color in a ShaderMaterial? I have already found the solution for fog support.
I found some examples from a previous revision (r108), like this codesandbox.
Actually, I'm looking for this kind of result: codesandbox.
Should I copy the MeshPhongMaterial shaders as a code base for my own shaders?
The use of custom shaders is mandatory in my projects, which is why I'm not using the built-in materials.
Any idea or example ?
Thanks !
This question is huge, and does not have a single answer. Creating lights, shadows, and color varies from material to material, and includes so many elements that it would require a full course to learn.
However, you can look at the segments of shader code used by Three.js in the folder called /ShaderChunk. If you look up "light", you'll see shader segments (or "chunks") for each material, like toon, lambert, physical, etc. Some materials need parameters to be defined at the beginning of the shader code (those are the _pars files), some are calculated in the vertex shader, some in the fragment shader, and some need to split the code between _begin and _end.
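For example, here is a deliberately minimal sketch (not a full MeshPhongMaterial port) of a ShaderMaterial that sets lights: true, merges in THREE.UniformsLib.lights, and pulls the lights_pars_begin chunk into the fragment shader to do a basic Lambert-style loop over the directional lights. Check the chunk and uniform names against your own revision, since they occasionally change between releases:

```js
import * as THREE from 'three';

// lights: true tells the renderer to fill in the scene's light uniforms, and
// lights_pars_begin declares the structs and arrays that go with them
// (directionalLights[], ambientLightColor, ...).
const lambertish = new THREE.ShaderMaterial({
  lights: true,
  uniforms: THREE.UniformsUtils.merge([
    THREE.UniformsLib.lights,
    { diffuse: { value: new THREE.Color(0xff8844) } },
  ]),
  vertexShader: /* glsl */ `
    varying vec3 vNormal;
    void main() {
      vNormal = normalize(normalMatrix * normal); // view-space normal
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    #include <common>
    #include <lights_pars_begin>

    uniform vec3 diffuse;
    varying vec3 vNormal;

    void main() {
      vec3 n = normalize(vNormal);
      vec3 lighting = ambientLightColor;

      #if NUM_DIR_LIGHTS > 0
      for (int i = 0; i < NUM_DIR_LIGHTS; i++) {
        // the directions provided by the chunk are already in view space
        lighting += directionalLights[i].color * saturate(dot(n, directionalLights[i].direction));
      }
      #endif

      gl_FragColor = vec4(diffuse * lighting, 1.0);
    }
  `,
});
```

Shadow support would additionally need the shadowmap chunks (and a shadow map pass), which is where the real complexity starts.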
Shadows are even more complex because they require a separate render pass to build the shadowmap. Like I said, re-building your own lights, shadows, and color is a huge undertaking, and it would need a full course to learn. I hope this answer at least points you in the right direction.
I compose multiple STLs for 3D printing / milling. For that I also use CSG and need some raytracing for detecting features of the models.
My scene is pretty much static; I just have to move the models around to arrange them. For this use case I'm not really sure which approach to moving / rotating the models is right.
Currently I manipulate the BufferGeometries directly, so everything in the geometry is as it is in the real world: each position, each normal, with no conversion between local and world coordinates.
On the other hand, I could do the same thing by changing the meshes, which means changing just a matrix.
To me, working with the mesh feels more suited to animation and the like, while working with the geometry manipulates the real object, which is my intention.
I'm wondering when one would translate / rotate the geometry and when the mesh. I know that manipulating the geometry is harder on the CPU, but that is not a problem for my use case.
Geometry can be translated so that subsequent transformations (such as scale or rotation) originate from a preferred point. Meshes can share a geometry, so baking a transform into the geometry affects every mesh that uses it. There are distinct use cases for each, if you care to memorize the list. Sometimes I integrate pre-existing code samples; sometimes the decision is made for me by some aspect of the process. Where the two approaches overlap, the question is simply which is more convenient. I like the pattern of modifying a dummy Object3D with those methods and then updating the geometry from its matrix, as in the sketch below. There's a whole book to write on normals, but I didn't write it, sadly...
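A quick sketch of the three (mutually exclusive) ways of applying the same transform in three.js; the geometry and mesh here are arbitrary examples:

```js
import * as THREE from 'three';

const geometry = new THREE.BoxGeometry(1, 1, 1);
const mesh = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());

// 1. Bake the transform into the geometry: positions and normals are rewritten,
//    so the buffers themselves end up in world coordinates.
geometry.rotateX(Math.PI / 2);
geometry.translate(10, 0, 5);

// 2. Leave the geometry alone and transform the mesh: only the object's matrix
//    changes, the buffers stay in local coordinates.
mesh.rotation.x = Math.PI / 2;
mesh.position.set(10, 0, 5);

// 3. The dummy-object pattern: compose the transform on a throwaway Object3D,
//    then apply its matrix to the geometry in one go.
const dummy = new THREE.Object3D();
dummy.rotation.x = Math.PI / 2;
dummy.position.set(10, 0, 5);
dummy.updateMatrix();
geometry.applyMatrix4(dummy.matrix); // also re-transforms the normals
```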
So, I want to start making a game engine, and I realized that I would have to draw 3D objects and a GUI (immediate mode) at the same time.
The 3D objects will use a perspective projection matrix, and since the GUI is in 2D space I will have to use an orthographic projection matrix.
How can I implement that? Please, can anyone guide me? I'm not a professional graphics programmer.
Also, I'm using DirectX 11, so please keep it that way.
To preface my answer, when I say "draw at the same time", I mean all drawing that takes place with a single call to ID3D11DeviceContext::Draw (or DrawIndexed/DrawAuto/etc). You might mean something different.
You are not required to draw objects with orthographic and perspective projections at the same time, and this isn't commonly done.
Generally the projection matrix is provided to a vertex shader via a shader constant (or, frequently, via a concatenation of the World, View and Projection matrices). When you draw a perspective object, you bind one set of constants; when you draw an orthographic one, you bind different ones. Frequently, different shaders are used to render perspective and orthographic objects, because they generally have completely different properties (e.g. lighting).
You could draw the two different types of objects at the same time, and there are several ways you could accomplish that. A straightforward way would be to provide both projection matrices to the vertex shader, and have an additional vertex stream which determines which projection matrix to use.
In some edge cases, you might get a small performance benefit from this sort of batching, but I don't suggest you do that. Make your life easier and use separate draw calls for orthographic and perspective objects.
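The question is DirectX 11 specific, but the pattern itself is API agnostic; purely as an illustration, the "two passes, two sets of projection constants" idea looks like this in WebGL/three.js terms (in D3D11 you would bind a different constant buffer and issue separate Draw calls for each pass instead):

```js
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.autoClear = false; // clear manually so the GUI pass overlays the 3D pass

const worldScene = new THREE.Scene();
const worldCamera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 1000);

const guiScene = new THREE.Scene();
const guiCamera = new THREE.OrthographicCamera(0, innerWidth, innerHeight, 0, -1, 1);

function renderFrame() {
  renderer.clear();
  renderer.render(worldScene, worldCamera); // draws with the perspective projection
  renderer.clearDepth();                    // keep the GUI from being occluded by the 3D scene
  renderer.render(guiScene, guiCamera);     // draws with the orthographic projection
}
```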
I am currently learning to apply more materials with lighting to my application, but I'm confused about how to scale it. I'm using WebGL and learning from learningwebgl.com (which they say is the same as the NeHe OpenGL tutorials), and it only shows simple shader programs: every sample has one program with the lighting embedded in it.
Say I have a setup with multiple lights, like some point lights / spot lights, and I have multiple meshes with different materials, but every mesh needs to react to those lights. What should I do? Make individual shader programs that apply colors/textures to the meshes and then switch to a lighting program? Or always keep every lighting shader string (as functions) in my application by default, append it to the loaded shaders, and simply pass variables to enable them?
Also, I'm focusing on per-fragment lighting, so things may only happen in the fragment shaders.
There are generally two approaches:
1. Have an uber shader
In this case you make one big shader with every possible option and lots of branching, or ways to effectively nullify parts of the shader (like multiplying by 0).
A simple example might be to have an array of light uniforms in the shader. For lights you don't want to have an effect, you just set their color to (0, 0, 0, 0) or their power to 0, so they are still calculated but contribute nothing to the final scene.
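A sketch of that idea in plain WebGL GLSL (names such as u_lightColor are made up for the example): the shader always loops over a fixed-size light array, and the application zero-fills the slots it doesn't use:

```js
const MAX_LIGHTS = 8;

// Fragment shader with a fixed-size light array; disabled lights are simply black.
const uberFragmentShader = /* glsl */ `
  precision mediump float;

  #define MAX_LIGHTS 8

  uniform vec3 u_lightColor[MAX_LIGHTS];     // (0,0,0) disables a light
  uniform vec3 u_lightDirection[MAX_LIGHTS]; // direction toward the light
  uniform vec3 u_baseColor;

  varying vec3 v_normal;

  void main() {
    vec3 n = normalize(v_normal);
    vec3 lighting = vec3(0.0);
    for (int i = 0; i < MAX_LIGHTS; i++) {
      // disabled lights are still computed but contribute nothing
      lighting += u_lightColor[i] * max(dot(n, u_lightDirection[i]), 0.0);
    }
    gl_FragColor = vec4(u_baseColor * lighting, 1.0);
  }
`;

// On the JavaScript side, always upload MAX_LIGHTS entries, zero-filled by default:
const lightColors = new Float32Array(MAX_LIGHTS * 3); // all zeros
lightColors.set([1.0, 1.0, 1.0], 0);                  // enable only light 0, white
// gl.uniform3fv(uLightColorLocation, lightColors);
```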
2. Generate shaders on the fly
In this case for each model you figure out what options you need for that shader and generate the appropriate shader with the exact features you need.
A variation of #2 is the same but all the various shaders needed are generated offline.
Most game engines use technique #2, as it's far more efficient to use the smallest shader possible for each situation than to run an uber shader, but many smaller projects and even game prototypes often use an uber shader because it's easier than generating shaders, especially if you don't know yet all the options you'll need.
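A small sketch of option #2: build each shader's source from the features a model actually uses, and cache the compiled programs by option set so identical variants are shared. The option names and the compileProgram helper below are made up for the example:

```js
// Assemble a fragment shader per model from feature flags.
function buildFragmentShader({ numPointLights = 0, useTexture = false } = {}) {
  return /* glsl */ `
    precision mediump float;
    #define NUM_POINT_LIGHTS ${numPointLights}
    ${useTexture ? '#define USE_TEXTURE' : ''}

    uniform vec3 u_baseColor;
    #ifdef USE_TEXTURE
      uniform sampler2D u_map;
      varying vec2 v_uv;
    #endif
    #if NUM_POINT_LIGHTS > 0
      uniform vec3 u_pointLightColor[NUM_POINT_LIGHTS];
    #endif

    void main() {
      vec3 color = u_baseColor;
      #ifdef USE_TEXTURE
        color *= texture2D(u_map, v_uv).rgb;
      #endif
      // per-light shading would be added here, guarded by NUM_POINT_LIGHTS
      gl_FragColor = vec4(color, 1.0);
    }
  `;
}

// Cache programs by their option set so two models with the same features
// end up sharing one compiled program.
const programCache = new Map();
function getProgram(gl, options) {
  const key = JSON.stringify(options);
  if (!programCache.has(key)) {
    // compileProgram is your own shader-compile helper
    programCache.set(key, compileProgram(gl, buildFragmentShader(options)));
  }
  return programCache.get(key);
}
```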
As the title says, I would like to reuse a given ShaderMaterial for different meshes, but with a different set of uniforms for each mesh (in fact, some uniforms may vary between meshes, but not necessarily all of them): is that possible?
It seems a waste of resources to me to have to create a full ShaderMaterial for each mesh in this case; the idea is to have a single vertex/fragment shader program but to configure it through different uniforms, whose values would change depending on the mesh. If I create a new ShaderMaterial for each mesh, I will end up with a lot of duplication (vertex + fragment programs plus all the other data members of the Material / ShaderMaterial classes).
If the engine was able to call a callback before drawing a mesh, I could change the uniforms and achieve what I want to do. Another possibility would be to have a "LiteShaderMaterial" which would hold a pointer to the shared ShaderMaterial + only the specific uniforms for my mesh.
Note that my question is related to this one, Many meshes with the same geometry and material, can I change their colors?, but is still different, as I'm mostly concerned about the waste of resources. Performance-wise, I don't think it would be much different between having multiple ShaderMaterials or a single one, as the engine should be smart enough to notice that all the materials have the same programs and not resend them to the graphics card.
Thanks
When cloning a ShaderMaterial, the attributes and vertex/fragment programs are copied by reference. Only the uniforms are copied by value, which is what you want.
This should work efficiently.
You can prove it to yourself by creating a ShaderMaterial and then using ShaderMaterial.clone() to clone it for each mesh. Then assign each material unique uniform values.
In the console, type "renderer.info". It should show 1 program.
three.js r.64
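A small sketch of the clone() approach (trivial shaders; the names are just examples):

```js
import * as THREE from 'three';

const vertexShader = /* glsl */ `
  void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;
const fragmentShader = /* glsl */ `
  uniform vec3 uColor;
  void main() {
    gl_FragColor = vec4(uColor, 1.0);
  }
`;

const baseMaterial = new THREE.ShaderMaterial({
  uniforms: { uColor: { value: new THREE.Color(0xffffff) } },
  vertexShader,
  fragmentShader,
});

const geometry = new THREE.BoxGeometry(1, 1, 1);

// clone() shares the shader source by reference but deep-copies the uniforms,
// so each mesh gets its own values while the renderer compiles one program.
const meshA = new THREE.Mesh(geometry, baseMaterial.clone());
meshA.material.uniforms.uColor.value.set(0xff0000);

const meshB = new THREE.Mesh(geometry, baseMaterial.clone());
meshB.material.uniforms.uColor.value.set(0x0000ff);

// After rendering, renderer.info should still report a single compiled program
// (where exactly the program count lives has moved between revisions).
```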
You can safely create multiple ShaderMaterial instances with the same parameters, with clone or otherwise. Three.js will do some extra checks as a consequence of material.needsUpdate being initially true for each instance, but then it will be able to reuse the same program for all instances.
In newer releases, another option is to use a single ShaderMaterial and make the changes to its uniforms in the objects' onBeforeRender functions. This avoids unnecessary calls to initMaterial in the renderer, but whether or not this makes it a faster solution overall would have to be tested. It can be a risky solution if you change too much before rendering, as in the worst case the single material could have to be recompiled multiple times during the render. I recommend this guide for further tips.
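A sketch of that single-material variant, assuming a per-mesh color stored in userData; note ShaderMaterial.uniformsNeedUpdate, which is what forces the renderer to re-upload the values for each object sharing the material:

```js
import * as THREE from 'three';

const sharedMaterial = new THREE.ShaderMaterial({
  uniforms: { uColor: { value: new THREE.Color() } },
  vertexShader: /* glsl */ `
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform vec3 uColor;
    void main() {
      gl_FragColor = vec4(uColor, 1.0);
    }
  `,
});

const scene = new THREE.Scene();
const boxGeometry = new THREE.BoxGeometry(1, 1, 1);

function makeColoredMesh(color) {
  const mesh = new THREE.Mesh(boxGeometry, sharedMaterial);
  mesh.userData.color = new THREE.Color(color);
  mesh.onBeforeRender = (renderer, scene, camera, geometry, material) => {
    // Only the uniform values change here, so no recompilation is triggered;
    // uniformsNeedUpdate forces the new values to be uploaded for this draw call.
    material.uniforms.uColor.value.copy(mesh.userData.color);
    material.uniformsNeedUpdate = true;
  };
  return mesh;
}

scene.add(makeColoredMesh(0xff0000));
scene.add(makeColoredMesh(0x00ff00));
```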