Coming from a Flash background, I'm used to creating a fragment shader in the following manner:
filters = [];
filters.push(new BasicFilter(SomeTexture));
filters.push(new NormalMapFilter(SomeOtherTexture));
myShader = new Shader(filters);
As a result, I could combine a number of effects freely and easily without having to write a large separate shader each time.
In the case of three.js, I noticed that a single large shader is written for complex visual effects, as in this example: http://threejs.org/examples/#webgl_materials_bumpmap_skin
Is it possible to write, say, a bump map shader and an environment map shader separately and then combine them dynamically when needed? What would be the proper way of doing this?
This is not possible directly in three.js. Three.js creates shaders by passing the shader code as a string straight to a WebGL shader object for compilation; there is no mechanism that automatically composes complex shaders from smaller pieces, so you have to write your own. Three.js does conveniently add a number of standard uniforms and attributes to the shader, but you still have to write the code that uses them.
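For illustration, here is roughly what "writing your own" looks like with THREE.ShaderMaterial: you hand it the complete GLSL strings yourself, and three.js supplies built-ins such as position, uv, projectionMatrix and modelViewMatrix. This is only a sketch; uTexture, someTexture and the shader bodies are placeholders.
var material = new THREE.ShaderMaterial({
  uniforms: {
    uTexture: { value: someTexture } // hypothetical texture uniform
  },
  vertexShader: [
    'varying vec2 vUv;',
    'void main() {',
    '  vUv = uv;',
    '  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
    '}'
  ].join('\n'),
  fragmentShader: [
    // if you want bump mapping AND environment mapping, both have to live in this one string
    'uniform sampler2D uTexture;',
    'varying vec2 vUv;',
    'void main() {',
    '  gl_FragColor = texture2D(uTexture, vUv);',
    '}'
  ].join('\n')
});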
You can now use EffectComposer to combine shaders as post-processing passes.
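A minimal sketch of that approach (EffectComposer, RenderPass, ShaderPass and CopyShader ship in the three.js examples folder and have to be included separately; depending on the version you may also need to set renderToScreen on the final pass):
var composer = new THREE.EffectComposer(renderer);
composer.addPass(new THREE.RenderPass(scene, camera));    // render the scene normally first
composer.addPass(new THREE.ShaderPass(THREE.CopyShader)); // then chain whatever shader passes you need

// in the animation loop, instead of renderer.render(scene, camera):
composer.render();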
I'm thinking of writing a pixel shader for Windows Terminal and I'd love to add phosphor persistence. For that, I'd need to save a couple of previous results of the shader output and keep them across multiple pixel calls. Is there a way to declare a texture map that is preserved across multiple frames?
There is no easy way to accomplish this in Windows Terminal.
Shaders are usually called per frame, with no access to the parameters used in the call before or after.
Normally, because nothing is preserved within the shader between frames, you would accomplish this as follows:
Save the result of each shader call outside of the shader at the end of each frame.
Send it to the shader in a Texture2D on the next call.
Sample the texture to find the previous pixel value.
This is exactly what implementations of TAA do.
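Windows Terminal does not expose this today, but to illustrate the feedback loop described above in a context where you do control the render calls, here is a rough WebGL/three.js sketch; feedbackMaterial, its tPrevious uniform and the final screen blit are all made-up placeholders.
var targetA = new THREE.WebGLRenderTarget(width, height); // holds the previous frame
var targetB = new THREE.WebGLRenderTarget(width, height); // receives the current frame

function renderFrame() {
  feedbackMaterial.uniforms.tPrevious.value = targetA.texture; // feed last frame back in
  renderer.setRenderTarget(targetB);
  renderer.render(scene, camera);
  renderer.setRenderTarget(null);
  // ...blit targetB.texture to the screen, then swap so it becomes "previous" next frame
  var tmp = targetA; targetA = targetB; targetB = tmp;
}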
Without the ability to control the calls to the shader, this is impossible. BUT the Windows Terminal source is here, so you can give yourself that functionality.
We use three.js as the foundation of our WebGL engine, and up until now we have only used traditional alpha blending with back-to-front sorting (which we customized a little to match the desired behavior) in our projects.
Goal: Our goal is now to incorporate the order-independent transparency algorithm proposed by McGuire and Bavoil in this paper, in order to rid ourselves of the usual problems with sorting and conventional alpha blending in complex scenes. I got it working without much hassle in a small three.js-based prototype.
Problem: The problem we have in the WebGL engine is that we're dealing with object hierarchies consisting of both opaque and translucent objects, which are currently added to the same scene so that three.js handles the transform updates. This is a problem because, for the above algorithm to work, we need to render to one or more FBOs (more than one because three.js r79 lacks MRT support) to calculate accumulation and revealage, and finally blend the result with the front buffer to which the opaque objects have previously been rendered; this is in fact what I do in my working prototype.
I am aware that three.js already does separate passes for both types of objects, but I'm not aware of any way to influence which render target three.js renders to (render(.., .., rt, ..) is not applicable here) or how to modify the other pipeline state I need. If a mixed hierarchy is added to a single scene, I have no idea how to tell three.js where my fragments are supposed to end up, and in addition I need to reuse the depth buffer from the opaque pass during the transparent pass, with depth testing enabled but depth writes disabled.
Solution A: The first obvious answer would be to simply set up two scenes and render opaque and translucent objects separately, choosing the render targets as we please, and finally do our compositing as needed.
This would be fine, except that we would have to perform, or at least trigger, all transformation calculations manually to achieve correct hierarchical behavior. So far, this seems to be the most feasible option.
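A rough sketch of Solution A, assuming the hierarchy has already been split into opaqueScene and transparentScene; accumRevealTarget is a placeholder for the OIT target(s), and in r79 the target is passed to render() directly (newer versions use setRenderTarget()).
// trigger the transform updates ourselves, since each scene only sees part of the hierarchy
opaqueScene.updateMatrixWorld(true);
transparentScene.updateMatrixWorld(true);

renderer.render(opaqueScene, camera);                         // opaque pass to the framebuffer
// depth test on, depth writes off for the next pass; sharing the opaque pass's
// depth buffer with the FBO is exactly the tricky part mentioned above
renderer.render(transparentScene, camera, accumRevealTarget); // accumulation / revealage
// finally, blend accumRevealTarget over the framebuffer in a compositing pass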
Solution B: We could render the scene twice and set the visible flag of all opaque or transparent materials to false depending on which pass we're currently doing.
This is a variant of Solution A, but with a single scene instead of two. It would spare us the manual transformation calculations, but we would have to alter the materials of all objects on every pass - not my favorite, but definitely worth thinking about.
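Solution B could look roughly like this; toggling material.visible (rather than object.visible) leaves child objects in the hierarchy unaffected, and accumRevealTarget is again a placeholder, using the r79-style render-to-target call.
function setPassVisibility(opaquePass) {
  scene.traverse(function (object) {
    if (object instanceof THREE.Mesh) {
      object.material.visible = opaquePass ? !object.material.transparent : object.material.transparent;
    }
  });
}

setPassVisibility(true);
renderer.render(scene, camera);                    // opaque pass
setPassVisibility(false);
renderer.render(scene, camera, accumRevealTarget); // transparent pass into the OIT target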
Solution C: Patch three.js to allow for more control over the rendering process.
The simplest approach here would be to tell the renderer which objects to render when render() is called, or to introduce something like renderOpaque() and renderTransparent().
Another way would be to somehow define the concept of a render pass and then render based on the information for that pass (e.g. which objects, which render target, how the pipeline is configured and so on).
Is anyone aware of other, readily accessible approaches? Am I missing something, or am I overcomplicating this?
I'm working on a 3D engine that should work on mobile platforms. Currently I just want to make a prototype that will work on iOS and use forward rendering. In the engine, a scene can have a variable number of lights of different types (directional, spot, etc.). When rendering, an array of the lights that affect each object (mesh) is constructed; the array will always have one or more elements. I can pack the light source information into a 1D texture and pass it to the shader. The number of lights can be put into this texture or passed as a separate uniform (I have not tried it yet, but these are my thoughts after googling).
The problem is that not all GLSL ES implementations support for-loops with variable limits, so I can't write a shader that loops through the light sources and expect it to work on a wide range of platforms. Are there any techniques to support a variable number of lights in a shader if for-loops with variable limits are not supported?
The idea I have:
Implement some preprocessing of the shader source to unroll the loops manually for different numbers of lights.
In that case, if I render all objects with one type of shader and the number of lights ranges from 1 to 3, I will end up with 3 different shaders (generated automatically) for 1, 2 and 3 lights.
Is this a good idea?
Since the source code for a shader consists of strings that you pass in at runtime, there's nothing stopping you from building the source code dynamically, depending on the number of lights, or any other parameters that control what kind of shader you need.
If you're using a setup where the shader code is in separate text files, and you want to keep it that way, you can take advantage of the fact that you can use preprocessor directives in shader code. Say you use LIGHT_COUNT for the number of lights in your shader code. Then when compiling the shader code, you prepend it with a definition for the count you need, for example:
#define LIGHT_COUNT 4
Since glShaderSource() takes an array of strings, you don't even need any string operations to connect this to the shader code you read from the file. You simply pass it in as an additional string to glShaderSource().
Shader compilation is fairly expensive, so you'll probably want to cache the shader program for each light count.
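In a WebGL/JavaScript context the same idea looks roughly like this; gl.shaderSource() takes a single string there, so you concatenate instead of passing an array. compileProgram(), vertexSource and fragmentTemplate are made-up placeholders, and fragmentTemplate is assumed to contain the light loop with the constant bound, e.g. for (int i = 0; i < LIGHT_COUNT; ++i) { ... }, which GLSL ES accepts because the limit is a compile-time constant.
var programCache = {};

function getProgramForLightCount(lightCount) {
  if (!programCache[lightCount]) {
    var defines = '#define LIGHT_COUNT ' + lightCount + '\n';
    // compile once per light count and reuse, since compilation is expensive
    programCache[lightCount] = compileProgram(vertexSource, defines + fragmentTemplate);
  }
  return programCache[lightCount];
}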
Another option is what Andon suggested in a comment. You can write the shader for the upper limit of the light count you need, and then pass in uniforms that serve as multipliers for each light source. For the lights you don't need, you set the multiplier to 0. That's not very efficient since you're doing extra calculations for light sources you don't need, but it's simple, and might be fine if it meets your performance requirements.
Are multiple render targets supported in three.js? I mean using a single fragment shader to write to different render targets.
I have a fragment shader that calculates several things, and I need to output them separately into different textures for further manipulation.
Multiple render targets are not supported in three.js.
If you are interested in GPGPU within the framework of three.js, you can find a nice example here: http://jabtunes.com/labs/3d/gpuflocking/webgl_gpgpu_flocking3.html. Just be aware that it uses an older version of three.js.
three.js r.60
Though the above answer was correct at the time it was written, that is no longer the case.
With the introduction of WebGL2, three.js does indeed support multiple render targets.
https://threejs.org/docs/#api/en/renderers/WebGLMultipleRenderTargets
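A trimmed-down sketch along the lines of the linked documentation and the official webgl2_multiple_rendertargets example; it needs a WebGL2 context and a GLSL3 material with one output per attachment (width and height are placeholders).
var renderTarget = new THREE.WebGLMultipleRenderTargets(width, height, 2); // two attachments

var material = new THREE.RawShaderMaterial({
  glslVersion: THREE.GLSL3,
  vertexShader: [
    'in vec3 position;',
    'in vec2 uv;',
    'uniform mat4 modelViewMatrix;',
    'uniform mat4 projectionMatrix;',
    'out vec2 vUv;',
    'void main() {',
    '  vUv = uv;',
    '  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
    '}'
  ].join('\n'),
  fragmentShader: [
    'precision highp float;',
    'in vec2 vUv;',
    'layout(location = 0) out vec4 outColor;', // written to the first texture
    'layout(location = 1) out vec4 outData;',  // written to the second texture
    'void main() {',
    '  outColor = vec4(vUv, 0.0, 1.0);',
    '  outData = vec4(1.0);',
    '}'
  ].join('\n')
});

renderer.setRenderTarget(renderTarget);
renderer.render(scene, camera);
renderer.setRenderTarget(null);
// renderTarget.texture[0] and renderTarget.texture[1] now hold the two outputs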
I'm using a ParticleSystem with point sprites (inspired by the Cocos2D source), but I wonder how to rebuild this functionality for OpenGL ES 2.0:
glEnable(GL_POINT_SPRITE_OES);
glEnableClientState(GL_POINT_SIZE_ARRAY_OES);
glPointSizePointerOES(GL_FLOAT,sizeof(PointSprite),(GLvoid*) (sizeof(GL_FLOAT)*2));
glDisableClientState(GL_POINT_SIZE_ARRAY_OES);
glDisable(GL_POINT_SPRITE_OES);
These calls generate BAD_ACCESS when using an OpenGL ES 2.0 context.
Should I simply go with two triangles per point sprite? That's probably not very efficient (overhead for the extra vertices).
EDIT:
So, my new problem with the suggested solution from:
https://gamedev.stackexchange.com/questions/11095/opengl-es-2-0-point-sprites-size/15528#15528
is how to pass many different sizes in a batched call. I thought of using an attribute instead of a uniform, but then I would always need to pass a point size to my shaders, even when I'm not drawing GL_POINTS. So maybe a second shader (one used only for GL_POINTS)?! I'm not aware of the overhead of switching shaders every frame in the draw routine (because when the particle system is used, I naturally also want to render regular GL_TRIANGLES without a point size)... Any ideas on this?
As I already commented, what you need is described here: https://gamedev.stackexchange.com/questions/11095/opengl-es-2-0-point-sprites-size/15528#15528
As for which approach to take, you can either use different shaders for the different types of drawables in your application, or add another boolean uniform to your shader and use it to enable or disable setting gl_PointSize in your shader code; it's up to you. What you need to keep in mind is that changing the shader program is one of the most costly operations, so drawing objects of the same type in a batch will be better in that case. I'm not really sure whether an if statement in your shader code has a huge performance impact.
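A minimal sketch of the boolean-uniform variant (GLSL ES 2.0, shown here as a source string; the attribute/uniform names are made up and the host-side code that sets u_isPointPass per draw call is omitted):
var vertexShaderSource = [
  'attribute vec4 a_position;',
  'attribute float a_pointSize;', // only bound when drawing GL_POINTS
  'uniform mat4 u_mvpMatrix;',
  'uniform bool u_isPointPass;',  // flip this per draw call instead of switching programs
  'void main() {',
  '  gl_Position = u_mvpMatrix * a_position;',
  '  if (u_isPointPass) {',
  '    gl_PointSize = a_pointSize;',
  '  }',
  '}'
].join('\n');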