How to implement multiple render targets in OpenGL ES 3.0

I want to implement MRT in OpenGL ES 3.0, so I have created a framebuffer with a texture of type GL_RGBA32UI as its GL_COLOR_ATTACHMENT0 attachment. I render a textured image into that framebuffer, then read the framebuffer's contents back as a texture and apply it as a texture in the default framebuffer (basically render-to-texture using an INTEGER texture).
I am trying to use the same fragment shader for both my custom FBO and the default one.
precision highp float;
uniform highp usampler3D stexture;
uniform highp uint range;
in vec4 out_TexCoord;
layout(location = 0) out uvec4 uex_colour;
layout(location = 1) out vec4 ex_colour;

void main(void)
{
    uex_colour = texture(stexture, out_TexCoord.xyz);
    // Note: an explicit conversion is required here; dividing a vec4
    // by a uint is a type error in GLSL ES.
    ex_colour = vec4(texture(stexture, out_TexCoord.xyz)) / float(range);
    ex_colour = vec4(ex_colour.xyz, 1.0);
}
I want to use uex_colour when rendering into my custom framebuffer and ex_colour when rendering into the default framebuffer.
I tried glDrawBuffers(1, bufs) with bufs = { GL_COLOR_ATTACHMENT0 } for my FBO, but I can't work out how to route ex_colour to the default framebuffer.

I think you would save yourself a lot of headache by using two different shaders. The allowed sets of values for glDrawBuffers are much more restrictive in ES 3.0 than in full OpenGL.
Specifically in your case, with the fragment shader you're trying to use, what you want in the back buffer is output 1. But for the default framebuffer you can have only one draw buffer, which has to be GL_BACK, so you can only use output 0 when drawing to the back buffer.
It actually looks like you might be trying to render to the texture and use the texture in the same rendering pass. If that's the case, it's a really bad idea. It sets up what the specs call a "rendering feedback loop". You can read up on the details, but it's generally not going to work. And you couldn't render to an FBO and the default framebuffer at the same time anyway.
You need to do one pass that renders to the FBO to generate your texture, then another pass to render to the default framebuffer, sampling the texture. You will use different shaders for these two passes.
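A minimal sketch of the two fragment shaders, adapted from the shader in the question (a 2D integer texture is assumed for the second pass, and the varying names are illustrative):

```glsl
// Pass 1: render into the FBO whose GL_COLOR_ATTACHMENT0 is GL_RGBA32UI.
// Only the unsigned-integer output is declared; no draw-buffer juggling.
#version 300 es
precision highp float;
uniform highp usampler3D stexture;
in vec4 out_TexCoord;
layout(location = 0) out uvec4 uex_colour;
void main(void)
{
    uex_colour = texture(stexture, out_TexCoord.xyz);
}
```

```glsl
// Pass 2: render to the default framebuffer, sampling the integer
// texture produced by pass 1 (integer textures must use GL_NEAREST).
#version 300 es
precision highp float;
uniform highp usampler2D fbo_texture;   // the pass-1 colour attachment
uniform highp uint range;
in vec2 v_texcoord;
layout(location = 0) out vec4 ex_colour;
void main(void)
{
    uvec4 raw = texture(fbo_texture, v_texcoord);
    ex_colour = vec4(vec3(raw.xyz) / float(range), 1.0);
}
```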

Related

Is there a way to access a vector by coordinates as you would a texture in GLSL?

I am implementing a feature extraction algorithm with OpenGL ES 3.0 (given an input texture with some 1's and mostly 0's, produce an output texture that has feature regions labeled). The problem I face in my fragment shader is how to perform a “lookup” on an intermediate vec or float rather than a sampler.
Conceptually every vec or float is just a texel-valued function, so there ought to be a way to get its value given texel coordinates, something like textureLikeLookup(texel_valued, v_color) - but I haven’t found anything that parallels the texture* functions.
The options I see are:
Render my vector to a framebuffer and pass that as a texture into another shader/rendering pass - undesirable because I have many such passes in a loop, and I want to avoid interleaving CPU calls;
Switch to ES 3.1 and take advantage of imageStore (https://www.khronos.org/registry/OpenGL-Refpages/es3.1/html/imageStore.xhtml) - it seems clear that if I can update an intermediate image within my shader then I can achieve this within the fragment shader (cf. https://www.reddit.com/r/opengl/comments/279fc7/glsl_frag_update_values_to_a_texturemap_within/), but I would rather stick to 3.0 if possible.
Is there a better/natural way to deal with this problem? In other words, do something like this
// works, because tex_sampler is a sampler2D
vec4 texel_valued = texture(tex_sampler, v_color);
when the data is not a sampler2D but a vec:
// doesn't work, because texel_valued is not a sampler but a vec4
vec4 oops = texture(texel_valued, v_color);
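Regarding the first option: the per-pass CPU work can be kept to a handful of bind calls by creating two texture-backed FBOs up front and ping-ponging between them. A hedged host-side sketch (error checking omitted; fbo, tex, u_input_tex, and num_passes are illustrative names for objects created elsewhere):

```c
/* Ping-pong between two texture-backed FBOs: each pass samples
 * tex[src] and renders into fbo[dst], then the roles swap.
 * Assumes tex[0]/tex[1] and fbo[0]/fbo[1] were created up front. */
int src = 0, dst = 1;
for (int pass = 0; pass < num_passes; ++pass) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex[src]);
    glUniform1i(u_input_tex, 0);           /* sampler uniform -> unit 0 */
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); /* full-screen quad          */
    src ^= 1; dst ^= 1;                    /* swap roles for next pass  */
}
```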

OpenGL ES: using the screen as an input texture to a shader

I'd like to do the opposite of what is normally done, i.e. take the default Framebuffer (the screen), use that as an input texture in my fragment shader.
I know I can do
glBindFramebuffer(GL_FRAMEBUFFER,0);
glReadPixels( blah,blah,blah, buf);
int texID = createTexture(buf);
glBindTexture( GL_TEXTURE_2D, texID);
runShaderProgram();
but that's copying data that's already in the GPU to the CPU (ReadPixels) and then back to the GPU (BindTexture), isn't it?
Couldn't we somehow 'directly' use the contents of the screen and feed it to the shaders?
It's not possible - the API simply doesn't expose this functionality for general purpose shader code.
If you want render-to-texture, is there any reason why you can't just do it the "normal" way and render to an off-screen FBO?
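That said, if the goal is just to avoid the GPU-to-CPU round trip, the copy can at least stay on the GPU: glCopyTexSubImage2D reads from the currently bound read framebuffer (including the default one) directly into a texture, with no glReadPixels. A hedged sketch, assuming texID is an already-allocated texture of at least screen size:

```c
/* Copy the default framebuffer into an existing texture, GPU-side. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);     /* read from the screen     */
glBindTexture(GL_TEXTURE_2D, texID);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,     /* target, mip level        */
                    0, 0,                 /* dest x, y in the texture */
                    0, 0, width, height); /* source rect on screen    */
runShaderProgram();                       /* now sample texID         */
```

This is still a copy, but it never leaves video memory, which is usually far cheaper than the glReadPixels round trip.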

Manual selection lod of mipmaps in a fragment shader using three.js

I'm writing a physically based shader using GLSL ES in three.js. For the addition of specular global illumination I use a cubemap DDS texture with a mipmap chain inside (precalculated with CubeMapGen, as explained here). I need to access this texture in the fragment shader, and I would like to select the mipmap index manually. The correct function for doing this is
vec4 textureCubeLod(samplerCube sampler, vec3 coord, float lod)
but it's available only in the vertex shader. In my fragment shader I'm using the similar function
vec4 textureCube(samplerCube sampler, vec3 coord, float bias)
but it doesn't work well, because the bias parameter is just added to the automatically calculated level of detail. So when I zoom in or out on the scene the mipmap LOD changes, but for my shader it must stay the same (it must depend only on the roughness parameter, as explained in the link above).
I would like to select the mipmap level in the fragment shader manually, depending only on the roughness of the material (for example using the formula mipMapIndex = roughness*numMipMap), so it must be constant with distance and not change automatically when zooming. How can I solve this?
It won't work in WebGL at the moment, because there is no support for this feature. You can experiment with the textureLOD extensions with recent builds of Chrome Canary, though it still needs some tweaking. Go to about:flags and look for this:
Enable WebGL Draft Extensions
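The draft extension in question is EXT_shader_texture_lod, which adds textureCubeLodEXT to the fragment shader. A sketch of how it would look once available (the extension must also be requested with getExtension('EXT_shader_texture_lod') on the JavaScript side; uniform and varying names are illustrative):

```glsl
#extension GL_EXT_shader_texture_lod : enable
precision highp float;
uniform samplerCube envMap;
uniform float roughness;
uniform float numMipMap;
varying vec3 v_reflect;
void main() {
    // LOD depends only on roughness, never on distance or zoom
    float lod = roughness * numMipMap;
    gl_FragColor = textureCubeLodEXT(envMap, v_reflect, lod);
}
```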

Is a varying a pixel?

I am writing a fragment shader for an OpenGL ES application, and I'm trying to clarify the difference between a pixel and a varying.
A varying type in OpenGL ES contains an optional, user-defined output from the vertex shader to the fragment shader (e.g. a surface normal if using per-pixel lighting). It is used to calculate the final fragment color (gl_FragColor) within the fragment shader. While a final color can be output from the vertex shader (e.g. if using per-vertex lighting) as a varying type, this is not the norm and depends on your desired shader behaviour.
A pixel is simply the smallest measured unit of an image or screen. The OpenGL ES pipeline produces fragments (raw data) which are then converted (or not) to pixels, depending on their visibility, depth, stencil, colour, etc.
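For example, a varying is declared in both stages (ES 2.0-style GLSL; attribute and varying names are illustrative): the vertex shader writes it once per vertex, the hardware interpolates it across the primitive, and the fragment shader reads the interpolated value once per fragment.

```glsl
// Vertex shader: writes the varying once per vertex.
attribute vec4 a_position;
attribute vec3 a_normal;
varying vec3 v_normal;          // declared identically in both stages
void main() {
    v_normal = a_normal;        // interpolated across the triangle
    gl_Position = a_position;
}
```

```glsl
// Fragment shader: reads the interpolated value once per fragment.
precision mediump float;
varying vec3 v_normal;
void main() {
    // per-pixel lighting would use this interpolated normal;
    // here it is just visualised as a colour
    vec3 n = normalize(v_normal);
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);
}
```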

Subrect drawing to texture FBO within OpenGL ES Fragment Shader

I'm trying to draw to a subrect of a texture-based FBO, but am having difficulty. The FBO has dimensions of, say, 500x500, and I am trying to have the fragment shader redraw only a 20x20 pixel subrect. Modifying the full texture works without difficulty.
At first I tried setting glViewport to the needed subrect, but it doesn't look to be that simple. I'm suspecting that the Vertex attributes affecting gl_Position and the varying texture coordinates are involved, but I can't figure out how.
Turns out I was trying to modify the texture coordinate attributes, but it was easier to just modify the viewport using glViewport and use gl_FragCoord within the shader.
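A hedged sketch of that viewport-based approach (x, y, fbo, and drawFullScreenQuad are illustrative; glScissor is added because glViewport alone does not clip fragments from wide lines or points that extend past the rectangle):

```c
/* Redraw only a 20x20 subrect of the 500x500 texture FBO. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(x, y, 20, 20);       /* map clip space onto the subrect   */
glEnable(GL_SCISSOR_TEST);
glScissor(x, y, 20, 20);        /* clip all writes to the subrect    */
drawFullScreenQuad();           /* gl_FragCoord is in FBO pixels     */
glDisable(GL_SCISSOR_TEST);
glViewport(0, 0, 500, 500);     /* restore for full-texture draws    */
```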
