OpenGL ES render to texture bound to shader - opengl-es

Can I render to the same texture, which I pass to my shader?
gl.glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, currentTex, 0);
gl.glActiveTexture(GL_TEXTURE0);
gl.glBindTexture(GL_TEXTURE_2D, currentTex);
gl.glUniform1i(texGlId, 0);
// ...
// drawCall

No, you're not supposed to do that. The OpenGL specs call this a "rendering feedback loop". There are cases where you can use the same texture, for example if you render to a mipmap level that is not used for texturing. But if you use a level that is included in your texturing as a render target, the behavior is undefined.
From page 80 of the ES 2.0 spec, "Rendering Feedback Loops":
A rendering feedback loop can occur when a texture is attached to an attachment point of the currently bound framebuffer object. In this case rendering results are undefined. The exact conditions are detailed in section 4.4.4.

We should avoid this: the rendering result is undefined and may differ between GPUs.
https://www.khronos.org/opengles/sdk/docs/man/xhtml/glFramebufferTexture2D.xml
Notes
Special precautions need to be taken to avoid attaching a texture image to the currently bound framebuffer while the texture object is currently bound and potentially sampled by the current vertex or fragment shader. Doing so could lead to the creation of a "feedback loop" between the writing of pixels by rendering operations and the simultaneous reading of those same pixels when used as texels in the currently bound texture. In this scenario, the framebuffer will be considered framebuffer complete, but the values of fragments rendered while in this state will be undefined. The values of texture samples may be undefined as well.
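If the effect you are after needs the previous contents of the texture while writing new ones, the usual workaround is to ping-pong between two textures: sample from one while rendering into the other, then swap. A minimal sketch, assuming texA/texB are already attached as GL_COLOR_ATTACHMENT0 of fboA/fboB and texUniform is the sampler uniform location (all of these names are illustrative):
/* Ping-pong so the texture being sampled is never attached to the
 * framebuffer being rendered to. */
GLuint srcTex = texA, dstTex = texB;
GLuint srcFbo = fboA, dstFbo = fboB;

for (int pass = 0; pass < numPasses; ++pass) {
    /* Render into dstTex... */
    glBindFramebuffer(GL_FRAMEBUFFER, dstFbo);

    /* ...while sampling srcTex, which is NOT attached to dstFbo. */
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, srcTex);
    glUniform1i(texUniform, 0);

    /* draw call here */

    /* Swap roles for the next pass. */
    GLuint t = srcTex; srcTex = dstTex; dstTex = t;
    GLuint f = srcFbo; srcFbo = dstFbo; dstFbo = f;
}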

Related

OpenGL ES: using the screen as an input texture to a shader

I'd like to do the opposite of what is normally done, i.e. take the default framebuffer (the screen) and use it as an input texture in my fragment shader.
I know I can do
glBindFramebuffer(GL_FRAMEBUFFER,0);
glReadPixels( blah,blah,blah, buf);
int texID = createTexture(buf);
glBindTexture( GL_TEXTURE_2D, texID);
runShaderProgram();
but that's copying data that's already in the GPU to the CPU (ReadPixels) and then back to the GPU (BindTexture), isn't it?
Couldn't we somehow 'directly' use the contents of the screen and feed it to the shaders?
It's not possible - the API simply doesn't expose this functionality for general purpose shader code.
If you want render-to-texture, is there any reason why you can't just do it the "normal" way and render to an off-screen FBO?
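If that works for you, here is a minimal sketch of that "normal" approach, with the FBO/texture/program setup and the drawScene / drawFullScreenQuad helpers assumed: render the scene into a texture-backed FBO, then bind the default framebuffer and feed that texture to the post-processing shader, with no glReadPixels round trip.
/* Pass 1: render the scene into a texture instead of the screen.
 * sceneFbo is assumed to have sceneTex attached as GL_COLOR_ATTACHMENT0. */
glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
glViewport(0, 0, width, height);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawScene();                               /* hypothetical helper */

/* Pass 2: back to the default framebuffer; use the result as an input texture. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(postProcessProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glUniform1i(glGetUniformLocation(postProcessProgram, "uScene"), 0);
drawFullScreenQuad();                      /* hypothetical helper */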

GLES2: glTexImage2D with GL_LUMINANCE gives me a black screen/texture

I'm trying to render a video from a Bayer buffer.
So I create a texture using GL_LUMINANCE/GL_UNSIGNED_BYTE. I apply some shaders onto this texture to generate a RGBA output.
The following call works fine on my PC but does NOT work on the target board (iMX6/GLES2):
glTexImage2D(GL_TEXTURE_2D, 0, textureFormat, m_texture_size.width(), m_texture_size.height(), 0, bufferFormat, GL_UNSIGNED_BYTE, imageData);
On the target board, I have a black texture.
bufferFormat is GL_LUMINANCE.
textureFormat is GL_LUMINANCE.
GLES2 implements a smaller subset of the OpenGL API:
https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexImage2D.xml
bufferFormat should be equal to textureFormat. If I try other format combinations, it still works on the PC; on the target board, I get a black screen and some errors reported by glGetError().
Other failing tests:
If I try GL_ALPHA, it seems the texture is filled with (0, 0, 0, 1).
If I try GL_RGBA/GL_RGBA (this makes no sense for the application, but it checks the HW/API capabilities), I get a non-black texture on the board. Obviously, the image is not what I expect.
Why does GL_LUMINANCE give me a black texture? How can I make this work?
Guesses:
the texture is not a power of two in dimensions and you have not set a compatible wrapping mode;
you have not set an appropriate mip mapping mode and the shader is therefore sampling a level other than the one you uploaded.
Does setting GL_CLAMP_TO_EDGE* and GL_LINEAR or GL_NEAREST rather than GL_LINEAR_MIPMAP_... resolve the problem?
Per section 3.8.2 of the ES 2 spec (warning: PDF):
Calling a sampler from a fragment shader will return (R, G, B, A) = (0,0,0,1) if any of the following conditions are true:
• A two-dimensional sampler is called, the minification filter is one that requires a mipmap (neither NEAREST nor LINEAR), and the sampler’s associated texture object is not complete, as defined in sections 3.7.1 and 3.7.10,
• A two-dimensional sampler is called, the minification filter is not one that requires a mipmap (either NEAREST or LINEAR), and either dimension of the level zero array of the associated texture object is not positive.
• A two-dimensional sampler is called, the corresponding texture image is a non-power-of-two image (as described in the Mipmapping discussion of section 3.7.7), and either the texture wrap mode is not CLAMP_TO_EDGE, or the minification filter is neither NEAREST nor LINEAR.
• A cube map sampler is called, any of the corresponding texture images are non-power-of-two images, and either the texture wrap mode is not CLAMP_TO_EDGE, or the minification filter is neither NEAREST nor LINEAR.
• A cube map sampler is called, and either the corresponding cube map texture image is not cube complete, or TEXTURE_MIN_FILTER is one that requires a mipmap and the texture is not mipmap cube complete.
... so my guesses are to check the first and third bullet points.
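If those are the culprits, forcing NPOT-safe sampler state right after creating the texture should make the black output go away. A quick sketch (texId, w, h and imageData stand in for the question's own variables):
/* NPOT-safe sampler state for ES 2.0: clamped wrapping and no mipmaps.
 * Without this, sampling an NPOT or mipmap-incomplete texture returns
 * (0, 0, 0, 1), i.e. a black texture. */
glBindTexture(GL_TEXTURE_2D, texId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   /* not ..._MIPMAP_... */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h, 0, GL_LUMINANCE,
             GL_UNSIGNED_BYTE, imageData);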

Small sample in opengl es 3, wrong gamma correction

I have a small sample, es-300-fbo-srgb, that is supposed to show how to manage gamma correction in OpenGL ES 3.
Essentially I have:
a GL_SRGB8_ALPHA8 texture TEXTURE_DIFFUSE
a framebuffer with another GL_SRGB8_ALPHA8 texture on GL_COLOR_ATTACHMENT0 and a GL_DEPTH_COMPONENT24 texture on GL_DEPTH_ATTACHMENT
the back buffer of my default fbo is GL_LINEAR
GL_FRAMEBUFFER_SRGB initially disabled.
I get a wrong image instead of the expected one.
Now, to recap the display method, this is what I do:
I render the TEXTURE_DIFFUSE texture into the sRGB FBO; since the source texture is in sRGB space, my fragment shader automatically reads a linear value and writes it to the FBO. The FBO should now contain linear values, even though its format is sRGB, because GL_FRAMEBUFFER_SRGB is disabled, so no linear-to-sRGB conversion is performed.
I blit the content of the FBO to the default FBO's back buffer (through a program). But since this FBO's texture has an sRGB format, a wrong gamma operation is applied to the values read from it, because they are assumed to be in sRGB space when they are not.
A second gamma operation is performed by my monitor when it displays the content of the default FBO.
So, if I am right, my image is wrong twice over.
Now, if I call glEnable(GL_FRAMEBUFFER_SRGB), I instead get an image that looks as if it has been sRGB-corrected too many times.
If I instead leave GL_FRAMEBUFFER_SRGB disabled and change the format of my FBO's GL_COLOR_ATTACHMENT0 texture, I finally get the right image.
Why do I not get the correct image with glEnable(GL_FRAMEBUFFER_SRGB);?
I think you are basically right: you get the net effect of two decoding conversions where one (the one in your monitor) would be enough. I suppose that either your driver or your code breaks something so OpenGL doesn't 'connect the dots' properly; perhaps this answer helps you:
When to call glEnable(GL_FRAMEBUFFER_SRGB)?
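For reference, a sketch of the configuration the question reports as giving the right image: keep the diffuse texture in GL_SRGB8_ALPHA8 (so sampling decodes it to linear), but give the intermediate FBO a linear GL_RGBA8 color attachment so the second pass does not apply another sRGB decode. The names fbo, fbWidth and fbHeight are illustrative.
/* Intermediate render target in a linear format: sampling it later returns
 * the stored values unchanged, avoiding the extra decode described above. */
GLuint colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, fbWidth, fbHeight);  /* not GL_SRGB8_ALPHA8 */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);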

Rendering to depth texture - unclarities about usage of GL_OES_depth_texture

I'm trying to replace OpenGL's gl_FragDepth feature which is missing in OpenGL ES 2.0.
I need a way to set the depth in the fragment shader, because setting it in the vertex shader is not accurate enough for my purpose. AFAIK the only way to do that is by having a render-to-texture framebuffer on which a first rendering pass is done. This depth texture stores the depth values for each pixel on the screen. Then, the depth texture is attached in the final rendering pass, so the final renderer knows the depth at each pixel.
Since iOS >= 4.1 supports GL_OES_depth_texture, I'm trying to use GL_DEPTH_COMPONENT24 or GL_DEPTH_COMPONENT16 for the depth texture. I'm using the following calls to create the texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, textureId, 0);
The framebuffer creation succeeds, but I don't know how to proceed. I'm lacking some fundamental understanding of depth textures attached to framebuffers.
What values should I output in the fragment shader? I mean, gl_FragColor is still an RGBA value, even though the texture is a depth texture. I cannot set the depth in the fragment shader, since gl_FragDepth is missing in OpenGL ES 2.0.
How can I read from the depth texture in the final rendering pass, where the depth texture is attached as a sampler2D?
Why do I get an incomplete framebuffer if I set the third argument of glTexImage2D to GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT16_OES or GL_DEPTH_COMPONENT24_OES?
Is it right to attach the texture to the GL_DEPTH_ATTACHMENT? If I'm changing that to GL_COLOR_ATTACHMENT0, I'm getting an incomplete framebuffer.
Depth textures do not affect the output of the fragment shader. The value that ends up in the depth texture when you're rendering to it will be the fixed-function depth value.
So without gl_FragDepth, you can't really "set the depth in the fragment shader". You can, however, do what you describe, i.e., render depth to a texture in one pass and then read that value back in a later pass.
You can read from a depth texture using the texture2D built-in function just like for regular color textures. The value you get back will be (d, d, d, 1.0).
According to the depth texture extension specification, GL_DEPTH_COMPONENT16_OES and GL_DEPTH_COMPONENT24_OES are not supported as internal formats for depth textures. I'd expect this to generate an error. The incomplete framebuffer status you get is probably related to this.
It is correct to attach the texture to the GL_DEPTH_ATTACHMENT.
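Putting that together, a rough sketch of the two passes (the programs, geometry and the depthFbo that has textureId on GL_DEPTH_ATTACHMENT are assumed to be set up as in the question; drawScene is a hypothetical helper):
/* Pass 1: render into the depth texture. The fragment shader's color output
 * is irrelevant here; the fixed-function depth value is what gets stored,
 * so color writes can be disabled. */
glBindFramebuffer(GL_FRAMEBUFFER, depthFbo);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glClear(GL_DEPTH_BUFFER_BIT);
drawScene();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

/* Pass 2: sample the depth texture like any other texture.
 * In the fragment shader: float d = texture2D(uDepthTex, uv).r;  // (d, d, d, 1.0) */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(finalProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId);
glUniform1i(glGetUniformLocation(finalProgram, "uDepthTex"), 0);
drawScene();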

How can I read the depth buffer in WebGL?

Using the WebGL API, how can I get a value from the depth buffer, or in any other way determine 3D coordinates from screen coordinates (i.e. to find a location clicked on), other than by performing my own raycasting?
Several years have passed; these days the WEBGL_depth_texture extension is widely available... unless you need to support IE.
General usage:
Preparation:
Query the extension (required)
Allocate a separate color and depth texture (gl.DEPTH_COMPONENT)
Combine both textures into a single framebuffer (gl.COLOR_ATTACHMENT0, gl.DEPTH_ATTACHMENT); a setup sketch follows the shader snippet below
Rendering:
Bind the framebuffer, render your scene (usually a simplified version)
Unbind the framebuffer, pass the depth texture to your shaders and read it like any other texture:
texPos.xyz = (gl_Position.xyz / gl_Position.w) * 0.5 + 0.5;
float depthFromZBuffer = texture2D(uTexDepthBuffer, texPos.xy).x;
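WebGL mirrors the OpenGL ES 2.0 API, so the preparation steps above can be sketched with the equivalent GLES calls (in WebGL the same calls take a gl. prefix and gl.getExtension("WEBGL_depth_texture") must succeed first; w and h are the render target size):
/* One color texture and one depth texture combined into a single FBO. */
GLuint colorTex, depthTex, fbo;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, w, h, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);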
I don't know if it's possible to directly access the depth buffer, but if you want depth information in a texture, you'll have to create an RGBA texture, attach it as a colour attachment to a framebuffer object, and render depth information into the texture using a fragment shader that writes the depth value into gl_FragColor.
For more information, see the answers to one of my older questions: WebGL - render depth to fbo texture does not work
If you google for opengl es and shadow mapping or depth, you'll find more explanations and example source code.
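A tiny sketch of the shader side of that approach (ES 2.0 GLSL, shown here as a C string constant): write the fragment's window-space depth into the color output so it lands in the RGBA texture. This keeps only about 8 bits of precision per channel; packing the depth across all four channels is a common refinement when more precision is needed.
/* Hypothetical fragment shader source for the depth-to-color pass. */
static const char *depthToColorFS =
    "precision mediump float;                            \n"
    "void main() {                                       \n"
    "    float d = gl_FragCoord.z; // depth in [0, 1]    \n"
    "    gl_FragColor = vec4(d, d, d, 1.0);              \n"
    "}                                                   \n";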
From section 5.13.12 of the WebGL specification it seems you cannot directly read the depth buffer, so maybe Markus' suggestion is the best way to do it, although you might not necessarily need an FBO for this.
But if you want to do something like picking, there are other methods for it. Just browse SO, as it has been asked very often.
Not really a duplicate but see also: How to get object in WebGL 3d space from a mouse click coordinate
Aside from unprojecting and casting a ray (and then performing intersection tests against it as needed), your best bet is to look at 'picking'. This won't give exact 3D coordinates, but it is a useful substitute for unprojection when you only care about which object was clicked on and don't really need per-pixel precision.
Picking in WebGL means to render the entire scene (or at least, the objects you care about) using a specific shader. The shader renders each object with a different unique ID, which is encoded in the red and green channels, using the blue channel as a key (non-blue means no object of interest). The scene is rendered into an offscreen framebuffer so that it's not visible to the end user. Then you read back, using gl.readPixels(), the pixel or pixels of interest and see which object ID was encoded at the given position.
If it helps, see my own implementation of WebGL picking. This implementation picks a rectangular region of pixels; passing in a 1x1 region results in picking at a single pixel. See also the functions at lines 146, 162, and 175.
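The readback step itself is small; here is a sketch with GLES-style calls (in WebGL these are the same gl.* methods), assuming pickingFbo was just rendered with the ID shader and clickX/clickY are already in GL's bottom-left pixel coordinates:
/* Read one pixel from the offscreen picking framebuffer and decode the
 * object ID that the picking shader wrote into the red/green channels. */
GLubyte pixel[4];
glBindFramebuffer(GL_FRAMEBUFFER, pickingFbo);
glReadPixels(clickX, clickY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

if (pixel[2] != 0) {                            /* blue channel is the key: non-blue means no hit */
    int objectId = pixel[0] * 256 + pixel[1];   /* ID encoded in red and green */
    /* look up objectId ... */
}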
As of January 23, 2012, there is a draft WebGL extension to enable depth buffer reading, WEBGL_depth_texture. I have no information about its availability in implementations, but I do not expect it at this early date.

Resources