Rendering to a depth texture - unclear points about usage of GL_OES_depth_texture - opengl-es

I'm trying to replace OpenGL's gl_FragDepth feature which is missing in OpenGL ES 2.0.
I need a way to set the depth in the fragment shader, because setting it in the vertex shader is not accurate enough for my purpose. AFAIK the only way to do that is by having a render-to-texture framebuffer on which a first rendering pass is done. This depth texture stores the depth values for each pixel on the screen. Then, the depth texture is attached in the final rendering pass, so the final renderer knows the depth at each pixel.
Since iOS >= 4.1 supports GL_OES_depth_texture, I'm trying to use GL_DEPTH_COMPONENT24 or GL_DEPTH_COMPONENT16 for the depth texture. I'm using the following calls to create the texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, textureId, 0);
The framebuffer creation succeeds, but I don't know how to proceed. I'm lacking some fundamental understanding of depth textures attached to framebuffers.
What values should I output in the fragment shader? gl_FragColor is still an RGBA value, even though the texture is a depth texture. I cannot set the depth in the fragment shader, since gl_FragDepth is missing in OpenGL ES 2.0.
How can I read from the depth texture in the final rendering pass, where the depth texture is attached as a sampler2D?
Why do I get an incomplete framebuffer if I set the third argument of glTexImage2D to GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT16_OES or GL_DEPTH_COMPONENT24_OES?
Is it right to attach the texture to GL_DEPTH_ATTACHMENT? If I change that to GL_COLOR_ATTACHMENT0, I get an incomplete framebuffer.

Depth textures do not affect the output of the fragment shader. The value that ends up in the depth texture when you're rendering to it will be the fixed-function depth value.
So without gl_FragDepth, you can't really "set the depth in the fragment shader". You can, however, do what you describe: render depth to a texture in one pass and then read that value back in a later pass.
You can read from a depth texture using the texture2D built-in function just like for regular color textures. The value you get back will be (d, d, d, 1.0).
According to the depth texture extension specification, GL_DEPTH_COMPONENT16_OES and GL_DEPTH_COMPONENT24_OES are not supported as internal formats for depth textures. I'd expect this to generate an error. The incomplete framebuffer status you get is probably related to this.
It is correct to attach the texture to the GL_DEPTH_ATTACHMENT.
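Putting those pieces together, here is a minimal sketch of the two-pass approach. It assumes GL_OES_depth_texture is available; depthTex, depthFBO, u_depthTex and v_texCoord are placeholder names, and width/height are your render size:

// Pass 1: create a depth texture and attach it to an FBO.
GLuint depthTex, depthFBO;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// The internal format must be the unsized GL_DEPTH_COMPONENT in ES 2.0.
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glGenFramebuffers(1, &depthFBO);
glBindFramebuffer(GL_FRAMEBUFFER, depthFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
// Note: some implementations also want a color attachment for completeness.
// ... render the scene; the fixed-function depth value fills depthTex ...

And the second pass can then sample it like any other texture:

// Pass 2 fragment shader: sample the depth texture like a color texture.
uniform sampler2D u_depthTex;
varying vec2 v_texCoord;
void main() {
    float d = texture2D(u_depthTex, v_texCoord).r;  // sample is (d, d, d, 1.0)
    gl_FragColor = vec4(vec3(d), 1.0);              // e.g. visualize the depth
}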

Related

OpenGL ES: using the screen as an input texture to a shader

I'd like to do the opposite of what is normally done, i.e. take the default framebuffer (the screen) and use it as an input texture in my fragment shader.
I know I can do
glBindFramebuffer(GL_FRAMEBUFFER,0);
glReadPixels( blah,blah,blah, buf);
int texID = createTexture(buf);
glBindTexture( GL_TEXTURE_2D, texID);
runShaderProgram();
but that's copying data that's already in the GPU to the CPU (ReadPixels) and then back to the GPU (BindTexture), isn't it?
Couldn't we somehow 'directly' use the contents of the screen and feed it to the shaders?
It's not possible - the API simply doesn't expose this functionality for general purpose shader code.
If you want render-to-texture, is there any reason you can't just do it the "normal" way and render to an off-screen FBO?
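For comparison, the "normal" way is a short sketch like the following; colorTex, fbo and runShaderProgram() are placeholders (the last one borrowed from the question), and width/height are your render size:

// Render into an off-screen color texture instead of the screen.
GLuint colorTex, fbo;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
// ... draw the scene here; the pixels stay on the GPU ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);      // back to the default framebuffer
glBindTexture(GL_TEXTURE_2D, colorTex);    // feed the result to the next shader
runShaderProgram();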

OpenGL ES render to texture bound to shader

Can I render to the same texture that I pass to my shader?
gl.glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, currentTex, 0);
gl.glActiveTexture(GL_TEXTURE0);
gl.glBindTexture(GL_TEXTURE_2D, currentTex);
gl.glUniform1i(texGlId, 0);
// ...
// drawCall
No, you're not supposed to do that. The OpenGL specs call this a "rendering feedback loop". There are cases where you can use the same texture, for example if you render to a mipmap level that is not used for texturing. But if you render to a level that is also used for texturing, the behavior is undefined.
From page 80 of the ES 2.0 spec, "Rendering Feedback Loops":
A rendering feedback loop can occur when a texture is attached to an attachment point of the currently bound framebuffer object. In this case rendering results are undefined. The exact conditions are detailed in section 4.4.4.
Avoid it: the rendering result is undefined and may differ between GPUs.
https://www.khronos.org/opengles/sdk/docs/man/xhtml/glFramebufferTexture2D.xml
Notes
Special precautions need to be taken to avoid attaching a texture image to the currently bound framebuffer while the texture object is currently bound and potentially sampled by the current vertex or fragment shader. Doing so could lead to the creation of a "feedback loop" between the writing of pixels by rendering operations and the simultaneous reading of those same pixels when used as texels in the currently bound texture. In this scenario, the framebuffer will be considered framebuffer complete, but the values of fragments rendered while in this state will be undefined. The values of texture samples may be undefined as well.
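The standard way around the feedback loop is to "ping-pong" between two textures, each attached to its own FBO, so you never sample the texture you're currently writing to. A rough sketch, where tex[2], fbo[2] and numPasses are placeholders and the shader program with its uniforms is assumed to be bound already:

GLuint tex[2], fbo[2];   // tex[i] is the color attachment of fbo[i]
int src = 0, dst = 1;
for (int pass = 0; pass < numPasses; ++pass) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);  // write target
    glBindTexture(GL_TEXTURE_2D, tex[src]);       // read source, never the write target
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);        // full-screen quad
    int tmp = src; src = dst; dst = tmp;          // swap roles for the next pass
}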

OpenGL ES 2.0 is it possible to draw to depth and "color" buffer simultaneously (without MRT)?

Simple OpenGL ES 2.0 question. If I need both a depth and a color buffer, do I have to render the geometry twice? Or can I just bind/attach a depth buffer while rendering the color frame?
Or do I need MRT, or to render twice, for this?
It's the normal mode of operation for OpenGL to update both the color and depth buffer during rendering, as long as both of them exist and are enabled.
If you're rendering to an FBO, and want to use a depth buffer, you need to attach either a color texture or a color renderbuffer to GL_COLOR_ATTACHMENT0, by calling glFramebufferTexture2D() or glFramebufferRenderbuffer() respectively. Then allocate a depth renderbuffer with glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, ...), and attach it to GL_DEPTH_ATTACHMENT by calling glFramebufferRenderbuffer().
After that, you can render once, and both your color and depth buffers will have been updated.
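A minimal sketch of that setup, with placeholder names fbo, colorTex and depthRb:

GLuint fbo, colorTex, depthRb;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// Color: a texture on GL_COLOR_ATTACHMENT0.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
// Depth: a renderbuffer on GL_DEPTH_ATTACHMENT.
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);
glEnable(GL_DEPTH_TEST);   // depth writes require depth testing to be enabled
// ... render once; both the color and depth buffers are updated ...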

Overlapping Shader Effects in OpenGL ES 2.0

Is it possible to overlap shader effects in OpenGL ES 2.0? (not using FBOs)
How can I use the result of one shader as input to another shader without doing a glReadPixels and re-uploading the processed pixels?
The next pseudo-code is what I'm trying to achieve:
// Push RGBA pixels into the GPU
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, Pixels_To_Render);
// Apply first shader effect
glUseProgram( FIRST_SHADER_HANDLE);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Apply the second shader effect sampling from the result of the first shader effect
glUseProgram( SECOND_SHADER_HANDLE );
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Get the overall result
glReadPixels(......)
I presume you're talking about pixel processing with fragment shaders?
With the OpenGL ES 2.0 core API, you can't get pixels from the destination framebuffer into the fragment shader without reading them back from the GPU.
But if you're on a device/platform that supports a shader framebuffer fetch extension (EXT_shader_framebuffer_fetch on at least iOS, NV_shader_framebuffer_fetch in some other places), you're in luck. With that extension, a fragment shader can read the fragment data from the destination framebuffer for the fragment it's rendering to (and only that fragment). This is great for programmable blending or pixel post-processing effects because you don't have to incur the performance penalty of a glReadPixels operation.
Declare that you're using the extension with #extension GL_EXT_shader_framebuffer_fetch : require, then read fragment data from the gl_LastFragData[0] built-in. (The subscript is for the rendering target index, but you don't have multiple render targets unless you're using OpenGL ES 3.0, so it's always zero.) Process it however you like and write to gl_FragColor or gl_FragData as usual.
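A minimal fragment shader sketch, assuming the EXT variant of the extension is supported on your device:

#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;
void main() {
    vec4 dst = gl_LastFragData[0];     // what the framebuffer already holds here
    gl_FragColor = vec4(1.0) - dst;    // e.g. invert the first pass's result
}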

GLSL and glReadPixels

I am using OpenGL ES 2.0 and GLSL to draw objects.
Then I want to read pixels back from whatever my fragment shader draws on the screen.
I am blending two grayscale textures and at the end of my fragment shader, I have:
gl_FragColor = vec4(something...blending textures);
I know that gl_FragColor means the final color will be written to the framebuffer.
(According to http://nehe.gamedev.net/article/glsl_an_introduction/25007/)
Given that, I did something like
GLubyte *pixels = new GLubyte[320*240];
glReadPixels(0, 0, 320,240, GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels);
After drawing an image using that, at first I get the same image as the original texture1, and then I get an empty black image. Can someone tell me what's wrong? Do I need to do something with an FBO? I'm kind of lost... :(
Per the canonical ES 2.0 documentation for glReadPixels, the format parameter:
Specifies the format of the pixel data. The following symbolic values are accepted: GL_ALPHA, GL_RGB, and GL_RGBA.
So GL_LUMINANCE is not a supported read format in GL ES.
Beyond that, assuming you have a working frame buffer of some sort and you're applying your fragment shader by rendering a suitably placed piece of geometry to the frame buffer, glReadPixels should be the correct thing to use.
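So a corrected read-back might look like this (note that RGBA needs four bytes per pixel, so the buffer must be four times larger than in the question):

GLubyte *pixels = new GLubyte[320 * 240 * 4];   // 4 bytes per RGBA pixel
glReadPixels(0, 0, 320, 240, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// For a grayscale image, take any one channel on the CPU, e.g. red:
// GLubyte gray = pixels[4 * i];
delete[] pixels;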