OpenGL ES 2.0: is it possible to draw to the depth and "color" buffer simultaneously (without MRT)?

Simple OpenGL ES 2.0 question. If I need both a depth and a color buffer, do I have to render the geometry twice? Or can I just bind/attach a depth buffer while rendering the color frame?
Or do I need MRT, or to render twice, for this?

It's the normal mode of operation for OpenGL to update both the color and depth buffer during rendering, as long as both of them exist and are enabled.
If you're rendering to an FBO, and want to use a depth buffer, you need to attach either a color texture or a color renderbuffer to GL_COLOR_ATTACHMENT0, by calling glFramebufferTexture2D() or glFramebufferRenderbuffer() respectively. Then allocate a depth renderbuffer with glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, ...), and attach it to GL_DEPTH_ATTACHMENT by calling glFramebufferRenderbuffer().
After that, you can render once, and both your color and depth buffers will have been updated.
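A minimal sketch of that setup might look like the following (a hedged example, not the asker's code; `width` and `height` are assumed names, and error checking is omitted):

```c
GLuint fbo, colorTex, depthRb;

/* Color attachment: a texture we can sample later. */
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Depth attachment: a renderbuffer (depth testing only, never sampled). */
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle incomplete framebuffer */
}
/* A single draw call now updates both the color texture and the depth buffer. */
```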

Related

OpenGLES 3.0 Cannot render to a texture larger than the screen size

I have made an image below to indicate my problem. I render my scene to an offscreen framebuffer with a texture the size of the screen. I then render said texture to a screen-filling quad, which produces case 1 in the image. I then run the exact same program, but with a texture size, let's say, 1.5 times greater (enough to contain the entire smiley), and afterwards render it once more to the screen-filling quad. I then get result 3, but I expected result 2.
I remember to change the viewport to match the new texture size before rendering to the texture, and to reset the viewport before drawing the quad. I do NOT understand what I am doing wrong.
problem shown as an image!
To summarize, this is the general flow (too much code to post it all):
Create MSAAframebuffer and ResolveFramebuffer (Resolve contains the texture).
Set glViewport(0, 0, Width*1.5, Height*1.5)
Bind MSAAframebuffer and render my scene (the smiley).
Blit the MSAAframebuffer into the ResolveFramebuffer
Set glViewport(0, 0, Width, Height), bind the texture and render my quad.
Note that all the MSAA is working perfectly fine. Also both buffers have the same dimensions, so when I blit it is simply: glBlitFramebuffer(0, 0, Width*1.5, Height*1.5, 0, 0, Width*1.5, Height*1.5, ClearBufferMask.ColorBufferBit, BlitFramebufferFilter.Nearest)
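In code, the intended flow would be roughly the following (a sketch with assumed names: `msaaFbo`, `resolveFbo`, `resolveTex`, and the helper functions are placeholders, not the asker's actual code):

```c
/* 1. Render the scene into the larger MSAA framebuffer. */
glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
glViewport(0, 0, (GLsizei)(width * 1.5f), (GLsizei)(height * 1.5f));
drawScene();  /* assumed scene-drawing helper */

/* 2. Resolve the MSAA buffer into the texture-backed framebuffer. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, (GLint)(width * 1.5f), (GLint)(height * 1.5f),
                  0, 0, (GLint)(width * 1.5f), (GLint)(height * 1.5f),
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

/* 3. Draw the resolved texture as a fullscreen quad to the default framebuffer. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, width, height);
drawFullscreenQuad(resolveTex);  /* assumed quad-drawing helper */
```

As the accepted answer notes, the bug was not in this GL flow at all, but in scene code that culled against an AABB computed from the smaller viewport size.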
Hope someone has a good idea. I might get fired if not :D
I found that I actually used an AABB somewhere else in the code to determine what to render; and this AABB was computed from the "small viewport's size". Stupid mistake.

OpenGL ES: using the screen as an input texture to a shader

I'd like to do the opposite of what is normally done, i.e. take the default framebuffer (the screen) and use it as an input texture in my fragment shader.
I know I can do
glBindFramebuffer(GL_FRAMEBUFFER,0);
glReadPixels( blah,blah,blah, buf);
int texID = createTexture(buf);
glBindTexture( GL_TEXTURE_2D, texID);
runShaderProgram();
but that's copying data that's already on the GPU to the CPU (glReadPixels) and then back to the GPU (glBindTexture), isn't it?
Couldn't we somehow 'directly' use the contents of the screen and feed it to the shaders?
It's not possible - the API simply doesn't expose this functionality for general purpose shader code.
If you want render to texture then is there any reason why you can't just do it the "normal" way and render to an off-screen FBO?
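The "normal" way the answer suggests would look roughly like this (a sketch; `sceneFbo`, `sceneTex`, and the helpers are assumed names), and it keeps the data on the GPU with no glReadPixels round trip:

```c
/* Pass 1: render the scene into an offscreen color texture instead of the screen.
 * sceneFbo is assumed to have sceneTex attached at GL_COLOR_ATTACHMENT0. */
glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
glViewport(0, 0, width, height);
drawScene();  /* assumed helper */

/* Pass 2: the rendered scene is now an ordinary texture the shader can sample. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, sceneTex);
runShaderProgram();  /* fragment shader samples sceneTex via a sampler2D uniform */
```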

Second depth buffer in OpenGL ES

I would like to implement Goldfeather's algorithm for CSG (Constructive Solid Geometry modelling) in OpenGL ES.
I need a second depth buffer and a transfer (merge) operation between the buffers. In "desktop" OpenGL I use glCopyPixels:
Transfer from 1st buffer to 2nd buffer
glViewport(0,0, _viewport.w, _viewport.h);
glRasterPos2f(_viewport.w>>1,0.0F);
glDisable(GL_STENCIL_TEST);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_ALWAYS);
glCopyPixels(0,0,_viewport.w>>1,_viewport.h,GL_DEPTH);
Transfer from 2nd buffer to 1st buffer
glViewport(0,0, _viewport.w, _viewport.h);
glRasterPos2f(0.0f,0.0f);
glCopyPixels(_viewport.w>>1,0,_viewport.w>>1,_viewport.h,GL_DEPTH);
What is the substitution for glCopyPixels in OpenGL ES?
I don't think you can have a second depth buffer in OpenGL ES -- the headers define only a single GL_DEPTH_ATTACHMENT; there is no GL_DEPTH_ATTACHMENT1 the way desktop GL has GL_COLOR_ATTACHMENT0 and GL_COLOR_ATTACHMENT1. I'm not familiar with Goldfeather's algorithm, but I think you might be able to fake having two depth buffers by binding textures to the depth and color attachment points of a framebuffer, drawing what you would want in the second depth buffer into the color texture instead, and then finally passing those textures into a shader which draws the result you want to the screen, emulating the glCopyPixels call.
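A sketch of that merge-shader idea in GLSL ES 1.00, assuming both emulated "depth buffers" were previously rendered into ordinary color textures (the uniform names and the merge rule here are illustrative assumptions; the actual rule depends on the CSG step being emulated):

```glsl
precision highp float;

uniform sampler2D uDepthA;   // first emulated depth buffer (color texture)
uniform sampler2D uDepthB;   // second emulated depth buffer (color texture)
varying vec2 vTexCoord;

void main() {
    float a = texture2D(uDepthA, vTexCoord).r;
    float b = texture2D(uDepthB, vTexCoord).r;
    // Example merge: keep the nearer of the two depths. An unconditional
    // transfer (like the glDepthFunc(GL_ALWAYS) copy above) would instead
    // just output one input.
    gl_FragColor = vec4(vec3(min(a, b)), 1.0);
}
```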

Rendering to depth texture - unclarities about usage of GL_OES_depth_texture

I'm trying to replace OpenGL's gl_FragDepth feature which is missing in OpenGL ES 2.0.
I need a way to set the depth in the fragment shader, because setting it in the vertex shader is not accurate enough for my purpose. AFAIK the only way to do that is by having a render-to-texture framebuffer on which a first rendering pass is done. This depth texture stores the depth values for each pixel on the screen. Then, the depth texture is attached in the final rendering pass, so the final renderer knows the depth at each pixel.
Since iOS >= 4.1 supports GL_OES_depth_texture, I'm trying to use GL_DEPTH_COMPONENT24 or GL_DEPTH_COMPONENT16 for the depth texture. I'm using the following calls to create the texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, textureId, 0);
The framebuffer creation succeeds, but I don't know how to proceed. I'm lacking some fundamental understanding of depth textures attached to framebuffers.
What values should I output in the fragment shader? I mean, gl_FragColor is still an RGBA value, even though the texture is a depth texture. I cannot set the depth in the fragment shader, since gl_FragDepth is missing in OpenGL ES 2.0.
How can I read from the depth texture in the final rendering pass, where the depth texture is attached as a sampler2D?
Why do I get an incomplete framebuffer if I set the third argument of glTexImage2D to GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT16_OES or GL_DEPTH_COMPONENT24_OES?
Is it right to attach the texture to the GL_DEPTH_ATTACHMENT? If I'm changing that to GL_COLOR_ATTACHMENT0, I'm getting an incomplete framebuffer.
Depth textures do not affect the output of the fragment shader. The value that ends up in the depth texture when you're rendering to it will be the fixed-function depth value.
So without gl_FragDepth, you can't really "set the depth in the fragment shader". You can, however, do what you describe: render depth to a texture in one pass and then access that value in a later pass.
You can read from a depth texture using the texture2D built-in function just like for regular color textures. The value you get back will be (d, d, d, 1.0).
According to the depth texture extension specification, GL_DEPTH_COMPONENT16_OES and GL_DEPTH_COMPONENT24_OES are not supported as internal formats for depth textures. I'd expect this to generate an error. The incomplete framebuffer status you get is probably related to this.
It is correct to attach the texture to the GL_DEPTH_ATTACHMENT.
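Putting the answer together, a creation sequence consistent with OES_depth_texture might look like this (a hedged sketch; `width`, `height`, and the already-bound framebuffer are assumed):

```c
/* Per OES_depth_texture, internalformat and format must both be
 * GL_DEPTH_COMPONENT; the sized *_OES formats are not accepted here. */
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

/* Attach to the depth attachment point of the (already bound) framebuffer. */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);

/* In a later pass, sample it like any texture; texture2D() returns (d, d, d, 1.0). */
```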

OpenGL ES : Pre-Rendering to FBO texture

Is it possible to render to FBO texture once and then use the resulting texture handle to render all following frames?
For example, in case I'm rendering a hard shadow map and the scene geometry and light position are static, the depth map is always the same and I want to render it only once using a FBO and then just use it after that. However, if I simply put a flag to render the depth texture once, the texture remains empty for the rest of the frames.
Does the FBO get reallocated after rendering a frame has completed? What would be the right way to preserve the rendered texture for the following frames?
Rendering to a texture is no different than if you had uploaded those pixels to the texture in the first place. The contents of a texture do not magically disappear. A texture's contents are changed when you change them. This could be by uploading data to the texture, or by setting one of the texture's images to be used for framebuffer operations (clearing, rendering to it, etc).
Unless you do something to explicitly change the data stored in a texture, it won't change.
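So a once-only shadow-map pass can be as simple as the following sketch (`shadowFbo`, `shadowTex`, `shadowSize`, and the helpers are assumed names):

```c
static int shadowMapDone = 0;

if (!shadowMapDone) {
    /* Render the static depth map exactly once. */
    glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
    glViewport(0, 0, shadowSize, shadowSize);
    glClear(GL_DEPTH_BUFFER_BIT);
    drawSceneDepthOnly();   /* assumed helper */
    shadowMapDone = 1;      /* never rendered again */
}

/* Every frame: just sample shadowTex; its contents persist until changed. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, shadowTex);
drawSceneWithShadows();     /* assumed helper */
```

If the texture still comes out empty with a pattern like this, check that nothing later in the frame renders into or clears the shadow FBO, and that the texture isn't being reallocated (e.g. glTexImage2D called again) every frame.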
