Second depth buffer in OpenGL ES

I would like to implement Goldfeather's algorithm for CSG (Constructive Solid Geometry) modelling in OpenGL ES.
I need a second depth buffer and a transfer (merge) operation between the buffers. In "desktop" OpenGL I use glCopyPixels:
// Transfer from 1st buffer to 2nd buffer
glViewport(0,0, _viewport.w, _viewport.h);
glRasterPos2f(_viewport.w>>1,0.0F);
glDisable(GL_STENCIL_TEST);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_ALWAYS);
glCopyPixels(0,0,_viewport.w>>1,_viewport.h,GL_DEPTH);
// Transfer from 2nd buffer to 1st buffer
glViewport(0,0, _viewport.w, _viewport.h);
glRasterPos2f(0.0f,0.0f);
glCopyPixels(_viewport.w>>1,0,_viewport.w>>1,_viewport.h,GL_DEPTH);
What is the substitution for glCopyPixels in OpenGL ES?

I don't think you can have a second depth buffer in OpenGL ES -- the header defines only a single GL_DEPTH_ATTACHMENT, unlike the multiple color attachments (GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, and so on). I'm not familiar with Goldfeather's algorithm, but you might be able to fake a second depth buffer: bind textures to the depth and color attachment points of a framebuffer, render the depth values you want in the "other" depth buffer into the color texture, and then pass both textures into a shader that composites the result to the screen, emulating the glCopyPixels call.

Related

sample depth from default frame buffer? (GL ES 3)

I'm wondering if it is possible to bind the default frame buffer's depth as a texture so that it can be sampled from in a fragment shader (during rendering passes which do not write to that depth buffer)? If so, some pointers would be appreciated.
You cannot bind any of the native window surface attachments as textures; it's simply not possible in the API.

Copying a non-multisampled FBO to a multisampled one

I have been trying to implement a render-to-texture approach in our application, which uses GLES 3. I have it working, but I am a little disappointed with the frame-rate drop.
So far we have been rendering directly to the main FBO, which has been a multisampled FBO created using EGL_SAMPLES=8.
What I want basically is to be able to get a hold of the pixels that have been already drawn, while I'm still drawing. So I thought a render to texture approach should do it. Then I'd just read a section of the off-screen FBO's texture whenever I want and when I'm done rendering to it I'll blit the whole thing to the main FBO.
Digging into this, I found I had to implement a system with a multisampled FBO as well as a non-multisampled textured FBO into which I resolve the multisampled one, and then blit the resolved texture to the main FBO.
This all works but the problem is that by using the above system and a non-multisampled main FBO (EGL_SAMPLES=0) I get quite a big frame rate drop compared to the frame rate I get when I use just the main FBO with EGL_SAMPLES=8.
Digging a bit more, I found reports online, as well as a post here (https://community.arm.com/thread/6925), saying that the fastest approach to multisampling is to use EGL_SAMPLES. And indeed that's what it looks like on the Jetson TK1, which is our target board.
Which finally leads me to the question, and apologies for the long introduction:
Is there any way that I can design this to use a non-multisampled off-screen fbo for all the rendering that eventually is blitted to a main multisampled FBO that uses EGL_SAMPLES?
The only point of MSAA is to anti-alias geometry edges. It only provides benefit if multiple triangle edges appear in the same pixel. For rendering pipelines which are doing multiple off-screen passes you want to enable multiple samples for the off-screen passes which contain your geometry (normally one of the early passes in the pipeline, before any post-processing effects).
Applying MSAA at the end of the pipeline on the final blit will provide zero benefit, and probably isn't free (it will be close to free on tile-based renderers like IMG Series 6 and Mali (the blog you linked), less free on immediate-mode renderers like the NVIDIA GPU in your Jetson board).
Note that for off-screen anti-aliasing the "standard" approach is to render to an MSAA framebuffer, and then resolve as a second pass (e.g. using glBlitFramebuffer to blit into a single-sampled buffer). This bounce is inefficient on many architectures, so this extension exists to help:
https://www.khronos.org/registry/gles/extensions/EXT/EXT_multisampled_render_to_texture.txt
Effectively this provides the same implicit resolve as the EGL window surface functionality.
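As a rough sketch of how the extension is used (assuming a live GL ES context in which the extension is present; width, height, and the sample count of 4 are placeholders):

```
/* Sketch, not a complete program: render-to-texture with implicit MSAA
 * resolve via EXT_multisampled_render_to_texture. */
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
/* The multisample buffer is allocated -- and resolved -- implicitly
 * by the driver; the application only ever sees the single-sampled
 * texture. */
glFramebufferTexture2DMultisampleEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                     GL_TEXTURE_2D, tex, 0, 4);
```

Rendering to fbo then behaves like rendering to a 4x MSAA surface, but sampling tex afterwards reads resolved data, with no explicit glBlitFramebuffer pass.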
Answers to your questions in the comments.
Is the resulting texture a multisampled texture in that case?
From the application point of view, no. The multisampled data is inside an implicitly allocated buffer, allocated by the driver. See this bit of the spec:
"The implementation allocates an implicit multisample buffer with TEXTURE_SAMPLES_EXT samples and the same internalformat, width, and height as the specified texture level."
This may require a real MSAA buffer allocation in main memory on some GPU architectures (and so be no faster than the manual glBlitFramebuffer approach without the extension), but is known to be effectively free on others (i.e. tile-based GPUs where the implicit "buffer" is a small RAM inside the GPU, and not in main memory at all).
The goal is to blur the background behind widgets
MSAA is not in any way a general-purpose blur - it only anti-aliases the pixels which are coincident with the edges of triangles. If you want to blur triangle faces, you'd be better off using a separable Gaussian blur implemented as a pair of fragment shaders, run as a 2D post-processing pass.
Is there any way that I can design this to use a non-multisampled off-screen fbo for all the rendering that eventually is blitted to a main multisampled FBO that uses EGL_SAMPLES?
Not in any way which is genuinely useful.
Framebuffer blitting does allow blits from single-sampled buffers to multisample buffers. But all that does is give every sample within a pixel the same value as the one from the source.
Blitting cannot generate new information. So you won't get any actual antialiasing. All you will get is the same data stored in a much less efficient way.

Small sample in opengl es 3, wrong gamma correction

I have a small sample, es-300-fbo-srgb, which is supposed to show how to manage gamma correction in OpenGL ES 3.
Essentially I have:
a GL_SRGB8_ALPHA8 texture TEXTURE_DIFFUSE
a framebuffer with another GL_SRGB8_ALPHA8 texture on GL_COLOR_ATTACHMENT0 and a GL_DEPTH_COMPONENT24 texture on GL_DEPTH_ATTACHMENT
the back buffer of my default fbo is GL_LINEAR
GL_FRAMEBUFFER_SRGB initially disabled.
I get a wrong image instead of the expected one (the screenshots are not reproduced here).
Now, if I recap the display method, this is what I do:
I render the TEXTURE_DIFFUSE texture to the sRGB FBO; since the source texture is in sRGB space, my fragment shader automatically reads a linear value and writes it to the FBO. The FBO should now contain linear values, even though its format is sRGB, because GL_FRAMEBUFFER_SRGB is disabled and so no linear-to-sRGB conversion is executed on write.
I blit the content of the FBO to the default FBO's back buffer (through a program). But since this FBO's texture has an sRGB format, a wrong gamma operation is performed on the values read, because they are assumed to be in sRGB space when they are not.
a second gamma operation is performed by my monitor when it displays the content of the default FBO
So, if I am right, my image is wrong twice over.
Now, if I glEnable(GL_FRAMEBUFFER_SRGB), I get a different result (screenshot omitted): the image looks as if it has been sRGB-corrected too many times.
If I instead leave GL_FRAMEBUFFER_SRGB disabled and change the format of my FBO's GL_COLOR_ATTACHMENT0 texture, I finally get the right image.
Why do I not get the correct image with glEnable(GL_FRAMEBUFFER_SRGB);?
I think you are basically right: you get the net effect of two decoding conversions where one (the one in your monitor) would be enough. I suppose that either your driver or your code breaks something so OpenGL doesn't 'connect the dots' properly; perhaps this answer helps you:
When to call glEnable(GL_FRAMEBUFFER_SRGB)?

OpenGL ES 2.0 is it possible to draw to depth and "color" buffer simultaneously (without MRT)?

Simple OpenGL ES 2.0 question. If I need to have depth and color buffer, do I have to render geometry twice? Or I can just bind/attach depth buffer while render color frame?
Or I need MRT/render twice for this?
It's the normal mode of operation for OpenGL to update both the color and depth buffers during rendering, as long as both of them exist and are enabled.
If you're rendering to an FBO, and want to use a depth buffer, you need to attach either a color texture or a color renderbuffer to GL_COLOR_ATTACHMENT0, by calling glFramebufferTexture2D() or glFramebufferRenderbuffer() respectively. Then allocate a depth renderbuffer with glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, ...), and attach it to GL_DEPTH_ATTACHMENT by calling glFramebufferRenderbuffer().
After that, you can render once, and both your color and depth buffers will have been updated.
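Those steps look roughly like this (an illustrative sketch, not a complete program -- it assumes a live GL ES 2.0 context and a w x h render target):

```
/* One FBO with a color texture and a depth renderbuffer; a single
 * draw call then updates both. */
GLuint fbo, colorTex, depthRb;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, w, h);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

/* Always worth checking before drawing. */
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle incomplete framebuffer */
}
```

With GL_DEPTH_TEST enabled and glDepthMask(GL_TRUE), every draw to this FBO writes color and depth simultaneously; no second pass and no MRT are needed.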

How can I read the depth buffer in WebGL?

Using the WebGL API, how can I get a value from the depth buffer, or in any other way determine 3D coordinates from screen coordinates (i.e. to find a location clicked on), other than by performing my own raycasting?
Several years have passed, these days the WEBGL_depth_texture extension is widely available... unless you need to support IE.
General usage:
Preparation:
Query the extension (required)
Allocate a separate color and depth texture (gl.DEPTH_COMPONENT)
Combine both textures into a single framebuffer (gl.COLOR_ATTACHMENT0, gl.DEPTH_ATTACHMENT)
Rendering:
Bind the framebuffer, render your scene (usually a simplified version)
Unbind the framebuffer, pass the depth texture to your shaders and read it like any other texture:
texPos.xyz = (gl_Position.xyz / gl_Position.w) * 0.5 + 0.5;
float depthFromZBuffer = texture2D(uTexDepthBuffer, texPos.xy).x;
I don't know whether it's possible to access the depth buffer directly, but if you want depth information in a texture, you'll have to create an RGBA texture, attach it as a color attachment to a framebuffer object, and render the depth information into that texture using a fragment shader that writes the depth value into gl_FragColor.
For more information, see the answers to one of my older questions: WebGL - render depth to fbo texture does not work
If you google for opengl es and shadow mapping or depth, you'll find more explanations and example source code.
From section 5.13.12 of the WebGL specification it seems you cannot directly read the depth buffer, so maybe Markus' suggestion is the best way to do it, although you might not necessarily need an FBO for this.
But if you want to do something like picking, there are other methods for it. Just browse SO, as it has been asked very often.
Not really a duplicate but see also: How to get object in WebGL 3d space from a mouse click coordinate
Aside from unprojecting and casting a ray (and then performing intersection tests against it as needed), your best bet is to look at 'picking'. This won't give exact 3D coordinates, but it is a useful substitute for unprojection when you only care about which object was clicked and don't really need per-pixel precision.
Picking in WebGL means to render the entire scene (or at least, the objects you care about) using a specific shader. The shader renders each object with a different unique ID, which is encoded in the red and green channels, using the blue channel as a key (non-blue means no object of interest). The scene is rendered into an offscreen framebuffer so that it's not visible to the end user. Then you read back, using gl.readPixels(), the pixel or pixels of interest and see which object ID was encoded at the given position.
If it helps, see my own implementation of WebGL picking. This implementation picks a rectangular region of pixels; passing in a 1x1 region results in picking at a single pixel. See also the functions at lines 146, 162, and 175.
As of January 23, 2012, there is a draft WebGL extension to enable depth buffer reading, WEBGL_depth_texture. I have no information about its availability in implementations, but I do not expect it at this early date.
