Small sample in OpenGL ES 3, wrong gamma correction - opengl-es

I have a small sample, es-300-fbo-srgb, that is supposed to show how to manage gamma correction in OpenGL ES 3.
Essentially I have:
a GL_SRGB8_ALPHA8 texture TEXTURE_DIFFUSE
a framebuffer with another GL_SRGB8_ALPHA8 texture on GL_COLOR_ATTACHMENT0 and a GL_DEPTH_COMPONENT24 texture on GL_DEPTH_ATTACHMENT
the back buffer of my default fbo is GL_LINEAR
GL_FRAMEBUFFER_SRGB initially disabled.
I get [screenshot: incorrect result] instead of [screenshot: expected result].
Now, if I recap the display method, this is what I do:
I render the TEXTURE_DIFFUSE texture to the sRGB fbo; since the source texture is in sRGB space, the hardware converts the texels to linear values when my fragment shader samples them, and the shader writes those linear values to the fbo. The fbo should now contain linear values even though its format is sRGB, because GL_FRAMEBUFFER_SRGB is disabled and therefore no linear-to-sRGB conversion is performed on write.
I blit the content of the fbo to the default fbo back buffer (through a program). But since the texture of this fbo has an sRGB format, a wrong gamma operation is performed on the values read from it: they are assumed to be in sRGB space when they are not.
A second gamma operation is performed by my monitor when it displays the content of the default fbo.
So my image is, if I am right, wrong twice over.
Now, if I glEnable(GL_FRAMEBUFFER_SRGB); I get instead [screenshot: over-corrected result]. The image looks as if it has been sRGB-corrected too many times.
If I instead leave GL_FRAMEBUFFER_SRGB disabled and change the format of the GL_COLOR_ATTACHMENT0 texture of my fbo to a non-sRGB one, I finally get the right image.
Why do I not get the correct image with glEnable(GL_FRAMEBUFFER_SRGB);?
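For reference, here is a minimal C sketch of the setup described above (names such as fbo, colorTex, depthTex, width and height are my placeholders, not the sample's):

GLuint colorTex, depthTex, fbo;

/* sRGB color attachment, as described in the question */
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* 24-bit depth attachment */
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);

/* With GL_FRAMEBUFFER_SRGB disabled, shader outputs are written to the
 * sRGB attachment unconverted; enabling it turns on linear-to-sRGB
 * encoding on write. (The enable itself comes from desktop OpenGL; core
 * OpenGL ES exposes it only through EXT_sRGB_write_control.) */
glEnable(GL_FRAMEBUFFER_SRGB);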

I think you are basically right: you get the net effect of two decoding conversions where one (the one in your monitor) would be enough. I suppose that either your driver or your code breaks something so OpenGL doesn't 'connect the dots' properly; perhaps this answer helps you:
When to call glEnable(GL_FRAMEBUFFER_SRGB)?

Related

When do we need to use renderer.outputEncoding = THREE.sRGBEncoding

I'm a newbie in three.js. I have been learning three.js by trying to make simple scenes and understanding how the official examples work.
Recently I have been looking at https://threejs.org/examples/?q=trans#webgl_materials_physical_transmission, and I couldn't understand the exact reason the code needs renderer.outputEncoding = THREE.sRGBEncoding here. In simpler scenarios, such as loading JPGs as textures on cubes, the JPG images look just fine without setting outputEncoding on the renderer.
I tried googling similar topics such as gamma correction, and found people saying that most images online are gamma encoded in the sRGB color space, but I couldn't connect all the dots myself... I would be most grateful if anyone could explain this clearly to me.
If you do not touch renderer.outputEncoding, you use no color space workflow in your app: the engine assumes all input color values are in linear space, and the final color value of each fragment is not transformed into an output color space.
This kind of workflow is problematic for several reasons. One is that your JPG texture is most likely sRGB encoded, like many other textures. To compute proper colors in the fragment shader, all input color values must first be transformed into the same color space (which is linear color space). If you don't care about color spaces, you quickly end up with wrong output colors.
For simple apps this detail often does not matter because the final image "just looks good". However, depending on what you are doing in your app (e.g. importing glTF assets), a proper color space workflow is mandatory.
By setting renderer.outputEncoding = THREE.sRGBEncoding you tell the renderer to convert the final color value in the fragment shader from linear to sRGB color space. Consequently, you also have to tell the renderer when textures hold sRGB encoded data. You do this by assigning THREE.sRGBEncoding to the encoding property of textures. THREE.GLTFLoader does this automatically for all color textures. But when loading textures manually, you have to do this by yourself.
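To make the two conversions concrete, here is the standard sRGB transfer-function math as a small C sketch (illustrative reference math, not three.js source code):

#include <math.h>

/* decode: sRGB-encoded value -> linear, per the sRGB standard */
float srgb_to_linear(float c) {
    return (c <= 0.04045f) ? c / 12.92f
                           : powf((c + 0.055f) / 1.055f, 2.4f);
}

/* encode: linear value -> sRGB, applied to the final output color */
float linear_to_srgb(float c) {
    return (c <= 0.0031308f) ? c * 12.92f
                             : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
}

Conceptually, texture.encoding = THREE.sRGBEncoding corresponds to applying srgb_to_linear() to sampled texels, while renderer.outputEncoding = THREE.sRGBEncoding corresponds to applying linear_to_srgb() to the final fragment color.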

OpenGL ES 2.0: Load 8-bit image into Stencil Buffer

I am learning OpenGL ES2.0. I need a stencil buffer in my project.
What I am going to do:
1) Create a stencil buffer.
2) Load an 8-bit grayscale image into this stencil buffer (which is also 8 bits per pixel).
3) The grayscale image is divided into areas (each area is marked with a different pixel value), so I can render each area separately by changing the stencil test reference value.
I've searched for a long time and still have no idea how to load the image into the stencil buffer.
So for the image above, I would set the stencil value to 1 for the blue area and to 2 for the green area. How can I implement this?
If your bitmap were 1 bit, you could just write a shader that either discards or keeps pixels based on an alpha test (or use glAlphaFunc for the same thing under the fixed-function pipeline) and draw a suitable quad with the appropriate glStencilFunc.
If it's 8-bit and you genuinely want all 8 bits transferred to the stencil, the best cross-platform solutions I can think of are either 8 draws (start from a cleared stencil, use glStencilMask to isolate individual bits, set glStencilOp to invert, and have your shader discard fragments where the relevant bit is zero; see the sketch below), or simply keeping the regular texture and implementing the equivalent of a stencil test directly in your shader.
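A rough C sketch of the 8-draw idea, assuming ES 2.0, a compiled and bound program with uniforms uMask and uBit, and a drawFullscreenQuad() helper (all hypothetical names):

/* Fragment shader: ES 2.0 GLSL has no bitwise operators, so the bit is
   extracted with floor/mod arithmetic. */
static const char *fsBitTest =
    "precision mediump float;\n"
    "uniform sampler2D uMask;  /* the 8-bit grayscale image */\n"
    "uniform float uBit;       /* 2^bit: 1.0, 2.0, 4.0, ... */\n"
    "varying vec2 vUV;\n"
    "void main() {\n"
    "    float v = floor(texture2D(uMask, vUV).r * 255.0 + 0.5);\n"
    "    if (mod(floor(v / uBit), 2.0) < 0.5) discard;  /* bit not set */\n"
    "    gl_FragColor = vec4(0.0);\n"
    "}\n";

/* C side: one draw per bit plane; only the stencil buffer is written. */
glEnable(GL_STENCIL_TEST);
glDisable(GL_DEPTH_TEST);
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_INVERT);  /* surviving fragments flip 0 -> 1 */
for (int bit = 0; bit < 8; ++bit) {
    glStencilMask(1u << bit);              /* isolate one bit plane */
    glUniform1f(uBitLoc, (float)(1 << bit));
    drawFullscreenQuad();
}
glStencilMask(0xFF);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

After this, the stencil buffer holds the image's values, and each area can then be selected with glStencilFunc(GL_EQUAL, 1, 0xFF), glStencilFunc(GL_EQUAL, 2, 0xFF), and so on.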

OpenGL ES: Pre-Rendering to FBO texture

Is it possible to render to an FBO texture once and then use the resulting texture handle when rendering all following frames?
For example, if I'm rendering a hard shadow map and the scene geometry and light position are static, the depth map is always the same, so I want to render it only once using an FBO and then just reuse it. However, if I simply use a flag to render the depth texture once, the texture remains empty for the rest of the frames.
Does the FBO get reallocated after rendering of a frame has completed? What would be the right way to preserve the rendered texture for the following frames?
Rendering to a texture is no different than if you had uploaded those pixels to the texture in the first place. The contents of a texture do not magically disappear. A texture's contents are changed when you change them. This could be by uploading data to the texture, or by setting one of the texture's images to be used for framebuffer operations (clearing, rendering to it, etc).
Unless you do something to explicitly change the data stored in a texture, it won't change.
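To illustrate, the render-once pattern can be as simple as this C sketch (shadowFbo, shadowDepthTex, SHADOW_W/SHADOW_H and the helper functions are hypothetical names):

static int shadowMapReady = 0;

if (!shadowMapReady) {
    /* fill the depth texture exactly once */
    glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
    glViewport(0, 0, SHADOW_W, SHADOW_H);
    glClear(GL_DEPTH_BUFFER_BIT);
    renderSceneDepth();                 /* hypothetical depth-only pass */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    shadowMapReady = 1;
}

/* every frame: just sample the already-filled texture */
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, shadowDepthTex);
drawSceneWithShadows();                 /* hypothetical main pass */

If the texture still comes out empty with a pattern like this, the usual culprits are clearing or re-rendering into the FBO every frame, an incomplete framebuffer, or sampling a texture while it is still attached to the currently bound framebuffer (which is undefined behaviour).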

OpenGL ES 1.1: change pixel data on the FBO

Is there a way to get the FBO pixel data, convert it to greyscale quickly, and then put the image back into that FBO?
If you're using the fixed-function pipeline (ES 1.1), you can use glReadPixels to pull pixel data off the GPU so you can process it directly. Then you'd need to create a texture from that result, and render a quad mapped to the new texture. But this is a fairly inefficient way of accomplishing the result.
If you're using shaders (ES 2.0), you can do this on the GPU directly, which is faster. That means doing the greyscaling in a fragment shader in one of a few ways:
If your rendering is simple to begin with, you can add the greyscale math in your normal fragment shader, and perhaps toggle it with a boolean uniform variable.
If you don't want to mess with greyscale in your normal pipeline, you can render normally to an offscreen FBO (texture), and then render the contents of that texture to the screen's FBO using a special greyscale texturing shader that does the math on sampled texels.
Here's the greyscale math if you need it: https://web.archive.org/web/20141230145627/http://bobpowell.net/grayscale.aspx. Essentially, plug the RGB values into that formula and use the resulting luminance value in all your channels.
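For the ES 1.1 path, a minimal C sketch of the read-modify-reupload round trip (w, h and tex are placeholders; the weights are the usual luminance coefficients from the link above):

GLubyte *pixels = (GLubyte *)malloc((size_t)w * h * 4);
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);  /* pull FBO contents */

for (int i = 0; i < w * h; ++i) {
    GLubyte *p = &pixels[i * 4];
    /* Y = 0.299 R + 0.587 G + 0.114 B */
    GLubyte y = (GLubyte)(0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2]);
    p[0] = p[1] = p[2] = y;              /* leave alpha (p[3]) untouched */
}

glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);  /* re-upload */
free(pixels);
/* ...then draw a fullscreen quad textured with tex back into the FBO. */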

How can I read the depth buffer in WebGL?

Using the WebGL API, how can I get a value from the depth buffer, or in any other way determine 3D coordinates from screen coordinates (i.e. to find a location clicked on), other than by performing my own raycasting?
Several years have passed, these days the WEBGL_depth_texture extension is widely available... unless you need to support IE.
General usage:
Preparation:
Query the extension (required)
Allocate separate color and depth textures (gl.DEPTH_COMPONENT)
Combine both textures into a single framebuffer (gl.COLOR_ATTACHMENT0, gl.DEPTH_ATTACHMENT)
Rendering:
Bind the framebuffer, render your scene (usually a simplified version)
Unbind the framebuffer, pass the depth texture to your shaders and read it like any other texture:
texPos.xyz = (gl_Position.xyz / gl_Position.w) * 0.5 + 0.5;
float depthFromZBuffer = texture2D(uTexDepthBuffer, texPos.xy).x;
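WEBGL_depth_texture mirrors the OES_depth_texture extension from OpenGL ES 2.0, so in C the preparation steps look roughly like this (w, h and the variable names are placeholders):

/* 1. check the extension */
if (!strstr((const char *)glGetString(GL_EXTENSIONS), "GL_OES_depth_texture")) {
    /* fall back, e.g. to packing depth into a color texture */
}

GLuint colorTex, depthTex, fbo;

/* 2. separate color and depth textures */
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, w, h, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL);

/* 3. combine both in one framebuffer */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);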
I don't know if it's possible to directly access the depth buffer, but if you want depth information in a texture, you'll have to create an RGBA texture, attach it as a colour attachment to a framebuffer object, and render depth information into the texture using a fragment shader that writes the depth value into gl_FragColor.
For more information, see the answers to one of my older questions: WebGL - render depth to fbo texture does not work
If you google for opengl es and shadow mapping or depth, you'll find more explanations and example source code.
From section 5.13.12 of the WebGL specification it seems you cannot directly read the depth buffer, so maybe Markus' suggestion is the best way to do it, although you might not necessarily need an FBO for this.
But if you want to do something like picking, there are other methods for it. Just browse SO, as it has been asked very often.
Not really a duplicate but see also: How to get object in WebGL 3d space from a mouse click coordinate
Aside from unprojecting and casting a ray (and then performing intersection tests against it as needed), your best bet is to look at 'picking'. This won't give exact 3D coordinates, but it is a useful substitute for unprojection when you only care about which object was clicked on and don't really need per-pixel precision.
Picking in WebGL means to render the entire scene (or at least, the objects you care about) using a specific shader. The shader renders each object with a different unique ID, which is encoded in the red and green channels, using the blue channel as a key (non-blue means no object of interest). The scene is rendered into an offscreen framebuffer so that it's not visible to the end user. Then you read back, using gl.readPixels(), the pixel or pixels of interest and see which object ID was encoded at the given position.
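The readback step could look like the following C sketch (ES 2.0 style; pickFbo, mouseX, mouseY, viewportH and the exact ID packing are my assumptions, following the encoding described above):

GLubyte pixel[4];
glBindFramebuffer(GL_FRAMEBUFFER, pickFbo);
/* ...render the scene here with the picking shader... */

/* GL's origin is bottom-left, so flip the mouse y coordinate */
glReadPixels(mouseX, viewportH - 1 - mouseY, 1, 1,
             GL_RGBA, GL_UNSIGNED_BYTE, pixel);

if (pixel[2] == 0xFF) {                          /* blue channel: "object here" key */
    int objectId = (pixel[0] << 8) | pixel[1];   /* ID packed into red + green */
    /* look up objectId in your scene */
}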
If it helps, see my own implementation of WebGL picking. This implementation picks a rectangular region of pixels; passing in a 1x1 region results in picking at a single pixel. See also the functions at lines 146, 162, and 175.
As of January 23, 2012, there is a draft WebGL extension to enable depth buffer reading, WEBGL_depth_texture. I have no information about its availability in implementations, but I do not expect it to be widely available at this early date.
