Is it possible to downsample an OpenGL ES depth buffer WITHOUT GL_OES_depth_texture? - opengl-es

I'm working on a game for mobile platforms, and I'd like to render my effects to a lower resolution render target than the screen. The issue is that I need to start from the depth buffer of the full-resolution render.
If the hardware supported GL_OES_depth_texture, I imagine it would be relatively straightforward, but unfortunately it doesn't, so I am wondering if there is any other way to get the depth information from the full-resolution render and use it for my lower resolution pass.
If I can't actually downsample the render buffer, could I bind the higher resolution depth buffer to a render target with a lower resolution color buffer? I can't find any documentation saying that all the attachments of a framebuffer object have to match in resolution, but I strongly suspect that is a requirement.
Thanks for your help!

I think you should render to a texture attached to an FBO. The texture could have whatever lower resolution you want. Textures are really the only FBO attachments that are actually useful in OpenGL ES 1.1 and 2.0, on most platforms.
Thanks Andon.
What Andon is suggesting is that you could render into a texture with an ordinary GL_RGBA format that OpenGL ES 2.0 understands, but pack/encode your value (here, the depth) into the color channels in a format of your own. You can do this as long as you only use that texture with a custom fragment shader that unpacks/decodes that custom format, because the values you read from a texture sampler can really be in any format you want.
// A single fetch returns all four channels of whatever you encoded.
vec4 texel = texture2D(gsuTexture0, gsvTexCoord0);
float fR = texel.r;
float fG = texel.g;
float fB = texel.b;
float fA = texel.a;
// Interpret the channels however your encoding defines them; here the
// depth was simply stored in the red channel.
float fDepth = fR;
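If you do pack the depth across all four RGBA channels (rather than just the red channel), a widely used encode/decode pair looks roughly like this; this is only a sketch, and the function names are placeholders:

// Full-resolution pass: encode window-space depth (gl_FragCoord.z) into RGBA.
// Note: a value of exactly 1.0 needs special handling with this scheme.
vec4 packDepth(float depth) {
    vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * (1.0 / 255.0);
    return enc;
}

// Low-resolution pass: recover the depth value from the sampled texel.
float unpackDepth(vec4 enc) {
    return dot(enc, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}

In the full-screen pass you would write gl_FragColor = packDepth(gl_FragCoord.z); and in the low-resolution pass call unpackDepth() on the value sampled from that texture.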

Related

Small sample in opengl es 3, wrong gamma correction

I have a small sample, es-300-fbo-srgb, that is supposed to show how to manage gamma correction in OpenGL ES 3.
Essentially I have:
a GL_SRGB8_ALPHA8 texture TEXTURE_DIFFUSE
a framebuffer with another GL_SRGB8_ALPHA8 texture on GL_COLOR_ATTACHMENT0 and a GL_DEPTH_COMPONENT24 texture on GL_DEPTH_ATTACHMENT
the back buffer of my default fbo is GL_LINEAR
GL_FRAMEBUFFER_SRGB initially disabled.
I get an incorrect image instead of the expected one.
Now, if I recap the display method, this is what I do:
I render the TEXTURE_DIFFUSE texture to the sRGB fbo; since the source texture is in sRGB space, my fragment shader automatically reads a linear value and writes it to the fbo. The fbo should now contain linear values, even though its format is sRGB, because GL_FRAMEBUFFER_SRGB is disabled and so no linear->sRGB conversion is performed on write.
I blit the content of the fbo to the default fbo's back buffer (through a program). But since this fbo's texture has an sRGB format, a wrong gamma operation is applied to the values read, because they are assumed to be in sRGB space when they are not.
A second gamma operation is performed by my monitor when it displays the content of the default fbo.
So my image is, if I am right, wrong twice over.
Now, if I glEnable(GL_FRAMEBUFFER_SRGB); I instead get an image that looks like it has been sRGB-corrected too many times.
If I instead leave GL_FRAMEBUFFER_SRGB disabled and change the format of my fbo's GL_COLOR_ATTACHMENT0 texture, I finally get the right image.
Why do I not get the correct image with glEnable(GL_FRAMEBUFFER_SRGB);?
I think you are basically right: you get the net effect of two decoding conversions where one (the one in your monitor) would be enough. I suppose that either your driver or your code breaks something so OpenGL doesn't 'connect the dots' properly; perhaps this answer helps you:
When to call glEnable(GL_FRAMEBUFFER_SRGB)?
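For reference, the conversion that is applied on write when sRGB encoding is in effect is the standard sRGB transfer function; having it spelled out can help when reasoning about where a double conversion creeps in. A GLSL sketch (the function name is mine):

// Standard linear -> sRGB encoding (the inverse, sRGB -> linear, is applied on reads).
vec3 linearToSrgb(vec3 c) {
    vec3 lo = c * 12.92;
    vec3 hi = 1.055 * pow(c, vec3(1.0 / 2.4)) - 0.055;
    return mix(lo, hi, step(0.0031308, c));
}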

OpenGL ES 2.0 Vertex Shader Texture Reads not possible from FBO?

I'm currently working on a GPGPU project that uses OpenGL ES 2.0. I have a rendering pipeline that uses framebuffer objects (FBOs) as targets, i.e. the result of each rendering pass is saved in a texture which is attached to an FBO. So far, this works when using fragment shaders. For example, I have the following rendering pipeline:
Preprocessing (downscaling, grayscale conversion)
-> Adaptive Thresholding Pass 1 -> Adapt. Thresh. Pass 2
-> Copy back to CPU
However, I wanted to extend this pipeline by adding a grayscale histogram calculation after the preprocessing step. With OpenGL ES 2.0 this only works with texture reads in the vertex shader, as far as I know [1]. I can confirm that my shaders work in a different program where the input is a "real" image, not a rendered texture that is attached to an FBO. Hence I think it is not possible to read texture data in a vertex shader if it comes from an FBO. Can anyone confirm this assumption, or am I missing something? I'm using a Nexus 10 for my experiments.
[1]: It basically works by reading each pixel value from the texture in the vertex shader, calculating the histogram bin from it, and "adding" it in the fragment shader by using alpha blending.
Texture reads within a vertex shader are not a required element in OpenGL ES 2.0, so you'll find some manufacturers supporting them and some not. In fact, there was a weird situation where iOS supported it on some devices for one version of iOS, but not the next (and it's now officially supported in iOS 7). That might be the source of the inconsistency you see here.
To work around this, I implemented a histogram calculation by instead feeding the colors from the FBO (or its attached texture) in as vertices and using a scattering operation similar to what you describe. This doesn't require a texture read of any kind in the vertex shader, but it does involve a round-trip from and to the GPU and potentially a lot of vertices. It works on all OpenGL ES 2.0 hardware, but it can be costly.
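For what it's worth, the scattering pass can be quite small. The sketch below assumes the pixel colors are fed in as a per-vertex attribute (as described above), the render target is uBinCount x 1 pixels, and additive blending (glBlendFunc(GL_ONE, GL_ONE)) is enabled; all names are placeholders:

// Vertex shader: one point per source pixel, routed to its histogram bin.
attribute vec4 aColor;        // pixel color read back from the FBO texture
uniform float uBinCount;      // e.g. 256.0
void main() {
    float luma = dot(aColor.rgb, vec3(0.299, 0.587, 0.114));
    float bin = floor(luma * (uBinCount - 1.0) + 0.5);
    float x = (bin + 0.5) / uBinCount * 2.0 - 1.0;   // bin index -> clip-space x
    gl_Position = vec4(x, 0.0, 0.0, 1.0);
    gl_PointSize = 1.0;
}

// Fragment shader: each point adds a fixed increment to its bin.
precision mediump float;
void main() {
    gl_FragColor = vec4(1.0 / 255.0);   // accumulated by additive blending
}

With an 8-bit target each bin saturates after 255 increments per channel, so for large images you may need to scale the increment or accumulate into several channels.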

Texel offsets in pixel shaders

I am currently porting an app over from iOS to Windows Phone 8. It is an image processing app, and all calculations are done on the GPU using pixel shaders.
There is one detail that I just haven't been able to figure out: the Texel Width/Height offsets. I have absolutely no idea what these values are, and I can't seem to find any information on them.
Are they common terms? Does anybody know what they represent? Does anyone know what sort of values should be in them?
A texel is a pixel of a texture, addressed by a texture coordinate; an offset into a texture determines where the texture begins to be mapped onto a model or render target.
The most simple example of this:
http://lifeasa.files.wordpress.com/2011/02/super_mario_world_by_xinzax.png
The stage map is made of a few textures; as Mario advances through the level, the X coordinate offset increases, the right part of the texture becomes visible, and at the same time the left side becomes hidden.
Check your textures: if a single image contains more than one 'part', this is what is going on.
Another case is a single texture that is mapped onto multiple objects, where each object has an offset so that it shows a different 'segment' of the texture than the previous object.
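In OpenGL ES shader terms, applying such an offset is just an addition to the texture coordinates before sampling; a minimal sketch with made-up uniform names:

precision mediump float;
uniform sampler2D uTexture;
uniform vec2 uTexOffset;     // how far into the larger texture to start reading
varying vec2 vTexCoord;
void main() {
    gl_FragColor = texture2D(uTexture, vTexCoord + uTexOffset);
}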

How can I read the depth buffer in WebGL?

Using the WebGL API, how can I get a value from the depth buffer, or in any other way determine 3D coordinates from screen coordinates (i.e. to find a location clicked on), other than by performing my own raycasting?
Several years have passed; these days the WEBGL_depth_texture extension is widely available... unless you need to support IE.
General usage:
Preparation:
Query the extension (required)
Allocate a separate color and depth texture (gl.DEPTH_COMPONENT)
Combine both textures into a single framebuffer (gl.COLOR_ATTACHMENT0, gl.DEPTH_ATTACHMENT)
Rendering:
Bind the framebuffer, render your scene (usually a simplified version)
Unbind the framebuffer, pass the depth texture to your shaders and read it like any other texture:
// Project the clip-space position into [0,1] texture space
// (typically computed in the vertex shader and passed on as a varying).
texPos.xyz = (gl_Position.xyz / gl_Position.w) * 0.5 + 0.5;
// The depth value is read from the red channel, like any other texture sample.
float depthFromZBuffer = texture2D(uTexDepthBuffer, texPos.xy).x;
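If you then need a view-space distance rather than the non-linear value stored in the z-buffer, you can recover it from the near and far planes of a standard perspective projection; a sketch with assumed uniform names:

uniform float uNear;   // camera near plane
uniform float uFar;    // camera far plane
float linearizeDepth(float d) {
    float zNdc = d * 2.0 - 1.0;   // [0,1] -> NDC [-1,1]
    return (2.0 * uNear * uFar) / (uFar + uNear - zNdc * (uFar - uNear));
}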
I don't know if it's possible to directly access the depth buffer, but if you want depth information in a texture, you'll have to create an RGBA texture, attach it as a colour attachment to a framebuffer object, and render depth information into the texture using a fragment shader that writes the depth value into gl_FragColor.
For more information, see the answers to one of my older questions: WebGL - render depth to fbo texture does not work
If you google for opengl es and shadow mapping or depth, you'll find more explanations and example source code.
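In the simplest case that fragment shader just writes the window-space depth to the red channel, which gives only about 8 bits of precision; in practice the value is usually spread across all four channels:

precision mediump float;
void main() {
    float depth = gl_FragCoord.z;               // window-space depth in [0,1]
    gl_FragColor = vec4(depth, 0.0, 0.0, 1.0);  // coarse: one 8-bit channel only
}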
From section 5.13.12 of the WebGL specification it seems you cannot directly read the depth buffer, so maybe Markus' suggestion is the best way to do it, although you might not necessarily need an FBO for this.
But if you want to do something like picking, there are other methods for it. Just browse SO, as it has been asked very often.
Not really a duplicate but see also: How to get object in WebGL 3d space from a mouse click coordinate
Aside from unprojecting and casting a ray (and then performing intersection tests against it as needed), your best bet is to look at 'picking'. This won't give exact 3D coordinates, but it is a useful substitute for unprojection when you only care about which object was clicked on, and don't really need per-pixel precision.
Picking in WebGL means to render the entire scene (or at least, the objects you care about) using a specific shader. The shader renders each object with a different unique ID, which is encoded in the red and green channels, using the blue channel as a key (non-blue means no object of interest). The scene is rendered into an offscreen framebuffer so that it's not visible to the end user. Then you read back, using gl.readPixels(), the pixel or pixels of interest and see which object ID was encoded at the given position.
If it helps, see my own implementation of WebGL picking. This implementation picks a rectangular region of pixels; passing in a 1x1 region results in picking at a single pixel. See also the functions at lines 146, 162, and 175.
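A sketch of the picking fragment shader described above; here the byte split of the object ID is done on the CPU and passed in as a uniform (the names are mine):

precision mediump float;
// (low byte, high byte) of the object ID, each already divided by 255.0
uniform vec2 uObjectIdBytes;
void main() {
    // Red and green carry the ID; blue = 1.0 flags a pickable object.
    gl_FragColor = vec4(uObjectIdBytes, 1.0, 1.0);
}

After gl.readPixels() you reconstruct the ID as red + 256 * green and ignore pixels whose blue byte is not 255.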
As of January 23, 2012, there is a draft WebGL extension to enable depth buffer reading, WEBGL_depth_texture. I have no information about its availability in implementations, but I do not expect it at this early date.

use core-image in 3d

I have a working Core Video setup (a frame captured from a USB camera via QTKit) and the current frame is rendered as a texture on an arbitrary plane in 3D space in a subclassed NSOpenGLView. So far so good, but I would like to use some Core Image filters on this frame.
I now have the basic code setup and it renders my unprocessed video frame like before, but the final processed output CIImage is rendered as a screen-aligned quad into the view. It feels like an image blitted over my 3D rendering; this is what I do not want!
I am looking for a way to process my video frame (a CVOpenGLTextureRef) with Core Image and just render the resulting image on my plane in 3D.
Do I have to use offscreen rendering (store the viewport, set new viewport, modelview and perspective matrices, and render into an FBO), or is there an easier way?
Thanks in advance!
Try GPUImage! It's easy to use and faster than Core Image processing. It uses predefined or custom (GLSL) shaders.
