I am new to OpenGL and I am trying out a few experiments, especially with stencil buffers.
In my code, I have set the front and back stencil buffers separately using glStencilFuncSeparate() to 0x5 and 0xC respectively (with GL_ALWAYS as the function parameter). glStencilOpSeparate() is set to GL_REPLACE for the dppass operation for both the front and back states. I have also ensured that depth testing and culling are disabled when I set up both stencil buffers.
Next I try to render a cube with depth testing, front-face culling and the stencil test enabled. I now use glStencilFuncSeparate() to draw only the area set up by the back stencil, by comparing against 0xC (the back stencil init value). As the front faces are culled away, I expect only the areas covered by my back stencil whose value is 0xC to be displayed.
But to my dismay, it displays a blank screen. When I give the comparison value as 0x5 (the front stencil init value) with the cull face set to front, I am able to see that portion of the cube.
So it seems to indicate that the back face of the cube is compared against the front stencil when culling is set to GL_FRONT. All other parameters, like the stencil clear value (0x1), look proper, and I somehow cannot understand this behavior.
Set up the stencil
glStencilMaskSeparate(GL_FRONT_AND_BACK, 0xFF);
glStencilFuncSeparate(GL_FRONT, GL_ALWAYS, 0x5, 0xFF);
glStencilFuncSeparate(GL_BACK, GL_ALWAYS, 0xC, 0xFF);
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_REPLACE);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_KEEP, GL_REPLACE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glDrawArrays(GL_TRIANGLES, 0, 6);
Enable culling
glCullFace(GL_FRONT);
glEnable(GL_CULL_FACE);
Draw a cube
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_REPLACE);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_KEEP, GL_REPLACE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glDrawArrays(GL_TRIANGLES, 0, 36);
OpenGL version: OpenGL ES 3.1 Mesa 19.0.8, Mesa DRI Intel Haswell Desktop
Could someone please help me understand why the front and back stencil buffers seem to be swapped when culling is enabled?
It sounds like you think there are two different stencil buffers, one for front and one for back. This is incorrect; there is only a single stencil buffer with a single stencil value per pixel.
The gl...Separate() functions simply give you the means to specify separate stencil test/update rules for front-facing and back-facing triangles, but they all read and write the same storage location in the framebuffer.
Given that you have a cube with its front face closest to the camera, the "surviving" stencil value after your first pass is always going to be the front-facing reference value.
Update: Note that you can "mix" front and back stencil values in the same stencil buffer by using glStencilMaskSeparate() to control which bits can be written, and glStencilFuncSeparate() to control which bits are used when stencil testing.
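As a rough illustration (the particular bit split and values here are just an example, not from the original answer), you could reserve the low four bits of an 8-bit stencil buffer for front faces and the high four bits for back faces:

// Front-facing triangles may only write the low nibble.
glStencilMaskSeparate(GL_FRONT, 0x0F);
glStencilFuncSeparate(GL_FRONT, GL_ALWAYS, 0x05, 0x0F);
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_REPLACE);

// Back-facing triangles may only write the high nibble.
glStencilMaskSeparate(GL_BACK, 0xF0);
glStencilFuncSeparate(GL_BACK, GL_ALWAYS, 0xC0, 0xF0);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_KEEP, GL_REPLACE);

// Later, stencil test against only the bits you care about, e.g. the back-face nibble.
glStencilFunc(GL_EQUAL, 0xC0, 0xF0);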
I am learning OpenGL ES 2.0. I need a stencil buffer in my project.
What I am going to do:
1) Create a stencil buffer.
2) Load an 8-bit grayscale image into this stencil buffer (which is also 8 bits per pixel).
3) The grayscale image is divided into different areas (each part is set to a different value), so I can render each area separately by changing the stencil test value.
I've searched for a long time and still have no idea how to load the image into the stencil buffer.
So for the image above, I set the stencil value to 1 for the blue area and to 2 for the green area. How do I implement this?
If your bitmap were 1 bit, you could just write a shader that either discards or allows pixels to proceed based on an alpha test (or use glAlphaFunc to do the same thing under the fixed pipeline), and draw a suitable quad with the appropriate glStencilFunc.
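A minimal sketch of that 1-bit path (the shader source, the u_mask/v_texCoord names and the drawFullScreenQuad() helper are illustrative, not part of the original answer): the fragment shader discards wherever the mask texture's alpha is below a threshold, so only the surviving fragments stamp the reference value into the stencil buffer.

// Fragment shader: kill fragments where the mask texture is "off".
const char* maskFragmentSrc =
    "precision mediump float;\n"
    "uniform sampler2D u_mask;\n"
    "varying vec2 v_texCoord;\n"
    "void main() {\n"
    "    if (texture2D(u_mask, v_texCoord).a < 0.5) discard;\n"
    "    gl_FragColor = vec4(0.0);\n"  // color writes are masked off anyway
    "}\n";

glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);             // reference value 1...
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);     // ...written wherever a fragment survives
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
// drawFullScreenQuad();                        // hypothetical helper drawing the quad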
If it's 8-bit and you genuinely want all 8 bits transferred to the stencil, the best cross-platform solutions I can think of either involve 8 draws (start from 0, use glStencilMask to isolate individual bits, set glStencilOp to invert, and test for the relevant bit being non-zero in your shader), or just using the regular texture and implementing the equivalent of a stencil test directly in your shader.
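For the 8-bit route, the bit-by-bit idea sketches out roughly as follows (assuming a full-screen-quad pass with a discarding fragment shader like the one above, plus a hypothetical u_bitValue uniform; GLSL ES 1.00 has no bitwise operators, so the per-bit test has to be built from floating-point arithmetic). Because the stencil buffer starts at zero and only one bit is writable per pass, GL_INVERT sets exactly that bit wherever the shader lets a fragment through:

glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);               // start from 0
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_INVERT);     // flip the (masked) bit where fragments pass
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);

for (int bit = 0; bit < 8; ++bit) {
    glStencilMask(1u << bit);                           // only this bit may change
    glUniform1f(bitValueLocation, (float)(1 << bit));   // hypothetical uniform: 1, 2, 4, ...
    // The fragment shader discards unless the sampled 0..255 mask value has this bit set,
    // e.g. discard when mod(floor(value / u_bitValue), 2.0) < 1.0.
    // drawFullScreenQuad();                             // hypothetical helper
}

glStencilMask(0xFF);                          // restore the stencil write mask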
Simple OpenGL ES 2.0 question: if I need both a depth and a color buffer, do I have to render the geometry twice? Or can I just bind/attach a depth buffer while rendering the color frame?
Or do I need MRT, or to render twice, for this?
It's the normal mode of operation for OpenGL to update both the color and depth buffers during rendering, as long as both of them exist and are enabled.
If you're rendering to an FBO and want to use a depth buffer, you need to attach either a color texture or a color renderbuffer to GL_COLOR_ATTACHMENT0 by calling glFramebufferTexture2D() or glFramebufferRenderbuffer(), respectively. Then allocate a depth renderbuffer with glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, ...), and attach it to GL_DEPTH_ATTACHMENT by calling glFramebufferRenderbuffer().
After that, you can render once, and both your color and depth buffers will have been updated.
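A rough sketch of that setup (colorTex, width and height are assumed to exist already; error handling omitted):

GLuint fbo, depthRb;

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Color attachment: an already-allocated RGBA texture.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// Depth attachment: a renderbuffer sized to match the color attachment.
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error
}

// A single draw call now updates both the color texture and the depth renderbuffer.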
Is it possible to overlap shader effects in OpenGL ES 2.0? (not using FBOs)
How can I use the result of one shader as the input to another shader without having to do a glReadPixels and re-upload the processed pixels?
The following pseudo-code is what I'm trying to achieve:
// Push RGBA pixels into the GPU
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, Pixels_To_Render);
// Apply first shader effect
glUseProgram(FIRST_SHADER_HANDLE);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Apply the second shader effect sampling from the result of the first shader effect
glUseProgram(SECOND_SHADER_HANDLE);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Get the overall result
glReadPixels(......)
I presume you're talking about pixel processing with fragment shaders?
With the OpenGL ES 2.0 core API, you can't get pixels from the destination framebuffer into the fragment shader without reading them back from the GPU.
But if you're on a device/platform that supports a shader framebuffer fetch extension (EXT_shader_framebuffer_fetch on at least iOS, NV_shader_framebuffer_fetch in some other places), you're in luck. With that extension, a fragment shader can read the fragment data from the destination framebuffer for the fragment it's rendering to (and only that fragment). This is great for programmable blending or pixel post-processing effects because you don't have to incur the performance penalty of a glReadPixels operation.
Declare that you're using the extension with #extension GL_EXT_shader_framebuffer_fetch : require, then read fragment data from the gl_LastFragData[0] builtin. (The subscript is for the rendering target index, but you don't have multiple render targets unless you're using OpenGL ES 3.0, so it's always zero.) Process it however you like and write to gl_FragColor or gl_FragData as usual.
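As a rough illustration (names are made up, and the extension must actually be supported on the device), the second pass's fragment shader could be embedded as a C string like this:

const char* secondPassFragmentSrc =
    "#extension GL_EXT_shader_framebuffer_fetch : require\n"
    "precision mediump float;\n"
    "void main() {\n"
    "    vec4 previous = gl_LastFragData[0];\n"   // what the first pass wrote to this pixel
    "    gl_FragColor = vec4(1.0) - previous;\n"  // e.g. invert it; any per-pixel effect works
    "}\n";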
I'm trying to replace OpenGL's gl_FragDepth feature which is missing in OpenGL ES 2.0.
I need a way to set the depth in the fragment shader, because setting it in the vertex shader is not accurate enough for my purpose. AFAIK the only way to do that is by having a render-to-texture framebuffer on which a first rendering pass is done. This depth texture stores the depth values for each pixel on the screen. Then, the depth texture is attached in the final rendering pass, so the final renderer knows the depth at each pixel.
Since iOS >= 4.1 supports GL_OES_depth_texture, I'm trying to use GL_DEPTH_COMPONENT24 or GL_DEPTH_COMPONENT16 for the depth texture. I'm using the following calls to create the texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, textureId, 0);
The framebuffer creation succeeds, but I don't know how to proceed. I'm lacking some fundamental understanding of depth textures attached to framebuffers.
What values should I output in the fragment shader? I mean, gl_FragColor is still an RGBA value, even though the texture is a depth texture. I cannot set the depth in the fragment shader, since gl_FragDepth is missing in OpenGL ES 2.0.
How can I read from the depth texture in the final rendering pass, where the depth texture is attached as a sampler2D?
Why do I get an incomplete framebuffer if I set the third argument of glTexImage2D to GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT16_OES or GL_DEPTH_COMPONENT24_OES?
Is it right to attach the texture to GL_DEPTH_ATTACHMENT? If I change that to GL_COLOR_ATTACHMENT0, I get an incomplete framebuffer.
Depth textures do not affect the output of the fragment shader. The value that ends up in the depth texture when you're rendering to it will be the fixed-function depth value.
So without gl_FragDepth, you can't really "set the depth in the fragment shader". You can, however, do what you describe, i.e., render depth to a texture in one pass and then read that value back in a later pass.
You can read from a depth texture using the texture2D built-in function just like for regular color textures. The value you get back will be (d, d, d, 1.0).
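For example, a small fragment shader for the later pass (the u_depthTexture and v_texCoord names are placeholders) might read the stored depth like this:

const char* depthReadFragmentSrc =
    "precision highp float;\n"
    "uniform sampler2D u_depthTexture;\n"
    "varying vec2 v_texCoord;\n"
    "void main() {\n"
    "    float d = texture2D(u_depthTexture, v_texCoord).r;\n"  // (d, d, d, 1.0) -> take .r
    "    gl_FragColor = vec4(vec3(d), 1.0);\n"                  // e.g. visualize the depth
    "}\n";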
According to the depth texture extension specification, GL_DEPTH_COMPONENT16_OES and GL_DEPTH_COMPONENT24_OES are not supported as internal formats for depth textures. I'd expect this to generate an error. The incomplete framebuffer status you get is probably related to this.
It is correct to attach the texture to the GL_DEPTH_ATTACHMENT.
I have a texture onto which I render 16 drawings. The texture is 1024x1024 in size and it's divided into 4x4 "slots", each 256x256 pixels.
Before I render a new drawing into a slot, I want to clear it so that the old drawing is erased and the slot is totally transparent (alpha=0).
Is there a way to do it with OpenGL, or do I just need to access the texture pixels directly in memory and clear them with memset or something like that?
I imagine you'd just update the current texture normally:
std::vector<unsigned char> emptyPixels(1024*1024*4, 0); // Assuming RGBA / GL_UNSIGNED_BYTE
glBindTexture(GL_TEXTURE_2D, yourTextureId);
glTexSubImage2D(GL_TEXTURE_2D,
                0,                    // level
                0, 0,                 // xoffset, yoffset
                1024, 1024,           // width, height
                GL_RGBA,
                GL_UNSIGNED_BYTE,
                emptyPixels.data());  // Or &emptyPixels[0] if you're stuck with C++03
Even though you're replacing every pixel, glTexSubImage2D is faster than creating a new texture.
Is your texture actively bound as a framebuffer target? (I'm assuming yes, because you say you're rendering to it.)
If so, you can set a scissor rectangle with glScissor, followed by a glClear, to clear just a specific region of the framebuffer.
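A short sketch of that path, assuming the texture is the color attachment of the currently bound FBO and (slotX, slotY) is the lower-left corner of the slot to erase (placeholder names):

glEnable(GL_SCISSOR_TEST);
glScissor(slotX, slotY, 256, 256);        // restrict the clear to one 256x256 slot
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);     // fully transparent
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_SCISSOR_TEST);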