Fade Out OpenGL VBO with Colors but without Texture - opengl-es

I would like to fade out a VBO object in OpenGL ES. The VBO is drawn using RGBA format GL_UNSIGNED_BYTE like this:
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer( 4, GL_UNSIGNED_BYTE, ....
I am fairly certain I should set up blending like this:
glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
glColor4f( 1, 1, 1, alpha ); // I will be changing alpha from 1 to 0
But it doesn't fade, which makes sense, since I think the src alpha is coming from my VBO which is fixed.
So I thought maybe I should just pass 3 bytes per color to the driver, keeping my VBO colors, so that alpha would come from the glColor4f command.
glColorPointer( 3, GL_UNSIGNED_BYTE, ....
But this crashes for reasons I'm still trying to decipher (iPad development with Xcode). I would think all of my glColorPointer offsets would still be fine: I still have all 4 bytes (RGBA) in MyVertexObject, so I don't think it's a padding issue, and I don't change any offset values in the glColorPointer command, just the 4 to a 3.
If I disable GL_COLOR_ARRAY the fade works perfectly but now I've lost the colors and it's using the white color I set above.
So I'm stuck as it seems I can't control the alpha channel separately from the RGB colors. Is there another way to fade a VBO with colors? Thanks.

You can specify blend factors independently of the colors in your VBOs by using GL_CONSTANT_ALPHA as the source blend factor:
glBlendColor(0.0f, 0.0f, 0.0f, alpha);
glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
This will use the alpha component specified in glBlendColor() in the blend function.
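For intuition, the blend this sets up computes result = src * constantAlpha + dst * (1 - constantAlpha) per channel, ignoring the per-vertex alpha bytes entirely. A minimal sketch of that arithmetic in plain C (no GL context; the helper name is made up for illustration):

```c
#include <assert.h>
#include <math.h>

/* Simulates one color channel of the GL_CONSTANT_ALPHA /
 * GL_ONE_MINUS_CONSTANT_ALPHA blend above.
 * src: fragment color from the VBO, dst: framebuffer color,
 * constant_alpha: the alpha passed to glBlendColor(). */
static float constant_alpha_blend(float src, float dst, float constant_alpha)
{
    return src * constant_alpha + dst * (1.0f - constant_alpha);
}
```

As the fade value goes from 1 to 0, the result moves from the VBO color to the untouched framebuffer color, which is exactly the fade-out the question asks for.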

Related

OpenGLES 3.0 Cannot render to a texture larger than the screen size

I have made an image below to indicate my problem. I render my scene to an offscreen framebuffer with a texture the size of the screen. I then render said texture to a screen-filling quad. This produces case 1 in the image. I then run the exact same program, but with a texture size, let's say, 1.5 times greater (enough to contain the entire smiley), and afterwards render it once more to the screen-filling quad. I then get result 3, but I expected result 2.
I remember to change the viewport according to the new texture size before rendering to the texture, and to reset the viewport before drawing the quad. I do NOT understand what I am doing wrong.
(problem shown as an image in the original post)
To summarize, this is the general flow (too much code to post it all):
Create MSAAframebuffer and ResolveFramebuffer (Resolve contains the texture).
Set glViewport(0, 0, Width*1.5, Height*1.5)
Bind MSAAframebuffer and render my scene (the smiley).
Blit the MSAAframebuffer into the ResolveFramebuffer
Set glViewport(0, 0, Width, Height), bind the texture and render my quad.
Note that all the MSAA is working perfectly fine. Also both buffers have the same dimensions, so when I blit it is simply: glBlitFramebuffer(0, 0, Width*1.5, Height*1.5, 0, 0, Width*1.5, Height*1.5, ClearBufferMask.ColorBufferBit, BlitFramebufferFilter.Nearest)
Hope someone has a good idea. I might get fired if not :D
I found that I actually used an AABB somewhere else in the code to determine what to render; and this AABB was computed from the "small viewport's size". Stupid mistake.
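The general lesson: anything derived from the output size (viewport, scissor, culling AABBs) has to be computed from the render target's dimensions, not the window's. A tiny sketch with the question's 1.5x scale (the struct and helper name are made up for illustration):

```c
#include <assert.h>

typedef struct { int w, h; } Extent;

/* Hypothetical helper: derive the pixel bounds used for the offscreen
 * viewport and any culling AABBs from the render target's scale factor,
 * rather than reusing the window's dimensions. */
static Extent target_extent(int window_w, int window_h, float scale)
{
    Extent e = { (int)(window_w * scale), (int)(window_h * scale) };
    return e;
}
```

With an 800x600 window and a 1.5x target, culling must use 1200x900; computing the AABB from 800x600 is exactly the mistake described above.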

Unexpected transparency in alpha map from three.js RenderTarget

I’m using a RenderTarget in three.js/react-three-fiber to generate a simple alpha map using a Cylinder. The Cylinder uses a basic material with color white, which should be fully opaque in the alpha map, per the docs. However, when applied to a geometry, the area corresponding to white in the mask is not fully opaque, as shown in this example:
https://codesandbox.io/s/r3f-programmatic-alpha-map-f8lhi?file=%2Fsrc%2Findex.js
The scene setup is:
Red plane masked to a cylinder via the alpha mask - expected to be fully opaque
Behind that, a white box - it should not be visible at all from the default camera position, but it is.
Does anyone have any idea why the white alpha mask leaves the Red plane with non-zero transparency (i.e. NOT fully opaque)? Thank you.
The explanation was provided by #drcmda here: there is a default tone mapping applied by react-three-fiber which results in the alpha map not being rendered pure white. To prevent that from happening, disable tone mapping in the alpha map geometry's material:
<Cylinder args={[1, 1, 1, 64]} position={[0, 0, 0]} rotation={[Math.PI / 2, 0, 0]}>
<meshBasicMaterial attach="material" color="white" toneMapped={false} />
</Cylinder>
i.e. in line 49 of the codesandbox linked in the question.

Rendering to depth texture - unclarities about usage of GL_OES_depth_texture

I'm trying to replace OpenGL's gl_FragDepth feature which is missing in OpenGL ES 2.0.
I need a way to set the depth in the fragment shader, because setting it in the vertex shader is not accurate enough for my purpose. AFAIK the only way to do that is by having a render-to-texture framebuffer on which a first rendering pass is done. This depth texture stores the depth values for each pixel on the screen. Then, the depth texture is attached in the final rendering pass, so the final renderer knows the depth at each pixel.
Since iOS >= 4.1 supports GL_OES_depth_texture, I'm trying to use GL_DEPTH_COMPONENT24 or GL_DEPTH_COMPONENT16 for the depth texture. I'm using the following calls to create the texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, textureId, 0);
The framebuffer creation succeeds, but I don't know how to proceed. I'm lacking some fundamental understanding of depth textures attached to framebuffers.
What values should I output in the fragment shader? I mean, gl_FragColor is still an RGBA value, even though the texture is a depth texture, and I cannot set the depth in the fragment shader, since gl_FragDepth is missing in OpenGL ES 2.0.
How can I read from the depth texture in the final rendering pass, where the depth texture is attached as a sampler2D?
Why do I get an incomplete framebuffer if I set the third argument of glTexImage2D to GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT16_OES or GL_DEPTH_COMPONENT24_OES?
Is it right to attach the texture to the GL_DEPTH_ATTACHMENT? If I'm changing that to GL_COLOR_ATTACHMENT0, I'm getting an incomplete framebuffer.
Depth textures do not affect the output of the fragment shader. The value that ends up in the depth texture when you're rendering to it will be the fixed-function depth value.
So without gl_FragDepth, you can't really "set the depth in the fragment shader". You can, however, do what you describe, i.e., render depth to a texture in one pass and then read that value back in a later pass.
You can read from a depth texture using the texture2D built-in function just like for regular color textures. The value you get back will be (d, d, d, 1.0).
According to the depth texture extension specification, GL_DEPTH_COMPONENT16_OES and GL_DEPTH_COMPONENT24_OES are not supported as internal formats for depth textures. I'd expect this to generate an error. The incomplete framebuffer status you get is probably related to this.
It is correct to attach the texture to the GL_DEPTH_ATTACHMENT.

How to clear portion of a texture with alpha 0 using OpenGL ES?

I have a texture onto which I render 16 drawings. The texture is 1024x1024 in size and it's divided into 4x4 "slots", each 256x256 pixels.
Before I render a new drawing into a slot, I want to clear it so that the old drawing is erased and the slot is totally transparent (alpha=0).
Is there a way to do it with OpenGL, or do I need to access the texture pixels directly in memory and clear them with memset or something like that?
I imagine you'd just update the current texture normally:
std::vector<unsigned char> emptyPixels(1024*1024*4, 0); // Assuming RGBA / GL_UNSIGNED_BYTE
glBindTexture(GL_TEXTURE_2D, yourTextureId);
glTexSubImage2D(GL_TEXTURE_2D,
                0,                   // level
                0,                   // xoffset
                0,                   // yoffset
                1024,                // width
                1024,                // height
                GL_RGBA,
                GL_UNSIGNED_BYTE,
                emptyPixels.data()); // Or &emptyPixels[0] if you're stuck with C++03
Even though you're replacing every pixel, glTexSubImage2D is faster than deleting and creating a new texture.
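Note that the snippet above clears the entire 1024x1024 texture. To clear a single 256x256 slot instead, pass that slot's origin as glTexSubImage2D's xoffset/yoffset and 256 for width/height. For a row-major slot index 0-15 (an assumed convention), the origin works out to:

```c
#include <assert.h>

#define SLOT_SIZE 256  /* each slot is 256x256 px in the 1024x1024 atlas */

/* Origin of slot i (0..15, row-major, 4 slots per row): these values
 * feed glTexSubImage2D's xoffset/yoffset when clearing one slot. */
static int slot_x(int i) { return (i % 4) * SLOT_SIZE; }
static int slot_y(int i) { return (i / 4) * SLOT_SIZE; }
```

The zeroed upload buffer then only needs to be 256*256*4 bytes rather than the full texture size.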
Is your texture actively bound as a frame buffer target? (I'm assuming yes because you say you're rendering to it.)
If so, you can set a glScissor test, followed by a glClear to just clear a specific region of the framebuffer.

GLSL and glReadPixels

I am using OpenGL ES 2.0 and GLSL to draw objects.
Then I want to read pixels back from whatever my fragment shader draws on my screen.
I am blending two grayscale textures and at the end of my fragment shader, I have:
gl_FragColor = vec4(something...blending textures);
I know that gl_FragColor means the final color will be written to the frame buffer.
(According to http://nehe.gamedev.net/article/glsl_an_introduction/25007/)
Given that, I did something like
GLubyte *pixels = new GLubyte[320*240];
glReadPixels(0, 0, 320,240, GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels);
After drawing an image using that, at first I get the same as the original texture1, and then I get an empty black image. Can someone help me figure out what's wrong? Do I need to do something with an FBO? I am kinda lost... :(
Per the canonical ES 2.0 documentation for glReadPixels, the format parameter:
Specifies the format of the pixel data. The following symbolic values
are accepted: GL_ALPHA, GL_RGB, and GL_RGBA.
So GL_LUMINANCE is not a supported read format in OpenGL ES.
Beyond that, assuming you have a working frame buffer of some sort and you're applying your fragment shader by rendering a suitably placed piece of geometry to the frame buffer, glReadPixels should be the correct thing to use.
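Putting that together: read back GL_RGBA into a width*height*4 byte buffer (note the original code only allocated width*height bytes) and extract the grayscale value on the CPU, e.g. by keeping the red channel, since the source textures are grayscale and R == G == B. A sketch of the readback-side conversion in plain C (the function is illustrative, not part of any API):

```c
#include <assert.h>

/* Collapse a width*height*4 GL_RGBA readback buffer into a
 * width*height luminance buffer by keeping the red channel. */
static void rgba_to_luminance(const unsigned char *rgba,
                              unsigned char *lum,
                              int width, int height)
{
    for (int i = 0; i < width * height; ++i)
        lum[i] = rgba[i * 4];  /* grayscale source: R == G == B */
}
```

For the 320x240 case in the question, the glReadPixels destination buffer would be `new GLubyte[320*240*4]` with format GL_RGBA, and this helper would then produce the 320*240 grayscale image.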
