Alpha blending in OpenGL ES not working

I'm trying to vary the transparency of a texture drawn onto a quad. The code below works fine, except that the alpha set with glColor4f has no effect. What are the possible reasons for this? Is it likely to be a GL setting somewhere else in the program?
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_COLOR, GL_ONE_MINUS_SRC_ALPHA);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId);
glColor4f(1.0f, 1.0f, 1.0f, 0.3f);
glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0, quadVertices);
glVertexAttribPointer(normalHandle, 3, GL_FLOAT, GL_FALSE, 0, quadNormals);
glVertexAttribPointer(textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, quadTexCoords);
glEnableVertexAttribArray(vertexHandle);
glEnableVertexAttribArray(normalHandle);
glEnableVertexAttribArray(textureCoordHandle);
glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE, (GLfloat*)&modelViewProjectionButton.data[0] );
glDrawElements(GL_TRIANGLES, NUM_QUAD_INDEX, GL_UNSIGNED_SHORT, quadIndices);
glDisableVertexAttribArray(vertexHandle);
glDisableVertexAttribArray(normalHandle);
glDisableVertexAttribArray(textureCoordHandle);
Edit:
I managed to do it as per the answer below. If anyone's interested, I put a uniform variable in my shader, called alpha, like this:
precision mediump float;
uniform sampler2D texSampler2D;
varying vec2 texCoord;
uniform float alpha;
void main()
{
    gl_FragColor = texture2D(texSampler2D, texCoord);
    gl_FragColor = gl_FragColor * alpha;
}
and then when I'm drawing the scene I use it like this (for example, to set 0.5 alpha):
GLint alphaLocation = glGetUniformLocation(shaderProgramID, "alpha");
glUniform1f(alphaLocation, 0.5);

You are obviously using OpenGL ES 2.0 (because you are using glVertexAttribPointer and glUniformMatrix4fv), which actually makes this a bit puzzling. OpenGL ES 2.0 does not define glColor<N> (...); that was part of the fixed-function API and should not be defined in a compliant OpenGL ES 2.0 implementation.
Even if it is defined, there is no mechanism in GLSL ES to get the "current color" in a shader. Desktop GL has the pre-declared variable gl_Color in compatibility GLSL profiles, but GLSL ES does not.
You will need to use a GLSL uniform if you want to define the color this way instead of using a per-vertex attribute.
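For example (a minimal sketch, not the asker's actual code; the uniform name u_color is an assumption), the fragment shader can take the full RGBA color as a uniform:

uniform sampler2D texSampler2D;
uniform lowp vec4 u_color;
varying mediump vec2 texCoord;
void main()
{
    gl_FragColor = texture2D(texSampler2D, texCoord) * u_color;
}

and the draw code sets it where glColor4f used to be called:

GLint colorLocation = glGetUniformLocation(shaderProgramID, "u_color");
glUniform4f(colorLocation, 1.0f, 1.0f, 1.0f, 0.3f);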

Related

How can I get Alpha blending transparency working in OpenGL ES 2.0?

I'm in the midst of porting some code from OpenGL ES 1.x to OpenGL ES 2.0, and I'm struggling to get transparency working as it did before; all my triangles are being rendered fully opaque.
My OpenGL setup has these lines:
// Draw objects back to front
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(false);
And my shaders look like this:
attribute vec4 Position;
uniform highp mat4 mat;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
void main(void) {
    DestinationColor = SourceColor;
    gl_Position = Position * mat;
}
and this:
varying lowp vec4 DestinationColor;
void main(void) {
    gl_FragColor = DestinationColor;
}
What could be going wrong?
EDIT: If I set the alpha manually to 0.5 in the fragment shader (or indeed in the vertex shader) as suggested by keaukraine below, then everything is rendered transparent. Furthermore, if I change the color values I'm passing in to OpenGL to be floats instead of unsigned bytes, then the code works correctly.
So it looks as though something is wrong with the code that was passing the color information into OpenGL, and I'd still like to know what the problem was.
My vertices were defined like this (unchanged from the OpenGL ES 1.x code):
typedef struct
{
    GLfloat x, y, z, rhw;
    GLubyte r, g, b, a;
} Vertex;
And I was using the following code to pass them into OpenGL (similar to the OpenGL ES 1.x code):
glBindBuffer(GL_ARRAY_BUFFER, glTriangleVertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * nTriangleVertices, triangleVertices, GL_STATIC_DRAW);
glUniformMatrix4fv(matLocation, 1, GL_FALSE, m);
glVertexAttribPointer(positionSlot, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, x));
glVertexAttribPointer(colorSlot, 4, GL_UNSIGNED_BYTE, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, r));
glDrawArrays(GL_TRIANGLES, 0, nTriangleVertices);
glBindBuffer(GL_ARRAY_BUFFER, 0);
What is wrong with the above?
Your Colour vertex attribute values are not being normalized. This means that the vertex shader sees values for that attribute in the range 0-255.
Change the fourth argument of glVertexAttribPointer to GL_TRUE and the values will be normalized (scaled to the range 0.0-1.0) as you originally expected.
see http://www.khronos.org/opengles/sdk/docs/man/xhtml/glVertexAttribPointer.xml
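Applied to the code in the question, only the fourth argument of the colour attribute pointer changes (a sketch using the question's own names):

// GL_TRUE tells GL to normalize the GLubyte values from 0-255 to 0.0-1.0
glVertexAttribPointer(colorSlot, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), (void*)offsetof(Vertex, r));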
I suspect the DestinationColor varying to your fragment shader always contains 0xFF for the alpha channel. If so, that is your problem. Try changing that so that the alpha actually varies.
Update: we found 2 good solutions:
1. Use floats instead of unsigned bytes for the color values that are supplied to DestinationColor in the fragment shader.
2. Or, as GuyRT pointed out, change the fourth argument of glVertexAttribPointer to GL_TRUE to tell OpenGL ES to normalize the values when they are converted from integers to floats.
To debug this situation, you can try setting constant alpha and see if it makes a difference:
varying lowp vec4 DestinationColor;
void main(void) {
    gl_FragColor = DestinationColor;
    gl_FragColor.a = 0.5; /* try other values from 0 to 1 to test blending */
}
Also, you should ensure that you're picking an EGL config with an alpha channel.
And don't forget to specify precision for floats in fragment shaders! See the GLSL ES 1.0 specification (http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf, section 4.5.3), and please see this answer: https://stackoverflow.com/a/6336285/405681
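For instance, a single default-precision statement at the top of the fragment shader covers any float declarations that lack an explicit qualifier:

precision mediump float; // default precision for floats in this fragment shader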

Is it possible to copy data from one framebuffer to another in OpenGL?

I guess it is somehow possible since this:
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, _multisampleFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, _framebuffer);
glResolveMultisampleFramebufferAPPLE();
does exactly that, and on top resolves the multisampling. However, it's an Apple extension and I was wondering if there is something similar that copies all the logical buffers from one framebuffer to another and doesn't do the multisampling part in the vanilla implementation. GL_READ_FRAMEBUFFER doesn't seem to be a valid target, so I'm guessing there is no direct way? How about workarounds?
EDIT: Seems it's possible to use glCopyImageSubData in OpenGL 4, unfortunately not in my case since I'm using OpenGL ES 2.0 on iPhone, which seems to be lacking that function. Any other way?
glBlitFramebuffer accomplishes what you are looking for. Additionally, you can blit one texture onto another without requiring two framebuffers. I'm not sure using one FBO is possible with OpenGL ES 2.0, but the following code could be easily modified to use two FBOs; you just need to attach different textures to different framebuffer attachments. The glBlitFramebuffer function will even manage downsampling/upsampling for anti-aliasing applications! Here is an example of its usage:
// bind fbo as read / draw fbo
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,m_fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, m_fbo);
// bind source texture to color attachment
glBindTexture(GL_TEXTURE_2D,m_textureHandle0);
glFramebufferTexture2D(GL_TEXTURE_2D, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_textureHandle0, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
// bind destination texture to another color attachment
glBindTexture(GL_TEXTURE_2D,m_textureHandle1);
glFramebufferTexture2D(GL_TEXTURE_2D, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, m_textureHandle1, 0);
glReadBuffer(GL_COLOR_ATTACHMENT1);
// specify source, destination drawing (sub)rectangles.
glBlitFramebuffer(from.left(), from.top(), from.width(), from.height(),
                  to.left(), to.top(), to.width(), to.height(), GL_COLOR_BUFFER_BIT, GL_NEAREST);
// release state
glBindTexture(GL_TEXTURE_2D,0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,0);
Tested in OpenGL 4; glBlitFramebuffer is not supported in OpenGL ES 2.0 (it only became part of the core API in OpenGL ES 3.0).
I've fixed the errors in the previous answer and generalized it into a function that can support two framebuffers:
// Assumes the two textures are the same dimensions
void copyFrameBufferTexture(int width, int height, int fboIn, int textureIn, int fboOut, int textureOut)
{
    // Bind input FBO + texture to a color attachment
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fboIn);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureIn, 0);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    // Bind destination FBO + texture to another color attachment
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboOut);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, textureOut, 0);
    glDrawBuffer(GL_COLOR_ATTACHMENT1);
    // Specify source, destination drawing (sub)rectangles
    glBlitFramebuffer(0, 0, width, height,
                      0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    // Unbind the color attachments
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, 0, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
}
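A possible call site (a sketch; the FBO and texture names here are hypothetical, and both textures are assumed to be width x height):

// Copy the color contents of srcTexture (via srcFBO) into dstTexture (via dstFBO)
copyFrameBufferTexture(width, height, srcFBO, srcTexture, dstFBO, dstTexture);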
You can do it directly with OpenGL ES 2.0, and it seems that no extension is needed either.
I am not really sure what you are trying to achieve, but in a general way: simply remove the attachments of the FBO in which you have performed your off-screen rendering. Then bind the default FBO to be able to draw on screen; here you can simply draw a quad with an orthographic camera that fills the screen and a shader that takes your off-screen generated textures as input.
You will be able to do the resolve too if you are using multi-sampled textures.
glBindFramebuffer(GL_FRAMEBUFFER, off_screenFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0); // Default FBO, on iOS it is 1 if I am correct
// Set the viewport at the size of screen
// Use your compositing shader (it doesn't have to manage any transform)
// Active and bind your textures
// Sent textures uniforms
// Draw your quad
Here is an example of the shaders:
// Vertex
attribute vec2 in_position2D;
attribute vec2 in_texCoord0;
varying lowp vec2 v_texCoord0;
void main()
{
    v_texCoord0 = in_texCoord0;
    gl_Position = vec4(in_position2D, 0.0, 1.0);
}
// Fragment
uniform sampler2D u_texture0;
varying lowp vec2 v_texCoord0;
void main()
{
    gl_FragColor = texture2D(u_texture0, v_texCoord0);
}
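To complete the picture, a minimal sketch of the matching quad draw (the program and texture names are assumptions; the attribute and uniform names come from the shaders above, and error checking is omitted):

// Fullscreen quad in clip space, interleaved as x, y, s, t
const GLfloat quad[] = {
    -1.0f, -1.0f,  0.0f, 0.0f,
     1.0f, -1.0f,  1.0f, 0.0f,
    -1.0f,  1.0f,  0.0f, 1.0f,
     1.0f,  1.0f,  1.0f, 1.0f,
};
glUseProgram(compositingProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, offscreenTexture);
glUniform1i(glGetUniformLocation(compositingProgram, "u_texture0"), 0);
GLint posLoc = glGetAttribLocation(compositingProgram, "in_position2D");
GLint texLoc = glGetAttribLocation(compositingProgram, "in_texCoord0");
glEnableVertexAttribArray(posLoc);
glEnableVertexAttribArray(texLoc);
glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad);
glVertexAttribPointer(texLoc, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad + 2);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);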

How to efficiently copy depth buffer to texture on OpenGL ES

I'm trying to get some shadowing effects to work in OpenGL ES 2.0 on iOS by porting some code from standard GL. Part of the sample involves copying the depth buffer to a texture:
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, 800, 600, 0);
However, it appears the glCopyTexImage2D is not supported on ES. Reading a related thread, it seems I can use the frame buffer and fragment shaders to extract the depth data. So I'm trying to write the depth component to the color buffer, then copying it:
// clear everything
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// turn on depth rendering
glUseProgram(m_BaseShader.uiId);
// this is a switch to cause the fragment shader to just dump out the depth component
glUniform1i(uiBaseShaderRenderDepth, true);
// and for this, the color buffer needs to be on
glColorMask(GL_TRUE,GL_TRUE,GL_TRUE,GL_TRUE);
// and clear it to 1.0, like how the depth buffer starts
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// draw the scene
DrawScene();
// bind our texture
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);
Here is the fragment shader:
uniform sampler2D sTexture;
uniform bool bRenderDepth;
varying lowp float LightIntensity;
varying mediump vec2 TexCoord;
void main()
{
    if (bRenderDepth) {
        gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
    } else {
        gl_FragColor = vec4(texture2D(sTexture, TexCoord).rgb * LightIntensity, 1.0);
    }
}
I have experimented with not having the 'bRenderDepth' branch, and it doesn't speed it up significantly.
Right now, pretty much just doing this step, it's at 14 fps, which obviously is not acceptable. If I pull out the copy, it's way above 30 fps. I'm getting two suggestions from the Xcode OpenGLES analyzer on the copy command:
file://localhost/Users/xxxx/Documents/Development/xxxx.mm: error:
Validation Error: glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0,
960, 640, 0) : Height<640> is not a power of two
file://localhost/Users/xxxx/Documents/Development/xxxx.mm: warning:
GPU Wait on Texture: Your app updated a texture that is currently
used for rendering. This caused the CPU to wait for the GPU to
finish rendering.
I'll work to resolve the two above issues (perhaps they are the crux of it). In the meantime, can anyone suggest a more efficient way to pull that depth data into a texture?
Thanks in advance!
iOS devices generally support OES_depth_texture, so on devices where the extension is present, you can set up a framebuffer object with a depth texture as its only attachment:
GLuint g_uiDepthBuffer;
glGenTextures(1, &g_uiDepthBuffer);
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
// glTexParameteri calls omitted for brevity
GLuint g_uiDepthFramebuffer;
glGenFramebuffers(1, &g_uiDepthFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, g_uiDepthFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, g_uiDepthBuffer, 0);
Your texture then receives all the values being written to the depth buffer when you draw your scene (you can use a trivial fragment shader for this), and you can texture from it directly without needing to call glCopyTexImage2D.
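For completeness, the omitted texture parameters might look like this; nearest filtering and edge clamping are the safe choices for a non-power-of-two depth texture in ES 2.0 (a sketch, not the only valid settings):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);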

My triangle doesn't render when I use OpenGL Core Profile 3.2

I have a Cocoa (OSX) project that is currently very simple; I'm just trying to grasp the general concepts behind using OpenGL. I was able to get a triangle to display in my view, but when I went to write my vertex and fragment shaders, I realized I was running the legacy OpenGL profile. So I switched to the OpenGL 3.2 core profile by setting the properties in the pixel format of the view in question before generating the context, but now the triangle doesn't render, even without my vertex or fragment shaders.
I have a controller class for the view that's instantiated in the nib. On -awakeFromNib it sets up the pixel format and the context:
NSOpenGLPixelFormatAttribute attr[] =
{
    NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core,
    0
};
NSOpenGLPixelFormat *glPixForm = [[NSOpenGLPixelFormat alloc] initWithAttributes:attr];
[self.mainView setPixelFormat:glPixForm];
self.glContext = [self.mainView openGLContext];
Then I generate the VAO:
glGenVertexArrays(1, &vertexArrayID);
glBindVertexArray(vertexArrayID);
Then the VBO:
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
g_vertex_buffer_data, the actual data for that buffer, is defined as follows:
static const GLfloat g_vertex_buffer_data[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f,
};
Here's the code for actually drawing:
[_glContext setView:self.mainView];
[_glContext makeCurrentContext];
glViewport(0, 0, [self.mainView frame].size.width, [self.mainView frame].size.height);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer);
glVertexAttribPointer(
    0,         // attribute 0 (bound to "position" via glBindAttribLocation)
    3,         // 3 components per vertex
    GL_FLOAT,  // type
    GL_FALSE,  // not normalized
    0,         // stride
    (void*)0   // offset into the bound buffer
);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 3); // Starting from vertex 0; 3 vertices total -> 1 triangle
glDisableVertexAttribArray(0);
glFlush();
This code draws the triangle fine if I comment out the NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core, in the NSOpenGLPixelFormatAttribute array, but as soon as I enable OpenGL Core Profile 3.2, it just displays black. Can anyone tell me what I'm doing wrong here?
EDIT: This issue still happens whether I turn my vertex and fragment shaders on or not, but here are my shaders in case it is helpful:
Vertex shader:
#version 150
in vec3 position;
void main() {
    gl_Position.xyz = position;
}
Fragment shader:
#version 150
out vec3 color;
void main() {
    color = vec3(1,0,0);
}
And right before linking the program, I make this call to bind the attribute location:
glBindAttribLocation(programID, 0, "position");
EDIT 2:
I don't know if this helps at all, but I just stepped through my program, running glGetError() and it looks like everything is fine until I actually call glDrawArrays(), then it returns GL_INVALID_OPERATION. I'm trying to figure out why this could be occurring, but still having no luck.
I figured this out, and it's sadly a very stupid mistake on my part.
I think the issue is that you need a vertex shader and a fragment shader when using the 3.2 core profile; you can't just render without them. The reason it wasn't working with my shaders was...wait for it...after linking my shader program, I forgot to store the programID in the ivar in my class, so later when I called glUseProgram() I was just calling it with a zero parameter.
I guess one of the main sources of confusion was the fact that I expected the 3.2 core profile to work without any vertex or fragment shaders.
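In code, the fix amounts to actually keeping the linked program ID around (a sketch; the shader IDs and the ivar name _programID are hypothetical):

GLuint programID = glCreateProgram();
glAttachShader(programID, vertexShaderID);   // vertexShaderID/fragmentShaderID: assumed compiled shaders
glAttachShader(programID, fragmentShaderID);
glBindAttribLocation(programID, 0, "position");
glLinkProgram(programID);
_programID = programID;   // the forgotten step: store the ID for the draw path
// ... later, when drawing:
glUseProgram(_programID); // no longer glUseProgram(0)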

Why am I not able to attach this texture uniform to my GLSL fragment shader?

In my Mac application, I define a rectangular texture based on YUV 4:2:2 data from an attached camera. Using standard vertex and texture coordinates, I can draw this to a rectangular area on the screen without any problems.
However, I would like to use a GLSL fragment shader to process these image frames on the GPU, and am having trouble passing in the rectangular video texture as a uniform to the fragment shader. When I attempt to do so, the texture simply reads as black.
The shader program compiles, links, and passes validation. I am receiving the proper address for the uniform from the shader program. Other uniforms, such as floating point values, pass in correctly and the fragment shader responds to changes in these values. The fragment shader receives the correct texture coordinates. I've also sprinkled my code liberally with glGetError() and seen no errors anywhere.
The vertex shader is as follows:
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = gl_Color;
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}
and the fragment shader is as follows:
uniform sampler2D videoFrame;
void main()
{
    gl_FragColor = texture2D(videoFrame, gl_TexCoord[0].st);
}
This should simply display the texture on my rectangular geometry.
The relevant drawing code is as follows:
static const GLfloat squareVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};
const GLfloat textureVertices[] = {
    0.0, videoImageSize.height,
    videoImageSize.width, videoImageSize.height,
    0.0, 0.0,
    videoImageSize.width, 0.0
};
CGLSetCurrentContext(glContext);
if(!readyToDraw)
{
[self initGL];
readyToDraw = YES;
}
glViewport(0, 0, (GLfloat)self.bounds.size.width, (GLfloat)self.bounds.size.height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &textureName);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, textureName);
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, videoImageSize.width, videoImageSize.height, 0, GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, videoTexture);
glUseProgram(filterProgram);
glUniform1i(uniforms[UNIFORM_VIDEOFRAME], 0);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, squareVertices);
glTexCoordPointer(2, GL_FLOAT, 0, textureVertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[super drawInCGLContext:glContext pixelFormat:pixelFormat forLayerTime:interval displayTime:timeStamp];
glDeleteTextures(1, &textureName);
This code resides within a CAOpenGLLayer, where the superclass's -drawInCGLContext:pixelFormat:forLayerTime:displayTime: simply runs glFlush().
The uniform address is read using code like the following:
uniforms[UNIFORM_VIDEOFRAME] = glGetUniformLocation(filterProgram, "videoFrame");
As I said, if I comment out the glUseProgram() and glUniform1i() lines, this textured rectangle draws properly. Leaving them in leads to a black rectangle being drawn.
What could be preventing my texture uniform from being passed into my fragment shader?
Not sure about the GLSL version you're using, but from 1.40 upwards there's the type sampler2DRect specifically for accessing rectangle (non-power-of-two) textures. It might be what you're looking for; however, I don't know how rectangular textures were handled before GLSL 1.40.
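Since the question binds the texture to GL_TEXTURE_RECTANGLE_EXT, the fragment shader would need to sample it as a rectangle texture. A sketch, assuming the ARB_texture_rectangle GLSL extension is available (note that rectangle textures take non-normalized pixel coordinates, which matches the textureVertices in the question):

#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect videoFrame;
void main()
{
    gl_FragColor = texture2DRect(videoFrame, gl_TexCoord[0].st);
}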
