OpenGL Integer Textures on OSX (NVIDIA GeForce GT 650M) - macos

I am having problems reading from an unsigned integer texture in my Fragment Shader on OSX 10.9.4 with a GeForce GT 650M.
I am using the OpenGL 3.2 core profile.
GL_VERSION reports as: 4.1 NVIDIA-8.26.26 310.40.45f01
GL_SHADING_LANGUAGE_VERSION reports as: 4.10
Here are the relevant parts of my setup in C++:
// ..
GLuint texID;
glGenTextures(1, &texID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texID);
// data is a void* arg
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 1024, 2, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, data);
// programID is a GLuint arg
GLint const uniID = getUniformID(programID, "my_texture");
glUniform1i(uniID, GL_TEXTURE0);
// ..
Here are the relevant parts from my Fragment Shader code:
#version 150 core
// ..
uniform usampler2D my_texture;
// ..
void main()
{
ivec2 texSize = textureSize(my_texture, 0);
uint pixelVal = texelFetch(my_texture, ivec2(0, 0), 0).r;
// ..
}
texSize is ivec2(1,1), even though I specified a 1024x2 texture.
pixelVal is a garbage uint.
There are no GL errors (glGetError was called after every OpenGL API call; those calls have been removed from the listing above).
I get the same results when using an integer texture and when using an RGBA unsigned integer texture.
When I change the texture to a float texture, things work as expected.
texSize is ivec2(1024,2)
pixelVal is the correct float value
When I run the same unsigned integer texture code from above on Windows 7 (ATI Radeon HD 5450 with extensions provided by GLEW), I get the expected results in the Fragment Shader:
texSize is ivec2(1024,2)
pixelVal is the correct uint value
Can anyone shed some light on what's going wrong with integer textures on OSX?
Perhaps there's a bug with my NVIDIA card. Searching has not revealed any reported cases...

You have a problem with this call:
glUniform1i(uniID, GL_TEXTURE0);
The value that needs to be set for a sampler uniform is the index of the texture unit, not the corresponding enum. To sample from texture unit 0, this needs to be:
glUniform1i(uniID, 0);
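For context, GL_TEXTURE0 is the enum value 0x84C0 (decimal 33984), so passing it where a unit index is expected points the sampler at a nonexistent texture unit. A minimal sketch of the correct pairing (unit is a hypothetical plain integer, not from the original code):
int unit = 0;
glActiveTexture(GL_TEXTURE0 + unit); // the enum belongs here
glBindTexture(GL_TEXTURE_2D, texID);
glUniform1i(uniID, unit);            // the plain index belongs here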

Related

Storing floats in a texture in OpenGL ES

In WebGL, I am trying to create a texture with texels each consisting of 4 float values. Here I attempt to create a simple texture with one vec4 in it.
var textureData = new Float32Array(4);
var texture = gl.createTexture();
gl.activeTexture( gl.TEXTURE0 );
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(
// target, level, internal format, width, height
gl.TEXTURE_2D, 0, gl.RGBA, 1, 1,
// border, data format, data type, pixels
0, gl.RGBA, gl.FLOAT, textureData
);
My intent is to sample it in the shader using a sampler like so:
uniform sampler2D data;
...
vec4 retrieved = texture2D(data, vec2(0.0, 0.0));
However, I am getting an error during gl.texImage2D:
WebGL: INVALID_ENUM: texImage2D: invalid texture type
WebGL error INVALID_ENUM in texImage2D(TEXTURE_2D, 0, RGBA, 1, 1, 0, RGBA, FLOAT,
[object Float32Array])
Comparing the OpenGL ES spec and the OpenGL 3.3 spec for texImage2D, it seems like I am not allowed to use gl.FLOAT. In that case, how would I accomplish what I am trying to do?
You can create a byte array from your float array. Each float takes 4 bytes (32-bit float). This array can be put into a texture using the standard RGBA format with unsigned bytes, which creates a texture where each texel contains a single 32-bit floating-point number. That seems to be exactly what you want.
The only problem is that your floating-point value is split across the four channels when you retrieve it from the texture in your fragment shader. So what you are looking for is most likely "how to convert a vec4 into a single float".
Note that what you are trying for, an internal format of RGBA made of 32-bit floats, will not work here: the texture will always be 32 bits per texel, so forcing floats into it would result in clamping or precision loss. And even if a texel did consist of four 32-bit RGBA floats, your shader would most likely treat them as lowp when calling texture2D at some point.
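For what it's worth, the byte reinterpretation this answer describes is just a bit-level copy; a sketch in C terms (in WebGL you would instead create a Uint8Array view over the Float32Array's underlying buffer):
#include <string.h> // for memcpy
float value = 3.14159f;
unsigned char texel[4];
memcpy(texel, &value, sizeof value); // the float's 4 bytes become one RGBA texel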
The solution to my problem is actually quite simple! I just needed to type
var float_texture_ext = gl.getExtension('OES_texture_float');
Now WebGL can use texture floats!
This MDN page tells us why:
Note: In WebGL, unlike in other GL APIs, extensions are only available if explicitly requested.

Image Rotation by using OpenGL ES

I'm working with OpenGL ES 2.0 on an OMAP3530 development board running Windows CE 7.
My task is to load a 24-bit image file, rotate it by an angle about the z-axis, and export the image (buffer).
For this task I've created an FBO for off-screen rendering, loaded the image file as a texture using glTexImage2D(), applied that texture to a quad, rotated the quad using the PVRTMat4::RotationZ() API, and read the result back using glReadPixels(). Since it is a single-frame process, I run the loop only once.
Here are the problems I'm facing now.
1) The APIs take a different amount of processing time on every run, i.e. each time I run my application I get different timings for all the APIs.
2) glDrawArrays() takes too much time (~50 ms - 80 ms).
3) glReadPixels() also takes too much time, ~95 ms for an 800x600 image.
4) Loading a 32-bit image is much faster than a 24-bit image, so a conversion is needed.
If anybody has faced or solved a similar problem, kindly suggest any improvements.
Here is the Code snippet of my Application.
void BindTexture(){
glGenTextures(1, &m_uiTexture);
glBindTexture(GL_TEXTURE_2D, m_uiTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ImageWidth, ImageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, pTexData);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_LINEAR );
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
}
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, TCHAR *lpCmdLine, int nCmdShow)
{
// Fragment and vertex shaders code
char* pszFragShader = "Same as in RenderToTexture sample";
char* pszVertShader = "Same as in RenderToTexture sample";
CreateWindow(ImageWidth, ImageHeight); // For this I referred to the OGLES2HelloTriangle_Windows.cpp example
LoadImageBuffers();
BindTexture();
// Generate & bind the framebuffer and renderbuffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_uiTexture, 0);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, ImageWidth, ImageHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_auiDepthBuffer);
BindTexture();
GLfloat Angle = 0.02f;
GLfloat afVertices[] = { /* vertices to draw a quad */ };
glGenBuffers(1, &ui32Vbo);
LoadVBOs(); // APIs to load the VBOs
// Render a single frame
while(g_bDemoDone==false)
{
glBindFramebuffer(GL_FRAMEBUFFER, m_auiFbo);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
PVRTMat4 mRot,mTrans, mMVP;
mTrans = PVRTMat4::Translation(0,0,0);
mRot = PVRTMat4::RotationZ(Angle);
glBindBuffer(GL_ARRAY_BUFFER, ui32Vbo);
glDisable(GL_CULL_FACE);
int i32Location = glGetUniformLocation(uiProgramObject, "myPMVMatrix");
mMVP = mTrans * mRot ;
glUniformMatrix4fv(i32Location, 1, GL_FALSE, mMVP.ptr());
// Pass the vertex data
glEnableVertexAttribArray(VERTEX_ARRAY);
glVertexAttribPointer(VERTEX_ARRAY, 3, GL_FLOAT, GL_FALSE, m_ui32VertexStride, 0);
// Pass the texture coordinates data
glEnableVertexAttribArray(TEXCOORD_ARRAY);
glVertexAttribPointer(TEXCOORD_ARRAY, 2, GL_FLOAT, GL_FALSE, m_ui32VertexStride, (void*) (3 * sizeof(GLfloat)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glReadPixels(0,0,ImageWidth ,ImageHeight,GL_RGBA,GL_UNSIGNED_BYTE,pOutTexData) ;
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
eglSwapBuffers(eglDisplay, eglSurface);
}
DeInitAll();
}
The PowerVR architecture cannot render a single frame and allow the ARM to read it back quickly. It is simply not designed to do that quickly: it is a tile-based, deferred rendering architecture. The execution times you are seeing are to be expected, and using an FBO is not going to make it faster either. Also, beware that the OpenGL ES drivers on OMAP for Windows CE are of really poor quality. Consider yourself lucky if they work at all.
A better design would be to display the OpenGL ES rendering directly to the DSS and avoid using glReadPixels() and the FBO completely.
I've improved the performance of rotating an image buffer by using multiple FBOs and PBOs.
Here is the pseudo-code snippet of my application.
InitGL()
GenerateShaders();
Generate3Textures(); // Generate 3 null textures
Generate3FBO(); // Generate 3 FBOs & attach each texture to one FBO
Generate3PBO(); // Generate 3 PBOs to read back from the FBOs
DrawGL()
{
BindFBO1;
BindTexture1;
UploadtoTexture1;
Do Some Processing & Draw it in FBO1;
BindFBO2;
BindTexture2;
UploadtoTexture2;
Do Some Processing & Draw it in FBO2;
BindFBO3;
BindTexture3;
UploadtoTexture3;
Do Some Processing & Draw it in FBO3;
BindFBO1;
ReadPixelfromFBO1;
UnpackToPBO1;
BindFBO2;
ReadPixelfromFBO2;
UnpackToPBO2;
BindFBO3;
ReadPixelfromFBO3;
UnpackToPBO3;
}
DeinitGL();
DeallocateALL();
This way I achieved a 50% performance increase for the overall processing.
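The core of the PBO trick is that glReadPixels with a pixel pack buffer bound returns immediately, so the next FBO can be processed while the previous transfer completes. A minimal sketch, assuming a context that exposes pixel pack buffers (core in OpenGL ES 3.0); pbo is a placeholder and ImageWidth/ImageHeight are as in the question:
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, ImageWidth * ImageHeight * 4, NULL, GL_STREAM_READ);
// With a pack buffer bound, the last argument is a byte offset into the
// buffer rather than a client pointer, and the call does not block:
glReadPixels(0, 0, ImageWidth, ImageHeight, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// ... process the next FBO while the transfer completes ...
void* pixels = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, ImageWidth * ImageHeight * 4, GL_MAP_READ_BIT);
// use pixels, then release:
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);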

How can I get Alpha blending transparency working in OpenGL ES 2.0?

I'm in the midst of porting some code from OpenGL ES 1.x to OpenGL ES 2.0, and I'm struggling to get transparency working as it did before; all my triangles are being rendered fully opaque.
My OpenGL setup has these lines:
// Draw objects back to front
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(false);
And my shaders look like this:
attribute vec4 Position;
uniform highp mat4 mat;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
void main(void) {
DestinationColor = SourceColor;
gl_Position = Position * mat;
}
and this:
varying lowp vec4 DestinationColor;
void main(void) {
gl_FragColor = DestinationColor;
}
What could be going wrong?
EDIT: If I manually set the alpha to 0.5 in the fragment shader (or indeed in the vertex shader), as suggested by keaukraine below, then everything is transparent. Furthermore, if I change the color values I'm passing to OpenGL to floats instead of unsigned bytes, the code works correctly.
So it looks as though something is wrong with the code that passes the color information into OpenGL, and I'd still like to know what the problem was.
My vertices were defined like this (unchanged from the OpenGL ES 1.x code):
typedef struct
{
GLfloat x, y, z, rhw;
GLubyte r, g, b, a;
} Vertex;
And I was using the following code to pass them into OpenGL (similar to the OpenGL ES 1.x code):
glBindBuffer(GL_ARRAY_BUFFER, glTriangleVertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * nTriangleVertices, triangleVertices, GL_STATIC_DRAW);
glUniformMatrix4fv(matLocation, 1, GL_FALSE, m);
glVertexAttribPointer(positionSlot, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, x));
glVertexAttribPointer(colorSlot, 4, GL_UNSIGNED_BYTE, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, r));
glDrawArrays(GL_TRIANGLES, 0, nTriangleVertices);
glBindBuffer(GL_ARRAY_BUFFER, 0);
What is wrong with the above?
Your Colour vertex attribute values are not being normalized. This means that the vertex shader sees values for that attribute in the range 0-255.
Change the fourth argument of glVertexAttribPointer to GL_TRUE and the values will be normalized (scaled to the range 0.0-1.0) as you originally expected.
see http://www.khronos.org/opengles/sdk/docs/man/xhtml/glVertexAttribPointer.xml
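Applied to the code in the question, that is a one-line change:
glVertexAttribPointer(colorSlot, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), (void*)offsetof(Vertex, r));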
I suspect the DestinationColor varying to your fragment shader always contains 0xFF for the alpha channel? If so, that is your problem. Try changing that so that the alpha actually varies.
Update: We found 2 good solutions:
Use floats instead of unsigned bytes for the varyings that are supplied to the DestinationColor in the fragment shader.
Or, as GuyRT pointed out, you can change the fourth argument of glVertexAttribPointer to GL_TRUE to tell OpenGL ES to normalize the values when they are converted from integers to floats.
To debug this situation, you can try setting constant alpha and see if it makes a difference:
varying lowp vec4 DestinationColor;
void main(void) {
gl_FragColor = DestinationColor;
gl_FragColor.a = 0.5; /* try other values from 0 to 1 to test blending */
}
Also, you should ensure that you're picking an EGL config with an alpha channel.
And don't forget to specify the precision for floats in fragment shaders! Read the OpenGL ES 2.0 Shading Language specification (http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf, section 4.5.3), and please see this answer: https://stackoverflow.com/a/6336285/405681
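On the EGL config point, a minimal sketch of explicitly requesting an alpha channel (display is assumed to be an already-initialized EGLDisplay):
EGLint attribs[] = {
    EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
    EGL_ALPHA_SIZE, 8, // without this, the chosen config may have no destination alpha
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_NONE
};
EGLConfig config;
EGLint numConfigs;
eglChooseConfig(display, attribs, &config, 1, &numConfigs);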

Is it possible to copy data from one framebuffer to another in OpenGL?

I guess it is somehow possible since this:
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, _multisampleFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, _framebuffer);
glResolveMultisampleFramebufferAPPLE();
does exactly that, and on top of that resolves the multisampling. However, it's an Apple extension, and I was wondering if there is something similar in the vanilla implementation that copies all the logical buffers from one framebuffer to another, without the multisampling part. GL_READ_FRAMEBUFFER doesn't seem to be a valid target, so I'm guessing there is no direct way? How about workarounds?
EDIT: Seems it's possible to use glCopyImageSubData in OpenGL 4, unfortunately not in my case since I'm using OpenGL ES 2.0 on iPhone, which seems to be lacking that function. Any other way?
glBlitFramebuffer accomplishes what you are looking for. Additionally, you can blit one texture onto another without requiring two framebuffers. I'm not sure whether using one FBO is possible in OpenGL ES 2.0, but the following code could easily be modified to use two FBOs; you just need to attach the textures to different framebuffer attachments. The glBlitFramebuffer function will even manage downsampling/upsampling for anti-aliasing applications! Here is an example of its usage:
// bind fbo as read / draw fbo
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,m_fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, m_fbo);
// bind source texture to color attachment
glBindTexture(GL_TEXTURE_2D,m_textureHandle0);
glFramebufferTexture2D(GL_TEXTURE_2D, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_textureHandle0, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
// bind destination texture to another color attachment
glBindTexture(GL_TEXTURE_2D,m_textureHandle1);
glFramebufferTexture2D(GL_TEXTURE_2D, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, m_textureHandle1, 0);
glReadBuffer(GL_COLOR_ATTACHMENT1);
// specify source, destination drawing (sub)rectangles.
glBlitFramebuffer(from.left(),from.top(), from.width(), from.height(),
to.left(),to.top(), to.width(), to.height(), GL_COLOR_BUFFER_BIT, GL_NEAREST);
// release state
glBindTexture(GL_TEXTURE_2D,0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,0);
Tested in OpenGL 4; glBlitFramebuffer is not supported in OpenGL ES 2.0.
I've fixed errors in the previous answer and generalized into a function that can support two framebuffers:
// Assumes the two textures are the same dimensions
void copyFrameBufferTexture(int width, int height, int fboIn, int textureIn, int fboOut, int textureOut)
{
// Bind input FBO + texture to a color attachment
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboIn);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureIn, 0);
glReadBuffer(GL_COLOR_ATTACHMENT0);
// Bind destination FBO + texture to another color attachment
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboOut);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, textureOut, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
// specify source, destination drawing (sub)rectangles.
glBlitFramebuffer(0, 0, width, height,
0, 0, width, height,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
// unbind the color attachments
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, 0, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
}
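A hypothetical call site (all names are placeholders; both textures must have the same dimensions):
copyFrameBufferTexture(1024, 768, srcFbo, srcTexture, dstFbo, dstTexture);
Note that this version still relies on glBlitFramebuffer and glDrawBuffer, so it targets desktop OpenGL rather than OpenGL ES 2.0.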
You cannot do it directly in OpenGL ES 2.0, and it seems that there is no extension for it either.
I am not really sure what you are trying to achieve, but in a general way: simply remove the attachments of the FBO in which you have accomplished your off-screen rendering. Then bind the default FBO to be able to draw on screen; here you can simply draw a quad with an orthographic camera that fills the screen, using a shader that takes your off-screen generated textures as input.
You will be able to do the resolve too if you are using multi-sampled textures.
glBindFramebuffer(GL_FRAMEBUFFER, off_screenFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0); // Default FBO, on iOS it is 1 if I am correct
// Set the viewport at the size of screen
// Use your compositing shader (it doesn't have to manage any transform)
// Active and bind your textures
// Sent textures uniforms
// Draw your quad
Here is an example of the shader:
// Vertex
attribute vec2 in_position2D;
attribute vec2 in_texCoord0;
varying lowp vec2 v_texCoord0;
void main()
{
v_texCoord0 = in_texCoord0;
gl_Position = vec4(in_position2D, 0.0, 1.0);
}
// Fragment
uniform sampler2D u_texture0;
varying lowp vec2 v_texCoord0;
void main()
{
gl_FragColor = texture2D(u_texture0, v_texCoord0);
}
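The client-side draw of that quad could look like the following sketch, where a_pos and a_uv are assumed to be the locations of in_position2D and in_texCoord0 (queried with glGetAttribLocation), and u_texture0 the sampler's uniform location:
GLfloat quad[] = {
    // x,    y,    u,    v
    -1.0f, -1.0f, 0.0f, 0.0f,
     1.0f, -1.0f, 1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 1.0f, 1.0f,
};
glEnableVertexAttribArray(a_pos);
glEnableVertexAttribArray(a_uv);
// client-side arrays are legal in ES 2.0, so no VBO is required here
glVertexAttribPointer(a_pos, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad);
glVertexAttribPointer(a_uv, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad + 2);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, offscreenTexture); // the off-screen rendered texture
glUniform1i(u_texture0, 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);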

How to efficiently copy depth buffer to texture on OpenGL ES

I'm trying to get some shadowing effects to work in OpenGL ES 2.0 on iOS by porting some code from standard GL. Part of the sample involves copying the depth buffer to a texture:
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, 800, 600, 0);
However, it appears that glCopyTexImage2D is not supported on ES. Reading a related thread, it seems I can use the framebuffer and fragment shaders to extract the depth data. So I'm trying to write the depth component to the color buffer, then copy it:
// clear everything
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// turn on depth rendering
glUseProgram(m_BaseShader.uiId);
// this is a switch to cause the fragment shader to just dump out the depth component
glUniform1i(uiBaseShaderRenderDepth, true);
// and for this, the color buffer needs to be on
glColorMask(GL_TRUE,GL_TRUE,GL_TRUE,GL_TRUE);
// and clear it to 1.0, like how the depth buffer starts
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// draw the scene
DrawScene();
// bind our texture
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);
Here is the fragment shader:
uniform sampler2D sTexture;
uniform bool bRenderDepth;
varying lowp float LightIntensity;
varying mediump vec2 TexCoord;
void main()
{
if(bRenderDepth) {
gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
} else {
gl_FragColor = vec4(texture2D(sTexture, TexCoord).rgb * LightIntensity, 1.0);
}
}
I have experimented with not having the 'bRenderDepth' branch, and it doesn't speed it up significantly.
Right now, pretty much just doing this step, it's at 14 fps, which obviously is not acceptable. If I pull out the copy, it's way above 30 fps. I'm getting two suggestions from the Xcode OpenGL ES analyzer on the copy command:
file://localhost/Users/xxxx/Documents/Development/xxxx.mm: error:
Validation Error: glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0,
960, 640, 0) : Height<640> is not a power of two
file://localhost/Users/xxxx/Documents/Development/xxxx.mm: warning:
GPU Wait on Texture: Your app updated a texture that is currently
used for rendering. This caused the CPU to wait for the GPU to
finish rendering.
I'll work to resolve the two above issues (perhaps they are the crux of it). In the meantime, can anyone suggest a more efficient way to pull that depth data into a texture?
Thanks in advance!
iOS devices generally support OES_depth_texture, so on devices where the extension is present, you can set up a framebuffer object with a depth texture as its only attachment:
GLuint g_uiDepthBuffer;
glGenTextures(1, &g_uiDepthBuffer);
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
// glTexParameteri calls omitted for brevity
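// A typical choice (an assumption on my part, not from the original answer:
// NEAREST filtering and edge clamping, the usual safe choices for a depth texture):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);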
GLuint g_uiDepthFramebuffer;
glGenFramebuffers(1, &g_uiDepthFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, g_uiDepthFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, g_uiDepthBuffer, 0);
Your texture then receives all the values being written to the depth buffer when you draw your scene (you can use a trivial fragment shader for this), and you can texture from it directly without needing to call glCopyTexImage2D.
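A quick runtime check for the extension, as a sketch:
const char* exts = (const char*)glGetString(GL_EXTENSIONS);
if (exts && strstr(exts, "GL_OES_depth_texture")) {
    // safe to create the depth texture as above
}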