glDrawArrays() slow on iPad? - performance

I was wondering how to speed up my iPad application using OpenGL ES 2.0. At the moment we have every drawable object draw itself with a call to glDrawArrays(). Blending is enabled, and we really need it. Without disabling blending, how can we improve performance for this app?
For instance, if we draw 3 textures (1024x1024, 256x512, 256x512) across the whole screen, the app only gets 15 FPS, which seems really slow to me. Are we doing something terribly wrong? Our drawing code (for each drawable) is as follows:
- (void) draw {
    GLuint textureAvailable = 0;
    if (texture != nil) {
        textureAvailable = 1;
    }
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture.name);
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, vertices);
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_COLOR, 4, GL_FLOAT, 1, 0, colorsWithMultipliedAlpha);
    glEnableVertexAttribArray(ATTRIB_COLOR);
    glVertexAttribPointer(ATTRIB_TEXTUREMAP, 2, GL_FLOAT, 1, 0, textureMapping);
    glEnableVertexAttribArray(ATTRIB_TEXTUREMAP);
    // Note that we are NOT using position.z here because that is only used to determine drawing order
    int *jnUniforms = JNOpenGLConstants::getInstance().uniforms;
    glUniform4f(jnUniforms[UNIFORM_TRANSLATE], position.x, position.y, 0.0, 0.0);
    glUniform4f(jnUniforms[UNIFORM_SCALE], scale.x, scale.y, 1.0, 1.0);
    glUniform1f(jnUniforms[UNIFORM_ROTATION], rotation);
    glUniform1i(jnUniforms[UNIFORM_TEXTURE_SAMPLE], 0);
    glUniform2f(jnUniforms[UNIFORM_TEXTURE_REPEAT], textureRepeat.x, textureRepeat.y);
    glUniform1i(jnUniforms[UNIFORM_TEXTURE_AVAILABLE], textureAvailable);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
Possible optimizations I think won't work:
Drawing geometry in batches
I'm only drawing 3 items and the FPS is already down to 15. I don't think batching the geometry would help here: with such a small number of draw calls, eliminating two thirds of them shouldn't matter.
Texture Atlas
Again, we're only drawing 3 textures. What I do wonder is whether it would matter (a lot) if we converted these to PVR. I haven't looked into it, but I must admit we're loading big PNGs at the moment. Is there any way to tell whether this is indeed the problem, or is it easier to just try it?
But please tell me if I'm wrong; I'm happy to hear any ideas.
Proposed solutions
Mipmapped textures
We load mipmapped textures like this:
- (id) initWithUIImage: (UIImage * const) image {
    glGenTextures(1, &name);
    //JNLogString(@"Received name(%d), binding texture", name);
    glBindTexture(GL_TEXTURE_2D, name);
    //Set the needed parameters for the texture
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    //Load the image data into the texture (the glTexImage2D upload is elided here), then build the mip chain
    glGenerateMipmap(GL_TEXTURE_2D);
    return self;
}
This doesn't seem to do anything for our FPS. I think that's because our textures are already roughly at the size they are rendered to on screen, in most cases even 1:1.
Other solutions are welcome! I will try them out and post the results here.

If you are using very large textures, try creating mipmapped textures. The cost is roughly an extra 1/3 of the original texture memory. I think they can be created with this call when setting up the textures.
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
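Note that GL_GENERATE_MIPMAP is the ES 1.1 route; it was removed in ES 2.0 (which the question uses), where the equivalent is a glGenerateMipmap() call after uploading the base level. A minimal sketch, assuming the pixel data has already been decoded into pixels:
glBindTexture(GL_TEXTURE_2D, name);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D); // builds the full mip chain on the GPU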
Some calculations: if you have three 2048x2048 textures (the maximum size) at 15 Hz, you get a texel throughput (if they are fully shown, i.e. downscaled to the screen resolution) of 2048 x 2048 x 3 x 15 = 188,743,680 texels/sec, which is around the value glbenchmark.com reports for the single-texture fill rate (173 Mtexel/sec). But if you use mipmapped textures, the texel throughput should be much closer to the screen resolution (1024x768), which should be something like 1/4 of the previous throughput.

I had a branch in my fragment shader. I thought that wouldn't put much strain on it, but it did! Anyhow, that was the whole problem: I removed the branch and my FPS has almost doubled.
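For anyone curious what such a change can look like: a per-fragment branch on a uniform like the question's UNIFORM_TEXTURE_AVAILABLE can usually be replaced with arithmetic. A hypothetical sketch (this is an assumed shader, not the actual one from the app); with u_textureAvailable set to 0.0 or 1.0, mix() selects between the plain vertex color and the textured color without branching:
// Hypothetical branchless fragment shader (GLSL ES source as a C string).
static const char *kFragShader =
    "precision mediump float;\n"
    "uniform sampler2D u_texture;\n"
    "uniform float u_textureAvailable; // 0.0 or 1.0\n"
    "varying vec4 v_color;\n"
    "varying vec2 v_texCoord;\n"
    "void main() {\n"
    "    vec4 texColor = texture2D(u_texture, v_texCoord);\n"
    "    gl_FragColor = v_color * mix(vec4(1.0), texColor, u_textureAvailable);\n"
    "}\n";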

Related

Is it possible that very small particles disappear when I use RTT?

I have developed more than 20 mobile apps using OpenGL ES 2.0. Now I am building a renderer to use my apps on OS X, so I am using OpenGL 3.3 with GLSL 130. Yesterday I ran into a problem: I can't use a texture (RTT) onto which I drew particles in an off-screen FBO with GL_LINES at 1.0 width (which is the maximum value in OpenGL 3.3 - why?).
When I draw ordinary geometry into the off-screen FBO and use it as a texture on-screen, I can see it. Likewise, if I draw the small particles directly on-screen I can see them clearly. But if I draw those particle lines off-screen and try to use the result as a texture on the main screen, I see only a black texture.
I have already checked for GL errors, the FBOs' status, and the GL blending options, but I am still struggling to solve it.
Does anyone have an idea how to solve it?
Even though I think my code is okay, I've attached a little of it below.
// After generating and binding the FBO, create the render-target texture (RTT)
StarTexture fboTex;
fboTex.texture_width = texture_width;
fboTex.texture_height = texture_height;
glGenTextures(1, &fboTex.texture_id);
glBindTexture(GL_TEXTURE_2D, fboTex.texture_id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture_width, texture_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fboTex.texture_id, 0);
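The FBO status check mentioned above is the standard completeness query; for reference, a minimal version:
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    printf("FBO incomplete: 0x%04x\n", status); // requires <stdio.h>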
and this is how I draw the particles into the off-screen FBO:
glUniformMatrix4fv( h_Uniforms[UNIFORMS_PROJECTION], 1, GL_FALSE, g_proxtrans.s);
glBindBuffer(GL_ARRAY_BUFFER, h_VBO[VBO_PARTICLE]);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(Vec3)*ParticleNumTotal*2, &p_particle_lc_xy[0]);
glVertexAttribPointer(h_Attributes[ATTRIBUTES_POSITION], 3, GL_FLOAT, 0, 0,0);
glEnableVertexAttribArray(h_Attributes[ATTRIBUTES_POSITION]);
glBindBuffer(GL_ARRAY_BUFFER, h_VBO[VBO_COLOR]);
glVertexAttribPointer(h_Attributes[ATTRIBUTES_COLOR], 4, GL_FLOAT, 0, 0,0);
glEnableVertexAttribArray(h_Attributes[ATTRIBUTES_COLOR]);
glLineWidth(Thickness); // 1.0, because that is the maximum
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, h_VBO[VBO_INDEX_OFF1]);
glDrawElements(GL_LINES, 400, GL_UNSIGNED_INT, 0); // 200 lines
and this is how I draw it on the main screen:
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear( GL_COLOR_BUFFER_BIT);
starfbo->bindingVAO1();
glViewport(0, 0, ogl_Width, ogl_Height);
glUseProgram(h_Shader_Program[Shader_Program_FINAL]);
glBindBuffer(GL_ARRAY_BUFFER, h_VBO[VBO_TEXCOORD2]);
glVertexAttribPointer(h_Attributes[ATTRIBUTES_UV2], 2, GL_FLOAT, 0, 0,0);
glEnableVertexAttribArray(h_Attributes[ATTRIBUTES_UV2]);
glBindBuffer(GL_ARRAY_BUFFER, h_VBO[VBO_SQCOORD2]);
glVertexAttribPointer(h_Attributes[ATTRIBUTES_POSITION3], 2, GL_FLOAT, 0, 0,0 );
glEnableVertexAttribArray(h_Attributes[ATTRIBUTES_POSITION3]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, h_VBO[VBO_INDEX_ON]);
glDrawElements(GL_TRIANGLES,sizeof(squareIndices)/sizeof(squareIndices[0]), GL_UNSIGNED_INT ,(void*)0);
glUniformMatrix4fv( h_Uniforms[UNIFORMS_PROJECTION], 1, GL_FALSE, g_proxtrans.s);
glBindBuffer(GL_ARRAY_BUFFER, h_VBO[VBO_PARTICLE]);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(Vec3)*ParticleNumTotal*2, &p_particle_lc_xy[0]);
glVertexAttribPointer(h_Attributes[ATTRIBUTES_POSITION], 3, GL_FLOAT, 0, 0,0);
glEnableVertexAttribArray(h_Attributes[ATTRIBUTES_POSITION]);
glBindBuffer(GL_ARRAY_BUFFER, h_VBO[VBO_COLOR]);
glVertexAttribPointer(h_Attributes[ATTRIBUTES_COLOR], 4, GL_FLOAT, 0, 0,0);
glEnableVertexAttribArray(h_Attributes[ATTRIBUTES_COLOR]);
glLineWidth(Thickness);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, h_VBO[VBO_INDEX_OFF1]);
glDrawElements(GL_LINES, 400, GL_UNSIGNED_INT, 0);
If the resolution of the rendered image is much larger than the size (in pixels) it ends up being rendered at, it's certainly possible that small features disappear entirely.
Picture an extreme case. Say you render a few thin lines into a 1000x1000 texture, lighting up a very small fraction of the total 1,000,000 pixels. Now you map this texture onto a quad that has a size of 10x10 pixels when displayed. The fragment shader is invoked once for each pixel (assuming no MSAA), which makes for 100 shader invocations. Each of these 100 invocations samples the texture. With linear sampling and no mipmapping, it will read 4 texels for each sample operation. In total, 100 * 4 = 400 texels are read while rendering the polygon. It's quite likely that reading these 400 texels out of the total 1,000,000 will completely miss all of the lines you rendered into the texture.
One way to reduce this problem is to use mipmapping. This will generally prevent the features from disappearing completely. But small features will still fade because more and more texels are averaged in higher mipmap levels, where most of the texels are black.
A better but slightly more complex approach is to create the mipmaps manually instead of using automatically generated ones, rendering the same content into each mipmap level, as sketched below.
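A minimal sketch of that manual approach (drawLines() is a hypothetical stand-in for the application's own particle pass; storage for every level is assumed to have been allocated beforehand, e.g. by one glGenerateMipmap call):
// Render the same content into each mip level of the FBO texture.
int w = texture_width, h = texture_height;
for (int level = 0; w > 0 && h > 0; ++level, w /= 2, h /= 2) {
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, fboTex.texture_id, level);
    glViewport(0, 0, w, h);
    glClear(GL_COLOR_BUFFER_BIT);
    drawLines(); // hypothetical: the same line-drawing pass as for level 0
}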
It might be good enough to simply be careful that you're not making the texture too large. Or to create your own wide lines by drawing them as polygons instead of using line primitives.
glDrawElements(GL_LINES, 400, GL_UNSIGNED_INT, 0);
GL_UNSIGNED_INT cannot be used as an element index type in OpenGL ES (unlike desktop OpenGL) unless the OES_element_index_uint extension is available; oddly, it works on iOS but not on Android.
Without that extension, the parameter must be GL_UNSIGNED_BYTE or GL_UNSIGNED_SHORT in OpenGL ES.
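A minimal portable fix is to build the index buffer from 16-bit values; an illustrative sketch for the 200 lines above:
GLushort lineIndices[400];
for (int i = 0; i < 400; ++i)
    lineIndices[i] = (GLushort)i; // or whatever the real index pattern is
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(lineIndices), lineIndices, GL_STATIC_DRAW);
glDrawElements(GL_LINES, 400, GL_UNSIGNED_SHORT, 0);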

Image rotation using OpenGL ES

I'm working with OpenGL ES 2.0 on an OMAP3530 development board running Windows CE 7.
My task is to load a 24-bit image file, rotate it by an angle about the z-axis, and export the image (buffer).
For this I've created an FBO for off-screen rendering, loaded the image file as a texture with glTexImage2D(), applied that texture to a quad, rotated the quad with the PVRTMat4::RotationZ() API, and read it back with glReadPixels(). Since it is a single-frame process, I run the loop only once.
Here are the problems I'm facing now:
1) All APIs take a different processing time on every run, i.e. each time I run my application I get different timings for the same calls.
2) glDrawArrays() is taking too much time (~50 ms - 80 ms).
3) glReadPixels() is also taking too much time, ~95 ms for an 800x600 image.
4) Loading a 32-bit image is much faster than a 24-bit image, so a conversion is needed.
I'd like to ask you all: if anybody has faced or solved a similar problem, kindly suggest anything.
Here is the code snippet of my application.
void BindTexture(){
    glGenTextures(1, &m_uiTexture);
    glBindTexture(GL_TEXTURE_2D, m_uiTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ImageWidth, ImageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, pTexData);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, TCHAR *lpCmdLine, int nCmdShow)
{
    // Fragment and vertex shader code
    char* pszFragShader = "Same as in RenderToTexture sample";
    char* pszVertShader = "Same as in RenderToTexture sample";
    CreateWindow(ImageWidth, ImageHeight); // For this I referred to the OGLES2HelloTriangle_Windows.cpp example
    LoadImageBuffers();
    BindTexture();
    // Generate & bind the frame and render buffers (pseudocode)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_auiFbo, 0);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, ImageWidth, ImageHeight);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_auiDepthBuffer);
    BindTexture();
    GLfloat Angle = 0.02f;
    GLfloat afVertices[] = { /* vertices to draw a quad */ };
    glGenBuffers(1, &ui32Vbo);
    LoadVBOs(); // APIs to load the VBOs
    // Draws the quad for one frame
    while(g_bDemoDone==false)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, m_auiFbo);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        PVRTMat4 mRot, mTrans, mMVP;
        mTrans = PVRTMat4::Translation(0,0,0);
        mRot = PVRTMat4::RotationZ(Angle);
        glBindBuffer(GL_ARRAY_BUFFER, ui32Vbo);
        glDisable(GL_CULL_FACE);
        int i32Location = glGetUniformLocation(uiProgramObject, "myPMVMatrix");
        mMVP = mTrans * mRot;
        glUniformMatrix4fv(i32Location, 1, GL_FALSE, mMVP.ptr());
        // Pass the vertex data
        glEnableVertexAttribArray(VERTEX_ARRAY);
        glVertexAttribPointer(VERTEX_ARRAY, 3, GL_FLOAT, GL_FALSE, m_ui32VertexStride, 0);
        // Pass the texture coordinate data
        glEnableVertexAttribArray(TEXCOORD_ARRAY);
        glVertexAttribPointer(TEXCOORD_ARRAY, 2, GL_FLOAT, GL_FALSE, m_ui32VertexStride, (void*) (3 * sizeof(GLfloat)));
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glReadPixels(0, 0, ImageWidth, ImageHeight, GL_RGBA, GL_UNSIGNED_BYTE, pOutTexData);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        eglSwapBuffers(eglDisplay, eglSurface);
    }
    DeInitAll();
}
The PowerVR architecture cannot render a single frame and let the ARM read it back quickly. It is simply not designed to work that way; it is a tile-based deferred rendering architecture. The execution times you are seeing are to be expected, and using an FBO is not going to make it any faster. Also, beware that the OpenGL ES drivers on OMAP for Windows CE are of really poor quality; consider yourself lucky if they work at all.
A better design would be to display the OpenGL ES rendering directly via the DSS (the OMAP display subsystem) and avoid glReadPixels() and the FBO completely.
I've improved performance for rotating an image buffer by using multiple FBOs and PBOs.
Here is the pseudocode snippet of my application:
InitGL()
    GenerateShaders();
    Generate3Textures(); // generate 3 null textures
    Generate3FBO();      // generate 3 FBOs & attach each texture to one FBO
    Generate3PBO();      // generate 3 PBOs for reading back from the FBOs

DrawGL()
{
    BindFBO1;
    BindTexture1;
    UploadtoTexture1;
    Do some processing & draw it in FBO1;

    BindFBO2;
    BindTexture2;
    UploadtoTexture2;
    Do some processing & draw it in FBO2;

    BindFBO3;
    BindTexture3;
    UploadtoTexture3;
    Do some processing & draw it in FBO3;

    BindFBO1;
    ReadPixelfromFBO1;
    UnpackToPBO1;

    BindFBO2;
    ReadPixelfromFBO2;
    UnpackToPBO2;

    BindFBO3;
    ReadPixelfromFBO3;
    UnpackToPBO3;
}

DeinitGL();
DeallocateALL();
This way I've achieved a 50% performance increase for the overall processing.
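For reference, a rough sketch of the readback half of that pattern (PBOs are not part of core OpenGL ES 2.0, so this assumes a driver or GL version that exposes GL_PIXEL_PACK_BUFFER and glMapBufferRange; pbo, width and height are illustrative names). glReadPixels into a bound pixel-pack buffer returns without blocking; the wait happens at the later map call:
// Queue an asynchronous readback into a pixel-pack buffer.
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0); // offset into the PBO, not a pointer
// ... process/draw the other FBOs while the transfer completes ...
void *pixels = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, width * height * 4, GL_MAP_READ_BIT);
if (pixels) {
    // consume the pixels here
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);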

Fastest possible OpenCV 2 OpenGL context

I've been searching the net for a few days looking for the fastest possible way to take an OpenCV webcam capture and display it in an OpenGL context. So far this seems to work OK, until I need to zoom:
void Camera::DrawIplImage1(IplImage *image, int x, int y, GLfloat xZoom, GLfloat yZoom)
{
    GLenum format;
    switch (image->nChannels) {
    case 1:
        format = GL_LUMINANCE;
        break;
    case 2:
        format = GL_LUMINANCE_ALPHA;
        break;
    case 3:
        format = GL_BGR;
        break;
    default:
        return;
    }
    yZoom = -yZoom; // flip vertically: OpenCV stores images top-to-bottom
    glRasterPos2i(x, y);
    glPixelZoom(xZoom, yZoom); // slow when not (1.0f, 1.0f)
    glDrawPixels(image->width, image->height, format, GL_UNSIGNED_BYTE, image->imageData);
}
I've heard that the FBO approach might be even faster. Any ideas on the fastest possible way to get an OpenCV webcam capture into an OpenGL context? I will test everything suggested and post the results here.
Are you sure your OpenGL implementation needs power-of-two textures? Even very poor PC implementations (yes, Intel) can manage arbitrary sizes now.
Then the quickest route is probably an OpenGL pixel buffer.
Sorry, the code is from Qt, so the function names are slightly different, but the sequence is the same.
Allocate the OpenGL texture:
glEnable(GL_TEXTURE_2D);
glGenTextures(1,&texture);
glBindTexture(GL_TEXTURE_2D,texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, glFormat, width, height, 0, glFormatExt, glType, NULL );
glDisable(GL_TEXTURE_2D);
Now map the buffer to get a pointer to its memory:
glbuffer.bind();
unsigned char *dest = (unsigned char*)glbuffer.map(QGLBuffer::ReadWrite);
// creates an openCV image but the pixel data is stored in an opengl buffer
cv::Mat opencvImage(rows,cols,CV_TYPE,dest);
.... do stuff ....
glbuffer.unmap(); // pointer is no longer valid - so neither is openCV image
Then, to draw it (this should be essentially instant, because the data was already copied to the GPU in the map call above):
glBindTexture(GL_TEXTURE_2D,texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, width, height, glFormatExt, glType, 0);
glbuffer.release();
By using different types for glFormat and glFormatExt, you can have the graphics card automatically convert between OpenCV's BGR and typical RGBA display formats in hardware.
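For example, one plausible pairing for 8-bit BGR frames from OpenCV (these names mirror the snippet above; the exact choice depends on the capture format):
GLenum glFormat    = GL_RGBA;          // internal storage on the GPU
GLenum glFormatExt = GL_BGR;           // channel order of the OpenCV data
GLenum glType      = GL_UNSIGNED_BYTE; // 8 bits per channel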

OpenGL (ES) Model Within Translucent Model

I want "Face In a Crystal Ball" effect where I have a model (the face) doing things inside of a translucent model (the crystal ball). I feel like I'm taking crazy pills because I just can't get this inner face to show up partially occluded by the ball. My goal is to vary the alpha of the ball (and/or face) to make the face appear and disappear.
Below is the relevant bits code. As you'll see, I'm not using shaders, just good old GL/GLES1. If anyone can tell me what I'm doing wrong, I'll be VERY appreciative.
The setup code...
//-- CONFIGURATION ---------------
// Create the depth buffer object
glGenRenderbuffersOES(1, &depth_renderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depth_renderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, width, height);
// Create the framebuffer object
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                             GL_RENDERBUFFER_OES, color_renderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES,
                             GL_RENDERBUFFER_OES, depth_renderbuffer);
// Bind the color buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, color_renderbuffer);
glViewport(0, 0, width, height);

//-- LIGHTING ----------------------
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);

//-- PROJECTION ---------------------
glMatrixMode(GL_PROJECTION);
viewport_size = vec2((float) width, (float) height);
// Orthographic projection
float max_x, max_y;
if (width > height) {
    max_y = 1;
    max_x = (float)width/(float)height;
} else {
    max_x = 1;
    max_y = (float)height/(float)width;
}
const float MAX_X = max_x;
const float MAX_Y = max_y;
const float Z_0 = 0;
const float MAX_Z = 1;
glOrthof(-MAX_X, MAX_X, -MAX_Y, MAX_Y, Z_0-MAX_Z, Z_0+MAX_Z);
world_size = vec3(2*MAX_X, 2*MAX_Y, 2*MAX_Z);

// Depth testing
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE); // disappears if false
glDepthFunc(GL_LEQUAL);
glEnable(GL_BLEND);
//glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // doesn't do it
glBlendFunc(GL_ONE, GL_ONE); // better
Here is the rendering call
glClearColor(world->background_color.x,
             world->background_color.y,
             world->background_color.z,
             world->background_color.w);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for (int s = 0; s < surfaces.size(); s++) {
    Surface* surface = surfaces[s];
    glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, surface->getMatAmbient().Pointer());
    glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, surface->getMatDiffuse().Pointer());
    glMatrixMode(GL_MODELVIEW);
    // If I don't put this code in here (as opposed to above), the light gets all crazy! WHY!?
    glPushMatrix();
    glLoadIdentity();
    vec4 light_position = vec4(world->light->position, 1);
    glLightfv(GL_LIGHT0, GL_POSITION, light_position.Pointer());
    glPopMatrix();
    glPushMatrix();
    glMultMatrixf(surface->transform.Pointer());
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, surface->index_buffer);
    glBindBuffer(GL_ARRAY_BUFFER, surface->vertex_buffer);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, VERTEX_STRIDE, 0);
    glNormalPointer(GL_FLOAT, VERTEX_STRIDE, (GLvoid*) VERTEX_NORMAL_OFFSET);
    glDrawElements(GL_TRIANGLES, surface->indices.size(), GL_UNSIGNED_SHORT, 0);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glPopMatrix();
}
It sounds like you may be suffering from a simple case of the concept of a depth buffer not really applying to your scene. A depth buffer stores one depth for every pixel on screen, which in a scene with fully opaque objects would be the depth of the nearest object at that pixel.
The problem is that when you want to add partially transparent objects to the scene, you end up in a position where several objects contribute to the colour of an individual pixel. But you can still store the depth of only one of them.
So what's probably happening in your case is that you're drawing the crystal ball first, and that's putting the depths of the various crystal ball pixels into the depth buffer. You're then attempting to draw the face and OpenGL is seeing that it's further away than the values already in the buffer, so skipping those pixels.
So the quick-fix solution is just to re-order your scene geometry by hand such that the face is always drawn before the crystal ball, being always on the inside.
In an ideal solution, you'd draw all opaque geometry in one step (traditionally in something close to front-to-back order, though that's not as important on the PowerVR) to establish opaque depth values, then all transparent geometry back to front so that it is composited in the correct order.
In OpenGL you really want the order of certain operations to be relatively fixed, so that you can push the relevant values over to the GPU once and not incur communication costs. People still tend to divide geometry into opaque and transparent, drawing the opaque first, but often they then simply disable z-buffer writes when drawing the transparent geometry, making an effort to draw it in something close to back-to-front order without investing too much time in the problem.
If you're happy to use purely additive blending, then clearly any drawing order for the transparencies is correct once the depth buffer has the opaque stuff set up.
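A minimal sketch of that two-pass ordering for this scene, assuming drawFace() and drawBall() stand in for the app's own draw routines:
// 1) Opaque geometry first, with depth writes enabled.
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
drawFace();
// 2) Translucent geometry last: test against depth but don't write it.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
drawBall();
glDepthMask(GL_TRUE); // restore so the next frame's depth clear works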
In what order are you rendering the objects? If you draw the ball before the face, the entire face will be rejected, because it is behind the ball in the z-buffer. If you want correct transparency, you have to render objects from back to front.
And regarding your inline question:
//If I don't put this code in here (as opposed to above), the light gets all crazy! WHY!?
When you call glLightfv with a position, the position is transformed by whatever is currently on the modelview matrix stack. So you have to issue the call at the right point in your code, depending on which frame of reference you're defining the coordinates in (relative to the view coordinates, the world coordinates, or an object's coordinates).
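Concretely, for a light that should be fixed in world space, the call belongs right after the view transform is loaded and before any per-object transforms. A sketch (applyViewTransform() is a hypothetical stand-in for the camera setup):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
applyViewTransform(); // hypothetical camera/view setup
glLightfv(GL_LIGHT0, GL_POSITION, light_position.Pointer()); // interpreted in world space
// ... then per-object glPushMatrix()/glMultMatrixf(...)/draw/glPopMatrix() ...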

White textures with OpenGL and DevIL on some Windows systems

I'm having some problems trying to load images with DevIL and create textures in OpenGL. When a friend of mine tested my program on his Windows machine, all the rects that were supposed to contain textures were white, without any image (texture). The problem seems to occur only on Windows XP, Windows Vista and Windows 7, but not on every Windows PC: my own Windows XP runs the program without problems.
Maybe there are some missing DLLs or files (improbable), or something that doesn't let the image be loaded or used as a texture. On the other hand, the program runs fine on *UNIX systems.
This is the code I'm using to load an image and generate a texture:
void Image::load(const char* filename)
{
    ILuint ilimg;
    ilGenImages(1, &ilimg);
    ilBindImage(ilimg);
    if (!ilLoadImage(filename))
        throw ImageLoadError;
    glGenTextures(1, &image);
    glBindTexture(GL_TEXTURE_2D, image);
    bpp = ilGetInteger(IL_IMAGE_BPP);
    width = ilGetInteger(IL_IMAGE_WIDTH);
    height = ilGetInteger(IL_IMAGE_HEIGHT);
    format = ilGetInteger(IL_IMAGE_FORMAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, bpp, width, height, 0, format,
                 GL_UNSIGNED_BYTE, ilGetData());
    ilDeleteImages(1, &ilimg);
}
This is the code that draws a rect with the texture applied:
void Rect::show()
{
    glPushAttrib(GL_CURRENT_BIT);
    glEnable(GL_TEXTURE_2D);
    glColor4f(1.0, 1.0, 1.0, opacity);
    glBindTexture(GL_TEXTURE_2D, texture->get_image());
    glBegin(GL_POLYGON);
    glTexCoord2i(0, 0); glVertex2f(x, y);
    glTexCoord2i(1, 0); glVertex2f(x+width, y);
    glTexCoord2i(1, 1); glVertex2f(x+width, y+height);
    glTexCoord2i(0, 1); glVertex2f(x, y+height);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    glPopAttrib();
}
If you need some other code that I haven't mentioned, ask me and I'll post it.
glTexImage2D(target, level, internal format, width, height, border, format, type, data);
The internal format parameter of glTexImage is not bits per pixel. In fact, its numerical value is not related to the format at all. OpenGL defines only specific values to be valid, among them four with a numeric relation, but that relation is just a mnemonic: 1 (shorthand for GL_LUMINANCE), 2 (GL_LUMINANCE_ALPHA), 3 (GL_RGB) and 4 (GL_RGBA). There are also a number of other format tokens, but their numeric values are arbitrarily chosen.
Also, the bpp of the image data should only be used to select the format and type parameters. You'll need a lookup table or switch-case structure for that, as sketched below.
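A minimal sketch of that mapping, keyed off DevIL's reported format rather than bpp (only two cases shown; DevIL's IL_RGB/IL_RGBA happen to match GL_RGB/GL_RGBA numerically, which is why passing format straight through to glTexImage2D works):
GLint internalFormat;
switch (format) {
case IL_RGB:  internalFormat = GL_RGB;  break;
case IL_RGBA: internalFormat = GL_RGBA; break;
default:
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE); // normalize anything else
    format = IL_RGBA;
    internalFormat = GL_RGBA;
    break;
}
glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, 0,
             format, GL_UNSIGNED_BYTE, ilGetData());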
The issue you are experiencing sounds like an implementation version issue. Probably a "correct" but less "nice" implementation isn't letting you get away with something subtly incorrect. Or on the other hand the implementation may have a subtle bug and isn't accepting your valid code.
I might try swapping
glColor4f(1.0, 1.0, 1.0, opacity);
glBindTexture(GL_TEXTURE_2D, texture->get_image());
to
glBindTexture(GL_TEXTURE_2D, texture->get_image());
glColor4f(1.0, 1.0, 1.0, opacity);
and declaring the vertex before the texcoord, like:
glVertex2f(x, y);
glTexCoord2i(0, 0);
Short of that, if you do not already have "The Red Book", consult http://glprogramming.com/red/chapter09.html
