glClear() doesn't work on some machines - macOS

I've got an OpenGL context with an FBO and all of my simple drawing operations work fine except for glClear(). I can draw textures and rectangles but glClear() refuses to do anything. glGetError() does not return any error.
I'm calling glClear() like this:
glRectd(0, 0, 1024, 1080); // I see the results of this call
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // This does nothing
Note that this code with glClear() works fine on some Macs but not on others, so perhaps I've just been getting lucky and I need to set up the context or some other setting differently. Any suggestions would be greatly appreciated!
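For reference, a clear can be silently limited by state that glGetError() will never report, so one quick diagnostic (a sketch only; the logging is illustrative) is to dump the scissor, color-mask, and framebuffer-binding state right before the glClear() call:
// Diagnostic sketch: state that can make glClear() a no-op without raising a GL error.
GLint boundFBO = 0;
GLboolean colorMask[4] = {GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE};
glGetIntegerv(GL_FRAMEBUFFER_BINDING_EXT, &boundFBO);   // which framebuffer the clear actually targets
glGetBooleanv(GL_COLOR_WRITEMASK, colorMask);           // all GL_FALSE means the clear writes nothing
GLboolean scissorOn = glIsEnabled(GL_SCISSOR_TEST);     // an enabled scissor limits the cleared region
NSLog(@"FBO %d, scissor %d, color mask %d %d %d %d", boundFBO, scissorOn,
      colorMask[0], colorMask[1], colorMask[2], colorMask[3]);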
Edit: This has something to do with drawing the texture from the FBO onto another context. After that happens, glClear() stops working.
Edit 2: I now have it reproduced in a small Xcode project and I'm pretty sure I've concluded that everything works on the NVIDIA card but not on the integrated Intel Iris card. I'll post the test code shortly.
Edit 3: Code. I tried to minimize it but there's still a bit of bulk.
//
// On the NVIDIA card, we'll see the video switch between red and green.
// On the Intel Iris, it will get stuck on blue because of "break" code below.
//
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
// Initialize the view to white
[self clearContext:_view.openGLContext.CGLContextObj toRed:1 green:1 blue:1 alpha:1];
// FBO context
fbo = [[WTFBOContext alloc] initWithWidth:_view.frame.size.width height:_view.frame.size.height shareContext:_view.openGLContext];
// Clear the FBO context (and thus texture) to solid blue
[self clearContext:fbo.oglCtx.CGLContextObj toRed:0 green:0 blue:1 alpha:1];
// These calls "break" the FBO on Intel Iris chips
{
CGLContextObj cgl_ctx = _view.openGLContext.CGLContextObj;
glEnable(GL_TEXTURE_RECTANGLE_EXT);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, fbo.texture);
glBegin(GL_QUADS);
glEnd();
}
__block int r = 0;
[NSTimer scheduledTimerWithTimeInterval:1.0 repeats:YES block:^(NSTimer * _Nonnull timer) {
r = 1 - r;
// Clear the FBO context (and thus texture) to solid red or green
[self clearContext:fbo.oglCtx.CGLContextObj toRed:r green:1 - r blue:0 alpha:1];
[self drawTexture:fbo.texture
fromRect:_view.frame
toContext:_view.openGLContext.CGLContextObj
inRect:_view.frame
flipped:NO
mirrored:NO
blending:NO
withAlpha:1.0];
}];
}
- (void) clearContext:(CGLContextObj) cgl_ctx
toRed:(GLfloat) red
green:(GLfloat) green
blue:(GLfloat) blue
alpha:(GLfloat) alpha
{
glClearColor(red, green, blue, alpha);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glFlush();
}
- (void) drawTexture:(GLuint) tname
fromRect:(CGRect) fromRect
toContext:(CGLContextObj) cgl_ctx
inRect:(CGRect) inRect
flipped:(BOOL) flipped
mirrored:(BOOL) mirrored
blending:(BOOL) blending
withAlpha:(GLfloat) withAlpha
{
glEnable(GL_TEXTURE_RECTANGLE_EXT);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, tname);
GLint vp[4];
glGetIntegerv(GL_VIEWPORT, vp);
GLdouble left, right, bottom, top;
if (flipped)
{
bottom = vp[1] + vp[3];
top = vp[1];
}
else
{
bottom = vp[1];
top = vp[1] + vp[3];
}
if (mirrored)
{
left = vp[0] + vp[2];
right = vp[0];
}
else
{
left = vp[0];
right = vp[0] + vp[2];
}
glMatrixMode (GL_PROJECTION);
glPushMatrix();
glLoadIdentity ();
glOrtho (left, right, bottom, top, -1, 1);
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
GLboolean wasBlending = glIsEnabled(GL_BLEND);
if (blending)
{
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
}
else
glDisable(GL_BLEND);
// Textures are multiplied by the current color.
glColor4f(withAlpha, withAlpha, withAlpha, withAlpha);
glBegin(GL_QUADS);
glTexCoord2f(fromRect.origin.x, fromRect.origin.y);
glVertex2i(inRect.origin.x, inRect.origin.y);
glTexCoord2f(fromRect.origin.x, fromRect.origin.y + fromRect.size.height);
glVertex2i(inRect.origin.x, inRect.origin.y + inRect.size.height);
glTexCoord2f(fromRect.origin.x + fromRect.size.width, fromRect.origin.y + fromRect.size.height);
glVertex2i(inRect.origin.x + inRect.size.width, inRect.origin.y + inRect.size.height);
glTexCoord2f(fromRect.origin.x + fromRect.size.width, fromRect.origin.y);
glVertex2i(inRect.origin.x + inRect.size.width, inRect.origin.y);
glEnd();
glMatrixMode (GL_PROJECTION);
glPopMatrix();
if (wasBlending)
glEnable(GL_BLEND);
else
glDisable(GL_BLEND);
glFlush();
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 0);
}
Support Files:
@interface WTFBOContext : NSObject
{
GLuint framebuffer;
}
@property (nonatomic, retain) NSOpenGLContext *oglCtx;
@property (nonatomic, retain) NSOpenGLPixelFormat *oglFmt;
@property (nonatomic) GLuint texture;
@property (nonatomic, readonly) NSUInteger width;
@property (nonatomic, readonly) NSUInteger height;
- (id)initWithWidth:(NSUInteger)width height:(NSUInteger)height shareContext:(NSOpenGLContext *)shareContext;
// We take ownership of the texture
- (void)setTexture:(GLuint)texture;
- (BOOL)isComplete;
@end
@implementation WTFBOContext
- (id)initWithWidth:(NSUInteger)width height:(NSUInteger)height shareContext:(NSOpenGLContext *)shareContext
{
self = [super init];
_width = width;
_height = height;
NSOpenGLPixelFormatAttribute attributes[] = {
NSOpenGLPFANoRecovery,
NSOpenGLPFAAccelerated,
NSOpenGLPFADepthSize, (NSOpenGLPixelFormatAttribute)24,
(NSOpenGLPixelFormatAttribute)0};
self.oglFmt = [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];
self.oglCtx = [[NSOpenGLContext alloc] initWithFormat:self.oglFmt shareContext:shareContext];
CGLContextObj cgl_ctx = self.oglCtx.CGLContextObj;
glGenFramebuffersEXT(1, &framebuffer);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);
[self _makeTexture];
glViewport(0, 0, _width, _height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, _width, 0, _height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
return self;
}
- (void)dealloc
{
CGLContextObj cgl_ctx = self.oglCtx.CGLContextObj;
glDeleteTextures(1, &_texture);
}
- (void)_makeTexture
{
CGLContextObj cgl_ctx = self.oglCtx.CGLContextObj;
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, texture);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA8, _width, _height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
self.texture = texture;
}
- (void)setTexture:(GLuint)tex
{
CGLContextObj cgl_ctx = self.oglCtx.CGLContextObj;
if (_texture > 0)
glDeleteTextures(1, &_texture);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_RECTANGLE_EXT, tex, 0);
if (!self.isComplete)
NSLog(#"glFramebufferTexture2DEXT");
_texture = tex;
}
- (BOOL)isComplete
{
CGLContextObj cgl_ctx = self.oglCtx.CGLContextObj;
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);
GLuint status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
return status == GL_FRAMEBUFFER_COMPLETE_EXT;
}
@end
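A further diagnostic sketch (the unbind and the logging are suggestions, not part of the test project above): instrument the block that "breaks" the FBO and check, via the isComplete method, whether the Intel driver leaves the framebuffer in a bad state. Unbinding the texture after the empty glBegin/glEnd pair mirrors the glBindTexture(..., 0) fix mentioned in the GPUImage answer further down.
{
    CGLContextObj cgl_ctx = _view.openGLContext.CGLContextObj;
    glEnable(GL_TEXTURE_RECTANGLE_EXT);
    glBindTexture(GL_TEXTURE_RECTANGLE_EXT, fbo.texture);
    glBegin(GL_QUADS);
    glEnd();
    glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 0);  // unbind so the FBO's texture isn't still bound for sampling when it is rendered to again
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        NSLog(@"view context error after empty quad: 0x%x", err);
}
if (!fbo.isComplete)
    NSLog(@"FBO reports incomplete on this GPU");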

Related

Snapshot by glReadPixels, then flip the pixel content, but the image quality has decreased

All,
Now I am using Unity3D to develop the game, and I want to save the content of each frame as an mp4 file using AVFoundation. But I ran into a problem while processing the snapshot. After I use glReadPixels to obtain the data saved in the render buffer, a vertex shader and fragment shader are used to flip the pixel content upside down. But after flipping each frame, I found that the quality of each frame has decreased a lot. Has anyone run into this kind of case before?
Here is the code related.
The snapshot part,
- (void *)snapshot
{
// NSLog(#"snapshot used here");
GLint backingWidth1, backingHeight1;
glBindRenderbufferOES(GL_RENDERBUFFER_OES, mainDisplaySurface->systemColorRB);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth1);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight1);
NSInteger x = 0, y = 0, width = backingWidth1, height = backingHeight1;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
if (transformData == NULL)
{
NSLog(#"transformData initial");
transformData = new loadDataFromeX();
transformData->setupOGL(backingWidth1, backingHeight1);
}
NSLog(#"data %d, %d, %d", (int)data[0], (int)data[1], (int)data[2]);
transformData->drawingOGL(data);
return data;
}
Here, transformData is a C++ class that helps me do the flipping work.
In the function setupOGL(), all the textures and framebuffers are constructed (a sketch of that setup follows drawingOGL() below).
In the function drawingOGL(), the flipping is done by passing the data through the vertex shader and fragment shader. The details of this function are listed below:
int loadDataFromeX::drawingOGL(unsigned char* data)
{
//load data to the texture;
glDisable(GL_DEPTH_TEST);
glBindFramebuffer(GL_FRAMEBUFFER ,transFBO.frameBuffer);
glClear(GL_COLOR_BUFFER_BIT);
glClearColor(1., 0., 0., 1.);
glViewport(0, 0, imageWidth, imageHeight);
GLfloat vertex_postions[] = {
-1.0f, -1.0f, -10.0f,
1.0f, -1.0f, -10.0f,
-1.0f, 1.0f, -10.0f,
1.0f, 1.0f, -10.0f
};
GLfloat texture_coords[] = { //left up corner is (0.0)
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
glUseProgram(gl_program_id);
glVertexAttribPointer(gl_attribute_position, 3, GL_FLOAT, GL_FALSE, 0,vertex_postions);
glEnableVertexAttribArray(gl_attribute_position);
glVertexAttribPointer(gl_attribute_texture_coordinate, 2, GL_FLOAT, GL_FALSE, 0,texture_coords);
glEnableVertexAttribArray(gl_attribute_texture_coordinate);
// Load textures
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
if(flag)
{
flag = false;
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageWidth, imageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
}
else
{
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, data);
}
glUniform1i(glGetUniformLocation(gl_program_id, "inputImageTexture"), 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glUniformMatrix4fv(mvpLocation, 1, 0, gComputeMVP);
glDrawArrays(GL_TRIANGLE_STRIP ,0 ,4);
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(0, 0, imageWidth, imageHeight,GL_RGBA, GL_UNSIGNED_BYTE, data);
glFinish();
cout<<"data: "<<(int)data[0]<<"; "<<(int)data[1]<<", "<<(int)data[2]<<endl;
return 1;
}
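setupOGL() is not shown here; a minimal sketch of the kind of setup drawingOGL() expects (an upload texture plus an FBO with a color attachment; the member names follow the code above, everything else is an assumption, and shader/program creation is omitted) would be:
// Sketch of loadDataFromeX::setupOGL(): create the texture that drawingOGL()
// uploads into and the FBO it renders to and reads back from.
void loadDataFromeX::setupOGL(int width, int height)
{
    imageWidth = width;
    imageHeight = height;
    flag = true;  // first frame uses glTexImage2D, later ones glTexSubImage2D

    // Texture that receives the glReadPixels data each frame.
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Color attachment that the flipped result is rendered into and read back from.
    GLuint colorTex;
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffers(1, &transFBO.frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, transFBO.frameBuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

    // gl_program_id, gl_attribute_position, gl_attribute_texture_coordinate and
    // mvpLocation would be set up here by compiling and linking the shaders shown below.
}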
The vertex shader and fragment shader are provided below:
static float l = -1.f, r = 1.f;
static float b = -1.f, t = 1.f;
static float n = 0.1f, f = 100.f;
static float gComputeMVP[16] = {
2.0f/(r-l), 0.0f, 0.0f, 0.0f,
0.0f, 2.0f/(t-b), 0.0f, 0.0f,
0.0f, 0.0f, -2.0f/(f-n), 0.0f,
-(r+l)/(r-l), -(t+b)/(t-b), -(f+n)/(f-n), 1.0f
};
// Shader sources
const GLchar* vertex_shader_str =
"attribute vec4 position;\n"
"attribute vec4 inputTextureCoordinate;\n"
"varying mediump vec2 textureCoordinate;\n"
"uniform mat4 mvpMatrix;\n"
"void main()\n"
"{\n"
" gl_Position = position;\n"
" gl_Position = mvpMatrix * position;\n"
" textureCoordinate = inputTextureCoordinate.xy;\n"
"}";
const char* fragment_shader_str = ""
" varying mediump vec2 textureCoordinate;\n"
"\n"
" uniform sampler2D inputImageTexture;\n"
" \n"
" void main()\n"
" {\n"
" mediump vec4 Color = texture2D(inputImageTexture, textureCoordinate);\n"
" gl_FragColor = vec4(Color.z, Color.y, Color.x, Color.w);\n"
" }";
I don't know why the quality of each frame has decreased. Also, I compared the output of the variable data before and after using drawingOGL, with these two lines:
cout<<"data: "<<(int)data[0]<<"; "<<(int)data[1]<<", "<<(int)data[2]<<endl;
NSLog(#"data %d, %d, %d", (int)data[0], (int)data[1], (int)data[2]);
The first line gave the right pixel value, but the second line always gave ZERO. It's really strange, right?
I have found the reason for this strange problem. It was caused by Unity3D's context. I'm not familiar with Unity3D, so maybe only someone like me would do something like this. There is some special OpenGL ES setting in the context belonging to Unity3D, so in order to finish the snapshot task, a new context has to be created and activated only while the snapshot is running.
To solve the problem, I construct a separate context (EAGLContext*) for the snapshot task like this:
- (void *)snapshot
{
// NSLog(#"snapshot used here");
GLint backingWidth1, backingHeight1;
glBindRenderbufferOES(GL_RENDERBUFFER_OES, mainDisplaySurface->systemColorRB);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth1);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight1);
NSInteger x = 0, y = 0, width = backingWidth1, height = backingHeight1;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_BGRA, GL_UNSIGNED_BYTE, data);
NSLog(#"data %d, %d, %d", (int)data[0], (int)data[1], (int)data[2]);
NSLog(#"backingWidth1 : %d, backingHeight1: %d", backingWidth1, backingHeight1);
if (transformData == NULL)
{
mycontext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext: mycontext];
NSLog(#"transformData initial");
transformData = new loadDataFromeX();
transformData->setupOGL(backingWidth1, backingHeight1);
[EAGLContext setCurrentContext: mainDisplaySurface->context];
}
{
[EAGLContext setCurrentContext: mycontext];
transformData->drawingOGL(data);
[EAGLContext setCurrentContext: mainDisplaySurface->context];
}
return data;
}
When the resources used for the snapshot are to be released, the code looks like this:
if (transformData != NULL)
{
{
[EAGLContext setCurrentContext: mycontext];
transformData->destroy();
delete transformData;
transformData = NULL;
[EAGLContext setCurrentContext: mainDisplaySurface->context];
}
[mycontext release];
mycontext = nil;
}

GPUImage record OpenGL ES scene's texture, video is black

I am using GPUImageTextureInput and GPUImageMovieWriter,
but I always get a black video.
Am I missing something? I have spent days on this.
The framebuffer and texture setup is here:
self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2 sharegroup:[[[GPUImageContext sharedImageProcessingContext] context] sharegroup]];
[EAGLContext setCurrentContext:self.context];
GLenum status;
glGenFramebuffers(1, &newFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, newFrameBuffer);
glGenTextures(1, &outputTexture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 640, 1136 , 0, GL_RGBA,
GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outputTexture, 0);
And the movie writer:
textureInput = [[GPUImageTextureInput alloc] initWithTexture:outputTexture size:size];
NSURL *movieURL = [NSURL fileURLWithPath:
[NSString stringWithFormat:@"%@%f.mp4", NSTemporaryDirectory(),
[[NSDate date] timeIntervalSince1970]]];
movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(640, 1136)];
movieWriter.encodingLiveVideo = YES;
[textureInput addTarget:movieWriter];
double delayToStartRecording = 1.0;
dispatch_time_t sstartTime = dispatch_time(DISPATCH_TIME_NOW, delayToStartRecording * NSEC_PER_SEC);
dispatch_after(sstartTime, dispatch_get_main_queue(), ^(void){
NSLog(#"Start recording");
//videoCamera.audioEncodingTarget = movieWriter;
[movieWriter startRecording];
started = YES;
// NSError *error = nil;
// if (![videoCamera.inputCamera lockForConfiguration:&error])
// {
// NSLog(#"Error locking for configuration: %#", error);
// }
// [videoCamera.inputCamera setTorchMode:AVCaptureTorchModeOn];
// [videoCamera.inputCamera unlockForConfiguration];
double delayInSeconds = 20.0;
dispatch_time_t stopTime = dispatch_time(DISPATCH_TIME_NOW, delayInSeconds * NSEC_PER_SEC);
dispatch_after(stopTime, dispatch_get_main_queue(), ^(void){
[movieWriter finishRecordingWithCompletionHandler:^{
NSLog(#"Movie completed");
finished = YES;
}];
// [videoCamera.inputCamera lockForConfiguration:nil];
// [videoCamera.inputCamera setTorchMode:AVCaptureTorchModeOff];
// [videoCamera.inputCamera unlockForConfiguration];
});
});
And the drawInRect:
[(GLKView*)self.view bindDrawable];
glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindVertexArrayOES(_vertexArray);
// Render the object with GLKit
[self.effect prepareToDraw];
glDrawArrays(GL_TRIANGLES, 0, 36);
// Render the object again with ES2
glUseProgram(_program);
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);
glDrawArrays(GL_TRIANGLES, 0, 36);
glFlush();
//[EAGLContext setCurrentContext:self.context];
glBindFramebuffer(GL_FRAMEBUFFER, newFrameBuffer);
glViewport(0,0, 640,1138);
glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArrayOES(_vertexArray);
// Render the object with GLKit
[self.effect prepareToDraw];
glDrawArrays(GL_TRIANGLES, 0, 36);
// Render the object again with ES2
glUseProgram(_program);
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);
glDrawArrays(GL_TRIANGLES, 0, 36);
glFlush();
// some GPUImage code
My framebuffer output / output texture was black as well. I tried a lot of stuff over a few days of on-and-off debugging, mostly following the CubeExample from Brad Larson.
As soon as I added this call, things started showing up:
glBindTexture(GL_TEXTURE_2D, 0)
AFTER calls to glTexImage2D and glFramebufferTexture2D
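Applied to the setup code in the question, that change would look roughly like this (a sketch):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 640, 1136, 0, GL_RGBA,
             GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outputTexture, 0);
glBindTexture(GL_TEXTURE_2D, 0);   // unbind once the texture is attached, before handing it to GPUImageTextureInput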
Here's that example project for reference. https://github.com/BradLarson/GPUImage/tree/master/examples/iOS/CubeExample

GLKit, crash in texture rendering

I am creating a rectangle in OpenGL ES 2.0 using GLKit. This is working and I can fill up the rectangle with solid colour and a gradient.
Now I am trying to fill it up with a texture from an image, which is not working for me.
I am getting an EXC_BAD_ACCESS (code=1) error on the glDrawElements line.
Here is what I have right now:
//Setup GKLView
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
self.glkView = [[GLKView alloc] initWithFrame:self.bounds
context:context];
_glkView.opaque = NO;
_glkView.backgroundColor = [UIColor clearColor];
_glkView.delegate = self;
[self addSubview:_glkView];
//Setup BaseEffect
self.effect = [[GLKBaseEffect alloc] init];
[EAGLContext setCurrentContext:_glkView.context];
GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(0, self.bounds.size.width, 0, self.bounds.size.height, 0.0, 1.0);
self.effect.transform.projectionMatrix = projectionMatrix;
GLKMatrix4 modelMatrix = GLKMatrix4Translate(GLKMatrix4Identity, 0.0, 0.0, -0.1);
self.effect.transform.modelviewMatrix = modelMatrix;
//Load image
UIImage* image = ...
CGImageRef cgImage = image.CGImage;
NSError* error;
self.textureInfo = [GLKTextureLoader textureWithCGImage:cgImage options:nil error:&error];
if (error) {
NSLog(#"could not load texture");
}
if (_textureInfo) {
self.effect.texture2d0.envMode = GLKTextureEnvModeReplace;
self.effect.texture2d0.target = GLKTextureTarget2D;
self.effect.texture2d0.name = _textureInfo.name;
}
//Bind buffers
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glGenBuffers(1, &_indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer (GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid *) offsetof(Vertex, Position));
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, _currentData.texCoordinates);
//Populate buffers
glBufferData(GL_ARRAY_BUFFER, (sizeof(Vertex) * _currentData.numberOfVertices), _currentData.vertices, GL_STATIC_DRAW);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, (sizeof(GLubyte) * _currentData.numberOfIndices), _currentData.indices, GL_STATIC_DRAW);
[self.effect prepareToDraw];
[self.glkView display];
//Inside - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect;
glDrawElements(GL_TRIANGLE_STRIP, (sizeof(GLubyte) * _currentData.numberOfIndices), GL_UNSIGNED_BYTE, 0);
The data for texture coordinates is:
SimpleVertex* vertices1 = (SimpleVertex *)malloc(sizeof(SimpleVertex) * 4);
vertices1[0].Position[0] = 0.0;
vertices1[0].Position[1] = 0.0;
vertices1[1].Position[0] = 0.0;
vertices1[1].Position[1] = 1.0;
vertices1[2].Position[0] = 1.0;
vertices1[2].Position[1] = 1.0;
vertices1[3].Position[0] = 1.0;
vertices1[3].Position[1] = 0.0;
typedef struct
{
GLfloat Position[2];
}
SimpleVertex;
As I said, I am sure the rectangle is being drawn correctly; I can check this by filling it with a gradient, and it looks fine. Also, the texture loader is loading the image correctly; I can check this in the debugger.
Can someone point out what I am missing or doing wrong here?
OK, I figured it out. The mistake was in the way I was passing texture coordinate data to OpenGL. I was passing the data for vertices and colour this way:
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer (GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid *) offsetof(Vertex, Position));
But for texture coordinates I created a separate structure and was passing a pointer to that data rather than an offset into the already bound vertex array:
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, _currentData.texCoordinates);
The solution then was obviously to include the data within the Vertex data structure and then give the offset like so:
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid *) offsetof(Vertex, TexCoord));
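with the Vertex struct carrying the texture coordinates alongside the position, something like this (field names assumed; a colour field, if used, would sit in the same struct):
typedef struct
{
    GLfloat Position[2];
    GLfloat TexCoord[2];
} Vertex;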
I am still a newbie to OpenGL ES so there might have been a better way to solve this problem but this worked for me.

OpenGL and Cocoa not Working

I am trying to draw a very simple blue square on a red background in OpenGL using an NSOpenGLView on Mountain Lion. The code is simple and should work; I'm assuming it's a problem with how I'm setting up the context.
Here is my GLView interface:
#import <Cocoa/Cocoa.h>
#import <OpenGL/gl.h>
#import <GLKit/GLKit.h>
typedef struct {
char *Name;
GLint Location;
} Uniform;
@interface MyOpenGLView : NSOpenGLView {
Uniform *_uniformArray;
int _uniformArraySize;
GLKMatrix4 _projectionMatrix;
GLKMatrix4 _modelViewMatrix;
IBOutlet NSWindow *window;
int height, width;
}
-(void)drawRect:(NSRect)bounds;
@end
The implementation:
GLfloat square[] = {
-0.5, -0.5,
0.5, -0.5,
-0.5, 0.5,
0.5, 0.5
};
- (id)initWithFrame:(NSRect)frame {
self = [super initWithFrame:frame];
if (self) {
}
return self;
}
-(void)awakeFromNib {
NSString *vertexShaderSource = [NSString stringWithContentsOfFile:[[NSBundle mainBundle]pathForResource:@"VertexShader" ofType:@"vsh"] encoding:NSUTF8StringEncoding error:nil];
const char *vertexShaderSourceCString = [vertexShaderSource cStringUsingEncoding:NSUTF8StringEncoding];
NSLog(@"%s",vertexShaderSourceCString);
NSString *fragmentShaderSource = [NSString stringWithContentsOfFile:[[NSBundle mainBundle]pathForResource:@"FragmentShader" ofType:@"fsh"] encoding:NSUTF8StringEncoding error:nil];
const char *fragmentShaderSourceCString = [fragmentShaderSource cStringUsingEncoding:NSUTF8StringEncoding];
NSLog(@"%s",fragmentShaderSourceCString);
NSOpenGLContext *glContext = [self openGLContext];
[glContext makeCurrentContext];
GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentShader, 1, &fragmentShaderSourceCString, NULL);
glCompileShader(fragmentShader);
GLint compileSuccess;
glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, &compileSuccess);
if (compileSuccess == GL_FALSE) {
GLint logLength;
glGetShaderiv(fragmentShader, GL_INFO_LOG_LENGTH, &logLength);
if(logLength > 0) {
GLchar *log = (GLchar *)malloc(logLength);
glGetShaderInfoLog(fragmentShader, logLength, &logLength, log);
NSLog(#"Shader compile log:\n%s", log);
free(log);
}
exit(1);
}
GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShader, 1, &vertexShaderSourceCString, NULL);
glCompileShader(vertexShader);
GLuint program = glCreateProgram();
glAttachShader(program, fragmentShader);
glAttachShader(program, vertexShader);
glLinkProgram(program);
glUseProgram(program);
const char *aPositionCString = [@"a_position" cStringUsingEncoding:NSUTF8StringEncoding];
GLuint aPosition = glGetAttribLocation(program, aPositionCString);
glVertexAttribPointer(aPosition, 2, GL_FLOAT, GL_FALSE, 0, square);
glEnable(aPosition);
GLint maxUniformLength;
GLint numberOfUniforms;
char *uniformName;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &numberOfUniforms);
glGetProgramiv(program, GL_ACTIVE_UNIFORM_MAX_LENGTH, &maxUniformLength);
_uniformArray = malloc(numberOfUniforms * sizeof(Uniform));
_uniformArraySize = numberOfUniforms;
for (int i =0; i <numberOfUniforms; i++) {
GLint size;
GLenum type;
GLint location;
uniformName = malloc(sizeof(char*)*maxUniformLength);
glGetActiveUniform(program, i, maxUniformLength, NULL, &size, &type, uniformName);
_uniformArray[i].Name = uniformName;
location = glGetUniformLocation(program, uniformName);
_uniformArray[i].Location = location;
}
_modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 0.0f);
_projectionMatrix = GLKMatrix4MakeOrtho(-1, 1, -1.5, 1.5, -1, 1);
}
- (void)drawRect:(NSRect)bounds {
glClearColor(1.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);
glViewport(0, 0, 480, 360);
for (int i = 0; i <_uniformArraySize; i++) {
if (strcmp(_uniformArray[i].Name, "ModelViewProjectionMatrix")==0) {
// Multiply the transformation matrices together
GLKMatrix4 modelViewProjectionMatrix = GLKMatrix4Multiply(_projectionMatrix, _modelViewMatrix);
glUniformMatrix4fv(_uniformArray[i].Location, 1, GL_FALSE, modelViewProjectionMatrix.m);
}
}
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glFlush();
}
My very simple fragment shader:
void main() {
gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);
}
simple vertex shader:
attribute vec4 a_position;
uniform mat4 ModelViewProjectionMatrix;
void main() {
gl_Position = a_position * ModelViewProjectionMatrix;
}
I get no compilation errors, just a red screen and no blue square. Could someone please help me figure out what's wrong? I do get a warning, though, stating that gl.h and gl3.h are both included. I'd like to be using OpenGL 2.
Where is your view matrix (AKA your look-at matrix)? For my NSOpenGLView, I have at least three matrices: the model-view matrix (the product of the rotation and translation matrices), the projection matrix (which takes into account your frustum scale, aspect ratio, near plane, and far plane), and the view matrix (which takes into account the viewer's eye position in the "world", the point of focus, and the direction of the up vector). I will say big ups on placing the code in awakeFromNib, taught me something...
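For reference, with GLKit a view (look-at) matrix can be built with GLKMatrix4MakeLookAt and folded into the MVP before uploading it; a minimal sketch (the eye position and focus point are only illustrative):
// Eye at (0, 0, 2), looking at the origin, with +Y as the up vector.
GLKMatrix4 viewMatrix = GLKMatrix4MakeLookAt(0.0f, 0.0f, 2.0f,    // eye position
                                             0.0f, 0.0f, 0.0f,    // point of focus
                                             0.0f, 1.0f, 0.0f);   // up vector
GLKMatrix4 modelViewMatrix = GLKMatrix4Multiply(viewMatrix, _modelViewMatrix);
GLKMatrix4 mvp = GLKMatrix4Multiply(_projectionMatrix, modelViewMatrix);
// Upload to the "ModelViewProjectionMatrix" location found in the uniform loop.
glUniformMatrix4fv(_uniformArray[i].Location, 1, GL_FALSE, mvp.m);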

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BORDER, 0); Throws a GL_INVALID_ENUM?

I am creating a 2D OpenGL texture from an NSImage (Cocoa desktop).
I grabbed some code from the internet and it seems to work.
However, I started inserting calls to glGetError() here and there to track down other rendering bugs, and I traced a GL_INVALID_ENUM error to the call mentioned above. As per the documentation on glTexParameteri, both GL_TEXTURE_2D and GL_TEXTURE_BORDER seem to be valid parameters...?
Any clue?
This is the code:
- (void) createOpenGLTexture
{
[_openGLContext makeCurrentContext];
if (_textureID != 0) {
glDeleteTextures(1, &_textureID);
_textureID = 0;
}
NSSize imageSize = [_originalImage size];
// .............................................................
// Flip Image Vertically
NSImage* image = [[NSImage alloc] initWithSize:imageSize];
NSRect rect = NSMakeRect(0, 0, imageSize.width, imageSize.height);
[image lockFocus];
NSAffineTransform* transform = [NSAffineTransform transform];
[transform translateXBy:0 yBy:rect.size.height];
[transform scaleXBy:+1.0 yBy:-1.0];
[transform concat];
[_originalImage drawAtPoint:NSZeroPoint
fromRect:rect
operation:NSCompositeCopy
fraction:1];
[image unlockFocus];
// Then we grab the raw bitmap data from the NSBitmapImageRep that is
// the ‘source’ of an NSImage
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithData:[image TIFFRepresentation]];
[image release];
GLenum imageFormat = GL_RGBA;
imageSize = [bitmap size]; // Needed?
long sourceRowBytes = [bitmap bytesPerRow];
// This SHOULD be enough but nope
GLubyte* sourcePic = (GLubyte*) [bitmap bitmapData];
// We have to copy that raw data one row at a time….yay
GLubyte* pic = malloc( imageSize.height * sourceRowBytes );
GLuint i;
GLuint intHeight = (GLuint) (imageSize.height);
for (i = 0; i < imageSize.height; i++) {
memcpy(pic + (i * sourceRowBytes),
sourcePic + (( intHeight - i - 1 )*sourceRowBytes),
sourceRowBytes);
}
[bitmap release];
sourcePic = pic;
// .........................................................................
// Create the texture proper
glEnable(GL_TEXTURE_2D);
checkOpenGLError();
glEnable(GL_COLOR_MATERIAL);
checkOpenGLError();
glGenTextures (1, &_textureID);
checkOpenGLError();
glBindTexture (GL_TEXTURE_2D, _textureID);
checkOpenGLError();
glPixelStorei(GL_UNPACK_ALIGNMENT,1);
checkOpenGLError();
// Here we set the basic properties such as how it looks when resized and if it's bordered
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BORDER, 0);
checkOpenGLError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
checkOpenGLError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
checkOpenGLError();
// Not really necessary all the time but we can also define if it can be tiled (repeating)
// Here GL_TEXTURE_WRAP_S = horizontal and GL_TEXTURE_WRAP_T = vertical
// This defaults to GL_REPEAT so we're going to prevent that by using GL_CLAMP
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
checkOpenGLError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
checkOpenGLError();
// And.....CREATE
glTexImage2D(GL_TEXTURE_2D,
0,
imageFormat,
rect.size.width,
rect.size.height,
0,
imageFormat,
GL_UNSIGNED_BYTE,
sourcePic);
checkOpenGLError();
glDisable(GL_TEXTURE_2D);
checkOpenGLError();
glDisable(GL_COLOR_MATERIAL);
checkOpenGLError();
}
GL_TEXTURE_BORDER is not a valid enum for glTexParameter(). Reference: http://www.opengl.org/sdk/docs/man/xhtml/glTexParameter.xml
You cannot (and don't need to) specify the border size. GL_TEXTURE_BORDER is only a valid enum for glGetTexLevelParameter(), to retrieve the border size the OpenGL driver assigned to a specific texture level. As far as I am aware this is not used any more and the function returns 0 on all newer systems.
Don't confuse it with GL_TEXTURE_BORDER_COLOR, which is used to set the color rendered when using the GL_CLAMP_TO_BORDER wrapping mode.
GL_TEXTURE_BORDER isn't a valid value for pname when calling glTexParameteri. The border is specified in the call to glTexImage2D.
http://www.opengl.org/sdk/docs/man/xhtml/glTexParameter.xml
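In other words, the glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BORDER, 0) line can simply be removed; the border is the sixth argument of glTexImage2D, which the code in the question already passes as 0:
glTexImage2D(GL_TEXTURE_2D,
             0,                  // mipmap level
             imageFormat,        // internal format
             rect.size.width,
             rect.size.height,
             0,                  // border: must be 0
             imageFormat,        // format
             GL_UNSIGNED_BYTE,
             sourcePic);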
