I have a problem with a project I want to support on iOS 5. The app does 3D rendering using OpenGL ES and antialiasing using multisample framebuffers. The code works great on iOS 4.3, but since iOS 5 the 3D models are no longer rendered; on the simulator I just get a pink screen.
After some testing I found that the function
glResolveMultisampleFramebufferAPPLE()
is the problem. It raises error 0x502 (GL_INVALID_OPERATION):
NSLog(@"0x%x", glGetError()); // "0x502"
I have no idea what is going on or why this function fails on iOS 5. Can anyone please help me with this? Here is the code that creates the framebuffers.
//first destroy frame buffers
glDeleteFramebuffersOES(1, &viewFramebuffer);
viewFramebuffer = 0;
glDeleteRenderbuffersOES(1, &viewRenderbuffer);
viewRenderbuffer = 0;
if(depthRenderbuffer)
{
glDeleteRenderbuffersOES(1, &depthRenderbuffer);
depthRenderbuffer = 0;
}
// Generate IDs for a framebuffer object and a color renderbuffer
glGenFramebuffersOES(1, &viewFramebuffer);
glGenRenderbuffersOES(1, &viewRenderbuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer),
// allowing us to draw into a buffer that will later be rendered to the screen wherever the layer is (which corresponds with our view).
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(id<EAGLDrawable>)self.layer];
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
// Width and height of the view bounding box in pixels
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
//Generate our MSAA Frame and Render buffers
glGenFramebuffersOES(1, &msaaFramebuffer);
glGenRenderbuffersOES(1, &msaaRenderBuffer);
//Bind our MSAA buffers
glBindFramebufferOES(GL_FRAMEBUFFER_OES, msaaFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, msaaRenderBuffer);
// Create the multisample storage for the MSAA color buffer.
// 4 is the number of samples used to produce one pixel in the resolve buffer.
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGB565_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, msaaRenderBuffer);
glGenRenderbuffersOES(1, &msaaDepthBuffer);
//Bind the msaa depth buffer.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, msaaDepthBuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_DEPTH_COMPONENT16_OES, backingWidth , backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, msaaDepthBuffer);
// Make sure that you are drawing to the current context
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, msaaFramebuffer);
[controller drawView:self];
//Bind both MSAA and View FrameBuffers
glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, msaaFramebuffer);
glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
if((err = glGetError())) NSLog(@"%x error in line %u in method %s", err, __LINE__, __FUNCTION__);
// Call a resolve to combine both buffers
glResolveMultisampleFramebufferAPPLE();
if((err = glGetError())) NSLog(@"%x error in line %u in method %s", err, __LINE__, __FUNCTION__);
// Use discard to improve fill rate and overall performance!
GLenum attachments[] = {GL_DEPTH_ATTACHMENT_OES, GL_COLOR_ATTACHMENT0_OES, GL_STENCIL_ATTACHMENT_OES};
glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 3, attachments);
// Present final image to screen
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
For me, this problem was caused by the format passed to glRenderbufferStorageMultisampleAPPLE.
Try using GL_RGBA8_OES or GL_RGB5_A1_OES instead.
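For example, a sketch of the corrected call, keeping the sample count and dimensions from the question's code:
// GL_RGB565_OES was the culprit in my case; an RGBA8 color buffer resolved fine.
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES, backingWidth, backingHeight);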
Please tell me if the question is vague; I need the answer as soon as possible.
For more information about the problem you can refer to this; I just didn't understand how to manage buffers properly.
A red rectangle drawn on 2D texture disappears right after being drawn
I am in the last stages of customizing the class COpenGLControl.
I have created two instances of the class in my MFC dialog:
Whenever the zoom extent is changed in the bigger window, it is drawn as a red rectangle on the smaller window, which is always in full-extent mode. To establish such a relation between the two instances, I use a user-defined message and send it to the parent of the class, as sketched below.
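(A minimal sketch of that wiring; WM_RECTANGLECHANGED and OnRectangleChanged appear later in the post, while the exact message ID value and map placement are my assumptions:)
// User-defined message, e.g. in a shared header:
#define WM_RECTANGLECHANGED (WM_APP + 1)
// In the parent dialog's message map:
//   ON_MESSAGE(WM_RECTANGLECHANGED, OnRectangleChanged)
// with the handler declared as:
//   afx_msg LRESULT OnRectangleChanged(WPARAM wParam, LPARAM lParam);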
The main trouble, based on the information above:
1- When I pan in the bigger window (i.e. I cause the user-defined message to be sent rapidly and m_oglWindow2.DrawRectangleOnTopOfTexture() to be called rapidly), I see a trail of red rectangles in the smaller window that are shown but immediately disappear.
2- CPU usage immediately jumps from 3% to 25% when panning.
3- In other navigation tasks like fixed zoom in, fixed zoom out, pan, etc., a red rectangle flashes and then immediately disappears. It seems the rectangle is only there while control is inside m_oglWindow2.DrawRectangleOnTopOfTexture(), but I want it to stay until the next call of m_oglWindow2.DrawRectangleOnTopOfTexture().
4- Making calls to glDrawBuffer(GL_FRONT_AND_BACK) and glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) causes the texture in the smaller window to flicker off and on even when the mouse is idle.
I know the problem is mostly because of the glClear, glDrawBuffer, and SwapBuffers calls in the following code, but I don't know exactly how to solve it:
void COpenGLControl::OnTimer(UINT nIDEvent)
{
wglMakeCurrent(hdc,hrc);
switch (nIDEvent)
{
case 1:
{
// Clear color and depth buffer bits
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Draw OpenGL scene
oglDrawScene();
// Swap buffers
SwapBuffers(hdc);
break;
}
default:
break;
}
CWnd::OnTimer(nIDEvent);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::DrawRectangleOnTopOfTexture()
{
wglMakeCurrent(hdc, hrc);
//glDrawBuffer(GL_FRONT_AND_BACK);
//glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushAttrib(GL_ENABLE_BIT|GL_CURRENT_BIT);
glDisable(target);
glColor3f(1.0f,0.0f,0.0f);
glBegin(GL_LINE_LOOP);
glVertex2f(RectangleToDraw.at(0),RectangleToDraw.at(1));
glVertex2f(RectangleToDraw.at(0),RectangleToDraw.at(3));
glVertex2f(RectangleToDraw.at(2),RectangleToDraw.at(3));
glVertex2f(RectangleToDraw.at(2),RectangleToDraw.at(1));
glEnd();
glPopAttrib();
SwapBuffers(hdc);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::OnDraw(CDC *pDC)
{
// TODO: Camera controls
wglMakeCurrent(hdc,hrc);
glLoadIdentity();
gluLookAt(0,0,1,0,0,0,0,1,0);
glTranslatef(m_fPosX, m_fPosY, 0.0f);
glScalef(m_fZoom,m_fZoom,1.0);
wglMakeCurrent(NULL, NULL);
}
Remember:
The OnDraw function is called just twice in the smaller window: first when the window is initialized, and second when m_oglWindow2.ZoomToFullExtent() is called. After that, for each call of OnDraw in the bigger window there is a call to DrawRectangleOnTopOfTexture() in the smaller one, but DrawRectangleOnTopOfTexture() is never called in the bigger window.
I would appreciate it if you could:
correct my code
point me to a good tutorial on buffers in OpenGL (e.g. color buffers), in particular for drawings that cannot be done in a single function or a single thread
------------------------------------------------------------
I have added explanations below to provide further information about how the class works, in case it is needed. If you think it clutters the question, feel free to edit out whichever parts you consider unnecessary. But please do help me.
oglInitialize sets the initial parameters for the scene:
void COpenGLControl::oglInitialize(void)
{
// Initial Setup:
//
static PIXELFORMATDESCRIPTOR pfd =
{
sizeof(PIXELFORMATDESCRIPTOR),
1,
PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
PFD_TYPE_RGBA,
32, // bit depth
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
24, // z-buffer depth
8,0,PFD_MAIN_PLANE, 0, 0, 0, 0,
};
// Get device context only once.
hdc = GetDC()->m_hDC;
// Pixel format.
m_nPixelFormat = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, m_nPixelFormat, &pfd);
// Create the OpenGL Rendering Context.
hrc = wglCreateContext(hdc);
wglMakeCurrent(hdc, hrc);
// Basic Setup:
//
// Set color to use when clearing the background.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClearDepth(1.0f);
// Turn on backface culling
glFrontFace(GL_CCW);
glCullFace(GL_BACK);
// Turn on depth testing
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
// Send draw request
OnDraw(NULL);
wglMakeCurrent(NULL, NULL);
}
An example of a navigation task, pan:
void COpenGLControl::OnMouseMove(UINT nFlags, CPoint point)
{
// TODO: Add your message handler code here and/or call default
if (WantToPan)
{
if (m_fLastX < 0.0f && m_fLastY < 0.0f)
{
m_fLastX = (float)point.x;
m_fLastY = (float)point.y;
}
int diffX = (int)(point.x - m_fLastX);
int diffY = (int)(point.y - m_fLastY);
m_fLastX = (float)point.x;
m_fLastY = (float)point.y;
if (nFlags & MK_MBUTTON)
{
m_fPosX += (float)0.05f*m_fZoomInverse*diffX;
m_fPosY -= (float)0.05f*m_fZoomInverse*diffY;
}
if (WantToSetViewRectangle)
setViewRectangle();
OnDraw(NULL);
}
CWnd::OnMouseMove(nFlags, point);
}
The most important part: before calling OnDraw, each navigation function checks whether the client programmer has set WantToSetViewRectangle to true, meaning the view rectangle for the window should be recalculated; if so, it calls setViewRectangle(), shown below, which sends a message to the parent whenever ViewRectangle has been updated:
void COpenGLControl::setViewRectangle()
{
CWnd *pParentOfClass = CWnd::GetParent();
ViewRectangle.at(0) = -m_fPosX - oglWindowWidth*m_fZoomInverse/2;
ViewRectangle.at(1) = -m_fPosY - oglWindowHeight*m_fZoomInverse/2;
ViewRectangle.at(2) = -m_fPosX + oglWindowWidth*m_fZoomInverse/2;
ViewRectangle.at(3) = -m_fPosY + oglWindowHeight*m_fZoomInverse/2;
bool is_equal = ViewRectangle == LastViewRectangle;
if (!is_equal)
pParentOfClass ->SendMessage(WM_RECTANGLECHANGED,0,0);
LastViewRectangle.at(0) = ViewRectangle.at(0);
LastViewRectangle.at(1) = ViewRectangle.at(1);
LastViewRectangle.at(2) = ViewRectangle.at(2);
LastViewRectangle.at(3) = ViewRectangle.at(3);
}
This is how the class is used in the client code:
MyOpenGLTestDlg.h
Two instances of the class:
COpenGLControl m_oglWindow;
COpenGLControl m_oglWindow2;
MyOpenGLTestDlg.cpp
Apply the texture to both windows and set them to full extent in OnInitDialog:
m_oglWindow.pImage = m_files.pRasterData;
m_oglWindow.setImageWidthHeightType(m_files.RasterXSize,m_files.RasterYSize,m_files.eType);
m_oglWindow.m_unpTimer = m_oglWindow.SetTimer(1, 1, 0);
m_oglWindow2.pImage = m_files.pRasterData;
m_oglWindow2.setImageWidthHeightType(m_files.RasterXSize,m_files.RasterYSize,m_files.eType);
m_oglWindow2.m_unpTimer = m_oglWindow2.SetTimer(1, 20, 0);
m_oglWindow2.ZoomToFullExtent();
m_oglWindow.ZoomToFullExtent();
Pan, the zoom tool, and setViewRectangle should be active for the bigger window but not for the smaller one:
m_oglWindow.WantToPan = true;
m_oglWindow.WantToUseZoomTool = true;
m_oglWindow.WantToSetViewRectangle = true;
Handling the user-defined message in the parent: pass the ViewRectangle data to the smaller window and draw the red rectangle:
LRESULT CMyOpenGLTestDlg::OnRectangleChanged(WPARAM wParam,LPARAM lParam)
{
m_oglWindow2.RectangleToDraw = m_oglWindow.ViewRectangle;
m_oglWindow2.DrawRectangleOnTopOfTexture();
return 0;
}
Here is the full customized class, in case you are interested in downloading it to fix my problem.
The problem is that you're drawing both on a timer and when your application receives a WM_PAINT message. MFC invokes your OnDraw(...) callback whenever it needs to repaint the window, so you should move ALL of your drawing functionality into OnDraw(...) and call OnDraw(...) from your timer function.
void COpenGLControl::OnTimer(UINT nIDEvent)
{
switch (nIDEvent)
{
case 1:
{
OnDraw (NULL);
break;
}
default:
break;
}
CWnd::OnTimer(nIDEvent);
}
void COpenGLControl::OnDraw(CDC *pDC)
{
// TODO: Camera controls
wglMakeCurrent(hdc,hrc);
// Clear color and depth buffer bits
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity ();
gluLookAt (0,0,1,0,0,0,0,1,0);
glTranslatef (m_fPosX, m_fPosY, 0.0f);
glScalef (m_fZoom,m_fZoom,1.0);
// Draw OpenGL scene
oglDrawScene();
// Swap buffers
SwapBuffers(hdc);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::DrawRectangleOnTopOfTexture()
{
glPushAttrib(GL_ENABLE_BIT|GL_CURRENT_BIT);
glDisable(target);
glColor3f(1.0f,0.0f,0.0f);
glBegin(GL_LINE_LOOP);
glVertex2f(RectangleToDraw.at(0),RectangleToDraw.at(1));
glVertex2f(RectangleToDraw.at(0),RectangleToDraw.at(3));
glVertex2f(RectangleToDraw.at(2),RectangleToDraw.at(3));
glVertex2f(RectangleToDraw.at(2),RectangleToDraw.at(1));
glEnd();
glPopAttrib();
}
And only ever make calls to wglMakeCurrent(...) within OnDraw(...). This function is really meant for situations where you're rendering into multiple render contexts or drawing from multiple threads.
I have an OpenGL 3.2 Core context set up on OS X 10.7.5 and am trying to render to a 3D texture
using a layered rendering approach. The geometry shader output gl_Layer is supported, but I cannot attach a GL_TEXTURE_3D to my framebuffer attachment: it returns GL_FRAMEBUFFER_UNSUPPORTED.
This is the card and driver version in my MBP:
AMD Radeon HD 6770M 1024 MB - OpenGL 3.2 CORE (ATI-7.32.12)
This feature does not directly relate to a specific extension AFAIK.
Does anybody know how to figure out whether this is unsupported by the driver or hardware?
Thanks so much.
Below is the code to reproduce the problem. I use GLFW to set up the context:
// Initialize GLFW
if (!glfwInit())
throw "Failed to initialize GLFW";
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 2);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwOpenWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
// Open a window and create its OpenGL context
if (!glfwOpenWindow(720, 480, 8, 8, 8, 8, 24, 8, GLFW_WINDOW))
throw "Failed to open GLFW window";
//
// ...
//
GLuint framebuffer, texture;
GLenum status;
glGenFramebuffers(1, &framebuffer);
// Set up the FBO with one texture attachment
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, framebuffer);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_3D, texture);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 256, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, texture, 0);
status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
throw status;
//
// status is GL_FRAMEBUFFER_UNSUPPORTED here !!!
//
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glDeleteTextures(1, &texture);
glDeleteFramebuffers(1, &framebuffer);
exit(1);
Does anybody know how to figure out whether this is unsupported by the driver or hardware?
It just told you. That's what GL_FRAMEBUFFER_UNSUPPORTED means: it's the driver exercising veto-power over any framebuffer attachments it doesn't like for any reason whatsoever.
There's not much you can do when this happens except try other things. Perhaps render to a 2D array texture instead.
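For example, a layered-rendering setup over a 2D array texture instead of a 3D texture might look like this (a sketch only, reusing the question's 256-layer dimensions; whether this driver accepts it still has to be tested):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// width, height, layer count: mirrors the question's 256^3 volume
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, 256, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Attach the whole array; the geometry shader then selects a layer via gl_Layer
glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex, 0);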
I am trying to figure out a way to cut a certain region out of a background texture, so that a custom pattern is not rendered on the screen for that background. For example:
This square can be any pattern.
I am using a frame buffer object and the stencil buffer to achieve this kind of effect. Here is the code:
fbo.begin();
//Disables ColorMask and DepthMask so that all the rendering is done on the Stencil Buffer
Gdx.gl20.glColorMask(false, false, false, false);
Gdx.gl20.glDepthMask(false);
Gdx.gl20.glEnable(GL20.GL_STENCIL_TEST);
Gdx.gl20.glStencilFunc(GL20.GL_ALWAYS, 1, 0xFFFFFFFF);
Gdx.gl20.glStencilOp(GL20.GL_REPLACE, GL20.GL_REPLACE, GL20.GL_REPLACE);
stage.getSpriteBatch().begin();
rHeart.draw(stage.getSpriteBatch(), 1); //Draws the required pattern on the stencil buffer
//Enables the ColorMask and DepthMask to resume normal rendering
Gdx.gl20.glColorMask(true, true, true, true);
Gdx.gl20.glDepthMask(true);
Gdx.gl20.glStencilFunc(GL20.GL_EQUAL, 1, 0xFFFFFFFF);
Gdx.gl20.glStencilOp(GL20.GL_KEEP, GL20.GL_KEEP, GL20.GL_KEEP);
background.draw(stage.getSpriteBatch(), 1); //Draws the background such that the background is not rendered on the required pattern, leaving that area black.
stage.getSpriteBatch().end();
Gdx.gl20.glDisable(GL20.GL_STENCIL_TEST);
fbo.end();
However, this is not working at all. How am I supposed to do this using stencil buffers? I am also having some difficulty understanding glStencilFunc and glStencilOp; it would be very helpful if anyone could shed some light on these two.
UPDATE: I have also tried producing something of the same kind using glColorMask. Here is the code:
Gdx.gl20.glClearColor(0, 0, 0, 0);
stage.draw();
FrameBuffer.clearAllFrameBuffers(Gdx.app);
fbo1.begin();
Gdx.gl20.glClearColor(0, 0, 0, 0);
batch.begin();
rubber.draw(batch, 1);
Gdx.gl20.glColorMask(false, false, false, true);
coverHeart.draw(batch, 1);
Gdx.gl20.glColorMask(true, true, true, false);
batch.end();
fbo1.end();
toDrawHeart = new Image(new TextureRegion(fbo1.getColorBufferTexture()));
batch.begin();
toDrawHeart.draw(batch, 1);
batch.end();
This code produces this:
Instead of something like this (ignore the window sizes and colour tones):
Note: I am using the libgdx library.
While drawing with a SpriteBatch, GL state changes are ignored until end() is called, because the batch defers the actual draw calls. If you want to use stenciling with SpriteBatch, you'll need to break up the batch drawing. One note: I've left out FBOs, but that shouldn't make a difference.
@Override
public void create() {
camera = new OrthographicCamera(1, 1);
batch = new SpriteBatch();
texture = new Texture(Gdx.files.internal("data/badlogic.jpg"));
texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
TextureRegion region = new TextureRegion(texture, 0, 0, 256, 256);
sprite = new Sprite(region);
sprite.setSize(1f, 1f);
sprite.setPosition(-0.5f, -0.5f);
spriteUpsideDown = new Sprite(new TextureRegion(texture, 1f, 1f, 0f, 0f));
spriteUpsideDown.setSize(1f, 1f);
spriteUpsideDown.setPosition(-0.5f, -0.5f);
pattern = new Sprite(region);
pattern.setSize(0.5f, 0.5f);
pattern.setPosition(-0.25f, -0.25f);
<< Set Input Processor >>
}
The input processor allows setting two boolean flags, breakBatch1 and breakBatch2, via the keyboard (libgdx on desktop); they are used to break up the SpriteBatch drawing.
@Override
public void render() {
Gdx.gl.glClearColor(1, 1, 1, 1);
Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_STENCIL_BUFFER_BIT);
batch.setProjectionMatrix(camera.combined);
// setup drawing to stencil buffer
Gdx.gl20.glEnable(GL20.GL_STENCIL_TEST);
Gdx.gl20.glStencilFunc(GL20.GL_ALWAYS, 0x1, 0xffffffff);
Gdx.gl20.glStencilOp(GL20.GL_REPLACE, GL20.GL_REPLACE, GL20.GL_REPLACE);
Gdx.gl20.glColorMask(false, false, false, false);
// draw base pattern
batch.begin();
pattern.draw(batch);
if(breakBatch1) { batch.end(); batch.begin(); }
// fix stencil buffer, enable color buffer
Gdx.gl20.glColorMask(true, true, true, true);
Gdx.gl20.glStencilOp(GL20.GL_KEEP, GL20.GL_KEEP, GL20.GL_KEEP);
// draw where pattern has NOT been drawn
Gdx.gl20.glStencilFunc(GL20.GL_NOTEQUAL, 0x1, 0xff);
sprite.draw(batch);
if(breakBatch2) { batch.end(); batch.begin(); }
// draw where pattern HAS been drawn.
Gdx.gl20.glStencilFunc(GL20.GL_EQUAL, 0x1, 0xff);
spriteUpsideDown.draw(batch);
batch.end();
}
Gdx.gl20.glStencilFunc(GL20.GL_REPLACE, GL20.GL_REPLACE, GL20.GL_REPLACE);
These are not the right arguments to glStencilFunc. I think you mean glStencilOp here.
You need to use glGetError in your code; it will alert you to these kinds of errors.
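Something as simple as the following check after the suspect call would flag the mistake immediately (sketched with the plain C API; libgdx exposes the same glGetError through Gdx.gl20):
GLenum err = glGetError();
if (err != GL_NO_ERROR)
    printf("GL error: 0x%x\n", err); // 0x500 = GL_INVALID_ENUM, 0x502 = GL_INVALID_OPERATION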
I believe your problem is that your initial GL_REPLACE stencil operation is applied to all the pixels drawn by rHeart.draw, regardless of the shape of any texture applied to the quad.
Thus, the stencil value is applied to every pixel of your quads, which gives your problem.
If the texture applied to your quad has an alpha channel, then since GL_ALPHA_TEST is not supported, you could set up your shader to discard the fully transparent pixels, preventing them from being drawn to the stencil buffer.
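As for the asker's request for light on glStencilFunc and glStencilOp, the semantics work roughly like this (shown with the plain C API; the Gdx.gl20 wrappers take the same arguments):
// glStencilFunc(func, ref, mask): a fragment passes the stencil test iff
//   (ref & mask) func (stencilValue & mask)
// glStencilOp(sfail, dpfail, dppass): what to write to the stencil buffer when
//   the stencil test fails / the depth test fails / both tests pass.
glStencilFunc(GL_ALWAYS, 1, 0xFF);         // pass everywhere...
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); // ...writing ref = 1 where fragments land
// ... draw the mask shape here ...
glStencilFunc(GL_EQUAL, 1, 0xFF);          // then pass only where 1 was written
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);    // and leave the stencil buffer untouched
// ... draw the masked content here ...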
In my current project I need to draw a complex background as a background for a few UITableView cells. Since the code for drawing this background is pretty long and CPU heavy when executed in the cell's drawRect: method, I decided to render it only once to a CGLayer and then blit it to the cell to enhance the overall performance.
The code I'm using to draw the background to a CGLayer:
+ (CGLayerRef)standardCellBackgroundLayer
{
static CGLayerRef standardCellBackgroundLayer;
if(standardCellBackgroundLayer == NULL)
{
CGContextRef viewContext = UIGraphicsGetCurrentContext();
CGRect rect = CGRectMake(0, 0, [UIScreen mainScreen].applicationFrame.size.width, PLACES_DEFAULT_CELL_HEIGHT);
standardCellBackgroundLayer = CGLayerCreateWithContext(viewContext, rect.size, NULL);
CGContextRef context = CGLayerGetContext(standardCellBackgroundLayer);
// Setup the paths
CGRect rectForShadowPadding = CGRectInset(rect, (PLACES_DEFAULT_CELL_SHADOW_SIZE / 2) + PLACES_DEFAULT_CELL_SIDE_PADDING, (PLACES_DEFAULT_CELL_SHADOW_SIZE / 2));
CGMutablePathRef path = createPathForRoundedRect(rectForShadowPadding, LIST_ITEM_CORNER_RADIUS);
// Save the graphics context state
CGContextSaveGState(context);
// Draw shadow
CGContextSetShadowWithColor(context, CGSizeMake(0, 0), PLACES_DEFAULT_CELL_SHADOW_SIZE, [Skin shadowColor]);
CGContextAddPath(context, path);
CGContextSetFillColorWithColor(context, [Skin whiteColor]);
CGContextFillPath(context);
// Clip for gradient
CGContextAddPath(context, path);
CGContextClip(context);
// Draw gradient on clipped path
CGPoint startPoint = rectForShadowPadding.origin;
CGPoint endPoint = CGPointMake(rectForShadowPadding.origin.x, CGRectGetMaxY(rectForShadowPadding));
CGContextDrawLinearGradient(context, [Skin listGradient], startPoint, endPoint, 0);
// Restore the graphics state and release everything
CGContextRestoreGState(context);
CGPathRelease(path);
}
return standardCellBackgroundLayer;
}
The code to blit the layer to the current context:
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawLayerAtPoint(context, CGPointMake(0.0, 0.0), [Skin standardCellBackgroundLayer]);
}
This actually does the trick pretty nicely, but the one problem I'm having is that the rounded corners (see the static method) are very jaggy when blitted to the screen. This wasn't the case when the drawing code was at its original position in the drawRect: method.
How do I get the antialiasing back?
For some reason, the following methods don't have any impact on the anti-aliasing:
CGContextSetShouldAntialias(context, YES);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGContextSetAllowsAntialiasing(context, YES);
Thanks in advance!
You can simplify this by just using UIGraphicsBeginImageContextWithOptions and setting the scale to 0.0.
Sorry for reviving an old post, but I came across it, so someone else may too. More details can be found in the UIGraphicsBeginImageContextWithOptions documentation:
If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
This basically means that on a Retina display it creates a Retina context; that way you can specify 0.0 and treat the coordinates as points.
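In other words, something like this (a sketch; the size would come from whatever rect you are drawing into):
// 0.0 scale = use the device's main screen scale (2.0 on Retina)
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
// ... draw using UIGraphicsGetCurrentContext() as before ...
UIGraphicsEndImageContext();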
I'm going to answer my own question, since I figured it out some time ago.
You should make a Retina-aware context; the jaggedness only appeared on a Retina device.
To counter this behavior, create a Retina-aware context with this helper method:
// Begin a graphics context for retina or SD
void RetinaAwareUIGraphicsBeginImageContext(CGSize size)
{
static CGFloat scale = -1.0;
if(scale < 0.0)
{
UIScreen *screen = [UIScreen mainScreen];
if([[[UIDevice currentDevice] systemVersion] floatValue] >= 4.0)
{
scale = [screen scale]; // Retina
}
else
{
scale = 0.0; // SD
}
}
if(scale > 0.0)
{
UIGraphicsBeginImageContextWithOptions(size, NO, scale);
}
else
{
UIGraphicsBeginImageContext(size);
}
}
Then, in your drawing method call the method listed above like so:
+ (CGLayerRef)standardCellBackgroundLayer
{
static CGLayerRef standardCellBackgroundLayer;
if(standardCellBackgroundLayer == NULL)
{
RetinaAwareUIGraphicsBeginImageContext(CGSizeMake(320.0, 480.0));
CGRect rect = CGRectMake(0, 0, [UIScreen mainScreen].applicationFrame.size.width, PLACES_DEFAULT_CELL_HEIGHT);
...
I am working on my first OpenGL application using Cocoa (I have used OpenGL ES on the iPhone) and I am having trouble loading a texture from an image file. Here is my texture loading code:
@interface MyOpenGLView : NSOpenGLView
{
GLenum texFormat[ 1 ]; // Format of texture (GL_RGB, GL_RGBA)
NSSize texSize[ 1 ]; // Width and height
GLuint textures[1]; // Storage for one texture
}
- (BOOL) loadBitmap:(NSString *)filename intoIndex:(int)texIndex
{
BOOL success = FALSE;
NSBitmapImageRep *theImage;
int bitsPPixel, bytesPRow;
unsigned char *theImageData;
NSData* imgData = [NSData dataWithContentsOfFile:filename options:NSUncachedRead error:nil];
theImage = [NSBitmapImageRep imageRepWithData:imgData];
if( theImage != nil )
{
bitsPPixel = [theImage bitsPerPixel];
bytesPRow = [theImage bytesPerRow];
if( bitsPPixel == 24 ) // No alpha channel
texFormat[texIndex] = GL_RGB;
else if( bitsPPixel == 32 ) // There is an alpha channel
texFormat[texIndex] = GL_RGBA;
texSize[texIndex].width = [theImage pixelsWide];
texSize[texIndex].height = [theImage pixelsHigh];
theImageData = [theImage bitmapData]; // was missing in the original, leaving theImageData uninitialized
if( theImageData != NULL )
{
NSLog(#"Good so far...");
success = TRUE;
// Create the texture
glGenTextures(1, &textures[texIndex]);
NSLog(#"tex: %i", textures[texIndex]);
NSLog(#"%i", glIsTexture(textures[texIndex]));
glPixelStorei(GL_UNPACK_ROW_LENGTH, [theImage pixelsWide]);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
// Typical texture generation using data from the bitmap
glBindTexture(GL_TEXTURE_2D, textures[texIndex]);
NSLog(#"%i", glIsTexture(textures[texIndex]));
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texSize[texIndex].width, texSize[texIndex].height, 0, texFormat[texIndex], GL_UNSIGNED_BYTE, [theImage bitmapData]);
NSLog(#"%i", glIsTexture(textures[texIndex]));
}
}
return success;
}
It seems that the glGenTextures() function is not actually creating a texture, because textures[0] remains 0. Also, glIsTexture(textures[texIndex]) always returns false.
Any suggestions?
Thanks,
Kyle
glGenTextures(1, &textures[texIndex]);
How is your textures array defined?
glIsTexture only returns true once the name actually refers to a texture object. A name returned by glGenTextures, but not yet associated with a texture by calling glBindTexture, is not the name of a texture.
Check whether glGenTextures is by accident executed between glBegin and glEnd; that is the only official failure reason.
Also:
Check whether the texture is square and has power-of-two dimensions.
Although it isn't emphasized nearly enough, the iPhone's OpenGL ES implementation requires textures to be that way.
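To illustrate the glIsTexture point above, a plain C sketch (it also shows why textures[0] staying 0 usually means no GL context is current):
GLuint tex = 0;
glGenTextures(1, &tex);            // with a current context, tex becomes a nonzero name
// glIsTexture(tex) == GL_FALSE here: the name is reserved but not yet a texture object
glBindTexture(GL_TEXTURE_2D, tex); // the first bind creates the texture object
// glIsTexture(tex) == GL_TRUE from this point on
// Without a current GL context, glGenTextures is effectively a no-op and tex stays 0.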
OK, I figured it out. It turns out that I was trying to load the textures before I set up my context. Once I moved texture loading to the end of the initialization method, it worked fine.
Thanks for the answers.
Kyle