CALayer CGPatternRef performance issues - macOS

I've created a CALayer subclass in order to draw a checkerboard background pattern. Everything works well and is rendering correctly, however I've discovered that performance takes a nosedive when the CALayer is given a large frame.
It seems fairly obvious that I could optimise by shifting the allocation of my CGColorRef and CGPatternRef outside of the drawLayer:inContext: call, but I'm not sure how to go about this as both rely on having a CGContextRef.
As far as my understanding goes, CALayer's drawing context is actually owned by its parent NSView and is only passed during drawing. If this is the case, how best can I optimise the following code?
void drawCheckerboardPattern(void *info, CGContextRef context)
{
    CGColorRef alternateColor = CGColorCreateGenericRGB(1.0, 1.0, 1.0, 0.25);
    CGContextSetFillColorWithColor(context, alternateColor);
    CGContextAddRect(context, CGRectMake(0.0f, 0.0f, kCheckerboardSize, kCheckerboardSize));
    CGContextFillPath(context);
    CGContextAddRect(context, CGRectMake(kCheckerboardSize, kCheckerboardSize, kCheckerboardSize, kCheckerboardSize));
    CGContextFillPath(context);
    CGColorRelease(alternateColor);
}
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context
{
    CGFloat red = 0.0f, green = 0.0f, blue = 0.0f, alpha = 0.0f;
    NSColor *originalBackgroundColor = [self.document.backgroundColor colorUsingColorSpaceName:NSCalibratedRGBColorSpace];
    [originalBackgroundColor getRed:&red green:&green blue:&blue alpha:&alpha];
    CGColorRef bgColor = CGColorCreateGenericRGB(red, green, blue, alpha);
    CGContextSetFillColorWithColor(context, bgColor);
    CGContextFillRect(context, layer.bounds);
    // Should we draw a checkerboard pattern?
    if ([self.document.drawCheckerboard boolValue])
    {
        static const CGPatternCallbacks callbacks = { 0, &drawCheckerboardPattern, NULL };
        CGContextSaveGState(context);
        CGColorSpaceRef patternSpace = CGColorSpaceCreatePattern(NULL);
        CGContextSetFillColorSpace(context, patternSpace);
        CGColorSpaceRelease(patternSpace);
        CGPatternRef pattern = CGPatternCreate(NULL,
                                               CGRectMake(0.0f, 0.0f, kCheckerboardSize*2, kCheckerboardSize*2),
                                               CGAffineTransformIdentity,
                                               kCheckerboardSize*2,
                                               kCheckerboardSize*2,
                                               kCGPatternTilingConstantSpacing,
                                               true,
                                               &callbacks);
        alpha = 1.0f;
        CGContextSetFillPattern(context, pattern, &alpha);
        CGPatternRelease(pattern);
        CGContextFillRect(context, layer.bounds);
        CGContextRestoreGState(context);
    }
    CGColorRelease(bgColor);
}

You can create the pattern outside your drawLayer:inContext: just fine; it doesn't need a context. So create a CGPatternRef instance variable and build the pattern once. That alone should speed up rendering, as creating the pattern is expensive. In fact, I would create all the CG* objects that don't need a context outside your drawLayer:inContext: method, so everything up to CGColorCreateGenericRGB and also the CGColorSpaceCreatePattern.
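To make that concrete, here is a minimal sketch of the caching approach. The ivar names and the setup method are hypothetical; it assumes the object implementing drawLayer:inContext: lives at least as long as the pattern is needed:

// Hypothetical one-time setup, e.g. called from init or awakeFromNib.
// Neither CGColorSpaceCreatePattern nor CGPatternCreate needs a CGContextRef.
- (void)setUpCheckerboardPattern
{
    static const CGPatternCallbacks callbacks = { 0, &drawCheckerboardPattern, NULL };
    _patternSpace = CGColorSpaceCreatePattern(NULL);
    _checkerboardPattern = CGPatternCreate(NULL,
                                           CGRectMake(0.0f, 0.0f, kCheckerboardSize*2, kCheckerboardSize*2),
                                           CGAffineTransformIdentity,
                                           kCheckerboardSize*2,
                                           kCheckerboardSize*2,
                                           kCGPatternTilingConstantSpacing,
                                           true,
                                           &callbacks);
}

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context
{
    // ... background fill as before ...
    if ([self.document.drawCheckerboard boolValue])
    {
        CGContextSaveGState(context);
        CGContextSetFillColorSpace(context, _patternSpace);        // cached, no per-draw allocation
        CGFloat patternAlpha = 1.0f;
        CGContextSetFillPattern(context, _checkerboardPattern, &patternAlpha);
        CGContextFillRect(context, layer.bounds);
        CGContextRestoreGState(context);
    }
}

- (void)dealloc
{
    // CF objects are not managed by ARC, so release them explicitly.
    CGPatternRelease(_checkerboardPattern);
    CGColorSpaceRelease(_patternSpace);
}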

Related

glClear() doesn't work on some machines

I've got an OpenGL context with an FBO and all of my simple drawing operations work fine except for glClear(). I can draw textures and rectangles but glClear() refuses to do anything. glGetError() does not return any error.
I'm calling glClear() like this:
glRectd(0, 0, 1024, 1080); // I see the results of this call
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // This does nothing
Note that this code with glClear() works fine on some Macs but not on others, so perhaps I've been getting lucky and I need to set up the context or some other setting differently. Any suggestions would be greatly appreciated!
Edit: This has something to do with drawing the texture from the FBO onto another context. After that happens, glClear() stops working.
Edit 2: I now have it reproduced in a small Xcode project and I'm pretty sure I've concluded that everything works on the NVIDIA card but not on the integrated Intel Iris card. I'll post the test code shortly.
Edit 3: Code. I tried to minimize it but there's still a bit of bulk.
//
// On the NVIDIA card, we'll see the video switch between red and green.
// On the Intel Iris, it will get stuck on blue because of "break" code below.
//
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    // Initialize the view to white
    [self clearContext:_view.openGLContext.CGLContextObj toRed:1 green:1 blue:1 alpha:1];
    // FBO context
    fbo = [[WTFBOContext alloc] initWithWidth:_view.frame.size.width height:_view.frame.size.height shareContext:_view.openGLContext];
    // Clear the FBO context (and thus texture) to solid blue
    [self clearContext:fbo.oglCtx.CGLContextObj toRed:0 green:0 blue:1 alpha:1];
    // These calls "break" the FBO on Intel Iris chips
    {
        CGLContextObj cgl_ctx = _view.openGLContext.CGLContextObj;
        glEnable(GL_TEXTURE_RECTANGLE_EXT);
        glBindTexture(GL_TEXTURE_RECTANGLE_EXT, fbo.texture);
        glBegin(GL_QUADS);
        glEnd();
    }
    __block int r = 0;
    [NSTimer scheduledTimerWithTimeInterval:1.0 repeats:YES block:^(NSTimer * _Nonnull timer) {
        r = 1 - r;
        // Clear the FBO context (and thus texture) to solid red or green
        [self clearContext:fbo.oglCtx.CGLContextObj toRed:r green:1 - r blue:0 alpha:1];
        [self drawTexture:fbo.texture
                 fromRect:_view.frame
                toContext:_view.openGLContext.CGLContextObj
                   inRect:_view.frame
                  flipped:NO
                 mirrored:NO
                 blending:NO
                withAlpha:1.0];
    }];
}
- (void)clearContext:(CGLContextObj)cgl_ctx
               toRed:(GLfloat)red
               green:(GLfloat)green
                blue:(GLfloat)blue
               alpha:(GLfloat)alpha
{
    glClearColor(red, green, blue, alpha);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glFlush();
}
- (void)drawTexture:(GLuint)tname
           fromRect:(CGRect)fromRect
          toContext:(CGLContextObj)cgl_ctx
             inRect:(CGRect)inRect
            flipped:(BOOL)flipped
           mirrored:(BOOL)mirrored
           blending:(BOOL)blending
          withAlpha:(GLfloat)withAlpha
{
    glEnable(GL_TEXTURE_RECTANGLE_EXT);
    glBindTexture(GL_TEXTURE_RECTANGLE_EXT, tname);
    GLint vp[4];
    glGetIntegerv(GL_VIEWPORT, vp);
    GLdouble left, right, bottom, top;
    if (flipped)
    {
        bottom = vp[1] + vp[3];
        top = vp[1];
    }
    else
    {
        bottom = vp[1];
        top = vp[1] + vp[3];
    }
    if (mirrored)
    {
        left = vp[0] + vp[2];
        right = vp[0];
    }
    else
    {
        left = vp[0];
        right = vp[0] + vp[2];
    }
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(left, right, bottom, top, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    GLboolean wasBlending = glIsEnabled(GL_BLEND);
    if (blending)
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    }
    else
        glDisable(GL_BLEND);
    // Textures are multiplied by the current color.
    glColor4f(withAlpha, withAlpha, withAlpha, withAlpha);
    glBegin(GL_QUADS);
        glTexCoord2f(fromRect.origin.x, fromRect.origin.y);
        glVertex2i(inRect.origin.x, inRect.origin.y);
        glTexCoord2f(fromRect.origin.x, fromRect.origin.y + fromRect.size.height);
        glVertex2i(inRect.origin.x, inRect.origin.y + inRect.size.height);
        glTexCoord2f(fromRect.origin.x + fromRect.size.width, fromRect.origin.y + fromRect.size.height);
        glVertex2i(inRect.origin.x + inRect.size.width, inRect.origin.y + inRect.size.height);
        glTexCoord2f(fromRect.origin.x + fromRect.size.width, fromRect.origin.y);
        glVertex2i(inRect.origin.x + inRect.size.width, inRect.origin.y);
    glEnd();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    if (wasBlending)
        glEnable(GL_BLEND);
    else
        glDisable(GL_BLEND);
    glFlush();
    glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 0);
}
Support Files:
@interface WTFBOContext : NSObject
{
    GLuint framebuffer;
}
@property (nonatomic, retain) NSOpenGLContext *oglCtx;
@property (nonatomic, retain) NSOpenGLPixelFormat *oglFmt;
@property (nonatomic) GLuint texture;
@property (nonatomic, readonly) NSUInteger width;
@property (nonatomic, readonly) NSUInteger height;
- (id)initWithWidth:(NSUInteger)width height:(NSUInteger)height shareContext:(NSOpenGLContext *)shareContext;
// We take ownership of the texture
- (void)setTexture:(GLuint)texture;
- (BOOL)isComplete;
@end

@implementation WTFBOContext
- (id)initWithWidth:(NSUInteger)width height:(NSUInteger)height shareContext:(NSOpenGLContext *)shareContext
{
    self = [super init];
    _width = width;
    _height = height;
    NSOpenGLPixelFormatAttribute attributes[] = {
        NSOpenGLPFANoRecovery,
        NSOpenGLPFAAccelerated,
        NSOpenGLPFADepthSize, (NSOpenGLPixelFormatAttribute)24,
        (NSOpenGLPixelFormatAttribute)0};
    self.oglFmt = [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];
    self.oglCtx = [[NSOpenGLContext alloc] initWithFormat:self.oglFmt shareContext:shareContext];
    CGLContextObj cgl_ctx = self.oglCtx.CGLContextObj;
    glGenFramebuffersEXT(1, &framebuffer);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);
    [self _makeTexture];
    glViewport(0, 0, _width, _height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, _width, 0, _height, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    return self;
}
- (void)dealloc
{
    CGLContextObj cgl_ctx = self.oglCtx.CGLContextObj;
    glDeleteTextures(1, &_texture);
    glDeleteFramebuffersEXT(1, &framebuffer); // release the FBO along with its texture
}
- (void)_makeTexture
{
    CGLContextObj cgl_ctx = self.oglCtx.CGLContextObj;
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_RECTANGLE_EXT, texture);
    glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA8, _width, _height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    self.texture = texture;
}
- (void)setTexture:(GLuint)tex
{
    CGLContextObj cgl_ctx = self.oglCtx.CGLContextObj;
    if (_texture > 0)
        glDeleteTextures(1, &_texture);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_RECTANGLE_EXT, tex, 0);
    if (!self.isComplete)
        NSLog(@"glFramebufferTexture2DEXT");
    _texture = tex;
}
- (BOOL)isComplete
{
    CGLContextObj cgl_ctx = self.oglCtx.CGLContextObj;
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);
    GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
    return status == GL_FRAMEBUFFER_COMPLETE_EXT;
}
@end

opengl es 2.0 object picking on ios using color coding

I have read a lot of tutorials about using color coding to achieve 3D object picking on iOS, but I'm not sure how to do it. Can anyone show me a demo written in Objective-C?
A related question is this one:
OpenGL ES 2.0 Object Picking on iOS (Using Color Coding)
Many thanks.
luo
I have achieved picking objects in an OpenGL ES scene using GLKView's snapshot. Here is the key code:
- (GLKVector4)pickAtX:(GLuint)x Y:(GLuint)y {
    GLKView *glkView = (GLKView *)[self view];
    UIImage *snapshot = [glkView snapshot];
    GLKVector4 objColor = [snapshot pickPixelAtX:x Y:y];
    return objColor;
}
Then, in your tap gesture handler, you just need to add this:
const CGPoint loc = [recognizer locationInView:[self view]];
GLKVector4 objColor = [self pickAtX:loc.x Y:loc.y];
if (GLKVector4AllEqualToVector4(objColor, GLKVector4Make(1.0f, 0.0f, 0.0f, 1.0f)))
{
    // do something...
}
Of course, you should also add this category:
@implementation UIImage (NDBExtensions)
- (GLKVector4)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y {
    CGImageRef cgImage = [self CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    if ((x >= width) || (y >= height))
    {
        GLKVector4 baseColor = GLKVector4Make(0.0f, 0.0f, 0.0f, 1.0f);
        return baseColor;
    }
    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    CFDataRef bitmapData = CGDataProviderCopyData(provider);
    const UInt8 *data = CFDataGetBytePtr(bitmapData);
    size_t offset = ((width * y) + x) * 4; // assumes 4 bytes per pixel, BGRA order
    float b = data[offset + 0];
    float g = data[offset + 1];
    float r = data[offset + 2];
    float a = data[offset + 3];
    CFRelease(bitmapData);
    NSLog(@"R:%f G:%f B:%f A:%f", r, g, b, a);
    GLKVector4 objColor = GLKVector4Make(r/255.0f, g/255.0f, b/255.0f, a/255.0f);
    return objColor;
}
@end
This is useful for achieving object picking in an OpenGL ES scene.
I have also achieved 3D object picking using color coding; if anybody wants the demo, contact me and tell me your e-mail.
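For reference, the pixel read-back above assumes each pickable object was first drawn in a unique, flat color. Here is a minimal sketch of that color-coding pass using GLKBaseEffect's constant color; the objects array, pickColor property and drawGeometry method are hypothetical placeholders:

- (void)drawPickingPass
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);               // reserve black for "no object hit"
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (MyPickableObject *obj in self.objects)
    {
        // Flat, unlit color so the pixel read back later is exactly the ID color.
        obj.effect.useConstantColor = GL_TRUE;
        obj.effect.constantColor = obj.pickColor;       // e.g. GLKVector4Make(1.0f, 0.0f, 0.0f, 1.0f)
        obj.effect.light0.enabled = GL_FALSE;
        [obj.effect prepareToDraw];
        [obj drawGeometry];
    }
}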

Why do vertices appear white instead of the colour in the vertex buffer?

I've created 3 buffers to separate vertex position, colour and index data.
The vertices correctly render as a square but it's white instead of the colour defined in the array dynamicVertexData.
I'm using OpenGL ES 2.0, but I assume I'm making a general OpenGL mistake.
Can anyone spot it?
typedef struct _vertexStatic
{
    GLfloat position[2];
} vertexStatic;

typedef struct _vertexDynamic
{
    GLubyte color[4];
} vertexDynamic;

enum {
    ATTRIB_POSITION,
    ATTRIB_COLOR,
    NUM_ATTRIBUTES
};

// Separate buffers for static and dynamic data.
GLuint staticBuffer;
GLuint dynamicBuffer;
GLuint indexBuffer;

const vertexStatic staticVertexData[] = {
    {0, 0},
    {50, 0},
    {50, 50},
    {0, 50},
};

vertexDynamic dynamicVertexData[] = {
    {0, 0, 255, 255},
    {0, 0, 255, 255},
    {0, 0, 255, 255},
    {0, 0, 255, 255},
};

const GLubyte indices[] = {
    0, 1, 2,
    2, 3, 0,
};
- (void)setupGL {
    CGSize screenSize = [UIApplication currentSize];
    CGSize screenSizeHalved = CGSizeMake(screenSize.width/2, screenSize.height/2);
    numIndices = sizeof(indices)/sizeof(indices[0]);
    [EAGLContext setCurrentContext:self.context];
    glEnable(GL_CULL_FACE); // Improves performance
    self.effect = [[GLKBaseEffect alloc] init];
    // The near and far plane are measured in units from the eye
    self.effect.transform.projectionMatrix = GLKMatrix4MakeOrtho(-screenSizeHalved.width,
                                                                 screenSizeHalved.width,
                                                                 -screenSizeHalved.height,
                                                                 screenSizeHalved.height,
                                                                 0.0f, 1.0f);
    self.preferredFramesPerSecond = 30;
    CreateBuffers();
}
void CreateBuffers()
{
    // Static position data
    glGenBuffers(1, &staticBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, staticBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(staticVertexData), staticVertexData, GL_STATIC_DRAW);
    // Dynamic color data
    // While not shown here, the expectation is that the data in this buffer changes between frames.
    glGenBuffers(1, &dynamicBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, dynamicBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(dynamicVertexData), dynamicVertexData, GL_DYNAMIC_DRAW);
    // Static index data
    glGenBuffers(1, &indexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
}
void DrawModelUsingMultipleVertexBuffers()
{
    glBindBuffer(GL_ARRAY_BUFFER, staticBuffer);
    glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, sizeof(vertexStatic), 0);
    glEnableVertexAttribArray(ATTRIB_POSITION);
    glBindBuffer(GL_ARRAY_BUFFER, dynamicBuffer);
    glVertexAttribPointer(ATTRIB_COLOR, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(vertexDynamic), 0);
    glEnableVertexAttribArray(ATTRIB_COLOR);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glDrawElements(GL_TRIANGLES, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, (void*)0);
}
- (void)tearDownGL {
    [EAGLContext setCurrentContext:self.context];
    glDeleteBuffers(1, &staticBuffer);
    glDeleteBuffers(1, &dynamicBuffer);
    glDeleteBuffers(1, &indexBuffer);
    //glDeleteVertexArraysOES(1, &_vertexArray);
    self.effect = nil;
}
- (void)viewDidLoad
{
    [super viewDidLoad];
    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!self.context) {
        NSLog(@"Failed to create ES context");
    }
    GLKView *view = (GLKView *)self.view;
    view.context = self.context;
    // view.drawableMultisample = GLKViewDrawableMultisample4X; // Smoothes jagged lines. More processing/memory
    view.drawableColorFormat = GLKViewDrawableColorFormatRGB565; // Lower colour range. Less processing/memory
    [self setupGL];
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    [self.effect prepareToDraw];
    DrawModelUsingMultipleVertexBuffers();
}
@end
You've enabled and bound the vertex buffers to your ATTRIB_COLOR binding point using the glVertexAttribPointer and glEnableVertexAttribArray entry points, but you haven't specified what to do with them.
OpenGL ES 2.0 removed most of the fixed-functionality rendering pipeline, so you need to write a vertex shader to consume the vertex buffers. In 1.x, you'd be able to use the glColorPointer entry point to feed vertex colors to the fixed-functionality pipeline.
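As an illustration, a minimal pass-through shader pair of the kind this answer calls for, written as C string literals; the attribute and uniform names are hypothetical, and you'd tie them to ATTRIB_POSITION and ATTRIB_COLOR with glBindAttribLocation before linking:

static const char *kVertexShaderSrc =
    "attribute vec4 a_position;\n"
    "attribute vec4 a_color;\n"
    "uniform mat4 u_modelViewProjection;\n"
    "varying lowp vec4 v_color;\n"
    "void main()\n"
    "{\n"
    "    v_color = a_color;\n"                          // pass the vertex color through
    "    gl_Position = u_modelViewProjection * a_position;\n"
    "}\n";

static const char *kFragmentShaderSrc =
    "varying lowp vec4 v_color;\n"
    "void main()\n"
    "{\n"
    "    gl_FragColor = v_color;\n"                     // interpolated vertex color
    "}\n";

// Hypothetical binding, before glLinkProgram:
//     glBindAttribLocation(program, ATTRIB_POSITION, "a_position");
//     glBindAttribLocation(program, ATTRIB_COLOR, "a_color");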
When you manage to get OpenGL ES 2.0 running - which can be hard when starting out - but don't get the drawing you want, I definitely recommend running on a device, which enables extra features in Xcode for debugging OpenGL.
Then you can:

Step through your draw calls one by one and watch the color / depth buffer images refresh
See all the bound GL objects
Inspect the contents of VAOs (you can see the actual data they point to, useful for finding missing data / pointers)
Edit your shaders live on the GPU (GL bound objects -> program: double-click!), useful for polishing your shaders

This can also be very useful, if you're curious, for getting an insight into GLKit's GLKBaseEffect inner workings - which in fact just generates an OpenGL program whose specific vertex and fragment shader code depends on which properties you set...
The property you forgot is GLKBaseEffect's colorMaterialEnabled.
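A sketch of that fix, assuming you keep GLKBaseEffect rather than writing your own shaders: enable the property and bind the arrays to GLKit's predefined attribute slots (note that the asker's ATTRIB_COLOR enum value is 1, which is GLKVertexAttribNormal, not the color slot):

// One-time setup
self.effect.colorMaterialEnabled = GL_TRUE;

// In DrawModelUsingMultipleVertexBuffers(), use GLKit's attribute indices:
glBindBuffer(GL_ARRAY_BUFFER, staticBuffer);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, sizeof(vertexStatic), 0);
glEnableVertexAttribArray(GLKVertexAttribPosition);

glBindBuffer(GL_ARRAY_BUFFER, dynamicBuffer);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(vertexDynamic), 0);
glEnableVertexAttribArray(GLKVertexAttribColor);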

SDL OpenGL Code Appears To Have No Depth

I am running this code on a PC (compiled in Code::Blocks 10.05).
When I used to write basic OpenGL code with GLUT (the GL Utility Toolkit), everything worked fine.
Now that I'm running OpenGL code through the SDL framework, when I try to change the z-axis (third parameter) of the translation applied to a geometric primitive (quad), the 3D space appears to have no depth: the quad either covers the complete screen or completely disappears once the depth passes a certain point.
Am I missing anything? :/
#include <sdl.h>
#include <string>
#include "sdl_image.h"
#include "SDL/SDL_opengl.h"
#include <gl\gl.h>
// Declare Constants
const int FRAMES_PER_SECOND = 60;
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
const int SCREEN_BPP = 32;
// Create Event Listener
SDL_Event event;
// Declare Variable Used To Control How Deep Into The Screen The Quad Is
GLfloat zpos(0);
// Loop Switches
bool gamestarted(false);
bool exited(false);
// Prototype For My GL Draw Function
void drawstuff();
// Code Begins
void init_GL() {
    glShadeModel(GL_SMOOTH);                            // Enable Smooth Shading
    glClearColor(0.0f, 0.0f, 0.0f, 0.5f);               // Black Background
    glClearDepth(1.0f);                                 // Depth Buffer Setup
    glEnable(GL_DEPTH_TEST);                            // Enables Depth Testing
    glDepthFunc(GL_LEQUAL);                             // The Type Of Depth Testing To Do
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);  // Really Nice Perspective Calculations
    glViewport(0, 0, 640, 513);                         // Viewport Change
    glOrtho(0, 640, 0, 513, -1.0, 1.0);                 // Puts Stuff Into View
}
bool init() {
    SDL_Init(SDL_INIT_EVERYTHING);
    SDL_SetVideoMode(640, 513, 32, SDL_OPENGL);
    return true;
}
void drawstuff() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, zpos);
    glColor3f(0.5f, 0.5f, 0.5f);
    glBegin(GL_QUADS);
        glVertex3f(-1.0f,  1.0f, 0.0f);
        glVertex3f( 1.0f,  1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
    glEnd();
}
int main(int argc, char* args[]) {
    init();
    init_GL();
    while (exited == false) {
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT) {
                exited = true;
            }
            if (event.type == SDL_MOUSEBUTTONDOWN) {
                zpos -= .1;
            }
        }
        glClear(GL_COLOR_BUFFER_BIT);
        drawstuff();
        SDL_GL_SwapBuffers();
    }
    SDL_Quit();
    return 0;
}
When you say depth, do you refer to a perspective effect? You need to use a perspective projection matrix (see gluPerspective) if you want things farther away to appear smaller.
You're currently using orthographic projection (glOrtho), which does not have any perspective effect.
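For illustration, a minimal sketch of swapping the glOrtho call in init_GL for a perspective projection (assumes GLU is available via <GL/glu.h>; the numbers are placeholders):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0,                                    // vertical field of view in degrees
               (GLdouble)SCREEN_WIDTH / SCREEN_HEIGHT,  // aspect ratio
               0.1,                                     // zNear, must be greater than zero
               100.0);                                  // zFar
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();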
I don't know the reason for your problem, but I found an issue in your code: after rendering one frame, you clear the color buffer but forget to clear the depth buffer. So use glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); in your main function and see the effect.
I think I know the answer:
In orthographic projection (after projection and perspective divide):

Zb = -2/(f-n) - (f+n)/((f-n)*Z)

where Zb is the z value in the depth buffer and Z is the z value of the vertex you supply.
In your situation, glOrtho(0, 640, 0, 513, -1.0, 1.0) gives f = 1.0 and n = -1.0,
so your Zb would always be -2/(f-n) = -1, which gives all your primitives the same depth.
You can refer to the Red Book's Appendix C.2.5. There is a matrix for orthographic projection, and after that comes the perspective divide.
Another tip to keep in mind: in perspective projection, the zNear value cannot be set to zero, or all primitives end up with the same value in the depth buffer.

Jaggy paths when blitting an offscreen CGLayer to the current context

In my current project I need to draw a complex background as a background for a few UITableView cells. Since the code for drawing this background is pretty long and CPU heavy when executed in the cell's drawRect: method, I decided to render it only once to a CGLayer and then blit it to the cell to enhance the overall performance.
The code I'm using to draw the background to a CGLayer:
+ (CGLayerRef)standardCellBackgroundLayer
{
    static CGLayerRef standardCellBackgroundLayer;
    if (standardCellBackgroundLayer == NULL)
    {
        CGContextRef viewContext = UIGraphicsGetCurrentContext();
        CGRect rect = CGRectMake(0, 0, [UIScreen mainScreen].applicationFrame.size.width, PLACES_DEFAULT_CELL_HEIGHT);
        standardCellBackgroundLayer = CGLayerCreateWithContext(viewContext, rect.size, NULL);
        CGContextRef context = CGLayerGetContext(standardCellBackgroundLayer);
        // Setup the paths
        CGRect rectForShadowPadding = CGRectInset(rect, (PLACES_DEFAULT_CELL_SHADOW_SIZE / 2) + PLACES_DEFAULT_CELL_SIDE_PADDING, (PLACES_DEFAULT_CELL_SHADOW_SIZE / 2));
        CGMutablePathRef path = createPathForRoundedRect(rectForShadowPadding, LIST_ITEM_CORNER_RADIUS);
        // Save the graphics context state
        CGContextSaveGState(context);
        // Draw shadow
        CGContextSetShadowWithColor(context, CGSizeMake(0, 0), PLACES_DEFAULT_CELL_SHADOW_SIZE, [Skin shadowColor]);
        CGContextAddPath(context, path);
        CGContextSetFillColorWithColor(context, [Skin whiteColor]);
        CGContextFillPath(context);
        // Clip for gradient
        CGContextAddPath(context, path);
        CGContextClip(context);
        // Draw gradient on clipped path
        CGPoint startPoint = rectForShadowPadding.origin;
        CGPoint endPoint = CGPointMake(rectForShadowPadding.origin.x, CGRectGetMaxY(rectForShadowPadding));
        CGContextDrawLinearGradient(context, [Skin listGradient], startPoint, endPoint, 0);
        // Restore the graphics state and release everything
        CGContextRestoreGState(context);
        CGPathRelease(path);
    }
    return standardCellBackgroundLayer;
}
The code to blit the layer to the current context:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawLayerAtPoint(context, CGPointMake(0.0, 0.0), [Skin standardCellBackgroundLayer]);
}
This actually does the trick pretty nicely, but there is one problem: the rounded corners (see the static method) are very jaggy when blitted to the screen. This wasn't the case when the drawing code was in its original position, the drawRect: method.
How do I get this antialiasing back?
For some reason the following calls don't have any impact on the anti-aliasing:
CGContextSetShouldAntialias(context, YES);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGContextSetAllowsAntialiasing(context, YES);
Thanks in advance!
You can simplify this by just using UIGraphicsBeginImageContextWithOptions and setting the scale to 0.0.
Sorry for awakening an old post, but I came across it so someone else may too. More details can be found in the UIGraphicsBeginImageContextWithOptions documentation:
If you specify a value of 0.0, the scale factor is set to the scale factor of the device's main screen.
Basically, that means on a Retina display it creates a Retina context; that way you can pass 0.0 and treat the coordinates as points.
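A minimal sketch of that simplification (the size value and the UIImage here stand in for whatever the original code renders):

UIGraphicsBeginImageContextWithOptions(size, NO, 0.0); // 0.0 = use the main screen's scale
// ... Core Graphics drawing as before ...
UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();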
I'm going to answer my own question since I figured it out some time ago.
You should create a Retina-aware context; the jaggedness only appeared on a Retina device. To counter this behavior, create the context with the following helper method:
// Begin a graphics context for retina or SD
void RetinaAwareUIGraphicsBeginImageContext(CGSize size)
{
    static CGFloat scale = -1.0;
    if (scale < 0.0)
    {
        UIScreen *screen = [UIScreen mainScreen];
        if ([[[UIDevice currentDevice] systemVersion] floatValue] >= 4.0)
        {
            scale = [screen scale]; // Retina
        }
        else
        {
            scale = 0.0; // SD
        }
    }
    if (scale > 0.0)
    {
        UIGraphicsBeginImageContextWithOptions(size, NO, scale);
    }
    else
    {
        UIGraphicsBeginImageContext(size);
    }
}
Then, in your drawing method, call the helper listed above like so:
+ (CGLayerRef)standardCellBackgroundLayer
{
    static CGLayerRef standardCellBackgroundLayer;
    if (standardCellBackgroundLayer == NULL)
    {
        RetinaAwareUIGraphicsBeginImageContext(CGSizeMake(320.0, 480.0));
        CGRect rect = CGRectMake(0, 0, [UIScreen mainScreen].applicationFrame.size.width, PLACES_DEFAULT_CELL_HEIGHT);
        ...
