Jaggy paths when blitting an offscreen CGLayer to the current context - performance

In my current project I need to draw a complex background for a few UITableView cells. Since the drawing code is pretty long and CPU heavy when executed in the cell's drawRect: method, I decided to render the background only once into a CGLayer and then blit it to each cell to improve overall performance.
The code I'm using to draw the background to a CGLayer:
+ (CGLayerRef)standardCellBackgroundLayer
{
    static CGLayerRef standardCellBackgroundLayer;
    if(standardCellBackgroundLayer == NULL)
    {
        CGContextRef viewContext = UIGraphicsGetCurrentContext();
        CGRect rect = CGRectMake(0, 0, [UIScreen mainScreen].applicationFrame.size.width, PLACES_DEFAULT_CELL_HEIGHT);
        standardCellBackgroundLayer = CGLayerCreateWithContext(viewContext, rect.size, NULL);
        CGContextRef context = CGLayerGetContext(standardCellBackgroundLayer);

        // Setup the paths
        CGRect rectForShadowPadding = CGRectInset(rect, (PLACES_DEFAULT_CELL_SHADOW_SIZE / 2) + PLACES_DEFAULT_CELL_SIDE_PADDING, (PLACES_DEFAULT_CELL_SHADOW_SIZE / 2));
        CGMutablePathRef path = createPathForRoundedRect(rectForShadowPadding, LIST_ITEM_CORNER_RADIUS);

        // Save the graphics context state
        CGContextSaveGState(context);

        // Draw shadow
        CGContextSetShadowWithColor(context, CGSizeMake(0, 0), PLACES_DEFAULT_CELL_SHADOW_SIZE, [Skin shadowColor]);
        CGContextAddPath(context, path);
        CGContextSetFillColorWithColor(context, [Skin whiteColor]);
        CGContextFillPath(context);

        // Clip for gradient
        CGContextAddPath(context, path);
        CGContextClip(context);

        // Draw gradient on clipped path
        CGPoint startPoint = rectForShadowPadding.origin;
        CGPoint endPoint = CGPointMake(rectForShadowPadding.origin.x, CGRectGetMaxY(rectForShadowPadding));
        CGContextDrawLinearGradient(context, [Skin listGradient], startPoint, endPoint, 0);

        // Restore the graphics state and release everything
        CGContextRestoreGState(context);
        CGPathRelease(path);
    }
    return standardCellBackgroundLayer;
}
The code to blit the layer to the current context:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawLayerAtPoint(context, CGPointMake(0.0, 0.0), [Skin standardCellBackgroundLayer]);
}
This actually does the trick pretty nicely, but the one problem I'm having is that the rounded corners (see the static method) are very jaggy when blitted to the screen. This wasn't the case when the drawing code was in its original position: the drawRect: method.
How do I get this anti-aliasing back?
For some reason the following calls don't have any impact on the anti-aliasing:
CGContextSetShouldAntialias(context, YES);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGContextSetAllowsAntialiasing(context, YES);
Thanks in advance!

You can simplify this by just using UIGraphicsBeginImageContextWithOptions and setting the scale to 0.0.
Sorry for awakening an old post, but I came across it so someone else may too. More details can be found in the UIGraphicsBeginImageContextWithOptions documentation:
If you specify a value of 0.0, the scale factor is set to the scale
factor of the device’s main screen.
Basically this means that on a retina display it will create a retina context; that way you can specify 0.0 and still treat the coordinates as points.
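For illustration, here is a minimal sketch of that approach, reusing the constants and drawing code from the question (the method and variable names are just placeholders, and the cached image should be retained manually if you are not using ARC):
+ (UIImage *)standardCellBackgroundImage
{
    static UIImage *backgroundImage;
    if(backgroundImage == nil)
    {
        CGSize size = CGSizeMake([UIScreen mainScreen].applicationFrame.size.width, PLACES_DEFAULT_CELL_HEIGHT);

        // A scale of 0.0 means "use the main screen's scale", so this
        // bitmap context is automatically retina-aware.
        UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
        CGContextRef context = UIGraphicsGetCurrentContext();

        // ... perform the same path/shadow/gradient drawing as in the question, into `context` ...

        backgroundImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return backgroundImage;
}
The cell's drawRect: can then simply call [[Skin standardCellBackgroundImage] drawAtPoint:CGPointZero].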

I'm going to answer my own question since I figured it out some time ago.
You should make a retina aware context. The jaggedness only appeared on a retina device.
To counter this behavior, you should create a retina context with this helper method:
// Begin a graphics context for retina or SD
void RetinaAwareUIGraphicsBeginImageContext(CGSize size)
{
    static CGFloat scale = -1.0;
    if(scale < 0.0)
    {
        UIScreen *screen = [UIScreen mainScreen];
        if([[[UIDevice currentDevice] systemVersion] floatValue] >= 4.0)
        {
            scale = [screen scale]; // Retina
        }
        else
        {
            scale = 0.0; // SD
        }
    }
    if(scale > 0.0)
    {
        UIGraphicsBeginImageContextWithOptions(size, NO, scale);
    }
    else
    {
        UIGraphicsBeginImageContext(size);
    }
}
Then, in your drawing method call the method listed above like so:
+ (CGLayerRef)standardCellBackgroundLayer
{
    static CGLayerRef standardCellBackgroundLayer;
    if(standardCellBackgroundLayer == NULL)
    {
        RetinaAwareUIGraphicsBeginImageContext(CGSizeMake(320.0, 480.0));
        CGRect rect = CGRectMake(0, 0, [UIScreen mainScreen].applicationFrame.size.width, PLACES_DEFAULT_CELL_HEIGHT);
        ...

Related

Changed Anchor Point of CALayer in Layer-backed NSView

I am trying to have a zoom animation run on a layer-backed NSView by animating the transform of the backing layer. The issue I am having with this, is that the animation zooms into the bottom left corner instead of the center of the view. I figured out that this is because NSView sets its backing layer's anchor point to (0, 0), even after I change it to some other value. This post talks about a similar issue.
I know that to get around this, I could make the view a layer-hosting view. However, I would like to use auto layout, which is why that is not really an option.
Does anyone know another way to get around this behavior and keep the anchor point of the view's backing layer at (0.5, 0.5)? The excerpt from Apple's documentation in the post I linked above talks about NSView cover methods. What would such a cover method be for the anchor point?
Thanks a lot!
The trick is to override the backing layer and pass an anchor point of choice (to be able to zoom from top left, for instance). Here's what I use:
extension CGPoint {
    static let topLeftAnchor: Self = .init(x: 0.0, y: 1.0)
    static let bottomLeftAnchor: Self = .init(x: 0.0, y: 0.0)
    static let topRightAnchor: Self = .init(x: 1.0, y: 1.0)
    static let bottomRightAnchor: Self = .init(x: 1.0, y: 0.0)
    static let centerAnchor: Self = .init(x: 0.5, y: 0.5)
}

class AnchoredLayer: CALayer {
    public var customAnchorPoint = CGPoint.topLeftAnchor

    override var anchorPoint: CGPoint {
        get { customAnchorPoint }
        set { super.anchorPoint = customAnchorPoint }
    }
}

class AnchoredView: NSView {
    required convenience init(anchoredTo point: CGPoint) {
        self.init(frame: .zero)
        self.wantsLayer = true
        self.anchorPoint = point
    }

    public override func makeBackingLayer() -> CALayer {
        let roundedLayer = AnchoredLayer()
        return roundedLayer
    }

    public var anchorPoint: CGPoint {
        get { (layer as! AnchoredLayer).customAnchorPoint }
        set { (layer as! AnchoredLayer).customAnchorPoint = newValue }
    }
}
Then use AnchoredView as normal:
let myView = AnchoredView(anchoredTo: .topLeftAnchor)
// Create the scale animation
let transformScaleXyAnimation = CASpringAnimation()
transformScaleXyAnimation.fillMode = .forwards
transformScaleXyAnimation.keyPath = "transform.scale.xy"
transformScaleXyAnimation.toValue = 1
transformScaleXyAnimation.fromValue = 0.8
transformScaleXyAnimation.stiffness = 300
transformScaleXyAnimation.damping = 55
transformScaleXyAnimation.mass = 0.8
transformScaleXyAnimation.initialVelocity = 4
transformScaleXyAnimation.duration = transformScaleXyAnimation.settlingDuration
myView.layer?.add(transformScaleXyAnimation, forKey: "transformScaleXyAnimation")
...

Large Seamless Scrolling Background in SpriteKit?

Okay, so I'm fairly new to SpriteKit, but so far it's been a lot of fun to work with. I'm looking for a way to have a large scrolling background loop endlessly. I've divided my background into small chunks so SK can load them as it needs them, using this code:
for (int nodeCount = 0; nodeCount < 50; nodeCount ++) {
    NSString *backgroundImageName = [NSString stringWithFormat:@"Sky_%02d.gif", nodeCount +1];
    SKSpriteNode *node = [SKSpriteNode spriteNodeWithImageNamed:backgroundImageName];
    node.xScale = 0.5;
    node.yScale = 0.5;
    node.anchorPoint = CGPointMake(0.0f, 0.5f);
    node.position = CGPointMake(nodeCount * node.size.width, self.frame.size.height/2);
    node.name = @"skyBG";
and I'm able to move this whole block just fine in update, but I can't get it to loop seamlessly, or at all, for that matter. I've tried the method suggested elsewhere on here that takes a copy of the background and moves it to the other end to give the appearance of a seamless loop, and no dice there. I've also tried adding this to the end of the above code, which I had hoped might work:
if (nodeCount == 49)
{
    nodeCount = 0;
}
but that just causes the simulator to hang, so I assume it's inadvertently creating an infinite loop or something. Any assistance at all would be great! I feel like there's a simple fix, but I'm just not getting there... thanks a bunch, folks!
UPDATE
here's the code I'm currently using to get the background to scroll, which works just fine:
-(void)update:(CFTimeInterval)currentTime {
    _backgroundNode.position = CGPointMake(_backgroundNode.position.x - 1, _backgroundNode.position.y);
}
So to sum up: I have a moving background, but getting it to loop seamlessly has been challenging because of the size of the background. If there's anything at all I can do to help clarify, let me know! Thanks a bunch
This is just an example of the logic you can employ to achieve a scrolling background. It is by no means a final or ideal implementation, but rather a conceptual example.
Here is a simple scroller class :
// Scroller.h
#import <SpriteKit/SpriteKit.h>

@interface Scroller : SKNode
{
    // instance variables
    NSMutableArray *tiles;  // array for your tiles
    CGSize tileSize;        // size of your screen tiles
    float scrollSpeed;      // speed in pixels per second
}

// methods
-(void)addTiles:(NSMutableArray *)newTiles;
-(void)initScrollerWithTileSize:(CGSize)scrollerTileSize scrollerSpeed:(float)scrollerSpeed;
-(void)update:(float)elapsedTime;

@end
Implementation :
// Scroller.m
#import "Scroller.h"

@implementation Scroller

-(instancetype)init
{
    if (self = [super init])
    {
        tiles = [NSMutableArray array];
    }
    return self;
}

-(void)addTiles:(NSMutableArray *)newTiles
{
    [tiles addObjectsFromArray:newTiles];
}

-(void)initScrollerWithTileSize:(CGSize)scrollerTileSize scrollerSpeed:(float)scrollerSpeed
{
    // set properties
    scrollSpeed = scrollerSpeed;
    tileSize = scrollerTileSize;

    // set initial locations of tiles and set their size
    for (int index = 0; index < tiles.count; index++)
    {
        SKSpriteNode *tile = tiles[index];

        // set anchorPoint to bottom left
        tile.anchorPoint = CGPointMake(0, 0);

        // set tile size
        // this implementation requires uniform size
        [tile setSize:tileSize];

        // calculate and set initial position of tile
        float startX = index * tileSize.width;
        tile.position = CGPointMake(startX, 0);

        // add child to the display list
        [self addChild:tile];
    }
}

-(void)update:(float)elapsedTime
{
    // calculate speed for this frame
    float curSpeed = scrollSpeed * elapsedTime;

    // iterate through your screen tiles
    for (int index = 0; index < tiles.count; index++)
    {
        SKSpriteNode *tile = tiles[index];

        // set new x location for tile
        tile.position = CGPointMake(tile.position.x - curSpeed, 0);

        // if new location is off the screen, move it to the far right
        if (tile.position.x < -tileSize.width)
        {
            // calculate the new x position based on number of tiles and tile width
            float newX = (tiles.count - 1) * tileSize.width;

            // set its new position
            tile.position = CGPointMake(newX, 0);
        }
    }
}

@end
In your scene, init your scroller:
-(void)initScroller
{
    // create an array of all your textures
    NSMutableArray *tiles = [NSMutableArray array];
    [tiles addObject:[SKSpriteNode spriteNodeWithTexture:skyTexture]];
    [tiles addObject:[SKSpriteNode spriteNodeWithTexture:skyTexture]];
    [tiles addObject:[SKSpriteNode spriteNodeWithTexture:skyTexture]];
    [tiles addObject:[SKSpriteNode spriteNodeWithTexture:skyTexture]];

    // create scroller instance (ivar or property of scene)
    scroller = [[Scroller alloc] init];

    // add the tiles to the scroller
    [scroller addTiles:tiles];

    // init the scroller with desired tile size and scroller speed
    [scroller initScrollerWithTileSize:CGSizeMake(1024, 768) scrollerSpeed:500];

    // add scroller to the scene
    [self addChild:scroller];
}
In your scene update method :
-(void)update:(CFTimeInterval)currentTime
{
    float elapsedTime = .0166; // set to .033 if you are locking to 30fps

    // call the scroller update method
    [scroller update:elapsedTime];
}
Again, this implementation is very barebones and is a conceptual example.
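If you'd rather not hardcode the frame duration, you can also derive the elapsed time from currentTime yourself. A minimal sketch, assuming a CFTimeInterval ivar named _lastUpdateTime in the scene (initially 0):
-(void)update:(CFTimeInterval)currentTime
{
    // compute the real time since the last frame instead of assuming 60fps
    CFTimeInterval elapsedTime = (_lastUpdateTime > 0) ? (currentTime - _lastUpdateTime) : 0;
    _lastUpdateTime = currentTime;

    // call the scroller update method
    [scroller update:elapsedTime];
}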
I am currently working on a library for a tile scroller that uses a solution similar to the one provided by prototypical, but asks for the nodes through a table view datasource pattern. Take a look at it:
RPTileScroller
I took some effort to make it as efficient as possible, but mind that I am not a game developer. I tried it on my iPhone 5 with random-colored tiles of 10x10 pixels, and it runs at a solid 60 fps.

Confused about NSImageView scaling

I'm trying to display a simple NSImageView with its image centered, without scaling it, like this:
Just like iOS does when you set a UIView's contentMode = UIViewContentModeCenter.
So I tried all the NSImageScaling values; this is what I get when I choose NSScaleNone:
I really don't understand what's going on :-/
You can manually generate the image of the correct size and content, and set it to be the image of the NSImageView so that NSImageView doesn't need to do anything.
NSImage *newImg = [self resizeImage:sourceImage size:newSize];
[aNSImageView setImage:newImg];
The following function resizes an image to fit the new size, keeping the aspect ratio intact. If the image is smaller than the new size, it is scaled up to fill the new frame. If the image is larger than the new size, it is downsized to fill the new frame.
- (NSImage*) resizeImage:(NSImage*)sourceImage size:(NSSize)size
{
    NSRect targetFrame = NSMakeRect(0, 0, size.width, size.height);
    NSImage* targetImage = [[NSImage alloc] initWithSize:size];
    NSSize sourceSize = [sourceImage size];

    float ratioH = size.height / sourceSize.height;
    float ratioW = size.width / sourceSize.width;

    NSRect cropRect = NSZeroRect;
    if (ratioH >= ratioW) {
        cropRect.size.width = floor(size.width / ratioH);
        cropRect.size.height = sourceSize.height;
    } else {
        cropRect.size.width = sourceSize.width;
        cropRect.size.height = floor(size.height / ratioW);
    }
    cropRect.origin.x = floor( (sourceSize.width - cropRect.size.width)/2 );
    cropRect.origin.y = floor( (sourceSize.height - cropRect.size.height)/2 );

    [targetImage lockFocus];
    [sourceImage drawInRect:targetFrame
                   fromRect:cropRect         // portion of source image to draw
                  operation:NSCompositeCopy  // compositing operation
                   fraction:1.0              // alpha (transparency) value
             respectFlipped:YES              // coordinate system
                      hints:@{NSImageHintInterpolation:
                                  [NSNumber numberWithInt:NSImageInterpolationLow]}];
    [targetImage unlockFocus];

    return targetImage;
}
Here's an awesome category for NSImage: NSImage+ContentMode
It allows content modes like in iOS, works great.
Set the image scaling property to NSImageScaleAxesIndependently, which will scale the image to fill the rectangle. This will not preserve the aspect ratio.
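For reference, that is a one-line change in code (imageView here stands for the NSImageView in question; the same option can also be picked in Interface Builder):
imageView.imageScaling = NSImageScaleAxesIndependently;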
Swift version of @Shagru's answer (without the hints)
func resizeImage(_ sourceImage: NSImage, size: CGSize) -> NSImage
{
    let targetFrame = CGRect(origin: CGPoint.zero, size: size)
    let targetImage = NSImage.init(size: size)
    let sourceSize = sourceImage.size

    let ratioH = size.height / sourceSize.height
    let ratioW = size.width / sourceSize.width

    var cropRect = CGRect.zero
    if (ratioH >= ratioW) {
        cropRect.size.width = floor(size.width / ratioH)
        cropRect.size.height = sourceSize.height
    } else {
        cropRect.size.width = sourceSize.width
        cropRect.size.height = floor(size.height / ratioW)
    }
    cropRect.origin.x = floor((sourceSize.width - cropRect.size.width) / 2)
    cropRect.origin.y = floor((sourceSize.height - cropRect.size.height) / 2)

    targetImage.lockFocus()
    sourceImage.draw(in: targetFrame, from: cropRect, operation: .copy, fraction: 1.0, respectFlipped: true, hints: nil)
    targetImage.unlockFocus()

    return targetImage
}

CALayer CGPatternRef performance issues

I've created a CALayer subclass in order to draw a checkerboard background pattern. Everything works well and is rendering correctly, however I've discovered that performance takes a nosedive when the CALayer is given a large frame.
It seems fairly obvious that I could optimise by shifting the allocation of my CGColorRef and CGPatternRef outside of the drawLayer:inContext: call, but I'm not sure how to go about this as both rely on having a CGContextRef.
As far as my understanding goes, CALayer's drawing context is actually owned by its parent NSView and is only passed during drawing. If this is the case, how best can I optimise the following code?
void drawCheckerboardPattern(void *info, CGContextRef context)
{
    CGColorRef alternateColor = CGColorCreateGenericRGB(1.0, 1.0, 1.0, 0.25);
    CGContextSetFillColorWithColor(context, alternateColor);
    CGContextAddRect(context, CGRectMake(0.0f, 0.0f, kCheckerboardSize, kCheckerboardSize));
    CGContextFillPath(context);
    CGContextAddRect(context, CGRectMake(kCheckerboardSize, kCheckerboardSize, kCheckerboardSize, kCheckerboardSize));
    CGContextFillPath(context);
    CGColorRelease(alternateColor);
}
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context
{
    CGFloat red = 0.0f, green = 0.0f, blue = 0.0f, alpha = 0.0f;
    NSColor *originalBackgroundColor = [self.document.backgroundColor colorUsingColorSpaceName:NSCalibratedRGBColorSpace];
    [originalBackgroundColor getRed:&red green:&green blue:&blue alpha:&alpha];

    CGColorRef bgColor = CGColorCreateGenericRGB(red, green, blue, alpha);
    CGContextSetFillColorWithColor(context, bgColor);
    CGContextFillRect(context, layer.bounds);

    // Should we draw a checkerboard pattern?
    if([self.document.drawCheckerboard boolValue])
    {
        static const CGPatternCallbacks callbacks = { 0, &drawCheckerboardPattern, NULL };

        CGContextSaveGState(context);
        CGColorSpaceRef patternSpace = CGColorSpaceCreatePattern(NULL);
        CGContextSetFillColorSpace(context, patternSpace);
        CGColorSpaceRelease(patternSpace);

        CGPatternRef pattern = CGPatternCreate(NULL,
                                               CGRectMake(0.0f, 0.0f, kCheckerboardSize*2, kCheckerboardSize*2),
                                               CGAffineTransformIdentity,
                                               kCheckerboardSize*2,
                                               kCheckerboardSize*2,
                                               kCGPatternTilingConstantSpacing,
                                               true,
                                               &callbacks);
        alpha = 1.0f;
        CGContextSetFillPattern(context, pattern, &alpha);
        CGPatternRelease(pattern);

        CGContextFillRect(context, layer.bounds);
        CGContextRestoreGState(context);
    }

    CGColorRelease(bgColor);
}
You can create the pattern outside your drawLayer:inContext: just fine; it doesn't need a context. So just create a CGPatternRef instance variable and create the pattern once. That should already speed up rendering, as creating the pattern is expensive. In fact, I would create all CG* instances that don't need a context outside your drawLayer:inContext: method, so everything up to CGColorCreateGenericRGB and also the CGColorSpaceCreatePattern.
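A rough sketch of that idea (the _checkerboardPattern ivar name is illustrative, and this assumes kCheckerboardSize never changes at runtime):
// lazily create the pattern once and keep it in a CGPatternRef ivar
- (CGPatternRef)checkerboardPattern
{
    if(_checkerboardPattern == NULL)
    {
        static const CGPatternCallbacks callbacks = { 0, &drawCheckerboardPattern, NULL };
        _checkerboardPattern = CGPatternCreate(NULL,
                                               CGRectMake(0.0f, 0.0f, kCheckerboardSize*2, kCheckerboardSize*2),
                                               CGAffineTransformIdentity,
                                               kCheckerboardSize*2,
                                               kCheckerboardSize*2,
                                               kCGPatternTilingConstantSpacing,
                                               true,
                                               &callbacks);
    }
    return _checkerboardPattern;
}
drawLayer:inContext: then only has to set the pattern fill color space and call CGContextSetFillPattern(context, [self checkerboardPattern], &alpha) before filling, and the pattern is released once (for example in dealloc) with CGPatternRelease().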

Cocoa OpenGL Texture Creation

I am working on my first OpenGL application using Cocoa (I have used OpenGL ES on the iPhone) and I am having trouble loading a texture from an image file. Here is my texture loading code:
@interface MyOpenGLView : NSOpenGLView
{
    GLenum texFormat[ 1 ];  // Format of texture (GL_RGB, GL_RGBA)
    NSSize texSize[ 1 ];    // Width and height
    GLuint textures[1];     // Storage for one texture
}
- (BOOL) loadBitmap:(NSString *)filename intoIndex:(int)texIndex
{
    BOOL success = FALSE;
    NSBitmapImageRep *theImage;
    int bitsPPixel, bytesPRow;
    unsigned char *theImageData;

    NSData* imgData = [NSData dataWithContentsOfFile:filename options:NSUncachedRead error:nil];

    theImage = [NSBitmapImageRep imageRepWithData:imgData];
    if( theImage != nil )
    {
        bitsPPixel = [theImage bitsPerPixel];
        bytesPRow = [theImage bytesPerRow];
        if( bitsPPixel == 24 )        // No alpha channel
            texFormat[texIndex] = GL_RGB;
        else if( bitsPPixel == 32 )   // There is an alpha channel
            texFormat[texIndex] = GL_RGBA;

        texSize[texIndex].width = [theImage pixelsWide];
        texSize[texIndex].height = [theImage pixelsHigh];

        if( theImageData != NULL )
        {
            NSLog(@"Good so far...");
            success = TRUE;

            // Create the texture
            glGenTextures(1, &textures[texIndex]);
            NSLog(@"tex: %i", textures[texIndex]);
            NSLog(@"%i", glIsTexture(textures[texIndex]));
            glPixelStorei(GL_UNPACK_ROW_LENGTH, [theImage pixelsWide]);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

            // Typical texture generation using data from the bitmap
            glBindTexture(GL_TEXTURE_2D, textures[texIndex]);
            NSLog(@"%i", glIsTexture(textures[texIndex]));
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texSize[texIndex].width, texSize[texIndex].height, 0, texFormat[texIndex], GL_UNSIGNED_BYTE, [theImage bitmapData]);
            NSLog(@"%i", glIsTexture(textures[texIndex]));
        }
    }

    return success;
}
It seems that the glGenTextures() function is not actually creating a texture because textures[0] remains 0. Also, logging glIsTexture(textures[texIndex]) always returns false.
Any suggestions?
Thanks,
Kyle
glGenTextures(1, &textures[texIndex] );
What is your textures definition?
glIsTexture only returns true if the texture is already ready. A name returned by glGenTextures, but not yet associated with a texture by calling glBindTexture, is not the name of a texture.
Check whether glGenTextures is accidentally executed between glBegin and glEnd -- that's the only official failure reason.
Also:
Check that the texture is square and has dimensions that are a power of 2.
Although it isn't emphasized nearly enough anywhere, the iPhone's OpenGL ES implementation requires textures to be that way.
OK, I figured it out. It turns out that I was trying to load the textures before I set up my context. Once I put loading textures at the end of the initialization method, it worked fine.
Thanks for the answers.
Kyle
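For anyone else hitting this: GL calls do nothing useful while no context is current, so make sure the view's OpenGL context exists and is current before generating textures. A rough sketch of one way to arrange that, assuming the loadBitmap:intoIndex: method above (the resource name is just a placeholder):
- (void)prepareOpenGL
{
    [super prepareOpenGL];

    // The context exists by the time prepareOpenGL is called, so make it
    // current before issuing GL calls such as glGenTextures().
    [[self openGLContext] makeCurrentContext];

    [self loadBitmap:[[NSBundle mainBundle] pathForResource:@"myTexture" ofType:@"png"]
           intoIndex:0];
}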
