Improving Performance of CALayer Filters (Cocoa)

I'm working on a fullscreen Cocoa application. I am using one NSView backed by one CALayer that has multiple sublayers. Right now, for testing, each keystroke adds a dot (20 x 20) to the screen; this is just for testing the drawing of the dots. My issue is that I am using a filter on my dot layers, specifically CIDiscBlur, and once I reach about 30 dots the drawing of the dots slows down significantly. There can be a 1 to 1.5 second delay between the key press and the appearance of the dot. I have noticed that if I stop setting the CIDiscBlur filter on the layers, there is no slowdown.
Are there any best practices or tips I should be using when drawing this many sublayers? Any help would be great.
CIFilter *blurFilter = [CIFilter filterWithName:@"CIDiscBlur"];
[blurFilter setDefaults];
[blurFilter setValue:[NSNumber numberWithFloat:15.0] forKey:@"inputRadius"];
dotFilters = [[NSArray arrayWithObjects:blurFilter, nil] retain];

CGColorRef purpleColor = CGColorCreateGenericRGB(0.604, 0.247, 0.463, 1.0);
CALayer *dot = [[CALayer layer] retain];
dot.backgroundColor = purpleColor;
dot.cornerRadius = 15.0f;
dot.filters = dotFilters;

NSRect screenRect = [[self.window screen] frame];
// 10 point border around the screen
CGFloat width = screenRect.size.width - 20;
CGFloat height = screenRect.size.height - 20;

#define ARC4RANDOM_MAX 0x100000000
width = ((CGFloat)arc4random() / ARC4RANDOM_MAX) * width + 10;
height = ((CGFloat)arc4random() / ARC4RANDOM_MAX) * height + 10;

dot.frame = CGRectMake(width, height, 20, 20); // was 30 x 30
[dotsLayer addSublayer:dot];
I also tried setting masksToBounds = YES to see if that helped, but no luck.

You can probably get a performance gain by not using cornerRadius to make your round layers. While it's a nice little shortcut for making a round layer in a static context, it will degrade performance significantly when you're animating. You'd be better off giving a circular path to a CAShapeLayer, or dropping down to Core Graphics and just drawing a circle in drawInContext:. To test whether I'm right, comment out your call that sets the corner radius, apply your filter, and see if that speeds things up. If not, then I'm not sure what's up; it may mean you'll have to find a different way to get your effect without a filter. If your dots will always have the same look, you can probably "cheat" by using an image.
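For reference, a minimal sketch of the CAShapeLayer version of a dot (the 20-point size, purpleColor, dotFilters, and the dotsLayer container are carried over from the question; the rest is illustrative):
// Build the dot as a filled circular path instead of a rounded-rect CALayer.
CAShapeLayer *dot = [CAShapeLayer layer];
CGPathRef circle = CGPathCreateWithEllipseInRect(CGRectMake(0, 0, 20, 20), NULL);
dot.path = circle;
CGPathRelease(circle);
dot.fillColor = purpleColor;              // same CGColorRef as in the question
dot.frame = CGRectMake(width, height, 20, 20);
dot.filters = dotFilters;                 // the CIDiscBlur filter still applies
[dotsLayer addSublayer:dot];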
Best regards.

Related

iOS particle emitter - how to make it faster?

I have 9 blocks on screen (BlockView is just a subclass of UIView with some properties to keep track of things), and I want to add a smoke particle emitter behind the top of each block, so that smoke rises from the top of each one. I create a view to hold the block and the particle emitter, and bring the block to the front of the subviews so the block is in front. However, this makes my device (iPhone 6) incredibly laggy, and it becomes very difficult to move the blocks with a pan gesture.
SmokeParticles.sks: birthrate of 3 (max set to 0), lifetime of 10 (100 range), position range set in code.
My code for adding a particle emitter to each view is below (I'm not very good with particle emitters so any advice is appreciated! :D)
- (void)addEffectForSingleBlock:(BlockView *)view
{
    CGFloat spaceBetweenBlocksHeight = (self.SPACE_TO_WALLS * self.view.frame.size.height + self.SPACE_BETWEEN_BLOCKS * self.view.frame.size.width + self.WIDTH_OF_BLOCK * self.view.frame.size.height) - (self.HEIGHT_OF_BLOCK * self.view.frame.size.height + self.SPACE_TO_WALLS * self.view.frame.size.height);
    view.alpha = 1.0;

    // Re-parent the block into a new container view.
    CGRect frame2 = [view convertRect:view.bounds toView:self.view];
    UIView *viewLarge = [[UIView alloc] initWithFrame:frame2];
    [self.view addSubview:viewLarge];
    CGRect frame1 = [view convertRect:view.bounds toView:viewLarge];
    view.frame = frame1;
    [viewLarge addSubview:view];

    // Add an SKView hosting the smoke emitter, behind the block.
    SKEmitterNode *burstNode = [self particleEmitterWithName:@"SmokeParticles"];
    CGRect frame = CGRectMake(view.bounds.origin.x - self.SPACE_BETWEEN_BLOCKS * self.view.frame.size.width,
                              view.bounds.origin.y - spaceBetweenBlocksHeight,
                              view.bounds.size.width + self.SPACE_BETWEEN_BLOCKS * self.view.frame.size.width,
                              view.bounds.size.height / 2);
    SKView *skView = [[SKView alloc] initWithFrame:frame];
    [viewLarge addSubview:skView];
    SKScene *skScene = [SKScene sceneWithSize:skView.frame.size];
    [skScene addChild:burstNode];
    [viewLarge bringSubviewToFront:view];
    [burstNode setParticlePositionRange:CGVectorMake(skView.frame.size.width / 5, skView.frame.size.height / 100.0)];
    skView.allowsTransparency = YES;
    skScene.backgroundColor = [UIColor clearColor];
    skView.backgroundColor = [UIColor clearColor];
    [skView presentScene:skScene];
    [burstNode setPosition:CGPointMake(skView.frame.size.width / 2, -skView.frame.size.height * 0.25)];
}
I realize that this is an old question, but I recently learned something that could be helpful to others and decided to share it here because it is relevant (I think).
I'll assume your BlockView is a subclass of UIView (if it is not, this will not help you, sorry). A view performs a lot of unnecessary work each frame (for example, each view checks whether someone tapped on it). When creating a game you should use as few UIViews as possible; that's why the other commenters recommended using only one SKView and making each block an SKSpriteNode, which is not a view. But if you need some other kind of object, or you do not want to use SpriteKit (or SceneKit for 3D objects), then try using CALayers inside one single UIView. For example, one case where you would prefer CALayers over SpriteKit is backwards compatibility with older iOS versions, since SpriteKit requires iOS 7.
Mr. John Blanco explains the CALayer approach very well in his View vs. Layers (including Clock Demo).
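As a rough sketch of the CALayer idea (the names here are illustrative, not from the question): each block becomes a CALayer inside a single host UIView, so only one view is doing hit-testing and event work per frame.
CALayer *block = [CALayer layer];
block.frame = CGRectMake(20, 20, 60, 60);
block.contents = (id)[UIImage imageNamed:@"block"].CGImage; // hypothetical image name
[hostView.layer addSublayer:block]; // hostView is the one and only UIView

// Move a layer without the implicit animation that CALayer adds by default.
[CATransaction begin];
[CATransaction setDisableActions:YES];
block.position = CGPointMake(120, 80);
[CATransaction commit];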

ZBar not cropping scan region

I'm cropping the scanning region of ZBar via the following code:
- (void)startScanning
{
    NSLog(@"Scanning..");
    reader = [AACZBarViewController new];
    reader.readerDelegate = self;
    reader.supportedOrientationsMask = ZBarOrientationMask(UIInterfaceOrientationPortrait);
    reader.showsZBarControls = NO;

    CGFloat x, y, w, h;
    x = 0;
    y = 0.25;
    w = 1;
    h = 0.50;
    reader.scanCrop = CGRectMake(x, y, w, h); // crop scan region
    reader.cameraOverlayView = [self myOverlay];

    ZBarImageScanner *scanner = reader.scanner;
    [scanner setSymbology:ZBAR_I25 config:ZBAR_CFG_ENABLE to:0];
    [self presentViewController:reader animated:YES completion:nil];
}
The problem, however, is that the program still uses the entire screen area to find a barcode, not the middle 50%. I don't think the issue is the reader.scanCrop property itself, but I can't work out what the real culprit is.
Edit:
I had a look at the ZBar documentation again and noticed that it says the camera's x axis is vertical, not horizontal. I had set the reader to portrait only, but apparently that does not affect the camera in any way. I didn't find a way to change this, but I did manage to crop to the scanning region I wanted.
The solution:
If you want a scan region of (x, y, w, h), set the rectangle with the x and y swapped and the width and height swapped, i.e. (y, x, h, w). It doesn't seem to crop to the bounding box exactly, but it's close enough for my purposes.
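Applied to the code above, that means swapping the question's values like this:
// The camera's x axis runs vertically in portrait, so swap the axes:
// (x, y, w, h) -> (y, x, h, w).
reader.scanCrop = CGRectMake(0.25, 0, 0.50, 1); // was CGRectMake(0, 0.25, 1, 0.50)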

CATextLayer gets rasterized too early and it is blurred

I'm having some trouble with CATextLayer that could be due to me, but I haven't found any help on this topic. I am on OS X (on iOS it should be the same).
I create a CATextLayer with a scale factor > 1 and what I get is blurred text. The layer is rasterized before the scale is applied, I think. Is this the expected behavior? I hope not, because it just makes no sense to me. A CAShapeLayer is rasterized after its transformation matrix is applied, so why should CATextLayer be different?
In case I am doing something wrong... what is it?
CATextLayer *layer = [CATextLayer layer];
layer.string = @"I like what I am doing";
layer.font = (__bridge CFTypeRef)[NSFont systemFontOfSize:24];
layer.fontSize = 24;
layer.anchorPoint = CGPointZero;
layer.frame = CGRectMake(0, 0, 400, 100);
layer.foregroundColor = [NSColor blackColor].CGColor;
layer.transform = CATransform3DMakeScale(2., 2., 1.);
layer.shouldRasterize = NO;
[self.layer addSublayer:layer];
The workaround I use at the moment is to set the layer's contentsScale property to the scale factor. The problem is that this solution doesn't scale: if the scale factor of any of the parent layers changes, contentsScale has to be updated too. I would have to write code that traverses the layer tree and updates the contentsScale of every CATextLayer (something like the sketch below), which is not exactly what I would like to do.
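For illustration, a minimal sketch of such a traversal, recursing from some root layer (the method name is made up):
- (void)updateTextLayersIn:(CALayer *)layer toContentsScale:(CGFloat)scale
{
    if ([layer isKindOfClass:[CATextLayer class]]) {
        layer.contentsScale = scale; // makes the text re-rasterize at the new scale
    }
    for (CALayer *sublayer in layer.sublayers) {
        [self updateTextLayersIn:sublayer toContentsScale:scale];
    }
}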
Another solution, which is not really a solution, is to convert the text to a shape and use a CAShapeLayer. But then I don't see the point of having CATextLayer at all.
Could a custom subclass of CALayer help solve this problem?
EDIT: Even CAGradientLayer renders its contents after its transformation matrix is applied, just like CAShapeLayer. Can someone explain how that is possible?
EDIT 2: My guess is that paths and gradients are rendered as OpenGL display lists, so they are rasterized at their actual on-screen size by OpenGL itself, whereas text is rasterized by Core Animation and handed to OpenGL as a bitmap.
I think I will go with the contentsScale solution for the moment. Maybe in the future I will convert texts to shapes. To get the best results with little work, this is the code I use now:
[CATransaction setDisableActions:YES];
CGFloat contentsScale = ceilf(scaleOfParentLayer);
// _scalableTextLayer is a CATextLayer
_scalableTextLayer.contentsScale = contentsScale;
[_scalableTextLayer displayIfNeeded];
[CATransaction setDisableActions:NO];
After trying all the approaches, the solution I am using now is a custom subclass of CALayer. I don't use CATextLayer at all.
I override the contentsScale property with this custom setter method:
- (void)setContentsScale:(CGFloat)cs
{
    CGFloat scale = MAX(ceilf(cs), 1.); // never less than 1, always an integer
    if (scale != self.contentsScale) {
        [super setContentsScale:scale];
        [self setNeedsDisplay];
    }
}
The value of the property is always rounded up to the next integer. When the rounded value changes, the layer must be redrawn.
The display method of my CALayer subclass creates a bitmap image whose size is the size of the text multiplied by the contentsScale factor and by the screen scale factor.
- (void)display
{
    CGFloat scale = self.contentsScale * [MyUtils screenScale];
    CGFloat width = self.bounds.size.width * scale;
    CGFloat height = self.bounds.size.height * scale;

    CGContextRef bitmapContext = [MyUtils createBitmapContextWithSize:CGSizeMake(width, height)];
    CGContextScaleCTM(bitmapContext, scale, scale);
    CGContextSetShouldSmoothFonts(bitmapContext, false);

    CTLineRef line = CTLineCreateWithAttributedString((__bridge CFAttributedStringRef)_text);
    CGContextSetTextPosition(bitmapContext, 0., self.bounds.size.height - _ascender);
    CTLineDraw(line, bitmapContext);
    CFRelease(line);

    CGImageRef image = CGBitmapContextCreateImage(bitmapContext);
    self.contents = (__bridge id)image;
    CGImageRelease(image);
    CGContextRelease(bitmapContext);
}
When I change the scale factor of the root layer of my hierarchy, I loop over all the text layers and set their contentsScale to the same factor. The display method is called only when the rounded value of the scale factor changes (i.e. if the previous value was 1.6 and I now set 1.7, nothing happens; but if the new value is 2.1, the layer is redisplayed).
The cost of the redraw in terms of speed is small. My test was to continuously change the scale factor of a hierarchy of 40 text layers on a 3rd-generation iPad; it works like butter.
CATextLayer is different because the underlying Core Text renders the glyphs at the specified font size (an educated guess based on experiments).
You could add an action to the parent layer so that as soon as its scale changes, it changes the font size of the text layer.
Blurriness can also come from misaligned pixels. That can happen if you put the text layer at a non-integral position, or if there is any transformation anywhere in the superlayer hierarchy.
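For instance, a quick way to rule out misalignment (a sketch, assuming no fractional transforms further up the hierarchy):
// Snap the layer to whole-point positions so its pixels line up with the backing store.
CGPoint p = textLayer.position;
textLayer.position = CGPointMake(round(p.x), round(p.y));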
Alternatively you could subclass CALayer and then draw the text using Cocoa in drawInContext:
see example here:
http://lists.apple.com/archives/Cocoa-dev/2009/Jan/msg02300.html
http://people.omnigroup.com/bungi/TextDrawing-20090129.zip
If you want the exact behaviour of a CAShapeLayer, you will need to convert your string into a bezier path and have CAShapeLayer render it. It's a bit of work, but then you will have exactly the behaviour you are looking for. An alternate approach is to scale the fontSize instead; this yields crisp text every time, but it might not fit your exact situation.
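A minimal sketch of the fontSize approach (the factor of 2 matches the transform in the question; the sizes are illustrative): instead of scaling the layer, bake the factor into the font size and frame, so Core Text rasterizes the glyphs at their final on-screen size.
CGFloat scale = 2.0;
CATextLayer *layer = [CATextLayer layer];
layer.string = @"I like what I am doing";
layer.fontSize = 24 * scale;                // render at the final size
layer.frame = CGRectMake(0, 0, 400 * scale, 100 * scale);
layer.contentsScale = [NSScreen mainScreen].backingScaleFactor; // stay sharp on retina
// no CATransform3DMakeScale needed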
To draw text as CAShapeLayer have a look at Apple Sample Code "CoreAnimationText":
http://developer.apple.com/library/mac/#samplecode/CoreAnimationText/Listings/Readme_txt.html

OpenGL ES 1.1 on iOS not sizing properly on retina versus non-retina

So, I'm having quite a bizarre issue and I'm not sure why. When displaying the following OpenGL code on a retina screen, I receive this image:
while the image I get on a non-retina screen is the following:
The coordinate system should be set up normally. I also noticed that it doesn't seem to scale to retina size in general. When getting the frame from [UIScreen mainScreen], I get the same value for both retina and non-retina, as shown below. Is there a special way I'm supposed to size this?
frame: Origin: x:0.000000 y:0.000000, Size: width:768.000000 height:1024.000000
Edit: The cause was that OpenGL ES 1.1 does not scale to retina size by itself. When creating the viewport you must manually scale the size, like so:
glViewport(0, 0, width * [[UIScreen mainScreen] scale], height * [[UIScreen mainScreen] scale]);
This was the simplest method I could come up with.
OpenGL itself doesn't know anything about the view or screen characteristics; it only knows about the pixels.
By default, and unlike other UIViews, a GL-backed view will not automatically use non-1.0 scale, and will instead operate at 1x. So as you discovered, you should set the screen scale to opt in to the retina pixel resolution (which is 2x the size in both dimensions).
However, increasing the number of pixels just increases the number of pixels. GL doesn't then magically know that it's supposed to scale all of your geometry (or in fact that that's what you really want). If you want to use the same geometry for both scale views (which you usually do), then yes, you are also responsible for applying the scale.
In ES 1.1 (which you seem to be using), this is just:
glScalef([[UIScreen mainScreen] scale], [[UIScreen mainScreen] scale], 1.0);
In ES 2.0, you'd apply this to your model-view or projection matrix, which you then use in the vertex shader to transform your input geometry.
I found a 'scale' property for the view that apparently needs to be set to resize the view properly (the view is point-based rather than pixel-based, and for some bizarre reason the positions don't resize with the increased resolution). This is the only thing I could really find. I've included the check-and-set below within the init method of the GLView, but I'm still wondering if there's a better way. Wouldn't I manually have to resize everything by multiplying the height and width by the scale property?
- (id)initWithFrame:(CGRect)frame
{
    NSLog(@"frame: Origin: x:%f y:%f, Size: width:%f height:%f", frame.origin.x, frame.origin.y, frame.size.width, frame.size.height);
    self = [super initWithFrame:frame];
    if (self) {
        if ([[UIScreen mainScreen] respondsToSelector:NSSelectorFromString(@"scale")]) {
            if ([self respondsToSelector:NSSelectorFromString(@"contentScaleFactor")]) {
                [self setContentScaleFactor:[[UIScreen mainScreen] scale]];
                NSLog(@"Scale factor: %f", [[UIScreen mainScreen] scale]);
            }
        }
        NSLog(@"frame: Origin: x:%f y:%f, Size: width:%f height:%f Scale factor: %f", frame.origin.x, frame.origin.y, frame.size.width, frame.size.height, self.contentScaleFactor);

        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)super.layer;
        eaglLayer.opaque = YES; // Quartz need not handle transparency; this is a performance benefit.

        context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
        if (!context || ![EAGLContext setCurrentContext:context]) {
            return nil;
        }
    }
    return self;
}
That's the way I use setContentScaleFactor: if you are on a retina device and you just want to render at the old non-retina resolution, set that scale to 1 and iOS will double-size it for you.
Otherwise, if you write your code to detect retina devices and create a correspondingly larger viewport, set contentScaleFactor to 2 so iOS leaves you alone.
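Putting the two pieces together, a minimal sketch of the opt-in path (assuming a GL-backed UIView and an ES 1.1 context that is already current):
// Opt in to the native pixel resolution, then size the viewport in pixels.
self.contentScaleFactor = [[UIScreen mainScreen] scale];
glViewport(0, 0,
           self.bounds.size.width * self.contentScaleFactor,
           self.bounds.size.height * self.contentScaleFactor);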

How to animate the drawing of a CGPath?

I am wondering if there is a way to do this using Core Animation. Specifically, I am adding a sub-layer to a layer-backed custom NSView and setting its delegate to another custom NSView. That class's drawInRect method draws a single CGPath:
- (void)drawInRect:(CGRect)rect inContext:(CGContextRef)context
{
    CGContextSaveGState(context);
    CGContextSetLineWidth(context, 12);

    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, 0, 0);
    CGPathAddLineToPoint(path, NULL, rect.size.width, rect.size.height);

    CGContextBeginPath(context);
    CGContextAddPath(context, path);
    CGContextStrokePath(context);
    CGPathRelease(path); // the path is owned by this method

    CGContextRestoreGState(context);
}
My desired effect is to animate the drawing of this line. That is, I'd like the line to actually "stretch" in an animated way. It seems like there should be a simple way to do this with Core Animation, but I haven't been able to come across one.
Do you have any suggestions as to how I could accomplish this goal?
I found this animated paths example and wanted to share it for anyone else looking for how to do this with some code examples.
You will be using CAShapeLayer's strokeStart and strokeEnd, which require SDK 4.2, so if you need to support older iOS SDKs this unfortunately isn't what you want.
The really nice thing about these properties is that they are animatable. By animating strokeEnd from 0.0 to 1.0 over a duration of a few seconds, we can easily display the path as it is being drawn:
CABasicAnimation *pathAnimation = [CABasicAnimation animationWithKeyPath:@"strokeEnd"];
pathAnimation.duration = 10.0;
pathAnimation.fromValue = [NSNumber numberWithFloat:0.0f];
pathAnimation.toValue = [NSNumber numberWithFloat:1.0f];
[self.pathLayer addAnimation:pathAnimation forKey:@"strokeEndAnimation"];
Finally, add a second layer containing the image of a pen, and use a CAKeyframeAnimation to animate it along the path at the same speed to make the illusion perfect:
CAKeyframeAnimation *penAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
penAnimation.duration = 10.0;
penAnimation.path = self.pathLayer.path;
penAnimation.calculationMode = kCAAnimationPaced;
[self.penLayer addAnimation:penAnimation forKey:@"penAnimation"];
The source can be viewed here and a demo video here; read the creator's blog for more information.
Sure: don't draw the line yourself. Add a 12-pixel-high sublayer with a flat background color, starting with a zero-width frame and animating out to your view's width. If you need the ends to be rounded, set the layer's cornerRadius to half its height.
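A minimal sketch of that approach (dimensions and names are illustrative; the layer is assumed to belong to the same layer-backed NSView as in the question):
// A flat 12-point-high layer that stretches from zero width to the view's width.
CALayer *line = [CALayer layer];
line.backgroundColor = [NSColor blackColor].CGColor;
line.cornerRadius = 6.0;                  // half the height, for rounded ends
line.anchorPoint = CGPointMake(0.0, 0.5); // grow from the left edge
line.frame = CGRectMake(0, 0, 0, 12);     // start with zero width
[self.layer addSublayer:line];

CABasicAnimation *stretch = [CABasicAnimation animationWithKeyPath:@"bounds.size.width"];
stretch.fromValue = [NSNumber numberWithFloat:0.0f];
stretch.toValue = [NSNumber numberWithFloat:self.bounds.size.width];
stretch.duration = 1.0;
line.bounds = CGRectMake(0, 0, self.bounds.size.width, 12); // final model value
[line addAnimation:stretch forKey:@"stretch"];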
