Multiple Core Animation Viewports - cocoa

I have a complex structure of CALayers forming a motion graphics system that can be manipulated by the user. This is displayed in the main window as part of the UI. I am looking for a good way to display multiple small sections of the CALayer stack on a second display as "viewports", which will likely be at a higher resolution than the main view. I am aware that I could render them out and redraw them, but I want to maintain the resolution independence of the CALayers.
My thought process was something to the effect of adding the main CALayer to multiple superlayers and then using a combination of masks and transforms to get the viewport to display the portion needed. Unfortunately, a CALayer can only have one superlayer.
Is there any good way to achieve this? Thanks in advance.

Unfortunately I think you'll need to maintain multiple CALayer stacks, one for each view. Since all the sets of layers should just be reflecting the state of a single model it should be relatively straightforward to keep them in sync.
You could optimise the zoomed view to only manage layers that are actually visible, which would cut down on resource usage.
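To make that concrete, here is a minimal sketch of the per-viewport layer tree idea. Everything in it (the MyShapeModel class, ViewportLayerForModel, the zoom parameters) is hypothetical and just illustrates one way each viewport could be rebuilt from the shared model:

    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>

    // Placeholder model class; in a real app this is whatever already
    // drives the main layer stack.
    @interface MyShapeModel : NSObject
    @property (assign) CGRect frame;
    @end
    @implementation MyShapeModel
    @end

    // Build an independent layer tree for one viewport. Each viewport owns
    // its own tree, so no layer ever needs two superlayers; the trees stay
    // in sync by being rebuilt (or updated) from the same model.
    static CALayer *ViewportLayerForModel(NSArray *shapes,
                                          CGRect regionOfInterest, // model-space rect to show
                                          CGFloat scale)           // zoom factor for this display
    {
        CALayer *root = [CALayer layer];
        root.masksToBounds = YES;

        for (MyShapeModel *shape in shapes) {
            CALayer *layer = [CALayer layer];
            layer.frame = shape.frame;   // geometry comes from the shared model
            // ...configure contents, borders, sublayers, etc. from the model...
            [root addSublayer:layer];
        }

        // Scale up and shift so regionOfInterest lands in the viewport
        // (exact offsets depend on your anchor points and layout). Because
        // the layers are re-rendered at this scale rather than copied as a
        // bitmap, the zoomed viewport stays resolution independent.
        CATransform3D t = CATransform3DMakeScale(scale, scale, 1.0);
        t = CATransform3DTranslate(t, -CGRectGetMinX(regionOfInterest),
                                      -CGRectGetMinY(regionOfInterest), 0.0);
        root.sublayerTransform = t;
        return root;
    }

The key point is that each display owns its own layers; when the model changes you update or rebuild every tree, and because the content is re-rendered rather than scaled as a bitmap, each viewport can match its display's resolution without losing sharpness.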

Related

CALayer or UIImageView for displaying static custom graphics?

I need to draw some graphics dynamically for my view. These graphics will not be animating. Is it more efficient to draw and display them in a CALayer, or to use UIGraphicsBeginImageContextWithOptions() to create a UIImage and display it in a UIImageView? Does it matter? Or what questions should I be asking to help me pick one over the other?
The first step to knowing what is more efficient is to define what that means to you. Are you talking about memory usage, CPU usage, etc.? Next, try the different approaches and measure them against your definition of efficiency. The results may vary based on your exact usage.
That said: unless this is a core functionality of your app the difference in performance/efficiency is probably going to be very small (especially since image views use layers behind the scenes).
You should start with the solution that seems easiest to implement and understand. Only if that becomes a bottleneck should you look into optimizations.
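If it helps to see what "try both and measure" looks like in practice, here is a rough sketch of each approach, assuming iOS with ARC; the circle drawing and the names MakeImageView and CircleLayerDelegate are just stand-ins:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    // 1. Draw once into a UIImage and hand it to a UIImageView.
    static UIImageView *MakeImageView(CGRect frame)
    {
        UIGraphicsBeginImageContextWithOptions(frame.size, NO, 0.0); // 0.0 = screen scale
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
        CGContextFillEllipseInRect(ctx, CGRectInset((CGRect){CGPointZero, frame.size}, 4, 4));
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        UIImageView *view = [[UIImageView alloc] initWithFrame:frame];
        view.image = image;
        return view;
    }

    // 2. Do the same drawing in a CALayer via its delegate.
    // Usage: layer.delegate = delegateInstance; [layer setNeedsDisplay];
    @interface CircleLayerDelegate : NSObject
    @end
    @implementation CircleLayerDelegate
    - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
    {
        CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
        CGContextFillEllipseInRect(ctx, CGRectInset(layer.bounds, 4, 4));
    }
    @end

Either way the pixels end up in a layer, so profile with Instruments before assuming one is meaningfully cheaper than the other.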

UIView self.layer.shouldRasterize = YES and performance issues

I would like to share my experience from using self.layer.shouldRasterize = YES; flag on UIViews.
I have a UIView class hierarchy that has self.layer.shouldRasterize turned ON in order to improve scrolling performance (all of them have STATIC subviews that are larger than the screen of the device).
Today in one of the subclasses I used CAEmitterLayer to produce nice particle effects.
The performance was really poor, even though the number of particles was low (50 particles).
What is the cause of this problem?
I'll just quote the Apple documentation and explain:
@property BOOL shouldRasterize
When the value of this property is YES, the layer is
rendered as a bitmap in its local coordinate space and then composited
to the destination with any other content. Shadow effects and any
filters in the filters property are rasterized and included in the
bitmap. However, the current opacity of the layer is not rasterized.
If the rasterized bitmap requires scaling during compositing, the
filters in the minificationFilter and magnificationFilter properties
are applied as needed.
So basically, when shouldRasterize is set to YES, every pixel that composes the layer is rendered and the whole layer is cached as a bitmap.
When will you benefit from it?
When the layer only needs to be drawn once. That means when you need just pure "simple" animation (e.g. moving, transforming, scaling), because Core Animation will reuse that cached layer without redrawing it every frame. It's a very powerful feature for caching complex layers (with shadows and corner radius) combined with Core Animation.
When will it kill your framerate?
When your layer is redisplayed many times, because on top of the drawing that is already taking place, shouldRasterize adds the cost of processing every pixel again to refresh the cached bitmap.
So the real question you should ask yourself is this: "Which layer am I setting shouldRasterize to YES on? And how often is that layer redrawn?"
Hope this was clear enough.
Turning self.layer.shouldRasterize OFF restores performance to normal levels.
Why is that?
According to a video on Apple's developer site (I cannot remember which video; help, please?) the rule for self.layer.shouldRasterize is that simple: if all of your subviews are static (their position, contents, etc. are not changing or animating), then it is beneficial to turn self.layer.shouldRasterize ON. On the other hand, if any of the subviews are changing, then the framework needs to re-cache the view hierarchy, and this is a huge bottleneck. Under the hood the bottleneck is the memory copying between CPU and GPU.
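As a concrete illustration of that rule, here is a rough sketch (hypothetical class and ivar names, assuming ARC): rasterize only the container whose subviews really are static, set the rasterizationScale, and keep the constantly animating emitter out of the rasterized subtree.

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    @interface ParticleCardView : UIView
    @end

    @implementation ParticleCardView
    {
        UIView *_staticContent;    // large, unchanging subviews live here
        CAEmitterLayer *_emitter;  // animates every frame
    }

    - (instancetype)initWithFrame:(CGRect)frame
    {
        if ((self = [super initWithFrame:frame])) {
            _staticContent = [[UIView alloc] initWithFrame:self.bounds];
            // ...add the big static subviews to _staticContent...
            _staticContent.layer.shouldRasterize = YES;
            // Match the screen scale, otherwise the cache is blurry on Retina.
            _staticContent.layer.rasterizationScale = [UIScreen mainScreen].scale;
            [self addSubview:_staticContent];

            _emitter = [CAEmitterLayer layer];
            _emitter.frame = self.bounds;
            // ...configure emitter cells...
            // Added to self.layer, not to the rasterized subtree, so its
            // animation never forces the cached bitmap to be regenerated.
            [self.layer addSublayer:_emitter];
        }
        return self;
    }
    @end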

Advice for a Cocoa drawing application

I'm new to Cocoa and looking for a little advice for an application from experienced Cocoa-ers. 
I'm building a basic OmniGraffle-style app where objects are drawn/dragged onto a canvas. After the objects are on the canvas, they can be selected to modify their properties (fill color, stroke color/width, etc.), be resized, moved around to a new position, etc.
To get warmed up, I've written a basic drawing app that creates objects (circles, rectangles, etc.) as drawn by the mouse on a custom NSView, adds the objects to an NSArray collection, and renders the contents of the collection into the view. I could continue in this vein, but I'm going to have to add support for detecting object selection, resolving z-indexing, focus highlighting, drag handles, etc. with all the associated rendering. Also, rendering every object on each cycle seems terribly wasteful.
It seems like a better approach would be to drop lightweight view objects onto a canvas that were able to detect mouse events on themselves, draw themselves and their focus rings, and so forth. However, while NSView seems like an object with these properties, I see a lot of chatter on the web about it being a heavyweight component with a lot of baggage. I've stumbled across NSCells and have read up on them, but I'm not sure if they are the right alternative.
Any suggestions? If you can nudge me in the right direction I'd greatly appreciate it.
First rule of optimization: Don't do it first.
A custom NSView per shape sounds about right to me. Whether you'll want different subclasses for different shapes will be up to you; I'd start out with a single generic shape-view class and shapes able to describe themselves as Bézier paths, but don't be too strict about holding to that—change it if it'd make it easier. Just implement it however it makes sense to you.
Then, once you've got it working, profile it. Make as many shapes as you can. Then make more. High-poly-count shapes. Intersections. Fills, strokes, shadows, and gradients. You should probably create a separate document for each stressor. Notice, just at the user level, what's slow. Then run your app under Instruments and look into why it's slow.
Maybe views will turn out to be the wrong solution. Don't forget to look into CALayers. But don't rule anything out as slow until you've tried it and measured it.
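If it helps to see how small such a view can start out, here is a sketch (ShapeView and its properties are made-up names) where the shape draws itself from a Bézier path and hit-tests against that same path:

    #import <Cocoa/Cocoa.h>

    @interface ShapeView : NSView
    @property (strong) NSBezierPath *path;   // in the view's own coordinates
    @property (strong) NSColor *fillColor;
    @property (strong) NSColor *strokeColor;
    @end

    @implementation ShapeView
    - (void)drawRect:(NSRect)dirtyRect
    {
        if (self.fillColor)   { [self.fillColor setFill];     [self.path fill];   }
        if (self.strokeColor) { [self.strokeColor setStroke]; [self.path stroke]; }
    }

    // Only treat clicks inside the path as hits, so overlapping shapes
    // behave sensibly.
    - (NSView *)hitTest:(NSPoint)point
    {
        NSPoint local = [self convertPoint:point fromView:self.superview];
        return [self.path containsPoint:local] ? self : nil;
    }
    @end

Selection handles and focus rings can be layered on top of the same path later, and if profiling shows views are too heavy, the identical drawing code moves into CALayer delegates with little change.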

Help with Cocoa: Objects as views?

in my app I want to have a light table to sort photos. Basically it's just a huge view with lots of photos in it and you can drag the photos around. Photos can overlap, they don't fall into a grid like in iPhoto.
So every photo needs to respond to mouse events. Do I make every photo into its own view? Or are views too expensive to create? I want to easily support 100+ photos or more.
Photos need to be in layers as well so I can change the stacking order. Do I use CoreAnimation for this?
I don't need finished source code just some pointers and general ideas. I will (try to) figure out the implementation myself.
Fwiw, I target 10.5+, I use Obj-C 2.0 and garbage collection.
Thanks in advance!
You should definitely use CALayer objects. Using a set of NSImageView subviews will very quickly become unmanageable performance-wise, especially if you have more than 100 images on screen. If you don't want to use Core Animation for some reason, you'd be much better off creating a single custom view and handling all the image drawing and hit testing yourself. This will be more efficient than instantiating many NSImageView objects.
However, Core Animation layers will give orders of magnitude improvement in performance over this approach, as each layer is buffered in the GPU so you can drag the layers around with virtually zero cost, and you only need to draw each image once rather than every time anything in the view changes. Core Animation will also handle layer stacking for you.
Have a look at the excellent CocoaSlides sample code which demonstrates a very similar application to what you describe, including hit testing and simple animation.
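Here is a rough sketch of that kind of layer-hosting light table, written in modern Objective-C with ARC and hypothetical names; it shows one CALayer per photo, hit testing on mouse-down, and dragging by moving the layer's position:

    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>

    @interface LightTableView : NSView
    @end

    @implementation LightTableView
    {
        CALayer *_draggedLayer;
        CGPoint  _lastPoint;
    }

    - (instancetype)initWithFrame:(NSRect)frame
    {
        if ((self = [super initWithFrame:frame])) {
            [self setLayer:[CALayer layer]];
            [self setWantsLayer:YES];   // layer-hosting: assign the layer first
        }
        return self;
    }

    - (void)addPhoto:(CGImageRef)image atPoint:(CGPoint)point
    {
        CALayer *photo = [CALayer layer];
        photo.contents = (__bridge id)image;   // drawn once, composited by the GPU
        photo.bounds = CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image));
        photo.position = point;
        [self.layer addSublayer:photo];
    }

    - (void)mouseDown:(NSEvent *)event
    {
        // Assumes the view and its layer share the same coordinate space.
        CGPoint p = NSPointToCGPoint([self convertPoint:event.locationInWindow fromView:nil]);
        CALayer *hit = [self.layer hitTest:p];
        _draggedLayer = (hit == self.layer) ? nil : hit;
        _lastPoint = p;

        // Re-adding the layer moves it to the top of the stacking order.
        if (_draggedLayer) {
            [_draggedLayer removeFromSuperlayer];
            [self.layer addSublayer:_draggedLayer];
        }
    }

    - (void)mouseDragged:(NSEvent *)event
    {
        CGPoint p = NSPointToCGPoint([self convertPoint:event.locationInWindow fromView:nil]);
        [CATransaction begin];
        [CATransaction setDisableActions:YES];  // no implicit animation while dragging
        _draggedLayer.position = CGPointMake(_draggedLayer.position.x + (p.x - _lastPoint.x),
                                             _draggedLayer.position.y + (p.y - _lastPoint.y));
        [CATransaction commit];
        _lastPoint = p;
    }
    @end

Since dragging only changes layer geometry, Core Animation never asks the photos to redraw, which is where the performance win over per-view drawing comes from.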
The simplest method is to use NSImageViews. You can create a subclass that can be easily dragged, scaled, and rotated. A more complex but visually superior option would be to use Core Animation layers (CALayer).
As long as you maintain the photo representations as distinct objects (so you can manipulate them individually), they will use quite a chunk of memory no matter how you represent them. If you keep all the data available in the photos, each one could take several megabytes. You will probably want to reduce each image's display quality (i.e. size in pixels, fidelity, etc.) except when that particular photo is being worked on in detail.
Remember, you don't have to treat the photos like the physical objects they mimic. You simply have to create the illusion of physical objects in the interface. We're theater stage designers, not architects. As long as your data model remains rigorous to the task at hand, the interface can engage in all kinds of illusions for the benefit of the user.

What's the best way to draw a bunch of (~200) colored rectangles in Cocoa?

My current plan is to draw the rectangles by subclassing NSView, but that seems like a very inefficient way to do what I'm trying to do, which is to draw a bunch of fixed, non-overlapping rectangles that change colors once in a while. Is there a better way? Thanks.
You can try using CALayers, kind of like this: http://theocacao.com/document.page/555.
If they're all the same color or image, you may find a single CGLayer more efficient. The purpose of that API is drawing the same thing many times.
On the other hand, if the rectangles move independently or have different colors or images on them, Core Animation is definitely the way to go.
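If you go the Core Animation route, the setup can be as small as this sketch (the function name is hypothetical; it assumes the rectangle frames come in as NSValue-wrapped NSRects):

    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>

    // One cheap, GPU-composited layer per rectangle, hosted by an
    // ordinary NSView.
    static NSArray *AddRectangleLayers(NSView *hostView, NSArray *frames)
    {
        [hostView setLayer:[CALayer layer]];   // layer-hosting view
        [hostView setWantsLayer:YES];

        NSMutableArray *layers = [NSMutableArray array];
        for (NSValue *value in frames) {
            CALayer *rect = [CALayer layer];
            rect.frame = NSRectToCGRect([value rectValue]);
            CGColorRef color = CGColorCreateGenericRGB(0.2, 0.5, 0.9, 1.0);
            rect.backgroundColor = color;      // the layer retains the color
            CGColorRelease(color);
            [hostView.layer addSublayer:rect];
            [layers addObject:rect];
        }
        return layers;
    }

    // Changing a color later touches only that layer; nothing is redrawn:
    //     CALayer *layer = layers[42];
    //     CGColorRef green = CGColorCreateGenericRGB(0.2, 0.8, 0.3, 1.0);
    //     layer.backgroundColor = green;
    //     CGColorRelease(green);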
Core Animation would be a great technology for a game, but if you want to stick with NSView for the time being, you could create a class similar to NSCell that the board view uses to implement positioning and drawing. This would work in a similar way to many Cocoa control classes, which use a single cell (with different values) to draw multiple items inside a view.
Keep in mind that using individual NSView objects may very well be more than fast enough, but regardless of any speed differences this strategy allows you to separate the logic in a way that makes sense.
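A sketch of that cell-style alternative might look like this (RectCell and BoardView are made-up names): one view, one lightweight object per rectangle, and a single drawRect: that only repaints what actually changed.

    #import <Cocoa/Cocoa.h>

    @interface RectCell : NSObject
    @property (assign) NSRect frame;
    @property (strong) NSColor *color;
    @end
    @implementation RectCell
    @end

    @interface BoardView : NSView
    @property (strong) NSArray *cells;   // of RectCell
    @end

    @implementation BoardView
    - (void)drawRect:(NSRect)dirtyRect
    {
        for (RectCell *cell in self.cells) {
            // Skip rectangles outside the area being redrawn.
            if (!NSIntersectsRect(cell.frame, dirtyRect)) continue;
            [cell.color setFill];
            NSRectFill(cell.frame);
        }
    }
    @end

    // When a rectangle changes color, invalidate just its area:
    //     [boardView setNeedsDisplayInRect:cell.frame];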
