Mixing CGLayers and CALayers (macOS)

Here's the setup:
I have an NSView that is layer-backed and contains many CALayers. One CALayer serves as the 'background' of the view, and many small CALayers are sublayers of it. Sometimes this view can be very large (up to 2560x1400), and when it is, the CALayers lag very noticeably. My guess is that Core Animation has trouble with very large CALayers.
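For reference, a minimal sketch of the setup being described (contentView, the sizes, and the sublayer count are arbitrary assumptions):

    // Layer-backed NSView with one big background layer and many small sublayers.
    [contentView setWantsLayer:YES];

    CALayer *background = [CALayer layer];
    background.frame = NSRectToCGRect([contentView bounds]);   // can be as large as 2560x1400
    [[contentView layer] addSublayer:background];

    for (NSUInteger i = 0; i < 50; i++) {
        CALayer *item = [CALayer layer];
        item.frame = CGRectMake(30.0 * i, 20.0 * i, 64.0, 64.0);
        [background addSublayer:item];
    }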
So I would like to change this 'background' layer from a CALayer into something like a CGLayer, rendered into the view with ordinary Quartz drawing, while the small CALayers on top of it keep operating just as they do now. That way only the small CALayers remain, the 'background' can be cached in a CGLayer, and hopefully performance improves significantly.
Anyone know how this could be accomplished?

After more research, it doesn't look like this can be accomplished: an NSView is either CALayer-backed or it isn't.

Related

What is the relationship between NSImageView and NSImageCell?

It seems that if I create an NSImageView in code, there is no way to have the image automatically scale proportionally up if the NSImageView becomes bigger than the image itself. (an odd omission)
On the other hand, if I create the NSImageView in IB, it seems to somehow attach an NSImageCell to the NSImageView and the NSImageCell has an option to scale proportionally up and down, which is what I want.
But in IB, I can't seem to understand the relationship between the NSImageView and the NSImageCell. I can't delete the NSImageCell from the NSImageView and I don't see the connection in bindings or anywhere else.
How do I get the functionality of the NSImageCell when creating the NSImageView in code?
Sorry if this is obvious, but I'm used to UIImageViews, and they're definitely different from NSImageView.
Thank you.
You should be able to scale up using [imageView setImageScaling:NSImageScaleProportionallyUpOrDown]. Have you had trouble with that? You can also access the cell of any NSControl using -cell.
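For instance, a minimal sketch of creating the view in code (the frame values and image name are arbitrary, and window is assumed to exist):

    NSImageView *imageView = [[NSImageView alloc] initWithFrame:NSMakeRect(0.0, 0.0, 400.0, 300.0)];
    [imageView setImage:[NSImage imageNamed:@"photo"]];   // assumes an image named "photo"
    [imageView setImageScaling:NSImageScaleProportionallyUpOrDown];
    [[window contentView] addSubview:imageView];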
As for the separation of cells from controls (views), this is a hold-over from the days of much less powerful computers (some NeXT computers had only 8 MB of memory). Cells provide a lighter-weight object that avoids some of the overhead of a full view. This is important in cases where a lot of elements might exist, such as in a matrix or a table. Cells are designed to be easy to copy and reuse, much like UITableViewCell: the same NSCell may be used repeatedly to draw different content. Cells also share some singletons. For example, there is typically only one field editor (an NSTextView) shared by most cells; it just gets moved around as needed as the user selects different text fields to edit.
In a world where the first iPhone had 10x the memory of a NeXT and desktops commonly have 1000x the memory, some of the choices in NSCell don't make as much sense. Since 10.7, OS X has moved some things away from NSCell: NSTableView now supports NSView-based cells as well as NSCell-based cells. When iPhoneOS was released, UITableView used views from the beginning. (Of course an iPhone table view is minuscule compared to an OS X table view, so it was an easier choice for more reasons than just available memory.)
The reason you're confused is that the documentation for -[NSImageView setImageScaling:] is wrong where it lists the possible scaling values. If you look up NSImageScaling itself, you will find the additional choice NSImageScaleProportionallyUpOrDown.

UIView self.layer.shouldRasterize = YES and performance issues

I would like to share my experience with the self.layer.shouldRasterize = YES flag on UIViews.
I have a UIView class hierarchy that has self.layer.shouldRasterize turned ON in order to improve scrolling performance (all of them have STATIC subviews that are larger than the screen of the device).
Today in one of the subclasses I used CAEmitterLayer to produce nice particle effects.
The performance was really poor even though the number of particles was really low (50 particles).
What is the cause of this problem?
I'll just quote the Apple documentation and explain:
@property BOOL shouldRasterize

When the value of this property is YES, the layer is rendered as a bitmap in its local coordinate space and then composited to the destination with any other content. Shadow effects and any filters in the filters property are rasterized and included in the bitmap. However, the current opacity of the layer is not rasterized. If the rasterized bitmap requires scaling during compositing, the filters in the minificationFilter and magnificationFilter properties are applied as needed.
So basically when shouldRasterize is set to YES, every pixel that will compose the layer is calculated and the whole layer is cached as a bitmap.
When will you benefit from it?
When you only need to draw the layer once. That means cases with just pure "simple" animation (e.g. moving, transforming, scaling), because Core Animation will reuse the cached bitmap without redrawing the layer every frame. Combined with Core Animation, it's a very powerful way to cache complex layers (with shadows and corner radii).
When will it kill your framerate?
When your layer is redisplayed many times, because on top of the drawing that already has to happen, shouldRasterize reprocesses every pixel to rebuild the cached bitmap.
So the real question to ask yourself is: "Which layer am I setting shouldRasterize to YES on, and how often is that layer redrawn?"
Hope this was clear enough.
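As an illustration, a minimal sketch of the "good" case, a static, expensive-to-draw view cached as a bitmap (cardView is a hypothetical name):

    cardView.layer.cornerRadius = 8.0;
    cardView.layer.shadowOpacity = 0.5;
    cardView.layer.shouldRasterize = YES;
    // Match the screen scale, or the cached bitmap looks blurry on Retina displays.
    cardView.layer.rasterizationScale = [[UIScreen mainScreen] scale];

An animating CAEmitterLayer inside such a rasterized hierarchy invalidates that cached bitmap every frame, which defeats the purpose.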
Turning OFF self.layer.shouldRasterize restores performance to normal levels.
Why is that?
According to a video on Apple's developer site (I cannot remember which video, help please?), the rule for self.layer.shouldRasterize is simple: if all of your subviews are static (their position, contents, etc. are not changing or animating), then it is beneficial to turn self.layer.shouldRasterize ON. On the other hand, if any of the subviews are changing, the framework needs to re-cache the view hierarchy, and this is a huge bottleneck. Under the hood the bottleneck is the memory copying between the CPU and GPU.

Advice for a Cocoa drawing application

I'm new to Cocoa and looking for a little advice for an application from experienced Cocoa-ers. 
I'm building a basic OmniGraffle-style app where objects are drawn/dragged onto a canvas. After the objects are on the canvas, they can be selected to modify their properties (fill color, stroke color/width, etc.), be resized, moved around to a new position, etc.
To get warmed up, I've written a basic drawing app that creates objects (circles, rectangles, etc.) as drawn by the mouse on a custom NSView, adds the objects to an NSArray collection, and renders the contents of the collection into the view. I could continue in this vein, but I'm going to have to add support for detecting object selection, resolving z-ordering, focus highlighting, drag handles, etc., with all the associated rendering. Also, rendering every object on each cycle seems terribly wasteful.
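For reference, that warm-up approach amounts to something like this (Shape and the shapes accessor are hypothetical names):

    // Redraws every shape on every pass, regardless of what actually changed.
    - (void)drawRect:(NSRect)dirtyRect {
        for (Shape *shape in [self shapes]) {
            [[shape fillColor] setFill];
            [[shape path] fill];    // each shape exposes an NSBezierPath
        }
    }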
It seems like a better approach would be to drop lightweight view objects onto a canvas that were able to detect mouse events on themselves, draw themselves and their focus rings, and so forth. However, while NSView seems like an object with these properties, I see a lot of chatter on the web about it being a heavyweight component with a lot of baggage. I've stumbled across NSCells and have read up on them, but I'm not sure if they are the right alternative.
Any suggestions? If you can nudge me in the right direction I'd greatly appreciate it.
First rule of optimization: Don't do it first.
A custom NSView per shape sounds about right to me. Whether you'll want different subclasses for different shapes will be up to you; I'd start out with a single generic shape-view class and shapes able to describe themselves as Bézier paths, but don't be too strict about holding to that—change it if it'd make it easier. Just implement it however it makes sense to you.
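For instance, a minimal sketch of that generic shape-view idea (all names are assumptions, not a prescription):

    @interface ShapeView : NSView
    @property (strong) NSBezierPath *path;    // the shape, in view coordinates
    @property (strong) NSColor *fillColor;
    @end

    @implementation ShapeView

    - (void)drawRect:(NSRect)dirtyRect {
        [self.fillColor setFill];
        [self.path fill];
    }

    // Only claim mouse events that land inside the shape itself.
    - (NSView *)hitTest:(NSPoint)point {
        NSPoint local = [self convertPoint:point fromView:[self superview]];
        return [self.path containsPoint:local] ? self : nil;
    }

    @end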
Then, once you've got it working, profile it. Make as many shapes as you can. Then make more. High-poly-count shapes. Intersections. Fills, strokes, shadows, and gradients. You probably should create a separate document for each stressor. Notice just at the user level what's slow. Then, run your app under Instruments and look into why it's slow.
Maybe views will turn out to be the wrong solution. Don't forget to look into CALayers. But don't rule anything out as slow until you've tried it and measured it.

Multiple Core Animation Viewports

I have a complex structure of CALayers forming a motion graphics system that can be manipulated by the user. This is being displayed in the main window as part of the UI. I am looking for a good way to display multiple small sections of the CALayer stack on a second display as "viewports", which will likely be at a higher resolution than the main view. I am aware that I could render them out and redraw them, but I want to maintain the resolution independence of the CALayers.
My thought process was something to the effect of adding the main CALayer to multiple superlayers and then using a combination of masks and transforms to get the viewport to display the portion needed. Unfortunately, a CALayer can only have one superlayer.
Is there any good way to achieve this? Thanks in advance.
Unfortunately I think you'll need to maintain multiple CALayer stacks, one for each view. Since all the sets of layers should just be reflecting the state of a single model it should be relatively straightforward to keep them in sync.
You could optimise the zoomed view to only manage layers that are actually visible, which would cut down on resource usage.
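A rough sketch of the syncing idea (ShapeModel and its properties are assumptions; it presumes each viewport's layer tree already has one sublayer per model object):

    // Mirror the shared model into one viewport's layer tree.
    - (void)syncViewportLayer:(CALayer *)root toShapes:(NSArray *)shapes zoom:(CGFloat)zoom {
        // A transform, not a bitmap scale, keeps the viewport resolution independent.
        root.sublayerTransform = CATransform3DMakeScale(zoom, zoom, 1.0);
        for (NSUInteger i = 0; i < [shapes count]; i++) {
            ShapeModel *shape = [shapes objectAtIndex:i];
            CALayer *layer = [[root sublayers] objectAtIndex:i];
            layer.position = shape.position;
            layer.transform = CATransform3DMakeRotation(shape.rotation, 0.0, 0.0, 1.0);
        }
    }

Each display view gets its own tree, so the one-superlayer restriction never comes into play.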

Help with Cocoa: Objects as views?

In my app I want to have a light table to sort photos. Basically it's just a huge view with lots of photos in it, and you can drag the photos around. Photos can overlap; they don't fall into a grid like in iPhoto.
So every photo needs to respond to mouse events. Do I make every photo into its own view? Or are views too expensive to create? I want to easily support 100 photos or more.
Photos need to be in layers as well so I can change the stacking order. Do I use Core Animation for this?
I don't need finished source code just some pointers and general ideas. I will (try to) figure out the implementation myself.
Fwiw, I target 10.5+, I use Obj-C 2.0 and garbage collection.
Thanks in advance!
You should definitely use CALayer objects. Using a set of NSImageView subviews will very quickly become unmanageable performance-wise, especially if you have more than 100 images on screen. If you don't want to use Core Animation for some reason, you'd be much better off creating a single custom view and handling all the image drawing and hit testing yourself. This will be more efficient than instantiating many NSImageView objects.
However, Core Animation layers will give orders-of-magnitude better performance than this approach: each layer is buffered on the GPU, so you can drag the layers around with virtually zero cost, and you only need to draw each image once rather than every time anything in the view changes. Core Animation will also handle layer stacking for you.
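A minimal sketch of the per-photo layer setup (containerView, photoImage, and stackingIndex are assumed names):

    // One CALayer per photo, hosted by a layer-backed container view.
    CALayer *photoLayer = [CALayer layer];
    photoLayer.contents = (id)photoImage;    // photoImage is a CGImageRef
    photoLayer.bounds = CGRectMake(0.0, 0.0, 320.0, 240.0);
    photoLayer.position = CGPointMake(400.0, 300.0);
    photoLayer.zPosition = stackingIndex;    // raise this to bring a photo to the front
    [[containerView layer] addSublayer:photoLayer];

Dragging then reduces to updating photoLayer.position from mouseDragged: events, and restacking to changing zPosition.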
Have a look at the excellent CocoaSlides sample code which demonstrates a very similar application to what you describe, including hit testing and simple animation.
The simplest method is to use NSImageViews. You can create a subclass that can be easily dragged, scaled, and rotated. A more complex but visually superior option would be to use Core Animation layers (CALayer).
As long as you maintain the photo representations as distinct objects (so you can manipulate them individually), they will use quite a chunk of memory no matter how you represent them. At full fidelity, each photo could take several megabytes. You will probably want to reduce each image's display quality (i.e. size in pixels, fidelity, etc.) except when a particular photo is being worked on in detail.
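For example, one way to build a reduced display version is ImageIO's thumbnail support (a sketch; photoURL is assumed, and the 512-pixel cap is an arbitrary choice):

    #import <ImageIO/ImageIO.h>

    CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)photoURL, NULL);
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageAlways,
        [NSNumber numberWithInt:512], (id)kCGImageSourceThumbnailMaxPixelSize,
        nil];
    CGImageRef displayImage = CGImageSourceCreateThumbnailAtIndex(source, 0, (CFDictionaryRef)options);
    CFRelease(source);
    // Hand displayImage to the photo's layer; release it with CGImageRelease when done.
    // Load the full-resolution image only when a photo is being worked on in detail.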
Remember, you don't have to treat the photos like the physical objects they mimic. You simply have to create the illusion of physical objects in the interface. We're theater stage designers, not architects. As long as your data model remains rigorous about the task at hand, the interface can engage in all kinds of illusions for the benefit of the user.
