Core Animation code structure/conventions - macOS

In learning Core Animation, I learned very quickly that if you don't do it right, you get really weird undefined behavior. To that end, I have a few questions that will help me conceptually understand it better.
My NSView subclass declares the following in its init. This view is a subview of a normal layer-backed view.
[self setLayer:[CALayer layer]];
[self setWantsLayer:NO];
After this, when and in what situations should I refer to self as opposed to [self layer]? I have been ONLY manipulating the layer with explicit and implicit animations, staying away from [self setFrame:] etc. and using [[self layer] setPosition:] etc.
The problem with this approach is that the actual frame of the view stays in one spot throughout any and all animations applied. What if my view is supposed to receive mouse events? For example, I have a view that uses Core Animation and it is dragged around by the mouse. Is there a way I can somehow keep the view frame synced with the current state of the presentation layer so I can receive proper mouse events?
About the presentation layer, apparently it's only available when an actual animation is in progress. Is there any sort of property of the layer that can tell me where it ACTUALLY is visually, even when an animation isn't in progress?

I think you need to re-phrase your question a little. It seems there is some underlying misunderstanding, but you're not really expressing it very clearly. Your question title suggests you're looking to understand something more theoretical, but your actual question suggests you're looking for something more concrete. Let me see if I can clarify a few things.
The presentationLayer provides information about the layer's current state while "in-flight".
When there is no animation occurring, the presentationLayer and the layer information will be identical. Query the layer's bounds, frame, or position to find out where it is currently in its parent's coordinate space.
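For example, a minimal sketch of that check, assuming a hypothetical layer-backed view named draggableView:
CALayer *layer = [draggableView layer];
CALayer *presentation = [layer presentationLayer];   // nil when nothing is in flight
CGPoint onScreen = presentation ? presentation.position : layer.position;
NSLog(@"visually at %@", NSStringFromPoint(onScreen));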
NSViews must have layer backing enabled to be able to perform animations.
Make sure you're not adding an explicit animation without also setting the layer value you're animating. Animations don't automatically change the properties of the layers they're animating. You have to change the property to the ending value yourself or it will just "snap back" to the starting value.
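To make that concrete, here is one common pattern, sketched with an assumed layer and endPoint: set the model value with implicit actions disabled, then add the explicit animation so the two don't fight.
CALayer *layer = [self layer];
CGPoint endPoint = CGPointMake(200.0, 100.0);   // wherever the layer should end up

CABasicAnimation *move = [CABasicAnimation animationWithKeyPath:@"position"];
move.fromValue = [NSValue valueWithPoint:layer.position];
move.toValue   = [NSValue valueWithPoint:endPoint];
move.duration  = 0.25;

[CATransaction begin];
[CATransaction setDisableActions:YES];   // suppress the implicit animation for this change
layer.position = endPoint;               // model value now matches the end of the animation
[CATransaction commit];

[layer addAnimation:move forKey:@"move"];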
If you want to animate the view, as opposed to a layer, you can use the animator proxy, like [[view animator] setFrame:newFrame];
Wrap calls to the animator in an NSAnimationContext grouping to alter things like animation duration (CATransaction plays the same role when you animate layer properties directly).
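For example (a sketch; view and newFrame stand in for your own view and target frame):
[NSAnimationContext beginGrouping];
[[NSAnimationContext currentContext] setDuration:0.5];
[[view animator] setFrame:newFrame];
[NSAnimationContext endGrouping];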
Let me know if you need some clarification by updating your question. Providing some pertinent code would really help identify the problems you're having trouble solving.

Firstly, you want to use [self setWantsLayer: YES]. Also, it's only important to call -setLayer: before -setWantsLayer: if you want to provide a specific CALayer subclass (such as a CAScrollLayer); if you just want a regular CALayer you just call -setWantsLayer: and it'll be created for you. Even better, just check the 'wants layer' option in Interface Builder.
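In code, the two setups described above look roughly like this in the view's initializer:
// Plain layer-backed view -- AppKit creates and manages the CALayer for you:
[self setWantsLayer:YES];

// Specific CALayer subclass -- set the layer before turning on layer backing:
[self setLayer:[CAScrollLayer layer]];
[self setWantsLayer:YES];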
Secondly, the entire point of using a layer-backed view is that you can continue to use the regular NSView methods and get the free CoreAnimation 'tweening' effects. If you want to use CoreAnimation as your only means of moving items around, then the correct way to do so is to create a layer-backed view which contains your pure-CALayer presentation hierarchy.
I've not looked at any freely-available CoreAnimation tutorials, but I can definitely recommend the Pragmatic Programmers' book on the subject. They also have a screencast available by the book's author.

Related

Framework uses drawRect:; I need to use it on a layer-backed NSView. How to update?

According to the documentation for NSView's drawRect:
If your app manages content using its layer object instead, use the updateLayer method to update your layer instead of overriding this method.
I have an NSView with subviews that are provided by the framework, and they all draw using drawRect:. This framework-provided view is a subview of an NSView for which I require a layer. Because my framework-provided view is a descendant of a layer-backed view, drawRect: isn't usually called, especially in cases where the window is made active or inactive (the view needs to update to reflect its (in)active state).
Of course if I make my containing view not layer backed, updates occur when the window is made active or inactive.
Without modifying the framework into a custom fork, what's the best avenue for making sure drawRect: occurs when needed in my framework-provided view?
Thanks.
Edit 25-Aug-2018:
It looks like the trick is to set one of the views in the hierarchy to, e.g., [view setCanDrawSubviewsIntoLayer:YES], which according to the documentation uses all of the subviews’ drawRect: to add their drawing to its own layer. However this seems to work only through 10.13, and is broken in the 10.14 beta. I'll continue to look for a potential API change, unless this is a 10.14 beta bug.
Since the issue is still unresolved, it's not really answered yet.
Layer-backed views which don't override -wantsUpdateLayer to return true still draw themselves using -drawRect:. The bit of documentation you quoted is using "should" to mean "should, for best performance,". It's not required, it's just recommended.
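For reference, opting into the layer-based path looks roughly like this in an NSView subclass (a sketch of the opt-in, not something the framework in question does):
- (BOOL)wantsUpdateLayer {
    return YES;   // tell AppKit to call -updateLayer instead of -drawRect:
}

- (void)updateLayer {
    self.layer.backgroundColor = [[NSColor windowBackgroundColor] CGColor];
}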
Views don't generally redraw themselves just because the containing window has changed key or main status. You would have to mark them as needing display. Or the framework should be doing that.
I suspect the reason that it works when your view is not layer-backed is that you are marking your view as needing update. Since non-layer-backed views draw into the window's backing store using the painter model (back to front), if your view redraws itself then any subviews will have to redraw themselves on top of your view's drawing.
If the framework's views need to redraw when the window's key/main status changes, then they should be observing the relevant notifications and setting themselves as needing display. If they're not doing that, it's a framework bug. You can work around it by marking them as needing display yourself.
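One way to sketch that workaround, assuming a hypothetical frameworkView reference (keep the returned token so you can remove the observer later, and observe NSWindowDidResignKeyNotification the same way):
id token = [[NSNotificationCenter defaultCenter]
    addObserverForName:NSWindowDidBecomeKeyNotification
                object:frameworkView.window
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
                [frameworkView setNeedsDisplay:YES];   // force -drawRect: on the next pass
            }];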

How do you animate a scroll and zoom atomically?

I have a custom view in my application, which is layer-backed and embedded in an NSScrollView. I allow the user to zoom in (which is accomplished by increasing the size of my custom view). I'm having trouble zooming in on an arbitrary point, though, since the NSScrollView keeps getting in the way and causing the view to jump around (typically to the view's origin) before I point it to the new scroll point. I would really like to use a CAScrollLayer, since I know I could definitely get the zooming right with it and have it move smoothly, but then I lose all built-in scrolling facilities.
Is there any way to leverage CAScrollLayer within an NSScrollView, possibly backing the NSClipView? If not, what purpose does CAScrollLayer actually serve? Is it possible, with a different approach, to change my view's size and the scroll point atomically and have that animate?
In short, is CAScrollLayer completely useless, or mostly useless?
Update
I've gotten my inner view to jump around less by making a CALayer subclass to display my view's contents. Rather than sizing with layout constraints, I have it sizing in an override of -resizeWithOldSuperlayerSize:. I still can't change the frame size and origin of my view simultaneously and get a smooth animation, though. To get a sense of what I'm looking for, open an image in Preview and zoom in and out. It zooms about the center of the image in a smooth manner.
In the limit, you can use an NSScroller instead; that way you would be able to use CAScrollLayer, if that’s your preferred implementation.
Note that on some (older) versions of Mac OS X, NSScroller has a bug that causes it to invoke an Apple private method on its containing view. You’ll know if this happens because you’ll get an exception about your custom view not responding to a method starting with an ‘_’.
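If you do go the CAScrollLayer route, the atomic part is just a matter of committing both changes in one transaction. A rough sketch, assuming a scrollLayer that hosts a contentLayer, plus zoom and newOrigin values you have already computed:
[CATransaction begin];
[CATransaction setAnimationDuration:0.3];
contentLayer.transform = CATransform3DMakeScale(zoom, zoom, 1.0);   // zoom about the anchor point
[scrollLayer scrollToPoint:newOrigin];   // or adjust scrollLayer.bounds.origin directly
[CATransaction commit];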

Animating setHidden: on NSView via Cocoa bindings

I'm currently putting the final touches on a project.
A lot (if not all) of the UI logic currently relies on Cocoa Bindings.
Some of the user interface elements (labels, buttons, etc.) have their "Hidden" bindings defined. When certain events are triggered, these elements' visibility is toggled.
I'm trying to animate the visibility change (by animating the opacity and maybe even the scale). This could easily be accomplished in a number of ways, either by setting the relevant layer properties, adding the animations to the layer, etc. However, since I'm trying to totally rely on the bindings behavior I "can't" really do this directly.
I tried an implementation using Layer actions, by defining actions for the keys kCAOnOrderIn and kCAOnOrderOut on the relevant elements, but it really didn't work, as the setHidden: is most likely being triggered on the NSView instead of the CALayer -- which makes sense.
So, my question is: how would you animate setHidden: on an NSView when setHidden: is being invoked by Cocoa Bindings?
Thank you.
This will fade out an NSView...
[[someView animator] setAlphaValue:0.0f];
Animating setHidden will have no visual effect since it's either on or off. If you want to animate visibility, use setAlphaValue: on the view (or setOpacity: on the layer) instead. These take a value between 0.0 and 1.0. If you need the hidden flag to get set for the sake of state information, call -performSelector:withObject:afterDelay: passing it a selector that sets the hidden value to whatever you need it to be after the animation has completed. Alternatively you can set up a delegate for explicit animation to be called back when the animation finishes and call setHidden then.
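On 10.7 and later there's also a block-based variant that rolls the two steps together (someView as in the snippet above):
[NSAnimationContext runAnimationGroup:^(NSAnimationContext *context) {
    context.duration = 0.25;
    [[someView animator] setAlphaValue:0.0];
} completionHandler:^{
    [someView setHidden:YES];      // keep the bound state information consistent
    [someView setAlphaValue:1.0];  // restore alpha so un-hiding works later
}];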
I would suggest taking a look at NSViewAnimation. It takes any NSView and can animate the frame, size or visibility.
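A minimal NSViewAnimation sketch along those lines; the fade-out effect also leaves the view hidden once the animation finishes, which fits the bindings use case:
NSDictionary *fadeOut = @{ NSViewAnimationTargetKey : someView,
                           NSViewAnimationEffectKey : NSViewAnimationFadeOutEffect };
NSViewAnimation *animation = [[NSViewAnimation alloc] initWithViewAnimations:@[ fadeOut ]];
[animation setDuration:0.25];
[animation startAnimation];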

Cocoa: Does there exist an NSView with user resize capability?

I want an NSView that can be resized by dragging its bottom-right corner around, just like an NSWindow. I want to be able to embed this NSView into a parent NSView. Is there a component like this in Cocoa or any of its extensions?
If you get more specific with your question, I can get a little more specific with the answer. :-)
There is nothing like this available that I know of, but it's not terribly difficult to create. The decision to make is "who handles drawing the resize grips and resizing / dragging logic?"
Views Handle Their Own
If your user-resizable view handles drawing the grips and responding to the resizing/dragging actions itself, then you have to choose whether you want the grips drawn atop the view's contents or "around the outside." If you want the grips "outside," the "usable area" decreases because your content has to be inset enough to leave room for you to draw the resizing controls, which can complicate drawing and sizing metrics. If you draw the grips "atop" the content, you can avoid this problem.
Container View Handles All Subviews
The alternative is to create a "resizable view container view" that draws the resize grips around any subviews' perimeters and handles the dragging/resizing logic by "bossing the subviews around" when it (the container) receives dragging events on one of its grip areas. Placing the logic here allows any type of subview to be draggable / resizable and gives you the added benefit of only having one instance of the slightly-heavier-weight view (versus many instances of subviews that have the more complicated logic in them).
The Basic Mechanism
Once you've decided that, it's really just a matter of creating your subview, which does the drawing, manages NSTrackingArea instances (for the grip areas), and responds to the appropriate mouse methods (down, moved, etc.). In the case of each subview handling its own, they'll manage their own tracking areas, grip drawing, and mouse moved, setting their own frame in response. In the case of a container view handling all this for its subviews, it will manage all subviews' tracking areas and draw their grips on itself, and set the targeted subview's frame (and the subview is blissfully ignorant of the whole thing).
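To make the "view handles its own" variant concrete, here is a bare-bones sketch (ResizableView is a hypothetical name; tracking areas for cursor updates, minimum-size policy beyond a simple clamp, and nicer grip drawing are left out):
@interface ResizableView : NSView
@end

@implementation ResizableView {
    BOOL _resizing;
}

// Grip occupies the bottom-right 16x16 points (non-flipped coordinates).
- (NSRect)gripRect {
    return NSMakeRect(NSMaxX(self.bounds) - 16.0, 0.0, 16.0, 16.0);
}

- (void)drawRect:(NSRect)dirtyRect {
    [[NSColor grayColor] set];
    NSRectFill([self gripRect]);   // stand-in for a proper grip image
}

- (void)mouseDown:(NSEvent *)event {
    NSPoint p = [self convertPoint:event.locationInWindow fromView:nil];
    _resizing = NSPointInRect(p, [self gripRect]);
}

- (void)mouseDragged:(NSEvent *)event {
    if (!_resizing) return;
    NSPoint p = [self.superview convertPoint:event.locationInWindow fromView:nil];
    NSRect frame = self.frame;
    frame.size.width = MAX(32.0, p.x - NSMinX(frame));
    CGFloat newHeight = MAX(32.0, NSMaxY(frame) - p.y);   // keep the top edge fixed
    frame.origin.y = NSMaxY(frame) - newHeight;
    frame.size.height = newHeight;
    [self setFrame:frame];
    [self setNeedsDisplay:YES];
}

- (void)mouseUp:(NSEvent *)event {
    _resizing = NO;
}

@end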
I hope this helps give you at least a general idea of possible mechanisms. Had I not just gotten up and started my morning coffee, I'd probably be able to write this more succinctly, but there you have it. :-)
EDIT 7 YEARS LATER
Because there wasn't much detail about what the OP wanted, I gave a very generic answer, but I should make a few points:
Always prefer an NSSplitView if it can be made to work for you (ie, if the views align with each other and divide the common container view's space). A split view lets you customize grip areas, etc. and does all of this to your subview for free.
AutoLayout didn't exist when I wrote this answer and it greatly complicates rolling your own solution for the view-handling-multiple-sizable-subviews scenario.
If you really do need a UI element that can be dragged/resized within some container, try your best to get away with using CALayers inside a master view that handles all the layout/sizing logic if you can.
If you can't do the above (ie, the resizable view contains complex controls and layout, has its own NSViewController, etc.), try a hybrid approach: use layers to display cached images of non-selected views and only add a full, interactive sizable subview for the selected item (or subviews for items).
Because of the complexities of AutoLayout, I really can't recommend the real draggable subview approach at all unless it's unavoidable. If you're designing a view that contains movable, sizable things, it's best (and most efficient) to make everything inside it that view's responsibility. Example: a graphics app with lots of shapes should have a Canvas view that represents the shapes (and any GUI decorations like size/drag grips, etc.) using CALayers. This takes advantage of graphics acceleration and is far more efficient than a bunch of (very resource-heavy) NSView subviews. All the move/size/select logic is handled by the "Canvas View" and the only subviews might be overlaid controls (though if your Canvas itself needs to be enclosed in a scroll view, it's best to use NSScrollView machinery to allow stationary overlay views for this purpose).
If designing a view that draws lots of things (for which you should definitely use layers to represent those things) but allows selecting only one thing, the approach of adding a subview is manageable enough even with AutoLayout. If the "selected for editing" thing has lots of complex controls that become visible when editing, an "editor subview" with accompanying view controller makes sense and is a good tradeoff in maintainability (because view controller compartmentalizes all editing functionality/UI handling) vs. container view complexity (because one subview isn't going to break the resource bank and maintaining temporary AutoLayout constraints for keeping its position during container view resizes & editor interactions isn't overly complex).
All of this assumes macOS; if designing for iOS, definitely bend over backwards to use layers and the new (as of this writing) drag and drop machinery, of which I know precious little at present.
In summary, the answer was incomplete as well as somewhat outdated, so I feel my original advice isn't as good as it could be these days.
Instead of using views, you can use windows and set the style mask of the window to NSResizableWindowMask.
Another option is using an NSSplitView, if you have two resizable, contiguous subviews.

How do you develop an application to draw, edit and save UML models in Cocoa?

Will the individual UML diagram shapes be NSView subclasses or NSBezierPaths? How are the diagrams created and managed?
One way to do this is to:
Create a document-based app
Design model classes for the different objects the end-user will be able to draw in your canvas, all sharing one abstract superclass
In your CanvasView class, implement drawRect: and have it ask the NSDocument subclass (or, for a more granular design, its view controller) for all the objects that should be drawn, in the right order, so it can draw them.
For each of these objects, call a drawInteriorInView:rect: method or something similar that they all have implemented, from within your CanvasView's drawRect: implementation.
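A sketch of steps 3 and 4 (CanvasObject, dataSource, and drawInteriorInView:rect: are hypothetical names standing in for your abstract model superclass, whoever vends the objects, and the drawing method they all share):
- (void)drawRect:(NSRect)dirtyRect {
    [[NSColor whiteColor] set];
    NSRectFill(dirtyRect);

    // Objects come back ordered back-to-front, so later shapes draw on top.
    for (CanvasObject *object in [self.dataSource objectsForCanvasView:self]) {
        [object drawInteriorInView:self rect:dirtyRect];
    }
}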
The advantage of such a granular design is that you can decide to replace NSBezierPath drawing with straight CoreGraphics calls if you find a need to do so, without having to completely re-architect the app.
Typical Cocoa controls, like for instance a table view, implement a bunch of different drawing methods (one for the background, one for the gridlines, and so on), all of them called (when applicable) from the view's drawRect:.
Or you could of course look at GCDrawKit, which seems to have a pretty functional implementation. Especially check out the sample app that comes with it.
Have you looked at OmniGraffle? It may do what you need.
[non-programming-related answer...]
Have you looked at the Sketch example project, found in /Developer/Examples/AppKit? It should get you at least halfway to where you're going.
You would typically start with an NSView subclass to represent your "canvas" and handle drawing and mouse/keyboard events. You would probably use NSBezierPath inside your drawing methods to fill and outline the shapes. Depending on how complex the drawing code is, you might put everything in your NSView subclass, or make an NSCell subclass that would take some work out of the NSView. In either case you would want to define a data source protocol (or create bindings) to provide data to the NSView from the objects in your data model which represent UML items.
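For illustration, filling and outlining one shape with NSBezierPath inside such a drawing method might look like this (the rect and colors are arbitrary):
NSBezierPath *shape = [NSBezierPath bezierPathWithOvalInRect:NSMakeRect(20.0, 20.0, 120.0, 80.0)];
[[NSColor blueColor] setFill];
[shape fill];
[[NSColor blackColor] setStroke];
[shape setLineWidth:2.0];
[shape stroke];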
Core Animation would be worth considering too, although I would start with NSView at the beginning, at least for a simple prototype.
