Can drawRect be triggered on an SCNView? (macOS)

I have two macOS apps that are very similar. One renders a 2-D animation, with Quartz calls, in a subclass of NSView; the other renders a 3-D animation, using SceneKit geometries, in a subclass of SCNView (itself a subclass of NSView). In each case the view is owned by a view controller and that ownership is set in a storyboard. In each case I use a timer to dirty the view every second so its drawRect gets triggered to drive the animated movements. In each case I have used: self.view.needsDisplay = true
In the 2-D case, drawRect is called on the view instance; in the 3-D case it is not (even for the initial render).
I'm puzzled! Does SCNView suppress calls to drawRect? If so, how might I get around this? If not, what voodoo secret have I missed?
If this behavior is not what readers would expect, I will post a sample project which exhibits it.
I know that SceneKit can take advantage of Core Animation, but I want to keep the same general timer mechanism in both apps because the animated content is essentially the same action: what was flat in 2-D is spherical in 3-D, so using SceneKit rendering made sense.
Added an Xcode project to show different NSView and SCNView behaviors:
https://www.dropbox.com/s/qtymzitkqcqhfje/SCN.zip?dl=0

You're fighting the framework.
SceneKit has its own timers for rendering and animation. Hook into those to update your objects' properties (locations, colors, etc). Let SceneKit handle the draw calls.
The methods you need may be in unexpected places. Take a look at the documentation for the SCNSceneRenderer and SCNSceneRendererDelegate protocols. The renderer delegate documentation explains the render loop and shows where to customize your app's animations and physics.
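For example, here's a minimal Swift sketch of that approach (the class name, node name, and per-frame rotation are illustrative assumptions): make the view controller the SCNView's delegate and move the per-frame updates into renderer(_:updateAtTime:) instead of dirtying the view from a timer.

import SceneKit

class SceneViewController: NSViewController, SCNSceneRendererDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let scnView = self.view as? SCNView else { return }
        scnView.delegate = self        // hook into SceneKit's render loop
        scnView.isPlaying = true       // keep the render loop running continuously
    }

    // Called once per frame, before actions and animations are evaluated.
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        // Update node properties here; SceneKit issues the draw calls itself.
        if let node = renderer.scene?.rootNode.childNode(withName: "sphere", recursively: true) {
            node.eulerAngles.y = CGFloat(time)   // illustrative per-frame update
        }
    }
}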

Related

Animating setHidden: on NSView via Cocoa bindings

I'm currently putting the final touches on a project.
A lot (if not all) of the UI logic currently relies on Cocoa Bindings.
Some of the user interface elements (labels, buttons, etc.) have their "Hidden" bindings defined. When certain events are triggered, these elements' visibility is toggled.
I'm trying to animate the visibility change (by animating the opacity and maybe even the scale). This could easily be accomplished in a number of ways, either by setting the relevant layer properties, adding the animations to the layer, etc. However, since I'm trying to totally rely on the bindings behavior I "can't" really do this directly.
I tried an implementation using Layer actions, by defining actions for the keys kCAOnOrderIn and kCAOnOrderOut on the relevant elements, but it really didn't work, as the setHidden: is most likely being triggered on the NSView instead of the CALayer -- which makes sense.
So, my question is: how would you animate setHidden: on an NSView when setHidden: is being invoked by Cocoa Bindings?
Thank you.
This will fade out an NSView...
[[someView animator] setAlphaValue:0.0f];
Animating setHidden will have no visual effect, since it's either on or off. If you want to animate visibility, animate the view's alphaValue (or the layer's opacity) instead; these take a value between 0.0 and 1.0. If you need the hidden flag set for the sake of state information, call -performSelector:withObject:afterDelay:, passing it a selector that sets the hidden value to whatever you need after the animation has completed. Alternatively, you can set up a delegate for an explicit animation to be called back when the animation finishes, and call setHidden then.
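A minimal Swift sketch of that idea, using NSAnimationContext's completion handler rather than performSelector:withObject:afterDelay: (the helper name is made up):

import AppKit

func fadeOutAndHide(_ view: NSView) {
    NSAnimationContext.runAnimationGroup({ context in
        context.duration = 0.25
        view.animator().alphaValue = 0.0     // animate the fade
    }, completionHandler: {
        view.isHidden = true                 // set the hidden flag for state purposes
        view.alphaValue = 1.0                // restore alpha so un-hiding works later
    })
}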
I would suggest taking a look at NSViewAnimation. It takes any NSView and can animate the frame, size or visibility.
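For instance, a short sketch of a fade-out with NSViewAnimation (duration picked arbitrarily):

import AppKit

func fadeOut(_ view: NSView) {
    let viewDictionary: [NSViewAnimation.Key: Any] = [
        .target: view,
        .effect: NSViewAnimation.EffectName.fadeOut
    ]
    let animation = NSViewAnimation(viewAnimations: [viewDictionary])
    animation.duration = 0.25
    animation.start()    // the fade-out effect also hides the view when it finishes
}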

Scaling custom pagination in a document-based Cocoa application

I am implementing printing in my document-based Cocoa application, and I'm wondering if anyone can help me out with this task.
I have to use a custom pagination scheme because the main view works in ways that normal pagination methods would not support. This works; however, my view ends up being too big for the paper size most of the time. Tiling the view across multiple pages is not acceptable for my app. I would like my custom pagination to work the way the NSFitPagination mode works: if the view is too big for the page, scale it down to fit.
I thought I could do this by simply overriding my view's drawRect: method and applying a transform to the current graphics context before it draws. However, it appears that the printing mechanism calls drawRect: independently for each individual subview of the view being drawn, so applying a scale in the superview's drawRect: doesn't work.
Any thoughts?
I solved this by not adding my view as a subview of the view to be printed. Instead, I overrode the drawRect: method of the view being printed, set up the scaling transform there manually, and called drawRect: on the view I wanted to print.
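A rough Swift sketch of that approach (the wrapper class, content view, and scale factor are all illustrative, not the poster's actual code):

import AppKit

// Hypothetical wrapper used only for printing: it scales the graphics
// context, then draws the real content view into it.
class PrintScalingView: NSView {
    let contentView: NSView
    let scale: CGFloat

    init(contentView: NSView, scale: CGFloat) {
        self.contentView = contentView
        self.scale = scale
        super.init(frame: NSRect(x: 0, y: 0,
                                 width: contentView.bounds.width * scale,
                                 height: contentView.bounds.height * scale))
    }

    required init?(coder: NSCoder) { fatalError("not supported") }

    override func draw(_ dirtyRect: NSRect) {
        guard let ctx = NSGraphicsContext.current?.cgContext else { return }
        ctx.saveGState()
        ctx.scaleBy(x: scale, y: scale)        // shrink the content to fit the page
        contentView.draw(contentView.bounds)   // draw the content at the scaled size
        ctx.restoreGState()
    }
}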

How do I prevent a CALayer from redrawing as its bounds change?

I have a CALayer with a custom draw method that I've added to my view's base layer. I set needsDisplayOnBoundsChange to NO. However, when I resize the parent view's frame, the layer's drawInContext: is getting called continuously. I'd like the contents to scale while the resize is occurring. Any clues?
Interesting: I have a case where a CALayer correctly scales its contents until I call setNeedsDisplay on it to redraw them. One thing that may be different is that in my case the layer is drawn by its delegate, not by a CALayer subclass. Another is that this is on iOS rather than OS X (I don't know which you are using). It is possible that there are behavioral differences between subclassed and delegate-drawn layers, and/or between iOS and OS X.
Another thing to note is that needsDisplayOnBoundsChange is documented to be NO by default, so one should not need to set it. I am not specifically setting needsDisplayOnBoundsChange on my layer.
You could try using a delegate to do the drawing to see if that makes a difference. Note that a UIView cannot be a delegate to a CALayer. In my case I made a simple delegate object that forwards the draw call to my view.
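For reference, a minimal Swift sketch of that forwarding-delegate idea (the protocol and property names are invented for illustration):

import QuartzCore

protocol ContentDrawer: AnyObject {
    func draw(in ctx: CGContext)
}

// A small standalone delegate object that forwards the layer's draw
// call to whatever object actually knows how to draw the content.
final class LayerDrawingDelegate: NSObject, CALayerDelegate {
    weak var drawer: ContentDrawer?

    func draw(_ layer: CALayer, in ctx: CGContext) {
        drawer?.draw(in: ctx)   // forward the draw call
    }
}

// Usage sketch: layer.delegate = drawingDelegate; layer.setNeedsDisplay()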

Core animation code structure/conventions

In learning Core Animation, I learned very quickly that if you don't do it right, you get really weird undefined behavior. To that end, I have a few questions that will help me conceptually understand it better.
My NSView subclass declares the following in its init. This view is a subview of a normal layer-backed view.
[self setLayer:[CALayer layer]];
[self setWantsLayer:NO];
After this, when and in what situations should I refer to self as opposed to [self layer]? I have been ONLY manipulating the layer with explicit and implicit animations, staying away from [self setFrame:] etc. and using [[self layer] setPosition] etc.
The problem with this approach is that the actual frame of the view stays in one spot throughout any and all animations applied. What if my view is supposed to receive mouse events? For example, I have a view that uses Core Animation and it is dragged around by the mouse. Is there a way I can keep the view's frame synced with the current state of the presentation layer so I receive proper mouse events?
About the presentation layer, apparently it's only available when an actual animation is in progress. Is there any sort of property of the layer that can tell me where it's ACTUALLY visually at even when an animation's not in progress?
I think you need to re-phrase your question a little. It seems there is some underlying misunderstanding, but you're not really expressing it very clearly. Your question title suggests you're looking to understand something theoretical, but your actual question suggests you're looking for something more concrete. Let me see if I can clarify a few things.
The presentationLayer provides information about the layer's current state while "in-flight".
When there is no animation occurring, the presentationLayer and the layer information will be identical. Query the layer's bounds, frame, or position to find out where it currently is in its parent's coordinate space.
NSViews must have layer backing enabled to be able to perform animations.
Make sure you're not just animating with an explicit animation and not actually setting the layer value that you're animating. Animations don't automatically change the properties of the layers they're animating. You have to change the property to the ending value yourself or it will just "snap back" to the starting value.
If you want to animate the view, as opposed to a layer, you can use the animator proxy, like [[view animator] setFrame:newFrame];
Wrap calls to the animator in a CATransaction to alter things like animation duration.
Let me know if you need some clarification by updating your question. Providing some pertinent code would really help identify the problems you're having trouble solving.
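To illustrate the "snap back" point above, here's a minimal Swift sketch (the layer, key path, and values are illustrative): add the explicit animation and also update the layer's model value.

import AppKit

func move(_ layer: CALayer, to newPosition: CGPoint) {
    let animation = CABasicAnimation(keyPath: "position")
    animation.fromValue = NSValue(point: layer.position)
    animation.toValue = NSValue(point: newPosition)
    animation.duration = 0.5

    CATransaction.begin()
    CATransaction.setDisableActions(true)   // suppress the extra implicit animation
    layer.position = newPosition            // set the model value so it doesn't snap back
    CATransaction.commit()

    layer.add(animation, forKey: "position")
}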
Firstly, you want to use [self setWantsLayer: YES]. Also, it's only important to call -setLayer: before -setWantsLayer: if you want to provide a specific CALayer subclass (such as a CAScrollLayer); if you just want a regular CALayer you just call -setWantsLayer: and it'll be created for you. Even better, just check the 'wants layer' option in Interface Builder.
Secondly, the entire point of using a layer-backed view is that you can continue to use the regular NSView methods and get the free CoreAnimation 'tweening' effects. If you want to use CoreAnimation as your only means of moving items around, then the correct way to do so is to create a layer backed view which contains your pure-CALayer presentation hierarchy.
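For instance, a short Swift sketch of the layer-backed approach (the view and values are arbitrary):

import AppKit

let view = NSView(frame: NSRect(x: 0, y: 0, width: 100, height: 100))
view.wantsLayer = true   // layer-backed: keep using the normal NSView API

NSAnimationContext.runAnimationGroup { context in
    context.duration = 0.5
    view.animator().setFrameOrigin(NSPoint(x: 200, y: 200))   // free Core Animation tweening
    view.animator().alphaValue = 0.5
}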
I've not looked at any freely-available CoreAnimation tutorials, but I can definitely recommend the Pragmatic Programmers' book on the subject. They also have a screencast available by the book's author.

How do you develop an application to draw, edit and save UML models in Cocoa?

Will the individual UML diagram shapes be NSView subclasses or NSBezierPaths? How are the diagrams created and managed?
One way to do this is to:
Create a document-based app
Design model classes for the different objects the end-user will be able to draw in your canvas, all sharing one abstract superclass
In your CanvasView class, implement drawRect: and have it call your NSDocument subclass (or, for a more granular design, its view controller) to get all the objects that should be drawn, in the right order, and draw them.
For each of these objects, call a drawInteriorInView:rect: method or something similar that they all have implemented, from within your CanvasView's drawRect: implementation.
The advantage of such a granular design is that you can decide to replace NSBezierPath drawing with straight CoreGraphics calls if you find a need to do so, without having to completely re-architect the app.
Typical Cocoa controls, a table view for instance, implement a number of separate drawing methods (one for the background, one for the grid lines, and so on), all of them called, when applicable, from the view's drawRect:.
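A bare-bones Swift sketch of that structure (all names here are illustrative, not part of any framework):

import AppKit

// Abstract superclass shared by everything the user can draw on the canvas.
class UMLShape {
    var frame: NSRect = .zero
    func drawInterior(in view: NSView, rect: NSRect) {
        // Subclasses override this with NSBezierPath (or Core Graphics) drawing.
    }
}

class UMLClassBox: UMLShape {
    override func drawInterior(in view: NSView, rect: NSRect) {
        let path = NSBezierPath(rect: frame)
        NSColor.white.setFill()
        path.fill()
        NSColor.black.setStroke()
        path.stroke()
    }
}

class CanvasView: NSView {
    // In a real app these would come from the NSDocument subclass or its view controller.
    var shapes: [UMLShape] = []

    override func draw(_ dirtyRect: NSRect) {
        for shape in shapes where shape.frame.intersects(dirtyRect) {
            shape.drawInterior(in: self, rect: dirtyRect)   // drawn back to front
        }
    }
}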
Or you could of course look at GCDrawKit, which seems to have a pretty functional implementation. Especially check out the sample app that comes with it.
[non-programming-related answer...]
Have you looked at OmniGraffle? It may do what you need.
Have you looked at the Sketch example project, found in /Developer/Examples/AppKit? It should get you at least halfway to where you're going.
You would typically start with an NSView subclass to represent your "canvas" and handle drawing and mouse/keyboard events. You would probably use NSBezierPath inside your drawing methods to fill and outline the shapes. Depending on how complex the drawing code is, you might put everything in your NSView subclass, or make an NSCell subclass that would take some work out of the NSView. In either case you would want to define a data source protocol (or create bindings) to provide data to the NSView from the objects in your data model which represent UML items.
Core Animation would be worth considering too, although I would start with NSView at the beginning, at least for a simple prototype.
