How to set frame of view constrained exclusively in code - Xcode

I think I've read that when creating and constraining views exclusively in code it's acceptable to use [NSView -init] rather than the designated initialiser [NSView -initWithFrame:]. I assumed this was correct and began to construct my interface accordingly: all views fully constrained, but none with an explicit frame rectangle. Everything seemed to be working fine, and I guessed that under the hood Cocoa did all the necessary frame-setting calculations. I then attempted to add a tracking area to one of my views; it failed, and after debugging I realized it was because my view had neither a frame nor bounds.
Is the suggestion that [NSView -initWithFrame:] should be avoided when creating/constraining in code incorrect? Or is there a way to make Cocoa generate the view's frame based on its constraints? I've no problem adding constraints in code, but I'm keen to avoid the need to deduce all of my views' frame rectangles.
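For reference, a minimal sketch of the setup being described, with one common way around the zero-frame problem: create the tracking area from updateTrackingAreas, which AppKit calls once the view actually has geometry, or force a layout pass with layoutSubtreeIfNeeded when a frame is needed immediately. MyTrackingView and the specific constraints here are illustrative, not from the question.
#import <Cocoa/Cocoa.h>

@interface MyTrackingView : NSView
@end

@implementation MyTrackingView
- (void)updateTrackingAreas {
    [super updateTrackingAreas];
    // By the time AppKit calls this, layout has given the view real bounds.
    for (NSTrackingArea *area in [self.trackingAreas copy]) {
        [self removeTrackingArea:area];
    }
    NSTrackingArea *area =
        [[NSTrackingArea alloc] initWithRect:self.bounds
                                     options:(NSTrackingMouseEnteredAndExited |
                                              NSTrackingActiveInKeyWindow)
                                       owner:self
                                    userInfo:nil];
    [self addTrackingArea:area];
}
@end

// Building the hierarchy purely with constraints, no explicit frame:
MyTrackingView *view = [[MyTrackingView alloc] init];
view.translatesAutoresizingMaskIntoConstraints = NO;
[superview addSubview:view];
[NSLayoutConstraint activateConstraints:@[
    [view.leadingAnchor constraintEqualToAnchor:superview.leadingAnchor],
    [view.trailingAnchor constraintEqualToAnchor:superview.trailingAnchor],
    [view.topAnchor constraintEqualToAnchor:superview.topAnchor],
    [view.heightAnchor constraintEqualToConstant:100.0]
]];
// If a real frame is needed right away, before the next layout pass:
[superview layoutSubtreeIfNeeded];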

Related

What could prevent an NSView.layer from rendering with renderInContext:?

I've been working hard, searching the internet about this problem for three days, and I'm now running out of resources.
I'm currently porting an iOS app to macOS (deployment target 10.11). The problem:
I have a view hierarchy as below:
NSScrollView
  documentView
    grouping view
      tiling view one
        array of NSImageView (each one being a tile)
      tiling view two
        array of NSImageView (each one being a tile)
The two tiling views overlap completely; depending on the UI state, one may be hidden, or the second one may have its opacity set below 1.0 to blend the two tiled views.
Because of the opacity requirement, as well as performance, the views are CALayer-backed. This is done from IB, where the top NSScrollView is checked for Core Animation; from there, the whole view tree is (implicitly) layer-backed.
Scrolling, magnification, etc. work as expected.
Then I need to make an image out of the document view to generate content for an SCNMaterial (3D view).
On iOS, renderInContext: on the documentView's layer works as expected and allows an image to be created.
On AppKit the context stays transparent, and so does the image: a valid object, but as if filled with clearColor.
If documentView.canDrawSubviewsIntoLayer is set at creation, the view tree renders OK. This can't be the solution, since it prevents the opacity setting from working.
Even when one tiling view is hidden (no opacity compositing), rendering fails.
I read that some kinds of views are not rendered; I don't use them. There are no filters and no masks, aside from a default masksToBounds setting on the whole view tree. I don't know why or where it is set. I tried to unset it on all the views at creation, with no success: it gets set again somehow on the grouping view below the documentView. This may be the problem, but why is this property out of my control?
The alternative way to render a view tree, bitmapImageRepForCachingDisplayInRect: / cacheDisplayInRect:toBitmapImageRep:, behaves the same way: OK with canDrawSubviewsIntoLayer, broken otherwise. Apple's code examples for making a texture out of a view use one of these two methods.
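For reference, that alternative path looks roughly like this; a sketch only, with scrollView as an assumed name:
NSView *documentView = scrollView.documentView;
NSRect rect = documentView.bounds;
NSBitmapImageRep *rep = [documentView bitmapImageRepForCachingDisplayInRect:rect];
[documentView cacheDisplayInRect:rect toBitmapImageRep:rep];
NSImage *image = [[NSImage alloc] initWithSize:rect.size];
[image addRepresentation:rep];
// In the failing configuration this image comes out transparent,
// just like the renderInContext: route.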
There are plenty of posts, mainly on SO, complaining about CALayer's renderInContext: and offering code to custom-render a layer tree. Nevertheless, most are quite old, and by now there must be a simple, standard way to achieve this.
Edit: among other attempts, I tried setting wantsLayer on each view, with no success.
Well, as often happens when you post a request for help, you finally find the answer yourself… Here it is:
Given the view tree as listed in the question, I managed to render it by setting canDrawSubviewsIntoLayer on each tiling view. This way, the opacity compositing between layers works, AND the views are rendered.
I post this as an answer, because it solves the problem.
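For concreteness, here is a minimal sketch of the fix as I understand it; tilingViewOne, tilingViewTwo and scrollView are assumed names:
// Set on each tiling view so its NSImageView tiles are drawn into its layer.
tilingViewOne.canDrawSubviewsIntoLayer = YES;
tilingViewTwo.canDrawSubviewsIntoLayer = YES;

// Later, render the document view's layer into an image:
NSView *documentView = scrollView.documentView;
NSRect rect = documentView.bounds;
NSImage *image = [[NSImage alloc] initWithSize:rect.size];
[image lockFocus];
[documentView.layer renderInContext:[NSGraphicsContext currentContext].CGContext];
[image unlockFocus];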
As to WHY this works, here is my guess, but this is not an authoritative answer: each NSImageView tile (a subview of a tiling view) is positioned by setting its frame origin. This is not a transform on the layer but a position of the frame in the coordinates of the tiling view. This is a difference from the iOS version of the code, where the tiles are positioned by a transform. I'm going to test further and see whether using a transform rather than a frame origin makes a difference to renderInContext:.
Edit: after more testing, it appears that the iOS version has a transform AND an offset on the tile.
So the only clue is that the layer system gets lost when rendering in context on OS X when some subviews have an offset?
Summary:
The key to the solution is finding the right view on which to set canDrawSubviewsIntoLayer.

Cocoa - efficient view drawing

In my program, I'm implementing a custom view, a bit like a table view. To do so, I have subclassed NSView. Now my question is: what's the most efficient way to draw all the table view cells? Should I just use NSViews, or possibly something else, like CALayers?
Thanks!
P.S.: This is on Mac OS X, not on iOS.
Based on your question and comment, I propose two alternatives:
NSCollectionView / NSCollectionViewItem - This is useful only if all of your "cells" (instances of your prototype view) are the same dimensions. That is, you can't have one that's wider or taller than the others (or narrower or shorter). This is highly efficient and a ready part of AppKit. Even with a single column and n rows, it works like a charm.
Roll Your Own - This is harder but gives you flexibility. Much like NSCollectionView / NSCollectionViewItem, you'd have a view that serves as the container, and you'd ideally have a view you reuse to draw the various "items" it's showing. Using the same view to set its represented object and "stamp" it into place (pose it and draw it), you can roll through your entire collection in one go, then use that same view as the live, active view for whatever selected, focused item you have.
Even faster: roll through and cache the images and sizes of each item with your reusable item view, and draw everything from the cache except the selected item (which would use a live, real view posed in its proper location, updating the cached image of itself as its contents change, for when it's not selected). Faster still: one live view and one "for caching" view, drawing only the computed rects of the cached images that intersect the visible rect (sans the "live" / selected view).
Note: the caching will have to be re-done each time the container's frame's width changes, since presumably shrinking horizontally means all items grow vertically. If you can, take advantage of NSOperation / NSOperationQueue to handle the caching in the background, only flagging for re-display when all cached items 0 - n (where n is the highest-indexed item intersecting the visible rect) are available.
I use something very close to the latter in one of my own applications, where the "item" is an entry with varying-length text. I don't employ all the tactics I mentioned in my own solution, but most of them, and the performance increase is very satisfying. :-)
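As a much-reduced sketch of the caching idea described above (ItemListView and itemImages are illustrative names, and the step that actually renders each item into its cached image is left out):
#import <Cocoa/Cocoa.h>

@interface ItemListView : NSView
@property (nonatomic, copy) NSArray<NSImage *> *itemImages; // one cached render per item
@end

@implementation ItemListView

- (void)drawRect:(NSRect)dirtyRect {
    // Draw only the cached stamps whose rects intersect the dirty rect.
    CGFloat y = 0;
    for (NSImage *image in self.itemImages) {
        NSRect itemRect = NSMakeRect(0, y, NSWidth(self.bounds), image.size.height);
        if (NSIntersectsRect(itemRect, dirtyRect)) {
            [image drawInRect:itemRect];
        }
        y += NSHeight(itemRect);
    }
}

// Item heights depend on the width, so the cache must be rebuilt when it changes.
- (void)setFrameSize:(NSSize)newSize {
    BOOL widthChanged = (NSWidth(self.frame) != newSize.width);
    [super setFrameSize:newSize];
    if (widthChanged) {
        // Rebuild itemImages here (ideally on an NSOperationQueue), then:
        [self setNeedsDisplay:YES];
    }
}

@end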
Hope this helps.

Automatic NSView resizing

I'm doing something with Cocoa which I think is a bit complicated for a beginner like me. I tried a few things, but I admit I need some theory first, because I would like to understand exactly what these concepts mean.
I see that every NSView, and every class that subclasses it, has one thing called frame and one called bounds. Both have a size (with width and height) and an origin.
I have an NSView with an NSTableView inside of it.
I have the size of a row from the table view, and I would like to set the height of both the NSView and the NSTableView equal to rows * rowSize, in such a way that the NSView and its subviews are automatically resized when an object is added to or removed from the table view's data source.
I made some experiments, but I ended up a bit confused about frame, bounds, sizes, and so on. I don't know what I should change or how.
Can you please give me a hint about what bounds and frame basically are, and how I can achieve that magic resizing?
Thank you in advance for your replies. Best regards,
—Albé
The difference between frame and bounds is covered very nicely in the View Programming Guide (under View Geometry).
You'll also want to peruse the NSView Class Reference, where you'll find some handy notifications, such as NSViewFrameDidChangeNotification and handy methods such as setPostsFrameChangedNotifications:.
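A rough sketch of how those pieces might fit together; containerView and tableView are assumed names, and this is only one possible approach:
// Ask the table view to post frame-change notifications, then track them.
tableView.postsFrameChangedNotifications = YES;
id token = [[NSNotificationCenter defaultCenter]
    addObserverForName:NSViewFrameDidChangeNotification
                object:tableView
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
    // Keep the container as tall as the table view.
    NSRect frame = containerView.frame;
    frame.size.height = NSHeight(tableView.frame);
    containerView.frame = frame;
}];
// Keep `token` around and remove the observer when you're done with it.

// Alternatively, recompute the height from the row count whenever the
// data source changes:
CGFloat rowSize = tableView.rowHeight + tableView.intercellSpacing.height;
CGFloat height = [tableView numberOfRows] * rowSize;
[containerView setFrameSize:NSMakeSize(NSWidth(containerView.frame), height)];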

NSView leaves artifacts on another NSView when the first is moved across the second

I have an NSView subclass that can be dragged around in its superview. I move the views by calling NSView's setFrameOrigin and setFrameRotation methods in my mouseDragged event handler. The views are both moved and rotated with each call.
I have multiple instances of these views contained by a single superview. The problem I'm having is that, as one view is dragged over another, it leaves artifacts behind on the view it's eclipsing. I recorded a short video of this in action. Unfortunately, due to the video compression the artifacts aren't very visible.
I strongly suspect that this is related to the simultaneous translation and rotation. Quartz Debug reveals that a rectangle of the occluding (or occluded) view is updated as another view is dragged across it (video here); somehow this rectangle is getting miscalculated by the drawing engine, so part of the view that should be redrawn isn't.
The kicker is I have no idea how to fix this. I can't find any way to manually specify the update rect in the docs, nor am I sure that's what needs to happen. Any ideas? Thanks!
You might also consider using CALayers instead of views. Unlike views, layers are intended to be stacked with their siblings.
For a possible least-effort solution, try making the views layer-backed; it may or may not solve this problem, but it's worth a try.
Views aren't really designed to be stacked in an interactive fashion. Can be done, but edge cases abound.
Generally, for this kind of thing you would use a Cell-like infrastructure if you want to do in-view dragging (see the Sketch example), and you would use the drag-and-drop infrastructure if you want to drag between views or windows (or apps).
If you really want to drag a transformed view over the top, you'll need to invalidate a rectangle of the view underneath the view being dragged. The rectangle will need to be bigger by a few pixels than the total area (unrotated/untransformed) that is obscured by the view being dragged. The artifacts are, effectively, caused by rounding error; diagonal lines are just an estimate on a raster drawing system.
See the method:
- (void)setNeedsDisplayInRect:(NSRect)invalidRect;
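As a rough illustration of that suggestion; draggedView and viewUnderneath are assumed names, and the amount of padding is arbitrary:
// Footprint of the dragged view in the underlying view's coordinates,
// padded a few pixels to absorb rounding from the rotation.
NSRect dirty = [viewUnderneath convertRect:draggedView.frame
                                  fromView:draggedView.superview];
dirty = NSInsetRect(dirty, -4.0, -4.0);
[viewUnderneath setNeedsDisplayInRect:dirty];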

How does CATransition work?

CATransition is quite unusual. Consider the following code.
CATransition *trans = [CATransition animation];
trans.duration = 0.5;
trans.type = kCATransitionFade;
[self.holdingView.layer addAnimation:trans forKey:nil];
self.loadingView.hidden = YES;
self.displayView.hidden = NO;
Notice that nowhere did I tell the transition that I wanted to display the displayView rather than loadingView, so the views must somehow access the transition themselves. Can anyone explain in more detail how this works?
When you add the transition as an animation, an implicit CATransaction is begun. From that point on, all modifications to layer properties are going to be animated rather than immediately applied. The way the CATransition performs this animation is to take a snapshot of the view before the layer properties are changed, and a snapshot of what the view will look like after the layer properties are changed. It then uses a filter (on the Mac this is Core Image, but on the iPhone I'm guessing it's just hard-coded math) to interpolate between those two images over time.
This is a key feature of Core Animation. Your drawing logic doesn't generally need to deal with the animation. You're given a graphics context, you draw into it, you're done. The system handles compositing that with other images over time (or rotating it in space, or whatever). So in the case of changing the hidden state, the initial-state fully composited image is blended with the final-state composited image. Very fast on a GPU, and it doesn't really matter what change you made to the view.
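One way to make that implicit transaction a bit more visible is to wrap the same code in an explicit one; this is a sketch of the idea described above, not something the original question requires:
[CATransaction begin];

CATransition *trans = [CATransition animation];
trans.duration = 0.5;
trans.type = kCATransitionFade;
[self.holdingView.layer addAnimation:trans forKey:kCATransition];

// These changes define the "after" snapshot the transition fades to.
self.loadingView.hidden = YES;
self.displayView.hidden = NO;

[CATransaction commit];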
