The internals of NSScrollView (Cocoa)

When you gently scroll an NSScrollView, the rectangle that Cocoa marks as dirty and passes to drawRect: is often trivially small (perhaps as little as one or two pixels in height, for a vertical scroll view). The framework clearly already knows what the majority of the content looks like (because it's on screen) and where to redraw it (just offset by the scroll), so all it needs the developer to do is fill in the small rectangle that's about to appear. What's happening behind the scenes to make this possible?
For example, if I wanted to implement my own super-smooth scroll view as a learning project, what kind of data would I need to record about the document view to let me simply re-position, rather than redraw, the majority of it? Is Cocoa constantly generating images on background threads that it draws on screen when required, or is there something a bit more subtle going on?

There's lots going on. If you haven't already read it, you should read the Scroll View Programming Guide for Cocoa.
The copying of the existing rendering is accomplished by -[NSView scrollRect:by:]. It's only done if the NSClipView that's part of the NSScrollView architecture is set to copy-on-scroll (the copiesOnScroll property).
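To get a feel for what that copy buys you, here is a minimal sketch of a hand-rolled clip view that blits its existing rendering and invalidates only the newly exposed strip. MyClipView and scrollByDelta: are hypothetical names, the document-offset bookkeeping is omitted, and only vertical scrolling is handled:

#import <Cocoa/Cocoa.h>

// Hypothetical clip view illustrating the copy-on-scroll idea
// (vertical scrolling only, for brevity).
@interface MyClipView : NSView
@end

@implementation MyClipView

- (void)scrollByDelta:(NSSize)delta {
    // Blit the already-rendered pixels to their new position...
    [self scrollRect:self.bounds by:delta];

    // ...then mark only the thin strip that was just exposed as dirty,
    // so drawRect: is asked for a couple of rows of pixels, not the world.
    NSRect exposed = self.bounds;
    if (delta.height > 0) {
        // Content moved toward +y; a strip is exposed at the bottom edge.
        exposed.size.height = delta.height;
    } else {
        // Content moved toward -y; a strip is exposed at the top edge.
        exposed.origin.y = NSMaxY(self.bounds) + delta.height;
        exposed.size.height = -delta.height;
    }
    [self setNeedsDisplayInRect:exposed];
}

@end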
Also, there's "responsive scrolling". Since 10.9, if certain conditions are met, AppKit will speculatively render the document view beyond the visible rect so that, when the user scrolls, it can show the scrolled-in area without asking the document view to render.
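On the document-view side, you can influence that speculative rendering by overriding -prepareContentInRect: (available since 10.9). A sketch, assuming a hypothetical MyDocumentView and an arbitrary 256-point tile size:

#import <Cocoa/Cocoa.h>

@interface MyDocumentView : NSView  // hypothetical document view
@end

@implementation MyDocumentView

// AppKit calls this with the region it would like pre-rendered (the
// visible rect plus some overdraw). Rounding the rect out to tile
// boundaries means the speculative rendering happens in whole tiles.
- (void)prepareContentInRect:(NSRect)rect {
    const CGFloat tile = 256.0;
    NSRect rounded;
    rounded.origin.x = floor(NSMinX(rect) / tile) * tile;
    rounded.origin.y = floor(NSMinY(rect) / tile) * tile;
    rounded.size.width  = ceil(NSMaxX(rect) / tile) * tile - NSMinX(rounded);
    rounded.size.height = ceil(NSMaxY(rect) / tile) * tile - NSMinY(rounded);
    [super prepareContentInRect:NSIntersectionRect(rounded, self.bounds)];
}

@end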
You can set your views to be layer-backed. In that case, they are typically rendered to textures and composited by the window server. This means they don't necessarily have to re-draw to render in a new position. It's quite likely that responsive scrolling uses layers behind the scenes to hold the pre-rendered content.

Related

What could prevent an NSView.layer from rendering in renderInContext:?

I've been working hard on this problem, searching the internet for three days, and I'm now running out of resources.
I'm currently porting an iOS app to macOS (deployment target 10.11). The problem:
I have a view hierarchy as below:
NSScrollView
    documentView
        grouping view
            tiling view one
                array of NSImageViews (each one being a tile)
            tiling view two
                array of NSImageViews (each one being a tile)
The two tiling views overlap completely; depending on the UI, one may be hidden, or the second one has its opacity set below 1.0 to blend the two tiled views.
Because of the opacity requirement, as well as performance, the views are CALayer-backed. This is done from IB, where the top NSScrollView has Core Animation checked; from there, the whole view tree is (implicitly) layer-backed.
Everything works as expected: scroll, magnify, etc.
Then I need to make an image out of the document view to use as the contents of an SCNMaterial (3D view).
On iOS, renderInContext: on the document view's layer works as expected and allows an image to be created.
On AppKit, the context stays transparent, and so does the image: it's a valid object, but it looks as if it were filled with clearColor.
If documentView.canDrawSubviewsIntoLayer is set at creation, the view tree renders OK. This can't be the solution, though, since it prevents the opacity setting from working.
Even when one tiling view is hidden (so there's no opacity compositing), rendering fails.
I read that some kinds of views are not rendered, but I don't use them. There are no filters and no masks, apart from a default masksToBounds setting on the whole view tree. I don't know why or where it is set; I tried to unset it on all the views at creation, with no success. It gets set again somehow on the grouping view below the documentView. This may be the problem, but why is this property out of my control?
The alternative way to render a view tree, bitmapImageRepForCachingDisplayInRect: / cacheDisplayInRect:toBitmapImageRep:, behaves the same way: OK with canDrawSubviewsIntoLayer, broken otherwise. Apple's code examples for making a texture out of a view use one of these two methods.
There are plenty of posts, mainly on SO, complaining about CALayer's renderInContext:, along with code to custom-render a layer tree. Nevertheless, most are quite old, and by now there must be a simple, standard way to achieve this.
Edit: among other attempts, I tried setting wantsLayer on each view, with no success.
Well, as often happens when you post a request for help, you finally find the answer yourself… Here it is:
Given the view tree listed in the question, I managed to render it by setting canDrawSubviewsIntoLayer on each tiling view. This way, the opacity compositing between layers works, AND the views are rendered.
I post this as an answer because it solves the problem.
As for WHY this works, here is my guess, though this is not an authoritative answer: each NSImageView tile (a subview of a tiling view) has its origin set through its frame. This is not a transform on the layer, but a position of the frame in the tiling view's coordinates. This differs from the iOS version of the code, where the tiles are positioned by a transform. I'm going to test further and see whether using a transform rather than a frame origin makes a difference to renderInContext:.
Edit: after more testing, it appears that the iOS version has a transform AND an offset on the tiles.
So the only clue is that the layer system gets lost when rendering in context on OS X when some subviews have an offset?
Summary:
The key point of the solution is finding the right views on which to set canDrawSubviewsIntoLayer.
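For reference, a sketch of what that fix and the capture step look like together. scrollView, tilingViewOne, and tilingViewTwo stand in for the question's hypothetical hierarchy, and canDrawSubviewsIntoLayer is set before the views become layer-backed:

// Flatten each tiling view's NSImageView tiles into that view's own layer.
// The opacity compositing *between* the two tiling views keeps working,
// because only their subviews are flattened, not the views themselves.
tilingViewOne.canDrawSubviewsIntoLayer = YES;
tilingViewTwo.canDrawSubviewsIntoLayer = YES;

// Capture the document view to an image for the SCNMaterial contents.
NSView *documentView = scrollView.documentView;
NSBitmapImageRep *rep =
    [documentView bitmapImageRepForCachingDisplayInRect:documentView.bounds];
[documentView cacheDisplayInRect:documentView.bounds toBitmapImageRep:rep];

NSImage *snapshot = [[NSImage alloc] initWithSize:documentView.bounds.size];
[snapshot addRepresentation:rep];
// material.diffuse.contents = snapshot;  // hand the image to SceneKit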

Hiding a MovieClip, rendering performance

I'm developing a rendering engine for a game I am currently building.
I have a main camera (a rectangle) that determines what needs to be rendered (things within its boundaries).
I am using a bitmap rendering method for the background, and that all works fine,
but for the character I am using a MovieClip over the top.
When the character goes out of the camera's view, is it 100% necessary to set visible = false?
At the moment the game is running at 30 FPS (as intended) and everything is sweet; I just wanted to ask out of curiosity.
Is Flash clever enough to not bother with MovieClips outside of the scene boundaries?
Thanks in advance,
Rory
According to http://help.adobe.com/en_US/as3/dev/WS5b3ccc516d4fbf351e63e3d118a9b90204-7e3e.html, Flash won't render an object that is outside the Stage boundaries:
Display list
The hierarchy of display objects that will be rendered as visible screen content by Flash Player and AIR. The Stage is the root of the display list, and all the display objects that are attached to the Stage or one of its children form the display list (even if the object isn't actually rendered, for example if it's outside the boundaries of the Stage).
In my experience, display objects added to the stage cause a performance hit even if they are not rendered.
Setting visible to false causes a much lower performance hit, but still a small one.
Removing unnecessary display objects from the display list is a documented performance tip from Adobe as well.
Of course, if you only have a few display objects it might not be worth the effort, but if we're talking about large numbers of display objects, I strongly recommend removing them from the display list.

"CoreAnimation: surface is too large"

I'm creating a custom (layer-hosting) document view, which is contained within a scroll view. The root layer has two sublayers of the same size: one for the view's content, and one for anything that needs to hover over the main content. I set the frame to 2500x2500 and added a number of cells to the content layer, which was fine. On adding a translucent clone of one of the cells' layers to the overlay layer, the whole view clears briefly, and I get the log message 'core animation: surface 2502x2502 is too large'. This happens between adding the new layer and the next cycle of the event loop, so I guess it occurs when Core Animation renders the new layer.
I knew that a layer's content size is related to the OpenGL texture size limit, but I didn't think its frame mattered. I'm not drawing anything into these layers, I'm not setting any style properties, and I remove offscreen sublayers. All I'm really using them for is to handle the geometry of the document view. Is this an appropriate use of CA layers? If not, are there better ways of handling a large Core Animation-based document view?
Edit:
I've had this problem again, this time caused by an implicit animation on adding sublayers to the large parent. So, in addition to what is suggested below, that's one thing to check if you run into this.
I would check to make sure that you're not setting any properties on your 2500x2500 layers which could require offscreen rendering. (This causes the layer to try to create a full-size buffer off-screen and render its contents into that buffer, rather than rendering the contents to the screen directly.)
For example, setting opacity, masksToBounds, mask, shouldRasterize, etc., can cause offscreen rendering. You can see whether offscreen rendering is happening with the Core Animation instrument. (There's a checkbox to highlight offscreen-rendered areas.)
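Given the edit above about implicit animations, one low-cost thing to try is wrapping sublayer mutations on the big layer in a transaction with actions disabled, so Core Animation doesn't try to animate (and buffer) the change. A sketch; the helper and layer names are hypothetical:

#import <QuartzCore/QuartzCore.h>

// Hypothetical helper: add a cell layer to the big content layer without
// triggering the implicit add/position animations that can otherwise
// force Core Animation to build a huge offscreen surface.
static void AddCellLayer(CALayer *contentLayer, CALayer *cellLayer,
                         CGPoint position) {
    [CATransaction begin];
    [CATransaction setDisableActions:YES];
    [contentLayer addSublayer:cellLayer];
    cellLayer.position = position;
    [CATransaction commit];
}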

CALayer flickering when adding a foreground layer to IKImageBrowserView items with garbage collection on

I'm trying to implement a technique similar to the one in the ImageBrowserViewAppearance sample code from Apple (located here: http://developer.apple.com/library/mac/#samplecode/ImageBrowserViewAppearance/Introduction/Intro.html ), where CALayers are generated on top of the items in the IKImageBrowserView to customize the appearances of the objects in the image browser.
However, I'm getting a weird problem when I turn on garbage collection, and I can reproduce it in the Apple sample code. Simply turn on Garbage Collection in the target, and build and launch the ImageBrowserViewAppearance sample app. Then, add some photos to the image browser using the "Add Photos..." button.
Now, click on an empty portion of the IKImageBrowserView, and click and drag to start selecting multiple items in the browser view. As you drag the selection box around, you should notice that the pin and gloss overlays for some of the items sometimes flicker and briefly appear in the bottom-left corner of the IKImageBrowserView. All of the CALayers seem to do this occasionally; I've seen the white surrounding slide area flicker down into the bottom-left corner as well.
When I mimic this technique in my own code, I can (not surprisingly) also reproduce the badge flickering. However, the problem disappears when garbage collection is off.
Anybody have a clue what could be going wrong here? I'd like to use garbage collection in my app in conjunction with this technique, but the flickering is kind of annoying.
I bookmarked this a while back, but Apple has since changed the URL and the text. Fortunately, I quoted it when I bookmarked it:
The Core Graphics APIs (Quartz 2D) see an approximately 25% reduction in drawing performance for applications compiled to use garbage collection.
That "25% reduction in drawing performance" text has been rewritten into a "slight overhead in code execution" and that was for 10.5. Perhaps Apple fixed it for 10.6. And you're talking Core Animation, not Core Graphics.
Still, Core Animation eventually has to talk to Core Graphics, and perhaps that performance issue hasn't gone away, and you're being bitten by it.
I fooled around with this a bit and can confirm I get the same behavior running the project with GC turned on. In fact, if you're patient enough and slowly change the selection one image at a time using the arrow keys, eventually it'll trigger the behavior and you can see the layers from one image in the view are displayed in the lower left corner instead of on top of the image. I haven't been able to find any sort of pattern as to when it happens, or any relation between which image is selected and which image has its layers missing. I'm assuming that for whatever reason, those layers are getting their frame origin set to {0, 0}, but heck if I know why.

NSView leaves artifacts on another NSView when the first is moved across the second

I have an NSView subclass that can be dragged around in its superview. I move the views by calling NSView's setFrameOrigin and setFrameRotation methods in my mouseDragged event handler. The views are both moved and rotated with each call.
I have multiple instances of these views contained by a single superview. The problem I'm having is that, as one view is dragged over another, it leaves artifacts behind on the view it's eclipsing. I recorded a short video of this in action. Unfortunately, due to the video compression the artifacts aren't very visible.
I strongly suspect that this is related to the simultaneous translation and rotation. Quartz Debug reveals that a rectangle of the occluding (or occluded) view is updated as another view is dragged across it (video here); somehow this rectangle is getting miscalculated by the drawing engine, so part of the view that should be redrawn isn't.
The kicker is I have no idea how to fix this. I can't find any way to manually specify the update rect in the docs, nor am I sure that's what needs to happen. Any ideas? Thanks!
You might also consider using CALayers instead of views. Unlike views, layers are intended to be stacked with their siblings.
For a possible least-effort solution, try making the views layer-backed; it may or may not solve this problem, but it's worth a try.
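A minimal sketch of that least-effort option; containerView is a hypothetical superview holding the draggable views:

// Layer-backed views are composited by the window server, so moving one
// across another doesn't necessarily force the obscured view to redraw.
for (NSView *card in containerView.subviews) {
    card.wantsLayer = YES;
}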
Views aren't really designed to be stacked in an interactive fashion. It can be done, but edge cases abound.
Generally, for this kind of thing you would use a Cell-like infrastructure if you want to do in-view dragging (see the Sketch example), and you would use the drag-and-drop infrastructure if you want to drag between views or windows (or apps).
If you really want to drag a transformed view over the top, you'll need to invalidate a rectangle of the view underneath the view being dragged. The rectangle will need to be a few pixels bigger than the total (unrotated/untransformed) area obscured by the view being dragged. The artifacts are effectively caused by rounding error; diagonal lines are just an estimate on a raster drawing system.
See the method:
- (void)setNeedsDisplayInRect:(NSRect)invalidRect;
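A sketch of that workaround in the drag handler; draggedView and oldFrame are hypothetical, and the 4-point padding is an arbitrary fudge factor for the rounding error described above:

// After calling setFrameOrigin:/setFrameRotation: on the dragged view,
// invalidate a slightly padded union of its old and new frames on the
// superview, so the drawing engine repaints everything the drag touched.
NSRect dirty = NSUnionRect(oldFrame, draggedView.frame);
dirty = NSInsetRect(dirty, -4.0, -4.0);  // pad a few points
[draggedView.superview setNeedsDisplayInRect:dirty];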
