What is the correct way to optimise scrolling performance of NSImageView and NSView - macos

I need to improve the scrolling performance of a view for annotating on top of an image.
Currently I have the following:
- annotationView (custom NSView)
- imageView (NSImageView)
- contentView (custom NSView)
- clipView (NSClipView)
- scrollView (NSScrollView)
The images are quite large PDFs and PNGs, and scrolling is poor unless I make the imageView layer-backed, which I am currently just doing in Interface Builder. Scrolling is then pretty smooth.
However, the PNG images are then drawn over everything on top of the imageView, whereas the PDF images correctly remain in the background.
Why is this and how can I fix that?
To get even better performance I would also like to make the annotationView layer-backed as well, but if I do that the entire view becomes black, with the exception of the annotations being drawn on the annotationView. How can I make this layer-backed view transparent but still allow shapes to be drawn on it? It seems I can make it transparent, but then everything becomes transparent, including the drawn shapes.
Is there a better way to achieve this? The annotations are simply shapes and text that need to be placed at specific positions over the image which I am currently just drawing in response to mouse positions.

The short answer is to use layer-backed views and to draw with CGContext rather than the simpler NSView drawing APIs.
By default macOS does not use GPU-based graphics, unlike iOS: its high-level drawing API renders on the CPU rather than the GPU.
So in my case I switched everything to layer-backed NSViews. You can set this in Interface Builder, or simply set wantsLayer = true in the NSView's initialisation code (init()).
Avoid using NSImageView; instead use a layer-backed view and set layer.contents = NSImage. You may also have to set the layer's background colour, or you might get some areas of the background not being cleared when scrolling.
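A minimal sketch of that setup might look like this (ImageHostView is my own name, and the background colour choice is an assumption):

```swift
import AppKit

// A layer-backed view that displays an image via layer.contents,
// avoiding NSImageView entirely.
final class ImageHostView: NSView {
    init(image: NSImage) {
        super.init(frame: NSRect(origin: .zero, size: image.size))
        wantsLayer = true        // back this view with a CALayer
        layer?.contents = image  // AppKit lets a CALayer take an NSImage directly
        // Give the layer a background colour so stale areas are cleared
        // while scrolling.
        layer?.backgroundColor = NSColor.windowBackgroundColor.cgColor
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```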
This works for me: I have big PDF images (building layouts) in the background and a layer-backed view on top of that for placing annotations.
Scrolling around is now buttery smooth. Images seem to load instantly.
For the most part it's pretty easy; just set it up right from the start and save yourself a lot of headaches.
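For the annotation overlay, a transparent, layer-backed view that draws its shapes with CGContext might look like this (a minimal sketch; AnnotationView and its simple rectangle model are my own names):

```swift
import AppKit

final class AnnotationView: NSView {
    var shapes: [NSRect] = []  // placeholder annotation model

    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        wantsLayer = true      // layer-backed for scrolling performance
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // A non-opaque view keeps the backing layer transparent outside the
    // drawn shapes, so the image underneath shows through.
    override var isOpaque: Bool { false }

    override func draw(_ dirtyRect: NSRect) {
        guard let ctx = NSGraphicsContext.current?.cgContext else { return }
        ctx.setStrokeColor(NSColor.systemYellow.cgColor)
        ctx.setLineWidth(2)
        for shape in shapes {
            ctx.stroke(shape)  // draw each annotation rectangle
        }
    }
}
```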

Related

What could prevent an NSView.layer to renderInContext:?

I've been working hard on this problem, searching the internet for 3 days, and I'm now running out of resources.
I am currently porting an iOS app to macOS (deployment target 10.11). The problem:
I have a view hierarchy as below:
- NSScrollView
  - documentView
    - grouping view
      - tiling view one
        - array of NSImageView (each one being a tile)
      - tiling view two
        - array of NSImageView (each one being a tile)
The two tiling views overlap completely; depending on the UI, one may be hidden, or the second may have its opacity set below 1.0 to blend the two tiled views.
Because of the opacity requirement, as well as for performance, the views are CALayer-backed. This is done from IB, where the top NSScrollView is checked for Core Animation; thereby the whole view tree is (implicitly) layer-backed.
Everything works as expected: scrolling, magnification, etc.
Then I need to make an image out of the document view to generate SCNMaterial content (for a 3D view).
On iOS, the documentView's renderInContext: works as expected and allows an image to be created.
On AppKit the context stays transparent, and so does the image: it is a valid object, but as if filled with clearColor.
If documentView.canDrawSubviewsIntoLayer is set at creation, the view tree renders OK. This can't be the solution, since it prevents the opacity setting from working.
Even when one tiling view is hidden (no opacity compositing), rendering fails.
I read that some kinds of views are not rendered, but I don't use them. There are no filters and no masks, besides a default masksToBounds setting on the whole view tree. I don't know why or where it is set; I tried to unset it on all the views at creation, with no success. It is set again somehow, on the grouping view below the documentView. This may be the problem, but why is this property out of my control?
The alternative way to get the view tree rendered, bitmapImageRepForCachingDisplayInRect: / cacheDisplayInRect:toBitmapImageRep:, behaves the same way: OK with canDrawSubviewsIntoLayer, broken otherwise. Apple's code examples for making a texture out of a view use one of these two methods.
There are plenty of posts, mainly on SO, complaining about CALayer's renderInContext: and offering code to custom-render a layer tree. Nevertheless, most are quite old, and by now there must be a simple, standard way to achieve this.
Edit: among other attempts, I tried setting wantsLayer on each view, with no success.
Well, as often happens when you post a request for help, you finally find the answer by yourself… Here it is:
Given the view tree as listed in the question, I managed to render it by setting canDrawSubviewsIntoLayer on each tiling view. This way, the opacity compositing between layers works, AND the views are rendered.
I post this as an answer, because it solves the problem.
As to WHY this works, here is my guess, but this is not an authoritative answer: each NSImageView tile, a subview of its tiling view, is positioned by its frame origin in the coordinates of the tiling view, not by a transform on its layer. This is a difference from the iOS version of the code, where the tiles are positioned by a transform. I'm going to test further and see whether using a transform rather than a frame origin makes a difference to renderInContext:.
Edit: after more testing, it appears that the iOS version has a transform AND an offset on each tile.
So the only clue is that the layer system gets lost when rendering in context on OS X when some subviews have a frame offset?
Summary:
The key point of the solution is to find the right layer on which to set canDrawSubviewsIntoLayer.
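In code, the fix plus the rendering step might look like this (a minimal sketch; tilingViewOne, tilingViewTwo and documentView are the names from the question, and the SCNMaterial hookup is assumed):

```swift
import AppKit

// Flatten each tiling view's subviews (the NSImageView tiles) into that
// view's own backing layer. Set this at view creation.
tilingViewOne.canDrawSubviewsIntoLayer = true
tilingViewTwo.canDrawSubviewsIntoLayer = true

// Render the whole document view into a bitmap; with the tiling views
// flattened, the cacheDisplay path produces a non-transparent image.
let bounds = documentView.bounds
if let rep = documentView.bitmapImageRepForCachingDisplay(in: bounds) {
    documentView.cacheDisplay(in: bounds, to: rep)
    let image = NSImage(size: bounds.size)
    image.addRepresentation(rep)
    // `image` can now be used as the contents of an SCNMaterial property.
}
```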

Xcode GLKit printing Text on GLKView without using UIImages

I have an app, a small game using OpenGL ES with GLKit.
Now I'm wondering how it works when I want to draw text on my screen (if it is possible). How can I do it?
I draw all of my game objects using images (wrapped in some kind of sprite). It's possible to scale, move, and rotate them, and everything works fine.
But finding out how to print text on that GLKView gets me deep into problems. ^^
I don't want to use UIImages, because I also don't know how to present UIImages on a GLKView.
There are a number of ways to do what you want:
1) Have an image with all the text glyphs you need in it. For example, if your application is in English, you'd have the 26 uppercase and 26 lowercase letters in the image. Upload that texture to the GPU and use the proper texture coordinates, or glTexSubImage2D(), to pull out the glyphs you need. (It's not clear to me if this is what you meant by not wanting a UIImage. It doesn't have to be a UIImage, though that's probably easiest.)
2) Every time you need to display text, draw it on the CPU on the fly, and upload the entire word, phrase, or sentence as a texture. You could create a CGBitmapContext and use Core Graphics to draw text to it. Then upload it using glTexImage2D().
3) Get the individual glyphs out of the fonts and draw directly using the bezier curves that make up the glyphs. This allows for 3D extrusion, too. However, this option is the most time-consuming to code and probably the least performant. It also involves dealing with the many small problems that fonts have (like degenerate segments and incorrect winding orders). If you want to go down this path, I think Core Text can help.
There are at least two clean ways to do this, depending on your requirements.
While documentation advises against compositing over a CAEAGLLayer (GLKView), it works quite well, at least in recent iOS versions, when transparent content is layered on top of the CAEAGLLayer. For example, try dropping a UITextView, with opaque set to false and a clear background color, on top of a GLKView in your Storyboard in Interface Builder in the Apple GLKit template or your app. In my test on an iPhone 5, frame rendering time remained around 1ms, even while scrolling in the text view. If your text needs are static, or you don't want the user to interact with the text, use CATextLayer as a child layer of your EAGLLayer instead of a view.
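For the static-text case, the CATextLayer variant might look like this (a minimal sketch; glkView stands in for your existing GLKView):

```swift
import UIKit
import GLKit

// Overlay static text on a GLKView with a CATextLayer sublayer.
// `glkView` is assumed to be your existing GLKView instance.
let textLayer = CATextLayer()
textLayer.string = "Score: 0"
textLayer.font = CTFontCreateWithName("HelveticaNeue-Bold" as CFString, 18, nil)
textLayer.fontSize = 18
textLayer.foregroundColor = UIColor.white.cgColor
textLayer.contentsScale = UIScreen.main.scale  // keep the text crisp on Retina
textLayer.frame = CGRect(x: 16, y: 16, width: 200, height: 24)
glkView.layer.addSublayer(textLayer)
```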
The second approach is to render the text into a texture. You can then composite the text onto your view by disabling the depth buffer and rendering the texture on a full screen rectangle. Look at UIGraphicsBeginImageContextWithOptions to see how to render to an offscreen image with Quartz. UIGraphicsGetImageFromCurrentImageContext allows you to retrieve the UIImage to use as a texture.
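A minimal sketch of the render-to-texture approach, using those two calls plus GLKTextureLoader (makeTextTexture is my own name, and a current EAGL context is assumed):

```swift
import UIKit
import GLKit

// Draw a string into an offscreen bitmap with UIKit/Core Graphics, then
// upload it as an OpenGL ES texture. Requires a current EAGLContext.
func makeTextTexture(_ text: String,
                     font: UIFont = .systemFont(ofSize: 32)) throws -> GLKTextureInfo {
    let attributes: [NSAttributedString.Key: Any] = [
        .font: font,
        .foregroundColor: UIColor.white
    ]
    let size = (text as NSString).size(withAttributes: attributes)

    UIGraphicsBeginImageContextWithOptions(size, false, 0)  // offscreen image context
    defer { UIGraphicsEndImageContext() }
    (text as NSString).draw(at: .zero, withAttributes: attributes)

    guard let cgImage = UIGraphicsGetImageFromCurrentImageContext()?.cgImage else {
        throw NSError(domain: "TextTexture", code: -1)
    }
    // Upload the bitmap to the GPU; bind the returned texture name when
    // drawing the full-screen rectangle that shows the text.
    return try GLKTextureLoader.texture(with: cgImage, options: nil)
}
```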

"CoreAnimation: surface is too large"

I'm creating a custom (layer-hosting) document view, which is contained within a scroll view. The root layer has two sublayers of the same size: one for the view's content, and one for anything that needs to hover over the main content. I set the frame to 2500x2500 and added a number of cells to the content layer, which was fine. On adding a translucent clone of one of the cells' layers to the overlay layer, the whole view clears briefly, and I get the log message 'core animation: surface 2502x2502 is too large'. This happens between adding the new layer and the next cycle of the event loop, so I guess it occurs when Core Animation renders the new layer.
I knew that a layer's content size is related to the OpenGL texture size limit, but didn't think its frame mattered. I'm not drawing anything to these layers, I'm not setting any style properties, and I remove offscreen sublayers. All I'm really using them for is to handle the geometry of the document view. Is this an appropriate use of CA layers? If not, are there better ways of handling a large Core Animation-based document view?
Edit:
I've had this problem again, caused by an implicit animation when adding sublayers to the large parent layer. So in addition to what is suggested below, that's one thing to check if you run into this.
I would check to make sure that you're not setting any properties on your 2500x2500 layers which could require offscreen rendering. (This causes the layer to try and create a full-size buffer off-screen and render its contents into that buffer, rather than just rendering the contents to the screen directly.)
For example, setting an opacity, masksToBounds, mask, shouldRasterize, etc, could cause offscreen-rendering. You can see if offscreen-rendering is happening with the Core Animation instrument. (There's a checkbox to highlight offscreen-rendered areas.)
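Relatedly, for the implicit-animation trigger mentioned in the question's edit, wrapping the sublayer insertion in a CATransaction might help (a minimal sketch; contentLayer and cellLayer are placeholder names):

```swift
import QuartzCore

// Suppress the implicit animation that fires when adding a sublayer to
// the very large parent layer.
CATransaction.begin()
CATransaction.setDisableActions(true)  // no implicit animation for these changes
contentLayer.addSublayer(cellLayer)
CATransaction.commit()
```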

Core Animation architecture

From the documentation it appears that the Core Animation layer sits above OpenGL and Quartz 2D, i.e. executing a Core Animation command should produce a sequence of Quartz 2D and OpenGL commands. Am I right?
In Interface Builder, under the View Effects tab, we can set the Core Animation layer. What happens internally there? When we tick the Content View option, the contents on screen (buttons, scroll views, etc.) are not drawn using the main context or the view's current context; instead, a new bitmap context is created for them. What is happening under the hood there?
Can somebody please explain the relationship between a Core Animation layer and Quartz 2D/OpenGL?
Core Animation layers are essentially high-level abstractions of OpenGL surfaces. They are stored and manipulated by the GPU and so manipulation of the layers is extremely fast. CALayer objects by themselves are very lightweight and have no event handling.
Layer-backed NSView objects (which is what you get if you enable the checkboxes in Interface Builder) are views that draw their content into a Core Animation layer, again stored in the GPU's memory and with the same performance advantages as plain CALayer objects, but with all the functionality of a normal NSView.
What happens is that the view's content is rendered (via Quartz) to its backing layer (essentially an OpenGL texture). The view then only needs to draw again if the content of the layer changes.
Changes in the position, scale, rotation, etc. of the view's layer do not require the view's content to be redrawn. This means that most of the time the CPU does not have to get involved in constantly redrawing the view.
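As a minimal sketch of that point (placeholder names; the view's content is rendered once into its backing layer):

```swift
import AppKit

// Moving a layer-backed view is a layer property change handled by the
// GPU; the view's draw(_:) is not called again for the move.
let box = NSView(frame: NSRect(x: 0, y: 0, width: 100, height: 100))
box.wantsLayer = true  // render once into a backing layer

NSAnimationContext.runAnimationGroup { context in
    context.duration = 0.5
    // Animates the backing layer's position; no CPU redraw of contents.
    box.animator().setFrameOrigin(NSPoint(x: 200, y: 200))
}
```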

what is layer in core animation

In Core Animation, or in AppKit, when we say a view is layer-backed, or we simply add a layer to a view, what do we actually mean by 'the layer'?
A simple Google search:
The CALayer is the canvas upon which everything in Core Animation is painted. When you define movement, color changes, image effects, and so on, those are applied to CALayer objects. From a code perspective, CALayers are a lightweight representation similar to an NSView. In fact, NSView objects can be manipulated via their CALayer. This is referred to as being layer-backed.
A CALayer is an object which manages and draws upon a GL surface, and can manipulate that surface's location in three dimensions, without needing to redraw its contents.
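A minimal sketch of a standalone CALayer hosted by a layer-backed view (names are placeholders):

```swift
import AppKit

let hostView = NSView(frame: NSRect(x: 0, y: 0, width: 300, height: 300))
hostView.wantsLayer = true  // make the view layer-backed

// A lightweight layer: no event handling, just content and geometry.
let badge = CALayer()
badge.backgroundColor = NSColor.systemRed.cgColor
badge.cornerRadius = 8
badge.frame = CGRect(x: 20, y: 20, width: 80, height: 40)
hostView.layer?.addSublayer(badge)

// Moving the layer is a GPU-side property change; nothing is redrawn.
badge.position = CGPoint(x: 200, y: 200)
```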
