Drastic slowdown using layer backed NSOpenGLView - cocoa

I needed to display some Cocoa widgets on top of an NSOpenGLView in an existing app, so I followed Apple's LayerBackedOpenGLView sample code. The NSOpenGLView is given a backing layer using:
[glView setWantsLayer:YES];
Then the Cocoa NSView with the widgets is added as a subview of the glView. This basically works and is twice as fast as my previous approach, where I added the NSView containing the widgets to a child window of the window containing the glView (the other solution I found on the web).
There were two problems.
The first is that some textures I use with blending were no longer blending correctly. After searching around a bit, it looked like I needed to clear the alpha channel of the OpenGL view. This bit of code, which I call after drawing a frame, seems to have fixed the problem:
Code:
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE); // Only the alpha channel will be affected
glClearColor(0, 0, 0, 1);                           // Clear alpha to 1 (opaque)
glClear(GL_COLOR_BUFFER_BIT);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);    // Restore the color mask
Can someone explain why this is needed when using the CALayer, but not without?
The second problem I don't have a solution for. When I pan to the part of the scene where problem #1 was observed, the frame rate drops from around 110 FPS to 10 FPS. Again, this only started happening after I added the backing layer. It doesn't always happen; occasionally the FPS stays high when panning over this part of the scene, but that is rare. I assume it must have something to do with how the textures there are blended, but I have no idea what.
Any thoughts?

I did figure out a workaround for the slowdown. The OpenGL view has a HUD (heads-up display) view that goes on top of it, and I had installed another NSView as a subview of the HUD. Both the HUD and the subview do a lot of alpha manipulation, and for some reason that triggered a real slowdown in compositing the layers. I could just as easily install this subview as a subview of the OpenGL view instead, and when I did, everything sped up again. So although I don't fully understand the slowdown, I do have a good workaround for it.
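A minimal sketch of the re-parenting, using hypothetical hudView / widgetView / glView names (not from the original post):

// Move the alpha-heavy widget view out of the HUD view...
[widgetView removeFromSuperview];
// ...and attach it directly to the OpenGL view instead.
[glView addSubview:widgetView];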

Related

SCNView overlay causes tearing on resize

I'm using SceneKit to display a 3D scene (so far, a single quad), and overlaySKScene to display a 2D overlay (which so far is just an SKNode with no geometry, though I had previously used a single SKLabelNode). It's a pretty simple view inside a bunch of nested NSSplitViews. And during normal use, it works brilliantly. The problem comes when I try to resize the window or split view: I get areas of red leaking through my nice background, which disappear shortly after.
I'm running this on a 2016 MBP with a Radeon Pro 460, and captured the frame using QuickTime's screen capture.
Disabling the overlay removes the red areas, which makes me think it's the culprit. Disabling the statistics bar or the scroller (a child view of the SCNView) does not have any impact. My most minimal SKScene subclass is defined as
@implementation TestOverlay

- (instancetype)initWithSize:(CGSize)size
{
    if (self = [super initWithSize:size])
    {
        // Set up the default state
        self.scaleMode = SKSceneScaleModeResizeFill;
        self.userInteractionEnabled = NO;
        self.backgroundColor = [NSColor blackColor];
    }
    return self;
}

@end
Has anybody run into similar issues before? Annoyingly, Apple's Fox2 sample doesn't have similar problems...
For true enlightenment, one needs to read the documentation carefully, then comment everything out and restore functionality one step at a time. And then read the documentation again.
In the discussion section of -[SCNSceneRendererDelegate renderer:willRenderScene:atTime:], the solution is obvious (emphasis mine):
You should only execute Metal or OpenGL drawing commands (and any setup required to perform them) in this method—the results of modifying SceneKit objects during this method are undefined.
Which is exactly what I was doing. I had misread this as referring to modifying geometry, so I thought that assigning textures would be reasonable to do here (after all, "will" render means it hadn't started rendering yet, right?), and that it would therefore pick up the most recently created texture. And unfortunately, before I decided that I needed an overlay, this actually worked perfectly well! As soon as the overlay was added, however, the tearing appeared.
The correct place to update material properties seems to be -[SCNSceneRendererDelegate renderer:updateAtTime:]. Use that to avoid silly bugs like this one, folks!
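A rough sketch of what that looks like (the material and texture properties here are placeholders, not from the original post):

- (void)renderer:(id<SCNSceneRenderer>)renderer updateAtTime:(NSTimeInterval)time
{
    // Safe to modify SceneKit objects here, unlike in renderer:willRenderScene:atTime:.
    self.quadMaterial.diffuse.contents = self.latestTexture;
}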
Try resetting the SMC (System Management Controller). It helped me solve a similar problem, but with Autodesk Maya 2018 on a 2017 MBP (Radeon 560).
So, shut down and unplug your MBP.
On the built-in keyboard, press and hold the left-side Shift-Option-Control keys, then press the power button as well; hold all of these down for 10 seconds, then release the keys.
Connect the power adapter and then turn the Mac on normally.
Hope this helps.
P.S. In case it doesn't help, try checking/unchecking the Automatic graphics switching option in System Preferences > Energy Saver to see if there's a difference.

What could prevent an NSView's layer from rendering with renderInContext:?

I've been working hard, searching the internet about this problem for 3 days, and I'm now running out of resources.
I'm currently porting an iOS app to macOS (deployment target 10.11). The problem:
I have a view hierarchy as below:
NSScrollView
  documentView
    grouping view
      tiling view one
        array of NSImageViews (each one being a tile)
      tiling view two
        array of NSImageViews (each one being a tile)
The two tiling views overlap completely; depending on the UI, one may be hidden, or the second one has its opacity set below 1.0 to blend the two tiled views.
Because of the opacity requirement, as well as performance, the views are CALayer-backed. This is done from IB, where the top NSScrollView has Core Animation checked; from there, the whole view tree is (implicitly) layer backed.
Works as expected: scroll, magnify, etc.
Then I need to make an image out of the document view to generate an SCNMaterial content (3D view).
On iOS the documentView renderInContext works as expected, and allows an image to be created.
On AppKit the context stays transparent, and so does the image: it's a valid object, but it looks as if it were filled with clearColor.
If documentView.canDrawSubviewsIntoLayer is set at creation, the view tree renders OK. This can't be the solution, since it prevents the opacity setting from working.
Even when one tiling view is hidden (no opacity compositing) rendering fails.
I read that some kinds of views are not rendered. I don't use them. There are no filters and no masks, besides a default masksToBounds setting on the whole view tree. I don't know why or where it is set. I tried to unset it on all the views at creation, with no success; it gets set again somehow on the grouping view below the documentView. This may be the problem, but why is this property out of my control?
The alternative way to render the view tree, bitmapImageRepForCachingDisplayInRect: / cacheDisplayInRect:toBitmapImageRep:, behaves the same way: OK with canDrawSubviewsIntoLayer, fails otherwise. Apple's code examples for making a texture out of a view use one of these two methods.
There are plenty of posts, mainly on SO, complaining about CALayer renderInContext and code to custom render a layer tree. Nevertheless, most are quite old and now there must be a simple standard way to achieve it.
Edit: among other attempts, I tried to set each view wantsLayer, with no success.
Well, as often happens when you post a request for help, you finally find the answer by yourself… Here it is:
Given the view tree listed in the question, I managed to render it by setting canDrawSubviewsIntoLayer on each tiling view. This way, the opacity compositing between layers works, AND the views are rendered.
I post this as an answer, because it solves the problem.
As to WHY this works, here is my guess, though this is not an authoritative answer: each NSImageView tile (a subview of a tiling view) is positioned by its frame origin in the coordinates of the tiling view, not by a transform on its layer. This is a difference from the iOS version, where the tiles are positioned by a transform. I'm going to test further and see whether using a transform rather than a frame origin makes a difference to renderInContext:.
Edit: after more testing, it appears that the iOS version has a transform AND an offset on each tile.
So the only clue is that the layer system gets lost when rendering in context on OS X when some subviews have an offset??
Summary:
The key point of the solution is to find the right layer on which to set canDrawSubviewsIntoLayer.
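A minimal sketch of the resulting setup, assuming hypothetical tilingViewOne / tilingViewTwo / documentView references (the names are mine, not from the original post):

// Flatten each tiling view's NSImageView tiles into that view's layer;
// the two tiling layers can still be blended against each other with opacity.
tilingViewOne.canDrawSubviewsIntoLayer = YES;
tilingViewTwo.canDrawSubviewsIntoLayer = YES;

// Render the document view into a bitmap, e.g. for use as SCNMaterial contents.
NSBitmapImageRep *rep = [documentView bitmapImageRepForCachingDisplayInRect:documentView.bounds];
[documentView cacheDisplayInRect:documentView.bounds toBitmapImageRep:rep];
NSImage *image = [[NSImage alloc] initWithSize:documentView.bounds.size];
[image addRepresentation:rep];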

Alpha Detection in Layer OK on Simulator, not iPhone

First, check out this very handy extension to CALayer from elsewhere on SO. It helps you determine if a point in a layer's contents-assigned CGImageRef is or isn't transparent.
n.b.: There is no guarantee about a layer's contents being representable or responding as if it was a CGImageRef. (This can have implications for broader use of the extension referenced above, granted.) In my case, however, I know that the layers I'm testing have contents that were assigned a CGImageRef. (Hopefully this can't change out from under me after assignment! Plus I notice that contents is retained.)
OK, back to the problem at hand. Here's how I'm using the extension. For starters, I've changed the selector from containsPoint: to containsNonTransparentPoint: (I need to keep the original method around.)
Now, I have a UIImageView subclass that uses seven CALayer objects. These are used for opacity-based animations (pulsing/glowing effects and on/off states). Each of those seven layers has a known CGImageRef in its contents that effectively "covers" (air quotes) one part of the entire view with its own swath of color. The rest of each image in its respective layer is transparent.
In the subclass, I register for single tap gestures. When one arrives, I walk through my layers to see which one was effectively tapped (that is, which one has a non-transparent point where I tapped, first one found wins) and then I can do whatever needs doing.
Here's how I handle the gesture:
- (IBAction)handleSingleTap:(UIGestureRecognizer *)sender {
    CGPoint tapPoint = [sender locationInView:sender.view];

    // Flip y so 0,0 is at lower left. (Required by layer method below.)
    tapPoint.y = sender.view.bounds.size.height - tapPoint.y;

    // Figure out which layer was effectively tapped. First match wins.
    for (CALayer *layer in myLayers) {
        if ([layer containsNonTransparentPoint:tapPoint]) {
            NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);
            // We got our layer! Do something useful with it.
            return;
        }
    }
}
The good news? All of this works beautifully on the iPhone Simulator with iOS 4.3.2. (FWIW, I'm on Lion running Xcode 4.1.)
However, on my iPhone 4 (with iOS 4.3.3), it doesn't even come close! None of my taps seem to match up with any of the layers I'd expect them to.
Even if I try the suggestion to use CGContextSetBlendMode when drawing into the 1x1 pixel context, no dice.
I am hoping it's pilot error, but I have yet to figure out what the disparity is. The taps do have a pattern but not a discernible one.
Perhaps there's a data boundary issue. Perhaps I have to do something other than flip the y coordinate to the lower-left of the image. Just not sure yet.
If anyone can please shed some light on what might be amiss, I would be most appreciative!
UPDATE, 22 September 2011: First ah-ha moment acquired! The problem isn't Simulator-vs-iPhone. It's Retina vs. non-Retina! The same symptoms occur in the Simulator when using the Retina version. Perhaps the solution centers around scaling (CTM?) in some way/shape/form. The Quartz 2D Programming Guide also advises that "iOS applications should use UIGraphicsBeginImageContextWithOptions." I feel like I'm very close to the solution here!
OK! First, the problem wasn't Simulator-vs-iPhone. Rather, it was Retina vs. Non-Retina. The same symptoms occur in the Simulator when using the Retina version. Right away, one starts to think the solution has to do with scaling.
A very helpful post over on the Apple Dev Quartz 2D forum (along similar "be mindful of scaling" lines) steered me toward a solution. Now, I'm the first to admit, this solution is NOT pretty, but it does work for Retina and Non-Retina cases.
With that, here's the revised code for the aforementioned CALayer extension:
//
// Checks image at a point (and at a particular scale factor) for transparency.
// Point must be with origin at lower-left.
//
BOOL ImagePointIsTransparent(CGImageRef image, CGFloat scale, CGPoint point) {
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1,
                                                 NULL, kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(-point.x, -point.y,
                       CGImageGetWidth(image)/scale, CGImageGetHeight(image)/scale), image);
    CGContextRelease(context);
    CGFloat alpha = pixel[0]/255.0;
    return (alpha < 0.01);
}

@implementation CALayer (Extensions)

- (BOOL)containsNonTransparentPoint:(CGPoint)point scale:(CGFloat)scale {
    if (CGRectContainsPoint(self.bounds, point)) {
        if (!ImagePointIsTransparent((CGImageRef)self.contents, scale, point))
            return YES;
    }
    return NO;
}

@end
In short, we need to know about the scale. If we divide the image width and height by that scale, ta-dah, the hit test now works on Retina and Non-Retina devices!
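A call-site sketch, reusing the gesture handler from earlier (the screen-scale lookup is my addition, not from the original answer):

// Pass the screen scale so the 1x1 alpha-only hit test samples the right pixel
// on both Retina and non-Retina devices.
CGFloat scale = [UIScreen mainScreen].scale;
if ([layer containsNonTransparentPoint:tapPoint scale:scale]) {
    // We got our layer! Do something useful with it.
}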
What I don't like about this is the mess I've had to make of that poor selector, now called containsNonTransparentPoint:scale:. As mentioned in the question, there is never any guarantee about what a layer's contents will contain. In my case I take care to only use this on layers whose contents hold a CGImageRef, but this won't fly in a more general/reusable case.
All this makes me wonder if CALayer is not the best place for this particular extension after all, at least in this new incarnation. Perhaps CGImage, with some layer smarts thrown in, would be cleaner. Imagine doing a hit test on a CGImage but returning the name of the first layer that had non-transparent content at that point. There's still the problem of not knowing which layers have CGImageRefs in them, so some hinting might be required. (Left as an exercise for yours truly and the reader!)
UPDATE: After some discussion with a developer at Apple, messing with layers in this fashion is in fact ill-advised. Contrary to what I previously learned (incorrectly?), multiple UIImageViews encapsulated within a UIView are the way to go here. (I always remember learning that you want to keep your views to a minimum. Perhaps in this case it isn't as big a deal.) Nevertheless, I'll keep this answer here for now, but will not mark it as correct. Once I try out and verify the other technique, I will share that here!

drawRect makes scrolling slow in UIScrollView

I've got a UIScrollView with a (custom) UIView inside of it.
In my scrollViewDidScroll-method I'm calling
[myCustomView setNeedsDisplay];
This makes the scrolling noticeably slower when I implement the drawRect: method in my custom UIView, even if it's completely empty.
As soon as I delete the drawRect: method, it's smooth again.
I have absolutely no idea why... any of you?
I hate drawRect: too.
"It's because of the way hardware-accelerated animation works in Cocoa.
If you don't have a custom drawRect: method, the system caches the pixels for your view in video memory. When it needs to redraw them, it just blits the pixels onto the screen.
If you have a custom drawRect: method, the system instead has to call your drawRect: method to render the pixels in main memory, then copy those pixels into video memory, THEN draw the pixels to the screen for each and every frame. The docs say to avoid drawRect: if you can.
I think main memory and video memory are shared on most/all iOS devices, but the system still has a lot more work to do when you implement a drawRect method for a view.
You would probably be better served to use an OpenGL layer and render to that with OpenGL calls, since OpenGL talks directly with the display hardware."
link to that quote: http://www.iphonedevsdk.com/forum/iphone-sdk-development/80637-drawrect-makes-scrolling-slow-uiscrollview.html
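One mitigation, sketched with illustrative names (not from the original thread): delete the empty drawRect: entirely, and if you do need custom drawing, only invalidate the view when its content actually changes instead of on every scroll callback.

// contentDidChange is a hypothetical flag on the custom view.
- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    if (self.myCustomView.contentDidChange) {
        [self.myCustomView setNeedsDisplay];
        self.myCustomView.contentDidChange = NO;
    }
}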

How does CATransition work?

CATransition is quite unusual. Consider the following code.
CATransition *trans = [CATransition animation];
trans.duration = 0.5;
trans.type = kCATransitionFade;
[self.holdingView.layer addAnimation:trans forKey:nil];
self.loadingView.hidden = YES;
self.displayView.hidden = NO;
Notice that nowhere did I tell the transition that I wanted to display the displayView rather than loadingView, so the views must somehow access the transition themselves. Can anyone explain in more detail how this works?
When you add the transition as an animation, an implicit CATransaction is begun. From that point on, all modifications to layer properties are going to be animated rather than immediately applied. The way CATransition performs this animation is to take a snapshot of the view before the layer properties are changed, and a snapshot of what the view will look like after the layer properties are changed. It then uses a filter (on the Mac this is Core Image, but on the iPhone I'm guessing it's just hard-coded math) to interpolate between those two images over time.
This is a key feature of Core Animation. Your drawing logic doesn't generally need to deal with the animation. You're given a graphics context, you draw into it, you're done. The system handles compositing that with other images over time (or rotating it in space, or whatever). So in the case of changing the hidden state, the initial-state fully composited image is blended with the final-state composited image. Very fast on a GPU, and it doesn't really matter what change you made to the view.
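The same pairing can be made explicit with a transaction; a sketch reusing the names from the question:

[CATransaction begin];

CATransition *trans = [CATransition animation];
trans.duration = 0.5;
trans.type = kCATransitionFade;
[self.holdingView.layer addAnimation:trans forKey:nil];

// The "before" snapshot reflects the current state; the "after" snapshot
// reflects the layer-tree changes made before the transaction commits.
self.loadingView.hidden = YES;
self.displayView.hidden = NO;

[CATransaction commit];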
