SCNView overlay causes tearing on resize - macOS

I'm using SceneKit to display a 3D scene (so far, a single quad), and an overlaySKScene to display a 2D overlay (so far just an SKNode with no geometry, though I had previously used a single SKLabelNode). It's a pretty simple view inside a bunch of nested NSSplitViews, and during normal use it works brilliantly. The problem comes when I try to resize the window or split view: I get areas of red leaking through my nice background, which disappear shortly after.
I'm running this on a 2016 MBP with a Radeon Pro 460, and captured a frame of the artifact using QuickTime's screen recording.
Disabling the overlay removes the red areas, which makes me think it's the culprit. Disabling the statistics bar or the scroller (a child view of the SCNView) has no impact. My most minimal SKScene subclass is defined as:
@implementation TestOverlay

- (instancetype)initWithSize:(CGSize)size
{
    if( self = [super initWithSize:size] )
    {
        // Setup the default state
        self.scaleMode = SKSceneScaleModeResizeFill;
        self.userInteractionEnabled = NO;
        self.backgroundColor = [NSColor blackColor];
    }
    return self;
}

@end
Has anybody run into similar issues before? Annoyingly, Apple's Fox2 sample doesn't have this problem...

For true enlightenment, one needs to read the documentation carefully, then comment everything out and restore functionality one step at a time. And then read the documentation again.
In the discussion section of -[SCNSceneRendererDelegate renderer:willRenderScene:atTime:], the solution is obvious (emphasis mine):
You should only execute Metal or OpenGL drawing commands (and any setup required to perform them) in this method—the results of modifying SceneKit objects during this method are undefined.
Which is exactly what I was doing. I had misread this as referring only to modifying geometry, so I thought assigning textures here would be reasonable (after all, "will render" means it hasn't started rendering yet, right?), and that the renderer would therefore pick up the most recently created texture. And unfortunately, before I decided that I needed an overlay, this actually worked perfectly well! As soon as the overlay was added, however, the tearing appeared.
The correct place to update material properties turns out to be -[SCNSceneRendererDelegate renderer:updateAtTime:]. Use that to avoid silly bugs like this one, folks!
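For reference, a minimal sketch of what that split looks like in practice (the _quadMaterial and _latestTexture names are placeholders, not from the original code):

- (void)renderer:(id<SCNSceneRenderer>)renderer updateAtTime:(NSTimeInterval)time
{
    // Safe place to touch SceneKit objects: hand the most recent texture to the material here.
    _quadMaterial.diffuse.contents = _latestTexture;
}

- (void)renderer:(id<SCNSceneRenderer>)renderer willRenderScene:(SCNScene *)scene atTime:(NSTimeInterval)time
{
    // Reserved for Metal/OpenGL drawing commands only; don't modify SceneKit objects here.
}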

Try resetting the SMC (System Management Controller). It helped me solve a similar problem, though with Autodesk Maya 2018 on a 2017 MBP (Radeon 560).
So, shut down and unplug your MBP.
On the built-in keyboard, press and hold the Shift-Option-Control keys on the left side and press the Power Button and hold all of these down for 10 seconds, then release the keys.
Connect the power adapter and then turn the Mac on normally.
Hope this helps.
P.S. In case it doesn't help, try checking/unchecking the Automatic graphics switching option in System Preferences > Energy Saver to see if there's a difference.

Related

SKSceneScaleModeResizeFill on iOS 8 scales improperly

I have written a game in SpriteKit using Objective-C, and it works perfectly on iOS 9 but looks hideous on iOS 8. I would really like to know how to fix this problem, either by "correcting" my mistake, or, if I have made no mistake, by finding a workaround for the bug in iOS 8.
I think I have really done all I can to make the problem as clear as possible, including making loads of screenshots to illustrate the problem and also making a new Xcode project that is as simple as possible while still showing the problem.
If you want to try the Xcode project, here is a link for it:
Xcode project
If you want to see the screenshots of the problem, here is a link for them:
Screenshots
Now I will try to explain the code I wrote and the problem illustrated in the screenshots.
PLEASE REMEMBER: My code works perfectly on iOS 9.3, so it is obviously not complete garbage. But admittedly, I am not an expert on handling screen rotation, so my code could probably be better.
I should probably mention that both scenes have their scale mode set to SKSceneScaleModeResizeFill. I chose this mode because I had tremendous difficulty doing proper layouts for all possible screen sizes (including iPhone) when working with SKSceneScaleModeAspectFill. I do hope I can solve this problem while sticking with SKSceneScaleModeResizeFill.
Anyway, my app is a SpriteKit game with two scenes. The main scene is the GameScene, where you play the game. This scene has a pointer to the SettingsScene, where you can change the settings of the game (e.g. the level of difficulty).
Anytime the user rotates the screen, GameViewController detects this change in viewWillTransitionToSize and tells the GameScene object about the new screen width and height. GameScene then adjusts the positions of its sprites for the new orientation and passes the new width and height on to its SettingsScene object, so that the settings scene is properly laid out as well.
Please note that with this design, all sprites on BOTH scenes get repositioned any time the user rotates the screen, REGARDLESS of which scene is actually active at that time.
As I said before, all works as expected on iOS 9.3. But on iOS 8, the result is atrocious. The screenshots illustrate one example of the typical experience on iOS 8. If the user rotates the screen while playing the game and then goes to the settings screen, he will see something awful, and will often be trapped in this terrible experience because the button for going back to the main game might not even fit on the screen anymore.
At first, it might seem like I am failing to reposition sprites for landscape mode in the settings scene. But this explanation is wrong. The text on the screen shows that the last layout was performed with the landscape orientation in mind.
So what is going wrong here?
Any suggestions would be highly highly highly highly appreciated.
Thanks!
-j
p.s. In case you don't want to look directly at the linked project file, here are some details about the example code. GameViewController implements viewWillTransitionToSize to handle any screen rotation. It passes the new screen dimensions directly to GameScene, which then tells SettingsScene. Both scenes rearrange their sprites for the new screen dimensions. All goes well on iOS 9. On iOS 8, however, the inactive scene ends up looking hideous when it is presented, even though it clearly did reposition its sprites according to the new dimensions.
The problem is easily resolved by executing these lines...
gameScene.size = newScreenSize;
settingsScene.size = newScreenSize;
anytime the orientation changes.
This code is not required on iOS 9; there, the scene knows what size the screen is without assistance. On iOS 8, however, it seems this code needs to be added.
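For context, a rough sketch of where those assignments might go; viewWillTransitionToSize:withTransitionCoordinator: is available from iOS 8, and gameScene/settingsScene stand in for the properties described above:

- (void)viewWillTransitionToSize:(CGSize)size
       withTransitionCoordinator:(id<UIViewControllerTransitionCoordinator>)coordinator
{
    [super viewWillTransitionToSize:size withTransitionCoordinator:coordinator];

    // iOS 9 keeps resize-fill scenes in sync on its own; on iOS 8 we do it by hand.
    self.gameScene.size = size;
    self.settingsScene.size = size;

    // Both scenes can then reposition their sprites for the new dimensions.
}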

Forcing OSX NSScroller thumb to draw in highlighted form without actually mouse tracking

I'm drawing an NSScroller into a bitmap. I need to capture it with the thumb highlighted (I'm using cacheDisplayInRect:toBitmapImageRep:, but I've tried the separate draw methods into a GC created on the bitmap). I've tried everything I can think of, including setting various values in the (private) _sFlags2 and sFlags NSScroller ivars before the draw call. I can't send it events because the scroller isn't actually live.
Eventually I need this to work on 10.6+, but all of my testing so far has been on 10.7+ (which is where the new style scrollbars came in), and I haven't checked 10.6 yet because I'm also using alignmentRectForFrame: and haven't faked that out for 10.6 yet.
After some quality time spent with Hopper, I found the following solution:
[[scrollbar scrollerImp] setShouldDrawRolloverState: YES];
Undocumented, but I'm a happy camper!
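In case it helps, a sketch of how that call might combine with the bitmap capture described in the question (scroller and rep are illustrative names, and scrollerImp is the private accessor mentioned above):

// Force the rollover (highlighted) appearance before capturing. Private API, per the answer above!
[[scroller scrollerImp] setShouldDrawRolloverState:YES];

NSBitmapImageRep *rep = [scroller bitmapImageRepForCachingDisplayInRect:scroller.bounds];
[scroller cacheDisplayInRect:scroller.bounds toBitmapImageRep:rep];
// rep now contains the scroller drawn with its thumb highlighted.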

Mac NSView animation animationDidEnd called twice on Retina MBP

A bug recently showed up in my app related to view animation on the new MBP Retina. I don't have a new MBP to reproduce it on, but the affected user is helping track down the issue through copious amounts of debugging output. It appears that animationDidEnd is being called twice on my animation delegate, and the second call seems to be screwing things up immensely. The code has worked on 10.5-10.7.4 for quite some time now, and so far this seems to be isolated to the new MBP Retina.
I am using the view itself as the animation delegate, in case something about the relationship between a view and its animation delegate has changed in a way that precludes this. I'm also investigating the possibility that animationDidEnd is being called by two distinct animation objects (though I have nothing to indicate that another animation is running anywhere in the app, let alone for this delegate).
If anyone is aware of any updates to documentation related to animation delegates I would appreciate a pointer, or any ideas otherwise. Thanks.
SOLVED: The issue didn't have to do with animations at all. It had to do with use of the deprecated method convertPointFromBase:
While deprecated methods are "usually" ok for at least the next release, this one is trouble when it comes to the Retina display. This is only conjecture, but since the method works as expected on non-Retina displays, I have to assume this has to do with the pixel density on the new displays.
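For anyone hitting the same thing, here is a hedged sketch of the likely replacements; which one applies depends on what the original point actually represented:

// If the point is in window coordinates (typical for event locations),
// convert through the window instead of using the deprecated base-coordinate call:
NSPoint viewPoint = [view convertPoint:windowPoint fromView:nil];   // nil = window coordinates

// If the point is really in backing-store pixels, 10.7+ provides:
NSPoint fromBacking = [view convertPointFromBacking:pixelPoint];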

Alpha Detection in Layer OK on Simulator, not iPhone

First, check out this very handy extension to CALayer from elsewhere on SO. It helps you determine if a point in a layer's contents-assigned CGImageRef is or isn't transparent.
n.b.: There is no guarantee about a layer's contents being representable or responding as if it was a CGImageRef. (This can have implications for broader use of the extension referenced above, granted.) In my case, however, I know that the layers I'm testing have contents that were assigned a CGImageRef. (Hopefully this can't change out from under me after assignment! Plus I notice that contents is retained.)
OK, back to the problem at hand. Here's how I'm using the extension. For starters, I've changed the selector from containsPoint: to containsNonTransparentPoint: (I need to keep the original method around.)
Now, I have a UIImageView subclass that uses seven CALayer objects. These are used for opacity-based animations (pulsing/glowing effects and on/off states). Each of those seven layers has a known CGImageRef in its contents that effectively "covers" (air quotes) one part of the entire view with its own swath of color. The rest of each image in its respective layer is transparent.
In the subclass, I register for single tap gestures. When one arrives, I walk through my layers to see which one was effectively tapped (that is, which one has a non-transparent point where I tapped, first one found wins) and then I can do whatever needs doing.
Here's how I handle the gesture:
- (IBAction)handleSingleTap:(UIGestureRecognizer *)sender {
    CGPoint tapPoint = [sender locationInView:sender.view];

    // Flip y so 0,0 is at lower left. (Required by layer method below.)
    tapPoint.y = sender.view.bounds.size.height - tapPoint.y;

    // Figure out which layer was effectively tapped. First match wins.
    for (CALayer *layer in myLayers) {
        if ([layer containsNonTransparentPoint:tapPoint]) {
            NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);
            // We got our layer! Do something useful with it.
            return;
        }
    }
}
The good news? All of this works beautifully on the iPhone Simulator with iOS 4.3.2. (FWIW, I'm on Lion running Xcode 4.1.)
However, on my iPhone 4 (with iOS 4.3.3), it doesn't even come close! None of my taps seem to match up with any of the layers I'd expect them to.
Even if I try the suggestion to use CGContextSetBlendMode when drawing into the 1x1 pixel context, no dice.
I am hoping it's pilot error, but I have yet to figure out what the disparity is. The taps do have a pattern but not a discernible one.
Perhaps there's a data boundary issue. Perhaps I have to do something other than flip the y coordinate to the lower-left of the image. Just not sure yet.
If anyone can please shed some light on what might be amiss, I would be most appreciative!
UPDATE, 22 September 2011: First ah-ha moment acquired! The problem isn't Simulator-vs-iPhone. It's Retina vs. Non-Retina! The same symptoms occur in the Simulator when using the Retina version. Perhaps the solution centers around scaling (CTM?) in some way/shape/form. The Quartz 2D Programming Guide also advises that "iOS applications should use UIGraphicsBeginImageContextWithOptions." I feel like I'm very close to the solution here!
OK! First, the problem wasn't Simulator-vs-iPhone. Rather, it was Retina vs. Non-Retina. The same symptoms occur in the Simulator when using the Retina version. Right away, one starts to think the solution has to do with scaling.
A very helpful post over on the Apple Dev Quartz 2D forum (along similar "be mindful of scaling" lines) steered me toward a solution. Now, I'm the first to admit, this solution is NOT pretty, but it does work for Retina and Non-Retina cases.
With that, here's the revised code for the aforementioned CALayer extension:
//
// Checks image at a point (and at a particular scale factor) for transparency.
// Point must be with origin at lower-left.
//
BOOL ImagePointIsTransparent(CGImageRef image, CGFloat scale, CGPoint point) {
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1,
                                                 NULL, kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(-point.x, -point.y,
        CGImageGetWidth(image)/scale, CGImageGetHeight(image)/scale), image);
    CGContextRelease(context);
    CGFloat alpha = pixel[0]/255.0;
    return (alpha < 0.01);
}

@implementation CALayer (Extensions)

- (BOOL)containsNonTransparentPoint:(CGPoint)point scale:(CGFloat)scale {
    if (CGRectContainsPoint(self.bounds, point)) {
        if (!ImagePointIsTransparent((CGImageRef)self.contents, scale, point))
            return YES;
    }
    return NO;
}

@end
In short, we need to know about the scale. If we divide the image width and height by that scale, ta-dah, the hit test now works on Retina and Non-Retina devices!
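For completeness, here's how the gesture handler from the question might call the revised selector; passing [UIScreen mainScreen].scale is an assumption that suits the full-screen Retina/non-Retina cases:

CGFloat scale = [UIScreen mainScreen].scale;   // 2.0 on Retina, 1.0 otherwise
for (CALayer *layer in myLayers) {
    if ([layer containsNonTransparentPoint:tapPoint scale:scale]) {
        NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);
        return;
    }
}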
What I don't like about this is the mess I've had to make of that poor selector, now called containsNonTransparentPoint:scale:. As mentioned in the question, there is never any guarantee about what a layer's contents will contain. In my case I am taking care to only use this on layers whose contents hold a CGImageRef, but this won't fly in a more general/reusable case.
All this makes me wonder if CALayer is not the best place for this particular extension after all, at least in this new incarnation. Perhaps CGImage, with some layer smarts thrown in, would be cleaner. Imagine doing a hit test on a CGImage but returning the name of the first layer that had non-transparent content at that point. There's still the problem of not knowing which layers have CGImageRefs in them, so some hinting might be required. (Left as an exercise for yours truly and the reader!)
UPDATE: After some discussion with a developer at Apple, messing with layers in this fashion is in fact ill-advised. Contrary to what I previously learned (incorrectly?), multiple UIImageViews encapsulated within a UIView are the way to go here. (I always remember learning that you want to keep your views to a minimum. Perhaps in this case it isn't as big a deal.) Nevertheless, I'll keep this answer here for now, but will not mark it as correct. Once I try out and verify the other technique, I will share that here!

Drastic slowdown using layer backed NSOpenGLView

I needed to display some Cocoa widgets on top of an NSOpenGLView in an existing app. I followed the example in Apple's LayerBackedOpenGLView example code. The NSOpenGLView is given a backing layer using:
[glView setWantsLayer:YES]
Then the Cocoa NSView with the widgets is added as a subview of the glView. This is basically working, and it's twice as fast as my previous approach, where I added the NSView containing the widgets to a child window of the window containing the glView (this was the other solution I found on the web).
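For reference, the setup amounts to something like the following sketch (widgetView is a placeholder name for the NSView holding the Cocoa controls):

[glView setWantsLayer:YES];       // give the NSOpenGLView a backing CALayer
[glView addSubview:widgetView];   // the widgets now composite on top of the GL content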
There were two problems.
The first is that some textures I use with blending were no longer blending correctly. After searching around a bit, it looked like I might need to clear the alpha channel of the OpenGL view. This bit of code, which I call after drawing a frame, seems to have fixed the problem:
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE); // Only the alpha channel will be affected
glClearColor(0, 0, 0, 1);                           // Alpha value to clear to
glClear(GL_COLOR_BUFFER_BIT);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);    // Restore the color mask
Can someone explain why this is needed when using the CALayer, but not without?
The second problem I don't have a solution for. It seems that when I pan to the part of the scene where problem #1 was observed, the frame rate drops from something like 110 FPS down to 10 FPS. Again, this only started happening after I added the backing layer. It doesn't always happen; sometimes the FPS stays high when panning over this part of the scene, but that is rare. I assume it must have something to do with how the textures here are blended, but I have no idea what.
Any thoughts?
I did figure out a workaround for the slowdown. The OpenGL view has a HUD (heads-up display) view that goes on top of it, and I had installed another NSView as a subview of it. Both the HUD and the subview do a lot of alpha manipulation, and for some reason that triggered a real slowdown in compositing the layers. I could just as easily install this subview as a subview of the OpenGL view instead, and when I did, everything sped up again. So although I don't fully understand the slowdown, I do have a good workaround for it.
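The re-parenting itself is trivial; roughly the following, where extraView is an illustrative name for the alpha-heavy subview that used to live under the HUD:

[extraView removeFromSuperview];
[glView addSubview:extraView];    // composites directly over the GL content, avoiding the HUD-on-HUD alpha cost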
