Alpha Detection in Layer OK on Simulator, not iPhone

First, check out this very handy extension to CALayer from elsewhere on SO. It helps you determine if a point in a layer's contents-assigned CGImageRef is or isn't transparent.
n.b.: There is no guarantee about a layer's contents being representable or responding as if it was a CGImageRef. (This can have implications for broader use of the extension referenced above, granted.) In my case, however, I know that the layers I'm testing have contents that were assigned a CGImageRef. (Hopefully this can't change out from under me after assignment! Plus I notice that contents is retained.)
OK, back to the problem at hand. Here's how I'm using the extension. For starters, I've changed the selector from containsPoint: to containsNonTransparentPoint: (I need to keep the original method around.)
Now, I have a UIImageView subclass that uses seven CALayer objects. These are used for opacity-based animations (pulsing/glowing effects and on/off states). Each of those seven layers has a known CGImageRef in its contents that effectively "covers" (air quotes) one part of the entire view with its own swath of color. The rest of each image in its respective layer is transparent.
In the subclass, I register for single tap gestures. When one arrives, I walk through my layers to see which one was effectively tapped (that is, which one has a non-transparent point where I tapped, first one found wins) and then I can do whatever needs doing.
Here's how I handle the gesture:
- (IBAction)handleSingleTap:(UIGestureRecognizer *)sender {
    CGPoint tapPoint = [sender locationInView:sender.view];

    // Flip y so 0,0 is at lower left. (Required by layer method below.)
    tapPoint.y = sender.view.bounds.size.height - tapPoint.y;

    // Figure out which layer was effectively tapped. First match wins.
    for (CALayer *layer in myLayers) {
        if ([layer containsNonTransparentPoint:tapPoint]) {
            NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);
            // We got our layer! Do something useful with it.
            return;
        }
    }
}
The good news? All of this works beautifully on the iPhone Simulator with iOS 4.3.2. (FWIW, I'm on Lion running Xcode 4.1.)
However, on my iPhone 4 (with iOS 4.3.3), it doesn't even come close! None of my taps seem to match up with any of the layers I'd expect them to.
Even if I try the suggestion to use CGContextSetBlendMode when drawing into the 1x1 pixel context, no dice.
I am hoping it's pilot error, but I have yet to figure out what the disparity is. The taps do have a pattern but not a discernible one.
Perhaps there's a data boundary issue. Perhaps I have to do something other than flip the y coordinate to the lower-left of the image. Just not sure yet.
If anyone can please shed some light on what might be amiss, I would be most appreciative!
UPDATE, 22 September 2011: First ah-ha moment acquired! The problem isn't Simulator-vs-iPhone. It's Retina vs. Non-Retina! The same symptoms occur in the Simulator when using the Retina version. Perhaps the solution centers around scaling (CTM?) in some way/shape/form. The Quartz 2D Programming Guide also advises that "iOS applications should use UIGraphicsBeginImageContextWithOptions." I feel like I'm very close to the solution here!

OK! First, the problem wasn't Simulator-vs-iPhone. Rather, it was Retina vs. Non-Retina. The same symptoms occur in the Simulator when using the Retina version. Right away, one starts to think the solution has to do with scaling.
A very helpful post over on the Apple Dev Quartz 2D forum (along similar "be mindful of scaling" lines) steered me toward a solution. Now, I'm the first to admit, this solution is NOT pretty, but it does work for Retina and Non-Retina cases.
With that, here's the revised code for the aforementioned CALayer extension:
//
// Checks image at a point (and at a particular scale factor) for transparency.
// Point must be with origin at lower-left.
//
BOOL ImagePointIsTransparent(CGImageRef image, CGFloat scale, CGPoint point) {
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1, NULL,
                                                 kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context,
                       CGRectMake(-point.x, -point.y,
                                  CGImageGetWidth(image) / scale,
                                  CGImageGetHeight(image) / scale),
                       image);
    CGContextRelease(context);
    CGFloat alpha = pixel[0] / 255.0;
    return (alpha < 0.01);
}
@implementation CALayer (Extensions)

- (BOOL)containsNonTransparentPoint:(CGPoint)point scale:(CGFloat)scale {
    if (CGRectContainsPoint(self.bounds, point)) {
        if (!ImagePointIsTransparent((CGImageRef)self.contents, scale, point))
            return YES;
    }
    return NO;
}

@end
In short, we need to know about the scale. If we divide the image width and height by that scale, ta-dah, the hit test now works on Retina and Non-Retina devices!
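For completeness, here's how the earlier gesture handler might call the revised selector (a sketch; [UIScreen mainScreen].scale is one way to obtain the scale factor, assuming the view isn't scaled independently of the screen):

- (IBAction)handleSingleTap:(UIGestureRecognizer *)sender {
    CGPoint tapPoint = [sender locationInView:sender.view];
    tapPoint.y = sender.view.bounds.size.height - tapPoint.y; // flip y as before
    CGFloat scale = [UIScreen mainScreen].scale; // 2.0 on Retina, 1.0 otherwise
    for (CALayer *layer in myLayers) {
        if ([layer containsNonTransparentPoint:tapPoint scale:scale]) {
            NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);
            return;
        }
    }
}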
What I don't like about this is the mess I've had to make of that poor selector, now called containsNonTransparentPoint:scale:. As mentioned in the question, there is never any guarantee what a layer's contents will contain. In my case I am taking care to only use this on layers with a CGImageRef in there, but this won't fly in a more general/reusable case.
All this makes me wonder if CALayer is not the best place for this particular extension after all, at least in this new incarnation. Perhaps CGImage, with some layer smarts thrown in, would be cleaner. Imagine doing a hit test on a CGImage but returning the name of the first layer that had non-transparent content at that point. There's still the problem of not knowing which layers have CGImageRefs in them, so some hinting might be required. (Left as an exercise for yours truly and the reader!)
UPDATE: After some discussion with a developer at Apple, messing with layers in this fashion is in fact ill-advised. Contrary to what I previously learned (incorrectly?), multiple UIImageViews encapsulated within a UIView are the way to go here. (I always remember learning that you want to keep your views to a minimum. Perhaps in this case it isn't as big a deal.) Nevertheless, I'll keep this answer here for now, but will not mark it as correct. Once I try out and verify the other technique, I will share that here!
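In the meantime, a rough sketch of what that multiple-UIImageView arrangement might look like (untested; imageViews and PointIsTransparentInImage are hypothetical stand-ins, the latter along the lines of ImagePointIsTransparent above but taking a UIImage):

- (IBAction)handleSingleTap:(UIGestureRecognizer *)sender {
    CGPoint tapPoint = [sender locationInView:sender.view];
    for (UIImageView *imageView in self.imageViews) { // the seven color swaths
        CGPoint p = [sender.view convertPoint:tapPoint toView:imageView];
        if (CGRectContainsPoint(imageView.bounds, p) &&
            !PointIsTransparentInImage(imageView.image, p)) {
            // This image view was effectively tapped; act on it.
            return;
        }
    }
}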

Related

SCNView overlay causes tearing on resize

I'm using SceneKit to display a 3D scene (so far, a single quad), and the overlaySKScene to display a 2D overlay (which so far is just an SKNode with no geometry, though I had previously used a single SKLabelNode). It's a pretty simple view inside a bunch of nested NSSplitViews. And during normal use, it works brilliantly. The problem comes when I try to resize the window or split view - I get areas of red leaking through my nice background, which disappear shortly after.
I'm running this on a 2016 MBP with a Radeon Pro 460, and captured a frame using QuickTime's screen capture.
Disabling the overlay removes the red areas, which makes me think that it's the problem. Disabling the statistics bar or the scroller (a child view of the SCNView) does not have any impact. My most minimal SKScene subclass is defined as
@implementation TestOverlay

- (instancetype)initWithSize:(CGSize)size
{
    if (self = [super initWithSize:size])
    {
        // Setup the default state
        self.scaleMode = SKSceneScaleModeResizeFill;
        self.userInteractionEnabled = NO;
        self.backgroundColor = [NSColor blackColor];
    }
    return self;
}

@end
Has anybody run into similar issues before? Annoyingly, Apple's Fox2 sample doesn't have similar problems...
For true enlightenment, one needs to read the documentation carefully, then comment everything out and restore functionality one step at a time. And then read the documentation again.
In the discussion section of -[SCNSceneRendererDelegate renderer:willRenderScene:atTime:], the solution is obvious (emphasis mine):
You should only execute Metal or OpenGL drawing commands (and any setup required to perform them) in this method—the results of modifying SceneKit objects during this method are undefined.
Which is exactly what I was doing. I had misread this as modifying geometry, so I thought that assigning textures would be reasonable to do here (after all, "will" render means it hasn't started rendering yet, right?), and would therefore pick up the most recently created texture. And unfortunately, before I decided that I needed an overlay, this actually worked perfectly well! As soon as the overlay was added, however, the tearing appeared.
The correct place to update material properties seems to be -[SCNSceneRendererDelegate renderer:updateAtTime:]. Use that to avoid silly bugs like this one, folks!
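A minimal sketch of what that move looks like (quadNode and latestTexture are placeholders for your own properties):

// SCNSceneRendererDelegate: a safe place to mutate SceneKit objects.
- (void)renderer:(id<SCNSceneRenderer>)renderer updateAtTime:(NSTimeInterval)time
{
    // Assign the most recent texture here, not in renderer:willRenderScene:atTime:.
    self.quadNode.geometry.firstMaterial.diffuse.contents = self.latestTexture;
}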
Try resetting the SMC (System Management Controller). It helped me solve a similar problem, though with Autodesk Maya 2018 on a MBP 2017 (Radeon 560).
So, shut down and unplug your MBP.
On the built-in keyboard, press and hold the Shift-Option-Control keys on the left side and press the Power Button and hold all of these down for 10 seconds, then release the keys.
Connect the power adapter and then turn the Mac on normally.
Hope this helps.
P.S. In case it doesn't help, try to check/uncheck Automatic graphics switching option in System Preferences–Energy Saver to see if there's a difference.

UIImageView delay when adding to another UIView

This is my first post here - but I've been a reader for a long time. Thank you so much for this site! :-)
I am currently working on a port of my XNA-based 2D engine from WP7 to iOS (5). I would prefer not to use OpenGL directly, because I'd rather invest my time in gameplay than in technique. So I would be very happy to get an answer not involving OpenGL (directly).
The problem:
When I add a UIImageView to another UIView, there is a short delay before the UIImageView gets drawn. I assume this is due to caching that the UIView class performs before everything is converted internally to OpenGL and drawn.
What I want:
Tell the UIView (superview) to perform all necessary calculations for all subviews and then draw them all at once.
Currently the behaviour I observe is: calculate uiimageview_1, draw uiimageview_1, ..., calculate uiimageview_n, draw uiimageview_n.
Dummycode of what I want:
// put code here to tell superview to pause drawing
for (int i = 0; i < 400; i++)
{
    add UIImageView[i] to superview;
}
// put code here to tell superview to draw now
Possible workaround (but coming from C# & Windows, I have no idea how to implement it efficiently in Objective-C on iOS) - I am afraid this code is inefficient because large blocks of RAM would have to be transferred (per frame!) on Retina displays at native resolution:
for (int i = 0; i < 400; i++)
{
    add UIImageView[i] to superview;
}
// put code here to get a bitmap in ram from superview
// return bitmap and draw it in a view for the scenery/canvas
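For the record, the "get a bitmap" step would typically use renderInContext: along these lines (a sketch; the cost concern above stands, since this re-renders the whole hierarchy):

UIGraphicsBeginImageContextWithOptions(superview.bounds.size, NO, 0.0);
[superview.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// snapshot can now be drawn into the canvas/scenery view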
Any help on how to approach this "popping"-problem would be highly appreciated.
Thanks
Answering this will be a bit tricky. The exact behavior of Cocoa Touch is undocumented (an 'implementation detail' says Apple), so recommendations can only be given based on guessing and experience.
Instead of working with UIViews, you might want to try Core Animation. It abstracts 2D drawing and compositing operations without the overhead of a UI framework such as UIKit. It is also much easier to use than programming OpenGL directly. UIKit uses Core Animation to do its drawing, but augments it in many ways. This 'augmentation' might be exactly the reason why you're hitting performance problems.
Have a look at the tutorial and judge for yourself.
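As a concrete starting point, layer creation can be batched inside a CATransaction with implicit animations disabled, so all 400 sprites show up in the same frame (a sketch; spriteImages and containerView are stand-ins for your own objects):

#import <QuartzCore/QuartzCore.h>

[CATransaction begin];
[CATransaction setDisableActions:YES]; // suppress implicit animations per layer
for (int i = 0; i < 400; i++) {
    CALayer *sprite = [CALayer layer];
    UIImage *image = [spriteImages objectAtIndex:i]; // your texture source
    sprite.contents = (id)image.CGImage;
    sprite.frame = CGRectMake((i % 20) * 32, (i / 20) * 32, 32, 32); // example grid
    [containerView.layer addSublayer:sprite];
}
[CATransaction commit]; // everything composites together in one frame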
First of all - forget about UIKit's views. If you're trying to port an XNA game to iOS, I'd recommend familiarizing yourself with Cocos2D. It's more convenient than raw OpenGL, and it'll give you the performance you need.

Creating pattern on-the-fly?

Is there a way to create a colored fill pattern dynamically in Cocoa?
In particular instead of using a fixed pattern from an image file via
NSColor *fillPattern = [NSColor colorWithPatternImage:patternImage];
I'd like to create a pattern by dynamically choosing the appropriate colors at runtime.
Background is highlighting a colored object by rendering stripes or squares in the "opposite" color on top of it - whatever opposite might mean in this context, but that's a different story.
Being applied to potentially hundreds of objects in a drawing app it needs to be a rather fast method so I suppose just swapping colors in patternImage won't be good enough.
(It did work just fine back in QuickDraw..!)
Why not just draw to an in-memory image and use that for your pattern?
NSImage* patternImage = [[NSImage alloc] initWithSize:someSize];
[patternImage lockFocus];
//draw your pattern
[patternImage unlockFocus];
NSColor* patternColor = [NSColor colorWithPatternImage:patternImage];
//do something with the pattern color
//remember to release patternImage if you're not using ARC
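To make the "//draw your pattern" step concrete, here's how stripes in a runtime-chosen highlight color might look (a sketch; stripeColor stands in for whatever "opposite" color you compute):

// Build an 8x8 tile: top half gets the highlight color, bottom half stays clear.
NSImage* patternImage = [[NSImage alloc] initWithSize:NSMakeSize(8, 8)];
[patternImage lockFocus];
[stripeColor set];                  // your runtime-computed "opposite" color
NSRectFill(NSMakeRect(0, 4, 8, 4)); // one horizontal stripe per tile
[patternImage unlockFocus];
NSColor* patternColor = [NSColor colorWithPatternImage:patternImage];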
Performance-wise, you generally should be looking at optimising drawing by paying attention to the rect passed in to drawRect: and making sure you only draw what is necessary. If you do that then I can't see the pattern drawing performance being a major problem.
Background is highlighting a colored object by rendering stripes or squares in the "opposite" color on top of it - whatever opposite might mean in this context, but that's a different story.
You'll want to use one of Quartz's blend modes (most of them are present in Photoshop, Pixelmator, and Opacity, so you can experiment in one of those apps to determine which one you need).
You should then be able to fill with a static image—or a dynamic pattern, if it's really necessary—and Quartz will blend it in appropriately.
There's no way to do this in AppKit alone; you'll need to get a CGContext from the current NSGraphicsContext and do it in Quartz.
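A sketch of that (kCGBlendModeDifference is just one candidate; highlightRect is a hypothetical area to highlight):

CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextSaveGState(ctx);
CGContextSetBlendMode(ctx, kCGBlendModeDifference); // experiment to find the right mode
CGContextSetRGBFillColor(ctx, 1.0, 1.0, 1.0, 1.0);  // white + difference inverts what's below
CGContextFillRect(ctx, NSRectToCGRect(highlightRect));
CGContextRestoreGState(ctx);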

NSTabView with background color

As discussed elsewhere, NSTabView does not have a setBackgroundColor method, and subclassing NSTabView and overriding drawRect: to control it no longer works - it does not paint the top 10%, the bit just below the segmented control button.
Now I am a bit surprised by the number of workarounds I needed to solve this; see
code: https://github.com/dirkx/CustomizableTabView/blob/master/CustomizableTabView/CustomizableTabView.m
and am wondering if I went down the wrong path, and how to do this better and more simply:
The NSSegmentStyleTexturedSquare seems to yield a semi-transparent segmented control, which means I need extra work to hide any bezel lines (lines 240, 253). Is there a better way to do this, i.e. negate its transparency? Or is there a way I can use the actual/original segmented choice button?
I find that the colors I need - like [NSColor windowBackgroundColor] - are not set to anything useful (i.e. that one is transparent), so right now I hardcode them (lines 87, 94). Is there a better way to do this?
I find I need a boatload of fluffy methods to keep things in sync (lines 128, 134, etc.). Can this be avoided?
I find that mimicking the cleverness on rescaling means I need to keep a constant eye on the segmented control box and remove/resize it, and even then it is not quite as good as the original. Is there a better way to do this than line 157 - i.e. hear about resizing, rather than doing it all the time?
The segmented control fades dark when focus is removed from the window - unlike the real McCoy. Can that easily be prevented? Is there a cheap way to track this?
Or is this the wrong approach - should I focus on just leaving a transparent hole here and let the NSTabViewItem draw a background? But in any case, I'd still have the issue with the segmented control box - or is there then a way to make that be the default again?
When trying this, I get stuck on the top 20-30 pixels being drawn in the 'real' window background color - which is 'transparent' - and hence the color will not run all the way to the top, behind the segment bar and up to the bezel, but instead stops some 8 pixels below the bottom of the segment controls.
Feedback appreciated - as this feels far off/suboptimal for such a simple thing.
Thanks a lot. Brownie points for hacking/forking the github code :) :) :) As a line of running code says more than a thousand words.
Dw.
PSMTabBarControl is probably the best workaround for you. I have created several custom tab views, but Cocoa does not play well with this control. PSMTabBarControl has been updated to support Xcode 4. https://github.com/ciaran/psmtabbarcontrol
Have you tried setting the background color of its underlying CALayer? (Make it a layer-backed view, if it isn't already, by setting wantsLayer = YES.)
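Something along these lines, for instance (untested against the top-strip problem described in the question; note that -CGColor on NSColor requires 10.8, otherwise convert by hand):

[tabView setWantsLayer:YES]; // make it layer-backed
tabView.layer.backgroundColor = [[NSColor yellowColor] CGColor];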
If your situation can tolerate some fragility, a very simple and quick approach is to subclass NSTabView and manually adjust the frame of the item subviews. This gives each item a seamless yellow background:
- (void)drawRect:(NSRect)dirtyRect {
    static const NSRect offsetRect = { { -2, -16 }, { 4, 18 } };
    NSRect rect = self.contentRect;
    rect.origin.x += offsetRect.origin.x;
    rect.origin.y += offsetRect.origin.y;
    rect.size.width += offsetRect.size.width;
    rect.size.height += offsetRect.size.height;
    [[NSColor yellowColor] set];
    NSRectFill(rect);
    [super drawRect:dirtyRect];
}
A future change in the metrics of NSTabView would obviously be a problem so proceed at your own risk!

Drastic slowdown using layer backed NSOpenGLView

I needed to display some Cocoa widgets on top of an NSOpenGLView in an existing app. I followed the example in Apple's LayerBackedOpenGLView example code. The NSOpenGLView is given a backing layer using:
[glView setWantsLayer:YES];
Then the Cocoa NSView with the widgets is added as a subview of the glView. This is basically working, and is twice as fast as my previous approach, where I added the NSView containing the widgets to a child window of the window containing the glView (that was the other solution I found on the web).
There were two problems.
The first is that some textures that I use with blending were no longer getting the blend right. After searching around a bit it looked like I might need to clear the alpha channel of the OpenGLView. This bit of code that I call after drawing a frame seems to have fixed this problem:
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE); // only alpha will be affected
glClearColor(0, 0, 0, 1);                           // clear alpha to fully opaque
glClear(GL_COLOR_BUFFER_BIT);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);    // restore the color mask
Can someone explain why this is needed when using the CALayer, but not without?
The second problem I don't have a solution for. It seems that when I pan to the part of the scene where problem #1 was observed, the frame rate drops from something like 110 FPS down to 10 FPS. Again, this only started happening after I added the backing layer. This doesn't always happen; sometimes the FPS stays high when panning over this part of the scene, but that is rare. I assume it must have something to do with how the textures here are blended, but I have no idea what.
Any thoughts?
I did figure out a workaround to the slowdown. The OpenGL view has a HUD (heads-up display) view that goes on top of it. I had installed another NSView as a subview of it. Both the HUD and the subview have lots of alpha manipulation, and for some reason that tickled a real slowdown in compositing the layers. I could easily install this subview as a subview of the OpenGL view instead, and when I did, everything sped up again. So although I don't fully understand the slowdown, I do have a good workaround for it.
