As discussed elsewhere, NSTabView does not have a setBackgroundColor method, and subclassing NSTabView with a drawRect: override no longer works: it does not paint the top 10% or so, the strip just below the segmented control.
Now I am a bit surprised by the number of workarounds I had to pile up to solve this; see the code:
https://github.com/dirkx/CustomizableTabView/blob/master/CustomizableTabView/CustomizableTabView.m
and I am wondering if I went down the wrong path, and how to do this better and more simply:
The NSSegmentStyleTexturedSquare style seems to yield a semi-transparent segmented control, which means I need to do extra work to hide any bezel lines (lines 240, 253).
Is there a better way to do this, i.e. negate its transparency?
Or is there a way I can use the actual/original segmented-choice button?
I find that the colours I need, like [NSColor windowBackgroundColor], are not set to anything useful (that one is transparent), so right now I hardcode them (lines 87, 94).
Is there a better way to do this?
I find I need a boatload of fluffy methods to keep things in sync (lines 128, 134, etc.).
Can this be avoided?
I find that mimicking the built-in cleverness around rescaling means I need to keep a constant eye on the segmented control and remove/resize it, and even then it is not quite as good as the original.
Is there a better way to do this than line 157, i.e. getting notified about resizing rather than checking all the time? (A sketch of what I imagine the notification route would look like follows below.)
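For the curious, an untested sketch of that notification idea; _segmentedControl and -recomputeSegmentedControlFrame are made-up names standing in for whatever the subclass actually uses:

// Untested sketch: get told about resizes instead of polling.
- (void)startWatchingForResizes {
    [self setPostsFrameChangedNotifications:YES];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(frameDidChange:)
                                                 name:NSViewFrameDidChangeNotification
                                               object:self];
}

- (void)frameDidChange:(NSNotification *)note {
    // Re-derive the control's frame from the tab view's new frame.
    [_segmentedControl setFrame:[self recomputeSegmentedControlFrame]];
}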
The segmented control fades dark when focus leaves the window, unlike the real McCoy.
Can that easily be prevented? Is there a cheap way to track this?
Or is this the wrong approach altogether? Should I focus on just leaving a transparent hole here and let the NSTabViewItem draw a background? But in any case, I still have the issue with the segmented control box; or is there then a way to make that the default again?
When trying this, I get stuck on the top 20-30 pixels being drawn in the 'real' window background colour, which is 'transparent'. Hence the colour will not run all the way to the top, behind the segment bar and up to the bezel, but instead stops some 8 pixels below the bottom of the segment controls.
Feedback appreciated, as this feels so far off/suboptimal for such a simple thing.
Thanks a lot. Brownie points for hacking/forking the GitHub code :) :) :) As a line of running code says more than a thousand words.
Dw.
PSMTabBarControl is probably the best workaround for you. I have created several custom tab views, but Cocoa does not play well with this control. PSMTabBarControl has been updated to support Xcode 4: https://github.com/ciaran/psmtabbarcontrol
Have you tried setting the background color of its underlying CALayer? (Make it a layer-backed view, if it isn't already, by setting wantsLayer = YES.)
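A minimal sketch of that idea (untested; whether the layer's color actually shows through the strip under the segmented control is exactly what you'd be verifying — tabView is a stand-in for your view):

// Sketch: make the tab view layer-backed and color the layer directly.
[tabView setWantsLayer:YES];
CGColorRef color = CGColorCreateGenericRGB(1.0, 1.0, 0.8, 1.0); // pale yellow
[[tabView layer] setBackgroundColor:color];
CGColorRelease(color);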
If your situation can tolerate some fragility, a very simple and quick approach is to subclass NSTabView and fill a slightly enlarged copy of its contentRect in drawRect:. This gives each item a seamless yellow background:
- (void)drawRect:(NSRect)dirtyRect {
    // Empirically determined fudge: grow the content rect so the fill
    // reaches up behind the segmented control and out to the bezel.
    static const NSRect offsetRect = (NSRect) { -2, -16, 4, 18 };
    NSRect rect = self.contentRect;
    rect.origin.x    += offsetRect.origin.x;
    rect.origin.y    += offsetRect.origin.y;
    rect.size.width  += offsetRect.size.width;
    rect.size.height += offsetRect.size.height;
    [[NSColor yellowColor] set];
    NSRectFill(rect);
    [super drawRect:dirtyRect];
}
A future change in the metrics of NSTabView would obviously be a problem, so proceed at your own risk!
Related
In my app I want to provide text scaling in a layer-backed NSTextView, like Apple's TextEdit. I use an analogue of its ScalingScrollView. I also need to create some CALayer overlays on self.window.contentView. All is OK until I call [self.window.contentView setWantsLayer:YES].
(Screenshot: before [setWantsLayer:YES])
(Screenshot: after [setWantsLayer:YES])
I have no idea how to fix this problem.
I've been searching for a solution to a similar issue, too. Finally, I discovered that layer-backed views must be positioned on integral pixels, not on subpixels.
E.g. if you dynamically calculate the frame of a layer-backed view:
NSMakeRect((self.frame.size.width - 350)/2, (self.frame.size.height - 150)/2, 350, 150)
you may end up with non-integral values, so you should do something like:
NSMakeRect(floor((self.frame.size.width - 350)/2), floor((self.frame.size.height - 150)/2), 350, 150)
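If I'm not mistaken, you can also let AppKit do the rounding for you: NSIntegralRect expands a rect outward to whole-number coordinates (and on 10.7+ there is also NSView's backingAlignedRect:options:). A minimal sketch, where overlayView stands in for the layer-backed view:

// Sketch: snap a computed frame to integral coordinates before assigning it.
NSRect raw = NSMakeRect((self.frame.size.width - 350) / 2,
                        (self.frame.size.height - 150) / 2,
                        350, 150);
[overlayView setFrame:NSIntegralRect(raw)]; // expanded to integer bounds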
Apologies for the noob question; coming from an iOS background I'm struggling a little with OS X.
The good news: I have an NSScrollView with a large NSView as its documentView. I have been adjusting the bounds of the contentView to effectively zoom in on the documentView, and all works well with respect to anything I do in drawRect: (of the documentView).
The not-so-good news: I have now added another NSView as a child of the large documentView and expected it to simply zoom just like it would in iOS land, but it doesn't. If anyone can help fill in the rather large gap in my understanding of all this, I'd be extremely grateful.
Thanks.
[UPDATE] Fixed it myself. The 'problem' was that Auto Layout (layout constraints) was enabled. Once I disabled it and set the autosizing appropriately, everything was OK. I guess I should learn about layout constraints...
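For anyone landing here, a hedged sketch of what that fix might look like in code (view names are placeholders; in a nib you would instead uncheck "Use Auto Layout" and set the springs/struts in the Size inspector):

// Sketch: rely on the old springs-and-struts model for the subview,
// so no constraints fight the zooming done via the contentView's bounds.
[childView setTranslatesAutoresizingMaskIntoConstraints:YES]; // the default for code-created views
[childView setAutoresizingMask:(NSViewWidthSizable | NSViewHeightSizable)];
[documentView addSubview:childView];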
I know this is very old, but I just implemented mouse-scroll zooming using the following, after spending days trying to figure it out using various solutions posted by others, all of which had fundamental issues. As background, I am using CALayers in an NSView subclass, with a large PDF building layout in the background and 100+ draggable CALayer objects overlaid on top of that.
The zooming is instant and smooth, and everything scales perfectly with none of the pixellation I was expecting from something called 'magnification'. I wasted many days on this.
override func scrollWheel(with event: NSEvent) {
    // Only zoom while the Option key is held; otherwise scroll as usual.
    guard event.modifierFlags.contains(.option) else {
        super.scrollWheel(with: event)
        return
    }
    let dy = event.deltaY
    if dy != 0.0 {
        // Scale the wheel delta down to a gentle zoom step, and magnify
        // around the pointer so the point under it stays put.
        let magnification = self.scrollView.magnification + dy / 30
        let point = self.scrollView.contentView.convert(event.locationInWindow, from: nil)
        self.scrollView.setMagnification(magnification, centeredAt: point)
    }
}
LOL, I had exactly the same problem. I lost like two days messing around with Auto Layout. After I read your update I went in and just added another NSBox to the view, and it gets drawn correctly and zooms as well.
Though, does it work for NSImageViews as subviews as well?
First, check out this very handy extension to CALayer from elsewhere on SO. It helps you determine if a point in a layer's contents-assigned CGImageRef is or isn't transparent.
n.b.: There is no guarantee that a layer's contents will be, or will respond as if it were, a CGImageRef. (This has implications for broader use of the extension referenced above, granted.) In my case, however, I know that the layers I'm testing have contents that were assigned a CGImageRef. (Hopefully that can't change out from under me after assignment! Plus I notice that contents is retained.)
OK, back to the problem at hand. Here's how I'm using the extension. For starters, I've changed the selector from containsPoint: to containsNonTransparentPoint: (I need to keep the original method around.)
Now, I have a UIImageView subclass that uses seven CALayer objects. These are used for opacity-based animations (pulsing/glowing effects and on/off states). Each of those seven layers has a known CGImageRef in its contents that effectively "covers" (air quotes) one part of the entire view with its own swath of color. The rest of each image in its respective layer is transparent.
In the subclass, I register for single tap gestures. When one arrives, I walk through my layers to see which one was effectively tapped (that is, which one has a non-transparent point where I tapped, first one found wins) and then I can do whatever needs doing.
Here's how I handle the gesture:
- (IBAction)handleSingleTap:(UIGestureRecognizer *)sender {
    CGPoint tapPoint = [sender locationInView:sender.view];

    // Flip y so 0,0 is at lower left. (Required by layer method below.)
    tapPoint.y = sender.view.bounds.size.height - tapPoint.y;

    // Figure out which layer was effectively tapped. First match wins.
    for (CALayer *layer in myLayers) {
        if ([layer containsNonTransparentPoint:tapPoint]) {
            NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);
            // We got our layer! Do something useful with it.
            return;
        }
    }
}
The good news? All of this works beautifully on the iPhone Simulator with iOS 4.3.2. (FWIW, I'm on Lion running Xcode 4.1.)
However, on my iPhone 4 (with iOS 4.3.3), it doesn't even come close! None of my taps seem to match up with any of the layers I'd expect them to.
Even if I try the suggestion to use CGContextSetBlendMode when drawing into the 1x1 pixel context, no dice.
I am hoping it's pilot error, but I have yet to figure out what the disparity is. The taps do have a pattern but not a discernible one.
Perhaps there's a data boundary issue. Perhaps I have to do something other than flip the y coordinate to the lower-left of the image. Just not sure yet.
If anyone can please shed some light on what might be amiss, I would be most appreciative!
UPDATE, 22 September 2011: First ah-ha moment acquired! The problem isn't Simulator-vs-iPhone. It's Retina vs. non-Retina! The same symptoms occur in the Simulator when using the Retina version. Perhaps the solution centers around scaling (CTM?) in some way/shape/form. The Quartz 2D Programming Guide also advises that "iOS applications should use UIGraphicsBeginImageContextWithOptions." I feel like I'm very close to the solution here!
OK! First, the problem wasn't Simulator-vs-iPhone. Rather, it was Retina vs. Non-Retina. The same symptoms occur in the Simulator when using the Retina version. Right away, one starts to think the solution has to do with scaling.
A very helpful post over on the Apple Dev Quartz 2D forum (along similar "be mindful of scaling" lines) steered me toward a solution. Now, I'm the first to admit, this solution is NOT pretty, but it does work for Retina and Non-Retina cases.
With that, here's the revised code for the aforementioned CALayer extension:
//
// Checks image at a point (and at a particular scale factor) for transparency.
// Point must be with origin at lower-left.
//
BOOL ImagePointIsTransparent(CGImageRef image, CGFloat scale, CGPoint point) {
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1,
                                                 NULL, kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context,
                       CGRectMake(-point.x, -point.y,
                                  CGImageGetWidth(image) / scale,
                                  CGImageGetHeight(image) / scale),
                       image);
    CGContextRelease(context);
    CGFloat alpha = pixel[0] / 255.0;
    return (alpha < 0.01);
}

@implementation CALayer (Extensions)

- (BOOL)containsNonTransparentPoint:(CGPoint)point scale:(CGFloat)scale {
    if (CGRectContainsPoint(self.bounds, point)) {
        if (!ImagePointIsTransparent((CGImageRef)self.contents, scale, point))
            return YES;
    }
    return NO;
}

@end
In short, we need to know about the scale. If we divide the image width and height by that scale, ta-dah, the hit test now works on Retina and Non-Retina devices!
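To illustrate, here's a sketch of how the tap handler from the question changes (myLayers and tapPoint are as before; only the scale line and the selector differ):

// Sketch: feed the main screen's scale to the revised hit test.
CGFloat scale = [[UIScreen mainScreen] scale]; // 2.0 on Retina, 1.0 otherwise
for (CALayer *layer in myLayers) {
    if ([layer containsNonTransparentPoint:tapPoint scale:scale]) {
        // First non-transparent hit wins.
        return;
    }
}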
What I don't like about this is the mess I've had to make of that poor selector, now called containsNonTransparentPoint:scale:. As mentioned in the question, there is never any guarantee what a layer's contents will contain. In my case I am taking care to only use this on layers with a CGImageRef in them, but this won't fly in a more general/reusable case.
All this makes me wonder if CALayer is not the best place for this particular extension after all, at least in this new incarnation. Perhaps CGImage, with some layer smarts thrown in, would be cleaner. Imagine doing a hit test on a CGImage but returning the name of the first layer that had non-transparent content at that point. There's still the problem of not knowing which layers have CGImageRefs in them, so some hinting might be required. (Left as an exercise for yours truly and the reader!)
UPDATE: After some discussion with a developer at Apple, messing with layers in this fashion is in fact ill-advised. Contrary to what I previously learned (incorrectly?), multiple UIImageViews encapsulated within a UIView are the way to go here. (I always remember learning that you want to keep your views to a minimum. Perhaps in this case it isn't as big a deal.) Nevertheless, I'll keep this answer here for now, but will not mark it as correct. Once I try out and verify the other technique, I will share that here!
I needed to display some Cocoa widgets on top of an NSOpenGLView in an existing app. I followed the example in Apple's LayerBackedOpenGLView example code. The NSOpenGLView is given a backing layer using:
[glView setWantsLayer:YES]
Then the Cocoa NSView with the widgets is added as a subview of the glView. This is basically working, and is twice as fast as my previous approach, where I added the NSView containing the widgets to a child window of the window containing the glView (the other solution I found on the web).
There were two problems.
The first is that some textures I use with blending were no longer blending correctly. After searching around a bit, it looked like I might need to clear the alpha channel of the OpenGL view. This bit of code, which I call after drawing a frame, seems to have fixed the problem:
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE); // Only alpha will be affected.
glClearColor(0, 0, 0, 1);                           // Clear alpha to fully opaque.
glClear(GL_COLOR_BUFFER_BIT);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);    // Restore the color mask.
Can someone explain why this is needed when using the CALayer, but not without?
The second problem I don't have a solution for. It seems that when I pan to the part of the scene where problem #1 was observed, the frame rate drops from something like 110 FPS down to 10 FPS. Again, this only started happening after I added the backing layer. It doesn't always happen; sometimes the FPS stays high when panning over this part of the scene, but that is rare. I assume it must have something to do with how the textures here are blended, but I have no idea what.
Any thoughts?
I did figure out a workaround for the slowdown. The OpenGL view has a HUD (heads-up display) view that goes on top of it, and I had installed another NSView as a subview of it. Both the HUD and the subview do lots of alpha manipulation, and for some reason that tickled a real slowdown in compositing the layers. I could just as easily install this subview as a subview of the OpenGL view, and when I did, everything sped up again. So although I don't fully understand the slowdown, I do have a good workaround for it.
I am currently creating a simple Cocoa window programmatically, with an NSOpenGLView attached to it. If I create the window's style mask with NSResizableWindowMask and call [m_window setShowsResizeIndicator:YES], I'd expect to see the resize indicator in the bottom right. The resizing works, but the indicator does not show at all. I also checked simple NSOpenGLView examples and they have the same problem, so I am pretty certain it's not a bug in my code but rather a problem with a view that has 100% width and height. Is there any way to position the indicator on top of the NSOpenGLView?
Short answer: NSOpenGLView will cover the resize indicator, no matter what you do. You can fake it by positioning a custom texture at the bottom-right corner of your view.
See this thread for detailed discussion and some example code:
http://www.idevgames.com/forums/thread-6160.html
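As a rough illustration of the idea (an untested sketch only, drawing plain lines rather than the texture suggested above, and assuming a legacy fixed-function OpenGL context), the fake indicator could be drawn after the scene each frame:

// Sketch: draw three diagonal "grip" strokes in the bottom-right corner.
- (void)drawFakeResizeIndicator {
    NSSize size = [self bounds].size;

    // Switch to a 2D overlay projection in view coordinates.
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, size.width, 0, size.height, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glDisable(GL_DEPTH_TEST);
    glColor4f(0.5f, 0.5f, 0.5f, 1.0f);
    glBegin(GL_LINES);
    for (int i = 1; i <= 3; i++) {
        glVertex2f(size.width - 4.0f * i, 0.0f); // bottom edge
        glVertex2f(size.width, 4.0f * i);        // right edge
    }
    glEnd();

    // Restore the previous matrices.
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}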