I have an NSTableView with floating group rows, which I can make transparent easily. However, no matter what I do, setting the row view's backgroundFilters to contain a CIGaussianBlur has no effect.
The view containing the NSTableView (and the accompanying NSScrollView and NSClipView) has wantsLayer set to YES, and I have confirmed in didAddRowView that the rowView has a layer with the backgroundFilter set.
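A minimal sketch of the kind of setup described above; the delegate wiring, the group-row check, and the blur radius are placeholder assumptions, not code from the original post:

// Sketch only: transparent group row with a background blur that never shows up.
- (void)tableView:(NSTableView *)tableView didAddRowView:(NSTableRowView *)rowView forRow:(NSInteger)row {
    if ([self tableView:tableView isGroupRow:row]) {      // hypothetical group-row check
        rowView.backgroundColor = [NSColor clearColor];   // transparency works fine
        CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
        [blur setDefaults];
        [blur setValue:@8.0 forKey:kCIInputRadiusKey];
        rowView.layer.backgroundFilters = @[blur];        // this has no visible effect
    }
}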
I can reproduce this blur with any other plain NSView; it just won't work when applied to the table view's row view. I just want to understand why, and whether I can do something about it.
I would guess that since the group row view is sometimes drawn with partial transparency in the table view, they have done some hacks where they draw it into an offscreen buffer and then composite that buffer onto the screen themselves, so it doesn't matter what effects you've added to the background, because the offscreen buffer is always just filled with transparency.
I've been working hard on this problem, searching the internet for three days, and I'm now running out of resources.
I'm currently porting an iOS app to macOS (deployment target 10.11). The problem:
I have a view hierarchy as below:
NSScrollView
    documentView
        grouping view
            tiling view one
                array of NSImageView (each one being a tile)
            tiling view two
                array of NSImageView (each one being a tile)
The two tiling views overlap completely; depending on the UI state, one may be hidden, or the second one has its opacity set below 1.0 to blend the two tiled views.
Because of the opacity requirement, as well as performance, the views are CALayer backed. This is done from IB, where the top NSScrollView is checked for Core Animation; from there, the whole view tree is (implicitly) layer backed.
Scrolling, magnifying, etc. all work as expected.
Then I need to make an image out of the document view to generate SCNMaterial content (a 3D view).
On iOS, renderInContext: on the document view works as expected and allows an image to be created.
On AppKit, the context stays transparent, and so does the image: it is a valid object, but it looks as if it were filled with clearColor.
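The failing render presumably looks something like the following sketch (the view names are assumptions):

// Sketch of the rendering call that stays transparent on AppKit.
NSView *documentView = self.scrollView.documentView;
NSImage *image = [[NSImage alloc] initWithSize:documentView.bounds.size];
[image lockFocus];
CGContextRef ctx = [[NSGraphicsContext currentContext] CGContext];
[documentView.layer renderInContext:ctx];   // produces only transparency here
[image unlockFocus];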
If documentView.canDrawSubviewsIntoLayer is set at creation, the view tree renders OK. This can't be the solution, since it prevents the opacity setting from working.
Even when one tiling view is hidden (so there is no opacity compositing), rendering fails.
I read that some kinds of views are not rendered; I don't use them. There are no filters and no masks, apart from a default masksToBounds setting on the whole view tree. I don't know why or where it is set. I tried to unset it on all the views at creation, with no success; it gets set again somehow on the grouping view below the documentView. This may be the problem, but why is this property out of my control?
The alternative way to render the view tree, bitmapImageRepForCachingDisplayInRect: / cacheDisplayInRect:toBitmapImageRep:, behaves the same way: OK with canDrawSubviewsIntoLayer, broken otherwise. Apple's code examples for making a texture out of a view use one of these two methods.
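For reference, the cache-based variant mentioned above looks roughly like this (again, the view name is an assumption):

// Alternative rendering path -- behaves the same way as renderInContext: here.
NSView *documentView = self.scrollView.documentView;
NSRect bounds = documentView.bounds;
NSBitmapImageRep *rep = [documentView bitmapImageRepForCachingDisplayInRect:bounds];
[documentView cacheDisplayInRect:bounds toBitmapImageRep:rep];
NSImage *image = [[NSImage alloc] initWithSize:bounds.size];
[image addRepresentation:rep];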
There are plenty of posts, mainly on SO, complaining about CALayer's renderInContext: and offering code to custom-render a layer tree. Nevertheless, most are quite old, and by now there must be a simple, standard way to achieve this.
Edit: among other attempts, I tried setting wantsLayer on each view, with no success.
Well, as often happens when you post a request for help, you finally find the answer yourself… Here it is:
Given the view tree listed in the question, I managed to render it by setting canDrawSubviewsIntoLayer on each tiling view. This way, the opacity compositing between layers works, AND the views are rendered.
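A minimal sketch of the fix, assuming the two tiling views are reachable as properties (the names are made up, not from the question):

// Flatten each tiling view's subtree into its own backing layer.
self.tilingViewOne.canDrawSubviewsIntoLayer = YES;
self.tilingViewTwo.canDrawSubviewsIntoLayer = YES;
// Opacity compositing between the two tiling views still works,
// and renderInContext: / cacheDisplayInRect: now produce a non-empty image.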
I post this as an answer, because it solves the problem.
As for WHY this works, here is my guess, though it is not an authoritative answer: each NSImageView tile (the subviews of a tiling view) is positioned by its frame origin. This is not a transform on the layer but a position of the frame in the coordinates of the tiling view. This is a difference from the iOS version of the code, where the tiles are positioned by a transform. I'm going to test further and see whether using a transform rather than a frame origin makes a difference to renderInContext:.
Edit: after more testing it appears that the iOS version has a transform AND an offset to the tile.
So the only clue is that the layer system gets lost when rendering in context on OS X when some subviews have an offset?
Summary:
The key point of the solution is finding the right view on which to set canDrawSubviewsIntoLayer.
I have an NSImageView that encloses a dynamically generated NSImage. When I change the image displayed, I would like to dynamically resize the NSImageView so that it precisely wraps the new image, and also have the enclosing window resize so that the space between the NSImageView and every other part of the window remains constant. (Note that the image view's scaling is set to none, as I want its image to always be shown at its physical size.) To illustrate, suppose I begin with a small image in my image view. If I replace it with a large image, I wish for both the NSImageView and enclosing window to resize to accommodate it, without affecting the sizing or spacing of any other element.
Currently, I call the following method whenever the magnification level is changed via the stepper or associated text field. Though regenerating the image and loading it into the NSImageView works fine, resizing the NSImageView and enclosing window do not.
- (void)updateMagnification:(NSUInteger)newMagnification {
    // Keep values of stepper and associated text field synchronized.
    [self.magnificationStepper setIntegerValue:newMagnification];
    [self.magnificationTextField setIntegerValue:newMagnification];

    // Regenerate image based on newMagnification and display in image view.
    [self.qrGenerator generateWithBlockPixelWidth:newMagnification];
    self.imageView.image = self.qrGenerator.image;

    // Adjust frame size of image view.
    NSLog(@"Old size: frame=%@ image=%@", NSStringFromSize(self.imageView.frame.size), NSStringFromSize(self.imageView.image.size));
    [self.imageView setFrameSize:NSMakeSize(self.imageView.image.size.width, self.imageView.image.size.height)];
    NSLog(@"New size: frame=%@ image=%@", NSStringFromSize(self.imageView.frame.size), NSStringFromSize(self.imageView.image.size));

    //[self.window setViewsNeedDisplay:YES];
    //[self.imageView setNeedsDisplay:YES];
    //[self.imageView.superview setNeedsDisplay:YES];
}
Regardless of whether I increase or decrease the magnification value, causing the image to grow larger or smaller, the size of both the NSImageView and the window remains constant. The three setNeedsDisplay: calls that are commented out have no effect even when uncommented; they were my attempt to determine whether the problem was related to the controls not redrawing once their size was adjusted. Curiously, my NSLog calls indicate that the imageView's frame does indeed take the requested size, for they yield this output:
2012-06-12 11:02:50.651 Presenter[4660:603] Old size: frame={422, 351} image={168, 168}
2012-06-12 11:02:50.651 Presenter[4660:603] New size: frame={168, 168} image={168, 168}
The actual display, of course, does not change.
Interestingly, changing the imageView's frame style to "none," either in Interface Builder or programmatically with [self.imageView setImageFrameStyle:NSImageFrameNone], gives me behaviour closer to what I desire. Making the image larger, so that it would otherwise be clipped by the image view, does indeed result in the image view and window growing larger. From this point, however, making the image smaller does not result in the image view or window resizing accordingly. "None" is the only image frame style that displays this somewhat correct behaviour; all four of the bordered styles (i.e., bevel [which is the default], button, groove, and photo) show the same entirely incorrect behaviour described above.
I came across someone with a similar problem. Oddly, he only observed the problematic behaviour when his image view's frame style was set to NSImageFrameNone, whereas this is the only value that gives me somewhat-correct behaviour. I tried modifying the frame style to a non-none value before the resize and to none afterward, as this resolved the other person's problem, but for me this yielded the same behaviour as when I simply set the frame style to "none" initially.
Any help you provide will be much appreciated. Thanks!
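For illustration only (this is not from the original post), one way to also resize the window by the same delta as the image view, assuming the image view is positioned by autoresizing rather than constraints, would be something like:

// Hypothetical sketch: grow or shrink the window by the image view's size delta.
NSSize newSize = self.imageView.image.size;
NSSize oldSize = self.imageView.frame.size;
CGFloat dw = newSize.width - oldSize.width;
CGFloat dh = newSize.height - oldSize.height;

[self.imageView setFrameSize:newSize];

NSRect windowFrame = self.window.frame;
windowFrame.size.width += dw;
windowFrame.size.height += dh;
windowFrame.origin.y -= dh;   // keep the window's top edge where it was
[self.window setFrame:windowFrame display:YES];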
I have a 3D model rendered in a UIView. When I pinch the model it zooms correctly, but when zoomed in I'd like the ability to pan/scroll the view so that I can see the parts of the model that are outside the boundaries of the view. I switched the UIView to a UIScrollView, and on the pinch event I update the content size with the scale factor. This works great and I can now scroll the view. My problem is that panning to the outer edges of the content view still shows the 3D model clipped at the size of the initial view. I hope this makes sense...
Does anyone have an idea what might be needed to render the zoomed in parts of the chair, which should now be visible (after panning)?
I was able to get this working after setting the UIView's bounds as well as the UIScrollView's content size. That is, the scroll view contained a UIView, which contained the 3D model. I was resizing the scroll view, but not the UIView. Both needed to be set for this to work.
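A rough sketch of what that fix looks like in the pinch handler (the property names and the scale handling are assumptions):

// Resize both the hosted view and the scroll view's content size.
- (void)handlePinch:(UIPinchGestureRecognizer *)pinch {
    CGSize zoomedSize = CGSizeMake(self.baseSize.width * pinch.scale,
                                   self.baseSize.height * pinch.scale);
    self.modelView.bounds = CGRectMake(0, 0, zoomedSize.width, zoomedSize.height);
    self.scrollView.contentSize = zoomedSize;   // without resizing modelView as well,
                                                // the model stays clipped at its old size
}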
I have an NSWindow with a 32px bottom content border. Inside the window's view, I have two custom subviews. Each of them is layer backed, and I'm tracking the mouse with an NSTrackingArea. Part of what I'm doing is some mouse-over effects with Core Animation. This is not a problem in general, but I noticed something kind of strange and wondered if anyone knows why it is happening.
When setting up the tracking area and the mouse-over method, I hit-test the root layer and log the layer's name so I can see whether the geometry of the various sublayers holds water when I resize the window. Internally, they seem (and look) fine. Visually, they are in the right place, but when I move the mouse, I notice that although the mouse is physically over a layer, hitTest is returning whatever layer is 32 px above it. However, if I remove the content border, it works as you would expect and the correct layer is returned.
I obviously need the content border, so I have a very simple workaround which involves offsetting the hitTest point by 32px. This works fine, but it just seems weird that the presence of a content border is skewing the coordinate system of these subviews. Does anyone know why this could be happening?
NSEvent returns mouse locations relative to the window's coordinate system, not the targeted view's. You probably need to call convertPoint:fromView: (or convertRect:fromView: for rects) with a nil view to convert from window coordinates to the view's coordinates before hit testing.
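In code, the conversion looks something like this (the view name is an assumption):

- (void)mouseMoved:(NSEvent *)event {
    // locationInWindow is in window coordinates; convert it to the view first.
    NSPoint viewPoint = [self.trackedView convertPoint:event.locationInWindow fromView:nil];
    // Hit-test the view's backing layer with the view-local point.
    CALayer *hit = [self.trackedView.layer hitTest:NSPointToCGPoint(viewPoint)];
    NSLog(@"hit layer: %@", hit.name);
}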
I have an NSView as the document of an NSScrollView. I would like to have a few pixels of padding at the top and bottom of the visible part of the view, regardless of where the scroller is positioned (not just at the top and bottom of the document as described here). For an example of an app that does this, look at Terminal.app. Regardless of the background color of the text, the top two visible rows of pixels are always the default background color.
I know I could simply draw everything two pixels lower and draw a rectangle at the top and bottom of the document-visible rect, but that will require changing a lot of complex code that I didn't write. Simpler ideas are welcomed!
The answer to the question you linked is actually a good solution for this problem too. In fact if your view is anything but an NSTextView, I'd say it's easier to implement.
Specifically: make your actual document view a subview of some other view, leaving room around the edges, and make that view the scroll view's document view. If your content view (the one you wish to pad) changes size, have your "padding view" observe it for frame changes and resize to maintain the padding.
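A rough sketch of that wrapper arrangement, with made-up names and a 2 px padding value:

// Wrap the real content view in a slightly taller "padding view".
CGFloat pad = 2.0;
NSSize contentSize = contentView.frame.size;   // contentView = the view you wish to pad

NSView *paddingView = [[NSView alloc] initWithFrame:
    NSMakeRect(0, 0, contentSize.width, contentSize.height + 2 * pad)];
[contentView setFrameOrigin:NSMakePoint(0, pad)];
[paddingView addSubview:contentView];
scrollView.documentView = paddingView;

// Keep the padding when the content view changes size.
contentView.postsFrameChangedNotifications = YES;
[[NSNotificationCenter defaultCenter] addObserverForName:NSViewFrameDidChangeNotification
                                                  object:contentView
                                                   queue:nil
                                              usingBlock:^(NSNotification *note) {
    NSSize s = contentView.frame.size;
    [paddingView setFrameSize:NSMakeSize(s.width, s.height + 2 * pad)];
}];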
2015 Update
Content inset has been added to NSScrollView as of 10.10, making my older answer obsolete.
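On 10.10 and later the padding can be set directly on the scroll view, e.g.:

// Let the scroll view itself provide the padding.
scrollView.automaticallyAdjustsContentInsets = NO;
scrollView.contentInsets = NSEdgeInsetsMake(2, 0, 2, 0);   // top, left, bottom, right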