I have an NSView as the document of an NSScrollView. I would like to have a few pixels of padding at the top and bottom of the visible part of the view, regardless of where the scroller is positioned (not just at the top and bottom of the document as described here). For an example of an app that does this, look at Terminal.app. Regardless of the background color of the text, the top two visible rows of pixels are always the default background color.
I know I could simply draw everything two pixels lower and draw a rectangle at the top and bottom of the document-visible rect, but that will require changing a lot of complex code that I didn't write. Simpler ideas are welcomed!
The answer to the question you linked is actually a good solution for this problem too. In fact, if your view is anything but an NSTextView, I'd say it's easier to implement.
Specifically: make your actual document view a subview of some other view, leaving room around the edges, and make that outer view the scroll view's document view. If your content view (the one you wish to pad) changes size, have your "padding view" observe it for frame changes and resize itself to maintain the padding.
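A minimal sketch of that arrangement, assuming a view to pad called innerView, an enclosing scroll view called scrollView, and a 2 pt inset (all three are illustrative, not from your project):

```swift
import AppKit

// Sketch only: `innerView` is the view you wish to pad and `scrollView`
// is the enclosing NSScrollView; the 2 pt inset is just an example.
let inset: CGFloat = 2

let paddingView = NSView(frame: NSRect(x: 0, y: 0,
                                       width: innerView.frame.width,
                                       height: innerView.frame.height + 2 * inset))
innerView.setFrameOrigin(NSPoint(x: 0, y: inset))
paddingView.addSubview(innerView)
scrollView.documentView = paddingView

// Keep the top/bottom padding when the inner view changes size.
innerView.postsFrameChangedNotifications = true
let frameObserver = NotificationCenter.default.addObserver(
    forName: NSView.frameDidChangeNotification,
    object: innerView,
    queue: .main
) { _ in
    paddingView.setFrameSize(NSSize(width: innerView.frame.width,
                                    height: innerView.frame.height + 2 * inset))
    let origin = NSPoint(x: 0, y: inset)
    if innerView.frame.origin != origin {
        innerView.setFrameOrigin(origin)   // avoid re-posting needlessly
    }
}
// Hold on to `frameObserver` for as long as the observation should live.
```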
2015 Update
Content insets were added to NSScrollView in OS X 10.10 (the contentInsets property), making my older answer obsolete.
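If you can require 10.10, a sketch of the newer approach (the 2 pt value is just an example):

```swift
// Turn off the automatic insets so the values below are used verbatim,
// then pad the top and bottom of the visible area.
scrollView.automaticallyAdjustsContentInsets = false
scrollView.contentInsets = NSEdgeInsets(top: 2, left: 0, bottom: 2, right: 0)
```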
Related
When you gently scroll an NSScrollView the rectangle that Cocoa marks as dirty, and passes to drawRect, is often trivially small (perhaps as small as one or two pixels in height, for a vertical scroll view). The framework clearly already knows what the majority of the content is (because it's on screen) and where to redraw it (just the offset brought about by the scroll), so all it needs the developer to do is fill in the small rectangle that's about to appear. I was wondering what's happening behind the scenes to allow this to happen?
For example, if I wanted to implement my own super-smooth scroll view as a learning project, what kind of data would I need to record about the document view to let me just re-position - rather than redraw - the majority of it? Is Cocoa constantly generating images on background threads that it draws on screen when required, or is there something a bit more subtle going on?
There's lots going on. If you haven't already read it, you should read the Scroll View Programming Guide for Cocoa.
The copying of the existing rendering is accomplished by -[NSView scrollRect:by:]. It's only done if the NSClipView that's part of the NSScrollView architecture is set to copy-on-scroll (the copiesOnScroll property).
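A sketch of both halves of that idea, assuming an NSScrollView reference named scrollView; the class name and shiftContent(by:) below are illustrative, not AppKit API:

```swift
import AppKit

// Copy-on-scroll only happens when the clip view allows it.
scrollView.contentView.copiesOnScroll = true

// The underlying idea, for a non-flipped view that scrolls vertically:
// blit what is already rendered, then invalidate only the exposed strip.
class BlittingDocumentView: NSView {
    func shiftContent(by deltaY: CGFloat) {
        let visible = visibleRect
        // Re-use the pixels already on screen (this is -scrollRect:by:).
        scroll(visible, by: NSSize(width: 0, height: deltaY))
        // Only the thin strip that was just uncovered needs a real redraw.
        let exposedHeight = abs(deltaY)
        let exposedY = deltaY > 0 ? visible.minY : visible.maxY - exposedHeight
        setNeedsDisplay(NSRect(x: visible.minX, y: exposedY,
                               width: visible.width, height: exposedHeight))
    }
}
```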
Also, there's "responsive scrolling". Since 10.9, if certain conditions are met, AppKit will speculatively render the document view beyond the visible rect so that, when the user scrolls, it can show the scrolled-in area without asking the document view to render.
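If you want to cooperate with that speculative rendering, you can override prepareContentInRect: (prepareContent(in:) in Swift) to adjust the rect AppKit is about to pre-render; a sketch, where rowHeight is an assumed property of the document view:

```swift
override func prepareContent(in rect: NSRect) {
    // Round the overdraw region out to whole-row boundaries so the
    // pre-rendered content lines up with what drawRect produces later.
    var expanded = rect
    expanded.origin.y = floor(rect.minY / rowHeight) * rowHeight
    expanded.size.height = ceil(rect.maxY / rowHeight) * rowHeight - expanded.origin.y
    super.prepareContent(in: expanded)
}
```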
You can set your views to be layer-backed. In that case, they are typically rendered to textures and composited by the window server. This means they don't necessarily have to re-draw to render in a new position. It's quite likely that responsive scrolling uses layers behind the scenes to hold the pre-rendered content.
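Opting a view into layer backing is a one-liner (documentView is an assumed reference to your view):

```swift
// Back the view with a Core Animation layer; the window server can then
// move the cached contents instead of asking the view to redraw.
documentView.wantsLayer = true
```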
Xcode 5
I'm trying to learn the auto-layout system. Thought I would start with something simple, but I'm already getting stumped :-)
Scene: Main View -> ImageView -> View
I want to support rotation such that the Image rotates and centers on the screen, using Aspect-Fit content.
I want the smaller view to maintain its position relative to the top edge of the UIImageView. Auto layout doesn't seem to understand the aspect fit, and it aligns the subview along the top of the main view, not the fitted image.
I think it has something to do with the fact that the small view is a sibling of the image view, not a subview of it. I can only seem to create constraints to the superview.
You haven't started with something simple!
An aspect-fitted image view doesn't actually change its size under auto layout to match the image; it fits the image into the bounds that the constraints have determined and leaves the rest of its frame blank. If you set a border or background colour on the image view you will see this.
To achieve the effect you're after you would need to do the aspect fitting calculation yourself and modify the sizing constraints on the image view appropriately.
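A sketch of that calculation, assuming outlets for the image view and for explicit width and height constraints on it (all names here are illustrative); you would run it again on rotation, for example from viewWillLayoutSubviews:

```swift
func updateImageViewConstraints() {
    guard let image = imageView.image,
          let container = imageView.superview?.bounds.size else { return }
    // Scale factor that aspect-fits the image inside the available space.
    let scale = min(container.width / image.size.width,
                    container.height / image.size.height)
    imageViewWidthConstraint.constant = image.size.width * scale
    imageViewHeightConstraint.constant = image.size.height * scale
}
```

With the image view sized to exactly wrap the fitted image, a sibling view constrained to the image view's top edge will track the visible image rather than the empty part of the frame.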
I have an NSTableView with floating group rows which I can make transparent easily. However no matter what I do, setting the rowView backgroundFilters to have a CIGaussianBlur has no effect.
The view containing the NSTableView (and the accompanying NSScrollView and NSClipView) has wantsLayer set. And I have confirmed in didAddRowView that the rowView has a layer with the backgroundFilter set.
I can reproduce this blur with any other plain NSView; it just won't work when applied to the table view's row view. I just want to understand why, and whether I can do something about it.
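For reference, here's a sketch of the setup that does work for a plain NSView (blurView is an illustrative, layer-backed subview; the radius is arbitrary):

```swift
import Cocoa
import QuartzCore
import CoreImage

blurView.wantsLayer = true
blurView.layerUsesCoreImageFilters = true
if let blur = CIFilter(name: "CIGaussianBlur") {
    blur.setValue(4.0, forKey: kCIInputRadiusKey)
    // Blurs whatever is composited behind blurView's layer.
    blurView.layer?.backgroundFilters = [blur]
}
```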
I would guess that, since the group row view is sometimes drawn with partial transparency in the table view, they have done some hacks where they draw it into an offscreen buffer and then composite that buffer onto the screen themselves. In that case it doesn't matter what effects you've added to the background, because the offscreen buffer is always just filled with transparency.
I have been programming Android apps for a bit, and I am now making an iPhone app. I want to make margins for my view. I'd rather not explain my exact situation, but if someone helps me with this I'll be able to figure out what I need to do.
I have two views. I want the first view to take up the entire screen, and then I want another view to always be, let's say, 20 pixels from the edge of the screen on all four sides. Is there a simple way to do that in Xcode?
Thanks
I assume you're using Interface Builder (now part of Xcode).
Add the view as you suggest, leaving a 20 pixel border around all four sides. In the autosizing controls, make the width and height flexible (the inner springs) and keep all four margins fixed (the outer struts), so the border doesn't grow as the parent does.
Ensure that 'autoresize subviews' is enabled on the parent view.
The view will now resize if the parent view also resizes, leaving a 20px margin as required.
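The equivalent in code, if you're not using Interface Builder (the view names here are assumed):

```swift
// Inset the child 20 points on every side, then let it track the parent.
childView.frame = parentView.bounds.insetBy(dx: 20, dy: 20)
// Flexible width/height plus fixed margins keeps the 20 pt border intact.
childView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
parentView.autoresizesSubviews = true   // this is the default anyway
parentView.addSubview(childView)
```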
I have an NSWindow with a 32 px bottom content border. Inside the window's view, I have two custom subviews. Each of them is layer-backed, and I'm tracking the mouse with an NSTrackingArea. Part of what I'm doing is some mouse-over effects with Core Animation. This is not a problem in general, but I noticed something kind of strange and wondered if anyone knows why this is happening.
When setting up the trackingArea and mouseOver method, I hitTest the root layer and log the layer's name so I can see if the geometry of the various sublayers holds water when I resize the window. Internally, they seem (and look) fine. Visually, they are in the right place, but when I move the mouse, I notice that even though the mouse is physically over a layer, hitTest is returning whatever layer is 32 px above it. However, if I remove the content border, it works as you would expect and the correct layer is returned.
I obviously need the content border, so I have a very simple workaround which involves offsetting the hitTest point by 32 px. This works fine, but it just seems weird that the presence of a content border skews the coordinate system of these subviews. Does anyone know why this could be happening?
NSEvent returns mouse locations relative to the window's coordinate system, not the targeted view's. You probably need to call convertPoint:fromView: (passing nil for the view) to get the location into your view's coordinates before hit-testing.
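A sketch of that conversion before hit-testing, assuming the code lives in the view that owns the tracking area and the layers:

```swift
override func mouseMoved(with event: NSEvent) {
    // locationInWindow is in window coordinates; convert it into this
    // view's space, then into its layer's space, before hit testing.
    let pointInView = convert(event.locationInWindow, from: nil)
    if let hit = layer?.hitTest(convertToLayer(pointInView)) {
        Swift.print(hit.name ?? "unnamed layer")   // Swift.print avoids NSView's print(_:)
    }
}
```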