Trouble with Auto Layout - Xcode

Xcode 5
I'm trying to learn the auto-layout system. Thought I would start with something simple, but I'm already getting stumped :-)
Scene: Main View -> ImageView -> View
I want to support rotation such that the image rotates and centers on the screen, using Aspect Fit content mode.
I want the smaller view to maintain its position relative to the top edge of the UIImageView. It doesn't seem to understand the aspect fit, and it aligns the subview along the top of the main view, not the fitted image.
I think it has something to do with the fact that the small view is a sibling of the image view, not a subview of it. I can only seem to create constraints to the superview.

You haven't started with something simple!
An aspect-fitted image view doesn't actually change its size under Auto Layout depending on the image; it fits the image into the bounds that the constraints have determined, leaving the rest of its frame blank. If you set a border or background colour on the image view, you will see this.
To achieve the effect you're after you would need to do the aspect fitting calculation yourself and modify the sizing constraints on the image view appropriately.
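For illustration, here is a minimal Swift sketch of that calculation, assuming the image view's width and height constraints are exposed as outlets (all names here are hypothetical); AVMakeRect does the aspect-fit math:

```swift
import UIKit
import AVFoundation // for AVMakeRect

/// Resize the image view's constraints so its frame exactly matches the
/// rect the image occupies when aspect-fitted into the container.
func updateImageViewConstraints(imageView: UIImageView,
                                widthConstraint: NSLayoutConstraint,
                                heightConstraint: NSLayoutConstraint,
                                container: UIView) {
    guard let image = imageView.image else { return }
    let fitted = AVMakeRect(aspectRatio: image.size,
                            insideRect: container.bounds)
    widthConstraint.constant = fitted.width
    heightConstraint.constant = fitted.height
}
```

Calling this from viewWillLayoutSubviews keeps the constraints in sync on rotation; the sibling view can then be pinned to the image view's top edge, which now coincides with the fitted image.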

Related

Android BlurView issues with SKCanvasView

I am attempting to create a view for Android whose background appears to blur the content of the views beneath it. This is nothing new and has been done before. I based my implementation on what was done here: Dimezis/BlurView.
The approach uses the pre-draw event from the view tree observer to draw a view to an internal canvas. The canvas is backed by a bitmap. A blur is applied to the bitmap before it is drawn to the canvas passed to BlurView's draw method. This approach works well for all standard views/controls and is a common way to achieve the blur-view effect.
However, it does not handle Skia-based drawings that may be on the view that is blurred.
Skia, via SKCanvasView, is used heavily for controls within the app, so this is something of a deal breaker if I cannot find a solution. The issue is very odd: anything that is drawn on the canvas view appears to be scaled and translated when drawn to the internal canvas.
Screenshot: blur label vs. blur SKCanvasView
The screenshot shows the difference in results between blurring a label with text and blurring a red circle drawn on an SKCanvasView.
For reference, I've posted a sample project on GitHub. It can be found here: jaredballen/BlurView
I'd really appreciate any input that can be shared.
When you invoke _rootView?.Draw(_internalCanvas); the circle SKCanvasView internally renders itself using the _internalCanvas size (the same as the _blurView size), resulting in the demonstrated behaviour. My guess is this could be solved by making _internalCanvas the same size as the _rootView, rendering the root view as blurred, then clipping and translating the result inside the smaller _blurView.

What could prevent an NSView.layer from rendering with renderInContext:?

I've been working hard, searching the internet about this problem for three days, and I'm now running out of resources.
I'm currently porting an iOS app to macOS (deployment target 10.11). The problem:
I have a view hierarchy as below:
NSScrollView
  documentView
    grouping view
      tiling view one
        array of NSImageView (each one being a tile)
      tiling view two
        array of NSImageView (each one being a tile)
The two tiling views overlap completely; depending on the UI, one may be hidden, or the second one has its opacity set below 1.0 to blend the two tiled views.
Because of the opacity requirement, as well as performance, the views are CALayer-backed. This is done from IB, where the top NSScrollView is checked for Core Animation; from there, the whole view tree is (implicitly) layer-backed.
It works as expected: scroll, magnify, etc.
Then I need to make an image out of the document view to generate content for an SCNMaterial (3D view).
On iOS, the documentView's renderInContext works as expected and allows an image to be created.
On AppKit, the context stays transparent, and so does the image: a valid object, but as if filled with clearColor.
If documentView.canDrawSubviewsIntoLayer is set at creation, the view tree renders OK. This can't be the solution, since it prevents the opacity setting from working.
Even when one tiling view is hidden (no opacity compositing), rendering fails.
I read that some kinds of views are not rendered. I don't use them. There are no filters and no masks, apart from a default masksToBounds setting on the whole view tree. I don't know why or where it is set. I tried to unset it on all the views at creation, with no success. It is somehow set again, on the grouping view below the documentView. This may be the problem, but why is this property out of my control?
The alternative way to get the view tree rendered, bitmapImageRepForCachingDisplayInRect: / cacheDisplayInRect:toBitmapImageRep:, behaves the same: OK with canDrawSubviewsIntoLayer, KO otherwise. Apple's code examples for making a texture out of a view use one of these two methods.
There are plenty of posts, mainly on SO, complaining about CALayer renderInContext, along with code to custom-render a layer tree. Nevertheless, most are quite old, and by now there must be a simple, standard way to achieve this.
Edit: among other attempts, I tried to set each view wantsLayer, with no success.
Well, as often happens when you post a request for help, you finally find the answer yourself… Here it is:
Given the view tree listed in the question, I managed to render it by setting canDrawSubviewsIntoLayer on each tiling view. This way, the opacity compositing between layers works, AND the views are rendered.
I post this as an answer because it solves the problem.
As for WHY this works, here is my guess, though this is not an authoritative answer: each NSImageView tile, a subview of a tiling view, has its origin set through its frame. This is not a transform on the layer, but a position of the frame in the coordinates of the tiling view. This is a difference from the iOS version of the code, where the tiles are positioned by a transform. I'm going to test further and see whether using a transform rather than a frame origin makes a difference to renderInContext.
Edit: after more testing, it appears that the iOS version has a transform AND an offset on each tile.
So the only clue is that the layer system gets lost when rendering in context on OS X when some subviews have an offset?
Summary:
The key point of the solution is to find the right view on which to set canDrawSubviewsIntoLayer.
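For reference, a minimal Swift sketch of the working setup, with the tiling views and the snapshot helper under assumed names:

```swift
import AppKit

/// Flatten each tiling view's subviews into its own backing layer so that
/// renderInContext can see them, while the two tiling layers themselves
/// still composite with opacity (view names are assumptions).
func prepareForSnapshot(_ tilingViewOne: NSView, _ tilingViewTwo: NSView) {
    tilingViewOne.canDrawSubviewsIntoLayer = true
    tilingViewTwo.canDrawSubviewsIntoLayer = true
}

/// Render a layer-backed view (e.g. the documentView) into an NSImage,
/// ready to be used as the contents of an SCNMaterial.
func snapshot(of view: NSView) -> NSImage? {
    let size = view.bounds.size
    guard let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                                     pixelsWide: Int(size.width),
                                     pixelsHigh: Int(size.height),
                                     bitsPerSample: 8, samplesPerPixel: 4,
                                     hasAlpha: true, isPlanar: false,
                                     colorSpaceName: .deviceRGB,
                                     bytesPerRow: 0, bitsPerPixel: 0),
          let context = NSGraphicsContext(bitmapImageRep: rep)
    else { return nil }
    view.layer?.render(in: context.cgContext)
    let image = NSImage(size: size)
    image.addRepresentation(rep)
    return image
}
```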

Cocoa Mac OS X grid display of images of different sizes

What I'm trying to do is display a set of images in a grid-like fashion, but with different sizes for images in landscape or portrait orientation. This rules out NSCollectionView, because the item prototype's size can only be set once...
I'd go and add subviews programmatically to a scroll view, but then again, when the window's size changes and the scroll view gets a bigger width, there will just be blank space on the right side...
You can check out the image below for a better understanding...
http://i.stack.imgur.com/NVJjP.png
thanks in advance you guys...
What I ended up doing was implementing a purely mathematical algorithm that calculates the (x, y) position of every newly added photo based on the previously added photos, and it did a pretty good job... The container holding the photos was embedded in a scroll view; needless to say, I had to calculate the height of this container as well.
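Something along these lines, as a minimal Swift sketch of that placement math (the column count, spacing, and names are assumptions, not the poster's exact algorithm; the container is assumed to be flipped so y grows downward):

```swift
import AppKit

/// Compute a frame for each image, scaled to a fixed column width and
/// dropped into the currently shortest column, plus the container height
/// needed to hold everything.
func layoutFrames(for imageSizes: [NSSize],
                  containerWidth: CGFloat,
                  columns: Int = 3,
                  spacing: CGFloat = 8) -> (frames: [NSRect], containerHeight: CGFloat) {
    let columnWidth = (containerWidth - spacing * CGFloat(columns + 1)) / CGFloat(columns)
    var columnHeights = [CGFloat](repeating: spacing, count: columns)
    var frames: [NSRect] = []

    for size in imageSizes {
        guard size.width > 0 else { continue }
        // Preserve the aspect ratio at the column width.
        let height = size.height * columnWidth / size.width
        // The shortest column gets the next photo.
        let column = columnHeights.firstIndex(of: columnHeights.min()!)!
        let x = spacing + CGFloat(column) * (columnWidth + spacing)
        frames.append(NSRect(x: x, y: columnHeights[column],
                             width: columnWidth, height: height))
        columnHeights[column] += height + spacing
    }
    return (frames, columnHeights.max() ?? spacing)
}
```

Recomputing the frames whenever the scroll view's width changes fills the extra horizontal space instead of leaving it blank.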

masking with an image in iOS

I'd like to take an image and use it as a mask for a view to which I add numerous image views. I know of the Quartz CGContextClipToMask() call, but what would be the best way to approach this? Can I override the drawRect method of a container view, call CGContextClipToMask() within it, and then expect its subviews to adhere to that clipping region? It doesn't seem to work.
Do I need to instead add some blocking mask image over top?
Instead of subclassing or overriding drawing functions, I chose to overlay the images with an image that had transparency in the viewable portion. For example, if my 'surface' were an image of a parchment on which I aimed to draw a bunch of images: I would have the parchment image, then a container UIView for any images to be put on that parchment, then a masking image over top of that. The masking image is the original parchment image with the parchment itself converted to full transparency, while the surrounding area is left exactly as the background the parchment sits on (with all other UI widgets over top of that).
This seems a viable solution in all cases except if one were to need some image to visually animate around and behind the parchment (not my case).
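A minimal UIKit sketch of that layering, assuming two hypothetical assets: "parchment" (the surface) and "parchment-mask" (the same image with the parchment area cut to full transparency and the surroundings matching the backdrop):

```swift
import UIKit

final class ParchmentViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let frame = view.bounds

        // 1. The visible surface.
        let parchment = UIImageView(image: UIImage(named: "parchment"))
        parchment.frame = frame
        view.addSubview(parchment)

        // 2. Container for the image views that should appear on the parchment.
        let content = UIView(frame: frame)
        view.addSubview(content)

        // 3. The masking overlay: transparent where the parchment shows,
        //    opaque (matching the backdrop) everywhere else, so it visually
        //    clips anything in `content` that strays outside the parchment.
        let overlay = UIImageView(image: UIImage(named: "parchment-mask"))
        overlay.frame = frame
        overlay.isUserInteractionEnabled = false // let touches reach the content
        view.addSubview(overlay)
    }
}
```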

Padding at top of NSView in an NSScrollView

I have an NSView as the document of an NSScrollView. I would like to have a few pixels of padding at the top and bottom of the visible part of the view, regardless of where the scroller is positioned (not just at the top and bottom of the document as described here). For an example of an app that does this, look at Terminal.app. Regardless of the background color of the text, the top two visible rows of pixels are always the default background color.
I know I could simply draw everything two pixels lower and draw a rectangle at the top and bottom of the document-visible rect, but that will require changing a lot of complex code that I didn't write. Simpler ideas are welcomed!
The answer to the question you linked is actually a good solution for this problem too. In fact, if your view is anything but an NSTextView, I'd say it's easier to implement.
Specifically: make your actual document view a subview of some other view, leaving room around the edges, and make that outer view the scroll view's document view. If your content view (the one you wish to pad) changes size, have your "padding view" observe it for frame changes and resize to maintain the padding.
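A minimal AppKit sketch of that wrapper, assuming vertical-only padding (all names are illustrative):

```swift
import AppKit

/// Wrap `content` in a slightly taller padding view and install it as the
/// scroll view's document view, so `padding` points of space sit above and
/// below the content regardless of scroll position.
func installPaddedDocumentView(_ content: NSView,
                               in scrollView: NSScrollView,
                               padding: CGFloat = 2) {
    let padded = NSView(frame: NSRect(x: 0, y: 0,
                                      width: content.frame.width,
                                      height: content.frame.height + 2 * padding))
    content.setFrameOrigin(NSPoint(x: 0, y: padding))
    padded.addSubview(content)
    scrollView.documentView = padded

    // Observe frame changes on the content view so the wrapper grows and
    // shrinks with it, preserving the padding.
    content.postsFrameChangedNotifications = true
    NotificationCenter.default.addObserver(
        forName: NSView.frameDidChangeNotification,
        object: content, queue: .main) { [weak padded] _ in
        padded?.setFrameSize(NSSize(width: content.frame.width,
                                    height: content.frame.height + 2 * padding))
    }
}
```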
2015 Update
Content inset has been added to NSScrollView as of 10.10, making my older answer obsolete.
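On 10.10 and later that reduces to a couple of lines (a short sketch, assuming an existing scroll view):

```swift
import AppKit

func applyVerticalPadding(to scrollView: NSScrollView, padding: CGFloat = 2) {
    // Turn off the automatic insets so the explicit values stick.
    scrollView.automaticallyAdjustsContentInsets = false
    scrollView.contentInsets = NSEdgeInsets(top: padding, left: 0,
                                            bottom: padding, right: 0)
}
```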
