tvOS: Rounded corners for image view when focused - Xcode

I have an image view that will get this awesome tvOS focus effect when the containing view gets focused.
The problem is - it should have rounded corners. Now this is easily done:
imageView.layer.cornerRadius = 5
imageView.layer.masksToBounds = true
I have to set either the layer's masksToBounds or the image view's clipsToBounds to true (which amounts to the same thing) in order to clip the edges of the image - but as soon as I do, the focus effect no longer works, because it gets clipped as well.
I had more or less the same problem with buttons, but since their focus effect is much simpler (only scaling and a shadow), I just implemented it myself. That is not an option for the image view, with all the effects applied (moving, shimmering, and so on...).
Is there an easier way? Did I miss something? I can't be the only one trying to figure out how this works!? :)

I have found an alternative solution. What one may do is actually draw the image, clipping out the corners with an alpha channel. The image then gets scaled correctly when focused. That applies to the layer. Then, to have the alpha channel applied to the other layers (like the one for the glowing effect), we need to set masksFocusEffectToContents = true.
I made an extension for it, based on this answer:
Swift 4.2
extension UIImageView {
    func roundedImage(corners: UIRectCorner, radius: CGFloat) {
        guard let image = self.image else { return }
        let rect = CGRect(origin: .zero, size: frame.size)
        UIGraphicsBeginImageContextWithOptions(frame.size, false, 0) // 0 = use screen scale
        defer { UIGraphicsEndImageContext() }
        UIBezierPath(
            roundedRect: rect,
            byRoundingCorners: corners,
            cornerRadii: CGSize(width: radius, height: radius)
        ).addClip()
        image.draw(in: rect)
        self.image = UIGraphicsGetImageFromCurrentImageContext()
        // Shadows - change shadowOpacity to a value > 0 to enable the shadows
        layer.shadowOpacity = 0
        layer.shadowColor = UIColor.black.cgColor
        layer.shadowOffset = CGSize(width: 10, height: 15)
        layer.shadowRadius = 3
        // This propagates the transparency to the overlay layers,
        // like the one for the glowing effect.
        masksFocusEffectToContents = true
    }
}
Then, to apply the rounded corners, call:
myImageView.adjustsImageWhenAncestorFocused = true
myImageView.clipsToBounds = false
// masks all corners with a radius of 25 in myImageView
myImageView.roundedImage(corners: UIRectCorner.allCorners, radius: 25)
One can obviously modify roundedImage() to accept parameters defining the shadows at call time.
Downsides:
Borders behave like cornerRadius (they get drawn inside the image). I think I had it working at some point, but then, while investigating further, I lost the changes.
I am not exactly sure this is the right way to do it; I am quite confident there must be some method out there doing it in a couple of lines. In tvOS 11 Apple introduced the round badges (animatable and all) shown at WWDC 2017, but I just can't find a sample for them.
Otherwise, tvOS 12 (beta for now) introduced the Lockup. I managed to implement them programmatically, as shown in this answer.
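For reference, a minimal sketch of such a programmatic lockup using TVUIKit's TVPosterView (one of the tvOS 12 lockup views; the asset name here is just a placeholder):
import TVUIKit

// A poster-style lockup; it gets the standard focus treatment for free.
let poster = TVPosterView(image: UIImage(named: "poster")) // "poster" is a placeholder asset
poster.title = "Title"
poster.subtitle = "Subtitle"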

https://forums.developer.apple.com/thread/20513
We are also facing this same issue. When you round the corners, you can see the "shine" still has a rectangle shape.
I showed the issue to the Dev Evangelists at the Tech Talks in Toronto and they said it's a bug. It's reported and open rdar://23846376

For 2022:
Note that you can just use a card view (TVCardView, from TVUIKit) on tvOS for the effect.
Simply put the UIImageView inside the card view's content view.
Don't forget to actually turn OFF "adjusts image when ancestor focused" and "user interaction enabled" on the image view, or else it will "doubly expand" when the card view expands!
There's also a weird issue where you have to add 20 to the height of all card views to make them work neatly with an enclosed image view.
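A minimal sketch of that setup (assuming TVUIKit's TVCardView on tvOS 12+; the frame and asset name are placeholders):
import TVUIKit

let cardView = TVCardView(frame: CGRect(x: 0, y: 0, width: 400, height: 320)) // note the extra height
let imageView = UIImageView(image: UIImage(named: "poster")) // placeholder asset
imageView.frame = cardView.contentView.bounds
imageView.adjustsImageWhenAncestorFocused = false // the card view supplies the focus effect
imageView.isUserInteractionEnabled = false // avoids the "double expand"
cardView.contentView.addSubview(imageView)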

UIScrollView contentLayoutGuide and zooming centered

The problem to be solved here is how to zoom in a UIScrollView while staying centered. If you don't take some sort of precautions, the default is that, as we zoom out, the zoomed view slides up to the top left corner of the scroll view.
So how do we prevent this and keep the zoomed view in the center as we zoom? As you probably know, there are traditional ways of handling this by messing with the scroll view's layout, as described by Josh and Eliza in the brilliant classic WWDC video 104 from 2010. This can be done by using a delegate or by subclassing UIScrollView, and gives the desired result.
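For reference, one common form of that traditional fix is a few lines in the scroll view delegate (a sketch of the general technique, not the video's exact code):
// In your UIScrollViewDelegate:
func scrollViewDidZoom(_ scrollView: UIScrollView) {
    // Pad the content with insets so it stays centered whenever it is
    // smaller than the scroll view's bounds.
    let offsetX = max((scrollView.bounds.width - scrollView.contentSize.width) / 2, 0)
    let offsetY = max((scrollView.bounds.height - scrollView.contentSize.height) / 2, 0)
    scrollView.contentInset = UIEdgeInsets(top: offsetY, left: offsetX, bottom: offsetY, right: offsetX)
}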
Now comes WWDC 2017 video 201 (https://developer.apple.com/videos/play/wwdc2017/201/?time=1496), and there's Eliza making a claim that the new (iOS 11) contentLayoutGuide solves the problem of zooming while staying centered in a new way: she says to center the content view at the center of the content layout guide.
But she doesn't demonstrate. And when I try it for myself, I find it isn't solving the problem. I'm zooming in just fine, but when zooming out, so that the zoom scale is smaller than 1, the content view moves up to the top left, just as it always has.
Has anyone figured out what this claim in the video actually means? How does iOS 11 make it easier to zoom centered than in the past?
EDIT: I actually received a sample project from Apple in response to my bug report, which they claimed illustrated how to solve this, and it didn't! So I conclude that even Apple doesn't know what they're talking about here.
The view goes to the top left because the contentSize of the scroll view is not defined. When using the new Auto Layout guides in iOS 11, it's still necessary to define the contentSize.
Add the following constraints:
NSLayoutConstraint.activate([
    scrollView.contentLayoutGuide.widthAnchor.constraint(equalTo: scrollView.frameLayoutGuide.widthAnchor),
    scrollView.contentLayoutGuide.heightAnchor.constraint(equalTo: scrollView.frameLayoutGuide.heightAnchor)
])
This worked for me when I had a contentView with a fixed width/height and the following additional constraints:
NSLayoutConstraint.activate([
    // give the centerView explicit height and width constraints
    centerView.widthAnchor.constraint(equalToConstant: 500),
    centerView.heightAnchor.constraint(equalToConstant: 500),
    // pin the center of the centerView to the center of the scrollView's contentLayoutGuide
    centerView.centerXAnchor.constraint(equalTo: scrollView.contentLayoutGuide.centerXAnchor),
    centerView.centerYAnchor.constraint(equalTo: scrollView.contentLayoutGuide.centerYAnchor)
])
This is the solution everybody is looking for. In my case I want to center a view inside a table view (itself a scroll view), so that when the table view scrolls, the custom view always stays in the center of the scroll view's content.
// create a view
let v = UIView(frame: .zero) // use .zero when using constraints
ibTableView.addSubview(v)
ibTableView.bringSubviewToFront(v)
v.translatesAutoresizingMaskIntoConstraints = false
v.backgroundColor = .yellow
v.widthAnchor.constraint(equalToConstant: 100).isActive = true
v.heightAnchor.constraint(equalToConstant: 100).isActive = true
// set scroll view guides
ibTableView.contentLayoutGuide.widthAnchor.constraint(equalTo: ibTableView.frameLayoutGuide.widthAnchor).isActive = true
ibTableView.contentLayoutGuide.heightAnchor.constraint(equalTo: ibTableView.frameLayoutGuide.heightAnchor).isActive = true
// anchor the view
v.centerXAnchor.constraint(equalTo: ibTableView.contentLayoutGuide.centerXAnchor).isActive = true
v.centerYAnchor.constraint(equalTo: ibTableView.contentLayoutGuide.centerYAnchor).isActive = true

Xcode drawRect - UIView bounds not reflecting rotation in simulator

I'm using Xcode 4.5.2 to learn to develop for iOS 6. I have this code in my drawRect...
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGPoint midPoint;
    midPoint.x = self.bounds.origin.x + self.bounds.size.width / 2;
    midPoint.y = self.bounds.origin.y + self.bounds.size.height / 2;
    // ... etc
For some reason, when I run this in the iPhone Simulator, the result is always
midPoint.x = 160
midPoint.y = 252
regardless of whether the simulator (iPhone Retina screen) is in portrait mode or rotated to landscape. As a result, the graphic that I draw is centred on the screen correctly in portrait mode, but offset to the left in landscape.
Can someone suggest where I should begin to look to find out why this is the case?
This drawRect code came directly from an earlier app I wrote, where it functioned correctly in determining the midpoint of the screen (a UIView spanning the whole screen). The problem arose when I imported this code (the whole class) into my current program, which segues into instances of these UIViews.
Thanks.
Figured it out, I think...
It appears that, for reasons unknown (to me anyway), when the phone gets rotated, the view inside did not get resized to fill the new vertical and horizontal dimensions. Thus the midpoint coordinates are always the same - because the view basically did not resize. In the Size inspector, the view had struts on the left and right only, and no springs.
By adding both vertical and horizontal springs, the view resizes with rotation, and the midpoint coordinates change accordingly.
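For anyone wiring this up in code rather than the Size inspector, those springs correspond to a flexible autoresizing mask (shown here in modern Swift; myView stands in for the custom view):
// Struts stay pinned by default; the width/height springs are:
myView.autoresizingMask = [.flexibleWidth, .flexibleHeight]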
While I've gotten past my hurdle, I'm still unclear why I needed to add springs to the view, since the demo I was following did not seem to require this step. Any suggestions would be appreciated. Thanks.

NSTabView with background color

As discussed elsewhere, NSTabView does not have a setBackgroundColor method, and subclassing NSTabView and using a drawRect override to control it no longer works - it does not paint the top 10%, the bit just below the segmented control button.
Now I am a bit surprised by the number of workarounds I had to apply to solve this; see
code: https://github.com/dirkx/CustomizableTabView/blob/master/CustomizableTabView/CustomizableTabView.m
and I am wondering if I went down the wrong path, and how to do this better and more simply:
The NSSegmentStyleTexturedSquare style seems to yield a semi-transparent segmented control, which means I need to do extra work to hide any bezel lines (lines 240, 253).
Is there a better way to do this, i.e. negate its transparency?
Or is there a way I can use the actual/original segmented choice button?
I find that the colours I need - like [NSColor windowBackgroundColor] - are not set to anything useful (i.e. that one is transparent), so right now I hardcode them (lines 87, 94).
Is there a better way to do this?
I find I need a boatload of fluffy methods to keep things in sync (lines 128, 134, etc.).
Can this be avoided?
I find that mimicking the cleverness around rescaling means I need to keep a constant eye on the segmented control box and remove/resize it. And even then, it is not quite as good as the original.
Is there a better way to do this than line 157, i.e. be notified about resizing, rather than doing it all the time?
The segmented control fades dark when focus is removed from the window - unlike the real McCoy.
Can that easily be prevented? Is there a cheap way to track this?
Or is this the wrong approach - should I focus on just a transparent hole here and let the NSTabViewItem draw a background? But in any case, I would still have the issue with the segmented control box - or is there then a way to make that be the default again?
When trying this, I get stuck on the top 20-30 pixels being drawn in the 'real' window background colour - which is 'transparent' - so the colour will not run all the way to the top, behind the segment bar and up to the bezel, but instead stops some 8 pixels below the bottom of the segment controls.
Feedback appreciated - as this feels far off/suboptimal for such a simple thing.
Thanks a lot. Brownie points for hacking/forking the github code :) :) :) As a line of running code says more than a thousand words.
Dw.
PSMTabBarControl is probably the best workaround for you. I have created several custom tab views, but Cocoa does not play well with this control. PSMTabBarControl has been updated to support Xcode 4. https://github.com/ciaran/psmtabbarcontrol
Have you tried setting the background color of its underlying CALayer? (Make it a layer-backed view, if it isn't already, by setting wantsLayer = YES.)
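A minimal sketch of that idea (in Swift, assuming the tab view is reachable as tabView):
// Back the view with a CALayer, then tint the layer.
tabView.wantsLayer = true
tabView.layer?.backgroundColor = NSColor.yellow.cgColor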
If your situation can tolerate some fragility, a very simple and quick approach is to subclass NSTabView and manually adjust the frame of the item subviews. This gives each item a seamless yellow background:
- (void)drawRect:(NSRect)dirtyRect {
    static const NSRect offsetRect = (NSRect) { -2, -16, 4, 18 };
    NSRect rect = self.contentRect;
    rect.origin.x += offsetRect.origin.x;
    rect.origin.y += offsetRect.origin.y;
    rect.size.width += offsetRect.size.width;
    rect.size.height += offsetRect.size.height;
    [[NSColor yellowColor] set];
    NSRectFill(rect);
    [super drawRect:dirtyRect];
}
A future change in the metrics of NSTabView would obviously be a problem so proceed at your own risk!

Alpha Detection in Layer OK on Simulator, not iPhone

First, check out this very handy extension to CALayer from elsewhere on SO. It helps you determine if a point in a layer's contents-assigned CGImageRef is or isn't transparent.
n.b.: There is no guarantee about a layer's contents being representable or responding as if it was a CGImageRef. (This can have implications for broader use of the extension referenced above, granted.) In my case, however, I know that the layers I'm testing have contents that were assigned a CGImageRef. (Hopefully this can't change out from under me after assignment! Plus I notice that contents is retained.)
OK, back to the problem at hand. Here's how I'm using the extension. For starters, I've changed the selector from containsPoint: to containsNonTransparentPoint: (I need to keep the original method around.)
Now, I have a UIImageView subclass that uses seven CALayer objects. These are used for opacity-based animations (pulsing/glowing effects and on/off states). Each of those seven layers has a known CGImageRef in its contents that effectively "covers" (air quotes) one part of the entire view with its own swath of color. The rest of each image in its respective layer is transparent.
In the subclass, I register for single tap gestures. When one arrives, I walk through my layers to see which one was effectively tapped (that is, which one has a non-transparent point where I tapped, first one found wins) and then I can do whatever needs doing.
Here's how I handle the gesture:
- (IBAction)handleSingleTap:(UIGestureRecognizer *)sender {
    CGPoint tapPoint = [sender locationInView:sender.view];

    // Flip y so 0,0 is at lower left. (Required by layer method below.)
    tapPoint.y = sender.view.bounds.size.height - tapPoint.y;

    // Figure out which layer was effectively tapped. First match wins.
    for (CALayer *layer in myLayers) {
        if ([layer containsNonTransparentPoint:tapPoint]) {
            NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);
            // We got our layer! Do something useful with it.
            return;
        }
    }
}
The good news? All of this works beautifully on the iPhone Simulator with iOS 4.3.2. (FWIW, I'm on Lion running Xcode 4.1.)
However, on my iPhone 4 (with iOS 4.3.3), it doesn't even come close! None of my taps seem to match up with any of the layers I'd expect them to.
Even if I try the suggestion to use CGContextSetBlendMode when drawing into the 1x1 pixel context, no dice.
I am hoping it's pilot error, but I have yet to figure out what the disparity is. The taps do have a pattern but not a discernible one.
Perhaps there's a data boundary issue. Perhaps I have to do something other than flip the y coordinate to the lower-left of the image. Just not sure yet.
If anyone can please shed some light on what might be amiss, I would be most appreciative!
UPDATE, 22 September 2011: First ah-ha moment acquired! The problem isn't Simulator-vs-iPhone. It's Retina vs. Non-Retina! The same symptoms occur in the Simulator when using the Retina version. Perhaps the solution centers around scaling (CTM?) in some way/shape/form. The Quartz 2D Programming Guide also advises that "iOS applications should use UIGraphicsBeginImageContextWithOptions." I feel like I'm very close to the solution here!
OK! First, the problem wasn't Simulator-vs-iPhone. Rather, it was Retina vs. Non-Retina. The same symptoms occur in the Simulator when using the Retina version. Right away, one starts to think the solution has to do with scaling.
A very helpful post over on the Apple Dev Quartz 2D forum (along similar "be mindful of scaling" lines) steered me toward a solution. Now, I'm the first to admit, this solution is NOT pretty, but it does work for Retina and Non-Retina cases.
With that, here's the revised code for the aforementioned CALayer extension:
//
// Checks image at a point (and at a particular scale factor) for transparency.
// Point must be with origin at lower-left.
//
BOOL ImagePointIsTransparent(CGImageRef image, CGFloat scale, CGPoint point) {
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1,
        NULL, kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(-point.x, -point.y,
        CGImageGetWidth(image) / scale, CGImageGetHeight(image) / scale), image);
    CGContextRelease(context);
    CGFloat alpha = pixel[0] / 255.0;
    return (alpha < 0.01);
}

@implementation CALayer (Extensions)

- (BOOL)containsNonTransparentPoint:(CGPoint)point scale:(CGFloat)scale {
    if (CGRectContainsPoint(self.bounds, point)) {
        if (!ImagePointIsTransparent((CGImageRef)self.contents, scale, point)) {
            return YES;
        }
    }
    return NO;
}

@end
In short, we need to know about the scale. If we divide the image width and height by that scale, ta-dah, the hit test now works on Retina and Non-Retina devices!
What I don't like about this is the mess I've had to make of that poor selector, now called containsNonTransparentPoint:scale:. As mentioned in the question, there is never any guarantee what a layer's contents will contain. In my case I am taking care to only use this on layers with a CGImageRef in there, but this won't fly in a more general/reusable case.
All this makes me wonder if CALayer is not the best place for this particular extension after all, at least in this new incarnation. Perhaps CGImage, with some layer smarts thrown in, would be cleaner. Imagine doing a hit test on a CGImage but returning the name of the first layer that had non-transparent content at that point. There's still the problem of not knowing which layers have CGImageRefs in them, so some hinting might be required. (Left as an exercise for yours truly and the reader!)
UPDATE: After some discussion with a developer at Apple, messing with layers in this fashion is in fact ill-advised. Contrary to what I previously learned (incorrectly?), multiple UIImageViews encapsulated within a UIView are the way to go here. (I always remember learning that you want to keep your views to a minimum. Perhaps in this case it isn't as big a deal.) Nevertheless, I'll keep this answer here for now, but will not mark it as correct. Once I try out and verify the other technique, I will share that here!

Fluid layout for Android

I'm targeting Android, but I don't know how to lay out the UI so it works on all devices. How do I do this?
I have a TextField with a Button for searching, and the search results are displayed in a TableView below. Right now the bottom of the table view is cut off.
this.searchResults = Ti.UI.createTableView({
    top: '70px',
    height: '450dp'
});
As you can see from the code above, I clearly don't know how to do this. How do you lay things out for Android?
You can set top/bottom/left/right values. If you want the table to stop at the bottom edge of the screen, you could set bottom: 0. It's the same for iOS.
If I'm working on Android stuff and I want it to resize in proportion to the size of the screen, I often use percentages. So:
this.searchResults = Ti.UI.createTableView({
    top: '10%',
    height: '90%'
});
Alternatively, if you want pinpoint-accurate calculations, you can ask Appcelerator for the platform width and height, and resize things proportionately yourself. Like so:
var height = Ti.Platform.DisplayCaps.platformHeight; // screen height in pixels

this.searchResults = Ti.UI.createTableView({
    top: '75dp',
    height: (height - (75 * Ti.Platform.DisplayCaps.logicalDensityFactor)) // height - 75dp converted to pixels
});
