Convert screen coordinate to window coordinate in Cocoa? - macos

I've seen this question asked multiple times here, and I've tried some of the answers, but none of them seem to work for me.
NSPoint pnt = [[self window] convertScreenToBase:[NSEvent mouseLocation]];
I'm using the code above to do my conversion, but I get the same coordinates for pnt.x and pnt.y no matter where the mouse location is.
I noticed that this method is deprecated, but I assume it should still work, right?
Does anyone have any good suggestion on how I can do this conversion?
Any kind of help is highly appreciated!

You should use [window mouseLocationOutsideOfEventStream] instead.
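For example, a minimal sketch of both approaches (the convertRectFromScreen: variant assumes OS X 10.7 or later):
// Current mouse location in the window's own coordinate system:
NSPoint pnt = [[self window] mouseLocationOutsideOfEventStream];
// Alternatively, the non-deprecated replacement for convertScreenToBase: is
// convertRectFromScreen:, using a zero-size rect to convert a point:
NSPoint screenPoint = [NSEvent mouseLocation];
NSPoint windowPoint = [[self window] convertRectFromScreen:NSMakeRect(screenPoint.x, screenPoint.y, 0, 0)].origin;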

Related

Scale NSView centered

I need to scale an NSView around its center.
I tried using CATransform3DMakeScale, and it worked to scale the view toward its corner, but I couldn't get it to scale around the center. I tried setting the layer's anchor point to (.5, .5), but that didn't work.
I tried using scaleUnitSquareToSize:, but I ran into the problem that it seems impossible to reset without a lot of work.
In the end, I need to be able to set the scale of an NSView to something like 0.8, have it zoom out around its center, and then set the scale back to 1 to reset it.
Added a frame fix for @fyasar's answer; works for me on macOS:
CGRect frame = layer.frame;
layer.anchorPoint = CGPointMake(.5, .5);
layer.frame = frame; // Fix the location shift caused by the anchor change
I also faced a similar problem. Be sure to execute this line before constructing the CABasicAnimation:
layer.anchorPoint = CGPointMake(.5, .5);
That worked for me.
Good luck!
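Putting the two answers together, here is a minimal sketch (assuming view is already layer-backed, i.e. view.wantsLayer is YES) of scaling around the center and resetting:
CALayer *layer = view.layer;
CGRect frame = layer.frame;
layer.anchorPoint = CGPointMake(0.5, 0.5);
layer.frame = frame; // compensate for the position shift caused by the anchor change
layer.transform = CATransform3DMakeScale(0.8, 0.8, 1.0); // zoom out around the center
// Later, to reset:
layer.transform = CATransform3DIdentity;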
If you can require 10.8, then you could call NSScrollView's method:
- (void)setMagnification:(CGFloat)magnification centeredAtPoint:(NSPoint)point
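For example, a short sketch (assuming your content lives inside an NSScrollView named scrollView):
NSPoint center = NSMakePoint(NSMidX(scrollView.bounds), NSMidY(scrollView.bounds));
[scrollView setMagnification:0.8 centeredAtPoint:center]; // zoom out around the center
[scrollView setMagnification:1.0 centeredAtPoint:center]; // reset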

Removing Glossy Effect on UITabBar

I know this is a common question, but it's quite different in my case.
I want an image to be placed when a tab is active, and I've done that successfully in application:didFinishLaunchingWithOptions: using this code:
[[UITabBar appearance] setSelectionIndicatorImage:[UIImage imageNamed:@"tabbar-active.png"]];
And that's what my app looks like.
Now I just want to remove that glossy effect, not the blue image on the bar.
Thanks in advance!
After digging a little, I found the solution. You simply need to create a UIImage object with an empty image.
[yourTabbar setSelectionIndicatorImage:[[UIImage alloc] init]];
That's it :)
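If you want to apply the same fix app-wide, the appearance proxy (iOS 5+) that the question already uses should work just as well:
[[UITabBar appearance] setSelectionIndicatorImage:[[UIImage alloc] init]];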

UIScrollView setContentOffset and animation

I want to call setContentOffset: with a different animation transition.
Now I use:
[UIView animateWithDuration:speed animations:^{
    [scrollView setContentOffset:offset];
}];
Can you help me achieve a custom animation while setting the offset?
I am not quite sure I understand what you are looking for. However, if you are looking for a UIScrollView subclass whose contentOffset is animated with a non-linear timing function, you might take a look at my MOScrollView. I use a CADisplayLink to animate the contentOffset. Please note that it might not work perfectly, as I have not yet used this class in production.
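To illustrate the general idea (not MOScrollView itself), here is a minimal, self-contained sketch; the class and property names are made up for this example. A CADisplayLink drives contentOffset with an ease-in-out curve:
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface OffsetAnimator : NSObject
@property (nonatomic, strong) UIScrollView *scrollView;
@property (nonatomic) CGPoint startOffset;
@property (nonatomic) CGPoint endOffset;
@property (nonatomic) CFTimeInterval startTime;
@property (nonatomic) CFTimeInterval duration;
@property (nonatomic, strong) CADisplayLink *displayLink;
- (void)setContentOffset:(CGPoint)offset duration:(CFTimeInterval)duration;
@end

@implementation OffsetAnimator

- (void)setContentOffset:(CGPoint)offset duration:(CFTimeInterval)duration {
    self.startOffset = self.scrollView.contentOffset;
    self.endOffset = offset;
    self.duration = duration;
    self.startTime = CACurrentMediaTime();
    self.displayLink = [CADisplayLink displayLinkWithTarget:self
                                                   selector:@selector(tick:)];
    [self.displayLink addToRunLoop:[NSRunLoop mainRunLoop]
                           forMode:NSRunLoopCommonModes];
}

- (void)tick:(CADisplayLink *)link {
    CGFloat t = MIN((CACurrentMediaTime() - self.startTime) / self.duration, 1.0);
    CGFloat eased = t * t * (3.0 - 2.0 * t); // smoothstep ease-in-out
    CGFloat x = self.startOffset.x + (self.endOffset.x - self.startOffset.x) * eased;
    CGFloat y = self.startOffset.y + (self.endOffset.y - self.startOffset.y) * eased;
    self.scrollView.contentOffset = CGPointMake(x, y);
    if (t >= 1.0) {
        [link invalidate]; // stop ticking once the animation completes
        self.displayLink = nil;
    }
}

@end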

Alpha Detection in Layer OK on Simulator, not iPhone

First, check out this very handy extension to CALayer from elsewhere on SO. It helps you determine if a point in a layer's contents-assigned CGImageRef is or isn't transparent.
n.b.: There is no guarantee that a layer's contents will be representable or will respond as if it were a CGImageRef. (This can have implications for broader use of the referenced extension, granted.) In my case, however, I know that the layers I'm testing have contents that were assigned a CGImageRef. (Hopefully this can't change out from under me after assignment! Plus, I notice that contents is retained.)
OK, back to the problem at hand. Here's how I'm using the extension. For starters, I've changed the selector from containsPoint: to containsNonTransparentPoint: (I need to keep the original method around.)
Now, I have a UIImageView subclass that uses seven CALayer objects. These are used for opacity-based animations (pulsing/glowing effects and on/off states). Each of those seven layers has a known CGImageRef in its contents that effectively "covers" (air quotes) one part of the entire view with its own swath of color. The rest of each image in its respective layer is transparent.
In the subclass, I register for single tap gestures. When one arrives, I walk through my layers to see which one was effectively tapped (that is, which one has a non-transparent point where I tapped, first one found wins) and then I can do whatever needs doing.
Here's how I handle the gesture:
- (IBAction)handleSingleTap:(UIGestureRecognizer *)sender {
    CGPoint tapPoint = [sender locationInView:sender.view];

    // Flip y so 0,0 is at lower left. (Required by layer method below.)
    tapPoint.y = sender.view.bounds.size.height - tapPoint.y;

    // Figure out which layer was effectively tapped. First match wins.
    for (CALayer *layer in myLayers) {
        if ([layer containsNonTransparentPoint:tapPoint]) {
            NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);
            // We got our layer! Do something useful with it.
            return;
        }
    }
}
The good news? All of this works beautifully on the iPhone Simulator with iOS 4.3.2. (FWIW, I'm on Lion running Xcode 4.1.)
However, on my iPhone 4 (with iOS 4.3.3), it doesn't even come close! None of my taps seem to match up with any of the layers I'd expect them to.
Even if I try the suggestion to use CGContextSetBlendMode when drawing into the 1x1 pixel context, no dice.
I am hoping it's pilot error, but I have yet to figure out what the disparity is. The taps do have a pattern but not a discernible one.
Perhaps there's a data boundary issue. Perhaps I have to do something other than flip the y coordinate to the lower-left of the image. Just not sure yet.
If anyone can please shed some light on what might be amiss, I would be most appreciative!
UPDATE, 22 September 2011: First ah-ha moment acquired! The problem isn't Simulator-vs-iPhone; it's Retina vs. non-Retina! The same symptoms occur in the Simulator when using the Retina version. Perhaps the solution centers around scaling (CTM?) in some way/shape/form. The Quartz 2D Programming Guide also advises that "iOS applications should use UIGraphicsBeginImageContextWithOptions." I feel like I'm very close to the solution here!
OK! First, the problem wasn't Simulator-vs-iPhone. Rather, it was Retina vs. Non-Retina. The same symptoms occur in the Simulator when using the Retina version. Right away, one starts to think the solution has to do with scaling.
A very helpful post over on the Apple Dev Quartz 2D forum (along similar "be mindful of scaling" lines) steered me toward a solution. Now, I'm the first to admit, this solution is NOT pretty, but it does work for Retina and Non-Retina cases.
With that, here's the revised code for the aforementioned CALayer extension:
//
// Checks the image at a point (and at a particular scale factor) for transparency.
// The point must be expressed with its origin at the lower-left.
//
BOOL ImagePointIsTransparent(CGImageRef image, CGFloat scale, CGPoint point) {
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1,
                                                 NULL, kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(-point.x, -point.y,
        CGImageGetWidth(image)/scale, CGImageGetHeight(image)/scale), image);
    CGContextRelease(context);
    CGFloat alpha = pixel[0]/255.0;
    return (alpha < 0.01);
}
@implementation CALayer (Extensions)

- (BOOL)containsNonTransparentPoint:(CGPoint)point scale:(CGFloat)scale {
    if (CGRectContainsPoint(self.bounds, point)) {
        if (!ImagePointIsTransparent((CGImageRef)self.contents, scale, point)) {
            return YES;
        }
    }
    return NO;
}

@end
In short, we need to know about the scale. If we divide the image width and height by that scale, ta-dah, the hit test now works on Retina and Non-Retina devices!
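For completeness, here's a sketch of the revised call site in the gesture handler from the question, now passing the screen scale along:
CGFloat scale = [UIScreen mainScreen].scale; // 2.0 on Retina, 1.0 otherwise
for (CALayer *layer in myLayers) {
    if ([layer containsNonTransparentPoint:tapPoint scale:scale]) {
        NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);
        return;
    }
}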
What I don't like about this is the mess I've had to make of that poor selector, now called containsNonTransparentPoint:scale:. As mentioned in the question, there is never any guarantee of what a layer's contents will contain. In my case, I take care to use this only on layers whose contents hold a CGImageRef, but this won't fly in a more general/reusable case.
All this makes me wonder if CALayer is not the best place for this particular extension after all, at least in this new incarnation. Perhaps CGImage, with some layer smarts thrown in, would be cleaner. Imagine doing a hit test on a CGImage but returning the name of the first layer that had non-transparent content at that point. There's still the problem of not knowing which layers have CGImageRefs in them, so some hinting might be required. (Left as an exercise for yours truly and the reader!)
UPDATE: After some discussion with a developer at Apple, messing with layers in this fashion is in fact ill-advised. Contrary to what I previously learned (incorrectly?), multiple UIImageViews encapsulated within a UIView are the way to go here. (I always remember learning that you want to keep your views to a minimum. Perhaps in this case it isn't as big a deal.) Nevertheless, I'll keep this answer here for now, but will not mark it as correct. Once I try out and verify the other technique, I will share that here!

Enabling layer-backed views?

I keep seeing that in order to get transformations around the center point of a CALayer, you need to turn on layer backing, but I can't figure out how to do it! Please help...
I am surprised you didn't see that in the docs. Just send the view the setWantsLayer: message with YES, or, if you prefer dot syntax, do view.wantsLayer = YES; (note that wantsLayer is an NSView property, not a CALayer one).
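A minimal sketch (assuming myView is the NSView whose layer you want to transform):
myView.wantsLayer = YES; // myView is now layer-backed and has a CALayer
myView.layer.anchorPoint = CGPointMake(0.5, 0.5); // e.g., for center-based transforms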
