In macOS programming, we know that:
Quartz uses a coordinate space where the origin (0, 0) is at the top-left of the primary display, and increasing y goes down.
Cocoa uses a coordinate space where the origin (0, 0) is at the bottom-left of the primary display, and increasing y goes up.
Now I am using a Quartz API, CGImageCreateWithImageInRect, to crop an image; it takes a rectangle as a parameter. The rect's Y origin comes from Cocoa's mouseDown events.
Thus I get crops at inverted locations.
I tried this code to flip the Y coordinate in my cropRect:
// Get the point in the mouseDragged event
NSPoint currentPoint = [self.view convertPoint:[theEvent locationInWindow] fromView:nil];
CGRect nsRect = CGRectMake(currentPoint.x, currentPoint.y,
                           circleSizeW, circleSizeH);
// Now flip the Y, please!
CGFloat flippedY = self.imageView.frame.size.height - NSMaxY(nsRect);
CGRect cropRect = CGRectMake(currentPoint.x, flippedY, circleSizeW, circleSizeH);
But for areas near the top, I get wrong flippedY coordinates.
If I click near the top edge of the view, I get flippedY = 510 to 515.
At the top edge it should be between 0 and 10. :-|
Can someone point me to a correct and reliable way to flip the Y coordinate in such circumstances? Thank you!
Here is a sample project on GitHub highlighting the issue:
https://github.com/kamleshgk/SampleMacOSApp
As Charles mentioned, the Core Graphics API you are using requires coordinates relative to the image (not the screen). The important thing is to convert the event location from window coordinates to the view which most closely corresponds to the image's location and then flip it relative to that same view's bounds (not frame). So:
NSView *relevantView = /* only you know which view */;
NSPoint currentPoint = [relevantView convertPoint:[theEvent locationInWindow] fromView:nil];
// currentPoint is in Cocoa's y-up coordinate system, relative to relevantView, which hopefully corresponds to your image's location
currentPoint.y = NSMaxY(relevantView.bounds) - currentPoint.y;
// currentPoint is now flipped to be in Quartz's y-down coordinate system, still relative to relevantView/your image
The rect you pass to CGImageCreateWithImageInRect should be in coordinates relative to the input image's size, not screen coordinates. Assuming the size of the input image matches the size of the view to which you've converted your point, you should be able to achieve this by subtracting the rect's corner from the image's height, rather than the screen height.
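Putting the pieces together, a minimal sketch (circleSizeW, circleSizeH, and imageView are carried over from the question; sourceCGImage is a hypothetical stand-in for the CGImage being cropped, and the image's pixel size is assumed to match the view's bounds):

- (void)mouseDragged:(NSEvent *)theEvent
{
    // Convert from window coordinates into the image view's own (y-up) space.
    NSPoint currentPoint = [self.imageView convertPoint:[theEvent locationInWindow] fromView:nil];

    // Flip against the view's bounds (not frame) to get a y-down coordinate.
    currentPoint.y = NSMaxY(self.imageView.bounds) - currentPoint.y;

    // The crop rect is now in the image's y-down space. If the image's pixel
    // size differs from the view's bounds, scale the rect accordingly.
    CGRect cropRect = CGRectMake(currentPoint.x, currentPoint.y, circleSizeW, circleSizeH);
    CGImageRef cropped = CGImageCreateWithImageInRect(sourceCGImage, cropRect);
    // ...use the cropped image, e.g. wrap it in an NSImage...
    CGImageRelease(cropped);
}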
SKSpriteNode not showing movement when told to move?
As explained below, I believe that I need to convert SKScene coordinates to SKView coordinates. So my question reduces to "How do I do that?"
Specifics:
I have a .sks file from which I manually extract size and position data for an SKSpriteNode as it's supposed to move; that movement is constrained only by the surrounding wall, off which it bounces whenever it hits it.
This SKSpriteNode's changing position within the wall is based on its anchorPoint = (0.5, 0.5).
Every time the object moves, I call this, for example:
func drawBall() {
    // Instantly move the ball sprite to its newly computed position.
    newPosition = CGPoint(x: ballPosX, y: ballPosY)
    moveTO = SKAction.move(to: newPosition, duration: TimeInterval(0))
    myBall!.run(moveTO)
}
The fact that I do not see physical movement indicates that I may have a coordinate problem.
Specifically, the fact that the position of the SKSpriteNode is based on its anchorPoint = (0.5, 0.5) suggests that I am dealing with SKScene coordinates and need to convert them to SKView coordinates.
In short, how do I do that? Or, if I have some other error, how do I correct it?
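For what it's worth, SKView itself provides convertPoint:fromScene: and convertPoint:toScene: for exactly this mapping. A minimal Objective-C sketch (scene and myBall are assumed from the question; the scene must already be presented by the view):

#import <SpriteKit/SpriteKit.h>

// Scene (y-up, anchor-relative) -> hosting view (y-down) coordinates:
SKView *skView = scene.view;
CGPoint viewPoint = [skView convertPoint:myBall.position fromScene:scene];

// And back, e.g. for mapping a click/touch location into the scene:
CGPoint scenePoint = [skView convertPoint:viewPoint toScene:scene];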
In my NSView subclass's drawRect:, I stroke a number of NSBezierPaths. I would like the lines drawn by these strokes to have the exact same width, preferably just a couple of pixels wide, no matter the scaling of the view. Here's my drawRect: implementation:
- (void)drawRect:(NSRect)dirtyRect
{
    // Convert a 1x1 size from window space to find the view's current scale.
    NSSize x = [self convertSize:NSMakeSize(1, 1) fromView:nil];
    printf("size = %f %f\n", x.width, x.height);
    for (NSBezierPath *path in self.paths) {
        [path setLineWidth:x.width];
        [path stroke];
    }
}
Here's a screenshot of what I am seeing:
Can anyone suggest how I can get the crisp, consistent path outlines that I am looking for?
Thanks.
Try to match the exact pixels of the device (more difficult since the iPhone 5).
Do not use coordinates on half points, like 0.5 (they work on Retina, but on non-Retina displays they are unsharp).
The line width extends half to the left (or up) and half to the right (or down).
So if you have a lineWidth of 2 and coordinates at integer values, it should be sharp.
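A small sketch of that rule in AppKit terms (assuming a 1x display, where 1 point equals 1 pixel):

// An even line width centered on integer coordinates covers whole pixels,
// so it renders sharp.
NSBezierPath *path = [NSBezierPath bezierPath];
[path moveToPoint:NSMakePoint(10.0, 10.0)];   // integer coordinates
[path lineToPoint:NSMakePoint(100.0, 10.0)];
[path setLineWidth:2.0];                      // 1 px falls on each side
[path stroke];

// An odd (1 pt) width should instead be centered on a half point:
NSBezierPath *hairline = [NSBezierPath bezierPath];
[hairline moveToPoint:NSMakePoint(10.0, 20.5)];
[hairline lineToPoint:NSMakePoint(100.0, 20.5)];
[hairline setLineWidth:1.0];
[hairline stroke];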
I usually use the CGRectMake function in all my code. Are there other useful, handy functions?
Useful Core Graphics functions
NSLog(#"%#", CGRectCreateDictionaryRepresentation(rect)); :
Printing CGRect in NSLog
bool CGRectContainsPoint (
    CGRect rect,
    CGPoint point
);
You can use this function to determine whether a touch event falls within a set onscreen area, which can be very handy if you're using geometric elements that aren't based on separate UIViews.
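For instance (touch, self.view, and the hot-zone rect are hypothetical, not from the original answer):

CGRect hotZone = CGRectMake(20, 20, 100, 44);
CGPoint touchPoint = [touch locationInView:self.view];
if (CGRectContainsPoint(hotZone, touchPoint)) {
    // The touch landed inside the hot zone.
}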
bool CGRectContainsRect (
    CGRect rect1,
    CGRect rect2
);
The function takes two arguments. The first rectangle is always the surrounding item; the second either falls fully inside the first or it does not.
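For example (containerView and subview are hypothetical views, with subview a direct child of containerView):

// YES only if the subview lies entirely within its container.
if (CGRectContainsRect(containerView.bounds, subview.frame)) {
    // Fully contained.
}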
bool CGRectIntersectsRect (
    CGRect rect1,
    CGRect rect2
);
If you want to see whether two UIViews overlap, use CGRectIntersectsRect instead. It takes two rectangles, in any order, and checks whether they have any point of intersection.
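For example (viewA and viewB are hypothetical sibling views sharing a superview):

// YES if the two frames overlap anywhere, even partially.
if (CGRectIntersectsRect(viewA.frame, viewB.frame)) {
    // The views overlap.
}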
CGRect CGRectIntersection (
    CGRect r1,
    CGRect r2
);
This also takes two arguments, both CGRects, again in any order, and returns a CGRect structure: the actual intersection of the two. There is, as you'd expect, a CGRectUnion function that does the opposite. CGRectIntersection proves handy when you not only want to test for an intersection but also use the actual rectangle shared by two views.
CGRect testRect = CGRectIntersection(rect1, rect2);
if (CGRectIsNull(testRect)) {
    // ...some result...
}
CGRect CGRectOffset (
    CGRect rect,
    CGFloat dx,
    CGFloat dy
);
When you want to move views around the screen, the CGRectOffset function comes in handy. It returns a rectangle that has been offset by (dx, dy), providing a simple translation from one point to a new point. You don't have to calculate a new center or frame by hand; you can just update the frame to the new offset.
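For example (someView is a hypothetical view):

// Nudge the view 10 pt right and 20 pt down in one call.
someView.frame = CGRectOffset(someView.frame, 10, 20);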
CGRect CGRectInset (
    CGRect rect,
    CGFloat dx,
    CGFloat dy
);
CGRectInset is probably my favorite of the Core Graphics rect utilities. It lets you expand or contract a rectangle programmatically: you pass it an offset pair, and the function adjusts the rectangle accordingly. The width is inset by dx on both the left and right, for a total difference of two times dx; the height is inset by dy for a total difference of twice dy.
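For example, a sketch that insets a view's bounds by a uniform 8 pt margin (self.view is assumed to be a view controller's view):

// Each dimension shrinks by twice the inset (8 pt on each side).
CGRect padded = CGRectInset(self.view.bounds, 8, 8);
// padded.size.width  == bounds width  - 16
// padded.size.height == bounds height - 16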
Hope you like it.
Reference: what-the-most-useful-core-graphics-cgrect-functions
The use case: I am subclassing UIView to create a custom view that "mattes" a UIImage with a rounded rectangle (clips the image to a rounded rect). The code is working; I've used a method similar to this question.
However, I want to stroke the clipping path to create a "frame". This works, but the arc strokes look markedly different than the line strokes. I've tried adjusting the stroke widths to greater values (I thought it was pixelation at first), but the anti-aliasing seems to handle arcs and lines differently.
Here's what I see on the simulator:
This is the code that draws it:
CGContextSetRGBStrokeColor(context, 0, 0, 0, STROKE_OPACITY);
CGContextSetLineWidth(context, 2.0f);
CGContextAddPath(context, roundRectPath);
CGContextStrokePath(context);
Anyone know how to make these line up smoothly?
… but the anti-aliasing seems to handle arcs and lines differently.
No, it doesn't.
Your stroke width is consistent—it's 2 pt all the way around.
What's wrong is that you have clipped to a rectangle, and your shape's sides are right on top of the edges of this rectangle, so only the halves of the sides that are inside the rectangle are getting drawn. That's why the edges appear only 1 px wide.
The solution is either not to clip, to grow your clipping rectangle by 2 pt on each axis before clipping to it, or to move your shape's edges inward by 1 pt on each side. (ETA: Or, yeah, do an inner stroke.)
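A sketch of that last option (matteRect and cornerRadius are hypothetical names for the rect and radius behind the question's roundRectPath; CGPathCreateWithRoundedRect requires iOS 7 or later):

// Pull the stroked shape in by half the 2 pt line width so the whole
// stroke stays inside the clipping rectangle.
CGRect strokeRect = CGRectInset(matteRect, 1.0, 1.0);
CGPathRef strokePath = CGPathCreateWithRoundedRect(strokeRect,
                                                   cornerRadius - 1.0,
                                                   cornerRadius - 1.0,
                                                   NULL);
CGContextSetRGBStrokeColor(context, 0, 0, 0, STROKE_OPACITY);
CGContextSetLineWidth(context, 2.0f);
CGContextAddPath(context, strokePath);
CGContextStrokePath(context);
CGPathRelease(strokePath);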
Just in case anyone is trying to do the same thing I am (round rect an image):
The UIImageView class has a layer property of type CALayer, and CALayer already has this functionality built in (it was a little surprising to me that I couldn't find it anywhere):
UIImageView *thumbnailView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"foo.png"]];
thumbnailView.layer.masksToBounds = YES;
thumbnailView.layer.cornerRadius = 15.0f;
thumbnailView.layer.borderWidth = 2.0f;
[self.view addSubview:thumbnailView];
Also does the trick.
I've got a simple test app with a custom view (set up in Interface Builder) with its origin at (20, 20). When I get a mouse event at the lowest, left-most point in the view, the event's location is reported as (20, 21) and the converted point as (0, 1). I'm using Pixie to make sure I'm clicking right at the lower-left corner. (If I move one pixel down, I get nothing, indicating I'm outside the view.)
Here's the mouseDown code:
- (void)mouseDown:(NSEvent *)e
{
    NSPoint pt = [e locationInWindow];
    NSLog(@"Location in Window: %.0f, %.0f", pt.x, pt.y);
    pt = [self convertPoint:pt fromView:nil];
    NSLog(@"Converted Point: %.0f, %.0f", pt.x, pt.y);
}
Can anyone explain why the y position appears to be off by one?
This is correct behavior. From the Cocoa Drawing Guide:
Important: Cocoa event objects return y coordinate values that are 1-based instead of 0-based. Thus, a mouse click on the bottom left corner of a window or view would yield the point (0, 1) in Cocoa and not (0, 0). Only y-coordinates are 1-based.
Fencepost error? In your code, in the library, or in your use of the library? Or rounding errors, since you are doing integral pixel reporting of floating-point data?