What are the most useful Core Graphics (CGRect) functions? - CGRectMake

I usually use the CGRectMake function throughout my code. Are there other handy functions I should know about?

Useful Core Graphics functions
NSLog(#"%#", CGRectCreateDictionaryRepresentation(rect)); :
Printing CGRect in NSLog
bool CGRectContainsPoint (
CGRect rect,
CGPoint point
); :
You can use this function to determine if a touch event falls within a set onscreen area, which can be very handy if you're using geometric elements that aren't based on separate UIViews.
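For example, here is a minimal sketch of hit-testing a touch against such a region; hitArea is a hypothetical CGRect property standing in for your geometric element:
// In a UIView subclass; `hitArea` is a hypothetical CGRect property
// describing a region drawn directly into this view.
- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    CGPoint location = [[touches anyObject] locationInView:self];
    if (CGRectContainsPoint(self.hitArea, location)) {
        // The touch landed inside the region; react accordingly.
    }
}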
bool CGRectContainsRect (
CGRect rect1,
CGRect rect2
); :
The function takes two arguments. The first rectangle is always the surrounding item; the function returns true only if the second rectangle falls fully inside the first.
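A quick sketch of the argument order, with made-up values:
CGRect outer = CGRectMake(0.0f, 0.0f, 100.0f, 100.0f);
CGRect inner = CGRectMake(10.0f, 10.0f, 50.0f, 50.0f);
// The surrounding rect comes first, the candidate second.
bool contained = CGRectContainsRect(outer, inner); // true
bool reversed = CGRectContainsRect(inner, outer);  // false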
bool CGRectIntersectsRect (
CGRect rect1,
CGRect rect2
); :
If you want to see whether two UIViews overlap, use CGRectIntersectsRect instead. It takes two rectangles, in either order, and checks whether the two rectangles have any point of intersection.
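For instance, a sketch comparing the frames of two views; viewA and viewB are placeholders and are assumed to be siblings, so their frames share a coordinate space:
if (CGRectIntersectsRect(viewA.frame, viewB.frame)) {
    // The views overlap on screen.
}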
CGRect CGRectIntersection (
CGRect r1,
CGRect r2
); :
This also takes two arguments, both CGRects, again in either order, and returns a CGRect structure: the actual intersection of the two rectangles. There is, as you'd expect, a corresponding CGRectUnion that performs the opposite operation, returning the smallest rectangle containing both arguments. CGRectIntersection proves handy when you not only want to test for an intersection but also use the actual rectangle shared by two views. If the rectangles do not intersect, it returns the null rectangle, which you can test for with CGRectIsNull:
CGRect testRect = CGRectIntersection(rect1, rect2);
if (CGRectIsNull(testRect)) {
    // the rectangles did not intersect
}
CGRect CGRectOffset (
CGRect rect,
CGFloat dx,
CGFloat dy
); :
When you want to move views around the screen, the CGRectOffset function comes in handy. It returns a rectangle that has been offset by (dx, dy), providing a simple translation from one point to another. You don't have to calculate a new center or frame by hand; you can simply set the frame to the offset rectangle.
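A one-line sketch, with myView standing in for any view you want to nudge:
// Move myView 10 points right and 20 points down.
myView.frame = CGRectOffset(myView.frame, 10.0f, 20.0f);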
CGRect CGRectInset (
CGRect rect,
CGFloat dx,
CGFloat dy
); :
CGRectInset is probably my favorite of the Core Graphics rect utilities. It lets you expand or contract a rectangle programmatically: you pass it an offset pair, and the function adjusts the rectangle accordingly. The width is inset by dx on both the left and the right, producing a total difference of twice dx; likewise, the height is inset by dy on the top and the bottom, for a total difference of twice dy. Negative values expand the rectangle instead of contracting it.
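A small illustration with invented values:
CGRect rect = CGRectMake(0.0f, 0.0f, 100.0f, 100.0f);
// Contract by 10 points on every side: the origin moves in by (10, 10)
// and the width and height each shrink by 20.
CGRect smaller = CGRectInset(rect, 10.0f, 10.0f);  // {{10, 10}, {80, 80}}
// Negative insets expand instead.
CGRect larger = CGRectInset(rect, -10.0f, -10.0f); // {{-10, -10}, {120, 120}}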
Hope you like it.
Reference: what-the-most-useful-core-graphics-cgrect-functions

Related

SKSpriteNode not showing movement when told to move?

As explained shortly, I believe that I need to convert SKScene coordinates to SKView coordinates. So my question reduces to "How do I do that?"
Specifics:
I have a .sks file from which I manually extract size and position data for an SKSpriteNode. The node is supposed to move, and its movement is inhibited only by the surrounding wall, off which it bounces whenever it hits it.
This SKSpriteNode's changing position within the wall is based on its anchorPoint = (0.5, 0.5).
Every time the object moves, I call this, for example:
func drawBall() {
    newPosition = CGPoint(x: ballPosX, y: ballPosY)
    moveTO = SKAction.move(to: newPosition, duration: TimeInterval(0))
    myBall!.run(moveTO)
}
The fact that I do not see physical movement indicates that I may have a coordinate problem.
Specifically, the fact that the position of the SKSpriteNode is based on its anchorPoint = (0.5, 0.5) suggests that I am dealing with SKScene coordinates and need to convert them to SKView coordinates.
In short, how do I do that? Or, if I have some other error, how do I correct it?

Cocoa NSPoint to Quartz NSPoint - Flip Y coordinate

In macOS programming, we know that:
Quartz uses a coordinate space where the origin (0, 0) is at the top-left of the primary display. Increasing y goes down.
Cocoa uses a coordinate space where the origin (0, 0) is the bottom-left of the primary display and increasing y goes up.
Now I am using a Quartz API, CGImageCreateWithImageInRect, to crop an image; it takes a rectangle as a parameter. The rect's Y origin comes from Cocoa's mouseDown events.
Thus I get crops at inverted locations...
I tried this code to flip the Y coordinate in my cropRect:
//Get the point in the mouseDragged event
NSPoint currentPoint = [self.view convertPoint:[theEvent locationInWindow] fromView:nil];
CGRect nsRect = CGRectMake(currentPoint.x, currentPoint.y,
                           circleSizeW, circleSizeH);
//Now flip the Y, please!
CGFloat flippedY = self.imageView.frame.size.height - NSMaxY(nsRect);
CGRect cropRect = CGRectMake(currentPoint.x, flippedY, circleSizeW, circleSizeH);
But for areas near the top, I get wrong flippedY coordinates.
If I click near the top edge of the view, I get flippedY = 510 to 515.
At the top edge it should be between 0 and 10. :-|
Can someone point me to a correct and reliable way to flip the Y coordinate in such circumstances? Thank you!
Here is a sample project on GitHub highlighting the issue:
https://github.com/kamleshgk/SampleMacOSApp
As Charles mentioned, the Core Graphics API you are using requires coordinates relative to the image (not the screen). The important thing is to convert the event location from window coordinates to the view which most closely corresponds to the image's location and then flip it relative to that same view's bounds (not frame). So:
NSView *relevantView = /* only you know which view */;
NSPoint currentPoint = [relevantView convertPoint:[theEvent locationInWindow] fromView:nil];
// currentPoint is in Cocoa's y-up coordinate system, relative to relevantView, which hopefully corresponds to your image's location
currentPoint.y = NSMaxY(relevantView.bounds) - currentPoint.y;
// currentPoint is now flipped to be in Quartz's y-down coordinate system, still relative to relevantView/your image
The rect you pass to CGImageCreateWithImageInRect should be in coordinates relative to the input image's size, not screen coordinates. Assuming the size of the input image matches the size of the view to which you've converted your point, you should be able to achieve this by subtracting the rect's corner from the image's height, rather than the screen height.
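Putting it together, a sketch under those assumptions; sourceImage (a CGImageRef) is a placeholder, and circleSizeW/circleSizeH come from the question:
NSPoint currentPoint = [relevantView convertPoint:[theEvent locationInWindow] fromView:nil];
// Flip from Cocoa's y-up to Quartz's y-down, relative to relevantView's bounds.
currentPoint.y = NSMaxY(relevantView.bounds) - currentPoint.y;
// Build the crop rect in image coordinates. This assumes the image's pixel
// size matches relevantView's bounds; otherwise, scale accordingly.
CGRect cropRect = CGRectMake(currentPoint.x, currentPoint.y, circleSizeW, circleSizeH);
CGImageRef cropped = CGImageCreateWithImageInRect(sourceImage, cropRect);
// ...use cropped, then release it...
CGImageRelease(cropped);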

How can CALayer image edges be prevented from stretching during resize?

I am setting the .contents of a CALayer to a CGImage, derived from drawing into an NSBitmapImageRep.
As far as I understand from the docs and WWDC videos, setting the layer's .contentsCenter to an NSRect like {{0.5, 0.5}, {0, 0}}, in combination with a .contentsGravity of kCAGravityResize should lead to Core Animation resizing the layer by stretching the middle pixel, the top and bottom horizontally, and the sides vertically.
This very nearly works, but not quite. The layer resizes more-or-less correctly, but if I draw lines at the edge of the bitmap, as I resize the window the lines can be seen to fluctuate in thickness very slightly. It's subtle enough to be barely a problem until the resizing gets down to around 1/4 of the original layer's size, below which point the lines can thin and disappear altogether. If I draw the bitmaps multiple times at different sizes, small differences in line thickness are very apparent.
I originally suspected a pixel-alignment issue, but it can't be that, because the thickness of the stationary left-hand edge (for example) fluctuates as I resize the right-hand edge. It happens on both 1x and 2x screens.
Here's some test code. It's the updateLayer method from a layer-backed NSView subclass (I'm using the alternative non-drawRect draw path):
- (void)updateLayer {
    id image = [self imageForCurrentScaleFactor]; // CGImage
    self.layer.contents = image;
    // self.backingScaleFactor is set from the window's backingScaleFactor
    self.layer.contentsScale = self.backingScaleFactor;
    self.layer.contentsCenter = NSMakeRect(0.5, 0.5, 0, 0);
    self.layer.contentsGravity = kCAGravityResize;
}
And here's some test drawing code (creating the image supplied by imageForCurrentScaleFactor above):
// rect, scaleFactor, and insetRect are supplied by the surrounding code.
CGFloat width = rect.size.width;
CGFloat height = rect.size.height;
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:width * scaleFactor
                  pixelsHigh:height * scaleFactor
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];
[imageRep setSize:rect.size];
[NSGraphicsContext saveGraphicsState];
NSGraphicsContext *ctx = [NSGraphicsContext graphicsContextWithBitmapImageRep:imageRep];
[NSGraphicsContext setCurrentContext:ctx];
[[NSColor whiteColor] setFill];
[NSBezierPath fillRect:rect];
[[NSColor blackColor] setStroke];
[NSBezierPath setDefaultLineWidth:1.0f];
[NSBezierPath strokeRect:insetRect];
[NSGraphicsContext restoreGraphicsState];
// The image for CALayer.contents is now [imageRep CGImage].
The solution (if you're talking about the problem I think you're talking about) is to have a margin of transparent pixels forming the outside edges of the image. One pixel thick, all the way around, will do it. The reason is that the problem (if it's the problem I think it is) arises only with visible pixels that touch the outside edge of the image. Therefore the idea is to have no visible pixels touch the outside edge of the image.
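Applied to the drawing code above, that might look like this sketch; the key change is drawing into a rect inset from the full bitmap so the outermost ring of pixels stays transparent:
// Leave a one-pixel transparent margin around the whole bitmap so
// no visible pixels touch its outside edge.
NSRect drawRect = NSInsetRect(rect, 1.0, 1.0);
[[NSColor whiteColor] setFill];
[NSBezierPath fillRect:drawRect];
[[NSColor blackColor] setStroke];
[NSBezierPath strokeRect:NSInsetRect(drawRect, 0.5, 0.5)];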
I have found a practical answer, but would be interested in comments filling in detail from anyone who knows how this works.
The problem did prove to be to do with how the CALayer was being stretched. I was drawing into a bitmap of arbitrary size, on the basis that (as the CALayer docs suggest) use of a .contentsCenter with zero width and height would in effect do a nine-part-image stretch, selecting the single centre pixel as the central stretching portion. With this bitmap as a layer's .contents, I could then resize the CALayer to any desired size (down or up).
Turns out that the 'arbitrary size' was the problem. Something odd happens in the way CALayer stretches the edge portions (at least when resizing down). By instead making the initial frame for drawing tiny (i.e. just big enough to fit my outline drawing plus a couple of pixels for the central stretching portion), nothing spurious makes its way into the edges during stretching.
The bitmap stretches properly if created with rect just big enough to fit the contents and stretchable center pixel, ie.:
NSRect rect = NSMakeRect(0, 0, lineWidth * 2 + 2, lineWidth * 2 + 2);
This tiny image stretches to any larger size perfectly.

The transform property in CGPathAddEllipseInRect

I am using CGPathAddEllipseInRect to draw a circle and then using that in CAKeyframeAnimation. My issue is that the animation always starts in the same spot. I thought that I could do the following with a CGAffineTransform to make it start in a different point:
CGAffineTransform temp = CGAffineTransformMakeRotation(M_PI / 2);
CGPathAddEllipseInRect(animationPath , &temp, rect);
I do not know what this is doing. When it runs, I don't even see this portion of the animation. It is doing something offscreen. Any help understanding this would be great.
The rotation happens around the origin (0,0) by default, but you want to rotate around the center of the circle, so you have to do additional transformations:
float midX = CGRectGetMidX(rect);
float midY = CGRectGetMidY(rect);
CGAffineTransform t =
    CGAffineTransformConcat(
        CGAffineTransformConcat(
            CGAffineTransformMakeTranslation(-midX, -midY),
            CGAffineTransformMakeRotation(angle)),
        CGAffineTransformMakeTranslation(midX, midY));
CGPathAddEllipseInRect(animationPath, &t, rect);
Essentially, this chains three transformations: First, the circle is moved to the origin (0,0), then the rotation is applied and afterwards it is moved back to its original position. I've made a little visualization to illustrate the effect:
I chose a square instead of a circle and 45° instead of 90° to make the rotation easier to see, but the principle is the same.

Core Graphics stroke width is inconsistent between lines & arcs?

The use case: I am subclassing UIView to create a custom view that "mattes" a UIImage with a rounded rectangle (clips the image to a rounded rect). The code is working; I've used a method similar to this question.
However, I want to stroke the clipping path to create a "frame". This works, but the arc strokes look markedly different than the line strokes. I've tried adjusting the stroke widths to greater values (I thought it was pixelation at first), but the anti-aliasing seems to handle arcs and lines differently.
Here's what I see on the simulator:
This is the code that draws it:
CGContextSetRGBStrokeColor(context, 0, 0, 0, STROKE_OPACITY);
CGContextSetLineWidth(context, 2.0f);
CGContextAddPath(context, roundRectPath);
CGContextStrokePath(context);
Anyone know how to make these line up smoothly?
… but the anti-aliasing seems to handle arcs and lines differently.
No, it doesn't.
Your stroke width is consistent—it's 2 pt all the way around.
What's wrong is that you have clipped to a rectangle, and your shape's sides are right on top of the edges of this rectangle, so only the halves of the sides that are inside the rectangle are getting drawn. That's why the edges appear only 1 px wide.
The solution is either not to clip, to grow your clipping rectangle by 2 pt on each axis before clipping to it, or to move your shape's edges inward by 1 pt on each side. (ETA: Or, yeah, do an inner stroke.)
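For the third option, a sketch based on the question's code; cornerRadius is a placeholder, and insetting by 1 pt (half the 2 pt stroke width) keeps the whole stroke inside the clipping rectangle:
// Inset the path by half the stroke width so the stroke stays inside the clip.
CGRect strokeRect = CGRectInset(self.bounds, 1.0f, 1.0f);
CGPathRef roundRectPath = [UIBezierPath bezierPathWithRoundedRect:strokeRect
                                                     cornerRadius:cornerRadius].CGPath;
CGContextSetRGBStrokeColor(context, 0, 0, 0, STROKE_OPACITY);
CGContextSetLineWidth(context, 2.0f);
CGContextAddPath(context, roundRectPath);
CGContextStrokePath(context);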
Just in case anyone is trying to do the same thing I am (round rect an image):
The UIImageView class has a layer property, of type CALayer. CALayer already has this functionality built in (it was a little surprising to me that I couldn't find it anywhere):
UIImageView *thumbnailView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"foo.png"]];
thumbnailView.layer.masksToBounds = YES;
thumbnailView.layer.cornerRadius = 15.0f;
thumbnailView.layer.borderWidth = 2.0f;
[self.view addSubview:thumbnailView];
Also does the trick.
