How can I draw a scaled-up version of an NSBitmapImageRep?

I want to use NSBitmapImageRep to construct a 64x64 pixel sprite in code, and then draw it to the screen, blown up very large. The result would be very large "pixels" on the screen. Think old school Mario Bros. or Minecraft. How can I do this?
Edit: I want to draw to this off-screen bitmap and then render it later into a CALayer.

Open a new bitmap context with CGBitmapContextCreate and use
void CGContextSetInterpolationQuality (
CGContextRef c,
CGInterpolationQuality quality
);
to set the interpolation quality to kCGInterpolationNone.
Then draw the image into the context.
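A minimal Swift sketch of the whole pipeline, from NSBitmapImageRep to a blown-up CGImage ready for a CALayer. The checkerboard pattern and the 512×512 output size are placeholders for your own sprite data:

```swift
import Cocoa

// Build a 64x64 sprite in an NSBitmapImageRep (placeholder checkerboard).
let rep = NSBitmapImageRep(bitmapDataPlanes: nil, pixelsWide: 64, pixelsHigh: 64,
                           bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true,
                           isPlanar: false, colorSpaceName: .deviceRGB,
                           bytesPerRow: 0, bitsPerPixel: 0)!
for y in 0..<64 {
    for x in 0..<64 {
        // Replace with your own per-pixel sprite data.
        let on = ((x / 8) + (y / 8)) % 2 == 0
        rep.setColor(on ? .red : .black, atX: x, y: y)
    }
}

// Blow it up 8x with interpolation disabled so each source pixel
// becomes a crisp square instead of being smoothed.
let ctx = CGContext(data: nil, width: 512, height: 512,
                    bitsPerComponent: 8, bytesPerRow: 0,
                    space: CGColorSpaceCreateDeviceRGB(),
                    bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
ctx.interpolationQuality = .none          // kCGInterpolationNone
ctx.draw(rep.cgImage!, in: CGRect(x: 0, y: 0, width: 512, height: 512))
let bigImage = ctx.makeImage()            // ready to assign to a layer's contents
```

Note that per-pixel setColor is slow; for a real sprite you would write into the rep's bitmapData directly.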

Related

How to draw an NSImage, but fade out to the side (linear alpha gradient)?

I have an image that is generated as an NSImage, and then I draw it into my NSView subclass, using a custom draw() method.
I want to modify this custom view so that the image is drawn in the same place, but it fades out on the side. That is, it's drawn as a linear gradient, from alpha=1.0 to alpha=0.0.
My best guess is one of the draw() variants with NSCompositingOperation might help me do what I want, but I'm having trouble understanding how they could do this. I'm not a graphics expert, and the NSImage and NSCompositingOperation docs seem to be using different terminology.
The quick version: pretty much this question but on macOS instead of Android.
You're on the right track. You'll want to use NSCompositingOperationDestinationOut to achieve this. That will effectively punch out the destination based on the alpha of the source being drawn. So if you first draw your image and then draw a gradient from alpha 0.0 to 1.0 with the .destinationOut operation on top, you'll end up with your image fading from alpha 1.0 to 0.0.
Because that punch out happens to whatever is already in the backing store where the gradient is being drawn to, you'll want to be careful where/how you use it.
If you want to do this all within the drawing of the view, you should do the drawing of your image and the punch out gradient within a transparency layer using CGContextBeginTransparencyLayer and CGContextEndTransparencyLayer to prevent punching out anything else.
You could also first create a new NSImage to represent this faded variant, either using the drawingHandler constructor or NSCustomImageRep and doing the same image & gradient drawing within there (without worrying about transparency layers). And then draw that image into your view, or simply use that image with an NSImageView.
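The in-view approach might look like this sketch, assuming a hypothetical NSView subclass with an `image` property; the blend mode is set on the CGContext, which NSGradient drawing respects:

```swift
import Cocoa

class FadingImageView: NSView {
    var image: NSImage?  // assumed to be set elsewhere

    override func draw(_ dirtyRect: NSRect) {
        guard let image = image,
              let ctx = NSGraphicsContext.current?.cgContext else { return }

        // Do the image + punch-out inside a transparency layer so the
        // destination-out pass can't erase anything else in the backing store.
        ctx.beginTransparencyLayer(auxiliaryInfo: nil)

        image.draw(in: bounds)

        // A left-to-right gradient from alpha 0 to alpha 1, drawn with
        // destination-out, leaves the image fading from opaque to clear.
        let gradient = NSGradient(starting: NSColor.black.withAlphaComponent(0),
                                  ending: .black)!
        ctx.setBlendMode(.destinationOut)
        gradient.draw(in: bounds, angle: 0)

        ctx.endTransparencyLayer()
    }
}
```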

NSBezierPath: how to invert the clip path?

I am drawing a rounded image in the centre of my custom view by creating a rounded NSBezierPath and adding it to the clipping region of the graphics context. This works well, but I'm wondering if I can improve the performance of drawing the view's background (the area outside the rounded, centred image) by inverting this clip and then performing the background draw (an NSGradient fill).
Can anyone suggest a method of inverting a clip path?
You won't improve the drawing performance of the background by doing this. If anything, using a more complex clipping path will slow down drawing.
If you want to draw objects over a background without significant overhead of redrawing the background, you could use Core Animation layers as I explained in my answer to this question.
Regardless of performance, this question comes up first when you try to invert a clip path.
So if that's what you're looking for:
Using NSBezierPath addClip - How to invert clip
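For reference, the usual trick (a hedged sketch, not from the linked answer verbatim) is to append the path to a rectangle covering the whole view and clip with the even-odd winding rule, which selects everything outside the inner path:

```swift
import Cocoa

// Clip to everything *outside* `roundPath` within `bounds`.
func clipOutside(_ roundPath: NSBezierPath, in bounds: NSRect) {
    let inverted = NSBezierPath(rect: bounds)   // covers the whole view
    inverted.append(roundPath)                  // the "hole"
    inverted.windingRule = .evenOdd             // outer rect minus inner path
    inverted.addClip()
}
```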

CGImage gets stretched in CALayer

I have an NSMutableArray of CGImage objects. All the images have different dimensions ( 80x20 px, 200x200 px, 10x10 px, etc...)
I'm trying to animate these in CALayer which has 256x256 pixels size. The animation works, but my CGImages get stretched to the dimensions of the CALayer. How could I prevent my CGImages from getting scaled?
Thank you.
To answer my own question: I had to set my layer's contentsGravity property to kCAGravityCenter, and that did the trick. The default value of contentsGravity is kCAGravityResize, which stretches the contents to fill the layer's bounds.
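In Swift this is a one-line fix; a sketch with an illustrative layer:

```swift
import QuartzCore

// Keep each CGImage at its natural size, centered in the 256x256 layer.
let layer = CALayer()
layer.frame = CGRect(x: 0, y: 0, width: 256, height: 256)
layer.contentsGravity = .center      // default is .resize, which stretches
// layer.contents = someCGImage      // assign your CGImage here
```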

Draw portion of image in CGBitmapContext

I have created a CGBitmapContext of 256 x 256 and I have a CGImageRef that I want to draw into this context. My goal is to crop the image (and ultimately create tiles of an image), but if I use CGContextDrawImage, Core Graphics scales the image to fit the context.
So the question is, how do I make sure that only a portion of the CGImageRef is drawn into the CGBitmapContext (no scaling)?
CGContextDrawImage takes a CGRect parameter. It sounds like you are passing the bounds of your context. Instead, try passing the bounds of the image, offset appropriately to draw the desired part of it.
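Both options sketched in Swift, assuming a 256×256 context `ctx` and a source CGImage `image` (names are illustrative):

```swift
import CoreGraphics

func drawTile(into ctx: CGContext, from image: CGImage) {
    // Option 1: crop the wanted region first, then draw it 1:1.
    if let tile = image.cropping(to: CGRect(x: 256, y: 0, width: 256, height: 256)) {
        ctx.draw(tile, in: CGRect(x: 0, y: 0, width: 256, height: 256))
    }

    // Option 2: draw the whole image at its natural size, offset so the
    // wanted region lands inside the context. Because the rect matches the
    // image's full dimensions, no scaling occurs; the rest is clipped away.
    ctx.draw(image, in: CGRect(x: -256, y: 0,
                               width: image.width, height: image.height))
}
```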

Zooming in on CIImage or NSImage?

I am taking photos from a webcam in my Cocoa application and I would like to zoom in on the centre of the image I receive. I start by receiving a CIImage and eventually save an NSImage.
How would I go about zooming in on either of these objects?
“Zoom” means a couple of things. You'll need at least to crop the image, and you may want to scale up. Or you may want to reserve scaling for display only.
CIImage
To crop it, use a CICrop filter.
To scale it, use either a CILanczosScaleTransform filter or a CIAffineTransform filter.
To crop and scale it, use both filters. Simply pass the output of the crop as the input of the scale.
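A sketch of that chain, cropping the centre half of the image and scaling it back up 2× (the crop rectangle is an illustrative choice):

```swift
import CoreImage

// Crop the centre 50% of `input` and scale the result up 2x.
func zoomCenter(of input: CIImage) -> CIImage {
    let extent = input.extent
    let cropRect = extent.insetBy(dx: extent.width / 4, dy: extent.height / 4)
    let cropped = input.cropped(to: cropRect)   // the CICrop step

    // Feed the crop's output into the Lanczos scale filter.
    let scale = CIFilter(name: "CILanczosScaleTransform")!
    scale.setValue(cropped, forKey: kCIInputImageKey)
    scale.setValue(2.0, forKey: kCIInputScaleKey)
    scale.setValue(1.0, forKey: kCIInputAspectRatioKey)
    return scale.outputImage!
}
```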
NSImage
Crop and scale are the same operation here. You'll need to create a new, empty NSImage of the desired size (whether it's the size of the source crop if you won't zoom or an increased size if you will zoom), lock focus on it, draw the crop rectangle from the source image into the bounding rectangle of the destination image, and unlock focus.
If the destination rectangle is not the same size as the source (crop) rectangle, it will scale; if they are the same size, it will simply copy or composite pixel-to-pixel.
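The lock-focus dance above can be sketched as a small helper (a hedged example; parameter names are illustrative):

```swift
import Cocoa

// Draw `cropRect` from `source` into a new image of `destSize`.
// If destSize differs from cropRect.size, the pixels are scaled;
// if they match, it's a straight pixel-for-pixel copy.
func cropAndScale(_ source: NSImage, cropRect: NSRect, destSize: NSSize) -> NSImage {
    let result = NSImage(size: destSize)
    result.lockFocus()
    source.draw(in: NSRect(origin: .zero, size: destSize),
                from: cropRect,
                operation: .copy,
                fraction: 1.0)
    result.unlockFocus()
    return result
}
```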