How to save PNG file from NSImage (retina issues) - macos

I'm doing some operations on images and after I'm done, I want to save the image as PNG on disk. I'm doing the following:
+ (void)saveImage:(NSImage *)image atPath:(NSString *)path {
[image lockFocus] ;
NSBitmapImageRep *imageRepresentation = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0.0, 0.0, image.size.width, image.size.height)] ;
[image unlockFocus] ;
NSData *data = [imageRepresentation representationUsingType:NSPNGFileType properties:nil];
[data writeToFile:path atomically:YES];
}
This code works, but the problem shows up on a Retina Mac: if I print the NSBitmapImageRep object I get different size and pixel dimensions, and when my image is saved to disk it is twice the size:
$0 = 0x0000000100413890 NSBitmapImageRep 0x100413890 Size={300, 300} ColorSpace=sRGB IEC61966-2.1 colorspace BPS=8 BPP=32 Pixels=600x600 Alpha=YES Planar=NO Format=0 CurrentBacking=<CGImageRef: 0x100414830>
I tried to force the pixel size to ignore the Retina scale, since I want to preserve the original size:
imageRepresentation.pixelsWide = image.size.width;
imageRepresentation.pixelsHigh = image.size.height;
This time I get the right size when I print the NSBitmapImageRep object, but when I save my file I still get the same issue:
$0 = 0x0000000100413890 NSBitmapImageRep 0x100413890 Size={300, 300} ColorSpace=sRGB IEC61966-2.1 colorspace BPS=8 BPP=32 Pixels=300x300 Alpha=YES Planar=NO Format=0 CurrentBacking=<CGImageRef: 0x100414830>
Any idea how to fix this, and preserve the original pixel size?

If you have an NSImage and want to save it as an image file to the filesystem, you should never use lockFocus! lockFocus creates a new image that is intended for display on the screen and nothing else. Therefore lockFocus uses the properties of the screen: 72 dpi for normal screens and 144 dpi for Retina screens. For what you want I propose the following code:
+ (void)saveImage:(NSImage *)image atPath:(NSString *)path {
CGImageRef cgRef = [image CGImageForProposedRect:NULL
context:nil
hints:nil];
NSBitmapImageRep *newRep = [[NSBitmapImageRep alloc] initWithCGImage:cgRef];
[newRep setSize:[image size]]; // if you want the same resolution
NSData *pngData = [newRep representationUsingType:NSPNGFileType properties:nil];
[pngData writeToFile:path atomically:YES];
[newRep autorelease];
}

NSImage is resolution aware and uses a HiDPI graphics context when you lockFocus on a system with retina screen.
The image dimensions you pass to the NSBitmapImageRep initializer are in points (not pixels). A 150.0-point-wide image therefore uses 300 horizontal pixels in a @2x context.
You could use convertRectToBacking: or backingScaleFactor to compensate for the @2x context (I didn't try that; a rough sketch of the idea follows the category below), or you can use the following NSImage category, which creates a drawing context with explicit pixel dimensions:
@interface NSImage (SSWPNGAdditions)
- (BOOL)writePNGToURL:(NSURL*)URL outputSizeInPixels:(NSSize)outputSizePx error:(NSError*__autoreleasing*)error;
@end
@implementation NSImage (SSWPNGAdditions)
- (BOOL)writePNGToURL:(NSURL*)URL outputSizeInPixels:(NSSize)outputSizePx error:(NSError*__autoreleasing*)error
{
BOOL result = YES;
NSImage* scalingImage = [NSImage imageWithSize:[self size] flipped:NO drawingHandler:^BOOL(NSRect dstRect) {
[self drawAtPoint:NSMakePoint(0.0, 0.0) fromRect:dstRect operation:NSCompositeSourceOver fraction:1.0];
return YES;
}];
NSRect proposedRect = NSMakeRect(0.0, 0.0, outputSizePx.width, outputSizePx.height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef cgContext = CGBitmapContextCreate(NULL, proposedRect.size.width, proposedRect.size.height, 8, 4*proposedRect.size.width, colorSpace, kCGBitmapByteOrderDefault|kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
NSGraphicsContext* context = [NSGraphicsContext graphicsContextWithGraphicsPort:cgContext flipped:NO];
CGContextRelease(cgContext);
CGImageRef cgImage = [scalingImage CGImageForProposedRect:&proposedRect context:context hints:nil];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)(URL), kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(destination, cgImage, nil);
if(!CGImageDestinationFinalize(destination))
{
NSDictionary* details = @{NSLocalizedDescriptionKey: @"Error writing PNG image"};
if (error != NULL) {
*error = [NSError errorWithDomain:@"SSWPNGAdditionsErrorDomain" code:10 userInfo:details];
}
result = NO;
}
CFRelease(destination);
return result;
}
@end
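As a rough, untested sketch of the backingScaleFactor idea mentioned above (the 300x300 point size is just the example from the question, and NSScreen.main is an assumption; the window's actual screen would be more precise):
import AppKit

// On a Retina screen the backing scale factor is 2.0, so a 300x300-point image
// is backed by 600x600 pixels. Querying the scale makes that point-to-pixel
// relationship explicit when choosing output dimensions.
let scale = NSScreen.main?.backingScaleFactor ?? 1.0
let pointSize = NSSize(width: 300, height: 300)
let pixelSize = NSSize(width: pointSize.width * scale, height: pointSize.height * scale)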

I found this code on the web, and it works on Retina. Pasting it here in the hope it helps someone.
NSImage *computerImage = [NSImage imageNamed:NSImageNameComputer];
NSInteger size = 256;
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:NULL
pixelsWide:size
pixelsHigh:size
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bytesPerRow:0
bitsPerPixel:0];
[rep setSize:NSMakeSize(size, size)];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[computerImage drawInRect:NSMakeRect(0, 0, size, size) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
NSData *data = [rep representationUsingType:NSPNGFileType properties:nil];

Just in case anyone stumbles upon this thread: here is an admittedly flawed solution that saves the image at 1x size (image.size) regardless of device, in Swift.
public func writeToFile(path: String, atomically: Bool = true) -> Bool{
let bitmap = NSBitmapImageRep(bitmapDataPlanes: nil, pixelsWide: Int(self.size.width), pixelsHigh: Int(self.size.height), bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true, isPlanar: false, colorSpaceName: NSDeviceRGBColorSpace, bytesPerRow: 0, bitsPerPixel: 0)!
bitmap.size = self.size
NSGraphicsContext.saveGraphicsState()
NSGraphicsContext.setCurrentContext(NSGraphicsContext(bitmapImageRep: bitmap))
self.drawAtPoint(CGPoint.zero, fromRect: NSRect.zero, operation: NSCompositingOperation.CompositeSourceOver, fraction: 1.0)
NSGraphicsContext.restoreGraphicsState()
if let imagePGNData = bitmap.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: [NSImageCompressionFactor: 1.0]) {
return imagePGNData.writeToFile((path as NSString).stringByStandardizingPath, atomically: atomically)
} else {
return false
}
}

Here's a Swift 5 version based on Heinrich Giesen's answer:
static func saveImage(_ image: NSImage, atUrl url: URL) {
guard
let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil)
else { return } // TODO: handle error
let newRep = NSBitmapImageRep(cgImage: cgImage)
newRep.size = image.size // if you want the same size
guard
let pngData = newRep.representation(using: .png, properties: [:])
else { return } // TODO: handle error
do {
try pngData.write(to: url)
}
catch {
print("error saving: \(error)")
}
}
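A hypothetical call site for the helper above; the enclosing ImageWriter type is made up (the static func needs to live in some type), and the image and output path are just examples:
if let icon = NSImage(named: NSImage.computerName) {
    let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("computer.png")
    // Saves a PNG without the Retina pixel-doubling that lockFocus would introduce.
    ImageWriter.saveImage(icon, atUrl: url)
}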

My 2 cents for OS X, including a write function that handles file types, plus offscreen image drawing (method 2); one can verify offscreen drawing with NSGraphicsContext.currentContextDrawingToScreen().
func createCGImage() -> CGImage? {
//method 1
let image = NSImage(size: NSSize(width: bounds.width, height: bounds.height), flipped: true, drawingHandler: { rect in
self.drawRect(self.bounds)
return true
})
var rect = CGRectMake(0, 0, bounds.size.width, bounds.size.height)
return image.CGImageForProposedRect(&rect, context: bitmapContext(), hints: nil)
// method 2 (alternative; unreachable after the return above, kept for reference)
// if let pdfRep = NSPDFImageRep(data: dataWithPDFInsideRect(bounds)) {
//     return pdfRep.CGImageForProposedRect(&rect, context: bitmapContext(), hints: nil)
// }
// return nil
}
func PDFImageData(filter: QuartzFilter?) -> NSData? {
return dataWithPDFInsideRect(bounds)
}
func bitmapContext() -> NSGraphicsContext? {
var context : NSGraphicsContext? = nil
if let imageRep = NSBitmapImageRep(bitmapDataPlanes: nil,
pixelsWide: Int(bounds.size.width),
pixelsHigh: Int(bounds.size.height), bitsPerSample: 8,
samplesPerPixel: 4, hasAlpha: true, isPlanar: false,
colorSpaceName: NSCalibratedRGBColorSpace,
bytesPerRow: Int(bounds.size.width) * 4,
bitsPerPixel: 32) {
imageRep.size = NSSize(width: bounds.size.width, height: bounds.size.height)
context = NSGraphicsContext(bitmapImageRep: imageRep)
}
return context
}
func writeImageData(view: MyView, destination: NSURL) {
if let dest = CGImageDestinationCreateWithURL(destination, imageUTType, 1, nil) {
let properties = imageProperties
let image = view.createCGImage()!
let queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(queue) {
CGImageDestinationAddImage(dest, image, properties)
CGImageDestinationFinalize(dest)
}
}
}

Related

NSBezierPath AddClip to mask NSImage - Size issue

On a macOS (OSX) desktop app, I use NSBezierPath to draw a random closed shape which looks like this.
As you can see the dashed line shows the closed path which was drawn.
Now I am trying to extract the masked image as per this path.
But I get only a small portion of the image. The masked image seems to correctly get the outline from the bezier path, but the size is an issue.
This is the method which returns the masked/clipped image.
The sourceImage.size for this drawing is 1021 x 1031.
- (NSImage *)imageByApplyingClippingBezierPath:(NSImage *)sourceImage
bezierPath:(NSBezierPath *)bezierPath
newFrame:(NSRect)newFrame
{
NSImage* newImage = [[NSImage alloc] initWithSize:newFrame.size];
NSBitmapImageRep* rep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:NULL
pixelsWide:newFrame.size.width
pixelsHigh:newFrame.size.height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bytesPerRow:0
bitsPerPixel:0];
[newImage addRepresentation:rep];
[newImage lockFocus];
CGContextRef context = [[NSGraphicsContext currentContext] graphicsPort];
CGContextSaveGState(context);
[bezierPath addClip];
NSRect targetFrame = NSMakeRect(0, 0, newFrame.size.width, newFrame.size.height);
[sourceImage drawInRect:targetFrame];
[newImage unlockFocus];
CGContextRestoreGState(context);
return newImage;
}
How can I get a perfectly sized image outlined by the bezier path?
Any tips would be appreciated!
UPDATE
Just clarifying how I draw the image.
I get the rectangular bounds of the bezier path and crop the rectangular image like this:
CGRect bezierBounds = CGPathGetPathBoundingBox([self.smartLassoWavyBezierPath quartzPath]);
NSRect targetFrame = NSMakeRect(0, 0, bezierBounds.size.width, bezierBounds.size.height);
NSImage *targetImage = [[NSImage alloc] initWithSize:targetFrame.size];
[targetImage lockFocus];
[self.view.originalLoadImageOnCanvas
drawInRect:targetFrame fromRect:bezierBounds operation:NSCompositeCopy
fraction:1.0f];
[targetImage unlockFocus];
For Swift 5:
func imageByApplyingClippingBezierPath(source: NSImage, path: NSBezierPath, frame: NSRect) -> NSImage? {
let new = NSImage(size: frame.size)
guard let rep = NSBitmapImageRep(bitmapDataPlanes: nil, pixelsWide: Int(frame.size.width), pixelsHigh: Int(frame.size.height), bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true, isPlanar: false, colorSpaceName: NSColorSpaceName.calibratedRGB, bitmapFormat: .alphaFirst, bytesPerRow: 0, bitsPerPixel: 0) else { return nil}
new.addRepresentation(rep)
new.lockFocus()
let context = NSGraphicsContext.current?.cgContext
context?.saveGState()
path.addClip()
let targetFrame = NSRect(x: 0, y: 0, width: Int(frame.size.width), height: Int(frame.size.height))
source.draw(in: targetFrame)
context?.restoreGState()
new.unlockFocus()
return new
}
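A hypothetical call for the Swift 5 function above, clipping an image loaded from a placeholder path to an oval covering its bounds:
if let sourceImage = NSImage(contentsOfFile: "/tmp/input.png") {
    let frame = NSRect(origin: .zero, size: sourceImage.size)
    let ovalPath = NSBezierPath(ovalIn: frame)
    // Returns nil only if the bitmap representation could not be created.
    let clipped = imageByApplyingClippingBezierPath(source: sourceImage, path: ovalPath, frame: frame)
}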

How to trim an NSImage by giving bounds

I have an NSImage object and a CIDetector object that detects QR codes in that image. After it detects one, I want to trim the image so it only contains the QR code. This is how I get the bounds of the QR code:
NSArray *features = [myQRDetector featuresInImage:myCIImage];
CIQRCodeFeature *qrFeature = features[0];
CGRect qrBounds = qrFeature.bounds;
Now how can I trim the image so it only contains the area described by the qrBounds variable?
In Swift 5:
func trim(image: NSImage, rect: CGRect) -> NSImage {
let result = NSImage(size: rect.size)
result.lockFocus()
let destRect = CGRect(origin: .zero, size: result.size)
image.draw(in: destRect, from: rect, operation: .copy, fraction: 1.0)
result.unlockFocus()
return result
}
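A hypothetical call site, reusing originalImage and qrBounds from the question; note that trim interprets the rect in the NSImage's point coordinate space, which may differ from the CIImage's pixel space on Retina-backed images:
// `originalImage` and `qrBounds` are the variables from the question above.
let qrOnly = trim(image: originalImage, rect: qrBounds)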
The answer from onmyway133 is great, but it doesn't preserve the datatype of the source image. For instance, if your source is an .hdr image, each color channel will be a float, but the cropped image will be an 8-bit integer RGBA image.
For preserving the format of the source, it seems you have to go down to the associated CGImage. I do this:
extension NSImage {
func cropping(to rect: CGRect) -> NSImage {
var imageRect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
guard let imageRef = self.cgImage(forProposedRect: &imageRect, context: nil, hints: nil) else {
return NSImage(size: rect.size)
}
guard let crop = imageRef.cropping(to: rect) else {
return NSImage(size: rect.size)
}
return NSImage(cgImage: crop, size: NSZeroSize)
}
}
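A hypothetical usage (the file path is a placeholder). Note that CGImage's cropping(to:) works in the backing CGImage's pixel coordinates, which may differ from point coordinates for @2x images:
if let source = NSImage(contentsOfFile: "/tmp/input.hdr") {
    // Crop a 256x256-pixel tile from the top-left corner of the backing image.
    let tile = source.cropping(to: CGRect(x: 0, y: 0, width: 256, height: 256))
}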
You need to create a new NSImage and draw the part of the original image you want to it.
NSImage* newImage = [[NSImage alloc] initWithSize:NSSizeFromCGSize(qrBounds.size)];
[newImage lockFocus];
NSRect dest = { NSZeroPoint, newImage.size };
[origImage drawInRect:dest fromRect:NSRectFromCGRect(qrBounds) operation:NSCompositeCopy fraction:1];
[newImage unlockFocus];

Blend two NSImages using Swift on OS X

I have two instances of NSImage. I want image2 to be placed on top of image1, with a certain level of opacity. They are of matching dimensions. Neither of the images need to be visible in the UI.
How can I correctly set up a graphics context and draw images to it, one with full opacity and another semi-transparent?
I have been reading several answers here but I find it complicated, especially since most seem to be for either Objective-C or only apply to iOS. Any pointers are appreciated. If this can be accomplished without needing a CGContext at all, that would be even better.
func blendImages(image1: NSImage, image2: NSImage, alpha: CGFloat) -> CGImage {
// Create context
var ctx: CGContextRef = CGBitmapContextCreate(0, inputImage.size.width, inputImage.size.height, 8, inputImage.size.width*4, NSColorSpace.genericRGBColorSpace(), PremultipliedLast)
let area = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height)
CGContextScaleCTM(ctx, 1, -1)
// Draw image1 in context
// Draw image2 with alpha opacity
CGContextSetAlpha(ctx, CGFloat(0.5))
// Create CGImage from context
let outputImage = CGBitmapContextCreateImage(ctx)
return outputImage
}
Elsewhere I have this extension to get CGImages from my NSImages:
extension NSImage {
var CGImage: CGImageRef {
get {
let imageData = self.TIFFRepresentation
let source = CGImageSourceCreateWithData(imageData as! CFDataRef, nil)
let maskRef = CGImageSourceCreateImageAtIndex(source, 0, nil)
return maskRef
}
}
}
I managed to solve this with NSGraphicsContext.
func mergeImagesAB(pathA: String, pathB: String, fraction: CGFloat) -> CGImage {
guard let imgA = NSImage(byReferencingFile: pathA),
let imgB = NSImage(byReferencingFile: pathB),
let bitmap = imgA.representations[0] as? NSBitmapImageRep,
let ctx = NSGraphicsContext(bitmapImageRep: bitmap)
else {fatalError("Failed to load images.")}
let rect = NSRect(origin: CGPoint(x: 0, y: 0), size: bitmap.size)
NSGraphicsContext.saveGraphicsState()
NSGraphicsContext.setCurrentContext(ctx)
imgB.drawInRect(rect, fromRect: rect, operation: .CompositeSourceOver, fraction: fraction)
NSGraphicsContext.restoreGraphicsState()
guard let result = bitmap.CGImage
else {fatalError("Failed to create image.")}
return result;
}
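A hypothetical call site in the same Swift 2-era style as the function above (the paths are placeholders); the image at pathB is composited over the image at pathA at the given opacity:
let blended = mergeImagesAB("/tmp/bottom.png", pathB: "/tmp/top.png", fraction: 0.5)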
Thanks Henrik, I needed this for Obj-C. Here's that version in case someone else needs it:
-(CGImageRef) mergeImage:(NSImage*)a andB:(NSImage*)b fraction:(float)fraction{
NSBitmapImageRep *bitmap = (NSBitmapImageRep*)[[a representations] objectAtIndex:0];
NSGraphicsContext *ctx = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmap];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:ctx];
CGRect rect = CGRectMake(0, 0, bitmap.size.width, bitmap.size.height);
[b drawInRect:rect fromRect:rect operation:NSCompositeSourceOver fraction:fraction];
[NSGraphicsContext restoreGraphicsState];
return [bitmap CGImage];
}

How do I draw NSGradient to NSImage?

I'm trying to take an NSGradient and save it as an image in RubyMotion, but I can't get it to work. This is the code I have so far:
gradient = NSGradient.alloc.initWithColors(colors,
atLocations: locations.to_pointer(:double),
colorSpace: NSColorSpace.genericRGBColorSpace
)
size = Size(width, height)
image = NSImage.imageWithSize(size, flipped: false, drawingHandler: lambda do |rect|
gradient.drawInRect(rect, angle: angle)
true
end)
data = image.TIFFRepresentation
data.writeToFile('output.tif', atomically: false)
It runs without error, but the file that is saved is blank and there is no image data. Can anyone help point me in the right direction?
I don’t know about RubyMotion, but here’s how to do it in Objective-C:
NSGradient *grad = [[NSGradient alloc] initWithStartingColor:[NSColor redColor]
endingColor:[NSColor blueColor]];
NSRect rect = CGRectMake(0.0, 0.0, 50.0, 50.0);
NSImage *image = [[NSImage alloc] initWithSize:rect.size];
NSBezierPath *path = [NSBezierPath bezierPathWithRect:rect];
[image lockFocus];
[grad drawInBezierPath:path angle:0.0];
NSBitmapImageRep *imgRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:rect];
NSData *data = [imgRep representationUsingType:NSPNGFileType properties:nil];
[image unlockFocus];
[data writeToFile:@"/path/to/file.png" atomically:NO];
In case you want to know how it works in Swift 5:
extension NSImage {
convenience init?(gradientColors: [NSColor], imageSize: NSSize) {
guard let gradient = NSGradient(colors: gradientColors) else { return nil }
let rect = NSRect(origin: CGPoint.zero, size: imageSize)
self.init(size: rect.size)
let path = NSBezierPath(rect: rect)
self.lockFocus()
gradient.draw(in: path, angle: 0.0)
self.unlockFocus()
}
}
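A hypothetical usage that builds a 256x256-point red-to-blue gradient image and writes it out as a PNG via NSBitmapImageRep, as elsewhere in this thread (the output path is a placeholder). Because the initializer uses lockFocus, the resulting bitmap will typically be pixel-doubled on a Retina screen, as discussed at the top of this page:
if let gradientImage = NSImage(gradientColors: [.red, .blue], imageSize: NSSize(width: 256, height: 256)),
   let tiff = gradientImage.tiffRepresentation,
   let rep = NSBitmapImageRep(data: tiff),
   let png = rep.representation(using: .png, properties: [:]) {
    // Writes the gradient out as a PNG file.
    try? png.write(to: URL(fileURLWithPath: "/tmp/gradient.png"))
}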

How To Save PNG file From NSImage (retina issues) The Right Way?

I am trying to save each image in an array as a .PNG file (also at the right size, without scaling up because of Retina Mac dpi issues) and can't seem to find a solution. None of the solutions at How to save PNG file from NSImage (retina issues) seem to be working for me. I've tried each one, and each of them still saves a 72x72 file as 144x144 on Retina, etc.
More specifically I am looking for an NSImage category (yes, I am working in the Mac environment)
I am trying to have the user choose a directory to save them in and then save the images from the array like this:
- (IBAction)saveImages:(id)sender {
// Prepare Images that are checked and put them in an array
[self prepareImages];
if ([preparedImages count] == 0) {
NSLog(#"We have no preparedImages to save!");
NSAlert *alert = [[NSAlert alloc] init];
[alert setAlertStyle:NSInformationalAlertStyle];
[alert setMessageText:NSLocalizedString(@"Error", @"Save Images Error Text")];
[alert setInformativeText:NSLocalizedString(@"You have not selected any images to create.", @"Save Images Error Informative Text")];
[alert beginSheetModalForWindow:self.window
modalDelegate:self
didEndSelector:@selector(testDatabaseConnectionDidEnd:returnCode:contextInfo:)
contextInfo:nil];
return;
} else {
NSLog(#"We have prepared %lu images.", (unsigned long)[preparedImages count]);
}
// Save Dialog
// Create a File Open Dialog class.
//NSOpenPanel* openDlg = [NSOpenPanel openPanel];
NSSavePanel *panel = [NSSavePanel savePanel];
// Set array of file types
NSArray *fileTypesArray;
fileTypesArray = [NSArray arrayWithObjects:@"jpg", @"gif", @"png", nil];
// Enable options in the dialog.
//[openDlg setCanChooseFiles:YES];
//[openDlg setAllowedFileTypes:fileTypesArray];
//[openDlg setAllowsMultipleSelection:TRUE];
[panel setNameFieldStringValue:@"Images.png"];
[panel setDirectoryURL:directoryPath];
// Display the dialog box. If the OK pressed,
// process the files.
[panel beginWithCompletionHandler:^(NSInteger result) {
if (result == NSFileHandlingPanelOKButton) {
NSLog(#"OK Button!");
// create a file manager and grab the save panel's returned URL
NSFileManager *manager = [NSFileManager defaultManager];
directoryPath = [panel URL];
[[self directoryLabel] setStringValue:[NSString stringWithFormat:@"%@", directoryPath]];
// then copy a previous file to the new location
// copy item at URL was self.myURL
// copy images that are created from array to this path
for (NSImage *image in preparedImages) {
#warning Fix Copy Item At URL to copy image from preparedImages array to save each one
NSString *imageName = image.name;
NSString *imagePath = [[directoryPath absoluteString] stringByAppendingPathComponent:imageName];
//[manager copyItemAtURL:nil toURL:directoryPath error:nil];
NSLog(#"Trying to write IMAGE: %# to URL: %#", imageName, imagePath);
//[image writePNGToURL:[NSURL URLWithString:imagePath] outputSizeInPixels:image.size error:nil];
[self saveImage:image atPath:imagePath];
}
//[manager copyItemAtURL:nil toURL:directoryPath error:nil];
}
}];
[preparedImages removeAllObjects];
return;
}
One user attempted to answer this by using the following NSImage category, but it does not produce any file or PNG for me.
@interface NSImage (SSWPNGAdditions)
- (BOOL)writePNGToURL:(NSURL*)URL outputSizeInPixels:(NSSize)outputSizePx error:(NSError*__autoreleasing*)error;
@end
@implementation NSImage (SSWPNGAdditions)
- (BOOL)writePNGToURL:(NSURL*)URL outputSizeInPixels:(NSSize)outputSizePx error:(NSError*__autoreleasing*)error
{
BOOL result = YES;
NSImage* scalingImage = [NSImage imageWithSize:[self size] flipped:[self isFlipped] drawingHandler:^BOOL(NSRect dstRect) {
[self drawAtPoint:NSMakePoint(0.0, 0.0) fromRect:dstRect operation:NSCompositeSourceOver fraction:1.0];
return YES;
}];
NSRect proposedRect = NSMakeRect(0.0, 0.0, outputSizePx.width, outputSizePx.height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef cgContext = CGBitmapContextCreate(NULL, proposedRect.size.width, proposedRect.size.height, 8, 4*proposedRect.size.width, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
NSGraphicsContext* context = [NSGraphicsContext graphicsContextWithGraphicsPort:cgContext flipped:NO];
CGContextRelease(cgContext);
CGImageRef cgImage = [scalingImage CGImageForProposedRect:&proposedRect context:context hints:nil];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)(URL), kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(destination, cgImage, nil);
if(!CGImageDestinationFinalize(destination))
{
NSDictionary* details = @{NSLocalizedDescriptionKey: @"Error writing PNG image"};
if (error != NULL) {
*error = [NSError errorWithDomain:@"SSWPNGAdditionsErrorDomain" code:10 userInfo:details];
}
result = NO;
}
CFRelease(destination);
return result;
}
@end
I had trouble with the answer provided in the original thread too. Further reading landed me on a post by Erica Sadun about debugging code for Retina displays without a Retina display. She creates a bitmap of the desired size, then replaces the current drawing context (display-based, Retina-influenced) with the generic one associated with the new bitmap. She then renders the original image into the bitmap (using the generic graphics context).
I took her code and made a quick category on NSImage which seems to do the job for me. After calling
NSBitmapImageRep *myRep = [myImage unscaledBitmapImageRep];
you should have a bitmap of the proper (original) dimensions, regardless of the type of physical display you started with. From this point, you can call representationUsingType:properties: on the unscaled bitmap to get whatever format you are looking to write out.
Here is my category (header omitted). Note - you may need to expose the colorspace portion of the bitmap initializer. This is the value that works for my particular case.
-(NSBitmapImageRep *)unscaledBitmapImageRep {
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:NULL
pixelsWide:self.size.width
pixelsHigh:self.size.height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bytesPerRow:0
bitsPerPixel:0];
rep.size = self.size;
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[self drawAtPoint:NSMakePoint(0, 0)
fromRect:NSZeroRect
operation:NSCompositeSourceOver
fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
return rep;
}
Thanks tad & SnowPaddler.
For anyone who is not familiar with Cocoa, here is a Swift 4 version (the Swift 2 & Swift 3 versions are available in the edit history):
import Cocoa
func unscaledBitmapImageRep(forImage image: NSImage) -> NSBitmapImageRep {
guard let rep = NSBitmapImageRep(
bitmapDataPlanes: nil,
pixelsWide: Int(image.size.width),
pixelsHigh: Int(image.size.height),
bitsPerSample: 8,
samplesPerPixel: 4,
hasAlpha: true,
isPlanar: false,
colorSpaceName: .deviceRGB,
bytesPerRow: 0,
bitsPerPixel: 0
) else {
preconditionFailure()
}
NSGraphicsContext.saveGraphicsState()
NSGraphicsContext.current = NSGraphicsContext(bitmapImageRep: rep)
image.draw(at: .zero, from: .zero, operation: .sourceOver, fraction: 1.0)
NSGraphicsContext.restoreGraphicsState()
return rep
}
func writeImage(
image: NSImage,
usingType type: NSBitmapImageRep.FileType,
withSizeInPixels size: NSSize?,
to url: URL) throws {
if let size = size {
image.size = size
}
let rep = unscaledBitmapImageRep(forImage: image)
guard let data = rep.representation(using: type, properties: [.compressionFactor: 1.0]) else {
preconditionFailure()
}
try data.write(to: url)
}
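A hypothetical call site for writeImage (the image and output location are placeholders):
if let icon = NSImage(named: NSImage.computerName) {
    let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("computer.png")
    // Produces a 256x256-pixel PNG regardless of the screen's backing scale.
    try? writeImage(image: icon, usingType: .png, withSizeInPixels: NSSize(width: 256, height: 256), to: url)
}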
Tad - thank you very much for this code - I agonised over this for days! It helped me write a file from an NSImage whilst keeping the resolution at 72 DPI despite the Retina display installed on my Mac. For the benefit of others who want to save an NSImage to a file with a specific pixel size and type (PNG, JPG etc.) at a resolution of 72 DPI, here is the code that worked for me. I found that you need to set the size of your image before calling unscaledBitmapImageRep for this to work.
-(void)saveImage:(NSImage *)image
AsImageType:(NSBitmapImageFileType)imageType
forSize:(NSSize)targetSize
atPath:(NSString *)path
{
image.size = targetSize;
NSBitmapImageRep * rep = [image unscaledBitmapImageRep];
// Write the target image out to a file
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:1.0] forKey:NSImageCompressionFactor];
NSData *targetData = [rep representationUsingType:imageType properties:imageProps];
[targetData writeToFile:path atomically: NO];
return;
}
I have also included the source code for the category header and .m file below.
The NSImage+Scaling.h file:
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>
@interface NSImage (Scaling)
-(NSBitmapImageRep *)unscaledBitmapImageRep;
@end
And the NSImage+Scaling.m file:
#import "NSImage+Scaling.h"
#pragma mark - NSImage_Scaling
@implementation NSImage (Scaling)
-(NSBitmapImageRep *)unscaledBitmapImageRep
{
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:NULL
pixelsWide:self.size.width
pixelsHigh:self.size.height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bytesPerRow:0
bitsPerPixel:0];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[self drawAtPoint:NSMakePoint(0, 0)
fromRect:NSZeroRect
operation:NSCompositeSourceOver
fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
return rep;
}
@end
I had the same difficulties with saving an NSImage object to a PNG or JPG file, and I finally understood why...
Firstly, the code excerpt shown above (unscaledBitmapImageRep(forImage:) and writeImage(image:usingType:withSizeInPixels:to:)) works well.
However, since I am working with a Mac app that is sandboxed (which, as you know, is a requirement for Mac App Store distribution), I noticed that I had to choose my destination directory with care while testing my preliminary code.
If I used a file URL by way of:
let fileManager = FileManager.default
let documentsURL = fileManager.urls(for: .documentDirectory, in: .userDomainMask).first!
let documentPath = documentsURL.path
let filePath = documentsURL.appendingPathComponent("TestImage.png")
filePath = file:///Users/Andrew/Library/Containers/Objects-and-Such.ColourSpace/Data/Documents/TestImage.png
...which works for sandboxed apps. File saving won't work, however, if I choose a directory such as the Desktop:
filePath = file:///Users/Andrew/Library/Containers/Objects-and-Such.ColourSpace/Data/Desktop/TestImage.png
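Putting the pieces together, a hypothetical call that writes into the sandboxed Documents container using the writeImage(image:usingType:withSizeInPixels:to:) helper shown earlier (the source image path is a placeholder):
let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
let fileURL = documentsURL.appendingPathComponent("TestImage.png")
if let image = NSImage(contentsOfFile: "/path/to/source.png") {
    // Writing into the app's own container is permitted under sandboxing.
    try? writeImage(image: image, usingType: .png, withSizeInPixels: nil, to: fileURL)
}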
I hope this helps.
