Too many pixels in my NSBitmapImageRep when drawing into an NSImage - macos

I am trying to create an NSImage that is exactly 200 x 300 pixels large from the contents of another NSImage. I'm not just scaling, but taking it from a chunk of a much larger image.
The resulting image contains exactly the pixels I want, but there are too many of them. The NSImage reports a size of 200 x 300, and its image representation reports a size of 200 x 300, yet the representation reports twice that many pixels: 400 x 600. When I save this image representation to a file, I get an image that is 400 x 600.
Here's how I am doing it:
NSRect destRect = NSMakeRect(0,0,200,300);
NSImage* destImage = [[NSImage alloc] initWithSize:destRect.size];
// lock focus, set interpolation
[destImage lockFocus];
NSImageInterpolation oldInterpolation = [[NSGraphicsContext currentContext] imageInterpolation];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
[_image drawInRect:destRect fromRect:srcRect operation:NSCompositeCopy fraction:1.0];
[destImage unlockFocus];
[[NSGraphicsContext currentContext] setImageInterpolation:oldInterpolation];
NSData* tiffImageData = [destImage TIFFRepresentation];
NSBitmapImageRep* tiffImageRep = [NSBitmapImageRep imageRepWithData:tiffImageData];
In the debugger, you can see the NSBitmapImageRep has the right size, but twice the number of pixels.
po tiffImageRep
(NSBitmapImageRep *) $5 = 0x000000010301ad80 NSBitmapImageRep 0x10301ad80 Size={200, 300} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) **Pixels=400x600** Alpha=YES Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x10f353a40
So, when I save it to disk, I get an image that is 400 x 600, not 200 x 300. How do I fix this?
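The doubled pixel count is consistent with the lockFocus drawing happening on a 2x (Retina) backing store. As a hedged sketch (not from the original post), one way to pin the pixel dimensions is to draw through an NSBitmapImageRep created with explicit pixelsWide/pixelsHigh, reusing the question's _image and srcRect:
// Sketch (untested): render through an NSBitmapImageRep with explicit pixel
// dimensions so the result is exactly 200 x 300 pixels regardless of the
// screen's backing scale factor.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:200
                  pixelsHigh:300
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
[_image drawInRect:NSMakeRect(0, 0, 200, 300)
          fromRect:srcRect
         operation:NSCompositeCopy
          fraction:1.0];
[NSGraphicsContext restoreGraphicsState];

NSImage *result = [[NSImage alloc] initWithSize:NSMakeSize(200, 300)];
[result addRepresentation:rep];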

Related

NSImage w/ BitmapImageRep -> CIImage to invert color -> NSImage w/ BitmapImageRep, colorspace issues

So I have an NSImage *startingImage
It is represented by an NSBitmapImageRep with a gray colorspace
I need to invert the colors on it, so I convert it to a CIImage
CIImage *startingCIImage = [[CIImage alloc] initWithBitmapImageRep:(NSBitmapImageRep*)[startingImage representations][0]];
CIFilter *invertColorFilter = [CIFilter filterWithName:NEVER_TRANSLATE(@"CIColorInvert")];
[invertColorFilter setValue:startingCIImage forKey:NEVER_TRANSLATE(@"inputImage")];
CIImage *outputImage = [invertColorFilter valueForKey:NEVER_TRANSLATE(@"outputImage")];
If I view the outputImage at this point, it is exactly what I expect, the same image except with inverted colors.
I then convert it back into an NSImage like so:
NSBitmapImageRep *finalImageRep = [[NSBitmapImageRep alloc] initWithCIImage:outputImage];
NSImage *finalImage = [[NSImage alloc] initWithSize:[finalImageRep size]];
[finalImage addRepresentation:finalImageRep];
Here's my issue... My original NSImage has a Gray colorspace, and 8 bits per pixel.
<NSImage 0x610000071440 Size={500, 440} Reps=(
"NSBitmapImageRep 0x6100002a1800 Size={500, 440} ColorSpace=Device Gray colorspace BPS=8 BPP=8 Pixels=500x440 Alpha=NO Planar=NO Format=0
CurrentBacking=<CGImageRef: 0x6100001ab0c0>" )>
However, after I convert everything, and log out the image, this is what I have
<NSImage 0x61800127e540 Size={500, 440} Reps=(
"NSBitmapImageRep 0x6080000b8cc0 Size={500, 440} ColorSpace=ASUS PB278 colorspace BPS=8 BPP=32 Pixels=500x440 Alpha=YES Planar=NO
Format=0 CurrentBacking=<CGImageRef: 0x6180001a3f00>" )>
And as you may know, NSBitmapImageRep is meant to be immutable, and when I try setColorSpaceName or setAlpha, the image ends up just being a black box.
Is there something I'm missing so that I can convert my NSImage into a CIImage, invert the black and white, then convert back into an NSImage?
Maybe you could replace the color space at the end:
NSBitmapImageRep* fixedRep = [finalImageRep bitmapImageRepByConvertingToColorSpace: [startingImageRep colorSpace]
renderingIntent: NSColorRenderingIntentDefault];
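Putting the pieces together, a rough sketch of the full round trip might look like this (assuming startingImageRep is the rep pulled out of startingImage, as in the question):
// Sketch: invert, convert the result back to the original gray color space,
// and wrap it in an NSImage.
NSBitmapImageRep *finalImageRep = [[NSBitmapImageRep alloc] initWithCIImage:outputImage];
NSBitmapImageRep *fixedRep =
    [finalImageRep bitmapImageRepByConvertingToColorSpace:[startingImageRep colorSpace]
                                          renderingIntent:NSColorRenderingIntentDefault];
NSImage *finalImage = [[NSImage alloc] initWithSize:[fixedRep size]];
[finalImage addRepresentation:fixedRep];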

NSAffineTransform running out of memory

I am using NSAffineTransform to rotate/reflect an NSImage, and when using larger images I run into an error:
NSImage: Insufficient memory to allocate pixel data buffer of 4496342739800064 bytes
The image I am transforming here is 6,998,487 bytes at 4110 x 2735 px. Does NSAffineTransform really need this much memory to do this transformation, or am I going wrong somewhere? Here's my rotate code:
- (NSImage *)rotateLeft:(NSImage *)img {
    NSImage *existingImage = img;
    NSSize existingSize;
    existingSize.width = existingImage.size.width;
    existingSize.height = existingImage.size.height;
    NSSize newSize = NSMakeSize(existingSize.height, existingSize.width);
    NSImage *rotatedImage = [[NSImage alloc] initWithSize:newSize];
    [rotatedImage lockFocus];
    NSAffineTransform *rotateTF = [NSAffineTransform transform];
    NSPoint centerPoint = NSMakePoint(newSize.width / 2, newSize.height / 2);
    [rotateTF translateXBy:centerPoint.x yBy:centerPoint.y];
    [rotateTF rotateByDegrees:90];
    [rotateTF translateXBy:-centerPoint.y yBy:-centerPoint.x];
    [rotateTF concat];
    NSRect r1 = NSMakeRect(0, 0, newSize.height, newSize.width);
    [existingImage drawAtPoint:NSMakePoint(0, 0)
                      fromRect:r1
                     operation:NSCompositeCopy
                      fraction:1.0];
    [rotatedImage unlockFocus];
    return rotatedImage;
}
I am using ARC in my project.
Thanks in advance, Ben
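Not from the original thread, but a request for roughly 4.5 petabytes of pixel data suggests the size being drawn is not what it appears to be. A quick diagnostic sketch before applying the transform might be:
// Diagnostic sketch (assumption): log the point size and the per-rep pixel
// dimensions to see whether img.size is bogus for the large images.
NSLog(@"img.size = %@", NSStringFromSize(img.size));
for (NSImageRep *rep in [img representations]) {
    NSLog(@"%@: %ld x %ld px", [rep class],
          (long)[rep pixelsWide], (long)[rep pixelsHigh]);
}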

sRGB to NSColor

I am trying to draw an inner shadow in an NSView. The shadow itself is not the problem, but the color setting is driving me nuts :/
#define ShadowBlurRadius 10.0
#define SRGB (CGFloat [4]){184.0, 184.0, 184.0, 1.0}
@implementation SWShadowedView

- (void)drawRect:(NSRect)dirtyRect {
    NSGraphicsContext *context = [NSGraphicsContext currentContext];
    [context saveGraphicsState];
    [context setCompositingOperation:NSCompositePlusDarker];
    NSBezierPath *path = [NSBezierPath bezierPathWithRect:NSMakeRect(0, dirtyRect.size.height - ShadowBlurRadius, self.superview.frame.size.width, ShadowBlurRadius)];
    [[NSColor whiteColor] setStroke];
    NSShadow *shadow = [[NSShadow alloc] init];
    NSColorSpace *colorSpace = [NSColorSpace sRGBColorSpace];
    NSColor *color = [NSColor colorWithColorSpace:colorSpace components:SRGB count:4];
    [shadow setShadowColor:color];
    [shadow setShadowBlurRadius:ShadowBlurRadius];
    [shadow set];
    [path stroke];
    [context restoreGraphicsState];
    [super drawRect:dirtyRect];
}

@end
If I replace the shadow color with [NSColor redColor] it works but with the wrong color. This is where I got the sRGB from: link
The way I convert sRGB to NSColor is taken from another post on here, but obviously it's not working.
best regards
Your code is almost completely correct; the only problem is that you're using numerical values from 0-255 in your array. All the NSColor creation methods expect CGFloat values from 0.0 to 1.0.
All you need to do is define your SRGB array like so:
#define SRGB (CGFloat [4]){184.0/255.0, 184.0/255.0, 184.0/255.0, 1.0}
Your code will then work correctly. Please note that using the colorWithCalibratedRed:green:blue:alpha: method of NSColor will not give you the correct color from your sRGB values.
To get correct sRGB values, you must use the method in your original code, which specifically uses the sRGB color space to create the color. A category on NSColor that creates colors using 255-based sRGB values might look something like this:
@implementation NSColor (sRGB_Additions)

+ (NSColor *)colorWith255sRGBRed:(CGFloat)red green:(CGFloat)green blue:(CGFloat)blue alpha:(CGFloat)alpha
{
    CGFloat sRGBComponents[4] = {red / 255.0, green / 255.0, blue / 255.0, alpha};
    NSColorSpace *colorSpace = [NSColorSpace sRGBColorSpace];
    return [NSColor colorWithColorSpace:colorSpace components:sRGBComponents count:4];
}

@end
Then you could just do this:
NSColor* someColor = [NSColor colorWith255sRGBRed:184.0 green:184.0 blue:184.0 alpha:1.0];
Here is the simplest, most modern way of creating the color that you need:
NSColor *color = [NSColor colorWithSRGBRed:(184.0 / 255.0) green:(184.0 / 255.0) blue:(184.0 / 255.0) alpha:1.0];
Use RGB, not sRGB:
You can create a color with RGB like this:
float red = 182.0f/255.0f;
float green = 182.0f/255.0f;
float blue = 182.0f/255.0f;
NSColor *color = [NSColor colorWithCalibratedRed:red green:green blue:blue alpha:1.0f];

Cropping CIImage with CICrop isn't working properly

I'm having trouble cropping an image; for me the CICrop filter is not working properly. If my CIVector x and y (origins) are 0, everything works fine: the image is cropped from the bottom-left corner by my rectangle's width and height. But if the CIVector origins (x and y) aren't 0, empty space appears in my cropped image, because CICrop keeps cropping from the bottom-left corner no matter what the origins are.
I'm cropping the CIImage with a rectangle. Source:
CIVector *cropRect =[CIVector vectorWithX:150 Y:150 Z: 300 W: 300];
CIFilter *cropFilter = [CIFilter filterWithName:@"CICrop"];
[cropFilter setValue:myCIImage forKey:@"inputImage"];
[cropFilter setValue:cropRect forKey:@"inputRectangle"];
CIImage *croppedImage = [cropFilter valueForKey:#"outputImage"];
Output Image with CIVector X 150 and Y 150 (I drew the border for clarity):
Output Image with CIVector X 0 and Y 0:
Original Image:
What I'm doing wrong? Or is it supposed to do this?
Are you sure the output image is the size you are expecting? How are you drawing the output image?
The CICrop filter does not reduce the size of the original image, it just blanks out the content you don't want.
To get the result you want you probably need to just do this:
[image drawAtPoint:NSZeroPoint fromRect:NSMakeRect(150, 150, 300, 300) operation:NSCompositeSourceOver fraction:1.0];
If you want an actual CIImage as output rather than just drawing it, just do this:
CIImage* croppedImage = [image imageByCroppingToRect:CGRectMake(150, 150, 300, 300)];
//you also need to translate the origin
CIFilter* transform = [CIFilter filterWithName:@"CIAffineTransform"];
NSAffineTransform* affineTransform = [NSAffineTransform transform];
[affineTransform translateXBy:-150.0 yBy:-150.0];
[transform setValue:affineTransform forKey:@"inputTransform"];
[transform setValue:croppedImage forKey:#"inputImage"];
CIImage* transformedImage = [transform valueForKey:#"outputImage"];
It's important to note that the coordinate system of a view is top-left-corner, whereas CIImage is bottom left. This will make you crazy if you don't catch it when you're doing these transforms! This other post describes a one-directional conversion: Changing CGrect value to user coordinate system.
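As a small illustration of that flip (a sketch; the helper name is made up here): converting a rect from a top-left-origin view into CIImage's bottom-left-origin space only requires adjusting the y origin against the image height.
// Sketch: map a rect given in a flipped (top-left-origin) view space into
// CIImage's bottom-left-origin space. imageHeight is the CIImage extent height.
static CGRect ciRectFromFlippedRect(CGRect viewRect, CGFloat imageHeight) {
    return CGRectMake(viewRect.origin.x,
                      imageHeight - CGRectGetMaxY(viewRect),
                      viewRect.size.width,
                      viewRect.size.height);
}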
This is how CICrop works: it crops to the rect you specify, and the un-cropped area becomes transparent. If you print the extent you will see that it is still the same original rect.
As suggested, you can do a translation. In Swift 5, this is now just one line:
let newImage = myCIImage.transformed(by: CGAffineTransform(translationX: -150, y: -150))

Is there a best way to size an NSImage to a maximum filesize?

Here's what I've got so far:
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:
    [file.image TIFFRepresentation]];

// Resize image to 200x200
CGFloat maxSize = 200.0;
NSSize imageSize = imageRep.size;
if (imageSize.height > maxSize || imageSize.width > maxSize) {
    // Find the aspect ratio
    CGFloat aspectRatio = imageSize.height / imageSize.width;
    CGSize newImageSize;
    if (aspectRatio > 1.0) {
        newImageSize = CGSizeMake(maxSize / aspectRatio, maxSize);
    } else if (aspectRatio < 1.0) {
        newImageSize = CGSizeMake(maxSize, maxSize * aspectRatio);
    } else {
        newImageSize = CGSizeMake(maxSize, maxSize);
    }
    [imageRep setSize:NSSizeFromCGSize(newImageSize)];
}

NSData *imageData = [imageRep representationUsingType:NSPNGFileType properties:nil];
NSString *outputFilePath = [@"~/Desktop/output.png" stringByExpandingTildeInPath];
[imageData writeToFile:outputFilePath atomically:NO];
The code assumes that a 200x200 PNG will be less than 128K, which is my size limit. 200x200 is big enough, but I'd prefer to max out the size if at all possible.
Here are my two problems:
1. The code doesn't work. I check the size of the exported file and it's the same size as the original.
2. Is there a way to predict the size of the output file before I export, so I can max out the dimensions but still get an image that's less than 128K?
Here's the working code. It's pretty sloppy and could probably use some optimizations, but at this point it runs fast enough that I don't care. It iterates over 100x for most pictures, and it's over in milliseconds. Also, this method is declared in a category on NSImage.
- (NSData *)resizeImageWithBitSize:(NSInteger)size
                      andImageType:(NSBitmapImageFileType)fileType {
    CGFloat maxSize = 500.0;
    NSSize originalImageSize = self.size;
    NSSize newImageSize;
    NSData *returnImageData;
    NSInteger imageIsTooBig = 1000;

    while (imageIsTooBig > 0) {
        if (originalImageSize.height > maxSize || originalImageSize.width > maxSize) {
            // Find the aspect ratio
            CGFloat aspectRatio = originalImageSize.height / originalImageSize.width;
            if (aspectRatio > 1.0) {
                newImageSize = NSMakeSize(maxSize / aspectRatio, maxSize);
            } else if (aspectRatio < 1.0) {
                newImageSize = NSMakeSize(maxSize, maxSize * aspectRatio);
            } else {
                newImageSize = NSMakeSize(maxSize, maxSize);
            }
        } else {
            newImageSize = originalImageSize;
        }

        NSImage *resizedImage = [[NSImage alloc] initWithSize:newImageSize];
        [resizedImage lockFocus];
        [self drawInRect:NSMakeRect(0, 0, newImageSize.width, newImageSize.height)
                fromRect:NSMakeRect(0, 0, originalImageSize.width, originalImageSize.height)
               operation:NSCompositeSourceOver
                fraction:1.0];
        [resizedImage unlockFocus];

        NSData *tiffData = [resizedImage TIFFRepresentation];
        [resizedImage release];
        NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithData:tiffData];
        NSDictionary *imagePropDict = [NSDictionary
            dictionaryWithObject:[NSNumber numberWithFloat:0.85]
                          forKey:NSImageCompressionFactor];
        returnImageData = [imageRep representationUsingType:fileType properties:imagePropDict];
        [imageRep release];

        if ([returnImageData length] > size) {
            maxSize = maxSize * 0.99;
            imageIsTooBig--;
        } else {
            imageIsTooBig = 0;
        }
    }
    return returnImageData;
}
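A hypothetical call site might look like this (the 128 KB budget, file.image, and the output path come from the question; the rest is assumed):
// Hypothetical usage: cap the export at 128 KB and write it out.
NSData *imageData = [file.image resizeImageWithBitSize:131072
                                          andImageType:NSPNGFileType];
[imageData writeToFile:[@"~/Desktop/output.png" stringByExpandingTildeInPath]
            atomically:NO];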
For 1.
As another poster mentioned, setSize: only alters the display size of the image, not the actual pixel data of the underlying image file.
To resize, you'll want to redraw the source image into a new image rep and then write that to file.
This blog post contains some sample code on how to do the resize in this manner.
How to Resize an NSImage
For 2.
I don't think you can without at least creating the object in memory and checking the length of the image data. The actual bytes used depend on the image type: the size of a raw ARGB bitmap is easy to predict, but PNG and JPEG are much harder.
[imageData length] should give you the length of the NSData's contents, which I understand to be the final file size when written to disk. That should give you a chance to maximize the size of the file before actually writing it.
As to why the image is not shrinking or growing, according to the docs for setSize:
The size of an image representation combined with the physical dimensions of the image data determine the resolution of the image.
So it may be that by setting the size and not altering the resolution you're not modifying any pixels, just the way in which the pixels should be interpreted.
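Building on the length check above, a minimal sketch (reusing imageRep and outputFilePath from the question's code) could be:
// Sketch: generate the PNG data in memory first, then only write it if it
// fits the 128 KB budget; otherwise shrink the pixel dimensions and retry.
NSData *imageData = [imageRep representationUsingType:NSPNGFileType properties:nil];
if ([imageData length] <= 128 * 1024) {
    [imageData writeToFile:outputFilePath atomically:NO];
} else {
    // Too big: reduce the bitmap's pixel dimensions (or lower
    // NSImageCompressionFactor for JPEG) and regenerate the data.
}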
