I'm trying to scale images with Core Image, using the CILanczosScaleTransform filter.
Scaling up works fine, but when scaling down and saving to JPEG I found a class of images that produces strange noise artifacts and behavior:
For inputScale values that are multiples of 0.5 (0.5, 0.25, 0.125, and so on) it is always fine.
For other inputScale values, it's broken.
When I save to TIFF, PNG, or JPEG2000, or draw on screen, it's fine.
When I save to JPEG or BMP, it's broken.
I uploaded sample images:
(original, 4272x2848, 3.4 MB)
[http://www.zoomfoot.com/get/package/517e2423-7795-4239-a166-03d507ec51d8]
(scaled with noise, 1920x1280, 2.2 MB)
[http://www.zoomfoot.com/get/package/6eb64d33-3a30-4e8d-9953-67ce1e7d7ef9]
It is also quite reproducible on 'sunset' images.
I've tried a couple more ways to scale images: using CIAffineTransform, and drawing with NSImage itself. Both produce results without any noise.
Here is the code I was using for scaling & saving:
================= Lanczos ==============
- (void) scaleLanczosImage:(NSString *)path
                     Width:(int)desiredWidth
                    Height:(int)desiredHeight
{
    CIImage *image = [CIImage imageWithContentsOfURL:
                         [NSURL fileURLWithPath:path]];
    CIFilter *scaleFilter = [CIFilter filterWithName:@"CILanczosScaleTransform"];
    int originalHeight = [image extent].size.height;
    int originalWidth = [image extent].size.width;
    float yScale = (float)desiredHeight / (float)originalHeight;
    float xScale = (float)desiredWidth / (float)originalWidth;
    float scale = fminf(yScale, xScale);
    [scaleFilter setValue:[NSNumber numberWithFloat:scale]
                   forKey:@"inputScale"];
    [scaleFilter setValue:[NSNumber numberWithFloat:1.0]
                   forKey:@"inputAspectRatio"];
    [scaleFilter setValue:image forKey:@"inputImage"];
    image = [scaleFilter valueForKey:@"outputImage"];

    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithCIImage:image];
    NSMutableDictionary *options = [NSMutableDictionary
        dictionaryWithObject:[NSNumber numberWithFloat:1.0]
                      forKey:NSImageCompressionFactor];
    [options setValue:[NSNumber numberWithBool:YES] forKey:NSImageProgressive];
    NSData *jpegData = [rep representationUsingType:NSJPEGFileType
                                         properties:options];
    [jpegData writeToFile:@"/Users/dmitry/tmp/img_4389-800x600.jpg" atomically:YES];
}
================= NSImage ==============
- (void) scaleNSImage:(NSString *)path
                Width:(int)desiredWidth
               Height:(int)desiredHeight
{
    NSImage *sourceImage = [[NSImage alloc] initWithContentsOfFile:path];
    NSImage *resizedImage = [[NSImage alloc] initWithSize:
                                NSMakeSize(desiredWidth, desiredHeight)];
    NSSize originalSize = [sourceImage size];

    [resizedImage lockFocus];
    [sourceImage drawInRect:NSMakeRect(0, 0, desiredWidth, desiredHeight)
                   fromRect:NSMakeRect(0, 0, originalSize.width, originalSize.height)
                  operation:NSCompositeSourceOver
                   fraction:1.0];
    [resizedImage unlockFocus];

    NSMutableDictionary *options = [NSMutableDictionary
        dictionaryWithObject:[NSNumber numberWithFloat:1.0]
                      forKey:NSImageCompressionFactor];
    [options setValue:[NSNumber numberWithBool:YES] forKey:NSImageProgressive];
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithData:[resizedImage TIFFRepresentation]];
    NSData *jpegData = [rep representationUsingType:NSJPEGFileType
                                         properties:options];
    // -writeToFile: does not expand "~", so expand it explicitly.
    [jpegData writeToFile:[@"~/nsimg_4389-800x600.jpg" stringByExpandingTildeInPath]
               atomically:YES];
}
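For completeness, a minimal sketch of the CIAffineTransform variant mentioned above; the hardcoded scale factor is just for illustration, and saving proceeds exactly as in scaleLanczosImage:
CIImage *image = [CIImage imageWithContentsOfURL:[NSURL fileURLWithPath:path]];
NSAffineTransform *xform = [NSAffineTransform transform];
[xform scaleBy:0.45];   // any non-power-of-two downscale factor
CIFilter *transformFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[transformFilter setValue:image forKey:@"inputImage"];
[transformFilter setValue:xform forKey:@"inputTransform"];
CIImage *scaled = [transformFilter valueForKey:@"outputImage"];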
If anyone can explain or suggest something here, I'd really appreciate it.
Thanks.
There are some quirks when scaling with CIImage. Perhaps this post by Dan Wood will help:
My understanding is that, due to its nature, the hardware Core Image renderer has some odd quirks when rendering for anything but on-screen display. So, you are recommended to use the software renderer for other situations.
http://developer.apple.com/mac/library/qa/qa2005/qa1416.html
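Based on that QA, a minimal sketch of rendering the filter output through an explicitly software-rendered CIContext; the variable names follow the question's code, the rest is an assumption:
CIImage *output = [scaleFilter valueForKey:@"outputImage"];
CGRect extent = [output extent];
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef bitmapCtx = CGBitmapContextCreate(NULL,
    (size_t)extent.size.width, (size_t)extent.size.height, 8, 0, colorSpace,
    kCGImageAlphaPremultipliedLast);
// kCIContextUseSoftwareRenderer is the option QA1416 refers to.
CIContext *ciCtx = [CIContext contextWithCGContext:bitmapCtx
    options:@{kCIContextUseSoftwareRenderer : @YES}];
[ciCtx drawImage:output inRect:extent fromRect:extent];
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapCtx);
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithCGImage:cgImage];
// ...save rep as JPEG exactly as in the question...
CGImageRelease(cgImage);
CGContextRelease(bitmapCtx);
CGColorSpaceRelease(colorSpace);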
That's definitely a CoreImage bug. File it at bugreport.apple.com
Related
How do I shift an NSImage horizontally so that the shifted pixels appear at the other side, making it look like a loop?
Currently I am using drawInRect. Is there any CIFilter or smarter way to do this?
- (CIImage *)image:(NSImage *)image shiftedBy:(CGFloat)shiftAmount
{
    NSUInteger width = image.size.width;
    NSUInteger height = image.size.height;
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:width
                      pixelsHigh:height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSDeviceRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0];
    [rep setSize:NSMakeSize(width, height)];

    [NSGraphicsContext saveGraphicsState];
    NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:rep];
    [NSGraphicsContext setCurrentContext:context];

    // Split the image at the shift point, then draw the two parts swapped.
    CGRect rect0 = CGRectMake(0, 0, width, height);
    CGRect leftSourceRect, rightSourceRect;
    CGRectDivide(rect0, &leftSourceRect, &rightSourceRect, shiftAmount, CGRectMinXEdge);
    CGRect rightDestinationRect = CGRectOffset(leftSourceRect, width - rightSourceRect.origin.x, 0);
    CGRect leftDestinationRect = rightSourceRect;
    leftDestinationRect.origin.x = 0;

    [image drawInRect:leftDestinationRect fromRect:rightSourceRect operation:NSCompositingOperationSourceOver fraction:1.0];
    [image drawInRect:rightDestinationRect fromRect:leftSourceRect operation:NSCompositingOperationSourceOver fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    return [[CIImage alloc] initWithBitmapImageRep:rep];
}
I tried it with a CIFilter; however, it is 3-4x slower. The code is more readable, though.
- (CIImage *)image:(NSImage *)image shiftXBy:(CGFloat)shiftX YBy:(CGFloat)shiftY
{
    // Building the CIImage from TIFFRepresentation is costly; avoid doing it
    // more than once, or the performance hit gets even bigger.
    CIImage *ciImage = [[CIImage alloc] initWithData:[image TIFFRepresentation]];
    CGAffineTransform xform = CGAffineTransformIdentity;
    NSValue *xformObj = [NSValue valueWithBytes:&xform objCType:@encode(CGAffineTransform)];
    ciImage = [ciImage imageByApplyingFilter:@"CIAffineTile"
                         withInputParameters:@{kCIInputTransformKey : xformObj}];
    ciImage = [ciImage imageByCroppingToRect:CGRectMake(shiftX, shiftY, image.size.width, image.size.height)];
    return ciImage;
}
For optimum performance you need to use CALayer. Here's the basic concept (a sketch follows below):
Your NSView subclass should have wantsLayer and layerUsesCoreImageFilters set to YES.
Assign your image to the contents property of the view's layer (or add a new sublayer).
Create a CIAffineTile filter and add it to the layer's filters.
Now you can change the filter's values without reloading or redrawing the image. This is all hardware accelerated.
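A minimal sketch of that setup, assuming an NSView subclass; the filter name string "tile" is an arbitrary key used to address the filter later:
// In your layer-backed NSView subclass:
self.wantsLayer = YES;
self.layerUsesCoreImageFilters = YES;
self.layer.contents = image;   // an NSImage works as layer contents on 10.6+

CIFilter *tile = [CIFilter filterWithName:@"CIAffineTile"];
[tile setDefaults];
tile.name = @"tile";           // needed to address the filter by key path
self.layer.filters = @[tile];

// Later, shift the image without redrawing, hardware accelerated:
NSAffineTransform *shift = [NSAffineTransform transform];
[shift translateXBy:42.0 yBy:0.0];   // hypothetical shift amount
[self.layer setValue:shift forKeyPath:@"filters.tile.inputTransform"];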
I'm currently using the code below in Xcode 7.2, which takes the image and pastes it into iMessage etc. But it's too large (in dimensions). Is it possible to make this image smaller?
UIPasteboard *pasteboard = [UIPasteboard generalPasteboard];
NSData *imgData = UIImagePNGRepresentation(image);
[pasteboard setData:imgData forPasteboardType:[UIPasteboardTypeListImage objectAtIndex:0]];
Not whilst it's in the pasteboard. BUT, if you're worried about it being the wrong size when you paste it somewhere, FEAR NOT: Apple have been kind to us and have built quite a lot of the presentation "resizing" into things for us.
What you need to do, if you really HAVE to resize the image, is:
1) Be lazy and use code which you can reuse... i.e. I found this on here, I believe:
- (NSImage *)imageResize:(NSImage *)anImage newSize:(NSSize)newSize {
    NSImage *sourceImage = anImage;
    [sourceImage setScalesWhenResized:YES];

    // Report an error if the source isn't a valid image
    if (![sourceImage isValid]) {
        NSLog(@"Invalid Image");
    } else {
        NSImage *smallImage = [[NSImage alloc] initWithSize:newSize];
        [smallImage lockFocus];
        [sourceImage setSize:newSize];
        [[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
        [sourceImage drawAtPoint:NSZeroPoint fromRect:CGRectMake(0, 0, newSize.width, newSize.height) operation:NSCompositeCopy fraction:1.0];
        [smallImage unlockFocus];
        return smallImage;
    }
    return nil;
}
then in the bit of code which you've given us:
UIPasteboard *pasteboard = [UIPasteboard generalPasteboard];
NSImage *myShinyResizedImage = [self imageResize: oldImage newSize: CGSizeMake(100.0, 100.0)];
NSData *imgData = UIImagePNGRepresentation(myShinyResizedImage);
[pasteboard setData:imgData forPasteboardType:[UIPasteboardTypeListImage objectAtIndex:0]];
And it's been resized before it goes off to get pasted elsewhere.
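Since UIPasteboard is UIKit, a UIKit-native version of the same resize would avoid mixing in NSImage. A sketch, with the target size an assumption:
CGSize newSize = CGSizeMake(100.0, 100.0);   // hypothetical target size
UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *resizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSData *imgData = UIImagePNGRepresentation(resizedImage);
[pasteboard setData:imgData forPasteboardType:[UIPasteboardTypeListImage objectAtIndex:0]];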
I'm trying to merge two images onto a background image. This works fine on non-Retina Macs:
the resulting image has the size of my original background image. But on a MacBook Pro with a Retina display the resulting image is doubled. I read that one should use imageWithSize to solve this, because since 10.8 / Retina support, images aren't measured in pixels anymore but in points.
I have no clue how I should change my code to solve this.
What I have so far:
NSSize size = background.size;
GLsizei width = (GLsizei)size.width;
GLsizei height = (GLsizei)size.height;
NSBitmapImageRep *imgRep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:width
                  pixelsHigh:height
               bitsPerSample:8
             samplesPerPixel:3
                    hasAlpha:NO
                    isPlanar:NO
              colorSpaceName:NSDeviceRGBColorSpace
                 bytesPerRow:width * 3
                bitsPerPixel:0];
#ifdef __MAC_10_8
NSImage *backgroundWrapper = [NSImage imageWithSize:size
                                            flipped:YES
                                     drawingHandler:^(NSRect dstRect) {
    return [imgRep drawInRect:dstRect];
}];
NSPoint backgroundPoint = NSMakePoint(0, 0);
[backgroundWrapper lockFocusFlipped:YES];
[background drawAtPoint:backgroundPoint fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
[firstImage drawAtPoint:firstImagePoint fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
[backgroundWrapper unlockFocus];
NSBitmapImageRep *bmpImageRep = [[NSBitmapImageRep alloc] initWithData:[backgroundWrapper TIFFRepresentation]];
[backgroundWrapper addRepresentation:bmpImageRep];
#else
[background lockFocusFlipped:YES];
[firstImage drawAtPoint:firstImagePoint fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
[background unlockFocus];
NSBitmapImageRep *bmpImageRep = [[NSBitmapImageRep alloc] initWithData:[background TIFFRepresentation]];
[background addRepresentation:bmpImageRep];
#endif
NSData *data = [bmpImageRep representationUsingType:NSPNGFileType properties:nil];
How can I get this to work on a Retina display?
Edit:
I also tried this:
- (NSImage *)drawImage:(NSImage *)background with:(NSImage *)firstImage {
    return [NSImage imageWithSize:NSMakeSize(3200, 2000) flipped:NO drawingHandler:^BOOL(NSRect dstRect) {
        NSPoint backgroundPoint = NSMakePoint(0, 0);
        [background drawAtPoint:backgroundPoint fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
        [firstImage drawAtPoint:backgroundPoint fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
        return YES;
    }];
}
But the resulting image is 6400x4000 px.
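One way to pin the output to exact pixel dimensions, sketched under the assumption that drawing into an explicitly sized NSBitmapImageRep is acceptable here (variable names follow the question's code):
// Create a rep with exact pixel dimensions, independent of screen scale.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:3200
                  pixelsHigh:2000
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];
[rep setSize:NSMakeSize(3200, 2000)];   // 1 point == 1 pixel

[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[background drawInRect:NSMakeRect(0, 0, 3200, 2000)
              fromRect:NSZeroRect
             operation:NSCompositeSourceOver
              fraction:1.0];
[firstImage drawAtPoint:NSMakePoint(0, 0)
               fromRect:NSZeroRect
              operation:NSCompositeSourceOver
               fraction:1.0];
[NSGraphicsContext restoreGraphicsState];

NSData *data = [rep representationUsingType:NSPNGFileType properties:nil];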
Is there a way to use a layer-backed NSView as the contentView of an NSDockTile? I've tried all sorts of tricks, but all I get is a transparent area. I also tried going a different route and getting an image out of the CALayer to use with [NSApp setApplicationIconImage:], but no luck either - I think the issue here is creating an image representation for an offscreen view.
As usual, I got my answer soon after posting the question :) I'll post it here for future reference: I solved it by creating an NSImage out of the layer, as described in this Cocoa Is My Girlfriend blog post: http://www.cimgf.com/2009/02/03/record-your-core-animation-animation/
The only missing piece is that in order to have anything rendered, the view must be added to a window. So, using the example code from the post, my solution is:
NSView *myView = ...
// The view must live in a window to render anything; park the window offscreen.
NSWindow *window = [[NSWindow alloc] initWithContentRect:NSMakeRect(-1000.0, -1000.0, 256.0, 256.0)
                                               styleMask:0
                                                 backing:NSBackingStoreNonretained
                                                   defer:NO];
[window setContentView:myView];

NSUInteger pixelsHigh = myView.bounds.size.height;
NSUInteger pixelsWide = myView.bounds.size.width;
NSUInteger bitmapBytesPerRow = pixelsWide * 4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef context = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh, 8,
                                             bitmapBytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Render the layer's current state into the bitmap context.
[myView.layer.presentationLayer renderInContext:context];
CGImageRef image = CGBitmapContextCreateImage(context);
CGContextRelease(context);

NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc] initWithCGImage:image];
CFRelease(image);
NSImage *img = [[NSImage alloc] initWithData:[bitmap TIFFRepresentation]];
[NSApp setApplicationIconImage:img];
I have an NSImage which I am trying to resize, like so:
NSImage *capturePreviewFill = [[NSImage alloc] initWithData:previewData];
NSSize newSize;
newSize.height = 160;
newSize.width = 120;
[capturePreviewFill setScalesWhenResized:YES];
[capturePreviewFill setSize:newSize];
NSData *resizedPreviewData = [capturePreviewFill TIFFRepresentation];
resizedCaptureImageBitmapRep = [[NSBitmapImageRep alloc] initWithData:resizedPreviewData];
saveData = [resizedCaptureImageBitmapRep representationUsingType:NSJPEGFileType properties:nil];
[saveData writeToFile:@"/Users/ricky/Desktop/Photo.jpg" atomically:YES];
My first issue is that the image gets squashed when I try to resize it; it doesn't keep its aspect ratio. I read that using -setScalesWhenResized: would resolve this problem, but it didn't.
My second issue is that when I try to write the image to a file, the image isn't actually resized at all.
Thanks in advance,
Ricky.
I found this blog post very helpful for resizing my image: http://weblog.scifihifi.com/2005/06/25/how-to-resize-an-nsimage/
You will need to enforce the aspect ratio of the resized image yourself; it won't be done for you. This is how I did it when trying to fit the image into the printable area of the default paper:
NSImage *image = ... // get your image
NSPrintInfo *printInfo = [NSPrintInfo sharedPrintInfo];
NSSize paperSize = printInfo.paperSize;
CGFloat usablePaperWidth = paperSize.width - printInfo.leftMargin - printInfo.rightMargin;
CGFloat resizeWidth = usablePaperWidth;
CGFloat resizeHeight = usablePaperWidth * (image.size.height / image.size.width);
Here is a slightly modified version of his code from the blog:
NSData *sourceData = [image TIFFRepresentation];
float resizeWidth = ... // your desired width;
float resizeHeight = ... // your desired height;
NSImage *sourceImage = [[NSImage alloc] initWithData: sourceData];
NSImage *resizedImage = [[NSImage alloc] initWithSize: NSMakeSize(resizeWidth, resizeHeight)];
NSSize originalSize = [sourceImage size];
[resizedImage lockFocus];
[sourceImage drawInRect: NSMakeRect(0, 0, resizeWidth, resizeHeight) fromRect: NSMakeRect(0, 0, originalSize.width, originalSize.height) operation: NSCompositeSourceOver fraction: 1.0];
[resizedImage unlockFocus];
NSData *resizedData = [resizedImage TIFFRepresentation];
[sourceImage release];
[resizedImage release];
If you can require Mac OS X 10.6 or later, send your image a CGImageForProposedRect:context:hints: message, then write the CGImage out using a CGImageDestination object.
The rectangle should have NSZeroPoint as its origin, and its size should be the size you want.
This still won't scale the image proportionally (maintaining aspect ratio); you have to do that yourself.
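A minimal sketch of that 10.6+ path; the output path and the desiredWidth/desiredHeight variables are hypothetical, and kUTTypeJPEG comes from <CoreServices/CoreServices.h> with the CGImageDestination functions from ImageIO:
NSRect proposedRect = NSMakeRect(0, 0, desiredWidth, desiredHeight);
CGImageRef cgImage = [image CGImageForProposedRect:&proposedRect
                                           context:nil
                                             hints:nil];
NSURL *url = [NSURL fileURLWithPath:@"/tmp/resized.jpg"]; // hypothetical path
CGImageDestinationRef dest = CGImageDestinationCreateWithURL(
    (CFURLRef)url, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(dest, cgImage, NULL);
CGImageDestinationFinalize(dest);
CFRelease(dest);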
The pre-10.6 way to do this (without going through a TIFF representation) is to lock focus on the resized image, create an NSBitmapImageRep for the extent of the image (that is, a rectangle with zero origin and the image's size), unlock focus, and then ask that bitmap image rep for JPEG data.
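And a sketch of that pre-10.6 route, using -initWithFocusedViewRect: while focus is locked; the sourceImage and size variables are assumptions:
NSImage *resizedImage = [[NSImage alloc] initWithSize:NSMakeSize(desiredWidth, desiredHeight)];
[resizedImage lockFocus];
[sourceImage drawInRect:NSMakeRect(0, 0, desiredWidth, desiredHeight)
               fromRect:NSZeroRect
              operation:NSCompositeSourceOver
               fraction:1.0];
// Capture the focused drawing as a bitmap before unlocking focus.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithFocusedViewRect:NSMakeRect(0, 0, desiredWidth, desiredHeight)];
[resizedImage unlockFocus];

NSData *jpegData = [rep representationUsingType:NSJPEGFileType properties:nil];
[rep release];
[resizedImage release];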