I've got an app that displays photos using NSImage – specifically, -[NSImage drawInRect:fromRect:operation:fraction:]. I want to highlight areas of the photo that are completely burned out (maximum values in all components, pure white) using a color like red, as some digital cameras and image processing apps do, to help the user see whether the image is overexposed, and how badly.
I've been scratching my head as to how to do this. Options I've considered:
I could probably write a Core Image filter to do it; none of the built-in filters look up to the task. That seems like overkill, though; I've been reading through the docs, and it looks fairly complicated. Big learning curve.
I could scan through the bitmap data for the image and modify it as necessary. This is easy enough to code for one bitmap format, but the multitude of bitmap formats makes it a rather annoying exercise, and speed is important here, so writing general-purpose code that renders the image into some maximal common format and works on that bitmap would carry too big a speed penalty.
As it happens, I am already scanning through images (handling all the different bitmap formats) at an earlier point in the code, to generate histogram data for the images. I could pretty easily add code at that point that would remember the burned-out pixels for later use. I'm not quite sure what the best way is to do that, though. A 1-bit-per-pixel NSBitmapImageRep? How would I draw it later, making the 1-pixels draw red and the 0-pixels draw transparent, for example? I don't want to make a 32-bit NSBitmapImageRep with an alpha channel and everything just for this purpose, as memory is not infinite and images are large. But there must be a way to draw a 1-bit mask in a given color, somehow.
Before forging ahead with one of these approaches, I thought I'd see whether anybody here has a better idea. Or maybe has implemented the CI filter in question already? Apart from the learning curve, that seems like the best approach I've thought of so far – no memory overhead, and probably faster than other options, too.
Thanks...
Ben Haller
Stick Software
OK, I implemented my own Core Image filter to do this. Wasn't as hard as I expected, although the documentation is not great for this stuff. The doc examples all assume you're using ARC, so if you're not, following those examples will give you various retain/release bugs. There was also a little weirdness with the CIFilterConstructor stuff, which did not quite go as documented. But overall pretty easy. CI is cool. My code is below, for anybody who might find it useful:
Header:
#import <QuartzCore/QuartzCore.h>
@interface SSTintHighlightsFilter : CIFilter
{
CIImage *inputImage;
CIColor *highlightColor;
}
@end
Implementation file:
#import "SSTintHighlightsFilter.h"
static CIKernel *tintHighlightsFilter = nil;
@implementation SSTintHighlightsFilter
+ (void)initialize
{
[CIFilter registerFilterName:#"SSTintHighlightsFilter" constructor:(id )self
classAttributes:[NSDictionary dictionaryWithObjectsAndKeys:#"Tint Highlights", kCIAttributeFilterDisplayName, [NSArray arrayWithObjects:kCICategoryColorAdjustment, kCICategoryStillImage, nil], kCIAttributeFilterCategories, nil]];
}
+ (CIFilter *)filterWithName:(NSString *)name
{
CIFilter *filter = [[self alloc] init];
return [filter autorelease];
}
- (id)init
{
if (!tintHighlightsFilter)
{
NSBundle *bundle = [NSBundle bundleForClass:[self class]];
NSString *code = [NSString stringWithContentsOfFile:[bundle pathForResource:@"tintHighlightsAndShadows" ofType:@"cikernel"] encoding:NSASCIIStringEncoding error:NULL];
NSArray *kernels = [CIKernel kernelsWithString:code];
tintHighlightsFilter = [[kernels objectAtIndex:0] retain];
}
return [super init];
}
- (NSDictionary *)customAttributes
{
NSDictionary *attrs = @{
@"highlightColor" : @{ kCIAttributeClass : [CIColor class], kCIAttributeType : kCIAttributeTypeOpaqueColor }
};
return attrs;
}
- (CIImage *)outputImage
{
CISampler *src = [CISampler samplerWithImage:inputImage];
return [self apply:tintHighlightsFilter
arguments:[NSArray arrayWithObjects:src, highlightColor, nil]
options:[NSDictionary dictionaryWithObjectsAndKeys:[src definition], kCIApplyOptionDefinition, nil]];
}
@end
tintHighlights.cikernel:
kernel vec4 tintHighlights(sampler inputImage, __color highlightColor)
{
vec4 originalColor, tintedColor;
float sum;
// fetch the source pixel
originalColor = sample(inputImage, samplerCoord(inputImage));
// calculate the color component sum as a way of testing whether we are black or white
sum = originalColor.r + originalColor.g + originalColor.b;
// replace pixels that are white with the highlight color
tintedColor = (sum > 2.99999999999999999999999) ? highlightColor : originalColor;
// preserve alpha
tintedColor.a = originalColor.a;
return tintedColor;
}
Using the filter:
+ (NSImage *)showHighlightsInImage:(NSImage *)img dstRect:(NSRect)dstRect
{
NSGraphicsContext *currentContext = [NSGraphicsContext currentContext];
NSRect dstRectForCGImage = dstRect; // because the method below wants a pointer, and I don't trust it not to modify my rect...
CGImageRef cgImage = [img CGImageForProposedRect:&dstRectForCGImage context:currentContext hints:nil];
CIImage *inputImage = [[CIImage alloc] initWithCGImage:cgImage];
[SSTintHighlightsFilter class]; // get my filter initialized
CIFilter *highlightFilter = [CIFilter filterWithName:@"SSTintHighlightsFilter"];
[highlightFilter setValue:inputImage forKey:@"inputImage"];
[highlightFilter setValue:[CIColor colorWithRed:1.0 green:0.0 blue:0.0] forKey:@"highlightColor"];
[inputImage release];
CIImage *outputImage = [highlightFilter valueForKey:@"outputImage"];
NSImage *resultImage = [[NSImage alloc] initWithSize:[img size]];
[resultImage addRepresentation:[NSCIImageRep imageRepWithCIImage:outputImage]];
return [resultImage autorelease];
}
I'm not sure that I'm handling the alpha entirely robustly, with premultiplication issues and so forth, but apart from that possible glitch it is working great.
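For reference, the kernel language does include unpremultiply() and premultiply() built-ins, so a variant of the kernel that does the whiteness test on unpremultiplied values might look like the sketch below (untested, and I haven't checked whether the __color argument needs the same treatment):
kernel vec4 tintHighlightsStraightAlpha(sampler inputImage, __color highlightColor)
{
// work on unpremultiplied values so the whiteness test isn't skewed by alpha
vec4 originalColor = unpremultiply(sample(inputImage, samplerCoord(inputImage)));
float sum = originalColor.r + originalColor.g + originalColor.b;
vec4 tintedColor = (sum > 2.99999) ? highlightColor : originalColor;
// preserve alpha, then premultiply again for output
tintedColor.a = originalColor.a;
return premultiply(tintedColor);
}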
Related
I'm creating an application to capture a small area of the screen and compare it to a library of images saved to disk. I wrote a similar application a few years ago in .NET using BitBlt and the Win32 API. Performance is really important, and I don't mind delving into OpenGL if it would make a difference.
You could use some code like this:
-(NSImage*)captureImageFromRect:(NSRect)captureRect
{
NSImage *resultingImage = nil;
CGImageRef image;
CGWindowID windowID = (CGWindowID)[[self window] windowNumber];
image = CGWindowListCreateImage(NSRectToCGRect(captureRect), kCGWindowListOptionIncludingWindow|kCGWindowListOptionOnScreenBelowWindow, windowID, kCGWindowImageDefault);
resultingImage = [[NSImage alloc] initWithCGImage:image size:NSZeroSize];
CGImageRelease(image);
return [resultingImage autorelease];
}
I have an NSBitmapImageRep that I am creating the following way:
+ (NSBitmapImageRep *)bitmapRepOfImage:(NSURL *)imageURL {
CIImage *anImage = [CIImage imageWithContentsOfURL:imageURL];
CGRect outputExtent = [anImage extent];
NSBitmapImageRep *theBitMapToBeSaved = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:NULL pixelsWide:outputExtent.size.width
pixelsHigh:outputExtent.size.height bitsPerSample:8 samplesPerPixel:4
hasAlpha:YES isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace
bytesPerRow:0 bitsPerPixel:0];
NSGraphicsContext *nsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:theBitMapToBeSaved];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext: nsContext];
CGPoint p = CGPointMake(0.0, 0.0);
[[nsContext CIContext] drawImage:anImage atPoint:p fromRect:outputExtent];
[NSGraphicsContext restoreGraphicsState];
return [[theBitMapToBeSaved retain] autorelease];
}
And being saved as BMP this way:
NSBitmapImageRep *original = [imageTools bitmapRepOfImage:fileURL];
NSData *converted = [original representationUsingType:NSBMPFileType properties:nil];
[converted writeToFile:filePath atomically:YES];
The thing here is that the BMP file can be read and manipulated correctly under Mac OS X, but under Windows it just fails to load, as in this screenshot:
screenshot http://dl.dropbox.com/u/1661304/Grab/74a6dadb770654213cdd9290f0131880.png
If the file is opened with MS Paint (yes, MS Paint can open it) and then resaved, though, it will work.
Would appreciate a hand here. :)
Thanks in advance.
I think the main reason your code is failing is that you are creating your NSBitmapImageRep with 0 bits per pixel. That means your image rep will have precisely zero information in it. You almost certainly want 32 bits per pixel.
However, your code is an unbelievably convoluted way to obtain an NSBitmapImageRep from an image file on disk. Why on earth are you using a CIImage? That is a Core Image object designed for use with Core Image filters and makes no sense here at all. You should be using an NSImage or CGImageRef.
Your method is also poorly named. It should instead be named something like +bitmapRepForImageFileAtURL: to better indicate what it is doing.
Also, this code makes no sense:
[[theBitMapToBeSaved retain] autorelease]
Calling retain and then autorelease does nothing, because all it does is increment the retain count and then decrement it again immediately.
You are responsible for releasing theBitMapToBeSaved because you created it using alloc. Since it is being returned, you should call autorelease on it. Your additional retain call just causes a leak for no reason.
Try this:
+ (NSBitmapImageRep*)bitmapRepForImageFileAtURL:(NSURL*)imageURL
{
NSImage* image = [[[NSImage alloc] initWithContentsOfURL:imageURL] autorelease];
return [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
}
+ (NSData*)BMPDataForImageFileAtURL:(NSURL*)imageURL
{
NSBitmapImageRep* bitmap = [self bitmapRepForImageFileAtURL:imageURL];
return [bitmap representationUsingType:NSBMPFileType properties:nil];
}
You really need to review the Cocoa Drawing Guide and the Memory Management Guidelines, because it appears that you are having trouble with some basic concepts.
This is my image resize code:
CALayer *newCALayer = [[CALayer layer] retain];
NSImage* image = [[NSImage alloc] initWithData:[NSData dataWithContentsOfFile:path]];
CGImageRef newCGImageFullResolution = [image CGImageForProposedRect:nil context:nil hints:nil];
CGContextRef context = CGBitmapContextCreate(NULL, drawRect.size.width, drawRect.size.height,
CGImageGetBitsPerComponent(newCGImageFullResolution),
CGImageGetBytesPerRow(newCGImageFullResolution),
CGImageGetColorSpace(newCGImageFullResolution),
CGImageGetAlphaInfo(newCGImageFullResolution));
CGContextDrawImage(context, CGRectMake(0, 0, drawRect.size.width, drawRect.size.height), newCGImageFullResolution);
CGImageRef scaledImage = CGBitmapContextCreateImage(context);
newCALayer.contents = (id)scaledImage;
CGImageRelease(scaledImage);
newCALayer.contentsGravity = kCAGravityResizeAspect;
newCALayer.opacity = 0.0;
newCALayer.anchorPoint = CGPointMake(0.0f,0.0f);
newCALayer.frame = CGRectMake( 0.0,
0.0,
[Singleton sharedSingleton].fullscreenRect.size.width,
[Singleton sharedSingleton].fullscreenRect.size.height);
[newCALayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable];
//CGImageRelease(cgImageFullResolution); (bonus points if you can explain why I can't release this! I mean, I can release the scaled image ok??)
CGContextRelease(context);
[image release];
I am doing all of this from a background thread in order to preload pictures so my GUI feels snappy. It took some work getting synchronization and what not set up so the CALayers ends up in view.
But I believe the term for describing how fast this is would be "it's a dog".
Comparing to IKImageView - that thing flings up thumbnails of images faster than I can scroll.
Does anybody have some suggestions for how to handle this better than I am doing it now?
In other words, my problem is that I want to have a super-fast UX. I believe the way to accomplish this is by preloading things to CALayers (this may be wrong? I tried NSImageView and some IK-stuff, but at least CALayer is better than that).
ImageKit is probably using CGImageSourceCreateThumbnailAtIndex() to quickly get an image appropriate to the destination, rather than reading in the entire image file.
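Something along those lines might look like this (a rough, untested sketch of the ImageIO route; the function name is mine, the option keys are the documented ImageIO ones):
#import <ApplicationServices/ApplicationServices.h>
// Rough sketch: build a downsampled CGImage straight from the file on disk.
// The caller owns the returned image (CGImageRelease when done).
static CGImageRef CreateThumbnailForPath(NSString *path, CGFloat maxPixelSize)
{
NSURL *url = [NSURL fileURLWithPath:path];
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
if (!source)
return NULL;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
(id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageAlways,
(id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
[NSNumber numberWithDouble:maxPixelSize], (id)kCGImageSourceThumbnailMaxPixelSize,
nil];
CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0, (CFDictionaryRef)options);
CFRelease(source);
return thumbnail;
}
The returned CGImageRef can go straight into the layer's contents, which avoids the full-size decode and the CGBitmapContext round trip.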
Alternatively, using NSImage:
NSImage *image = [[[NSImage alloc] initWithContentsOfFile:path] autorelease];
[image setScalesWhenResized:YES]; // *
[image setDataRetained:YES]; // *
[image setSize:desiredNewSize];
Then use the image as it is.
As for why your app is slow, run it under Instruments. That will tell you specifically where you are spending the majority of the processor time you use—it may not be in your scaling code after all.
*Since 10.6, these messages do nothing useful and are deprecated, so you can omit them if you are requiring Snow Leopard or later.
I'm writing a QuickLook plugin. Well, everything works. I just want to try to make it better ;).
Thus the question.
Here is the function for returning the thumbnail image that I'm using now.
QLThumbnailRequestSetImageWithData(
QLThumbnailRequestRef thumbnail,
CFDataRef data,
CFDictionaryRef properties);
http://developer.apple.com/mac/library/documentation/UserExperience/Reference/QLThumbnailRequest_Ref/Reference/reference.html#//apple_ref/c/func/QLThumbnailRequestSetImageWithData
Right now I'm creating a TIFF and encapsulating it in an NSData. An example:
// Setting CFDataRef
CGSize thumbnailMaxSize = QLThumbnailRequestGetMaximumSize(thumbnail);
NSMutableAttributedString *attributedString = [[[NSMutableAttributedString alloc]
initWithString:#"dummy"
attributes:[NSDictionary dictionaryWithObjectsAndKeys:
[NSFont fontWithName:#"Monaco" size:10], NSFontAttributeName,
[NSColor colorWithCalibratedRed:0.0 green:0.0 blue:0.0 alpha:1.0], NSForegroundColorAttributeName,
nil]
] autorelease];
NSImage *thumbnailImage = [[[NSImage alloc] initWithSize:NSMakeSize(thumbnailMaxSize.width, thumbnailMaxSize.height)] autorelease];
[thumbnailImage lockFocus];
[[NSColor whiteColor] set];
NSRectFill(NSMakeRect(0, 0, thumbnailMaxSize.width, thumbnailMaxSize.height));
[attributedString drawInRect:NSMakeRect(0, 0, thumbnailMaxSize.width, thumbnailMaxSize.height)];
[thumbnailImage unlockFocus];
(CFDataRef)[thumbnailImage TIFFRepresentation]; // This is data
// Setting CFDictionaryRef
(CFDictionaryRef)[NSDictionary dictionaryWithObjectsAndKeys:@"kUTTypeTIFF", (NSString *)kCGImageSourceTypeIdentifierHint, nil ]; // this is properties
However QuickLook provides another function to return thumbnail image, namely
QLThumbnailRequestSetImage(
QLThumbnailRequestRef thumbnail,
CGImageRef image,
CFDictionaryRef properties);
http://developer.apple.com/mac/library/documentation/UserExperience/Reference/QLThumbnailRequest_Ref/Reference/reference.html#//apple_ref/c/func/QLThumbnailRequestSetImage
I have a feeling that passing CGImage to the QL instead of TIFF data would help in speeding things up.
However, I have never worked with a CG context before. I know the documentation is there :), but anyway, could anyone give an example of how to turn that NSAttributedString into a CGImageRef? An example is worth ten times reading the documentation ;)
Any help appreciated. Thanks in advance!
could anyone give an example of how to turn that NSAttributedString into a CGImageRef?
You can't turn a string into an image; they're two completely different kinds of data, and one is two-dimensional (characters over time) while the other is at-least-three-dimensional (color over x and y).
What you need to do is draw the string and produce an image of the drawing. That's what you're doing now with NSImage: Creating an image and drawing the string into it.
You're asking about creating a CGImage. Creating a bitmap context, using Core Text to draw the string into it, and creating an image of the contents of the bitmap context is one way to do that.
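Roughly like this (a sketch only, assuming a single-line string; the function name, inset, and baseline values are mine):
#import <ApplicationServices/ApplicationServices.h>
// Sketch: draw an attributed string into a bitmap context and snapshot it.
// The caller owns the returned CGImage.
static CGImageRef CreateImageOfAttributedString(NSAttributedString *string,
size_t width, size_t height)
{
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0,
colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
if (!context)
return NULL;
// white background, as in the NSImage version
CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1.0);
CGContextFillRect(context, CGRectMake(0, 0, width, height));
// draw the string with Core Text
CTLineRef line = CTLineCreateWithAttributedString((CFAttributedStringRef)string);
CGContextSetTextPosition(context, 2.0, height - 14.0); // arbitrary inset/baseline
CTLineDraw(line, context);
CFRelease(line);
CGImageRef image = CGBitmapContextCreateImage(context);
CGContextRelease(context);
return image;
}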
However, you're already much closer to another solution, assuming you can require Snow Leopard. Instead of asking the NSImage for a TIFF representation, ask it for a CGImage.
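That is, something like this (sketch; thumbnailImage is the NSImage you already draw into in your current code):
// 10.6+: get a CGImage directly from the NSImage instead of going through TIFF data
NSRect proposedRect = NSMakeRect(0, 0, thumbnailMaxSize.width, thumbnailMaxSize.height);
CGImageRef cgImage = [thumbnailImage CGImageForProposedRect:&proposedRect
context:nil
hints:nil];
if (cgImage)
QLThumbnailRequestSetImage(thumbnail, cgImage, NULL);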
Summary: exporting pixel data from NSOpenGLView to some file formats gives incorrect colours
I am developing an application to visualise some experimental data. One of its functions is to render the data in an NSOpenGLView subclass, and allow the resulting image to be exported to a file or copied to the clipboard.
The view exports the data as an NSImage, generated like this:
- (NSImage*) image
{
NSBitmapImageRep* imageRep;
NSImage* image;
NSSize viewSize = [self bounds].size;
int width = viewSize.width;
int height = viewSize.height;
[self lockFocus];
[self drawRect:[self bounds]];
[self unlockFocus];
imageRep=[[[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
pixelsWide:width
pixelsHigh:height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bytesPerRow:width*4
bitsPerPixel:32] autorelease];
[[self openGLContext] makeCurrentContext];
glReadPixels(0,0,width,height,GL_RGBA,GL_UNSIGNED_BYTE,[imageRep bitmapData]);
image=[[[NSImage alloc] initWithSize:NSMakeSize(width,height)] autorelease];
[image addRepresentation:imageRep];
[image setFlipped:YES]; // this is deprecated in 10.6
[image lockFocusOnRepresentation:imageRep]; // this will flip the rep
[image unlockFocus];
return image;
}
Copying uses this image very simply, like this:
- (IBAction) copy:(id) sender
{
NSImage* img = [self image];
NSPasteboard* pb = [NSPasteboard generalPasteboard];
[pb clearContents];
NSArray* copied = [NSArray arrayWithObject:img];
[pb writeObjects:copied];
}
For file writing, I use the ImageKit IKSaveOptions accessory panel to set the output file type and associated options, then use the following code to do the writing:
NSImage* glImage = [glView image];
NSRect rect = [glView bounds];
rect.origin.x = rect.origin.y = 0;
img = [glImage CGImageForProposedRect:&rect
context:[NSGraphicsContext currentContext]
hints:nil];
if (img)
{
NSURL* url = [NSURL fileURLWithPath: path];
CGImageDestinationRef dest = CGImageDestinationCreateWithURL((CFURLRef)url,
(CFStringRef)newUTType,
1,
NULL);
if (dest)
{
CGImageDestinationAddImage(dest,
img,
(CFDictionaryRef)[imgSaveOptions imageProperties]);
CGImageDestinationFinalize(dest);
CFRelease(dest);
}
}
(I've trimmed a bit of extraneous code here, but nothing that would affect the outcome as far as I can see. The newUTType comes from the IKSaveOptions panel.)
This works fine when the file is exported as GIF, JPEG, PNG, PSD or TIFF, but exporting to PDF, BMP, TGA, ICNS and JPEG-2000 produces a red colour artefact on part of the image. Example images are below, the first exported as JPG, the second as PDF.
(Images: the JPEG export and the PDF export showing the red artefact; source: walkytalky.net)
Copy to clipboard does not exhibit this red stripe with the current implementation of image, but it did with the original implementation, which generated the imageRep using NSCalibratedRGBColorSpace rather than NSDeviceRGBColorSpace. So I'm guessing there's some issue with the colour representation in the pixels I get from OpenGL that doesn't get through the subsequent conversions properly, but I'm at a loss as to what to do about it.
So, can anyone tell me (i) what is causing this, and (ii) how can I make it go away? I don't care so much about all of the formats but I'd really like at least PDF to work.
OK. As evidenced by the deafening silence which met this question, the problem turns out to be a bit obscure. But the workaround is nice and simple, so I'm describing it here just in case anyone ever wants to know.
Summary: some file export formats do not cope well with translucency in the rendered pixels.
I don't understand the exact reasons for this, although it might possibly have something to do with the presence or absence of alpha pre-multiplication. All the formats seem to be fine with completely transparent pixels, rendering them either transparent or as white if the format doesn't support transparency. But pixels that have a partial alpha, plus something in the colour channels, may get mangled.
As it happens, I did not even want any parts of the image to be translucent, and indeed set glDisable(GL_BLEND) before the relevant rendering code. However, objects were rendered with the materials from this seemingly canonical collection at the OpenGL home site, some of which include alpha values other than 1.0 in their specular, diffuse and ambient colours. I had slavishly copied this without paying attention to the fact that it might lead to some unwanted translucency.
For my purposes, then, the solution is straightforward: change the material definitions so that the alpha component is always 1.0.
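For example (a sketch only; the numbers are roughly the "pearl" entry from that table and are illustrative, and the only part that matters is forcing the fourth component to 1.0):
#import <OpenGL/gl.h>
/* Sketch: clamp material alphas to 1.0 before submitting them. */
GLfloat ambient[4]  = { 0.25f,     0.20725f,  0.20725f,  0.922f };
GLfloat diffuse[4]  = { 1.0f,      0.829f,    0.829f,    0.922f };
GLfloat specular[4] = { 0.296648f, 0.296648f, 0.296648f, 0.922f };
ambient[3] = diffuse[3] = specular[3] = 1.0f;   /* kill the unwanted translucency */
glMaterialfv(GL_FRONT, GL_AMBIENT,  ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE,  diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, specular);
glMaterialf(GL_FRONT, GL_SHININESS, 0.088f * 128.0f);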
Note that some image formats, such as PNG and TIFF, do fully support the translucency, so if you need that then those are the ones to go for.
This was, in fact, what tipped me off to the answer. However, it was not obvious at first because I was using OS X Preview to view the files, and the translucency is not obvious with the default view settings:
(Preview screenshots of the exported files; source: walkytalky.net)
So, a second lesson from this whole episode is: enable View | Show Image Background in Preview to get the checkerboard and show up any stray transparency.