Getting a CGImageRef from an NSImage in Cocoa on Mac OS X - cocoa

I need to get a CGImageRef from an NSImage. Is there an easy way to do this in Cocoa for Mac OS X?

Pretty hard to miss:
-[NSImage CGImageForProposedRect:context:hints:]
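A minimal call site, as a sketch (assuming image is your NSImage):
NSRect proposedRect = NSMakeRect(0, 0, [image size].width, [image size].height);
CGImageRef cgImage = [image CGImageForProposedRect:&proposedRect context:nil hints:nil];
// The method name contains no Copy/Create, so you don't own the result;
// CGImageRetain() it if it must outlive the current autorelease pool.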

If you need to target Mac OS X 10.5 or earlier, use the following snippet instead. If you don’t, then NSD’s answer is the right way to go.
CGImageRef CGImageCreateWithNSImage(NSImage *image) {
    NSSize imageSize = [image size];

    // Create an RGBA bitmap context to render the NSImage into.
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, imageSize.width, imageSize.height, 8, 0,
                                                       [[NSColorSpace genericRGBColorSpace] CGColorSpace],
                                                       kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst);

    // Draw the NSImage into the context through a temporary NSGraphicsContext.
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:bitmapContext flipped:NO]];
    [image drawInRect:NSMakeRect(0, 0, imageSize.width, imageSize.height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    // Snapshot the context; the caller owns the returned CGImage (Create rule).
    CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    return cgImage;
}
If your image comes from a file, you may be better off using an image source to load the data directly into a CGImageRef.
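For instance, a sketch along these lines (assuming fileURL is an NSURL for the image file; error handling omitted):
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)fileURL, NULL);
CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, 0, NULL); // index 0 = first image in the file
CFRelease(source);
// cgImage follows the Create rule; CGImageRelease() it when you are done.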

Related

NSImage and PDFImageRep caching still draws at only one resolution

I have an NSImage, initialized with PDF data, created like this:
NSData* data = [view dataWithPDFInsideRect:view.bounds];
slideImage = [[NSImage alloc] initWithData:data];
The slideImage is now the size of the view.
When I try to render the image in an NSImageView, it only draws sharply when the image view is exactly the original size of the image, even if I clear the cache or change the image size. I tried setting the cacheMode to NSImageCacheNever, which also didn't work. The only image rep in the image is the PDF one, and when I render it to a PDF file it shows that it's still vector.
As a workaround, I create an NSBitmapImageRep of a different size, call drawInRect: on the original image, put the bitmap representation inside a new NSImage, and render that. It works, but it feels suboptimal:
- (NSBitmapImageRep *)drawToBitmapOfWidth:(NSInteger)width
                                andHeight:(NSInteger)height
                                withScale:(CGFloat)scale
{
    NSBitmapImageRep *bmpImageRep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:width * scale
                      pixelsHigh:height * scale
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                    bitmapFormat:NSAlphaFirstBitmapFormat
                     bytesPerRow:0
                    bitsPerPixel:0];
    bmpImageRep = [bmpImageRep bitmapImageRepByRetaggingWithColorSpace:[NSColorSpace sRGBColorSpace]];
    [bmpImageRep setSize:NSMakeSize(width, height)];

    NSGraphicsContext *bitmapContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:bmpImageRep];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:bitmapContext];
    [self drawInRect:NSMakeRect(0, 0, width, height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1];
    [NSGraphicsContext restoreGraphicsState];

    return bmpImageRep;
}

- (NSImage *)rasterizedImageForSize:(NSSize)size
{
    NSImage *newImage = [[NSImage alloc] initWithSize:size];
    NSBitmapImageRep *rep = [self drawToBitmapOfWidth:size.width andHeight:size.height withScale:1];
    [newImage addRepresentation:rep];
    return newImage;
}
How can I get the PDF to render nicely at any size without resorting to hacks like mine?
The point of NSImage is that you create it with the size (in points) that you want it to be. The backing representation can be vector-based (e.g. PDF), and the NSImage is resolution-independent (i.e. it supports different pixels per point), but the NSImage still has a fixed size (in points).
One of the points of an NSImage is that it can add a cached representation to speed up subsequent drawing.
If you need to draw a PDF at multiple sizes, and you want to use an NSImage, you're probably best off creating an NSImage for each given target size. If you want to, you can keep the NSPDFImageRep around -- I don't think it'll save you much.
We tried the following:
NSPDFImageRep *rep = self.representations.lastObject;
return [NSImage imageWithSize:size flipped:NO drawingHandler:^BOOL (NSRect dstRect)
{
    [[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
    [rep drawInRect:dstRect
           fromRect:NSZeroRect
          operation:NSCompositeCopy
           fraction:1
     respectFlipped:YES
              hints:@{ NSImageHintInterpolation: @(NSImageInterpolationHigh) }];
    return YES;
}];
That gives nice results when scaling up, but makes for blurry images when scaling down.

Rendering NSString to monochrome texture on OSX

I want to render an NSString to a monochrome OpenGL texture. I found many examples of how to do that on iOS, but I'm struggling to find a code snippet that works on OS X.
I know how to do the last bit, uploading a texture to the GPU, but how do I get the raw (8 bits per pixel) data?
Take a look at GLString.m in Apple's Cocoa OpenGL code sample. You can easily modify it to generate a monochrome texture. The relevant modifications result in code like this:
NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:frameSize.width
                  pixelsHigh:frameSize.height
               bitsPerSample:8
             samplesPerPixel:1
                    hasAlpha:NO
                    isPlanar:NO
              colorSpaceName:NSDeviceWhiteColorSpace
                bitmapFormat:0
                 bytesPerRow:frameSize.width
                bitsPerPixel:8];

NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmap];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:context];
…
[string drawAtPoint:NSMakePoint(marginSize.width, marginSize.height)]; // draw at offset position
…
[NSGraphicsContext restoreGraphicsState];

if (0 == texName)
    glGenTextures(1, &texName);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, texName);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, 0, 0, texSize.width, texSize.height, GL_LUMINANCE, GL_UNSIGNED_BYTE, [bitmap bitmapData]);
The key parts are:
1. Allocate your own NSBitmapImageRep with the desired format.
2. Allocate an NSGraphicsContext that writes into your NSBitmapImageRep.
3. Draw your string.
4. Restore the graphics context to its original state.
5. Call glTexSubImage2D with a pixel format compatible with the one you initialized your NSBitmapImageRep with.
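One caveat: glTexSubImage2D only replaces pixels in texture storage that already exists, and GLString.m allocates that storage elsewhere (re-allocating when the size changes). A minimal allocation for this monochrome case might look like the following sketch:
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, texName);
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_LUMINANCE, texSize.width, texSize.height, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL); // NULL data: allocate storage without uploading pixels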

NSBitmapImageRep generated BMP can't be read on Windows

I have an NSBitmapImageRep that I am creating the following way:
+ (NSBitmapImageRep *)bitmapRepOfImage:(NSURL *)imageURL {
    CIImage *anImage = [CIImage imageWithContentsOfURL:imageURL];
    CGRect outputExtent = [anImage extent];
    NSBitmapImageRep *theBitMapToBeSaved = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:outputExtent.size.width
                      pixelsHigh:outputExtent.size.height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSDeviceRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0];

    NSGraphicsContext *nsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:theBitMapToBeSaved];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:nsContext];
    CGPoint p = CGPointMake(0.0, 0.0);
    [[nsContext CIContext] drawImage:anImage atPoint:p fromRect:outputExtent];
    [NSGraphicsContext restoreGraphicsState];

    return [[theBitMapToBeSaved retain] autorelease];
}
And being saved as BMP this way:
NSBitmapImageRep *original = [imageTools bitmapRepOfImage:fileURL];
NSData *converted = [original representationUsingType:NSBMPFileType properties:nil];
[converted writeToFile:filePath atomically:YES];
The resulting BMP file can be read and manipulated correctly under Mac OS X, but under Windows it simply fails to load, as in this screenshot:
Screenshot: http://dl.dropbox.com/u/1661304/Grab/74a6dadb770654213cdd9290f0131880.png
If the file is opened with MS Paint (yes, MS Paint can open it) and then resaved, though, it will work.
Would appreciate a hand here. :)
Thanks in advance.
I think the main reason your code is failing is that you are creating your NSBitmapImageRep with 0 bits per pixel. That means your image rep will have precisely zero information in it. You almost certainly want 32 bits per pixel.
However, your code is an unbelievably convoluted way to obtain an NSBitmapImageRep from an image file on disk. Why on earth are you using a CIImage? That is a Core Image object designed for use with Core Image filters and makes no sense here at all. You should be using an NSImage or CGImageRef.
Your method is also poorly named. It should instead be named something like +bitmapRepForImageFileAtURL: to better indicate what it is doing.
Also, this code makes no sense:
[[theBitMapToBeSaved retain] autorelease]
Calling retain and then autorelease does nothing, because all it does is increment the retain count and then decrement it again immediately.
You are responsible for releasing theBitMapToBeSaved because you created it using alloc. Since it is being returned, you should call autorelease on it. Your additional retain call just causes a leak for no reason.
Try this:
+ (NSBitmapImageRep *)bitmapRepForImageFileAtURL:(NSURL *)imageURL
{
    NSImage *image = [[[NSImage alloc] initWithContentsOfURL:imageURL] autorelease];
    return [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
}

+ (NSData *)BMPDataForImageFileAtURL:(NSURL *)imageURL
{
    NSBitmapImageRep *bitmap = [self bitmapRepForImageFileAtURL:imageURL];
    return [bitmap representationUsingType:NSBMPFileType properties:nil];
}
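Calling it then becomes a one-liner; for example (using your fileURL and filePath, with ImageTools standing in for whatever class holds these methods):
NSData *bmpData = [ImageTools BMPDataForImageFileAtURL:fileURL]; // ImageTools is a placeholder name
[bmpData writeToFile:filePath atomically:YES];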
You really need to review the Cocoa Drawing Guide and the Memory Management Guidelines, because it appears that you are having trouble with some basic concepts.

How to create a clipping mask from an NSAttributedString?

I have an NSAttributedString which I would like to draw into a CGImage so that I can later draw the CGImage into an NSView. Here's what I have so far:
// Draw attributed string into NSImage
NSImage *cacheImage = [[NSImage alloc] initWithSize:NSMakeSize(w, h)];
[cacheImage lockFocus];
[attributedString drawWithRect:NSMakeRect(0, 0, width, height) options:0];
[cacheImage unlockFocus];

// Convert NSImage to CGImageRef
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)[cacheImage TIFFRepresentation], NULL);
CGImageRef img = CGImageSourceCreateImageAtIndex(source, 0, NULL);
I'm not using -[NSImage CGImageForProposedRect:context:hints:] because my app must use the 10.5 SDK.
When I draw this into my NSView using CGContextDrawImage, it draws a transparent background around the text, causing whatever is behind the window to show through. I think I want to create a clipping mask, but I can't figure out how to do that.
It sounds like your blend mode is set up as Copy instead of SourceOver. Take a look at the Core Graphics blend mode documentation.
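In Core Graphics terms, source-over is kCGBlendModeNormal. A sketch of forcing it on the destination context before drawing (ctx and imageRect stand in for your own variables):
CGContextSaveGState(ctx);
CGContextSetBlendMode(ctx, kCGBlendModeNormal); // source-over: composite using the image's alpha
CGContextDrawImage(ctx, imageRect, img);
CGContextRestoreGState(ctx);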

How come IKImageBrowserView can resize images so much faster than I can?

This is my image resize code:
CALayer *newCALayer = [[CALayer layer] retain];
NSImage *image = [[NSImage alloc] initWithData:[NSData dataWithContentsOfFile:path]];
CGImageRef newCGImageFullResolution = [image CGImageForProposedRect:nil context:nil hints:nil];

CGContextRef context = CGBitmapContextCreate(NULL, drawRect.size.width, drawRect.size.height,
                                             CGImageGetBitsPerComponent(newCGImageFullResolution),
                                             CGImageGetBytesPerRow(newCGImageFullResolution),
                                             CGImageGetColorSpace(newCGImageFullResolution),
                                             CGImageGetAlphaInfo(newCGImageFullResolution));
CGContextDrawImage(context, CGRectMake(0, 0, drawRect.size.width, drawRect.size.height), newCGImageFullResolution);
CGImageRef scaledImage = CGBitmapContextCreateImage(context);

newCALayer.contents = (id)scaledImage;
CGImageRelease(scaledImage);
newCALayer.contentsGravity = kCAGravityResizeAspect;
newCALayer.opacity = 0.0;
newCALayer.anchorPoint = CGPointMake(0.0f, 0.0f);
newCALayer.frame = CGRectMake(0.0,
                              0.0,
                              [Singleton sharedSingleton].fullscreenRect.size.width,
                              [Singleton sharedSingleton].fullscreenRect.size.height);
[newCALayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable];
//CGImageRelease(cgImageFullResolution); (bonus points if you can explain why I can't release this! I mean, I can release the scaled image ok??)

CGContextRelease(context);
[image release];
I am doing all of this from a background thread in order to preload pictures so my GUI feels snappy. It took some work getting the synchronization and whatnot set up so the CALayers end up in the view.
But I believe the term for describing how fast this is would be "it's a dog".
Compared to IKImageBrowserView - that thing flings up thumbnails of images faster than I can scroll.
Does anybody have some suggestions for how to handle this better than I am doing it now?
In other words, my problem is that I want to have a super-fast UX. I believe the way to accomplish this is by preloading things to CALayers (this may be wrong? I tried NSImageView and some IK-stuff, but at least CALayer is better than that).
ImageKit is probably using CGImageSourceCreateThumbnailAtIndex() to quickly get an image appropriate to the destination, rather than reading in the entire image file.
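A sketch of that approach (fileURL and maxPixelSize are placeholder names; the thumbnail is decoded at roughly the requested size instead of full resolution):
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)fileURL, NULL);
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
    (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageAlways,
    [NSNumber numberWithInt:maxPixelSize], (id)kCGImageSourceThumbnailMaxPixelSize,
    nil];
CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0, (CFDictionaryRef)options);
CFRelease(source);
// thumbnail follows the Create rule; CGImageRelease() it when you are done.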
Alternatively, sticking with NSImage:
NSImage *image = [[[NSImage alloc] initWithContentsOfFile:path] autorelease];
[image setScalesWhenResized:YES]; // *
[image setDataRetained:YES]; // *
[image setSize:desiredNewSize];
Then use the image as it is.
As for why your app is slow, run it under Instruments. That will tell you specifically where you are spending the majority of the processor time you use—it may not be in your scaling code after all.
*Since 10.6, these messages do nothing useful and are deprecated, so you can omit them if you are requiring Snow Leopard or later.
