Read pixel color of NSImage (Cocoa)

I have an NSImage. I would like to read the NSColor for a pixel at some x and y. Xcode seems to think that there is a colorAtX:y: method on NSImage, but calling it crashes at runtime with a message saying that there is no such method on NSImage. I have seen some examples where you create an NSBitmapImageRep and call the same method on that, but I have not been able to successfully convert my NSImage to an NSBitmapImageRep. The pixels on the NSBitmapImageRep are different for some reason.
There must be a simple way to do this. It cannot be this complicated.

Without seeing your code it's difficult to know what's going wrong.
You can create an NSBitmapImageRep from the image using the initWithData: method, passing in the image's TIFFRepresentation.
You can then get the pixel value using the method colorAtX:y:, which is a method of NSBitmapImageRep, not NSImage:
NSBitmapImageRep* imageRep = [[NSBitmapImageRep alloc] initWithData:[yourImage TIFFRepresentation]];
NSSize imageSize = [yourImage size];
// colorAtX:y: takes NSInteger pixel coordinates, counted from the top-left corner
NSInteger y = (NSInteger)imageSize.height - 100;
NSColor* color = [imageRep colorAtX:100 y:y];
[imageRep release];
Note that you must adjust the y value because the colorAtX:y: method uses a coordinate system whose origin is at the top left of the image, whereas the NSImage coordinate system has its origin at the bottom left.
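For convenience you can wrap the conversion and the flip in a small helper. This is just a sketch: the function name imageColorAtPoint is made up here, and it assumes the bitmap rep's pixel size matches the image's point size (i.e. no Retina scaling).
NSColor* imageColorAtPoint(NSImage* image, NSPoint point) // point uses NSImage's bottom-left origin
{
    NSBitmapImageRep* rep = [[NSBitmapImageRep alloc] initWithData:[image TIFFRepresentation]];
    if (rep == nil)
        return nil;
    // Flip y into the rep's top-left-origin coordinate system.
    NSInteger x = (NSInteger)point.x;
    NSInteger y = (NSInteger)([image size].height - point.y - 1.0);
    NSColor* color = [rep colorAtX:x y:y];
    [rep release];
    return color;
}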
Alternatively, if the pixel is visible on-screen then you can use the NSReadPixel() function to get the color of a pixel in the current coordinate system.

The colorAtX:y: method of NSBitmapImageRep seems not to use the device color space, which may lead to color values that are slightly different from what you actually see. Use this code to get the correct color in the current device color space:
[yourImage lockFocus]; // yourImage is just your NSImage variable
NSColor *pixelColor = NSReadPixel(NSMakePoint(1, 1)); // Or another point
[yourImage unlockFocus];


How to get CGImageForProposedRect to provide 1:1 pixel data on Retina Mac

In our app, we're creating an NSImage from PDF data (therefore scalable) and then using CGImage routines to write that data to a TIFF file. This works fine on non-Retina-display Macintoshes, but on Retina machines, the data that is returned is twice the resolution we expect (just like the screen).
The code we're using takes a newly formed NSView subclass referencing the data to draw (not the original on-screen view) as printingMapView:
NSData *pdfData = [printingMapView dataWithPDFInsideRect: frame];
NSImage *image = [[NSImage alloc] initWithData: pdfData];
[image setSize: size];
NSRect pRect = NSMakeRect( 0, 0, [image size].width, [image size].height);
CGImageRef cgImage = [image CGImageForProposedRect: &pRect context: NULL hints:NULL];
I have looked around for any hints that could be handed to the CGImageForProposedRect:context:hints: call, but there's nothing in the Apple documentation relating to content scale.
Is there any way to do this other than creating an NSBitmapImageRep of the full size and passing that in as the context parameter to CGImageForProposedRect:context:hints:?
That seems likely to use a lot of memory during the operation.
So CGImageForProposedRect:context:hints: does return 1:1 pixel data. If you are getting a CGImage out of the function that is doubled in size, then the NSImageRep of that NSImage must also be doubled in size. Check your code for any calls to NSImage's drawInRect: where you are drawing into a Retina context. That is what was happening to me.
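If you do end up needing the fallback the question mentions, here is a minimal sketch of rendering into an NSBitmapImageRep with explicit pixel dimensions, so the result is 1:1 regardless of the screen's backing scale factor. It reuses image and size from the question's code above; everything else is an assumption, not a tested implementation.
NSBitmapImageRep* rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
    pixelsWide:(NSInteger)size.width
    pixelsHigh:(NSInteger)size.height
    bitsPerSample:8
    samplesPerPixel:4
    hasAlpha:YES
    isPlanar:NO
    colorSpaceName:NSCalibratedRGBColorSpace
    bytesPerRow:0
    bitsPerPixel:0];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
// Drawing through the rep's own context sidesteps the screen's backing scale.
[image drawInRect:NSMakeRect(0, 0, size.width, size.height)
    fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
CGImageRef cgImage = [rep CGImage]; // owned by the rep; CGImageRetain it if you keep it longer
[rep autorelease];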

How to create a clipping mask from an NSAttributedString?

I have an NSAttributedString which I would like to draw into a CGImage so that I can later draw the CGImage into an NSView. Here's what I have so far:
// Draw attributed string into NSImage
NSImage* cacheImage = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[cacheImage lockFocus];
[attributedString drawWithRect:NSMakeRect(0, 0, width, height) options:0];
[cacheImage unlockFocus];
// Convert NSImage to CGImageRef
CGImageSourceRef source = CGImageSourceCreateWithData(
    (CFDataRef)[cacheImage TIFFRepresentation], NULL);
CGImageRef img = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CFRelease(source); // release the image source once the CGImage has been created
I'm not using -[NSImage CGImageForProposedRect:context:hints:] because my app must use the 10.5 SDK.
When I draw this into my NSView using CGContextDrawImage, it draws a transparent background around the text, causing whatever is behind the window to show through. I think I want to create a clipping mask, but I can't figure out how to do that.
It sounds like your blend mode is set to Copy instead of SourceOver. Take a look at the Core Graphics blend mode documentation.
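For reference, a small sketch of explicitly selecting source-over compositing before drawing. Here ctx is assumed to be the CGContextRef you draw into, and img is the CGImageRef created in the question's code; width and height are the same dimensions used there.
CGContextSaveGState(ctx);
CGContextSetBlendMode(ctx, kCGBlendModeNormal); // kCGBlendModeNormal is source-over
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), img);
CGContextRestoreGState(ctx);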

How to: NSAttributedString to CGImageRef

I'm writing a QuickLook plugin. Well, everything works. I just want to try to make it better ;).
Thus the question.
Here is a function that returns thumbnail image and that I'm using now.
QLThumbnailRequestSetImageWithData(
    QLThumbnailRequestRef thumbnail,
    CFDataRef data,
    CFDictionaryRef properties);
http://developer.apple.com/mac/library/documentation/UserExperience/Reference/QLThumbnailRequest_Ref/Reference/reference.html#//apple_ref/c/func/QLThumbnailRequestSetImageWithData
Right now I'm creating a TIFF and encapsulating it in NSData. An example:
// Setting CFDataRef
CGSize thumbnailMaxSize = QLThumbnailRequestGetMaximumSize(thumbnail);
NSMutableAttributedString *attributedString = [[[NSMutableAttributedString alloc]
    initWithString:@"dummy"
    attributes:[NSDictionary dictionaryWithObjectsAndKeys:
        [NSFont fontWithName:@"Monaco" size:10], NSFontAttributeName,
        [NSColor colorWithCalibratedRed:0.0 green:0.0 blue:0.0 alpha:1.0], NSForegroundColorAttributeName,
        nil]
    ] autorelease];
NSImage *thumbnailImage = [[[NSImage alloc] initWithSize:NSMakeSize(thumbnailMaxSize.width, thumbnailMaxSize.height)] autorelease];
[thumbnailImage lockFocus];
[[NSColor whiteColor] set];
NSRectFill(NSMakeRect(0, 0, thumbnailMaxSize.width, thumbnailMaxSize.height));
[attributedString drawInRect:NSMakeRect(0, 0, thumbnailMaxSize.width, thumbnailMaxSize.height)];
[thumbnailImage unlockFocus];
(CFDataRef)[thumbnailImage TIFFRepresentation]; // This is data
// Setting CFDictionaryRef
(CFDictionaryRef)[NSDictionary dictionaryWithObjectsAndKeys:@"kUTTypeTIFF", (NSString *)kCGImageSourceTypeIdentifierHint, nil]; // this is properties
However QuickLook provides another function to return thumbnail image, namely
QLThumbnailRequestSetImage(
    QLThumbnailRequestRef thumbnail,
    CGImageRef image,
    CFDictionaryRef properties);
http://developer.apple.com/mac/library/documentation/UserExperience/Reference/QLThumbnailRequest_Ref/Reference/reference.html#//apple_ref/c/func/QLThumbnailRequestSetImage
I have a feeling that passing CGImage to the QL instead of TIFF data would help in speeding things up.
However, I have never worked with a CG context before. I know, the documentation is there :), but anyway, could anyone give an example of how to turn that NSAttributedString into a CGImageRef? An example is worth ten readings of the documentation ;)
Any help appreciated. Thanks in advance!
could anyone give an example of how to turn that NSAttributedString into a CGImageRef.
You can't turn a string into an image; they're two completely different kinds of data, and one is two-dimensional (characters over time) while the other is at-least-three-dimensional (color over x and y).
What you need to do is draw the string and produce an image of the drawing. That's what you're doing now with NSImage: Creating an image and drawing the string into it.
You're asking about creating a CGImage. Creating a bitmap context, using Core Text to draw the string into it, and creating an image of the contents of the bitmap context is one way to do that.
However, you're already much closer to another solution, assuming you can require Snow Leopard. Instead of asking the NSImage for a TIFF representation, ask it for a CGImage.
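To make the Core Text route concrete, here is a minimal sketch, not a drop-in QuickLook implementation: it reuses attributedString and thumbnailMaxSize from the question's code, draws a single line with an ad-hoc baseline position, and needs the CoreText framework (part of ApplicationServices on 10.5).
size_t width = (size_t)thumbnailMaxSize.width;
size_t height = (size_t)thumbnailMaxSize.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
    (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// White background, as in the NSImage version above.
CGContextSetRGBFillColor(ctx, 1.0, 1.0, 1.0, 1.0);
CGContextFillRect(ctx, CGRectMake(0, 0, width, height));
// Lay out and draw one line of text with Core Text.
CTLineRef line = CTLineCreateWithAttributedString((CFAttributedStringRef)attributedString);
CGContextSetTextPosition(ctx, 2.0, height - 12.0); // ad-hoc baseline position
CTLineDraw(line, ctx);
CFRelease(line);
CGImageRef cgImage = CGBitmapContextCreateImage(ctx); // pass to QLThumbnailRequestSetImage, then CGImageRelease
CGContextRelease(ctx);
Alternatively, if you can require Snow Leopard, you can keep the existing NSImage code and simply call [thumbnailImage CGImageForProposedRect:NULL context:nil hints:nil] to get the CGImageRef directly.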

Problem exporting NSOpenGLView pixel data to some image file formats using ImageKit & CGImageDestination

Summary: exporting pixel data from NSOpenGLView to some file formats gives incorrect colours
I am developing an application to visualise some experimental data. One of its functions is to render the data in an NSOpenGLView subclass, and allow the resulting image to be exported to a file or copied to the clipboard.
The view exports the data as an NSImage, generated like this:
- (NSImage*) image
{
    NSBitmapImageRep* imageRep;
    NSImage* image;
    NSSize viewSize = [self bounds].size;
    int width = viewSize.width;
    int height = viewSize.height;

    [self lockFocus];
    [self drawRect:[self bounds]];
    [self unlockFocus];

    imageRep = [[[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                        pixelsWide:width
                                                        pixelsHigh:height
                                                     bitsPerSample:8
                                                   samplesPerPixel:4
                                                          hasAlpha:YES
                                                          isPlanar:NO
                                                    colorSpaceName:NSDeviceRGBColorSpace
                                                       bytesPerRow:width*4
                                                      bitsPerPixel:32] autorelease];

    [[self openGLContext] makeCurrentContext];
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, [imageRep bitmapData]);

    image = [[[NSImage alloc] initWithSize:NSMakeSize(width, height)] autorelease];
    [image addRepresentation:imageRep];
    [image setFlipped:YES]; // this is deprecated in 10.6
    [image lockFocusOnRepresentation:imageRep]; // this will flip the rep
    [image unlockFocus];

    return image;
}
Copying uses this image very simply, like this:
- (IBAction) copy:(id) sender
{
    NSImage* img = [self image];
    NSPasteboard* pb = [NSPasteboard generalPasteboard];
    [pb clearContents];
    NSArray* copied = [NSArray arrayWithObject:img];
    [pb writeObjects:copied];
}
For file writing, I use the ImageKit IKSaveOptions accessory panel to set the output file type and associated options, then use the following code to do the writing:
NSImage* glImage = [glView image];
NSRect rect = [glView bounds];
rect.origin.x = rect.origin.y = 0;
img = [glImage CGImageForProposedRect:&rect
                              context:[NSGraphicsContext currentContext]
                                hints:nil];
if (img)
{
    NSURL* url = [NSURL fileURLWithPath:path];
    CGImageDestinationRef dest = CGImageDestinationCreateWithURL((CFURLRef)url,
                                                                 (CFStringRef)newUTType,
                                                                 1,
                                                                 NULL);
    if (dest)
    {
        CGImageDestinationAddImage(dest,
                                   img,
                                   (CFDictionaryRef)[imgSaveOptions imageProperties]);
        CGImageDestinationFinalize(dest);
        CFRelease(dest);
    }
}
(I've trimmed a bit of extraneous code here, but nothing that would affect the outcome as far as I can see. The newUTType comes from the IKSaveOptions panel.)
This works fine when the file is exported as GIF, JPEG, PNG, PSD or TIFF, but exporting to PDF, BMP, TGA, ICNS and JPEG-2000 produces a red colour artefact on part of the image. Example images are below, the first exported as JPG, the second as PDF.
[Example images omitted: the first exported as JPG (correct), the second as PDF (showing the red stripe artefact); source: walkytalky.net]
Copy to clipboard does not exhibit this red stripe with the current implementation of image, but it did with the original implementation, which generated the imageRep using NSCalibratedRGBColorSpace rather than NSDeviceRGBColorSpace. So I'm guessing there's some issue with the colour representation in the pixels I get from OpenGL that doesn't get through the subsequent conversions properly, but I'm at a loss as to what to do about it.
So, can anyone tell me (i) what is causing this, and (ii) how can I make it go away? I don't care so much about all of the formats but I'd really like at least PDF to work.
OK. As evidenced by the deafening silence which met this question, the problem turns out to be a bit obscure. But the workaround is nice and simple, so I'm describing it here just in case anyone ever wants to know.
Summary: some file export formats do not cope well with translucency in the rendered pixels.
I don't understand the exact reasons for this, although it might possibly have something to do with the presence or absence of alpha pre-multiplication. All the formats seem to be fine with completely transparent pixels, rendering them either transparent or as white if the format doesn't support transparency. But pixels that have a partial alpha, plus something in the colour channels, may get mangled.
As it happens, I did not even want any parts of the image to be translucent, and indeed set glDisable(GL_BLEND) before the relevant rendering code. However, objects were rendered with the materials from this seemingly-canonical collection at the OpenGL home site, some of which include alpha values other than 1.0 in their specular, diffuse and ambient colours. I had slavishly copied this without paying attention to the fact that it might lead to some unwanted translucency.
For my purposes, then, the solution is straightforward: change the material definitions so that the alpha component is always 1.0.
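For example, a sketch of a corrected material definition; the RGB values here are illustrative, not the actual numbers from that collection:
GLfloat mat_ambient[]  = { 0.25f, 0.20f, 0.07f, 1.0f }; // alpha pinned to 1.0
GLfloat mat_diffuse[]  = { 0.75f, 0.61f, 0.23f, 1.0f };
GLfloat mat_specular[] = { 0.63f, 0.56f, 0.37f, 1.0f };
glMaterialfv(GL_FRONT, GL_AMBIENT,  mat_ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE,  mat_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);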
Note that some image formats, such as PNG and TIFF, do fully support the translucency, so if you need that then those are the ones to go for.
This was, in fact, what tipped me off to the answer. However, it was not obvious at first because I was using OS X Preview to view the files, and the translucency is not obvious with the default view settings:
[Screenshots omitted: the exported files viewed in OS X Preview with default settings, where the translucency is not apparent; source: walkytalky.net]
So, a second lesson from this whole episode is: enable View | Show Image Background in Preview to get the checkerboard and show up any stray transparency.

How do I draw an NSString at an angle?

Is there a set of string attributes I can specify that will draw the text at an angle when I call:
[label drawAtPoint:textStart withAttributes:attributes];
Here's an example that uses a transform to rotate the drawing context. Essentially it's just like setting a color or shadow; just make sure to use -concat instead of -set.
CGFloat rotateDeg = 4.0f;
NSAffineTransform *rotate = [[NSAffineTransform alloc] init];
[rotate rotateByDegrees:rotateDeg];
[rotate concat];
// Lock focus if needed and draw strings, images here.
[rotate release];
NSString itself doesn't have rotation, but you can rotate the context. The string will always be drawn "horizontally" as far as the coordinate space goes, but what actual direction that corresponds to depends on the context. Just use NSAffineTransform to spin it as needed.
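If you want the rotation to pivot around the point where the string is drawn rather than around the origin, the usual translate-rotate-translate pattern works. A sketch, reusing textStart, label, and attributes from the question:
NSAffineTransform *rotate = [NSAffineTransform transform];
[rotate translateXBy:textStart.x yBy:textStart.y];
[rotate rotateByDegrees:4.0];
[rotate translateXBy:-textStart.x yBy:-textStart.y];
[rotate concat]; // wrap in save/restoreGraphicsState if you draw more afterwards
[label drawAtPoint:textStart withAttributes:attributes];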
