How to DrawImage on OSX with CGContext - macos

I have a CGContext and I tried to draw an image on it.
I have an NSImage * that I load and use in another method:
NSImage *check = [[NSImage alloc] initWithContentsOfFile:@"check.icns"];
CGContextDrawImage(arg2, CGRectMake(0, 0, 145, 15), check);
But CGContextDrawImage wants a CGImageRef. How can I use my NSImage * here?
Thanks

You should consider just opening it as a CGImage in this case, if that is all you will need it for. If an NSImage is what you need, then see -[NSImage CGImageForProposedRect:context:hints:]. This method may not require a copy, and it can produce a CGImage representation ideal for drawing into the destination context.
If needed, you can create a NSGraphicsContext from a CGContext using +[NSGraphicsContext graphicsContextWithGraphicsPort:flipped:].
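Putting that together for the code in the question, a minimal sketch (assuming arg2 is the destination CGContextRef from the question):
NSImage *check = [[NSImage alloc] initWithContentsOfFile:@"check.icns"];
NSRect drawRect = NSMakeRect(0, 0, 145, 15);
// Ask the NSImage for a CGImage representation suited to the destination rect.
CGImageRef cgCheck = [check CGImageForProposedRect:&drawRect context:nil hints:nil];
if (cgCheck != NULL) {
    CGContextDrawImage(arg2, NSRectToCGRect(drawRect), cgCheck);
}
[check release];
Note that the returned CGImage is not owned by you, so it does not need a CGImageRelease.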

Related

How to get CGImageForProposedRect to provide 1:1 pixel data on Retina Mac

In our app, we're creating a PDF-based NSImage (therefore scalable) and then using CGImage routines to write that data to a TIFF file. This works fine on non-Retina Macs, but on Retina machines the data that comes back is at twice the resolution we expect (just like the screen).
The code we're using takes a newly created NSView subclass that references the data to draw (not the original on-screen view) as printingMapView.
NSData *pdfData = [printingMapView dataWithPDFInsideRect: frame];
NSImage *image = [[NSImage alloc] initWithData: pdfData];
[image setSize: size];
NSRect pRect = NSMakeRect( 0, 0, [image size].width, [image size].height);
CGImageRef cgImage = [image CGImageForProposedRect: &pRect context: NULL hints:NULL];
I have looked around for any hints that could be handed to the CGImageForProposedRect:context:hints: call, but there's nothing in the Apple documentation relating to content scale.
Is there any way to do this other than creating an NSBitmapImageRep of the full size and passing that in as the context parameter to CGImageForProposedRect:context:hints?
That seems like it's likely to use a lot of memory during the operation.
CGImageForProposedRect:context:hints: does return 1:1 pixel data. If the CGImage you get back is doubled in size, then the NSImageRep backing that NSImage must itself be doubled in size. Check your code for any calls to -[NSImage drawInRect:] where you are drawing into a Retina context; that is what was happening in my case.
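If you do end up needing to force an exact pixel size, the NSBitmapImageRep approach the question mentions looks roughly like this (a sketch, not part of the original answer; image and size are the NSImage and point size from the question's code):
// Create a rep whose pixel dimensions match the point size exactly (1:1).
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:(NSInteger)size.width
                  pixelsHigh:(NSInteger)size.height
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];
// Draw the scalable image into the rep, then pull a CGImage out of it.
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[image drawInRect:NSMakeRect(0, 0, size.width, size.height)
         fromRect:NSZeroRect
        operation:NSCompositeCopy
         fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
CGImageRef cgImage = [rep CGImage]; // owned by the rep; retain it if you keep it around
This does allocate a full bitmap at the target size, but only at the size you actually want, not at the screen's backing scale.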

cocoa: Read pixel color of NSImage

I have an NSImage. I would like to read the NSColor for a pixel at some x and y. Xcode seems to think that there is a colorAtX:y: method on NSImage, but calling it crashes with an error saying that no such method exists on NSImage. I have seen some examples where you create an NSBitmapImageRep and call the same method on that, but I have not been able to successfully convert my NSImage to an NSBitmapImageRep; the pixels on the NSBitmapImageRep are different for some reason.
There must be a simple way to do this. It cannot be this complicated.
Without seeing your code it's difficult to know what's going wrong.
You can create an NSBitmapImageRep from the image using the initWithData: method, passing in the image's TIFFRepresentation.
You can then read a pixel value using colorAtX:y:, which is a method of NSBitmapImageRep, not NSImage:
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithData:[yourImage TIFFRepresentation]];
NSSize imageSize = [yourImage size];
// Flip the y value: the rep's coordinates start at the top left (see the note below).
NSInteger y = (NSInteger)(imageSize.height - 100.0);
NSColor *color = [imageRep colorAtX:100 y:y];
[imageRep release];
Note that you must adjust the y value because the colorAtX:y: method uses a coordinate system that starts at the top left of the image, whereas the NSImage coordinate system starts at the bottom left.
Alternatively, if the pixel is visible on-screen then you can use the NSReadPixel() function to get the color of a pixel in the current coordinate system.
NSBitmapImageRep's colorAtX:y: does not appear to use the device color space, which may lead to color values that are slightly different from what you actually see on screen. Use this code to get the color in the current device color space:
[yourImage lockFocus]; // yourImage is just your NSImage variable
NSColor *pixelColor = NSReadPixel(NSMakePoint(1, 1)); // Or another point
[yourImage unlockFocus];

Creating memory efficient thumbnails using an NSImageView (cocoa/OSX)

I am creating small NSImageViews (32x32 pixels) from large images, often 512x512 or even 4096x2048 in an extreme test case. My problem is that with the extreme test case, my application's memory footprint goes up by over 15MB when I display the thumbnail. This makes me think the NSImage is being kept in memory at 4096x2048 instead of 32x32, and I was wondering if there is a way to avoid that. Here is the process I go through to create the NSImageView:
• First I create an NSImage using initByReferencingFile: (pointing to the 4096x2048 .png file)
• Next I initialize the NSImageView with a call to initWithFrame:
• Then I call setImage: to assign my NSImage to the NSImageView
• Finally I set the NSImageView to NSScaleProportionally
I clearly do nothing to force the NSImage down to 32x32, but I have had trouble finding a good way to do that.
You can simply create a new 32x32 NSImage from the original and then release the original image.
First, create the 32x32 image:
NSImage *smallImage = [[NSImage alloc] initWithSize:NSMakeSize(32, 32)];
Then, lock focus on the image and draw the original on to it:
NSSize originalSize = [originalImage size];
NSRect fromRect = NSMakeRect(0, 0, originalSize.width, originalSize.height);
[smallImage lockFocus];
[originalImage drawInRect:NSMakeRect(0, 0, 32, 32) fromRect:fromRect operation:NSCompositeCopy fraction:1.0f];
[smallImage unlockFocus];
Then you may do as you please with the smaller image:
[imageView setImage:smallImage];
Remember to release!
[originalImage release];
[smallImage release];
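If you also want to preserve the aspect ratio (the NSScaleProportionally behaviour the question relied on) rather than stretching to 32x32, the drawing step could be varied like this (a sketch, not part of the original answer):
NSSize originalSize = [originalImage size];
CGFloat scale = MIN(32.0 / originalSize.width, 32.0 / originalSize.height);
NSSize scaledSize = NSMakeSize(originalSize.width * scale, originalSize.height * scale);
// Centre the proportionally scaled image inside the 32x32 canvas.
NSRect targetRect = NSMakeRect((32.0 - scaledSize.width) / 2.0,
                               (32.0 - scaledSize.height) / 2.0,
                               scaledSize.width,
                               scaledSize.height);
[smallImage lockFocus];
[originalImage drawInRect:targetRect
                 fromRect:NSZeroRect   // NSZeroRect means "the whole image"
                operation:NSCompositeSourceOver
                 fraction:1.0];
[smallImage unlockFocus];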

How to create a clipping mask from an NSAttributedString?

I have an NSAttributedString which I would like to draw into a CGImage so that I can later draw the CGImage into an NSView. Here's what I have so far:
// Draw attributed string into NSImage
NSImage* cacheImage = [[NSImage alloc] initWithSize:NSMakeSize(w, h)];
[cacheImage lockFocus];
[attributedString drawWithRect:NSMakeRect(0, 0, width, height) options:0];
[cacheImage unlockFocus];
// Convert NSImage to CGImageRef
CGImageSourceRef source = CGImageSourceCreateWithData(
(CFDataRef)[cacheImage TIFFRepresentation], NULL);
CGImageRef img = CGImageSourceCreateImageAtIndex(source, 0, NULL);
I'm not using -[NSImage CGImageForProposedRect:context:hints:] because my app must use the 10.5 SDK.
When I draw this into my NSView using CGContextDrawImage, it draws a transparent background around the text, causing whatever is behind the window to show through. I think I want to create a clipping mask, but I can't figure out how to do that.
It sounds like your blend mode is set up as Copy instead of SourceOver. Take a look at the Core Graphics blend mode documentation.
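A minimal sketch of what that fix might look like at the draw site (assuming ctx is the CGContextRef your NSView draws into, and img, width, and height come from the question's code):
// Composite the text image over whatever is already in the view,
// instead of replacing it (and its alpha) outright.
CGContextSaveGState(ctx);
CGContextSetBlendMode(ctx, kCGBlendModeNormal); // "source over"
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), img);
CGContextRestoreGState(ctx);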

How come IKImageBrowserView can resize images so much faster than I can?

This is my image resize code:
CALayer *newCALayer = [[CALayer layer] retain];
NSImage* image = [[NSImage alloc] initWithData:[NSData dataWithContentsOfFile:path]];
CGImageRef newCGImageFullResolution = [image CGImageForProposedRect:nil context:nil hints:nil];
CGContextRef context = CGBitmapContextCreate(NULL, drawRect.size.width, drawRect.size.height,
CGImageGetBitsPerComponent(newCGImageFullResolution),
CGImageGetBytesPerRow(newCGImageFullResolution),
CGImageGetColorSpace(newCGImageFullResolution),
CGImageGetAlphaInfo(newCGImageFullResolution));
CGContextDrawImage(context, CGRectMake(0, 0, drawRect.size.width, drawRect.size.height), newCGImageFullResolution);
CGImageRef scaledImage = CGBitmapContextCreateImage(context);
newCALayer.contents = (id)scaledImage;
CGImageRelease(scaledImage);
newCALayer.contentsGravity = kCAGravityResizeAspect;
newCALayer.opacity = 0.0;
newCALayer.anchorPoint = CGPointMake(0.0f,0.0f);
newCALayer.frame = CGRectMake( 0.0,
0.0,
[Singleton sharedSingleton].fullscreenRect.size.width,
[Singleton sharedSingleton].fullscreenRect.size.height);
[newCALayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable];
//CGImageRelease(cgImageFullResolution); (bonus points if you can explain why I can't release this! I mean, I can release the scaled image ok??)
CGContextRelease(context);
[image release];
I am doing all of this from a background thread in order to preload pictures so my GUI feels snappy. It took some work getting the synchronization and whatnot set up so that the CALayers end up in the view.
But I believe the term for describing how fast this is would be "it's a dog".
Compared to IKImageView, that thing flings up thumbnails of images faster than I can scroll.
Does anybody have some suggestions for how to handle this better than I am doing it now?
In other words, my problem is that I want a super-fast UX. I believe the way to accomplish this is by preloading things into CALayers (this may be wrong; I tried NSImageView and some ImageKit classes, but at least CALayer is better than those).
ImageKit is probably using CGImageSourceCreateThumbnailAtIndex() to quickly get an image appropriate to the destination, rather than reading in the entire image file.
Here:
NSImage *image = [[[NSImage alloc] initWithContentsOfFile:path] autorelease];
[image setScalesWhenResized:YES]; // *
[image setDataRetained:YES]; // *
[image setSize:desiredNewSize];
Then use the image as it is.
As for why your app is slow, run it under Instruments. That will tell you specifically where you are spending the majority of the processor time you use—it may not be in your scaling code after all.
*Since 10.6, these messages do nothing useful and are deprecated, so you can omit them if you are requiring Snow Leopard or later.
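For reference, calling CGImageSourceCreateThumbnailAtIndex() yourself looks roughly like this (a sketch, assuming path, drawRect, and newCALayer come from the question's code; requires the ImageIO framework):
#import <ImageIO/ImageIO.h>

// Let ImageIO decode straight to a thumbnail no larger than the layer,
// instead of decoding the full-resolution bitmap first.
NSURL *url = [NSURL fileURLWithPath:path];
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
    (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageAlways,
    [NSNumber numberWithDouble:MAX(drawRect.size.width, drawRect.size.height)],
        (id)kCGImageSourceThumbnailMaxPixelSize,
    nil];
CGImageRef scaledImage = CGImageSourceCreateThumbnailAtIndex(source, 0, (CFDictionaryRef)options);
newCALayer.contents = (id)scaledImage;
CGImageRelease(scaledImage);
CFRelease(source);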
