I wish to overlay one CGImage over another.
As an example, the first CGImage is 1024x768 and I want to overlay a second, 100x100 CGImage at a given location.
I have seen how to do this using NSImage, but I don't really want to convert my CGImages to NSImages, do the overlay, and then convert the result back to a CGImage. I have also seen iOS versions of this code, but I'm unsure how to go about it on the Mac.
I'm mostly used to iOS, so I might be out of my depth here, but assuming you have a graphics context (sized like the larger of the two images), can't you just draw the two CGImages on top of each other?
CGImageRef img1024x768;   // the background image
CGImageRef img100x100;    // the image to overlay
// ctx is assumed to be a CGContextRef sized to match the larger image
CGSize imgSize = CGSizeMake(CGImageGetWidth(img1024x768), CGImageGetHeight(img1024x768));
CGRect largeBounds = CGRectMake(0, 0, imgSize.width, imgSize.height);
CGContextDrawImage(ctx, largeBounds, img1024x768);
CGRect smallBounds = CGRectMake(0, 0, CGImageGetWidth(img100x100), CGImageGetHeight(img100x100));
CGContextDrawImage(ctx, smallBounds, img100x100);
And then draw the result into an NSImage?
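Something like this, for example (just a sketch; the pixel format and the overlay offset x, y are assumptions you would adjust for your own images):
// Build an offscreen bitmap context the size of the large image.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL,
                                         CGImageGetWidth(img1024x768),
                                         CGImageGetHeight(img1024x768),
                                         8,   // bits per component
                                         0,   // let CG choose bytes per row
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Draw the large image over the whole context, then the small one at (x, y).
CGContextDrawImage(ctx, CGRectMake(0, 0, CGImageGetWidth(img1024x768), CGImageGetHeight(img1024x768)), img1024x768);
CGContextDrawImage(ctx, CGRectMake(x, y, 100, 100), img100x100);

// The composited result is itself a CGImage, so no NSImage round trip is needed.
CGImageRef composited = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
Keep in mind that Core Graphics measures (x, y) from the bottom-left corner of the context.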
In our app, we're creating a PDF-based NSImage (therefore scalable) and then using CGImage routines to write that data to a TIFF file. This works fine on non-retina Macs, but on retina machines the data that is returned is twice the resolution we expect (just like the screen).
The code we're using takes a newly created NSView subclass referencing the data to draw (not the original on-screen view) as printingMapView.
// Render the view to PDF data, wrap it in an NSImage, then ask for a CGImage.
NSData *pdfData = [printingMapView dataWithPDFInsideRect: frame];
NSImage *image = [[NSImage alloc] initWithData: pdfData];
[image setSize: size];
NSRect pRect = NSMakeRect( 0, 0, [image size].width, [image size].height);
CGImageRef cgImage = [image CGImageForProposedRect: &pRect context: NULL hints: NULL];
I have looked around for any hints that could be handed to the CGImageForProposedRect:context:hints: call, but there's nothing in the Apple documentation relating to content scale.
Is there any way to do this other than creating an NSBitmapImageRep of the full size and passing that in as the context parameter to CGImageForProposedRect:context:hints?
That seems like it's likely to use a lot of memory during the operation.
So CGImageForProposedRect: does return 1:1 pixel data. If the CGImage you get out of the function is doubled in size, the NSImageRep of that NSImage must also be doubled in size. Check your code for any calls to NSImage drawInRect: where you are drawing into a retina context; that is what was happening to me.
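A quick way to confirm that (just a sketch, using the image variable from the question's code):
// Compare the image's point size with the pixel size of its first rep;
// on the retina path the pixel dimensions come back doubled.
NSImageRep *rep = [[image representations] objectAtIndex:0];
NSLog(@"points: %@  pixels: %ld x %ld",
      NSStringFromSize([image size]),
      (long)[rep pixelsWide], (long)[rep pixelsHigh]);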
I have an NSImage. I would like to read the NSColor for a pixel at some x and y. Xcode seems to think that there is a colorAtX:y: method on NSImage, but calling it causes a crash saying that there is no such method for NSImage. I have seen some examples where you create an NSBitmapImageRep and call the same method on that, but I have not been able to successfully convert my NSImage to an NSBitmapImageRep; the pixels in the NSBitmapImageRep are different for some reason.
There must be a simple way to do this. It cannot be this complicated.
Without seeing your code it's difficult to know what's going wrong.
You can create an NSBitmapImageRep from the image's TIFFRepresentation using the initWithData: method.
You can then get the pixel value using colorAtX:y:, which is a method of NSBitmapImageRep, not NSImage:
NSBitmapImageRep* imageRep = [[NSBitmapImageRep alloc] initWithData:[yourImage TIFFRepresentation]];
NSSize imageSize = [yourImage size];
CGFloat y = imageSize.height - 100.0;  // flip y: the rep's origin is at the top left
NSColor* color = [imageRep colorAtX:100 y:(NSInteger)y];
[imageRep release];
Note that you must adjust the y value, because the colorAtX:y: method uses a coordinate system that starts at the top left of the image, whereas the NSImage coordinate system starts at the bottom left.
Alternatively, if the pixel is visible on-screen then you can use the NSReadPixel() function to get the color of a pixel in the current coordinate system.
The colorAtX:y: method of NSBitmapImageRep does not seem to use the device color space, which may lead to color values that are slightly different from what you actually see on screen. Use this code to get the correct color in the current device color space:
[yourImage lockFocus]; // yourImage is just your NSImage variable
NSColor *pixelColor = NSReadPixel(NSMakePoint(1, 1)); // Or another point
[yourImage unlockFocus];
I am creating small NSImageViews (32x32 pixels) from large images, often 512x512 or even 4096x2048 for an extreme test case. My problem is that with my extreme test case, my application's memory footprint seems to go up by over 15MB when I display my thumbnail. This makes me think the NSImage is being stored in memory as 4096x2048 instead of 32x32, and I was wondering if there is a way to avoid this. Here is the process I go through to create the NSImageView:
• First I create an NSImage using initByReferencingFile: (pointing to the 4096x2048 .png file)
• Next I initialize the NSImageView with a call to initWithFrame:
• Then I call setImage: to assign my NSImage to the NSImageView
• Finally I set the NSImageView to NSScaleProportionally
I clearly do nothing to force the NSImage to scale down to 32x32, but I have had trouble finding a good way to handle this.
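Roughly, the setup looks like this (a sketch; the file path and frame values are placeholders):
NSImage *bigImage = [[NSImage alloc] initByReferencingFile:@"/path/to/4096x2048.png"];
NSImageView *thumbView = [[NSImageView alloc] initWithFrame:NSMakeRect(0, 0, 32, 32)];
[thumbView setImage:bigImage];
[thumbView setImageScaling:NSScaleProportionally];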
You can simply create a new 32x32 NSImage from the original and then release the original image.
First, create the 32x32 image:
NSImage *smallImage = [[NSImage alloc] initWithSize:NSMakeSize(32, 32)];
Then, lock focus on the image and draw the original on to it:
NSSize originalSize = [originalImage size];
NSRect fromRect = NSMakeRect(0, 0, originalSize.width, originalSize.height);
[smallImage lockFocus];
[originalImage drawInRect:NSMakeRect(0, 0, 32, 32) fromRect:fromRect operation:NSCompositeCopy fraction:1.0f];
[smallImage unlockFocus];
Then you may do as you please with the smaller image:
[imageView setImage:smallImage];
Remember to release!
[originalImage release];
[smallImage release];
I have an NSAttributedString which I would like to draw into a CGImage so that I can later draw the CGImage into an NSView. Here's what I have so far:
// Draw attributed string into NSImage
NSImage* cacheImage = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[cacheImage lockFocus];
[attributedString drawWithRect:NSMakeRect(0, 0, width, height) options:0];
[cacheImage unlockFocus];
// Convert NSImage to CGImageRef
CGImageSourceRef source = CGImageSourceCreateWithData(
    (CFDataRef)[cacheImage TIFFRepresentation], NULL);
CGImageRef img = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CFRelease(source); // the source is no longer needed once the CGImage has been created
I'm not using -[NSImage CGImageForProposedRect:context:hints:] because my app must use the 10.5 SDK.
When I draw this into my NSView using CGContextDrawImage, it draws a transparent background around the text, causing whatever is behind the window to show through. I think I want to create a clipping mask, but I can't figure out how to do that.
It sounds like your blend mode is set up as Copy instead of SourceOver. Take a look at the Core Graphics blend mode documentation.
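For instance, when you draw the CGImage into your view (just a sketch; ctx, img, w and h stand for the context, image and size from your code):
CGContextSaveGState(ctx);
// kCGBlendModeNormal is ordinary source-over compositing; kCGBlendModeCopy
// replaces whatever is underneath, letting the transparent background punch through.
CGContextSetBlendMode(ctx, kCGBlendModeNormal);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), img);
CGContextRestoreGState(ctx);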
I have imageA (taken from the user's iPhone camera) and imageB, an image with a silly border (for example) and plenty of transparent alpha space.
What I would like to do is merge these two images, laying imageB over imageA, and then save the result as imageC for other work.
Is there a way to do this?
Cheers
I've got this so far:
-(void)merge
{
    CGSize size = CGSizeMake(320, 480);
    UIGraphicsBeginImageContext(size);

    // Draw the camera image first...
    CGPoint thumbPoint = CGPointMake(0, 0);
    UIImage *imageA = imageView.image;
    [imageA drawAtPoint:thumbPoint];

    // ...then the overlay on top of it.
    UIImage *starred = [UIImage imageNamed:@"imageB.png"];
    CGPoint starredPoint = CGPointMake(0, 0);
    [starred drawAtPoint:starredPoint];

    UIImage *imageC = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    imageView.image = imageC;
}
I can't see/don't know what I'm doing wrong here.
That code looks correct (though I would recommend converting it to use a created CGBitmapContext for thread-safety), but is imageB supposed to be a JPEG? JPEGs don't support transparency, so, in order for the blending to work, it should really be a PNG.
Note that for retina support you should use: UIGraphicsBeginImageContextWithOptions(size, YES, 0); // 0 means let iOS deal with scale for you
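Putting that together, the merge might look like this (just a sketch; imageA, starred and size come from the method above, and the PNG step at the end is only one illustrative way to keep imageC around for later work):
UIGraphicsBeginImageContextWithOptions(size, YES, 0); // 0 = use the device's scale
[imageA drawAtPoint:CGPointZero];   // camera image first
[starred drawAtPoint:CGPointZero];  // overlay with the transparent border on top
UIImage *imageC = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *pngData = UIImagePNGRepresentation(imageC); // PNG keeps the result lossless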