I have an NSAttributedString which I would like to draw into a CGImage so that I can later draw the CGImage into an NSView. Here's what I have so far:
// Draw attributed string into NSImage
NSImage* cacheImage = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[cacheImage lockFocus];
[attributedString drawWithRect:NSMakeRect(0, 0, width, height) options:0];
[cacheImage unlockFocus];
// Convert NSImage to CGImageRef
CGImageSourceRef source = CGImageSourceCreateWithData(
(CFDataRef)[cacheImage TIFFRepresentation], NULL);
CGImageRef img = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CFRelease(source);
I'm not using -[NSImage CGImageForProposedRect:context:hints:] because my app must use the 10.5 SDK.
When I draw this into my NSView using CGContextDrawImage, it draws a transparent background around the text, causing whatever is behind the window to show through. I think I want to create a clipping mask, but I can't figure out how to do that.
It sounds like your blend mode is set to Copy instead of SourceOver. Take a look at the Core Graphics blend mode documentation.
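For example, a minimal sketch of resetting the blend mode before drawing; ctx, img, width, and height are stand-ins for your actual context and the variables from the code above:
CGContextSaveGState(ctx);
// kCGBlendModeNormal is source-over: the image's alpha blends the text
// over what the view already drew instead of replacing it.
CGContextSetBlendMode(ctx, kCGBlendModeNormal);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), img);
CGContextRestoreGState(ctx);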
I am working on an OSX/Cocoa graphics application which (for performance reasons) I would like to render at 640x480 when the user selects "full screen" mode. For what it's worth, the content is a custom NSView which draws using OpenGL.
I understand that rather than actually change the user's resolution, it's preferable to change the backbuffer size (as explained in another SO question here: Programmatically change resolution OS X).
Following that advice, I end up with the following two methods (see below) to toggle between fullscreen and windowed. The trouble is that when I go fullscreen, the content does indeed render at 640x480 but is not scaled (i.e. it appears as if we stayed at the window's resolution and "zoomed" into a 640x480 corner of the render).
I'm probably missing something obvious here - I suppose I could translate the render according to the actual screen resolution to "center" it, but that seems overcomplicated?
- (void)goFullscreen{
    // Bounce if we're already fullscreen
    if(_isFullscreen){return;}

    // Save original size and position
    NSRect frame = [self.window.contentView frame];
    original_size = frame.size;
    original_position = frame.origin;

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:NO], NSFullScreenModeAllScreens,
                             nil];

    // In lieu of changing resolution, we set the backbuffer to 640x480
    GLint dim[2] = {640, 480};
    CGLSetParameter([[self openGLContext] CGLContextObj], kCGLCPSurfaceBackingSize, dim);
    CGLEnable([[self openGLContext] CGLContextObj], kCGLCESurfaceBackingSize);

    // Go fullscreen!
    [self enterFullScreenMode:[NSScreen mainScreen] withOptions:options];
    _isFullscreen = true;
}
- (void)goWindowed{
    // Bounce if we're already windowed
    if(!_isFullscreen){return;}

    // Reset backbuffer to the original window size
    GLint dim[2] = {original_size.width, original_size.height};
    CGLSetParameter([[self openGLContext] CGLContextObj], kCGLCPSurfaceBackingSize, dim);
    CGLEnable([[self openGLContext] CGLContextObj], kCGLCESurfaceBackingSize);

    // Go windowed!
    [self exitFullScreenModeWithOptions:nil];
    [self.window makeFirstResponder:self];
    _isFullscreen = false;
}
Update
Here's how to do something similar to datenwolf's suggestion below, but not using OpenGL (useful for non-GL content).
// Render into a specific size
renderDimensions = NSMakeSize(640, 480);
NSImage *drawIntoImage = [[NSImage alloc] initWithSize:renderDimensions];
[drawIntoImage lockFocus];
[self drawViewOfSize:renderDimensions];
[drawIntoImage unlockFocus];
[self syphonSendImage:drawIntoImage];

// Resize to fit preview area and draw
NSSize newSize = NSMakeSize(self.frame.size.width, self.frame.size.height);
[drawIntoImage setSize:newSize];
[[NSColor blackColor] set];
[self lockFocus];
// Use bounds, not frame: drawing inside lockFocus happens in the
// view's own coordinate space, while frame is in the superview's.
[NSBezierPath fillRect:self.bounds];
[drawIntoImage drawAtPoint:NSZeroPoint fromRect:self.bounds operation:NSCompositeCopy fraction:1];
[self unlockFocus];
Use an FBO with a texture of the desired target resolution attached and render to that FBO/texture at said resolution. Then switch to the main framebuffer and draw a fullscreen quad using the texture you just rendered to. Use whatever magnification filter you like best. If you want to bring out the big guns, you could implement a Lanczos / sinc interpolator in the fragment shader to upscale the intermediary texture.
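A rough sketch of that approach in legacy OpenGL, assuming the GL_EXT_framebuffer_object extension is available; drawScene, screenWidth, and screenHeight are placeholders for your own drawing code and actual display size:
// One-time setup: a color texture and an FBO at the low render resolution.
GLuint fboTexture, fbo;
glGenTextures(1, &fboTexture);
glBindTexture(GL_TEXTURE_2D, fboTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, fboTexture, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

// Per frame: render the scene at 640x480 into the FBO...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glViewport(0, 0, 640, 480);
drawScene();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

// ...then stretch it across the whole screen as a textured quad
// (assumes identity modelview/projection, so the quad spans clip space).
glViewport(0, 0, screenWidth, screenHeight);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, fboTexture);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glDisable(GL_TEXTURE_2D);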
I wish to overlay one CGImage over another.
As an example the first CGImage is 1024x768 and want to overlay a second 100x100 CGImage at a given location.
I have seen how to do this using NSImage, but I don't really want to convert my CGImages to NSImages, do the overlay, and then convert the result back to a CGImage. I have also seen iOS versions of the code, but I'm unsure how to go about it on the Mac.
I'm mostly used to iOS, so I might be out of my depth here, but assuming you have a graphics context (sized like the larger of the two images), can't you just draw the two CGImages on top of each other?
CGImageRef img1024x768;
CGImageRef img100x100;
// Draw the large image first, then the small one on top; the origin of
// smallBounds controls where the overlay lands.
CGRect largeBounds = CGRectMake(0, 0, CGImageGetWidth(img1024x768), CGImageGetHeight(img1024x768));
CGContextDrawImage(ctx, largeBounds, img1024x768);
CGRect smallBounds = CGRectMake(0, 0, CGImageGetWidth(img100x100), CGImageGetHeight(img100x100));
CGContextDrawImage(ctx, smallBounds, img100x100);
And then draw the result into an NSImage?
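If you don't already have a context, here is a sketch of the whole round trip through a CGBitmapContext; x and y are placeholders for wherever you want the overlay:
// Create an offscreen RGBA context the size of the large image.
size_t w = CGImageGetWidth(img1024x768);
size_t h = CGImageGetHeight(img1024x768);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, 0, space, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);

// The large image fills the context; the small one lands at (x, y).
// Remember that Core Graphics' origin is the bottom-left corner.
CGFloat x = 0, y = 0; // placeholder overlay position
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), img1024x768);
CGContextDrawImage(ctx, CGRectMake(x, y, CGImageGetWidth(img100x100), CGImageGetHeight(img100x100)), img100x100);

// Snapshot the composite as a new CGImage.
CGImageRef composite = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);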
I have a CGContext, and I tried to draw an image into it.
I have an NSImage* that I load and use in another method:
NSImage *check = [[NSImage alloc] initWithContentsOfFile:@"check.icns"];
CGContextDrawImage(arg2, CGRectMake(0, 0, 145, 15), check);
But CGContextDrawImage wants a CGImageRef. How can I use my NSImage*?
Thanks
You should consider just opening it as a CGImage in this case, if that is all you will need it for. If an NSImage is what you need, then see -[NSImage CGImageForProposedRect:context:hints:]. This method may not require a copy, and it can produce a CGImage representation ideal for drawing into the destination context.
If needed, you can create an NSGraphicsContext from a CGContext using +[NSGraphicsContext graphicsContextWithGraphicsPort:flipped:].
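Putting the two together, a sketch for the code in the question; ctx stands in for the CGContextRef you were handed (arg2 in your snippet), and CGImageForProposedRect:context:hints: requires 10.6 or later:
NSImage *check = [[NSImage alloc] initWithContentsOfFile:@"check.icns"];
NSRect drawRect = NSMakeRect(0, 0, 145, 15);
// A reference context lets NSImage pick a representation that matches
// the destination; the proposed rect may be adjusted in place.
NSGraphicsContext *nsCtx = [NSGraphicsContext graphicsContextWithGraphicsPort:ctx flipped:NO];
CGImageRef cgCheck = [check CGImageForProposedRect:&drawRect context:nsCtx hints:nil];
if (cgCheck) {
    CGContextDrawImage(ctx, NSRectToCGRect(drawRect), cgCheck);
}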
I have an NSImage. I would like to read the NSColor for a pixel at some x and y. Xcode seems to think that there is a colorAtX:y: method on NSImage, but calling it causes a crash saying that there is no such method on NSImage. I have seen some examples where you create an NSBitmapImageRep and call the same method on that, but I have not been able to successfully convert my NSImage to an NSBitmapImageRep. The pixels on the NSBitmapImageRep are different for some reason.
There must be a simple way to do this. It cannot be this complicated.
Without seeing your code it's difficult to know what's going wrong.
You can create an NSBitmapImageRep from the image using the initWithData: method, passing in the image's TIFFRepresentation.
You can then get the pixel value using the method colorAtX:y:, which is a method of NSBitmapImageRep, not NSImage:
NSBitmapImageRep* imageRep = [[NSBitmapImageRep alloc] initWithData:[yourImage TIFFRepresentation]];
// colorAtX:y: takes integer pixel coordinates with a top-left origin,
// so flip y using the rep's pixel height.
NSInteger y = [imageRep pixelsHigh] - 100;
NSColor* color = [imageRep colorAtX:100 y:y];
[imageRep release];
Note that you must make an adjustment for the y value, because the colorAtX:y: method uses a coordinate system that starts in the top left of the image, whereas the NSImage coordinate system starts at the bottom left.
Alternatively, if the pixel is visible on-screen then you can use the NSReadPixel() function to get the color of a pixel in the current coordinate system.
NSBitmapImageRep's colorAtX:y: does not seem to use the device color space, which may lead to color values that are slightly different from what you actually see. Use this code to get the correct color in the current device color space:
[yourImage lockFocus]; // yourImage is just your NSImage variable
NSColor *pixelColor = NSReadPixel(NSMakePoint(1, 1)); // Or another point
[yourImage unlockFocus];
I am trying to create a CGImage from an NSTextField.
I have had some success, but I still can't get a CGImage consisting of only the text. Every time I capture the text field, I get the color of the window background along with it. (It looks like I am not getting the alpha channel info.)
I tried the following snippet from http://www.cocoadev.com/index.pl?ConvertNSImageToCGImage
NSBitmapImageRep *bm;
[theView lockFocus];
bm = [[NSBitmapImageRep alloc] initWithFocusedViewRect:[theView bounds]];
[theView unlockFocus];
// The data provider will release bm via BitmapReleaseCallback.
int rowBytes, width, height;
rowBytes = [bm bytesPerRow];
width = [bm pixelsWide];
height = [bm pixelsHigh];
CGDataProviderRef provider = CGDataProviderCreateWithData(bm, [bm bitmapData], rowBytes * height, BitmapReleaseCallback);
CGColorSpaceRef colorspace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGBitmapInfo bitsInfo = kCGImageAlphaPremultipliedLast;
CGImageRef img = CGImageCreate(width, height, 8, 32, rowBytes, colorspace, bitsInfo, provider, NULL, NO, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorspace);
return img;
Any help to get a CGImage without the background color?
-initWithFocusedViewRect: reads from the window backing store, so essentially it's a screenshot of that portion of the window. That's why you're getting the window background color in your image.
-[NSView cacheDisplayInRect:toBitmapImageRep:] is very similar, but it causes the view and its subviews, but not its superviews, to redraw themselves. If your text field is borderless, then this might suffice for you. (Make sure to use -bitmapImageRepForCachingDisplayInRect: to create your NSBitmapImageRep!)
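A sketch of that approach, with theView standing in for your text field:
NSRect bounds = [theView bounds];
// Ask the view for a correctly configured rep, then have it (and its
// subviews, but not its superviews) redraw into that rep.
NSBitmapImageRep *rep = [theView bitmapImageRepForCachingDisplayInRect:bounds];
[theView cacheDisplayInRect:bounds toBitmapImageRep:rep];
CGImageRef img = [rep CGImage]; // available on 10.5+; retain it if it must outlive rep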
There's one more option that might be considered even more correct than the above. NSTextField draws its content using its NSTextFieldCell. There's nothing really stopping you from just creating an image with the appropriate size, locking focus on it, and then calling -drawInteriorWithFrame:inView:. That should just draw the text, exactly as it was drawn in the text field.
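An untested sketch of the cell approach; depending on the cell, you may need to account for the flipped coordinate system of the control view:
NSTextFieldCell *cell = [textField cell];
NSRect cellFrame = [textField bounds];
NSImage *image = [[NSImage alloc] initWithSize:cellFrame.size];
[image lockFocus];
// Interior drawing skips the bezel and background, so only the text
// lands in the image and the rest stays transparent.
[cell drawInteriorWithFrame:cellFrame inView:textField];
[image unlockFocus];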
Finally, if you just want to draw text, don't forget about NSStringDrawing. NSString has some methods that will draw with attributes (drawAtPoint:withAttributes:), and NSAttributedString also has drawing methods (drawAtPoint:). You could use one of those instead of asking the NSTextFieldCell to draw for you.
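For example, a minimal sketch with NSAttributedString; the string and attributes are placeholders:
NSDictionary *attrs = [NSDictionary dictionaryWithObjectsAndKeys:
                       [NSFont systemFontOfSize:13], NSFontAttributeName,
                       [NSColor blackColor], NSForegroundColorAttributeName,
                       nil];
NSAttributedString *text = [[NSAttributedString alloc] initWithString:@"Hello" attributes:attrs];
// A freshly allocated NSImage starts fully transparent, so the result
// is just the text with a clean alpha channel.
NSImage *image = [[NSImage alloc] initWithSize:[text size]];
[image lockFocus];
[text drawAtPoint:NSZeroPoint];
[image unlockFocus];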