Retrieving CGImage from NSView - cocoa

I am trying to create a CGImage from an NSTextField.
I've had some success with this, but I still can't get a CGImage consisting of only the text. Every time I capture the text field, I get the window's background color along with it (it looks like I'm not getting the alpha channel info).
I tried the following snippet from http://www.cocoadev.com/index.pl?ConvertNSImageToCGImage
// Release callback for the data provider: balances the +alloc of the bitmap rep below.
static void BitmapReleaseCallback(void *info, const void *data, size_t size)
{
    [(NSBitmapImageRep *)info release];
}

[theView lockFocus];
NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc] initWithFocusedViewRect:[theView bounds]];
[theView unlockFocus];
// No extra retain needed: the data provider's release callback
// consumes the reference from +alloc above.
int rowBytes = [bitmap bytesPerRow];
int width = [bitmap pixelsWide];
int height = [bitmap pixelsHigh];
CGDataProviderRef provider = CGDataProviderCreateWithData(bitmap, [bitmap bitmapData], rowBytes * height, BitmapReleaseCallback);
CGColorSpaceRef colorspace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGBitmapInfo bitsInfo = kCGImageAlphaPremultipliedLast;
CGImageRef img = CGImageCreate(width, height, 8, 32, rowBytes, colorspace, bitsInfo, provider, NULL, NO, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorspace);
return img;
How can I get a CGImage without the background color?

-initWithFocusedViewRect: reads from the window backing store, so essentially it's a screenshot of that portion of the window. That's why you're getting the window background color in your image.
-[NSView cacheDisplayInRect:toBitmapImageRep:] is very similar, but it causes the view and its subviews, but not its superviews, to redraw themselves. If your text field is borderless, then this might suffice for you. (Make sure to use -bitmapImageRepForCachingDisplayInRect: to create your NSBitmapImageRep!)
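As a rough sketch (assuming theView is your borderless text field), that approach looks like:
// Sketch: redraw just the view and its subviews into a bitmap,
// without the window contents behind it.
NSRect bounds = [theView bounds];
NSBitmapImageRep *rep = [theView bitmapImageRepForCachingDisplayInRect:bounds];
[theView cacheDisplayInRect:bounds toBitmapImageRep:rep];
CGImageRef img = [rep CGImage]; // available on 10.5 and later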
There's one more option that might be considered even more correct than the above. NSTextField draws its content using its NSTextFieldCell. There's nothing really stopping you from just creating an image with the appropriate size, locking focus on it, and then calling -drawInteriorWithFrame:inView:. That should just draw the text, exactly as it was drawn in the text field.
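A minimal sketch of that idea (assuming textField is your NSTextField; a starting point, not drop-in code):
// Sketch: draw only the cell's interior (the text) into a fresh image.
NSRect bounds = [textField bounds];
NSImage *image = [[NSImage alloc] initWithSize:bounds.size];
[image lockFocus];
[[textField cell] drawInteriorWithFrame:bounds inView:textField];
[image unlockFocus];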
Finally, if you just want to draw text, don't forget about NSStringDrawing. NSString has some methods that will draw with attributes (drawAtPoint:withAttributes:), and NSAttributedString also has drawing methods (drawAtPoint:). You could use one of those instead of asking the NSTextFieldCell to draw for you.
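For example (a sketch with placeholder attributes):
// Sketch: draw a string directly, with no cell or view involved.
NSDictionary *attrs = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSFont systemFontOfSize:13], NSFontAttributeName,
    [NSColor blackColor], NSForegroundColorAttributeName,
    nil];
[@"Some text" drawAtPoint:NSMakePoint(0, 0) withAttributes:attrs];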

Related

cocoa: how do I draw camera frames onto the screen

What I am trying to do is display camera frames within an NSView using AVFoundation. I know this can easily be achieved with AVCaptureVideoPreviewLayer. However, the long-term plan is to do some frame processing for hand-gesture tracking, so I prefer to draw the frames manually. I used AVCaptureVideoDataOutput and implemented the captureOutput:didOutputSampleBuffer:fromConnection: delegate method.
Below is my implementation of the delegate method. Within it I create a CGImage from the sample buffer and render it onto a CALayer. However, this does NOT work, as I do not see any video frames rendered on screen. The CALayer (mDrawLayer) is created in awakeFromNib and attached to a custom view in the storyboard. I verified the layer was created correctly by setting its background color to orange, and that works.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                    colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef imgRef = CGBitmapContextCreateImage(newContext);

    // Release the context and color space here, or they leak once per frame.
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    mDrawLayer.contents = (id)CFBridgingRelease(imgRef);
    [mDrawLayer display];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
Obviously I am not doing something correctly, so how should I render the camera frames one by one onto the CALayer?
Also, I would like to know if my approach is correct. What is the standard way of doing this?
Your help will be greatly appreciated. Thanks:)

CATextLayer gets rasterized too early and the text is blurred

I'm having some trouble with CATextLayer. It could be my fault, but I haven't found any help on this topic. I am on OS X (on iOS it should be the same).
I create a CATextLayer with a scale factor > 1, and what I get is blurred text. The layer is rasterized before the scale is applied, I think. Is this the expected behavior? I hope not, because it just makes no sense... A CAShapeLayer is rasterized after its transformation matrix is applied, so why should CATextLayer be different?
In case I am doing something wrong... what is it??
CATextLayer *layer = [CATextLayer layer];
layer.string = @"I like what I am doing";
layer.font = (__bridge CFTypeRef)[NSFont systemFontOfSize:24];
layer.fontSize = 24;
layer.anchorPoint = CGPointZero;
layer.frame = CGRectMake(0, 0, 400, 100);
layer.foregroundColor = [NSColor blackColor].CGColor;
layer.transform = CATransform3DMakeScale(2., 2., 1.);
layer.shouldRasterize = NO;
[self.layer addSublayer:layer];
The solution I use at the moment is to set the layer's contentsScale property to the scale factor. The problem is that this solution doesn't scale: if the scale factor of any of the parent layers changes, contentsScale has to be updated too. I would have to write code that traverses the layer tree and updates the contentsScale property of every CATextLayer... not exactly what I would like to do.
Another solution, which is not really a solution, is to convert the text to a shape and use a CAShapeLayer. But then I don't see the point of having CATextLayer at all.
Could a custom subclass of CALayer help solve this problem?
EDIT: Even CAGradientLayer is able to render its contents, like CAShapeLayer, after its transformation matrix is applied. Can someone explain how that is possible?
EDIT 2: My guess is that paths and gradients are rendered as OpenGL display lists, so they are rasterized at their actual on-screen size by OpenGL itself. Text is rasterized by Core Animation, so it reaches OpenGL as a bitmap.
I think I will go with the contentsScale solution for the moment. Maybe, in the future, I will convert the text to shapes. To get the best results with little work, this is the code I use now:
[CATransaction setDisableActions:YES];
CGFloat contentsScale = ceilf(scaleOfParentLayer);
// _scalableTextLayer is a CATextLayer
_scalableTextLayer.contentsScale = contentsScale;
[_scalableTextLayer displayIfNeeded];
[CATransaction setDisableActions:NO];
After trying all of these approaches, the solution I'm using now is a custom subclass of CALayer; I don't use CATextLayer at all.
I override the contentsScale property with this custom setter:
- (void)setContentsScale:(CGFloat)cs
{
    CGFloat scale = MAX(ceilf(cs), 1.); // never less than 1, always an integer
    if (scale != self.contentsScale) {
        [super setContentsScale:scale];
        [self setNeedsDisplay];
    }
}
The value of the property is always rounded up to the next integer. When the rounded value changes, the layer must be redrawn.
The display method of my CALayer subclass creates a bitmap image at the size of the text, multiplied by the contentsScale factor and by the screen scale factor.
- (void)display
{
    CGFloat scale = self.contentsScale * [MyUtils screenScale];
    CGFloat width = self.bounds.size.width * scale;
    CGFloat height = self.bounds.size.height * scale;
    CGContextRef bitmapContext = [MyUtils createBitmapContextWithSize:CGSizeMake(width, height)];
    CGContextScaleCTM(bitmapContext, scale, scale);
    CGContextSetShouldSmoothFonts(bitmapContext, false);
    CTLineRef line = CTLineCreateWithAttributedString((__bridge CFAttributedStringRef)(_text));
    CGContextSetTextPosition(bitmapContext, 0., self.bounds.size.height - _ascender);
    CTLineDraw(line, bitmapContext);
    CFRelease(line);
    CGImageRef image = CGBitmapContextCreateImage(bitmapContext);
    self.contents = (__bridge id)(image);
    CGImageRelease(image);
    CGContextRelease(bitmapContext);
}
When I change the scale factor of the root layer of my hierarchy, I loop over all the text layers and set their contentsScale property to the same factor (see the sketch below). The display method is called only when the rounded value of the scale factor changes (i.e. if the previous value was 1.6 and I now set 1.7, nothing happens; but if the new value is 2.1, the layer is redisplayed).
The redraw cost is small. My test was to continuously change the scale factor of a hierarchy of 40 text layers on a 3rd-generation iPad. It works like butter.
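For reference, that loop can be sketched roughly like this (ScalableTextLayer stands in for the CALayer subclass above; the name is illustrative):
// Sketch: push a new contentsScale down to every text layer in the tree.
// The overridden setter above rounds the value and redisplays only when needed.
- (void)updateContentsScale:(CGFloat)scale startingAtLayer:(CALayer *)layer
{
    if ([layer isKindOfClass:[ScalableTextLayer class]])
        layer.contentsScale = scale;
    for (CALayer *sublayer in layer.sublayers)
        [self updateContentsScale:scale startingAtLayer:sublayer];
}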
CATextLayer is different because the underlying Core Text renders the glyphs at the specified font size (an educated guess based on experiments).
You could add an action to the parent layer so that as soon as its scale changes, it changes the font size of the text layer.
Blurriness can also come from misaligned pixels. That can happen if you place the text layer at a non-integral position, or if there is any transformation in the superlayer hierarchy.
Alternatively, you could subclass CALayer and draw the text using Cocoa in drawInContext:.
See the examples here:
http://lists.apple.com/archives/Cocoa-dev/2009/Jan/msg02300.html
http://people.omnigroup.com/bungi/TextDrawing-20090129.zip
If you want the exact behaviour of a CAShapeLayer, you will need to convert your string into a bezier path and have the CAShapeLayer render it. It's a bit of work, but then you will have exactly the behaviour you are looking for. An alternative approach is to scale the fontSize instead, as sketched below; this yields crisp text every time, but it might not fit your exact situation.
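The fontSize route can be as simple as this sketch (scale is the factor you would otherwise have applied as a transform; the 24/400/100 values come from the question's layer):
// Sketch: scale the font and frame instead of the layer's transform,
// so Core Animation rasterizes the glyphs at their final on-screen size.
textLayer.fontSize = 24.0 * scale;
textLayer.frame = CGRectMake(0, 0, 400 * scale, 100 * scale);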
To draw text as a CAShapeLayer, have a look at the Apple sample code "CoreAnimationText":
http://developer.apple.com/library/mac/#samplecode/CoreAnimationText/Listings/Readme_txt.html
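The core of the conversion looks roughly like this (a sketch for a single-line attrString; see the sample for the full treatment):
// Sketch: build a CGPath from the glyphs of an attributed string using Core Text,
// then hand it to a CAShapeLayer, which rasterizes after its transform is applied.
CGMutablePathRef path = CGPathCreateMutable();
CTLineRef line = CTLineCreateWithAttributedString((__bridge CFAttributedStringRef)attrString);
CFArrayRef runs = CTLineGetGlyphRuns(line);
for (CFIndex r = 0; r < CFArrayGetCount(runs); r++) {
    CTRunRef run = CFArrayGetValueAtIndex(runs, r);
    CTFontRef font = (CTFontRef)CFDictionaryGetValue(CTRunGetAttributes(run), kCTFontAttributeName);
    for (CFIndex g = 0; g < CTRunGetGlyphCount(run); g++) {
        CGGlyph glyph;
        CGPoint pos;
        CTRunGetGlyphs(run, CFRangeMake(g, 1), &glyph);
        CTRunGetPositions(run, CFRangeMake(g, 1), &pos);
        CGPathRef glyphPath = CTFontCreatePathForGlyph(font, glyph, NULL);
        CGAffineTransform move = CGAffineTransformMakeTranslation(pos.x, pos.y);
        CGPathAddPath(path, &move, glyphPath);
        CGPathRelease(glyphPath);
    }
}
CFRelease(line);

CAShapeLayer *shapeLayer = [CAShapeLayer layer];
shapeLayer.path = path;
CGPathRelease(path);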

cocoa: Read pixel color of NSImage

I have an NSImage, and I would like to read the NSColor for the pixel at some x and y. Xcode seems to think there is a colorAtX:y: method on NSImage, but calling it crashes, saying there is no such method on NSImage. I have seen examples where you create an NSBitmapImageRep and call the same method on that, but I have not been able to successfully convert my NSImage to an NSBitmapImageRep: the pixels of the NSBitmapImageRep come out different for some reason.
There must be a simple way to do this. It cannot be this complicated.
Without seeing your code it's difficult to know what's going wrong.
You can draw the image to an NSBitmapImageRep using the initWithData: method and pass in the image's TIFFRepresentation.
You can then get the pixel value using the method colorAtX:y:, which is a method of NSBitmapImageRep, not NSImage:
NSBitmapImageRep* imageRep = [[NSBitmapImageRep alloc] initWithData:[yourImage TIFFRepresentation]];
NSSize imageSize = [yourImage size];
CGFloat y = imageSize.height - 100.0;
NSColor* color = [imageRep colorAtX:100.0 y:y];
[imageRep release];
Note that you must adjust the y value because colorAtX:y: uses a coordinate system that starts at the top left of the image, whereas the NSImage coordinate system starts at the bottom left.
Alternatively, if the pixel is visible on-screen then you can use the NSReadPixel() function to get the color of a pixel in the current coordinate system.
The colorAtX:y: method of NSBitmapImageRep seems not to use the device color space, which may lead to color values that are slightly different from what you actually see. Use this code to get the correct color in the current device color space:
[yourImage lockFocus]; // yourImage is just your NSImage variable
NSColor *pixelColor = NSReadPixel(NSMakePoint(1, 1)); // Or another point
[yourImage unlockFocus];

UINavigationBar with image and default gradient background with iOS5

I'm attempting to use the new [UINavigationBar appearance] functionality in iOS 5 to add a logo image to the UINavigationBars in my application. Primarily, I'd like to keep the default gradient but center a transparent PNG in the nav bar. The logo image is roughly 120 pixels wide (240 pixels @2x).
I first attempted this by setting the background image. The default behavior of setBackgroundImage:forBarMetrics: appears to be to tile the image, and all transparent parts show the default nav bar background color, black. I can also set the background color via the appearance modifier and get a flat color background, but I'd really like the original gradient behavior without maintaining a separate image resource for it. That also makes it easier to adjust in code, since I can tweak the tint there rather than regenerating an image whenever I decide to change it.
What I'm trying to use:
UIImage *logoImage = [UIImage imageNamed:@"logoImage"];
[[UINavigationBar appearance] setBackgroundImage:logoImage forBarMetrics:UIBarMetricsDefault];
You can do this two ways. If you want to always have the image in the navigation bar, then create an image view and set it as a subview of the navigation bar:
[self setLogoImageView:[[UIImageView alloc] init]];
[logoImageView setImage:[UIImage imageNamed:@"logo.png"]];
[logoImageView setContentMode:UIViewContentModeScaleAspectFit];
CGRect navFrame = [[navController navigationBar] frame];
float imageViewHeight = navFrame.size.height - 9;
float x_pos = navFrame.origin.x + navFrame.size.width/2 - 111.0f/2; // 111 = the logo's width in points
float y_pos = navFrame.size.height/2 - imageViewHeight/2.0;
CGRect logoFrame = CGRectMake(x_pos, y_pos, 111, imageViewHeight);
[logoImageView setFrame:logoFrame];
[[[self navigationController] navigationBar] addSubview:logoImageView];
If you only want to display the logo in a certain view, then set the view's navigation item:
[logoImageView setAutoresizingMask:UIViewAutoresizingFlexibleHeight];
[[self navigationItem] setTitleView:logoImageView];

How to create a clipping mask from an NSAttributedString?

I have an NSAttributedString which I would like to draw into a CGImage so that I can later draw the CGImage into an NSView. Here's what I have so far:
// Draw attributed string into NSImage
NSImage *cacheImage = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[cacheImage lockFocus];
[attributedString drawWithRect:NSMakeRect(0, 0, width, height) options:0];
[cacheImage unlockFocus];
// Convert NSImage to CGImageRef
CGImageSourceRef source = CGImageSourceCreateWithData(
    (CFDataRef)[cacheImage TIFFRepresentation], NULL);
CGImageRef img = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CFRelease(source); // balance the Create above
I'm not using -[NSImage CGImageForProposedRect:context:hints:] because my app must target the 10.5 SDK.
When I draw this into my NSView using CGContextDrawImage, it draws a transparent background around the text, causing whatever is behind the window to show through. I think I want to create a clipping mask, but I can't figure out how to do that.
It sounds like your blend mode is set to Copy instead of SourceOver. Take a look at the Core Graphics blend mode documentation.
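For instance, when drawing the image (a sketch; ctx and imageRect stand in for your context and target rect):
// Sketch: make sure the context composites with source-over (Normal), so the
// image's transparent pixels blend with what's already in the view instead
// of replacing it.
CGContextSaveGState(ctx);
CGContextSetBlendMode(ctx, kCGBlendModeNormal); // Normal == source over
CGContextDrawImage(ctx, imageRect, img);
CGContextRestoreGState(ctx);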
