Cocoa: how do I draw camera frames onto the screen?

What I am trying to do is to display camera feeds within an NSView using AVFoundation. I know this can be easily achieved by using "AVCaptureVideoPreviewLayer". However, the long-term plan is to do some frame processing for tracking hand gestures, so I prefer to draw the frames manually. The way I did this was to use "AVCaptureVideoDataOutput" and implement the "(void)captureOutput: didOutputSampleBuffer: fromConnection:" delegate method.
Below is my implementation of the delegate method. Within it I create a CGImage from the sample buffer and render it onto a CALayer. However, this does NOT work: I do not see any video frames rendered on screen. The CALayer (mDrawLayer) was created in "awakeFromNib" and attached to a custom view in the storyboard. I verified the CALayer creation by setting its background colour to orange, and that works.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext =
        CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                              kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef imgRef = CGBitmapContextCreateImage(newContext);

    mDrawLayer.contents = (id)CFBridgingRelease(imgRef);
    [mDrawLayer display];

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
Obviously I am not doing something correctly, so how should I render the camera frames one by one onto the CALayer?
Also, I would like to know if my approach is correct. What is the standard way of doing this?
Your help will be greatly appreciated. Thanks:)
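One likely culprit, offered here only as a hedged sketch rather than a confirmed fix: -captureOutput:didOutputSampleBuffer:fromConnection: is delivered on whatever dispatch queue was passed to -setSampleBufferDelegate:queue:, which is normally a background queue, while CALayer contents should be updated on the main thread; the bitmap context and color space created above are also never released. Assuming mDrawLayer is the layer created in awakeFromNib, the end of the delegate method could look something like this:
CGImageRef imgRef = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// Hand the image to the layer on the main thread; assigning .contents is
// enough, no explicit -display call is needed.
dispatch_async(dispatch_get_main_queue(), ^{
    mDrawLayer.contents = (id)CFBridgingRelease(imgRef);
});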

Related

Access CGImageRef underlying bytes and modify?

I am working on an OSX app that does some pixel-level image manipulation. I am using the following code to access the pixel color components (RGBA) as regular bytes cast as uint8 pointers.
NSImage *image = self.iv.image;
NSRect imageRect = NSMakeRect(0, 0, image.size.width, image.size.height);
CGImageRef cgImage = [image CGImageForProposedRect:&imageRect context:NULL hints:nil];
NSData *data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(cgImage)));
uint8 *pixels = (uint8 *)[data bytes];
At this point I apply some byte level changes in:
for (int i = 0; i < [data length]; i += 4) { ... }
Changing this region of memory does not appear to have any effect on the original CGImageRef (which is at the time displayed in an NSImageView). I must do the following to see the image update accordingly:
CGImageRef newImageRef = CGImageCreate(width,
                                       height,
                                       bitsPerComponent,
                                       bitsPerPixel,
                                       bytesPerRow,
                                       colorspace,
                                       bitmapInfo,
                                       provider,
                                       NULL,
                                       false,
                                       kCGRenderingIntentDefault);
NSSize size = NSMakeSize(CGImageGetWidth(newImageRef),
                         CGImageGetHeight(newImageRef));
NSImage *newIm = [[NSImage alloc] initWithCGImage:newImageRef size:size];
self.iv.image = newIm;
In other words, the bytes I get back to modify are just a copy of the original bytes, presumably as a result of CGDataProviderCopyData(CGImageGetDataProvider(cgImage)).
My question is as follows: is there a way to access the underlying bytes of the CGImageRef directly, such that when I modify them the image is updated on screen as I manipulate them?
No. CGImages are immutable. You can't change them once they are created.
In your code, the call to [data bytes] gives a pointer to const void. You have cast away the const which gets it to compile without warnings, but that's a violation of the design contract. Writing to the buffer backing the data provider is not legal and not guaranteed to work, even if you create a new CGImage from it.
I will also point out that the format of the data in the buffer may be quite different from what you were expecting. There's no good reason to expect the data to be 32 bits per pixel, RGBA vs. BGRA vs. ARGB vs. …, or anything.
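If you do go through CGDataProviderCopyData, you can at least query the actual layout instead of assuming 32-bit RGBA; a minimal sketch:
// Query the actual pixel layout before interpreting the copied bytes.
size_t bitsPerPixel = CGImageGetBitsPerPixel(cgImage);
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(cgImage);   // ARGB vs. RGBA, premultiplied or not
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(cgImage);     // byte-order flags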
I strongly recommend that you read the sections about the various image objects in the 10.6 AppKit release notes. Scroll down to "NSImage, CGImage, and CoreGraphics impedance matching" and read through all of the following image-related sections until you hit "NSComboBox". The section "NSBitmapImageRep: CoreGraphics impedance matching and performance notes" is one of the more important ones for your purposes.
Beyond what that says, you could just maintain a pixel buffer that you allocated yourself in whatever format you prefer. Then, when you want a CGImage of that, create it from the buffer, draw with it, and discard it. Any pixel manipulations would be done on that buffer.
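A minimal sketch of that last suggestion, with the buffer size and pixel format chosen arbitrarily for illustration:
// A self-owned 32-bit RGBA buffer; mutate it freely.
size_t width = 256, height = 256, bytesPerRow = width * 4;
uint8_t *pixels = calloc(height, bytesPerRow);
// ... modify 'pixels' here ...
// Wrap it in a short-lived CGImage whenever you need to draw.
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                         cs, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGImageRef image = CGBitmapContextCreateImage(ctx);   // snapshot of the buffer
// ... draw 'image' ...
CGImageRelease(image);
CGContextRelease(ctx);
CGColorSpaceRelease(cs);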

How to overlay one CGImage over another

I wish to overlay one CGImage over another.
As an example, the first CGImage is 1024x768 and I want to overlay a second, 100x100 CGImage at a given location.
I have seen how to do this using NSImage, but I don't really want to convert my CGImages to NSImages, do the overlay, and then convert the result back to a CGImage. I have also seen iOS versions of the code, but I am unsure how to go about it on the Mac.
I'm mostly used to iOS, so I might be out of my depth here, but assuming you have a graphics context (sized like the larger of the two images), can't you just draw the two CGImages on top of each other?
CGImageRef img1024x768;
CGImageRef img100x100;
CGSize imgSize = CGSizeMake(CGImageGetWidth(img1024x768), CGImageGetHeight(img1024x768));
CGRect largeBounds = CGRectMake(0, 0, imgSize.width, imgSize.height);
CGContextDrawImage(ctx, largeBounds, img1024x768);
CGRect smallBounds = CGRectMake(0, 0, CGImageGetWidth(img100x100), CGImageGetHeight(img100x100));
CGContextDrawImage(ctx, smallBounds, img100x100);
And then draw the result into an NSImage?
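If you want to stay in Core Graphics, the context can just be an offscreen bitmap context on the Mac too; a minimal sketch (the 200, 300 overlay position is an arbitrary example):
size_t width = CGImageGetWidth(img1024x768);
size_t height = CGImageGetHeight(img1024x768);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
// Draw the large image first, then the small one at the desired location.
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), img1024x768);
CGContextDrawImage(ctx, CGRectMake(200, 300, 100, 100), img100x100);
// The composite stays a CGImage; no round trip through NSImage is required.
CGImageRef composite = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);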

Why would glBindFramebuffer(GL_FRAMEBUFFER, 0) result in blank screen in cocos2D-iphone?

[iPad-3]-[iOS 5.0]-[Objective-C]-[XCode 4.3.3]-[Cocos2D]-[openGL|ES 2.0]
I'm learning how to use OpenGL ES 2.0 and have stumbled on frame buffer objects (FBOs).
Info:
I'm working with Cocos2D which has a lot of extra-fancy handling for drawing. I imagine that this may be linked with the issue. If the 'default' frame buffer for cocos is different from the actual default frame buffer that draws to the screen, this could result in a mis-draw
My Problem:
In the init function of my "helloworld.m" class, if I place "glBindFramebuffer(GL_FRAMEBUFFER, 0);" anywhere, I simply get a blank screen!
-(id) init
{
    if( (self=[super init]) )
    {
        CGSize winSize = [CCDirector sharedDirector].winSize;

        glBindFramebuffer(GL_FRAMEBUFFER, 0);

        CCSprite * spriteBG = [[CCSprite alloc] initWithFile:@"cocos_retina.png"];
        spriteBG.position = ccp(512,384);
        //[self addChild:spriteBG z:1];

        [self scheduleUpdate];
        _mTouchDown = NO;

        _mSprite = [CCSprite spriteWithTexture:_mMainTexture];
        _mSprite.position = ccp(512,384);
        [self addChild:_mSprite];

        self.isTouchEnabled = YES;
    }
    return self;
}
Am I missing something basic and obvious?
As far as I've learned, calling "glBindFramebuffer(GL_FRAMEBUFFER, 0);" should simply bind the default framebuffer, the one that draws to the screen.
The problem was that either iOS or Cocos2D (or both) can have its own framebuffer.
The handle of that framebuffer is different from 0, and may be different each time.
To solve this, I have to grab the current FBO's handle, do my custom framebuffer work, and then re-bind that handle after I'm done.
Create a variable to reference the original frame buffer object:
GLint oldFBO;
Assign the currently bound FBO's handle (a 'GLint') to 'oldFBO':
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &oldFBO);
// here is where you would create or manipulate custom framebuffers //
After that, set the original FBO back as the current framebuffer:
glBindFramebuffer(GL_FRAMEBUFFER, oldFBO);
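Putting those pieces together, a minimal sketch (myFBO and the offscreen rendering are hypothetical placeholders):
GLint oldFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &oldFBO);
// Bind the custom FBO (hypothetical handle) and do the offscreen work.
glBindFramebuffer(GL_FRAMEBUFFER, myFBO);
// ... render to the custom framebuffer's attachments here ...
// Restore whatever Cocos2D had bound instead of assuming it was 0.
glBindFramebuffer(GL_FRAMEBUFFER, (GLuint)oldFBO);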

How to create a clipping mask from an NSAttributedString?

I have an NSAttributedString which I would like to draw into a CGImage so that I can later draw the CGImage into an NSView. Here's what I have so far:
// Draw attributed string into NSImage
NSImage *cacheImage = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[cacheImage lockFocus];
[attributedString drawWithRect:NSMakeRect(0, 0, width, height) options:0];
[cacheImage unlockFocus];

// Convert NSImage to CGImageRef
CGImageSourceRef source = CGImageSourceCreateWithData(
    (CFDataRef)[cacheImage TIFFRepresentation], NULL);
CGImageRef img = CGImageSourceCreateImageAtIndex(source, 0, NULL);
I'm not using -[NSImage CGImageForProposedRect:context:hints:] because my app must use the 10.5 SDK.
When I draw this into my NSView using CGContextDrawImage, it draws a transparent background around the text, causing whatever is behind the window to show through. I think I want to create a clipping mask, but I can't figure out how to do that.
It sounds like your blend mode is set up as Copy instead of SourceOver. Take a look at the Core Graphics blend mode documentation.
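A minimal sketch of that fix, assuming ctx and rect are the context and destination rectangle you already pass to CGContextDrawImage:
// kCGBlendModeNormal is ordinary source-over compositing, so the image's
// transparent background blends with whatever is already in the context.
CGContextSetBlendMode(ctx, kCGBlendModeNormal);
CGContextDrawImage(ctx, rect, img);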

Retrieving CGImage from NSView

I am trying to create a CGImage from an NSTextField.
I have had some success with this, but I still can't get a CGImage consisting of only the text. Every time I capture the text field, I get the window's background color along with it (it looks like I am not getting the alpha channel info).
I tried following snippet from http://www.cocoadev.com/index.pl?ConvertNSImageToCGImage
[theView lockFocus];
NSBitmapImageRep *bm = [[NSBitmapImageRep alloc] initWithFocusedViewRect:[theView bounds]];
[theView unlockFocus];
[bm retain]; // data provider will release this

int rowBytes, width, height;
rowBytes = [bm bytesPerRow];
width = [bm pixelsWide];
height = [bm pixelsHigh];

CGDataProviderRef provider = CGDataProviderCreateWithData( bm, [bm bitmapData], rowBytes * height, BitmapReleaseCallback );
CGColorSpaceRef colorspace = CGColorSpaceCreateWithName( kCGColorSpaceGenericRGB );
CGBitmapInfo bitsInfo = kCGImageAlphaPremultipliedLast;
CGImageRef img = CGImageCreate( width, height, 8, 32, rowBytes, colorspace, bitsInfo, provider, NULL, NO, kCGRenderingIntentDefault );
CGDataProviderRelease( provider );
CGColorSpaceRelease( colorspace );
return img;
Any help to get a CGImage without the background color?
-initWithFocusedViewRect: reads from the window backing store, so essentially it's a screenshot of that portion of the window. That's why you're getting the window background color in your image.
-[NSView cacheDisplayInRect:toBitmapImageRep:] is very similar, but it causes the view and its subviews, but not its superviews, to redraw themselves. If your text field is borderless, then this might suffice for you. (Make sure to use -bitmapImageRepForCachingDisplayInRect: to create your NSBitmapImageRep!)
There's one more option that might be considered even more correct than the above. NSTextField draws its content using its NSTextFieldCell. There's nothing really stopping you from just creating an image with the appropriate size, locking focus on it, and then calling -drawInteriorWithFrame:inView:. That should just draw the text, exactly as it was drawn in the text field.
Finally, if you just want to draw text, don't forget about NSStringDrawing. NSString has some methods that will draw with attributes (drawAtPoint:withAttributes:), and NSAttributedString also has drawing methods (drawAtPoint:). You could use one of those instead of asking the NSTextFieldCell to draw for you.
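A minimal sketch of the -cacheDisplayInRect:toBitmapImageRep: route, assuming theView is the (borderless) text field you want to capture:
// Redraws the view and its subviews into their own bitmap, without the window behind them.
NSRect bounds = [theView bounds];
NSBitmapImageRep *rep = [theView bitmapImageRepForCachingDisplayInRect:bounds];
[theView cacheDisplayInRect:bounds toBitmapImageRep:rep];
CGImageRef img = [rep CGImage];   // available on 10.5 and later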
