How can I obtain raw data from a CVImageBuffer object - cocoa

I'm trying to use Cocoa to grab images from a webcam. I'm able to get the image in RGBA format using QTKit and the didOutputVideoFrame delegate call, converting the CVImageBuffer to a CIImage and then to an NSBitmapImageRep.
I know my camera grabs natively in YUV. What I want is to get the YUV data directly from the CVImageBuffer and process each YUV frame before displaying it.
My question is: How can I get the YUV data from the CVImageBuffer?
thanks.

You might be able to create a CIImage from the buffer using +[CIImage imageWithCVImageBuffer:] and then render that CIImage into a CGBitmapContext of the desired pixel format.
Note that I have not tested this solution.
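If the goal is to avoid the RGBA conversion entirely, another option (a rough, untested sketch) is to ask the capture output for a YUV pixel format up front - for example by setting kCVPixelFormatType_422YpCbCr8 ('2vuy') in the output's pixelBufferAttributes - so that the CVImageBuffer arriving in the delegate is a CVPixelBuffer whose raw YUV bytes can be read by locking it. The ProcessYUVFrame call below is a hypothetical stand-in for your own processing routine:

    // Sketch only: assumes the output was configured for the packed
    // kCVPixelFormatType_422YpCbCr8 ('2vuy') format, so the buffer holds
    // Cb Y0 Cr Y1 bytes rather than RGBA.
    - (void)captureOutput:(QTCaptureOutput *)captureOutput
      didOutputVideoFrame:(CVImageBufferRef)videoFrame
         withSampleBuffer:(QTSampleBuffer *)sampleBuffer
           fromConnection:(QTCaptureConnection *)connection
    {
        CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)videoFrame;
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);

        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
        size_t bytesPerRow   = CVPixelBufferGetBytesPerRow(pixelBuffer);
        size_t width         = CVPixelBufferGetWidth(pixelBuffer);
        size_t height        = CVPixelBufferGetHeight(pixelBuffer);

        // baseAddress points at the raw YUV data; each row is bytesPerRow
        // bytes long, of which width * 2 bytes are pixel data for '2vuy'.
        ProcessYUVFrame(baseAddress, width, height, bytesPerRow); // hypothetical routine

        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    }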

I asked what I thought was a different question, but it turned out to have the same answer as this one: raw data from CVImageBuffer without rendering?

Related

Identifying content/image type from NSImage

I have an NSImage instance and I'd like to identify the type of image it is (JPG, PNG, GIF, etc.).
Is the original format of the image preserved and possible to retrieve somehow?
EDIT:
Answers in other questions mention using CGImageSource. However, there does not seem to be a way of extracting the original data from the NSImage - only TIFFRepresentation, which will always report the TIFF image type when inspected via CGImageSource.
EDIT 2:
I think my suspicion is correct: NSImage does not preserve the original format of the image; everything is converted to a bitmap representation. Therefore it is not possible to retrieve the original format without keeping track of it yourself. See the similar answer to the iOS equivalent question: Detect whether an UIImage is PNG or JPEG?
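For what it's worth, when the original file data is still available (as opposed to only the NSImage built from it), CGImageSource can report the container type directly. A minimal sketch, assuming originalData holds the bytes as read from disk or the network (ImageIO must be linked):

    #import <ImageIO/ImageIO.h>

    // Returns the UTI of the image data (e.g. public.jpeg, public.png,
    // com.compuserve.gif), or NULL if it cannot be determined. The caller
    // is responsible for releasing the returned string.
    CFStringRef CopyImageTypeForData(NSData *originalData)
    {
        CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)originalData, NULL);
        if (source == NULL)
            return NULL;

        CFStringRef uti = CGImageSourceGetType(source);  // owned by the source
        if (uti != NULL)
            CFRetain(uti);                               // keep it alive past CFRelease
        CFRelease(source);
        return uti;
    }

As noted above, feeding TIFFRepresentation into this will always report TIFF, so it only helps if the original bytes were kept around.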

Is it possible to get the raw IR image data from the Superframe?

I am using the Java API with the Tango Peanut phone, and I wanted to know if there is a way to get the raw IR image from the RGB-IR camera that the depth sensor uses. I know that a quarter of the pixels from the RGB-IR camera are IR data. I also know that the entire 4MP RGB-IR image gets put into the superframe and then converted to YUV. However, it is unclear how to decode the IR channel, or whether that is even possible at this point. If it's lost inside the YUV superframe, is there any other way I can retrieve the raw IR image?
To get the depth data, see:
http://www.pocketmagic.net/wp-content/uploads/2014/05/sbrl.zip
(taken from http://palcu.blogspot.ro/2014/04/google-project-tango-hackathon-in.html)
You might also want to try:
https://github.com/googlesamples/tango-examples-c/wiki/Depth:-Point-Cloud-Viewer
http://www.pocketmagic.net/2014/05/google-tango-hackathon-in-timisoara/#.VCQ6H1eHuiN
Please keep me posted on success/failure. Are you on a tablet or phone?

Windows Phone 7, Steganography and MediaLibrary.SavePicture

I am working with MediaLibrary on WP7 and I am doing steganography on a BitmapImage (WriteableBitmap), which works fine (using this approach: http://www.codeproject.com/Articles/4877/Steganography-Hiding-messages-in-the-Noise-of-a-Pi).
The problem occurs when I call the MediaLibrary.SavePicture method to save my bitmap to the phone. When I load this saved bitmap again from the phone, I can see that the pixels of the bitmap are shifted and my steganographic data is lost.
Is there a way to avoid this behavior during the save method?
Better yet, is there a way to attach some metadata to my bitmaps that would be persisted with the bitmap?
Thanks a lot!
Leo
The issue might be caused by the fact that MediaLibrary.SavePicture saves the stream as a JPEG, whereas your byte stream represents an uncompressed bitmap. Since JPEG is a lossy compression format, your data may be thrown away, and so your hidden byte stream becomes corrupt. I'm not familiar with steganography, but, if possible, you could try creating a blank JPEG image and writing your data to that; that way, your image format remains the same. You could try using Extensions.SaveJpeg with a quality value of 100, writing the data to that, and then saving it to the MediaLibrary.

Can CGPDFDataFormatJPEG2000 be used for something other than a JPEG2000 image?

Using the Quartz 2D PDF routines, can the CGPDFDataFormat format of a CGPDFStreamRef PDF stream be equal to CGPDFDataFormatJPEG2000 in any case other than for an XObject image with a filter of /JPXDecode?
In other words, is the CGPDFDataFormatJPEG2000 format ever used for anything other than JPEG2000 image streams? The reasonable answer would be no, but there can always be a difference between common usage and what's theoretically possible.
The JPXDecode filter expects a complete JPEG2000 image file to be stored in the image XObject, not just raw compressed data. I can say with 100% certainty that it is always used for image XObjects. But theoretically nothing stops you from wrapping your raw content stream data as a JPEG2000 image and then using the JPXDecode filter with a regular content stream; it is just not practical.
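For illustration, this is how the format surfaces when reading a PDF with Quartz - a sketch, assuming stream is a CGPDFStreamRef obtained from an image XObject:

    // CGPDFStreamCopyData reports the data format; CGPDFDataFormatJPEG2000
    // means the returned bytes are a complete JPEG2000 file, which can be
    // handed to an image decoder as-is.
    CGPDFDataFormat format;
    CFDataRef data = CGPDFStreamCopyData(stream, &format);
    if (data != NULL) {
        if (format == CGPDFDataFormatJPEG2000) {
            CGImageSourceRef source = CGImageSourceCreateWithData(data, NULL);
            CGImageRef image = source ? CGImageSourceCreateImageAtIndex(source, 0, NULL) : NULL;
            // ... use the decoded image ...
            if (image)  CGImageRelease(image);
            if (source) CFRelease(source);
        }
        CFRelease(data);
    }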

Extracting an image from H.264 sample data (Objective-C / Mac OS X)

Given a sample buffer of H.264, is there a way to extract the frame it represents as an image?
I'm using QTKit to capture video from a camera and using a QTCaptureMovieFileOutput as the output object.
I want something similar to the CVImageBufferRef that is passed as a parameter to the QTCaptureVideoPreviewOutput delegate method. For some reason, the file output doesn't contain the CVImageBufferRef.
What I do get is a QTSampleBuffer which, since I've set it in the compression options, contains an H.264 sample.
I have seen that on the iPhone, CoreMedia and AVFoundation can be used to create a CVImageBufferRef from a given CMSampleBufferRef (which, I imagine, is as close to the QTSampleBuffer as I'll be able to get) - but this is the Mac, not the iPhone.
Neither CoreMedia nor AVFoundation is available on the Mac, and I can't see any way to accomplish the same task.
What I need is an image (whether it be a CVImageBufferRef, CIImage or NSImage doesn't matter) from the current frame of the H.264 sample that is given to me by the Output object's call back.
Extended info (from the comments below)
I have posted a related question that focuses on the original issue - attempting to simply play a stream of video samples using QTKit: Playing a stream of video data using QTKit on Mac OS X
It appears not to be possible, which is why I've moved on to trying to obtain frames as images and create the appearance of video by scaling, compressing, and converting the image data from CVImageBufferRef to NSImage and sending it to a peer over the network.
I can use QTCaptureVideoPreviewOutput (or QTCaptureDecompressedVideoOutput) to get uncompressed frame images in the form of a CVImageBufferRef.
However, these image references need to be compressed, scaled, and converted into NSImages before they're of any use to me - hence the attempt to get an already scaled and compressed frame from the framework using QTCaptureMovieFileOutput (which allows a compression format and image size to be set before starting the capture), saving me from having to do the expensive compression, scaling, and conversion operations, which kill the CPU.
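For reference, the CVImageBufferRef-to-NSImage step mentioned above can be done by going through Core Image - an untested sketch, assuming videoFrame is the buffer handed to the preview/decompressed output's delegate (scaling and JPEG compression would still have to happen on top of this):

    // Wrap the CVImageBufferRef in a CIImage-backed NSImage.
    CIImage *ciImage = [CIImage imageWithCVImageBuffer:videoFrame];
    NSCIImageRep *imageRep = [NSCIImageRep imageRepWithCIImage:ciImage];

    NSImage *image = [[[NSImage alloc] initWithSize:[imageRep size]] autorelease];
    [image addRepresentation:imageRep];
    // image can now be drawn, scaled down, or re-encoded (e.g. via
    // NSBitmapImageRep's representationUsingType:properties:) before being
    // sent over the network.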
Does the "Creating a Single-Frame Grabbing Application" section of the QTKit Application Programming Guide not work for you in this instance?
