How to persist CVPixelBuffer?

I need to extract frames from the iOS front-facing camera, convert each of them to a CVPixelBuffer, and store the CVPixelBuffer objects on disk without losing data.
Is there a way to achieve this?
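One way to approach this (a minimal Swift sketch, not from the original thread; the function names are hypothetical): a CVPixelBuffer is essentially pixel memory plus a few attributes, so you can lock it, copy out the raw bytes together with the width, height, pixel format, and bytes-per-row needed to rebuild it, and write all of that to a file. This assumes a single-plane format such as kCVPixelFormatType_32BGRA; planar YUV buffers would need each plane copied separately.

```swift
import CoreVideo
import Foundation

/// Sketch: serialize a single-plane CVPixelBuffer to raw bytes plus the
/// metadata needed to rebuild it later. No data is lost because nothing
/// is re-encoded.
func persistPixelBuffer(_ buffer: CVPixelBuffer, to url: URL) throws {
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return }

    // 16-byte header: width, height, pixel format, bytes per row.
    var header = [UInt32(CVPixelBufferGetWidth(buffer)),
                  UInt32(CVPixelBufferGetHeight(buffer)),
                  UInt32(CVPixelBufferGetPixelFormatType(buffer)),
                  UInt32(CVPixelBufferGetBytesPerRow(buffer))]
    var data = Data(bytes: &header, count: header.count * MemoryLayout<UInt32>.size)
    data.append(Data(bytes: base,
                     count: CVPixelBufferGetBytesPerRow(buffer) * CVPixelBufferGetHeight(buffer)))
    try data.write(to: url)
}

/// Sketch: rebuild a CVPixelBuffer from a file written by the function above.
func loadPixelBuffer(from url: URL) throws -> CVPixelBuffer? {
    let data = try Data(contentsOf: url)
    let header = data.prefix(16).withUnsafeBytes { Array($0.bindMemory(to: UInt32.self)) }
    let (width, height) = (Int(header[0]), Int(header[1]))
    let (format, srcBPR) = (OSType(header[2]), Int(header[3]))

    var buffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, format, nil, &buffer)
    guard status == kCVReturnSuccess, let out = buffer else { return nil }

    CVPixelBufferLockBaseAddress(out, [])
    defer { CVPixelBufferUnlockBaseAddress(out, []) }
    guard let dest = CVPixelBufferGetBaseAddress(out) else { return nil }

    // The new buffer's row stride may differ from the stored one, so copy row by row.
    let destBPR = CVPixelBufferGetBytesPerRow(out)
    data.dropFirst(16).withUnsafeBytes { (src: UnsafeRawBufferPointer) in
        for row in 0..<height {
            memcpy(dest + row * destBPR,
                   src.baseAddress! + row * srcBPR,
                   min(srcBPR, destBPR))
        }
    }
    return out
}
```

Be aware that raw frames are large (a 1080p BGRA frame is roughly 8 MB), so if bit-exact persistence is the only requirement, encoding each frame to a lossless format such as PNG may be more practical than dumping raw bytes.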

Related

Parsing RAW camera images

Does parsing a RAW camera image yield all the information necessary to create a viewable picture from it?
More specifically, is it possible to obtain information like sensor layout, crop position and size, ... just from the RAW file?
I'm asking because, in the case of crop, there is the ActiveArea TIFF tag, but it seems it is not always available. Also, I see that RAW conversion software seems to always include a hand-crafted database of attributes like black point and sensor pattern, so I was wondering whether that is some kind of optimisation or whether all the required information is simply not there in the RAW file.

Is it possible to get the raw IR image data from the Superframe?

I am using the Java API with the Tango Peanut phone, and I wanted to know if there is a way to get the raw IR image from the RGB-IR camera that the depth sensor uses. I know that a quarter of the pixels from the RGB-IR camera are IR data. I also know that all of the 4 MP RGB-IR image gets put into the superframe and then converted to YUV. However, it is unclear how to decode the IR channel, or whether it is even possible at this point. If it's lost inside the YUV superframe, is there any other way I can retrieve the raw IR image?
In order to get the depth data:
http://www.pocketmagic.net/wp-content/uploads/2014/05/sbrl.zip
taken from http://palcu.blogspot.ro/2014/04/google-project-tango-hackathon-in.html
You might also want to try
https://github.com/googlesamples/tango-examples-c/wiki/Depth:-Point-Cloud-Viewer
http://www.pocketmagic.net/2014/05/google-tango-hackathon-in-timisoara/#.VCQ6H1eHuiN
Please keep me posted on success/failure. Are you on a tablet or phone?

How to convert animation to video format?

I have an application which is simply an animation (some circles moving around).
I want to know how I can save this animation as a video, such as MP4.
Or is it possible to record (capture) what happens inside a node and save it in a video format?
There is no built-in functionality for that.
If you just want to record how your application runs, there are several tools for that, e.g. Fraps.
If you want to create your own video programmatically, you need to use some third-party software (or write your own) that allows you to encode a set of images into a video, e.g. Xuggle. Here you can find out how to take screenshots in JavaFX: Taking a screenshot of a scene or a portion of a scene in JavaFX 2.2

Windows Phone 7, Steganography and MediaLibrary.SavePicture

I am working with the MediaLibrary on WP7 and I am doing steganography on a BitmapImage (WriteableBitmap), which works fine (using this approach: http://www.codeproject.com/Articles/4877/Steganography-Hiding-messages-in-the-Noise-of-a-Pi)
Now the problem occurs when I call the MediaLibrary.SavePicture method to save my bitmap to the phone. When I load this saved bitmap back from the phone, I can see that the pixels of the bitmap are shifted and my steganography data is lost.
Is there a way to avoid this behavior during the save method?
Better yet, is there a way to attach some metadata to my bitmaps that would be persisted with the bitmap?
Thanks a lot!
Leo
The issue might be that MediaLibrary.SavePicture saves the stream as a JPEG, whereas your byte stream represents an uncompressed bitmap. Since JPEG is a lossy compression format, some of your data may be thrown away, and so your hidden byte stream becomes corrupt. I'm not familiar with steganography, but if possible you could try creating a blank JPEG image and writing your data to that; that way, your image format remains the same. You could try using Extensions.SaveJpeg with a quality value of 100, writing the data to that, and then saving it to the MediaLibrary.
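For context on why lossy re-encoding is fatal here (a conceptual sketch in Swift rather than WP7 code, assuming an LSB-style scheme like the one in the linked article): the hidden message lives in the least significant bit of each pixel value, which is exactly the kind of low-order detail JPEG quantization throws away.

```swift
// Conceptual LSB steganography: each hidden bit replaces the lowest bit of a
// pixel byte. Any re-encoding that perturbs pixel values by even +/-1, as
// lossy JPEG compression does, destroys the hidden message.
func embedBit(_ bit: UInt8, into pixelByte: UInt8) -> UInt8 {
    (pixelByte & 0b1111_1110) | (bit & 1)
}

func extractBit(from pixelByte: UInt8) -> UInt8 {
    pixelByte & 1
}
```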

How can I obtain raw data from a CVImageBuffer object

I'm trying to use Cocoa to grab images from a webcam. I'm able to get the image in RGBA format using QTKit and the didOutputVideoFrame delegate call, converting the CVImageBuffer to a CIImage and then to an NSBitmapImageRep.
I know my camera grabs natively in YUV; what I want is to get the YUV data directly from the CVImageBuffer and process the YUV frame before displaying it.
My question is: how can I get the YUV data from the CVImageBuffer?
Thanks.
You might be able to create a CIImage from the buffer using +[CIImage imageWithCVImageBuffer:] and then render that CIImage into a CGBitmapContext of the desired pixel format.
Note, I have not tested this solution.
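Alternatively (a minimal, untested Swift sketch, not from the original answer): if the capture output is configured to deliver a planar YUV pixel format, you may be able to read the planes straight out of the buffer with the CVPixelBuffer plane accessors and skip the CIImage round trip entirely. This assumes a bi-planar 4:2:0 format such as kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange.

```swift
import CoreVideo

/// Sketch: read the Y and CbCr planes directly from a CVImageBuffer,
/// assuming a bi-planar 4:2:0 pixel format.
func processYUV(_ imageBuffer: CVImageBuffer) {
    CVPixelBufferLockBaseAddress(imageBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly) }

    // Plane 0 holds luma (Y, one byte per pixel); plane 1 holds
    // interleaved Cb/Cr samples at half the luma resolution.
    guard let yBase = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0),
          let cBase = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1) else { return }
    let yStride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0)
    let cStride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1)
    let width   = CVPixelBufferGetWidthOfPlane(imageBuffer, 0)
    let height  = CVPixelBufferGetHeightOfPlane(imageBuffer, 0)

    // Example: read the YUV triple for the pixel at the frame's center.
    let (x, y) = (width / 2, height / 2)
    let luma = yBase.load(fromByteOffset: y * yStride + x, as: UInt8.self)
    let cb   = cBase.load(fromByteOffset: (y / 2) * cStride + (x / 2) * 2, as: UInt8.self)
    let cr   = cBase.load(fromByteOffset: (y / 2) * cStride + (x / 2) * 2 + 1, as: UInt8.self)
    _ = (luma, cb, cr) // process the frame here before displaying it
}
```

The chroma offsets divide x and y by two because in 4:2:0 each Cb/Cr pair covers a 2x2 block of luma pixels.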
I asked what I thought was a different question, but it turned out to have the same answer as this one: raw data from CVImageBuffer without rendering?