Re-render video using the new Photos framework in iOS 8

I need to be able to take a video from Photos and re-render it: clip it in time, change its width and height, and change the frame rate. Certainly I need to start with:
PHContentEditingInputRequestOptions *options = [[PHContentEditingInputRequestOptions alloc] init];
[self.asset requestContentEditingInputWithOptions:options completionHandler:^(PHContentEditingInput *contentEditingInput, NSDictionary *info) {
    // Get full image
    NSURL *url = [contentEditingInput fullSizeImageURL];
}];
And I should be able to adjust width, height and duration, grab an NSData from that, and write it out to the file system.
But the url is nil, which implies to me that I can't edit videos with the new Photos framework. (ALAsset didn't have a problem with this using AVAssetExportSession.) This makes sense since the Apple Dev sample code can't edit videos either.
Now, to make life easier I could just pass that url to an AVAssetExportSession but I can't, because it is nil. If I just modified width, height and duration I'd still need to grab an NSData from it, write that out to the file system.
I do not need to write the modified video back to Photos, I actually need the video on the file system since I'll be uploading it to our servers.

fullSizeImageURL is for working with Photo assets. You want the avAsset property when working with a video. Modify the actual video, not the metadata, by writing a new video file.
To do that, you could use that avAsset in an AVMutableComposition:
Insert the appropriate time range of the avAsset's video track (AVAssetTrack) into an AVMutableCompositionTrack. That'll do your trimming.
Place and size it appropriately using layer instructions (AVMutableVideoCompositionLayerInstruction) to do your cropping and scaling.
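A rough sketch of how those pieces could fit together, ending with an export straight to a file URL on disk (outputURL, the trim range, render size, scale factor and export preset below are all placeholders, not values from the original question; audio handling and error checking are omitted):
// Assumes the Photos and AVFoundation frameworks are imported.
PHContentEditingInputRequestOptions *options = [[PHContentEditingInputRequestOptions alloc] init];
[self.asset requestContentEditingInputWithOptions:options completionHandler:^(PHContentEditingInput *contentEditingInput, NSDictionary *info) {
    AVAsset *avAsset = contentEditingInput.avAsset;

    // Trim: copy only the wanted time range into a new composition.
    AVMutableComposition *composition = [AVMutableComposition composition];
    AVMutableCompositionTrack *videoTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                 preferredTrackID:kCMPersistentTrackID_Invalid];
    AVAssetTrack *sourceTrack = [[avAsset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    CMTimeRange trimRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(5.0, 600)); // placeholder trim
    [videoTrack insertTimeRange:trimRange ofTrack:sourceTrack atTime:kCMTimeZero error:NULL];

    // Scale/crop and set the frame rate via a video composition.
    AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
    videoComposition.renderSize = CGSizeMake(640, 360); // placeholder output size
    videoComposition.frameDuration = CMTimeMake(1, 30); // placeholder frame rate (30 fps)

    AVMutableVideoCompositionInstruction *instruction =
        [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration);
    AVMutableVideoCompositionLayerInstruction *layerInstruction =
        [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
    [layerInstruction setTransform:CGAffineTransformMakeScale(0.5, 0.5) atTime:kCMTimeZero]; // placeholder scale
    instruction.layerInstructions = @[layerInstruction];
    videoComposition.instructions = @[instruction];

    // Export straight to the file system; no need to write back to Photos.
    AVAssetExportSession *exportSession =
        [[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPresetMediumQuality];
    exportSession.videoComposition = videoComposition;
    exportSession.outputURL = outputURL; // hypothetical file URL in tmp or Documents
    exportSession.outputFileType = AVFileTypeMPEG4;
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        // Upload the file at outputURL once status == AVAssetExportSessionStatusCompleted.
    }];
}];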

Related

When creating a CGImage with a decode array, the output has all of it's pixels offset by a small amount

Very weird behavior, but I have narrowed the problem down as far as I can go, I think.
I have an NSImage, let's call it inputImage. It is represented by an NSBitmapImageRep in a CGColorSpaceCreateDeviceGray color space, if that matters.
I want to create a CGImageRef from it, but with inverted colors.
NSData *data = [inputImage TIFFRepresentation];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
NSBitmapImageRep *maskedRep = (NSBitmapImageRep *)[inputImage representations][0];
CGFloat decode[] = {0.0, 1.0};
CGImageRef maskRef = CGImageMaskCreate(maskedRep.pixelsWide,
                                       maskedRep.pixelsHigh,
                                       8,
                                       maskedRep.bitsPerPixel,
                                       maskedRep.bytesPerRow,
                                       provider,
                                       decode,
                                       false);
NSImage *testImage = [[NSImage alloc] initWithCGImage:maskRef size:NSMakeSize(1280,1185)]; //the size of the image, hard-coded for testing
NSLog(#"testimage: %#", testImage);
The problem is when I look at testImage, all of the pixels are slightly offset to the right from the original image.
(Screenshots of inputImage and testImage not included here.)
It's much easier to see if you save the pictures off, but you'll notice that everything in testImage is offset to the right by about 5 pixels or so; there is a white gap to the left of the black content in testImage.
Somewhere in my 5 lines of code I am somehow moving my image over. Does anybody have any idea how this could be happening? I'm currently suspecting TIFFRepresentation.
The data provider you pass to CGImageMaskCreate() is supposed to be raw pixels in the form specified by the other parameters, not an image file format. You shouldn't be passing TIFF data. Frankly, I'm surprised you got anything remotely resembling your original image, rather than noise/garbage. I'm guessing the TIFF data wasn't compressed, by sheer chance. You can actually see a bit of noise, which is the TIFF header interpreted as pixels, at the upper-left of your mask.
Probably, your best bet is to create a CGImage from your inputImage (or, depending on how inputImage was created, skip NSImage and create the CGImage directly from a file using the CGImageSource API or CGImageCreateWith{JPEG,PNG}DataProvider()). To get a CGImage from an NSImage, use -CGImageForProposedRect:context:hints:.
Then, get the data provider from that CGImage and create the mask CGImage from that, using the various properties (width, height, bpp, etc.) queried from the first CGImage using the various CGImageGet... functions.
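A minimal sketch of that suggestion, keeping the decode array from the question (this assumes inputImage is the grayscale NSImage described above; error checking is omitted):
// Get a CGImage for the NSImage, then build the mask from that CGImage's
// own data provider and pixel layout instead of from TIFF file data.
NSRect proposedRect = NSMakeRect(0, 0, inputImage.size.width, inputImage.size.height);
CGImageRef sourceImage = [inputImage CGImageForProposedRect:&proposedRect context:nil hints:nil];

CGFloat decode[] = {0.0, 1.0};
CGImageRef maskRef = CGImageMaskCreate(CGImageGetWidth(sourceImage),
                                       CGImageGetHeight(sourceImage),
                                       CGImageGetBitsPerComponent(sourceImage),
                                       CGImageGetBitsPerPixel(sourceImage),
                                       CGImageGetBytesPerRow(sourceImage),
                                       CGImageGetDataProvider(sourceImage), // raw pixels, not a TIFF
                                       decode,
                                       false);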

Reduce resolution in captureOutput:didOutputSampleBuffer:fromConnection:

I'm trying to use a smaller resolution when accessing a webcam video feed, because I need to do fast editing for the preview. Currently the image coming out of the sample buffer is 1600x1200, which is too high for what I want to do with it.
When setting up the session I use the following, which the session accepts; however, the change does not seem to take effect:
_session = [[AVCaptureSession alloc] init];
if ([_session canSetSessionPreset:AVCaptureSessionPreset320x240])
{
    [_session setSessionPreset:AVCaptureSessionPreset320x240];
}
One other thing: I will also need the webcam to take full-size images using captureStillImageAsynchronouslyFromConnection:, and that part currently works fine.
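As an aside (this is my own sketch, not part of the original exchange): on the Mac you can also ask the AVCaptureVideoDataOutput itself for scaled buffers by putting width/height keys in its videoSettings, independent of the session preset. Roughly, using the _session from the code above:
// Request 320x240 BGRA frames from the data output; whether the scaling is
// honored depends on the camera and OS version.
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferWidthKey           : @320,
    (id)kCVPixelBufferHeightKey          : @240
};
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
if ([_session canAddOutput:videoOutput]) {
    [_session addOutput:videoOutput];
}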

HTTP Live Streaming of static file to iOS device

I'm trying to understand the "chunked" aspect of HTTP Live Streaming a static video file to an iOS device. Where does the chunking of the video file happen?
Edit: from reading HTTP Live Streaming and a bit more of https://datatracker.ietf.org/doc/html/draft-pantos-http-live-streaming-07 it sounds like the video file is split into .ts segments on the server. Alternatively, the m3u8 playlists can specify byte offsets into the file (apparently using EXT-X-BYTERANGE).
Here's what I understand of this process after reading Apple's HLS description and https://datatracker.ietf.org/doc/html/draft-pantos-http-live-streaming-07:
A static file lives on my server. It has the proper audio/video encoding (H.264 and AAC).
I'll pass an m3u8 playlist to the media player (MPMoviePlayer or similar) in my app.
The app will "reload the index" during media playback. In other words the app will request additional segments to play.
Each 10-second segment is in an MPEG transport stream container (a minimal example playlist is sketched below).
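For concreteness, a minimal VOD playlist for a pre-segmented file could look like this (the segment file names are made up):
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
fileSequence0.ts
#EXTINF:10.0,
fileSequence1.ts
#EXTINF:10.0,
fileSequence2.ts
#EXT-X-ENDLIST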
My understanding of this process is incomplete (and perhaps incorrect). Any additional info is much appreciated.
What exactly are you asking for? More info?
The app is not so much reloading the index as playing it: the M3U8 playlist is what points the player at the correctly encoded segments. That way you only have to make a connection between the media player and the "manifest file". For example:
NSURL *fileURL = [NSURL URLWithString:@"http://techxvweb.fr/html5/AppleOutput/2012-03-10-j23-dax-smr-mt1-m3u8-aapl.ism/manifest(format=m3u8-aapl)"];
moviePlayerController = [[MPMoviePlayerController alloc] initWithContentURL:fileURL];
/* Inset the movie frame in the parent view frame. */
CGRect viewInsetRect = CGRectInset([self.view bounds], 0.0, 0.0);
[[moviePlayerController view] setFrame:viewInsetRect];
[self.view addSubview:moviePlayerController.view];
[moviePlayerController play];
where the NSURL is the URL of your manifest file. Note that I'm appending:
/manifest(format=m3u8-aapl)
to the original manifest file; that suffix is what translates the "ISM" file into the correct M3U8 syntax.

Write NSImage to file

I have an NSImage from NSImage *myImage = [[NSBitmapImageRep alloc] initWithFocusedViewRect:[outputView bounds]]; and I need to save it to a file. I haven't been able to find anything about saving an NSImage in any format. Has anyone done this? Is it even possible?
Thanks
The way you say you're making an NSImage doesn't make sense. You show how to create an NSBitmapImageRep, not an NSImage.
Before you save it to a file, you convert to NSData. There is an NSImage method to convert to TIFF data, and there is an NSBitmapImageRep method to convert to data in several formats.
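A minimal sketch of that route (the output path is just an example):
// Declare the rep with its real type (as pointed out above, it is an
// NSBitmapImageRep, not an NSImage), then ask it for PNG data.
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:[outputView bounds]];
NSData *pngData = [imageRep representationUsingType:NSPNGFileType properties:@{}];
[pngData writeToFile:@"/tmp/snapshot.png" atomically:YES]; // example path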
If you're creating a snapshot of an actual view (as opposed to an image that you've locked focus on), then an alternative to creating a bitmap image rep would be to ask the view for PDF data of the desired rectangle. This will be vector rather than raster (except where the view itself draws an image), which will scale more nicely to higher resolutions. You would then write that data to a file the same as you would any other data.
Having an NSBitmapImageRep (as JWWalker pointed out, that code doesn't create an NSImage instance), you can ask the image rep for a CGImage version of itself, and then create a CGImageDestination to write that image to a file. This may be more efficient than obtaining a data object (which will hold the raster data in memory) and provides more options.
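A rough sketch of the CGImageDestination route, reusing imageRep from the previous sketch (the destination URL and PNG type are just examples; kUTTypePNG comes from CoreServices, and ImageIO must be linked):
// Get a CGImage from the bitmap rep and hand it to an ImageIO destination.
CGImageRef cgImage = [imageRep CGImage];
NSURL *destURL = [NSURL fileURLWithPath:@"/tmp/snapshot.png"]; // example path
CGImageDestinationRef destination =
    CGImageDestinationCreateWithURL((__bridge CFURLRef)destURL, kUTTypePNG, 1, NULL);
if (destination) {
    CGImageDestinationAddImage(destination, cgImage, NULL);
    CGImageDestinationFinalize(destination);
    CFRelease(destination);
}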

Set resolution in QTCapture?

I'm recording from a webcam. The camera looks great in PhotoBooth. However, when I preview it in my program with a QTCaptureView, or record it to a file, it is very, very slow. The reason is that QuickTime is giving me the maximum possible resolution of 1600x1200. How can I force a more reasonable size for both my QTCaptureView and my recording to file?
As described here, you can set the pixel buffer attributes within the output from your QTCaptureSession to change the resolution of the video being captured. For example:
[[[myCaptureSession outputs] objectAtIndex:0] setPixelBufferAttributes:
    [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt:480], kCVPixelBufferHeightKey,
        [NSNumber numberWithInt:640], kCVPixelBufferWidthKey, nil]];
will set the video resolution to be 640x480 for the first output in your capture session. This should also adjust the camera settings themselves to have it return image frames of that size (if supported by the camera hardware).
You may also wish to use base MPEG4 encoding, instead of h.264, to do your realtime video recording. This can be set using code similar to the following:
NSArray *outputConnections = [mCaptureMovieFileOutput connections];
QTCaptureConnection *connection;
for (connection in outputConnections)
{
    if ([[connection mediaType] isEqualToString:QTMediaTypeVideo])
        [mCaptureMovieFileOutput setCompressionOptions:[QTCompressionOptions compressionOptionsWithIdentifier:@"QTCompressionOptionsSD480SizeMPEG4Video"] forConnection:connection];
}
h.264 encoding, particularly the Quicktime implementation, uses a lot more CPU power to encode than the base MPEG4.
The solution above (setPixelBufferAttributes:) does set the preview size correctly, but once movie recording starts, the preview image will get set back to its original value (1280 x 1024 on my MBP) if you've set (almost) any compression options.
If that was just during movie recording that would be one thing, but once recording is complete, further calls to setPixelBufferAttributes will have no effect.
So, you can change the preview image size, as long as you don't plan on doing any actual compressed movie recording.
This is on 10.5.8/9L30, MBP with a GeForce 8600M. Any compression option except for no compression or QTCompressionOptionsSD240SizeH264Video breaks as described above.
rdar://7447812
To add more information about the topic:
you can't specify the resolution directly on the capture side. Rather, it is the output of the capture session that defines it. For example, if you capture into a QTCaptureDecompressedVideoOutput, you should specify the resolution on that object.
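For instance, a sketch along the lines of the setPixelBufferAttributes: call shown earlier (myCaptureSession is the session from that answer; the 640x480 size is just an example):
// Ask the decompressed video output for 640x480 frames before adding it to the session.
QTCaptureDecompressedVideoOutput *decompressedOutput = [[QTCaptureDecompressedVideoOutput alloc] init];
[decompressedOutput setPixelBufferAttributes:[NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:640], kCVPixelBufferWidthKey,
    [NSNumber numberWithInt:480], kCVPixelBufferHeightKey, nil]];

NSError *error = nil;
if (![myCaptureSession addOutput:decompressedOutput error:&error]) {
    NSLog(@"Could not add output: %@", error);
}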
