HTTP Live Streaming of static file to iOS device - mpmovieplayercontroller

I'm trying to understand the "chunked" aspect of HTTP Live Streaming a static video file to an iOS device. Where does the chunking of the video file happen?
Edit: from reading Apple's HTTP Live Streaming overview and a bit more of https://datatracker.ietf.org/doc/html/draft-pantos-http-live-streaming-07, it sounds like the video file is split into .ts segments on the server. Alternatively, the m3u8 playlists can specify byte offsets into a single file (using EXT-X-BYTERANGE).
Here's what I understand of this process after reading Apple's HLS description and https://datatracker.ietf.org/doc/html/draft-pantos-http-live-streaming-07:
A static file lives on my server. It has the proper audio/video encoding (H.264 and AAC).
I'll pass an m3u8 playlist to the media player (MPMoviePlayer or similar) in my app.
The app will "reload the index" during media playback. In other words, the app will request additional segments to play.
Each 10-second segment is in an MPEG transport stream container.
My understanding of this process is incomplete (and perhaps incorrect). Any additional info is much appreciated.
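For concreteness, a server-side segmenter (such as Apple's mediafilesegmenter tool) produces a media playlist along these lines; the segment filenames here are hypothetical:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
fileSequence0.ts
#EXTINF:10.0,
fileSequence1.ts
#EXTINF:10.0,
fileSequence2.ts
#EXT-X-ENDLIST
```

For video on demand the full playlist ends with #EXT-X-ENDLIST and is served up front, so the "reload the index" behavior mainly matters for live streams, where new segments keep appearing.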

What exactly are you asking for? More info?
The app is not so much reloading the index as playing it: the M3U8 file switches to the correctly encoded stream, so you only have to make one connection between the media player and the "manifest file". For example:
NSURL *fileURL = [NSURL URLWithString:@"http://techxvweb.fr/html5/AppleOutput/2012-03-10-j23-dax-smr-mt1-m3u8-aapl.ism/manifest(format=m3u8-aapl)"];
moviePlayerController = [[MPMoviePlayerController alloc] initWithContentURL:fileURL];
/* Inset the movie frame in the parent view frame. */
CGRect viewInsetRect = CGRectInset([self.view bounds], 0.0, 0.0);
[[moviePlayerController view] setFrame:viewInsetRect];
[self.view addSubview:moviePlayerController.view];
[moviePlayerController play];
where the NSURL is the URL to your manifest file. Note that I'm appending
/manifest(format=m3u8-aapl)
to the original manifest URL, which tells the server to translate the "ISM" file into the correct M3U8 syntax.

Related

Re-render video using the new Photos Framework in iOS8

I need to be able to take a video from Photos and re-render it: clip it in time, change the width and height, and change the frame rate. Certainly I need to start with:
PHContentEditingInputRequestOptions *options = [[PHContentEditingInputRequestOptions alloc] init];
[self.asset requestContentEditingInputWithOptions:options completionHandler:^(PHContentEditingInput *contentEditingInput, NSDictionary *info) {
// Get full image
NSURL *url = [contentEditingInput fullSizeImageURL];
}];
And I should be able to adjust width, height, and duration, grab an NSData from that, and write it out to the file system.
But the url is nil, which implies to me that I can't edit videos with the new Photos framework. (ALAsset didn't have a problem with this using AVAssetExportSession.) This makes sense since the Apple Dev sample code can't edit videos either.
Now, to make life easier I could just pass that url to an AVAssetExportSession, but I can't, because it is nil. And even if I just modified width, height, and duration, I'd still need to grab an NSData from the result and write it out to the file system.
I do not need to write the modified video back to Photos, I actually need the video on the file system since I'll be uploading it to our servers.
fullSizeImageURL is for working with Photo assets. You want the avAsset property when working with a video. Modify the actual video, not the metadata, by writing a new video file.
To do that, you could use that avAsset in an AVMutableComposition:
Insert the appropriate time range of the avAsset's video track (AVAssetTrack) into an AVMutableCompositionTrack. That'll do your trimming.
Place and size it appropriately using layer instructions (AVMutableVideoCompositionLayerInstruction) to do your cropping and scaling.
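A minimal sketch of the trimming step, assuming avAsset is the video obtained from the PHContentEditingInput and outputURL is a file URL you've chosen (both names are assumptions here; error and status handling are abbreviated):

```
// Trim a time range out of avAsset into a mutable composition.
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *sourceTrack =
    [[avAsset tracksWithMediaType:AVMediaTypeVideo] firstObject];

// Keep seconds 2 through 7 of the source, for example.
CMTimeRange trimRange = CMTimeRangeMake(CMTimeMakeWithSeconds(2.0, 600),
                                        CMTimeMakeWithSeconds(5.0, 600));
NSError *error = nil;
[videoTrack insertTimeRange:trimRange
                    ofTrack:sourceTrack
                     atTime:kCMTimeZero
                      error:&error];

// Export the composition to the file system for upload.
// (Add an AVMutableVideoComposition with layer instructions here
// if you also need cropping/scaling.)
AVAssetExportSession *exporter =
    [[AVAssetExportSession alloc] initWithAsset:composition
                                     presetName:AVAssetExportPreset640x480];
exporter.outputURL = outputURL;
exporter.outputFileType = AVFileTypeMPEG4;
[exporter exportAsynchronouslyWithCompletionHandler:^{
    // Check exporter.status / exporter.error before uploading.
}];
```

This writes the result straight to the file system, which matches the upload requirement; nothing has to be written back into Photos.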

MPMediaPickerController for in house app videos

I have an app that has stored video files. I would like the user to be able to pick and play ANY video stored in my app's files, NOT a predetermined movie like this:
//
NSString *movieFile = [[NSBundle mainBundle]
    pathForResource:@"somemoviename" ofType:@"m4v"];
MPMoviePlayerController *moviePlayer = [[MPMoviePlayerController alloc]
    initWithContentURL:[NSURL fileURLWithPath:movieFile]];
//
so it would work just like getting a video from the photo library on my iPhone, but with the video files in the application itself.
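One way to sketch this, assuming the videos live in the app's Documents directory (adjust the path and extensions to wherever your app actually stores them):

```
// List every video file in the app's Documents directory so the
// user can pick one, then play the selection.
NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                        NSUserDomainMask, YES) objectAtIndex:0];
NSArray *contents = [[NSFileManager defaultManager]
                        contentsOfDirectoryAtPath:docsDir error:NULL];
NSArray *videos = [contents filteredArrayUsingPredicate:
    [NSPredicate predicateWithFormat:@"pathExtension IN %@",
        [NSArray arrayWithObjects:@"m4v", @"mp4", @"mov", nil]]];

// Present `videos` in a UITableView (or similar picker); on selection:
NSString *chosen = [docsDir stringByAppendingPathComponent:
                        [videos objectAtIndex:0]];
MPMoviePlayerController *moviePlayer = [[MPMoviePlayerController alloc]
    initWithContentURL:[NSURL fileURLWithPath:chosen]];
[moviePlayer play];
```

MPMediaPickerController itself only browses the device's iPod/media library, so for files inside your own sandbox you have to enumerate and present them yourself as above.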

Reduce resolution in captureOutput:didOutputSampleBuffer:fromConnection:

I'm trying to use a smaller resolution when accessing a webcam video feed; I need to do fast editing when it comes to previewing. Currently the image coming out of the sample buffer is 1600x1200, which is too high for what I want to do with it.
When setting up the session I use the following, which the session accepts; however, the changes do not seem to take effect:
_session = [[AVCaptureSession alloc] init];
if ([_session canSetSessionPreset:AVCaptureSessionPreset320x240])
{
[_session setSessionPreset:AVCaptureSessionPreset320x240];
}
One other thing: I will also need the webcam to take full-size images using captureStillImageAsynchronouslyFromConnection:. That part currently works fine.
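If the preset really is being ignored, an alternative worth trying on the Mac is to request scaled buffers from the data output itself via its videoSettings; unlike on iOS, the Mac's AVCaptureVideoDataOutput honors the pixel-buffer width/height keys there. A sketch, assuming _session is the capture session from above:

```
// Ask the data output (rather than the session preset) for 320x240
// BGRA frames; the still-image output is unaffected and can keep
// capturing at full size.
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
output.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA],
        (id)kCVPixelBufferPixelFormatTypeKey,
    [NSNumber numberWithInt:320], (id)kCVPixelBufferWidthKey,
    [NSNumber numberWithInt:240], (id)kCVPixelBufferHeightKey,
    nil];
if ([_session canAddOutput:output])
    [_session addOutput:output];
```

Inside captureOutput:didOutputSampleBuffer:fromConnection: you can confirm the actual size with CVPixelBufferGetWidth()/CVPixelBufferGetHeight() on the buffer.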

Upload JPG image to Twitter

I'm trying to upload a JPG image to Twitter. I can upload @"icon.png" as many times as I want but can't seem to get a JPG to upload; I always get error 403 back from Twitter. This is my code:
ACAccount *twitterAccount = [arrayOfAccounts objectAtIndex:0];
UIImage *image = [UIImage imageNamed:file]; // file is the name of a jpg image. If replaced with @"icon.png" the upload works
TWRequest *postRequest = [[TWRequest alloc] initWithURL:[NSURL URLWithString:@"https://upload.twitter.com/1/statuses/update_with_media.json"]
    parameters:[NSDictionary dictionaryWithObject:@"Hello. This is a tweet." forKey:@"status"] requestMethod:TWRequestMethodPOST];
[postRequest addMultiPartData:UIImagePNGRepresentation(image) withName:@"media" type:@"multipart/png"]; // I've tried this with UIImageJPEGRepresentation(image, 80) and multipart/jpg and jpeg to no avail
// Set the account used to post the tweet.
[postRequest setAccount:twitterAccount];
[postRequest performRequestWithHandler:^(NSData *responseData, NSHTTPURLResponse *urlResponse, NSError *error)
{
    NSLog(@"Twitter response, HTTP status: %ld", (long)[urlResponse statusCode]);
}];
Please help!
If I use the below, it works:
UIImage *image = [UIImage imageNamed:@"background.png"]; // This file is copied to the device when the app is built.
But if I use this, it does not work, even though I have confirmed that this path and file name do in fact pull up the image:
UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%@/%@", dir, file]];
What am I doing wrong? Thanks
/// SOLUTION
I may be an idiot, please don't crucify me, but here is what I needed to change the UIImage to:
UIImage *image = [UIImage imageWithContentsOfFile:[NSString stringWithFormat:@"%@/%@", dir, file]];
This allows both PNG and JPG to upload without issue. Stupid mistake, but at least it's working now. Thanks for the replies!
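For reference, the working combination (an explicit file path plus a MIME type matching the data) looks something like this; dir and file are the same variables as in the question:

```
// imageNamed: only searches the main bundle (and caches aggressively);
// imageWithContentsOfFile: takes a full filesystem path.
UIImage *image = [UIImage imageWithContentsOfFile:
    [dir stringByAppendingPathComponent:file]];

// Attach JPEG data with the standard image/jpeg content type.
// Note UIImageJPEGRepresentation's quality is 0.0-1.0, not 0-100.
[postRequest addMultiPartData:UIImageJPEGRepresentation(image, 0.8)
                     withName:@"media"
                         type:@"image/jpeg"];
```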
A 403 can also mean Twitter itself rejected the request: you cannot post the same content twice from the same Twitter account. If you want to upload the same picture again, you must first remove the earlier tweet from the account; otherwise, upload a different picture.

Set resolution in QTCapture?

I'm recording from a webcam. The camera looks great in PhotoBooth. However, when I preview it in my program with a QTCaptureView, or record it to a file, it is very, very slow. The reason is that QuickTime is giving me the maximum possible resolution of 1600x1200. How can I force a more reasonable size for both my QTCaptureView and my recording to file?
As described here, you can set the pixel buffer attributes within the output from your QTCaptureSession to change the resolution of the video being captured. For example:
[[[myCaptureSession outputs] objectAtIndex:0] setPixelBufferAttributes:
    [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt:480], kCVPixelBufferHeightKey,
        [NSNumber numberWithInt:640], kCVPixelBufferWidthKey, nil]];
will set the video resolution to be 640x480 for the first output in your capture session. This should also adjust the camera settings themselves to have it return image frames of that size (if supported by the camera hardware).
You may also wish to use base MPEG4 encoding, instead of h.264, to do your realtime video recording. This can be set using code similar to the following:
NSArray *outputConnections = [mCaptureMovieFileOutput connections];
QTCaptureConnection *connection;
for (connection in outputConnections)
{
    if ([[connection mediaType] isEqualToString:QTMediaTypeVideo])
        [mCaptureMovieFileOutput setCompressionOptions:
            [QTCompressionOptions compressionOptionsWithIdentifier:@"QTCompressionOptionsSD480SizeMPEG4Video"]
                                         forConnection:connection];
}
h.264 encoding, particularly the QuickTime implementation, uses a lot more CPU power to encode than base MPEG-4.
The solution above (setPixelBufferAttributes:) does set the preview size correctly, but once movie recording starts, the preview image gets set back to its original value (1280x1024 on my MBP) if you've set (almost) any compression options.
If that was just during movie recording that would be one thing, but once recording is complete, further calls to setPixelBufferAttributes will have no effect.
So, you can change the preview image size, as long as you don't plan on doing any actual compressed movie recording.
This is on 10.5.8/9L30, MBP with a GeForce 8600M. Any compression option except for no compression or QTCompressionOptionsSD240SizeH264Video breaks as described above.
rdar://7447812
To add more information about the topic: you can't specify the resolution directly on the capture side. Rather, it is the output of the capture session that defines the resolution. E.g., if you capture into a QTCaptureDecompressedVideoOutput, you specify the resolution on that object.
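A rough sketch of that, assuming myCaptureSession already has its device input attached:

```
// Ask the decompressed-video output (not the device) for 640x480 frames.
QTCaptureDecompressedVideoOutput *output =
    [[QTCaptureDecompressedVideoOutput alloc] init];
[output setPixelBufferAttributes:
    [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt:640], (id)kCVPixelBufferWidthKey,
        [NSNumber numberWithInt:480], (id)kCVPixelBufferHeightKey, nil]];

NSError *error = nil;
[myCaptureSession addOutput:output error:&error];
```

The same pixel-buffer attribute dictionary works whether you set it at output-creation time, as here, or later on an output already attached to the session.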
