ALAssetsGroupSavedPhotos returns recent, not all, videos under iOS 8

Our app lets the user load a video from their camera roll. This is pretty standard stuff:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];

// Enumeration over all groups with videos
ALAssetsLibraryGroupsEnumerationResultsBlock groupsEnumerationBlock = ^(ALAssetsGroup *group, BOOL *stop)
{
    [group setAssetsFilter:[ALAssetsFilter allVideos]];
    [group enumerateAssetsUsingBlock:^(ALAsset *result, NSUInteger index, BOOL *stop)
    {
        if (result) {
            // do stuff here with each video
        }
    }];
};

[library enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos
                       usingBlock:groupsEnumerationBlock
                     failureBlock:^(NSError *error) {
                         log4Debug(@"No groups found or accessible for Camera Roll.");
                     }];
The problem, of course, is iOS 8. That code enumerates all videos under iOS 7, but under iOS 8 it only enumerates recent videos; videos older than 30 days are not available.
Indeed, when you look at the Photos app under iOS 8 you don't even see a Camera Roll anymore, just a "Recently Added" album. There is also a "Videos" album which has all videos; accessing that would be fine here.
We cannot convert to PhotoKit (today). We'll want to do that soon, but right now we need a solution that works with both iOS 7 and iOS 8.
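One iOS 7-compatible avenue worth trying (a sketch on my part, not something verified against iOS 8's smart albums): widen the group types passed to enumerateGroupsWithTypes: so enumeration isn't limited to the Saved Photos group, and filter each group down to videos. ALAssetsGroupAll is a real ALAssetsLibrary constant; whether the iOS 8 "Videos" smart album is reachable through it is an assumption.
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library enumerateGroupsWithTypes:ALAssetsGroupAll
                       usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
                           if (!group) { return; } // a nil group marks the end of enumeration
                           [group setAssetsFilter:[ALAssetsFilter allVideos]];
                           [group enumerateAssetsUsingBlock:^(ALAsset *result, NSUInteger index, BOOL *innerStop) {
                               if (result) {
                                   // do stuff here with each video
                               }
                           }];
                       }
                     failureBlock:^(NSError *error) {
                         NSLog(@"Group enumeration failed: %@", error);
                     }];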

Have you tried this:
PHFetchOptions *allPhotosOptions = [PHFetchOptions new];
allPhotosOptions.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"creationDate" ascending:YES]];
PHFetchResult *allPhotosResult = [PHAsset fetchAssetsWithMediaType:PHAssetMediaTypeVideo options:allPhotosOptions];
When I tested this on my device it returned all videos I have on the device and not just recent ones.
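For completeness, a small sketch of walking that fetch result (my addition, not part of the original answer); requestAVAssetForVideo:options:resultHandler: is the PhotoKit call that hands back a playable asset:
[allPhotosResult enumerateObjectsUsingBlock:^(PHAsset *asset, NSUInteger idx, BOOL *stop) {
    [[PHImageManager defaultManager] requestAVAssetForVideo:asset
                                                    options:nil
                                              resultHandler:^(AVAsset *avAsset, AVAudioMix *audioMix, NSDictionary *info) {
        // do stuff here with each video
    }];
}];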


How do I play a video on tvOS for Apple TV?

I started a blank tvOS project and created the following code:
- (void)viewDidLoad
{
    [super viewDidLoad];

    AVPlayer *avPlayer = [AVPlayer playerWithURL:[NSURL URLWithString:@"http://www.myurl.com/myvideo.mp4"]];
    AVPlayerLayer *avPlayerLayer = [AVPlayerLayer playerLayerWithPlayer:avPlayer];
    avPlayerLayer.frame = CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height);
    [self.view.layer addSublayer:avPlayerLayer];
    [avPlayer play];
}
Nothing happens in the simulator though once the app loads. No video, nothing, just a blank translucent screen in my Apple TV simulator.
What's the proper way to play a sample video on app launch for an Apple TV app from an HTTP source?
I just pasted your code in my tvOS sample project, replaced the URL and ran it.
Nothing happened. Well, except for the fact that there's a log entry telling me that App Transport Security has blocked my URL request.
So I headed to the Info.plist, disabled ATS and upon next launch the video showed up just fine.
So if you're also using a non-HTTPS URL, you're very likely running into this issue. It's easily fixed by using an HTTPS URL, disabling ATS completely, or allowing specific non-HTTPS domains in your Info.plist.
P.S.: I used this video for testing.
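For reference, a minimal Info.plist sketch of the "disable ATS completely" route (fine for quick local testing, but too broad for a shipping app; the per-domain exception keys are the better long-term option):
<!-- Add inside the top-level <dict> of Info.plist -->
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>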
You could also use TVML and TVMLJS
https://developer.apple.com/library/prerelease/tvos/documentation/TVMLJS/Reference/TVJSFrameworkReference/
Adhere to the 'TVApplicationControllerDelegate' protocol and add some properties.
AppDelegate.h
@interface AppDelegate : UIResponder <UIApplicationDelegate, TVApplicationControllerDelegate>
...
@property (strong, nonatomic) TVApplicationController *appController;
@property (strong, nonatomic) TVApplicationControllerContext *appControllerContext;
Then add the following to 'didFinishLaunchingWithOptions'
AppDelegate.m
#define url @"http://localhost:8000/main.js"

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Override point for customization after application launch.
    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];

    self.appControllerContext = [[TVApplicationControllerContext alloc] init];
    NSURL *javascriptURL = [NSURL URLWithString:url];
    self.appControllerContext.javaScriptApplicationURL = javascriptURL;

    // Log the launch options; forward them to the JS app context here if you need them there
    for (id key in launchOptions) {
        id val = [launchOptions objectForKey:key];
        NSLog(@"key=%@ value=%@", key, val);
    }

    // Create the controller once, after the context is configured
    self.appController = [[TVApplicationController alloc] initWithContext:self.appControllerContext
                                                                   window:self.window
                                                                 delegate:self];
    return YES;
}
Create a folder and add the following files:
main.js
index.tvml
main.js
function launchPlayer() {
    var player = new Player();
    var playlist = new Playlist();
    var mediaItem = new MediaItem("video", "http://trailers.apple.com/movies/focus_features/9/9-clip_480p.mov");

    player.playlist = playlist;
    player.playlist.push(mediaItem);
    player.present();
    //player.play()
}

//in application.js
App.onLaunch = function(options) {
    launchPlayer();
}
Be careful with the URL you use in the mediaItem.
Set up a template of your choice.
index.tvml
<document>
  <alertTemplate>
    <title>…</title>
    <description>…</description>
    <button>
      <text>…</text>
    </button>
    <text>…</text>
  </alertTemplate>
</document>
Open Terminal, navigate to this folder, and run:
python -m SimpleHTTPServer 8000
Make sure this port matches the port in your Objective-C url macro. (The Apple examples use 9001.)
See these tutorials for more info
http://jamesonquave.com/blog/developing-tvos-apps-for-apple-tv-with-swift/
http://jamesonquave.com/blog/developing-tvos-apps-for-apple-tv-part-2/
One issue I ran into was trying to play a local video file: it wouldn't work, and there were constraint issues, etc.
It looks like you can't serve the videos with the Python server, so either try Apache or link to a video on the web.
This SO answer pointed me there.
The best way to play video in your app on Apple TV is going to be AVKit's AVPlayerViewController. If you use AVKit, you get a lot of stuff for free.
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVPlayerViewController_Class/index.html
You simply add that player to the viewController's player property:
// player is an AVPlayer instantiated here (e.g. with playerWithURL:) or in a storyboard
AVPlayerViewController *viewController = [[AVPlayerViewController alloc] initWithNibName:nil bundle:nil];
viewController.player = player;

[self addChildViewController:viewController];
[self.view addSubview:viewController.view];
[viewController didMoveToParentViewController:self];
// setup constraints, etc.

// play the video
[player play];
Also, as mentioned in another answer, make sure the video you're trying to play comes from an HTTPS connection, or that you've disabled App Transport Security by setting the proper flags in the plist.
I didn't like the answers which messed about with subviews etc.
For full-screen playback, I use the following (Non-ARC) code:
// Play the stream
NSString *wifiStreamAddress = @"http://yourmoviefile.m3u8";

AVPlayer *player = [[AVPlayer alloc] initWithURL:[NSURL URLWithString:wifiStreamAddress]];
AVPlayerViewController *playerViewController = [[AVPlayerViewController alloc] init];
playerViewController.player = player;

// Keep pointers to player and controller using retained properties:
self.player = player;
self.playerViewController = playerViewController;

[player release];
[playerViewController release];

[self presentViewController:playerViewController animated:YES completion:^{
    [self.player play];
}];
This works really neatly, animating the presentation and fading back to the previous view when you tap the MENU button. It also works great with the remote control, using all the standard functions.
It's working for me; it may be helpful for you:
- (void)playAction
{
    // player is an AVPlayer set up elsewhere
    AVPlayerViewController *viewController = [[AVPlayerViewController alloc] initWithNibName:nil bundle:nil];
    viewController.player = player;

    [self addChildViewController:viewController];
    [self.view addSubview:viewController.view];
    [viewController didMoveToParentViewController:self];

    // play the video
    [player play];
}
Swift version
Make a PlayerViewController which inherits from AVPlayerViewController.
In the view controller that has the play button, add a function like this:
@IBAction func onClickPlay(sender: AnyObject) {
    let playerVC = PlayerViewController()
    playerVC.playVimeoVideo(urlString)
    self.presentViewController(playerVC, animated: true, completion: nil)
}
In the PlayerViewController
func playVimeoVideo(link: String) {
    player = AVPlayer(URL: NSURL(string: link)!)
    player?.play()
}
Notice
The question and some answers may be a little misleading, making you think that only a URL ending in ".mp4" can be played by the Apple TV. I believed so the first time I saw the post, but it is not true. In fact, with AVPlayerViewController you can play Vimeo streaming video! The link to the streaming video is not the page URL like https://vimeo.com/92655878; it can be extracted from a JSON file, which can be downloaded from this link:
let link = "https://vimeo.com/api/oembed.json?url=https%3A//vimeo.com/" + videoId
To get the correct URL for a specific video, you need Vimeo Pro access to obtain its stream link.

Passing a video frame to Core Image on OS X

Hi all you awesome coders! I've put this together from various helpful sources over the last couple of weeks (including a lot of posts from Stack Overflow), trying to create something that takes a webcam feed and detects smiles when they occur (it might as well draw boxes around the faces and the smiles too; that doesn't seem hard once they are detected). Please give me some leeway if the code is messy, because I'm still very much learning.
Currently I'm stuck trying to pass the image to a CIImage so it can be analysed for faces (I plan to deal with smiles after the face hurdle is overcome). As it is, the build succeeds if I comment out the block after (5): it brings up a simple AVCaptureVideoPreviewLayer in a window. I think this is what I've called "rootLayer", so it's the first layer of the displayed output; after I detect faces in the video frames I'll show a rectangle following the "bounds" of any detected face in a new layer overlaid on top of it, and I've called that layer "previewLayer"... correct?
But with the block after (5) in place, the build throws three errors:
Undefined symbols for architecture x86_64:
"_CMCopyDictionaryOfAttachments", referenced from:
-[AVRecorderDocument captureOutput:didOutputSampleBuffer:fromConnection:] in AVRecorderDocument.o
"_CMSampleBufferGetImageBuffer", referenced from:
-[AVRecorderDocument captureOutput:didOutputSampleBuffer:fromConnection:] in AVRecorderDocument.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Can anyone tell me where I'm going wrong and what my next steps are?
Thanks for any help. I've been stuck at this point for a couple of days and can't figure it out; all the examples I can find are for iOS and don't work on OS X.
- (id)init
{
    self = [super init];
    if (self) {
        // Move the output part to another function
        [self addVideoDataOutput];

        // Create a capture session
        session = [[AVCaptureSession alloc] init];

        // Set a session preset (resolution)
        self.session.sessionPreset = AVCaptureSessionPreset640x480;

        // Select devices if any exist
        AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if (videoDevice) {
            [self setSelectedVideoDevice:videoDevice];
        } else {
            [self setSelectedVideoDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeMuxed]];
        }

        NSError *error = nil;
        // Add an input
        videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
        [self.session addInput:self.videoDeviceInput];

        // Start the session (app opens slower if it is here but I think it is needed in order to send the frames for processing)
        [[self session] startRunning];

        // Initial refresh of device list
        [self refreshDevices];
    }
    return self;
}
- (void)addVideoDataOutput {
    // (1) Instantiate a new video data output object
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

    // discard if the data output queue is blocked (while CI processes the still image)
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    // (2) The sample buffer delegate requires a serial dispatch queue
    dispatch_queue_t captureOutputQueue;
    captureOutputQueue = dispatch_queue_create("CaptureOutputQueue", DISPATCH_QUEUE_SERIAL);
    [captureOutput setSampleBufferDelegate:self queue:captureOutputQueue];
    dispatch_release(captureOutputQueue); //what does this do and should it be here or after we receive the processed image back?

    // (3) Define the pixel format for the video data output
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *settings = @{key:value};
    [captureOutput setVideoSettings:settings];

    // (4) Configure the output port on the captureSession property
    if ([self.session canAddOutput:captureOutput])
        [session addOutput:captureOutput];
}
// Implement the Sample Buffer Delegate Method
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    // I *think* I have a video frame now in some sort of image format... so have to convert it into a CIImage before I can process it:
    // (5) Convert CMSampleBufferRef to CVImageBufferRef, then to a CIImage (per weichsel's answer in July '13)
    CVImageBufferRef cvFrameImage = CMSampleBufferGetImageBuffer(sampleBuffer); // Having trouble here, prog. stops and won't recognise CMSampleBufferGetImageBuffer.
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    self.ciFrameImage = [[CIImage alloc] initWithCVImageBuffer:cvFrameImage options:(__bridge NSDictionary *)attachments];
    //self.ciFrameImage = [[CIImage alloc] initWithCVImageBuffer:cvFrameImage];

    //OK so it is a CIImage. Find some way to send it to a separate CIImage function to find the faces, then smiles. Then send it somewhere else to be displayed on top of the AVCaptureVideoPreviewLayer
    //TBW
}
- (NSString *)windowNibName
{
    return @"AVRecorderDocument";
}

- (void)windowControllerDidLoadNib:(NSWindowController *)aController
{
    [super windowControllerDidLoadNib:aController];

    // Attach preview to session
    CALayer *rootLayer = self.previewView.layer;
    [rootLayer setMasksToBounds:YES]; //aaron added
    self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    [self.previewLayer setBackgroundColor:CGColorGetConstantColor(kCGColorBlack)];
    [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
    [self.previewLayer setFrame:[rootLayer bounds]];
    //[newPreviewLayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable]; //don't think I need this for OSX?
    [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
    [rootLayer addSublayer:previewLayer];
    // [newPreviewLayer release]; //what's this for?
}
(moved from the comments section)
Wow. I guess two days and one Stack Overflow post is what it takes to figure out that I hadn't added CoreMedia.framework to my project.
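For the face/smile step the question leaves open, here is a hedged Core Image sketch (my addition, not from the thread): CIDetector, CIDetectorTypeFace and the CIDetectorSmile option are real Core Image APIs, but the method name detectFacesInFrame: is purely illustrative, and the coordinate conversion for drawing boxes is left out.
// Illustrative helper (hypothetical name); call it with the CIImage built in step (5).
- (void)detectFacesInFrame:(CIImage *)frame
{
    // A CIDetector is relatively expensive to create; in real code keep one around.
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{ CIDetectorAccuracy : CIDetectorAccuracyHigh }];

    NSArray *features = [detector featuresInImage:frame
                                          options:@{ CIDetectorSmile : @YES }];
    for (CIFaceFeature *face in features) {
        // face.bounds is in image coordinates; convert before drawing a box over previewLayer
        NSLog(@"Face at %@, smiling: %d", NSStringFromRect(NSRectFromCGRect(face.bounds)), face.hasSmile);
    }
}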

AVFoundation - how to mirror video from webcam - Mac OS X

I am trying to mirror the video received from the webcam on Mac OS X. I would like to avoid doing a manual flip/transform after receiving the video buffer, so I want to set up the AVCaptureSession such that the video buffer received in the captureOutput method of AVCaptureVideoDataOutputSampleBufferDelegate is already mirrored by AVFoundation itself. I don't want to use the preview layer.
On an iMac (10.8.5), isVideoMirroringSupported on the AVCaptureConnection tests successfully before I set the videoMirrored property, but the video buffer received in the captureOutput delegate isn't mirrored.
Note: video mirroring on iOS was successful when I followed this SO answer, but that isn't helping on Mac OS X.
Code used is below. Error checking is left out for this post.
//create session
_session = [[AVCaptureSession alloc] init];

//get capture device
_device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

//create session input
NSError *error;
_sessionInput = [AVCaptureDeviceInput deviceInputWithDevice:_device error:&error];

//create session output
_sessionOutput = [[AVCaptureVideoDataOutput alloc] init];
[_sessionOutput setAlwaysDiscardsLateVideoFrames:YES];
[[_sessionOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:YES];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                                           forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[_sessionOutput setVideoSettings:videoSettings];

//serial queue to process video frames
dispatch_queue_t videoOutputQueue = dispatch_queue_create("deviceeraQueue", DISPATCH_QUEUE_SERIAL);
[_sessionOutput setSampleBufferDelegate:self queue:videoOutputQueue];

//begin session configuration
[_session beginConfiguration];

//input and output for session
if ([_session canAddInput:_sessionInput]) {
    [_session addInput:_sessionInput];
}
if ([_session canAddOutput:_sessionOutput]) {
    [_session addOutput:_sessionOutput];
}

//set video mirroring
AVCaptureConnection *avConnection = [_sessionOutput connectionWithMediaType:AVMediaTypeVideo];
if ([avConnection isVideoMirroringSupported]) {
    avConnection.videoMirrored = YES;
    NSLog(@"Video mirroring Support: YES"); // this line is printed
} else {
    NSLog(@"Video mirroring Support: NO");
}

//set session preset
[_session setSessionPreset:AVCaptureSessionPreset640x480];

[_session commitConfiguration];
...........
...........
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
.........
//sampleBuffer is not mirrored video
........
Of lesser importance 1: though it is C++, I also looked into OpenCV's VideoCapture implementation for a way to mirror video, but OpenCV doesn't mirror video on the Mac (it uses flip). That leaves libVLC/V4L.
Of lesser importance 2: slide 73 of this 2010 WWDC Apple presentation (3 MB PDF) mentions that setVideoOrientation is not supported on an AVCaptureVideoDataOutput connection, but the 2013 Apple docs have been updated and do support this method.
You can add a transform on the preview layer to flip x value of the frames before they get to the preview window.
[[self previewLayer] setTransform:CATransform3DMakeScale(-1, 1, 1)];
Then you can run the recorded video through export session and do the same transformation. That way the video preview will match the final recorded video. Bit of a hack, but it gets the same results.
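For that export step, here is a hedged sketch of applying the same horizontal flip to the recorded file with an AVAssetExportSession and a video composition; recordedFileURL and mirroredFileURL are placeholders for your own URLs, and the preset is just an example.
AVAsset *asset = [AVAsset assetWithURL:recordedFileURL];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

AVMutableVideoCompositionLayerInstruction *layerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
// Flip horizontally, then shift back into the visible area.
CGAffineTransform flip = CGAffineTransformMakeScale(-1.0, 1.0);
CGAffineTransform shift = CGAffineTransformMakeTranslation(videoTrack.naturalSize.width, 0);
[layerInstruction setTransform:CGAffineTransformConcat(flip, shift) atTime:kCMTimeZero];

AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration);
instruction.layerInstructions = @[layerInstruction];

AVMutableVideoComposition *composition = [AVMutableVideoComposition videoComposition];
composition.instructions = @[instruction];
composition.renderSize = videoTrack.naturalSize;
composition.frameDuration = CMTimeMake(1, 30);

AVAssetExportSession *export = [[AVAssetExportSession alloc] initWithAsset:asset
                                                                presetName:AVAssetExportPresetHighestQuality];
export.videoComposition = composition;
export.outputURL = mirroredFileURL;
export.outputFileType = AVFileTypeQuickTimeMovie;
[export exportAsynchronouslyWithCompletionHandler:^{
    // check export.status and export.error here
}];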
Why hack it when it's very easy? Just disable automaticallyAdjustsVideoMirroring on your AVCaptureConnection, then set the mirroring manually:
aPreviewLayer.connection.automaticallyAdjustsVideoMirroring = NO;
aPreviewLayer.connection.videoMirrored = YES;
Swift 5 version of Þorvaldur Rúnarsson's answer:
previewLayer.connection?.automaticallyAdjustsVideoMirroring = false
previewLayer.connection?.isVideoMirrored = true

Image cropping an AVCaptureSession image

I have been at it all day with no luck and, needless to say, it has been quite frustrating. I have looked up many examples and downloadable categories which all tout being able to crop images flawlessly. And they do; however, the minute I try it with an image generated via AVCaptureSession it does not work as well. I consulted both these sources:
http://codefuel.wordpress.com/2011/04/22/image-cropping-from-a-uiscrollview/
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
The project from the first link seems to work exactly as advertised, but as soon as I adapt it to do the same magic on an AV capture image... nope. Does anyone have insight into this? Here is my code for reference.
- (IBAction)TakePhotoPressed:(id)sender
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections)
    {
        for (AVCaptureInputPort *port in [connection inputPorts])
        {
            if ([[port mediaType] isEqual:AVMediaTypeVideo])
            {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    //NSLog(@"about to request a capture from: %@", stillImageOutput);
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
    {
        CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments)
        {
            // Do something with the attachments.
            //NSLog(@"attachments: %@", exifAttachments);
        }
        else
            NSLog(@"no attachments");

        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];
        NSLog(@"%f", image.size.width);
        NSLog(@"%f", image.size.height);

        float scale = 1.0f / _scrollView.zoomScale;
        CGRect visibleRect;
        visibleRect.origin.x = _scrollView.contentOffset.x * scale;
        visibleRect.origin.y = _scrollView.contentOffset.y * scale;
        visibleRect.size.width = _scrollView.bounds.size.width * scale;
        visibleRect.size.height = _scrollView.bounds.size.height * scale;

        UIImage *cropped = [self cropImage:image withRect:visibleRect];
        [croppedImage setImage:cropped];
        [image release];
    }];

    [croppedImage setHidden:NO];
}
The cropImage function used above:
- (UIImage *)cropImage:(UIImage *)originalImage withRect:(CGRect)rect
{
    CGRect transformedRect = rect;
    if (originalImage.imageOrientation == UIImageOrientationRight)
    {
        transformedRect.origin.x = rect.origin.y;
        transformedRect.origin.y = originalImage.size.width - (rect.origin.x + rect.size.width);
        transformedRect.size.width = rect.size.height;
        transformedRect.size.height = rect.size.width;
    }

    CGImageRef cr = CGImageCreateWithImageInRect(originalImage.CGImage, transformedRect);
    UIImage *cropped = [UIImage imageWithCGImage:cr scale:originalImage.scale orientation:originalImage.imageOrientation];

    [croppedImage setFrame:CGRectMake(croppedImage.frame.origin.x,
                                      croppedImage.frame.origin.y,
                                      cropped.size.width,
                                      cropped.size.height)];
    CGImageRelease(cr);
    return cropped;
}
I am also tempted, for verbosity and to arm whoever might help me with as much information as possible, to post the init of my scroll view and AV capture session. That may be a bit much, though, so if you want to see it just ask.
Now, as for what the code actually does:
What it looks like before I take the picture:
And after...
EDIT:
Well, I have a few views now and no comments, so either no one has figured it out or it's so simple they assumed I would figure it out myself. In any case, I have not made any progress. So for anyone interested, here is a small sample app with the code all set up so you can see what I am doing:
https://docs.google.com/open?id=0Bxr4V3a9QFM_NnoxMkhzZTVNVEE
It seems this little conundrum did not only have me stumped: after nearly a week, the scant few who viewed my question had no suggestions either. For this particular problem I could not get the crop to work this way; I pondered and tinkered and mused for a while to no avail. Until I did this:
[self HideElements];
UIGraphicsBeginImageContext(chosenPhotoView.frame.size);
[chosenPhotoView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self ShowElements];
And that's it: less code, and it worked pretty much instantly. Instead of trying to crop the captured image via the scroll view, I take a screenshot of the screen at that moment and then crop it using the scroll view's frame variables (a sketch of that crop follows below). The Hide/Show element functions hide any elements overlapping the picture I want.
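A hedged sketch of that follow-up crop (my addition; it reuses the cropImage:withRect: helper and the croppedImage view from the question, and assumes the screenshot and the scroll view share the same coordinate space):
// viewImage is the screenshot captured above; crop it to the scroll view's frame.
UIImage *croppedShot = [self cropImage:viewImage withRect:_scrollView.frame];
[croppedImage setImage:croppedShot];
[croppedImage setHidden:NO];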

Cocoa: AVFoundation - no active/enabled connections

I'm trying to build a simple app that captures video off of the built-in iSight camera of a MacBook. I've looked at a couple of example projects on the developer site and am following the tutorial here: Apple's AVFoundation Guide.
Each time I hit the AVCaptureMovieFileOutput, I get an uncaught exception: no active/enabled connections. I'm new to the AV framework, so I'm not sure why it recognizes the iSight, lets me add it as an input to the session, and lets me create a movie output for the session, but then tells me there are no connections. What connections is it looking for? (Note: I do not have a QTMovieView in my view controller yet, but thought I would only need that for playback, not recording.)
I know the iSight is working as I just used it recently with Skype.
Here's my relevant code:
thisSession = [[AVCaptureSession alloc] init];

//set presets for this session
if ([thisSession canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    thisSession.sessionPreset = AVCaptureSessionPreset640x480;
    NSLog(@"Session Preset: OK");

    //capture a device - captures all the devices, microphone, camera, etc.
    NSArray *devices = [AVCaptureDevice devices];

    //this will hold our device
    AVCaptureDevice *iSightCamera;
    for (AVCaptureDevice *device in devices) {
        //we only want to work with the internal camera
        if ([[device localizedName] isEqualToString:@"Built-in iSight"]) {
            iSightCamera = device;

            //creating an input of the device for the session
            NSError *error = nil;
            AVCaptureDeviceInput *iSightCameraInput =
                [AVCaptureDeviceInput deviceInputWithDevice:iSightCamera error:&error];
            if (!iSightCameraInput) {
                NSLog(@"Error creating device input: %@", error);
            } else {
                NSLog(@"iSight device input created!");

                //adding the device input to the session
                if ([thisSession canAddInput:iSightCameraInput]) {
                    [thisSession addInput:iSightCameraInput];
                    NSLog(@"iSight input added to session!");

                    //add the output to the session
                    AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];
                    if ([thisSession canAddOutput:movieOutput]) {
                        [thisSession beginConfiguration];
                        [thisSession addOutput:movieOutput];
                        [thisSession commitConfiguration];
                        NSLog(@"Movie output added to the session!");

                        //start writing the movie
                        NSURL *movieFolder = [NSURL fileURLWithPath:[@"~/Movies" stringByExpandingTildeInPath]];
                        [movieOutput startRecordingToOutputFileURL:movieFolder recordingDelegate:self];
                    }
                    else {
                        NSLog(@"Error: Could not add movie output to the session.");
                    }
                }
                else {
                    NSLog(@"Error: Could not add iSight to session.");
                }
            }
        }
    }
}
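The thread ends without an accepted answer, so the following is only a hedged guess on my part, not something confirmed above: the "no active/enabled connections" exception is commonly seen when recording starts before the session is running, and startRecordingToOutputFileURL: expects a full file URL rather than a folder. A sketch of those two adjustments, using the names from the question:
// Assumption, not from the thread: start the session first and record to a file, not a folder.
[thisSession startRunning];

NSString *moviePath = [[@"~/Movies" stringByExpandingTildeInPath]
                          stringByAppendingPathComponent:@"capture.mov"];
NSURL *movieURL = [NSURL fileURLWithPath:moviePath];
[movieOutput startRecordingToOutputFileURL:movieURL recordingDelegate:self];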
