I am capturing a video using AVCaptureConnection in my iOS app. After that I add some images to the video as CALayers. Everything works fine, but I get a black frame at the very end of the resulting video after adding the images. No frame of the actual audio/video is affected. For the audio, I extract it, change its pitch, and then add it back using AVMutableComposition. Here is the code I am using. Please help me figure out what I am doing wrong, or whether I need to add something else.
cmp = [AVMutableComposition composition];
AVMutableCompositionTrack *videoComposition = [cmp addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioComposition = [cmp addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *sourceVideoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *sourceAudioTrack = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
[videoComposition insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:sourceVideoTrack atTime:kCMTimeZero error:nil] ;
[audioComposition insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:sourceAudioTrack atTime:kCMTimeZero error:nil];
animComp = [AVMutableVideoComposition videoComposition];
animComp.renderSize = CGSizeMake(320, 320);
animComp.frameDuration = CMTimeMake(1,30);
animComp.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
// to gather the audio part of the video
NSArray *tracksToDuck = [cmp tracksWithMediaType:AVMediaTypeAudio];
NSMutableArray *trackMixArray = [NSMutableArray array];
for (NSInteger i = 0; i < [tracksToDuck count]; i++) {
    AVMutableAudioMixInputParameters *trackMix = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:[tracksToDuck objectAtIndex:i]];
    [trackMix setVolume:5 atTime:kCMTimeZero];
    [trackMixArray addObject:trackMix];
}
audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = trackMixArray;
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [asset duration]);
AVMutableVideoCompositionLayerInstruction *layerVideoInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoComposition];
[layerVideoInstruction setOpacity:1.0 atTime:kCMTimeZero];
instruction.layerInstructions = [NSArray arrayWithObject:layerVideoInstruction] ;
animComp.instructions = [NSArray arrayWithObject:instruction];
[self exportMovie:self];
This is my method for exporting the video:
-(IBAction) exportMovie:(id)sender{
    //successCheck = NO;
    NSArray *docPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *tempPath = [docPaths objectAtIndex:0];
    //NSLog(@"Temp Path: %@",tempPath);
    NSString *fileName = [NSString stringWithFormat:@"%@/Final.MP4",tempPath];
    NSFileManager *fileManager = [NSFileManager defaultManager];
    if([fileManager fileExistsAtPath:fileName]){
        NSError *ferror = nil;
        [fileManager removeItemAtPath:fileName error:&ferror];
    }
    NSURL *exportURL = [NSURL fileURLWithPath:fileName];
    AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:cmp presetName:AVAssetExportPresetMediumQuality];
    exporter.outputURL = exportURL;
    exporter.videoComposition = animComp;
    //exporter.audioMix = audioMix;
    exporter.outputFileType = AVFileTypeQuickTimeMovie;
    [exporter exportAsynchronouslyWithCompletionHandler:^(void){
        switch (exporter.status) {
            case AVAssetExportSessionStatusFailed:{
                NSLog(@"Fail");
                break;
            }
            case AVAssetExportSessionStatusCompleted:{
                NSLog(@"Success video");
                break;
            }
            default:
                break;
        }
    }];
    NSLog(@"outside");
}
AVAssetExportSession has a timeRange property for specifying the range to export.
Try giving it a time range a little shorter than the actual duration (a few nanoseconds less).
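For example, something along these lines (a minimal sketch against the variables from the question; the exact amount to trim, here a fraction of a frame rather than nanoseconds, is a judgment call):
// Trim a tiny amount off the end so the export stops before the padded final frame.
CMTime fullDuration = cmp.duration;
CMTime trimmedDuration = CMTimeSubtract(fullDuration, CMTimeMake(1, 600)); // ~1.7 ms shorter
exporter.timeRange = CMTimeRangeMake(kCMTimeZero, trimmedDuration);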
You can get the true video duration from AVAssetTrack. The duration of an AVAsset is sometimes longer than that of its AVAssetTrack.
Check the durations like this:
print(asset.duration.seconds.description)
print(videoTrack.timeRange.duration.description)
So you can change this line:
[videoComposition insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:sourceVideoTrack atTime:kCMTimeZero error:nil] ;
To this:
[videoComposition insertTimeRange:sourceVideoTrack.timeRange ofTrack:sourceVideoTrack atTime:kCMTimeZero error:nil];
For Swift 5:
try videoComposition.insertTimeRange(sourceVideoTrack.timeRange, of: sourceVideoTrack, at: CMTime.zero)
Then you will avoid the last black frame :)
Hope this helps anyone still struggling with this.
Just wanted to write this for people with my specific issue.
I was taking a video and trying to speed it up or slow it down by taking an AVMutableComposition and scaling the time range of the audio and video components via scaleTimeRange.
Scaling the time range to 2x or 3x speed sometimes caused the last few frames of the video to be black. Fortunately, @Khushboo's answer fixed my problem as well.
However, instead of decreasing the exporter's timeRange by a few nanoseconds, I just made it the same as the composition's duration which ended up working perfectly.
exporter?.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: composition.duration)
Hope this helps!
Related
This is supposed to play all the songs in the NSArray, but it only plays one and stops. What am I doing wrong? The array contains a whole bunch of links to different .mp3 songs.
NSArray *array = [myString componentsSeparatedByString:@"\n"];
int nextTag = 1;
for (soundPath in array) {
NSURL *url = [NSURL URLWithString:soundPath];
asset = [AVURLAsset URLAssetWithURL:url options:nil];
yourMediaPlayer = [[MPMoviePlayerController alloc] initWithContentURL:url];
yourMediaPlayer.controlStyle = MPMovieControlStyleNone;
yourMediaPlayer.currentPlaybackTime = 0;
yourMediaPlayer.shouldAutoplay = FALSE;
[yourMediaPlayer play];
nextTag++;
}
Instead of trying to get MPMoviePlayerController to play a list of items, use AVQueuePlayer. It has a convenient advanceToNextItem method.
You can create a player already loaded with items (class AVPlayerItem) either with initWithItems or queuePlayerWithItems. Then just call play on it (inherited from AVPlayer) and it should play the items one after the other.
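A minimal sketch of that approach (reusing the soundPath strings from the question; keep a strong reference to the player, e.g. in a property, so it isn't deallocated while playing):
#import <AVFoundation/AVFoundation.h>
NSArray *paths = [myString componentsSeparatedByString:@"\n"];
NSMutableArray *items = [NSMutableArray array];
for (NSString *soundPath in paths) {
    NSURL *url = [NSURL URLWithString:soundPath];
    if (url) {
        [items addObject:[AVPlayerItem playerItemWithURL:url]]; // one item per .mp3 URL
    }
}
AVQueuePlayer *queuePlayer = [AVQueuePlayer queuePlayerWithItems:items]; // plays the items in order
[queuePlayer play];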
See:
AVQueuePlayer docs
AVPlayer docs
AV Foundation Programming Guide
AVPlayerItem docs
Using the QTKit framework, I'm developing a little app.
In the app, I'm trying to append one movie after another movie, which in essence is already working (most of the time), but I'm having a little trouble with the appended movie. The movie I'm appending to is quite big, like 1920x1080, and the appended movie is usually much smaller, but I never know exactly what size it is. The appended movie sort of stays at its own size inside the 1920x1080 frame, as seen here:
Is anyone familiar with this? Is there a way I can scale the movie I'm appending to so that it matches the size of the appended movie? There is no mention of such a thing in the documentation.
These are some of the relevant methods:
QTMovie *segmentTwo = [QTMovie movieWithURL:finishedMovie error:nil];
QTTimeRange range = { .time = QTZeroTime, .duration = [segmentTwo duration] };
[segmentTwo setSelection:range];
[leader appendSelectionFromMovie:segmentTwo];
while([[leader attributeForKey:QTMovieLoadStateAttribute] longValue] != 100000L)
{
//wait until QTMovieLoadStateComplete
}
NSDictionary *exportAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], QTMovieExport,
[NSNumber numberWithLong:kQTFileTypeMovie], QTMovieExportType, nil];
NSString *outputFile = [NSString stringWithFormat:@"%@.mov", onderwerp];
NSString *filepath = [[@"~/Desktop" stringByExpandingTildeInPath] stringByAppendingFormat:@"/%@", outputFile];
BOOL succes = [leader writeToFile:filepath withAttributes:exportAttributes error:&theError];
Leader is initialized like this:
NSDictionary *movieAttributes = [NSDictionary dictionaryWithObjectsAndKeys:path, QTMovieFileNameAttribute, [NSNumber numberWithBool:YES], QTMovieEditableAttribute, nil];
leader = [QTMovie movieWithAttributes: movieAttributes error:&error];
This contained all the information I needed, although it does so without using the QTKit framework: QTKit - Merge two videos with different width and height?
I'm just beginning to use CABasicAnimations. So far it seems to me like the same code won't necessarily work twice on anything. In one particular instance (the solution for which may cure all my ills!) I have made my own (indeterminate) progress indicator: just a PNG from Photoshop which is rotated until a task is done. It's initiated in the view's initWithRect:
CALayer *mainLayer = [CALayer layer];
[myView setWantsLayer:YES];
[myView setLayer:mainLayer];
progressLayer = [CALayer layer];
progressLayer.opacity = 0;
progressLayer.cornerRadius = 0.0;
progressLayer.bounds = CGRectMake(0.0,0.0,50.0,50.0);
NSDictionary* options = [NSDictionary dictionaryWithObjectsAndKeys:
(id)kCFBooleanTrue, (id)kCGImageSourceShouldCache,
(id)kCFBooleanTrue, (id)kCGImageSourceShouldAllowFloat,
(id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
nil];
CGImageSourceRef isr = CGImageSourceCreateWithURL((__bridge CFURLRef)[[NSBundle mainBundle] URLForImageResource:@"progress_indicator.png"], NULL);
progressLayer.contents = (__bridge id)CGImageSourceCreateImageAtIndex(isr, 0, (__bridge CFDictionaryRef)options);
[mainLayer addSublayer:progressLayer];
And then brought 'onscreen' in a separate method with:
[CATransaction begin]; //I did this block to snap the indicator to the centre
[CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];
progressLayer.anchorPoint = anchorMiddle; //make sure the png is in the view centre
progressLayer.position = viewCentre;
progressLayer.opacity = 1.0;
[CATransaction setValue:(id)kCFBooleanFalse forKey:kCATransactionDisableActions];
[CATransaction commit];
[CATransaction flush];
CABasicAnimation* rotationAnim = [CABasicAnimation animationWithKeyPath: @"transform.rotation.z"];
rotationAnim.fromValue = [NSNumber numberWithFloat:0.0];
rotationAnim.toValue = [NSNumber numberWithFloat:-2 * M_PI];
rotationAnim.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear];
rotationAnim.duration = 5;
rotationAnim.repeatCount = 10000;
rotationAnim.removedOnCompletion = NO;
rotationAnim.autoreverses = NO;
[progressLayer addAnimation:rotationAnim forKey:@"transform.rotation.z"];
It often works - but not always. In general, CABasicAnimations are driving me slightly mad: I cut and paste code from the internet and sometimes it works, sometimes it doesn't. My only thought is that it's being blocked by other threads. I have a minimum of 4 tasks dispatched using GCD. Is it just the case that I've blocked up my MacBook Pro?
Thanks,
Todd.
Oh dear. I think I just found the problem: I was calling the progress indicator from within a GCD block. I took the call out and into the main body of the code (as it were) and all seems good now....
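If the long-running work itself has to stay on a background queue, an alternative (a minimal sketch, not from the original post; showProgressIndicator and hideProgressIndicator are hypothetical helpers wrapping the layer/animation code above) is to keep all Core Animation calls on the main thread and dispatch only the work itself:
[self showProgressIndicator]; // hypothetical helper: runs the CATransaction/CABasicAnimation code on the main thread
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // ... the long-running task ...
    dispatch_async(dispatch_get_main_queue(), ^{
        // Stop/hide the spinner back on the main thread when the work is done.
        [self hideProgressIndicator]; // hypothetical counterpart
    });
});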
I need some help getting my statistics to work...
I use Core Data to store values from user input, and each "line" has a time value.
Now I need to do calculations with some of the values that fall within a specific time range, let's say the last 30 days.
But I don't know how to do it; I'm a little new to working with dates and time ranges.
Can somebody help me out?
Kind regards,
Ingemar
You need to use a predicate to filter your data.
NSManagedObjectContext *context; // Assume this exists.
NSEntityDescription *entityDescription; // Assume this exists.
NSDate *minDate, *maxDate; // Assume these exist.
NSFetchRequest *request = [[NSFetchRequest alloc] init];
[request setEntity:entityDescription];
NSPredicate *predicate = [NSPredicate predicateWithFormat:@"date BETWEEN %@",
[NSArray arrayWithObjects:minDate, maxDate, nil]];
[request setPredicate:predicate];
NSError *error;
NSArray *filteredResult = [context executeFetchRequest:request error:&error];
// Handle error.
[request release];
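For the "last 30 days" case from the question, the two bounds could be built along these lines (a minimal sketch; the date attribute name comes from the predicate above and may differ in your model):
NSDate *maxDate = [NSDate date]; // now
NSDate *minDate = [maxDate dateByAddingTimeInterval:-30.0 * 24.0 * 60.0 * 60.0]; // 30 days earlier
NSPredicate *predicate = [NSPredicate predicateWithFormat:@"date BETWEEN %@",
                          [NSArray arrayWithObjects:minDate, maxDate, nil]];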
Thanks for your answer. I will try your way.
I found this solution by myself; any concerns about it?
NSTimeInterval aktuellesDatumInSekunden = [aktuellesDatum timeIntervalSinceReferenceDate];
NSTimeInterval vordreissigTagen = [letztedreizigTage timeIntervalSinceReferenceDate];
double dBoluse = 0;
double dWerteKleinerSechzig = 0;
for (TblBolusWerte *ausgabeBoliTag in statistikDataWithPredicate){
NSTimeInterval DatumAusDB = [ausgabeBoliTag.creationTime timeIntervalSinceReferenceDate];
if (DatumAusDB >= vordreissigTagen && DatumAusDB <= aktuellesDatumInSekunden){
In a Cocoa application I'm currently coding, I'm getting snapshot images from a Quartz Composer renderer (NSImage objects), and I would like to encode them into a QTMovie at 720x480, 25 fps, with the H264 codec, using the addImage: method. Here is the corresponding piece of code:
qRenderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(720,480) colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB) composition:[QCComposition compositionWithFile:qcPatchPath]]; // define an "offscreen" Quartz composition renderer with the right image size
imageAttrs = [NSDictionary dictionaryWithObjectsAndKeys: @"avc1", // use the H264 codec
QTAddImageCodecType, nil];
qtMovie = [[QTMovie alloc] initToWritableFile: outputVideoFile error:NULL]; // initialize the output QT movie object
long fps = 25;
frameNum = 0;
NSTimeInterval renderingTime = 0;
NSTimeInterval frameInc = (1./fps);
NSTimeInterval myMovieDuration = 70;
NSImage * myImage;
while (renderingTime <= myMovieDuration){
if(![qRenderer renderAtTime: renderingTime arguments:NULL])
NSLog(#"Rendering failed at time %.3fs", renderingTime);
myImage = [qRenderer snapshotImage];
[qtMovie addImage:myImage forDuration: QTMakeTimeWithTimeInterval(frameInc) withAttributes:imageAttrs];
[myImage release];
frameNum ++;
renderingTime = frameNum * frameInc;
}
[qtMovie updateMovieFile];
[qRenderer release];
[qtMovie release];
It works; however, my application is not able to do that in real time on my new MacBook Pro, while I know that QuickTime Broadcaster can encode images in real time in H264, with even higher quality than the one I use, on the same computer.
So why? What's the issue here? Is this a hardware management issue (multi-core threading, GPU, ...) or am I missing something? Let me preface this by saying that I'm new (2 weeks of practice) to the Apple development world: Objective-C, Cocoa, Xcode, the QuickTime and Quartz Composer libraries, etc.
Thanks for any help
AVFoundation is a more efficient way to render a QuartzComposer animation to an H.264 video stream.
size_t width = 640;
size_t height = 480;
const char *outputFile = "/tmp/Arabesque.mp4";
QCComposition *composition = [QCComposition compositionWithFile:@"/System/Library/Screen Savers/Arabesque.qtz"];
QCRenderer *renderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(width, height)
colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB) composition:composition];
unlink(outputFile);
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:@(outputFile)] fileType:AVFileTypeMPEG4 error:NULL];
NSDictionary *videoSettings = @{ AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : @(width), AVVideoHeightKey : @(height) };
AVAssetWriterInput* writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
[videoWriter addInput:writerInput];
[writerInput release];
AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:NULL];
int framesPerSecond = 30;
int totalDuration = 30;
int totalFrameCount = framesPerSecond * totalDuration;
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
__block long frameNumber = 0;
dispatch_queue_t workQueue = dispatch_queue_create("com.example.work-queue", DISPATCH_QUEUE_SERIAL);
NSLog(#"Starting.");
[writerInput requestMediaDataWhenReadyOnQueue:workQueue usingBlock:^{
while ([writerInput isReadyForMoreMediaData]) {
NSTimeInterval frameTime = (float)frameNumber / framesPerSecond;
if (![renderer renderAtTime:frameTime arguments:NULL]) {
NSLog(#"Rendering failed at time %.3fs", frameTime);
break;
}
CVPixelBufferRef frame = (CVPixelBufferRef)[renderer createSnapshotImageOfType:@"CVPixelBuffer"];
[pixelBufferAdaptor appendPixelBuffer:frame withPresentationTime:CMTimeMake(frameNumber, framesPerSecond)];
CFRelease(frame);
frameNumber++;
if (frameNumber >= totalFrameCount) {
[writerInput markAsFinished];
[videoWriter finishWriting];
[videoWriter release];
[renderer release];
NSLog(#"Rendered %ld frames.", frameNumber);
break;
}
}
}];
In my testing this is around twice as fast as your posted code that uses QTKit. The biggest improvement appears to come from the H.264 encoding being handed off to the GPU rather than being performed in software. From a quick glance at a profile, it appears that the remaining bottlenecks are the rendering of the composition itself and reading the rendered data back from the GPU into a pixel buffer. Obviously the complexity of your composition will have some impact on this.
It may be possible to further optimize this by using QCRenderer's ability to provide snapshots as CVOpenGLBufferRefs, which may keep the frame's data on the GPU rather than reading it back to hand it off to the encoder. I didn't look too far into that, though.