Why is my QTKit-based image encoding application so slow? - cocoa

In a Cocoa application I'm currently coding, I'm getting snapshot images from a Quartz Composer renderer (NSImage objects) and I would like to encode them into a QTMovie at 720x480, 25 fps, with the H.264 codec, using the addImage: method. Here is the corresponding piece of code:
qRenderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(720,480) colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB) composition:[QCComposition compositionWithFile:qcPatchPath]]; // define an "offscreen" Quartz composition renderer with the right image size
imageAttrs = [NSDictionary dictionaryWithObjectsAndKeys: @"avc1", // use the H264 codec
QTAddImageCodecType, nil];
qtMovie = [[QTMovie alloc] initToWritableFile: outputVideoFile error:NULL]; // initialize the output QT movie object
long fps = 25;
frameNum = 0;
NSTimeInterval renderingTime = 0;
NSTimeInterval frameInc = (1./fps);
NSTimeInterval myMovieDuration = 70;
NSImage * myImage;
while (renderingTime <= myMovieDuration){
if(![qRenderer renderAtTime: renderingTime arguments:NULL])
NSLog(#"Rendering failed at time %.3fs", renderingTime);
myImage = [qRenderer snapshotImage];
[qtMovie addImage:myImage forDuration: QTMakeTimeWithTimeInterval(frameInc) withAttributes:imageAttrs];
[myImage release];
frameNum ++;
renderingTime = frameNum * frameInc;
}
[qtMovie updateMovieFile];
[qRenderer release];
[qtMovie release];
It works; however, my application is not able to do this in real time on my new MacBook Pro, while I know that QuickTime Broadcaster can encode images in real time in H.264, at even higher quality than the one I use, on the same computer.
So why? What's the issue here? Is this a hardware-management issue (multi-core threading, GPU, ...), or am I missing something? I should mention that I'm new (two weeks of practice) to the Apple development world: Objective-C, Cocoa, Xcode, the QuickTime and Quartz Composer libraries, etc.
Thanks for any help

AVFoundation is a more efficient way to render a QuartzComposer animation to an H.264 video stream.
size_t width = 640;
size_t height = 480;
const char *outputFile = "/tmp/Arabesque.mp4";
QCComposition *composition = [QCComposition compositionWithFile:@"/System/Library/Screen Savers/Arabesque.qtz"];
QCRenderer *renderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(width, height)
colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB) composition:composition];
unlink(outputFile);
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:@(outputFile)] fileType:AVFileTypeMPEG4 error:NULL];
NSDictionary *videoSettings = @{ AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : @(width), AVVideoHeightKey : @(height) };
AVAssetWriterInput* writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
[videoWriter addInput:writerInput];
[writerInput release];
AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:NULL];
int framesPerSecond = 30;
int totalDuration = 30;
int totalFrameCount = framesPerSecond * totalDuration;
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
__block long frameNumber = 0;
dispatch_queue_t workQueue = dispatch_queue_create("com.example.work-queue", DISPATCH_QUEUE_SERIAL);
NSLog(#"Starting.");
[writerInput requestMediaDataWhenReadyOnQueue:workQueue usingBlock:^{
while ([writerInput isReadyForMoreMediaData]) {
NSTimeInterval frameTime = (float)frameNumber / framesPerSecond;
if (![renderer renderAtTime:frameTime arguments:NULL]) {
NSLog(#"Rendering failed at time %.3fs", frameTime);
break;
}
CVPixelBufferRef frame = (CVPixelBufferRef)[renderer createSnapshotImageOfType:@"CVPixelBuffer"];
[pixelBufferAdaptor appendPixelBuffer:frame withPresentationTime:CMTimeMake(frameNumber, framesPerSecond)];
CFRelease(frame);
frameNumber++;
if (frameNumber >= totalFrameCount) {
[writerInput markAsFinished];
[videoWriter finishWriting];
[videoWriter release];
[renderer release];
NSLog(#"Rendered %ld frames.", frameNumber);
break;
}
}
}];
In my testing this is around twice as fast as your posted code that uses QTKit. The biggest improvement appears to come from the H.264 encoding being handed off to the GPU rather than being performed in software. From a quick glance at a profile, the remaining bottlenecks appear to be the rendering of the composition itself and reading the rendered data back from the GPU into a pixel buffer. Obviously the complexity of your composition will have some impact on this.
It may be possible to optimize this further by using QCRenderer's ability to provide snapshots as CVOpenGLBufferRefs, which may keep the frame's data on the GPU rather than reading it back to hand it off to the encoder. I didn't look too far into that, though.
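A smaller knob that might be worth a try before going all the way to CVOpenGLBuffer (I didn't test this, and the 32BGRA choice is an assumption on my part) is hinting the adaptor about the pixel format you expect, so no extra conversion has to happen when buffers are appended. This would replace the adaptor creation above:
// Hypothetical tweak: describe the buffers the adaptor should expect up front.
// Whether QCRenderer's "CVPixelBuffer" snapshots actually come back as 32BGRA
// is an assumption; profile before relying on this.
NSDictionary *pixelBufferAttributes = @{
    (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
    (NSString *)kCVPixelBufferWidthKey : @(width),
    (NSString *)kCVPixelBufferHeightKey : @(height)
};
AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                             sourcePixelBufferAttributes:pixelBufferAttributes];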

Related

OS X, Cocoa: How to highlight burned-out areas of a photo?

I've got an app that displays photos using NSImage – specifically, -[NSImage drawInRect:fromRect:operation:fraction:]. I want to highlight areas of the photo that are completely burned out (maximum values in all components, pure white) using a color like red, as some digital cameras and image processing apps do, to help the user see whether the image is overexposed, and how badly.
I've been scratching my head as to how to do this. Options I've considered:
I could probably write a Core Image filter to do it; none of the built-in filters seems up to the task. That seems like overkill, though; I've been reading through the docs, and it looks fairly complicated. Big learning curve.
I could scan through the bitmap data for the image and modify it as necessary. This is easy enough to code for one bitmap format, but the multitude of bitmap formats makes it a rather annoying exercise, and speed is important here, so writing general-purpose code that renders the image up to some maximal common format and works on that bitmap would carry too big a speed penalty.
As it happens, I am already scanning through images (handling all the different bitmap formats) at an earlier point in the code, to generate histogram data for the images. I could pretty easily add code at that point that would remember the burned-out pixels for later use. I'm not quite sure what the best way is to do that, though. A 1-bit-per-pixel NSBitmapImageRep? How would I draw it later, making the 1-pixels draw red and the 0-pixels draw transparent, for example? I don't want to make a 32-bit NSBitmapImageRep with an alpha channel and everything just for this purpose, as memory is not infinite and images are large. But there must be a way to draw a 1-bit mask in a given color, somehow.
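Here's roughly what I had in mind for the drawing side of that third option, in case it clarifies the question. This is an untested sketch; maskData, maskWidth and maskHeight are hypothetical results of the histogram pass, dstRect is whatever rect the photo was drawn into, and wrapping the 1-bit data in a CGImage mask is just my assumption about how this could work:
// Untested sketch: wrap 1-bit-per-pixel "burned out" data in a CGImage mask,
// then clip to it and flood-fill with red over the already-drawn photo.
// Note that for CGImage masks a sample value of 0 paints and 1 masks out, so the
// burned-out pixels would need to be 0 bits (or pass a decode array to flip that).
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)maskData);
CGImageRef mask = CGImageMaskCreate(maskWidth, maskHeight,
                                    1,                    // bits per component
                                    1,                    // bits per pixel
                                    (maskWidth + 7) / 8,  // bytes per row, tightly packed
                                    provider, NULL, false);
CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextSaveGState(ctx);
CGContextClipToMask(ctx, NSRectToCGRect(dstRect), mask); // restrict drawing to the masked pixels
CGContextSetRGBFillColor(ctx, 1.0, 0.0, 0.0, 1.0);
CGContextFillRect(ctx, NSRectToCGRect(dstRect));
CGContextRestoreGState(ctx);
CGImageRelease(mask);
CGDataProviderRelease(provider);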
Before forging ahead with one of these approaches, I thought I'd see whether anybody here has a better idea. Or maybe has implemented the CI filter in question already? Apart from the learning curve, that seems like the best approach I've thought of so far – no memory overhead, and probably faster than other options, too.
Thanks...
Ben Haller
Stick Software
OK, I implemented my own Core Image filter to do this. Wasn't as hard as I expected, although the documentation is not great for this stuff. The doc examples all assume you're using ARC, so if you're not, following those examples will give you various retain/release bugs. There was also a little weirdness with the CIFilterConstructor stuff, which did not quite go as documented. But overall pretty easy. CI is cool. My code is below, for anybody who might find it useful:
Header:
#import <QuartzCore/QuartzCore.h> // assumed: the angle-bracketed header was stripped; CIFilter lives in QuartzCore on the Mac
@interface SSTintHighlightsFilter : CIFilter
{
CIImage *inputImage;
CIColor *highlightColor;
}
@end
Implementation file:
#import "SSTintHighlightsFilter.h"
static CIKernel *tintHighlightsFilter = nil;
@implementation SSTintHighlightsFilter
+ (void)initialize
{
[CIFilter registerFilterName:@"SSTintHighlightsFilter" constructor:(id <CIFilterConstructor>)self
classAttributes:[NSDictionary dictionaryWithObjectsAndKeys:@"Tint Highlights", kCIAttributeFilterDisplayName, [NSArray arrayWithObjects:kCICategoryColorAdjustment, kCICategoryStillImage, nil], kCIAttributeFilterCategories, nil]];
}
+ (CIFilter *)filterWithName:(NSString *)name
{
CIFilter *filter = [[self alloc] init];
return [filter autorelease];
}
- (id)init
{
if (!tintHighlightsFilter)
{
NSBundle *bundle = [NSBundle bundleForClass:[self class]];
NSString *code = [NSString stringWithContentsOfFile:[bundle pathForResource:@"tintHighlightsAndShadows" ofType:@"cikernel"] encoding:NSASCIIStringEncoding error:NULL];
NSArray *kernels = [CIKernel kernelsWithString:code];
tintHighlightsFilter = [[kernels objectAtIndex:0] retain];
}
return [super init];
}
- (NSDictionary *)customAttributes
{
NSDictionary *attrs = @{
@"highlightColor" : @{ kCIAttributeClass : [CIColor class], kCIAttributeType : kCIAttributeTypeOpaqueColor }
};
return attrs;
}
- (CIImage *)outputImage
{
CISampler *src = [CISampler samplerWithImage:inputImage];
return [self apply:tintHighlightsFilter
arguments:[NSArray arrayWithObjects:src, highlightColor, nil]
options:[NSDictionary dictionaryWithObjectsAndKeys:[src definition], kCIApplyOptionDefinition, nil]];
}
@end
tintHighlights.cikernel:
kernel vec4 tintHighlights(sampler inputImage, __color highlightColor)
{
vec4 originalColor, tintedColor;
float sum;
// fetch the source pixel
originalColor = sample(inputImage, samplerCoord(inputImage));
// calculate the color component sum as a way of testing whether we are black or white
sum = originalColor.r + originalColor.g + originalColor.b;
// replace pixels that are white with the highlight color
tintedColor = (sum > 2.99999999999999999999999) ? highlightColor : originalColor;
// preserve alpha
tintedColor.a = originalColor.a;
return tintedColor;
}
using the filter:
+ (NSImage *)showHighlightsInImage:(NSImage *)img dstRect:(NSRect)dstRect
{
NSGraphicsContext *currentContext = [NSGraphicsContext currentContext];
NSRect dstRectForCGImage = dstRect; // because the method below wants a pointer, and I don't trust it not to modify my rect...
CGImageRef cgImage = [img CGImageForProposedRect:&dstRectForCGImage context:currentContext hints:nil];
CIImage *inputImage = [[CIImage alloc] initWithCGImage:cgImage];
[SSTintHighlightsFilter class]; // get my filter initialized
CIFilter *highlightFilter = [CIFilter filterWithName:@"SSTintHighlightsFilter"];
[highlightFilter setValue:inputImage forKey:@"inputImage"];
[highlightFilter setValue:[CIColor colorWithRed:1.0 green:0.0 blue:0.0] forKey:@"highlightColor"];
[inputImage release];
CIImage *outputImage = [highlightFilter valueForKey:@"outputImage"];
NSImage *resultImage = [[NSImage alloc] initWithSize:[img size]];
[resultImage addRepresentation:[NSCIImageRep imageRepWithCIImage:outputImage]];
return [resultImage autorelease];
}
I'm not sure that I'm handling the alpha entirely robustly, with premultiplication issues and so forth, but apart from that possible glitch it is working great.

AVMutableComposition issue: black frame at the end

I am capturing a video using AVCaptureConnection in my iOS app. After that I add some images to the video as CALayers. Everything works fine, but I get a black frame at the very end of the resulting video after adding the images. No actual audio or video frames are affected by this. For the audio, I extract it, change its pitch, and then add it back using AVMutableComposition. Here is the code I am using. Please help me figure out what I am doing wrong, or whether I need to add something else.
cmp = [AVMutableComposition composition];
AVMutableCompositionTrack *videoComposition = [cmp addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioComposition = [cmp addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *sourceVideoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *sourceAudioTrack = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
[videoComposition insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:sourceVideoTrack atTime:kCMTimeZero error:nil] ;
[audioComposition insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:sourceAudioTrack atTime:kCMTimeZero error:nil];
animComp = [AVMutableVideoComposition videoComposition];
animComp.renderSize = CGSizeMake(320, 320);
animComp.frameDuration = CMTimeMake(1,30);
animComp.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
// to gather the audio part of the video
NSArray *tracksToDuck = [cmp tracksWithMediaType:AVMediaTypeAudio];
NSMutableArray *trackMixArray = [NSMutableArray array];
for (NSInteger i = 0; i < [tracksToDuck count]; i++) {
AVMutableAudioMixInputParameters *trackMix = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:[tracksToDuck objectAtIndex:i]];
[trackMix setVolume:5 atTime:kCMTimeZero];
[trackMixArray addObject:trackMix];
}
audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = trackMixArray;
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [asset duration]);
AVMutableVideoCompositionLayerInstruction *layerVideoInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoComposition];
[layerVideoInstruction setOpacity:1.0 atTime:kCMTimeZero];
instruction.layerInstructions = [NSArray arrayWithObject:layerVideoInstruction] ;
animComp.instructions = [NSArray arrayWithObject:instruction];
[self exportMovie:self];
This is my method for exporting the video
-(IBAction) exportMovie:(id)sender{
//successCheck = NO;
NSArray *docPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *tempPath = [docPaths objectAtIndex:0];
//NSLog(#"Temp Path: %#",tempPath);
NSString *fileName = [NSString stringWithFormat:#"%#/Final.MP4",tempPath];
NSFileManager *fileManager = [NSFileManager defaultManager] ;
if([fileManager fileExistsAtPath:fileName ]){
NSError *ferror = nil ;
[fileManager removeItemAtPath:fileName error:&ferror];
}
NSURL *exportURL = [NSURL fileURLWithPath:fileName];
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:cmp presetName:AVAssetExportPresetMediumQuality] ;
exporter.outputURL = exportURL;
exporter.videoComposition = animComp;
//exporter.audioMix = audioMix;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
[exporter exportAsynchronouslyWithCompletionHandler:^(void){
switch (exporter.status) {
case AVAssetExportSessionStatusFailed:{
NSLog(#"Fail");
break;
}
case AVAssetExportSessionStatusCompleted:{
NSLog(#"Success video");
});
break;
}
default:
break;
}
}];
NSLog(#"outside");
}
There is a timeRange property on the export session that lets you give it a time range; try giving it a range a little shorter than the actual duration (a few nanoseconds less), as sketched below.
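A minimal sketch of that idea, using the names from the question's code (the exact amount to trim is arbitrary):
// Hypothetical sketch: export everything except a tiny slice at the very end.
CMTime fullDuration = [cmp duration];
CMTime epsilon = CMTimeMake(1, 600); // a small fraction of a second; tune as needed
exporter.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeSubtract(fullDuration, epsilon));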
You can get the true video duration from AVAssetTrack. The duration of the AVAsset is sometimes longer than the AVAssetTrack's.
Check the durations like this:
print(asset.duration.seconds.description)
print(videoTrack.timeRange.duration.description)
So you can change this line:
[videoComposition insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:sourceVideoTrack atTime:kCMTimeZero error:nil];
To this:
[videoComposition insertTimeRange:sourceVideoTrack.timeRange ofTrack:sourceVideoTrack atTime:kCMTimeZero error:nil];
For Swift 5:
videoComposition.insertTimeRange(sourceVideoTrack.timeRange, of: sourceVideoTrack, at: CMTime.zero)
Then you will avoid the last black frame :)
Hope this helps someone who is still struggling with this.
Just wanted to write this for people with my specific issue.
I was taking a video and trying to speed it up / slow it down by taking an AVMutableComposition and scaling the time range of the audio and video components via scaleTimeRange.
Scaling the time range to 2x or 3x speed sometimes caused the last few frames of the video to be black. Fortunately, @Khushboo's answer fixed my problem as well.
However, instead of decreasing the exporter's timeRange by a few nanoseconds, I just made it the same as the composition's duration, which ended up working perfectly.
exporter?.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: composition.duration)
Hope this helps!

NSImage + NSBitmapImageRep = Converting RAW image file from one format to another

I am trying to write a prototype to prove that RAW conversion from one format to another is possible. I have to convert a Nikon raw file, which is in .NEF format, to Canon's .CR2 format. With the help of various posts I create an NSBitmapImageRep from the original image's TIFF representation and use it to write the output file, which has a .CR2 extension.
It does work, but the problem is that the input file is 21.5 MB while the output I am getting is 144.4 MB; using NSTIFFCompressionPackBits gives me 142.1 MB instead.
I want to understand what is happening; I have tried the various compression enums available, but with no success.
Please help me understand it. This is the source code:
@interface NSImage (RawConversion)
- (void)saveAsCR2WithName:(NSString *)fileName;
@end
@implementation NSImage (RawConversion)
- (void) saveAsCR2WithName:(NSString*) fileName
{
// Cache the reduced image
NSData *imageData = [self TIFFRepresentation];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
// http://www.cocoabuilder.com/archive/cocoa/151789-nsbitmapimagerep-compressed-tiff-large-files.html
NSDictionary *imageProps = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:NSTIFFCompressionJPEG],NSImageCompressionMethod,
[NSNumber numberWithFloat: 1.0], NSImageCompressionFactor,
nil];
imageData = [imageRep representationUsingType:NSTIFFFileType properties:imageProps];
[imageData writeToFile:fileName atomically:NO];
}
@end
How could I get an output file in CR2 format that is close to the size of the input file, with only the small variation expected for a CR2 file?
Edit 1:
I made changes based on Peter's suggestion of using the CGImageDestinationAddImageFromSource method, but I am still getting the same result: the source NEF file is 21.5 MB, but the destination file after conversion is 144.4 MB.
Please review the code:
-(void)saveAsCR2WithCGImageMethodUsingName:(NSString*)inDestinationfileName withSourceFile:(NSString*)inSourceFileName
{
CGImageSourceRef sourceFile = MyCreateCGImageSourceRefFromFile(inSourceFileName);
CGImageDestinationRef destinationFile = createCGImageDestinationRefFromFile(inDestinationfileName);
CGImageDestinationAddImageFromSource(destinationFile, sourceFile, 0, NULL);
//https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/ImageIOGuide/ikpg_dest/ikpg_dest.html
CGImageDestinationFinalize(destinationFile);
}
CGImageSourceRef MyCreateCGImageSourceRefFromFile (NSString* path)
{
// Get the URL for the pathname passed to the function.
NSURL *url = [NSURL fileURLWithPath:path];
CGImageSourceRef myImageSource;
CFDictionaryRef myOptions = NULL;
CFStringRef myKeys[2];
CFTypeRef myValues[2];
// Set up options if you want them. The options here are for
// caching the image in a decoded form and for using floating-point
// values if the image format supports them.
myKeys[0] = kCGImageSourceShouldCache;
myValues[0] = (CFTypeRef)kCFBooleanTrue;
myKeys[1] = kCGImageSourceShouldAllowFloat;
myValues[1] = (CFTypeRef)kCFBooleanTrue;
// Create the dictionary
myOptions = CFDictionaryCreate(NULL, (const void **) myKeys,
(const void **) myValues, 2,
&kCFTypeDictionaryKeyCallBacks,
& kCFTypeDictionaryValueCallBacks);
// Create an image source from the URL.
myImageSource = CGImageSourceCreateWithURL((CFURLRef)url, myOptions);
CFRelease(myOptions);
// Make sure the image source exists before continuing
if (myImageSource == NULL){
fprintf(stderr, "Image source is NULL.");
return NULL;
}
return myImageSource;
}
CGImageDestinationRef createCGImageDestinationRefFromFile (NSString *path)
{
NSURL *url = [NSURL fileURLWithPath:path];
CGImageDestinationRef myImageDestination;
//https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/ImageIOGuide/ikpg_dest/ikpg_dest.html
float compression = 1.0; // Lossless compression if available.
int orientation = 4; // Origin is at bottom, left.
CFStringRef myKeys[3];
CFTypeRef myValues[3];
CFDictionaryRef myOptions = NULL;
myKeys[0] = kCGImagePropertyOrientation;
myValues[0] = CFNumberCreate(NULL, kCFNumberIntType, &orientation);
myKeys[1] = kCGImagePropertyHasAlpha;
myValues[1] = kCFBooleanTrue;
myKeys[2] = kCGImageDestinationLossyCompressionQuality;
myValues[2] = CFNumberCreate(NULL, kCFNumberFloatType, &compression);
myOptions = CFDictionaryCreate( NULL, (const void **)myKeys, (const void **)myValues, 3,
&kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
//https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/ImageIOGuide/imageio_basics/ikpg_basics.html#//apple_ref/doc/uid/TP40005462-CH216-SW3
CFStringRef destFileType = CFSTR("public.tiff");
// CFStringRef destFileType = kUTTypeJPEG;
CFArrayRef types = CGImageDestinationCopyTypeIdentifiers(); CFShow(types);
myImageDestination = CGImageDestinationCreateWithURL((CFURLRef)url, destFileType, 1, myOptions);
return myImageDestination;
}
Edit 2: I used the second approach suggested by @Peter. This gives an interesting result. Its effect is the same as renaming the file in Finder from something like "example_image.NEF" to "example_image.CR2". Surprisingly, whether converting programmatically or in Finder, the 21.5 MB source file turns out to be 59 KB. This is without any compression set in the code. Please see the code and suggest:
-(void)convertNEFWithTiffIntermediate:(NSString*)inNEFFile toCR2:(NSString*)inCR2File
{
NSData *fileData = [[NSData alloc] initWithContentsOfFile:inNEFFile];
if (fileData)
{
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:fileData];
// [imageRep setCompression:NSTIFFCompressionNone
// factor:1.0];
NSDictionary *imageProps = nil;
NSData *destinationImageData = [imageRep representationUsingType:NSTIFFFileType properties:imageProps];
[destinationImageData writeToFile:inCR2File atomically:NO];
}
}
The first thing I would try doesn't involve NSImage or NSBitmapImageRep at all. Instead, I would create a CGImageSource for the source file and a CGImageDestination for the destination file, and use CGImageDestinationAddImageFromSource to transfer all of the images from A to B.
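Boiled down, that pipeline might look something like this sketch (the TIFF output type is a placeholder and error handling is elided; the path variables are the ones from your Edit 1 code):
// Hedged sketch of the CGImageSource -> CGImageDestination route,
// without going through NSImage or NSBitmapImageRep at all.
NSURL *sourceURL = [NSURL fileURLWithPath:inSourceFileName];
NSURL *destinationURL = [NSURL fileURLWithPath:inDestinationfileName];
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)sourceURL, NULL);
size_t imageCount = CGImageSourceGetCount(source);
CGImageDestinationRef destination =
    CGImageDestinationCreateWithURL((CFURLRef)destinationURL, CFSTR("public.tiff"), imageCount, NULL);
for (size_t i = 0; i < imageCount; i++) {
    // Transfer each image, along with its metadata, from the source to the destination.
    CGImageDestinationAddImageFromSource(destination, source, i, NULL);
}
CGImageDestinationFinalize(destination);
CFRelease(destination);
CFRelease(source);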
You're converting to TIFF twice in this code:
You create an NSImage, I assume from the source file.
You ask the NSImage for its TIFFRepresentation (TIFF conversion #1).
You create an NSBitmapImageRep from the first TIFF data.
You ask the NSBitmapImageRep to generate a second TIFF representation (TIFF conversion #2).
Consider creating an NSBitmapImageRep directly from the source data, and not using NSImage at all. You would then skip directly to step 4 to generate the output data.
(But I still would try CGImageDestinationAddImageFromSource first.)
Raw image files have their own (proprietary) representations.
For example, they may use 14 bits per component and mosaic patterns, which are not supported by your code.
I think you should use a lower-level API and really reverse-engineer the RAW format you are trying to save to.
I would start with DNG, which is relatively easy, as Adobe provides an SDK for writing it.

Scaling a QTMovie before appending

Using the QTKit framework, I'm developing a little app.
In the app, I'm trying to append one movie after another, which in essence is already working (most of the time), but I'm having a little trouble with the appended movie. The movie I'm appending to is quite big, say 1920x1080, and the appended movie is usually much smaller, but I never know exactly what size it is. The appended movie sort of stays at its own size inside the 1920x1080 frame, as seen here:
Is anyone familiar with this? Is there a way to scale the movie I'm appending to, down to the size of the appended movie? There is no reference to such a thing in the documentation.
These are some of the relevant methods:
QTMovie *segmentTwo = [QTMovie movieWithURL:finishedMovie error:nil];
QTTimeRange range = { .time = QTZeroTime, .duration = [segmentTwo duration] };
[segmentTwo setSelection:range];
[leader appendSelectionFromMovie:segmentTwo];
while([[leader attributeForKey:QTMovieLoadStateAttribute] longValue] != 100000L)
{
//wait until QTMovieLoadStateComplete
}
NSDictionary *exportAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], QTMovieExport,
[NSNumber numberWithLong:kQTFileTypeMovie], QTMovieExportType, nil];
NSString *outputFile = [NSString stringWithFormat:@"%@.mov", onderwerp];
NSString *filepath = [[@"~/Desktop" stringByExpandingTildeInPath] stringByAppendingFormat:@"/%@", outputFile];
BOOL succes = [leader writeToFile:filepath withAttributes:exportAttributes error:&theError];
Leader is initialized like this:
NSDictionary *movieAttributes = [NSDictionary dictionaryWithObjectsAndKeys:path, QTMovieFileNameAttribute, [NSNumber numberWithBool:YES], QTMovieEditableAttribute, nil];
leader = [QTMovie movieWithAttributes: movieAttributes error:&error];
This contained all the information I needed, although it does not use the QTKit framework: QTKit - Merge two videos with different width and height?

imageWithCGImage not being released, or trapped by a cache similar to imageNamed? Any workaround for generating dynamic images?

I'm generating UIImages from a bit bucket, creating them on the fly and swapping the UIImageView's image. Is there a way to edit the UIImageView's image directly? (i.e., change the color of a specific pixel, without removing the UIImage from the UIImageView, and get it to redraw.)
Currently, I'm flushing the UIImage, using imageWithCGImage to make a new one, and assigning it to the UIImageView. This works and shows no memory leaks, but on the iPhone (3GS), after about 100 image replacements, it crashes. A caching issue? The total memory seems to be hitting the phone's limit if the cache isn't releasing; however, the Simulator does not show memory consumption growing with each image swap. It stays flatlined, without leaks.
Note: the topologyImage array is the RGBA bit bucket. The Ref variables are not released; every attempt to do so crashes the next call. Without releasing them, Instruments reports no leaks.
=========
CGColorSpaceRef colorSpaceRef=CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
CGColorRenderingIntent renderingIntent=kCGRenderingIntentDefault;
CGDataProviderRef provider=CGDataProviderCreateWithData(NULL,topologyImage,(I*I*4),NULL);
CGImageRef imageRef=CGImageCreate(I,I,8,4*8,4*I,colorSpaceRef,bitmapInfo,provider,NULL,false,renderingIntent);
UIImage *img=[UIImage imageWithCGImage:imageRef];
if( IMG[NDXtopo].vw ) {
[IMG[NDXtopo].vw setImage:img];
}
else {
IMG[NDXtopo].vw=[[UIImageView alloc] initWithImage:img];
[master.view addSubview:IMG[NDXtopo].vw];
}
Basically you should release your references, especially the CGImageRef, since imageWithCGImage doesn't take ownership of the CGImage but rather seems to copy the data internally.
The docs on this are quite unclear, but from what I have found in my testing, if I don't release CGImageRefs and CGDataProviderRefs, the application eventually gets memory warnings... and then crashes.
Not sure why you would have a crash, but in doing a quick test with:
UIImageView *view = [[UIImageView alloc] init];
int I = 128;
unsigned char *topologyImage = malloc(I*I*4*sizeof(unsigned char));
for(int i=0; i<I*I*4; i++)
{
topologyImage[i] = 100;
}
for(int i=0; i<1000; i++)
{
CGColorSpaceRef colorSpaceRef=CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
CGColorRenderingIntent renderingIntent=kCGRenderingIntentDefault;
CGDataProviderRef provider=CGDataProviderCreateWithData(NULL,topologyImage,(I*I*4),NULL);
CGImageRef imageRef=CGImageCreate(I,I,8,4*8,4*I,colorSpaceRef,bitmapInfo,provider,NULL,false,renderingIntent);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
UIImage *img=[UIImage imageWithCGImage:imageRef];
view.image = img;
CGImageRelease(imageRef);
}
free(topologyImage);
This seems to work just fine for me, so whatever is causing your crash appears to come from something outside of your example, for example how you got the image data into topologyImage.
