Passing a video frame to Core Image on OS X

Hi all you awesome coders! I've put this together from various helpful sources over the last couple of weeks (including a lot of posts from Stack Overflow). I'm trying to create something that takes a webcam feed and detects smiles when they occur (might as well draw boxes around the faces and the smiles as well; that doesn't seem hard once they are detected). Please give me some leeway if the code is messy, because I'm still very much learning.
Currently I'm stuck at trying to pass the image to a CIImage so it can be analysed for faces (I plan to deal with smiles after the face hurdle is overcome). As it stands, the project builds if I comment out the block after (5), and it brings up a simple AVCaptureVideoPreviewLayer in a window. I think this is what I've called "rootLayer", so it's like the first layer of the displayed output. After I detect faces in the video frames, I'll show a rectangle following the "bounds" of any detected face in a new layer overlaid on top of this one, which I've called "previewLayer"... correct?
But with the block after (5) in place, the build fails with the following linker errors:
Undefined symbols for architecture x86_64:
"_CMCopyDictionaryOfAttachments", referenced from:
-[AVRecorderDocument captureOutput:didOutputSampleBuffer:fromConnection:] in AVRecorderDocument.o
"_CMSampleBufferGetImageBuffer", referenced from:
-[AVRecorderDocument captureOutput:didOutputSampleBuffer:fromConnection:] in AVRecorderDocument.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Can anyone tell me where I'm going wrong and what my next steps are?
Thanks for any help. I've been stuck at this point for a couple of days and I can't figure it out; all the examples I can find are for iOS and don't work on OS X.
- (id)init
{
self = [super init];
if (self) {
// Move the output part to another function
[self addVideoDataOutput];
// Create a capture session
session = [[AVCaptureSession alloc] init];
// Set a session preset (resolution)
self.session.sessionPreset = AVCaptureSessionPreset640x480;
// Select devices if any exist
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (videoDevice) {
[self setSelectedVideoDevice:videoDevice];
} else {
[self setSelectedVideoDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeMuxed]];
}
NSError *error = nil;
// Add an input
videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
[self.session addInput:self.videoDeviceInput];
// Start the session (app opens slower if it is here but I think it is needed in order to send the frames for processing)
[[self session] startRunning];
// Initial refresh of device list
[self refreshDevices];
}
return self;
}
-(void) addVideoDataOutput {
// (1) Instantiate a new video data output object
AVCaptureVideoDataOutput * captureOutput = [[AVCaptureVideoDataOutput alloc] init];
captureOutput.videoSettings = @{ (NSString *) kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
// discard if the data output queue is blocked (while CI processes the still image)
captureOutput.alwaysDiscardsLateVideoFrames = YES;
// (2) The sample buffer delegate requires a serial dispatch queue
dispatch_queue_t captureOutputQueue;
captureOutputQueue = dispatch_queue_create("CaptureOutputQueue", DISPATCH_QUEUE_SERIAL);
[captureOutput setSampleBufferDelegate:self queue:captureOutputQueue];
dispatch_release(captureOutputQueue); //what does this do and should it be here or after we receive the processed image back?
// (3) Define the pixel format for the video data output
NSString * key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber * value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary * settings = @{key:value};
[captureOutput setVideoSettings:settings];
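// Note: this repeats the 32BGRA pixel-format setting already assigned to captureOutput.videoSettings in (1) above, so one of the two assignments is redundant.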
// (4) Configure the output port on the captureSession property
if ( [self.session canAddOutput:captureOutput] )
[session addOutput:captureOutput];
}
// Implement the Sample Buffer Delegate Method
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// I *think* I have a video frame now in some sort of image format... so have to convert it into a CIImage before I can process it:
// (5) Convert CMSampleBufferRef to CVImageBufferRef, then to a CI Image (per weichsel's answer in July '13)
CVImageBufferRef cvFrameImage = CMSampleBufferGetImageBuffer(sampleBuffer); // Having trouble here, prog. stops and won't recognise CMSampleBufferGetImageBuffer.
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
self.ciFrameImage = [[CIImage alloc] initWithCVImageBuffer:cvFrameImage options:(__bridge NSDictionary *)attachments];
//self.ciFrameImage = [[CIImage alloc] initWithCVImageBuffer:cvFrameImage];
//OK so it is a CIImage. Find some way to send it to a separate CIImage function to find the faces, then smiles. Then send it somewhere else to be displayed on top of AVCaptureVideoPreviewLayer
//TBW
}
- (NSString *)windowNibName
{
return #"AVRecorderDocument";
}
- (void)windowControllerDidLoadNib:(NSWindowController *) aController
{
[super windowControllerDidLoadNib:aController];
// Attach preview to session
CALayer *rootLayer = self.previewView.layer;
[rootLayer setMasksToBounds:YES]; //aaron added
self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
[self.previewLayer setBackgroundColor:CGColorGetConstantColor(kCGColorBlack)];
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
[self.previewLayer setFrame:[rootLayer bounds]];
//[newPreviewLayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable]; //don't think I need this for OSX?
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
[rootLayer addSublayer:previewLayer];
// [newPreviewLayer release]; //what's this for?
}

(Moved from the comments section.)
Wow. I guess two days and one Stack Overflow post is what it takes to figure out that I hadn't added CoreMedia.framework to my project. Linking CoreMedia.framework makes the undefined CMSampleBufferGetImageBuffer and CMCopyDictionaryOfAttachments symbols resolve.
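For the face-detection step flagged in the comments of (5) above, one option is to run a CIDetector over each ciFrameImage. This is only a minimal sketch of that next step (not part of the original post): it assumes ARC, a faceDetector property added to the class, and OS X 10.9 or later for the CIDetectorSmile option; converting face.bounds into previewLayer coordinates for the overlay rectangles is left out.
// Assumed addition to the class interface: @property (strong) CIDetector *faceDetector;
- (void)findFacesInImage:(CIImage *)frameImage
{
    if (!self.faceDetector) {
        // Create the detector once and reuse it; creating one per frame is expensive.
        self.faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                               context:nil
                                               options:@{ CIDetectorAccuracy : CIDetectorAccuracyLow }];
    }
    // Ask for smile information along with the face features (OS X 10.9+).
    NSArray *features = [self.faceDetector featuresInImage:frameImage
                                                   options:@{ CIDetectorSmile : @YES }];
    for (CIFaceFeature *face in features) {
        NSLog(@"face bounds: %@ smiling: %d",
              NSStringFromRect(NSRectFromCGRect(face.bounds)), face.hasSmile);
    }
}
It could then be called at the end of captureOutput:didOutputSampleBuffer:fromConnection: with [self findFacesInImage:self.ciFrameImage];.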

Related

AVFoundation: how to resize frames in sampleBuffer (AVCaptureSessionPreset does not affect it)

I am trying to capture video frames from my MacBook camera and process them on the fly (for later face detection). To reduce memory usage, I want to reduce the capture resolution from the preset value 1200x720 to 640x480.
Here is my code to setup the Capture Session:
_session = [[AVCaptureSession alloc] init];
if ([_session canSetSessionPreset:AVCaptureSessionPreset640x480]){
[_session setSessionPreset:AVCaptureSessionPreset640x480];
NSLog(#"resolution preset changed");
}
// configure input
_camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
_deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:_camera error:nil];
// configure output
_videoOutput = [[AVCaptureVideoDataOutput alloc] init];
NSDictionary* newSettings = @{ (NSString*) kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)};
_videoOutput.videoSettings = newSettings;
//discard if the data output queue is blocked
[_videoOutput setAlwaysDiscardsLateVideoFrames:YES];
// process frames on another queue
dispatch_queue_t videoDataOutputQueue;
videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
[_videoOutput setSampleBufferDelegate:videoBufferDelegate queue:videoDataOutputQueue];
[_session addInput:_deviceInput];
[_session addOutput:_videoOutput];
[_session startRunning];
After this, the session is set up appropriately; it logs "resolution preset changed" correctly and forwards the video data to the delegate on another queue to process it. When I inspect session.sessionPreset, it says that the preset is AVCaptureSessionPreset640x480.
Now in the delegate:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
//get the image from buffer, transform it to CIImage
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
self.image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];
When inspecting self.image.extent.size, it shows an incorrect size of 1200x720, as if I had not changed the preset. Even when inspecting the method's argument sampleBuffer, it shows the dimensions as 1200x720.
I have browsed the internet and the Apple reference for a couple of hours now but could not find a solution. I hope you can save me!
I seem to have found a solution by myself (or at least a workaround). Commenting out the following lines led to the intended changes of the buffer's resolution:
NSDictionary* settings = @{ (NSString*) kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)};
_videoOutput.videoSettings = settings;
My assumption is that setting these output settings overrides the AVCaptureSessionPreset. However, it is not completely clear to me why that should be the case (the pixel-format setting shouldn't have an influence on the resolution, right?).
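If the 32BGRA pixel format is still needed later (for example, to hand the frames to Core Image), a possible alternative, offered only as an assumption about the OS X behaviour rather than something tested here, is to keep the pixel-format entry but also ask the video data output for an explicit buffer size via kCVPixelBufferWidthKey and kCVPixelBufferHeightKey:
// Sketch: request both the pixel format and an explicit output size (OS X only).
_videoOutput.videoSettings = @{
    (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
    (NSString *)kCVPixelBufferWidthKey           : @640,
    (NSString *)kCVPixelBufferHeightKey          : @480
};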

Adding filters to video with AVFoundation (OSX) - how do I write the resulting image back to AVWriter?

Setting the scene
I am working on a video processing app that runs from the command line to read in, process and then export video. I'm working with 4 tracks:
1. Lots of clips that I append into a single track to make one video. Let's call this the ugcVideoComposition.
2. Clips with alpha, which are positioned on a second track and, using layer instructions, are composited on export to play back over the top of the ugcVideoComposition.
3. A music audio track.
4. An audio track for the ugcVideoComposition containing the audio from the clips appended into the single track.
I have this all working; I can composite it and export it correctly using AVAssetExportSession.
The problem
What I now want to do is apply filters and gradients to the ugcVideoComposition.
My research so far suggests that this is done by using AVAssetReader and AVAssetWriter, extracting a CIImage, manipulating it with filters and then writing that out.
I haven't yet got all the functionality I had above working, but I have managed to get the ugcVideoComposition read in and written back out to disk using the AssetReader and AssetWriter.
BOOL done = NO;
while (!done)
{
while ([assetWriterVideoInput isReadyForMoreMediaData] && !done)
{
CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
if (sampleBuffer)
{
// Let's try create an image....
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer];
// < Apply filters and transformations to the CIImage here
// < HOW TO GET THE TRANSFORMED IMAGE BACK INTO SAMPLE BUFFER??? >
// Write things back out.
[assetWriterVideoInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
}
else
{
// Find out why we couldn't get another sample buffer....
if (assetReader.status == AVAssetReaderStatusFailed)
{
NSError *failureError = assetReader.error;
// Do something with this error.
}
else
{
// Some kind of success....
done = YES;
[assetWriter finishWriting];
}
}
}
}
As you can see, I can even get the CIImage from the CMSampleBuffer, and I'm confident I can work out how to manipulate the image and apply any effects etc. I need. What I don't know how to do is put the resulting manipulated image BACK into the SampleBuffer so I can write it out again.
The question
Given a CIImage, how can I put that into a sampleBuffer to append it with the assetWriter?
Any help appreciated. The AVFoundation documentation is terrible; it either misses crucial points (like how to put an image back after you've extracted it) or is focused on rendering images to the iPhone screen, which is not what I want to do.
Much appreciated and thanks!
I eventually found a solution by digging through a lot of half complete samples and poor AVFoundation documentation from Apple.
The biggest confusion is that while at a high level, AVFoundation is "reasonably" consistent between iOS and OSX, the lower level items behave differently, have different methods and different techniques. This solution is for OSX.
Setting up your AssetWriter
The first thing is to make sure that when you set up the asset writer, you add an adaptor to read in from a CVPixelBuffer. This buffer will contain the modified frames.
// Create the asset writer input and add it to the asset writer.
AVAssetWriterInput *assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[[videoTracks objectAtIndex:0] mediaType] outputSettings:videoSettings];
// Now create an adaptor that writes pixels too!
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
sourcePixelBufferAttributes:nil];
assetWriterVideoInput.expectsMediaDataInRealTime = NO;
[assetWriter addInput:assetWriterVideoInput];
Reading and Writing
The challenge here is that I couldn't find directly comparable methods between iOS and OS X; iOS has the ability to render a context directly to a PixelBuffer, whereas OS X does NOT support that option. The context is also configured differently between iOS and OS X.
Note that you should also include QuartzCore.framework in your Xcode project.
Creating the context on OSX.
CIContext *context = [CIContext contextWithCGContext:
[[NSGraphicsContext currentContext] graphicsPort]
options: nil]; // We don't want to always create a context so we put it outside the loop
Now you want to loop through, reading off the AssetReader and writing to the AssetWriter... but note that you are writing via the adaptor created previously, not with the SampleBuffer.
while ([adaptor.assetWriterInput isReadyForMoreMediaData] && !done)
{
CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
if (sampleBuffer)
{
CMTime currentTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
// GRAB AN IMAGE FROM THE SAMPLE BUFFER
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
[NSNumber numberWithInt:640.0], kCVPixelBufferWidthKey,
[NSNumber numberWithInt:360.0], kCVPixelBufferHeightKey,
nil];
CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer options:options];
//-----------------
// FILTER IMAGE - APPLY ANY FILTERS IN HERE
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
[filter setDefaults];
[filter setValue: inputImage forKey: kCIInputImageKey];
[filter setValue: @1.0f forKey: kCIInputIntensityKey];
CIImage *outputImage = [filter valueForKey: kCIOutputImageKey];
//-----------------
// RENDER OUTPUT IMAGE BACK TO PIXEL BUFFER
// 1. Firstly render the image
CGImageRef finalImage = [context createCGImage:outputImage fromRect:[outputImage extent]];
// 2. Grab the size
CGSize size = CGSizeMake(CGImageGetWidth(finalImage), CGImageGetHeight(finalImage));
// 3. Convert the CGImage to a PixelBuffer
CVPixelBufferRef pxBuffer = NULL;
// pixelBufferFromCGImage is documented below.
pxBuffer = [self pixelBufferFromCGImage: finalImage andSize: size];
// 4. Write things back out.
// Calculate the frame time
CMTime frameTime = CMTimeMake(1, 30); // Represents 1 frame at 30 FPS
CMTime presentTime=CMTimeAdd(currentTime, frameTime); // Note that if you actually had a sequence of images (an animation or transition perhaps), your frameTime would represent the number of images / frames, not just 1 as I've done here.
// Finally write out using the adaptor.
[adaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
}
else
{
// Find out why we couldn't get another sample buffer....
if (assetReader.status == AVAssetReaderStatusFailed)
{
NSError *failureError = assetReader.error;
// Do something with this error.
}
else
{
// Some kind of success....
done = YES;
[assetWriter finishWriting];
}
}
}
}
Creating the PixelBuffer
There MUST be an easier way, however for now, this works and is the only way I found to get directly from a CIImage to a PixelBuffer (via a CGImage) on OSX. The following code is cut and paste from AVFoundation + AssetWriter: Generate Movie With Images and Audio
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image andSize:(CGSize) size
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
size.height, 8, 4*size.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
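There may indeed be an easier way if you can target OS X 10.11 or later: CIContext's render:toCVPixelBuffer: can draw the filtered CIImage straight into a pixel buffer, skipping the CGImage round trip. This is a rough sketch only, assuming the adaptor was created with non-nil sourcePixelBufferAttributes (so that its pixelBufferPool is populated once the writer session has started) and reusing the outputImage and presentTime names from the loop above:
CVPixelBufferRef poolBuffer = NULL;
CVReturn poolResult = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                         adaptor.pixelBufferPool,
                                                         &poolBuffer);
if (poolResult == kCVReturnSuccess && poolBuffer != NULL) {
    static CIContext *ciContext = nil;
    if (!ciContext) {
        ciContext = [CIContext contextWithOptions:nil]; // create once, reuse for every frame
    }
    [ciContext render:outputImage toCVPixelBuffer:poolBuffer];   // OS X 10.11+ only
    [adaptor appendPixelBuffer:poolBuffer withPresentationTime:presentTime];
    CVPixelBufferRelease(poolBuffer);
}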
Try using: SDAVAssetExportSession
SDAVAssetExportSession on GitHub
and then implementing a delegate to process the pixels
- (void)exportSession:(SDAVAssetExportSession *)exportSession renderFrame:(CVPixelBufferRef)pixelBuffer withPresentationTime:(CMTime)presentationTime toBuffer:(CVPixelBufferRef)renderBuffer
{ /* Do CIImage and CIFilter work in here */ }
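As a rough illustration of that delegate (consult SDAVAssetExportSession's own documentation for the authoritative usage), the body could read the incoming pixelBuffer, filter it with Core Image and render the result into the renderBuffer the session hands you. Here self.ciContext is assumed to be a CIContext created once elsewhere, and render:toCVPixelBuffer: again needs OS X 10.11 or later:
- (void)exportSession:(SDAVAssetExportSession *)exportSession
          renderFrame:(CVPixelBufferRef)pixelBuffer
 withPresentationTime:(CMTime)presentationTime
             toBuffer:(CVPixelBufferRef)renderBuffer
{
    CIImage *inputImage = [CIImage imageWithCVImageBuffer:pixelBuffer];
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@1.0 forKey:kCIInputIntensityKey];
    // Assumed: self.ciContext was created once (e.g. [CIContext contextWithOptions:nil]).
    [self.ciContext render:filter.outputImage toCVPixelBuffer:renderBuffer];
}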

NSOpenGLView and CVDisplayLink, no default frame buffer

I have an NSOpenGLView and OpenGL code that works with an NSTimer running in the main loop (calling setNeedsDisplay and drawRect). I would like to use a CVDisplayLink so I can get a better frame rate without overdriving the timer. I copied most of the code from Apple's OSXGLEssentials example. The display link starts and the callback runs, but nothing is actually drawn on screen. glGetError returns GL_INVALID_FRAMEBUFFER_OPERATION.
glCheckFramebufferStatus returns GL_FRAMEBUFFER_UNDEFINED for GL_FRAMEBUFFER, GL_DRAW_FRAMEBUFFER and GL_READ_FRAMEBUFFER.
Info from the documentation:
GL_FRAMEBUFFER_UNDEFINED is returned if target is the default framebuffer, but the default framebuffer does not exist.
Here are the relevant bits of code:
- (void)awakeFromNib {
NSOpenGLPixelFormatAttribute attributes[] = {
NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core, // Core Profile !
NSOpenGLPFADoubleBuffer,
NSOpenGLPFAAccelerated,
NSOpenGLPFAColorSize, 24,
NSOpenGLPFAAlphaSize, 8,
NSOpenGLPFAAllowOfflineRenderers,
0
};
NSOpenGLPixelFormat *format = [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];
NSOpenGLContext *context = [[NSOpenGLContext alloc] initWithFormat:format shareContext: nil];
// [context setView: self];
[self setPixelFormat: format];
[self setOpenGLContext: context];
}
- (void)prepareOpenGL {
[super prepareOpenGL];
NSOpenGLContext* context = [self openGLContext];
[context makeCurrentContext];
// Synchronize buffer swaps with vertical refresh rate
GLint swapInt = 1;
[context setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
MyDisplay_setup();
MyDisplay_initScene(_bounds.size.width, _bounds.size.height);
CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);
CVDisplayLinkSetOutputCallback(displayLink, &displayLinkCallback, (__bridge void *)self);
CGLPixelFormatObj cglPixelFormat = [[self pixelFormat] CGLPixelFormatObj];
CVDisplayLinkSetCurrentCGDisplayFromOpenGLContext(displayLink, [context CGLContextObj], cglPixelFormat);
CVDisplayLinkStart(displayLink);
}
static CVReturn displayLinkCallback(CVDisplayLinkRef displayLink, const CVTimeStamp* now, const CVTimeStamp* outputTime,
CVOptionFlags flagsIn, CVOptionFlags* flagsOut, void* displayLinkContext) {
@autoreleasepool {
[(__bridge MyView*)displayLinkContext redraw];
}
return kCVReturnSuccess;
}
- (void)redraw {
NSOpenGLContext* context = [self openGLContext];
[context makeCurrentContext];
CGLLockContext([context CGLContextObj]);
MyDisplay_drawScene();
CGLFlushDrawable([context CGLContextObj]);
CGLUnlockContext([context CGLContextObj]);
}
This is an old question, but this problem still persists, so here's my answer. For reference, I don't use Xcode at all; I write code in Vim and compile with Clang, so this is the default behaviour and has nothing to do with Interface Builder. I use only the NSOpenGLView, NSOpenGLContext, and CVDisplayLink for rendering. I have a MacBook Pro (Retina, 15-inch, Mid 2014) running macOS Sierra.
While debugging I found that NSOpenGLContext's view property returned nil for the first few frames after starting the display link. This was enough to corrupt the context if you did any rendering (other than glClear) while the view wasn't attached, and it caused the same GL_FRAMEBUFFER_UNDEFINED error.
The easiest way to solve this, I found, was to assign the NSOpenGLView to its NSOpenGLContext after creation like this:
NSOpenGLView *view = ...;
view.openglContext.view = view;
I'm baffled that, apparently, it's necessary to do this even though the NSOpenGLContext is created by the NSOpenGLView, but there it is.
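A related defensive measure (just a sketch of a variant of the redraw method shown above, not part of the original answer) is to skip rendering from the display link callback until the context actually has a view attached, so nothing is drawn against the missing default framebuffer:
- (void)redraw {
    NSOpenGLContext *context = [self openGLContext];
    if (context.view == nil) {
        // The default framebuffer doesn't exist yet; drawing now triggers GL_FRAMEBUFFER_UNDEFINED.
        return;
    }
    [context makeCurrentContext];
    CGLLockContext([context CGLContextObj]);
    MyDisplay_drawScene();
    CGLFlushDrawable([context CGLContextObj]);
    CGLUnlockContext([context CGLContextObj]);
}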
The trick is to open the View Effects inspector and uncheck the parent View in the Core Animation Layer section.

AVAudioPlayer memory leak

I'm stuck on some weird memory leak problem related to the AVAudioPlayer and I need help after trying everything that came to mind.
Here is the short description of the problem - code appears right after.
I initialize my player and start playing the sound track in an endless loop (an endless loop or one-time play did not change the problem).
Several seconds after the music has started, I switch to another sound track: I create a new player, initialize it, release the old one (which is playing) and then set the new one in place and play it.
At that point (right after I call [Player play] on the new player) I get a memory leak (of 3.5 KB).
I tried the following:
Stop the old player and then release it - no effect
Release the Player right after the play instruction - did not start playing
Release twice the old player - crash
Memory leak DOES NOT happen when I create and play the first Player!
Also, in the reference it does say that the 'play' is async and so probably it increases the ref count by 1, but in this case, why didn't [Player stop] help?
Thanks,
Here are some parts of the code about how I use it:
- (void) loadAndActivateAudioFunction {
NSBundle *mainBundle = [NSBundle mainBundle];
NSError *error;
NSURL *audioURL = [NSURL fileURLWithPath:[mainBundle pathForResource: Name ofType: Type]];
AVAudioPlayer *player = [(AVAudioPlayer*) [AVAudioPlayer alloc] initWithContentsOfURL:audioURL error:&error];
if (!player) {
DebugLog(#"Audio Load Error: no Player: %#", [error localizedDescription]);
DuringAudioPrep = false;
return;
}
[self lock];
[self setAudioPlayer: player];
[self ActivateAudioFunction];
[self unlock];
}
- (void) setAudioPlayer : (AVAudioPlayer *) player {
if (Player)
{
if ([Player isPlaying] || Repeat) // The indication was off???
[Player stop];
[Player release];
}
Player = player;
}
- (void) ActivateAudioFunction {
[Player setVolume: Volume];
[Player setNumberOfLoops: Repeat];
[Player play];
DuringAudioPrep = false;
}
Here is a method to create an AVAudioPlayer without causing memory leaks. See this page for an explanation.
I have confirmed in my app that this removed my AVAudioPlayer leaks 100%.
- (AVAudioPlayer *)audioPlayerWithContentsOfFile:(NSString *)path {
NSData *audioData = [NSData dataWithContentsOfFile:path];
AVAudioPlayer *player = [AVAudioPlayer alloc];
if([player initWithData:audioData error:NULL]) {
[player autorelease];
} else {
[player release];
player = nil;
}
return player;
}
Implement the protocol AVAudioPlayerDelegate and its method audioPlayerDidFinishPlaying:successfully:, then release the audio player object there, e.g.:
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag {
[player release]; // releases the player object
}
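Note that this callback only fires if the player's delegate was assigned before playback starts; a minimal sketch:
player.delegate = self; // self must conform to AVAudioPlayerDelegate
[player play];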
Your code looks OK to me as far as I've seen, so maybe there is code elsewhere which is causing the problem.
I will say that you're using a sort of odd idiom. Rather than retaining on create and releasing on set, I'd do something like this:
// new players will always be created autoreleased.
AVAudioPlayer *player = [[(AVAudioPlayer*) [AVAudioPlayer alloc] initWithContentsOfURL:audioURL error:&error] autorelease];
- (void) setAudioPlayer : (AVAudioPlayer *) player
{
if (Player)
{
if ([Player isPlaying] || Repeat) // The indication was off???
[Player stop];
[Player release];
}
Player = [player retain];
}
In this way, you only retain "player" objects when they actually come into your setAudioPlayer method, which might make it easier to track down.
Also, verify that it's actually an AVAudioPlayer object which is leaking. Instruments should be able to verify this for you.
Try adding MediaPlayer.framework to your project

Changing NSApplicationIcon across a running application?

I'd like to adjust the NSApplicationIcon image that gets shown automatically in all alerts to be something different than what is in the app bundle.
I know that it's possible to set the dock icon with [NSApplication setApplicationIconImage:] -- but this only affects the dock, and nothing else.
I'm able to work around this issue some of the time: when I have an NSAlert *, I can call setIcon: to display my alternate image.
Unfortunately, I have a lot of nibs with NSImageViews set to NSApplicationIcon that I would like to affect, and it would be a hassle to create outlets and put in code to change the icon. And for any alerts that I'm bringing up with the BeginAlert... type calls (which don't give me an NSAlert object to muck with), I'm completely out of luck.
Can anybody think of a reasonable way to globally (for the life of a running application) override the NSApplicationIcon that is used by AppKit, with my own image, so that I can get 100% of the alerts replaced (and make my code simpler)?
Swizzle the [NSImage imageNamed:] method? This method works at least on Snow Leopard, YMMV.
In an NSImage category:
@implementation NSImage (Magic)
+ (void)load {
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
// have to call imageNamed: once prior to swizzling to avoid infinite loop
[[NSApplication sharedApplication] applicationIconImage];
// swizzle!
NSError *error = nil;
if (![NSImage jr_swizzleClassMethod:@selector(imageNamed:) withClassMethod:@selector(_sensible_imageNamed:) error:&error])
NSLog(@"couldn't swizzle imageNamed: application icons will not update: %@", error);
[pool release];
}
+ (id)_sensible_imageNamed:(NSString *)name {
if ([name isEqualToString:@"NSApplicationIcon"])
return [[NSApplication sharedApplication] applicationIconImage];
return [self _sensible_imageNamed:name];
}
@end
With this hacked up (untested, just wrote it) jr_swizzleClassMethod:... implementation:
+ (BOOL)jr_swizzleClassMethod:(SEL)origSel_ withClassMethod:(SEL)altSel_ error:(NSError**)error_ {
#if OBJC_API_VERSION >= 2
Method origMethod = class_getClassMethod(self, origSel_);
if (!origMethod) {
SetNSError(error_, #"original method %# not found for class %#", NSStringFromSelector(origSel_), [self className]);
return NO;
}
Method altMethod = class_getClassMethod(self, altSel_);
if (!altMethod) {
SetNSError(error_, #"alternate method %# not found for class %#", NSStringFromSelector(altSel_), [self className]);
return NO;
}
id metaClass = objc_getMetaClass(class_getName(self));
class_addMethod(metaClass,
origSel_,
class_getMethodImplementation(metaClass, origSel_),
method_getTypeEncoding(origMethod));
class_addMethod(metaClass,
altSel_,
class_getMethodImplementation(metaClass, altSel_),
method_getTypeEncoding(altMethod));
method_exchangeImplementations(class_getClassMethod(self, origSel_), class_getClassMethod(self, altSel_));
return YES;
#else
assert(0);
return NO;
#endif
}
Then, this method to illustrate the point:
- (void)doMagic:(id)sender {
static int i = 0;
i = (i+1) % 2;
if (i)
[[NSApplication sharedApplication] setApplicationIconImage:[NSImage imageNamed:NSImageNameBonjour]];
else
[[NSApplication sharedApplication] setApplicationIconImage:[NSImage imageNamed:NSImageNameDotMac]];
// any pre-populated image views have to be set to nil first, otherwise their icon won't change
// [imageView setImage:nil];
// [imageView setImage:[NSImage imageNamed:NSImageNameApplicationIcon]];
NSAlert *alert = [[[NSAlert alloc] init] autorelease];
[alert setMessageText:@"Shazam!"];
[alert runModal];
}
A couple of caveats:
Any image view that has already been created must have setImage: called twice, as seen above, to register the image change. I don't know why.
There may be a better way to force the initial imageNamed: call with @"NSApplicationIcon" than how I've done it.
Try [myImage setName:@"NSApplicationIcon"] (after setting it as the application icon image in NSApp).
Note: On 10.6 and later, you can and should use NSImageNameApplicationIcon instead of the string literal @"NSApplicationIcon".
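Putting that suggestion together, a minimal sketch (the resource name "MyAlternateIcon" is hypothetical):
NSImage *customIcon = [NSImage imageNamed:@"MyAlternateIcon"]; // hypothetical image in the app bundle
[NSApp setApplicationIconImage:customIcon];
// setName: can return NO if another image (the original icon) still owns the name,
// in which case the old image's name may need to be cleared first.
[customIcon setName:NSImageNameApplicationIcon];               // @"NSApplicationIcon" before 10.6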
