UIView layer renderInContext Memory not getting released - memory-management

We are trying to create multiple PDF files using UIView.layer's renderInContext: method.
The code below runs inside an autoreleasepool.
--- loop
UIGraphicsBeginPDFContextToFile(documentDirectoryFilename, CGRectZero, nil);
UIGraphicsBeginPDFPage();
CGContextRef pdfContext = UIGraphicsGetCurrentContext();
[view.layer renderInContext:pdfContext];
view = nil;
pdfContext = nil;
UIGraphicsEndPDFContext();
--- loop ends
After a couple of iterations the resident memory grows to about 20 MB, and each subsequent iteration adds to the total memory used.
Somehow ARC is not releasing the memory used in rendering the PDF, and the application eventually crashes due to low memory.
The application is using ARC.
Any help or pointers to resolve the issue would be much appreciated.
Thanks
UPDATED:
Thanks for the quick reply.
My apologies for having a typo in the pseudocode.
Here is sample code from a test project.
-(BOOL)addUIViewToPDFFile:(UIView *)view newBounds:(CGRect)newBounds pdfFile:(NSString *)pdfFile {
    BOOL retFlag = YES;
    NSArray *documentDirectories = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentDirectory = [documentDirectories objectAtIndex:0];
    NSString *documentDirectoryFilename = [documentDirectory stringByAppendingPathComponent:pdfFile];
    UIGraphicsBeginPDFContextToFile(documentDirectoryFilename, CGRectZero, nil);
    CGContextRef pdfContext = UIGraphicsGetCurrentContext();
    if ([NSThread isMainThread]) {
        NSLog(@"Main Thread");
    } else {
        NSLog(@"Not in Main Thread");
    }
    UIGraphicsBeginPDFPage();
    logMemUsage1();
    [view.layer renderInContext:pdfContext];
    UIGraphicsEndPDFContext();
    pdfContext = nil;
    logMemUsage1();
    return retFlag;
}
- (IBAction)createPdf:(id)sender {
    for (int i = 0; i < 10; i++) {
        @autoreleasepool {
            PDFViewController *vc = [[PDFViewController alloc] init];
            [vc loadView];
            NSString *PdfFileName = [NSString stringWithFormat:@"TestPdf%d.pdf", i];
            [self addUIViewToPDFFile:vc.view newBounds:self.view.frame pdfFile:PdfFileName];
            vc = nil;
        }
    }
}
One more point to add: when the application goes into the background (pressing the hardware home button) and then comes back to the foreground, the memory seems to get cleared and released.
I would expect the memory used for creating each PDF to be released after every iteration.
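One thing that may be worth trying (a sketch of a common workaround, not a confirmed fix for this exact case): instead of generating all the PDFs in a tight loop, schedule each document on its own pass of the main run loop, so system-level autorelease pools and Core Animation caches get a chance to drain between documents. This reuses the addUIViewToPDFFile:newBounds:pdfFile: method above; the recursive scheduling method below is hypothetical.

-(void)createPdfAtIndex:(NSInteger)index upTo:(NSInteger)count {
    if (index >= count) {
        return;
    }
    @autoreleasepool {
        // Same per-document work as in createPdf: above.
        PDFViewController *vc = [[PDFViewController alloc] init];
        [vc loadView];
        NSString *pdfFileName = [NSString stringWithFormat:@"TestPdf%ld.pdf", (long)index];
        [self addUIViewToPDFFile:vc.view newBounds:self.view.frame pdfFile:pdfFileName];
    }
    // Kick off the next document on a later run-loop pass instead of looping synchronously.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self createPdfAtIndex:index + 1 upTo:count];
    });
}

Calling [self createPdfAtIndex:0 upTo:10]; from the button action would then replace the for loop.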

Related

passing video frame to Core Image on osx

Hi all you awesome coders! I've put together this thing from various helpful sources over the last couple of weeks (including a lot of posts from Stack Overflow), trying to create something that will take a webcam feed and detect smiles when they occur (it might as well draw boxes around the faces and the smiles too; that doesn't seem hard once they are detected). Please give me some leeway if the code is messy, because I'm still very much learning.
Currently I'm stuck at trying to pass the image to a CIImage so it can be analysed for faces (I plan to deal with smiles after the face hurdle is overcome). As it is, the build succeeds if I comment out the block after (5): it brings up a simple AVCaptureVideoPreviewLayer in a window. I think this is what I've called "rootLayer", so it's the first layer of the displayed output, and after I detect faces in the video frames I'll show a rectangle following the "bounds" of any detected face in a new layer overlaid on top of this one, which I've called "previewLayer"... correct?
But with the block after (5) in place, the build fails with three linker errors:
Undefined symbols for architecture x86_64:
"_CMCopyDictionaryOfAttachments", referenced from:
-[AVRecorderDocument captureOutput:didOutputSampleBuffer:fromConnection:] in AVRecorderDocument.o
"_CMSampleBufferGetImageBuffer", referenced from:
-[AVRecorderDocument captureOutput:didOutputSampleBuffer:fromConnection:] in AVRecorderDocument.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Can anyone tell me where I'm going wrong and what my next steps are?
Thanks for any help. I've been stuck at this point for a couple of days and I can't figure it out; all the examples I can find are for iOS and don't work on OS X.
- (id)init
{
self = [super init];
if (self) {
// Move the output part to another function
[self addVideoDataOutput];
// Create a capture session
session = [[AVCaptureSession alloc] init];
// Set a session preset (resolution)
self.session.sessionPreset = AVCaptureSessionPreset640x480;
// Select devices if any exist
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (videoDevice) {
[self setSelectedVideoDevice:videoDevice];
} else {
[self setSelectedVideoDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeMuxed]];
}
NSError *error = nil;
// Add an input
videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
[self.session addInput:self.videoDeviceInput];
// Start the session (app opens slower if it is here but I think it is needed in order to send the frames for processing)
[[self session] startRunning];
// Initial refresh of device list
[self refreshDevices];
}
return self;
}
-(void) addVideoDataOutput {
// (1) Instantiate a new video data output object
AVCaptureVideoDataOutput * captureOutput = [[AVCaptureVideoDataOutput alloc] init];
captureOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
// discard if the data output queue is blocked (while CI processes the still image)
captureOutput.alwaysDiscardsLateVideoFrames = YES;
// (2) The sample buffer delegate requires a serial dispatch queue
dispatch_queue_t captureOutputQueue;
captureOutputQueue = dispatch_queue_create("CaptureOutputQueue", DISPATCH_QUEUE_SERIAL);
[captureOutput setSampleBufferDelegate:self queue:captureOutputQueue];
dispatch_release(captureOutputQueue); //what does this do and should it be here or after we receive the processed image back?
// (3) Define the pixel format for the video data output
NSString * key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber * value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary * settings = @{key:value};
[captureOutput setVideoSettings:settings];
// (4) Configure the output port on the captureSession property
if ( [self.session canAddOutput:captureOutput] )
[session addOutput:captureOutput];
}
// Implement the Sample Buffer Delegate Method
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// I *think* I have a video frame now in some sort of image format... so have to convert it into a CIImage before I can process it:
// (5) Convert CMSampleBufferRef to CVImageBufferRef, then to a CI Image (per weichsel's answer in July '13)
CVImageBufferRef cvFrameImage = CMSampleBufferGetImageBuffer(sampleBuffer); // Having trouble here, prog. stops and won't recognise CMSampleBufferGetImageBuffer.
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
self.ciFrameImage = [[CIImage alloc] initWithCVImageBuffer:cvFrameImage options:(__bridge NSDictionary *)attachments];
//self.ciFrameImage = [[CIImage alloc] initWithCVImageBuffer:cvFrameImage];
//OK so it is a CIImage. Find some way to send it to a separate CIImage function to find the faces, then smiles. Then send it somewhere else to be displayed on top of AVCaptureVideoPreviewLayer
//TBW
}
- (NSString *)windowNibName
{
return @"AVRecorderDocument";
}
- (void)windowControllerDidLoadNib:(NSWindowController *) aController
{
[super windowControllerDidLoadNib:aController];
// Attach preview to session
CALayer *rootLayer = self.previewView.layer;
[rootLayer setMasksToBounds:YES]; //aaron added
self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
[self.previewLayer setBackgroundColor:CGColorGetConstantColor(kCGColorBlack)];
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
[self.previewLayer setFrame:[rootLayer bounds]];
//[newPreviewLayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable]; //don't think I need this for OSX?
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
[rootLayer addSublayer:previewLayer];
// [newPreviewLayer release]; //what's this for?
}
(moved from the comments section)
Wow. I guess two days and one Stack Overflow post is what it takes to figure out that I hadn't added CoreMedia.framework to my project.
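For the next step mentioned in the question (finding faces, and later smiles, in the CIImage), a minimal sketch using CIDetector could look like the following. This is an assumption about how the processing might be wired up, not code from the original post; the detector should be created once and reused rather than rebuilt for every frame, and the faceDetector property is hypothetical.

#import <CoreImage/CoreImage.h> // or <QuartzCore/QuartzCore.h> on older OS X SDKs

// Create the detector once, e.g. from -init.
- (void)setUpFaceDetector {
    self.faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                           context:nil
                                           options:@{ CIDetectorAccuracy : CIDetectorAccuracyLow }];
}

// Call this with self.ciFrameImage from the sample buffer delegate method.
- (void)findFacesInImage:(CIImage *)frameImage {
    // CIDetectorSmile needs a 10.9+ SDK; drop the option if targeting earlier systems.
    NSArray *features = [self.faceDetector featuresInImage:frameImage
                                                   options:@{ CIDetectorSmile : @YES }];
    for (CIFaceFeature *face in features) {
        // face.bounds is in the image's coordinate space; it still has to be converted
        // to the preview layer's coordinates before drawing an overlay rectangle.
        NSLog(@"Face at %@, smiling: %d", NSStringFromRect(NSRectFromCGRect(face.bounds)), face.hasSmile);
    }
}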

Efficiently handling UIImage loading in UICollectionView

In my app the user can take a photo of their pet, or whatever, and it will show in the collection view as a circular picture. In cellForItemAtIndexPath: I check for the existence of a user-generated photo (stored in the Documents directory as a PNG file); if there isn't one, a default photo is displayed.
Issues:
- When scrolling the UICollectionView, the image load halts the smoothness of the scroll UI as the picture is about to appear on screen.
- If you scroll the picture off the screen many times, the app crashes due to memory pressure.
QUESTION: Is there an efficient way of loading and keeping the images in memory without having to keep loading the image after the first time it is displayed?
I thought rasterizing the images would help, but it seems the UI scrolling and memory issues persist.
Here is the code:
-(UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *CellIdentifierPortrait = @"allPetCellsPortrait";
    AllPetsCell *cell = (AllPetsCell *)[collectionView dequeueReusableCellWithReuseIdentifier:CellIdentifierPortrait forIndexPath:indexPath];
    CGRect rect = CGRectMake(cell.petView.center.x, cell.petView.center.y, cell.petView.frame.size.width, cell.petView.frame.size.height);
    UIImageView *iv = [[UIImageView alloc] initWithFrame:rect];
    //set real image here
    NSString *petNameImage = [NSString stringWithFormat:@"%@.png", self.petNames[indexPath.section][indexPath.item]];
    NSString *filePath = [_ad documentsPathForFileName:petNameImage];
    BOOL fileExists = [[NSFileManager defaultManager] fileExistsAtPath:filePath];
    iv.image = nil;
    if (fileExists) {
        [_allPetsCollectionView performBatchUpdates:^{
            if (iv && fileExists) {
                iv.image = [UIImage imageWithContentsOfFile:filePath];
            }
        } completion:^(BOOL finished) {
            //after
        }];
        //loadedImage = YES;
        [collectionView layoutIfNeeded];
    } //file exists
    else {
        iv.image = [UIImage imageNamed:[NSString stringWithFormat:@"%@", self.contents[indexPath.section][indexPath.item]]];
        [cell.petNameLabel setTextColor:[UIColor darkGrayColor]]; //be aware of color over pictures
        shadow.shadowColor = [UIColor whiteColor];
    }
    cell.petView.clipsToBounds = YES;
    UIColor *color = MediumPastelGreenColor;
    if ([[NSString stringWithFormat:@"%@", self.contents[indexPath.section][indexPath.item]] isEqualToString:@"addpet.png"]) {
        color = MediumLightSkyBlueColor;
    }
    [cell.petView setBounds:rect forBorderColor:color];
    [cell.petView addSubview:iv];
    if (![[NSString stringWithFormat:@"%@", self.contents[indexPath.section][indexPath.item]] isEqualToString:@"addpet.png"])
        [iv addSubview:cell.petNameLabel];
    //TODO: optimize the cell by rasterizing it?
    if ([_contents count] > 4) {
        cell.layer.shouldRasterize = YES;
    }
    return cell;
}
Thanks!
Assigning an image to the collection view cell's image view doesn't cause any performance problem by itself.
There are two problems here:
1. You are fetching the contents (the data source for the collection view) from within the collection view's delegate method. This is causing the delay.
2. You are adding an image view holding the image again and again. Each time the cell is displayed, a new image view holding newly fetched data is added as a subview to the reused cell (because the new image view covers the old one and both have the same frame, you don't notice it, but it eats up memory). This is causing the memory-pressure crash.
SOLUTION
FOR IMPROVING PERFORMANCE
Fetch the image contents from your model class (if you are using one), or from your view controller's viewDidLoad (you should never fetch the data source from within the collection view's delegate method). Perform the fetching on a background queue so it doesn't affect your UI in any way.
Store the contents in an array and populate your collection view cells only from that array.
FOR AVOIDING MEMORY CRASH
As long as you are not using a custom collection view cell, it's better to remove all the subviews from the collectionViewCell's content view each time the cell is dequeued; a rough sketch follows.
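A rough illustration of both points, using names from the question's code (so treat it as a sketch rather than a drop-in fix): clear out whatever the previous use of the cell added before adding anything new, and read the image from an array that was filled earlier instead of hitting the file system inside the delegate method.

AllPetsCell *cell = (AllPetsCell *)[collectionView dequeueReusableCellWithReuseIdentifier:CellIdentifierPortrait forIndexPath:indexPath];

// 1. Don't let image views pile up on the reused cell.
[cell.petView.subviews makeObjectsPerformSelector:@selector(removeFromSuperview)];

// 2. Read the image from an array populated up front (e.g. in viewDidLoad, on a
//    background queue) instead of loading it from disk here. preloadedPetImages
//    is a hypothetical property holding those prefetched UIImages.
UIImage *petImage = self.preloadedPetImages[indexPath.section][indexPath.item];
UIImageView *iv = [[UIImageView alloc] initWithFrame:cell.petView.bounds];
iv.image = petImage;
[cell.petView addSubview:iv];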
I hope this solves your problem :)
I recently discovered NSCache, which is a dictionary-like object that I can store my already-loaded images in and then check whether they are loaded from within -cellForItemAtIndexPath:.
I changed the code above to accommodate this, also using a custom dispatch queue that dispatches back to the main queue. This still causes a slight pause in the scrolling when the initial image is loaded, but after that it is read from the image cache. Not totally optimal yet, but I think I am headed in the right direction.
Here is the code:
if (fileExists) {
    //DONE: implement NSCache and dispatch queue
    const char *petImageQ = "pet image";
    dispatch_queue_t petImageQueue = dispatch_queue_create(petImageQ, NULL);
    //check to see if image exists in cache - if not, add it.
    if ([_petImagesCache objectForKey:petNameImage] == nil) {
        dispatch_async(petImageQueue, ^{
            dispatch_async(dispatch_get_main_queue(), ^{
                [collectionView performBatchUpdates:^{
                    iv.image = [UIImage imageWithContentsOfFile:filePath];
                } completion:^(BOOL finished) {
                    //add to cache
                    [_petImagesCache setObject:iv.image forKey:petNameImage];
                }];
            });
        });
    }
    else {
        //load from cache
        iv.image = [_petImagesCache objectForKey:petNameImage];
    }
}
I hope this helps. Please add more to this for improving it!
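One further improvement worth trying (a sketch against the code above, not tested): do the file loading on the background queue and only hop back to the main queue to assign the image, so the first load of each file no longer blocks scrolling, and create the queue once (e.g. in viewDidLoad) rather than for every cell.

if ([_petImagesCache objectForKey:petNameImage] == nil) {
    dispatch_async(petImageQueue, ^{
        // Expensive part off the main thread: read the file.
        UIImage *loadedImage = [UIImage imageWithContentsOfFile:filePath];
        if (loadedImage == nil) {
            return;
        }
        [_petImagesCache setObject:loadedImage forKey:petNameImage];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Cheap UIKit work back on the main thread. In a real cell you would
            // re-check the index path here, since the cell may have been reused
            // while the image was loading.
            iv.image = loadedImage;
        });
    });
} else {
    iv.image = [_petImagesCache objectForKey:petNameImage];
}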

NSOpenGLView and CVDisplayLink, no default frame buffer

I have an NSOpenGLView and OpenGL code that works with an NSTimer running in the main loop (calling setNeedsDisplay and drawRect). I would like to use a CVDisplayLink so I can get a better frame rate without overdriving the timer. I copied most of the code from Apple's OSXGLEssentials example. The display link starts and the callback runs, but nothing is actually drawn on screen. glGetError returns GL_INVALID_FRAMEBUFFER_OPERATION.
glCheckFramebufferStatus returns GL_FRAMEBUFFER_UNDEFINED for GL_FRAMEBUFFER, GL_DRAW_FRAMEBUFFER and GL_READ_FRAMEBUFFER.
Info from the documentation:
GL_FRAMEBUFFER_UNDEFINED is returned if target is the default framebuffer, but the default framebuffer does not exist.
Here are the relevant bits of code:
- (void)awakeFromNib {
NSOpenGLPixelFormatAttribute attributes[] = {
NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core, // Core Profile !
NSOpenGLPFADoubleBuffer,
NSOpenGLPFAAccelerated,
NSOpenGLPFAColorSize, 24,
NSOpenGLPFAAlphaSize, 8,
NSOpenGLPFAAllowOfflineRenderers,
0
};
NSOpenGLPixelFormat *format = [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];
NSOpenGLContext *context = [[NSOpenGLContext alloc] initWithFormat:format shareContext: nil];
// [context setView: self];
[self setPixelFormat: format];
[self setOpenGLContext: context];
}
- (void)prepareOpenGL {
[super prepareOpenGL];
NSOpenGLContext* context = [self openGLContext];
[context makeCurrentContext];
// Synchronize buffer swaps with vertical refresh rate
GLint swapInt = 1;
[context setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
MyDisplay_setup();
MyDisplay_initScene(_bounds.size.width, _bounds.size.height);
CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);
CVDisplayLinkSetOutputCallback(displayLink, &displayLinkCallback, (__bridge void *)self);
CGLPixelFormatObj cglPixelFormat = [[self pixelFormat] CGLPixelFormatObj];
CVDisplayLinkSetCurrentCGDisplayFromOpenGLContext(displayLink, [context CGLContextObj], cglPixelFormat);
CVDisplayLinkStart(displayLink);
}
static CVReturn displayLinkCallback(CVDisplayLinkRef displayLink, const CVTimeStamp* now, const CVTimeStamp* outputTime,
CVOptionFlags flagsIn, CVOptionFlags* flagsOut, void* displayLinkContext) {
@autoreleasepool {
[(__bridge MyView*)displayLinkContext redraw];
}
return kCVReturnSuccess;
}
- (void)redraw {
NSOpenGLContext* context = [self openGLContext];
[context makeCurrentContext];
CGLLockContext([context CGLContextObj]);
MyDisplay_drawScene();
CGLFlushDrawable([context CGLContextObj]);
CGLUnlockContext([context CGLContextObj]);
}
This is an old question, but this problem still persists, so here's my answer. For reference, I don't use Xcode at all; I write code in Vim and compile with Clang, so this is the default behaviour and nothing to do with Interface Builder. I use only NSOpenGLView, NSOpenGLContext, and CVDisplayLink for rendering. I have a MacBook Pro (Retina, 15-inch, Mid 2014) running macOS Sierra.
While debugging I found that NSOpenGLContext's view property returned nil for the first few frames after starting the display link. This was enough to corrupt the context if you did any rendering (other than glClear) while the view wasn't attached, and it caused the same GL_FRAMEBUFFER_UNDEFINED error.
The easiest way to solve this, I found, was to assign the NSOpenGLView to its NSOpenGLContext after creation, like this:
NSOpenGLView *view = ...;
view.openglContext.view = view;
I'm baffled that, apparently, it's necessary to do this even though the NSOpenGLContext is created by the NSOpenGLView, but there it is.
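In the same spirit, a defensive guard in the draw path (a sketch based on the -redraw method from the question) skips the frames that arrive before the context has a view, which is exactly the situation that produces GL_FRAMEBUFFER_UNDEFINED:

- (void)redraw {
    NSOpenGLContext *context = [self openGLContext];
    // Frames delivered by the display link before the context is attached to the
    // view have no default framebuffer; just drop them.
    if (context.view == nil) {
        return;
    }
    [context makeCurrentContext];
    CGLLockContext([context CGLContextObj]);
    MyDisplay_drawScene();
    CGLFlushDrawable([context CGLContextObj]);
    CGLUnlockContext([context CGLContextObj]);
}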
The trick is to open the View Effects inspector and uncheck the parent View in the Core Animation Layer section.

NSImage and related APIs leaking memory

Following is the code snippet that I have:
// Make Auto release pool
NSAutoreleasePool * autoReleasePool = [[NSAutoreleasePool alloc] init];
try
{
if (mCapture)
{
// Get the image reference
NSImage* image = NULL;
image = [mCapture getCurrentFrameImage];
// Get the TIFF data
NSData *pDataTifData = [[NSData alloc] initWithData:[image TIFFRepresentation]];
NSBitmapImageRep *pBitmapImageRep = [[NSBitmapImageRep alloc] initWithData:pDataTifData];
// Convert to BMP data
NSData *pDataBMPData;
pDataBMPData = [pBitmapImageRep representationUsingType: NSPNGFileType
properties: nil];
// Save to specified path
ASL::String strPath = ASL::MakeString(capInfo->thefile.name);
NSString* pPath = (NSString*)ASL::MakeCFString(strPath);
[pDataBMPData writeToFile:pPath
atomically: YES];
::CFRelease(pPath);
pDataBMPData = nil;
[pBitmapImageRep release];
pBitmapImageRep = nil;
[pDataTifData release];
pDataTifData = nil;
image = nil;
}
}
catch(...)
{
}
[autoReleasePool drain];
Note that image = [mCapture getCurrentFrameImage]; returns an autoreleased NSImage. I am releasing objects and also have an NSAutoreleasePool in place, but it still leaks about 3-4 MB of memory every time this code snippet is executed. I am not sure where the mistake is.
You could simplify this code a lot by making getCurrentFrameImage return an NSBitmapImageRep instead of an NSImage, since you never actually use an NSImage for anything here. You can wrap the image rep in an image when necessary; for this code, simply use the image rep by itself to produce the PNG data. Among other things, this saves you a trip through the TIFF representation.
If it still leaks after you make those changes, run your app under the Leaks template in Instruments; the two instruments in that template, Leaks and ObjectAlloc, will help you hunt down whatever leaks you have.
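A sketch of that simplification, assuming the capture method can be changed to hand back a bitmap rep (getCurrentFrameImageRep is a hypothetical name); the PNG data is produced directly from the rep, with no NSImage and no TIFF round trip:

NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

// Hypothetical variant of the capture call that returns an NSBitmapImageRep directly.
NSBitmapImageRep *frameRep = [mCapture getCurrentFrameImageRep];

// PNG data straight from the rep.
NSData *pngData = [frameRep representationUsingType:NSPNGFileType properties:nil];
[pngData writeToFile:pPath atomically:YES]; // pPath obtained as in the original snippet

[pool drain];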

Getting crash after picking images from UIImagePickerController (Related to memory leak?)

I have been trying to minimize my memory footprint with UIImagePickerController, but I'm starting to think that the memory problems I am having result from poor memory management rather than from a particular way of handling the UIImagePickerController object.
My workflow is this: The "Edit Image" button is clicked, which presents a UIActionSheet. This action sheet allows you to delete, take a picture, choose from the library, or cancel. If you select Choose from the library or Take Picture, I alloc an instance of UIImagePickerController and present it, followed by a release of UIImagePickerController:
-(void)actionSheet:(UIActionSheet *)actionSheet clickedButtonAtIndex:(NSInteger)buttonIndex
{
if (actionSheet.tag != 999) {
UIImagePickerController *imagePicker = [[UIImagePickerController alloc] init];
imagePicker.delegate = self;
BOOL pickImage = nil;
if (actionSheet.tag == iPhoneWithDelete) {
switch (buttonIndex) {
case 0:
object.objectImage = nil;
pickImage = NO;
break;
case 1:
imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
pickImage = YES;
break;
case 2:
imagePicker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
pickImage = YES;
break;
default:
pickImage = NO;
break;
}
} else if (actionSheet.tag == iPhoneNoDelete) {
switch (buttonIndex) {
case 0:
imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
pickImage = YES;
break;
case 1:
imagePicker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
pickImage = YES;
break;
default:
pickImage = NO;
break;
}
} else if (actionSheet.tag == iPodWithDelete) {
switch (buttonIndex) {
case 0:
object.objectImage = nil;
pickImage = NO;
break;
case 1:
imagePicker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
pickImage = YES;
break;
default:
pickImage = NO;
break;
}
} else if (actionSheet.tag == iPodNoDelete) {
switch (buttonIndex) {
case 0:
imagePicker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
pickImage = YES;
break;
default:
pickImage = NO;
break;
}
}
if (pickImage) {
imagePicker.allowsEditing = YES;
[self presentModalViewController:imagePicker animated:YES];
} else {
[self setupImageButton];
[self setupChooseImageButton];
}
[imagePicker release];
}
}
Once I get a selection back from the UIImagePickerController, I save two images: a resized version of the edited image to use as a thumbnail, and an 800x600 version of the original unedited image, stored in a relationship attribute (transformable, using the same UIImage-to-PNG transformations found in the Recipes demo code) for display use (the resize methods are based on the one demoed in this SO post):
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
[self dismissModalViewControllerAnimated:YES];
NSManagedObject *oldImage = object.imageFull;
if (oldImage != nil)
{
[object.managedObjectContext deleteObject:oldImage];
}
NSManagedObject *image = [NSEntityDescription insertNewObjectForEntityForName:@"Image" inManagedObjectContext:object.managedObjectContext];
object.imageFull = image;
UIImage *rawImage = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
CGSize size = CGSizeMake(800, 600);
UIImage *fullImage = [UIImageManipulator scaleImage:rawImage toSize:size];
[image setValue:fullImage forKey:@"imageFull"];
UIImage *processedImage = [UIImageManipulator scaleImage:[info objectForKey:@"UIImagePickerControllerEditedImage"] toSize:CGSizeMake(75, 75)];
object.objectImage = processedImage;
[self setupImageButton];
[self setupChooseImageButton];
rawImage = nil;
fullImage = nil;
processedImage = nil;
}
When I go through viewDidUnload I am setting self.object = nil, and [object release] during dealloc, but I'm still getting memory warnings after about 10 image changes, with a crash at around 20. It leads me to believe that I am not getting that full image out of memory the correct way. What am I missing here?
And on a second note, does the Camera source use significantly more memory than the Photo Albums source? I tend to get more crashes when using the camera.
--EDIT--
Starting a bounty for any information about what I may be handling wrong. I will update this post with any answers to anything I have been unclear about. Just at my wit's end on this.
--EDIT 2--
Reworked the code to take chrissr's suggestions into account, and implemented GCD to improve usability. Is this as efficient as this process gets? Still getting memory warnings, and a crash around 20 images in. I'm sure that the combination of doing expensive UIImage resizing and using UIImagePickerController is murdering the CPU, but I can't imagine that every app is dealing with uncertainty around the UIImagePickerController. My memory footprint is around 2 megs. I have been operating under the assumption that that was plenty of overhead. Should I reduce that footprint further?
Here is the modified code:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
[self dismissModalViewControllerAnimated:YES];
if (object.imagePath != nil) {
[self deleteImages];
}
dispatch_queue_t image_queue;
image_queue = dispatch_queue_create("com.gordonfontenot.app", NULL);
dispatch_async(image_queue, ^{
NSDate *now = [NSDate date];
NSDateFormatter *f = [[NSDateFormatter alloc] init];
[f setDateFormat:@"yyyyMMDDHHmmss"];
NSString *imageName = [NSString stringWithFormat:@"Image-%@-%i", [f stringFromDate:now], arc4random() % 100];
NSString *thumbName = [NSString stringWithFormat:@"%@-thumb", imageName];
[f release];
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *fullPath = [documentsDirectory stringByAppendingPathComponent:imageName];
NSString *thumbPath = [documentsDirectory stringByAppendingPathComponent:thumbName];
NSData *thumbImageData = UIImagePNGRepresentation([UIImageManipulator scaleImage:[info objectForKey:@"UIImagePickerControllerEditedImage"] toSize:CGSizeMake(120, 120)]);
[thumbImageData writeToFile:thumbPath atomically:NO];
dispatch_async(dispatch_get_main_queue(), ^{
object.thumbPath = thumbPath;
[self setupImageButton];
imageButton.enabled = NO;
[self setupChooseImageButton];
});
NSData *fullImageData = UIImagePNGRepresentation([UIImageManipulator scaleImage:[info objectForKey:@"UIImagePickerControllerOriginalImage"] toSize:CGSizeMake(800, 600)]);
[fullImageData writeToFile:fullPath atomically:NO];
dispatch_async(dispatch_get_main_queue(), ^{
imageButton.enabled = YES;
object.imagePath = fullPath;
});
if (picker.sourceType == UIImagePickerControllerSourceTypeCamera) {
UIImageWriteToSavedPhotosAlbum([info objectForKey:@"UIImagePickerControllerOriginalImage"], self, nil, nil);
}
});
dispatch_release(image_queue);
}
Memory warnings are extremely common when dealing with the UIImagePickerController. This is especially true when using the camera. Keep in mind that while a JPG or PNG on disk may only amount to a few MB, the uncompressed in-memory bitmap used to draw the image uses considerably more.
There's nothing that you're doing wrong necessarily, but some improvements can be made:
Rather than storing the image bytes in Core Data, why not write the image to disk and store the path to the file in your database?
Rather than using so many autoreleased images, can you find a way to manage their lifecycle directly and release them sooner?
Your best bet may be to write the images to disk as soon after processing as possible and free up the memory they're using. Then store their location using Core Data rather than the raw data.
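On the second point, one small change worth trying in the background block from the edit above (a sketch; it assumes a toolchain that supports the @autoreleasepool syntax): wrap each expensive step in its own pool so the scaled UIImage and PNG data are released immediately instead of living until GCD drains its own pool at some unspecified time.

dispatch_async(image_queue, ^{
    // ... build imageName, thumbPath and fullPath as in the edit above ...
    @autoreleasepool {
        NSData *thumbImageData = UIImagePNGRepresentation([UIImageManipulator scaleImage:[info objectForKey:@"UIImagePickerControllerEditedImage"] toSize:CGSizeMake(120, 120)]);
        [thumbImageData writeToFile:thumbPath atomically:NO];
    } // thumbnail temporaries released here
    @autoreleasepool {
        NSData *fullImageData = UIImagePNGRepresentation([UIImageManipulator scaleImage:[info objectForKey:@"UIImagePickerControllerOriginalImage"] toSize:CGSizeMake(800, 600)]);
        [fullImageData writeToFile:fullPath atomically:NO];
    } // full-size temporaries released here
    // ... dispatch back to the main queue to update the UI as before ...
});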
