I have been trying to minimize my memory footprint with UIImagePickerController, but I'm starting to think that the memory problems I am having stem from poor memory management rather than from any particular way of handling the UIImagePickerController object.
My workflow is this: the "Edit Image" button is clicked, which presents a UIActionSheet. This action sheet lets you delete the image, take a picture, choose from the library, or cancel. If you select Choose From Library or Take Picture, I alloc an instance of UIImagePickerController and present it, followed by a release of the UIImagePickerController:
-(void)actionSheet:(UIActionSheet *)actionSheet clickedButtonAtIndex:(NSInteger)buttonIndex
{
    if (actionSheet.tag != 999) {
        UIImagePickerController *imagePicker = [[UIImagePickerController alloc] init];
        imagePicker.delegate = self;
        BOOL pickImage = NO; // was `nil`; a BOOL takes YES/NO
        if (actionSheet.tag == iPhoneWithDelete) {
            switch (buttonIndex) {
                case 0:
                    object.objectImage = nil;
                    pickImage = NO;
                    break;
                case 1:
                    imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
                    pickImage = YES;
                    break;
                case 2:
                    imagePicker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
                    pickImage = YES;
                    break;
                default:
                    pickImage = NO;
                    break;
            }
        } else if (actionSheet.tag == iPhoneNoDelete) {
            switch (buttonIndex) {
                case 0:
                    imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
                    pickImage = YES;
                    break;
                case 1:
                    imagePicker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
                    pickImage = YES;
                    break;
                default:
                    pickImage = NO;
                    break;
            }
        } else if (actionSheet.tag == iPodWithDelete) {
            switch (buttonIndex) {
                case 0:
                    object.objectImage = nil;
                    pickImage = NO;
                    break;
                case 1:
                    imagePicker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
                    pickImage = YES;
                    break;
                default:
                    pickImage = NO;
                    break;
            }
        } else if (actionSheet.tag == iPodNoDelete) {
            switch (buttonIndex) {
                case 0:
                    imagePicker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
                    pickImage = YES;
                    break;
                default:
                    pickImage = NO;
                    break;
            }
        }
        if (pickImage) {
            imagePicker.allowsEditing = YES;
            [self presentModalViewController:imagePicker animated:YES];
        } else {
            [self setupImageButton];
            [self setupChooseImageButton];
        }
        [imagePicker release];
    }
}
Once I get a selection back from the UIImagePickerController, I save two images: a resized version of the edited image to use as a thumbnail, and an 800x600 version of the original unedited image, stored in a relationship attribute (transformable, using the same UIImage-to-PNG transformations found in the Recipes demo code) for display use. (The resize methods are based on the one demoed in this SO post.)
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    [self dismissModalViewControllerAnimated:YES];
    NSManagedObject *oldImage = object.imageFull;
    if (oldImage != nil)
    {
        [object.managedObjectContext deleteObject:oldImage];
    }
    NSManagedObject *image = [NSEntityDescription insertNewObjectForEntityForName:@"Image" inManagedObjectContext:object.managedObjectContext];
    object.imageFull = image;
    UIImage *rawImage = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
    CGSize size = CGSizeMake(800, 600);
    UIImage *fullImage = [UIImageManipulator scaleImage:rawImage toSize:size];
    [image setValue:fullImage forKey:@"imageFull"];
    UIImage *processedImage = [UIImageManipulator scaleImage:[info objectForKey:@"UIImagePickerControllerEditedImage"] toSize:CGSizeMake(75, 75)];
    object.objectImage = processedImage;
    [self setupImageButton];
    [self setupChooseImageButton];
    rawImage = nil;
    fullImage = nil;
    processedImage = nil;
}
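(UIImageManipulator's scaleImage:toSize: isn't shown here; based on the SO post it references, it is presumably a plain Core Graphics redraw along these lines — a sketch, not the actual implementation:)

+ (UIImage *)scaleImage:(UIImage *)image toSize:(CGSize)size
{
    // Hypothetical sketch, assuming the usual
    // UIGraphicsBeginImageContext-based resize from that era.
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased
    UIGraphicsEndImageContext();
    return scaled;
}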
When I go through viewDidUnload I set self.object = nil, and I [object release] during dealloc, but I'm still getting memory warnings after about 10 image changes, with a crash at around 20. It leads me to believe that I am not getting that full image out of memory the correct way. What am I missing here?
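(For reference, the teardown described above amounts to this — a minimal sketch under manual reference counting, assuming object is a retained property on the view controller:)

- (void)viewDidUnload
{
    self.object = nil; // releases through the retained property's setter
    [super viewDidUnload];
}

- (void)dealloc
{
    [object release];
    [super dealloc];
}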
And on a second note, does the Camera source use significantly more memory than the Photo Albums source? I tend to get more crashes when using the camera.
--EDIT--
Starting a bounty for any information about what I may be handling wrong. I will update this post with any answers to anything I have been unclear about. Just at my wit's end on this.
--EDIT 2--
Reworked the code to take chrissr's suggestions into account, and implemented GCD to improve usability. Is this as efficient as this process gets? Still getting memory warnings, and a crash around 20 images in. I'm sure that the combination of doing expensive UIImage resizing and using UIImagePickerController is murdering the CPU, but I can't imagine that every app is dealing with uncertainty around the UIImagePickerController. My memory footprint is around 2 megs. I have been operating under the assumption that that was plenty of overhead. Should I reduce that footprint further?
Here is the modified code:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    [self dismissModalViewControllerAnimated:YES];
    if (object.imagePath != nil) {
        [self deleteImages];
    }
    dispatch_queue_t image_queue;
    image_queue = dispatch_queue_create("com.gordonfontenot.app", NULL);
    dispatch_async(image_queue, ^{
        NSDate *now = [NSDate date];
        NSDateFormatter *f = [[NSDateFormatter alloc] init];
        [f setDateFormat:@"yyyyMMddHHmmss"]; // dd (day of month), not DD (day of year)
        NSString *imageName = [NSString stringWithFormat:@"Image-%@-%i", [f stringFromDate:now], arc4random() % 100];
        NSString *thumbName = [NSString stringWithFormat:@"%@-thumb", imageName];
        [f release];
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        NSString *fullPath = [documentsDirectory stringByAppendingPathComponent:imageName];
        NSString *thumbPath = [documentsDirectory stringByAppendingPathComponent:thumbName];
        NSData *thumbImageData = UIImagePNGRepresentation([UIImageManipulator scaleImage:[info objectForKey:@"UIImagePickerControllerEditedImage"] toSize:CGSizeMake(120, 120)]);
        [thumbImageData writeToFile:thumbPath atomically:NO];
        dispatch_async(dispatch_get_main_queue(), ^{
            object.thumbPath = thumbPath;
            [self setupImageButton];
            imageButton.enabled = NO;
            [self setupChooseImageButton];
        });
        NSData *fullImageData = UIImagePNGRepresentation([UIImageManipulator scaleImage:[info objectForKey:@"UIImagePickerControllerOriginalImage"] toSize:CGSizeMake(800, 600)]);
        [fullImageData writeToFile:fullPath atomically:NO];
        dispatch_async(dispatch_get_main_queue(), ^{
            imageButton.enabled = YES;
            object.imagePath = fullPath;
        });
        if (picker.sourceType == UIImagePickerControllerSourceTypeCamera) {
            UIImageWriteToSavedPhotosAlbum([info objectForKey:@"UIImagePickerControllerOriginalImage"], self, nil, nil);
        }
    });
    dispatch_release(image_queue);
}
Memory warnings are extremely common when dealing with the UIImagePickerController. This is especially true when using the camera. Keep in mind that while a JPG or PNG on disk may only amount to a few MB, the uncompressed in-memory bitmap used to draw the image uses considerably more: a 2592x1936 camera image, for instance, decompresses to roughly 2592 × 1936 × 4 bytes ≈ 19 MB.
There's nothing that you're doing wrong necessarily, but some improvements can be made:
Rather than storing the image bytes in Core Data, why not write the image to disk and store the path to the file in your database?
Rather than using so many autoreleased images, can you find a way to manage their lifecycle directly and release them sooner?
Your best bet may be to write the images to disk as soon after processing as possible and free up the memory they're using. Then store their location using Core Data rather than the raw data.
I'm having a little trouble switching scenes (or even displaying sprites) in my CCScene UserLevelCreation, but it only happens after I use the UIImagePickerController to take/accept a picture. When I then try to switch scenes, it crashes with the error:
"Could not attach texture to framebuffer"
This is the code I use to take a picture:
-(void)takePhoto{
    AppController *appdel = (AppController *)[[UIApplication sharedApplication] delegate];
    @try {
        uip = [[UIImagePickerController alloc] init];
        uip.sourceType = UIImagePickerControllerSourceTypeCamera;
        uip.allowsEditing = YES;
        uip.delegate = self;
    }
    @catch (NSException *e) {
        uip = nil;
    }
    @finally {
        if (uip) {
            [appdel.navController presentModalViewController:uip animated:NO];
        }
    }
}
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info{
    UIImage *LevelImage; // was mistyped as NSString *; the info dictionary holds a UIImage
    LevelImage = [info objectForKey:UIImagePickerControllerOriginalImage];
    AppController *appdel = (AppController *)[[UIApplication sharedApplication] delegate];
    [appdel.navController dismissModalViewControllerAnimated:YES];
    [NSThread detachNewThreadSelector:@selector(writeImgToPath:) toTarget:self withObject:LevelImage];
}
And the writeImgToPath: method:
-(void)writeImgToPath:(id)sender
{
    UIImage *image = sender;
    NSArray *pathArr = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                           NSUserDomainMask,
                                                           YES);
    CGSize size;
    int currentProfileIndex = 1; // note: local, so this always writes CustomLevel_1.png
    NSString *path = [[pathArr objectAtIndex:0]
        stringByAppendingPathComponent:[NSString stringWithFormat:@"CustomLevel_%d.png", currentProfileIndex]];
    size = CGSizeMake(1136, 640);
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0, 0, 1136, 640)];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSData *data = UIImagePNGRepresentation(image);
    [data writeToFile:path atomically:YES];
    CCLOG(@"saved...");
    CGRect r = CGRectMake(0, 0, 1136, 640);
    UIGraphicsBeginImageContext(r.size);
    UIImage *img1;
    [image drawInRect:r];
    img1 = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(img1, nil, nil, nil);
    currentProfileIndex++; // has no lasting effect; see the note above
    [[CCDirector sharedDirector] replaceScene:[LevelEditor Scene]
                               withTransition:[CCTransition transitionPushWithDirection:CCTransitionDirectionLeft duration:1.0f]];
}
--UPDATE--
I just tried switching scenes in different places in the code, and I can switch scenes freely until the function:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info{
I think that function might be the problem.
So far, I have tried to:
1. Switch scenes before taking the picture, and that works fine.
2. I read that it might be because I was resizing the image, but commenting that out made no difference.
3. I've also added breakpoints before the scene switch, and the scene isn't nil.
4. Finally, I tried dismissing the uip UIImagePickerController, but that made no difference either.
Sorry for the long post. If anyone knows what's going on, any help would be really great.
OK, so I found the problem to be this line:
[NSThread detachNewThreadSelector:@selector(writeImgToPath:) toTarget:self withObject:LevelImage];
Apparently some parts of UIKit are not thread-safe, so I'm now using just
[self writeImgToPath:LevelImage];
And it works. Hope this helps anyone with the same problem.
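(If the background write is still wanted, a GCD-based sketch of the same flow keeps the slow PNG encode and file I/O off the main thread and hops back to the main queue for the scene switch; the resize step is omitted for brevity:)

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *levelImage = [info objectForKey:UIImagePickerControllerOriginalImage];
    AppController *appdel = (AppController *)[[UIApplication sharedApplication] delegate];
    [appdel.navController dismissModalViewControllerAnimated:YES];

    NSArray *pathArr = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *path = [[pathArr objectAtIndex:0] stringByAppendingPathComponent:@"CustomLevel_1.png"];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // PNG encoding and file I/O are safe off the main thread
        NSData *data = UIImagePNGRepresentation(levelImage);
        [data writeToFile:path atomically:YES];
        dispatch_async(dispatch_get_main_queue(), ^{
            // anything that touches UIKit or cocos2d stays on the main thread
            [[CCDirector sharedDirector] replaceScene:[LevelEditor Scene]
                withTransition:[CCTransition transitionPushWithDirection:CCTransitionDirectionLeft duration:1.0f]];
        });
    });
}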
I'm struggling with the problem of drawing an EPS file on an NSView.
When I first load the EPS file from a file and draw it with drawInRect: the image is displayed correctly. However, the image is not drawn when I load it from an archive file.
I've prepared a quick-and-dirty example that you can copy/paste and try out. Create a new Cocoa App project and add this to the delegate method.
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    // Just a sample EPS file
    NSURL *url = [NSURL URLWithString: @"http://embedded.eecs.berkeley.edu/concurrency/latex/figure.eps"];
    NSImage *epsImage = [[[NSImage alloc] initWithContentsOfURL: url] autorelease];

    // Encode data
    NSMutableData *mutableData = [[[NSMutableData alloc] init] autorelease];
    NSKeyedArchiver *coder = [[[NSKeyedArchiver alloc] initForWritingWithMutableData: mutableData] autorelease];
    [coder encodeObject: epsImage forKey: @"image"];
    [coder finishEncoding];
    NSString *dataFile = [@"~/Desktop/image.data" stringByExpandingTildeInPath];
    [mutableData writeToFile: dataFile atomically: YES];

    // Decode data
    NSData *data = [NSData dataWithContentsOfFile: dataFile];
    NSKeyedUnarchiver *decoder = [[NSKeyedUnarchiver alloc] initForReadingWithData: data];
    NSImage *loadedImage = [decoder decodeObjectForKey: @"image"];

    // Draw image
    NSRect rect;
    rect.origin = NSZeroPoint;
    rect.size = loadedImage.size;
    NSView *view = [[NSApp mainWindow] contentView];
    [view lockFocus];
    [loadedImage drawInRect: rect fromRect: rect operation: NSCompositeSourceOver fraction: 1.0];
    [view unlockFocus];
}
To prove that the first loaded image draws correctly, just change the line [loadedImage drawInRect:...] to [epsImage drawInRect:...].
I'm using NSKeyedArchiver and NSKeyedUnarchiver here to simulate encodeWithCoder: and initWithCoder:. So please focus on the fact that an NSImage with an NSEPSImageRep representation, which does not contain a preview (from a resource fork?) and is loaded purely as EPS commands, is not drawn on an NSView correctly.
Any help is appreciated.
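(For context, a minimal sketch of the NSCoding pair this example stands in for, assuming image is an NSImage ivar of the archived object:)

- (void)encodeWithCoder:(NSCoder *)coder
{
    [coder encodeObject:image forKey:@"image"];
}

- (id)initWithCoder:(NSCoder *)coder
{
    if ((self = [super init])) {
        image = [[coder decodeObjectForKey:@"image"] retain]; // MRC, as in the sample
    }
    return self;
}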
Due to the way that caching works on NSImage, I've often found it more effective to actually grab the NSImageRep if I know what the type is.
In our code, we found that the most reliable way to save off images is in their original format, but that requires either saving off the data in its original format somewhere else, or requesting the data from the NSImageRep. Unfortunately, there's no generic -(NSData *)data method on NSImageRep, so we ended up specifically checking for various types of NSImageRep and saving them off depending on what we knew them to be.
Fortunately, loading is simple, as -[NSImage initWithData:] will figure out the type based on the data.
Here's our long-standing code for doing this. Basically, it prefers PDF, then EPS, then it makes a TIFF of anything it doesn't understand.
+ (NSData *)dataWithImage:(NSImage *)image kindString:(NSString **)kindStringPtr
{
    if (!image)
        return nil;
    NSData *pdfData = nil, *newData = nil, *epsData = nil, *imageData = nil;
    NSString *kindString = nil;
    NSArray *reps = [image representations];
    for (NSImageRep *rep in reps) {
        if ([rep isKindOfClass: [NSPDFImageRep class]]) {
            // we have a PDF, so save that
            pdfData = [(NSPDFImageRep *)rep PDFRepresentation];
            PDFDocument *doc = [[PDFDocument alloc] initWithData:pdfData];
            newData = [doc dataRepresentation];
            if (newData && ([newData length] < [pdfData length])) {
                pdfData = newData;
            }
            [doc release]; // was leaked in the original listing
            break;
        }
        if ([rep isKindOfClass: [NSEPSImageRep class]]) {
            epsData = [(NSEPSImageRep *)rep EPSRepresentation];
            break;
        }
    }
    if (pdfData) {
        imageData = pdfData;
        kindString = @"pdfImage";
    } else if (epsData) {
        imageData = epsData;
        kindString = @"epsImage";
    } else {
        // make a big copy
        NSBitmapImageRep *rep0 = [reps objectAtIndex:0];
        if ([rep0 isKindOfClass: [NSBitmapImageRep class]]) {
            [image setSize: NSMakeSize([rep0 pixelsWide], [rep0 pixelsHigh])];
        }
        imageData = [image TIFFRepresentation];
        kindString = @"tiffImage";
    }
    if (kindStringPtr)
        *kindStringPtr = kindString;
    return imageData;
}
Once we have the NSData*, it can be saved in a keyed archive, written to disk or whatever.
On the way back in, load in the NSData* and then
NSImage *image = [[NSImage alloc] initWithData: savedData];
and you should be all set.
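For instance, assuming the method above lives on a hypothetical ImageArchiver class, the round trip through a keyed archive might look like this sketch:

NSString *kind = nil;
NSData *imageData = [ImageArchiver dataWithImage:image kindString:&kind]; // hypothetical host class

// Archive the raw data and its kind
NSMutableData *archive = [NSMutableData data];
NSKeyedArchiver *coder = [[[NSKeyedArchiver alloc] initForWritingWithMutableData:archive] autorelease];
[coder encodeObject:imageData forKey:@"imageData"];
[coder encodeObject:kind forKey:@"imageKind"];
[coder finishEncoding];

// ... later, on the way back in ...
NSKeyedUnarchiver *decoder = [[[NSKeyedUnarchiver alloc] initForReadingWithData:archive] autorelease];
NSData *savedData = [decoder decodeObjectForKey:@"imageData"];
NSImage *restored = [[NSImage alloc] initWithData:savedData];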
What I have
For my game I'm creating several animations using view.animationImages = imagesArray; [view startAnimating];
In my animation helper class I use this
- (UIImage *)loadRetinaImageIfAvailable:(NSString *)path
{
    NSString *retinaPath = [[path stringByDeletingLastPathComponent] stringByAppendingPathComponent:[NSString stringWithFormat:@"%@@2x.%@", [[path lastPathComponent] stringByDeletingPathExtension], [path pathExtension]]];
    if ([UIScreen mainScreen].scale == 2.0 && [[NSFileManager defaultManager] fileExistsAtPath:retinaPath] == YES)
        return [[UIImage alloc] initWithCGImage:[[UIImage imageWithData:[NSData dataWithContentsOfFile:retinaPath]] CGImage] scale:2.0 orientation:UIImageOrientationUp];
    else
        return [UIImage imageWithContentsOfFile:path];
}

- (NSMutableArray *)generateCachedImageArrayWithFilename:(NSString *)filename extension:(NSString *)extension andImageCount:(int)count
{
    _imagesArray = [[NSMutableArray alloc] init];
    _fileExtension = extension;
    _imageName = filename;
    _imageCount = count;
    for (int i = 0; i < count; i++)
    {
        NSString *tempImageNames = [NSString stringWithFormat:@"%@%i", filename, i];
        NSString *imagePath = [[NSBundle mainBundle] pathForResource:tempImageNames ofType:extension];
        UIImage *frameImage = [self loadRetinaImageIfAvailable:imagePath];
        UIGraphicsBeginImageContext(frameImage.size); // renders at 1.0 scale, which discards the retina resolution
        CGRect rect = CGRectMake(0, 0, frameImage.size.width, frameImage.size.height);
        [frameImage drawInRect:rect];
        UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        [_imagesArray addObject:renderedImage];
        if (_isDoublingFrames)
        {
            [_imagesArray addObject:renderedImage];
        }
        else if (_isTriplingFrames)
        {
            [_imagesArray addObject:renderedImage];
            [_imagesArray addObject:renderedImage];
        }
        NSLog(@"filename = %@", filename);
    }
    return _imagesArray;
}
Problem and facts
Without caching the images I get the retina versions, but my animations are not fluent.
If I cache the images this way, the animations are OK, but they use the non-retina versions of the images.
Please, is there some other way to cache the images and still get the retina versions?
Are these images located in your application bundle? If so, this logic is all unnecessary, because the imageWithContentsOfFile: method will already detect and load the @2x image on a retina device.
Also, you'd be better off using the [UIImage imageNamed:] method instead, because this automatically caches the image for re-use and decompresses it immediately, avoiding the need for manually caching the images, or for that trickery with drawing it into an offscreen CGContext.
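With that in mind, the whole helper collapses to something like this sketch (assuming frames named e.g. frame0.png / frame0@2x.png in the bundle):

NSMutableArray *frames = [NSMutableArray array];
for (int i = 0; i < count; i++) {
    // imageNamed: picks up the @2x variant on retina devices and caches
    // the decompressed image for re-use
    UIImage *frame = [UIImage imageNamed:[NSString stringWithFormat:@"frame%i", i]];
    if (frame) {
        [frames addObject:frame];
    }
}
view.animationImages = frames;
view.animationDuration = frames.count / 30.0; // e.g. 30 fps; adjust to taste
[view startAnimating];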
In my iPhone app I have a map view. In it I display a number of pin views, depending on the data from a web server. Here is the method I use for it.
- (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation
{
    NSString *identifier = @"myPin";
    self.pinView = (MKPinAnnotationView *)[self.mapView dequeueReusableAnnotationViewWithIdentifier:identifier];
    if (self.pinView == nil) {
        self.pinView = [[[MKPinAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:identifier] autorelease]; // 11.1%
    } else {
        self.pinView.annotation = annotation;
    }
    UIButton *rightButton = [UIButton buttonWithType:UIButtonTypeDetailDisclosure]; // 20.4%
    [rightButton setTitle:annotation.title forState:UIControlStateNormal];
    self.pinView.rightCalloutAccessoryView = rightButton; // 2.7%
    MyAnnotation *annot = (MyAnnotation *)annotation;
    UIImageView *egoimageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"defaultPerson"]]; // 17.5%
    NSString *imageUrl = [NSString stringWithFormat:@"%@%@", CommonImageURL, [friendsProfileImageArray objectAtIndex:annot.tag]]; // 9.4%
    if ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)] &&
        ([UIScreen mainScreen].scale == 2.0)) {
        // Retina display
        [egoimageView setImageWithURL:[NSURL URLWithString:imageUrl] placeholderImage:[UIImage imageNamed:@"defaultPerson@2x.png"]]; // 2.6%
        egoimageView.image = [UIImage imageWithCGImage:egoimageView.image.CGImage scale:egoimageView.image.size.width/25 orientation:egoimageView.image.imageOrientation];
    } else {
        // non-Retina display
        [egoimageView setImageWithURL:[NSURL URLWithString:imageUrl] placeholderImage:[UIImage imageNamed:@"defaultPerson.png"]]; // 14.6%
        egoimageView.image = [UIImage imageWithCGImage:egoimageView.image.CGImage scale:egoimageView.image.size.width/25 orientation:egoimageView.image.imageOrientation]; // 1.6%
    }
    [egoimageView sizeToFit];
    self.pinView.leftCalloutAccessoryView = egoimageView; // 3.1%
    [egoimageView release];
    self.pinView.canShowCallout = YES;
    self.pinView.animatesDrop = YES;
    // pin color based on status
    if ([annot.relationshipStatus intValue] == 2)
        self.pinView.pinColor = MKPinAnnotationColorGreen;
    else
        self.pinView.pinColor = MKPinAnnotationColorPurple; // 17.1%
    return self.pinView;
}
The comments show the percentage of memory allocations for each line. If I continuously load the map, the total memory usage increases until the app crashes. How can I properly fix this? I have tried to fix it, but I don't know what more I can do.
Please help, thanks in advance.
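(Editorial sketch, not from the original thread: one thing to check is that the accessory views above are rebuilt on every callback, even for dequeued views, and the view is retained in a property. Creating the accessories once per view and using a local variable keeps per-call allocations down, for example:)

- (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation
{
    static NSString *identifier = @"myPin";
    MKPinAnnotationView *pin = (MKPinAnnotationView *)[mapView dequeueReusableAnnotationViewWithIdentifier:identifier];
    if (pin == nil) {
        pin = [[[MKPinAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:identifier] autorelease];
        pin.canShowCallout = YES;
        pin.animatesDrop = YES;
        // build the callout accessories once per view, not once per callback
        pin.rightCalloutAccessoryView = [UIButton buttonWithType:UIButtonTypeDetailDisclosure];
        pin.leftCalloutAccessoryView = [[[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 25, 25)] autorelease];
    } else {
        pin.annotation = annotation;
    }
    // per-annotation configuration only mutates the existing accessory views
    UIImageView *egoimageView = (UIImageView *)pin.leftCalloutAccessoryView;
    egoimageView.image = [UIImage imageNamed:@"defaultPerson"];
    return pin;
}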
I have two views.
1. One view has a grid of NSImageViews with images.
2. The other view has a grid of potential places to place the dragged image.
I think I have dragging working. If I click and drag an image from view 1, it works fine, but when I try to place it, the path of the source image is for an image on my desktop that isn't even part of my project. The result of the code below is that carriedData = nil, but some testing with NSLog showed that it is sending the path of an image outside the project, on my desktop.
Here are some of the drag-and-drop protocol methods I have implemented. myDragState is just an enumeration I use for highlighting the grid space that the drag is currently hovering over.
In init...
[self registerForDraggedTypes:[NSImage imagePasteboardTypes]];
myDragState = dragStateNone;
Then the protocol methods:
- (NSDragOperation)draggingEntered:(id <NSDraggingInfo>)sender
{
    if ((NSDragOperationGeneric & [sender draggingSourceOperationMask]) == NSDragOperationGeneric)
    {
        myDragState = dragStateOver;
        [self setNeedsDisplay:YES]; // NSView has no argument-less setNeedsDisplay
        return NSDragOperationGeneric;
    }
    else
    {
        return NSDragOperationNone;
    }
}

- (void)draggingExited:(id <NSDraggingInfo>)sender
{
    // turn off focus ring
    myDragState = dragStateExited;
    [self setNeedsDisplay:YES];
}

- (void)draggingEnded:(id <NSDraggingInfo>)sender { }

- (BOOL)prepareForDragOperation:(id <NSDraggingInfo>)sender
{
    return YES;
}

- (BOOL)performDragOperation:(id <NSDraggingInfo>)sender
{
    // gets the dragging-specific pasteboard from the sender
    NSPasteboard *paste = [sender draggingPasteboard];
    // a list of types that we can accept
    NSArray *types = [NSArray arrayWithObjects:NSTIFFPboardType, nil];
    NSString *desiredType = [paste availableTypeFromArray:types];
    NSData *carriedData = [paste dataForType:desiredType];
    if (nil == carriedData)
    {
        // the operation failed for some reason
        NSRunAlertPanel(@"Paste Error", @"Sorry, but the paste operation failed", nil, nil, nil);
        myDragState = dragStateNone;
        [self setNeedsDisplay:YES];
        return NO;
    }
    else
    {
        // the pasteboard was able to give us some meaningful data
        if ([desiredType isEqualToString:NSTIFFPboardType])
        {
            NSLog(@"TIFF");
            // we have TIFF bitmap data in the NSData object
            NSImage *newImage = [[NSImage alloc] initWithData:carriedData];
            [self setImage:newImage];
            myDragState = dragStateSet;
        }
        else
        {
            // this can't happen
            //NSAssert(NO, @"This can't happen");
            NSLog(@"Other type");
            myDragState = dragStateNone;
            [self setNeedsDisplay:YES];
            return NO;
        }
    }
    // redraw us with the new image
    return YES;
}

- (void)concludeDragOperation:(id <NSDraggingInfo>)sender
{
    // re-draw the view with our new data
    [self setNeedsDisplay:YES];
}

- (void)drawRect:(NSRect)dirtyRect
{
    NSRect ourBounds = [self bounds];
    if (myDragState == dragStateSet)
    {
        NSImage *image = [self image];
        [super drawRect:dirtyRect];
        [image compositeToPoint:(ourBounds.origin) operation:NSCompositeSourceOver];
    }
    else if (myDragState == dragStateOver)
    {
        [[NSColor colorWithDeviceRed:1.0f green:1.0f blue:1.0f alpha:0.4f] set];
        [NSBezierPath fillRect:ourBounds];
    }
    else {
        // draw nothing
    }
}
Edit: So I figured this out. The problem was actually with the source: I wasn't copying it to the pasteboard properly. My code for that is now:
NSPasteboard *zPasteBoard = [NSPasteboard pasteboardWithName:NSDragPboard];
[zPasteBoard declareTypes:[NSArray arrayWithObject:NSTIFFPboardType] owner:self];
[zPasteBoard setData:[tileImageView.image TIFFRepresentation] forType:NSTIFFPboardType];
Now I am getting a weird effect when the images show, though. As the images are placed in the destination image view, they are resized, but the larger draggable version is also still showing up. So I am getting two images on the image view.
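(An editorial guess, since the thread ends here: the drawRect: above draws the image twice when myDragState == dragStateSet. [super drawRect:] already renders the NSImageView's scaled image, and compositeToPoint: then draws the same image again at full size at the view's origin. Dropping the manual composite should leave only the resized copy, along the lines of this sketch:)

- (void)drawRect:(NSRect)dirtyRect
{
    [super drawRect:dirtyRect]; // NSImageView already draws its image, scaled to fit
    if (myDragState == dragStateOver) {
        // translucent highlight while a drag hovers over this grid space
        [[NSColor colorWithDeviceRed:1.0f green:1.0f blue:1.0f alpha:0.4f] set];
        [NSBezierPath fillRect:[self bounds]];
    }
}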