Why does saving UIImages to disk increase memory usage?

I am downloading cover images uploaded by App.net users. App.net requires these cover images to be at least 960 pixels wide. I fetch them with a simple AFImageRequestOperation:
NSURLRequest *urlRequest = [NSURLRequest requestWithURL:URL];
AFImageRequestOperation *imageRequestOperation = [AFImageRequestOperation imageRequestOperationWithRequest:urlRequest success:^(UIImage *image) {
    if (completionHandler) {
        completionHandler(image); // Load image into UI...
    }
}];
[self.fetchQueue addOperation:imageRequestOperation];
This is working, no memory spikes.
I want to cache the authenticated user's images so they don't have to be downloaded each time the app opens. As soon as I archive the downloaded image to disk, I get huge spikes in memory. For example, my cover image is currently 3264 x 2448 pixels. When downloaded on my Mac it comes to around 1.3 MB. However, as soon as I create an NSData object, either with UIImagePNGRepresentation(image) or via TMCache's setObject:forKey: method, the app's memory usage spikes to around 60.0 MB.
For clarity, this is all I'm doing to write the file to disk:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    NSURL *fileURL = ... // URL of file in "/Application Support"
    NSData *imageData = UIImagePNGRepresentation(imageToSave);
    [imageData writeToURL:fileURL atomically:YES];
});
Can anyone tell me what is going on? Why is a 1.3 MB image being inflated to almost sixty times that, and how can I avoid this massive and potentially crippling inflation? This is one image; what if the user opens several profiles, each with a cached image?

The image dimensions are what have the greatest bearing on memory usage. For a given pixel size (regardless of PNG or JPG on disk), the decoded memory usage is pretty much the same and is given by: width x height x 4 bytes.
A 3264 x 2448 cover image therefore decodes to 3264 x 2448 x 4 = 31,961,088 bytes, roughly 32 MB, however small the compressed file is. On top of that, UIImagePNGRepresentation() has to build the entire encoded PNG in memory as well; that, and perhaps the atomic write, explains the doubling you see.
Spikes like this may be unavoidable if that's the size of the image you need to work with. The important thing is to make sure the memory usage isn't growing without bound. When you run the app and look at the memory gauge in Instruments, does it eventually go down as your app does its work? You can also try wrapping the image-writing code in an @autoreleasepool.
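A minimal sketch of that last suggestion, assuming the same imageToSave and fileURL as in the question; the pool ensures the autoreleased PNG data is freed as soon as the write finishes instead of lingering until the queue's pool drains:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    @autoreleasepool {
        NSURL *fileURL = ... // URL of file in "/Application Support"
        NSData *imageData = UIImagePNGRepresentation(imageToSave);
        [imageData writeToURL:fileURL atomically:YES];
    } // imageData and UIImagePNGRepresentation's temporaries are released here
});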

Related

How do you work with really large images in Metal?

TL;DR: In macOS 10.13, an MTLTexture has a maximum width and height of 16,384. What strategies can you use to be able to process and display images larger than 16,384 pixels using Metal?
In a photo viewer that I'm currently working on, I've moved most of the image viewing into a Metal-backed view that uses Core Image for any image adjustments. This is working really well, but I've recently started testing against some really large images (panoramas) and I'm now hitting limits that I'm not entirely sure how to work around while remaining relatively performant.
My current environment looks like this:
Load and decode an image from an NSURL into an IOSurface. This is done using either Image I/O directly or a Core Image pipeline that renders into the IOSurface. The IOSurface is then passed from an XPC service back into the main app.
In the main app, a new MTLTexture is created that is backed by the IOSurface. Then, a CIImage is created from the MTLTexture and that CIImage is used throughout an image pipeline as the root "source image".
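For reference, a minimal sketch of that texture-backed setup; the BGRA8 pixel format, sRGB color space, and helper name are assumptions for illustration, not the asker's actual code:
#import <Metal/Metal.h>
#import <CoreImage/CoreImage.h>
#import <IOSurface/IOSurface.h>
// Wrap an existing IOSurface in an MTLTexture, then in a CIImage.
static CIImage *SourceImageForSurface(IOSurfaceRef surface, id<MTLDevice> device)
{
    MTLTextureDescriptor *desc =
        [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
                                                            width:IOSurfaceGetWidth(surface)
                                                           height:IOSurfaceGetHeight(surface)
                                                        mipmapped:NO];
    desc.usage = MTLTextureUsageShaderRead;
    // The texture shares the IOSurface's backing store; no pixel copy is made.
    id<MTLTexture> texture = [device newTextureWithDescriptor:desc
                                                    iosurface:surface
                                                        plane:0];
    CGColorSpaceRef srgb = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
    return [CIImage imageWithMTLTexture:texture
                                options:@{kCIImageColorSpace: (__bridge_transfer id)srgb}];
}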
However, if I attempt to open an image larger than 16,384 pixels in one dimension, I'm unable to create the original IOSurface on my laptop (13" MBP-TB 2016).
But even if I could create a larger IOSurface, then I'm still stuck with the same limit on the MTLTexture.
See: Apple's Metal Feature Set Tables
I'm curious what strategies others would recommend to allow one to open large image files while still taking advantage of Core Image and Metal.
One attempt I've made is to have the root source image be a CIImage created from a CGImageRef. However, there's a significant drop in performance between that arrangement and a CIImage backed by a texture, even for smaller images.
Another idea I've had, but haven't yet explored, is to use CIImageProvider in some capacity, but I'm not entirely sure how I'd go about "tiling" potentially several IOSurfaces or MTLTextures, whether that even makes sense, or whether it would be better to just allocate a single large buffer to read from. (Or perhaps use dispatch_data in some capacity?)
(macOS 10.13 or even 10.14 would be fine.)

Displaying full-sized camera raw files in OSX

This has been driving me mad for months: I have a little app to preview camera raw images. As the files in question can be quite big and are stored on a slow network drive, I wanted to offer the user a chance to stop the loading of the image.
Handily I found this thread:
Cancel NSData initWithContentsOfURL in NSOperation
and am using Nick's great convenience method to cache the data and be able to issue a cancel request halfway through.
Anyway, once I have the data I use:
NSImage *sourceImage = [[NSImage alloc] initWithData:data];
The problem comes when looking at Nikon .NEF files: sourceImage contains only a thumbnail, not the full-size image. Displaying Canon .CR2 files and, in fact, any other .TIFFs and .JPEGs seems fine, and sourceImage is the expected size. I've checked the amount of data being loaded (with NSLog and [data length]) and all 12 MB of the Nikon file does seem to be there for -initWithData:.
If I use
NSImage *sourceImage = [[NSImage alloc] initWithContentsOfURL:myNEFURL];
then I get the full-sized image from the Nikon files, but of course the app blocks.
So after poking around for what is beginning to feel like my entire life, I think I know that the problem is related to the Nikon files' metadata stating a DPI of 300, whereas Canon et al. use 72.
I hoped a solution would be to lazily access the file with:
NSImage *tempImg = [[NSImage alloc] initByReferencingURL:myNEFURL];
and having seen similar postings here and elsewhere I found a common possible answer of simply
[sourceImage setSize:tempImg.size];
but of course this just resizes the tiny thumbnail up to 3000x2000 or thereabouts.
I've been messing with the following hoping that they would provide a way to get the big picture from the .NEF:
CGImageSourceRef isr = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
CGImageRef isrRef = CGImageSourceCreateImageAtIndex(isr, 0, NULL);
and
NSBitmapImageRep *bitMapIR = [[NSBitmapImageRep alloc] initWithData:data];
But checking the sizes on these shows similar thumbnail widths and heights. In fact, isrRef returns an even smaller thumbnail, one that is 4.2 times smaller. Perhaps worth noting that 300 / 72 ≈ 4.2, so isrRef may be taking account of the DPI on an image where the DPI (possibly) has already been applied.
Please! Can someone [nicely] put me out of my misery and help me get the full-sized image from the loaded data? Currently, I'm special-casing the NEF files with a case-insensitive search on the file extension and then loading the URL with the blocking method. I have to take a hit on the app blocking, and the extension check can't be fool-proof in the long run.
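(In the meantime, a sketch of softening that hit, assuming the blocking NSURL path stays: take the load on a background queue so only the loading stalls, not the UI. This doesn't restore cancellation, and the image view property is hypothetical.)
// Take the -initWithContentsOfURL: hit off the main thread.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSImage *sourceImage = [[NSImage alloc] initWithContentsOfURL:myNEFURL];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = sourceImage; // hypothetical image view outlet
    });
});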
As an aside: is this actually a bug in the OS? It does seem like NSImage's -initWithData: and -initWithContentsOfURL: methods use different engines to render the image. Wouldn't it be reasonable to assume that -initWithContentsOfURL: simply loads the data, which then gets rendered just as though it had been presented to the class via -initWithData:?
It's a bug - confirmed when I did a DTS (Apple Developer Technical Support) incident. Apparently I need to file a bug report. Currently the only way is to use the NSURL-based methods. Instead of checking the file extension, I should probably traverse the metadata dictionaries and check the manufacturer's entry for "Nikon", though...
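For that manufacturer check, a hedged sketch using ImageIO's property dictionaries; the helper name is made up, and matching on "NIKON" in the TIFF Make entry is an assumption about how the files are tagged:
#import <ImageIO/ImageIO.h>
// Decide whether the data looks like a Nikon file by reading its
// TIFF "Make" metadata instead of trusting the file extension.
static BOOL DataLooksLikeNikonRaw(NSData *data)
{
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
    if (!source) return NO;
    NSDictionary *props = CFBridgingRelease(CGImageSourceCopyPropertiesAtIndex(source, 0, NULL));
    CFRelease(source);
    NSDictionary *tiff = props[(__bridge NSString *)kCGImagePropertyTIFFDictionary];
    NSString *make = tiff[(__bridge NSString *)kCGImagePropertyTIFFMake];
    if (make == nil) return NO;
    return [make rangeOfString:@"NIKON" options:NSCaseInsensitiveSearch].location != NSNotFound;
}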

Trying to turn [NSImage imageNamed:NSImageNameUser] into NSData

If I create an NSImage via something like:
NSImage *icon = [NSImage imageNamed:NSImageNameUser];
it only has one representation, an NSCoreUIImageRep, which seems to be a private class.
I'd like to archive this image as NSData, but if I ask for the TIFFRepresentation I get a small icon, when the real NSImage I originally created seemed to be vector and would scale up to fill my image views nicely.
I was kinda hoping images made this way would have an NSPDFImageRep I could use.
Any ideas how I can get NSData (preferably the vector version, or at worst a large-scale bitmap version) of this NSImage?
UPDATE
Spoke with some people on Twitter and they suggested that the real source of these images is multi-resolution .icns files (probably not vector at all). I couldn't find the location of these on disk, but it was interesting to hear nonetheless.
Additionally, they suggested I create the system NSImage and manually render it into a high-resolution NSImage of my own. I'm doing this now and it's working for my needs. My code:
+ (NSImage *)pt_businessDefaultIcon
{
    // Draws NSImageNameUser into a rendered bitmap.
    // We do this because trying to create an NSData from
    // [NSImage imageNamed:NSImageNameUser] directly results in a 32x32 image.
    NSImage *icon = [NSImage imageNamed:NSImageNameUser];
    NSImage *renderedIcon = [[NSImage alloc] initWithSize:NSMakeSize(PTAdditionsBusinessDefaultIconSize, PTAdditionsBusinessDefaultIconSize)];
    [renderedIcon lockFocus];
    NSRect inRect = NSMakeRect(0, 0, PTAdditionsBusinessDefaultIconSize, PTAdditionsBusinessDefaultIconSize);
    NSRect fromRect = NSMakeRect(0, 0, icon.size.width, icon.size.height);
    [icon drawInRect:inRect fromRect:fromRect operation:NSCompositeCopy fraction:1.0];
    [renderedIcon unlockFocus];
    return renderedIcon;
}
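With the rendered bitmap in hand, getting the archivable NSData I was originally after is then straightforward (sketch; iconData is just an illustrative name):
NSImage *icon = [NSImage pt_businessDefaultIcon];
NSData *iconData = [icon TIFFRepresentation]; // a real raster now, not the 32x32 thumbnail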
(Tried to post this as my answer but I don't have enough reputation?)
You seem to be ignoring the documentation. Both of your major questions are answered there. The Cocoa Drawing Guide (the companion guide linked from the NSImage API reference) has an Images section you really need to read thoroughly and refer to any time you have representation/caching/sizing/quality issues.
"...if I ask for the TIFFRepresentation I get a small icon when the real NSImage I originally created seemed to be vector and would scale up to fill my image views nicely."
Relevant subsections of the Images section for this question are: How an Image Representation is Chosen, Images and Caching, and Image Size and Resolution. By default, the -cacheMode for a TIFF image "Behaves as if the NSImageCacheBySize setting were in effect." Also, for in-memory scaling/sizing operations, -imageInterpolation is important: "Table 6-4 lists the available interpolation settings." and "NSImageInterpolationHigh - Slower, higher-quality interpolation."
I'm fairly certain this applies to a named system image as well as any other.
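(Applied to the asker's render method, that would look something like this sketch; the only addition is the interpolation hint while the image has focus for drawing:)
[renderedIcon lockFocus];
// Ask AppKit for its slower, higher-quality scaling while we draw.
[NSGraphicsContext currentContext].imageInterpolation = NSImageInterpolationHigh;
[icon drawInRect:inRect fromRect:fromRect operation:NSCompositeCopy fraction:1.0];
[renderedIcon unlockFocus];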
"I was kinda hoping images made [by loading an image from disk] would have an NSPDFImageRep I could use."
Relevant subsection: Image Representations. "...with file-based images, most of the images you create need only a single image representation." and "You might create multiple representations in the following situations, however: For printing, you might want to create a PDF representation or high-resolution bitmap of your image."
You get the representation that suits the loaded image. You must create a PDF representation for a TIFF image, for example. To do so at high resolution, you'll need to refer back to the caching mode so you can get higher-res items.
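(One hedged way to produce that PDF data from an arbitrary NSImage, here assumed to be named icon, is to draw it in a view and capture the region as PDF; the 512-point size is an arbitrary assumption:)
// Wrap the image in a throwaway view and capture its drawing as PDF data.
NSImageView *pdfHost = [[NSImageView alloc] initWithFrame:NSMakeRect(0, 0, 512, 512)];
pdfHost.image = icon;
NSData *pdfData = [pdfHost dataWithPDFInsideRect:pdfHost.bounds];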
There are a lot of fine details too numerous to list because of the high number of permutations of images/creation mechanisms/settings and what you want to do with it all. My post is meant to be a general guide toward finding the specific information you need for your situation.
For more detail, add specific details: the code you attempted to use, the type of image you're loading or creating -- you seemed to mention two different possibilities in your fourth paragraph -- and what went wrong.
I would guess that the image is "hard wired" into the graphics system somehow, and the NSImage representation of it is merely a number indicating which hard-wired graphic it is. So likely what you need to do is to draw it and then capture the drawing.
Very generally, create a view controller that will render the image, reference the VC's view property to cause the view to be rendered, extract the contentView of the VC, get the contentView.layer, render the layer into a UIGraphics context, get the UIImage from the context, extract whatever representation you want from the UIImage.
(There may be a simpler way, but this is the one I ended up using in one case.)
(And, sigh, I suppose this scheme doesn't preserve scaling either.)

Animated GIF file in NSImageView eats memory (Mac development)

I'm trying to display an animated GIF file with six frames, fetched via a URL request; I create an NSImage from the response and then set that image on an NSImageView.
I use this:
// Where returnedImage is the NSImage created from the connection's response.
[myImageView setImage:[response returnedImage]];
I use this code to change the displayed image when certain user actions happen.
I've observed that the memory allocated to the program increases linearly, on a large scale, and the application might crash.
I've made sure that my code has no leaks.
I don't know why the memory keeps increasing. Do I have to release the previous image that was set?
Any ideas will be appreciated.
I deleted and recreated the NSImageView in the nib file... It may be an Xcode deployment bug or something, but I was seeing the memory allocated to the graphics card continuously increasing.

Image file that is ~11 MB takes a lot of memory when rendered using the WPF Image control

When I try to set the Source of a WPF Image to an image file that is ~11 MB, shot on a 14-megapixel camera, memory shoots up to around 170 MB when the image is rendered on screen, and it never comes down after rendering.
If I try to do the same using the .NET 2.0 PictureBox control, the memory used is only 0.5 MB to 1 MB.
Logically, if the file size of an image is 11 MB, it should occupy at most 11 MB while rendering, right? What is the cause of such behaviour in WPF? And is there any way to dispose of the extra memory after rendering is completed on screen?
To answer the first part of your question:
Images shot on digital cameras are stored as JPEG files and hence compressed. When read into memory, the image is decompressed; this accounts for the difference in size you are seeing here.
For example, a photo shot on a Canon EOS 450 has a file size on disk of 3 MB. Its dimensions are 3072 x 2048, which gives an in-memory size of 3072 x 2048 pixels x 24 bits (3 bytes) per pixel = 18,874,368 bytes, or about 18 MB.
The memory usage won't come down until the object holding the image data falls out of scope and its memory is reclaimed.
For example, you'll need something along the lines of this code:
using (Image image = Image.FromFile(imageName))
{
    // Non-property-item properties
    FileName = imageName;
    PixelFormat = image.PixelFormat;
    Width = image.Size.Width;
    Height = image.Size.Height;
    foreach (PropertyItem pi in image.PropertyItems)
    {
        EXIFPropertyItem exifpi = new EXIFPropertyItem(pi);
        this.propertyItems.Add(exifpi);
    }
}
Once I've got all the information I need from the image, the using statement disposes of it, which frees the underlying image memory without waiting for garbage collection.
