I am trying to display an animated GIF with 6 frames that I fetch with a URL request; I create an NSImage from the response and then set that image on an NSImageView.
I use this code:
// returnedImage is the NSImage I created from the connection's response
[myImageView setImage:[response returnedImage]];
I call this code to change the displayed image when certain user actions happen.
I observe that the memory allocated to the program increases linearly, on a large scale, and the application may eventually crash.
I have made sure that my code has no leaks.
I do not know why the memory keeps increasing. Do I have to release the previous image that was set?
Any ideas would be appreciated.
I deleted and recreated the NSImageView in the nib file. It may be an Xcode deployment bug or something, but I could see the memory allocated to the graphics card continuously increasing.
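For illustration, a minimal sketch (in Swift, with placeholder names such as imageURL and myImageView) of swapping the image inside an autorelease pool, which is usually the first thing to try when memory climbs while repeatedly replacing images; under ARC the previous image is released automatically once the view no longer references it:

import AppKit

func updateImage(from imageURL: URL, in myImageView: NSImageView) {
    URLSession.shared.dataTask(with: imageURL) { data, _, _ in
        guard let data = data else { return }
        DispatchQueue.main.async {
            autoreleasepool {
                // Creating the NSImage here keeps any temporary decode buffers
                // inside this pool; the view releases its previous image when
                // a new one is assigned.
                myImageView.image = NSImage(data: data)
            }
        }
    }.resume()
}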
I am creating a simple photo catalogue application for macOS to see whether the latest APIs can significantly improve performance of loading directories with large numbers of images.
So far it looks pretty promising: loading thumbnails for around 600 45 MB RAW images using QLThumbnailGenerator and CGImageSourceCreateWithURL is super fast, allowing thumbnail images and image metadata to be displayed almost instantly.
Displaying these images in an NSCollectionView using a CALayer in the NSCollectionViewItem's view also appears to be extremely fast, and scrolling is very smooth.
I did find that QLThumbnailGenerator seems to start failing after a few hundred images, returning error code 108, if I call the API in a continuous loop. I fixed that by calling CGImageSourceCopyPropertiesAtIndex immediately after the thumbnail generator API call, so maybe there is a timing issue, or not enough file handles, or something else that goes wrong if the API is called too quickly and for too long.
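For reference, the thumbnail-plus-metadata path described above looks roughly like this (the requested size and scale are placeholders, not the app's actual values):

import Foundation
import QuickLookThumbnailing
import ImageIO

func loadThumbnailAndMetadata(for url: URL) {
    let request = QLThumbnailGenerator.Request(fileAt: url,
                                               size: CGSize(width: 256, height: 256),
                                               scale: 2.0,
                                               representationTypes: .thumbnail)
    QLThumbnailGenerator.shared.generateBestRepresentation(for: request) { representation, _ in
        if let cgImage = representation?.cgImage {
            DispatchQueue.main.async {
                // Hand the CGImage to the collection view item / layer here.
                print("thumbnail ready: \(cgImage.width)x\(cgImage.height)")
            }
        }
    }

    // Reading the metadata right after the thumbnail request, as described above.
    if let source = CGImageSourceCreateWithURL(url as CFURL, nil),
       let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any] {
        print("pixel width:", properties[kCGImagePropertyPixelWidth] ?? "unknown")
    }
}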
However I am still having trouble rendering a full-sized image to the display. Here I am using an NSScrollView with a layer-backed NSView as the documentView. Everything is super fast until the following call:
view.layer.contents = cgImage
And at this point the entire main thread hangs until the image has loaded - and this may take a few seconds.
Once it has loaded it's fine and zooming in and out by changing the documentView frame size is very fast - scrolling around the full size image is also super smooth without any of the typical hiccups.
Is there a way of loading these images without causing the UI to freeze?
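For illustration, a sketch of one commonly suggested workaround (a guess, not verified against panorama-sized files): force ImageIO to decode the bitmap on a background queue by passing kCGImageSourceShouldCacheImmediately, and only hand the already-decoded CGImage to the layer on the main thread:

import AppKit
import ImageIO

func displayFullImage(at url: URL, in view: NSView) {
    DispatchQueue.global(qos: .userInitiated).async {
        // Ask ImageIO to decode and cache the bitmap at creation time, off the main thread.
        let options = [kCGImageSourceShouldCacheImmediately: true,
                       kCGImageSourceShouldCache: true] as CFDictionary
        guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
              let cgImage = CGImageSourceCreateImageAtIndex(source, 0, options) else { return }
        DispatchQueue.main.async {
            // The bitmap is already decoded, so this assignment should no longer block.
            view.layer?.contents = cgImage
        }
    }
}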
I've seen the recent WWDC2020 session where they demonstrate similar scrolling of large numbers of images, but I haven't been able to find anything useful on loading large images other than CATiledLayer, and it's not really clear if that is the right answer for this problem.
The old Apple sample RawExpose seemed to be an option, but most of that code is deprecated and it seems one has to use MetalKit instead of GLKit; unfortunately there is no example of using MetalKit with Core Image that I can find.
FYI - I tried using some of the new SwiftUI collection view and List, but they seem to be significantly slower than AppKit, and I found some of the collection view items never render; of course these could just be bugs in the macOS 11 beta.
OK - well I finally figured it out, and it's complicated but simple. It's complicated because there are so many options to choose from and so many outdated sample apps to look at. In any event, I think I have solved most if not all of the issues related to using Metal-backed CALayers and rendering realtime updates of the images as CIFilter adjustments are applied. There are many pieces to the puzzle and I'm happy to share if anyone is looking for help.
Some key pointers (a rough sketch follows below):
I am using CAMetalLayer and NSView
I override the CAMetalLayer's display method and call layer.setNeedsDisplay() when the user moves an adjustment slider.
I chain together all the CIFilters, including the RAW filter created with CIFilter(imageURL:options:).
Most importantly, I use the RAW filter's scale factor parameter to size the image; I encountered major performance issues using any other method to resize the image for the view's size.
Don't expect high performance if the image is zoomed right in; 50% seems to be the limit for 45-megapixel RAW images from a Nikon D850.
A short video of the result is here https://youtu.be/5wp0CIWAoIM
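Here is a rough sketch of the render path behind those pointers. It is simplified (no colour management or drawable-size handling to speak of), and the exact configuration values are assumptions rather than the code from the video:

import AppKit
import CoreImage
import Metal

// The hosting NSView is expected to configure the layer before use, e.g.:
//   layer.device = MTLCreateSystemDefaultDevice()
//   layer.framebufferOnly = false      // CIContext writes into the drawable's texture
//   layer.pixelFormat = .bgra8Unorm
final class FilteredImageLayer: CAMetalLayer {

    // Output of the CIFilter chain (RAW filter plus adjustments), already sized
    // via the RAW filter's scale factor to roughly match drawableSize.
    var image: CIImage? {
        didSet { setNeedsDisplay() }     // triggers display() below
    }

    private lazy var ciContext = CIContext(mtlDevice: device!)
    private lazy var commandQueue = device!.makeCommandQueue()!

    override func display() {
        guard let image = image,
              let drawable = nextDrawable(),
              let commandBuffer = commandQueue.makeCommandBuffer() else { return }
        ciContext.render(image,
                         to: drawable.texture,
                         commandBuffer: commandBuffer,
                         bounds: CGRect(origin: .zero, size: drawableSize),
                         colorSpace: CGColorSpaceCreateDeviceRGB())
        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}

// Building the source image from a RAW file; the scale factor is what makes resizing fast.
func makeSourceImage(from rawFileURL: URL, scale: Double) -> CIImage? {
    let rawFilter = CIFilter(imageURL: rawFileURL, options: nil)
    rawFilter.setValue(scale, forKey: kCIInputScaleFactorKey)   // e.g. 0.5 for 50%
    return rawFilter.outputImage
}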
TL;DR: In macOS 10.13, an MTLTexture has a maximum width and height of 16,384. What strategies can you use to be able to process and display images larger than 16,384 pixels using Metal?
In a photo viewer that I'm currently working on, I've moved most of the viewing of images into a Metal-backed view that uses Core Image for doing any image adjustments. This is working really well, but I've recently started testing against some really large images (panoramas) and I'm now hitting some limits that I'm not entirely sure how to work around while remaining relatively performant.
My current environment looks like this:
Load and decode an image from an NSURL into an IOSurface. This is done using either Image IO directly or a Core Image pipeline that renders into the IOSurface. The IOSurface is then passed from an XPC service back into the main app.
In the main app, a new MTLTexture is created that is backed by the IOSurface. Then, a CIImage is created from the MTLTexture and that CIImage is used throughout an image pipeline as the root "source image".
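For reference, a condensed sketch of that IOSurface-to-MTLTexture-to-CIImage hand-off (the pixel format, usage flags, and function name here are assumptions, not the app's actual code):

import Metal
import CoreImage
import IOSurface

// Wrap an IOSurface received from the XPC service in an MTLTexture,
// then use it as the root "source image" of the CIImage pipeline.
func makeSourceImage(from surface: IOSurfaceRef, device: MTLDevice) -> CIImage? {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                              width: IOSurfaceGetWidth(surface),
                                                              height: IOSurfaceGetHeight(surface),
                                                              mipmapped: false)
    descriptor.usage = [.shaderRead]
    guard let texture = device.makeTexture(descriptor: descriptor, iosurface: surface, plane: 0) else {
        return nil
    }
    return CIImage(mtlTexture: texture, options: [.colorSpace: CGColorSpaceCreateDeviceRGB()])
}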
However, if I attempt to open an image larger than 16,384 pixels in one dimension, then I'm unable to create the original IOSurface on my laptop (13" MBP-TB 2016).
But even if I could create a larger IOSurface, then I'm still stuck with the same limit on the MTLTexture.
See: Apple's Metal Feature Set Tables
I'm curious what strategies others would recommend to allow one to open large image files while still taking advantage of Core Image and Metal.
One attempt I've made is to just have the root source image be a CIImage that was created with a CGImageRef. However, there's a significant drop in performance between that arrangement and a CIImage backed by a texture for even smaller sized images.
Another idea I've had, but haven't yet explored, was to use CIImageProvider in some capacity but I'm not entirely sure how I'd go about "tiling" potentially several IOSurfaces or MTLTextures, and if that even makes sense or if it would be better to just allocate a single large buffer to read from. (Or perhaps use dispatch_data in some capacity?)
(macOS 10.13 or even 10.14 would be fine.)
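For illustration, a sketch of the tiling idea from the paragraph above; note that it composites per-tile CIImages with translations rather than using CIImageProvider, and whether it stays performant depends entirely on how each tile is backed (IOSurface/MTLTexture vs. CGImage):

import CoreImage

// Stitch per-tile CIImages (each under the 16,384-pixel limit) back into one
// logical source image by translating each tile to its position and compositing.
struct ImageTile {
    let image: CIImage      // backed by an IOSurface/MTLTexture that fits under the limit
    let origin: CGPoint     // the tile's position in the full-resolution image
}

func stitch(tiles: [ImageTile]) -> CIImage {
    var result = CIImage.empty()
    for tile in tiles {
        let placed = tile.image.transformed(by: CGAffineTransform(translationX: tile.origin.x,
                                                                  y: tile.origin.y))
        result = placed.composited(over: result)
    }
    return result
}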
I am downloading cover images uploaded by App.net users. App.net requires these cover images to be at least 960 pixels wide. I fetch them with a simple AFImageRequestOperation:
NSURLRequest *urlRequest = [NSURLRequest requestWithURL:URL];
AFImageRequestOperation *imageRequestOperation = [AFImageRequestOperation imageRequestOperationWithRequest:urlRequest success:^(UIImage *image) {
    if (completionHandler) {
        completionHandler(image); // Load image into UI...
    }
}];
[self.fetchQueue addOperation:imageRequestOperation];
This is working, no memory spikes.
I want to cache the authenticated users' cover images so users don't have to download them each time the app opens. As soon as I archive the downloaded image to disk, I get huge spikes in memory. For example, my cover image is currently 3264 x 2448 pixels. When downloaded on my Mac it comes to around 1.3 MB. However, as soon as I create an NSData object with either UIImagePNGRepresentation(image) or via TMCache's setObject:forKey: method, the app's memory usage spikes to around 60 MB.
For clarity, this is all I'm doing to write the file to disk:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    NSURL *fileURL = ... // URL of file in "/Application Support"
    NSData *imageData = UIImagePNGRepresentation(imageToSave);
    [imageData writeToURL:fileURL atomically:YES];
});
Can anyone tell me what is going on? Why is a 1.3 MB file ballooning to almost sixty times that size? How can I avoid this massive and potentially crippling inflation? This is one image; what if the user opens several profiles, each with a cached image?
The image dimensions are what have the greatest bearing on memory usage. For a given image size (regardless of PNG or JPG), the decoded memory usage is pretty much the same and is given by: width x height x 4 bytes.
A cover image of 3264 x 2448 would decode to roughly 32 MB (3264 x 2448 x 4 bytes ≈ 32 MB). Perhaps the atomic write explains the doubling you see.
Spikes like this may be unavoidable if that's the size of the image you need to work with. The important thing is to make sure the memory usage isn't growing without bound. When you run the app and look at the memory instrument gauge, does it eventually go down as your app does its work? You can also try wrapping the image-writing code in an @autoreleasepool block.
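For example, the autorelease pool suggestion looks roughly like this; shown here as a Swift sketch, but in the Objective-C snippet above it's the same idea with an @autoreleasepool block around the UIImagePNGRepresentation and writeToURL: calls:

import UIKit

// Wrap the encode-and-write in an autorelease pool so the temporary PNG data
// (and any decoded bitmap it drags along) is released as soon as possible.
func cacheCoverImage(_ imageToSave: UIImage, to fileURL: URL) {
    DispatchQueue.global(qos: .background).async {
        autoreleasepool {
            // pngData() is the Swift spelling of UIImagePNGRepresentation().
            if let imageData = imageToSave.pngData() {
                try? imageData.write(to: fileURL, options: .atomic)
            }
        }
    }
}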
I am having some memory issues with our Android app when handling bitmaps (duh!).
We have multiple activities loading images from a server; for example, a background image for the activity.
This background image could be the same for multiple activities, and right now each activity loads its own copy of it.
This means that if the flow is ac1->ac2->ac3->ac4, the same image will be loaded 4 times and use 4x the memory.
How do I optimize image handling for this scenario? Do I create an image cache where the image is stored, so each activity asks the cache for images first? If so, how do I know when to evict the image from the cache?
Any suggestions, links to good tutorials, or similar are highly appreciated.
Regards
EDIT:
When downloading images, the exact size needed on the device is used, meaning that if the UI element needs a 100x100 pixel image it gets exactly that size, so there should be no need for scaling. So I am not sure downscaling the image when loading it into memory will help. Maybe I need to unload images in an activity when moving on to the next one and then reload them when going back.
One thing you might want to try is scaling down your bitmaps (make a thumbnail) to a size that is more appropriate to your device. It's pretty easy to quickly use up all the RAM on an Android device with a bitmap if you don't scale it down. This recipe shows how to do so on the fly. You could adapt this to save the images to disk.
You could also create your own implementation of LruCache to cache the images for your app.
After reading your update I will give you another tip then.
I can still post the patch too if people want it.
What you need to do with those bitmaps is wrap them in a using block. That way the bitmap is disposed, and its memory released, as soon as that block has finished executing.
Example:
// Resources and Resource.Drawable.Background are placeholders for whatever bitmap source you use.
using (Bitmap usedBitmap = BitmapFactory.DecodeResource(Resources, Resource.Drawable.Background)) {
    // Do stuff with the Bitmap here
    // usedBitmap.Dispose() is called automatically when the using block ends
}
With this code your app shouldn't keep all the used bitmaps in memory.
There are a lot of answers for this question. But all of them are incorrect!
For example, say I have created a CCLayer object with one CCSprite object. I have 3 textures and I want to switch between them on every touch.
For example I will use something similar to this:
link
I run this application in the Simulator, then I trigger a memory warning, then I try to switch between the images (textures). I see that 2 of the 3 images are deleted (all except the image that was being shown at the moment the memory warning appeared).
I tried to use retain/release calls on the CCSprite and CCTexture2D objects, but they cause a situation where the dealloc method of the released object is never called.
So how do I store them correctly? I want to keep them through a memory warning and release/remove them when the current layer is destroyed.
Store them in one texture atlas, created with TexturePacker. Then it's as simple as calling [sprite setDisplayFrame:frameName] to switch the displayed texture.
By default, on a memory warning cocos2d will remove unused textures. The whole point of the memory warning is that the OS tells your app "hey, that's not okay, cut down your memory appetite or I'll shut you down", and your app should respond with "oops, sorry, freeing memory now".
If you receive a memory warning when preloading textures, cocos2d's default behavior of removing unused textures will shoot you in the foot. More about this issue here.
My advice: remove the call to purge cocos2d's caches in the memory warning method in AppDelegate. Of course you want to be extra careful with your memory usage. Alternatively you could simply disable the behavior while you're preloading images, but this might simply move the problem to a later point.