What is the best way to display and apply filters to RAW images on macOS?

I am creating a simple photo catalogue application for macOS to see whether the latest APIs can significantly improve performance of loading directories with large numbers of images.
So far it looks pretty promising: loading around 600 45 MB RAW image thumbnails using QLThumbnailGenerator and CGImageSourceCreateWithURL is super fast, allowing thumbnail images and image metadata to be displayed almost instantly.
Displaying these images in an NSCollectionView using a CALayer in the NSCollectionViewItem's view also appears to be extremely fast, and scrolling is very smooth.
I did find that QLThumbnailGenerator seems to start failing after a few hundred images, returning error code 108, if I call the API in a continuous loop. I fixed that by calling CGImageSourceCopyPropertiesAtIndex immediately after the thumbnail generator call, so maybe there is a timing issue, a shortage of file handles, or something similar when the API is called too quickly for too long.
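For reference, a minimal sketch of that workaround (the function shape and names are illustrative, not the exact app code):

import ImageIO
import QuickLookThumbnailing

// Kick off the thumbnail request, then read the image properties
// straight away via Image IO.
func loadThumbnailAndMetadata(for url: URL,
                              size: CGSize,
                              scale: CGFloat,
                              completion: @escaping (CGImage?) -> Void) -> [CFString: Any]? {
    let request = QLThumbnailGenerator.Request(fileAt: url,
                                               size: size,
                                               scale: scale,
                                               representationTypes: .thumbnail)
    QLThumbnailGenerator.shared.generateBestRepresentation(for: request) { thumbnail, _ in
        completion(thumbnail?.cgImage)
    }
    // Calling this immediately after the generator call is what made the
    // intermittent error 108 failures go away for me.
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    return CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any]
}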
However I am still having trouble rendering a full-sized image to the display. Here I am using an NSScrollView with a layer-backed NSView as the documentView. Everything is super fast until the following call:
view.layer.contents = cgImage
At this point the entire main thread hangs until the image has loaded, and this may take a few seconds.
Once it has loaded it's fine: zooming in and out by changing the documentView frame size is very fast, and scrolling around the full-size image is also super smooth, without any of the typical hiccups.
Is there a way of loading these images without causing the UI to freeze?
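For example, is the right approach to force the decode on a background queue before assigning the contents, along these lines (an illustrative sketch)?

import CoreGraphics
import Foundation
import ImageIO

// Decode the bitmap off the main thread, then hand the pre-decoded
// CGImage to the layer on the main thread.
func loadFullImage(at url: URL, completion: @escaping (CGImage?) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        var decoded: CGImage?
        if let source = CGImageSourceCreateWithURL(url as CFURL, nil),
           let image = CGImageSourceCreateImageAtIndex(source, 0, nil),
           let context = CGContext(data: nil,
                                   width: image.width,
                                   height: image.height,
                                   bitsPerComponent: 8,
                                   bytesPerRow: 0,
                                   space: CGColorSpace(name: CGColorSpace.sRGB)!,
                                   bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) {
            // Drawing once into a bitmap context forces the expensive decode
            // here instead of inside the layer.contents assignment.
            context.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
            decoded = context.makeImage()
        }
        DispatchQueue.main.async { completion(decoded) }
    }
}

// Usage: loadFullImage(at: url) { self.view.layer?.contents = $0 }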
I've seen the recent WWDC 2020 session where they demonstrate similar scrolling of large numbers of images, but I haven't been able to find anything useful on loading large images other than CATiledLayer, and it's not really clear if that is the right answer for this problem.
The old Apple sample RawExpose seemed to be an option, but most of that code is deprecated and it seems one now has to use MetalKit instead of GLKit. Unfortunately there is no example of using MetalKit with Core Image that I can find.
FYI - I tried using some of the new SwiftUI CollectionView and List, but they seem to be significantly slower than AppKit, and I found some of the collection view items never render - of course these could just be bugs in the macOS 11 beta.

OK - well I finally figured it out, and it's complicated but simple. It's complicated because there are so many options to choose from and so many outdated sample apps to look at. In any event, I think I have solved most if not all of the issues related to using Metal-backed CALayers and rendering realtime updates of the images as CIFilter adjustments are applied. There are many pieces to the puzzle, and I'm happy to share if anyone is looking for help.
Some key pointers:
I am using CAMetalLayer and NSView
I override the CAMetalLayer display(layer:) method and call layer.setNeedsDisplay() when the user slides an adjustment slider.
I chain together all the CIFilters, including the RAW filter created with CIFilter(imageURL:).
Most importantly, I use the RAW filter's scaleFactor parameter to size the image - I encountered major performance issues using any other method to resize the image for the view's size.
Don't expect high performance if the image is zoomed right in - 50% seems to be the limit for 45-megapixel RAW images from a Nikon D850.
A short video of the result is here https://youtu.be/5wp0CIWAoIM
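To make those pointers concrete, here is a stripped-down sketch of the layer and filter plumbing. The class name, the single CIExposureAdjust adjustment, and the initializer shape are all illustrative rather than the exact code from the app:

import AppKit
import CoreImage
import Metal

// Illustrative sketch of the CAMetalLayer + Core Image pipeline described above.
final class FilteredImageLayer: CAMetalLayer {
    private lazy var commandQueue = device!.makeCommandQueue()!
    private lazy var ciContext = CIContext(mtlDevice: device!)

    // e.g. CIFilter(imageURL: url, options: [CIRAWFilterOption.scaleFactor: 0.5])
    var rawFilter: CIFilter?

    // Slider-driven adjustment: mark the layer dirty and let display() redraw.
    var exposure: Double = 0 { didSet { setNeedsDisplay() } }

    override init() {
        super.init()
        device = MTLCreateSystemDefaultDevice()
        framebufferOnly = false   // Core Image renders into the drawable's texture
    }

    override init(layer: Any) { super.init(layer: layer) }

    required init?(coder: NSCoder) { fatalError("not supported") }

    override func display() {
        guard let rawImage = rawFilter?.outputImage,
              let drawable = nextDrawable(),
              let buffer = commandQueue.makeCommandBuffer() else { return }
        // Chain adjustment CIFilters onto the RAW filter's output.
        let adjusted = rawImage.applyingFilter("CIExposureAdjust",
                                               parameters: [kCIInputEVKey: exposure])
        ciContext.render(adjusted,
                         to: drawable.texture,
                         commandBuffer: buffer,
                         bounds: CGRect(origin: .zero, size: drawableSize),
                         colorSpace: CGColorSpaceCreateDeviceRGB())
        buffer.present(drawable)
        buffer.commit()
    }
}

The hosting NSView installs this as its layer (layer = FilteredImageLayer(); wantsLayer = true), and each slider action just sets exposure, which triggers display() via setNeedsDisplay().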

Related

Images in ListView are not released from memory when out of view

I am displaying images from the internet in a vertical ListView. I fetch the images using http.get (not using cached network image because I do not want to cache the images). Then I insert the image Uint8List into Image.memory(). What happens is that as the user scrolls the list and images are loading, the RAM usage keeps increasing until the whole app crashes. Any ideas what to do?
Yeah, this is the normal behavior. I don't know why exactly. My theory is that the images by default are disposed of when the Dart objects holding references to them are garbage collected, rather than when the widgets are knocked off the widget tree, but don't take my word for it - that's just my personal reasoning. It may be completely wrong.
To get around this, I use the Extended Image package. Its constructors take a bool clearMemoryCacheWhenDispose which disposes of images in RAM in scroll lists. You may use that, or visit the package code to see how he's doing it and replicate it.
Other advice I can give is to make sure your images are of an appropriate size. If you are loading your images locally, check this part of the documentation to have different dimensions selected for different screen sizes: https://flutter.dev/docs/development/ui/assets-and-images#loading-images
If you are fetching your images from the network, which is more likely, make sure their sizes are appropriate, and have different sizes served if you can. If you don't have control over that, set cacheWidth and cacheHeight in Image.memory; these will reduce the size of the cached image in memory (RAM). By default Flutter caches images in full resolution regardless of their apparent size in the device/app. For example, if you want your image to display in a 200x200 box, set cacheWidth to 200 * window.devicePixelRatio.ceil() (you will find window in dart:ui). If you only set cacheWidth, the aspect ratio of your images will be preserved; the same is true if you only set cacheHeight. Also do use ListView.builder as suggested.
I am disappointed at how little is said about this in the Flutter docs. Most of the time people discover this problem when their apps start crashing. Do check your dev tools regularly for memory consumption - it's the best indicator out there.
Cheers and good luck
I was having the same issue and found a fix thanks to @moneer!
Context:
Users in my app can create shared image galleries which can easily contain several hundred images. Those are all displayed in a SliverGrid widget. When users scrolled down the list, too many images were loaded into RAM and the app would crash.
Things I had already implemented:
Image resizing on the server side, and getting appropriately sized images on the client based on the device pixel ratio and the tile size in the gallery
Making sure that my image widgets were properly disposed of when out of view - the memory usage kept building up as the user scrolled through all the images anyway
Implementing cacheHeight to limit the size of the cached image to the exact size of the displayed image
All these things helped, but the app would eventually still crash every time the user scrolled down far enough.
The fix:
After some googling I stumbled upon this thread and tried the extended_image package as @moneer suggested. Setting the clearMemoryCacheWhenDispose flag to true fixed the issue of the app crashing, as it was now properly clearing the images from memory when they were out of view. Hooray! However, in my app users can tap on an image and the app navigates to an image detail page with a nice Hero animation. When navigating back, the image would rebuild and this would cause a rebuild 'flicker'. Not that nice to look at and kind of distracting.
I then noticed that there's also an enableMemoryCache flag. I tried setting this to false and that seems to work nicely for me. The Network tab in Dart DevTools seems to show that all images are only fetched from the network once when scrolling up and down the gallery multiple times, and the app does not crash anymore.
I'll have to do more testing to see if this leads to any performance issues (if you can think of any, please let me know).
The final code:
ExtendedImage.network(
_imageUrl,
cacheHeight: _tileDimension,
fit: BoxFit.cover,
cache: true, // store in cache
enableMemoryCache: false, // do not store in memory
enableLoadState: false, // hide spinner
)
I had a similar issue when I loaded images from files in a ListView.
[left-side: old code, right-side: new code]
The first huge improvement for me was not loading the whole list at once:
ListView(children:[...]) ---> ListView.builder(...).
The second improvement was that images are no longer loaded at full-size:
Image.file("/path") ---> Image.file("/path", cacheWidth: X, cacheHeight: Y)
These two things solved my memory problems completely.
Ideally, caching happens more or less by default once certain conditions are fulfilled, so it's up to your app to handle and control how the caching happens.
Check out this answer:
https://stackoverflow.com/a/24476099/12264865

How do you work with really large images in Metal?

TL;DR: In macOS 10.13, an MTLTexture has a maximum width and height of 16,384. What strategies can you use to be able to process and display images larger than 16,384 pixels using Metal?
In a photo viewer that I'm currently working on, I've moved most of the viewing of images into a Metal-backed view that uses Core Image for doing any image adjustments. This is working really well, but I've recently started testing against some really large images (panoramas) and I'm now hitting some limits that I'm not entirely sure how to work around while remaining relatively performant.
My current environment looks like this:
Load and decode an image from an NSURL into an IOSurface. This is done using either Image IO directly or a Core Image pipeline that renders into the IOSurface. The IOSurface is then passed from an XPC service back into the main app.
In the main app, a new MTLTexture is created that is backed by the IOSurface. Then, a CIImage is created from the MTLTexture and that CIImage is used throughout an image pipeline as the root "source image".
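For reference, step 2 looks roughly like this (a sketch assuming a single-plane BGRA surface, with most error handling elided):

import CoreImage
import IOSurface
import Metal

// Wrap the IOSurface in an MTLTexture, then use that texture as the
// backing of the pipeline's root CIImage.
func makeSourceImage(from surface: IOSurfaceRef, device: MTLDevice) -> CIImage? {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm,
        width: IOSurfaceGetWidth(surface),
        height: IOSurfaceGetHeight(surface),
        mipmapped: false)
    descriptor.usage = .shaderRead
    guard let texture = device.makeTexture(descriptor: descriptor,
                                           iosurface: surface,
                                           plane: 0) else { return nil }
    return CIImage(mtlTexture: texture, options: nil)
}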
However, if I attempt to open an image larger than 16,384 pixels in one dimension, then I'm unable to create the original IOSurface on my laptop (13" MBP-TB 2016).
But even if I could create a larger IOSurface, I'm still stuck with the same limit on the MTLTexture.
See: Apple's Metal Feature Set Tables
I'm curious what strategies others would recommend to allow one to open large image files while still taking advantage of Core Image and Metal.
One attempt I've made is to have the root source image be a CIImage created from a CGImageRef. However, there's a significant drop in performance between that arrangement and a CIImage backed by a texture, even for smaller images.
Another idea I've had, but haven't yet explored, is to use CIImageProvider in some capacity, but I'm not entirely sure how I'd go about "tiling" potentially several IOSurfaces or MTLTextures, whether that even makes sense, or whether it would be better to just allocate a single large buffer to read from. (Or perhaps use dispatch_data in some capacity?)
(macOS 10.13 or even 10.14 would be fine.)
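To illustrate the tiling idea, the stitching could look something like the following untested sketch, which assumes square, axis-aligned tiles that each fit under the texture limit:

import CoreImage
import Metal

// Stitch a grid of tile textures into one large CIImage by translating each
// tile into place and compositing it over the accumulated result.
func stitchedImage(tiles: [[MTLTexture]], tileSize: CGFloat) -> CIImage? {
    var result: CIImage?
    for (row, rowTiles) in tiles.enumerated() {
        for (column, texture) in rowTiles.enumerated() {
            guard let tile = CIImage(mtlTexture: texture, options: nil) else { return nil }
            let positioned = tile.transformed(by: CGAffineTransform(
                translationX: CGFloat(column) * tileSize,
                y: CGFloat(row) * tileSize))
            result = result.map { positioned.composited(over: $0) } ?? positioned
        }
    }
    return result
}

Whether Core Image can then run a filter chain over such a stitched image without materializing the full extent is exactly the part I'm unsure about.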

MonoMac Application & OpenGL - Weird frame times

I am trying to create an application with an OpenGL view using MonoMac. Setting up an application and an NSOpenGLView was fairly simple...
...but for some reason I cannot get a consistent frame rate rendering OpenGL. The issue I am having is that nine out of ten frames have perfect performance, and every tenth frame or so I get a massive frame-time spike (about 60-80 ms for a single frame). The time of the slow frame correlates with the size of the control (and even more so when using a retina backing buffer).
I have been digging and have come up with nothing that works for my case.
I tried to use NSOpenGLView with CVDisplayLink and rendering on the main thread with timers and DrawRect.
I also tried MonoMacGameView, in both versions. MonoMacGameView actually has consistent performance, but it only draws when my window does not have a background color.
I reimplemented the run loop to do my own NextEvent polling, just to find out that that is not the issue...
So my current hunch is that it has something to do with layer-backed rendering in Cocoa views, but I really cannot figure out what is causing this.
Any hint as to what is causing this delay?
I found a solution which produces pretty good results:
Do not use NSOpenGLView or MonoMacGameView.
Use the approach described in the example on this page: http://cocoa-mono.org/archives/366/monomac-and-opengl/
To enable retina support, export the ContentScale property on the deriving class and set it depending on whether you are running on a retina screen or not.
In conclusion, using a Core Animation layer is the only viable solution.

Silverlight canvas freedraw underperforming

I'm making a Silverlight website which includes paint-like features, including freedraw. To achieve this I used the technique described on the following website: http://codeding.com/articles/freehand-drawing-in-silverlight
The problem is, when I run the demo project it starts to lag extremely after just a few seconds of drawing. I realise that this is probably caused by the number of shapes this technique requires. However, and this is my main question:
How on earth does the demo on the website not lag no matter how much I draw, while my local project, which should have the EXACT same code, lags right away?
I tried to find something about improving canvas performance overall, but the only thing I found was turning the drawing into a static image, which is not really ideal since I use undo/redo functionality.
The number of shapes added to the Canvas shouldn't be the reason for the lagging; there must be something else, like converting the drawing into an image for the undo/redo functionality. For undo/redo, you can save the stroke information instead of images. Creating & storing images during each undo/redo operation will consume too much memory.
A stroke is nothing but a set of points from the start (mousedown event) to the end (mouseup event), and a set of strokes forms a complete drawing. You can always recreate the drawing using the saved stroke information (just like you can recreate it using images). You can use simple data structures like List<List<Point>> to store a complete drawing; this is very memory-efficient compared to creating & storing the image itself.

Drawing large images for iPad

I am developing an application for viewing images.
I used Apple's PhotoScroller example to implement this application.
In my application I want to be able to draw on the image.
I had the idea to put a UIView on top with a transparent background and draw the lines via touch events. This solution turned out to be very slow because the generated images are very large, around 3700x2000 pixels.
I also tried a solution based on Apple's GLPaint example, which uses OpenGL, but it has a size limitation of 2048x2048 pixels.
Does anyone have an idea or example of how I could implement this?
I think you should try and tile your image.
One option is using CATiledLayer. Have a look at this short tutorial.
Or you could try to use CGContextDrawTiledImage to get your stuff done. Possibly this post from S.O. could help you get started.
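If you go the CATiledLayer route, the basic setup is quite small. Here is a sketch (in Swift; the tile size, the detail levels, and drawing the full UIImage per tile are illustrative - a real implementation would draw pre-sliced tile bitmaps, as Apple's PhotoScroller sample does):

import UIKit

// A view backed by CATiledLayer: draw(_:) is called once per visible tile,
// on background threads, so only on-screen tiles are ever rendered.
final class TiledImageView: UIView {
    var image: UIImage?   // a real app would draw pre-sliced tile bitmaps instead

    override class var layerClass: AnyClass { CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        let tiledLayer = layer as! CATiledLayer
        tiledLayer.tileSize = CGSize(width: 256, height: 256)
        tiledLayer.levelsOfDetail = 4       // zoomed-out levels
        tiledLayer.levelsOfDetailBias = 2   // zoomed-in levels
    }

    required init?(coder: NSCoder) { fatalError("not supported") }

    override func draw(_ rect: CGRect) {
        // rect is the tile being requested; the clip is already set to it.
        image?.draw(in: bounds)
    }
}

Your transparent drawing overlay can then sit on top of this view and only composite the strokes, not the huge bitmap.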
