How to implement a very large scrolled view in Cocoa

Is it wise to create views in Cocoa that have dimensions around 15,000 pixels? (Of course, only a small part of this view will be visible at a time in an NSScrollView.)
Interface Builder has a limit of 10,000 pixels on a view's dimensions. Is this an artificial limitation, or is there a good reason behind it?
Should I just create the huge view and let NSScrollView/Quartz worry about rendering it efficiently (my view is drawn programmatically, within the area requested in drawRect:), or do I risk excessive memory usage and other problems? (For example, could OS X try to cache the whole view's bitmap in video memory at some point?)

Views don't have backing stores, unless they are layer-backed. The window is what has the backing store, so the amount of memory used to display the view is limited to the size of the window.
So, the answer is yes. Go ahead and make your views as big as you want.
(Of course, you'll want to limit the drawing you do in the view to the rect passed into drawRect:, or you'll be wasting a lot of time doing invisible drawing.)
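For concreteness, here is a minimal sketch of that kind of dirty-rect-aware drawing; the view class, row layout, and colors are all illustrative, not anything from the question:

    #import <Cocoa/Cocoa.h>

    // A sketch of drawing only the exposed area of a huge view.
    @interface HugeView : NSView
    @end

    @implementation HugeView

    - (void)drawRect:(NSRect)dirtyRect {
        CGFloat rowHeight = 20.0;
        // Compute only the rows that intersect the exposed area...
        NSInteger firstRow = (NSInteger)floor(NSMinY(dirtyRect) / rowHeight);
        NSInteger lastRow  = (NSInteger)ceil(NSMaxY(dirtyRect) / rowHeight);
        for (NSInteger row = firstRow; row < lastRow; row++) {
            NSRect rowRect = NSMakeRect(0.0, row * rowHeight,
                                        NSWidth([self bounds]), rowHeight);
            // ...and draw just those; everything outside dirtyRect is skipped.
            [(row % 2 ? [NSColor whiteColor] : [NSColor lightGrayColor]) set];
            NSRectFill(rowRect);
        }
    }

    @end

Scrolling a 15,000-pixel view then only ever costs you the drawing for the window-sized slice that is actually exposed.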

Well, if Cocoa does try to cache the entire view in memory, that would be a problem:
10,000 × 10,000 pixels = 100,000,000 pixels; at 4 bytes per pixel, that's 400,000,000 bytes.
That's 400 MB in raw RGBA pixels for one view. If we want to be really pessimistic, assume that NSView is double-buffering that for you, in which case your memory usage doubles to 800 MB.
In the worst case, your user is running your app on an old Mac mini with 1 GB of RAM—of which you have just used 80%. The system will certainly start paging way before this point, making their system unbearably slow.
On the other hand, it is the easiest way to implement it that I can think of, so I say try it and see what Activity Monitor says about your memory usage. If it's too high, try changing various options of the scroll view and clip view; if that doesn't work, I can think of nothing else but to make your own scrollers and fake it.

Related

Do we really need separate thumbnail images?

I understand the use of thumbnails in networked applications, but assuming all the images are in the application itself (a photo application), do we still need thumbnail images for performance reasons, or is it fine for the device to resize the actual image at run time?
Since the question as stated is too opinion-based, I am going to ask more quantitatively:
The images are 500x500 JPEGs, about 200-300 kB each.
There will be about 200 images.
It is targeted at iPhone 4 and higher, so that is the minimum hardware spec users will have.
The maximum memory used should not exceed 20% of the device's capacity.
Will the application in this case need separate thumbnail images?
It depends on your application. Just test performance and memory usage on a device.
If you show a lot of images and/or they change very quickly (like when scrolling a UITableView with a lot of images), you will probably have to use thumbnails.
UPDATE:
When an image is shown, it takes width × height × 3 bytes of memory (width × height × 4 for images with an alpha channel). Ten 2592 × 1936 photos stored in memory will require about 200 MB of RAM. That is too much: you definitely have to use thumbnails.
Your question is a bit lacking in detail, but I assume you're asking whether, for say a photo album app, you can just throw around full-size UIImages and let a UIImageView resize them to fit the screen, or whether you need to resize them yourself.
You absolutely need to resize.
An image taken by an iPhone camera will be several megabytes in compressed file size, more in actual bytes used to represent pixels. The dimensions of the image will be far greater than the screen dimensions of the device. The memory use is very high, particularly if you're thinking of showing multiple "thumbnails". It's not so much a CPU issue (once the image has been rendered it doesn't need re-rendering) but a memory one, and you're severely memory constrained on a mobile device.
Each doubling of an image's dimensions (e.g. from 100x100 to 200x200) represents a four-fold increase in the memory needed to hold it.
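A minimal sketch of the resizing step (the helper name and the choice of API are made up for illustration; this is one common approach, not the only one):

    #import <UIKit/UIKit.h>

    // Downscale a full-size image once, so the decoded bitmap kept in
    // memory is thumbnail-sized. Helper name is hypothetical.
    static UIImage *ThumbnailOfImage(UIImage *image, CGSize targetSize) {
        // Opaque context; a scale of 0.0 means "use the device's screen scale".
        UIGraphicsBeginImageContextWithOptions(targetSize, YES, 0.0);
        [image drawInRect:(CGRect){CGPointZero, targetSize}];
        UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return thumbnail;
    }

At the question's scale, a decoded 500x500 image is about 1 MB (500 × 500 × 4 bytes), while a 100x100 thumbnail is about 40 KB, so 200 thumbnails need roughly 8 MB instead of 200 MB.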

What is the relationship between NSImageView and NSImageCell?

It seems that if I create an NSImageView in code, there is no way to have the image automatically scale up proportionally if the NSImageView becomes bigger than the image itself (an odd omission).
On the other hand, if I create the NSImageView in IB, it seems to somehow attach an NSImageCell to the NSImageView and the NSImageCell has an option to scale proportionally up and down, which is what I want.
But in IB, I can't seem to understand the relationship between the NSImageView and the NSImageCell. I can't delete the NSImageCell from the NSImageView and I don't see the connection in bindings or anywhere else.
How do I get the functionality of the NSImageCell when creating the NSImageView in code?
Sorry if this is obvious, but I'm used to UIImageViews and they're definitely different from NSImageView.
Thank you.
You should be able to scale up using [NSImageView setImageScaling:NSImageScaleProportionallyUpOrDown]. Have you had trouble with that? You can also access the cell for any NSControl using -cell.
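For example, somewhere in your setup code (the frame, image name, and variable name here are illustrative):

    // Creating the image view in code with proportional up-scaling enabled.
    NSImageView *imageView =
        [[NSImageView alloc] initWithFrame:NSMakeRect(0.0, 0.0, 400.0, 300.0)];
    [imageView setImage:[NSImage imageNamed:@"photo"]]; // illustrative name
    [imageView setImageScaling:NSImageScaleProportionallyUpOrDown];

    // The view covers its cell's setting, so this should be equivalent:
    [[imageView cell] setImageScaling:NSImageScaleProportionallyUpOrDown];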
As for the separation of cells from controls (views), this is a hold-over from the days of much less powerful computers (some NeXT computers only had 8MB of memory). Cells provide a lighter-weight object that does not require some of the overhead of a full view. This is important in cases where a lot of elements might exist, such as in a matrix or a table. Cells are designed to be easier to copy and reuse, much like UITableViewCell. Like UITableViewCell, the same NSCell may be used repeatedly to draw different content. Cells also share some singletons. For example, there is typically only one field editor (NSTextView) shared by most cells. It just gets moved around as needed as the user selects different text fields to edit.
In a world where the first iPhone had 10x the memory of a NeXT and desktops commonly have 1000x the memory, some of the choices in NSCell don't make as much sense. Since 10.7, OS X has moved some things away from NSCell: NSTableView now supports view-based cells as well as NSCell-based ones. When iPhoneOS was released, UITableView was built on views from the start. (Of course an iPhone table view is minuscule compared to an OS X table view, so the choice was easier there for more reasons than just available memory.)
The reason you're confused is that the documentation for -[NSImageView setImageScaling:] is wrong where it lists the possible scaling choices. If you look up NSImageScaling itself, you will find another choice, NSImageScaleProportionallyUpOrDown.

Windows Phone 7 memory management

I'd like to know if there are any specific strategies for handling memory, especially with respect to image caching, on Windows Phone. I have a very graphics-intensive Silverlight app which needs to keep the graphics it retrieves from the internet, and the user needs to be able to roam about freely; but the memory requirement becomes quite huge after using the app for a couple of minutes.
I have tried setting the image's UriSource to null, but I need to maintain the image backgrounds when I come back to the page. I'm at a loss because there isn't much information on the internet. The built-in profiler showed me "Texture Memory Dominant" and asked me to analyze heap memory to resolve the issue, but I'm still clueless about these.
Any pointers to move forward?
My answer will be general, like your question. I presume you know for sure that the problem is the images. (A simple ListBox with a few hundred text items can also cost you many MB.)
If you search the web you'll find plenty of links such as this one. But a general analysis is easy to do.
Take an image of the WP7 screen size, i.e. 480 × 800. As a 32-bit bitmap (which I suppose is what WP7 uses once the image is decoded), it takes 480 × 800 × 4 bytes, roughly 1.5 MB.
The same JPG file can be 10x smaller (at high-quality compression) or even less.
Now, here is what is done behind the scenes when you use the construction <Image Source="http://..."/> (in the absence of any information from you, this is what I suppose you use):
WP7 downloads the image and adds it to the cache. The cache apparently tracks the use of the Uri pointing to the image.
Next, the image gets decoded, i.e. converted to a bitmap at the image's native size. The image is downsampled in this process if it would exceed the maximum WP7 texture size.
You can customize the decoded bitmap size as described here. If you care about quality, you should use a downscale factor of 2, 4, or 8; for JPEGs these factors are by far the fastest option. (Well, I have no idea whether you know the image resolution before the image gets loaded into the Image control. It is not too difficult to get this info from a JPG file, but right now I have no idea how it can easily be done on WP7.)
The bitmap gets freed (my speculation) when the control's Source is set to null. The downloaded image is purged from the cache when the Uri is set to null. (This is reported on the web plenty of times.)
If you take all this info together, it should be possible to (kind of) control your use of the image cache. You can roughly estimate each image's size and decide which images remain in the cache. Maybe it will need some tricks, such as storing Uri objects in your own structures and releasing them as needed. I am not saying this is easy to do, but it is certainly possible.

Help with Cocoa: Objects as views?

In my app I want to have a light table for sorting photos. Basically it's just a huge view with lots of photos in it, and you can drag the photos around. Photos can overlap; they don't snap to a grid like in iPhoto.
So every photo needs to respond to mouse events. Do I make every photo its own view? Or are views too expensive to create? I want to easily support 100 photos or more.
Photos need to be in layers as well, so I can change the stacking order. Do I use Core Animation for this?
I don't need finished source code just some pointers and general ideas. I will (try to) figure out the implementation myself.
Fwiw, I target 10.5+, I use Obj-C 2.0 and garbage collection.
Thanks in advance!
You should definitely use CALayer objects. Using a set of NSImageView subviews will very quickly become unmanageable performance-wise, especially if you have more than 100 images on screen. If you don't want to use Core Animation for some reason, you'd be much better off creating a single custom view and handling all the image drawing and hit testing yourself. This will be more efficient than instantiating many NSImageView objects.
However, Core Animation layers will give orders of magnitude improvement in performance over this approach, as each layer is buffered in the GPU so you can drag the layers around with virtually zero cost, and you only need to draw each image once rather than every time anything in the view changes. Core Animation will also handle layer stacking for you.
Have a look at the excellent CocoaSlides sample code which demonstrates a very similar application to what you describe, including hit testing and simple animation.
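A rough sketch of what the layer-hosting setup could look like (the class and method names here are invented for illustration; CocoaSlides shows the full treatment):

    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>

    // Hypothetical light-table view: one CALayer per photo, all hosted
    // in the view's backing layer.
    @interface LightTableView : NSView
    - (void)addPhoto:(CGImageRef)image at:(CGPoint)position;
    @end

    @implementation LightTableView

    - (void)awakeFromNib {
        [self setWantsLayer:YES]; // layer-backed: sublayers composite on the GPU
    }

    - (void)addPhoto:(CGImageRef)image at:(CGPoint)position {
        CALayer *photoLayer = [CALayer layer];
        photoLayer.contents = (id)image; // rendered once, then just composited
        photoLayer.bounds = CGRectMake(0.0, 0.0,
                                       CGImageGetWidth(image),
                                       CGImageGetHeight(image));
        photoLayer.position = position;
        [[self layer] addSublayer:photoLayer];
    }

    - (void)mouseDown:(NSEvent *)event {
        NSPoint p = [self convertPoint:[event locationInWindow] fromView:nil];
        CALayer *hit = [[self layer] hitTest:NSPointToCGPoint(p)];
        if (hit != nil && hit != [self layer]) {
            // `hit` is the photo under the cursor: raise its zPosition to
            // restack it, and track mouseDragged: to move its position.
        }
    }

    @end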
The simplest method is to use NSImageViews. You can create a subclass that can be easily dragged, scaled, and rotated. A more complex but visually superior option would be to use Core Animation layers (CALayer).
As long as you maintain the photo representations as distinct objects (so you can manipulate them individually), they will use quite a chunk of memory no matter how you represent them. If you keep all the data available in the photos, each one could take several megabytes. You will probably want to reduce each image's display quality (i.e. size in pixels, fidelity, etc.) except when the particular photo is being worked on in detail.
Remember, you don't have to treat the photos like the physical objects they mimic. You simply have to create the illusion of physical objects in the interface. We're theater stage designers, not architects. As long as your data model remains rigorous to the task at hand, the interface can engage in all kinds of illusions for the benefit of the user.

Tips for reducing Core Animation memory usage

So here's the situation:
I have a CALayer that is the size of my screen, and I'm setting its contents property to a 2 MB JPEG that's roughly 3500 x 2000 pixels, with a resolution of 240 ppi.
I'd expect there to be a slight overhead involved in using the CALayer, but my sample application (which does only exactly what's above) shows usage of about 33 MB RSIZE, 22 MB RPVT and 30 MB RSHRD. I've noticed that these numbers are much better when running the application as a 64-bit process than as a 32-bit one.
I'm doing everything I can think of in the real application that this example comes from, including resampling my CGImageRefs to be only the size of the layer, but this seems extraneous to me. Shouldn't it be simpler?
Has anyone come across good methods to reduce the amount of memory CALayers and CGImageRefs use?
First, you're going to run into problems with an image that size in a plain CALayer, because you may hit the texture size limit of 2048 x 2048 (depending on your graphics card). Applications like this are what CATiledLayer is designed for. Bill Dudney has some code examples on his blog (a large PDF), as well as in the code that accompanies his book.
It isn't surprising to me that such a large image takes so much memory, given that it will be stored as an uncompressed bitmap in your CGImage. Aside from scaling your image to the resolution you need and tiling it with CATiledLayer, I can't think of much. Are you releasing the CGImageRef once you've assigned it to the contents of the CALayer? You won't need to hang onto it at that point.
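A hedged sketch of what the CATiledLayer route might look like (the view class and ivar are hypothetical, not from the question):

    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>

    // Hypothetical host view: the big image lives in a CATiledLayer, so
    // Core Animation keeps small, screen-visible tiles as textures instead
    // of one enormous (and possibly over-limit) texture.
    @interface TiledImageView : NSView {
        CGImageRef bigImage; // assumed to be set elsewhere
    }
    @end

    @implementation TiledImageView

    - (void)awakeFromNib {
        [self setWantsLayer:YES];
        CATiledLayer *tiled = [CATiledLayer layer];
        tiled.frame = NSRectToCGRect([self bounds]);
        tiled.tileSize = CGSizeMake(256.0, 256.0); // well under any texture limit
        tiled.levelsOfDetail = 4;                  // keep a few zoomed-out levels
        tiled.delegate = self;
        [[self layer] addSublayer:tiled];
        [tiled setNeedsDisplay];
    }

    // Called once per tile; the context is already clipped to that tile,
    // so only tiles that are actually visible get rendered and retained.
    - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
        CGRect imageRect = CGRectMake(0.0, 0.0,
                                      CGImageGetWidth(bigImage),
                                      CGImageGetHeight(bigImage));
        CGContextDrawImage(ctx, imageRect, bigImage);
    }

    @end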
