How to pre-cache NSImage for displaying inside NSImageView - macos

I have a rather large image which I display inside an NSImageView. Once I have displayed it the first time, every subsequent use of that image is very fast, whether I reuse the NSImageView it was assigned to or assign it to a new NSImageView. This is apparently because NSImage keeps a cache of the representation it used to display the image the first time. The problem is displaying it the first time, though: even on fast hardware there is an unacceptably long delay before the image eventually appears. What I am looking for is a way to pre-cache the image representation before I display it the first time. Any suggestions?
FWIW, I use Obj-C for OS X/macOS but I believe it shouldn't make any difference and Swift techniques should be applicable too.
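One approach that is often suggested (this is a sketch under my own assumptions, not a confirmed recipe): force the expensive decode ahead of time by drawing the NSImage once into an offscreen bitmap, ideally on a background queue, so the representation is already built by the time the image reaches its first NSImageView. In Swift it might look roughly like this:

    import AppKit

    // Hypothetical sketch: warm NSImage's cache by forcing a decode off-screen.
    // Drawing into an offscreen bitmap triggers the expensive rasterization
    // that otherwise happens on first display.
    func precache(_ image: NSImage, completion: @escaping () -> Void) {
        DispatchQueue.global(qos: .userInitiated).async {
            let rect = NSRect(origin: .zero, size: image.size)
            if let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                                          pixelsWide: Int(rect.width),
                                          pixelsHigh: Int(rect.height),
                                          bitsPerSample: 8,
                                          samplesPerPixel: 4,
                                          hasAlpha: true,
                                          isPlanar: false,
                                          colorSpaceName: .deviceRGB,
                                          bytesPerRow: 0,
                                          bitsPerPixel: 0),
               let context = NSGraphicsContext(bitmapImageRep: rep) {
                NSGraphicsContext.saveGraphicsState()
                NSGraphicsContext.current = context
                image.draw(in: rect)   // the expensive decode happens here
                NSGraphicsContext.restoreGraphicsState()
            }
            DispatchQueue.main.async { completion() }
        }
    }

Whether the cache built this way is exactly the one NSImageView later reuses depends on the representation and the target screen, so treat this as a starting point to measure, not a guarantee.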

Related

How are blinking carets often implemented?

I'm trying to implement a cross-platform UI library that uses as few system resources as possible. I'm considering either using my own software renderer or OpenGL.
For stationary controls everything's fine: I can repaint only when needed. However, when it comes to implementing animations, especially animated blinking carets like the 'phase' caret in Sublime Text, I don't see an easy way to balance resource usage and performance.
For a blinking caret, the caret has to be redrawn very frequently (15-20 times per second at least, I guess). On one hand, the software renderer supports partial redraw but is far too slow to be practical (3-4 fps for large redraw regions, say 1000x800, which makes animation impossible). On the other hand, OpenGL doesn't support partial redraw very well as far as I know, which means the whole screen needs to be rendered at 15-20 fps constantly.
So my question is:
How are carets usually implemented in various UI systems?
Is there any way to have OpenGL render to only a portion of the screen?
I know that glViewport enables rendering to part of the screen, but because of double buffering and other factors the rest of the screen is not preserved, so I would still need to render the whole screen again.
First you need to ask yourself:
Do I really need to partially redraw the screen?
OpenGL, or rather the GPU, can draw thousands of triangles with ease. So before you start fiddling with partial redrawing of the screen, benchmark and see whether it's worth looking into at all.
This doesn't however imply that you have to redraw the screen endlessly. You can still just redraw it when changes happen.
Thus if you have a cursor blinking every 500 ms, then you redraw once every 500 ms. If you have an animation running, then you continuously redraw while that animation is playing (or every time the animation does a change that requires redrawing).
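To make that concrete, here is a minimal Cocoa-flavored sketch (Swift, to match the rest of this page; the view class, caretRect, and the 0.5 s interval are all illustrative): a timer fires at the blink interval and invalidates only the caret's rectangle, so nothing is drawn while the UI is otherwise idle.

    import AppKit

    // Minimal sketch of an event-driven blinking caret: the only redraws are
    // the ones the blink timer asks for, and only the caret's rect is dirtied.
    final class TextPaneView: NSView {
        private var caretVisible = true
        private var blinkTimer: Timer?
        var caretRect = NSRect(x: 20, y: 20, width: 2, height: 18)

        override func viewDidMoveToWindow() {
            super.viewDidMoveToWindow()
            blinkTimer?.invalidate()
            blinkTimer = Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { [weak self] _ in
                guard let self else { return }
                self.caretVisible.toggle()
                self.setNeedsDisplay(self.caretRect)   // invalidate only the caret region
            }
        }

        override func draw(_ dirtyRect: NSRect) {
            // Text and background drawing omitted; draw only what intersects dirtyRect.
            if caretVisible && dirtyRect.intersects(caretRect) {
                NSColor.textColor.setFill()
                caretRect.fill()
            }
        }
    }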
Redrawing only when something changes is also what Chrome, Firefox, etc. do. You can see this if you open the Developer Tools (F12) and go to the Timeline tab.
The first row of the Timeline shows how often Chrome redraws the window.
The first section shows a lot of continuous redrawing, which was because I was scrolling around on the page.
The last section shows a single redraw roughly every 500 ms, which was the cursor blinking in a textbox.
Note that this doesn't tell you whether Chrome is fully redrawing the window or only parts of it; it just shows the frequency of the redrawing. (If you want to see the redrawn regions, both Firefox and Chrome have a "Show Paint Rectangles" option.)
To circumvent the problem with double buffering and partial redrawing, you could instead draw to a framebuffer object. Then you can use glScissor() as much as you want. If you have various things that are static and only a few dynamic things, you could even have multiple framebuffer objects, draw the static contents once, and continuously update only the framebuffer containing the dynamic content.
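A rough sketch of that idea (Swift is used here only to keep one language across the examples on this page; the GL calls themselves are standard, while the surrounding structure, names, and the compositing step are assumptions):

    import OpenGL.GL3

    // Sketch: keep the UI in a texture attached to our own framebuffer object
    // and redraw just the caret's rectangle into it, instead of re-rendering
    // the whole window. Creating the FBO/texture and compositing the texture
    // to the screen each frame are omitted.
    func redrawCaret(in fbo: GLuint, x: GLint, y: GLint, w: GLsizei, h: GLsizei) {
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)   // our own target, not the double-buffered default
        glEnable(GLenum(GL_SCISSOR_TEST))
        glScissor(x, y, w, h)                            // restrict clearing/drawing to the caret region
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        // ... draw the caret quad here ...
        glDisable(GLenum(GL_SCISSOR_TEST))
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), 0)
        // Whenever anything changed, composite the FBO's color texture to the
        // default framebuffer and swap buffers.
    }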
However (and I can't emphasize this enough), benchmark and check whether this is even needed. Having two framebuffer objects could be more expensive than just always redrawing everything. The same goes for, say, having a buffer for each rectangle, in contrast to packing all rectangles into a single buffer.
Lastly, to give an example, let's take NanoGUI (a minimalistic GUI library for OpenGL). NanoGUI continuously redraws the screen.
The problem with not just continuously redrawing the screen is that you now need a system for issuing redraws. Calling setText() on a label now has to call back and tell the window to redraw. And what if the parent panel the label is added to isn't visible? Then setText() just issued a redundant redraw of the screen.
The point I'm trying to make is that a system for issuing redraws of the screen can be more error-prone. So unless continuous redrawing turns out to be an issue, it is definitely the simpler starting point.

IKImageBrowserView on retina screen

Has anyone successfully used an IKImageBrowserView with a Retina Mac? What I get is that the image size is wildly misinterpreted. Previously I was using CGImage images, which don't have a logical size, so it makes sense that the browser can't draw them at the right size. However, I've switched to NSImage, created using -initWithCGImage:size:, and that still doesn't work right.
My images are 244x184 pixels and should be drawn at a logical size of 122x92. When passing 122x92 as the size, they are drawn way too large, at about 180 pixels wide. If I pass exactly half this, 61x46, the size is correct, but the image looks downscaled and not sharp. If I pass 122x92 and run with NSHighResolutionCapable set to NO in Info.plist, everything works well.
My conclusion is that IKImageBrowserView is not Retina compatible even with the 10.10 SDK on a Retina MacBook Pro running OS X 10.11. Or am I missing something? Any pointers would be appreciated!
I discovered that I wasn't really thinking about this the right way: the browser is supposed to always scale its images, so that's why the Retina-sized images ended up larger. I just subclassed the browser to be able to use a custom cell and customize the image frame per cell. There are, however, some subtle bugs in the browser that cause it to scale the images just a little bit in Retina mode, but I was able to work around that by creating a custom foreground layer for each cell that contains the image without scaling. Problem solved. Hopefully this will help someone else in the future.

What is the relationship between NSImageView and NSImageCell?

It seems that if I create an NSImageView in code, there is no way to have the image automatically scale up proportionally if the NSImageView becomes bigger than the image itself (an odd omission).
On the other hand, if I create the NSImageView in IB, it seems to somehow attach an NSImageCell to the NSImageView and the NSImageCell has an option to scale proportionally up and down, which is what I want.
But in IB, I can't seem to understand the relationship between the NSImageView and the NSImageCell. I can't delete the NSImageCell from the NSImageView and I don't see the connection in bindings or anywhere else.
How do I get the functionality of the NSImageCell when creating the NSImageView in code?
Sorry if this is obvious, but I'm used to UIImageViews, and they're definitely different from NSImageView.
Thank you.
You should be able to scale up using [NSImageView setImageScaling:NSImageScaleProportionallyUpOrDown]. Have you had trouble with that? You can also access the cell for any NSControl using -cell.
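For example, creating the view in code (Swift shown; the image name is a placeholder):

    import AppKit

    // An NSImageView created in code that scales its image up as well as down.
    let imageView = NSImageView(frame: NSRect(x: 0, y: 0, width: 400, height: 300))
    imageView.image = NSImage(named: "photo")             // placeholder asset name
    imageView.imageScaling = .scaleProportionallyUpOrDown
    // The cell is still there if you ever need it directly:
    let imageCell = imageView.cell as? NSImageCell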
As for the separation of cells from controls (views), this is a hold-over from the days of much less powerful computers (some NeXT computers only had 8MB of memory). Cells provide a lighter-weight object that does not require some of the overhead of a full view. This is important in cases where a lot of elements might exist, such as in a matrix or a table. Cells are designed to be easier to copy and reuse, much like UITableViewCell. Like UITableViewCell, the same NSCell may be used repeatedly to draw different content. Cells also share some singletons. For example, there is typically only one field editor (NSTextView) shared by most cells. It just gets moved around as needed as the user selects different text fields to edit.
In a world where the first iPhone had 10x the memory of a NeXT and desktops commonly have 1000x the memory, some of the choices in NSCell don't make as much sense. Since 10.7, OS X has moved some things away from NSCell. NSTableView now supports NSView cells as well as NSCell cells. When iPhoneOS was released, UITableView got started on views from the beginning. (Of course an iPhone table view is minuscule compared to an OS X table view, so it was an easier choice for more reasons than just available memory.)
The reason you're confused is that the documentation for -[NSImageView setImageScaling:] is wrong where it lists the possible scaling choices. If you look up NSImageScaling, you will find another choice, NSImageScaleProportionallyUpOrDown.

XNA Texture loading speed (for extra large Texture sizes)

[Skip to the bottom for the question only]
While developing my XNA game I ran into another horrible XNA limitation: Texture2D-s (at least on my PC) can't have dimensions larger than 2048*2048. No problem; I quickly wrote my own custom texture class, which uses a [System.Drawing.] Bitmap by default, eventually splits the texture into smaller Texture2D-s, and displays them as appropriate.
When I made this change I also had to update the method that loads the textures. In the old version I loaded the Texture2D-s with Texture2D.FromStream(), which worked pretty well, but XNA can't even seem to store/load textures above the limit, so if I tried to load/store a, say, 4092*2048 PNG file I ended up with a 2048*2048 Texture2D in my app. Therefore I switched to loading the images using [System.Drawing.] Image.FromFile and then casting to a Bitmap, as that doesn't seem to have any such limitation. (This Bitmap is later converted to a list of Texture2D-s.)
The problem is that loading the textures this way is noticeably slower, because now even images that are under the 2048*2048 limit are loaded as a Bitmap and then converted to a Texture2D. So I am actually looking for a way to check an image file's dimensions (width and height) before even loading it into my application. If it is under the texture limit, I can load it straight into a Texture2D without loading it into a Bitmap and then converting it into a single-element Texture2D list.
Is there any (clean and preferably very quick) way to get the dimensions of an image file without loading the whole file into the application? And if there is, is it even worth using? I would guess that the slowest step here is opening/seeking the file (probably hardware-bound, when it comes to HDDs) rather than streaming its contents into the application.
Do you need to support arbitrarily large textures? If not, switching to the HiDef profile will get you support for textures as large as 4096x4096.
If you do need to stick with your current technique, you might want to check out this answer regarding how to read image sizes without loading the entire file.
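The linked answer isn't reproduced here, but for the PNG case the idea is to read only the file's header: the IHDR chunk stores width and height as big-endian 32-bit integers at byte offsets 16 and 20. A small sketch (in Swift, only to stay consistent with the other examples on this page; the same header parsing applies in a C#/XNA project):

    import Foundation

    // Sketch: read only the first 24 bytes of a PNG and extract its dimensions.
    // Other formats (BMP, JPEG, ...) have their own headers and need their own parsing.
    func pngDimensions(of url: URL) throws -> (width: Int, height: Int)? {
        let handle = try FileHandle(forReadingFrom: url)
        defer { try? handle.close() }
        guard let header = try handle.read(upToCount: 24), header.count == 24,
              header.starts(with: [0x89, 0x50, 0x4E, 0x47]) else {   // PNG signature
            return nil
        }
        // Big-endian 32-bit integer starting at the given byte offset.
        func be32(_ offset: Int) -> Int {
            header[offset..<offset + 4].reduce(0) { ($0 << 8) | Int($1) }
        }
        return (width: be32(16), height: be32(20))
    }

Since this touches only a couple of dozen bytes, the cost is dominated by opening the file, which is exactly the part the question suspects is the bottleneck, so measuring both paths is still the only way to know whether the special case pays off.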

How to implement very large scrolled view in Cocoa

Is it wise to create views in Cocoa that have dimensions around 15000 pixels? (Of course only a small part of such a view will be visible at a time in an NSScrollView.)
Interface Builder has a limit of 10000 pixels on a view's size. Is this an artificial limitation, or is there a good reason behind it?
Should I just create the huge view and let NSScrollView/Quartz worry about rendering it efficiently (my view is drawn programmatically within the area requested in drawRect), or do I risk excessive memory usage and other problems? (E.g., could OS X try to cache the whole view's bitmap in video memory at any time?)
Views don't have backing stores, unless they are layer-backed. The window is what has the backing store, so the amount of memory used to display the view is limited to the size of the window.
So, the answer is yes. Go ahead and make your views as big as you want.
(Of course, you'll want to limit the drawing you do in the view to the rect passed in drawRect: or you'll be wasting a lot of time doing invisible drawing.)
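A minimal sketch of such a view (the grid content is purely illustrative):

    import AppKit

    // A very large, programmatically drawn document view that renders only the
    // rectangle the scroll view asks for in draw(_:).
    final class HugeCanvasView: NSView {
        override var isFlipped: Bool { true }

        override func draw(_ dirtyRect: NSRect) {
            NSColor.white.setFill()
            dirtyRect.fill()

            NSColor.lightGray.setStroke()
            let spacing: CGFloat = 100
            // Stroke only the grid lines that actually cross the dirty rect.
            var x = floor(dirtyRect.minX / spacing) * spacing
            while x <= dirtyRect.maxX {
                NSBezierPath.strokeLine(from: NSPoint(x: x, y: dirtyRect.minY),
                                        to: NSPoint(x: x, y: dirtyRect.maxY))
                x += spacing
            }
            var y = floor(dirtyRect.minY / spacing) * spacing
            while y <= dirtyRect.maxY {
                NSBezierPath.strokeLine(from: NSPoint(x: dirtyRect.minX, y: y),
                                        to: NSPoint(x: dirtyRect.maxX, y: y))
                y += spacing
            }
        }
    }

    // Usage: set a 15000 x 15000 instance as the scroll view's documentView.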
Well, if Cocoa does try to cache the entire view in memory, that would be a problem:
10000 pixels * 10000 pixels = 100,000,000 pixels
100,000,000 pixels * 4 bytes per pixel = 400,000,000 bytes
That's 400 MB in raw RGBA pixels for one view. If we want to be really pessimistic, assume that NSView is double-buffering that for you, in which case your memory usage doubles to 800 MB.
In the worst case, your user is running your app on an old Mac mini with 1 GB of RAM—of which you have just used 80%. The system will certainly start paging way before this point, making their system unbearably slow.
On the other hand, it is the easiest way to implement it that I can think of, so I say try it and see what Activity Monitor says about your memory usage. If it's too high, try changing various options of the scroll view and clip view; if that doesn't work, I can think of nothing else but to make your own scrollers and fake it.
