Tips for reducing Core Animation memory usage - cocoa

So here's the situation:
I have a CALayer that is the size of my screen, and I'm setting its contents property to a 2 MB JPEG that's roughly 3500 x 2000 pixels at 240 ppi.
I'd expect there to be a slight overhead involved in using the CALayer, but my sample application (which does exactly what's described above and nothing more) shows usage of about 33 MB RSIZE, 22 MB RPVT and 30 MB RSHRD. I've noticed that these numbers are much better when running the application as a 64-bit process than as a 32-bit one.
I'm doing everything I can think of in the real application this example comes from, including resampling my CGImageRefs to be only the size of the layer, but this seems like unnecessary work to me - shouldn't it be simpler?
Has anyone come across good methods to reduce the amount of memory CALayers and CGImageRefs use?

First, you're going to run into problems with an image that size in a plain CALayer, because you may hit the texture size limit of 2048 x 2048 (depending on your graphics card). Applications like this are what CATiledLayer is designed for. Bill Dudney has some code examples on his blog (a large PDF), as well as in the code that accompanies his book.
It isn't surprising to me that such a large image would take so much memory, given that it will be stored as an uncompressed bitmap in your CGImage. Aside from scaling your image down to the resolution you actually need, and tiling it with CATiledLayer, I can't think of much. Are you releasing the CGImageRef once you've assigned it to the contents of the CALayer? You won't need to hang onto it at that point.
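As a concrete sketch of the "scale it down first" advice, here is a minimal Swift example (hypothetical, not from the original answer, which predates Swift) that uses ImageIO to decode the JPEG at a reduced pixel size before assigning it to the layer; fileURL, jpegURL and the 2048-pixel cap are placeholder values:

    import QuartzCore
    import ImageIO

    // Decode a large JPEG at a reduced pixel size so the layer never holds
    // the full 3500 x 2000 bitmap. Sketch only; adjust to your setup.
    func downsampledImage(at fileURL: URL, maxPixelSize: Int) -> CGImage? {
        guard let source = CGImageSourceCreateWithURL(fileURL as CFURL, nil) else {
            return nil
        }
        let options: [CFString: Any] = [
            kCGImageSourceCreateThumbnailFromImageAlways: true, // always resample
            kCGImageSourceCreateThumbnailWithTransform: true,   // honor EXIF orientation
            kCGImageSourceThumbnailMaxPixelSize: maxPixelSize   // cap the longest side
        ]
        return CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary)
    }

    // Usage sketch: size the bitmap to the layer, then drop your own reference.
    // layer.contents = downsampledImage(at: jpegURL, maxPixelSize: 2048)

Under automatic memory management you don't release the CGImage explicitly, but the point of the original advice still stands: don't keep your own reference to the full-size image once the layer has its (downsampled) contents.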

Related

Do we really need separate thumbnail images?

I understand the use of thumbnails in networked applications, but assuming all the images are in the application itself (a photo application), do we still need thumbnail images for performance reasons, or is it fine for the device to resize the actual images at run time?
Since the question is too opinion-based as stated, I am going to ask it more quantitatively:
The images are 500x500 JPEGs of about 200-300 KB each.
There will be about 200 images.
It is targeted at the iPhone 4 and higher, so that is the minimum hardware users will have.
The maximum memory used should not exceed 20% of the device's capacity.
Will the application in this case need separate thumbnail images?
It depends on your application. Just test performance and memory usage on device.
If you show a lot of images and/or they change very quickly (like when you are scrolling UITableView with a lot of images) you will probably have to use thumbnails.
UPDATE:
When an image is shown it takes width * height * 3 bytes of memory (width * height * 4 for images with an alpha channel). Ten 2592 x 1936 photos stored in memory will require about 200 MB of RAM. That is too much. You definitely have to use thumbnails.
Your question is a bit lacking in detail, but I assume you're asking whether, for say a photo album app, you can just throw around full-size UIImages and let a UIImageView resize them to fit the screen, or whether you need to resize them yourself.
You absolutely need to resize.
An image taken by an iPhone camera will be several megabytes in compressed file size, and more in the actual bytes used to represent its pixels. The dimensions of the image will be far greater than the screen dimensions of the device. The memory use is very high, particularly if you're thinking of showing multiple "thumbnails". It's not so much a CPU issue (once the image has been rendered it doesn't need re-rendering) as a memory one, and you're severely memory-constrained on a mobile device.
Each doubling in size of an image (e.g. from a 100x100 to a 200x200) represents a four-fold increase in the memory needed to hold it.
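To make the "you need to resize" advice concrete, here is a small Swift sketch (hypothetical, using a modern-SDK API rather than anything from the original answers) that pre-renders a thumbnail so scrolling views never hold full-size decoded bitmaps; the thumbnail size is just an illustrative value:

    import UIKit

    // A 500x500 source decodes to roughly 500 * 500 * 4 bytes (about 1 MB);
    // a 100x100 thumbnail is around 40 KB, so 200 of them stay cheap.
    func thumbnail(for image: UIImage, fitting size: CGSize) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { _ in
            image.draw(in: CGRect(origin: .zero, size: size))
        }
    }

Generating these once (for example at import time) and reusing them is usually cheaper than asking each image view to scale the full-resolution image every time it appears.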

How to create textures from large images in opengl (bigger than the MAX_TEXTURE_SIZE)

I've found that the maximum texture size my OpenGL implementation supports is 8192, but the image I'm working with is 16997x15931. As you can see in the link below, I've completed the COpenGLControl class and customized it for my own use to work with a smaller 7697x7309 image, and enabled different navigation tasks for it:
Render an outlined red rectangle on top a 2D texture in OpenGL
but now, in the last stages of the work, I've decided to change the part that applies the texture and enable it to handle images bigger than 8192.
Questions:
Is this possible with my OpenGL setup?
What concepts should I study: mipmaps, multiple texturing?
Will it improve the performance of the code?
Right now my program uses 271 MB of RAM just to show the smaller image (7697x7309), and I'm going to add an image-processing (filtering) task to it. I have put a lot of effort into optimizing that code, but it still uses 376 MB of RAM for the 7697x7309 image (it is currently written as a console application and will be combined with this project). So I think the final project could use up to 700 MB of RAM for images around 7000x7000. Obviously, for the bigger image (16997x15931) the RAM usage will be a lot higher!
So I'm looking for an approach that handles images bigger than MAX_TEXTURE_SIZE and also optimizes the performance of the program.
More Questions:
What concepts should I study in OpenGL to achieve the above goal?
Could you explain a little about the concept you suggest?
I've asked the question on Game Development too, but decided to repeat it here where it may get more viewers. As soon as I get an answer, I will delete the question from one of the two sites, so don't worry about the cross-posting.
I will try to sum up my comments for the original question.
Know your actual OpenGL version: maybe you can load some modern extensions and work with a more recent version of OpenGL.
If possible, take a look at sparse textures (megatextures): ARB_sparse_texture or AMD_sparse_texture.
To reduce memory you can use texture compression:
How to: load DDS files in OpenGL.
Another simple idea: split the huge texture into several smaller textures (from 16k x 16k into four 8k x 8k, for instance) and render one quad per tile; see the sketch after this list.
Maybe you can use OpenCL or CUDA to do the processing work?
Regarding mipmaps: a mipmap chain is a set of progressively smaller versions of your input texture. Mipmaps improve performance and the final quality of filtering, but you need about 33% more memory for a texture with a full mipmap chain. In your case they could be very helpful; for instance, when you look at a wall from a huge distance you do not need the full (large) texture, only a small version of it. g-truc on mipmaps
In general there are a lot of options, but which is simplest and fastest to implement depends on your experience.
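For the "split the huge texture" idea, here is a small sketch of the tile arithmetic (written in Swift to match the other sketches in this write-up; the original project is C++/MFC, so treat it as pseudocode). It computes how many tiles of at most GL_MAX_TEXTURE_SIZE are needed and which pixel rectangle each one covers; uploading each tile with glTexImage2D and drawing one textured quad per tile is left out:

    struct Tile {
        let column: Int, row: Int
        let x: Int, y: Int          // origin inside the source image, in pixels
        let width: Int, height: Int // tile size, in pixels
    }

    func tileGrid(imageWidth: Int, imageHeight: Int, maxTextureSize: Int) -> [Tile] {
        let columns = (imageWidth  + maxTextureSize - 1) / maxTextureSize
        let rows    = (imageHeight + maxTextureSize - 1) / maxTextureSize
        var tiles: [Tile] = []
        for row in 0..<rows {
            for column in 0..<columns {
                let x = column * maxTextureSize
                let y = row * maxTextureSize
                tiles.append(Tile(column: column, row: row, x: x, y: y,
                                  width:  min(maxTextureSize, imageWidth  - x),
                                  height: min(maxTextureSize, imageHeight - y)))
            }
        }
        return tiles
    }

    // The 16997 x 15931 image with an 8192 limit gives a 3 x 2 grid of 6 tiles.
    // tileGrid(imageWidth: 16997, imageHeight: 15931, maxTextureSize: 8192)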

Windows Phone 7 memory management

I'd like to know if there are any specific strategies for handling memory, especially with respect to image caching, on Windows Phone. I have a very graphics-intensive Silverlight app which needs to keep the graphics it retrieves from the internet, and the user needs to be able to roam about freely, but the memory requirement becomes quite large after using the app for a couple of minutes.
I have tried setting the images' UriSource to null, but I need to keep the image backgrounds when I come back to the page. I'm at a loss because there isn't much information on the internet. The built-in profiling showed me "Texture Memory Dominant" and asked me to Analyze Heap Memory to resolve the issue, but I'm still clueless about these.
Any pointers to move forward?
My answer will be general, similar to your question. I presume you know for sure that the problem is in the images. (A simple ListBox with a few hundred text items can also cost you many MB.)
If you search the web you'll find plenty of links such as this one. But a general analysis is easy to do.
Take an image of the WP7 screen size, i.e. 480x800. A 32-bit bitmap (I suppose this is what WP7 uses once the image is opened) takes roughly 1.5 MB (480 x 800 x 4 bytes).
The same JPEG file can be 10x smaller (with high-quality compression) or even less.
Now, here is what's done behind the scenes when you use the construction
<Image Source="http://..."/>
(in the absence of any information from you, this is what I suppose you use).
WP7 downloads the image and adds it to the cache. The cache apparently tracks the use of the Uri pointing to the image.
Next the image gets opened, i.e. converted to a bitmap at the image's native size. The image gets downsampled in this process if it would exceed the maximum WP7 texture size.
You can customize the decoded bitmap size as described here. If you care about quality, you should use a downscale factor of 2, 4, or 8; in the case of JPEG these factors are by far the fastest option. (Well, I have no idea whether you know the image resolution before the image gets loaded into the Image control. It is not too difficult to get this info from a JPEG file, but right now I have no idea how it can easily be done on WP7.)
The bitmap gets freed (my speculation) when the control's Source is set to null. The downloaded image is purged from the cache when the Uri is set to null. (This has been reported on the web plenty of times.)
If you take all this into account, it should be possible to (kind of) control your use of the image cache. You can roughly estimate the image sizes and decide which images remain in the cache. Maybe it will need some tricks, such as storing Uri objects in your own structures and releasing them as needed. I am not saying this is easy to do, but it is certainly possible.

avoid massive memory usage in openlayers with image overlay

I am building a map system that requires a large image (natively 13K pixels wide by 20K pixels tall) to be overlaid onto an area of the US covering about 20 kilometers or so. I have the file size of the image in JPEG format down to 23 MB and it loads onto the map fairly quickly. I can zoom in and out and it looks great. It's even located exactly where I need it to be (geographically). However, that 23 MB file is causing Firefox to consume an additional 1 GB of memory! I am using the Memory Restart extension for Firefox, and without the image overlay the memory usage is about 360-400 MB, which seems to be about the norm for regular usage, browsing other websites etc. But when I add the image layer, the memory usage jumps to 1.4 GB. I'm at a complete loss to explain WHY that is and how to fix it. Any ideas would be greatly appreciated.
Andrew
The file only takes up 23 MB as a JPEG. However, the JPEG format is compressed, and any program (such as Firefox) that wants to actually render the image has to uncompress it and store every pixel in memory. You have 13k by 20k pixels, which makes 260M pixels. Figure at least 3 bytes of color info per pixel; that's 780 MB. It might be using 4 bytes, to have each pixel aligned at a word boundary, which would be 1040 MB.
As for how to fix it, well, I don't know if you can, except by reducing the image size. If the image contains only a small number of colors (for instance, a simple diagram drawn in a few primary colors), you might be able to save it in some format that uses indexed colors, and then Firefox might be able to render it using less memory per pixel. It all depends on the rendering code.
Depending on what you're doing, perhaps you could set things up so that the whole image is at lower resolution, then when the user zooms in they get a higher-resolution image that covers less area.
Edit: to clarify that last bit: right now you have the entire photograph at full resolution, which is simple but needs a lot of memory. An alternative would be to have the entire photograph at reduced resolution (maximum expected screen resolution), which would take less memory; then when the user zooms in, you have the image at full resolution, but not the entire image - just the part that's been zoomed in (which likewise needs less memory).
I can think of two approaches: break up the big image into "tiles" and load the ones you need (not sure how well that would work), or use something like ImageMagick to construct the smaller image on-the-fly. You'd probably want to use caching if you do it that way, and you might need to code up a little "please wait" message to show while it's being constructed, since it could take several seconds to process such a large image.

OpenGL performance on rendering "virtual gallery" (textures)

I have a considerable number (120-240) of 640x480 images that will be displayed as textured flat surfaces (4-vertex polygons) in a 3D environment. About 30-50% of them will be visible in a given frame. It is possible for them to overlap. Nothing else will be present in the environment.
The question is: will a modern or a few-years-old GPU (let's say a Radeon 9550) cope with that, and what frame rate can I expect? I'm aiming for 20 FPS, but 30-40 would be nice. Would changing the resolution to 320x240 make it more likely?
I do not have any previous experience with performance issues of 3D graphics on modern GPUs, and unfortunately I must make a design choice. I don't want to waste time on doing something that couldn't have worked :-)
Assuming you have RGB textures, that would be 640 * 480 * 3 * 120 bytes = 105 MB minimum of texture data, which should fit in the VRAM of more recent graphics cards without swapping, so this won't be an issue. However, texture lookups might get a bit problematic, but this is hard for me to judge without trying. Given that you only need to process 50% of the 105 MB, that is about 50 MB (very rough estimate), and targeting 20 FPS means 20 * 50 MB/sec = about 1 GB/sec. This should be possible to throughput even on older hardware.
Reading the specs of an older Radeon 9600 XT, it lists a peak fill rate of 2000 Mpixels/sec, and if I'm not mistaken you require far less than 100 Mpixels/sec. Peak memory bandwidth is specified as 9.6 GB/s, while you'd need about 1 GB/s (as explained above).
I would argue that this should be possible if done correctly; in particular, current hardware should have no problem at all.
Anyway, you should simply try it out: loading some 120 random textures and displaying them on 120 quads can be done in very few lines of code with hardly any effort.
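For reference, here is the arithmetic behind that estimate written out as a tiny Swift calculation (the numbers are the answer's assumptions, not measurements):

    let textureCount  = 120
    let bytesPerTexel = 3                                   // RGB, no alpha
    let textureBytes  = 640 * 480 * bytesPerTexel
    let totalMB       = Double(textureCount * textureBytes) / (1024.0 * 1024.0)
    let visibleMB     = totalMB * 0.5                       // ~50% visible per frame
    let bandwidthMBps = visibleMB * 20                      // at the 20 FPS target
    // totalMB ≈ 105, visibleMB ≈ 53, bandwidthMBps ≈ 1055 (about 1 GB/s)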
First of all, you should realize that texture dimensions should normally be powers of two, so if you can change them, something like 512x256 (for example) would be a better starting point.
From that, you can create MIPmaps of the original, which are simply versions of the original scaled down by powers of two, so if you started with 512x256, you'd then create versions at 256x128, 128x64, 64x32, 32x16, 16x8, 8x4, 4x2, 2x1 and 1x1. When you've done this, OpenGL can/will select the "right" one for the size it'll show up at in the final display. This generally reduces the work (and improves quality) in scaling the texture to the desired size.
The obvious sticking point with that would be running out of texture memory. If memory serves, in the 9550 timeframe you could probably expect 256 MB of on-board memory, which would be about sufficient, but chances are pretty good that some of the textures would be in system RAM. That overflow would probably be fairly small though, so it probably won't be terribly difficult to maintain the kind of framerate you're hoping for. If you were to add a lot more textures, however, it would eventually become a problem. In that case, reducing the original size by 2 in each dimension (for example) would reduce your memory requirement by a factor of 4, which would make fitting them into memory a lot easier.
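The two memory effects mentioned here (a full mipmap chain adding roughly a third, and halving each dimension cutting memory by four) can be summed up in a short Swift helper; 4 bytes per texel is assumed, and the real footprint depends on the internal format the driver chooses:

    func textureMemoryMB(width: Int, height: Int, count: Int, mipmapped: Bool) -> Double {
        let baseBytes = Double(width * height * 4 * count)
        let total = mipmapped ? baseBytes * 4.0 / 3.0 : baseBytes // chain adds ~33%
        return total / (1024.0 * 1024.0)
    }

    // textureMemoryMB(width: 512, height: 256, count: 240, mipmapped: true) ≈ 160 MB
    // textureMemoryMB(width: 256, height: 128, count: 240, mipmapped: true) ≈  40 MB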
