Maximum bitmap image size for NSImage or CGImage? - cocoa

I generate bitmap images with Cocoa using NSImage or CGImage (UIImage would probably work similarly).
How do I determine the maximum image size I can generate?
I guess it is related in some way to the available memory?

Apart from the technical curiosity, it is always a bad idea to allocate very big images.
Apple (especially for iPhone/iOS) has worked hard on tiling and "tiled" classes (e.g. CATiledLayer) to display very large images.
I have seen this demonstrated at WWDC in past years (see the PhotoScroller sample code, https://developer.apple.com/library/content/samplecode/PhotoScroller/Introduction/Intro.html).
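For reference, the basic CATiledLayer setup looks roughly like this. It is only a sketch, not Apple's sample code; TiledImageView and the tileImageForRect: helper are illustrative names. The point is that the view is backed by a tiled layer and only ever draws one small tile at a time, so the full-resolution bitmap never has to be decoded in one piece.

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    @interface TiledImageView : UIView
    @end

    @implementation TiledImageView

    + (Class)layerClass {
        return [CATiledLayer class];                 // back the view with a tiled layer
    }

    - (instancetype)initWithFrame:(CGRect)frame {
        if ((self = [super initWithFrame:frame])) {
            CATiledLayer *tiled = (CATiledLayer *)self.layer;
            tiled.tileSize = CGSizeMake(256.0, 256.0);   // each tile is drawn and cached separately
            tiled.levelsOfDetail = 4;                    // coarser representations for zoomed-out views
        }
        return self;
    }

    - (void)drawRect:(CGRect)rect {
        // CATiledLayer calls this once per visible tile (possibly on background threads);
        // rect covers only that tile, so only a small bitmap is needed at any moment.
        UIImage *tile = [self tileImageForRect:rect];
        [tile drawInRect:rect];
    }

    - (UIImage *)tileImageForRect:(CGRect)rect {
        // Hypothetical loader: in Apple's PhotoScroller sample the tiles are pre-cut
        // PNGs on disk, named by zoom level, row and column.
        return nil;
    }

    @end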

Related

Do we really need separate thumbnail images?

I understand the use of thumbnails in network applications, but assuming all the images are bundled in the application itself (a photo application), do we still need thumbnail images for performance reasons, or is it fine for the device to resize the actual images at run time?
Since the question as stated is too opinion-based, I am going to ask it more quantitatively.
The images are 500x500 JPEGs, about 200-300 KB each.
There will be about 200 images.
It is targeted at iPhone 4 and higher, so that is the minimum hardware users will have.
The maximum memory used should not exceed 20% of the device's capacity.
Will the application in this case need separate thumbnail images?
It depends on your application. Just test performance and memory usage on device.
If you show a lot of images and/or they change very quickly (for example when scrolling a UITableView full of images), you will probably have to use thumbnails.
UPDATE:
When an image is displayed, it takes width * height * 3 bytes of memory (width * height * 4 for images with an alpha channel). Ten 2592 x 1936 photos stored in memory will require roughly 200 MB of RAM. That is too much; you definitely have to use thumbnails.
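A quick sanity check of that arithmetic, as a sketch (assuming 4 bytes per decoded pixel, as for images with an alpha channel, and the 2592 x 1936 photos mentioned above):

    #import <Foundation/Foundation.h>

    static void LogDecodedFootprint(void) {
        size_t width = 2592, height = 1936, bytesPerPixel = 4;
        double perPhotoMB = (double)(width * height * bytesPerPixel) / (1024.0 * 1024.0);
        // Prints roughly 19 MB for one photo and just under 200 MB for ten of them.
        NSLog(@"one photo: %.1f MB decoded, ten photos: %.1f MB", perPhotoMB, perPhotoMB * 10.0);
    }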
Your question is a bit lacking in detail, but I assume you're asking whether, for say a photo album app, you can just throw around full-size UIImages and let a UIImageView resize them to fit the screen, or whether you need to resize them yourself.
You absolutely need to resize.
An image taken by an iPhone camera will be several megabytes in compressed file size, more in actual bytes used to represent pixels. The dimensions of the image will be far greater than the screen dimensions of the device. The memory use is very high, particularly if you're thinking of showing multiple "thumbnails". It's not so much a CPU issue (once the image has been rendered it doesn't need re-rendering) but a memory one, and you're severely memory constrained on a mobile device.
Each doubling in size of an image (e.g. from a 100x100 to a 200x200) represents a four-fold increase in the memory needed to hold it.
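A minimal sketch of that resizing step (the function name and thumbnailSize are mine, not from the question): draw the full-size photo once into a thumbnail-sized context and keep only the result, so the decoded bitmap you hold matches the on-screen size rather than the camera's native resolution.

    #import <UIKit/UIKit.h>

    static UIImage *MakeThumbnail(UIImage *fullImage, CGSize thumbnailSize) {
        UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0.0);  // 0.0 = main screen scale
        [fullImage drawInRect:(CGRect){CGPointZero, thumbnailSize}];     // decodes and scales once
        UIImage *thumb = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return thumb;
    }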

.png images make my app size very large

I have a lot of .png images of 3-5 MB each, and together they total around 50-60 MB.
Does someone have a way to compress .png images to make my app smaller? I was using "PNGenius" but it doesn't make much of a difference.
Convert them to / export them as .PVR.CCZ images. TexturePacker is a tool that can help you with that. You'll find more pointers in this article.
I suppose that you have PNG images with very big pixel dimensions which are then resized (in dimensions) inside your app. That is why they take up so much space on disk.
If you resize the dimensions (width and height) first, and only then add the images to your app, the problem is solved.
It looks like this:
Situation 1.
Big image file (very big dimensions) <---> big size on disk (e.g. 5 MB)
---> You use it in your app and resize it in your app. <---> Same big size on disk.
Situation 2.
Big image file (very big dimensions) <---> big size on disk
---> You resize it on disk first and don't resize it in your app.
---> Smaller size on disk.
You can resize images very quickly and easily using IrfanView. It's a great app that has nice shortcuts.
Hope this helps you.

Draw part of CGImage

I have an application that draws images from a CGImage.
The CGImage itself is loaded using CGImageSourceCreateImageAtIndex() to create an image from a PNG file.
This forms part of a sprite engine - there are multiple sprite images on a single PNG file, so each sprite has a CGRect defining where it is found on the CGImage.
The problem is, CGContextDrawImage() only takes a destination rect - and stretches the source CGImage to fill it.
So, to draw each sprite image we need to create multiple CGImages from the original source, using CGImageCreateWithImageInRect().
I thought at first that this would be a 'cheap' operation - it doesn't seem necessary for each CGImage to contain its own copy of the image's bits - however, profiling has revealed that CGImageCreateWithImageInRect() is a rather expensive operation.
Is there a more optimal way to draw a sub-section of a CGImage into a CGContext, so I don't need to call CGImageCreateWithImageInRect() so often?
Given the lack of a source rectangle, and the ease of making a CGImage from a rect on another CGImage, I began to suspect that CGImage implements copy-on-write semantics, where a CGImage made from a CGImage would refer to a sub-rect of the same physical bits as its parent.
Profiling seems to prove this wrong :/
I was in the same boat as you. CGImageCreateWithImageInRect() worked better for my needs but previously I had attempted to convert to an NSImage, and prior to that I was clipping the context I was drawing in, and translating so that CGContextDrawImage() would draw the right data into the clipped region.
Of all of the solutions I tried:
Clipping and translating was prohibitively taxing on the CPU. It was too slow. It seemed that even a slight increase in the amount of bitmap data had a significant performance impact, suggesting that this approach doesn't scale at all.
Conversion to NSImage was relatively efficient, at least for the data we were using. There didn't seem to be any duplication of bitmap data that I could see, which was mostly what I was afraid of going from one image object to another.
At one point I converted to a CIImage, as this class also allows drawing subregions of the image. This seemed to be slower than converting to NSImage, but did offer me the chance to fiddle around with the bitmap by passing through some of the Core Image filters.
Using CGImageCreateWithImageInRect() was the fastest of the lot; maybe this has been optimised since you had last used it. The documentation for this function says the resulting image retains a reference to the original image, this seems to agree with what you had assumed regarding copy-on-write semantics. In my benchmarks, there appears to be no duplication of data but maybe I'm reading the results wrong. We went with this method because it was not only the fastest but it seemed like a more “clean” approach, keeping the whole process in one framework.
Create an NSImage with the CGImage. An NSImage object makes it easy to draw only some section of it to a destination rectangle.
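A sketch of that NSImage route (macOS, ARC assumed; spriteRect and destRect stand in for the sprite's location within the sheet and its destination in the current context; in practice you would wrap the sheet once and reuse it):

    #import <Cocoa/Cocoa.h>

    static void DrawSpriteViaNSImage(CGImageRef cgSheet, NSRect spriteRect, NSRect destRect) {
        NSImage *sheet = [[NSImage alloc] initWithCGImage:cgSheet size:NSZeroSize];
        [sheet drawInRect:destRect
                 fromRect:spriteRect                          // only this part of the sheet is drawn
                operation:NSCompositingOperationSourceOver
                 fraction:1.0];
    }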
I believe the recommendation is to use a clipping region.
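If it helps, the clip-and-translate idea looks roughly like the sketch below, with a few assumptions spelled out: the context uses Core Graphics' default coordinates (origin at the bottom-left), spriteRect is expressed in the same bottom-left convention relative to the sheet, and the sprite is drawn at its natural 1:1 size.

    #include <CoreGraphics/CoreGraphics.h>

    static void DrawSpriteByClipping(CGContextRef ctx, CGImageRef sheet,
                                     CGRect spriteRect, CGPoint destOrigin) {
        CGContextSaveGState(ctx);
        CGContextClipToRect(ctx, (CGRect){ destOrigin, spriteRect.size });

        // Draw the whole sheet, offset so the sprite's sub-rect lines up with the
        // destination; everything outside the clip rect is discarded.
        CGRect sheetRect = CGRectMake(destOrigin.x - spriteRect.origin.x,
                                      destOrigin.y - spriteRect.origin.y,
                                      CGImageGetWidth(sheet),
                                      CGImageGetHeight(sheet));
        CGContextDrawImage(ctx, sheetRect, sheet);
        CGContextRestoreGState(ctx);
    }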
I had a similar problem when writing a simple 2D tile-based game.
The only way I got decent performance was to:
1) Pre-render the tilesheet CGImage into a CGBitmapContext using CGContextDrawImage()
2) Create another CGBitmapContext as an offscreen rendering buffer, with the same size as the UIView I was drawing in, and same pixel format as the context from (1).
3) Write my own fast blit routine that would copy a region (CGRect) of pixels from the bitmap context created in (1) to the bitmap context created in (2). This is pretty easy: just simple memory copying (and some extra per-pixel operations to do alpha blending if needed), keeping in mind that the rasters are in reverse order in the buffer (the last row of pixels in the image is at the beginning of the buffer).
4) Once a frame had been drawn, draw the offscreen buffer in the view using CGContextDrawImage().
As far as I could tell, every time you call CGImageCreateWithImageInRect(), it decodes the entire PNG file into a raw bitmap, then copies the desired region of the bitmap to the destination context.
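For what it's worth, the blit in step 3 can be as simple as a per-row memcpy, as in the sketch below. Assumptions: both contexts were created with the same 32-bit RGBA pixel format, both buffers use the same row order, and srcRect / destPoint are integral pixel coordinates already clipped to the buffers.

    #include <CoreGraphics/CoreGraphics.h>
    #include <string.h>

    static void BlitRect(CGContextRef src, CGContextRef dst,
                         CGRect srcRect, CGPoint destPoint) {
        const size_t bpp = 4;                               // bytes per pixel for 32-bit RGBA
        uint8_t *srcBase = (uint8_t *)CGBitmapContextGetData(src);
        uint8_t *dstBase = (uint8_t *)CGBitmapContextGetData(dst);
        size_t srcStride = CGBitmapContextGetBytesPerRow(src);
        size_t dstStride = CGBitmapContextGetBytesPerRow(dst);

        size_t rowBytes = (size_t)srcRect.size.width * bpp;
        for (size_t row = 0; row < (size_t)srcRect.size.height; row++) {
            uint8_t *srcRow = srcBase
                + ((size_t)srcRect.origin.y + row) * srcStride
                + (size_t)srcRect.origin.x * bpp;
            uint8_t *dstRow = dstBase
                + ((size_t)destPoint.y + row) * dstStride
                + (size_t)destPoint.x * bpp;
            memcpy(dstRow, srcRow, rowBytes);               // plain copy when no blending is needed
        }
    }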

create image cache - large png images

I am a first time poster. I have a problem that seems very common to a lot of people but I can't seem to find or decipher an answer anywhere.
The game I'm making has 3 rather large sprites on screen (230 pixels high by various widths).
Each sprite has 3 1024x1024 character sheets where the frames of animations are taken from.
I've experimented with PVRs, but at that size they look absolutely horrible, so I want to keep PNGs.
From other information I believe that the device can handle 3 1024x1024 PNGs in memory without memory warnings.
My problem is that I end up with 9 in memory, because if I don't use imageNamed, the time it takes to load a sheet from disk is unacceptable.
I have created an NSMutableDictionary to store the data from the PNG image sheets in, but here lies my problem.
How do I add uncompressed PNG data to the NSMutableDictionary without using imageNamed? If I add it via imageWithContentsOfFile: there is still a massive loading time when I get the image back from the cache.
I need to somehow store it uncompressed in the NSMutableDictionary and assign it to the sprite's image (the sprites are UIImageViews) so that it doesn't get cached the way it does when I use imageNamed.
I also need to be able to completely free the PNG from texture memory when I change the sheet the sprite is grabbing frames from (not from my NSMutableDictionary cache).
thanks in advance -
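A sketch of one common approach to exactly this (the names here are illustrative, not from the question): force the PNG to decode once by drawing it into an offscreen context, keep the resulting decompressed UIImage in the dictionary keyed by path, and remove the key when the sheet is no longer needed.

    #import <UIKit/UIKit.h>

    static NSMutableDictionary *sheetCache = nil;

    static UIImage *CachedSheet(NSString *path) {
        if (!sheetCache) sheetCache = [[NSMutableDictionary alloc] init];
        UIImage *cached = [sheetCache objectForKey:path];
        if (cached) return cached;

        UIImage *compressed = [UIImage imageWithContentsOfFile:path];
        UIGraphicsBeginImageContextWithOptions(compressed.size, NO, compressed.scale);
        [compressed drawAtPoint:CGPointZero];            // forces the PNG to decode now, not at first display
        UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        [sheetCache setObject:decoded forKey:path];
        return decoded;
    }

    // When a sheet is no longer needed:
    //     [sheetCache removeObjectForKey:path];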

Tips for reducing Core Animation memory usage

So here's the situation:
I have a CALayer that is the size of my screen, and I'm setting the contents property to a 2 Mb JPEG that's roughly 3500 x 2000 pixels in size with a resolution of 240ppi.
I'd expect there to be a slight overhead involved in using the CALayer, but my sample application (which does exactly what's described above and nothing more) shows usage of about 33Mb RSIZE, 22Mb RPVT and 30Mb RSHRD. I've noticed that these numbers are much better when running the application as a 64-bit process than as a 32-bit process.
I'm doing everything I can think of in the real application that this example comes from, including resampling my CGImageRefs to only be the size of the layer, but this seems extraneous to me - shouldn't it be simpler?
Has anyone come across good methods to reduce the amount of memory CALayers and CGImageRefs use?
First, you're going to run into problems with an image that size in a plain CALayer, because you may hit the texture size limit of 2048 x 2048 (depending on your graphics card). Applications like this are what CATiledLayer is designed for. Bill Dudney has some code examples on his blog (a large PDF), as well as with the code that accompanies his book.
It isn't surprising to me that such a large image would take so much memory, given that it will be stored as an uncompressed bitmap in your CGImage. Aside from scaling your image to the resolution you need, and tiling it with CATiledLayer, I can't think of much. Are you releasing the CGImageRef once you've assigned it to the contents of the CALayer? You won't need to hang onto it at that point.
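Putting those last two suggestions together, a sketch of the scale-then-release step (macOS, manual retain/release assumed; under ARC the contents assignment would need a __bridge cast, and aspect ratio is ignored here for brevity):

    #import <QuartzCore/QuartzCore.h>

    static void SetScaledContents(CALayer *layer, CGImageRef fullImage) {
        size_t w = (size_t)layer.bounds.size.width;
        size_t h = (size_t)layer.bounds.size.height;

        // Redraw the big image into a bitmap that matches the layer's size.
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, 0, space,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), fullImage);
        CGImageRef scaled = CGBitmapContextCreateImage(ctx);

        layer.contents = (id)scaled;     // the layer retains what it needs

        // Our own references can go away immediately.
        CGImageRelease(scaled);
        CGContextRelease(ctx);
        CGColorSpaceRelease(space);
    }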
