NSBitmapImageRep, CGContextRef and how to handle large images - macos

I have a Knife tool as part of WildTools in PowerCADD that cuts objects in the drawing. I did this the 'wrong' way some years ago by using NSBitmapImageRep to compare and set pixels. It works, but it's horribly slow; I'm embarrassed, and I'm working on fixing the problem.
The approach I'm using now is to draw a mask offscreen into an NSBitmapImageRep, then create a CGContextRef, clip it with the mask, and call CGContextDrawImage with the original CGImageRef to get the masked image.
There's no problem getting this to work, but for a single knife cut of an image, I have to do this twice, once for each side of the knife cut.
The problem I'm running into is with very large bitmaps, say 14000 x 9000 pixels in a 300-400 dpi image. In that case, when I call CGBitmapContextCreate I get a NULL context and the whole thing falls apart.
I'm trying to figure out if there's a way to do this with a single NSBitmapImageRep for the four operations. My thought is to create an NSImage and lock focus on it so the drawing goes into it to create a mask, then use NSImage's drawInRect: with an NSCompositeClear operation to mask out the unwanted part of the image. If I never have to get any bits into a second NSBitmapImageRep, then perhaps I can do it all with a single one.
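In rough outline, and leaving out the bookkeeping, the current approach looks something like this (a simplified sketch, not my actual code; width, height, maskImage and sourceImage stand in for the real values):

    // Simplified sketch: clip a bitmap context with the offscreen mask,
    // then draw the original image through the clip.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                             8,   // bits per component
                                             0,   // let CG choose bytes per row
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    if (ctx != NULL) {   // with 14000 x 9000 pixel images this comes back NULL
        CGRect bounds = CGRectMake(0, 0, width, height);
        CGContextClipToMask(ctx, bounds, maskImage);   // mask built offscreen from the NSBitmapImageRep
        CGContextDrawImage(ctx, bounds, sourceImage);  // original CGImageRef
        CGImageRef maskedImage = CGBitmapContextCreateImage(ctx);
        // ... use maskedImage for one side of the cut, then CGImageRelease(maskedImage)
        CGContextRelease(ctx);
    }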
Anyone got any ideas?

Related

How to pre-cache NSImage for displaying inside NSImageView

I have a rather large image which I display inside an NSImageView. Once I do it the first time, every subsequent use of that image is very fast, whether I re-use the NSImageView it was assigned to or assign it to a new NSImageView. This is apparently because NSImage keeps a cache of the representation it used for displaying the image the first time. The problem is displaying it the first time: even on fast hardware there is an unacceptably long delay before the image eventually appears. What I am looking for is a way to pre-cache the image representation before I display it the first time. Any suggestions?
FWIW, I use Obj-C for OS X/macOS but I believe it shouldn't make any difference and Swift techniques should be applicable too.
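One suggestion, offered only as a sketch rather than a confirmed fix (it is not from this thread, and whether it warms the same cache NSImage later uses for display is an assumption worth verifying): force the expensive decode once by drawing the image into a throwaway offscreen bitmap before it is first shown.

    // Hypothetical warm-up: draw the NSImage once into a small offscreen
    // bitmap so the costly decode happens before the first on-screen use.
    NSImage *image = [[NSImage alloc] initWithContentsOfFile:path]; // 'path' is a placeholder
    NSBitmapImageRep *scratch =
        [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                pixelsWide:64
                                                pixelsHigh:64
                                             bitsPerSample:8
                                           samplesPerPixel:4
                                                  hasAlpha:YES
                                                  isPlanar:NO
                                            colorSpaceName:NSCalibratedRGBColorSpace
                                               bytesPerRow:0
                                              bitsPerPixel:0];
    NSGraphicsContext *gc = [NSGraphicsContext graphicsContextWithBitmapImageRep:scratch];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:gc];
    [image drawInRect:NSMakeRect(0, 0, 64, 64)
             fromRect:NSZeroRect              // whole image
            operation:NSCompositeSourceOver
             fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];
    // Later, assigning 'image' to an NSImageView may then reuse the cached representation.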

Displaying a CGContextRef

How can I draw a CGContextRef created with CGBitmapContextCreate() to an NSView?
Should I convert it to an image first? If that's the case, wouldn't it be an expensive operation?
Should I convert it to an image first?
Yes. You can use CGBitmapContextCreateImage, then use that to draw into the graphics context from drawRect:.
If that's the case, wouldn't it be an expensive operation?
CGBitmapContext->CGImage is one option among several - use the best one for the task. If you make good design decisions, it's rarely an obstacle.
CGImage, NSImage, and UIImage are immutable, so they can avoid copies you might otherwise expect when they are created.
Larger images can obviously consume a good amount of memory, and it can be expensive to draw the image to the bitmap, and then to draw the image at a size other than its native size.
Reuse the images you create, and hold on to them appropriately. Profile now and then to see how things are going.
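A minimal sketch of the drawRect: suggestion above, assuming the bitmap context is kept in an instance variable (the class and variable names here are made up):

    #import <Cocoa/Cocoa.h>

    // Minimal NSView subclass that snapshots a CGBitmapContext as a CGImage
    // and draws it in drawRect:. 'bitmapContext' is assumed to be set up elsewhere.
    @interface BitmapView : NSView {
        CGContextRef bitmapContext;
    }
    @end

    @implementation BitmapView
    - (void)drawRect:(NSRect)dirtyRect
    {
        CGImageRef image = CGBitmapContextCreateImage(bitmapContext);
        if (image != NULL) {
            CGContextRef viewContext = [[NSGraphicsContext currentContext] CGContext];
            CGContextDrawImage(viewContext, NSRectToCGRect([self bounds]), image);
            CGImageRelease(image);
        }
    }
    @end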

Draw part of CGImage

I have an application that draws images from a CGImage.
The CGImage itself is loaded using CGImageSourceCreateImageAtIndex() to create an image from a PNG file.
This forms part of a sprite engine - there are multiple sprite images on a single PNG file, so each sprite has a CGRect defining where it is found on the CGImage.
The problem is, CGContextDrawImage only takes a destination rect - and stretches the source CGImage to fill it.
So, to draw each sprite image we need to create multiple CGImages from the original source, using CGImageCreateWithImageInRect().
I thought at first that this would be a 'cheap' operation - it doesn't seem necessary for each CGImage to contain its own copy of the image's bits - however, profiling has revealed that using CGImageCreateWithImageInRect() is a rather expensive operation.
Is there a more optimal method to draw a sub-section of a CGImage onto a CGContext so I don't need to call CGImageCreateWithImageInRect() so often?
Given the lack of a source rectangle, and the ease of making a CGImage from a rect on a CGImage I began to suspect that perhaps CGImage implemented a copy-on-write semantic where a CGImage made from a CGImage would refer to a sub-rect of the same physical bits as the parent.
Profiling seems to prove this wrong :/
I was in the same boat as you. CGImageCreateWithImageInRect() worked better for my needs but previously I had attempted to convert to an NSImage, and prior to that I was clipping the context I was drawing in, and translating so that CGContextDrawImage() would draw the right data into the clipped region.
Of all of the solutions I tried:
Clipping and translating was prohibitively taxing on the CPU. It was too slow. Even a small increase in the amount of bitmap data had a significant performance impact, suggesting that this approach doesn't scale at all.
Conversion to NSImage was relatively efficient, at least for the data we were using. There didn't seem to be any duplication of bitmap data that I could see, which was mostly what I was afraid of going from one image object to another.
At one point I converted to a CIImage, as this class also allows drawing subregions of the image. This seemed to be slower than converting to NSImage, but did offer me the chance to fiddle around with the bitmap by passing through some of the Core Image filters.
Using CGImageCreateWithImageInRect() was the fastest of the lot; maybe this has been optimised since you had last used it. The documentation for this function says the resulting image retains a reference to the original image, this seems to agree with what you had assumed regarding copy-on-write semantics. In my benchmarks, there appears to be no duplication of data but maybe I'm reading the results wrong. We went with this method because it was not only the fastest but it seemed like a more “clean” approach, keeping the whole process in one framework.
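For illustration, cropping and drawing a single sprite this way looks roughly like the following (a sketch; sheetImage, spriteRect, destRect and ctx are placeholder names):

    // Crop one sprite out of the sheet and draw it into destRect.
    // Per the docs, the cropped image retains the original rather than copying its pixels.
    CGImageRef sprite = CGImageCreateWithImageInRect(sheetImage, spriteRect);
    if (sprite != NULL) {
        CGContextDrawImage(ctx, destRect, sprite);
        CGImageRelease(sprite);
    }

If the same sprite is drawn every frame, it is cheaper still to create the cropped CGImage once and cache it rather than recreating it on every draw.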
Create an NSImage with the CGImage. An NSImage object makes it easy to draw only some section of it to a destination rectangle.
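For example (a sketch; sheetImage, spriteRect and destRect are assumed names):

    // Inside drawRect: (or with some NSGraphicsContext current),
    // draw just spriteRect of the sheet into destRect.
    NSImage *sheet = [[NSImage alloc] initWithCGImage:sheetImage size:NSZeroSize];
    [sheet drawInRect:destRect
             fromRect:spriteRect               // source rectangle on the sheet
            operation:NSCompositeSourceOver
             fraction:1.0];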
I believe the recommendation is to use a clipping region.
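A sketch of that idea: clip to the destination rectangle, then draw the whole sheet offset so the wanted sub-rectangle lands inside the clip (names are placeholders; spriteRect is assumed to be the same size as destRect and expressed in the context's bottom-left-origin coordinates):

    // Clip to where the sprite should appear, then draw the entire sheet
    // shifted so that spriteRect lines up with destRect.
    CGContextSaveGState(ctx);
    CGContextClipToRect(ctx, destRect);
    CGRect wholeSheet = CGRectMake(destRect.origin.x - spriteRect.origin.x,
                                   destRect.origin.y - spriteRect.origin.y,
                                   CGImageGetWidth(sheetImage),
                                   CGImageGetHeight(sheetImage));
    CGContextDrawImage(ctx, wholeSheet, sheetImage);
    CGContextRestoreGState(ctx);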
I had a similar problem when writing a simple 2D tile-based game.
The only way I got decent performance was to:
1) Pre-render the tilesheet CGImage into a CGBitmapContext using CGContextDrawImage()
2) Create another CGBitmapContext as an offscreen rendering buffer, with the same size as the UIView I was drawing in, and same pixel format as the context from (1).
3) Write my own fast blit routine that would copy a region (CGRect) of pixels from the bitmap context created in (1) to the bitmap context created in (2); a rough sketch of this follows the answer below. This is pretty easy: just simple memory copying (and some extra per-pixel operations to do alpha blending if needed), keeping in mind that the rasters are in reverse order in the buffer (the last row of pixels in the image is at the beginning of the buffer).
4) Once a frame had been drawn, draw the offscreen buffer in the view using CGContextDrawImage().
As far as I could tell, every time you call CGImageCreateWithImageInRect(), it decodes the entire PNG file into a raw bitmap, then copies the desired region of the bitmap to the destination context.
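A rough sketch of the blit in step 3, assuming both contexts were created with the same 32-bit, 4-bytes-per-pixel format and that no alpha blending is needed (all names are placeholders):

    #import <CoreGraphics/CoreGraphics.h>
    #include <stdint.h>
    #include <string.h>

    // Copy srcRect (in buffer row order, row 0 = first row in memory) from the
    // pre-rendered sheet context into the offscreen buffer context at (dstX, dstY).
    // Both contexts must have been created with the same pixel format.
    static void BlitRect(CGContextRef srcCtx, CGRect srcRect,
                         CGContextRef dstCtx, size_t dstX, size_t dstY)
    {
        const size_t bytesPerPixel = 4;
        uint8_t *srcBase = CGBitmapContextGetData(srcCtx);
        uint8_t *dstBase = CGBitmapContextGetData(dstCtx);
        size_t srcStride = CGBitmapContextGetBytesPerRow(srcCtx);
        size_t dstStride = CGBitmapContextGetBytesPerRow(dstCtx);

        size_t srcX = (size_t)srcRect.origin.x;
        size_t srcY = (size_t)srcRect.origin.y;
        size_t rowBytes = (size_t)srcRect.size.width * bytesPerPixel;

        for (size_t row = 0; row < (size_t)srcRect.size.height; row++) {
            uint8_t *srcRow = srcBase + (srcY + row) * srcStride + srcX * bytesPerPixel;
            uint8_t *dstRow = dstBase + (dstY + row) * dstStride + dstX * bytesPerPixel;
            memcpy(dstRow, srcRow, rowBytes);   // opaque copy; add per-pixel blending here if needed
        }
    }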

What's the most straightforward way of painting RGBA data into a view on Mac OS X?

Long story short, every 60th of a second I have a relatively small buffer (256x240) of RGBA data (8 bits per component, 4 bytes per pixel) that is refreshed with new data, and I want to display it (I guess inside an NSView, but anything is welcome). What's the most straightforward way to do it? Building a CGImageRef from a CGBitmapContext to write it to the CGContextRef obtained from the NSGraphicsContext seems a bit convoluted.
Building a CGImageRef from a CGBitmapContext to write it to the CGContextRef obtained from the NSGraphicsContext seems a bit convoluted.
It is. Specifically, the convoluted bit is the CGBitmapContext; you don't need that if you already have raster data to create an image from. Just create the image, and then draw it into the CGContext you got from the current NSGraphicsContext.
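A sketch of going straight from the raster buffer to a CGImage, assuming the pixel buffer ('pixels' here) stays valid for as long as the image is in use; the alpha handling is a guess about the data:

    // Wrap the 256x240 RGBA buffer in a CGImage with no intermediate bitmap context.
    const size_t width = 256, height = 240, bytesPerPixel = 4;
    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, pixels, width * height * bytesPerPixel, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef image = CGImageCreate(width, height,
                                     8,                      // bits per component
                                     32,                     // bits per pixel
                                     width * bytesPerPixel,  // bytes per row
                                     colorSpace,
                                     kCGImageAlphaNoneSkipLast, // RGBX; use kCGImageAlphaLast if the alpha matters
                                     provider,
                                     NULL,                   // no decode array
                                     false,                  // no interpolation
                                     kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);

    // In drawRect:, draw it into the view's context, then release it.
    CGContextRef viewContext = [[NSGraphicsContext currentContext] CGContext];
    CGContextDrawImage(viewContext, CGRectMake(0, 0, width, height), image);
    CGImageRelease(image);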

create image cache - large png images

I am a first time poster. I have a problem that seems very common to a lot of people but I can't seem to find or decipher an answer anywhere.
The game I'm making has 3 rather large sprites on screen, 230 pixels high by various widths.
Each sprite has 3 1024x1024 character sheets where the frames of animations are taken from.
I've experimented with PVRs but at that size they are absolutely horrible, so I want to keep PNGs.
From other information I believe that the device can handle 3 1024x1024 PNGs in memory without memory warnings.
My problem is that I end up with 9, because if I don't use imageNamed: then the time it takes to load a sheet from disk is unacceptable.
I have created an NSMutableDictionary to store the data from the PNG image sheets, but here lies my problem.
How do I add uncompressed PNG data to the NSMutableDictionary without using imageNamed:? If I add it uncompressed (imageWithContentsOfFile:) then there is still a massive loading time when I get the image from the cache.
I need to somehow store it uncompressed in the NSMutableDictionary and assign it to the sprite's image (sprites are UIImageViews) so that it doesn't get stored the way it does if I use imageNamed:.
I also need to completely free the PNG from texture memory if I need to change the sheet the sprite is grabbing the frames from (not from my NSMutableDictionary cache).
thanks in advance -
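No answer is quoted here, but purely as a sketch of the usual 'decode it up front' idea (not from this thread; the helper name is made up): draw each sheet into a bitmap context once and cache the resulting already-decoded UIImage in the dictionary, so that releasing it is just a matter of removing the dictionary entry.

    // Hypothetical helper: decode a PNG once and return a UIImage backed by the
    // already-decompressed bitmap, suitable for caching in an NSMutableDictionary.
    UIImage *WTDecodedImage(NSString *path)
    {
        UIImage *raw = [UIImage imageWithContentsOfFile:path];
        if (raw == nil) return nil;

        UIGraphicsBeginImageContextWithOptions(raw.size, NO, raw.scale);
        [raw drawInRect:CGRectMake(0, 0, raw.size.width, raw.size.height)];
        UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return decoded;
    }

    // Usage: cache[@"sheet1"] = WTDecodedImage(sheetPath);
    // Removing the entry later releases the decoded bitmap, unlike imageNamed:,
    // which holds on to its own cache.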
