I have an application that draws images from a CGImage.
The CGImage itself is loaded using CGImageSourceCreateImageAtIndex() to create an image from a PNG file.
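For context, the loading step looks roughly like this (a sketch; url is a hypothetical CFURLRef pointing at the PNG file):

CGImageSourceRef source = CGImageSourceCreateWithURL(url, NULL);
CGImageRef sheet = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CFRelease(source);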
This forms part of a sprite engine - there are multiple sprite images on a single PNG file, so each sprite has a CGRect defining where it is found on the CGImage.
The problem is, CGContextDrawImage() only takes a destination rect, and it stretches the source CGImage to fill it.
So, to draw each sprite image we need to create multiple CGImages from the original source, using CGImageCreateWithImageInRect().
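So the per-sprite drawing ends up as this pattern (a sketch; sheet, spriteRect and destRect are hypothetical names):

CGImageRef sprite = CGImageCreateWithImageInRect(sheet, spriteRect);
CGContextDrawImage(context, destRect, sprite); // destination rect only; the sprite is scaled to fill it
CGImageRelease(sprite);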
I thought at first that this would be a 'cheap' operation; it doesn't seem necessary for each CGImage to contain its own copy of the image's bits. However, profiling has revealed that CGImageCreateWithImageInRect() is a rather expensive operation.
Is there a more efficient way to draw a sub-section of a CGImage onto a CGContext, so I don't need to call CGImageCreateWithImageInRect() so often?
Given the lack of a source rectangle, and the ease of making a CGImage from a rect on a CGImage, I began to suspect that CGImage implements copy-on-write semantics, where a CGImage made from another CGImage refers to a sub-rect of the same physical bits as the parent.
Profiling seems to prove this wrong :/
I was in the same boat as you. CGImageCreateWithImageInRect() worked best for my needs, but previously I had attempted to convert to an NSImage, and prior to that I was clipping the context I was drawing in and translating so that CGContextDrawImage() would draw the right data into the clipped region.
Of all of the solutions I tried:
Clipping and translating was prohibitively taxing on the CPU; it was too slow. Even a slight increase in the amount of bitmap data had a significant performance impact, suggesting that this approach doesn't scale at all.
Conversion to NSImage was relatively efficient, at least for the data we were using. There didn't seem to be any duplication of bitmap data that I could see, which was mostly what I was afraid of when going from one image object to another.
At one point I converted to a CIImage, as this class also allows drawing subregions of the image. This seemed to be slower than converting to NSImage, but did offer me the chance to fiddle around with the bitmap by passing through some of the Core Image filters.
Using CGImageCreateWithImageInRect() was the fastest of the lot; maybe it has been optimised since you last used it. The documentation for this function says the resulting image retains a reference to the original image, which agrees with what you had assumed regarding copy-on-write semantics. In my benchmarks there appears to be no duplication of data, but maybe I'm reading the results wrong. We went with this method because it was not only the fastest but also seemed like a “cleaner” approach, keeping the whole process in one framework.
Create an NSImage with the CGImage. An NSImage object makes it easy to draw only some section of it to a destination rectangle.
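For example (a minimal sketch, assuming sheetCGImage is the source CGImageRef, and spriteRect/destRect are hypothetical):

NSImage *sheet = [[NSImage alloc] initWithCGImage:sheetCGImage size:NSZeroSize];
[sheet drawInRect:destRect       // where to draw in the current context
         fromRect:spriteRect     // which sub-section of the sheet to use
        operation:NSCompositeSourceOver
         fraction:1.0];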
I believe the recommendation is to use a clipping region.
I had a similar problem when writing a simple 2D tile-based game.
The only way I got decent performance was to:
1) Pre-render the tilesheet CGImage into a CGBitmapContext using CGContextDrawImage()
2) Create another CGBitmapContext as an offscreen rendering buffer, with the same size as the UIView I was drawing in, and same pixel format as the context from (1).
3) Write my own fast blit routine that would copy a region (CGRect) of pixels from the bitmap context created in (1) to the bitmap context created in (2); see the sketch after this list. This is pretty easy: just simple memory copying (plus some extra per-pixel operations to do alpha blending if needed), keeping in mind that the rasters are in reverse order in the buffer (the last row of pixels in the image is at the beginning of the buffer).
4) Once a frame had been drawn, draw the offscreen buffer in the view using CGContextDrawImage().
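A minimal sketch of the blit from step (3), assuming both contexts share a 32-bit pixel format and no blending is needed (all names are hypothetical, and flipping for the reversed raster order is left to the caller):

static void BlitRect(CGContextRef src, CGContextRef dst, CGRect srcRect, CGPoint dstPoint)
{
    const size_t bpp = 4; // bytes per 32-bit pixel
    uint8_t *srcBase = (uint8_t *)CGBitmapContextGetData(src);
    uint8_t *dstBase = (uint8_t *)CGBitmapContextGetData(dst);
    size_t srcStride = CGBitmapContextGetBytesPerRow(src);
    size_t dstStride = CGBitmapContextGetBytesPerRow(dst);
    size_t rowBytes = (size_t)srcRect.size.width * bpp;
    for (size_t row = 0; row < (size_t)srcRect.size.height; row++) {
        uint8_t *srcRow = srcBase + ((size_t)srcRect.origin.y + row) * srcStride + (size_t)srcRect.origin.x * bpp;
        uint8_t *dstRow = dstBase + ((size_t)dstPoint.y + row) * dstStride + (size_t)dstPoint.x * bpp;
        memcpy(dstRow, srcRow, rowBytes); // straight copy; per-pixel alpha blending would go here
    }
}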
As far as I could tell, every time you call CGImageCreateWithImageInRect(), it decodes the entire PNG file into a raw bitmap, then copies the desired region of the bitmap to the destination context.
I have a Knife tool as part of WildTools in PowerCADD that cuts objects in the drawing. I did this the 'wrong' way some years ago by using NSBitmapImageRep to compare and set pixels. It works, but it's horribly slow; I'm embarrassed, and I'm working on fixing the problem.
The approach that I'm using now is to use an NSBitmapImageRep to draw a mask offscreen, then use a CGContextRef, clipping the CGContextRef with the mask, then use CGContextDrawImage with the original CGImageRef to get the masked image.
There's no problem getting this to work, but for a single knife cut of an image, I have to do this twice, once for each side of the knife cut.
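For reference, the clip-and-draw step looks roughly like this (a sketch; ctx, bounds, mask and originalImage are hypothetical names, with mask being the grayscale CGImageRef rendered offscreen):

CGContextSaveGState(ctx);
CGContextClipToMask(ctx, bounds, mask);         // restrict drawing to one side of the cut
CGContextDrawImage(ctx, bounds, originalImage);
CGContextRestoreGState(ctx);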
The problem I'm running into is with very large bitmaps, say 14000 x 9000 pixels in a 300-400 dpi image. In this case, when I call CGBitmapContextCreate I get a NULL context and the whole thing falls apart.
I'm trying to figure out if there's a way to do this with a single NSBitmapImageRep for the four operations. My thought is to create an NSImage and lock focus on it, so the drawing goes into it to create a mask, then use NSImage drawInRect: with an NSCompositeClear operation to mask out the unwanted part of the image. If I never get any bits into the single NSBitmapImageRep, then perhaps I can do it all with one NSBitmapImageRep.
Anyone got any ideas?
I think the answer is no, but asking anyway.
Does CALayer support vector art?
I get the sense that a CALayer is resolution-dependent, tied to the resolution of the image used as its contents.
Am I missing something?
It does, in the sense that a layer subclass, or the delegate of a layer, can be asked by the layer to draw on demand. You can fairly easily draw vector art into the layer this way.
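For example, the delegate route (a minimal sketch; the stroked ellipse is just placeholder vector content, redrawn crisply at whatever size the layer currently has):

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    CGContextSetLineWidth(ctx, 2.0);
    CGContextSetGrayStrokeColor(ctx, 0.0, 1.0);
    CGContextAddEllipseInRect(ctx, CGRectInset(layer.bounds, 4.0, 4.0));
    CGContextStrokePath(ctx);
}

Call setNeedsDisplay on the layer whenever its size changes, and the vector content is re-rasterized at the new size.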
There's no support for vector images in QuartzCore.
It means there are no built-in APIs to read and draw any of the vector image formats.
It's possible to write code to read and parse the SVG format (which is essentially an XML file) and draw everything using primitive drawing methods (creating paths and stroking them)... but that means you have to do everything from scratch yourself.
[Skip to the bottom for the question only]
While developing my XNA game I ran into another horrible XNA limitation: Texture2Ds (at least on my PC) can't have dimensions larger than 2048*2048. No problem: I quickly wrote my custom texture class, which uses a [System.Drawing.] Bitmap by default, eventually splits the texture into smaller Texture2Ds, and displays them as appropriate.
When I made this change I also had to update the method that loads the textures. In the old version I loaded the Texture2Ds with Texture2D.FromStream(), which worked pretty well, but XNA can't even seem to store/load textures above the limit, so if I tried to load, say, a 4092*2048 PNG file, I ended up with a 2048*2048 Texture2D in my app. Therefore I switched to loading the images using [System.Drawing.] Image.FromFile and then casting to a Bitmap, which doesn't seem to have any such limitation. (Later I convert this Bitmap to a list of Texture2Ds.)
The problem is that loading the textures this way is noticeably slower, because now even images under the 2048*2048 limit are loaded as a Bitmap and then converted to a Texture2D. So I am actually looking for a way to check an image file's dimensions (width, height) before even loading it into my application. If it is under the texture limit, I can load it straight into a Texture2D without first loading it into a Bitmap and converting it into a single-element Texture2D list.
Is there any (clean and possibly very quick) way to get the dimensions of an image file without loading the whole file into the application? And if there is, is it even worth using? I'd guess the slowest step here is opening/seeking the file (probably hardware-bound, on HDDs at least) rather than streaming its contents into the application.
Do you need to support arbitrarily large textures? If not, switching to the HiDef profile will get you support for textures as large as 4096x4096.
If you do need to stick with your current technique, you might want to check out this answer regarding how to read image sizes without loading the entire file.
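The gist of that technique for PNG specifically: the width and height live at fixed offsets in the IHDR chunk, which always immediately follows the 8-byte signature, so reading the first 24 bytes of the file is enough. A sketch in C (the same logic ports directly to a BinaryReader in C#):

#include <stdio.h>
#include <stdint.h>

/* Returns 0 on success; touches only the first 24 bytes of the file. */
int png_dimensions(const char *path, uint32_t *width, uint32_t *height)
{
    uint8_t h[24];
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t n = fread(h, 1, sizeof h, f);
    fclose(f);
    /* 8-byte PNG signature, 4-byte chunk length, then "IHDR" */
    if (n != sizeof h || h[0] != 0x89 || h[1] != 'P' || h[2] != 'N' || h[3] != 'G' ||
        h[12] != 'I' || h[13] != 'H' || h[14] != 'D' || h[15] != 'R')
        return -1;
    /* Width and height are big-endian 32-bit ints at offsets 16 and 20 */
    *width  = ((uint32_t)h[16] << 24) | (h[17] << 16) | (h[18] << 8) | h[19];
    *height = ((uint32_t)h[20] << 24) | (h[21] << 16) | (h[22] << 8) | h[23];
    return 0;
}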
How can I draw a CGContextRef created with CGBitmapContextCreate() to an NSView?
Should I convert it to an image first? If that's the case, wouldn't it be an expensive operation?
Should I convert it to an image first?
Yes. You can use CGBitmapContextCreateImage, then use that to draw into the graphics context from drawRect:.
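A minimal sketch, assuming the view keeps its bitmap context in a hypothetical _bitmapContext ivar:

- (void)drawRect:(NSRect)dirtyRect
{
    CGImageRef snapshot = CGBitmapContextCreateImage(_bitmapContext);
    // On older SDKs, use (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort]
    CGContextRef ctx = [[NSGraphicsContext currentContext] CGContext];
    CGContextDrawImage(ctx, NSRectToCGRect([self bounds]), snapshot);
    CGImageRelease(snapshot);
}

CGBitmapContextCreateImage is documented to use copy-on-write, so taking the snapshot is cheap; an actual copy happens only if you then draw into the bitmap context again.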
If that's the case, wouldn't it be an expensive operation?
CGBitmapContext -> CGImage is one option among several; use the best one for the task. If you make good design decisions, it's rarely an obstacle.
CGImage, NSImage, and UIImage are immutable. They avoid the copying you might expect when they are created.
Larger images can obviously consume a good amount of memory, and it can be expensive to draw the image into the bitmap and then to draw it at a size other than its native size.
Reuse the images you create, and hold on to them appropriately. Profile now and then to see how things are going.
I'm writing an audio waveform editor in Cocoa with a wide range of zoom options. At its widest, it shows a waveform for an entire song (~10 million samples in view). At its narrowest, it shows a pixel accurate representation of the sound wave (~1 thousand samples in a view). I want to be able to smoothly transition between these zoom levels. Some commercial editors like Ableton Live seem to do this in a very inexpensive fashion.
My current implementation satisfies my desired zoom range, but it is inefficient and choppy. The design is largely inspired by this excellent article on drawing waveforms with Quartz:
http://supermegaultragroovy.com/blog/2009/10/06/drawing-waveforms/
I create multiple CGMutablePathRefs for the audio file at various levels of reduction. When I'm zoomed all the way out, I use the path that's been reduced to one point per x-thousand samples. When I'm zoomed all the way in, I use the path that contains a point for every sample. I scale a path horizontally when I'm in between reduction levels. This gets it functional, but it's still pretty expensive, and artifacts appear when transitioning between reduction levels.
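The reduction step for one level looks something like this (a sketch; samples, sampleCount and bucket are hypothetical names, keeping one peak value per bucket of samples):

CGMutablePathRef path = CGPathCreateMutable();
for (size_t i = 0; i + bucket <= sampleCount; i += bucket) {
    float peak = 0.0f;
    for (size_t j = 0; j < bucket; j++)
        peak = fmaxf(peak, fabsf(samples[i + j])); // peak magnitude within this bucket
    CGFloat x = (CGFloat)(i / bucket);             // one path point per bucket
    if (i == 0)
        CGPathMoveToPoint(path, NULL, x, peak);
    else
        CGPathAddLineToPoint(path, NULL, x, peak);
}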
One thought on how I might make this less expensive is to take out anti-aliasing. The waveform in my editor is anti-aliased while the one in Ableton is not (see comparison below).
I don't see a way to turn off anti-aliasing for CGMutablePathRefs. Is there a non-anti-aliased alternative to CGMutablePathRef in the world of Cocoa? If not, does anyone know of some OpenGL classes or sample code that might set me on course to drawing my huge line more efficiently?
Update 1-21-2014: There's now a great library that does exactly what I was looking for: https://github.com/syedhali/EZAudio
I use CGContextMoveToPoint + CGContextAddLineToPoint + CGContextStrokePath in my app, drawing one point per onscreen point from a pre-calculated backing buffer for the overview. The buffer contains the exact points to draw and uses an interpolated representation of the signal (based on the zoom/scale). Although it could be faster and look better if I rendered to an image buffer, I've never had a complaint. You can calculate and render all of this from a secondary thread if you set it up correctly.
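In outline (a sketch; ctx, points and count are hypothetical names, with one pre-calculated y value per onscreen x):

CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, 0.0, points[0]);
for (size_t x = 1; x < count; x++)
    CGContextAddLineToPoint(ctx, (CGFloat)x, points[x]);
CGContextStrokePath(ctx);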
Anti-aliasing pertains to the graphics context.
CGFloat (the native input for CGPaths) is overkill for an overview, as an intermediate representation, and for calculating the waveform overview; 16 bits should be adequate. Of course, you'll have to convert to CGFloat when passing to CG calls.
You need to profile to find out where your time is spent; focus on the parts that take the most time. Also, make sure you only draw what you must, when you must, and avoid overlays/animations where possible. If you need overlays, it's better to render to an image/buffer and update that as needed. Sometimes it helps to break the display up into multiple drawing surfaces when the surface is large.
Semi-OT: Ableton is using sample-and-hold (s+h) values; this can be slightly faster, but... I much prefer it as an option. If your implementation uses linear interpolation (which it may, based on its appearance), consider a more intuitive approach. Linear interpolation is a bit of a cheat, and really not what the user would expect if you're developing a pro app.
Regarding the particular question of anti-aliasing: in Quartz, anti-aliasing is applied to the context at the moment of drawing. The CGPathRef is agnostic of the drawing context, so the same CGPathRef can be rendered into an anti-aliased context or a non-anti-aliased one. For example, to disable anti-aliasing during animations:
CGContextRef context = UIGraphicsGetCurrentContext();
CGMutablePathRef fill_path = CGPathCreateMutable();
// Fill the path with the wave
...
CGContextAddPath(context, fill_path);
// Skip anti-aliasing while animating; restore it for the static rendering
if ([self animating])
    CGContextSetAllowsAntialiasing(context, NO);
else
    CGContextSetAllowsAntialiasing(context, YES);
// Do the drawing
CGContextDrawPath(context, kCGPathStroke);
CGPathRelease(fill_path);