How can I draw a CGContextRef created with CGBitmapContextCreate() to an NSView?
Should I convert it to an image first? If that's the case, wouldn't it be an expensive operation?
Should I convert it to an image first?
Yes. You can use CGBitmapContextCreateImage to snapshot the bitmap context, then draw the resulting CGImage into the view's graphics context from drawRect:.
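A minimal sketch of that approach, assuming the view keeps the bitmap context around in an ivar called bitmapContext (on SDKs before 10.10, use -graphicsPort in place of -CGContext):

- (void)drawRect:(NSRect)dirtyRect
{
    // Snapshot the bitmap context into an immutable CGImage...
    CGImageRef image = CGBitmapContextCreateImage(bitmapContext);

    // ...and draw that image into the view's current graphics context.
    CGContextRef ctx = [[NSGraphicsContext currentContext] CGContext];
    CGContextDrawImage(ctx, NSRectToCGRect(self.bounds), image);

    CGImageRelease(image);
}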
If that's the case, wouldn't it be an expensive operation?
Going from a CGBitmapContext to a CGImage is one option among several - use the one best suited to the task. If you make good design decisions, it's rarely an obstacle.
CGImage, NSImage, and UIImage are immutable; because of that, creating one can avoid the copying you might expect.
Larger images can obviously consume a good amount of memory, and it can be expensive to draw the image to the bitmap, and then to draw the image at a size other than its native size.
Reuse the images you create, and hold on to them appropriately. Profile now and then to see how things are going.
Related
I think the answer is no, but asking anyway.
Does CALayer support vector art?
I get the sense that a CALayer is resolution-dependent - tied to the resolution of the image used as its contents.
Am I missing something?
It does, in the sense that a layer subclass, or the delegate of a layer, can be asked by the layer to draw on demand. You can fairly easily draw vector art into the layer this way.
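For example, a minimal sketch of a layer delegate drawing a vector shape on demand (pure Core Graphics, so it rasterizes at whatever size and contentsScale the layer currently has):

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    // Stroke an ellipse that fills the layer; this runs whenever the layer
    // needs display, so the result stays sharp at any layer size.
    CGContextSetLineWidth(ctx, 2.0);
    CGContextSetRGBStrokeColor(ctx, 0.0, 0.0, 0.0, 1.0);
    CGContextAddEllipseInRect(ctx, CGRectInset(layer.bounds, 4.0, 4.0));
    CGContextStrokePath(ctx);
}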
There's no support for vector images in QuartzCore.
It means that there are no built-in APIs to read and draw any of the vector image formats.
It's possible to write code to read and parse the SVG format (which is essentially an XML file) and draw everything using primitive drawing methods (creating paths and stroking them)... but that means you have to do everything from scratch yourself.
I would like to share my experience with using the self.layer.shouldRasterize = YES flag on UIViews.
I have a UIView class hierarchy that has self.layer.shouldRasterize turned ON in order to improve scrolling performance (all of them have STATIC subviews that are larger than the screen of the device).
Today in one of the subclasses I used CAEmitterLayer to produce nice particle effects.
The performance is really poor even though the number of particles is really low (50 particles).
What is the cause of this problem?
I'll just quote Apple Doc and explain:
@property BOOL shouldRasterize
When the value of this property is YES, the layer is
rendered as a bitmap in its local coordinate space and then composited
to the destination with any other content. Shadow effects and any
filters in the filters property are rasterized and included in the
bitmap. However, the current opacity of the layer is not rasterized.
If the rasterized bitmap requires scaling during compositing, the
filters in the minificationFilter and magnificationFilter properties
are applied as needed.
So basically when shouldRasterize is set to YES, every pixel that will compose the layer is calculated and the whole layer is cached as a bitmap.
When will you benefit from it ?
When you only need to draw it once. That means when you need just pure "simple" animation (e.g. moving, transforming, scaling...), because Core Animation will actually reuse that layer without redrawing it every frame. It's a very powerful feature for caching complex layers (with shadows and corner radius) combined with Core Animation.
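For example, a static, shadow-heavy view is a typical candidate (a minimal sketch; note that rasterizationScale must be set explicitly or the cached bitmap is rendered at 1x on Retina screens):

view.layer.cornerRadius = 12.0f;
view.layer.shadowOpacity = 0.5f;
view.layer.shadowRadius = 8.0f;
// Render the expensive layer tree once into a bitmap; Core Animation can then
// move/transform that cached bitmap without redrawing it every frame.
view.layer.shouldRasterize = YES;
view.layer.rasterizationScale = [UIScreen mainScreen].scale;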
When will it kill your framerate?
When your layer is redisplayed many times, because on top of the drawing that already has to happen, shouldRasterize has to process all of the layer's pixels again to re-cache the bitmap data.
So the real question you should ask yourself is: "On which layer am I setting shouldRasterize to YES? And how often is that layer redrawn?"
Hope this was clear enough.
Turning OFF self.layer.shouldRasterize increases performance to normal levels.
Why is that?
According to a video on Apple's developer site (I cannot remember which video, help please?) the rule for self.layer.shouldRasterize is simple: if all of your subviews are static (their position, contents etc. are not changing or animating), then it is beneficial to turn self.layer.shouldRasterize ON. On the other hand, if any of the subviews are changing, then the framework needs to re-cache the view hierarchy, and this is a huge bottleneck. Under the hood the bottleneck is the memory copying between CPU and GPU.
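In the emitter scenario above, that suggests not leaving the flag on while particles are animating; a hypothetical sketch (particleEmitter and the 50-particle birth rate are illustrative, not from the original code):

// Hypothetical: rasterize only while the view's contents are static.
- (void)setParticlesActive:(BOOL)active
{
    // While the emitter runs, the layer changes every frame, so a cached
    // bitmap would be thrown away and rebuilt per frame; disable it instead.
    self.layer.shouldRasterize = !active;
    self.particleEmitter.birthRate = active ? 50.0f : 0.0f;
}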
I'm writing an audio waveform editor in Cocoa with a wide range of zoom options. At its widest, it shows a waveform for an entire song (~10 million samples in view). At its narrowest, it shows a pixel accurate representation of the sound wave (~1 thousand samples in a view). I want to be able to smoothly transition between these zoom levels. Some commercial editors like Ableton Live seem to do this in a very inexpensive fashion.
My current implementation satisfies my desired zoom range, but is inefficient and choppy. The design is largely inspired by this excellent article on drawing waveforms with quartz:
http://supermegaultragroovy.com/blog/2009/10/06/drawing-waveforms/
I create multiple CGMutablePathRefs for the audio file at various levels of reduction. When I'm zoomed all the way out, I use the path that's been reduced to one point per x-thousand samples. When I'm zoomed all the way in, I use the path that contains a point for every sample. I scale a path horizontally when I'm in between reduction levels. This gets it functional, but is still pretty expensive, and artifacts appear when transitioning between reduction levels.
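Roughly, the in-between scaling looks like this (a sketch; reducedPath and xScale are illustrative names):

// Stretch the nearest pre-reduced path to the current zoom level.
// ctx is the current drawing context; xScale is the ratio between the
// current zoom and that path's reduction level.
CGAffineTransform stretch = CGAffineTransformMakeScale(xScale, 1.0);
CGPathRef zoomed = CGPathCreateCopyByTransformingPath(reducedPath, &stretch);
CGContextAddPath(ctx, zoomed);
CGContextStrokePath(ctx);
CGPathRelease(zoomed);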
One thought on how I might make this less expensive is to take out anti-aliasing. The waveform in my editor is anti-aliased while the one in Ableton is not (see comparison below).
I don't see a way to turn off anti-aliasing for CGMutablePathRefs. Is there a non-anti-aliased alternative to CGMutablePathRef in the world of Cocoa? If not, does anyone know of some OpenGL classes or sample code that might set me on course to drawing my huge line more efficiently?
Update 1-21-2014: There's now a great library that does exactly what I was looking for: https://github.com/syedhali/EZAudio
I use CGContextMoveToPoint + CGContextAddLineToPoint + CGContextStrokePath in my app: one point per on-screen point, drawn from a pre-calculated backing buffer for the overview. The buffer contains the exact points to draw and uses an interpolated representation of the signal (based on the zoom/scale). Although it could be faster and look better if I rendered to an image buffer, I've never had a complaint. You can calculate and render all of this from a secondary thread if you set it up correctly.
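A rough sketch of that drawing loop, assuming overview is the pre-calculated backing buffer with one interpolated value per on-screen x position (the names here are illustrative):

// One vertex per horizontal pixel; the heavy lifting (reduction/interpolation)
// already happened when the overview buffer was built.
// ctx is the view's current graphics context.
CGContextMoveToPoint(ctx, 0.0, (CGFloat)overview[0]);
for (size_t x = 1; x < pointCount; x++) {
    CGContextAddLineToPoint(ctx, (CGFloat)x, (CGFloat)overview[x]);
}
CGContextStrokePath(ctx);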
Anti-aliasing pertains to the graphics context, not the path.
CGFloat (the native input for CGPaths) is overkill for an overview, as an intermediate representation, and for calculating the waveform overview; 16 bits should be adequate. Of course, you'll have to convert to CGFloat when passing values to CG calls.
You need to profile to find out where your time is spent -- focus on the parts that take the most time. Also, make sure you only draw what you must, when you must, and avoid overlays/animations where possible. If you need overlays, it's better to render to an image/buffer and update that as needed. Sometimes it helps to break up the display into multiple drawing surfaces when the surface is large.
Semi-OT: Ableton is using sample-and-hold (s+h) values. This can be slightly faster, but... I much prefer it as an option. If your implementation uses linear interpolation (which it may, based on its appearance), consider a more intuitive approach. Linear interpolation is a bit of a cheat, and really not what the user would expect if you're developing a pro app.
In relation to the particular question of anti-aliasing: in Quartz, anti-aliasing is applied to the context at the moment of drawing. The CGPathRef itself is agnostic of the drawing context, so the same CGPathRef can be rendered into an anti-aliased context or into a non-anti-aliased one. For example, to disable anti-aliasing during animations:
CGContextRef context = UIGraphicsGetCurrentContext();
CGMutablePathRef fill_path = CGPathCreateMutable();
// Fill the path with the wave
...
CGContextAddPath(context, fill_path);
if ([self animating])
    CGContextSetAllowsAntialiasing(context, NO);
else
    CGContextSetAllowsAntialiasing(context, YES);
// Do the drawing
CGContextDrawPath(context, kCGPathStroke);
CGPathRelease(fill_path);
I have an application that draws images from a CGImage.
The CGImage itself is loaded using CGImageSourceCreateImageAtIndex to create an image from a PNG file.
This forms part of a sprite engine - there are multiple sprite images on a single PNG file, so each sprite has a CGRect defining where it is found on the CGImage.
The problem is, CGContextDrawImage() only takes a destination rect - and stretches the source CGImage to fill it.
So, to draw each sprite image we need to create multiple CGImages from the original source, using CGImageCreateWithImageInRect().
I thought at first that this would be a 'cheap' operation - it doesn't seem necessary for each CGImage to contain its own copy of the image's bits - however, profiling has revealed that CGImageCreateWithImageInRect() is a rather expensive operation.
Is there a more optimal way to draw a sub-section of a CGImage onto a CGContext, so I don't need to call CGImageCreateWithImageInRect() so often?
Given the lack of a source rectangle, and the ease of making a CGImage from a rect of another CGImage, I began to suspect that CGImage implements copy-on-write semantics, where a CGImage made from another CGImage refers to a sub-rect of the same physical bits as its parent.
Profiling seems to prove this wrong :/
I was in the same boat as you. CGImageCreateWithImageInRect() worked better for my needs but previously I had attempted to convert to an NSImage, and prior to that I was clipping the context I was drawing in, and translating so that CGContextDrawImage() would draw the right data into the clipped region.
Of all of the solutions I tried:
Clipping and translating was prohibitively taxing on the CPU. It was too slow. It seemed that even a slight increase in the amount of bitmap data had a significant performance impact, suggesting that this approach lacks any sort of scalability.
Conversion to NSImage was relatively efficient, at least for the data we were using. There didn't seem to be any duplication of bitmap data that I could see, which was mostly what I was afraid of going from one image object to another.
At one point I converted to a CIImage, as this class also allows drawing subregions of the image. This seemed to be slower than converting to NSImage, but did offer me the chance to fiddle around with the bitmap by passing through some of the Core Image filters.
Using CGImageCreateWithImageInRect() was the fastest of the lot; maybe this has been optimised since you had last used it. The documentation for this function says the resulting image retains a reference to the original image, this seems to agree with what you had assumed regarding copy-on-write semantics. In my benchmarks, there appears to be no duplication of data but maybe I'm reading the results wrong. We went with this method because it was not only the fastest but it seemed like a more “clean” approach, keeping the whole process in one framework.
Create an NSImage with the CGImage. An NSImage object makes it easy to draw only some section of it to a destination rectangle.
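A minimal sketch, assuming spriteRect is the sprite's rectangle within the sheet and destRect is where it should be drawn (on older SDKs the operation constant is NSCompositeSourceOver):

NSImage *sheet = [[NSImage alloc] initWithCGImage:spriteSheetImage size:NSZeroSize];
// -drawInRect:fromRect:... takes a source rectangle, so no per-sprite CGImage is needed.
[sheet drawInRect:destRect
         fromRect:spriteRect
        operation:NSCompositingOperationSourceOver
         fraction:1.0];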
I believe the recommendation is to use a clipping region.
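In Core Graphics terms that amounts to something like this (a sketch; sheet is the full sprite-sheet CGImage, spriteRect its sub-rectangle, destRect the target, and the sprite is drawn at its native size):

CGContextSaveGState(ctx);
// Clip to the destination, then draw the whole sheet offset so that the
// desired sub-rectangle lines up with destRect; everything else is clipped away.
CGContextClipToRect(ctx, destRect);
CGRect sheetRect = CGRectMake(destRect.origin.x - spriteRect.origin.x,
                              destRect.origin.y - spriteRect.origin.y,
                              (CGFloat)CGImageGetWidth(sheet),
                              (CGFloat)CGImageGetHeight(sheet));
CGContextDrawImage(ctx, sheetRect, sheet);
CGContextRestoreGState(ctx);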
I had a similar problem when writing a simple 2D tile-based game.
The only way I got decent performance was to:
1) Pre-render the tilesheet CGImage into a CGBitmapContext using CGContextDrawImage()
2) Create another CGBitmapContext as an offscreen rendering buffer, with the same size as the UIView I was drawing in, and same pixel format as the context from (1).
3) Write my own fast blit routine that would copy a region (CGRect) of pixels from the bitmap context created in (1) to the bitmap context created in (2); see the sketch after this list. This is pretty easy: just simple memory copying (and some extra per-pixel operations to do alpha blending if needed), keeping in mind that the rasters are in reverse order in the buffer (the last row of pixels in the image is at the beginning of the buffer).
4) Once a frame had been drawn, draw the offscreen buffer in the view using CGContextDrawImage().
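A simplified sketch of such a blit for step (3), assuming both contexts use the same 32-bit RGBA format, no blending is needed, and the row order of the two buffers matches:

#import <CoreGraphics/CoreGraphics.h>
#include <string.h>

// Copy a rectangular block of pixels from one bitmap context to another.
static void BlitRect(CGContextRef src, CGContextRef dst, CGRect from, CGPoint to)
{
    const size_t bytesPerPixel = 4;   // RGBA8888
    uint8_t *srcBase = (uint8_t *)CGBitmapContextGetData(src);
    uint8_t *dstBase = (uint8_t *)CGBitmapContextGetData(dst);
    size_t srcStride = CGBitmapContextGetBytesPerRow(src);
    size_t dstStride = CGBitmapContextGetBytesPerRow(dst);

    // One memcpy per scanline of the source region.
    for (size_t row = 0; row < (size_t)from.size.height; row++) {
        uint8_t *srcRow = srcBase + ((size_t)from.origin.y + row) * srcStride
                                  + (size_t)from.origin.x * bytesPerPixel;
        uint8_t *dstRow = dstBase + ((size_t)to.y + row) * dstStride
                                  + (size_t)to.x * bytesPerPixel;
        memcpy(dstRow, srcRow, (size_t)from.size.width * bytesPerPixel);
    }
}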
As far as I could tell, every time you call CGImageCreateWithImageInRect(), it decodes the entire PNG file into a raw bitmap, then copies the desired region of the bitmap to the destination context.
So I've read that StretchBlt can mirror images horizontally and/or vertically by negating the nWidthSrc/Dest and nHeightSrc/Dest parameters. I'd like this functionality without the performance overhead of a StretchBlt. I tried the same technique with BitBlt but it didn't work.
Is there any way to mirror an image with something as simple as BitBlt, without the overkill of a StretchBlt? Or will StretchBlt not affect performance if the source and destination sizes are the same?
BitBlt will only perform raster operations (OR, XOR, etc.) on the individual pixels in question; it will not resize or mirror the image in any way. That is exactly what StretchBlt is for, and StretchBlt (compared to any other graphics resizing operation) is insanely fast, as in most cases it can use the graphics card to accelerate its performance.
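For reference, a horizontal mirror with StretchBlt is just a same-size blit with the source width negated (a sketch; hdcSrc, hdcDest, width and height are illustrative):

// Mirror the whole of hdcSrc horizontally into hdcDest (both width x height).
// With a negative source width, the source origin moves to the right-hand edge.
StretchBlt(hdcDest, 0, 0, width, height,
           hdcSrc, width - 1, 0, -width, height,
           SRCCOPY);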
All Win32 functions are probably going to be extremely optimized.
What makes you think StretchBlt will be a big performance hit?
Have you profiled your application using StretchBlt?
You could reverse all of the bitmap data yourself and see if you can do better than StretchBlt.
Here's a link that might help you out:
http://www.codeguru.com/cpp/g-m/bitmap/specialeffects/article.php/c1739
To mirror an image you just need to loop through the pixels in reverse order. Such as if you want to mirror horizontally you just need to do the following:
expand image canvas to double the size
start at the bottom of the image and work your way up, writing the pixels into the mirrored area from the top down.
do step 3 from left to right.
I don't know what language you are using, but most of them allow you to manipulate the pixels or bits on an individual basis using GDI.
There's no way you are going to be more efficient than StretchBlt, unless you know some extra information about the image (e.g., there is a border, so you don't have to flip certain pixels).