Drawing an image with CoreGraphics on the Retina iPad is slow

In my iPad app, I am rendering to an offscreen bitmap, and then drawing the bitmap to the screen. (This is because I want to re-use existing bitmap rendering code.) On the iPad 2, this works like a charm, but on the new iPad with Retina display, drawing the bitmap is really slow, even though its resolution is still the same.
To draw the bitmap, we use the regular Quartz 2D functions: CGImageCreate with a data provider created by CGDataProviderCreateWithData, 32-bit RGBA format with kCGImageAlphaNoneSkipLast. In the UIView that displays the bitmap, in drawRect:, we use CGContextDrawImage to draw it to the context returned by UIGraphicsGetCurrentContext.
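For reference, a minimal sketch of the setup described above (bitmapData, the dimensions, and the view are assumptions for illustration):

    // bitmapData is your existing RGBA pixel buffer; width/height are illustrative.
    size_t width = 1024, height = 768, bytesPerRow = width * 4;
    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, bitmapData, height * bytesPerRow, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
                                     kCGImageAlphaNoneSkipLast, provider,
                                     NULL, false, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);

    // Inside -drawRect: of the view:
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextDrawImage(ctx, self.bounds, image);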
Note that I'm not even trying to draw at double resolution: for now I'm fine with the same resolution as I was using on the iPad 2. It looks like CoreGraphics is internally doubling the pixels, and then sending that to the GPU, even though the CGImage that I'm making should be fine for passing to the GPU directly. Any ideas?

It looks like CoreGraphics is internally doubling the pixels, and then sending that to the GPU,
Pretty much. More accurately (in spirit at least):
1) UIKit makes a CGBitmapContext the size of your view's bounds, in device pixels
2) It makes that context the current context
3) You draw your CGImage into that context
   ... so CG has to rescale the source image, and touch all of the destination pixels
4) After you're done drawing, UIKit makes a CGImage from the bitmap context and assigns it to the view's layer's contents
the CGImage that I'm making should be fine for passing to the GPU directly.
If you want that to happen, you need to tell the system to do that, by cutting out some of the steps above.
(There is no link between UIKit, CoreAnimation, and CoreGraphics that provides a "fast path" like you are expecting.)
The easiest way would be to make a UIImageView, and set its image to a UIImage wrapping your CGImageRef.
Or, set your view.layer.contents to your CGImageRef. (And make sure not to override -drawRect:, not to call -setNeedsDisplay, and make sure contentMode is not UIViewContentModeRedraw. It's easier to just use UIImageView.)
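A minimal sketch of both options (cgImage and myView are placeholders):

    // Option 1: wrap the CGImageRef in a UIImage and let UIImageView display it.
    UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
    UIImageView *imageView = [[UIImageView alloc] initWithImage:uiImage];
    [self.view addSubview:imageView];

    // Option 2: hand the CGImageRef to the layer directly
    // (requires QuartzCore; the view must not override -drawRect:).
    myView.layer.contents = (__bridge id)cgImage;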

Related

Direct2D: smooth image movement?

I'm using Direct2D to draw a bitmap to the window, and that's working fine, but when I move the image (continually, with joystick or key input, or just by itself), it doesn't move smoothly. It looks like the image's pixels are not blended into the surrounding pixels: the most dominant colors remain, and the others are discarded. I create the bitmap using WIC (Windows Imaging Component).
I didn't want to post code because it's just the most basic stuff: window creation, Direct2D initialization, bitmap creation and drawing, all taken from MSDN, most of it copy-pasted. If you think you need any code to help me, ask and I'll post it.
What I want:
Smooth movement of the image on screen.
What I've tried:
- void SetAntialiasMode(D2D1_ANTIALIAS_MODE antialiasMode) - tried all options
- setting the D2D1_BITMAP_INTERPOLATION_MODE passed to the DrawBitmap method to each of the available modes
Thanks in advance!

What is the correct way to optimise scrolling performance of NSImageView and NSView

I need to improve the scrolling performance of a view for annotating on top of an image.
Currently I have the following:
- annotationView (custom NSView)
- imageView (NSImageView)
- contentView (custom NSView)
- clipView (NSClipView)
- scrollView (NSScrollView)
The images are quite large PDFs and PNGs and scrolling is poor unless I make the imageView layer backed, which I am just doing in Interface Builder. Scrolling is then pretty smooth.
However, the PNG images draw over everything on top of the imageView, whereas the PDF images correctly remain in the background.
Why is this and how can I fix that?
To get even better performance I would like to make the annotationView layer backed as well, but if I do that the entire view becomes black, with the exception of the annotations being drawn on the annotationView. How can I make this layer-backed view transparent but still allow shapes to be drawn on it? It seems I can make it transparent, but then everything becomes transparent, including the drawn shapes.
Is there a better way to achieve this? The annotations are simply shapes and text that need to be placed at specific positions over the image which I am currently just drawing in response to mouse positions.
The short answer is to use layer-backed views and draw with CGContext rather than the simpler NSView drawing APIs.
Unlike iOS, macOS does not use GPU-based rendering by default: its high-level drawing API runs on the CPU rather than the GPU.
So in my case I switched everything to layer-backed NSViews - you can set this in Interface Builder or simply add 'wantsLayer = true' in the NSView initialisation code (init()).
Avoid NSImageView; instead use a layer-backed view and set layer.contents to an NSImage. You may also have to set the layer's background colour, or some areas of the background may not be cleared when scrolling.
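In Objective-C the same setup looks roughly like this (frame and image are placeholders):

    // Layer-backed view displaying an image through its backing layer.
    NSView *imageHost = [[NSView alloc] initWithFrame:frame];
    imageHost.wantsLayer = YES;                  // opt in to layer backing
    imageHost.layer.contents = image;            // on macOS a CALayer accepts an NSImage here
    imageHost.layer.backgroundColor = NSColor.whiteColor.CGColor; // avoid uncleared areas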
This works for me - I have big PDF images in the background (building layouts) and a layer-backed view on top of that for placing annotations.
Scrolling around is now buttery smooth, and images seem to load instantly.
For the most part it's pretty easy - just set things up right from the start and save yourself a lot of headaches.

GLKit: printing text on a GLKView without using UIImages

I have an app - a small game using OpenGL ES with GLKit.
Now I'm wondering how it works when I want to draw text on my screen (if that is possible). How can I do it?
I draw all of my game objects using images (wrapped in a kind of sprite); they can be scaled, moved, and rotated, and everything works fine.
But figuring out how to print text on that GLKView has landed me deep in problems ^^
I don't want to use UIImages, because I also don't know how to present UIImages on a GLKView.
There are a number of ways to do what you want:
1) Have an image with all the text glyphs you need in it. For example, if your application is in English, you'd have the 26 uppercase and 26 lowercase letters in the image. Upload that texture to the GPU and use the proper texture coordinates or glTexSubImage2D() to pull out the glyphs you need. (It's not clear to me if this is what you meant by not wanting a UIImage. It doesn't have to be a UIImage, though that's probably easiest.)
2) Every time you need to display text, draw it on the CPU on the fly and upload the entire word, phrase, or sentence as a texture. You could create a CGBitmapContext and use Core Graphics to draw the text into it, then upload it using glTexImage2D() - see the sketch after this list.
3) Get the individual glyphs out of the fonts and draw directly using the bezier curves that make up the glyphs. This allows for 3D extrusion, too. However, this option is the most time-consuming to code and probably the least performant. It also involves dealing with the many small problems that fonts have (like degenerate segments and incorrect winding orders). If you want to go down this path, Core Text may help.
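A sketch of option 2, assuming an iOS/GLKit setup (the method name, font handling, and sizes are made up for illustration):

    // Draws a string into a Core Graphics bitmap and uploads it as a GL texture.
    - (GLuint)textureFromString:(NSString *)text font:(UIFont *)font size:(CGSize)size {
        size_t width = (size_t)size.width, height = (size_t)size.height;
        void *data = calloc(width * height, 4);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, width * 4,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);

        UIGraphicsPushContext(ctx);              // let UIKit string drawing target our context
        CGContextTranslateCTM(ctx, 0, height);   // flip to UIKit's top-left origin
        CGContextScaleCTM(ctx, 1, -1);
        [text drawAtPoint:CGPointZero
           withAttributes:@{ NSFontAttributeName : font,
                             NSForegroundColorAttributeName : UIColor.whiteColor }];
        UIGraphicsPopContext();

        GLuint texture;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, data);

        CGContextRelease(ctx);
        free(data);
        return texture;
    }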
There are at least two clean ways to do this, depending on your requirements.
While the documentation advises against compositing over a CAEAGLLayer (GLKView), it works quite well, at least in recent iOS versions, when transparent content is layered on top of the CAEAGLLayer. For example, try dropping a UITextView, with opaque set to NO and a clear background color, on top of a GLKView in your Storyboard (e.g. in the Apple GLKit template). In my test on an iPhone 5, frame rendering time remained around 1 ms, even while scrolling in the text view. If your text is static, or you don't want the user to interact with it, use a CATextLayer as a child layer of your EAGLLayer instead of a view.
The second approach is to render the text into a texture. You can then composite the text onto your view by disabling the depth buffer and rendering the texture on a full screen rectangle. Look at UIGraphicsBeginImageContextWithOptions to see how to render to an offscreen image with Quartz. UIGraphicsGetImageFromCurrentImageContext allows you to retrieve the UIImage to use as a texture.
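For example (the sizes and text here are illustrative; requires GLKit):

    // Render the text offscreen with Quartz, then let GLKTextureLoader upload it.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(256, 64), NO, 0.0);
    [@"Score: 42" drawAtPoint:CGPointZero
               withAttributes:@{ NSFontAttributeName : [UIFont boldSystemFontOfSize:24],
                                 NSForegroundColorAttributeName : UIColor.whiteColor }];
    UIImage *textImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSError *error = nil;
    GLKTextureInfo *texture = [GLKTextureLoader textureWithCGImage:textImage.CGImage
                                                           options:nil
                                                             error:&error];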

drawRect makes scrolling slow in UIScrollView

I've got a UIScrollView with a (custom) UIView inside of it.
In my scrollViewDidScroll-method I'm calling
[myCustomView setNeedsDisplay];
This makes the scrolling noticeably slower, if I'm implementing the drawRect: method in my custom UIView - even if it's completely empty.
As soon as I delete the drawRect: method, it's smooth again.
I have absolutely no idea why... do any of you?
I hate drawRect too.
"It's because of the way hardware-accellerated animation works in Cocoa.
If you don't have a custom drawRect method, the system caches the pixels for your view in video memory. When it needs to redraw them, it just blits the pixels onto the screen.
If you have a custom drawRect method, the system instead has to call your drawRect method to render the pixels in main memory, then copy those pixels into video memory, THEN draw the pixels to the screen, for each and every frame. The docs say to avoid drawRect if you can.
I think main memory and video memory are shared on most/all iOS devices, but the system still has a lot more work to do when you implement a drawRect method for a view.
You would probably be better served to use an OpenGL layer and render to that with OpenGL calls, since OpenGL talks directly with the display hardware."
link to that quote: http://www.iphonedevsdk.com/forum/iphone-sdk-development/80637-drawrect-makes-scrolling-slow-uiscrollview.html
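One way to act on that advice (a sketch, not from the linked thread): render the view's content to an image once, cache it in the layer, and stop calling setNeedsDisplay from scrollViewDidScroll.

    // Render once into an offscreen image and cache it in the layer, instead of
    // overriding -drawRect: and redrawing on every scroll tick.
    UIGraphicsBeginImageContextWithOptions(myCustomView.bounds.size, NO, 0.0);
    // ... existing Quartz drawing code for the view's content goes here ...
    UIImage *cached = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    myCustomView.layer.contents = (__bridge id)cached.CGImage;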

Converting vector image to Quartz 2D code

Is it possible to convert a vector image into Quartz 2D code (Mac) so that the image can be drawn programmatically?
Not easily; you would have to write all the code yourself to do this. You might like to have a look at the Opacity image editor, which allows you to generate images and export them as Quartz or Cocoa drawing code.
What kind of vector image?
NSImage loads PDFs the same way it loads bitmaps.
NSImage drawing is Quartz 2D drawing, but if you meant that you need a CGImage, NSImage in 10.6 has a method for getting one: -[NSImage CGImageForProposedRect:context:hints:]. However, CGImage is explicitly bitmap-based, unlike NSImage. The parameters you pass to that method determine how the art is rasterized; it will be rasterized the same way it would be if drawn into the passed rect in the passed context.
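For example (the file path is illustrative):

    // Rasterize a vector NSImage (e.g. loaded from a PDF) into a CGImage on 10.6+.
    NSImage *art = [[NSImage alloc] initWithContentsOfFile:@"/path/to/drawing.pdf"];
    NSRect rect = NSMakeRect(0, 0, art.size.width, art.size.height);
    CGImageRef cgImage = [art CGImageForProposedRect:&rect context:nil hints:nil];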
You can use Vector code (http://www.vectorcodeapp.com) to generate Core Graphics code, which you can use programmatically, or even use to generate PostScript.
