How to synchronize OpenGL drawing with UIKit updates - opengl-es

In our app we have UIScrollView above CAEAGLLayer. UIScrollView contains some UIViews (red rectangles). In CAEAGLLayer we draw white rectangles. Centers of white rectangles are the same as the centers of red rectangles. When UIScrollView scrolls, we update positions of white rectangles in CAEAGLLayer and render them.
We are getting the expected result: centers of white rectangles are always the same as centers of red rectangles.
But we can't synchronize updates of the CAEAGLLayer with the movement of the views inside UIScrollView.
We have some kind of mistiming – red rectangles lag behind white rectangles.
Roughly speaking, we really want to make the CAEAGLLayer lag together with the UIScrollView.
We have prepared sample code. Run it on the device and scroll, and you will see that white rectangles (drawn by OpenGL) are moving faster than red ones (regular UIViews). OpenGL is being updated within scrollViewDidScroll: delegate call.
https://www.dropbox.com/s/kzybsyu10825ikw/ios-opengl-scrollview-test.zip
It behaves the same even in iOS Simulator, just take a look at the video: http://www.youtube.com/watch?v=1T9hsAVrEXw
Red = UIKit, White = OpenGL
Code is:
- (void)scrollViewDidScroll:(UIScrollView *)aScrollView {
    // reuses red squares that have gone outside the bounds
    [overlayView updateVisibleRect:CGRectMake(...)];
    // draws white squares using OpenGL under the red squares
    [openGlView updateVisibleRect:CGRectMake(...)];
}
Edit:
The same issue can easily be demonstrated in a much simplified sample. The working xcodeproj can be found at:
https://www.dropbox.com/s/vznzccibhlf8bon/simple_sample.zip
The sample project basically draws and animates a set of WHITE squares in OpenGL and does the same for a RED set of UIViews. The lagging can easily be seen between the red and white squares.

In iOS 9, CAEAGLLayer gained a presentsWithTransaction property that synchronizes the two.
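As a minimal sketch (assuming the openGlView from the question is backed by a CAEAGLLayer):

```objectivec
// iOS 9+: present the rendered GL frame inside the current Core
// Animation transaction, so it commits together with UIKit changes.
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)openGlView.layer;
eaglLayer.presentsWithTransaction = YES;
```

Note that this can stall your GL thread until the transaction commits, so it trades some throughput for the synchronization.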

In fact, you can't synchronize them using the current public APIs. MobileMaps.app and Apple's Map Kit use a private CAEAGLLayer property, asynchronous, to work around this issue.
Here is related radar: http://openradar.appspot.com/radar?id=3118401

After digging around a little I'd like to extend my previous answer:
Your OpenGL rendering is done immediately, from within the scrollViewDidScroll: method, while the UIKit drawing is performed later, during the normal CATransaction updates from the run loop.
To synchronize the UIKit updates with the OpenGL rendering, just enclose both in one explicit transaction and flush it to force UIKit to commit the changes to backboardd immediately:
- (void)scrollViewDidScroll:(UIScrollView *)aScrollView {
    [CATransaction begin];
    [overlayView updateVisibleRect:CGRectMake(...)];
    [openGlView updateVisibleRect:CGRectMake(...)];
    [CATransaction flush]; // trigger a UIKit and Core Animation graphics update
    [CATransaction commit];
}

In the absence of a proper answer, I'd like to share my thoughts:
There are two ways of drawing involved: Core Animation (UIKit) and OpenGL. In the end, all drawing is done by OpenGL but the Core Animation part is rendered in backboardd (or Springboard.app, before iOS 6) which serves as a kind of window server.
To make this work, your app's process serializes the layer hierarchy, and the changes to its properties, and passes the data over to backboardd, which in turn renders and composites the layers and makes the result visible.
When mixing OpenGL with UIKit, the CAEAGLLayer's rendering (which is done in your app) has to be composited with the rest of the Core Animation layers. I'm guessing that the render buffer used by the CAEAGLLayer is somehow shared with backboardd to provide a fast way of transferring your application's rendering. That mechanism does not necessarily have to be synchronized with the updates from Core Animation.
To solve your issue it would be necessary to find a way of synchronizing the OpenGL rendering with the Core Animation layer serialization and transfer to backboardd. Sorry for not being able to present a solution but I hope these thoughts and guesses help you to understand the reason for the problem.

I am trying to solve exactly the same issue. I tried all the methods described above, but nothing was acceptable.
Ideally, this could be solved by accessing Core Animation's final compositing GL context, but we can't access that private API.
Another way is transferring the GL result to CA. I tried this by reading the frame-buffer pixels and dispatching them to a CALayer, which was too slow: over 200 MB/sec of data has to be transferred. I am now trying to optimize this, because it is the last approach I can try.
Update
Still heavy, but reading the frame-buffer and setting the result as a CALayer's contents works and shows acceptable performance on my iPhone 4S. Even so, over 70% of the CPU load goes to just this reading and setting operation. With glReadPixels in particular, I think most of that is time spent waiting for the data to come back from GPU memory.
Update2
I gave up on the last method. Its cost is almost purely pixel transfer, and it is still too slow. I decided to draw all dynamic objects in GL; only some static popup views will be drawn with CA.
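For reference, the read-back path described in the first update amounts to roughly this (width, height, and the target layer are assumed, and the GL color format must match the bitmap format):

```objectivec
// Read the rendered frame back from the GL framebuffer...
GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// ...wrap it in a CGImage and hand it to Core Animation.
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8,
                                         width * 4, space,
                                         kCGImageAlphaPremultipliedLast);
CGImageRef image = CGBitmapContextCreateImage(ctx);
targetLayer.contents = (__bridge id)image;
CGImageRelease(image);
CGContextRelease(ctx);
CGColorSpaceRelease(space);
free(pixels);
```

The per-frame malloc, read-back, and copy are exactly the ~200 MB/sec transfer cost described above, which is why the approach was abandoned.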

In order to synchronize UIKit and OpenGL drawing, you should try to render where Core Animation expects you to draw, i.e. something like this:
- (void)displayLayer:(CALayer *)layer
{
    [self drawFrame];
}

- (void)updateVisibleRect:(CGRect)rect {
    _visibleRect = rect;
    [self.layer setNeedsDisplay];
}

Related

Optimal Fade In and Out of NSView

I am currently writing a presentation application that shows images and video full screen on multiple monitors. The images and videos display one after the other and fade in and out.
At the moment I have this working correctly, but the fades are not smooth; there is a little stutter.
My code currently animates the alpha on each of the components being shown.
[[self.videoView animator] setAlphaValue:1.0f];
Are there ways of doing this that will improve performance on OS X?
For example, when using cocos2d on iPhone it is more efficient to fade a color layer up and down over the content than it is to fade the content itself (i.e. animate the alpha on the simplest component). However, I can't see anything in Cocoa that would let me simplify the calculations it is doing (i.e. there is no simple concept of a flat-color layer).
I hope that's clear! Thank you.
Making all the NSViews in the hierarchy layer-backed makes a huge improvement to the performance of these transitions.
[self setWantsLayer:YES];
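Building on that one-liner, a sketch of the fade with an explicit duration (the 0.5-second length is an assumption):

```objectivec
// With the views layer-backed ([self setWantsLayer:YES]), the fade
// runs on the Core Animation compositor instead of redrawing the
// view contents each frame; grouping gives an explicit duration.
[NSAnimationContext runAnimationGroup:^(NSAnimationContext *context) {
    context.duration = 0.5; // assumed fade length
    [[self.videoView animator] setAlphaValue:1.0];
} completionHandler:nil];
```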

Xcode Cocos2D How To Add 50 Sprites With No Lag

I need to add about 50 sprites to the screen then redraw them. When I try and add them like this:
[self addChild:Img];
This creates a lot of lag.
I have also tried creating a CCLayer and adding all of the images to the layer, but I get the same amount of lag. How can I add all of these sprites and reduce the lag? Most games probably have more than 50 sprites per screen.
If all or most of your sprites share the same texture or image, you could use one CCSpriteBatchNode for all of those CCSprites. This will save memory and draw calls.
You would do something like the following:
1) Define a CCSpriteBatchNode.
2) Add it as a child of your layer.
3) Define a frame from the batch node's texture.
4) Set it as the displayFrame for each sprite.
Use a CCSpriteSheet. If you haven't used sprite sheets yet, think of them as giant images containing all of your sprites. They come with a file that specifies the boundaries of each individual sprite so you can pull them out in code when you need them.
The reason these are such a good idea is that cocos2d is optimized for them. If you use sprites from a sprite sheet properly, cocos2d makes one OpenGL ES draw call per sprite sheet rather than one per sprite.
In short: it's faster, especially when you have a lot of sprites!
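A sketch of the batch-node approach (the file name, frame rect, and random positions are assumptions):

```objectivec
// One shared texture -> one OpenGL ES draw call for all 50 sprites.
CCSpriteBatchNode *batch =
    [CCSpriteBatchNode batchNodeWithFile:@"sprites.png"];
[self addChild:batch];

for (int i = 0; i < 50; i++) {
    CCSprite *sprite = [CCSprite spriteWithTexture:batch.texture
                                              rect:CGRectMake(0, 0, 32, 32)];
    sprite.position = ccp(arc4random_uniform(480), arc4random_uniform(320));
    [batch addChild:sprite]; // children must use the batch's texture
}
```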

Cocoa / CoreGraphics / Quartz - borderless Quicktime X like window with rounded edges

I am developing a document-based application for Mac OS X. It's a kind of media player, but instead of playing audio or video files it is supposed to open text files containing metadata that specifies OpenGL animations. I would like to mimic Apple's QuickTime X window style. This means I have to do all the window drawing myself, because Cocoa has no appropriate window style.
There is one thing that gives me headaches: the rounded corners usually found on Mac OS X windows. I tried using the borderless window mask and working some CGS magic; there are some private Apple headers that allow window shaping, but they are of course undocumented. I was able to cut rectangular holes in my window's edges, but I couldn't figure out how Apple achieves rounded corners.
Creating a transparent window and drawing the frame myself does not work, because an OpenGL viewport is always rectangular, and the only way to change that is to turn on NSOpenGLCPSurfaceOpacity for alpha transparency and use the stencil buffer or shaders to cut out the edges, which seems like a hell of a lot of overhead.
If I put an OpenGL view into a standard Cocoa window with a title bar, the bottom edges are rounded. It seems this happens at the NSThemeFrame stage of the view hierarchy. Any ideas how this is done?
Use a layer-backed view, and do your drawing in the CALayer on an invisible window. Layers include automatic handling of rounded corners and borders.
Background for CALayer is in the Core Animation Programming Guide. To create a layer for NSView, you need to call [view setWantsLayer:YES]. You would create a CAOpenGLLayer and assign it to the view using setLayer:.
See CALayerEssentials for sample code demonstrating how to use CAOpenGLLayer among other layer types.
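A sketch of that setup (the subclass name and corner radius are assumptions; note that for a layer-hosting NSView, setLayer: should be called before setWantsLayer:):

```objectivec
// MyGLLayer is a hypothetical CAOpenGLLayer subclass that overrides
// -drawInCGLContext:pixelFormat:forLayerTime:displayTime: to render.
CAOpenGLLayer *glLayer = [MyGLLayer layer];
glLayer.asynchronous = YES;   // let Core Animation drive redraws
glLayer.cornerRadius = 8.0;   // rounded corners handled by CA
glLayer.masksToBounds = YES;

[view setLayer:glLayer];      // layer-hosting: layer first...
[view setWantsLayer:YES];     // ...then opt in to layer backing
```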
Since Rob's suggestion didn't work and no one else contributed to the discussion, I settled on using the stencil buffer to crop the window's corners. I did this by creating a texture from the window's background and rendering it into the stencil buffer, discarding all transparent pixels. Looks fine, but it is slow when resizing the window :/

How can I improve CGContextFillRect and CGContextDrawImage performance

Those two functions are currently my bottleneck. I am working with very large bitmaps.
How can I improve their performance?
You could cache smaller versions of your bitmaps, created before drawing the first time, and then simply draw the downscaled samples instead of the full-blown 15-megapixel originals.
Then again, make sure you are only drawing what is necessary, i.e. in -drawRect: draw only inside the passed rect (unless absolutely necessary), and try not to perform drawing outside of that method.
If you're drawing large background images with content in the foreground that moves, consider using a layer-backed NSView, adding a layer, and setting its background image. You can then draw your content in other layers (or layer-backed NSViews) above the background layer, and the view will never need to redraw the background image because it is stored in the GPU's texture memory. Your current image is too large for a single CALayer (CALayers are limited to the maximum OpenGL texture size of 2048 x 2048), so you will probably need to break it up into tiles.
Otherwise, as @iolo mentioned, you should make sure that you only redraw the parts of the view that really need updating.
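A sketch of drawing only the dirty area (the Tile class and tilesIntersectingRect: helper are hypothetical):

```objectivec
// Invalidate only the region that changed, not the whole view:
//   [self setNeedsDisplayInRect:changedRect];
// Then clip the expensive calls to the rect that was passed in:
- (void)drawRect:(NSRect)dirtyRect {
    CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
    CGContextFillRect(ctx, NSRectToCGRect(dirtyRect));
    // Draw only the pre-cut tiles intersecting dirtyRect instead of
    // the full 15-megapixel bitmap (tilesIntersectingRect: is assumed
    // to query a tile cache built in advance).
    for (Tile *tile in [self tilesIntersectingRect:dirtyRect]) {
        CGContextDrawImage(ctx, NSRectToCGRect(tile.frame), tile.image);
    }
}
```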

Best way to speed up multiple uses of "CGContextDrawRadialGradient" in drawRect:?

I couldn't post the image, but I use CGContextDrawRadialGradient to draw a shaded blue ball (~40 pixels in diameter), its shadow, and a "pulsing" white ring around the ball (inner and outer gradients on the ring). The ring starts at the edge of the blue ball and expands outward (its radius grows with a timer), fading as it expands like a radio wave.
It looks great running in the simulator but runs incredibly slowly on the iPhone 4. The ring should pulse in about a second (as it does in the simulator) but takes 15-20 seconds on the phone. I have been reading a little about CALayer and CGLayer and some material on gradient animation, but it isn't clear what I should be using for the best performance.
How do I speed this up? Should I put the ball on one layer and the expanding ring on another? If so, how do I know which layer to update in drawRect:?
I appreciate any guidance. Thanks.
The only way to speed something like that up is to pre-render it. Determine how many image frames you need to make it look good and then draw each frame into a context you created with CGBitmapContextCreate and capture the image using CGBitmapContextCreateImage. Probably the easiest way to animate the images would be to set the animationImages property of a UIImageView (although there are other options, like CALayer animations).
The newest Apple docs finally mention which pixel formats are supported in iOS so make sure to reference those when creating your bitmap context.
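A sketch of the pre-rendering loop (the frame count, image size, and the per-frame drawing are assumptions standing in for the questioner's gradient code):

```objectivec
NSMutableArray *frames = [NSMutableArray array];
CGSize size = CGSizeMake(120.0f, 120.0f); // assumed ring bounds
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();

for (int i = 0; i < 30; i++) { // 30 frames for a one-second pulse
    CGContextRef ctx = CGBitmapContextCreate(NULL, size.width, size.height,
                                             8, 0, space,
                                             kCGImageAlphaPremultipliedLast);
    // ... draw the ball, shadow, and ring for frame i here, using
    // CGContextDrawRadialGradient exactly as in the drawRect: code ...
    CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
    [frames addObject:[UIImage imageWithCGImage:cgImage]];
    CGImageRelease(cgImage);
    CGContextRelease(ctx);
}
CGColorSpaceRelease(space);

// Play the captured frames back; no per-frame Core Graphics work.
imageView.animationImages = frames;
imageView.animationDuration = 1.0; // one pulse per second
[imageView startAnimating];
```

The gradients are now rasterized once up front, so the per-frame cost at runtime is just blitting a cached image.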
