Is there any way to improve UIView beginAnimations/commitAnimations performance? - performance

I am resizing a control via
[UIView beginAnimations:nil context:NULL];
[self setTransform:CGAffineTransformMakeScale(0.5f, 0.5f)];
[UIView commitAnimations];
The animation looks beautiful on the iPhone simulator but on my 2G test device it is unbearably slow and choppy. How can I improve the animation on an older iPhone? Or do I have to disable animated transitions on older devices? Thanks.

My problem was too many non-opaque UIViews on the screen. I radically reduced the number through some redesign and it is very zippy now.
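A minimal sketch of the kind of change that helped, i.e. marking views opaque wherever possible so the compositor can skip blending them (the view name is illustrative, not from the original code):

    // A view that is fully covered by a solid background can be marked opaque, so the
    // compositor doesn't have to blend it against whatever is underneath on every frame.
    self.panelView.opaque = YES;                          // panelView is illustrative
    self.panelView.backgroundColor = [UIColor whiteColor];
    self.panelView.alpha = 1.0f;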

Related

How do cornerRadius and offscreen rendering affect performance in iOS 9?

Setting cornerRadius and masksToBounds will lead to offscreen rendering, so that it will affect the performance when scrolling the table. Though this has been mentioned a lot before, does anyone ever make an experiment about this in iOS 9?
Here is my demo. It seems that setting cornerRadius doesn't have any influence on scrolling performance: on my iPhone 6 the average frame rate is still 57 or 58 fps.
Does iOS 9 make some optimization here, or did I do something incorrectly?
By the way, I noticed that when a UIImageView is square (width = height), setting its corner radius does not lead to offscreen rendering. Why does that happen?
You have misunderstood what the warning about cornerRadius and masksToBounds is about. You are merely setting the cornerRadius and masksToBounds of some image views inside your cell. Those image views are not what is being animated when the table view is scrolled: it is the cells that are animated. The interior of the cell is already completely composited; it does not have to be recomposited on every frame of the animation.
If you had set the cornerRadius and masksToBounds of every cell, you might see some more severe effect on scrolling.
It seems that I simply hadn't added enough images to the cell. When the number of displayed images reaches 30 or more, the effect on performance becomes apparent: the average frame rate drops to 33 fps.
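For reference, a rough sketch of the kind of cell configuration being measured here (PhotoGridCell and its thumbnailViews property are illustrative, not from the original demo); it is the rounding of each interior image view that can trigger an offscreen pass per image:

    - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
        PhotoGridCell *cell = [tableView dequeueReusableCellWithIdentifier:@"PhotoGridCell"
                                                              forIndexPath:indexPath];
        // Rounding the interior image views, not the cell itself. With a handful of
        // thumbnails the cost is invisible; with ~30 per cell the extra offscreen
        // rendering starts to show up as dropped frames while scrolling.
        for (UIImageView *thumb in cell.thumbnailViews) {
            thumb.layer.cornerRadius = 8.0f;
            thumb.layer.masksToBounds = YES;
        }
        return cell;
    }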

Optimal Fade In and Out of NSView

I am currently writing a presentation application that shows images and video, full screen, on multiple monitors. The images and videos display one after the other and fade in and out.
At the moment I have this working correctly, but the fades are not smooth; there is a little stutter.
My code currently animates the alpha on each of the components being shown.
[[self.videoView animator] setAlphaValue:1.0f];
Are there ways of doing this that will improve performance on OSX?
For example, when using cocos2d on iPhone it is more efficient to fade a flat colour layer up and down over the content than it is to fade the content itself (i.e. to animate the alpha of the simplest possible component). However, I can't see anything in Cocoa that would let it simplify the calculations it is doing (i.e. there is no simple concept of a flat-colour layer).
I hope that's clear! Thank you.
Making all of the NSViews in the hierarchy layer-backed makes a huge improvement to the performance of these transitions.
[self setWantsLayer:YES];
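A minimal sketch of that combination, assuming the container view enables layer backing and the fade is then driven through the animator proxy (videoView and the 0.5 s duration are illustrative):

    // Turn on layer backing once for the container; subviews get backing layers too,
    // so the animator can fade them on the GPU instead of redrawing the view hierarchy.
    [self setWantsLayer:YES];

    // Fade the video view in, wrapped in an animation group to control the duration.
    [NSAnimationContext beginGrouping];
    [[NSAnimationContext currentContext] setDuration:0.5];
    [[self.videoView animator] setAlphaValue:1.0f];
    [NSAnimationContext endGrouping];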

How to synchronize OpenGL drawing with UIKit updates

In our app we have a UIScrollView above a CAEAGLLayer. The UIScrollView contains some UIViews (red rectangles). In the CAEAGLLayer we draw white rectangles. The centers of the white rectangles coincide with the centers of the red rectangles. When the UIScrollView scrolls, we update the positions of the white rectangles in the CAEAGLLayer and render them.
We are getting the expected result: centers of white rectangles are always the same as centers of red rectangles.
But we can't synchronize updates of the CAEAGLLayer with the movement of the views inside UIScrollView.
We have some kind of mistiming – red rectangles lag behind white rectangles.
Roughly speaking, we really want to make the CAEAGLLayer lag together with the UIScrollView.
We have prepared sample code. Run it on the device and scroll, and you will see that white rectangles (drawn by OpenGL) are moving faster than red ones (regular UIViews). OpenGL is being updated within scrollViewDidScroll: delegate call.
https://www.dropbox.com/s/kzybsyu10825ikw/ios-opengl-scrollview-test.zip
It behaves the same even in iOS Simulator, just take a look at the video: http://www.youtube.com/watch?v=1T9hsAVrEXw
Red = UIKit, White = OpenGL
Code is:
- (void)scrollViewDidScroll:(UIScrollView *)aScrollView {
    // reuses red squares that have moved outside the bounds
    [overlayView updateVisibleRect:CGRectMake(...)];
    // draws white squares using OpenGL underneath the red squares
    [openGlView updateVisibleRect:CGRectMake(...)];
}
Edit:
The same issue can easily be demonstrated in a much simplified sample. The working xcodeproj can be found at:
https://www.dropbox.com/s/vznzccibhlf8bon/simple_sample.zip
The sample project basically draws and animates a set of WHITE squares in OpenGL and does the same for a RED set of UIViews. The lagging can easily be seen between the red and white squares.
In iOS 9, CAEAGLLayer has a presentsWithTransaction property that synchronizes the two.
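A minimal sketch of how that property is used, assuming the sample's openGlView is backed by a CAEAGLLayer (the property access on self is illustrative):

    // iOS 9 and later: make -presentRenderbuffer: participate in the current Core
    // Animation transaction, so the GL frame is committed together with the UIKit
    // layer changes made in the same runloop turn (e.g. inside scrollViewDidScroll:).
    CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.openGlView.layer;
    eaglLayer.presentsWithTransaction = YES;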
In fact, you can't synchronize them using the current public APIs. MobileMaps.app and Apple's Map Kit use a private CAEAGLLayer property, asynchronous, to work around this issue.
Here is related radar: http://openradar.appspot.com/radar?id=3118401
After digging around a little I'd like to extend my previous answer:
Your OpenGL rendering is done immediately from within the scrollViewDidScroll: method, while the UIKit drawing is performed later, during the normal CATransaction updates from the run loop.
To synchronize the UIKit updates with the OpenGL rendering, enclose both in one explicit transaction and flush it to force UIKit to commit the changes to backboardd immediately:
- (void)scrollViewDidScroll:(UIScrollView *)aScrollView {
    [CATransaction begin];
    [overlayView updateVisibleRect:CGRectMake(...)];
    [openGlView updateVisibleRect:CGRectMake(...)];
    [CATransaction flush]; // trigger a UIKit and Core Animation graphics update
    [CATransaction commit];
}
In the absence of a proper answer I'd like to share my thoughts:
There are two ways of drawing involved: Core Animation (UIKit) and OpenGL. In the end, all drawing is done by OpenGL but the Core Animation part is rendered in backboardd (or Springboard.app, before iOS 6) which serves as a kind of window server.
To make this work, your app's process serializes the layer hierarchy and any changes to its properties and passes the data over to backboardd, which in turn renders and composites the layers and makes the result visible.
When mixing OpenGL with UIKit, the CAEAGLLayer's rendering (which is done in your app) has to be composited with the rest of the Core Animation layers. I'm guessing that the render buffer used by the CAEAGLLayer is somehow shared with backboardd to provide a fast way of transferring your application's rendering. This mechanism does not necessarily have to be synchronized with the updates from Core Animation.
To solve your issue it would be necessary to find a way of synchronizing the OpenGL rendering with the Core Animation layer serialization and transfer to backboardd. Sorry for not being able to present a solution but I hope these thoughts and guesses help you to understand the reason for the problem.
I am trying to solve exactly the same issue. I tried all of the methods described above, but none of them was acceptable.
Ideally, this could be solved by accessing Core Animation's final compositing GL context, but that is a private API we can't access.
Another way is to transfer the GL result to Core Animation. I tried this by reading the frame-buffer pixels back and dispatching them to a CALayer, which was too slow: roughly 200 MB/sec of data has to be transferred. Now I am trying to optimize this, because it is the last approach left for me to try.
Update
It's still heavy, but reading the frame-buffer back and setting it as a CALayer's contents works and shows acceptable performance on my iPhone 4S. Even so, over 70% of the CPU load goes to this read-and-set operation alone, especially glReadPixels; I think most of that is just time spent waiting for the data to come back from GPU memory.
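Roughly, the readback path described above looks like this. It is only a sketch: it ignores the vertical flip of glReadPixels output, assumes an RGBA renderbuffer, and contentLayer is an illustrative name.

    static void releasePixels(void *info, const void *data, size_t size) {
        free((void *)data);   // free the buffer once Core Graphics is done with it
    }

    - (void)pushFrameToLayer {
        // Ask GL for the size of the currently bound renderbuffer.
        GLint width = 0, height = 0;
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);

        // The expensive part: glReadPixels stalls until the GPU has finished the frame,
        // and the whole frame is copied back to the CPU every time.
        size_t bytes = (size_t)width * (size_t)height * 4;
        void *pixels = malloc(bytes);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        // Wrap the pixels in a CGImage and hand it to Core Animation.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, bytes, releasePixels);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGImageRef image = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                         kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                         provider, NULL, false, kCGRenderingIntentDefault);

        self.contentLayer.contents = (__bridge id)image;   // contentLayer is illustrative

        CGImageRelease(image);
        CGColorSpaceRelease(colorSpace);
        CGDataProviderRelease(provider);
    }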
Update2
I gave up on the last method. Its cost is almost purely pixel transfer, and it is still too slow. I decided to draw every dynamic object in GL; only some static popup views will be drawn with Core Animation.
In order to synchronize UIKit and OpenGL drawing, you should try to render where Core Animation expects you to draw, i.e. something like this:
- (void)displayLayer:(CALayer *)layer {
    [self drawFrame];
}

- (void)updateVisibleRect:(CGRect)rect {
    _visibleRect = rect;
    [self.layer setNeedsDisplay];
}

Text blurry when running nonuniversal iPhone app on iPad

Basically I'm trying to "retina-fy" my game.
My application isn't universal; it's specifically for the iPhone. I would just like it to also look its best when running on the iPad.
I've created a simple method to load the high-resolution images when running on the iPad in scaled 2x mode, which is working.
However, my problem isn't the images; it's my UILabels and UIButtons. When I scale up, the text becomes slightly blurry.
They look fine at 1x scale; it's just the 2x scale on the iPad.
Is there a property I can set, or a way to redraw the UILabels/UIButtons, so that they are sharper?
When an iPhone app is running on an iPad and scaled 2x, you can't draw to each individual pixel. It is literally an enlarged 320x480-pixel screen (or 640x960 if you are using an iPad 3, which simulates an iPhone with a Retina display).
Hope that helps.

Animate many images on iPhone/iPad

I made a subclass of UIView and overrode the drawRect: method to draw JPG images. For the animation I'm using an NSTimer that fires 24 times per second, but the animation plays more slowly than I expected. How can I improve the frame rate? Please suggest a drawing approach that runs fast.
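For reference, a sketch of the setup being described, with the timer swapped for a CADisplayLink so redraws line up with display refreshes (class and property names are illustrative); pre-decoding the JPEGs rather than decoding them inside drawRect: is usually the bigger win:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    @interface FrameView : UIView                    // illustrative name for the drawing view
    @property (nonatomic, strong) NSArray *frames;   // pre-decoded UIImages
    @property (nonatomic) NSUInteger frameIndex;
    @end

    @implementation FrameView

    - (void)startAnimating {
        // CADisplayLink fires in step with the screen refresh, unlike NSTimer.
        CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                          selector:@selector(tick:)];
        link.frameInterval = 2;   // ~30 fps on a 60 Hz display, close to the desired 24
        [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    }

    - (void)tick:(CADisplayLink *)link {
        self.frameIndex = (self.frameIndex + 1) % self.frames.count;
        [self setNeedsDisplay];   // ask for one redraw per tick
    }

    - (void)drawRect:(CGRect)rect {
        [self.frames[self.frameIndex] drawInRect:self.bounds];
    }

    @end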
