I made a subclass of UIView and overrode the drawRect: method to draw JPG images. For the animation I'm using an NSTimer that fires 24 times per second, but the animation plays slower than I expected. How can I improve the frame rate? Please suggest drawing code that runs faster.
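For context, a common cause of this is that NSTimer is not synchronized with the display refresh, and decoding JPEGs inside drawRect: is expensive. A minimal sketch (with an assumed view class and a preloaded frames array; names are hypothetical) of driving the redraw with CADisplayLink, which fires in step with the screen, instead of an NSTimer:

```objc
// Sketch: drive redraws with CADisplayLink instead of NSTimer.
// Assumes frames are preloaded/decoded UIImages, so drawRect: only blits.
@interface FrameAnimationView : UIView
@property (nonatomic, strong) NSArray *frames;   // preloaded UIImages
@property (nonatomic) NSUInteger frameIndex;
@end

@implementation FrameAnimationView

- (void)startAnimating {
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(tick:)];
    link.frameInterval = 2; // every 2nd vsync: ~30 fps on a 60 Hz display
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)tick:(CADisplayLink *)link {
    self.frameIndex = (self.frameIndex + 1) % self.frames.count;
    [self setNeedsDisplay]; // only invalidate when a new frame is due
}

- (void)drawRect:(CGRect)rect {
    [self.frames[self.frameIndex] drawInRect:self.bounds];
}

@end
```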
In Unity I want to make an animation map.
I have created an animation map with 100 frame images.
The total size of the animation images is too large for Android games.
And if I use this animation in Unity, it takes time to sort the image records one by one, and of course it results in a large size.
What is the best way to make an animation from 100 frame images in Unity?
Thanks
Dennis
Tutorial on sprite sheet animation without a particle system:
http://www.strandedsoft.com/using-spritesheets-with-unity3d/
Try the Texture Sheet Animation module of the Particle System to emit a single particle that plays through a texture sheet animation:
https://docs.unity3d.com/Manual/PartSysTexSheetAnimModule.html
I am currently writing a presentation application that shows images and video full screen on multiple monitors. The images and videos display one after the other and fade in and out.
At the moment I have this working correctly, but the fades are not smooth; there is a little stutter.
My code currently animates the alpha on each of the components being shown.
[[self.videoView animator] setAlphaValue:1.0f];
Are there ways of doing this that will improve performance on OS X?
For example, when using cocos2d on iPhone it is more efficient to fade a color layer up and down over the content than it is to fade the content itself (i.e. animate the alpha on the simplest component). However, I can't see anything in Cocoa that would let it simplify the calculations it is doing (i.e. there is no simple concept of a flat-color layer).
I hope that's clear! Thank you.
Making all the NSViews in the hierarchy layer backed makes a huge improvement to the performance of these transitions.
[self setWantsLayer:YES];
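For example (a sketch; contentView and the duration are assumptions, and videoView is the view from the question), turn on layer backing once for the container, then fade through the animator proxy as before:

```objc
// Sketch: layer-back the whole hierarchy up front (descendants are
// backed too), then animate alpha through the animator proxy.
[self.window.contentView setWantsLayer:YES];

[NSAnimationContext beginGrouping];
[[NSAnimationContext currentContext] setDuration:0.5];
[[self.videoView animator] setAlphaValue:1.0f];
[NSAnimationContext endGrouping];
```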
I need to add about 50 sprites to the screen and then redraw them. When I try to add them like this:
[self addChild:Img];
This creates a lot of lag.
I have also tried creating a CCLayer and then adding all of the images to the layer, but I get the same amount of lag. How can I add all of these sprites and reduce the lag? Most games probably have more than 50 sprites on screen.
If all or most of your sprites are the same, then you could use one CCSpriteBatchNode for all CCSprites sharing the same texture or image. This will save memory.
You would do something like the following,
1) Define a CCSpriteBatchNode.
2) Add it as a child of your layer.
3) Define a frame from the batch node's texture.
4) Set it as the displayFrame of the sprite.
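A sketch of those steps, assuming cocos2d-iphone and a hypothetical atlas (sprites.png / sprites.plist, frame name "ball.png"):

```objc
// 1) Define a CCSpriteBatchNode backed by the shared texture.
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"sprites.plist"];
CCSpriteBatchNode *batchNode = [CCSpriteBatchNode batchNodeWithFile:@"sprites.png"];

// 2) Add it as a child of the layer.
[self addChild:batchNode];

// 3) Define frames from the batch node's texture and
// 4) use them as the display frame of each sprite.
for (int i = 0; i < 50; i++) {
    CCSprite *sprite = [CCSprite spriteWithSpriteFrameName:@"ball.png"];
    sprite.position = ccp(arc4random_uniform(480), arc4random_uniform(320));
    [batchNode addChild:sprite]; // all 50 sprites render in one draw call
}
```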
Use a CCSpriteSheet. If you haven't used sprite sheets yet, think of them as gigantic images that contain all of your sprites. They come with a file that specifies the boundaries of each individual sprite, so you can pull them out when you need them in code.
The reason these are such a good idea is that Cocos2D is optimized for them. If you use sprites within a sprite sheet properly, rather than making one OpenGL ES draw call per sprite, it makes just one per sprite sheet.
In short – it’s faster, especially when you have a lot of sprites!
In our app we have UIScrollView above CAEAGLLayer. UIScrollView contains some UIViews (red rectangles). In CAEAGLLayer we draw white rectangles. Centers of white rectangles are the same as the centers of red rectangles. When UIScrollView scrolls, we update positions of white rectangles in CAEAGLLayer and render them.
We are getting the expected result: centers of white rectangles are always the same as centers of red rectangles.
But we can't synchronize updates of the CAEAGLLayer with the movement of the views inside UIScrollView.
We have some kind of mistiming – red rectangles lag behind white rectangles.
Speaking roughly, we really want to make the CAEAGLLayer lag together with the UIScrollView.
We have prepared sample code. Run it on the device and scroll, and you will see that white rectangles (drawn by OpenGL) are moving faster than red ones (regular UIViews). OpenGL is being updated within scrollViewDidScroll: delegate call.
https://www.dropbox.com/s/kzybsyu10825ikw/ios-opengl-scrollview-test.zip
It behaves the same even in iOS Simulator, just take a look at the video: http://www.youtube.com/watch?v=1T9hsAVrEXw
Red = UIKit, White = OpenGL
Code is:
- (void)scrollViewDidScroll:(UIScrollView *)aScrollView {
// reuses red squares that have gone outside the bounds
[overlayView updateVisibleRect:CGRectMake(...)];
// draws white squares using OpenGL under the red squares
[openGlView updateVisibleRect:CGRectMake(...)];
}
Edit:
The same issue can easily be demonstrated in a much simpler sample. The working xcodeproj can be found at:
https://www.dropbox.com/s/vznzccibhlf8bon/simple_sample.zip
The sample project basically draws and animates a set of WHITE squares in OpenGL and does the same for a RED set of UIViews. The lagging can easily be seen between the red and white squares.
In iOS 9, CAEAGLLayer gained a presentsWithTransaction property that synchronizes the two.
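For example (a sketch; openGlView is the view from the question, and the layer cast is an assumption about how that sample is set up):

```objc
// iOS 9+: make -presentRenderbuffer: participate in the enclosing
// CATransaction, so the GL content is committed together with the
// UIKit changes made in the same run-loop iteration.
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.openGlView.layer;
eaglLayer.presentsWithTransaction = YES;
```

Note that this trades a little rendering throughput for synchronization, since presentation now waits on the transaction commit.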
In fact, you can't synchronize them using the current public APIs. MobileMaps.app and Apple's Map Kit use a private property on CAEAGLLayer, asynchronous, to work around this issue.
Here is related radar: http://openradar.appspot.com/radar?id=3118401
After digging around a little I'd like to extend my previous answer:
Your OpenGL rendering is immediately done from within the scrollViewDidScroll: method while the UIKit drawing is performed later, during normal CATransaction updates from the runloop.
To synchronize UIKit updates with the OpenGL rendering, just enclose both in one explicit transaction and flush it to force UIKit to commit the changes to backboardd immediately:
- (void)scrollViewDidScroll:(UIScrollView *)aScrollView {
[CATransaction begin];
[overlayView updateVisibleRect:CGRectMake(...)];
[openGlView updateVisibleRect:CGRectMake(...)];
[CATransaction flush]; // trigger a UIKit and Core Animation graphics update
[CATransaction commit];
}
In lack of a proper answer I'd like to share my thoughts:
There are two ways of drawing involved: Core Animation (UIKit) and OpenGL. In the end, all drawing is done by OpenGL but the Core Animation part is rendered in backboardd (or Springboard.app, before iOS 6) which serves as a kind of window server.
To make this work your app's process serializes the layer hierarchy and changes to its properties and passes the data over to backboardd which in turn renders and composes the layers and makes the result visible.
When mixing OpenGL with UIKit, the CAEAGLLayer's rendering (which is done in your app) has to be composited with the rest of the Core Animation layers. I'm guessing that the render buffer used by the CAEAGLLayer is somehow shared with backboardd to provide a fast way of transferring your application's rendering. This mechanism does not necessarily have to be synchronized with the updates from Core Animation.
To solve your issue it would be necessary to find a way of synchronizing the OpenGL rendering with the Core Animation layer serialization and transfer to backboardd. Sorry for not being able to present a solution but I hope these thoughts and guesses help you to understand the reason for the problem.
I am trying to solve exactly the same issue. I tried all the methods described above, but none of them were acceptable.
Ideally, this could be solved by accessing Core Animation's final compositing GL context. But we can't access that private API.
Another way is transferring the GL result to CA. I tried this by reading the frame-buffer pixels and dispatching them to a CALayer, which was too slow: over 200 MB/sec of data would have to be transferred. I am now trying to optimize this, because it is the last approach left to try.
Update
Still heavy, but reading the frame-buffer and setting the result as a CALayer's contents is working and shows acceptable performance on my iPhone 4S. Even so, over 70% of the CPU load goes to this read-and-set operation alone, especially glReadPixels; I think most of that is just time spent waiting for the data to be read back from GPU memory.
Update2
I gave up on that last method. Its cost is almost purely pixel transfer, and it is still too slow. I decided to draw every dynamic object in GL; only some static popup views will be drawn with CA.
In order to synchronize UIKit and OpenGL drawing, you should try to render where Core Animation expects you to draw, i.e. something like this:
- (void)displayLayer:(CALayer *)layer
{
[self drawFrame];
}
- (void)updateVisibleRect:(CGRect)rect {
_visibleRect = rect;
[self.layer setNeedsDisplay];
}
I have a situation where I have many CALayers which animate in a "turn based" fashion. I animate the position on each of those CALayers, but they have the exact same duration. Once all of those CALayers are finished animating, a new "turn" is initiated and they animate changing positions again.
The whole idea is that with a linear interpolation between positions, and at a constant speed, a turn-based transition from state to state looks like a real-time animation. This, however, is hard to achieve with many different CALayers.
CAAnimationGroup is used to group together animations on a single CALayer. But I was wondering, is there a simple solution to group animations, which are supposed to have the same durations, on several CALayers together?
Edited to include a reply to Kevin Ballard's question
My problem lies in this: I'm creating animations for each of my CALayers, then putting those in an NSArray. Once I get the callback that an individual animation has ended, I remove it from the NSArray. Once the array is empty, I again create animations for them all.
With more than a few layers, there's a noticeable delay between when all of the animations finish and when the new ones start.
I imagine that if I could group all of these animations into a single one, a lot more layers could be animated without a delay between animations, thereby not ruining the illusion of a continuous animation.
If you attach animations to multiple CALayers within a single method, they will all commence at (effectively) the same time. I use this approach in a puzzle game with dropping balls; at the end of the animations I attach the next stage of the animation to any ball that needs further animation.
I'm animating up to 60 CALayers at a time and not experiencing any delays between stages of the animation. But I don't cache the animations in an array of any sort, so I'm not sure about the overhead you have there.
My animations are relatively simple, and created and attached to each CALayer on the fly. My sprites are 60px square and use a couple dozen possible images to represent their content.
In some cases there are multiple animations that I can create with different start times (using beginTime); I bundle these up with a CAAnimationGroup. But you may not be able to precalculate subsequent animations.
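A sketch of that bundling (ballLayer, the key paths, and the timings here are hypothetical):

```objc
// Two position animations on one layer, offset via beginTime and
// bundled in a CAAnimationGroup so they commit as a single animation.
CABasicAnimation *drop = [CABasicAnimation animationWithKeyPath:@"position.y"];
drop.toValue  = @(300);
drop.duration = 0.5;

CABasicAnimation *slide = [CABasicAnimation animationWithKeyPath:@"position.x"];
slide.toValue   = @(100);
slide.duration  = 0.5;
slide.beginTime = 0.5; // starts when the drop finishes

CAAnimationGroup *group = [CAAnimationGroup animation];
group.animations = @[drop, slide];
group.duration   = 1.0; // spans both stages
[ballLayer addAnimation:group forKey:@"turn"];
```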
If you wrap your animations in a CATransaction, Core Animation will make sure that they all run during the same iteration of the run loop.
[CATransaction begin];
// all your animations
[CATransaction commit];
If you add 3 animations to 3 different layers all at the same time, and they have the same duration, I would expect them all to animate together. What behavior are you seeing?
cp21yos: can you elaborate on your method? I am trying to do something similar, which involves animating several layers at a time, more than once. You said: "at the end of the animations I attach the next stage of the animation". Can you explain that? When I try to put logic that performs additional animations in an animationDidStop handler, only the last animation occurs, instead of the whole sequence of animations.