Suitability of using Core Animation on iOS vs using Cocos2D and OpenGL ES?

I finished a breakout game tutorial in a book, but the ball, which is a 20x20 pixel image, was skipping frames and not moving very smoothly. That was the case on the Simulator as well as on an iPhone 4S (the real thing). The code wasn't using NSTimer (which may be slower); it was using CADisplayLink and UIImageView's setFrame to do the animation.
Is Core Animation on iOS not well suited to this type of animated game? Say the game is one of:
1. Space Invaders
2. Breakout (as in the tutorial)
3. Arkanoid
4. Angry Birds / Cut the Rope / Fruit Ninja
For these types of games, is Core Animation really suitable for writing (2) above? For (1), (3), and (4), either Cocos2D or OpenGL ES seems more suitable for the job, and the performance of Cocos2D and OpenGL ES is said to be very close. Is that true?

Cocos2D is often chosen because it makes common game logic easy to program: collision detection, frame-by-frame sprite animations, scaling, and other processes that are common in game development, where you string together multiple animations, combine them, sequence them, add callbacks, and more. That is one of the big benefits of the engine.
However, performance is another. Cocos offers batch nodes, which combine all graphic elements into a single OpenGL call, rather than "drawing" each to the screen separately in each frame; this can dramatically improve performance, especially for large graphics. If you were skipping frames, I wonder whether batched sprites in Cocos would have been the missing link.
I'm very impressed by Core Animation and would like to think it can hold its own, performance-wise, in games. My understanding is that CA, like Cocos, is also built on top of OpenGL ES, so I'd expect good results to be achievable with either. It may simply be that doing so is easier in Cocos, because it has been designed and optimized internally for game development.

If you are having performance problems with a 2D app, that is likely caused by a lack of understanding of how to get the most efficient results from CoreGraphics, as opposed to something that switching to OpenGL will fix. A 2D game will work just fine with CoreGraphics; you just need to start with the right approach.

First off, you should not be re-rendering the entire view on each CADisplayLink callback. Instead, set up a UIView that contains multiple CALayer objects. Set each layer's contents like so: CALayer.contents = (id) cgImage, then let the system take care of rendering when the x, y, or animation elements change. You just need to position your elements and define the animations that move them around. With this approach, the system caches the animating image on the graphics card behind the scenes and redraws it using GPU operations.
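
A minimal Swift sketch of that layer-based approach (the class name, the "ball" image, and the moveBall helper are illustrative assumptions, not code from the answer):

    import UIKit

    final class BreakoutView: UIView {
        private let ballLayer = CALayer()

        override init(frame: CGRect) {
            super.init(frame: frame)
            // Hand the layer its bitmap once; Core Animation caches it on the GPU.
            ballLayer.contents = UIImage(named: "ball")?.cgImage
            ballLayer.frame = CGRect(x: 0, y: 0, width: 20, height: 20)
            layer.addSublayer(ballLayer)
        }

        required init?(coder: NSCoder) {
            fatalError("not used in this sketch")
        }

        // Move the ball by animating the layer's position; the compositor
        // redraws on the GPU instead of re-rendering the view on the CPU.
        func moveBall(to point: CGPoint, duration: CFTimeInterval) {
            let move = CABasicAnimation(keyPath: "position")
            move.fromValue = NSValue(cgPoint: ballLayer.position)
            move.toValue = NSValue(cgPoint: point)
            move.duration = duration
            ballLayer.position = point   // update the model value to match
            ballLayer.add(move, forKey: "move")
        }
    }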

How to animate vector graphics on the Apple Watch?

Since most devices today have a CPU and a GPU, the usual advice for programmers wishing to do animated vector graphics (like making a circle grow or move around) is to define the graphical item once and then use linear transformations to animate it. This way, (on most platforms and frameworks) the GPU can do the animation work, because rasterization with linear transformations can be done very fast on a GPU. If the programmer chooses to draw each frame on the CPU, it would most likely be much slower and consume more energy.
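On iOS, for example, that advice boils down to rasterizing the shape once into a layer and then animating a transform, so the scaling runs on the render server and GPU rather than in per-frame CPU drawing. A minimal sketch (the function and its parent-layer parameter are assumptions for illustration):

    import UIKit

    // Draw the circle once; afterwards only its transform animates.
    func addGrowingCircle(to parent: CALayer) {
        let circle = CAShapeLayer()
        circle.frame = CGRect(x: 0, y: 0, width: 40, height: 40)
        circle.path = UIBezierPath(ovalIn: circle.bounds).cgPath
        circle.fillColor = UIColor.blue.cgColor
        parent.addSublayer(circle)

        let grow = CABasicAnimation(keyPath: "transform.scale")
        grow.fromValue = 1.0
        grow.toValue = 2.0
        grow.duration = 1.0
        grow.autoreverses = true
        grow.repeatCount = .infinity
        circle.add(grow, forKey: "grow")   // GPU-side scaling, no redrawing
    }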
I understand that the Watch is not a device you want to overload with complex animations, but at least the Home Screen certainly seems to use exactly this kind of animated linear transformation.
Also, most Watch Faces are animated in some way, e.g. the moving second and minute hands.
However, the WatchKit controls do not have a .transform property, and I could not find much in the documentation - the words "animation" and "graphics" are not even mentioned there.
So, the only way I currently see is to draw the vector graphics into a CGContext and then put the result, as a UIImage, into an image control, as described here. But this does not really seem energy-efficient. It is exactly the kind of "CPU pixel drawing" that we usually want to avoid if possible. I think it is not energy-efficient because if I draw into a 100x100 pixel image buffer, the image then has to be scaled to the actual Watch screen size, so we have two actual drawing passes per frame.
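For reference, that CPU-bound approach looks roughly like this (a sketch; the imageView outlet and the circle being drawn are assumptions):

    import WatchKit
    import UIKit

    // Rasterize one frame on the CPU, then hand it to a WKInterfaceImage.
    // Every animation frame repeats all of this work.
    func drawFrame(radius: CGFloat, into imageView: WKInterfaceImage) {
        let size = CGSize(width: 100, height: 100)
        UIGraphicsBeginImageContextWithOptions(size, false, 0)
        defer { UIGraphicsEndImageContext() }

        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.white.cgColor)
        ctx.fillEllipse(in: CGRect(x: 50 - radius, y: 50 - radius,
                                   width: radius * 2, height: radius * 2))

        imageView.setImage(UIGraphicsGetImageFromCurrentImageContext())
    }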
Is there an officially recommended, energy-efficient way to do animations on the Apple Watch?
Or, in other words, can we animate things like they are animated on the Home Screen or Watch Faces?
It seems SpriteKit is the answer. You can create an SKScene and node objects and then display them in a WKInterfaceSKScene.
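A minimal sketch of that setup (the controller, the outlet, and the growing-circle animation are illustrative; assumes a WKInterfaceSKScene placed in the storyboard):

    import WatchKit
    import SpriteKit

    class AnimationController: WKInterfaceController {
        @IBOutlet weak var sceneInterface: WKInterfaceSKScene!

        override func awake(withContext context: Any?) {
            super.awake(withContext: context)

            let scene = SKScene(size: CGSize(width: 100, height: 100))
            let circle = SKShapeNode(circleOfRadius: 10)
            circle.position = CGPoint(x: 50, y: 50)
            scene.addChild(circle)

            // SKActions are driven by SpriteKit's renderer rather than
            // per-frame CPU drawing into an image.
            circle.run(.repeatForever(.sequence([
                .scale(to: 2.0, duration: 1.0),
                .scale(to: 1.0, duration: 1.0),
            ])))

            sceneInterface.presentScene(scene)
        }
    }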

If Core Graphics uses Metal under the hood, can a Metal implementation run faster than a CG one? Why?

Let's say I want to develop a Paint app and need to implement a brush engine. For a raster brush, you basically need to stamp a texture on touch locations with a given spacing.
-- Task: Composite a small image (brush tip) over a bigger one.
I decided to build a prototype first in CG using a CGContext to render the stamps and found out it performed pretty well even with coalesced touches and a decent size canvas (CGContext output size).
However, since I need to paint onto really big textures (8000x6000 would be great), I decided to give Metal a chance. I know this task might be trivial for someone with a background in Metal, but I'm new to the field. So I tried using Metal-backed CIFilters for compositing the brush over the canvas and displaying it in a custom MetalImageView: MTKView.
I thought having the canvas and the brush as CIImages and displaying them in a Metal layer would already be more performant than the naive CG implementation. But it's not. The CIFilter approach renders the entire canvas on every single stamp(at: Point), whereas in CG I just refresh a small rect around that point.
Now, I think I could accomplish that with the CIFilter if I could change the extent that is computed. I don't know if that can be done with Core Image, but I'm sure it would be really easy in Metal for someone with experience.
-- Question: Can a pure Metal implementation be faster at stamping images than the CG one, given that CG runs on Metal under the hood? If so, how much faster? Is it worth learning how to do it, or is that time better spent improving the CG implementation?
Note that I'm asking about a raster brush, not a vector brush with Bezier paths, which is far easier to code and runs faster but can't be used for textured brushes.
I really appreciate any help.
There is actually a chapter in the Core Image Programming Guide about that. They describe continuous painting into the same texture using the CIImageAccumulator class. You can also download the sample app.
I think performance-wise there shouldn't be a huge difference. You should be able to optimize heavily by telling Core Image the region of interest and domain of definition (extent) of your brush stroke filter. Then it should be able to render only the necessary parts of the image instead of the whole thing in every frame.
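A sketch of that accumulator-plus-dirty-rect idea (the canvas size, the brushStamp image, and the force-unwrapping are assumptions; check the failable initializer and pixel format against your setup):

    import CoreImage
    import CoreGraphics

    let canvasExtent = CGRect(x: 0, y: 0, width: 8000, height: 6000)
    let accumulator = CIImageAccumulator(extent: canvasExtent, format: .ARGB8)!

    func stamp(at point: CGPoint, brushStamp: CIImage) {
        // Center the stamp on the touch location.
        let offset = CGAffineTransform(translationX: point.x - brushStamp.extent.width / 2,
                                       y: point.y - brushStamp.extent.height / 2)
        let placed = brushStamp.transformed(by: offset)

        // Composite the stamp over the current canvas contents.
        let composited = placed.composited(over: accumulator.image())

        // Mark only the stamp's small rect as dirty, so Core Image re-renders
        // that region instead of the whole 8000x6000 canvas on every stamp.
        accumulator.setImage(composited, dirtyRect: placed.extent)
    }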

OS X Sprite Kit - Dirty Rects/Regions

Some background:
I have an existing OS X card game app that uses OpenGL.
The window is resizable, and a 4:3 aspect ratio is always maintained.
When the window is resized, the OpenGL view is resized accordingly, and all visual elements are scaled accordingly, i.e. the cards maintain their relative sizes and distances from each other.
I'm interested in moving the code to a system that either uses Sprite Kit, or one predominantly based on Core Animation layers. Sprite Kit is more attractive to me in terms of feature set for my needs, but...
... I am concerned about Sprite Kit performance (or rather, needless work, particularly on battery-powered Macs) for a game that essentially blasts the same textures to the screen, 60fps, even when nothing much is happening. (Most of the time, the cards are static, as the player ponders their next move.)
To reduce some of the (repetitive) drawing required, particularly at very large window sizes (e.g. fullscreen on a 30" monitor), I'm interested in using a "dirty rects/region" or "as-required" drawing system.
Question:
Does Sprite Kit provide some kind of dirty-rect drawing system, or the ability to implement such a drawing system? (Or, is it basically going to draw everything over and over at 60fps, regardless of the need to redraw?)
SK is an OpenGL renderer, so naturally it will redraw its contents every frame. That, however, doesn't make it slow. The dirty-rect drawing of UI frameworks is a way to improve performance and to reduce power consumption, but those frameworks have to use this approach because their rendering is typically a lot slower (often not hardware-accelerated) than an OpenGL renderer's.
On the other hand, SK can be slower frame over frame if the rendered scene's complexity is extreme. But that sounds highly unlikely for a card game.
Generally, you shouldn't concern yourself with performance until you've written some code to test it with. Premature optimization and all...
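That said, while there is no dirty-rect API to hook into, you can stop or throttle SpriteKit's loop while the table is static; a sketch (gameView is a hypothetical SKView hosting the card scene):

    import SpriteKit

    func tunePower(of gameView: SKView, isIdle: Bool) {
        // Pausing stops SpriteKit's update/render cycle entirely, so identical
        // frames are no longer redrawn at 60fps while the player thinks.
        gameView.isPaused = isIdle

        // During play, a capped frame rate (macOS 10.12+) still looks fine
        // for card movement and halves the per-second render work.
        gameView.preferredFramesPerSecond = 30
    }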

Canvas 2d context or WebGL for 2D game

I'm planning on writing a game which will use a lot of sprites and images. At first I tried EaselJS, but after playing some other canvas-based games I realized it's not that fast. And when I saw BananaBread by Mozilla I thought, "if WebGL can do 3D so fast, then it can do 2D even faster". So I moved to three.js (using planes with transparent textures, and texture offsets for sprites).
The question is: is it better? Faster? Most WebGL games are 3D, so should I use the canvas 2D context for 2D and WebGL for 3D? I've also noticed that there are no libraries for WebGL in 2D (except WebGL-2d, but it's quite low-level).
Please note that compatibility is not my greatest concern as I'm not planning on releasing anything anytime soon :) .
The short answer is yes. WebGL can be quite a bit more efficient if you use it well. A naive implementation will either yield no benefit or perform worse, so if you're not already familiar with the OpenGL API you may want to stick to canvas for the time being.
A few more detailed notes: WebGL can draw textured quads (sprites) very very fast, but if you need more advanced 2D drawing features such as path tracing you'll want to stick to a 2D canvas as implementing those types of algorithms in WebGL is non-trivial. The nature of your game also makes a difference in your choice. If you only have a few moving items on screen at a time Canvas will be fairly fast and reasonably simple. If you're redrawing the entire scene every frame, however, WebGL is better suited to that type of render loop.
My recommendation? If you're just learning both, start with Canvas2D and make your game work with that. Abstract your drawing in a simple manner, such as having a DrawPlayer function rather than calling ctx.drawImage(playerSprite, ....) directly, and when you reach a point where the game is functioning and you want it to run faster, or the game design dictates that you MUST use a faster drawing method, create an alternate rendering backend for all those abstract functions with WebGL. This gives you the advantage of not getting hung up on rendering tech early on (which is ALWAYS a mistake!), lets you focus on gameplay, and if you end up implementing both methods you have a great fallback for non-WebGL browsers like Internet Explorer. Chances are you won't really need the increased speed for a while anyway.
WebGL can be much faster than canvas 2D. See http://blog.tojicode.com/2012/07/sprite-tile-maps-on-gpu.html as one example.
That said, I think you're mostly on your own right now. I don't know of any 2d libraries for WebGL except for maybe PlayN http://code.google.com/p/playn/ though that's in Java and uses the Google Web Toolkit to get converted to JavaScript. I'm also pretty sure it doesn't use the techniques mentioned in that blog post above, although if your game does not use tiles maybe that technique is not useful for you.
three.js is arguably not the library you want if you're planning on 2d.

HTML5 Canvas Performance: Loading Images vs Drawing

I'm planning on writing a game using JavaScript / canvas and I just had one question: what kind of performance considerations should I think about in regards to loading images vs. just drawing using canvas methods? Because my game will be using very simple geometry for the art (circles, squares, lines), either method will be easy to use. I also plan to implement a simple particle engine in the game, so I want to be able to draw lots of small objects without much of a performance hit.
Thoughts?
If you're drawing simple shapes with solid fills then drawing them procedurally is the best method for you.
If you're drawing more detailed entities with strokes, gradient fills, and other performance-sensitive make-up, you'd be better off using image sprites. Generating graphics procedurally is not always efficient.
It is possible to get away with a mix of both. Draw graphical entities procedurally on the canvas once as your application starts up. After that you can reuse the same sprites by painting copies of them instead of generating the same drop-shadow, gradient and strokes repeatedly.
If you do choose to draw sprites you should read some of the tips and optimization techniques on this thread.
My personal suggestion is to just draw shapes. I've learned that if you're going to use images instead, then the more you use the slower things get, and the more likely you'll end up needing to do off-screen rendering.
This article discusses the subject and has several tests to benchmark the differences.
Conclusions
In brief: Canvas likes a small canvas size, and the DOM likes working with few elements (although the DOM in Firefox is so slow that this isn't always true).
And if you are planning to use particles, you might want to take a look at Doodle-js.
Loading an image from the cache is faster than generating it or loading it from the original resource. But then you have to preload the images so they get into the cache.
It really depends on the type of graphics you'll use, so I suggest you implement the easiest solution and solve the performance problems as they appear.
Generally I would expect copying a bitmap (drawing an image) to become faster than recreating it from primitives as the complexity of the image gets higher.
That is, drawing a couple of squares per scene should take about the same time with either method, but a complex image will be faster to copy from a bitmap.
As with most gaming considerations, you may want to look at what you need to do, and use a mixture of both.
For example, if you are using a background image, then loading the bitmap makes sense, especially if you will crop it to fit the canvas; but if you are making something dynamic, then you will need to use the drawing API.
If you target IE9 and FF4, for example, then on Windows you should get good performance from drawing, as those browsers take advantage of the graphics card; but for more general browsers you will perhaps want to look at using sprites, which are either images you draw as part of initialization and then move, or bitmapped images you load.
It would help to know what type of game you are looking at, how dynamic the graphics need to be, how large the bitmapped images would be, and what kind of framerate you are hoping for.
The landscape is changing with each browser release. I suggest following the HTML5 Games initiative that Facebook has started, and the jsGameBench test suite. They cover a wide range of approaches from Canvas to DOM to CSS transforms, and their performance pros and cons.
http://developers.facebook.com/blog/post/454
http://developers.facebook.com/blog/archive
https://github.com/facebook/jsgamebench
If you are just drawing simple geometric objects you can also use divs. They can be circles, squares, and lines in a few lines of CSS, you can position them wherever you want, and almost all browsers support the styles (you may have some problems with mobile devices using Opera Mini or old Android Browser versions and, of course, with IE7 and below), but there would be almost no performance hit.
