How to animate vector graphics on the Apple Watch?

Since most devices today have a CPU and a GPU, the usual advice for programmers who want animated vector graphics (like making a circle grow or move around) is to define the graphical item once and then animate it with linear transformations. This way, on most platforms and frameworks, the GPU can do the animation work, because rasterization with linear transformations can be done very fast on a GPU. If the programmer instead draws each frame on the CPU, the animation will most likely be much slower and consume more energy.
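For illustration, here is a minimal iOS sketch of that transform-based approach (the names are my own): the circle is defined once, and only its transform is animated, so the per-frame work happens on the GPU.

    import UIKit

    // Define the circle once as a view/layer...
    let circle = UIView(frame: CGRect(x: 40, y: 40, width: 40, height: 40))
    circle.layer.cornerRadius = 20
    circle.backgroundColor = .red

    // ...then animate only its transform; no per-frame CPU rasterization.
    UIView.animate(withDuration: 1.0) {
        circle.transform = CGAffineTransform(translationX: 100, y: 0)
            .scaledBy(x: 2, y: 2)
    }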
I understand that the Watch is not a device you want to overload with complex animations, but at least the Home Screen certainly seems to use exactly this kind of transform-based animation.
Also, most Watch Faces are animated in some way, e.g. the moving second and minute hands.
However, the WatchKit controls do not have a .transform property, and I could not find much in the documentation: the words "animation" and "graphics" are not even mentioned there.
So, the only way I currently see is to draw the vector graphics to a CGContext and then assign the result as a UIImage to an image control, as described here. But this does not really seem energy-efficient. It is exactly the kind of "CPU pixel drawing" that we usually want to avoid if possible. I think it is not energy-efficient because if I draw into a 100x100 pixel image buffer, the image then has to be scaled to the actual Watch screen size, so we have two actual drawing passes per frame.
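As a minimal sketch, the draw-to-an-image approach looks something like this (the WKInterfaceImage parameter and the drawFrame helper are names I made up):

    import WatchKit
    import UIKit

    // Draw one frame of the animation on the CPU and push it to the control.
    func drawFrame(radius: CGFloat, into imageControl: WKInterfaceImage) {
        let size = CGSize(width: 100, height: 100)
        UIGraphicsBeginImageContextWithOptions(size, false, 0)
        let ctx = UIGraphicsGetCurrentContext()!
        ctx.setFillColor(UIColor.red.cgColor)
        ctx.fillEllipse(in: CGRect(x: 50 - radius, y: 50 - radius,
                                   width: radius * 2, height: radius * 2))
        let frame = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        imageControl.setImage(frame)
    }

Every animation step repeats this whole CPU rasterization, which is exactly what worries me.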
Is there an officially recommended, energy-efficient way to do animations on the Apple Watch?
Or, in other words, can we animate things the way they are animated on the Home Screen or on Watch Faces?

It seems SpriteKit is the answer. You can create an SKScene and node objects and then display them in a WKInterfaceSKScene.
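A minimal sketch of that setup, assuming a WKInterfaceSKScene wired up in the storyboard (the outlet name skInterface is made up):

    import WatchKit
    import SpriteKit

    class InterfaceController: WKInterfaceController {
        @IBOutlet weak var skInterface: WKInterfaceSKScene!

        override func awake(withContext context: Any?) {
            super.awake(withContext: context)

            let scene = SKScene(size: CGSize(width: 100, height: 100))
            scene.scaleMode = .aspectFit

            // A vector circle that grows and shrinks forever.
            let circle = SKShapeNode(circleOfRadius: 10)
            circle.position = CGPoint(x: 50, y: 50)
            circle.strokeColor = .green
            scene.addChild(circle)

            let pulse = SKAction.sequence([
                SKAction.scale(to: 2.0, duration: 1.0),
                SKAction.scale(to: 1.0, duration: 1.0)
            ])
            circle.run(SKAction.repeatForever(pulse))

            skInterface.presentScene(scene)
        }
    }

SKActions describe the animation declaratively, so the framework, not your own per-frame code, does the work.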

Related

If Core Graphics uses Metal under the hood, can a Metal implementation run faster than a CG one? Why?

Let's say I want to develop a Paint app and need to implement a brush engine. For a raster brush, you basically need to stamp a texture on touch locations with a given spacing.
-- Task: Composite a small image (brush tip) over a bigger one.
I decided to build a prototype first in CG, using a CGContext to render the stamps, and found that it performed pretty well even with coalesced touches and a decent-size canvas (the CGContext output size).
However, since I need to paint onto really big textures (8000x6000 would be great), I decided to give Metal a chance. I know this task might be trivial for someone with a background in Metal, but I'm new to this field. So I tried to use CIFilters (Metal-backed) for compositing the brush over the canvas and displaying it in a custom MetalImageView: MTKView.
I thought that having the canvas and the brush as CIImages and displaying them in a Metal layer would already be more performant than the naive CG implementation, but it's not. The CIFilter approach renders the entire canvas on every single stamp(at: Point), whereas in CG I just refresh a small rect around that point.
Now, I think I could accomplish that with the CIFilter if I could change the extent that is computed. I don't know if that can be done with Core Image, but I'm sure it would be really easy in Metal for someone with experience.
-- Question: Can a pure Metal implementation stamp images faster than the CG one, given that CG runs on Metal under the hood? If so, how much faster? Is it worth learning how to do it, or should I rather spend that time improving the CG implementation?
Note that I'm asking about a raster brush, not a vector brush with Bézier paths, which is way easier to code and runs faster but can't be used with textured brushes.
I really appreciate any help.
There is actually a chapter in the Core Image Programming Guide about that. They describe continuous painting into the same texture using the CIImageAccumulator class. You can also download the sample app.
I think performance-wise there shouldn't be a huge difference. You should be able to optimize heavily by telling Core Image the region of interest and domain of definition (extent) of your brush stroke filter. Then it should be able to render only the necessary parts of the image instead of the whole thing in every frame.
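A rough sketch of the accumulator approach, assuming a hypothetical brushImage CIImage and an 8000x6000 canvas; only the stamped rect is marked dirty, so Core Image can limit re-rendering to that region:

    import CoreImage

    let canvasExtent = CGRect(x: 0, y: 0, width: 8000, height: 6000)
    // The accumulator keeps the composited strokes between frames.
    let accumulator: CIImageAccumulator? = CIImageAccumulator(extent: canvasExtent,
                                                              format: kCIFormatARGB8)

    // Hypothetical stamping function: composite one brush tip at a point.
    func stamp(brush brushImage: CIImage, at point: CGPoint) {
        guard let acc = accumulator else { return }
        // Position the brush tip at the touch location.
        let tip = brushImage.transformed(
            by: CGAffineTransform(translationX: point.x, y: point.y))
        // Composite the tip over the accumulated canvas.
        let composite = tip.composited(over: acc.image())
        // Only the small rect around the stamp is marked dirty.
        acc.setImage(composite, dirtyRect: tip.extent)
    }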

Querying graphics capabilities for deciding whether to apply GPU-intensive effects (through SpriteKit)

I have a game written with SpriteKit which uses an SKEffectNode with a blur effect to blur a set of sprites, one of which has a fairly large texture, and which together cover a fairly large area of the screen. An iMac and a MacBook Pro cope quite happily with this, but on a more humble MacBook there is a notable drop in frame rate with the effect node added in. Since the effect isn't crucial to the functionality of the game, I could simply not add the SKEffectNode on machines with less powerful graphics capabilities.
So then the question: what would be a good programmatic check that I could make to determine the "power of the GPU" or "performance when applying texture effects" or [suggest better metric here] and via what API? Thanks for your suggestions!
You'll have to create a performance test using your actual blurring processes and some sample content to get an accurate idea of the time cost of it on each generation of hardware.
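For example, a rough timing harness along these lines (blurCost is my own helper, using a GPU-backed Core Image blur as a stand-in for your actual effect; the sample content and frame budget are up to you):

    import CoreImage
    import QuartzCore

    // Time one full render of a blur over sample content.
    func blurCost(of sampleImage: CIImage) -> CFTimeInterval {
        let context = CIContext()  // GPU-backed by default
        let blur = CIFilter(name: "CIGaussianBlur",
                            parameters: [kCIInputImageKey: sampleImage,
                                         kCIInputRadiusKey: 20.0])!
        let output = blur.outputImage!.cropped(to: sampleImage.extent)

        let start = CACurrentMediaTime()
        _ = context.createCGImage(output, from: output.extent)  // forces the render
        return CACurrentMediaTime() - start
    }

    // e.g. only add the SKEffectNode when the cost fits a frame budget:
    // if blurCost(of: sample) < 1.0 / 60.0 { addEffectNode() }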
Blurs are really weird things, programmatically. A Box Blur can give you most of the appearance of a nice, soft gaussian blur for much less processing cost. A zoom or motion blur (that looks good) is surprisingly expensive, even on strong hardware.
And there are some amazingly effective "cheats" when doing blurs. Because there's no need for detail, you can heavily optimise the operations, particularly if the blurs are strong.
Apple, it's believed, does something like this, for example, with its blurs:
Massively shrink the target image
Do a gaussian blur on this tiny image
Scale it back up, somewhat
Apply a cheap Box Blur to soften it
Fully scale back to the desired size
By way of a terrible example that benefits from good scaling (with filtering set for high-quality scaling):
This is the full sized image blurred:
And here's a version of the same image, scaled to a 16th of its original size, blurred, and then the blurred image scaled back up. As you can see, due to the good scaling and lack of detail, there's hardly any difference in the blurred image, but the blur takes MUCH less processing energy and time:
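A hedged Core Image sketch of the shrink-blur-upscale recipe above (the scale factors and radii are made-up values; tune them to taste):

    import CoreImage

    func cheapBlur(_ input: CIImage) -> CIImage {
        // 1. Massively shrink the image (here to 1/8 linear size).
        let small = input.transformed(by: CGAffineTransform(scaleX: 0.125, y: 0.125))
        // 2. Gaussian-blur the tiny image; a small radius goes a long way.
        let blurred = small.applyingGaussianBlur(sigma: 4)
        // 3. Scale back up; filtering hides the lost detail.
        let upscaled = blurred.transformed(by: CGAffineTransform(scaleX: 8, y: 8))
        // 4. A final cheap box blur softens the scaling artifacts.
        let box = CIFilter(name: "CIBoxBlur",
                           parameters: [kCIInputImageKey: upscaled,
                                        kCIInputRadiusKey: 5.0])!
        return box.outputImage!.cropped(to: input.extent)
    }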

OS X Sprite Kit - Dirty Rects/Regions

Some background:
I have an existing OS X card game app that uses OpenGL.
The window is resizable, and a 4:3 aspect ratio is always maintained.
When the window is resized, the OpenGL view is resized accordingly. All visual elements are scaled accordingly. i.e. the cards maintain their relative sizes and distances from each other.
I'm interested in moving the code to a system that either uses Sprite Kit, or one predominantly based on Core Animation layers. Sprite Kit is more attractive to me in terms of feature set for my needs, but...
... I am concerned about Sprite Kit performance (or rather, needless work, particularly on battery-powered Macs) for a game that essentially blasts the same textures to the screen at 60fps, even when nothing much is happening. (Most of the time the cards are static, as the player ponders their next move.)
To reduce some of the (repetitive) drawing required, particularly at very large window sizes (e.g. fullscreen on a 30" monitor), I'm interested in using a "dirty rects/region" or "as-required" drawing system.
Question:
Does Sprite Kit provide some kind of dirty-rect drawing system, or the ability to implement such a drawing system? (Or, is it basically going to draw everything over and over at 60fps, regardless of the need to redraw?)
SK is an OpenGL renderer, so naturally it will redraw its contents every frame. That, however, doesn't make it slow. The dirty-rect drawing of UI frameworks is a way to improve performance and reduce power consumption, but they have to use this approach because rendering in UI frameworks is typically a lot slower (often not hardware-accelerated) than in an OpenGL renderer.
On the other hand, SK can be slower frame over frame if the rendered scene's complexity is extreme, but that sounds highly unlikely for a card game.
Generally, you shouldn't concern yourself with performance until you've written some code to test it with. Premature optimization and all...
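If idle redrawing does turn out to matter for battery, a simple mitigation sketch (the skView parameter and the trigger points are hypothetical):

    import SpriteKit

    // Stop the render loop entirely while the board is static...
    func cardsDidSettle(in skView: SKView) {
        skView.isPaused = true
    }

    // ...and resume it when an animation actually needs to run.
    func playerDidMove(in skView: SKView) {
        skView.isPaused = false
    }

On newer SDKs you can also lower skView.preferredFramesPerSecond instead of pausing outright.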

SDL accelerated rendering

I am trying to understand the whole 2D accelerated rendering process using SDL 2.0.
So my question is: which would be the most efficient way to draw circles on the screen, and why?
Some ways would be:
First, create a software surface, draw the necessary pixels on that surface, then create a texture out of the surface and lastly copy that texture to the rendering target.
Another implementation would be to draw the circle using multiple SDL_RenderDrawLine calls; I think this is the way it is implemented in SDL2_gfx.
Or is there a more efficient way to do all of this?
Take this question more generally: suppose I wanted to manually draw other shapes which probably couldn't be rendered easily with the 2D rendering API that SDL provides (using draw line or rectangle).
With the example of circles this is a fairly complicated question; it mostly comes down to the visual quality you wish to achieve, and that is what will drive performance. Drawing lots of short lines will vary vastly based on how close to a circle you wish to get: if you are happy to use, say, 60 lines, which look nearly seamless on small shapes but begin to appear non-circular when scaled up, the performance will likely be better (depending on the user's hardware; see the sketch at the end of this answer). Note also that SDL_RenderDrawLines will be much, much faster for many lines, as it avoids lots of context switches between rendering calls.
However if you need a very accurate circle with thousands of lines to get a good approximation it will be faster to simply use a bitmap and scale and blit it. This will also give you a 'smoother' feel to the circle.
In my personal opinion, I do not think the hardware-accelerated render API has much use outside of some special cases such as graph rendering and perhaps very simple GUI drawing. For anything more complex I would usually use bitmap-based drawing.
With regards to the second part, it again depends on the accuracy of any arcs you need to draw. If you can easily approximate the shape into a few tens of lines it will be fast, otherwise the pixel method is better.
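To make the line-count trade-off concrete, here is a small Swift sketch (the names are mine) that computes the vertices once; you would hand the array to a single batched call like SDL_RenderDrawLines through C interop:

    import Foundation

    struct Vertex { var x: Int32; var y: Int32 }

    // Approximate a circle with `segments` line segments.
    func circlePoints(cx: Double, cy: Double, r: Double, segments: Int) -> [Vertex] {
        // The extra point closes the loop back to the start angle.
        return (0...segments).map { i in
            let theta = 2.0 * Double.pi * Double(i) / Double(segments)
            return Vertex(x: Int32((cx + r * cos(theta)).rounded()),
                          y: Int32((cy + r * sin(theta)).rounded()))
        }
    }

    // 60 segments look nearly seamless on a small circle but visibly
    // polygonal when scaled up: exactly the quality/speed trade-off above.
    let points = circlePoints(cx: 160, cy: 120, r: 50, segments: 60)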

Suitability of using Core Animation on iOS vs using Cocos2D and OpenGL ES?

I finished a breakout game tutorial in a book, but the ball, which is a 20x20 pixel image, was skipping frames and not moving very smoothly. That is the case on the Simulator as well as on an iPhone 4S (the real thing). The code wasn't using NSTimer (which may be slower), but was using CADisplayLink and UIImageView setFrame to do the animation.
Is Core Animation on iOS not very suitable for developing animation-heavy games? Say the game is one of:
(1) Invaders (Space Invaders)
(2) Breakout (as a game in a tutorial)
(3) Arkanoid
(4) Angry Birds / Cut the Rope / Fruit Ninja
For these types of games, is Core Animation really only suitable for writing (2) above? Is it true that for (1), (3), and (4), either Cocos2D or OpenGL ES is more suitable for the job, and that the performance of Cocos2D and OpenGL ES is very close?
Cocos2D is often looked at because of its ease for programming common game logic, like collision detection and sprite animations: frame-by-frame, scaling, and other processes that are quite common in game development, where you string together multiple animations, combine them, sequence them, do callbacks, and more. That is one of the big benefits of the engine.
However, performance is another. Cocos offers batch nodes, which combine all graphic elements into a single OpenGL call, rather than "drawing" each to the screen separately in each frame; this can dramatically improve performance, especially for large graphics. If you were seeing skipped frames, I wonder whether batch sprites in Cocos would have been the missing link.
I'm very impressed by Core Animation and want to hope that it can hold its own with performance issues in games. My understanding is that CA is, like Cocos, also built on top of OpenGL ES, so I'd expect it possible to achieve good results in either. It could be that doing so in Cocos is easier simply because it has been designed and optimized internally for game development.
If you are having performance problems with a 2D app, this is likely caused by a lack of understanding of how to get the most efficient results from CoreGraphics, as opposed to something that switching to OpenGL will fix. A 2D game will work just fine with CoreGraphics; you just need to start with the right approach. First off, you should not be rendering the entire view over again on each CADisplayLink callback. Instead, set up a UIView that contains multiple CALayer objects. Set the layer like so: CALayer.contents = (id) cgImage and then let the system take care of rendering it when the x, y, or animation elements change. You just need to position your elements and define the animations that move the elements around. With this approach, the system will cache the animating image on the graphics card behind the scenes and redraw using GPU operations.
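As a compact sketch of that layer-based setup (ballImage and the helper names are hypothetical):

    import UIKit

    // The image is handed to the layer once; afterwards only the layer's
    // geometry changes, so the GPU redraws from a cached texture.
    func makeBallLayer(with ballImage: CGImage, in view: UIView) -> CALayer {
        let ball = CALayer()
        ball.frame = CGRect(x: 0, y: 0, width: 20, height: 20)
        ball.contents = ballImage
        view.layer.addSublayer(ball)
        return ball
    }

    func bounce(_ ball: CALayer) {
        // Core Animation interpolates the position on the render server;
        // no per-frame drawing happens in the app process.
        let move = CABasicAnimation(keyPath: "position")
        move.fromValue = NSValue(cgPoint: CGPoint(x: 10, y: 10))
        move.toValue = NSValue(cgPoint: CGPoint(x: 300, y: 400))
        move.duration = 1.0
        move.autoreverses = true
        move.repeatCount = .infinity
        ball.add(move, forKey: "bounce")
    }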
