How does UIKit actually render elements onto the screen?

This is a more general question about screen rendering, but I'll use UIKit as the example. When we write UIView..., this is an abstraction over an element in UIKit, which uses Core Graphics, which uses x, which uses y. At the end of the day, to render a view or a label, there has to be a common ancestor. I have looked into this and can only come to one conclusion, and I'm curious whether it is correct: at the base level, is rendering software such as OpenGL (or, in this case, Metal) used to handle the rendering of UIKit elements? All I'm trying to do is uncover each layer of abstraction involved in drawing a UILabel/UIView onto the screen.
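For concreteness, here is a minimal sketch of where two of those layers surface in code: every UIView is backed by a CALayer (Core Animation), and draw(_:) hands you a Core Graphics context to paint into; the compositing of the resulting layer tree then happens on the GPU (Metal on modern iOS). The class here is purely illustrative:

import UIKit

final class CircleView: UIView {
    override func draw(_ rect: CGRect) {
        // Core Graphics: CPU-side drawing into this view's layer backing store.
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.systemRed.cgColor)
        ctx.fillEllipse(in: rect.insetBy(dx: 4, dy: 4))
    }
}

// Core Animation: the backing layer the drawing lands in.
let view = CircleView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
print(type(of: view.layer))   // CALayer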

Related

Alternatives for Mac OS X SpriteKit

I program in Xcode with Swift 2.2 and use SpriteKit for my user interface. I do not need any animation, just lots of static pictures that can overlap and must support transparency. Any "animation" is done manually by swapping pictures, which means pointing the sprite nodes at a new picture. This works well with a small number of nodes, but when I have over 100,000 nodes the performance is very bad. User interface actions like pulling down a menu or typing text become very, very slow. So I am looking for an alternative that doesn't require completely rewriting my code.
I am looking for a Mac OS X library (not iOS) that supports something like a view that can be filled manually with rectangular pictures (with transparency). The pictures come from one big picture that contains all the pictures I need. In SpriteKit I can assign a node a picture area that is a sub-picture of the big picture, like this:
let smallPict = SKTexture(rect: myRect, inTexture: myPicture)
I need to overlap the pictures. In SpriteKit, pictures overlap when placed at the same coordinates.
I only need 2D, but very fast changes of the pictures must be possible.
Any idea which class library (perhaps a subclass of NSView) might be right for this?
Consider using Cocos2D: it has built-in batching, so it can improve performance when drawing thousands of similar nodes.
https://github.com/cocos2d/cocos2d-objc
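As an aside (this is SpriteKit, not Cocos2D): SpriteKit applies the same batching idea on its own when consecutive nodes share a texture atlas and blend mode, so before switching engines it may be worth checking that all sub-pictures come from a single SKTextureAtlas. A minimal sketch, with placeholder asset names:

import SpriteKit

let atlas = SKTextureAtlas(named: "Pictures")   // hypothetical atlas name
let scene = SKScene(size: CGSize(width: 1024, height: 768))

for i in 0..<1000 {
    // All nodes pull textures from one atlas, so the renderer can
    // draw them in a small number of batched GPU calls.
    let node = SKSpriteNode(texture: atlas.textureNamed("tile\(i % 16)"))
    node.position = CGPoint(x: CGFloat(i % 100) * 10,
                            y: CGFloat(i / 100) * 10)
    scene.addChild(node)
}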

Mixing OpenGL and software rendered GUI

I need to write an application whose main content will be OpenGL rendered (something like a game engine), but there is no good OpenGL-based GUI library similar to what Qt Widgets provides (those are software rendered).
As I browsed the Qt source code, I saw that all painting is done via QPainter, and there is even an OpenGL implementation of QPainter, but support for multiple graphics backends was dropped in Qt 5, so you can no longer render Qt Widgets with OpenGL (I don't know why).
The problem is that you can't paint to a window surface using both software and hardware rendering: the window is either associated with an OpenGL context or uses software rendering. That means if I want an app with a complex GUI around OpenGL-based content, I either need to paint everything with OpenGL (which is hard because, as I said, there is no good GUI library for it), or render the GUI to an image using software rendering (for example Qt) and then upload that image as an OpenGL texture (probably a big performance loss).
Does anyone know of a good application that renders its GUI in software and loads it into OpenGL as a texture? I need to be sure this approach works without a big performance loss, but I can't find a good example showing that it holds up even in apps like game engines.
If you take the "render the UI to a texture, then draw a textured quad over the game" route and are worried about performance, try to avoid transferring the whole texture every frame.
If you think about it:
60 fps is not necessary for a UI: 30 fps is enough, so update it only every other frame.
Most of the time the UI doesn't change between frames, and when it does change, only a small portion of it does.
UI frameworks often keep track of which parts of the UI are "dirty" and need to be redrawn. If you can get your hands on that information, you can stream only the parts that need updating into the texture (glTexSubImage2D; see the sketch below).
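A minimal sketch of that dirty-rect streaming, assuming a desktop GL context is current and the GUI lives in a client-side RGBA8 buffer; the function and parameter names are illustrative, not from any particular library:

import OpenGL.GL3

func uploadDirtyRect(texture: GLuint,
                     surfaceWidth: Int,
                     x: Int, y: Int, width: Int, height: Int,
                     pixels: UnsafeRawPointer) {
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    // Describe the layout of the larger client-side buffer so GL can
    // step through just the sub-rectangle we want to upload.
    glPixelStorei(GLenum(GL_UNPACK_ROW_LENGTH), GLint(surfaceWidth))
    glPixelStorei(GLenum(GL_UNPACK_SKIP_PIXELS), GLint(x))
    glPixelStorei(GLenum(GL_UNPACK_SKIP_ROWS), GLint(y))
    // Replace only the dirty region; the rest of the texture is untouched.
    glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0,
                    GLint(x), GLint(y), GLsizei(width), GLsizei(height),
                    GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)
    // Restore defaults so later uploads are unaffected.
    glPixelStorei(GLenum(GL_UNPACK_ROW_LENGTH), 0)
    glPixelStorei(GLenum(GL_UNPACK_SKIP_PIXELS), 0)
    glPixelStorei(GLenum(GL_UNPACK_SKIP_ROWS), 0)
}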

Best low level canvas library for making interactive animations?

I'm evaluating canvas libraries, and my needs are:
I want to make it easy to build nice-looking buttons that move around and on which I can easily capture events. Button-drawing helpers would be cool.
I'll be building a system for others to use to create animated scenes combining moving text, images, and sound. I won't ever be drawing complex shapes myself; the most I might draw is buttons around some text.
I do not want to be totally insulated from the low-level machinery of the per-frame drawing callback. Helped along, sure, but I'm going to be syncing with Web Audio API stuff and want to keep access to super-tight timing control.
I'm comfortable with pretty low-level scripting of animation, and would rather not have something that changes Canvas into some totally different paradigm, but I'm not sure on this point.
It needs to work well for touch on iOS.
I'd ideally like one with good docs and a high truck number. The state of canvas libs reminds me of the state of JS libs 10 years ago, and I'd rather not invest in something that doesn't have an actual "team" behind it. A truck number of 1 worries me.
You flagged KineticJS, so I can say a little bit about how that would work.
1) It's a great tool for tracking shapes on a canvas, capturing clicks, and moving them around. It's easy to place an image on any shape, but I would use another program to make those images.
2) Even if you don't do a lot beyond buttons, KineticJS provides some nice features for manipulating the canvas, and I'm sure you'd use a lot of them in making tools for others.
3) KineticJS provides an animation object that repeatedly calls the draw() method for you. You define your draw method in order to create animations.
4) It's more of a wrapper around canvas. You work with a Stage and Layers, but there is still a lot of transparency to the canvas itself, and you can always do direct manipulation as well.
5) You can capture a broad range of events including "touch", "click", etc. It's easy to treat them the same when appropriate or differently if you need to. Furthermore, you can simply mark shapes as "draggable" and it handles all that appropriately.
6) Kinetic has had spectacular documentation and examples, but looking now, the tutorials seem to be missing from http://kineticjs.com/ and I can't find them elsewhere. That's mildly worrisome, but the docs are still there, and my guess is they'll be back up soon, since KineticJS is still under active development.
I'll weigh in on #1:
Nice looking buttons:
Hands-down...use Adobe Illustrator to create a set of button vector images (.svg).
If you need low level control over the button design at run-time then convert the Illustrator images to canvas drawing commands with this great plugin from Mike Swanson:
http://blog.mikeswanson.com/post/29634279264/ai2canvas.
The key here is that canvas will scale the vector button for you so you're always getting a professional, polished look both on a small mobile screen and a large desktop screen.
You could use canvas to build each part of a button from scratch, but don't reinvent the wheel.
A good animation library is GreenSock (GSAP). It also helps you build timelines (kind of like Flash timelines).
http://www.greensock.com/gsap-js/
As for canvas libraries, check out Stack Overflow's sister site that offers software recommendations:
http://softwarerecs.stackexchange.com
Good luck with your project!

Suitability of using Core Animation on iOS vs using Cocos2D and OpenGL ES?

I finished a breakout game tutorial in a book, but the ball, a 20x20 pixel image, was skipping frames and not moving smoothly. That is the case both in the Simulator and on an iPhone 4S (the real thing). The code wasn't using NSTimer (which may be slower); it was using CADisplayLink and UIImageView's setFrame to do the animation.
Is Core Animation on iOS not suitable for developing animation-heavy games? Say the game is one of:
(1) Invaders (Space Invaders)
(2) Breakout (as in the tutorial)
(3) Arkanoid
(4) Angry Birds / Cut the Rope / Fruit Ninja
For these types of games, is it true that Core Animation is really only suitable for writing (2) above, that for (1), (3), and (4) either Cocos2D or OpenGL ES is better suited to the job, and that the performance of Cocos2D and OpenGL ES is very close?
Cocos2D is often chosen because of its ease for programming common game logic such as collision detection, frame-by-frame sprite animations, scaling, and other processes that are quite common in game development, where you string together multiple animations, combine them, sequence them, attach callbacks, and more. That is one of the big benefits of the engine.
However, performance is another. Cocos offers batch nodes, which combine all graphic elements into a single OpenGL call rather than "drawing" each to the screen separately in each frame; this can dramatically improve performance, especially for large graphics. If you were skipping frames, I wonder whether batched sprites in Cocos would have been the missing link.
I'm very impressed by Core Animation and want to believe it can hold its own on performance in games. My understanding is that CA, like Cocos, is also built on top of OpenGL ES, so I'd expect good results to be achievable with either. It may be that doing so in Cocos is easier simply because it has been designed and optimized internally for game development.
If you are having performance problems with a 2D app, they are likely caused by not getting the most efficient results out of Core Graphics and Core Animation, rather than by something that switching to OpenGL will fix. A 2D game will work just fine with this stack; you just need the right approach. First, you should not re-render the entire view on each CADisplayLink callback. Instead, set up a UIView containing multiple CALayer objects and set each layer's contents like so: CALayer.contents = (id) cgImage. Then let the system take care of rendering when the position or animation properties change: you just position your elements and define the animations that move them around. With this approach, the system caches the animating image on the graphics card behind the scenes and redraws it using GPU operations (see the sketch below).
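A minimal sketch of that layer-based approach, assuming a 20x20 ball image as in the question (the class and method names here are illustrative):

import UIKit

final class BreakoutView: UIView {
    private let ballLayer = CALayer()

    func setUpBall(with image: UIImage) {
        ballLayer.contents = image.cgImage   // cached on the GPU once
        ballLayer.frame = CGRect(x: 0, y: 0, width: 20, height: 20)
        layer.addSublayer(ballLayer)
    }

    // Move the ball; Core Animation interpolates the position on the
    // render server, so no per-frame CPU drawing is needed.
    func moveBall(to point: CGPoint) {
        let animation = CABasicAnimation(keyPath: "position")
        animation.toValue = NSValue(cgPoint: point)
        animation.duration = 0.5
        ballLayer.add(animation, forKey: "move")
        ballLayer.position = point   // keep the model value in sync
    }
}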

Compositing Images Together in Cocoa

I recall that there was a method that could piece together an image, like a button, from 3 different images: one for the left side, a middle that would be stretched, and one for the right side. I think there was something like that that takes 10 images too, but I'm not sure.
The point is, does anyone know the API call that does that? And if not, how could I go about drawing a simple button from 3 different images?
Thanks.
I think you're looking for the AppKit function NSDrawThreePartImage. There are several drawing functions in a similar vein, including NSDrawNinePartImage, which takes nine images and is probably the one you remembered as taking ten.
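A minimal sketch of drawing a stretchable horizontal button with it, inside a custom view's draw(_:); the image names are placeholders for your own assets:

import AppKit

final class ThreePartButtonView: NSView {
    override func draw(_ dirtyRect: NSRect) {
        let left = NSImage(named: "buttonLeft")
        let middle = NSImage(named: "buttonMiddle")   // stretched to fill
        let right = NSImage(named: "buttonRight")
        NSDrawThreePartImage(bounds, left, middle, right,
                             false,        // horizontal, not vertical
                             .sourceOver,  // standard compositing
                             1.0,          // fully opaque
                             isFlipped)
    }
}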
