AnimatedSprite vs AnimatedImage in QML

In QML I have multiple ways of including animations. Among others there are
AnimatedImage
AnimatedSprite
which both seem to serve a similar purpose. With the right tools it is quite easy to transform a sprite sheet into an animated GIF or MNG file, which could be handled by an AnimatedImage. The other way around is not much harder either.
In the documentation of Sprite they say:
The sprite engine internally copies and cuts up images to fit in an easier to read internal format, which leads to some graphics memory limitations. Because it requires all the sprites for a single engine to be in the same texture, attempting to load many different animations can run into texture memory limits on embedded devices. In these situations, a warning will be output to the console containing the maximum texture size.
On the other hand, the AnimatedImage usually caches the individual frames, especially when the animation is meant to loop (which might also put the maximum texture size at risk?).
I know that Sprite has some fancy state machine and such, but AnimatedSprite seems to be stripped of this.
As producing content for either of them is the same amount of work, I want to know whether one of them is superior in any use case, or whether their use cases and performance are entirely the same and the choice between them is just a matter of taste.
Actually I did not find a single reference that mentioned both in the same context...

Related

SVG or PNG for better performance in PIXI.js and WebGL?

My question may not seem new, but I have searched for days and couldn't find an answer.
I'm trying to make a webpage with PIXI.js, which uses WebGL.
My webpage is a mouse-movement parallax: all the movement an object gets is a few pixels when the user moves the mouse pointer.
Now my problem: I have some simple images and I don't know whether to use SVG or PNG.
My images are like these:
https://1drv.ms/i/s!Aj-BeFYyTnRzhTBSVEXXeJ2c-O7V
https://1drv.ms/i/s!Aj-BeFYyTnRzhTFeTzJLrWaq_VFh
https://1drv.ms/i/s!Aj-BeFYyTnRzhTIa9lAaS9dKX1DL
I want to make my webpage as smooth as possible and I don't know whether to use PNG or SVG. I searched a lot. Some say it depends on the PNG and SVG - in my case my SVGs won't be too complex. Some say that because SVG uses the CPU and WebGL uses the GPU, using both together causes a loss of performance. And some say that using SVG in PIXI makes no difference compared to PNG, because PIXI makes a texture from them either way, so there won't be any difference...
I'm new to WebGL and PIXI, so with these answers I became confused. By the way, in my case the image size doesn't matter; I only want as much smoothness as possible.
Thanks a lot for your help.
It doesn't make a difference for runtime performance; the SVGs will be rasterized into textures either way. However, during initialization, when the browser needs to rasterize the SVGs to create textures from them, there might be a significant performance penalty depending on how complex your SVGs are.
Since you're developing for the web, though, the aforementioned penalty is easily offset by the fact that you're loading the SVGs from a server, which introduces far more latency than rasterizing the SVG will - even more so if you consider the size difference between a rasterized PNG and an SVG (assuming you're not planning to create tiny textures from them).
So, final verdict: go with SVG. It's lossless and small, as well as resizable and editable from within client code. It also saves you from exporting your source assets to PNG every time you change something.
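To illustrate the point that both formats end up as textures, here is a minimal TypeScript sketch - assuming a PIXI v6-style API and made-up asset paths - where the SVG-backed sprite is handled exactly like the PNG-backed one once loading is done:

import * as PIXI from 'pixi.js';

const app = new PIXI.Application({ width: 800, height: 600 });
document.body.appendChild(app.view);

// The SVG is rasterized once while its texture is created; after that it is
// just another GPU texture, no different from the one made from the PNG.
const fromSvg = PIXI.Sprite.from('assets/layer-back.svg');  // hypothetical paths
const fromPng = PIXI.Sprite.from('assets/layer-front.png');
app.stage.addChild(fromSvg, fromPng);

// Simple mouse parallax: shift each layer by a few pixels with the pointer.
app.stage.interactive = true;
app.stage.hitArea = app.screen;
app.stage.on('pointermove', (e: PIXI.InteractionEvent) => {
  const { x } = e.data.global;
  fromSvg.x = (x - app.screen.width / 2) * 0.01;
  fromPng.x = (x - app.screen.width / 2) * 0.03;
});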

Using SWF animation in Haxe/OpenFL applications

As great as Haxe has become with NME/OpenFL, the big problem when transitioning from AS3 development is assets. As much as Haxe is similar to AS3 and OpenFL tries to provide a familiar API, the lack of SWF support scares away many developers.
My research on this topic led me to understand that current SWF support is rather weak and buggy, with many edits necessary to the SWF file in order to run it in Haxe.
The question is: how do you use SWF animation in OpenFL apps - or, if you don't, what is the best solution you've found regarding rendering time, processor time and file size?
Having spent more time on research and asking other developers, I put together a small list of possible alternatives to using SWF assets for animation. Hopefully it will help other developers who have a similar problem while SWF animation support remains weak.
NOTE: all methods were selected considering three factors important to me: availability on all platforms, performance, and file size of assets. Therefore, not all possible methods were included.
Tested on: HTML5, Android, iOS
SWF animation is possible with Haxe/OpenFL, but there are a few rules: no tweens - all animations are frame-by-frame. Vector art should be cached as bitmap, saved as bitmap or pre-rendered as a bitmap sequence, as on some platforms (e.g. Neko) vector art is converted to raster with ugly, jagged edges. Some bugs were reported when representing MovieClips as graphics or vice versa, but I didn't notice any bugs on HTML5, Flash, iOS or Android releases. Nested animations sometimes might skip frames when looped (I didn't see that either; maybe older releases of NME/OpenFL did that). I'd say this is a fairly good way to animate content in terms of file size and platform availability, but it's a headache to edit all the assets to meet the requirements of Haxe's SWF support. And it's not fun to reuse these assets later, since they're all frame-by-frame animations and it's a mess.
Sprite sheet animations. Primarily used for HTML5 targets due to higher rendering performance. This comes straight from OpenGL practice, so the method should apply to all OpenGL targets. The idea is to have one big file and save the time spent opening/loading multiple smaller files. The performance is good and it works on all tested platforms, but the file size quickly gets out of hand, and it can hardly be used for animations where the object changes size - it wastes a lot of transparent space, and rotating the image to best fit the space reduces rendering performance because the transformation matrix has to be edited at run time.
Frame sequence aka PNG sequence animation. Personal favorite. It works well and fast on all platforms, and it's possible to pre-render the animation (just like any other method above), transform it to a BitmapData array, stream-load it, etc. It does take a lot of disk space for the animations, but that can be softened by loading the assets before using them (HTML5, SWF), and it doesn't really matter for mobile devices - even 1-2GB apps are OK for the market. A large advantage that I found for myself is that this type of asset can be used with any other development standard (C++, Java, cocos2d) and saved as a sprite sheet if needed (e.g. cocos2d, like HTML5, prefers sprite sheets over anything else, as written in the official book COCOS2DX by Roger Engelbert). With this flexibility, good performance and tolerable file size, I prefer this method over any other method listed above.
Bone animation - PNG array + property list. Another approach is having separate images of an animated object plus matrix data for every image on every frame. That way, with minimal disk space, it's possible to make thousands of animations. The downsides are: it's harder (though not impossible) to nest animations for more complex content, the constant matrix transformations limit the number of active animations on the display list (a horrible method for HTML5; other platforms hold up well), and the assets have little reusability. Usually it's the same good old SWF assets that were exported to this type, so it makes sense to edit the FLA rather than the bone animation itself.
Surely I've missed some great points, there are many ways to animate graphics and some might work for you better than others so feel free to leave comments and criticize, but I still hope this topic was helpful.
This question might be obsolete. I compiled a C++ app in Haxe/OpenFL just 5 minutes ago and had no trouble getting SWF animations (with tweens) to work.
Here's a gif recording:
https://imgflip.com/gif/7l02f
I had an asset called "library.swf" containing that animation, exported as the class "Oluv".
This requires the "swf" library, which is now free and can be installed with "haxelib install swf".
In my example, I added this to my application.xml file:
<haxelib name="swf" />
<library id="oluvLib" path="assets/library.swf" type="swf"/>
Then put this in a standard OpenFL template project:
// Load the SWF library asynchronously, then fetch the exported clip
// (this uses the <library id="oluvLib" .../> entry from application.xml above).
Assets.loadLibrary("oluvLib", swfAssetsLoaded);

private function swfAssetsLoaded(library:AssetLibrary):Void {
    // "oluvLib:Oluv" = <library id>:<exported class name>
    var oluv = Assets.getMovieClip("oluvLib:Oluv");
    addChild(oluv);

    // center the clip on the stage
    oluv.x = (stage.stageWidth - oluv.width) / 2;
    oluv.y = (stage.stageHeight - oluv.height) / 2;
}
Tweens do not seem to work on the Neko target, but they work fine in C++ and Flash (of course).

XNA Texture loading speed (for extra large Texture sizes)

[Skip to the bottom for the question only]
While developing my XNA game I ran into another horrible XNA limitation: Texture2D-s (at least on my PC) can't have dimensions larger than 2048*2048. No problem - I quickly wrote my custom texture class, which uses a [System.Drawing.] Bitmap by default, splits the texture into smaller Texture2D-s where necessary and displays them as appropriate.
When I made this change I also had to update the method that loads the textures. When loading the Texture2D-s in the old version I used Texture2D.FromStream(), which worked pretty well, but XNA doesn't even seem to be able to store/load textures above the limit, so if I tried to load/store a, say, 4092*2048 PNG file I ended up with a 2048*2048 Texture2D in my app. Therefore I switched to loading the images using [System.Drawing.] Image.FromFile and then casting to a Bitmap, as that doesn't seem to have any such limitation. (Later converting this Bitmap to a Texture2D list.)
The problem is that loading the textures this way is noticeably slower, because now even the images that are under the 2048*2048 limit are loaded as a Bitmap and then converted to a Texture2D. So I am actually looking for a way to analyze an image file and check its dimensions (width; height) before even loading it into my application, because if it is under the texture limit I can load it straight into a Texture2D without loading it into a Bitmap and then converting it into a single-element Texture2D list.
Is there any (clean and possibly very quick) way to get the dimensions of an image file without loading the whole file into the application? And if there is, is it even worth using? I guess the slowest part here is the file opening/seeking (probably hardware-bound, when it comes to HDDs) rather than streaming the contents into the application.
Do you need to support arbitrarily large textures? If not, switching to the HiDef profile will get you support for textures as large as 4096x4096.
If you do need to stick with your current technique, you might want to check out this answer regarding how to read image sizes without loading the entire file.
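The question is about XNA/.NET, but the underlying trick is language-agnostic: read only the first few bytes of the file and pull the dimensions out of the header. As a rough sketch of the idea - written in TypeScript/Node rather than C#, with a hypothetical file path - for a PNG the width and height sit at fixed offsets inside the IHDR chunk:

import { openSync, readSync, closeSync } from 'fs';

function pngDimensions(path: string): { width: number; height: number } {
  const fd = openSync(path, 'r');
  try {
    // 8-byte PNG signature, 4-byte chunk length, 4-byte "IHDR" type,
    // then 4-byte width and 4-byte height, both big-endian.
    const header = Buffer.alloc(24);
    readSync(fd, header, 0, 24, 0); // reads only the first 24 bytes of the file
    if (header.readUInt32BE(12) !== 0x49484452 /* "IHDR" */) {
      throw new Error('Not a PNG file: ' + path);
    }
    return { width: header.readUInt32BE(16), height: header.readUInt32BE(20) };
  } finally {
    closeSync(fd);
  }
}

// Decide up front whether the image fits under the 2048*2048 limit.
const { width, height } = pngDimensions('Content/worldmap.png'); // hypothetical path
const needsSplitting = width > 2048 || height > 2048;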

HTML5 Canvas Performance: Loading Images vs Drawing

I'm planning on writing a game using javascript / canvas and I just had 1 question: What kind of performance considerations should I think about in regards to loading images vs just drawing using canvas' methods. Because my game will be using very simple geometry for the art (circles, squares, lines), either method will be easy to use. I also plan to implement a simple particle engine in the game, so I want to be able to draw lots of small objects without much of a performance hit.
Thoughts?
If you're drawing simple shapes with solid fills, then drawing them procedurally is the best method for you.
If you're drawing more detailed entities with strokes, gradient fills and other performance-sensitive make-up, you'd be better off using image sprites. Generating graphics procedurally is not always efficient.
It is possible to get away with a mix of both. Draw graphical entities procedurally on the canvas once as your application starts up. After that you can reuse the same sprites by painting copies of them instead of generating the same drop-shadow, gradient and strokes repeatedly.
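As a rough sketch of that mixed approach (sizes and colours are made up): the expensive part - a radial gradient - is rendered once to an offscreen canvas at startup, and the per-frame render loop only copies it with drawImage:

// Render an expensive-looking particle once, to a small offscreen canvas.
function makeParticleSprite(radius: number): HTMLCanvasElement {
  const sprite = document.createElement('canvas');
  sprite.width = sprite.height = radius * 2;
  const ctx = sprite.getContext('2d')!;
  const g = ctx.createRadialGradient(radius, radius, 0, radius, radius, radius);
  g.addColorStop(0, 'rgba(255, 200, 80, 1)');
  g.addColorStop(1, 'rgba(255, 200, 80, 0)');
  ctx.fillStyle = g;
  ctx.fillRect(0, 0, radius * 2, radius * 2);
  return sprite;
}

const particleSprite = makeParticleSprite(8);
const canvas = document.querySelector('canvas')!; // the game canvas
const ctx = canvas.getContext('2d')!;

function render(particles: { x: number; y: number }[]): void {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  for (const p of particles) {
    // Copying the small pre-rendered canvas is much cheaper than rebuilding
    // the gradient for every particle on every frame.
    ctx.drawImage(particleSprite, p.x - 8, p.y - 8);
  }
}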
If you do choose to draw sprites you should read some of the tips and optimization techniques on this thread.
My personal suggestion is to just draw shapes. I've learned that if you're going to use images instead, then the more you use the slower things get, and the more likely you'll end up needing to do off-screen rendering.
This article discusses the subject and has several tests to benchmark the differences.
Conclusions
In brief - Canvas likes a small canvas size and the DOM likes working with few elements (although the DOM in Firefox is so slow that this isn't always true).
And if you are planning to use particles, you might want to take a look at Doodle-js.
Loading an image from the cache is faster than generating it or loading it from the original resource. But then you have to preload the images so they get into the cache.
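A minimal sketch of such preloading (the asset URLs are hypothetical): wait for every image to finish loading before starting the game loop, so later draws always hit the cache:

function preload(urls: string[]): Promise<HTMLImageElement[]> {
  return Promise.all(
    urls.map(
      (url) =>
        new Promise<HTMLImageElement>((resolve, reject) => {
          const img = new Image();
          img.onload = () => resolve(img);
          img.onerror = reject;
          img.src = url;
        })
    )
  );
}

preload(['sprites/player.png', 'sprites/enemy.png']).then(() => {
  // Safe to start rendering: drawImage will not stall on network loads now.
});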
It really depends on the type of graphics you'll use, so I suggest you implement the easiest solution and solve the performance problems as they appear.
Generally I would expect copying a bitmap (drawing an image) to get faster compared to recreating it from primitives, as the complexity of the image gets higher.
That is, drawing a couple of squares per scene should take about the same time with either method, but a complex image will be faster to copy from a bitmap.
As with most gaming considerations, you may want to look at what you need to do, and use a mixture of both.
For example, if you are using a background image, then loading the bitmap makes sense, especially if you will crop it to fit the canvas; but if you are making something dynamic then you will need to use the drawing API.
If you target IE9 and FF4, for example, then on Windows you should get good performance from drawing, as they take advantage of the graphics card; but for more general browsers you will perhaps want to look at using sprites, which will either be images you draw as part of initialization and then move, or bitmapped images that you load.
It would help to know what type of game you are looking at, how dynamic the graphics will need to be, how large the bitmapped images would be, what type of framerate you are hoping for.
The landscape is changing with each browser release. I suggest following the HTML5 Games initiative that Facebook has started, and the jsGameBench test suite. They cover a wide range of approaches from Canvas to DOM to CSS transforms, and their performance pros and cons.
http://developers.facebook.com/blog/post/454
http://developers.facebook.com/blog/archive
https://github.com/facebook/jsgamebench
If you are just drawing simple geometric objects you can also use divs. They can be circles, squares and lines with a few lines of CSS, you can position them wherever you want, and almost all browsers support the styles (you may have some problems with mobile devices using Opera Mini or old Android Browser versions and, of course, with IE7 and below), and there would be almost no performance hit.
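A small sketch of that idea (size, colour and position are made up) - a circle is just an absolutely positioned div with a border radius, no canvas involved:

const circle = document.createElement('div');
circle.style.width = '40px';
circle.style.height = '40px';
circle.style.borderRadius = '50%'; // turns the square div into a circle
circle.style.background = '#e74c3c';
circle.style.position = 'absolute';
circle.style.left = '120px';
circle.style.top = '80px';
document.body.appendChild(circle);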

How to render formatted text in Direct3D9?

I'm writing an application which needs to draw a lot of text - several lines, maybe tens of lines - in Direct3D9. The text can be heavily formatted (i.e. different typefaces, styles, sizes) and can contain Unicode symbols from different charsets. Worst of all, it can change on the fly, so it needs to be dynamic (render once, display forever won't do).
There are two problems with that. First, I'll probably need a lot of calls to D3DXCreateFont, which are supposedly costly (I'm not sure myself). Another approach is to create all the fonts before drawing the multi-part line, and then just switch between them - is this better? Well, I can also create font objects on the fly as I draw the line, add them to some kind of font-object cache and then look in the cache before trying to create a new one? What do you think, which is the best approach?
The second problem is that D3DXFont does not seem to understand underline/strike-through font styles. Although D3DFONT is based on LOGFONT, it omits those fields (it supports italic, though). Let's say I really, really need underline/strike-through - what do I do? Is there a way to force ID3DXFont to do underline? Should I just draw those lines myself (and how do I do that fast)? Or maybe I should switch to drawing with GDI on an HDC and then copying those pixels into a texture - will that provide reasonable performance?
Well, I can also create font objects on the fly as I draw the line, add them to some kind of font-object cache and then look in the cache before trying to create a new one? What do you think, which is the best approach?
The caching approach. Don't even try to recreate the fonts on-the-fly whenever you need them, with no caching. I'm unaware of D3DXCreateFont's real cost, but the function is by design not intended to be called too frequently.
Regarding underline/strikeout - you could render the lines yourself on top using IDirect3DDevice9::DrawPrimitiveUP with D3DPT_LINELIST and a pass-through shader to leave the line vertices unchanged and untransformed.
Or maybe I should switch to drawing with GDI on HDC and then copying those pixels into texture - will that provide reasonable performance?
I wouldn't, because it needs to transfer the texture contents from system RAM to VRAM.
