We all know by now, particularly from that much-cited article, that we should prefer CSS transforms when animating an element's position.
But that leaves a choice between translate() and translate3d()...
Which one is generally faster?
The site below runs tests comparing translate(), translate3d(), and a couple of other properties. According to it, translate3d() is faster in most browsers.
http://jsperf.com/translate3d-vs-xy
Using translate3d pushes the CSS animation into hardware acceleration. Even if you only need a basic 2D translation, use translate3d anyway: it hints the browser to hand the work to the GPU, which is why the '3d' variant is generally the better choice.
That's why it is faster.
(Cameron Little's answer provides the evidence.)
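In practice the difference is just writing the same 2D offset as a 3D transform with a zero z component, roughly like this (class name and offsets are only illustrative):

/* 2D transform: may be rasterised on the CPU */
.slide-2d {
  -webkit-transform: translate(100px, 0);
  transform: translate(100px, 0);
}

/* Same visual result, but nudges the browser to composite the layer on the GPU */
.slide-3d {
  -webkit-transform: translate3d(100px, 0, 0);
  transform: translate3d(100px, 0, 0);
}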
As Cameron suggested, translate3d should be faster in many browsers, mostly those that use GPU hardware acceleration to speed up rendering. On WebKit in particular, translate3d was the preferred way of forcing hardware acceleration on page elements.
Though in my experience there is one drawback to using 3D transforms: in certain browser/OS combinations (macOS + Safari, I'm looking at you) hardware-accelerated rendering can slightly alter font smoothing as well as interpolation, causing minor flicker or blur. Those situations can mostly be worked around, but they should be kept in mind.
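A commonly cited workaround for that flicker/blur (a sketch, not a guaranteed fix; the class name is illustrative) is to pin backface visibility and force the element onto its own compositing layer before animating:

.animated-item {
  -webkit-backface-visibility: hidden;
  backface-visibility: hidden;
  -webkit-transform: translateZ(0);  /* or translate3d(0, 0, 0) */
  transform: translateZ(0);
}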
Is there any known implementation of a 2D canvas fallback for the awesome PlayCanvas game engine?
The idea is that the system would test rendering performance in WebGL vs Canvas 2D during the loading process, and fall back to Canvas 2D if it finds better performance there or if WebGL is not supported in the browser.
Other frameworks such as three.js or pixi.js have this feature, and it would be great for PlayCanvas, but as far as I have seen it does not, and no solution has been implemented by the community.
No, there is no Canvas 2D fallback for PlayCanvas.
First of all, Canvas 2D will never be faster than WebGL for rendering 3D scenes, because you would have to shift complex GPU tasks onto the CPU. PlayCanvas implements a very sophisticated physically based rendering pipeline, and reimplementing it CPU-side will never give acceptable performance. The fallback makes more sense for Pixi because Pixi is largely concerned with 2D sprite primitives, which Canvas 2D can render quite cheaply.
At the time of writing, WebGL has a penetration of 91.1% and the trend is still upwards. Therefore, you have a relatively small minority of people who can't experience WebGL right now.
In the situation where the user can't run WebGL (for whatever reason, such as running IE9 or below), the recommendation would be simply to display a message asking them to upgrade to a WebGL-capable browser.
Falling back to canvas when WebGL is unavailable is a very fringe feature, and perhaps that is why there is little desire to implement it. It may have been useful back when WebGL support was extremely limited, but I don't see much need for it at the moment. Here's why:
If the canvas API meets all your requirements, then there is little point in writing a WebGL app that does the same thing.
If WebGL features are desired or required, such as 3D, or you simply need to draw many sprites with different scales and rotations, then canvas performance is likely not up to par. Furthermore, browsers (last time I checked) have very inconsistent canvas performance; that is what motivated me to write a WebGL renderer in the first place.
This html5rocks article states that
In general the CSS ‘opacity’ property isn’t hardware accelerated, but some browsers that implement filters using hardware acceleration will accelerate the filter version of opacity for much better performance.
This seems to imply that in performance-intensive applications, one should use the opacity filter instead of the opacity property. For example, I'm rendering a canvas under an image with an opacity property of 0.5. Should I be using the filter instead? How could one measure performance gains when using this filter property, and on what platforms might there be a noticeable improvement?
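For reference, the two approaches being compared would look roughly like this (a sketch; the class name is made up, and filter needs vendor prefixes in the browsers of that era):

/* Option 1: the plain opacity property */
.overlay-image {
  opacity: 0.5;
}

/* Option 2: the filter version of opacity */
.overlay-image {
  -webkit-filter: opacity(50%);
  filter: opacity(50%);
}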
First, CSS3 filter effects are still a draft standard and their browser support varies, so think twice about whether you really need them (e.g., the CSS Filters in the Real World article (04.2013) reports rendering artifacts).
Hardware acceleration in WebKit and GPU Accelerated Compositing in Chrome give an overview of their implementations, and both suggest there is no discrimination against CSS2 or CSS3 (I'd be surprised if there was).
Second, a review in PC Magazine (01.2013) shows results of some online benchmarks by the IE and Firefox teams. There are canvas, particle, and CSS (rotation) benchmarks. The Maxthon (WebKit-based) and Opera versions tested there do not support acceleration, so they give a hint of the performance gain. Regarding the transparency: it involves orders of magnitude fewer computations than resampling, so you shouldn't notice any difference for this specific operation.
Paul Irish says here that opacity is one of the few CSS properties that *is* GPU accelerated: https://plus.google.com/+AddyOsmani/posts/aTRerYcZpts
Also, support for filters across browsers is severely lacking, as can be seen here, whereas opacity is supported across the board: http://caniuse.com/#search=opacity.
Not to mention the opacity property is just so much easier to use.
I'd stick with what you've got.
I'm using -webkit-transform: translate3d and a few other properties pretty extensively on a mobile app for iPhone because it's hardware accelerated. With about 98% of the features in place, performance is great. I'm careful not to try to do too much at once.
I'm successfully simulating swiping in a very smooth, native-feeling way. What I've noticed now is that when I add the last 2% of features, I'm seeing some image redrawing issues in the section that is being animated while swiping. After you swipe through all 4 images and they load, performance is perfectly smooth again. However, when this section is hidden and shown, the same thing happens.
What I hypothesize is happening is that some internal buffer limit is being hit and the content has to reload each time.
So with that background, the general question is: what kinds of performance optimizations have other developers been making for -webkit-transform? I'm not necessarily asking about my particular situation, but rather about the wider range of optimizations people have figured out for their individual needs.
Hopefully if this question gets some answers, it can be a resource for other folks asking the same question down the road.
It's a fairly well known thing, but making sure the element you transform uses 3D transforms where possible helps a lot on devices that hardware-accelerate transforms (iOS at the moment).
The easiest way to do that is to add:
transform: translate3d(0,0,0);
with the appropriate prefixes to the CSS of the element in question, then just animate it as normal, using either 2D or 3D transforms.
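For the WebKit-based browsers this thread is mostly about, the prefixed form works out to something like this (the class name is only an example):

.swipe-panel {
  -webkit-transform: translate3d(0, 0, 0);
  transform: translate3d(0, 0, 0);
}

The zeroed transform changes nothing visually; it only encourages the browser to promote the element to its own hardware-accelerated layer.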
It might sound a bit weird, but I had a similar issue and I solved it by using -webkit-perspective: 1000.
I don't know why this helps the transitions, but in my case it did.
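Applied to the container of the animated content, that looks something like this (the container class is illustrative; older WebKit accepted the unitless value):

.swipe-container {
  -webkit-perspective: 1000;   /* modern equivalent: perspective: 1000px */
}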
I'm starting to build a strategy game (think Warcraft) for the web. I've been doing research on HTML5 Canvas and CSS3 sprites and still can't decide which technology to use.
The game won't be completed for another 6 months.
Any advice would be appreciated.
As you probably hear so frequently... "It Depends..." ™
My suggestion would be consider the feel of the application you're after. If you are trying to build a very graphically rich, mostly-images application, then I would use Canvas. However if you are trying to animate some graphics, but have the page remain and behave more "Web-like", mixed with other HTML content, I'd give CSS3 a try.
Two additional points:
Currently, Canvas is better supported than CSS3 animation/sprites.
If you use Canvas you're going to be implementing your own render loop and animation code (or making use of a third-party library). Your code creates animation by compositing the various layers of each frame, applying movement, and repeating. You can't simply say "move this image a little to the right"; you'll have to do that yourself.
The EA web game "Lords of Ultima", as dull as it is, is an excellent example of a Warcraft-styled overworld (well, it's more city building, as there are no visible units), with animations and everything, built on a pure HTML and CSS sprite base. It looks and performs well, and I think the square, block, box-model nature of HTML suits that kind of tile-based design, especially since a lot of the image handling (embed an <img> or a <div> with a background, change background-position for animation) and click/mouse handling is done for you in plain HTML.
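To illustrate the background-position trick mentioned above, a sprite-sheet walk cycle can be driven by CSS alone; a rough sketch (the sprite sheet, frame size, and frame count are all made up):

.unit {
  width: 64px;
  height: 64px;
  background: url(unit-walk.png) 0 0 no-repeat;   /* hypothetical 8-frame strip, 64px per frame */
  -webkit-animation: walk 0.8s steps(8) infinite;
  animation: walk 0.8s steps(8) infinite;
}

@-webkit-keyframes walk {
  to { background-position: -512px 0; }
}
@keyframes walk {
  to { background-position: -512px 0; }
}

The steps(8) timing function snaps the animation to whole frames instead of sliding the sheet continuously.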
If you go with canvas you have to manage all of that yourself, which will greatly increase the complexity and dev time. You'll have more control over minor elements and improved performance, but you'll also lose (if it's at all important) the broader backwards compatibility with older browsers. So it depends on how complex your design is and what kind of performance you need.
Use Canvas. If you use CSS sprites to build a game, you are going to create lots of <div>s, which means lots of operations on the DOM; that can slow things down and also brings plenty of focus and compatibility problems.
It may pay off to trade development time for performance with <canvas>, on the assumption that "code will be maintained forever".
I think a CSS3 sprite system takes more time to develop, because you need to handle browser compatibility.
Browsers like IE 9 use the GPU to accelerate graphics, which lets you get the free lunch of Moore's Law.
There are pros and cons to both. Currently, Canvas is better supported than CSS3, but you said your game won't be done for another 6 months, and by then the support for CSS3 could be much, much better. There are also a lot of other variables here, such as: What browsers will the game be viewed in? How advanced are the graphics you need to animate? And so on. I would say that Canvas is better for support in the current generation of browsers and for gaming graphics, whereas CSS3 would be quicker to develop with but wouldn't come close on support or graphics handling. But it doesn't seem like you're in a rush to get it done.
Basically:
Canvas: Graphics, current support in users browser
CSS3: Speed of development
Either will work, but for now I would use Canvas. However, 6 months in the tech world is an eternity; things could be a lot different by then.
I am working on an app that causes lots of browser reflows, and performance is a key issue. From a performance point of view, is it better to use a CSS3 gradient or an image gradient for some DOM elements? Will a page that uses CSS text shadows and gradients have slower reflows than a page that uses images to achieve those visual effects?
Also, are there any reflow tests out there I can use?
For drawing, CSS gradients and shadows do tax the CPU more than images. Performance used to be pretty bad; these days it is acceptable. If you have a ton of gradients/shadows, you should just implement them and test in your real-world setting. If you just have a few, I wouldn't worry about it.
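For reference, the two alternatives being weighed look like this (a sketch; the colours, image path, and class name are made up, and older browsers needed prefixed gradient syntax):

/* Option 1: CSS-generated effects, computed by the browser at paint time */
.panel {
  background: linear-gradient(to bottom, #666, #333);
  text-shadow: 0 1px 2px rgba(0, 0, 0, 0.5);
}

/* Option 2: a pre-rendered image, just decoded and tiled */
.panel {
  background: url(panel-gradient.png) repeat-x;
}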
It depends a lot on how the browser renders it, but for the most part those things will render slower. In addition, you'll have a less pixel-perfect display in older browsers. However, this also serves to segment your audience, as generally those with updated browsers also have updated computers. So, it's a trade-off that can work in your favor to essentially serve a stripped down version of your site to those that wouldn't be able to handle it. It's not guaranteed, but I've found it usually balances out pretty well.
Overall, real-world testing is the way to go. Build it, see if it works, and fix performance issues once you find them. I wouldn't hesitate just because there's a chance it might not work. If it works just fine and you don't try it, you'll never know!