Can refactoring HTML 4 into HTML5 increase the performance of a website? - performance

Can using HTML5 features increase the speed and performance of a website?
Or will it only improve the semantics, add new technology, and enhance the user experience?

HTML5 adds some new controls that browsers can implement natively (like calendars). Using these will improve performance over JavaScript-implemented controls (but in general, you will not notice much difference).
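For example, a minimal sketch of using the native control when the browser provides one and only falling back to a JavaScript widget otherwise (the fallback is left as a placeholder):

```javascript
// Browsers with native support keep type="date"; others silently
// fall back to type "text", which doubles as a feature test.
var input = document.createElement('input');
input.setAttribute('type', 'date');

if (input.type !== 'date') {
  // No native calendar control: initialize a JavaScript date picker here
  // (placeholder; any widget library could be used).
}
```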

No doubt a lot of new elements are introduced in HTML5, but that should not have any direct, considerable effect on the overall speed or performance of the website. HTML5 does introduce stricter parsing and lexing rules for handling errors, and the multimedia elements <audio> and <video> (which don't require support from third-party software), so performance and efficiency are indirectly improved.
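For instance, a short sketch of playing video natively instead of through plugin software (the file names are illustrative):

```javascript
// Pick a source the browser can decode natively; no third-party plugin is needed.
var video = document.createElement('video');
video.controls = true;

if (video.canPlayType('video/mp4')) {
  video.src = 'movie.mp4';   // illustrative file name
} else if (video.canPlayType('video/webm')) {
  video.src = 'movie.webm';  // illustrative file name
}

document.body.appendChild(video);
```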

Well, there is not a lot in HTML5 to make things faster directly, other than the new elements mentioned and maybe local storage. Instead, the reality is that most HTML5-supporting browsers are faster, some significantly. So by going to HTML5 and forcing a user upgrade, the client part of your app should be faster.
For example, look into the bleeding-edge browsers' acceleration via GPUs and better multithreading. Your client might be faster by default simply because you would end up executing in a better browser. Combined with the new features in HTML5, you may be able to speed up your pages.
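As one example of what local storage can buy you, here is a rough sketch of caching a rarely-changing resource so that repeat visits skip the network round trip (the key name and fetch function are illustrative):

```javascript
// Return cached data when present; otherwise fetch it once and cache it.
function getCached(key, fetchFn, callback) {
  var cached = localStorage.getItem(key);
  if (cached !== null) {
    callback(JSON.parse(cached));
    return;
  }
  fetchFn(function (data) {
    localStorage.setItem(key, JSON.stringify(data));
    callback(data);
  });
}
```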

Related

Looking for tools / techniques to monitor memory usage in the browser

We are building an application that makes extensive use of CSS3 transformations. Apparently, transform scale, rotate3d, and of course large images are all factors that cause a memory-usage threshold to be reached once a critical number of HTML elements are on the page.
The goal is to maximize the number of elements that can be added, with minimal compromise in CSS 3D transform features or image size/quality.
I am looking for tools / techniques to monitor the memory usage in the browser on as detailed a level as is technologically available.
Currently working with Google Chrome.
Bonus: any tips for efficiently working with images / CSS3 transformations / animations?
The best thing I found so far (in Chrome) is the Timeline feature of the Chrome developer tools.
It's amazingly rigorous and detailed. You can profile down to a very low level, including intricate details of paint events, CSS selectors, and of course JavaScript functions.
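On top of the Timeline, Chrome also exposes a non-standard performance.memory object you can poll from your own code; a rough sketch (the sampling interval is arbitrary):

```javascript
// Chrome-only and non-standard: sample the JS heap size to spot growth over time.
setInterval(function () {
  if (window.performance && performance.memory) {
    var usedMB = performance.memory.usedJSHeapSize / (1024 * 1024);
    console.log('JS heap used: ' + usedMB.toFixed(1) + ' MB');
  }
}, 5000);
```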

Performance Testing Knockout, Angular and Backbone with Selenium2 - Paul Hammant's Blog

Interesting note on the performance of these technologies. What is it saying? Which one would you choose for a project? I'm looking to use one of these technologies for a project.
http://paulhammant.com/2012/04/12/performance-testing-knockout-angular-and-backbone-with-selenium2/
I don't think that this post is conclusive in downgrading angular.js due to performance problems. So your question basically comes down to comparing these three technologies...
They solve very different kinds of problems, e.g. backbone.js is in fact only a library for building event-based MV* architectures, while knockout.js and angular.js are more opinionated frameworks. So it's really comparing apples to oranges... But people try anyway: http://codebrief.com/2012/01/the-top-10-javascript-mvc-frameworks-reviewed/
None of the frameworks are made for performance. They are made to give direction to the developer.
Backbone is by far the least performant, but even with Backbone, if it's tuned right, you can get a high FPS on tablets, mobiles, and desktops.
Rendering Performance means:
only create DOM elements once, update DOM with new model contents
use object pooling as much as possible
minimize image loading/parsing until the last minute possible
watch out for JavaScript that triggers CSS relayout
tie your render loop to the browser's paint loop
be smart about when to use GPU layers and compositing
avoid triggering the garbage collector as much as possible to maintain high frame rates
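A minimal sketch of two of these points, pooling DOM nodes and coalescing renders into the browser's paint loop, might look like this (the actual DOM update is left as a comment):

```javascript
// Reuse detached nodes instead of allocating new ones (object pooling).
var nodePool = [];

function acquireNode() {
  return nodePool.pop() || document.createElement('div');
}

function releaseNode(node) {
  if (node.parentNode) node.parentNode.removeChild(node);
  nodePool.push(node);
}

// Coalesce model changes so at most one DOM update happens per frame,
// tied to the browser's paint loop via requestAnimationFrame.
var pendingModel = null;

function scheduleRender(model) {
  var alreadyScheduled = pendingModel !== null;
  pendingModel = model;
  if (alreadyScheduled) return;  // an update is already queued for this frame
  requestAnimationFrame(function () {
    // Update existing DOM nodes with pendingModel's contents here
    // instead of rebuilding them, then clear the queue.
    pendingModel = null;
  });
}
```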
I have a PerfView on GitHub that extends Backbone to make rendering performant: https://github.com/puppybits/BackboneJS-PerfView. It can maintain 120 FPS in Chrome and 56 FPS on iPad with some real-world examples.

Why worry about minifying JS and CSS when images are typically the largest sized HTTP request?

I think my question says it all. With many successful sites using lots of user-uploaded images (e.g., Instagram), it just seems, in my perhaps naive opinion, that source-code minification might be missing the boat.
Regardless of the devices one is coding for (mobile/desktop/both), wouldn't developers' time be better spent worrying about the images they are serving up rather than their code sizes?
To optimize for mobile browsers' slower speeds, I was thinking it would probably be best to have multiple image sizes and write code to serve up the smallest ones if the user is on a phone.
Does that sound reasonable?
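Something like this rough sketch is what I have in mind (the file names and the width cutoff are made up):

```javascript
// Serve a smaller variant of the same photo on narrow (likely mobile) screens.
var img = document.createElement('img');
img.src = (window.screen.width <= 480) ? 'photo-small.jpg' : 'photo-large.jpg';
document.body.appendChild(img);
```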
First, I don't know that many developers do "worry" about minification of scripts. Plenty of public facing websites don't bother.
Second, minification is the removal of unnecessary whitespace, whereas decreasing the size of an image usually entails reducing its quality, so there is some difference.
Third, I believe that if it weren't so easy to add a minification step to a deployment process, it would be even less popular. It doesn't save much bandwidth, true, but if all it takes is a few minutes to configure a deployment script to do it, then why not?
100 KB of data for a medium-sized JS library is not lightweight. You should optimize your site as much as possible. If by minifying a script you can get it to half of its size, and then by gzipping it save another third, why wouldn't you?
I don't know how you do your minifying, but there are many scripts that automate the process and will often bundle all of your JS into one package on the fly. This saves bandwidth and hassle.
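If you want to see the actual savings for your own files, here is a quick sketch using Node's built-in zlib (the file names are illustrative, and a Node version with zlib.gzipSync is assumed):

```javascript
// Compare raw and gzipped sizes of the original and minified bundles.
var fs = require('fs');
var zlib = require('zlib');

['app.js', 'app.min.js'].forEach(function (file) {
  var raw = fs.readFileSync(file);
  var gzipped = zlib.gzipSync(raw);
  console.log(file + ': ' + raw.length + ' bytes raw, ' +
              gzipped.length + ' bytes gzipped');
});
```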
My philosophy is always "use what you need". If it is possible to save bandwidth for you and your users without compromising anything, then do it. If you can compress your images way down and they still look good, then do it. Likewise, if you have some feature that your web application absolutely must have, and it takes quite a bit of space, use it anyway.

Ember 0.9.6 performance update - Significant?

I was naturally drawn to Ember's nice API/design/syntax compared to the competitors but was very saddened to see the performance was significantly worse. (For example, see the now well known http://jsfiddle.net/samdelagarza/ntMdB/167/ .) My eyes tell me at least 4 times slower than Backbone in Chrome.
The version 0.9.6 of EmberJS apparently has many performance fixes, in particular around bindings and rendering. However the above benchmark still performs poorly when using this version of Ember.
I see the above benchmark as demonstrative of one framework's binding cost. I come from Flex, where bindings perform well enough that you don't have to constantly think about whether the 5 bindings per renderer (multiplied by maybe 20 renderers) you want to use are going to be too much overhead. Ease of use is nice, but only if good enough performance is maintained. (Even more so since HTML5 also often targets mobile.)
As it stands, I tend to think the beauty of Ember is not worth the performance hit compared to some of its competitors, as we're talking about big apps with many bindings here; otherwise you wouldn't need such a framework in the first place. I could live with Ember performing slightly less well; after all, it brings more to the table.
So my questions are fairly general and open:
Is the Ember part of the benchmark written well enough that it shows a genuine issue?
Are the 0.9.6 performance updates maybe very low-key?
Are the areas of bad performance identified by the main contributors?
This isn't really an issue of bindings being slow, but of doing more DOM updates than necessary. We have been doing some investigation into this particular issue, and we have some ideas for how to coalesce these multiple operations into one, so I do expect this to improve in the future.
That said, I can't see that this is a realistic benchmark. I would never recommend doing heavy animation in Ember (or with Backbone, for that matter). In standard app development, you shouldn't ever have to update that many different views simultaneously at that frequency.
If you can point out slow areas in a normal app, we would be very happy to investigate. Performance is of great concern to us, and if things are truly slow during normal operation, we would consider that a bug. But, like I said, performant binding-driven animation is not one of our goals, nor do I know of anyone for whom it is. Ember generally plays well with other libraries, so it should be possible to plug in an animation library to do the animations outside of Ember.

In terms of HTTP request performance, AJAX or Flash?

In terms of HTTP request performance, should I pick AJAX or Flash? To be more specific, I'm more into Flash than AJAX, and I'm currently working on a large-scale web project. I wanted to try AJAX out for once, and now it's getting too messy for me. Before it gets more complicated, I thought maybe I could run Flash in the background for HTTP requests and use it with JavaScript.
I couldn't find any benchmark on the Internet, but I think AJAX is faster than Flash. What's your personal experience? Is there much of a difference between Flash and AJAX?
Flash and JS both use the browser to send HTTP requests, so I don't see any reason there would be a difference in performance between them.
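A rough way to convince yourself is to time the same request from both sides; the JavaScript half might look like this sketch (the '/data' URL is illustrative):

```javascript
// Time a single GET round trip issued from JavaScript.
var start = Date.now();
var xhr = new XMLHttpRequest();
xhr.open('GET', '/data', true);
xhr.onload = function () {
  console.log('Round trip took ' + (Date.now() - start) + ' ms');
};
xhr.send();
```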
From my personal experience, AJAX tends to be a little faster than Flash, depending on what movie you're showing. If your movie is extremely large, then it will take longer, but for small content they're virtually equally fast; the difference is barely noticeable. However, keep in mind I'm testing on a fairly good laptop; on other devices and machines, like cellphones, the difference might be bigger (Flash would probably be slower).
Hope this helps a bit!
N.S.
I agree that AJAX is generally faster than Flash at performing a similar request, but really the speed difference should be a negligible consideration. Having the additional requirement of a Flash movie just to act as an HTTP communication tool seems like a bad idea, because you are still going to require a JavaScript solution for where Flash is unavailable.
I wonder where the proof is in any of these responses. I've used both; I started off doing lots of HTML and JS programming, used AJAX when it was first getting traction, and found it to be okay with regard to performance. AMF3 is faster than JSON, hands down. Why? Not because of differences in the HTTP standard that they both hitch a ride on, but because of the way the data itself is represented (the compression schemes used and the serialization/deserialization mechanisms make all the difference).
Go ahead and check it out for yourself: http://www.jamesward.com/census2/
(after all, the best proof is a test)
Dojo JSON with gzip compression is closest to AMF3, but it still produces a payload about 160% the size of the AMF payload; one and a half times larger is not, in my opinion, ever going to be faster assuming equivalent bandwidth. I believe that with the latest JavaScript engines, the time to deserialize the data directly in the browser versus having the Flash plugin do that work might make JSON faster for small payloads, but when it comes to large amounts of data, I don't think that processing-time difference would make up for the payload size.

Resources