Performance Testing Knockout, Angular and Backbone with Selenium2 - Paul Hammant's Blog - performance

Interesting note on the performance of these technologies. What is it really saying, and which of the three should I choose? I'm looking to pick one of these technologies for a project.
http://paulhammant.com/2012/04/12/performance-testing-knockout-angular-and-backbone-with-selenium2/

I don't think this post is conclusive in downgrading angular.js due to performance problems. So your question basically comes down to comparing these three technologies...
They solve very different kinds of problems: backbone.js is really only a library for building event-based MV* architectures, while knockout.js and angular.js are more opinionated frameworks. So it's comparing apples to oranges... But people try anyway: http://codebrief.com/2012/01/the-top-10-javascript-mvc-frameworks-reviewed/

None of the frameworks are made for performance. They are made to give direction to the developer.
Backbone is by far the least performant, but even Backbone, tuned right, can sustain a high FPS on tablets, mobiles and desktops.
Rendering performance means:
only create DOM elements once, then update the DOM with new model contents
use object pooling as much as possible
defer image loading/parsing until the last possible moment
watch out for JavaScript that triggers CSS relayout
tie your render loop to the browser's paint loop
be smart about when to use GPU layers and compositing
avoid triggering the garbage collector as much as possible to maintain high frame rates
I have a PerfView on GitHub that extends Backbone to make rendering performant: https://github.com/puppybits/BackboneJS-PerfView It can maintain 120 FPS in Chrome and 56 FPS on iPad with some real-world examples.
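A minimal sketch of a few of those points, assuming a simple list UI (the names are illustrative, not taken from PerfView): create the DOM once, buffer model changes, and flush them once per frame via requestAnimationFrame.

```javascript
// Create DOM once, then only update contents from model changes.
const rows = [];
function buildList(container, count) {
  for (let i = 0; i < count; i++) {
    const el = document.createElement('div');
    container.appendChild(el);
    rows.push(el); // pooled: reused on every frame, never recreated
  }
}

// Buffer model changes; flush them once per paint, not once per event.
const pending = new Map();
function onModelChange(index, text) {
  pending.set(index, text);
}

function renderLoop() {
  for (const [index, text] of pending) {
    rows[index].textContent = text; // write-only pass: no layout reads
  }
  pending.clear();
  requestAnimationFrame(renderLoop); // tie rendering to the paint loop
}

buildList(document.body, 100);
requestAnimationFrame(renderLoop);
```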

Related

Which is the better way to lazy load: the IntersectionObserver API or the CSS content-visibility rule?

We are trying to optimize the LCP of our site by implementing infinite scroll on our web pages, either by lazy loading with the IntersectionObserver API or with the CSS content-visibility rule.
I need to know which one is more effective.
Thanks,
This is somewhat dependent on your site in question. I would immediately say that setting content-visibility: auto on sections below the fold is a known pattern for improving rendering performance (just be sure to use contain-intrinsic-size as well to avoid layout shifts). Letting the browser do as much as possible with just HTML and CSS will typically result in better performance (provided all is done correctly).
However, the better puzzle may be looking into what is causing the LCP measurements to be slow. If there are multiple images on the page, changing the fetchpriority of images can tell the browser to load the main image first. Looking at script order and making sure all code is deferred when possible is another easy win for LCP. Most of the time, there are other opportunities to more significantly change the LCP measurement (unless the DOM is truly massive, e.g. David Bowie's news page).
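For comparison, the IntersectionObserver route needs a little script, whereas content-visibility needs none. A minimal lazy-load sketch, assuming images carry their real URL in a data-src attribute (the attribute name and rootMargin value are illustrative):

```javascript
// Lazily swap in real image sources as placeholders approach the viewport.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src; // real URL stored in data-src
    obs.unobserve(img);        // each image only needs one load
  }
}, { rootMargin: '200px' });   // start loading slightly before visibility

document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));
```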

Real-world user interfaces which are attractive and capable of highly dynamic updates

I promise I did my best to bring this question into a format matching the SO question format. Most questions of this kind seem to get closed.
Please consider not closing this question, but instead give hints on how I could improve it.
I am doing some research on how I could implement a user interface which should meet two criteria (besides working and being clear to understand):
it should look attractive
it should be able to visualize highly dynamic data (requiring a lot of updates)
Well, the thing is I have a gaming background and a web background. In games the UI is rendered, like everything else, from the game loop, which means you can visualize data changes at 30fps and up. This would meet criterion 2.
Use cases I can imagine for the high dynamic data updates would be:
stock trading apps where the price changes (I'd guess ~10 times per second)
audio programs where the current position of the played audio track is synchronized with the display of the audio waveform
dynamically changing network graphs
real-time visualization of procedurally generated data (images, 3D models)
But developers outside of the gaming industry do not use game engines for their user interfaces.
I am trying to figure out how business software developers (in the real world) implement their user interfaces.
During the research I found those main streams:
business software libraries (user interfaces which get the job done for people who do a job)
like Windows Forms, Java Swing, Qt
they provide a lot of widgets, components (or whatever) and functionality
they are all grey-in-grey (though customizable to some limited extent)
they are very static (the layout) and react badly if you try to redraw the UI at 30fps
consumer software libraries (user interfaces which attract people in their spare time)
those seem to be implemented more and more with HTML(5), CSS(3) and JavaScript, utilizing an embedded browser
they have an attractive look and feel
even the layout can be dynamic
examples would be the Steam and Spotify client (as I learned recently)
updating an HTML UI at 30fps does not seem to be a good idea
the newcomers and the mixtures
meaning those which already try to mix these (JavaFX, Silverlight, the now-dead Flash)
or those which are new and limited to their devices (Android, iOS user interfaces)
There is always the option to write a UI system from scratch, tailored to the needs. But why reinvent the wheel, when displaying dynamic data in an attractive way is hardly a new idea?
So the question: what is used in the real world?
Perhaps someone can provide names of applications displaying dynamic data, so I can further research what they used.
I want to visualize dynamic data and avoid the 50-shades-of-grey style (although you can tint it black, like VS 2015).
The programming language does not matter. It tends to become a desktop application (since a web application seems to be impossible so far).
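For what it's worth, the usual web answer to the high-update-rate concern is to decouple data arrival from painting, game-loop style: let updates land at any rate, but draw at most once per display frame. A minimal sketch, assuming a canvas element on the page (the drawing itself is just a placeholder):

```javascript
// High-frequency data (e.g. ~10 price ticks per second) lands here at any rate.
let latest = null;
function onData(value) {
  latest = value; // overwrite; intermediate values never reach the screen anyway
}

// Paint at most once per display frame, game-loop style.
const ctx = document.querySelector('canvas').getContext('2d');
function frame() {
  if (latest !== null) {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    ctx.fillText(String(latest), 10, 20); // placeholder visualization
    latest = null;
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```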

Are basic HTML canvas drawing/animation frameworks (or at least large portions of a framework), such as processing.js, still necessary?

Allow me to explain, so that this doesn't just get marked as an "opinion-based" question.
I'm learning processing.js right now, and I can't help but notice many of the similarities in functionality with what already exists in the Canvas API of Vanilla-JS. Perhaps writing a set of large-scale animations is much more complicated in plain old Canvas than it is in processing?
I'm asking this because, as I continue to learn more about the vanilla APIs, I'm seeing a lot of new functionality added in JS over the years that is starting to (VERY SLOWLY) make certain aspects of popular frameworks no longer necessary (jQuery being a great example). I'm curious as to whether or not this is the case with Canvas and processing.js as well.
Personally, I'm trying to determine whether or not I should still be spending a lot of time in processing.js (I'm not asking you to make that decision for me though, but I just want some information that can help me decide what's best for me).
Stack Overflow allows specific non-coding questions about programming tools like ProcessingJS, but your question seems likely to be closed as too broad.
Even so, here are my thoughts...
Native Canvas versus ProcessingJS
HTML5 canvas was born with a rich set of possibilities rivaling Photoshop itself. However, native canvas is a relatively low-level tool where you must handle structuring, eventing, serialization and animation with your own code.
ProcessingJS adds structure, eventing, serialization, animation & many (amazing!) mathematical functions to native canvas. IMHO, ProcessingJS is a higher-level tool that's well worth learning.
Extending native canvas into a higher level tool instead of a low-level tool
With about 500 lines of JavaScript, you can add a reusable framework to native canvas that provides these features within a higher-level structure: eventing (including drag/drop, scaling, rotating, hit testing, etc.) and serialization/deserialization.
With about 100 more lines you can add a reusable framework to native canvas that does animation with easing.
Even though native canvas was born with most of the capabilities needed to present even complex content, it sorely lacks a PathObject. A PathObject would serialize paths to make them reusable. With about 50 lines you can create a reusable one, as sketched below.
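A rough illustration of the idea (just the core, not the full 50-line version; the class shape here is my own, not a standard API):

```javascript
// A tiny serializable path: record commands once, replay them anywhere.
class PathObject {
  constructor(commands = []) { this.commands = commands; }
  moveTo(x, y) { this.commands.push(['moveTo', x, y]); return this; }
  lineTo(x, y) { this.commands.push(['lineTo', x, y]); return this; }
  draw(ctx) {
    ctx.beginPath();
    for (const [op, x, y] of this.commands) ctx[op](x, y); // replay recorded commands
    ctx.stroke();
  }
  toJSON() { return JSON.stringify(this.commands); }
  static fromJSON(json) { return new PathObject(JSON.parse(json)); }
}

// Usage: define once, reuse on any context, round-trip through JSON.
const tri = new PathObject().moveTo(10, 10).lineTo(60, 10).lineTo(35, 50);
tri.draw(document.querySelector('canvas').getContext('2d'));
const copy = PathObject.fromJSON(tri.toJSON()); // same path, rebuilt from a string
```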
Here's a fairly useless recommendation :-p
Try to use the right tool for the job (yeah, not specifically helpful).
Learning native canvas alone will let you do maybe 70% of pixel display tasks.
Coding the extensions (above) will get you to 90%.
Using a tool like ProcessingJS will get you to 98%.
Yes, there are always about 2% edge cases where you either "can't get there" or must reduce your design requirements to accommodate coding limitations.
A slightly more specific recommendation
Since ProcessingJS merely extends native canvas, IMHO it's well worthwhile to take a few days and learn native canvas. This knowledge will let you determine the right tool for the job.
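To make "low-level" concrete: even a single moving shape on native canvas means you own the clear/update/redraw cycle yourself. A minimal sketch, assuming a canvas element on the page:

```javascript
// Native canvas gives you pixels, not objects: you redraw everything, every frame.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
let x = 0;

function tick() {
  ctx.clearRect(0, 0, canvas.width, canvas.height); // nothing persists for you
  ctx.beginPath();
  ctx.arc(x, 75, 20, 0, Math.PI * 2);
  ctx.fill();
  x = (x + 2) % canvas.width; // the manual "scene graph": one variable
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```

Tools like ProcessingJS exist largely to hide exactly this boilerplate behind a setup/draw structure.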

Ember 0.9.6 performance update - Significant?

I was naturally drawn to Ember's nice API/design/syntax compared to the competitors, but was very saddened to see that the performance was significantly worse. (For example, see the now well-known http://jsfiddle.net/samdelagarza/ntMdB/167/ .) My eyes tell me it's at least 4 times slower than Backbone in Chrome.
The version 0.9.6 of EmberJS apparently has many performance fixes, in particular around bindings and rendering. However the above benchmark still performs poorly when using this version of Ember.
I see the above benchmark as demonstrative of one framework's binding cost. I come from Flex, where bindings perform well enough that you don't have to constantly ask whether the 5 bindings per renderer (multiplied by maybe 20 renderers) you want to use are going to be too much overhead. Ease of use is nice, but only if good enough performance is maintained. (Even more so since HTML5 also often targets mobiles.)
As it stands, I tend to think the beauty of Ember is not worth the performance hit compared to some of its competitors; we're talking about big apps with many bindings here, else you wouldn't need such a framework in the first place. I could live with Ember performing slightly less well; after all, it brings more to the table.
So my questions are fairly general and open:
Is the Ember part of the benchmark written well enough that it shows a genuine issue?
Are the 0.9.6 performance updates maybe very low key?
Are the areas of bad performance identified by the main contributors?
This isn't really an issue of bindings being slow, but of doing more DOM updates than necessary. We have been doing some investigation into this particular issue and we have some ideas for how to coalesce these multiple operations into one, so I do expect this to improve in the future.
That said, I don't see this as a realistic benchmark. I would never recommend doing heavy animation in Ember (or with Backbone, for that matter). In standard app development, you shouldn't ever have to update that many different views simultaneously at that frequency.
If you can point out slow areas in a normal app, we would be very happy to investigate. Performance is of great concern to us, and if things are truly slow during normal operation, we would consider that a bug. But, like I said, performant binding-driven animation is not one of our goals, nor do I know of anyone for whom it is. Ember generally plays well with other libraries, so it should be possible to plug in an animation library to do the animations outside of Ember.
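The coalescing idea mentioned above can be sketched outside any framework: queue property changes and apply them in a single pass, so repeated binding updates cost one DOM write per element instead of one per change. This illustrates the general technique only, not Ember's internals:

```javascript
// Coalesce many logical updates into one DOM flush per element.
const dirty = new Map(); // element -> latest value
let flushScheduled = false;

function setBoundValue(el, value) {
  dirty.set(el, value); // repeated sets before the flush simply overwrite
  if (!flushScheduled) {
    flushScheduled = true;
    queueMicrotask(flush); // run once, after the current batch of changes
  }
}

function flush() {
  for (const [el, value] of dirty) el.textContent = value; // one write per element
  dirty.clear();
  flushScheduled = false;
}
```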

Can refactoring HTML 4 into HTML5 increase the performance of a website?

Can using HTML5 features increase the speed and performance of a website?
Or will it only improve the semantics, add new technology and improve the user experience?
HTML5 adds some new controls that browsers can implement natively (like calendars). Using these will improve performance over JavaScript-implemented controls (but in general, you will not notice much difference).
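A quick way to check whether a browser actually implements such a control natively is the classic input-type feature-detection idiom (a common pattern, sketched here; the fallback loader is hypothetical):

```javascript
// Browsers that don't support a given input type silently fall back to "text".
function supportsInputType(type) {
  const input = document.createElement('input');
  input.setAttribute('type', type);
  return input.type === type;
}

// Only load a JavaScript date-picker widget when the native one is missing.
if (!supportsInputType('date')) {
  // loadDatePickerWidget(); // hypothetical fallback loader
}
```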
No doubt a lot of new elements are introduced in HTML5, but that should not have any considerable direct effect on the overall speed or performance of a website. HTML5 introduces strict parsing and lexing rules to handle errors, and with the introduction of the multimedia elements <audio> and <video> (which don't require support from third-party software), performance is indirectly improved.
Well, there is not a lot in HTML5 to make things faster directly, other than the new elements mentioned and maybe local storage. Instead, the reality is that most HTML5-supporting browsers are faster, some significantly. So by going to HTML5 and forcing a user upgrade, the client part of your app should be faster.
For example, look into the bleeding-edge browsers' acceleration via GPUs and better multithreading. Your client might be faster by default simply because you would end up executing on a better browser. Combined with new features in HTML, you may be able to speed up your pages.
