Which is the better way to lazy load: the Intersection Observer API or the CSS content-visibility rule? - performance

We are trying to optimize the LCP of our site by implementing infinite scroll on our web pages, either by lazy loading with the Intersection Observer API or with the CSS content-visibility rule.
I need to know which one is more effective.
Thanks,

This is somewhat dependent on the site in question. I would immediately say that setting content-visibility: auto on sections below the fold is a known pattern for improving rendering performance (just be sure to use contain-intrinsic-size as well to avoid layout shifts). Letting the browser do as much as possible with just HTML and CSS will typically result in better performance (provided it is all done correctly).
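A minimal sketch of that CSS pattern (the class name and the placeholder height are assumptions to tune for your layout):

```css
/* Skip rendering work for sections until they near the viewport. */
.below-fold {
  content-visibility: auto;
  /* Reserve an estimated height so the scrollbar doesn't jump. */
  contain-intrinsic-size: auto 800px;
}
```

For comparison, the Intersection Observer route from the question would look roughly like this (the data-src convention and the 200px margin are assumptions):

```js
// Swap in the real image URL once the element approaches the viewport.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      entry.target.src = entry.target.dataset.src;
      obs.unobserve(entry.target);
    }
  }
}, { rootMargin: "200px" }); // begin loading a little before it's visible

document.querySelectorAll("img[data-src]").forEach((img) => observer.observe(img));
```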
However, the more valuable investigation may be into what is actually making the LCP measurement slow. If there are multiple images on the page, changing the fetchpriority of the images can tell the browser to load the main image first. Looking at script order and making sure all code is deferred where possible is another easy win for LCP. Most of the time there are other opportunities that change the LCP measurement far more significantly (unless the DOM is truly massive, e.g. David Bowie's news page).
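A minimal sketch of both wins (filenames are placeholders):

```html
<!-- Tell the browser to fetch the LCP hero image first -->
<img src="hero.jpg" fetchpriority="high" alt="Hero">

<!-- Defer non-critical scripts so they don't block rendering -->
<script src="app.js" defer></script>
```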

Related

Spritesheet or separate PNGs?

So, I'm working on a small indie game, and for that I made my own animation system. It's pretty efficient at the moment, but I have some doubts about how it'll operate after I add 20-30-100 more animations, because (as the title says) every single frame is a different image in a separate folder.
So the question is: how will this work after I add more animations? Will it cause longer load times, or worse performance? I'm not totally sure, because e.g. the file size of the same animation as a spritesheet and as separate files is almost the same.
The question you should ask yourself when making this sort of decision is: "what is the cost of switching if I choose the simpler solution now and need the complex solution later?"
In this case, switching involves:
Switching your animation system to use the spritesheet instead of the individual images. This can be really easy if you put your resource fetching and animation calls behind clean interfaces (see the sketch after this list), and is not too bad unless you do something really horrible in your code.
Getting a program that combines your individual sprites into spritesheets. A quick Google search will find dozens of simple programs that do this, and if you need to write your own for some reason, it shouldn't be too bad either.
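To make the first point concrete, here is a minimal JavaScript sketch of such an interface (the question doesn't state a language, and all names here are hypothetical); the animation code only ever calls getFrame, so swapping individual PNGs for a spritesheet later touches only this one function:

```js
// Individual-image version: one PNG per frame.
function makeFrameSource(basePath, frameCount) {
  const frames = [];
  for (let i = 0; i < frameCount; i++) {
    const img = new Image();
    img.src = basePath + "/frame_" + i + ".png"; // e.g. sprites/walk/frame_0.png
    frames.push(img);
  }
  // A spritesheet version would instead return one big image plus the
  // source rectangle for frame i -- callers never need to know.
  return { getFrame: (i) => frames[i % frameCount] };
}

const walk = makeFrameSource("sprites/walk", 8);
// In the render loop: ctx.drawImage(walk.getFrame(frameIndex), x, y);
```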
So, my non-answer answer is: you probably should not care, and if you still do, just take an hour or two and try both.

Performance Testing Knockout, Angular and Backbone with Selenium2 - Paul Hammant's Blog

Interesting note on the performance of these technologies. What do you think? Which one would you choose for a project? I'm looking at one of these technologies for a project.
http://paulhammant.com/2012/04/12/performance-testing-knockout-angular-and-backbone-with-selenium2/
I don't think this post is conclusive in downgrading angular.js due to performance problems. So your question basically comes down to comparing these three technologies...
They solve very different kinds of problems: backbone.js is really only a library for building event-based MV* architectures, while knockout.js and angular.js are more opinionated frameworks. So it's comparing apples to oranges... but people try anyway: http://codebrief.com/2012/01/the-top-10-javascript-mvc-frameworks-reviewed/
None of the frameworks are made for performance. They are made to give direction to the developer.
Backbone is by far the least performant, but even with Backbone, if it's tuned right, you can get a high FPS on tablets, mobiles, and desktops.
Rendering performance means:
only create DOM elements once, then update the DOM with new model contents (sketched after this list)
use object pooling as much as possible
minimize image loading/parsing until the last possible minute
watch out for JavaScript that triggers CSS relayout
tie your render loop to the browser's paint loop
be smart about when to use GPU layers and compositing
minimize allocations so the garbage collector doesn't cut into high frame rates
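A minimal sketch of the first and fifth points (create DOM once, tie rendering to the browser's paint loop); the element and model here are illustrative:

```js
// Create the DOM element once, up front...
const label = document.createElement("div");
document.body.appendChild(label);

let score = 0;
let dirty = true; // flip to true whenever the model changes

// ...then only update its contents, inside a requestAnimationFrame
// loop so DOM writes line up with the browser's paint cycle.
function render() {
  if (dirty) {
    label.textContent = "Score: " + score;
    dirty = false;
  }
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
```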
I have a PerfView on GitHub that extends Backbone to make rendering performant: https://github.com/puppybits/BackboneJS-PerfView It can maintain 120 FPS in Chrome and 56 FPS on iPad with some real-world examples.

Why worry about minifying JS and CSS when images are typically the largest HTTP requests?

I think my question says it all. With many successful sites built around user-uploaded images (e.g., Instagram), it just seems, in my perhaps naive opinion, that source code minification might be missing the boat.
Regardless of the devices one is coding for (mobile/desktop/both), wouldn't developers' time be better spent worrying about the images they are serving up rather than their code sizes?
To optimize for mobile browsers' slower connections, I was thinking it would probably be best to have multiple image sizes and write code to serve up the smallest one if the user is on a phone.
Does that sound reasonable?
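For reference, the multiple-sizes idea the question describes maps directly onto the srcset attribute; a minimal sketch (filenames and widths are illustrative):

```html
<!-- The browser picks the smallest candidate that still fills the slot -->
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="100vw"
     alt="Photo">
```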
First, I don't know that many developers do "worry" about minification of scripts. Plenty of public-facing websites don't bother.
Second, minification is the lossless removal of unnecessary characters, whereas decreasing the size of an image usually entails reducing its quality, so there is some difference.
Third, I believe that if it weren't so easy to add a minification step to a deployment process, it would be even less popular. It doesn't save much bandwidth, true, but if all it takes is a few minutes to configure a deployment script, then why not?
100 KB of data for a medium-sized JS library is no lightweight payload. You should optimize your site as best as possible. If by minifying a script you can get it to half of its size, and then by gzipping it save another third, why wouldn't you?
I don't know how you do your minifying, but there are many tools that automate the process and will often bundle all of your JS into one package on the fly. This saves bandwidth and hassle.
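As a rough illustration of the savings discussed above, a Node sketch (assuming the terser package is installed; app.js is a placeholder filename):

```js
const fs = require("fs");
const zlib = require("zlib");
const { minify } = require("terser"); // npm install terser

(async () => {
  const source = fs.readFileSync("app.js", "utf8");
  const minified = (await minify(source)).code;

  // Compare the three sizes: raw, minified, minified + gzipped.
  console.log("original :", source.length, "bytes");
  console.log("minified :", minified.length, "bytes");
  console.log("min+gzip :", zlib.gzipSync(minified).length, "bytes");
})();
```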
My philosophy is always "use what you need". If it is possible to save bandwidth for you and your users without compromising anything, then do it. If you can compress your images way down and they still look good, then do it. Likewise, if you have some feature that your web application absolutely must have, and it takes quite a bit of space, use it anyway.

Modernizr and page loading performance

I'm trying to decide whether it might make sense to use Modernizr.
Does covering all the edge cases that Modernizr does significantly affect page load speed?
thanks,
Tim
When you download Modernizr, you can customize it to only include the functionality that you need.
http://modernizr.com/download/
That should cover any performance concerns.
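Once the custom build is loaded, usage is a simple property check per feature; for example:

```js
// Modernizr exposes one boolean per feature included in the build.
if (Modernizr.localstorage) {
  localStorage.setItem("visited", "1");
} else {
  document.cookie = "visited=1"; // fallback for older browsers
}
```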

Can refactoring HTML 4 into HTML5 increase the performance of a website?

Can using HTML5 features increase the speed and performance of a website?
Or will it only improve the semantics, add new technology, and enhance the user experience?
HTML5 adds some new controls that browsers can implement natively (like calendars). Using these will improve performance over JavaScript-implemented controls (though in general you will not notice much difference).
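For example, a native date control needs no script at all, where the JavaScript equivalent ships an entire date-picker widget:

```html
<!-- Rendered natively by the browser; no JS date-picker library required -->
<input type="date" name="arrival">
```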
No doubt a lot of new elements are introduced in HTML5, but that alone should not have any considerable direct effect on the overall speed or performance of a website. HTML5 introduces strict parsing and lexing rules to handle errors, and with the introduction of the multimedia elements <audio> and <video> (which don't require support from third-party software), performance is indirectly improved.
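For instance, native playback without any third-party plugin (filenames are placeholders):

```html
<video src="intro.mp4" controls width="640"></video>
<audio src="theme.mp3" controls></audio>
```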
Well, there is not a lot in HTML5 to make things faster directly, other than the new elements mentioned and maybe local storage. Instead, the reality is that most HTML5-supporting browsers are faster, some significantly. So by going to HTML5 and forcing a user upgrade, the client part of your app should be faster.
For example, look at bleeding-edge browsers' acceleration via GPUs and better multithreading. Your client might be faster by default simply because you would end up executing on a better browser. Combined with the new features in HTML, you may be able to speed up your pages.
