I have an LCP issue that I'm trying to make sense of.
In my page template, there is a div which is the last DOM element in the body. It is meant as an accept/decline cookies toggle, so while it's the last DOM element, it is positioned via CSS to always be in the viewport on the initial page load, until acceptance has been given.
When I run Lighthouse locally, I see that it's the final LCP item. Around the time we implemented it, our LCP scores dropped significantly.
In this scenario, because it's the last DOM element in the body, does that mean that the CSS has to be applied to all preceding elements before it then gets to that element? And if so, would it help to move that element to be the first DOM element in the body?
For general advice about optimizing LCP, see https://web.dev/optimize-lcp.
But since the question is specifically about the LCP of a cookie consent dialog, this guide is probably the best place to start: https://web.dev/cookie-notice-best-practices/. There's a lot of great advice in there about how to load these dialogs performantly. DOM order plays a very small role, if any at all.
I'm testing an app that uses drag-drop to rearrange panels on the page. There are a lot of permutations that need to be tested. Starting with half a dozen panels on the page, each needs to be moved in a different direction and the resulting layout tested.
To check the final layout in each test, I need 10 to 15 traversal commands that are tedious to get right and hard to read when reviewing the test, for example:
// drag panel 2 onto the drop target
cy.get('.panel[data-cy="2"]')
  .trigger('mousedown', 'center')
  .trigger('mousemove', 'center')
cy.get('.target')
  .trigger('mouseup', 'center')

// verify the resulting layout, one traversal step at a time
cy.get('.panel[data-cy="2"]')
  .parent()
  .should('have.text', 'Panel2')
  .children()
  .should('have.length', 3)
// and so on
I would like to use something like a snapshot test to assert the structure of the final DOM.
Looking at Cypress snapshot, it has two drawbacks:
it's too black-box. The final layout is not apparent from the test nor from the Cypress log. I want the results to reflect the test assertion so that stakeholders can confirm the tests are comprehensive.
it captures too much detail: every element attribute and style, which makes it too fragile when anything is adjusted. I'd like to "filter" the snapshot to compare just the elements and attributes that are relevant to the drag feature.
As an example, the following would be ideal as an "expected" layout. Each line has a selector for the element, and the indenting represents the parent/child relationship.
There can be other elements in between, but we don't care about those and they can be ignored or filtered out.
.box[data-cy='1']
  .panel[data-cy='9']
    .pane#Pane1
  .divider
  .box[data-cy='side-panel']
    .panel[data-cy='projects-panel']
      .pane#Projects
    .divider
    .panel[data-cy='questions-panel']
      .pane#Questions
  .divider
  .panel[data-cy='2']
    .pane#analysis
    .pane#graph
    .pane#schema
    .pane#data
How do I implement an expect() (or something similar) that compares the above to the actual DOM?
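One way to get there (a rough sketch, not a ready-made Cypress API): walk the DOM yourself, serialize just the selectors you care about into an indented outline, and compare that string with the expected layout. The RELEVANT class list and the expectedLayout variable below are assumptions you would adapt to your app:

const RELEVANT = ['box', 'panel', 'pane', 'divider'];

// Serialize an element's subtree into indented selector lines, descending
// through (but not printing) elements that are not relevant to the feature.
function outline(el, depth = 0) {
  const lines = [];
  for (const child of el.children) {
    const cls = [...child.classList].find((c) => RELEVANT.includes(c));
    if (cls) {
      let line = '  '.repeat(depth) + '.' + cls;
      if (child.id) line += '#' + child.id;
      if (child.dataset.cy) line += `[data-cy='${child.dataset.cy}']`;
      lines.push(line, ...outline(child, depth + 1));
    } else {
      lines.push(...outline(child, depth)); // filtered out, keep descending
    }
  }
  return lines;
}

// In the test, after the drag: compare the filtered outline to the expected
// layout text (expectedLayout holds the indented block shown above).
cy.document().then((doc) => {
  expect(outline(doc.body).join('\n')).to.equal(expectedLayout);
});

Because the comparison is plain text, a failing assertion prints a readable diff of the two outlines in the Cypress log, which also addresses the black-box concern.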
Let's say I have an HTML div containing numerous form elements that are all watching model values. If I use ng-show, ng-if, or ng-switch on the div to hide it, will this stop AngularJS from doing dirty checking on the form elements and thus improve the performance of my app?
I figure that if the bound elements are not visible, then there's no need for Angular to be checking the values bound to them.
ng-show and ng-hide will only set a CSS display style and will still process the bindings. ng-if and ng-switch, however, completely remove the elements that do not apply (ng-switch comments out the non-matching cases), which in turn means the bindings in those elements are not processed. That said, I agree with Edmondo1984's reply: I doubt you should base your choices on this. Do not rewrite your ng-shows as ng-switches because of this!
You can verify this with the Chrome extension Batarang; its Performance tab shows which watchers are active.
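If you'd rather check it in code than with Batarang, a quick-and-dirty sketch like the following counts the active watchers under a scope. Note that $$watchers, $$childHead and $$nextSibling are AngularJS internals (the same data Batarang inspects), so treat this as a debugging aid only:

// Count active watchers on a scope and all of its child scopes.
function countWatchers(scope) {
  var count = (scope.$$watchers || []).length;
  for (var child = scope.$$childHead; child; child = child.$$nextSibling) {
    count += countWatchers(child);
  }
  return count;
}

// e.g. in the console (with debug info enabled), compare the count with
// the div shown versus switched out:
console.log(countWatchers(angular.element(document.body).scope()));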
In the past, I created some divs to act like articles. Now I am thinking about changing them to the HTML5 tag article...
Is there an important difference (in terms of efficiency) between using semantic HTML elements or using equivalent divs created by the user?
For example: will the browser load pages faster if they are built only with semantic HTML elements?
Short answer: No.
Long answer: maybe, if it decreases the amount of markup you use. But that's not likely.
The benefit of using semantic tags is to add more meaning to the markup, not improve performance.
Maybe. When you create a div and add styling to it, the browser needs to first interpret the element, then process the style over it, and then render it. If you use the appropriate HTML element, it would put less burden on the rendering engine.
My designer thought it was a good idea to create a transition between different pages. Essentially only the content part will reload (header and footer stay intact), and only the content div should have a transitional effect (a fade of some sort). Creating this sort of effect isn't really the problem; making Google (Analytics) happy is...
Solutions I didn't like, and why:
Load only the content div with AJAX: Google won't see any content, meaning the site will never be found, or only the parts retrieved by AJAX will be, which aren't full pages at all.
Show the transitional effect, then "redirect" the user to the designated page (capturing the click event of anchor elements): the effect is pretty much the same as just linking to another page, i.e. the user will still see the page being reloaded.
I thought of one possible solution:
When a visitor clicks a link, capture the event, load the target with AJAX, show the transitional effect in the meantime, then just rewrite the entire document with the content fetched by the AJAX request.
At least this will work and has some advantages: the page reload will look seamless no matter how slow the internet connection is; Google won't really mind, because the AJAX content is a full HTML page itself and can be crawled as is; even non-JavaScript browsers (mobile phones et al.) won't mind, they'll just reload the page.
My hesitation to implement this method is that I would be reloading an entire page using AJAX. I'm wondering if this is what AJAX is meant to do, and whether it would slow things down. Most of all, is there a better solution, e.g. my first "bad" solution but slightly different, so that Google (and Analytics) would like it?
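Roughly, what I have in mind is something like this (just a sketch, assuming jQuery):

// Intercept link clicks, fetch the full target page, show the transition
// while loading, then replace the whole document with the response.
$('a').click(function (e) {
  e.preventDefault();
  // start the transitional effect on the content div here ...
  $.get(this.href, function (html) {
    document.open();
    document.write(html); // rewrite the entire document
    document.close();
  });
});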
Thanks for your thoughts on this!
Short answer: I would not recommend loading an entire page in this manner.
Long answer: not recommended. Whilst possible, this is not really the intent of XHR/AJAX. Essentially what you're doing is replicating the native behaviour of the browser. Some of the problems you'll encounter:
Support for the Back/Forward buttons. You'll need a URI # scheme to solve this; a sketch follows this list.
The browser must parse the entire page through AJAX. This'll slow things down. E.g. if you load a block of HTML into the browser, then replace the DOM with it, only then will any scripts, CSS or images contained therein begin downloading.
Memory: the browser's not changing pages, so over time (depending on the browser) I'd expect the memory usage to increase.
Accessibility: screen readers will need to be notified whenever the page content is updated. Might not be a concern for you, but worth mentioning.
Caching: the browser would not know which page to cache (beyond the initial load).
Separation of concerns: your view is essentially broken into the server-side pieces that render the page's content, the static HTML for the page framework, and lastly the JS to combine the server piece with the browser piece. This'll make maintenance over time problematic and complex.
Integration with other components: you're already seeing problems with Google Analytics, and you may encounter issues with other components related to the timing of when the DOM is constructed.
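To illustrate the first point, a minimal sketch of the URI fragment scheme (assuming jQuery and a server that returns content-only fragments for these paths):

// Keep the current "page" in location.hash so Back/Forward still work:
// each navigation updates the hash, and this handler loads the content.
window.onhashchange = function () {
  var page = location.hash.slice(1) || 'home'; // '#about' -> 'about'
  $.get('/' + page, function (html) {
    $('#content').html(html);
  });
};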
Whether it's worth it for the page transition effect is your call, but I hope I've answered your question.
You can have AJAX and SEO: see Google's proposal.
I think you can learn something from Gmail's design.
This may be a bit strange, but I have an idea for this.
Prepare your pages to load with an 'iframe' GET parameter.
When 'iframe' is present, load the page with some JavaScript that triggers the parent's show_iframe_content().
When there is no 'iframe', just load the page as usual, with a hidden iframe element called 'preloader'.
Your goal is to make sure every one of your links opens in the 'preloader' with the additional 'iframe' GET parameter, and when the loading of the iframe finishes and it calls show_iframe_content(), you copy the content back to the parent page.
Like this: link clicked -> transition to loading phase -> iframe loaded -> show_iframe_content() called -> iframe content copied back to parent -> transition back to normal phase.
The whole thing works because when a crawler visits any of your pages, it does so without the 'iframe' GET parameter, so it can go through all your pages as normal; but when a browser is used, your links do the magic above.
This is just a sketch of it, but I'm sure it can be made right.
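To make the sketch a little more concrete (assuming jQuery, a hidden iframe with the id 'preloader', a '#content' container, and a server that renders content-only when the 'iframe' parameter is set):

// Called via parent.show_iframe_content() by the page loaded inside the
// preloader iframe once it has finished loading.
function show_iframe_content() {
  var doc = document.getElementById('preloader').contentDocument;
  document.getElementById('content').innerHTML = doc.body.innerHTML;
  // transition back to normal phase ...
}

// Open every link in the preloader, with the extra 'iframe' parameter.
$('a').click(function () {
  // transition to loading phase ...
  document.getElementById('preloader').src =
    this.href + (this.href.indexOf('?') === -1 ? '?' : '&') + 'iframe=1';
  return false;
});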
EDIT: Actually, you can do it with simple AJAX, without an iframe. The thing is, you have to modify the page after it has been loaded in the browser so that the linked content is loaded with AJAX; crawlers will still see the plain links.
Example script:
$.fn.initLinks = function() {
  $("a", this).click(function() {
    var url = $(this).attr("href");
    // transition to loading phase ...
    // The "ajax=1" POST parameter tells the server to return only the
    // content, without the header/footer
    $.post(url, "ajax=1", function(data) {
      $("#content").html(data).initLinks();
      // transition back to normal phase ...
    });
    return false;
  });
};

$(function() {
  $("body").initLinks();
});
Google Analytics can track JavaScript events as if they were pageviews; see here for implementation:
http://www.google.com/support/googleanalytics/bin/answer.py?hl=en-GB&answer=55521
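For example, with the legacy asynchronous Analytics snippet, you can push a virtual pageview after each AJAX navigation (e.g. inside the $.post callback in the example script above):

// Record the AJAX-loaded page as a pageview in (legacy async) Google Analytics
_gaq.push(['_trackPageview', url]);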
When cached, my starting page only needs to load one element (the "root document"), but then it needs some time until it's rendered completely:
[Firebug Net panel screenshot: http://www.walkner.biz/_temp/firebug_net.png]
The elements that follow are things loaded asynchronously via JavaScript.
Two questions:
Why does it take so "long" from loading the root document until the DOMContentLoaded event?
Does it make sense to load some not-so-important things asynchronously? Is it important to have the DOMContentLoaded event fire as early as possible? Unfortunately there's not much documentation about that event, but I don't think it's the moment when the page is displayed, is it?
I'm not sure YSlow is going to help him, as that will download all elements for a page and run performance tests on them, whereas swalkner's problem is how long it takes to render the HTML page itself when all other elements (images, CSS, etc.) are cached.
At least that's what I think he's saying.
In the original question you said, "The elements that follow are things loaded asynchronously via JavaScript." but then listed nothing. What is loaded?
I would suggest checking for JavaScript errors in the first instance. Then try removing your asynchronous loading calls one by one until you hit the bottleneck. In fact, remove them all: how long does the downloaded HTML take to render? Take that time and work from there.
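To put numbers on it, here's a small sketch using the Navigation Timing API (performance.timing) that measures exactly the gap the question asks about:

// Log how long the browser spends between receiving the root document and
// firing DOMContentLoaded (HTML parsing plus synchronous scripts).
window.addEventListener('DOMContentLoaded', function () {
  var t = performance.timing;
  console.log('responseEnd -> DOMContentLoaded: ' +
    (t.domContentLoadedEventStart - t.responseEnd) + ' ms');
});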
Is your HTML document very big? Does it use lots of inline styles that could be in the CSS file?
Perhaps if you posted a link to the site then people would have a look at it.