Why is Largest Contentful Paint almost 4 seconds? - performance

I don't understand which element on the mobile version of https://www.openstream.ch is considered to be the Largest Contentful Paint by Lighthouse.
When I test the current version of the website with PageSpeed Insights (currently running on Lighthouse v6.0.0), it has 90 points for desktop with 1.8s to render the Largest Contentful Paint, which seems to be the 4 images at the bottom of the viewport.
When I switch to the mobile results, it has 61 points with 3.7s to render the Largest Contentful Paint, although there is not even an image in the viewport.
On https://web.dev/lcp/ it says that the types of elements currently considered for Largest Contentful Paint are:
<img> elements
<image> elements inside an <svg> element
<video> elements (the poster image is used)
An element with a background image loaded via the url() function (as opposed to a CSS gradient)
Block-level elements containing text nodes or other inline-level text element children.
When I do the test in the latest version of Chrome for macOS (Lighthouse 5.7.1), the Largest Contentful Paint is rendered after around 0.5s.
Is this a bug in Lighthouse 6.0.0 or does it probably have another reason?

You have missed an important part of how Lighthouse (the engine behind Page Speed Insights (PSI)) works.
When running the mobile test, PSI simulates a 4x CPU slowdown and a (slow) 4G connection.
To simulate this while profiling, you need to change "Network" to "Fast 3G" (this is close enough; for more accuracy you can set up a custom profile for 4G that matches what PSI uses - see below) and change "CPU" to "4x slowdown".
If you run the profiling again with these settings you will see that it takes just over 3 seconds to render the page, and very shortly after that the fonts load in, which is when the Largest Contentful Paint happens.
The image below shows the location of the settings (on the right-hand side of the Performance panel).
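As an aside, if you want to confirm exactly which element is being treated as the Largest Contentful Paint, you can log the candidates from the DevTools console with the standard PerformanceObserver API. A minimal sketch (my own suggestion, not part of the PSI tooling):

    // Log every LCP candidate the browser reports; the last entry
    // emitted before user input is the element that gets scored.
    new PerformanceObserver((entryList) => {
      for (const entry of entryList.getEntries()) {
        // entry.element is the DOM node currently considered the LCP
        console.log('LCP candidate at', entry.startTime, 'ms:', entry.element);
      }
    }).observe({ type: 'largest-contentful-paint', buffered: true });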
How to set your network profile the same as Page Speed Insights
Page Speed Insights uses the same settings as Lighthouse, as I said earlier. You can find the config they currently use here.
From this we can see that they use throttling.mobileSlow4G as the default, which can be set as follows:
Download: 1.6 * 1024 * 0.9 = 1474 kbps.
Upload: 750 * 0.9 = 675 kbps.
Latency: 150 * 3.75 = 562.5 ms.
If you set up a profile (go to "Network" -> "Custom" -> "Add") with the above settings and call it something like "Page Speed Insights Throttle Settings" you can then use this to run a very similar profile.
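If you would rather script the whole thing, the same values can be passed to the Lighthouse Node module. A rough sketch, assuming the config keys documented for Lighthouse 6 (treat them as an assumption if your version differs):

    // npm install lighthouse chrome-launcher
    const lighthouse = require('lighthouse');
    const chromeLauncher = require('chrome-launcher');

    (async () => {
      const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
      const config = {
        extends: 'lighthouse:default',
        settings: {
          onlyCategories: ['performance'],
          throttlingMethod: 'simulate', // what PSI does by default
          throttling: {
            rttMs: 150,                 // the mobileSlow4G values from above
            throughputKbps: 1.6 * 1024,
            cpuSlowdownMultiplier: 4,
          },
        },
      };
      const result = await lighthouse('https://www.openstream.ch', { port: chrome.port }, config);
      console.log('LCP:', result.lhr.audits['largest-contentful-paint'].displayValue);
      await chrome.kill();
    })();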
An alternative way to get the profiling data
The audit that is produced in Developer Tools > Audits can be made to generate an accurate trace by clicking on the settings gear (top right of the Audits panel) and unchecking "Simulated throttling" in the small bar that appears.
If you then run your audit (it will take longer, as it applies the throttling now instead of simulating it), you can access the trace that matches your report by clicking the "View Trace" button, which is located just above the photo timeline / below "Time to Interactive".

Related

Lighthouse Field data has not been updated in a very long time, what should I do?

Due to a UX design mistake, I got a very poor CLS score. However, I fixed the mistake over a month ago.
But the Field Data still has not been updated.
What should I do to force update the Field data?
What should I do to force update the Field data?
You can't, I am afraid.
However if you want to see your Cumulative Layout Shift data in real time (to catch any problems / confirm fixes early) you can use the "web vitals library" - please see the final level 3 heading ("Tracking using JavaScript for real time data") at the end of this answer.
What is actually going on here?
Field data is calculated on a 28-day rolling basis, so if you made the change over a month ago and the score still has not improved, the problem most likely still persists in the field.
Just because the lab tests yield a 0 Cumulative Layout Shift does not mean that is the case in the field.
In field data the Cumulative Layout Shift (CLS) is calculated (and accumulates) until the page reaches unload. (See this answer from Addy Osmani, who works at Google on Lighthouse, the engine behind Page Speed Insights).
Because of this, you could have issues further down the page, or issues that only occur after a certain amount of time, causing layout shifts that would not be picked up by automated tests.
This means that if layout shifts occur once you scroll the page (due to lazy loading not working effectively for example) it will start affecting the CLS field data.
Also bear in mind that field data is collected across all devices.
Probable Causes
Here are a couple of probable causes:
Screen sizes
Just because the site doesn't show a CLS on the mobile and desktop sizes that Page Speed Insights uses does not mean that CLS does not occur at different sizes. It could be that tablets or certain mobile screen widths cause an item to "jump around" the screen.
JavaScript layout engines
Another possible cause is using JavaScript for layout. Looking at your "Time to Interactive" and "Total Blocking Time" I would guess your site is using JavaScript for layout / templating (as they are both high, indicating a large JavaScript payload).
Bear in mind that if your end users are on slower machines (both desktop and mobile) then a huge JavaScript payload may also be having a severe effect on layout shifts as the page is painted.
Fonts
Font swapping causes a lot of CLS issues: as a new font is swapped in, it can cause word wrapping to change and therefore change the height (and width, if the width is not fixed / fluid) of containers.
If for some reason your font is slow to load in or is very late in the load order this could be causing large CLS.
Yet again this is likely to be on slower connections such as 4G where network latency can cause issues. Automated tests may not pick this up as they throttle loading based on a simulation (via an algorithm), rather than applying throttling (actually applying latency and throughput slowdown) to each request.
Additionally if you are using font-icons such as font-awesome then this is a highly probable cause of CLS. If this is the cause then use inline SVGs instead.
Identifying the issue
Here is a question (and answer, as nobody else answered me) I created on how to identify problems with CLS. The answer I gave to my own question was the most effective way to identify the problem that I could find, though I am still hopeful someone will improve upon it as more people get used to correcting CLS issues. The same principle works for finding font word-wrapping issues.
If the issue is JavaScript related as I suspect then changing the CPU slowdown in Developer tools will allow you to spot this.
Go to Developer Tools -> Performance -> Click the "gear" icon if needed on the top right -> "CPU". Set it to 6x slowdown.
Then go to the "rendering" tab and switch on "Paint Flashing", "Layout Shift Regions" and "Layer borders". You may have to enable the "rendering" tab using the 3 vertical dots drop down to the left of the lower panel menu bar.
Now reload your page and look for any issues as you start navigating the page. Keep a close eye out for any blue flashes as they are highlighting items that were shifted. I have found that once I spot a potential shift it is useful to toggle the first two options on and off individually and repeat the action as sometimes layout shifts are not as noticeable as repaints but both together can be confusing.
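If you prefer catching the culprits from the console rather than by eye, the standard layout-shift PerformanceObserver will name the nodes that moved (the sources property requires a reasonably recent Chrome). A sketch:

    // Log each layout shift and the DOM nodes that moved.
    new PerformanceObserver((entryList) => {
      for (const entry of entryList.getEntries()) {
        if (entry.hadRecentInput) continue; // shifts caused by user input don't count
        console.log('Layout shift, score:', entry.value);
        for (const source of entry.sources || []) {
          console.log('  shifted node:', source.node);
        }
      }
    }).observe({ type: 'layout-shift', buffered: true });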
So do I have to wait 28 days to see if I have fixed the problem?
No, if you watch your CLS score for about 7 days after a fix you will see a slow and steady improvement as people "in the red" disappear from the rolling 28 day average.
If your percentage in the red drops from 22% to below 18% after 7 days then the odds are you have fixed the issue (you would also see a similar drop for people "in the orange").
The actual CLS number (0.19 in your screenshot) may not change until after 28 days so ignore that unless it jumps upwards.
Tracking using JavaScript for real time data
You may want to check out The web vitals library and implement your own tracking of CLS (and other key metrics), this way you can have real-time user data instead of waiting for the Field Data to update.
I have only just started playing with this myself but so far it seems pretty straightforward. I am currently trying to implement my own endpoint for the data rather than Google Analytics so I can have real-time data under my control. If I get that sorted before the bounty runs out I will update the answer accordingly.
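For reference, a minimal sketch of that idea. The onCLS function is from the web-vitals package (older versions exported getCLS instead, so check your version), and /vitals is a hypothetical endpoint you would implement yourself:

    // npm install web-vitals
    import { onCLS } from 'web-vitals';

    // Report the final CLS value to your own endpoint instead of
    // Google Analytics; sendBeacon still delivers during page unload.
    onCLS((metric) => {
      const body = JSON.stringify({ name: metric.name, value: metric.value });
      navigator.sendBeacon('/vitals', body); // hypothetical endpoint
    });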
What should I do to force update the Field data?
I am not sure if YOU could do anything to change this data, as it is collected via the Chrome User Experience Report, as mentioned here:
The Chrome User Experience Report is powered by real user measurement of key user experience metrics across the public web, aggregated from users who have opted-in to syncing their browsing history, have not set up a Sync passphrase, and have usage statistic reporting enabled
As for your question about why it is not being updated, and why your lab data shows 0 CLS while the field data does not: again, it depends on a variety of factors. Lab data basically comes from running the report in a controlled environment (mostly your machine), while field data is the aggregated result from a variety of users with a variety of networks and devices, and most likely they will not be the same as yours unless your targeted audience uses a similar network and device to the one on which the lab report was run.
You can find a few similar threads by searching the Webmasters forum.

How to optimize Raster (Rasterizer Thread) as seen in the Chrome Dev Tools Performance tab?

I'm trying to understand why these Raster processes have such a long duration, but I'm finding the documentation to be sparse.
From other people's questions, I thought it might be related to the images being painted, or JavaScript listeners, or elements being repainted due to suboptimal CSS transitions, but removing the images, JavaScript, and CSS transitions didn't do the trick.
Would someone be able to point me in the right direction? How do I narrow down which elements or scripts are causing this long process? It's been two days and I'm making no headway.
Thanks!
The "Raster" section represents all activities related to painting. Any HTML page, after all, is an "image". The browser converts your DOM and CSS to the image to display it on a screen. You can read about it here. So even if you don't have any image on the page you still would see as a minimum one rasterizer thread in "Raster" which represents converting your HTML page to the "image".
By the way, Chrome (79.0.3945.79) provides some information about whether an image initiated this thread.
Also, you can enable "Advanced paint instrumentation" in "Performance" settings to see in more detail what is going on when the browser renders an image.
After spending some hours with the same, I believe that the 4 ugly green rectangles called "Rasterize paint" are a bug in the profiler DISPLAY. My suspicion is based on:
1) The rectangles start some seconds after the profiler started, NOT after the page loaded, so it seems they are bound to the profiler, not to the page.
2) The starting point of the rectangles depends on the size of the profiling timeframe. If I capture 3 seconds it starts after ~2 secs, if I capture 30 seconds it starts after ~20 secs. So the "cpu load increase" depends on the time you press the stop button.
3) If I enable "Advanced paint instrumentation" as maksimr suggested, I can click on the rectangle to see the details, and the details show ~0.4 ms events in the "Paint profiler", just like before the death rectangles started. (see screenshot, bottom right part)
3b) I can even click on different parts of the same rectangle, resulting in different ~0.4 ms long events in the Paint profiler...

Web Dev: Strategy to maximise user website display speed

There are many specific 'display speed' amelioration techniques discussed on Stack Overflow; however, you need to be aware that a particular option exists before you can search for how to implement it.
Therefore, a guideline strategy would be useful, particularly for those new to website development.
Because this concept covers numerous individual areas, the answer may prove best to be collaborative.
My website involves the usual suspects: js, css, google-fonts, images, and I have yet to embed 3 youtube videos.
As such, it likely represents a standard web site, encompassing the needs of many users.
I propose to commence an answer that can subsequently be added to, so that a general strategy can be determined.
Web Dev: Strategy to maximise user website display speed
(work in progress)
Step 1. Compression & Caching
Enable gzip or deflate compression, and set cache parameters.
(Typically achieved in .htaccess, unless you have control over your server.)
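For anyone on a Node/Express stack instead of Apache, the equivalent is a couple of lines of middleware. A minimal sketch (the one-week cache lifetime is just an illustrative value):

    // npm install express compression
    const express = require('express');
    const compression = require('compression');

    const app = express();
    app.use(compression()); // gzip/deflate responses on the fly
    app.use(express.static('public', {
      maxAge: '7d', // illustrative Cache-Control lifetime
    }));
    app.listen(8080);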
Step 2. Fonts
Best option is to stick with 'safe fonts'.
(Tested 'linked Google Roboto' in Dev-Tools - 2.8 seconds to download!… ditched it for zero delay.)
Step 3. Images
A single image might be larger than all other resources, therefore images must be the first in line for optimisation.
First crop the image to the absolute minimum required content.
Experiment with compression formats - photos (jpg), block colours (gif), pixels (png).
Then re-size the image (scale image) to the maximum size to be displayed on the site (controlled using css).
Reduced max-display size, allows greater jpg compression.
(if a photo site - provide a link to full size, to allow user discretion.)
Consider delayed image loading eg, LightLazyLoader where images are only loaded when the user scrolls to view more images.
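If you would rather not add a library, the underlying technique is roughly an IntersectionObserver that swaps in a data-src attribute as the image nears the viewport. A sketch (the data-src convention and 200px margin are my own choices):

    // Lazy-load any <img data-src="..."> once it approaches the viewport.
    const lazyObserver = new IntersectionObserver((entries, observer) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target;
        img.src = img.dataset.src; // start the real download
        observer.unobserve(img);
      }
    }, { rootMargin: '200px' }); // begin loading 200px before it scrolls into view

    document.querySelectorAll('img[data-src]').forEach((img) => lazyObserver.observe(img));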
Step 4. Arrange load sequence to provide initial user display
This requires the sequential positioning of linked resources.
E.g. if using responsiveslides, the script must be read before the images begin to display, otherwise many images will be displayed unstyled until the script is read.
I.e. breaking the ground rule of loading scripts late, and loading this one script early, can present a working display whilst other 'un-required elements' continue to load.
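Conversely, a simple way to push a genuinely non-critical script to the back of the queue is to inject it after the load event. A sketch (the file name is hypothetical):

    // Defer a non-critical script until everything else has loaded.
    window.addEventListener('load', () => {
      const script = document.createElement('script');
      script.src = '/js/late-extras.js'; // hypothetical non-critical resource
      document.head.appendChild(script);
    });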
Step 5. Combine CSS and JS resources according to load sequence
Simply open a css resource in Notepad++ and copy it into, and above, your 'my.css' file… and repeat for JS.
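If you would rather script this step than copy-paste in Notepad++, a small Node sketch (file names are hypothetical and must be listed in load order):

    // combine.js - concatenate stylesheets, in load order, into my.css
    const fs = require('fs');

    const parts = ['vendor/reset.css', 'vendor/slides.css', 'css/site.css']; // hypothetical
    const combined = parts.map((file) => fs.readFileSync(file, 'utf8')).join('\n');
    fs.writeFileSync('css/my.css', combined);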
Step 6. Minify the combined resources
Various programs exist to achieve this… Google tools offer this online… upload the file and download the minified version with your comments and white space stripped out.
(Remember to 'version name' the files, so that you keep your comments.)
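As a local alternative to the online tools, the terser package can minify from a Node script (a sketch, assuming terser 5's promise-based API; file names follow the 'version name' advice above):

    // npm install terser
    const fs = require('fs');
    const { minify } = require('terser');

    (async () => {
      const code = fs.readFileSync('js/my.v2.js', 'utf8'); // keep this commented version
      const result = await minify(code);
      fs.writeFileSync('js/my.v2.min.js', result.code);    // ship this stripped version
    })();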
Notes:
Keep testing the changes that you make in (say) Google Dev Tools: Ctrl+Shift+I, Tab: Network.
This graphically shows element load times, and the overall timeline until fully loaded.
Where multiple elements begin loading at the same time… this is ideal.
Typically, another group will begin loading when the previous lot have finished.
It is preferable that all elements are downloaded simultaneously.
We are advised that this can be achieved by storing the different elements in different locations, to facilitate multi-threaded downloads (where elements cannot be combined).
Comment
Using 2 Mb/s throttling, and a clean load, my 1st image display is now ready at 2 seconds (down from 15 seconds).
The 2s delay is due to other threads being occupied.
My objective is to commence the '1st image' load earlier, whilst ensuring that in less than 500ms the 2nd image is ready to be displayed.
This is a work in progress.

Window width hard limit in Opera and Firefox?

Okay... hey guys,
I hope you can help me solve this one, or maybe someone will be able to provide a comprehensible reasoning for the following.
The newer versions of the Opera and Firefox browser are forcing a reduced available width onto websites. I assume that's in order to fit their content better (with less unused space).
However, if a website's content exceeds the width of 1536 px (and your screen resolution is 1920 px in width), the available width is still capped at 1536 (a horizontal scrollbar appears).
I've prepared a demo as well:
http://r00t.li/test/opera_fittowidth_fckup.html
So I think it's a nice feature of those browsers to fit website content to the available width as it often improves the readability, but what on earth can I do if I want/need to utilize the full screen width?
I've toyed around with different meta tag viewport settings, but it didn't have any effect. I guess that's for mobile devices only.
Ok, so after some more searching I found the culprit: the DPI setting of Windows affects how these browsers display websites. For instance, with a DPI of 125%, the Opera and Firefox browsers try to apply this not just to their UI, but also to websites, by rendering the content bigger (even though the website zoom in the browser is set to 100%), effectively decreasing the available pixel width.
As a web designer, one apparently has absolutely no control over this. And even if a user takes the time to change the Windows DPI to 100%, it's not an acceptable solution. Granted, the websites look normal again, but the font size of the Windows UI is then tiny - very hard to read.
But I don't want this to become a rant, so again: the solution is to change your Windows DPI setting to 100%. This can be done like this:
Right click on the desktop and select Screen Resolution
Click on "Make text and other items larger and smaller"
Choose 100% and save your settings
Very sad that those browser developers made that decision... as if the 100 milliseconds it takes the user to hold Ctrl and tick the mouse wheel one or two times were too much.
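For what it's worth, you can at least detect the scaling from script (you still can't override it). A quick sketch:

    // 1.0 means no scaling; 1.25 corresponds to the Windows 125% DPI setting.
    console.log('Device pixel ratio:', window.devicePixelRatio);
    console.log('CSS pixels available:', window.innerWidth);                       // e.g. 1536
    console.log('Physical pixels:', window.innerWidth * window.devicePixelRatio);  // e.g. 1920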

Size limitations when "Linking within a multi-page document" with jQuery Mobile?

Link to article
A single HTML document can contain one or many 'page' containers simply by stacking multiple divs with a data-role of "page". This allows you to build a small site or application within a single HTML document; jQuery Mobile will simply display the first 'page' it finds in the source order when the page loads.
I'm wondering about the limitations of this.
I'm building a site for a gallery exhibition which contains 340 images and 19 media files (audio and video).
The exhibition is divided into 16 separate galleries, each containing anything from 4 to 90 images.
Is this method described by jQuery Mobile possible?
Obviously I'll be keeping the image size reasonable for this.
Thanks
I haven't built anything that large even in my demo time playing with jQM, but if you start to see your entire page document being more than, say, 1 MB in total size with images, HTML, CSS etc., you are going to run into longer than recommended load times. I recommend checking this with Google speed tools - it will give you a nice graph of all your resources loaded and the total page size (you will need to be on the dev channel of Google Chrome to use this tool).
I'm curious to see how usable this ends up. Let us know. If you do run into issues you may just have to find some logical points to split your document, or possibly look at using JSON & the Flickr API to load some of your image galleries.
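If you do go the JSON route, the loading side can stay small. A sketch, assuming a hypothetical /galleries/1.json endpoint and using jQuery since jQuery Mobile already depends on it:

    // Fetch one gallery's image list on demand instead of embedding all 340 images.
    $.getJSON('/galleries/1.json', function (images) { // hypothetical endpoint
      var $gallery = $('#gallery-1 .gallery-content');
      $.each(images, function (i, img) {
        $('<img>', { src: img.thumb, alt: img.title }).appendTo($gallery);
      });
    });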
