How to optimize Raster (Rasterizer Thread) as seen in the Chrome Dev Tools Performance tab? - performance

I'm trying to understand why these Raster processes have such a long duration, but I'm finding the documentation to be sparse.
From other people's questions, I thought it might be related to the images being painted, JavaScript listeners, or elements being repainted due to suboptimal CSS transitions, but removing the images, JavaScript, and CSS transitions didn't do the trick.
Would someone be able to point me in the right direction? How do I narrow down which elements or scripts are causing this long process? It's been two days and I'm making no headway.
Thanks!

The "Raster" section represents all activities related to painting. Any HTML page, after all, is an "image": the browser converts your DOM and CSS into an image to display it on screen. You can read about it here. So even if you don't have a single image on the page, you would still see at least one rasterizer thread under "Raster", representing the conversion of your HTML page into that "image".
By the way, Chrome (79.0.3945.79) shows some information about which image initiated the thread.
Also, you can enable "Advanced paint instrumentation" in the "Performance" settings to see in more detail what is going on when the browser renders the page.

After spending some hours with the same, I believe that the 4 ugly green rectangles called "Rasterize paint" are a bug in the profiler DISPLAY. My suspicion is based on:
1) The rectangles start some seconds after the profiler started, NOT after the page loaded, so they seem to be bound to the profiler, not to the page.
2) The starting point of the rectangles depends on the size of the profiling timeframe. If I capture 3 seconds it starts after ~2 secs; if I capture 30 seconds it starts after ~20 secs. So the "CPU load increase" depends on when you press the stop button.
3) If I enable "Advanced paint instrumentation" as maksimr suggested, I can click on a rectangle to see the details, and the details show ~0.4 ms events in the "Paint profiler", just like before the rectangles started. (see screenshot, bottom right part)
3b) I can even click on different parts of the same rectangle, resulting in different ~0.4 ms events in the Paint profiler...

Related

Lighthouse Field data is not updated for a very long time, what should I do?

Due to a UX design mistake, I got a very low CLS score. However, I fixed the mistake over a month ago.
But the Field Data still has not been updated.
What should I do to force update the Field data?
What should I do to force update the Field data?
You can't, I'm afraid.
However if you want to see your Cumulative Layout Shift data in real time (to catch any problems / confirm fixes early) you can use the "web vitals library" - please see the final level 3 heading ("Tracking using JavaScript for real time data") at the end of this answer.
What is actually going on here?
Field data is calculated on a 28-day rolling basis, so if you made the change over a month ago and the score has not improved, the problem most likely still persists in the field.
Just because the lab tests yield a 0 cumulative layout shift does not mean that that is the case in the field.
In field data the Cumulative Layout Shift (CLS) is calculated (and accumulates) until the page reaches unload. (See this answer from Addy Osmani, who works at Google on Lighthouse, the engine behind Page Speed Insights).
Because of this you could have issues further down the page, or issues that only occur after some time, which cause layout shifts that automated tests never pick up.
This means that if layout shifts occur once you scroll the page (due to lazy loading not working effectively for example) it will start affecting the CLS field data.
Also bear in mind that field data is collected across all devices.
Probable Causes
Here are a couple of probable causes:
Screen sizes
Just because the site doesn't show a CLS on the mobile and desktop sizes that Page Speed Insights uses does not mean that CLS does not occur at different sizes. It could be that tablets or certain mobile screen widths cause an item to "jump around" the screen.
JavaScript layout engines
Another possible cause is using JavaScript for layout. Looking at your "Time to interactive" and "Total blocking time" I would guess your site is using JavaScript for layout / templating (they are both high, indicating a large JavaScript payload).
Bear in mind that if your end users are on slower machines (both desktop and mobile) then a huge JavaScript payload may also be having a severe effect on layout shifts as the page is painted.
Fonts
Font swapping causes a lot of CLS issues: as a new font is swapped in, it can change word wrapping and therefore the height (and width, if the width is not fixed / fluid) of containers.
If for some reason your font is slow to load in or is very late in the load order this could be causing large CLS.
Yet again this is likely to be on slower connections such as 4G where network latency can cause issues. Automated tests may not pick this up as they throttle loading based on a simulation (via an algorithm), rather than applying throttling (actually applying latency and throughput slowdown) to each request.
Additionally if you are using font-icons such as font-awesome then this is a highly probable cause of CLS. If this is the cause then use inline SVGs instead.
Identifying the issue
Here is a question (and answer, as nobody answered me) that I created on how to identify problems with CLS. The answer I gave to my own question was the most effective way I could find to identify the problem, but I am still hopeful someone will improve upon it as more people get used to correcting CLS issues. The same principle works for finding font word-wrapping issues.
If the issue is JavaScript related as I suspect then changing the CPU slowdown in Developer tools will allow you to spot this.
Go to Developer Tools -> Performance -> Click the "gear" icon if needed on the top right -> "CPU". Set it to 6x slowdown.
Then go to the "rendering" tab and switch on "Paint Flashing", "Layout Shift Regions" and "Layer borders". You may have to enable the "rendering" tab using the 3 vertical dots drop down to the left of the lower panel menu bar.
Now reload your page and look for any issues as you start navigating the page. Keep a close eye out for any blue flashes as they are highlighting items that were shifted. I have found that once I spot a potential shift it is useful to toggle the first two options on and off individually and repeat the action as sometimes layout shifts are not as noticeable as repaints but both together can be confusing.
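Alongside the visual approach above, the browser's Layout Instability API can log each shift and name the DOM nodes that moved. A minimal sketch (the `significantShifts` helper name is my own; shifts caused by recent user input are filtered out because CLS excludes them):

```javascript
// Keep only the shifts that count towards CLS: entries caused by
// recent user input are excluded by the metric's definition.
function significantShifts(entries) {
  return entries.filter((entry) => !entry.hadRecentInput);
}

// In the page: observe "layout-shift" performance entries live.
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    for (const entry of significantShifts(list.getEntries())) {
      // entry.sources identifies the elements that moved.
      console.log('layout shift of', entry.value, entry.sources);
    }
  }).observe({type: 'layout-shift', buffered: true});
}
```

With `buffered: true` you also get shifts that happened before the observer was registered, which helps catch load-time shifts.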
So do I have to wait 28 days to see if I have fixed the problem?
No, if you watch your CLS score for about 7 days after a fix you will see a slow and steady improvement as people "in the red" disappear from the rolling 28 day average.
If your percentage in the red drops from 22% to below 18% after 7 days then the odds are you have fixed the issue (you would also see a similar drop for people "in the orange").
The actual CLS number (0.19 in your screenshot) may not change until after 28 days so ignore that unless it jumps upwards.
Tracking using JavaScript for real time data
You may want to check out The web vitals library and implement your own tracking of CLS (and other key metrics), this way you can have real-time user data instead of waiting for the Field Data to update.
I have only just started playing with this myself but so far it seems pretty straightforward. I am currently trying to implement my own endpoint for the data rather than Google Analytics so I can have real-time data under my control. If I get that sorted before the bounty runs out I will update the answer accordingly.
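For reference, a minimal sketch of that kind of tracking, assuming the web-vitals library is loaded globally (e.g. from a CDN) as `webVitals`, and a hypothetical `/analytics` endpoint on your own server:

```javascript
// Serialize a web-vitals metric object into the JSON body we POST.
function toPayload(metric) {
  return JSON.stringify({
    name: metric.name,   // e.g. "CLS"
    value: metric.value, // accumulated layout-shift score so far
    id: metric.id,       // unique per page load
  });
}

// In the page: report CLS to your own endpoint in real time.
if (typeof webVitals !== 'undefined') {
  webVitals.onCLS((metric) => {
    // sendBeacon survives page unload, which matters for CLS since
    // the final value is only known once the user leaves the page.
    navigator.sendBeacon('/analytics', toPayload(metric));
  });
}
```

The same pattern works for the other metrics (LCP, INP, etc.) by registering their respective callbacks.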
What should I do to force update the Field data?
I am not sure if YOU can do anything to change this data, as it is collected via the Chrome User Experience Report, as mentioned here:
The Chrome User Experience Report is powered by real user measurement of key user experience metrics across the public web, aggregated from users who have opted-in to syncing their browsing history, have not set up a Sync passphrase, and have usage statistic reporting enabled
As to why it is not updated, and why your lab data shows 0 CLS while field data does not: it depends on a variety of factors. Lab data comes from running the report in a controlled environment (mostly your machine), while field data is the result of aggregating data from a variety of users with a variety of networks and devices. The two will most likely not match unless your target audience uses a network and device similar to the one the lab report was run on.
You can find a few similar threads by searching the webmasters forum.

What's causing this excessive "Composite Layers", "Recalculate Style" and "Update Layer Tree" cycle?

I am very intrigued by the excessive number of "composite layers", "recalculate style" and then "update layer tree" events in one of our webapps. I'm wondering what's causing them here.
If you point your Chrome at one of our fast-moving streams, say https://choir.io/player/beachmonks/github, and turn on your "FPS meter", you can see that the app can achieve about 60fps most of the time when we are at the top.
However, as soon as I scroll down a few messages and leave the screen as it is, the FPS rate drops dramatically, down to around 10 or even lower. The code renders each incoming message, prepends it to the top, and scrolls the list up N px (the height of the new message) to keep the viewport position intact.
(I understand that scrollTop will invalidate the screen, but I have carefully ordered the operations to avoid layout thrashing. I am also aware of the synchronous repaint that happens every second; it's caused by jquery.sparkline but it's not relevant to this discussion.)
Here is what I see when I tried to profile it.
What do you think might be causing the large number of layer operations?
The CSS property will-change: transform on all elements needing a repaint solved the problem with too many Composite Layers for me.
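A sketch of how that might be applied, with `.stream-item` standing in for whatever selector matches the frequently repainted elements (both the selector and the `promoteToLayer` helper name are hypothetical):

```javascript
// Hint the browser to give each frequently repainted element its own
// compositor layer, so its repaints don't force the surrounding
// content to be recomposited.
function promoteToLayer(elements) {
  for (const el of elements) {
    el.style.willChange = 'transform';
  }
  return elements.length; // how many elements were promoted
}

// In the page:
// promoteToLayer(document.querySelectorAll('.stream-item'));
```

Use `will-change` sparingly: every promoted layer costs memory, so applying it to too many elements can make performance worse rather than better.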
I had the same issue. I fixed it by reducing the size of the images.
There were some thumbnails in the scrollable list. The size of each thumbnail was 3000x1800, but they were resized by CSS to 62x44. Using 62x44 images reduced the time spent in "Composite Layers".
Some info about the Composite Layers that can help
From what I see here, it says:
event: Composite Layers
Description: Chrome's rendering engine composited image layers.
for the meaning of the word composite from wikipedia
Compositing is the combining of visual elements from separate sources into single images, often to create the illusion that all those elements are parts of the same scene
so this is the process of producing the page we actually see: taking the output of decoding/resizing images, parsing HTML, and parsing CSS, and combining them into the final page

How do I make above-the-fold content load first on my website? (control the load order)

My website for architectural visualization: http://www.greenshell3d.com
I noticed on the networking tab (in incognito) that it takes 15 seconds or so to load the above-the-fold content (most notably the image slideshow).
Some of the images in the slideshow load at the very end instead of the beginning of the website load process. Now I understand the browser handles this order, but perhaps there is another way. As it stands, the bounce-rate is too high and I expect it is because of load time.
I've seen a jquery snippet on github that allows one to control the order of image loads - do you think this is a good option? I'd be glad to hear any opinions before investing the time to fix this.
Any ideas? Thanks!
You said you are interested in any opinions as well, so first some general thoughts: There is no page fold. The web that we produce content for exists in so many different screen sizes + resolutions that it’s impossible to say "The fold is below this big image!". Yes, Google changed the pagespeed insights tool to make people load stuff on top of the page first, but I think their wording there is really bad.
Now to your image loading issue:
The first thing I would recommend is to reduce the size of all the images. They seem to be around 280 - 300 kb per image and you have a few of them. Since there is a translucent overlay over them anyways you can probably get away with reducing the image quality without people noticing it (because they don’t see the image directly). Play around with the values here.
I would then look into optimizing the code for the slider to load the first image first, then the rest of the page and the other images asynchronously maybe after that. Another trick could be to increase the slide fade time from the first slide to other slides so the slider doesn’t change if the next image isn’t ready yet. You said you found a jQuery script to implement that, that’s where I’d start.
As a general guideline: the position of requests in the source code usually determines the load order of things on the page. If your images are requested by JavaScript at the end of the page, that leads to the images being loaded later than you want them to be loaded.
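One way to sketch that idea: split the slide URLs into an eager above-the-fold batch and a deferred batch that only loads after the `load` event. The URLs and the `splitSlides` helper name are placeholders.

```javascript
// Split slide URLs into an eager batch (fetched immediately) and a
// deferred batch (fetched after the window "load" event).
function splitSlides(urls, eagerCount = 1) {
  return {
    eager: urls.slice(0, eagerCount),
    deferred: urls.slice(eagerCount),
  };
}

// In the page (URLs are placeholders):
if (typeof window !== 'undefined') {
  const {eager, deferred} = splitSlides([
    '/img/slide1.jpg', '/img/slide2.jpg', '/img/slide3.jpg',
  ]);
  // Assigning to a detached Image starts the request without
  // touching the DOM, warming the cache for the slider.
  eager.forEach((url) => { new Image().src = url; });
  window.addEventListener('load', () => {
    deferred.forEach((url) => { new Image().src = url; });
  });
}
```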

Increase page loading speed

In my app, I have a panorama page which contains around 10 panorama items. Each panorama item has some path drawings, a list picker and a few input fields. The problem I'm facing is that whenever I navigate to this page the navigation is very slow due to the amount of content to initialize. If I comment out InitializeComponent(); the loading becomes fast. I thought of adding the XAML content in code, but the problem is that I have to access the input fields by their name in code, so that didn't work. Any idea how I can speed up navigation to the page? Thanks..
From the UI Guide:
Use either a single color background or an image that spans the entire panorama. If you decide to use an image, any UI image type that is supported by Silverlight is acceptable, but JPEGs are recommended, as they generally have smaller file sizes than other formats.
You can use multiple images as a background, but you should note that only one image should be displayed at any given time.
Background images should be between 480 x 800 pixels and 1024 x 800 pixels (width x height) to ensure good performance, minimal load time, and no scaling.
Consider hiding panorama sections until they have content to display.
Also, 10 PanoramaItems seems like a lot since the recommended maximum is 4. You should either cut down on the number, or hide the content until it's required. Have a read of the best practice guide for Panoramas on MSDN.
I think you could improve page performance by creating user controls for the specific panorama items, adding an empty panorama control to your page (with only the headers) and, as picypg suggests, loading these user controls when they are needed.
Another way could be that you load the first page and show this one already to the user. In the background you could start loading the other panorama items.
My suggested approach would be for the first one. Using the lazyloading principle.
I would assume that your delays are due to the number of items on the page. This will lead to a very large object graph which will take a long time to create. I'd also expect it's using lots of memory, and you have a very high fill rate which is slowing down the GPU.
Having input items/fields on PanoItems can cause UX issues if you're not careful.
That many panoItems could also cause potential navigation issues for the user.

What screen resolution should my web app target for an average non-technical users?

I noticed StackOverflow appears to be targeting screen resolution widths of 1024px or more. I also checked Amazon, NBC, MSN, & AOL which target more lay users, and they all appear to be targeting the same width.
Is 1024px the current recommended width for web apps targeting the largest cross-section of users who use default monitor resolution/browser size?
Use liquid layout. Then you can easily accommodate everyone from ~800 to ~1600 width, and with a bit more work and care even lower-resolution devices too. This also gives users at 1024 some leeway to zoom the page if they find the text too small.
Remember there'll be things like netbooks which don't have the big screens we expect today. You can get away with a horizontal scrollbar, but if you have to scroll the page just to get the main body of text in, you're lost.
Before sounding so condescending, you may want to read up on the modern user base. Netbooks. PDAs. Smartphones. Smartbooks (you do know what those are, being very sophisticated, right?). Programmers who have their screen in portrait orientation. People who stack their windows side by side. Kiosks.
UPDATE As per conversation with John, I edited the question to change the tenor a bit to reflect his original intent. However, the original paragraph that I wrote is still true: I haven't seen the latest statistics, but the days of "90% of users have AxB resolution/window size on their browser" are probably forever gone, what with wide-screen laptops and mobile devices. Makes life more exciting for UI designers :)
Having said that, to develop a really usable web site, you need to couple flowing layout with, ideally, ability to use portlets and portal framework (think My Yahoo), so people can choose the page layout most comfortable for them.
Make good use of 960.gs and you will have everything you need to start a good web site :)
The 960 Grid System is an effort to streamline web development workflow by providing commonly used dimensions, based on a width of 960 pixels. There are two variants: 12 and 16 columns, which can be used separately or in tandem.
960 GS is a lovely start for web or image work; they have a complete template for almost any good design program (Photoshop, Illustrator, Fireworks, InDesign, etc.) as well as a CSS generator and a Grid Overlay to help you with the website.
I use it and it's fantastic! check out the demo
Nettuts has a tutorial and video. WooThemes wrote a post entitled “Why we love 960.gs” and use it as a starting point for their WordPress themes. Spanish speakers can also check out tutorials by Jepser Bernardino and Miguel Angel Alvarez.
Unsophisticated? I think that's a bit of a rude way to describe the unwashed masses. I suppose everyone and their dog has a 1024px-width monitor now, thanks to the likes of Dell and others...
The maximum I would consider targeting as my "base" is 1280x1024, but I would be much more likely to go 1024x768.
That said, in my current projects I try to do a liquid layout with a min-width of 800 to accommodate netbooks and usually a max-width of around 1000px (usually 970). Of course, I also have the luxury of designing for myself, so I have the privilege of telling IE6 users that they should upgrade, which makes liquid layouts much easier to design.
Summary:
Design with your browser's inner dimensions set to 1250x668 to satisfy 92.7% of users.
I like being stats-driven. To this end, W3Schools has a nice Browser Display Statistics page, which they update periodically with new statistics on how common each screen resolution is.
As of January 2015, 92.7% of browsers visiting W3Schools pages were attached to displays larger than 1024x768, though 39.3% of all displays were limited to 768 pixels in height (or lower), mostly due to the 33% of them having 1366x768 displays.
Unfortunately, W3Schools measured screen resolution rather than the inner dimensions used for rendering web page content. It'd be real nice to get stats on users' window.innerWidth and window.innerHeight instead.
Because we don't have these, we have to reserve room for window decorations that may be larger than our own, as well as browser widgets that may further take away from the space dedicated to rendering a web site. Additionally, not all users browse the web in a maximized web browser, though I think we can ignore that if we assume lower-resolution displays will have maximized browsers.
Windows 7 seems the biggest offender at eating up screen real estate, with what I'm measuring as 30-40px for the task bar (I had to search for a screen shot, as I don't run Windows). Firefox with titlebar, menubar, bookmarks toolbar, and status bar eats another 159px while the slimmer modern FF only consumes 64px. Let's use the slim version and assume around 100px of vertical space will be lost. Maximized browsers don't appear to consume any extra horizontal space, so you really only need to account for the scroll bar, but I'd reserve a few pixels for window edges just in case, bringing us up to 30px.
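The arithmetic behind those reservations, as a quick sketch. The screen size is my assumption (a 1280-wide display paired with the 768-pixel height that the 1366x768 crowd shares); the chrome sizes are the approximations measured above.

```javascript
// Approximate browser/OS chrome measured above.
const verticalChrome = 100;  // title bar + slim Firefox UI + task bar
const horizontalChrome = 30; // scroll bar + window edges

// Assumed low-end display (1280 wide, 768 tall like 1366x768 screens).
const display = {width: 1280, height: 768};

const inner = {
  width: display.width - horizontalChrome,  // usable page width
  height: display.height - verticalChrome,  // usable page height
};
// inner works out to 1250x668, the recommendation in the summary.
```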
A few years ago (when I did more web design than I do today), I'd size my own browser to an inner size of 800x550 and made sure that most pages would not have scroll bars. Nowadays, it looks like that can be expanded to around an inner size of 1250x668.
You can check your inner size by putting this in your location bar:
javascript:alert(window.innerWidth + "x" + window.innerHeight)
Those values are read-only; you used to be able to run something like this to resize your inner dimensions, but (thanks to abusive advertisers) it no longer works:
javascript:window.resizeTo(window.outerWidth-window.innerWidth+1250,window.outerHeight-window.innerHeight+668)
One parting note: Just because you're assuming a certain size doesn't mean you shouldn't ensure that your site still works at smaller resolutions. The page can be ugly, but it must be functional!