Lighthouse Field data is not updated for a very long time, what should I do?

Due to a UX design mistake, I got a very low CLS score. I fixed the mistake over a month ago, but the Field Data still has not updated.
What should I do to force an update of the Field Data?

What should I do to force update the Field data?
You can't, I'm afraid.
However, if you want to see your Cumulative Layout Shift data in real time (to catch any problems / confirm fixes early), you can use the web-vitals library - please see the final heading ("Tracking using JavaScript for real time data") at the end of this answer.
What is actually going on here?
Field data is calculated on a 28-day rolling basis, so if you made the change over a month ago and the score has still not recovered, the problem most likely still persists in the field.
Just because the lab tests yield a cumulative layout shift of 0 does not mean that this is the case in the field.
In field data, the Cumulative Layout Shift (CLS) is calculated (and accumulates) until the page reaches unload. (See this answer from Addy Osmani, who works at Google on Lighthouse, the engine behind Page Speed Insights.)
Because of this, you could have layout shifts further down the page, or shifts that only occur after some time has passed, which would not be picked up by automated tests.
This means that if layout shifts occur once you scroll the page (because lazy loading is not working effectively, for example), they will affect the CLS field data.
Also bear in mind that field data is collected across all devices.
Probable Causes
Here are a few probable causes:
Screen sizes
Just because the site doesn't show CLS at the mobile and desktop sizes that Page Speed Insights uses does not mean that CLS does not occur at other sizes. It could be that tablets or certain mobile screen widths cause an item to "jump around" the screen.
JavaScript layout engines
Another possible cause is using JavaScript for layout. Looking at your "Time to Interactive" and "Total Blocking Time", I would guess your site is using JavaScript for layout / templating (both are high, indicating a large JavaScript payload).
Bear in mind that if your end users are on slower machines (both desktop and mobile) then a huge JavaScript payload may also be having a severe effect on layout shifts as the page is painted.
Fonts
Font swapping causes a lot of CLS issues: as a new font is swapped in, it can cause word wrapping to change, and therefore change the height (and width, if the width is not fixed / fluid) of containers.
If for some reason your font is slow to load in or is very late in the load order this could be causing large CLS.
Yet again, this is most likely to happen on slower connections such as 4G, where network latency can cause issues. Automated tests may not pick this up because they throttle loading based on a simulation (via an algorithm), rather than applying real throttling (actual latency and throughput slowdown) to each request.
Additionally, if you are using font icons such as Font Awesome, this is a highly probable cause of CLS. If this is the cause, use inline SVGs instead.
Identifying the issue
Here is a question I created (and answered myself, as nobody else did) on how to identify problems with CLS. The answer I gave was the most effective way to identify the problem that I could find, but I am still hopeful someone will improve upon it as more people get used to correcting CLS issues. The same principle works for finding font word-wrapping issues.
If the issue is JavaScript related, as I suspect, then changing the CPU slowdown in Developer Tools will allow you to spot this.
Go to Developer Tools -> Performance -> Click the "gear" icon if needed on the top right -> "CPU". Set it to 6x slowdown.
Then go to the "rendering" tab and switch on "Paint Flashing", "Layout Shift Regions" and "Layer borders". You may have to enable the "rendering" tab using the 3 vertical dots drop down to the left of the lower panel menu bar.
Now reload your page and look for any issues as you navigate it. Keep a close eye out for blue flashes, as they highlight items that were shifted. I have found that once I spot a potential shift it is useful to toggle the first two options on and off individually and repeat the action, as layout shifts are sometimes less noticeable than repaints, and both together can be confusing.
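If you prefer to catch shifts programmatically rather than visually, here is a minimal sketch using the PerformanceObserver API (Chromium-only; paste it into the DevTools console) that logs each layout shift and the nodes that moved:

// Log every layout shift and the DOM nodes that moved (Chromium only).
// Paste into the DevTools console, then reload / interact with the page.
const clsObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.hadRecentInput) continue; // input-driven shifts don't count towards CLS
    console.log('Layout shift score:', entry.value);
    for (const source of entry.sources || []) {
      console.log('  shifted node:', source.node);
    }
  }
});
clsObserver.observe({ type: 'layout-shift', buffered: true });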
So do I have to wait 28 days to see if I have fixed the problem?
No. If you watch your CLS score for about 7 days after a fix, you should see a slow and steady improvement as people "in the red" disappear from the rolling 28-day average.
If your percentage in the red drops from 22% to below 18% after 7 days then the odds are you have fixed the issue (you would also see a similar drop for people "in the orange").
The actual CLS number (0.19 in your screenshot) may not change until after 28 days so ignore that unless it jumps upwards.
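As a rough sanity check on those numbers (assuming traffic is fairly even from day to day): if a fix stops all new bad experiences, then after 7 days the rolling 28-day window still contains 21 days of pre-fix data, so the share "in the red" should fall to roughly 21/28 ≈ 75% of its previous value - e.g. from 22% to about 16.5%, which lines up with the "below 18%" figure above.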
Tracking using JavaScript for real time data
You may want to check out the web-vitals library and implement your own tracking of CLS (and other key metrics); this way you can have real-time user data instead of waiting for the Field Data to update.
I have only just started playing with this myself, but so far it seems pretty straightforward. I am currently trying to implement my own end-point for the data rather than Google Analytics, so I can have real-time data under my control. If I get that sorted before the bounty runs out I will update the answer accordingly.
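As a rough sketch of what that tracking could look like (assuming the current web-vitals API, which renamed getCLS to onCLS in v3, and a hypothetical /analytics endpoint on your own server):

import { onCLS } from 'web-vitals';

// Send every CLS update to our own collector.
// '/analytics' is a hypothetical URL - point it at your own endpoint.
function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,   // 'CLS'
    value: metric.value, // layout shift score accumulated so far
    id: metric.id,       // unique id for this page load
  });
  // sendBeacon survives page unload; fall back to fetch if unavailable.
  if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
    fetch('/analytics', { body, method: 'POST', keepalive: true });
  }
}

// reportAllChanges gives running updates rather than one final value.
onCLS(sendToAnalytics, { reportAllChanges: true });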

What should I do to force update the Field data?
I am not sure that YOU can do anything to change this data, as it is collected via the Chrome User Experience Report, as mentioned here:
The Chrome User Experience Report is powered by real user measurement of key user experience metrics across the public web, aggregated from users who have opted-in to syncing their browsing history, have not set up a Sync passphrase, and have usage statistic reporting enabled.
As for why it is not updating, and why your lab data shows 0 CLS while the field data does not: it depends on a variety of factors. Lab data comes from running the report in a controlled environment (mostly your machine), while field data is aggregated from a variety of users on a variety of networks and devices. Most likely the two will not match unless your target audience uses a similar network and device to the one the lab report was run on.
You can find a few similar threads by searching the Webmasters forum.

Related

How to set DPI scale to less than 100% on Windows 10 - With multiple displays

So I have a big 32 inch display with a resolution of 1440p, and I want to set the DPI scaling to 75% instead of 100%. But I can't find any way to do so on multiple monitors.
I currently have:
Display 1 [2560 x 1440] (Main display I want to change)
Display 2 [2560 x 1440] (This one is 27 inches so it's fine as is)
Display 3 [3840 x 2160] (Set to 100%, fine as it is)
This trick changes DPI scaling via some registry keys (LogPixels & Win8DpiScaling), but when I use it, it downscales display 3 instead of display 1.
Is there a way to get this to work? I see no reason for Microsoft to limit the scaling in displays.
Note: I have a 2070 Super; all the displays are plugged directly into the GPU via DisplayPort, with the latest available firmware at the time of writing (September 2021).
The tl;dr:
No, Windows will not let you set UI scaling below 100%.
Technical limitations aside, there are very solid user experience reasons why this probably isn't allowed (even if a stable workaround were to be discovered, most users would probably be quite unhappy with the results).
While I would love¹ to be proven incorrect, the implications of scaling at less than 100% are so fraught that this limitation is unlikely to change in the near future.
Background:
This has been the case for ages, likely since Windows first introduced the feature.
Compatibility with current software
The only ~purely technical~ reason I can think of:
The 100% scaling size likely uses the smallest base image resources (e.g. Explorer and Taskbar icons, mouse and text cursors) included in various existing Microsoft and 3rd-party applications.
User experience
Going below the 100% point may cause small UI text and icons, especially in application toolbars and the Taskbar, to be blurred to the point of ambiguity.
Those fine lines in the taskbar 'Windows' menu icon? Blurred or gone.
Taken to the extreme, the UI ~might~ become so unreadable that the user is effectively prevented from reading the text even in the 'Settings' window, and is therefore 'stuck': i.e. unable to navigate through 'Settings' to restore the original '100%' scaling mode.
(Luckily, Windows is never used to run any SCADA software where confusing two icons could theoretically cost money or lives.)
Performance:
Since those carefully-designed graphic assets don't exist, sub-100% scaling, if allowed, would also likely cause extra CPU/GPU workload - that is why only certain fixed up-sampling sizes are shown on the normal Display settings screen, and why the Advanced scaling settings screen warns that custom scaling between 100-500% is "not recommended".
That might also apply to any fixed scaling option offered below 100%, and absolutely would for custom scaling sizes.
Some people enjoy reading:
Vector-based TrueType/OpenType fonts usually contain a ~lot~ of manual tweaking / hints to enable readable display of very small point sizes.
The marketing department & friends of the C-suite
Could they implement this for a limited range of options? 90%? 75%?
Perhaps - but it's extra testing for a horrible-looking edge case.
The existence of the option, even if only available as a registry hack, might cause some people to actually use it in kiosks and other public-facing displays; this risks the same sort of bad PR as when a BSOD is seen on the 'arrivals' screen at a train station or airport monitor.
Combined with the first example below, even a 90% option could cause trouble in some environments.
Example and tutorial:
Imagine how Windows might look displayed on one of those cheapo '1080p-supported' projectors that actually only contains an imager with a native pixel resolution of, say, 1024x576 (or even 480x234).
Windows thinks it can send 1080p, since that's what the HDMI connection advertises, so it does: any text / vector content looks atrocious.
(At least in this case the user could normally² unplug the projector and reconnect to a normal monitor to restore functionality.)
See for yourself... while connected to any monitor (at that monitor's native resolution), with Windows set to 100% scaling:
Open Windows Notepad
Type or paste in any block of text
Now, use the Zoom Out command from the View menu³ five or more times in a row
While not an exact analogue, you may still see how hard it could be to read down-sampled text, even when very high-contrast (the best-case scenario).
   ¹: As someone currently typing this very answer on a 1080p connection to a 55" 4K television as a second monitor, I came across the question very much hoping this was possible. Sadly, logic intervened and killed my potential joy.
   ²: Unless the computer is actually stored somewhere locked or inaccessible, such as a NUC-style PC hidden above the false ceiling in a conference room.
   ³: Alternatively, press <CTRL>-<Minus> five or more times.

Supporting A/B tests without hurting CLS metric

We are a 3rd-party vendor that adds components / UI elements to our clients' websites. We sometimes hide/change the size of this container in run-time, based on contextual parameters or as part of A/B testing.
It is impossible for the website owner to know the final size of the element before we have all the contextual data, so the height cannot be set on the server-side.
To minimize the effect on CLS, the website owner can set an initial height for the container, but this has two issues:
It does not completely eliminate CLS, only reduces it slightly
It creates a bad UX where the page loads up with a white space which then disappears / changes height
What is the recommended approach for eliminating the CLS impact of such an element?
We sometimes hide/change the size of this container in run-time, based on contextual parameters or as part of A/B testing.
Any time you change the size of content at runtime, you risk shifting other content on the page around, and that can negatively affect the user experience. Before you spend a lot of time trying to "fix" CLS for your use case, you might want to consider whether your use case is the right experience for users.
If you cannot change your system and just want to minimize its impact on CLS, here are some options:
Collapse the area only after user input (perhaps ask users to close the container, or, wait for some other expected reason for page re-layout).
Only collapse if all affected content is outside of the viewport. It sounds like you already do this for below-the-fold content? For above-the-fold content, you may be able to simultaneously remove content and adjust the scroll position by the exact same amount (see the sketch after this list).
And, perhaps some broader alternatives are:
Don't collapse the area, but replace it with some default content that cannot fail.
Likely not an option, but maybe there are ways to delay showing content until you know whether the conditional content will be needed? This depends on how late-loading your content is, and it will trade off against loading performance if you cannot answer that test quickly...
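As a minimal sketch of the scroll-compensation idea from the second option above (the element id and removal trigger are made up for illustration):

// Collapse an A/B test container without visibly shifting the content
// the user is currently looking at. Illustrative sketch only.
function collapseWithoutShift(container) {
  const rect = container.getBoundingClientRect();
  if (rect.top >= window.innerHeight) {
    // Entirely below the viewport: removing it cannot shift visible content.
    container.remove();
  } else if (rect.bottom <= 0) {
    // Entirely above the viewport: removing it pulls everything up by its
    // height, so compensate the scroll position by the exact same amount.
    const height = rect.height;
    container.remove();
    window.scrollBy(0, -height);
  }
  // If the container is (partially) visible, leave it alone - collapsing
  // it now would produce exactly the layout shift we are trying to avoid.
}

collapseWithoutShift(document.getElementById('ab-test-slot'));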

Limit the frame rate on an aframe project

I am developing an A-Frame project on my MacBook Pro (late 2013). When running the project, the fan of my computer always spins fast, regardless of which browser I use (Firefox, Safari, Chrome) and of the project size (it also happens with a project containing just a simple a-box).
aframe-stats shows me that my project (1028244 vertices, 342748 faces) still runs at 20 fps.
Is it somehow possible to limit the frame rate to 10 fps in order to keep my computer quiet? Or is there any other way to limit the FLOP consumption of the A-Frame project? I already tried a native approach with sudo cputhrottle plugin-container 10, but that throttled the whole Firefox browser, not just the A-Frame renderer. Can I pull the brake somewhere in the JavaScript or the browser settings?
It's difficult to say without your project code. Large data sets will simply max out even a high-spec MacBook Pro. I have found it helpful to pause any rendering whenever possible to quiet the users' machines.
I personally removed automated next animation frame rendering in favor of waiting for controls and objects to change.
For example:
this.controls.addEventListener('change', function (e) { addToRenderStack(); });
A simple function addToRenderStack pushes a new value onto a list to request a render, with the expectation that the render will occur at some point in the future rather than right away. The list can also be used to log who requested the render in the call stack, and so narrow down performance hogs.
addToRenderStack places a render request in a list. In the requestAnimationFrame loop, if the list has any length, a render is called on the scene; the stack is cleared immediately rather than processed one entry at a time. If controls or animations continue to make render requests, the list will have a length again and the next requestAnimationFrame tick will render once more, in the same way.
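A minimal sketch of that pattern in plain three.js terms (the function names, and the renderer / scene / camera / controls objects, are assumptions for illustration):

// Render-on-demand: only render frames when something has asked for one.
const renderStack = [];

function addToRenderStack(reason) {
  renderStack.push(reason || 'unknown'); // 'reason' helps trace who asked when profiling
}

function loop() {
  requestAnimationFrame(loop);
  if (renderStack.length === 0) return; // nothing changed - skip this frame
  renderStack.length = 0;               // clear all pending requests at once
  renderer.render(scene, camera);       // one render covers every request
}
loop();

// Controls and animations request renders instead of rendering directly:
controls.addEventListener('change', () => addToRenderStack('controls'));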
In this way, the code only renders when absolutely required. This saved me much grinding on framerate, and the fans only come on during intensive operations and then shut down when it's complete, much like a typical 3D game experience.
Your mileage may vary depending on what's happening in your app. I work in engineering so often the view of the 3d world is stopped as an engineer examines or shows a model.

Poor ListView performance on Gluon

I have a custom ListCell implementation, shown in the picture below.
The left side, which represents the date, consists of 3 labels in a VBox. The "CounterContent" consists of the counter (a TextField for each digit, contained in an HBox) and two HBoxes containing labels for kWh, kWh/day and so on. That seems to be just too much to run performantly.
I've tried to load the data in a background task, showing a progress indicator while the task is running, but unlike on desktop, on Android the performance is very poor. Whenever I switch to the ListView, the garbage collection kicks in and blocks the UI thread, so the progress indicator never shows up.
I've tried it on a Huawei Y-300, Android 4.1.1, javafxports 8.60.6 (because javafxports 8.60.7 causes a bug, that makes TextField unusable), and on a Samsung S5 mini, Android 5+. On the Samsung phone the performance in general is way better, just like expected, because of the Ahead-of-Time compilation I guess, but there is still the garbage collection issue. Furthermore after the listview has been populated with cells, the scrolling is not very smooth.
Is the ListCell too complex, or what else could be the cause of the poor performance?
UPDATE:
After running a lot of tests it seems the unsmooth scrolling is not caused by performance problems. At least on the S5 (javafxports 8.60.7).
I removed all CSS styling and replaced the TextFields with a single label (the counter node is already a custom control (forgot about that), which lays out the TextFields in 2 Regions (not HBoxes), and the nodes of the ListCell are instantiated in the constructor). Furthermore, I switched the ListView for a CharmListView and set android.monocle.input.touchRadius=1.
None of these steps resulted in considerable improvement.
Just to clarify: in contrast to the Huawei phone, the scrolling on the S5 and Android 5+ is usable, but it's not very smooth, which makes for an unsatisfactory user experience.
On the Huawei (javafxports 8.60.6), changing the counter TextFields for a label gave a significant improvement, but not to the point where the scrolling became usable. Until I set this magical experimental switch: gluon.experimental.performance=true, which makes the ListView scrolling lightning fast (after a little warmup delay), but still not really smooth.
There are many reasons why the performance of a complex scene is reduced, so this is just a list of possible ideas that might help improving it, in any order.
ListCell
For starters, the number of nodes in the cell is really high. Notice that every single scroll you make means the full rendering of the virtual flow that holds the visible cells. And for every cell, that means recreating its content all over again.
Without viewing your code I can't tell, but you should avoid creating new instances of every node in the cell all the time: keep just one single instance, and in the updateItem method change only the content of the nodes.
Have a look at this sample. The NoteCell class is a custom cell, where a ListTile is used.
Number of nodes
Have you tried using just a Label to replace the 8 textfields and 3 boxes?
Cache
If you use images downloaded from Internet, use Gluon Charm Down Cache to avoid the same image being downloaded over and over again.
Have a look at this sample. Without the cache, even on desktop the performance is really affected.
Also use the JavaFX built-in cache for any node, trying different cache strategies.
CSS
Complex CSS requires long CPU time, so try to simplify it. You can even remove the whole CSS for a quick test, then decide what you may or may not keep.
The same goes for animations: avoid animations, transitions and even CSS effects, if possible.
Custom Control
The complex counter node could probably be replaced by a custom control that optimizes the rendering.
CharmListView
Have you tried using the Gluon Charm CharmListView control instead of the ListView?
There's a new experimental flag that you can use to test a possible optimization that might improve performance while scrolling the list. Set gluon.experimental.performance=true on the java.custom.properties file, and give it a try.
JavaFXPorts version
You mentioned you are using 8.60.6 because of the TextField bug. In this case, are your TextField nodes editable? If not, I'd suggest replacing them with other nodes and running with 8.60.7, since it contains a lot of performance improvements.
Performance tools
Use performance tools like Monitor, and use its profiling options so you can track down any possible bottleneck.
CPU
Last but not least: your mobile device specs are always critical.
Trying to render a complex scene on a Cortex A5, given that "it is the smallest, lowest cost and lowest power ARMv7 application processor", or using a very old Android 4.1.1, can't perform as well as running it on a brand new device with higher specs.
As you also mentioned, running on a Cortex A7 performs "way better". Have a look at this comparison, and find the right architecture for the job.
Anyway, there's always room for improvement, and a lot of effort is put into it. Your feedback is always welcome.

Coordinating graphic elements with streaming media

If you were watching the State of the Union address (http://www.whitehouse.gov/state-of-the-union-2013), you would have seen graphic supplements that appeared alongside the video stream of the President and served to illustrate his key points.
The video on the site is a composite of this, but during the live streaming these were handled separately.
My question is: what is the best approach for doing this, especially if one wanted very tight control of the timing of the graphics (i.e. appearing right when the point is made, not before and not long after)?
I'm wondering if any tools exist to facilitate this? I've been scouring Google, but I don't think I have the correct technical vocabulary for what I'm describing, because I'm coming up blank.
I imagine AJAX would be a good starting point, but I'm not sure how to achieve the level of control that they had, or how to handle the back end of things.
For anyone who might encounter this challenge we devised two ways to solve it:
The first is a bit Mickey Mouse: it requires that you know beforehand how many images, etc., you want to use (which in most cases you would). We wrote a script that repeatedly requests an image and inserts it into the page, and on finding that image, requests the next image in the chain.
I.e. display default image -> request image 1
then, displaying image 1 -> request image 2
etc.
From your end you can simply drop the images into a folder on your server when you are ready for them to go in. An advantage of this is that the images can be interactive, with links to other content, etc.
The big disadvantage, of course, is a lot of unnecessary requests to your page. In our case we anticipated enough traffic that it didn't seem wise. Also, there are plenty of opportunities for mistakes, and depending on how frequently your timer fires, there are likely to be timing discrepancies. A rough sketch of this approach follows below.
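Here is that sketch (the file naming scheme, element id, and poll interval are all made up for illustration):

// Poll the server for the next image in the chain; once it exists,
// swap it in and start watching for the one after it.
let nextIndex = 1;

function pollForNextImage() {
  const img = new Image();
  img.onload = () => {
    document.getElementById('live-graphic').src = img.src; // show the new slide
    nextIndex += 1; // from now on, poll for the following image
  };
  // A failed load just means the image isn't there yet; the next
  // interval tick will try again.
  img.src = '/live/slide-' + nextIndex + '.png?t=' + Date.now(); // cache-buster
}

setInterval(pollForNextImage, 5000); // check every 5 seconds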
The second costs money: we found the program Ustream (http://www.ustream.tv/producer), which gives us all the image control we require in terms of timing, with the advantage of providing support for media clips etc. And it allows you to record everything streamed.
The disadvantage is that what the user sees is an integrated video on your site, so that you have to handle links to related content and provide images (if you want your users to have access to them) separately.
Hope this comes in handy for someone.
I would still welcome any suggestions on how to make the first method more effective.
