In brief: I've got a page with Knockout code that works perfectly well in Google Chrome, Firefox, Safari, etc. But performance collapses in Internet Explorer (I tried IE10 and IE11): it takes 10 to 25 seconds to render about 150 rows.
Details: The page represents a work queue where users see their tasks. The requirement is not to use any paging on that page. Each row of the table has at least a dozen display variants (links, buttons, inputs, CSS styling, user-event handling, custom JS plugins, etc.). The average number of rows in production is 100-200+. The user is able to apply different filters and sort orders.
Things I've already tried:
reduced the number of computed properties (changed them to pureComputed where possible)
reduced the use of the template, if, and ifnot bindings (according to the profiler they are the most time-consuming); I use visible where possible (see the sketch after this list)
tried to use the knockout-fast-foreach custom binding (https://github.com/brianmhunt/knockout-fast-foreach)
profiled the code with the IE and Chrome dev tools to rule out memory leaks
profiled the code with ko.bindingReport.js (https://gist.github.com/kamranayub/65399fa247a6c182bc65)
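To illustrate the if-vs-visible trade-off from the list above, here is a minimal sketch (isEditable and title are illustrative names, not from the original code):

<!-- "if" adds/removes its descendant elements and re-applies their bindings every time the condition changes: -->
<td data-bind="if: isEditable"><input data-bind="value: title" /></td>
<!-- "visible" keeps the elements in the DOM and only toggles CSS display, which is much cheaper per row: -->
<td data-bind="visible: isEditable"><input data-bind="value: title" /></td>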
The approaches specified above made the code almost twice as fast in Chrome (according to ko.bindingReport.js). But IE is still too slow: about 10 seconds for rendering.
[Profiler screenshots: Chrome vs. Internet Explorer]
Folks, any ideas?
"The table binding provides a fast method for displaying tables of data using Knockout. table is about ten times faster than nested foreach bindings."
This claims to be 10x faster.
https://github.com/mbest/knockout-table
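Based on my reading of the repo's README, usage looks roughly like this (option names should be double-checked against the current documentation; tasks is an illustrative observable array):

<table data-bind="table: { data: tasks, columns: ['name', 'status'], header: ['Name', 'Status'] }"></table>

Note that the binding appears to gain its speed by generating plain data cells in one pass, so it may not support the per-row links, buttons, and custom plugins described in the question.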
You reduced the number of computed observables, but did you also reduce the number of plain observables? I'm not seeing a lot of editable fields. The ones that are not being edited on the page probably don't need to be observables. This has boosted my performance significantly several times.
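For example, a minimal sketch (TaskRow and its fields are illustrative, not taken from the question):

function TaskRow(data) {
    this.id = data.id;         // read-only on screen: plain property, no subscription bookkeeping
    this.title = data.title;   // plain property, bound once
    this.status = ko.observable(data.status); // the only editable field, so the only observable
}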
I need an Excel-like grid, in an attempt to convert an "application" written in Google Calc into a real application. I've got one implementation using Vaadin, but it (also) suffers from long page construction. The screenshot below uses a CSS flex grid with individual divs; with 6 weeks shown, there are over 5000 individual divs.
Constructing this page takes over 20 seconds, which is not something users will be happy about. I'm working on a version based on a table, but it does not seem to improve much. In the end the same number of cells needs to be constructed; whether they are DIVs or TDs does not seem to matter much.
Is there a way to construct such a grid in a speedier way? I'm more than happy to solve "where did the user click?" on the server side. To be aware of: besides the sheer number of cells, each also has specific content, so just getting a grid shown is not enough.
Each component (div, or something else) is managed by the server, so when you have 5000 of them it's quite slow. You need to reduce the number of components managed by the server.
I can't give you a better answer since I don't know the requirements. But the idea is to try to combine some elements.
There is an example of generating a table as a whole (instead of element by element) here: https://cookbook.vaadin.com/grid-details-table.
You can also create your own component. There is also a paid add-on, Spreadsheet, which seems to fit your needs. It's still in preview: https://vaadin.com/roadmap
The problem here is the complexity of the UI itself. Rendering 5000+ cells will be slow whatever method and whatever framework you use. There will be a huge number of elements in the DOM, and you need to load a lot of data upfront. And as you can see, the result is huge and won't fit most screens. So I would recommend redesigning the UI. Is it really necessary to show all the weeks at once? The UI's complexity will already drop a lot if you show only one week at a time and add buttons to browse the weeks forwards and backwards. But even with that optimization you will have a lot of columns, so I would consider adding another browsing direction, by day. Further knowledge of the actual purpose of the UI would naturally give more insight into how to develop it further.
I have a large federated model loaded in Autodesk Forge Viewer (~300k elements from several IFC files). I'm doing a cross-model (aggregate) selection like this:
var selection = [{model: model1, ids: [/* dbIds... */]}, {model: model2, ids: [/* dbIds... */]}, {model: model3, ids: [/* dbIds... */]}, /* etc. */];
viewer.impl.selector.setAggregateSelection(selection);
Now, given that the number of selected elements grows to 100k+, this freezes the UI for a couple of seconds; then, once all of the elements are highlighted in the viewer, its performance (fps) degrades significantly. Switching to isolation instead of selection (highlighting) improves viewer performance, but it still freezes the UI for a couple of seconds.
Are there any performance tips when doing these large selections, can the selection/isolation process be done async so the UI feels more responsive?
Cheers
I'm afraid the selection mechanism (both the state toggling and the rendering) isn't optimized for such numbers, and there's no way to optimize it yourself. This would require a code update in the internal viewer implementation.
Before we pass this to the engineering team, however, I would like to ask about your actual use case. The selection functionality is typically used to bring the user's attention to one (or a small number) of elements in the design, or to allow the user to choose one (or a small number) of elements for further processing. But selecting 100k+ elements, and then continuing to use the viewer with those 100k+ elements selected? What exactly do you need that for? Have you considered using other features of the viewer, such as the theming colors (sketched below)?
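For reference, a hedged sketch of the theming-color approach (aggregate mirrors the question's structure as an array of { model, ids } pairs; the names are illustrative):

var highlight = new THREE.Vector4(1, 0.5, 0, 1); // r, g, b in 0..1; w = intensity

aggregate.forEach(function (entry) {
    entry.ids.forEach(function (dbId) {
        viewer.setThemingColor(dbId, highlight, entry.model);
    });
});

// Remove the theming from each model when done:
aggregate.forEach(function (entry) {
    viewer.clearThemingColors(entry.model);
});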
Due to a flawed UX design, I got a very poor CLS score. However, I fixed the mistake over a month ago.
But the Field Data still has not updated.
What should I do to force update the Field data?
What should I do to force update the Field data?
You can't, I'm afraid.
However, if you want to see your Cumulative Layout Shift data in real time (to catch any problems / confirm fixes early), you can use the web-vitals library; please see the final level-3 heading ("Tracking using JavaScript for real time data") at the end of this answer.
What is actually going on here?
Field data is calculated on a 28-day rolling basis, so if you made the change over a month ago and the field data still shows a problem, real users are still experiencing layout shifts.
Just because the lab tests yield a cumulative layout shift of 0 does not mean that is the case in the field.
In field data the Cumulative Layout Shift (CLS) is calculated (and accumulates) until the page reaches unload. (See this answer from Addy Osmani, who works at Google on Lighthouse, the engine behind Page Speed Insights).
Because of this, you could have issues further down the page, or issues that only occur after some time, causing layout shifts that automated tests would not pick up.
This means that if layout shifts occur once you scroll the page (because lazy loading is not working effectively, for example), they will start affecting the CLS field data.
Also bear in mind that field data is collected across all devices.
Probable Causes
Here are a couple of probable causes:
Screen sizes
Just because the site doesn't show CLS at the mobile and desktop sizes that Page Speed Insights uses does not mean that CLS does not occur at other sizes. It could be that tablets or certain mobile screen widths cause an item to "jump around" the screen.
JavaScript layout engines
Another possible cause is using JavaScript for layout. Looking at your "Time to Interactive" and "Total Blocking Time", I would guess your site uses JavaScript for layout / templating (both are high, indicating a large JavaScript payload).
Bear in mind that if your end users are on slower machines (both desktop and mobile), a huge JavaScript payload may also have a severe effect on layout shifts as the page is painted.
Fonts
Font swapping causes a lot of CLS issues: as a new font is swapped in, it can change word wrapping and therefore the height (and width, if the width is not fixed / fluid) of containers.
If for some reason your font is slow to load in or is very late in the load order this could be causing large CLS.
Yet again, this is most likely to show up on slower connections such as 4G, where network latency can cause issues. Automated tests may not pick this up, as they throttle loading based on a simulation (via an algorithm) rather than applying real throttling (actual latency and throughput slowdown) to each request.
Additionally, if you are using font icons such as Font Awesome, this is a highly probable cause of CLS. If so, use inline SVGs instead.
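As a generic sketch (the font file path and family name are hypothetical), preloading the font and setting font-display can reduce swap-induced shifts:

<link rel="preload" href="/fonts/site-font.woff2" as="font" type="font/woff2" crossorigin>
<style>
  @font-face {
    font-family: "SiteFont";
    src: url("/fonts/site-font.woff2") format("woff2");
    font-display: swap; /* or "optional" to avoid the late swap (and its shift) entirely */
  }
</style>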
Identifying the issue
Here is a question (and answer, as nobody else answered it) that I created on how to identify problems with CLS. The answer I gave to my own question was the most effective way I could find to identify the problem, but I am still hopeful someone will improve upon it as more people get used to correcting CLS issues. The same principle works for finding font word-wrapping issues.
If the issue is JavaScript related, as I suspect, then changing the CPU slowdown in Developer Tools will allow you to spot this.
Go to Developer Tools -> Performance -> Click the "gear" icon if needed on the top right -> "CPU". Set it to 6x slowdown.
Then go to the "rendering" tab and switch on "Paint Flashing", "Layout Shift Regions" and "Layer borders". You may have to enable the "rendering" tab using the 3 vertical dots drop down to the left of the lower panel menu bar.
Now reload your page and look for any issues as you start navigating the page. Keep a close eye out for any blue flashes, as they highlight items that were shifted. I have found that once I spot a potential shift, it is useful to toggle the first two options on and off individually and repeat the action, as sometimes layout shifts are not as noticeable as repaints, and both together can be confusing.
So do I have to wait 28 days to see if I have fixed the problem?
No; if you watch your CLS score for about 7 days after a fix, you will see a slow and steady improvement as people "in the red" disappear from the rolling 28-day average.
If your percentage in the red drops from 22% to below 18% after 7 days then the odds are you have fixed the issue (you would also see a similar drop for people "in the orange").
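As a rough illustration with made-up numbers: after 7 days, about 7/28 = 25% of the rolling window is post-fix traffic, so if those visits are clean, a 22% "red" share would fall to roughly 22% x (21/28) ≈ 16.5%.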
The actual CLS number (0.19 in your screenshot) may not change until after 28 days so ignore that unless it jumps upwards.
Tracking using JavaScript for real time data
You may want to check out the web-vitals library and implement your own tracking of CLS (and other key metrics); this way you can have real-time user data instead of waiting for the Field Data to update.
I have only just started playing with this myself, but so far it seems pretty straightforward. I am currently trying to implement my own endpoint for the data, rather than Google Analytics, so I can have real-time data under my control. If I get that sorted before the bounty runs out, I will update the answer accordingly.
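A minimal sketch of such tracking (assuming web-vitals v3+, which exports onCLS; earlier versions exported getCLS; the /analytics endpoint is hypothetical):

import { onCLS } from 'web-vitals';

onCLS(function (metric) {
    // sendBeacon survives page unload, which matters because CLS keeps accumulating until unload.
    navigator.sendBeacon('/analytics', JSON.stringify(metric));
});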
What should I do to force update the Field data?
I am not sure there is anything YOU can do to change this data, as it is collected via the Chrome User Experience Report, as mentioned here:
The Chrome User Experience Report is powered by real user measurement of key user experience metrics across the public web, aggregated from users who have opted-in to syncing their browsing history, have not set up a Sync passphrase, and have usage statistic reporting enabled.
As for your question about why it is not updating, and why lab data shows 0 CLS while field data does not: it depends on a variety of factors. Lab data comes from running the report in a controlled environment (mostly your machine), while field data is aggregated from a variety of users with a variety of networks and devices. Most likely they will not match unless your target audience uses a network and device similar to the one the lab report was run on.
You can find a few similar threads by searching the webmasters forum.
I am loading a dataset of ~ 80,000 rows into a time series chart object I've created, and it's crashing my browser.
I don't think it should be a problem for d3, as this Crossfilter example demonstrates with a dataset of several hundred thousand rows. (Although in that example the data is aggregated, whereas I am graphing each point.)
I am not sure how to debug this. Chrome isn't giving me any useful messages and Google results are scarce. Any ideas?
If you are loading huge remote datasets in Chrome, this is a known issue: Chrome crashes on receiving large datasets via XHR. To solve this problem you can either receive the data in chunks or receive it over WebSockets.
It depends: if you are appending 80,000 elements to the DOM, that's huge, and I'm not surprised it crashes the browser. The Crossfilter example does indeed have several hundred thousand rows, but it performs minimal DOM manipulation due to the aggregation (as you mentioned). You might take a look at canvas instead.
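A rough sketch of the canvas approach, so that zero DOM nodes are created per point (assumes d3 v4+ scale APIs and data shaped as [{ date: Date, value: Number }, ...]; all names are illustrative):

var canvas = document.getElementById('chart');
var ctx = canvas.getContext('2d');

var x = d3.scaleTime()
    .domain(d3.extent(data, function (d) { return d.date; }))
    .range([0, canvas.width]);
var y = d3.scaleLinear()
    .domain(d3.extent(data, function (d) { return d.value; }))
    .range([canvas.height, 0]);

ctx.fillStyle = 'steelblue';
data.forEach(function (d) {
    ctx.fillRect(x(d.date), y(d.value), 2, 2); // one tiny rect per point, no DOM element
});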
You can disable browser extensions and try again. If your results are rendered in Flash or Java, disable Chrome's Java plugin; if they are shown as PDF, disable Chrome's PDF plugin and let the OS decide which program to use. It will still show up in Chrome but won't crash. See chrome://plugins/. Sometimes Chrome has two plugins for one program; disable one of them.
When using an application developed with Backbone.js, Chrome freezes for about 7-10 seconds when adding the content of a large document (retrieved via an AJAX call) to the DOM. Chrome's event timeline shows that the main issue is a single 'layout' event that takes about 6-8 seconds (measured on a modern MacBook Air, if that matters).
The content being loaded is about 800 KB of uncompressed HTML and 15,000 DOM nodes; memory usage after the content is loaded is about 30-35 MB. It's a large document, but such a long freeze just doesn't feel right.
Is such a large 'layout' time to be expected for a document like that, or is it a sign of other issues (like overly complex CSS rules, bad HTML structure, etc.)?
What other factors besides document size may impact the performance of the 'layout' event?
Besides the obvious, and probably right, solution of breaking the content into pieces, is there any trick that can make it easier for the browser to compute the layout? (I am thinking of something like placing the monster content inside an iframe or a div with fixed positioning, or avoiding specific CSS features inside the content.)
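To make the question concrete, here is a hedged sketch of the kind of trick I mean: time-slicing the insertion so that no single layout pass handles all ~15,000 nodes (the total work is similar, but the UI never freezes in one block). html is the AJAX response body and container is the target element; both names are illustrative:

var temp = document.createElement('div');
temp.innerHTML = html;
var nodes = Array.prototype.slice.call(temp.children);
var CHUNK = 50; // top-level nodes appended per frame

function appendChunk() {
    var fragment = document.createDocumentFragment();
    nodes.splice(0, CHUNK).forEach(function (node) {
        fragment.appendChild(node);
    });
    container.appendChild(fragment);
    if (nodes.length) {
        requestAnimationFrame(appendChunk);
    }
}
requestAnimationFrame(appendChunk);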