What's causing this excessive "Composite Layers", "Recalculate Style" and "Update Layer Tree" cycle?

I am very intrigued by the excessive number of "composite layers", "recalculate style" and then "update layer tree" events in one of our webapps. I'm wondering what's causing them here.
If you point your Chrome at one of our fast-moving streams, say https://choir.io/player/beachmonks/github, and turn on the "FPS meter", you can see that the app achieves about 60fps most of the time while we are at the top.
However, as soon as I scroll down a few messages and leave the screen as it is, the FPS rate drops dramatically, to around 10 or even lower. What the code does here is render each incoming message, prepend it to the top, and scroll the list up N px, where N is the height of the new message, to keep the viewport position intact.
(I understand that scrollTop will invalidate the screen, but I have carefully ordered the operations to avoid layout thrashing. I am also aware of the synchronous repaint that happens every second; it's caused by jquery.sparkline, but it's not relevant to this discussion.)
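For reference, a minimal sketch of that prepend-and-compensate pattern (element names are illustrative; the point is the write-read-write ordering, which flushes layout only once):

```typescript
// Prepend a message and adjust scrollTop so the viewport stays put.
function prependMessage(list: HTMLElement, message: HTMLElement): void {
  list.insertBefore(message, list.firstChild); // write: add the new message on top
  const added = message.offsetHeight;          // read: N, the height of the new message
  list.scrollTop += added;                     // write: shift by N px to keep the view steady
}
```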
Here is what I saw when I profiled it.
What do you think might be causing the large number of layer operations?

Setting the CSS property will-change: transform on all elements needing a repaint solved the problem of too many Composite Layers for me.
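The hint can be applied from a stylesheet or from script; a sketch, with an illustrative selector (note that each promoted layer costs memory, so scope the hint to the elements that actually repaint):

```typescript
// Promote the frequently repainted elements to their own compositor layers.
document.querySelectorAll<HTMLElement>(".message").forEach((el) => {
  el.style.willChange = "transform";
});
```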

I had the same issue. I fixed it by reducing the size of the images.
There were some thumbnails in the scrollable list. Each thumbnail was 3000x1800 but was resized by CSS to 62x44. Serving actual 62x44 images reduced the time consumed by "Composite Layers".

Some info about Composite Layers that may help. From what I see here, it says:
Event: Composite Layers
Description: Chrome's rendering engine composited image layers.
As for the meaning of the word "compositing", from Wikipedia:
Compositing is the combining of visual elements from separate sources into single images, often to create the illusion that all those elements are parts of the same scene.
So this is the process of producing the page we actually see: taking the output of decoding/resizing images and of parsing HTML and CSS, and combining it all into the final page.

Related

How to optimize Raster (Rasterizer Thread) as seen in the Chrome Dev Tools Performance tab?

I'm trying to understand why these Raster processes have such a long duration, but I'm finding the documentation to be sparse.
From other people's questions, I thought it might be related to the images being painted, JavaScript listeners, or elements being repainted due to suboptimal CSS transitions, but removing the images, JavaScript, and CSS transitions didn't do the trick.
Would someone be able to point me in the right direction? How do I narrow down which elements or scripts are causing this long process? It's been two days and I'm making no headway.
Thanks!
The "Raster" section represents all activities related to painting. Any HTML page, after all, is an "image". The browser converts your DOM and CSS to the image to display it on a screen. You can read about it here. So even if you don't have any image on the page you still would see as a minimum one rasterizer thread in "Raster" which represents converting your HTML page to the "image".
By the way, Chrome(79.0.3945.79) provides some information if an image was initiated this thread.
Also, you can enable "Advanced paint instrumentation" in "Performance" settings to see in more details what is going on when the browser renders an image
After spending some hours on the same problem, I believe that the 4 ugly green rectangles called "Rasterize paint" are a bug in the profiler DISPLAY. My suspicion is based on:
1) The rectangles start some seconds after the profiler started, NOT after the page loaded, so they seem bound to the profiler, not to the page.
2) The starting point of the rectangles depends on the size of the profiling timeframe. If I capture 3 seconds, it starts after ~2 secs; if I capture 30 seconds, it starts after ~20 secs. So the "CPU load increase" depends on when you press the stop button.
3) If I enable "Advanced paint instrumentation" as maksimr suggested, I can click on a rectangle to see the details, and the details show ~0.4 ms events in the "Paint profiler", just like before the death rectangles started. (see screenshot, bottom right part)
3b) I can even click on different parts of the same rectangle, getting different ~0.4 ms events in the Paint profiler...

Silverlight canvas freedraw underperforming

I'm making a Silverlight website which includes paint-like features, among them freehand drawing. To achieve this I used the technique described on the following website: http://codeding.com/articles/freehand-drawing-in-silverlight.
The problem is, when I run the demo project it starts to lag badly after just a few seconds of drawing. I realise that this is probably caused by the number of shapes this technique requires. However, and this is my main question:
How on earth does the demo on the website not lag no matter how much I draw, while my local project, which should have the EXACT same code, lags right away?
I tried finding something about improving canvas performance overall, but the only thing I found was turning the drawing into a static image, which is not really ideal since I use undo/redo functionality.
The number of shapes added to the Canvas shouldn't be the reason for the lagging; there must be something else, such as converting the drawing into an image for the undo/redo functionality. For undo/redo, you can save the stroke information instead of images. Creating and storing images during each undo/redo operation will consume too much memory.
A stroke is nothing but a set of points from start (mousedown event) to end (mouseup event), and a set of strokes forms a complete drawing. You can always recreate the drawing from the saved stroke information (just as you could from images). You can use a simple data structure like List<List<Point>> to store a complete drawing; this is very memory-efficient compared to creating and storing the images themselves.
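A sketch of that idea, written here in TypeScript against an HTML canvas for brevity (the answer's Silverlight List<List<Point>> maps to Point[][]; all names are illustrative):

```typescript
type Point = { x: number; y: number };
type Stroke = Point[];                 // one mousedown..mouseup trail

const strokes: Stroke[] = [];          // the complete drawing
const redoStack: Stroke[] = [];
let current: Stroke | null = null;

function beginStroke(p: Point) {
  current = [p];
  redoStack.length = 0;                // a new stroke invalidates redo history
}

function extendStroke(p: Point) {
  current?.push(p);
}

function endStroke() {
  if (current) strokes.push(current);
  current = null;
}

function undo() {
  const s = strokes.pop();
  if (s) redoStack.push(s);            // keep the stroke, never a bitmap
}

function redo() {
  const s = redoStack.pop();
  if (s) strokes.push(s);
}

// Recreate the drawing from the stored points; no images are ever saved.
function redraw(ctx: CanvasRenderingContext2D) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  for (const stroke of strokes) {
    ctx.beginPath();
    stroke.forEach((p, i) => (i === 0 ? ctx.moveTo(p.x, p.y) : ctx.lineTo(p.x, p.y)));
    ctx.stroke();
  }
}
```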

Optimize display of a large number of images (1000+) for performance and ease of use in a web application

In our web application, the users need to review a large number of images. This is my current layout. 20 images will be displayed at a time, with a pagination bar above the thumbnails. Clicking a thumbnail will show the enlarged image to the left. The enlarged image will follow the scrollbar so it's always visible. Quite simple actually.
I was wondering what the best interface would be in this scenario:
One option is to implement an infinite-scroll script which lazy-loads thumbnails as the user scrolls, removing thumbnails that are no longer visible from the DOM. But my concern with this approach is that the number of changes to the DOM will slow down the page.
Another option could be something like Google's Fastflip.
What do you think is the best approach for this application? Radical ideas welcomed.
I think the question you have to ask is: what action is the user supposed to take? What's the purpose of the site?
If "review images" entails rating every image, I'd rather go with a Fastflip-style approach where the focus is on the single image. A thumbnail gallery will distract from the desired action and might result in fewer pics being rated/reviewed.
If the focus should rather be on comparing a given image against others, I'd say try the gallery approach, although I wouldn't implement an infinite scroll with thumbnails because the user can quickly get lost in the abundance of choices. I think standard pagination (whether static or ajaxified) would be better if you choose to go this route.
Just my 2c.
If you paginate thumbnails, you can pre-generate a single image containing all the thumbnails for each page, then use an image map to handle mouseover text and clicking. This reduces the number of HTTP requests and can reduce the number of bytes sent. The separation distance between images should be minimized for this to be most efficient. It does have some disadvantages, though.
To reduce image download size at the expense of preprocessing, you can try to save each image in the format (PNG or JPG) most efficient for its contents using an algorithm like the one in ImageGuide. Similarly, if the images are poorly compressed (like JPEGs from a cell phone camera), they can be recompressed at the cost of some quality.
Once the site has some testers, you can analyze patterns in which images tend to be clicked (if a pattern exists) and preload the full-size images, or even preload all of them once the thumbnails are loaded.
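Preloading is a one-liner per image; a sketch (the URLs are made up for illustration):

```typescript
// Warm the browser cache with the full-size files so clicks feel instant.
function preloadFullSize(urls: string[]): void {
  for (const url of urls) {
    const img = new Image();
    img.src = url;          // the browser fetches and caches the file
  }
}

preloadFullSize(["/images/full/1.jpg", "/images/full/2.jpg"]);
```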
You might play with JPEG2000 images (you did say "radical ideas welcomed"), which thumbnail very easily, because the thumbnail and main image needn't be sent as separate files. This is an advantage of the compression format itself; it isn't the same as the hack of telling the browser to resize the full-size image to represent its own thumbnail.
You can take a look at Google's WebP image format.
On the server side, consider a separate image server optimized for static content delivery, perhaps using NginX or the Tux web server.
I would show the thumbnails, since the user might want to skip some of the pictures. I would also stay away from pagination of the form
<<first <previous n of x next> last>>
and go for something easier to implement and more efficient: a
load x more pictures.
button. No infinite scroll whatsoever, and, why not, even no scroll at all. Just "load x more" and "previous x".
Although this answer might be a bit unradical and boring, I'd go with exactly your suggestion of asynchronously loading the thumbnails (and of course the main picture) as they come into view. A similar technique is used by Google+ in the pane for adding people to circles. This way, you spend server resources and bandwidth only on the pictures the client actually needs. As Google+ shows, the operations on the DOM tree are fast enough and don't slow down a computer from the past few years.
You might also prebuild a few rows of the thumbnail table ahead of time with a dummy image (e.g. a "loading circle" animated GIF) and replace the image once loaded. That way, the table in view is already built and does not need to be re-rendered, as the flow elements following the table otherwise would be if no images were in there during scrolling.
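A sketch of that load-on-view-with-placeholder idea using IntersectionObserver (the selector and data-src attribute are illustrative):

```typescript
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src!;   // swap the loading spinner for the real thumbnail
    observer.unobserve(img);      // each thumbnail loads only once
  }
});

// Every thumbnail starts out showing the dummy/spinner image.
document.querySelectorAll<HTMLImageElement>("img.thumb[data-src]")
  .forEach((img) => observer.observe(img));
```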
Instead of paginating the thumbnails (as suggested by your layout scheme), you could also think about letting users filter the images by tag, theme, category, size or any other way to find their results faster.

Windows Phone 7 Image Looping

I would like to loop through a sequence of images. I have tried using a Pivot control, but I don't like the blank space between image transitions. I would prefer something that animates smoothly between images. I also looked at the LoopingSelector control, but I can't seem to set its orientation to horizontal.
I'm assuming you're interested in the kind of image viewer iOS offers, swiping right or left to navigate through the photos. If that's the case, I hate to say it, but I think you're looking at building your own control.
I think to implement it properly, these are the essential things you need to think about and address (a rough sketch of the swipe tracking follows the list):
For performance's sake, load all the images you have into MemoryStream objects and store the binary data (you can get creative with this and only keep, say, the first 10-15 images in memory, depending on how large they are; doing this would let your control support thousands of images and still perform like a champ).
Once an image is about to be on-screen, set the source of the image to the saved MemoryStream object that has the bytes loaded into it (this minimizes the work the UI thread does, keeping the control performant and responsive).
Use Manipulation events to track the delta x of the motion someone makes when swiping left to right, in order to actually move the items.
Move the images by changing their Canvas.Left property (you can go negative, I think; otherwise just make your canvas the combined width of all your images).
Look up some of the available libraries for momentum, so you can have a natural, smooth transition between images.
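As a rough illustration of the delta-x tracking above, here is the same idea sketched in TypeScript against browser pointer events; the Silverlight Manipulation events work analogously, and the element id, image width, and snap threshold are all illustrative:

```typescript
const strip = document.querySelector<HTMLElement>("#strip")!; // images laid out side by side
const imageWidth = 480;                   // width of one image in the strip
const imageCount = strip.children.length;
let startX = 0;
let index = 0;                            // image currently in view

strip.addEventListener("pointerdown", (e) => {
  startX = e.clientX;
  strip.setPointerCapture(e.pointerId);
});

strip.addEventListener("pointermove", (e) => {
  if (!strip.hasPointerCapture(e.pointerId)) return;
  const deltaX = e.clientX - startX;      // the swipe's delta x
  strip.style.transform = `translateX(${deltaX - index * imageWidth}px)`;
});

strip.addEventListener("pointerup", (e) => {
  const deltaX = e.clientX - startX;
  if (deltaX < -imageWidth / 3) index++;  // swiped left: next image
  if (deltaX > imageWidth / 3) index--;   // swiped right: previous image
  index = Math.max(0, Math.min(imageCount - 1, index));
  strip.style.transform = `translateX(${-index * imageWidth}px)`; // snap into place
  strip.releasePointerCapture(e.pointerId);
});
```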

Increase page loading speed

In my app, I have a panorama page which contains around 10 panorama items. Each panorama item has some path drawings, a list picker, and a few input fields. The problem I'm facing is that whenever I navigate to this page, the navigation is very slow due to the amount of content to initialize. If I comment out InitializeComponent();, the loading becomes fast. I thought of adding the XAML content in code, but the problem is that I have to access the input fields by their names in code, so that didn't work. Any idea how I can speed up navigation to the page? Thanks.
From the UI Guide:
Use either a single color background or an image that spans the entire panorama. If you decide to use an image, any UI image type that is supported by Silverlight is acceptable, but JPEGs are recommended, as they generally have smaller file sizes than other formats.
You can use multiple images as a background, but you should note that only one image should be displayed at any given time.
Background images should be between 480 x 800 pixels and 1024 x 800 pixels (width x height) to ensure good performance, minimal load time, and no scaling.
Consider hiding panorama sections until they have content to display.
Also, 10 PanoramaItems seems like a lot, since the recommended maximum is 4. You should either cut down on the number or hide the content until it's required. Have a read of the best-practice guide for Panoramas on MSDN.
I think you could improve page performance by creating UserControls for the specific panorama items, adding an empty panorama control to your page (with only the headers), and, as picypg suggests, loading these UserControls when they are needed.
Another way could be to load the first panorama item and show the page to the user right away, while loading the other panorama items in the background.
My suggested approach would be the first one, using the lazy-loading principle.
I would assume that your delays are due to the number of items on the page. This leads to a very large object graph which takes a long time to create. I'd also expect it to use lots of memory, and the very high fill rate will slow down the GPU.
Having input items/fields on PanoramaItems can cause UX issues if you're not careful.
That many PanoramaItems could also cause potential navigation issues for the user.
