Chrome DevTools: Slow frames, but no events causing them - performance

What does it mean when you have slow frames (the ones marked with red triangles) in Chrome DevTools' Timeline, but it doesn't show what is causing them (script, rendering, compositing, etc.)? It looks as though nothing is really happening, yet you still get jank.

If you click on the time span above the main thread and then click the Bottom-Up tab at the bottom, you can see all the individual activities that add up to that total time. I suspect there are other activities you can't currently see; if you scroll vertically you should find them - probably Rasterize Paint on another worker.
Update
I realised that the times in the Bottom-Up view do not add up to the total time shown for the frame. I did a little more playing and it looks as though the remaining time is 'Idle' time.
If you look at the screenshot below, I have adjusted the Timeline view to contain approximately one frame that has very minimal activity. It's a little bit more than one frame because you can see the dotted lines on either side are a little further in.
If you then look at the Summary view at the bottom, you can see that the majority of the time is listed as 'Idle'. If I had filtered the Timeline view to exactly one frame (shaving a tiny bit off the summary values), you could fairly confidently conclude that the total frame time (shown just above the Main Thread bar) is the sum of the values shown in the Summary, including the 'Idle' time.
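For illustration, a page whose per-frame work is tiny reproduces this nicely. A minimal sketch (the fps-label element id is just an assumed placeholder): recording this in the Performance panel should show each ~16.7 ms frame as a thin sliver of Scripting, with the remainder reported as 'Idle' in the Summary.

```typescript
// Minimal sketch: a requestAnimationFrame loop that does almost no work.
// Each ~16.7 ms frame in a recording of this page shows a tiny bit of
// activity, with the rest of the frame reported as 'Idle' in the Summary.
function tick(timestamp: number): void {
  const label = document.getElementById("fps-label"); // assumed placeholder element
  if (label) {
    label.textContent = timestamp.toFixed(0); // trivial DOM write so the callback registers at all
  }
  requestAnimationFrame(tick);
}

requestAnimationFrame(tick);
```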

Related

How to debug HTML5 canvas pixi.js performance?

I've made a game with pixi.js that is very simple and should run smoothly, but it doesn't. Since the game is big and has lots of elements, I don't know where the bottleneck is. I believe some parts of the code are buggy, resulting in a lot of resource usage.
I want to learn how to identify which of the functions I wrote are using up a lot of resources.
When I use the default Chrome tools, they never show the code I wrote, only the library's ticker: https://prnt.sc/qiplql.
That doesn't help me; I want to know which of my own functions are being called many times and using up a lot of resources.
What is a way I can do this?
Here is our debugging server, where the code isn't minified.
Zoom in
To get the most out of the performance monitor you must zoom into the timeline and investigate small sections of code.
The image below shows a section of the timeline and the Call Tree's representation of that small slice of time.
Find a single frame
However, this is still too much information. Zoom right in to find a single frame that is running long (over 16 ms).
Use the graphical call tree (open fold main)
The next image shows a long frame (25 ms). Opening the Main fold (on the left) shows a graphical representation of the call tree.
The frame starts with a long task triggered by a mouse-up event. At the bottom of that stack, the reason the event takes so long is the DOM recalculating style.
That style recalculation meant the next requestAnimationFrame callback had extra baggage before it (reflow) and after the script (DOM rendering and compositing), so the frame was late to present.
Zooming in on the requestAnimationFrame shows no problems; I zoomed in just as an example so you can see how the graphical representation relates to the Call Tree. I have drawn lines connecting function calls in the graph to the same calls in the Call Tree.
Many reasons
To find your bottlenecks you will have to zoom right in on the slow frames. There may be many different reasons why frames are slow.
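As a rough sketch (assuming a typical pixi v6-style setup; the function and measure names are only examples), you can also give your own update logic named functions and bracket suspect sections with the User Timing API, so they show up by name in the Call Tree / Bottom-Up views and in the Timings track instead of being buried under the library's ticker:

```typescript
import * as PIXI from "pixi.js";

// Assumed v6-style setup; adjust to however your app creates its Application.
const app = new PIXI.Application({ width: 800, height: 600 });
document.body.appendChild(app.view as HTMLCanvasElement);

// Named functions (not anonymous arrows) appear by name in the profiler's
// Call Tree and Bottom-Up views instead of as anonymous ticker frames.
function updateEnemies(delta: number): void {
  // ... your game logic ...
}

function updateParticles(delta: number): void {
  // ... your game logic ...
}

app.ticker.add((delta: number) => {
  // User Timing marks/measures show up in the Timings track of a recording.
  performance.mark("enemies-start");
  updateEnemies(delta);
  performance.measure("updateEnemies", "enemies-start");

  performance.mark("particles-start");
  updateParticles(delta);
  performance.measure("updateParticles", "particles-start");
});
```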

How are blinking carets often implemented?

I'm trying to implement a cross-platform UI library that uses as few system resources as possible. I'm considering either using my own software renderer or OpenGL.
For stationary controls everything's fine; I can repaint only when needed. However, when it comes to implementing animations, especially animated blinking carets like the 'phase' caret in Sublime Text, I don't see an easy way to balance resource usage and performance.
A blinking caret needs to be redrawn very frequently (15-20 times per second at least, I guess). On the one hand, the software renderer supports partial redraw but is far too slow to be practical (3-4 fps for large redraw regions, say 1000x800, which makes animations impossible). On the other hand, OpenGL doesn't support partial redraw very well as far as I know, which means the whole screen needs to be rendered at 15-20 fps constantly.
So my question is:
How are carets usually implemented in various UI systems?
Is there any way to have OpenGL render to only a portion of the screen?
I know that glViewport enables rendering to part of the screen, but due to double buffering (or other issues) the rest of the screen is not preserved, so I still need to render the whole screen again.
First you need to ask yourself:
Do I really need to partially redraw the screen?
OpenGL, or rather the GPU, can draw thousands of triangles with ease. So before you start fiddling with partial redrawing of the screen, benchmark and see whether it's worth looking into at all.
This doesn't however imply that you have to redraw the screen endlessly. You can still just redraw it when changes happen.
Thus if you have a cursor blinking every 500 ms, then you redraw once every 500 ms. If you have an animation running, then you continuously redraw while that animation is playing (or every time the animation does a change that requires redrawing).
This is what Chrome, Firefox, etc. do. You can see this if you open the Developer Tools (F12) and go to the Timeline tab.
Take a look at the following screenshot. The first row of the timeline shows how often Chrome redraws the windows.
The first section shows a lot of continuous redrawing, which was because I was scrolling around on the page.
The last section shows a single redraw roughly every 500 ms, which was the cursor blinking in a textbox.
Open the image in a new tab to see better what's going on.
Note that it doesn't tell you whether Chrome is fully redrawing the window or only parts of it; it just shows the frequency of the redraws. (If you want to see the redrawn regions, both Firefox and Chrome have "Show Paint Rectangles".)
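To make the redraw-only-on-change idea concrete, here is a minimal sketch of a caret that blinks on a 500 ms timer and only schedules a frame when its state actually flips; drawEditor is a placeholder for whatever your renderer exposes.

```typescript
// Sketch: timer-driven caret blink that only requests a redraw when the
// caret's visibility actually changes, instead of rendering continuously.
let caretVisible = true;
let redrawPending = false;

function requestRedraw(): void {
  if (redrawPending) return;        // coalesce multiple requests into one frame
  redrawPending = true;
  requestAnimationFrame(() => {
    redrawPending = false;
    drawEditor(caretVisible);       // placeholder for your actual render call
  });
}

function drawEditor(showCaret: boolean): void {
  // ... render text, selection and (optionally) the caret ...
}

// Blink every 500 ms; between blinks (and real edits/scrolls) nothing is drawn.
setInterval(() => {
  caretVisible = !caretVisible;
  requestRedraw();
}, 500);
```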
To circumvent the problem with double buffering and partial redrawing, you could instead draw to a framebuffer object; then you can utilize glScissor() as much as you want. If you have mostly static content and only a few dynamic things, you could have multiple framebuffer objects, draw the static content only once, and continuously update the framebuffer containing the dynamic content.
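As a rough illustration of that idea (shown here with the WebGL2 bindings, which map closely onto glScissor, framebuffer objects and glBlitFramebuffer - the question itself is about desktop OpenGL): the static content is rendered once into a framebuffer object, then each frame it is blitted to the screen and only the caret's rectangle is drawn on top. drawStaticContent and drawCaret are placeholders for your own drawing code.

```typescript
// Sketch: cache static content in a framebuffer object, then per frame blit
// it to the default framebuffer and scissor the small region that changes.
const canvas = document.querySelector("canvas") as HTMLCanvasElement;
// antialias: false so we can blit straight into the default framebuffer
const gl = canvas.getContext("webgl2", { antialias: false }) as WebGL2RenderingContext;

// Colour texture backing the "static content" framebuffer.
const staticTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, staticTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, canvas.width, canvas.height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

const staticFbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, staticFbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, staticTex, 0);
drawStaticContent();                  // placeholder: text, widgets, etc., drawn once
gl.bindFramebuffer(gl.FRAMEBUFFER, null);

function renderFrame(caret: { x: number; y: number; w: number; h: number },
                     caretVisible: boolean): void {
  // Copy the cached static content to the screen (cheap: one blit, no scene traversal)...
  gl.bindFramebuffer(gl.READ_FRAMEBUFFER, staticFbo);
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, null);
  gl.blitFramebuffer(0, 0, canvas.width, canvas.height,
                     0, 0, canvas.width, canvas.height,
                     gl.COLOR_BUFFER_BIT, gl.NEAREST);

  // ...then restrict any further drawing to the caret's rectangle.
  if (caretVisible) {
    gl.enable(gl.SCISSOR_TEST);
    gl.scissor(caret.x, caret.y, caret.w, caret.h);
    drawCaret(caret);                 // placeholder: draw the caret quad
    gl.disable(gl.SCISSOR_TEST);
  }
}

function drawStaticContent(): void { /* ... */ }
function drawCaret(rect: { x: number; y: number; w: number; h: number }): void { /* ... */ }
```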
However (and I can't emphasize this enough), benchmark and check whether this is even needed. Having two framebuffer objects could be more expensive than just always redrawing everything. The same goes for, say, having a buffer for each rectangle as opposed to packing all the rectangles into a single buffer.
Lastly, to give an example, take NanoGUI (a minimalistic GUI library for OpenGL): NanoGUI continuously redraws the screen.
The problem with not just continuously redrawing the screen is that you now need a system for issuing redraws. Calling setText() on a label has to call back and tell the window to redraw. And what if the parent panel the label is added to isn't visible? Then setText() just issued a redundant redraw of the screen.
The point I'm trying to make is that a system for issuing redraws is more prone to errors. So unless continuous redrawing is actually an issue, it is definitely the simpler starting point.
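For what that redraw-issuing system might look like, a minimal sketch (illustrative names only) is a dirty flag that widgets set and the window clears once per frame, skipping invalidations from widgets that aren't visible:

```typescript
// Sketch: minimal invalidation system. Widgets mark the window dirty when
// their state changes; the window repaints at most once per frame, and
// hidden widgets don't trigger repaints at all.
class UiWindow {
  private dirty = false;

  invalidate(): void {
    if (this.dirty) return;           // coalesce repeated invalidations
    this.dirty = true;
    requestAnimationFrame(() => {
      this.dirty = false;
      this.paint();
    });
  }

  paint(): void { /* ... redraw everything (or just the dirty regions) ... */ }
}

class Label {
  constructor(private owner: UiWindow, private visible: boolean, private text = "") {}

  setText(text: string): void {
    if (text === this.text) return;   // no change, no redraw
    this.text = text;
    if (this.visible) {               // the redundant-redraw case mentioned above
      this.owner.invalidate();
    }
  }
}
```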

What is the significance of rasterize paint extending over multiple frames

I have started looking into the client side performance of my app using Chrome's Timeline tools. However, whilst I have found many articles on how to use them, information on how to interpret the results is more sparse and often vague.
Currently I am looking at scroll performance and attempting to hit 60FPS.
This screenshot shows the results of my most recent Timeline recording.
As can be seen, most frames are over the 60 FPS budget and several are over the 30 FPS budget.
If I zoom in on one particular frame - the one with duration 67.076 ms - I can see a few things:
the duration of the frame is 67 ms, but the aggregated time is 204 ms
201 ms of this time is spent painting, BUT the two paint events in this frame have durations of 1.327 ms and 0.106 ms
the total duration of the JS event, update layer tree and paint events is only 2.4 ms
there is a long green hollow bar (Rasterize Paint) which lasts the duration of the frame and in fact starts before it and continues after it
I have a few questions about this:
the aggregated time is far longer than the frame time - is it correct to assume that these are parallel processes?
the paint time for the frame (204 ms) far exceeds the time for the two paint events (1.433 ms) - is this because it includes the Rasterize Paint events?
why does the Rasterize Paint event span multiple frames?
where would one start optimizing this?
Finally, can someone point me to some good resources for understanding this?
This is a somewhat unfortunate result of the way the 'classic' waterfall Timeline view coalesces multiple events. If you expand that long "Rasterize Paint" event, you'll see a bunch of individual events that are somewhat shorter. You may really want to switch to Flame Chart mode for troubleshooting rendering performance, where the rasterize events are shown on their appropriate threads.

Custom infinite scrolling with items being removed while being scrolled out of view

The problem
Today's modern websites may use the infinite scrolling technique to replace paged lists and create a more seamless experience for their users.
This is all nice and dandy as long as users don't scroll too far down, at which point the document becomes very complex, with huge numbers of DOM nodes. There are of course ways to mitigate this problem (e.g. replacing elements that have scrolled out past the top with a single DIV of the appropriate height), but they're complex to implement and still have their flaws.
The idea
I was wondering whether anyone has seen an implementation where items that get scrolled out (top or bottom) somehow become smaller and fainter until they disappear and are replaced by their adjacent items.
I'm thinking of an experience that mixes:
scrolling
fading
scaling
Fading and scaling can be seen on Medium.com when you get to the bottom of any article and click the next recommended one displayed below (click the title). If you pay attention when you click, you can see the original article disappearing while being replaced by the new article sliding up.
Content scrolling could be done this way, and infinite scrolling would be much smoother and less resource-consuming, as elements would be replaced on the fly and in place.
The number of simultaneously displayed items of course depends on item size. In the case of Medium-like articles it would likely be a single article, which would also scroll until you reached its very bottom (or top). In the case of posts like Facebook's, there would be many more items at once, as they don't take up as much vertical space.
Cover Flow works in a somewhat similar way: it displays the middle item completely, and the rest are either hidden or scaled/transformed.
The question
Has anybody seen such an implementation on the web? If done properly it would actually make a much nicer infinite scrolling experience without hogging our browsers.
But to make my question clearer and non-debatable: can you provide a working (albeit simplified) example of such an experience?
Requirements:
when an item gets scrolled out it disappears (using fade/scale/both)
when items appear at the bottom (or at the top, when scrolling up) they should appear in the opposite way to the scrolled-out items
pressing usual scrolling buttons Home, End, Page Up, Page Down and Space should work.
invisible items should be removed from DOM
scrolling should somehow be available using some sort of scrollbar as well
I think I might have found what you were looking for.
http://engineering.linkedin.com/linkedin-ipad-5-techniques-smooth-infinite-scrolling-html5
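For a simplified sketch of the DOM-recycling half of this (assuming fixed-height items and illustrative element ids; re-inserting items when scrolling back up, and the fade/scale effect, are left out for brevity), an IntersectionObserver can drop items that have scrolled well past the top and grow a spacer so the scrollbar and Home/End/Page keys keep working:

```typescript
// Sketch: remove list items from the DOM once they scroll well out of view,
// growing a spacer element by the same height so the scrollbar stays correct.
const ITEM_HEIGHT = 120;                                   // assumed fixed item height
const feed = document.getElementById("feed")!;             // container holding the items (assumed id)
const topSpacer = document.getElementById("top-spacer")!;  // empty div above the items (assumed id)
let removedAbove = 0;

const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      const el = entry.target as HTMLElement;
      // Item has scrolled off the top: drop it and grow the spacer instead.
      if (!entry.isIntersecting && entry.boundingClientRect.bottom < 0) {
        observer.unobserve(el);
        el.remove();
        removedAbove += 1;
        topSpacer.style.height = `${removedAbove * ITEM_HEIGHT}px`;
      }
    }
  },
  { rootMargin: "200px 0px" }   // only act once items are well outside the viewport
);

feed.querySelectorAll(".item").forEach((el) => observer.observe(el));
```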

Scrollbars for Infinite Document?

Is there a standard Aqua way to handle a practically infinite document?
For example, imagine a level editor for a tile-based game. The level has no preset size (though it's technically limited by NSInteger's size); tiles can be placed anywhere on the grid. Is there a standard interface for scrolling through such a document?
I can't simply limit the scrolling to areas that already have tiles, because the user needs to be able to add tiles outside that boundary. Arbitrarily creating a level size, even if it's easily changeable by the user, doesn't seem ideal either.
Has anyone seen an application that deals with this problem?
One option is to dynamically expand the area as the user scrolls through it - any time the user scrolls within X units of an edge, add another unit in that direction. Essentially, you'll never be able to scroll "all the way" to an edge, because the closer you get, the farther it expands.
If the user scrolls back away from the edge, contract it back to no more than X units beyond where there is actually content.
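The same idea is easy to sketch in a web scroll container (illustrative ids and constants; in Cocoa you would adjust the documentView's frame and scroll position instead):

```typescript
// Sketch: grow the scrollable area whenever the user gets within EDGE px of
// the bottom edge, so there is always room to keep scrolling. Expanding
// toward the top works the same way, except you also have to shift the
// content and scrollTop down by the amount you added.
const EDGE = 400;   // how close to the edge before we expand
const GROW = 800;   // how much scrollable space to add each time

const viewport = document.getElementById("level-viewport")!; // scroll container (assumed id)
const canvasArea = document.getElementById("level-canvas")!; // oversized inner element (assumed id)

viewport.addEventListener("scroll", () => {
  const distanceFromBottom =
    canvasArea.offsetHeight - (viewport.scrollTop + viewport.clientHeight);
  if (distanceFromBottom < EDGE) {
    canvasArea.style.height = `${canvasArea.offsetHeight + GROW}px`;
  }
  // Contraction (second paragraph above) would shrink the height again once
  // the trailing space no longer contains any content.
});
```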
Have you seen what Microsoft Excel does for this problem? It has to represent an unbounded space with scrollbars, as well.
One solution is to define a reasonable space for the original level size, and when the user scrolls to one tile away from its bounds, add another row or column of tiles, and adjust the scrollbar accordingly. This way, the user never reaches the actual bounds.
If the user decides to cut down on the level size, you could also add code that shrinks the "reasonable space" once an unused row consists only of empty tiles. This saves the user from being stuck with a huge level that they scrolled through, with no way to shrink it.
Edit: Same as Dav's answer. :)
