I am doing some performance tuning on the item renderers in a list I have. So far I am listening to the updateComplete event to see when the list is rendered for the first time.
However, I would also like to measure the scrolling speed (when I change the viewport position, the list blinks for a few seconds). What event should I be listening to?
Thanks
I have set up a scroll depth tracker in GTM to track how much visitors scroll down a webpage.
The code seems to work fine, but I don't understand the results.
For example, it shows:
10% scroll depth = 949 (14,34%)
20% scroll depth = 901 (13,61%)
30% scroll depth = 861 (13,01%)
40% scroll depth = 813 (12,28%)
A screenshot is attached below.
How should these lines be interpreted? At first I imagined that only 14,34% of all visitors scroll down 10%, but that would be literally only the header (which consists of a picture), so it seems unlikely that people would leave that early.
Thank you!
Edit: Here is what the GA event looks like:
There are two things to consider. First, the behavior of the scroll tracking trigger in Google Tag Manager. Based on the support article:
The trigger will only fire once per threshold per page.
So the event will be generated for all thresholds reached, not just the lowest one. You should also be aware of this behavior:
If a specified scroll depth is visible in the viewport when the page loads, the trigger will fire even though the user has not physically scrolled the page.
Second, the behavior of the Google Analytics reports. The percentage you see is the share of the current row's count compared to the column total. So in your first screenshot, 949 (the number of events for the 10% threshold) is 14,34% of all tracked scroll events, which is 6620.
Therefore, the 100% threshold was reached 145 times, which is ~15,3% of the 10% events. (It's not clear whether these are global figures for your site or only for a specific page, but you can filter your data accordingly.)
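To make the arithmetic concrete, here is a minimal TypeScript sketch that reproduces those percentages from the raw counts. The per-threshold counts come from the question, the 6620 total and the 145 hits for the 100% threshold come from this answer; the variable names are just illustrative.

```typescript
// Counts taken from the question; 6620 and 145 come from the answer above.
const scrollEvents: Record<number, number> = {
  10: 949,
  20: 901,
  30: 861,
  40: 813,
  // ...remaining thresholds would go here; 100% was reached 145 times
  100: 145,
};

const totalEvents = 6620; // the total of the event-count column in GA

for (const [threshold, count] of Object.entries(scrollEvents)) {
  const shareOfAllEvents = (count / totalEvents) * 100;       // the % column GA shows
  const shareOfTenPercent = (count / scrollEvents[10]) * 100; // compared to the 10% baseline
  console.log(
    `${threshold}%: ${count} events, ` +
    `${shareOfAllEvents.toFixed(2)}% of all scroll events, ` +
    `~${shareOfTenPercent.toFixed(1)}% of pageviews that reached 10%`
  );
}
```

Running this prints 949 / 6620 ≈ 14.34% for the 10% row, which is exactly the percentage shown next to it in the report, and 145 / 949 ≈ 15.3% for the 100% threshold relative to the 10% baseline.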
I have a game that simulates the workings of a messenger, so messages get added to the window as the game goes on. But when I create the message prefab and add it to the window I can see a CPU spike, and the profiler shows me this
So why do LayoutRebuilder.Rebuild() and Graphic.Rebuild() eat so much CPU?
Depending on the size of the prefab you are adding, when you instantiate it, Unity has to go through and recalculate all of the sizes for the prefab and fill the meshes. Using fewer layout groups would reduce the layout time, but the graphic rebuild can only be reduced by having fewer items to display.
If you want to see what happens when those methods get called, you can view the source here, which may give you a better understanding of how to optimize your specific prefab.
When I disabled Pixel Perfect on the canvas the frame rate drop went away.
What does it mean when you have slow frames (the ones marked with red triangles) in Chrome DevTools' Timeline, but it doesn't show what causes them (script, render, compositing, etc.)? It's like nothing really happens, but you still get jank.
If you click on the time span above the main thread and then on the Bottom-Up tab at the bottom, you can see all the individual activities that add up to that total time. I suspect there must be some other activities you can't currently see; if you scroll vertically, you should find them, most likely Rasterize Paint on another worker thread.
Update
I realised that the times in the Bottom-Up view do not add up to the total time shown for the frame. After a little more playing around, it looks as though the remaining time is 'Idle' time.
If you look at the screenshot below, I have adjusted the Timeline view to contain approximately one frame with very minimal activity. It's a little more than one frame, because you can see the dotted lines on either side are slightly further in.
If you then look at the Summary view at the bottom, you can see the majority of the time is listed as 'Idle'. If you imagine the Timeline view filtered to exactly one frame (shaving a tiny bit off the summary values), you can fairly confidently conclude that the total frame time (shown just above the Main Thread bar) is the sum of the values shown in the Summary, including the 'Idle' time.
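As a sanity check outside DevTools, a small sketch like the one below (TypeScript) measures frame durations from the page itself with requestAnimationFrame; the 50 ms threshold for logging a "slow" frame is an arbitrary choice for illustration.

```typescript
// Page-side cross-check on what the Timeline reports: measure the gap between
// successive animation frames and log anything suspiciously long.
let lastFrameTime = performance.now();

function measureFrame(now: number): void {
  const frameDuration = now - lastFrameTime;
  lastFrameTime = now;

  if (frameDuration > 50) {
    // A gap this long means the browser could not keep up a 60 FPS cadence,
    // even if the main thread shows mostly 'Idle' time for that frame.
    console.warn(`Slow frame: ${frameDuration.toFixed(1)} ms`);
  }

  requestAnimationFrame(measureFrame);
}

requestAnimationFrame(measureFrame);
```

Frames logged here should roughly line up with the red-triangle frames in the Timeline, even when most of the time within them is idle from the main thread's point of view.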
I have started looking into the client side performance of my app using Chrome's Timeline tools. However, whilst I have found many articles on how to use them, information on how to interpret the results is more sparse and often vague.
Currently I am looking at scroll performance and attempting to hit 60FPS.
This screenshot shows the results of my most recent timeline recording.
As can be seen, most frames go over the 60 FPS line (longer than 16.7 ms) and several go over the 30 FPS line (longer than 33.3 ms).
If I zoom in on one particular frame - the one with duration 67.076 ms - I can see a few things:
the duration of the frame is 67 ms, but the aggregated time is 204 ms
201 ms of this time is spent painting, BUT the two paint events in this frame are of duration 1.327 ms and 0.106 ms
the total duration of the JS event, update layer tree and paint events is only 2.4 ms
there is a long green hollow bar (Rasterize Paint) which lasts the duration of the frame and in fact starts before it and continues after it
I have a few questions on this:
the aggregated time is far longer than the frame time - is it correct to assume that these are parallel processes?
the paint time for the frame (204 ms) far exceeds the time for the two paint events (1.433 ms) - is this because it includes the rasterize paint events?
why does the rasterize paint event span multiple frames?
where would one start optimizing this?
Finally, can someone point me to some good resources on understanding this?
This is a somewhat unfortunate result of the way the 'classic' waterfall Timeline view coalesces multiple events. If you expand that long "Rasterize Paint" event, you'll see a bunch of individual events, which are going to be somewhat shorter. You may really want to switch to Flame Chart mode for troubleshooting rendering performance, where the rasterize events are shown on the appropriate threads.
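Complementing the DevTools view, a sketch like this (TypeScript) can surface long main-thread tasks, a common cause of dropped frames, directly from the page via the Long Tasks API. Note it will not catch rasterization work on other threads, which is what the Flame Chart exposes.

```typescript
// Log main-thread tasks over 50 ms, which the browser reports as "longtask"
// entries. Rasterization on other threads will not show up here.
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(
      `Long task: ${entry.duration.toFixed(1)} ms starting at ${entry.startTime.toFixed(1)} ms`
    );
  }
});

longTaskObserver.observe({ entryTypes: ['longtask'] });
```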
I'm making a snake game, and on every tick the snake moves. Because the snake moves a whole unit on each tick, the animation is jumpy. The game is on an invisible grid, so the snake can only change direction at specific points.
Would it be considered better practice to have one timer that moves the snake a pixel at a time, counting ticks with a variable, and on every nth tick runs the code to change direction? Or is it better to have two separate timers: one to move the snake a pixel at a time and another to change the snake's direction?
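For illustration only, here is a minimal sketch of the first option (a single timer plus a tick counter), written in TypeScript rather than ActionScript; the constants and object names are made up for the example.

```typescript
// One interval moves the snake a few pixels per tick, and a tick counter only
// allows direction changes when the head is exactly on a grid point.
const CELL_SIZE = 16;        // one grid cell, in pixels
const PIXELS_PER_TICK = 2;   // movement per timer tick
const TICKS_PER_CELL = CELL_SIZE / PIXELS_PER_TICK;

let tickCount = 0;
let direction = { x: 1, y: 0 };
let queuedDirection = direction;   // set from keyboard input elsewhere
const snakeHead = { x: 0, y: 0 };

setInterval(() => {
  tickCount++;

  // Only apply a queued direction change on the tick where the head lands
  // exactly on a grid point, so the snake stays aligned with the grid.
  if (tickCount % TICKS_PER_CELL === 0) {
    direction = queuedDirection;
  }

  snakeHead.x += direction.x * PIXELS_PER_TICK;
  snakeHead.y += direction.y * PIXELS_PER_TICK;
  // ...redraw the snake here
}, 16); // roughly 60 ticks per second
```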
Essentially, you're creating a "game loop" to update the display list (in ActionScript) and redraw your view. When the game gets complex, Timers and ENTER_FRAME listeners are constrained by the Flash Player's screen-refresh settings (i.e. its FPS) and by how much rendering the CPU is being asked to process.
For frame-rate-independent animation, you should probably use a combination of ENTER_FRAME to drive updates and getTimer() calls to time animations accurately (to the millisecond) and normalize the experience across a variety of platforms.
Basically, listen for the ENTER_FRAME event, check the milliseconds elapsed since the last update, and if it exceeds your refresh interval (in milliseconds), fire off the animation/state update: check for snake collision with a "direction block" (handle it if true), then update the snake's movement / position / state.
Flash updates the display list at whatever rate its settings and the machine's CPU allow. The key, I've found, is to normalize the speed of updates so the experience stays consistent. Timers have their uses but can cause memory and performance issues; ENTER_FRAME is synced to the main timeline's frame-rate setting.
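A sketch of the same frame-rate-independent pattern, written in TypeScript for brevity: requestAnimationFrame stands in for ENTER_FRAME and performance.now() for getTimer(); UPDATE_INTERVAL_MS and updateGame() are illustrative names, not part of the original answer.

```typescript
// The game state advances in fixed steps, so the speed is the same at any frame rate.
const UPDATE_INTERVAL_MS = 100; // how often the snake logic should advance
let lastUpdate = performance.now();

function updateGame(): void {
  // advance the snake one cell, check collisions with a "direction block",
  // apply any queued direction change, update score, etc.
}

function onFrame(now: number): void {
  // Run as many fixed-size updates as the elapsed time calls for, so a slow
  // frame doesn't slow the game down and a fast machine doesn't speed it up.
  while (now - lastUpdate >= UPDATE_INTERVAL_MS) {
    updateGame();
    lastUpdate += UPDATE_INTERVAL_MS;
  }
  // ...redraw the view here, then schedule the next frame
  requestAnimationFrame(onFrame);
}

requestAnimationFrame(onFrame);
```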
For a dated, but interesting discussion 3 years ago, check out this post from actionscript.org.