How to measure rendering time in Chrome 1000 times?

I am working on a proof of concept and I need to measure the rendering time of a simple website (just an HTML document and one CSS file) 1000 times in a browser. Is there a simple and straightforward tool for this?
I know there are some highly complicated tools with an enormous learning curve, but I don't have a whole week to tinker with them. I don't need anything else, just the rendering time, exactly as Chrome's Performance tool displays it in milliseconds, so that I can calculate an average.
If someone could tell me how to find the total rendering time of the page in the (quite enormous) JSON output of the Performance tool, I'd be happy with that. I can have a macro recorder clicking the Refresh button all night. Though I guess there's a way to get it done from the command prompt - any advice is appreciated on that too!

The 'Audits' tab in Chrome's dev tools allows you to run a Lighthouse performance audit, which will provide you with some key metrics as defined by Google (such as time to interactive): https://developers.google.com/web/tools/lighthouse/.
You can run it from the command line too, which should make it somewhat straightforward to repeat it as needed in your scenario and perhaps even integrate it as a test: https://developers.google.com/web/tools/lighthouse/#cli
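If you want to automate the 1000 runs, Lighthouse also ships a Node module, so you can drive it from a script and average the results yourself. Below is a minimal sketch, assuming Node with the lighthouse and chrome-launcher packages installed; the URL, run count, and choice of metric are placeholders, and the exact API surface can vary between Lighthouse versions:

    // Sketch: run Lighthouse repeatedly against one URL and average a metric.
    // Assumes `npm install lighthouse chrome-launcher`; API may vary by version.
    const lighthouse = require('lighthouse');
    const chromeLauncher = require('chrome-launcher');

    async function measure(url, runs) {
      const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
      const times = [];
      for (let i = 0; i < runs; i++) {
        const result = await lighthouse(url, {
          port: chrome.port,
          onlyCategories: ['performance'],
        });
        // Each audit in the report JSON exposes a numericValue in milliseconds;
        // 'first-contentful-paint' is used here, but you can pick whichever
        // metric best matches what the Performance tool shows you.
        times.push(result.lhr.audits['first-contentful-paint'].numericValue);
      }
      await chrome.kill();
      return times.reduce((a, b) => a + b, 0) / times.length;
    }

    measure('http://localhost:8080/', 1000) // hypothetical URL and run count
      .then((avg) => console.log(`average FCP: ${avg.toFixed(1)} ms`));

This also sidesteps the 'enormous JSON' problem: rather than digging through a raw Performance trace, you read the already-aggregated numericValue fields out of the Lighthouse report.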

Related

Why does PageSpeed Insights keep returning a high TTI (Time to Interactive) for a simple game?

I submitted my app/game/PWA to PageSpeed Insights and it keeps giving me TTI values > 7000 ms and TBT values > 2000 ms, as can be seen in the screenshot below (the overall score for a mobile experience is around 63):
I read what those values mean over and over, but I just cannot make them lower!
What is most annoying is that when accessing the page in a real-life browser, I don't need to wait 7 seconds for the page to become interactive, even with a clear cache!
The game can be accessed here and its source is here.
What comforts me is that Google's own game, Doodle Cricket, also scores terribly. In fact, PageSpeed Insights gives it an overall score of "?".
Summing up: is there a way to tell PageSpeed Insights the page is actually a game with only a simple canvas in it and that it is indeed interactive as soon as the first frame is rendered on the canvas (not 7 seconds later)?
UPDATE: Partial Solution
Thanks to @Graham Ritchie's answer, I was able to detect the two slowest points, simulating a mid-tier mobile phone:
Loading/Compiling WebAssembly files: there was not much I could do about this, and this alone consumes almost 1.5 seconds...
Loading the main script file, script.min.js: I split the file in two, since almost two thirds of it are just string constants. I now load the main script with async and lazily load the string constants afterwards (see the sketch below), which has saved more than 1.2 seconds of load time.
The improvements have also saved some time on better mobile devices/desktop devices.
The commit diff is here.
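For reference, here is a minimal sketch of the split-and-async approach described above. The file and function names (constants.min.js, startGame) are hypothetical, not the actual ones from the commit:

    // The main script is loaded with async from the HTML:
    //   <script async src="script.min.js"></script>
    // Inside it, the string constants (split out into their own file,
    // hypothetically named constants.min.js) are loaded lazily so they
    // stay off the critical path:
    const tag = document.createElement('script');
    tag.src = 'constants.min.js'; // hypothetical file name
    tag.async = true;
    tag.onload = () => {
      startGame(); // hypothetical entry point; runs once the constants exist
    };
    document.head.appendChild(tag);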
UPDATE 2: Improving the Tooling
For anyone who gets here from Google, two extra tips that I forgot to mention before...
Use the CLI Lighthouse tool rather than the website (both for localhost and for internet websites): npm install -g lighthouse, then call lighthouse --view http.... (or use any other arguments as necessary).
If running on a notebook, make sure it is not running on the battery, but actually connected to a power source 😅
"Summing up: is there a way to tell PageSpeed Insights the page is actually a game with only a simple canvas in it and that it is indeed interactive as soon as the first frame is rendered on the canvas (not 7 seconds later)?"
No, and unfortunately I think you have missed one key piece of the puzzle as to why those numbers are so high.
Page Speed Insights uses throttling on the Network AND the CPU to simulate a mid-tier mobile phone on a 4G connection.
The CPU throttling is your issue.
If I run your game within the "Performance" tab of Google Chrome Developer Tools with "4x slowdown" on the CPU, I get a few long tasks, one of which takes 5.19 s to run!
Your issue isn't page weight, as the site is lightweight; it is JavaScript execution time.
You would have to look through your code and see why you have a task that takes so long to run; look for nested loops, as they are normally the issue!
There are several other tasks that take 1-2 seconds total between them, but that 5-second task is the main culprit!
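If you'd rather not eyeball the Performance tab for each change, you can also log long tasks programmatically with a PerformanceObserver. A minimal sketch, to paste into the page or the DevTools console (the Long Tasks API is supported in Chromium-based browsers; attribution detail varies):

    // Log every task over 50 ms, the threshold the browser uses for 'longtask'.
    const observer = new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        console.log(
          `long task: ${entry.duration.toFixed(0)} ms`,
          entry.attribution?.[0]?.name ?? '(no attribution)'
        );
      }
    });
    observer.observe({ entryTypes: ['longtask'] });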
Hopefully that clears things up a bit; any questions, just ask.

Improve response speed Google sheet script

I have written a form for users to fill in. I used code to guide them through the form. For example, after entering cell A4, the cursor jumps to D4; after D4 it jumps to A5, etc. Even though the execution time (as viewed in the execution transcript) is not large (close to 0.1 seconds most of the time), the Google Sheets response time is generally about 1 second. It feels quite laggy. Is there a way to improve the responsiveness of Google Sheets for this action?
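The question doesn't show the actual code, but the kind of jumping it describes is typically done with an onEdit trigger, roughly like this hypothetical reconstruction (cell addresses are illustrative):

    // Hypothetical sketch of the cursor-jumping onEdit trigger described above.
    function onEdit(e) {
      const edited = e.range.getA1Notation();
      const sheet = e.range.getSheet();
      // Illustrative mapping: A4 -> D4, D4 -> A5, and so on.
      if (edited === 'A4') {
        sheet.getRange('D4').activate();
      } else if (edited === 'D4') {
        sheet.getRange('A5').activate();
      }
    }

Note that every such jump is a full round trip from the browser to Google's servers and back, which is why it feels laggy even when the script itself executes in 0.1 seconds.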
Besides the time it takes for a method to be executed, you should consider the "transport time" (the communication between Google's servers and the user's device), the spreadsheet recalculation time, and the UI refresh time.
To improve the form users' chances of having a better experience,
avoid or reduce the use of formulas
avoid or reduce the use of volatile functions like NOW()
avoid or reduce the use of open references like A:A
reduce the length of calculation dependency chains
etc.
Also ask the form users to
remove all the web browser extensions
close all other web browser tabs
close all other local applications
use a very fast Internet connection
etc.
Further reading
Profiling the Performance of a Google App Script
Measuring round trip and execution times from add-ons
Using Apps Script to move the user around the spreadsheet is probably not something you will be able to make feel smooth.
Instead, see the guide on dialogs and sidebars, and consider whether building a form in HTML/JavaScript would be a more appropriate solution (assuming simply building a Google Form is not).
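For instance, here is a minimal sketch of the sidebar approach, with illustrative names (FormSidebar, saveRow): the HTML form navigates between fields entirely in the browser and only calls the server once per record, instead of once per cell:

    // Show an HTML form in a sidebar so field-to-field navigation is client-side.
    // Assumes an HTML file named 'FormSidebar' exists in the Apps Script project.
    function showFormSidebar() {
      const html = HtmlService.createHtmlOutputFromFile('FormSidebar')
          .setTitle('Data entry');
      SpreadsheetApp.getUi().showSidebar(html);
    }

    // Called from the sidebar via google.script.run.saveRow(values);
    // writes the whole record in one call instead of one call per cell.
    function saveRow(values) {
      SpreadsheetApp.getActiveSheet().appendRow(values);
    }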

Are Sitecore's sublayout rendering stats incorrect?

The built-in Sitecore rendering stats page at http://<sitename>/sitecore/admin/stats.aspx is really helpful for identifying inefficient and slow-loading XSLT renderings. Recently I've started switching to .ascx sublayouts to take advantage of the Sitecore C# API, which can help improve performance when used correctly.
However, I've noticed that sublayouts (as opposed to XSLT renderings) are not reported correctly on the stats page. See the screenshot below...
I know for a fact that this sublayout takes about 1.8 seconds to generate (I measured this in the code-behind). Caching is turned off. I've refreshed the page 20 times to ensure I get an average. You will see that the "Avg. items" is always 0 - I can live with this - but the "Avg. time (ms)" is less than 1 ms, which is just clearly wrong.
Does anyone have any insights into this? Has anyone found a way to get it to work correctly?
Judging whether a statistic is right/wrong is going to rely on understanding exactly what it is measuring.
Digging around in Sitecore.Diagnostics.Statistics using Reflector I note the following:
Sitecore.Web.UI.WebControl contains a field m_timer
This is 'started' in the BeforeRender() method and 'stopped' in the AfterRender() method
Data from that timer is sent to Statistics.AddRenderingData() and is logged against the control
This means it is measuring the time taken to render the control. For an XSLT rendering, that includes the processing time for preparing all the data in it, but since much of the work of a normal ASCX is done prior to the Render stage, the statistic is much less useful there. Incorporating the Load stage in the timing would inadvertently include the processing time for all child components, since the Load sequence is chained and called recursively, so that probably doesn't help much either.
I suspect there is no good way of measuring the processing time for a specific ASCX control (excluding children) without first acquiring cumulative data and then post-processing the call chain to split the time apart. This is the sort of thing RedGate ANTS does really well, but it might not be such a good idea on a live production system, given the overheads.

Measuring HTML parsing time in web browsers

In my project, I intend to measure the HTML parsing time, that is, what percentage of processing time goes to HTML parsing when a web browser handles a particular webpage.
It seems instrumenting Firefox would be a good place to start, but this may take some time (I have no idea of the complexities involved in instrumenting Firefox to fetch this info).
So my question is: any ideas on measuring this ratio in a relatively lightweight way? Or have you by any chance seen this information already published in any papers or on any websites?
I guess you need to benchmark your HTML. Ideally, benchmark tools will give you varied output, because the result depends on a lot of parameters. To get started, you can use some online tools to benchmark your HTML page; there are many if you search for "HTML benchmark".
Then you can use the following to benchmark your browser:
http://browsermark.rightware.com/
I am sure you will get a relative answer and some numbers to crunch...
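If full browser instrumentation turns out to be too heavy, one lightweight (and admittedly rough) proxy is the Navigation Timing API, which brackets the window in which the main document was parsed. A sketch, to run in the page being measured; since parsing is interleaved with script execution, treat the result as an upper bound rather than an exact parse time:

    window.addEventListener('load', () => {
      const [nav] = performance.getEntriesByType('navigation');
      // domInteractive marks the end of main-document parsing.
      const parseWindow = nav.domInteractive - nav.responseStart;
      const total = nav.loadEventStart - nav.startTime;
      console.log(`parse window: ${parseWindow.toFixed(1)} ms ` +
                  `(${((100 * parseWindow) / total).toFixed(1)}% of load)`);
    });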

Heroku taking 2 seconds to load every page--including pages that simply render a single text string

No, this is NOT the "my page doesn't have any traffic and it has to be reloaded" issue.
We have 4 dynos for an alpha application. The reason we do is that each page takes over 2 seconds to load, even little things like rendering a text string (no layouts, ERB, or anything).
If I watch our logs, for our longer queries, they report response times in the 300-700ms range--which is far shorter than 2 seconds.
The DNS is cached, so the slow load time isn't a DNS issue. And that shouldn't affect subsequent page loads anyway, right?
Any thoughts on how to get to the bottom of this would be appreciated.
Here are two screenshots to show what I mean.
http://dl.dropbox.com/u/7175041/Screenshots/qo.png
http://dl.dropbox.com/u/7175041/Screenshots/qq.png
Thanks!
First thing I'd do is to switch on NewRelic Basic - it's a free performance monitor integrated with Heroku. That'll help you get a bearing on the basics of where the trouble is coming from.
I take it that you don't see similar results locally? If you don't, then skip this step, but if you do, you can also run NewRelic locally and interrogate all of your queries for response times.
I'd stay away from using things like the Benchmark library - that was my first thought in troubleshooting a speed issue, but Benchmark is necessarily going to ignore elements of your app that are outside the pure Ruby layer, and if that's where you're slow then NewRelic catches that anyway.
Finally, if all else fails, a support ticket with Heroku's team has always been extremely helpful to me. Just make sure you check the box that lets them clone your app, it makes things a lot easier for them.
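In the meantime, a quick browser-side breakdown of those 2 seconds can tell you whether the time goes to the network, the server, or the client. A sketch using the Navigation Timing API, to paste into the console on the slow page (assumes a reasonably modern browser):

    const [nav] = performance.getEntriesByType('navigation');
    console.table({
      dns: nav.domainLookupEnd - nav.domainLookupStart,
      connect: nav.connectEnd - nav.connectStart,
      ttfb: nav.responseStart - nav.requestStart, // queueing + server time
      download: nav.responseEnd - nav.responseStart,
      domProcessing: nav.domComplete - nav.responseEnd,
    });

If ttfb is much larger than the 300-700 ms your logs report, the gap is in front of your dynos (routing/queueing); if it matches, the time is being lost client-side.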
Let us know what you find out - I'm curious to see what the particular gremlin is!
