I have written a form for users to fill in, and I use a script to guide them through it. For example, after a value is entered in cell A4 the cursor jumps to D4, after D4 it jumps to A5, and so on. Even though the execution time (as viewed in the execution transcript) is small (close to 0.1 seconds most of the time), the Google Sheets response time is generally about 1 second, which feels quite laggy. Is there a way to improve the responsiveness of Google Sheets for this action?
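For reference, this kind of guided entry is usually done with a simple onEdit trigger; a minimal sketch (the cell-to-cell mapping below only illustrates the pattern described, not the asker's actual code):

```javascript
// Minimal onEdit sketch: when a "form" cell is edited, move the cursor to the
// next field. The mapping below is illustrative only.
function onEdit(e) {
  var nextCell = {
    A4: 'D4',
    D4: 'A5'
    // ...one entry per field in the form
  };
  var edited = e.range.getA1Notation();
  if (nextCell[edited]) {
    e.range.getSheet().getRange(nextCell[edited]).activate();
  }
}
```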
Besides the time it takes for a method to execute, you should consider the "transport time" (the communication between Google's servers and the user's device), the spreadsheet recalculation time, and the UI refresh time.
To improve the form users' chances of having a better experience:
avoid or reduce the use of formulas
avoid or reduce the use of volatile functions like NOW()
avoid or reduce the use of open-ended references like A:A
reduce the length of calculation dependency chains
etc.
Also ask the form users to
remove all the web browser extensions
close all other web browser tabs
close all other local applications
use a very fast Internet connection
etc.
Further reading
Profiling the Performance of a Google App Script
Measuring round trip and execution times from add-ons
Using Apps Script to move the user around the spreadsheet is probably not something you will be able to make feel responsive.
Instead, see the guide on dialogs and sidebars, and consider whether building a form in HTML/JavaScript would be a more appropriate solution (assuming simply building a Google Form is not).
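For example, a minimal sketch of a sidebar-based entry form using HtmlService (the fields and the recordEntry handler are placeholders, not from the question):

```javascript
// Sketch of a sidebar-based entry form; values are written in one appendRow call
// instead of moving the cursor from cell to cell.
function showEntrySidebar() {
  var html = HtmlService.createHtmlOutput(
    '<input id="name"> <input id="amount"> <button id="save">Save</button>' +
    '<script>' +
    'document.getElementById("save").onclick = function() {' +
    '  google.script.run.recordEntry(' +
    '    document.getElementById("name").value,' +
    '    document.getElementById("amount").value);' +
    '};' +
    '</script>'
  ).setTitle('Data entry');
  SpreadsheetApp.getUi().showSidebar(html);
}

// Server-side handler called from the sidebar; appends one row per submission.
function recordEntry(name, amount) {
  SpreadsheetApp.getActiveSheet().appendRow([name, amount]);
}
```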
I submitted my app/game/PWA to PageSpeed Insights and it keeps giving me TTI values > 7000 ms and TBT values > 2000 ms, as can be seen in the screenshot below (the overall score for a mobile experience is around 63):
I read what those values mean over and over, but I just cannot make them lower!
What is most annoying is that, when accessing the page in a real-life browser, I don't need to wait 7 seconds for the page to become interactive, even with a cleared cache!
The game can be accessed here and its source is here.
What comforts me is that Google's own game, Doodle Cricket, also scores terribly. In fact, PageSpeed Insights gives it an overall score of "?".
Summing up: is there a way to tell PageSpeed Insights the page is actually a game with only a simple canvas in it and that it is indeed interactive as soon as the first frame is rendered on the canvas (not 7 seconds later)?
UPDATE: Partial Solution
Thanks to Graham Ritchie's answer, I was able to detect the two slowest points, simulating a mid-tier mobile phone:
Loading/Compiling WebAssembly files: there was not much I could do about this, and this alone consumes almost 1.5 seconds...
Loading the main script file, script.min.js: I split the file in two, since almost two thirds of it is just string constants; I now load the main script with async and load the split-out string constants lazily afterwards (see the sketch below), which has saved more than 1.2 seconds from the load time.
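As a rough illustration of that split (only script.min.js is from the original setup; the strings.min.js name is hypothetical):

```javascript
// In the page: <script async src="script.min.js"></script> keeps the main script
// from blocking the first paint of the canvas.

// Inside script.min.js: pull in the split-out string constants lazily, once the
// game has started rendering.
function loadStringConstants(onReady) {
  var s = document.createElement('script');
  s.src = 'strings.min.js'; // hypothetical name for the constants-only file
  s.async = true;
  s.onload = onReady;
  document.head.appendChild(s);
}

loadStringConstants(function () {
  // constants are available now; enable the features that need them
});
```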
The improvements have also saved some time on better mobile and desktop devices.
The commit diff is here.
UPDATE 2: Improving the Tooling
For anyone who gets here from Google, two extra tips that I forgot to mention before...
Use the Lighthouse CLI tool rather than the website (both for localhost and for internet websites): npm install -g lighthouse, then call lighthouse --view http.... (or use any other arguments as necessary).
If running on a notebook, make sure it is not running on the battery, but actually connected to a power source 😅
Summing up: is there a way to tell PageSpeed Insights the page is actually a game with only a simple canvas in it and that it is indeed interactive as soon as the first frame is rendered on the canvas (not 7 seconds later)?
No, and unfortunately I think you have missed one key piece of the puzzle as to why those numbers are so high.
Page Speed Insights uses throttling on the Network AND the CPU to simulate a mid-tier mobile phone on a 4G connection.
The CPU throttling is your issue.
If I run your game in the Performance tab of Chrome Developer Tools with "4x slowdown" applied to the CPU, I get a few long tasks, one of which takes 5.19s to run!
Your issue isn't page weight, as the site is lightweight; it is JavaScript execution time.
You would have to look through your code and see why you have a task that takes so long to run, look for nested loops as they are normally the issue!
There are several other tasks that take 1-2 seconds in total between them, but that 5-second task is the main culprit!
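One way to track those tasks down from within the page itself (rather than eyeballing the Performance tab) is the Long Tasks API; a minimal sketch, assuming a browser that supports the "longtask" entry type:

```javascript
// Logs every task over 50 ms so you can see where the long tasks come from.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('Long task:', Math.round(entry.duration), 'ms', entry.attribution);
  }
});
observer.observe({ type: 'longtask', buffered: true });
```

The attribution on each entry is often coarse, but it at least tells you which frame or script the work came from.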
Hopefully that clears things up a bit, any questions just ask.
I am working on a proof of concept and I need to measure the rendering time of a simple website (just an HTML document and one CSS file) 1000 times in a browser. Is there a simple and straightforward tool for this?
I know there are some highly complicated tools with an enormous learning curve, but I don't have a whole week to tinker with them. I don't need anything else, just the rendering time, exactly as Chrome's Performance tool displays it in milliseconds, and then to calculate an average.
If someone could tell me how to find the total rendering time of the page in the (quite enormous) JSON output of the Performance tool, I'd be happy with that. I can have a macro recorder clicking the Refresh button all night. Though I guess there's a way to get it done from the command prompt - any advice is appreciated on that too!
The 'Audits' tab in Chrome's DevTools allows you to run a Lighthouse performance audit, which will provide you with some key metrics as defined by Google (such as Time to Interactive): https://developers.google.com/web/tools/lighthouse/.
You can run it from the command line too, which should make it somewhat straightforward to repeat it as needed in your scenario and perhaps even integrate it as a test: https://developers.google.com/web/tools/lighthouse/#cli
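If you need to repeat the audit many times and average the numbers, Lighthouse can also be driven from Node; a rough sketch, assuming the lighthouse and chrome-launcher npm packages (CommonJS-style require shown, though recent Lighthouse versions are ESM-only; the URL and run count below are placeholders):

```javascript
// Runs the Lighthouse performance category N times and averages a few metrics.
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

async function run(url, runs) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const options = { port: chrome.port, onlyCategories: ['performance'] };
  const fcp = [];
  const tti = [];

  for (let i = 0; i < runs; i++) {
    const result = await lighthouse(url, options);
    fcp.push(result.lhr.audits['first-contentful-paint'].numericValue);
    tti.push(result.lhr.audits['interactive'].numericValue);
  }

  await chrome.kill();
  const avg = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
  console.log('Average FCP (ms):', avg(fcp).toFixed(0));
  console.log('Average TTI (ms):', avg(tti).toFixed(0));
}

run('http://localhost:8080/', 10); // placeholder URL and run count
```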
I'm writing a Chrome extension and I want to measure how it affects performance, specifically currently I'm interested in how it affects page load times.
I picked a certain page I want to test, recorded it with Fiddler and I use this recording as the AutoResponder in Fiddler. This allows me to measure load times without networking traffic delays.
Using this technique I found out that my extension adds ~1200ms to the load time. Now I'm trying to figure out what causes the delay and I'm having trouble understanding the DevTools Performance results.
First of all, it seems there's a discrepancy in the reported load time:
On one hand, the summary shows a range of ~13s, but on the other hand, the load event arrived after ~10s (which I also corroborated using performance.timing.loadEventEnd - performance.timing.navigationStart).
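For reference, the load-time number mentioned above can be read like this (the newer PerformanceNavigationTiming entry gives the same information without the deprecated performance.timing):

```javascript
// Legacy API, as used above: total time from navigation start to load event end.
const legacyLoadTime =
  performance.timing.loadEventEnd - performance.timing.navigationStart;

// Newer equivalent: PerformanceNavigationTiming, already relative to navigation start.
const [nav] = performance.getEntriesByType('navigation');
console.log('load event end (ms):', legacyLoadTime, nav && nav.loadEventEnd);
```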
The second thing I don't quite understand is how the numbers add up (or rather don't add up). For example, here's a grouping of different categories during load:
Neither of these columns sums to 10 s or to 13 s.
When I group by domain I can get different rows for the extension and for the rest of the stuff:
But it seems that the extension only adds ~250 ms, which is much lower than the observed difference in load times.
I assume that these numbers represent just CPU time, and do not include any wait time. Is this correct? If so, it's OK that the numbers don't add up and it's possible that the extension doesn't spend all its time doing CPU bound work.
Then there's also the mysterious [Chrome extensions overhead], which doesn't explain the difference in load times either. Judging by the fact that it's a separate line from my extension, I thought they were mutually exclusive, but when I dive deeper into the specifics, I find my extension's functions under the [Chrome extensions overhead] subdomain:
So to summarize, this is what I want to be able to do:
Calculate the total CPU time my extension uses - it seems it's not enough to look under the extension's name, and its functions might also appear in other groups.
Understand whether the delay in load time is caused by CPU processing or by synchronous waiting. If it's the latter, find where my extension is doing a synchronous wait, because I'm pretty sure that I didn't call any blocking APIs.
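For what it's worth, one way to attribute time to the extension's own code, whichever DevTools group it ends up in, is to bracket the content script's entry points with User Timing marks; a rough sketch (doExtensionWork is a hypothetical stand-in for the extension's real work). The measures record wall-clock time, so comparing them with the profiler's CPU totals also helps separate CPU work from waiting:

```javascript
// Hypothetical stand-in for the extension's actual content-script work.
function doExtensionWork() { /* ... */ }

// Bracket the extension's own code with User Timing marks; the resulting measure
// shows up in the Performance panel and can be summed independently of how
// DevTools groups the samples.
performance.mark('ext-start');
doExtensionWork();
performance.mark('ext-end');
performance.measure('extension-work', 'ext-start', 'ext-end');

// Total wall-clock time spent in the bracketed sections.
const total = performance
  .getEntriesByName('extension-work')
  .reduce((sum, entry) => sum + entry.duration, 0);
console.log('Extension time (ms):', total.toFixed(1));
```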
Update
Eventually I found out that the reason for the slowdown was that we also activated Chrome accessibility whenever our extension was running, and that's what caused the drastic slowdown. Without accessibility the extension had a very minor effect. I still wonder, though, how I could have seen in the profiler that my problem was accessibility. It could have saved me a ton of time... I will try to look at it again later.
I'm trying to find a benchmark for how long users are willing to wait for a response from a remote service. In my case the response is for very useful but not business critical validation of data entry. I guess that there must have been some work done in the HCI space on this.
If you know of a generally accepted definition for soft real-time responses, then great, but I'd also appreciate your well-reasoned thoughts.
Chris
US DOD MIL-STD 1472-F Human Engineering Standard has the most widely accepted requirements for maximum allowed response time (from Table XXII, page 196, times in seconds):
Key Response (Key depression until positive response, e.g., "click"): 0.1
Key Print (Key depression until appearance of character): 0.2
Page Turn (End of request until first few lines are visible): 1.0
Page Scan (End of request until text begins to scroll): 0.5
XY Entry (From selection of field until visual verification): 0.2
Function (From selection of command until response): 2.0
Pointing (From input of point to display point): 0.2
Sketching (From input of point to display of line): 0.2
Local Update (Change to image using local data base, e.g., new menu list): 0.5
Host Update (from display buffer): 2.0
File Update (Change where data is at host in readily accessible form): 10.0
Inquiry - Simple (e.g., a scale change of existing image): 2.0
Inquiry - Complex (Image update requires an access to a host file): 10.0
Error Feedback (From command until display of a commonly used message): 2.0
As you can see, acceptable response time depends on what response the user is waiting for. For something like a pulldown menu appearing, it's 0.5 seconds max. For a full page load in a browser, you want something to appear in 1.0 s to 2.0 s and the full page loaded in 10.0 s. In all the above, shorter response times are better. Only in bizarre circumstances will users object to a 0.001s response time.
In any case, if the response time will be greater than 0.5 s, then you need to provide feedback such as a throbber or hourglass sprite. If the response time will be at least 5-15 s (depending on which standard you use), provide a progress bar. With a progress bar, very long response times (on the order of minutes or even hours) may be acceptable as long as you set it up for the user as a “batch” process rather than an interactive program. It's much better for the user to make all their input and wait an hour than to make input on four occasions, waiting 15 minutes after each.
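As a crude sketch of that rule of thumb (the 0.5 s threshold is from above; 10 s is picked from the 5-15 s range mentioned):

```javascript
// Crude sketch of the feedback rule above; thresholds in seconds.
function feedbackFor(expectedSeconds) {
  if (expectedSeconds <= 0.5) return 'none';     // feels immediate enough
  if (expectedSeconds < 10) return 'throbber';   // spinner / hourglass
  return 'progress bar';                         // long waits need progress
}
```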
The above list has the accepted standards. How long your users are willing to wait (e.g., before giving up) essentially boils down to the user making a cost-benefit analysis: Is what I’m going to get worth the wait? What are my sunk costs? Is there an alternative (e.g., another web site) that can do it better? Can I do other things while I wait to make the most of my time? However, whatever users are willing to do, you can bet they’ll resent delays greater than the standards above.
Human reaction time seems to be around 200 ms - anything around there will be perceived as instantaneous. That sort of number is hard to achieve, especially in an application that gets information from remote services.
If you take a look at Google's search suggestion box, the lag there is minimal - less than a second. It's astoundingly fast, and really remarkable for a web application. This is really nice for Google's users, but it's bad news for you. These days, users expect most applications to react with the same sort of speed and efficiency; anything slower is considered rather laggy. However, it's worth noting that people's patience usually varies with the complexity of the task at hand. A simple form submit should never take much time, but something like uploading photos is expected to take a while.
My feeling is this: go with your gut. If your application is fairly simple then you should try to get the wait/load time down to less than a second. If you can't, then your best bet is to add an indicator so the user knows that some computations are being done in the background. This can be in the form of a small animation or a progress bar.
Unfortunately, the answer to this question is not typically a well-defined number. Users' expectations vary widely and can change depending on what it is you're talking about.
As computers continue to become more ubiquitous and we (the consumers) continue to have growing expectations of speed, remote services, websites, and even applications will need to continue to respond more quickly. Generally speaking, you want everything to be as fast as possible.
With this said, I would look at what your remote service is for. Since you said the response is "very useful", to me that means it will probably get used frequently. People tend to use what is useful. If that's the case, I would look for ways to make that remote service respond quickly.
Of course, there is also the caveat that you don't want to start optimizing before the service is written. What is the current response time? What is the context in which this will be used? Those factors will do a lot to determine the longest users are willing to wait for the service.
You might want to search for "SLA" or "Service Level Agreement". Those are the documents in a web business that make guarantees as to how long data will take to get back to the user, whether it's an HTML document or a web service call.
I've started to add the time taken to render a page to the footer of our internal web applications. Currently it appears like this
Rendered in 0.062 seconds
Occasionally I get rendered times like this
Rendered in 0.000 seconds
Currently it's only meant to be a guide for users to judge whether a page is quick to load or not, allowing them to quickly inform us if a page is taking 17 seconds rather than the usual 0.5. My question is what format should the time be in? At which point should I switch to a statement such as
Rendered in less than a second
I like seeing the tenths of a second but the second example above is of no use to anyone, in fact it just highlights the limits of the calculation I use to find the render time. I'd rather not let the users see that at all! Any answers welcome, including whether anything should be included on the page.
"Rendered instantly" sounds way better than "Rendered in less than a second".
I'm not sure there's any value in telling users how long it took for the server to render the page. It could well be worth you logging that sort of information, but they don't care.
If it takes the server 0.001 seconds to draw the page but it takes 17 seconds for them to load it (due to network, JavaScript, page size, their rubbish PC, etc.), their perception will be the latter.
Then again, adding the render time might help you fend off the enquiries about any perceived slowness with a "talk to your local network admin" response.
Given that you know the accuracy of your measurements, you could have the 0.000 text read "Rendered in less than a thousandth of a second".
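Combining that with the "Rendered instantly" suggestion above, the footer logic could be as simple as this (shown in JavaScript for illustration; the same logic applies server-side, and the threshold is only illustrative):

```javascript
// Footer formatting, combining the two suggestions above.
function formatRenderTime(seconds) {
  if (seconds < 0.001) return 'Rendered instantly';
  return 'Rendered in ' + seconds.toFixed(3) + ' seconds';
}

console.log(formatRenderTime(0.062));  // "Rendered in 0.062 seconds"
console.log(formatRenderTime(0.0004)); // "Rendered instantly"
```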
Rather than relying on your users to look at the page footer and to let you know if the value exceeds some patience threshold, it might be a better idea to log the page render times in a log file on the server. Once you have all that raw data, you can look for particular pages that tend to take longer than normal to render.
With more detailed logging, you could also measure the elapsed times in database queries or whatever if your web app relies on external systems.
I think I over-emphasized that it was for the users.
I know that by enabling trace in the web.config I can get accurate information on page render times along with times for accessing the database.
In the past we had problems with applications running too slowly over the network. Although that's now fixed, I'm adding the label to new applications so that users are aware it's something we're taking seriously, and it's a very simple indicator for the developers.
Taking all that into account, I like "Rendered instantly", and you write a lot of sense, so I'll accept both your answer and kokos'.
Thanks