Failing Core Web Vitals despite 90%+ PSI results

Site: www.purelocal.com.au
We've tested thousands of URLs in Google PSI and all of them are green (90%+).
However, in Google Search Console (Webmaster Tools) we have 0 GOOD URLs.
Can someone please explain what Google requires, and what we can do to pass Core Web Vitals before June?
We've spent months optimising everything and can't optimise any further, but Google says that none of our URLs pass Core Web Vitals... it's just ridiculous.

Looking at your website's report in the CrUX Dashboard, there are a couple of things you could optimize more:
First, your site's LCP is right on the edge of having 75% good desktop experiences, and phone experiences are below that at 66% good. https://web.dev/optimize-lcp/ has some great tips for addressing LCP issues.
Second, while your site's desktop FID experiences are overwhelmingly good (98%), you do seem to have a significant issue for phone users (only 44% good). There are similarly great tips in the https://web.dev/optimize-fid/ article.
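If it helps to see where that delay is coming from on real phones, here's a rough in-page sketch using the Event Timing API; the /analytics endpoint is just a placeholder for wherever you collect metrics:
// Sketch: report the first-input delay (the basis of FID) from real visitors.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const fid = entry.processingStart - entry.startTime; // how long the input waited for the main thread
    navigator.sendBeacon('/analytics', JSON.stringify({ metric: 'FID', value: fid })); // placeholder endpoint
  }
}).observe({ type: 'first-input', buffered: true });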
While the big green "98" score on PSI makes it look like the page is nearly perfect, what matters most in terms of the user experience is real field data. That information can be found in the "Field Data" and "Origin Summary" sections of the report.
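If you'd rather pull that field data programmatically than read it off the PSI page, the same CrUX data is exposed via an API. A minimal sketch (you need your own API key, and the response shape shown is what the API returns at the time of writing):
// Sketch: query the CrUX API for your origin's phone field data.
fetch('https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=YOUR_API_KEY', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ origin: 'https://www.purelocal.com.au', formFactor: 'PHONE' }),
})
  .then((resp) => resp.json())
  .then((data) => {
    // 75th-percentile LCP across real phone visitors over the last 28 days
    console.log(data.record.metrics.largest_contentful_paint.percentiles.p75);
  });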
You mentioned in the comments that your server response time is an issue. I can confirm this with lab testing:
https://webpagetest.org/graph_page_data.php?tests=210506_AiDcJ0_0283e8c51814788904bdf19cebe7a5c8&medianMetric=TTFB&fv=1&median_run=1&zero_start=true&control=NOSTAT#TTFB
https://webpagetest.org/result/210506_AiDcJ0_0283e8c51814788904bdf19cebe7a5c8/8/details/#waterfall_view_step1
The long light blue bar on line 1 of the chart above shows how long it takes your server to respond to the request. In this case the time to first byte (TTFB) is 1.132 seconds. That makes a fast LCP very hard for most users to reach, because in these tests it takes 1.9 seconds just to get the HTML to the client. No amount of frontend optimization can make the HTML arrive sooner than that, so you need to focus on backend optimizations to get the TTFB down.
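As a sanity check, you can also read the TTFB your real visitors get straight from the browser; a minimal sketch using the Navigation Timing API:
// Sketch: milliseconds from the start of the navigation to the first byte of the response.
const nav = performance.getEntriesByType('navigation')[0];
if (nav) {
  console.log('TTFB (ms):', Math.round(nav.responseStart));
}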
I can't give you any specific hosting recommendations but it does seem like the shared hosting is adversely affecting your users' LCP performance.

Related

Why does PageSpeed Insights keep returning a high TTI (Time to Interactive) for a simple game?

I submitted my app/game/PWA to PageSpeed Insights and it keeps giving me TTI values above 7000 ms and TBT values above 2000 ms (the overall score for the mobile experience is around 63).
I read what those values mean over and over, but I just cannot make them lower!
What is most annoying is that when I access the page in a real browser, I don't need to wait 7 seconds for it to become interactive, even with a cleared cache!
The game can be accessed here and its source is here.
What comforts me is that Google's own game, Doodle Cricket also scores terribly. In fact, PageSpeed Insights gives it an overall score of "?".
Summing up: is there a way to tell PageSpeed Insights the page is actually a game with only a simple canvas in it and that it is indeed interactive as soon as the first frame is rendered on the canvas (not 7 seconds later)?
UPDATE: Partial Solution
Thanks to Graham Ritchie's answer, I was able to detect the two slowest points, simulating a mid-tier mobile phone:
Loading/Compiling WebAssembly files: there was not much I could do about this, and this alone consumes almost 1.5 seconds...
Loading the main script file, script.min.js: I split the file in two, since almost two thirds of it is just string constants. I now load the main script with async and delay loading the string constants, which has saved more than 1.2 seconds from the load time (see the sketch after this update).
The improvements have also saved some time on better mobile devices/desktop devices.
The commit diff is here.
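Roughly, the async half of that change looks like the sketch below; the constants file name is just a placeholder, not the actual one from the commit:
// Sketch: the main script is now included with the async attribute, and the big
// string-constants chunk is pulled in separately so it no longer blocks the
// main script's parse/compile time.
const constants = document.createElement('script');
constants.src = 'string-constants.min.js'; // placeholder file name
constants.async = true;
document.head.appendChild(constants);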
UPDATE 2: Improving the Tooling
For anyone who gets here from Google, two extra tips that I forgot to mention before...
Use the CLI Lighthouse tool rather than the website (both for localhost and for internet websites): npm install -g lighthouse, then call lighthouse --view http.... (or use any other arguments as necessary).
If running on a notebook, make sure it is not running on the battery, but actually connected to a power source 😅
Summing up: is there a way to tell PageSpeed Insights the page is actually a game with only a simple canvas in it and that it is indeed interactive as soon as the first frame is rendered on the canvas (not 7 seconds later)?
No and unfortunately I think you have missed one key piece of the puzzle as to why those numbers are so high.
Page Speed Insights uses throttling on the Network AND the CPU to simulate a mid-tier mobile phone on a 4G connection.
The CPU throttling is your issue.
If I run your game within the "performance" tab on Google Chrome Developer Tools with "4x slowdown" on the CPU I get a few long tasks, one of which takes 5.19s to run!
Your issue isn't page weight (the site is lightweight); it is JavaScript execution time.
You will have to look through your code and see why you have a task that takes so long to run; look for nested loops, as they are normally the issue!
There are several other tasks that take 1-2 seconds in total between them, but that 5-second task is the main culprit!
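If you want to catch those long tasks in code rather than eyeballing the Performance tab, here is a rough sketch using the Long Tasks API (Chromium-only as far as I know):
// Sketch: log every main-thread task longer than 50 ms, with rough attribution.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('Long task:', Math.round(entry.duration), 'ms', entry.attribution);
  }
}).observe({ type: 'longtask', buffered: true });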
Hopefully that clears things up a bit, any questions just ask.

"Time to Interact" metric in web performance measurements

Apparently "Time to Interact" is the new metric to use when measuring the perceived speed of a webpage. I'm interested in understanding a bit more about what this actually is.
The term was apparently coined by Radware, and is being pushed as the most meaningful performance measurement (compared to things such as Time to First/Last Byte, Time to Render etc.).
It is described as:
the point at which a page displays its primary interactive (think clickable) content, rather than full page load.
This seems pretty subjective to me; what is the "primary interactive content" of a webpage for example?
There have been reports citing results for the measurement, so somehow this is being measured; further, it must be automated, as the result sets are pretty big (~500 sites were tested).
Other than the above quote, I cannot find any more information on how to measure this.
As Google are placing more emphasis on above the fold content (or visible content), I am wondering whether this metric is actually more like "Time to First Meaningful Render", i.e. it is contextual to the current page goal. So for example, on an eCommerce site's product page, this could be the main image, and an add to basket link.
I am keen to understand this metric, as to me it does seem like the most useful one. My question is therefore whether anyone is measuring this, and if so how are they doing so?
You kind of answered your own question: it is subjective, and contextual to your current project.
What if I'm testing a site with only HTML without any complex resources? There is no point measuring TTI there. On the other hand, let's see this demo site.
The blue line marks the DOMContentLoaded event (the main document is loaded and the markup parsed); the red line indicates the load event, where all page resources are loaded. The TTI line would go somewhere in between, and it is defined differently for each project, based on when the resources essential for interaction have loaded.
For example, let's say that the pictures on the demo site are not essential to the core features of the site. While the main site loaded in 0.8 seconds, the 3 big pictures took 36 extra seconds to load, so using the overall response time as a KPI would yield a ~36-second response time, while if you define TTI to exclude those big, non-essential resources, you end up with a response time under 1 second.
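For reference, in today's browsers you can read both of those lines straight off the navigation timing entry; a small sketch:
// Sketch: the two lines from the chart, in milliseconds from the start of the navigation.
const nav = performance.getEntriesByType('navigation')[0];
console.log('DOMContentLoaded:', Math.round(nav.domContentLoadedEventStart));
console.log('load:', Math.round(nav.loadEventStart));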
I am keen to understand this metric, as to me it does seem like the most useful one.
Definitely useful, but as you said in your question, it's specific to the project. You wouldn't measure TTI on a simple, relatively static web app; you would probably measure overall response time. I always define KPIs "tailored" to the current project, instead of trying to use common metrics and "force them" onto a project.
My question is therefore whether anyone is measuring this, and if so how are they doing so?
I have definitely used it before: identify the essential resources for your site, and when the last of those resources has loaded, that is your TTI. This could be a JavaScript file, a CSS file, etc.
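Mechanically, the simplest way I know to record that moment is a user timing mark placed wherever your app decides it is ready; a sketch (the function name is just an example):
// Sketch: call this once the last essential resource/handler is in place,
// e.g. at the end of your main init routine.
function markTimeToInteract() {
  performance.mark('custom-tti');
  const [mark] = performance.getEntriesByName('custom-tti');
  console.log('Custom TTI (ms from navigation start):', Math.round(mark.startTime));
  // You could also send this value to your analytics instead of logging it.
}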
Websites are getting more complex. While they might not always contain more content, they still have more resources to load, as the user interaction/user experience is more complex from a technical point of view. Ajax helps us load different parts separately, so rather than one page load we have the loading of several small things, and for each of these parts we can measure the loading performance. But some parts of the site might be more important than others. The "primary interactive content" is the part of your view that enables the user to do what they intend to do, for example buy a train ticket. If an advertisement or a special animation on the left side of the screen hasn't loaded, this does not prevent the user from starting to buy a ticket. But of course "primary interactive content" as a term is quite vague, and you have to define it for your specific application. It is the point at which an average user can and will start to interact with the website while some parts are still loading.
This is how I understand the concept and I see the difference to "Time to First Meaningful Render" here: you might have a basket rendered on your eCommerce page but the GUI is not yet responsive. So you see something meaningful but the interactivity is not yet there. Therefore TTI >= TtFMR.
Measuring TTI requires you to define what elements are required for interactivity which not only depends on what the site does but also HOW it does it. So it highly depends on your implementation/technology.

Fast delivery webpages on shared hosting

I have a website (.org) for a project of mine on LAMP hosted on a shared plan.
It started very small, but I have now extended the community to other US states and it's growing fast.
About 4 months ago I had 30,000 or so visits per day and the site was doing fine; today I reached 100,000 visits.
I want to make sure my site will load fast for everyone and since it's not making any money I can't really move it to a private server. (It's volunteer work).
Here's my setup:
- Apache 2
- PHP 5.1.6
- MySQL 5.5
I have 10 pages PER state, and on each page people can contribute, write articles, like, share, etc. On a few pages I can hit 10,000 hits per hour during lunchtime, and the rest of the day it's quiet.
All databases are set up properly (I personally paid a DBA expert to build the code), and I am pretty sure the code is also good. I could make pages faster with memcached, but I can't use it since I am on shared hosting.
Will MySQL be able to support that many people, with lots of requests per minute, or should I raise funds to move to a private server and install all the tools I need to make it fast?
Thanks
To be honest, there's not much you can do on shared hosting. There's a reason it's cheap: it limits the kind of thing you want to do.
Either you move to a VPS that allows memcached (they can be cheap) and put up some Google ads, OR you keep going on your shared hosting using a pre-generated content system.
A VPS can be very cheap (look for coupons) and you can install whatever you want, since you are root.
For example, hostmysite.com with the coupon 50OffForLife costs $20 per month for life... vs a $5 shared hosting plan.
If you want to keep the current hosting, then what you can do is this:
Pages are generated by a process (a cron job or on the fly) every time someone writes a comment or makes an update. The process fetches all the data for the page and saves it as a static web page.
So let's say you have a page with comments: grab the content (meta, h1, p, etc.) and the comments, and save both into the file.
Example (using .htaccess to route /topic/1/ to a PHP script - based on your answer you are familiar with this):
$cacheFile = '/my/public_html/topic/1/index.html';
if (file_exists($cacheFile)) {
    // Cache hit: serve the pre-generated page, no DB calls at all
    echo file_get_contents($cacheFile);
} else {
    // Cache miss: build the page once, save it, then serve it
    $page     = $db->query('SELECT * FROM pages WHERE page_id = 1')->fetch();       // $db = your PDO connection
    $comments = $db->query('SELECT * FROM comments WHERE page_id = 1')->fetchAll();
    $content  = renderTopic($page, $comments);                                      // your existing templating
    file_put_contents($cacheFile, $content);
    echo $content;
}
Or something along these lines.
Therefore, serving the static HTML will be very fast since you don't have to call the DB at all; the file is just read once it has been generated.
I know I'm on shaky ground providing an answer to this question, but I think it is instructive.
Pat R Ellery didn't provide enough detail to do any kind of assessment, but the good news is that there can never be enough detail. The explanation is quite simple: anybody can build as many mental models as they want, but the real system will always behave a bit differently.
So Pat, do test your system all the time, as much as you can. What you are trying to do is capacity planning for your solution.
You need the following:
Capacity test - To determine how many users and/or transactions a given system will support and still meet performance goals.
Stress test - To determine or validate an application’s behavior when it is pushed beyond normal or peak load conditions.
Load test - To verify application behavior under normal and peak load conditions.
Performance test - To determine or validate speed, scalability, and/or stability.
See details here:
Software performance testing
Types of Performance Testing
In other words (and a bit simplistically): if you want to know whether your system can handle N requests per time period, simulate N requests per time period and see the result.
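Any of the tools below will do this properly, but even a throwaway script gives you a first impression. A minimal sketch in Node (18+, for the built-in fetch); the URL and the numbers are placeholders:
// Sketch: fire N concurrent requests and report rough latency figures.
const target = 'https://example.org/'; // placeholder URL
const N = 100;                         // placeholder request count

async function hit() {
  const t0 = Date.now();
  await fetch(target);
  return Date.now() - t0;
}

(async () => {
  const times = await Promise.all(Array.from({ length: N }, hit));
  times.sort((a, b) => a - b);
  console.log('median ms:', times[Math.floor(N / 2)], 'worst ms:', times[N - 1]);
})();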
There are a lot of tools available:
Load Tester LITE
Apache JMeter
Apache HTTP server benchmarking tool
See list here

Web site loading speed is slow

My website http://theminimall.com is taking more time to load than before.
Initially my server was in the US, and at that time the site loaded in around 5 seconds.
I have now moved the server to Singapore, and the loading time has increased to about 10 seconds.
Most of the waiting time is spent getting results from a stored procedure (SQL Server database),
but when I execute the stored procedure directly in SQL Server it returns results very fast.
So I assume the delay is not due to query execution but to the data transfer time from the SQL server to the web server. How can I eliminate or reduce this time? Any help or advice will be appreciated.
Thanks in advance.
I took a look at your site on websitetest.com. You can see the test here: http://www.websitetest.com/ui/tests/50c62366bdf73026db00029e.
I can see what you mean about the performance. In Singapore it's definitely fastest, but even there it's pretty slow. Elsewhere around the world it's even worse. There are a few things I would look at.
First pick any sample, such as http://www.websitetest.com/ui/tests/50c62366bdf73026db00029e/samples/50c6253a0fdd7f07060012b6. Now you can get some of this info in the Chrome DevTools, or FireBug, but the advantage here is seeing the measurements from different locations around the world.
Scroll down to the waterfall. All the way on the right side of the Timeline column heading is a drop down. Choose to sort descending. Here we can see the real bottlenecks. The first thing in the view is GetSellerRoller.json. It looks like hardly any time is spent downloading the file. Almost all the time is spent waiting for the server to generate the file. I see the site is using IIS and ASP.net. I would definitely look at taking advantage of some server-side caching to speed this up.
The same is true for the main HTML, though a bit more time is spent downloading that file. It looks like it's taking so long to download because it's a huge file (for HTML). I would take the inline CSS and JS out of there.
Go back to the natural order for the timeline, then you can try changing the type of file to show. Looks like you have 10 CSS files you are loading, so take a look at concatenating those CSS files and compressing them.
I see your site has to make 220+ connections to download everything. That's a huge number. Try to eliminate some of those.
Next down the list I see some big JPG files. Most of these are again waiting on the server, but some are taking a while to download. I looked at one of a laptop and was able to convert it to a highly compressed PNG, saving 30% of the size with a file that looked the same. Then I noticed that there are well over 100 images, many of which are really small. One of the big drags on your site is the sheer number of connections the browser has to manage. Take a look at implementing CSS sprites for those small images; you can probably take 30-50 of them down to a single image download.
Final thing I noticed is that you have a lot of JavaScript loading right up near the top of the page. Try moving some of that (where possible) to later in the page and also look into asynchronously loading the js where you can.
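For the scripts that don't need to run before first render, one common pattern is to inject them only after the load event; a rough sketch (the file name is a placeholder):
// Sketch: load non-critical scripts only once the page itself has finished loading.
window.addEventListener('load', () => {
  const s = document.createElement('script');
  s.src = '/js/widget.js'; // placeholder for whatever can wait
  s.async = true;
  document.body.appendChild(s);
});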
I think that's a lot of suggestions for you to try. After you solve those issues, take a look at leveraging a CDN and other caching services to help speed things up for most visitors.
You can find a lot of these recommendations in a bit more detail in Steve Souders' book: High Performance Web Sites. The book is 5 years old and still as relevant today as ever.
I've just taken a look at websitetest.com and that website is not right at all: it says my site is among the 97% fastest, yet according to its test from 13 locations it is at 26%. Their servers must be overloaded, and I recommend you use a more reputable testing site such as http://www.webpagetest.org, which is backed by many big companies.
Looking at your contact details, it looks like your target audience is India? If that is correct, you should use hosting wherever your main audience is, or the closest neighbouring region.

Web Development: What page load times do you aim for?

Website Page load times on the dev machine are only a rough indicator of performance of course, and there will be many other factors when moving to production, but they're still useful as a yard-stick.
So, I was just wondering what page load times you aim for when you're developing?
I mean page load times on Dev Machine/Server
And, on a page that includes a realistic quantity of DB calls
Please also state the platform/technology you're using.
I know that there could be a big range of performance regarding the actual machines out there, I'm just looking for rough figures.
Thanks
Less than 5 sec.
If it's just on my dev machine I expect it to be basically instant. I'm talking 10s of milliseconds here. Of course, that's just to generate and deliver the HTML.
Do you mean that, or do you mean complete page load/render time (HTML download/parse/render, images downloading/display, CSS downloading/parsing/rendering, JavaScript download/execution, Flash download/plugin startup/execution, etc.)? The latter is really hard to quantify because a good bit of that time will be burnt up on the client machine, in the web browser.
If you're just trying to ballpark decent download + render times with an untaxed server on the local network then I'd shoot for a few seconds... no more than 5-ish (assuming your client machine is decent).
Tricky question.
For a regular web app, you don't want your page load time to exceed 5 seconds.
But let's not forget that:
the 20%-80% rule applies here; if it takes 1 sec to load the HTML code, total rendering/loading time is probably 5-ish seconds (like fiXedd stated).
on a dev server, you're often not dealing with the real deal (traffic, DB load and size - number of entries can make a huge difference)
you want to take into account the way users want your app to behave. 5 seconds load time may be good enough to display preferences, but your basic or killer features should take less.
So in my opinion, here's a simple method to get rough figures for a simple web app (using, for example, Spring/Tapestry):
Sort the pages/actions according to your app profile (which pages should be lightning fast?) and give each a rough figure for the production environment.
Then take into account the browser loading/rendering stuff. Dividing by 5 is a good start, although you can use best practices to reduce that time.
Think about your production environment (DB load, number of entries, traffic...) and take an additional margin.
You've got your target load time on your production server; now it's up to you and your dev server to think about your target load time on your dev platform :-D
One of the most useful benchmarks we use for identifying server-side issues is the "internal" time taken from request-received to response-flushed by the web server itself. This means ignoring network traffic / latency and page render times.
We have some custom components (.net) that measure this time and inject it into the HTTP response header (we set a header called X-Server-Response); we can extract this data using our automated test tools, which means that we can then measure it over time (and between environments).
By measuring this time you get a pretty reliable view into the raw application performance - and if you have slow pages that take a long time to render, but the HTTP response header says it finished its work in 50ms, then you know you have network / browser issues.
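Our components are .NET, but for illustration, here is the same idea as an Express-style sketch (the header name matches ours; renderPage is a placeholder for the real work):
// Sketch: time the server-side work for a request and expose it in a response
// header so automated test tools can collect it.
const express = require('express');
const app = express();

app.get('/example', (req, res) => {
  const start = process.hrtime.bigint();
  const html = renderPage(req); // placeholder for the real page generation
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  res.setHeader('X-Server-Response', elapsedMs.toFixed(1));
  res.send(html);
});

app.listen(3000);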
Once you push your application into production, you (should) have things like caching, static-file sub-domains, JS/CSS minification, etc., all of which can offer huge performance gains (especially caching), but which can also mask underlying application issues (like pages that make hundreds of DB calls).
All of which is to say: the value we use for this time is sub 1 second.
In terms of what we offer to clients around performance, we usually use 2-3s for read-only pages, and up to 5s for transactional pages (registration, checkout, upload etc.)
