Mobile version FASTER than AMP. AMP showing less - performance

We have finished an optimization project for our mobile version and we know the performance is now very good (AMP-level good). We also noticed that AMP started to show less and less. We ran some tests on multiple articles and found this:
[12 ms] Canonical page response time
[16 ms] AMP page response time [INFO: AMP page fetch is slower than Canonical page]
[20 ms] Google AMP Cache response time [INFO: AMP Cache page fetch is slower than Canonical page]
[0.75 X] Canonical / AMP response time factor
[0.6 X] Canonical / Google AMP Cache response time factor
This raises the following questions for us:
1) Will AMP still show if the canonical version is faster?
2) If AMP is not showing, we are out of the AMP carousel, which will damage our organic search traffic. How can we avoid that?
Regards
Hernán.

The AMP version will show as long as you have bi-directional rel-amphtml and rel-canonical links between your pages.
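For example, the canonical page needs a link rel="amphtml" pointing at the AMP URL and the AMP page a link rel="canonical" pointing back. A quick way to sanity-check one pair of pages, as a rough sketch only (Node 18+; the URLs and the regex-based check are illustrative assumptions, not your actual setup):

```typescript
// Rough sketch: verify a canonical/AMP pair link to each other.
// The URLs below are placeholders, not the asker's real articles.
const canonicalUrl = "https://example.com/article";
const ampUrl = "https://example.com/article/amp";

async function hasLink(pageUrl: string, rel: string, href: string): Promise<boolean> {
  const html = await (await fetch(pageUrl)).text();
  // Naive regex check for illustration only; a real validator would parse the DOM.
  const re = new RegExp(`<link[^>]+rel=["']${rel}["'][^>]+href=["']${href}["']`, "i");
  return re.test(html);
}

const ok =
  (await hasLink(canonicalUrl, "amphtml", ampUrl)) &&
  (await hasLink(ampUrl, "canonical", canonicalUrl));
console.log(ok ? "Bi-directional links present" : "Missing rel=amphtml / rel=canonical pairing");
```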

Related

Failing core web vitals despite 90%+ PSI results

Site: www.purelocal.com.au
Tested thousands of URLs in Google PSI - all are green, 90%+.
However, in Google webmaster tools there are 0 GOOD URLs.
Can someone please explain what Google requires and what we can do to pass Core Web Vitals before June?
We've spent months optimising everything and cannot optimise any further, but Google says that NONE of our URLs pass Core Web Vitals... it's just ridiculous.
Looking at your website's report in the CrUX Dashboard, there are a couple of things you could optimize more:
First, your site's LCP is right on the edge of having 75% good desktop experiences, and phone experiences are below that at 66% good. https://web.dev/optimize-lcp/ has some great tips for addressing LCP issues.
Second, while your site's desktop FID experiences are overwhelmingly good (98%), you do seem to have a significant issue for phone users (only 44% good). There are similarly great tips in the https://web.dev/optimize-fid/ article.
While the big green "98" score on PSI makes it look like the page is nearly perfect, what matters most in terms of the user experience is real field data. That information can be found in the "Field Data" and "Origin Summary" sections of the report.
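If you want to pull that field data programmatically rather than reading it off the PSI report, the PageSpeed Insights API exposes it under loadingExperience. A rough sketch (Node 18+; treat the exact response fields as something to double-check against the API docs):

```typescript
// Rough sketch: read CrUX field data for a page from the PageSpeed Insights v5 API.
const pageUrl = "https://www.purelocal.com.au/";
const api = new URL("https://www.googleapis.com/pagespeedonline/v5/runPagespeed");
api.searchParams.set("url", pageUrl);
api.searchParams.set("strategy", "mobile"); // phone experiences are the weak spot here

const data = await (await fetch(api)).json();
// Field (real-user) data lives under loadingExperience; lab data under lighthouseResult.
const metrics = data.loadingExperience?.metrics ?? {};
console.log("LCP p75 (ms):", metrics.LARGEST_CONTENTFUL_PAINT_MS?.percentile);
console.log("FID p75 (ms):", metrics.FIRST_INPUT_DELAY_MS?.percentile);
```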
You mentioned in the comments that your server response time is an issue. I can confirm this with lab testing:
https://webpagetest.org/graph_page_data.php?tests=210506_AiDcJ0_0283e8c51814788904bdf19cebe7a5c8&medianMetric=TTFB&fv=1&median_run=1&zero_start=true&control=NOSTAT#TTFB
https://webpagetest.org/result/210506_AiDcJ0_0283e8c51814788904bdf19cebe7a5c8/8/details/#waterfall_view_step1
The long light blue bar on line 1 of the chart above shows how long it takes your server to respond to the request. In this case the time to first byte (TTFB) is 1.132 seconds. This makes a fast LCP very hard to achieve for most users, because in these tests it takes 1.9 seconds just to get the HTML to the client. No amount of frontend optimization can make the HTML arrive sooner than that. You need to focus on backend optimizations to get the TTFB down.
I can't give you any specific hosting recommendations but it does seem like the shared hosting is adversely affecting your users' LCP performance.
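If you want to keep an eye on TTFB for real visitors once you make backend changes, the Navigation Timing API reports it on every page view. A minimal sketch that could run in the page (the /rum endpoint is a made-up placeholder):

```typescript
// Rough sketch: report time-to-first-byte for real visitors via Navigation Timing.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
if (nav) {
  // responseStart is measured from the navigation's time origin, so it approximates TTFB.
  const ttfb = nav.responseStart;
  // "/rum" is a hypothetical collection endpoint, not something that exists on the site.
  navigator.sendBeacon("/rum", JSON.stringify({ ttfb, url: location.pathname }));
}
```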

Why does PageSpeed Insights keep returning a high TTI (Time to Interactive) for a simple game?

I submitted my app/game/PWA to PageSpeed Insights and it keeps giving me TTI values > 7000 ms and TBT values > 2000 ms, as can be seen in the screenshot below (the overall score for a mobile experience is around 63):
I have read what those values mean over and over, but I just cannot make them lower!
What is most annoying is that when I access the page in a real-life browser, I don't need to wait 7 seconds for the page to become interactive, even with a cleared cache!
The game can be accessed here and its source is here.
What comforts me is that Google's own game, Doodle Cricket, also scores terribly. In fact, PageSpeed Insights gives it an overall score of "?".
Summing up: is there a way to tell PageSpeed Insights the page is actually a game with only a simple canvas in it and that it is indeed interactive as soon as the first frame is rendered on the canvas (not 7 seconds later)?
UPDATE: Partial Solution
Thanks to Graham Ritchie's answer, I was able to find the two slowest points when simulating a mid-tier mobile phone:
- Loading/compiling the WebAssembly files: there was not much I could do about this, and it alone consumes almost 1.5 seconds...
- Loading the main script file, script.min.js: I split the file in two, since almost two thirds of it is just string constants. I now load the main script with async and defer loading the string constants, which saved more than 1.2 seconds of load time (see the sketch after the commit link below).
The improvements also saved some time on faster mobile devices and on desktop.
The commit diff is here.
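For anyone who wants the shape of that split without digging through the diff, here is a rough sketch of the idea (the module paths and the registerStrings() hook are made-up names for illustration, not the ones in the actual repo):

```typescript
// index.ts - rough sketch of "start the game now, pull the big string constants in later".
// The entry script is included with <script async type="module" src="index.js">,
// so it does not block HTML parsing.
import { startGame, registerStrings } from "./game.js"; // hypothetical module

startGame(); // the canvas becomes interactive without waiting for the string data

// Roughly two thirds of the old bundle was string constants; load that part lazily.
import("./strings.js").then(({ strings }) => registerStrings(strings));
```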
UPDATE 2: Improving the Tooling
For anyone who gets here from Google, two extra tips that I forgot to mention before...
Use the Lighthouse CLI rather than the website (both for localhost and for internet websites): npm install -g lighthouse, then run lighthouse --view http.... (or add any other arguments as necessary); a programmatic equivalent is sketched after these tips.
If running on a notebook, make sure it is not running on the battery, but actually connected to a power source 😅
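For reference, a rough sketch of driving Lighthouse from Node instead of the CLI (assuming the lighthouse and chrome-launcher npm packages; double-check the options against their docs):

```typescript
// Rough sketch: run Lighthouse programmatically instead of through the PSI website.
import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
const result = await lighthouse("https://example.com/", {
  port: chrome.port,               // talk to the Chrome instance launched above
  onlyCategories: ["performance"], // skip SEO/accessibility/etc. for faster runs
  output: "html",
});
console.log("Performance score:", result?.lhr.categories.performance.score);
await chrome.kill();
```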
Summing up: is there a way to tell PageSpeed Insights the page is actually a game with only a simple canvas in it and that it is indeed interactive as soon as the first frame is rendered on the canvas (not 7 seconds later)?
No, and unfortunately I think you have missed one key piece of the puzzle as to why those numbers are so high.
PageSpeed Insights throttles both the network AND the CPU to simulate a mid-tier mobile phone on a 4G connection.
The CPU throttling is your issue.
If I run your game in the Performance tab of Chrome Developer Tools with "4x slowdown" on the CPU, I get a few long tasks, one of which takes 5.19 s to run!
Your issue isn't page weight, as the site is lightweight; it is JavaScript execution time.
You would have to look through your code and see why you have a task that takes so long to run; look for nested loops, as they are normally the issue!
There are several other tasks that take 1-2 seconds in total between them, but that 5-second task is the main culprit!
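As a generic illustration of how one of those long tasks can be broken up once you find it (processItem below is a stand-in for whatever the real per-item work is, not the game's actual code):

```typescript
// Rough sketch: split one long loop into chunks that yield back to the main thread,
// so no single task blocks interactivity for seconds at a time.
async function processAll(items: number[], chunkSize = 500): Promise<void> {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      processItem(item); // stand-in for the real per-item work
    }
    // Yield so input handlers and rendering can run between chunks.
    await new Promise<void>((resolve) => setTimeout(resolve, 0));
  }
}

function processItem(n: number): void {
  Math.sqrt(n); // placeholder work
}
```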
Hopefully that clears things up a bit, any questions just ask.

PageSpeed error: Invalid task timing data

My website uses the following optimizations to free up main thread as well as optimize content load process:
- Web workers for loading async data as well as images.
- Defer images until all the content on the page has loaded first
- Typekit webfontloader for optimized font loading
Now, since I completely switched over to web workers for all async network-related tasks, I have noticed an increased occurrence (by ~50%) of the following errors:
But my score seems to be unaffected.
My question is, how accurate is this score?
P.S.: My initial data is huge, so styling and rendering take ~1300 ms and ~1100 ms respectively [a required constraint].
After doing a few experiments and glancing through the Lighthouse source code (the engine that powers PSI), I think the problem comes from the fact that once everything has loaded (the page load event), Lighthouse only runs for a few seconds before terminating.
However, your JS runs for quite some time afterwards, with the service workers performing some tasks nearly 11 seconds after page load on one of my runs (probably storing some images which take a long time to download).
I am guessing you are getting intermittent errors because sometimes the CPU goes quiet for long enough to calculate JS execution time and sometimes it does not (depending on the gaps between the tasks it is performing).
To see what I mean, open Developer Tools in Chrome -> Performance tab -> set CPU throttling to 4x slowdown (which is what Lighthouse emulates) and press 'Record' (top left). Now reload the page and, once it has fully loaded, stop recording.
You will get a performance profile with a 'Main' section that you can expand to see the main-thread load (the main thread is still used despite the worker, as it needs to decode the base64-encoded images; I am not sure whether that can be shifted onto a different thread).
You will see that tasks continue to use CPU for 3-4 seconds after page load.
It is a bug in Lighthouse, but at the same time something to address on your side, as it is symptomatic of a problem (why base64-encoded images? That is where you are taking the performance hit on what would otherwise be a well-optimised site).
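On whether the decode can be shifted to a different thread: if the images can be fetched as binary rather than base64 strings, one possible approach is to decode them in the worker with createImageBitmap and transfer the result. A rough sketch under that assumption (not the site's current code):

```typescript
// worker.ts - rough sketch: fetch and decode an image off the main thread.
self.onmessage = async (e: MessageEvent<string>) => {
  const blob = await (await fetch(e.data)).blob();
  const bitmap = await createImageBitmap(blob); // decoding happens in the worker
  // ImageBitmap is transferable, so it moves to the main thread without a copy.
  (self as unknown as Worker).postMessage(bitmap, [bitmap]); // cast keeps this single-file sketch simple
};

// main.ts
const worker = new Worker("worker.js");
worker.onmessage = (e: MessageEvent<ImageBitmap>) => {
  const canvas = document.querySelector("canvas")!;
  canvas.getContext("2d")?.drawImage(e.data, 0, 0); // ready to draw, no main-thread decode
};
worker.postMessage("/images/example.jpg"); // hypothetical image URL
```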

Varnish & ESIs: Fetching in parallel and possible workarounds

I'm investigating using Varnish with ESIs to cache page content for a high traffic forum-like website.
Context: I want to cache content for visitors only (logged-in users will have a really different display and need absolutely fresh content). Still, for visitors, some parts of a page need to be dynamic:
- not cacheable, for example a visitor-dependent module (think of a 'suggestions' widget that is fed by a real-time analysis of the pages viewed, thanks to a beacon)
- cacheable with a small TTL of 15 minutes, for example a 'latest posts' widget or fast-changing ad campaigns.
Currently we are using Apache/PHP/Symfony/memcache to cache pages and have an in-house ESI-like mechanism: a page from the cache is parsed and some specific tags are interpreted (including calls to web services and/or databases). This is not fast enough, since server time is then around 700 ms.
To replace this solution, we could use Varnish + ESIs. The total number of ESIs included in a page can reach 15. The real number of ESIs to fetch will be less than that, but not by much, given the ESIs' TTLs. The critical problem is that Varnish fetches the ESIs sequentially instead of in parallel, and this is not acceptable for us. This feature is somewhere late on Varnish's roadmap.
So,
What is your experience with Varnish and ESIs? How many ESIs do you use, and what response time gains have you seen?
Do you know of workarounds, or of other serious and configurable reverse proxies (VCL was nice) with parallel ESI fetching?
If not, what kind of caching strategy do you use for equivalent use cases?
Thanks,
P.
Currently I work for a high-traffic site, and performance is everything for us. On several pages we use a lot of ESIs (20+), for example on our search result list. The result list is a JSON response, and every result block in it is a separate ESI. Admittedly, we do cache warming, but we didn't run into any performance issues with this. The number of ESIs will only be a problem if the backend requests are really slow.
Parallel ESI fetching is on Varnish's feature request list, but I don't think it made it into version 4.1.
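As an illustration of the kind of application-level workaround asked about above (whatever the proxy), you can assemble the page yourself and fetch the fragments in parallel. A rough sketch in Node/TypeScript, with invented fragment URLs and placeholder markers rather than real ESI tags:

```typescript
// Rough sketch: fetch page fragments concurrently and stitch them into a cached shell,
// instead of relying on the proxy to resolve ESI includes one by one.
const fragmentUrls = [
  "http://backend.internal/widgets/latest-posts", // hypothetical fragment endpoints
  "http://backend.internal/widgets/suggestions",
  "http://backend.internal/widgets/ads",
];

async function assemblePage(shell: string): Promise<string> {
  const fragments = await Promise.all(
    fragmentUrls.map(async (u) => (await fetch(u)).text()), // all fetched in parallel
  );
  // The shell contains numbered placeholders such as <!--fragment:0-->.
  return fragments.reduce((html, frag, i) => html.replace(`<!--fragment:${i}-->`, frag), shell);
}
```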

Web Development: What page load times do you aim for?

Page load times on the dev machine are only a rough indicator of performance, of course, and there will be many other factors when moving to production, but they're still useful as a yardstick.
So, I was just wondering what page load times you aim for when you're developing?
I mean page load times on Dev Machine/Server
And, on a page that includes a realistic quantity of DB calls
Please also state the platform/technology you're using.
I know that there could be a big range of performance regarding the actual machines out there, I'm just looking for rough figures.
Thanks
Less than 5 sec.
If it's just on my dev machine I expect it to be basically instant. I'm talking 10s of milliseconds here. Of course, that's just to generate and deliver the HTML.
Do you mean that, or do you mean complete page load/render time (HTML download/parse/render, images downloading/displaying, CSS downloading/parsing/rendering, JavaScript download/execution, Flash download/plugin startup/execution, etc.)? The latter is really hard to quantify because a good bit of that time will be burnt up on the client machine, in the web browser.
If you're just trying to ballpark decent download + render times with an untaxed server on the local network then I'd shoot for a few seconds... no more than 5-ish (assuming your client machine is decent).
Tricky question.
For a regular web app, you don't want your page load time to exceed 5 seconds.
But let's not forget that:
the 20%-80% rule applies here; if it takes 1 second to load the HTML code, total rendering/loading time is probably 5-ish seconds (as fiXedd stated).
on a dev server, you're often not dealing with the real deal (traffic, DB load and size - number of entries can make a huge difference)
you want to take into account the way users want your app to behave. 5 seconds load time may be good enough to display preferences, but your basic or killer features should take less.
So in my opinion, here's a simple method to get rough figures for a simple web app (using, for example, Spring/Tapestry):
Sort the pages/actions according to your app profile (which pages should be lightning fast?) and give each a rough target figure for the production environment.
Then take into account the browser loading/rendering stuff. Dividing by 5 is a good start, although you can use best practices to reduce that time.
Think about your production environment (DB load, number of entries, traffic...) and take an additional margin.
You've got your target load time on your production server; now it's up to you and your dev server to think about your target load time on your dev platform :-D
One of the most useful benchmarks we use for identifying server-side issues is the "internal" time taken from request-received to response-flushed by the web server itself. This means ignoring network traffic / latency and page render times.
We have some custom components (.net) that measure this time and inject it into the HTTP response header (we set a header called X-Server-Response); we can extract this data using our automated test tools, which means that we can then measure it over time (and between environments).
By measuring this time you get a pretty reliable view into the raw application performance - and if you have slow pages that take a long time to render, but the HTTP response header says it finished its work in 50ms, then you know you have network / browser issues.
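The components we use for this are .NET, but the idea is framework-agnostic. A rough Node/TypeScript equivalent of stamping the server-side time into a response header might look like this (the header name matches ours; everything else, including renderPage, is a placeholder):

```typescript
// Rough sketch: measure request-received to response-ready time and expose it as a header.
import { createServer } from "node:http";

const server = createServer(async (req, res) => {
  const start = process.hrtime.bigint();
  const body = await renderPage(req.url ?? "/");                   // stand-in for the real app work
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  res.setHeader("X-Server-Response", `${elapsedMs.toFixed(1)}ms`); // readable by automated test tools
  res.end(body);
});

async function renderPage(path: string): Promise<string> {
  return `<html><body>Rendered ${path}</body></html>`;             // placeholder renderer
}

server.listen(8080);
```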
Once you push your application into production, you (should) have things like caching, static-file sub-domains, JS/CSS minification etc. - all of which can offer huge performance gains (especially caching), but which can also mask underlying application issues (like pages that make hundreds of DB calls).
All of which is to say, the value we aim for here is sub 1 second.
In terms of what we offer to clients around performance, we usually use 2-3s for read-only pages, and up to 5s for transactional pages (registration, checkout, upload etc.)

Resources