Firefox slow load time, high Waiting/Receiving

The website generally loads in under 4 seconds, but on the odd occasion (roughly 1 in 10 tries) it takes 20+ seconds. All I can spot is an unusually high waiting/receiving time in Chrome's dev tools. The web host says everything is fine and nothing looks out of the ordinary, but clearly there is an issue.
What could be causing such a long waiting/receiving time?

Figured this out in the end: it was PHP's session lock limit. session_start() holds an exclusive lock on the session file for the whole request, so concurrent requests from the same browser queue up behind each other, which shows up as a long waiting time.
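For anyone who wants to confirm the same diagnosis, here is a quick client-side sketch (the URL is a placeholder for any session-backed page on the affected site). If the server serializes requests on the session lock, the second request's time will be roughly the first request's time plus its own server time.

```typescript
// Fire two requests at once with the session cookie attached. Serialized
// timings (B waits for A) point at session locking on the server.
const url = "https://example.com/any-session-page"; // placeholder URL

async function timedFetch(label: string): Promise<void> {
  const start = performance.now();
  await fetch(url, { credentials: "include" }); // include the session cookie
  console.log(`${label}: ${Math.round(performance.now() - start)} ms`);
}

await Promise.all([timedFetch("request A"), timedFetch("request B")]);
```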

Related

Why does PageSpeed Insights keep returning a high TTI (Time to Interactive) for a simple game?

I submitted my app/game/PWA to PageSpeed Insights and it keeps giving me TTI values > 7000 ms and TBT values > 2000 ms (the overall score for the mobile experience is around 63).
I read what those values mean over and over, but I just cannot make them lower!
What is most annoying is that when accessing the page in a real-life browser, I don't need to wait 7 seconds for the page to become interactive, even with a clear cache!
The game can be accessed here and its source is here.
What comforts me is that Google's own game, Doodle Cricket, also scores terribly. In fact, PageSpeed Insights gives it an overall score of "?".
Summing up: is there a way to tell PageSpeed Insights the page is actually a game with only a simple canvas in it and that it is indeed interactive as soon as the first frame is rendered on the canvas (not 7 seconds later)?
UPDATE: Partial Solution
Thanks to Graham Ritchie's answer, I was able to detect the two slowest points when simulating a mid-tier mobile phone:
Loading/Compiling WebAssembly files: there was not much I could do about this, and this alone consumes almost 1.5 seconds...
Loading the main script file, script.min.js: I split the file in two, since almost two thirds of it is just string constants; the main script is now loaded with async and the string constants are loaded separately afterwards, which has saved more than 1.2 seconds from the load time.
The improvements have also saved some time on better mobile devices/desktop devices.
The commit diff is here.
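For reference, the general shape of that split looks something like this (the module and function names here are my own illustration, not the actual code from the commit):

```typescript
// script.min.ts: only what the first frame needs, loaded with
// <script async src="script.min.js"></script> so HTML parsing is not blocked.
function startGame(): void {
  // ...set up the canvas and render the first frame...
}

startGame(); // the page is interactive as soon as this returns

// The large string-constant bundle is pulled in afterwards, off the
// critical path; menus and text appear a moment later.
void import("./strings.min.js").then(({ STRINGS }) => {
  console.log(`loaded ${Object.keys(STRINGS).length} string constants`);
});
```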
UPDATE 2: Improving the Tooling
For anyone who gets here from Google, two extra tips that I forgot to mention before...
Use the CLI Lighthouse tool rather than the website (both for localhost and for internet websites): npm install -g lighthouse, then call lighthouse --view http.... (or use any other arguments as necessary).
If running on a notebook, make sure it is not running on the battery, but actually connected to a power source 😅
Summing up: is there a way to tell PageSpeed Insights the page is actually a game with only a simple canvas in it and that it is indeed interactive as soon as the first frame is rendered on the canvas (not 7 seconds later)?
No, and unfortunately I think you have missed one key piece of the puzzle as to why those numbers are so high.
PageSpeed Insights throttles both the network AND the CPU to simulate a mid-tier mobile phone on a 4G connection.
The CPU throttling is your issue.
If I run your game with the Performance tab of Google Chrome Developer Tools open and "4x slowdown" set on the CPU, I get a few long tasks, one of which takes 5.19 s to run!
Your issue isn't page weight, as the site is lightweight; it is JavaScript execution time.
You would have to look through your code and see why you have a task that takes so long to run; look for nested loops, as they are normally the issue!
There are several other tasks that take 1-2 seconds total between them but that 5 second task is the main culprit!
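If you would rather catch these in code than in the profiler, a minimal sketch using the standard Long Tasks API will log anything that blocks the main thread for more than 50 ms:

```typescript
// Log every "long task" (> 50 ms on the main thread) with its start time
// and duration, so it can be matched against the game's startup work.
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(
      `Long task: ${Math.round(entry.duration)} ms ` +
      `starting at ${Math.round(entry.startTime)} ms`
    );
  }
});
longTaskObserver.observe({ type: "longtask", buffered: true });
```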
Hopefully that clears things up a bit, any questions just ask.

PageSpeed error: Invalid task timing data

My website uses the following optimizations to free up the main thread and optimize the content load process:
- Web workers for loading async data as well as images.
- Defer images until all the content on the page has loaded first (see the sketch after this list).
- typekit webfontloader for optimized font load
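A sketch of that image-deferral pattern (my illustration, not the asker's actual code): real images carry a data-src attribute and only start downloading as they approach the viewport.

```typescript
// Defer images: swap data-src into src only when the image nears the
// viewport, so the initial content wins the bandwidth race.
const imageObserver = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src!; // start the real download
    imageObserver.unobserve(img);
  }
});

document
  .querySelectorAll<HTMLImageElement>("img[data-src]")
  .forEach((img) => imageObserver.observe(img));
```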
Now, since I completely switched over to web workers for all async network tasks, I have noticed the 'Invalid task timing data' error about 50% more often.
But my score seems to be unaffected.
My question is, how accurate is this score?
P.S.: My initial data is huge, so styling and rendering take ~1300 ms and ~1100 ms respectively. [required constraint]
After doing a few experiments and glancing through the Lighthouse (the engine that powers PSI) source code, I think the problem comes from the fact that once everything has loaded (the page load event), Lighthouse only runs for a few seconds before terminating.
However, your JS runs for quite some time afterwards, with the service workers performing some tasks nearly 11 seconds after page load on one of my runs (probably storing some images, which take a long time to download).
I am guessing you are getting intermittent errors because sometimes the CPU goes quiet long enough to calculate JS execution time and sometimes it does not (depending on the gap between the tasks it is performing).
To see what I mean, open developer tools in Chrome -> Performance tab -> set CPU slowdown to 4x (which is what Lighthouse emulates) and press 'record' (top left). Now reload the page and, once it has fully loaded, stop recording.
You will get a performance profile with a section 'Main' that you can expand to see the main-thread load (the main thread is still used despite the worker, as it needs to decode the base64-encoded images; I am not sure whether that can be shifted onto a different thread).
You will see that tasks continue to use CPU for 3-4 seconds after page load.
It is a bug in Lighthouse, but at the same time something to address on your side, as it is symptomatic of a problem (why base64-encoded images? That is where you are taking the performance hit on what would otherwise be a well-optimised site).
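On the "can the decode be shifted to a different thread" question: if the images can be fetched as binary instead of being inlined as base64, the decode can happen in the worker via createImageBitmap, and the resulting bitmap is transferable. A sketch, assuming a dedicated worker and a 2D canvas:

```typescript
// worker.ts: fetch and decode entirely off the main thread, then
// transfer the ready-to-draw bitmap back (ImageBitmap is transferable,
// so no copy is made).
self.onmessage = async (event: MessageEvent<string>) => {
  const response = await fetch(event.data);
  const bitmap = await createImageBitmap(await response.blob());
  (self as unknown as Worker).postMessage(bitmap, [bitmap]);
};

// main.ts: the main thread only draws; it never touches the image bytes.
const worker = new Worker("worker.js");
const canvas = document.querySelector("canvas")!;
worker.onmessage = (event: MessageEvent<ImageBitmap>) => {
  canvas.getContext("2d")!.drawImage(event.data, 0, 0);
};
worker.postMessage("/images/example.jpg"); // placeholder image URL
```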

High CPU Usage with no load

We are running a Windows service which checks a folder for files every 5 seconds and, if any are found, logs some info about them using NLog.
I already tried the suggestions from 'ASP.NET: High CPU usage under no load' without success.
When the service has just started there is hardly any CPU usage. After a few hours we see CPU peaks to 100%, and after some more waiting the CPU usage stays high.
I tried the steps described in http://blogs.technet.com/b/sooraj-sec/archive/2011/09/14/collecting-data-using-xperf-for-high-cpu-utilization-of-a-process.aspx to produce information on what is going on:
I don't know where to continue. Any help appreciated.
Who wrote this Windows service? Was it you or a third party?
To me, checking a folder for changes every 5 seconds sounds really suspicious, and may be the primary reason why you are getting this massive slowdown.
If you do it right, you can get directory changes immediately as they happen, and yet spend almost no CPU time while doing that.
This Microsoft article explains exactly how to do that: Obtaining Directory Change Notifications, using the functions FindFirstChangeNotification, FindNextChangeNotification, ReadDirectoryChangesW and WaitForMultipleObjects.
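The article covers the native Win32 route, which is the right one for a .NET service. Purely to illustrate the push model it describes, here is the same idea in Node/TypeScript with fs.watch (a different stack from the asker's, used only as a sketch): block on notifications instead of polling, and the process uses essentially no CPU while idle.

```typescript
// Event-driven directory watching: the callback fires as soon as the
// folder changes, instead of scanning it every 5 seconds.
import { watch } from "node:fs";

watch("C:\\inbox", (eventType, filename) => {
  // "C:\inbox" is a placeholder folder; filename can be null on some
  // platforms, hence the fallback.
  console.log(`${eventType}: ${filename ?? "(unknown file)"}`);
});
```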
After a lot of digging it turned out to be this:
The service had a private object X with property Y.
Every time the service's timer fired, X was passed to the business logic, where Y was used and eventually disposed. The garbage collector then waits for X itself to be disposed, which never happens until the service is restarted, so every timer tick left behind an extra waiting GC thread.

Weird change in CPU usage makes RubyMine almost unusable [Graph included]

I have been using RubyMine for about 2 months now and have been experiencing performance issues ever since. I thought it was because of the buggy debugger, then because of swapping... Nay.
Then I realized that it works fine for 5-10 minutes, until RubyMine (or Ubuntu?) decides to alternately push one core to 100% while hardly using the other. It sometimes goes back to normal but typically stays that way, making it really unpleasant to use.
In the graph you can see the change at around the 30-second mark.
What do you think?
Providing a CPU snapshot for RubyMine may help to identify the problem, if it is indeed the RubyMine process eating your resources.

Web Development: What page load times do you aim for?

Website page load times on the dev machine are only a rough indicator of performance, of course, and there will be many other factors when moving to production, but they're still useful as a yardstick.
So, I was just wondering what page load times you aim for when you're developing?
I mean page load times on the dev machine/server,
and on a page that includes a realistic quantity of DB calls.
Please also state the platform/technology you're using.
I know that there could be a big range of performance regarding the actual machines out there, I'm just looking for rough figures.
Thanks
Less than 5 sec.
If it's just on my dev machine I expect it to be basically instant. I'm talking 10s of milliseconds here. Of course, that's just to generate and deliver the HTML.
Do you mean that, or do you mean complete page load/render time (HTML download/parse/render, images downloading/displaying, CSS downloading/parsing/rendering, JavaScript download/execution, Flash download/plugin startup/execution, etc.)? The latter is really hard to quantify because a good bit of that time will be burnt up on the client machine, in the web browser.
If you're just trying to ballpark decent download + render times with an untaxed server on the local network then I'd shoot for a few seconds... no more than 5-ish (assuming your client machine is decent).
Tricky question.
For a regular web app, you don't want your page load time to exceed 5 seconds.
But let's not forget that:
the 20%-80% rule applies here: if it takes 1 second to load the HTML, total rendering/loading time is probably 5-ish seconds (as fiXedd stated).
on a dev server, you're often not dealing with the real deal (traffic, DB load and size; the number of entries can make a huge difference)
you want to take into account the way users want your app to behave. 5 seconds load time may be good enough to display preferences, but your basic or killer features should take less.
So in my opinion, here's a simple method to get rough figures for a simple web app (using, for example, Spring/Tapestry):
Sort the pages/actions according to your app profile (which pages should be lightning fast?) and give each a rough figure for the production environment.
Then take into account the browser loading/rendering overhead: dividing the total budget by 5 (per the 20%-80% rule above) is a good start, although you can use best practices to reduce that time.
Think about your production environment (DB load, number of entries, traffic...) and take an additional margin.
You've got your target load time on your production server; now it's up to you and your dev server to think about your target load time on your dev platform :-D
One of the most useful benchmarks we use for identifying server-side issues is the "internal" time taken from request-received to response-flushed by the web server itself. This means ignoring network traffic / latency and page render times.
We have some custom components (.net) that measure this time and inject it into the HTTP response header (we set a header called X-Server-Response); we can extract this data using our automated test tools, which means that we can then measure it over time (and between environments).
By measuring this time you get a pretty reliable view into the raw application performance - and if you have slow pages that take a long time to render, but the HTTP response header says it finished its work in 50ms, then you know you have network / browser issues.
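A minimal sketch of the same idea in Node/Express (the original components are .NET; the header name comes from the answer, and I assume the npm response-time middleware, which stamps the elapsed time just before the headers are written):

```typescript
// Stamp each response with the server-side processing time so automated
// tests can track raw application performance over time.
import express from "express";
import responseTime from "response-time";

const app = express();

// Measures from request arrival at this middleware to the moment the
// response headers are written, and records it in X-Server-Response.
app.use(responseTime({ header: "X-Server-Response" }));

app.get("/report", (_req, res) => {
  res.send("report body"); // imagine the DB calls happening here
});

app.listen(3000);
```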
Once you push your application into production, you (should) have things like caching, static-file subdomains, JS/CSS minification etc., all of which can offer huge performance gains (especially caching), but which can also mask underlying application issues (like pages that make hundreds of DB calls).
All of which is to say: the value we use for this time is sub-1 second.
In terms of what we offer to clients around performance, we usually use 2-3s for read-only pages, and up to 5s for transactional pages (registration, checkout, upload etc.)
