Lighthouse: difference between results - performance

I'm optimizing a webpage and using Lighthouse to monitor performance. I've found consistent differences between the results I get when running the audit from the Chrome extension and from the DevTools panel.
On average I get these Largest Contentful Paint (LCP) values:
DevTools panel (with no throttling): 2-2.5 seconds
Chrome extension: 5.5-7 seconds
The LCP element is an image, and timing it visually I'm much nearer to the 2.5 seconds.
Any clues on which measurement I should rely on?
Thanks in advance!
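One way to take the browser UI out of the equation is to run the audit from the Lighthouse CLI (assuming it is installed, e.g. via npm install -g lighthouse) with the two throttling methods side by side; roughly, "simulate" matches the extension's default simulated throttling and "provided" applies none. A minimal sketch, with a placeholder URL:

```python
import json
import subprocess

URL = "https://example.com"  # placeholder URL

def run_lighthouse(throttling_method: str) -> float:
    """Run the Lighthouse CLI and return the reported LCP in seconds."""
    report_path = f"report-{throttling_method}.json"
    subprocess.run(
        [
            "lighthouse", URL,
            "--output=json",
            f"--output-path={report_path}",
            f"--throttling-method={throttling_method}",
            "--chrome-flags=--headless",
            "--quiet",
        ],
        check=True,
    )
    with open(report_path) as f:
        report = json.load(f)
    # numericValue for this audit is reported in milliseconds
    return report["audits"]["largest-contentful-paint"]["numericValue"] / 1000

if __name__ == "__main__":
    # "simulate" is Lighthouse's default simulated throttling (what the
    # extension uses by default); "provided" applies no throttling at all.
    for method in ("simulate", "provided"):
        print(f"{method}: LCP = {run_lighthouse(method):.2f} s")
```

If the "provided" run lands near your DevTools numbers and the "simulate" run near the extension's, the gap is the throttling configuration rather than the page itself; the throttled figure is usually the one to optimize against, since Lighthouse's default throttling is meant to approximate a slower device and connection.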

Related

How are hits/sec calculated by BlazeMeter?

I ran a performance test for 500 VU using the "jp@gc - Stepping Thread Group". I noticed that from the 200 VU step up to the 500 VU load, the hits/sec stayed at 20-25 consistently for 25 minutes until the end of the run, with a 0.04% error rate.
I know that I could control the hits/sec by using Limit RPS or a Constant Throughput Timer, but I did not apply or enable either.
My questions are:
1. Was the run good or bad?
2. What should the hits/sec be for a 500 VU load?
3. Is the hits/sec determined by the BlazeMeter engine based on its own efficiency?
Make sure you correctly configure the Stepping Thread Group.
If you have the same throughput for 200 and 500 VU, that is not good: on an ideal system the throughput for 500 VU should be 2.5 times higher. If you are uncertain whether your application or the BlazeMeter engine(s) are to blame, you can check the health of BlazeMeter's instances during the test time frame on the Engine Health tab of your load test report.
As far as I'm aware, BlazeMeter relies on JMeter's throughput calculation algorithm. According to the JMeter glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
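As a rough illustration of that formula, here is a minimal sketch (assuming a JMeter results file in CSV/JTL format with the default timeStamp and elapsed columns, both in milliseconds) that computes overall throughput the same way:

```python
import csv

def throughput_from_jtl(path: str) -> float:
    """Compute overall throughput (requests/second) from a JMeter JTL/CSV file."""
    starts, ends = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            start = int(row["timeStamp"])            # sample start, epoch ms
            end = start + int(row["elapsed"])        # sample end, epoch ms
            starts.append(start)
            ends.append(end)
    # "from the start of the first sample to the end of the last sample"
    total_time_s = (max(ends) - min(starts)) / 1000
    return len(starts) / total_time_s                # Throughput = requests / total time

print(f"{throughput_from_jtl('results.jtl'):.2f} requests/sec")
```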
Hits are not always the best measurement of throughput. Here is why: the number of hits can be drastically altered by settings on the servers of the applications under test related to the management of cached items. An example: say you have a very complex page with 100 objects on it. Only the top-level page is dynamic in nature, with the rest of the items being page components such as images, style sheets, fonts, JavaScript files, etc. In a model where no cache settings are in place, you will find that all 100 elements have to be requested, each generating a "hit" in both reporting and the server stats. These will show up in your hits/second model.
Now optimize the cache settings so that some information is cached at the client for a very long period (logo images and fonts for a year), some for the build interval of one week (resident in the CDN or at the client), and only the dynamic top-level HTML remains uncached. In this model there is only one hit generated at the server, except for the period immediately after a build is deployed, when the CDN is being seeded for the majority of users. Occasionally a new CDN node will come into play for users in a different geographic area, but after the first user seeds the cache the remainder pull from the CDN and then cache at the client. In this case your effective hits per second drop tremendously at both the CDN and the origin servers, especially where you have return users.
Food for thought.

Realtime alerts for page speed

I'm looking for a tool that will send me an alert for page load time.
Think of a downtime alert, e.g. Pingdom, but one that sends alerts once page load time rises above a certain threshold. For example: alert me when page X has taken longer than 7 seconds consistently for 30 minutes.
I know of a lot of tools that give you historical records and page speed metrics after the fact, or give you Apdex scores, but nothing that alerts around speed thresholds.
Does anyone know of such a tool?
Almost all website monitoring services can alert when the response time is above a certain threshold. Your question, however, is a bit more specific since you have a time frame (30 min). Depending on the service used and the monitoring frequency, during a 30-minute period you might have between 1 and 30 tests. Do you want an alert if all of those tests are above 7 seconds, or if the average response time is above 7 seconds?
I can speak of WebSitePulse where you can receive an alert if 1 or more tests in a row have detected a problem or if the page-load time is within certain limits.
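If you would rather roll a quick check yourself while evaluating those services, here is a minimal sketch; it assumes that plain HTTP response time is an acceptable stand-in for full page load time, and send_alert is a hypothetical stub to replace with email, Slack, or whatever you use. It polls once a minute and alerts when every sample in the last 30 minutes is above 7 seconds:

```python
import time
from collections import deque

import requests

URL = "https://example.com/slow-page"   # placeholder page to watch
THRESHOLD_S = 7.0                        # alert threshold in seconds
WINDOW = 30                              # number of one-minute samples to keep

def send_alert(message: str) -> None:
    """Hypothetical alert hook; replace with email, Slack, PagerDuty, etc."""
    print("ALERT:", message)

samples = deque(maxlen=WINDOW)

while True:
    start = time.monotonic()
    try:
        requests.get(URL, timeout=60)
        elapsed = time.monotonic() - start
    except requests.RequestException:
        elapsed = float("inf")           # treat failures as "slow"
    samples.append(elapsed)

    # Fire only when the window is full and every sample breached the threshold.
    if len(samples) == WINDOW and all(s > THRESHOLD_S for s in samples):
        send_alert(f"{URL} has exceeded {THRESHOLD_S}s for {WINDOW} consecutive minutes")
    time.sleep(60)
```

Note that this measures only the HTML response, not DOM or render time, which is the same limitation most uptime-style monitors have.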
GTmetrix.com offers daily alerts for YSlow and PageSpeed scores, as well as great breakdowns and grades for specific ticket items. It has a great freemium business model as well: free for 3 sites.
Upgraded versions include loading videos of your site.
Source: Just used it for my company's site.

Performance Counters - Tool for monitoring in Windows Server 2008

I am able to get performance counters every two seconds on a Windows Server 2008 machine using a PowerShell script. But when I go to Task Manager and check the CPU usage, powershell.exe is taking 50% of the CPU. So I am trying to get those performance counters using other third-party tools. I have searched and found this and this. Both of those need to be refreshed manually and do not collect automatically every two seconds. Can anyone please suggest a tool which collects the performance counters every two seconds, analyzes their maximum and average, and stores the results in text/xls or any other format? Please help me.
I found some performance tools here, listed below:
Apache JMeter
NeoLoad
LoadRunner
LoadUI
WebLOAD
WAPT
Loadster
LoadImpact
Rational Performance Tester
Testing Anywhere
OpenSTA
QEngine (ManageEngine)
Loadstorm
CloudTest
Httperf.
There are a number of tools that do this -- Google for "server monitor". Off the top of my head:
PA Server Monitor
Tembria FrameFlow
ManageEngine
SolarWinds Orion
GFI Max Nagios
SiteScope. This tool leverages either the perfmon API or the SNMP interface to collect the stats without having to run an additional non-native app on the box. If you go the open source route then you might consider Hyperic. Hyperic does require an agent to be on the box.
In either case, I would look at your sample interval as part of the culprit for the high CPU, not PowerShell itself. The higher your sample rate, the higher you will drive the CPU, independent of the tool. You can see this yourself just by running Perfmon: use the same set of stats and watch what happens to the CPU as you adjust the sample rate from once every 30 seconds, to once every 20, then 10, 5 and finally 2 seconds. When engaged in performance testing we rarely go below ten seconds on a host, as this will cause the sampling tool itself to distort the performance of the host. If we have a particularly long-term test, say 24 hours, then adjusting the interval to once every 30 seconds is enough to spot long-term trends in resource utilization.
If you are looking to collect information over a long period of time, 12 hours or more, consider going to a longer interval. If you are going for a short period of sampling, an hour for instance, you may want to run a couple of one-hour periods at lesser and greater levels of sampling (2 seconds vs. 10 seconds) to make sure that the shorter sample interval is generating additional value for the additional overhead on the system.
To repeat, tools just to collect OS stats:
Commercial: SiteScope (Agentless). Leverages native interfaces
Open Source: Hyperic (Agent)
Native: Perfmon. Can dump data to a file for further analysis
This should be possible without third party tools. You should be able to collect the data using Windows Performance Monitor (see Creating Data Collector Sets) and then translate that data to a custom format using Tracerpt.
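If a lightweight scripted route is acceptable, here is a rough sketch that leans on the built-in typeperf command rather than a PowerShell loop; the counter name is just an example, and the sample count and interval are placeholders. It collects a sample every two seconds into a CSV (which opens directly in Excel) and then reports the average and maximum:

```python
import csv
import subprocess

COUNTER = r"\Processor(_Total)\% Processor Time"  # example counter
OUTPUT = "counters.csv"
INTERVAL_S = 2        # sample interval in seconds
SAMPLES = 900         # 900 samples * 2 s = 30 minutes

# typeperf writes one CSV row per sample: -si = interval, -sc = sample count,
# -f = output format, -o = output file, -y = overwrite without prompting.
subprocess.run(
    ["typeperf", COUNTER, "-si", str(INTERVAL_S), "-sc", str(SAMPLES),
     "-f", "CSV", "-o", OUTPUT, "-y"],
    check=True,
)

values = []
with open(OUTPUT, newline="") as f:
    reader = csv.reader(f)
    next(reader)                          # skip the header row
    for row in reader:
        try:
            values.append(float(row[1]))  # column 0 is the timestamp
        except (IndexError, ValueError):
            continue                      # skip blank or malformed rows

print(f"samples: {len(values)}")
print(f"average: {sum(values) / len(values):.2f}")
print(f"maximum: {max(values):.2f}")
```

typeperf is a thin native collector, so its overhead is typically lower than looping Get-Counter in PowerShell, though, as noted above, any two-second interval costs something regardless of the tool.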
If you are still looking for other tools, I have compiled a list of windows server performance monitoring tools that also includes third party solutions.

What are considered good average times when serving a page from a website?

At jameslist.com we see the following breakdown of the time it takes from request to completed page view:
Server processing a request (PHP, memcached, DB, Sphinx + internal network latency): 150ms
Time spent in network: 650ms
Time spent in DOM: 1200ms
Time spent render page: 1650ms
That is in total about 3.7 seconds from request to fully loaded webpage. On average, is this good, OK, or perhaps bad?
When it comes to a breakdown of the above points, what could be expected of sites with similar content?
I would suggest Google's search times are good for simple pages. I just did a search which took 130 ms, and that sounds fine.
The more complex the page, the longer the time which is acceptable. e.g. a site which gets you insurance quotes from dozens of suppliers could reasonably take 10 seconds.
The rest sounds pretty lengthy to me but I know more about high frequency trading where 1 ms is pretty poor. ;)
Time spent in network: 650ms
That's a hell of a network, you could send a request around the world in this time.
Time spent in DOM: 1200ms
Time spent render page: 1650ms
I would be wondering why this is significantly higher than the "real" work which is about 150 ms.
A request from London to New York and back should be about 100 ms. My guess is 150 ms (request) + 150 ms (parsing and rendering) + 100 ms (internet) is good.
3.7 seconds end to end is pretty decent - on the fast side of average, I'd say.
I'm assuming your network time is the total time - it's not terrible, and mostly determined by file size and bandwidth. I've had a quick look at your site, and nothing out of the ordinary seems to be going on.
DOM and render time are a little high. Not freakishly so, but there may be some low-hanging fruit.
The target time for a page to be generated and sent to the visitor is 150-300ms. All the important page content should be loaded within one second of the initial request.
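To sanity-check the server and network portions of a breakdown like the one above against that 150-300 ms target, here is a small sketch using pycurl (an assumed dependency; the URL is a placeholder) that splits a single request into connect, wait-for-first-byte, and download phases. DOM and render times would still need a browser-based tool:

```python
from io import BytesIO

import pycurl

URL = "https://example.com/"   # placeholder URL

buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, URL)
c.setopt(pycurl.WRITEDATA, buf)
c.perform()

connect = c.getinfo(pycurl.CONNECT_TIME)       # TCP connection established
ttfb = c.getinfo(pycurl.STARTTRANSFER_TIME)    # time to first byte
total = c.getinfo(pycurl.TOTAL_TIME)           # full response received
c.close()

print(f"connect:           {connect * 1000:.0f} ms")
print(f"server + network:  {(ttfb - connect) * 1000:.0f} ms")  # rough 'processing' figure
print(f"content download:  {(total - ttfb) * 1000:.0f} ms")
```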

Need help understanding Firebug's Net tab statistics

Even though my question is very similar to this one, it's not a duplicate.
The image shows the stats from Firebug's Net tab. Each request takes a fraction of a second (all requests add up to 2.9 seconds), yet the total time adds up to 6 seconds.
How do I figure out which request took the longest, and where did the extra 3 seconds come from?
Requests are not necessarily made in parallel. Most browsers only pull 2 concurrent resources per host, so if all six of your resources are on the same host, they could simply be blocking each other. Furthermore, some of these resources may be JavaScript or other resources that have to be parsed on load, which adds time between and after the downloads.
Also note that the total time is measured up to when the page load event fires, so this doesn't necessarily mean there is a white screen for six seconds.
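To make that gap concrete, here is a toy simulation; the download and parse figures are made up rather than read from the screenshot. Each Firebug-style bar only covers the download, while parse/execute time and blocking fill the gaps in between, so the load event fires well after the per-request times would suggest:

```python
import time

# Made-up numbers purely to illustrate the gap between the sum of request
# times and the time until the load event; not taken from the screenshot.
RESOURCES = [
    ("page.html", 0.4, 0.1),   # (name, download seconds, parse/execute seconds)
    ("app.js",    0.5, 0.6),   # scripts block while they are parsed and run
    ("style.css", 0.3, 0.2),
    ("logo.png",  0.6, 0.0),
    ("font.woff", 0.5, 0.0),
    ("ads.js",    0.6, 0.7),
]

start = time.monotonic()
network_total = 0.0
for name, download, parse in RESOURCES:   # worst case: everything serialized
    time.sleep(download)                  # the only part each Firebug bar shows
    network_total += download
    time.sleep(parse)                     # parse/execute gap before the next request

wall = time.monotonic() - start
print(f"sum of request times: {network_total:.1f} s")  # ~2.9 s
print(f"time to load event:   {wall:.1f} s")           # noticeably longer
```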
Check out the YSlow guidelines for more detailed tips on performance. I also recommend Building Faster Websites if you're really interested in this subject.
