With Google Chrome Dev, I am running a Lighthouse analysis for mobile.
Lighthouse reports 7.0 seconds for Largest Contentful Paint (LCP):
I decide to dig into this and click "View original trace".
It redirects me to the Performance tab:
Here it says that the LCP is 749.7ms (= 0.7497 seconds).
Where does this discrepancy between Lighthouse and the Performance tab come from?
0.7497 seconds in the Performance tab
7.0 seconds in Lighthouse
Why is Lighthouse showing much longer load times?
The answer is a combination of simulated network throttling and CPU throttling.
Simulated Network Throttling
When you run an audit, Lighthouse applies 150 ms of latency to each request, limits download speed to 1.6 megabits per second (about 200 kilobytes per second), and limits upload speed to 750 kilobits per second (about 94 kilobytes per second).
This throttling is simulated rather than applied: Lighthouse records a normal trace and then uses an algorithm to estimate how long things would have taken on the slower connection.
CPU Throttling
Lighthouse applies a 4x slowdown to your CPU to simulate mid-tier mobile phone performance.
If your JavaScript payload is heavy, this can block the main thread and delay rendering. Likewise, if you dynamically insert elements using JavaScript, LCP can be pushed back for the same reason.
Like the network throttling, this slowdown is simulated via an algorithm rather than applied during the run.
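For reference, here is a minimal sketch of what those defaults look like when you drive Lighthouse from its Node API instead of the DevTools panel. The chrome-launcher setup and the exact option values are assumptions based on Lighthouse's documented mobile defaults, so treat this as a starting point rather than the canonical invocation:

// Sketch: run Lighthouse programmatically with the simulated throttling made explicit.
// Assumes the "lighthouse" and "chrome-launcher" packages are installed.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function auditWithSimulatedThrottling(url: string): Promise<void> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse(url, {
    port: chrome.port,
    onlyCategories: ['performance'],
    formFactor: 'mobile',
    throttlingMethod: 'simulate',   // estimate timings with an algorithm; the run itself is not slowed down
    throttling: {
      rttMs: 150,                   // 150 ms of simulated latency per round trip
      throughputKbps: 1.6 * 1024,   // ~1.6 Mbit/s simulated download bandwidth
      cpuSlowdownMultiplier: 4,     // 4x simulated CPU slowdown
    },
  });
  console.log('Simulated LCP (ms):', result?.lhr.audits['largest-contentful-paint'].numericValue);
  await chrome.kill();
}

auditWithSimulatedThrottling('https://example.com');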
So why doesn't it match the performance trace?
Because the trace is "as it happened" and doesn't take into account the simulated network and CPU slowdown.
Can I make the performance trace match Lighthouse?
Yes - all you need to do is uncheck "Simulated throttling" under the settings section (you may need to press the cog in the top right of the Lighthouse tab to show this checkbox).
Be aware that you will probably get an even lower score as simulated throttling can be a bit more forgiving.
Also note that your report will take a lot longer to run (which is good for seeing how someone on a slow phone with a slow 4G connection might experience your site!)
Now when you run Lighthouse it will use applied throttling, adding the latency and CPU slowdown in real time. If you view your trace now you will see it matches.
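The equivalent switch when running Lighthouse from Node is throttlingMethod: 'devtools'. The throttling values below are my reading of Lighthouse's applied-throttling defaults (the RTT and throughput get adjustment factors applied), so double-check them against your version; everything else reuses the setup from the earlier sketch:

// Sketch: the same run as before, but with applied ("devtools") throttling,
// where the latency and CPU slowdown happen in real time during the load.
const appliedThrottlingFlags = {
  throttlingMethod: 'devtools' as const,
  throttling: {
    requestLatencyMs: 150 * 3.75,              // applied per-request latency
    downloadThroughputKbps: 1.6 * 1024 * 0.9,  // applied download limit
    uploadThroughputKbps: 750 * 0.9,           // applied upload limit
    cpuSlowdownMultiplier: 4,                  // applied 4x CPU slowdown
  },
};

Spreading these flags into the lighthouse() call from the previous sketch should produce a run whose trace lines up with what the Performance tab records.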
Where can I see what settings were used on a run?
At the bottom of your report you can see which settings were applied. In the screenshot below, "(Devtools)" is listed in the Network Throttling and CPU Throttling sections to show that I used applied throttling.
Related
I have built my game in server mode on macOS and attached the profiler to it. In the profiler I can see an unreasonably high CPU load.
Other scripts take a lot of CPU time. How can this be optimized?
Apart from that, VSync takes a lot of time. How can that be, given that I have disabled VSync, built as a server, run with -nographics, and even removed the cameras and UI?
I don't know what your game is doing in terms of calculations from custom scripts, but if it's not just an empty project, then I don't see why 33 ms is unreasonable. Maybe check your server's specs? Also, VSync just idles for the needed amount of time to reach the fixed target FPS, meaning it's not actually under load, even though the profiler shows it as a big lump of color. Think of it more as headroom: how much processing you can still do per frame while keeping your target FPS.
url: http://bookpanda.pk/
The website takes too long to load the landing page. Even though the page is only 1.97 MB, it takes around 8-9 seconds to load.
Can you please guide me on where the problem lies and what I can do about it?
For such a general, high-level question, I can only provide an equivalently high-level, situationally relevant answer:
Use browser dev tools to diagnose at a high level. Check the load time of each network call as well as its timing breakdown. Modern browsers also have performance profiling tools built in that help you find out where the page is spending time or waiting on something else that is slow.
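As a hedged starting point, this small sketch (plain enough to paste into the DevTools console as-is) lists the slowest network requests recorded so far via the Resource Timing API:

// Sketch: list the ten slowest network requests using the Resource Timing API.
const slowestRequests = performance
  .getEntriesByType('resource')
  .map((entry) => ({ name: entry.name, durationMs: Math.round(entry.duration) }))
  .sort((a, b) => b.durationMs - a.durationMs)
  .slice(0, 10);
console.table(slowestRequests);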
For example, here's a JS Flame Chart of the first 10 seconds of page load:
We're using Riemann and Riemann-health to monitor our servers. However, I now get quite a lot of critical CPU warnings because the CPU peaked for a very short time. This is nothing I even need to know about, I think. From my understanding, constantly high CPU usage will increase the load average, which is reported as well and sounds much more useful.
I don't want to disable reporting the CPU; I just want every level to be considered ok. If possible, I'd like to change the events on the Riemann server so I don't have to change all the servers.
Here is our Riemann config: https://gist.github.com/iGEL/e352764a8c559440c851
I don't have a full solution, but in theory you should be able to filter your CPU-related events via a where function and set the state unconditionally to "ok" using with, as follows:
(streams
  ;; match events whose service name contains "cpu"
  (where (service #"cpu")
    ;; overwrite the state with "ok" and index the event as usual
    (with :state "ok" index)))
On the other hand, relying on the load average is not a good idea since a high load average can also mean that a large number of processes are waiting for IO.
Instead of silencing CPU alerts, you could alert only if CPU is not in state ok for more than X time units.
Even better, alert on a higher-level metric representing a client-impacting issue, such as response latency, http status codes, error levels etc.
After all, if CPU is high, but there's no impact on the system, an alert is likely just noise.
There are many tools online to measure the speed of a web page.
They provide data such as the loading time of a page.
This loading time depends on the number of files downloaded at the same time and on the connection speed (and on many other things, such as the network state, the content providers, and so on).
However, because it is based on the speed of the connection, it does not give us a theoretical loading time.
A browser downloads many resources at the same time, within a certain limit (around 5 resources at a time), so it is optimized to load the resources faster.
If we could set the connection speed to a fixed amount, the loading time of a page would "never" change.
So does anyone know a tool which computes this theoretical loading time of a web page?
I'd like to get this kind of result:
Theoretical loading time: 56 * t
where t is the amount of time needed to download 1 kB of data.
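To make the idea concrete, here is a rough sketch of the calculation I have in mind, using the Resource Timing API: sum the transferred bytes, convert to kilobytes, and multiply by a fixed per-kilobyte time t. The constant name, the value of t and the use of transferSize are placeholders of my own, and it deliberately ignores parallel downloads, latency and caching:

// Sketch: naive "theoretical loading time" = total kilobytes transferred * t.
const T_SECONDS_PER_KB = 0.01;   // hypothetical t: seconds needed to download 1 kB

const timingEntries = [
  ...performance.getEntriesByType('navigation'),
  ...performance.getEntriesByType('resource'),
] as PerformanceResourceTiming[];
const totalKb = timingEntries.reduce((sum, entry) => sum + entry.transferSize, 0) / 1024;

console.log(`Theoretical loading time: ${totalKb.toFixed(0)} * t = ${(totalKb * T_SECONDS_PER_KB).toFixed(2)} s`);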
What do you mean by "theoretical loading time"? Such a formula would have a huge number of variables (bandwidth, round-trip time, processor speed, antenna wake-up time, load on the server, packet loss, whether TCP connections are already open, ...). Which ones would you put into your theoretical calculation?
And what is the problem you are trying to solve? If you just want a more objective measure of site speed, you can use a tester like http://www.webpagetest.org/ which allows you to choose the network speed and then run many tests to find the distribution of load times.
Note also that there is not even agreement on when a page is finished loading! Most people measure the time until the onload handler is called, but that can easily be gamed by reaching onload prematurely and then doing the actual loading of resources with JS afterwards. Waiting until all resources are loaded can also be a bad measure, because many modern pages continually update themselves and load new resources.
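To see that ambiguity on a concrete page, a quick hedged sketch: compare when onload finished with when the last resource recorded so far finished. On pages that keep loading after onload, the two numbers can be far apart (the variable names and rounding are my own):

// Sketch: "finished at onload" versus "finished when the last resource arrived".
const [navEntry] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
const resourceEntries = performance.getEntriesByType('resource') as PerformanceResourceTiming[];
const lastResourceEnd = resourceEntries.reduce((latest, entry) => Math.max(latest, entry.responseEnd), 0);

console.log('onload finished at (ms):', Math.round(navEntry?.loadEventEnd ?? 0));
console.log('last resource finished at (ms):', Math.round(lastResourceEnd));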
You can use any of these three tools: SpeedPage, DevTools, and WebPageTest. Read more in the "Test your website's loading time / mobile site loading speed" blog post.
I was reading various articles about Responsive Web Design, and I came across how important website performance is for a good user experience. However, I cannot understand what exactly is meant by website performance, apart from, of course, a fast loading time.
With responsive design, being responsible means only loading resources that a particular device needs. You wouldn't necessarily send very large images to a small mobile device, for example, nor would you load heavy JavaScript for apps that don't apply on a particular device.
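To make that concrete, here is a hedged TypeScript sketch that defers a heavy script until the viewport is desktop-sized and the user has not asked for reduced data usage. The './heavy-charts' module, its initCharts export and the breakpoint are made up for the example, and navigator.connection is not available in every browser:

// Sketch: only pull heavy JavaScript onto devices that can reasonably use it.
const saveData = (navigator as any).connection?.saveData === true;   // Save-Data hint, where supported
const isLargeViewport = window.matchMedia('(min-width: 1024px)').matches;

if (isLargeViewport && !saveData) {
  // A dynamic import keeps this code out of the initial payload sent to small devices.
  import('./heavy-charts').then((mod) => mod.initCharts());
}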
What Causes Poor Performance?
Most poor performance is our fault: the average page in 2012 weighs over a megabyte. Much of this weight comes from blocking assets like JavaScript and CSS that prevent the page from being displayed.
The average size of images on a Web page is 788KB. That’s a lot to send down to mobile devices.
JavaScript, on average, is 211 KB per transfer. This comes from the libraries and code we choose to include from third-party networks. This cost is always transferred to our users. We need to stop building things for developer convenience and instead build them for user experience.
86% of responsive designs send the same assets to all devices.
http://www.lukew.com/ff/entry.asp?1684=
Website performance from a user's point of view generally means loading time + displaying time + fast response to user actions, but in fact it is much more complex. From a designer's point of view, you need to worry about a limited part of the problem: just try to make your designs less resource-consuming (data size, number of requests, CPU, memory, user actions).
There's a lot of knowledge out there - this article might be interesting for you:
https://developers.google.com/speed/docs/best-practices/rules_intro
Ilya Grigorik's talk "Breaking the 1000ms Time to Glass Mobile Barrier" explains that latency on mobile is a big issue. Every additional request has a dramatic impact compared to non-mobile.
On top of the loading time, I assume that the different CPU speed on mobile must be considered. If the page feels sluggish due to heavy JavaScript use, the mobile user will not use your page. Also, a few pages I visited did not work on my Android tablet; since the market share of mobile is increasing, every page that does not take this into account will lose visitors.