I am trying to quantify "site slowness". In the olden days you just made sure that your HTML was lightweight, images were optimized, and servers were not overloaded. In high-end sites built on top of modern content management systems there are a lot more variables: third-party advertising, trackers and various other callouts, the performance of the CDN (interestingly enough, sometimes content delivery networks make things worse), JavaScript execution, CSS overload, as well as all kinds of server-side issues like long queries.
The obvious answer is for every developer to clear the cache and continuously look at the "net" section of the Firebug plugin. What other ways to measure "site dragging ass" have you used?
YSlow is a tool (browser extension) that should help you.
YSlow analyzes web pages and why they're slow based on Yahoo!'s rules for high performance web sites.
Firebug, the must-have Firefox extension for web developers, can measure the loading time of different elements on your webpage. At the very least you can rule out CSS, JavaScript, and other elements taking too much time to load.
If you do need to shrink JavaScript and CSS loading times, there are various JavaScript and CSS compressors out there on the web that simply strip unnecessary text such as newline characters and comments. Of course, keep an uncompressed version on the side for development's sake.
If you use PNGs, I recently came across a PNG optimizer that can shrink PNG sizes called OptiPNG.
"Page Load time" is really not easy to define in general.
It depends on the browser you use, because different browsers may do more requests in parallel, because JavaScript runs at different speeds in different browsers, and because rendering time is different.
Therefore you can only really measure your true page load time using the browser you are interested in.
The end of the page load can also be difficult to define, because there might be an Ajax request after everything is visible on the page. Does that count towards the page load or not?
And last but not least, the real page load time might not matter that much, because "perceived performance" is what matters. For the user, what matters is when he or she has enough information to proceed.
I'm not aware of any way (at least none that I could tell you about :]) to automatically measure your page's perceived load time.
Use AOL Pagetest for IE and YSlow for Firefox (link see above) to get a "feeling" for your load time.
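These days (well after this answer was written), one way to get those numbers from the specific browser you care about is the Navigation Timing API; a minimal sketch, assuming a browser that supports window.performance.timing:

// Minimal sketch: log basic load milestones via the Navigation Timing API.
// Assumes a browser that implements window.performance.timing.
window.addEventListener('load', function () {
  // Wait one tick so loadEventEnd is populated.
  setTimeout(function () {
    var t = window.performance.timing;
    console.log('Time to first byte (ms):', t.responseStart - t.navigationStart);
    console.log('DOMContentLoaded (ms):', t.domContentLoadedEventEnd - t.navigationStart);
    console.log('Full load event (ms):', t.loadEventEnd - t.navigationStart);
  }, 0);
});

Note that this still does not capture "perceived" readiness; it only gives you consistent milestones to compare across browsers.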
Get yourself a proper debugging proxy installed (I thoroughly recommend Charles)
Not only will you be able to see a full breakdown of response times / sizes, you can save the data for later analysis / comparison, as well as fiddle with the requests / responses etc.
(Edit: Charles' support for debugging SOAP requests is worth the pittance of its shareware fee - it's saved me a good half a day of hair-loss this week alone!)
I routinely use webpagetest.org, which you can use to perform performance tests from different locations, on different browsers (although only MSIE 7-9), with different settings (number of iterations, connection speed, first run vs. 2nd visit, excluding specific requests if you want, credentials if needed, ...).
The result is a very detailed report of page loading time, which also provides advice on how to optimize.
It really is a great (free) tool!
Last time I worked on a high-volume website, we did several things, including:
We used YSlow to get an analysis of the individual factors affecting page load: https://addons.mozilla.org/en-US/firefox/addon/5369
We monitored performance using an external, commercial tool called Gomez - http://www.gomez.com/instant-test-pro/
We stress-tested using a continuous integration build, using Apache JMeter. http://jmeter.apache.org/
If you want a quick look, say a first approximation, I'd go with YSlow and see what the major factors affecting page load time in your app are.
Well, call me old-fashioned, but...
time curl -L http://www.example.com/path
in Linux :) Other than that, I'm a big fan of YSlow, as previously mentioned.
PageSpeed is an online checking tool by Google, which is very accurate and reliable:
https://developers.google.com/pagespeed/
If it's ASP.NET you can use Trace.axd.
Yahoo provides YSlow, which can be great for checking JavaScript.
YSlow as mentioned above.
And combine this with Fiddler. It is good if you want to see which page objects are taking the most bandwidth, which are being compressed at the server, unexpected round-trips, and what is being cached. And it can give you a general idea about processing time in the client web browser as compared to the time taken between server and client.
Apache Benchmark. Use
ab -c <number of CPUs on server> -n 1000 url
to get a good approximation of how fast your page is.
In Safari, the Network Timeline (available under the Develop menu, which you have to specifically enable) gives useful information about loading time of individual page components, as well as showing when each component started loading.
YSlow is good, and HttpWatch for IE is great as well. However, both miss the most important metric to a user: "When is the page, above the fold, ready for the user to use?". I don't think that one has been solved yet...
There are obviously several ways to identify the response time, but the challenge has always been how to measure the rendering time that is spent in browser.
We have a controlled test phase in which we use several automated tools for testing the application. One of the outputs we generate from this test is a Fiddler trace for each transaction (a click). We can then analyse the Fiddler trace to find the time to last byte and subtract it from the overall time the page took.
Something like this:
1. A = total response time as measured by an automated tool (in our case we use QTPro)
2. B = time to last byte (server + network time, from the Fiddler trace)
3. C = A - B (approximate rendering time, or the time spent in the browser)
All of the above can be made into a standard test process, and at the end of the test we can generate a breakdown of the time spent at each layer, e.g. rendering time, network time, database calls, etc.
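Purely as an illustration of the same A / B / C split, here is a rough in-browser equivalent using the Navigation Timing API (an assumption on my part, not part of the QTPro/Fiddler setup described above; B here only covers the main document):

// Sketch of the A / B / C split using window.performance.timing.
// B is approximated by responseEnd of the main document (it ignores
// sub-resources), so treat the numbers as rough.
window.addEventListener('load', function () {
  setTimeout(function () {
    var t = window.performance.timing;
    var A = t.loadEventEnd - t.navigationStart; // total response time
    var B = t.responseEnd - t.navigationStart;  // server + network time (time to last byte)
    var C = A - B;                              // approx. rendering time spent in the browser
    console.log({ totalMs: A, serverAndNetworkMs: B, renderMs: C });
  }, 0);
});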
We have started to use Lighthouse to track the improvements we make to our sites. While this seems to work quite well for desktop sites, i.e. we see the values improve over time as we make changes, for mobile sites the values remain consistently low. We do repeat the tests and use the best of three, but still.
Below, we have the results of the New York Times mobile site, which appears to perform badly vis-à-vis the desktop site. The other two results are for sites of ours: our main site and another of our own.
Browsing our sites (as well as the NYT, of course), this apparent bad performance cannot be felt at all.
The test procedure:
run the same test three times for each site
mobile
no PWA
incognito mode
Now, while we were initially enthusiastic about Lighthouse's ability to evaluate a site by producing aggregated figures that are easy for management to digest, we have the impression that they are not actually useful, as they don't correspond to the users' reality and don't change even though we make changes.
Also, this being a Single Page Application, the first load of the page may take some more time, but any further navigation is quasi-instantaneous. We could not find a Lighthouse feature to take this into account.
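One workaround we are considering (just a sketch using the standard User Timing API, nothing Lighthouse-specific; the route-change hooks below are hypothetical names for whatever our router exposes) is to measure in-app navigations ourselves:

// Sketch: time SPA route changes with performance.mark/measure.
// onRouteChangeStart / onRouteChangeDone are hypothetical router hooks.
function onRouteChangeStart(routeName) {
  performance.mark('route-start-' + routeName);
}

function onRouteChangeDone(routeName) {
  performance.mark('route-end-' + routeName);
  performance.measure('route-' + routeName,
                      'route-start-' + routeName,
                      'route-end-' + routeName);
  var entries = performance.getEntriesByName('route-' + routeName);
  console.log(routeName + ': ' +
              entries[entries.length - 1].duration.toFixed(1) + ' ms');
}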
No, you can't really rely on Lighthouse. As you've observed, well-known fast websites perform badly in its tests. While there are some reasons for this, it won't help you measure the actual loading speed. Caching is an important factor. Lazy-loading is sometimes mistaken for content that has not yet loaded, so even if your website is loaded, Lighthouse might detect missing pieces and deem it not yet loaded.
Pingdom is great for that, and provides you with the option to test from different regions, which I believe to be more realistic than a one-server-fits-all approach.
Also the Legacy version of GTmetrix is great because it points you directly to what improvements you can make but it tests only from Canada (unless you buy the PRO version). It takes caching into account.
I have been using PSI (PageSpeed Insights) for mobile on our sites and it has worked well. At least the mobile score was always better than the lab data, and my motivation was that the report was consistent on some external sites like https://covid19.ca.gov/.
As for the tool, it works well for the initial load but it does not account for single-page apps: CLS is a continuous evaluation, so as the user scrolls through the page, CLS changes, and that is not simulated in the tool. That is where field data differs.
I am looking for some advice on how to track application performance; the application is developed using ReactJS, and I am building it with webpack.
First of all I will just present what I have done and what the application is expected to do:
I need to render a lot of, let's just call them widgets, that update in real time, presenting a lot of data. To give a sense of scale, I would say each widget renders about 50 to 80 values. These updates might be received from the server side all at once, so they should happen instantly when the data is received. Consider that I might have around 25 to 30 widgets that need to update in real time.
Let me tell you a little bit about the implementation:
I have used the smart/dumb pattern for ReactJS components
The actual data is stored in application state and is distributed by the smart components to dumb components through props
I am using Pure Render Mixin to avoid unnecessary rendering
I am also using immutable data to make sure Pure Render Mixin works as intended, that is, being accurate in determining whether a render is necessary while at the same time being fast, really fast.
There are no odd callback bindings that might trigger re-rendering of components; this has been double-checked already.
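For reference, a minimal sketch of what the dumb-component side of this setup looks like (assuming react-addons-pure-render-mixin and Immutable.js; the component and prop names are made up, and JSX is assumed to be compiled by our webpack build):

// Sketch of a "dumb" widget cell: pure render + immutable props.
// Because `values` is an Immutable.List, PureRenderMixin's shallow
// reference comparison is both cheap and accurate.
var React = require('react');
var PureRenderMixin = require('react-addons-pure-render-mixin');

var WidgetValues = React.createClass({
  mixins: [PureRenderMixin],

  render: function () {
    // Re-renders only when the Immutable.List reference actually changes.
    var items = this.props.values.map(function (v, i) {
      return <li key={i}>{v}</li>;
    }).toArray();
    return <ul className="widget-values">{items}</ul>;
  }
});

module.exports = WidgetValues;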
Now the issues I am having:
with about 5-6 widgets, meaning around 400-500 values that need to render each second, it works very well in Chrome and decently in Firefox.
adding about 25-30 widgets gets the application to still work decently in Chrome, but it starts to get slow in Firefox; by slow I mean user interactions might even see a delay of around 1 second. That is really unacceptable.
What I have tried:
I used the Chrome dev tools to measure the performance; that didn't help too much. What I could see, though, is that everything looks alright, and there is no way I could read all the graphs this tool provides (and I've read a lot of articles about it).
I tried to use Firebug in Firefox. That's an amazing tool, but not in this case; just opening it with the above-mentioned load (30 widgets) gets Firefox to freeze... and the profiler gave me nothing.
As a last resort, I used the default dev tools in Firefox, which have a performance tab. That got me some information about which parts of the application put the most load on the browser. It seemed to be some heavy computation determining the min/max of an Immutable.List.
Unfortunately the application still has performance issues, and it is of high importance to get it working perfectly, and the Firefox profiler doesn't give me any other leads.
So my questions would be:
what would be the next action to take in order to pin down the performance issues? (locating them as precisely as possible: class/method, or at least file)
did you guys use any performance testing tool that gives you insight into what the hell is happening?
is there something else to consider to improve the overall functionality, especially targeting multiple browsers? (Firefox, Chrome, IE11)
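For the second question, one instrument I am considering (a sketch based on the react-addons-perf add-on, which as far as I know only collects data in development builds; I have not verified it against our exact React version):

// Sketch: measure wasted renders during a burst of widget updates
// with react-addons-perf (development builds only).
var Perf = require('react-addons-perf');

function profileBurst(durationMs) {
  Perf.start();
  setTimeout(function () {
    Perf.stop();
    var measurements = Perf.getLastMeasurements();
    Perf.printInclusive(measurements); // total time per component
    Perf.printWasted(measurements);    // renders that produced no DOM change
  }, durationMs);
}

// e.g. expose it and call profileBurst(5000) from the console while updates stream in
window.profileBurst = profileBurst;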
How would I go about determining what the hangups are in my JavaScript app when the profiler puts (program) at the top with 80%? Is my logic too complex for the hotspot tracking to occur? Is my memory footprint too big? What is generally the cause of this?
More Information:
There are no elements on the form save the one canvas tag
There are no waiting communications (xhr)
http://i.imgur.com/j6mu1.png
Idle cycles ("doing nothing") will also render as "(program)" (you may profile this SO page for a few seconds and get 100% of (program)), so this is not a sign of something bad in itself.
The other thing is when you actually see your application lagging. Then (program) will be contributed by the V8 bindings code (and the WebCore code they invoke, which is essentially anything: DOM/CSS operations, painting, memory allocations and GCs, what not.) If that is the case, you can record a Timeline of your application (switch to the Timeline panel in Developer Tools and press the Record button in the bottom status bar, then run your application for a while.) You will see many internal events with their timings as horizontal bars. You will see reflows, style recalculations, timers fired, GC events, and more (btw, the latest Chromium versions have an improved memory profiler utilization timeline, so you will be able to monitor the memory used by certain internal factors, too.)
To diagnose memory problems (multiple allocations entailing multiple full GC cycles) you may use the Profiles panel. Take a heap snapshot before the intensive part of your code starts, and another one after this code has run for some time. Then compare the two heap snapshots (the right-hand SELECT at the bottom) to see which allocations have taken place, along with their memory impact.
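As a rough complement to heap snapshots (my own addition, not part of the workflow above): performance.memory is non-standard and Chrome-only, but polling it from the page can make allocation churn visible, e.g.:

// Sketch: poll Chrome's non-standard performance.memory to watch heap usage.
// A steadily rising usedJSHeapSize between GCs suggests heavy allocation churn.
if (window.performance && performance.memory) {
  setInterval(function () {
    var m = performance.memory;
    console.log('JS heap: ' +
                (m.usedJSHeapSize / 1048576).toFixed(1) + ' MB used of ' +
                (m.totalJSHeapSize / 1048576).toFixed(1) + ' MB');
  }, 1000);
}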
To check if it's getting slow due to a memory issue, use: chrome://memory
Also you can check chrome://profiler/ for possible hints of what is happening.
Another option is to post your javascript code here.
See this link: it will help you understand the Firebug profiler output.
I would say you should check which methods take the largest percentage and minimize unwanted procedures in them. I saw in your figure that some draw method running in the background is consuming around 14%; maybe that is why your JS is loading slowly. You should determine what's taking the time. Both Firefox and Chrome have a feature that shows the network traffic. Have a look at YSlow as well; it is a great add-on to Firebug.
I would suggest Chrome's auditing tools, which can tell you a lot about why this is happening. You should probably include more information about:
how long did it take to connect to server?
how long did it take to transfer content?
how much other stuff are you loading on that page simultaneously?
Anyway, even without all that, here's a checklist to improve performance for you:
make sure your JavaScript is treated and served as static content, e.g. via nginx/apache/whatever directly or a CDN, not hitting your application framework
investigate if you can make use of CDN for serving javascript, sometimes even pointing different domain names to your server makes a positive impact, e.g. instead of http://example.com/blah.js -> http://cdn2.example.com/blah.js
make sure your js is served with proper expiration headers, don't re-download it every time client refreshes a page
turn on gzipping of JS content
minify your JS using the different tools available (e.g. the Google Closure Compiler)
combine your scripts (reduces the number of requests)
put your script tags just before the closing </body> tag
investigate and clean up/optimize your onload and document.ready hooks
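For the last two points, a minimal sketch of loading a non-critical script only after the page has finished loading (the file name is just a placeholder, and old-IE attachEvent fallbacks are ignored):

// Sketch: inject a non-critical script after window.onload so it never
// blocks the initial render. "analytics.js" is only a placeholder name.
window.addEventListener('load', function () {
  var s = document.createElement('script');
  s.src = '/static/analytics.js';
  s.async = true;
  document.body.appendChild(s);
});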
Have a look at the YSlow plugin and Google PageSpeed, both very useful in improving performance.
YSlow, dynaTrace, HTTPWatch, Fiddler...
All these things are really good for measuring the performance of a website and getting statistics about it. YSlow is really cool and offers good guidelines too.
However, I am very confused with so many things around (though it's good that people have already invested the time and made nice guidelines to follow, and I thank them for the great work done).
Following are my questions:
How accurate are these tools in terms of the numbers they show?
Which tool is BEST to use (one for all needs)? Or am I missing the name of some tool that is out of the box better than all of the above?
I'm surprised that you haven't mentioned JMeter. It is free, quite easy to use, has lots of features, and is great for load testing your website.
As for question one, I'm not sure I can answer that. I'm sure that in general, the numbers these tools show are pretty accurate, but there are some catches. Take JMeter for example:
JMeter itself uses a lot of memory and also some substantial CPU time if you do heavy load testing. That means that if you run the tool on the same machine as your website, some resources are lost, i.e. not available for the website.
Testing on the same machine does not, out of the box, take into account that the data has to be sent over the internet connection, so response times are lower than in reality.
But in all, I think you should never blindly trust the results these tools give you, but they can give you a good insight into possible bottlenecks or problems.
YSlow is good for measuring performance for a single user. Try to keep it at grade A and it will be OK. But it doesn't actually measure performance with multiple concurrent users. For that you can use, among others, Apache JMeter. It's a good webserver/web application stress-test tool. So I would say, just use both YSlow (for client performance) and JMeter (for server performance).
I haven't used DynaTrace before, so I'll skip that part. The HTTP request trackers mentioned don't really measure performance; they are more like debuggers.
As far as I am concerned, I find YSlow to be really good (I have tried Fiddler too) and it does help me when I need it, and I do believe that it provides correct figures, which makes me want to keep using it unless something unanimously accepted (which is difficult, because everyone has different choices and requirements) or even better comes along. Oh, and they are right, I forgot JMeter, something that definitely deserves a mention.
There is also Speed Tracer extension for Chrome. It should be usable with any JavaScript heavy website.
http://code.google.com/webtoolkit/speedtracer/
http://gtmetrix.com is a good tool and it is free. It analyzes your page's speed performance using Page Speed and YSlow.
The question I have is a bit of an ethical one.
I read here that Google gives a little more influence to sites that are optimized to load quickly. Obviously this makes Google's job easier, uses fewer resources, and it is a better experience for everyone, so why not reward it?
The actual process of finding bottlenecks and improving page load speed is well understood these days. Using tools like YSlow and reducing the number of files is becoming standard practice (which is great!)
So is it fair / smart / kosher to serve Googlebot (or another search bot) custom content that will download faster (i.e. no JavaScript, images, or CSS)? Or would that flag you as a cheater and throw your site into limbo, unsearchable on Google?
Personally I'd rather not risk it, I'd actually like to improve the performance for my visitors regardless. But as it stands there isn't much info on the topic so I figured I'd throw it out there.
EDIT:
I found some new information which might factor in.
From Google's Webmaster Tools: http://www.google.com/support/webmasters/bin/answer.py?answer=158541&hl=en
Page load time is the total time from the moment the user clicks on a link to your page until the time the entire page is loaded and displayed in a browser. It is collected directly from users who have installed the Google Toolbar and have enabled the optional PageRank feature.
There is no guarantee that they would use the same algorithm for ranking pages in search results, but it might indeed show that it is the actual user experience that matters most.
(i.e. no JavaScript, images, or CSS)
Make your JS and CSS external. Google will not touch them (very often) - or you can block them via robots.txt, but that's unnecessary.
Make all repeating images sprites.
Load the big image asynchronously via JS after the onload event of the document body.
This is also good for the user, as the site renders faster and they see something sooner.
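A minimal sketch of that asynchronous image loading (the element id and paths are placeholders):

// Sketch: swap in the big image only after the document has loaded.
// The element id and the data-src convention are placeholders, e.g.:
// <img id="hero" src="/images/hero-tiny.jpg" data-src="/images/hero-large.jpg">
window.onload = function () {
  var img = document.getElementById('hero');
  if (img && img.getAttribute('data-src')) {
    img.src = img.getAttribute('data-src');
  }
};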
As long as the main content is the same for Google and the average first-time visitor, and there is no misleading intent, it's OK and a great strategy.
Don't worry too much about any possible penalties. As long as there is no misleading intent, it's mostly OK.
What is not OK is to deliver something majorly different to Google based on the user agent. (Here it's better to be safe than sorry.)
No one can say for certain exactly what Google will detect and ding your site for; they keep their algorithms secret. However, they generally frown upon anyone who serves different content to googlebot than they serve in general; and if they catch you at it, they are likely to reduce your PageRank for it.
Really, the question becomes why you would want to do this? If you make your page load faster for googlebot, you should make it load faster for your customers as well. Research has shown that just a tenth of a second longer load time can lose you customers; why would you want to get more customers in from Google just to lose them with a slow site?
I'd say this is not worth the risk at all; just improve your site and make it load faster for everyone, rather than trying to serve googlebot different pages.
Check out the Google webmaster guidelines: http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=35769
Basically it breaks down to this: you need to give Googlebot and viewers the exact same experience, except where Googlebot could not participate in the experience. One notable example would be logins: news web pages frequently skip login pages for Googlebot, because Googlebot cannot/does not sign up for accounts.
I would imagine Google would not be actively looking for pages optimized/prioritized for Googlebot, but if they ever found one they would come down on the violator like a hammer.