Sometimes pages on my website load very slowly - performance

Across all browsers/devices, I find that different pages, at random times, are very slow to load or don't load at all. The browser is stuck on 'Waiting for website.com'. I will wait 20 seconds and nothing will happen until I manually refresh the page. As I realise this is very vague, can you suggest a) the most likely issues to look for first, or b) some diagnostic tools that I could use to try and debug the issue as a starting point, so that my hosts/developers can solve it? Here are some results of recent speed tests.
One thing to add: it seems to get stuck more often on particular pages, namely the pages where users take practice tests. Each time the user clicks 'Next', their selected answer is inserted into the database. My speculation is that it's potentially an issue with the DB itself, or with the process which inserts into the database. It's when clicking 'Next' that the whole website sometimes just dies as described above.
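If it helps whoever debugs this, one way to confirm or rule out the database theory is to time the insert that fires on 'Next' and log anything slow. A rough sketch, assuming a Node/TypeScript backend and a hypothetical answers table; your actual stack and schema will differ:

// Hypothetical handler: time the INSERT triggered by 'Next' and log slow ones,
// so the 20-second hangs can be correlated with database latency.
async function saveAnswer(
  db: { query: (sql: string, params: unknown[]) => Promise<unknown> }, // placeholder DB client
  userId: number,
  questionId: number,
  answer: string
): Promise<void> {
  const started = Date.now();
  await db.query(
    "INSERT INTO answers (user_id, question_id, answer) VALUES (?, ?, ?)", // hypothetical table
    [userId, questionId, answer]
  );
  const elapsed = Date.now() - started;
  if (elapsed > 2000) {
    console.warn(`Slow answer insert: ${elapsed} ms for user ${userId}`);
  }
}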
Results from Google Speed Test
Waterfall image

A wait time of 20 seconds at random times and on random pages could possibly be due to stop-the-world garbage collection, so GC logs are probably a good starting point.
A thread sampler such as Djigger, which a colleague of mine wrote, might also help you figure out what the machine is doing during those 20 seconds.
If that doesn't help, I suggest using a profiler, or better, an APM tool to monitor what's going on in your system. Those tools give you a broader insight into the internals.
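If you want hard numbers to hand to your hosts before reaching for a profiler, a crude option is to poll the affected pages on a schedule and log any request that stalls. A rough sketch in Node/TypeScript (requires Node 18+ for the global fetch; the URL and threshold are placeholders):

// Poll a page every minute and record responses that take longer than a threshold,
// so the random 20-second hangs get timestamps you can send to the host.
const TARGET_URL = "https://website.com/practice-test"; // placeholder URL
const SLOW_MS = 5000;

async function probe(): Promise<void> {
  const started = Date.now();
  try {
    const res = await fetch(TARGET_URL);
    const elapsed = Date.now() - started;
    if (elapsed > SLOW_MS) {
      console.warn(`${new Date().toISOString()} slow response: ${elapsed} ms (status ${res.status})`);
    }
  } catch (err) {
    console.error(`${new Date().toISOString()} request failed after ${Date.now() - started} ms`, err);
  }
}

setInterval(probe, 60_000);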

You need to run a few page speed tests and look at the waterfall images.
It is very common on shared servers for the server to be too busy to get to your request. 20 seconds would indicate a serious issue with the server.
Another common reason is that the page links to a third-party resource and that resource is often unavailable or slow.
In your case the culprit is website.com and I assume that is your site.
Use something like webpagetest.org to run the tests.
In the waterfall image below
Dark Green is DNS lookup time.
Orange is the time for Browser to connect to server.
Green is the wait time for server to put image in output buffer.
Blue is the time for the server to transmit to the Browser.
The problem with the sample waterfall page is that the index page took 4 seconds to be generated or retrieved. Most likely this is a WordPress site with plugins.
I suspect yours may be 20 seconds. But due to the randomness, it is also quite possible that a stalled page resource is the cause.
If it is the index page, then you likely have a poor hosting provider and/or one of the other users of the server is hogging the CPU.
Keep running the tests until you see the problem occur.
It will be very obvious where the problem is located.
You can post the waterfall image and send me a message if you have any questions.
Waterfall from webpagetest.org
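If you want the same phase breakdown outside of webpagetest.org, the socket events in Node expose roughly the same bands. A rough sketch (the URL is a placeholder, and it assumes a fresh HTTPS connection so the lookup and TLS events actually fire):

// Break one request into DNS, connect, wait (TTFB) and download, mirroring the waterfall colours.
import * as https from "node:https";

const url = "https://website.com/"; // placeholder for the site under test
const start = Date.now();
let dnsDone = start;
let connected = start;

const req = https.get(url, (res) => {
  const firstByte = Date.now(); // headers received: end of the green "wait" band
  res.on("end", () => {
    const finished = Date.now();
    console.log(`DNS lookup: ${dnsDone - start} ms`);
    console.log(`Connect:    ${connected - dnsDone} ms`);
    console.log(`Wait:       ${firstByte - connected} ms`);
    console.log(`Download:   ${finished - firstByte} ms`); // the blue band
  });
  res.resume(); // drain the body so "end" fires
});

req.on("socket", (socket) => {
  socket.once("lookup", () => { dnsDone = Date.now(); });
  socket.once("secureConnect", () => { connected = Date.now(); });
});
req.on("error", (err) => console.error(err));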

Related

Web app initial load time

I am using a shared hosting plan at Bluehost to host a golf tournament live scoring mobile web app. I am caching everything I can on Cloudflare, and have spent quite some time on overall optimization of the initial download and rendering times. There might be more I could do, but without question my single biggest issue is the initial call to my website: www.spanishpointscup.org. Sometimes this seems to be related to DNS lookup and other times to Waiting (TTFB).
Below are two screenshots of the network calls that show variations in accessing my index.html. Sometimes this initial file load can be even longer. Very rarely do any of the other files being downloaded create a long delay, so my only focus now is the initial file load. I think that even if I had server-side rendering, I would still have this issue.
Does anyone have specific recommendations that they are confident will help me? Switch to VPS or other host? Thank you.
This is typical when you use a shared server.
The DNS has nothing to do with the issue. DNS has to do with the request, not the response. It is the browser that must resolve the domain name to an IP address.
The delay you are seeing is due to the server being busy and your page sitting in a queue waiting behind other processes. Possibly you have a CPU-grabbing neighbor on your shared server, or Bluehost has some performance issues.
You will likely notice some image files take an excessively long time to transmit. Which image is slow will appear to be random with each fresh (not in cache) page load.
UPDATE
After further review I noticed the "wait" times are excessive. Wait time is shown in green on your waterfall; notice how the transmit time (blue) is short. The wait is the time it takes the server to retrieve the page from disk and put it into the transmit buffer. 300-400 milliseconds is excessive.
Find a new service provider.

Optimising Magento Loading Speed - Can't Identify Why Initial Receiving Is So Slow

While our website is not yet complete graphically and design wise, most of the backend operations are near completion.
However, after optimising the MySQL database we are still seeing a significant initial receiving period when tested on pingdom.com:
http://tools.pingdom.com/fpt/#!/IuoBna86v/http://foscam-uk.com
According to Pingdom:
The yellow part is the time it takes to resolve the hostname and similar (before the connection is initiated to the web server), the green part is connecting to the web server, and the blue part is the time it takes to retrieve the content from the webserver.
Upon asking our managed VPS support team we got the response : 'Have you tried optimizing your script? I believe that the high wait time on there indicates actual website loading time (meaning for your script to load); not actual connection to the website/server.'
Now, Pingdom shows the JS/CSS loading relatively quickly, and the MySQL database side of things doesn't seem to be slowing anything down either. Does anyone have any suggestions as to what this could be or what might be causing it?
Thank you very much for your time and help.
89 requests are too many.
Reduce the number of image requests by creating sprites. This is pretty important from what is shown in Pingdom.
Keep-Alive should be set to On and the keep-alive time should be a bit higher (15 seconds or so).
Use of the compiler plus merging and minifying JS/CSS is recommended.
Change the hosting provider. 8-second loading is very, very slow. It means that it is actually around 15-17 seconds for a user that doesn't have cached parts of your site (a first-time visitor). My site www.bebepunk.ro loads according to Pingdom in 2.5 seconds and users still complain about the slowness of the site. Check also with http://www.webpagetest.org for both values.

Causes of high network latency

I have a site that is moving incredibly slowly right now. Both Safari's inspector and Firebug are reporting that most of the load time is due to latency. The actual download is happening in less than a second. There's a lot of database activity in play (though the metrics on that indicate that it's pretty healthy), but what else can cause really high latency? Is it a purely network thing or are there changes I can make to the app to improve the latency numbers?
I'm using YSlow to help identify performance improvements, but on the whole, I don't see it reporting anything that seems crazy unreasonable. Opportunities for improvement, certainly, but nothing that seems like it would cause the huge load times I'm seeing.
Thanks.
UPDATE
Some background and metrics, in case it's useful. This is a CakePHP application and I'm using my UsersController::login action as the benchmark. For the sake of identifying how much of a factor the application code plays in this, I've printed a stack trace immediately upon entering UsersController::beforeFilter(). Here's the output:
UsersController::beforeFilter() - APP/controllers/users_controller.php, line 13
Controller::startupProcess() - CORE/cake/libs/controller/controller.php, line 522
Dispatcher::_invoke() - CORE/cake/dispatcher.php, line 187
Dispatcher::dispatch() - CORE/cake/dispatcher.php, line 171
[main] - APP/webroot/index.php, line 83
Load times, as shown by Safari's inspector, range from 11.2 seconds to 52.2 seconds. This would seem to point me away from the application code and toward something with my host, but maybe I'm completely misinterpreting this or oversimplifying it?
If you cannot identify directly a slow moving component of your application, there are a number of other steps along the way that can certainly slow your site down. Whenever I'm experiencing unusually long polling, I typically start by looking at the local DNS and then onto my hosted DNS. Sometimes a cache refresh (on their part, not yours) can cause a lot of polling until their database has caught up.
Else, they might actually have a service outage and your requests are being made to their secondary or backup server. If everything seems fine in terms of domain resolution, your hosting provider might be experiencing a service outage that can take a number of different shapes like serving static content from their backups or over-allocating shared resources until everything is running as it should. You can experience a ton of what they call throttling on shared cloud architectures when they have a box go down. On the plus side, you don't have a total outage in this circumstance.
One time, and this was just in a shared grid configuration, I had a processor go to hell. The bizarre part of it was that static content was still serving from a backup, but it was still polling against our database (which was on a different server) and causing our account to throttle because of over allocation on the backup. Wasn't our fault, but the host started sending nasty emails about our excessive long-polls. Moral of the story is, if it's not your application, and it's out of the blue, somewhere along the line I'll bet you'll find some hardware failure or misconfiguration.
Also now that I think of it, if you are syndicating some outside content (be it server or browser side) it might not be in your chain of responsibility altogether. If you are serving ads for example from a subscriber service, they might be having a high-load period or outage. These are just the steps that I would take to narrow down the culprit.
Probably this will not be the solution for you, but when I had dog-slow Safari (and FF too) I simply changed the DNS servers to OpenDNS (208.67.222.222, 208.67.220.220) and all my problems were resolved.

Why are my basic Heroku apps taking two seconds to load?

I created two very simple Heroku apps to test out the service, but it's often taking several seconds to load the page when I first visit them:
Cropify - Basic Sinatra App (on github)
Textile2HTML - Even more basic Sinatra App (on github)
All I did was create a simple Sinatra app and deploy it. I haven't done anything to mess with or test the Heroku servers. What can I do to improve response time? It's very slow right now and I'm not sure where to start. The code for the projects is on GitHub if that helps.
If your application is unused for a while it gets unloaded (from the server memory).
On the first hit it gets loaded and stays loaded until some time passes without anyone accessing it.
This is done to save server resources. If no one uses your app why keep resources busy and not let someone who really needs use them ?
If your app has a lot of continuous traffic it will never be unloaded.
There is an official note about this.
You might also want to investigate the caching options you have on Heroku w/ Varnish and Memcached. These are persisted independent of the dynos.
For example, if you have an unchanging homepage, you can cache that for extended periods in Varnish by adding Cache-Control headers to the response. Then your users won't experience the load hit until they want to "do something" rather than when they arrive.
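For example, a minimal sketch of setting such a header, written here as a Node/Express handler rather than the asker's Sinatra app (the route and max-age are arbitrary):

// Mark an unchanging page as cacheable so Varnish (and browsers) can serve it
// without hitting the dyno on every request.
import express from "express";

const app = express();

app.get("/", (_req, res) => {
  res.set("Cache-Control", "public, max-age=3600"); // cache for one hour
  res.send("<h1>Home page</h1>");
});

app.listen(3000);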
You should check out Tom Robinson's answer to "Scalability: How Does Heroku Work?" on Quora: http://www.quora.com/Scalability/How-does-Heroku-work
Heroku divides up server resources among many different customers/applications. Your app is allotted blocks of computing power. Heroku partitions based on resource demand. When you have a popular application that demands more power, you can pay for more 'dynos' (application containers) and then get a larger chunk of the pie in return.
In your case though, you are running a free app that few people--if any outside of you--are visiting/using. Therefore, Heroku cuts down on the resources you're getting by unloading your app--putting it in hibernation essentially--until there is a request made to your address. When that happens, and your app has been idling for a long time, it takes time to reload.
Add 1 extra dyno to keep your app from falling asleep, if that reload time is important.
I am having the same problem. I deployed a Rails 3 (Ruby 1.9.2) app last night and it's slow. I know that 1.9.2/Rails 3 is in beta on Heroku, but the support ticket said it should be fine using some instructions they sent me.
I understand that the first request after a long time takes the longest. Makes sense. But simply loading pages that don't even connect to a DB taking 10 seconds sometimes is pretty bad.
Anyway, you might want to try what I'm going to do: profile my app and see how long it takes locally. If it's taking 400 ms locally then something is wrong. But if I get 50 ms locally and it still takes 10 seconds on Heroku, then something is definitely wrong.
Obviously, caching helps but you only get 5MB for free and once again, with ONE person using the site, it shouldn't be that slow.
I had the same problem with every app I have put up via Heroku's free account. Now there is the option of adding dynos so that your app does not get unloaded while it is not being used for a while, and you can also try using Redis or Memcached for caching. But I used a hacky solution for my small-scale project: I basically built a web scraper, put it inside an infinite loop with a sleep, and tada, the website actually had much better response times (I guess it was not getting unloaded because of the script).
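A rough sketch of that keep-alive hack in Node/TypeScript (requires Node 18+ for fetch; the app URL and interval are placeholders). Note that it burns dyno hours and is a workaround, not a fix:

// Ping the app periodically so the free dyno never idles long enough to be unloaded.
const APP_URL = "https://your-app.herokuapp.com/"; // placeholder URL

setInterval(async () => {
  try {
    const res = await fetch(APP_URL);
    console.log(`${new Date().toISOString()} keep-alive ping: ${res.status}`);
  } catch (err) {
    console.error("keep-alive ping failed", err);
  }
}, 5 * 60 * 1000); // every 5 minutes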

Does Google Analytics have performance overhead?

To what extent does Google Analytics impact performance?
I'm looking for the following:
Benchmarks (including response times/pageload times et al)
Links or results to similar benchmarks
One (possible) method of testing Google Analytics (GA) on your site:
Serve ga.js (the Google Analytics JavaScript file) from your own server.
Update from Google Daily (test 1) and Weekly (test 2).
I would be interested to see how this reduces the communication between the client webserver and the GA server.
Has anyone conducted any of these tests? If so, can you provide your results? If not, does anyone have a better method for testing the performance hit (or lack thereof) for using GA?
2018 update: Where and how you mount Analytics has changed over and over and over again. The current gtag.js code does a few things:
Load the gtag script but async (non-blocking). This means it doesn't slow your page down in any other way than bandwidth and processing.
Create an array on the page called window.dataLayer
Define a little gtag() function that just pushes whatever you throw at it into that array.
Call that with a pageload event.
Once the main gtag script loads, it syncs this array with Google and monitors it for changes. It's a good system and unlike the previous systems (eg stuffing code in just before </body>) it means you can call events before the DOM has rendered, and script order doesn't really matter, as long as you define gtag() first.
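For reference, the inline part of that bootstrap looks roughly like this sketch (GA_MEASUREMENT_ID is a placeholder, and the array push is a simplified stand-in for Google's own snippet, which pushes the arguments object; the async script tag is added separately):

// Queue analytics calls locally until the async gtag.js script arrives and drains them.
const w = window as unknown as { dataLayer: unknown[] };
w.dataLayer = w.dataLayer || [];
function gtag(...args: unknown[]): void {
  w.dataLayer.push(args); // just queues calls until the real script loads
}
gtag("js", new Date());              // the initial pageload event
gtag("config", "GA_MEASUREMENT_ID"); // placeholder measurement ID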
That's not to say there isn't a performance overhead here. We're still using bandwidth on loading up the script (it's cached locally for 15 minutes), and it's not a small pile of scripts that they throw at you, so there's some CPU time processing it.
But it's all negligible compared to (eg) modern frontend frameworks.
If you're going for the absolute, most cut-down website possible, avoid it completely. If you're trying to protect the privacy of your users, don't use any third party scripts... But if we're talking about an average modern website, there is much lower hanging fruit than gtag.js if you're hitting performance issues.
There are some great slides by Steve Souders (client-side performance expert) about:
Different techniques to load external JavaScript files in parallel
their effect on loading time and page rendering
what kind of "in progress" indicators the browser displays (e.g. 'loading' in the status bar, hourglass mouse cursor).
I haven't done any fancy automated testing or programmatic number crunching, but using good old Firefox with the Firebug plugin and a pair of JS variables to tell the time difference before and after all GA code is executed, here is what I found.
Two things are downloaded:
ga.js is the JavaScript file containing the code. This is 9 KB, so the initial download is negligible, and the filename isn't dynamic, so it's cached after the first request.
a 35-byte GIF file with a dynamic URL (via query string args), so this is requested every time. 35 bytes is a negligible download as well (Firebug says it took me 70 ms to download it).
As far as execution time, my first request with a clean browser cache was an average of about 330ms each time and subsequent requests were between 35 and 130 ms.
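The "pair of JS variables" approach is nothing more elaborate than this sketch (using performance.now() here rather than Date objects):

// Measure how long the synchronously executed GA code takes on this page load.
const gaStart = performance.now();
// ... GA tracking code runs here ...
const gaEnd = performance.now();
console.log(`GA code executed in ${(gaEnd - gaStart).toFixed(1)} ms`);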
From my own experience, adding Google Analytics has not changed the load times.
According to Firebug it loads in less than a second (648 ms avg), and according to some of my other tests ~60%-80% of that time was transferring the data from the server, which of course will vary from user to user. I don't particularly think that caching the analytics code locally will change the load times much, for the above reasons.
I use Google Analytics on more than 40 websites without it ever being the cause of any slowdown, even a small one; most of the time is spent getting the images, which, given their typical sizes, is understandable.
You can host the ga.js on your servers with no problems whatsoever, but the idea is that your users will have the ga.js cached from some other site they may have visited. So downloading ga.js, because it's so popular, adds very little overhead in many cases (i.e., it's already been cached).
Plus, DNS lookups do not cost the same in different places due to network topology. Caching behavior would change depending on whether users use other sites that include ga.js or not.
Once the JavaScript has been loaded, the ga.js does communicate with Google servers, but that is an asynchronous process.
There's no/minimal site overhead on the server side.
The HTML for Google Analytics is three lines of javascript that you place at the bottom of your webpage. It's nothing really, and doesn't consume any more server resource than a copyright notice.
On the client side, the page can take a little bit (up to a couple of seconds) of time to finish displaying a page. However - In my experience, the only bit of the page not loaded is the Google stuff, so users can see your page perfectly fine. You just get the throbber at the top of the page throbbing for a little longer.
(Note: You need to place your google analytics code block at the bottom of any served pages for this to be the case. I don't know what happens if the code block is placed at the top of your HTML)
The traditional instructions from Google on how to include ga.js use document.write(). So, even if a browser would somehow asynchronously load external JavaScript libraries until some code is actually to be executed, the document.write() would still block the page loading. The later asynchronous instructions do not use document.write() directly, but maybe insertBefore also blocks page loading?
However, Google sets the cache's max-age to 86,400 seconds (being 1 day, and even set to be public, so also applicable to proxies). So, as many sites load the very same Google script, the JavaScript will often be fetched from the cache. Still, even when ga.js is cached, simply clicking the reload button will often make a browser ask Google about any changes. And then, just like when ga.js was not cached yet, the browser has to await the response before continuing:
GET /ga.js HTTP/1.1
Host: www.google-analytics.com
...
If-Modified-Since: Mon, 22 Jun 2009 20:00:33 GMT
Cache-Control: max-age=0
HTTP/1.x 304 Not Modified
Last-Modified: Mon, 22 Jun 2009 20:00:33 GMT
Date: Sun, 26 Jul 2009 12:08:27 GMT
Cache-Control: max-age=604800, public
Server: Golfe
Note that many users click reload for news sites, forums and blogs they already have open in a browser window, making many browsers block until a response from Google is received. How often do you reload the SO home page? When Google Analytics response is slow, then such users will notice right away. (There are many solutions published on the net to asynchronously load the ga.js script, especially useful for these kind of sites, but maybe no longer better than Google's updated instructions.)
Once the JavaScript has loaded and executed, the actual loading of the web bug (the tracking image) should be asynchronous. So, the loading of the tracking image should not block anything else, unless the page uses body.onload(). In this case, if the web bug fails to load promptly then clicking reload actually makes things worse because clicking reload will also make the browser request the script again, with the If-Modified-Since described above. Before the reload the browser was only awaiting the web bug, while after clicking reload it also needs the response for the ga.js script.
So, sites using Google Analytics should not use body.onload(). Instead, one should use something like jQuery's $(document).ready() or MooTools' domready event.
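In plain browser terms the recommendation amounts to this sketch (initPage is a hypothetical init function):

declare function initPage(): void; // hypothetical init function defined elsewhere

// Run your own startup code as soon as the DOM is parsed, instead of waiting for
// window.onload, which also waits for every image, including a slow GA web bug.
document.addEventListener("DOMContentLoaded", () => {
  initPage();
});
// as opposed to: window.onload = () => initPage();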
See also Google's Functional Overview, explaining How Does Google Analytics Collect Data?, including How the Tracking Code Works. (This also makes it official that Google collects the contents of first-party cookies. That is: the cookies from the site you're visiting.)
Update: in December 2009, Google released an asynchronous version. The above should tell everyone to upgrade just to be sure, though upgrading does not solve everything.
It really depends on the day. I'm just adding this to a blog. I'm in California, very close to their main data centers, on fast low-latency business DSL, on an overclocked i5 with plenty of RAM, running a recent Linux kernel and stable Firefox.
Here's a sample page load:
Google Analytics alone added 5 seconds of network download time, just to get 15 KB!
You can see blogger.com served 34 KB in 300 milliseconds. That's 32x faster!
Also, look at the red line (which represents the onLoad event, meaning there's no more script executing on the page, so the browser can finally stop the loading indicators/spinners/etc.)... look how far to the right it is. That's probably 3 seconds of garbage JavaScript processing happening there. It's very uncommon for that line to be very far away from the end of the resource download bars. I'm done debugging this and it's 1/3 analytics' fault, 2/3 Blogger's fault. ...one would think Google stuff was fast.
Edit:
Some more data. Here's a request with everything cached; the one above was the first visit.
I've removed the Google+ crap from above for two reasons: I was trying to see if it was playing some part in the slow onLoad event (it isn't), and because it is mostly useless.
So, with this we can see that the network time is the least of your worries. Even on a fast computer with modern software, the toll Google Analytics + Blogger take on processing time will still push your page load past 7 s. Without Blogger (just check this very site) I'm seeing 0.5 s of delay between the resources finishing loading and the red line kicking in.
Loading any extra JavaScript on your page is going to increase the download time from the client's perspective. You can ameliorate this by loading it at the bottom of your page so that your page is rendered even if GA is not loaded. I would avoid hosting it yourself because you would lose the advantage of the client cache for your page: if the client has it cached from some other page, your page's request will be filled from the client itself, whereas if you change it to load from your site, it will require a download even if the client already has the code (which is likely). Adding a task to your software processes to avoid loading the file from Google seems unwarranted for what may be an unnecessary optimization. It would be hard to test this, since it would always serve up faster locally, but what really matters is how fast it works for your customers. If you decide to evaluate keeping it locally, make sure you test it from your home internet connection, not the machine sitting next to the server in your rack.
Use Firebug and YSlow to check for yourself. What you will discover, however, is that GA is about 9 KB in size (which is actually quite substantial for what it does) and that it also sometimes does NOT load very fast (for what reason I don't know; I think it might be the servers "choking" sometimes).
We removed it due to performance issues on our Ajax samples, but then again, for us, being ultra fast and responsive was priorities 1, 2 and 3.
Nothing noticeable.
The call to Google (including the DNS lookup, loading the JavaScript if not already cached, and the actual tracer calls themselves) should be done by the client's browser separately from actually loading your page. Certainly the DNS lookup will be done by the underlying system and will not, to my knowledge, count as a lookup within the browser (browsers have a limit on the number of request threads they will use per site).
Beyond that, the browser will load the Google script in parallel along with all other embedded resources, so in the worst case you will potentially get an extremely slight increase in the time it takes to download everything (we're talking on the order of milliseconds, unnoticeable). If the Google script is loaded last by the browser, or you don't have many external resources on your page, or your page's external resources are cached by the browser, or Google's script is cached by the browser (extremely likely), then you won't see any difference. It's just absolutely trivial overall, roughly the same effect as sticking an extra tiny picture on your page.
About the only time it might make a concrete difference is if you have some behaviour that fires on the onLoad event (which waits for external resources to load) and the Google servers are down or slow. The latter is unlikely to happen often, but if it did, the onLoad event wouldn't fire until the script had downloaded. You can work around this anyway by using various "when DOM loaded" events, which are generally more responsive, as you don't have to wait for your own scripts/images to load this way either.
If you're really that worried about the effects on page load time, then have a look at the "Net" panel of Firebug, which will quantify this and draw you a pretty graph. I would encourage you to do this for yourself anyway, as even if other people give you the figures and benchmarks you request, it will be completely different for your own site.
Well, I have searched, researched and explored extensively on the net, but I have not found any statistical data that argues either in favour of or against the premise.
However, this excerpt from http://www.ga-experts.com claims that it's a myth that GA slows down your website:
Err, well okay, maybe slightly, but we’re talking about milliseconds. GA works by page tagging, and any time you add more content to a web page, it will increase loading times. However, if you follow best practice (adding the tag before the </body> tag) then your page will load first. Also, bear in mind that any page-tag-based web analytics package (which is the majority) will work the same way.
From the answers above and all other sources, what I feel is that whatever slowdown it causes is not perceived by the user, as the script is included at the bottom of the page. But if we talk of complete page loads, we might say that it slows down the page-load time.
Please post more info if you have it, and data if you have any.
I don't think this is what you're looking for, but what are you worried about performance for?
If it's your server, then there's obviously no impact, as the script resides on Google's servers.
If it's your users that you're worried about, then there is no impact either. As long as you place it just above the closing </body> tag, your users will not receive anything slower than they would before... the script is loaded last and has no effect on the appearance to the user. So they're essentially not waiting on anything and can even continue to browse through the page without noticing that it's still loading.
The question was whether Google Analytics will cause your site to slow down, and the answer is yes. Right now, at the time of writing this, Google-Analytics.com is not working, so sites that include it in their pages won't finish loading; so yes, it can slow things down and even stop your site from loading. It's uncommon for google-analytics.com to be down this long (over 10 minutes so far), but it shows that it is possible.
There are two aspects to it.
Download of the analytics scripts (and a gif)
Execution of the downloaded scripts
Download time is almost always less than 100ms, which is acceptable.
Here comes the twist.
analytics.js execution 250ms
re-marketing (if enabled) 300ms
demographic (if enabled) 200ms
So analytics with re-marketing takes 750ms on average. I feel that this is a huge number when it comes to performance overhead.
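If you want to check the download side of those numbers on your own pages rather than take the averages above on faith, the Resource Timing API reports it directly. A rough sketch to run in the browser console:

// List how long each analytics-related resource took to fetch on the current page.
const analyticsEntries = performance
  .getEntriesByType("resource")
  .filter((e) => e.name.includes("google-analytics") || e.name.includes("googletagmanager"));

for (const e of analyticsEntries) {
  console.log(`${e.name}: ${e.duration.toFixed(0)} ms`);
}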
I noticed frequent I/O and CPU overload in cPanel, resulting in:
Site unreachable error
And that stopped after I disabled WP Analytics plugin. So I reckon it does have some impact.
