Angular 2 slow initial page load caused by load sequence - performance

I have an Angular 2 project created from a seed, and I've added several Angular components to my initial page. Some of them load images, which is relatively slow, but the actual problem is this:
I have a big bundle (~1 MB) which I am currently working to shrink, following a guide on the subject.
The initial load makes just a few requests, loads the bundle (usually ~3 seconds) and waits for the Angular application to bootstrap (~1.4 seconds). After that it starts loading all the components and their resources (fonts, images, etc.). Here is what the requests look like:
I am afraid that even after I reduce the bundle size, the application will still spend ~1.5 seconds bootstrapping without making any requests, and then wait another ~1 second for the components' resources to load. I assume that adds up to 3+ seconds of load time, which, for an app as simple as mine, I find unacceptable.
Q1: Is there a way to load the component resources earlier and just reference them on component load?
Q2: Is there a way to make the application bootstrap faster?
I'm open to other suggestions too :)
Edit:
I am using the AoT compilation provided with the seed, and I have taken steps to reduce the size of the app.js file. The problem remains: the gap between the moment app.js finishes downloading and the first component's resource requests.
UPDATE (2016-12-19):
What I did (for now) is on the server only:
Enabled HTTP/2 support, which resulted in a major speed improvement.
Enabled GZIP, which reduced the size of the JS by more than 5 times.
Those server configurations were trivial on NGINX and Apache, so it's worth giving them a try.
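For anyone who wants to try the same thing, here is a minimal sketch of the relevant NGINX directives (the domain, certificate paths, and exact gzip type list are placeholders to adapt to your setup):

server {
    listen 443 ssl http2;                 # "http2" on the listen line enables HTTP/2
    server_name example.com;              # placeholder domain

    gzip on;                              # compress responses on the fly
    gzip_types text/css application/javascript application/json;
    gzip_min_length 1024;                 # skip tiny responses where gzip doesn't pay off

    ssl_certificate     /etc/ssl/example.com.crt;   # placeholder certificate paths
    ssl_certificate_key /etc/ssl/example.com.key;
}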
Now, although the site loads a lot faster, those changes don't solve the original problems (Q1 and Q2). Hence I am looking at some other approaches that you may also want to follow if you are in my spot:
Prerendering with Gulp
Prerendering in Amazon
AoT vs JIT compilation - some insights.
UPDATE (2018-06-11):
Here are some materials that helped me boost the initial load:
Angular Performance Workshop - Manfred Steyer
Angular — Faster performance and better user experience with Lazy Loading - by Ashwin Sureshkumar
In my case, lazy loading plays the biggest role.

Q2: You can make the application bootstrap faster and decrease the bundle size by implementing Ahead-of-Time (AoT) compilation: https://angular.io/docs/ts/latest/cookbook/aot-compiler.html
Just like it sounds, the templates are precompiled and all the scripts are generated beforehand, which reduces processing time after the initial load. Additionally, the Angular 2 compiler is not included in your bundle, which should cut the bundle size down by a large amount.
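As a rough sketch of what the cookbook describes (file and symbol names follow the cookbook's conventions, not necessarily your seed's exact setup), the bootstrap switches from the dynamic compiler to a precompiled factory:

// main.ts with JIT: the compiler ships in the bundle and compiles templates in the browser
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';
platformBrowserDynamic().bootstrapModule(AppModule);

// main.ts with AoT: templates were compiled by ngc at build time, so no runtime compiler
import { platformBrowser } from '@angular/platform-browser';
import { AppModuleNgFactory } from './app/app.module.ngfactory';
platformBrowser().bootstrapModuleFactory(AppModuleNgFactory);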
Q1: There is lazy-loading support for components, but I haven't looked into what it entails; someone else may be able to answer that part for you.
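To sketch the lazy-loading part anyway: with the Angular router you point a route at a module file instead of importing its components up front, so that code is split into its own chunk and only fetched when the route is first activated. A minimal example (module and path names are made up):

// app-routing.module.ts
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';
import { HomeComponent } from './home.component';   // hypothetical eagerly loaded component

const routes: Routes = [
  { path: '', component: HomeComponent },
  // Lazily loaded: admin.module.ts becomes a separate bundle, downloaded
  // only when the user first navigates to /admin.
  { path: 'admin', loadChildren: 'app/admin/admin.module#AdminModule' },
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule],
})
export class AppRoutingModule {}

(Newer Angular versions replace the string syntax with loadChildren: () => import('./admin/admin.module').then(m => m.AdminModule).)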

Use CloudFlare DNS. We were able to reduce load time from 50 seconds to 4 seconds, and refresh time to around 1 second.
There is a free tier, and it works fine.

Besides minifying and bundling the JS, enabling gzip compression on the server side will also decrease the load time.

You have to reduce your main.js bundle as much as possible; the more code in main.js, the longer it takes to process. Check your imports in app.module and use lazy loading.

Related

Magento Admin suddenly slowed down

We have Magento EE 1.14. The admin was working fine until two days ago, when its speed dropped dramatically. The frontend is not affected, and there were no changes in code or server configuration. Here is what I tried to fix the problem; nothing worked:
Log cleaning is properly configured.
Removed two unused extensions, but no improvement.
Tried disabling non-critical extensions to see if speed would improve, but no luck there either.
I can NOT use the Redis cache at this time, but I have configured a new server that uses Redis and will move to it next month.
Sometimes the backend regains speed for a few minutes.
I enabled profilers; the source of the delay is mage (screenshot attached).
Here are my questions:
Is there any way to know the exact reason for the mage delay?
Are there other tests I can use to identify the cause of the delay?
Thanks in advance,
It could be a delay on external resource connections. Do you have New Relic or similar software? Check there for slow connections. If you don't have NR, profile the admin with blackfire.io. The Magento profiler is really unhelpful :)
Follow the steps below:
Delete unused extensions
It is best to remove unused extensions rather than just disabling them. If you disable an extension, it will still exist in the database, which not only increases the size of your database (DB) but also adds to the DB read time. So keep your approach simple: if you don't need it, DELETE it.
Keep your store clean by deleting unused and outdated products
Keep in mind that a clean store is a fast store. We can make the front end faster by caching and displaying only a limited set of products even if we have more than 10,000 items in the back end, but we cannot escape their weight: if the number of products keeps increasing in the backend, it may get slower. So it is best to remove unused products, and to repeat this activity every few months to keep the store fast.
Reindexing
One of the basic reasons why website administrators experience slow performance while saving a product is reindexing. Whenever you save a product, the Magento backend starts to reindex, and since you have a lot of products, this takes some time to complete, causing unnecessary delays.
Clear the Cache
Caching is very important for any web application: it saves the web server from processing the same request again and again.

SonarQube UI rendering delay (exactly 1 minute!)

I installed SonarQube to help with code quality analysis. I set it up to run behind an Nginx reverse proxy using the instructions on their website. Often I have to wait exactly one minute for a page to load. Investigating with Google Chrome Developer Tools, I saw that a resource was not loading for exactly one minute; then something times out and allows the page to continue loading. Here's a typical example of the problem, where some resources load at the beginning, then there's a one-minute delay, then the rest of the page loads:
Sometimes the page loads without any delay.
At first I thought it might be a problem with some JavaScript. Here is an example of clicking around to many pages, sorted by response time (to see which resources might be causing the delay):
I then tried loading a static image, and even that intermittently takes a minute to load.
How can I pin down exactly what component is causing the delay? Could it be the reverse proxy? The SonarQube application? Some JVM problem?
As your 1-minute delay also happens with a static image (here, the logo), where there is minimal JVM impact, I would suggest running curl -L -v against the Nginx front end and also directly against the SonarQube HTTP connector.
If the 1-minute delay never happens when connecting to the SonarQube HTTP connector, the Nginx/SonarQube link should be investigated.
If the 1-minute delay also happens with the SonarQube HTTP connector, the SonarQube JVMs and hosting should be investigated.
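A sketch of that test (the hostname and image path are made up; 9000 is SonarQube's default HTTP connector port):

# Through the Nginx reverse proxy:
curl -L -v https://sonarqube.example.com/images/logo.svg

# Directly against the SonarQube HTTP connector, bypassing Nginx:
curl -L -v http://localhost:9000/images/logo.svg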

Optimising Magento Loading Speed - Can't Identify Why Initial Receiving Is So Slow

While our website is not yet complete graphically and design-wise, most of the backend operations are near completion.
However, after optimising the MySQL database, we are still seeing a significant initial receiving period when tested on pingdom.com:
http://tools.pingdom.com/fpt/#!/IuoBna86v/http://foscam-uk.com
According to Pingdom:
The yellow part is the time it takes to resolve the hostname and similar (before the connection is initiated to the web server), the green part is connecting to the web server, and the blue part is the time it takes to retrieve the content from the webserver.
Upon asking our managed VPS support team, we got the response: 'Have you tried optimizing your script? I believe that the high wait time there indicates actual website loading time (meaning time for your script to load), not the actual connection to the website/server.'
Now, Pingdom shows the JS/CSS loading relatively quickly, and the MySQL side of things doesn't seem to be slowing anything down either. Does anyone have any suggestions as to what this could be or what might be causing it?
Thank you very much for your time and help.
89 requests are too many.
Reduce the number of image requests by creating sprites. This is pretty important, judging from what Pingdom shows.
Keep-Alive should be set to On, and the keep-alive time should be a bit higher (15 seconds or so); see the sketch after this list.
Using the compiler plus merging and minifying JS/CSS is recommended.
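The keep-alive point translates to the following Apache directives (15 seconds as suggested above; MaxKeepAliveRequests shown at its default):

KeepAlive On
KeepAliveTimeout 15
MaxKeepAliveRequests 100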
Change the hosting provider. An 8-second load is very, very slow; it means around 15-17 seconds for a first-time visitor who hasn't cached parts of your site. My site www.bebepunk.ro loads in 2.5 seconds according to Pingdom, and users still complain about the slowness. Check both values with http://www.webpagetest.org as well.

Why are my basic Heroku apps taking two seconds to load?

I created two very simple Heroku apps to test out the service, but it often takes several seconds to load the page when I first visit them:
Cropify - Basic Sinatra App (on github)
Textile2HTML - Even more basic Sinatra App (on github)
All I did was create a simple Sinatra app and deploy it. I haven't done anything to mess with or test the Heroku servers. What can I do to improve response time? It's very slow right now and I'm not sure where to start. The code for the projects is on GitHub if that helps.
If your application is unused for a while, it gets unloaded (from server memory).
On the first hit it gets loaded again, and it stays loaded until some time passes without anyone accessing it.
This is done to save server resources: if no one uses your app, why keep resources busy instead of letting someone who really needs them use them?
If your app has a lot of continuous traffic, it will never be unloaded.
There is an official note about this.
You might also want to investigate the caching options you have on Heroku with Varnish and Memcached. These are persisted independently of the dynos.
For example, if you have an unchanging homepage, you can have Varnish cache it for extended periods by adding Cache-Control headers to the response. Then your users won't experience the load hit until they want to "do something", rather than when they arrive.
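Such a response could carry headers like the following (the max-age value is just an example):

HTTP/1.1 200 OK
Content-Type: text/html
Cache-Control: public, max-age=3600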
You should check out Tom Robinson's answer to "Scalability: How Does Heroku Work?" on Quora: http://www.quora.com/Scalability/How-does-Heroku-work
Heroku divides up server resources among many different customers/applications. Your app is allotted blocks of computing power. Heroku partitions based on resource demand. When you have a popular application that demands more power, you can pay for more 'dynos' (application containers) and then get a larger chunk of the pie in return.
In your case, though, you are running a free app that few people, if any besides you, are visiting or using. Therefore Heroku cuts down on the resources you're getting by unloading your app (essentially putting it into hibernation) until a request is made to your address. When that happens after the app has been idling for a long time, it takes time to reload.
Add one extra dyno to keep your app from going to sleep, if that reload time is important.
I am having the same problem. I deployed a Rails 3 (Ruby 1.9.2) app last night and it's slow. I know that 1.9.2/Rails 3 is in beta on Heroku, but the support ticket said it should be fine if I followed some instructions they sent me.
I understand that the first request after a long idle period takes the longest. Makes sense. But simply loading pages that don't even touch the DB sometimes takes 10 seconds, which is pretty bad.
Anyway, you might want to try what I'm going to do: profile the app and see how long it takes locally. If it's taking 400 ms there, then something is wrong with the app. But if I get 50 ms locally and it still takes 10 seconds on Heroku, then something is definitely wrong on Heroku's side.
Obviously caching helps, but you only get 5 MB for free, and once again, with ONE person using the site, it shouldn't be this slow.
I had the same problem with every app I have deployed on Heroku's free account. There are now options for adding dynos so that your app does not get unloaded while it sits idle, and you can also try Redis or Memcached for caching. But I used a hacky solution for my small-scale project: I built a web scraper, put it inside an infinite loop with a sleep, and voilà, the website had much better response times (I guess it was not getting unloaded because of the script).
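A minimal sketch of that kind of keep-alive hack in Node.js (the URL is a placeholder, and note it works against the spirit of the free tier):

// ping.js: request the app every 5 minutes so the free dyno never idles out
const https = require('https');

setInterval(() => {
  https.get('https://your-app.herokuapp.com/', (res) => {
    res.resume();   // drain the response; we only care about keeping the dyno warm
    console.log('pinged, status ' + res.statusCode);
  }).on('error', (err) => console.error('ping failed: ' + err.message));
}, 5 * 60 * 1000);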

Does Google Analytics have performance overhead?

To what extent does Google Analytics impact performance?
I'm looking for the following:
Benchmarks (including response times/pageload times et al)
Links or results to similar benchmarks
One (possible) method of testing Google Analytics (GA) on your site:
Serve ga.js (the Google Analytics JavaScript file) from your own server.
Update it from Google daily (test 1) and weekly (test 2).
I would be interested to see how this reduces the communication between the client and the GA servers.
Has anyone conducted any of these tests? If so, can you provide your results? If not, does anyone have a better method for testing the performance hit (or lack thereof) for using GA?
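For what it's worth, a sketch of the self-hosted setup those tests would need (the paths and schedule are hypothetical):

# Crontab entry: refresh the locally hosted copy of ga.js from Google once a day (test 1);
# switch the schedule to weekly for test 2.
0 3 * * * curl -sS -o /var/www/static/ga.js https://www.google-analytics.com/ga.js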
2018 update: Where and how you mount Analytics has changed over and over again. The current gtag.js code does a few things:
Loads the gtag script, but async (non-blocking). This means it doesn't slow your page down in any way other than bandwidth and processing.
Creates an array on the page called window.dataLayer.
Defines a little gtag() function that just pushes whatever you throw at it into that array.
Calls that with a pageload event.
Once the main gtag script loads, it syncs this array with Google and monitors it for changes. It's a good system, and unlike the previous systems (e.g. stuffing code in just before </body>) it means you can call events before the DOM has rendered, and script order doesn't really matter, as long as you define gtag() first.
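For reference, the standard gtag.js embed looks roughly like this (the measurement ID is a placeholder):

<!-- async: the browser keeps parsing the page while this downloads -->
<script async src="https://www.googletagmanager.com/gtag/js?id=GA_MEASUREMENT_ID"></script>
<script>
  window.dataLayer = window.dataLayer || [];    // the array described above
  function gtag(){dataLayer.push(arguments);}   // just queues whatever you pass in
  gtag('js', new Date());                       // the initial pageload event
  gtag('config', 'GA_MEASUREMENT_ID');
</script>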
That's not to say there is no performance overhead here. We're still spending bandwidth loading the script (it's cached locally for 15 minutes), and it's not a small pile of script they throw at you, so there's some CPU time spent processing it.
But it's all negligible compared to (eg) modern frontend frameworks.
If you're going for the absolute most cut-down website possible, avoid it completely. If you're trying to protect the privacy of your users, don't use any third-party scripts. But if we're talking about an average modern website, there is much lower-hanging fruit than gtag.js if you're hitting performance issues.
There are some great slides by Steve Souders (client-side performance expert) about:
Different techniques to load external JavaScript files in parallel
their effect on loading time and page rendering
what kind of "in progress" indicators the browser displays (e.g. 'loading' in the status bar, hourglass mouse cursor).
I haven't done any fancy automated testing or programmatic number crunching, but using good old Firefox with the Firebug plugin and a pair of JS variables to record the time difference before and after all the GA code executes, here is what I found.
Two things are downloaded:
ga.js is the JavaScript file containing the code. It is 9 KB, so the initial download is negligible, and since the filename isn't dynamic, it's cached after the first request.
A 35-byte GIF file with a dynamic URL (via query-string args), so it is requested every time. 35 bytes is a negligible download as well (Firebug says it took me 70 ms to download).
As for execution time, my first request with a clean browser cache averaged about 330 ms, and subsequent requests were between 35 and 130 ms.
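The pair of JS variables mentioned above can be as simple as bracketing the GA block with two timestamps; a sketch (it only measures the synchronous part of the snippet):

<script>var gaStart = new Date().getTime();</script>
<!-- ...the Google Analytics snippet goes here... -->
<script>
  var gaEnd = new Date().getTime();
  console.log('GA block took ' + (gaEnd - gaStart) + ' ms');  // visible in Firebug's console
</script>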
From my own experience, adding Google Analytics has not changed load times.
According to Firebug it loads in less than a second (648 ms average), and according to some of my other tests, ~60-80% of that time was spent transferring the data from the server, which of course will vary from user to user. For the above reasons, I don't particularly think that caching the analytics code locally will change load times much.
I use Google Analytics on more than 40 websites without it ever being the cause of any slowdown, even a small one; most of the time is spent fetching images, which, given their typical sizes, is understandable.
You can host ga.js on your own servers with no problems whatsoever, but the idea is that your users will already have ga.js cached from some other site they have visited. So downloading ga.js, because it's so popular, adds very little overhead in many cases (i.e., it's often already cached).
Plus, DNS lookups do not cost the same in different places due to network topology. Caching behavior would change depending on whether users use other sites that include ga.js or not.
Once the JavaScript has been loaded, the ga.js does communicate with Google servers, but that is an asynchronous process.
There's no/minimal site overhead on the server side.
The Google Analytics embed is three lines of JavaScript that you place at the bottom of your webpage. It's nothing, really, and doesn't consume any more server resources than a copyright notice.
On the client side, the page can take a little time (up to a couple of seconds) to finish displaying. However, in my experience the only bit of the page left loading is the Google stuff, so users can see your page perfectly well. You just get the throbber at the top of the page spinning a little longer.
(Note: you need to place your Google Analytics code block at the bottom of any served pages for this to be the case. I don't know what happens if the block is placed at the top of your HTML.)
The traditional instructions from Google on how to include ga.js use document.write(). So even if a browser could somehow defer loading external JavaScript libraries until some code actually needs to execute, the document.write() would still block page loading. The later asynchronous instructions do not use document.write() directly, but perhaps insertBefore also blocks page loading?
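Those later asynchronous instructions look like this (the account ID is a placeholder); note the insertBefore at the end:

var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXX-X']);   // placeholder account ID
_gaq.push(['_trackPageview']);

(function() {
  var ga = document.createElement('script');
  ga.type = 'text/javascript';
  ga.async = true;                          // lets the browser fetch it without blocking
  ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www')
           + '.google-analytics.com/ga.js';
  var s = document.getElementsByTagName('script')[0];
  s.parentNode.insertBefore(ga, s);         // the insertBefore in question
})();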
However, Google sets the cache's max-age to 604,800 seconds (7 days, and even marked public, so also applicable to proxies). So, as many sites load the very same Google script, the JavaScript will often be fetched from cache. Still, even when ga.js is cached, simply clicking the reload button will often make a browser ask Google about any changes. And then, just as when ga.js was not yet cached, the browser has to await the response before continuing:
GET /ga.js HTTP/1.1
Host: www.google-analytics.com
...
If-Modified-Since: Mon, 22 Jun 2009 20:00:33 GMT
Cache-Control: max-age=0
HTTP/1.x 304 Not Modified
Last-Modified: Mon, 22 Jun 2009 20:00:33 GMT
Date: Sun, 26 Jul 2009 12:08:27 GMT
Cache-Control: max-age=604800, public
Server: Golfe
Note that many users click reload for news sites, forums, and blogs they already have open in a browser window, making many browsers block until a response from Google is received. How often do you reload the SO home page? When the Google Analytics response is slow, such users will notice right away. (There are many solutions published on the net for loading the ga.js script asynchronously, especially useful for these kinds of sites, though they are probably no longer better than Google's updated instructions.)
Once the JavaScript has loaded and executed, the actual loading of the web bug (the tracking image) should be asynchronous, so the tracking image should not block anything else, unless the page uses body.onload(). In that case, if the web bug fails to load promptly, clicking reload actually makes things worse, because the reload also makes the browser request the script again with the If-Modified-Since header described above. Before the reload the browser was only awaiting the web bug; after clicking reload it also needs the response for the ga.js script.
So, sites using Google Analytics should not use body.onload(). Instead, use something like jQuery's $(document).ready() or MooTools' domready event.
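Both look roughly like this (sketches; the handler bodies are placeholders):

// jQuery: fires once the DOM is parsed, without waiting for images or the web bug
$(document).ready(function () {
  // page initialisation here
});

// MooTools equivalent
window.addEvent('domready', function () {
  // page initialisation here
});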
See also Google's Functional Overview, explaining "How Does Google Analytics Collect Data?", including "How the Tracking Code Works". (This also makes it official that Google collects the contents of first-party cookies, that is, the cookies from the site you're visiting.)
Update: in December 2009, Google released an asynchronous version. The above should tell everyone to upgrade, just to be sure, though upgrading does not solve everything.
It really depends on the day. I'm just adding this to a blog. I'm in California, very close to their main data centers, on fast low-latency business DSL, on an overclocked i5 with plenty of RAM, running a recent Linux kernel and stable Firefox.
here's a sample page load:
Google Analytics alone added 5 seconds of network download time... to fetch 15 KB!
You can see blogger.com served 34 KB in 300 milliseconds. That's 32x faster!
Also, look how far to the right the red line is (it represents the onLoad event, meaning there are no more scripts executing on the page, so the browser can finally stop the loading indicators/spinners/etc.). That's probably 3 seconds of garbage JavaScript processing happening there. It's very uncommon for that line to be far away from the end of the resource download bars. I'm done debugging this, and it's 1/3 Analytics' fault, 2/3 Blogger's fault. ...One would think Google stuff would be fast.
Edit:
Some more data: here's a request with everything cached; the one above was a first visit.
I've removed the Google Plus widgets from the page above for two reasons: I was trying to see if they were playing some part in the slow onLoad event (they aren't), and because they are mostly useless.
So with this we can see that network time is the least of your worries. Even on a fast computer with modern software, the toll Google Analytics plus Blogger take in processing time will still push your page load past 7 seconds. Without Blogger (just check this very site), I'm seeing 0.5 seconds of delay between the resources finishing loading and the red line kicking in.
Loading any extra JavaScript on your page is going to increase the download time from the client's perspective. You can ameliorate this by loading it at the bottom of your page so that your page renders even if GA has not loaded. I would avoid self-hosting it, because you would lose the advantage of the client's cache: if the client has ga.js cached from some other site, your page's request is filled from the client itself, whereas if you serve it from your own site, it requires a download even when the client already has the code (which is likely). Adding a task to your software processes just to avoid loading the file from Google seems unwarranted for what may be an unnecessary optimization. It would also be hard to test, since it will always serve up faster locally, and what really matters is how fast it works for your customers. If you do decide to evaluate hosting it locally, make sure you test it from your home internet connection, not from the machine sitting next to the server in your rack.
Use Firebug and YSlow to check for yourself. What you will discover, however, is that GA is about 9 KB in size (which is actually quite substantial for what it does) and that it also sometimes does NOT load very fast (for what reason I don't know; I think the servers may be "choking" sometimes).
We removed it due to performance issues on our Ajax samples, but then again, for us, being ultra fast and responsive was priorities 1, 2, and 3.
Nothing noticeable.
The call to Google (including the DNS lookup, loading the JavaScript if not already cached, and the actual tracer calls themselves) should be done by the client's browser in a separate thread from the one actually loading your page. Certainly the DNS lookup will be done by the underlying system and will not, to my knowledge, count as a lookup within the browser (browsers have a limit on the number of request threads they will use per site).
Beyond that, the browser will load the Google script in parallel with all other embedded resources, so in the worst case you will potentially get an extremely slight increase in the time it takes to download everything (we're talking on the order of milliseconds, unnoticeable). If the Google script is loaded last by the browser, or you don't have many external resources on your page, or your page's external resources are cached by the browser, or Google's script is cached by the browser (extremely likely), then you won't see any difference. It's absolutely trivial overall, roughly the same effect as sticking one extra tiny picture on your page.
About the only time it might make a concrete difference is if you have behaviour that fires on the onLoad event (which waits for external resources to load) and the Google servers are down or slow. The latter is unlikely to happen often, but if it did, the onLoad event would not fire until the script downloads. You can work around this anyway by using one of the various "when DOM loaded" events, which are generally more responsive, since you don't have to wait for your own scripts/images to load either.
If you're really that worried about the effects on page load time, have a look at the "Net" panel of Firebug, which will quantify this and draw you a pretty graph. I would encourage you to do this for yourself anyway, because even if other people give you the figures and benchmarks you request, it will be completely different for your own site.
Well, I have searched, researched, and explored the net extensively, but I have not found any statistical data that argues either for or against the premise.
However, this excerpt from http://www.ga-experts.com claims that it is a myth that GA slows down your website:
Err, well okay, maybe slightly, but we're talking about milliseconds. GA works by page tagging, and any time you add more content to a web page, it will increase loading times. However, if you follow best practice (adding the tag before the </body> tag) then your page will load first. Also, bear in mind that any page-tag-based web analytics package (which is the majority) will work the same way.
From the answers above and all other sources, my feeling is that whatever slowdown it causes is not perceived by the user, since the script is included at the bottom of the page. But if we are talking about complete page loads, we could say it slows down page-load time.
Please post more information, and data, if you have any.
I don't think this is what you're looking for, but what are you worried about performance for?
If it's your server, then there's obviously no impact, as the script resides on Google's servers.
If it's your users you're worried about, there is no impact either. As long as you place the snippet just above the closing body tag, your users will not receive anything more slowly than before: the script is loaded last and has no effect on what the user sees. So they're essentially not waiting on anything, and can even continue browsing through the page without noticing that it's still loading.
The question was whether Google Analytics will cause your site to slow down, and the answer is yes. Right now, at the time of writing, google-analytics.com is not responding, so sites that include it won't finish loading their pages. It's uncommon for google-analytics.com to be down this long (over 10 minutes so far), but it shows that it is possible: GA can slow your site down and even keep it from loading at all.
There are two aspects to it.
Downloading the analytics scripts (and a GIF)
Executing the downloaded scripts
Download time is almost always under 100 ms, which is acceptable.
Here comes the twist:
analytics.js execution: 250 ms
re-marketing (if enabled): 300 ms
demographics (if enabled): 200 ms
So analytics with re-marketing takes 750 ms on average. I feel that is a huge number when it comes to performance overhead.
I noticed frequent I/O and CPU overload in cPanel, resulting in a "Site unreachable" error. That stopped after I disabled the WP Analytics plugin, so I reckon it does have some impact.
