Our site is considering making the switch to http2.
My understanding is that http2 renders optimization techniques like file concatenation obsolete, since a server using http2 just sends one request.
Instead, the advice I am seeing is that it's better to keep file sizes smaller so that they are more likely to be cached by a browser.
It probably depends on the size of a website, but how small should a website's files be if it's using http2 and wants to focus on caching?
In our case, our many individual js and css files fall in the 1 KB to 180 KB range. jQuery and Bootstrap might be larger. Cumulatively, a fresh download of a page on our site is usually less than 900 KB.
So I have two questions:
Are these file sizes small enough to be cached by browsers?
If they are small enough to be cached, is it good to concatenate files anyway for users whose browsers don't support http2?
Would it hurt to have larger file sizes in this case AND use HTTP2? This way, it would benefit users running either protocol because a site could be optimized for both http and http2.
Let's clarify a few things:
My understanding is that http2 renders optimization techniques like file concatenation obsolete, since a server using http2 just sends one request.
HTTP/2 renders optimisation techniques like file concatenation somewhat obsolete, since HTTP/2 allows many files to download in parallel across the same connection. Previously, in HTTP/1.1, the browser could request a file and then had to wait until that file was fully downloaded before it could request the next file. This led to workarounds like file concatenation (to reduce the number of files required) and multiple connections (a hack to allow downloads in parallel).
However, there's a counter-argument that there are still overheads with multiple files, including requesting them, caching them, reading them from cache... etc. It's much reduced in HTTP/2 but not gone completely. Additionally, gzipping works better on one large text file than on lots of smaller files gzipped separately. Personally, however, I think the downsides of concatenation outweigh these concerns, and I think concatenation will die out once HTTP/2 is ubiquitous.
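To make the gzip point concrete, here is a minimal Node/TypeScript sketch (the file contents are invented purely for illustration) comparing one concatenated file against the same content gzipped as separate files:

    import { gzipSync } from "zlib";

    // Ten hypothetical small "files" that share a lot of boilerplate,
    // as real JS/CSS modules tend to do.
    const files = Array.from({ length: 10 }, (_, i) =>
        `/* module ${i} */\nfunction helper${i}(x) { return x + ${i}; }\nexport default helper${i};\n`.repeat(20)
    );

    // Gzipped separately, each file pays its own header and cannot
    // share a compression dictionary with the others.
    const separate = files.reduce((total, f) => total + gzipSync(f).length, 0);

    // Gzipped as one concatenated file, repeated strings compress
    // across module boundaries, so the total is usually smaller.
    const concatenated = gzipSync(files.join("\n")).length;

    console.log(`separate files:    ${separate} bytes gzipped`);
    console.log(`concatenated file: ${concatenated} bytes gzipped`);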
Instead, the advice I am seeing is that it's better to keep file sizes smaller so that they are more likely to be cached by a browser.
It probably depends on the size of a website, but how small should a website's files be if it's using http2 and wants to focus on caching?
The file size has no bearing on whether it will be cached or not (unless we are talking about truly massive files bigger than the cache itself). The reason splitting files into smaller chunks is better for caching is that if you make any changes, any files which have not been touched can still be used from the cache. If you have all your JavaScript (for example) in one big .js file and you change one line of code, then the whole file needs to be downloaded again - even if it was already in the cache.
Similarly, if you have an image sprite map then that's great for reducing separate image downloads in HTTP/1.1, but it requires the whole sprite file to be downloaded again if you ever need to edit it, to add one extra image for example. Not to mention that the whole thing is downloaded - even for pages which use just one of the images in the sprite.
However, saying all that, there is a train of thought that says the benefit of long-term caching is overstated. See this article, and in particular the section on HTTP caching, which goes to show that most people's browser cache is smaller than you think, so it's unlikely your resources will be cached for very long. That's not to say caching is not important - but more that it's useful for browsing around in that session rather than long term. So each visit to your site will likely download all your files again anyway - unless the visitor is a very frequent one, has a very big cache, or doesn't surf the web much.
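If you do split files up for cache granularity, the usual companion technique is to put a hash of each file's contents in its name and serve it with a long cache lifetime, so that only the files you actually changed get re-downloaded. A minimal build-step sketch, assuming a Node build and made-up paths:

    import { createHash } from "crypto";
    import { copyFileSync, readFileSync } from "fs";
    import { basename, extname, join } from "path";

    // Copy a file into dist/ with a content hash in its name, e.g.
    // app.js -> app.3f2a1b9c.js. A file that hasn't changed keeps the
    // same name, so returning visitors reuse it from cache; only edited
    // files get a new name and are downloaded again.
    function fingerprint(srcPath: string, distDir = "dist"): string {
        const hash = createHash("sha256")
            .update(readFileSync(srcPath))
            .digest("hex")
            .slice(0, 8);
        const ext = extname(srcPath);
        const hashedName = `${basename(srcPath, ext)}.${hash}${ext}`;
        copyFileSync(srcPath, join(distDir, hashedName));
        return hashedName; // reference this name in your HTML
    }

    // Hypothetical usage; serve the result with a far-future
    // Cache-Control header, e.g. "max-age=31536000, immutable".
    console.log(fingerprint("src/app.js"));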
is it good to concatenate files anyway for users whose browsers don't support http2?
Possibly. However, other than on Android, HTTP/2 browser support is actually very good so it's likely most of your visitors are already HTTP/2 enabled.
Saying that, there are no extra downsides to concatenating files under HTTP/2 that weren't there already under HTTP/1.1. OK, it could be argued that a number of small files could be downloaded in parallel over HTTP/2 whereas a larger file needs to be downloaded as one request, but I don't buy that this slows things down much, if at all. No proof of this, but gut feel suggests the data still needs to be sent either way, so you have a bandwidth problem or you don't. Additionally, the overhead of requesting many resources, although much reduced in HTTP/2, is still there. Latency is still the biggest problem for most users and sites - not bandwidth. Unless your resources are truly huge, I doubt you'd notice the difference between downloading one big resource in one go or the same data split into 10 little files downloaded in parallel over HTTP/2 (though you would in HTTP/1.1). Not to mention the gzipping issues discussed above.
So, in my opinion, there's no harm in keeping concatenation for a little while longer. At some point you'll need to make the call on whether the downsides outweigh the benefits given your user profile.
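If you want to base that call on data rather than gut feel, browsers that support the Resource Timing API report the negotiated protocol, so a small bit of real-user monitoring can tell you what share of your traffic is actually on HTTP/2. A rough browser-side sketch (the /log/protocols endpoint is a made-up example):

    // Count how many of this page's resources came over HTTP/2 ("h2")
    // versus HTTP/1.1, and report it to your own logging endpoint.
    const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];

    const byProtocol: Record<string, number> = {};
    for (const entry of entries) {
        // nextHopProtocol is "h2" for HTTP/2 and "http/1.1" otherwise;
        // it can be empty for cross-origin resources that don't send
        // a Timing-Allow-Origin header.
        const proto = entry.nextHopProtocol || "unknown";
        byProtocol[proto] = (byProtocol[proto] || 0) + 1;
    }

    // Hypothetical endpoint; aggregate these server-side per visitor.
    navigator.sendBeacon("/log/protocols", JSON.stringify(byProtocol));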
Would it hurt to have larger file sizes in this case AND use HTTP2? This way, it would benefit users running either protocol because a site could be optimized for both http and http2.
It absolutely wouldn't hurt at all. As mentioned above, there are (basically) no extra downsides to concatenating files under HTTP/2 that weren't there already under HTTP/1.1. It's just not that necessary under HTTP/2 anymore, and it has downsides (it potentially reduces cache use, requires a build step, makes debugging more difficult as deployed code isn't the same as the source code... etc.).
Use HTTP/2 and you'll still see big benefits for any site - except the simplest sites, which will likely see no improvement but also no negatives. And, as older browsers can stick with HTTP/1.1, there are no downsides for them. When, or if, you decide to stop implementing HTTP/1.1 performance tweaks like concatenation is a separate decision.
In fact, the only reason not to use HTTP/2 is that implementations are still fairly bleeding edge, so you might not be comfortable running your production website on it just yet.
**** Edit August 2016 ****
This post from an image-heavy, bandwidth-bound site has recently caused some interest in the HTTP/2 community as one of the first documented examples of where HTTP/2 was actually slower than HTTP/1.1. This highlights the fact that HTTP/2 technology and our understanding of it are still new and will require some tweaking for some sites. There is no such thing as a free lunch, it seems! Well worth a read, though worth bearing in mind that this is an extreme example and most sites are far more impacted, performance-wise, by latency issues and connection limitations under HTTP/1.1 than by bandwidth issues.
I have a hosted web server with http/2 (moderately fast) and, additionally, I have space on a fast CDN server with only http/1.1.
Is it recommended to load some resources from the CDN, or should I use only the web server because of http/2?
Could loading too many resources from the CDN become a bottleneck due to http/1.1?
I would appreciate some hints...
You need to test. It really depends on your app, your users and your servers.
Under HTTP/1.1 you are limited to 6 connections to a domain. So hosting content on a separate domain (e.g. static.example.com) or loading it from a CDN was a way to increase that limit beyond 6. These separate domains are also often cookie-less, which is good for performance and security. And finally, if loading jQuery from code.jquery.com, you might benefit from the user having already downloaded it for another site and so save that download completely (though with the number of library versions and CDNs out there, the chance of a commonly used library already being downloaded and in the browser cache is questionable in my opinion).
However, separate domains require setting up a separate connection. That means a DNS lookup, a TCP connection and usually an HTTPS handshake too. This all takes time, and especially if you're downloading just one asset (e.g. jQuery), it can often eat up any benefits from having the assets hosted on a separate site! This is in fact why browsers limit the connections to 6 - there was a diminishing rate of return in increasing it beyond that. I've questioned the value of sharded domains for a while because of this, and people shouldn't just assume that they will be faster.
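You can see that setup cost for yourself with the Resource Timing API: for the first request to a separate domain, the DNS, TCP and TLS phases all happen before a single byte of the asset arrives. A rough browser-side sketch (the jQuery CDN URL is just an example, and the third party must send a Timing-Allow-Origin header for the detailed timings to be populated):

    // How much of a cross-origin download was just connection setup?
    const [timing] = performance.getEntriesByName(
        "https://code.jquery.com/jquery.min.js"
    ) as PerformanceResourceTiming[];

    if (timing) {
        const dns = timing.domainLookupEnd - timing.domainLookupStart;
        const connect = timing.connectEnd - timing.connectStart; // TCP + TLS
        const download = timing.responseEnd - timing.requestStart;
        console.log({ dns, connect, download, total: timing.duration });
    }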
HTTP/2 aims to remove the need for separate domains (aka sharded domains) by allowing multiplexing over a single connection, thereby effectively removing the limit of 6 "connections" but without the downsides of separate connections. It also allows HTTP header compression, reducing the performance downside of sending large cookies back and forth.
So in that sense I would recommend just serving everything from your local server. Not everyone will be on HTTP/2 of course, but support is incredibly strong, so most users should be.
However, the other benefit of a CDN is that it is usually globally distributed. So a user on the other side of the world can connect to a local CDN server rather than come all the way back to your server. This helps with connection time (as the TCP and HTTPS handshakes travel shorter distances) and content can also be cached there. Though if the CDN has to refer back to the origin server for a lot of content then there is still a lag (though the benefits for the TCP and HTTPS setup are still there).
So in that sense I would advise using a CDN. However, I would say put all the content through this CDN rather than just some of it as you are suggesting, though you are right that HTTP/1.1 could limit the usefulness of that. That's odd though, as most commercial CDNs support HTTP/2, and you also say you have a "CDN server" (rather than a network of servers - plural), so maybe you mean a static domain rather than a true CDN?
Either way it all comes down to testing because, as stated at the beginning of this answer, it really depends on your app, your users and your servers, and there is no one true, definitive answer here.
Hopefully that gives you some idea of the things to consider. If you want to know more (Stack Overflow really isn't the place for some of this, and this answer is already long enough), I've just written a book which spends large parts discussing all of this: https://www.manning.com/books/http2-in-action
We have a large SPA in Backbone and Angular that calls out to a set of Java APIs for a financial system with a large number of users.
One person said:
Switching on http/2.0 will make a massive difference for our users in terms of page load time due to the nature of the protocol.
Another person said:
Browsers like Chrome are actually pretty good even without http/2.0. Switching it on won't make a noticeable difference to the end user.
We made the change and measured static page load times before and after it. We didn't see a difference over 48 hours of data each before and after the change. (Measured both with browser tests and by forwarding page load timings from the browser to our application logs.)
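For context, browser-side collection of that kind of timing data might look something like this in browsers that support Navigation Timing Level 2 (the /log/timing endpoint is made up):

    // Send basic page load timings back to the application logs.
    window.addEventListener("load", () => {
        // Wait a tick so loadEventEnd has been recorded.
        setTimeout(() => {
            const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
            if (!nav) return;
            navigator.sendBeacon("/log/timing", JSON.stringify({
                page: location.pathname,
                ttfb: nav.responseStart - nav.requestStart,
                domContentLoaded: nav.domContentLoadedEventEnd,
                load: nav.loadEventEnd,
                protocol: nav.nextHopProtocol, // "h2" vs "http/1.1"
            }));
        }, 0);
    });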
My question is: Is the improvement from switching on http/2.0 in Cloudfront for an SPA noticeable for the average user of a large site during bootstrap?
Way too vague a question to answer I’m afraid.
Some of the things to consider:
Is your site so optimised with HTTP/1 performance workarounds (e.g. concatenation, spriting, sharding) that HTTP/2 (which looks to remove the need for those) provides no real, noticeable performance benefit?
Is your site so full of crappy JavaScript that HTTP downloads (which HTTP/2 looks to make more efficient) are a tiny and almost unnoticeable part of the performance problem in the grand scheme of things?
Is your site bandwidth-bound (e.g. full of print-quality images) so that bandwidth, rather than HTTP queuing, is the problem?
Is your backend and/or web server so sucky that it takes a long time to generate your pages, so that again the HTTP transfer part is a tiny, almost unnoticeable part of the problem?
Is your site a really small site with just one HTML page and one JavaScript load?
Could your site be further optimised for HTTP/2 (e.g. hosting everything on a single domain, potentially using HTTP/2 Push... etc.) to allow you to get even more performance out of HTTP/2?
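On that last point, one small HTTP/2-era optimisation is to announce critical assets as early as possible with a preload Link header, which browsers use as a preload hint and which some HTTP/2 servers and CDNs can also treat as a push instruction. A hedged sketch using Node with Express (the framework and file names are assumptions, not something from your setup):

    import express from "express";

    const app = express();

    app.get("/", (_req, res) => {
        // Announce critical assets up front. Browsers treat this as a
        // preload hint; some HTTP/2 servers and CDNs also push them.
        res.set("Link", [
            "</css/app.css>; rel=preload; as=style",
            "</js/app.js>; rel=preload; as=script",
        ].join(", "));
        res.sendFile("index.html", { root: "public" });
    });

    app.listen(3000);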
All of these things could impact whether switching to HTTP/2 makes a noticeable difference or not. Google found that a sample of sites got a 27%-60% performance improvement with SPDY (which HTTP/2 is based upon), but it really does depend on the site in question.
Ultimately, HTTP/2 aims to make downloading many assets more efficient, as this is inefficient under HTTP/1 - particularly under high-latency conditions. If you don't have many assets, or downloading them is not a problem, then you may not notice much difference with HTTP/2.
I've written a blog post to help show the problems in HTTP/1 that HTTP/2 looks to address (including analysing a real-world example, Amazon.com), which may help you look at your site for the same potential issues (full disclosure: it's part of a book I'm writing on the subject).
Is storing fonts on your own server and loading them with @font-face slower than loading them from Google's Fonts API? Or does it always depend on the font and vary from situation to situation?
And the same for JavaScript and other similar files: is it faster or slower to load them from CDNs than to store the files on your server and load them locally?
Or are there too many variables involved from situation to situation to generalize to a single answer? I would imagine that it depends on which CDN you're accessing and/or your personal server settings and the size/nature of the files you're loading, etc., but I was just curious if there might be a general rule or strategy for knowing which is faster?
A CDN might be faster, on the basis that it is built with speed in mind (high-performance, tuned web servers, good caching...) and it is usually composed of a network of geographically distributed servers, lowering latency both because they are nearer and because they share the load. Also, they can be placed directly on backbones, which allow for much faster transfer rates than a low-to-mid-priced server will ever achieve.
That said, for a low-traffic website mostly visited from one specific country, in turn near the server location, the difference in load time is irrelevant.
The reason for using Google's or jQuery's CDN is both to save bandwidth on your server (if the respective owner allows you to use theirs, of course) and to be sure you do not miss urgent patches, as they will push fixed versions to the CDN as soon as possible, whereas you would have to be notified, download the new version, and then load it onto your server (although I guess this is not a great issue in modern, sandboxed browsers).
I have recently been working on improving the front end performance of our website and have been employing a number of best practices.
However, I have recently had an example where some of the practices are slightly at odds with each other:
Minimise HTTP requests
In order to "trick" the browser into making more concurrent requests, have some assets served from a different domain
Leverage browser caching
Why?
We used to bundle almost all of our JavaScript into one file to minimise HTTP requests. This included jQuery and jQuery UI.
I thought this was silly, as many users are likely to have jQuery already cached in their browser, so I decided we should remove it from our all.js and instead serve it from Google's CDN. This would save users downloading the code again, and because it's on a different domain it can be downloaded in parallel with other resources from our own domains.
(The waterfall graph showing the concurrent downloading is not reproduced here.)
This has of course raised the number of requests for people without jQuery already cached, which isn't great.
So my question is this:
Is the change a sensible one? Do the benefits of leveraging caching and allowing concurrent requests outweigh a slight increase in the number of requests?
That is a very good question.
You have explained your reasoning well, and those are all good reasons for making this change.
But there are still benefits to both approaches.
Keeping everything combined in one file
Reduces the number of HTTP requests, which reduces the negative effects of round-trip latency on the user's connection.
All libraries/plugins are downloaded at once, and should remain cached for when they are later needed.
Reduces dependency on other services (although Google is going to be quite reliable).
Separate files spread across domains
Increases parallelisation of downloads, which reduces the negative effects of bandwidth shaping on the user's connection. (Note that most browsers don't limit concurrent per-domain requests to 2 anymore, though.)
Increases granularity - separate parts can be downloaded on demand as needed, i.e. if a particular plugin is not needed on the first page hit, it isn't downloaded.
Personally, I'd normally lean a little bit towards the former (reducing HTTP requests by combining them into one big file). I feel like most of my audience is going to be on a fairly high-bandwidth connection and I can reduce latency. Remember to use Google and Yahoo's page speed tools to find other ways of speeding things up.
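One footnote if you do go the Google CDN route: a common way to soften the dependency on a third-party service is to fall back to a local copy when the CDN is unreachable. A minimal sketch (the local path and loading approach are assumptions):

    // In a small inline script placed *after* the CDN <script> tag:
    // if jQuery didn't load (CDN down or blocked), pull in our own copy.
    if (!(window as any).jQuery) {
        const script = document.createElement("script");
        script.src = "/js/jquery.min.js"; // hypothetical local copy
        // Injected scripts load asynchronously, so anything that needs
        // jQuery should wait for this to finish.
        script.onload = () => console.log("Loaded local jQuery fallback");
        document.head.appendChild(script);
    }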
Is there an advantage of some sort (speed- or performance-wise) to embedding your CSS and JS into your web page, as opposed to keeping the code in separate files? I was raised to believe that keeping code in separate files makes things easier to maintain. However, on high-profile websites like Amazon or Google, even Facebook, I see a lot of embedded code. Is there a performance reason they choose to do so, or is it just an old/new way of doing things? I suppose my question is similar to this one: Should I inline CSS & JS in mobile sites to save bandwidth?
But I would like to hear from experts, most notably from people who have worked on high-profile web sites and have done so, if any.
P.S.
Bonus question: the last HTML comment on Amazon web pages is <!-- MEOW -->. Does it mean anything or is it just a funny prank?
There are good reasons to inline resources, but as with most things, it also has its tradeoffs. The simplest case for inlining is where the cost of an HTTP connection is much more than the resource itself, e.g. if you have a 10x10 icon you need to show, a dedicated request for it may not be worth it vs. inlining the data via a data URI.
This is especially true when and if you have many small resources that need to be fetched. Most browsers limit themselves to a maximum of 6 connections per host, so if you have 60 resources which need to be fetched, then you'll be blocked for a significant chunk of time.
To work around these cases we've invented other workarounds: domain sharding to go over the 6-connection limit, and "spriting" to fetch one resource instead of multiple.
If you take a look at mod_pagespeed (Apache module), which does many of these optimizations on the fly for you, the recommended setting we provide is to inline any resource that's below 2 KB. That's a pretty good rule of thumb for today's stack.
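As a rough illustration of that rule of thumb, here is a small Node build-step sketch that inlines images under about 2 KB as data URIs and leaves bigger ones as separate, cacheable requests (the paths are made up and the helper is only a sketch of the idea, not mod_pagespeed's actual behaviour):

    import { readFileSync, statSync } from "fs";
    import { extname } from "path";

    const INLINE_LIMIT = 2 * 1024; // ~2 KB, per the mod_pagespeed guideline

    const mimeTypes: Record<string, string> = {
        ".png": "image/png",
        ".gif": "image/gif",
        ".jpg": "image/jpeg",
        ".svg": "image/svg+xml",
    };

    // Return a data URI for small images, or null when the file is big
    // enough that a separate (cacheable) request is the better trade-off.
    function toDataUri(path: string): string | null {
        const mime = mimeTypes[extname(path).toLowerCase()];
        if (!mime || statSync(path).size > INLINE_LIMIT) return null;
        return `data:${mime};base64,${readFileSync(path).toString("base64")}`;
    }

    // Hypothetical usage in a template or build step:
    const icon = toDataUri("assets/icons/star-10x10.png");
    // -> "data:image/png;base64,iVBOR..." if the icon is under 2 KB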
Once SPDY is more widely deployed, many of these workarounds can be eliminated: no need to do domain sharding, cost of extra requests is much less, etc.
Stoyan did an experiment that you might find interesting: http://www.phpied.com/style-tag-to-inline-style-attrrib/
External CSS/JS files typically get cached on the user's hard drive under that user's browser profile. So unless you change the code frequently, you won't really be doing yourself a favor by putting it inline.
Keeping files external definitely saves you maintenance time, but you can easily read in a JavaScript/CSS file and embed the code in the page you're generating on the server side, though that also means you're making your server do additional work.
As for the MEOW - yeah, that's them trying to be funny, or it's code... for... cat...