I am using Tomcat 8.5.29 and have enabled HTTP/2 support for the site using the configuration below from the server.xml file.
<Connector port="443" protocol="org.apache.coyote.http11.Http11AprProtocol"
           maxThreads="150" SSLEnabled="true"
           compressableMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json"
           compression="on" compressionMinSize="1024">
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
    <SSLHostConfig>
        <Certificate certificateKeyFile="conf/localhost-key.pem"
                     certificateFile="conf/localhost-cert.pem"
                     certificateChainFile="conf/cacert.pem"
                     type="RSA" />
    </SSLHostConfig>
</Connector>
When I compare the page load time for the site over HTTP/1.1 and HTTP/2, the results are not consistent. Sometimes the page takes more time to load and sometimes less time compared to HTTP/1.1.
To measure the page load time I am using HttpWatch.
I am looking for information on:
A) Which tools can be used to measure the performance enhancement from HTTP/2? Ours is not a public website, so we cannot use some of the tools available online.
B) Is there any other configuration needed, apart from enabling HTTP/2 in Tomcat, to get better results?
HTTP/2 aims to address some inefficiencies of loading many resources over HTTP by changing to a binary format with multiplexing.
Under HTTP/1, requesting many resources over a high-latency, long-distance network (as much of the Internet is) means downloading website assets is slower than it needs to be. This is because each HTTP/1.1 connection can only handle one resource at a time and cannot use that connection for another request while it's waiting around for the first to be sent back.
So for your use case I am presuming this is on an intranet and the servers are probably located quite close to you, with high-speed links to them? If so, HTTP/2 is unlikely to give you a huge performance boost to be honest, as the resources will likely be sent back quite quickly anyway. So I am not surprised that you are not seeing improvements for this scenario.
Additionally, downloading multiple assets is only one part of using a website. If the website requires a lot of server-side processing to produce, then the download side (which HTTP/2 should improve) may be such a small part of the load time that even drastic improvements to it might be unnoticeable. Similarly, if the website is slow even after downloading (because it uses a tonne of JavaScript, for example), then that's not going to be fixed by moving to HTTP/2.
To me HTTP/2 makes more sense for serving static resources (images, CSS and JavaScript) than dynamic resources from an application server (Java based or otherwise), so I'm not convinced there is a real pressing need for HTTP/2 on Tomcat and the like. Even if you are using Tomcat to serve static resources, you're probably better off sticking a faster HTTP/2 web server (Apache, Nginx) in front of it, offloading the static resources to that, and only proxying genuinely dynamic content on to Tomcat.
So while HTTP/2 is a great improvement to the protocol (for most cases), it is not a magic fix to make your site 10 times faster. That said, HTTP/2 is the future IMHO, so there is little reason not to move to it (the primary one being lack of HTTP/2 support in many implementations, especially if running older versions of server software, but you've already solved that issue).
Anyway, back to your question: the easiest way I would suggest is to use the developer tools in the browser to see how long it takes to load the site with and without HTTP/2 - that's ultimately what your users are experiencing. If you can do this programmatically (e.g. record the time taken to fully load the page with JavaScript and report that back somehow) to allow for larger-scale analysis, then so much the better. This takes a bit more setup than running something like Apache's ab tool or the like, but those won't truly measure the improvements due to HTTP/2 if they are only downloading the main page and not the resources, and also won't measure the whole load time the user experiences.
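As a minimal sketch of that programmatic approach (assuming a modern browser with Navigation Timing Level 2, and a hypothetical /perf-log endpoint of your own to collect the numbers), a snippet like this on each page reports both the load time and the protocol that was actually negotiated:

window.addEventListener('load', function () {
  // Wait one tick so loadEventEnd has been populated
  setTimeout(function () {
    var nav = performance.getEntriesByType('navigation')[0];
    if (!nav) return;
    var report = {
      page: location.pathname,
      protocol: nav.nextHopProtocol,            // "h2" for HTTP/2, "http/1.1" otherwise
      loadTimeMs: Math.round(nav.loadEventEnd)  // time from navigation start to end of the load event
    };
    // /perf-log is a hypothetical collection endpoint - replace with your own
    navigator.sendBeacon('/perf-log', JSON.stringify(report));
  }, 0);
});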
I have a hosted web server with HTTP/2 (medium fast) and additionally I have space on a fast CDN server with only HTTP/1.1.
Is it recommended to load some resources from the CDN, or should I use only the web server because of HTTP/2?
Could loading too many resources from the CDN be a bottleneck due to HTTP/1.1?
I would be grateful for some hints...
You need to test. It really depends on your app, your users and your servers.
Under HTTP/1.1 you are limited to 6 connections to a domain. So hosting content on a separate domain (e.g. static.example.com) or loading from a CDN was a way to increase that limit beyond 6. These separate domains are also often cookie-less, which is good for performance and security. And finally, if loading jQuery from code.jquery.com, you might benefit from the user already having downloaded it for another site and so save that download completely (though with the number of versions of libraries and CDNs, the chance of a commonly used library already being downloaded and in the browser cache is questionable in my opinion).
However, a separate domain requires setting up a separate connection. That means a DNS lookup, a TCP connection and usually an HTTPS handshake too. This all takes time, and especially if downloading just one asset (e.g. jQuery), those costs can often eat up any benefits from having the asset hosted on a separate site! This is in fact why browsers limit connections to 6 - there was a diminishing rate of return in increasing it beyond that. I've questioned the value of sharded domains for a while because of this, and people shouldn't just assume that they will be faster.
HTTP/2 aims to remove the need for separate (aka sharded) domains by allowing multiplexing over a single connection, effectively removing the limit of 6 "connections" but without the downsides of separate connections. It also allows HTTP header compression, reducing the performance cost of sending large cookies back and forth.
So in that sense I would recommend just serving everything from your local server. Not everyone will be on HTTP/2 of course, but support is incredibly strong, so most users should be.
However, the other benefit of a CDN is that it is usually globally distributed. So a user on the other side of the world can connect to a local CDN server, rather than come all the way back to your server. This helps with connection time (as the TCP and HTTPS handshakes cover shorter distances) and content can also be cached there. Though if the CDN has to refer back to the origin server for a lot of content then there is still a lag (though the benefits for the TCP and HTTPS setup are still there).
So in that sense I would advise using a CDN. However I would say put all the content through this CDN rather than just some of it as you are suggesting, though you are right that HTTP/1.1 could limit the usefulness of that. That's odd though, as most commercial CDNs support HTTP/2, and you also say you have a "CDN server" (rather than a network of servers - plural), so maybe you mean a static domain rather than a true CDN?
Either way it all comes down to testing because, as stated at the beginning of this answer, it really depends on your app, your users and your servers, and there is no one true, definitive answer here.
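If you do test, one rough way to compare is to group the Resource Timing entries by host in the browser console (a sketch; cross-origin timing details are only exposed when the CDN sends a Timing-Allow-Origin header):

var byHost = {};
performance.getEntriesByType('resource').forEach(function (r) {
  var host = new URL(r.name).host;
  (byHost[host] = byHost[host] || []).push(r);
});
Object.keys(byHost).forEach(function (host) {
  var entries = byHost[host];
  var totalMs = entries.reduce(function (sum, r) { return sum + r.duration; }, 0);
  // Downloads overlap, so the summed duration is a rough indicator, not wall-clock time
  console.log(host + ': ' + entries.length + ' requests, ~' + Math.round(totalMs) + ' ms summed');
});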
Hopefully that gives you some idea of the things to consider. If you want to know more, because Stack Overflow really isn't the place for some of this and this answer is already long enough, then I've just written a book which spends large parts discussing all this: https://www.manning.com/books/http2-in-action
We have a large SPA in backbone and Angular that calls out to a set of Java APIs for a financial system with a large number of users.
One person said:
Switching on http/2.0 will make a massive difference for our users in terms of page load time due to the nature of the protocol.
Another person said:
Browsers like Chrome are actually pretty good even without http/2.0. Switching it on won't make a noticeable difference to the end user.
We made the change and measured static page load times before and after. We didn't see a difference over 48 hours of data on each side of the change (measured both by browser tests and by page load timings forwarded from the browser to our application logs).
My question is: Is the improvement from switching on http/2.0 in Cloudfront for an SPA noticeable for the average user of a large site during bootstrap?
Way too vague a question to answer I’m afraid.
Some of the things to consider:
Is your site so optimised with HTTP/1 performance workarounds (e.g. concatenation, spriting, sharding) that HTTP/2 (which removes the need for those) provides no real noticeable performance benefit?
Is your site so full of crappy JavaScript that HTTP downloads (which HTTP/2 looks to make more efficient) are a tiny and almost unnoticeable part of the performance problem in the grand scale of things?
Is your site bandwidth bound (e.g. full of print quality images) so that bandwidth rather than HTTP queuing is the problem?
Is your backend and/or web server so sucky that it takes a long time to generate your pages so again the HTTP transfer part is a tiny, almost unnoticeable part of the problem?
Is your site a really small site with just one HTML page and one JavaScript load?
Could your site be more optimised for HTTP/2 (e.g. hosting everything on a single domain, potentially using HTTP/2 Push... etc.) to allow you to get more out of HTTP/2 than you currently do?
All of these things could impact whether switching to HTTP/2 makes a noticeable difference or not. Google found that a sample of sites got a 27%-60% performance improvement from SPDY (which HTTP/2 is based upon), but it really does depend on the site in question.
Ultimately, HTTP/2 aims to make downloading many assets more efficient, as this is inefficient under HTTP/1 - particularly under high-latency conditions. If you don't have many assets, or downloading them is not a problem, then you may not notice much difference with HTTP/2.
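As a quick, rough check of how asset-heavy a page actually is, you can run a sketch like this in the browser console (transferSize is 0 for cached resources and for cross-origin resources without Timing-Allow-Origin, so treat the byte count as a lower bound):

var resources = performance.getEntriesByType('resource');
var bytes = resources.reduce(function (sum, r) { return sum + (r.transferSize || 0); }, 0);
var protocols = resources
  .map(function (r) { return r.nextHopProtocol; })
  .filter(function (p, i, arr) { return p && arr.indexOf(p) === i; });
console.log(resources.length + ' requests, ~' + Math.round(bytes / 1024) + ' KB transferred, protocols: ' + protocols.join(', '));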
I’ve a blog post to help show the problems in HTTP/1 that HTTP/2 looks to address (including analysing a real world example - Amazon.com) which may help you look at your site for the same potential issues (full disclosure it’s part of a book I’m writing on the subject).
Our site is considering making the switch to http2.
My understanding is that http2 renders optimization techniques like file concatenation obsolete, since a server using http2 just sends one request.
Instead, the advice I am seeing is that it's better to keep file sizes smaller so that they are more likely to be cached by a browser.
It probably depends on the size of a website, but how small should a website's files be if its using http2 and wants to focus on caching?
In our case, our many individual js and css files fall in the 1kb to 180kb range. jQuery and Bootstrap might be more. Cumulatively, a fresh download of a page on our site is usually less than 900 kb.
So I have two questions:
Are these file sizes small enough to be cached by browsers?
If they are small enough to be cached, is it good to concatenate files anyway for users whose browsers don't support http2?
Would it hurt to have larger file sizes in this case AND use HTTP2? This way, it would benefit users running either protocol because a site could be optimized for both http and http2.
Let's clarify a few things:
My understanding is that http2 renders optimization techniques like file concatenation obsolete, since a server using http2 just sends one request.
HTTP/2 renders optimisation techniques like file concatenation somewhat obsolete, since HTTP/2 allows many files to download in parallel across the same connection. Previously, in HTTP/1.1, the browser could request a file and then had to wait until that file was fully downloaded before it could request the next file. This led to workarounds like file concatenation (to reduce the number of files required) and multiple connections (a hack to allow downloads in parallel).
However, there's a counter-argument that there are still overheads with multiple files, including requesting them, caching them, reading them from cache... etc. This is much reduced in HTTP/2 but not gone completely. Additionally, gzipping text files works better on one larger file than on lots of smaller files gzipped separately. Personally, however, I think the downsides outweigh these concerns, and I think concatenation will die out once HTTP/2 is ubiquitous.
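To illustrate the gzip point, here is a small Node.js sketch (the file contents are synthetic stand-ins): one concatenated input usually compresses a little better than the same text compressed as separate files, because the compression dictionary and header overhead are shared:

var zlib = require('zlib');

// Hypothetical stand-ins for two text assets
var css = 'body { margin: 0; padding: 0; font-family: sans-serif; }\n'.repeat(200);
var js = 'function noop() { return null; } // placeholder\n'.repeat(200);

var separate = zlib.gzipSync(css).length + zlib.gzipSync(js).length;
var combined = zlib.gzipSync(css + js).length;

// combined is typically the smaller of the two
console.log({ separate: separate, combined: combined });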
Instead, the advice I am seeing is that it's better to keep file sizes smaller so that they are more likely to be cached by a browser.
It probably depends on the size of a website, but how small should a website's files be if its using http2 and wants to focus on caching?
The file size has no bearing on whether it will be cached or not (unless we are talking about truly massive files, bigger than the cache itself). The reason splitting files into smaller chunks is better for caching is that if you make any changes, any file which has not been touched can still be served from the cache. If you have all your JavaScript (for example) in one big .js file and you change one line of code, then the whole file needs to be downloaded again - even if it was already in the cache.
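To make that concrete, here is a small Node.js sketch (with hypothetical file contents) showing why a one-line edit invalidates a whole concatenated bundle but only one of the split files:

var crypto = require('crypto');

function fingerprint(content) {
  // The kind of content hash a build tool (or, effectively, an ETag) is based on
  return crypto.createHash('sha256').update(content).digest('hex').slice(0, 8);
}

var fileA = 'console.log("feature A");';
var fileB = 'console.log("feature B");';
var fileBEdited = 'console.log("feature B, edited");'; // a one-line change

console.log('bundle before:', fingerprint(fileA + fileB));
console.log('bundle after :', fingerprint(fileA + fileBEdited)); // whole bundle changes -> full re-download
console.log('fileA        :', fingerprint(fileA));               // unchanged -> still served from cache
console.log('fileB after  :', fingerprint(fileBEdited));         // only this file is re-downloaded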
Similarly, if you have an image sprite map, then that's great for reducing separate image downloads in HTTP/1.1, but it requires the whole sprite file to be downloaded again if you ever need to edit it, for example to add one extra image. Not to mention that the whole thing is downloaded even for pages which use just one of the images in the sprite.
However, saying all that, there is a train of thought that says the benefit of long-term caching is overstated. See this article, and in particular the section on HTTP caching, which goes to show that most people's browser cache is smaller than you think, so it's unlikely your resources will be cached for very long. That's not to say caching is not important - but it's more useful for browsing around within a session than for the long term. So each visit to your site will likely download all your files again anyway - unless the visitor is a very frequent one, has a very big cache, or doesn't surf the web much.
is it good to concatenate files anyway for users whose browsers don't support http2?
Possibly. However, other than on Android, HTTP/2 browser support is actually very good so it's likely most of your visitors are already HTTP/2 enabled.
Saying that, there are no extra downsides to concatenating files under HTTP/2 that weren't there already under HTTP/1.1. OK, it could be argued that a number of small files could be downloaded in parallel over HTTP/2, whereas a larger file needs to be downloaded as one request, but I don't buy that that slows it down much. I have no proof of this, but gut feel suggests the data still needs to be sent, so either you have a bandwidth problem or you don't. Additionally, the overhead of requesting many resources, although much reduced in HTTP/2, is still there. Latency is still the biggest problem for most users and sites - not bandwidth. Unless your resources are truly huge, I doubt you'd notice the difference between downloading 1 big resource in one go, or the same data split into 10 little files downloaded in parallel in HTTP/2 (though you would in HTTP/1.1). Not to mention the gzipping issues discussed above.
So, in my opinion, no harm to keep concatenating for a little while longer. At some point you'll need to make the call of whether the downsides outweigh the benefits given your user profile.
Would it hurt to have larger file sizes in this case AND use HTTP2? This way, it would benefit users running either protocol because a site could be optimized for both http and http2.
Absolutely wouldn't hurt at all. As mentioned above, there are (basically) no extra downsides to concatenating files under HTTP/2 that weren't there already under HTTP/1.1. It's just not that necessary under HTTP/2 anymore, and it has downsides (it potentially reduces cache use, requires a build step, and makes debugging more difficult as the deployed code isn't the same as the source code... etc.).
Use HTTP/2 and you'll still see big benefits for any site - except the simplest sites, which will likely see no improvement but also no negatives. And, as older browsers can stick with HTTP/1.1, there are no downsides for them. When, or if, you decide to stop implementing HTTP/1.1 performance tweaks like concatenation is a separate decision.
In fact the only reason not to use HTTP/2 is that implementations are still fairly bleeding edge, so you might not be comfortable running your production website on it just yet.
**** Edit August 2016 ****
This post from an image-heavy, bandwidth-bound site has recently caused some interest in the HTTP/2 community as one of the first documented examples of HTTP/2 actually being slower than HTTP/1.1. This highlights the fact that HTTP/2 technology, and the understanding of it, is still new and will require some tweaking for some sites. There is no such thing as a free lunch, it seems! Well worth a read, though bear in mind that this is an extreme example and most sites are far more impacted, performance-wise, by latency issues and connection limitations under HTTP/1.1 than by bandwidth issues.
We're currently doing optimizations to our web project, and our lead told us to push the use of CDNs for external libraries as opposed to including them in a compile+compress process and shipping them from a cache-enabled nginx setup.
His assumption is that if the user visits example.com, which uses a CDN'ed version of jQuery, the jQuery is cached at that point. If the user then happens to visit example2.com, which happens to use the same CDN'ed jQuery, the jQuery will be loaded from cache instead of over the network.
So my question is: Do domains actually share their cache?
I argued that even if it is possible that the browser shares the cache, the problem is that we are running on the assumption that the previous sites use the exact same CDN'ed file from the exact same CDN. What are the chances of running into a user who has browsed a site using the same CDN'ed file? He said to use the largest CDN to increase the chances.
So the follow-up question would be: If the browser does share cache, is it worth the hassle to optimize based on his assumption?
I have looked up topics about CDNs and I have found nothing about this "shared domain cache" or CDNs being used this way.
Well, your lead is right: this is basic HTTP.
All you are doing is indicating to the client where it can find the file.
The client then handles sending a request to the CDN in compliance with their caching rules.
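If you want to see whether a CDN-hosted library actually came out of the browser cache on a given page load, here is a rough console sketch (only reliable when the CDN sends a Timing-Allow-Origin header, which the big public CDNs generally do):

var entry = performance.getEntriesByType('resource').find(function (r) {
  return r.name.indexOf('jquery') !== -1; // hypothetical filter - adjust to the library you care about
});
if (entry) {
  var fromCache = entry.transferSize === 0 && entry.decodedBodySize > 0;
  console.log(entry.name, fromCache ? 'served from the browser cache' : 'fetched over the network');
}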
But you shouldn't over-use CDNs for libraries either; keep in mind that if you need a specific version of a library, especially an older one, you are unlikely to get many cache hits because of version fragmentation.
For widely used and heavy libraries like jQuery, using the latest version is recommended.
If you can take them all from the same CDN (e.g. Google's), all the better, especially as HTTP/2 is coming.
Additionally they save you bandwidth, which can amount to a lot when you have high loads of traffic, and can reduce the load time for users far from your server (Google's is great for this).
Our company runs a website which currently supports only http traffic.
We plan to support https traffic too as some of the customers who link to our pages want us to support https traffic.
Our website gets moderate amount of traffic, but is expected to increase over time.
So my question is this:
Is it a good idea to make our website HTTPS only? (redirect all HTTP traffic to HTTPS)
Will this bring down the website's performance?
Has anyone done any sort of measurement?
PS: I am a developer who also doubles up as an Apache admin.
Yes, it will impact performance, but it's usually not too bad compared to running all the DB queries that go into the typical dynamically generated page.
Of course the real answer is: don't guess, benchmark it. Try it both ways and see the difference. You can use tools like siege and ab to simulate traffic.
Also, I think you may have more luck with this question over at http://www.serverfault.com/
I wouldn't worry about the load on the server; unless you are serving high volumes of static content, the encryption itself won't create much of a burden, in my experience.
However, using SSL dramatically slows down web sites by creating a lot more latency in connection setup.
An encrypted session requires about* three times as much time to set up as an unencrypted one, and the exact time depends on the latency.
Even on low latency connections, it is noticeable to the end user, but on higher latency (e.g. different continents, especially Australasia where latency to America/Europe is quite high) it makes a dramatic difference and will severely impact the user experience.
There are things you can do to mitigate it, such as ensuring that keep-alives are on (But don't turn them on without understanding exactly what the impact is), minimising the number of requests and maximising the use of browser cache.
Using HTTPS also affects browser behaviour in some cases. Certain optimisations tend to get turned off for security reasons, and some web browsers don't store objects loaded over HTTPS in the disc cache, which means they'll need to get them again in a later session, further impacting the user experience.
* An estimate based on some informal measurement
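If you want to see what the handshake actually costs your own users rather than estimating it, here is a sketch using the Navigation Timing Level 2 entry for the page itself (secureConnectionStart is 0 when the connection was not encrypted or was reused):

var nav = performance.getEntriesByType('navigation')[0];
if (nav && nav.connectEnd > 0) {
  var tcpMs = (nav.secureConnectionStart || nav.connectEnd) - nav.connectStart;
  var tlsMs = nav.secureConnectionStart ? nav.connectEnd - nav.secureConnectionStart : 0;
  console.log('TCP connect: ~' + Math.round(tcpMs) + ' ms, TLS handshake: ~' + Math.round(tlsMs) + ' ms');
}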
Is it a good idea to make our website HTTPS only? (redirect all HTTP traffic to HTTPS) Will this bring down the website's performance?
I'm not sure if you really mean all HTTP traffic or just page traffic. A lot of sites unnecessarily encrypt images, JavaScript and a bunch of other content that doesn't need to be hidden. This kind of content comprises most of the data transferred in a request, so if you do feel that HTTPS is taking too much out of the system, you can recommend that the programmers separate content that needs to be secured from the content that does not.
Most web servers, unless severely underpowered, use only a small fraction of their CPU power for serving up content. Most production servers I've seen are under 10%, even when serving some SSL traffic. I think it would be best to see where your current CPU usage is at, and then do some of your own benchmarking to see how much extra CPU is used by an SSL request. I would guess it isn't that much.
No, it is not a good idea to make any website HTTPS only. Page loading might be a little slower, because your server has to perform an unnecessary redirect for each HTTP page request. A better idea is to serve over HTTPS only those pages that may contain secure/personal/sensitive information about users or the organization. Wherever user information passes through web pages, you can use HTTPS. Pages whose information can be shown to the whole world can normally use HTTP. Finally, it comes down to your requirements: if all pages contain secure information, you may make the website HTTPS only.