Are there any significant performance benefits with HTTP/2 multiplexing as compared with HTTP/2 Server Push? - http2

HTTP/2 multiplexing uses the same TCP connection, thereby removing connection setup time to the same host.
But with HTTP/2 Server Push, is there any significant performance benefit beyond saving the round-trip time that HTTP/2 multiplexing will still take when requesting each resource?

I gave a presentation about this, which you can find here.
In particular, the demo (starting at 36:37) shows the benefits that you can have with multiplexing alone, and then by adding HTTP/2 Push.
Spoiler: the combination of HTTP/2 multiplexing and Push yields astonishingly better results than HTTP/1.1.
Then again, every case is different, so you have to actually measure your case.
But the potential of HTTP/2 to yield better performance than HTTP/1.1 is really large, and many (most?) cases will benefit from this.

I'm not sure what exactly you're asking here, or if it's a good fit for Stack Overflow, but I will attempt to answer nonetheless. If this is not the answer you are looking for, then please rephrase the question so we can understand what exactly it is you are looking for.
You are right in that HTTP/2 uses multiplexing, which does negate the need for multiple connections (and the time and resources needed to set them up and manage them). However, it's much more than that: multiplexing is not limited the way connections are (browsers will typically limit connections to 4-6 per host), and it also allows "similar" hosts (same IP and same certificate but different hostname) to share connections. Basically it solves the queuing of resources that HTTP/1's request/response model imposes, and it reduces the need for the limited multiple connections that HTTP/1 requires as a workaround - which in turn reduces the need for other workarounds like sharding, sprite files, concatenation, etc.
And yes, HTTP/2 server push saves one round trip. So when you request a webpage, the server sends both the HTML and the CSS needed to draw the page, since it knows you will need the CSS: it would be pointless to send just the HTML, wait for your web browser to receive it, parse it, see that it needs CSS, request the CSS file and then wait for it to download.
I'm not sure if you're implying that round-trip time is so low that there is little to gain from HTTP/2 server push, because HTTP/2 multiplexing means there is no longer a delay in requesting a file? If so, that is not the case - there are significant gains to be made in pushing resources, particularly blocking resources like CSS, which the browser will wait for before drawing a single thing on screen. While multiplexing reduces the delay in sending a request, it does not reduce the latency of the request travelling to the server, nor of the server responding to it and sending it back. While these sound small, they are noticeable and make a website feel slow.
So yes, at present, the primary gain for HTTP/2 Server Push is in reducing that round trip time (basically to zero for key resources).
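For concreteness, here is a minimal sketch of what initiating a push can look like on the server side, using Go's standard net/http package (the paths, the inline CSS and the cert/key files are placeholders; whichever server you use will have its own way of triggering a push):

```go
package main

import (
	"log"
	"net/http"
)

func home(w http.ResponseWriter, r *http.Request) {
	// Pusher is only available when the client connected over HTTP/2.
	if pusher, ok := w.(http.Pusher); ok {
		// Push the blocking CSS before sending the HTML, saving the round
		// trip the browser would otherwise spend discovering and requesting it.
		if err := pusher.Push("/static/style.css", nil); err != nil {
			log.Printf("push failed: %v", err)
		}
	}
	w.Header().Set("Content-Type", "text/html")
	w.Write([]byte(`<html><head><link rel="stylesheet" href="/static/style.css"></head><body>Hello</body></html>`))
}

func main() {
	http.HandleFunc("/", home)
	http.HandleFunc("/static/style.css", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/css")
		w.Write([]byte("body { font-family: sans-serif }"))
	})
	// Go's stdlib only speaks HTTP/2 (and therefore Push) over TLS.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```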
However, we are at the infancy of this, and there are other potential uses, for performance or other reasons. For example, you could use this as a way of prioritising content, so an important image could be pushed early when, without this, a browser would likely request CSS and JavaScript first and leave images until later. Server Push could also negate the need for inline CSS (which bloats pages with copies of style sheets and may require JavaScript to then load the proper CSS file) - another HTTP/1.1 workaround for performance. I think it will be very interesting to watch what happens with HTTP/2 Server Push over the coming years.
That said, there are still some significant challenges with HTTP/2 server push. Most importantly: how do you prevent wasting bandwidth by pushing resources that the browser already has cached? A cache digest HTTP header will likely be added for this, but it is still under discussion. That leads to the question of how best to implement HTTP/2 Server Push - for web browsers, web servers and web developers. The HTTP/2 spec is a bit vague on how it should be implemented, which leaves different web servers in particular providing different methods of signalling to the server that a resource should be pushed.
As I say, I think this is one of the parts of HTTP/2 that could lead to some very interesting applications. We live in interesting times...

Related

What is the overhead in nginx to proxy an HTTP/2 request to an HTTP/1.1 request?

I have an Nginx proxy server. When an HTTP/2 request comes to the server and does not find anything in cache, the server makes an outbound request to the origin server using HTTP/1.1. Is there a performance degradation on the server when it converts from one version of the protocol to another? How does this compare to HTTP/1.1 to Nginx and HTTP/1.1 to the origin server? Is there a way to measure the overhead?
Strictly speaking there is a performance degradation, since one protocol is binary and the other textual. So the proxy must convert between them; that takes resources and time, so you can expect some degradation by default.
In general, however, it can be much more complicated. Say your proxy is used over a slow mobile connection: who cares about a bit of conversion overhead if your app gains a huge boost from serving HTTP/2 to that client? Or maybe your proxy was already applying gzip compression for HTTP/1.1, so the speed gain is not that big - but then, maybe the conversion overhead is not that big either?
Can you measure it? Perhaps. The question is: what for? I would measure something as close to the real case as possible, and I would automate that measurement to see where real performance stands.
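As a rough sketch of what such an automated measurement might look like, the snippet below times many requests through the proxy and compares cache hits (served directly) with cache misses (fetched from the origin over HTTP/1.1). The proxy URL and the X-Cache header are assumptions you would adapt to your nginx setup, and note that a miss also includes the origin fetch itself, so the figure is an upper bound on the conversion cost:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig:   &tls.Config{InsecureSkipVerify: true}, // test rig only
		ForceAttemptHTTP2: true,                                  // talk HTTP/2 to the proxy
	}}
	var hitTotal, missTotal time.Duration
	var hits, misses int
	for i := 0; i < 100; i++ {
		start := time.Now()
		resp, err := client.Get("https://proxy.example.com/asset.js") // hypothetical URL
		if err != nil {
			continue
		}
		resp.Body.Close()
		elapsed := time.Since(start)
		if resp.Header.Get("X-Cache") == "HIT" { // hypothetical cache-status header
			hitTotal += elapsed
			hits++
		} else {
			missTotal += elapsed
			misses++
		}
	}
	if hits > 0 && misses > 0 {
		avgHit := hitTotal / time.Duration(hits)
		avgMiss := missTotal / time.Duration(misses)
		fmt.Printf("avg hit %v, avg miss %v, miss penalty %v\n", avgHit, avgMiss, avgMiss-avgHit)
	}
}
```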
My only worry in your case is the CPU of the proxy - just measure it to see changes over time, and set up notifications, like "if CPU is over 80% for longer than 5 minutes".
Other than all of that: HTTP/2 brings two-way communication, as well as push. My assumption is that this is not your case, since you are comparing 1.1 and 2. For me, I would go with HTTP/2 everywhere; unfortunately nginx does not support HTTP/2 on the backend side. Fingers crossed to see that soon!
Yes, going from HTTP/2 to HTTP/1.1 degrades performance, primarily due to protocol-imposed transport conversions. For example, you lose the following transport optimizations:
Single connection
Request/response multiplexing
Header compression
Additionally, as Michal mentioned, HTTP/1.1 messages are textual while HTTP/2 messages are binary.
HTTP/2 multiplexes requests and responses over a single connection. However, HTTP/1.1 only affords persistent connections and request/response pipelining, which is not even comparable. For example, pipelining forces a FIFO order of message exchanges, which causes blocking.
To achieve similar throughput levels, the proxy will have to open a connection pool to each backend. Those pools could be large or small - but consider the resource allocations, TCP handshakes, TLS handshakes, etc., per connection, and you start to get an idea of how much overhead we're talking about.
Measure the difference between "throughput in" on cache hits and "throughput out" on cache misses, e.g. "protocol conversion throughput penalty" is ~23 tps. (You should also know your average cache miss penalty in terms of time.)
Key metrics
Throughput in versus throughput out
Average cache miss penalty
Cache hit and cache miss ratios
Unless your cache miss ratio is high, I wouldn't worry about this.
I don't think there is a performance degradation. One way to measure the impact (since I can't test for you, you will have to do it yourself) is to use AJAX: send an HTTP/1.1 request and measure the response time, then compare it with the time it takes to send HTTP/2 requests. Do it multiple times.
That'll help you.
But beware, there may be a potential security problem.
That is, there would be no point in even installing an SSL/TLS certificate, because the info that the NGINX server sends onward over plain HTTP/1.1 would then be open to hackers. Probably.

Is this a correct scenario to use WebSocket?

I have a browser plugin which will be installed on 40,000 desktops.
This plugin will connect to a backend configuration file available via https, e.g. http://somesite/config_file.js.
The plugin is configured to poll this backend resource once/day.
But there is only one backend server. So if 40,000 endpoints start polling together the server might crash.
I could randomize the polling frequency from the desktop plugins, but randomization still does not guarantee that the server will not be overloaded.
Would using WebSocket in this scenario solve the scalability issue?
Polling once a day is very little.
I don't see any upside for Websockets unless you switch to Push and have more notifications.
However, staggering the polling does make a lot of sense, since syncing requests for the same time is like writing a DoS attack against your own server.
Staggering doesn't necessarily have to be random and IMHO, it probably shouldn't.
You could start with a fixed time and add a second per client ID, allowing for ~86K connections in 24 hours which should be easy for any server to handle.
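For illustration, a deterministic stagger can be as simple as hashing each client's ID into a fixed second of the day - a sketch, with the ID format as a placeholder:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"time"
)

// pollOffset derives a stable per-client offset within the day, so each of
// the 40K clients polls at its own fixed second out of the 86,400 available.
func pollOffset(clientID string) time.Duration {
	h := fnv.New32a()
	h.Write([]byte(clientID))
	return time.Duration(h.Sum32()%86400) * time.Second
}

func main() {
	for _, id := range []string{"desktop-0001", "desktop-0002", "desktop-0003"} {
		fmt.Printf("%s polls at midnight + %v\n", id, pollOffset(id))
	}
}
```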
As a side note, 40K concurrent connections might not be as hard to achieve as you imagine.
EDIT (relating to the comments)
Websockets vs. Server Sent Events:
IMHO, when pushing data (vs. polling), I would prefer Websockets over Server Sent Events (SSE).
Websockets have a few advantages, such as client side communication which allows clients to ping the server and confirm that the connection is still alive.
The Specific Use-Case:
From the description in the question and the comments it seems that you're using browser clients with a custom plugin and that the updates you wish to install daily might require the browser to be active.
This raises different questions that affect the implementation (are the client browsers open all day? do you have any control over the client browsers and their environment? can you guarantee installation while the browser is closed?).
...
IMHO, you might consider having the client plugins test for an update each morning as they load for the first time during that day (first access).
People arrive at work at different times and open their browsers for the first time on different schedules, so the 40K requests you're expecting will be naturally scattered across that timeline (probably a 20-30 minute timespan).
This approach makes sure that the browsers and computers are actually open (making the update possible) and that the update requests are staggered over a period of time (about 33.3 requests per second, if my assumption is correct).
If you're serving a pre-written static configuration file (perhaps updated by the server daily), avoiding dynamic content and minimizing any database calls, then 33 req/sec should be very easy to manage.
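A minimal sketch of that "first access of the day" check, with the stamp-file location as a placeholder:

```go
package main

import (
	"net/http"
	"os"
	"time"
)

const stampFile = "last_config_check.txt" // placeholder location

// checkedToday reports whether the config was already fetched today.
func checkedToday() bool {
	b, err := os.ReadFile(stampFile)
	return err == nil && string(b) == time.Now().Format("2006-01-02")
}

// maybeFetchConfig fetches the config only on the first access of the day.
func maybeFetchConfig() error {
	if checkedToday() {
		return nil
	}
	resp, err := http.Get("https://somesite/config_file.js")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// ... parse and apply the new configuration here ...
	return os.WriteFile(stampFile, []byte(time.Now().Format("2006-01-02")), 0o644)
}

func main() {
	if err := maybeFetchConfig(); err != nil {
		panic(err)
	}
}
```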

Is it necessary to cache bust in HTTP/2?

In HTTP/1, to avoid extra network requests that would determine if resources should remain cached, we would set a high max-age or Expires value on static assets, and give them a unique URL for each revision. But in HTTP/2, requests are cheap, so can we get by without cache-busting, and just rely on ETags, last-modified, et al?
The only advantage I can see with continuing to bust the cache (besides dually serving HTTP/1 and HTTP/2 clients) would be to save bandwidth checking if resources are out-of-date. And even that is probably going to be insignificant with HPACK. So am I missing something, or can I stop cache-busting now?
The "necessary" part depends on how extreme do you feel about performance. In short, if you can live with three or four round-trips cache busting is not required. Otherwise cache busting is still the only way to remove those.
Here are some arguments related to HTTP/2 vs HTTP/1.1, the issue of latency, and the use of HTTP/2 Push.
HTTP/2 requests are not instantaneous
HTTP/2 requests are cheaper than HTTP/1.1 requests, but not by that much. In HTTP/1.1, once the browser opens its six to eight TCP connections to the server, it has six to eight channels on which to do revalidations. In some scenarios of high TCP packet loss and high latency - especially at the beginning of connections, where TCP slow start is king - the many TCP sockets of HTTP/1.1 work better than a single HTTP/2 TCP connection. HTTP/2 is good, but not a silver bullet.
HTTP/2 connections still have network latency. We have been averaging round-trip time (RTT) for visitors to our site (it can be measured using HTTP/2 PING), and because not everybody is on the same block as our server, our mean RTT is between 200 and 280 ms. A 304 revalidation costs 1 RTT. On a site that doesn't use asset concatenation, each new level of the asset tree costs a further RTT.
HTTP/2 Push can save you as many RTTs as you want while working decently with the cache. But there are some issues, keep reading!
HTTP/2 Push works best with cache busting
The ideal scenario is that the server doesn't push fresh resources, but it pushes everything that has changed since the client's last visit.
If a browser considers a resource fresh (e.g. because of max-age), it rejects or doesn't use any push for that resource. That makes it impossible to refresh an asset that the browser considers fresh via HTTP/2 Push.
Pushing 304 revalidations doesn't work, due to a widespread bug in browsers - and those would be required with a small max-age.
Therefore, the only way of keeping RTTs to a minimum - not pushing anything that the browser already has, while still being able to push a new version of an asset - is to use cache busting, i.e., a new name or query parameter for each new version of an asset.
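For illustration, cache busting usually boils down to deriving the asset's name from its content, along these lines (paths are placeholders; a build tool would normally do this):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"os"
)

// bustedName returns a content-addressed URL for an asset: new content
// yields a new name, so every version can be cached (or pushed) forever.
func bustedName(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(data)
	return fmt.Sprintf("/assets/app.%x.css", sum[:8]), nil
}

func main() {
	name, err := bustedName("app.css")
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // e.g. /assets/app.3f2a9c41d0b7e812.css
}
```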
See also
Url query parameters are still needed to update assets at clients
Interactions with the browser's cache

Periodic Ajax POST calls versus COMET/Websocket Push

On a site like Trello.com, I noticed in firebug console that it makes frequent and periodic Ajax POST calls to its server to retrieve new data from the database and update the dom as and when something new is available.
On the other hand, something like Facebook notifications seem to be implementing a COMET push mechanism.
What are the advantages and disadvantages of each approach? Specifically, my question is why Trello.com uses a "pull" mechanism, as I have always thought such an approach (especially since it pings its server so frequently) is not a scalable solution when more and more users sign up to use its services.
Short Answer to Your Question
Your gut instinct is correct. Long-polling (aka comet) will be more efficient than straight-up polling. And when available, WebSockets will be more efficient than long-polling. So why do some companies use plain "pull" polling? Quite simply, they are out of date and need to put some time into updating their code base!
Comparing Polling, Long-Polling (comet) and WebSockets
With traditional polling you will make the same request repeatedly, often parsing the response as JSON or stuffing the results into a DOM container as content. The frequency of this polling is not in any way tied to the frequency of the data updates. For example you could choose to poll every 3 seconds for new data, but maybe the data stays the same for 30 seconds at a time? In that case you are wasting HTTP requests, bandwidth, and server resources to process many totally useless HTTP requests (the 9 repeats of the same data before anything has actually changed).
With long polling (aka comet), we significantly reduce the waste. When your request goes out for the updated data, the server accepts the request but doesn't respond if there are no new changes; instead it holds the request open for 10, 20, 30, or 60 seconds, or until some new data is ready and it can respond. Eventually the request will either time out or the server will respond with an update. The idea here is that you won't be repeating the same data so often, like in the 3-second polling above, but you still get very fast notification of new data, as there is likely already an open request just waiting for the server to respond to.
You'll notice that long polling reduced the waste considerably, but there will still be the chance for some waste. 30-60 seconds is a common timeout period for long polling as many routers and gateways will shutdown hanging connections beyond that time anyway. So what if your data is actually changed every 15 minutes? Polling every 3 seconds would be horribly inefficient, but long-polling with timeouts at 60 seconds would still have some wasted round trips to the server.
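To make the mechanism concrete, here is a bare-bones long-polling handler sketched in Go (the updates channel and the demo producer stand in for whatever actually generates new data):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// updates is fed by whatever part of the app produces new data.
var updates = make(chan string)

func longPoll(w http.ResponseWriter, r *http.Request) {
	select {
	case msg := <-updates:
		fmt.Fprint(w, msg) // respond the moment something changes
	case <-time.After(30 * time.Second):
		w.WriteHeader(http.StatusNoContent) // timeout: the client simply re-polls
	case <-r.Context().Done():
		// the client gave up; drop the request
	}
}

func main() {
	http.HandleFunc("/poll", longPoll)
	go func() { // demo producer: one update per minute
		for {
			time.Sleep(time.Minute)
			updates <- time.Now().Format(time.RFC3339)
		}
	}()
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```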
WebSockets are the next technological advancement: they allow a browser to open a connection with the server and keep it open for as long as it wants, delivering multiple messages or chunks of data over the same open websocket. The server can then send down updates exactly when new data is ready. The websocket connection is already established and waiting for data, so it is quick and efficient.
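Here is the websocket equivalent of the long-polling sketch above, using the third-party github.com/gorilla/websocket package (the once-a-second message is a stand-in for real data becoming ready):

```go
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

func ws(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil) // upgrade the HTTP request to a websocket
	if err != nil {
		return
	}
	defer conn.Close()
	for {
		// Push a message whenever new data is ready (here: once a second).
		err := conn.WriteMessage(websocket.TextMessage, []byte(time.Now().String()))
		if err != nil {
			return // client disconnected
		}
		time.Sleep(time.Second)
	}
}

func main() {
	http.HandleFunc("/ws", ws)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```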
Reality Check
The problem is that Websockets is still in its infancy. Only the very latest generation of browsers support it, if at all. The spec hasn't been fully ratified as of this posting, so implementations can vary from browser to browser. And of course your visitors may be using browsers a few years old. So unless you can control what browsers your visitors are using (say corporate intranet where IT can dictate the software on the workstations) you'll need a mechanism to abstract away this transport layer so that your code can use the best technique available for that particular visitor's browser.
There are other benefits to having an abstracted communications layer. For example what if you had 3 grid controls on your page all pull polling every 3 seconds, see the mess this would be? Now rolling your own long-polling implementation can clean this up some, but it would be even cooler if you aggregated the updates for all 3 of these tables into one long-polling request. That will again cut down on waste. If you have a small project, again you could roll your own, but there is a standard Bayeux Protocol that many server push implementations follow. The Bayeux protocol automatically aggregates messages for delivery and then segregates messages out by "channel" (an arbitrary path-like string you as a developer use to direct your messages). Clients can listen on channels, and you can publish data on channels, the messages will get to all clients listening on the channel(s) you published to.
The number of server side server push tool kits available is growing quite fast these days as Push technology is becoming the next big thing. There are probably 20 or more working implementations of server push out there. Do your own search for "{Your favorite platform} comet implementation" as it will continue to change every few months I'm sure (and has been covered on stackoverflow before).

HTTP vs HTTPS performance

Are there any major differences in performance between http and https? I seem to recall reading that HTTPS can be a fifth as fast as HTTP. Is this valid with the current generation webservers/browsers? If so, are there any whitepapers to support it?
There's a very simple answer to this: Profile the performance of your web server to see what the performance penalty is for your particular situation. There are several tools out there to compare the performance of an HTTP vs HTTPS server (JMeter and Visual Studio come to mind) and they are quite easy to use.
No one can give you a meaningful answer without some information about the nature of your web site, hardware, software, and network configuration.
As others have said, there will be some level of overhead due to encryption, but it is highly dependent on:
Hardware
Server software
Ratio of dynamic vs static content
Client distance to server
Typical session length
Caching behavior of clients (my personal favorite)
Etc.
In my experience, servers that are heavy on dynamic content tend to be impacted less by HTTPS because the time spent encrypting (SSL-overhead) is insignificant compared to content generation time.
Servers that are heavy on serving a fairly small set of static pages that can easily be cached in memory suffer from a much higher overhead (in one case, throughput was halved on an "intranet").
Edit: One point that has been brought up by several others is that SSL handshaking is the major cost of HTTPS. That is correct, which is why "typical session length" and "caching behavior of clients" are important.
Many very short sessions mean that handshaking time will overwhelm any other performance factor. Longer sessions mean the handshaking cost is incurred at the start of the session, but subsequent requests have relatively low overhead.
Client caching can be done at several steps, anywhere from a large-scale proxy server down to the individual browser cache. Generally HTTPS content will not be cached in a shared cache (though a few proxy servers can exploit a man-in-the-middle type of behavior to achieve this). Many browsers cache HTTPS content for the current session, and often across sessions. The impact of not caching, or caching less, is that clients will retrieve the same content more frequently. This results in more requests and more bandwidth to service the same number of users.
HTTPS requires an initial handshake which can be very slow. The actual amount of data transferred as part of the handshake isn't huge (under 5 kB typically), but for very small requests, this can be quite a bit of overhead. However, once the handshake is done, a very fast form of symmetric encryption is used, so the overhead there is minimal. Bottom line: making lots of short requests over HTTPS will be quite a bit slower than HTTP, but if you transfer a lot of data in a single request, the difference will be insignificant.
However, keepalive is the default behaviour in HTTP/1.1, so you will do a single handshake and then lots of requests over the same connection. This makes a significant difference for HTTPS. You should probably profile your site (as others have suggested) to make sure, but I suspect that the performance difference will not be noticeable.
To really understand how HTTPS will increase your latency, you have to understand how HTTPS connections are established. Here is a nice diagram. The key is that instead of the client getting the data after 2 "legs" (one round trip: you send a request, the server sends a response), the client won't get data until at least 4 legs (2 round trips). So, if it takes 100 ms for a packet to move between the client and the server, your first HTTPS request will take at least 400 ms.
Of course, this can be mitigated by re-using the HTTPS connection (which browsers should do), but it does explain part of that initial stall when loading up an HTTPS web site.
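If you want to see those extra legs on your own connection, Go's standard net/http/httptrace hooks can time the TCP connect and the TLS handshake separately - a small sketch, with the URL as a placeholder:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"net/http/httptrace"
	"time"
)

func main() {
	var connStart, tlsStart time.Time
	trace := &httptrace.ClientTrace{
		ConnectStart: func(network, addr string) { connStart = time.Now() },
		ConnectDone: func(network, addr string, err error) {
			fmt.Println("TCP connect:  ", time.Since(connStart))
		},
		TLSHandshakeStart: func() { tlsStart = time.Now() },
		TLSHandshakeDone: func(cs tls.ConnectionState, err error) {
			fmt.Println("TLS handshake:", time.Since(tlsStart))
		},
	}
	req, _ := http.NewRequest("GET", "https://example.com/", nil)
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
	start := time.Now()
	resp, err := http.DefaultTransport.RoundTrip(req) // hooks fire during connection setup
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("first response:", time.Since(start))
}
```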
The overhead is NOT due to the encryption. On a modern CPU, the encryption required by SSL is trivial.
The overhead is due to the SSL handshakes, which are lengthy and drastically increase the number of round trips required for an HTTPS session compared to an HTTP one.
Measure (using a tool such as Firebug) the page load times while the server is on the end of a simulated high-latency link. Tools exist to simulate a high latency link - for Linux there is "netem". Compare HTTP with HTTPS on the same setup.
The latency can be mitigated to some extent by:
Ensuring that your server is using HTTP keepalives - this allows the client to keep an SSL connection open across requests, which avoids the need for another handshake
Reducing the number of requests to as few as possible - by combining resources where possible (e.g. .js include files, CSS) and encouraging client-side caching
Reducing the number of page loads, e.g. by loading data that is not immediately required into the page (perhaps in a hidden HTML element) and then showing it using client-side script.
December 2014 Update
You can easily test the difference between HTTP and HTTPS performance in your own browser using the HTTP vs HTTPS Test website by AnthumChris: “This page measures its load time over unsecure HTTP and encrypted HTTPS connections. Both pages load 360 unique, non-cached images (2.04 MB total).”
The results may surprise you.
It's important to have up-to-date knowledge about HTTPS performance, because the Let's Encrypt Certificate Authority will start issuing free, automated, and open SSL certificates in Summer 2015, thanks to Mozilla, Akamai, Cisco, the Electronic Frontier Foundation and IdenTrust.
June 2015 Update
Updates on Let’s Encrypt - Arriving September 2015:
Let's Encrypt Launch Schedule (Jun 16, 2015)
Let's Encrypt Root and Intermediate Certificates (Jun 4, 2015)
Draft Let's Encrypt Subscriber Agreement (May 21, 2015)
More info on Twitter: #letsencrypt
For more info on HTTPS and SSL/TLS performance see:
Is TLS Fast Yet?
High Performance Browser Networking, Chapter 4: Transport Layer Security
Overclocking SSL
Anatomy and Performance of SSL Processing
For more info on the importance of using HTTPS see:
Why HTTPS for Everything? (The HTTPS-Only Standard)
Let’s Encrypt (Internet Security Research Group)
HTTPS Everywhere (Electronic Frontier Foundation)
To sum it up, let me quote Ilya Grigorik: "TLS has exactly one performance problem: it is not used widely enough. Everything else can be optimized."
Thanks to Chris - author of the HTTP vs HTTPS Test benchmark - for his comments below.
The current top answer is not fully correct.
As others have pointed out here, https requires handshaking and therefore does more TCP/IP roundtrips.
In a WAN environment, the latency typically becomes the limiting factor, rather than the increased CPU usage on the server.
Just keep in mind that the latency from Europe to the US can be around 200 ms (round-trip time).
You can easily measure this (for the single user case) with HTTPWatch.
In addition to everything mentioned so far, please keep in mind that some (all?) web browsers do not store cached content obtained over HTTPS on the local hard-drive for security reasons. This means that from the user's perspective pages with plenty of static content will appear to load slower after the browser is restarted, and from your server's perspective the volume of requests for static content over HTTPS will be higher than would have been over HTTP.
There isn't a single answer for this.
Encryption will always consume more CPU. This can be offloaded to dedicated hardware in many cases, and the cost will vary by algorithm selected. 3des is more expensive than AES, for example. Some algorithms are more expensive for the encrypter than the decryptor. Some have the opposite cost.
More expensive than the bulk crypto is handshake cost. New connections will consume much more CPU. This can be reduced with session resumption, at the cost of keeping old session secrets around until they expire. This means that small requests from a client that doesn't come back for more are the most expensive.
For cross internet traffic you may not notice this cost in your data rate, because the bandwidth available is too low. But you will certainly notice it in CPU usage on a busy server.
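As a client-side sketch of the session-resumption point, Go's crypto/tls exposes a session cache directly (keep-alives are disabled here only so the demo actually opens a second connection that can resume):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	transport := &http.Transport{
		DisableKeepAlives: true, // force a new connection per request, for the demo
		TLSClientConfig: &tls.Config{
			// With a session cache, repeat connections to the same host can
			// resume the TLS session instead of paying for a full handshake -
			// at the cost of keeping session secrets around, as noted above.
			ClientSessionCache: tls.NewLRUClientSessionCache(64),
		},
	}
	client := &http.Client{Transport: transport}
	for i := 0; i < 2; i++ { // the second request should resume the session
		resp, err := client.Get("https://example.com/")
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
		fmt.Println("request", i+1, "done")
	}
}
```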
I can tell you (as a dialup user) that the same page over SSL is several times slower than via regular HTTP...
In a number of cases the performance impact of SSL handshakes will be mitigated by the fact that the SSL session can be cached on both ends (desktop and server). On Windows machines, for example, the SSL session can be cached for up to 10 hours; see http://support.microsoft.com/kb/247658/EN-US. Some SSL accelerators also have parameters allowing you to tune how long the session is cached.
Another impact to consider is that static content served over HTTPS will not be cached by proxies, and this may reduce performance across multiple users accessing the site via the same proxy. This can be mitigated by the fact that static content will be cached at desktops as well; Internet Explorer versions 6 and 7 cache cacheable HTTPS static content unless instructed to do otherwise (Tools Menu/Internet Options/Advanced/Security/Do not save encrypted pages to disk).
Here's a great article (a little bit old, but still great) on SSL handshake latency. It helped me identify SSL as the main cause of slowness for clients who were using my app over slow Internet connections:
http://www.semicomplete.com/blog/geekery/ssl-latency.html
I ran a small experiment and got a 16% time difference for the same image from Flickr (233 kB):
http://farm8.staticflickr.com/7405/13368635263_d792fc1189_b.jpg
https://farm8.staticflickr.com/7405/13368635263_d792fc1189_b.jpg
Of course these numbers depend on many factors, such as computer performance, connection speed, server load, and QoS on the path (the particular network route taken from the browser to the server), but it shows the general idea: HTTPS is slower than HTTP, since it requires more operations to complete (SSL handshaking and encoding/decoding of data).
Since I was investigating the same problem for my project, I found these slides. Older, but interesting:
http://www.cs.nyu.edu/artg/research/comparison/comparison_slides/sld001.htm
There seems to be a nasty edge case here: Ajax over congested wifi.
AJAX usually means that the keepalive has timed out after, say, 20 seconds. However, Wi-Fi means that the (ideally fast) AJAX connection has to make multiple round trips. Worse, Wi-Fi often loses packets, and there are TCP retransmits. In this case, HTTPS performs really, really badly!
Is TLS fast yet? Yes.
Watch: https://www.youtube.com/watch?v=0EB7zh_7UE4
Read: https://istlsfastyet.com/
There are many projects out there that aim to blur the lines and to make HTTPS just as fast. Like SPDY and mod-spdy.
HTTP VS HTTPS PERFORMANCE COMPARISON
I have always associated HTTPS with slower page load times when compared to plain old HTTP. As a web developer, web page performance is important to me and anything that will slow down the performance of my web pages is a no-no.
In order to understand the performance implications involved, the diagram below gives you a basic idea of what happens under the hood when you make a request for a resource using HTTPS.
As you can see from the diagram above, there are a few extra steps that need to take place when using HTTPS compared to using plain HTTP. When you make a request using HTTPS, a handshake needs to occur in order to verify the authenticity of the request. This handshake is an extra step when compared to an HTTP request and does unfortunately incur some overhead.
In order to understand the performance implications and see for myself whether or not the performance impact would be significant, I used this site as a testing platform. I headed over to webpagetest.org and used the visual comparison tool to compare this site loading using HTTPS vs HTTP.
As you can see from the test video results, using HTTPS did have an impact on my page load times; however, the difference is negligible - I only noticed a 300 millisecond difference. It's important to note that these times depend on many factors, such as computer performance, connection speed, server load, and distance from the server.
Your site may be different, and it is important to test your site thoroughly and check the performance impact involved in switching to HTTPS.
HTTPS has encryption/decryption overhead so it will always be slightly slower. SSL termination is very CPU intensive. If you have devices to offload SSL, the difference in latencies might be barely noticeable depending on the load your servers are under.
This is almost certainly going to be true, given that SSL requires an extra encryption step that simply isn't required by non-SSL HTTP.
There is a way to measure this. The tool from Apache called JMeter will measure throughput. If you set up a large sampling of your service with JMeter in a controlled environment, with and without SSL, you should get an accurate comparison of the relative cost. I would be interested in your results.
HTTPS indeed affects page speed...
The quotes above reveal the foolishness of many people about site security and speed. HTTPS/SSL server handshaking creates an initial stall when making Internet connections. There's a slow delay before anything starts to render on your visitor's browser screen. This delay is measured in Time-to-First-Byte information.
HTTPS handshake overhead appears in Time-to-First-Byte (TTFB) information. Common TTFB ranges from under 100 milliseconds (best case) to over 1.5 seconds (worst case). But, of course, with HTTPS it's 500 milliseconds worse.
Round trips over wireless 3G connections can take 500 milliseconds or more. The extra trips double delays to 1 second or more. This is a big, negative impact on mobile performance. Very bad news.
My advice: if you're not exchanging sensitive data then you don't need SSL at all; but if you are, as on an e-commerce website, then you can enable HTTPS only on certain pages where sensitive data is exchanged, like login and checkout.
Source: Pagepipe
A more important performance difference is that an HTTPS session is kept open while the user is connected. An HTTP 'session' lasts only for a single item request.
If you are running a site with a large number of concurrent users, expect to buy a lot of memory.
Browsers can speak HTTP/1.1 over either HTTP or HTTPS, yet browsers can only speak HTTP/2.0 over HTTPS. The protocol differences from HTTP/1.1 to HTTP/2.0 make HTTP/2.0, on average, 4-5 times faster than HTTP/1.1. Also, of the sites that implement HTTPS, most do so over the HTTP/2.0 protocol. Therefore, HTTPS is almost always going to be faster than HTTP, simply due to the different protocol it generally uses. However, if HTTP over HTTP/1.1 is compared with HTTPS over HTTP/1.1, then HTTP is slightly faster, on average, than HTTPS.
Here are some comparisons I ran using Chrome (Ver. 64):
HTTPS over HTTP/1.1:
0.47 seconds average page load time
0.05 seconds slower than HTTP over HTTP/1.1
0.37 seconds slower than HTTPS over HTTP/2.0
HTTP over HTTP/1.1:
0.42 seconds average page load time
0.05 seconds faster than HTTPS over HTTP/1.1
0.32 seconds slower than HTTPS over HTTP/2.0
HTTPS over HTTP/2.0:
0.10 seconds average load time
0.32 seconds faster than HTTP over HTTP/1.1
0.37 seconds faster than HTTPS over HTTP/1.1
