I have upgraded a Tomcat 9 application from HTTP/1.1 to HTTP/2. After this upgrade, I can see that the performance of the REST APIs is slower. I ran a load test with 1000 samples and found that HTTP/1.1 has better throughput and average response time, although I was expecting HTTP/2 to come out ahead.
Is there any configuration that has to be done, or am I missing something?
PS: The response Content-Type is application/json.
You are comparing HTTP/1.1 without HTTPS with HTTP/2 with HTTPS. That’s not a true comparison of the protocols alone as HTTPS has significant overheads - particularly for just one connection.
Yes, it's true that HTTP/2 in practice requires HTTPS, as no browser supports HTTP/2 without it. But then again, browsers are increasingly warning against unsecured HTTP-only connections and limiting features like HTTP/2, Brotli, Service Workers, Geo... etc. to HTTPS anyway.
HTTP/2 over HTTPS can beat HTTP/1.1 over HTTP in certain circumstances but in general that’s a heck of a back foot to start on.
Additionally, when making only one request per connection (as you are in JMeter using threads), the expense of HTTPS is a large percentage of the total cost of the connection, because of the TLS handshake needed to set it up. Future requests on the same connection won't have the same expense: the encryption/decryption part is relatively fast on modern hardware and doesn't create a noticeable delay, but the initial setup part definitely does.
Finally, HTTP/2 is generally faster than HTTP/1.1 over slower, high-latency connections, due in large part to multiplexing. If you are testing over the same network, as I suspect you might be, where there are basically no network delays, then the benefit of HTTP/2 over HTTP/1.1 may not be apparent.
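If you want to see how much of each sampled request is just the TLS handshake, you can trace it from any HTTPS client. Here is a minimal Go sketch using net/http/httptrace (the URL is a placeholder for your own endpoint; this isn't Tomcat-specific, it simply shows that only the first request on a kept-alive connection pays the handshake cost):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"net/http/httptrace"
	"time"
)

func main() {
	// Placeholder endpoint; point this at the API under test.
	url := "https://example.com/"

	var tlsStart time.Time
	trace := &httptrace.ClientTrace{
		TLSHandshakeStart: func() { tlsStart = time.Now() },
		TLSHandshakeDone: func(_ tls.ConnectionState, err error) {
			fmt.Printf("  TLS handshake: %v (err=%v)\n", time.Since(tlsStart), err)
		},
	}

	client := &http.Client{}
	for i := 1; i <= 3; i++ {
		req, err := http.NewRequest("GET", url, nil)
		if err != nil {
			panic(err)
		}
		req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

		start := time.Now()
		resp, err := client.Do(req)
		if err != nil {
			panic(err)
		}
		io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
		resp.Body.Close()

		// Only request 1 should report a handshake; 2 and 3 reuse the connection.
		fmt.Printf("request %d: %v total, proto %s\n", i, time.Since(start), resp.Proto)
	}
}
```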
I'm building an API that will be hit with many requests/second from a few sources (call it 1000/sec) and will be responding to each quickly with very little information (think < 1k).
When I use HTTP (http.ListenAndServe()), performance is between 1000-2000 req/sec using Siege on my t3.micro, and CPU usage rarely exceeds 30-40%.
With HTTPS (http.ListenAndServeTLS()), I cap at around 450 req/sec with CPU usage at 100%. It seems pretty obvious it's doing a lot of SSL handshake-type work, but my question is: why would this be the case? Even with few concurrent connections from Siege it is much slower (I also tried connection=keep-alive in the Siege config).
I get that the first connection should be slower, but after that, is this behavior still expected, or is there some issue I am not aware of? One thing I noticed is that Siege is using HTTP/1.1; I would have thought it should be using HTTP/2 when going over HTTPS.
Thanks.
When using SSL/TLS, your program needs to encrypt and decrypt every message sent or received, and this consumes CPU power. You may try adding an HTTP/HTTPS proxy in front of the application that terminates the SSL/TLS traffic - this can be Apache httpd, nginx or HAProxy. This can improve the situation.
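As a rough sketch of that setup (assuming nginx or HAProxy terminates TLS on port 443 and forwards to the app on loopback; the port and the X-Forwarded-Proto header are just conventional choices, not requirements):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Behind a terminating proxy the app only ever sees plain HTTP.
		// Proxies conventionally pass the original scheme in a header
		// such as X-Forwarded-Proto if the app needs it.
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprintf(w, `{"ok":true,"proto":%q}`, r.Proto)
	})

	// Listen on loopback only; nginx/HAProxy terminates TLS on :443
	// and forwards the decrypted traffic here. Port 8080 is arbitrary.
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```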
I have really searched the web, and I cannot find the reason why web browsers do not support h2c (HTTP/2 with no TLS). Any ideas appreciated.
A little bit of clarification:
HTTP/2 over HTTPS uses ALPN (this is called h2).
HTTP/2 over plain HTTP does not need ALPN (this is called h2c), but almost no web browser supports it. Why is that so?
I feel that for many resources there is no need for confidentiality, though authenticity is always good (digital signing of the HTTP body is not widely supported, though there are some private implementations). Given that confidentiality is not needed, h2c would be a really good thing to have.
Technically
There are several technical reasons why HTTP/2 is much better and easier to handle over HTTPS:
Doing HTTP/2 negotiation in TLS with ALPN is much easier and doesn't lose round-trips like Upgrade: in plain HTTP does. And it doesn't suffer from the upgrade problem on POST that you get with plain-text HTTP/2 (there's a small h2c sketch after this list).
N% of the web doesn't support unsolicited Upgrade: h2c headers in requests and instead responds with 400 errors.
Doing something other than HTTP/1.1 over TCP port 80 breaks in Y% of cases, since the world is full of middle-boxes that "help" out and replace/add things in-stream for such connections. If what's on the wire then isn't HTTP/1.1, things break (this is also why Brotli, for example, requires HTTPS).
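To be clear, h2c itself works fine between two programs that both expect it - it's only browsers that won't do it. Here's a minimal Go sketch (using the golang.org/x/net/http2 and h2c packages; the loopback address is arbitrary) where the client uses "prior knowledge" and skips the Upgrade dance entirely:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	// h2c server: HTTP/2 framing over cleartext TCP, no TLS, no ALPN.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "served over %s\n", r.Proto)
	})
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	srv := &http.Server{Handler: h2c.NewHandler(handler, &http2.Server{})}
	go srv.Serve(ln)

	// h2c client with "prior knowledge": speak HTTP/2 from the first byte
	// instead of sending an Upgrade: h2c request.
	client := &http.Client{
		Transport: &http2.Transport{
			AllowHTTP: true, // permit http:// URLs
			DialTLS: func(network, addr string, _ *tls.Config) (net.Conn, error) {
				return net.Dial(network, addr) // plain TCP instead of TLS
			},
		},
	}
	resp, err := client.Get("http://" + ln.Addr().String() + "/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body)) // prints "served over HTTP/2.0"
}
```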
Ideologically
There's a push for more HTTPS on the web that is shared by and worked on in part by some of the larger web browser developer teams. That makes it considered a bonus if features are implemented HTTPS-only as they then work as yet another motivation for sites and services to move over to HTTPS. Thus, some teams never tried very hard (if at all) to make HTTP/2 work without TLS.
Practically
At least one browser vendor expressed its intention early on to implement and provide HTTP/2 for users over plain-text HTTP (h2c). They ended up never doing this because of the technical obstacles mentioned above.
The new HTTP/2 protocol comes with some promising features. Some of them are:
Multiplexing - a single TCP connection can be used to make multiple HTTP/2 requests and receive multiple responses (to a single origin)
HTTP/2 Server Push - sending server responses to the client without receiving requests, i.e. initiated by the server (see the sketch after this list)
Bidirectional connection - HTTP/2 spec - Streams and Multiplexing:
A "stream" is an independent, bidirectional sequence of frames
exchanged between the client and server within an HTTP/2 connection.
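For reference, Server Push is exposed in Go's standard library through the http.Pusher interface. A minimal sketch (the handler and asset path below are made up, and push only works on an HTTP/2 connection, which for browsers means TLS):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handleIndex(w http.ResponseWriter, r *http.Request) {
	// http.Pusher is only implemented by HTTP/2 response writers.
	if pusher, ok := w.(http.Pusher); ok {
		// Proactively push a resource the page is about to request anyway.
		if err := pusher.Push("/static/app.css", nil); err != nil {
			log.Println("push failed:", err)
		}
	}
	fmt.Fprintln(w, `<html><link rel="stylesheet" href="/static/app.css">hello</html>`)
}

func main() {
	http.HandleFunc("/", handleIndex)
	http.HandleFunc("/static/app.css", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/css")
		fmt.Fprint(w, "body{font-family:sans-serif}")
	})
	// For browsers, HTTP/2 (and thus Push) requires TLS; cert.pem/key.pem are placeholders.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```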
The motivation behind HTTP/2 is explained in the HTTP/2 FAQ:
HTTP/1.1 has served the Web well for more than fifteen years, but its
age is starting to show.
and
The goal of the Working Group is that typical uses of HTTP/1.x can use HTTP/2 and see some benefit.
So HTTP/2 is nice and is meant to replace HTTP/1.x. Unfortunately, HTTP/2 does not support WebSockets. In the question Does HTTP/2 make websockets obsolete? it is made clear that HTTP/2 Server Push is not an alternative, and neither are Server-Sent Events.
Now to the question: What do we use if we want WebSockets functionality over HTTP/2?
Current forms of HTTP/2 Protocol Negotiation:
HTTP/2 connections start in one of three ways:
In an encrypted connection (TLS/SSL), using ALPN (Application Layer Protocol Negotiation). Most browsers require TLS/SSL for HTTP/2 and use this method for HTTP/2 connection establishment (there's a small ALPN sketch after this list).
In clear text, using the HTTP/1.1 Upgrade header (same as Websockets). Most browsers require TLS/SSL for HTTP/2, so this is limited in its support.
In clear text, using a special string at the beginning of an HTTP/1.1 connection (which could allow clear-text HTTP/2 servers to disable HTTP/1.1 support). Limited client support.
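Here's roughly what the ALPN part looks like from a Go client: offer "h2" and "http/1.1" during the TLS handshake and see which one the server picks (the host is only an example of a server known to speak HTTP/2):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Offer h2 and http/1.1 via ALPN; the server chooses during the handshake.
	conn, err := tls.Dial("tcp", "www.google.com:443", &tls.Config{
		NextProtos: []string{"h2", "http/1.1"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Prints "h2" when the server agreed to HTTP/2 during the handshake.
	fmt.Println("negotiated protocol:", conn.ConnectionState().NegotiatedProtocol)
}
```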
Negotiating the Websocket Protocol, present tense:
Negotiating Websocket connections, at the moment, requires HTTP/1.1 support and makes use of the HTTP/1.1 Upgrade header.
This is often performed by the same application server that listens for the HTTP/1.1 and HTTP/2 connections. Web applications that support concurrency (whether evented or thread-based) are usually protocol-agnostic (as long as HTTP semantics are preserved) and work well enough on both protocols.
This allows HTTP data to be used during connection establishment (and perhaps affect the Websocket connection state/authentication procedure).
Once the Websocket connection is established, it's totally independent of the HTTP semantics/layer.
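To make the Upgrade flow concrete, here is a small Go echo server using the gorilla/websocket package (one common library among several; the /ws path is arbitrary). The handler sees ordinary HTTP headers right up until the upgrade, after which the connection is plain Websocket frames:

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{} // performs the HTTP/1.1 Upgrade handshake

func echo(w http.ResponseWriter, r *http.Request) {
	// Cookies, auth headers etc. are available here, before the upgrade -
	// the HTTP data "used during connection establishment" mentioned above.
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	defer conn.Close()

	for {
		// From here on the connection is Websocket frames, independent of HTTP.
		mt, msg, err := conn.ReadMessage()
		if err != nil {
			return
		}
		if err := conn.WriteMessage(mt, msg); err != nil {
			return
		}
	}
}

func main() {
	http.HandleFunc("/ws", echo)
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```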
Negotiating the Websocket Protocol in an HTTP/2 world:
In an HTTP/2-only world, which might be a while into the future, there could be a number of possible approaches to Websocket protocol negotiation: an ALPN-based approach and an HTTP/2 "tunnel" (or "stream") approach.
The ALPN approach preserves protocol independence at the expense of the pre-upgrade (HTTP) stage, while the "stream" approach provides the HTTP pre-"upgrade" (or Connect) stage at the expense of high coupling and complexity.
The ALPN Approach:
One possible future approach will simply add the Websocket protocol to the ALPN negotiation table.
At the moment, ALPN is used to select (or default to) the "http/1.1" protocol, and the Upgrade request is handled by the HTTP/1.1 server. This means that Websocket still provides us with the HTTP header data during protocol negotiation (while using its own TCP/IP connection).
In the future, ALPN might simply add "wss" as an available choice.
Using this approach, the Websocket (which is currently established using the HTTP/1.1 Upgrade header, both in encrypted and clear text forms) could easily be negotiated using the ALPN extension to the TLS/SSL layer.
This would keep the Websocket protocol independent of the HTTP/2 protocol and allow its use even when HTTP isn't supported.
However, it comes with the downside that cookies and other HTTP headers might no longer be available as part of the protocol negotiation. Another difference (both good and bad) is that this approach would require a separate TCP/IP connection.
The HTTP/2 "tunnel" / "stream" approach
Another possible future approach, which is reflected in this proposed draft, will dispose of the HTTP/1.1 variation of the Websocket protocol in favor of an HTTP/2 "stream" approach.
HTTP/2 "streams" are the way HTTP/2 implements multiplexing and allows multiple requests to be handled concurrently. Each request receives a stream number ID and any data pertaining to this request (headers, responses etc') is identified using the same numerical stream ID.
Under this approach, "Websocket" data will be contain within the HTTP/2 wrapper and the stream ID will be used to identify the "Websocket" stream.
Although this might provide some benefits (HTTP headers and cookies could be provided as part of the Websocket negotiation), it's not without its drawbacks.
Higher complexity and tighter protocol coupling are just two examples, both of which are serious downsides.
Conclusion:
At the time of this writing, HTTP/1.1 Upgrade semantics are required for Websocket connections, both when using clear text (ws) and encrypted (wss) connections.
The future is, as yet, undecided, and it will probably take a long time before the current Upgrade process (using HTTP/1.1) is phased out.
Well, your timing is rather apt!
A new version of the internet standards draft was literally just published:
Bootstrapping WebSockets with HTTP/2
Additional information here:
https://github.com/mcmanus/draft-h2ws/blob/master/README.md
And you can follow the discussion in it here:
https://lists.w3.org/Archives/Public/ietf-http-wg/2017OctDec/0032.html
Until this is approved, and then implemented by browsers and servers, I would say that Daniel Haxx’s post that you included in your question represents a very good summary of the current status.
One of your links actually has one answer: you can just use SSE.
Semantically, you can achieve the same things with either Websockets or (SSE + POST). The view that the two technologies address different use cases is, roughly speaking, bikeshedding around "this syntax works better for this".
There are ongoing efforts to port something similar to Websockets to HTTP/2, but unless those efforts enable new use cases or efficiencies, I see no point.
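For illustration, here's a minimal SSE endpoint in Go (the path, port and certificate file names are placeholders). Over HTTP/2, each SSE response is just one multiplexed stream, so many clients or tabs can share a single connection:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// Minimal Server-Sent Events endpoint: a long-lived response with
// Content-Type text/event-stream, flushed after each event.
func events(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")

	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}

	for i := 0; ; i++ {
		select {
		case <-r.Context().Done():
			return // client disconnected
		case <-time.After(time.Second):
			fmt.Fprintf(w, "data: tick %d\n\n", i)
			flusher.Flush()
		}
	}
}

func main() {
	http.HandleFunc("/events", events)
	// ListenAndServeTLS enables HTTP/2 automatically via ALPN;
	// cert.pem/key.pem are placeholders for a real certificate.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```

On the browser side this is consumed with new EventSource("/events"), which also handles reconnection for you.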
From watching the HTTPS Everywhere talk on YouTube, they suggest that HTTPS and SPDY combined will be quicker than just serving web pages/assets over HTTP. But since reading "SPDY is Dead. Long Live HTTP/2", and with HTTP/2 support being a way off, I am in two minds as to whether to move a large site I'm working on entirely to HTTPS, as ultimately it will be slower: in my performance comparison tests, DOM content loaded took twice as long. I also just read somewhere that browsers are dropping support for SPDY.
What is the state of SPDY, and should I just wait for HTTP/2 before I advocate moving everything to HTTPS everywhere? Should I accept the performance hit?
SPDY is definitely dying, now that HTTP/2 is an official specification.
Firefox and Chrome already support HTTP/2, and servers are starting to deploy it instead of SPDY - Google, Twitter, etc. Internet Explorer support will arrive soon with IE 11.
HTTP/2 is definitely gaining momentum, and the future will be on HTTP/2 and TLS.
You should not wait for HTTP/2, because it's already here.
About the performance hit: the usual recommendation is to benchmark, but there is evidence that HTTP/2 over TLS is much better than HTTP/1.1 over TLS, and possibly comparable to - if not better than - cleartext HTTP/1.1, depending on the case.
Reasons behind this are a number of optimizations performed by HTTP/2 such as multiplexing, header compression and resource push, that are simply not possible with HTTP/1.1.
See for example the demo video (disclaimer, I am a Jetty committer) we gave in 2012 (about Jetty and SPDY at that time, but HTTP/2 behaves the same), or the Go language HTTP/2 demo, or the Akamai HTTP/2 demo.
With Jetty, for example, you can deploy Java webapps on HTTP/2, but also complete PHP websites on HTTP/2. Our own website, https://webtide.com, is WordPress served by Jetty on HTTP/2.
You can move to TLS and HTTP/2 now.
The Keep-Alive connection feature in the HTTP protocol is meant to reduce the number of TCP connections made to a web server, so it should improve web server performance. However, I found that some web servers deliberately disable the Keep-Alive feature on the server side.
In my understanding, some reverse proxies, e.g. HAProxy, disable HTTP keep-alive in order to reduce memory usage, which is more critical than CPU usage in some situations.
Is there any other reason why a web server would disable Keep-Alive?
Actually, Keep-Alive is meant to improve HTTP performance, not server performance (though for SSL connections it does reduce the cost on the server of re-negotiating the encryption). The big win is in the number of round trips the browser has to make to get the content. With Keep-Alive the browser gets to eliminate a full round trip for every request after the first, usually cutting full page load times in half.
Keep-Alive increases server load, which is why some shared hosting providers disable it. Each open connection consumes memory as well as a file descriptor (on Linux), and in extreme cases (some Apache configurations) there may be a 1:1 mapping from connections to processes.
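A rough way to see the round-trip point is to time the same batch of requests with and without connection reuse. A small Go sketch (over loopback the gap is small; over a real network each fresh connection costs at least one extra round trip, and more with TLS):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"time"
)

// timeRequests issues n sequential GETs and returns the total wall time.
func timeRequests(client *http.Client, url string, n int) time.Duration {
	start := time.Now()
	for i := 0; i < n; i++ {
		resp, err := client.Get(url)
		if err != nil {
			panic(err)
		}
		io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
		resp.Body.Close()
	}
	return time.Since(start)
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	}))
	defer srv.Close()

	withKeepAlive := &http.Client{} // connection reuse is the default
	withoutKeepAlive := &http.Client{Transport: &http.Transport{DisableKeepAlives: true}}

	fmt.Println("with keep-alive:   ", timeRequests(withKeepAlive, srv.URL, 100))
	fmt.Println("without keep-alive:", timeRequests(withoutKeepAlive, srv.URL, 100))
}
```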