Pros and Cons of Keep-Alive from Web Server Side - performance

The Keep-Alive feature of the HTTP protocol is meant to reduce the number of TCP connections made to a web server, so it should improve web server performance. However, I found that some web servers deliberately disable Keep-Alive on the server side.
In my understanding, some reverse proxies, e.g. HAProxy, disable HTTP keep-alive in order to reduce memory usage, which in some situations is more critical than CPU usage.
Is there any other reason why a web server would disable Keep-Alive?

Actually, Keep-Alive is meant to improve HTTP performance, not server performance (though for SSL connections it does reduce the cost on the server of re-negotiating the encryption). The big win is in the number of round trips the browser has to make to get the content. With Keep-Alive the browser gets to eliminate a full round trip for every request after the first, usually cutting full page load times in half.
Keep-Alive does increase server load, which is why some shared hosting providers disable it. Each open connection consumes memory as well as a file descriptor (on Linux), and in extreme cases (some Apache configurations) there may be a 1:1 mapping from connections to processes.
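To see the client-side win, here is a minimal sketch using the JDK's built-in java.net.http.HttpClient (Java 11+), which keeps connections alive and pools them by default; the URL is a placeholder. The second request reuses the connection set up by the first, so it skips the TCP (and TLS) handshake round trips.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        // The JDK client keeps connections alive and pools them by default,
        // so the second request reuses the TCP (and TLS) connection set up by the first.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/")).GET().build();

        long t1 = System.nanoTime();
        client.send(request, HttpResponse.BodyHandlers.ofString());
        long t2 = System.nanoTime();
        client.send(request, HttpResponse.BodyHandlers.ofString());
        long t3 = System.nanoTime();

        // The second request typically completes much faster because it skips
        // the TCP handshake and TLS negotiation.
        System.out.printf("first: %d ms, second: %d ms%n",
                (t2 - t1) / 1_000_000, (t3 - t2) / 1_000_000);
    }
}
```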

Related

Performance Issue with Http2 vs Http1.1

I have upgraded a Tomcat 9 application from HTTP/1.1 to HTTP/2. After this upgrade, I can see that the performance of the REST APIs is slower. I ran a load test with 1000 samples and, as a result, HTTP/1.1 has better throughput and average response time, whereas I expected HTTP/2 to come out ahead.
Is there any configuration that has to be done, or am I missing something?
PS: The response Content-Type is application/json.
You are comparing HTTP/1.1 without HTTPS with HTTP/2 with HTTPS. That’s not a true comparison of the protocols alone as HTTPS has significant overheads - particularly for just one connection.
Yes, it’s true that HTTP/2 normally requires HTTPS, as no browser supports HTTP/2 without HTTPS, but then again browsers are increasingly warning against unsecured HTTP-only connections and limiting features like HTTP/2, Brotli, Service Workers, Geo, etc. to HTTPS anyway.
HTTP/2 over HTTPS can beat HTTP/1.1 over HTTP in certain circumstances but in general that’s a heck of a back foot to start on.
Additionally, when making only one request per connection (as you are in JMeter using threads), the expense of HTTPS is a large percentage of the connection cost, due to the HTTPS handshake needed to set up the connection. Subsequent requests on the same connection won’t carry that expense: the encryption/decryption itself is relatively fast on modern hardware and doesn’t create a noticeable delay, but the initial setup definitely does.
Finally, HTTP/2 is generally faster than HTTP/1.1 over slower, high-latency connections, due in large part to multiplexing. If you are testing over the same network, as I suspect you might be, where there are basically no network delays, then the benefit of HTTP/2 over HTTP/1.1 may not be apparent.
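If you want to measure the protocols on an equal footing, compare both over TLS and reuse the connection so the handshake cost is amortized. A rough sketch with the JDK's java.net.http.HttpClient (Java 11+); the URL is a placeholder for your Tomcat endpoint:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProtocolComparison {

    // Send `count` requests through one client, reusing the underlying connection,
    // so the TLS handshake is paid only once and amortized across requests.
    static long run(HttpClient.Version version, int count) throws Exception {
        HttpClient client = HttpClient.newBuilder().version(version).build();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://your-tomcat-host/api/resource")) // placeholder URL
                .GET().build();
        long start = System.nanoTime();
        for (int i = 0; i < count; i++) {
            client.send(request, HttpResponse.BodyHandlers.ofString());
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("HTTP/1.1 over TLS: " + run(HttpClient.Version.HTTP_1_1, 1000) + " ms");
        System.out.println("HTTP/2 over TLS:   " + run(HttpClient.Version.HTTP_2, 1000) + " ms");
    }
}
```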

Do we still need a connection pool for microservices talking HTTP2?

As HTTP/2 supports multiplexing, do we still need a pool of connections for microservice communication?
If yes, what are the benefits of having such a pool?
Example:
Service A => Service B
Both the above services have only one instance available.
Multiple connections may help overcome the OS buffer size limitation for each connection (socket)? What else?
Yes, you still need a connection pool in a client contacting a microservice.
First, in general it's the server that controls the amount of multiplexing. A particular microservice server may decide that it cannot allow more than a very small amount of multiplexing.
If a client wants to use that microservice under a higher load, it needs to be prepared to open multiple connections, and this is where the connection pool comes in handy.
This is also useful to handle load spikes.
Second, HTTP/2 has flow control, and that may severely limit the data throughput on a single connection. If the flow control windows are small (the default defined by the HTTP/2 specification is 65535 bytes, which is typically very small for microservices), then client and server will spend a considerable amount of time exchanging WINDOW_UPDATE frames to enlarge the flow control windows, and this is detrimental to throughput.
To overcome this, you either need more connections (and again a client should be prepared for that), or you need larger flow control windows.
Third, in case of large HTTP/2 flow control windows, you may hit TCP congestion (and this is different from socket buffer size) because the consumer is slower than the producer. It may be a slow server for a client upload (REST request with a large payload), or a slow client for a server download (REST response with a large payload).
Again to overcome TCP congestion the solution is to open multiple connections.
Comparing HTTP/1.1 with HTTP/2 for the microservice use case, it's typical that the HTTP/1.1 connection pools are way larger (e.g. 10x-50x) than HTTP/2 connection pools, but you still want connection pools in HTTP/2 for the reasons above.
[Disclaimer I'm the HTTP/2 implementer in Jetty].
We had an initial implementation where the Jetty HttpClient was using the HTTP/2 transport with a hardcoded single connection per domain, because that's what HTTP/2 preached for browsers.
When exposed to real-world use cases - especially microservices - we quickly realized how bad an idea that was, and switched back to using connection pooling for HTTP/2 (like HttpClient had always done for HTTP/1.1).
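As a sketch of what that looks like in practice, the snippet below configures Jetty's HttpClient over the HTTP/2 transport with a small per-destination connection pool and enlarged flow control windows. It assumes Jetty 10/11 APIs (class names and setters may differ in other versions), and the URL is a placeholder:

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

public class Http2PooledClient {
    public static void main(String[] args) throws Exception {
        HTTP2Client http2Client = new HTTP2Client();
        // Larger flow control windows reduce WINDOW_UPDATE chatter for large payloads
        // (the HTTP/2 default is only 65535 bytes).
        http2Client.setInitialSessionRecvWindow(8 * 1024 * 1024);
        http2Client.setInitialStreamRecvWindow(8 * 1024 * 1024);

        HttpClient httpClient = new HttpClient(new HttpClientTransportOverHTTP2(http2Client));
        // Keep a (small) pool of HTTP/2 connections per destination, so the client can
        // spill over when the server's max-concurrent-streams limit is reached.
        httpClient.setMaxConnectionsPerDestination(4);
        httpClient.start();

        var response = httpClient.GET("https://service-b.internal/api/items"); // placeholder URL
        System.out.println(response.getStatus());
        httpClient.stop();
    }
}
```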

Which is better regarding performance: keep-alive or CDN

Which gives better performance: a keep-alive connection to get resources from the same server, or getting resources from a public CDN?
Edit 1:
By performance I mean the page loading time.
I am comparing these two scenarios to find which gives faster page loading:
1- If I serve HTML and JavaScript from one server (mine), it will benefit from connection keep-alive, which avoids repeated TCP/TLS handshakes to different servers.
2- If I use a CDN for the JavaScript files, it will benefit from the CDN's advantages, but it will also require TCP connections (and handshakes) to different servers.
Edit 2:
Please see the images below from the book "High Performance Browser Networking" by Ilya Grigorik.
The first two images show a request for HTML from my server and CSS from another server. Keep-alive is no advantage here, as the two requests go to different servers, which adds time for TCP handshakes and slow start.
The picture below shows HTML and CSS served from the same server using keep-alive.
By comparing both loading times:
1- My server + CDN = 284 ms
2- Only my server + keep alive = 228 ms
The difference between the two is the 56 ms required for the TCP handshake to the CDN server.
Additionally, if I add request pipelining to a single server, the page load time drops further, to 172 ms.
The best would be to use keep-alive together with the CDN.
These are orthogonal things:
keep-alive is a feature of HTTP 1.1 to reduce protocol overhead,
CDN reduces the distance to the server and increases bandwidth.
The goal of both is to reduce latency. Keep-alive should simply always be on. Most modern HTTP servers support it.
Though for static content a CDN will usually provide a much more noticeable performance improvement. It will still use keep-alive, just with a CDN server.
If I use CDN for javascript files ... it will require multiple TCP connections to different servers
Even if you serve everything from your own server, most browsers open 2-4 connections simultaneously. So it doesn't matter much if you serve HTML from one server and JS from another.
Also, most CDNs choose the server once (using DNS), and then your client communicates with that same server. So at most you have one server for the HTML and one for the JS. Or you can choose to proxy everything, including dynamic HTML, through the CDN.

High Performance Options for Remote services access

I have a service, foo, running on machine A. I need to access that service from machine B. One way is to launch a web server on A and do it via HTTP; code running under the web server on A accesses foo and returns the results. Another is to write a socket server on A; the socket server accesses service foo and returns the result.
HTTP connection initiation and handshake are expensive; a socket server could be written, but I want to avoid that. What other options are available for high-performance remote calls?
HTTP is just the protocol over the socket. If you are using TCP/IP networks, you are going to be using a socket. The HTTP connection initiation and handshake are not the expensive bits; it's TCP initiation that's really the expensive bit.
If you use HTTP 1.1, you can use persistent connections (Keep-Alive), which drastically reduces this cost, bringing it closer to that of keeping a persistent socket open.
It all depends on whether you want/need the higher-level protocol. Using HTTP means you will be able to do things like consume this service from a lot more clients, while writing much less documentation (if you write your own protocol, you will have to document it). HTTP servers also support things like authentication, cookies, and logging out of the box. If you don't need these sorts of capabilities, then HTTP might be a waste. But I've seen few projects that don't need at least some of these things.
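To make the "HTTP is just a protocol over the socket" point concrete, here is a rough sketch that speaks HTTP/1.1 over a plain java.net.Socket and reuses the same connection for two requests; the host is a placeholder and the response handling is deliberately naive:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class RawHttpOverSocket {
    public static void main(String[] args) throws Exception {
        // One TCP connection, two HTTP/1.1 requests: "Connection: keep-alive"
        // (the HTTP/1.1 default) lets us reuse the socket instead of reconnecting.
        try (Socket socket = new Socket("example.com", 80)) { // placeholder host
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();
            String request = "GET / HTTP/1.1\r\n"
                    + "Host: example.com\r\n"
                    + "Connection: keep-alive\r\n\r\n";
            for (int i = 0; i < 2; i++) {
                out.write(request.getBytes(StandardCharsets.US_ASCII));
                out.flush();
                byte[] buf = new byte[8192];
                // Naive single read; a real client must parse Content-Length or
                // chunked encoding to frame each response correctly.
                int n = in.read(buf);
                System.out.println("response bytes read: " + n);
            }
        }
    }
}
```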
Adding to Rob's answer: as the question does not point precisely to an application or its performance boundaries, it is worth looking at the options available in a broader context, which is inter-process communication.
The Wikipedia page lists the available options cleanly and would be a good place to start.
What technology are you going to use? Let me answer for the Java world.
If your request rate is below 100/sec, you should not care about optimizations and should use the most versatile solution: HTTP.
A well-written asynchronous server like Netty's HTTP server can easily handle 1000 requests per second on mid-range hardware.
If you need more, or have constrained resources, you can move to a binary format. The most popular combination out there is Google Protobuf (multi-language) + Netty (Java).
What you should know about HTTP performance:
HTTP can use Keep-Alive, which removes the reconnection cost for every request.
HTTP adds traffic overhead to every request and response - around 50-100 bytes.
HTTP clients and servers consume additional CPU parsing HTTP headers - that becomes noticeable beyond the aforementioned 100 req/sec.
Be careful when selecting technology. Even in the 21st century it is hard to find a well-written HTTP server and client.
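As a rough illustration of the per-request framing overhead mentioned above, the snippet below counts the bytes of a minimal HTTP/1.1 request; the host and path are made up, and real requests usually carry more headers (User-Agent, Accept, cookies), so the overhead is often larger:

```java
import java.nio.charset.StandardCharsets;

public class HttpOverheadEstimate {
    public static void main(String[] args) {
        // A minimal HTTP/1.1 request line plus headers; the payload itself is extra.
        String request = "GET /orders/42 HTTP/1.1\r\n"
                + "Host: service-b.internal\r\n"   // placeholder host
                + "Connection: keep-alive\r\n"
                + "Accept: application/json\r\n"
                + "\r\n";
        System.out.println("request framing overhead: "
                + request.getBytes(StandardCharsets.US_ASCII).length + " bytes");
    }
}
```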

Slow HTTP vs Web Sockets - Resource utilization

If a bunch of "slow HTTP" connections to a server can consume so many resources as to cause a denial of service, why wouldn't a bunch of WebSockets to a server cause the same problem?
The accepted answer to a different SO question says that it is almost free to maintain an idle connection.
If it costs nothing to maintain an open TCP connection, why does "slow HTTP" cause a denial of service?
A WebSocket and a "slow" HTTP connection both use an open connection. The difference lies in the expectations built into the server design.
Typical HTTP servers do not need to handle a large number of open connections and are designed around the assumption that the number of open connections is small. If the server does not protect against slow clients, then an attacker can force a server designed around this assumption to hit a resource limit.
Here are a couple of examples showing how the different expectations can impact the design:
If you only have a few HTTP requests in flight at a time, then it's OK to use a thread per connection (see the sketch below). This is not a good design for a WebSocket server.
The default file descriptor limits are often adequate for typical HTTP scenarios, but not for a large numbers of connections.
It is possible to design an HTTP server to handle a large number of open connections and several servers do so out of the box.
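Here is a minimal sketch of the thread-per-connection design mentioned above (port and pool size are arbitrary): every slow client pins a worker thread for the full duration of its request, so a modest number of deliberately slow clients can exhaust the pool and starve everyone else.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPerConnectionServer {
    public static void main(String[] args) throws Exception {
        // 200 worker threads: 200 slow clients that never finish sending their
        // request headers are enough to make every other client queue indefinitely.
        ExecutorService workers = Executors.newFixedThreadPool(200);
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept();
                workers.submit(() -> handle(client));
            }
        }
    }

    static void handle(Socket client) {
        // The worker thread is blocked here for as long as the client takes to
        // send its request, which is exactly what a "slow HTTP" attack exploits.
        try (client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream(), StandardCharsets.US_ASCII))) {
            String line;
            while ((line = in.readLine()) != null && !line.isEmpty()) {
                // read request headers until the blank line
            }
            OutputStream out = client.getOutputStream();
            out.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok"
                    .getBytes(StandardCharsets.US_ASCII));
        } catch (Exception ignored) {
        }
    }
}
```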
