Which is better regarding performance: keep-alive or CDN?

Which is better for performance: a keep-alive connection to get resources from the same server, or getting resources from a public CDN?
Edit 1:
By performance I mean the page loading time.
I am comparing these 2 scenarios to find which gives faster page loading:
1- If I serve HTML & JavaScript from one (my) server, that will benefit from connection keep-alive, which avoids repeated TCP / TLS handshakes to different servers.
2- If I use a CDN for the JavaScript files, that will benefit from the CDN's advantages, but it will also require TCP connections to different servers, each with its own handshake.
Edit 2:
Please see the figures below from the book "High Performance Browser Networking" by Ilya Grigorik.
The first two figures show a request for HTML from my server and CSS from another server. Keep-alive is no advantage here, as the 2 requests go to different servers, which adds time for TCP handshakes and slow start.
The next figure shows serving HTML and CSS from the same server using keep-alive.
By comparing both loading times:
1- My server + CDN = 284 ms
2- Only my server + keep alive = 228 ms
The difference between the two is the 56 ms required for the TCP handshake to the CDN server.
Additionally, if I add request pipelining to the single server, the page load time drops further to 172 ms.
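To make the arithmetic explicit (assuming, as the numbers suggest, that the 56 ms difference corresponds to one network round trip in the book's example):

    my server + CDN:               228 ms + 56 ms (extra TCP handshake)  = 284 ms
    my server only, keep-alive:    284 ms - 56 ms                        = 228 ms
    keep-alive + pipelining:       228 ms - 56 ms (one less round trip)  = 172 ms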

The best option is to use keep-alive with a CDN.
These are orthogonal things:
keep-alive is a feature of HTTP/1.1 that reduces protocol overhead;
a CDN reduces the distance to the server and increases bandwidth.
The goal of both is to reduce latency. Keep-alive should simply always be on. Most modern HTTP servers support it.
For static content, though, a CDN will usually provide a much more noticeable performance improvement. It will still use keep-alive, just with a CDN server.
If I use CDN for javascript files ... it will require multiple TCP connections to different servers
Even if you serve everything from your own server, most browsers open 2-4 connections simultaneously. So it doesn't matter much if you serve HTML from one server and JS from another.
Also, most CDNs choose the server once (using DNS), and then your client communicates with that same server. So at most one server for HTML and one for JS. Or you can choose to proxy everything, including dynamic HTML, through the CDN.
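As an illustration of "keep-alive should simply always be on", here is a minimal sketch (Node.js assumed; the port and timeout values are illustrative, not from the original answer). The same idle-connection reuse applies whether requests come straight from browsers or from a CDN edge node proxying to this origin:

    // Node's built-in HTTP server keeps HTTP/1.1 connections alive by default;
    // keepAliveTimeout controls how long an idle connection stays open for reuse.
    const http = require('http');

    const server = http.createServer((req, res) => {
      res.setHeader('Content-Type', 'text/plain');
      res.end('hello\n');
    });

    // Keep idle connections around so the next request can skip the TCP/TLS handshake.
    server.keepAliveTimeout = 65000;   // 65 s, a common choice behind proxies/CDNs
    server.headersTimeout = 66000;     // usually set slightly above keepAliveTimeout

    server.listen(8080);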

Related

How to fix HTTP/2.0 504 Gateway Timeout for multiple simultaneous XHR connections when using HTTP/2

I activated HTTP/2 support on my server. Now I have a problem with AJAX/jQuery scripts such as uploads or API handling.
After the PHP max_input_time of 60 seconds I get: [HTTP/2.0 504 Gateway Timeout 60034ms]
With HTTP/1, only a few connections were started simultaneously, and when one finished another started.
With HTTP/2, they all start at once.
When, for example, 100 images are uploaded, it takes too long for all of them to finish.
I don't want to change max_input_time. I would like to limit the simultaneous connections in the scripts.
thank you
HTTP/2 intentionally allows multiple requests in parallel. This differs from HTTP/1.1, which only allowed one request at a time per connection (but which browsers compensated for by opening 6 parallel connections). The downside to drastically increasing that limit is that you can have more requests on the go at once, contending for bandwidth.
You basically have two choices to resolve this:
1- Change your application to throttle uploads rather than expecting the browser or the protocol to do this for you (see the sketch after this list).
2- Limit the maximum number of concurrent streams allowed by your web server. In Apache, for example, this is controlled by the H2MaxSessionStreams directive, while in Nginx it is similarly controlled by the http2_max_concurrent_streams setting. Other streams will have to wait.
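For the first option, a hedged sketch in plain browser JavaScript (the /upload URL, the MAX_CONCURRENT value and the form field name are illustrative, not from the original question):

    // Upload files with at most MAX_CONCURRENT requests in flight at once,
    // instead of firing all 100 requests and letting HTTP/2 multiplex them.
    const MAX_CONCURRENT = 4;

    async function uploadAll(files, url) {
      const queue = Array.from(files);

      async function worker() {
        while (queue.length > 0) {
          const file = queue.shift();
          const body = new FormData();
          body.append('file', file);
          // Each upload finishes before this worker takes the next file, so the
          // server never sees more than MAX_CONCURRENT uploads from this page.
          await fetch(url, { method: 'POST', body });
        }
      }

      // A fixed-size pool of workers drains the queue.
      await Promise.all(Array.from({ length: MAX_CONCURRENT }, () => worker()));
    }

    // Example usage:
    // uploadAll(document.querySelector('input[type=file]').files, '/upload');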

How to use HTTP/2 in my production setup?

I have one load balancer and 5 origin servers. For every request, Akamai hits the LB and the request is served randomly by any of the servers. Is it okay if I enable HTTP/2 in one of the origin servers?
How will it impact my system?
How can I measure the performance impact?
Also, does the ALPN step happen at every hop?
Akamai is a CDN. This means it handles all incoming traffic - likely with a server closer to the user than your origin server - and then either serves cacheable assets directly or passes non-cacheable requests back to your origin servers.
HTTP is a hop-by-hop protocol (mostly; ignoring the CONNECT method for now, as it is only used by some proxies). This means the client connects to Akamai (likely using HTTP/2), and then Akamai connects to your origin server over a separate HTTP connection (HTTP/1.1, as Akamai does not support HTTP/2 to origin).
So, to answer your question: enabling HTTP/2 on one of your origin servers will have no effect, as neither clients nor Akamai will use it.
Whether HTTP/2 to origin is needed or beneficial is debatable. The biggest gain will be over high-latency connections (like the initial client-to-Akamai leg), especially as the browser typically limits you to 6 connections per domain. The Akamai-to-origin leg is, in comparison, typically over a fast connection (even if across a long distance) and is not necessarily limited to 6 connections.
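If you want to see this for yourself, a minimal sketch (Node.js assumed as the origin; the port is illustrative) that logs which HTTP version actually reaches the origin. Behind Akamai you would expect to see 1.1 here even when end users speak HTTP/2 to the edge:

    const http = require('http');

    http.createServer((req, res) => {
      // req.httpVersion reports the version used on this hop (origin side),
      // not the version the end user negotiated with the CDN edge.
      console.log(`origin saw HTTP/${req.httpVersion} for ${req.url}`);
      res.end('ok\n');
    }).listen(8080);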

How to host images when using HTTP/2

We are migrating our page to HTTP/2.
When using HTTP/1 there was a limitation of 2 concurrent connections per host. Usually, a technique called sharding was used to work around that.
So content was delivered from www.example.com and the images from img.example.com.
Also, you wouldn't send all the cookies for www.example.com to the image domain, which also saves bandwidth (see What is a cookie free domain).
Things have changed with HTTP/2; what is the best way to serve images using HTTP/2?
same domain?
different domain?
In short:
No sharding is required; HTTP/2 web servers usually have a liberal concurrent-stream limit.
As with HTTP/1.1, keep the files as small as possible; HTTP/2 is still bound by the same physical bandwidth constraints.
Multiplexing is a real plus for concurrent image loading. There are a few demos out there; you can Google them. I can point you to the one that I did:
https://demo1.shimmercat.com/10/
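To illustrate the multiplexing point, a hedged sketch in browser JavaScript (the image URLs and count are made up): over HTTP/2, all of these same-origin requests can be interleaved as streams on a single connection, so there is no need to shard images onto a separate img.example.com host:

    // Request many images from the same origin at once; with HTTP/2 the browser
    // can multiplex them on one connection instead of queuing them behind a
    // small per-host connection limit.
    const urls = Array.from({ length: 50 }, (_, i) => `/images/photo-${i}.jpg`);

    Promise.all(urls.map(u => fetch(u).then(r => r.blob())))
      .then(blobs => console.log(`loaded ${blobs.length} images`));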

websockets with load balancer scalability

I use a load balancer with my web site. The browser initiates a WebSocket connection to my app server. Does the open connection consume any resources on the LB, or is it direct between the browser and the app server? If there is something open on the LB, isn't that a bottleneck? I mean, if my LB can handle X open connections, then the (X+1)th user could not even open a connection.
It depends!
The most efficient load balancers listen for requests, do some analysis, then forward the requests; all the bits do not travel through the load balancer. The network forwarding happens at a lower network layer than HTTP (e.g., it is not an HTTP 302 redirect - the client never knows it happened, which keeps the internal network configuration private; this happens at OSI Layer 4, I think).
However, some load balancers add more features, like acting as SSL/TLS endpoints or applying gzip compression. In those cases they do process the bits as they pass through (encrypting/decrypting or compressing them).
A picture may help. Compare the first diagram with the second & third here, noting redirection in the first that is absent in the others.

Pros and Cons of Keep-Alive from Web Server Side

The Keep-Alive connection feature in the HTTP protocol is meant to reduce the number of TCP connections hitting the web server, so it should improve web server performance. However, I found that some web servers deliberately disable the Keep-Alive feature on the server side.
In my understanding, some reverse proxies, e.g. HAProxy, disable HTTP keep-alive in order to reduce memory usage, which in some situations is more critical than CPU usage.
Is there any other reason why a web server would disable Keep-Alive?
Actually, Keep-Alive is meant to improve HTTP performance, not server performance (though for SSL connections it does reduce the cost on the server of re-negotiating the encryption). The big win is in the number of round trips the browser has to make to get the content. With Keep-Alive the browser gets to eliminate a full round trip for every request after the first, usually cutting full page load times in half.
Keep-Alive increases server load, which is why some shared hosting providers disable it. Each open connection consumes memory as well as a file descriptor (on Linux), and in extreme cases (some Apache configurations) there may be a 1:1 mapping from connections to processes.
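To make the trade-off concrete, a minimal sketch (Node.js assumed; the timeout value is illustrative) of the two knobs a loaded server has: keep connections reusable but drop idle ones quickly, or tell clients to close after each response, which is roughly what hosts that disable Keep-Alive are doing:

    const http = require('http');

    const server = http.createServer((req, res) => {
      // To mimic "Keep-Alive disabled", a server can ask the client to close
      // the connection after every response:
      // res.setHeader('Connection', 'close');
      res.end('ok\n');
    });

    // The usual compromise instead: leave Keep-Alive on, but reclaim idle
    // connections quickly so they do not pin memory and file descriptors.
    server.keepAliveTimeout = 2000;   // 2 s

    server.listen(8080);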
