How does a WebSocket CDN work with bidirectional data?

I see that Cloudflare has a WebSocket CDN, but I'm confused about how it would cache bidirectional data. With a normal HTTP request, it would cache the response and then serve it from the CDN.
With a WebSocket, how does Cloudflare cache the data? Especially since the socket can be bidirectional.

Caching is really only a small part of what a CDN does.
Cloudflare (and really any CDN that would offer this service) would serve two purposes off the top of my head:
Network connection optimization - The browser endpoint can hold a keep-alive connection to whatever the closest Point of Presence (PoP) is to it. Depending on Cloudflare's internal architecture, traffic could then take an optimized network path to a PoP closer to the origin, or to the origin itself. This path may have significantly better routing and performance than having the browser go straight to the origin.
Site consistency - By offering WebSockets, a CDN lets end users stay on the same URL without having to mess around with cross-origin issues or the complexity of maintaining multiple domains.
Both of these fall under what is often called "Full Site Acceleration" or "Dynamic Site Acceleration".
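To make the "no caching, just relaying" point concrete, here is a minimal sketch (in Node.js/TypeScript, and emphatically not Cloudflare's actual implementation) of how an edge proxy can handle a WebSocket: it terminates the client's connection at the PoP, opens its own connection to the origin, and relays bytes in both directions. The origin hostname is a placeholder.

```typescript
import * as http from "http";
import * as net from "net";

const ORIGIN_HOST = "origin.example.com"; // placeholder origin
const ORIGIN_PORT = 80;

// Plain HTTP requests are not our concern here.
const proxy = http.createServer((req, res) => {
  res.writeHead(400);
  res.end("WebSocket upgrade expected");
});

// The WebSocket handshake is ordinary HTTP; once it succeeds, the connection
// becomes an opaque bidirectional byte stream that the proxy simply relays.
proxy.on("upgrade", (req, clientSocket, head) => {
  const originSocket = net.connect(ORIGIN_PORT, ORIGIN_HOST, () => {
    // Replay the client's handshake to the origin...
    const headers = Object.entries(req.headers)
      .map(([k, v]) => `${k}: ${Array.isArray(v) ? v.join(", ") : v}`)
      .join("\r\n");
    originSocket.write(`GET ${req.url} HTTP/1.1\r\n${headers}\r\n\r\n`);
    originSocket.write(head); // ...plus any bytes already received.

    // Bidirectional relay: nothing is cached in either direction.
    clientSocket.pipe(originSocket);
    originSocket.pipe(clientSocket);
  });
  originSocket.on("error", () => clientSocket.destroy());
  clientSocket.on("error", () => originSocket.destroy());
});

proxy.listen(8080);
```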

Related

How to use HTTP/2 in my production setup?

I have one load balancer and 5 origin servers. For every request, Akamai hits the LB, and the request is served randomly by any of the servers. Is it okay if I enable HTTP/2 on just one of the origin servers?
How will it impact my system?
How can I measure the performance impact?
Also, does the ALPN step happen at every hop?
Akamai is a CDN. This means it handles all incoming traffic - likely with a server closer to the user than your origin server - and then either serves cacheable assets directly or passes non-cacheable requests back to your origin servers.
HTTP is a hop-by-hop protocol (mostly - ignoring the CONNECT method, which is only used by some proxies). This means the client connects to Akamai (likely using HTTP/2), and then Akamai connects to your origin server over a separate HTTP connection (HTTP/1.1, as Akamai does not support HTTP/2 to origin).
So, to answer your question: enabling HTTP/2 on one of your origin servers will have no effect, as neither clients nor Akamai will use it.
Whether HTTP/2 to origin is needed or beneficial is debatable. The biggest gain is over high-latency connections (like the initial client-to-Akamai leg), especially as the browser typically limits you to 6 connections per domain. The Akamai-to-origin leg is, by comparison, typically over a fast connection (even if across a long distance) and may not be limited to 6 connections.
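On the ALPN question: since each TLS connection negotiates its own protocol, ALPN does indeed happen at every hop. You can check what a given hop negotiates yourself; here is a rough Node.js/TypeScript sketch (the hostname is a placeholder) you could run once against the Akamai edge hostname and once against your origin:

```typescript
import * as tls from "tls";

// Offer h2 and http/1.1 and see which one this particular hop picks.
const socket = tls.connect(
  {
    host: "www.example.com", // placeholder: edge hostname or origin
    port: 443,
    servername: "www.example.com",
    ALPNProtocols: ["h2", "http/1.1"],
  },
  () => {
    // alpnProtocol is "h2", "http/1.1", or false if ALPN was not used.
    console.log("negotiated:", socket.alpnProtocol);
    socket.end();
  }
);
socket.on("error", (err) => console.error(err.message));
```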

How to host images when using HTTP/2

We are migrating our page to HTTP/2.
When using HTTP/1.x, browsers limited the number of concurrent connections per host (the old spec suggested 2; modern browsers allow around 6). Usually, a technique called sharding was used to work around that.
So content was delivered from www.example.com and the images from img.example.com.
Also, you wouldn't send all the cookies for www.example.com to the image domain, which also saves bandwidth (see What is a cookie free domain).
Things have changed with HTTP/2; what is the best way to serve images using HTTP/2?
same domain?
different domain?
Short:
No sharding is required; HTTP/2 multiplexes many requests over a single connection, and HTTP/2 web servers usually have a liberal concurrent-stream limit.
As with HTTP/1.1, keep the files as small as possible; HTTP/2 is still bound by the same physical bandwidth constraints.
Multiplexing is a real plus for concurrent image loading. There are a few demos out there that you can Google. I can point you to one that I did:
https://demo1.shimmercat.com/10/
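For completeness, here is a minimal sketch of serving images over HTTP/2 from the same domain using Node.js/TypeScript. It assumes self-signed key.pem/cert.pem files exist (browsers only speak HTTP/2 over TLS); the path handling and content type are simplified for illustration:

```typescript
import * as http2 from "http2";
import * as fs from "fs";
import * as path from "path";

// Browsers require TLS for HTTP/2, so a certificate is needed even locally.
const server = http2.createSecureServer({
  key: fs.readFileSync("key.pem"),
  cert: fs.readFileSync("cert.pem"),
});

server.on("stream", (stream, headers) => {
  // All images come from the same domain; one connection multiplexes
  // every request, so there is no need to shard across hostnames.
  const reqPath = path.normalize(headers[":path"] ?? "/");
  const file = path.join(__dirname, "static", reqPath);
  stream.respondWithFile(
    file,
    { "content-type": "image/jpeg" }, // simplified: detect per file in practice
    { onError: () => stream.respond({ ":status": 404 }, { endStream: true }) }
  );
});

server.listen(8443);
```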

Understanding HTTPS connection setup overhead

I'm building a web-based chat app which will need to make an AJAX request for every message sent or received. I'd like the data to be encrypted and am leaning towards running AJAX (with long-polling) over HTTPS.
However, since the frequency of requests here is a lot higher than with basic web browsing, I'd like to get a better understanding of the overhead (network usage, time, server CPU, client CPU) in setting up the encrypted connection for each HTTPS request.
Aside from any general info/advice, I'm curious about:
As a very rough approximation, how much extra time does an HTTPS request take compared to HTTP? Assume content length of 1 byte and an average PC.
Will every AJAX request after the first have anything significant cached, allowing it to establish the connection quicker? If so, how much quicker?
Thank you in advance :-)
Everything in HTTPS is slower: personal information shouldn't be cached, you have encryption on both ends, and the SSL handshake is relatively expensive.
Long-polling will help. Long keep-alives are good. Enabling SSL session resumption on your server will avoid a lot of the handshake overhead as well.
The real trick is going to be doing load-balancing or any sort of legitimate caching. Not sure how much that will come into play in your system, being a chat server, but it's something to consider.
You'll get more information from this article.
Most of the overhead is in the handshake (exchanging certificates, checking for their revocation, ...). Session resumption and the recent False Start extension help in that respect.
In my experience, the worst-case scenario happens when using client-certificate authentication and advertising too many CAs (the CertificateRequest message sent by the server can even become too big); this is quite rare, since in practice, when you use client-certificate authentication, you would only accept client certificates from a limited number of CAs.
If you configure your server properly, you can also enable browser caching for resources served over HTTPS (where it's appropriate) by using Cache-Control: public.
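On question 2 specifically: with a keep-alive agent, requests after the first reuse the existing TCP+TLS connection outright and skip the handshake entirely; even when a new connection is needed, session resumption shortens it. A rough Node.js/TypeScript sketch of the client side (hostname and path are placeholders):

```typescript
import * as https from "https";

// A keep-alive agent keeps the TCP+TLS connection open between
// long-poll requests, so only the first request pays the handshake cost.
const agent = new https.Agent({ keepAlive: true });

function poll(): Promise<void> {
  return new Promise((resolve, reject) => {
    const req = https.get(
      { host: "chat.example.com", path: "/poll", agent },
      (res) => {
        res.resume(); // drain the body
        res.on("end", () => {
          // reusedSocket is true when no new TCP/TLS setup was needed.
          console.log("reused existing connection:", req.reusedSocket);
          resolve();
        });
      }
    );
    req.on("error", reject);
  });
}

poll().then(poll); // the second request should log true
```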

Are there any downsides to running your full website over HTTPS?

I have a website that makes heavy use of AJAX. There is an almost constant transfer of sensitive data.
Because of this, I was thinking of running my full website over HTTPS, making it secure throughout the whole visit.
I was wondering if there are any downsides to doing this. Performance is a huge issue for me; the faster the app runs, the better. I can safely say that speed is a bigger issue than security.
On the security side, I already generate a new session ID when sensitive data is transferred, so there is no real need to make it all HTTPS, but if there are no downsides, why not use it.
Can someone please explain to me what the downsides are of using HTTPS for everything?
Well, there is obviously the overhead of encrypting everything all the time. It's probably not a huge problem for the client (since it's only encrypting data for a single connection), but it can become a bottleneck on the server (since it has to encrypt everything for every connection).
You could implement an SSL proxy where you have a front-end web server that talks SSL to clients and then forwards requests to the "backend" webservers for real processing. The backend webservers would be firewalled and not use SSL.
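A minimal sketch of that SSL-proxy idea in Node.js/TypeScript: the front end terminates TLS and forwards plain HTTP to a backend. The certificate paths and backend address here are assumptions, and real deployments typically use Nginx, HAProxy, or similar for this role:

```typescript
import * as https from "https";
import * as http from "http";
import * as fs from "fs";

// Assumed certificate files and backend address (firewalled, HTTP only).
const tlsOptions = {
  key: fs.readFileSync("key.pem"),
  cert: fs.readFileSync("cert.pem"),
};
const BACKEND = { host: "10.0.0.2", port: 8080 };

https
  .createServer(tlsOptions, (clientReq, clientRes) => {
    // Forward the request unencrypted to the backend for real processing.
    const upstream = http.request(
      {
        ...BACKEND,
        path: clientReq.url,
        method: clientReq.method,
        headers: clientReq.headers,
      },
      (backendRes) => {
        // Relay status, headers, and body back to the client over TLS.
        clientRes.writeHead(backendRes.statusCode ?? 502, backendRes.headers);
        backendRes.pipe(clientRes);
      }
    );
    upstream.on("error", () => {
      clientRes.writeHead(502);
      clientRes.end("Bad gateway");
    });
    clientReq.pipe(upstream); // stream any request body through
  })
  .listen(443); // binding to 443 usually requires elevated privileges
```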

What is the fastest way to serve images and other static content on a site (i.e., make loading faster)?

Ours is an e-commerce site with lots of images and Flash (the same heavy Flash rendered across all pages). All the static content is stored on and served from the web server (IHS, clustered across 2 nodes). We still notice that image delivery is slow. Is this approach correct at all? What are the alternative ways of doing this - maybe serving images through a third-party vendor or implementing some kind of caching?
P.S. All our pages are HTTPS. Could this be a reason?
Edit 1: The images are served over HTTPS too, so the alerts are a non-issue?
Edit 2: The loading is slower on IE, and most of our users are on IE. I am not sure if browser-specific styling could be causing the slower IE loading? (We have some browser-specific styling for IE.)
While serving pages over plain HTTP may be faster (though I doubt HTTPS is monstrously slower for small files), a good number of browsers will complain if included resources such as images and JS are not on https:// URLs. This will give your customers annoying popup notifications.
There are high-performance servers for static file serving, but unless your SSL certificate works for multiple subdomains, there are a variety of complications. Putting a high-performance server in front of your dynamic content server and reverse proxying might be an option, if that server can do the SSL negotiation. On Unix platforms, Nginx is well liked for its reverse proxying and static file serving. Proxy-cache setups like Squid may be an option too.
Serving static content from a cloud like Amazon's is an option, and some cloud providers let you use HTTPS as well, as long as you are fine with using a subdomain of their domain name (due to technical limitations in SSL).
You could use a CDN. If your site uses HTTPS, you'll need an HTTPS-capable CDN domain too (otherwise you might get a non-secure items warning).
Also, if your site uses cookies (and most do), hosting static assets on a CDN under a different domain (or, if you use www.example.com, under cdn.example.com) means the cookie data is not sent along with every static request.
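To illustrate the cookie-free static domain, here is a hedged Node.js/TypeScript sketch of a server for a cdn.example.com-style host: it never sets cookies and sends long-lived cache headers, so repeat image requests are small on the wire and often served from the browser cache entirely. The directory layout and content type are assumptions:

```typescript
import * as http from "http";
import * as fs from "fs";
import * as path from "path";

// Assumed directory of static assets for the cookie-free host.
const STATIC_DIR = path.join(__dirname, "static");

http
  .createServer((req, res) => {
    // Note: this server never calls Set-Cookie, so requests to this
    // domain carry no cookie payload in either direction.
    const file = path.join(STATIC_DIR, path.normalize(req.url ?? "/"));
    fs.readFile(file, (err, data) => {
      if (err) {
        res.writeHead(404);
        res.end();
        return;
      }
      res.writeHead(200, {
        "Cache-Control": "public, max-age=31536000", // cacheable for a year
        "Content-Type": "image/png", // assumed; detect per file in practice
      });
      res.end(data);
    });
  })
  .listen(8080);
```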
