How to host images when using HTTP/2

We are migrating our site to HTTP/2.
With HTTP/1.x, browsers limited the number of concurrent connections per host (the spec suggested two; modern browsers allow about six). A technique called domain sharding was commonly used to work around that:
content was delivered from www.example.com and the images from img.example.com.
Also, you wouldn't send all the cookies for www.example.com to the image domain, which also saves bandwidth (see "What is a cookie-free domain").
Things have changed with HTTP/2; what is the best way to serve images using HTTP/2?
same domain?
different domain?

In short:
No sharding is required; HTTP/2 multiplexes many requests over a single connection, and HTTP/2 web servers usually have a liberal concurrent-stream limit.
As with HTTP/1.1, keep the files as small as possible; HTTP/2 is still bound by the same physical bandwidth constraints.
Multiplexing is a real plus for concurrent image loading. There are a few demos out there, you can Google them. I can point you to the one that I did:
https://demo1.shimmercat.com/10/
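To see why multiplexing removes the need for sharding, here is a rough back-of-the-envelope latency model. All numbers (RTT, handshake cost, connection and stream limits) are illustrative assumptions, not measurements:

```python
# Rough latency model: fetching N small images over HTTP/1.1 vs HTTP/2.
# All numbers (RTT, handshake cost, limits) are illustrative assumptions.
import math

RTT_MS = 50              # assumed network round-trip time
SETUP_RTTS = 2           # simplified TCP + TLS connection setup cost, in RTTs

def http1_time(n_images, connections=6):
    # Each HTTP/1.1 connection serves one request per round trip
    # (no pipelining), so six connections do six images per "round".
    rounds = math.ceil(n_images / connections)
    return (SETUP_RTTS + rounds) * RTT_MS

def http2_time(n_images, max_streams=100):
    # One connection, one setup cost; up to max_streams requests
    # are multiplexed into a single round trip.
    rounds = math.ceil(n_images / max_streams)
    return (SETUP_RTTS + rounds) * RTT_MS

print(http1_time(30))  # 350 ms: 5 sequential rounds over 6 connections
print(http2_time(30))  # 150 ms: all 30 images in one multiplexed round
```

Under this (deliberately simplified) model, multiplexing wins on concurrency alone; sharding would only add more handshake costs.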


Which is better regarding performance; keep-alive or CDN

Which is better for performance: a keep-alive connection fetching resources from the same server, or getting resources from a public CDN?
Edit 1:
By performance I mean page loading time.
I am comparing these two scenarios to find which gives faster page loading:
1- Serving the HTML and JavaScript from one (my) server. This benefits from connection keep-alive, which reduces the number of TCP/TLS handshakes to different servers.
2- Using a CDN for the JavaScript files. This benefits from the CDN's advantages, but also requires TCP connections (and handshakes) to additional servers.
Edit 2:
Please see the figures below from the book High Performance Browser Networking by Ilya Grigorik.
The first two figures show a request for HTML from my server and CSS from another server. Keep-alive is no advantage here, as the two requests go to different servers, which adds time for an extra TCP handshake and slow start.
The next figure shows HTML and CSS served from the same server using keep-alive.
Comparing the two loading times:
1- My server + CDN = 284 ms
2- Only my server + keep-alive = 228 ms
The difference between them is the 56 ms required for the TCP handshake to the CDN server.
Additionally, if I added request pipelining to the single server, the load time would drop to 172 ms.
The best option is to use keep-alive with a CDN.
These are orthogonal things:
keep-alive is a feature of HTTP/1.1 that reduces protocol overhead;
a CDN reduces the distance to the server and increases bandwidth.
The goal of both is to reduce latency. Keep-alive should simply always be on; most modern HTTP servers support it.
For static content, though, a CDN will usually provide a much more noticeable performance improvement. It will still use keep-alive, just with a CDN server.
If I use CDN for javascript files ... it will require multiple TCP connections to different servers
Even if you serve everything from your own server, most browsers open several connections (typically up to six per host) simultaneously. So it doesn't matter much if you serve HTML from one server and JS from another.
Also, most CDNs choose a server once (using DNS), and your client then keeps communicating with that same server. So at most one server for HTML and one for JS. Or you can choose to proxy everything, including dynamic HTML, through the CDN.
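The arithmetic in the question can be sketched as a tiny model: each extra origin costs one additional TCP handshake on top of a base load time. The 228 ms and 56 ms figures come from the question's example and are illustrative:

```python
# Sketch of the arithmetic from the question: adding a second origin
# (the CDN) costs one extra TCP handshake. Timings are the question's
# example numbers, used purely for illustration.
TCP_HANDSHAKE_MS = 56  # one extra round trip to the CDN in the example

def load_time(base_ms, extra_origins=0, handshake_ms=TCP_HANDSHAKE_MS):
    """Page load time given a base time and extra origins to connect to."""
    return base_ms + extra_origins * handshake_ms

same_server = load_time(228)                  # HTML + CSS over one keep-alive connection
with_cdn = load_time(228, extra_origins=1)    # plus one handshake to the CDN

print(same_server, with_cdn)  # 228 284
```

What the model leaves out is the CDN's lower round-trip time and higher bandwidth, which in practice usually more than repay the one-time handshake cost — hence the recommendation to use keep-alive with a CDN.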

How does a websocket cdn work with bidirectional data?

I see that Cloudflare has a WebSocket CDN, but I'm confused about how it would cache bidirectional data. With a normal HTTP request, it would cache the response and then serve it from the CDN.
With a WebSocket, how does Cloudflare cache the data? Especially since the socket can be bidirectional.
Caching is really only a small part of what a CDN does.
CloudFlare (and really any CDN that would offer this service) would serve two purposes, off the top of my head:
Network connection optimization - The browser endpoint would be able to have a keepalive connection to whatever the closest Point of Presence (PoP) is to them. Depending on CloudFlare's internal architecture, it could then take an optimized network path to a PoP closer to the origin, or to the origin itself. This network path may have significantly better routing and performance than having the browser go straight to the origin.
Site consistency - By offering WebSockets, a CDN is able to let end users stay on the same URL without having to mess around with any cross-origin issues or complexities of maintaining multiple domains.
Both of these go hand in hand with a term often called "Full Site Acceleration" or "Dynamic Site Acceleration".

Setting up a Server as a CDN

We've got a server and domain we use basically now as a big hard drive with video files and images (hosted by MediaTemple). What would it take to set up this server and domain as a CDN?
I saw this article:
http://www.riyaz.net/blog/how-to-setup-your-own-cdn-in-30-minutes/technology/890/
But that looks to be aliasing to the box, and not actually moving the content. Our content is actually hosted on a different box.
One of the tenets of a CDN is that content is geographically close to the client - if you only have one CDN server (rather than several replicated servers), it's not a CDN.
However, you can still get some of the benefits of a CDN. Browsers will typically only fetch a limited number of resources in parallel (around six to eight) from any given hostname. You can give your 'CDN' server several subdomain hostnames and round-robin requests across them.
www1.example.com
www2.example.com
www3.example.com
...
This will effectively triple the number of concurrent requests a browser will make to your server, as it will see the three hostnames as three separate web servers.
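One detail worth getting right when sharding like this: each resource should always map to the same shard hostname, or the browser cache will end up with duplicate copies under different hosts. A minimal sketch, using a stable hash (Python's built-in `hash()` is randomized per process, so it won't do) and the hypothetical hostnames from above:

```python
# Deterministically map each resource path to one of the shard
# hostnames, so the same resource always uses the same hostname
# across page loads and caches stay warm. Hostnames are placeholders.
import hashlib

SHARDS = ["www1.example.com", "www2.example.com", "www3.example.com"]

def shard_for(path: str) -> str:
    # MD5 is fine here: we only need a stable, well-spread mapping,
    # not cryptographic strength.
    digest = hashlib.md5(path.encode("utf-8")).digest()
    return SHARDS[digest[0] % len(SHARDS)]

# The same path always maps to the same shard hostname:
print(shard_for("/img/logo.png") == shard_for("/img/logo.png"))  # True
```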
It's basically like creating a "best route possible" server for your client.
What you basically do is assign multiple IP addresses to one hostname. For example:
non-static content (dynamic pages) is on www.example.com,
whereas the static files (JS, AVI, etc.) are stored on media.cdn.example.com.
media.cdn.example.com resolves to 1.2.3.4; 8.8.9.9; 103.10.4.5; etc.,
so the resolver on the user's end will find the server nearest to that location, and that will be your CDN.
Another way is to force content to be served over a certain route, which pushes the router to do the same.
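As a toy illustration of "find the nearest server": given the IPs that media.cdn.example.com resolves to, pick the one with the lowest latency. The latency numbers here are made-up placeholders, and a real GeoDNS setup makes this choice on the DNS side rather than on the client:

```python
# Toy illustration of nearest-server selection. The IPs are the example
# addresses from the answer; the latencies are made-up placeholders.
measured_latency_ms = {
    "1.2.3.4": 120,
    "8.8.9.9": 35,
    "103.10.4.5": 210,
}

# Pick the address with the lowest measured round-trip time.
nearest = min(measured_latency_ms, key=measured_latency_ms.get)
print(nearest)  # 8.8.9.9
```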

Are there any downsides of running your full website in https

I have a website that makes heavy use of AJAX. There is an almost constant transfer of sensitive data.
Because of this I was thinking of running my full website in HTTPS, making it secure throughout the visit.
I was wondering if there are any downsides to doing this. Performance is a huge issue for me; the faster the app runs, the better. I can safely say that speed is a larger issue than security.
On the security side, I already generate a new session ID when sensitive data is transferred, so there is no real need to make it all HTTPS, but if there are no downsides, why not use it?
Can someone please explain to me what the downsides are of using HTTPS for everything?
Well, there is obviously the overhead of encrypting everything all the time. It's probably not a huge problem for the client (since it's only encrypting data for a single connection) but it can become a bottleneck on the server (since it has to encrypt everything for every connection).
You could implement an SSL proxy where you have a front-end web server that talks SSL to clients and then forwards requests to the "backend" webservers for real processing. The backend webservers would be firewalled and not use SSL.
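A minimal sketch of such an SSL-terminating front end, here using nginx. The hostname, certificate paths, and backend address are all placeholders you would replace with your own:

```nginx
# Minimal sketch: nginx as an SSL-terminating reverse proxy.
# Hostname, certificate paths, and backend address are placeholders.
server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        # Plain HTTP to the firewalled backend; no SSL work done there.
        proxy_pass http://10.0.0.10:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

This keeps the encryption cost on one front-end box (which can use hardware or session-resumption optimizations) while the backends handle only application logic.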

What is the fastest way(/make loading faster) to serve images and other static content on a site?

Ours is an e-commerce site with lots of images and Flash (the same heavy Flash asset is rendered across all pages). All the static content is stored and served from the web server (IHS, clustered, 2 nodes). We still notice that image delivery is slow. Is this approach correct at all? What are the alternative ways of doing this, like serving images through a third-party vendor or implementing some kind of caching?
P.S. All our pages are HTTPS. Could this be a reason?
Edit 1: The images are served over HTTPS too, so the mixed-content alerts are a non-issue?
Edit 2: The loading is slower on IE, and most of our users are on IE. I am not sure if browser-specific styling could be causing the slower IE loading? (We have some browser-specific styling for IE.)
While serving pages over plain HTTP may be somewhat faster (though HTTPS is not monstrously slow for small files), a good number of browsers will complain if included resources such as images and JS are not on https:// URLs. This will give your customers annoying popup notifications.
There are high-performance servers for static file serving, but unless your SSL certificate works for multiple subdomains, there are a variety of complications. Putting a high-performance server in front of your dynamic content server and reverse proxying might be an option, if that server can do the SSL negotiation. On Unix platforms, Nginx is well liked for its reverse proxying and static file serving. Proxy-cache setups like Squid may be an option too.
Serving static content from a cloud service like Amazon's is an option, and some cloud providers let you use HTTPS as well, as long as you are fine with using a subdomain of their domain name (due to technical limitations in SSL certificates).
You could create a CDN. If your site uses HTTPS, you'll need an HTTPS-capable CDN domain too (otherwise you might get a non-secure items warning).
Also, if your site uses cookies (and most do), putting the CDN on a different domain (or, if you use www.example.com, on cdn.example.com) means you can host static assets there without cookie data being sent along in every request.
