What is the fastest way to serve images and other static content on a site (i.e. make loading faster)? Caching?

Ours is an e-commerce site with lots of images and Flash (the same heavy Flash asset is rendered across all pages). All the static content is stored and served from the web server (IHS, clustered across 2 nodes). We still notice that image delivery is slow. Is this approach correct at all? What are the alternative ways of doing this, such as serving images through a third-party vendor or implementing some kind of caching?
P.S. All our pages are HTTPS. Could this be a reason?
Edit 1: The images are served over HTTPS too, so the mixed-content alerts are a non-issue?
Edit 2: Loading is slower in IE, and most of our users are on IE. I am not sure if browser-specific styling could be causing the slower IE loading? (We have some browser-specific styling for IE.)

While serving pages over plain HTTP may be faster (though I doubt HTTPS is monstrously slow for small files), a good many browsers will complain if included resources such as images and JS are not on https:// URLs. This will give your customers annoying popup notifications.
There are high-performance servers for static file serving, but unless your SSL certificate works for multiple subdomains, there are a variety of complications. Putting the high-performance server in front of your dynamic content server and reverse proxying might be an option, if that server can do the SSL negotiation. On Unix platforms, Nginx is well liked for its reverse proxying and static file serving. Proxy-cache setups like Squid may be an option too.
Serving static content from a cloud provider like Amazon is an option, and some of the cloud providers let you use HTTPS as well, as long as you are fine with using a subdomain of their domain name (due to technical limitations in SSL).
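As a rough sketch of the reverse-proxy option, an Nginx front end could terminate SSL, serve the static files directly, and pass everything else to the dynamic content server (all hostnames and paths below are placeholders, not from the original setup):

    server {
        listen 443 ssl;
        server_name www.example.com;

        ssl_certificate     /etc/ssl/example.com.crt;
        ssl_certificate_key /etc/ssl/example.com.key;

        # Serve images and other static assets straight from disk,
        # with long-lived cache headers so browsers re-use them.
        location /static/ {
            root /var/www;
            expires 30d;
        }

        # Everything else is reverse proxied to the application server.
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

Because Nginx handles the SSL negotiation here, the backend and the static files can stay on plain HTTP internally while every URL the browser sees remains https://.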

You could use a CDN. If your site uses HTTPS, the CDN will need to serve over HTTPS too (otherwise you might get a non-secure-items warning).
Also, if your site uses cookies (and most do), hosting static assets on a different domain (or, if you use www.example.com, on cdn.example.com) means the cookie data is not sent with every static request, which saves bandwidth.
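As a rough illustration of the cookie savings (hostnames reused from the example above): a page request to the main domain carries the cookies on every hit, while an image request to the CDN domain does not, as long as the cookies are scoped to the main domain:

    GET /product/42 HTTP/1.1
    Host: www.example.com
    Cookie: session=abc123; tracking=xyz

    GET /images/logo.png HTTP/1.1
    Host: cdn.example.com

Multiply the missing Cookie header by dozens of images per page and the bandwidth saving adds up.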

Related

How does a WebSocket CDN work with bidirectional data?

I see that Cloudflare has a WebSocket CDN, but I'm confused about how it would cache bidirectional data. With a normal HTTP request, it would cache the response and then serve it from the CDN.
With a WebSocket, how does Cloudflare cache the data? Especially since the socket can be bidirectional.
Caching is really only a small part of what a CDN does.
CloudFlare (and really any CDN that would offer this service) serves two purposes off the top of my head:
Network connection optimization - The browser endpoint would be able to have a keepalive connection to whatever the closest Point of Presence (PoP) is to them. Depending on CloudFlare's internal architecture, it could then take an optimized network path to a PoP closer to the origin, or to the origin itself. This network path may have significantly better routing and performance than having the browser go straight to the origin.
Site consistency - By offering WebSockets, a CDN is able to let end users stay on the same URL without having to mess around with any cross-origin issues or complexities of maintaining multiple domains.
Both of these go hand in hand with what is often called "Full Site Acceleration" or "Dynamic Site Acceleration".
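To make the first point concrete: a WebSocket-aware proxy forwards the Upgrade handshake and then just relays frames in both directions; nothing is cached. A minimal sketch of what any such proxy layer does (shown here as a hypothetical Nginx config, not CloudFlare's actual internals):

    server {
        listen 443 ssl;
        server_name ws.example.com;

        ssl_certificate     /etc/ssl/example.com.crt;
        ssl_certificate_key /etc/ssl/example.com.key;

        location /socket/ {
            proxy_pass http://origin.example.com;
            proxy_http_version 1.1;
            # Forward the WebSocket handshake headers; after the upgrade
            # the proxy simply pipes bidirectional frames, nothing is cached.
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            # Keep long-lived sockets open.
            proxy_read_timeout 3600s;
        }
    }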

How to host images when using HTTP/2

We are migrating our page to HTTP/2.
When using HTTP/1.1, browsers limited the number of concurrent connections per host (the spec originally suggested 2). A technique called domain sharding was usually used to work around that.
So content was delivered from www.example.com and the images from img.example.com.
Also, you wouldn't send all the cookies for www.example.com to the image domain, which saves bandwidth (see "What is a cookie-free domain").
Things have changed with HTTP/2; what is the best way to serve images using HTTP/2?
same domain?
different domain?
In short:
No sharding is required; HTTP/2 web servers usually have a liberal connection limit.
As with HTTP/1.1, keep the files as small as possible; HTTP/2 is still bound by the same physical bandwidth constraints.
Multiplexing is a real plus for concurrent image loading. There are a few demos out there that you can Google; I can point you to one that I did:
https://demo1.shimmercat.com/10/
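The practical consequence: because of multiplexing, all the images can share one connection to the same host, so enabling HTTP/2 on the existing domain replaces sharding. A minimal sketch for Nginx (the http2 flag on the listen directive; exact syntax depends on your Nginx version):

    server {
        # One TLS listener with HTTP/2; no img.example.com shard needed.
        listen 443 ssl http2;
        server_name www.example.com;

        ssl_certificate     /etc/ssl/example.com.crt;
        ssl_certificate_key /etc/ssl/example.com.key;

        # Pages and images are multiplexed over the same connection.
        location /images/ {
            root /var/www;
            expires 30d;
        }
    }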

Is it better to use CORS or nginx proxy_pass for a RESTful client-server app?

I have a client-server app where the server is a Ruby on Rails app that renders JSON and understands RESTful requests. It's served by nginx+Passenger and its address is api.whatever.com.
The client is an AngularJS application that consumes these services (whatever.com). It is served by a second nginx server and its address is whatever.com.
I can either use CORS for cross-subdomain AJAX calls or configure the client's nginx to proxy_pass requests to the Rails application.
Which one is better in terms of performance and less trouble for developers and server admins?
Unless you're Facebook, you are not going to notice any performance hit from having an extra reverse proxy. The overhead is tiny: it's basically parsing a bunch of bytes and then sending them over a local socket to another process. A reverse proxy in nginx is easy enough to set up, and it's unlikely to be an administrative burden.
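A minimal sketch of that proxy_pass setup (using the hostnames from the question) keeps every browser request on whatever.com, so CORS never comes into play:

    server {
        listen 80;
        server_name whatever.com;

        # The static AngularJS client.
        root /var/www/client;

        # Same-origin path forwarded to the Rails API: a request for
        # /api/widgets is proxied as /widgets on api.whatever.com.
        location /api/ {
            proxy_pass http://api.whatever.com/;
            proxy_set_header Host api.whatever.com;
        }
    }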
You should worry more about browser support. CORS is supported in almost every browser, except of course older versions of Internet Explorer and some mobile browsers.
Juvia uses CORS but falls back to JSONP. No reverse proxy setup.

Setting up a Server as a CDN

We've got a server and domain that we basically use as a big hard drive for video files and images (hosted by MediaTemple). What would it take to set up this server and domain as a CDN?
I saw this article:
http://www.riyaz.net/blog/how-to-setup-your-own-cdn-in-30-minutes/technology/890/
But that looks to be aliasing to the box, and not actually moving the content. Our content is actually hosted on a different box.
One of the tenets of a CDN is that content is geographically close to the client - if you only have one CDN server (rather than several replicated servers), it's not a CDN.
However, you can still get some of the benefits of a CDN. Browsers will typically only fetch a handful of resources (around 6-8) in parallel from any given hostname. You can give your 'CDN' server several subdomain hostnames and round-robin requests across them:
www1.example.com
www2.example.com
www3.example.com
...
This will effectively triple the number of concurrent requests a browser will make to your server, as it will see the three hostnames as three separate web servers.
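On the server side this can be a single block answering for all the aliases; a minimal Nginx sketch (assuming all the subdomains point at the same box):

    server {
        listen 80;
        # One box, three names: the browser treats each hostname as a
        # separate server and opens a fresh pool of parallel connections.
        server_name www1.example.com www2.example.com www3.example.com;

        root /var/www/static;
        expires 30d;
    }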
It's basically like creating a "best route possible" server for your client.
What you basically do is put multiple IP addresses behind one hostname. For example:
Non-static content (dynamic pages) lives on www.example.com,
whereas the JPG, AVI, etc. files are stored on media.cdn.example.com.
media.cdn.example.com resolves to 1.2.3.4; 8.8.9.9; 103.10.4.5; etc.,
so the resolver on the user's end will pick the address nearest to their location, and that will be your CDN.
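In DNS zone-file terms (using the example addresses above), that is simply several A records on one name:

    media.cdn.example.com.  300  IN  A  1.2.3.4
    media.cdn.example.com.  300  IN  A  8.8.9.9
    media.cdn.example.com.  300  IN  A  103.10.4.5

Note that a plain round-robin DNS like this alternates answers rather than picking the nearest one; true nearest-server selection needs a GeoDNS or anycast setup.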
Another way is to force content to be served along a certain route, which pushes the router to do the same.

IIS cache with PURGE support

On Unix, I normally deploy nginx in front of Varnish in front of my application server. Both nginx and Varnish are acting as reverse proxies here. Varnish maintains a cache and supports things like If-Modified-Since, Cache-Control response headers and PURGE requests from the application. nginx is good at receiving a lot of connections. I also use it to serve some static content, enable gzip compression etc.
On Windows, I can manage with Squid in front of IIS. I'm planning to deploy my (Python) application as an ISAPI wildcard filter (using the isapi-wsgi package), so the application will live in a thread pool managed by IIS.
However, Squid development on Windows appears to have stalled, and I'd prefer to keep IIS on port 80, so that I can serve certain things directly from disk. I also suspect IIS is more resilient in handling lots of connections than Squid on Windows.
What do people normally use here? One option would be to use another free-standing caching proxy in front of IIS. Another option may be something installed as an ISAPI filter, which would intercept requests and respond to things like If-Modified-Since, requests for images and other cached resources, and PURGE requests from the application.
Does such a thing exist? Or are the only real choices Squid and MS ISA (too expensive)?
Cheers,
Martin
IIS7 with Application Request Routing (see http://www.iis.net/download/ApplicationRequestRouting) supports full proxy caching on the same box or with the cache server in front of your middle tier.
Once ARR is installed, to enable proxy caching from the command line run the following:
%windir%\System32\inetsrv\appcmd.exe set config -section:system.webServer/diskCache /+"[path='C:\MyCacheFolder',maxUsage='0']" /commit:apphost
To vary caching based on query string, execute the following:
%windir%\System32\inetsrv\appcmd.exe set config -section:system.webServer/proxy /cache.queryStringHandling:"Accept" /commit:apphost
See the documentation link above for more details. Notice that static and dynamic content can have different caching strategies, etc. If you pursue this, follow up with specific questions; it can be a bit of a trick lining everything up if you're looking for fine-grained control.
