I have limited bandwidth with my web hosting company as I picked the cheapest plan.
Is embedding images hosted on a site like www.imgur.com in your pages slower than having them on the same server as your website's files?
e.g.
<img src="photo.png" alt="Image hosted on same server">
<img src="http://i.imgur.com/2kQirjx.png" alt="Image hosted on imgur">
It depends on the performance of both servers. The browser requests the image while the page is loading, an HTTP connection is established, and the transfer speed then depends on the server at the other end. Put simply, if your own bandwidth is limited and the alternative is to fetch the resource from a CDN, the CDN is usually the better choice.
Not always, but it sometimes (often?) is. Additionally, the external server may be down - your users will not receive the external resources until that server is up again.
Imgur is a high traffic service, so outages may be not-so-infrequent there.
I see that Cloudflare has a WebSocket CDN, but I'm confused about how it would cache bidirectional data. With a normal HTTP request, it would cache the response and then serve it from the CDN.
With a WebSocket, how does Cloudflare cache the data? Especially since the socket can be bi-directional.
Caching is really only a small part of what a CDN does.
CloudFlare (and really any CDN that would offer this service) serves two purposes, off the top of my head:
Network connection optimization - The browser endpoint would be able to have a keepalive connection to whatever the closest Point of Presence (PoP) is to them. Depending on CloudFlare's internal architecture, it could then take an optimized network path to a PoP closer to the origin, or to the origin itself. This network path may have significantly better routing and performance than having the browser go straight to the origin.
Site consistency - By offering WebSockets, a CDN is able to let end users stay on the same URL without having to mess around with any cross-origin issues or complexities of maintaining multiple domains.
Both of these go hand in hand with a term often called "Full Site Acceleration" or "Dynamic Site Acceleration".
We are migrating our page to HTTP/2.
When using HTTP/1.x there was a practical limit on concurrent connections per host (the HTTP/1.1 spec suggested two; modern browsers allow around six). Usually, a technique called domain sharding was used to work around that.
So content was delivered from www.example.com and the images from img.example.com.
Also, you wouldn't send all the cookies for www.example.com to the image domain, which also saves bandwidth (see What is a cookie free domain).
Things have changed with HTTP/2; what is the best way to serve images using HTTP/2?
same domain?
different domain?
Short:
No sharding is required; HTTP/2 web servers usually have a liberal connection limit.
As with HTTP/1.1, keep the files as small as possible; HTTP/2 is still bound by the same physical bandwidth constraints.
Multiplexing is a real plus for concurrent image loading. There are a few demos out there that you can Google. I can point you to the one that I did:
https://demo1.shimmercat.com/10/
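If you want to see the multiplexing from the client side, here is a rough sketch using the third-party httpx client with HTTP/2 support enabled (e.g. pip install "httpx[http2]"). The origin is the demo linked above, but the image paths are made up purely for illustration:

import httpx

# Demo origin from the link above; the image paths below are hypothetical.
BASE = "https://demo1.shimmercat.com"
PATHS = ["/10/img1.jpg", "/10/img2.jpg", "/10/img3.jpg"]

with httpx.Client(http2=True) as client:
    for path in PATHS:
        resp = client.get(BASE + path)
        # All of these requests share one multiplexed HTTP/2 connection,
        # so there is no need for sharded hostnames.
        print(path, resp.status_code, resp.http_version)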
We've got a server and domain we use basically now as a big hard drive with video files and images (hosted by MediaTemple). What would it take to setup this server and domain as a CDN?
I saw this article:
http://www.riyaz.net/blog/how-to-setup-your-own-cdn-in-30-minutes/technology/890/
But that looks to be aliasing to the box, and not actually moving the content. Our content is actually hosted on a different box.
One of the tenets of a CDN is that content is geographically close to the client - if you only have one CDN server (rather than several replicated servers), it's not a CDN.
However, you can still get some of the benefits of a CDN. Browsers will typically only fetch 8 resources in parallel from any given hostname. You can give your 'CDN' server several subdomain hostnames and round-robin requests.
www1.example.com
www2.example.com
www3.example.com
...
This will effectively triple the number of concurrent requests a browser will make to your server, as it will see the three hostnames as three separate web servers.
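If you do shard like this, make sure each file is always referenced through the same hostname, otherwise browsers will cache the same resource once per hostname. A small sketch of one way to pin assets to shards (the hostnames are the examples above; the hashing choice is just illustrative):

import zlib

SHARDS = ["www1.example.com", "www2.example.com", "www3.example.com"]

def shard_url(path: str) -> str:
    # Hash the path so a given file is always served from the same shard,
    # which keeps browser caching effective.
    index = zlib.crc32(path.encode("utf-8")) % len(SHARDS)
    return "https://" + SHARDS[index] + path

print(shard_url("/images/logo.png"))
print(shard_url("/video/intro.mp4"))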
It's basically like creating a "best route possible" server for your client.
What you basically do is assign multiple IP addresses to one hostname. For example:
Non-static content (dynamic pages) is served from www.example.com,
whereas the JSP, AVI, etc. files are stored on media.cdn.example.com.
media.cdn.example.com resolves to 1.2.3.4; 8.8.9.9; 103.10.4.5; etc.,
so the routing on the user's end picks the nearest of those addresses, and that becomes your CDN node.
Another way is to force content to be served over a certain route, which in turn pushes the routers to do the same.
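To check what a round-robin name such as media.cdn.example.com actually resolves to, a quick sanity check (that hostname is the placeholder from above; substitute one you control):

import socket

# Collect the distinct addresses behind the hostname's A/AAAA records.
infos = socket.getaddrinfo("media.cdn.example.com", 443, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
print(addresses)  # e.g. ['1.2.3.4', '8.8.9.9', '103.10.4.5'] if DNS is set up that way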
Ours is an e-commerce site with lots of images and Flash (the same heavy Flash is rendered across all pages). All the static content is stored and served from the web server (IHS, clustered, 2 nodes). We still notice that image delivery is slow. Is this approach correct at all? What are the alternative ways of doing this, like serving images through a third-party vendor or implementing some kind of caching?
P.S. All our pages are HTTPS. Could this be a reason?
Edit 1: The images are served up from HTTPS too, so the alerts are a non-issue?
Edit 2: The loading is slower on IE, and most of our users are on IE. I am not sure if browser-specific styling could be causing the slower IE loading? (We have some browser-specific styling for IE.)
While serving pages over plain HTTP may be faster (though I doubt HTTPS is monstrously slow for small files), a good number of browsers will complain if included resources such as images and JS are not on https:// URLs. This will give your customers annoying mixed-content popup warnings.
There are high-performance servers for static file serving, but unless your SSL certificate works for multiple subdomains, there are a variety of complications. Putting the high-performance server in front of your dynamic content server and reverse proxying might be an option, if that server can do the SSL negotiation. For Unix platforms, Nginx is well liked for its reverse proxying and static file serving. Proxy-cache setups like Squid may be an option too.
Serving static content from a cloud provider like Amazon is an option, and some cloud providers let you use HTTPS as well, as long as you are fine with using a subdomain of their domain name (due to technical limitations in SSL).
You could create a CDN. If your site uses HTTPS, you'll need an HTTPS-capable host for the CDN too (otherwise you might get a non-secure items warning).
Also, if your site uses cookies (and most do), hosting static assets on a different domain (or, if you use www.example.com, on cdn.example.com) means the cookie data is not sent with those requests, which saves bandwidth.
The Performance Golden Rule from Yahoo's performance best practices is:
80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc.
This means that when I'm developing on my local webserver it's hard to get an accurate idea of what the end user will experience.
How can I simulate latency so that I can understand how my application will perform when I've deployed it on the web?
I develop primarily on Windows, but I would be interested in solutions for other platforms as well.
A laser modem pointed at the mirrors on the moon should give latency that's out of this world.
Fiddler2 can do this very easily. Plus, it does so much more that is useful when doing development.
YSlow might help you out. YSlow analyzes web pages based on Yahoo!'s rules.
Firefox Throttle. This can throttle speed (Windows only).
These are plugins for Firefox.
You can just set up a proxy on an outside host that tunnels traffic from your web server back to your local browser. It would be quite realistic (of course, it depends on where you put the proxy).
Otherwise, you can find many ways to implement it in software.
Run the web server on a nearby Linux box and configure NetEm to add latency to packets leaving the appropriate interface.
If your web server cannot run under Linux, configure the Linux box as a router between your test client machine and your web server, then use NetEm anyway.
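For reference, here is the usual NetEm invocation wrapped in a small helper (run as root on the Linux box; "eth0" is an assumption, use whichever interface the test traffic leaves on):

import subprocess

IFACE = "eth0"  # assumption: the interface facing your test client

def add_latency(ms: int) -> None:
    # Equivalent to: tc qdisc add dev eth0 root netem delay 100ms
    subprocess.run(
        ["tc", "qdisc", "add", "dev", IFACE, "root", "netem", "delay", str(ms) + "ms"],
        check=True,
    )

def clear_latency() -> None:
    # Equivalent to: tc qdisc del dev eth0 root netem
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root", "netem"], check=True)

add_latency(100)  # add roughly 100 ms to every packet leaving IFACE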
While there are many ways to simulate latency, including some very good hardware solutions, one of the easiest for me is to run a TCP proxy in a remote location. The proxy listens and then directs the traffic back to my final destination. On a remote server, I run a unix program called balance. I then point this back to my local server.
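If you don't have balance handy, a bare-bones TCP relay does the same job: run something like the sketch below on the remote host and point your browser at it; the real network round-trips to that host supply the latency (the target host and ports are placeholders):

import socket
import threading

LISTEN_PORT = 8080
TARGET = ("my-dev-box.example.com", 80)  # placeholder: the server to relay back to

def pump(src, dst):
    # Copy bytes one way until the connection closes.
    try:
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client):
    upstream = socket.create_connection(TARGET)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", LISTEN_PORT))
server.listen()
while True:
    conn, _ = server.accept()
    handle(conn)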
If you need to simulate latency for just a single server request, a simple way is to make the server sleep() for a second before returning.
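For example, with nothing but the Python standard library (the port and message are arbitrary):

import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(1)  # simulated latency before the response goes out
        body = b"hello, after a one second delay\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8000), SlowHandler).serve_forever()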