Why is it recommended to store images on a remote server?

Sorry for such a question, but I cannot find any article on the web laying out the pros and cons. My guess is that it is about asynchronous uploading and downloading, but it's just a guess. Is there detailed information on this somewhere?

It's mostly about specialization, data locality, and concurrency.
Servers that specialize in serving static content typically do so much faster than dynamic web servers, because they are optimized for that specific use case.
You also have the advantage of storing your content in many zones to achieve better performance (the content is physically closer to the person requesting it), whereas a web application typically should be near its other dependencies, such as databases.
Lastly, browsers (for HTTP/1 at least) only allow a fixed number of connections per host, so if your images and API calls are on separate hosts, one cannot influence the other in terms of request scheduling.
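As a small illustration (the hostnames are hypothetical), the page and its API can live on one host while images come from a dedicated static host, so image downloads don't compete with API calls for the same connection pool:
<!-- Page served from www.example.com; images from a separate static-only host -->
<img src="//static.example.com/images/header.png" alt="Header">
<img src="//static.example.com/images/product-42.jpg" alt="Product photo">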
There are a lot of other reasons I'm sure, but these are just off the top of my head.

Related

Is it better to use Cache or CDN?

I was studying browser performance when loading static files, and this question came up.
Some people say that serving static files from a CDN (e.g. Google Code, the latest jQuery, the AJAX CDN, ...) is better for performance, because the requests go to a different domain than the rest of the web page.
Another way to improve performance is to set the Expires header to a date some months in the future, forcing the browser to cache the static files and cutting down on requests.
I'm wondering which approach is best for performance, and whether I can combine both.
Ultimately it is better to employ both techniques if you are doing web performance optimization (WPO) of a site, also known as front-end optimization (FEO). They can work amazingly well hand in hand. Although if I had to pick one over the other, I'd definitely pick caching any day. In fact, I'd say it's imperative that you set up proper resource caching for all web projects, even if you are going to use a CDN.
Caching
Setting Expires headers and caching of resources is a must and should be done 100% of the time for your resources. There really is no excuse for not doing caching. On Apache this is super easy to configure after enabling mod_expires.c and mod_headers.c. The HTML5 Boilerplate project has a good implementation example in its .htaccess file, and if your server is something else like nginx, lighttpd or IIS, check out these other server configs.
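A minimal sketch of what that might look like in a .htaccess file, assuming mod_expires and mod_headers are enabled (the file types and lifetimes here are placeholders, not recommendations):
<IfModule mod_expires.c>
  ExpiresActive On
  # Cache images for a month, CSS and JS for a week
  ExpiresByType image/png "access plus 1 month"
  ExpiresByType image/jpeg "access plus 1 month"
  ExpiresByType text/css "access plus 1 week"
  ExpiresByType application/javascript "access plus 1 week"
</IfModule>
<IfModule mod_headers.c>
  <FilesMatch "\.(png|jpe?g|css|js)$">
    # Allow shared caches (proxies, CDNs) to store these responses
    Header set Cache-Control "public"
  </FilesMatch>
</IfModule>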
Here's a good read if anyone is interested in learning about caching: Mark Nottingham's Caching Tutorial
Content Delivery Network
You mentioned Google Code, the latest jQuery and the AJAX CDN. I want to touch on CDNs in general, including those you pay for and host your own resources on, but the same applies if you are simply using the hosted jQuery files or loading something from http://cdnjs.com/, for example.
I would say a CDN is less important than setting server-side caching headers, but it can provide significant performance gains; how much will vary depending on the provider.
This is especially true if your audience is worldwide and the CDN provider has many worldwide edge/peer locations. It will also reduce your web hosting bandwidth significantly, and CPU usage a bit, since you're offloading some of the work of delivering resources to the CDN.
A CDN can, in some rarer cases, have a negative impact on performance if the latency of the CDN ends up being higher than your server's. Also, if you over-optimize and employ too much parallelization of resources (using multiple subdomains like cdn1, cdn2, cdn3, etc.) it is possible to end up slowing down the user experience and causing overhead with extra DNS lookups. A good balance is needed here.
One other negative impact can occur when the CDN is down. It has happened, and will happen again; this is more true with free CDNs. If the CDN goes down for whatever reason, so does your site. It is yet another potential single point of failure (SPOF). For JavaScript resources you can get clever: load the resource from the CDN and, should it fail for whatever reason, detect that and load a local copy. Here's an example of loading jQuery from ajax.googleapis.com with a fallback (taken from the HTML5 Boilerplate):
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
<script>window.jQuery || document.write('<script src="js/vendor/jquery-1.8.2.min.js"><\/script>')</script>
Besides the obvious free resources out there (jQuery, Google APIs, etc.), if you're using a CDN you may have to pay a usage fee, which adds to hosting costs. And for some CDNs you even have to pay extra to get access to certain locations; for example, Asian nodes might cost more than North American ones.
For public applications, go for CDN.
Caching helps for repeated requests, but not for the first request.
To ensure a fast load on the first page visit, use a CDN; chances are pretty good that the file is already in the browser cache from another site.
As others have already mentioned, CDN responses are of course heavily cached too.
However if you have an intranet website you might want to host the files yourself as they typically load faster from an internal source than from a CDN.
You then also have the option to combine several files into one to reduce the number of requests.
A CDN has the benefit of providing multiple servers and automatically routing your traffic to the closest location to your client. This can result in faster delivery, optimized by location.
Also, static content doesn't require special application servers (unlike dynamic content), so offloading it to a CDN removes that traffic from your servers entirely. A streaming video clip may be too big to cache, or shouldn't be cached, but you don't necessarily want to serve that bandwidth yourself. A CDN will take on that traffic for you.
It is not always about the cache. A small application web server may just want to provide the dynamic content but need a solution for the heavy-hitting media that rarely changes. CDNs handle that scaling issue for you.
Agree with #Anthony_Hatzopoulos (+1)
A CDN complements caching; in some cases it can even help optimize caching directives.
For example, a company I work for has integrated behavior-learning algorithms into its CDN to identify and dynamically cache generated objects.
Ordinarily, these objects would be un-cacheable (i.e. served with a Cache-Control: max-age=0 HTTP header). But in this case the system is able to identify caching opportunities and override the original HTTP header directives. (For example: a dynamically generated popular product page that should be cached, or a popular search results page that, while generated dynamically, is still presented time after time in the same form to thousands of users.)
And yes, before you ask, the system can also identify personalized data and verify freshness, to prevent false positives... :)
Implementing such an algorithm was only possible due to a reverse-proxy CDN technology. This is an example of how CDN and Caching can complement each other, to create better and smarter acceleration solutions.
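To make the Cache-Control example above concrete (the values here are purely illustrative, not the actual product's configuration): the origin marks the page as un-cacheable, while the edge, having identified it as safe to cache, serves it with a short TTL instead:
Origin response:        Cache-Control: max-age=0
Served by the CDN edge: Cache-Control: public, max-age=300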
The answers above explain CDN technology and caching perfectly, so I'll just add my personal experience.
I worked on a Joomla VirtueMart site that unfortunately could not be upgraded to newer Joomla and VirtueMart versions, because the product pages used too many customised fields. Once traffic grew to around 900 visitors a day, many users could not put items in their basket, because every order action triggered lots of JS and AJAX calls and took too much time.
After optimising the site we decided to use a CDN, and performance really improved. According to GTmetrix, the YSlow score went from 50% before to 74% after the optimisation plus CDN:
https://gtmetrix.com/reports/www.florihana.com/jWlY35im
From the CDN dashboard you can also see which data centre costs the most and which serves the most data, which can help guide where to focus your improvements.
But yes, when configuring a CDN you have to be careful about purge times and balance the number of CDN resources, because if something breaks you need to be able to figure out which CDN resource caused it.
Hope this helps.

Minimising number of requests vs Browser Caching & Multiple domains

I have recently been working on improving the front end performance of our website and have been employing a number of best practices.
However, I have had a recent example where some of the practices are slightly at odds with each other:
Minimise HTTP requests
In order to "trick" the browser into making more concurrent requests, serve some assets from a different domain
Leverage browser caching
Why?
We used to bundle almost all of our JavaScript into one file to minimise HTTP requests. This included jQuery and jQuery UI.
I thought this was silly, as many users are likely to have jQuery already cached in their browser, so I decided we should remove it from our all.js and instead serve it from Google's CDN. This saves users downloading the code again, and because it's on a different domain it can be downloaded in parallel with other resources from our own domains.
The concurrent downloading was visible in a waterfall graph of the page load (not included here).
This has of course raised the number of requests for people who don't already have jQuery cached, which isn't great.
So my question is this:
Is the change a sensible one? Do the benefits of leveraging caching and allowing concurrent requests outweigh a slight increase in the number of requests?
That is a very good question.
You have explained your reasoning well, and those are all good reasons for making this change.
But there are still benefits to both approaches.
Keeping everything combined in one file
Reduces the number of HTTP requests, which reduces the negative effect of round-trip latency on the user's connection.
All libraries/plugins are downloaded at once, and should remain cached for when they are later needed.
Reduces dependency on other services (although Google is going to be quite reliable).
Separate files spread across domains
Increases parallelisation of downloads, which reduces the negative effect of bandwidth shaping on the user's connection. (Note that most browsers no longer limit concurrent per-domain requests to 2, though.)
Increases granularity: separate parts can be downloaded on demand as needed, i.e. if a particular plugin is not needed on the first page hit, it isn't downloaded (see the sketch below).
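A minimal sketch of that on-demand idea; the file name, selector and plugin are hypothetical:
// Hypothetical helper: inject a script tag and run a callback once it has loaded
function loadScript(src, onLoad) {
  var s = document.createElement('script');
  s.src = src;
  s.onload = onLoad;
  document.getElementsByTagName('head')[0].appendChild(s);
}
// Only fetch the gallery plugin on pages that actually contain a gallery
if (document.querySelector('.gallery')) {
  loadScript('js/vendor/jquery.gallery.min.js', function () {
    $('.gallery').gallery(); // initialise the (hypothetical) plugin
  });
}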
Personally, I'd normally lean a little bit towards the former (reducing HTTP requests by combining them into one big file). I feel like most of my audience is going to be on a fairly high-bandwidth connection and I can reduce latency. Remember to use Google and Yahoo's page speed tools to find other ways of speeding things up.

CDN vs Homegrown Caching

My understanding of a CDN (like Akamai or Limelight) is that they are heavy-duty caching services.
I also understand that they are very expensive. So I'm wondering why I can't just create my own cluster of replicated caching servers (using, say, EhCache or Memcached) for all my web app's caching needs (images, URL hits/responses, Javascripts, etc) and basically get the same thing?
In essence, from a developer's perspective, what are the benefits (technical or otherwise) of paying a CDN vs. just using your own caching solution? Or, if I have completely misunderstood what CDNs are, please correct my understanding! Thanks in advance!
The key to this is the N in CDN, Content Delivery Network.
The advantage of a CDN (a good one, at least) is that it's geographically distributed. What the big CDN providers do (the ones you mentioned, along with others) is have hundreds or thousands of servers in facilities all over the world. This lets whatever resource the CDN is serving sit as geographically close as possible to the end user, no matter where they are in the world.
You can certainly replicate the functionality. At a very basic level most CDNs are simply an object/value store, which plenty of software can do; what you can't do, at the scale these guys do it at any rate, is have servers all over the world to serve and replicate these objects.
If all you're interested in is an efficient way to store and serve static files, then an object store coupled with something like nginx would do just fine (see the sketch below). A CDN is used to make sites load as fast as they possibly can, but that comes at a price.
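A minimal sketch of that self-hosted approach, assuming nginx simply serves files from disk with long-lived caching headers (the domain, path and lifetime are hypothetical):
server {
    listen 80;
    server_name static.example.com;
    # Static assets live on disk (or are synced there from an object store)
    root /var/www/static;

    location / {
        # Long-lived caching for rarely-changing assets
        expires 30d;
        add_header Cache-Control "public";
    }
}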
For the record, Amazon's CloudFront, while not as distributed as Limelight or Akamai, is a lot cheaper and a good middle-ground compromise on cost vs. service.
The advantage of big CDNs is that they own distributed resources all over the world, allowing them to serve users from servers near them. Besides caching, this is the major thing that makes them fast.
To build your own CDN, you would have to install servers on multiple continents, arrange good Internet connectivity for them, set up caching and make sure the servers are all kept synchronized. It's not impossible, but it will be very cost-intensive and might not pay off.
With all due respect, I disagree with the other two answers given thus far to this question. Just to be clear, I'm not saying they are wrong, I'm just offering a different perspective.
While it is certainly true that the key to CDN performance is the "N," as Bulk eloquently explained, that doesn't mean you can't build your own CDN. The question is whether or not it's worth the time (and by definition, the money).
We live in a world of cheap servers and even cheaper virtual machines. Sure, the big CDN networks have thousands or millions of servers all over the world, but that's because they have thousands or millions of sites to serve. Depending on the size of your site/app, all you really need in terms of resources is as much as your site(s) need. If you're small, the minimum might be a VPS on each US coast, one in Europe, maybe two in Asia, and one in Australia. Sure, the hardware costs are too high for your typical homepage, but they are certainly not extreme, and if you're looking at CDNs in the first place, they are probably within your budget.
To me, commercial CDN services just provide PaaS convenience, but there is nothing preventing you from getting IaaS and building up your own platform.
One more thing on this topic:
I once read a comment either by David Heinemeier Hansson (the creator of Ruby on Rails) or by someone referring to him that went something along the lines of: The owner(s) of 37 Signals were concerned DHH was using Ruby to build their application. At that point Ruby was still very obscure. Almost all web hosts were offering PHP, Perl, and Microsoft technologies. When asked about the fact that there were only a handful of Ruby hosts in the world, DHH asked, "well how many do you need?"
To me the point is you need to look at what's best for your needs and the needs of your application, and not necessarily what the guys with millions of servers think.

How many connections/how much bandwidth can Apache handle?

This is a request for pointers to good documentation and articles. I'm looking for information on how many connections an Apache server can reasonably handle, and potentially how to load balance between multiple servers. I've done Google searches, but it's hard for a beginner to judge which docs are good.
Apache 1.3 had some nasty scalability limitations, but later versions are designed to scale with the hardware and operating system, making those the bottleneck rather than the web server itself. As always, though, it comes down to how you configure and tune it if you want uber performance. Each situation has its own demands, and they're documented here:
http://httpd.apache.org/docs/2.2/misc/perf-tuning.html
The above assumes you're serving static content, which is where Apache excels. If you run webapps behind it, that's your bottleneck, not Apache.
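As a rough illustration of the kind of knobs that tuning guide covers, a worker-MPM block might look like the following; the numbers are placeholders to tune against your own hardware, not recommendations:
<IfModule mpm_worker_module>
    StartServers          4
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    # ServerLimit (default 16) x ThreadsPerChild must be >= MaxClients
    MaxClients          400
    MaxRequestsPerChild   0
</IfModule>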
Unfortunately you'll be disappointed.
Apache's ability to handle connections (and indeed any other web server's) is limited by what the web application sitting on top of it is doing. If you're serving static pages, you will be able to serve a lot of requests with very little hardware.
Depending on the I/O workload (Apache cannot work faster than the I/O subsystem; install enough RAM to cache your entire content, if you can), you will be able to fill up a gigabit network on any reasonably specced modern box.
Once you've filled a gigabit network, you'll have other things to worry about.
But the real reason you need load balancers is that your application slows Apache down and uses up the box's resources. Your application will not be infinitely fast, nor infinitely scalable. You'll need to address those issues.
As the previous answers have pointed out, it is generally not Apache that becomes the bottleneck; it is usually the application server (PHP, Mongrel, etc.). However, if you are only serving static content then you will want to do some benchmarking to see how fast it can go. You are unlikely to pin down the exact number of requests Apache will be able to serve, since a lot depends on how you configure it (e.g. disabling persistent connections) and the specs of the server. But to get a ballpark estimate you can use this benchmark as a reference, since it was run on 1-8 cores (using one or two servers), so you should be able to find something reasonably comparable to the hardware you are considering.
Of course in order to get the most accurate results you will want to test it yourself using a load generator like ab or httperf.
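For example, a basic ApacheBench run might look like the following; the URL is a placeholder, and you would adjust the request count and concurrency to match the load you expect:
# 10,000 requests, 100 at a time, with HTTP keep-alive enabled
ab -n 10000 -c 100 -k http://www.example.com/index.html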

Best scaling methodologies for a high-traffic web application?

We have a new project for a web app that will display banner ads on websites (as a network), and our estimate is for it to handle 20 to 40 billion impressions a month.
Our current language is ASP... but we are moving to PHP. Does PHP 5 have limits when scaling a web application? Or should I have our team invest in picking up JSP?
Or, is it a matter of the app server and/or DB? We plan to use Oracle 10g as the database.
No offense, but I strongly suspect you're vastly overestimating how many impressions you'll serve.
That said:
PHP or any other language used in the application tier really has little to do with scalability. Since the application tier delegates its state to the database or equivalent, it's straightforward to add as much capacity as you need behind appropriate load balancing. Choice of language does influence per-server efficiency, and hence costs, but that's different from scalability.
It's scaling the state/data storage that gets more complicated.
For your app, you have three basic jobs:
what ad do we show?
serving the ad
logging the impression
Each of these will require thought and likely different tools.
The second, serving the ad, is the simplest: use a CDN. If you actually serve the volume you claim, you should be able to negotiate favorable rates.
Deciding which ad to show is going to be very specific to your network. It may be as simple as reading a few rows from a database that give the ad placements for a given property for a given calendar period, or it may be complex contextual advertising like Google's. Assuming it's more the former, and that the database of placements is small, this is the simple task of scaling database reads. You can use replication trees or, alternatively, a caching layer like memcached.
The last will ultimately be the most difficult: how to scale the writes. A common approach would be to still use databases, but to adopt a sharding strategy. More exotic options might be a key/value store supporting counter instructions, such as Redis (see the example below), or a scalable OLAP database such as Vertica.
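To illustrate the counter idea (the key layout here is hypothetical), Redis can absorb a very high write rate with atomic increments, one counter per ad per day:
INCR impressions:ad:12345:2011-01-31
GET impressions:ad:12345:2011-01-31
Each INCR is atomic, so many front-end servers can log impressions concurrently without locking, and a periodic job can read the counters back and roll them up into the reporting database.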
All of the above assumes that you're able to secure data center space and network provisioning capable of serving this load, which is not trivial at the numbers you're talking.
You do realize that 40 billion per month is roughly 15,500 per second, right?
Scaling isn't going to be your problem; infrastructure, period, is going to be your problem. No matter what technology stack you choose, you are going to need an enormous amount of hardware, as others have said, in the form of a farm or cloud.
This question (and the entire subject) is a bit subjective. You can write a dog slow program in any language, and host it on anything.
I think your best bet is to see how your current implementation works under load. Maybe just a few tweaks will make things work for you - but changing your underlying framework seems a bit much.
That being said - your infrastructure team will also have to be involved as it seems you have some serious load requirements.
Good luck!
I think it is not a matter of language; it can be a matter of database speed as much as CPU processing speed. Have you considered a web farm? That way you can have more than one machine serving your application. There are several ways to implement this: you can start with two servers and add more as the app needs more processing capacity.
On another point, Oracle 10g is a very good database server; in my humble opinion you only need a standalone Oracle server to handle that volume of requests. Remember that a SQL database gets faster when people request more or less the same things each time, which is what happens in a web application if you plan your database schema carefully.
You should also look at the existing ad server solutions; there are some very good ones, just try Google with "Open Source AD servers".
PHP will be capable of serving your needs. However, as others have said, your first limits will be your network infrastructure.
But your second limits will be writing scalable code. You will need good abstraction and isolation so that resources can easily be added at any level. Things like a fast data-object mapper, multiple data caching mechanisms, separate configuration files, and so on.

Resources