Magento - Storing all images on a CDN, not my hosting server

I am new to Magento and I have just started setting up my first sites. One of my requirements is to store all image files on a separate server from the one the site is hosted on. I have briefly looked into Amazon CloudFront and the following plugin:
http://www.magentocommerce.com/magento-connect/cloudfront-cdn.html
This works alongside my CloudFront distribution setup, so the images are being accessed from the CDN along with the JS, CSS, etc. when I check the source. My issue is that they still reside on my own server too.
Is there a way to have everything on the CDN only, so that my server's disk usage can be kept as low as possible, with just the template files on there and no images?

Based on experience, I would strongly recommend that you do not try to remove media files from your actual server's disk entirely. The main role of a CDN should be to mirror those files whenever they are new or updated.
However, if you really want to do this, I would sternly warn you not to attempt it with JS and CSS files. The trouble is just not worth it; you'll see why later.
So we're left with media, mostly image files. These are usually very large, which is the usual reason for moving them off the server's disk.
My strategy, and what I did before, was to put an S3 bucket behind the CloudFront CDN. I moved everything from the media directory to S3, configured CloudFront to pull from S3, and CNAME'd the CloudFront distribution as media.mydomain.com. I then set my media base URLs (System > Configuration > General > Web) to http://media.mydomain.com/media/ and https://media.mydomain.com/media/.
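The base-URL change above is really just a prefix swap: the local /media/ path gets mapped onto the CDN-backed hostname. A minimal sketch of that mapping (the hostname is this answer's example CNAME, not a real endpoint):

```python
# Sketch of the media base-URL swap described above. The CDN hostname
# is hypothetical -- substitute your own CNAME'd distribution.
CDN_MEDIA_BASE = "https://media.mydomain.com/media/"

def cdn_media_url(local_path):
    """Return the CDN URL for a local path like '/media/catalog/p/img.jpg'."""
    prefix = "/media/"
    if not local_path.startswith(prefix):
        raise ValueError("not a media path: %s" % local_path)
    return CDN_MEDIA_BASE + local_path[len(prefix):]
```

Magento does this rewriting for you once the base URLs are set; the function just makes the transformation explicit.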
It worked perfectly, with no CORS issues at all, because I did not touch the CSS/JS base URL paths. For those files I just relied on the free Cloudflare CDN (yeah, yeah, I know).
The defect I then discovered with this setup is that uploads do not work at all: WYSIWYG uploads do not go to the S3 bucket immediately. There was a workaround using s3fs, but that quickly degraded into a problem of its own because of bad memory leaks.
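For reference, the s3fs route amounts to mounting the bucket over the media directory so uploads land in S3 transparently. A hypothetical /etc/fstab entry (bucket name and paths are made up):

```
# /etc/fstab -- hypothetical; requires s3fs-fuse and credentials configured
media-bucket /var/www/magento/media fuse.s3fs _netdev,allow_other,use_cache=/tmp 0 0
```

This is the piece that caused the memory-leak trouble mentioned above, so treat it as a cautionary sketch rather than a recommendation.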
What ultimately worked was simply paying for the additional disk space (we were using Amazon AWS), wrapping the whole domain in the Cloudflare CDN, and upgrading to the Pro plan when we needed SSL.
Simple, it works, and it's headache-free.
NB: I'm not connected with Cloudflare whatsoever, I'm just really really happy with their service.

Related

Images on SSL App Engine

I have an application on Google App Engine. I used to store images in NDB blobs and serve them from GAE. Then all this relatively static content seemed to put needless load on the application, so I switched to storing all images (including user-uploaded ones) in a GCS bucket, made publicly available through http://static.mysite.com and served directly from there.
This is working great, until I consider switching the application to SSL.
Setting up SSL for my GAE application went OK, but then I get security warnings because the images are not served over SSL. So I need SSL access to my GCS images, but GCS does not support SSL for custom-domain (CNAME) buckets like mine, and I see no plans for it in the future.
What are the options? Storing images in GCS seems to be the recommended choice, and going SSL is now recommended for most websites, but the two seem to be incompatible.
I see Google Cloud CDN could be one approach that would (maybe) support SSL, but it seems like overkill in my case.

Will Cloudflare still cache files served via XSendFile?

I have set up a WordPress WooCommerce storefront. I want to offer downloadable products, which will be served via the XSendFile module.
However, my download files are quite big (approx. 50 MB), so I am planning to put Cloudflare in front to cache them, so that I don't exceed the bandwidth limit from my hosting service.
My question is, will Cloudflare cache files that are linked through Apache's XSendFile module?
Sorry if this is a basic question. I'm just trying to figure out whether this set up will work or whether I will need to find an alternative solution.
NOTE: Forgot to add that the download files are PDF files.
It really depends on whether we are proxying the DNS record the files are served from (www, for example). It is also important to note that we would not cache third-party resources at all if they are not served directly from your domain.
I would also recommend reviewing what CloudFlare caches by default.
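One point worth adding: from Cloudflare's side, an XSendFile download is just an ordinary HTTP response, because Apache substitutes the file into the response before it ever leaves the server. Cacheability is therefore governed by the usual rules (file extension, Cache-Control headers), not by how the file was produced. A hypothetical Apache fragment, assuming mod_xsendfile is installed:

```apache
# Hypothetical vhost fragment; requires mod_xsendfile.
# The application emits a header such as
#   X-Sendfile: /var/www/protected/product.pdf
# and Apache streams the file in its place.
XSendFile on
XSendFilePath /var/www/protected
```

So the question reduces to whether the download URL itself is one Cloudflare would cache by default.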

Not getting improvements by using CDN

I've just added a CDN distribution using Amazon CloudFront to my Rails application on Heroku, and it's working OK.
My homepage serves around 11 static assets. I've run some tests using http://www.webpagetest.org/ and there is no difference in load times between using the CDN and not using it.
Is there any particular reason why this could be happening?
My region is Latin America, by the way, so it's using the "All locations" edge option.
Thanks.
The main benefit of using a CDN from Amazon or others is that your assets are hosted on fast, reliable servers and the traffic is offloaded from your own server; if you already have a fast dedicated server, you won't see a considerable boost.
Another benefit is that the files may already be cached by the visitor's browser (from visiting other websites that reference the same CDN URLs, which mostly applies to shared public libraries), so the visitor gets a better experience the first time they visit your site.
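For completeness, the Rails side of a CloudFront setup is typically just the asset host setting; a hypothetical config fragment (the distribution domain is made up):

```ruby
# config/environments/production.rb -- distribution domain is hypothetical
config.action_controller.asset_host = "https://d1234abcdefgh.cloudfront.net"
```

Whether this helps measurably depends on where your visitors are relative to your origin server, which is the point being made above.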
A couple of suggestions:
If the site CSS is one of the static assets you have moved to CloudFront, I would try moving it back to your main server.
Since page display can't start until the site CSS has downloaded, you want to serve it as fast as possible. If it's coming from a CDN, the browser must first open a connection to a second host before it can even request it.
Also, use the waterfall display from webpagetest.org to pinpoint where the bottlenecks are.
Good luck!

Would you use Amazon CloudFront as a cache for a website?

I have been using Amazon CloudFront for a while now as a cache and edge location for my CSS, JS, and image files. I am now thinking about using it to host all of my static HTML files as well. In essence, www.example.com and example.com will be served via CloudFront, and I will use a separate Tomcat server at my.example.com for all the dynamic stuff.
Any feedback about this? Suggestions?
Thanks,
Assaf
This is exactly what CloudFront is designed for. I think you will find this approach is typical of many high traffic web sites.
The only downside is added cost...
I used CloudFront for some time, but recently switched to Google Page Speed Service. It is a little light on features currently, but it handles edge locations and all the tricks required to speed up your pages.
It is currently in beta, but I've had no problems over the 2 months I've been using it. The only question is how much it'll cost when it leaves beta.

Can I retrieve my Flex .swf and images from different Amazon S3 buckets?

I have a Flex 3 SWF in one Amazon S3 bucket that dynamically loads button images stored in another S3 bucket.
I have set a completely open crossdomain.xml file in each bucket, but when I call the SWF from my web site, only a few button images load - and they're just the 'up' or 'normal' state button images (i.e. not 'down', 'over' or 'disabled').
I had hoped that just setting an open crossdomain.xml policy file would have been enough to allow me to pull images across these different domains, but it's clearly not working.
I'd like as simple a solution as possible. I have been reading about using either a shim movie (which doesn't sound straightforward) or something like a PHP proxy, but I don't think I can do that with S3, since it isn't an actual server as such.
I would greatly appreciate any thoughts on this from people who have done something similar.
Just to follow up on this: I did as James Lawruk suggested and brought the content to a local server, where it was still failing. There were some things I hadn't appreciated before I started looking into this, and here is what I learned:
Amazon S3 buckets support nested structures (for some reason I'd convinced myself they could only hold flat file structures; I don't know why!)
placing a crossdomain.xml file in the root of the S3 bucket was the key to sorting this out
crossdomain.xml handling changed from Flash Player 9 onwards and, I believe, from v10 there are extra directives governing master policy file behaviour, which I needed to implement.
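For reference, a maximally permissive master policy file of the kind described above, placed at the bucket root, looks roughly like this (tighten the domain wildcard before using it in production):

```xml
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <!-- v9/v10 meta-policy: only this root file governs access -->
  <site-control permitted-cross-domain-policies="master-only"/>
  <!-- allow SWFs from any domain to load assets from this bucket -->
  <allow-access-from domain="*" secure="false"/>
</cross-domain-policy>
```

The site-control element is the "master file behaviour" directive mentioned in the last point.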
Some links of interest:
http://www.jodieorourke.com/view.php?id=108&blog=news
http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2011
