We have files that are over 1 MB, which excludes them from automatic compression through the Azure Verizon CDN.
To accommodate this, we manually compress the files before uploading them to Azure Blob storage. We upload both a compressed and an uncompressed version of each file.
We have also configured Azure CDN to handle JSON files.
Now, if I curl the blob or the CDN with the appropriate headers, I do not get compressed content.
So what is the standard approach to doing this with Azure? Am I missing a setting or header?
Is content swapping not doable based on the Accept-Encoding header?
Do I need to drop the .gz extension and always serve the JSON gzipped?
Any insights would be appreciated.
Edit for clarity:
The recommended solution here is to gzip and upload your asset to blob storage without the .gz extension and make sure it returns the "Content-Encoding: gzip" header.
After that, just request that asset through the CDN endpoint. If your request contains an Accept-Encoding: gzip header, the CDN will return the compressed asset. If your request does not contain the Accept-Encoding header, the CDN will decompress the file on the fly and serve the client the uncompressed version of the asset.
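A minimal sketch of that flow in Python, assuming the azure-storage-blob and requests packages; the connection string, container, file, and endpoint names are placeholders:

```python
import gzip

import requests
from azure.storage.blob import BlobServiceClient, ContentSettings

service = BlobServiceClient.from_connection_string("<connection-string>")
# Note: the blob name carries no .gz extension.
blob = service.get_blob_client(container="assets", blob="data/large-file.json")

with open("large-file.json", "rb") as f:
    blob.upload_blob(
        gzip.compress(f.read()),
        overwrite=True,
        # Content-Encoding: gzip is what tells the CDN that the stored
        # bytes are already compressed.
        content_settings=ContentSettings(
            content_type="application/json",
            content_encoding="gzip",
        ),
    )

# A request advertising gzip should come back compressed...
r = requests.get(
    "https://<endpoint>.azureedge.net/assets/data/large-file.json",
    headers={"Accept-Encoding": "gzip"},
)
print(r.headers.get("Content-Encoding"))  # expect: gzip
# ...while a request without Accept-Encoding should get the
# decompressed body from the CDN instead.
```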
Original Answer:
Hey, I'm from the Azure CDN team.
First, are you using a Verizon or Akamai profile?
Verizon has a limit of 1 MB for edge compression, while Akamai does not. Also, this limit applies only to CDN edge compression, so if your origin responds with the correct compressed file, the CDN should still serve it to the client.
Blob storage doesn't automatically do content swapping as far as I'm aware.
Note that if an uncompressed version of the file was already cached on the CDN edge, it will continue to serve that file until it expires. You can reset this by using the 'purge' function.
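If you'd rather script the purge than use the portal, here is a sketch with the azure-mgmt-cdn management SDK; it assumes the track-2 package together with azure-identity, and all resource names below are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cdn import CdnManagementClient

client = CdnManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Purge the stale paths on the endpoint; begin_purge_content returns a
# poller because the purge runs asynchronously on the CDN side.
poller = client.endpoints.begin_purge_content(
    resource_group_name="my-resource-group",   # placeholder
    profile_name="my-cdn-profile",             # placeholder
    endpoint_name="my-endpoint",               # placeholder
    content_file_paths={"content_paths": ["/data/large-file.json"]},
)
poller.result()  # block until the purge completes
```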
We also have a troubleshooting document here: https://learn.microsoft.com/en-us/azure/cdn/cdn-troubleshoot-compression
I'd be happy to help you troubleshoot further if the above doesn't help.
Just send me your origin and CDN endpoint URLs privately at rli#microsoft.com.
I have an imgproxy server, as shown below. It's able to transform an image (resize, crop) using URL parameters.
Now I would like to add a caching proxy, so an image would only be processed (resized, cropped) if it doesn't already exist in the cache.
I've read that AWS CloudFront, Cloudflare, or maybe Google Cloud CDN could serve as the caching proxy, which would be great. But unfortunately I didn't find any examples of how to do that. I'd appreciate it if anyone can help me.
On Cloudflare, you can leverage the following services for your use case:
Cloudflare Images: for CDN, storage and resizing services
Cloudflare Image Resizing, combined with the Cloudflare CDN via reverse proxy, to allow on-the-fly resizing and optimization of the images (differences with Images listed here); see the URL sketch after this answer
You can also store and resize images on your site, using the Cloudflare CDN (DNS-based reverse proxy) in front of your image resizing and storage stack. This is explained here in detail.
You can find here the steps to create a Cloudflare account and add your domain to it.
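As a small illustration of the Image Resizing option mentioned above, variants are requested through Cloudflare's /cdn-cgi/image/<options>/<source> URL format; a hypothetical Python helper, with domain and path as placeholders:

```python
def cf_resize_url(domain: str, source_path: str, width: int, quality: int = 75) -> str:
    """Build a Cloudflare Image Resizing URL; the CDN resizes on the
    fly and caches the variant at its edge."""
    options = f"width={width},quality={quality},format=auto"
    return f"https://{domain}/cdn-cgi/image/{options}/{source_path}"

print(cf_resize_url("example.com", "images/photo.jpg", 300))
# https://example.com/cdn-cgi/image/width=300,quality=75,format=auto/images/photo.jpg
```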
Hi Paolo, I also had the same question about how to cache, but in my case I wanted to use my own server.
I created a project that caches the images based on the requested URL and params, from which an MD5 hash is generated. When an image with the same params is requested, the proxy looks the MD5 up in the cache and serves the cached copy if it exists; otherwise it generates a new resized image. A sketch of the idea follows the link below.
You can adapt it to use S3, Cloudflare, or anything else.
You can take a look at the project:
https://github.com/sefirosweb/Imgproxy-With-Cache
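For a sense of the approach, here is a minimal sketch of that MD5-keyed lookup, assuming an imgproxy instance on localhost and a local cache directory (both placeholders, not taken from the project itself):

```python
import hashlib
import os
import urllib.request

CACHE_DIR = "cache"                      # hypothetical local cache directory
IMGPROXY_BASE = "http://localhost:8080"  # assumed imgproxy address

def fetch_cached(path_with_params: str) -> bytes:
    """Return the processed image, generating it only on a cache miss."""
    # Key the cache on an MD5 hash of the requested URL + params.
    key = hashlib.md5(path_with_params.encode("utf-8")).hexdigest()
    cache_path = os.path.join(CACHE_DIR, key)
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return f.read()
    # Cache miss: ask imgproxy to resize/crop, then store the result.
    with urllib.request.urlopen(IMGPROXY_BASE + path_with_params) as resp:
        data = resp.read()
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(cache_path, "wb") as f:
        f.write(data)
    return data
```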
I have a Newsstand application that uses a progress bar to show download progress. It works by reading the Content-Length header from the file download. This used to work on our development server; however, we use an nginx server for production, and it doesn't seem to be returning a Content-Length header.
Does anyone know why this would be, or a better solution?
Thanks
The lack of a Content-Length header is likely caused by you having compression enabled on your live server but not on your dev server. Because Nginx compresses data as it's sent, it's not possible to send a Content-Length header at the start of the response, as the server can't know what size the data will be after it's compressed.
If you require a Content-Length header for a download progress then the best option is to compress the content yourself, set the Content-Length header to the size of the compressed data, and then serve the compressed data.
Although this will be slightly slower for the first user to download that piece of content, you can use it as an effective caching mechanism if you use unique filenames for the compressed files, with the filename generated from the parameters in the user's request. You can also then use nginx's X-Accel-Redirect (its x-sendfile equivalent) to reduce the load on your server.
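A sketch of that compress-once idea, with a hypothetical cache directory and key scheme; the calling app would set Content-Length from the returned size and hand the path to nginx:

```python
import gzip
import hashlib
import os
import shutil

CACHE_DIR = "/var/cache/precompressed"  # hypothetical location

def compressed_file(source_path: str, params: str) -> tuple[str, int]:
    """Gzip the content once, keyed by the request parameters, and
    return the cached file's path and size; the size is exactly what
    goes in the Content-Length header."""
    key = hashlib.sha256((source_path + params).encode()).hexdigest()
    out_path = os.path.join(CACHE_DIR, key + ".gz")
    if not os.path.exists(out_path):
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(source_path, "rb") as src, gzip.open(out_path, "wb") as dst:
            shutil.copyfileobj(src, dst)
    return out_path, os.path.getsize(out_path)
```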
Btw, if you're using the Amazon CloudFront CDN (and probably others), you really ought to be setting the Content-Length header, as they can serve partial (aka corrupt) files if there is no Content-Length header and the download from your server to CloudFront is interrupted during transfer.
We're using Azure CDN, but we've stumbled upon a problem. Previously, our content never changed once uploaded. But we've now added the option for our users to crop their picture, which changes the thumbnails. The image is not created as a new blob; instead, we just update the existing blob's stream.
There doesn't seem to be any method to clear the cache, update any headers or anything else.
Is the only answer here to make a new blob and delete the old?
Thanks.
The CDN will still serve the cached content until the cache expiry passes or the file name changes.
CDN is best for static content with a high cache hit ratio.
Using a CDN for dynamic content is not recommended, because it makes the user wait for a double hop from storage to CDN and from CDN to user.
You also pay twice the bandwidth on the initial load.
I guess the only workaround right now is to pass a dummy parameter in the request from the client to force the file to be fetched fresh every time:
http://resourceurl?dummy=dummyval
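A minimal illustration of that workaround (the resource URL is the placeholder from above):

```python
import time
import urllib.parse

def cache_busted(url: str) -> str:
    """Append a changing dummy parameter so the CDN treats every
    request as a new resource and fetches the latest blob."""
    return url + "?" + urllib.parse.urlencode({"dummy": int(time.time())})

print(cache_busted("http://resourceurl"))
# e.g. http://resourceurl?dummy=1700000000
```

A less wasteful variant is to bump the dummy value only when the image actually changes (for example, a stored version number or the blob's last-modified time), so the CDN can still cache between edits.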
I have a few files that are served via a content delivery network. These files are not gzipped when served from the CDN servers. I was wondering if I gzip the content on my servers' end, would Akamai first get the gzipped content and then serve gzipped content once it stores my content on their servers?
Akamai can fetch content from your origin without gzip, and then serve the content as gzipped on their end. They will store the content unzipped in their cache, and then compress on the fly for those browsers that support it.
I can't think of a case where setting gzip compression to always is not beneficial to your end user. You can do this by setting Last Mile Acceleration to Always in Akamai.
Can I use a CDN with images? And if so, how do I use it to upload from my website to the CDN server?
Seems like there are a few options to accomplish this.
The first one would be using the CDN as origin, in which case there is already an answer with some advice.
The second option would be using your current website as Origin for the images. In which case you will need to do some DNS work that would look something like this:
Published URL -> CDN -> Public Origin
Step 1 - images.yoursite.com IN CNAME images.yoursite.com.edgesuite.net --- This entry will send all traffic requests for the images subdomain to Akamai's CDN edge network.
Step 2 - origin-images.yoursite.com IN A (or IN CNAME) pointing to the public front end for the images
So the way it works is that in step one you get a request for one of your images, which will then be sent via DNS to the edge network in the CDN (in this case Akamai, HTTP only). If the CDN does not already have the image in cache, or if its cache TTL has expired, it will forward the request to the public origin you have set up, pull the file, apply any custom behavior rules (rewrites, cache-control overrides, etc.), cache the content if it is marked as cacheable, and then serve the file to the client.
There is a lot of customization that can be done when serving static content via CDN. The example above is very superficial and it is that way to easily illustrate the logic at a very high level.
Yes, and you can check with your CDN provider on the methods they allow for uploading,
such as
pull (the CDN server downloads the files from your website/server)
or
push (files are sent from your website/server to the CDN server)
Example: an automatic push-to-CDN deployment strategy
Do you mean you want to use a CDN to host images? And do you want to upload images from your website to the CDN, or use the website run by the company hosting the CDN to upload the images?
OK, firstly: yes, you can use a CDN with images. In fact, it's advisable to do so.
Amazon CloudFront and Rackspace's Cloud Files are the two that immediately spring to mind. Cloud Files you can upload to either via their API or through their website, and with CloudFront you upload to Amazon's S3 storage, which then hooks into the CloudFront CDN.
In common CDN setups you actually don't upload images to the CDN. Instead, you access your images via the CDN, much like accessing resources via an online proxy. The CDN, in turn, will cache your images according to your HTTP cache headers and make sure that subsequent calls for the same image are served from the closest CDN edge. A sketch of this setup follows the list of CDNs below.
Some recommended CDNs - AWS CloudFront, Edgecast, MaxCDN, Akamai.
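For instance, in a pull-through setup with an S3 origin behind CloudFront, the main thing the origin objects need is a sensible Cache-Control header; here is a sketch assuming the boto3 package, with bucket and key names as placeholders:

```python
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "photo.jpg",
    "my-origin-bucket",   # placeholder origin bucket
    "images/photo.jpg",   # placeholder object key
    ExtraArgs={
        "ContentType": "image/jpeg",
        # The CDN honors this header when deciding how long to cache
        # the image at its edges.
        "CacheControl": "public, max-age=86400",
    },
)
```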
Specifically for images, you might want to take a look at Cloudinary, http://cloudinary.com (the company I work at). We do all of this for you - you upload images to Cloudinary, request Cloudinary for on-the-fly image transformations, and get the results delivered via Akamai's high-end CDN.