I have a few files that are served via a content delivery network. These files are not gzipped when served from the CDN servers. I was wondering if I gzip the content on my servers' end, would Akamai first get the gzipped content and then serve gzipped content once it stores my content on their servers?
Akamai can fetch content from your origin without gzip, and then serve the content as gzipped on their end. They will store the content unzipped in their cache, and then compress on the fly for those browsers that support it.
I can't think of a case where setting gzip compression to Always is not beneficial to your end user. You can do this by setting Last Mile Acceleration to Always in Akamai.
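If you want to confirm what the edge is actually returning, a quick check along these lines can help (a minimal sketch using Python's requests library; the URL is a placeholder for a file served through your CDN). It compares the response with and without gzip support advertised:

import requests

URL = "https://cdn.example.com/assets/app.js"  # placeholder: a file served through your CDN

# Request the file twice: once advertising gzip support, once asking for identity.
with_gzip = requests.get(URL, headers={"Accept-Encoding": "gzip"})
without_gzip = requests.get(URL, headers={"Accept-Encoding": "identity"})

# If the edge compresses on the fly, only the first response should carry
# a "Content-Encoding: gzip" header.
print("Accept-Encoding: gzip     ->", with_gzip.headers.get("Content-Encoding"))
print("Accept-Encoding: identity ->", without_gzip.headers.get("Content-Encoding"))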
I've configured CloudFront in front of my Elastic Beanstalk load-balanced web application, and static content matching my rule (PNG images etc.) is being cached and served gzipped.
However, my JSP pages aren't being gzipped.
Please note that I have explicitly set my default rule to not cache by setting the min TTL to 0, but it's probably unnecessary because my origin server isn't returning a Content-Length header for JSP pages, so it will never be cached anyway.
CloudFront will only cache if...
Filetype is supported (text/html is)
Response is between 1,000 and 10,000,000 bytes (it is)
Content-Length header must be provided (it is NOT)
Content-Encoding must not be set (it is not)
So that explains why it's not being cached, fair enough.
But why don't my HTML pages get gzipped? FYI, my HTML and JSP file extensions are all processed through the JSP processor.
Looks like I was right: until my page was modified to return the Content-Length response header, CloudFront neither cached nor gzipped the content.
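For anyone verifying this themselves, a rough header comparison like the sketch below (Python requests; both URLs are stand-ins for your own origin and distribution) shows at a glance whether the origin supplies the Content-Length that CloudFront needs:

import requests

# Stand-in URLs: substitute your Elastic Beanstalk origin and CloudFront hostnames.
ORIGIN_URL = "http://my-env.elasticbeanstalk.com/page.jsp"
CDN_URL = "https://dxxxxxxxx.cloudfront.net/page.jsp"

for label, url in [("origin    ", ORIGIN_URL), ("cloudfront", CDN_URL)]:
    resp = requests.get(url, headers={"Accept-Encoding": "gzip"})
    print(label,
          "Content-Type:", resp.headers.get("Content-Type"),
          "| Content-Length:", resp.headers.get("Content-Length"),
          "| Content-Encoding:", resp.headers.get("Content-Encoding"))

# If the origin row shows no Content-Length, CloudFront will neither cache
# nor gzip that response, which matches the behaviour described above.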
We have files that are over 1 MB and therefore excluded from automatic compression through the Azure Verizon CDN.
To accommodate this, we manually compress the files before uploading them to Azure Blob storage. We upload both a compressed and an uncompressed version of each file.
We have also configured Azure CDN to handle JSON files:
Now if I curl the blob or the CDN with the appropriate headers, I do not get compressed content.
So what is the standard approach to doing this with Azure? Am I missing a setting or header?
Is content swapping not doable based on the Accept-Encoding header?
Do I need to drop the .gz extension and always serve the json zipped?
Any insights would be appreciated.
Edit for clarity:
The recommended solution here is to gzip your asset, upload it to blob storage without the .gz extension, and make sure it returns the "Content-Encoding: gzip" header.
After that, just request that asset through the CDN endpoint. If your request contains an Accept-Encoding:gzip header, the CDN will return the compressed asset. If your request does not contain the Accept-Encoding header, the CDN will uncompress the file on the fly and serve the client the uncompressed version of the asset.
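As an illustration of that flow, here is a minimal sketch using the azure-storage-blob Python SDK; the connection string, container and file names are placeholders for your own setup:

import gzip
from azure.storage.blob import BlobClient, ContentSettings

# Placeholders: substitute your own connection string, container and blob name.
CONN_STR = "<your-storage-connection-string>"
CONTAINER = "assets"
BLOB_NAME = "data.json"  # note: uploaded without a .gz extension

# Pre-compress the JSON payload.
with open("data.json", "rb") as f:
    compressed = gzip.compress(f.read())

blob = BlobClient.from_connection_string(
    CONN_STR, container_name=CONTAINER, blob_name=BLOB_NAME
)

# Upload the gzipped bytes and mark them as gzip-encoded JSON, so the blob
# (and the CDN endpoint in front of it) responds with "Content-Encoding: gzip".
blob.upload_blob(
    compressed,
    overwrite=True,
    content_settings=ContentSettings(
        content_type="application/json",
        content_encoding="gzip",
    ),
)

Once uploaded, requesting the asset through the CDN endpoint with an Accept-Encoding: gzip header should return the compressed bytes as described above.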
Original Answer:
Hey, I'm from the Azure CDN team.
First, are you using a Verizon or Akamai profile?
Verizon has a limit of 1MB for edge compression while Akamai does not. Also, this limit is just for CDN edge compression, so if your origin responds with the correct compressed file, the CDN should still serve it to the client.
Blob storage doesn't automatically do content swapping as far as I'm aware.
Note that if an uncompressed version of the file was already cached on the CDN edge, it will continue to serve that file until it expires. You can reset this by using the 'purge' function.
We also have a troubleshooting document here: https://learn.microsoft.com/en-us/azure/cdn/cdn-troubleshoot-compression
I'd be happy to help you troubleshoot further, if the above doesn't help.
Just send me your origin and cdn endpoint URLs privately at rli#microsoft.com.
I have a newsstand application that uses a bar to show download progress. It works by reading the Content-Length of the file being downloaded. This worked on our development server; however, we use an nginx server for production and it doesn't seem to return a Content-Length header.
Does anyone know why this would be or a better solution?
Thanks
The lack of a Content-Length header is likely caused by you having compression enabled on your live server but not on your dev server. Because Nginx compresses data as it's sent, it's not possible to send a Content-Length header at the start of the response, as the server can't know what size the data will be after it's compressed.
If you require a Content-Length header for a download progress then the best option is to compress the content yourself, set the Content-Length header to the size of the compressed data, and then serve the compressed data.
Although this will be slightly slower for the first user to download that piece of content, you can use it as an effective caching mechanism if you use unique filenames for the compressed files, with the filename generated from the parameters in the user's request. You can also then use Nginx's x-sendfile ability to reduce the load on your server.
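As a rough sketch of that approach (file names are placeholders), you could pre-compress each asset once and record the compressed size to send as the Content-Length:

import gzip
import os
import shutil

SOURCE = "issue-42.pdf"        # placeholder: the asset being downloaded
COMPRESSED = SOURCE + ".gz"    # unique filename for the cached compressed copy

# Compress once, then reuse the compressed file for subsequent requests.
if not os.path.exists(COMPRESSED):
    with open(SOURCE, "rb") as src, gzip.open(COMPRESSED, "wb") as dst:
        shutil.copyfileobj(src, dst)

# This is the value to send as the Content-Length header, alongside
# "Content-Encoding: gzip", when serving the pre-compressed file
# (for example via Nginx's x-sendfile / X-Accel-Redirect mechanism).
content_length = os.path.getsize(COMPRESSED)
print(content_length)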
btw if you're using the Amazon CloudFront CDN (and probably others) you really ought to be setting the Content-Length header: without it, if the download from your server to CloudFront is interrupted during transfer, CloudFront can serve partial (aka corrupt) files.
Lately I've become somewhat obsessed with page speed optimization, and I wanted to find out: can a CMS caching mechanism (for example Joomla cache), gzip compression and Cloudflare all work together in perfect harmony?
I understand how each system works by itself (more or less), but I don't understand whether they would work together. Is it even recommended to use all of them at once?
If I use Cloudflare, do the CMS cache and gzip even matter?
P.S. What other tools do you use?
Can a CMS caching mechanism (for example Joomla cache), gzip compression and Cloudflare all work together in perfect harmony?
Yes, plus they all do slightly different things.
Cloudflare caches the static content, e.g. images and stylesheets. Fresh page HTML is still downloaded by every visitor on every page.
Gzip compression comes into play with both Cloudflare and your server. By default, Cloudflare automatically compresses content passing through its system. Files not passing through Cloudflare can be compressed by your server (caching and gzip compression via .htaccess), though since you are using Joomla, the easiest way to enable this is from
the control panel > System > Global Configuration > Server > Gzip Page Compression.
This will decrease download times for the page HTML and the dynamic content produced by Joomla.
Using Joomla cache will typically reduce page load times because instead of Joomla using modules and plugins to recalculate the dynamic page content every time for every visitor, it will simply use the saved cache content. You can cache Joomla content by page, by module or by plugin; here's one good explanation of the differences.
It's worth spending some time testing with a tool like WebPageTest to find the best Joomla cache option for your specific site. I've sometimes had significant savings with this.
It makes sense to have all three working on your site; together they will reduce server load and speed up page display.
Good luck!
Just a heads up. Some content in certain browsers can experience byte-range request issues if you have gzip enabled while using Cloudflare.
For instance, depending on the server, Safari will most likely not play MP4 video served through Cloudflare from a gzip-enabled server. Gzip can interfere with byte-range requests.
I ran into this issue before and figured I would share in case anyone runs into any of these issues.
If you want to have gzip enabled, but experience issues with certain files, you can disable gzip for those specific files in .htaccess by adding this:
<IfModule mod_setenvif.c>
# Tell mod_deflate not to compress .mp4 responses, so byte-range requests work
SetEnvIfNoCase Request_URI "\.mp4$" no-gzip dont-vary
</IfModule>
Just replace .mp4 with whichever file type is giving you trouble.
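If you want to confirm that range requests behave correctly after the change, a rough probe like this (Python with the requests library; the URL is a placeholder) checks that the server honours a small Range request rather than returning a gzipped full body:

import requests

URL = "https://example.com/video/clip.mp4"  # placeholder: the file that was failing

# Ask for just the first 1024 bytes, the way Safari's video player does.
resp = requests.get(URL, headers={"Range": "bytes=0-1023"})

# A healthy response is 206 Partial Content with a Content-Range header
# and no Content-Encoding: gzip.
print("Status:          ", resp.status_code)
print("Content-Range:   ", resp.headers.get("Content-Range"))
print("Content-Encoding:", resp.headers.get("Content-Encoding"))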
When a page is requested, the response carries headers such as the ones below:
Pragma
Cache-Control
Content-Type
Date
Content-length
Is there a way to remove Date, for example? Or remove most of them (except Pragma and some caching headers) for image files? Could we gain performance here? Should we do it at the web server layer?
The Date header is required by HTTP/1.1. Content-Type and Content-Length are also valuable and cost only a few bytes, and you already mentioned that cache headers are important to you. So I think you are looking in the wrong place for optimization.
What you can do is serve images from a domain separate from the application, so that clients aren't sending cookie headers when they request static images. Using a CDN for serving static content is also recommended.