Amazon CloudFront: private content but maximise local browser caching

For JPEG image delivery in my web app, I am considering using Amazon S3 (or Amazon CloudFront
if it turns out to be the better option), but I have two, possibly opposing,
requirements:
The images are private content; I want to use signed URLs with short expiration times.
The images are large; I want them cached long-term by the users' browsers.
The approach I'm thinking of is:
User requests www.myserver.com/the_image
Logic on my server determines the user is allowed to view the image. If they are allowed...
Redirect the browser (is HTTP 307 best?) to a signed CloudFront URL
The signed CloudFront URL expires in 60 seconds, but its response includes "Cache-Control: max-age=31536000, private"
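For illustration, here is a minimal sketch of those steps as an Express handler in TypeScript. Everything in it is assumed rather than taken from the question: the route, the authorization check, and the signCloudFrontUrl() helper are placeholders (in practice you might build the helper on the AWS SDK's CloudFront URL signer). The Cache-Control header on the image itself would typically be configured on the S3 object metadata or a CloudFront response headers policy, not in this handler.

```typescript
import express from "express";

const app = express();

// Hypothetical signer; stands in for a real CloudFront URL-signing helper.
function signCloudFrontUrl(path: string, expiresInSeconds: number): string {
  const expires = Math.floor(Date.now() / 1000) + expiresInSeconds;
  return `https://dxxxxxxxx.cloudfront.net${path}?Expires=${expires}&Signature=...&Key-Pair-Id=...`;
}

// Placeholder authorization check.
function userMayViewImage(req: express.Request): boolean {
  return Boolean(req.headers.authorization);
}

app.get("/the_image", (req, res) => {
  // Server-side logic decides whether this user may view the image.
  if (!userMayViewImage(req)) {
    return res.sendStatus(403);
  }
  // Redirect to a CloudFront URL signed for ~60 seconds.
  // For a plain GET, 302 and 307 behave the same; 307 additionally preserves
  // the method and body on non-GET requests.
  res.redirect(307, signCloudFrontUrl("/the_image.jpg", 60));
});

app.listen(3000);
```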
The problem I foresee is that the next time the page loads, the browser will be looking for
www.myserver.com/the_image but its cache will be keyed to the signed CloudFront URL. My server
will return a different signed CloudFront URL the second time, due to the very short
expiration times, so the browser won't know it can use its cache.
Is there a way around this without having my web server proxy the image from CloudFront (which obviously negates all the
benefits of using CloudFront)?
I'm wondering if there may be something I could do with ETag and HTTP 304, but I can't quite join the dots...

To summarize, you have private images you'd like to serve through Amazon CloudFront via signed URLs with a very short expiration. However, while access by a particular URL may be time-limited, it is desirable that the client serve the image from cache on subsequent requests even after the URL expiration.
Regardless of how the client arrives at the CloudFront URL (directly or via some server redirect), the client cache of the image will only be associated with the particular URL that was used to request the image (and not any other URL).
For example, suppose your signed URL is the following (expiry timestamp shortened for example purposes):
http://[domain].cloudfront.net/image.jpg?Expires=1000&Signature=[Signature]
If you'd like the client to benefit from caching, you have to send it to the same URL. You cannot, for example, direct the client to the following URL and expect the client to use a cached response from the first URL:
http://[domain].cloudfront.net/image.jpg?Expires=5000&Signature=[Signature]
There are currently no cache control mechanisms to get around this, including ETag, Vary, etc. The nature of client caching on the web is that a resource in cache is associated with a URL, and the purpose of the other mechanisms is to help the client determine when its cached version of a resource identified by a particular URL is still fresh.
You're therefore stuck in a situation where, to benefit from a cached response, you have to send the client to the same URL as the first request. There are potential ways to accomplish this (cookies, local storage, server scripting, etc.), so let's suppose that you have implemented one.
You next have to consider that caching is only a suggestion, and even then it isn't a guarantee. If you expect the client to have the image cached and serve it the original URL to benefit from that caching, you run the risk of a cache miss. In the case of a cache miss after the URL expiry time, the original URL is no longer valid, and the client is then left unable to display the image (either from the cache or from the provided URL).
The behavior you're looking for simply cannot be provided by conventional caching when the expiry time is in the URL.
Since the desired behavior cannot be achieved, you might consider your next best options, each of which will require giving up on one aspect of your requirement. In the order I would consider them:
If you give up short expiry times, you could use longer expiry times and rotate URLs. For example, you might set the URL expiry to midnight and then serve that same URL for all requests that day. Your client will benefit from caching for the day, which is likely better than none at all. The obvious disadvantage is that your URLs are valid for longer.
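A sketch of that rotation, assuming the same kind of hypothetical signer as in the earlier sketch: pinning the expiry to the next UTC midnight makes every request on a given day produce the same signed URL, so the browser's cache key stays stable for the day.

```typescript
// Pin the Expires value to the next UTC midnight so all URLs signed today
// are identical (and therefore share one browser cache entry).
function nextUtcMidnightEpochSeconds(now: Date = new Date()): number {
  const midnight = Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate() + 1);
  return Math.floor(midnight / 1000);
}

// Hypothetical signer that takes an absolute expiry instead of a TTL.
declare function signCloudFrontUrlUntil(path: string, expiresEpochSeconds: number): string;

const dailyUrl = signCloudFrontUrlUntil("/the_image.jpg", nextUtcMidnightEpochSeconds());
```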
If you give up content delivery, you could serve the images from a server which checks for access with each request. Clients will be able to cache the resource for as long as you want, which may be better than content delivery depending on the frequency of cache hits. A variation of this is to trade Amazon CloudFront for another provider, since there may be other content delivery networks which support this behavior (although I don't know of any). The loss of the content delivery network may be a disadvantage or may not matter much depending on your specific visitors.
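A possible sketch of that "serve it yourself" option, again as a hypothetical Express route rather than anything from the answer: the application server checks access on every request and serves the file itself with long-lived private caching, so no CDN or signed URL is involved. The path and the access check are placeholders.

```typescript
import express from "express";

const app = express();

app.get("/the_image", (req, res) => {
  // Placeholder access check, run on every request.
  if (!req.headers.authorization) {
    return res.sendStatus(403);
  }
  // The browser may cache the image for up to a year; "private" keeps
  // shared caches (proxies) from storing it.
  res.setHeader("Cache-Control", "private, max-age=31536000");
  res.sendFile("/var/images/the_image.jpg"); // served directly, no CDN
});

app.listen(3000);
```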
If you give up the simplicity of a single static HTTP request, you could use client-side scripting to determine the request(s) that should be made. For example, in JavaScript you could attempt to retrieve the resource using the original URL (to benefit from caching), and if that fails (due to a cache miss and lapsed expiry) request a new URL to use for the resource. A variation of this is to use some caching mechanism other than the browser cache, such as local storage. The disadvantage here is increased complexity and a compromised ability for the browser to prefetch.
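A browser-side sketch of that last option; the "/signed-url-for/the_image" endpoint is made up for illustration. The idea is to try the previously issued signed URL first, which can be answered from the browser cache even after the URL has expired, and only ask the server for a fresh signed URL when that fails. Fetching a CloudFront URL from script cross-origin also requires CORS to be enabled on the distribution.

```typescript
// Try the old signed URL (may hit the browser cache), fall back to asking
// the server for a new one.
async function loadImage(img: HTMLImageElement, knownSignedUrl: string): Promise<void> {
  try {
    const res = await fetch(knownSignedUrl, { cache: "force-cache" });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    img.src = URL.createObjectURL(await res.blob());
  } catch {
    // Cache miss and the old URL has expired: get a fresh signed URL.
    const fresh = await fetch("/signed-url-for/the_image");
    img.src = await fresh.text();
  }
}
```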

Save a list of user + image + expiration time -> CloudFront links. If a user has a non-expired CloudFront link, use it for the image and don't generate a new one.
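A sketch of that bookkeeping (the names and the one-hour lifetime are assumptions, and the signer is a placeholder): keep a map from user + image to the signed URL and its expiry, and only sign a new URL once the stored one has expired.

```typescript
type SignedLink = { url: string; expiresAt: number }; // expiry as epoch seconds

const signedLinks = new Map<string, SignedLink>();

// Hypothetical signer, as in the earlier sketches.
declare function signCloudFrontUrl(path: string, expiresInSeconds: number): string;

function getOrCreateSignedUrl(userId: string, imagePath: string): string {
  const key = `${userId}:${imagePath}`;
  const now = Math.floor(Date.now() / 1000);
  const cached = signedLinks.get(key);
  if (cached && cached.expiresAt > now) {
    return cached.url; // reuse it, so the browser cache key stays the same
  }
  const ttl = 3600; // arbitrary lifetime for the example
  const link = { url: signCloudFrontUrl(imagePath, ttl), expiresAt: now + ttl };
  signedLinks.set(key, link);
  return link.url;
}
```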

It seems you have already solved the issue. You said that your server issues an HTTP 307 redirect to the CloudFront URL (signed URL), so the browser caches only the CloudFront URL, not your URL (www.myserver.com/the_image). So the scenario is as follows:
Client 1 requests www.myserver.com/the_image -> is redirected to the CloudFront URL -> content is cached
The CloudFront URL now expires.
Client 1 requests www.myserver.com/the_image again -> is redirected to the same CloudFront URL -> retrieves the content from cache without fetching the CloudFront content again.
Client 2 requests www.myserver.com/the_image -> is redirected to the CloudFront URL, which denies access because the signature has expired.

Related

Azure CDN "loses" requests

We're using Azure CDN (Verizon Standard) to serve images to e-commerce sites; however, we're experiencing an unreasonable number of loads from the origin: images which should have been cached in the CDN are requested again multiple times.
Images seem to stay in the cache if they're requested very frequently (a Pingdom page speed test, which executes every 30 minutes, doesn't show the problem).
Additionally, if I request an image (using the browser), the scaled image is requested from the origin and delivered, but the second request doesn't return a cached file from the CDN; the origin is called again. The third request returns from the CDN.
The origin is a web app which scales and delivers the requested images. All responses for images have the following headers, which might affect caching:
cache-control: max-age=31536000, s-maxage=31536000
ETag: e7bac8d5-3433-4ce3-9b09-49412ac43c12?cache=Always&maxheight=3200&maxwidth=3200&width=406&quality=85
Since we want the CDN to cache the scaled image, the Azure CDN endpoint is configured to cache every unique URL, and the caching behaviour is "Set if missing" (although all responses have the headers above).
Using the same origin with AWS CloudFront works perfectly (but since we have everything else in Azure, it would be nice to make it work). I haven't been able to find whether there are any limits or constraints on the ETag, but since it works with AWS, it seems like I'm missing something related to either Azure or Verizon.

Why do we need HTTPS when we send the result to the user?

The reason we need HTTPS (secured/encrypted data over the network):
We need to receive user-side data (whether via a form or a URL, whichever way users send their data to the server over the network) securely, which is done by HTTP + SSL encryption. In that case only the form, or whichever URL the user posts/sends data to, has to be a secure URL, not the page that I am sending to the browser. For example, when I need to serve a customer registration form, I have to send it from the server as an HTTPS URL; if I don't do that, the browser will give a warning such as a mixed content error. Instead, couldn't browsers have had some sort of parameter to indicate that the form on the page must submit to a secure URL?
In some cases my server-side content must not be readable by anyone other than those I allow; for that I can use HTTPS to deliver the content, with extra security measures on the server side.
Other than these two scenarios I don't see any reason for sending HTTPS-encrypted content over the network. Let's assume a site with 10+ CSS files, 10+ JS files, and 50+ images, with 200 KB of page content and a total weight of maybe ~2-3 MB. If this whole content is encrypted, I have no doubt this is going to mean a minimum of 100-280 connections being created between browser and server.
Please explain why we need to deliver everything this way (most of us do it because browsers, search engines like Google, and W3C standards ask us to use it on every page).
why we need to deliver everything this way
Because otherwise it's not secure. The browsers which warn about this are not wrong.
Let's assume a site with 10+ CSS files, 10+ JS files
Just one .js file served over non-HTTPS, and a man-in-the-middle attacker could inject arbitrary code into your HTTPS page, from which origin they can completely control the user's interaction with your site. That's why browsers don't allow it and give you the mixed content warning.
(And .css can have the same impact in many cases.)
Plus it's just plain bad security usability to switch between HTTP and HTTPS for different pages. The user is likely to fail to notice the switch and may be tricked into entering data into (or accepting data from) a non-HTTPS page. All the attacker would have to do is change one of the links so it pointed to HTTP instead of HTTPS, and the usual process would be subverted.
I have no doubt this is going to mean a minimum of 100-280 connections being created between browser and server.
HTTP[S] reuses connections. You don't pay the SSL handshake latency for every resource linked.
HTTPS is really not expensive enough today for its performance cost to be worth worrying about for a typical small web app.

Caching with an SSL certificate

I read that if the request is authenticated or secure, it won't be cached. We previously worked on our caching and are now planning to purchase an SSL certificate.
If caching cannot be done over an SSL connection, does that mean our work on caching is useless?
Reference: http://www.mnot.net/cache_docs/
Your reference is wrong. Content sent over HTTPS will be cached in modern browsers, but it obviously cannot be cached by intermediate proxies. See http://arstechnica.com/business/2011/03/https-is-great-here-is-why-everyone-needs-to-use-it-so-ars-can-too/ or https://blog.httpwatch.com/2011/01/28/top-7-myths-about-https/ for example.
You can use the Cache-Control: public header to allow a representation served over HTTPS to be cached.
While the document you refer to says "If the request is authenticated or secure (i.e., HTTPS), it won’t be cached.", it's within a paragraph starting with "Generally speaking, these are the most common rules that are followed [...]".
The same document goes into more details after this:
Useful Cache-Control response headers include:
public — marks authenticated responses as cacheable; normally, if HTTP authentication is required, responses are automatically private.
(What applies to HTTP with authentication also applies to HTTPS.)
Obviously, documents that actually contain sensitive information aimed only at the authenticated user should not be served with this header, since they really shouldn't be cached. However, using this header for items that are suitable for caching (e.g. common images and scripts) should improve the performance of your website (just as caching does over plain HTTP).
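For example, a small sketch (Express static middleware, with a made-up directory and max-age) of marking non-sensitive shared assets as publicly cacheable even though they are served over HTTPS:

```typescript
import express from "express";

const app = express();

// Shared, non-sensitive assets: safe to mark "public" so browsers (and any
// trusted intermediaries such as a CDN) may cache them over HTTPS.
app.use("/static", express.static("public", {
  setHeaders: (res) => {
    res.setHeader("Cache-Control", "public, max-age=86400");
  },
}));

app.listen(3000);
```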
What will never happen with HTTPS is the caching of resources by intermediate proxy servers sitting between the client and your web server (at least on the external leg, if you have a load balancer or similar). Some CDNs will serve content over HTTPS (assuming it's suitable for your system to trust those CDNs). In general, these proxy servers wouldn't fall under the control of your cache design anyway.

Geo location / filtering and HTTP Caching

I'm trying to add cache support (both HTTP and server-side) to an ASP.NET Web API solution.
The solution is geo-located, meaning that I can return different results based on the caller's IP address.
The problem can be trivially solved for the server-side cache, using an approach similar to VaryByCustom (like this one). However, that does not solve the problem with client-side HTTP caches.
I'm considering the following options:
Enforcing must-revalidate in the cache:
Keep the validation server-side using the same algorithm as VaryByCustom, but add the extra cache revalidation calls on the server side with ETags or any mechanism that keeps track of the country of origin of the originally cached value.
Creating country-specific routes (HTTP 302):
In this scenario an application invoking
http://site/UK/content
is redirected to the US version if the request originates from a US IP address once the cache has expired:
http://site/US/content
This might present out-of-date content that does not match the locale of the originating IP. That is not a serious problem if the cache expiry is a small value (< 1 hour), since country changes are fairly uncommon.
What is the recommended solution?
I'm not sure I understand the problem.
For client caching, if you enable private caching then a user in the UK will cache the UK version of http://site/content and a user in the US will cache the US version of http://site/content.
The only problem I can see is if a user travels from the US to the UK and accesses the content. Or if you allow public caching and some intermediary is shared by US and UK users.
After detailed evaluation, the first approach was chosen. The actual implementation is:
Create a cache key that depends on the country of the originating IP address.
Create an ETag for that cache key and store it in the server cache.
Subsequent requests that include the ETag in an If-None-Match header are evaluated on the server for cache freshness:
If the country of origin is the same, the cache key will be the same and the ETag is still valid, so the server returns HTTP 304 Not Modified.
If the country of origin is different, the cache key will be different and thus the ETag is not valid, so the server returns HTTP 200 with a new ETag.
I agree with Poul-Henning Kamp that geolocation should be a transport-level concern, but unfortunately it is not, so this is the only way we could come up with to ensure cache freshness for a given country.
The disadvantage is that we cannot have any infrastructure cache, i.e., all requests need to check the server for cache freshness.
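A minimal sketch of that flow, written as an Express handler purely for illustration (the original is ASP.NET Web API); the geo lookup and content lookup are placeholders.

```typescript
import crypto from "crypto";
import express from "express";

declare function countryFromIp(ip: string | undefined): string; // placeholder geo lookup
declare function contentFor(country: string): string;           // placeholder content

const app = express();

app.get("/content", (req, res) => {
  const country = countryFromIp(req.ip);
  // The cache key (and therefore the ETag) depends on the country of the originating IP.
  const etag = `"${crypto.createHash("sha1").update(`content:${country}`).digest("hex")}"`;

  if (req.headers["if-none-match"] === etag) {
    return res.status(304).end(); // same country: the cached copy is still fresh
  }
  res.setHeader("ETag", etag);
  // "no-cache" stores the response but forces revalidation on every request,
  // matching the trade-off described above.
  res.setHeader("Cache-Control", "private, no-cache");
  res.send(contentFor(country)); // different country (or first request): new body + new ETag
});

app.listen(3000);
```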

Does setting the path on a cookie prevent it from being sent in HTTP requests for static resources?

I am working with cookies in my .NET application. Is there any way, when setting the cookie in the user's web browser, to ensure it won't be sent in HTTP requests for static resources such as CSS, JavaScript, or images? Or is setting up cookieless domains for such resources the only way to avoid sending cookies with those requests?
Let me start by saying this: unless you're getting thousands of requests per second, the total effect on bandwidth and server load will be minimal. So unless you're working with a really high-traffic site, I wouldn't bother.
With that said, Path is not really a good option. That's because most static paths sit underneath the website's valid path (usually / is a valid dynamic URL, but static files are served from under / as well)...
Instead, I would serve static content from a different domain (it could be served by the same server, or, preferably, a CDN). So create a subdomain like static.domain.com and reference all of your static content from there. It doesn't matter where on the server it's mapped to, just that it's referenced by the HTML from the other domain. Cookies won't be transmitted since the domain part won't be the same (as long as you don't use wildcard domain identifiers in the cookie declaration)...
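As an aside, a sketch of the cookie side of this idea (Express is used purely for illustration, since the question is about .NET): omit the Domain attribute, or set it to the exact application host, so the cookie is never sent to static.domain.com.

```typescript
import express from "express";

const app = express();

app.get("/login", (_req, res) => {
  // No "domain" option: the cookie is scoped to the exact host that set it
  // (e.g. www.domain.com). Setting domain: ".domain.com" would make the
  // browser send it to static.domain.com as well, defeating the point.
  res.cookie("session", "abc123", { httpOnly: true, path: "/" });
  res.send("logged in");
});

app.listen(3000);
```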
