How to tell CloudFront not to cache 302 responses from S3 redirects, or how else to work around this image cache generation issue - caching

I'm using Imagine via the LIIPImagineBundle for Symfony2 to create cached versions of images stored in S3.
Cached images are stored in an S3 web-enabled bucket served by CloudFront. However, the default LiipImagineBundle S3 implementation is far too slow for me (checking whether the file exists on S3, then creating a URL either to the cached file or to the resolve functionality), so I've worked out my own workflow:
Pass the client the CloudFront URL where the cached image should exist.
The client requests the image via the CloudFront URL. If the image does not exist, a redirect rule on the S3 bucket 302-redirects the user to an Imagine webserver path, which generates the cached version of the file and saves it to the appropriate location on S3.
The webserver then 301-redirects the user back to the CloudFront URL, where the image is now stored, and the client is served the image.
This works fine as long as I don't use CloudFront. The problem appears to be that CloudFront caches the 302 redirect response (even though the HTTP spec says it shouldn't). Thus, if I use CloudFront, the client is sent into an endless redirect loop back and forth between the webserver and CloudFront, and every subsequent request for the file still redirects to the webserver even after the file has been generated.
If I use S3 directly instead of CloudFront there are no issues and this solution is solid.
According to Amazon's documentation, S3 redirect rules don't allow me to specify custom headers (to set Cache-Control headers or the like), and I don't believe CloudFront lets me control the caching of redirects (if it does, it's well hidden). CloudFront's invalidation options are so limited that I don't think they will work (only 3 objects can be invalidated at any time)... I could pass an argument back to CloudFront on the first redirect (from the Imagine webserver) to break the endless loop (e.g. image.jpg?1), but subsequent requests for the same object will still 302 to the webserver and then 301 back to CloudFront even though the file now exists. I feel like there should be an elegant solution to this problem, but it's eluding me. Any help would be appreciated!

I'm solving this same issue by setting the "Default TTL" in the CloudFront "Cache Behavior" settings to 0, but still allowing my resized images to be cached by setting Cache-Control metadata on the S3 object with max-age=12313213.
This way redirects will not be cached (Default TTL behavior) but my resized images will be (Cache-Control max-age on the S3 object).
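For illustration, here is a minimal sketch (assuming boto3; the bucket, key, and max-age values are placeholders, not from the original answer) of uploading a resized image with such Cache-Control metadata, so the object carries its own caching instructions while the distribution's Default TTL stays at 0:

import boto3

s3 = boto3.client('s3')

def upload_resized_image(local_path, bucket, key):
    # Attach a long max-age to the object itself; with the distribution's
    # Default TTL at 0, only responses carrying this header are cached by CloudFront.
    with open(local_path, 'rb') as fh:
        s3.put_object(
            Bucket=bucket,
            Key=key,
            Body=fh,
            ContentType='image/jpeg',
            CacheControl='max-age=31536000, public',
        )

upload_resized_image('/tmp/resized.jpg', 'example-image-cache-bucket', 'cache/media/example.jpg')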

If you really need to use CloudFront here, the only thing I can think of is to not directly subject the user to the 302/301 dance. Could you introduce some sort of proxy script/page to front S3 and that whole process? (Or does that then defeat the point?)
So a cache miss would look like this (a rough sketch of such a proxy follows the list):
Visitor requests proxy page through CloudFront.
Proxy page requests image from S3.
Proxy page receives 302 from S3, follows this to the Imagine web server.
Ideally just return the image from here (while letting it update S3), or follow the 301 back to S3.
Proxy page returns image to visitor.
Image is cached by CloudFront.
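A minimal sketch of such a proxy, assuming a small Flask app and a hypothetical S3 static-website endpoint (all names here are illustrative, not from the original answer):

import requests
from flask import Flask, Response

app = Flask(__name__)

# Hypothetical S3 static-website endpoint that hosts the image cache.
S3_WEBSITE = 'http://example-image-cache.s3-website-us-east-1.amazonaws.com'

@app.route('/proxy/<path:key>')
def proxy_image(key):
    # requests follows the 302 (to the Imagine server) and the 301 (back to S3)
    # automatically, so CloudFront only ever sees the final image response.
    upstream = requests.get('{}/{}'.format(S3_WEBSITE, key), allow_redirects=True, timeout=30)
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get('Content-Type', 'application/octet-stream'),
        headers={'Cache-Control': 'public, max-age=31536000'},
    )

Because the proxy swallows the redirect dance server-side, CloudFront caches a plain image response rather than the 302.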

TL;DR: Make use of Lambda@Edge
We face the same problem using LiipImagineBundle.
For development, an NGINX server serves the content from the local filesystem and resolves images that are not yet stored using a simple proxy_pass:
location ~ ^/files/cache/media/ {
    try_files $uri @public_cache_fallback;
}

location @public_cache_fallback {
    rewrite ^/files/cache/media/(.*)$ media/image-filter/$1 break;
    proxy_set_header X-Original-Host $http_host;
    proxy_set_header X-Original-Scheme $scheme;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://0.0.0.0:80/$uri;
}
As soon as you want to integrate CloudFront, things get more complicated due to caching. While you can easily add S3 (static website hosting, see below) as an origin for a distribution, CloudFront itself will not follow the resulting redirects but will return them to the client. In the default configuration CloudFront will then cache this redirect and NOT the desired image (see https://stackoverflow.com/a/41293603/6669161 for a workaround with S3).
The best way would be to use a proxy as described here. However, this adds another layer, which might be undesirable. Another solution is to use Lambda@Edge functions (see here). In our case, we use S3 as a normal origin and make use of the "Origin Response" event (you can edit this in the "Behaviors" tab of your distribution). Our Lambda function just checks whether the request to S3 was successful. If it was, we can simply forward the response. If it was not, we assume that the desired object has not yet been created. The Lambda function then calls our application, which generates the object and stores it in S3. For simplicity, the application replies with a redirect (to CloudFront again), too - so we can just forward that to the client. A drawback is that the client itself will see one redirect. Also make sure to set the cache headers so that CloudFront does not cache the Lambda redirect.
Here is an example Lambda function. This one just redirects the client to the resolve URL (which then redirects to CloudFront again). Keep in mind that this results in more round trips for the client (which is not perfect), but it reduces the execution time of your Lambda function. Make sure to add the base Lambda@Edge policy (related tutorial).
env = {
    'Protocol': 'http',
    'HostName': 'localhost:8000',
    'HttpErrorCodeReturnedEquals': '404',
    'HttpRedirectCode': '307',
    'KeyPrefixEquals': '/cache/media/',
    'ReplaceKeyPrefixWith': '/media/resolve-image-filter/'
}

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']
    if int(response['status']) == int(env['HttpErrorCodeReturnedEquals']):
        request = event['Records'][0]['cf']['request']
        original_path = request['uri']
        if original_path.startswith(env['KeyPrefixEquals']):
            new_path = env['ReplaceKeyPrefixWith'] + original_path[len(env['KeyPrefixEquals']):]
        else:
            new_path = original_path
        location = '{}://{}{}'.format(env['Protocol'], env['HostName'], new_path)
        response['status'] = env['HttpRedirectCode']
        response['statusDescription'] = 'Resolve Image'
        response['headers']['location'] = [{
            'key': 'Location',
            'value': location
        }]
        response['headers']['cache-control'] = [{
            'key': 'Cache-Control',
            'value': 'no-cache'  # Also make sure that your minimum TTL is set to 0 (for the distribution)
        }]
    return response
If you just want to use S3 as a cache (without CloudFront), using static website hosting and a redirect rule will redirect clients to the resolve URL in case of missing cache files (you will need to rewrite the S3 cache resolver URLs to the website version, though):
<RoutingRules>
  <RoutingRule>
    <Condition>
      <HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
      <KeyPrefixEquals>cache/media/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <Protocol>http</Protocol>
      <HostName>localhost</HostName>
      <ReplaceKeyPrefixWith>media/image-filter/</ReplaceKeyPrefixWith>
      <HttpRedirectCode>307</HttpRedirectCode>
    </Redirect>
  </RoutingRule>
</RoutingRules>

Related

Override browser cache in PWA service worker

I am using "caches" to cache in service worker my PWA assets and make it available offline.
When I change an asset, specifically a js file, I modify at least one byte in my service worker to trigger its native update: the service worker updates and retrieves all of its previously cached assets to refresh its caches.
Yet, server responds with a cached version of the file, and whereas I own the files served I have no control over Cache-Control http header.
How can I prevent browser caching of service-worker-cached resources? Versioning the files with a
"?v="+version
suffix won't work, because this version cannot be passed to the <script> or <link> tags that reference the cached files in the HTML files, which are static; the cache would then not recognize the unversioned file names and could not serve them offline.
Since "caches.addAll" does not, AFAIK, allow any means to specify HTTP request headers such as Cache-Control the way fetch or XMLHttpRequest do, how can I prevent additional aggressive caching of my assets?
I am using plain JavaScript and, if possible, I need this done without any additional library. Note also that meta http-equiv tags won't solve the problem for assets other than complete HTML documents.
You can bypass the browser's cache by explicitly constructing a Request object with a cache property set to an appropriate cache mode. 'reload' is a good choice, as it will bypass the browser's cache for the outgoing request, but it will update the browser's cache with the response (so you'll have a fresher browser cache overall). If you don't even want that update to be performed, you could use 'no-store'.
Here's some code showing how to do this concisely for an array of URLs that could be passed in to cache.addAll():
async function addAllBypassCache(cacheName, urls) {
  const cache = await caches.open(cacheName);
  const requests = urls.map((url) => new Request(url, {
    cache: 'reload',
  }));
  await cache.addAll(requests);
}

Laravel AWS S3 Storage image cache

I have a Laravel-based web and mobile application that stores images on AWS S3, and I want to add cache support because even a small number of app users produces hundreds and sometimes thousands of GET requests to AWS S3.
To get an image from the mobile app, I use a GET request that is handled by code like this:
public function showImage(....) {
    ...
    return Storage::disk('s3')->response("images/".$image->filename);
}
In the image below you can see the response headers I receive. Cache-Control shows no-cache, so I assume the mobile app won't cache this image.
How can I add cache support for this request? Should I do it?
I know that the Laravel documentation suggests caching for file storage - should I implement it for S3? Can it help to decrease the number of GET requests reading files from AWS S3? Where can I find more info about it?
I would suggest using a temporary URL as described here: https://laravel.com/docs/7.x/filesystem#file-urls
Then use the Cache to store it until it expires:
$value = Cache::remember('my-cache-key', 3600 * $hours, function () use ($hours, $image) {
    $url = Storage::disk('s3')->temporaryUrl(
        "images/".$image->filename, now()->addMinutes(60 * $hours + 1)
    );
    return $url; // the closure must return the URL so Cache::remember can store it
});
Whenever you update the object in S3, do this to delete the cached URL:
Cache::forget('my-cache-key');
... and you will get a new URL for the new object.
You could use a CDN service like CloudFlare and set a cache header to let CloudFlare keep the cache for a certain amount of time.
$s3->putObject(file_get_contents($path), $bucket, $url, S3::ACL_PUBLIC_READ, array(), array('Cache-Control' => 'max-age=31536000, public'));
This way, files would be fetched once by CloudFlare, stored at their servers, and served to users without requesting images from S3 for every single request.
See also:
How can I reduce my data transfer cost? Amazon S3 --> Cloudflare --> Visitor
How to set the Expires and Cache-Control headers for all objects in an AWS S3 bucket with a PHP script

Caching all images on external site through Cloudflare

Here is my situation:
I have a webapp that uses a lot of images on a remote server. My webapp is behind Cloudflare, although the server that the images are hosted on is not, and this server can be very slow. It can sometimes take about 5 seconds per image.
I would like to use Cloudflare to proxy requests to this external server, but also cache them indefinitely, or at least as long as possible. The images never change, so I do not mind them having a long cache life.
Is this something I should set up in a worker? As a page rule? Or just not use Cloudflare in this way?
If you can't change the origin server headers, you could try the following snippet in your worker:
fetch(event.request, { cf: { cacheTtl: 300 } })
As per docs:
This option forces Cloudflare to cache the response for this request, regardless of what headers are seen on the response. This is equivalent to setting two page rules: "Edge Cache TTL" and "Cache Level" (to "Cache Everything").
I think you generally just want a very long caching header on your images. Something like:
Cache-Control: public, max-age=31536000

Allow only CloudFront to read from origin servers?

I'm using origin servers on CloudFront (as opposed to s3) with signed URLs. I need a way to ensure that requests to my server are coming only from CloudFront. That is, a way to prevent somebody from bypassing CloudFront and requesting a resource directly on my server. How can this be done?
As per the documentation, there's no built-in support for that yet. The only thing I can think of is to restrict access further, although not entirely, by allowing only Amazon IP addresses to reach your webserver. Amazon should be able to provide the IP address ranges to you, as they have provided them to us.
This what the docs say:
Using an HTTP Server for Private Content
You can use signed URLs for any CloudFront distribution, regardless of whether the origin is an Amazon S3 bucket or an HTTP server. However, for CloudFront to access your objects on an HTTP server, the objects must remain publicly accessible. Because the objects are publicly accessible, anyone who has the URL for an object on your HTTP server can access the object without the protection provided by CloudFront signed URLs. If you use signed URLs and your origin is an HTTP server, do not give the URLs for the objects on your HTTP server to your customers or to others outside your organization.
I've just done this for myself, and thought I'd leave the answer here where I started my search.
Here are the few lines you need to put in your .htaccess (assuming you've already turned the rewrite engine on):
RewriteCond %{HTTP_HOST} ^www-origin\.example\.com [NC]
RewriteCond %{HTTP_USER_AGENT} !^Amazon\ CloudFront$ [NC]
RewriteRule ^(.*)$ https://example.com/$1 [R=301,L]
This will redirect all visitors to your Cloudfront distribution - https://example.com in this, um, example - and only let www-origin.example.com work for Amazon CloudFront. If your website code is also on a different URL (a development or staging server, for example) this won't get in the way.
Caution: the user-agent is guessable and spoofable; a more secure way of achieving this would be to set a custom HTTP header in Cloudfront, and check for its value in .htaccess.
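To illustrate that last suggestion at the application layer rather than in .htaccess, here is a minimal sketch, assuming a Flask origin and a hypothetical X-Origin-Secret custom header configured on the CloudFront origin (both names are illustrative):

import os
from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical shared secret, set as a CloudFront origin custom header
# (e.g. X-Origin-Secret) and exported as an environment variable on the origin.
ORIGIN_SECRET = os.environ.get('ORIGIN_SECRET', '')

@app.before_request
def require_cloudfront():
    # Reject any request that did not arrive through CloudFront with the secret header.
    if request.headers.get('X-Origin-Secret') != ORIGIN_SECRET:
        abort(403)

@app.route('/')
def index():
    return 'Served via CloudFront only.'

Unlike the User-Agent check, the secret header is known only to CloudFront and the origin, so it cannot be guessed from public traffic.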
I ended up creating 3 Security Groups filled solely with CloudFront IP addresses.
I found the list of IPs on this AWS docs page.
If you want to just copy and paste the IP ranges into the console, you can use this list I created:
Regional:
13.113.196.64/26, 13.113.203.0/24, 52.199.127.192/26, 13.124.199.0/24, 3.35.130.128/25, 52.78.247.128/26, 13.233.177.192/26, 15.207.13.128/25, 15.207.213.128/25, 52.66.194.128/26, 13.228.69.0/24, 52.220.191.0/26, 13.210.67.128/26, 13.54.63.128/26, 99.79.169.0/24, 18.192.142.0/23, 35.158.136.0/24, 52.57.254.0/24, 13.48.32.0/24, 18.200.212.0/23, 52.212.248.0/26, 3.10.17.128/25, 3.11.53.0/24, 52.56.127.0/25, 15.188.184.0/24, 52.47.139.0/24, 18.229.220.192/26, 54.233.255.128/26, 3.231.2.0/25, 3.234.232.224/27, 3.236.169.192/26, 3.236.48.0/23, 34.195.252.0/24, 34.226.14.0/24, 13.59.250.0/26, 18.216.170.128/25, 3.128.93.0/24, 3.134.215.0/24, 52.15.127.128/26, 3.101.158.0/23, 52.52.191.128/26, 34.216.51.0/25, 34.223.12.224/27, 34.223.80.192/26, 35.162.63.192/26, 35.167.191.128/26, 44.227.178.0/24, 44.234.108.128/25, 44.234.90.252/30
Global:
120.52.22.96/27, 205.251.249.0/24, 180.163.57.128/26, 204.246.168.0/22, 205.251.252.0/23, 54.192.0.0/16, 204.246.173.0/24, 54.230.200.0/21, 120.253.240.192/26, 116.129.226.128/26, 130.176.0.0/17, 99.86.0.0/16, 205.251.200.0/21, 223.71.71.128/25, 13.32.0.0/15, 120.253.245.128/26, 13.224.0.0/14, 70.132.0.0/18, 13.249.0.0/16, 205.251.208.0/20, 65.9.128.0/18, 130.176.128.0/18, 58.254.138.0/25, 54.230.208.0/20, 116.129.226.0/25, 52.222.128.0/17, 64.252.128.0/18, 205.251.254.0/24, 54.230.224.0/19, 71.152.0.0/17, 216.137.32.0/19, 204.246.172.0/24, 120.52.39.128/27, 118.193.97.64/26, 223.71.71.96/27, 54.240.128.0/18, 205.251.250.0/23, 180.163.57.0/25, 52.46.0.0/18, 223.71.11.0/27, 52.82.128.0/19, 54.230.0.0/17, 54.230.128.0/18, 54.239.128.0/18, 130.176.224.0/20, 36.103.232.128/26, 52.84.0.0/15, 143.204.0.0/16, 144.220.0.0/16, 120.52.153.192/26, 119.147.182.0/25, 120.232.236.0/25, 54.182.0.0/16, 58.254.138.128/26, 120.253.245.192/27, 54.239.192.0/19, 18.64.0.0/14, 120.52.12.64/26, 99.84.0.0/16, 130.176.192.0/19, 52.124.128.0/17, 204.246.164.0/22, 13.35.0.0/16, 204.246.174.0/23, 36.103.232.0/25, 119.147.182.128/26, 118.193.97.128/25, 120.232.236.128/26, 204.246.176.0/20, 65.8.0.0/16, 65.9.0.0/17, 120.253.241.160/27, 64.252.64.0/18
I'd like to note that by default, Security Groups only allow a maximum of 60 inbound and 60 outbound rules each, which is why I'm splitting these 122 IPs into 3 security groups.
After creating your 3 Security Groups, attach them to your EC2 (you can attach multiple Security Groups to an EC2). I left the EC2's default Security Group to only allow SSH traffic from my IP address.
Then you should be good to go! This forces users to use your CloudFront distribution and keeps your EC2's IP/DNS private.
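Since the published ranges change over time, a small script can regenerate these lists instead of copying them by hand. Here is a hedged sketch that reads Amazon's public ip-ranges.json feed and chunks the CloudFront prefixes to fit the 60-rule default limit mentioned above:

import json
import urllib.request

IP_RANGES_URL = 'https://ip-ranges.amazonaws.com/ip-ranges.json'

def cloudfront_cidrs():
    # Download Amazon's published IP ranges and keep only the CloudFront prefixes.
    with urllib.request.urlopen(IP_RANGES_URL) as resp:
        data = json.load(resp)
    return sorted({p['ip_prefix'] for p in data['prefixes'] if p['service'] == 'CLOUDFRONT'})

def chunk(items, size=60):
    # Security Groups default to 60 rules each, so split the list accordingly.
    for i in range(0, len(items), size):
        yield items[i:i + size]

if __name__ == '__main__':
    for group_number, cidrs in enumerate(chunk(cloudfront_cidrs()), start=1):
        print('Security group {}: {}'.format(group_number, ', '.join(cidrs)))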
AWS has finally created an AWS-managed prefix list for CloudFront-to-origin server requests, so there is no more need for custom Lambdas updating Security Groups, etc.
Use the prefix list com.amazonaws.global.cloudfront.origin-facing in your Security Groups, etc.
See the following links for more info:
The What's New Announcement
The Documentation
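For illustration, a minimal boto3 sketch that looks up that managed prefix list and allows HTTPS from it (the security group ID is a placeholder):

import boto3

ec2 = boto3.client('ec2')

# Look up the AWS-managed prefix list for CloudFront origin-facing traffic.
prefix_lists = ec2.describe_managed_prefix_lists(
    Filters=[{'Name': 'prefix-list-name',
              'Values': ['com.amazonaws.global.cloudfront.origin-facing']}]
)
prefix_list_id = prefix_lists['PrefixLists'][0]['PrefixListId']

# Allow HTTPS from CloudFront only (sg-0123456789abcdef0 is a placeholder).
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 443,
        'ToPort': 443,
        'PrefixListIds': [{'PrefixListId': prefix_list_id,
                           'Description': 'CloudFront origin-facing'}],
    }],
)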

Serving content depending on http accept header - caching problems?

I'm developing an application which is supposed to serve different content for "normal" browser requests and AJAX requests to the same URL
(in fact, it encapsulates the response HTML in a JSON object if the request is AJAX).
For this purpose, I'm detecting an AJAX request on the server side and processing the response appropriately; see the pseudocode below:
function process_response(request, response)
{
    if request.is_ajax
    {
        response.headers['Content-Type'] = 'application/json';
        response.headers['Cache-Control'] = 'no-cache';
        response.content = JSON( some_data... )
    }
}
The problem is that when the first AJAX request for the currently viewed URL is made, strange things happen in Google Chrome: if, right after the response arrives and is processed via JavaScript, the user clicks some link (a static one, which leads to another page) and then clicks the back button in the browser, he sees the returned JSON code instead of the rendered website (from the server logs I can say that no request is made). It seems to me that Chrome stores the latest response for the specific URL and doesn't take into account that it has a different content type, etc.
Is that a bug in Chrome, or am I misusing the HTTP protocol?
--- update 12 11 2012, 12:38 UTC
following PatrikAkerstrand answer, I've found following Chrome bug: http://code.google.com/p/chromium/issues/detail?id=94369
Any ideas how to avoid this behaviour?
You should also include a Vary header:
response.headers['Vary'] = 'Content-Type'
Vary is a standard way to control the caching context in content negotiation. Unfortunately it also has buggy implementations in some browsers; see Browser cache vary broken.
I would suggest using unique URLs.
Depending on your framework's capabilities, you can redirect (302) the browser to URL + .html to force the response format and make the cache key unique within the browser session. Then for AJAX requests you can still keep the suffix-less URL. Alternatively, you may suffix the AJAX URL with .json instead.
Other options are prefixing AJAX requests with /api or adding a cache-busting query param such as ?rand=1234.
Setting Cache-Control to no-store did the trick in my case, while no-cache didn't. This may have unwanted side effects, though.
no-store: The response may not be stored in any cache. Although other directives may be set, this alone is the only directive you need in preventing cached responses on modern browsers.
Source: Mozilla Developer Network - HTTP Cache-Control
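Pulling these suggestions together, a minimal sketch of the server-side handling, assuming a Flask-style handler (the route, template name, and AJAX detection are illustrative, not from the original answers):

from flask import Flask, jsonify, render_template, request

app = Flask(__name__)

@app.route('/page')
def page():
    # Treat requests carrying the conventional XMLHttpRequest marker as AJAX.
    is_ajax = request.headers.get('X-Requested-With') == 'XMLHttpRequest'
    if is_ajax:
        response = jsonify(html=render_template('page.html'))
        # Keep the JSON variant out of the browser/history cache entirely.
        response.headers['Cache-Control'] = 'no-store'
    else:
        response = app.make_response(render_template('page.html'))
    # Tell caches that the response depends on how the request was made
    # (here keyed on the X-Requested-With request header).
    response.headers['Vary'] = 'X-Requested-With'
    return response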
