Browser Cache Private S3 Resources - laravel

Stack is:
Angular
Laravel
S3
nginx
I'm using S3 to store confidential resources of my users. Bucket access is set to private, which means I can access files either by creating temporary (signed, dynamic) links or by using the Storage::disk('s3')->get('path/to/resource') method and returning the actual file as a response.
I'm looking for a way to cache resources in the user's browser. I have tried setting cache headers on the resource response directly on AWS, but since I'm creating temporary URLs, they are dynamic and caching does not work in that case.
Any suggestion is highly appreciated.
EDIT: One thing that makes the whole problem even more complex is that the security of the resources must stay intact. It means that I need a way to cache resources, but at the same time I must prevent users from copy-pasting links and using them outside of the app (sharing them with others via direct links).
In terms of security, temporary links are still not an ideal solution, since they can be shared (and accessed multiple times) within the period of time they are valid for (in my case, 30 seconds).

Caching will work as-is (based on Cache-Control, et al.) as long as the URL stays the same. So, if your application reuses the same signed URL for a while, you'll be fine.
The problem comes when you want to update an expiration date or something. That changes the querystring parameters, so it is effectively a different URL. You would need a cache key that ignores those parameters, but the browser has no concept of this by default.
If it is acceptable for your security, you can create a Service Worker which uses just the base URL (without the querystring) as the cache key. Then, future requests for the same object on the bucket will be able to use the cached response, regardless of the other URL parameters.
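For example, a minimal Service Worker sketch along these lines (the 's3-objects' cache name and the amazonaws.com hostname check are assumptions; adjust them to however your bucket is exposed):

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  // Assumption: the signed S3 URLs differ only by their querystring (signature, expiry).
  if (url.hostname.endsWith('.amazonaws.com')) {
    const cacheKey = url.origin + url.pathname; // base URL without the signature
    event.respondWith(
      caches.open('s3-objects').then(async (cache) => {
        const cached = await cache.match(cacheKey);
        if (cached) return cached;                    // reuse the cached copy regardless of signature
        const response = await fetch(event.request);  // otherwise hit S3 with the full signed URL
        if (response.ok) await cache.put(cacheKey, response.clone());
        return response;
      })
    );
  }
});

Since the Cache Storage API persists across sessions, you may also want to call caches.delete('s3-objects') on logout so the cached files don't outlive the session.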
I must prevent users from copy-pasting links and using them outside of the app (sharing with others via direct links).
This part of your requirement is impossible, and unrelated to caching. Once that URL is signed, it can be used by others.

You just have to add one parameter to your code:
'ResponseCacheControl' => 'no-store'
For example:
Storage::disk('s3')->getAwsTemporaryUrl(
    Storage::disk('s3')->getDriver()->getAdapter(),
    trim($mNameS3),
    \Carbon\Carbon::now()->addMinutes(config('app.aws_bucket_temp_url_time')),
    ['ResponseCacheControl' => 'no-store']
);

Related

Cache for Angular2

I am looking for a cache implementation for an Angular2 application.
For example, we have a million Movie objects stored on a server (i.e. enough that we don't want to grab them all at once). On the server, a REST endpoint is available : getMovie(String id)
Back on the client side, the cache should provide a simple way to get a movie from Angular, something like cache.getMovie(id:string): Observable<Movie>. This will hit the REST endpoint only for the first call, and store it locally for later requests.
Angular1 has angular-cache or the $cacheFactory, with LRU support and other great functionalities.
I started implementing a basic cache using a local HashMap, but that seems like a very common need.
Is there a good in-memory cache implementation for Angular2 yet?
I would use lscache and extend it by providing a few underlying storages: localStorage, sessionStorage, and a self-implemented memoryStorage. TypeScript definitions are already available.
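If a purely in-memory cache is enough, here is a minimal sketch of the getMovie pattern described in the question (written against current Angular/RxJS syntax; in the Angular 2 era, Http and publishReplay(1).refCount() would be the rough equivalents, and the Movie shape and /movies endpoint are placeholders):

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';
import { shareReplay } from 'rxjs/operators';

interface Movie { id: string; title: string; } // placeholder shape

@Injectable({ providedIn: 'root' })
export class MovieCache {
  private cache = new Map<string, Observable<Movie>>();

  constructor(private http: HttpClient) {}

  getMovie(id: string): Observable<Movie> {
    if (!this.cache.has(id)) {
      // First call hits the REST endpoint; later callers replay the stored result.
      this.cache.set(id, this.http.get<Movie>(`/movies/${id}`).pipe(shareReplay(1)));
    }
    return this.cache.get(id)!;
  }
}

An LRU or expiry policy would sit on top of the Map; that is roughly what lscache adds for the storage-backed variants.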

Why is ServiceStack caching in Service, not FilterAttribute?

In MVC and most other service frameworks I have tried, caching is done via an attribute/filter, either on the controller/action or the request, and can be controlled through a caching profile in a config file. That seems to offer more flexibility and also leaves the core service code cleaner.
But ServiceStack has it inside the service. Is there any reason why it's done this way?
Can I add a CacheFilterAttribute, but delegate to the service instead?
ToOptimizedResultUsingCache(base.Cache, cacheKey, () => {
    // Delegate to Request/Service being decorated?
});
I searched around but couldn't find an answer. Granted, it probably won't make much difference because ServiceStack's caching via the delegate method is quite clean, and you seldom change caching strategy on the fly in the real world. So this is mostly out of curiosity. Thanks.
Because the caching pattern involves first checking whether the result is cached, and if not, executing the service, populating the cache, and then returning the result.
A Request Filter doesn't allow you to execute the service, and a Response Filter means that the Service will always execute (mitigating the usefulness of the cache), so the alternative would require a Request + Response filter combination where the logic is split into two disjointed parts. Having it inside the Service lets you see and reason about how it works and exactly what is going on. It also allows full access to calculate the unique hash key used and exactly what, when (or even whether) to cache, which is harder to control with a generic black-box caching solution.
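To make the shape of that pattern concrete, here is a language-agnostic sketch (TypeScript only for brevity, not ServiceStack code); the point is that steps 1 and 3 bracket the service execution, which is why a single request or response filter cannot express it on its own:

async function getOrCompute<T>(
  cache: Map<string, T>,
  key: string,
  produce: () => Promise<T>    // stands in for executing the decorated service
): Promise<T> {
  const hit = cache.get(key);          // 1. check the cache first
  if (hit !== undefined) return hit;   //    on a hit the service never runs
  const result = await produce();      // 2. otherwise execute the service
  cache.set(key, result);              // 3. populate the cache
  return result;                       // 4. return the (now cached) result
}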
That said, we are open to baking in built-in generic caching solutions (either via an attribute or a ServiceRunner / base class). Add a feature request if you'd like to see this, specifying the preferred functionality/use-case (e.g. cache based on time / validity / a user-defined aggregate root / etc.).

How to choose TTURLRequestCachePolicy?

I'm building an app with Three20 and I'm using the photo gallery component.
I can't find any documentation about the different cache policies available.
Could you explain each of them to me?
TTURLRequestCachePolicyDefault
TTURLRequestCachePolicyDisk
TTURLRequestCachePolicyEtag
TTURLRequestCachePolicyLocal
TTURLRequestCachePolicyMemory
TTURLRequestCachePolicyNetwork
TTURLRequestCachePolicyNoCache
TTURLRequestCachePolicyNone
Thanks!
I'm not sure of the exact policy of each type, and they are not well documented. This is the information I have found out by using them and reading the code:
TTURLRequestCachePolicyNone - requests will not use the Three20 cache system, meaning each request will perform a network request.
TTURLRequestCachePolicyMemory - the request will try to look for an existing cached object in device memory. The memory cache is cleared each time the application is terminated, so I'm not sure how useful it is. From what I have seen, it only works for UIImage objects.
TTURLRequestCachePolicyDisk - Three20 saves cached objects in the application's Documents folder as files. The request will look only in that disk cache.
TTURLRequestCachePolicyNetwork - not sure; I think it checks the expiry date in the content's headers.
TTURLRequestCachePolicyNoCache - will not cache new responses and will not look for cached objects in the existing cache.
TTURLRequestCachePolicyEtag - requests will be looked up based on their ETag header. I think it's a little buggy in Three20, so it's better not to use it.
TTURLRequestCachePolicyLocal - requests will be looked up in both the disk and memory caches.
TTURLRequestCachePolicyDefault - requests will be looked up in all cache types (besides the ETag cache).
In my experience, I use TTURLRequestCachePolicyDefault with the expiration time I want, and TTURLRequestCachePolicyNoCache for requests where I want to disable caching and make sure each request makes a network call.

Refreshing in RestSharp for Windows Phone

I implemented RestSharp successfully in my WP7 application, but one issue remains:
When I load resources from the server (for example, a GET request on http://localhost:8080/cars), the first time the collection of (in this case) cars is successfully returned.
When I issue the same request a second time, I always get the same result as the first time, even when the resources have changed in the meantime. Looking at my server, I can see that the second time no request is issued at all.
I presume there's a caching mechanism implemented in RestSharp, but I see no way to invalidate the cached results.
Are there any ways to manually invalidate the RestSharp for Windows Phone cache results? (Or ways to force the library to get the results from the server)
You can control caching of resources by setting headers on the response your server sends back. If you do not want the resource to be cached, then set the Cache-Control header to no-cache.
It is the server's job to specify how long a resource is good for; the client should do its best to respect that information.
If you really, really want to delete entries in the cache, you need to go via the WinINet API.
As a quick hack to avoid caching, you can append a unique value to the end of the query string. The current DateTime (including seconds and milliseconds if necessary) or a GUID is suitable.
e.g.
var uri = "http://example.com/myrequest?rand=" + DateTime.Now.Ticks; // or Guid.NewGuid()

Can I clear a specific URL in the browser's cache (using POST, or otherwise)?

The Problem
There's an item (foo.js) that rarely changes. I'd like this item to be stored in the browser's cache (using Expires header). However, when it does change, I'd like the browser to update to the newest version.
The Attempt
Foo.js is returned with a far future Expires header. It's cached on the browser and requires no round trip query to the server. Just the way I like it. Now, when it changes....
Let's assume I know that the user's version of foo.js is outdated. How can I force a fresh copy of it to be obtained? I use xhr to perform a POST to foo.js. This should, in theory, force the browser to get a newer version of foo.js.
Unfortunately, this only seems to work in Firefox. Other browsers will use their cached copy, even if other POST parameters are set.
WTF
First off, is there a way to do what I'm trying to do?
Second, why is there no sensible key/value type of cache in browsers? Why can I not simply include in the headers: "Cache: some_key, some_expiration_time" and also specify "Clear-Cache: key1, key2, key3" (the keys would have to be domain-specific, of course)? Instead, we're stuck with either expensive round trips that ask "is the content new?", or the ridiculous "guess how long it'll be before you modify something" Expires header.
Thanks
Any comments on this matter are greatly appreciated.
Edits
I realize that adding a version number to the file would solve this. However, in my case it is not possible -- the call to "foo.js" is hardcoded into a bookmarklet.
You can just add a querystring parameter to the end of the file's URL. The server can ignore it, but the browser can't; it must treat it as a new request:
http://www.site.com/foo.js?v=1.12345
Many people use this approach: SO uses a hash of some sort, and I use the build number (so users get a new version with each build). If either of these is an option, you get the benefit of long-duration cache headers but still force a fetch of a new copy when needed.
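If the URL really is hardcoded in a bookmarklet (as in the edit above), one workaround (my suggestion, not part of this answer) is to have the bookmarklet load a tiny loader that is served with Cache-Control: no-cache, and let the loader append the version; the domain and version value below are placeholders:

// loader.js, served with Cache-Control: no-cache so the browser always revalidates it
(function () {
  var s = document.createElement('script');
  s.src = 'https://www.site.com/foo.js?v=1.12345'; // version baked in at deploy time
  document.head.appendChild(s);
})();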
Why set your cache expiration so far in the future? If you set it to one day, for instance, the only overhead that you will incur (once a day) is the browser revalidating that it is the same file. If you still have not changed it, then you will not re-download the file; the server will respond with a not-modified response.
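For illustration, a revalidation round trip under a one-day policy might look like this (the header values are made up):

GET /foo.js HTTP/1.1
Host: www.site.com
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
ETag: "abc123"
Cache-Control: max-age=86400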
All caches have a set of rules that they use to determine when to serve a representation from the cache, if it's available. Some of these rules are set in the protocols (HTTP 1.0 and 1.1), and some are set by the administrator of the cache (either the user of the browser cache, or the proxy administrator).
Generally speaking, these are the most common rules that are followed (don't worry if you don't understand the details, it will be explained below):
* If the response's headers tell the cache not to keep it, it won't.
* If the request is authenticated or secure (i.e., HTTPS), it won't be cached.
* A cached representation is considered fresh (that is, able to be sent to a client without checking with the origin server) if:
  * it has an expiry time or other age-controlling header set, and is still within the fresh period, or
  * the cache has seen the representation recently, and it was modified relatively long ago.
  Fresh representations are served directly from the cache, without checking with the origin server.
* If a representation is stale, the origin server will be asked to validate it, or tell the cache whether the copy that it has is still good.
* Under certain circumstances (for example, when it's disconnected from a network) a cache can serve stale responses without checking with the origin server.
* If no validator (an ETag or Last-Modified header) is present on a response, and it doesn't have any explicit freshness information, it will usually (but not always) be considered uncacheable.
Together, freshness and validation are the most important ways that a cache works with content. A fresh representation will be available instantly from the cache, while a validated representation will avoid sending the entire representation over again if it hasn't changed.
http://www.mnot.net/cache_docs/#BROWSER
There is an excellent suggestion made in this thread: How can I make the browser see CSS and Javascript changes?
See the accepted answer by the user "grom".
The idea is to use the "modified" timestamp from the server to note when the file has been modified, and add a version parameter to the end of the URL, so that your CSS and JS files have URLs like this: my.js?version=12345678
This makes the browser think it is a new file, and so it does not refer to the cached version.
I am using a similar method in my app. It works pretty well. Of course, this would assume you are using something like PHP to process your HTML.
Here is another link with a more simple implementation for WordPress: http://markjaquith.wordpress.com/2009/05/04/force-css-changes-to-go-live-immediately/
With these constraints, I guess your only option is to use window.location.reload(true) and force the browser to refresh all the cached items. It's not pretty.
You can invalidate the cache for a specific URL using the Cache-Control HTTP header.
For the desired URL you can run (with XHR/AJAX, for instance) a request with the following headers:
headers: {
  'Cache-Control': 'no-cache, no-store, must-revalidate, max-age=0',
  'Pragma': 'no-cache',
  'Expires': '0',
}
The cached entry will be invalidated, and subsequent GET requests will return a brand-new result.
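For instance, a minimal sketch of firing such a request with fetch (the URL is a stand-in for whichever resource you want refreshed):

fetch('https://www.site.com/foo.js', {
  headers: {
    'Cache-Control': 'no-cache, no-store, must-revalidate, max-age=0',
    'Pragma': 'no-cache',
    'Expires': '0',
  },
});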
