When you configure a cache in Apigee Edge, you give it some key fragments (e.g. request.uri, request.header.Accept, request.header.Accept-Language). To clear that key, you pass the same key fragments.
If I have 5,000 elements cached, how can I clear the entire cache without generating 5,000 calls to my API with all the possible cache keys?
You can use the "clear all cache entries" API call, documented here. If you don't pass in the prefix query parameter, it should remove all entries.
The InvalidateCache policy is used to explicitly invalidate the cache entry for a given CacheKey (where the CacheKey is the combination of Prefix and KeyFragment), not for clearing all entries associated with the given cache resource. Please go through the document here to understand more about 'Invalidate Cache'.
The cache can also be cleared from the UI.
Log in to the UI, go to the APIs tab, and open the "Environment Configuration" tab underneath it.
There you will find the option to clear the entire cache.
The following API call will also allow you to delete all of your cache entries:
curl -v -u admin 'https://api.enterprise.apigee.com/v1/organizations/{org-name}/environments/{env-name}/caches/{cache-name}/entries?action=clear' -X POST
Related
I have the following endpoints:
/orders
/orders?status=open
/orders?status=open&my_orders=true
The third example uses headers to determine the user and return their specific items.
Obviously this is poor API design, but we want to cache the first two and not the third. The caching policy can be modified to either whitelist or exclude query-string params, but based on my understanding this won't be helpful: if we include the user-specific header, then the first two URIs will be cached per user.
Is there an option I am missing that allows me to avoid caching the third endpoint while still caching the first two? Another option would be to cache the third but include the user-specific headers in the cache key.
If you exclude the my_orders query string from the cache policy, CloudFront will not include that value in the cache key. That means that, all else being equal, these two URIs will share the same cache key:
/orders?status=open
/orders?status=open&my_orders=true
That doesn't sound like what you want: you do want requests with my_orders=true to get a separate cache key, but you also need to account for a specific request header whose value changes the cache key. If that's the case, you need to include that request header as part of your cache key (which will also ensure CloudFront passes it through to your origin).
I just added some functionality to my site: when a user hovers their mouse over a link (to a 3rd-party page), a preview of the link is built from the meta tags on the target page and displayed. I'm worried about the implications of hot-linking in my current implementation.
I'm now thinking of implementing some kind of server-side caching such that the first request for the preview fetches the info and image from the target page, but each subsequent request (up to some age limit) is served from a cache on my host. I'm relatively confident that I could implement something of my own, but is there an off-the-shelf solution for something like this? I'm self-taught so I'm guessing that my DIY solution would be less than optimal. Thanks.
Edit: I implemented a DIY solution (see below), but I'm still open to suggestions as to how this could be accomplished efficiently.
I couldn't find any off-the-shelf solutions so I wrote one in PHP.
It accepts a URL as an HTTP GET parameter and does some error checking. If the error checking passes, it opens a JSON-encoded database from disk and parses the data into an array of Record objects that contain the info I want; the supplied URL is used as the array key. If the key exists in the array, the cached info is returned. Otherwise, the web page is fetched, its meta tags are parsed, the image is saved locally, and the cached data is returned. The cached info is then inserted into the database. After the cached info is returned to the requesting page, each record is checked against its expiration date and expired records are removed. Each request for a cached record extends its expiration date. Lastly, the database is JSON-encoded and written back to disk.
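For reference, here is a minimal in-memory sketch of that flow, written in Java rather than PHP purely for illustration; fetchPreview() stands in for the real fetch-and-parse-meta-tags step, and the 24-hour age limit is just an assumption:

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PreviewCache {

    // Assumed age limit; the real value is whatever expiry the site wants.
    private static final Duration TTL = Duration.ofHours(24);

    // What gets stored per URL (title/description taken from the page's meta tags).
    record CachedPreview(String title, String description, Instant expiresAt) {}

    private final Map<String, CachedPreview> cache = new ConcurrentHashMap<>();

    public CachedPreview get(String url) {
        CachedPreview hit = cache.get(url);
        if (hit != null && hit.expiresAt().isAfter(Instant.now())) {
            // Each request for a cached record extends its expiration date.
            CachedPreview refreshed =
                    new CachedPreview(hit.title(), hit.description(), Instant.now().plus(TTL));
            cache.put(url, refreshed);
            return refreshed;
        }
        // Cache miss: fetch the page, parse its meta tags, then cache the result.
        CachedPreview fresh = fetchPreview(url);
        cache.put(url, fresh);
        pruneExpired();
        return fresh;
    }

    private void pruneExpired() {
        cache.entrySet().removeIf(e -> e.getValue().expiresAt().isBefore(Instant.now()));
    }

    // Placeholder for the real fetch-and-parse step.
    private CachedPreview fetchPreview(String url) {
        return new CachedPreview("title for " + url, "description", Instant.now().plus(TTL));
    }

    public static void main(String[] args) {
        PreviewCache previews = new PreviewCache();
        System.out.println(previews.get("https://example.com/article").title()); // miss, then cached
        System.out.println(previews.get("https://example.com/article").title()); // served from cache
    }
}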
Let's say I have a variable that can be either 1, 2 or 3, stored in a user cookie, e.g.:
foo=2
The first time someone accesses pageX with foo=2, the page shall be cached.
All subsequent visitors with foo=2 in their cookie shall see that same version (a hit).
The first time someone accesses pageX with foo=1, the page shall be cached (as a second version).
All subsequent visitors with foo=1 in their cookie shall see this specific version (a hit).
The same principle applies with foo=3.
In other words, every page of my website will have three versions under the same URL, one for each value of foo in the visitor's cookie.
Is this feasible?
thanks,
Rod
I think the answer you are looking for can be found in the Varnish docs:
https://www.varnish-cache.org/trac/wiki/VCLExampleCachingLoggedInUsers
There is a good example of how to use a cookie value to create a unique hash.
This can also be used to serve different versions of a page for the same URL.
Be careful with the browser-cache settings of the page: if the same URL can serve different content and the browser cache lifetime is set too high, you may see strange behaviour.
I have a Guava cache which I would like to expire after X minutes have passed from the last access on a key. However, I also periodically do an action on all the current key-vals (much more frequently than the X minutes), and I wouldn't like this to count as an access to the key-value pair, because then the keys will never expire.
Is there some way to read the value of the keys without this influencing the internal state of the cache? I.e. something like cache._secretvalues.get(key), where I could conceivably subclass Cache to StealthCache and do getStealth(key)? I know relying on internal stuff is non-ideal; I'm just wondering if it's possible at all. I think when I do cache.asMap().get() it still counts as an access internally.
From the official Guava tutorials:
Access time is reset by all cache read and write operations (including Cache.asMap().get(Object) and Cache.asMap().put(K, V)), but not by containsKey(Object), nor by operations on the collection-views of Cache.asMap(). So, for example, iterating through cache.asMap().entrySet() does not reset access time for the entries you retrieve.
So, what I would have to do is iterate through cache.asMap().entrySet() instead for my stealth operations.
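For example, a quick sketch (the 10-minute expiry and the String/Integer key and value types are just placeholders for whatever the real cache uses):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.Map;
import java.util.concurrent.TimeUnit;

public class StealthReadExample {
    public static void main(String[] args) {
        Cache<String, Integer> cache = CacheBuilder.newBuilder()
                .expireAfterAccess(10, TimeUnit.MINUTES) // expire X minutes after last access
                .build();

        cache.put("a", 1);
        cache.put("b", 2);

        // A normal read resets the access time for the key:
        cache.getIfPresent("a");

        // Iterating the collection views of asMap() does NOT reset access time,
        // so the periodic sweep can read every entry "stealthily":
        for (Map.Entry<String, Integer> entry : cache.asMap().entrySet()) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}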
This question is with reference to Simple Spring memcached.
I have a scenario where a list of deals is cached per user, using the userId as the key. Now, if a deal is updated, I need to flush the cache for all users, since this affects deal data for every user.
How can I achieve this with SSM annotations? The invalidate*cache and update*cache options seem to invalidate/update key-specific cache entries, whereas I need to clear the entire cache.
Currently it's impossible in plain SSM to flush the entire cache using annotations. If you require such an option, please create a feature request at: https://code.google.com/p/simple-spring-memcached/issues/list
There is another way to flush the entire cache, by using SSM with Spring Cache as described here: https://code.google.com/p/simple-spring-memcached/wiki/Getting_Started#Spring_3.1_Cache_Integration.
Just change allowClear to 'true' and use @CacheEvict(value = YOUR_CACHE_NAME, allEntries = true).
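For example, a rough sketch of the annotation side (the cache name, service class, and methods below are placeholders, and it assumes the SSM-backed Spring cache manager from that wiki page is configured with allowClear set to true):

import java.util.List;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class DealService {

    // Must match the name of the SSM cache configured in the Spring cache manager.
    public static final String DEALS_CACHE = "dealsCache";

    @Cacheable(value = DEALS_CACHE, key = "#userId")
    public List<String> getDealsForUser(String userId) {
        // Placeholder for the real database/service lookup.
        return List.of("deal-1", "deal-2");
    }

    // Evicts every entry in dealsCache whenever any deal changes;
    // this only works if the underlying cache allows clearing (allowClear = true).
    @CacheEvict(value = DEALS_CACHE, allEntries = true)
    public void onDealUpdated(String dealId) {
        // Placeholder for the real update logic.
    }
}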