Remove cache keys by pattern/wildcard - laravel

I'm building a REST API with Lumen and want to cache some of the routes with Redis. E.g. for the route /users/123/items I use:
$items = Cache::remember('users:123:items', 60, function () {
// Get data from database and return
});
When a change is made to the user's items, I clear the cache with:
Cache::forget('users:123:items');
So far so good. However, I also need to clear the cache I've implemented for the routes /users/123 and /users/123/categories since those include an item list as well. This means I also have to run:
Cache::forget('users:123');
Cache::forget('users:123:categories');
In the future, there might be even more caches to clear, which is why I'm looking for a pattern/wildcard feature such as:
Cache::forget('users:123*');
Is there any way to accommodate this behavior in Lumen/Laravel?

You can use cache tags.
Cache tags allow you to tag related items in the cache and then flush all cached values that have been assigned a given tag. You may access a tagged cache by passing in an ordered array of tag names. For example, let's access a tagged cache and put a value in the cache:
Cache::tags(['people', 'artists'])->put('John', $john, $minutes);
You may flush all items that are assigned a tag or list of tags. For example, this statement would remove all caches tagged with either people, authors, or both. So, both Anne and John would be removed from the cache:
Cache::tags(['people', 'authors'])->flush();
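Applied to the routes in the question, a minimal sketch could look like this (note that cache tags require a taggable store such as Redis or Memcached; the tag name users:123 is made up for illustration):
$items = Cache::tags(['users:123'])->remember('users:123:items', 60, function () {
    // Get data from database and return
});
// When the user's items change, flush everything tagged with users:123;
// this also clears users:123 and users:123:categories if they were cached
// through the same tag:
Cache::tags(['users:123'])->flush();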

First, get the cached keys matching a pattern:
$output = Redis::connection('cache')->keys("*mn");
Output:
[
"projectName_database_ProjectName_cache_:mn"
]
The output consists of four parts:
redis prefix ==> config('database.redis.options.prefix')
cache prefix ==> config('cache.prefix')
separator ":"
your cached key "mn"
Get the key and delete it:
$parts = explode(":", $output[0]);
$key = end($parts); // end() needs a variable, not a function result
Cache::forget($key); // delete key
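Putting the two steps together, a minimal sketch of a pattern-based forget helper (assuming the Redis cache store, facades enabled in Lumen, and the prefix layout described above; the helper name is made up):
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Redis;

function forgetByPattern($pattern)
{
    // Combined prefix as described above: redis prefix + cache prefix + ":"
    $prefix = config('database.redis.options.prefix') . config('cache.prefix') . ':';
    // Note: KEYS sweeps the whole keyspace; consider SCAN or cache tags on busy servers
    foreach (Redis::connection('cache')->keys($pattern) as $fullKey) {
        // Strip the prefixes so Cache::forget() can re-apply them itself
        Cache::forget(substr($fullKey, strlen($prefix)));
    }
}

forgetByPattern('*users:123*');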

Related

StackExchange.Redis server.Keys(pattern:) alternative

I need to store some keys in the cache, each of which expires individually, but have them grouped.
So I do something like this:
Connection.GetDatabase().StringSet($"Logged:{userID}", "", TimeSpan.FromSeconds(30));
At some point I want to get all the grouped keys, so following the example in the library's GitHub documentation
https://github.com/StackExchange/StackExchange.Redis/blob/main/docs/KeysScan.md
I do this:
var servers = Connection.GetEndPoints();
return Connection.GetServer(servers[0]).Keys(pattern: "Logged:*");
But on the same page on GitHub there is this warning:
Either way, both SCAN and KEYS will need to sweep the entire keyspace, so should be avoided on production servers - or at least, targeted at replicas.
What else can I use to achieve what I want without using Keys, if we don't have replicas?

User specific content in TYPO3 and caching

How do I show uncached fe_user data in TYPO3? According to "Prevent to Cache Login Information",
the ViewHelper sets $GLOBALS['TSFE']->no_cache = 1 if the user is logged in. Is there a better way? The whole page should not be excluded from caching, only some parts of it.
Unfortunately this is not possible.
The best way is to render the uncached fe_user data via an AJAX call to an eID or typeNum, so the whole page stays completely cached.
Like this one:
http://www.typo3-tutorials.org/cms/typo3-und-ajax-wie-geht-das.html
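A minimal sketch of that eID approach, assuming TYPO3 6.x/7.x APIs; the extension key my_ext, the eID name my_user_data, and the returned fields are made up:
// ext_localconf.php: register the eID handler
$GLOBALS['TYPO3_CONF_VARS']['FE']['eID_include']['my_user_data'] =
    \TYPO3\CMS\Core\Utility\ExtensionManagementUtility::extPath('my_ext') . 'Classes/Ajax/UserData.php';
// Classes/Ajax/UserData.php: called from the fully cached page via /index.php?eID=my_user_data
$feUser = \TYPO3\CMS\Frontend\Utility\EidUtility::initFeUser();
header('Content-Type: application/json');
echo json_encode([
    'username' => isset($feUser->user['username']) ? $feUser->user['username'] : null,
]);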
Your example code disables the cache for the complete page, but you only need to disable it for the part where you display the user-specific data. As you can exempt any part of the page from caching, you need to decide at which level to do so:
one content element (especially for plugins this is standard behaviour: just declare your plugin as uncachable in your ext_localconf.php, see the sketch below this list)
one column (make sure you use a COA_INT (or another uncached object) in your TypoScript)
one viewhelper (make your ViewHelper uncachable [1] or use the v:render.uncache() VH from EXT:vhs)
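For the first option, a minimal sketch of the ext_localconf.php registration; the vendor, extension key, controller, and action names are made up, and the fourth argument lists the actions to exclude from the cache:
\TYPO3\CMS\Extbase\Utility\ExtensionUtility::configurePlugin(
    'Vendor.my_ext',
    'ItemList',
    ['Item' => 'list, show'],
    ['Item' => 'list'] // "list" renders user-specific data, so it is not cached
);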
[1]
As the ViewHelper is derived from AbstractConditionViewHelper, which uses the CompilableInterface and therefore caches its result, the compile() method from AbstractConditionViewHelper must be overridden to return the constant
\TYPO3\CMS\Fluid\Core\Compiler\TemplateCompiler::SHOULD_GENERATE_VIEWHELPER_INVOCATION
like this:
public function compile(
$argumentsVariableName,
$renderChildrenClosureVariableName,
&$initializationPhpCode,
\TYPO3\CMS\Fluid\Core\Parser\SyntaxTree\AbstractNode $syntaxTreeNode,
\TYPO3\CMS\Fluid\Core\Compiler\TemplateCompiler $templateCompiler
) {
parent::compile(
$argumentsVariableName,
$renderChildrenClosureVariableName,
$initializationPhpCode,
$syntaxTreeNode,
$templateCompiler
);
return \TYPO3\CMS\Fluid\Core\Compiler\TemplateCompiler::SHOULD_GENERATE_VIEWHELPER_INVOCATION;
}

Umbraco get all cache items

I am using Umbraco caching using Umbraco.Core.Cache;
I have no problem getting a cache item using this line of code, given the correct cache key:
ApplicationContext.Current.ApplicationCache.RuntimeCache.GetCacheItem(
Now my question is:
What if I forgot the cache key? Is there any way I can peek at all cache items? Or, for debugging purposes, I just want to see all of them.
I traced all possible IntelliSense suggestions but there seems to be no "GetAllCacheItem" available.
Can anyone please enlighten me: is it possible?
When writing the code you could only get IntelliSense on the cache keys if you used constants (like below), but the drawback is having to maintain the constant values when adding new cache items.
ApplicationContext.ApplicationCache.RuntimeCache.GetCacheItem(CacheKeys.SAMPLE_KEY)
public class CacheKeys
{
public const string SAMPLE_KEY = "some-example-key";
}
Whilst debugging you are able to view the cache keys as follows: under the hood, IRuntimeCacheProvider (ApplicationContext.ApplicationCache.RuntimeCache) uses the HttpRuntime cache, so although you cannot iterate over cache items in the RuntimeCache property directly, you can use HttpRuntime.Cache like:
var keys = new StringBuilder();
foreach (DictionaryEntry cacheItem in HttpRuntime.Cache)
{
keys.AppendLine(cacheItem.Key.ToString());
}
Items added to the runtime cache via the Umbraco provider contain the prefix "umbrtmche-" so you may wish to filter the results:
HttpRuntime.Cache.Cast<DictionaryEntry>()
.Where(x => x.Key.ToString().StartsWith("umbrtmche"))
.Select(x => x.Key.ToString().Replace("umbrtmche-", ""))
.ToList();
And the final thing to note is that Umbraco uses the cache itself, so you will not only see cache keys you have added. If you wish to filter these out, I suggest adding a prefix of your own so that you can distinguish your cache keys from Umbraco's.

Memcache tags simulation

Memcached is a great scalable cache layer, but it has one big problem (for me): it cannot manage tags. And tags are really useful for group invalidation.
I have done some research and I'm aware about some solutions:
Memcache tag fork http://code.google.com/p/memcached-tag/
Code implementation to emulate tags (ref. Best way to invalidate a number of memcache keys using standard php libraries?)
One of my favorite solutions is namespacing, and this solution is explained on the memcached wiki.
However, I don't understand why we integrate the namespace into the cache key.
From what I understood, the namespace trick works like this: to generate a key we have to get the value of the namespace (from the cache), and if the namespace->value cache entry is evicted, we can no longer compute the right key to fetch the cache... So the cache entries for this namespace are virtually invalidated (I say virtually because the entries still exist, but we can no longer compute the keys to access them).
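For reference, a minimal sketch of that namespace trick (the key names and TTLs are illustrative):
// Fetch (or create) the current namespace version
$ns = $memcache->get('users_namespace');
if ($ns === false) {
    $ns = time();
    $memcache->set('users_namespace', $ns);
}
// Every key in the namespace embeds the version
$memcache->set("users_{$ns}_123_items", $items, 0, 60);
// Invalidate the whole namespace by bumping the version;
// the old keys become unreachable and simply expire
$memcache->increment('users_namespace');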
So why can we not simply implement something like:
tag1->[key1, key2, key5]
tag2->[key1, key3, key6]
key1->["value" => value1, "tags" => [tag1, tag2]]
key2->["value" => value2, "tags" => [tag1]]
key3->["value" => value3, "tags" => [tag3]]
etc...
With this implementation I come back to the problem that if tag1->[key1, key2, key5] is evicted, we can no longer invalidate the keys tagged with tag1. But with
function load($cacheId) {
    // $memcache is assumed to be available in scope (e.g. injected)
    $cache = $memcache->get($cacheId);
    if (is_array($cache)) {
        $evicted = false;
        // Check if any of the tags have been evicted
        foreach ($cache["tags"] as $tagId) {
            if (!$memcache->get($tagId)) {
                $evicted = true;
                break;
            }
        }
        // If no tags have been evicted we can return the cache
        if (!$evicted) {
            return $cache;
        }
        // Not mandatory
        $memcache->delete($cacheId);
    }
    // Else return false
    return false;
}
(This is pseudo code.)
We are sure to return the cache only if all of its tags are still available.
The first objection is: "each time you need to get a cache entry you have to check(/get) X tags and then check an array". But with namespaces we also have to check(/get) the namespace to retrieve its value; the main difference is iterating over an array...
But I don't think keys will have many tags (I cannot imagine more than 10 tags per key for my application), so iterating over a 10-element array is quite fast.
So my question is: has someone already thought about this implementation? What are its limits? Did I forget something? etc.
Or maybe I have misunderstood the concept of namespaces...
PS: I'm not looking for another cache layer like memcached-tag or redis
I think you are forgetting something with this implementation, but it's trivial to fix.
Consider the problem of multiple keys sharing some tags:
key1 -> tag1 tag2
key2 -> tag1 tag2
tag1 -> key1 key2
tag2 -> key1 key2
Say you load key1. You double check both tag1 and tag2 exist. This is fine and the key loads.
Then tag1 is somehow evicted from the cache.
Your code then invalidates tag1. This should delete key1 and key2 but because tag1 has been evicted, this does not happen.
Then you add a new item key3. It also refers to tag1:
key3 -> tag1
When saving this key, tag1 is (re)created:
tag1 -> key3
Later, when loading key1 from the cache again, your check in the pseudo code to ensure tag1 exists succeeds, and the (stale) data from key1 is allowed to load.
Obviously a way around this is to check the values of the tag1 data to ensure the key you are loading is listed in that array, and only consider your key valid if it is.
Of course this could have performance issues depending on your use case. If a given key has 10 tags, but each of those tags is used by 10k keys, then you are having to do search through an array of 10k items to find your key and repeat that 10 times each time you load something.
At some point, this may become inefficient.
An alternative implementation (and one which I use), is more appropriate when you have a very high read to write ratio.
If reads are very much the common case, then you could implement your tag capability in a more permanent database backend (I'll assume you have a db of sorts anyway so it only needs a couple extra tables here).
When you write an item to the cache, you store the key and each of its tags in a simple table (key and tag columns, one row for each tag on a key). Writing a key is simple: "delete from cache_tags where key=:key;" then, for each tag, "insert into cache_tags values(:key, :tag);"
(NB: use extended insert syntax in a real implementation.)
When invalidating a tag, simply iterate over all keys that have that tag: (select key from cache_tags where tag=:tag;) and invalidate each of them (and optionally delete the key from the cache_tags table too to tidy up).
If a key is evicted from memcache then the cache_tags metadata will be out of date, but this is typically harmless. It will at most result in an inefficiency when invalidating a tag where you attempt to invalidate a key which had that tag but has already been evicted.
This approach gives "free" loading (no need to check tags) but expensive saving (which is already expensive anyway otherwise it wouldn't need to be cached in the first place!).
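A minimal sketch of that database-backed strategy, assuming a PDO connection, a Memcache-style client, and a cache_tags(`key`, tag) table (all names are illustrative):
function saveWithTags($memcache, PDO $db, $key, $value, array $tags, $ttl = 0) {
    $memcache->set($key, $value, 0, $ttl);
    // Rewrite the tag rows for this key
    $db->prepare('DELETE FROM cache_tags WHERE `key` = ?')->execute([$key]);
    $insert = $db->prepare('INSERT INTO cache_tags (`key`, tag) VALUES (?, ?)');
    foreach ($tags as $tag) {
        $insert->execute([$key, $tag]);
    }
}
function invalidateTag($memcache, PDO $db, $tag) {
    $select = $db->prepare('SELECT `key` FROM cache_tags WHERE tag = ?');
    $select->execute([$tag]);
    foreach ($select->fetchAll(PDO::FETCH_COLUMN) as $key) {
        $memcache->delete($key); // harmless if the key was already evicted
    }
    $db->prepare('DELETE FROM cache_tags WHERE tag = ?')->execute([$tag]);
}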
So depending on your use case and the expected load patterns and usage, I'd hope that either your original strategy (with more stringent checks on load) or a "database backed tag" strategy would fit your needs.
HTHs

Doctrine2 Caching of updated Elements

I have a problem with Doctrine. I like the caching, but if I update an entity and flush, shouldn't Doctrine2 be able to clear its cache?
Otherwise the cache is of very little use to me, since this project has a lot of interaction and I would literally always have to disable the cache for every query.
The users wouldn't see their interaction if the cache always showed them the old, cached version.
Is there a way around it?
Are you talking about saving and fetching a new Entity within the same runtime (request)? If so then you need to refresh the entity.
$entity = new Entity();
$em->persist($entity);
$em->flush();
$em->refresh($entity);
If the entity is managed and you make changes, these will be applied to Entity object but only persisted to your database when calling $em->flush().
If your cache is returning an old dataset for a fresh request (despite it being updated successfully in the DB) then it sounds like you've discovered a bug. Which you can file here >> http://www.doctrine-project.org/jira/secure/Dashboard.jspa
Doctrine2 never had those delete methods, such as deleteByPrefix, which were in Doctrine1 at some point (3 years ago) and were removed because they caused more trouble.
The page http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/caching.html#deleting is outdated (the next version of the Doctrine2 documentation will have those methods removed). The only thing you can do now is manage the cache manually: find the id and delete it manually after each update.
More advanced Doctrine caching is a work in progress: https://github.com/doctrine/doctrine2/pull/580.
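A minimal sketch of that manual approach, giving the result cache entry an explicit id so you know exactly what to delete after an update (the entity class, cache id, and lifetime are illustrative):
// When running the query, cache the result under a known id
$query = $em->createQuery('SELECT i FROM Entities\Item i WHERE i.user = :user');
$query->setParameter('user', 123);
$query->useResultCache(true, 3600, 'users_123_items');
$items = $query->getResult();
// After an update that affects this result, delete exactly that entry
$em->flush();
$em->getConfiguration()->getResultCacheImpl()->delete('users_123_items');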
This is according to the documentation on Doctrine2 on how to clear the cache. I'm not even sure this is what you want, but I guess it is something to try.
Doctrine2's cache driver has different levels of deleting cached entries.
You can delete by the direct id, using a regex, by suffix, by prefix, or simply delete all values in the cache.
So to delete all you'd do:
$deleted = $cacheDriver->deleteAll();
And to delete by prefix, you'd do:
$deleted = $cacheDriver->deleteByPrefix('users_');
I'm not sure how Doctrine2 names their cache ids though, so you'd have to dig for that.
Information on deleting cache is found here: http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/caching.html#deleting
To get the cache driver, you can do the following. It wasn't described in the docs, so I just traced through the code a little.
I'm assuming you have an entity manager instance in this example:
$config = $em->getConfiguration(); //Get an instance of the configuration
$queryCacheDriver = $config->getQueryCacheImpl(); //Gets Query Cache Driver
$metadataCacheDriver = $config->getMetadataCacheImpl(); //You probably don't need this one unless the schema changed
Alternatively, I guess you could save the cacheDriver instance in some kind of Registry class and retrieve it that way. But depends on your preference. Personally I try not to depend on Registries too much.
Another thing you can do is tell the query you're executing not to use the result cache. Again, I don't think this is what you want, but just throwing it out there. Mainly it seems you might as well turn off the result cache altogether, unless it's only a few specific queries where you don't want to use the cache.
This example is from the docs: http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/caching.html#result-cache
$query = $em->createQuery('select u from \Entities\User u');
$query->useResultCache(false); //Don't use query cache on this query
$results = $query->getResult();
