@Cacheable(cacheNames = "cacheOne")
public Map<String, Object> getSomeData(List<String> taglist, String queryString) { ... }
I am using ehcache with Spring as shown in the code above. I can clear all the keys in cacheOne by doing this:
cacheManager.getCache("cacheOne").removeAll();
But what if I need to remove only those keys from this cache where the taglist contains a particular tag? For example, I want to remove all the entries in cacheOne whose taglist contains the tag cricket.
I am afraid that what you are asking for will end up on your shoulders.
What you are requesting, in addition to the caching you already perform, is a mapping between tags (cricket in your example) and the keys that contain these tags.
In order to store this mapping, you probably need to devise your own KeyGenerator that keeps track of it in parallel with creating the cache key.
This mapping could even be made smart if only a subset of tags is concerned by this purge requirement.
By default Spring will not keep track of that information for you, so you will not have a configuration based way of doing it.
The other option - not recommended - is to brute force by iterating over all the keys. And it should be pretty clear why this is a bad idea as soon as your dataset grows.
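To make the KeyGenerator idea concrete, here is a rough sketch, under the assumption that the tag list is the first method argument; the class name TagIndexingKeyGenerator, the tagIndex map, and the evictTag helper are illustrative, not part of Spring or Ehcache:
import java.lang.reflect.Method;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.cache.CacheManager;
import org.springframework.cache.interceptor.KeyGenerator;
import org.springframework.cache.interceptor.SimpleKeyGenerator;

// Remembers which cache keys were built from which tags so that the entries
// for a given tag (e.g. "cricket") can be evicted later.
public class TagIndexingKeyGenerator implements KeyGenerator {

    private final Map<String, Set<Object>> tagIndex = new ConcurrentHashMap<>();

    @Override
    public Object generate(Object target, Method method, Object... params) {
        Object key = SimpleKeyGenerator.generateKey(params);
        if (params.length > 0 && params[0] instanceof List) {
            for (Object tag : (List<?>) params[0]) {
                tagIndex.computeIfAbsent(String.valueOf(tag),
                        t -> ConcurrentHashMap.newKeySet()).add(key);
            }
        }
        return key;
    }

    // Evicts every key that was recorded for the given tag.
    public void evictTag(CacheManager cacheManager, String tag) {
        Set<Object> keys = tagIndex.remove(tag);
        if (keys != null) {
            keys.forEach(key -> cacheManager.getCache("cacheOne").evict(key));
        }
    }
}
You would register it as a bean and point the annotation at it with @Cacheable(cacheNames = "cacheOne", keyGenerator = "tagIndexingKeyGenerator"); removing the cricket entries then becomes a call to evictTag(cacheManager, "cricket").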
I am well aware that there are multiple questions on this topic, but I just can't make sense of it. The problem seems to be that @CachePut does not add the new value to the @Cacheable list.
After debugging the problem I found out that the problem seems to be in the key.
Here is the code snippet
@CacheConfig(cacheNames = "documents")
interface DocumentRepository {

    @CachePut(key = "#a0.id")
    Document save(Document document);

    @Cacheable
    List<Document> findAll();
}
So when I invoke the save method, the key being used for caching is an incrementing integer: 1, 2, 3, ...
But when I try to get all documents, the cache uses SimpleKey[] as the key. If I try to use the same key for @Cacheable, I get a SpelEvaluationException: property 'id' cannot be found on null.
So what I am left with at the end is a functional cache (the data is saved in the cache), but somehow I am not able to retrieve it.
The underlying cache implementation is EhCache.
I really don't understand what you are expecting here.
It looks like you expect your findAll method to return the full content of the cache named documents. I don't think there is anything in the documentation that can let you conclude that this feature exists (it does not). It would also be very fragile: if we implemented that, findAll would return different results depending on the state of the cache, for instance if someone configured the cache with a maximum size of 100, or if the cache isn't warmed up on startup.
You can't expect a cache abstraction (or even a cache library) to maintain a synchronized view of "a list of objects". What findAll does is return the entry that corresponds to a key built from no arguments (SimpleKey.EMPTY by default).
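If the practical goal is just that findAll() never serves a stale list after a write, one common workaround is sketched below, under the assumption that clearing the whole documents cache on save is acceptable (it trades the per-id @CachePut for a coarse eviction):
import java.util.List;

import org.springframework.cache.annotation.CacheConfig;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;

@CacheConfig(cacheNames = "documents")
interface DocumentRepository {

    // Coarse but predictable: every write clears the "documents" cache,
    // so the next findAll() call repopulates it from the underlying store.
    @CacheEvict(allEntries = true)
    Document save(Document document);

    // Cached as a single entry under the no-argument key (SimpleKey.EMPTY).
    @Cacheable
    List<Document> findAll();
}
The list itself is still one cache entry keyed by SimpleKey.EMPTY; the abstraction never assembles it from the individually cached documents.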
How do I get a List of objects from redis cache based on the key passed?
I am exploring cachemanager.net for Redis caching. I have gone through the examples, but I could not find any example of getting a list of objects based on the key passed.
var lst = cache.Get("Key_1");
It is returning only one object.
But I would like it like this: I have stored 1000 objects in the cache with key names like Key_1, Key_2, Key_3, ..., Key_1000. I want to get a list of all 1000 objects if I pass Key_* as the key.
CacheManager does not provide any functionality to search keys or get many keys via wildcard. That's simply not how caches work.
As Karthikeyan pointed out, in Redis you could use the KEYS command, but that's not a good solution and should only be used for manual debugging. Other cache systems don't even have anything like that, so CacheManager cannot provide that feature either. Hope that makes sense ;)
With CacheManager, you can either store all your objects under one cache key and cache the whole list. That might have some limitations if you use Redis, because serialization might become an issue.
Or, you store each object separately and retrieve them in a loop. The Redis client will optimize certain things; also, in CacheManager, if you have two layers of caching, performance will get better over time.
You can use a Redis hash instead, and use the HGETALL command to retrieve all the values in that hash.
http://redis.io/commands#hash
Or, if you want to use normal key-value pairs, you have to write a Lua script to achieve it:
local keys = redis.call('keys', 'key_*')   -- collect every key matching the pattern
return redis.call('mget', unpack(keys))    -- unpack the key table into MGET's arguments
KEYS is not advisable in production as it blocks the server.
You can use the SCAN command instead of KEYS to iterate over all keys matching the pattern, and then follow the same procedure to achieve the same result.
I have a static hashmap which I am using to cache objects in it. The objects are of different types including lists and hashmaps.
I want to invalidate the objects in the cache after a certain time interval. I could add a timestamp to my objects and invalidate them manually, but I don't know if there is any way to find the timestamp of when a list was added to the hashmap.
Any comments or suggestions?
Have all the objects that you store in your HashMap implement a single Expirable interface:
public interface Expirable {
    Date getExpiryDate();
}
Once that is done, you'll easily be able to iterate over the entries in your HashMap and remove those that have expired.
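A minimal sketch of that sweep, assuming the cache is a Map<String, Expirable> (the CacheSweeper name and the map's types are illustrative):
import java.util.Date;
import java.util.Iterator;
import java.util.Map;

public final class CacheSweeper {

    // Removes every entry whose value reports an expiry date in the past.
    public static void evictExpired(Map<String, Expirable> cache) {
        Date now = new Date();
        Iterator<Map.Entry<String, Expirable>> it = cache.entrySet().iterator();
        while (it.hasNext()) {
            if (it.next().getValue().getExpiryDate().before(now)) {
                it.remove();   // remove via the iterator to avoid ConcurrentModificationException
            }
        }
    }
}
You could call evictExpired from a scheduled task, or lazily before each read of the map.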
The Guava interface com.google.common.cache.Cache can be accessed as a map by calling Cache.asMap().
Refer to CacheBuilder for documentation, specifically the expireAfterWrite() method.
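For comparison, a minimal sketch using Guava instead of a hand-rolled HashMap; the 30-minute expiry and the value type are arbitrary choices for the example:
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class GuavaExpiryExample {
    public static void main(String[] args) {
        // Entries are dropped automatically 30 minutes after they were written.
        Cache<String, List<String>> cache = CacheBuilder.newBuilder()
                .expireAfterWrite(30, TimeUnit.MINUTES)
                .build();

        cache.put("recentTags", Arrays.asList("cricket", "football"));
        List<String> tags = cache.getIfPresent("recentTags"); // null once the entry has expired
    }
}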
The idea of the desired filter is to check memcached for page content, with the URL as the key, and if found, return it to the client directly from the cache and skip the controller altogether. Storing would be done in a separate filter, which is the easy part. I'm aware I could write it in the action's preExecute(), but filters would offer a more elegant solution (they could be turned off for dev environments).
In other words: is there a smart way for a filter to push the response to the client and skip going to the action?
Implementing such a filter is quite easy. Actually, a similar solution already exists in symfony.
Look at the default caching filter (sfCacheFilter class). It's doing something similar to what you're looking for.
Alternative path
It is already possible to use memcache directly by changing the default file caching to memcache.
In your factories file (apps/yourapp/config/factories.yml or config/factories.yml) you can switch the cache driver:
all:
  view_cache:
    class: sfMemcacheCache
You could do the same with memcached, but as symfony doesn't provide an sfMemcachedCache class, you would have to implement it on your own.
This way you could reuse existing caching framework and take advantage of cache.yml files.
I would suggest you have a look at overriding sfExecutionFilter.
It's the last filter in the default filters.yml, which means it's the first executed.
This is what is responsible for calling your action's executeXXX method, loading the associated view, and a bunch of other things.
Presumably you could write your own filter that extends sfExecutionFilter and override its functionality to skip executing the controller if the output is cached.
You can find the default filters.yml at %SYMFONY_DIR%/config/config/filters.yml.
Given the following domain classes:
class Post {
    SortedSet tags
    static hasMany = [tags: Tag]
}

class Tag {
    static belongsTo = Post
    static hasMany = [posts: Post]
}
From my understanding so far, using hasMany will result in a Hibernate Set mapping.
However, in order to maintain uniqueness/order, Hibernate needs to load the entire set from the database and compare the elements' hashes.
This could lead to a significant performance problem with adding and deleting posts/tags
if their sets get large. What is the best way to work around this issue?
There is no order ensured by Hibernate/GORM in the default mapping. Therefore, it doesn't have to load elements from the database in order to do the sorting. You will have your hands on a bunch of ids, but that's the extent of it.
See 19.5.2:
http://www.hibernate.org/hib_docs/reference/en/html/performance-collections.html
In general, Hibernate/GORM is going to have better performance than you expect. Unless and until you can actually prove a real-world performance issue, trust in the framework and don't worry about it.
The ordering of the set is guaranteed by the Set implementation, i.e., the SortedSet. Unless you use a List, which keeps track of indexes in the db, the ordering exists on the server side only.
If your domain class is kept in a SortedSet, it has to implement Comparable in order to enable proper sorting of the set.
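For illustration, a minimal Comparable implementation is sketched below in plain Java (in a Grails app, Tag would be a Groovy domain class, and the name property used for ordering is an assumption):
import java.util.SortedSet;
import java.util.TreeSet;

public class Tag implements Comparable<Tag> {

    private final String name;   // assumed property used for ordering

    public Tag(String name) {
        this.name = name;
    }

    @Override
    public int compareTo(Tag other) {
        return this.name.compareTo(other.name);   // alphabetical order inside the SortedSet
    }

    public static void main(String[] args) {
        SortedSet<Tag> tags = new TreeSet<>();
        tags.add(new Tag("football"));
        tags.add(new Tag("cricket"));
        // Iteration order is now cricket, football, enforced in memory, not in the database.
    }
}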
The question of performance is not really a question per se. If you want to access a single Tag, you should get it by its id. If you want the sorted tags, the sort only makes sense if you are looking at all the Tags, not a particular one, so you end up retrieving all Tags at once. Since the sorting is performed on the server side and not in the db, there is really not much difference between a SortedSet and a regular HashSet as far as the db is concerned.
The Grails docs seem to have been updated:
http://grails.org/doc/1.0.x/
In section 5.2.4 they discuss the potential performance issues for the collection types.
Here's the relevant section:
A Note on Collection Types and Performance
The Java Set type is a collection that doesn't allow duplicates. In order to ensure uniqueness when adding an entry to a Set association, Hibernate has to load the entire association from the database. If you have a large number of entries in the association, this can be costly in terms of performance.
The same behavior is required for List types, since Hibernate needs to load the entire association in order to maintain the order. Therefore it is recommended that, if you anticipate a large number of records in the association, you make the association bidirectional so that the link can be created on the inverse side. For example, consider the following code:
def book = new Book(title:"New Grails Book")
def author = Author.get(1)
book.author = author
book.save()
In this example the association link is being created by the child (Book), and hence it is not necessary to manipulate the collection directly, resulting in fewer queries and more efficient code. Given an Author with a large number of associated Book instances, if you were to write code like the following, you would see an impact on performance:
def book = new Book(title:"New Grails Book")
def author = Author.get(1)
author.addToBooks(book)
author.save()