Usage of RemoteCache with the DeltaAware and Delta interfaces in Infinispan (caching)

I need some guidance for the following scenario in Infinispan:
1) I created two nodes and started them successfully in Infinispan using client-server mode.
2) In the Hot Rod client I created a RemoteCacheManager and then obtained a RemoteCache.
3) In the remote cache I put a value like this: cache.put(key, new HashMap()); and it was added successfully.
4) Now when I try to clear this value using cache.remove(key), I see that it does not get removed and the HashMap is still there every time I try to remove it.
How can I clear the value so that it is removed from all nodes of the cluster?
How can I also propagate changes to the value, such as adding entries to or removing entries from the HashMap above?
Does this have anything to do with implementing the DeltaAware and Delta interfaces?
Please point me to an explanation of this concept or some pointers where I can learn more.
Thank you

Removal of the HashMap should work as long as you use the same key and have equals() and hashCode() correctly implemented on the key. I assume you're using distributed or replicated mode.
EDIT: I've realized that equals() and hashCode() are not that important for RemoteCache, since the key is serialized anyway and all comparisons are executed on the underlying byte[].
RemoteCache does not directly support DeltaAware. Generally, these interfaces are quite tricky to use even in library mode.
If you want to use the cache with maps, I suggest using a composite key like cache-key#map-key rather than storing a complex HashMap; see the sketch below.
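For illustration, here is a minimal Hot Rod client sketch of the composite-key approach. The cache name "employees", the server address and the key names are assumptions made up for this example, not anything from your setup:

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class CompositeKeyExample {
    public static void main(String[] args) {
        // Connect to one of the Hot Rod servers (assumed host/port).
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("127.0.0.1").port(11222);
        RemoteCacheManager manager = new RemoteCacheManager(builder.build());

        // "employees" is an assumed cache name configured on the server.
        RemoteCache<String, String> cache = manager.getCache("employees");

        // Instead of cache.put("employee-42", someHashMap), store each map entry
        // under a composite key of the form cache-key#map-key.
        cache.put("employee-42#name", "John Doe");
        cache.put("employee-42#department", "Engineering");

        // Adding, updating or removing a single "map entry" is now an ordinary
        // cache write that is propagated across the cluster like any other.
        cache.put("employee-42#department", "Sales");
        cache.remove("employee-42#name");

        manager.stop();
    }
}

With this layout there is no need for DeltaAware: each logical map entry is its own cache entry, so cache.remove(key) on any node removes it cluster-wide.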

Related

What is Redis ValueOperations?

What is Redis ValueOperations in Spring Boot?
Does it mean we can directly store key-value pairs in the Redis database, without creating an entity and so on, just by using RedisTemplate<String, Object>?
Also, if we use ValueOperations, how will it impact performance?
When using Redis, you should think about which data format/datatype suits your needs best, much as you would when modelling data in any general programming language. ValueOperations, ListOperations, SetOperations, HashOperations and StreamOperations are the support provided for interacting with the corresponding datatypes, and they are all obtained from the RedisTemplate.
When you are using ValueOperations, you are more or less treating your whole Redis instance as one giant hash map. For example, you can store entries in Redis like current_user = "John Doe". However, you can also do something silly such as keeping a string representation of a huge hash map against a single key, e.g. top_users = <huge_string_representing_a_hash_map>. In that second case, what if you want to get the value of one key inside that hash map? The task becomes more or less impossible without transferring the whole hash map into RAM. If you had used Redis Hashes and HashOperations instead, it would have been a trivial task.
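To make the contrast concrete, here is a small Spring Data Redis sketch. It assumes a RedisTemplate<String, Object> is already configured; the key and field names are made up for the example:

import java.util.Map;
import org.springframework.data.redis.core.RedisTemplate;

public class RedisOpsExample {

    private final RedisTemplate<String, Object> redisTemplate;

    public RedisOpsExample(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public void demo() {
        // ValueOperations: the instance behaves like one big map of simple values.
        redisTemplate.opsForValue().set("current_user", "John Doe");
        Object currentUser = redisTemplate.opsForValue().get("current_user");

        // HashOperations: one Redis Hash per logical map, so a single field can be
        // read or written without transferring the whole map over the network.
        redisTemplate.opsForHash().put("top_users", "user:1", "John Doe");
        redisTemplate.opsForHash().put("top_users", "user:2", "Jane Doe");
        Object oneUser = redisTemplate.opsForHash().get("top_users", "user:1");
        Map<Object, Object> allUsers = redisTemplate.opsForHash().entries("top_users");
    }
}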
Going back to your question: if you want to store a simple object using ValueOperations, that won't degrade performance. In contrast, if you are moving huge maps around, you'll use up a lot of your network bandwidth and RAM capacity.
In summary, choose your Redis data types carefully to suit your needs.
https://redis.io/topics/data-types

Best way to remove cache entry based on predicate in infinispan?

I want to remove a few cache entries if the key matches some pattern.
For example, I have the following key-value pairs in the cache:
("key-1", "value-1"), ("key-2", "value-2"), ("key-3", "value-3"), ("key-4", "value-4")
Since the cache implements the Map interface, I can do this:
cache.entrySet().removeIf(entry -> entry.getKey().startsWith("key-"));
Is there a better way to do this in Infinispan (maybe using the functional or cache stream API)?
The removeIf method on entrySet should work just fine. It will be pretty slow for a distributed cache, though, as it will pull down every entry in the cache, evaluate the predicate locally, and then perform a remove for each entry that matches. Even in a replicated cache it still has to do all of the remove calls (at least the iterator will be local, though). This method may be changed in the future, as we are already updating the Map methods [a].
Another option is to use the functional API instead, as you said [1]. Unfortunately, the way this is currently implemented it will still pull all the entries locally first. This may change at a later point if the Functional Map APIs become more popular.
Yet another choice is the cache stream API, which may be a little more cumbersome to use but will give you the best performance of all the options. Glad you mentioned it :) What I would recommend is to apply any intermediate operations first (luckily in your case you can use filter, since your keys won't change concurrently). Then use the forEach terminal operation, which has an override that passes in the Cache on that node [2]. Inside the forEach consumer you can call remove just as you wanted.
cache.entrySet().parallelStream() // stream() if you want a single thread per node
    .filter(e -> e.getKey().startsWith("key-"))
    .forEach((c, e) -> c.remove(e.getKey()));
You could also use indexing to avoid the iteration of the container as well, but I won't go into that here. Indexing is a whole different beast.
[a] https://issues.jboss.org/browse/ISPN-5728
[1] https://docs.jboss.org/infinispan/9.0/apidocs/org/infinispan/commons/api/functional/FunctionalMap.ReadWriteMap.html#evalAll-java.util.function.Function-
[2] https://docs.jboss.org/infinispan/9.0/apidocs/org/infinispan/CacheStream.html#forEach-org.infinispan.util.function.SerializableBiConsumer-

Is it safe to use Spring Redis keys?

I want to search for keys matching a string pattern. I don't see SCAN being as straightforward as KEYS is.
redistemplate.opsForSet().getOperations().keys(pattern);
This is very straightforward, so if I use my value as my key I can search and also sort to an extent. My only problem is that there is a warning stating not to use the KEYS command. I am not sure whether Spring has handled this; please share your thoughts.
You should consider KEYS (http://redis.io/commands/keys) a debug command. Running it in redis-cli on your development instance is perfectly fine, but don't use it in code that will eventually end up on your production instance.
Depending on the size of your redis database and the pattern used with KEYS, the command can potentially take a long time to execute. During that time the redis server will not be able to service any other commands.
SCAN may not be as straightforward, but it is the right way to enumerate keys without slowing the server down, and you'll find plenty of samples for Spring, like this one: https://stackoverflow.com/a/30260108/3677188. A sketch along the same lines follows below.
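For illustration, here is a minimal sketch of SCAN via Spring Data Redis (assuming a recent Spring Data Redis version and a RedisTemplate<String, Object> that is already wired up; the class and method names are made up for the example):

import java.util.HashSet;
import java.util.Set;
import org.springframework.data.redis.core.Cursor;
import org.springframework.data.redis.core.RedisCallback;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.ScanOptions;

public class KeyScanner {

    private final RedisTemplate<String, Object> redisTemplate;

    public KeyScanner(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Collects keys matching the pattern via SCAN instead of the blocking KEYS command.
    public Set<String> scanKeys(String pattern) {
        return redisTemplate.execute((RedisCallback<Set<String>>) connection -> {
            Set<String> keys = new HashSet<>();
            ScanOptions options = ScanOptions.scanOptions().match(pattern).count(1000).build();
            try (Cursor<byte[]> cursor = connection.scan(options)) {
                while (cursor.hasNext()) {
                    keys.add(new String(cursor.next()));
                }
            }
            return keys;
        });
    }
}

Unlike KEYS, SCAN walks the keyspace in small batches, so the server keeps serving other commands while the iteration is in progress.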

How to swap between two cache keys with CAS using memcached / spymemcached

I have a cache object (a large JSON object) associated with a key. I would like to switch between this cache object/key pair and another without any downtime in availability for either of the two.
I have been reading about memcached / spymemcached's CAS (compare-and-set), and I feel it would allow me to swap between the two key/value pairs without any downtime.
If so, how can I implement the compare-and-set? Is there a code example using the spymemcached API that accomplishes this?
I created the value I want stored under a temporary key. When it has finished populating, I delete the old cache entry and swap over.
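One way to make the swap atomic (a sketch of one possible approach, not the exact code above) is to keep a small "pointer" key whose value names the currently active data key, populate the new JSON under the inactive key, and then flip the pointer with gets/cas. All key names here are assumptions for illustration:

import java.net.InetSocketAddress;
import net.spy.memcached.CASResponse;
import net.spy.memcached.CASValue;
import net.spy.memcached.MemcachedClient;

public class PointerSwapExample {
    public static void main(String[] args) throws Exception {
        MemcachedClient client = new MemcachedClient(new InetSocketAddress("localhost", 11211));

        // Initial state: the pointer names the active data key.
        client.set("config:active", 0, "config:A").get();
        client.set("config:A", 0, "{\"version\":1}").get();

        // 1) Populate the new JSON under the inactive key; readers still follow
        //    the pointer to config:A while this is going on.
        client.set("config:B", 0, "{\"version\":2}").get();

        // 2) Atomically flip the pointer with CAS; retry if another writer won the race.
        CASResponse response;
        do {
            CASValue<Object> current = client.gets("config:active");
            String next = "config:A".equals(current.getValue()) ? "config:B" : "config:A";
            response = client.cas("config:active", current.getCas(), next);
        } while (response != CASResponse.OK);

        // 3) Readers look up the pointer first, then the data key it names.
        String activeKey = (String) client.get("config:active");
        Object activeJson = client.get(activeKey);

        client.shutdown();
    }
}

Because the pointer value is tiny, the flip is effectively instantaneous, and the old JSON can be deleted (or simply left to expire) once nothing references it any more.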

Cache Management with Numerous Similar Database Queries

I'm trying to introduce caching into an existing server application because the database is starting to become overloaded.
Like many server applications we have the concept of a data layer. This data layer has many different methods that return domain model objects. For example, we have an employee data access object with methods like:
findEmployeesForAccount(long accountId)
findEmployeesWorkingInDepartment(long accountId, long departmentId)
findEmployeesBySearch(long accountId, String search)
Each method queries the database and returns a list of Employee domain objects.
Obviously, we want to try and cache as much as possible to limit the number of queries hitting the database, but how would we go about doing that?
I see a couple of possible solutions:
1) We create a cache for each method call. E.g. for findEmployeesForAccount we would add an entry with a key account-employees-accountId. For findEmployeesWorkingInDepartment we could add an entry with a key department-employees-accountId-departmentId, and so on. The problem I see with this is that when we add a new employee to the system, we need to ensure the employee is added to every cached list where appropriate, which seems hard to maintain and bug-prone.
2) We create a more generic query for findEmployeesForAccount (with more joins and/or queries, because more information will be required). For the other methods, we use findEmployeesForAccount and filter out entries from the list that don't fit the specified criteria.
I'm new to caching so I'm wondering what strategies people use to handle situations like this? Any advice and/or resources on this type of stuff would be greatly appreciated.
I've been struggling with the same question myself for a few weeks now... so consider this a half-answer at best. One bit of advice that has been working out well for me is to use the Decorator Pattern to implement the cache layer. For example, here is an article detailing this in C#:
http://stevesmithblog.com/blog/building-a-cachedrepository-via-strategy-pattern/
This allows you to literally "wrap" your existing data access methods without touching them. It also makes it easy to swap the cached version of your DAL for the direct-access version at runtime (which can be useful for unit testing).
I'm still struggling to manage my cache keys, which seem to spiral out of control when there are numerous parameters involved. Inevitably, something ends up not being properly cleared from the cache and I have to resort to heavy-handed ClearAll() approaches that just wipe out everything. If you find a solution for cache key management, I would be interested, but I hope the decorator pattern layer approach is helpful.
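To show what that decorator layer might look like in Java for the DAO in the question (a sketch only; the EmployeeDao interface, the key format and the ConcurrentHashMap stand-in for a real cache are all assumptions):

import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical domain types matching the methods in the question.
class Employee { /* domain fields omitted */ }

interface EmployeeDao {
    List<Employee> findEmployeesForAccount(long accountId);
    List<Employee> findEmployeesWorkingInDepartment(long accountId, long departmentId);
    List<Employee> findEmployeesBySearch(long accountId, String search);
}

// Decorator: implements the same interface, wraps the real DAO and caches per method call.
class CachingEmployeeDao implements EmployeeDao {

    private final EmployeeDao delegate;
    // Stand-in for whatever cache you actually use (Infinispan, Ehcache, ...).
    private final ConcurrentMap<String, List<Employee>> cache = new ConcurrentHashMap<>();

    CachingEmployeeDao(EmployeeDao delegate) {
        this.delegate = delegate;
    }

    @Override
    public List<Employee> findEmployeesForAccount(long accountId) {
        return cache.computeIfAbsent("account-employees-" + accountId,
                key -> delegate.findEmployeesForAccount(accountId));
    }

    @Override
    public List<Employee> findEmployeesWorkingInDepartment(long accountId, long departmentId) {
        return cache.computeIfAbsent("department-employees-" + accountId + "-" + departmentId,
                key -> delegate.findEmployeesWorkingInDepartment(accountId, departmentId));
    }

    @Override
    public List<Employee> findEmployeesBySearch(long accountId, String search) {
        return cache.computeIfAbsent("search-employees-" + accountId + "-" + search,
                key -> delegate.findEmployeesBySearch(accountId, search));
    }

    // The heavy-handed invalidation mentioned above: drop everything when an employee changes.
    public void clearAll() {
        cache.clear();
    }
}

Because CachingEmployeeDao implements the same interface as the real DAO, callers can be handed either the cached or the direct version without any code changes, which is what makes the runtime swap for unit testing easy.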
