How to evict Hazelcast caches manually or at a future timestamp - Spring

I have Hazelcast running in a cluster in Kubernetes.
I want to make sure that once someone changes something, let's say in a PriceCatalog,
all the related caches in Hazelcast are evicted immediately (or after a configurable delay).
Imagine there is a price change on a product which will take effect in 24 hours.
Waiting for the time-to-live to run out is not an option: there can be time windows in which the old price is still applied, which would be broken.
Is there a way to evict the caches at a given timestamp?
Or to easily evict all the related caches manually?

You can do better than just evicting entries on a change by using cache-ahead.
I suggest you read this post, where I explain how to set up a cache that's always in sync with the database.
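For the manual and timestamp-based eviction asked about here, a minimal sketch against the plain Hazelcast API could look as follows (the map names priceCatalog and productPrices are assumptions; substitute your own). Immediate eviction uses IMap.evictAll(); a price change that takes effect in 24 hours can be handled by re-putting the current value with a per-entry TTL that runs out exactly at the switchover:

    import java.time.Duration;
    import java.time.Instant;
    import java.util.concurrent.TimeUnit;

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;

    public class PriceCacheEvictor {

        private final HazelcastInstance hazelcast;

        public PriceCacheEvictor(HazelcastInstance hazelcast) {
            this.hazelcast = hazelcast;
        }

        /** Evict all entries of the caches related to the price catalog immediately. */
        public void evictAllPriceCaches() {
            hazelcast.getMap("priceCatalog").evictAll();   // assumed map name
            hazelcast.getMap("productPrices").evictAll();  // assumed map name
        }

        /** Make the currently cached price vanish exactly when the new price takes effect. */
        public void expireAt(String productId, Object currentPrice, Instant effectiveAt) {
            IMap<String, Object> prices = hazelcast.getMap("productPrices");
            long ttlMs = Math.max(1, Duration.between(Instant.now(), effectiveAt).toMillis());
            // put(key, value, ttl, unit) sets a per-entry TTL that overrides the map-wide one
            prices.put(productId, currentPrice, ttlMs, TimeUnit.MILLISECONDS);
        }
    }

If you use Spring's cache abstraction on top of Hazelcast, the immediate case can equally be expressed with @CacheEvict(cacheNames = "priceCatalog", allEntries = true) on the method that applies the change.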

Related

Spring Boot Ehcache load data

A service using Spring Boot, Maven, MongoDB and Ehcache.
The service requires a fast, frequently hit cache, so eventually I chose Ehcache.
All caches are read at almost the same frequency, so there is no hot/cold data split in this case.
The original data in MongoDB is updated every day by a timer service, so what I do is load all the updated data into Ehcache every day.
Each item in this data is connected to the others; for example, you use one to find the relevant IDs of another. So if one cache has been updated but the other hasn't, you can't find these relevant IDs. I want to avoid this situation.
So my question is: is there any way to achieve this, for example using two Ehcache servers or something similar? I.e., while one is in use, the other one loads the data from MongoDB; when the update is done, you switch to the updated one. That way, when the MongoDB data is updated every day and I have to update the Ehcache data, it won't affect the current cache. That's just one thought I have. Another thought is something like a SQL transaction. Is there any other way to achieve this?
Please advise.
Good question. I see two ways.
One is to use an application lock. When you are ready to reload the cache, you block access to it while you do so. There is no way to clear all caches at the same time, and the problem is that everything will be blocked during the update.
The other way is to use another cache: you load the new cache with the new data and then swap the new cache and the expired one, as sketched below. The problem with this solution is that at a given moment you will use twice the memory, since both caches are in memory.
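A minimal sketch of the second option, using a plain AtomicReference swap instead of two full Ehcache instances (the idea is the same: readers always see one complete, internally consistent snapshot while the daily reload builds the next one off to the side):

    import java.util.Map;
    import java.util.concurrent.atomic.AtomicReference;

    public class SwappableCache<K, V> {

        // Readers always see one complete, internally consistent snapshot.
        private final AtomicReference<Map<K, V>> active = new AtomicReference<>(Map.of());

        public V get(K key) {
            return active.get().get(key);
        }

        /** Called by the daily timer: build the new snapshot aside, then swap atomically. */
        public void reload(Map<K, V> freshlyLoadedFromMongo) {
            // The old snapshot becomes garbage once in-flight readers are done with it.
            active.set(Map.copyOf(freshlyLoadedFromMongo));
        }
    }

With real Ehcache instances the same pattern means keeping two caches and a reference (or flag) pointing at whichever is currently live, which is exactly what doubles the memory during the swap.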

Redis memory usage issue

I have a cluster with two Redis Docker instances (v3.2.5) that I use for caching responses from Spring Boot microservices.
I've disabled all persistence and the number of keys is stable over time, all of them expiring between 5 minutes and 1 day.
Despite this, I can see the memory usage creeping up. It looks like once a day (around midnight) it uses a lot of memory and then releases some of it.
Does anyone have any idea what this process may be, and whether there's any way to configure Redis to avoid using that much memory?
The number of keys I have doesn't justify this amount of memory.
UPDATE
After taking a snapshot of the database and loading the data into a fresh new Redis instance (same version, same config), used_memory_human is 10 times lower than on the original one.
Is it possible that key expiration doesn't really delete keys from memory?

Can you evict Ignite cache backups to disk?

We would like to keep primary keys in memory and backup keys on disk, so on a reshuffle we will accept the performance cost of reading keys/values from disk.
From my research of the Ignite documentation, I don't see that option out of the box. Is there any way to do this via configuration?
If this feature doesn't exist, I had the following idea as a workaround. If we know our cache takes 1 terabyte, we know that with backups it will be approximately 2 terabytes. If we allocate a little over 1 terabyte in memory and set the eviction policy to evict to disk, will this effectively get us the functionality we want? That is, will it evict backup copies to disk and leave primaries in memory?
This feature doesn't exist, and your workaround won't work because it will randomly evict both primary and backup copies. However, you can probably implement your own eviction policy that immediately evicts any created backup, and configure swap space to store these backups; a sketch of such a policy follows below.
Note that I see sense in this only if you're running SQL queries and/or don't have a persistence store. If you only use key-based access, any lost entry will be reloaded from the persistence store when needed.
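A rough sketch of such a policy, under two assumptions worth verifying: that the cache uses the on-heap eviction model where EvictionPolicy.onEntryAccessed is invoked as entries are created/touched, and that Ignite performs @IgniteInstanceResource injection into a user-supplied policy:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.cache.eviction.EvictableEntry;
    import org.apache.ignite.cache.eviction.EvictionPolicy;
    import org.apache.ignite.resources.IgniteInstanceResource;

    /** Evicts backup entries from memory as soon as they are touched, keeping primaries on-heap. */
    public class BackupEvictionPolicy<K, V> implements EvictionPolicy<K, V> {

        @IgniteInstanceResource
        private Ignite ignite; // assumed to be injected by Ignite

        private final String cacheName;

        public BackupEvictionPolicy(String cacheName) {
            this.cacheName = cacheName;
        }

        @Override
        public void onEntryAccessed(boolean removed, EvictableEntry<K, V> entry) {
            if (removed)
                return;
            // If this node is not the primary for the key, push the entry out of memory;
            // with swap space configured, the evicted backup should land on disk.
            if (!ignite.affinity(cacheName).isPrimary(ignite.cluster().localNode(), entry.getKey()))
                entry.evict();
        }
    }

The policy would then be plugged in through CacheConfiguration.setEvictionPolicyFactory on the cache in question.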

Will Caching be useful when we need multiple items in one go

We are working on an e-commerce site where an admin can store some configuration on a combination of product-category-manufacturer or product-category.
We have some reports which can return 10,000 product transactions (with 100-1,000 unique product-category-manufacturer combinations).
In these reports, we also need to use the configuration.
One option could be to fetch the configurations from the same stored procedure for all unique product-category-manufacturer combinations.
Another option could be to cache all these combinations in some out-of-process cache (like Redis). Once the transaction data is fetched from the stored procedure, the system pulls the configuration from the cache for all 1,000 product-category-manufacturer combinations. But in this case we would have to hit the cache 1,000 times, and if some keys are not found in the cache, we would have to hit the database.
In fact, there can be combinations for which no data exists in the database at all. If we request those combinations, the system will never find them in the cache and will hit the database every time. To avoid this, we would have to maintain a set of all the product-category-manufacturer combinations for which data is actually available.
Could anybody suggest whether a cache will be useful in this case?
We use caching mainly on two occasions:
To reduce latency: the cache is closer to the client, so it takes less time for the resource to reach the client.
To reduce network traffic: often resources are reusable but are always fetched from the original source, which is costly and creates unnecessary traffic. Adding a cache layer solves this.
So to answer your question, "Will caching be useful when we need multiple items in one go?", you have to think about those two points: how much you are reusing (cache hit percentage), and the cost difference between a cache call and a call to the original source.
If your issue is getting 1,000 items at once, Redis has no problem providing that (see the sketch below), and it will be much faster than the transactional DB. And you can keep a set of all the product-category-manufacturer combinations; that is better, as you will not have cache misses. However, think about the size of the Redis DB before you proceed.
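A minimal Jedis sketch of that approach (host, port and key format are made up): the set of combinations known to have data filters out guaranteed misses, and a single MGET fetches the remaining keys in one round trip instead of 1,000 individual GETs:

    import java.util.List;
    import java.util.Set;

    import redis.clients.jedis.Jedis;

    public class ConfigurationCache {

        private final Jedis jedis = new Jedis("localhost", 6379); // assumed host/port
        private final Set<String> knownCombinations; // combinations that actually have data

        public ConfigurationCache(Set<String> knownCombinations) {
            this.knownCombinations = knownCombinations;
        }

        /** Fetch configs for many product-category-manufacturer keys in one round trip. */
        public List<String> fetchConfigs(List<String> combinationKeys) {
            // Drop combinations known to have no data, so they never cause a pointless DB hit.
            String[] keys = combinationKeys.stream()
                    .filter(knownCombinations::contains)
                    .toArray(String[]::new);
            if (keys.length == 0)
                return List.of();
            // Any key missing from Redis comes back as null -> load just those from the DB.
            return jedis.mget(keys);
        }
    }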

Terracotta L2 cache invalidation by messaging

I am trying to evaluate the Terracotta distributed cache with Ehcache, and I have the following query. There are 20+ apps which will use a TAS distributed cache. As I understand it, there will be an L1 cache in each of these apps and an L2 in the cluster. The cache data in the cluster fronts a database which is updated by a different app that we do not have access to, so we only read from this DB. But the DB updates need to flow into the cache.
By way of DB triggers, the updated keys (keys alone) are stored in a temp table. At specific intervals, a separate batch job monitors this table and collects the keys that need to be expired from the cache.
From here I need help. How do I tell the TAS L2 cache to expire/evict these keys? What options does Terracotta offer? Will this expiry event flow from L2 to all the individual apps? What is the time lag? I do not want to have to send the expiry keys to each individual app myself. Can this be accomplished?
Thanks for the help!
Maybe I am missing something, but I am not sure why you would want to expire/evict those keys instead of simply calling cache.removeAll(keys), as in the sketch below. This removal will be automatically propagated to all L1 nodes that have those entries in their local cache.
The time lag depends on the consistency settings of the distributed cache.
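A minimal sketch of that batch job against the Ehcache 2.x API (the temp-table name and column are made up; the point is the single bulk removeAll(keys) call against the clustered cache):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;

    import javax.sql.DataSource;

    import net.sf.ehcache.Cache;

    public class StaleKeyRemovalJob {

        private final DataSource dataSource;
        private final Cache cache; // the Terracotta-clustered Ehcache instance

        public StaleKeyRemovalJob(DataSource dataSource, Cache cache) {
            this.dataSource = dataSource;
            this.cache = cache;
        }

        /** Run at the same interval at which the trigger-fed temp table is polled. */
        public void run() throws Exception {
            List<Object> staleKeys = new ArrayList<>();
            try (Connection con = dataSource.getConnection();
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT cache_key FROM updated_keys_tmp")) {
                while (rs.next())
                    staleKeys.add(rs.getString(1));
            }
            // One bulk removal; Terracotta propagates it to every app's L1 automatically.
            cache.removeAll(staleKeys);
            // Afterwards, delete the processed rows from the temp table.
        }
    }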
