In my application I use ehcache with several caches that are backed by a terracotta server.
I noticed that there is a correlation between the size of the data that is saved in the server and the time it takes to create a cache manager instance in the client (the bigger the size the longer it takes).
I couldn't find any info about what actually happens when the cache manager is created.
To my understanding, the data would only be pulled when it is actually requested and not when creating the manager, so what is the overhead?
Any thoughts or references to relevant reading would be much appreciated.
First of all, the CacheManager is not itself involved in pushing or pulling data. It creates the caches, which hold the elements as name-value pairs and serve put/get and other operations. The CacheManager handles the creation, access, and removal of caches.
That said, when you create a CacheManager whose caches participate in a Terracotta cluster, you may well see a difference in how long it takes to start. The CacheManager establishes a connection to the server specified in the configuration. Any pre-cache loaders, such as classes extending BootstrapCacheLoader, will affect the load time too, as will the consistency attribute of the caches that participate in the cluster. By design, the Terracotta server pushes the most frequently hit data to clients in order to reduce local cache misses, and it does the same for caches identified for pinning.
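As a rough illustration, here is a minimal sketch (Ehcache 2.x API) that times CacheManager creation; the configuration file name is an assumption, standing in for any config that contains a terracottaConfig element:

import net.sf.ehcache.CacheManager;

public class StartupTiming {
    public static void main(String[] args) {
        long start = System.nanoTime();
        // The connection to the Terracotta server, and any configured
        // BootstrapCacheLoaders, run here before newInstance() returns,
        // which is why start-up time grows with the amount of bootstrapped data.
        CacheManager manager = CacheManager.newInstance("ehcache-terracotta.xml");
        System.out.printf("CacheManager ready in %d ms%n",
                (System.nanoTime() - start) / 1_000_000);
        manager.shutdown();
    }
}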
I have implemented caching in my Spring Boot REST application. My policy includes a time-based cache eviction strategy and an update-based cache eviction strategy. I am worried that, since I employ stateless servers, if a method call that updates certain data is handled by server instance A, the corresponding caches in server instances B, C, and D are not updated as well.
Is this an issue I would face, and is there a way to overcome it?
This is one of the oldest problems in software development: cache invalidation when you have multiple servers.
One way to handle it is to move your cache out of the individual servers and into something shared, such as a separate instance that holds the cache entries every app server refers to, e.g. Redis (a centralized cache); see the sketch after this list.
A second way is to broadcast a message so that each server knows to invalidate the entry once the data has been modified or deleted. Here you run the risk of a message not being processed, leaving a stale entry on some server(s).
Another option is to have some sort of write-ahead log (like Kafka or Redis Streams) that is processed by each server, so they all process the events deterministically and converge on the same cache state.
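As a minimal sketch of the first option, this is roughly what a centralized cache looks like with Spring's cache abstraction backed by Redis. It assumes spring-boot-starter-cache and spring-boot-starter-data-redis are on the classpath and spring.cache.type=redis is set; the service, cache, and entity names are illustrative:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.stereotype.Service;

@SpringBootApplication
@EnableCaching
public class App {
    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}

@Service
class ProductService {

    // All instances (A, B, C, D) read from the same Redis-backed cache,
    // so a hit on any node returns the same entry.
    @Cacheable(cacheNames = "products", key = "#id")
    public Product findProduct(long id) {
        return loadFromDatabase(id); // runs only on a cache miss
    }

    // Evicting here removes the entry for every instance at once,
    // because the store is shared rather than per-server.
    @CacheEvict(cacheNames = "products", key = "#product.id")
    public Product updateProduct(Product product) {
        return saveToDatabase(product);
    }

    private Product loadFromDatabase(long id) { return new Product(id); } // illustrative stub
    private Product saveToDatabase(Product p) { return p; }               // illustrative stub
}

class Product {
    private final long id;
    Product(long id) { this.id = id; }
    public long getId() { return id; }
}

With this in place, the time-based part of your eviction policy can stay as TTLs on the shared entries, and the update-based part becomes the @CacheEvict call, which is visible to all instances.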
What are the main differences between in-memory-store and managed-store in the Mule cache scope, and which gives the best performance?
What is the best way to configure caching in global scope?
We are currently using in-memory-store caching, and we keep running into out-of-memory issues because our server has a modest hardware configuration. We are using Mule 3.7.
Please provide your suggestions for configuring the cache in an optimized way.
We are also facing an issue with cache expiration with in-memory-store: cached data is not being expunged even after the expiration time has passed. When we use managed-store, it works as expected.
Below is my configuration:
In-memory:
This stores the data in system memory. The data stored in-memory is non-persistent, which means that in case of an API restart or crash, the cached data will be lost.
Managed-store:
This stores the data in a place defined by a ListableObjectStore. The data stored with managed-store is persistent, which means that in case of an API restart or crash, the cached data will not be lost.
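For illustration, a persistent managed store is configured along these lines (a sketch based on the Mule 3 EE caching-strategy schema; the strategy name, store name, and the millisecond TTL and size values are assumptions to adapt):

<ee:object-store-caching-strategy name="cachingStrategy" doc:name="Caching Strategy">
    <!-- persistent="true" writes entries to disk so they survive a restart;
         entryTTL and expirationInterval are in milliseconds -->
    <managed-store storeName="myManagedStore" persistent="true"
                   maxEntries="1000" entryTTL="60000" expirationInterval="30000"/>
</ee:object-store-caching-strategy>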
Source (the difference is explained in detail there, with configurations):
http://www.tutorialsatoz.com/caching-in-mule-cache-scope/
A friend of mine explained this difference to me as follows:
in-memory cache --> a temporary in-memory storage area for the data. For example, when using a VM component in Mule, the data is stored in the VM in the form of an in-memory queue.
managed store --> the data can be stored and used in later stages. Example: an object store.
Mainly, a cache stores frequently used data; it reduces DB or HTTP calls by serving frequently used data or results from the cache scope.
But both are for temporary storage only, meaning they are valid for that particular session alone.
I am using Infinispan 6.0.2 via the WildFly 8.2 subsystem. I have configured a transactional cache that uses a string-based JDBC cache store to persist content placed in the Infinispan cache.
My concern, after reading the following in the Infinispan documentation, is that the cache and the cache store can get out of sync when putting/updating/removing multiple entries in the same transaction, because the transaction commits or rolls back in the cache but only partially succeeds or fails in the cache store.
4.5. Cache Loaders and transactional caches
When a cache is transactional and a cache loader is present, the cache loader won't be enlisted in the transaction in which the cache is part. That means that it is possible to have inconsistencies at cache loader level: the transaction may succeed in applying the in-memory state but (partially) fail to apply the changes to the store. Manual recovery would not work with cache stores.
Could someone please clarify whether the above statement refers only to loading from a cache store, or whether it applies to writing to a store as well?
If it also applies when writing to a cache store, are there any recommended strategies/solutions for ensuring the cache and cache store remain in sync?
The driving factor behind this for me is that I am using Infinispan both for write-through and for overflow of business-critical data, and I need confidence that the cache store correctly represents the state of the data.
I have also asked this question on the Infinispan Forums
Many thanks in advance.
It applies to writes as well; a failure to write to the store does not affect the rest of the transaction.
The reason is that the actual persistence API is not transactional (edit: newer versions of Infinispan support transactional persistence too). With two-phase commit (in the first phase, prepare, all locks are acquired; in the second, commit, the write is executed), the write to the store happens in the second phase. Therefore, a store failure cannot roll back the changes on the other nodes.
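To make that boundary concrete, here is a minimal sketch against the Infinispan 6.x embedded API; the configuration file and cache name are assumptions, standing for a transactional cache with a string-based JDBC store:

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

import javax.transaction.TransactionManager;

public class TxStoreDemo {
    public static void main(String[] args) throws Exception {
        // Assumed config declaring a transactional cache "tx-cache"
        // backed by a string-keyed JDBC cache store.
        DefaultCacheManager manager = new DefaultCacheManager("infinispan.xml");
        Cache<String, String> cache = manager.getCache("tx-cache");
        TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

        tm.begin();
        cache.put("key", "value");   // buffered in the transaction context
        tm.commit();                 // prepare acquires locks, commit applies the
                                     // in-memory state; the JDBC write is issued
                                     // during commit but is not enlisted in the
                                     // transaction, so a store failure cannot
                                     // roll the in-memory commit back
        manager.stop();
    }
}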
Although Infinispan tries to get close to a strongly consistent in-memory database, given its guarantees it is still rather a cache. If you are interested in the design limitations (some of them also theoretical limitations), I recommend reading the Infinispan wiki.
I'm working on a web app where Ehcache is deployed. This app is load balanced between multiple servers. The app is installed on each server and when a user accesses the app he/she is redirected to one of the load balanced servers.
Is my understanding of the following correct? If a user requests a key from a method annotated with @Cacheable, a check is made for the value in the Ehcache store. If the key is in the store, its value is returned from Ehcache; if it is not, the key and its value are added to the store and the value is returned. If one of the load-balanced servers becomes unavailable and the user requests a value which had already been cached on the now-unavailable server, then the value will be added to the current server's cache store and returned to the user as described above.
There is no risk of the user requesting a key which is not available, is there?
I don't understand what you mean by a key which is not available.
Essentially, the Spring caching abstraction works more or less as you described. Spring delegates the actual caching to the configured caching library; in your case, that is Ehcache.
If you have one cache per node, each node has its own copy of the data. If one server goes down and the user's request lands on a server whose store does not yet hold the value, the value will be put in that node's store as you described.
If you have a distributed cache, then the fact that a server goes down is irrelevant to your question: your cached data will still be available from the other nodes.
In any case, @Cacheable tries to retrieve the value from the cache before calling the method and returns it directly in case of a cache hit. In case of a cache miss, the method is executed and its result is stored in the cache.
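Roughly, the abstraction behaves like this hand-rolled cache-aside helper (a simplified sketch against the Ehcache 2.x API, ignoring locking and null-value handling):

import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;

import java.util.function.Function;

public class CacheAside {

    @SuppressWarnings("unchecked")
    static <K, V> V getOrCompute(Cache cache, K key, Function<K, V> method) {
        Element hit = cache.get((Object) key);
        if (hit != null) {
            return (V) hit.getObjectValue(); // cache hit: the method is skipped
        }
        V value = method.apply(key);         // cache miss: run the real method
        cache.put(new Element(key, value));  // store the result for next time
        return value;
    }
}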
I currently have a distributed cache using Ehcache via RMI that works just fine. I was wondering whether you can add persistence to the caches to create a distributed, persistent cache.
Alongside this, if the cache was persistent, would it load from the file store, then bootstrap from the cache cluster? Basically, what I want is:
Cache starts
Cache loads persistent objects from the file store
Cache joins the distributed cluster and bootstraps as normal
The use case behind this is having two identical components running on independent machines, distributing the cache to avoid losing data in the event that one of the components fails. The persistence would guard against losing all data on the rare occasion that both components fail.
Would moving to another distribution method (such as Terracotta) support this?
I would take a look at the write-through caching options in Ehcache. As described in the link, combining a read-through and a write-behind cache will provide persistence to a user-defined data store.
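For illustration, a write-behind cache is declared with a cacheWriter element along these lines (a sketch following the Ehcache 2.x schema; the cache name and the factory class are assumptions you would replace with your own writer):

<cache name="orders" maxElementsInMemory="10000" eternal="false">
    <!-- write-behind queues writes and applies them to the underlying data
         store asynchronously; writeMode="write-through" would apply them inline -->
    <cacheWriter writeMode="write-behind" maxWriteDelay="8" retryAttempts="2">
        <cacheWriterFactory class="com.example.OrderCacheWriterFactory"/>
    </cacheWriter>
</cache>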
What Terracotta gives you is consistency (so you don't have to worry about resolving conflicts among cluster members). You have the option of defining an interface to your own store (through CacheLoader and CacheWriter) or just letting Terracotta persist your data, but I have received mixed signals from Terracotta and its documentation on whether TC is appropriate for a system of record. If your data is transient and can be blown away at any time (like web sessions), it might be OK.
Add a bootstrapCacheLoaderFactory element, along with a cacheEventListenerFactory, to the cache that needs to be bootstrapped from the other nodes when it comes back up and to replicate to the other nodes when it receives updates. For example:
<cache name="myCache" maxElementsInMemory="10000" eternal="false"
       memoryStoreEvictionPolicy="LFU"
       diskPersistent="true"
       timeToLiveSeconds="86400"
       maxElementsOnDisk="1000">
    <cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"/>
    <bootstrapCacheLoaderFactory class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"/>
</cache>