ehcache - configure disk persistence

I am trying to configure Ehcache with disk persistence that does not initialize the disk cache until the heap is full. Is that possible?
Right now, as soon as I initialize my cache, I see that the disk cache is created on the file system. In most cases my heap will be big enough to contain my data, so I would rather not create a disk store that will never be used. I am using Ehcache on a phone, where disk space is at a premium, and I could potentially need 1 GB of disk persistence, so it creates this large file every time I initialize my cache.

While specific configurations of older versions did behave that way, since Ehcache 2.6.x the tiering model no longer works as an overflow.
When you configure a disk tier, it becomes the authority: it will always contain all mappings, from the very first one inserted into the cache.
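For illustration, here is a minimal programmatic sketch of such a heap-plus-disk cache under the post-2.6 model (the cache name and sizes are made up; the equivalent can also be declared in ehcache.xml):

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.config.MemoryUnit;
import net.sf.ehcache.config.PersistenceConfiguration;
import net.sf.ehcache.config.PersistenceConfiguration.Strategy;

public class DiskTierExample {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.getInstance();
        // A heap tier plus a disk tier; sizes are hypothetical.
        Cache bigCache = new Cache(new CacheConfiguration()
                .name("bigCache")
                .maxBytesLocalHeap(256, MemoryUnit.MEGABYTES)
                .maxBytesLocalDisk(1, MemoryUnit.GIGABYTES)
                .persistence(new PersistenceConfiguration().strategy(Strategy.LOCALTEMPSWAP)));
        manager.addCache(bigCache);
        // With a disk tier present, the disk store is the authority: every
        // mapping is written to it from the first put, no matter how empty
        // the heap tier still is.
        manager.shutdown();
    }
}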

Related

Dotnet Core In Memory Cache - what is the default expiration

I'm using MemoryCache in a .NET Core C# project. I'm using it to store a list of enums that I read out of a collection (I want to cache it since loading the enums uses reflection, which takes time).
However, since my collection isn't going to change, I don't want it to expire at all. How do I do that? If I don't set any expiration (SlidingExpiration or AbsoluteExpiration), will my cache never expire?
If you do not specify an absolute and/or sliding expiration, then the item will theoretically remain cached indefinitely. In practical terms, how long it survives depends on two factors:
1. Memory pressure. If the system is resource-constrained and a running app needs additional memory, cached items are eligible to be evicted to free up RAM. You can, however, disable this by setting the cache priority for the entry to CacheItemPriority.NeverRemove.
2. The memory cache is process-bound. That means that if the server or your application restarts for whatever reason, anything stored in the memory cache is gone. This also means that in web-farm scenarios each instance of your application will have its own memory cache, since each is a separate process (even if you're simply running multiple instances on the same server).
If you care about truly doing the operation only once, and about persisting the result past app shutdown and even across multiple instances of your app, you need to employ distributed caching with a backing store such as SQL Server or Redis.

Difference between in-memory-store and managed-store in mule cache

What are the main differences between in-memory-store and managed-store in the Mule cache scope, and which gives the best performance?
What is the best way to configure caching in a global scope?
We are currently using in-memory-store caching, and we keep hitting out-of-memory issues because our server has a modest hardware configuration. We are using Mule 3.7.
Please provide your suggestions for configuring the cache in an optimized way.
We are also facing an issue with cache expiration with in-memory-store: cached data is not expunged even after the expiration time. When we use managed-store, it works as expected.
Below is my configuration:
In-memory:
This stores the data in system memory. Data stored in-memory is non-persistent, which means that in case of an API restart or crash, the cached data is lost.
Managed-store:
This stores the data in a place defined by a ListableObjectStore. Data stored in a managed store is persistent, which means that in case of an API restart or crash, the cached data is not lost.
Source (the configuration difference is explained in detail here):
http://www.tutorialsatoz.com/caching-in-mule-cache-scope/
A friend of mine explained the difference to me as follows:
in-memory cache --> a temporary storage area where the data is kept. For example, when using a VM component in Mule, the data is stored in the VM in the form of an in-memory queue.
managed store --> here we can store the data and use it in later stages, for example an object store.
Mainly, a cache stores frequently used data. It reduces DB or HTTP calls by saving the frequently used data or results within the cache scope.
But both are temporary storage only, meaning they are valid for that particular session alone.
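To make the distinction concrete, here is a sketch against Mule 3.x's Java API, where the ObjectStoreManager hands out both kinds of store and a boolean flag decides persistence (the store names are made up; in a flow you would normally configure this in XML instead):

import java.io.Serializable;

import org.mule.api.MuleContext;
import org.mule.api.store.ObjectStore;
import org.mule.api.store.ObjectStoreManager;

public class StoreKindsExample {
    public static void createStores(MuleContext muleContext) throws Exception {
        ObjectStoreManager osm =
                muleContext.getRegistry().lookupObject(ObjectStoreManager.class);
        // Non-persistent, heap-only store: contents vanish on restart or crash.
        ObjectStore<Serializable> inMemory = osm.getObjectStore("inMemoryStore", false);
        // Persistent (managed) store: backed by disk, survives an API restart.
        ObjectStore<Serializable> managed = osm.getObjectStore("managedStore", true);
        inMemory.store("key", "in-memory value");
        managed.store("key", "managed value");
    }
}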

How to change the size of the off-heap in neo4j?

Does anyone know which parameters can affect the size of the off-heap memory?
The Neo4j documentation says that the size of the off-heap memory can be modified with the dbms.pagecache.memory parameter.
I tried changing the value of this parameter, but when I check the off-heap memory with jconsole, the size is always the same.
PS: I'm working with the free version of Neo4j.
Neo4j <= 2.1.x uses the so-called MMIO cache as its first-level cache. This basically uses the memory-mapping capabilities provided by the operating system. On Unix-style OSes the file buffer cache is off-heap; on Windows it is on-heap. For a verbose description, including the config settings, refer to http://neo4j.com/docs/2.1.8/configuration-caches.html#_file_buffer_cache.
In Neo4j 2.2 this cache layer's implementation was changed to the page cache. The page cache is off-heap on all OSes. Its configuration has been reduced to just one setting:
dbms.pagecache.memory
So you have used a 2.2 config option on a 2.1 instance. Either use the set of options for 2.1, or do an upgrade.
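If you are on 2.2+ and embed Neo4j, the same setting can also be passed programmatically; a sketch, with a made-up store path (for the server you would set dbms.pagecache.memory in conf/neo4j.properties instead):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.graphdb.factory.GraphDatabaseSettings;

public class PageCacheExample {
    public static void main(String[] args) {
        GraphDatabaseService db = new GraphDatabaseFactory()
                .newEmbeddedDatabaseBuilder("/path/to/graph.db")
                // size of the off-heap page cache introduced in 2.2
                .setConfig(GraphDatabaseSettings.pagecache_memory, "2g")
                .newGraphDatabase();
        db.shutdown();
    }
}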

what happens when a new ehcache cachemanager is created?

In my application I use ehcache with several caches that are backed by a terracotta server.
I noticed that there is a correlation between the size of the data that is saved in the server and the time it takes to create a cache manager instance in the client (the bigger the size the longer it takes).
I couldn't find any info about what actually happens when the cache manager is created.
To my understanding, the data would only be pulled when it is actually requested and not when creating the manager, so what is the overhead?
Any thoughts or references to relevant reading would be much appreciated.
First of all, the CacheManager is not itself involved in pushing or pulling data; it creates the caches, which contain the elements as name-value pairs and which handle put/get and other operations. The CacheManager takes care of the creation, access, and removal of caches.
That said, when you create a CacheManager whose caches participate in a Terracotta cluster, you may indeed see a difference in how long it takes to load. The CacheManager will establish a connection to the server specified in the config. Any pre-cache loaders, such as classes extending BootstrapCacheLoader, will affect the load time too. The consistency attribute of caches that participate in the cluster also has an impact on load time. By design, the Terracotta server pushes the most-hit data to clients in order to reduce local cache misses, and likewise for any cache identified for pinning.
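To make that concrete, here is a sketch of creating a clustered CacheManager programmatically; the server URL, cache name, and sizes are invented, and the timing only shows where the start-up cost is paid:

import net.sf.ehcache.CacheManager;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.config.Configuration;
import net.sf.ehcache.config.TerracottaClientConfiguration;
import net.sf.ehcache.config.TerracottaConfiguration;
import net.sf.ehcache.config.TerracottaConfiguration.Consistency;

public class ClusteredManagerExample {
    public static void main(String[] args) {
        Configuration config = new Configuration()
                // the connection to the Terracotta server is established at start-up
                .terracotta(new TerracottaClientConfiguration().url("localhost:9510"))
                .cache(new CacheConfiguration("clusteredCache", 10000)
                        // the consistency setting influences load time as well
                        .terracotta(new TerracottaConfiguration()
                                .consistency(Consistency.EVENTUAL)));
        long start = System.nanoTime();
        CacheManager manager = new CacheManager(config); // connects and registers caches
        System.out.printf("CacheManager start-up took %d ms%n",
                (System.nanoTime() - start) / 1_000_000);
        manager.shutdown();
    }
}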

Distributed, persistent cache using EHCache

I currently have a distributed cache using EHCache via RMI that works just fine. I was wondering if you can include persistence with the caches to create a distributed, persistent cache.
Alongside this, if the cache was persistent, would it load from the file store, then bootstrap from the cache cluster? Basically, what I want is:
Cache starts
Cache loads persistent objects from the file store
Cache joins the distributed cluster and bootstraps as normal
The use case behind this is having two identical components running on independent machines, distributing the cache to avoid losing data in the event that one of the components fails. The persistence would guard against losing all data on the rare occasion that both components fail.
Would moving to another distribution method (such as Terracotta) support this?
I would take a look at the write-through caching options in EHCache. As described in the link, combining a read-through and write-behind cache will provide persistence to a user-defined data store.
What Terracotta gives you is consistency (so you don't have to worry about resolving conflicts among cluster members). You have the option of defining an interface to your own store (through CacheLoader and CacheWriter), or of just letting Terracotta persist your data, but I have received mixed signals from Terracotta and its documentation on whether TC is appropriate for a system of record. If your data is transient and can be blown away at any time (like web sessions), it might be OK.
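To sketch what defining an interface to your own store looks like, a minimal CacheWriter along these lines can be registered on a cache for write-behind (the class is hypothetical and the actual persistence calls are left as comments):

import java.util.Collection;

import net.sf.ehcache.CacheEntry;
import net.sf.ehcache.CacheException;
import net.sf.ehcache.Ehcache;
import net.sf.ehcache.Element;
import net.sf.ehcache.writer.CacheWriter;
import net.sf.ehcache.writer.writebehind.operations.SingleOperationType;

public class MyStoreWriter implements CacheWriter {
    public CacheWriter clone(Ehcache cache) throws CloneNotSupportedException {
        throw new CloneNotSupportedException();
    }
    public void init() { /* open the connection to your backing store */ }
    public void dispose() throws CacheException { /* close it */ }
    public void write(Element element) throws CacheException {
        // upsert element.getObjectKey() -> element.getObjectValue() in your store
    }
    public void writeAll(Collection<Element> elements) throws CacheException {
        for (Element element : elements) { write(element); }
    }
    public void delete(CacheEntry entry) throws CacheException {
        // remove entry.getKey() from your store
    }
    public void deleteAll(Collection<CacheEntry> entries) throws CacheException {
        for (CacheEntry entry : entries) { delete(entry); }
    }
    public void throwAway(Element element, SingleOperationType operationType, RuntimeException e) {
        // log and drop a write-behind operation that kept failing
    }
}

It would be hooked up with cache.registerCacheWriter(new MyStoreWriter()) and a cacheWriter element with writeMode="write-behind" in the cache configuration.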
Another option is adding a bootstrapCacheLoaderFactory element along with a cacheEventListenerFactory to the cache, so that the cache is bootstrapped from the other nodes when it comes back up and replicates with the other nodes whenever one of them gets updates (cache name and in-memory size below are placeholders):

<cache name="myCache" maxElementsInMemory="1000"
       memoryStoreEvictionPolicy="LFU"
       diskPersistent="true"
       timeToLiveSeconds="86400"
       maxElementsOnDisk="1000">
  <cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"/>
  <bootstrapCacheLoaderFactory class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"/>
</cache>
