We are using Ehcache in our application. Consider the following configuration:
<diskStore path="java.io.tmpdir" />
<cache name="service" maxElementsInMemory="50000" eternal="true" overflowToDisk="true"/>
Since we have configured eternal="true", will cache entries be kept forever? Is there a chance of running out of disk space?
What would be the performance impact of the disk store? It is definitely slower than the in-memory cache, but by how much?
If more entries are stored on disk, will the resulting file operations cause I/O problems?
Please suggest best practices for a production-grade application. Consider that we have a 3 GB heap and 25,000 concurrent users accessing the application, but no database behind it.
The application is deployed on WebSphere Application Server (WAS) 8.5.5.
eternal=true means mappings will never expire.
overflowToDisk=true means that all mappings put in the cache will end up written on disk, from the first mapping put in the cache. The current Ehcache tiering model (since version 2.6.0) always makes use of the slower store - disk here - in order to give you predictable latency. When a mapping is accessed, it gets faulted into heap for faster retrieval. When too many mappings are faulted in heap, eviction from heap kicks in to keep the heap cache size according to maxElementsInMemory.
Given that you do not size the disk store by setting maxElementsOnDisk, it defaults to 0, which means no limit. So yes, you may run out of disk space if you never explicitly remove cache entries.
It is quite hard to recommend a proper cache size without knowing the details of your application. What I can recommend is that you measure both heap and disk usage and assess at which point the increased memory usage outweighs the performance gain.
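To make the growth bounded, you can size the disk tier and let entries expire. A minimal sketch; the sizes and TTL below are illustrative, not recommendations:
<cache name="service"
       maxElementsInMemory="50000"
       maxElementsOnDisk="500000"
       eternal="false"
       timeToLiveSeconds="86400"
       overflowToDisk="true"/>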
I would like to use Ehcache with a combination of memory and disk storage. Ehcache should move new elements to disk when memory is full. For example, if I have 100 elements in the Ehcache memory store and try to put a 101st element while memory is full, the 101st element should go to disk, not the 1st.
Could you please let me know the cache configuration to achieve this?
Ehcache no longer works that way. The tiering model introduced in Ehcache 2.6 and used since then will always store ALL mappings into the lower tier, disk in your case.
The reason is predictable latency. If Ehcache waited for the memory tier to be full before using the disk, you would see a latency increase, possibly at the worst time for your application. The model where all mappings are written to disk gives you an upper bound on write latency, while reads remain faster for hot values that are available in memory directly.
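The closest configuration to what you describe keeps a small heap tier for the hot set and bounds the disk tier, with the caveat that every mapping is still also written to disk. A sketch, with an illustrative cache name and disk limit:
<cache name="example"
       maxElementsInMemory="100"
       maxElementsOnDisk="10000"
       overflowToDisk="true"/>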
I’m using Hibernate 4.3.11.Final with the accompanying version of ehcache. I have a simple cache configuration which looks like the following:
<defaultCache maxElementsInMemory="10000"
eternal="false"
timeToIdleSeconds="86400"
timeToLiveSeconds="86400"
overflowToDisk="false"
memoryStoreEvictionPolicy="LRU">
</defaultCache>
<cache name="main" />
My question is: since the memory store is part of the heap and the heap gets garbage collected periodically, what happens when some of the entries in my cache get garbage collected? Is that the same as those entries being evicted from the cache?
Garbage collection (GC) will never collect entries from a cache to which there is a root path since they are referenced by the cache itself.
To answer the question around off-heap: let's say you decide to have 500k mappings in your cache and each mapping is 10k bytes. That amounts to nearly 5GB of cached data. That data has an impact on the GC when it runs, since the GC needs to perform operations on it: marking, promotion, compaction, depending on the GC implementation. Off-heap answers this problem by placing the objects outside the area where GC happens, enabling the application to run with a much smaller heap and thus reduced GC pauses.
None of this contradicts the point above: it is never the GC that removes a cache entry. It is the opposite: once evicted, expired, or replaced, a former mapping becomes free for GC, as long as there are no more root paths to it.
And this is what the explanation given in this answer says.
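To illustrate, an off-heap tier in Ehcache 2.x is configured via maxBytesLocalOffHeap. A sketch with made-up cache name and sizes; note that in the 2.x line the off-heap store requires the commercial BigMemory add-on, plus enough direct memory on the JVM:
<!-- Requires BigMemory; start the JVM with e.g. -XX:MaxDirectMemorySize=5g -->
<cache name="bigCache"
       maxBytesLocalHeap="512M"
       maxBytesLocalOffHeap="4g"/>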
I am using Ehcache v2.8.
But I am not sure I understand the documentation correctly regarding reservation of memory for the cache.
If the memory is set in ehcache.xml like this:
<ehcache maxBytesLocalHeap="256M">
(...)
</ehcache>
...will it actually be allocated at startup, so that this cache uses exactly 256MB of heap, or does this only mean (as the attribute name suggests) that this cache can take at most 256MB of heap?
This means that this cache will do its best to contain 256MB or less of user data.
But note that the actual memory footprint of the cache can be somewhat larger due to internal data structures.
Also, when the cache operates at full capacity, it may temporarily go over its size limit while eviction takes place.
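Note that maxBytesLocalHeap set at the ehcache level defines a pool shared by all caches, and an individual cache can carve out its own share. A sketch with illustrative names and sizes:
<ehcache maxBytesLocalHeap="256M">
  <!-- "users" may hold at most 64M of the shared 256M pool -->
  <cache name="users" maxBytesLocalHeap="64M"/>
  <!-- caches without their own limit compete for the remainder -->
  <cache name="sessions"/>
</ehcache>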
Suppose you had a server with 24G of RAM at your disposal, how much memory would you allocate to (the Tomcat instance running) eXist?
I'm setting up our new webserver, with an Intel Xeon E5649 (2.53GHz) processor, running Ubuntu 12.04 64-bit. eXist is running as a webapp inside Tomcat, and the db is only used for querying 'stable' collections --that is, no updates are being executed to the resources inside eXist.
I've been experimenting with different heap sizes (via -Xms and -Xmx settings when starting the Tomcat process), and so far haven't noticed much difference in response time for queries against eXist. In other words, it doesn't seem to matter much whether the JVM is allocated 4G or 16G. I have also upped the #cacheSize and #collectionCache in eXist's WEB-INF/conf.xml file to e.g. 8192M, but this doesn't seem to have much effect. I suppose these settings /do/ have an influence when eXist is running inside Tomcat?
I know each situation is different (and I know there's a Tomcat server involved), but are there some rules of thumb for eXist performance w.r.t. the memory it is allocated? I'd like to get at a sensible memory configuration for a setup with a larger amount of RAM available.
This question was asked and answered on the exist-open mailing list. The answer from wolfgang#exist-db.org was:
Giving more memory to eXist will not necessarily improve response times. "Bad" queries may consume lots of RAM, but the better your queries are optimized, the less RAM they need: most of the heavy processing will be done using index lookups, and the optimizer will try to reduce the size of the node sets to be passed around. Caching memory thus has to be large enough to hold the most relevant index pages. If this is already the case, increasing the caching space will not improve performance any further. On the other hand, a too-small #cacheSize or #collectionCache will result in a noticeable bottleneck. For example, a batch upload of resources or creating a backup can take several hours (instead of, e.g., minutes) if #collectionCache is too small.
If most of your queries are optimized to use indexes, 8GB of RAM for eXist usually gives you enough room to handle the occasional high load. Ideally you would run some load tests to see what the maximum memory use actually is. For #cacheSize, I rarely have to go beyond 512m. The setting for #collectionCache depends on the number of collections and documents in the database. If you have tens or hundreds of thousands of collections, you may have to increase it up to 768m or more. As I said above, you will notice a sudden breakdown in performance during uploads or backups if the collectionCache becomes too small.
So to summarize, a reasonable setting for me would be: -Xmx8192m, #cacheSize="512m", #collectionCache="768m". If you can afford to give it 16G of main memory, it certainly won't hurt. Also, if you are using the Lucene index or the new range index, you should consider increasing the #buffer setting in the corresponding index module configurations in conf.xml as well:
<module id="lucene-index" buffer="256" class="org.exist.indexing.lucene.LuceneIndex" />
<module id="range-index" buffer="256" class="org.exist.indexing.range.RangeIndex"/>
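For reference, the #cacheSize and #collectionCache attributes mentioned above live on the db-connection element in eXist's conf.xml. A sketch applying the summarized values, with the rest of the element left as it is in your existing configuration:
<db-connection cacheSize="512M"
               collectionCache="768M"
               (...) />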
I have some questions on the "overflowToDisk" attribute of the cache element.
1) I read at this URL that:
overflowToDisk sets whether elements can overflow to disk when the memory store has reached its maximum limit.
Does "memory" above refer to the JVM memory allocated to the Java process running Ehcache, or is there a parameter in the configuration to specify the cache memory size?
2) When the process running Ehcache terminates for some reason, does the disk store get cleared, so that everything in the cache vanishes?
Elements start to overflow to the disk when you have more than maxElementsInMemory of them in the memory store. The following example creates a cache that stores 1000 elements in memory, and, if you need to store more, up to 10000 on disk:
<cache name="cacheName"
maxElementsInMemory="1000"
maxElementsOnDisk="10000"
overflowToDisk="true"
timeToIdleSeconds="..."
timeToLiveSeconds="...">
</cache>
For the second question, have a look at the diskPersistent parameter. If it is set to true, Ehcache will keep your data stored on the disk when you stop the JVM. The following example demonstrates this:
<cache name="cacheName"
maxElementsInMemory="1000"
maxElementsOnDisk="10000"
overflowToDisk="true"
diskPersistent="true"
timeToIdleSeconds="..."
timeToLiveSeconds="...">
</cache>
As of Ehcache 2.6, the storage model is no longer an overflow one but a tiered one. In the tiered storage model, all data will always be present in the lowest tier. Items will be present in the higher tiers based on their hotness.
Possible tiers for open source Ehcache are:
On-heap, which lives on the JVM heap
On-disk, which is the lowest tier
By definition, higher tiers have lower latency but less capacity than lower tiers.
So for an open-source cache configured with overflowToDisk, all the data will always be inside the disk tier. The cache keeps the keys in memory and the data on disk.
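Since the tiered model arrived, the idiomatic way to size both tiers is by bytes, with the disk tier declared through the persistence element. A sketch with illustrative sizes; localTempSwap means the disk tier is cleared on restart:
<cache name="cacheName"
       maxBytesLocalHeap="64M"
       maxBytesLocalDisk="1G">
  <persistence strategy="localTempSwap"/>
</cache>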
Answer copied from this other question.