What happens if a Guava cache is full and there is no evictable element? - caching

I am using a Google Guava cache with reference-based eviction.
I wonder what happens if the cache is full and no element in it is eligible for eviction. Is an out-of-memory exception thrown?

Reference-based eviction is essentially no different than Java's standard GC behavior - the GC just ignores the reference's presence in the cache. If an object falls out of scope (everywhere but the cache) it will be evicted from the cache during GC. If all elements of the cache are in scope somewhere else and therefore cannot be GCed you will run into memory problems exactly like you would if you weren't using a cache. You cannot have more data in memory than the JVM is configured to permit. Using a reference-evicting cache doesn't change this.
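For illustration only, here is a minimal sketch of how reference-based eviction is configured with Guava's CacheBuilder (the class name and keys are made up for the example): weakValues() ties an entry's lifetime to reachability, while softValues() lets the GC reclaim entries only under memory pressure.

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;

    public class ReferenceEvictionSketch {
        public static void main(String[] args) {
            // Values held by weak references: an entry becomes eligible for GC
            // as soon as nothing outside the cache strongly references the value.
            Cache<String, byte[]> weakCache = CacheBuilder.newBuilder()
                    .weakValues()
                    .build();

            // Values held by soft references: collected only when the JVM is
            // under memory pressure, so this behaves like a best-effort cache.
            Cache<String, byte[]> softCache = CacheBuilder.newBuilder()
                    .softValues()
                    .build();

            byte[] payload = new byte[1024 * 1024];
            weakCache.put("report", payload);
            softCache.put("report", payload);

            // While 'payload' is strongly reachable here, neither entry can be
            // collected. Once all strong references outside the caches are gone,
            // the entries may disappear after a GC cycle; if they never become
            // unreachable, the heap fills up exactly as it would without a cache.
            payload = null;
            System.gc(); // only a hint; collection is not guaranteed
            System.out.println(weakCache.getIfPresent("report"));
        }
    }

Whether an entry actually disappears is entirely up to the garbage collector; the cache never frees memory that is still strongly referenced elsewhere, so no exception comes from the cache itself, and an OutOfMemoryError can still be thrown by the JVM.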

Related

Debugger memory watch/examine for cached memory

I am trying to debug a program that partly runs from cached data memory and cached instruction memory. The question is about how the debugger works when examining such memory. Does it access the cached copy when examining a specific location? If so, does it actually modify the cache, since it has to fetch the data on a miss? Does that mean the program's behavior might differ under the debugger from its behavior without it? Is there any way to debug cache-related issues without the debugger affecting the caches?
Update: the specific CPU core is an ARM Cortex-A5. The debugger is DSTREAM/DS-5.
I think the question is a bit generic, because the answer depends on the CPU.
However, some very general rules:
The debugger will try to see what the CPU sees on a data access, which includes a lookup in the data cache.
This is different for the instruction cache: the debugger will normally not do a lookup there, because it performs data accesses. That is normally not a problem, as the instruction cache does not contain dirty data. Depending on the debugger, it can clean the DCache and invalidate the corresponding ICache line when data is written.
Debug accesses try not to be intrusive, and can force a mode in which no linefill is performed on a miss. But this is really dependent on the CPU and not a general rule.
The DS-5 uses a JTAG probe connected to the CPU. To read the CPU's addressable memory, it has to run the CPU through its micro-operations to fetch memory. This perturbs the cache differently than if the CPU were simply running the program.
You can minimize the effect by not stopping the CPU until after the critical (suspect) code, and then trying to piece together what must have happened from the contents of registers and memory. If you can run the program from its beginning to the breakpoint, especially if that covers 10,000+ instructions, the cache will probably end up in the correct state, unless there is asynchronous activity.
To identify whether an issue is due to caching, maybe you can simply disable the cache?

OLAP Saiku Cache expires

I'm using Saiku and PHPAnalytics to run MDX queries on my cube.
It seems that if I run queries everything is fine and caching works. But if I come back two hours later and run those queries again, the cache is not used! Why? I need the cache to be kept for a long time! What can I do? I tried adding this to mondrian.properties: mondrian.rolap.CachePool.costLimit = 2147483647
But it didn't help. What should I do?
The default in-memory cache of Mondrian stores things in a WeakHashMap. This means it can be cleared at the discretion of the JVM's garbage collector. Most application servers are set up to do a periodic garbage collection sweep (usually every hour or so). One option is to tweak your JVM's configuration so it doesn't do this:
-Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
You can also provide your own cache implementation of the SegmentCache SPI. If your implementation uses hard references, they will never be collected. This is trickier to do and will require quite a bit of studying to get right. You can start by taking a look at the default implementation and go from there.
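To make the difference concrete, here is a generic sketch (not the actual SegmentCache SPI, whose interface you would implement against your Mondrian version): a WeakHashMap-backed cache can lose entries whenever the garbage collector runs, while a map holding hard references keeps them until you remove them yourself.

    import java.util.Map;
    import java.util.WeakHashMap;
    import java.util.concurrent.ConcurrentHashMap;

    public class HardVsWeakCacheDemo {
        public static void main(String[] args) throws InterruptedException {
            // WeakHashMap holds its keys through weak references, so the GC may
            // drop entries at any time -- roughly why Mondrian's default cache
            // can appear to "forget" results after a garbage collection sweep.
            Map<Object, String> weakCache = new WeakHashMap<>();

            // A map with ordinary (hard) references never loses entries to GC;
            // it only grows until you remove entries yourself.
            Map<Object, String> hardCache = new ConcurrentHashMap<>();

            Object weakKey = new Object();
            Object hardKey = new Object();
            weakCache.put(weakKey, "cached result");
            hardCache.put(hardKey, "cached result");

            weakKey = null;      // drop the only strong reference to this key
            System.gc();         // a hint only; collection timing is not guaranteed
            Thread.sleep(100);   // give the collector a moment

            System.out.println("weak cache size: " + weakCache.size()); // often 0
            System.out.println("hard cache size: " + hardCache.size()); // stays 1
        }
    }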
The Mondrian cache should keep entries until the cache is deliberately flushed. That said, it uses an aging system to decide what stays cached when it runs out of memory to store the data: the oldest query gets pushed out of the cache and replaced.
I've not tried the PHPAnalytics stuff, but maybe it makes a call into the Saiku server that flushes the cache on a regular basis; otherwise this shouldn't happen.

enterprise library cached object size

I've set up the Enterprise Library caching counters using Perfmon. However, all I can see is the number of entries in the cache.
Could someone tell me if there's a way to find out the size of the cached objects, so that I can specify a correct value for the maximum number of items to be cached and removed, etc.?
Also, what does "Cache Misses" really mean? I see quite a large number of misses, although my web application is working as expected. Do I need to worry about this counter?
Enterprise Library Caching does not provide the size of the cache or the size of objects in the cache.
There are various approaches to finding the object size that you could use to try to tune the cache size. See:
Find out the size of a .net object
How to get object size in memory?
A cache miss occurs when an item is requested from the cache but the key is not found; usually you would then add the item to the cache. This is not normally alarming: a cache with no backing store is empty at first, so initially you will see cache misses, but misses should decrease as the cache is loaded (unless, of course, items expire and are removed from the cache).
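As a generic illustration of that read path (sketched here in Java rather than with the Enterprise Library API; loadFromBackingStore is a hypothetical helper): a miss simply means the value has to be loaded from the source and then added to the cache so that later lookups hit.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class CacheAsideSketch {
        private final Map<String, String> cache = new ConcurrentHashMap<>();

        // Typical read path: a miss is not an error, it just means the value
        // has to be fetched from the backing store and put into the cache.
        public String get(String key) {
            String cached = cache.get(key);
            if (cached != null) {
                return cached;                         // cache hit
            }
            String loaded = loadFromBackingStore(key); // cache miss
            cache.put(key, loaded);                    // populate for next time
            return loaded;
        }

        // Stand-in for the real data access; hypothetical for illustration.
        private String loadFromBackingStore(String key) {
            return "value-for-" + key;
        }
    }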

java.lang.OutOfMemoryError: GC overhead limit exceeded Spring Hibernate Tomcat 6

I am facing an issue in my web application, which uses Spring + Hibernate.
I am intermittently getting the error
java.lang.OutOfMemoryError: GC overhead limit exceeded
when the web application is running in Tomcat.
I took a heap dump and analyzed it using Eclipse MAT.
Here are my findings:
The object org.hibernate.impl.SessionFactoryObjectFactory holds 86% of the memory; this object's FastHashMap instance holds more than 100,000 HashMaps.
Inside every HashMap there is an instance of org.hibernate.impl.SessionFactoryImpl.
It seems org.hibernate.impl.SessionFactoryImpl is loaded several times and stored inside org.hibernate.impl.SessionFactoryObjectFactory's FastHashMap.
Can somebody help me find the root cause of this issue and suggest a solution to fix it?
Well, even if SessionFactoryObjectFactory holds 86% of the memory, that does not look like the root cause to me. Before relying on any memory analysis tool, we should first understand how the tool diagnoses out-of-memory issues.
Memory tools mostly capture the momentary spikes visible in the application while the tool is running. I am pretty sure you would get the same error logs with a different cause reported by the tool, e.g. that the Catalina web class loader is holding a major amount of memory, which is obvious and expected.
So instead of relying only on such tools (which may be right in particular cases/implementations), try to dig into your application's source code and find where unnecessary temporary objects are being created.
For debugging purposes, you can turn on the JVM option -XX:+PrintGCDetails to see exactly what the GC is collecting.
See these posts/references for more info - http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html#Options
java.lang.OutOfMemoryError: GC overhead limit exceeded
Well, your GC is spending 98% or more of processor time trying to clean up objects; that is what this error means.
The idea of the Factory pattern is to return a non-null instance of the object you wish to create, which is generally done by returning the same instance once one has been instantiated.
Now, it could be that you genuinely have 100,000 different sessions or so, but I doubt that is correct; check your code to make sure the factory method calls are being made correctly, and in particular that the returned instance is cached locally rather than rebuilt on every call (see the sketch below).
If you do indeed have 100,000 sessions, then take a good look at the methods that create them. Break long methods up so that loops and while structures are separated by method calls, so that method-local variables can be cleaned up once they go out of scope.
Also ensure that these smaller methods are not final, as the compiler will stitch final methods together into a single stack frame as an optimisation technique.
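As a rough sketch of the "return the same instance" point above (plain Hibernate 3-style bootstrap; in a Spring application you would normally configure a single session-factory bean and inject it rather than hand-roll a holder like this):

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    // Holds one SessionFactory for the whole application instead of
    // rebuilding it per request or per DAO call.
    public final class HibernateUtil {

        private static final SessionFactory SESSION_FACTORY = buildSessionFactory();

        private HibernateUtil() {
        }

        private static SessionFactory buildSessionFactory() {
            // Reads hibernate.cfg.xml from the classpath; executed exactly once
            // when this class is loaded.
            return new Configuration().configure().buildSessionFactory();
        }

        public static SessionFactory getSessionFactory() {
            return SESSION_FACTORY;
        }
    }

If instead new Configuration().configure().buildSessionFactory() is called on every request, each call creates and registers another SessionFactoryImpl, which would match the pattern seen in the heap dump.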

What's the best strategy to invalidate ORM cache?

We have our ORM pretty tightly integrated with the cache, so all of our object gets are cached. Currently we invalidate our objects both before and after insert/update/delete. What's your experience?
Why before AND after i/u/d?
If you don't want to update your cache directly, then it's enough to invalidate an object after i/u/d, assuming you load it into the cache on every cache miss. If your object space is big enough that the cache could use up too much memory, you'll also need some expiration mechanism (invalidate after X minutes, or after X minutes without being accessed).
Or you could go for LRU (least recently used) eviction, but this is not easy to implement on your own if your ORM doesn't support it natively.
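If you do want to roll your own LRU in Java, a minimal sketch (a hypothetical class, not tied to any particular ORM) can lean on LinkedHashMap's access-order mode:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Least-recently-used cache: the eldest (least recently accessed) entry is
    // dropped once the capacity is exceeded. This is only an in-process sketch;
    // a cache shared by an ORM layer would also need thread safety (for example
    // Collections.synchronizedMap) and invalidation hooks on insert/update/delete.
    public class LruCache<K, V> extends LinkedHashMap<K, V> {

        private final int maxEntries;

        public LruCache(int maxEntries) {
            super(16, 0.75f, true); // accessOrder = true -> LRU ordering
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries;
        }
    }

For example, new LruCache<Long, Object>(10000) keeps at most 10,000 entries, dropping the least recently accessed one first; you would still need to wire your insert/update/delete paths to invalidate stale entries.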
