Ehcache - How does Ehcache know the current memory size?

I don't see anywhere in the put function of Cache that the size of the object, or anything similar, is being computed. So how does Ehcache know how much memory is currently filled?
I know there are other functions to get that, but they need to be called explicitly.
I want to know how, at runtime, it knows the memory filled.

First of all, Ehcache will only size objects on heap if it is configured to do so. And when it is, the sizing happens inside the on-heap store and the ARC-related code, not in the Cache.put method.
So if you want to track down the code, start from net.sf.ehcache.store.MemoryStore#put and see also net.sf.ehcache.pool.impl.AbstractPoolAccessor#add(java.lang.Object, java.lang.Object, java.lang.Object, boolean).
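For illustration, byte-based sizing only kicks in when the cache is configured with a heap limit in bytes rather than an entry count. A minimal sketch, assuming the Ehcache 2.x API (the cache name is hypothetical):

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.config.MemoryUnit;

public class SizedCacheExample {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.create();
        // maxBytesLocalHeap (rather than maxEntriesLocalHeap) is what
        // makes Ehcache size each value as it is put into the store.
        CacheConfiguration config = new CacheConfiguration()
                .name("sizedCache")
                .maxBytesLocalHeap(64, MemoryUnit.MEGABYTES);
        Cache cache = new Cache(config);
        manager.addCache(cache);

        cache.put(new Element("key", "value"));
    }
}

With this configuration, stepping through a put in a debugger should land in the pool accessor code mentioned above.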

Related

How do I find the memory leak in my application?

I have written a Windows service using .NET technologies. I am using the JetBrains dotMemory tool to understand the memory leak.
I am getting the report below, but as a newbie I am not sure how to read it.
The System namespace is showing more survived bytes. But how do I know which code is the root cause of the memory leak?
First you should decide which kind of memory issue you are trying to find:
Constantly growing memory consumption - take a base snapshot, take another after memory consumption has increased, open the snapshot comparison, open the new objects created after the first snapshot, and look at them to understand which ones should have been collected.
Ensuring that some key object doesn't leak - put your app in a state where some object should no longer be present in memory (e.g. close some view), take a snapshot, and use a filter on the "Group by type" view to ensure that the object is not present in memory.
Memory traffic - take a base snapshot if needed, run the action/algorithm in your app that you want to check, and take another snapshot. Open the "Memory Traffic" view and check whether it matches what you implemented, or whether more objects than you expected were allocated during the action.
Grab this free book for other possible memory issues.
P.S. Only you, as the app author, can answer the question of whether it is a problem or works as designed.
You should look at the survived bytes / retained bytes, which will point you to the base instance or the root object of the allocation. It depends on your application's design and implementation whether the specified object should be retained in memory or not.
Once you identify the root object, you should try to break the link to it so that the .NET garbage collector can automatically collect the unwanted objects.
There are no fixed flag points for identifying memory leaks.
Using ANTS Memory Profiler
Using Windbg or here
One source of memory leaks is event handlers that are never unsubscribed.
Example:
myClass.DoSomething += Event_DoSomething;
You need to make sure the subscription is cleared when you are done, like below:
myClass.DoSomething -= Event_DoSomething;

Get the number of weak_ptr objects that point to a resource

I am trying to create a custom caching mechanism where I return a weak_ptr to the cache entry created. Internally, I hold a shared_ptr to control the lifetime of the object.
When the maximum cache preset is consumed, the disposer looks for cache objects that have not been accessed for a long time and cleans them up.
Unfortunately this may not be ideal. If it were possible to check how many cache objects are still reachable through a weak_ptr, that could be a criterion for deciding whether to clean up or not.
It turns out there is no way to check how many weak_ptrs hold a handle to the resource.
But when I look at the shared_ptr documentation and implementation notes,
=> the number of weak_ptrs that refer to the managed object
is part of the implementation (it lives in the control block). Why is this not exposed through an API?

JavaFX memory release - a JavaFX bug?

I found that when switching pages frequently in the JavaFX sample Ensemble.jar, memory gets higher and higher and is never released. This also happens in my project.
Is that a bug in JavaFX? Our testers are always complaining about this problem.
Are there good ways to solve it? What can we do about memory release in JavaFX?
To solve this problem, here is what we've done:
Set the global variables to null when we destroy the JavaFX pages.
Decreased the use of repeated big images in the .css file.
Invoked GC in Platform.runLater(). (This seems a little silly.)
But the effect is not very clear. Who can help us?
This is not a bug in JavaFX.
I guess your memory leaks come from the use of listeners on Properties.
JavaFX uses properties as an implementation of the Observer pattern. When you add a ChangeListener to a property, you actually add a reference to your listener in the property object. If you don't call the removeListener method to remove this reference, your listener won't be garbage collected as long as the property object is not garbage collected itself.
I have no idea of what your code looks like but I can make some assumptions:
Each time you switch pages, you instantiate a new controller.
Each controller instantiates a listener object and adds it to a property object.
When switching pages, the previous controller is garbage collected while the property object is not. The property object still holds a reference to the listener object, and thus the listener remains in memory.
The more you switch pages, the more listeners you instantiate that can't be garbage collected, and the bigger your memory leak gets.
If you add listeners to properties, try to call the removeListener method, as sketched below, and see if it solves the problem.
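A minimal sketch of pairing addListener with removeListener (the property and controller names here are hypothetical, not from your code):

import javafx.beans.property.SimpleStringProperty;
import javafx.beans.property.StringProperty;
import javafx.beans.value.ChangeListener;

public class PageController {
    // A long-lived property, e.g. owned by a shared model object.
    private final StringProperty title = new SimpleStringProperty();

    // Keep a field reference so the exact same listener can be removed later.
    private final ChangeListener<String> titleListener =
            (obs, oldVal, newVal) -> System.out.println("Title: " + newVal);

    public void onPageShown() {
        title.addListener(titleListener);
    }

    // Call this when the page is destroyed; otherwise the property keeps
    // the listener (and everything it references) reachable.
    public void onPageClosed() {
        title.removeListener(titleListener);
    }
}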
Regards,
Cyril

java.lang.OutOfMemoryError: GC overhead limit exceeded Spring Hibernate Tomcat 6

I am facing an issue in my web application, which uses Spring + Hibernate.
I am randomly getting the error
java.lang.OutOfMemoryError: GC overhead limit exceeded
when the web application is running in Tomcat.
I took a heap dump and analyzed it using Eclipse MAT.
Here are my findings:
The object org.hibernate.impl.SessionFactoryObjectFactory holds 86% of the memory; this object's FastHashMap instance holds more than 100,000 hash maps.
Inside every hash map there is an instance of org.hibernate.impl.SessionFactoryImpl.
It seems org.hibernate.impl.SessionFactoryImpl is loaded several times and stored inside org.hibernate.impl.SessionFactoryObjectFactory's FastHashMap.
Can somebody help me find the root cause of this issue and suggest a solution?
Well, even if SessionFactoryObjectFactory holds 86% of the memory, that doesn't seem like the cause to me. Before relying on any memory analysis tool, we should first understand how the tool detects out-of-memory issues.
Memory tools just try to capture the momentary spikes shown in the application while the tool runs. I am pretty sure you would get the same error logs but with different causes reported by the tool, e.g. that the Catalina web class loader is holding a major amount of memory, which is obvious and expected.
So instead of relying only on such tools (which may be right in particular cases/implementations), try to dig into your app's source code and find where unnecessary temporary objects are being created.
For debugging purposes, you may turn on the JVM option -XX:+PrintGCDetails to view what exactly the GC is collecting.
See these posts/references for more info - http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html#Options
java.lang.OutOfMemoryError: GC overhead limit exceeded
Well, your GC thread is spending 98% or more of processor time trying to clean up objects.
The idea of the Factory pattern is to return a non-null instance of the object you wish to create, which is generally done by returning the same instance once one has been instantiated.
Now, it could be that you really have 100,000 different sessions or whatnot, but I doubt that is correct; hence you need to check your code to make sure the factory method calls are being done correctly, and that a single instance is being cached rather than a new one built each time.
If you do indeed have 100,000 sessions, then take a good look at the methods which are creating them. Break long methods up so that loops and while structures are separated by method calls, so that method-local variables can be cleaned up once out of scope.
Also ensure that these smaller methods are not final as the compiler will stitch final methods together into a single stack frame as an optimisation technique.
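As an illustration of caching the factory result, here is a minimal sketch of building the Hibernate SessionFactory exactly once and sharing it (the holder class is hypothetical; the API shown is Hibernate 3.x):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public final class HibernateUtil {
    // Built once when the class is loaded; every caller shares this instance.
    private static final SessionFactory SESSION_FACTORY =
            new Configuration().configure().buildSessionFactory();

    private HibernateUtil() {
    }

    public static SessionFactory getSessionFactory() {
        // Never call buildSessionFactory() per request; doing so is what
        // fills SessionFactoryObjectFactory's map with thousands of
        // SessionFactoryImpl instances.
        return SESSION_FACTORY;
    }
}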

Usage of RemoteCache with the DeltaAware and Delta interfaces in Infinispan

I need some guidance on the following scenario in Infinispan. Here is my scenario:
1) I created two nodes and started them successfully in Infinispan using client-server mode.
2) In the Hot Rod client I created a RemoteCacheManager and then obtained a RemoteCache.
3) In the remote cache I put a value like this: cache.put(key, new HashMap()); it is successfully added.
4) Now when I go to clear this value using cache.remove(key), I see that it is not getting removed, and the hash map is still there every time I try to remove it.
How can I clear the value so that it is cleared from all nodes of the cluster?
How can I also propagate changes to the value, like adding to or removing from the HashMap above?
Does this have anything to do with implementing the DeltaAware and Delta interfaces?
Please suggest how to approach this, or some pointers where I can learn more.
Thank you
Removal of the HashMap should work as long as you use the same key and have equals() and hashCode() correctly implemented on the key. I assume you're using distributed or replicated mode.
EDIT: I've realized that equals() and hashCode() are not that important for RemoteCache, since the key is serialized anyway and all the comparison will be executed on the underlying byte[].
RemoteCache does not directly support DeltaAware. Generally, these are quite tricky to use even in library mode.
If you want to use the cache with maps, I suggest using a composite key like cache-key#map-key rather than storing a complex HashMap, as sketched below.
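A minimal sketch of the composite-key idea with the Hot Rod client (the server address and key names are hypothetical):

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class CompositeKeyExample {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("127.0.0.1").port(11222);
        RemoteCacheManager manager = new RemoteCacheManager(builder.build());
        RemoteCache<String, String> cache = manager.getCache();

        // Instead of cache.put("user:42", someHashMap), store each logical
        // map entry under its own composite key so it can be added, updated
        // or removed individually, and the change propagates cluster-wide.
        cache.put("user:42#email", "foo@example.com");
        cache.put("user:42#name", "Foo");

        // Removing one logical map entry is now a plain remove.
        cache.remove("user:42#email");

        manager.stop();
    }
}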
