Understanding LeakCanary memory leak stack trace

I have a simple news app, and recently I started getting crash reports with OOM from some users (via Firebase Crashlytics). After some research I found that the cause could be a memory leak. So I installed LeakCanary, and eventually, while using the app, I can see it reporting a potential problem.
Can someone experienced help me understand what the problem is? I have attached the log.

See https://square.github.io/leakcanary/fundamentals/#how-do-i-fix-a-memory-leak
Summary:
Each node in the leak trace is a Java object and is either a class, an object array or an instance.
Going down, each node has a reference to the next node. In the UI, that reference is in purple. In the Logcat representation, the reference is on the line that starts with a down arrow.
At the top of the leak trace is a garbage-collection (GC) root. GC roots are special objects that are always reachable.
At the bottom of the leak trace is the leaking instance. This instance was passed to RefWatcher.watch() to confirm it would be garbage collected, and it ended up not being garbage collected which triggered LeakCanary.
The chain of references from the GC root to the leaking instance is what is preventing the leaking instance from being garbage collected. If you can identify the reference that should not exist at that point in time, then you can figure out why it’s incorrectly still set and then fix the memory leak.
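The chain described above can be reproduced in a few lines of plain Java (not Android; `LeakDemo`, `Screen`, and `leakedHolder` are invented names for this sketch): a static field plays the GC root, and a `WeakReference` plays the role of `RefWatcher.watch()`.

```java
import java.lang.ref.WeakReference;

public class LeakDemo {

    // A static field is reachable from a GC root, so anything it references
    // cannot be collected -- this is the top of a leak trace.
    static Object leakedHolder;

    static class Screen {
        byte[] payload = new byte[1024];
    }

    public static void main(String[] args) throws InterruptedException {
        Screen screen = new Screen();
        leakedHolder = screen;                 // the reference that "should not exist"

        // Like RefWatcher.watch(): observe the object without keeping it alive.
        WeakReference<Screen> watcher = new WeakReference<>(screen);
        screen = null;

        System.gc();
        Thread.sleep(100);
        // Still strongly reachable via the GC root, so the weak ref is intact: a leak.
        System.out.println(watcher.get() != null);

        leakedHolder = null;                   // the fix: break the chain
        System.gc();
        Thread.sleep(100);
        // Now only weakly reachable; a full GC normally clears the weak reference.
        System.out.println(watcher.get() == null);
    }
}
```

Note that `System.gc()` is only a hint to the JVM, so the second check is "normally true" rather than guaranteed; LeakCanary itself retries for the same reason.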

Related

How do I find memory leak of my application?

I have written a Windows service using .NET technologies. I am using the JetBrains dotMemory tool to understand the memory leak.
I am getting the report below, but as a newbie I am not sure how to read it.
The System namespace is showing more survived bytes. But how do I know which code is the root cause of the memory leak?
First, you should decide which kind of memory issue you are trying to find:
Constantly growing memory consumption - take a base snapshot, take another after memory consumption has increased, open the snapshot comparison, open the new objects created after the first snapshot, and look at them to understand which ones should have been collected.
Ensure that some key object doesn't leak - put your app in a state where some object should no longer be present in memory (e.g. close some view), take a snapshot, and use a filter on the "Group by type" view to check that the object is no longer in memory.
Memory traffic - take a base snapshot if needed, run the action/algorithm in your app which you want to check, and take another snapshot. Open the "Memory Traffic" view and check whether it matches your implementation or whether more objects than you expected were allocated during the action.
Grab this free book for other possible memory issues.
P.S. Only you as the app's author can answer the question of whether it is a problem or it is by design.
You should look at the survived bytes / retained bytes, which will point you to the base instance or the root object of the creation. It depends on your application's design and implementation whether the specified object should be retained in memory or not.
Once you identify the root object of the creation, try to break the link so the .NET garbage collector can automatically collect the unwanted objects.
There are no fixed flag points for identifying memory leaks.
Using ANTS Memory Profiler
Using WinDbg
One source of memory leaks is event handlers that are never de-referenced.
Example:
myClass.DoSomething += Event_DoSomething
You need to make sure the handler is detached when it is no longer needed, like below:
myClass.DoSomething -= Event_DoSomething
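The same failure mode exists on the JVM, so here is a minimal Java sketch of the idea (`Publisher`, `addListener`, and `removeListener` are invented names, standing in for the C# `+=` / `-=` event syntax above):

```java
import java.util.ArrayList;
import java.util.List;

// A publisher's listener list keeps every subscribed handler reachable
// for as long as the publisher itself is alive.
class Publisher {
    private final List<Runnable> listeners = new ArrayList<>();
    void addListener(Runnable l) { listeners.add(l); }
    void removeListener(Runnable l) { listeners.remove(l); }
    int listenerCount() { return listeners.size(); }
}

public class ListenerLeak {
    public static void main(String[] args) {
        Publisher publisher = new Publisher();
        Runnable handler = () -> System.out.println("event");

        publisher.addListener(handler);
        System.out.println(publisher.listenerCount()); // publisher now pins the handler

        // The unsubscribe step the answer recommends; forgetting it leaks the
        // handler (and anything it captures) for the publisher's lifetime.
        publisher.removeListener(handler);
        System.out.println(publisher.listenerCount());
    }
}
```

The leak is especially nasty when the publisher is long-lived (an application-scoped service) and the handler captures a short-lived object such as a view.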

Cocoa: Finding the missing reference for deallocating

I'm almost done with an app and I'm using Instruments to analyse it. I'm having a problem with ARC deallocating something, but I don't know what. I run Instruments using the Allocations tool. What I'm doing is starting the app at the main view, then I mark a heap, interact with the app a little, return to the original main view, and mark another heap.
I do this several times, and as I understand it, there should not be any significant heap growth because I am returning to the exact same place; everything I did in between should have been deallocated, producing no heap growth. However, I have significant growth, so I dive into the heaps and find that almost everything in them has a retain count of 1, which leads me to believe that one object or view, etc., is not being deallocated because of a mistake I've made, and that object is what's holding references to everything else.
What I'm trying to find out is which object is not being deallocated. Instruments is very vague and only offers obscure pointers that do not allow me to trace back the problem.
Please let me know if there is a way for me to trace what is holding a reference that may be keeping the retain count at 1.
Thanks.
My first thoughts are two things:
1) You may have a retain cycle: as an example, one object has a strong reference to a delegate, and the delegate also has a strong reference (instead of a weak reference) back to the first object. Since each of them "holds" the other, neither can be released.
2) You may have a multi-threaded app in which one of the threads does not have an autorelease pool assigned (i.e. does not have an @autoreleasepool block) and is creating autoreleased objects. This can happen even in a simple getter method that returns an autoreleased object. If so, the autoreleased object is "put" into a non-existent autorelease pool (which does not give you an error message, since you can send any message to nil), and it is never released.
Maybe one of these cases applies to your problem.

java.lang.OutOfMemoryError: GC overhead limit exceeded Spring Hibernate Tomcat 6

I am facing an issue in my web application, which uses Spring + Hibernate.
I randomly get the error
java.lang.OutOfMemoryError: GC overhead limit exceeded
when the web application is running in Tomcat.
I took a heap dump and analyzed it using Eclipse MAT.
Here are my findings:
The object org.hibernate.impl.SessionFactoryObjectFactory holds 86% of the memory; this object's FastHashMap instance holds more than 100,000 HashMaps.
Inside every HashMap there is an instance of org.hibernate.impl.SessionFactoryImpl.
It seems org.hibernate.impl.SessionFactoryImpl is loaded several times and stored inside org.hibernate.impl.SessionFactoryObjectFactory's FastHashMap.
Can somebody help me find the root cause of this issue and suggest a solution to fix it?
Well, even if you are seeing that SessionFactoryObjectFactory holds 86% of the memory, that doesn't seem like the cause to me. Before relying on any memory analysis tool, we should first understand how the tool identifies out-of-memory issues.
Memory tools just try to capture the momentary spikes shown in the application while the tool is running. I am pretty sure you would get the same error logs with a different cause named by the tool, for example that the Catalina web class loader is using a major amount of memory, which is obvious and expected.
So instead of relying only on such tools (which may be right in particular cases/implementations), try to dig into your app's source code and find where unnecessary temporary objects are being created.
For debugging purposes, you may turn on the JVM option -XX:+PrintGCDetails to view what exactly the GC is collecting.
See these posts/references for more info - http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html#Options
java.lang.OutOfMemoryError: GC overhead limit exceeded
Well, your GC thread is spending 98% or more of processor time trying to clean up objects.
The idea of the factory pattern is to return a non-null instance of the object you wish to create, which is generally done by returning the same instance once one has been instantiated.
Now, it could be that you genuinely have 100,000 different sessions or whatnot, but I doubt that is correct; hence you need to check your code to make sure the factory method calls are being made correctly, and that the factory's result is actually being reused rather than rebuilt on every call.
If you do indeed have 100,000 sessions, then take a good look at the methods which are creating them. Break long methods up so that loops and while structures are separated into method calls, so that method-local variables can be collected once they go out of scope.
(Note: whether a method is final has no bearing on this; the JIT may inline small methods either way, and its liveness analysis, not source-level scope alone, determines when locals become collectable.)
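To make the "return the same instance" idea concrete, here is a minimal Java sketch of caching a factory product behind a holder (`SessionFactoryHolder` and `buildFactory` are invented names; in real Hibernate code the build step would be the one-time `new Configuration().configure().buildSessionFactory()` call):

```java
// Hypothetical sketch: build the expensive factory product once and reuse it,
// using double-checked locking on a volatile field for thread safety.
public final class SessionFactoryHolder {

    private static volatile Object factory;

    private SessionFactoryHolder() {}

    public static Object getFactory() {
        Object local = factory;
        if (local == null) {
            synchronized (SessionFactoryHolder.class) {
                local = factory;
                if (local == null) {
                    local = buildFactory();
                    factory = local;
                }
            }
        }
        return local;
    }

    // Stand-in for the expensive, build-once construction step.
    private static Object buildFactory() {
        return new Object();
    }

    public static void main(String[] args) {
        // Every caller gets the same instance, so the FastHashMap inside
        // SessionFactoryObjectFactory would hold one entry, not 100,000.
        System.out.println(getFactory() == getFactory());
    }
}
```

If each request path were instead calling the build step directly, you would accumulate one factory instance per call, which matches the heap-dump symptom in the question.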

How do you see the specific methods responsible for memory allocation in XCode Instruments?

I've been asked to try to reduce memory usage in an app's code I have been given. The app runs fine in the simulator, but on the device it is terminated: when debugging, it enters a 'Paused' state and the app closes on the device.
When running Instruments I discovered leaks and fixed them; however, there is a large amount of allocation going on. Within a few seconds of launch, the Instruments allocation trace shows 1,021 KB for 'Malloc 16 Bytes'. This is essentially useless information as is; I need to see where the memory is being allocated, but I can't seem to find anything useful. All I can get from a deeper inspection is that 'dyld', 'libsystem_c.dylib', 'libCGFreetype.A.dylib', etc. are allocating a lot, but the responsible caller is never a recognizable method from the app source.
How can I see which methods are causing the most allocations here? I need to get this usage down! Thank you.
Opening the extended detail view will show the call stack for memory allocations. Choose View > Extended Detail to open the extended detail view.
Switching to the call tree view will help you find where you are allocating memory in your code. Use the jump bar to switch to the call tree view.
1MB is no big deal. You can't do much in terms of throwing up a full view without using 1MB.
There's a good video from WWDC 2010 (http://developer.apple.com/videos/wwdc/2010/) that covers using instruments to analyze memory use. Title is Advanced Memory Analysis with Instruments. There may be an updated video from 2011.

What is the Oracle KGL SIMULATOR?

What is this thing called a KGL SIMULATOR and how can its memory utilisation be managed by application developers?
The background to the question is that I'm occasionally getting errors like the following and would like to get a general understanding of what is using this heap-space?
ORA-04031: unable to allocate 4032 bytes of shared memory ("shared pool","select text from view$ where...","sga heap(3,0)","kglsim heap")
I've read forum posts through Google suggesting that the kglsim is related to the KGL SIMULATOR, but there is no definition of that component, or any tips for developers.
KGL = Kernel General Library cache manager; as the name says, it deals with library objects such as cursors and cached stored object definitions (PL/SQL stored procs, table definitions, etc.).
The KGL simulator is used for estimating the benefit of caching if the cache were larger than it currently is. The general idea is that when a library cache object is flushed out, its hash value (and a few other bits of info) is still kept in the KGL simulator hash table. This stores a history of objects which were in memory but have been flushed out.
When loading a library cache object (which means that no existing such object is in the library cache), Oracle checks the KGL simulator hash table to see whether an object with a matching hash value is in there. If a matching object is found, that means the required object had been in the cache in the past but was flushed out due to space pressure.
Using that information on how many library cache object (re)loads could have been avoided if the cache had been bigger (thanks to the KGL simulator history), and knowing how much time the object reloads took, Oracle can predict how much response time would have been saved instance-wide if the shared pool were bigger. This is seen in v$library_cache_advice.
Anyway, this error was probably raised in a victim session due to running out of shared pool space. In other words, someone else may have used up all the memory (or all the large-enough chunks), and this allocation for the KGL simulator failed because of that.
v$sgastat would be the first point for troubleshooting ORA-4031 errors, you need to identify how much free memory you have in shared pool (and who's using up most of the memory).
--
Tanel Poder
http://blog.tanelpoder.com
I've found that KGL stands for "Kernel Generic Library".
Your issue could be a memory leak within Oracle. You probably should open a case with Oracle support.
