I have written a Windows service using .NET. I am using the JetBrains dotMemory tool to investigate a memory leak.
I am getting the report below, but as a newbie I am not sure how to read it.
The System namespace shows the most survived bytes, but how do I know which code is the root cause of the memory leak?
First, you should decide which kind of memory issue you are trying to find:
Constantly growing memory consumption: get a base snapshot, get another one after memory consumption has increased, open the snapshot comparison, open the new objects created after the first snapshot, and inspect them to understand which ones should have been collected.
Ensure that some key object doesn't leak: put your app into a state in which a particular object should no longer be in memory (e.g. close some view), get a snapshot, and use a filter on the "Group by type" view to verify that the object is not present in memory.
Memory traffic: get a base snapshot if needed, run the action/algorithm in your app that you want to check, and get another snapshot. Open the "Memory Traffic" view and check whether the traffic matches your implementation, or whether more objects than you expected were allocated during the action.
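As an illustration of the first workflow, the kind of bug a snapshot comparison typically exposes is a collection that only ever grows. A minimal sketch (the class and member names are hypothetical, not from the question):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical service code: every request is recorded in a static list
// that is never trimmed, so survived bytes grow between snapshots.
static class RequestLog
{
    private static readonly List<string> _entries = new List<string>();

    public static void Record(string entry) => _entries.Add(entry);
}
```

Comparing two dotMemory snapshots here would show an ever-growing number of surviving strings, all rooted by `RequestLog._entries`, which points you straight at the offending code.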
Grab this free book for other possible memory issues.
P.S. Only you, as the app's author, can answer whether a given observation is a problem or is as designed.
You should look at the survived bytes / retained bytes, which will point you to the instance or root object that is keeping the memory alive. Whether a particular object should be retained in memory depends on your application's design and implementation.
Once you identify the root object, try to break the link to it so the .NET garbage collector can automatically collect the unwanted objects.
There are no fixed indicators for identifying memory leaks. Some tools that can help:
Using ANTS Memory Profiler
Using WinDbg or here
One common source of memory leaks is event handlers that are never de-referenced.
Example:
myClass.DoSomething += Event_DoSomething;
You need to make sure the subscription is cleared, like below:
myClass.DoSomething -= Event_DoSomething;
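A fuller sketch of the pattern, assuming a hypothetical `Publisher` class exposing a `DoSomething` event; the subscriber unsubscribes in `Dispose`, so a long-lived publisher cannot keep it alive:

```csharp
using System;

class Publisher
{
    public event EventHandler DoSomething;
    public void Raise() => DoSomething?.Invoke(this, EventArgs.Empty);
}

class Subscriber : IDisposable
{
    private readonly Publisher _myClass;

    public Subscriber(Publisher myClass)
    {
        _myClass = myClass;
        _myClass.DoSomething += Event_DoSomething;   // subscribe
    }

    private void Event_DoSomething(object sender, EventArgs e)
    {
        Console.WriteLine("handled");
    }

    public void Dispose()
    {
        // Without this line, a long-lived Publisher would keep every
        // Subscriber reachable through its event's invocation list,
        // and the GC could never collect them.
        _myClass.DoSomething -= Event_DoSomething;
    }
}
```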
Related
When running Visual Studio's memory profiler (Memory Usage Analysis) on my Windows Store app it shows me that there are many objects of type System.Signature (mscorlib.dll) on the heap.
I can't find System.Signature in the Object Browser, so I assume that it is some internal class only used and only usable by the Framework.
Does anyone have more information about this class?
When or why are System.Signature objects created and what are they used for internally?
According to the memory profiler, the number of System.Signature objects rises constantly while the app is used. This looks like a memory leak to me, but where do these objects come from, or what creates them? To answer this question, I first need to know what System.Signature actually is.
The Valgrind documentation on debugging custom memory allocators is based on an abstraction called a "pool." I'm having a little trouble figuring out how the pool is intended to be used. My initial guess is that because I have a fairly simple memory allocator (mark-and-sweep garbage collector), I can use just a single "pool." Perhaps if I had multiple entities managing different memory in different ways, I would use multiple "pools"?
I'd love any guidance on how you think the pool is intended to be used, or how you used the pool in your application.
I am a bit late here. I learned that a pool is just a reference / anchor address for the Valgrind chunks we are allocating. In my case the pool is quite dynamic (a split heap): whenever a memory heap block is allocated, I mark it as noaccess (as suggested by the documentation), and whenever a new object (a Valgrind chunk) is allocated, I call VALGRIND_MEMPOOL_ALLOC with the pool address. This gives Valgrind the ability to handle multiple pools at a time. We can also destroy a pool directly and Valgrind will automatically free the objects in it; then, when we create a new pool, Valgrind knows that the new objects don't overlap the previous ones, preventing spurious errors.
Here is my code: https://github.com/eclipse/omr/pull/1311 .
There is also a link to the documentation, which explains how I understood and used the API.
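A minimal sketch of that lifecycle using the mempool client requests from valgrind/valgrind.h and valgrind/memcheck.h (the allocator itself is stubbed out with malloc; in a real GC the pool anchor would be your heap's base address). The macros compile to no-ops when the program is not run under Valgrind:

```c
#include <stdlib.h>
#include <valgrind/valgrind.h>
#include <valgrind/memcheck.h>

int main(void)
{
    /* Anchor address for the pool; any stable address works. */
    char *heap = malloc(4096);

    /* rzB = 0 redzone bytes, is_zeroed = 0. */
    VALGRIND_CREATE_MEMPOOL(heap, 0, 0);

    /* Mark the raw heap block noaccess, as the docs suggest; only
       chunks handed out via MEMPOOL_ALLOC become addressable again. */
    VALGRIND_MAKE_MEM_NOACCESS(heap, 4096);

    /* Tell Valgrind a 128-byte object now lives at the start of the pool. */
    VALGRIND_MEMPOOL_ALLOC(heap, heap, 128);

    /* ... use the object ... */

    /* Destroying the pool releases all its chunks in one go. */
    VALGRIND_DESTROY_MEMPOOL(heap);
    free(heap);
    return 0;
}
```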
I have a website with a simple page. On the click of a button we execute an MDX query which returns around 200,000 rows with 20 columns. I use the following code to execute the MDX query using the Microsoft.AnalysisServices.AdomdClient library (version 10.0.0.0, runtime version v2.0.50727):
var connection = new AdomdConnection(connectionString);
var command = new AdomdCommand(query, connection)
{
CommandTimeout = 900
};
connection.ShowHiddenObjects = true;
connection.Open();
var cellSet = command.ExecuteCellSet();
connection.Close();
While the query is executing, the memory usage of the app pool goes very high.
This is the initial state of the memory usage on the server :
After running the query:
I am not sure why the memory usage goes so high and stays there. I used a profiler on my local box and everything looked OK.
What options do I have to figure out what is holding on to the memory?
Is there any explicit way to clear this memory?
Does the ADOMD library always consume this much memory? Do we have any alternate options to execute MDX queries using C#?
When the memory usage goes this high, IIS stops processing other requests, and applications hosted on the same IIS server (using a different app pool) are also affected: requests take longer to execute.
I've recently started at a place where we have a similar issue.
Your options to figure out what's holding memory are:
Download a memory profiler such as Redgate's ANTS profiler, which will let you see what's going on in the app pool. There's only a 2-week trial, but it will let you see what's going on initially.
Get hold of CLR Profiler; this tool can be downloaded and lets you take snapshots of the memory, so you can tell what's in the CLR heap.
One thing to be aware of is the Large Object Heap: by design the CLR does not compact the LOH, so objects placed there can lead to memory fragmentation. Objects greater than 85,000 bytes go there; one example is large lists of objects.
One thing I've tried to get around it is a specialised collection, a composite list, which is basically a list of lists: as long as each component list stays under 85,000 bytes, it remains in the normal heap and the object as a whole avoids the LOH. Others have mentioned this approach too.
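A minimal sketch of the composite-list idea, assuming we cap each inner list by element count so its backing array stays under the 85,000-byte LOH threshold (the class name and cap are illustrative, not from the answer):

```csharp
using System.Collections.Generic;

// List of lists: each inner list is kept small enough that its
// backing array stays below the 85,000-byte LOH threshold.
class CompositeList<T>
{
    private const int ChunkSize = 1000;   // illustrative cap per inner list
    private readonly List<List<T>> _chunks = new List<List<T>>();

    public void Add(T item)
    {
        if (_chunks.Count == 0 || _chunks[_chunks.Count - 1].Count == ChunkSize)
            _chunks.Add(new List<T>(ChunkSize));
        _chunks[_chunks.Count - 1].Add(item);
    }

    // Index across chunks as if this were one flat list.
    public T this[int index] => _chunks[index / ChunkSize][index % ChunkSize];

    public int Count => _chunks.Count == 0
        ? 0
        : (_chunks.Count - 1) * ChunkSize + _chunks[_chunks.Count - 1].Count;
}
```

The right cap depends on the element size: for reference types each slot is a pointer (8 bytes on x64), so around 10,000 elements per inner list would still be safe.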
That said, I'm still having issues: the composite list hasn't really solved the problem, so there are other factors at play that still need to be resolved. I'm puzzled by it, and think that taking a memory dump of the app pool and analysing it with WinDbg may provide further answers.
One further point, although I'm sure it's not the source of the problem: it's recommended to wrap your connection in a using statement, as otherwise, if an exception is thrown before your Close call, the connection may never get closed.
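Applied to the code from the question, that would look like the following sketch (same AdomdClient calls, just wrapped so Dispose runs even if an exception is thrown):

```csharp
using Microsoft.AnalysisServices.AdomdClient;

// ...
using (var connection = new AdomdConnection(connectionString))
{
    connection.ShowHiddenObjects = true;
    connection.Open();

    var command = new AdomdCommand(query, connection)
    {
        CommandTimeout = 900
    };

    var cellSet = command.ExecuteCellSet();
    // ... consume cellSet ...
} // connection is closed/disposed here even on an exception
```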
I've been asked to try to reduce memory usage in an app's code I have been given. The app runs fine in the simulator, but on the device it is terminated: when debugging, it enters a 'Paused' state and the app closes on the device.
When running Instruments I discovered leaks and fixed them; however, there is a large amount of allocation going on. Within a few seconds of launch, the Instruments allocation trace shows 1,021 KB for 'Malloc 16 Bytes'. This is essentially useless information as-is; I need to see where the memory is being allocated, but I can't seem to find anything useful. All I can get from a deeper inspection is that 'dyld', 'libsystem_c.dylib', 'libCGFreetype.A.dylib', etc. are allocating a lot, but the responsible caller is never a recognizable method from the app source.
How can I see which methods are causing the most allocations here? I need to get this usage down! Thank you.
Opening the extended detail view will show the call stack for memory allocations. Choose View > Extended Detail to open the extended detail view.
Switching to the call tree view will help you find where you are allocating memory in your code. Use the jump bar to switch to the call tree view.
1MB is no big deal. You can't do much in terms of throwing up a full view without using 1MB.
There's a good video from WWDC 2010 (http://developer.apple.com/videos/wwdc/2010/) that covers using instruments to analyze memory use. Title is Advanced Memory Analysis with Instruments. There may be an updated video from 2011.
What is this thing called the KGL SIMULATOR, and how can its memory utilisation be managed by application developers?
The background to the question is that I'm occasionally getting errors like the following and would like to get a general understanding of what is using this heap-space?
ORA-04031: unable to allocate 4032 bytes of shared memory ("shared pool","select text from view$ where...","sga heap(3,0)","kglsim heap")
I've read forum posts through Google suggesting that kglsim is related to the KGL SIMULATOR, but there is no definition of that component, nor any tips for developers.
KGL = Kernel General Library cache manager; as the name says, it deals with library objects such as cursors and cached stored object definitions (PL/SQL stored procedures, table definitions, etc.).
The KGL simulator is used for estimating the benefit of caching if the cache were larger than it currently is. The general idea is that when a library cache object is flushed out, its hash value (and a few other bits of info) are still kept in the KGL simulator hash table. This stores a history of objects which were in memory but got flushed out.
When loading a library cache object (which means no existing such object is in the library cache), Oracle checks the KGL simulator hash table to see whether an object with a matching hash value is in there. If a matching object is found, that means the required object had been in the cache in the past but was flushed out due to space pressure.
Using that information about how many library cache object (re)loads could have been avoided if the cache had been bigger (thanks to the KGL simulator history), and knowing how much time the object reloads took, Oracle can predict how much response time would have been saved instance-wide if the shared pool were bigger. This is exposed through v$library_cache_advice.
Anyway, this error was probably raised in a victim session due to running out of shared pool space. In other words, someone else may have used up all the memory (or all the large-enough chunks), and this allocation for the KGL simulator failed because of that.
v$sgastat would be the first place to look when troubleshooting ORA-4031 errors; you need to identify how much free memory you have in the shared pool (and who is using up most of the memory).
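A starting-point query against v$sgastat (requires SELECT privilege on the v$ views; the ORDER BY simply surfaces the biggest consumers, with the "free memory" row among them):

```sql
-- Free memory and top consumers in the shared pool
SELECT pool, name, bytes
  FROM v$sgastat
 WHERE pool = 'shared pool'
 ORDER BY bytes DESC;
```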
--
Tanel Poder
http://blog.tanelpoder.com
I've found that KGL stands for "Kernel Generic Library".
Your issue could be a memory leak within Oracle. You probably should open a case with Oracle support.