WinForms application gets slow - visual-studio-2010

I have built an application using Visual Studio .NET and it works fine. After the application has been in use for more than 2-3 hours it starts to get slow and I don't know why. I have used GC.Collect(); to work around memory leak problems, but now I have this new one.
Does anyone know a solution?

If you really have a memory leak, just calling GC.Collect() will get you nowhere. The garbage collector can only collect objects that are no longer referenced by anything else.
If you do not clean up your objects properly, the GC will not collect anything.
When dealing with memory consumption, you should strongly consider the following patterns:
1. Weak events (MSDN documentation here)
If you do not unsubscribe from events, the event source keeps a reference to the subscriber, so the subscriber will never be garbage collected. GC.Collect() will NOT remove those objects and they will clutter your memory.
2. Implement the IDisposable interface (MSDN documentation here)
(I strongly suggest reading this documentation, as I have seen lots of wrong implementations.)
You should always free resources that you used. Call Dispose() on every object that offers it!
3. Close your streams
The same applies to streams. Always call Close() on every object that offers it.
4. To make points 2 and 3 easier you can use using blocks (MSDN documentation here).
As soon as these code blocks go out of scope they automatically call the appropriate Dispose() or Close() method on the given object. This is the same as a try...finally combination, but more convenient.
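To illustrate the event-unsubscription and using-block points, here is a minimal sketch. The DataService class, the event name and the file name are invented for the example; the point is only the subscribe/unsubscribe pairing and the using block.

using System;
using System.IO;
using System.Windows.Forms;

// Hypothetical long-lived publisher; any static or singleton event source behaves this way.
public static class DataService
{
    public static event EventHandler DataArrived;

    public static void RaiseDataArrived()
    {
        var handler = DataArrived;
        if (handler != null) handler(null, EventArgs.Empty);
    }
}

public class MainForm : Form
{
    public MainForm()
    {
        DataService.DataArrived += OnDataArrived;
    }

    protected override void OnFormClosed(FormClosedEventArgs e)
    {
        // Without this line the static event keeps the form reachable forever,
        // and no amount of GC.Collect() will reclaim it.
        DataService.DataArrived -= OnDataArrived;
        base.OnFormClosed(e);
    }

    private void OnDataArrived(object sender, EventArgs e)
    {
        // The using block calls Dispose() (which also closes the stream)
        // even if an exception is thrown - equivalent to a try...finally.
        using (var stream = new FileStream("data.bin", FileMode.Open))
        {
            // ... read from the stream ...
        }
    }
}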

Try a memory profiler, such as the ANTS Memory Profiler. First you need to understand what's going on, then you can think about how to fix it.
http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/

Related

Source of Node<Object> in the Visual Studio memory snapshot

I am doing memory profiling for my application using the Visual Studio diagnostic tools. I find that Node takes up a lot of memory (based on Inclusive Size Diff. (bytes); see #1 below). When I click on the first instance of Node and look at 'Referenced Objects', I see that Node references other Nodes, and I see something like 'Overlapped data' in the attributes.
How can I find out what is creating these Nodes, given that they come from mscorlib.ni.dll?
The weapon of choice when you are rooting through these .NET Framework objects is a good decompiler. I use Reflector, there are others.
You get an opaque Node<T> object back. Just type it into the Search box and out pop but a few types that use it. Most are in the System.Collections.Concurrent namespace. Well, look no further, the profiler already told you about that one. Clearly it is the ConcurrentStack<T> class in the System.Collections.Concurrent namespace that's storing the Nodes.
Your profiler told you there is just one ConcurrentStack<> object that owns these Nodes. Well, good, that narrows it down to a single object. It just happens to have 208 elements. Hmm, that's not that much, is it?
That's not where you have to stop. ConcurrentStack<> is a pretty obscure class; hardly anybody ever uses it directly in their own code. Keep using the decompiler and let it search for usages of that class.
Ah, nice, that's a very short list as well. You see System.Data.ProviderBase show up several times; hmm, this question probably isn't related to querying databases. The only other set of references is System.PinnableBufferCache.
"Pinnable buffers", whoa, that's a match. Pinning a buffer is important when you ask native code to get a job done to fill a managed array. With BeginRead(), the universal asynchronous I/O call. The driver needs a stable reference to the array while it is working on the async I/O request. Getting a stable buffer requires pinning in .NET. And big bingo in the profiler data, you see OverlappedData, the core data-structure in Windows to do async I/O.
Long story short, you found this guy's project. Programmers rarely notice it.
Knowing when to stop profiling is very important. You cannot change code written by another programmer. And nobody at Microsoft thinks that guy did anything wrong. He didn't; caches are Good Things.
You are most definitely done. Congratulations.

EF5 (Entity Framework) memory leak; doesn't release memory after dispose

So I'm using Web API to expose data services. Initially I created my DbContext as a static member, and each time I open up my project under IISExpress, the memory balloons to over 100MB. I understand that using a static context isn't recommended, per the accepted answer here:
Entity framework context as static
So I went ahead and converted my application to using regular non-static dbcontext and included a dispose method on my api:
protected override void Dispose(bool disposing)
{
    // Dispose the EF context wrapped by Breeze's EFContextProvider once per request.
    if (provider.Context != null)
    {
        provider.Context.Dispose();
        provider = null;
    }
    base.Dispose(disposing);
}
Now every time I make a call, it goes through this method and disposes. But when I open the application it still balloons to 100MB, and each time I make a call I watch the memory of my IISExpress process: it keeps going up, never comes back down after the dispose, and ends up increasing to 200MB+.
So static or not, memory explodes whenever I use it.
Initially I thought it was my Web API that was causing it, until I removed all my services and just created the EF object in my API (I'm using BreezeJS, so this code is trivial; the actual implementation is further down, but it makes no difference to memory consumption):
private DistributorLocationEntities context = new DistributorLocationEntities();
And bam, 110MB immediately.
Are there any helpful tips or tweaks for releasing memory after I use it? Should I add a garbage collect to my Dispose()? Are there any pitfalls to allocating and deallocating memory rapidly like that? For example, I make a call to the service on every keystroke to implement an "autocomplete" feature.
I'm also not certain what will happen once I put this in production and we have dozens of users accessing the db; I wouldn't want memory to climb to 1 or 2GB and never be released.
Side note: all my data services are searches for now, so there are no saves or updates, though there may be later on. Also, I don't return any LINQ queries as an array or enumerable; they remain queryables throughout the service call.
One more thing, I do use breezejs, so I wrap up my context as such:
readonly EFContextProvider<DistributorLocationEntities> provider = new EFContextProvider<DistributorLocationEntities>();
and the tidbits that go along with this:
Doc for Breeze's EFContextProvider
ProxyCreationEnabled = false
LazyLoadingEnabled = false
IDisposable is not implemented
but I still dispose the context anyway, which makes no difference.
I don't know what you're doing. I do know that you should not have any static resources of any kind in your Web API controllers (breeze-flavored or not).
I strongly suspect you've violated that rule.
Adding a Dispose method makes no difference if the object is never disposed ... which it won't be if it is held in a static variable.
I do not believe that Breeze has any role in your problem whatsoever. You've already shown that it doesn't.
I suggest you start from a clean slate: forget Breeze for now and get a simple Web API controller working that creates a DbContext per request. When you've figured that out, proceed to add some Breeze.
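As a rough sketch of the "DbContext per request" shape (the context name comes from the question; the controller, entity set and action are invented):

using System.Linq;
using System.Web.Http;

public class DistributorsController : ApiController
{
    // Web API creates a new controller instance per request,
    // so this context lives only as long as one request.
    private readonly DistributorLocationEntities _context = new DistributorLocationEntities();

    [HttpGet]
    public IQueryable<DistributorLocation> Search(string term)
    {
        // Hypothetical entity set and property - adjust to the real model.
        return _context.DistributorLocations.Where(d => d.Name.StartsWith(term));
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            _context.Dispose();
        }
        base.Dispose(disposing);
    }
}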
As mentioned in Ward's comment, statics are a big no-no, so I spent time on moving my EF objects out of static. Dispose method didn't really help either.
I gave this article a good read:
http://msdn.microsoft.com/en-us/data/hh949853.aspx
There are quite a few performance options EF provides that don't come enabled out of the box. So here are a few things I've done:
Added pre-generated views to EF: T4 templates for generating views for EF4/EF5. The nice thing about this is that it abstracts away from the DB and pre-generates the views to decrease model load time.
Next, I read this post on Contains in EF: Why does the Contains() operator degrade Entity Framework's performance so dramatically?. I saw an attractive answer suggesting I convert my IEnumerable.Contains into a HashSet.Contains (sketched after this list). This boosted my performance considerably.
Finally, reading the Microsoft article, I realized there is an AsNoTracking() you can apply to a query, which turns off change tracking for the entities that query returns. So you can do something like this
var query = from t in db.Context.Table1.AsNoTracking() select new { ... };
Something I didn't have to worry about was compiling queries in EF5, since it does it for you automatically, so you don't have to add CompileQuery.Compile(). Also if you're using EF 6 alpha 2, you don't need to worry about Contains or pre-generating views, since this is fixed in that version.
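For reference, the Contains change mentioned above amounts to something like this (reusing the db.Context.Table1 shape from the snippet; the Id property and the GetWantedIds() helper are invented):

// Before: Contains() over a plain IEnumerable<int>.
IEnumerable<int> wantedIds = GetWantedIds();
var slow = db.Context.Table1.Where(t => wantedIds.Contains(t.Id));

// After: the same ids wrapped in a HashSet<int> before they hit the query.
var idSet = new HashSet<int>(GetWantedIds());
var fast = db.Context.Table1.Where(t => idSet.Contains(t.Id));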
So when I start up EF this is a "cold" query execution and memory goes high, but after recycling IIS memory is cut in half and EF uses "warm" query execution. So that explains a lot!

What kind of memory leaks XCode Analyzer may not notice?

I'm afraid that asking this question may result in some downvotes, but after some unsatisfying research I decided to take the risk and ask more experienced people...
There are many questions here referring to specific problems connected with the XCode Analyzer tool. It seems to be a very helpful tool. But I would like to ask you - as a beginner in the iOS world - what kind of memory management issues cannot be noticed by this tool.
In other words, are there any common memory management aspects where iOS beginners should think "Oh, be careful with that, because in this case the XCode Analyzer may not warn you about your mistake"...
For instance, I've found here Why cannot XCode static analyzer detect un-released retained properties? that:
(...)the analyzer can't reliably detect retain/release issues across
method/library boundaries(...)
It sounds like a good hint to consider, but maybe you know about some other common issues...
The analyzer is very good at finding the routine leaks that plague new programmers writing non-ARC code (failures to call release, returning objects of the wrong retain count, etc.).
In my experience, there are a couple of types of memory issues it does not find:
It cannot generally identify strong reference cycles (a.k.a. retain cycles). For example, you add a repeating NSTimer to a view controller, unaware that the timer maintains a strong reference to the view controller, and if you don't invalidate the timer (or do it in the wrong place, such as the dealloc method), neither the view controller nor the timer will get released.
It cannot find circular logic errors. For example, if you have some circular references where view controller A presents view controller B, which in turn presents a new copy of A (rather than dismissing/popping to get back to A).
It cannot find many non-reference counting memory issues. While it's getting better in dealing with Core Foundation functions, if you have code that is doing manual memory allocations (such as via malloc and free), the static analyzer may be of limited use. The same is true whenever you're using non-reference counting code (e.g. you use SQLite sqlite3_prepare_v2 and fail to call sqlite3_finalize).
I'm sure that's not a complete list of what it doesn't find, but those are the common issues I see asked about on Stack Overflow for which the static analyzer will be of limited help. But the analyzer is still a wonderful tool (it finds issues other than memory issues, too) and for those individuals not using ARC, it's invaluable.
Having said that, while the static analyzer is an under-appreciated first line of defense, you really should use Instruments to find leaks. See Locating Memory Issues in Your App in the Instruments User Guide. That's the best way to identify leaks.

Helping the GC in mono droid using mvvmCross

I am working with MonoDroid, using the MvvmCross framework provided by slodge. However, I am having some memory issues. I am disposing bitmaps in the activities' OnDestroy methods, and I am wondering if it is possible to help the GC collect unused viewmodel objects. If you try setting the viewmodel in the activity to null it all goes to hell, so that is clearly not the right way to go.
Do you guys have any suggestions to an approach?
Regards
The mvx framework tries to ensure that the activity owns the viewmodel.
So in theory, after your activity has been destroyed, the GC should be able to collect all of your C# objects - the activity, the views it owns, the viewmodel and the objects it owns.
Where I've seen this go wrong is where some 'global' or singleton object owns a reference to a view or viewmodel object. For example:
if a view registers itself with a singleton - eg an http image loader - and then that singleton keeps a reference to the view, preventing it from being garbage collected.
if a viewmodel subscribes to an event on a central service (often a singleton) and doesn't unsubscribe from it - then in this situation, the viewmodel can't be garbage collected (and often this also prevents other objects being collected too)
Generally both these types of errors can be solved by performing cleanup actions on activity destroy. However, other approaches are also available - eg for event subscriptions you can try using weak references (this is an approach taken on other platforms too - eg mvvm light's messenger)
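As a rough illustration of the event-subscription case (the service, event and viewmodel names are invented), the cleanup-on-destroy approach can look like this:

using System;

// Hypothetical singleton service that viewmodels subscribe to.
public class HttpImageLoader
{
    public static readonly HttpImageLoader Instance = new HttpImageLoader();

    public event EventHandler ImageLoaded;

    public void RaiseImageLoaded()
    {
        var handler = ImageLoaded;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

public class DetailViewModel
{
    public DetailViewModel()
    {
        HttpImageLoader.Instance.ImageLoaded += OnImageLoaded;
    }

    private void OnImageLoaded(object sender, EventArgs e)
    {
        // update bound properties here...
    }

    // Call this from the activity's OnDestroy; without it the singleton keeps
    // this viewmodel (and everything it references) alive.
    public void Cleanup()
    {
        HttpImageLoader.Instance.ImageLoaded -= OnImageLoaded;
    }
}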
From experience, the areas where leaks are most noticeable are around 'big objects' like images - their size helps them become noticeable. However, the real challenge on monodroid is identifying where the leaks are - fixing them is generally comparatively easy.
Sadly, there isn't currently a memory profiler available for droid. If you are cross-compiling to wp7, then certainly for viewmodel objects/leaks you can use its memory profiler. If not, then the way I generally try to solve memory leaks is to amplify them - try writing a sample that rapidly reproduces them - eg by adding large byte[] members to data elements or by rapidly repeating actions. Once you have the leak easily reproduced, then you can try to find the leaks by placing trace statements in finalizers, in event remove handlers, etc.

What can I access from a BackgroundWorker without "Cross Threading"?

I realise that I can't access Form controls from the DoWork event handler of a BackgroundWorker. (And if I try to, I get an Exception, as expected).
However, am I allowed to access other (custom) objects that exist on my Form?
For instance, I've created a "Settings" class and instantiated it in my Form and I seem to be able to read and write to its properties.
Is it just luck that this works?
What if I had a static class? Would I be able to access that safely?
@Engram:
You've got the gist of it - the cross-thread call check is just a nice feature MS put into the .NET Framework to prevent the "bonehead" type of parallel programming mistakes. It can be overridden, as I'm guessing you've already found out, by setting the static Control.CheckForIllegalCrossThreadCalls property to false (note that it is set on the Control class, not on an instance such as lblMyLabel).
But more importantly, you're right about the need to use some kind of locking mechanism. Whenever you have multiple threads of execution (be it threads, processes or whatever), you need to make sure that while one thread is reading or writing a variable, no other thread barges in and changes that value under the first thread's feet.
The .NET Framework actually provides several other mechanisms which might be more useful, depending on circumstances, than locking in code. The first is to use a Monitor class, which has the effect of locking a particular object. When you use this, other threads can continue to execute, as long as they don't try to lock that same object. Another very useful and common parallel-programming idea is the Mutex (or Semaphore). The Mutex is basically like a game of Capture the Flag between your threads. If one thread grabs the flag, no other threads can grab it until the first thread drops it. (A Semaphore is just like a Mutex, except that there can be more than one flag in a game.)
Obviously, none of these concepts will work in every particular problem - but having a few more tools to help you out might come in handy some day :)
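A minimal sketch of the lock idea applied to a shared settings object (the class and member names are invented):

using System.Threading;

public class AppSettings
{
    private readonly object _sync = new object();
    private string _searchTerm;

    // Both the UI thread and the worker thread go through the same lock,
    // so they never read or write _searchTerm at the same time.
    public string SearchTerm
    {
        get { lock (_sync) { return _searchTerm; } }
        set { lock (_sync) { _searchTerm = value; } }
    }

    // The same thing written with Monitor explicitly - lock() is shorthand for this.
    public void UpdateSearchTerm(string value)
    {
        Monitor.Enter(_sync);
        try
        {
            _searchTerm = value;
        }
        finally
        {
            Monitor.Exit(_sync);
        }
    }
}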
You should communicate to the user interface through the ProgressChanged and RunWorkerCompleted events (and never the DoWork() method as you have noted).
In principle, you could check InvokeRequired and call Invoke yourself, but the designers of the BackgroundWorker class created the ProgressChanged callback event for the purpose of updating UI elements.
[Note: BackgroundWorker events are not marshaled across AppDomain boundaries. Do not use a BackgroundWorker component to perform multithreaded operations in more than one AppDomain.]
MSDN Ref.
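A short, self-contained sketch of that pattern (the controls and timings are invented):

using System;
using System.ComponentModel;
using System.Threading;
using System.Windows.Forms;

public class WorkerForm : Form
{
    private readonly ProgressBar _progress = new ProgressBar { Dock = DockStyle.Top };
    private readonly Label _status = new Label { Dock = DockStyle.Bottom, Text = "Working..." };
    private readonly BackgroundWorker _worker = new BackgroundWorker { WorkerReportsProgress = true };

    public WorkerForm()
    {
        Controls.Add(_progress);
        Controls.Add(_status);

        _worker.DoWork += (s, e) =>
        {
            // Runs on a thread-pool thread: do NOT touch controls here.
            for (int i = 0; i <= 100; i += 10)
            {
                Thread.Sleep(100);            // simulate work
                _worker.ReportProgress(i);    // marshalled back to the UI thread
            }
        };
        _worker.ProgressChanged += (s, e) =>
        {
            _progress.Value = e.ProgressPercentage;   // raised on the UI thread, safe
        };
        _worker.RunWorkerCompleted += (s, e) =>
        {
            _status.Text = "Done";                    // raised on the UI thread, safe
        };

        Load += (s, e) => _worker.RunWorkerAsync();
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new WorkerForm());
    }
}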
Ok, I've done some more research on this and I think have an answer. (Let the votes decide if I'm right!)
The answer is... you can access any custom object that's in scope; however, your access will not be thread-safe.
To ensure that it is thread-safe you should probably be using lock. The lock keyword prevents more than one thread from executing a particular piece of code at a time. (Subject to actually using it properly!)
The Cross Threading Exception that occurs when you try to access a Control is a safety mechanism designed especially for Controls. (It's easier, and probably more efficient, to get the developer to make thread-safe calls than it is to design the controls themselves to be thread-safe.)
You can't access controls that were created on one thread from another thread.
You can either use the Settings class that you mentioned, or use the InvokeRequired property and Invoke method of the control.
I suggest you look at the examples on those pages:
http://msdn.microsoft.com/en-us/library/ms171728.aspx
http://msdn.microsoft.com/en-us/library/system.windows.forms.control.invokerequired.aspx
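For completeness, the InvokeRequired/Invoke pattern from those examples looks roughly like this (the form and label are invented):

using System;
using System.Windows.Forms;

public class StatusForm : Form
{
    private readonly Label _lblStatus = new Label { Dock = DockStyle.Fill };

    public StatusForm()
    {
        Controls.Add(_lblStatus);
    }

    // Safe to call from any thread; marshals the update onto the UI thread when needed.
    public void SetStatusText(string text)
    {
        if (_lblStatus.InvokeRequired)
        {
            _lblStatus.Invoke(new Action<string>(SetStatusText), text);
        }
        else
        {
            _lblStatus.Text = text;
        }
    }
}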
