We are upgrading from VC8 to VC10 and have found a number of memory leaks that seem to be CDialog-related. The simplest example is demonstrated with the following code, using a CDialog that just has a number of buttons. In VC10 this leaks, but in VC8 it doesn't:
for (int i = 0; i < 5000; ++i) {
    CDialog* dialog = new CDialog;
    dialog->Create(IDD_LEAKER, 0);
    dialog->DestroyWindow();
    delete dialog;
}
Memory usage keeps rising, and the example dialog we have with about 30 buttons leaks tens of MB.
Note that the above is a test example from which we have stripped out all of our dialog-handling code; in our real code we have a derived class and use PostNcDestroy().
Oddly, neither of the following code examples leaks in either VC8 or VC10:
CDialog* dialog = new CDialog;
for (int i = 0; i < 5000; ++i) {
    dialog->Create(IDD_LEAKER, 0);
    dialog->DestroyWindow();
}
delete dialog;
for (int i = 0; i < 5000; ++i) {
    CDialog* dialog = new CDialog;
    delete dialog;
}
What are we missing here?
This appears to be down to the way that MFC manages its handle maps:
What is the lifetime of a CWnd obtained from CWnd::FromHandle?
If you wait long enough for your application to become idle, you do get your memory back, i.e. it's not really a leak. However, as you have observed, Visual C++ 2010 continues to consume more and more memory until the maps are tidied up in OnIdle(), whereas this build-up doesn't appear to happen in Visual C++ 2008.
Debugging an application containing your code does show that there are a lot more objects in the HWND temporary map in the VC10 version than there are in the VC9 version.
The handle map code (winhand.cpp) doesn't appear to have changed between the two versions, but there's lots of code in MFC that uses it!
Anyway, assuming that you really want to run your program like this (I guess you're running in some kind of automated mode?), you'll want to force this tidy-up yourself at appropriate intervals. Have a look at this entry on MSDN:
http://msdn.microsoft.com/en-us/library/xt4cxa4e(v=VS.100).aspx
CWinThread::OnIdle() actually calls this to tidy things up:
AfxLockTempMaps();
AfxUnlockTempMaps(/*TRUE*/);
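If your loop never gives the thread a chance to go idle, you can make the same pair of calls yourself at intervals. A minimal sketch, reusing the IDD_LEAKER loop from the question (the interval of 100 iterations is an arbitrary choice):

for (int i = 0; i < 5000; ++i) {
    CDialog* dialog = new CDialog;
    dialog->Create(IDD_LEAKER, 0);
    dialog->DestroyWindow();
    delete dialog;

    // Do periodically what OnIdle() would do: unlocking the temporary
    // handle maps deletes them, so they can no longer pile up.
    if (i % 100 == 99) {
        AfxLockTempMaps();
        AfxUnlockTempMaps(); // bDeleteTemps defaults to TRUE
    }
}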
I've written a simple C/Objective-C program with a leak, and I cannot understand the Leaks instrument. Here it is:
int main(void)
{
    int t = 78;
    t = malloc(50);
    t = 4;
    return 0;
}
Can it show me which variable is the leak, or where it leaks? Every Leaks tutorial on the internet (all two of them) is bad.
Please help?
If you are testing the Leaks instrument with the code you provided, it is no wonder that it can't uncover any problems.
Leaks has a default snapshot interval of 10 seconds, but your program won't even run for 10 seconds.
You are allocating in the scope of the application's entry point. t is valid (when not freed) until main exits, so the OS would reclaim the memory anyway.
And foremost: your code does not contain a leak. It would be a leak if you lost the reference to t (e.g. by doing another t = malloc() or assigning some other value to t).
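To illustrate the distinction in plain C, a hedged sketch of what a genuine leak looks like (the sizes are arbitrary):

char *p = (char *)malloc(50); /* first block */
p = (char *)malloc(20);       /* only pointer to the first block overwritten: that block is leaked */
free(p);                      /* frees the second block only */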
If you want to see Leaks in action, create a default Cocoa Application, add an instance variable "test" to your AppDelegate and put the following code into the implementation.
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    test = malloc(50); // this allocation is leaked by the next line
    test = malloc(20);
}
I've not used Leaks, but there are plenty of tutorials on the net, starting with Apple's developer documentation on the subject; there are also tutorials from Mobile Orchard and Cocoa is my Girlfriend, the last of which seems to be the best.
In my app I'm trying to process a YUV422 image from a webcam, but I'm getting a huge memory leak. Below you can see a sample of simplified code from my app. If I disable the "*(m1..." line in the function, there is no leak (but the images are not being processed). I've tried locks, pools, etc., but nothing changed. I'm relatively new to Cocoa, so all these square brackets look funny/scary to me ;-)
Is it a problem with using "char"? In my old Linux/C++ app there was no problem, but I was using "unsigned char*", no threads, and I had never checked for leaks...
global:
...
char m[640*480];
"main":
...
[NSThread detachNewThreadSelector:@selector(processOutputBuffer) toTarget:self withObject:nil];
...
function1:
- (void)processOutputBuffer {
    [NSThread setThreadPriority:0.4];
    [lock lock];
    ...
    CVPixelBufferLockBaseAddress(outputBuffer, 0); // lock before taking the base address
    Ptr outputBufferBaseAddress = (Ptr)CVPixelBufferGetBaseAddress(outputBuffer);
    [self yuv422_to_y8uv8:outputBufferBaseAddress m1:m];
    ...
}
function2:
- (void)yuv422_to_y8uv8:(char *)image m1:(char *)m1 {
    int x, y;
    for (y = 0; y < 480; y++)
        for (x = 0; x < 640; x++)
        {
            *(m1 + (640 * y) + (x)) = *(image + (640*2 * y) + (x*2) + 1);
        }
}
If I disable the "*(m1..." line in the function, there is no leak.
Untrue. That's just an assignment. Either there is no leak anyway, or that is not the cause of the leak.
You can use Instruments to look for objects (both plain memory allocations and Cocoa objects) that get leaked, and to diagnose the leaks.
Is it a problem with using "char"?
No. Types do not cause leaks. Incorrect memory management causes leaks.
In my old Linux/C++ app there was no problem, but I was using "unsigned char*", no threads, and I had never checked for leaks...
It's possible that you introduced the leak when adding threads, or when adding Cocoa code. It's equally possible that the leak was always there and you never saw it before. Only when you find the problem with Instruments or another tool will you know for sure.
You might also try running the Clang Static Analyzer. It can detect some code patterns that cause leaks, among other things.
A lame answer, I know, but in case no better answer comes along: you can always put the function in a separate file and compile it as plain C.
It might even be interesting to see whether that solves the leak. If not, then the problem lies somewhere else.
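For what it's worth, a minimal sketch of that suggestion, with the conversion moved into its own plain-C translation unit (the file name yuv_convert.c is just illustrative):

/* yuv_convert.c - the same conversion as the Objective-C method above */
void yuv422_to_y8uv8(const char *image, char *m1)
{
    int x, y;
    for (y = 0; y < 480; y++)
        for (x = 0; x < 640; x++)
            m1[640 * y + x] = image[640 * 2 * y + x * 2 + 1];
}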
It seems that using Critical Sections quite a bit in Vista/Windows Server 2008 leads to the OS not fully regaining the memory.
We found this problem with a Delphi application, and it is clearly caused by using the CS API (see this SO question).
Has anyone else seen it with applications developed with other languages (C++, ...)?
The sample code was just initializing 10,000,000 critical sections, then deleting them. This works fine on XP/Win2003 but does not release all the peak memory on Vista/Win2008 until the application has ended.
The more you use critical sections, the more memory your application retains for nothing.
Microsoft have indeed changed the way InitializeCriticalSection works on Vista, Windows Server 2008, and probably also Windows 7.
They added a "feature" that retains some memory used for debug information when you allocate a bunch of critical sections. The more you allocate, the more memory is retained. It might be asymptotic and eventually flatten out (though I'm not fully convinced of that).
To avoid this "feature", you have to use the new API InitializeCriticalSectionEx and pass the flag CRITICAL_SECTION_NO_DEBUG_INFO.
The advantage is that it might be faster since, very often, only the spin count will be used without having to actually wait.
The disadvantages are that your old applications can be incompatible, you need to change your code, and it is now platform-dependent (you have to check the OS version to determine which API to use). You also lose the ability to debug your critical sections if you need to.
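A minimal sketch of that platform check, assuming you resolve the new API at run time so the same binary still loads on XP/Win2003 (the wrapper name and the spin count of 0 are illustrative):

#include <windows.h>

#ifndef CRITICAL_SECTION_NO_DEBUG_INFO
#define CRITICAL_SECTION_NO_DEBUG_INFO 0x01000000
#endif

typedef BOOL (WINAPI *InitCSExFn)(LPCRITICAL_SECTION, DWORD, DWORD);

void InitCSWithoutDebugInfo(CRITICAL_SECTION *cs)
{
    // InitializeCriticalSectionEx only exists on Vista/Win2008 and later,
    // so look it up dynamically instead of linking to it directly.
    InitCSExFn pInitEx = (InitCSExFn)GetProcAddress(
        GetModuleHandleW(L"kernel32.dll"), "InitializeCriticalSectionEx");
    if (pInitEx)
        pInitEx(cs, 0, CRITICAL_SECTION_NO_DEBUG_INFO); // no debug info retained
    else
        InitializeCriticalSection(cs); // XP/Win2003: no retention problem
}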
Test kit to freeze a Windows Server 2008:
- build this C++ example as CSTest.exe
#include "stdafx.h"
#include "windows.h"
#include <iostream>
using namespace std;
void TestCriticalSections()
{
const unsigned int CS_MAX = 5000000;
CRITICAL_SECTION* csArray = new CRITICAL_SECTION[CS_MAX];
for (unsigned int i = 0; i < CS_MAX; ++i)
InitializeCriticalSection(&csArray[i]);
for (unsigned int i = 0; i < CS_MAX; ++i)
EnterCriticalSection(&csArray[i]);
for (unsigned int i = 0; i < CS_MAX; ++i)
LeaveCriticalSection(&csArray[i]);
for (unsigned int i = 0; i < CS_MAX; ++i)
DeleteCriticalSection(&csArray[i]);
delete [] csArray;
}
int _tmain(int argc, _TCHAR* argv[])
{
TestCriticalSections();
cout << "just hanging around...";
cin.get();
return 0;
}
- ...run this batch file (needs sleep.exe from the Server SDK):
@rem you may adapt the sleep delay depending on speed and # of CPUs
@rem sleep 2 on a duo-core 4GB. sleep 1 on a 4-CPU 8GB.
@for /L %%i in (1,1,300) do @echo %%i & @start /min CSTest.exe & @sleep 1
@echo still alive?
@pause
@taskkill /im cstest.* /f
- ...and watch a Windows 2008 server with 8 GB and a quad-core CPU freeze before it reaches the 300 launched instances.
- ...then repeat on a Windows 2003 server and see it handle the same load like a charm.
Your test is most probably not representative of the problem. Critical sections are considered "lightweight mutexes" because a real kernel mutex is not created when you initialize the critical section. This means your 10M critical sections are just structs with a few simple members. However, when two threads access a CS at the same time, in order to synchronize them a mutex is indeed created - and that's a different story.
I assume that in your real app threads do collide, as opposed to your test app. Now, if you're really treating critical sections as lightweight mutexes and create a lot of them, your app might be allocating a large number of real kernel mutexes, which are far heavier than the light critical-section object. And since mutexes are kernel objects, creating an excessive number of them can really hurt the OS.
If this is indeed the case, you should reduce the use of critical sections where you expect a lot of collisions. This has nothing to do with the Windows version, so my guess might be wrong, but it's still something to consider. Try monitoring the OS handle count and see how your app is doing.
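A minimal sketch of that monitoring, assuming GetProcessHandleCount (available from XP SP1 onwards); call it before and after the suspect code path and compare:

#include <windows.h>
#include <stdio.h>

void ReportHandleCount(const char *label)
{
    DWORD handles = 0;
    // A steadily rising count while critical sections are being contended
    // would support the lazily-created-kernel-object theory.
    if (GetProcessHandleCount(GetCurrentProcess(), &handles))
        printf("%s: %lu handles\n", label, handles);
}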
You're seeing something else.
I just built and ran this test code. Every memory-usage stat is constant: private bytes, working set, commit, and so on.
int _tmain(int argc, _TCHAR* argv[])
{
    while (true)
    {
        CRITICAL_SECTION* cs = new CRITICAL_SECTION[1000000];
        for (int i = 0; i < 1000000; i++) InitializeCriticalSection(&cs[i]);
        for (int i = 0; i < 1000000; i++) DeleteCriticalSection(&cs[i]);
        delete [] cs;
    }
    return 0;
}
Windows HeapFree and msvcrt free: do they cause the memory being freed to be paged in? I am trying to estimate whether not freeing memory at exit would speed up application shutdown significantly.
NOTE: This is a very specific technical question. It's not about whether applications should or should not call free at exit.
If you don't cleanly deallocate all your resources at application shutdown, it will be nigh on impossible to detect whether you have any really serious problems, like memory leaks, which would be more of a problem than a slow shutdown.

If the UI disappears quickly, the user will think the application has shut down quickly even if it still has a lot of work to do. With UI, perception of speed is more important than actual speed. When the user selects the 'Exit Application' option, the main application window should immediately disappear. It doesn't matter if the application takes a few seconds after that to free everything up and exit gracefully; the user won't notice.
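A minimal sketch of that idea for a plain Win32 message loop; CleanUpEverything is a hypothetical stand-in for the application's real teardown:

#include <windows.h>

void CleanUpEverything(); // hypothetical: the slow, thorough teardown

void OnExitRequested(HWND hwnd)
{
    ShowWindow(hwnd, SW_HIDE); // the window vanishes immediately
    CleanUpEverything();       // the slow frees happen after the UI is gone
    PostQuitMessage(0);
}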
I ran a test for HeapFree. The following program hits an access violation inside HeapFree at i = 31999:
#include <windows.h>

int main() {
    HANDLE heap = GetProcessHeap();
    void * bufs[64000];

    // populate heap
    for (unsigned i = 0; i < _countof(bufs); ++i) {
        bufs[i] = HeapAlloc(heap, 0, 4000);
    }

    // protect a block in the "middle"
    DWORD dwOldProtect;
    VirtualProtect(
        bufs[_countof(bufs) / 2], 4000, PAGE_NOACCESS,
        &dwOldProtect);

    // free blocks
    for (unsigned i = 0; i < _countof(bufs); ++i) {
        HeapFree(heap, 0, bufs[i]);
    }
}
The stack is
ntdll.dll!_RtlpCoalesceFreeBlocks@16() + 0x12b9 bytes
ntdll.dll!_RtlFreeHeap@12() + 0x91f bytes
shutfree.exe!main() Line 19 C++
So it looks like the answer is "yes" (this applies to free as well, since it uses HeapFree internally).
I'm almost certain the answer to the speed improvement question would be "yes". Freeing a block may or may not touch the actual block in question, but it will certainly have to update other bookkeeping information in any case. If you have zillions of small objects allocated (it happens), then the effort required to free them all could have a significant impact.
If you can arrange it, you might try setting up your application so that when it knows it's going to quit, it saves any pending work (configuration, documents, whatever) and exits ungracefully.
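A hedged sketch of that ungraceful exit; SavePendingWork is a hypothetical placeholder for persisting configuration and documents:

#include <windows.h>

void SavePendingWork(); // hypothetical: persist anything that matters

void FastExit()
{
    SavePendingWork();
    // End the process without running destructors, atexit handlers or any
    // heap teardown - the OS reclaims the address space wholesale.
    TerminateProcess(GetCurrentProcess(), 0);
}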
I'm writing the memory manager for an application, as part of a team of twenty-odd coders. We're running out of memory quota and we need to be able to see what's going on, since we only appear to be using about 700 MB. I need to be able to report where it's all going: fragmentation, etc. Any ideas?
You can use existing memory-debugging tools for this; I found Memory Validator quite useful. It is able to track both API-level (heap, new, ...) and OS-level (virtual memory) allocations and show virtual memory maps.
The other option, which I also found very useful, is to dump a map of the whole virtual address space using the VirtualQuery function. My code for this looks like this:
#include <windows.h>
#include <stdio.h>

void PrintVMMap()
{
    size_t start = 0;
    // TODO: make portable - not compatible with /3GB, 64b OS or 64b app
    size_t end = 1U<<31; // map 32b user space only - kernel space not accessible
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    size_t pageSize = si.dwPageSize;
    size_t longestFreeApp = 0;
    int index = 0;
    for (size_t addr = start; addr < end; )
    {
        MEMORY_BASIC_INFORMATION buffer;
        SIZE_T retSize = VirtualQuery((void *)addr, &buffer, sizeof(buffer));
        if (retSize == sizeof(buffer) && buffer.RegionSize > 0)
        {
            // dump information about this region
            printf("%p: %8lu kB, state %08lx, protect %08lx\n",
                   buffer.BaseAddress, (unsigned long)(buffer.RegionSize / 1024),
                   buffer.State, buffer.Protect);
            // track longest free region - useful fragmentation indicator
            if (buffer.State & MEM_FREE)
            {
                if (buffer.RegionSize > longestFreeApp) longestFreeApp = buffer.RegionSize;
            }
            addr += buffer.RegionSize;
            index += buffer.RegionSize / pageSize;
        }
        else
        {
            // always proceed
            addr += pageSize;
            index++;
        }
    }
    printf("Longest free VM region: %lu bytes\n", (unsigned long)longestFreeApp);
}
You can also find out information about the heaps in a process with Heap32ListFirst/Heap32ListNext, and about loaded modules with Module32First/Module32Next, from the Tool Help API.
'Tool Help' originated on Windows 9x. The original process-information API on Windows NT was PSAPI, which offers functions that partially (but not completely) overlap with Tool Help.
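A minimal sketch of the heap-list half of that, for the current process (error handling kept to a minimum):

#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>

void PrintHeapList()
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPHEAPLIST,
                                           GetCurrentProcessId());
    if (snap == INVALID_HANDLE_VALUE)
        return;

    HEAPLIST32 hl;
    hl.dwSize = sizeof(hl);
    if (Heap32ListFirst(snap, &hl))
    {
        do
        {
            printf("heap id %p%s\n", (void *)hl.th32HeapID,
                   (hl.dwFlags & HF32_DEFAULT) ? " (default heap)" : "");
        } while (Heap32ListNext(snap, &hl));
    }
    CloseHandle(snap);
}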
Our (huge) application (a Win32 game) started throwing "Not enough quota" exceptions recently, and I was charged with finding out where all the memory was going. It is not a trivial job: this question and this one were my first attempts at finding out. Heap behaviour is unexpected, and accurately tracking how much quota you've used and how much is available has so far proved impossible. In fact, it's not particularly useful information anyway; "quota" and "somewhere to put things" are subtly and annoyingly different concepts. The accepted answer is as good as it gets, although enumerating heaps and modules is also handy. I used DebugDiag from MS to view the true horror of the situation, and to understand how hard it is to actually track everything thoroughly.