Memory leak in C++ - memory-management

How can I detect a memory leak? Is there a tool or utility available, or some piece of code (i.e. overloading operator new and delete), or do I just need to check every new and delete in the code?
If there is a tool or utility, please tell me; and if it is done in code, can anyone explain what that code looks like?

Tools that might help you:
Linux: valgrind
Win32: MemoryValidator
You have to check every bit of memory that gets allocated (new, malloc, ...) and make sure it gets freed with the matching function (delete, free, ...).

Use e.g. boost::shared_ptr instead of naked pointers.
Analyze your application with one of these: http://en.wikipedia.org/wiki/Memory_debugger

One way is to store the file name and line number (via pointer to string) of the module allocating memory inside the allocated block of data. The file and line number are captured using the standard `__FILE__` and `__LINE__` macros. When the memory is de-allocated, that information is removed.
One of our systems has this feature and we call it a "memory hog report". So anytime from our CLI we can print out all the allocated memory along with a big list of information of who has allocated memory. This list is sorted by which code module has the most memory allocated. Many times we'll monitor memory usage this way over time, and eventually the memory hog (leak) will bubble up to the top of the list.

valgrind is a very powerful tool that you can use to detect memory leaks. Once installed, you can run
valgrind --leak-check=full path/to/program arguments...
and valgrind will run the program, finding leaks and reporting them to you.

I can also recommend UMDH: http://support.microsoft.com/kb/268343

Your best solution is probably to use valgrind, which is one of the better tools.
If you are running in OS X with Xcode, you can use the Leaks tool. If you click Run with performance tool and select Leaks, it will show allocated and leaked memory.
Something to remember, though: most of the tools listed here catch memory leaks only as they occur. So if you have some code that leaks memory but is rarely called (or called rarely enough that you don't hit it when testing for leaks), you could miss it. If you want something that actually analyzes all paths through your code, you'll need a static analyzer. The only one I know of is the Clang Static Analyzer, but it is for C and Obj-C (I don't know if it supports C++).

Related

Leak_DefinitelyLost: can valgrind point out the last time the address was on the stack?

I'm having a leak which is very hard to detect.
Can valgrind tell me the last call where the address was still accessible, and what the values of the variables were? I use CLion; can it just break when the leak happens?
There is no "instantaneous" leak-detection functionality in valgrind/memcheck that reports a leak at the exact moment the last pointer to a block is lost. There was an experimental tool that tried to do that, but it was never considered for integration into valgrind, due to various difficulties in making it work properly.
If your leak is easy to reproduce, you can run your application under valgrind + gdb/vgdb. You can then set breakpoints at various points in your program and use monitor commands such as "leak_check" or "who_points_at" to check whether the leak has already happened. By refining the locations where you put a breakpoint, this can help you find when the last pointer to a block is lost.
See e.g. https://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands for more info.
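The workflow looks roughly like this (a sketch of an interactive session; the program name `./myprog` and the block address are placeholders):

```
# Terminal 1: run the program under valgrind with the embedded gdbserver
valgrind --vgdb=yes --vgdb-error=0 ./myprog

# Terminal 2: attach gdb through vgdb
gdb ./myprog
(gdb) target remote | vgdb
(gdb) break suspected_function
(gdb) continue
(gdb) monitor leak_check full          # has the leak happened yet?
(gdb) monitor who_points_at 0x4a2b040  # who still references this block?
```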

Creating deliberately dirty heap in gdb?

I found that trying to debug accidentally uninitialized data in gdb can be annoying. The program will crash when directly executed from the command line, but not while under inspection in gdb. It seems like gdb's heap is often clean (all zeroes), whereas from the command line, clearly not.
Is there a reason for this? If so, can I deliberately tell gdb or gcc to dirty the heap? I.e., is there a way to specify a "debug" allocator that will always hand random data to malloc() and new? I imagine this might involve a special libc. Obviously it would be great if there were a way to do this without changing the linker options, so that the release version stays as similar as possible to the debug version.
I'm currently using MinGW-w64 (gcc 4.7 based), but I'd be interested in a general answer.
The Linux way of doing this would be to use valgrind. On Mac OS X there are environment variables that control allocation debugging; see the Mac OS X man page for malloc. Valgrind support for Mac OS X is starting to appear, but 10.8 support is not complete as of this writing.
As you're using MinGW-w64 I am assuming you're using Windows. It seems like this SO question talks about alternatives to valgrind on Windows. One solution would be to run your app in Wine on a Linux box under valgrind.
If your program is running under valgrind, it is not directly running on a CPU. Valgrind is simulating every instruction, hence you can't simply attach a debugger to it. To get this to work you need to use the valgrind GDB server, see this page for more details.
Another approach would be to use calloc instead of malloc, which would zero your heap allocations. This doesn't give you a deliberately dirty heap but at least gives you consistent behaviour with or without a debugger.
Yes, GDB zeroes out everything, this is both useful and very annoying. Useful, insofar as everything is guaranteed to be in a well-defined state (no random values in memory, just zero). Which means, in theory, no nasty surprises while debugging.
In practice, and this is where it gets annoying, the theory sometimes fails spectacularly. The infamous "works fine, but crashes in debugger!?!" or "works fine in debugger, but crashes otherwise?!" issues are an example of this. Usually, this is a combination of an uninitialized pointer with a well-intended if(ptr != NULL) somewhere, which totally blows up for "no good reason" because the debugger initializes memory to zero, so the test fails to do what you intended.
About your question on deliberately garbling data allocated by malloc, GCC supports malloc hooks (see docs here and question here on SO).
This lets you, in a very easy and unintrusive manner, redirect all calls to malloc to a function of your own. From there you can call the real malloc and fill the allocated block with garbage (or some invalid-pointer magic value like DEADBEEF), if you wish to do so.
As for operator new, this happens to be a wrapper around malloc (that's an implementation detail, but malloc hooks are non-portable already, so relying on that won't make things worse), therefore malloc hooking should already deal with this, too.

free mem as function of command 'purge'

One of my apps needs a function that frees inactive/used/wired memory, just like the command 'purge'.
I've checked and googled a lot, but cannot find anything.
Any comments are welcome.
Purge doesn't do what you seem to think it does. It doesn't "free inactive/used/wired memory". As the manpage says:
It does not affect anonymous memory that has been allocated through malloc, vm_allocate, etc.
All it does is purge the disk cache. This is only useful if you're running performance tests and want to simulate the effects of "first run after cold boot" without actually cold booting. Again, from the manpage:
Purge can be used to approximate initial boot conditions with a cold disk buffer cache for performance analysis.
There is no public API for this, although a quick scan of the symbols shows that it seems to call a function CPOSXPurgeAllDiskBuffers from the CoreProfile private framework. I believe the underlying kernel and userland disk cache code is all or mostly available on http://www.opensource.apple.com, so you could probably implement the same thing yourself, if you really want.
As iMysak says, you can just exec (or NSTask, etc.) the tool if you want to.
As a side note, if you could free used/wired memory, presumably that memory is used by something—even if you don't have pointers into it in your own data structures, malloc probably does. Are you trying to segfault your code?
Freeing inactive memory is a different story. Just freeing something up to malloc doesn't necessarily make malloc return it to the OS. And there's no way you can force it to. If you think about the way traditional UNIX works, it makes sense: When you ask it to allocate more memory, it uses sbrk to expand your data segment; if you free up memory at the top, it can sbrk back down, but if you free up memory in the middle, there's no way it can do that. Of course modern UNIX systems don't work that way, but the POSIX and C APIs are all designed to be compatible with systems that do. So, if you want to make sure memory gets freed, you have to handle memory allocation directly.
The simplest and most portable way to do this is to create and mmap a temporary backing file, or just MAP_ANON, and explicitly unmap pages when you're done with them. (This works on all POSIX systems—and, with a pretty simple wrapper, even Windows.) If you need even more control (e.g., to manually handle flushing pages to disk, etc.), you can use the mach/mach_vm.h APIs.
You can run the tool directly from your app with the exec() family of functions.

How to interpret the output from the "Leaks" Xcode performance tool?

I don't understand the output from the "Leaks" performance tool in Xcode. How can I interpret this output?
The Leaks Instrument looks for blocks of memory that are not referenced from the application code.
The Table View shows the addresses of the blocks found in that condition.
Yes, Instruments is not simple to use; there are many leaks apparently coming from the OS and/or the system libraries, and the details often show over-freed blocks (?!).
Life is complex :)
Leaks is only marginally useful. A much bigger problem you will have is references which are still retained that you think have been released. For that, use the Object Allocation tool with "created and still living" checked.
If you see memory use increase over time, highlight a region and see what objects are allocated in your own code that you were not expecting.
Leaks is covered in a wonderful video of Lecture 10 of Stanford's CS 193P (Cocoa/iPhone Application Programming).
http://www.stanford.edu/class/cs193p/cgi-bin/index.php

Question about g++ generated code

Dear g++ hackers, I have the following question.
When some data of an object is overwritten by a faulty program, why does the program eventually fail on destruction of that object with a double free error? How does it know if the data is corrupted or not? And why does it cause double free?
It's usually not that the object's memory is overwritten, but some part of the memory outside of the object. If this hits malloc's control structures, free will freak out once it accesses them and tries to do weird things based on the corrupted structure.
If you'd really only overwrite object memory with silly stuff, there's no way malloc/free would know. Your program might crash, but for other reasons.
Take a look at valgrind. It's a tool that emulates the CPU and watches every memory access for anomalies (like trying to overwrite malloc's control structures). It's really easy to use: most of the time you just start your program inside valgrind by prepending valgrind to the command line, and it saves you a lot of pain.
Regarding C++: always make sure that you use new in conjunction with delete and, respectively, new[] in conjunction with delete[]. Never mix them up. Bad things will happen, often similar to what you are describing (but valgrind would warn you).
