electric-fence segfaults in malloc - debugging

I've got a rather complicated program that does a lot of memory allocation, and today, out of the blue, it started segfaulting in a way that gdb couldn't pinpoint. Suspecting memory corruption somewhere, I linked it against Electric Fence, but I am baffled as to what it is telling me:
ElectricFence Exiting: mprotect() failed:
Program received signal SIGSEGV, Segmentation fault.
__strlen_sse2 () at ../sysdeps/i386/i686/multiarch/strlen.S:99
99 ../sysdeps/i386/i686/multiarch/strlen.S: No such file or directory.
in ../sysdeps/i386/i686/multiarch/strlen.S
#0 __strlen_sse2 () at ../sysdeps/i386/i686/multiarch/strlen.S:99
#1 0xb7fd6f2d in ?? () from /usr/lib/libefence.so.0
#2 0xb7fd6fc2 in EF_Exit () from /usr/lib/libefence.so.0
#3 0xb7fd6b48 in ?? () from /usr/lib/libefence.so.0
#4 0xb7fd66c9 in memalign () from /usr/lib/libefence.so.0
#5 0xb7fd68ed in malloc () from /usr/lib/libefence.so.0
#6 <and above are frames in my program>
I'm calling malloc with a value of 36, so I'm pretty sure that shouldn't be a problem.
What I don't understand is how it is even possible that I could be trashing the heap in malloc. Reading the manual page a bit more, it appears that maybe I am writing to a freed page, or maybe I'm underwriting a buffer. So, I have tried the following environment variables, together and by themselves:
EF_PROTECT_FREE=1
EF_PROTECT_BELOW=1
EF_ALIGNMENT=64
EF_ALIGNMENT=4096
The last two had absolutely no effect.
The first one changed which frames of the stack trace are in my program (i.e. where my program was executing when the fatal malloc call happened), but the frames were identical once malloc was entered.
The second one changed a bit more; in addition to the crash occurring at a different place in my program, it also occurred in a call to realloc instead of malloc, although realloc is directly calling malloc and otherwise the back trace is identical to above.
I'm not explicitly linking against any other libraries besides Electric Fence.
Update: I found several places suggesting that the message "mprotect() failed: Cannot allocate memory" means that there is not enough memory on the machine. But I am not seeing the "Cannot allocate memory" part, and ps says I am only using 15% of memory. With such a small allocation (4k+32), could this really be the problem?

I just wasted several hours on the same problem.
It turns out that it has to do with the setting in
/proc/sys/vm/max_map_count
From the kernel documentation:
"This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling malloc, directly by mmap and mprotect, and also when loading shared libraries.
While most applications need less than a thousand maps, certain programs, particularly malloc debuggers, may consume lots of them, e.g., up to one or two maps per allocation."
So you can 'cat' that file to see what it is set to, and then you can 'echo' a bigger number into it (as root). Like this: echo 165535 > /proc/sys/vm/max_map_count
For me, this allowed electric fence to get past where it was before, and start to find real bugs.
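As a quick sanity check before (or after) raising the limit, you can compare the number of mappings your process holds against the system cap. A minimal C++ sketch, assuming a Linux /proc filesystem (each line of /proc/self/maps is one mapping):

#include <fstream>
#include <iostream>
#include <string>

int main() {
    long limit = 0;
    std::ifstream("/proc/sys/vm/max_map_count") >> limit;  // system-wide cap

    long maps = 0;
    std::ifstream self("/proc/self/maps");                 // one mapping per line
    for (std::string line; std::getline(self, line); )
        ++maps;

    std::cout << "mappings in use: " << maps << " / limit: " << limit << '\n';
    return 0;
}

Under Electric Fence, expect the count to climb by one or two per live allocation, which is how even a modest program can exhaust the default limit.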

Related

How to free all GPU memory from pytorch.load?

This code fills some GPU memory and doesn't let it go:
def checkpoint_mem(model_name):
    checkpoint = torch.load(model_name)
    del checkpoint
    torch.cuda.empty_cache()
Printing memory with the following code:
print(torch.cuda.memory_reserved(0))
print(torch.cuda.memory_allocated(0))
shows BEFORE running checkpoint_mem:
0
0
and AFTER:
121634816
97332224
This is with torch.__version__ 1.11.0+cu113 on Google colab.
Does torch.load leak memory? How can I get the GPU memory completely cleared?
It probably doesn't. It also depends on what you call a memory leak: after the program ends, all memory should be freed. Python has a garbage collector, so the release might not happen immediately (at your del, or on leaving the scope) the way it does in C++ or similar languages with RAII.
del
del is called by Python and only removes the reference (the same as when the object goes out of scope in your function).
torch.nn.Module does not implement __del__, hence its reference is simply removed.
All of the elements within torch.nn.Module have their references removed recursively (so for each CUDA torch.Tensor instance, its __del__ is called).
__del__ on each tensor is a call to release its memory.
More about __del__
Caching allocator
Another thing: the caching allocator keeps hold of part of the memory so it doesn't have to compete with other apps that need CUDA memory every time you use it.
Also, I assume PyTorch is loaded lazily, hence you get 0 MB used at the very beginning, but AFAIK PyTorch itself, during startup, reserves some part of CUDA memory.
The short story is given here, longer one here in case you didn’t see it already.
Possible experiments
You may try to run time.sleep(5) after your function and measure afterwards.
You can get a snapshot of the allocator state via torch.cuda.memory_snapshot to get more info about the allocator's reserved memory and inner workings.
You might set the environment variable PYTORCH_NO_CUDA_MEMORY_CACHING=1 and see whether anything changes.
Disclaimer
Not a CUDA expert by any means, so someone with more insight could probably expand (and/or correct) my current understanding as I am sure way more things happen under the hood.
It is not possible; see here for the same question and the response from a PyTorch developer:
https://github.com/pytorch/pytorch/issues/37664

accessing process memory parts

I'm currently studying OS memory management from a video lecture. The instructor says,
In fact, you may have, and it is quite often the case that there may
be several parts of the process memory, which are not even accessed at
all. That is, they are neither executed, loaded or stored from memory.
I don't understand this saying, since even in a simple C program we access its whole address space. Don't we?
#include <stdio.h>

int main()
{
    printf("Hello, World!");
    return 0;
}
Could you elucidate the saying? If possible, could you provide an example program where "several parts of the process memory are not even accessed at all" when it is run?
Imagine you have a large and complicated utility (e.g. a compiler), and the user asks it for help (e.g. they type gcc --help instead of asking it to compile anything). In this case, how much of the utility's code and data is used?
Most programs have various optional parts that aren't used (e.g. maybe something that works with graphics will have some code for 16 bits per pixel and other code for 32 bits per pixel, and will determine which code to use and never touch the other). Most heap allocators are "eager" (e.g. they'll ask the OS for 20 MiB of space and then might only malloc() 2 MiB of it). Sometimes a program will memory map a huge file but then only access a small part of it.
Even for your trivial "hello world" example code; the virtual address space probably contains a huge (several MiB) shared library to support lots of C standard library functions (e.g. puts(), fprintf(), sprintf(), ...) and your program only uses a small part of that shared library; and your program probably reserves a conservative amount of space for its stack (e.g. maybe 20 KiB of space for its stack) and then probably only uses a few hundred bytes of stack.
In a virtual memory system, the address space of the process is created in secondary storage at startup. Little or nothing gets placed in memory. For example, the operating system may use the executable file as the page file for the code and static data: it just sets up an internal structure that says some range of memory is mapped to these blocks in the executable file. The same goes for shared libraries. The other data gets mapped to the page file.
As your program runs it starts page faulting rapidly because nothing is in memory and the operating system has to load it from secondary storage.
If there is something that your program does not reference, it never gets loaded into memory.
If you had a global variable declared like
char somedata[1045];
and your program never references that variable, it will never get loaded into memory. The same goes for code. If you have pages of code that don't get executed (e.g. error handling code), they do not get loaded. If you link to shared libraries, you will likely be including a lot of functions that you never use, and likewise they will not get loaded if you do not execute them.
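A minimal C++ sketch of this, assuming Linux and a /proc filesystem: a large zero-filled global lives in .bss and costs no physical memory until it is touched, which you can observe through the process's resident set size:

#include <cstdio>

static char big[64 * 1024 * 1024];  // 64 MiB that is never fully touched

// Read this process's resident set size (VmRSS) in kB from /proc.
static long rss_kb() {
    long kb = -1;
    char line[256];
    FILE* f = std::fopen("/proc/self/status", "r");
    while (f && std::fgets(line, sizeof line, f))
        if (std::sscanf(line, "VmRSS: %ld", &kb) == 1)
            break;
    if (f) std::fclose(f);
    return kb;
}

int main() {
    std::printf("RSS before touch: %ld kB\n", rss_kb());
    big[0] = 1;  // faults in a single page out of the 64 MiB
    std::printf("RSS after one-byte touch: %ld kB\n", rss_kb());
    return 0;
}

The RSS barely moves after the touch: only the one referenced page is ever brought in, even though 64 MiB of address space is reserved.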
To begin with, not all of the address space is backed by physical memory at all times, especially if your address space covers 2^48 or more bytes, which your computer doesn't have (which is not to say you can't map most of the address space to a single physical page of memory, though that would be of very little utility for anything).
And then some portions of the address space may be purposefully permanently inaccessible, like a few pages near virtual address 0 (to catch NULL pointer dereferences).
And as has been pointed out in the other answers, with on-demand loading of programs you may have some portions of the address space reserved for your program, but if the program doesn't happen to need any of its code or data there, nothing needs to be loaded there either.

Detection of freed memory usage (FPC -> heaptrc -> keepreleased)

Free Pascal heaptrc keepreleased is described as "useful if you suspect that the same memory block is released twice", but is it possible to detect usage of previously freed memory (e.g. a method call on a freed object) with it? If it is impossible, can it be detected with other tools?
Yes, it should do that. The idea is the following:
a used allocation has a different .sig than $AAAAAAAA or $DEADBEEF. On freemem the sig is checked (see around line 593 in trunk) against $AAAAAAAA if useCRC is false.
The keepreleased option prevents blocks from being reused, which would change the signature to something other than $AAAAAAAA. It will print something like:
Marked memory at $12345678 released
to the file descriptor ptext. The standard error files can be set and redirected using various other variables. It looks fairly complicated, but that is probably to deal with console-less GUI applications.
Some other variables (like haltonerror) govern whether the application is halted on such corruption.
An alternative (but very slow) way is using valgrind (fpc option -gv), but I have only run valgrind on *nix, and as said it is extremely slow, so it is not for very heavy processing apps.
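For reference, this is the shape of the bug being asked about, sketched here in C++ (the Pascal case, a method call on a freed object, is analogous). Tools like heaptrc with keepreleased, or valgrind, can flag the access because the freed block is kept and marked instead of being reused:

#include <cstdio>

struct Widget {
    int value;
    void print() const { std::printf("%d\n", value); }
};

int main() {
    Widget* w = new Widget{42};
    delete w;     // the block is freed (and, under such tools, marked released)
    w->print();   // use after free: undefined behavior, exactly what they detect
    return 0;
}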

Investigating Memory Leak

We have a slow memory leak in our application, and I've already gone through the following steps in trying to analyze the cause of the leak:
Enabling user mode stack trace database in GFlags
In Windbg, typing the following command: !heap -stat -h 1250000 (where 1250000 is the address of the heap that has the leak)
After comparing multiple dumps, I see that memory blocks of size 0xC are increasing in number over time and are probably the memory being leaked.
typing the following command: !heap -flt s c
gives the UserPtr of those allocations and finally:
typing !heap -p -a address on some of those addresses always shows the following allocation call stack:
0:000> !heap -p -a 10576ef8
address 10576ef8 found in
_HEAP # 1250000
HEAP_ENTRY Size Prev Flags UserPtr UserSize - state
10576ed0 000a 0000 [03] 10576ef8 0000c - (busy)
mscoreei!CLRRuntimeInfoImpl::`vftable'
7c94b244 ntdll!RtlAllocateHeapSlowly+0x00000044
7c919c0c ntdll!RtlAllocateHeap+0x00000e64
603b14a4 mscoreei!UtilExecutionEngine::ClrHeapAlloc+0x00000014
603b14cb mscoreei!ClrHeapAlloc+0x00000023
603b14f7 mscoreei!ClrAllocInProcessHeapBootstrap+0x0000002e
603b1614 mscoreei!operator new[]+0x0000002b
603d402b +0x0000005f
603d5142 mscoreei!GetThunkUseState+0x00000025
603d6fe8 mscoreei!_CorDllMain+0x00000056
79015012 mscoree!ShellShim__CorDllMain+0x000000ad
7c90118a ntdll!LdrpCallInitRoutine+0x00000014
7c919a6d ntdll!LdrpInitializeThread+0x000000c0
7c9198e6 ntdll!_LdrpInitialize+0x00000219
7c90e457 ntdll!KiUserApcDispatcher+0x00000007
This looks like a thread initialization call stack, but I need to know more than this.
What next step would you recommend in order to pinpoint the exact cause of the leak?
The stack recorded when using GFlags is captured without utilizing .pdb symbols and is often not correct.
Since you have traced the leak down to a specific size on a given heap, you can try to set a live break in RtlAllocateHeap and inspect the stack in WinDbg with proper symbols. I have used the following with some success; you must edit it to suit your heap and size.
$$ Display stack if heap handle eq 0x00310000 and size is 0x1303
$$ ====================================================================
bp ntdll!RtlAllocateHeap "j ((poi(@esp+4) == 0x00310000) & (poi(@esp+c) == 0x1303) ) 'k'; 'gc'"
Maybe you will then get another stack and new ideas about the offender.
The first thing: the operator new in that stack is the array form, operator new[], so is there a corresponding delete[] call, and not a plain old delete?
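For illustration, a minimal hypothetical sketch of the pairing in question:

// Memory from new[] must go back through delete[]; mixing the forms is
// undefined behavior and a classic source of heap corruption and leaks.
void example()
{
    char* buf = new char[64];
    // ... use buf ...
    delete[] buf;    // correct: array delete for an array allocation
    // delete buf;   // wrong: scalar delete here is undefined behavior
}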
If you suspect this code, I would put a test harness around it; for instance, put it in a loop and execute it 100 or 1000 times. Does it still leak, and proportionally?
You can also measure the memory increase using Process Explorer, or programmatically using GetProcessMemoryInfo.
The other obvious thing is to see what happens when you comment out this function call: does the memory leak go away? You may need to do a binary chop, if possible, reducing the suspect code by roughly half each time by commenting code out; however, changing the behaviour of the code may cause more problems or dependent code path issues, which can themselves cause memory leaks or strange behaviour.
EDIT
Ignore the following, since you are working in a managed environment.
You may also consider using the STL, or better yet Boost reference-counted pointers like shared_ptr, or scoped_array for array structures, to manage the lifetime of the objects.

C++/msvc6 application crashes due to heap corruption, any hints?

About the application
It runs on Windows XP Professional SP2.
It's built with Microsoft Visual C++ 6.0 with Service Pack 6.
It's MFC based.
It uses several external dlls (e.g. Xerces, ZLib or ACE).
It has high performance requirements.
It does a lot of network and hard disk I/O, but it's also cpu intensive.
It has an exception handling mechanism which generates a minidump when an unhandled exception occurs.
UPDATE: It is a highly multithreaded application and we are using mutexes to protect concurrent access (of course, we might be failing somewhere...)
Facts about the crash
It only happens on multiprocessor/multicore machines and under heavy loads of work.
It happens at random (neither we nor our client have found a pattern yet) after some hours of running.
We cannot reproduce the crash in our testing lab. It only happens on some production systems (but always on multicore machines).
It always ends up crashing at the same point, although the complete stack is not always the same. Let me add the stack of the crashing thread (obtained using WinDbg; sorry, we don't have symbols):
Exception code: c0000005 ACCESS_VIOLATION
Address : 006a85b9
Access Type : write
Access Address : 2e020fff
Fault address: 006a85b9 01:002a75b9 C:\MyDir\MyApplication.exe
ChildEBP RetAddr Args to Child
WARNING: Stack unwind information not available. Following frames may be wrong.
030af6c8 7c9206eb 77bfc3c9 01a80000 00224bc3 MyApplication+0x2a85b9
030af960 7c91e9c0 7c92901b 00000ab4 00000000 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo])
030af98c 7c9205c8 00000001 00000000 00000000 ntdll!ZwWaitForSingleObject+0xc (FPO: [3,0,0])
030af9c0 7c920551 01a80898 7c92056d 313adfb0 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4])
030afa8c 4ba3ae96 000307da 00130005 00040012 ntdll!RtlFreeHeap+0x1e9 (FPO: [Non-Fpo])
030afacc 77bfc2e3 0214e384 3087c8d8 02151030 0x4ba3ae96
030afb00 7c91e306 7c80bfc1 00000948 00000001 msvcrt!free+0xc8 (FPO: [Non-Fpo])
030afb20 0042965b 030afcc0 0214d780 02151218 ntdll!ZwReleaseSemaphore+0xc (FPO: [3,0,0])
030afb7c 7c9206eb 02e6c471 02ea0000 00000008 MyApplication+0x2965b
030afe60 7c9205c8 02151248 030aff38 7c920551 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo])
030afe74 7c92056d 0210bfb8 02151250 02151250 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4])
030aff38 77bfc2de 01a80000 00000000 77bfc2e3 ntdll!RtlFreeHeap+0x647 (FPO: [Non-Fpo])
7c92056d c5ffffff ce7c94be ff7c94be 00ffffff msvcrt!free+0xc3 (FPO: [Non-Fpo])
7c920575 ff7c94be 00ffffff 12000000 907c94be 0xc5ffffff
7c920579 00ffffff 12000000 907c94be 90909090 0xff7c94be
*** WARNING: Unable to verify checksum for xerces-c_2_7.dll
*** ERROR: Symbol file could not be found. Defaulted to export symbols for xerces-c_2_7.dll -
7c92057d 12000000 907c94be 90909090 8b55ff8b MyApplication+0xbfffff
7c920581 907c94be 90909090 8b55ff8b 08458bec xerces_c_2_7
7c920585 90909090 8b55ff8b 08458bec 04408b66 0x907c94be
7c920589 8b55ff8b 08458bec 04408b66 0004c25d 0x90909090
7c92058d 08458bec 04408b66 0004c25d 90909090 0x8b55ff8b
The address MyApplication+0x2a85b9 corresponds to a call to erase() of a std::list.
What I have tried so far
Reviewing all the code related to the point where the crash ends up happening.
Trying to enable pageheap in our testing lab, though nothing useful has been found so far.
We have substituted a plain C array for the std::list, and now it crashes in another part of the code (related code, but not where the old list resided). Coincidentally, it now crashes in another erase, though this time of a std::multiset. Let me copy the stack contained in the dump:
ntdll.dll!_RtlpCoalesceFreeBlocks@16() + 0x124e bytes
ntdll.dll!_RtlFreeHeap@12() + 0x91f bytes
msvcrt.dll!_free() + 0xc3 bytes
MyApplication.exe!006a4fda()
[Frames below may be incorrect and/or missing, no symbols loaded for MyApplication.exe]
MyApplication.exe!0069f305()
ntdll.dll!_NtFreeVirtualMemory@16() + 0xc bytes
ntdll.dll!_RtlpSecMemFreeVirtualMemory@16() + 0x1b bytes
ntdll.dll!_ZwWaitForSingleObject@12() + 0xc bytes
ntdll.dll!_RtlpFreeToHeapLookaside@8() + 0x26 bytes
ntdll.dll!_RtlFreeHeap@12() + 0x114 bytes
msvcrt.dll!_free() + 0xc3 bytes
c5ffffff()
(12-Apr-2010) I've tried to enable heap free checking (using gflags) but it slows down the application a lot...
Possible solutions (that I'm aware of) which cannot be applied
"Migrate the application to a newer compiler": We are working on this but It's not a solution at the moment.
"Enable pageheap (normal or full)": We can't enable pageheap on production machines as this affects performance heavily.
I think that's all I remember now, if I have forgotten something I'll add it asap. If you can give me some hint or propose some possible solution, don't hesitate to answer!
You can try peppering your code with calls to the debug heap checking routines to see if you can locate the corruption closer to its source (you're using the debug CRT to track down this problem, right?):
http://msdn.microsoft.com/en-us/library/aa271695(VS.60).aspx
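For instance, a minimal sketch using the MSVC debug CRT routines behind that link (they only take effect in a debug build):

#include <crtdbg.h>

// Validate the debug heap on every allocation and free, so a stray write is
// caught near where it happens rather than at some much later free().
// This slows the program down considerably.
void enable_heap_checking()
{
    int flags = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);   // read the current flags
    flags |= _CRTDBG_ALLOC_MEM_DF | _CRTDBG_CHECK_ALWAYS_DF;
    _CrtSetDbgFlag(flags);
}

// Or pepper suspect code paths by hand:
void check_heap_here()
{
    if (!_CrtCheckMemory())
        __debugbreak();   // break into the debugger at the first sign of damage
}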
Use Application Verifier from Debugging Tools for Windows. Sometimes it helps.
Try to set up VS to download OS debug symbols, and make sure that OMIT FRAME POINTERS is off in your application. Perhaps the stack trace will be informative.
Highly multithreaded
Long ago I discovered that there is a limit on the thread count per process in WinXP. My test snippet could create only a few thousand threads. The problem was resolved by a thread pool.
EDIT:
For my purposes it was enough just to check the "Application Verifier" checkbox in gflags.exe. Unfortunately, I have no experience with the other options.
As for the thread limit, the test snippet was simple:
unsigned __stdcall ThreadProc(LPVOID)
{
    _tprintf(_T("Thread started\n"));
    return 0;
}

int _tmain(int argc, _TCHAR* argv[])
{
    while (TRUE)
    {
        unsigned threadId = 0;
        _tprintf(_T("Start thread\n"));
        // Note: the returned handle is never closed, so the process's
        // handle count keeps growing even after each thread exits.
        _beginthreadex(NULL, 0, &ThreadProc, NULL, 0, &threadId);
    }
    return 0;
}
I didn't wait long this time, but the handle count in Task Manager was increasing very fast. My real-world application got this effect only after 12 hours. But I must say the issue was not a crash; new threads were simply not created.
Can you post what exceptions you are getting?
If this is a memory corruption bug, then the crash occurs sometime after the memory corruption, so the root cause will be challenging to track down. You should:
Travel (or log on remotely) to the production system, install Visual Studio, have .pdb and .map files ready (and Windows symbols as well), attach the debugger to the release build, and wait for the crash. Alternatively, if you set it up correctly, you can use the minidump file on your dev machine, where you would already have your app's and Windows' symbols set up. Then you can see which free call is throwing, and try to figure out which object is being freed, to see if that object, or nearby objects in memory, is corrupted somehow.
Somehow find a way to reproduce the bug in your office, can you create high enough volumes to duplicate what the customer is doing?
Your posted callstacks don't look particularly illuminating.
Since you are using VS 6 with SP6, its STL is OK.
Can you tell if the app on the production system is leaking any resources? Running perfmon can help with this.
Another thing: you're not calling new/delete very frequently from different threads, are you? I've found that if you do this fast enough, you'll crash your app rather quickly (I saw this on XP). I had to replace the new/delete calls in my app with VirtualAlloc (the Windows virtual memory API), which worked great for me. Of course, STL could be allocating from the heap as well.
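A minimal sketch of that replacement, with hypothetical wrapper names (page granularity makes it wasteful, but every allocation bypasses the CRT heap and its metadata entirely):

#include <windows.h>

// Each allocation gets its own fresh pages straight from the OS.
void* page_alloc(size_t bytes)
{
    return VirtualAlloc(NULL, bytes, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
}

void page_free(void* p)
{
    if (p)
        VirtualFree(p, 0, MEM_RELEASE);  // size must be 0 with MEM_RELEASE
}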
Use a performance profiler that can hook into CPU events, such as VTune. Set it up in sampling mode and tell it to wait for events related to cache line sharing. These are identified by a HITM event from the SNOOP phase.
If you run this on a multiprocessor machine with a realistic workload, it will find places in your code where there is active contention between threads for a single piece of data. You will need to analyze the profiler hot spots found this way and try to find something that is not being wrapped in an appropriate mutex.
I'm not an expert on CPU architecture or anything, but my understanding is that when a CPU is about to access a piece of data, the system checks whether any other CPU is accessing the same data. It does this by watching the memory fetches and writes coming out of each CPU, a process called snooping. Snooping makes sure that if two or more CPUs have the same data in their caches, the duplicated copies are removed when one of them is modified. A HIT-Modified event means that the system detected this situation and had to flush one of the CPUs' cache lines.
See this document for more information on using VTune like this
http://software.intel.com/en-us/articles/using-intel-vtune-performance-analyzer-events-ratios-optimizing-applications/
I don't have a copy of VTune in front of me right now, so maybe this won't work, but it seems like the lowest-impact way of getting some data. VTune in sampling mode should not cause many performance problems.
The key here is that this only happens on multiprocessor machines (cores count as processors here).
When a threaded program runs on a single processor, two threads never execute at the same time; the OS has to time-slice the processor to simulate threads.
In a multiprocessor system multiple threads can operate at the same time.
You are probably accessing shared resources from different threads at the same time now.
These resources can be connections to external systems, global variables and data structures, and even Singleton classes.
Unfortunately you now have one of the hardest problems to find.
If you can find the memory being corrupted, then you need to find who else is using it on a different thread and then synchronize the access (with a Semaphore or CriticalSection).
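A minimal sketch of the CriticalSection approach, with hypothetical names; note that every reader and writer must take the same lock, or the protection is void:

#include <windows.h>
#include <list>

CRITICAL_SECTION g_lock;   // call InitializeCriticalSection(&g_lock) at startup
std::list<int> g_shared;   // stand-in for the structure being corrupted

void producer(int value)
{
    EnterCriticalSection(&g_lock);
    g_shared.push_back(value);
    LeaveCriticalSection(&g_lock);
}

void consumer()
{
    EnterCriticalSection(&g_lock);
    if (!g_shared.empty())
        g_shared.erase(g_shared.begin());  // the kind of erase() seen in the crash
    LeaveCriticalSection(&g_lock);
}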
Unfortunately there is no easy way to find the problem.
You might be able to set the processor affinity temporarily so the application runs on only one processor until you find the problem. See this link:
http://msdn.microsoft.com/en-us/library/ms684251(VS.85).aspx
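One way to do this programmatically, sketched with SetProcessAffinityMask (which is what the link above may describe); pinning everything to one CPU removes true parallelism, so treat it as a diagnostic, not a fix:

#include <windows.h>

// Restrict the whole process to the first CPU; returns false on failure.
bool pin_to_first_cpu()
{
    return SetProcessAffinityMask(GetCurrentProcess(), 1) != 0;  // mask bit 0 = CPU 0
}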
Here is a method to set affinity by hand:
For Windows XP/Vista/7, access affinity by opening the Windows Task Manager (CTRL+ALT+DEL, or right-click on the Task Bar), select the "Processes" tab, right-click the application process you wish to isolate, then select "Set Affinity." Inside the Processor Affinity dialog, un-check the CPUs/cores you do not need to use. This effectively isolates the application to the selected CPUs/cores, preventing cache spanning, reducing process switching, and simplifying your ability to supervise CPU/core allocation for multiple programs.
As your second stack trace shows, your application is corrupting the heap. The header of a heap block is being overwritten, and thus the crash occurs in the heap manager when coalescing free blocks, or when walking the free list (as in the first stack trace).
The code you identified that is currently freeing memory may be a victim of other code overflowing or underflowing a memory block.
The easiest way to debug this kind of crash is to use the debugging help from Windows, through pageheap or appverifier; but depending on the application it may slow things down too much, or grow memory usage too high to be usable, which seems to be the case here. You may try light pageheap, which has less impact.
You need to identify what part of the application is overflowing. One way to do this is to look at the information contained in the overflown block. If you have a crash in RtlpCoalesceFreeBlocks, I think I remember that one of the registers (@esi) points to the start of the corrupted block (I am not on a Windows system at the time of this writing and cannot check that). Or, if you have a dump, the windbg command !heap -a will dump all memory and display the corrupted blocks (better to log to a file, since the full heap listing can be long). Once the corrupted blocks are known, their content may help to identify the code.
Another help is to enable stack backtraces (using gflags). This can be done in production, as it is lighter than pageheap. It adds some information to heap blocks and may move the crash to another place in your application, but the stack traces will help to identify what code allocated the blocks that are overflowing.
I would focus on getting the issue to happen on a build for which you have proper debugging symbols, at least for your main application. You seem to gloss over this with "sorry we don't have symbols", but when symbols are applied, the stacktraces may show you more information.
What exactly does this mean: "We can't generate symbols because we're linking with a library which doesn't link if we're using them."? This seems odd.
