It seems that whenever there are static objects, _CrtDumpMemoryLeaks reports false positives, claiming they are leaking memory. I know this is because they do not get destroyed until after main() (or WinMain) returns. But is there any way of avoiding this? I use VS2008.
I found that if you tell it to check memory automatically when the program terminates, all the static objects get accounted for. I was using log4cxx and boost, which do a lot of allocations in static blocks, and this fixed my "false positives"...
Instead of invoking _CrtDumpMemoryLeaks, add the following line somewhere near the beginning of main():
_CrtSetDbgFlag ( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
For more details on usage and macros, refer to the MSDN article:
http://msdn.microsoft.com/en-us/library/5at7yxcs(v=vs.71).aspx
Not a direct solution, but in general I've found it worthwhile to move as much allocation as possible out of static initialization time. Static-time allocation generally leads to headaches (initialization order, de-initialization order, etc.).
If that proves too difficult, you can call _CrtMemCheckpoint (http://msdn.microsoft.com/en-us/library/h3z85t43%28VS.80%29.aspx) at the start of main() and _CrtMemDumpAllObjectsSince at the end.
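A minimal sketch of that checkpoint pattern (the _Crt* calls are real MSVC CRT functions from &lt;crtdbg.h&gt;; the non-Windows stubs below exist only so the sketch compiles anywhere):

```cpp
#ifdef _WIN32
#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>
#else
// Stubs so the sketch also compiles outside the Windows CRT; with MSVC the
// real declarations from <crtdbg.h> are used instead.
struct _CrtMemState { int unused; };
inline void _CrtMemCheckpoint(_CrtMemState*) {}
inline void _CrtMemDumpAllObjectsSince(const _CrtMemState*) {}
#endif

// Snapshot the CRT heap on construction; on destruction, dump only the
// allocations made since that snapshot. Declared at the top of main(),
// this excludes everything allocated during static initialization.
struct LeakCheck {
    _CrtMemState start;
    LeakCheck()  { _CrtMemCheckpoint(&start); }
    ~LeakCheck() { _CrtMemDumpAllObjectsSince(&start); }
};
```

Declaring `LeakCheck guard;` as the first line of main() then reports only leaks created during the program's own run, not the static-init allocations from libraries like boost or log4cxx.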
1) You said:
It seems whenever there are static objects, _CrtDumpMemoryLeaks returns a false positive claiming it is leaking memory.
I don't think this is correct. EDIT: Static objects are not created on the heap. END EDIT: _CrtDumpMemoryLeaks only covers CRT heap memory, so these objects should not produce false positives.
However, it is another matter if the static variables are objects which themselves own heap memory (for example, if they dynamically create member objects with operator new()).
2) Consider using _CRTDBG_LEAK_CHECK_DF to activate the memory leak check at the end of program execution (this is described here: http://msdn.microsoft.com/en-us/library/d41t22sb(VS.80).aspx). I believe the leak check is then performed even after the destruction of static variables.
Old question, but I have an answer. I am able to split the report into false positives and real memory leaks. In my main function, I initialize memory debugging and generate a real memory leak at the very beginning of my application (I never delete pcDynamicHeapStart):
int main()
{
    _CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );

    char* pcDynamicHeapStart = new char[ 17u ];
    strcpy_s( pcDynamicHeapStart, 17u, "DynamicHeapStart" );
    ...
After my application is finished, the report contains
Detected memory leaks!
Dumping objects ->
{15554} normal block at 0x00000000009CB7C0, 80 bytes long.
Data: < > DD DD DD DD DD DD DD DD DD DD DD DD DD DD DD DD
{14006} normal block at 0x00000000009CB360, 17 bytes long.
Data: <DynamicHeapStart> 44 79 6E 61 6D 69 63 48 65 61 70 53 74 61 72 74
{13998} normal block at 0x00000000009BF4B0, 32 bytes long.
Data: < ^ > E0 5E 9B 00 00 00 00 00 F0 7F 9C 00 00 00 00 00
{13997} normal block at 0x00000000009CA4B0, 8 bytes long.
Data: < > 14 00 00 00 00 00 00 00
{13982} normal block at 0x00000000009CB7C0, 16 bytes long.
Data: < # > D0 DD D6 40 01 00 00 00 90 08 9C 00 00 00 00 00
...
Object dump complete.
Now look at line "Data: <DynamicHeapStart> 44 79 6E 61 6D 69 63 48 65 61 70 53 74 61 72 74".
All reported leaks below that line are false positives; all above it are real leaks.
A false positive doesn't mean there is no leak (it could be a statically linked library which allocates heap memory at startup and never frees it), but you cannot eliminate such a leak, and that's no problem at all.
Since I adopted this approach, I have never had leaking applications again. I provide it here in the hope that it helps other developers build stable applications.
Can you take a snapshot of the currently allocated objects every time you want a list? If so, you could remove the initially allocated objects from the list when looking for leaks that occur during operation. In the past, I have used this to find incremental leaks.
Another solution might be to sort the leaks and only consider duplicates for the same line of code. This should rule out static variable leaks.
Ach. If you are sure that _CrtDumpMemoryLeaks() is lying, then you are probably correct. Most alleged memory leaks that I see come down to incorrect calls to _CrtDumpMemoryLeaks(). _CrtDumpMemoryLeaks() dumps all open handles, but your program probably still has handles open, so be sure to call _CrtDumpMemoryLeaks() only after all handles have been released. See http://www.scottleckie.com/2010/08/_crtdumpmemoryleaks-and-related-fun/ for more info.
I can recommend Visual Leak Detector (it's free) rather than using the stuff built into VS. My problem was using _CrtDumpMemoryLeaks with an open source library that created 990 lines of output, all false positives so far as I can tell, as well as some things coming from boost. VLD ignored these and correctly reported some leaks I added for testing, including in a native DLL called from C#.
I want to trace the goid of Go programs using eBPF.
After reading some posts and blogs, I know that %fs:0xfffffffffffffff8 points to Go's g struct, and that the instruction mov %fs:0xfffffffffffffff8,%rcx appears at the start of every Go function.
Taking main.main as an example:
func main() {
  458330:  64 48 8b 0c 25 f8 ff   mov    %fs:0xfffffffffffffff8,%rcx
  458337:  ff ff
  458339:  48 3b 61 10            cmp    0x10(%rcx),%rsp
  45833d:  76 1a                  jbe    458359 <main.main+0x29>
  45833f:  48 83 ec 08            sub    $0x8,%rsp
  458343:  48 89 2c 24            mov    %rbp,(%rsp)
  458347:  48 8d 2c 24            lea    (%rsp),%rbp
	myFunc()
  45834b:  e8 10 00 00 00         callq  458360 <main.myFunc>
}
I also know the goid is stored in Go's g struct, and that the value of the FS register can be obtained via the ctx argument of an eBPF function.
But I don't know what the real address of %fs:0xfffffffffffffff8 is, because I am new to assembly language. Could anyone give me some hints?
If the value of the FS register were 0x88, what would %fs:0xfffffffffffffff8 be?
That's a negative number, so it's one qword before the FS base. You need the FS base address, which is not the selector value in the FS segment register that you could see with a debugger.
Your process probably made a system call to ask the OS to set it, or possibly used the wrfsbase instruction at some point on systems that support it.
Note that at least outside of Go, Linux typically uses FS for thread-local storage.
(I'm not sure what the standard way to actually find the FS base is; it's obviously OS dependent to do that in user-space where rdmsr isn't available; FS and GS base are exposed as MSRs, so OSes use that instead of actually modifying a GDT or LDT entry. rdfsbase needs to be enabled by the kernel setting a bit in CR4 on CPUs that support the FSGSBASE ISA extension, so you can't count on that working.)
@MargaretBloom suggests that user-space could trigger an invalid page fault; most OSes report the faulting virtual address back to user-space. On Linux, for example, SIGSEGV carries the address. (Or SIGBUS if it was non-canonical, IIRC, i.e. not in the low or high 47 bits of virtual address space but in the "hole" where the address isn't the sign-extension of the low 48.)
So you'd want to install signal handlers for those signals and try a load from an offset that (with 0 base) would be in the middle of kernel space, or something like that. If for some reason that doesn't fault, increment the virtual address by 1TiB or something in a loop. Normally no MMIO is mapped into user-space's virtual address space so there are no side effects for merely reading.
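On Linux specifically, user-space can also just ask the kernel for the FS base with arch_prctl(ARCH_GET_FS, ...). A sketch (x86-64 Linux only; the constant comes from &lt;asm/prctl.h&gt;):

```cpp
#include <cassert>
#include <cstdint>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef ARCH_GET_FS
#define ARCH_GET_FS 0x1003  // value from <asm/prctl.h>
#endif

// Returns the FS base address of the calling thread (x86-64 Linux).
uint64_t fs_base() {
    uint64_t base = 0;
    long rc = syscall(SYS_arch_prctl, ARCH_GET_FS, &base);
    assert(rc == 0);
    return base;
}

// %fs:0xfffffffffffffff8 means FS base plus the *signed* offset -8,
// i.e. one qword below the FS base.
uint64_t g_pointer_address() {
    return fs_base() + static_cast<int64_t>(0xfffffffffffffff8ULL);
}
```

This answers the 0x88 question directly: the selector value in FS (0x88, say) is irrelevant; you need the base the OS programmed via the FSBASE MSR, and the effective address is that base minus 8.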
I am not getting an AFL in the GPO response for a Visa contactless application.
The GPO request is as below:
Request: 80 A8 00 00 12 83 10 B6 60 40 00 00 00 00 01 00 00 00 00 38 39 30 31 00
Tag 9F 66: Terminal Transaction Qualifiers : B6 60 40 00
Tag 9F 02: Transaction Amount : 00 00 00 01 00 00
Tag 5F 2A: Transaction Currency Code : 03 56
Tag 9F 37: Unpredictable Number : 38 39 30 31
Getting an AFL is not mandatory. If you do not get an AFL, you are not expected to do any READs. You can skip functions like ODA, since you won't have the data associated with them, and proceed with the available data as such.
As per the Visa specification (VCPS), the AFL is not mandatory.
If it is not returned in the GPO response, the kernel shall skip READ RECORD and proceed to Card Read Complete.
Your Terminal Transaction Qualifiers byte 1 bit 1 is set to zero, meaning "Offline Data Authentication for Online Authorizations not supported". Try setting it to 1: B6 60 40 00 --> B7 60 40 00.
I was having the same issue and this was enough to receive an AFL.
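The suggested change is just setting the least-significant bit of TTQ byte 1; a quick sketch of that bit manipulation (the function name is mine):

```cpp
#include <cassert>
#include <cstdint>

// Set TTQ byte 1, bit 1 (the least-significant bit), which per the answer
// above signals "Offline Data Authentication for Online Authorizations
// supported".
uint8_t enable_oda_for_online(uint8_t ttq_byte1) {
    return ttq_byte1 | 0x01;
}
```

Applied to the request above, 0xB6 becomes 0xB7, turning B6 60 40 00 into B7 60 40 00.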
I am experimenting now with Visa contactless, Get Processing Options, PDOL, and READ RECORD commands.
Here is what I found:
Visa contactless has data accessible via READ RECORD in either record 1 or record 2 of file 1. You do not need to issue GPO to get this data.
A more complicated case is Visa Contactless inside Google Pay.
Unlike the simple PDOL with 4 elements, this "card" application requests a PDOL with over 20 elements. So far I have not been able to guess the proper values for all of them in order to construct a proper PDOL and get an AFL (with SW = 0x9000) in the GPO response.
The application returns 0 bytes for each READ RECORD I tried, and so far I cannot find which record and file contain the application data.
I want to get a byte sequence out of the .text section of an object file and turn it into a signature. I want to execute ClamAV's clamscan with this signature to find other object files containing the same byte sequence.
Dumped with objdump, a byte sequence for this example could look like this:
55 48 89 e5 48 83 ec 10 bf 0a 00 00 00 e8 ?? ?? ?? ?? 48 89 45 f8 c9 c3
the ?? being placeholders.
I didn't find a way to do it with sigtool. Is there another tool for this, or do I have to do it manually? If so, in what form do I have to save the signatures (the format within the signature database, and the format of the database itself)?
I ended up writing a script that does this task. I didn't find a way to make sigtool do it for me. The script ran through the objdump output and replaced the variable bytes. I stored the resulting signatures in a database, and with this database I could identify which library was statically linked, using clamscan in binary mode (even if someone strips out the library names).
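As a sketch of the core transformation such a script performs, here is the conversion of a spaced byte sequence into a ClamAV .ndb line (format SigName:TargetType:Offset:HexSignature, where target type 0 means any file and offset * means anywhere; ?? is ClamAV's single-byte wildcard in hex signatures; the signature name below is made up):

```cpp
#include <cctype>
#include <sstream>
#include <string>

// Builds one line of a ClamAV .ndb database from a byte sequence like
// "55 48 89 E5 E8 ?? ?? ?? ??". Hex digits are lowercased, whitespace is
// stripped, and "??" tokens pass through as ClamAV wildcards.
std::string make_ndb_sig(const std::string& name, const std::string& spaced_bytes) {
    std::istringstream in(spaced_bytes);
    std::string tok, hex;
    while (in >> tok) {
        for (char c : tok) {
            hex += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
        }
    }
    return name + ":0:*:" + hex;
}
```

The resulting lines go into a plain-text file with a .ndb extension, which clamscan can load directly via `-d mysigs.ndb`.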
I am trying to build a program that will give more information about a file, and possibly a disassembler. I looked at https://code.google.com/p/corkami/wiki/PE101 to get more information, and after reading it a few times I understand most of it. The part I don't understand is the call addresses into the Windows API. For example, how did he know that the instruction call [0x402070] was an API call to MessageBox? I understand how to count the addresses to the strings, and the two push commands for strings make sense, but not the DLL part.
I guess what I am trying to say is that I don't understand the part that says "imports structures"
(the part I drew a yellow box around). If anyone could please explain how 0x402068 points to ExitProcess and 0x402070 points to MessageBoxA, that would really help me. Thanks.
The loader (a part of the Windows OS) "patches up" the Import Address Table (IAT) before starting the sample program; that is when the real addresses of the library procedures appear in the memory locations 0x402068 and 0x402070. Please note that the imports reside in a nobits section in simple.asm:
section nobits vstart=IMAGEBASE + 2 * SECTIONALIGN align=FILEALIGN
The section with the imports therefore starts, after loading, at virtual address (IMAGEBASE=400000h) + 2*(SECTIONALIGN=1000h) = 0x402000.
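The address arithmetic is easy to check for yourself; IMAGEBASE and SECTIONALIGN are the constants from simple.asm, and the va() helper is mine:

```cpp
#include <cstdint>

constexpr uint64_t IMAGEBASE    = 0x400000;  // from simple.asm
constexpr uint64_t SECTIONALIGN = 0x1000;    // from simple.asm

// Once an image is loaded, a Relative Virtual Address (RVA) stored in the
// file becomes: image base + RVA.
constexpr uint64_t va(uint64_t rva) { return IMAGEBASE + rva; }
```

With this, the import section lands at va(2 * SECTIONALIGN) = 0x402000, and the RVAs 0x204C and 0x205A seen in the hexdump of the file become the loaded addresses 0x40204C and 0x40205A of the ExitProcess and MessageBoxA name entries.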
The yasm source of the example is quite unusual and the diagram is also not the best place to learn PE format from. Please start by reading Wikipedia:Portable_Executable first (a short article). It has links to the full documents, so I will only make some short notes here.
You might also want to use Cheat Engine to inspect the sample. Launch simple.exe, then attach to the process with Cheat Engine, press Memory View, then menu Tools->Dissect PE headers, then the Info button, and look at the Imports tab. In the memory dump, go to address 00402000 (Ctrl+G, 00402000, Enter):
00402068: E4 39 BE 75 00 00 00 00 69 5F 47 77 00 00 00 00 6B 65 72 6E 65 6C 33 32 2E
Note the values at these locations
00402068: 0x75BE39E4 (on my computer) = the address of KERNEL32.ExitProcess
00402070: 0x77475F69 (in my case only) = the address of user32.MessageBoxA
Notice the text "kernel32.dll user32.dll" right after them. Now look at the hexdump of simple.exe (I would use Far Manager) and spot the same location before strings "kernel32.dll user32.dll". The values there are
0000000450: 69 74 50 72 6F 63 65 73 │ 73 00 00 00 4D 65 73 73 itProcess Mess
0000000460: 61 67 65 42 6F 78 41 00 │ 4C_20_00_00 00 00 00 00 ageBoxA L
0000000470: 5A_20_00_00 00 00 00 00 │ 6B 65 72 6E 65 6C 33 32 Z kernel32
0000000480: 2E 64 6C 6C 00 75 73 65 │ 72 33 32 2E 64 6C 6C 00 .dll user32.dll
0000000468: 0x0000204C — the Relative Virtual Address of dw 0;db 'ExitProcess', 0
0000000470: 0x0000205A — the Relative Virtual Address of dw 0;db 'MessageBoxA', 0
The loader has changed these values from what they were in the file after loading into memory. The Microsoft document pecoff.doc says about it:
6.4.4. Import Address Table
The structure and content of the Import Address Table are identical to that of the Import Lookup Table, until the file is bound. During binding, the entries in the Import Address Table are overwritten with the 32-bit (or 64-bit for PE32+) addresses of the symbols being imported: these addresses are the actual memory addresses of the symbols themselves (although technically, they are still called “virtual addresses”). The processing of binding is typically performed by the loader.
I'm screwing around with the Visual Studio 2013 C++ compiler (what I'm doing is not really important or interesting at all) and I'm running across some very odd behavior. Right now I have the code:
void fun(void)
{
    int *ebp = (int *)(&ebp + 1);
}
Which should give me a pointer to ebp using Visual Studio's stack semantics. NOTE: I have set Basic Runtime Checks to "Default" and I have disabled security checks. When I look at the memory when debugging in Visual Studio I see:
0x009BFBFC 41 03 81 51 fe ff ff ff 44 fc 9b 00
Where 0x009BFBFC is the address of ebp. Note that there is a random -2 in the location immediately preceding ebp (0xfffffffe). Also note that the saved ebp is right after this -2 (0x009bfc44). "Okay" I say, "I'll just add 2 instead!" I now have this code:
void fun(void)
{
    int *ebp = (int *)(&ebp + 2);
}
And when I run it and look at the memory, this time I see:
0x0032FCD8 fe ff ff ff 1c fd 32 00 5e 3c 39 00
Again, 0x0032FCD8 is the address of ebp. What madness is this! The random extra space is now gone giving me a pointer to the return address instead!
Is this deliberate? I can't see any reason why the Visual Studio compiler would intentionally prevent me from accessing the base pointer from code, but then again I can't see why the compiler would behave so oddly when I change a two to a one. For those curious, I looked at the disassembly and the first example does allocate 4 more bytes than the second for no apparent reason (it's not used anywhere). If anyone has any insight, that would be awesome; this is kind of irritating that it would do this.
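One thing worth separating from the compiler's frame-layout quirk is the pointer arithmetic itself: &ebp + 1 moves by one pointer-sized slot (4 bytes on x86, 8 on x64), not by one byte, so each "+ 1" steps exactly one stack slot. A quick portable check of just that part (nothing VS-specific; the frame layout itself remains compiler-dependent):

```cpp
#include <cstddef>
#include <cstdint>

// Distance in bytes between &p and &p + 1: pointer arithmetic on an int*
// lvalue advances by sizeof(int*), i.e. one stack slot per increment.
size_t slot_size() {
    int *p = nullptr;
    return static_cast<size_t>(
        reinterpret_cast<char*>(&p + 1) - reinterpret_cast<char*>(&p));
}
```

So whether &ebp + 1 or &ebp + 2 lands on the saved EBP depends entirely on what the compiler put in the slots between the local and the frame header, which is exactly the extra 4 bytes the disassembly shows being allocated in the first version.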