UMDH not giving call stack - windows

I'm using UMDH (x64) to test for a memory leak. My code is neither FPO-optimized nor using custom allocators; it just uses the "new" operator.
"Create user mode stack trace database" is enabled in GFlags (x64) for the image being tested.
I have traced my application with UMDH in both the non-leaky and the leaky case, obtained the logs for both, and compared them with UMDH. It has picked up the right PDB, as is evident from the comment lines at the top of the log.
Problem:
The call stack doesn't show my code's frames; it only lists generic Windows function names. I have tried both debug and release builds in x64.
Am I missing something?
The code and the diff trace obtained are below:
// code:
#include <iostream>
using namespace std;
void myFunc()
{
    int k;
    cin >> k;
    int* ii = new int[1998];
    if (k == 0) delete[] ii;   // leaks whenever k != 0
}
int main()
{
    myFunc();
    return 0;
}
// stack trace obtained:
+ 390 ( 390 - 0) 1 allocs BackTraceAC905E8D
+ 1 ( 1 - 0) BackTraceAC905E8D allocations
ntdll!RtlpCallInterceptRoutine+0000003F
ntdll!RtlpAllocateHeapInternal+0000069F
ntdll!TppWorkerThread+00000ADB
KERNEL32!BaseThreadInitThunk+00000022
ntdll!RtlUserThreadStart+00000034
.....
.....
...

As described in Using UMDH to Find a User-Mode Memory Leak (MSDN), you need to define the environment variable _NT_SYMBOL_PATH before using UMDH.
If you run it from the command line, use:
set _NT_SYMBOL_PATH=c:\mysymbols;srv*c:\mycache*https://msdl.microsoft.com/download/symbols

Trap memory accesses inside a standard executable built with MinGW

So my problem is this.
I have some platform-dependent code (embedded system) which writes to some MMIO locations that are hardcoded at specific addresses.
I compile this code together with some management code inside a standard executable, mainly for testing, but also for simulation (because it takes longer to find basic bugs on the actual HW platform).
To deal with the hardcoded pointers, I just redefine them as variables inside the memory pool, and this works really well.
The problem is that there is specific hardware behavior on some of the MMIO locations (W1C, for example) which makes "correct" testing hard to impossible.
These are the solutions I thought of:
1 - Somehow redefine the accesses to those registers and try to insert some immediate function to simulate the dynamic behavior. This is not really usable since there are various ways to write to the MMIO locations (pointers and such).
2 - Leave the addresses hardcoded and trap the illegal access through a seg fault, find the location that triggered it, extract exactly where the access was made, handle it and return. I am not really sure how this would work (or even if it's possible).
3 - Use some sort of emulation. This will surely work, but it will defeat the whole purpose of running fast and natively on a standard computer.
4 - Virtualization? It would probably take a lot of time to implement, and I'm not sure the gain is justifiable.
Does anyone have any idea whether this can be accomplished without going too deep? Maybe there is a way to manipulate the compiler to define a memory area for which every access generates a callback? I'm not really an expert in x86/gcc stuff.
Edit: It seems that it's not really possible to do this in a platform-independent way, and since it will be Windows only, I will use the available API (which seems to work as expected). I found this question:
Is set single step trap available on win 7?
I will put the whole "simulated" register file inside a number of pages, guard them, and trigger a callback from which I will extract all the necessary info, do my stuff, then continue execution.
Thanks all for responding.
I think #2 is the best approach. I routinely use approach #4, but I use it to test code that is running in the kernel, so I need a layer below the kernel to trap and emulate the accesses. Since you have already put your code into a user-mode application, #2 should be simpler.
The answers to this question may provide help in implementing #2. How to write a signal handler to catch SIGSEGV?
What you really want to do, though, is to emulate the memory access and then have the segv handler return to the instruction after the access. This sample code works on Linux. I'm not sure whether the behavior it takes advantage of is undefined, though.
#include <stdint.h>
#include <stdio.h>
#include <signal.h>
#include <ucontext.h>

#define REG_ADDR ((volatile uint32_t *)0x12340000f000ULL)

static uint32_t read_reg(volatile uint32_t *reg_addr)
{
    uint32_t r;
    // read through the pointer with a known destination register (eax)
    // and a known instruction length (2 bytes)
    asm("mov (%1), %0" : "=a"(r) : "r"(reg_addr));
    return r;
}

static void segv_handler(int, siginfo_t *, void *);

int main()
{
    struct sigaction action = { 0, };
    action.sa_sigaction = segv_handler;
    action.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &action, NULL);

    // force sigsegv
    uint32_t a = read_reg(REG_ADDR);

    printf("after segv, a = %u\n", a);
    return 0;
}

static void segv_handler(int, siginfo_t *info, void *ucontext_arg)
{
    ucontext_t *ucontext = static_cast<ucontext_t *>(ucontext_arg);
    ucontext->uc_mcontext.gregs[REG_RAX] = 1234;  // emulate the read result
    ucontext->uc_mcontext.gregs[REG_RIP] += 2;    // skip the faulting mov
}
The code to read the register is written in assembly to ensure that both the destination register and the length of the instruction are known.
This is what the Windows version of prl's answer could look like:
#include <stdint.h>
#include <stdio.h>
#include <windows.h>

#define REG_ADDR ((volatile uint32_t *)0x12340000f000ULL)

static uint32_t read_reg(volatile uint32_t *reg_addr)
{
    uint32_t r;
    // same trick as above: known destination register, known 2-byte instruction
    asm("mov (%1), %0" : "=a"(r) : "r"(reg_addr));
    return r;
}

static LONG WINAPI segv_handler(EXCEPTION_POINTERS *);

int main()
{
    SetUnhandledExceptionFilter(segv_handler);

    // force an access violation
    uint32_t a = read_reg(REG_ADDR);

    printf("after segv, a = %u\n", a);
    return 0;
}

static LONG WINAPI segv_handler(EXCEPTION_POINTERS *ep)
{
    // only handle a read access violation of REG_ADDR
    if (ep->ExceptionRecord->ExceptionCode != EXCEPTION_ACCESS_VIOLATION ||
        ep->ExceptionRecord->ExceptionInformation[0] != 0 ||
        ep->ExceptionRecord->ExceptionInformation[1] != (ULONG_PTR)REG_ADDR)
        return EXCEPTION_CONTINUE_SEARCH;

    ep->ContextRecord->Rax = 1234;  // emulate the read result
    ep->ContextRecord->Rip += 2;    // skip the faulting mov
    return EXCEPTION_CONTINUE_EXECUTION;
}
So, the solution (code snippet) is as follows:
First of all, I have a variable:
__attribute__ ((aligned (4096))) int g_test;
Second, inside my main function, I do the following:
AddVectoredExceptionHandler(1, VectoredHandler);
DWORD old;
VirtualProtect(&g_test, 4096, PAGE_READWRITE | PAGE_GUARD, &old);
The handler looks like this:
LONG WINAPI VectoredHandler(struct _EXCEPTION_POINTERS *ExceptionInfo)
{
    static ULONG_PTR last_addr;

    if (ExceptionInfo->ExceptionRecord->ExceptionCode == STATUS_GUARD_PAGE_VIOLATION) {
        // remember which address faulted; the guard is now disarmed by the system
        last_addr = ExceptionInfo->ExceptionRecord->ExceptionInformation[1];
        ExceptionInfo->ContextRecord->EFlags |= 0x100; /* single step to trigger the next one */
        return EXCEPTION_CONTINUE_EXECUTION;
    }

    if (ExceptionInfo->ExceptionRecord->ExceptionCode == STATUS_SINGLE_STEP) {
        // the access has completed; re-arm the guard page
        // (PAGE_MASK is assumed to be 0xFFF for 4 KiB pages)
        DWORD old;
        VirtualProtect((PVOID)(last_addr & ~PAGE_MASK), 4096, PAGE_READWRITE | PAGE_GUARD, &old);
        return EXCEPTION_CONTINUE_EXECUTION;
    }

    return EXCEPTION_CONTINUE_SEARCH;
}
This is only a basic skeleton of the functionality. Basically, I guard the page on which the variable resides, and I keep some linked lists in which I hold pointers to the callback functions and the addresses in question. I check that the fault-generating address is inside my list and then trigger the callback (a sketch of such a registry is shown below).
On the first guard hit, the page protection will be disabled by the system, but I can call my PRE_WRITE callback, where I can save the variable's state. Because a single step is requested through EFlags, it will be followed immediately by a single-step exception (which means that the variable has been written), and I can trigger a WRITE callback. All the data required for the operation is contained inside the ExceptionInformation array.
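A minimal sketch of such a monitor registry could look like the following. The names reg_monitor_t, register_monitor and dispatch are made up for illustration (and ULONG_PTR comes from <windows.h>); only the mechanism — match the faulting address, then call the hook — is taken from the description above:
typedef enum { REG_READ, REG_PRE_WRITE, REG_WRITE } reg_cbk_t;
typedef void (*reg_cbk_fn)(reg_cbk_t type);

/* one monitored address with its callback, kept in a simple linked list */
typedef struct reg_monitor {
    ULONG_PTR addr;
    reg_cbk_fn hook;
    struct reg_monitor *next;
} reg_monitor_t;

static reg_monitor_t *g_monitors;

static void register_monitor(reg_monitor_t *m)
{
    m->next = g_monitors;
    g_monitors = m;
}

/* called from the vectored handler with the faulting address */
static void dispatch(ULONG_PTR fault_addr, reg_cbk_t type)
{
    for (reg_monitor_t *m = g_monitors; m; m = m->next)
        if (m->addr == fault_addr)
            m->hook(type);
}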
When someone tries to write to that variable:
*(int *)&g_test = 1;
A PRE_WRITE followed by a WRITE will be triggered.
When I do:
int x = *(int *)&g_test;
A READ will be issued.
In this way I can manipulate the data flow in a way that does not require modifications of the original source code.
Note: This is intended to be used as part of a test framework and any penalty hit is deemed acceptable.
For example, a W1C (write 1 to clear) operation can be implemented like this:
void MYREG_hook(reg_cbk_t type)
{
    /** We need to save the pre-write state.
     * This is safe since we are assured to be called with
     * both PRE_WRITE and WRITE in the correct order.
     */
    static int pre;

    switch (type) {
    case REG_READ:      /* Called pre-read */
        break;
    case REG_PRE_WRITE: /* Called pre-write */
        pre = g_test;
        break;
    case REG_WRITE:     /* Called after write */
        g_test = pre & ~g_test; /* W1C */
        break;
    default:
        break;
    }
}
This was also possible with seg-faults on illegal addresses, but I had to take one for each R/W and keep track of a "virtual register file", so the penalty hit was bigger. This way I can guard only specific areas of memory, or none, depending on the registered monitors.

Bison+Flex segfault no backtrace

I'm trying to debug code generated by Bison + Flex (what a joy!). It segfaults so badly that there isn't even stack information available to gdb. Is there any way to make this combination generate code that's more debuggable?
Note that I'm trying to compile a reentrant lexer and parser (which is in itself a huge pain).
Below is the program that tries to use the yyparse:
int main(int argc, char** argv) {
    int res;
    if (argc == 2) {
        yyscan_t yyscanner;
        res = yylex_init(&yyscanner);
        if (res != 0) {
            fprintf(stderr, "Couldn't initialize scanner\n");
            return res;
        }
        FILE* h = fopen(argv[1], "rb");
        if (h == NULL) {
            fprintf(stderr, "Couldn't open: %s\n", argv[1]);
            return errno;
        }
        yyset_in(h, yyscanner);
        fprintf(stderr, "Scanner set\n");
        res = yyparse(&yyscanner);
        fprintf(stderr, "Parsed\n");
        yylex_destroy(&yyscanner);
        return res;
    }
    if (argc > 2) {
        fprintf(stderr, "Wrong number of arguments\n");
    }
    print_usage();
    return 1;
}
Trying to run this gives:
(gdb) r
Starting program: /.../program
[Inferior 1 (process 3292) exited with code 01]
Note 2: I'm passing -d to flex and -t to bison.
After shuffling the code around I was able to get a backtrace. But... it appears that passing -t has zero effect, as does the %debug directive in the *.y file. The only way to get traces is to set yydebug = 1 in your code.
You are clobbering the stack by passing the address of yyscanner instead of its value to yyparse. Once the stack has been overwritten in that fashion, even gdb will be unable to provide accurate backtraces.
The -t and %debug directives cause bison to emit the code necessary to perform debugging traces. (This makes the parser code somewhat larger and a tiny bit slower, so it is not enabled by default.) That is necessary for tracing to work, but you still have to request traces by setting yydebug to a non-zero value.
This is mentioned right at the beginning of the Bison manual section on tracing: (emphasis added)
8.4.1 Enabling Traces
There are several means to enable compilation of trace facilities
And slightly later on:
Once you have compiled the program with trace facilities, the way to request a trace is to store a nonzero value in the variable yydebug. You can do this by making the C code do it (in main, perhaps), or you can alter the value with a C debugger.
Unless you are working in an extremely resource-constrained environment, I suggest you always use the -t option, as do the Bison authors:
We suggest that you always enable the trace option so that debugging is always possible.
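For example, a minimal way to turn traces on from the code itself, rather than from a debugger, is to set yydebug before calling yyparse. This is just a sketch and the helper name is made up, but yydebug itself is the standard Bison trace variable:
/* yydebug is defined by the parser that bison generates when it is
 * built with -t (or %debug); declaring it here lets our code set it */
extern int yydebug;

static void enable_parser_traces(void)
{
    yydebug = 1;   /* nonzero: yyparse() prints shift/reduce traces to stderr */
}
Call enable_parser_traces() (or simply assign yydebug = 1 in main) before calling yyparse.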

Why is this loop not executed?

I compile with GCC 5.3 2016q1 for an STM32 microcontroller.
Right at the beginning of main I placed a small routine to fill the stack with a pattern. Later I search for the highest address that still holds this pattern to find out about stack usage; you surely know this technique. Here is my routine:
uint32_t* Stack_ptr = 0;
uint32_t Stack_bot;
uint32_t n = 0;
asm volatile ("str sp, [%0]" :: "r" (&Stack_ptr));
Stack_bot = (uint32_t)(&_estack - &_Min_Stack_Size);
//*
n = 0;
while ((uint32_t)(Stack_ptr) > Stack_bot)
{
    Stack_ptr--;
    n++;
    *Stack_ptr = 0xAA55A55A;
} // */
After that I initialize the hardware, including a UART, and print out the values of Stack_ptr, Stack_bot and n, and then the stack contents.
The results are 0x20007FD8 0x20007C00 0
Stack_bot is the expected value because I have 0x400 bytes of stack in 32k of RAM starting at 0x20000000. But I would expect Stack_ptr to be 0x20008000 and n to be somewhat under 0x400 after the loop has finished. Also, the stack contents show no entries of 0xAA55A55A. This means the loop is not executed.
I could only manage to get it executed by creating a small function that holds the above routine and disabling optimization for this function.
Does anybody know why that is? And the strangest thing about it is that I could swear it worked a few days ago; I saw a lot of 0xAA55A55A in the stack dump.
Thanks a lot
Martin
The problem is probably with the inline assembly. In my code I use this:
// defined by linker script, pointing to end of static allocation
extern unsigned _heap_start;
void fill_heap(unsigned fill = 0x55555555) {
    unsigned *dst = &_heap_start;
    register unsigned *msp_reg;

    // read the current main stack pointer
    __asm__("mrs %0, msp\n" : "=r" (msp_reg));

    while (dst < msp_reg) {
        *dst++ = fill;
    }
}
It will fill the memory between _heap_start and the current SP.
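For completeness, the measurement step the question describes (finding the highest address that still holds the pattern) could look roughly like this. The symbols _estack and _Min_Stack_Size and the 0xAA55A55A pattern come from the question; the function name is made up and this is only a sketch for a 32-bit Cortex-M target:
#include <stdint.h>

extern uint32_t _estack;          // top of RAM / initial SP, from the linker script
extern uint32_t _Min_Stack_Size;  // reserved stack size, from the linker script

// count how many 32-bit words of the reserved stack still hold the fill pattern,
// i.e. were never overwritten since the pattern was written
uint32_t unused_stack_words(void)
{
    uint32_t *p = (uint32_t *)((uint32_t)&_estack - (uint32_t)&_Min_Stack_Size);
    uint32_t n = 0;
    while ((uint32_t)&p[n] < (uint32_t)&_estack && p[n] == 0xAA55A55A)
        n++;
    return n;  // worst-case stack usage = reserved size - 4 * n
}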

How to calculate required stack size for tree of called functions in GCC

Is it possible to determine the stack demand of a non-recursive function without external computation, right in the text of the program? I need this to allocate a memory resource for a thread on very small micro-controllers, such as AVR, and I need to know it before calling the function. The --stack-usage directive is not very informative, unfortunately. Or do I not understand something?
Getting the address of a passed argument yields its place on the stack. Therefore running this:
#include <stdio.h>

void my_fun(int dummy);
int get_stack_space(int dummy);

int main(void)
{
    int dummy = 0;
    my_fun(dummy);
    return 0;
}

void my_fun(int dummy)
{
    // do stuff
    printf("%d\n", get_stack_space((int)&dummy));
    return;
}

int get_stack_space(int dummy)
{
    // dummy holds the address of my_fun's argument (the cast is fine on AVR,
    // where a pointer fits in an int); &dummy is the address of this argument
    return dummy - (int)&dummy;
}
should get you the distance in bytes on the stack between the point of calling my_fun() and calling get_stack_space(). Hope it helps.
Edit: on x86 you get the distance + a machine word for the push of the return address when calling my_fun() + a machine word for the push of ebp at the start of my_fun()

why this signal handler is called infinitely

I am using Mac OS 10.6.5 with g++ 4.2.1, and I ran into a problem with the following code:
#include <iostream>
#include <sys/signal.h>
using namespace std;
void segfault_handler(int signum)
{
    cout << "segfault caught!!!\n";
}

int main()
{
    signal(SIGSEGV, segfault_handler);

    int* p = 0;
    *p = 100;   // deliberately write through a null pointer

    return 1;
}
It seems the segfault_handler is called infinitely and keeps printing:
segfault caught!!!
segfault caught!!!
segfault caught!!!
...
I am new to Mac development; do you have any idea what happened?
This is because after your signal handler executes, the EIP is set back to the instruction which caused the SIGSEGV, so it executes again and SIGSEGV is raised again.
Usually ignoring SIGSEGV like you do is meaningless anyway. Suppose the instruction actually read some value through a pointer into a register: what would you do? You don't have any 'correct' value to put in the register, so the following code will likely SIGSEGV again or, worse, trigger some logic error.
You should either exit the process when SIGSEGV happens, or return to a known safe point - longjmp should work, if you know that this is indeed the safe point (the only possible example that comes to mind is VM interpreters/JITs).
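A minimal sketch of the longjmp approach, assuming POSIX sigsetjmp/siglongjmp (the recovery-point and handler names are made up for illustration):
#include <csetjmp>
#include <csignal>
#include <cstdio>

static sigjmp_buf recover_point;

static void segfault_handler(int)
{
    // instead of retrying the faulting instruction, jump back to the safe point;
    // siglongjmp also restores the signal mask saved by sigsetjmp
    siglongjmp(recover_point, 1);
}

int main()
{
    signal(SIGSEGV, segfault_handler);

    if (sigsetjmp(recover_point, 1) == 0) {
        int* volatile p = 0;
        *p = 100;                       // faults; the handler long-jumps past this block
    } else {
        std::printf("recovered from the segfault\n");
    }
    return 0;
}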
Have you tried returning 0 instead of 1 in your program? Traditionally, values other than 0 indicate error. Also, does removing the two lines dealing with *p resolve it?
