Interpreting Call Stack output in Visual Studio

System info: Windows 7, MSVS 2010
The following is a simple program in which I am testing how the Call Stack window works while debugging:
#include "stdafx.h" // MSVC precompiled header; must be the first include
#include <stdio.h>

int main()
{
    printf("hello"); //breakpoint
}
When I debug, control hits the breakpoint and the Call Stack is:
testapp.exe!main() Line 10 C++
testapp.exe!__tmainCRTStartup() Line 555 + 0x19 bytes C
testapp.exe!mainCRTStartup() Line 371 C
kernel32.dll!75e7ed6c()
[Frames below may be incorrect and/or missing, no symbols loaded for kernel32.dll]
ntdll.dll!77a537eb()
ntdll.dll!77a537be()
How do I interpret this result? And what is __tmainCRTStartup()?
Update
Just checked: the Call Stack output is the same even if I use a .c file instead of a .cpp file.

The call stack shows the chain of function calls that led to the current point of execution; the top frame is the current location.
In your example the relevant line is testapp.exe!main() Line 10 C++, which means the debugger is stopped in a function called main() at line 10 of your file. Normally this entry contains the filename too. The frames below it are the C runtime's startup code: __tmainCRTStartup() is the CRT's internal startup routine (the TCHAR-neutral worker shared by the ANSI and Unicode builds, which is why you see it for both .c and .cpp files); it initializes the runtime and then calls your main(). mainCRTStartup() is the executable's actual entry point, which simply calls it.
Paste this code into your file and see if the call stack makes more sense for you when you break:
#include <stdio.h>

void apple();  // forward declarations so main() can call them
void banana();

int main()
{
    apple();
}

void apple()
{
    banana();
}

void banana()
{
    printf("hello"); //breakpoint
}
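When the breakpoint hits, the Call Stack should read something like this (illustrative, using the line numbers of the snippet above; your own numbers will vary):
testapp.exe!banana() Line 18 C++
testapp.exe!apple() Line 13 C++
testapp.exe!main() Line 8 C++
testapp.exe!__tmainCRTStartup() Line 555 + 0x19 bytes C
testapp.exe!mainCRTStartup() Line 371 C
Reading top-down: banana() was called from apple(), which was called from main().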

Related

UMDH not giving call stack

I'm using UMDH (x64) to test for memory leaks. My code is neither FPO-optimized nor using custom allocators; it just uses the "new" operator.
"Create user mode stack trace database" is enabled in GFlags (x64) for the image being tested.
I have traced my application with UMDH in both the non-leaky and the leaky case, obtained the logs for both, and compared them with UMDH. It has picked up the right PDB, as is evident from the comment lines at the top.
Problem:
The call stack doesn't show my code's frames; it just traces generic Windows function names. I have tried both debug and release builds in x64.
Am I missing something?
The code and diff trace obtained are below:
// code:
#include <iostream>
using namespace std;

void myFunc()
{
    int k;
    cin >> k;
    int* ii = new int[1998];
    if (k == 0) delete[] ii; // leaks whenever k != 0
}

int main()
{
    myFunc();
    return 0;
}
// stack trace obtained:
+ 390 ( 390 - 0) 1 allocs BackTraceAC905E8D
+ 1 ( 1 - 0) BackTraceAC905E8D allocations
ntdll!RtlpCallInterceptRoutine+0000003F
ntdll!RtlpAllocateHeapInternal+0000069F
ntdll!TppWorkerThread+00000ADB
KERNEL32!BaseThreadInitThunk+00000022
ntdll!RtlUserThreadStart+00000034
.....
.....
...
As described in Using UMDH to Find a User-Mode Memory Leak (MSDN), you need to define the environment variable _NT_SYMBOL_PATH before using UMDH.
If you run it from the command line, use
set _NT_SYMBOL_PATH=c:\mysymbols;srv*c:\mycache*https://msdl.microsoft.com/download/symbols
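With the symbol path set, a typical session then looks something like this (the image name, PID, and log file names are placeholders for illustration, not from the question):
gflags /i testapp.exe +ust
umdh -p:1234 -f:before.log
(exercise the leaky code path)
umdh -p:1234 -f:after.log
umdh before.log after.log > diff.log
The diff log is where the leaked allocations show up with full call stacks once the symbols resolve.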

What is the difference between main and mainCRTStartup?

I'm trying to understand how substituting a different entry point for WinMain works in the Microsoft toolchain.
I already found this question and it was super helpful, but one last detail is nagging at me.
The first time I changed the Linker>Advanced>Entry Point option in Visual Studio, I set it to main by mistake and my program compiled and ran fine. I realized it later and rebuilt the program with it set to mainCRTStartup, as the accepted answer in the linked question suggests, and didn't find anything different.
So, my question is: is there any difference at all between main and mainCRTStartup, and if so, what is the difference?
main() is the entry point of your C or C++ program. mainCRTStartup() is the entry point of the C runtime library: it initializes the CRT, runs any static initializers in your code, and then calls your main() function.
Clearly it is essential that both the CRT's initialization and your own are performed first. You can suffer from pretty hard-to-diagnose bugs if that doesn't happen. Maybe you won't; it is a crap-shoot. Something you can test by pasting this code into a small C++ program:
#include <iostream>

class Foo {
public:
    Foo() {
        std::cout << "init done" << std::endl;
    }
} TestInit;
If you change the entry point to "main" then you'll see that the constructor never gets called: with the default entry point the program prints "init done" before main() runs; with the entry point forced to "main" it prints nothing.
This is bad.
In VS2017, create a console C++ application:
#include "pch.h"
#include <iostream>
int func()
{
return 1;
}
int v = func();
int main()
{
}
Set a breakpoint in main() and begin debugging; the call stack then looks like:
testCppConsole.exe!main() Line 8 C++
testCppConsole.exe!invoke_main() Line 78 C++
testCppConsole.exe!__scrt_common_main_seh() Line 288 C++
testCppConsole.exe!__scrt_common_main() Line 331 C++
testCppConsole.exe!mainCRTStartup() Line 17 C++
kernel32.dll!@BaseThreadInitThunk@12() Unknown
ntdll.dll!__RtlUserThreadStart() Unknown
ntdll.dll!__RtlUserThreadStart@8() Unknown
So the program entry point is mainCRTStartup; it eventually calls the C entry point main(), and the value of v will be 1.
Now set Linker>Advanced>Entry Point to "main" and begin debugging; now the call stack is:
> testCppConsole.exe!main() Line 8 C++
kernel32.dll!@BaseThreadInitThunk@12() Unknown
ntdll.dll!__RtlUserThreadStart() Unknown
ntdll.dll!__RtlUserThreadStart@8() Unknown
So main() becomes the program entry point, and this time the value of v will be 0, because the CRT init functions are never called, so func() is never called.
Now modify the code to:
#include "pch.h"
#include <iostream>
extern "C" int mainCRTStartup();
extern "C" int entry()
{
return mainCRTStartup();
}
int func()
{
return 1;
}
int v = func();
int main()
{
}
and set Linker>Advanced>Entry Point to "entry" and begin debugging; now the call stack is:
> testCppConsole.exe!main() Line 14 C++
testCppConsole.exe!invoke_main() Line 78 C++
testCppConsole.exe!__scrt_common_main_seh() Line 288 C++
testCppConsole.exe!__scrt_common_main() Line 331 C++
testCppConsole.exe!mainCRTStartup() Line 17 C++
testCppConsole.exe!entry() Line 10 C++
kernel32.dll!@BaseThreadInitThunk@12() Unknown
ntdll.dll!__RtlUserThreadStart() Unknown
ntdll.dll!__RtlUserThreadStart@8() Unknown
and v will be 1 again. The program entry point is entry(); it calls mainCRTStartup(), which calls the CRT init functions, which call func() to initialize v, and mainCRTStartup() finally calls main().

Steps to make a loadable DLL of some tcl methods in Visual Studio

I want to create a loadable DLL from some of my Tcl procs, but I am not sure how to do this. As a simple example, I have a Tcl proc that adds two numbers and prints the sum, and I want to create a loadable DLL that exports this Tcl functionality.
I am not sure how to do this in Visual Studio. I have written C code that can call this Tcl proc and get the sum of the two integers, but I don't want to do it that way; I want to create a DLL file that exposes this Tcl functionality. How can I create this DLL in Visual Studio 2010?
Below is my sample tcl program that I am using:
#!/usr/bin/env tclsh8.5
proc add_two_nos { } {
    set a 10
    set b 20
    set c [expr { $a + $b } ]
    puts " c is $c ......."
}
And here is the C code which can use this tcl functionality :
#include <tcl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h> /* for strlen() */

int main(int argc, char **argv) {
    Tcl_Interp *interp;
    int code;
    char *result;

    Tcl_FindExecutable(argv[0]);
    interp = Tcl_CreateInterp();
    code = Tcl_Eval(interp, "source myscript.tcl; add_two_nos");

    /* Retrieve the result... */
    result = Tcl_GetString(Tcl_GetObjResult(interp));

    /* Check for error! If an error, message is result. */
    if (code == TCL_ERROR) {
        fprintf(stderr, "ERROR in script: %s\n", result);
        exit(1);
    }

    /* Print (normal) result if non-empty; we'll skip handling encodings for now */
    if (strlen(result)) {
        printf("%s\n", result);
    }

    /* Clean up */
    Tcl_DeleteInterp(interp);
    exit(0);
}
I have successfully compiled this code with the below command
gcc simple_addition_wrapper_new.c -I/usr/include/tcl8.5/ -ltcl8.5 -o simple_addition_op
The above code works and produces the expected output.
What steps do I need to take to create a loadable DLL for this in Visual Studio 2010?
The answers to this question give the basic outline of the process you need to go through, and there are links from my answer there to some Microsoft MSDN articles on creating DLLs.
To go into this in a little more detail for a C++ DLL that has Tcl embedded in it:
The first step is to create a new visual studio project with the correct type, one that is going to build a dll that exports symbols. My example project is called TclEmbeddedInDll and that name appears in code in symbols such as TCLEMBEDDEDINDLL_API that are generated by Visual Studio.
The dllmain.cpp looks like this:
// dllmain.cpp : Defines the entry point for the DLL application.
#include "stdafx.h"

BOOL APIENTRY DllMain( HMODULE hModule,
                       DWORD   ul_reason_for_call,
                       LPVOID  lpReserved
                     )
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
    {
        allocInterp();
        break;
    }
    case DLL_THREAD_ATTACH:
        break;
    case DLL_THREAD_DETACH:
        break;
    case DLL_PROCESS_DETACH:
    {
        destroyInterp();
        break;
    }
    }
    return TRUE;
}
The allocInterp() and destroyInterp() functions are declared in TclEmbeddedInDll.h. The reason for using functions here, rather than creating the Tcl_Interp directly, is that it keeps the details about Tcl away from the DLL interface; if you created the interp here you would have to include tcl.h, and then things get complicated when you try to use the DLL in another program.
The TclEmbeddedInDll.h and .cpp are shown next. The function fnTclEmbeddedInDll() is the one exported from the DLL; I'm using C linkage for this rather than C++, as it makes it easier to call the function from other languages IMHO.
// The following ifdef block is the standard way of creating macros which make exporting
// from a DLL simpler. All files within this DLL are compiled with the TCLEMBEDDEDINDLL_EXPORTS
// symbol defined on the command line. This symbol should not be defined on any project
// that uses this DLL. This way any other project whose source files include this file see
// TCLEMBEDDEDINDLL_API functions as being imported from a DLL, whereas this DLL sees symbols
// defined with this macro as being exported.
#ifdef TCLEMBEDDEDINDLL_EXPORTS
#define TCLEMBEDDEDINDLL_API __declspec(dllexport)
#else
#define TCLEMBEDDEDINDLL_API __declspec(dllimport)
#endif
extern "C" {
TCLEMBEDDEDINDLL_API void fnTclEmbeddedInDll(void);
}
void allocInterp() ;
void destroyInterp() ;
// TclEmbeddedInDll.cpp : Defines the exported functions for the DLL application.
//
#include "stdafx.h"

extern "C" {
    static Tcl_Interp *interp;

    // This is an example of an exported function.
    TCLEMBEDDEDINDLL_API void fnTclEmbeddedInDll(void)
    {
        int code;
        const char *result;

        code = Tcl_Eval(interp, "source simple_addition.tcl; add_two_nos");
        result = Tcl_GetString(Tcl_GetObjResult(interp));
    }
}

void allocInterp()
{
    Tcl_FindExecutable(NULL);
    interp = Tcl_CreateInterp();
}

void destroyInterp()
{
    Tcl_DeleteInterp(interp);
}
The implementation of allocInterp() and destroyInterp() is very naive; no error checking is done.
Finally, for the DLL, the stdafx.h file ties it all together like this:
// stdafx.h : include file for standard system include files,
// or project specific include files that are used frequently, but
// are changed infrequently
//
#pragma once
#include "targetver.h"
#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers
// Windows Header Files:
#include <windows.h>
// TODO: reference additional headers your program requires here
#include <tcl.h>
#include "TclEmbeddedInDll.h"

How to convert a function address to a symbol

Let's say I have a program like this
// print-addresses.cpp
#include <stdio.h>

void foo() { }
void bar() { }
void moo() { }

int main(int argc, const char** argv) {
    // Cast to void* so the arguments match the %p conversion.
    printf("%p\n", (void*)&foo);
    printf("%p\n", (void*)&bar);
    printf("%p\n", (void*)&moo);
    return 0;
}
It prints some numbers like
013510F0
013510A0
01351109
How do I convert those numbers back into the correct symbols? Effectively I'd like to be able to do this
print-addresses > address.txt
addresses-to-symbols < address.txt
And have it print
foo
bar
moo
I know this has something to do with the Debug Interface Access SDK but it's not entirely clear to me how I go from an address to a symbol.
This seems like exactly what you're looking for: Retrieving Symbol Information by Address. This uses DbgHelp.dll and relies on calling SymFromAddr. You have to do that (I think) from within the running application, or by reading in a minidump file.
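As a concrete illustration of that approach, here is a minimal in-process sketch (my own, not from the linked article). It assumes the executable's .pdb can be found, and the buffer-sizing idiom follows the SYMBOL_INFO documentation:
// Sketch: resolve one of our own addresses back to a symbol name with DbgHelp.
#include <windows.h>
#include <dbghelp.h>
#include <stdio.h>
#pragma comment(lib, "dbghelp.lib")

void foo() { }

int main() {
    HANDLE process = GetCurrentProcess();
    SymInitialize(process, NULL, TRUE);   // TRUE: enumerate loaded modules now

    // SYMBOL_INFO is a variable-length struct; reserve room for the name.
    ULONG64 buffer[(sizeof(SYMBOL_INFO) + MAX_SYM_NAME * sizeof(TCHAR)
                    + sizeof(ULONG64) - 1) / sizeof(ULONG64)];
    SYMBOL_INFO* symbol = (SYMBOL_INFO*)buffer;
    symbol->SizeOfStruct = sizeof(SYMBOL_INFO);
    symbol->MaxNameLen = MAX_SYM_NAME;

    DWORD64 displacement = 0;
    if (SymFromAddr(process, (DWORD64)&foo, &displacement, symbol))
        printf("%s\n", symbol->Name);     // prints "foo" when symbols resolve

    SymCleanup(process);
    return 0;
}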
You can also use the DIA, but the calling sequence is a bit more complicated. Call IDiaDataSource::loadDataForExe and IDiaDataSource::openSession to get an IDiaSession, then IDiaSession::getSymbolsByAddr to get IDiaEnumSymbolsByAddr. Then, IDiaEnumSymbolsByAddr::symbolByAddr will let you look up a symbol by address. There is also a way (shown in the example at the last link) to enumerate all symbols.
EDIT: This DIA sample application might be a good starting point for using DIA: http://msdn.microsoft.com/en-us/library/hd8h6f46%28v=vs.71%29.aspx . Particularly check out the parts using IDiaEnumSymbolsByAddr.
You could also parse the output of dumpbin, probably with /SYMBOLS or /DISASM option.
If you are on Linux, you can try addr2line:
addr2line -f -e executablebin addr
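For example, with the print-addresses program above (illustrative output, not captured; -C demangles C++ names):
$ addr2line -f -C -e print-addresses 0x13510F0
foo()
print-addresses.cpp:4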

scoped_lock doesn't work on file?

Based on the reference below, I wrote a small test case, but it doesn't work. Any ideas are appreciated!
Reference:
http://www.cppprog.com/boost_doc/doc/html/interprocess/synchronization_mechanisms.html#interprocess.synchronization_mechanisms.file_lock.file_lock_careful_iostream
#include <iostream>
#include <fstream>
#include <boost/interprocess/sync/file_lock.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>

using namespace std;
using namespace boost::interprocess;

int main()
{
    ofstream file_out("fileLock.txt");
    file_lock f_lock("fileLock.txt");
    {
        scoped_lock<file_lock> e_lock(f_lock); // it works if I comment this out
        file_out << 10;
        file_out.flush();
        file_out.close();
    }
    return 0;
}
Running the test on Linux produces your desired output. I notice these two warnings:
The page you reference has this warning: "If you are using a std::fstream/native file handle to write to the file while using file locks on that file, don't close the file before releasing all the locks of the file."
Boost::file_lock apparently uses LockFileEx on Windows. MSDN has this to say: "If the locking process opens the file a second time, it cannot access the specified region through this second handle until it unlocks the region."
It seems like, on Windows at least, the file lock is per-handle, not per-file. As near as I can tell, that means that your program is guaranteed to fail under Windows.
Your code appears to be susceptible to this long-standing bug on the boost trac site: https://svn.boost.org/trac/boost/ticket/2796
The title of that bug is "interprocess::file_lock has incorrect behavior when win32 api is enabled".
Here is a workaround to append to a file with file locking, based on Boost 1.44.
#include "boost/format.hpp"
#include "boost/interprocess/detail/os_file_functions.hpp"
namespace ip = boost::interprocess;
namespace ipc = boost::interprocess::detail;
void fileLocking_withHandle()
{
static const string filename = "fileLocking_withHandle.txt";
// Get file handle
boost::interprocess::file_handle_t pFile = ipc::create_or_open_file(filename.c_str(), ip::read_write);
if ((pFile == 0 || pFile == ipc::invalid_file()))
{
throw runtime_error(boost::str(boost::format("File Writer fail to open output file: %1%") % filename).c_str());
}
// Lock file
ipc::acquire_file_lock(pFile);
// Move writing pointer to the end of the file
ipc::set_file_pointer(pFile, 0, ip::file_end);
// Write in file
ipc::write_file(pFile, (const void*)("bla"), 3);
// Unlock file
ipc::release_file_lock(pFile);
// Close file
ipc::close_file(pFile);
}
