Do I need to save more than PDBs to debug a crash dump file? - windows

Exactly how much of the original build do I need to save in order to properly debug a crash dump file sent to me by a customer? Obviously I need the PDBs. Do I need anything else?
(This would be for a crash dump file written by the MiniDumpWriteDump function from dbghelp.dll.)
Until now I've always saved off the entire build folder: code, PDBs, .OBJ files, output binaries, everything, just to be safe. I'd like to minimize what I save, but I can't afford to find out the hard way that I missed something.
The actual source code will be in source control and tagged with the build label so I can pull by label and get exactly what I used to build. Would I even need to bother pulling source before debugging the crash dump or is it enough to have just the PDBs?

For your own company's code, PDBs should be all you need. Source code is helpful too, because PDBs provide line numbers, so with the source at hand you can look directly at the offending code.
Typically you would not copy the PDBs, but set up a symbol server instead. You would also not copy the source code, but let a source server manage that. Team Foundation Server (TFS) provides both. But there are alternatives that do not need TFS.
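For example (the share, product name and version below are placeholders), populating a file-share symbol store with symstore.exe from the Debugging Tools for Windows looks roughly like this:

symstore add /r /f C:\build\1.2.3\*.pdb /s \\server\symbols /t "MyApp" /v "1.2.3"

Debuggers pointed at \\server\symbols then pick the matching PDB automatically via the GUID and age stamped into each binary.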
In the case of a .NET program, you also need SOS.dll and mscordacwks.dll from the client machine where the crash occurred, in the correct version (note that several versions of the .NET Framework may be installed).
SOS is the debugging extension for WinDbg that enables .NET debugging. MSCORDACWKS is the Data Access Component (DAC) that knows what objects in memory look like (MS = Microsoft, COR = .NET, WKS = workstation). There's a dedicated WinDbg command for loading it, .cordll.
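A rough sketch of the WinDbg side, from memory rather than a verified recipe: copy the client's mscordacwks.dll somewhere the debugger can find it (your symbol path, for instance), then

.cordll -ve -u -l
.loadby sos mscorwks

.cordll -ve -u -l unloads and reloads the DAC with verbose output so you can see which mscordacwks.dll was picked up; .loadby sos mscorwks (or .loadby sos clr on .NET 4) loads SOS from the same location as the runtime in the dump.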

Related

Store and manage multiple PDB files?

When building my application, some PDB files are generated which are useful during debugging. When a user experiences a crash, I get the dump file and can use it to analyse the problem.
Unfortunately, the version of the PDB file needs to match the build the crash dump was generated from - in other words, for every release I build I need to store the related PDBs in order to have them available for later analysis.
Now I know that MS offers a tool called a "symbol server" (SymStore) which does the complete job of storing and managing the PDB files of a build. Unfortunately that is way too complex a solution for me.
So my question: is there an easy-to-use and simple-to-handle alternative for storing multiple versions of PDB files so that they are available for crash dump analysis?
Thanks!
Note that you don't need a symbol server, just a directory. See also How to set up a symbol server. With symstore, you only need to set up your symbol path once, save it in the WinDbg workspace or in Visual Studio and you're ready to go.
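A hedged example of that one-time setup (the store path is a placeholder, the first part is a local download cache):

.sympath srv*C:\symbols\cache*\\server\symbols
.reload

In Visual Studio the equivalent setting is under Tools > Options > Debugging > Symbols.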
The alternative is to manually create a folder named after the version number of your build and move all PDBs there.
The debugging then goes like this (WinDbg):
lm m <yourapp>
to find out the version number
.sympath C:\path\to\symbols\<version>
Similar in Visual Studio: you need to find out the version and then change the symbol path. It's always 2 steps instead of 0.

Which tool to use to open .pdb (symbol) files?

I have a .pdb file, downloaded from the MS symbol server. I need to fetch the list of symbols it contains (functions, arguments, anything it has). There is a tool on CodeProject, but it only reports modules. There is the DbgHelp API, but it can only be attached to a running process. How can I read a .pdb file offline?
Good news for anyone still looking:
The information you seek is now open source!
https://github.com/Microsoft/microsoft-pdb
Some really interesting stuff there, like the pdbdump.cpp file with its dumpPublics function and its main flow control. Good documentation, too.
You can also use Visual Studio's Dia2Dump sample program to dump human-readable output from a PDB file, including its public symbols.
Be sure to build it as a 32-bit application though, or you might run into some problems with it. (See dia2dump: CoCreateInstance failed - HRESULT = 80040154)
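If you would rather write your own little dumper than build the whole sample, a minimal sketch along the same lines (untested, error handling trimmed; the DIA SDK headers and diaguids.lib ship in the "DIA SDK" folder of a Visual Studio installation, and per the note above it should be built as 32-bit) might look like:

#include <dia2.h>       // DIA SDK header (some SDK versions also need cvconst.h for the SymTag values)
#include <atlbase.h>    // CComPtr
#include <cstdio>

// Hypothetical pdblist.cpp - prints the names of all public symbols in a PDB.
// Link against diaguids.lib from the DIA SDK\lib folder.
int wmain(int argc, wchar_t* argv[])
{
    if (argc < 2) { wprintf(L"usage: pdblist <file.pdb>\n"); return 1; }

    CoInitialize(NULL);

    CComPtr<IDiaDataSource> source;
    if (FAILED(CoCreateInstance(__uuidof(DiaSource), NULL, CLSCTX_INPROC_SERVER,
                                __uuidof(IDiaDataSource), (void**)&source)))
        return 1;   // msdiaXX.dll not registered - regsvr32 it, or use NoRegCoCreate

    if (FAILED(source->loadDataFromPdb(argv[1]))) return 1;

    CComPtr<IDiaSession> session;
    if (FAILED(source->openSession(&session))) return 1;

    CComPtr<IDiaSymbol> globalScope;
    session->get_globalScope(&globalScope);

    // Walk all public symbols under the global scope and print their names.
    CComPtr<IDiaEnumSymbols> symbols;
    if (SUCCEEDED(globalScope->findChildren(SymTagPublicSymbol, NULL, nsNone, &symbols)))
    {
        CComPtr<IDiaSymbol> symbol;
        ULONG fetched = 0;
        while (SUCCEEDED(symbols->Next(1, &symbol, &fetched)) && fetched == 1)
        {
            BSTR name = NULL;
            if (symbol->get_name(&name) == S_OK)
            {
                wprintf(L"%ls\n", name);
                SysFreeString(name);
            }
            symbol = NULL;   // release before fetching the next one
        }
    }

    CoUninitialize();
    return 0;
}

Running "pdblist some.pdb" would then print one public symbol name per line; the other SymTag values let you enumerate functions, data and so on in the same way.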

Debugging Source Indexed Software with Visual Studio is a pain

I'm responsible for some internal tools and I want to make debugging them as easy as possible on client machines. We used to ship with full source code and debugging information and built them in exactly the same location as the typical client install path. This made debugging extremely easy.
We have now moved to using Windows Installer to deploy our software, and we symbol serve the debugging information and source index the pdbs.
This now makes debugging a right pain: for each client I want to debug, I have to change several options in Visual Studio to enable source indexing, and add a file into a Visual Studio directory to stop it complaining about running the source control commands. I also can't search the source code easily, as the files are only extracted on demand.
Is there a better way we can do this? I have tried hosting archives of the source code on a network share, and using symbolic links to alias the network share with the local hard drive on build, but that requires UNC paths, which the VS2008 manifest tool can't cope with.
We have also used an undocumented linker option (/sourcemap) which does let you redirect all the source paths to an arbitrary location, but this does not work for .NET apps (as that option is only available for link.exe).
Even if we supplied the source code with the installation, the paths in the PDB would still be wrong.
Is there any way to patch a PDB (supported or unsupported by Microsoft) to redirect the paths elsewhere?
I can't help you with a direct answer, but perhaps I can inspire you with a meta-answer.
Debugging in a production environment is fundamentally wrong. Others have tried it (Netscape comes to mind) and failed miserably. For something like tools, it's even more pertinent to ensure proper function through solid unit testing and automated tests for your releases. You can set up your tests to work with different qualities of input (from perfect-textbook to medium to poor to awful) and quantities (i.e. test limits). There are a lot of books available on testing, and particularly unit testing.
I understand it's tough getting out of the day-to-day to make something like this work (I'm neck deep in it myself), but it's one of the sanest, if not the only sane way out.

Any recommended VC++ settings for better PDB analysis on release builds

Are there any VC++ settings I should know about to generate better PDB files that contain more information?
I have a crash dump analysis system in place based on the project crashrpt.
Also, my production build server has the source code installed on the D:\, but my development machine has the source code on the C:\. I entered the source path in the VC++ settings, but when looking through the call stack of a crash, it doesn't automatically jump to my source code. I believe if I had my dev machine's source code on the D:\ it would work.
"Are there any VC++ settings I should know about"
Make sure you turn off frame pointer omission (FPO). Larry Osterman's blog has the historical details about FPO and the issues it causes with debugging.
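If memory serves, the corresponding compiler switch is /Oy- (C/C++ > Optimization > Omit Frame Pointers = No), together with /Zi so debug information is produced at all, e.g.:

cl /O2 /Oy- /Zi myapp.cpp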
Symbols are loaded successfully. It shows the call stack, but double-clicking an entry doesn't bring me to the source code.
What version of VS are you using? (Or are you using WinDbg?) ... In VS it should definitely prompt for source the first time if it doesn't find the location. However, it also keeps a list of source that was 'not found' so it doesn't ask you for it every time. Sometimes that don't-look list is a pain ... to get the prompt back you need to go to Solution Explorer / solution node / Properties / Debug Source Files and edit the file list in the lower pane.
Finally, you might be using 'stripped symbols'. These are PDB files generated to provide debug info for walking the call stack past FPO, but with source locations stripped out (along with other data). The public symbols for Windows OS components are stripped PDBs. For your own code these simply cause pain and are not worth it unless you are providing your PDBs to external parties. How would you end up with one of these horrible stripped PDBs? You might have them if you use "binplace" with the -a switch.
Good luck! A proper mini dump story is a godsend for production debugging.
If you build directly from your source code management system, you should annotate your PDB files with the file origins. This allows you to automatically fetch the exact source files while debugging. (This is the same process as used for retrieving the .NET Framework source code.)
See http://msdn.microsoft.com/en-us/magazine/cc163563.aspx for more information. If you use subversion as your SCM you can check out the SourceServerSharp project.
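For Subversion, a build step roughly like the following (using ssindex.cmd from the srcsrv subfolder of the Debugging Tools for Windows; the paths are placeholders and the exact switches may vary between versions) writes the source-server stream into the PDBs:

ssindex.cmd /system=svn /source=C:\build\src /symbols=C:\build\bin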
You could try using the MS-DOS subst command to assign your source code directory to the D: drive.
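For example, assuming the sources live in C:\src on your dev machine:

subst D: C:\src

Visual Studio will then find the files under D: exactly where the PDBs say they should be.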
This is the procedure I used after some trouble similar to yours:
a) Copied all the built EXE and DLL files to the production server, each with its corresponding PDB in the same directory, started the system, and waited for the crash to happen.
b) Copied back all the EXE, DLL & PDB files to the development machine (to a temporary folder) along with the minidump (in the same folder). Used Visual Studio to load the minidump from that folder.
Since VS found the source files where they were originally compiled, it was always able to identify them and load them correctly. As in your case, the production machine used a drive other than C:, but the development machine used C:.
Two more tips:
One thing I often did was copy a rebuilt EXE/DLL and forget to copy the new PDB. This ruined the debug cycle; VS would not be able to show me the call stack.
Sometimes I got a call stack that didn't make sense in VS. After some headache, I discovered that WinDbg would always show me the correct stack, but VS often wouldn't. I don't know why.
In case anyone is interested, a co-worker replied to this question to me via email:
Artem wrote:
There is a flag to MiniDumpWriteDump() that can do better crash dumps that will allow seeing the full program state, with all global variables, etc. As for call stacks, I doubt they can be better because of optimizations... unless you turn (maybe some) optimizations off.
Also, I think disabling inline functions and whole program optimization will help quite a lot.
In fact, there are many dump types, maybe you could choose one small enough but still having more info: http://msdn.microsoft.com/en-us/library/ms680519(VS.85).aspx
Those types won't help with the call stack though, they only affect the amount of variables you'll be able to see.
I noticed some of those dump types aren't supported in dbghelp.dll version 5.1 that we use. We could update it to the newest 6.9 version though; I've just checked the EULA for MS Debugging Tools -- the newest dbghelp.dll is still OK to redistribute.
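To illustrate the dump-type flags the email refers to, here is a minimal, untested sketch of an unhandled-exception filter requesting a richer dump; the flag combination and the file name are only examples:

#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")

// Hypothetical example: an unhandled-exception filter that writes a richer dump.
// Install it early in main() with SetUnhandledExceptionFilter(WriteCrashDump).
LONG WINAPI WriteCrashDump(EXCEPTION_POINTERS* exceptionPointers)
{
    HANDLE file = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE)
    {
        MINIDUMP_EXCEPTION_INFORMATION exceptionInfo = {};
        exceptionInfo.ThreadId = GetCurrentThreadId();
        exceptionInfo.ExceptionPointers = exceptionPointers;
        exceptionInfo.ClientPointers = FALSE;

        // Richer than MiniDumpNormal: include global/static data, handle
        // information and memory referenced by stack/register values.
        MINIDUMP_TYPE dumpType = (MINIDUMP_TYPE)(
            MiniDumpWithDataSegs |
            MiniDumpWithHandleData |
            MiniDumpWithIndirectlyReferencedMemory);

        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(),
                          file, dumpType, &exceptionInfo, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER;
}

MiniDumpWithFullMemory gives the most complete picture but produces much larger files; the combination above is one possible middle ground.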
Is Visual Studio prompting you for the path to the source file? If it isn't then it doesn't think it has symbols for the callstack. Setting the source path should work without having to map the exact original location.
You can tell if symbols are loaded by looking at the 'modules' window in Visual Studio.
Assuming you are building a PDB then I don't think there are any options that control the amount of information in the PDB directly. You can change the type of optimizations performed by the compiler to improve debuggability, but this will cost performance -- as your co-worker points out, disabling inlining will help make things more obvious in the crash file, but will cost at runtime.
Depending on the nature of your application I would recommend working with full dump files if you can, they are bigger, but give you all the information about the process ... and how often does it crash anyway :)
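As a concrete (though hedged, adjust to taste) starting point, the switches usually involved in getting a full-fidelity PDB out of a release build are roughly:

cl   /O2 /Zi /Ob0 ...
link /DEBUG /OPT:REF /OPT:ICF ...

/Zi plus /DEBUG produce a complete PDB even in a release configuration; /OPT:REF and /OPT:ICF re-enable the size optimizations that /DEBUG would otherwise switch off; /Ob0 disables inlining for clearer stacks at some runtime cost, as discussed above (and /Oy- from the earlier answer keeps frame pointers).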
Is Visual Studio prompting you for the path to the source file?
No.
If it isn't then it doesn't think it has symbols for the callstack. Setting the source path should work without having to map the exact original location.
Symbols are loaded successfully. It shows the call stack, but double-clicking an entry doesn't bring me to the source code. I can of course search the files for the line in question, but this is hard work :)

Attaching to a foreign executable in Visual C++ 2003

I have an executable (compiled by someone else) that is hitting an assertion near my code. I work on the code in Visual C++ 2003, but I don't have a project file for this particular executable (the code is used to build many different tools). Is it possible to launch the binary in Visual C++'s debugger and just tell it where the sources are? I've done this before in GDB, so I know it ought to be possible.
Without the PDB symbols for that application you're going to have a tough time making heads or tails of what is going on and where. Any source code information is going to be only in the PDB file that was created when whoever built that application compiled it.
This is assuming that a PDB file was EVER created for this application - which is not the default configuration for release mode VC++ projects, I think. Since you're hitting an assertion, I'm guessing this is a debug configuration?
Short of any other answers, I would try attaching to the executable process in Visual Studio, setting a breakpoint in your code, and when you step into code you don't have the source to, it should ask for a source file.
Yes, it's possible. Just set up an empty project and specify the desired .exe file as debug target. I don't remember exactly how, but I know it's doable, because I used to set winamp.exe as debug target when I developed plug-ins for Winamp.
Since you don't have the source file it will only show the assembly code, but that might still be useful as you can also inspect memory, registers, etc.
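For what it's worth (from memory, so treat it as a hint rather than a verified recipe): in VC++ the setting lives under Project Properties > Configuration Properties > Debugging > Command; point it at the foreign .exe, optionally set the working directory, and start debugging with F5.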
Update
If you are debugging an assertion in your own program you should be able to see the source just fine, since the path to the source file is stored in the executable when you compile it with debug information.
