I'm using MSVC 2013 compiler, CDB debugger, and Qt Creator/qmake under Windows 7. I just discovered that the build directory for one of my projects is a whopping 16 gigabytes. The culprit is a sub-directory named "srv" which contains various .pdb files. The curious part is that there are pdb files for all kinds of system libraries like commctl32, ntdll, user32, etc. Do I really need to generate pdb's for these system files, or is this some setting that I can turn off, or is it a bug? I don't plan to debug user32.dll, so I can't see any reason to have debug information generated for it.
The PDB files for commctl32, ntdll, user32, etc. are not generated by your build (you don't compile those libraries). They are downloaded automatically when you debug an application, so that memory addresses can be resolved into function names (i.e., so a readable stack trace can be produced).
You can configure this in the VS2013 options under Debugging, Symbols. There you can disable automatic downloading and/or change the folder the files are cached in. Presumably this can also be disabled or configured for other debuggers.
The "symbol cache" grows: whenever you install windows updates, new libraries might get deployed to your computer and in your next debugging sessions new symbols are downloaded. If you have a fast internet connection, it's no problem to empty the cache.
Related
I was running a unit test in Visual Studio today using FakeItEasy. I was offline and noticed the following symbol loading happening and taking a long time:
My question is, where does the path Z:\Builds\work\... come from and why is Visual Studio trying to load symbols from that path. Could it be that this path corresponds to the CI that the binaries were built on? If so, is it a thing that the maintainer of the library should fix, or something that I must locally configure? I am using the FakeItEasy 1.25.2 binaries that I fetched via NuGet.
I am aware of the fact that you can disable symbol loading (e.g. see this question), but actually I want the symbols to be loaded if possible.
Yes, Z:\Builds\work\… is the path from which TeamCity builds FakeItEasy.
I'm not a big symbol user, so am not sure what you want "fixed". Why are you loading the symbols, and what behaviour would you expect in this case?
If we push the symbols to SymbolSource.org you'd still need to be online to access them, no?
Can you give an example of a NuGet package that behaves how you'd like?
How does it behave in your situation?
PDBs can be built for both the debug and the release configuration, and it's typically a good idea to keep them for debugging purposes. FakeItEasy, like any other DLL or EXE, contains the full path to the PDB file as it was located at compile time. If such a path is embedded in a DLL (or EXE), Visual Studio will try to load symbols from there.
To see that information, get DebugDir and run debugdir <path to>\FakeItEasy.dll. Or, in any hex editor, search for pdb.
You'll find the full path of the PDB along with some other information. Since it wasn't you who built the DLL, the PDB is not present on your disk and you'd need to download it from a symbol server.
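If you already have Visual Studio installed, dumpbin should show the same information: run dumpbin /headers FakeItEasy.dll and look at the Debug Directories section, which contains the RSDS record with the PDB's GUID, age, and the embedded path. (I mention this only as an alternative to installing DebugDir.)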
The SourceForge clone of DebugDir supports a command-line parameter clean which can remove the debug information. If you want to stop Visual Studio from accessing the non-existent Z: drive, you can remove the path to the PDB file that way.
In MSVS 2010, if you go to Tools -> Options -> Debugging, the following window opens.
I want to know: what are these symbols? Why would we want to cache them? And are they too big for the 500 GB drives that are common nowadays?
Symbols are the .pdb files that are generated when you build your code with the proper options enabled. These files map the generated binary code back to the source statements it was produced from.
Since they're usually downloaded from a remote location and since they don't change that often for external libraries, it really speeds up the debugging when you don't have to download these files each and every time.
On my machine the symbol cache is about 600MB in size. Your 500GB drive should be more than enough for normal operations.
See also: http://msdn.microsoft.com/en-us/library/windows/desktop/aa363368(v=vs.85).aspx
This feature is especially powerful when you have Framework Source Stepping enabled; in combination with the Microsoft Symbol Server it allows you to debug certain parts of the .NET Framework. The symbols are downloaded from Microsoft, and from those symbols Visual Studio determines which source file(s) to download to enable debugging for you.
The symbol cache is used during debugging to give you richer information. There is a similar dialog inside each project in your solution that defines what level of debug symbols is generated when you build; that is where the compiler and linker are told what to emit.
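For native C++ that usually corresponds to the /Zi (or /ZI) compiler switch together with the /DEBUG linker switch; for C# projects it is the 'Debug Info' setting under the Build tab's advanced options. (These are the common project defaults as far as I know, not something specific to the dialog above.)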
And inside Team Build (if you're running TFS) there is an option to generate these symbol files during the automated build. The files can then be referenced from IntelliTrace or WinDbg to give you a richer debugging experience without having to deploy a debug build or the debug symbols to your production environment. (Deploying the .pdb files to production causes higher memory usage and a small performance hit, because the runtime loads these symbols into memory; exceptions also become more expensive, because they are enriched with the information found in the symbol files.)
See: http://msdn.microsoft.com/en-us/library/vstudio/hh190722.aspx
I have been fighting tooth and nail with xperf to get symbols for a tool I'm profiling. My code that runs within the tool is split between the .exe and a .dll -- the important stuff to profile being in the .dll. I ran xperf:
xperf -on PROC_THREAD+LOADER+INTERRUPT+DPC+PROFILE -stackwalk profile
And then I ran my tool for a bit, and then
xperf -d profile.etl
Then I tried xperfview. I loaded up the profile, toggled "load symbols" on, and opened the summary table. No symbols at all -- literally every module came up as "unknown" in the function column. I've scoured other threads on this and here's what I've tried:
I set my environment variables, _NT_SYMBOL_PATH and _NT_SYMCACHE
I cleared out my symbol cache and ran xperf -symbols -i profile_results.etl.
I copied over dbghelp.dll from a recent version of Windows Debugging Tools and repeated the above.
After doing all this I now get function names showing up properly for most of the modules that are not my own code, but I can't get my DLL to show up. The DLL is compiled in release mode (with optimization), but I set the Visual Studio project specifically to create a PDB, and I've verified that the PDB exists and that it is in a directory on my _NT_SYMBOL_PATH. Does anyone know how I can fix this, or at least debug it further?
You can set some environment variables to enable diagnostic logging during symbol loading:
DBGHELP_DBGOUT = 1
DBGHELP_LOG = C:\dbghelp.log
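Note that these have to be set in the environment of the process that loads dbghelp.dll, so set them (e.g. set DBGHELP_DBGOUT=1 and set DBGHELP_LOG=C:\dbghelp.log) in the same command prompt you launch xperfview or wpa from, or in the system environment before starting the tool. The resulting log should show which locations were searched for each PDB and why loading failed.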
I just encountered the same problem... tried all the same steps... browsed all the (apparently) similar advice...
Additionally, I tried launching symchk using the same dbghelp.dll/symsrv.dll DLLs I had copied into my WPA 'bin' folder, to make sure that my PDB is locatable. (still thinking I'm going crazy...)
I should note: my _NT_SYMBOL_PATH value contained servers with a local cache as well as plain local directories: _NT_SYMBOL_PATH=srv*D:\SymbolCache*http://msdl.microsoft.com/download/symbols;D:\GitHub\....
Then it dawned on me that my DLL, used by my "partner" EXE, is loaded dynamically via LoadLibrary()/GetProcAddress()... could this be an issue for XPerf?
I hesitated even trying this...
I added a useless export to my DLL and invoke it directly in the EXE (to trigger an import-table entry for my DLL), so now the EXE depends on the DLL just to load.
Turns out... XPerf then loaded all the symbols :)
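For what it's worth, here is a minimal sketch of that workaround; the function name, the file names, and the idea that the DLL is otherwise consumed only via LoadLibrary()/GetProcAddress() are all just illustrative:

// profiled.dll (hypothetical): its real entry points are resolved at runtime,
// so this do-nothing export exists only to give the EXE something to import.
extern "C" __declspec(dllexport) void KeepDllInImportTable()
{
    // intentionally empty
}

// app.exe (hypothetical): calling the dummy export once makes the linker record
// profiled.dll in the EXE's import table, so the DLL is loaded with the process
// and stays loaded until it exits -- which is what xperf seems to want.
extern "C" __declspec(dllimport) void KeepDllInImportTable();

int main()
{
    KeepDllInImportTable();
    // ... everything else can still go through LoadLibrary()/GetProcAddress() ...
    return 0;
}

The EXE of course has to link against the DLL's import library for the static dependency to be recorded.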
Edit: I just found this URL on MSDN, where someone posted code back in '11 that demonstrates a similar (the same?) problem
EDIT:
I recently discussed this with a colleague and learned that XPerf will properly "decide" to load symbols for DLLs loaded programmatically... IF the DLL remains loaded until the process terminates.
So for DLLs that are loaded and unloaded during execution, and are no longer loaded at termination, XPerf will skip the attempt to load their symbols.
I'm not sure if this helps, but here is one more detail I came across today, in addition to the Q&A at xperf can't load my DLL's symbols:
For me, xperfview doesn't like PDB files on mapped network drives: as I was running xperf and xperfview on a different machine from where the code was built, I was getting both executables and PDB files off a network share, which I mapped to a drive letter to recreate exactly the same absolute paths as on the build machine - no luck. Even adding the folder with the PDB files to the symbol path didn't help.
Everything worked as expected once I made sure the .pdb file was in a local folder.
Try using wpa instead of xperfview. It uses the same system for loading symbols that xperfview does, but it also has a Diagnostic Console that lets you see symbol-loading messages, which can be helpful.
Also, you should tell us what you have _NT_SYMBOL_PATH set to. There are many ways that it can be incorrectly set.
Also, in _NT_SYMBOL_PATH you should specify a local cache for your PDB files -- you can then check there to see if your PDBs have been copied to the local cache.
You can also look in the SymCache Path (pointed to by _NT_SYMCACHE_PATH, defaults to c:\symcache) which is where the WPT .symcache files are stored. The PDB files are converted to this format and the .symcache files are what are ultimately loaded by WPA and xperfview.
For more information see:
http://randomascii.wordpress.com/2012/10/04/xperf-symbol-loading-pitfalls/
I have a minidump file from an application crash on a target machine. Is it possible for me to rebuild the DLL/PDB files for a version of the software and have WinDbg load symbols correctly?
My problem is that our pdb files are only kept for major releases (unfortunately). This is a daily build, which I can rebuild myself, but I'm getting tripped up on errors.
With !sym noisy on:
"image header does not match memory image header."
DBGENG: C:\...\XXX.dll image header does not match memory image header.
DBGENG: XXX.dll - Partial symbol image load missing image info
DBGHELP: Module is not fully loaded into memory.
DBGHELP: Searching for symbols using debugger-provided data.
DBGHELP: C:\...\XXX.pdb - mismatched pdb
Note that I built the PDB together with the DLL; they are from the same RELEASE directory (should I be building debug instead?).
These are release builds (since release builds are what is installed on the target and crashing). Should I somehow be using the debug-build DLLs to get more symbol information?
The ChkMatch utility is designed for this exact scenario.
As long as you have the original .EXE, you can recompile the sources (with the same compiler and compiler settings) and patch the new .PDB to match the old .EXE.
In this example, OriginalExecutable.exe is the executable that no longer has a .PDB file, and RebuiltPDB.pdb is one that has been produced by rebuilding the original source.
chkmatch -m OriginalExecutable.exe RebuiltPDB.pdb
Now, as long as the two files keep their original names, the debugger should accept them as a matching pair.
In my experience probably not.
If you have the exact build directory and build with the exact same compiler settings then this might work. You definitely will not be able to load symbols from a debug build against a release crash dump.
You will need to turn on the 'load anything' option, .symopt+0x40 (SYMOPT_LOAD_ANYTHING), to get WinDbg to ignore the timestamp/signature differences.
If you still have the exact source code the image was compiled from, rebuild it to produce a new PDB file and then instruct WinDbg to forcibly load that PDB when you open the crash dump. It worked once in my experience.
PDB files are tied to their EXE files by a GUID and an "age" (a sequence number). These are embedded in both the EXE and the PDB. The GUID is regenerated on each complete build, and the "age" is changed on each incremental build.
The debugger uses these to ensure that it's looking at the correct PDB for the EXE file.
I didn't know about the "chkmatch" tool mentioned by SteveMan, but I suspect that it works by patching up the GUID/age so that they match.
This is too late to help Doug, but for the sake of anyone who comes across this question, another thread (Is it possible to load mismatched symbols in Visual Studio?) pointed out a way to get WinDbg to accept mismatched .PDB files:
.symopt+0x40
Are there any VC++ settings I should know about to generate better PDB files that contain more information?
I have a crash dump analysis system in place based on the project crashrpt.
Also, my production build server has the source code installed on the D:\, but my development machine has the source code on the C:\. I entered the source path in the VC++ settings, but when looking through the call stack of a crash, it doesn't automatically jump to my source code. I believe if I had my dev machine's source code on the D:\ it would work.
"Are there any VC++ settings I should know about"
Make sure you turn off frame-pointer omission. Larry Osterman's blog has the historical details about FPO and the issues it causes with debugging.
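If it helps: as far as I know, the MSVC compiler switch for this is /Oy- (frame-pointer omission is implied by /O1 and /O2 on x86, so you need to disable it explicitly in release builds).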
Symbols are loaded successfully. It shows the callstack, but double clicking on an entry doesn't bring me to the source code.
What version of VS are you using? (Or are you using WinDbg?) In VS it should definitely prompt for source the first time if it doesn't find the location. However, it also keeps a list of source files that were 'not found' so it doesn't ask you for them every time. Sometimes that "don't look" list is a pain... to get the prompt back you need to go to Solution Explorer -> solution node -> Properties -> Debug Source Files and edit the file list in the lower pane.
Finally, you might be using 'stripped symbols'. These are PDB files generated to provide enough debug info to walk the call stack past FPO, but with source locations stripped out (along with other data). The public symbols for Windows OS components are stripped PDBs. For your own code these simply cause pain and are not worth it unless you are providing your PDBs to external parties. How would you end up with one of these horrible stripped PDBs? You might have them if you use "binplace" with the -a option.
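Another way stripped PDBs get produced, as far as I know, is the linker's /PDBSTRIPPED option, which emits a second, stripped PDB alongside the full one.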
Good luck! A proper mini dump story is a godsend for production debugging.
If you build directly from your source code management system, you should annotate your PDB files with the file origins. This allows you to automatically fetch the exact source files while debugging. (This is the same process as used for retrieving the .NET Framework source code.)
See http://msdn.microsoft.com/en-us/magazine/cc163563.aspx for more information. If you use Subversion as your SCM, you can check out the SourceServerSharp project.
You could try using the MS-DOS subst command to assign your source code directory to the D: drive.
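For example, if the PDBs record paths under D:\ and your working copy lives at C:\dev\myproject (a placeholder path), then subst D: C:\dev\myproject makes that folder appear as drive D:, so the recorded D:\... paths resolve on your machine as long as the layout below the drive letter matches. The mapping only lasts for the current session, so you may want to put it in a startup script.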
This is the procedure I used after some trouble similar to yours:
a) Copied all the EXE & DLL files that were built to the production server, each with its corresponding PDB in the same directory, started the system, and waited for the crash to happen.
b) Copied back all the EXE, DLL & PDB files to the development machine (to a temporary folder) along with the minidump (in the same folder). Used Visual Studio to load the minidump from that folder.
Since VS found the source files where they had originally been compiled, it was always able to identify them and load them correctly. As in your case, the production machine used a drive other than C:, but the development machine used C:.
Two more tips:
One thing I did often was to copy a rebuilt EXE/DLL but forget to copy the new PDB. This ruined the debug cycle; VS would not be able to show me the call stack.
Sometimes, I got a call stack that didn't make sense in VS. After some headache, I discovered that windbg would always show me the correct stack, but VS often wouldn't. Don't know why.
In case anyone is interested, a co-worker replied to this question to me via email:
Artem wrote:
There is a flag to MiniDumpWriteDump() that can do better crash dumps that will allow seeing full program state, with all global variables, etc. As for call stacks, I doubt they can be better because of optimizations... unless you turn (maybe some) optimizations off.
Also, I think disabling inline functions and whole program optimization will help quite a lot.
In fact, there are many dump types, maybe you could choose one small enough but still having more info:
http://msdn.microsoft.com/en-us/library/ms680519(VS.85).aspx
Those types won't help with call stack though, they only affect the amount of variables you'll be able to see.
I noticed some of those dump types aren't supported in dbghelp.dll version 5.1 that we use. We could update it to the newest, 6.9 version though; I've just checked the EULA for MS Debugging Tools -- the newest dbghelp.dll is still ok to redistribute.
Is Visual Studio prompting you for the path to the source file? If it isn't then it doesn't think it has symbols for the callstack. Setting the source path should work without having to map the exact original location.
You can tell if symbols are loaded by looking at the 'modules' window in Visual Studio.
Assuming you are building a PDB, I don't think there are any options that directly control the amount of information in the PDB. You can change the type of optimizations performed by the compiler to improve debuggability, but this will cost performance -- as your co-worker points out, disabling inlining will make things more obvious in the crash file, but will cost at runtime.
Depending on the nature of your application I would recommend working with full dump files if you can, they are bigger, but give you all the information about the process ... and how often does it crash anyway :)
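To make the dump-type suggestion from the quoted e-mail concrete, here is a rough sketch of a MiniDumpWriteDump() call that requests more data than a plain minidump. The helper name and the particular MINIDUMP_TYPE combination are just examples; MiniDumpWithFullMemory by itself is the simplest way to capture everything if dump size isn't a concern:

#include <windows.h>
#include <dbghelp.h>

#pragma comment(lib, "dbghelp.lib")

// Hypothetical helper, intended to be called from an unhandled-exception filter.
bool WriteRicherDump(EXCEPTION_POINTERS* exceptionPointers, const wchar_t* dumpPath)
{
    HANDLE file = CreateFileW(dumpPath, GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return false;

    MINIDUMP_EXCEPTION_INFORMATION exceptionInfo = { 0 };
    exceptionInfo.ThreadId = GetCurrentThreadId();
    exceptionInfo.ExceptionPointers = exceptionPointers;
    exceptionInfo.ClientPointers = FALSE;

    // Example flag set: include data segments (globals) plus memory reachable
    // from the stacks and handle data. Swap in MiniDumpWithFullMemory for a
    // full user-mode dump.
    MINIDUMP_TYPE dumpType = (MINIDUMP_TYPE)(MiniDumpWithDataSegs |
                                             MiniDumpWithIndirectlyReferencedMemory |
                                             MiniDumpWithHandleData);

    BOOL ok = MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(),
                                file, dumpType, &exceptionInfo, NULL, NULL);
    CloseHandle(file);
    return ok != FALSE;
}

As the e-mail notes, some of these dump types require a newer dbghelp.dll than version 5.1.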
Is Visual Studio prompting you for the path to the source file?
No.
If it isn't then it doesn't think it has symbols for the callstack. Setting the source path should work without having to map the exact original location.
Symbols are loaded successfully. It shows the callstack, but double clicking on an entry doesn't bring me to the source code. I can of course search in files for the line in question, but this is hard work :)