VS2010 fatal error LNK1318, mspdbsrv.exe used up its 4 GB of memory - visual-studio-2010

We have a large project, and we recently merged two DLLs into one. Since then the build fails while linking: mspdbsrv.exe climbs to a maximum memory usage of 4063 MB, and the linker then reports fatal error LNK1318: Unexpected PDB error; OK (0).

mspdbsrv.exe is the utility program that is launched behind the scenes to create PDB symbols used for debugging your code.
I've read anecdotal reports regarding earlier versions of Visual Studio (e.g., 2005) that this little process has been a source of pain in the past, but I haven't run into any problems with it in daily dev work in 2010.
It sounds to me like you've built up a cache of PDB files that it's trying to combine into one at build time. The only problem is that this produces a file approaching 4 GB (!!) in size, which is the address-space ceiling for a 32-bit process like mspdbsrv.exe. I'd delete all of the temporary files associated with your project, kill the mspdbsrv.exe process (or restart the computer), and then try building again. You might also want to turn off incremental builds, which rebuild only the information that has changed since the last build. That'll force a full rebuild, which should produce a PDB file without any extra bloat.
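A minimal cleanup along those lines from a command prompt (a sketch; the intermediate directory names below are the Visual Studio defaults and may differ in your project):

taskkill /F /IM mspdbsrv.exe
rd /s /q Debug
rd /s /q Release

Incremental linking can then be switched off with /INCREMENTAL:NO on the linker command line (or in the project's linker settings) before kicking off the full rebuild.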

Related

Store and manage multiple PDB files?

When building my application, some PDB files are generated which are useful during debugging. Now when a user experiences a crash, I get the dump file, and using this I can analyse the problem.
Unfortunately, for this the version of the PDB file needs to match the build the crash dump was generated from - or in other words, for every release I build I need to store the related PDBs in order to have them available for later analysis.
Now I know that MS offers a product named "symbol server" which does the complete job of storing and managing the PDB files of a build. Unfortunately this is far too complex a solution for me.
So my question: is there an easy-to-use, simple-to-handle alternative available for storing multiple versions of PDB files in order to have them available for crash dump analysis?
Thanks!
Note that you don't need a full symbol server, just a directory. See also How to set up a symbol server. With symstore, you only need to set up your symbol path once, save it in the WinDbg workspace or in Visual Studio, and you're ready to go.
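For illustration, one symstore invocation per release is enough to add a build's PDBs to a plain directory store; a sketch (the product name, version, and paths are placeholders):

symstore add /f Release\*.pdb /s \\server\symbols /t MyApp /v 1.2.3.4

After that, pointing the symbol path at \\server\symbols lets the debugger pick the matching PDB version automatically.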
The alternative is to manually create a folder named after the version number of your build and move all the PDBs there.
The debugging then goes like this (WinDbg):
lm m <yourapp>
to find out the version number
.sympath C:\path\to\symbols\<version>
Similarly in Visual Studio: you need to find out the version and then change the symbol path. It's always 2 steps instead of 0.

Why am I getting so many .pdb files?

I'm using the MSVC 2013 compiler, the CDB debugger, and Qt Creator/qmake under Windows 7. I just discovered that the build directory for one of my projects is a whopping 16 gigabytes. The culprit is a sub-directory named "srv" which contains various .pdb files. The curious part is that there are PDB files for all kinds of system libraries like comctl32, ntdll, user32, etc. Do I really need to generate PDBs for these system files, or is this some setting that I can turn off, or is it a bug? I don't plan to debug user32.dll, so I can't see any reason to have debug information generated for it.
The PDB files for comctl32, ntdll, and user32 are not generated by your build (those libraries are not compiled by you). They are automatically downloaded when you debug applications, for resolving memory addresses into function names (i.e., preparing a readable stack trace).
You can configure this in the VS2013 settings under Debugging, Symbols. There you can disable automatic downloading and/or change the folder where the files are put. Presumably this can also be disabled/configured for other debuggers.
The "symbol cache" grows: whenever you install windows updates, new libraries might get deployed to your computer and in your next debugging sessions new symbols are downloaded. If you have a fast internet connection, it's no problem to empty the cache.

In MSVS, what are symbols and why do they need to be cached?

In MSVS 2010, if you go to Tools->Options->Debugging, the following window opens.
I want to know: what are these symbols? And why would we want to cache them? Are they too big for the 500 GB drives which are common nowadays?
Symbols are the .pdb files that are generated when you build your code with the proper features enabled. These files provide a mapping from the binary code to the actual statement that was written for that piece of binary to be generated.
Since they're usually downloaded from a remote location and since they don't change that often for external libraries, it really speeds up the debugging when you don't have to download these files each and every time.
On my machine the symbol cache is about 600MB in size. Your 500GB drive should be more than enough for normal operations.
See also: http://msdn.microsoft.com/en-us/library/windows/desktop/aa363368(v=vs.85).aspx
This feature is especially powerful when you have Framework Source Stepping enabled; in combination with the Microsoft Symbol Server, it allows you to debug certain parts of the .NET Framework. The symbols are downloaded from Microsoft, and from the symbols Visual Studio reads which source file(s) to download to enable debugging for you.
The symbol cache is used during debugging to give richer information. There is a similar dialog inside each solution that defines what level of debug symbols is generated when your solution is built; this is where the compiler is told what to do with these things.
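For a native C++ project, for example, those knobs boil down to a pair of switches; a sketch with placeholder file names:

cl /Zi main.cpp /link /DEBUG /PDB:MyApp.pdb

/Zi makes the compiler record debug information, and the linker's /DEBUG switch writes the final .pdb next to the executable.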
And inside Team Build (if you're running TFS) there is an option to generate these symbol files during the automated build process. These files can then be referenced from IntelliTrace or WinDbg to give you a richer debugging experience without having to deploy a debug version or the debug symbols to your production environment. (Deploying the .pdb files to your production environment causes higher memory usage and a small performance hit, because the runtime will load these symbols into memory; when exceptions happen, it also causes additional overhead, because they are enriched with the information found in the symbol files.)
See: http://msdn.microsoft.com/en-us/library/vstudio/hh190722.aspx

How to generate pdb files for parallel builds?

We are seeking ideas on resolving a problem with linking/PDB generation when running multiple instances of devenv.com using Visual Studio 2005.
We are getting the following errors intermittently when doing parallel builds using devenv.com.
I.e. when the following get run at the same time on the same build server:
devenv.com master.sln /build "Release|Win32"
devenv.com master.sln /build "Debug|x64"
fatal error LNK1318: Unexpected PDB error; RPC (23) '(0x000006BA)'
error C2471: cannot update program database
We want the PDB files, so turning them off isn't really an option.
Running the builds serially doesn't cause the issue, but of course slows down the build process.
References found so far indicate:
- issues with file paths exceeding the 256-character limit; this doesn't seem to be our problem, as we can build each solution individually, and our path+filename length is around 160 chars.
- issues with incremental builds (but mainly in Visual Studio 2008), and we have incremental linking turned off.
We are looking for input on resolving this multiple process issue, if possible.
How do we resolve it?
Try enabling source-level parallel builds instead. That will use all cores on your server efficiently, unless you have more cores than source files in your solution. Here is more information on how to enable source-level parallel builds: http://vagus.wordpress.com/2008/02/15/source-level-parallel-build-in-visual-studio-2005/
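In MSVC this kind of source-level parallelism comes down to the /MP compiler switch, which compiles one project's files concurrently instead of running whole solutions in parallel; a sketch (support for /MP in VS2005 was unofficial, so treat that as an assumption for that version):

cl /MP /c file1.cpp file2.cpp file3.cpp

Because only one build, and hence one mspdbsrv.exe session, runs at a time, the contention between concurrent devenv.com instances writing PDBs should go away.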
This may be a related problem and solution, because it produces the same error:
Unexpected PDB error; RPC (23) '(0x000006BA)'
https://software.intel.com/en-us/articles/unexpected-pdb-error-rpc-23-0x000006ba/

Any recommended VC++ settings for better PDB analysis on release builds

Are there any VC++ settings I should know about to generate better PDB files that contain more information?
I have a crash dump analysis system in place based on the project crashrpt.
Also, my production build server has the source code on D:\, but my development machine has it on C:\. I entered the source path in the VC++ settings, but when looking through the call stack of a crash, it doesn't automatically jump to my source code. I believe it would work if my dev machine's source code were on D:\.
"Are there any VC++ settings I should know about"
Make sure you turn off frame-pointer omission. Larry Osterman's blog has the historical details about FPO and the issues it causes with debugging.
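A hedged example of release-build switches along these lines (optimized code, FPO off, full PDBs; the file names are placeholders):

cl /O2 /Oy- /Zi main.cpp /link /DEBUG /OPT:REF /OPT:ICF

/Oy- disables frame-pointer omission, /Zi and /DEBUG produce the PDB, and /OPT:REF /OPT:ICF restore the release-mode linker optimizations that /DEBUG would otherwise turn off.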
Symbols are loaded successfully. It shows the callstack, but double-clicking on an entry doesn't bring me to the source code.
What version of VS are you using? (Or are you using WinDbg?) In VS it should definitely prompt for source the first time if it doesn't find the location. However, it also keeps a list of source that was 'not found', so it doesn't ask you for it every time. Sometimes the don't-look list is a pain; to get the prompt back you need to go to Solution Explorer / solution node / Properties / Debug Source Files and edit the file list in the lower pane.
Finally, you might be using 'stripped symbols'. These are PDB files generated to provide enough debug info to walk the call stack past FPO, but with source locations stripped out (along with other data). The public symbols for Windows OS components are stripped PDBs. For your own code these simply cause pain and are not worth it unless you are providing your PDBs to external parties. How would you end up with one of these horrible stripped PDBs? You might have them if you use "binplace" with the -a command.
Good luck! A proper mini dump story is a godsend for production debugging.
If you build directly from your source code management system, you should annotate your PDB files with the file origins. This allows the exact source files to be fetched automatically while debugging. (This is the same process as used for retrieving the .NET Framework source code.)
See http://msdn.microsoft.com/en-us/magazine/cc163563.aspx for more information. If you use subversion as your SCM you can check out the SourceServerSharp project.
You could try using the MS-DOS subst command to map your source code directory to the D: drive.
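For example, assuming the sources live under C:\src\myproject on your development machine:

subst D: C:\src\myproject

The D:\ paths embedded in the PDB then resolve on your machine without rebuilding anything.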
This is the procedure I used after some trouble similar to yours:
a) Copied to the production server all the EXE & DLL files that were built, each with its corresponding PDB in the same directory, started the system, and waited for the crash to happen.
b) Copied back all the EXE, DLL & PDB files to the development machine (to a temporary folder) along with the minidump (in the same folder). Used Visual Studio to load the minidump from that folder.
Since VS found the source files where they were originally compiled, it was always able to identify them and load them correctly. As in your case, the drive used on the production machine was not C:, but on the development machine it was.
Two more tips:
One thing I did often was to copy a rebuilt EXE/DLL and forget to copy the new PDB. This ruined the debug cycle: VS would not be able to show me the call stack.
Sometimes I got a call stack that didn't make sense in VS. After some headache, I discovered that WinDbg would always show me the correct stack, but VS often wouldn't. I don't know why.
In case anyone is interested, a co-worker replied to this question to me via email:
Artem wrote:
There is a flag to MiniDumpWriteDump() that can produce better crash dumps, ones that allow seeing full program state, with all global variables, etc. As for call stacks, I doubt they can be better because of optimizations... unless you turn (maybe some) optimizations off.

Also, I think disabling inline functions and whole program optimization will help quite a lot.

In fact, there are many dump types, maybe you could choose one small enough but still having more info: http://msdn.microsoft.com/en-us/library/ms680519(VS.85).aspx

Those types won't help with the call stack though, they only affect the amount of variables you'll be able to see.

I noticed some of those dump types aren't supported in dbghelp.dll version 5.1 that we use. We could update it to the newest, 6.9 version though; I've just checked the EULA for MS Debugging Tools -- the newest dbghelp.dll is still OK to redistribute.
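For concreteness, a minimal sketch of the kind of call Artem is describing, combining MINIDUMP_TYPE flags to capture more program state (the file name, helper name, and the particular flag combination are illustrative, not from the original mail):

#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")

// Hypothetical helper: write a dump that includes global data segments and
// open-handle information, i.e. more program state than MiniDumpNormal.
bool WriteRicherDump(EXCEPTION_POINTERS* exceptionPointers)
{
    HANDLE file = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return false;

    MINIDUMP_EXCEPTION_INFORMATION info = {0};
    info.ThreadId = GetCurrentThreadId();
    info.ExceptionPointers = exceptionPointers;
    info.ClientPointers = FALSE;

    // Combine dump types: data segments (globals) plus handle data.
    MINIDUMP_TYPE type = MINIDUMP_TYPE(MiniDumpWithDataSegs | MiniDumpWithHandleData);

    BOOL ok = MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(),
                                file, type, &info, NULL, NULL);
    CloseHandle(file);
    return ok == TRUE;
}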
Is Visual Studio prompting you for the path to the source file? If it isn't then it doesn't think it has symbols for the callstack. Setting the source path should work without having to map the exact original location.
You can tell if symbols are loaded by looking at the 'modules' window in Visual Studio.
Assuming you are building a PDB, I don't think there are any options that directly control the amount of information in the PDB. You can change the type of optimizations performed by the compiler to improve debuggability, but this will cost performance; as your co-worker points out, disabling inlining will make things more obvious in the crash file, but will cost at runtime.
Depending on the nature of your application, I would recommend working with full dump files if you can. They are bigger, but give you all the information about the process... and how often does it crash anyway? :)
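As a sketch of the trade-off your co-worker describes (the switches are standard MSVC, but verify them against your toolset): /Ob0 disables inlining, and dropping /GL from the compiler options and /LTCG from the linker options turns off whole program optimization:

cl /O2 /Ob0 /Zi main.cpp /link /DEBUG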
Is Visual Studio prompting you for the path to the source file?

No.

If it isn't, then it doesn't think it has symbols for the callstack. Setting the source path should work without having to map the exact original location.
Symbols are loaded successfully. It shows the callstack, but double-clicking on an entry doesn't bring me to the source code. I can of course search the files for the line in question, but this is hard work :)
