I have a program which is instrumented to generate mini-dumps on exceptions. I have archived copies of the .exe, .pdb and the source files. The only way that I have found to get Visual Studio to find the .pdb file and analyze a dump when I receive one from a client is to place the archived files in exactly the same location on disk where the original build took place.
I have tried adding the path to the .pdb file to Visual Studio's debug symbol directories, but the path is always ignored. The path in the .exe file seems to be used instead.
This is terribly inconvenient, since it means moving the code which is currently under development to some temporary location, while the archived code takes its place for crash dump analysis.
Is there some simple way (i.e. without setting up symbol and source servers) to direct Visual Studio to access the debugging context in some location other than the original build location?
What you need is a symbol server or at least a directory that has the same structure. If you have TFS, you just might need to configure it correctly.
If not, you have the following options:
a) add symbols manually for each delivered version using symstore
b) add symbols automatically for each build using symstore in a post-build step (see the example below)
c) do either a) or b) and publish the result onto a web server that acts as an HTTP symbol server.
You can do a) or b) if you're working alone. You should really consider c) if you're working in a team.
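For options a) and b), the core of it is a single symstore call; the paths, product name, and version below are placeholders that you would replace with your own:
symstore add /r /f "C:\builds\output\*.pdb" /s \\server\symbols /t "MyApp" /v "1.2.3.4"
Run as a post-build step, this files each build's PDBs (and, if you add them as well, the EXEs/DLLs) into the tiered store layout that the debuggers understand.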
Things are not quite so simple, and Stack Overflow is not the place for fully-fledged tutorials, so here are some hints:
1. You need to understand that a symbol path can have several tiers. You are currently using a 0-tier symbol store, which is a flat directory. This is the worst option. Good news: if you have the symbols, you can still set up the other tier types.
2. Once you understand point 1 about the tiers and want to go for option c) without TFS, set up an HTTP server.
3. IMHO you should find all the necessary information in How to get a symbol server set up. If you don't want it on the network, you can also put it on a local disk.
When building my application, some PDB files are generated which are useful during debugging. Now when a user experiences a crash, I get the dump file, and using this I can analyse the problem.
Unfortunately, for this the version of the PDB file needs to match the build the crash dump was generated from - or in other words, for every release I build, I need to store the related PDBs in order to have them available for later analysis.
Now I know that MS offers a product named "symbolserver" which does the complete job of storing and managing the PDB files of a build. Unfortunately this is a far too complex solution for me.
So my question: is there an easy-to-use, simple-to-manage alternative for storing multiple versions of PDB files in order to have them available for crash dump analysis?
Thanks!
Note that you don't need a symbol server, just a directory. See also How to set up a symbol server. With symstore, you only need to set up your symbol path once, save it in the WinDbg workspace or in Visual Studio and you're ready to go.
The alternative is to manually create a folder named with the version number of your build and move all the PDBs there.
The debugging then goes like this (WinDbg):
lm m <yourapp>
to find out the version number
.sympath C:\path\to\symbols\<version>
It's similar in Visual Studio: you need to find out the version and then change the symbol path. It's always two steps instead of zero.
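For comparison, with a proper symbol store the path is configured once and never touched again; a typical _NT_SYMBOL_PATH or .sympath value (the cache folder and server name here are placeholders) is:
srv*C:\LocalSymbolCache*\\buildserver\symbols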
I've searched far and wide for a solution to this but can't find one.
I have configured TeamCity to publish packages with both the symbols and source in them to ProGet. This process works great and ProGet correctly identifies the symbols.
I have set up Visual Studio as per the instructions in ProGet's knowledge base, i.e.:
Adding the symbol locations in Options -> Debugging -> Symbols
Enabling source server support in Options -> Debugging -> General
I've checked in Fiddler and the symbols are downloaded when I launch our app in debug.
Then when stepping into one of the methods in our package, it opens the wrong file. The file it opens does, however, have the same name (we have a file called Component in each of our packages and also in the local solution that pulls in the package).
If I change the name of the file and re-package and publish it to ProGet, the problem goes away and I can step into the file during debugging, but this seems like a hack.
Does anyone know how you can get Visual Studio to favour the file on the symbol server over any local files in the solution with the same name?
Symbol files in the project directory are always loaded first (for this issue, a sample project would make it easier to see exactly which symbols get loaded). If your local cache folder already contains the symbol file that was downloaded from the symbol server before, it will not be downloaded again while you debug your app. So my understanding is that, since your symbol file has the same name, the VS debugger searches for and loads the symbol from your local project folder first, and only downloads it from the symbol server (or other locations) if your local machine does not have it. That is why you are seeing this issue.
The workarounds I could think of:
(1) Load the symbols manually from the Debug > Windows > Modules window if you really want to use two files with the same name.
(2) Using different names would be better.
Right-click on the project containing the file you want to open and choose "Set as StartUp Project".
Now when you try to debug, it will run the correct file.
When I use Visual Studio 2010 to debug a crash dump file (native code), it attempts to load C/C++ source files from the original build folder (and it gives the message "The source file is different from when the module was built. Would you like the debugger to use it anyway?"). The message is correct; the file is not the correct version.
What I would like VS2010 to do is to check out the source file using source server. If the file does not currently exist in its original build location, VS2010 will correctly use source server and retrieve the appropriate revision of the file (from Subversion). In order to force it to check out the correct revision, I have to physically delete the file from the original build location.
As a side note, VS2005 works as desired (well ... as I desire, perhaps not as others desire). VS2005 will always check out the correct revision from source control regardless of whether a copy of the file exists in the original build folder.
I believe the question comes down to one of the following:
Is there some kind of setting available that will change VS2010's precedence for finding source files?
Alternatively, is it possible to make VS2010 offer a choice/option to check out the source file in question? (Currently the only option I see in this situation is to browse for it.)
Or is it possible to completely exclude a specific path (folder) from the search?
I have the same problem with VS2010 and made an attempt to figure it out. I monitored devenv.exe with procmon but didn't see anything out of the ordinary with the files & registry keys it was accessing. Pretty much the same information you see in the error report when VS2010 can't find the source. My solution is to use VS2005 as it works fine. I did see some correspondence on MSDN about a similar (if not the same) bug, and they claimed it would be fixed in the final release of 2012. I believe I have that final release of 2012 and it has the same problem.
Here's a possibly slightly complicated solution:
1) Create a script that will download and replace the pdb file (a .bat, a python script, whatever)
2) Create a new External Tool within VS2010 (Tools -> External Tools -> Add)
3) Point the tool to your script and pass any project-specific stuff to it as arguments
4) Create a post-build or pre-build step in your project that will call your new External Tool (project properties -> Build Events -> whatever)
This is a lot of work, but at least it will fully integrate it into your building process.
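As a rough sketch of step 1), and purely hypothetical (the share, file names and argument handling would have to match your own setup), the script can be as small as a .bat that copies the archived PDB over the local one:
rem fetch_pdb.bat <version> - hypothetical helper that pulls the archived PDB for a given build
set VERSION=%1
copy /Y "\\archive\builds\%VERSION%\MyApp.pdb" "C:\work\MyApp\bin\MyApp.pdb"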
Note: Sometimes I've noticed that my post-build steps won't run unless I've compiled at least one .cpp file. I usually press F7 to build some source and then build fully, to make sure everything works as expected.
You can change the local source directory to a different name while you are debugging the crash dump file.
Or you can change the build directory so that it uses the same path as your local directory.
Every piece of documentation I've found (references 1 through 5) talks about setting up a symbol server by using a shared UNC path, and then putting the correct settings available to the local debugger instance (whether _NT_SYMBOL_PATH or the Visual Studio IDE Debugging settings).
Microsoft provides a symbol server (reference 6) available via http for their public symbol stores.
I want to create, for my own code, a symbol server accessible over http transport, instead of over UNC file sharing. The Mozilla folks appear to have done so (reference 7), but it is no longer functional.
Are there better references available for performing this task than I have found so far?
References
1. https://msdn.microsoft.com/en-us/library/b8ttk8zy(v=vs.80).aspx
2. http://msdn.microsoft.com/en-us/library/ms680693(v=vs.85).aspx
3. http://stackhash.com/blog/post/Setting-up-a-Symbol-Server.aspx
4. http://entland.homelinux.com/blog/2006/07/06/…
5. http://msdn.microsoft.com/en-us/windows/hardware/gg462988
6. http://support.microsoft.com/kb/311503
7. http://developer.mozilla.org/en/Using_the_Mozilla_symbol_server
I believe the answer is a very simple, "Just share the directory via some sort of http path." According to Chad Austin's entry on "Creating Your Very Own Symbol Server", this will just work.
In other words, the directory which symstore.exe uses to store the symbols, when served up as http://symbols.example.com/public_symbols/ , will be usable as the symbol server target for the Windows Debugging Tools.
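Once the directory is exposed over HTTP, the only change on the debugging side is the symbol path; for example, using the placeholder URL above together with a local cache folder:
srv*C:\LocalSymbolCache*http://symbols.example.com/public_symbols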
Be careful when having multiple users use Symstore.exe directly against the same symbol store. Microsoft's white papers on this subject make it sound like you simply create a share and have everyone update through the SYMSTORE.EXE program delivered as part of Debugging Tools for Windows. The white papers advised you to have this done by each build.
And it works great with single users or when funneling all updates through a single person who is updating the symbol server for a team.
Unfortunately, the "fine print" at the bottom of some of the white papers says that only one user running symstore.exe can update the shared symbol server at the same time without breaking the content.
(Example: At http://msdn.microsoft.com/en-us/library/ms681417(VS.85).aspx, Microsoft says: "Note SymStore does not support simultaneous transactions from multiple users. It is recommended that one user be designated "administrator" of the symbol store and be responsible for all add and del transactions.")
So there is no inherent mechanism to serialize updates to the symbol store. It appears that multiple, simultaneous attempts to update the symbol store can break the symbol store and/or its index.
We cannot have builds for our entire multi-thousand-person, international corporation in all time zones dependent upon coordination through one person in one location.
Based on those white papers, I raised this issue with Microsoft in March of 2009, who confirmed it was a possible problem. After that discussion, we chose to implement a symbol update service which serializes the updates via direct Windows Debugging Tools SDK DbgHelp.DLL SymSrvStoreFile() API calls, so there is never a possibility of two simultaneous updates against the same area of symbols at the same time. Users have a build action that queues their symbols through the service instead of directly updating the symbol store. The service then serializes the updates to make sure true concurrent update attempts never happen.
The limited documentation available about using SymSrvStoreFile was not very clear at the time. I did get it working. Hopefully it has been improved since then. If not, the most crucial issue was that the input path must be specified in a format similar to _NT_SYMBOL_PATH. So instead of, for example, using "C:\Data\MyProject\bin" as the input path, you would instead specify "srv*C:\Data\MyProject\bin".
Our service now also logs the updates in a database. The database both serves as a backup to the symbol store (in case it ever gets corrupted and must be rebuilt) and also creates a reporting point so that managers and support people know who is actually saving their symbols and who is not. We generate a weekly "symbol check-in" report which is auto-emailed to stakeholders.
A symbol server served via HTTP has the same structure as a symbol server served via a UNC file path, so the simplest thing to do would be to use symstore.exe to store the files in a folder somewhere and then use a simple HTTP server which exposes that folder via HTTP (even running python -m SimpleHTTPServer in the symbols dir would work).
A small gotcha is that if a symbol file does not exist, the HTTP server must return a 404 error code (tested under Visual Studio 2013 at least). I ran into an issue where an HTTP server returning 403 for missing files caused Visual Studio to stop making requests after the first failed request.
symstore.exe creates a number of auxiliary files and folders (the 000Admin/ folder, refs.ptr and files.ptr files). None of these are needed for the symbol server to work.
If you want to create a symbol store without using symstore.exe, you can upload the files with this structure:
BinaryName.pdb/$BUILD_ID/BinaryName.pdb
BinaryName.exe/$LINK_ID/BinaryName.exe
Where BUILD_ID is a GUID embedded in the PDB file and executable and LINK_ID is a combination of build timestamp and file size in the executable. These can be obtained by reading the output of the dump_syms.exe tool from the breakpad library. See http://www.chromium.org/developers/decoding-crash-dumps
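For reference, the identifier comes from the first line of dump_syms output, which (roughly, per the Breakpad symbol file format; the module name here is a placeholder) looks like:
MODULE windows x86 <BUILD_ID> BinaryName.pdb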
Our (Mozilla's) symbol server works fine, AFAICT. We're not doing anything particularly complicated, we just put the PDB files into the right directory structure (we have a script for that, but you could use symstore.exe) and serve it up via Apache. I think the only special thing we have are some Rewrite rules to allow accessing the files in a non-case-sensitive manner, because Microsoft's tools are really inconsistent about filename/GUID case.
There is also Electron's variant of this, which sits in front of S3.
It has the additional helpers of converting 403's to 404's (to not upset the debugger), and converting all paths to lowercase, so that incoming requests are case-insensitive.
https://github.com/electron/symbol-server
Are there any VC++ settings I should know about to generate better PDB files that contain more information?
I have a crash dump analysis system in place based on the project crashrpt.
Also, my production build server has the source code installed on the D:\, but my development machine has the source code on the C:\. I entered the source path in the VC++ settings, but when looking through the call stack of a crash, it doesn't automatically jump to my source code. I believe if I had my dev machine's source code on the D:\ it would work.
"Are there any VC++ settings I should know about"
Make sure you turn off frame pointer omission. Larry Osterman's blog has the historical details about FPO and the issues it causes with debugging.
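In the IDE that setting is C/C++ -> Optimization -> Omit Frame Pointers = No; on the compiler command line it is the /Oy- switch (alongside /Zi, which generates the PDB in the first place), for example:
cl /Zi /O2 /Oy- myfile.cpp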
Symbols are loaded successfully. It shows the callstack, but double clicking on an entry doesn't bring me to the source code.
What version of VS are you using? (Or are you using WinDbg?) ... In VS it should definitely prompt for source the first time if it doesn't find the location. However, it also keeps a list of source that was 'not found' so it doesn't ask you for it every time. Sometimes the don't-look list is a pain ... to get the prompt back up you need to go to solution explorer / solution node / properties / debug properties and edit the file list in the lower pane.
Finally you might be using 'stripped symbols'. These are pdb files generated to provide debug info for walking the callstack past FPO, but with source locations stripped out (along with other data). The public symbols for windows OS components are stripped pdbs. For your own code these simply cause pain and are not worth it unless you are providing your pdbs to externals. How would you have one of these horrible stripped pdbs? You might have them if you use "binplace" with the -a command.
Good luck! A proper mini dump story is a godsend for production debugging.
If you build directly from your source code management system, you should annotate your PDB files with the file origins. This allows you to automatically fetch the exact source files while debugging. (This is the same process as used for retrieving the .NET Framework source code.)
See http://msdn.microsoft.com/en-us/magazine/cc163563.aspx for more information. If you use subversion as your SCM you can check out the SourceServerSharp project.
You could try using the MS-DOS subst command to assign your source code directory to the D: drive.
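For example, if the build machine compiled from D:\myproject and your local copy lives under C:\dev\myproject (paths made up here), mapping the drive makes the original paths resolvable again:
subst D: C:\dev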
This is the procedure I used after some trouble similar to yours:
a) Copied to the production server all the EXE & DLL files that were built, each with its corresponding PDB in the same directory, started the system, and waited for the crash to happen.
b) Copied back all the EXE, DLL & PDB files to the development machine (to a temporary folder) along with the minidump (in the same folder). Used Visual Studio to load the minidump from that folder.
Since VS found the source files where they were originally compiled, it was always able to identify them and load them correctly. As with you, in the production machine the drive used was not C:, but in the development machine it was.
Two more tips:
One thing I often did was copy a rebuilt EXE/DLL and forget to copy the new PDB. This ruined the debug cycle; VS would not be able to show me the call stack.
Sometimes, I got a call stack that didn't make sense in VS. After some headache, I discovered that windbg would always show me the correct stack, but VS often wouldn't. Don't know why.
In case anyone is interested, a co-worker replied to this question to me via email:
Artem wrote:
There is a flag to MiniDumpWriteDump() that can do better crash dumps that will allow seeing full program state, with all global variables, etc. As for call stacks, I doubt they can be better because of optimizations... unless you turn (maybe some) optimizations off.
Also, I think disabling inline functions and whole program optimization will help quite a lot.
In fact, there are many dump types, maybe you could choose one small enough but still having more info: http://msdn.microsoft.com/en-us/library/ms680519(VS.85).aspx
Those types won't help with call stack though, they only affect the amount of variables you'll be able to see.
I noticed some of those dump types aren't supported in dbghelp.dll version 5.1 that we use. We could update it to the newest, 6.9 version though; I've just checked the EULA for MS Debugging Tools -- the newest dbghelp.dll is still ok to redistribute.
Is Visual Studio prompting you for the path to the source file? If it isn't then it doesn't think it has symbols for the callstack. Setting the source path should work without having to map the exact original location.
You can tell if symbols are loaded by looking at the 'modules' window in Visual Studio.
Assuming you are building a PDB then I don't think there are any options that control the amount of information in the PDB directly. You can change the type of optimizations performed by the compiler to improve debuggability, but this will cost performance -- as your co-worker points out, disabling inlining will help make things more obvious in the crash file, but will cost at runtime.
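Concretely, these are standard MSVC switches rather than anything specific to this project: /Ob0 disables inline expansion, and leaving out /GL (whole program optimization) keeps the generated code closer to the source, for example:
cl /Zi /O2 /Ob0 /Oy- myfile.cpp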
Depending on the nature of your application I would recommend working with full dump files if you can, they are bigger, but give you all the information about the process ... and how often does it crash anyway :)
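In terms of the MiniDumpWriteDump() flags mentioned above, a full dump corresponds to passing MiniDumpWithFullMemory as the dump type; the other MINIDUMP_TYPE values give you intermediate sizes.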
"Is Visual Studio prompting you for the path to the source file?"
No.
"If it isn't then it doesn't think it has symbols for the callstack. Setting the source path should work without having to map the exact original location."
Symbols are loaded successfully. It shows the callstack, but double clicking on an entry doesn't bring me to the source code. I can of course search in files for the line in question, but this is hard work :)