Debugging Source Indexed Software with Visual Studio is a pain

I'm responsible for some internal tools and I want to make debugging them as easy as possible on client machines. We used to ship with full source code and debugging information and built them in exactly the same location as the typical client install path. This made debugging extremely easy.
We have now moved to using Windows Installer to deploy our software, and we symbol serve the debugging information and source index the pdbs.
This now makes debugging a right pain, as for each client I want to debug, I have to change several options in Visual Studio to enable source indexing and add a file into a Visual Studio directory to stop it complaining about running the source control commands. I also can't search the source code easily as the files are only extracted on demand.
Is there a better way we can do this? I have tried hosting archives of the source code on a network share, and using symbolic links to alias the network share with the local hard drive on build, but that requires UNC paths, which the VS2008 manifest tool can't cope with.
We have also used an undocumented linker option (/sourcemap) which does let you redirect all the source paths to an arbitrary location, but this does not work for .NET apps (as that option is only available for link.exe).
Even if we supplied the source code with the installation, the paths in the PDB would still be wrong.
Is there any way to patch a PDB (supported or unsupported by Microsoft) to redirect the paths elsewhere?

I can't help you with a direct answer, but perhaps I can inspire you with a meta-answer.
Debugging in a production environment is fundamentally wrong. Others have tried it (Netscape comes to mind) and failed miserably. For something like tools, it's even more pertinent to ensure proper function through solid unit testing and automated tests for your releases. You can set up your tests to work with different qualities of input (from perfect-textbook to medium to poor to awful) and different quantities (i.e. test the limits). There are a lot of books available on testing, and particularly on unit testing.
I understand it's tough getting out of the day-to-day to make something like this work (I'm neck deep in it myself), but it's one of the sanest, if not the only sane way out.

Related

ALINK error 1065 even when Windows Long Paths enabled

I am trying to get a C# Visual Studio 2019/MSBuild job to build on a Jenkins build server. I know that my file paths are too long, so I have enabled Long File Paths in the Group Policy Editor (and verified that it has persisted in the registry editor after a server restart).
However, now I am getting the following error "ALINK: fatal error AL1065: File name ... is too long or invalid".
A quick Google search led me to this page for Alchemy Software. However, I have no idea what Alchemy Software is, why it is being used in the build process, or why it is failing. (Although on that last point, I'm guessing that the Alchemy Software .dll is not long-path aware ("longPathAware" in the application manifest), which I believe is necessary for an application to take advantage of Long Paths in Windows. But since I can't locate any .dll or .exe associated with this software, I can't be certain.)
Does anyone know why my build is still failing with this error, what Alchemy Software is, and how to get it to take advantage of Long Paths in Windows?
P.S. And please, no comments about how I should restructure my file paths to be shorter. I have tried doing that but it's impractical for this application. And anyway, it keeps popping up and is becoming a whack-a-mole situation, so I'd really rather fix the root cause rather than constantly putting band-aids everywhere.
I am going to set this as the answer since, after Hans Passant's very helpful comments and subsequent research, I think it's pretty definitive that this can mostly only be worked around, not resolved. (A possible exception will be discussed at the end of this answer.)
As stated in those comments, this error originates from a linker module called al.exe that is utilized by MSBuild for certain project configurations (more on that later). This linker can be found in a couple places, but for me it was being called from C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.8 Tools.
MSBuild can handle long file paths as long as Windows is configured to do so (by enabling long paths in the Group Policy Editor or by modifying the registry directly). However this al.exe module cannot. And as far as I can tell, there's no way to force it to do so. So if your build tool chain requires this al.exe module to be used, you're kind of SOL.
For my particular situation, for a job in Jenkins that was failing because my paths were too long, I worked around this by changing the workspace of my job. So now instead of the default of something like D:\Jenkins\workspaces\[product]\Releases\[product_version], I changed it to D:\j\dd0_1c, which is an encoding that makes sense to me. This shortening of the folder path avoids any subsequent file paths from exceeding the limit of 260 characters. It's not a satisfying solution, but it works for my particular situation.
I did mention that there was a possible exception to all of this: if you can get MSBuild to avoid using al.exe altogether, then you can avoid this error.
I don't know all of the scenarios or workflows in which MSBuild utilizes this module, but I do know that it does get utilized when your application has localized resources, and somehow, some way, MSBuild uses al.exe when it is generating those resources. This was exactly my scenario, and I found this page and this page describing how you can reconfigure your localization projects such that MSBuild does not utilize al.exe. I did try the steps described in these pages and was able to verify that I no longer got this ALINK error from al.exe. However I never got my project to fully build since this reconfiguration caused other build errors to crop up. So in the end, for the sake of expediency (and because it was cleaner than performing a major refactor of my code), I went with the Jenkins workspace workaround.
However, it is interesting to note that you can get MSBuild to avoid using al.exe as long as your project is conducive to the solution given in those two links. So hopefully, if someone runs into this same issue, they might have more success than I in utilizing this method.
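For what it's worth, I believe the reconfiguration those pages describe boils down to a single MSBuild property; treat this as an assumption on my part, since I can't verify the linked pages here. With recent versions of MSBuild you can ask the C# compiler to emit the satellite resource assemblies itself, so al.exe is never invoked:

    <!-- In the .csproj (assumption: this matches what the linked pages
         describe). csc.exe builds the satellite assemblies directly,
         bypassing al.exe and its 260-character path limit. -->
    <PropertyGroup>
      <GenerateSatelliteAssembliesForCore>true</GenerateSatelliteAssembliesForCore>
    </PropertyGroup>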

How can I make ReSharper go into my own external sources while debugging?

Ever since I started using ReSharper it's never been clear to me how I can step into my own external sources. Sometimes it works, but most of the time it does not.
As my frustration is at its peak, I would like to figure out how this works once and for all.
I have two C# solutions (one for my Framework and one for my Platform). I am using code from my Framework in my Platform solution through NuGet packages.
Both solutions are located on my disk (C:\<project>\framework and C:\<project>\platform). The Framework solution contains several projects (e.g. Framework.Core and Framework.Logging).
When I am debugging my Platform solution I cannot navigate into a method (F11) that is called on one of my Framework components.
As said, this has been working fine for me in the past but now it's not working anymore and I cannot find the solution.
Thanks for your help!
ReSharper doesn't control anything about stepping into external sources while debugging. The options in your screenshot control navigating into external sources from standard ReSharper navigation commands (go to type, find usages, etc.).
In order to debug external sources, you'll need to make sure you have access to the .pdb files for your external code. These must either be side by side with the assemblies, available in the symbol cache, or downloaded from a symbol server.
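Since your Framework packages are built in-house, one option (an assumption about your packaging setup, not something the ReSharper docs prescribe) is to pack the PDBs side by side with the assemblies. With SDK-style projects, this property adds them to the package's lib folder:

    <!-- In each Framework .csproj: also place .pdb files next to the
         assemblies inside the NuGet package. -->
    <PropertyGroup>
      <AllowedOutputExtensionsInPackageBuildOutputFolder>$(AllowedOutputExtensionsInPackageBuildOutputFolder);.pdb</AllowedOutputExtensionsInPackageBuildOutputFolder>
    </PropertyGroup>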

Multiple DLL Resource Management

I have an existing MFC product and am planning on supporting a couple of other national languages through the use of resource-only DLLs. I've read a number of articles and tutorials on how to go about this, but admit that I don't have a lot of in-depth knowledge of Windows resources (mostly I just use VS 2008's graphical interface).
The major area I am trying to understand is that it seems like all of the resource source files (i.e., resource.rc) for these DLLs -- and the main program -- should share the same copy of resource.h. After all, all those IDD_xxx values have to be consistent, and it seems like making updates to the resources would be even more complicated by having to keep multiple resource.h files in sync!
So am I correct on this, and does anyone have any tips for how to best implement this? Should I modify resource.rc in the DLL projects to point to the "master" resource.h in the main program directory?
Yes, use the same resource.h file for sure.
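For example, each DLL's .rc file can simply pull in the main program's header (the relative path here is hypothetical):

    // In the resource DLL's .rc file: reuse the master resource.h so all
    // IDD_xxx values stay consistent across the EXE and the DLLs.
    #include "..\MainApp\resource.h"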
One way is to just copy the resources that need to be translated into the new resource project -- stuff like menus, strings, and dialogs. Bitmaps and icons probably don't need to be translated unless you put language-specific text on them. If you know your locale, at program startup you can call AfxSetResourceHandle() with the resource DLL you manually load.
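A minimal sketch of that startup step (the DLL name is made up, and error handling is omitted):

    // Inside an MFC app's InitInstance, before any resources are used:
    // load the locale-specific resource DLL and make MFC read resources
    // from it instead of from the EXE.
    HINSTANCE hResDll = ::LoadLibrary(_T("MyAppFRA.dll"));
    if (hResDll != NULL)
        AfxSetResourceHandle(hResDll);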
Another way to approach the problem, if you have a multitude of DLLs and EXEs, is to use binary resource-editing tools. What they do is create token files from your resources. Your translators edit the token files with the binary editing tool, and when all is done you run a tool to apply the translation to the binaries. Basically, you don't distribute resource DLLs; you distribute different versions of your DLLs for each language. The tools are smart enough that if you make a change, like adding a string or a dialog, it will get picked up and your translator can see that there is something new to translate. The previously translated work is preserved in the token files. This is how we do it at my shop. We used to use Microsoft's Localization Resource toolkit; I don't know if we still use it, since it is somebody else's responsibility now.
I found the MSDN article ID 198846 a good starting point for sharing resources via a DLL. Though it needs updating for newer versions of Visual Studio, it was quite easy to follow and understand.
http://support.microsoft.com/kb/198846

Why have separate Debug and Release folders in Visual Studio?

By default, of course, Visual Studio creates separate bin folders for Debug and Release builds. We are having some minor issues dealing with those from the perspective of external dependencies, where sometimes we want release binaries and sometimes debug. It would make life slightly easier to just have a single bin folder on all projects and make that the target for both Debug and Release. We could then point our external scripts, etc. at a single location.
A co-worker questioned why we couldn't just do that--change the VS project settings to go to the same bin folder? I confess I couldn't really think of a good reason to keep them, other than easily being able to see on my local filesystem which are Debug and which are Release. But so what; what does that gain?
My question(s):
How do you leverage having distinct Debug and Release folders? What processes does this enable in your development?
What bad thing could happen if you fail to retain this distinction?
Inversely, if you have gone the "single folder" route, how has this helped you?
I am NOT asking why have separate Debug and Release builds. I understand the difference, and the place of each. My question concerns placing them in separate folders.
Dave, if you compile Debug and Release to a single folder, you may encounter a situation where some DLLs are not recompiled after switching from Release to Debug (and vice versa) because the DLL files are newer than the source files. Yes, a "Rebuild" fixes this, but if you forget, you can lose a few extra hours to debugging.
The way I see it, this is simply a convenience on the developer's machine allowing them to compile and run both Debug and Release builds simultaneously.
If you have scripts or tools running inside Visual Studio, the IDE allows you to use $(ConfigurationName) and other macros to obtain paths without hard-coding the configuration.
If you are running scripts and tools externally from the command-line (i.e. you are structuring some kind of release or deployment process around it), it is better to do this on a build server, where the distinction between Debug and Release goes away.
For example, when you invoke msbuild from the command-line (on the build server) you can specify the Configuration property for Debug or Release, and the OutputPath property to build to one location only (regardless of the Configuration).
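For example (the solution name and output path are placeholders):

    msbuild MySolution.sln /p:Configuration=Release /p:OutputPath=C:\build\bin\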
One reason I use separate folders is that it guarantees I only generate installers that use release-build code. I use WiX, which allows me to specify the exact path to the files I want to include in the installer, so I specify the path into the Release folder. (Of course, you can do the same using normal VS installer projects, so that isn't really the point.) If you forget to switch your project to Release before building, the installer doesn't build unless you have old code in the Release folder, in which case you end up with an old installer -- so that's a bit of a pitfall. The way I get around it is to use a post-build event on the WiX installer project that clears out the Release folder after the WiX installer builds.
In a previous company we got round this problem by changing the names of the debug executable and dlls by appending a "D". So
MainProgram.exe & library.dll
become
MainProgramD.exe & libraryD.dll
This meant that they could co-exist in the same output folder and all scripts, paths etc. could just reference that. The intermediate files still need to go to separate folders as the names of these can't be changed.
Obviously you need to change all your references to point to the modified name for debug - which you will forget to do at some point.
I usually compile in Debug mode, but sometimes need to compile in Release mode. Unfortunately, they don't behave exactly the same in certain error situations. By having separate folders, I don't need to recompile everything just to change modes (and a full recompile of our stuff in Release mode will take a while).
I have experience from a somewhat bigger project. If there are a few solutions using file references to other solutions, you have to point the references to ONE directory, so obviously it has to be the "release" one for the continuous/nightly build. Now you can imagine what happens when a developer wants to work with debug versions: all the references point to the release ones. If everything pointed to the same directory, switching to debug would only be a matter of recompiling all related stuff in debug mode, and the file references would automatically point to the debug versions from then on.
On the other hand, I don't see why a developer would ever want to work with release versions (and switch back and forth): the release mode is only useful for full/nightly builds, so the solutions in VS can stay in debug mode by default, and the build script always does a clean release build anyway.
Occasionally one may run into a particularly nasty uninitialized-memory problem that only occurs with a release build. If you are unable to maintain (as ChrisF suggests) separate names for your debug vs. release binaries, it's really easy to lose track of which binary you're currently using.
Additionally, you may find yourself tweaking the compiler settings (i.e. optimization level, release-with-debug symbols for easy profiling, etc.) and it's much easier to keep these in order with separate folders.
It's all a matter of personal preference though - which is why Visual Studio makes it easy to change the options.
IDEs like Visual Studio do what works best for the majority: they create the default project structure, including separate binary folders. You could map the binaries to a single folder, but then you need to educate the other developers that Release and Debug files are stored in the same folder -- and they will ask you why you do it that way.
In VC++, different libraries are generated for each configuration, and you need to link the appropriate versions; otherwise you will get linker errors.
Being consistent in your assemblies is a good thing. You don't want to deal with issues around conditional compilation/etc. where your release and debug dlls are incompatible, but you're trying to run them against each other.
What everyone else said about the technical aspects is important. Another aspect is that you may run into race conditions if one build relies on the single-output-location build, but there's no synchronization between the two builds. If the first build can be re-run (especially in a different mode) after the second build starts, you won't really know whether you're using a debug or release build.
And don't forget the human aspect: it's far easier to know what you're working with (and fix broken builds) if the two builds output to different locations.

Any recommended VC++ settings for better PDB analysis on release builds

Are there any VC++ settings I should know about to generate better PDB files that contain more information?
I have a crash dump analysis system in place based on the project crashrpt.
Also, my production build server has the source code on D:\, but my development machine has it on C:\. I entered the source path in the VC++ settings, but when looking through the call stack of a crash, it doesn't automatically jump to my source code. I believe if my dev machine's source code were on D:\ it would work.
"Are there any VC++ settings I should know about"
Make sure you turn off frame pointer omission. Larry Osterman's blog has the historical details about FPO and the issues it causes with debugging.
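Concretely, FPO is controlled by the /Oy compiler switch; a sketch of a debugging-friendly compile line (the file name is a placeholder):

    rem /Oy- disables frame-pointer omission; /Zi emits full debug info
    cl /Zi /Oy- /c MyFile.cpp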
Symbols are loaded successfully. It shows the callstack, but double clicking on an entry doesn't bring me to the source code.
What version of VS are you using? (Or are you using WinDbg?) ... in VS it should definitely prompt for source the first time if it doesn't find the location. However, it also keeps a list of source that was 'not found' so it doesn't ask you for it every time. Sometimes the don't-look list is a pain ... to get the prompt back up you need to go to Solution Explorer / solution node / Properties / debug properties and edit the file list in the lower pane.
Finally, you might be using 'stripped symbols'. These are PDB files generated to provide debug info for walking the callstack past FPO, but with source locations stripped out (along with other data). The public symbols for Windows OS components are stripped PDBs. For your own code these simply cause pain and are not worth it unless you are providing your PDBs to external parties. How would you end up with one of these horrible stripped PDBs? You might have them if you use "binplace" with the -a command.
Good luck! A proper mini dump story is a godsend for production debugging.
If you build directly from your source code management system, you should annotate your PDB files with the file origins. This allows you to automatically fetch the exact source files while debugging. (This is the same process as used for retrieving the .NET Framework source code.)
See http://msdn.microsoft.com/en-us/magazine/cc163563.aspx for more information. If you use Subversion as your SCM, you can check out the SourceServerSharp project.
You could try using the MS-DOS subst command to assign your source code directory to the D: drive.
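For example, if the build machine compiled from D:\src but your local copy lives at C:\dev\src (hypothetical paths), mapping D: onto C:\dev makes the paths baked into the PDB resolve:

    subst D: C:\dev

Running subst D: /D later removes the mapping.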
This is the procedure I used after some trouble similar to yours:
a) Copied all the built EXE & DLL files to the production server, each with its corresponding PDB in the same directory, started the system, and waited for the crash to happen.
b) Copied all the EXE, DLL & PDB files back to the development machine (to a temporary folder), along with the minidump (in the same folder). Used Visual Studio to load the minidump from that folder.
Since VS found the source files where they were originally compiled, it was always able to identify them and load them correctly. As with you, the drive used on the production machine was not C:, but on the development machine it was.
Two more tips:
One thing I did often was to copy a rebuilt EXE/DLL and forget to copy the new PDB. This ruined the debug cycle; VS would not be able to show me the call stack.
Sometimes I got a call stack that didn't make sense in VS. After some headache, I discovered that WinDbg would always show me the correct stack, but VS often wouldn't. Don't know why.
In case anyone is interested, a co-worker replied to this question via email:
Artem wrote:
There is a flag to MiniDumpWriteDump() that can do better crash dumps that will allow seeing full program state, with all global variables, etc. As for call stacks, I doubt they can be better because of optimizations... unless you turn (maybe some) optimizations off.
Also, I think disabling inline functions and whole program optimization will help quite a lot.
In fact, there are many dump types, maybe you could choose one small enough but still having more info: http://msdn.microsoft.com/en-us/library/ms680519(VS.85).aspx
Those types won't help with the call stack though, they only affect the amount of variables you'll be able to see.
I noticed some of those dump types aren't supported in dbghelp.dll version 5.1 that we use. We could update it to the newest version, 6.9, though; I've just checked the EULA for MS Debugging Tools -- the newest dbghelp.dll is still OK to redistribute.
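For reference, a minimal sketch of passing richer dump-type flags to MiniDumpWriteDump() (the particular flag combination is an illustrative assumption; see the MSDN link above for the full MINIDUMP_TYPE list):

    #include <windows.h>
    #include <dbghelp.h>
    #pragma comment(lib, "dbghelp.lib")

    // Write a dump that also captures global/static data and handle info.
    // hFile: an open, writable file handle for the .dmp file.
    // pExcInfo: may be NULL when not called from an exception filter.
    BOOL WriteRicherDump(HANDLE hFile, MINIDUMP_EXCEPTION_INFORMATION* pExcInfo)
    {
        MINIDUMP_TYPE type = (MINIDUMP_TYPE)(
            MiniDumpWithDataSegs |                    // global/static variables
            MiniDumpWithHandleData |                  // open handle information
            MiniDumpWithIndirectlyReferencedMemory);  // memory referenced from the stack
        return ::MiniDumpWriteDump(::GetCurrentProcess(), ::GetCurrentProcessId(),
                                   hFile, type, pExcInfo, NULL, NULL);
    }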
Is Visual Studio prompting you for the path to the source file? If it isn't then it doesn't think it has symbols for the callstack. Setting the source path should work without having to map the exact original location.
You can tell if symbols are loaded by looking at the 'modules' window in Visual Studio.
Assuming you are building a PDB, I don't think there are any options that directly control the amount of information in it. You can change the type of optimizations performed by the compiler to improve debuggability, but this will cost performance -- as your co-worker points out, disabling inlining will help make things more obvious in the crash file, but will cost at runtime.
Depending on the nature of your application I would recommend working with full dump files if you can, they are bigger, but give you all the information about the process ... and how often does it crash anyway :)
Is Visual Studio prompting you for the path to the source file?
No.
If it isn't then it doesn't think it has symbols for the callstack. Setting the source path should work without having to map the exact original location.
Symbols are loaded successfully. It shows the callstack, but double clicking on an entry doesn't bring me to the source code. I can of course search in files for the line in question, but this is hard work :)
