Why have separate Debug and Release folders in Visual Studio? - visual-studio

By default, of course, Visual Studio creates separate bin folders for Debug and Release builds. We are having some minor issues dealing with those from the perspective of external dependencies, where sometimes we want release binaries and sometimes debug. It would make life slightly easier to just have a single bin folder on all projects and make that the target for both Debug and Release. We could then point our external scripts, etc. at a single location.
A co-worker questioned why we couldn't just do that--change the VS project settings to go to the same bin folder? I confess I couldn't really think of a good reason to keep them, other than easily being able to see on my local filesystem which are Debug and which are Release. But so what; what does that gain?
My question(s):
How do you leverage having distinct Debug and Release folders? What processes does this enable in your development?
What bad thing could happen if you fail to retain this distinction?
Inversely, if you have gone the "single folder" route, how has this helped you?
I am NOT asking why have separate Debug and Release builds. I understand the difference, and the place of each. My question concerns placing them in separate folders.

Dave, if you compile Debug and Release to a single folder, you may encounter a situation where some DLLs are not recompiled after switching from Release to Debug (and vice versa), because the DLL files are newer than the source files. Yes, "Rebuild" should help you, but if you forget it, you can end up with a few extra hours of debugging.

The way I see it, this is simply a convenience on the developer's machine allowing them to compile and run both Debug and Release builds simultaneously.
If you have scripts or tools running inside Visual Studio, the IDE allows you to use $(ConfigurationName) and other macros to obtain paths without hard-coding Debug or Release.
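For example, a post-build event can copy the output to a staging area without hard-coding the configuration (the staging path here is hypothetical):

xcopy /y "$(TargetPath)" "$(SolutionDir)staging\$(ConfigurationName)\"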
If you are running scripts and tools externally from the command-line (i.e. you are structuring some kind of release or deployment process around it), it is better to do this on a build server, where the distinction between Debug and Release goes away.
For example, when you invoke msbuild from the command-line (on the build server) you can specify the Configuration property for Debug or Release, and the OutputPath property to build to one location only (regardless of the Configuration).
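Something like this on the build server, for instance (solution name and output path are hypothetical):

msbuild MySolution.sln /p:Configuration=Release /p:OutputPath=..\bin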

One reason I use separate folders is that it guarantees that I only generate installers that use Release-build code. I use WiX, which allows me to specify the exact path to the files I want to include in the installer, so I end up specifying the path in the Release folder. (Of course, you can do the same using normal VS installers, so that isn't really the point.) If you forget to switch your project to Release before building, the installer doesn't build unless you have old code in the Release folder - in which case you end up with an old installer - so that's a bit of a pitfall. The way I get around that is to use a post-build event on the WiX installer project that clears out the Release folder after the WiX installer builds.
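A post-build event along these lines would do the clearing (the application path is hypothetical):

rd /s /q "$(SolutionDir)MyApp\bin\Release"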

In a previous company we got round this problem by changing the names of the debug executable and dlls by appending a "D". So
MainProgram.exe & library.dll
become
MainProgramD.exe & libraryD.dll
This meant that they could co-exist in the same output folder and all scripts, paths etc. could just reference that. The intermediate files still need to go to separate folders as the names of these can't be changed.
Obviously you need to change all your references to point to the modified name for debug - which you will forget to do at some point.
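With MSBuild-based projects, a similar trick can be done by overriding the output name for Debug builds only, along these lines (project and names hypothetical):

msbuild MainProgram.csproj /p:Configuration=Debug /p:AssemblyName=MainProgramD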

I usually compile in Debug mode, but sometimes need to compile in Release mode. Unfortunately, they don't behave exactly the same in certain error situations. By having separate folders, I don't need to recompile everything just to change modes (and a full recompile of our stuff in Release mode will take a while).

I have experience from a somewhat bigger project. If there are a few solutions using file references to other solutions, you have to point each reference to ONE directory, so obviously it has to be the "release" one for the continuous/nightly build. Now you can imagine what happens when a developer wants to work with debug versions - all the references point to release ones. If everything pointed to the same directory, switching to debug would be only a matter of recompiling all the related stuff in debug mode, and the file references would automatically point to the debug versions from then on.
On the other side, I don't see why a developer would ever want to work with release versions (and switch back and forth) - release mode is only useful for full/nightly builds, so the solutions in VS can stay in debug mode by default, and the build script always does a clean release build anyway.
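That nightly script can boil down to a single MSBuild invocation (solution name hypothetical):

msbuild MySolution.sln /t:Rebuild /p:Configuration=Release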

Occasionally one may run into a particularly nasty uninitialized-memory problem that only occurs with a release build. If you are unable to maintain (as ChrisF suggests) separate names for your debug vs. release binaries, it's really easy to lose track of which binary you're currently using.
Additionally, you may find yourself tweaking the compiler settings (e.g. optimization level, release-with-debug-symbols for easy profiling, etc.), and it's much easier to keep these in order with separate folders.
It's all a matter of personal preference though - which is why Visual Studio makes it easy to change the options.

IDEs like Visual Studio do what works best for the crowd: they create the default project structure and binary folders. You could map the binaries to a single folder, but then you would need to educate the other developers that Release and Debug files are stored in the same folder.
Developers would ask you why you do it that way.
In VC++, different libraries are generated for each configuration, and you need to link the appropriate versions; otherwise you will get linker errors (for example, a mismatch between debug and release runtime libraries).

Being consistent in your assemblies is a good thing. You don't want to deal with issues around conditional compilation/etc. where your release and debug dlls are incompatible, but you're trying to run them against each other.

What everyone else said about the technical aspects is important. Another aspect is that you may run into race conditions if one build relies on the single-output-location build, but there's no synchronization between the two builds. If the first build can be re-run (especially in a different mode) after the second build starts, you won't really know whether you're using a debug or release build.
And don't forget the human aspect: it's far easier to know what you're working with (and fix broken builds) if the two builds output to different locations.

Related

Why might it be necessary to force rebuild a program?

I am following the book "Beginning STM32" by Warren Gay (excellent so far, btw) which goes over how to get started with the Blue Pill.
A part that has confused me is that, while we are putting our first program on the Blue Pill, the book advises forcing a rebuild of the program before flashing it to the device. I use:
make clobber   # remove all generated objects and binaries, forcing a full rebuild
make           # compile and link everything from scratch
make flash     # write the resulting image to the board
My question is: Why is this necessary? Why not just flash the program, since it is already made? My guess is that it is just to learn how to make an unmade program... but I also wonder if rebuilding before flashing to the device is best practice? The book does not say why.
You'd have to ask the author, but I would suggest it is "just in case", no more - a lack of trust that the makefile specifies all possible dependencies, perhaps. If the makefile were hand-built without automatically generated dependencies, that is entirely possible. Also, it is easier to simply advise a rebuild than to explain all the situations where it might or might not be necessary, which would not be exhaustive anyway.
From the author's point of view, it eliminates a number of possible build-consistency errors that are beyond his control, so it ensures you don't end up thinking the book is wrong when it might be something you have done that the author has no control over.
Even with automatically generated dependencies, a project may have dependencies that the makefile or dependency generator does not catch, resource files used for code generation using custom tools for example.
For large projects developed over a long time, some seldom-modified modules could well have been compiled with an older version of the toolchain; a clean build ensures everything is compiled and linked with the current tools.
make decides what to rebuild based on file timestamps; if you have build variants controlled by command-line macros, make cannot tell which objects depend on such a macro, so when building a different variant (switching from a "debug" to a "release" build, for example) it is a good idea to rebuild all, to ensure each module is consistent and compatible.
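For example, with a makefile whose compiler flags depend on a DEBUG variable (a common but here hypothetical setup):

make            # produces release objects
make DEBUG=1    # flags changed, but the .o files look up to date, so nothing is recompiled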
I would suggest that during a build/debug cycle you use incremental build for development speed as intended, and perform a rebuild for release or when changing some build configuration (such as enabling/disabling floating-point hardware, or switching to a different processor variant).
If during debug you get results that seem to make no sense - such as breakpoints and code stepping not aligning with the source code, or a crash or behaviour that seems unrelated to some small change you may have made (perhaps that code has not even been executed) - it may be down to a build inconsistency (for a variety of reasons, usually with complex builds). In such cases it is wise to at least eliminate build inconsistency as a cause by performing a rebuild all.
Certainly if you are releasing code to a third party, such as a customer or for production of some product, you would probably want to perform a clean build just to ensure build consistency. You don't want users reporting bugs you cannot reproduce because the build they were given is not reproducible.
Rebuilding the complete software is good practice because it regenerates all the dependencies and symbol files with the paths on your local machine.
If you need to debug the application with a debugger, you will need the symbol files and the paths to where your source code is present. If you flash the application without rebuilding, you might not be able to debug properly, because you won't know on which paths the application was compiled, and you might be missing the symbol information.

How can Visual Studio automatically build and test C# code?

I'm used to Eclipse for Java projects which automatically builds whenever I save a file. (I can turn this off.)
I then install Infinitest which automatically runs all tests affected by the change saved.
How do I do this for Visual Studio, writing C# software?
Important note:
If you're only concerned about C#/.NET code and only want to run Unit Tests, then this feature already exists in Visual Studio 2017 (Enterprise edition only) called Live Unit Testing, read more about it here: https://learn.microsoft.com/en-us/visualstudio/test/live-unit-testing-intro?view=vs-2017
Live Unit Testing is a technology available in Visual Studio 2017 version 15.3 that executes your unit tests automatically in real time as you make code changes.
My original answer:
(I used to be an SDE at Microsoft working on Visual Studio (2012, 2013, and 2015). I didn't work on the build pipeline myself, but I hope I can provide some insight:)
How can Visual Studio automatically build and test code?
It doesn't, and in my opinion, it shouldn't, assuming by "build and test code" you mean it should perform a standard project build and then run the tests.
Eclipse only builds what is affected by the change. Works like a charm.
Even incremental builds aren't instant, especially if there's significant post-compile activity (complicated linking and optimizations (even in debug mode), or external build tasks such as embedding resources, executable compression, code signing, etc.).
In Eclipse, specifically, this feature is not perfect. Eclipse is primarily a Java IDE, and in Java projects it's quite possible to perform an incremental build very quickly: Java's build times are fast anyway, and an incremental build is as simple as swapping an embedded .class file in a Java .jar. In Visual Studio, for comparison, a .NET assembly build is also fast - but not as simple, because the output PE (.exe/.dll) file has to be rebuilt as a whole.
However, in other project types, especially C++, build times are much longer, so it's inappropriate to have this feature for C/C++ developers; in fact, Eclipse's own documentation advises C/C++ users to turn this feature off:
https://help.eclipse.org/mars/index.jsp?topic=%2Forg.eclipse.cdt.doc.user%2Ftasks%2Fcdt_t_autobuild.htm
By default, the Eclipse workbench is configured to build projects automatically. However, for C/C++ development you should disable this option, otherwise your entire project will be rebuilt whenever, for example, you save a change to your makefile or source files. Click Project > Build Automatically and ensure there is no checkmark beside the Build Automatically menu item.
Other project types don't support this feature either, such as Eclipse's plugin for Go:
https://github.com/GoClipse/goclipse/releases/tag/v0.14.0
Changes in 0.14.0:
[...]
Project builder is no longer invoked when workspace "Build Automatically" setting is enabled and a file is saved. (this was considered a misfeature anyways)
(That parenthetical remark is in GoClipse's changelist and makes the plugin authors' opinion of automatic builds quite clear.)
I then install Infinitest which automatically runs all tests affected by the change saved.
Visual Studio can run your tests after a build automatically (but you still need to trigger the build yourself); this is a built-in feature, see here:
https://learn.microsoft.com/en-us/visualstudio/test/run-unit-tests-with-test-explorer?view=vs-2017
To run your unit tests after each local build, choose Test on the standard menu, and then choose Run Tests After Build on the Test Explorer toolbar.
As for my reasons why Visual Studio does not support Build-on-Save:
Only the most trivial C#/VB and TypeScript projects build in under one second; the other project types, like C, C++, SQL Database, etc., take anywhere from a few seconds for a warm rebuild of a simple project to literally hours for a large-scale C++ project with lots of imported headers on a single-core CPU with low RAM and a 5,400 rpm IDE hard drive.
Many builds are very IO-intensive (especially C/C++ projects with lots of headers *cough*like <windows.h>*cough*) rather than CPU-bound, and disk IO delays are a major cause of lockups and slowdowns in other applications running on the computer (because a disk paging operation might be delayed, or because they perform disk IO operations on the GUI thread, and so on) - so with this feature enabled, a disk IO-heavy build just means your computer will jitter a lot every time you press Ctrl+S or whenever autosave runs.
Not every project type supports incremental builds, or can support a fast incremental build. Java is the exception to this rule, because Java was designed so that each input source .java file maps 1-to-1 to an output .class file; this makes incremental builds very fast, as only the actually modified file needs to be rebuilt. Other projects, like C# and C++, don't have this luxury: if you make even an inconsequential 1-character edit to a C preprocessor macro or C++ template, you'll need to recompile everything else that used that template - and then the linker and optimizer (if inlining code) will both have to re-run - not a quick task.
A build can involve deleting files on disk (such as cleaning your build output folder) or changing your global system state (such as writing to a non-project build log) - in my opinion if a program ever deletes anything under a directory that I personally own (e.g. C:\git\ or C:\Users\me\Documents\Visual Studio Projects) it had damn well better ask for direct permission from me to do so every time - especially if I want to do something with the last build output while I'm working on something. I don't want to have to copy the build output to a safe directory first. This is also why the "Clean Project" command is separate and not implied by "Build Project".
Users often press Ctrl+S habitually every few seconds (I'm one of those people) - I press Ctrl+S even when I've written incomplete code in my editor: things with syntax errors or perhaps even destructive code - I don't want that code built at all because it isn't ready to be built! Even if I have no errors in my code there's no way for the IDE to infer my intent.
Building a project is one way to get a list of errors with your codebase, but that hasn't been necessary for decades: IDEs have long had design-time errors and warnings without needing the compiler to run a build (thanks to things like Language Servers) - in fact running a build will just give me double error messages in the IDE's error window because I will already have error messages from the design-time error list.
Visual Studio, at least (I can't speak for Eclipse), enters a special kind of read-only mode during a build: you can't save further changes to disk while a build is in progress, you can't change project or IDE settings, and so on. This is because the build process is a long process that depends on the project source being in a fixed, known state - the compiler can't do its job if the source files are being modified while it's reading them! So if the IDE were always building (even if just for a few seconds) after each save, users wouldn't like how the IDE blocks them from certain editing tasks until the build is done (remember, IDEs do more than just show editors: some specialized tool window might need to write to a project file just to open).
Finally, Building is not free of side-effects (in fact, that's the whole point!) - there is always a risk something could go wrong in the build process and break something else on your system. I'm not saying building is risky, but if you have a custom build script that does something risky (e.g. it runs a TRUNCATE TABLE CriticalSystemParameters) and the build breaks (because they always do) it might leave your system in a bad state.
Also, there's the (slightly philosophical) problem of: "What happens if you save incomplete changes to the build script of your project?".
Now I admit that some project types do build very, very quickly (like TypeScript, Java, and C#), and others have source files that don't need compiling and linking at all and just run validation tools (like PHP or JavaScript) - and having build-on-save might be useful for those people - but arguably, for the limited number of people whose experience it improves, it demonstrably worsens the experience for the rest of the users.
And if you really want build-on-save, it's trivial enough to write as an extension for Visual Studio (hook the "File Save" command and then invoke the Project Build command in your handler) - or get into the habit of pressing Ctrl+B after Ctrl+S :)
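A minimal sketch of such an extension, assuming you already have the EnvDTE.DTE object from your VSIX package (names are hypothetical, error handling omitted):

private EnvDTE.DocumentEvents _documentEvents;   // field reference keeps the COM event source alive

void HookBuildOnSave(EnvDTE.DTE dte)
{
    _documentEvents = dte.Events.DocumentEvents;
    _documentEvents.DocumentSaved += document =>
    {
        var project = document.ProjectItem?.ContainingProject;
        if (project == null)
            return;   // the saved file isn't part of a project
        var configuration = dte.Solution.SolutionBuild.ActiveConfiguration.Name;
        // Build only the saved document's project, without blocking the IDE.
        dte.Solution.SolutionBuild.BuildProject(configuration, project.UniqueName, false);
    };
}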
VS 2017 Enterprise Edition supports a Live Unit Testing feature. For older versions or lower editions, some third-party providers such as Mighty Moose or NCrunch are available (other third-party solutions almost certainly exist as well).

Should you build components every time you build a main app

We have started using Final Builder to create builds for our vb6 and .net projects. We are also using Visual Source Safe to manage our source. Some of our vb6 exe's are dependent on certain ocx's, such that a particular vb6 exe may require a particular version of an ocx.
The question is: should the Final Builder script for our exe project also rebuild the ocx project, or is it better to simply pull a particular version of the already-compiled ocx? My concern is that other developers could have broken the build (or introduced a bug) in the ocx, which could then break the exe we are trying to build. Moreover, rebuilding the ocx project would produce the same version of the ocx but with a different date, causing confusion if DLL hell (OCX hell) issues arise.
There is no difference, in terms of building and maintaining your app, between an OCX and an ActiveX DLL. The OCX should use binary compatibility and be part of your compile process.
This is, however, a general rule. You may have some components that rarely change, if ever. In my own VB6 application I have a handful of components at the bottommost level of my reference hierarchy that rarely get updated. They get updated once or twice a year at best; some haven't been updated for several years now.
However based on your description it sounds like the controls are still being modified. So I doubt the second case applies.
In the end use your best judgment.
There are two ways to use OCX/DLLs: code reusability vs. fragmentation of an over-large project.
Those meant for re-use would be absurd to build, build, and rebuild, and almost never should be customized to fit a new application. These are your crown jewels, and most people should have no ability to modify the source. They are the domain of your organization's "library writers" because that's what they are: libraries.
If you simply have large, monolithic, unwieldy applications, you may have to go the other route. Then OCXs and DLLs simply become an awkward extension of the "module" concept. This is why we have Project Groups.
Your library users should not be fiddling with libraries though. I'm sure they all fancy themselves able to "ensure they are up to date and performant" but that's a different debate entirely.

Visual Studio 2008 Unnecessary Project Building

I have a C# project which includes one exe and 11 library files. The exe references all the libraries, and lib1 may reference lib2, lib3, lib4, etc.
If I make a change to a class in lib1 and build the solution, I assumed that only lib1 and the exe would need to be rebuilt. However, all of the DLLs and the exe are built whenever I want to run the solution.
Is there a way that I can stop the dependencies from being built if they have not been changed?
Is the key this phrase? "However, all dll's and the exe are being built if I want to run the solution"
Visual Studio will always try to build everything when you run a single project, even if that project doesn't depend on everything. This choice can be changed, however. Go to Tools|Options|Projects and Solutions|Build and Run and check the box "Only build startup projects and dependencies on Run". Then when you hit F5, VS will only build your startup project and the DLLs it depends on.
I just "fixed" the same problem with my VS project. Visual Studio did always a rebuild, even if didn't change anything. My Solution: One cs-File had a future timestamp (Year 2015, this was my fault). I opened the file, saved it and my problem was solved!!!
I am not sure if there is a way to avoid dependencies being built, but you can find some info here, like setting Copy Local to false and putting the DLLs in a common directory:
Optimizing Visual Studio solution build - where to put DLL files?
We had a similar problem at work. In post-build events we were manually embedding manifests into the outputs in the bin directory. Visual Studio was copying project references from the obj dir (which weren't modified). The timestamp difference triggered unnecessary rebuilds.
If your post-build events modify project outputs, then either modify the outputs in both the bin and obj dirs, or copy the modified outputs in the bin dir on top of those in the obj dir.
You can uncheck the Build option for specific projects in your solution configuration (via the Configuration Manager).
You can also create your own solution configurations to build specific project configurations...
We actually had this problem on my current project, in our scenario even running unit tests (without any code changes) was causing a recompile. Check your build configuration's "Platform".
If you are using "Any CPU" then for some reason it rebuilds all projects regardless of changes. Try using processor specific builds, i.e. x86 or x64 (use the platform which is specific to the machine architecture of your machine). Worked for us for x86 builds.
Now, after I say this, some propeller-head is going to come along and contradict me, but there is no way to do what you want to do from Visual Studio. There is a way of doing it outside of VS, but first, I have a question:
Why on earth would you want to do this? Maybe you're trying to save CPU cycles, or save compile time, but if you do what you're suggesting you will suddenly find yourself in a marvelous position to shoot yourself in the foot. If you have a library 1 that depends upon library 2, and only library 2 changes, you may think you're OK to only build the changed library, but one of these days you are going to make a change to library 2 that will break library 1, and without a rebuild of library 1 you will not catch it in the compilation. So in my humble opinion, DON'T DO IT.
The reason this won't work in VS2005 and 2008 is because VS uses MSBuild. MSBuild runs against project files, and it will examine the project's references and build all referenced projects first, if their source has changed, before building the target project. You can test this yourself by running MSBuild from the command line against one project that has not changed but with a referenced project that has changed. Example:
msbuild ClassLibrary4.csproj
where ClassLibrary4 has not changed, but it references ClassLibrary5, which has changed. MSBuild will build lib 5 first, before it builds 4, even though you didn't mention 5.
The only way to get around all these failsafes is to use the compiler directly instead of going through MSBuild. Ugly, ugly, but that's it. You will basically be reduced to re-implementing MSBuild in some form in order to do what you want to do.
It isn't worth it.
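Caveat: newer MSBuild versions (4.0 and later, if I recall correctly) do expose a property that skips building referenced projects - though for the reasons above I'd use it sparingly:

msbuild ClassLibrary4.csproj /p:BuildProjectReferences=false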
Check out the following site for more detailed information on when a project is built as well as the differences between build and rebuild.
I had this problem too, and noticed these warning messages when building on Windows 7 x64, VS2008 SP1:
cl : Command line warning D9038 : /ZI is not supported on this platform; enabling /Zi instead
cl : Command line warning D9007 : '/Gm' requires '/Zi'; option ignored
I changed my project properties to:
C/C++ -> General -> Debug Information Format = /Zi
C/C++ -> Code Generation -> Enable Minimal Build = No
After rebuilding I switched them both back and dependencies work fine again. But prior to that no amount of cleaning, rebuilding, or completely deleting the output directory would fix it.
I don't think there's a way for you to do it out of the box in VS. You need this add-in:
http://workspacewhiz.com/
It's not free but you can evaluate it before you buy.
Yes, exclude the non-changing bits from the solution. I say this with a caveat, as you can compile in a way where a change in build number for the changed lib can cause the non-built pieces to break. This should not be the case, as long as you do not break the interface, but it is quite common because most devs do not understand interface in the .NET world. It comes from not having to write IDL. :-)
As for X projects in a solution: no, you can't stop them from building, as the system sees a dependency has changed.
BTW, you should look at your project and figure out why your UI project (assume it is UI) references the same library as everything else. A good Dependency Model will show the class(es) that should be broken out as data objects or domain objects (I have made an assumption that the common dependency is some sort of data object or domain object, of course, but that is quite common). If the common dependency is not a domain/data object, then I would rethink my architecture in most cases. In general, you should be able to create a path from UI to data without common dependencies other than non-behavioral objects.
Not sure of an awesome way to handle this, but in the past if I had a project or two that kept getting rebuilt, and assuming I wouldn't be working in them, I would turn the build process off for them.
Right click on the sln, select configuration manager and uncheck the check boxes. Not perfect, but works when Visual Studio isn't behaving.
If you continue to experience this problem, it may be due to a missing or out-of-date calculated dependency (like a header) that is listed in your project but does not exist.
This happens especially commonly after migrating to a new version (for example: from 2012 to 2013), because VS may have recalculated dependencies in the conversion, or you may have migrated to a new location.
A quick check is to double-click every file in the offending project from Solution Explorer. If you discover a file that does not exist, that is your problem.
Failing a simple missing file: You may have a more complicated build date relationship between source and target. You can use a utility to find out what front-end test is triggering the build. To get that information you can enable verbose CPS logging. See: Andrew Arnott - Enable C++ and Javascript project system tracing (http://blogs.msdn.com/b/vsproject/archive/2009/07/21/enable-c-project-system-logging.aspx). I use the DebugView option. Invaluable tool when you need it.
(this is a C# specific question, but a different post was merged as identical)

Finding out-of-date or missing dependencies or output files in a Visual C++ solution (or: Why does VS insist on rebuilding projects without changes?)

I've got a solution containing multiple projects. I'm only changing the code in one of them, but every time I hit Ctrl+Shift+B, Visual Studio rebuilds all of the others.
I want it to build the other projects, so this is good. What's not good is that, normally, it would see that there was nothing to do. I have a wonky dependency somewhere, so this isn't working.
Is there a tool or macro (or switch) that'll explore the dependency tree and tell me which files are missing or out-of-date, so that I can get it to stop?
I know that I can solve this specific case, by (e.g.) touching all of the project files.
Unfortunately, I've often seen this situation when a file is configured to produce an output file (e.g. an IDL file is configured to output a typelibrary, but doesn't contain a 'library' block, so it'll never create a TLB).
This wouldn't be resolved by touching all of the project files, so I'm looking for something more general to add to my personal toolbox that'll easily tell me why a file is being rebuilt, whether it be because it's older than a dependency, or because the project is misconfigured to expect an output file that will never be produced.
In Options / Projects and Solutions / Build and Run, turn up the MSBuild project build output verbosity to Detailed. It should give you an idea of why it is rebuilding all the projects.
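The same information is available from the command line by raising MSBuild's verbosity (solution name hypothetical):

msbuild MySolution.sln /v:detailed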
If I understand you right, you might solve this by touching all your project's files. It may be caused by a source-file having a last-modified-time that's in the future.
Edit:
I know that I can solve this specific case, by (e.g.) touching all of the project files, but I'd like to add something to my personal box of tricks that I can use in the future, in the general case.
I'm confused - what's the 'general case' of this problem?
Not that I've found. If you know that a project is not going to change often, you can tell the Configuration Manager not to build it. (Right-click on the solution and select Configuration Manager.)
As far as I know, Ctrl+Shift+B is bound to Build Solution by default, so that would be why all your projects are being built. I'm not really sure what else you could use, except for right-clicking the project and pressing Build :)
You might want to check in Tools > Options > Projects and Solutions whether your option is set to "Only build startup projects and dependencies" instead of the whole solution.
Or, instead of using Ctrl+Shift+B, you could simply press F6 on the project you want to build :)
You can use shift+F6 to build just the current project.
While not directly answering my question: "is there a tool that'll work this out for me?", I found the specific problem by using SysInternals Process Monitor:
The project was configured with /analyze, which requires Visual Studio Team Edition, but the version on this PC is Visual Studio Professional, which doesn't support it. Unfortunately, there appears to be a bug in Visual Studio, where it thinks that the .pchast file should be created, even though it has no way to do so. I've raised this on Connect.
I think I might write a macro for Visual Studio Professional that, if /analyze is turned on, simply creates an empty .pchast file at the end of the build...
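Something along these lines as a post-build event would probably do it (the macros are standard VC++ ones, but the exact file name is a guess):

type nul > "$(IntDir)$(TargetName).pchast"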
