I'm used to Eclipse for Java projects which automatically builds whenever I save a file. (I can turn this off.)
I then install Infinitest which automatically runs all tests affected by the change saved.
How do I do this in Visual Studio when writing C# software?
Important note:
If you're only concerned about C#/.NET code and only want to run unit tests, then this feature already exists in Visual Studio 2017 (Enterprise edition only). It's called Live Unit Testing; read more about it here: https://learn.microsoft.com/en-us/visualstudio/test/live-unit-testing-intro?view=vs-2017
Live Unit Testing is a technology available in Visual Studio 2017 version 15.3 that executes your unit tests automatically in real time as you make code changes.
My original answer:
(I used to be an SDE at Microsoft working on Visual Studio (2012, 2013, and 2015). I didn't work on the build pipeline myself, but I hope I can provide some insight:)
How can Visual Studio automatically build and test code?
It doesn't, and in my opinion, it shouldn't, assuming by "build and test code" you mean it should perform a standard project build and then run the tests.
Eclipse only builds what is affected by the change. Works like a charm.
Even incremental builds aren't instant, especially if there's significant post-compile activity: complicated linking and optimization (even in debug mode), or external build tasks (embedding resources, executable compression, code signing, etc.).
In Eclipse, specifically, this feature is not perfect. Eclipse is primarily a Java IDE, and in Java projects an incremental build can be very fast: Java compiles quickly anyway, and an incremental build is as simple as swapping an embedded .class file in a .jar. In Visual Studio, for comparison, a .NET assembly also builds quickly, but the output PE file (.exe/.dll) is not as simple to rebuild.
However, in other project types, especially C++, build times are much longer, so this feature is inappropriate for C/C++ developers. In fact, Eclipse's own documentation advises C/C++ users to turn it off:
https://help.eclipse.org/mars/index.jsp?topic=%2Forg.eclipse.cdt.doc.user%2Ftasks%2Fcdt_t_autobuild.htm
By default, the Eclipse workbench is configured to build projects automatically. However, for C/C++ development you should disable this option, otherwise your entire project will be rebuilt whenever, for example, you save a change to your makefile or source files. Click Project > Build Automatically and ensure there is no checkmark beside the Build Automatically menu item.
Other project types don't support this feature either, such as Eclipse's plugin for Go:
https://github.com/GoClipse/goclipse/releases/tag/v0.14.0
Changes in 0.14.0:
[...]
Project builder is no longer invoked when workspace "Build Automatically" setting is enabled and a file is saved. (this was considered a misfeature anyways)
(That parenthetical remark is in GoClipse's changelist and makes the plugin authors' opinion of automatic builds quite clear.)
I then install Infinitest which automatically runs all tests affected by the change saved.
Visual Studio can run your tests automatically after a build (but you still need to trigger the build yourself). This is a built-in feature; see here:
https://learn.microsoft.com/en-us/visualstudio/test/run-unit-tests-with-test-explorer?view=vs-2017
To run your unit tests after each local build, choose Test on the standard menu, and then choose Run Tests After Build on the Test Explorer toolbar.
As for my reasons why Visual Studio does not support Build-on-Save:
Only the most trivial C#/VB and TypeScript projects build in under one second. Other project types, like C, C++, and SQL Database, take anywhere from a few seconds for a warm rebuild of a simple project to literally hours for a large-scale C++ project with lots of imported headers on a single-core CPU with little RAM and a 5,400rpm IDE hard drive.
Many builds are very IO-intensive (especially C/C++ projects with lots of headers, *cough*like <windows.h>*cough*) rather than CPU-bound. Disk IO delays are a major cause of lockups and slowdowns in other applications running on the computer, because a paging operation might be delayed or because those applications perform disk IO on the GUI thread. With this feature enabled on a disk IO-heavy build, your computer would jitter every time you press Ctrl+S or whenever autosave runs.
Not every project type supports incremental builds, or can support a fast incremental build. Java is the exception to this rule because Java was designed so that each input .java source file maps 1-to-1 to an output .class file, which makes incremental builds very fast: only the actually modified files need to be rebuilt. Other projects, like C# and C++, don't have this luxury: if you make even an inconsequential 1-character edit to a C preprocessor macro or a C++ template, you'll need to recompile everything else that used it - and then the linker and the optimizer (if inlining code) both have to re-run - not a quick task.
A build can involve deleting files on disk (such as cleaning your build output folder) or changing your global system state (such as writing to a non-project build log) - in my opinion if a program ever deletes anything under a directory that I personally own (e.g. C:\git\ or C:\Users\me\Documents\Visual Studio Projects) it had damn well better ask for direct permission from me to do so every time - especially if I want to do something with the last build output while I'm working on something. I don't want to have to copy the build output to a safe directory first. This is also why the "Clean Project" command is separate and not implied by "Build Project".
Users often press Ctrl+S habitually every few seconds (I'm one of those people) - I press Ctrl+S even when I've written incomplete code in my editor: things with syntax errors or perhaps even destructive code - I don't want that code built at all because it isn't ready to be built! Even if I have no errors in my code there's no way for the IDE to infer my intent.
Building a project is one way to get a list of errors with your codebase, but that hasn't been necessary for decades: IDEs have long had design-time errors and warnings without needing the compiler to run a build (thanks to things like Language Servers) - in fact running a build will just give me double error messages in the IDE's error window because I will already have error messages from the design-time error list.
Visual Studio, at least (I can't speak for Eclipse), enters a special kind of read-only mode during a build: you can't save further changes to disk while a build is in progress, you can't change project or IDE settings, and so on. This is because the build is a long-running process that depends on the project source being in a fixed, known state - the compiler can't do its job if the source files are being modified while it's reading them! So if the IDE were always building (even if just for a few seconds) after each save, users wouldn't like the IDE blocking them from certain editing tasks until the build is done (remember, IDEs do more than just show editors: some specialized tool window might need to write to a project file just to open).
Finally, building is not free of side effects (in fact, that's the whole point!) - there is always a risk something could go wrong in the build process and break something else on your system. I'm not saying building is risky, but if you have a custom build script that does something risky (e.g. it runs a TRUNCATE TABLE CriticalSystemParameters) and the build breaks (because they always do), it might leave your system in a bad state.
Also, there's the (slightly philosophical) problem of: "What happens if you save incomplete changes to the build script of your project?".
Now I admit that some project types do build very, very quickly, like TypeScript, Java, and C#, and others have source files that don't need compiling and linking at all and just run validation tools (like PHP or JavaScript) - and Build-on-Save might be useful for those people - but for the limited number of people whose experience it improves, it arguably worsens the experience for everyone else.
And if you really want build-on-save, it's trivial enough to write as an extension for Visual Studio (hook the "File Save" command and then invoke the Project Build command in your handler) - or get into the habit of pressing Ctrl+B after Ctrl+S :)
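For the curious, here is a minimal sketch of such an extension (assuming a VSIX package project with references to the EnvDTE/EnvDTE80 interop assemblies; the DTE calls are real, but the class itself is hypothetical and untested):

using EnvDTE;
using EnvDTE80;

public sealed class BuildOnSave
{
    private readonly DTE2 _dte;
    private readonly DocumentEvents _documentEvents; // field keeps the COM event source alive

    public BuildOnSave(DTE2 dte)
    {
        _dte = dte;
        _documentEvents = dte.Events.DocumentEvents;
        _documentEvents.DocumentSaved += OnDocumentSaved;
    }

    private void OnDocumentSaved(Document document)
    {
        // Effectively presses Ctrl+B after every Ctrl+S - with all the caveats above.
        _dte.ExecuteCommand("Build.BuildSolution");
    }
}

You would construct this from your package's initialization code, passing in the DTE service it obtains.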
VS 2017 Enterprise Edition supports a Live Unit Testing feature. For older versions, or lower editions, some third party providers such as Mighty Moose or NCrunch are available (other third party solutions almost certainly exist also)
Related
This happens sometimes when I make small changes to the source code, like removing a line of code or changing some value.
case A
change player.y = 100; to player.y = 250;
compile code with CTRL+F5
player still appears at y: 100
change player.symbol = "O"; to player.symbol = "P";
compile code with CTRL+F5
player now appears at the correct location, y: 250
So far I've tried
Clean Solution - this works, but I'd rather not have to rely on this whenever I make a change
Build is checked for the project under Build > Configuration Manager...
CTRL-SHIFT-B still compiles the wrong code.
running VS as administrator - still compiles the wrong code.
I've looked at the executable's creation date. It doesn't update when I compile.
deleting the executable before compiling re-creates the same executable without the changes.
I had the same issue with another Visual Studio installation on the same computer.
The project is located on a local hard drive.
edit:
I found this link. The problem seems to be very similar, although when testing just now it extends to more than just editing float values. I'm using Visual Studio Community version 15.7.1, but had this problem prior to patching as well.
This doesn't seem to occur (so far) while building for debug.
It may seem like a pain, but after making any significant changes it is usually a good idea to clean the solution and rebuild the project. You may think that recompiling a single translation unit will work, and in some cases that may be true, but if other object files rely on the translation unit you just updated, they need to be recompiled as well; otherwise Visual Studio will use the older object files that were previously built.
As for CTRL+F5: whether you are in debug or release mode, it just tells Visual Studio that you want to run without debugging; it has nothing to do with compiling and building the project or solution. Are you sure you are not confusing CTRL+F5 with CTRL+F7, as user SoronelHateir has mentioned in the comments? After you make any changes to source code, it is good to first recompile that individual file to make sure there are no compilation errors, and then to rebuild the current project within the solution.
On the other hand, if your solution has multiple projects - for example, one project is a statically linked library and the other is your main executable project - and the changes you made are in the library, then depending on how the main project depends on the library project, you will more than likely have to rebuild at least the library project, and in some cases the main startup project as well. Most of the time, just rebuilding the statically linked library should suffice; it is rare that the main program linking the library has to be rebuilt to see the new changes, but there are some rare cases where both have to be rebuilt.
I'll say it again: yes, it may seem like a pain to do a clean and a complete rebuild, but this is the safest path to ensuring that all of the object files are up to date. What you are seeing in your own solution is a side effect of running the program with updated code while the output matches the older object files: the IDE is using stale object files that already exist because you did not rebuild the solution. Cleaning the solution clears out all of the old object files, as well as any intermediate files used to compile the source code into the appropriate object files.
There are settings within the IDE that others have mentioned, such as those in Bo Persson's answer, that change how Visual Studio recompiles and rebuilds a project or solution.
In Visual Studio, after making changes, I always click (in the menu bar):
Build -- Rebuild Solution
Note that this is different from CTRL+F5 and CTRL+SHIFT+B.
It will always pick up all the changes and recompile your program.
I tried everything mentioned in the other answers and it didn't work for me. However, after a lot of frustration, I realized I was still running the debugger. Go to the Debug menu and press Stop Debugging. Finally, make sure Release is selected in the configuration dropdown on the toolbar (mine was set to Debug).
I'm working on a large, inherited C++ (really, mostly C) project developed and maintained under Visual Studio 2008. Technically, in Visual Studio terms, it is a "solution" consisting of eight "projects", and therein appears to be the rub.
As I'm sure most of you know, Visual Studio grays out the code that it believes to be #ifdef'd out. What I'm finding, though, is that it doesn't seem to do this correctly for the different projects. To keep matters simple, let's just call them Proj1, Proj2, ... Proj8. When I'm working on the Win32 Debug configuration of Proj5, I'd expect that the macros defined in the C/C++ Preprocessor properties configuration of Proj5 would determine what is grayed (or at least that there would be some straightforward way to make it so). Instead, I seem to be seeing views based on properties of Proj1. That is, if Proj1 defines some preprocessor macros that eliminate part of the code, I'm seeing that part grayed even when I'm working on Proj5. And the macros for Proj5 have no effect at all on what I see.
Yes, I did a complete clean and build (several, actually, and I even saved everything off to SVN and started in a new top-level folder), so I'm pretty sure this is not caused by vestigial files from an old build. And I'm pretty sure that in other respects Visual Studio "understands" the context correctly, because (a) the Build menu contains options related to Proj5, not Proj1; (b) at the bottom of the Project menu is "Proj5 properties..." not "Proj1 properties..."; and (c) there is no question that the #ifdefs are working in the program that is built: there are major feature differences, and they are as I'd expect them to be.
ADDED 27 Sept 2010 I still don't have an answer, so let me try this a different way: Assuming I've already run successful builds (which I have) is there anything other than the preprocessor properties and configuration of the currently selected project (and, as noted below, those of individual files, but that is moot in this case) that should influence what code is grayed?
Please see if these preprocessor directives might be set for the individual files. You can do this by right-clicking on a source file and selecting "Properties" from the context menu. It's somewhat counterintuitive to think that the directives can be set for individual files, not just for projects.
If that doesn't help, your best bet might be to use a text editor to look for the problem preprocessor definitions in each of the project files to try to get a better idea of what might be happening.
VS2010 is supposed to help with this. In VS2008, the no-compile browse cache is done after preprocessing, so it can accommodate only one set of macro definitions.
I don't believe that "Build Clean" or "Rebuild" will delete the .ncb files, since they are not part of the build process at all. I do know that deleting these files by hand fixes all kinds of weird behavior, but I'm afraid that in your case it's not going to be a lasting solution (the .ncb files will still get filled in based on a single configuration.)
Make the project the Startup Project and Remove and Re-add the offending Preprocessor Definition
Details:
I found a process that fixes my MFC projects when this happens (at least temporarily). First, I set the project I am working on as the startup project: right-click the project in Solution Explorer and select Set as Startup Project. Then I go into the preprocessor definitions for the project, delete the one that is not triggering, and click OK. A few seconds later a little progress bar appears in the lower right. Once it finishes, go back into the preprocessor definitions for the project, add the definition back in, and click OK. After the little progress bar finishes again, you should see the correct code active again.
By default, of course, Visual Studio creates separate bin folders for Debug and Release builds. We are having some minor issues dealing with those from the perspective of external dependencies, where sometimes we want release binaries and sometimes debug. It would make life slightly easier to just have a single bin folder on all projects and make that the target for both Debug and Release. We could then point our external scripts, etc. at a single location.
A co-worker questioned why we couldn't just do that--change the VS project settings to go to the same bin folder? I confess I couldn't really think of a good reason to keep them, other than easily being able to see on my local filesystem which are Debug and which are Release. But so what; what does that gain?
My question(s):
How do you leverage having distinct Debug and Release folders? What processes does this enable in your development?
What bad thing could happen if you fail to retain this distinction?
Inversely, if you have gone the "single folder" route, how has this helped you?
I am NOT asking why have separate Debug and Release builds. I understand the difference, and the place of each. My question concerns placing them in separate folders.
Dave, if you compile Debug and Release to a single folder, you may encounter situations where some DLLs are not recompiled after switching from Release to Debug (and vice versa) because the DLL files are newer than the source files. Yes, "Rebuild" helps, but if you forget, you can lose a few extra hours to debugging.
The way I see it, this is simply a convenience on the developer's machine allowing them to compile and run both Debug and Release builds simultaneously.
If you have scripts or tools running inside Visual Studio, the IDE allows you to use the ConfigurationName and other macros to obtain paths which are configuration-independent.
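For example, a post-build event might copy the output to a location derived from macros rather than a hard-coded folder (a sketch, not from the original answer; $(TargetPath), $(SolutionDir), and $(ConfigurationName) are standard Visual Studio build macros):

xcopy /Y /I "$(TargetPath)" "$(SolutionDir)artifacts\$(ConfigurationName)\"

The command itself stays identical across configurations; only the macro expansion changes.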
If you are running scripts and tools externally from the command-line (i.e. you are structuring some kind of release or deployment process around it), it is better to do this on a build server, where the distinction between Debug and Release goes away.
For example, when you invoke msbuild from the command-line (on the build server) you can specify the Configuration property for Debug or Release, and the OutputPath property to build to one location only (regardless of the Configuration).
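For instance (solution name and output path here are hypothetical):

msbuild MySolution.sln /p:Configuration=Release /p:OutputPath=bin\Common

Note that OutputPath is resolved relative to each project file, so on a build server an absolute path may be more practical.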
One reason I use separate folders is that it guarantees I only generate installers that use Release-build code. I use WiX, which allows me to specify the exact path to the files I want to include in the installer, so I end up specifying the path in the Release folder. (Of course, you can do the same using normal VS installers, so that isn't really the point.) If you forget to switch your project to Release before building, the installer doesn't build unless you have old code in the Release folder, in which case you end up with an old installer - so that's a bit of a pitfall. The way I get around it is to use a post-build event on the WiX installer project that clears out the Release folder after the WiX installer builds.
In a previous company we got round this problem by changing the names of the debug executable and dlls by appending a "D". So
MainProgram.exe & library.dll
become
MainProgramD.exe & libraryD.dll
This meant that they could co-exist in the same output folder and all scripts, paths etc. could just reference that. The intermediate files still need to go to separate folders as the names of these can't be changed.
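For illustration only (a sketch, not from the original answer): in a modern MSBuild-based C++ project (.vcxproj, VS2010 or later), the "D" suffix can be expressed as a conditional property, so no script has to rename anything:

<PropertyGroup Condition="'$(Configuration)' == 'Debug'">
  <TargetName>$(ProjectName)D</TargetName>
</PropertyGroup>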
Obviously you need to change all your references to point to the modified name for debug - which you will forget to do at some point.
I usually compile in Debug mode, but sometimes need to compile in Release mode. Unfortunately, they don't behave exactly the same in certain error situations. By having separate folders, I don't need to recompile everything just to change modes (and a full recompile of our stuff in Release mode will take a while).
I have some experience from a somewhat bigger project. If several solutions use file references to other solutions, you have to point the references to ONE directory, and obviously it has to be the "release" one for the continuous/nightly build. Now you can imagine what happens when a developer wants to work with debug versions: all the references point to release builds. If everything pointed to the same directory, switching to debug would only be a matter of recompiling all the related stuff in debug mode, and the file references would automatically point to the debug versions from then on.
On the other hand, I don't see why a developer would ever want to work with release versions (and switch back and forth) - release mode is only useful for full/nightly builds, so the solutions in VS can stay in debug mode by default, and the build script (anyway) always does a clean release build.
Occasionally one may run into a particularly nasty uninitialized-memory problem that only occurs with a release build. If you are unable to maintain separate names for your debug vs. release binaries (as ChrisF suggests), it's really easy to lose track of which binary you're currently using.
Additionally, you may find yourself tweaking the compiler settings (i.e. optimization level, release-with-debug symbols for easy profiling, etc.) and it's much easier to keep these in order with separate folders.
It's all a matter of personal preference though - which is why Visual Studio makes it easy to change the options.
IDEs like Visual Studio do what works best for the crowd: they create the default project structure and binary folders. You could map the binaries to a single folder, but then you would need to educate the other developers that Release and Debug files are stored in the same folder - and they would ask you why you do it that way.
In VC++, different libraries are generated for each configuration, and you need to link the appropriate versions; otherwise you will get linker errors.
Being consistent in your assemblies is a good thing. You don't want to deal with issues around conditional compilation/etc. where your release and debug dlls are incompatible, but you're trying to run them against each other.
What everyone else said about the technical aspects is important. Another aspect is that you may run into race conditions if one build relies on the single-output-location build, but there's no synchronization between the two builds. If the first build can be re-run (especially in a different mode) after the second build starts, you won't really know whether you're using a debug or release build.
And don't forget the human aspect: it's far easier to know what you're working with (and fix broken builds) if the two builds output to different locations.
I have a C# project which includes one exe and 11 library files. The exe references all the libraries, and lib1 may reference lib2, lib3, lib4, etc.
If I make a change to a class in lib1 and build the solution, I would assume that only lib1 and the exe need to be rebuilt. However, all the DLLs and the exe are built whenever I run the solution.
Is there a way that I can stop the dependencies from being built if they have not been changed?
Is the key this phrase? "However, all the DLLs and the exe are built whenever I run the solution"
Visual Studio will always try to build everything when you run a single project, even if that project doesn't depend on everything. This choice can be changed, however. Go to Tools|Options|Projects and Solutions|Build and Run and check the box "Only build startup projects and dependencies on Run". Then when you hit F5, VS will only build your startup project and the DLLs it depends on.
I just "fixed" the same problem with my VS project. Visual Studio did always a rebuild, even if didn't change anything. My Solution: One cs-File had a future timestamp (Year 2015, this was my fault). I opened the file, saved it and my problem was solved!!!
I am not sure if there is a way to avoid building the dependencies. You can find some info here, such as setting CopyLocal to false and putting the DLLs in a common directory:
Optimizing Visual Studio solution build - where to put DLL files?
We had a similar problem at work. In post-build events we were manually embedding manifests into the outputs in the bin directory. Visual Studio was copying project references from the obj dir (which weren't modified). The timestamp difference triggered unnecessary rebuilds.
If your post-build events modify project outputs then either modify the outputs in the bin and obj dir OR copy the modified outputs in the bin dir on top of those in the obj dir.
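For example, a post-build step along these lines keeps the two copies' timestamps in sync (a sketch assuming the default obj layout; $(TargetPath), $(ProjectDir), and $(ConfigurationName) are standard macros):

xcopy /Y "$(TargetPath)" "$(ProjectDir)obj\$(ConfigurationName)\"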
You can uncheck the build option for specific projects in your solution configuration (via Build > Configuration Manager).
You can create your own solution configurations to build specific project configurations...
We actually had this problem on my current project, in our scenario even running unit tests (without any code changes) was causing a recompile. Check your build configuration's "Platform".
If you are using "Any CPU" then for some reason it rebuilds all projects regardless of changes. Try using processor specific builds, i.e. x86 or x64 (use the platform which is specific to the machine architecture of your machine). Worked for us for x86 builds.
Now, after I say this, some propeller-head is going to come along and contradict me, but there is no way to do what you want to do from Visual Studio. There is a way of doing it outside of VS, but first, I have a question:
Why on earth would you want to do this? Maybe you're trying to save CPU cycles or compile time, but if you do what you're suggesting you will suddenly find yourself in a marvelous position to shoot yourself in the foot. If you have a library 1 that depends upon library 2, and only library 2 changes, you may think you're OK to build only the changed library, but one of these days you are going to make a change to library 2 that breaks library 1, and without a build of library 1 you will not catch it in the compilation. So in my humble opinion, DON'T DO IT.
The reason this won't work in VS2005 and 2008 is because VS uses MSBuild. MSBuild runs against project files, and it will examine the project's references and build all referenced projects first, if their source has changed, before building the target project. You can test this yourself by running MSBuild from the command line against one project that has not changed but with a referenced project that has changed. Example:
msbuild ClassLibrary4.csproj
where ClassLibrary4 has not changed, but it references ClassLibrary5, which has changed. MSBuild will build lib 5 first, before it builds 4, even though you didn't mention 5.
The only way to get around all these failsafes is to use the compiler directly instead of going through MSBuild. Ugly, ugly, but that's it. You will basically be reduced to re-implementing MSBuild in some form in order to do what you want to do.
It isn't worth it.
Check out the following site for more detailed information on when a project is built as well as the differences between build and rebuild.
I had this problem too, and noticed these warning messages when building on Windows 7 x64, VS2008 SP1:
cl : Command line warning D9038 : /ZI is not supported on this platform; enabling /Zi instead
cl : Command line warning D9007 : '/Gm' requires '/Zi'; option ignored
I changed my project properties to:
C/C++ -> General -> Debug Information Format = /Zi
C/C++ -> Code Generation -> Enable Minimal Build = No
After rebuilding I switched them both back and dependencies work fine again. But prior to that no amount of cleaning, rebuilding, or completely deleting the output directory would fix it.
I don't think there's a way for you to do it out of the box in VS. You need this add-in:
http://workspacewhiz.com/
It's not free but you can evaluate it before you buy.
Yes - exclude the non-changing bits from the solution. I say this with a caveat, as you can compile in a way where a change in build number for the changed lib causes the unbuilt pieces to break. This should not be the case as long as you do not break the interface, but it is quite common because most devs do not understand interfaces in the .NET world. It comes from not having to write IDL. :-)
As for X projects in a solution: no, you can't stop them from building when the system sees that a dependency has changed.
BTW, you should look at your project and figure out why your UI project (assume it is UI) references the same library as everything else. A good Dependency Model will show the class(es) that should be broken out as data objects or domain objects (I have made an assumption that the common dependency is some sort of data object or domain object, of course, but that is quite common). If the common dependency is not a domain/data object, then I would rethink my architecture in most cases. In general, you should be able to create a path from UI to data without common dependencies other than non-behavioral objects.
Not sure of an awesome way to handle this, but in the past if I had a project or two that kept getting rebuilt, and assuming I wouldn't be working in them, I would turn the build process off for them.
Right click on the sln, select configuration manager and uncheck the check boxes. Not perfect, but works when Visual Studio isn't behaving.
If you continue to experience this problem, it may be due to a missing or out of date calculated dependency (like a header) that is listed in your project, but does not exist.
This happens especially often after migrating to a new version (for example, from 2012 to 2013), because VS may have recalculated dependencies during the conversion, or you are migrating to a new location.
A quick check is to double-click every file in the offending project from Solution Explorer. If you discover a file that does not exist, that is your problem.
Failing a simple missing file: you may have a more complicated build-date relationship between source and target. You can use a utility to find out which front-end test is triggering the build; to get that information, enable verbose CPS logging. See: Andrew Arnott - Enable C++ and Javascript project system tracing (http://blogs.msdn.com/b/vsproject/archive/2009/07/21/enable-c-project-system-logging.aspx). I use the DebugView option. It is an invaluable tool when you need it.
(this is a C# specific question, but a different post was merged as identical)
I know the ideal way to build projects is without requiring IDE-based project files, since they theoretically cause all sorts of trouble with automation and whatnot. But I've yet to work on a project that compiles on Windows and doesn't depend on the Visual Studio project (OK, obviously some open source stuff gets done with Cygwin, but I'm being general here).
On the other hand, if we just use VS to run a makefile, we lose all the benefits of the compile options window, and it becomes a pain to maintain the external makefile.
So how do people that use VS actually handle external makefiles? I have yet to find a painless system to do this...
Or, in reality, do most people not do this, even though it's preached as good practice?
Take a look at MSBuild!
MSBuild can work with the sln/csproj files from VS, so for simple projects you can just call them directly.
If you need more control, wrap the projects in your own build process, add your own tasks, etc. - it is very extensible!
(I wanted to add a sample but this editor totally messed up the XML... sorry)
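To give a flavor of it anyway (solution name hypothetical), the simplest invocation is just:

msbuild MySolution.sln /t:Rebuild /p:Configuration=Release

Custom tasks and targets then go into the project files themselves or into a shared .targets file that the projects import.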
Ideally perhaps, in practice no.
Makefiles would be my preference as the build master; however, the developers spend all their time inside the Visual Studio IDE, and when they make a change, it's to the vcproj file, not the makefile. So if I do the global builds with makefiles, they too easily get out of sync with the project/solution files in play by 8 or 10 others.
The only way I can stay in step with the whole team is to run devenv.exe on the solution files directly in my build process scripts.
There are very few makefiles in my builds; where they exist, they are in the pre-build or custom build sections or in a separate utility project.
One possibility is to use CMake - you describe with a script how your project is to be built, and CMake generates the Visual Studio solution/project files for you.
And if you need to build your project from the command line, or in a continuous integration tool, you use CMake to generate a Makefile for NMake.
And if your project is a cross-platform one, you can run CMake to generate the makefiles for the toolchain of your choice.
A simple CMake script looks like this:
cmake_minimum_required(VERSION 2.6)
project(hello)
add_executable(hello hello.cpp)
Compare these few lines with a makefile, or with the way you set up a simple project in your favorite IDE.
In a nutshell, CMake doesn't just make your project cross-platform, it also makes it cross-IDE. If you'd like to test your project with Eclipse, KDevelop, or Code::Blocks, just run CMake to generate the corresponding project files.
Well, in practice it is not always so easy, but the CMake idea just rocks.
For example, if you consider using CMake with Visual Studio, some tweaking is required to obtain the familiar VS project feel; the main obstacle is organizing your header and source files, but it is possible - check the CMake wiki (and by writing a short script you might even simplify this task).
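For reference, assuming the script above is saved as CMakeLists.txt, generating the project files from a separate build directory looks like this (the generator names are CMake's own; "Visual Studio 9 2008" matches the VS2008 era discussed here):

cmake -G "Visual Studio 9 2008" path\to\source
cmake -G "NMake Makefiles" path\to\source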
We use a NAnt script, which at the compile step calls MSBuild. Using NAnt allows us to perform both pre- and post-build tasks, such as setting version numbers to match source control revision numbers, collating code coverage information, assembling and zipping deployment sources. But still, at the heart of it, it's MSBuild that's actually doing the compiling.
You can integrate a NAnt build as a custom tool into the IDE, so that it can be used both on a build or continuous integration server and by the developers in the same way.
Personally, I use Rake to call msbuild on my solution or project. For regular development I use the IDE and all the benefits that provides.
Rake is set up so that I can just compile, compile and run tests or compile run tests and create deployable artifacts.
Once you have a build script it is really easy to start doing things like setting up continuous integration and using it to automate deployment.
You can also use most build tools from within the IDE if you follow these steps to set it up.
We use devenv.exe (the same exe that launches the IDE) to build our projects from the build scripts (or the command line). When you specify the /Build option, the IDE is not displayed and everything is written back to the console (or to the log file if you specify the /Out option).
See http://msdn.microsoft.com/en-us/library/xee0c8y7(VS.80).aspx for more information
Example:
devenv.exe [solution-file-name] /Rebuild "Release|Win32" /Project [project-name] /Out solution.log
where "Release|Win32" is the configuration as defined in the solution and solution.log is the file that gets the compiler output (which is quite handy when you need to figure out what went wrong in the compile)
We have a program that parses the vcproj files and generates makefile fragments from that. (These include the list of files and the #defines, and there is some limited support for custom build steps.) These fragments are then included by a master makefile which does the usual GNU make stuff.
(This is all for one of the systems we target; its tools have no native support for Visual Studio.)
This didn't require a huge amount of work. A day to set it up, then maybe a day or two in total to beat out some problems that weren't obvious immediately. And it works fairly well: the compiler settings are controlled by the master makefile (no more fiddling with those tiny text boxes), and yet anybody can add new files and defines to the build in the usual way.
That said, the combinatorial problems inherent in Visual Studio's treatment of build configurations remain.
Why would you want a project that "compiles on Windows but doesn't depend on the Visual Studio project"? You already have a solution file - you can just use it with a console build.
I'd advise you to use msbuild in conjunction with a makefile, NAnt, or even a simple batch file if your build system is not as convoluted as ours...
Is there something I'm missing?
How about this code?
public TRunner CleanOutput()
{
    ScriptExecutionEnvironment.LogTaskStarted("Cleaning solution outputs");

    solution.ForEachProject(
        delegate(VSProjectInfo projectInfo)
        {
            // Delete each project's output (bin) directory...
            string projectOutputPath = GetProjectOutputPath(projectInfo.ProjectName);
            if (projectOutputPath == null)
                return;
            projectOutputPath = Path.Combine(projectInfo.ProjectDirectoryPath, projectOutputPath);
            DeleteDirectory(projectOutputPath, false);

            // ...and its intermediate (obj) directory for the current configuration.
            string projectObjPath = String.Format(
                CultureInfo.InvariantCulture,
                @"{0}\obj\{1}",
                projectInfo.ProjectName,
                buildConfiguration);
            projectObjPath = Path.Combine(productRootDir, projectObjPath);
            DeleteDirectory(projectObjPath, false);
        });

    ScriptExecutionEnvironment.LogTaskFinished();
    return ReturnThisTRunner();
}

public TRunner CompileSolution()
{
    ScriptExecutionEnvironment.LogTaskStarted("Compiling the solution");

    // Invoke msbuild.exe on the solution file for the chosen configuration.
    ProgramRunner
        .AddArgument(MakePathFromRootDir(productId) + ".sln")
        .AddArgument("/p:Configuration={0}", buildConfiguration)
        .AddArgument("/p:Platform=Any CPU")
        .AddArgument("/consoleloggerparameters:NoSummary")
        .Run(@"C:\Windows\Microsoft.NET\Framework\v3.5\msbuild.exe");

    ScriptExecutionEnvironment.LogTaskFinished();
    return ReturnThisTRunner();
}
You can find the rest of it here: http://code.google.com/p/projectpilot/source/browse/trunk/Flubu/Builds/BuildRunner.cs
I haven't tried it myself yet, but Microsoft has a Make implementation called NMake, which seems to have Visual Studio integration:
NMake
Creating NMake Projects
Visual Studio, since VS2005, uses "msbuild" to define and run builds. When you fiddle with project settings in the Visual Studio designer - say you turn XML doc generation on or off, add a new dependency, or add a new project or assembly reference - Visual Studio updates the .csproj (or .vbproj, etc.) file, which is an msbuild file.
Like Java's Ant or NAnt before it, msbuild uses an XML schema to describe the project and build. It runs from VS when you do an "F6" build, and you can also run it from the command line, without ever opening VS or running devenv.exe.
So, use the VS tool for development and command-line msbuild for automated builds - same build, and same project structure.