Pre-build task - deleting the working copy in CruiseControl.NET

I'm currently in the process of setting up a continuous integration environment at work. We are using VisualSVN Server and CruiseControl.NET. Occasionally a build will fail, and a symptom is that there are conflicts in the CruiseControl.NET working copy. I believe this is due to the way I've set up the Visual Studio solutions. Hopefully, the more projects we run in this environment, the better our understanding of how to set them up will become, so I'm not questioning why the conflicts happen at this stage. To fix the builds I delete the working copy and force a new build - this works every time (currently). So my questions are: is deleting the working copy a valid part of a continuous integration build process, and how do I go about it?
I've tried solutions including MSTask and calling delete from the command line, but I'm not having any luck.
Sorry for being so wordy - good job this is a beta :)

Doing a full delete before or after your build is good practice. It means there is no chance of your build environment picking up an out-of-date file; you're building exactly against what is in the repository.
Deleting the working copy is possible, as I have done it with NAnt.
In NAnt I would have a clean script in its own folder, outside the one I want to delete, and would then invoke it from CC.NET.
I assume this should also be possible with a batch file. Take a look at the rmdir command: http://www.computerhope.com/rmdirhlp.htm
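Going the NAnt route, here's a rough sketch of what that clean script might look like (the property name and path are illustrative, not from the original setup):

    <?xml version="1.0"?>
    <!-- clean.build: lives outside the folder it deletes -->
    <project name="clean" default="clean">
      <!-- the working copy path; override it when invoking from CC.NET -->
      <property name="working.dir" value="C:\ccnet\myproject" overwrite="false" />
      <target name="clean">
        <!-- failonerror=false so a missing folder doesn't fail the build -->
        <delete dir="${working.dir}" failonerror="false" />
      </target>
    </project>

CC.NET can then run it via its <nant> task as the first task in the project block.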
#pauldoo
I prefer my CI server to do a full delete, as I don't want any surprises when I go to do a release build, which should always be done from a clean state. But it should be able to handle both; there's no reason why not.

#jamie: There is one reason why you may not be able to do a clean build every time when using a continuous integration server -- build time. On some projects I've worked on, clean builds take 80+ minutes (an embedded project consisting of thousands of C++ files to checkout and then compile against multiple targets). In this case, you have to weigh the benefit of fast feedback against the likelihood that a clean build will catch something that an incremental build won't. In our case, we worked on improving and parallelizing the build process while at the same time allowing incremental builds on our CI machine. We did have a few problems because we weren't doing clean builds, but by doing a clean build nightly or weekly you could remove the risk without losing the fast feedback of your CI machine.

If you check out CC.NET's JIRA, there is a patch checked in that implements CleanCopy for Subversion, which does exactly what you want: just set CleanCopy to true inside your source control block, just like with the TFS one.
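Assuming that patch is in your build of CC.NET, the source control block would look something like this (URLs and paths are placeholders, and element names may vary by version):

    <sourcecontrol type="svn">
      <trunkUrl>http://svnserver/myproject/trunk</trunkUrl>
      <workingDirectory>C:\ccnet\myproject</workingDirectory>
      <!-- delete the working copy and do a fresh checkout on every build -->
      <cleanCopy>true</cleanCopy>
    </sourcecontrol>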

It is very common, and generally a good practice, for any build process to do a 'clean' before doing any significant build. This prevents artifacts from previous builds from tainting the output.
A clean is essentially what you are doing by deleting the working copy.

#Brad Barker
Clean means to just wipe out build products.
Deleting the working copy deletes everything else too (source and project files etc).
In general it's nice if your build machine can operate without doing a full delete, as this replicates what a normal developer does. Any conflicts it finds during update are an early warning of what your developers can expect.
#jamie
For formal releases yes it's better to do a completely clean checkout. So I guess it depends on the purpose of the build.

Related

Incremental builds in VS2010

How can I turn on incremental builds in VS2010? I've noticed that my project rebuilds everything each time I press Build. Is there any option in VS to turn this on? Or should I modify the *.csproj myself?
Projects should build incrementally already (just make sure that you do Build instead of Rebuild). The best way to check whether incremental building works is to run the build from the command line; the second time you build, it should take almost no time.
If things are still getting rebuilt, then perhaps you've modified your projects in some way that's messing up the build order. Looking at the build logs (via the /v verbosity option, e.g. msbuild /v:detailed) can help you pinpoint what's going on.
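For background, MSBuild skips a target when every file in its Outputs list is newer than every file in its Inputs list; that timestamp comparison is what makes Build incremental. A minimal sketch with a hypothetical custom target (the names are illustrative):

    <!-- this target is skipped when the copied file is already up to date -->
    <Target Name="CopyConfig"
            Inputs="app.config"
            Outputs="$(OutputPath)app.config">
      <Copy SourceFiles="app.config"
            DestinationFiles="$(OutputPath)app.config" />
    </Target>

A custom target with no Inputs/Outputs declared runs every time, which is one way a modified project can defeat incremental builds.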

Run command before pulling from SVN in TeamCity

I'm having an issue with TeamCity, which relates to the fact that it runs the source control step before it runs the build steps. My project is a Windows service, so there are complications with this.
TeamCity often decides to delete the entire contents of the project directory, even though I have the clean build option unchecked. Since this is a Windows service, that does not fly: when TeamCity tries to delete the DLLs it errors out, because they're in use:
Error while applying patch: Failed to delete: F:\PathToService\bin\Release\Library.dll
The most frustrating part is that the DLLs aren't even under source control; TeamCity seems to have a mind of its own and decides to delete them anyway.
Is there a way to get around this - to run a build step BEFORE doing the SVN checkout, so that I can stop the Windows service first?
I would try to set up your CI environment so it uninstalls the Windows service once you are done testing it. I am not aware of a TeamCity pre-checkout hook.
The answer was to split up each service into a separate working directory. That prevents TeamCity from deleting the DLLs.

TeamCity Multi-Part Build - How to checkout the code just once

I am trying to create one package with multiple build configurations. The first will check out the code, build it (Solution File configuration), and run NUnit tests. If that succeeds, another will build in release mode. If that succeeds, a final script will package up the output and mark it as an artifact.
The problem I'm having is that I don't know how to tell TeamCity not to create new directories for each step, and as a result, the steps are failing. Is there a setting for this? It seems like the dependencies tab would be an appropriate place to look, but I don't seem to understand the instructions, and my tinkering so far has been fruitless.
I basically skipped most of the TeamCity workflow and instead used a scripting language to handle all of this. (I used Rake and Albacore, which I highly recommend for .NET projects.)
I'd caution you not to use PowerShell with TeamCity. You have to wrap everything in a .bat file, which is fairly excruciating.
The result is that I have one checkout and everything builds from that point. It's drastically cut down the amount of time required for the builds, though perhaps that wouldn't be the case if I had a lot of agents available.

Is it safe to use incremental rebuild for generating a release build in Visual C++?

I sometimes have issues with the incremental rebuild in Visual C++ (currently 2003). Some dependencies do not seem to be checked correctly, and some files aren't built when they should be. I suppose those issues come from the timestamp approach to incremental rebuilds.
I don't consider it a huge issue when building a debug build at my desk; however, for a distributable build it is a problem.
Is it safe to use incremental builds on a build server, or is a full build a requirement?
You need a build you distribute to be recreatable, should users experience problems that need investigating.
I would not rely on an incremental build. Also, I would always delete all source from the build machine, and fetch it from scratch from the source control system before building a release. This way, you know you can repeat the build process again by fetching the same source code.
If you use an incremental build, the build will build differently each time, because only a subset of the system will need to be built. I think it's just good to eliminate as many possible differences between release builds as possible. So, for this reason, incremental builds are out.
It's a good idea to label or somehow mark the versions of each source file in the source control system with the version number of the build. This enables you to keep track of the exact source that went into building the release. With a decent source code control system the labels can be used to track down all the changes that were made to the code between one release and the next. This can be a real help when trying to track down a bug that you know was introduced between the two releases.
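With CruiseControl.NET and Subversion, one way to automate that labelling (a sketch; element names may differ by CC.NET version, and the URLs are placeholders) is to let the server tag on success:

    <sourcecontrol type="svn">
      <trunkUrl>http://svnserver/myproject/trunk</trunkUrl>
      <!-- copy the built revision into a tag whenever the build succeeds -->
      <tagOnSuccess>true</tagOnSuccess>
      <tagBaseUrl>http://svnserver/myproject/tags/ccnet-builds</tagBaseUrl>
    </sourcecontrol>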
Incremental builds can still be useful on a development machine when you're not distributing the build, just for saving time during the code/debug/test/repeat development cycle.
I'd argue that you should avoid the "is it safe" question. It's simply not worth looking into.
The advantage of incremental builds is the cost saving. Small per build, it adds up for all the debug builds a developer typically makes, and then adds up further for all developers on your team.
On the other hand, release builds are rare. The costs of a release build are not in developers waiting on them, they're found much more in the team of testers that need to validate it.
For that reason, I find that incremental builds are without a doubt cost-saving for debug builds, but I refuse to spend any time calculating the small saving I'd get out of incrementally building a release build.
I would always prefer to do a full clean and rebuild for any release for peace of mind if nothing else. Assuming that you're talking about externally released builds and not just builds using the Release configuration, the fact that these builds are relatively infrequent means that any time saving will be minimal in the long run.
Binaries built with incremental linking are going to be bigger and slower, even for release builds, because they necessarily contain scaffolding (padding and jump thunks that let the linker patch the image in place) that isn't needed in a fully optimized build. I'd recommend not using incremental builds for releases for this reason.
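If you want to verify the setting, in a VS2003 .vcproj the relevant fragment looks roughly like this (a sketch, not a complete configuration; LinkIncremental="1" corresponds to /INCREMENTAL:NO):

    <!-- Release configuration: incremental linking explicitly disabled -->
    <Configuration Name="Release|Win32">
      <Tool
          Name="VCLinkerTool"
          LinkIncremental="1" />
    </Configuration>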

What is the best way to setup an integration testing server?

Setting up an integration server, I'm unsure of the best approach regarding using multiple tasks to complete the build. Is it best to set it all up as one big job, or to make small dependent ones?
You definitely want to break up the tasks. Here is a nice example of CruiseControl.NET configuration that has different targets (tasks) for each step. It also uses a common.build file which can be shared among projects with little customization.
http://code.google.com/p/dot-net-reference-app/source/browse/#svn/trunk
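For illustration, the shared-build-file approach usually boils down to NAnt's <include> task; the paths and property names here are hypothetical:

    <?xml version="1.0"?>
    <project name="myproject" default="build">
      <!-- pull in the shared targets (clean, compile, test, ...) -->
      <include buildfile="..\common\common.build" />
      <!-- per-project customization is just a handful of properties -->
      <property name="solution.file" value="MyProject.sln" />
    </project>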
I use TeamCity with a NAnt build script. TeamCity makes it easy to set up the CI server part, and the NAnt build script makes it easy to handle tasks such as report generation.
Here is an article I wrote about using CI with CruiseControl.NET; it has a NAnt build script in the comments that can be reused across projects:
Continuous Integration with CruiseControl
The approach I favour is the following setup (assuming you are in a .NET project); a sketch of how the pieces wire together follows the list:
CruiseControl.NET.
NAnt tasks for each individual step. NAntContrib for alternative CC templates.
NUnit to run unit tests.
NCover to perform code coverage.
FxCop for static analysis reports.
Subversion for source control.
CCTray or similar on all dev boxes to get notification of builds and failures etc.
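As a rough sketch of the wiring (target names are illustrative), the CC.NET project block mostly just hands the steps off to NAnt:

    <project name="MyProject">
      <tasks>
        <nant>
          <buildFile>myproject.build</buildFile>
          <!-- one NAnt target per step: build, test, coverage, analysis -->
          <targetList>
            <target>compile</target>
            <target>unit-test</target>
            <target>coverage</target>
            <target>fxcop</target>
          </targetList>
        </nant>
      </tasks>
    </project>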
On many projects you find that there are different levels of tests and activities which take place when someone does a checkin. Sometimes these can grow to the point where it is a long time after a checkin before a dev can see whether they have broken the build.
What I do in these cases is create three builds (or maybe two), with triggers sketched after the list:
A CI build, triggered by checkin, which does a clean SVN get and build and runs lightweight tests. Ideally you can keep this down to minutes or less.
A more comprehensive build, perhaps hourly (if there are changes), which does the same as the CI build but runs more comprehensive and time-consuming tests.
An overnight build which does everything, and also runs code coverage and static analysis of the assemblies, and runs any deployment steps to build daily MSI packages etc.
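A hedged sketch of how the CI and overnight projects might be triggered in ccnet.config (intervals and times are illustrative):

    <!-- CI project: poll frequently, build only when there are modifications -->
    <triggers>
      <intervalTrigger seconds="60" buildCondition="IfModificationExists" />
    </triggers>

    <!-- overnight project: force a full build at a fixed time regardless of changes -->
    <triggers>
      <scheduleTrigger time="02:00" buildCondition="ForceBuild" />
    </triggers>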
The key thing about any CI system is that it needs to be organic and constantly tweaked. There are some great extensions to CruiseControl.NET which log and chart build timings etc. for the steps, let you do historical analysis, and so allow you to continuously tweak the builds to keep them snappy. Managers find it hard to accept that a build box will probably keep you busy for a fifth of your working time just to stop it grinding to a halt.
We use buildbot, with the build broken down into discrete steps. There is a balance to be found between having build steps be broken down with enough granularity and being a complete unit.
For example at my current position, we build the sub-pieces for each of our platforms (Mac, Linux, Windows) on their respective platforms. We then have a single step (with a few sub steps) that compiles them into the final version that will end up in the final distributions.
If something goes wrong in any of those steps it is pretty easy to diagnose.
My advice is to write the steps out on a whiteboard in as vague terms as you can and then base your steps on that. In my case that would be:
Build Plugin Pieces
Compile for Mac
Compile for PC
Compile for Linux
Make final Plugins
Run Plugin tests
Build intermediate IDE (We have to bootstrap building)
Build final IDE
Run IDE tests
I would definitely break down the jobs. Chances are you'll make changes to the builds, and it'll be easier to track down issues with smaller tasks than by searching through one monolithic build.
You should be able to compose one big job from the smaller pieces anyway.
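In NAnt terms (a sketch; the target names are made up), composing the big job from small ones is just a depends chain:

    <!-- each small task stands alone and can be run or reused individually -->
    <target name="clean" />
    <target name="compile" depends="clean" />
    <target name="test" depends="compile" />
    <target name="package" depends="test" />
    <!-- the 'one big job' is simply the top of the chain -->
    <target name="all" depends="package" />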
G'day,
As you're talking about integration testing, my big (obvious) tip would be to build and configure the test server as closely as possible to the deployment environment.
</thebloodyobvious> (-:
cheers,
Rob
Break your tasks up into discrete goals/operations, then use a higher-level script to tie them all together appropriately.
This makes your build process easier for other people to understand (you're documenting as you go so anyone on your team can pick it up, right?), as well as increasing the potential for reuse. It's likely you won't reuse the high-level scripts (although this could be possible if you have similar projects), but you can definitely reuse (even if it's copy/paste) the discrete operations rather easily.
Consider the example of getting the latest source from your repository. You'll want to group the tasks/operations for retrieving the code with some logging statements and reference the appropriate account information. This is the sort of thing that's very easy to reuse from one project to the next.
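As a hedged illustration (the property names and svn arguments are placeholders, not from the original environment), such a reusable NAnt target might look like:

    <!-- reusable get-latest target: logging around a plain svn update -->
    <target name="get-latest">
      <echo message="Updating working copy at ${working.dir}" />
      <exec program="svn">
        <arg value="update" />
        <arg value="${working.dir}" />
        <arg value="--username" />
        <arg value="${svn.user}" />
      </exec>
      <echo message="Update complete" />
    </target>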
For my team's environment, we use NAnt because it provides a common scripting environment between dev machines (where we write and debug the scripts) and the CI server (where we just execute the same scripts in a clean environment). We use Jenkins to manage our builds, but at their core each project is just calling into the same NAnt scripts, and then we manipulate the results (i.e., archive the build output, flag failing tests, etc.).
