Is it possible to use CruiseControl.Net to set up a build farm? We currently have 4 different build machines building different things at different times, and balancing the load between them manually is a bit of a headache. I would prefer to designate one of them as the master build machine, which would delegate work to the others when they are free.
As far as I can determine, there is no support in CruiseControl.Net for build farms - at least not operating the way you describe. CCNet's interpretation of "farm" seems to assume that projects are assigned manually to a machine and a given project will always be built on the same machine.
If you wanted to dynamically select which machine actually performs the build, you would need to create your own mechanism to select that machine and trigger the build on it. There is likely to be quite a bit of complexity involved. For instance, you would probably need to ensure that the same project does not get built simultaneously on two different machines if a second commit arrives while the previous one is still being processed.
If there is a shared location that all the build machines can access, it may be possible to use the Filesystem source control block or CCNet's ForceBuild mechanism to start the build on the designated machine, but have all the build machines publish their output for a given project to the same final location.
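As a rough illustration of the ForceBuild idea (this is only a sketch, not an official pattern; the server, project, and repository names are made up), a project on the designated "master" machine could watch the repository and use CCNet's ForceBuildPublisher to kick off the real build on another CCNet server:

    <!-- ccnet.config on the designated "master" machine (sketch; names are hypothetical) -->
    <project name="MyApp-Dispatcher">
      <sourcecontrol type="svn">
        <trunkUrl>http://svn.example.com/myapp/trunk</trunkUrl>
      </sourcecontrol>
      <publishers>
        <!-- ForceBuildPublisher: ask the CCNet instance on another machine to build the project -->
        <forcebuild>
          <project>MyApp-Build</project>
          <serverUri>tcp://build-machine-2:21234/CruiseManager.rem</serverUri>
        </forcebuild>
        <xmllogger />
      </publishers>
    </project>

Deciding which target machine is free (and making sure two machines don't build the same commit) would still be up to you, which is the complexity mentioned above.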
See the "Load-Balancing the Build Farm with CruiseControl.NET" blog post for a possible solution.
I have scoured the internet to find out what I can on this, but have come away short. I need to know two things.
Firstly, is there a best practice for how TFS & Team Build should be used in a Development > Test > Production environment? Currently my local VS gets the latest files, I work on them and check them in, and the check-in triggers a build that pushes the published files to a location on the test server which IIS references. That is my test environment. What, then, is the best practice for deploying this to a live environment once testing is complete?
Secondly, following on from the above: my web application is connected to a database, so the test version points to a test database. When the application is tested and put live, that process will also need to switch the data connections over to the live database.
I am pretty much doing all this from scratch and am learning as I go along.
I'd suggest you look at Microsoft Release Management, since it's the tool that can help you do exactly the things you mentioned. It can also be integrated with TFS.
In general, release management is:
the process of managing, planning, scheduling and controlling a software build through different stages and environments, including testing and deploying software releases.
Specifically, the tool Microsoft offers lets you automate the release process from development to production, keeping track of what is done, and how, as each stage is reached.
There's an MSDN article, Automate deployments with Release Management, that gives a good overview:
Basically, for each release path, you can define your own stages, each one made of a workflow (the so-called deployment sequence) containing the activities you want to perform using pre-defined machines from a pool.
It's possible to insert manual interventions/approvals if necessary, and the whole thing can be triggered automatically once your build is done.
Since you are pretty much in control of the actions performed on each machine in each stage (through built-in or custom actions/components), it is also possible to change configuration files, for example to test different scenarios.
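Independent of the release tool, one common way to handle the test-versus-live connection string concern from the question is a web.config transform applied when publishing the Release configuration. A minimal sketch, where the AppDb name and the server values are placeholders:

    <!-- Web.Release.config: rewrites the connection string for the live environment -->
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <connectionStrings>
        <add name="AppDb"
             connectionString="Server=live-sql;Database=AppDb;Integrated Security=True"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </connectionStrings>
    </configuration>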
I'm migrating our build system over to TeamCity and, because we have quite long build times, I'm trying to make good use of parallelism in build configurations.
If two configs can run in parallel they are obviously not dependent on each other. However, there are cases where, if two parallel builds end up serialised (due to a lack of available agents), I would prefer one to run ahead of the other. For example, one is a set of regression tests whose result I'd like to see before a packaging step runs, but if resources are available I'd like them both to run concurrently.
I can't find an explicit way to specify the ordering of logically independent builds. However, I've observed that the build order tends to be lexicographic, although I'm not sure whether that's based on the config name or the ID.
I could experiment but would prefer a more definite answer, if possible.
This used to be available as a plugin, but has since been bundled into the product.
Go to the build queue and click on Configure Build Priorities.
If you add a priority class with a high number, you can then associate it with the build configuration you'd like to be built first.
Managing Build Priorities - TeamCity documentation
Hope this helps
Our busy enterprise server has 550 active build configurations running on 30 agents. We need a way to prevent some builds from running while other builds are running. I already understand artifact dependencies, and they do not solve our problem: the builds do not depend on each other, they just share resources such as server port numbers and database connections. Some build configurations conflict with each other when run simultaneously, and we need to prevent this by queuing a build (rather than running it) while any of a set of other builds is running.
We already use dumb tricks like restricting builds to specific agents, etc. I'm thinking about adding a first build step that checks for a flag in a db table or something, but that would produce lots of failed builds when what we really need is proper build queueing.
Am I missing something? Does this functionality already exist in TeamCity?
It looks like what you need is on its way in the form of TeamCity 8 Shared Resources. You can already get access to it through the EAP.
My huge respect for 550 configurations and 30 agents.
I think you already found the solution:
Have one environment per agent
Assign configurations to specific agents
I can imagine how much work that is. But I think it's much cleaner and will run much faster than restricting several configurations from running at the same time.
Coming "from" TFS and using TeamCity in a customer project....
...is there a way to install multiple agent instances on one computer? I could easily do that with TFS.
The reason is that we have build scripts that are largely linear in execution and take a significant amount of time. Basically, with a modern server (4, 6, 8, 12 cores) there is nothing stopping the machine from efficiently running multiple builds AT THE SAME TIME - except that there seems to be no way to install multiple agent instances on one machine.
Yes, it is possible (I also have 2 agents installed on one machine); see the TeamCity docs:
Several agents can be installed on a single machine. They function as
separate agents and TeamCity works with them as different agents, not
utilizing the fact that they share the same machine.
After installing one agent, you can install additional ones, provided the following conditions are met:
the agents are installed in separate directories
they have distinct work and temp directories
buildAgent.properties is configured with different values for the name and ownPort properties
Also make sure there are no build configurations with an absolute checkout directory specified (alternatively, make sure such build configurations have the "clean checkout" option enabled and cannot be run in parallel).
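For example, the second agent's conf\buildAgent.properties might look like this (the values are only illustrative; what matters is that name and ownPort, plus the work and temp directories, differ from the first agent's):

    # buildAgent.properties of the second agent (illustrative values)
    serverUrl=http://teamcity.example.com:8111
    name=agent-2
    ownPort=9091
    workDir=../work
    tempDir=../temp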
Under Windows, to install additional agents as services, modify \launcher\conf\wrapper.conf so that the following properties have distinct values on the machine:
wrapper.console.title
wrapper.ntservice.name
wrapper.ntservice.displayname
wrapper.ntservice.description
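For instance, the second agent's wrapper.conf could use values like these (the exact names are up to you; they just have to be unique on the machine):

    # launcher\conf\wrapper.conf of the second agent (illustrative values)
    wrapper.console.title=TeamCity Build Agent 2
    wrapper.ntservice.name=TCBuildAgent2
    wrapper.ntservice.displayname=TeamCity Build Agent 2
    wrapper.ntservice.description=Second TeamCity build agent on this machine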
Setting up an integration server, I'm unsure about the best approach to using multiple tasks to complete the build. Is it best to set everything up in one big job, or to make small dependent ones?
You definitely want to break up the tasks. Here is a nice example of CruiseControl.NET configuration that has different targets (tasks) for each step. It also uses a common.build file which can be shared among projects with little customization.
http://code.google.com/p/dot-net-reference-app/source/browse/#svn/trunk
I use TeamCity with an NAnt build script. TeamCity makes it easy to set up the CI server part, and the NAnt build script makes it easy to handle a number of tasks, such as report generation.
Here is an article I wrote about using CI with CruiseControl.NET, it has a nant build script in the comments that can be re-used across projects:
Continuous Integration with CruiseControl
The approach I favour is the following setup (assuming you are in a .NET project):
CruiseControl.NET.
NAnt tasks for each individual step, with NAnt.Contrib for alternative CC templates (a small sketch follows this list).
NUnit to run unit tests.
NCover to perform code coverage.
FxCop for static analysis reports.
Subversion for source control.
CCTray or similar on all dev boxes to get notification of builds, failures, etc.
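To give a flavour of the NAnt part (a bare sketch only; the solution path, test assembly, and target names are invented, and the msbuild task comes from NAnt.Contrib), each step maps to its own target:

    <!-- myapp.build (sketch) -->
    <project name="MyApp" default="ci">
      <target name="compile">
        <!-- msbuild task is provided by NAnt.Contrib -->
        <msbuild project="src/MyApp.sln" />
      </target>

      <target name="test" depends="compile">
        <nunit2>
          <formatter type="Xml" usefile="true" extension=".xml" outputdir="reports" />
          <test assemblyname="src/MyApp.Tests/bin/Release/MyApp.Tests.dll" />
        </nunit2>
      </target>

      <!-- NCover / FxCop steps would hang off further targets in the same style -->
      <target name="ci" depends="test" />
    </project>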
On many projects you find that there are different levels of tests and activities which take place when someone does a check-in. Sometimes these grow to the point where it can be a long time after a check-in before a dev can see whether they have broken the build.
What I do in these cases is create three builds (or maybe two); a trigger sketch follows the list:
A CI build, triggered by check-in, that does a clean SVN get and build and runs lightweight tests. Ideally you can keep this down to minutes or less.
A more comprehensive build, which could run hourly (if there are changes), that does the same as the CI build but runs more comprehensive and time-consuming tests.
An overnight build that does everything, also runs code coverage and static analysis of the assemblies, and runs any deployment steps to build daily MSI packages, etc.
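In CruiseControl.NET terms, this split is mostly a matter of triggers. A rough sketch with invented project names, a one-minute poll for the CI build and a fixed time for the nightly one:

    <!-- ccnet.config fragment (sketch) -->
    <project name="MyApp-CI">
      <triggers>
        <!-- poll source control every 60 seconds; build only if there are modifications -->
        <intervalTrigger seconds="60" />
      </triggers>
      <!-- clean get, build, lightweight tests -->
    </project>

    <project name="MyApp-Nightly">
      <triggers>
        <!-- force a full build every night regardless of changes -->
        <scheduleTrigger time="23:00" buildCondition="ForceBuild" />
      </triggers>
      <!-- full tests, coverage, static analysis, MSI packaging -->
    </project>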
The key thing about any CI system is that it needs to be organic and constantly tweaked. There are some great extensions to CruiseControl.NET which log and chart build timings for the individual steps, let you do historical analysis, and so allow you to continuously tweak the builds to keep them snappy. It's something managers find hard to accept, but a build box will probably keep you busy for a fifth of your working time just to stop it grinding to a halt.
We use buildbot, with the build broken down into discrete steps. There is a balance to be found between breaking the build steps down with enough granularity and keeping each one a complete unit.
For example, at my current position we build the sub-pieces for each of our platforms (Mac, Linux, Windows) on their respective platforms. We then have a single step (with a few sub-steps) that compiles them into the final version that will end up in the final distributions.
If something goes wrong in any of those steps it is pretty easy to diagnose.
My advice is to write the steps out on a whiteboard in as vague terms as you can and then base your steps on that. In my case that would be:
Build Plugin Pieces
Compile for Mac
Compile for PC
Compile for Linux
Make final Plugins
Run Plugin tests
Build intermediate IDE (We have to bootstrap building)
Build final IDE
Run IDE tests
I would definitely break down the jobs. Chances are you'll make changes to the builds, and it'll be easier to track down issues with smaller tasks than by searching through one monolithic build.
You should be able to create one big job from the smaller pieces anyway.
G'day,
As you're talking about integration testing, my big (obvious) tip would be to build and configure the test server as closely as possible to the deployment environment.
</thebloodyobvious> (-:
cheers,
Rob
Break your tasks up into discrete goals/operations, then use a higher-level script to tie them all together appropriately.
This makes your build process easier to understand for other people (you're documenting as you go so anyone on your team can pick it up, right?), as well as increasing the potential for re-use. It's likely you won't reuse the high-level scripts (although this could be possible if you have similar projects), but you can definitely reuse (even if it's copy/paste) the discrete operations rather easily.
Consider the example of getting the latest source from your repository. You'll want to group the tasks/operations for retrieving the code with some logging statements and reference the appropriate account information. This is the sort of thing that's very easy to reuse from one project to the next.
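A small NAnt sketch of such a reusable "get latest" operation (the repository URL and paths are made up, and svn is called through exec, so this only assumes the command-line client is installed):

    <!-- common-scm.build (sketch): reusable source-retrieval target shared across projects -->
    <project name="common-scm">
      <property name="svn.url" value="http://svn.example.com/myapp/trunk" />
      <property name="src.dir" value="${project::get-base-directory()}/src" />

      <target name="get-latest" description="Check out / update the working copy, with logging">
        <echo message="Fetching ${svn.url} into ${src.dir}" />
        <exec program="svn">
          <arg value="checkout" />
          <arg value="${svn.url}" />
          <arg value="${src.dir}" />
        </exec>
        <echo message="Source retrieval finished." />
      </target>
    </project>

Account details could be injected the same way via properties, so each project only overrides svn.url and credentials.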
For my team's environment, we use NAnt since it provides a common scripting environment between dev machines (where we write and debug the scripts) and the CI server (where we just execute the same scripts in a clean environment). We use Jenkins to manage our builds, but at its core each project just calls into the same NAnt scripts, and then we manipulate the results (i.e., archive the build output, flag failing tests, etc.).