Right now a project I'm working on has reached a level of complexity that requires more than a few steps (actually, it's become arcane!) to produce a complete, usable product. Unfortunately we didn't start out with a Continuous Integration mindset, so as you can imagine it's kind of painful at times, and on a bad day I can easily waste half a day trying to get a clean, tested build.
Anyway, like any huge project it consists of many components in many different languages (not only enterprise-style Java or C#, for example), as well as many graphical and textual resources. Now the problem is that when I look for Continuous Integration advice, I always find best practices and techniques that assume one is starting a new project from the ground up. This isn't a new project, though, so I was wondering: what are some good resources for proactively migrating from Arcane Integration towards Continuous Integration? :)
Thanks in advance!
Here it is in two simple (hah) steps.
Go for the repeatable build:
Use source control, get all code checked in.
Establish and document all tools used to build (mainly, which compiler version). Have a repeatable deployment and setup process for these tools.
Establish and document clearly any resources which are necessary to build but are not checked in (third-party installations, service packs, etc.). Have a repeatable deployment and setup process for these dependencies.
Before committing to source control, developers must
update their working copy
successfully build
run and pass automated tests
These steps can be done one at a time, as a sort of path to follow, and you'll get benefits at each stage. For example, if you aren't using source control at all, just getting the code into source control (without anything else) is a big step forward. And if there are no automated tests yet, developers can't run them, but they can still update to the latest commits and at least let the compiler check their work.
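If it helps, that pre-commit routine can be captured in a small NAnt target so nobody has to remember the steps. This is only a rough sketch: the target names and the Subversion/MSBuild/NUnit choices are assumptions to adapt to your own stack.

<?xml version="1.0"?>
<project name="precommit-check" default="precommit">

  <!-- Hypothetical tool locations; point these at whatever you actually use. -->
  <property name="svn.exe"   value="svn" />
  <property name="nunit.exe" value="tools\nunit\nunit-console.exe" />

  <target name="update">
    <!-- Bring the working copy up to date before building. -->
    <exec program="${svn.exe}" commandline="update" />
  </target>

  <target name="build" depends="update">
    <!-- Replace with however your product is really compiled. -->
    <exec program="msbuild" commandline="MyProduct.sln /p:Configuration=Release" />
  </target>

  <target name="test" depends="build">
    <!-- Run the automated tests; a failure here fails the whole target. -->
    <exec program="${nunit.exe}" commandline="MyProduct.Tests.dll" />
  </target>

  <!-- The one command to run before every commit: nant precommit -->
  <target name="precommit" depends="update, build, test" />
</project>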
If you can do all of these, you'll get to a nice sane place.
The goals are a repeatable build process and developers who are plugged into how their changes affect the build and other developers.
Then you can reap further benefits by establishing higher compliance:
Developers establish a frequent commit habit. Uncommitted work in a working copy should never be more than a day old.
Automated build process monitors source control for check-ins and gets the results to a place where the users can accept them (such as a test environment, a preview website, or even simply placing an .exe where the user can find it).
The same way you eat an elephant (one bite at a time) ;-) Continuous integration requires an automated build, so start with that. Automate the building of each piece. Ant or NAnt is a great way to do this: have each component's build be its own NAnt target, then your entire system build can aggregate those individual targets.
From there, you can add tasks for deployment, for unit testing, etc. If you want to use a CI technology, you can wire it up to your NAnt build.
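For example (and this is only a sketch, with made-up component names), the aggregation might be a top-level NAnt file whose build-all target just depends on one target per component, each of which shells out to whatever actually builds that piece:

<?xml version="1.0"?>
<project name="whole-system" default="build-all">

  <target name="build-server">
    <!-- Placeholder: delegate to the Java component's own Ant build. -->
    <exec program="ant" commandline="-f server/build.xml dist" />
  </target>

  <target name="build-client">
    <!-- Placeholder: compile the C# client solution. -->
    <exec program="msbuild" commandline="client\Client.sln /p:Configuration=Release" />
  </target>

  <target name="build-resources">
    <!-- Placeholder: process the graphical and textual resources. -->
    <exec program="python" commandline="tools\pack_resources.py" />
  </target>

  <!-- The whole-system build is just the sum of the component targets. -->
  <target name="build-all" depends="build-server, build-client, build-resources" />
</project>

Once that exists, a CI server only ever needs to know about build-all.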
I would start by first writing down all the steps it takes you to do the build and test manually. After that you at least have a guide for doing it the old way, and writing things down gives you the chance to look at it as a complete process.
Then look for parts to script.
Ideally you want to trigger a build and test from a code commit and only rebuild and retest the changed parts, with perhaps a full build and test nightly or weekly. You'll need log files or database entries and reports on the build success or lack of it.
You'll want to search out and evaluate pre-built products and open-source build-your-own kits. You can certainly write all the scripting and reporting yourself, but it will take a while and you'll probably end up with a just barely good enough reporting system since your job is coding the product, not coding the build system. :-)
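To make the "build on commit, plus a scheduled full run" idea concrete: in CruiseControl.NET (one of the products discussed later in this thread) it boils down to a pair of triggers roughly like the following. Treat this as a sketch; the exact element and attribute names vary between versions and tools, so check the documentation of whatever product you end up picking.

<triggers>
  <!-- Poll source control and build whenever a commit is detected. -->
  <intervalTrigger seconds="60" />
  <!-- Force a complete build and test run every night, changes or not. -->
  <scheduleTrigger time="23:30" buildCondition="ForceBuild" />
</triggers>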
I would guess that migrating piecemeal isn't really an option; half-assed solutions will only make it worse.
My approach would be to take one creative engineer who understands the build process, sit him down and say "Fix this". Give him a week or two.
The end goal would be a process that runs beginning to end with a single make command.
I also recommend an automated "Setup" procedure where you simply do a checkout and run a batch file from a network share to install and build all your tools. The amount of time this will save overall is staggering if you bring in new programmers. Most projects take one to three days to get set up on a new computer--and it's always the "new" programmer who doesn't know what's going on doing the installs on his own system...
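One way to sketch that setup step (shown here as a NAnt target rather than a batch file, but the same idea applies; the share path and installer names are invented):

<target name="setup-dev-machine">
  <!-- Hypothetical network share holding the pinned tool installers. -->
  <property name="tools.share" value="\\buildshare\devtools" />

  <!-- Copy the installers locally... -->
  <copy todir="c:\devtools">
    <fileset basedir="${tools.share}">
      <include name="**/*" />
    </fileset>
  </copy>

  <!-- ...run the ones that support an unattended install... -->
  <exec program="c:\devtools\some-sdk-installer.exe" commandline="/quiet" failonerror="false" />

  <!-- ...then prove the machine works by running your full build target. -->
  <call target="build-all" />
</target>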
In short: Incrementally
Choose a framework that will work across the diverse range of projects.
One by one, add components to the framework.
If you are not familiar with the framework, tackle a couple of the easier components first, to reduce risk of screwing up.
If you do understand the framework, tackle some of the more difficult and/or commonly built components first, so your team (and management) will appreciate the benefits early, and support the effort more.
Be sure to have a plan to include all of your components, because that's when the full benefit will be realized.
Bring your team with you; make sure you have consensus that this is going to be valuable, or people won't maintain it as the components change.
Related
So, I have an interesting situation. I have a group that is interested in using CI to help catch developer errors. Great - we are all for that. The part I am having trouble wrapping my head around is that they want to build interdependent projects in isolation from one another.
For example, we have a solution for our common/shared libraries that contains multiple projects, some of which depend on others in the solution. I would expect that if someone submits a change to one of the projects in the solution, the CI server would try to build the solution. After all, the solution is aware of dependencies and will build things, optimized, in the correct order.
Instead, they have broken out each project and are attempting to build them independently, without regard for dependencies. This causes other errors because dependent DLLs might not yet exist(!) and a possible chicken-and-egg problem if related changes were made in two or more projects.
This results in a lot of emails about broken builds (one for each project), even if the last set of changes did not really break anything! When I raised this issue, I was told that this was a known issue and the CI server would "rebuild things a few times to work out the dependencies!"
This sounds like a bass-ackwards way to do CI builds, but maybe that is because I am older and ignorant of newer trends. So, has anyone seen a CI setup that builds interdependent projects independently? And if so, for what good reason?
Oh, and they are expecting us to use the build outputs from the CI process, and each built DLL gets a potentially different version number. So we no longer have a single version number for the output of all related DLLs. The best we can ascertain is that something was built on a specific calendar day.
I seem to be unable to convince them that this is A Bad Thing.
So, what am I missing here?
Thanks!
Peace!
I second your opinion.
You risk spending more time dealing with the unnecessary noise than with the actual issues. And repeating portions of the CI verification pipeline only extends the overall CI execution time, which goes against the CI goal of reducing the feedback loop.
If anything, you should try to bring as many dependent projects as possible (ideally all of them) under the same CI umbrella, to maximize the benefit of fast feedback on breakages/regressions in any part of the system as a whole.
Personally I also advocate using a gating CI system (based on pre-commit verifications) to prevent regressions rather than just detecting them and relying on human intervention for repairs.
I've been dealing with the problem of scaling CI at my company, and at the same time trying to figure out which approach to take when it comes to CI and multiple branches. There is a similar question at Stack Overflow, Multiple feature branches and continuous integration. I've started a new one because I'd like to get more of a discussion going and provide some analysis in the question.
So far I've found that there are two main approaches I can take (or maybe others?).
Multiple sets of jobs (talking about Jenkins/Hudson here), one set per branch
Write tooling to manage the extra jobs
Create/modify/delete jobs in bulk
Custom settings for each job per branch (SCM URL, duplicated dependency-management repositories)
Some examples of people tackling this problem with shell tools, Ant scripts and the Jenkins CLI:
http://jenkins.361315.n4.nabble.com/Multiple-branches-best-practice-td2306578.html
http://jenkins.361315.n4.nabble.com/Is-it-possible-to-handle-multiple-branches-where-some-jobs-should-run-on-each-one-without-duplicatin-td954729.html
http://jenkins.361315.n4.nabble.com/Parallel-development-with-branches-td1013013.html
Configure or Create hudson job automatically
Will cause more load on your CI cluster
Feedback cycle for devs slows down (if the infrastructure cannot handle the new load)
Multiple sets of jobs for just two branches (dev & stable)
Manage the two sets manually (if you change the configuration of a job, be sure to change it for the other branch too)
PITA, but at least there are only a few to manage
Other extra branches won't get a full test suite before they get pushed to dev
Unsatisfied devs. Why should a dev care about CI scaling problems? They have a simple request: when I branch, I would like to test my code. Simple.
So it seems that if I want to provide devs with CI for their own custom branches, I need special tooling for Jenkins (an API, shell scripts, or something similar) and have to handle the scaling. Or I can tell them to merge more often to dev and live without CI on custom branches. Which would you choose, or are there other options?
When you talk about scaling CI you're really talking about scaling the use of your CI server to handle all your feature branches along with your mainline. Initially this looks like a good approach as the developers in a branch get all the advantages of the automated testing that the CI jobs include. However, you run into problems managing the CI server jobs (like you have discovered) and more importantly, you aren't really doing CI. Yes, you are using a CI server, but you aren't continuously integrating the code from all of your developers.
Performing real CI means that all of your developers are committing regularly to the mainline. Easy to say, but the hard part is doing it without breaking your application. I highly recommend you look at Continuous Delivery, especially the Keeping Your Application Releasable section in Chapter 13: Managing Components and Dependencies. The main points are:
Hide new functionality until it's finished (a.k.a. Feature Toggles).
Make all changes incrementally as a series of small changes, each of which is releasable.
Use branch by abstraction to make large-scale changes to the codebase.
Use components to decouple parts of your application that change at different rates.
They are pretty self-explanatory except branch by abstraction, which is just a fancy term for:
Create an abstraction over the part of the system that you need to change.
Refactor the rest of the system to use the abstraction layer.
Create a new implementation, which is not part of the production code path until complete.
Update your abstraction layer to delegate to your new implementation.
Remove the old implementation.
Remove the abstraction layer if it is no longer appropriate.
The following paragraph from the Branches, Streams, and Continuous Integration section in Chapter 14: Advanced Version Control summarises the impacts.
The incremental approach certainly requires more discipline and care - and indeed more creativity - than creating a branch and diving gung-ho into re-architecting and developing new functionality. But it significantly reduces the risk of your changes breaking the application, and will save you and your team a great deal of time merging, fixing breakages, and getting your application into a deployable state.
It takes quite a mind shift to give up feature branches and you will always get resistance. In my experience this resistance is based on developers not feeling safe committing code to the mainline, and this is a reasonable concern. It usually stems from a lack of knowledge, confidence or experience with the techniques listed above, and possibly from a lack of confidence in your automated tests. The former can be solved with training and developer support. The latter is a far more difficult problem to deal with; however, branching doesn't provide any extra real safety, it just defers the problem until the developers feel confident enough with their code.
I would set up separate jobs for each branch. I've done this before and it isn't hard to manage and set up if you've configured Hudson/Jenkins correctly. A quick way to create multiple jobs is to copy from an existing job that has similar requirements and modify it as needed. I'm not sure if you want to allow each developer to set up their own jobs for their own branches, but it isn't much work for one person (i.e. a build manager) to manage. Once the custom branches have been merged into stable branches, the corresponding jobs can be removed when they are no longer necessary.
If you're worried about the load on the CI server, you could set up separate instances of the CI or even separate slaves to help balance the load across multiple servers. Make sure that the server you are running Hudson/Jenkins on is adequate. I've used Apache Tomcat and just had to ensure that it had enough memory and processing power to process the build queue.
It's important to be clear on what you want to achieve using CI and then figure out a way to implement it without much manual effort or duplication. There's nothing wrong with using other external tools or scripts that are executed by your CI server that help simplify your overall build management process.
I would choose dev + stable branches. And if you still want custom branches and are afraid of the load, then why not move those custom ones to the cloud and let developers manage them themselves, e.g. http://cloudbees.com/dev.cb
This is the company where Kohsuke is now.
There is also Eclipse tooling, so if you are on Eclipse, you will have it tightly integrated right into your dev environment.
Actually, what is really problematic is build isolation with feature branches. In our company we have a set of separate Maven projects that are all part of a larger distribution. These projects are maintained by different teams, but for each distribution all of the projects need to be released. A feature branch may now span more than one project, and that's when build isolation gets painful. There are several solutions we've tried:
create separate snapshot repositories in nexus for each feature branch
share local repositories on dedicated slaves
use the repository-server-plugin with upstream repositories
build all within one job with one private repository
As a matter of fact, the last solution is the most promising; all of the others fall short in one way or another. Together with the Job DSL plugin it is easy to set up a new feature branch: simply copy and paste the Groovy script, adapt the branches, and let the seed job create the new jobs. Make sure that the seed job removes unmanaged jobs. Then you can easily scale feature branches across different Maven projects.
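For anyone trying the private-repository route, the mechanics are mostly just pointing Maven at a workspace-local repository so a feature branch's SNAPSHOT artifacts never leak into the shared one. A minimal sketch, with an invented file name and assuming the job runs inside a Jenkins workspace:

<!-- settings-branch.xml, used only by the feature-branch job -->
<settings>
  <!-- Keep this job's artifacts in its own workspace-local repository. -->
  <localRepository>${env.WORKSPACE}/.repository</localRepository>
</settings>

The job then calls something like mvn -s settings-branch.xml clean install (or passes -Dmaven.repo.local directly), and the repository disappears along with the workspace when the branch job is deleted.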
But as tom said well above, it would be nicer to remove the need for feature branches altogether and teach devs to integrate cleanly. That is a longer process, though, and the outcome is not clear when there are many legacy parts of the system you won't touch any more.
my 2 cents
We have a large collection of nAnt scripts that build our various products. They almost all have the following structure:
Erase old working copy.
Check out complete fresh copy from version control.
Increment build number in appropriate file (custom nAnt task).
Run static analysis (StyleCop, Perl scripts)
Build solution using Visual Studio - ends up with MSI output.
Run unit tests (nUnit, JSUnit)
Run static analysis (FxCop)
Zip up deliverables (MSI, readme, etc) into well-named package.
Put this zip package onto a server share.
Email results to team.
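In skeleton form such a script looks roughly like the following; the names, paths and tool choices here are illustrative rather than our real ones, and the static-analysis steps are trimmed for brevity:

<?xml version="1.0"?>
<project name="product-build" default="full">

  <property name="build.dir"    value="c:\builds\product" />
  <property name="svn.url"      value="http://svn.example.local/product/trunk" />
  <property name="share.dir"    value="\\fileserver\builds" />
  <!-- In the real script this is set by the custom increment-build-number task. -->
  <property name="build.number" value="0" />

  <target name="clean">
    <!-- 1. Erase the old working copy. -->
    <delete dir="${build.dir}" failonerror="false" />
  </target>

  <target name="checkout" depends="clean">
    <!-- 2. Check out a complete fresh copy from version control. -->
    <exec program="svn" commandline="checkout ${svn.url} ${build.dir}" />
  </target>

  <target name="compile" depends="checkout">
    <!-- 3-5. Bump the version, run pre-build analysis, build the solution. -->
    <exec program="msbuild" workingdir="${build.dir}"
          commandline="Product.sln /p:Configuration=Release" />
  </target>

  <target name="test" depends="compile">
    <!-- 6. Run the unit tests. -->
    <exec program="tools\nunit-console.exe" workingdir="${build.dir}"
          commandline="Product.Tests.dll" />
  </target>

  <target name="package" depends="test">
    <!-- 7-8. Post-build analysis, then zip the deliverables into a well-named package. -->
    <zip zipfile="${build.dir}\Product-${build.number}.zip">
      <fileset basedir="${build.dir}\deliverables">
        <include name="**/*" />
      </fileset>
    </zip>
  </target>

  <target name="full" depends="package">
    <!-- 9-10. Publish the package to the share and mail the team. -->
    <copy file="${build.dir}\Product-${build.number}.zip" todir="${share.dir}" />
    <mail from="build@example.local" tolist="team@example.local"
          mailhost="smtp.example.local"
          subject="Product build ${build.number} finished" />
  </target>
</project>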
From our research, it seems that CruiseControl(.NET?)/Hudson/BuildBot would only add a trigger that causes the build (which at the moment is double-clicking the nAnt script over Remote Desktop) and a status dashboard.
Are we missing anything else significant?
The question is subjective, and thus so is my answer.
In the projects I've automated before, CruiseControl was used essentially for that one purpose: so we didn't have to remote into the build machine and trigger builds. The CI part is that CruiseControl will monitor the repository for you, triggering builds at the intervals you define.
It also gave us the dashboard from which we could trigger releases, or go back and examine the logs and artefacts from past builds.
For us that was enough benefit to implement CruiseControl. Perhaps it doesn't "seem" like much until you've finished it and a month later realized you haven't had to touch your build system because it's off silently and thanklessly doing its thing for you.
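For anyone weighing the same decision, the CruiseControl.NET side of it is a fairly small amount of configuration. The sketch below assumes Subversion and a NAnt script called product.build with a full target; the server names and paths are invented, and details vary by CruiseControl.NET version:

<cruisecontrol>
  <project name="MyProduct">

    <!-- Watch the repository so commits trigger a build automatically. -->
    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.local/product/trunk</trunkUrl>
      <workingDirectory>c:\ccnet\myproduct</workingDirectory>
    </sourcecontrol>

    <triggers>
      <!-- Poll every minute; only build when something has changed. -->
      <intervalTrigger seconds="60" />
    </triggers>

    <tasks>
      <!-- Hand the real work to the existing NAnt script. -->
      <nant>
        <buildFile>product.build</buildFile>
        <targetList>
          <target>full</target>
        </targetList>
      </nant>
    </tasks>

    <publishers>
      <!-- Feeds the web dashboard and CCTray. -->
      <xmllogger />
    </publishers>

  </project>
</cruisecontrol>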
A Continuous Integration server such as Hudson would do 1, 2, 3, 9 and 10 for you so that you don't have to implement them yourself. If you've already got it working that's maybe not a huge improvement for your current project but it makes things simpler for subsequent projects. It would also, as you mention, take care of when to trigger the build.
Hudson will also chart various trends over time, such as test coverage, build time, static analysis results. You can also have more sophisticated notifications than just e-mail if you choose.
The most important thing it gives you is visual feedback (the bigger the screen, the better). When you have one machine dedicated to displaying build results, visible to all team members, it works like a catalyst: people see that something is wrong and fix it.
If you have something like that standing in a place where your boss can see it and ask you, "Hey Wilkinson, why is this screen red?", won't you fix your build faster?
They all look much the same, so pick whichever you think fits your needs; just have one set up and running.
At work, where we do LOB .NET/MSSQL development, many of our projects are two-person or even one-person projects with development life cycles of one to three months. The developers serve as business analysts/project managers/QA, so things get done fast with minimal 'BS time' spent. We do get bigger projects that can take six months and have a team of five devs, but those are less common.
We're pushing to get everyone doing TDD going forward (my most recent project has full code coverage and was developed solo), and I was researching the architecture required to take maximum benefit of it. It seems that most people doing TDD are also doing CI: they have a build server, automated builds, and some kind of automated client build tool (FinalBuilder or nAnt), etc.
So my questions: I see the obvious benefits on the uncommon large projects where you have five people working on the same codebase at once, but will we see much benefit from doing CI on the small two-person projects? What about on a one-person project: is it just a complete waste, since you're really not 'integrating' with anyone? And how would you pitch CI, automated builds, and a build server to management?
Having an automated/repeatable build process, and being able to prove that the current build passes all tests and runs in a server environment is worth the effort on any size project IMHO.
I'd pitch it this way: manual builds are manual. Things can get mixed up even on small projects. An automated build solves this problem. The amount of time it takes to set up the build script will be made up many times over during the lifecycle of the application.
As far as CI with test runs and so on goes: it's a constant health check on the quality of the code base. It's good to know as soon as possible when one thing inadvertently breaks another.
On a small project, you don't need most of the infrastructure to do CI, especially the build server. What you do need is the tests, the build automation, revision control, and a controlled build environment. You can just as well have your build and test servers be virtual machine images you run on your workstations... just so long as the images are under revision control like the rest of the project.
Setting up an integration server, I'm unsure of the best approach to using multiple tasks to complete the build. Is it best to put everything in one big job, or to make small dependent ones?
You definitely want to break up the tasks. Here is a nice example of CruiseControl.NET configuration that has different targets (tasks) for each step. It also uses a common.build file which can be shared among projects with little customization.
http://code.google.com/p/dot-net-reference-app/source/browse/#svn/trunk
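The shared-file idea is mostly just NAnt's include task plus overridable properties; here is a rough sketch (file, property and target names invented) of how a per-project build file can stay tiny while common.build carries the shared targets:

<!-- MyApp.build: the per-project file only sets what differs... -->
<project name="MyApp" default="ci">
  <property name="solution.file" value="MyApp.sln" />
  <property name="test.assembly" value="MyApp.Tests.dll" />
  <include buildfile="common.build" />
</project>

<!-- common.build: shared targets driven by the properties above. -->
<project name="common">
  <target name="compile">
    <exec program="msbuild" commandline="${solution.file} /p:Configuration=Release" />
  </target>
  <target name="test" depends="compile">
    <exec program="tools\nunit-console.exe" commandline="${test.assembly}" />
  </target>
  <target name="ci" depends="compile, test" />
</project>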
I use TeamCity with a NAnt build script. TeamCity makes it easy to set up the CI server part, and a NAnt build script makes it easy to handle a number of tasks as far as report generation is concerned.
Here is an article I wrote about using CI with CruiseControl.NET, it has a nant build script in the comments that can be re-used across projects:
Continuous Integration with CruiseControl
The approach I favour is the following setup (assuming you are on a .NET project):
CruiseControl.NET.
NAnt tasks for each individual step; NAntContrib for alternative CC templates.
NUnit to run unit tests.
NCover to perform code coverage.
FxCop for static analysis reports.
Subversion for source control.
CCTray or similar on all dev boxes to get notification of builds and failures etc.
On many projects you find that there are different levels of tests and activities that take place when someone does a check-in. Sometimes these grow to the point where it can be a long time after a check-in before a dev can see whether they have broken the build.
What I do in these cases is create three builds (or maybe two):
A CI build, triggered by check-in, which does a clean SVN get and build and runs lightweight tests. Ideally you can keep this down to a few minutes or less.
A more comprehensive build, perhaps hourly (if there are changes), which does the same as the CI build but runs more thorough and time-consuming tests.
An overnight build which does everything and also runs code coverage and static analysis of the assemblies and runs any deployment steps to build daily MSI packages etc.
The key thing about any CI system is that it needs to be organic and constantly tweaked. There are some great extensions to CruiseControl.NET which log and chart build timings etc. for the steps, let you do historical analysis, and so allow you to continuously tweak the builds to keep them snappy. Managers find it hard to accept that a build box will probably keep you busy for a fifth of your working time just to stop it grinding to a halt.
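A simple way to express those tiers is one NAnt file whose entry-point targets just pull in progressively more work, so each CruiseControl.NET project only has to ask for a different target. The target names below are invented and the steps are stubbed out with echo just to show the shape:

<?xml version="1.0"?>
<project name="tiered-build" default="ci">

  <!-- The real steps live elsewhere; stubs shown so the shape is clear. -->
  <target name="compile"><echo message="get clean source and build the solution" /></target>
  <target name="fast-tests" depends="compile"><echo message="lightweight unit tests" /></target>
  <target name="slow-tests" depends="compile"><echo message="comprehensive, time-consuming tests" /></target>
  <target name="analysis" depends="compile"><echo message="code coverage and static analysis" /></target>
  <target name="package" depends="compile"><echo message="build the daily MSI packages" /></target>

  <!-- Check-in build: keep it down to minutes. -->
  <target name="ci" depends="fast-tests" />
  <!-- Hourly build: everything the CI build does, plus the slow tests. -->
  <target name="hourly" depends="fast-tests, slow-tests" />
  <!-- Overnight build: the lot. -->
  <target name="nightly" depends="fast-tests, slow-tests, analysis, package" />
</project>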
We use buildbot, with the build broken down into discrete steps. There is a balance to be found between breaking the build steps down with enough granularity and keeping each one a complete unit.
For example at my current position, we build the sub-pieces for each of our platforms (Mac, Linux, Windows) on their respective platforms. We then have a single step (with a few sub steps) that compiles them into the final version that will end up in the final distributions.
If something goes wrong in any of those steps it is pretty easy to diagnose.
My advice is to write the steps out on a whiteboard in as vague terms as you can and then base your steps on that. In my case that would be:
Build Plugin Pieces
Compile for Mac
Compile for PC
Compile for Linux
Make final Plugins
Run Plugin tests
Build intermediate IDE (We have to bootstrap building)
Build final IDE
Run IDE tests
I would definitely break down the jobs. Chances are you'll make changes to the builds, and it'll be easier to track down issues if you have smaller tasks instead of searching through one monolithic build.
You should be able to compose one big job from the smaller pieces anyway.
G'day,
As you're talking about integration testing, my big (obvious) tip would be to make the test server's build and configuration as close as possible to the deployment environment.
</thebloodyobvious> (-:
cheers,
Rob
Break your tasks up into discrete goal/operations, then use a higher-level script to tie them all together appropriately.
This makes your build process easier for other people to understand (you're documenting as you go so anyone on your team can pick it up, right?), as well as increasing the potential for reuse. It's likely you won't reuse the high-level scripts (although this could be possible if you have similar projects), but you can definitely reuse (even if it's copy/paste) the discrete operations rather easily.
Consider the example of getting the latest source from your repository. You'll want to group the tasks/operations for retrieving the code with some logging statements and reference the appropriate account information. This is the sort of thing that's very easy to reuse from one project to the next.
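As a cut-down sketch of that reusable "get the latest source" operation (the property names and the Subversion assumption are mine, not necessarily what your project uses):

<!-- getsource.build: include this from each project's top-level script. -->
<project name="getsource">

  <!-- Overridable defaults; the including script or the CI server supplies real values. -->
  <property name="scm.url"      value="http://svn.example.local/project/trunk" overwrite="false" />
  <property name="scm.user"     value="buildagent" overwrite="false" />
  <property name="scm.password" value="CHANGE_ME"  overwrite="false" />
  <property name="src.dir"      value="src"        overwrite="false" />

  <target name="get-latest-source">
    <echo message="Fetching ${scm.url} into ${src.dir} as ${scm.user}" />
    <exec program="svn">
      <arg value="checkout" />
      <arg value="${scm.url}" />
      <arg value="${src.dir}" />
      <arg value="--username" />
      <arg value="${scm.user}" />
      <arg value="--password" />
      <arg value="${scm.password}" />
      <arg value="--non-interactive" />
    </exec>
    <echo message="Source update finished" />
  </target>
</project>

Each project's top-level script sets its own values for those properties and then pulls the file in with an include, which is about as close to copy/paste-free reuse as NAnt gets.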
For my team's environment, we use NAnt since it provides a common scripting environment between dev machines (where we write/debug the scripts) and the CI server (since we just execute the same scripts in a clean environment). We use Jenkins to manage our builds, but at their core each project is just calling into the same NAnt scripts and then we manipulate the results (ie, archive the build output, flag failing tests etc).