How to prevent builds when another build fails? - continuous-integration

I have created many build configurations in Hudson for a single solution (e.g. Release, Debug, Test).
When I commit something wrong, I receive one build-failed e-mail for every build configuration.
I would like to receive a single e-mail.
I think if I could make one build dependent on the success or failure of another, I could receive fewer e-mails.
How can I do that?
BTW: I use MSBuild, Subversion and NAnt.

It sounds like you have multiple jobs (build configurations) for the same set of source code that are configured to always build. You could, as someone else suggested, use build triggers to chain these jobs together. However, if all the jobs run on each commit, I suggest combining the jobs into a single job with multiple steps. That way when one step fails, the entire build will fail, no unnecessary Hudson cycles will be spent, and you will not receive redundant emails. To add steps to a build, click "Add build step" and select "Invoke Ant" (or whatever other action you want it to take).
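As a rough sketch of what that single job's build file might look like with NAnt (solution and target names are made up), one step invoking this script replaces the separate per-configuration jobs:

    <?xml version="1.0"?>
    <!-- Hypothetical single-job NAnt script: one Hudson build step runs the
         default target, and a failure in any configuration fails the whole
         job, so only one e-mail is sent. -->
    <project name="solution" default="all">

      <!-- <exec> fails the build on a non-zero exit code by default. -->
      <target name="release">
        <exec program="msbuild">
          <arg value="MySolution.sln" />
          <arg value="/p:Configuration=Release" />
        </exec>
      </target>

      <target name="debug">
        <exec program="msbuild">
          <arg value="MySolution.sln" />
          <arg value="/p:Configuration=Debug" />
        </exec>
      </target>

      <target name="test" depends="debug">
        <!-- run the test configuration / test runner here -->
      </target>

      <target name="all" depends="release, test" />
    </project>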

You can use the build trigger "Build after other projects are built" to put projects up- or downstream from one another. Then you typically let the lighter builds go first (such as a simple compile).

Related

Is there a build step in TeamCity that exits the build successfully without performing additional build steps?

I have a project that needs to execute either three or four build steps depending on the branch in source control. More specifically, if I'm merging in a PR and running the build (for GitHub status notifications) I have one extra build step that is required.
It's that last build step that I need to omit if it's a non-PR branch.
Is there a way to add a build step that checks the trigger and exits the build successfully? Or a way to exclude a build step based on a branch filter?
You can check the condition and modify the step logic inside the build script. See the related ticket and an example of the script.
BTW, it is not good practice to change the logic of the build inside the build script. In that case you can no longer compare builds within the build configuration: they start to form multiple unrelated sequences, and the build statistics become uninformative. The recommended setup is to create several build configurations based on a template.
It depends on which type of runner you are launching, but in some cases you can add a few lines of code to get your current branch name from the property %teamcity.build.branch%.
In my case, I just pass this as an extra parameter to my PowerShell scripts; if it is a number, do something, else do other stuff. ;)
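A minimal NAnt sketch of the same idea (the is.pr property and target names are hypothetical; the PR build configuration would pass -D:is.pr=true, derived from %teamcity.build.branch%):

    <?xml version="1.0"?>
    <!-- Hypothetical sketch: the extra step only runs when the job passes
         -D:is.pr=true; other branches skip it and the build still succeeds. -->
    <project name="build" default="all">

      <!-- Defaults to false, so non-PR builds skip the extra step. -->
      <property name="is.pr" value="false" overwrite="false" />

      <target name="compile">
        <!-- the ordinary build steps go here -->
      </target>

      <!-- 'if' makes NAnt skip this target when the condition is false. -->
      <target name="github-status" if="${is.pr == 'true'}">
        <echo message="Reporting GitHub status for this PR build" />
      </target>

      <target name="all" depends="compile, github-status" />
    </project>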

Multiple types of build for different build triggers

I have a setup where our code builds to dev every 5 hours on a schedule trigger. This works great, but the downside is that the code could sit in TeamCity for hours before it triggers and alerts us to a build error.
Is there a way to have a VCS trigger also run the build as soon as the code is checked in, but pass something to our NAnt script to say "just build, don't deploy"?
I know I must be missing something... is there any way to achieve this?
The only way I could think of was to have an entirely separate build configuration, but that seemed rather wasteful.
You can set up a new build with a VCS trigger and then have that build set an env/system variable that your build script can read to determine whether or not to deploy to dev.
See TeamCity Docs for information around this. I've used something like this in setting up builds before and it works well.
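For instance, a hypothetical NAnt sketch of that variable check (the deploy property name is an assumption; the VCS-triggered configuration would pass -D:deploy=false while the scheduled one keeps the default):

    <?xml version="1.0"?>
    <!-- Hypothetical sketch: both build configurations share this script;
         only the value of the 'deploy' property differs between them. -->
    <project name="build" default="all">

      <property name="deploy" value="true" overwrite="false" />

      <target name="compile">
        <!-- compile as usual; a broken commit fails here right away -->
      </target>

      <!-- Skipped when the VCS-triggered configuration passes -D:deploy=false. -->
      <target name="deploy-dev" if="${deploy == 'true'}">
        <echo message="Deploying to dev..." />
      </target>

      <target name="all" depends="compile, deploy-dev" />
    </project>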

How to split a big Jenkins job/project into smaller jobs without compromising functionality?

We're trying to improve our Jenkins setup. So far we have two directories: /plugins and /tests.
Our project is a multi-module project of Eclipse Plugins. The test plugins in the /tests folder are fragment projects dependent on their corresponding productive code plugins in /plugins.
Until now, we had just one Jenkins job which checked out both /plugins and /tests, built all of them and produced the Surefire results etc.
We're now thinking about splitting the project into smaller jobs corresponding to features we provide. It seems that the way we tried to do it is suboptimal.
We tried the following:
We created a job for the core feature. This job checks out the whole /plugins and /tests directories and builds only the plugins the feature is comprised of. This job has a separate pom.xml which defines the core artifact and lists the modules contained in the feature (a minimal sketch of such a pom follows below).
We created a separate job for the tests that should be run on the feature plugins. This job uses the cloned workspace from the core job. This job is to be run after the core feature is built.
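A minimal sketch of the kind of aggregator pom described above (group, artifact, and module names are made up):

    <!-- Hypothetical aggregator pom for the core feature: it defines the
         core artifact and lists only the plugins the feature consists of. -->
    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>core-feature</artifactId>
      <version>1.0.0-SNAPSHOT</version>
      <packaging>pom</packaging>

      <modules>
        <module>plugins/com.example.core</module>
        <module>plugins/com.example.core.ui</module>
      </modules>
    </project>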
I somehow think this is less than optimal.
For instance, only the core job can update the checked out files. If only the tests are updated, the core feature does not need to be built again, but it will be.
As soon as I have a feature which is dependent on the core feature, this feature would either need to use a clone of the core feature workspace or check out its own copy of /plugins and /tests, which would lead to bloat.
Using a cloned workspace, I can't update my sources. So when I have a feature depending on another feature, I can only run its job after the core feature has been updated and built.
I think I'm missing some basic stuff here. Can someone help? There definitely is an easier way for this.
EDIT: I'll try to formulate what I think would ideally happen if everything works:
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check the corresponding job)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
Finally, the project job should
do a nightly build
check out all sources from /plugins and /tests
build all, test all, send results to Sonar
Additionally, it would be neat if the nightly build was unnecessary because the builds and test results of the projects' features would be combined in the project job results.
Is something like this possible?
Starting from the end of the question. I would keep a separate nightly job that does a clean check-out (gets rid of any generated stuff before check-out), builds everything from scratch, and runs all tests. If you aren't doing a clean build, you can't guarantee that what is checked into your repository really builds.
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check the corresponding job)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
[I am assuming that by "dependent features" in 1 you mean the things needed by the "feature" in 2.]
To do this, I would set up multiple jobs:
a job for every individual feature and every dependent feature that simply builds that feature. The jobs should be started by SCM changes for the (dependent) feature.
I wouldn't keep test jobs separate from compile jobs. That allows the possibility that successfully compiled code is never tested. Instead, I would rely on the fact that when a build step fails in Jenkins, it normally aborts further build steps.
The trick is going to be in how you thread all of these together.
Let's say we have a feature with a build job called F1 that is built on top of two dependent features, DF1.1 and DF1.2, each with its own build job.
Both DF1.1 and DF1.2 should be configured to trigger the build of F1.
F1 should be configured to get the artifacts it needs from the latest successful DF1.1 and DF1.2 builds. Unfortunately, the very nice "Clone SCM" plugin is not going to be of much help here as it only pulls from one previous job. Perhaps one of the artifact publisher plugins might be useful, or you may need to add some custom build steps to put/get artifacts.

How to recombine builds in TeamCity?

We have a lot of tests. I can break these up so that they run on separate agents after an initial compile build happens, but is there a way I can recombine those results? Having 8 build configurations that all need to be green makes it hard to see whether you've got one ubergreen build.
Is there a way in TeamCity to recombine / join builds once we've split them out? TW-9990 might help - allowing ANDs in the dependencies.
We found the answer which certainly works from TeamCity 5:
One compile build,
N test-only builds that take compile.zip!** and copy it to where the compile output would normally be (via a template).
Consolidated finish:
Finish Build Trigger: Wait for a successful build in: ...
Snapshot Dependencies: Do not run new build if there is a suitable one
Only use successful builds from suitable ones
This all seems to work nicely and the whole shebang is easily copied for branches etc. I am very happy - this has worked well for us for many months now.
No idea how to do that natively. Here's my first thoughts on how I would try and tackle such a thing though:
Saving test results to files
Publishing the test result files as build artifacts
Creating a 'Merge build'
Adding artifact dependency onto the individual test projects
Writing a custom 'build' script using something like (N)Ant. This would parse the individual test results and publish them as per the TC KB (a sketch follows below).
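For instance, a hypothetical NAnt target in that merge build could re-publish NUnit result files collected via the artifact dependencies by echoing TeamCity's documented importData service message (the paths here are made up):

    <!-- Hypothetical merge-build target: the artifact dependencies have
         placed the per-agent NUnit XML files under results/. TeamCity
         picks up the ##teamcity[importData ...] service message from
         stdout and attaches the tests to this build. -->
    <target name="merge-results">
      <foreach item="File" property="result.file">
        <in>
          <items>
            <include name="results/**/*.xml" />
          </items>
        </in>
        <do>
          <echo message="##teamcity[importData type='nunit' path='${result.file}']" />
        </do>
      </foreach>
    </target>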
Good luck!
Thinking outside the box, you could have an overall build which doesn't really do anything (or use one of your test build configs as your 'master'), with snapshot dependencies on each of your split test builds. That way, if any of them fails, the 'master' will fail because one of the dependent builds failed.
TW-9990 looks to be concerned with build triggering rather than dependencies.

Can Hudson be configured to continue the rest of the build steps if one fails?

I don't expect this to be useful in my day-to-day workflow, but when initially configuring projects for Hudson there are times when I wish I could get it to try all the build steps - not just stop after the first failure.
Again, I am not advocating this for everyday use - just for configuration of the builds. (One of my projects takes about an hour, and I'd rather not have to iterate through fixing each build step independently - I would like to fix all of them in parallel.)
So, is there a way to tell hudson to continue the build steps when one fails?
The best solution right now is to modify each of your build steps to make sure they unconditionally return success, instead of an error code.
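If the steps are NAnt tasks, one way to approximate this while wiring the job up (solution names below are made up) is failonerror="false" on the individual tasks:

    <!-- Hypothetical configuration-time tweak: failonerror="false" lets the
         later steps run even when an earlier one breaks. Remove it once the
         job is stable, or the step can never fail the build again. -->
    <target name="build-all">
      <exec program="msbuild" failonerror="false">
        <arg value="ProjectA.sln" />
      </exec>
      <exec program="msbuild" failonerror="false">
        <arg value="ProjectB.sln" />
      </exec>
    </target>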
There is an open enhancement request to do exactly what you want: HUDSON-4819.
This actually can be quite useful in a day-to-day workflow. We use Zed Builds And Bugs and it has this feature. For each build step, you simply toggle whether you want that step to fail the build when it fails. By default it is turned on (sensible).
Where this has come in handy is with optional steps - e.g. copying final binaries to other distribution servers. Sometimes these servers are up and sometimes not. It doesn't really matter if this particular step fails, but when it does, I don't want the whole build to fail.
