Can Hudson be configured to continue the rest of the build steps if one fails? - continuous-integration

I don't expect this to be useful in my day-to-day workflow, but when initially configuring projects for Hudson there are times when I wish I could get it to try all the build steps - not just stop after the first failure.
Again, I am not advocating this for everyday use - just for configuring the builds. (One of my projects takes about an hour or so, and I'd rather not have to iterate through fixing each build step independently - I would like to fix them all in parallel.)
So, is there a way to tell Hudson to continue with the build steps when one fails?

The best solution right now is to modify each of your build steps so that it unconditionally returns success instead of an error code.
There is an open enhancement request, HUDSON-4819, to do exactly what you want.
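For example, if the step is an "Execute shell" build step, you can force a success exit code yourself while you are still getting the configuration right (a rough sketch; the script names are only placeholders):

    # Run the real command, but don't let a failure stop the remaining steps
    ./run_integration_tests.sh || true

    # Or swallow every failure in this step outright
    ./package_installers.sh
    exit 0

Just remember to take the overrides back out once the configuration is stable, or real failures will go unnoticed.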

This can actually be quite useful in a day-to-day workflow. We use Zed Builds And Bugs and it has this feature. For each build step, you simply toggle whether a failure in that step should fail the whole build. By default it is turned on (which is sensible).
Where this has come in handy is with optional steps - e.g. copying final binaries to other distribution servers. Sometimes these servers are up and sometimes they are not. It doesn't really matter if this particular step fails, but when it does, I don't want the whole build to fail.

Related

How to set up GitLab CI to run multiple build steps efficiently while indicating at which step it is?

I am pretty new to GitLab. I've set up pipelines and stages via .gitlab-ci.yml and they seem to work, but I've just discovered that some of my assumptions were wrong.
I have a large, multi-project Gradle setup, producing many artifacts. We are in the process of setting up GitLab and I really wanted to make use of the GitLab UI to show the progress of the build. The idea was to nicely indicate to developers and reviewers how far the build got before it failed, something like:
Got its dependencies
Compiled main code, YAY!
Compiled test code, yippie!
Passed unit tests, we rock!
Passed integration tests, awesome!
Passed various static code analysis tests. We're almost good to go!
Generated documentation - can we ship it?
I've set up each of these as individual jobs in individual stages, (incorrectly) assuming that Gradle would be able to do its incremental-build magic and that this would be almost as quick as running everything as a single step.
Then I noticed that each stage causes what seems to be a Docker container reinitialization. This also means that the Gradle daemon has to restart and has no knowledge of the past, so it has to fetch all the dependencies again. I think I could cache these, but it seems they would be cached separately for each job. My assumption that serialized jobs would execute inside the same container instance was proven wrong: each subsequent job generally has to repeat what the jobs before it already did, because their output isn't available to it, which significantly increases the build time.
I think I understand that I could declare artifacts for each job and make them available to dependent jobs that way, but that does not eliminate all of the overhead and adds some of its own - copying the artifacts out to "somewhere" and then back, while also hitting limits on how much I can pass along. In fact, my unit test job is now failing and I can't see why because of the log size limit, but it seems to be related only to artifacts (the report), as the unit tests pass nicely when I run them outside GitLab.
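For reference, this is roughly the shape of the per-job cache and artifact declarations I mean - a trimmed-down sketch, with illustrative job names, paths and Gradle tasks rather than my real configuration:

    variables:
      # Keep Gradle's caches inside the project tree so GitLab can cache them
      GRADLE_USER_HOME: "$CI_PROJECT_DIR/.gradle"

    stages:
      - compile
      - test

    compile_main:
      stage: compile
      cache:
        key: "$CI_JOB_NAME"          # cached separately for each job
        paths:
          - .gradle/caches
      script:
        - ./gradlew classes
      artifacts:
        paths:
          - "**/build/classes"       # hand the compiled output to later jobs
        expire_in: 1 day

    unit_tests:
      stage: test
      cache:
        key: "$CI_JOB_NAME"
        paths:
          - .gradle/caches
      script:
        - ./gradlew test
      artifacts:
        when: always
        paths:
          - "**/build/reports"       # the test report
        expire_in: 1 day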
I also think I understand that the idea behind jobs is that they can run in parallel on separate runners. That is a very fine feature and I can probably use it for later stages, but not for (1)-(5), as they rely heavily on the output of at least some of the previous jobs.
I could merge (1)-(5) into a single job (and a single stage) for performance reasons, but then there is no indication in the UI (that I know of) as to how far the build got ... and the logs would be even longer and nastier to figure out even if the log limit got lifted.
Do any of you have any suggestions as to what I am missing / should do here?
After further research, I found that this is not possible (yet). Jobs are meant to be units of (potentially) concurrent execution and can only communicate by copying artifacts, obviously.
What I would be interested in is steps smaller than jobs that would be indicated in the UI and that could post their artifacts as they (the steps) complete, before the entire job is done. This would eliminate the 1-2 minutes of job startup overhead that I am facing now.
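In the meantime, merging (1)-(5) into a single job at least leaves a visible trail in the job log if each section is announced before it runs - a rough sketch, with illustrative Gradle task names:

    assemble_and_verify:
      stage: build
      variables:
        GRADLE_USER_HOME: "$CI_PROJECT_DIR/.gradle"
      cache:
        paths:
          - .gradle/caches
      script:
        - echo "== (1) resolving dependencies =="
        - ./gradlew dependencies
        - echo "== (2)-(3) compiling main and test code =="
        - ./gradlew classes testClasses
        - echo "== (4) unit tests =="
        - ./gradlew test
        - echo "== (5) integration tests =="
        - ./gradlew integrationTest
      artifacts:
        when: always
        paths:
          - "**/build/reports"
        expire_in: 1 week

It is nowhere near as nice as per-step indication in the UI, but it avoids the per-job container and Gradle daemon startup cost.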

How to deal with a very long process in TeamCity?

We want to introduce new tests that will be driven from TeamCity. The build part itself is reasonably fast but we expect the processes that follow to take a very long time (hours to days). A different machine will produce results and the results will be analysed. And of course we want to see the results at the end in TeamCity.
While we fully expect the long runtimes, we don't want to keep our TC server occupied for hours or days while it's waiting for the final results.
We see several basic options:
estimate the runtime and run a follow-up test at a predetermined time period
keep checking at regular intervals
run another build manually when the initial run is complete
How do you deal with a situation like this? What kind of considerations need to be taken into account? Did you try something like this and did it work (or not)?
Your three listed options sound reasonable, but you are missing important features of TeamCity that are at your disposal.
An alternative:
Set up a build that will run ONLY the very long process. Let's call it the Long_Build.
Have a second build, which does the analysis, depend on the success of the Long_Build. Let's call this one the Analysis_Build.
Set up an agent capable of running the Long_Build, and TeamCity will run that build only on that agent.
Have your QA build, main build, or whatever build, succeed IFF (if and only if) the Analysis_Build checks out. This build is specifically an information-gathering build that will likely check all of your other tests and everything your QA deployment depends on in order to call a changeset, or set of changesets, good enough for deployment.
My advice is to never run builds manually. Whatever criteria you would use to run a build manually can be scripted and run automatically. This way you are closer to a continuous integration process where you can guarantee quality.
Also, if you are not already doing it, make sure you label your source control with successful builds. Whether it is your Long_Build, Analysis_Build or QA build you should have a way to obtain successful builds that have passed a series of quality specs.
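If it helps to see the chain written out, here is a rough sketch in TeamCity's Kotlin DSL - the DSL is newer than this answer, and the build names, agent name and scripts are purely illustrative, not a drop-in configuration:

    import jetbrains.buildServer.configs.kotlin.v2019_2.*
    import jetbrains.buildServer.configs.kotlin.v2019_2.buildSteps.script

    version = "2019.2"

    project {
        buildType(LongBuild)
        buildType(AnalysisBuild)
    }

    // Runs ONLY the very long process, pinned to a dedicated agent.
    object LongBuild : BuildType({
        name = "Long_Build"
        steps {
            script { scriptContent = "./run_long_process.sh" }    // illustrative
        }
        requirements {
            equals("teamcity.agent.name", "long-process-agent")   // illustrative agent
        }
    })

    // Analyses the results; only starts once Long_Build has succeeded.
    object AnalysisBuild : BuildType({
        name = "Analysis_Build"
        steps {
            script { scriptContent = "./analyse_results.sh" }     // illustrative
        }
        dependencies {
            snapshot(LongBuild) {
                onDependencyFailure = FailureAction.FAIL_TO_START
            }
        }
    })

Your QA build would then take a snapshot dependency on Analysis_Build in the same way.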

How to prevent builds when another build fails?

I have created many build configurations in Hudson for a single solution (e.g. Release, Debug, Test).
When I commit something wrong, I receive one "build failed" e-mail for every build configuration.
I would like to receive a single e-mail.
I think that if I could make one build dependent on the success or failure of another, I would receive fewer e-mails.
How do I do that?
BTW: I use MSBuild, Subversion and NAnt.
It sounds like you have multiple jobs (build configurations) for the same set of source code that are configured to always build. You could, as someone else suggested, use build triggers to chain these jobs together. However, if all the jobs run on each commit, I suggest combining the jobs into a single job with multiple steps. That way when one step fails, the entire build will fail, no unnecessary Hudson cycles will be spent, and you will not receive redundant emails. To add steps to a build, click "Add build step" and select "Invoke Ant" (or whatever other action you want it to take).
You can use the build trigger "build after other projects are built" to put projects up- or downstream from one another. Then you typically let the lighter builds go first (like a simple compile).

TeamCity Multi-Part Build - How to checkout the code just once

I am trying to create one package with multiple build configurations. The first will check out the code, build it (Solution File configuration), and run NUnit tests. If that succeeds, another will then build in release mode. If that succeeds, a final script will package up the output and mark it as an artifact.
The problem I'm having is that I don't know how to tell TeamCity not to create new directories for each step, and as a result the steps are failing. Is there a setting for this? It seems like the dependencies tab would be an appropriate place to look, but I don't seem to understand the instructions, and my tinkering so far has been fruitless.
I basically skipped most of the TeamCity workflow and instead used a scripting language to handle all of this. (I used Rake and Albacore, which I highly recommend for .NET projects.)
I'd caution you not to use PowerShell with TeamCity. You have to wrap everything in a .bat file, which is fairly excruciating.
The result is that I have one checkout, and everything builds from that point. It has drastically cut down the amount of time required for the builds, though perhaps that wouldn't be the case if I had a lot of agents available.

Pre-build task - deleting the working copy in CruiseControl.NET

I'm currently in the process of setting up a continuous integration environment at work. We are using VisualSVN Server and CruiseControl.NET. Occasionally a build will fail, and a symptom is that there are conflicts in the CruiseControl.NET working copy. I believe this is due to the way I've set up the Visual Studio solutions. Hopefully the more projects we run in this environment, the better our understanding of how to set them up will be, so I'm not questioning why the conflicts happen at this stage. To fix the builds I delete the working copy and force a new build - this works every time (currently). So my questions are: is deleting the working copy a valid part of a continuous integration build process, and how do I go about it?
I've tried solutions including MSTask and calling delete from the command line but I'm not having any luck.
Sorry for being so wordy - good job this is a beta :)
Doing a full delete before or after your build is good practice. It means that there is no chance of your build environment picking up an out-of-date file - you're building exactly against what is in the repository.
Deleting the working copy is possible; I have done it with NAnt.
In NAnt I would have a clean script in its own folder, outside the one I want to delete, and would then invoke it from CC.NET.
I assume this should also be possible with a batch file. Take a look at the rmdir command http://www.computerhope.com/rmdirhlp.htm
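A minimal sketch of the kind of clean script I mean (the working-copy path is just a placeholder); the batch-file equivalent is a single rmdir /s /q <path> line:

    <?xml version="1.0"?>
    <!-- clean.build: keep this file outside the folder it deletes -->
    <project name="clean-working-copy" default="clean">
      <target name="clean" description="Delete the CC.NET working copy">
        <delete dir="C:\ccnet\work\MyProject" failonerror="false" />
      </target>
    </project>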
#pauldoo
I prefer my CI server to do a full delete, as I don't want any surprises when I go to do a release build, which should always be done from a clean state. But it should be able to handle both; there's no reason why not.
#jamie: There is one reason why you may not be able to do a clean build every time when using a continuous integration server - build time. On some projects I've worked on, clean builds take 80+ minutes (an embedded project consisting of thousands of C++ files to check out and then compile against multiple targets). In this case, you have to weigh the benefit of fast feedback against the likelihood that a clean build will catch something an incremental build won't. In our case, we worked on improving and parallelizing the build process while at the same time allowing incremental builds on our CI machine. We did have a few problems because we weren't doing clean builds, but by doing a clean build nightly or weekly you can remove the risk without losing the fast feedback of your CI machine.
If you check CC.NET's JIRA, there is a patch checked in that implements CleanCopy for Subversion, which does exactly what you want: just set CleanCopy to true inside your source control block, just like with the TFS one.
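With that patch applied, the Subversion source control block would look something like this sketch (the URL and path are placeholders):

    <sourcecontrol type="svn">
      <trunkUrl>http://svnserver/svn/MyProject/trunk</trunkUrl>
      <workingDirectory>C:\ccnet\work\MyProject</workingDirectory>
      <cleanCopy>true</cleanCopy>
    </sourcecontrol>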
It is very common, and generally good practice, for any build process to do a 'clean' before doing any significant build. This prevents any 'artifacts' from previous builds from tainting the output.
A clean is essentially what you are doing by deleting the working copy.
#Brad Barker
Clean means to just wipe out build products.
Deleting the working copy deletes everything else too (source and project files etc).
In general it's nice if your build machine can operate without doing a full delete, as this replicates what a normal developer does. Any conflicts it finds during update are an early warning of what your developers can expect.
#jamie
For formal releases, yes, it's better to do a completely clean checkout. So I guess it depends on the purpose of the build.