Using Jenkins with multiple C projects, ClearCase and shell/batch

BACKGROUND:
We are working on a few C projects, and the sources are in ClearCase. Each project has its own ClearCase project. My task is to put all of them into Jenkins. There is no continuous integration on these projects so far.
Some details, but very important ones:
The projects MUST be built on Windows AND Linux. Client's wishes.
The projects are developed in Eclipse, but with no Maven, Ant, etc. Only manual builds with shell and batch scripts. We already have some makefiles to execute, using MinGW and GCC.
The same goes for the future tests: they will be shell and batch scripts. They run in real time, so one test can take 3 minutes (an average of 15 tests per project).
Here we are.
I'm thinking of using a free-style project, with 2 slaves (Windows and Unix) for the builds. The duration of the tests is not really a problem for now; they should run during the night. So maybe using slaves for them too, I don't know yet.
What do you think? Any advice before I get my hands dirty?
Also, is it possible to have only one Jenkins project, with multiple sub-projects with distinct ClearCase sources (and jobs and sub-jobs on each of these projects)? (I was looking at MultiJob, matrix projects, multi-SCM, and "Jenkins: best way to build a project with sub projects", but couldn't understand or find the right method.)

You do not exactly "put a project into Jenkins".
You define Jenkins jobs (start with one per project; you can worry about pipelines later). Those jobs must access the sources of your project in order to populate a Jenkins build workspace and execute whatever you define as the job (in a freestyle job, that can be any script in any language).
You will need the ClearCase plugin (or the ClearCase UCM plugin if your projects are using UCM): that will allow your job to get the sources of the project.
Start by testing a job directly on the master (that node will be its own slave).
Once it is working, depending on the number of jobs and their frequency, you will scale by adding more slaves.
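For illustration, since Job DSL comes up later on this page, here is a minimal sketch of what two such freestyle jobs could look like; the job names, labels, and build commands are made-up assumptions, and the ClearCase checkout itself would be configured through the ClearCase plugin:

```groovy
// Hypothetical Job DSL sketch: one freestyle job per project and per OS.
// The ClearCase SCM configuration is omitted; the ClearCase plugin supplies it.
job('projectA-linux') {
    label('linux')                    // pin the job to the Linux slave
    steps {
        shell('make -f Makefile all') // run the existing makefile with GCC
    }
}

job('projectA-windows') {
    label('windows')                  // pin the job to the Windows slave
    steps {
        batchFile('build.bat')        // run the existing MinGW batch build script
    }
}
```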

Related

Is there a way to specify several Jenkinsfiles for the pipeline?

At the moment, we are generating Jenkins jobs using the Job DSL plugin. Typically, we have the following jobs per project:
CI build (SNAPSHOT build)
Deployment, one per stage
Integration test (nightly build)
Creation of a release
Reports (Maven site, during the night)
Am I right that there can be only one Jenkinsfile in the project's repository? How could I map our requirements onto the new Jenkins pipeline?
I'm asking because we're going to install version 2 of Jenkins, and I'm not sure whether we should abandon our Jenkins job generation and use Jenkinsfiles instead.
There are a couple of options, which might help you to migrate over to Jenkins pipelines. But you don't have to, especially not all at once.
You can use a shared library to define functions that can be used in multiple jobs, e.g. a buildOurThing function.
You can trigger an existing job (of whatever kind) using the build step. So you could model your new pipeline around existing jobs.
You can still use parameterized builds, if you want to use the deployment job with different targets.
Jenkins pipeline is really worth using, but maybe don't force yourself into an immediate switch. If Job DSL works for you, keep it. If you have new jobs (pipelines) to create, get familiar with pipelines then.
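To make those options concrete, here is a minimal Jenkinsfile sketch combining all three; `our-shared-lib`, `buildOurThing`, the `deploy-app` job name, and the `TARGET` parameter are all assumptions for illustration:

```groovy
// Hypothetical Jenkinsfile: a shared-library function plus reuse of an
// existing parameterized job via the 'build' step.
@Library('our-shared-lib') _  // loads vars/buildOurThing.groovy, among others

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                buildOurThing()  // function defined once in the shared library
            }
        }
        stage('Deploy') {
            steps {
                // Trigger the existing deployment job with a stage-specific target
                build job: 'deploy-app',
                      parameters: [string(name: 'TARGET', value: 'integration')]
            }
        }
    }
}
```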

TeamCity: trigger an entire project build with a single click and a common build number

I'm fairly new to TeamCity (but not to CI systems) and I've been trying to figure out how to work this configuration out:
I have the latest TeamCity Professional, version 9.1.3 (3 build agents, 20 configs), installed
Here's my TeamCity Project layout:
Project A
-- Build Product X (WIN)
-- Build Product Y (WIN)
-- Build Product Z (Linux)
I have dedicated 3 agents to build the above build configurations accordingly - 2 on Windows and 1 agent on Linux
WIN products are built using a mix of batch, PowerShell and MSBuild scripts
The Linux product is built using a shell script
Triggering these 3 builds (under Project A), manually, works just fine. However, this is not feasible as we have many feature branches and all of them would have similar build configurations - 3 clicks for each build + setting Build Parameters on each Build configuration is expensive.
So, Here are my questions:
Is there a way to trigger the entire project to build with a single click? Doing so should run these builds in parallel.
If 1 is possible, how do I set the same build number (build params) across these 3 build configurations?
Is it possible to set up a VCS trigger that would poll for changes on any of the repos that build these and trigger the entire project (provided 1 is possible)?
Please note that I tried configuring both snapshot and artifact dependencies to make this work, but creating dependencies only pauses the other build configurations until the dependent one is complete, and that's not feasible for us: they need to run in parallel. (Our builds take approx. 45 minutes to complete; yes, we have a huge product to package.)
I'll be grateful for any pointers in the right direction
Thanks
Yes, one-click triggering of parallel runs is achievable: create a 4th (or 0th) build configuration that does nothing (no build steps). Let's call this config "Zero". Your three build configs will each have a "Finished build" trigger on Zero (trigger when Zero finishes) as well as a snapshot dependency on Zero.
The best bit: you only need to define the common/shared parameters in Zero and the other three configs can reuse these. For example, if you define %foo% in Zero, the other three can all use %dep.MyProject_Zero.foo%. This also means you can get at Zero's build number: %dep.MyProject_Zero.build.number%. In each of your three build configs, switch to the "General Settings" and set your "Build number format" to the above.
For VCS triggering, just set Zero to span all three VCS areas. You're suggesting they are each in a different repo; I have no experience with that, but I assume that Zero can attach all three VCS roots.
Last, if you use feature branches, and your VCS is git, mercurial, or Perforce Streams, make sure you've read about TeamCity feature branches support; it can save you a lot of time!

How to split a big Jenkins job/project into smaller jobs without compromising functionality?

We're trying to improve our Jenkins setup. So far we have two directories: /plugins and /tests.
Our project is a multi-module project of Eclipse Plugins. The test plugins in the /tests folder are fragment projects dependent on their corresponding productive code plugins in /plugins.
Until now, we had just one Jenkins job which checked out both /plugins and /tests, built all of them and produced the Surefire results etc.
We're now thinking about splitting the project into smaller jobs corresponding to features we provide. It seems that the way we tried to do it is suboptimal.
We tried the following:
We created a job for the core feature. This job checks out the whole /plugins and /tests directories and only builds the plugins the feature is comprised of. This job has a separate pom.xml which defines the core artifact and lists the modules contained in the feature.
We created a separate job for the tests that should be run on the feature plugins. This job uses the cloned workspace from the core job. This job is to be run after the core feature is built.
I somehow think this is less than optimal.
For instance, only the core job can update the checked out files. If only the tests are updated, the core feature does not need to be built again, but it will be.
As soon as I have a feature which is dependent on the core feature, this feature would either need to use a clone of the core feature workspace or check out its own copy of /plugins and /tests, which would lead to bloat.
Using a cloned workspace, I can't update my sources. So when I have a feature depending on another feature, I can only run its job once the core feature has been updated and built.
I think I'm missing some basic stuff here. Can someone help? There definitely is an easier way for this.
EDIT: I'll try to formulate what I think would ideally happen if everything works:
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check whether the corresponding job has changes)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
Finally, the project job should
do a nightly build
check out all sources from /plugins and /tests
build all, test all, send results to Sonar
Additionally, it would be neat if the nightly build was unnecessary because the builds and test results of the projects' features would be combined in the project job results.
Is something like this possible?
Starting from the end of the question. I would keep a separate nightly job that does a clean check-out (gets rid of any generated stuff before check-out), builds everything from scratch, and runs all tests. If you aren't doing a clean build, you can't guarantee that what is checked into your repository really builds.
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check whether the corresponding job has changes)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
[I am assuming that by "dependent features" in 1 you mean the things needed by the "feature" in 2.]
To do this, I would say that you have multiple jobs.
a job for every individual feature and every dependent feature that simply builds that feature. The jobs should be started by SCM changes for the (dependent) feature.
I wouldn't keep test jobs separate from compile jobs. That allows the possibility that successfully compiled code is never tested. Instead, I would rely on the fact that when a build step fails in Jenkins, it normally aborts further build steps.
The trick is going to be in how you thread all of these together.
Let's say we have a feature and its build job, called F1, which is built on 2 dependent features, DF1.1 and DF1.2, each with their own build jobs.
Both DF1.1 and DF1.2 should be configured to trigger the build of F1.
F1 should be configured to get the artifacts it needs from the latest successful DF1.1 and DF1.2 builds. Unfortunately, the very nice "Clone SCM" plugin is not going to be of much help here as it only pulls from one previous job. Perhaps one of the artifact publisher plugins might be useful, or you may need to add some custom build steps to put/get artifacts.
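As a hedged sketch of that wiring in Job DSL (job names, scripts, and artifact patterns are made up; the copyArtifacts step assumes the Copy Artifact plugin is installed):

```groovy
// Hypothetical: each dependent-feature job archives its artifacts and
// triggers F1; F1 copies the latest successful artifacts before building.
['DF1.1', 'DF1.2'].each { name ->
    job(name) {
        triggers { scm('H/5 * * * *') }      // start on SCM changes
        steps { shell('./build.sh') }        // compile just this feature
        publishers {
            archiveArtifacts('build/*.jar')  // expose artifacts to F1
            downstream('F1', 'SUCCESS')      // trigger F1 on success
        }
    }
}

job('F1') {
    steps {
        ['DF1.1', 'DF1.2'].each { dep ->
            copyArtifacts(dep) {             // fetch what F1 needs to build
                buildSelector { latestSuccessful() }
            }
        }
        shell('./build.sh && ./run-tests.sh') // compile and test in one job
    }
}
```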

TeamCity - non-trivial build sequence, please advise

I am tasked to improve quality and implement TeamCity for continuous integration. My experience with TeamCity is very limited - I use mostly TFS myself and have some experience with CC.NET.
A lot should happen within the build process... actually, the build is already split into three different configurations that run one after the other.
My main problem is that in each of those I would actually need to start multiple runners. For example, the first build step shall consist of:
The generation of new AssemblyInfo.cs files for consistent assembly numbering
The actual compilation
A partial unit test run (all tests that run fast and check core functionality)
An FxCop run
A StyleCop run
The current version of TeamCity only allows configuring one runner per build configuration... which leaves me stuck on a lot of things.
How would you approach this? My current idea is to use the MSBuild runner for everything and basically start my own MSBuild-based script which then does all of this, pretty much the way TFS handles it (and the same way I did things back in the CC.NET days with my own NAnt build script).
A further problem is how to present statistical information, for example from unit tests running in different stages (build configurations). We have some tests further down the line that take a while to run, and we want those to run in a 2nd or 3rd step (the last one, for example, tests database generation code which, including loading base data, takes about 15+ minutes to run). On the other hand, we would really like the test results to be consolidated somehow.
Anyone any ideas?
Thanks.
TeamCity 6.0 allows multiple build steps for a single build configuration. Isn't that what you're looking for?
You'll need to script this out, at least parts of it. TeamCity provides some nice UI based config for some of your needs, but not all. Here's my suggestion:
Create an msbuild script to handle your first two bullet points, AssemblyInfo generation and compilation. Configure the msbuild runner to run your script, and to run your tests. Collect your assemblies as artifacts.
Create a second build configuration for FxCop. Trigger it from the first build. Give it an 'artifact dependency' on the first build, which is how it gets a hold of your dlls.
For StyleCop, TC doesn't support it out of the box like it does FxCop. Add it to your msbuild script manually, and have it produce an html report (which TeamCity can then display).
You need to take a look at the Dependencies functionality in TeamCity. This feature allows you to create a sequence of build configurations. In other words, you need to create a build configuration for each step and then link them all as dependencies.
For consolidating test results, please take a look at Artifact Dependencies. It might help.

What is the best way to setup an integration testing server?

Setting up an integration server, I'm unsure about the best approach to using multiple tasks to complete the build. Is it best to put everything in one big job, or to make small dependent ones?
You definitely want to break up the tasks. Here is a nice example of CruiseControl.NET configuration that has different targets (tasks) for each step. It also uses a common.build file which can be shared among projects with little customization.
http://code.google.com/p/dot-net-reference-app/source/browse/#svn/trunk
I use TeamCity with an nant build script. TeamCity makes it easy to setup the CI server part, and nant build script makes it easy to do a number of tasks as far as report generation is concerned.
Here is an article I wrote about using CI with CruiseControl.NET; it has a NAnt build script in the comments that can be re-used across projects:
Continuous Integration with CruiseControl
The approach I favour is the following setup (actually assuming you are in a .NET project):
CruiseControl.NET.
NAnt tasks for each individual step. NAntContrib for alternative CC templates.
NUnit to run unit tests.
NCover to perform code coverage.
FxCop for static analysis reports.
Subversion for source control.
CCTray or similar on all dev boxes to get notification of builds and failures etc.
On many projects you find that there are different levels of tests and activities which take place when someone does a checkin. Sometimes these grow to the point where it can be a long time after a checkin before a dev can see whether they have broken the build.
What I do in these cases is create three builds (or maybe two), as sketched after this list:
A CI build is triggered by checkin and does a clean SVN Get, Build and runs lightweight tests. Ideally you can keep this down to minutes or less.
A more comprehensive build, which could run hourly (if there are changes), doing the same as the CI build but running more comprehensive and time-consuming tests.
An overnight build which does everything and also runs code coverage and static analysis of the assemblies and runs any deployment steps to build daily MSI packages etc.
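For illustration only (this answer is about CruiseControl.NET), the same three-tier trigger layout might be sketched in Jenkins Job DSL like this; the job names and build scripts are placeholders:

```groovy
// Hypothetical three-tier layout: fast CI, hourly, and nightly builds.
job('ci-build') {
    triggers { scm('H/5 * * * *') }  // poll SCM; build on every checkin
    steps { shell('./build.sh quick-tests') }
}

job('hourly-build') {
    triggers { scm('@hourly') }      // hourly, only if there are changes
    steps { shell('./build.sh full-tests') }
}

job('nightly-build') {
    triggers { cron('H 2 * * *') }   // every night, regardless of changes
    steps { shell('./build.sh full-tests coverage static-analysis package') }
}
```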
The key thing about any CI system is that it needs to be organic and constantly tweaked. There are some great extensions to CruiseControl.NET which log and chart build timings etc. for the steps and let you do historical analysis, and so allow you to continuously tweak the builds to keep them snappy. It's something that managers find hard to accept, but a build box will probably keep you busy for a fifth of your working time just to stop it grinding to a halt.
We use buildbot, with the build broken down into discrete steps. There is a balance to be found between having build steps be broken down with enough granularity and being a complete unit.
For example at my current position, we build the sub-pieces for each of our platforms (Mac, Linux, Windows) on their respective platforms. We then have a single step (with a few sub steps) that compiles them into the final version that will end up in the final distributions.
If something goes wrong in any of those steps it is pretty easy to diagnose.
My advice is to write the steps out on a whiteboard in as vague terms as you can and then base your steps on that. In my case that would be:
Build Plugin Pieces
Compile for Mac
Compile for PC
Compile for Linux
Make final Plugins
Run Plugin tests
Build intermediate IDE (We have to bootstrap building)
Build final IDE
Run IDE tests
I would definitely break down the jobs. Chances are you'll make changes to the builds, and it'll be easier to track down issues if you have smaller tasks instead of searching through one monolithic build.
You should be able to create one big job from the smaller pieces, anyways.
G'day,
As you're talking about integration testing, my big (obvious) tip would be to make the test server's build and configuration as close as possible to the deployment environment.
</thebloodyobvious> (-:
cheers,
Rob
Break your tasks up into discrete goal/operations, then use a higher-level script to tie them all together appropriately.
This makes your build process easier to understand for other people (you're documenting as you go so anyone on your team can pick it up, right?), as well as increasing the potential for re-use. It's likely you won't reuse the high-level scripts (although this could be possible if you have similar projects), but you can definitely reuse (even if it's copy/paste) the discrete operations rather easily.
Consider the example of getting the latest source from your repository. You'll want to group the tasks/operations for retrieving the code with some logging statements and reference the appropriate account information. This is the sort of thing that's very easy to reuse from one project to the next.
For my team's environment, we use NAnt since it provides a common scripting environment between dev machines (where we write/debug the scripts) and the CI server (since we just execute the same scripts in a clean environment). We use Jenkins to manage our builds, but at their core each project is just calling into the same NAnt scripts, and then we manipulate the results (i.e., archive the build output, flag failing tests, etc.).
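As a rough sketch of that pattern (the NAnt target name, paths, and report locations are assumptions), a thin Jenkins pipeline might just call the shared script and then handle the results:

```groovy
// Hypothetical thin wrapper: the job delegates to the shared NAnt script,
// then archives the build output and reports test results.
pipeline {
    agent { label 'windows' }  // assumed: builds run on a Windows agent
    stages {
        stage('Build') {
            steps {
                bat 'nant -buildfile:common.build package'  // same script devs run locally
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'build/output/**'  // keep the build output
            junit 'build/reports/*.xml'                    // flag failing tests
        }
    }
}
```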
