How to minimize build-server project-specific configuration? - maven

My problem is having too much complex, project-specific configuration logic inside the Jenkins job definitions; over time this has become harder and harder to deal with. It also prevents you from easily executing the build jobs under other build/CI tools.
If these projects were Java based, anyone would probably tell me to use Maven, as I could put most of this inside the pom.xml files and keep it with the project. However, in my case it is mostly C/C++ or even .NET projects, for which all the build scripts are usually written in bash (Cygwin being a dependency on Windows).
I know that, in theory, I could move the logic that now lives in the Jenkins job configuration into those bash files, but this would clearly require significant effort, and it would be really hard to tune the scripts so that individual steps can be enabled or disabled based on external conditions.
So what I am trying to achieve is a high degree of independence from the build system, so that I could switch to another one in the future if I wanted to.
What would you recommend as a solution for that? Obviously I need something cross-platform that is not tied to a specific build system.
Does it make sense to use Maven for this, even though these projects are not Java ones? Personally, I am not a big fan of XML configuration files; YAML, JSON and INI strike me as friendlier.
What kind of extra logic living in the Jenkins configuration are we talking about?
One example would be deployment, as I want to be able to deploy to Nexus or similar repositories; others are executing tests, collecting code coverage, and maybe posting the results somewhere.
As a side note, looking at Travis configuration files makes me wonder why Jenkins didn't go for such an approach.

Look at Groovy. Jenkins allows direct Groovy code to manipulate almost everything. A Groovy script can take care of all the project-specific configuration, and it can even be checked in together with the source code. Then, in the Jenkins job, you have just a single build step that calls the Groovy script.
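As a minimal sketch of such a checked-in script (the script name, the commands it runs, and the environment-variable toggles are all hypothetical):

```groovy
// build.groovy -- checked in at the repository root (hypothetical layout).
// Step toggles come from environment variables set in the Jenkins job.
def run(String cmd) {
    def proc = ['bash', '-c', cmd].execute()
    proc.waitForProcessOutput(System.out, System.err)
    if (proc.exitValue() != 0) {
        throw new RuntimeException("Step failed: ${cmd}")
    }
}

run('./configure && make')                  // always build
if (System.getenv('RUN_TESTS') != 'false') {
    run('make test')                        // tests, skippable per job
}
if (System.getenv('DEPLOY') == 'true') {
    run('scripts/deploy-to-nexus.sh')       // hypothetical deploy step
}
```

The Jenkins job then needs only a single build step that runs `groovy build.groovy`, with the toggles set per job.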
The above suggestion, however, is very Jenkins dependent.
Another possibility is an Ant script. The AntExec plugin allows you to execute an Ant script, along with ant-contrib if needed, using the same tools-installation process that the rest of Jenkins uses. Therefore you don't need to worry about Ant being installed on the node: Jenkins will take care of it on demand.
The benefit of an Ant script is that it isn't tied to Java concepts the way Maven is, it's cross-platform (Windows and Linux), and, just like the Groovy script example above, it can be checked in along with the rest of the source code.
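A bare-bones build.xml along those lines might look like this (a sketch only; the target names and the shell scripts it calls out to are hypothetical):

```xml
<!-- build.xml: delegates the real work to the project's existing bash scripts -->
<project name="myproject" default="build" basedir=".">
    <property environment="env"/>

    <target name="build">
        <exec executable="bash" failonerror="true">
            <arg value="scripts/build.sh"/>
        </exec>
    </target>

    <!-- skipped entirely when the job sets the SKIP_TESTS environment variable -->
    <target name="test" depends="build" unless="env.SKIP_TESTS">
        <exec executable="bash" failonerror="true">
            <arg value="scripts/run-tests.sh"/>
        </exec>
    </target>
</project>
```

The Jenkins job then just invokes the `build` or `test` target through AntExec, and the same targets work from any other CI tool that can run Ant.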

Related

Approach for using GitLab-CI for complex builds

I'm new to continuous integration. I'm interested in systems that can test whether changes I make to the code break its compilation across a list of different build types.
Properties of the code (which I will call CodeA):
1.) It depends on numerical libraries such as SUNDIALS and PETSc.
2.) It depends on two other codes (CodeB and CodeC), which themselves depend on libraries such as HDF5, MPI, etc.
Is it feasible to use GitLab's CI feature to set up a system that can build CodeA (linked with CodeB and CodeC) on Linux machines with different system flavors (Ubuntu, openSUSE, RHEL, Fedora, etc.)?
Most of the examples I've found of using GitLab for CI are things like testing whether HelloWorld.cpp still compiles when lines are changed in it. Just simple builds with very little external dependency management/integration.
So it sounds like you've got a few really great questions in here. I'll break them apart as I see them and you let me know if this fully answers your question.
How can I build on different flavors of Linux?
The approach I would take would be to use Docker files, as Connor Shea mentioned in the comment. This allows you to continue using generic build agents in your CI system while testing across multiple platforms.
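As a rough sketch, a .gitlab-ci.yml along those lines might look like the following (the image tags, the dependency-install helper, and the CMake-style build are all assumptions):

```yaml
stages:
  - build

# shared recipe; each job below runs it inside a different distro's container
.build_template: &build
  stage: build
  script:
    - ./ci/install-deps.sh          # hypothetical helper: installs HDF5, MPI, PETSc, SUNDIALS
    - mkdir -p build && cd build
    - cmake .. && make -j"$(nproc)"

build:ubuntu:
  <<: *build
  image: ubuntu:22.04

build:fedora:
  <<: *build
  image: fedora:39

build:opensuse:
  <<: *build
  image: opensuse/leap:15.5
```

Each job runs the same recipe inside a different distribution's image, so a breakage on one flavor shows up as a single failing job in the pipeline.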
Another option would be to look at how you're distributing your application and see if you could use a snap package. That would save you from having to worry about the environment you're deploying to.
How do I deal with dependencies?
This is where it's really useful to consider an artifact repository. Both JFrog's Artifactory and Sonatype's Nexus work wonders here. They allow you to hook up the build pipeline for any app or library and push an artifact that the others can consume, and they can be locked down with a set of credentials that you supply to your build.
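For example, a hypothetical CI job could push a packaged build to a Nexus raw repository with nothing more than curl (the repository URL, artifact path, and credential variable names are all made up here; the credentials would live in protected CI variables):

```yaml
publish:
  stage: deploy
  script:
    # upload the packaged build to a raw Nexus repository (hypothetical URL and path)
    - >
      curl --fail -u "$NEXUS_USER:$NEXUS_PASS"
      --upload-file build/codea.tar.gz
      "https://nexus.example.com/repository/raw-builds/codea/codea-$CI_COMMIT_SHORT_SHA.tar.gz"
```

Downstream builds of CodeB and CodeC can then fetch that artifact by URL instead of rebuilding CodeA from scratch.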
I hope this helped.

IDE-independent master build for Continuous Integration in a JEE6 project

SHORT VERSION:
In the context of a JEE6 application: what's the best way to set up an IDE-independent master build process on a dedicated build server (for CI / integration testing), while still using an IDE at the developer workstations? Which Java IDE is most suitable for such an approach?
LONG VERSION:
We are building a JSF2/PrimeFaces/EJB3.1 JEE6 application to be deployed on a JBoss application server. We need the EAR, WAR and JAR artefacts to be generated on a dedicated machine using an Ant script and subjected to integration testing in a CI fashion (perhaps using Jenkins).
It is also a requirement that the team use a Java IDE (one kind only, no extra worries there). As a result, developers will produce their local artefacts for their own testing and development using their own, locally installed IDE. However, I want the main build script on the build server to be independent of any IDE. If the IDE used by the team cannot accept and integrate closely with an externally provided Ant script, this can lead to a situation where the main Ant build script diverges and has to be maintained and evolved independently. So ideally, I would like the same hand-crafted build script to be used both by the developers' IDEs and by the build server. In such a setting, if a developer sees the need to modify the Ant script, the modification reaches the CI server through git/svn and is used for subsequent builds. I am stressing "hand-crafted" since, in NetBeans' case for example, I don't want to use the Ant build script created automatically by the IDE. See the mind-boggling minutiae one has to deal with in this approach (and which I would rather avoid), here.
Therefore I would like to know which of the major Java IDEs (Eclipse, NetBeans, IntelliJ IDEA, other?) is, in your opinion, most amenable to accepting an externally provided Ant script as the "project definition" and integrating closely with it (in terms of auto-complete, debugging, addition of libraries, etc.). The discussion in this SO article seems relevant, but it isn't quite the same situation, as my team won't be using two different IDEs. Finally, I understand that Maven might provide a solution, since NetBeans can use Maven-based projects where the Maven pom.xml is the project file and one doesn't have to deal with other IDE-specific artefacts; however, there is enough Maven FUD on the web (I don't cite sources, as I don't want to sidetrack the discussion) that I don't feel comfortable using it for a major undertaking without significant prior exposure to it.

Build system for multi-language project

I am getting ready to embark on a project mainly for experimenting with languages, but also with a hint of usefulness. It will consist of a server-application, written in Erlang, and client-libraries in a number of languages. Initially I will want to write clients in Java, Ruby and Python. The actual protocol for communication will be Thrift.
I'm looking for a build system that will allow me to build the server and all the client libraries in one go, running unit tests in each language, then packaging up a releasable artifact of some sort in whatever way is the "standard" for each language.
That means a JAR for Java, a RubyGem, and a distribute/setuptools tarball for Python. Erlang probably has something too, but I'm not yet familiar with that. It should also be able to run the Thrift compiler to generate the various Thrift stubs in each language.
The first candidate on the pad is Maven. I'm fairly certain Maven can do everything I need, but I fear it's too Java-centric and would leave me with a ton of work for every new language I need to add.
Well, one should first know what the requirements are for creating a deliverable artifact in each language.
If copying files from here to there and zipping them up is enough, Maven could cover most of the scripting languages.
But you may end up writing plug-ins to support custom packaging (which is not that complicated, so if no existing build system fits, that may be a good choice).
Forcing Maven on every language's build system might not suit them all. So maybe use the specific build tools available for each language, wrap them in a simple script, and execute them from a continuous integration server (like Bamboo, Jenkins/Hudson, TeamCity, ...) in a specific order (to "fake" the dependencies)?
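As a very rough sketch of that wrapper idea (the script name, commands, and paths are all hypothetical; any failing step aborts the whole chain):

```groovy
// build-all.groovy -- hypothetical wrapper: runs each language's native build
// tool in dependency order, server first, then the clients.
def steps = [
    'erlang server' : 'rebar compile && rebar eunit',
    'java client'   : 'mvn -f clients/java/pom.xml package',
    'ruby client'   : 'gem build clients/ruby/client.gemspec',
    'python client' : 'python clients/python/setup.py sdist',
]

steps.each { name, cmd ->
    println ">>> ${name}: ${cmd}"
    def proc = ['bash', '-c', cmd].execute()
    proc.waitForProcessOutput(System.out, System.err)
    assert proc.exitValue() == 0 : "${name} failed"
}
```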
I'm not aware of a cross-language system.
Gradle might be more flexible, as its approach is more script-oriented.
And there is http://eclipse.org/buckminster/ - just for completeness (I haven't looked at it for quite some time).
Regards,
Werner
"It will consist of a server-application, written in Erlang,
and client-libraries in a number of languages.
Initially I will want to write clients in Java, Ruby and Python. "
Maven is good if you follow its way. It is really "my way or the highway".
See: http://community.jboss.org/wiki/MavenVsGradle
For a lot of standard Java projects it is actually very good. But if you need to do other things, it becomes fairly complicated very quickly.
From your description, your project is on its way to becoming complicated very quickly.
I suggest looking at Gant (Groovy + Ant) and Gradle. You can call other scripts from Gant and/or Gradle. Jython and JRuby will be your friends. Ant has a lot of tasks that will be very useful.
I have successfully implemented a complicated Java/C++/C build using Gant. Groovy scripting is powerful and easy to use. Gradle is similar, and in some ways more powerful than Gant.
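To illustrate, a Gradle build could wire the Thrift generation and the per-language packaging together roughly like this (the task names, paths, and commands are hypothetical):

```groovy
// build.gradle -- a rough sketch of orchestrating the multi-language build
task generateThrift(type: Exec) {
    // generate Java, Python and Ruby stubs from the shared Thrift IDL
    commandLine 'thrift', '--gen', 'java', '--gen', 'py', '--gen', 'rb',
                '-out', 'generated', 'protocol/service.thrift'
}

task buildRubyGem(type: Exec) {
    dependsOn generateThrift
    commandLine 'gem', 'build', 'clients/ruby/client.gemspec'
}

task buildPythonDist(type: Exec) {
    dependsOn generateThrift
    commandLine 'python', 'clients/python/setup.py', 'sdist'
}

// one entry point that fans out to every per-language artifact
task buildAll {
    dependsOn buildRubyGem, buildPythonDist
}
```

Running `gradle buildAll` then produces every artifact in one go, with the stub generation shared between them.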

TeamCity Multi-Part Build - How to checkout the code just once

I am trying to create one package with multiple build configurations. The first will check out the code, build it (Solution File configuration), and run NUnit tests. If that succeeds, another will build in release mode. If that succeeds, a final script will package up the output and mark it as an artifact.
The problem I'm having is that I don't know how to tell TeamCity not to create new directories for each step, and as a result the steps are failing. Is there a setting for this? The Dependencies tab seems like the appropriate place to look, but I don't seem to understand the instructions, and my tinkering so far has been fruitless.
I basically skipped most of the TeamCity workflow and instead used a scripting language to handle all of this. (I used Rake and Albacore, which I highly recommend for .NET projects.)
I'd caution you against using PowerShell with TeamCity. You have to wrap everything in a .bat file, which is fairly excruciating.
The result is that I have one checkout, and everything builds from that point. It has drastically cut down the time the builds require, though perhaps that wouldn't be the case if I had a lot of agents available.

What is the best way to setup an integration testing server?

Setting up an integration server, I'm unsure about the best approach to using multiple tasks to complete the build. Is it better to set everything up as one big job, or as small dependent ones?
You definitely want to break up the tasks. Here is a nice example of CruiseControl.NET configuration that has different targets (tasks) for each step. It also uses a common.build file which can be shared among projects with little customization.
http://code.google.com/p/dot-net-reference-app/source/browse/#svn/trunk
I use TeamCity with a NAnt build script. TeamCity makes it easy to set up the CI server part, and the NAnt build script makes it easy to handle a number of tasks as far as report generation is concerned.
Here is an article I wrote about using CI with CruiseControl.NET, it has a nant build script in the comments that can be re-used across projects:
Continuous Integration with CruiseControl
The approach I favour is the following setup (actually assuming you are on a .NET project):
CruiseControl.NET.
NAnt tasks for each individual step. NAntContrib for alternative CC templates.
NUnit to run unit tests.
NCover to perform code coverage.
FxCop for static analysis reports.
Subversion for source control.
CCTray or similar on all dev boxes to get notification of builds and failures etc.
On many projects you find that there are different levels of tests and activities which take place when someone does a checkin. Sometimes these can grow to the point where it is a long time after a checkin before a dev can see whether they have broken the build.
What I do in these cases is create three builds (or maybe two); a rough trigger sketch follows the list:
A CI build, triggered by checkin, which does a clean SVN get and build and runs lightweight tests. Ideally you can keep this down to minutes or less.
A more comprehensive build, which could run hourly (if there are changes) and does the same as the CI build but runs more comprehensive and time-consuming tests.
An overnight build which does everything, also runs code coverage and static analysis of the assemblies, and runs any deployment steps to build daily MSI packages etc.
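In CruiseControl.NET terms, the different cadences are just different trigger blocks on separate projects; roughly like this (project names are hypothetical, and the source-control and task sections are omitted from the sketch):

```xml
<!-- ccnet.config fragment: one project per build tier -->
<project name="MyApp-CI">
  <triggers>
    <!-- poll source control every 60 seconds; build only when something changed -->
    <intervalTrigger seconds="60" buildCondition="IfModificationExists" />
  </triggers>
</project>

<project name="MyApp-Nightly">
  <triggers>
    <!-- force the full build every night at 02:00 -->
    <scheduleTrigger time="02:00" buildCondition="ForceBuild" />
  </triggers>
</project>
```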
The key thing about any CI system is that it needs to be organic and constantly tweaked. There are some great extensions to CruiseControl.NET which log and chart build timings etc. for the steps, letting you do historical analysis and continuously tweak the builds to keep them snappy. Managers find it hard to accept that a build box will probably keep you busy for a fifth of your working time just to stop it grinding to a halt.
We use buildbot, with the build broken down into discrete steps. There is a balance to be found between breaking the build steps down with enough granularity and keeping each step a complete unit.
For example, at my current position we build the sub-pieces for each of our platforms (Mac, Linux, Windows) on their respective platforms. We then have a single step (with a few sub-steps) that compiles them into the final version that will end up in the final distributions.
If something goes wrong in any of those steps it is pretty easy to diagnose.
My advice is to write the steps out on a whiteboard in as vague terms as you can and then base your steps on that. In my case that would be:
Build Plugin Pieces
Compile for Mac
Compile for PC
Compile for Linux
Make final Plugins
Run Plugin tests
Build intermediate IDE (We have to bootstrap building)
Build final IDE
Run IDE tests
I would definitely break down the jobs. Chances are you'll make changes to the builds, and it'll be easier to track down issues with smaller tasks than by searching through one monolithic build.
You should be able to create one big job from the smaller pieces anyway.
G'day,
As you're talking about integration testing, my big (obvious) tip would be to make the test server's build and configuration as close as possible to the deployment environment.
</thebloodyobvious> (-:
cheers,
Rob
Break your tasks up into discrete goal/operations, then use a higher-level script to tie them all together appropriately.
This makes your build process easier to understand for other people (you're documenting as you go so anyone on your team can pick it up, right?), as well as increasing the potential for re-use. It's likely you won't reuse the high-level scripts (although this could be possible if you have similar projects), but you can definitely reuse (even if it's copy/paste) the discrete operations rather easily.
Consider the example of getting the latest source from your repository. You'll want to group the tasks/operations for retrieving the code with some logging statements and reference the appropriate account information. This is the sort of thing that's very easy to reuse from one project to the next (see the sketch at the end of this answer).
For my team's environment, we use NAnt, since it provides a common scripting environment between dev machines (where we write and debug the scripts) and the CI server (where we just execute the same scripts in a clean environment). We use Jenkins to manage our builds, but at their core each project is just calling into the same NAnt scripts, and then we manipulate the results (i.e., archive the build output, flag failing tests, etc.).
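Picking up the "get latest source" example above, a reusable NAnt target of that sort might look like this (a sketch only; the property names are hypothetical and would be set by each calling project):

```xml
<!-- common.build fragment: a discrete, reusable "get latest source" operation -->
<target name="get.latest">
    <echo message="Updating ${src.dir} from ${svn.url}" />
    <exec program="svn" failonerror="true">
        <arg value="checkout" />
        <arg value="${svn.url}" />
        <arg value="${src.dir}" />
    </exec>
    <echo message="Checkout of ${svn.url} complete" />
</target>
```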
