I am the configuration manager for an IT firm. Currently we are using the Anthill build management server for all our build-related purposes. We are looking to implement Continuous Integration in our development life cycle.
Currently the build process is done manually. Suppose there are 5 projects, A, B, C, D, and E, where E is the parent project and the dependency chain goes like this:
A->B->C->D->E
What we do is build A first, update project.xml of B to the latest version of A, build B, and so on and so forth until all dependent projects are built and finally the parent project gets built.
What I am thinking is automating the entire process, i.e. automatically finding out the dependencies, building them first, and then updating the versions in the parent projects and building them to a newer version.
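To make the idea concrete, here's a rough Kotlin sketch of what I have in mind; ./build.sh and ./set-dependency-version.sh are hypothetical stand-ins for whatever actually builds a project and edits its project.xml:

```kotlin
import java.io.File

// Walk the dependency chain in order: build each project, then point its
// dependent's project.xml at the version just built.
val chain = listOf("A", "B", "C", "D", "E") // A -> B -> C -> D -> E

fun run(dir: String, vararg cmd: String) {
    val exit = ProcessBuilder(*cmd).directory(File(dir)).inheritIO().start().waitFor()
    check(exit == 0) { "command failed in $dir: ${cmd.joinToString(" ")}" }
}

fun main() {
    for ((i, project) in chain.withIndex()) {
        run(project, "./build.sh")                              // hypothetical build entry point
        val dependent = chain.getOrNull(i + 1) ?: break
        run(dependent, "./set-dependency-version.sh", project)  // hypothetical project.xml updater
    }
}
```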
Would Continuum do this for me? If not, is there any other CI tool that does this?
Hudson does this really well. If you're using Maven, it'll even figure out the build dependencies for you automatically after the first build; otherwise you can define the build dependencies manually. I.e., it lets you configure the system to build project B after a successful project A build.
I'm not sure if it matters to you, but Hudson is also open source.
If not, is there any other CI tool that does this?
I like TeamCity, which does pretty much everything you'll need. With the latest version (and a plugin from JetBrains), there's even Git support.
On the other hand, any continuous integration system should handle dependencies easily.
We use Zed Builds and Bugs for a setup similar to this. We have a master project that has sub-project dependencies and the build system handles everything in the proper order.
We also have very small, tight builds for the sub-projects so that each of them can be built when the developers commit to source control. The Zed Server is capable of pulling the latest artifacts from these small builds and putting them together into larger builds, but we haven't yet used that feature.
Our check-ins trigger the small CI builds, and then twice per day the entire application is re-built from scratch, following the dependency chain.
I'd agree with OregonGhost, though: any CI system should be able to set up this type of chain.
I don't think you need a CI tool for this. Try to automate this using a build script, and use Continuum (or any other CI tool) to trigger your preferred build tool.
I have a hard time understanding when to use composite builds vs. multi-module builds. It seems both can be used to achieve similar things.
Are there still valid use cases for multi-module builds?
In my opinion, a multi-module build is a single system which is built and released together. Every module in the build should have the same version, and the modules are likely developed by the same team and committed to a single repository (Git/SVN, etc.).
I think that a composite build is for development only, for times when a developer is working on two or more systems (likely in different repositories, with different release cycles/versions), e.g.:
Developing a patch for an open source library whilst validating the changes in another system
Tweaking a utility library in a separate in-house repository (perhaps shared by multiple teams) whilst validating the changes in another system
A fix/improvement that spans two or more systems (likely in separate repos)
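To make the distinction concrete, here is a minimal settings.gradle.kts sketch (module and library names are hypothetical):

```kotlin
// settings.gradle.kts of a multi-module build: one system, one version,
// all modules in one repository, built and released together.
rootProject.name = "my-system"
include("core", "web", "cli")

// During development only, the same system can pull in a separately
// versioned library as a composite build, substituting the local checkout
// for the binary dependency fetched from the artifact repository:
includeBuild("../utils-lib")
```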
I don't think that a composite build should be committed to source control or built by continuous integration; I think CI should use jars from a repository (e.g. Nexus). Basically, I think composite builds serve the same purpose as the "Resolve Workspace Artifacts" checkbox in m2e.
Please note that one of the restrictions on a composite build is that it cannot include another composite build. So I think it's safer to commit multi-module builds to source control and use composite builds to join them together locally, for development only.
These are my opinions on how the two features should be used; I'm sure there are valid exceptions to the above.
We use our own monorepo with monobuild-type detection and use composite builds for CI and CD to staging (any microservices that end up building from your changes auto-deploy to staging). I disagree that composite builds are just for development, as we use them to get to production in a monorepo/monobuild.
The estimated multi-project build time at Orderly Health is about 15-20 minutes, based on webpieces taking 5 minutes and on the fact that modifying a library in OrderlyHealth that affects EVERY project takes about 15 minutes.
Instead, we detect which projects changed and which leaf nodes depend on them; all leaf nodes are composite projects pulling in libraries that pull in libraries, and the general average build time is 3 minutes. (That is a 5x boost right there on build time.)
later
Dean
Our company currently uses TFS for source control and as our build server. Most of our projects are written in C/C++, but we also have some .NET projects, and we wouldn't want to be limited if we need to use other languages in the future.
We'd like to use Git for our source control and we're trying to understand what would be the best choice for a build server. We have started looking into TeamCity, but there are some issues we're having trouble with which will probably be relevant regardless of our choice of build server:
Build dependencies - We'd like to be able to control the build dependencies for each <project, branch>. For example, have <MyProj, feature_branch> depend on <InfraProj1, feature_branch> and <InfraProj2, master>.
From what we’ve seen, to do that we might need to use Gradle or something similar to build our projects instead of plain MSBuild. Is this correct? Are there simpler ways of achieving this?
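For example, in Gradle terms we imagine something like composite-build dependency substitution; a sketch with hypothetical names and coordinates:

```kotlin
// settings.gradle.kts: replace the binary dependency on InfraProj1 with a
// local checkout whose working copy is on feature_branch.
includeBuild("../InfraProj1") {
    dependencySubstitution {
        substitute(module("com.example:infraproj1")).using(project(":"))
    }
}
```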
Local builds - Obviously we'd like to be able to build projects locally as well. This becomes somewhat of a problem when project dependencies are introduced, as we need a way to reference these resources or copy them locally for the build to succeed. How is this usually solved?
I'd appreciate any input, but a sample setup which covers these issues will also be a great help.
IMHO both issues you mention really fall into the configuration management category and are thus, as you say, unrelated to the build server choice.
A workspace for a project build (doesn't matter if centralized or local) should really contain all necessary resources for the build.
How can you achieve that? Have a project "metadata" git repo with a "content" file listing all your project components and their dependencies (each with its own git/other repo) and their exact versions, effectively tying them together coherently. (You may find it useful to store other metadata in this file down the road as well, like component-specific SCM info if you're using a mix of SCMs across the workspace.)
A workspace pull wrapper script would first pull this metadata git repo, parse the content file, and then pull all the other project components and their dependencies according to the content file info. Any build in such a workspace would have all the parts it needs.
When the time comes to modify either the code in a project component or the version of one of the dependencies, you'll also need to update the content file in the metadata git repo to reflect the change and commit it. This is how your project makes progress coherently, as a whole.
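A minimal sketch of such a pull wrapper in Kotlin, assuming a hypothetical content format of one name=git-url@revision entry per line:

```kotlin
import java.io.File

// Run a git command and fail loudly if it fails.
fun git(vararg args: String) {
    val exit = ProcessBuilder("git", *args).inheritIO().start().waitFor()
    check(exit == 0) { "git ${args.joinToString(" ")} failed" }
}

fun main() {
    git("clone", "ssh://example/metadata.git", "metadata") // hypothetical metadata repo URL
    File("metadata/content").readLines()
        .filter { it.isNotBlank() && !it.startsWith("#") }
        .forEach { line ->
            val (name, source) = line.split("=", limit = 2)
            val (url, revision) = source.split("@", limit = 2)
            git("clone", url, name)
            git("-C", name, "checkout", revision) // pin the exact version from the content file
        }
}
```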
Of course, actually managing dependencies is another matter. Tons of opinions out there, some even conflicting.
We have several projects under way, and there are dependency relationships among them. Together, all the projects make up the final software.
We set up a DEV build environment to do snapshot builds using the LATEST dependencies. Any change triggers a snapshot build (a Jenkins job), and all dependents' snapshot builds are triggered too, so if any change breaks some project, that project's own build will notify the owner.
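(For illustration, here is a sketch of that LATEST/snapshot wiring in Gradle's Kotlin DSL, with hypothetical coordinates; the same idea applies to other build tools.)

```kotlin
// build.gradle.kts: depend on the upstream snapshot so the DEV build always
// picks up the latest upstream changes.
repositories {
    maven { url = uri("https://nexus.example.com/repository/snapshots") } // hypothetical repo
}
dependencies {
    implementation("com.example:upstream-lib:1.2.0-SNAPSHOT") // hypothetical coordinates
}
configurations.all {
    // always re-check for a newer snapshot instead of using a cached copy
    resolutionStrategy.cacheChangingModulesFor(0, "seconds")
}
```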
The question is about the release. The DEV builds are continuous, and we want to release EVERY project against a certain timestamp at which there was a GREEN dev build across all projects.
How to get such release process setup?
thanks.
Jenkins provides some Post-Build Actions. You can use them to publish/archive every successfully built artifact to wherever you want.
Your release job can take all the artifacts and deploy them. That way you're sure all artifacts are from GREEN builds, and the release is also independent from all the continuous jobs.
If you want to be really cool, do some smoke tests (e.g. is the database connection working, are external APIs working, etc.) in the release job as well.
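For example, the release job could fetch each artifact from the permalink of the last successful (GREEN) build; the job names, artifact paths, and server URL in this Kotlin sketch are hypothetical:

```kotlin
import java.io.File
import java.net.URL

val jenkins = "https://jenkins.example.com" // hypothetical server
val artifacts = mapOf(                      // job name -> archived artifact path
    "project-a" to "target/project-a.jar",
    "project-b" to "target/project-b.jar"
)

fun main() {
    for ((job, path) in artifacts) {
        // lastSuccessfulBuild is a stable Jenkins permalink, so this only
        // ever pulls artifacts from GREEN builds.
        val url = "$jenkins/job/$job/lastSuccessfulBuild/artifact/$path"
        File(File(path).name).writeBytes(URL(url).readBytes())
    }
}
```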
best,
marco
We're trying to improve our Jenkins setup. So far we have two directories: /plugins and /tests.
Our project is a multi-module project of Eclipse Plugins. The test plugins in the /tests folder are fragment projects dependent on their corresponding productive code plugins in /plugins.
Until now, we had just one Jenkins job which checked out both /plugins and /tests, built all of them and produced the Surefire results etc.
We're now thinking about splitting the project into smaller jobs corresponding to features we provide. It seems that the way we tried to do it is suboptimal.
We tried the following:
We created a job for the core feature. This job checks out the whole /plugins and /tests directories and builds only the plugins the feature is comprised of. This job has a separate pom.xml which defines the core artifact and lists the modules contained in the feature.
We created a separate job for the tests that should be run on the feature plugins. This job uses the cloned workspace from the core job. This job is to be run after the core feature is built.
I somehow think this is less than optimal.
For instance, only the core job can update the checked out files. If only the tests are updated, the core feature does not need to be built again, but it will be.
As soon as I have a feature which depends on the core feature, that feature would either need to use a clone of the core feature's workspace or check out its own copy of /plugins and /tests, which would lead to bloat.
Using a cloned workspace, I can't update my sources. So when I have a feature depending on another feature, I can only run the job after the core feature has been updated and built.
I think I'm missing some basic stuff here. Can someone help? There definitely is an easier way for this.
EDIT: I'll try to formulate what I think would ideally happen if everything works:
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check whether the corresponding job needs to run)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
Finally, the project job should
do a nightly build
check out all sources from /plugins and /tests
build all, test all, send results to Sonar
Additionally, it would be neat if the nightly build was unnecessary because the builds and test results of the projects' features would be combined in the project job results.
Is something like this possible?
Starting from the end of the question: I would keep a separate nightly job that does a clean check-out (gets rid of any generated stuff before checking out), builds everything from scratch, and runs all tests. If you aren't doing a clean build, you can't guarantee that what is checked into your repository really builds.
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check whether the corresponding job needs to run)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
[I am assuming that by "dependent features" in 1 you mean the things needed by the "feature" in 2.]
To do this, I would say that you have multiple jobs.
a job for every individual feature and every dependent feature that simply builds that feature. The jobs should be started by SCM changes for the (dependent) feature.
I wouldn't keep test jobs separate from compile jobs. That allows the possibility that successfully compiled code is never tested. Instead, I would rely on the fact that when a build step fails in Jenkins, it normally aborts further build steps.
The trick is going to be in how you thread all of these together.
Let's say we have a feature and its build job, called F1, which is built on 2 dependent features, DF1.1 and DF1.2, each with its own build job.
Both DF1.1 and DF1.2 should be configured to trigger the build of F1.
F1 should be configured to get the artifacts it needs from the latest successful DF1.1 and DF1.2 builds. Unfortunately, the very nice "Clone SCM" plugin is not going to be of much help here as it only pulls from one previous job. Perhaps one of the artifact publisher plugins might be useful, or you may need to add some custom build steps to put/get artifacts.
We have a lot of different solutions/projects which are managed by different teams. Our solution needs to reference several projects that another team owns. We don't want to add these dependencies as project references because we do not intend to modify that code; we just want to use it. Also, we already have quite a few projects in our solution and don't want to add a bunch more, since that would slow down Visual Studio. So we are building these projects in a separate solution and adding them as file references to our solution.
My question is: how do people manage these types of dependencies? Should I just have some automated process that looks for changes to those projects, builds them, and checks the DLLs into our source control, after which we treat them like other 3rd-party dependencies? Is there a recommended way of doing this?
One solution, although it may not necessarily be what you are looking for, is to have each dependent subsystem perform a release. This release could be in the form of an MSI install, or just a network share of assemblies. When a significant change is made, that team could let you know, and you could run the install or a script to copy the files.
Once you have the release, you could put the assemblies into the GAC; that way you would not have to worry about copying them to your project bin folders.
Another solution, assuming you are using a build server or continuous integration of some kind, is to have a post-build step or process stage the files. Then at any given moment, the developers of the other teams could grab the new files, or have a script or bat file pull them down locally.
EDIT - ANOTHER SOLUTION
It might be best to ask why you have these dependencies. Do you really need them locally when building your part of the application? Could you mock out the dependencies in your solution, allowing you to code, build, and run unit tests? Then the actual application would wire these up in your DEV/Test/Prod environments. Keeping your solution decoupled and dependency-free may be a better solution for the individual team. Leave the integration and coupling to when the application runs in a real setting.
(Not a complete answer, but still:)
Any delivery is better stored in a file/binary repository, as opposed to a VCS, which is meant to manage source history.
We prefer managing those deliveries in a repository like Nexus, and we use Maven to fetch the right dependencies.
Even if those tools can be more Java-oriented, Nexus can store anything, and Maven is only there to read the pom.xml of each artifact and compute the right dependencies.
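As a sketch of the idea (shown here in Gradle's Kotlin DSL rather than a Maven pom; the URL and coordinates are hypothetical):

```kotlin
// build.gradle.kts: resolve the other team's released artifacts from an
// in-house Nexus repository instead of committing binaries to the VCS.
repositories {
    maven { url = uri("https://nexus.example.com/repository/releases") }
}
dependencies {
    implementation("com.example.otherteam:shared-lib:1.4.2")
}
```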