Using environment variables instead of reverse.dep to make builds "suitable" - TeamCity

Context:
We're running the free version of TeamCity to manage our projects. Some of those projects have dependencies on each other.
The problem
Some projects have chained Snapshot Dependencies, and those dependencies are always rebuilt instead of their latest artifacts being reused.
Example: A depends on B, and B depends on C. A push to A triggers a build of C, followed by a build of B and finally a build of A.
Ideally, A would be built using the latest already-built versions of B and C.
Where I think the problem lies (but I might be wrong)
Each of our projects has a number of Snapshot dependencies, and each snapshot dependency is configured with the following parameters turned on:
[x] Do not run new build if there is a suitable one
[x] Only use successful builds from suitable ones
For the first option, the documentation says:
If this option is set, TeamCity will not run a new dependency build if another dependency build, in progress or completed, with the appropriate source revision already exists. See also Suitable Builds: https://www.jetbrains.com/help/teamcity/2022.10/snapshot-dependencies.html#Suitable+Builds
If we look in the Suitable Builds doc, it shows a list of requirements for a build to be considered suitable. The one I think is relevant is here:
It must not have any custom settings, including those defined via reverse.dep (related feature request: TW-23700, http://youtrack.jetbrains.com/issue/TW-23700).
However, we currently have reverse.dep.*.env.SOME_PARAMETER as a Configuration Parameter in every single one of our builds (it's inherited through a template).
Based on that, it seems to me that the "Do not run new build if there is a suitable one" option is doing nothing, and that this is why all our dependencies are built every time (or am I wrong?).
We also have, in every one of our builds, an environment variable called env.SOME_PARAMETER which has the same value as the reverse.dep configuration parameter.
My question
Is there a way to avoid using reverse.dep in my situation so that the Do not run new build if there is a suitable one option works? Perhaps by using the environment variable instead?
I asked the senior developer at the company I work in, and he said that in theory it should work but in practice it doesn't, though he seems reticent to explain further. I'm just a beginner with TeamCity, so detailed explanations are welcome.

First things first: what is a Snapshot Dependency in a nutshell?
A Snapshot Dependency is, in a nutshell, a dependency between two build configurations that are linked by shared VCS Roots. VCS Roots are sort of like timelines in a book: they represent a chain of events (e.g. a Git commit history) and let you build from a given point in time (e.g. a commit).
Now, TeamCity excels at doing what it is intended to do: Continuous Integration and Deployment. It does so by being tied closely to VCS Roots and, effectively, to changes in these (optionally narrowed-down scopes of the) VCS Roots. A Snapshot Dependency is a dependency that links two build configurations together based on VCS Roots and their changes.
An example
Build A has two VCS Roots, Foo and Bar. Foo and Bar are, say, different Git repositories which Build A needs to fetch before it is able to build the "A" artifact. Build B only needs Foo, and thus only has Foo attached as a VCS Root. Build A has a Snapshot Dependency on Build B, which we configure like yours: "Do not run new build if there is a suitable one" and "Only use successful builds from suitable ones".
So far so good. Now let's push a new commit to the Foo repository. TeamCity notices this and potentially triggers a new build of Build A, because the latest build of A is now outdated (it did not include our latest Foo commit). The Snapshot Dependency on B in A links these two build configurations together so that, with our above configuration of the dependency, we can require a build of B which includes the same revision of Foo that build A was kicked off with (e.g. the latest commit). Because such a build does not (yet) exist, a build of B is started and put above build A in the queue.
Simply put: the VCS Root is a timeline, the Snapshot Dependency is a link between two build configurations based on the timeline(s) they have in common, and the configuration of the dependency dictates what should happen when a dependency is needed (e.g. "Do not run new build if there is a suitable one").
If we had manually started a build of B with the latest Foo revision included, it would have been a suitable candidate for reuse, and TeamCity would simply remove the queued B build once it discovered that a build of B already exists which shares the same changes build A is started with (the latest push to Foo).
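For reference, this kind of dependency can be expressed in TeamCity's Kotlin DSL roughly as follows. This is a sketch only: BuildA/BuildB are stand-ins for your real build configurations, and the exact DSL package name varies by TeamCity version.

    import jetbrains.buildServer.configs.kotlin.*

    object BuildB : BuildType({
        name = "Build B"
    })

    object BuildA : BuildType({
        name = "Build A"

        dependencies {
            // ReuseBuilds.SUCCESSFUL corresponds to ticking both
            // "Do not run new build if there is a suitable one" and
            // "Only use successful builds from suitable ones".
            snapshot(BuildB) {
                reuseBuilds = ReuseBuilds.SUCCESSFUL
                onDependencyFailure = FailureAction.FAIL_TO_START
            }
        }
    })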
If you want just the latest artifacts of B and C...
...use Artifact Dependencies and only those. Removing the Snapshot Dependency removes the need to build the dependency every time Build A is triggered by a new change in its VCS Root. However, it also means that there is no timeline linkage between the two builds, and that you yourself need to ensure that the artifacts produced by B and C are not tightly coupled to A. E.g. Build C could produce a driver as an artifact, B could produce the driver's user manual, and Build A could just use the driver, expecting only that it is in working condition (but otherwise not depending on changes in it).
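In the same Kotlin DSL terms, keeping only an artifact dependency might look like this (again a sketch; the artifact path and rule are made-up examples):

    import jetbrains.buildServer.configs.kotlin.*

    object BuildB : BuildType({
        name = "Build B"
    })

    object BuildA : BuildType({
        name = "Build A"

        dependencies {
            // Artifact dependency only: fetch the driver produced by the
            // last successful build of B, with no VCS-revision linkage.
            artifacts(BuildB) {
                buildRule = lastSuccessful()
                artifactRules = "driver.zip => libs/"
            }
        }
    })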
Your question about reverse.dep.*...
I've not heard of this causing trouble before. I would, however, expect that a Snapshot Dependency (and not just an Artifact Dependency) is required by TeamCity for you to be allowed to use it.
The question is: do you need it? It sounds like you've got the value elsewhere already, and honestly, passing values between builds like this is likely to cause you trouble in the long run, especially if you don't have a specifically good reason to do so.
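If the value genuinely is the same everywhere, one thing worth trying (an assumption on my part; I have not verified it against the suitable-builds check) is to define it as a plain parameter on the template instead of a reverse.dep override, e.g. in the Kotlin DSL:

    import jetbrains.buildServer.configs.kotlin.*

    object CommonTemplate : Template({
        name = "Common Template"

        params {
            // A plain environment variable inherited by every build based
            // on this template; no reverse.dep override is involved, so
            // dependency builds should not be flagged as having custom
            // settings. "some-value" is a placeholder.
            param("env.SOME_PARAMETER", "some-value")
        }
    })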

Related

Gradle substitute dependency if not found

Maybe I'm approaching this incorrectly, but say we have a module A which B (and others) build on. We make changes to each of these two modules on dev branches before merging back to trunk. Sometimes we make a change to both A and B at the same time. In this case, as they are built as independent modules, we have to publish a snapshot of the A branch and change the A snapshot dependency in B to point to the branch snapshot of A.
We currently use the branch information to determine the A dependency: when building the trunk, the version ends up as a trunk snapshot; on a release branch, it defaults to a snapshot of that branch. What would be nice is to detect that we are on a task branch of B (which we could do), then try to resolve a matching A dependency and, if that doesn't exist, fall back to the current approach, which would give us a trunk snapshot for example.
So in B we would have something like
ifExists('myOrg:A:dev-snapshot').orElse('myOrg:A:trunk-snapshot')
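(As far as I know there's no real ifExists in Gradle; the closest I can picture, a rough and untested sketch in the Kotlin DSL that probes a detached configuration at configuration time, would be something like the following.)

    plugins { java }

    // Probe whether the dev snapshot of A resolves from the configured
    // repositories; note this forces eager resolution at configuration time.
    val devA = dependencies.create("myOrg:A:dev-snapshot")
    val devExists = configurations.detachedConfiguration(devA)
        .resolvedConfiguration.lenientConfiguration
        .allModuleDependencies.isNotEmpty()

    dependencies {
        // Fall back to the trunk snapshot when no dev snapshot exists.
        implementation(if (devExists) "myOrg:A:dev-snapshot" else "myOrg:A:trunk-snapshot")
    }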
I’ve seen there are some substitution mechanisms but I can’t see that any deal with missing dependencies so I’m not sure it’s possible.
We obviously can and do deal with this manually when it occurs; it would just be nice to incorporate it into the Gradle script somehow, to avoid any accidental merge of the dev snapshot dependency onto the B trunk.
Thanks for any suggestions.

Is there a deployment trigger for TeamCity?

I know there are build triggers, but do deployment triggers exist in TeamCity? I googled quite a bit and looked at the docs and cannot seem to find one. I basically want custom code to check a few things before deploying, but it needs to happen from TeamCity's deploy page.
You should be able to do this by adding a Build, Snapshot, or Artifact dependency. I'm guessing you want a snapshot dependency:
A snapshot dependency is a dependency between two build configurations that allows launching builds from both build configurations in a specific order and ensure they use the same sources snapshot (sources revisions correspond to the same moment). When you have a number of build configurations interconnected by snapshot dependencies, they form a build chain.
You can then put your checks into that build configuration (or another one in the chain) depending upon your needs. There's really too much to put into a brief answer here; it's best if you read the complete documentation.
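As an illustrative sketch (configuration names and the script are made up), the deploy configuration could snapshot-depend on a checks configuration in TeamCity's Kotlin DSL:

    import jetbrains.buildServer.configs.kotlin.*
    import jetbrains.buildServer.configs.kotlin.buildSteps.script

    object PreDeployChecks : BuildType({
        name = "Pre-Deploy Checks"
        steps {
            script {
                // Custom validation that must pass before any deployment.
                scriptContent = "./check_preconditions.sh"
            }
        }
    })

    object Deploy : BuildType({
        name = "Deploy"
        type = BuildTypeSettings.Type.DEPLOYMENT

        dependencies {
            // Triggering Deploy from its deploy page queues the checks
            // first and refuses to start the deploy if they fail.
            snapshot(PreDeployChecks) {
                onDependencyFailure = FailureAction.FAIL_TO_START
            }
        }
    })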

Maven publishing artefacts to remote repository and using $release in the artefact version

Wondering how people manage their project artefacts through an environment lifecycle of, say, DEV - AQA - CQA - RELEASE, and whether there are best practices to follow.
I use a Jenkins build server to build my projects (code checkout then maven build). My artefacts all have version 1.0.0-SNAPSHOT and are published to a local .m2 repo on the build server. There are also Jenkins jobs that rebuild the DEV system (on the same server) using those artefacts. The project build is automated whenever someone checks in code. The DEV build is automated on a nightly basis.
At some point, my lead developer determines that our project is fit to go to AQA (the first level of testing environment on a different server).
For this I need to mark the artefacts as version 1.0.0-1 and publish to a remote AQA repository (it's actually a Nexus repo).
The Maven deploy plugin sounds like the right approach, but how do I change the version number to be effectively 1.0.0-$release (where $release is just an incrementing number starting from 1)? Would Maven/Nexus be able to manage the value of $release, or would I need a simple properties file in my project to store/update the last used $release?
Furthermore, someone tests AQA and determines it's fit to move on to CQA (the second testing env). This is 'promote to CQA'. So my requirement is to copy the artefact from the AQA Nexus repo and publish it to the CQA Nexus repo.
Likewise, after CQA, there'd be a 'promote to RELEASE' job too.
I think the version value remains unchanged during the 'promote' phases. I'd expect the AQA repo to see all versions 1-50, but CQA only 25 and 50, and RELEASE only 50, for example.
I can find loads of info about Maven plugins/goals/phases, but very little prescriptive guidance on how or where to use them outside of the immediate development environment.
Any suggestions gratefully received.
Staging/promoting is out of scope for Maven. Once an artifact is deployed/uploaded to a remote repository, that system is responsible for the completion of the release cycle. If you use Nexus, read this chapter about staging: http://books.sonatype.com/nexus-book/reference/staging.html
Build numbers are just that: build numbers. They are not promotion/staging numbers.
You should come up with another means of tracking your promotions, because otherwise one might get confused into "knowing" that build 10.1.4-2 is the same as 10.1.4-6. Certainly, all of the Maven-related software will see those two builds as different builds.
In addition, if a person grabs the wrong copy of the build, the way you are managing staging within your build number will increase confusion: if you don't kill all of the 10.1.4-2 builds, someone might get a copy of one without realizing that the build has been promoted to 10.1.4-6. This means that for the "last" staging number to be the most likely one to be grabbed, you must do two things (which are impossible in combination):
Remove all the old staging numbers, updating them to the new ones.
Ensure that no copy of an old staging number escaped the update.
Since people can generally copy files without being tracked, or said files might not be reachable at the time of the update, or the update cannot reach all the files simultaneously, such a system is doomed to fail.
Instead, I recommend (if you must track by file) placing the same file in different "staging directories". This defines release gateways by whether the file exists in a certain directory, and it makes clear that it is the same file going through the entire process. In addition, it becomes easy to have the various stages of verification poll their respective directories (and you can write Jenkins tasks to promote from one directory to another, if you really wish).
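A minimal sketch of such a promotion step, assuming plain staging directories on a shared filesystem (all paths and names are made up):

    import java.nio.file.Files
    import java.nio.file.Path
    import java.nio.file.StandardCopyOption

    // Promote an artifact by copying the SAME file between stage
    // directories; which directory it sits in is what marks its maturity.
    fun promote(artifact: String, fromStage: String, toStage: String) {
        val source = Path.of("/repo", fromStage, artifact)
        val target = Path.of("/repo", toStage, artifact)
        Files.createDirectories(target.parent)
        Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING)
    }

    fun main() {
        // E.g. after AQA sign-off: the binary itself never changes.
        promote("myapp-1.0.0-50.jar", "aqa", "cqa")
    }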

Setting up a Staging process for Java projects

I currently have a large number of projects with a complex dependency structure being developed by 50+ developers using Maven 2/3 with Nexus and Jenkins. We develop at a fast pace, and more often than not we build and deploy releases with SNAPSHOT dependencies. Most of the time we cannot wait for one team or another to build a release version before we have to deploy. Because of this we have had a number of issues with bugs and work-in-progress entering production, with no idea what changes are in those SNAPSHOTs.
What I am looking to do is to eliminate SNAPSHOTS and move to automatic releases with the versions-maven-plugin and to implement a release Staging policy.
And herein lies the problem. For developers and for the CI build, resolution needs to be configured against both "Staging" and "Release", but publishing must go ONLY to "Staging". We then want to be able to "promote" that build as a release, rebuilding it to resolve dependencies from "Release" and publish to "Release" (initially this will be a manual promotion, but we may wish to automate it too). Does this all sound reasonable?
I do not want to use the maven-release-plugin, because we have tried it before and we don't like the fact that it modifies the pom automatically and makes changes in our SCM, triggering more builds.
Also, is there even a way to tell Maven to use one repository for resolving and another to publish to? Could I do this with Gradle?
Any comments/ideas on this would be greatly appreciated.
It sounds a lot like my build system.
I do this using Gradle and two separate Nexus repositories (both "release" repositories, one staging and one final). The build has the concept of a target maturity and uses it to calculate the version number (using semantic versioning), so an RC build produces a version like 1.0.0-rc1 and a final build produces a version like 1.0.0. We then have tests tagged in different ways (TestNG groups, Cucumber tags, etc.) so that different slices of the tests run at different times. The build decides which repositories to depend on or publish to based on the target maturity, and hence can ensure that only sufficiently mature artefacts are consumed.
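The version calculation could be sketched in a build.gradle.kts like this (illustrative only; the property name is made up, and our real build derives the numbers from source control):

    // Target maturity drives the version suffix.
    val targetMaturity = (findProperty("targetMaturity") ?: "rc") as String
    val baseVersion = "1.0.0" // major.minor.patch
    val candidate = 1         // also derived from source control in reality

    version = when (targetMaturity) {
        "final" -> baseVersion                 // e.g. 1.0.0
        else    -> "$baseVersion-rc$candidate" // e.g. 1.0.0-rc1
    }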
This is configured to run automatically via TeamCity build chains, so a commit in some core lib ripples through the downstream builds and on to integration testing/deployment (by Rundeck, via its Java API, through a set of Gradle tasks) if necessary.
Gradle publication and resolution repositories are configured separately, so they can differ. The same can be done in Maven too.
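For instance, with Gradle's Kotlin DSL and the maven-publish plugin (repository URLs are placeholders):

    plugins {
        java
        `maven-publish`
    }

    // Resolution: dependencies may come from either repository.
    repositories {
        maven(url = "https://nexus.example.com/repository/staging")
        maven(url = "https://nexus.example.com/repository/final")
    }

    // Publication: artifacts go only to the staging repository; a
    // promoted (final) build would swap in the final repository URL.
    publishing {
        repositories {
            maven {
                name = "staging"
                url = uri("https://nexus.example.com/repository/staging")
            }
        }
    }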
For example, given a dependency graph like
corelib -> mylib -> myapp
Each thing would have a set of tests associated with it, tagged as either beta or rc. The intention is that a build that produces an RC is one that has passed the beta tests (i.e. if you pass beta, you're mature enough to be an RC), and that build is one that finishes quickly (e.g. it does unit tests only), whereas the rc tests (which produce a final build) might do some integration tests or some longer-running tests. This definition is our own and completely arbitrary; you could make any distinction you like. Ours is just based on applying increasingly rigorous and/or long-running tests only once you have a certain confidence level in the product.
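For example, with TestNG groups the tagging might look like this (test names are made up):

    import org.testng.annotations.Test

    class CoreLibTests {
        // Fast unit-level check: part of the "beta" slice run by the rc build.
        @Test(groups = ["beta"])
        fun configParses() { /* ... */ }

        // Longer integration check: part of the "rc" slice run by the final build.
        @Test(groups = ["rc"])
        fun survivesDatabaseRoundTrip() { /* ... */ }
    }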
We then set up a build chain so that the rc builds depend on the upstream rc builds, and the final builds depend on the upstream final build and their own rc build,
i.e.
           --> mylib final --
          /                  \
mylib rc -                    --> myapp final
          \                  /
           --> myapp rc -----
and so on. In this example the flow is:
commit a change to mylib
mylib rc build runs
    runs beta tests
    publishes the result to the rc repository
mylib final and myapp rc can then run in parallel
mylib final build runs
    runs rc tests
    publishes the result to the final repository
myapp rc build runs
    depends on the rc repository, so picks up the result of the previous mylib rc build
    runs beta tests
myapp final runs
    depends on the final repository, so picks up the result of the previous mylib final build
    runs rc tests
Version numbers at each point are calculated by interrogating source control.
Dependencies on our own artefacts are dynamic (1.0.+ in Ivy terms, for example); major.minor is set statically (in source control), and the build is left to produce the patch and candidate numbers itself, i.e. myapp 1.0 will depend on mylib 1.0.+. It is much simpler, IMO, to have two separate repositories as the filtering mechanism than to dig into the resolution logic in Gradle/Ivy to filter out the versions we don't want.
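In Gradle Kotlin DSL terms the dynamic dependency itself is just (coordinates follow the example above):

    plugins { java }

    dependencies {
        // major.minor is pinned; patch/candidate resolves to whatever the
        // selected repository (staging or final) currently holds.
        implementation("myOrg:mylib:1.0.+")
    }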
Look at the versions-maven-plugin, specifically the lock-snapshots/unlock-snapshots goals. They can help to identify which dependencies an assembly is using, and might be sufficient for you.
If not: yes, Maven can use staging repositories, but it might lead to confusion if anything you want to stage is also pulled into local developers' repositories.
What we're using right now to solve a similar problem is a scripted rough equivalent of the maven-release-plugin (it actually calls it for some operations) and an awkward versioning scheme: our development branch stays 1.2-SNAPSHOT, but our released/published builds use a different versioning scheme (1.2.3, 1.2.4). Our script tags Git as well.
I recommend not using the maven-release-plugin and not using snapshots.
Look here for a fast and clean way to release.

Continuum finding dependencies and building on chain-dependent projects

I am the configuration manager for an IT firm. Currently we are using the Anthill build management server for all our build-related purposes. We are looking to implement Continuous Integration in our development life cycle.
Currently the build process is done manually. Suppose there are 5 projects A, B, C, D, and E, where E is the parent project and the dependency chain goes like this:
A->B->C->D->E
What we do is build A first, update the project.xml of B to the latest version of A, build B, and so on and so forth until all dependent projects are built and finally the parent project is built.
What I am thinking of is automating the entire process, i.e. automatically finding the dependencies and building them first, then updating the versions in the parent projects and building them again to a newer version.
Would Continuum do this for me? If not, is there any other CI tool that does this?
Hudson does this really well. If you're using Maven, it'll even figure out the build dependencies for you automatically after the first build; otherwise you can define the build dependencies manually. I.e., it lets you configure the system to build project B after a successful project A build.
I'm not sure if it matters to you, but Hudson is also open source.
If not, is there any other CI tool that does this?
I like TeamCity, which does pretty much everything you'll need. With the latest version (and a plugin from JetBrains), there's even Git support.
On the other hand, any continuous integration system should handle dependencies easily.
We use Zed Builds and Bugs for a setup similar to this. We have a master project that has sub-project dependencies and the build system handles everything in the proper order.
We also have very small, tight builds for the sub-projects so that each of them can be built when the developers commit to source control. The Zed Server is capable of pulling the latest artifacts from these small builds and putting them together into larger builds, but we haven't yet used that feature.
Our check-ins trigger the small CI builds, and then twice per day the entire application is re-built from scratch, following the dependency chain.
I'd agree with OregonGhost, though, any CI system should be able to set up this type of chain.
I don't think you need a CI tool for this. Try to automate it with a build script, and use Continuum (or any other CI tool) to trigger your preferred build tool.
