Is there a deployment trigger for TeamCity?

I know there are build triggers, but do deployment triggers exist in TeamCity? I googled quite a bit and looked at the docs, but I cannot seem to find it. I basically want custom code to check a few things before deploying, but it needs to happen from TeamCity's deploy page.

You should be able to do this by adding a Build, Snapshot, or Artifact dependency. I'm guessing you want a snapshot dependency:
A snapshot dependency is a dependency between two build configurations that allows launching builds from both build configurations in a specific order and ensures they use the same sources snapshot (sources revisions correspond to the same moment).
When you have a number of build configurations interconnected by snapshot dependencies, they form a build chain.
You can then put your tests into that target (or another one in the chain) depending upon your needs. There's really too much to put into a brief answer here. It's best if you read the complete documentation.
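For illustration, here is how a snapshot dependency between a deploy configuration and a checks configuration might look in TeamCity's Kotlin DSL. This is only a sketch: the configuration IDs and names are hypothetical, and exact DSL details can vary by TeamCity version.

```kotlin
// Hypothetical TeamCity Kotlin DSL sketch: a "Deploy" configuration that
// runs a pre-deploy check build first via a snapshot dependency.
object Deploy : BuildType({
    name = "Deploy"

    dependencies {
        // "PreDeployChecks" is a placeholder ID for the configuration
        // holding your custom verification code.
        snapshot(RelativeId("PreDeployChecks")) {
            // If the checks fail, the deployment never starts.
            onDependencyFailure = FailureAction.FAIL_TO_START
        }
    }
})
```

Triggering "Deploy" from the UI then queues the checks build first, and the deploy runs only if the checks succeed.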


Using environment variables instead of reverse.dep to make builds "suitable"

Context:
We're running the free version of TeamCity to manage our projects. Some of those projects have dependencies between each other.
The problem
Some projects have chained Snapshot Dependencies, and those dependencies are always being built instead of the latest artifacts from those dependencies being used.
Example: A depends on B, and B depends on C. Pushing to A triggers a build of C, followed by a build of B, and finally a build of A.
Ideally: A would be built based on the latest built versions of B and C
Where I think the problem lies (but I might be wrong)
Each of our projects has a number of Snapshot dependencies, and each snapshot dependency is configured with the following parameters turned on:
[x] Do not run new build if there is a suitable one
[x] Only use successful builds from suitable ones
For the first option, the documentation says:
If this option is set, TeamCity will not run a new dependency build, if another dependency build in progress or completed with the appropriate source revision already exists. See also Suitable builds: (https://www.jetbrains.com/help/teamcity/2022.10/snapshot-dependencies.html#Suitable+Builds).
If we look in the Suitable Builds doc, it shows a list of requirements for a build to be considered suitable. The one I think is relevant is here:
It must not have any custom settings, including those defined via reverse.dep. (related feature request: TW-23700: (http://youtrack.jetbrains.com/issue/TW-23700)).
However, we currently have reverse.dep.*.env.SOME_PARAMETER as a Configuration Parameter in every single one of our builds (it's inherited through a template).
Based on that, it seems to me that the "Do not run new build if there is a suitable one" option is doing nothing, and therefore that's why all our dependencies are built every time (or am I wrong?)
We also have, in every one of our builds, an environment variable called env.SOME_PARAMETER which has the same value as the reverse.dep configuration parameter.
My question
Is there a way to avoid using reverse.dep in my situation so that the Do not run new build if there is a suitable one option works? Perhaps by using the environment variable instead?
I asked the senior developer at the company I work for, and he said that in theory it should work, but in practice it doesn't; he seems reticent to explain further. I'm just a beginner with TeamCity, so detailed explanations are welcome.
First things first: what is a Snapshot Dependency in a nutshell?
A Snapshot Dependency is, in a nutshell, a dependency between two build configurations which are linked by shared VCS Roots. VCS Roots are sort of like timelines in a book: they represent a chain of events (e.g. a git commit history) and let you build from a given point in time (e.g. a commit).
Now, TeamCity excels at doing what it is intended to do: Continuous Integration and Deployment. It does so by being tied closely to VCS Roots and, effectively, to changes in these (optionally narrowed-down scopes of the) VCS Roots. A Snapshot Dependency links two build configurations together based on their VCS Roots and the changes in them.
An example
Build A has two VCS Roots, Foo and Bar. Foo and Bar are, say, different Git repositories which Build A needs to fetch before it is able to build the "A" artifact. Build B only needs Foo, and thus only has Foo attached as a VCS Root. Build A has a Snapshot Dependency on Build B, which we configure like yours: "Do not run new build if there is a suitable one" and "Only use successful builds from suitable ones".
So far so good. Now let's push a new commit to the "Foo" repository. TeamCity notices this and potentially triggers a new build of Build A, because the latest build of A is at that point outdated (it did not include our latest Foo commit). The Snapshot Dependency on B in A links these two build configurations together so that - with our above configuration of the dependency - we can require a build of B which includes the same revision of Foo that build A was kicked off with (e.g. the latest commit). Because such a build does not (yet) exist, a build of B is started and put above build A in the queue.
Simply put: the VCS Root is a timeline, the Snapshot Dependency is a link between two build configurations based on the timeline(s) they have in common, and the configuration of the dependency dictates what should happen when a dependency is needed (e.g. "Do not run new build if there is a suitable one").
If we had manually started a build B with the latest Foo revision included, this would have been a suitable candidate for reuse, and TeamCity would simply remove the queued B build once it discovered that a build of B already exists, which shares the same changes that build A is started with (the latest push to Foo).
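In TeamCity's Kotlin DSL, the two checkboxes from the question map roughly onto the snapshot dependency's reuse setting. A sketch, with hypothetical configuration IDs:

```kotlin
object BuildA : BuildType({
    name = "Build A"

    dependencies {
        snapshot(RelativeId("BuildB")) {
            // Corresponds to "Do not run new build if there is a suitable one"
            // plus "Only use successful builds from suitable ones".
            reuseBuilds = ReuseBuilds.SUCCESSFUL
        }
    }
})
```

Setting `reuseBuilds = ReuseBuilds.NO` would instead force a fresh build of B every time.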
If you want just the latest artifacts of B and C...
...use Artifact Dependencies and only these. Removing the Snapshot Dependency removes the need to have the dependency built every time Build A is triggered by a new change in its VCS Root. However, it also means that there is no timeline linkage between the two builds, and that you yourself need to ensure that the artifacts produced by B and C are not tightly coupled to A. E.g. Build C could produce a driver as an artifact, B could produce a user manual for the driver, and Build A could just use the driver, only expecting it to be in working condition (but otherwise not depending on changes in it).
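An artifact-only dependency might look like this in the Kotlin DSL (the configuration ID and artifact paths are made up for illustration):

```kotlin
object BuildA : BuildType({
    name = "Build A"

    dependencies {
        // Artifact-only dependency: grab the artifacts of the latest
        // successful B build; with no snapshot link, B is not rebuilt
        // when A is triggered.
        artifacts(RelativeId("BuildB")) {
            buildRule = lastSuccessful()
            artifactRules = "driver.zip => libs/"  // hypothetical artifact path
        }
    }
})
```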
Your question about reverse.dep.*...
I've not heard about this causing trouble before. I would however expect that a Snapshot Dependency (and not just an artifact dependency) is required by TeamCity for you to be allowed to use it.
Question is: do you need it? It sounds like you've got the value elsewhere already, and honestly fetching values from previous builds is likely going to cause you trouble in the long run, especially if you don't have a specifically good reason to do so.
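If the value is effectively static per configuration (as the inherited environment variable suggests), one option worth considering is defining it as a plain parameter on the template and dropping the reverse.dep.* counterpart entirely, so builds keep default settings and stay "suitable" for reuse. A hedged sketch, with a made-up value:

```kotlin
// Hypothetical: set the value directly on the template instead of
// pushing it down the chain with reverse.dep.
params {
    param("env.SOME_PARAMETER", "some-value")
}
```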

Build dependencies and local builds with continuous integration

Our company currently uses TFS for source control and build server. Most of our projects are written in C/C++, but we also have some .NET projects and wouldn't want to be limited if we need to use other languages in the future.
We'd like to use Git for our source control and we're trying to understand what would be the best choice for a build server. We have started looking into TeamCity, but there are some issues we're having trouble with which will probably be relevant regardless of our choice of build server:
Build dependencies - We'd like to be able to control the build dependencies for each <project, branch>. For example, have <MyProj, feature_branch> depend on <InfraProj1, feature_branch> and <InfraProj2, master>.
From what we’ve seen, to do that we might need to use Gradle or something similar to build our projects instead of plain MSBuild. Is this correct? Are there simpler ways of achieving this?
Local builds - Obviously we'd like to be able to build projects locally as well. This becomes somewhat of a problem when project dependencies are introduced, as we need a way to reference these resources or copy them locally for the build to succeed. How is this usually solved?
I'd appreciate any input, but a sample setup which covers these issues will also be a great help.
IMHO both issues you mention really fall into the configuration management category and are thus, as you say, unrelated to the build server choice.
A workspace for a project build (doesn't matter if centralized or local) should really contain all necessary resources for the build.
How can you achieve that? Have a project "metadata" git repo with a "content" file listing all your project components and their dependencies (each with its own git/other repo) and their exact versions, effectively tying them together coherently. (You may find it useful to store other metadata in this file down the road as well, like component-specific SCM info if using a mix of SCMs across the workspace.)
A workspace pull wrapper script would first pull this metadata git repo, parse the content file, and then pull all the other project components and their dependencies according to the content file info. Any build in such a workspace would have all the parts it needs.
When time comes to modify either the code in a project component or the version of one of the dependencies you'll need to also update this content file in the metadata git repo to reflect the update and commit it - this is how your project makes progress coherently, as a whole.
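As a rough sketch of the wrapper-script idea, here is how the content-file parsing step might look. The file format, component names, and repo URLs are all made up for illustration; nothing here is a standard.

```kotlin
// Sketch of the "content file" parsing step, assuming a simple
// whitespace-separated format: <component> <repo-url> <revision>.

data class Component(val name: String, val repo: String, val revision: String)

fun parseContentFile(text: String): List<Component> =
    text.lines()
        .map { it.trim() }
        .filter { it.isNotEmpty() && !it.startsWith("#") }  // skip blanks and comments
        .map { line ->
            val (name, repo, rev) = line.split(Regex("\\s+"), limit = 3)
            Component(name, repo, rev)
        }

fun main() {
    val content = """
        # component  repo                            revision
        infra        git@example.com:org/infra.git   a1b2c3d
        app          git@example.com:org/app.git     4e5f6a7
    """.trimIndent()

    for (c in parseContentFile(content)) {
        // A real wrapper would run: git clone <repo> && git checkout <revision>
        println("${c.name} -> ${c.repo} @ ${c.revision}")
    }
}
```

The pull step itself is then a loop over the parsed components, invoking your SCM of choice for each one.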
Of course, actually managing dependencies is another matter. Tons of opinions out there, some even conflicting.

multiple maven projects release against a common timestamp

We have several projects in progress, and there are dependency relationships among them. Together, all the projects make up the final software.
We set up a DEV build environment to do snapshot builds using LATEST dependencies. Any change will trigger a snapshot build (a Jenkins job), and all dependents' snapshot builds will be triggered too; so if any change breaks some project, that project's own build will notify the owner.
The question is about the release. The DEV build is continuous, and we want to release EVERY project against a certain timestamp at which there was a GREEN dev build across all projects.
How can we set up such a release process?
Thanks.
Jenkins provides some Post-Build Actions. You can use them to publish/archive every successfully built artifact to wherever you want.
Your Release-Job can then take all the artifacts and deploy them. That way you're sure all artifacts are from GREEN builds, and the Release-Job is also independent from all the continuous jobs.
If you want to be really cool, do some smoke tests (e.g. is database connection working, external APIs working, etc) in the Release-Job as well.

How to split a big Jenkins job/project into smaller jobs without compromising functionality?

We're trying to improve our Jenkins setup. So far we have two directories: /plugins and /tests.
Our project is a multi-module project of Eclipse Plugins. The test plugins in the /tests folder are fragment projects dependent on their corresponding productive code plugins in /plugins.
Until now, we had just one Jenkins job which checked out both /plugins and /tests, built all of them and produced the Surefire results etc.
We're now thinking about splitting the project into smaller jobs corresponding to features we provide. It seems that the way we tried to do it is suboptimal.
We tried the following:
We created a job for the core feature. This job checks out the whole /plugins and /tests directories and only builds the plugins the feature is comprised of. This job has a separate pom.xml which defines the core artifact and tells about the modules contained in the feature.
We created a separate job for the tests that should be run on the feature plugins. This job uses the cloned workspace from the core job. This job is to be run after the core feature is built.
I somehow think this is less than optimal.
For instance, only the core job can update the checked out files. If only the tests are updated, the core feature does not need to be built again, but it will be.
As soon as I have a feature which is dependent on the core feature, this feature would either need to use a clone of the core feature workspace or check out its own copy of /plugins and /tests, which would lead to bloat.
Using a cloned workspace, I can't update my sources. So when I have a feature depending on another feature, I can only do the job if the core feature is updated and built.
I think I'm missing some basic stuff here. Can someone help? There definitely is an easier way for this.
EDIT: I'll try to formulate what I think would ideally happen if everything works:
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check whether the corresponding job needs to run)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
Finally, the project job should
do a nightly build
check out all sources from /plugins and /tests
build all, test all, send results to Sonar
Additionally, it would be neat if the nightly build was unnecessary because the builds and test results of the projects' features would be combined in the project job results.
Is something like this possible?
Starting from the end of the question. I would keep a separate nightly job that does a clean check-out (gets rid of any generated stuff before check-out), builds everything from scratch, and runs all tests. If you aren't doing a clean build, you can't guarantee that what is checked into your repository really builds.
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check whether the corresponding job needs to run)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
[I am assuming that by "dependent features" in 1 you mean the things needed by the "feature" in 2.]
To do this, I would say that you have multiple jobs.
a job for every individual feature and every dependent feature that simply builds that feature. The jobs should be started by SCM changes for the (dependent) feature.
I wouldn't keep test jobs separate from compile jobs. That allows the possibility that successfully compiled code is never tested. Instead, I would rely on the fact that when a build step fails in Jenkins, it normally aborts further build steps.
The trick is going to be in how you thread all of these together.
Let's say we have a feature and its build job, called F1, that depends on 2 features, DF1.1 and DF1.2, each with their own build jobs.
Both DF1.1 and DF1.2 should be configured to trigger the build of F1.
F1 should be configured to get the artifacts it needs from the latest successful DF1.1 and DF1.2 builds. Unfortunately, the very nice "Clone SCM" plugin is not going to be of much help here as it only pulls from one previous job. Perhaps one of the artifact publisher plugins might be useful, or you may need to add some custom build steps to put/get artifacts.

Continuum finding dependencies and building on chain-dependent projects

I am the Configuration manager for an IT firm. Currently we are using anthill build management server for all our build related purposes. We are looking to implement Continuous Integration in our development life cycle.
Currently the building process is done manually. Suppose there are 5 projects A, B, C, D, E, where E is the parent project and the dependency chain goes like this:
A->B->C->D->E
What we do is build A first, update project.xml of B to the latest version of A, build B, and so on and so forth until all dependent projects get built and finally the parent project gets built.
What I am thinking of is automating the entire process, i.e. automatically finding the dependencies and building them first, then updating the versions in the parent projects and building them again to a newer version.
Would Continuum do this for me? If not, is there any other CI tool that does this?
Hudson does this really well. If you're using Maven, it'll even figure out the build dependencies for you automatically after the first build; otherwise you can manually define them. I.e., it lets you configure the system to build project B after a successful project A build.
I'm not sure if it matters to you, but Hudson is also open source.
If not, is there any other CI tool that does this?
I like TeamCity, which does pretty much everything you'll need. With the latest version (and a plugin from JetBrains), there's even Git support.
On the other hand, any continuous integration system should handle dependencies easily.
We use Zed Builds and Bugs for a setup similar to this. We have a master project that has sub-project dependencies and the build system handles everything in the proper order.
We also have very small, tight builds for the sub-projects so that each of them can be built when the developers commit to source control. The Zed Server is capable of pulling the latest artifacts from these small builds and putting them together into larger builds, but we haven't yet used that feature.
Our check-ins trigger the small CI builds, and then twice per day the entire application is re-built from scratch, following the dependency chain.
I'd agree with OregonGhost, though, any CI system should be able to set up this type of chain.
I don't think you need a CI tool for this. Try to automate this using a build script, and use Continuum (or any other CI tool) to trigger your preferred build tool.
