I have five TeamCity builds that are triggered to run at 01:00. Since they all run on the same agent, they are effectively queued to run one after the other. Each build takes between 10 and 60 minutes to complete.
What I'd like to guarantee is that all five builds run on the same revision. Currently this is not guaranteed: if a contributing developer can't fall asleep and decides to commit something at 01:30, every build that starts after that time will run on a different revision.
My question:
Is it possible to configure a build to use a specific VCS revision from a specific time? E.g., configure a build to use the "latest revision at the time of 01:00 today"?
Any other suggestions on how to deal with this problem would be highly appreciated as well.
One of the builds could be set up to have the others as snapshot dependencies. When that build is triggered (e.g. by a schedule trigger), its dependencies will be triggered with the VCS revision fixed.
Alternatively, a new build configuration could be added that does nothing except trigger its snapshot dependencies (your existing configurations).
You can use an independent build configuration (or make one of the builds the master) with a trigger at the time you need. The new configuration then starts the other configurations via a script (e.g. PowerShell) calling the REST API's "Triggering a Build" endpoint, and you can point all builds at a specific change (which can be obtained with the "Get pending changes for a build configuration" call). This guarantees that all builds start with the same revision. This approach might help if dependencies are unacceptable.
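For illustration, here is the same idea sketched with curl and jq. The server URL, credentials, and the build configuration IDs Build1..Build5 are placeholders, and the exact locator syntax may differ between TeamCity versions, so treat this as a starting point rather than a drop-in script:

```bash
#!/usr/bin/env bash
# Placeholders: adjust the server URL, credentials, and build configuration IDs.
SERVER="https://teamcity.example.com"
AUTH=(-u "$TC_USER:$TC_PASS")

# Fetch the newest change of the first configuration
# ("Get pending changes for a build configuration").
CHANGE_ID=$(curl -s "${AUTH[@]}" -H "Accept: application/json" \
  "$SERVER/httpAuth/app/rest/changes?locator=buildType:(id:Build1),count:1" \
  | jq -r '.change[0].id')

# Queue all five configurations pinned to that same change ("Triggering a Build").
for BT in Build1 Build2 Build3 Build4 Build5; do
  curl -s "${AUTH[@]}" -H "Content-Type: application/xml" \
    --data "<build><buildType id=\"$BT\"/><lastChanges><change id=\"$CHANGE_ID\"/></lastChanges></build>" \
    "$SERVER/httpAuth/app/rest/buildQueue"
done
```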
I'm trying to setup a pipeline for running all kinds of tests against pull requests into my repository. Repository is git repo hosted with Bitbucket Server and TeamCity is Enterprise 2019.1.5 (build 66605). There are a few key aspects to the task:
There are a lot of tests. One way or the other, the tests should run in parallel to achieve a reasonable execution time. The tests are already partitioned into separate TeamCity build configurations, each with a good enough execution time.
There are far fewer build agents available for the task, so it's not impossible for a particular build to spend quite some time (up to 1-2 hours) in the build queue.
The result of the testing should be reported to Bitbucket as a single aggregate value. I.e. if there are, say, 3 individual builds from p.1 with two passing and one failing, then Bitbucket should receive a single "failed" build status.
Ideally, the pipeline should be triggered by a feature branch change (refs/pull-requests/123/from in Bitbucket lingo), but check out the merge of the feature branch into the target branch (refs/pull-requests/123/merge in Bitbucket lingo).
Given the above requirements, I experimented with Composite Build Configuration (https://www.jetbrains.com/help/teamcity/composite-build-configuration.html) as it seemed a perfect fit for the job. So I set up a single composite build with the relevant builds from p.1 as snapshot dependencies, plus the "Pull Requests" and "Commit status publisher" build features. It works perfectly, except for one thing...
The only thing I cannot seem to work around is the fact that VCS roots in the dependencies collect changes when the build chain is added to the build queue. This means that because of p.2 (non-negligible maximum time spent in the build queue) some builds end up running against slightly dated sources. Ideally, I would like the builds to run against the latest sources.
So my question is: is there any way to disable revision synchronization for dependencies? Or maybe I could approach the whole problem in some completely different way, without using snapshot dependencies?
Cross-posted at community forum: https://teamcity-support.jetbrains.com/hc/en-us/community/posts/360006745840
We need to set up a snapshot dependency in TeamCity that runs just before the dependent build starts.
Currently we have the problem that the snapshot may be taken 30 minutes before the build starts (the build starts later due to other factors). Ideally we want the snapshot to be taken and the dependent build to start immediately afterwards.
Is this possible in TeamCity? Is there another way to achieve something similar?
Reading between the lines, I'm guessing the problem here is that the dependent build isn't always run immediately when its snapshot dependency finishes because an agent isn't always available. If that's the case, then my suggestion would be to merge the two builds into a single build configuration, as that will ensure that all the steps occur with no delay in between.
We're using TeamCity 9.0.4.
Our full builds take over three hours. While a build is in progress, if new commits come in, they get queued with, apparently, a VCS snapshot from the time they were queued (I can't see that behaviour specified anywhere, but it's what I've observed).
So by the time the next build is dequeued, there may be many builds queued up as developers have been committing changes. The intermediate builds are usually not useful at this point; we just want to skip straight to the latest build for that configuration.
Other build systems I've used queue only one additional build per configuration and take its VCS snapshot at the point it is dequeued. This has the effect we want.
I can't work out how to achieve this with TeamCity. What am I missing?
According to our documentation, TeamCity should perform the following build queue optimizations: https://confluence.jetbrains.com/display/TCD9/Build+Queue#BuildQueue-BuildQueueOptimizationbyTeamCity
If it does not work for you, I'd recommend upgrading your server to the most recent version first, and if that does not help, create an issue in our tracker with details about these builds.
I think you've specified this in your trigger.
Edit Configuration Settings | Triggers | VCS Trigger | Show advanced options | Trigger a build on each check-in
That option should be unchecked. The wording is a little confusing, I guess. Even with it unchecked, each VCS commit will queue a build, but it won't force them to be built in isolation.
I am using Jenkins to create nightly builds and deploy them to my Maven repository. In order to reduce the daily bandwidth for developers, I want to change the deployment logic so that a build is only deployed if changes relative to our latest deployment are detected.
I found some plugins that seem suitable, but since I'm not too familiar with Jenkins, I wanted to ask if there is an "easy" way to implement this.
I thought about some simple test before executing the deployment process:
Check if there are changes made within the project code or the dependencies
=> NO: then nothing should be deployed
=> YES: deploy the new version
I just stumbled over the Conditional BuildStep Plugin, but I'm not sure if this plugin fits our approach best. I don't want to mess up the whole configuration.
FYI: I am using Jenkins 1.608, Tortoise SVN, and the deploy-Plugin 1.1
Any answers and help are highly appreciated!
Set up your job to run nightly (current configuration).
Set up a conditional build step (shell or batch, depending on your OS) that checks the revision number in the repository and compares it to the current revision number in the workspace; see the sketch after this list.
If it is not higher, quit the build.
If it is higher, continue the build as usual.
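A minimal sketch of such a shell step, assuming a Unix agent with an svn 1.9+ command-line client on the PATH (needed for --show-item) and SVN_URL as a placeholder for your repository URL. Depending on when Jenkins updates the workspace, you may instead want to compare against a revision recorded at the last deployment:

```bash
#!/usr/bin/env bash
# SVN_URL is a placeholder for the repository URL.
REPO_REV=$(svn info --show-item last-changed-revision "$SVN_URL")
WS_REV=$(svn info --show-item last-changed-revision .)

if [ "$REPO_REV" -le "$WS_REV" ]; then
  echo "No new commits (repository r$REPO_REV, workspace r$WS_REV) - quitting."
  exit 1   # a non-zero exit aborts the build, so the deploy step never runs
fi
echo "Changes detected (r$WS_REV -> r$REPO_REV) - continuing as usual."
```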
Edit:
To avoid the build even being triggered, here is a theoretical solution.
Install the Poll SCM Now plugin; this will add a "poll now" button to the job that polls the SCM for changes. If changes are found, a build is triggered; if not, nothing happens.
Configure the polling permissions to allow anonymous users (otherwise you'd need to implement login/token handling in the next steps).
Configure the job to poll SCM infrequently (like once a year)
Configure a cron job (or a scheduled task on Windows) for a nightly schedule.
Have that cron/task do a curl or wget to http://JENKINS_URL/JOB_URL/poll
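A hypothetical crontab entry for that last step, with JENKINS_URL and JOB_URL as placeholders for your server and job path:

```bash
# Poll the job for SCM changes every night at 01:00.
0 1 * * * curl -s "http://JENKINS_URL/JOB_URL/poll"
```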
We're trying to improve our Jenkins setup. So far we have two directories: /plugins and /tests.
Our project is a multi-module project of Eclipse Plugins. The test plugins in the /tests folder are fragment projects dependent on their corresponding productive code plugins in /plugins.
Until now, we had just one Jenkins job which checked out both /plugins and /tests, built all of them and produced the Surefire results etc.
We're now thinking about splitting the project into smaller jobs corresponding to features we provide. It seems that the way we tried to do it is suboptimal.
We tried the following:
We created a job for the core feature. This job checks out the whole /plugins and /tests directories and only builds the plugins the feature is comprised of. This job has a separate pom.xml which defines the core artifact and tells about the modules contained in the feature.
We created a separate job for the tests that should be run on the feature plugins. This job uses the cloned workspace from the core job. This job is to be run after the core feature is built.
I somehow think this is less than optimal.
For instance, only the core job can update the checked out files. If only the tests are updated, the core feature does not need to be built again, but it will be.
As soon as I have a feature which is dependent on the core feature, this feature would either need to use a clone of the core feature workspace or check out its own copy of /plugins and /tests, which would lead to bloat.
Using a cloned workspace, I can't update my sources. So when I have a feature depending on another feature, I can only run its job when the core feature is updated and built.
I think I'm missing some basic stuff here. Can someone help? There definitely is an easier way for this.
EDIT: I'll try to formulate what I think would ideally happen if everything works:
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check whether the corresponding job has to run)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
Finally, the project job should
do a nightly build
check out all sources from /plugins and /tests
build all, test all, send results to Sonar
Additionally, it would be neat if the nightly build were unnecessary because the builds and test results of the project's features would be combined in the project job results.
Is something like this possible?
Starting from the end of the question. I would keep a separate nightly job that does a clean check-out (gets rid of any generated stuff before check-out), builds everything from scratch, and runs all tests. If you aren't doing a clean build, you can't guarantee that what is checked into your repository really builds.
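Under the assumption that this is a Maven/Tycho reactor build with the SonarQube Maven plugin already configured (both assumptions on my part), the nightly job's shell step can stay as simple as:

```bash
# Clean build from scratch, run all tests, then publish analysis results to Sonar.
mvn clean verify sonar:sonar
```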
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check whether the corresponding job has to run)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
[I am assuming that by "dependent features" in 1 you mean the things needed by the "feature" in 2.]
To do this, I would say that you have multiple jobs.
a job for every individual feature and every dependent feature that simply builds that feature. The jobs should be started by SCM changes for the (dependent) feature.
I wouldn't keep test jobs separate from compile jobs. That allows the possibility that successfully compiled code is never tested. Instead, I would rely on the fact that when a build step fails in Jenkins, it normally aborts further build steps.
The trick is going to be in how you thread all of these together.
Let's say we have a feature and its build job, called F1, which depends on 2 features DF1.1 and DF1.2, each with their own build jobs.
Both DF1.1 and DF1.2 should be configured to trigger the build of F1.
F1 should be configured to get the artifacts it needs from the latest successful DF1.1 and DF1.2 builds. Unfortunately, the very nice "Clone SCM" plugin is not going to be of much help here, as it only pulls from one previous job. Perhaps one of the artifact publisher plugins might be useful, or you may need to add some custom build steps to put/get artifacts; one possible sketch follows.
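As a hypothetical example of such a custom "get" step, F1 could fetch its dependencies' artifacts over Jenkins' permalink URLs (the server URL, job names, and artifact paths are all placeholders; this assumes DF1.1 and DF1.2 archive their artifacts):

```bash
#!/usr/bin/env bash
# Pre-build step of F1: download the latest successful artifacts of its dependencies.
JENKINS="http://jenkins.example.com"
for JOB in DF1.1 DF1.2; do
  curl -sfO "$JENKINS/job/$JOB/lastSuccessfulBuild/artifact/target/${JOB}.jar"
done
```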