I have three build configurations inside a build. Let's say A, B and C. C depends on B and B depends on A.
Suppose I trigger a build manually on C. Now, A, B, C steps will get queued for build.
Step A might cause a source update and a commit to source control. When this happens, I want to stop the entire build chain first, and then automatically retrigger C with the same parameters as when it was run manually in the first place, but using the new source.
Is there any way to get this done?
Yes, you can absolutely do this with native functionality, in three steps:
1. Force a failure (for A)
2. Configure "fail when dependency fails" (for B and C)
3. Configure "retrigger on failure" (for C)
Step 1
You can make a build fail manually by setting the build status. There is a post on this in the JetBrains support issues:
To fail build it should be enough to print something like this:
"##teamcity[buildStatus status='FAILURE']"
For the sake of maintainability, I recommend doing this in an extra command line build step.
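A rough sketch of such a step (the SOURCE_UPDATED flag is an assumption; use whatever signal step A actually produces when it commits):

#!/bin/sh
# Hypothetical command line build step: mark the build as failed when step A
# has committed a source update, so the chain stops and can be retried.
if [ "$SOURCE_UPDATED" = "true" ]; then
  echo "##teamcity[buildStatus status='FAILURE']"
fi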
Step 2
You can have a build canceled when one of its snapshot dependencies fails, and this cancellation chains transitively. For your build configurations B and C, configure the snapshot dependency to cancel the build if the dependency fails.
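If you maintain the project through the Kotlin DSL, a minimal sketch of that setting looks roughly like this (B stands for the dependency's build configuration object; adjust names to your project):

dependencies {
    snapshot(B) {
        // Cancel this build when the dependency fails, so the whole chain stops.
        onDependencyFailure = FailureAction.CANCEL
    }
}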
Step 3
There is a build trigger called "Retry Build Trigger" that retriggers a build when that build has failed. You may decide what retry count is appropriate for you, depending on how often A changes. Important: uncheck "Trigger new build with the same revisions", otherwise the retried build would not pick up the new source.
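As a rough Kotlin DSL sketch (the attempt count is only an example):

triggers {
    retryBuild {
        // Re-queue a failed build this many times; pick a value that fits how
        // often A commits. Make sure "Trigger new build with the same revisions"
        // stays unchecked so the retry picks up the commit made by A.
        attempts = 1
    }
}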
I have the git commits and the tags as below
The tag of the latest commit is 12.0.0-beta.6. When I run GitVersion, the result is as below
So the build number on TeamCity should be 12.0.0-beta.6, but I don't know why the GitVersion generates build number 12.0.0-beta.6+8 on TeamCity.
My build steps are as below
Does anyone know why GitVersion generates the redundant "+8" after the FullSemVer?
How can I remove this redundant "+8"?
Thank you very much.
The +8 is metadata and not actually part of the version number. In the documentation for the version variables, BuildMetaData is described as:
The build metadata, usually representing number of commits since the VersionSourceSha.
You can see the same value 8 exposed as CommitsSinceVersionSource, and this should increase with every commit made to the branch since its "version source", which in this case is the tag 12.0.0-beta.6 made on commit 549a1d.
The metadata does not increase with every build, but with every commit. I can see how the current language is confusing and the name BuildMetaData does not help. Since BuildMetaData is an exposed variable, we can't change its name, but I've submitted a PR that will hopefully clarify its documentation.
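If you do not want the metadata in the TeamCity build number at all, one option (just a sketch; how your build sets the number may differ) is to use a GitVersion variable that excludes BuildMetaData, such as SemVer:

# Prints e.g. 12.0.0-beta.6 instead of 12.0.0-beta.6+8
gitversion /showvariable SemVer

# Sketch: feed that value to TeamCity as the build number via a service message
echo "##teamcity[buildNumber '$(gitversion /showvariable SemVer)']"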
Read carefully and understand at least the first part of the GitVersion docs (paragraphs 1 and 2). The shortest possible extract:
GitVersion will increment the metadata for each build so you can tell
builds apart. For example, 1.0.0+5 followed by 1.0.0+6. It is important
to note that build metadata is not part of the semantic version; it is
just metadata!
My Gradle build has two tasks:
findRevision(type: SvnInfo)
buildWAR(type: MavenExec, dependsOn: findRevision)
Both tasks are configuration based, but the buildWAR task depends on a project property that is only defined in the execution phase of the findRevision task.
This breaks the process, as Gradle cannot find said property at the time it tries to configure the buildWAR task.
Is there any way to delay binding or configuration until another task has executed?
In this specific case I can make use of the mavenexec method instead of the MavenExec task type, but what should be done in similar scenarios where no alternative method exists?
Depending on what configuration option exactly you want to change, you might change it in the execution phase of the task with buildWAR.doFirst { }. But generally this is a really bad idea. If you e.g. change something that influences the result of the up-to-date checks, like input files, the task might execute although it would not be necessary or, even worse, not execute although it would be necessary. You can of course make the task always execute to overcome this with outputs.upToDateWhen { false }, but there might be other problems, and this way you also disable one of Gradle's biggest strengths.
It is a much better idea to redesign your build so that this is not necessary, for example by determining the revision at configuration time already. Depending on how long that step takes, this may or may not be viable. Also, depending on what you want to do with the revision, you might consider the suggestion of @LanceJava and have your findRevision task generate a file containing the revision that is then packaged into the WAR and read at runtime.
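A minimal Groovy DSL sketch of the configuration-time approach, assuming the revision comes from svn info and that buildWAR only needs it as a plain value (the command and property names are assumptions):

// Resolve the revision while the build is being configured.
ext.svnRevision = ['svn', 'info', '--show-item', 'revision'].execute().text.trim()

tasks.named('buildWAR') {
    // The value exists at configuration time now; wire it into whatever option
    // needs it. Registering it as an input also keeps up-to-date checks honest.
    inputs.property('svnRevision', svnRevision)
}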
I want to get a better idea of my build job metrics, but unfortunately make doesn't output timestamps per se.
If I run make --print-data-base, for a given target it outputs a line
# Last modified 2016-08-15 13:53:16
but that doesn't give me the duration.
QUESTION
Is there a way to get the duration of building a target without modifying each target? Some targets are inside makefiles which are generated DURING the build, so it is not feasible to modify their recipes.
POSSIBLE SOLUTION
I could implement a pre- and post-recipe for every target and output a timestamp that way.
Is that a good idea given this is a parallel make? Obviously there would be increased build time from calling a pre- and post-recipe for every target, but I'd be fine with that.
If this is a parallel make, then the "preactions", "actions" and "postactions" may be interleaved. That is, you might get output like:
Pre-action 12:03:05
Pre-action 12:03:06
building foo...
building bar...
Post-action 12:04:17
Post-action 12:04:51
So it would behoove you to pass a TARGETNAME variable to the pre-action and post-action scripts.
Also, start and end times are not all there is to know about how long an action takes when you are running things in parallel; rule A might take longer than rule B simply because rule B is running alone while rule A is sharing the processor with rules C through J.
Other than that, I see no problem with this approach.
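A GNU make sketch of that idea, using a canned recipe so each message carries the target name ($@ plays the role of TARGETNAME); the rule shown is only illustrative, and for generated makefiles the wrapper would have to be injected by whatever generates them:

# Canned recipe that brackets a command with START/END timestamps for target $@.
define timed
@echo "START $@ $$(date '+%H:%M:%S')"
$(1)
@echo "END   $@ $$(date '+%H:%M:%S')"
endef

# Illustrative rule showing how a recipe would be wrapped.
foo.o: foo.c
	$(call timed,$(CC) -c -o $@ $<)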
If I have different packages and each has a test file (pkg_test.go), is there a way for me to make sure that they run in a particular order?
Say pkg1_test.go gets executed first and then the rest.
I tried using go channels but it seems to hang.
It isn't obvious, considering go test ./... triggers tests on all packages... but runs them in parallel: see "Go: how to run tests for multiple packages?".
go test -p 1 would run the tests sequentially, but not necessarily in the order you would need.
A simple script calling go test on the packages listed in the right expected order would be easier to do.
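For example (package paths are illustrative), such a script could be as small as:

#!/bin/sh
# Run each package's tests in an explicit order; stop at the first failure.
set -e
go test ./pkg1
go test ./pkg2
go test ./pkg3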
Update 6 years later: the best practice is to not rely on test order.
So much so that issue 28592 advocates adding -shuffle and -shuffleseed to shuffle tests.
CL 310033 mentions:
This CL adds a new flag to the testing package and the go test command
which randomizes the execution order for tests and benchmarks.
This can be useful for identifying unwanted dependencies
between test or benchmark functions.
The flag is off by default.
If -shuffle is set to on then the system
clock will be used as the seed value.
If -shuffle is set to an integer N, then N will be used as the seed value.
In both cases, the seed will be reported for failed runs so that they can be reproduced later on.
Picked up for Go 1.17 (Aug. 2021) in commit cbb3f09.
See more at "Benchmarking with Go".
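For completeness, on Go 1.17+ the flag is used like this (the numeric seed is only an example):

# Shuffle the order of top-level tests and benchmarks; the seed used is
# reported so a failing order can be reproduced.
go test -shuffle=on ./...

# Re-run with an explicit seed to reproduce a previously reported order.
go test -shuffle=1637174400 ./...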
I found a hack to get around this.
I named my test files as follow:
A_{test_file1}_test.go
B_{test_file2}_test.go
C_{test_file3}_test.go
The A, B, C prefixes will ensure they are run in order.
We currently have created several jobs for our components. These components all depend on each other like the following:
A -> B -> C
Currently it is possible to run these jobs separately, independently from each other. If someone runs C, the build uses A and B artifacts from a previous build.
Now it should optionally be possible to build these jobs in a row. My first thought was some kind of BuildAll job which starts the other jobs in the right order, but it does not seem to be possible to start other jobs in a build step.
Using the "Build other projects" option is not a solution, because this would always trigger the other builds whenever someone starts A, for example.
So does anyone have an idea on how to solve this? Is something like this possible? Perhaps I missed an option/plugin that lets other jobs be used as build steps?
I would look at using the Parameterized Trigger plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin
It allows you to trigger another job as a build step, with parameters if you need them. This would allow you to create BuildAll job that calls A, then B, then C in sequence.
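If you are on Pipeline jobs rather than freestyle jobs, the built-in build step gives you the same sequencing without that plugin; a rough sketch of such a BuildAll pipeline (job names are illustrative):

// Hypothetical "BuildAll" pipeline that runs the existing jobs one after another.
pipeline {
    agent any
    stages {
        stage('A') { steps { build job: 'A' } }
        stage('B') { steps { build job: 'B' } }
        stage('C') { steps { build job: 'C' } }
    }
}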
Have you considered:
https://wiki.jenkins-ci.org/display/JENKINS/Join+Plugin
This can help you with the "Build-All" step if you want to go down that path.
However, one part that I do not understand: if A -> B -> C, how are any of them optional? If you can clarify, I might be able to help you better.