So, I have a YAML pipeline with a bash array storing a set of versions, let's say
arrayVersions=(3.0.1 3.0.2 ....).
Now, I want to set up the pipeline so that each of these versions is split out into a single job, and the jobs run in parallel on multiple agents.
CONTEXT -
I have set up the pipeline to iterate over the array, but it is very slow since it runs sequentially. I tried multithreaded parallel programming in bash, but it did not work out. Ideally, I would like to split up the versions and run each one as a new job in the pipeline. It would be something like this:
jobs:
# get all the versions
# split up each version into 1 single job and run the jobs in parallel
- job: 3.0.1
  ...
- job: 3.0.2
  ...
Is there any way I can set it up?
Have you tried using a template and calling it from the jobs section? Here's an example:
# azure-pipelines.yml
trigger:
- none

jobs:
- job: Build
  steps:
  - template: build-specific-version.yml
    parameters:
      appVersion:
      - '3.0.1'
      - '3.0.2'
      - '3.0.3'

# build-specific-version.yml
parameters:
- name: 'appVersion'
  type: object
  default:
  - '1.0'
  - '1.1'

steps:
- ${{ each v in parameters.appVersion }}:
  - script: echo ${{ v }}
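Note that the template above expands the versions as steps inside the single Build job, so they still run one after another. To get one job per version running in parallel, the ${{ each }} loop can sit at the jobs level instead. A minimal sketch (the parameter and job names here are illustrative; the dots are stripped because job names may only contain alphanumeric characters and underscores):

# azure-pipelines.yml -- one job per version, scheduled in parallel
parameters:
- name: appVersions
  type: object
  default:
  - '3.0.1'
  - '3.0.2'

jobs:
- ${{ each v in parameters.appVersions }}:
  - job: Build_${{ replace(v, '.', '_') }} # job names cannot contain dots
    displayName: Build ${{ v }}
    steps:
    - script: echo Building version ${{ v }}

Since the generated jobs have no dependsOn between them, they run in parallel, up to your available agents and parallelism limit.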
Docs: Template types & usage (Microsoft technical documentation)
Also see: Loops and arrays in Azure DevOps Pipelines
Related
I have a pipeline, part of which loops through some AKS clusters in different regions that have some namespaces, and configures them. I am trying to dynamically build job dependencies in a condition, such that a job gets enabled or disabled depending on whether files are found by a task in the job it depends on.
- ${{ each clusterFolder in parameters.clusterFolders }}:
  - ${{ each region in parameters.regions }}:
    - ${{ each namespace in parameters.aksNamespaces }}:
      - job: aksNamespaces${{ replace(region, '-', '') }}${{ replace(namespace, '-', '') }}${{ clusterFolder }}
        displayName: config aks (${{ parameters.environment }} ${{ region }}) ${{ namespace }}
        dependsOn:
        - FileCheckerAks${{ clusterFolder }}
        condition: eq(dependencies.[format('FileCheckerAks{0}', clusterFolder)].outputs['jsonChecker.hasAksJson'], 'True')
        # condition: and(eq(variables['Build.SourceBranchName'], 'main'), eq(dependencies.FileCheckerAks.outputs['jsonChecker.hasAksJson'], 'True'))
        pool:
          name: AgentsPool
        variables:
          subscriptionId: ${{ parameters.subscriptionId }}
          resourceGroup: aks-${{ region }}-rg
          KUBECONFIG: $(System.DefaultWorkingDirectory)/aks/aks-${{ region }}-01
        steps:
        - template: config-aks.yml
          parameters:
            environment: ${{ parameters.environment }}
            subscription: ${{ parameters.subscription }}
            cluster: aks-${{ region }}-01
            resourcegroup: aks-${{ region }}-rg
For the example above, if a job named FileCheckerAksDev finds a relevant JSON file, then the steps template config-aks.yml should run. Otherwise, if different types of files are found, the steps should not run.
The [format('FileCheckerAks{0}', clusterFolder)] part of the condition should create a job name that looks like FileCheckerAksDev, FileCheckerAksTest, etc. However, when I run the pipeline, the job name is always resolved as FileCheckerAks, implying that the looped clusterFolder parameter is not being concatenated correctly, or that I am somehow using the wrong syntax.
Does anyone know how I should use the parameter in the job dependency so that I can dynamically set the job name?
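One thing that may be worth trying (an assumption about the syntax, not a verified fix): since clusterFolder is a template parameter, it can be concatenated into the dependency name at template-expansion time with ${{ }}, instead of inside the runtime format() call, which only sees runtime values:

# hypothetical rewrite of the condition: the job name is assembled before
# the pipeline runs, so the runtime expression sees a literal job name
condition: eq(dependencies.FileCheckerAks${{ clusterFolder }}.outputs['jsonChecker.hasAksJson'], 'True')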
I have a CircleCI workflow that has 2 jobs. The second job (gradle/test) depends on the first one creating some files for it.
The problem is that the first job runs inside a Docker container and the second job (gradle/test) does not, so gradle/test fails because it cannot find the files the first job created. How can I set up gradle/test to work in the same workspace?
Here is a code of the workflow:
version: 2.1

orbs:
  gradle: circleci/gradle@2.2.0

executors:
  daml-executor:
    docker:
      - image: cimg/openjdk:11.0-node
...
workflows:
  checkout-build-test:
    jobs:
      - daml_test:
          daml_sdk_version: "2.2.0"
          context: refapps
      - gradle/test:
          app_src_directory: prototype
          executor: daml-executor
          requires:
            - daml_test
Can anyone help me configure gradle/test correctly?
CircleCI has a mechanism to share artifacts between jobs called "workspace" (well, they have multiple ones, but workspace is what you want here).
Concretely, you would add this at the end of your daml_test job definition, as an additional step:
- persist_to_workspace:
    root: /path/to/folder
    paths:
      - "*"
and that would add all the files from /path/to/folder to the workspace. On the other side, you can "mount" the workspace in your gradle/test job by adding something like this before the step where you need the files:
- attach_workspace:
    at: /whatever/mountpoint
I like to use /tmp/workspace for the path on both sides, but that's just personal preference.
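Since gradle/test is an orb job whose steps you don't define yourself, the attach_workspace step can be injected through the pre-steps hook that CircleCI exposes for jobs in a workflow. A sketch, reusing /tmp/workspace as the mount point:

workflows:
  checkout-build-test:
    jobs:
      - daml_test:
          daml_sdk_version: "2.2.0"
          context: refapps
      - gradle/test:
          app_src_directory: prototype
          executor: daml-executor
          # attach the files persisted by daml_test before the orb's own steps run
          pre-steps:
            - attach_workspace:
                at: /tmp/workspace
          requires:
            - daml_test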
I have a GitLab CI/CD pipeline, and it generally works OK.
My problem is that my testing takes more than 10 minutes and it is not stable (YET...), so occasionally it fails at random on a minor test that I don't care about.
Generally it works after a retry, but if I need an urgent deploy, I have to wait another 10 minutes.
When we have an urgent bug, another 10 minutes is waaaay too much time, so I am looking for a way to force a deploy even when a test has failed.
I have the following pseudo CI YAML scenario that I have failed to find a way to accomplish:
stages:
  - build
  - test
  - deploy

setup_and_build:
  stage: build
  script:
    - build.sh

test_branch:
  stage: test
  script:
    - test.sh

deploy:
  stage: deploy
  script:
    - deploy.sh
  only:
    - master
I'm looking for a way to deploy manually if the test phase failed, but if I add when: manual to the deploy job, then the deploy never happens automatically.
A flag like when: auto_or_manual_on_previous_fail would be great, but currently there is no such flag in GitLab CI.
Do you have any idea for a workaround or a way to implement this?
Another approach would be to skip the tests entirely in the case of an emergency release.
For that, follow "Skipping Tests in GitLab CI" from Andi Scharfstein, and:
- add "[skip tests]" to the commit message triggering the emergency release
- check a variable on the test stage
That is:
.test-template: &test-template
  stage: tests
  except:
    variables:
      - $CI_COMMIT_MESSAGE =~ /\[skip[ _-]tests?\]/i
      - $SKIP_TESTS
As you can see above, we also included the variable $SKIP_TESTS in the except block of the template.
This is helpful when triggering pipelines manually from GitLab's web interface, where $SKIP_TESTS can be set as a pipeline variable.
It's possible to control the when attribute of your deploy job by leveraging parent-child pipelines (GitLab 12.7 and above). This lets you decide whether the job in the child pipeline runs as manual or always.
Essentially, you will need to have a .gitlab-ci.yml with:
stages:
  - build
  - test
  - child-deploy
The child-deploy stage will be used to run the child pipeline, in which the deploy job will run with the desired attribute.
Your test job can generate the code for the deploy section as an artifact. For example, in the after_script section of your test, you can check the value of the CI_JOB_STATUS built-in variable to decide whether the child job should run as manual or always:
my_test:
  stage: test
  script:
    - echo "testing, exit 0 on success, exit 1 on failure"
  after_script:
    - if [ "$CI_JOB_STATUS" == "success" ]; then WHEN=always; else WHEN=manual; fi
    - |
      cat << EOF > deploy.yml
      stages:
        - deploy

      my_deploy:
        stage: deploy
        rules:
          - when: $WHEN
        script:
          - echo "deploying"
      EOF
  artifacts:
    when: always
    paths:
      - deploy.yml
Note that with the unquoted EOF delimiter, the shell expands $WHEN while writing deploy.yml, so the child pipeline sees a literal when: always or when: manual. Any $ that should stay literal in deploy.yml would need to be escaped (a single-quoted 'EOF' would disable expansion entirely and leave an unresolved $WHEN in the file).
Finally, you can trigger the child pipeline with deploy.yml:
gen_deploy:
  stage: child-deploy
  when: always
  trigger:
    include:
      - artifact: deploy.yml
        job: my_test
    strategy: depend
I'm setting up an Azure Pipelines build that needs to package a C# .NET class library into a NuGet package.
In this documentation, it lists a couple of different ways to automatically generate SemVer strings. In particular, I want to implement this one:
$(Major).$(Minor).$(rev:.r), where Major and Minor are two variables defined in the build pipeline. This format will automatically increment the build number and the package version with a new patch number. It will keep the major and minor versions constant, until you change them manually in the build pipeline.
But that's all they say about it, no example is provided. A link to learn more takes you to this documentation, where it says this:
For byBuildNumber, the version will be set to the build number, ensure that your build number is a proper SemVer e.g. 1.0.$(Rev:r). If you select byBuildNumber, the task will extract a dotted version, 1.2.3.4 and use only that, dropping any label. To use the build number as is, you should use byEnvVar as described above, and set the environment variable to BUILD_BUILDNUMBER.
Again, no example is provided. It looks like I want to use versioningScheme: byBuildNumber, but I'm not quite sure how to set the build number. I think it pulls it from the BUILD_BUILDNUMBER environment variable, but I can't find a way to set environment variables, only script variables. Furthermore, am I supposed to just set that to 1.0.$(Rev:r), or to $(Major).$(Minor).$(rev:.r)? I'm afraid it would just be interpreted literally.
Googling for the literal string "versioningScheme: byBuildNumber" returns a single result... Does anyone have a working azure-pipelines.yml with this versioning scheme?
Working YAML example for Packaging/Versioning using byBuildNumber
NOTE the second parameter of the counter: it is a seed value, which is really useful when migrating builds from other build systems like TeamCity, because it allows you to set the next build version explicitly upon migration. After the migration and the initial build in Azure DevOps, the seed can be set back to zero or whatever start value (like 100) you prefer, every time majorMinorVersion is changed:
reference: counter expression
name: $(majorMinorVersion).$(semanticVersion) # $(rev:r) # NOTE: rev resets when the default retention period expires

pool:
  vmImage: 'vs2017-win2016'

# pipeline variables
variables:
  majorMinorVersion: 1.0
  # semanticVersion counter is automatically incremented by one in each execution of pipeline
  # second parameter is seed value to reset to every time the referenced majorMinorVersion is changed
  semanticVersion: $[counter(variables['majorMinorVersion'], 0)]
  projectName: 'MyProjectName'
  buildConfiguration: 'Release'

# Only run against master
trigger:
- master

steps:
# Build
- task: DotNetCoreCLI@2
  displayName: Build
  inputs:
    projects: '**/*.csproj'
    arguments: '--configuration $(BuildConfiguration)'

# Package
- task: DotNetCoreCLI@2
  displayName: 'NuGet pack'
  inputs:
    command: 'pack'
    configuration: $(BuildConfiguration)
    packagesToPack: '**/$(ProjectName)*.csproj'
    packDirectory: '$(build.artifactStagingDirectory)'
    versioningScheme: byBuildNumber # https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/build/dotnet-core-cli?view=azure-devops#yaml-snippet

# Publish
- task: DotNetCoreCLI@2
  displayName: 'Publish'
  inputs:
    command: 'push'
    nuGetFeedType: 'internal'
    packagesToPush: '$(build.artifactStagingDirectory)/$(ProjectName)*.nupkg'
    publishVstsFeed: 'MyPackageFeedName'
byBuildNumber uses the build number you define in your YAML with the name field.
Ex: name: $(Build.DefinitionName)-$(date:yyyyMMdd)$(rev:.r)
So if you set your build number format to name: 1.0.$(Rev:r), it should work as you expect.
I had a similar issue; let me make it clear now.
Firstly, what is the definition of the build number?
According to the official document on the Azure Pipelines YAML schema, it is:
name: string # build numbering format
resources:
  containers: [ containerResource ]
  repositories: [ repositoryResource ]
variables: { string: string } | [ variable | templateReference ]
trigger: trigger
pr: pr
stages: [ stage | templateReference ]
Look at the first line:
name: string # build numbering format
Yes, that's it!
So you could define it like
name: 1.0.$(Rev:r)
if you prefer Semantic Versioning.
Secondly, what's the meaning of versioningScheme: 'byBuildNumber' in the NuGetCommand@2 task?
It's really straightforward: just use the format defined by name!
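Putting the two together, a minimal sketch (the pack inputs here are illustrative):

# the build number defined by name above becomes the package version
name: 1.0.$(Rev:r)

steps:
- task: NuGetCommand@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    versioningScheme: 'byBuildNumber'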
Last but not least:
The official documents on Package Versioning and Pack NuGet packages don't make it clear what a build number really is or how to define it. It's really misleading, and I'm sad that, as an MS employee, I had to resort to external resources to make all that clear.
Azure Pipeline Nuget Package Versioning Scheme, How to Get “1.0.$(Rev:r)”
This should be an issue in the documentation. I reproduced this issue when I set $(Major).$(Minor).$(rev:.r) as the Build number format in the Options of the build pipeline.
However, after many test builds I noticed that the build number was not correct with that format: there are two dots between 0 and 2. Obviously this is very strange. So, I changed the Build number format to:
$(Major).$(Minor)$(rev:.r)
Or
$(Major).$(Minor).$(rev:r)
Now, everything is working fine.
As a test, I set the Build number format to just $(rev:.r), and the build number came out as .x. So we can confirm that the value of $(rev:.r) includes the . by default.
Note: since Major and Minor are two variables defined in the build pipeline, we need to define them manually under variables.
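In YAML terms, a minimal sketch of the corrected setup (the variable values are illustrative):

variables:
  Major: '1'
  Minor: '0'

# no extra dot before $(rev:.r): that macro already includes a leading dot
name: $(Major).$(Minor)$(rev:.r)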
Hope this helps.
Issues
My issues:
- when trying the answer by @Emil, my first package started at 2.0 (I did no further testing to investigate)
- when trying the answer by @Leo Liu-MSFT, I was unable to find the matching "Options" tab
I therefore used this solution by @LanceMcCarthy.
Fix
Set the variables:
variables:
  major: '1'
  minor: '0'
  revision: $[counter(variables['minor'], 1)] # This will get reset every time minor gets bumped.
  nugetVersion: '$(major).$(minor).$(revision)'
then use nugetVersion as an environment variable when packing:
- task: NuGetCommand@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    packDestination: '$(Build.ArtifactStagingDirectory)'
    versionEnvVar: 'nugetVersion'
    versioningScheme: 'byEnvVar'
I think the issue many of us have is that there is no Options menu. I upvoted SHarpC's post, as this worked for me.
I have a workaround to add a suffix (i.e. '-beta'), since it is for some reason ignored by the NuGet pack command when using the Classic editor with auto-versioning by build number:
1. Set a new environment variable whose value is the predefined $(Build.BuildNumber) variable.
2. Set the build number.
3. Set the NuGet pack command to auto-name by environment variable and specify the newly added variable name.
If you're interested in the whole build/release pipeline design and YAML, have a look at my article here
The core of the problem is solved in the approved answer and refined in @Emil's answer, so this is just another approach to the azure-pipelines.yml that works for us with DevOps artifacts.
name: $(majorMinorVersion).$(semanticVersion)

trigger:
- main

variables:
  majorMinorVersion: 1.0
  semanticVersion: $[counter(variables['majorMinorVersion'], 0)]

pool:
  vmImage: windows-latest

steps:
- task: DotNetCoreCLI@2
  displayName: 'Create Packages'
  inputs:
    command: pack
    configuration: 'Release'
    packagesToPack: '**/<VS projectname>.csproj'
    versioningScheme: byBuildNumber

- task: NuGetAuthenticate@0
  displayName: 'NuGet Authenticate'

- task: NuGetCommand@2
  displayName: 'NuGet Push to feed'
  inputs:
    command: push
    publishVstsFeed: '<DevOps projectname>/<feed name>'
BTW: Don't forget this little hint
I'm trying to create a very abstract .yaml configuration which can be reused in different places.
Right now it looks like this:
.release: &release
  stage: release
  script:
    - docker tag $IMAGE_TESTING $IMAGE_RELEASE
    - docker push $IMAGE_RELEASE
  only:
    - master
  when: manual

.amd64: &amd64
  BASE_ARCH: 'amd64'

.debalike: &debalike
  FLAVOUR: 'debalike'

release_debalike_amd64:
  <<: *release
  variables:
    <<: [*amd64, *debalike]
Which correctly gets parsed into ...
.release:
  stage: release
  script:
    - 'docker tag $IMAGE_TESTING $IMAGE_RELEASE'
    - 'docker push $IMAGE_RELEASE'
  only:
    - master
  when: manual

.amd64:
  BASE_ARCH: amd64

.debalike:
  FLAVOUR: debalike

release_debalike_amd64:
  stage: release
  script:
    - 'docker tag $IMAGE_TESTING $IMAGE_RELEASE'
    - 'docker push $IMAGE_RELEASE'
  only:
    - master
  when: manual
  variables:
    BASE_ARCH: amd64
    FLAVOUR: debalike
Which is the desired behaviour.
But would it be possible to avoid using the variables key in release_debalike_amd64 and include the anchors directly?
Something like this (which does not work):
.release: &release
  stage: release
  script:
    - docker tag $IMAGE_TESTING $IMAGE_RELEASE
    - docker push $IMAGE_RELEASE
  only:
    - master
  when: manual

.amd64: &amd64
  variables:
    BASE_ARCH: 'amd64'

.debalike: &debalike
  variables:
    FLAVOUR: 'debalike'

release_debalike_amd64:
  <<: *release
  <<: [*amd64, *debalike]
Right now the YAML parser ignores *debalike and just includes the values from *amd64.
Is there any way to achieve this? This is a .gitlab-ci.yml, if it matters.
Unfortunately not. It is not possible to do deep merges using only YAML anchors and aliases.
GitLab EE introduced CI includes in 10.5, and they were enhanced in 10.8 to do deep merges of CI jobs. I don't think that's going to help you in this particular case, but it's something you might be able to leverage in other ways, depending on how you organize your CI files.
See include for more information about the include parameter.
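For what it's worth, on newer GitLab versions the extends keyword (multiple parents are supported from GitLab 12.0) can merge the hidden jobs for you, including their variables hashes, which plain anchors cannot do. A minimal sketch, reusing the hidden jobs from the second snippet above:

.release:
  stage: release
  script:
    - docker tag $IMAGE_TESTING $IMAGE_RELEASE
    - docker push $IMAGE_RELEASE
  only:
    - master
  when: manual

.amd64:
  variables:
    BASE_ARCH: 'amd64'

.debalike:
  variables:
    FLAVOUR: 'debalike'

# extends deep-merges the listed jobs, so the final job gets the release
# settings plus both BASE_ARCH and FLAVOUR under variables
release_debalike_amd64:
  extends:
    - .release
    - .amd64
    - .debalike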