Azure Pipeline Nuget Package Versioning Scheme, How to Get "1.0.$(Rev:r)" - continuous-integration

I'm setting up an Azure Pipelines build that needs to package a C# .NET class library into a NuGet package.
In this documentation, it lists a couple different ways to automatically generate SemVer strings. In particular, I want to implement this one:
$(Major).$(Minor).$(rev:.r), where Major and Minor are two variables
defined in the build pipeline. This format will automatically
increment the build number and the package version with a new patch
number. It will keep the major and minor versions constant, until you
change them manually in the build pipeline.
But that's all they say about it, no example is provided. A link to learn more takes you to this documentation, where it says this:
For byBuildNumber, the version will be set to the build number, ensure
that your build number is a proper SemVer e.g. 1.0.$(Rev:r). If you
select byBuildNumber, the task will extract a dotted version, 1.2.3.4
and use only that, dropping any label. To use the build number as is,
you should use byEnvVar as described above, and set the environment
variable to BUILD_BUILDNUMBER.
Again, no example is provided. It looks like I want to use versioningScheme: byBuildNumber, but I'm not quite sure how to set the build number. I think it pulls it from the BUILD_BUILDNUMBER environment variable, but I can't find a way to set environment variables, only script variables. Furthermore, am I supposed to just set that to 1.0.$(Rev:r), or to $(Major).$(Minor).$(rev:.r)? I'm afraid that would just be interpreted literally.
Googling for the literal string "versioningScheme: byBuildNumber" returns a single result... Does anyone have a working azure-pipelines.yml with this versioning scheme?

Working YAML example for Packaging/Versioning using byBuildNumber
NOTE the second parameter of the counter - it is a seed value, which is really useful when migrating builds from other build systems like TeamCity, because it lets you set the next build version explicitly upon migration. After the migration and the initial build in Azure DevOps, the seed can be set back to zero (or whatever start value you prefer, like 100) every time majorMinorVersion is changed:
reference: counter expression
name: $(majorMinorVersion).$(semanticVersion) # NOTE: $(rev:r) resets when the default retention period expires

pool:
  vmImage: 'vs2017-win2016'

# pipeline variables
variables:
  majorMinorVersion: 1.0
  # semanticVersion counter is automatically incremented by one in each execution of pipeline
  # second parameter is seed value to reset to every time the referenced majorMinorVersion is changed
  semanticVersion: $[counter(variables['majorMinorVersion'], 0)]
  projectName: 'MyProjectName'
  buildConfiguration: 'Release'

# Only run against master
trigger:
- master

steps:
# Build
- task: DotNetCoreCLI@2
  displayName: Build
  inputs:
    projects: '**/*.csproj'
    arguments: '--configuration $(BuildConfiguration)'

# Package
- task: DotNetCoreCLI@2
  displayName: 'NuGet pack'
  inputs:
    command: 'pack'
    configuration: $(BuildConfiguration)
    packagesToPack: '**/$(ProjectName)*.csproj'
    packDirectory: '$(build.artifactStagingDirectory)'
    versioningScheme: byBuildNumber # https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/build/dotnet-core-cli?view=azure-devops#yaml-snippet

# Publish
- task: DotNetCoreCLI@2
  displayName: 'Publish'
  inputs:
    command: 'push'
    nuGetFeedType: 'internal'
    packagesToPush: '$(build.artifactStagingDirectory)/$(ProjectName)*.nupkg'
    publishVstsFeed: 'MyPackageFeedName'

byBuildNumber uses the build number you define in your YAML with the name field.
Ex: name: $(Build.DefinitionName)-$(date:yyyyMMdd)$(rev:.r)
So if you set your build format to name: 1.0.$(Rev:r), it should work as you expect.
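For the Major/Minor format from the question, a minimal sketch of the top of azure-pipelines.yml (illustrative values, not a verified pipeline) could be:

# Major and Minor are ordinary pipeline variables; $(Rev:.r) expands to ".<revision>"
name: $(Major).$(Minor)$(Rev:.r)   # produces build numbers like 1.0.1, 1.0.2, ...

variables:
  Major: '1'
  Minor: '0'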

I had a similar issue, so let me make it clear.
Firstly, what is the definition of Build Number?
According to the official documentation of the Azure Pipelines YAML schema, it is
name: string # build numbering format
resources:
  containers: [ containerResource ]
  repositories: [ repositoryResource ]
variables: { string: string } | [ variable | templateReference ]
trigger: trigger
pr: pr
stages: [ stage | templateReference ]
Look at the first line:
name: string # build numbering format
Yes, that's it!
So you could define it like
name: 1.0.$(Rev:r)
if you prefer Semantic Versioning.
Secondly, what's the meaning of versioningScheme: 'byBuildNumber' in the NuGetCommand@2 task?
It's really straightforward: just use the format defined by name!
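Putting the two pieces together, a minimal sketch (not a complete pipeline) might look like:

name: 1.0.$(Rev:r)   # the build number doubles as the SemVer package version

steps:
- task: NuGetCommand@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    versioningScheme: 'byBuildNumber'   # pack takes the version from the build number above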
Last but not least
The official documents on Package Versioning and Pack NuGet packages don't make it clear what a build number really is or how to define it. It's really misleading. And as an MS employee I'm sad that I had to resort to external resources to make all that clear.

Azure Pipeline Nuget Package Versioning Scheme, How to Get “1.0.$(Rev:r)”
This should be an issue in the documentation. I reproduced this issue when I set $(Major).$(Minor).$(rev:.r) as the Build number format in the Options of the build pipeline.
However, after many build tests I noticed that the build number produced by that format is not correct:
there are two dots between the minor version and the revision (e.g. 1.0..2). Obviously this is very strange. So, I changed the Build number format to:
$(Major).$(Minor)$(rev:.r)
Or
$(Major).$(Minor).$(rev:r)
Now, everything is working fine.
As a test, I set the Build number format to just $(rev:.r), and the resulting build number was .x. So we can confirm that the value of $(rev:.r) includes the . by default.
Note: Since Major and Minor are two variables defined in the build pipeline, we need to define them manually in the pipeline variables.
Hope this helps.

Issues
My issues:
when trying the answer by @Emil, my first package started at 2.0 (I did no further testing to investigate)
when trying the answer by @Leo Liu-MSFT, I was unable to find the matching "Options" tab.
I therefore used this solution by @LanceMcCarthy.
Fix
Set the variables:
variables:
  major: '1'
  minor: '0'
  revision: $[counter(variables['minor'], 1)]  # This will get reset every time minor gets bumped.
  nugetVersion: '$(major).$(minor).$(revision)'
then use nugetVersion as an environment variable when packing:
- task: NuGetCommand@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    packDestination: '$(Build.ArtifactStagingDirectory)'
    versionEnvVar: 'nugetVersion'
    versioningScheme: 'byEnvVar'

I think the issue many of us have is that there is no Options menu. I upvoted SHarpC's post as this worked for me.

I have a workaround to add a suffix (e.g. '-beta'), since for some reason it is ignored by the NuGet pack command when using the Classic editor and setting auto versioning by build number:
Set a new environment variable and set its value to the predefined $(Build.BuildNumber) variable.
Set the build number.
Set the NuGet pack command to auto-name by environment variable and specify the newly added variable name.
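In YAML terms, the same idea looks roughly like the sketch below (the variable name packageVersion and the '-beta' suffix are only illustrations):

variables:
  packageVersion: '$(Build.BuildNumber)-beta'  # build number plus the suffix that pack otherwise drops

steps:
- task: NuGetCommand@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    versioningScheme: 'byEnvVar'
    versionEnvVar: 'packageVersion'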
If you're interested in the whole build/release pipeline design and YAML, have a look at my article here

The core of the problem is solved in the accepted answer and refined in @Emil's answer, so this is just another approach to the azure-pipelines.yml that works for us with DevOps artifacts.
name: $(majorMinorVersion).$(semanticVersion)

trigger:
- main

variables:
  majorMinorVersion: 1.0
  semanticVersion: $[counter(variables['majorMinorVersion'], 0)]

pool:
  vmImage: windows-latest

steps:
- task: DotNetCoreCLI@2
  displayName: 'Create Packages'
  inputs:
    command: pack
    configuration: 'Release'
    packagesToPack: '**/<VS projectname>.csproj'
    versioningScheme: byBuildNumber

- task: NuGetAuthenticate@0
  displayName: 'NuGet Authenticate'

- task: NuGetCommand@2
  displayName: 'NuGet Push to feed'
  inputs:
    command: push
    publishVstsFeed: '<DevOps projectname>/<feed name>'
BTW: Don't forget this little hint

Related

How to reference another yml file from the main github action yaml file?

I'm defining a GitHub Actions workflow that references another YAML file, hoping to organise the configuration in a tidier way.
Here is my job file, named deploy.yml, in the path ./.github/workflows/, where the first . is the root folder of my project.
....
jobs:
  UnitTest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: ./.github/workflows/unittest.yml
In the same ./.github/workflows/ folder, I created another file called unittest.yml as below:
name: "UnitTest"
description: "Perform Unit Test"
runs:
# using: "composite"
- name: Dependency
run: |
echo "Dependency setup commands go here"
- name: UnitTest
run: make test.unit
However, when I tried to test the script locally using act with command act --secret-file .secrets --container-architecture linux/amd64, I received the following error:
[Deploy/UnitTest] ✅ Success - Main actions/checkout@v3
[Deploy/UnitTest] ⭐ Run Main ./.github/workflows/unittest.yml
[Deploy/UnitTest] ❌ Failure - Main ./.github/workflows/unittest.yml
[Deploy/UnitTest] file does not exist
[Deploy/UnitTest] 🏁 Job failed
I have tried putting just the file name unittest.yml, or ./unittest.yml, or myrepo_name/.github/workflows/unittest.yml, or putting the file into a subfolder as step 2 of this document illustrates, but no luck.
Based on examples of runs for composite actions, I would imagine this should work.
Would anyone please advise?
P.S. You might have noticed the commented-out line using: "composite" in unittest.yml. If I uncomment the line, I receive the error:
Error: yaml: line 3: did not find expected key
Composite actions are not referenced by YAML file, but by folder. In that folder, you are expected to have an action.yml describing the action.
This is why you're getting the error with using: composite: you're defining a workflow (because it's in ./.github/workflows), but you are using action syntax.
I would advise this folder structure:
.github/
|-- workflows/
|   |-- deploy.yml
unittest-action/
|-- action.yml
With this structure, you should be able to reference the action with
- uses: actions/checkout@v3
- uses: ./unittest-action
Please see the docs for more information.
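As a rough sketch (reusing the step contents from the question), unittest-action/action.yml could look like this; note that composite run steps need an explicit shell:

name: "UnitTest"
description: "Perform Unit Test"
runs:
  using: "composite"
  steps:
    - name: Dependency
      shell: bash
      run: echo "Dependency setup commands go here"
    - name: UnitTest
      shell: bash
      run: make test.unit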
Depending on your use-case and setup, you might also want to consider reusable workflows.
You can define a reusable workflow in your .github/workflows directory like so:
# unittest.yml
on: workflow_call

jobs:
  deploy:
    # ...
and then call it like so:
jobs:
  UnitTest:
    uses: ./.github/workflows/unittest.yml
Note how the reusable workflow is an entire job. This means, you can't do the checkout from the outside and then just run the unit test in the reusable job. The reusable job (unittest.yml) needs to do the checkout first.
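A hedged sketch of unittest.yml as a reusable workflow that does its own checkout (job and step names reuse those from the question):

# .github/workflows/unittest.yml
on: workflow_call

jobs:
  UnitTest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3   # the reusable job checks out the code itself
      - name: Dependency
        run: echo "Dependency setup commands go here"
      - name: UnitTest
        run: make test.unit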
Which one to pick?
Here's a blog post summarising some of the differences between composite actions and reusable workflows, like:
reusable workflows can contain several jobs, composite actions only contain steps
reusable workflows have better support for using secrets
composite actions can be nested, but as of Jul '22, reusable workflows can't call other reusable workflows

PROJECT_ID env and Secret Manager Access

I would like to use Secret Manager to store a credential to our Artifactory within a Cloud Build step. I have it working using a build similar to:
steps:
- name: 'busybox:glibc'
  entrypoint: 'sh'
  args: ['-c', 'env']
  secretEnv: ['SECRET_VALUE']
availableSecrets:
  secretManager:
  - versionName: "projects/PROJECT_ID/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'
All great, no problems - I then try and slightly improve it to:
steps:
- name: 'busybox:glibc'
  entrypoint: 'sh'
  args: ['-c', 'env']
  secretEnv: ['SECRET_VALUE']
availableSecrets:
  secretManager:
  - versionName: "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'
But then it throws the error:
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: failed to get secret name from secret version "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
I have been able to add a TRIGGER-level env var (SECRET_MANAGER_PROJECT_ID), and that works fine. The only issue is that, since it is a trigger env var, it is not available on rebuild, which breaks a lot of things.
Does anyone know how to get the PROJECT_ID of a Secret Manager from within CloudBuild without using a Trigger Param?
For now, it's not possible to set a dynamic value in the secret field. I already provided this feedback directly to the Google Cloud PM; it has been taken into account, but I don't have more info to share, especially regarding availability.
EDIT 1
(January '22). Thanks to Seza443's comment, I tested again and it now works with the automatically populated variables (PROJECT_ID and PROJECT_NUMBER), and also with customer-defined substitution variables!
It appears that Cloud Build now allows for the use of substitution variables within the availableSecrets field of a build configuration.
From Google Cloud's documentation on using secrets:
After all the build steps, add an availableSecrets field to specify the secret version and environment variables to use for your secret. You can include substitution variables in the value of the secretVersion field. You can specify more than one secret in a build.
I was able to use the $PROJECT_ID variable in my own build configuration like so:
...
availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_ID/secrets/api-key/versions/latest
    env: API_KEY
Granted, there appears to be (at least at present) some discrepancy between the documentation quoted above and the recommended configuration file schema. In the documentation they refer to secretVersion, but that appears to have changed to versionName. In either case, it seems to work properly.
Use the $PROJECT_NUMBER instead.
https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values#using_default_substitutions

Is it possible to generate tags dynamically in Google Cloud Build?

First of all: I am somewhat new to Cloud Build. Compared to previously used methods, I find it a wrenching, immature and fairly annoying framework. Endless time is spent getting builders to work that supposedly work out of the box (like the helm builder, for example), and its limitations are astonishing and frustrating. Perhaps the following problem is a good example:
I would like to build and push a docker image. According to the documentation, the images to be pushed to the docker repository at the end (I'm using GCR for this) reside in the following configuration section in my cloudbuild.yaml file:
images:
- 'eu.gcr.io/$PROJECT_ID/my-project:${_TAG}'
- 'eu.gcr.io/$PROJECT_ID/my-project:latest'
I can set the _TAG substitution manually by using the section:
substitutions:
  _TAG: x.y.z
but that means I have to manually fix the version number in this file every time. Worse still: if I branch out, I need to maintain the version number all the time. I have a Python project in this case and it uses setuptools; the version is naturally contained in the setup.py file and I can parse it out with no problem. Attempts to write the number into a separate file and use $(cat VERSION) in the images section fail, because the system claims it can't substitute the $(cat VERSION) part. So how can I overwrite the _TAG variable from inside another build step such that it appears correctly in the 'images' section?
If you are using triggered builds from Cloud Source Repositories, GitHub, or Bitbucket you can tag your commit and use the $TAG_NAME default substitution variable.
images:
- 'eu.gcr.io/$PROJECT_ID/my-project:$TAG_NAME'
- 'eu.gcr.io/$PROJECT_ID/my-project:latest'
On the other hand if you are using the Cloud SDK to submit the Cloud Build build you can provide values with the --substitutions argument:
gcloud builds submit [SOURCE] --config config.yaml --substitutions _TAG=x.y.z
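For the setup.py case described in the question, one possible (unverified) approach is to read the version from setuptools on the machine submitting the build and pass it in as that substitution:

# read the package version from setup.py, then hand it to Cloud Build as the _TAG substitution
_TAG=$(python setup.py --version)
gcloud builds submit [SOURCE] --config config.yaml --substitutions _TAG="${_TAG}"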
Also I believe you would find this GitOps-style continuous delivery with Cloud Build tutorial very helpful. It explains how to create a continuous integration and delivery (CI/CD) pipeline on Google Cloud Platform using Cloud Build.
You can tag your image with several tags using cloudbuild.yaml and define the Docker build step with:
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest'
      - .
      - '-f'
      - Dockerfile.prod
    id: Build
And resulting images with:
images:
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest'

circleci v2 config - how do we filter by owner in a workflow?

In the circleci version 1 config, there was the option to specify owner as an option in a deployment. An example from the circleci docs ( https://circleci.com/docs/1.0/configuration/ ) with owner: circleci being the key line:
deployment:
  master:
    branch: master
    owner: circleci
    commands:
      - ./deploy_master.sh
In version 2 of the config, there is the ability to use filters and tags to specify which branches are built, but I have yet to find (in the docs, or on the interwebs) anything that gives me the same capability.
What I'm trying to achieve is run build and test steps on forks, but only run the deploy steps if the repository owner is the main repo. Quite often people fork using the same branch name - in this case master - so having a build fail due to an inability to deploy is counter-intuitive, especially as I would like to use a protected branch in git and only merge commits based on a successful build in a pull request.
I realise we could move to only running builds based on tags being present, but nothing is stopping somebody with a fork also creating a tag in their fork, which puts us back at square one.
Is anybody aware of how to specify the owner of a repo in the version 2 config?
An example from the version 2 config document ( https://circleci.com/docs/2.0/workflows/ ) in case it helps jog somebody's memory:
workflows:
  version: 2
  un-tagged-build:
    jobs:
      - build:
          filters:
            tags:
              ignore: /^v.*/
  tagged-build:
    jobs:
      - build:
          filters:
            branches:
              ignore: /.*/
            tags:
              only: /^v.*/
disclaimer: Developer Evangelist at CircleCI
That feature is not available on CircleCI 2.0. You can request it here.
As an alternative, you might be able to look for the branch name, say master, as well as the CIRCLE_PR_NUMBER environment variable. If that variable has any value, then it's a fork of master and you shouldn't deploy.
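A hedged sketch of what that check could look like in a deploy job (the executor image is a placeholder, and deploy_master.sh is taken from the question's v1 config):

jobs:
  deploy:
    docker:
      - image: cimg/base:stable   # placeholder executor image
    steps:
      - checkout
      - run:
          name: Deploy only when this is not a forked pull request
          command: |
            if [ -n "${CIRCLE_PR_NUMBER}" ]; then
              echo "Forked pull request detected - skipping deploy."
              exit 0
            fi
            ./deploy_master.sh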

Concat variable names in GitLab

We use a GitLab project in a team. Each developer has their own Kubernetes cluster in the cloud and their own branch within GitLab. We use GitLab CI to automatically build new containers and deploy them to our Kubernetes clusters.
At the moment we have a .gitlab-ci.yml that looks something like this:
variables:
  USERNAME: USERNAME
  CI_K8S_PROJECT: ${USERNAME_CI_K8S_PROJECT}
  REGISTRY_JSON_KEY_FILE: ${USERNAME_REGISTRY_JSON_KEY_FILE}
  [...]

stages:
  - build
  - deploy
  - remove

build-zeppelin:
  stage: build
  image: docker:latest
  variables:
    image_name: "zeppelin"
  only:
    - ${USERNAME}@Gitlab-Repo
  tags:
    - cloudrunner
  script:
    - docker login -u _json_key -p "${REGISTRY_JSON_KEY_FILE?}" https://eu.gcr.io
    - image_name_fqdn="eu.gcr.io/${CI_K8S_PROJECT?}/${image_name?}:latest"
    - docker build -t ${image_name_fqdn?} .
    - docker push ${image_name_fqdn?}
    - echo "Your new image is '${image_name_fqdn?}'. Have fun!"
[...]
So at the beginning we reference the important information by using a USERNAME prefix. This works quite well, but is problematic, since we need to correct the values after every pull request from another user.
So we are searching for a way to keep the gitlab-ci file the same for every developer while still referencing GitLab variables that differ per developer.
Things we thought about that don't seem to work:
Use multiple yml files and import them into each other => not supported.
Try to combine GitLab environment variables as a prefix:
CI_K8S_PROJECT: ${${GITLAB_USER_ID}_CI_K8S_PROJECT}
or
INDIVIDUAL_CI_K8S_PROJECT: ${GITLAB_USER_ID}_CI_K8S_PROJECT
CI_K8S_PROJECT: ${INDIVIDUAL_CI_K8S_PROJECT}
We found a solution using indirect expansion (bash feature):
before_script:
  - variableName=${GITLAB_USER_ID}_CI_K8S_PROJECT
  - export wantedValue=${!variableName}
But we also recognised that our setup was somewhat flawed: it does not make sense to have multiple branches for each user and use prefixed variables, since this leads to problems such as the above and to security concerns, since all variables are accessible to all users.
It is much easier if each user forks the root project and simply creates a merge request for new features. This way there is no renaming/prefixing of variables or branches necessary at all.
The solution from @nik works only for bash. For sh, this works:
before_script:
  - variableName=...
  - export wantedValue=$( eval echo \$$variableName )
Something like this works (on 15.0.5-ee):
variables:
  IMAGE_NAME: "test-$CI_PROJECT_NAME"
