Is it possible to generate tags dynamically in Google Cloud Build? - google-cloud-build

First of all: I am somewhat new to Cloud Build. Compared to methods I have used previously, I find it a wrenching, immature and fairly annoying framework. Endless time is spent getting builders to work that supposedly work out of the box (like the helm builder, for example), and its limitations are astonishing and frustrating. Perhaps the following problem is a good example:
I would like to build and push a docker image. According to the documentation, the images to be pushed to the docker repository at the end (I'm using GCR for this) reside in the following configuration section in my cloudbuild.yaml file:
images:
- 'eu.gcr.io/$PROJECT_ID/my-project:${_TAG}'
- 'eu.gcr.io/$PROJECT_ID/my-project:latest'
I can set the _TAG substitution manually by using the section:
substitutions:
  _TAG: x.y.z
but that means I have to fix the version number in this file manually every time. Worse still: if I branch out, I need to maintain the version number all the time. I have a Python project in this case that uses setuptools, so the version naturally lives in setup.py and I can parse it out with no problem. Attempts to write the number into a separate file and use $(cat VERSION) in the images section fail, because the system claims it can't substitute the $(cat VERSION) part. So how can I overwrite the _TAG variable inside another build step so that it appears correctly in the 'images' section?

If you are using triggered builds from Cloud Source Repositories, GitHub, or Bitbucket you can tag your commit and use the $TAG_NAME default substitution variable.
images:
- 'eu.gcr.io/$PROJECT_ID/my-project:$TAG_NAME'
- 'eu.gcr.io/$PROJECT_ID/my-project:latest'
On the other hand, if you are using the Cloud SDK to submit the build, you can provide values with the --substitutions argument:
gcloud builds submit [SOURCE] --config config.yaml --substitutions _TAG=x.y.z
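If the goal is to derive the tag from setup.py, as in the question, one hedged option is a small wrapper around that command; this assumes your setup.py supports the standard setuptools --version query:
# derive the version from setup.py, then hand it to Cloud Build as _TAG
TAG=$(python setup.py --version)
gcloud builds submit . --config cloudbuild.yaml --substitutions _TAG="${TAG}"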
Also I believe you would find this GitOps-style continuous delivery with Cloud Build tutorial very helpful. It explains how to create a continuous integration and delivery (CI/CD) pipeline on Google Cloud Platform using Cloud Build.

You can tag your image with several tags in your cloudbuild.yaml by defining the Docker build step with:
steps:
- name: gcr.io/cloud-builders/docker
  args:
  - build
  - '--no-cache'
  - '-t'
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - '-t'
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest'
  - .
  - '-f'
  - Dockerfile.prod
  id: Build
And resulting images with:
images:
- '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
- '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest'

Related

How to use custom substitutions with secretmanager in cloudbuild?

I'm having an issue with using custom substitutions in my cloudbuild.yaml.
substitutions:
  _CUSTOM_SUBSTITUTION: this-is-a-path
availableSecrets:
  secretManager:
  - versionName: projects/$_CUSTOM_SUBSTITUTION/secrets/client_id/versions/1
    env: CLIENT_ID
  - versionName: projects/$_CUSTOM_SUBSTITUTION/secrets/client_secret/versions/1
    env: CLIENT_SECRET
From what I can tell from trial and error, using something like $PROJECT_ID in the place of $_CUSTOM_SUBSTITUTION will run the build, but if I use a custom substitution like above, the trigger does not run a build at all when a commit is pushed.
I've also tested with various other base substitutions, like $BRANCH_NAME to the same effect. I'm getting the feeling that it's just not possible to do this in cloudbuild at the moment?
It ended up being a combination of needing curly braces, ${_CUSTOM_SUBSTITUTION}, and some syntax fixes in the cloudbuild.yaml. I didn't have enough experience with Cloud Build to spot that.
The offending part was something like this:
AUTH_TOKEN=$$(cat /workspace/token.txt). Originally I had just one $ there, which was also working code pulled from another project.
For anyone running into this in the future, using gcloud builds submit lets you run the build directly, which makes troubleshooting easier.
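Putting the two fixes together, the corrected cloudbuild.yaml presumably ends up looking roughly like the sketch below; the build step shown here is illustrative, not the asker's actual step:
substitutions:
  _CUSTOM_SUBSTITUTION: this-is-a-path

availableSecrets:
  secretManager:
  - versionName: projects/${_CUSTOM_SUBSTITUTION}/secrets/client_id/versions/1
    env: CLIENT_ID
  - versionName: projects/${_CUSTOM_SUBSTITUTION}/secrets/client_secret/versions/1
    env: CLIENT_SECRET

steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: bash
  secretEnv: ['CLIENT_ID', 'CLIENT_SECRET']
  args:
  - -c
  # the doubled $$ keeps Cloud Build from treating this as a substitution
  - 'AUTH_TOKEN=$$(cat /workspace/token.txt); echo "client id is $$CLIENT_ID"'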

Got "ZIP does not support timestamps before 1980" while deploying a Go Cloud Function on GCP via Triggers

Problem:
I am trying to deploy a function with this step in a second level compilation (second-level-compilation.yaml)
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions',
         'deploy', '${_FUNCTION_NAME}',
         '--source', 'path/to/function',
         '--runtime', 'go111',
         '--region', '${_GCP_CLOUD_FUNCTION_REGION}',
         '--entry-point', '${_ENTRYPOINT}',
         '--env-vars-file', '${_FUNCTION_PATH}/.env.${_DEPLOY_ENV}.yaml',
         '--trigger-topic', '${_TRIGGER_TOPIC_NAME}',
         '--timeout', '${_FUNCTION_TIMEOUT}',
         '--service-account', '${_SERVICE_ACCOUNT}']
I get this error from Cloud Build using the Console.
Step #1: Step #11: ERROR: (gcloud.beta.functions.deploy) Error creating a ZIP archive with the source code for directory path/to/function: ZIP does not support timestamps before 1980
Here is the global flow:
The following step is in a first-level compilation (first-level-compilation.yaml), which is triggered by Cloud Build from a GitHub repository (via the GitHub Cloud Build application):
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'launch-second-level-compilation.sh ${_MY_VAR}']
The script "launch-second-level-compilation.sh" does specific operations based on ${_MY_VAR} and then launches a second-level compilation passing a lot of substitutions variables with "gcloud builds submit --config=second-level-compilation.yaml --substitutions=_FUNCTION_NAME=val,_GCP_CLOUD_FUNCTION_REGION=val,....."
Then, the "second-level-compilation.yaml" described at the beginning of this question is executed, using the substitutions values generated and passed through the launch-second-level-compilation.sh script.
The main idea here is to have a generic first-level-compilation.yaml in charge of calling a second-level compilation with specific dynamically generated substitutions.
Attempts / Investigations
As described in the issue "Cloud Container Builder, ZIP does not support timestamps before 1980", I tried to "ls" the files in the /workspace directory, but none of the files at the /workspace root have a strange date.
I changed the path/to/function from a relative path to /workspace/path/to/function, but with no success, unsurprisingly, since the directory ends up being the same.
Please make sure you don't have folders without files. For example:
|--dir
|  |--subdir1
|  |  |--file1
|  |--subdir2
|     |--file2
In this example dir doesn't directly contain any file, only subdirectories. During local deployment the gcloud SDK puts dir into the tarball without copying its last-modified field,
so it gets set to 1 Jan 1970, which causes problems with ZIP (the format does not support timestamps before 1980).
As a possible workaround, just make sure every directory contains at least one file.
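If you want to check for (and patch) such directories before deploying, a small shell sketch along these lines should do it; path/to/function and the .gitkeep placeholder name are just illustrative:
# flag directories that directly contain no regular files (only subdirectories)
# and drop a placeholder file into each one so the archive gets a sane timestamp
find path/to/function -type d | while read -r dir; do
  if [ -z "$(find "$dir" -maxdepth 1 -type f)" ]; then
    echo "no direct files in: $dir"
    touch "$dir/.gitkeep"
  fi
done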

Azure Pipeline Nuget Package Versioning Scheme, How to Get "1.0.$(Rev:r)"

I'm setting up an Azure Pipelines build that needs to package a C# .NET class library into a NuGet package.
In this documentation, it lists a couple different ways to automatically generate SemVer strings. In particular, I want to implement this one:
$(Major).$(Minor).$(rev:.r), where Major and Minor are two variables
defined in the build pipeline. This format will automatically
increment the build number and the package version with a new patch
number. It will keep the major and minor versions constant, until you
change them manually in the build pipeline.
But that's all they say about it, no example is provided. A link to learn more takes you to this documentation, where it says this:
For byBuildNumber, the version will be set to the build number, ensure
that your build number is a proper SemVer e.g. 1.0.$(Rev:r). If you
select byBuildNumber, the task will extract a dotted version, 1.2.3.4
and use only that, dropping any label. To use the build number as is,
you should use byEnvVar as described above, and set the environment
variable to BUILD_BUILDNUMBER.
Again, no example is provided. It looks like I want to use versioningScheme: byBuildNumber, but I'm not quite sure how to set the build number. I think it pulls it from the BUILD_BUILDNUMBER environment variable, but I can't find a way to set environment variables, only script variables. Furthermore, am I supposed to just set that to 1.0.$(Rev:r), or to $(Major).$(Minor).$(rev:.r)? I'm afraid it would just be interpreted literally.
Googling for the literal string "versioningScheme: byBuildNumber" returns a single result... Does anyone have a working azure-pipelines.yml with this versioning scheme?
Working YAML example for Packaging/Versioning using byBuildNumber
NOTE the second parameter of the counter: it is a seed value, which is really useful when migrating builds from other build systems like TeamCity, because it lets you set the next build version explicitly upon migration. After the migration and the initial build in Azure DevOps, the seed can be set back to zero (or whatever start value you prefer, like 100) every time majorMinorVersion is changed:
reference: counter expression
name: $(majorMinorVersion).$(semanticVersion) # $(rev:r) # NOTE: rev resets when the default retention period expires

pool:
  vmImage: 'vs2017-win2016'

# pipeline variables
variables:
  majorMinorVersion: 1.0
  # semanticVersion counter is automatically incremented by one in each execution of pipeline
  # second parameter is seed value to reset to every time the referenced majorMinorVersion is changed
  semanticVersion: $[counter(variables['majorMinorVersion'], 0)]
  projectName: 'MyProjectName'
  buildConfiguration: 'Release'

# Only run against master
trigger:
- master

steps:
# Build
- task: DotNetCoreCLI@2
  displayName: Build
  inputs:
    projects: '**/*.csproj'
    arguments: '--configuration $(BuildConfiguration)'

# Package
- task: DotNetCoreCLI@2
  displayName: 'NuGet pack'
  inputs:
    command: 'pack'
    configuration: $(BuildConfiguration)
    packagesToPack: '**/$(ProjectName)*.csproj'
    packDirectory: '$(build.artifactStagingDirectory)'
    versioningScheme: byBuildNumber # https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/build/dotnet-core-cli?view=azure-devops#yaml-snippet

# Publish
- task: DotNetCoreCLI@2
  displayName: 'Publish'
  inputs:
    command: 'push'
    nuGetFeedType: 'internal'
    packagesToPush: '$(build.artifactStagingDirectory)/$(ProjectName)*.nupkg'
    publishVstsFeed: 'MyPackageFeedName'
byBuildNumber uses the build number you define in your YAML with the name field.
Ex: name: $(Build.DefinitionName)-$(date:yyyyMMdd)$(rev:.r)
So if you set your build format to name: 1.0.$(rev:.r), it should work as you expect.
I had a similar issue; let me try to make it clear.
Firstly, what is the definition of the build number?
According to the official document on the Azure Pipelines YAML schema, it is:
name: string  # build numbering format
resources:
  containers: [ containerResource ]
  repositories: [ repositoryResource ]
variables: { string: string } | [ variable | templateReference ]
trigger: trigger
pr: pr
stages: [ stage | templateReference ]
Look at the first line:
name: string # build numbering format
Yes, that's it!
So you could define it like
name: 1.0.$(Rev:r)
if you prefer Semantic Versioning.
Secondly, what's the meaning of versioningScheme: 'byBuildNumber' in the NuGetCommand@2 task?
It's really straightforward: just use the format defined by name!
Last but not least
The official documents on Package Versioning and Pack NuGet packages don't make it clear what a build number really is or how to define it. It's really misleading. And as an MS employee I'm sad that I had to resort to external resources to make all that clear.
Azure Pipeline Nuget Package Versioning Scheme, How to Get “1.0.$(Rev:r)”
This should be an issue in the documentation. I reproduced it when I set $(Major).$(Minor).$(rev:.r) as the Build number format in the Options of the build pipeline.
However, after many test builds I noticed that the build number is not correct with that format:
there are two dots (.) between the 0 and the 2, i.e. the number comes out like 1.0..2. Obviously this is very strange. So, I changed the Build number format to:
$(Major).$(Minor)$(rev:.r)
Or
$(Major).$(Minor).$(rev:r)
Now, everything is working fine.
As a test, I set the Build number format to just $(rev:.r), and the build number came out as .x. So we can confirm that the value of $(rev:.r) includes the . by default.
Note: Since Major and Minor are two variables defined in the build pipeline, we need to define them manually under the variables.
Hope this helps.
Issues
My issues:
when trying the answer by @Emil, my first package started at 2.0 (I did no further testing to investigate)
when trying the answer by @Leo Liu-MSFT, I was unable to find the matching "Options" tab.
I therefore used this solution by @LanceMcCarthy.
Fix
Set the variables:
variables:
  major: '1'
  minor: '0'
  revision: $[counter(variables['minor'], 1)]  # This will get reset every time minor gets bumped.
  nugetVersion: '$(major).$(minor).$(revision)'
then use nugetVersion as an environment variable when packing:
- task: NuGetCommand@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    packDestination: '$(Build.ArtifactStagingDirectory)'
    versionEnvVar: 'nugetVersion'
    versioningScheme: 'byEnvVar'
I think the issue many of us have is that there is no Options menu. I upvoted SHarpC's post as this worked for me.
I have a workaround to add a suffix (e.g. '-beta'), since for some reason the suffix is ignored by the NuGet pack command when using the Classic editor and setting auto-versioning by build number:
Set a new environment variable, with its value set to the predefined $(Build.BuildNumber) variable.
Set the build number.
Set the NuGet pack command to auto-name by environment variable and specify the newly added variable's name, as sketched below.
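For those using YAML rather than the Classic editor, a rough equivalent of that workaround might look like this sketch; the variable name packageVersion and the '-beta' suffix are illustrative assumptions:
variables:
  # the pipeline's build number plus the suffix that byBuildNumber would drop
  packageVersion: '$(Build.BuildNumber)-beta'

steps:
- task: NuGetCommand@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    packDestination: '$(Build.ArtifactStagingDirectory)'
    versioningScheme: 'byEnvVar'
    versionEnvVar: 'packageVersion'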
If you're interested in the whole build/release pipeline design and YAML, have a look at my article here
The core of the problem is solved in the approved answer and refined in @Emil's answer, so this is just another approach to the azure-pipelines.yml that works for us with DevOps artifacts.
name: $(majorMinorVersion).$(semanticVersion)

trigger:
- main

variables:
  majorMinorVersion: 1.0
  semanticVersion: $[counter(variables['majorMinorVersion'], 0)]

pool:
  vmImage: windows-latest

steps:
- task: DotNetCoreCLI@2
  displayName: 'Create Packages'
  inputs:
    command: pack
    configuration: 'Release'
    packagesToPack: '**/<VS projectname>.csproj'
    versioningScheme: byBuildNumber

- task: NuGetAuthenticate@0
  displayName: 'NuGet Authenticate'

- task: NuGetCommand@2
  displayName: 'NuGet Push to feed'
  inputs:
    command: push
    publishVstsFeed: '<DevOps projectname>/<feed name>'
BTW: Don't forget this little hint

circleci v2 config - how do we filter by owner in a workflow?

In the circleci version 1 config, there was the option to specify owner as an option in a deployment. An example from the circleci docs ( https://circleci.com/docs/1.0/configuration/ ) with owner: circleci being the key line:
deployment:
  master:
    branch: master
    owner: circleci
    commands:
      - ./deploy_master.sh
In version 2 of the config, there is the ability to use filters and tags to specify which branches are built, but I have yet to find (in the docs, or on the interwebs) anything that gives me the same capability.
What I'm trying to achieve is run build and test steps on forks, but only run the deploy steps if the repository owner is the main repo. Quite often people fork using the same branch name - in this case master - so having a build fail due to an inability to deploy is counter-intuitive, especially as I would like to use a protected branch in git and only merge commits based on a successful build in a pull request.
I realise we could move to only running builds based on tags being present, but nothing is stopping somebody with a fork also creating a tag in their fork, which puts us back at square one.
Is anybody aware of how to specify the owner of a repo in the version 2 config?
An example from the version 2 config document ( https://circleci.com/docs/2.0/workflows/ ) in case it helps jog somebody's memory:
workflows:
  version: 2
  un-tagged-build:
    jobs:
      - build:
          filters:
            tags:
              ignore: /^v.*/
  tagged-build:
    jobs:
      - build:
          filters:
            branches:
              ignore: /.*/
            tags:
              only: /^v.*/
disclaimer: Developer Evangelist at CircleCI
That feature is not available on CircleCI 2.0. You can request it here.
As an alternative, you might be able to look for the branch name, say master, as well as the CIRCLE_PR_NUMBER environment variable. If that variable has any value, then it's a fork of master and you shouldn't deploy.
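A hedged sketch of that workaround could look like the config below (the job name, image, and deploy script are illustrative): the deploy step simply bails out whenever CIRCLE_PR_NUMBER is set, which it only is for forked pull requests:
version: 2
jobs:
  deploy:
    docker:
      - image: circleci/python:3.7
    steps:
      - checkout
      - run:
          name: Deploy (skipped on forked PRs)
          command: |
            if [ -n "${CIRCLE_PR_NUMBER}" ]; then
              echo "Forked pull request detected; skipping deploy."
              exit 0
            fi
            ./deploy_master.sh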

Concat variable names in GitLab

We use a GitLab project in a team. Each developer has their own Kubernetes cluster in the cloud and their own branch within GitLab. We use GitLab CI to automatically build new containers and deploy them to our Kubernetes clusters.
At the moment we have a .gitlab-ci.yml that looks something like this:
variables:
  USERNAME: USERNAME
  CI_K8S_PROJECT: ${USERNAME_CI_K8S_PROJECT}
  REGISTRY_JSON_KEY_FILE: ${USERNAME_REGISTRY_JSON_KEY_FILE}
  [...]

stages:
  - build
  - deploy
  - remove

build-zeppelin:
  stage: build
  image: docker:latest
  variables:
    image_name: "zeppelin"
  only:
    - ${USERNAME}@Gitlab-Repo
  tags:
    - cloudrunner
  script:
    - docker login -u _json_key -p "${REGISTRY_JSON_KEY_FILE?}" https://eu.gcr.io
    - image_name_fqdn="eu.gcr.io/${CI_K8S_PROJECT?}/${image_name?}:latest"
    - docker build -t ${image_name_fqdn?} .
    - docker push ${image_name_fqdn?}
    - echo "Your new image is '${image_name_fqdn?}'. Have fun!"
[...]
So at the beginning we reference the important information by using a USERNAME prefix. This works quite well, but is problematic, since we need to correct the prefix after every pull request from another user.
So we are looking for a way to keep the .gitlab-ci.yml file the same for every developer while still referencing GitLab variables that differ for each developer.
Things we thought about that don't seem to work:
Use multiple yml files and import them into each other => not supported.
Try to combine Gitlab Environment variables as Prefix:
CI_K8S_PROJECT: ${${GITLAB_USER_ID}_CI_K8S_PROJECT}
or
INDIVIDUAL_CI_K8S_PROJECT: ${GITLAB_USER_ID}_CI_K8S_PROJECT
CI_K8S_PROJECT: ${INDIVIDUAL_CI_K8S_PROJECT}
We found a solution using indirect expansion (bash feature):
before_script:
- variableName=${GITLAB_USER_ID}_CI_K8S_PROJECT
- export wantedValue=${!variableName}
But we also recognised that our setup was somewhat flawed: it does not make sense to have a branch per user and use prefixed variables, since this leads to problems such as the one above, and to security concerns, since all variables are accessible to all users.
It is much easier if each user forks the root project and simply creates a merge request for new features. That way no renaming/prefixing of variables or branches is necessary at all.
The solution from @nik works only in bash. For sh, this works:
before_script:
- variableName=...
- export wantedValue=$( eval echo \$$variableName )
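Plugged back into the job from the question, the lookup might be used roughly like this (a sketch only; exporting the looked-up value as CI_K8S_PROJECT mirrors the variable the original script expects):
build-zeppelin:
  stage: build
  image: docker:latest
  before_script:
    # resolve ${GITLAB_USER_ID}_CI_K8S_PROJECT indirectly (sh-compatible)
    - variableName=${GITLAB_USER_ID}_CI_K8S_PROJECT
    - export CI_K8S_PROJECT=$( eval echo \$$variableName )
  script:
    - echo "Deploying to project '${CI_K8S_PROJECT?}'"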
Something like this works (on 15.0.5-ee):
variables:
  IMAGE_NAME: "test-$CI_PROJECT_NAME"
