Including default substitutions inside other `substitutions` - yaml

From here... https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values
I am trying to use substitutions inside my cloudbuild.yaml file but almost all my substitutions depend on the project ID of the project I am deploying to.
I have my yaml file like this...
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['functions', 'deploy', 'functionName', ...other args... , '--service-account', '${_SERVICE_ACCOUNT}', '--source', '${_SOURCE_REPO}']
substitutions:
  _SOURCE_REPO: 'https://source.developers.google.com/projects/$PROJECT_ID/repos/my-repo-id/moveable-aliases/master/paths/functions/src'
  _SERVICE_ACCOUNT: 'blah@${PROJECT_ID}.iam.gserviceaccount.com'
Whatever I try with the substitutions (I have tried both the $_FOO and ${_FOO} formats), I end up with blah@${PROJECT_ID}.iam.gserviceaccount.com, with the ${PROJECT_ID} text still in there rather than the actual project ID.
If I move the text into the step and use it directly instead of the substitution, it works. But ideally I'd like to use this method, as these values will be used a lot and I would like to save the repetition.
EDIT
Well, I've tried a few different options, including some of the suggestions that @ralemos mentioned, but nothing seems to be working.
If I use this format...
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['functions', 'deploy', 'functionName', ...other args... , '--service-account', '${SERVICE_ACCOUNT}', '--source', '${SOURCE_REPO}']
options:
  env:
    SOURCE_REPO: 'https://source.developers.google.com/projects/${PROJECT_ID}/repos/my-repo-id/moveable-aliases/master/paths/functions/src'
    SERVICE_ACCOUNT: 'blah@${PROJECT_ID}.iam.gserviceaccount.com'
It complains that the env lines are invalid.
If I use this format...
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['functions', 'deploy', 'functionName', ...other args... , '--service-account', '${SERVICE_ACCOUNT}', '--source', '${SOURCE_REPO}']
options:
  env:
    - 'SOURCE_REPO=https://source.developers.google.com/projects/${PROJECT_ID}/repos/my-repo-id/moveable-aliases/master/paths/functions/src'
    - 'SERVICE_ACCOUNT=blah@${PROJECT_ID}.iam.gserviceaccount.com'
It complains that SERVICE_ACCOUNT and SOURCE_REPO are not valid built-in substitutions.
If I use $$SOURCE_REPO as the syntax it just replaces it with $SOURCE_REPO rather than using the substitution.
It seems for now that what I'm trying just isn't possible.

You could use env variables instead of substitutions, as you can see in the answer to this community post.
The result would be something like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['functions', 'deploy', 'functionName', ...other args... , '--service-account', '${SERVICE_ACCOUNT}', '--source', '${SOURCE_REPO}']
options:
  env:
    SOURCE_REPO: 'https://source.developers.google.com/projects/${PROJECT_ID}/repos/my-repo-id/moveable-aliases/master/paths/functions/src'
    SERVICE_ACCOUNT: 'blah@${PROJECT_ID}.iam.gserviceaccount.com'
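Since Cloud Build expands default substitutions such as $PROJECT_ID inside step args, another pattern that sidesteps the problem (a sketch, not from the thread; the function name and repo path are the question's own placeholders) is to assemble the values inline in a bash step:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: bash
    args:
      - -c
      - |
        # $PROJECT_ID is a default substitution, expanded before bash runs.
        gcloud functions deploy functionName \
          --service-account "blah@$PROJECT_ID.iam.gserviceaccount.com" \
          --source "https://source.developers.google.com/projects/$PROJECT_ID/repos/my-repo-id/moveable-aliases/master/paths/functions/src"
The repetition then lives in one shell block instead of being spread over the args list.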

Related

Gitlab-CI: include and merge multiple variables sections

I think the problem speaks for itself. I have multiple included hidden jobs in my .gitlab-ci.yml, and I would like to use the variables section of all of them.
I thought I would find what I need here https://docs.gitlab.com/ee/ci/yaml/yaml_optimization.html but:
Anchors do not work across included files
!reference cannot be used multiple times in the variables section
extends does not merge the content but takes the last one
If anyone has an idea... Here is the behavior I am trying to achieve:
hidden_1.yml
.hidden_1:
  variables:
    toto1: toto1
hidden_2.yml
.hidden_2:
  variables:
    toto2: toto2
hidden_3.yml
.hidden_3:
  variables:
    toto3: toto3
result.yml
include:
  - 'hidden_3.yml'
  - 'hidden_2.yml'
  - 'hidden_1.yml'

Job_test:
  stage: test
  variables:
    toto3: toto3
    toto2: toto2
    toto1: toto1
  script: echo '$toto1, $toto2, $toto3'
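One avenue worth trying (a sketch, not verified in the thread; merge behaviour depends on your GitLab version) is extends with multiple parents. For distinct keys, extends merges the variables hashes of all listed parents, with later parents winning on duplicates:
include:
  - 'hidden_1.yml'
  - 'hidden_2.yml'
  - 'hidden_3.yml'

Job_test:
  extends:
    - .hidden_1
    - .hidden_2
    - .hidden_3
  stage: test
  script: echo "$toto1, $toto2, $toto3"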

How to change the value of an element in kustomize.yaml using GitHub Actions

If I have this kustomize.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patches:
  - patch: |-
      - op: replace
        path: /spec/rules/0/host
        value: the.new.domain.com
    target:
      kind: Ingress
      name: the_name_of_ingress
and I want to replace the value the.new.domain.com with a new domain name, using a kustomize command with GitHub Actions like kustomize edit set.
Any idea how to do it? Even another idea that I can implement inside GitHub Actions would be OK.
Thanks anyway.
You can make use of a YAML processor like yq for this.
Example:
yq -i '.patches[0].patch = "- op: replace
  path: /spec/rules/0/host
  value: chetantalwar.com"' tes.yaml
I tried this from the CLI and it updated the file; you can use it in a GitHub Action as well, like the example given below.
- name: Set foobar to cool
  uses: mikefarah/yq@master
  with:
    cmd: yq -i '.patches[0].patch = "Your Value"' 'kustomize.yml'
Links:
YQ
YQ Github Action
There is one more option you can try: templating your kustomize.yaml so that, in the GitHub Action, you can update the respective value using sed.
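A minimal sketch of that approach (the `__INGRESS_HOST__` placeholder is an assumption, not from the question):

```shell
# kustomize.yaml is assumed to contain a placeholder line such as:
#   value: __INGRESS_HOST__
# Substitute the placeholder in place with the real domain.
NEW_HOST="the.new.domain.com"
sed -i "s/__INGRESS_HOST__/${NEW_HOST}/" kustomize.yaml
```

The same two lines can go straight into a `run:` step of a GitHub Actions workflow.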
I had the same use case as @stack-acc and, heavily inspired by the response from @Chetan, found this:
kustomization.yaml
patches:
  - patch: |-
      - op: replace
        path: "/metadata/name"
        value: proc-cls-s2e2-tcp
yq command to replace just the value:
yq -i '.patches.[0].patch |= sub("value: .*?$", "value: publ-cls-s2e2-udp")' kustomization.yaml
You don't have to repeat the whole op: replace section in the script; just replace the value.

How to specify a condition for a loop in yaml pipelines

I'm trying to download multiple artifacts onto different servers (like web and db) using environments. Currently I have added the DownloadPipelineArtifact@2 task in a file, and I'm using a template to add that task in azure-pipelines.yml. As I have multiple artifacts, I'm trying to use a for loop, where I'm running into issues.
#azure-pipelines.yml
- template: artifacts-download.yml
  parameters:
    pipeline:
      - pipeline1
      - pipeline2
      - pipeline3
    path:
      - path1
      - path2
      - path3
I need to write a loop in YAML so that it downloads the pipeline1 artifacts to path1, and so on. Can someone please help?
Object-type parameters are your friend. They are incredibly powerful. As qBasicBoy answered, you'll want to make sure that you group the multiple properties together. If you're finding that you have a high number of properties per object, though, you can do a multi-line equivalent.
The following is an equivalent parameter structure to what qBasicBoy posted:
parameters:
  - name: pipelines
    type: object
    default:
      - Name: pipeline1
        Path: path1
      - Name: pipeline2
        Path: path2
      - Name: pipeline3
        Path: path3
An example where you can stack many properties to a single object is as follows:
parameters:
  - name: big_honkin_object
    type: object
    default:
      config:
        - appA: this
          appB: is
          appC: a
          appD: really
          appE: long
          appF: set
          appG: of
          appH: properties
        - appA: and
          appB: here
          appC: we
          appD: go
          appE: again
          appF: making
          appG: more
          appH: properties
      settings:
        startuptype: service
        recovery: no
You can, in essence, create an entire dumping ground for everything that you want to do by sticking it in one single object structure and properly segmenting everything. Sure, you could have had "startuptype" and "recovery" as separate string parameters with defaults of "service" and "no" respectively, but this way, we can pass a single large parameter from a high level pipeline to a called template, rather than passing a huge list of parameters AND defining said parameters in the template yaml scripts (remember, that's necessary!).
If you then want to access JUST a single setting, you can do something along the lines of:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      # Write your PowerShell commands here.
      Write-Host "Apps start as a ${{ parameters.big_honkin_object.settings.startuptype }}"
      Write-Host "Do the applications recover? ${{ parameters.big_honkin_object.settings.recovery }}"
This will give you the following output:
Apps start as a service
Do the applications recover? no
YAML and Azure Pipelines are incredibly powerful tools. I can't recommend enough going through the entire contents of learn.microsoft.com on the subject. You'll spend a couple of hours there, but you'll come out the other end with an incredible knowledge of how these pipelines can be tailored to do everything you could ever NOT want to do yourself!
Notable links that helped me a TON (I only learned this a couple of months ago):
How to work with the YAML language in Pipelines
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema%2Cparameter-schema
How to compose expressions (also contains useful functions like convertToJSON for your object parameters!)
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops
How to create variables (separate from parameters, still useful)
https://learn.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml
SLEEPER ALERT!!! Templates are HUGELY helpful!!!
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops
You could use an object with multiple properties:
parameters:
  - name: pipelines
    type: object
    default:
      - { Name: pipeline1, Path: path1 }
      - { Name: pipeline2, Path: path2 }
      - { Name: pipeline3, Path: path3 }

steps:
  - ${{ each pipeline in parameters.pipelines }}:
      # use pipeline.Name or pipeline.Path
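To make that comment concrete, the loop could expand into one DownloadPipelineArtifact@2 task per entry (a sketch; `artifact` and `path` are that task's documented input aliases, everything else here is assumed):
steps:
  - ${{ each pipeline in parameters.pipelines }}:
      - task: DownloadPipelineArtifact@2
        inputs:
          artifact: ${{ pipeline.Name }}
          path: ${{ pipeline.Path }}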

"gcloud builds submit" is not triggering error for missing required substitutions

I need some help with cloud build --substitutions.
This is the doc: https://cloud.google.com/cloud-build/docs/build-config#substitutions
Here is what is says:
cloudbuild.yaml
substitutions:
  _SUB_VALUE: world
options:
  substitution_option: 'ALLOW_LOOSE'
The following snippet uses substitutions to print "hello world." The ALLOW_LOOSE substitution option is set, which means the build will not return an error if there's a missing substitution variable or a missing substitution.
My case: I'm NOT using the ALLOW_LOOSE option. I need my substitutions to be required. I don't want any default values being applied. And I need it to fail immediately if I forget to pass any of the substitutions that I need.
Here is my cloudbuild.yaml file:
cloudbuild.yaml
substitutions:
  _SERVER_ENV: required
  _TAG_NAME: required
  _MIN_INSTANCES: required
I'm initializing their default value as required specifically because I'm expecting the build call to fail if I forget to pass any of them to the gcloud builds submit call.
I'm expecting it to fail if I call gcloud builds submit and don't pass any of the defined substitutions. But it's not failing and the build completes normally without that value.
There is this observation in the docs:
Note: If your build is invoked by a trigger, the ALLOW_LOOSE option is set by default. In this case, your build will not return an error if there is a missing substitution variable or a missing substitution. You cannot override the ALLOW_LOOSE option for builds invoked by triggers.
But if I'm calling gcloud builds submit manually, that means that my build is not being invoked by any triggers, right? So the ALLOW_LOOSE options shouldn't be enabled.
Here is my full cloudbuild.yaml:
cloudbuild.yaml
steps:
  - name: "gcr.io/cloud-builders/docker"
    args:
      - "build"
      - "--build-arg"
      - "SERVER_ENV=$_SERVER_ENV"
      - "--tag"
      - "gcr.io/$PROJECT_ID/server:$_TAG_NAME"
      - "."
    timeout: 180s
  - name: "gcr.io/cloud-builders/docker"
    args:
      - "push"
      - "gcr.io/$PROJECT_ID/server:$_TAG_NAME"
    timeout: 180s
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: gcloud
    args:
      - "beta"
      - "run"
      - "deploy"
      - "server"
      - "--image=gcr.io/$PROJECT_ID/server:$_TAG_NAME"
      - "--platform=managed"
      - "--region=us-central1"
      - "--min-instances=$_MIN_INSTANCES"
      - "--max-instances=3"
      - "--allow-unauthenticated"
    timeout: 180s
images:
  - "gcr.io/$PROJECT_ID/server:$_TAG_NAME"
substitutions:
  _SERVER_ENV: required
  _TAG_NAME: required
  _MIN_INSTANCES: required
In your cloudbuild.yaml file, when you define a substitution variable you automatically set its default value:
substitutions:
  # Value = "required"
  _SERVER_ENV: required
  # Value = ""
  _TAG_NAME:
Try to use a variable that is not defined in the substitutions array, such as:
steps:
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: bash
    args:
      - -c
      - |
        # prints "required"
        echo $_SERVER_ENV
        # prints nothing
        echo $_TAG_NAME
        # Error, except if you allow loose. In that case, prints nothing
        echo $_MIN_INSTANCES
substitutions:
  _SERVER_ENV: required
  _TAG_NAME:
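If you are stuck with defaults being applied, one workaround (a sketch, not from the docs) is a first step that fails the build whenever a substitution still carries its sentinel default:
steps:
  - name: "gcr.io/cloud-builders/gcloud"
    entrypoint: bash
    args:
      - -c
      - |
        # $_TAG_NAME is expanded by Cloud Build before bash runs;
        # if the caller forgot to pass it, the sentinel default survives.
        if [ "$_TAG_NAME" = "required" ]; then
          echo "_TAG_NAME substitution was not provided" >&2
          exit 1
        fi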

How to use variables in gitlab-ci.yml file

I'm trying to use variables in my gitlab-ci.yml file. The variable is passed as a parameter to a batch file that will either only build, or build and deploy, based on the parameter passed in. I've tried many different ways to pass my variable into the batch file, but each time the variable is treated like a static string instead.
I've read GitLab's docs on variables but can't seem to make it work.
stages:
  - build

variables:
  BUILD_PUBLISH_CONFIG_FALSE: 0
  BUILD_PUBLISH_CONFIG_TRUE: 1

# BUILD ===============================
build: &build
  stage: build
  tags:
    - webdev
  script:
    - ./build.bat %BUILD_CONFIG%

build:branch:
  <<: *build
  variables:
    BUILD_CONFIG: $BUILD_PUBLISH_CONFIG_FALSE
  only:
    - /^(feature|hotfix|release)\/.+$/

build:branch:
  <<: *build
  variables:
    BUILD_CONFIG: $BUILD_PUBLISH_CONFIG_TRUE
  only:
    - /^(stage)\/.+$/

build:branch:
  <<: *build
  variables:
    BUILD_CONFIG: $BUILD_PUBLISH_CONFIG_TRUE
  only:
    - /^(master)\/.+$/
When watching GitLab's CI script execute, I expect ./build.bat 0 or ./build.bat 1.
Each time it prints out as ./build.bat %BUILD_CONFIG%.
When you place variables inside a job, that means you want to create a new variable (and that's not the correct way to do it). Do you want to output the content of the variables set up at the top? Could you maybe add that to an echo, or something like that? I didn't quite get what you are trying to achieve.
https://docs.gitlab.com/ee/ci/variables/#gitlab-ciyml-defined-variables
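One detail worth checking (an assumption, not confirmed in the thread): %BUILD_CONFIG% is Windows cmd syntax. If the runner executes scripts with bash or PowerShell, that token is never expanded, which would match the literal ./build.bat %BUILD_CONFIG% output; the bash form would be:
script:
  - ./build.bat $BUILD_CONFIG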
