Clean Exit from an Azure Pipeline .yaml?

Is there a better/cleaner/more idiomatic way to exit a .yaml-based Azure Pipeline than say having a script throw an error?
e.g., this works but it feels clunky:
- task: PowerShell@2
  displayName: "Exit"
  inputs:
    targetType: 'inline'
    script: |
      throw 'Exiting';

- powershell: |
    write-host "##vso[task.complete result=Failed;]The reason you quit"
Would be neater, but would still fail the job.
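For reference, the task.complete logging command also accepts Succeeded and SucceededWithIssues as result values, so a variant (the warning text here is illustrative) that flags the early exit as a warning rather than a failure would be:

- powershell: |
    write-host "##vso[task.logissue type=warning]Stopping early: builds for this branch are skipped"
    write-host "##vso[task.complete result=SucceededWithIssues;]Done early"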
There is no equivalent to skip the rest of the job, unless you work with conditions to skip all future tasks based on a variable value:
variables:
  skiprest: false

steps:
- powershell: |
    write-host "##vso[task.setvariable variable=skiprest]true"
- powershell:
  condition: and(succeeded(), eq(variables.skiprest, 'false'))
- powershell:
  condition: and(succeeded(), eq(variables.skiprest, 'false'))
- powershell:
  condition: and(succeeded(), eq(variables.skiprest, 'false'))
- powershell:
  condition: and(succeeded(), eq(variables.skiprest, 'false'))
You could use a YAML iterative insertion from a template to apply that condition to all tasks in a job. I don't have a working sample at hand, but the docs show how to inject a dependsOn:, and the trick would be very similar, I suppose:
# job.yml
parameters:
- name: 'jobs'
  type: jobList
  default: []

jobs:
- job: SomeSpecialTool                  # Run your special tool in its own job first
  steps:
  - task: RunSpecialTool@1
- ${{ each job in parameters.jobs }}:   # Then do each job
  - ${{ each pair in job }}:            # Insert all properties other than "dependsOn"
      ${{ if ne(pair.key, 'dependsOn') }}:
        ${{ pair.key }}: ${{ pair.value }}
    dependsOn:                          # Inject dependency
    - SomeSpecialTool
    - ${{ if job.dependsOn }}:
      - ${{ job.dependsOn }}
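The same trick could carry the skiprest condition instead. A minimal sketch of that idea (untested; steps-loop.yml and the stepList parameter name are illustrative) that wraps a step list and stamps the condition onto every step:

# steps-loop.yml
parameters:
- name: steps
  type: stepList
  default: []

steps:
- ${{ each step in parameters.steps }}:
  - ${{ each pair in step }}:           # Insert all properties other than "condition"
      ${{ if ne(pair.key, 'condition') }}:
        ${{ pair.key }}: ${{ pair.value }}
    condition: and(succeeded(), eq(variables.skiprest, 'false'))  # Inject the skip condition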

Related

How to loop inside one object type Parameters again in AzureDevops pipeline

Is there any way to loop inside an object-type parameter again in Azure DevOps?
I am planning to automate tag create/update for resources using an Azure DevOps pipeline, and I decided to use the Azure CLI command for this (not sure if this is the right choice).
So I created a template file (template.yaml) as below:
parameters:
- name: myEnvironments
  type: object
- name: tagList
  type: object

stages:
- ${{ each environment in parameters.myEnvironments }}:
  - stage: Create_Tag_${{ environment }}
    displayName: 'Create Tag in ${{ environment }}'
    pool:
      name: my-spoke
    jobs:
    - ${{ each tag in parameters.tagList }}:
      - ${{ if eq(tag.todeploy, 'yes') }}:
        - job: Create_Tag_For_${{ tag.resourcename }}_${{ environment }}
          displayName: 'Tag the resource ${{ tag.resourcename }}'
          condition: eq('${{ tag.todeploy }}', 'yes')
          workspace:
            clean: all
          pool:
            name: myspoke
          steps:
          - task: AzureCLI@2
            displayName: "Tag the resource"
            inputs:
              azureSubscription: ${{ variables.subscription }}
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: az tag update --resource-id ${{ tag.resourceid }} --operation replace --tags key1=value1 key3=value3
And my pipeline input is as below:
stages:
- template: template.yaml
  parameters:
    myEnvironments:
    - development
    ################################################################################################
    #                                          Tag List                                            #
    ################################################################################################
    tagList:
    - resourcename: myaksservice
      todeploy: yes
      tagname1: tagvalue of 1
      tagname2: tagvalue of 2
      ...
      tagn: tagvalue of n
    - resourcename: myappservice
      todeploy: yes
      tagname1: tagvalue of 1
      tagname2: tagvalue of 2
      ...
      tagn: tagvalue of n
    - resourcename: mystorageaccount
      todeploy: yes
      tagname1: tagvalue of 1
      tagname2: tagvalue of 2
      ...
      tagn: tagvalue of n
I was able to loop through the environment list and the tagList elements, but I was not able to loop through the tag values for each resource to create them in one shot.
trigger:
- none

pool:
  vmImage: ubuntu-latest

parameters:
- name: myEnvironments
  type: object
  default:
  - 111
  - 222
  - 333
- name: tagList
  type: object
  default:
  - resourcename: myaksservice
    todeploy: yes
    tagname1_1: tagvalue of 1
    tagname2_1: tagvalue of 2
  - resourcename: myappservice
    todeploy: yes
    tagname1_2: tagvalue of 1
    tagname2_2: tagvalue of 2
  - resourcename: mystorageaccount
    todeploy: yes
    tagname1_3: tagvalue of 1
    tagname2_3: tagvalue of 2

stages:
- ${{ each environment in parameters.myEnvironments }}:
  - stage:
    displayName: 'Create Tag in ${{ environment }}'
    pool:
      vmImage: ubuntu-latest
    jobs:
    - ${{ each tag in parameters.tagList }}:
      - ${{ each tagcontent in tag }}:
        - ${{ if and(ne(tagcontent.Key, 'resourcename'), ne(tagcontent.Key, 'todeploy')) }}:
          - job:
            displayName: 'Tag the resource ${{ tag.resourcename }}'
            steps:
            - task: PowerShell@2
              inputs:
                targetType: 'inline'
                script: |
                  # Write your PowerShell commands here.
                  Write-Host "Hello World"
                  Write-Host ${{ tagcontent.Key }}
For the first stage, the pipeline will loop over each tag name in the tagList and output:
tagname1_1
tagname2_1
tagname1_2
tagname2_2
tagname1_3
tagname2_3
So the key is 'object.Key' and the value is 'object.Value'; use them to get at the other contents of a YAML object.
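If the goal is to apply every tag of a resource in one az call rather than one job per tag, a hedged sketch (untested; it assumes jq is available on the agent and reuses the names from the question) is to hand the whole tag object to the script via convertToJson and build the --tags arguments at runtime:

steps:
- ${{ each tag in parameters.tagList }}:
  - ${{ if eq(tag.todeploy, 'yes') }}:
    - task: AzureCLI@2
      displayName: 'Tag ${{ tag.resourcename }}'
      inputs:
        azureSubscription: ${{ variables.subscription }}
        scriptType: 'bash'
        scriptLocation: 'inlineScript'
        inlineScript: |
          # Drop the bookkeeping keys, then render the rest as key=value pairs.
          tags=$(echo "$TAG_JSON" | jq -r 'del(.resourcename, .todeploy) | to_entries | map("\(.key)=\(.value)") | join(" ")')
          az tag update --resource-id ${{ tag.resourceid }} --operation replace --tags $tags
      env:
        TAG_JSON: ${{ convertToJson(tag) }}

Tag values containing spaces would need extra quoting before this is production-ready.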

Sequencing GitHub Actions jobs on conditions

I have Job A, Job B & Job C.
When Job A completes:
If Job B runs, I need Job C to run (after Job B has completed with success).
If Job B is skipped, I need Job C to run (if Job A has completed with success).
See below for a code snippet:
check_if_containers_exist_to_pass_to_last_known_tagger_job:   # Job A
  name: check_if_containers_exist
  environment: test
  runs-on: ubuntu-latest
  #needs: [push_web_to_ecr, push_cron_###_to_ecr, push_to_###_shared_ecr, push_to_###_ecr]
  needs: push_###_to_shared_ecr
  #if: ${{ github.ref == 'refs/heads/main' }}
  outputs:
    signal_job: ${{ steps.step_id.outputs.run_container_tagger_job }}
  steps:
    - name: Configure AWS credentials
      id: config-aws-creds
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.SHARED_AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.SHARED_AWS_SECRET_ACCESS_KEY }}
        aws-region: eu-west-2
    - name: Check if container exists (If containers don't exist then don't run last known tagging job for rollback)
      id: step_id
      run: |
        # run steps use bash -e, so test the command directly instead of checking $? afterwards
        if aws ecr describe-images --repository-name anonymizer --image-ids imageTag=testing-latest; then
          echo "::set-output name=run_container_tagger_job::true"
        else
          echo "::set-output name=run_container_tagger_job::false"
        fi

tag_latest_testing_containers_as_last_known_testing_containers:   # Job B
  needs: check_if_containers_exist_to_pass_to_last_known_tagger_job
  if: needs.check_if_containers_exist_to_pass_to_last_known_tagger_job.outputs.signal_job == 'true'
  uses: ###/###/.github/workflows/container-tagger.yml@###
  with:
    tag_to_identify_containers: testing-latest
    new_tag_to_apply_to_containers: last-known-testing
    aws-region: eu-west-2
    run_cron_and_cycle_containers: false
  secrets:
    SHARED_AWS_ACCESS_KEY_ID: ${{ secrets.SHARED_AWS_ACCESS_KEY_ID }}
    SHARED_AWS_SECRET_ACCESS_KEY: ${{ secrets.SHARED_AWS_SECRET_ACCESS_KEY }}

tag_testing_containers_to_testing_latest:   # Job C
  needs: [check_if_containers_exist_to_pass_to_last_known_tagger_job, tag_latest_testing_containers_as_last_known_testing_containers]
  if: ${{ always() }}
  uses: ###/##/.github/workflows/container-tagger.yml@###
  with:
    tag_to_identify_containers: dev-${{ github.sha }}
    new_tag_to_apply_to_containers: james-cron-test
    aws-region: eu-west-2
    run_cron_and_cycle_containers: true
  secrets:
    SHARED_AWS_ACCESS_KEY_ID: ${{ secrets.SHARED_AWS_ACCESS_KEY_ID }}
    SHARED_AWS_SECRET_ACCESS_KEY: ${{ secrets.SHARED_AWS_SECRET_ACCESS_KEY }}
    ENVIRONMENT_AWS_ACCESS_KEY_ID: ${{ secrets.TESTING_AWS_ACCESS_KEY_ID }}
    ENVIRONMENT_AWS_SECRET_ACCESS_KEY: ${{ secrets.TESTING_AWS_SECRET_ACCESS_KEY }}
It might not be the most elegant solution, but it works.
The workaround consists of adding 2 extra steps at the end of Job A, and setting both to always execute (if: always()).
The first one creates a text file and writes the job status into it.
The second one uploads this text file as an artifact.
Then, in Job B and Job C, you add steps to download the artifact and read the status of Job A, and then perform (or skip) specific operations.
Here is a demo of how it might look:
jobs:
  JOB_A:
    name: Job A
    ...
    steps:
      - name: Some steps of job A
        ...
      - name: Create file status_jobA.txt and write the job status into it
        if: always()
        run: |
          echo ${{ job.status }} > status_jobA.txt
      - name: Upload file status_jobA.txt as an artifact
        if: always()
        uses: actions/upload-artifact@v1
        with:
          name: pass_status_jobA
          path: status_jobA.txt

  JOB_B:
    needs: [JOB_A]
    if: always()
    name: Job B
    ...
    steps:
      - name: Download artifact pass_status_jobA
        uses: actions/download-artifact@v1
        with:
          name: pass_status_jobA
      - name: Set the status of Job A as output parameter
        id: set_outputs
        run: echo "::set-output name=status_jobA::$(<pass_status_jobA/status_jobA.txt)"
      - name: Check Job A status
        if: steps.set_outputs.outputs.status_jobA == 'success'
        run: |
          ...

  JOB_C:
    needs: [JOB_A]
    if: always()
    name: Job C
    ...
    steps:
      - name: Download artifact pass_status_jobA
        uses: actions/download-artifact@v1
        with:
          name: pass_status_jobA
      - name: Set the status of Job A as output parameter
        id: set_outputs
        run: echo "::set-output name=status_jobA::$(<pass_status_jobA/status_jobA.txt)"
      - name: Check Job A status
        if: steps.set_outputs.outputs.status_jobA == 'failure'
        run: |
          ...
Note that here, all jobs will always run, but Job B steps after the check will only run for Job A success, and Job C steps after the check will only run for Job A failure.
Job A --> Success --> Job B + Job C checks --> Job B steps
Job A --> Failure --> Job B + Job C checks --> Job C steps
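As a side note, if the workflow can rely on the result field of its dependencies, a hedged alternative (job ids here are illustrative) that matches the stated requirement without the artifact round-trip is:

job_c:
  needs: [job_a, job_b]
  # Run when A succeeded and B either succeeded or was skipped, but not on failure/cancellation
  if: ${{ !cancelled() && needs.job_a.result == 'success' && contains(fromJSON('["success", "skipped"]'), needs.job_b.result) }}
  runs-on: ubuntu-latest
  steps:
    - run: echo "Job C runs"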

How to add multiple variables with YAML conditional insertion

I read this document, https://learn.microsoft.com/zh-cn/azure/devops/pipelines/process/expressions?view=azure-devops#conditional-insertion, but unlike what is demoed there, I need to add three variables under the same condition, as below:
name: arm_temp

resources:
  repositories:
  - repository: self
    type: git

variables:
- ${{ if in(lower(coalesce(variables['ENV'], variables['Build.SourceBranchName'])), 'release', 'prod') }}:
  - newEnv: 'Prod'
  - account: '$(ACCOUNT)'
  - password: '$(PASSWORD)'
- ${{ if eq(lower(coalesce(variables['ENV'], variables['Build.SourceBranchName'])), 'qa') }}:
  - newEnv: 'QA'
  - account: '$(ACCOUNT)'
  - password: '$(PASSWORD)'
- resGroupName: ${{ format('RESGROUP-{0}', variables['newEnv']) }}
ACCOUNT, PASSWORD and ENV are variables defined in the Azure build pipeline, but I always get an error before the build pipeline runs, and the error notification points at the line under the if condition.
From your YAML sample, it seems that the YAML format has some issues. You could refer to the following YAML sample:
variables:
  ${{ if in(lower(coalesce(variables['ENV'], variables['Build.SourceBranchName'])), 'release', 'prod') }}:
    newEnv: 'Prod'
    account: $(myaccount)
    password: $(mypassword)
  ${{ if eq(lower(coalesce(variables['ENV'], variables['Build.SourceBranchName'])), 'qa') }}:
    newEnv: 'QA'
    account: $(myaccount)
    password: $(mypassword)
  resGroupName: ${{ format('RESGROUP-{0}', variables['newEnv']) }}

pool:
  vmImage: windows-latest

steps:
- script: |
    echo $(newEnv)
    echo $(account)
    echo $(password)
(Screenshots of the pipeline variable definitions and the run result omitted.)
Note: You need to change the pipeline variable names $(ACCOUNT) and $(PASSWORD). Because variable names are case-insensitive, they cannot have the same names as the variables defined in YAML ($(account), $(password)), or the values will not pass through successfully.
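For completeness, if you prefer the list syntax from the question, each entry under a conditional insertion has to use the name/value mapping form; a minimal sketch of one branch:

variables:
- ${{ if in(lower(coalesce(variables['ENV'], variables['Build.SourceBranchName'])), 'release', 'prod') }}:
  - name: newEnv
    value: 'Prod'
  - name: account
    value: $(myaccount)
  - name: password
    value: $(mypassword)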

How to use an each expression to concatenate a bash script in Azure Pipelines

I have a few places where I need to define a set of K8s secrets during deployment at various stages, so I want to extract the recurring script into a template:
parameters:
- name: secretName
  type: string
  default: ""
- name: secrets
  type: object
  default:
    Foo: Bar

steps:
- task: Bash@3
  displayName: Create generic secret ${{ parameters.secretName }}
  inputs:
    targetType: inline
    script: |
      echo "Creating generic secret ${{ parameters.secretName }}"
      microk8s kubectl delete secret ${{ parameters.secretName }}
      microk8s kubectl create secret generic ${{ parameters.secretName }} ${{ each secret in parameters.secrets }}: --from-literal=${{ secretKey }}="${{ secret.value }}"
I want to call it like this multiple times, to create all necessary secrets for the deployment to each stage:
- job: CreateSecrets
  pool:
    name: $(poolName)
  steps:
  - template: "Templates/template-create-secret.yml"
    parameters:
      secretName: "testSecret"
      secrets:
        username: $(staging-user)
        password: $(staging-password)
        foo: $(bar)
And it should simply execute a script similar to this one:
kubectl create secret generic secretName \
  --from-literal=username=user1 \
  --from-literal=password=pass1 \
  # ...etc
With my current approach I am receiving the error:
/Code/BuildScripts/Templates/template-create-secret.yml (Line: 18, Col: 15): The directive 'each' is not allowed in this context. Directives are not supported for expressions that are embedded within a string. Directives are only supported when the entire value is an expression.
How is it possible to iterate over a parameter of type object and use its keys and values to build a string for bash? The alternative would be to use a single key-value pair per secret and create multiple secrets, which I'd like to avoid.
If you concatenate the command arguments into a variable, you can use that in a future step/task. This example concatenates all secrets within the key vaults listed in the keyVaultSecretSources parameter into one command. It shouldn't be too hard to adjust it so you can specify which secrets you'd like to include/exclude:
parameters:
- name: environment
  type: string
- name: namespace
  type: string
- name: releaseName
  type: string
# contains an array of Azure key vault names
- name: keyVaultSecretSources
  type: object

stages:
- stage: MountSecrets${{ parameters.releaseName }}
  pool: [Your k8s Pool]
  displayName: Mount Key Vault Secrets ${{ parameters.releaseName }}
  # The key vault arguments will be concatenated into the stage variable secretArgs
  variables:
    secretArgs: ""
  jobs:
  - deployment: [Your Job Deployment Name]
    displayName: [Your Job Display Name]
    strategy:
      runOnce:
        deploy:
          steps:
          # skip artifacts download for this stage
          - download: none
          - ${{ each keyVault in parameters.keyVaultSecretSources }}:
            # 1. obtain all secrets from the keyVault.name key vault
            # 2. remove all JSON formatting, leaving each secret name on one line
            # 3. assign to the local variable secretNameArray as an array
            # 4. loop through secretNameArray and assign each secret to the local variable kvs
            # 5. construct the --from-literal argument and append it to the local variable mountCommand
            # 6. append mountCommand to the stage variable secretArgs
            - task: AzureCLI@2
              displayName: 'Concatenate Keyvault Secrets'
              inputs:
                azureSubscription: [Your subscription]
                scriptType: 'bash'
                failOnStandardError: true
                scriptLocation: 'inlineScript'
                inlineScript: |
                  secretNameArray=($(az keyvault secret list --vault-name ${{ keyVault.name }} --query "[].name" | tr -d '[:space:][]"' | sed -r 's/,+/ /g'));
                  for i in "${secretNameArray[@]}"; do kvs="$(az keyvault secret show --vault-name ${{ keyVault.name }} --name "$i" --query "value" -o tsv)"; mountCommand="$mountCommand --from-literal=$(echo -e "$i" | sed -r 's/-/_/g')='$kvs'"; done;
                  echo "##vso[task.setvariable variable=secretArgs;issecret=true]$(secretArgs)$mountCommand"
          - task: Kubernetes@1
            displayName: 'Kubectl Login'
            inputs:
              kubernetesServiceEndpoint: [Your Service Connection Name]
              command: login
              namespace: ${{ parameters.namespace }}
          - task: AzureCLI@2
            displayName: 'Delete Secrets'
            inputs:
              azureSubscription: [Your subscription]
              scriptType: 'bash'
              failOnStandardError: false
              scriptLocation: 'inlineScript'
              inlineScript: |
                kubectl delete secret ${{ parameters.releaseName }}-keyvault -n '${{ parameters.namespace }}'
                exit 0
          - task: AzureCLI@2
            displayName: 'Mount Secrets'
            inputs:
              azureSubscription: [Your subscription]
              scriptType: 'bash'
              failOnStandardError: false
              scriptLocation: 'inlineScript'
              inlineScript: |
                kubectl create secret generic ${{ parameters.releaseName }}-keyvault$(secretArgs) -n '${{ parameters.namespace }}'
                exit 0
          - task: Kubernetes@1
            displayName: 'Kubectl Logout'
            inputs:
              command: logout
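A hypothetical invocation of that template (the file name, namespace, and key vault names are illustrative) could look like:

stages:
- template: mount-secrets.yml
  parameters:
    environment: staging
    namespace: my-namespace
    releaseName: myapp
    keyVaultSecretSources:
    - name: kv-myapp-staging
    - name: kv-shared-staging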
According to the doc Parameter data types, the 'each' keyword has to come before a step rather than inside a script string; the Runtime parameters doc gives more background. Here is the demo script:
parameters:
- name: listOfStrings
  type: object
  default:
  - one
  - two

steps:
- ${{ each value in parameters.listOfStrings }}:
  - script: echo ${{ value }}
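Applied to the original secret template, one hedged sketch (untested) lets the compile-time each emit one small step per secret that appends to a runtime variable, then runs kubectl once at the end. It assumes the calling job defines an empty secretArgs variable, since undefined macro variables would otherwise expand literally:

parameters:
- name: secretName
  type: string
- name: secrets
  type: object

steps:
# Each compile-time iteration emits a step appending one --from-literal argument
# to the runtime variable secretArgs (must be initialised to '' by the caller).
- ${{ each secret in parameters.secrets }}:
  - bash: |
      echo "##vso[task.setvariable variable=secretArgs]$(secretArgs) --from-literal=${{ secret.key }}='${{ secret.value }}'"
    displayName: Append ${{ secret.key }}
- bash: |
    microk8s kubectl delete secret ${{ parameters.secretName }} --ignore-not-found
    microk8s kubectl create secret generic ${{ parameters.secretName }}$(secretArgs)
  displayName: Create generic secret ${{ parameters.secretName }}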

Azure Pipelines Data Driven Matrix

In GitHub Actions, I can write a matrix job like so:
jobs:
  test:
    name: Test-${{ matrix.template }}-${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]
        template: ['API', 'GraphQL', 'Orleans', 'NuGet']
    steps:
      # ...
This will run every combination of os and template. In Azure Pipelines, you have to specify each combination manually, like so:
stages:
- stage: Test
  jobs:
  - job: Test
    strategy:
      matrix:
        Linux:
          os: ubuntu-latest
          template: API
        Mac:
          os: macos-latest
          template: API
        Windows:
          os: windows-latest
          template: API
        # ...continued
    pool:
      vmImage: $(os)
    timeoutInMinutes: 20
    steps:
    # ...
Is it possible to create a data driven matrix strategy similar to GitHub Actions?
Is it possible to create a data driven matrix strategy similar to GitHub Actions?
The answer is yes. This is a known issue that has already been reported on GitHub:
Add cross-product matrix strategy
In addition, there is a workaround for this mentioned in the official documentation:
Note
The matrix syntax doesn't support automatic job scaling, but you can implement similar functionality using the each keyword. For an example, see nedrebo/parameterized-azure-jobs.
jobs:
- template: azure-pipelines-linux.yml
  parameters:
    images: [ 'archlinux/base', 'ubuntu:16.04', 'ubuntu:18.04', 'fedora:31' ]
    pythonVersions: [ '3.5', '3.6', '3.7' ]
    swVersions: [ '1.0.0', '1.1.0', '1.2.0', '1.3.0' ]
- template: azure-pipelines-windows.yml
  parameters:
    images: [ 'vs2017-win2016', 'windows-2019' ]
    pythonVersions: [ '3.5', '3.6', '3.7' ]
    swVersions: [ '1.0.0', '1.1.0', '1.2.0', '1.3.0' ]
azure-pipelines-windows.yml:
jobs:
- ${{ each image in parameters.images }}:
  - ${{ each pythonVersion in parameters.pythonVersions }}:
    - ${{ each swVersion in parameters.swVersions }}:
      - job:
        displayName: ${{ format('OS:{0} PY:{1} SW:{2}', image, pythonVersion, swVersion) }}
        pool:
          vmImage: ${{ image }}
        steps:
        - script: echo OS version &&
            wmic os get version &&
            echo Lets test SW ${{ swVersion }} on Python ${{ pythonVersion }}
Not an ideal solution, but for now, you can loop over parameters. Write a template like the following, and pass your data to it.
# jobs loop template
parameters:
  jobs: []

jobs:
- ${{ each job in parameters.jobs }}:  # Each job
  - ${{ each pair in job }}:           # Insert all properties other than "steps"
      ${{ if ne(pair.key, 'steps') }}:
        ${{ pair.key }}: ${{ pair.value }}
    steps:                             # Wrap the steps
    - task: SetupMyBuildTools@1        # Pre steps
    - ${{ job.steps }}                 # Users steps
    - task: PublishMyTelemetry@1       # Post steps
      condition: always()
See here for more examples: https://github.com/Microsoft/azure-pipelines-yaml/blob/master/design/each-expression.md#scenario-wrap-jobs
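A hypothetical invocation of that loop template (the file and job names are illustrative) then carries the data-driven job list directly:

jobs:
- template: jobs-loop.yml   # the loop template above
  parameters:
    jobs:
    - job: BuildLinux
      pool:
        vmImage: ubuntu-latest
      steps:
      - script: echo Building on Linux
    - job: BuildWindows
      pool:
        vmImage: windows-latest
      steps:
      - script: echo Building on Windows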
