How can I have an if condition in a YAML pipeline? Below is my sample code.
variables:
  global_variable: Test # this is available to all jobs

parameters:
  - name: configs
    displayName: SF Instance
    type: string
    default: Old
    values:
      - Old
      - Older
      - Newer
      - Newest

trigger: none

pool:
  vmImage: 'ubuntu-latest'

steps:
  - checkout: self
    persistCredentials: true
    clean: true
  - task: UseNode@1
  - bash:
      ${{ if contains(parameters.configs, 'Old') }}:
        echo $(global_variable)
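A step-level ${{ if }} is the usual way to express this: the directive has to own a whole list item, not sit inside the bash string. A minimal sketch reusing the parameter and variable above:

steps:
  - checkout: self
    persistCredentials: true
    clean: true
  - task: UseNode@1
  # the if directive conditionally inserts the whole step at template expansion time
  - ${{ if contains(parameters.configs, 'Old') }}:
    - bash: echo $(global_variable)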
What I'm trying to achieve: on pull request completion, run a pipeline that asks the user to select one of four options: a pipeline in which I can choose which version to build/release, one that builds all versions, one that undoes a particular version, or a hotfix pipeline that skips the build stage and merely deploys a particular version.
Sorry in advance for the poor formatting.
name: ADO-self-hosted-pipeline-Master-calls-Specific-Template

trigger: none

parameters:
  - name: pipeline
    displayName: Choose Pipeline
    type: string
    default: All-Versions
    values:
      - All-Versions
      - Cherry-Pick-Versions
      - Hotfix
      - Undo-Versions

stages:
  - stage: CherryPick
    pool: $(AGENT_POOL)
    displayName: 'Cherry pick'
    jobs:
      - job: SelectCherryPick
        displayName: Select the Cherry Pick pipeline
      - template: azure-pipelines_Cherry Pick.yml@self
        condition: and(succeeded(), ${{ eq(parameters['pipeline'], 'Cherry-Pick-Versions') }})
  - stage: Hotfix
    pool: $(AGENT_POOL)
    displayName: 'Hotfix'
    jobs:
      - job: SelectHotfix
        displayName: Select the Hotfix pipeline
      - template: azure-pipelines_Hotfix.yml@self
        condition: and(succeeded(), ${{ eq(parameters['pipeline'], 'Hotfix') }})
  - stage: All-Versions
    pool: $(AGENT_POOL)
    displayName: 'All-Versions'
    jobs:
      - job: SelectAll
        displayName: Select the All Migrations pipeline
      - template: azure-pipelines_All Migrations.yml@self
        condition: and(succeeded(), ${{ eq(parameters['pipeline'], 'All-Versions') }})
  - stage: Undo-Versions
    pool: $(AGENT_POOL)
    displayName: 'Undo-Versions'
    jobs:
      - job: SelectUndo
        displayName: Select the Undo pipeline
      - template: azure-pipelines_undo.yml@self
        condition: and(succeeded(), ${{ eq(parameters['pipeline'], 'Undo-Versions') }})

variables:
  BUILD_NAME: 'Build'
  FLYWAY: 'E:\Agents\$(Agent.Name)\Flyway\flyway -user="$(userName)" -password="$(password)" -licenseKey=$(FLYWAY_LICENSE_KEY) -outOfOrder=true'
  RELEASE_PREVIEW: 'Release-Preview.sql'
  DRIFT_AND_CHANGE_REPORT: 'Drift-And-Change-Report.html'
  DRIFT_AND_CHANGE_REPORT_DISPLAY_NAME: 'Drift And Change Report'
  group: flyway_vars
  #cherryPickVersions: ${{ parameters.cherryPickVersions }}
The pipeline runs and shows the master pipeline, where I can choose which pipeline template to run. But when I press Run after selecting Cherry Pick, I get an error:
/azure-pipelines_Cherry Pick.yml (Line: 61, Col: 1): Unexpected value 'stages'
The full Cherry Pick YAML template is:
#name: ADO-self-hosted-pipeline-Cherry-Pick

# This is the default pipeline for a self-hosted Windows agent on Azure DevOps.

# Install flyway cli on agent, add flyway to PATH: https://download.red-gate.com/maven/release/org/flywaydb/enterprise/flyway-commandline
# Install python3 on agent and add pip to PATH if staticCodeAnalysis is set to true
# Make sure this file is in the same directory as the migrations folder of the Flyway Enterprise project.
# Provision a dev, shadow, build databases, as well as any target environments that need to be created: https://documentation.red-gate.com/fd/proof-of-concept-checklist-152109292.html
# Further instructions if needed here: https://documentation.red-gate.com/fd/self-hosted-windows-agent-yaml-pipeline-in-azure-devops-158564470.html
# For video reference, see: https://www.red-gate.com/hub/university/courses/flyway/flyway-desktop/setting-up-a-flyway-desktop-project/basic-flyway-desktop-project-setup-and-configuration

#trigger:
#  branches:
#    include:
#      - master
#  paths:
#    include:
#      - migrations/*

# This is the Cherry-Pick Pipeline
# IMPORTANT: DO NOT ADD DEPLOYMENT STEPS TO THE BUILD STAGE - THE BUILD IS A DESTRUCTIVE ACTION

parameters:
  - name: cherryPickVersions
    displayName: 'Scripts To Deploy: Comma Separated List Of Full Version Numbers'
    default: ''
    type: string
  - name: buildStage
    type: object
    default:
      stage: 'Build'
      displayName: 'Temp Build'
      variableGroupName: 'RCS DB Common' # userName, password, JDBC, Database.Name
  # This is the extensible definition of your target environments.
  # Every parameter in deploymentStages corresponds to an environment - here it's Test and Prod.
  # Pay attention to the 'dependsOn' field - this determines order of operations.
  # IMPORTANT: check_JDBC will have schema dropped
  - name: deploymentStages
    type: object
    default:
      - stage: 'Test'
        dependsOn: 'Build'
        displayName: 'Deploy Test'
        pauseForCodeReview: false
        generateDriftAndChangeReport: true # requires check database to be provisioned
        staticCodeAnalysis: false # requires python3 installed on agent and pip on PATH
        variableGroupName: 'RCS DB Test' # userName, password, JDBC, Database.Name, check_JDBC
      - stage: 'Prod'
        dependsOn: 'Test'
        displayName: 'Deploy Prod'
        pauseForCodeReview: true
        generateDriftAndChangeReport: true # requires check database to be provisioned
        staticCodeAnalysis: false # requires python3 installed on agent and pip on PATH
        variableGroupName: 'RCS DB Test Prod' # userName, password, JDBC, Database.Name, check_JDBC
stages: # This is line 61
  - stage: Build
    pool: $(AGENT_POOL)
    displayName: ${{ parameters.buildStage.displayName }}
    jobs:
      - job: Build
        variables:
          - group: ${{ parameters.buildStage.variableGroupName }}
          - group: flyway_vars
        steps:
          - script: '$(FLYWAY) clean info -url="$(JDBC)"'
            failOnStderr: true
            displayName: 'Clean Build DB'
            env:
              FLYWAY_CLEAN_DISABLED: false
          - script: '$(FLYWAY) migrate info -url="$(JDBC)" -baselineOnMigrate=true -baselineVersion=$(BASELINE_VERSION)'
            failOnStderr: true
            displayName: 'Validate Migrate Scripts'
          - script: '$(FLYWAY) undo info -url="$(JDBC)" -target="$(FIRST_UNDO_SCRIPT)"?'
            continueOnError: true
            displayName: 'Validate Undo Scripts'
          - task: CopyFiles@2
            inputs:
              targetFolder: '$(System.ArtifactsDirectory)'
          - task: PublishBuildArtifacts@1
            displayName: 'Publish Build Artifact'
            inputs:
              ArtifactName: '$(BUILD_NAME)'
              PathtoPublish: '$(System.ArtifactsDirectory)'

  - ${{ each stage in parameters.deploymentStages }}:
    - stage: ${{ stage.stage }}
      pool: $(AGENT_POOL)
      displayName: ${{ stage.displayName }}
      dependsOn: ${{ stage.dependsOn }}
      jobs:
        - job: PreRelease
          displayName: Release Preview
          variables:
            - group: ${{ stage.variableGroupName }}
            - group: flyway_vars
          steps:
            - task: DownloadBuildArtifacts@0
              inputs:
                buildType: 'current'
                downloadType: 'single'
                artifactName: '$(BUILD_NAME)'
                downloadPath: '$(System.ArtifactsDirectory)'
            - script: '$(FLYWAY) migrate -dryRunOutput="$(System.ArtifactsDirectory)\${{ stage.stage }}-$(RELEASE_PREVIEW)" -url="$(JDBC)" -baselineOnMigrate=true -baselineVersion=$(BASELINE_VERSION)'
              workingDirectory: '$(System.DefaultWorkingDirectory)'
              failOnStderr: true
              displayName: 'Pre-Release Deployment Report'
              env:
                FLYWAY_CLEAN_DISABLED: true
            - task: PublishBuildArtifacts@1
              displayName: 'Publish Release Preview'
              inputs:
                ArtifactName: 'Release Preview'
                PathtoPublish: '$(System.ArtifactsDirectory)\${{ stage.stage }}-$(RELEASE_PREVIEW)'

        - ${{ if eq(stage.staticCodeAnalysis, true) }}:
          - job: ChangeReport
            timeoutInMinutes: 0 # how long to run the job before automatically cancelling
            cancelTimeoutInMinutes: 2 # how much time to give 'run always even if cancelled tasks' before stopping them
            dependsOn: 'PreRelease'
            displayName: Change Report With Code Analysis
            variables:
              - group: ${{ stage.variableGroupName }}
              - group: flyway_vars
            steps:
              - script: 'pip install sqlfluff==1.2.1'
                displayName: 'Install SQL Fluff'
                failOnStderr: true
              - script: '$(FLYWAY) check -changes -drift -code -check.buildUrl=$(check_JDBC) -url="$(JDBC)" -check.reportFilename="$(System.ArtifactsDirectory)\$(Database.Name)-$(Build.BuildId)-$(DRIFT_AND_CHANGE_REPORT)"'
                workingDirectory: '$(System.DefaultWorkingDirectory)'
                failOnStderr: true
                displayName: '$(DRIFT_AND_CHANGE_REPORT_DISPLAY_NAME)'
                env:
                  FLYWAY_CLEAN_DISABLED: false
              - task: PublishBuildArtifacts@1
                displayName: 'Publish $(DRIFT_AND_CHANGE_REPORT_DISPLAY_NAME)'
                inputs:
                  ArtifactName: '$(DRIFT_AND_CHANGE_REPORT_DISPLAY_NAME)'
                  PathtoPublish: '$(System.ArtifactsDirectory)\$(Database.Name)-$(Build.BuildId)-$(DRIFT_AND_CHANGE_REPORT)'

        - ${{ if and(eq(stage.generateDriftAndChangeReport, true), eq(stage.staticCodeAnalysis, false)) }}:
          - job: ChangeReport
            displayName: Change Report
            timeoutInMinutes: 0
            dependsOn: 'PreRelease'
            variables:
              - group: ${{ stage.variableGroupName }}
              - group: flyway_vars
            steps:
              - script: '$(FLYWAY) check -cherryPick=$(cherryPickVersions) -drift -check.buildUrl=$(check_JDBC) -url="$(JDBC)" -check.reportFilename="$(System.ArtifactsDirectory)\$(Database.Name)-$(Build.BuildId)-$(DRIFT_AND_CHANGE_REPORT)"'
                workingDirectory: '$(System.DefaultWorkingDirectory)'
                failOnStderr: true
                displayName: '$(DRIFT_AND_CHANGE_REPORT_DISPLAY_NAME)'
                env:
                  FLYWAY_CLEAN_DISABLED: false
              - task: PublishBuildArtifacts@1
                displayName: 'Publish $(DRIFT_AND_CHANGE_REPORT_DISPLAY_NAME)'
                inputs:
                  ArtifactName: '$(DRIFT_AND_CHANGE_REPORT_DISPLAY_NAME)'
                  PathtoPublish: '$(System.ArtifactsDirectory)\$(Database.Name)-$(Build.BuildId)-$(DRIFT_AND_CHANGE_REPORT)'

        - ${{ if and(eq(stage.generateDriftAndChangeReport, false), eq(stage.staticCodeAnalysis, false)) }}:
          - job: ChangeReport
            pool: server
            displayName: Skipping Change Report
            dependsOn: 'PreRelease'

        - ${{ if eq(stage.pauseForCodeReview, true) }}:
          - job: CodeReview
            displayName: Code Review
            dependsOn: 'ChangeReport'
            pool: server
            steps:
              - task: ManualValidation@0
                displayName: 'Review Change Report Prior To Release'
                timeoutInMinutes: 4320 # job times out after 3 days
                inputs:
                  notifyUsers: |
                    user@email.com
                    example@example.com
                  instructions: 'Review changes'

        - ${{ if eq(stage.pauseForCodeReview, false) }}:
          - job: CodeReview
            pool: server
            displayName: Skipping Code Review
            dependsOn: 'ChangeReport'

        - job: Deploy
          displayName: Deployment
          dependsOn: 'CodeReview'
          variables:
            - group: ${{ stage.variableGroupName }}
            - group: flyway_vars
          steps:
            - script: '$(FLYWAY) -cherryPick=$(cherryPickVersions) info migrate info -url="$(JDBC)" -baselineOnMigrate=true -baselineVersion=$(BASELINE_VERSION)'
              workingDirectory: $(System.DefaultWorkingDirectory)
              displayName: ${{ stage.displayName }}
              failOnStderr: true
              env:
                FLYWAY_CLEAN_DISABLED: true # clean drops a target DB schema, keep disabled except for build step
If anyone has tried to do something similar, I'd love to hear from you.
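For reference, the error points at the stages: key of the included file: a template whose root is stages: has to be inserted at the stages level of the calling pipeline, not under jobs:, and the selection can then become a compile-time if on the parameter. A sketch against the master pipeline above (untested, same file and parameter names as in the post):

stages:
  # compile-time selection replaces the runtime condition on the template
  - ${{ if eq(parameters.pipeline, 'Cherry-Pick-Versions') }}:
    - template: azure-pipelines_Cherry Pick.yml@self
      parameters:
        cherryPickVersions: ''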
I have a job in a GitHub Actions workflow that runs unit tests and then uploads reports to Jira Xray. The tests step takes quite a while to complete, so I want to split task execution into a few smaller chunks using a matrix.
I did this for linting and it works well; however, for unit tests I'm struggling with how to collect and merge all reports so they can be uploaded after all matrix jobs are done.
Here's what the current unit tests job looks like:
unit-test:
  runs-on: ubuntu-latest
  needs: setup
  steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
    - uses: actions/cache@v3
      with:
        path: ${{ env.CACHE_NODE_MODULES_PATH }}
        key: build-${{ hashFiles('**/package-lock.json') }}
    - run: npx nx affected:test --parallel=3 --base=${{ env.BASE_REF }} --head=HEAD # actual unit tests
    - name: Check file existence # checking whether there are reports at all
      if: success() || failure()
      id: check_files
      uses: andstor/file-existence-action@v1
      with:
        # all reports will be placed in this directory
        # for a matrix job, reports are separated between agents, so it's required to merge them
        files: 'reports/**/test-*.xml'
    - name: Import results to Xray
      if: (success() || failure()) && steps.check_files.outputs.files_exists == 'true' && github.event_name == 'push'
      uses: mikepenz/xray-action@v2
      with:
        username: ${{ secrets.XRAY_CLIENT_ID }}
        password: ${{ secrets.XRAY_CLIENT_SECRET }}
        testFormat: 'junit'
        testPaths: 'reports/**/test-*.xml' # that's where I need to grab all reports
        projectKey: 'MY_KEY'
        combineInSingleTestExec: true
The matrix job for linting looks like this. I would like to do the same for unit tests, but I also want to collect all reports, as the job above does:
linting:
  runs-on: ubuntu-latest
  needs: [setup]
  strategy:
    matrix:
      step: ${{ fromJson(needs.setup.outputs.lint-bins) }} # this will be something like [1,2,3,4]
  steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
    - uses: actions/cache@v3
      with:
        path: ${{ env.CACHE_NODE_MODULES_PATH }}
        key: build-${{ hashFiles('**/package-lock.json') }}
    # some nodejs logic to run a few jobs; it uses "execSync" from "child_process" to invoke the task
    - run: node scripts/ci-run-many.mjs --target=lint --outputTarget=execute --partNumber=${{ matrix.step }} --base=${{ env.BASE_REF }} --head=HEAD
Figured it out myself:
unit-test:
  runs-on: ubuntu-latest
  needs: [setup]
  strategy:
    fail-fast: false
    matrix:
      step: ${{ fromJson(needs.setup.outputs.unit-test-bins) }}
  steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
    - uses: actions/cache@v3
      with:
        path: ${{ env.CACHE_NODE_MODULES_PATH }}
        key: build-${{ hashFiles('**/package-lock.json') }}
    - run: node scripts/ci-run-many.mjs --target=test --outputTarget=execute --partNumber=${{ matrix.step }} --base=${{ env.BASE_REF }} --head=HEAD
    - name: Upload reports' artifacts
      if: success() || failure()
      uses: actions/upload-artifact@v3
      with:
        name: ${{ env.RUN_UNIQUE_ID }}_artifact_${{ matrix.step }}
        if-no-files-found: ignore
        path: reports
        retention-days: 1

process-test-data:
  runs-on: ubuntu-latest
  needs: unit-test
  if: success() || failure()
  steps:
    - uses: actions/checkout@v3
    - name: Download reports' artifacts
      uses: actions/download-artifact@v3
      with:
        path: downloaded_artifacts
    - name: Place reports' artifacts
      run: rsync -av downloaded_artifacts/*/*/ unit_test_reports/
    - name: Check reports existence
      id: check_files
      uses: andstor/file-existence-action@v1
      with:
        files: 'unit_test_reports/**/test-*.xml'
    - name: Import results to Xray
      run: ls -R unit_test_reports/
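For the actual upload, the final step can presumably reuse the xray-action step from the original job, pointed at the merged directory; a sketch based on the step shown earlier:

    - name: Import results to Xray
      if: (success() || failure()) && steps.check_files.outputs.files_exists == 'true' && github.event_name == 'push'
      uses: mikepenz/xray-action@v2
      with:
        username: ${{ secrets.XRAY_CLIENT_ID }}
        password: ${{ secrets.XRAY_CLIENT_SECRET }}
        testFormat: 'junit'
        testPaths: 'unit_test_reports/**/test-*.xml' # merged reports from all matrix jobs
        projectKey: 'MY_KEY'
        combineInSingleTestExec: true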
My ci_pipeline.yml contains the following:
- stage: DeployDEV
  pool:
    vmImage: 'windows-latest'
  variables:
    - group: [REDACTED]
    - name: test_var
      value: firstval
  jobs:
    - job: BuildDeploy
      displayName: "REDACTED"
      steps:
        - script: echo "test_var = $(test_var)"
          displayName: first variable pass
        - bash: echo "##vso[task.setvariable variable=test_var]secondValue"
          displayName: set new variable value
        - script: echo "test_var = $(test_var)"
          displayName: second variable pass
        - name: Env
          displayName: "Extract Source Branch Name"
    - template: pipeline_templates/ci_pipeline_templates/build_deploy_dev.yml
      parameters:
        testvar: $(test_var)
and in the build_deploy_dev.yml template:
parameters:
  - name: testvar

jobs:
  - job: Test
    displayName: "Testjob"
    steps:
      - script: echo "test_var=${{ parameters.testvar }}"
        name: TestVar
        displayName: 'test'
I need to be able to modify the variable in the main yml file BEFORE passing it into the template. However, test_var still remains firstval. What am I doing wrong? The change appears to succeed in the main yml file: the second variable pass script displays test_var = secondValue. How can I make the change stick so the template sees it?
As stated above, parameters are evaluated at compile time, so using parameters won't solve your problem. However, you can create a new variable as an output and make the template job depend on the job that generates the new variable; afterwards you can use it in your template as follows:
ci_pipeline.yml
- stage: DeployDEV
  pool:
    vmImage: 'windows-latest'
  variables:
    - group: [REDACTED]
    - name: test_var
      value: firstval
  jobs:
    - job: BuildDeploy
      displayName: "REDACTED"
      steps:
        - script: echo "test_var = $(test_var)"
          displayName: first variable pass
        - bash: echo "##vso[task.setvariable variable=new_var;isOutput=true]SomeValue"
          displayName: set new variable value
          name: SetMyVariable
    - template: pipeline_templates/ci_pipeline_templates/build_deploy_dev.yml
Then in your build_deploy_dev.yml you can use the variable you've created previously:
jobs:
  - job: Test
    displayName: "Testjob"
    dependsOn: BuildDeploy
    variables:
      test_var: $[ dependencies.BuildDeploy.outputs['SetMyVariable.new_var'] ]
    steps:
      - script: echo "test_var=$(test_var)"
        name: TestVar
        displayName: 'test'
Note that you can still leverage $(test_var), for instance to check whether it has the value 'firstval' and, if so, create new_var. You could even use the value of $(test_var) as the value of new_var:

- bash: echo "##vso[task.setvariable variable=new_var;isOutput=true]$(test_var)"
  displayName: set new variable value
In this way you can dynamically 'change' the behavior of your pipeline and keep your template file.
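Putting that together, a sketch of the conditional assignment (hedged; the firstval check and step names mirror the snippets above):

- bash: |
    # only publish the output variable when test_var still has its initial value
    if [ "$(test_var)" = "firstval" ]; then
      echo "##vso[task.setvariable variable=new_var;isOutput=true]$(test_var)"
    fi
  displayName: conditionally set new variable value
  name: SetMyVariable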
Unfortunately you cannot use variables set during runtime, i.e. $(test_var), as parameters for templates. Think of templates less like functions and more like snippets: the pipeline essentially swaps in the guts of the template for the reference to the template, and all parameters are evaluated at that time.
So when you set $test_var to 'firstval', the template is evaluated at that time and the parameter is set to 'firstval' as well. When you set $test_var later in the yaml, it is too late. See the documentation:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops#context
It may feel dirty to duplicate code, but unfortunately this would be the recommended solution.
- stage: DeployDEV
  pool:
    vmImage: 'windows-latest'
  variables:
    - group: [REDACTED]
    - name: test_var
      value: firstval
  jobs:
    - job: BuildDeploy
      displayName: "REDACTED"
      steps:
        - script: echo "test_var = $(test_var)"
          displayName: first variable pass
        - bash: echo "##vso[task.setvariable variable=test_var]secondValue"
          displayName: set new variable value
        - script: echo "test_var = $(test_var)"
          displayName: second variable pass
    - job: Test
      displayName: "Testjob"
      steps:
        - script: echo "test_var=$(test_var)"
          name: TestVar
          displayName: 'test'
I have a few places where I need to define a set of K8s secrets during deployment at various stages, so I want to extract the recurring script into a template:
parameters:
  - name: secretName
    type: string
    default: ""
  - name: secrets
    type: object
    default:
      Foo: Bar

steps:
  - task: Bash@3
    displayName: Create generic secret ${{ parameters.secretName }}
    inputs:
      targetType: inline
      script: |
        echo "Creating generic secret ${{ parameters.secretName }}"
        microk8s kubectl delete secret ${{ parameters.secretName }}
        microk8s kubectl create secret generic ${{ parameters.secretName }} ${{ each secret in parameters.secrets }}: --from-literal=${{ secretKey }}="${{ secret.value }}"
I want to call it like this multiple times to create all necessary secrets for the deployment to each stage:
- job: CreateSecrets
  pool:
    name: $(poolName)
  steps:
    - template: "Templates/template-create-secret.yml"
      parameters:
        secretName: "testSecret"
        secrets:
          username: $(staging-user)
          password: $(staging-password)
          foo: $(bar)
And it should simply execute a script similar to this one:
kubectl create secret generic secretName \
  --from-literal=username=user1 \
  --from-literal=password=pass1 \
  # ...etc
With my current approach I am receiving the error:
/Code/BuildScripts/Templates/template-create-secret.yml (Line: 18, Col: 15): The directive 'each' is not allowed in this context. Directives are not supported for expressions that are embedded within a string. Directives are only supported when the entire value is an expression.
How is it possible to iterate over a parameter of type object and use its key and value to build a string for bash? The alternative would be to use a single key-value pair per secret and create multiple secrets, which I'd like to avoid.
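One pattern that sidesteps the restriction is to expand the object into the step's env mapping at compile time, where each may own an entire mapping entry, and then assemble the argument string in bash at runtime. A hedged sketch, untested; the SECRET_ prefix is illustrative, and the secret keys are assumed to be valid environment variable names:

steps:
  - task: Bash@3
    displayName: Create generic secret ${{ parameters.secretName }}
    env:
      # allowed here: the each directive owns the whole mapping entry
      ${{ each secret in parameters.secrets }}:
        SECRET_${{ secret.key }}: ${{ secret.value }}
    inputs:
      targetType: inline
      script: |
        # collect every exported SECRET_* variable into --from-literal arguments
        args=""
        for var in $(compgen -e | grep '^SECRET_'); do
          args="$args --from-literal=${var#SECRET_}=${!var}"
        done
        microk8s kubectl delete secret ${{ parameters.secretName }} --ignore-not-found
        microk8s kubectl create secret generic ${{ parameters.secretName }} $args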
If you concatenate the command arguments into a variable, you can use that in a future step/task. This example concatenates all secrets within the key vaults indicated in the keyVaultSecretSources parameter into one command. It shouldn't be too hard to adjust it so you can specify which secrets to include or exclude:
parameters:
  - name: environment
    type: string
  - name: namespace
    type: string
  - name: releaseName
    type: string
  # contains an array of Azure key vault names
  - name: keyVaultSecretSources
    type: object

stages:
  - stage: MountSecrets${{ parameters.releaseName }}
    pool: [Your k8s Pool]
    displayName: Mount Key Vault Secrets ${{ parameters.releaseName }}
    # The key vault arguments will be concatenated into the stage variable secretArgs
    variables:
      secretArgs: ""
    jobs:
      - deployment: [Your Job Deployment Name]
        displayName: [Your Job Display Name]
        strategy:
          runOnce:
            deploy:
              steps:
                # skip artifacts download for this stage
                - download: none
                - ${{ each keyVault in parameters.keyVaultSecretSources }}:
                  # 1. obtain all secrets from the keyVault.name key vault
                  # 2. remove all Json formatting, left with each secret name on one line
                  # 3. assign to local variable secretNameArray as an array
                  # 4. loop through secretNameArray and assign the secret to local variable kvs
                  # 5. construct the argument --from-literal and append to local variable mountCommand
                  # 6. append mountCommand to the stage variable secretArgs
                  - task: AzureCLI@2
                    displayName: 'Concatenate Keyvault Secrets'
                    inputs:
                      azureSubscription: [Your subscription]
                      scriptType: 'bash'
                      failOnStandardError: true
                      scriptLocation: 'inlineScript'
                      inlineScript: |
                        secretNameArray=($(az keyvault secret list --vault-name ${{ keyVault.name }} --query "[].name" | tr -d '[:space:][]"' | sed -r 's/,+/ /g'));
                        for i in "${secretNameArray[@]}"; do kvs="$(az keyvault secret show --vault-name ${{ keyVault.name }} --name "$i" --query "value" -o tsv)"; mountCommand="$mountCommand --from-literal=$(echo -e "$i" | sed -r 's/-/_/g')='$kvs'"; done;
                        echo "##vso[task.setvariable variable=secretArgs;issecret=true]$(secretArgs)$mountCommand"
                - task: Kubernetes@1
                  displayName: 'Kubectl Login'
                  inputs:
                    kubernetesServiceEndpoint: [Your Service Connection Name]
                    command: login
                    namespace: ${{ parameters.namespace }}
                - task: AzureCLI@2
                  displayName: 'Delete Secrets'
                  inputs:
                    azureSubscription: [Your subscription]
                    scriptType: 'bash'
                    failOnStandardError: false
                    scriptLocation: 'inlineScript'
                    inlineScript: |
                      kubectl delete secret ${{ parameters.releaseName }}-keyvault -n '${{ parameters.namespace }}'
                      exit 0
                - task: AzureCLI@2
                  displayName: 'Mount Secrets'
                  inputs:
                    azureSubscription: [Your subscription]
                    scriptType: 'bash'
                    failOnStandardError: false
                    scriptLocation: 'inlineScript'
                    inlineScript: |
                      kubectl create secret generic ${{ parameters.releaseName }}-keyvault$(secretArgs) -n '${{ parameters.namespace }}'
                      exit 0
                - task: Kubernetes@1
                  displayName: 'Kubectl Logout'
                  inputs:
                    command: logout
According to the doc on parameter data types, you need to use the 'each' keyword before the script; the doc on runtime parameters has more background. Here is a demo script:
parameters:
  - name: listOfStrings
    type: object
    default:
      - one
      - two

steps:
  - ${{ each value in parameters.listOfStrings }}:
    - script: echo ${{ value }}
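The same directive also iterates maps, where the loop variable exposes .key and .value; a small sketch along the lines of the question's secrets object (values are illustrative):

parameters:
  - name: secrets
    type: object
    default:
      username: user1
      password: pass1

steps:
  # one step is inserted per key-value pair at template expansion time
  - ${{ each secret in parameters.secrets }}:
    - script: echo "--from-literal=${{ secret.key }}=${{ secret.value }}"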
In GitHub Actions, I can write a matrix job like so:
jobs:
  test:
    name: Test-${{ matrix.template }}-${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]
        template: ['API', 'GraphQL', 'Orleans', 'NuGet']
    steps:
      # ...
This will run every combination of os and template. In Azure Pipelines, you have to specify each combination manually like so:
stages:
  - stage: Test
    jobs:
      - job: Test
        strategy:
          matrix:
            Linux:
              os: ubuntu-latest
              template: API
            Mac:
              os: macos-latest
              template: API
            Windows:
              os: windows-latest
              template: API
            # ...continued
        pool:
          vmImage: $(os)
        timeoutInMinutes: 20
        steps:
          # ...
Is it possible to create a data driven matrix strategy similar to GitHub Actions?
The answer is yes. This is a known issue that has already been reported on GitHub:
Add cross-product matrix strategy
In addition, there is a workaround mentioned in the official documentation:
Note: The matrix syntax doesn't support automatic job scaling, but you can implement similar functionality using the each keyword. For an example, see nedrebo/parameterized-azure-jobs.
jobs:
  - template: azure-pipelines-linux.yml
    parameters:
      images: [ 'archlinux/base', 'ubuntu:16.04', 'ubuntu:18.04', 'fedora:31' ]
      pythonVersions: [ '3.5', '3.6', '3.7' ]
      swVersions: [ '1.0.0', '1.1.0', '1.2.0', '1.3.0' ]
  - template: azure-pipelines-windows.yml
    parameters:
      images: [ 'vs2017-win2016', 'windows-2019' ]
      pythonVersions: [ '3.5', '3.6', '3.7' ]
      swVersions: [ '1.0.0', '1.1.0', '1.2.0', '1.3.0' ]
azure-pipelines-windows.yml:
jobs:
  - ${{ each image in parameters.images }}:
    - ${{ each pythonVersion in parameters.pythonVersions }}:
      - ${{ each swVersion in parameters.swVersions }}:
        - job:
          displayName: ${{ format('OS:{0} PY:{1} SW:{2}', image, pythonVersion, swVersion) }}
          pool:
            vmImage: ${{ image }}
          steps:
            - script: echo OS version &&
                wmic os get version &&
                echo Lets test SW ${{ swVersion }} on Python ${{ pythonVersion }}
Not an ideal solution, but for now, you can loop over parameters. Write a template like the following, and pass your data to it.
# jobs loop template
parameters:
  jobs: []

jobs:
  - ${{ each job in parameters.jobs }}: # Each job
    - ${{ each pair in job }}: # Insert all properties other than "steps"
        ${{ if ne(pair.key, 'steps') }}:
          ${{ pair.key }}: ${{ pair.value }}
      steps: # Wrap the steps
        - task: SetupMyBuildTools@1 # Pre steps
        - ${{ job.steps }} # Users steps
        - task: PublishMyTelemetry@1 # Post steps
          condition: always()
See here for more examples: https://github.com/Microsoft/azure-pipelines-yaml/blob/master/design/each-expression.md#scenario-wrap-jobs
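A hedged usage sketch for the loop template above (the file name job-loop.yml and the job bodies are illustrative):

jobs:
  - template: job-loop.yml
    parameters:
      jobs:
        # each entry is re-emitted by the template with pre/post steps wrapped around it
        - job: BuildLinux
          pool:
            vmImage: ubuntu-latest
          steps:
            - script: echo building on Linux
        - job: BuildWindows
          pool:
            vmImage: windows-latest
          steps:
            - script: echo building on Windows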