How can I use a pipeline-level variable template for a conditionally run expression at a stage template level?

Developers have access to a variable template in their source code repo. The pipeline YAML references this template and calls a stage template that has multiple "if" expressions to conditionally run a job if any of a number of variables were passed in via that variable template. As an example:
service-config.yaml
variables:
  buildDotNetCore: 'true'
App-Build-Pipeline.yaml
...
variables:
- template: service-config.yaml
...
stages:
- template: Master-CI-Template.yaml
Master-CI-Template.yaml
stages:
- ${{ if eq(variables.buildDotNetCore, 'true') }}:
  - stage: Build
    jobs:
    - job: BuildDotNetCore
      steps:
      - task: PowerShell@2
        inputs:
          targetType: 'inline'
          script: 'dir env:'
- ${{ else }}:
  - stage: Build
    jobs:
    - job: HitTheElse
      steps:
      - task: PowerShell@2
        inputs:
          targetType: 'inline'
          script: 'dir env:'
The expression is not evaluating to true. Since I have an else clause, it still runs "dir env:", but the job name is HitTheElse, and the variables output shows that the variable is there.
Edit:
Giving some clarity on some points:
The screenshot I included might be causing some confusion. The issue is that the expression is not being properly expanded and interpreted:
stages:
- ${{ if eq(variables.deployDotNetCore, 'true') }}:
  - stage: Build
    jobs:
    - job: deployDotNetCoreTrue
      steps:
      - task: PowerShell@2
        inputs:
          targetType: 'inline'
          script: "dir env:"
- ${{ else }}:
  - stage: Build
    jobs:
    - job: HitTheElse
      steps:
      - task: PowerShell@2
        inputs:
          targetType: 'inline'
          script: "dir env:; echo $(deployDotNetCore)"
The screenshot in the flow diagram where I show DEPLOYDOTNETCORE as "true" indicates to me that the variable template is indeed being ingested and the variable created at the global level. But when trying to use that same variable, which is defined at the pipeline level, it is not being expanded in the "if" expression:
- ${{ if eq(variables.deployDotNetCore, 'true') }}:
in the stages template. It comes back as eq(null, 'true'), which results in it going to the - ${{ else }}: branch, which puts in a job called "HitTheElse". In the last picture of the diagram, the same one with the variables output, you can see that the "HitTheElse" job was put in place.

I was able to pass the entire 'variables' object to the template via a parameter. I then reference the variable as 'parameters.ParentVars.deployDotNetCore'.
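A minimal sketch of that workaround, using the file names from the question (the ParentVars name comes from the answer above; the type: object declaration and the ${{ variables }} hand-off are assumptions about how it was wired up):
App-Build-Pipeline.yaml
variables:
- template: service-config.yaml

stages:
- template: Master-CI-Template.yaml
  parameters:
    ParentVars: ${{ variables }}  # pass the whole variables object to the template
Master-CI-Template.yaml
parameters:
- name: ParentVars
  type: object
  default: {}

stages:
- ${{ if eq(parameters.ParentVars.deployDotNetCore, 'true') }}:
  - stage: Build
    jobs:
    - job: deployDotNetCoreTrue
      steps:
      - task: PowerShell@2
        inputs:
          targetType: 'inline'
          script: 'dir env:'
Because ${{ }} template expressions are expanded at compile time, the parameter value is available when the if expression is evaluated, unlike the pipeline-level variable.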

Related

Github Workflows: Changing or specifying environment variables in matrix jobs

How do I specify or change an environment variable as part of a job matrix?
For example, here I want my job to echo "Two" and "Three" instead of "One", but GitHub completely ignores the definition or change of the environment variable in the matrix:
name: test-test-test
on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]
env:
  MY_VAR: One
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - env:
              MY_VAR: Two
          - env:
              MY_VAR: Three
    steps:
      - run: echo "$MY_VAR"
yields echo "One" and echo "One" in both jobs.
Use this context to access information about environment variables that have been set in a workflow, step, or job. Note that you cannot use the env context in the value of the id and uses keys within a step. An example is env.env_var.
GitHub provides extensive documentation about contexts at
https://docs.github.com/en/actions/learn-github-actions/contexts.
The syntax you used won't work for matrix. As stated in the official documentation:
The variables that you define become properties in the matrix context, and you can reference the property in other areas of your workflow file.
In your case, to access the matrix env variable you set in the include field list, you would need to use ${{ matrix.env.MY_VAR }}.
Something like this:
env:
  MY_VAR: One
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - env:
              MY_VAR: Two
          - env:
              MY_VAR: Three
    steps:
      - run: |
          echo "$MY_VAR" # is equal to ${{ env.MY_VAR }}
          echo ${{ matrix.env.MY_VAR }}
This will generate 2 test jobs: test (Two) and test (Three).
I made a test here if you want to have a look.
Conclusion:
Basically, the env you set at the workflow level isn't the same env set in the matrix strategy, so you can't substitute the value there.
Alternative
What you could do instead (as @Adam stated in the other answer) is set the env value at the workflow level for it to work, in the environment context. (Reference in the documentation + another workflow example.)
Example:
env:
  WORKFLOW_VARIABLE: WORKFLOW
jobs:
  test:
    runs-on: ubuntu-latest
    env:
      JOB_VARIABLE: JOB
    steps:
      - name: Example
        env:
          STEP_VARIABLE: STEP
        run: |
          echo "Hello World"
          echo "This is the $WORKFLOW_VARIABLE environment variable"
          echo "This is the $JOB_VARIABLE environment variable"
          echo "This is the $STEP_VARIABLE environment variable"
But depending on what you plan to achieve, a matrix strategy may be more efficient.

How to run a bash script with arguments in azure devops pipeline?

I have a script which works fine locally. While running the script I pass 2 arguments, and it works perfectly. Here's how I run the bash script locally:
./changeDB_connection.sh "fdevtestcuskv04" "test"
But I want to do it through an Azure DevOps pipeline. For that I have a pipeline task in which I'm calling the bash script with script arguments, but it failed with this error message: ##[error]Bash exited with code '1'
Here's the pipeline task:
- task: Bash@3
  displayName: 'Update Mongo Connection String'
  condition: and(succeeded(), eq('${{ parameters.UpdateDBstr }}', 'true'))
  inputs:
    azureSubscription: $(azureSubscription)
    workingDirectory: '$(System.DefaultWorkingDirectory)/Terraform/templates'
    targetType: 'filePath'
    failOnStderr: true
    filePath: "$(System.DefaultWorkingDirectory)/Terraform/Terraform-Scripts/changeDB_connection.sh"
    ScriptArguments: '-keyVaultName $(kvname) -Stringintials $(strinitial)'
Let me know what I am doing wrong.
The below is a sample, and it works fine on my side.
xxx.sh
#!/bin/bash
# pass arguments to the script
echo "Argument 1 is $1"
echo "Argument 2 is $2"
echo "Argument 3 is $3"
RunBashScript.yml
trigger:
- none

pool:
  vmImage: ubuntu-latest

steps:
- task: Bash@3
  inputs:
    filePath: 'xxx.sh'
    arguments: '1 2 3'
It runs successfully on my side. (Structure on my side: xxx.sh sits next to RunBashScript.yml in the repository root.)
For a bash task with multiple arguments, you can use YAML multiline syntax, e.g.:
- task: Bash@3
  inputs:
    targetType: 'filePath'
    filePath: "$(System.DefaultWorkingDirectory)/Terraform/Terraform-Scripts/changeDB_connection.sh"
    arguments: >
      -keyVaultName $(kvname)
      -Stringintials $(strinitial)
Bowman's answer regarding passing arguments to the script works, but can become troublesome if many arguments should be passed.
I would instead recommend passing arguments as environment variables to the script and then consuming the same variables in the script. This would require some rewriting of your script.
The documentation for the bash task specifies how to pass environment variables to the script execution. In short, just add them under env in your task definition.
Example:
steps:
- task: Bash@3
  inputs:
    targetType: 'filePath'
    filePath: 'my/file/path'
  env:
    MYFIRSTVARIABLE: 'some text'
    MYSECONDVARIABLE: $(aPipelineVariable)
Consume the environment variables in the bash script by referencing $MYFIRSTVARIABLE and $MYSECONDVARIABLE respectively. If the variables are secrets, you should keep them in variable groups which the pipeline consumes.
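Applied to the script from the question, the rewrite could look roughly like this (a sketch; the environment variable names are assumptions, not part of the original script):
changeDB_connection.sh
#!/bin/bash
# Read configuration from environment variables instead of $1/$2.
echo "Key vault name: $KEYVAULTNAME"
echo "String initials: $STRINGINITIALS"
with the task mapping them in:
env:
  KEYVAULTNAME: $(kvname)
  STRINGINITIALS: $(strinitial)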

Azure pipelines bash tries to execute the variable instead of expanding

This is really stupid, but it was driving me crazy for a couple of hours. I'm testing how to pass variables between PowerShell and Bash. Relevant code:
steps:
- task: PowerShell@2
  name: 'pwsh_script'
  inputs:
    targetType: 'inline'
    script: |
      $response = "6458ddcd4edd7b7f68bec10338d47b55d221e975"
      echo "latest (hardcoded) commit: $response"
      Write-Host "##vso[task.setvariable variable=LastCommit;isOutput=True]$response"
- task: Bash@3
  name: 'bash_script1'
  inputs:
    targetType: 'inline'
    script: |
      echo $(LastCommit)
And I keep getting errors like:
/d/a/_temp/b40e64e8-8b5f-42d4-8118-82e8cf8a28c2.sh: line 1: LastCommit: command not found
I tried with all kinds of quotes: double, single, none. Nothing works.
If you want to use echo $(LastCommit), then you just need to remove isOutput:
Write-Host "##vso[task.setvariable variable=LastCommit]$response"
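A minimal same-job version with that change (a sketch based on the question's snippet):
steps:
- task: PowerShell@2
  name: 'pwsh_script'
  inputs:
    targetType: 'inline'
    script: |
      $response = "6458ddcd4edd7b7f68bec10338d47b55d221e975"
      Write-Host "##vso[task.setvariable variable=LastCommit]$response"
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      echo $(LastCommit)  # macro syntax works: LastCommit is now a plain job-scoped variable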
And with isOutput you need to reference it via the task name:
steps:
- task: PowerShell@2
  name: 'pwsh_script'
  inputs:
    targetType: 'inline'
    script: |
      $response = "6458ddcd4edd7b7f68bec10338d47b55d221e975"
      echo "latest (hardcoded) commit: $response"
      Write-Host "##vso[task.setvariable variable=LastCommit;isOutput=True]$response"
- task: Bash@3
  name: 'bash_script1'
  inputs:
    targetType: 'inline'
    script: |
      echo $(pwsh_script.LastCommit)
Solution:
- Write-Host "##vso[task.setvariable variable=LastCommit;isOutput=True]$response"
+ Write-Host "##vso[task.setvariable variable=LastCommit]$response"
Turns out that the isOutput was breaking it, as it meant I was creating a multi-job output variable and trying to use it inside the same job.
From the official documentation:
If you want to make a variable available to future jobs, you must mark it as an output variable by using isOutput=true. Then you can map it into future jobs by using the $[] syntax and including the step name that set the variable. Multi-job output variables only work for jobs in the same stage.
To pass variables to jobs in different stages, use the stage dependencies syntax.
When you create a multi-job output variable, you should assign the expression to a variable.
For example:
myVarFromJobA: $[ dependencies.A.outputs['setvarStep.myOutputVar'] ] # map in the variable
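Put together, a minimal sketch using the names from the documentation quote (the job names A and B are assumed):
jobs:
- job: A
  steps:
  - task: PowerShell@2
    name: setvarStep
    inputs:
      targetType: 'inline'
      script: |
        Write-Host "##vso[task.setvariable variable=myOutputVar;isOutput=true]someValue"
- job: B
  dependsOn: A
  variables:
    myVarFromJobA: $[ dependencies.A.outputs['setvarStep.myOutputVar'] ] # map in the variable
  steps:
  - bash: echo "$(myVarFromJobA)"
For jobs in a later stage, the equivalent mapping uses the stage dependencies syntax, e.g. $[ stageDependencies.StageA.A.outputs['setvarStep.myOutputVar'] ].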

using for-loop in azure pipeline jobs

I want to use a for-loop which scans the files (value-f1.yaml, values-f2.yaml, ...) in a folder, each time uses a filename as a variable, and runs the job in the Azure pipeline to deploy the helmchart based on that values file. The folder is located in the GitHub repository. So I'm thinking of something like this:
pipeline.yaml
stages:
- stage: Deploy
  variables:
    azureResourceGroup: ''
    kubernetesCluster: ''
    subdomain: ''
  jobs:
  - ${{ each filename in /myfolder/*.yaml }}:
      valueFile: $filename
    - template: Templates/deploy-helmchart.yaml@pipelinetemplates
deploy-helmchart.yaml
jobs:
- job: Deploy
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - task: HelmInstaller@1
    displayName: 'Installing Helm'
    inputs:
      helmVersionToInstall: '2.15.1'
    condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
  - task: HelmDeploy@0
    displayName: 'Initializing Helm'
    inputs:
      connectionType: 'Azure Resource Manager'
      azureSubscription: $(azureSubscription)
      azureResourceGroup: $(azureResourceGroup)
      kubernetesCluster: $(kubernetesCluster)
      command: 'init'
    condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
  - task: PowerShell@2
    displayName: 'Fetching GitTag'
    inputs:
      targetType: 'inline'
      script: |
        # Write your PowerShell commands here.
        Write-Host "Fetching the latest GitTag"
        $gt = git describe --abbrev=0
        Write-Host "##vso[task.setvariable variable=gittag]$gt"
    condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
  - task: Bash@3
    displayName: 'Fetching repo-tag'
    inputs:
      targetType: 'inline'
      script: |
        echo GitTag=$(gittag)
        echo BuildID=$(Build.BuildId)
        echo SourceBranchName=$(Build.SourceBranchName)
        echo ClusterName=$(kubernetesCluster)
  - task: HelmDeploy@0
    displayName: 'Upgrading helmchart'
    inputs:
      connectionType: 'Azure Resource Manager'
      azureSubscription: $(azureSubscription)
      azureResourceGroup: $(azureResourceGroup)
      kubernetesCluster: $(kubernetesCluster)
      command: 'upgrade'
      chartType: 'FilePath'
      chartPath: $(chartPath)
      install: true
      releaseName: $(releaseName)
      valueFile: $(valueFile)
      arguments: '--set image.tag=$(gittag) --set subdomain=$(subdomain)'
    condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
Another thing: can the jobs access the GitHub repo by default, or do I need to do something at the job level?
Besides, how can I use a for-loop in the job for this case?
Any help would be appreciated.
Updated after getting comments from @Leo
Here is a PowerShell task that I added in deploy-helmchart.yaml for fetching the files from a folder in GitHub.
- task: PowerShell@2
  displayName: 'Fetching Files'
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Fetching values files"
      cd myfolder
      $a = git ls-files
      foreach ($i in $a) {
        Write-Host "##vso[task.setvariable variable=filename]$i"
        Write-Host "printing $i"
      }
Now the question is: how can I run the HelmDeploy@0 task for each file using parameters?
if the jobs can get access to the GitHub repo by default or do I need to do something in the job level?
The answer is yes.
We could add a command line task in the jobs, like job1, to clone the GitHub repository using a GitHub PAT; then we could access those files (value-f1.yaml, values-f2.yaml, ...) in $(Build.SourcesDirectory):
git clone https://<GithubPAT>@github.com/XXXXX/TestProject.git
Besides, how can I use a for-loop in the job for this case?
You could create a template which will have a set of actions, and pass parameters across during your build, like:
deploy-helmchart.yaml:
parameters:
  param: []

steps:
- ${{ each filename in parameters.param }}:
  - script: 'echo ${{ filename }}'
pipeline.yaml:
steps:
- template: deploy-helmchart.yaml
  parameters:
    param: ["filename1", "filename2", "filename3"]
Check the document Solving the looping problem in Azure DevOps Pipelines for some more details.
A command line snippet to get the latest file name in the folder:
FOR /F "delims=|" %%I IN ('DIR "$(Build.SourcesDirectory)\*.txt*" /B /O:D') DO SET NewestFile=%%I
echo ##vso[task.setvariable variable=NewFileName]%NewestFile%
Update:
Now the question is: how can I run the HelmDeploy@0 task for each file using parameters?
It depends on whether your HelmDeploy task has options to accept the filename parameter.
As I said before, we could use following yaml to invoke the template yaml with parameters:
- template: deploy-helmchart.yaml
parameters:
param: ["filaname1","filaname2","filaname3"]
But if the task HelmDeploy has no option to accept the parameter, we cannot run the HelmDeploy@0 task for each file using parameters.
Then I checked HelmDeploy@0 and found there is only one option, arguments, that can accept Helm command parameters.
So the answer to this question depends on whether your file name can be used in a Helm command; if not, you cannot run the HelmDeploy@0 task for each file using parameters. If yes, you can do it.
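For example, a sketch that expands one HelmDeploy@0 step per values file inside the template's each-loop (the valueFiles parameter name is assumed; the inputs mirror the question's upgrade step):
parameters:
  valueFiles: []

steps:
- ${{ each file in parameters.valueFiles }}:
  - task: HelmDeploy@0
    displayName: 'Upgrading helmchart with ${{ file }}'
    inputs:
      connectionType: 'Azure Resource Manager'
      azureSubscription: $(azureSubscription)
      azureResourceGroup: $(azureResourceGroup)
      kubernetesCluster: $(kubernetesCluster)
      command: 'upgrade'
      chartType: 'FilePath'
      chartPath: $(chartPath)
      install: true
      releaseName: $(releaseName)
      valueFile: ${{ file }}
Because the ${{ each }} loop is expanded at compile time, this works as long as the list of file names is known when the pipeline starts (e.g. passed as a parameter), not discovered by a script at runtime.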
Please check the official document Templates for some more details.
Hope this helps.

Using an array of values to repeat a step in GitHub Actions workflow

I am trying to create a GitHub Actions workflow which would collect specific paths changed in the last commit and run a step for each of the collected paths, if any.
Currently, in my workflow I'm creating an array of paths, but I'm not sure how to proceed with my array:
name: Test
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      # This step will create "an array" of strings, e.g. "path1 path2 path3"
      - name: array
        id: arr
        run: |
          arr=()
          for i in $(git diff-tree --no-commit-id --name-only -r ${{ github.sha }})
          do
            if [[ $i == *"path1"* ]]; then
              arr+=("path1")
            fi
            if [[ $i == *"path2"* ]]; then
              arr+=("path2")
            fi
          done
          echo ::set-output name=arr::${arr[@]}
      # How to run this step by iterating the `${{ steps.arr.outputs.arr }}`?
      - name: reviewdog-lint
        uses: reviewdog/action-eslint@v1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          reporter: github-pr-review
          eslint_flags: 'my_project/some_folder/${{ SINGLE_PATH }}/' # `SINGLE_PATH` would be a path from the array
Is something like this even possible in the first place? If not, what would be recommended way to loop through some values and use them as arguments in other workflow steps?
There is some support for this in GitHub Actions. There is a very good tutorial here that explains how to do it in detail, but essentially what you'll do is split the steps into two jobs. The first job will output a JSON object that will serve as the input to the matrix of the second job.
Here's a simple example:
jobs:
  setup:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.matrix.outputs.value }}
    steps:
      - id: matrix
        run: |
          echo '::set-output name=value::["a", "b", "c"]'
  build:
    needs: [ setup ]
    runs-on: ubuntu-latest
    strategy:
      matrix:
        value: ${{ fromJson(needs.setup.outputs.matrix) }}
    steps:
      - run: |
          echo "${{ matrix.value }}"
Difficult to say without running it, but I would say you need to use the output in the second step by assigning it to a variable, something like:
env:
  OUTPUT: ${{ steps.arr.outputs.arr }}
Then you use $OUTPUT as an environment variable inside the action.
The problem with that particular action is that it takes one commit at a time. But you can check out the code; it's a shell script. You can fork it from line 15 and make it split the input and run a loop over it, applying eslint to every one of them.
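The loop itself could look roughly like this (a sketch; it assumes the space-separated paths from the arr step are exposed to the script as $OUTPUT):
# Lint each path collected by the previous step.
for path in $OUTPUT; do
  eslint "my_project/some_folder/${path}/"
done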
