I have a base template that accepts a stageList parameter. I don't do anything with the jobs in those stages:
parameters:
  - name: stages
    type: stageList
    default: []

stages:
  - ${{ parameters.stages }}
I'm passing into that template a stage that contains a deployment job. I have hardcoded the environment for testing purposes, but even so it inserts a `name` key under `environment`:
resources:
  repositories:
    - repository: templates
      type: git
      name: basePipelineTemplatesHost/basePipelineTemplatesHost

extends:
  template: templateExtendedByDeployment/template.yml@templates
  parameters:
    stages:
      - stage: buildStage1
        jobs:
          - deployment:
            displayName: Deploy to demo environment
            environment: DTL-Demo-Env
            strategy:
              runOnce:
                deploy:
                  steps:
                    - script: echo test
Resulting in the following rendered YAML:
environment: {
  name: DTL-Demo-Env
}
This causes the job to run on a hosted VM instead of my on-prem environment agent. Is this a bug?
Just a suggestion: you should add `resourceType` under `environment`.
jobs:
  - deployment:
    displayName: Deploy to demo environment
    environment:
      name: DTL-Demo-Env
      resourceType: VirtualMachine
    strategy:
      runOnce:
        deploy:
          steps:
            - script: echo test
If you don't, the deployment will always run on a hosted agent even though you intend it to use your private agent. Adding `resourceType` ties the deployment to the virtual machine resource registered in the environment, so the job runs on your on-prem agent.
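For clarity, the stage passed into the base template from the question would then look like this (the question's own snippet, with only the environment mapping changed):

extends:
  template: templateExtendedByDeployment/template.yml@templates
  parameters:
    stages:
      - stage: buildStage1
        jobs:
          - deployment:
            displayName: Deploy to demo environment
            environment:
              name: DTL-Demo-Env
              resourceType: VirtualMachine
            strategy:
              runOnce:
                deploy:
                  steps:
                    - script: echo test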
I'm building a Spring Boot app using Gradle. Integration tests (Spock) need to access the code/src/test/resources/docker-compose.yml file to prepare a Testcontainers container:
static DockerComposeContainer postgresContainer = new DockerComposeContainer(
        ResourceUtils.getFile("classpath:docker-compose.yml"))
The git file structure is:
- code
  - src
    - main
    - test
      - resources
        - docker-compose.yml
This works fine on my local machine, but once I run it in an Azure pipeline, it gives
No such file or directory: '/__w/1/s/code/build/resources/test/docker-compose.yml'
My pipeline YAML is below. I use an Ubuntu pool with a JDK 17 container because I need to build with Java 17 but Azure's latest hosted JDK is 11 (maybe this plays a role in the error I get?).
trigger: none

stages:
  - stage: Test
    displayName: Test Stage
    jobs:
      - job: Test
        pool:
          vmImage: 'ubuntu-22.04'
        container: gradle:7.6.0-jdk17
        variables:
          - name: JAVA_HOME_11_X64
            value: /opt/java/openjdk
        displayName: Test
        steps:
          - script: java --version
            displayName: Java version
          - script: |
              echo "Build reason is: $(Build.Reason)"
            displayName: Build reason
          - checkout: self
            clean: true
          - task: Gradle@2
            displayName: 'Gradle Build'
            enabled: True
            inputs:
              javaHomeSelection: 'path'
              jdkDirectory: '/opt/java/openjdk'
              wrapperScript: code/gradlew
              cwd: code
              tasks: clean build
              publishJUnitResults: true
              jdkVersionOption: 1.17
Thanks for the help!
I've solved it with a workaround: I realized that I don't need to use the JDK 17 container that was causing the problem (it could not access files on the host machine, of course).
The truth is that Azure silently supports JDK 17 via the directive jdkVersionOption: 1.17.
But once someone needs to use a container to build the code and access repository files that are not on the classpath, the problem will arise again.
trigger: none

stages:
  - stage: Test
    displayName: Test Stage
    jobs:
      - job: Test
        pool:
          vmImage: 'ubuntu-22.04'
        displayName: Test
        steps:
          - script: java --version
            displayName: Java version
          - script: |
              echo "Build reason is: $(Build.Reason)"
            displayName: Build reason
          - checkout: self
            clean: true
          - task: Gradle@2
            displayName: 'Gradle Build'
            enabled: True
            inputs:
              wrapperScript: server/code/gradlew
              cwd: server/code
              tasks: test
              publishJUnitResults: true
              jdkVersionOption: 1.17
Please follow the Azure Pipelines issue for more details.
I'm attempting to create a GHA workflow, and I'm getting an error that I'm unsure how to fix, as I've implemented this in similar environments before.
name: Deploy Staging

# Controls when the workflow will run
on:
  # Triggers the workflow on push events only for the main branch
  push:
    branches: [ main ]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  # Run the build job first
  build:
    name: Build
    uses: ./.github/workflows/build.yml

  deploy-staging:
    name: Staging Deploy
    runs-on: ubuntu-latest
    environment:
      name: staging
    needs: [build]
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/setup-node@v3
        with:
          node-version: '14'
      - name: Download build artifacts
        uses: actions/download-artifact@v3
        with:
          name: buildResult
      - name: CDK install
        run: npm install -g aws-cdk
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: XXXX
          aws-region: us-east-1
      - name: CDK diff
        run: cdk --app . diff staging
      - name: CDK deploy
        run: cdk --app . deploy staging --require-approval never
      - name: Configure DX AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: XXXX
          aws-region: us-east-1
          role-session-name: "${{ github.actor }}"
      - name: Report deployment
        uses: XXXX/deployment-tracker-action@v1
        if: always()
        with:
          application-name: XXXX
          environment: staging
          platform: test
          deployment-status: ${{ steps.deploy-workload.outcome == 'success' && 'success' || 'fail' }}
          aws-region: us-east-1
          XXXX
I don't quite understand where I'm going wrong here, but when I merged my actions branch and attempted to run it, I received the following message:
error parsing called workflow "./.github/workflows/build.yml": workflow is not reusable as it is missing a `on.workflow_call` trigger
Below is my build file for reference.
name: Build

# Controls when the workflow will run
on:
  pull_request:
    branches: [ main ]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
    inputs:
      buildEnvironment:
        description: Build Environment
        required: false
        default: production

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # next build runs lint, don't need a step for it
  build:
    name: Build
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '14'
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: XXXX
          aws-region: us-east-1
          role-session-name: "${{ github.actor }}"
      - name: Install Dependencies
        run: npm install
      - name: CDK install
        run: npm install -g aws-cdk
      - name: CDK build
        run: cdk synth
      - name: Upload build artifacts
        uses: actions/upload-artifact@v3
        with:
          name: buildResult
          path: |
            cdk.out

  test:
    name: Test
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '14'
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: XXXX
          aws-region: us-east-1
          role-session-name: "${{ github.actor }}"
      - name: Install Dependencies
        run: npm install
      - name: Run tests
        run: npm test
If you want to call another workflow (a reusable workflow), the workflow you're calling needs to have the workflow_call trigger.
Therefore, to resolve your error, change build.yml to:
name: Build

on:
  workflow_call:
  pull_request:
  # etc..
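A side note, not part of the original answer: if you also want the buildEnvironment input from workflow_dispatch to be available to callers, workflow_call declares its own inputs block; unlike workflow_dispatch, each workflow_call input needs an explicit type. A minimal sketch:

on:
  workflow_call:
    inputs:
      buildEnvironment:
        description: Build Environment
        required: false
        type: string
        default: production
  pull_request:
    branches: [ main ]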
So I have a job which is triggered by specific rules: creating a new tag app-prod-1.0.0 or app-dev-1.0.0. Whenever a new tag is created I run the job, which in turn extends other jobs.
image: node:lts-alpine

stages:
  - install
  - build
  - deploy

.install-packages:
  stage: install
  script:
    - echo "INSTALL-PACKAGES"
    - yarn install --cache-folder .yarn-cache
  artifacts:
    paths:
      - node_modules
  cache:
    - key:
        files:
          - yarn.lock
      paths:
        - .yarn-cache/

.build-project:
  stage: build
  script:
    - echo "BUILD-PROJECT"
    - echo $ENVIRONMENT
    - yarn build
  artifacts:
    paths:
      - build

.deploy-project:
  stage: deploy
  script:
    - echo "DEPLOY-PROJECT"
    - ls -la build

build_prod:
  variables:
    PACKAGE: '/app/prod'
    ENVIRONMENT: 'prod'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-prod-[0-9]+\.[0-9]+\.[0-9]+$/'

build_dev:
  variables:
    PACKAGE: '/app/dev'
    ENVIRONMENT: 'dev'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-dev-[0-9]+\.[0-9]+\.[0-9]+$/'
My thought was that the jobs would run in the order I listed them inside the job: .install-packages, .build-project, .deploy-project. But that's not happening; it seems to jump straight to the last job, .deploy-project, without installing and building, thus breaking my pipeline.
How do I run/extend jobs in sequence?
This behaviour is why I haven't used multiple extends so far in my work with GitLab.
GitLab attempts to merge the configuration from the parent jobs.
All of your parent jobs define the script keyword, and in a job such as build_prod, with the extends happening in the order below,
extends:
  - .install-packages
  - .build-project
  - .deploy-project
the script code from .deploy-project overwrites the other parent jobs' script keyword.
It works differently for variables: all variables are merged, and a value is overwritten only when the same variable is defined in more than one place.
See your own example, updated with variables:
image: node:lts-alpine

stages:
  - install
  - build
  - deploy

.install-packages:
  stage: install
  variables:
    PACKAGE: 'install'
    INSTALL: 'install'
  script:
    - echo "INSTALL-PACKAGES"
    - yarn install --cache-folder .yarn-cache
  artifacts:
    paths:
      - node_modules
  cache:
    - key:
        files:
          - yarn.lock
      paths:
        - .yarn-cache/

.build-project:
  stage: build
  variables:
    PACKAGE: 'build'
    BUILD: 'build'
  script:
    - echo "BUILD-PROJECT"
    - echo $ENVIRONMENT
    - yarn build
  artifacts:
    paths:
      - build

.deploy-project:
  stage: deploy
  variables:
    PACKAGE: 'deploy'
    DEPLOY: 'from deploy'
  script:
    - echo "DEPLOY-PROJECT"
    - ls -la build

build_prod:
  variables:
    PACKAGE: '/app/prod'
    ENVIRONMENT: 'prod'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-prod-[0-9]+\.[0-9]+\.[0-9]+$/'

build_dev:
  variables:
    PACKAGE: '/app/dev'
    ENVIRONMENT: 'dev'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-dev-[0-9]+\.[0-9]+\.[0-9]+$/'
Now notice how the PACKAGE variable is overwritten with the final value '/app/prod', which comes from the build_prod job itself, while the other variables from the individual parent jobs are merged to look like this:
variables:
  PACKAGE: "/app/prod"
  INSTALL: install
  BUILD: build
  DEPLOY: from deploy
  ENVIRONMENT: prod
I found the View merged YAML feature best for understanding how my YAML file will be evaluated.
It's available under CI/CD -> Editor.
It doesn't actually "jump to the last job"; it simply executes the single job you have provided, that is build_prod or build_dev, depending on the commit tag.
As per the docs, when you use extends you are basically just merging everything inside all the template jobs you specified, so the last stage keyword, which comes from the .deploy-project template job, wins.
You should specify a separate job for each stage, and maybe even put your rules in a separate template job, i.e.:
.dev:
  variables:
    PACKAGE: '/app/dev'
    ENVIRONMENT: 'dev'
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-dev-[0-9]+\.[0-9]+\.[0-9]+$/'

install-dev:
  extends:
    - .dev
    - .install-packages

build-dev:
  extends:
    - .dev
    - .build-project

deploy-dev:
  extends:
    - .dev
    - .deploy-project
You should create similar jobs for the prod environment: define a template job .prod, and create install-prod, build-prod, and deploy-prod jobs, as sketched below.
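For completeness, a sketch of that prod counterpart under the same pattern (mirroring the .dev template above; the job names are just the ones suggested):

.prod:
  variables:
    PACKAGE: '/app/prod'
    ENVIRONMENT: 'prod'
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-prod-[0-9]+\.[0-9]+\.[0-9]+$/'

install-prod:
  extends:
    - .prod
    - .install-packages

build-prod:
  extends:
    - .prod
    - .build-project

deploy-prod:
  extends:
    - .prod
    - .deploy-project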
I have self-hosted agents on multiple environments that I am trying to run the same build/deploy processes on. I would like to be able to deploy the same code from a single repo to multiple systems concurrently. Thus, I have created an "overhead" pipeline, and several "processes" pipeline templates. Everything seems to be going very well, except for when I try to perform checkouts of the same repo twice in the same pipeline execution. I get the following error:
An error occurred while loading the YAML build pipeline. An item with the same key has already been added.
I would really like to be able to just click ONE button to trigger a main pipeline that calls all the templates required and passes the parameters needed to get all my jobs done at once. I could of course define this "overhead" pipeline and then queue up as many instances of it as I need per system to deploy to, but I'm lazy, hence why I'm using pipelines!
As soon as I remove the checkout from Common.yml, the validation succeeds without any issues. If I keep the checkout in there but only call Common.yml once for the entire overhead pipeline, it also succeeds without any issues. But the problem is: I need to pull the contents of the repo to EACH of my agents, which run in completely separate environments that are in no way able to talk to each other (I can't pull the information to one agent and have it do some sort of "copy" to all the other agent locations...).
Any assistance is very much welcomed, thank you!
The following is my "overhead" pipeline:
# azure-pipelines.yml
trigger: none

parameters:
  - name: vLAN
    type: string
    default: 851
    values:
      - 851
      - 1105

stages:
  - stage: vLAN851
    condition: eq('${{ parameters.vLAN }}', '851')
    pool:
      name: xxxxx
      demands:
        - vLAN -equals 851
    jobs:
      - job: Common_851
        steps:
          - template: Procedures/Common.yml
      - job: Export_851
        dependsOn: Common_851
        steps:
          - template: Procedures/Export.yml
            parameters:
              Server: ABTS-01

  - stage: vLAN1105
    condition: eq('${{ parameters.vLAN }}', '1105')
    pool:
      name: xxxxx
      demands:
        - vLAN -equals 1105
    jobs:
      - job: Common_1105
        steps:
          - template: Procedures/Common.yml
      - job: Export_1105
        dependsOn: Common_1105
        steps:
          - template: Procedures/Export.yml
            parameters:
              Server: OTS-01
And here is the "Procedures/Common.yml":
steps:
  - checkout: git://xxxxx/yyyyy@$(Build.SourceBranchName)
    clean: true
    enabled: true
    timeoutInMinutes: 1
  - task: UsePythonVersion@0
    enabled: true
    timeoutInMinutes: 1
    displayName: Select correct version of Python
    inputs:
      versionSpec: '3.8'
      addToPath: true
      architecture: 'x64'
  - task: CmdLine@2
    enabled: true
    timeoutInMinutes: 5
    displayName: Ensure Python Requirements Installed
    inputs:
      script: |
        python -m pip install GitPython
And here is the "Procedures/Export.yml":
parameters:
  - name: Server
    type: string

steps:
  - task: PythonScript@0
    enabled: true
    timeoutInMinutes: 3
    displayName: xxxxx
    inputs:
      arguments: --name "xxxxx" --mode True --Server ${{ parameters.Server }}
      scriptSource: 'filePath'
      scriptPath: 'xxxxx/main.py'
I managed to make checkout work with variable branch names by using template expression variables ${{ ... }} instead of macro syntax $(...) variables.
The difference is that template expressions are processed at compile time, while macros are processed at runtime.
So in my case I have something like:
- checkout: git://xxx/yyy@${{ variables.BRANCH_NAME }}
For more information about variable syntax, see:
Understand variable syntax
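To put that in context, a minimal sketch (BRANCH_NAME is a hypothetical variable, not from the answer; its value must be available when the pipeline is compiled, e.g. defined in the YAML or set at queue time):

variables:
  BRANCH_NAME: main  # hypothetical; must be resolvable at compile time

steps:
  - checkout: git://xxx/yyy@${{ variables.BRANCH_NAME }}
    clean: true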
I couldn't get it to work with expressions, but I was able to get it to work using repository resources, following the documentation at: https://learn.microsoft.com/en-us/azure/devops/pipelines/repos/multi-repo-checkout?view=azure-devops
resources:
  repositories:
    - repository: MyGitHubRepo # The name used to reference this repository in the checkout step
      type: git
      name: MyAzureProjectName/MyGitRepo
      ref: $(Build.SourceBranch)

trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

# some job
steps:
  - checkout: MyGitHubRepo

# some other job
steps:
  - checkout: MyGitHubRepo
  - script: dir $(Build.SourcesDirectory)
I'm trying to figure out how I can use yaml pipelines to deploy an application to a multi web-node environment.
I'm trying to create a pipeline with 2 stages. Stage 1 will build the project, and stage 2 will deploy it to a staging environment. This staging environment (currently) has 2 web nodes. I don't want to hardcode the actual machines to deploy to into my pipeline, so I thought I'd add a variable group with a variable containing the web nodes to deploy to, and use an "each" statement to generate a job per node.
However, this doesn't work, for several reasons:
- The jobs are generated before the variable group is read, so it shows an error that it cannot find the variable
- Apparently an array variable is not supported at all, except for the built-in variables
So my question is, how do other people solve this? I'd like to define the servers to deploy to in a central place, and not in my pipeline definition.
My initial attempt is printed below. This doesn't work, but it does describe what I'm trying to accomplish.
Main YAML:
variables:
  - group: LicenseServerVariables # this contains the StagingWebNodes variable

stages:
  - stage: Build
    displayName: Build
    <some build steps>
  - stage: DeployTest
    displayName: Deploy on test
    condition: and(succeeded(), eq(variables['DeployToTest'], 'true'))
    jobs:
      - template: Templates\Deploy.yaml
        parameters:
          nodes: $(StagingWebNodes)
Deploy.yaml:
parameters:
  nodes: []

jobs:
  - ${{ each node in parameters.nodes }}:
      - job: ${{ node }}
        displayName: deploy to ${{ node }}
        pool:
          name: Saas Staging
          demands: ${{ node }}
        steps:
          - template: DeployToNode.yaml
Edit:
I'm a bit closer to a solution. I was able to get the pipeline to work with the "each" construct using the following adjustment to Deploy.yaml:
parameters:
  nodes:
    - name: 'Node1'
      pool:
        name: StagingPool
        demands: 'Node1'
    - name: 'Node2'
      pool:
        name: StagingPool
        demands: 'Node2'

jobs:
  - ${{ each node in parameters.nodes }}:
      - job: ${{ node.name }}
        displayName: deploy to ${{ node.name }}
        pool: ${{ node.pool }}
        steps:
          - template: DeployToNode.yaml
This makes it a bit better. However, I still don't want to define the "nodes" parameter in my pipeline YAML source, but in a variable group (or some other place, if anyone has a good suggestion).
With the addition of virtual machines as resources for environments, this issue has gone away. I can now use a rolling deployment job to deploy the application to all web nodes.
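A minimal sketch of what that looks like, assuming an environment named Staging with each web node registered as a virtual machine resource (all names here are placeholders, not from the original answer):

jobs:
  - deployment: DeployWeb
    displayName: Deploy application to all web nodes
    environment:
      name: Staging            # hypothetical environment; register each web node as a VM resource in it
      resourceType: VirtualMachine
    strategy:
      rolling:
        maxParallel: 1         # update one node at a time
        deploy:
          steps:
            - script: echo deploying on $(Agent.MachineName)

Because the nodes are registered on the environment rather than listed in the YAML, adding or removing a web node no longer requires touching the pipeline definition.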