Azure DevOps YAML: release to multiple web nodes

I'm trying to figure out how I can use YAML pipelines to deploy an application to a multi-web-node environment.
I'm trying to create a pipeline with 2 stages. Stage 1 will build the project, and stage 2 will deploy it to a staging environment. This staging environment (currently) has 2 web nodes. I don't want to hardcode the actual machines to deploy to into my pipeline, so I thought I'd add a variable group with a variable containing the web nodes to deploy to, and use an "each" statement to generate a job per node.
However, this doesn't work, for several reasons:
- The jobs are generated before the variable group is read, so the pipeline reports an error that it cannot find the variable.
- Apparently an array variable is not supported at all, except for the built-in variables.
So my question is, how do other people solve this? I'd like to define the servers to deploy to in a central place, and not in my pipeline definition.
My initial attempt is printed below. This doesn't work, but it does describe what I'm trying to accomplish.
Main yaml:
variables:
- group: LicenseServerVariables # this contains the StagingWebNodes variable

stages:
- stage: Build
  displayName: Build
  <some build steps>
- stage: DeployTest
  displayName: Deploy on test
  condition: and(succeeded(), eq(variables['DeployToTest'], 'true'))
  jobs:
  - template: Templates\Deploy.yaml
    parameters:
      nodes: $(StagingWebNodes)
Deploy.yaml:
parameters:
  nodes: []

jobs:
- ${{ each node in parameters.nodes }}:
  - job: ${{ node }}
    displayName: deploy to ${{ node }}
    pool:
      name: Saas Staging
      demands: ${{ node }}
    steps:
    - template: DeployToNode.yaml
Edit:
I'm a bit closer to a solution. I was able to get the pipeline to work with the "each" construct using the following adjustment to Deploy.yaml:
parameters:
  nodes:
  - name: 'Node1'
    pool:
      name: StagingPool
      demands: 'Node1'
  - name: 'Node2'
    pool:
      name: StagingPool
      demands: 'Node2'

jobs:
- ${{ each node in parameters.nodes }}:
  - job: ${{ node.name }}
    displayName: deploy to ${{ node.name }}
    pool: ${{ node.pool }}
    steps:
    - template: DeployToNode.yaml
This makes it a bit better. However, I still don't want to define the "nodes" parameter in my pipeline YAML source, but in a variable group (or some other place, if anyone has a good suggestion).
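One compile-time option (a sketch, not something from the original question) is to move the node list into its own jobs template, for example a hypothetical Templates\StagingNodes.yml, so the list lives in exactly one file:

# Templates\StagingNodes.yml (hypothetical) - the single place the node list is defined
jobs:
- template: Deploy.yaml
  parameters:
    nodes:
    - name: 'Node1'
      pool:
        name: StagingPool
        demands: 'Node1'
    - name: 'Node2'
      pool:
        name: StagingPool
        demands: 'Node2'

The DeployTest stage would then reference Templates\StagingNodes.yml instead of Deploy.yaml. Because template references are resolved at compile time this sidesteps the variable-group limitation, and with a repository resource the template file can even live in a shared templates repository used by several pipelines.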

With the addition of virtual machines as a resource for environments, this issue has gone away. I can now use a rolling deployment to deploy the application to all web nodes.
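For reference, a minimal sketch of such a rolling deployment, assuming an environment named Staging whose registered VMs carry a web tag (both names are placeholders):

jobs:
- deployment: DeployWeb
  displayName: Rolling deploy to web nodes
  environment:
    name: Staging            # environment with VM resources registered
    resourceType: VirtualMachine
    tags: web                # only target VMs tagged 'web'
  strategy:
    rolling:
      maxParallel: 1         # update one node at a time
      deploy:
        steps:
        - template: DeployToNode.yaml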

Related

Cypress Parallelization: Automate the number of machines for best performance with Github-Actions

With Cypress and its dashboard, we can run tests in parallel to speed them up. To achieve this with Github-Actions we have to configure a workflow like this (copied from https://github.com/cypress-io/github-action#parallel):
jobs:
  test:
    name: Cypress run
    runs-on: ubuntu-20.04
    strategy:
      # when one test fails, DO NOT cancel the other
      # containers, because this will kill Cypress processes
      # leaving the Dashboard hanging ...
      # https://github.com/cypress-io/github-action/issues/48
      fail-fast: false
      matrix:
        # run 3 copies of the current job in parallel
        containers: [1, 2, 3]
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # because of "record" and "parallel" parameters
      # these containers will load balance all found tests among themselves
      - name: Cypress run
        uses: cypress-io/github-action@v4
        with:
          record: true
          parallel: true
          group: 'Actions example'
        env:
          # pass the Dashboard record key as an environment variable
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
          # Recommended: passing the GitHub token lets this action correctly
          # determine the unique run id necessary to re-run the checks
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
The key point here is the matrix configuration with containers: [1, 2, 3], which means we are using 3 machines for parallel execution. The distribution of which tests run on which machine comes from the dashboard. If we look at the results on the dashboard, we get a recommendation for the maximum number of machines that gives the best performance. Now, I wonder if it is possible to automate this number of machines (containers in the workflow config) somehow?
Of course, all tests have to run first so that the Cypress Dashboard can determine the best number, but each subsequent run could then be optimized.
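One way this could be automated (a sketch, not a confirmed Cypress feature): compute the matrix in a setup job and feed it to the test job with fromJSON. Here a hypothetical MACHINE_COUNT repository variable stands in for whatever number the Dashboard recommends:

jobs:
  prepare:
    runs-on: ubuntu-20.04
    outputs:
      containers: ${{ steps.gen.outputs.containers }}
    steps:
      - name: Generate container matrix
        id: gen
        env:
          # hypothetical repository variable holding the Dashboard's recommendation
          MACHINE_COUNT: ${{ vars.MACHINE_COUNT }}
        run: |
          # default to 3 machines when the variable is unset
          COUNT="${MACHINE_COUNT:-3}"
          echo "containers=[$(seq -s, 1 "$COUNT")]" >> "$GITHUB_OUTPUT"
  test:
    needs: prepare
    runs-on: ubuntu-20.04
    strategy:
      fail-fast: false
      matrix:
        containers: ${{ fromJSON(needs.prepare.outputs.containers) }}
    steps:
      # checkout and Cypress run steps exactly as above
      - name: Checkout
        uses: actions/checkout@v2

The Dashboard does not push its recommendation anywhere on its own, so someone (or a script built on your own tooling) still has to update that variable between runs.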

Can't checkout the same repo multiple times in a pipeline

I have self-hosted agents on multiple environments that I am trying to run the same build/deploy processes on. I would like to be able to deploy the same code from a single repo to multiple systems concurrently. Thus, I have created an "overhead" pipeline, and several "processes" pipeline templates. Everything seems to be going very well, except for when I try to perform checkouts of the same repo twice in the same pipeline execution. I get the following error:
An error occurred while loading the YAML build pipeline. An item with the same key has already been added.
I would really like to be able to just click ONE button to trigger a main pipeline that calls all the templates required and passes the parameters needed to get all my jobs done at once. I could of course define this "overhead" pipeline and then queue up as many instances as I need of it per system that I need to deploy to, but I'm lazy, hence why I'm using pipelines!
As soon as I remove the checkout from Common.yml, the validation succeeds without any issues. If I keep the checkout in there but only call the Common.yml once for the entire Overhead pipeline, then it succeeds without any issues as well. But the problem is: I need to pull the contents of the repo to EACH of my agents that are running on completely separate environments that are in no way ever able to talk to each other (can't pull the information to one agent and have it do some sort of a "copy" to all the other agent locations.....).
Any assistance is very much welcomed, thank you!
The following is my "overhead" pipeline:
# azure-pipelines.yml
trigger: none

parameters:
- name: vLAN
  type: string
  default: 851
  values:
  - 851
  - 1105

stages:
- stage: vLAN851
  condition: eq('${{ parameters.vLAN }}', '851')
  pool:
    name: xxxxx
    demands:
    - vLAN -equals 851
  jobs:
  - job: Common_851
    steps:
    - template: Procedures/Common.yml
  - job: Export_851
    dependsOn: Common_851
    steps:
    - template: Procedures/Export.yml
      parameters:
        Server: ABTS-01
- stage: vLAN1105
  condition: eq('${{ parameters.vLAN }}', '1105')
  pool:
    name: xxxxx
    demands:
    - vLAN -equals 1105
  jobs:
  - job: Common_1105
    steps:
    - template: Procedures/Common.yml
  - job: Export_1105
    dependsOn: Common_1105
    steps:
    - template: Procedures/Export.yml
      parameters:
        Server: OTS-01
And here is the "Procedures/Common.yml":
steps:
- checkout: git://xxxxx/yyyyy@$(Build.SourceBranchName)
  clean: true
  enabled: true
  timeoutInMinutes: 1
- task: UsePythonVersion@0
  enabled: true
  timeoutInMinutes: 1
  displayName: Select correct version of Python
  inputs:
    versionSpec: '3.8'
    addToPath: true
    architecture: 'x64'
- task: CmdLine@2
  enabled: true
  timeoutInMinutes: 5
  displayName: Ensure Python Requirements Installed
  inputs:
    script: |
      python -m pip install GitPython
And here is the "Procedures/Export.yml":
parameters:
- name: Server
  type: string

steps:
- task: PythonScript@0
  enabled: true
  timeoutInMinutes: 3
  displayName: xxxxx
  inputs:
    arguments: --name "xxxxx" --mode True --Server ${{ parameters.Server }}
    scriptSource: 'filePath'
    scriptPath: 'xxxxx/main.py'
I managed to make checkout work with variable branch names by using template expression variables ${{ ... }} instead of macro syntax $(...) variables.
The difference is that, template expressions are processed at compile time while macros are processed at runtime.
So in my case I have something like:
- checkout: git://xxx/yyy@${{ variables.BRANCH_NAME }}
For more information about variable syntax, see:
Understand variable syntax
I couldn't get it to work with expressions but I was able to get it to work using repository resources following the documentation at: https://learn.microsoft.com/en-us/azure/devops/pipelines/repos/multi-repo-checkout?view=azure-devops
resources:
  repositories:
  - repository: MyGitHubRepo # The name used to reference this repository in the checkout step
    type: git
    name: MyAzureProjectName/MyGitRepo
    ref: $(Build.SourceBranch)

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

# some job
steps:
- checkout: MyGitHubRepo

# some other job
steps:
- checkout: MyGitHubRepo
- script: dir $(Build.SourcesDirectory)
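A related detail worth knowing (an addition, not part of the original answer): when the same repository is checked out more than once in a single job, each checkout step can be given its own path relative to the agent's build directory, so the copies don't collide:

steps:
- checkout: MyGitHubRepo
  path: s/copy-one     # hypothetical folder names; pick whatever suits your layout
- checkout: MyGitHubRepo
  path: s/copy-two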

How do you use an input variable to specify which self-hosted runner a GitHub action would run on?

I'm using GitHub Actions.
I'm trying to create a self-hosted runner that will run on one of several cloud servers, but be able to control which specific server it runs on. This is to do rolling updates on machines that have local resources.
I've set up tags for the self-hosted runners, but so far I'm having to make 1..N separate YML scripts:
on:
  workflow_dispatch:

jobs:
  post-to-server1:
    runs-on: [self-hosted, server1 ]
Then I have to repeat this script N times.
What I'd like to do is this:
name: 'Server Patching'
on:
  workflow_dispatch:
    inputs:
      server-target:
        description: 'Target Server:'
        required: true
        default: 'dev'

jobs:
  test-params:
    runs-on: [ self-hosted, ${{ github.event.inputs.server-target }}]
    steps:
      - name: Start cloning of ${{ github.event.inputs.server-target }}
        run: |
          echo "Starting patching of: '${{ github.event.inputs.server-target }}'"
This fails validation with a check failure on line 12 in .github/workflows/test_workflow.yml:
Invalid workflow file
You have an error in your yaml syntax on line 12
Is there a way to do this?
I tagged this YAML also, because the variable substitution seems to be an issue with the YAML syntax as well (although it may be specific to GitHub Actions).
Update: Yes, you can, if you change this line:
runs-on: [ self-hosted, ${{ github.event.inputs.server-target }}]
to:
runs-on: ${{ github.event.inputs.server-target }}
The error actually happens on the test-params: line, not the line below it.
My theory is that the syntax checker fires before the variable substitution happens?
I'm not sure what makes the difference, but this works for me:
runs-on: [ self-hosted, "${{ github.event.inputs.server-target }}" ]
Creating a list like that does not seem to work with dynamic values.
As long as you choose labels that are not already in use by GitHub (e.g. ubuntu-latest, macos-11, ...), you should not have a problem dropping the self-hosted label, so your current naming pattern should be fine by itself.
That way you can just use this:
runs-on: ${{ github.event.inputs.serverToPatch }}
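Putting both observations together, a sketch of the full workflow could look like this; the type: choice input and its option values are assumptions, added so the server name can't be free text:

name: 'Server Patching'
on:
  workflow_dispatch:
    inputs:
      server-target:
        description: 'Target Server:'
        required: true
        default: 'dev'
        type: choice         # assumed: restrict the input to known runner labels
        options:
          - dev
          - server1
          - server2

jobs:
  test-params:
    runs-on: ${{ github.event.inputs.server-target }}
    steps:
      - name: Start cloning of ${{ github.event.inputs.server-target }}
        run: |
          echo "Starting patching of: '${{ github.event.inputs.server-target }}'"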

How to stop GitHub Action build when SonarQube scan fails

I have a scan step built into my GitHub Action build and that is working fine. I reach out to my company's SonarQube instance and the scan is initiated. The problem I am having is trying to stop a build if there is a failure. For the life of me I can't seem to find a way to do that. Also, when I watch the scan it appears as though the next steps might be happening before it finishes (not positive on that, but thought I would mention it). Any ideas??
name: Build, test, & deploy
on: [push]
jobs:
  sonarqube:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          # Disabling shallow clone is recommended for improving relevancy of reporting
          fetch-depth: 0
      # Triggering SonarQube analysis as results of it are required by Quality Gate check
      - name: SonarQube Scan
        uses: sonarsource/sonarqube-scan-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
      - name: SonarQube Quality Gate check
        uses: sonarsource/sonarqube-quality-gate-action@master
        # Force to fail step after specific time
        timeout-minutes: 5
        env:
          SONAR_TOKEN: ${{ secrets.ADAM_SONAR_TOKEN }}
  build:
    name: Project build & package
    if: "!contains(github.event.head_commit.message, '[skip-ci]')"
    runs-on: ubuntu-latest
    env:
      # environment var for this job
      #### the rest of the build is below this area - I didn't think it was necessary to include
You should use needs in your build job:
build:
  needs: sonarqube
  name: Project build & package
You can find information here: https://docs.github.com/en/actions/using-jobs/using-jobs-in-a-workflow
The answer is using "needs": https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idneeds
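Applied to the question's workflow, the relevant change is just that one line (a sketch; the omitted steps are unchanged from above):

jobs:
  sonarqube:
    runs-on: ubuntu-latest
    # steps: checkout, SonarQube Scan, and Quality Gate check as in the question
  build:
    needs: sonarqube   # build starts only after the sonarqube job succeeds;
                       # if the quality gate step fails, build is skipped
    name: Project build & package
    if: "!contains(github.event.head_commit.message, '[skip-ci]')"
    runs-on: ubuntu-latest

This also explains the observation that later steps seemed to start before the scan finished: without needs, the build job runs in parallel with the sonarqube job.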

Azure pipeline base template inserting name key in environment

I have a base template that accepts a stageList parameter. I don't do anything with the jobs in those stages:
parameters:
- name: stages
  type: stageList
  default: []

stages:
- ${{ parameters.stages }}
I'm passing into that a stage that contains a deployment job. I have hardcoded the environment for testing purposes, but even so it inserts a "name" key under environment:
resources:
  repositories:
  - repository: templates
    type: git
    name: basePipelineTemplatesHost/basePipelineTemplatesHost

extends:
  template: templateExtendedByDeployment/template.yml@templates
  parameters:
    stages:
    - stage: buildStage1
      jobs:
      - deployment:
        displayName: Deploy to demo environment
        environment: DTL-Demo-Env
        strategy:
          runOnce:
            deploy:
              steps:
              - script: echo test
Resulting in the following rendered yaml:
environment: {
  name: DTL-Demo-Env
}
This causes the job to run on a hosted vm instead of my on-prem environment agent. Is this a bug?
Just a suggestion, you should add resourceType under environment.
jobs:
- deployment:
  displayName: Deploy to demo environment
  environment:
    name: DTL-Demo-Env
    resourceType: VirtualMachine
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo test
If not, the newly created environment will always be created under a hosted agent even when you use your private agent. You should add resourceType so the deployment targets the VirtualMachine resource in the environment and the job runs on your private agent.
