How to retain environment for different jobs in Azure Pipelines YAML

A stage can keep the running environment for its jobs, but I have several jobs that logically cannot be grouped into the same stage. All of these jobs share the same running environment, and I don't want to repeat the following code in each job. How can I abstract these steps into some sort of function and call it from each job? Or how can I create a shared environment for those jobs? Or how can I retain the environment without grouping them all into the same stage?
steps:
- bash: echo "##vso[task.prependpath]$CONDA/bin"
  displayName: Add Conda to PATH
- bash: conda env update -f environment.yml --name $(Agent.Id)
  displayName: Create Conda Environment
- bash: export PYTHONPATH="src/"
  displayName: Add src/ to PYTHONPATH
- bash: source activate $(Agent.Id)
  displayName: Activate Test Environment

You can put the above steps into a template YAML file and reference it as a step template from your main pipeline YAML file.
For example, create a template file setEnv.yml with the steps above.
# File: setEnv.yml
steps:
- bash: echo "##vso[task.prependpath]$CONDA/bin"
  displayName: Add Conda to PATH
- bash: conda env update -f environment.yml --name $(Agent.Id)
  displayName: Create Conda Environment
...
Then use a template reference to include it in the main pipeline:
# File: azure-pipelines.yml
stages:
- stage: A
  jobs:
  - job: macOS
    pool:
      vmImage: 'macOS-10.14'
    steps:
    - template: setEnv.yml  # Template reference
    # ... other tasks
- stage: B
  jobs:
  - job: Linux
    pool:
      vmImage: 'ubuntu-16.04'
    steps:
    - template: setEnv.yml  # Template reference
    # ... other tasks
See the Azure DevOps documentation on templates for more information.
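If different jobs need different environment names, step templates can also take parameters. A minimal sketch (the parameter name condaEnvName and its default are hypothetical, not part of the original answer):
# File: setEnv.yml
parameters:
- name: condaEnvName
  type: string
  default: 'test-env'

steps:
- bash: echo "##vso[task.prependpath]$CONDA/bin"
  displayName: Add Conda to PATH
- bash: conda env update -f environment.yml --name ${{ parameters.condaEnvName }}
  displayName: Create Conda Environment

# File: azure-pipelines.yml (excerpt)
steps:
- template: setEnv.yml
  parameters:
    condaEnvName: 'my-test-env'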

Related

Why can't my ci-cd config file find this Ansible command?

I'm currently testing out GitLab CI/CD and Ansible, and I wanted to combine the two. I already made an Ansible playbook, which is just a small nginx server for testing.
I'm using a Docker container with an Alpine image for my runner.
My .gitlab-ci.yml file looks like this:
stages:
  - install
  - deploy

install-ansible:
  stage: install
  script:
    - apk add ansible -v

deploy-job:
  stage: deploy
  script:
    - ansible-playbook ansible_roles.yml
The first part of the pipeline works, but it always fails in the deploy part, and I get the following error message:
$ ansible-playbook ansible_roles.yml
/bin/sh: eval: line 128: ansible-playbook: not found
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 127
Install Ansible in the same job where it is supposed to run, i.e. drop the install-ansible job and install Ansible in deploy-job.
Note: as is, you'll have to install Ansible in every job where you want to use it, since each job starts from a fresh container.
stages:
  - deploy

deploy-job:
  stage: deploy
  before_script:
    - apk add ansible -v
  script:
    - ansible-playbook ansible_roles.yml
Since installing packages on every CI run (and possibly in several jobs) can be costly and slow, you might instead build an image that already contains Ansible, push it to a Docker registry, and use it directly in your job in place of the default image from your runner config:
stages:
  - deploy

deploy-job:
  stage: deploy
  image: my.docker.repo/my/ansible/image:latest
  script:
    - ansible-playbook ansible_roles.yml
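A minimal Dockerfile for building such an image could look like the sketch below (assuming an Alpine base to match the runner; the registry path in the job above is the answer's placeholder):
# Dockerfile: bake Ansible into the image so jobs skip the apk install step
FROM alpine:3.18
RUN apk add --no-cache ansible
Build and push it once (e.g. docker build -t my.docker.repo/my/ansible/image:latest . followed by docker push), and every job that uses the image starts with ansible-playbook already available.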
Another solution is to ask your runner administrator to install Ansible in the default image used by your runner so that it is always available.

How to assign environment variables from a file in GitHub actions?

I'm currently working on a Django project. I have created a GitHub Action to run the python manage.py test command every time I push code to the main branch.
The problem is that I have many environment variables in my project, and I can't set a GitHub secret for each variable.
I have an env.dev file in my repository. What I need is for every push to assign the environment variables by reading them from the env.dev file.
Is there any way to do this?
This is my django.yml file used for GitHub Actions:
name: Django CI

on:
  pull_request:
    branches: [ "main" ]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      max-parallel: 4
      matrix:
        python-version: [3.9]
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run Tests
        run: |
          python manage.py test
As noted in "How do I use an env file with GitHub Actions?", there are actions like Dotenv Action that read a .env file from the root of the repo and expose its entries as environment variables to build steps.
Those values would not be GitHub secrets, though.
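If the values don't need to be kept secret, another common technique (not from the answer above, but a documented Actions mechanism) is to append the file's entries to the $GITHUB_ENV file so that subsequent steps see them as environment variables. A minimal sketch, assuming env.dev contains plain KEY=VALUE lines:
- name: Load env.dev into the job environment
  run: grep -v '^#' env.dev >> "$GITHUB_ENV"  # skip full-line comments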
There is an API to create secrets, which means that if your .env content does not change too often, you can script the secret-creation step.
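For instance, a sketch using the GitHub CLI, which wraps that API; OWNER/REPO is a placeholder, and env.dev is assumed to hold plain KEY=VALUE lines:
#!/bin/sh
# Push each KEY=VALUE pair from env.dev into the repository's Actions secrets.
while IFS='=' read -r key value; do
  case "$key" in ''|\#*) continue ;; esac  # skip blank lines and comments
  gh secret set "$key" --body "$value" --repo OWNER/REPO
done < env.dev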

Gitlab CI/CD pipeline run/extend other jobs in sequence

So I have a job which is triggered by specific rules: creating a new tag app-prod-1.0.0 or app-dev-1.0.0. Whenever a new tag is created, that job runs, and it in turn extends other jobs:
image: node:lts-alpine

stages:
  - install
  - build
  - deploy

.install-packages:
  stage: install
  script:
    - echo "INSTALL-PACKAGES"
    - yarn install --cache-folder .yarn-cache
  artifacts:
    paths:
      - node_modules
  cache:
    - key:
        files:
          - yarn.lock
      paths:
        - .yarn-cache/

.build-project:
  stage: build
  script:
    - echo "BUILD-PROJECT"
    - echo $ENVIRONMENT
    - yarn build
  artifacts:
    paths:
      - build

.deploy-project:
  stage: deploy
  script:
    - echo "DEPLOY-PROJECT"
    - ls -la build

build_prod:
  variables:
    PACKAGE: '/app/prod'
    ENVIRONMENT: 'prod'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-prod-[0-9]+\.[0-9]+\.[0-9]+$/'

build_dev:
  variables:
    PACKAGE: '/app/dev'
    ENVIRONMENT: 'dev'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-dev-[0-9]+\.[0-9]+\.[0-9]+$/'
My expectation was that the jobs would run in the order listed in extends: .install-packages, .build-project, .deploy-project. But that's not happening; it seems to jump straight to the last job, .deploy-project, without installing or building, which breaks my pipeline.
How do I run/extend jobs in sequence?
This behaviour is exactly why I haven't used multiple extends so far in my work with GitLab.
GitLab attempts to merge the configuration from the parent jobs.
All of your parent jobs define a script key, and in a job such as build_prod the extends are applied in this order:
extends:
  - .install-packages
  - .build-project
  - .deploy-project
so the script from .deploy-project overwrites the script of the other parent jobs.
Variables work differently: all variables are merged, and a value is overwritten only when the same variable name appears more than once.
Here is your own example, updated with variables to demonstrate:
image: node:lts-alpine

stages:
  - install
  - build
  - deploy

.install-packages:
  stage: install
  variables:
    PACKAGE: 'install'
    INSTALL: 'install'
  script:
    - echo "INSTALL-PACKAGES"
    - yarn install --cache-folder .yarn-cache
  artifacts:
    paths:
      - node_modules
  cache:
    - key:
        files:
          - yarn.lock
      paths:
        - .yarn-cache/

.build-project:
  stage: build
  variables:
    PACKAGE: 'build'
    BUILD: 'build'
  script:
    - echo "BUILD-PROJECT"
    - echo $ENVIRONMENT
    - yarn build
  artifacts:
    paths:
      - build

.deploy-project:
  stage: deploy
  variables:
    PACKAGE: 'deploy'
    DEPLOY: 'from deploy'
  script:
    - echo "DEPLOY-PROJECT"
    - ls -la build

build_prod:
  variables:
    PACKAGE: '/app/prod'
    ENVIRONMENT: 'prod'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-prod-[0-9]+\.[0-9]+\.[0-9]+$/'

build_dev:
  variables:
    PACKAGE: '/app/dev'
    ENVIRONMENT: 'dev'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-dev-[0-9]+\.[0-9]+\.[0-9]+$/'
Notice how the PACKAGE variable is overwritten with the final value '/app/prod', which comes from the build_prod job itself, while the other variables from the individual parent jobs are merged, resulting in:
variables:
  PACKAGE: "/app/prod"
  INSTALL: install
  BUILD: build
  DEPLOY: from deploy
  ENVIRONMENT: prod
I found the View merged YAML feature the best way to understand how a .yml file will be evaluated.
It's available under CI/CD -> Editor.
It doesn't actually "jump to the last job"; it simply executes the single job you defined, that is, build_prod or build_dev, depending on the commit tag.
As per the docs, when you use extends you are merging everything inside all the template jobs you specified, so the last stage keyword, which comes from the .deploy-project template job, wins.
You should define a separate job for each stage, and perhaps move your rules into a separate template job as well, i.e.
.dev:
  variables:
    PACKAGE: '/app/dev'
    ENVIRONMENT: 'dev'
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-dev-[0-9]+\.[0-9]+\.[0-9]+$/'

install-dev:
  extends:
    - .dev
    - .install-packages

build-dev:
  extends:
    - .dev
    - .build-project

deploy-dev:
  extends:
    - .dev
    - .deploy-project
Create similar jobs for the prod environment: define a template job .prod and create install-prod, build-prod, and deploy-prod jobs, as sketched below.
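Following the same pattern, the prod counterpart might look like this (a sketch mirrored from the dev jobs above, not spelled out in the original answer):
.prod:
  variables:
    PACKAGE: '/app/prod'
    ENVIRONMENT: 'prod'
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-prod-[0-9]+\.[0-9]+\.[0-9]+$/'

install-prod:
  extends:
    - .prod
    - .install-packages

build-prod:
  extends:
    - .prod
    - .build-project

deploy-prod:
  extends:
    - .prod
    - .deploy-project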

Can't checkout the same repo multiple times in a pipeline

I have self-hosted agents in multiple environments on which I am trying to run the same build/deploy processes. I would like to be able to deploy the same code from a single repo to multiple systems concurrently. Thus, I have created an "overhead" pipeline and several "processes" pipeline templates. Everything seems to be going very well, except when I try to perform checkouts of the same repo twice in the same pipeline execution. I get the following error:
An error occurred while loading the YAML build pipeline. An item with the same key has already been added.
I would really like to be able to just click ONE button to trigger a main pipeline that calls all the templates required and passes the parameters needed to get all my jobs done at once. I could of course define this "overhead" pipeline and then queue up as many instances of it as I need per system that I need to deploy to, but I'm lazy, hence why I'm using pipelines!
As soon as I remove the checkout from Common.yml, the validation succeeds without any issues. If I keep the checkout in there but only call the Common.yml once for the entire Overhead pipeline, then it succeeds without any issues as well. But the problem is: I need to pull the contents of the repo to EACH of my agents that are running on completely separate environments that are in no way ever able to talk to each other (can't pull the information to one agent and have it do some sort of a "copy" to all the other agent locations.....).
Any assistance is very much welcomed, thank you!
The following is my "overhead" pipeline:
# azure-pipelines.yml
trigger: none

parameters:
- name: vLAN
  type: string
  default: 851
  values:
  - 851
  - 1105

stages:
- stage: vLAN851
  condition: eq('${{ parameters.vLAN }}', '851')
  pool:
    name: xxxxx
    demands:
    - vLAN -equals 851
  jobs:
  - job: Common_851
    steps:
    - template: Procedures/Common.yml
  - job: Export_851
    dependsOn: Common_851
    steps:
    - template: Procedures/Export.yml
      parameters:
        Server: ABTS-01
- stage: vLAN1105
  condition: eq('${{ parameters.vLAN }}', '1105')
  pool:
    name: xxxxx
    demands:
    - vLAN -equals 1105
  jobs:
  - job: Common_1105
    steps:
    - template: Procedures/Common.yml
  - job: Export_1105
    dependsOn: Common_1105
    steps:
    - template: Procedures/Export.yml
      parameters:
        Server: OTS-01
And here is the "Procedures/Common.yml":
steps:
- checkout: git://xxxxx/yyyyy@$(Build.SourceBranchName)
  clean: true
  enabled: true
  timeoutInMinutes: 1
- task: UsePythonVersion@0
  enabled: true
  timeoutInMinutes: 1
  displayName: Select correct version of Python
  inputs:
    versionSpec: '3.8'
    addToPath: true
    architecture: 'x64'
- task: CmdLine@2
  enabled: true
  timeoutInMinutes: 5
  displayName: Ensure Python Requirements Installed
  inputs:
    script: |
      python -m pip install GitPython
And here is the "Procedures/Export.yml":
parameters:
- name: Server
type: string
steps:
- task: PythonScript#0
enabled: true
timeoutInMinutes: 3
displayName: xxxxx
inputs:
arguments: --name "xxxxx" --mode True --Server ${{ parameters.Server }}
scriptSource: 'filePath'
scriptPath: 'xxxxx/main.py'
I managed to make checkout work with variable branch names by using template expression variables ${{ ... }} instead of macro syntax $(...).
The difference is that template expressions are processed at compile time, while macros are processed at runtime.
So in my case I have something like:
- checkout: git://xxx/yyy@${{ variables.BRANCH_NAME }}
For more information about variable syntax, see Understand variable syntax.
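To make the compile-time vs. runtime distinction concrete, a sketch following this answer's reasoning (xxx/yyy and BRANCH_NAME are the placeholders from above):
steps:
# Template expression: expanded at compile time, so the checkout target is known when the pipeline is parsed
- checkout: git://xxx/yyy@${{ variables.BRANCH_NAME }}
# Macro syntax: only resolved at runtime, which per this answer is why the variant below did not work
# - checkout: git://xxx/yyy@$(Build.SourceBranchName)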
I couldn't get it to work with expressions, but I was able to get it to work using repository resources, following the documentation at: https://learn.microsoft.com/en-us/azure/devops/pipelines/repos/multi-repo-checkout?view=azure-devops
resources:
  repositories:
  - repository: MyGitHubRepo  # The name used to reference this repository in the checkout step
    type: git
    name: MyAzureProjectName/MyGitRepo
    ref: $(Build.SourceBranch)

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

# some job
steps:
- checkout: MyGitHubRepo

# some other job
steps:
- checkout: MyGitHubRepo
- script: dir $(Build.SourcesDirectory)

The deployment environment 'Staging' in your bitbucket-pipelines.yml file occurs multiple times in the pipeline

I'm trying to get Bitbucket Pipelines to work with multiple steps that define the deployment environment. When I do, I get the error:
Configuration error: The deployment environment 'Staging' in your bitbucket-pipelines.yml file occurs multiple times in the pipeline. Please refer to our documentation for valid environments and their ordering.
From what I read, the deployment setting has to be applied on a step-by-step basis.
How would I set up this example pipelines file to not hit that error?
image: ubuntu:18.04

definitions:
  steps:
    - step: &build
        name: npm-build
        condition:
          changesets:
            includePaths:
              # Only run npm if anything in the build directory was touched
              - "build/**"
        image: node:14.17.5
        script:
          - echo 'build initiated'
          - cd build
          - npm install
          - npm run dev
          - echo 'build complete'
        artifacts:
          - themes/factor/css/**
          - themes/factor/js/**
    - step: &deploychanges
        name: Deploy_Changes
        deployment: Staging
        script:
          - echo 'Installing server dependencies'
          - apt-get update -q
          - apt-get install -qy software-properties-common
          - add-apt-repository -y ppa:git-ftp/ppa
          - apt-get update -q
          - apt-get install -qy git-ftp
          - echo 'All dependencies installed'
          - echo 'Transferring changes'
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --force --changed-only -vv
          - echo 'File transfer complete'
    - step: &deploycompiled
        name: Deploy_Compiled
        deployment: Staging
        condition:
          changesets:
            includePaths:
              # Only run npm if anything in the build directory was touched
              - "build/**"
        script:
          - echo 'Installing server dependencies'
          - apt-get update -q
          - apt-get install -qy software-properties-common
          - add-apt-repository -y ppa:git-ftp/ppa
          - apt-get update -q
          - apt-get install -qy git-ftp
          - echo 'All dependencies installed'
          - echo 'Transferring compiled assets'
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --all --syncroot themes/factor/css/ -vv
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --all --syncroot themes/factor/js/ -vv
          - echo 'File transfer complete'
pipelines:
  branches:
    master:
      - step: *build
      - step:
          <<: *deploychanges
          deployment: Production
      - step:
          <<: *deploycompiled
          deployment: Production
    dev:
      - step: *build
      - step: *deploychanges
      - step: *deploycompiled
The workaround I have found for reusing environment variables without attaching the deployment clause to more than one step in a pipeline is to dump the env vars to a file and save it as an artifact that is sourced in the following steps.
The code snippet looks like:
steps:
  - step: &set-environment-variables
      name: 'Set environment variables'
      script:
        - printenv | xargs echo export > .envs
      artifacts:
        - .envs
  - step: &next-step
      name: "Next step in the pipeline"
      script:
        - source .envs
        - next_actions

pipelines:
  pull-requests:
    '**':
      - step:
          <<: *set-environment-variables
          deployment: my-deployment
      - step:
          <<: *next-step
          name: "Name of the next step being executed"
  branches:
    staging:
      - step:
          <<: *set-environment-variables
          deployment: my-deployment
      - step:
          <<: *next-step
          name: "Name of the next step being executed"
So far this solution works for me.
Update:
After hitting an issue where "%s" appeared in the .envs file and caused the later source .envs statement to fail, here is a slightly different approach to the initial step. It gets around that particular issue, and it also exports only the variables you know you need in your pipeline. Note that many Bitbucket environment variables available to the first script step are naturally available to later scripts anyway, so it makes more sense (to me, anyway) not to dump all environment variables into the .envs artifact, but to do it in a much more controlled manner.
- step: &set-environment-variables
    name: 'Set environment variables'
    script:
      - echo "export SSH_USER=$SSH_USER" > .envs
      - echo "export SERVER_IP=$SERVER_IP" >> .envs
      - echo "export ANOTHER_ENV_VAR=$ANOTHER_ENV_VAR" >> .envs
    artifacts:
      - .envs
In this example, .envs will now contain only those 3 environment variables, and not a whole heap of system + bitbucket variables (and of course, no pesky %s characters either!)
Normally you deploy to an environment in exactly one step, so you should attach the deployment keyword to that step specifically. This is how Bitbucket tracks whether the code was deployed or not. You can have multiple steps, where one runs unit tests, another runs integration tests, another builds the binaries, and the last one deploys the artifact to the environment and carries the deployment group. See the example below.
definitions:
  steps:
    - step: &test-vizdom-services
        name: "Vizdom services unit Tests"
        image: mcr.microsoft.com/dotnet/core/sdk:3.1
        script:
          - cd ./vizdom/vizdom.services.Tests
          - dotnet test vizdom.services.Tests.csproj

pipelines:
  custom:
    DEV-AWS-api-deploy:
      - step: *test-vizdom-services
      - step:
          name: "Vizdom Webapi unit Tests"
          image: mcr.microsoft.com/dotnet/core/sdk:3.1
          script:
            - export ASPNETCORE_ENVIRONMENT=Dev
            - cd ./vizdom/vizdom.webapi.tests
            - dotnet test vizdom.webapi.tests.csproj
      - step:
          deployment: DEV-API
          name: "API: Build > Zip > Upload > Deploy"
          image: mcr.microsoft.com/dotnet/core/sdk:3.1
          script:
            - apt-get update
            - apt-get install zip -y
            - mkdir -p ~/deployment/release_dll
            - cd ./vizdom/vizdom.webapi
            - cp -r ../shared_settings ~/deployment
            - dotnet publish vizdom.webapi.csproj -c Release -o ~/deployment/release_dll
            - cp Dockerfile ~/deployment/
            - cp -r deployment_scripts ~/deployment
            - cp deployment_scripts/appspec_dev.yml ~/deployment/appspec.yml
            - cd ~/deployment
            - zip -r $BITBUCKET_CLONE_DIR/dev-webapi-$BITBUCKET_BUILD_NUMBER.zip .
            - cd $BITBUCKET_CLONE_DIR
            - pipe: atlassian/aws-code-deploy:0.5.3
              variables:
                AWS_DEFAULT_REGION: 'us-east-1'
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                COMMAND: 'upload'
                APPLICATION_NAME: 'ss-webapi'
                ZIP_FILE: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'
                S3_BUCKET: 'ss-codedeploy-repo'
                VERSION_LABEL: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'
            - pipe: atlassian/aws-code-deploy:0.5.3
              variables:
                AWS_DEFAULT_REGION: 'us-east-1'
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                COMMAND: 'deploy'
                APPLICATION_NAME: 'ss-webapi'
                DEPLOYMENT_GROUP: 'dev-single-instance'
                WAIT: 'false'
                S3_BUCKET: 'ss-codedeploy-repo'
                VERSION_LABEL: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'
As you can see, I have multiple steps for running test cases, but I build the binaries and deploy the code in the final step. I could have broken it into separate steps, but I don't want to waste minutes on an extra step, because cloning and copying the artifact takes some time. Right now there are three steps; it could have been broken into four, where the fourth would have been the deployment step. I hope this brings some clarity.
Also, you can modify the names of the deployment groups as per your needs, and you can have up to 50 deployment groups :)
Little did I know, it's intentional that deployment happens in one step, and you can only define the deployment environment on one step. The following setup is what worked for us (plus the appropriate separate git-ftp files):
image: ubuntu:18.04

definitions:
  steps:
    - step: &build
        name: Build
        condition:
          changesets:
            includePaths:
              # Only run npm if anything in the build directory was touched
              - "build/**"
        image: node:15.0.1
        script:
          - echo 'build initiated'
          - cd build
          - npm install
          - npm run prod
          - echo 'build complete'
        artifacts:
          - themes/factor/css/**
          - themes/factor/js/**
    - step: &deploy
        name: Deploy
        deployment: Staging
        script:
          - echo 'Installing server dependencies'
          - apt-get update -q
          - apt-get install -qy software-properties-common
          - add-apt-repository -y ppa:git-ftp/ppa
          - apt-get update -q
          - apt-get install -qy git-ftp
          - echo 'All dependencies installed'
          - echo 'Transferring changes'
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --force --changed-only -vv
          - echo 'File transfer complete'

pipelines:
  branches:
    master:
      - step: *build
      - step:
          <<: *deploy
          deployment: Production
    dev:
      - step: *build
      - step: *deploy
I guess we cannot combine the deployment with either artifacts or cache.
If I use a standalone deployment, I can use the same deployment for multiple steps (as in my screenshot).
Once I add cache/artifacts, I get the same error as yours.
Just got the same issue today.
I don't think there's currently a solution for this except rewriting the steps so that two steps don't run in one environment.
Waiting on https://jira.atlassian.com/browse/BCLOUD-18261, which is planned to be released in July.
Related: https://community.atlassian.com/t5/Bitbucket-questions/The-deployment-environment-test-in-your-bitbucket-pipelines-yml/qaq-p/971584
This is currently not available. There is a ticket, and it says the feature is being worked on. The best workaround currently appears to be creating multiple deployment environments holding the same variables, one for each step that needs them.
Ex:
- step:
    <<: *InitialSetup
    deployment: Beta-Setup
- step:
    <<: *Build
    deployment: Beta-Build
From the comments on the ticket:
Hey everyone,
I know this is a long-winded workaround, and someone has probably already mentioned it, but I got around the issue by setting up "sub environments", one for each step. E.g. instead of having a "Staging" environment, I set up a "Staging Build" and "Staging Deploy" environment, and just had to duplicate the variables if necessary. I did the same for production.
Having to set up and maintain all these environments and variables can be a pain, but one can automate this to prevent human error, by setting up an OAuth client tool that interfaces with the API (you just need the "pipelines" scope), if one can be bothered to go to the effort (as I have: https://blog.programster.org/bitbucket-create-oauth-client-credentials).
I can't wait for this feature to be completed as that is the "real" solution, and a lot less effort!
In your case, you can solve the error with one of the following options:
Combine all steps into one big step.
Or create different deployment environments, e.g. Staging DeployChanges and Staging DeployCompiled, though this may lead to duplicated variables, for example:
- step: &deploychanges
    name: Deploy_Changes
    deployment: Staging DeployChanges
    script:
      - ....
- step: &deploycompiled
    name: Deploy_Compiled
    deployment: Staging DeployCompiled
    ....
