I currently have a config.yml file with the following workflow jobs:
- build-test-staging:
    name: COM Staging Build
    filters:
      branches:
        only: /^release-.*/
    context: COM Deploy Settings
- deploy-staging:
    name: COM Staging Deploy
    requires:
      - COM Staging Build
    filters:
      branches:
        only: /^release-.*/
    context: COM Deploy Settings
- build-test-staging:
    name: UK Staging Build
    filters:
      branches:
        only: /^release-.*/
    context: UK Deploy Settings
- deploy-staging:
    name: UK Staging Deploy
    requires:
      - UK Staging Build
    filters:
      branches:
        only: /^release-.*/
    context: UK Deploy Settings
There will be more of these, as well as a production version with the same setup but different names.
As you can see, they all follow the same pattern: a name, a branch filter (release-* branches for staging, master for production), and a context to bring in some environment variables.
Is there a way to generate these jobs dynamically, without constant copy-and-pasting? I'm not too good at YAML.
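One low-tech option is plain YAML anchors, since anchors are resolved before CircleCI validates the config. A minimal sketch of the idea; the top-level `references` key and the anchor name `staging-filters` are my own conventions, not anything CircleCI requires:

```yaml
# Define the shared filter block once and reference it everywhere.
references:
  staging-filters: &staging-filters
    branches:
      only: /^release-.*/

workflows:
  staging:
    jobs:
      - build-test-staging:
          name: COM Staging Build
          filters: *staging-filters
          context: COM Deploy Settings
      - deploy-staging:
          name: COM Staging Deploy
          requires:
            - COM Staging Build
          filters: *staging-filters
          context: COM Deploy Settings
```

Anchors can't template the names or contexts themselves, though; for that, CircleCI 2.1's `matrix` key on workflow jobs, or generating the config up front via dynamic configuration (a `setup` workflow), may be a better fit.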
I have a pipeline which I want to trigger when a PR is merged into master. I have tried different things, but none of them worked; I get no error, and the pipeline does not trigger.
I do not want the pipeline to be triggered on PR creation; I want it triggered when the PR is merged into master. That is why I have not added a pr trigger in my YAML.
What am I missing here?
Approaches I've tried:
Enabled the "Continuous Integration" option
Followed the trigger syntax recommended by Microsoft:
trigger:
  batch: true
  branches:
    include:
      - master
  paths:
    include:
      - cosmos
Set up a valid YAML file
Pipeline:
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
  batch: true
  branches:
    include:
      - master
  paths:
    include:
      - cosmos

stages:
  - stage: SOME_PATH_dev
    displayName: SOME_PATH_dev
    jobs:
      - deployment: 'DeployToDev'
        environment: Dev
        cancelTimeoutInMinutes: 1
        pool:
          vmImage: ubuntu-latest
        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: self
                - task: AzureResourceGroupDeployment@2
                  displayName: 'Azure Deployment: Create Or Update Resource Group action on SOME_PATH_dev'
                  inputs:
                    ConnectedServiceName: SOME_KEY
                    resourceGroupName: SOME_PATH_dev
                    location: West US
                    csmFile: cosmos/deploy.json
                    csmParametersFile: cosmos/parameters-dev.json
                    deploymentName: SOME_PATH.Cosmos.DEV
References:
"Configuring the trigger failed, edit and save the pipeline again" with no noticeable error and no further details
Azure Devops build pipeline: CI triggers not working on PR merge to master when PR set to none
https://medium.com/@aksharsri/add-approval-gates-in-azure-devops-yaml-based-pipelines-a06d5b16b7f4
https://erwinstaal.nl/posts/manual-approval-in-an-azure-devops-yaml-pipeline/
The problem turned out to be on my end: I was changing the YAML (pipeline) files instead of files inside the cosmos folder and expecting the trigger to fire. With the paths: include filter in place, only changes under the cosmos folder start the pipeline.
I want to reuse the deployment environment across steps in my Bitbucket pipeline, but this is not allowed. Is there any way to combine the steps?
pipelines:
  default:
    - step: *build
  branches:
    master:
      - step: *build
      - step:
          <<: *deploy1
          name: Deployment 1
          deployment: Production
      - step:
          <<: *deploy2
          name: Deployment 2
          deployment: Production
I see that Bitbucket has this on the roadmap for Pipelines. I'm currently trying to do something similar; if I find a solution that works for me, I'll post it.
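In the meantime, one workaround (assuming the two deploy scripts can simply run back-to-back) is to collapse both deployments into a single step, so the `Production` environment is only claimed once. The `deploy1.sh` and `deploy2.sh` scripts below are hypothetical stand-ins for whatever the `*deploy1` and `*deploy2` anchors actually run:

```yaml
pipelines:
  branches:
    master:
      - step: *build
      - step:
          name: Deployment 1 and 2
          deployment: Production
          script:
            - ./deploy1.sh  # hypothetical: the commands behind *deploy1
            - ./deploy2.sh  # hypothetical: the commands behind *deploy2
```

If the steps genuinely need to stay separate, another option is to define an additional deployment environment in the repository settings so each step claims its own.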
I have a base template that accepts a stageList parameter. I don't do anything with the jobs in those stages:
parameters:
  - name: stages
    type: stageList
    default: []

stages:
  - ${{ parameters.stages }}
I'm passing into that a stage that contains a deployment job. I have hardcoded the environment for testing purposes, but even so the template expansion turns the scalar into a mapping with a name key under environment:
resources:
  repositories:
    - repository: templates
      type: git
      name: basePipelineTemplatesHost/basePipelineTemplatesHost

extends:
  template: templateExtendedByDeployment/template.yml@templates
  parameters:
    stages:
      - stage: buildStage1
        jobs:
          - deployment:
              displayName: Deploy to demo environment
              environment: DTL-Demo-Env
              strategy:
                runOnce:
                  deploy:
                    steps:
                      - script: echo test
Resulting in the following rendered yaml:
environment: {
  name: DTL-Demo-Env
}
This causes the job to run on a hosted vm instead of my on-prem environment agent. Is this a bug?
Just a suggestion: you should add resourceType under environment.
jobs:
  - deployment:
      displayName: Deploy to demo environment
      environment:
        name: DTL-Demo-Env
        resourceType: VirtualMachine
      strategy:
        runOnce:
          deploy:
            steps:
              - script: echo test
If you don't, the newly created environment will always be registered under a hosted agent, even when you intend to use your private agent. Adding resourceType lets the job target the VirtualMachine resource running your private agent.
I have a .gitlab-ci.yml file which has a couple of sections for deploying to staging servers - in our small dev team of two, we each have a staging server to test on. The portion of the file looks like this:
...
.deploy: &deploy
  image: docker:stable
  stage: deploy
  script:
    - ./deploy.sh

deploy_to_staging_sf:
  <<: *deploy
  only:
    - staging_sf
  tags:
    - staging_sf

deploy_to_staging_ay:
  <<: *deploy
  only:
    - staging_ay
  tags:
    - staging_ay
...
The different sections exist to match the different tags on GitLab's CI runners.
If we were to add another developer (or another platform; I was testing deployment to a Raspberry Pi once), I would need to replicate another deploy_to_... section.
I'm just wondering if there's a Gitlab or YAML way to refactor this and make it generic enough so I can add another deployment platform without modifying the file.
Thanks
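One way to make this generic is GitLab's `parallel: matrix` (GitLab 13.3+), using the matrix variable both as the branch condition and as the runner tag. A sketch, with the caveats that variable expansion in `tags` needs GitLab 14.1+ and that I haven't verified matrix variables are visible to `rules` on every version:

```yaml
.deploy: &deploy
  image: docker:stable
  stage: deploy
  script:
    - ./deploy.sh

deploy_to_staging:
  <<: *deploy
  parallel:
    matrix:
      - TARGET: [staging_sf, staging_ay]  # add new staging branches/tags here
  tags:
    - $TARGET                             # route each matrix job to its runner
  rules:
    - if: $CI_COMMIT_BRANCH == $TARGET    # only run the job matching the branch
```

Adding a developer (or a Raspberry Pi) then becomes one entry in the TARGET list rather than a whole new job block.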
I have 3 main branches in my GitLab project: dev, staging, production. I'm using Postman's newman CLI for integration testing, like this in .gitlab-ci.yml:
postman_tests:
  stage: postman_tests
  image:
    name: postman/newman_alpine33
    entrypoint: [""]
  only:
    - merge_requests
  script:
    - newman --version
    - newman run https://api.getpostman.com/collections/zzz?apikey=zzz --environment https://api.getpostman.com/environments/xxx?apikey=xxxx
This job currently runs for any merge request in the approval process, whether from dev to staging or from staging to production. The problem is that I need the newman test to run only for merge requests from staging to production. How can I achieve this?
This can be achieved using the "advanced" only/except configuration in combination with GitLab's predefined CI/CD variables:
postman_tests:
  stage: postman_tests
  image:
    name: postman/newman_alpine33
    entrypoint: [""]
  only:
    refs:
      - merge_requests
    variables:
      # combined with && because separate list entries under only:variables are OR-ed
      - $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == "staging" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "production"
  script:
    - newman --version
    - newman run https://api.getpostman.com/collections/zzz?apikey=zzz --environment https://api.getpostman.com/environments/xxx?apikey=xxxx
For a full list of predefined environment variables, see the GitLab CI/CD documentation.
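On newer GitLab versions the same condition can also be written with `rules:`, which has since superseded `only`/`except`. A sketch assuming the same collection and environment URLs:

```yaml
postman_tests:
  stage: postman_tests
  image:
    name: postman/newman_alpine33
    entrypoint: [""]
  rules:
    # run only for merge request pipelines going from staging into production
    - if: $CI_PIPELINE_SOURCE == "merge_request_event" &&
          $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == "staging" &&
          $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "production"
  script:
    - newman --version
    - newman run https://api.getpostman.com/collections/zzz?apikey=zzz --environment https://api.getpostman.com/environments/xxx?apikey=xxxx
```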