I am stuck in the following situation: we have an Azure DevOps organisation with several projects. The goal is to store all pipelines and releases in the "Operations" project.
The benefit of this is to keep the individual teams away from creating pipelines or handling secrets (.netrc, container registry credentials, etc.).
However, there seems to be no way to set up a build validation on a pull request in Project A that triggers a pipeline in the Operations project. Build validations for certain branches can only trigger pipelines inside the project itself.
So in short: a PR in Project A should trigger a pipeline in the "Operations" project.
Is there a workaround for this, or is this feature going to be implemented in the near future?
This thread looked promising, suggesting it might be implemented, but it has been quiet since.
I initially asked if you're using YAML pipelines because I have a similar setup designed for the same purpose, to keep YAML pipelines separate from the codebase.
I've heavily leveraged YAML templates to accomplish this. They're the key to keeping the bulk of the logic out of the source code repos. However, you do still need a very light-weight pipeline within the source code itself.
Here's how I'd recommend setting up your pipelines:
Create a "Pipelines" repo within your "Operations" project. Use this repo to store the stages of your pipelines.
Within the repos containing the source code of the projects you'll be deploying, create a YAML file that extends from a template within your "Operations/Pipelines" repo. It will look something like this:
name: CI-Example-Root-Pipeline-$(Date:yyyyMMdd-HHmmss)

resources:
  repositories:
    - repository: Templates
      type: git
      name: Operations/Pipelines

trigger:
  - master

extends:
  template: deploy-web-app-1.yml@Templates
  parameters:
    message: "Hello World"
Create a template within your "Operations/Pipelines" repo that holds all stages for your app deployment. Here's an example:
parameters:
  message: ""

stages:
  - stage: output_message_stage
    displayName: "Output Message Stage"
    jobs:
      - job: output_message_job
        displayName: "Output Message Job"
        pool:
          vmImage: "ubuntu-latest"
        steps:
          - powershell: Write-Host "${{ parameters.message }}"
With this configuration, you can control all of your pipelines within your Operations/Pipelines repo, outside of the individual projects that will be consuming them. You can restrict access to it so that only authorized team members can create or modify pipelines.
Optionally, you can add an environment check that requires pipelines deploying to certain environments to extend your templates, which will stop deployments from pipelines that have been modified to bypass the templates you've created.
Microsoft has published a good primer on using YAML templates for security that will elaborate on some of the other strategies you can use as well:
https://learn.microsoft.com/en-us/azure/devops/pipelines/security/templates?view=azure-devops
I have a standalone Cloud Source Repository (not cloned from GitHub).
I am using this to automate the deployment of ETL pipelines, so I am following Google's recommended guidelines, i.e. committing the ETL pipeline as a .py file.
The Cloud Build trigger associated with the Cloud Source Repository runs the steps defined in the cloudbuild.yaml file and puts the resulting .py file into the Composer DAG bucket.
Composer will pick up this DAG and run it.
Now my question is: how do I orchestrate the CI/CD across dev and prod? I did not find any proper documentation for this, so as of now I am following a manual approach: if my code passes in dev, I commit the same to the prod repo. Is there a better way to do this?
Cloud Build triggers allow you to conditionally execute a cloudbuild.yaml file in various ways. Have you tried setting up a trigger that fires only on changes to a dev branch?
Further, you can add substitutions to your trigger and use them in the cloudbuild.yaml file to, for example, name the generated artifacts based on some aspect of the input event.
See: https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values and https://cloud.google.com/build/docs/configuring-builds/use-bash-and-bindings-in-substitutions
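For illustration, here is a minimal sketch of what that could look like; the bucket names, the DAG file path, and the _DAG_BUCKET substitution are assumptions, not taken from the question:
# cloudbuild.yaml (sketch): copy the committed DAG into an environment-specific
# Composer bucket. _DAG_BUCKET is a user-defined substitution set per trigger
# (the dev trigger passes the dev bucket, the prod trigger passes the prod bucket).
steps:
  - id: 'copy-dag'
    name: 'gcr.io/cloud-builders/gsutil'
    args: ['cp', 'dags/etl_pipeline.py', 'gs://${_DAG_BUCKET}/dags/']
substitutions:
  _DAG_BUCKET: 'example-dev-composer-bucket'  # default value; overridden on the prod trigger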
Say I have the following cloudbuild.yaml in my repository, and it's referenced by a manual trigger during setup (i.e. the trigger looks in the repo for cloudbuild.yaml):
# repo/main/cloudbuild.yaml
steps:
  - id: 'Safe step'
    name: 'gcr.io/cloud-builders/git'
    entrypoint: '/bin/bash'
    args: ['-c', 'echo hello world']
Assume that the intention is that users can manually run the trigger as they please. However, the Cloud Build service account is also high-powered and has access to certain permissions that can be destructive; this is required for other manual triggers that need to be approved.
Can a user create a branch, edit the cloudbuild.yaml to do something destructive, push the branch up, and then, when they run the trigger, simply reference their branch instead, thereby bypassing the control without ever needing permission to edit the trigger itself?
# repo/branch-xyz/cloudbuild.yaml
steps:
  - id: 'Safe step'
    name: 'gcr.io/cloud-builders/git'
    entrypoint: '/bin/bash'
    args: ['-c', 'do-something-destructive']
Posting as a community wiki, as this is based on @bhito's comment:
It depends on how the trigger is configured.
From what you're describing, it looks like the trigger is matching all branches and the service account being used is Cloud Build's default one. With that setup, what you're describing is correct: the trigger will execute whatever the cloudbuild.yaml defines.
You should be able to limit that behaviour by filtering the trigger by branch, by limiting the service account's permissions or using a custom one, or by doing both. With a combination of branch filtering and custom service accounts you get fine-grained access control.
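As a rough illustration of those two controls together, an exported trigger definition might look something like the sketch below; the field names follow the Cloud Build trigger resource, but the trigger, repo, project, and service account names are placeholders:
# Sketch of a Cloud Build trigger (as exported with `gcloud builds triggers export`),
# restricted to a single branch and running as a low-privilege custom service account.
name: deploy-from-main-only
filename: cloudbuild.yaml
triggerTemplate:
  repoName: my-repo        # placeholder Cloud Source Repository
  branchName: ^main$       # only builds from main can fire this trigger
serviceAccount: projects/my-project/serviceAccounts/low-priv-builder@my-project.iam.gserviceaccount.com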
We have a serverless framework project with different microservices in a monorepo. The structure of the project looks like:
project-root/
|- services/
|  |- service1/
|  |  |- handler.py
|  |  |- serverless.yml
|  |- service2/
|  |  ...
|  |- serviceN/
|  |  ...
|- bitbucket-pipelines.yml
As you can see, each service has its own deployment file (serverless.yml).
We want to deploy only the services that have been modified in a push/merge and, as we are using bitbucket-pipelines, we prefer to use its features to get this goal.
After doing a little research, we've found the changesets property, which can condition a step on the files changed inside a directory. So, for each service we could add to our bitbucket-pipelines.yml something like:
- step:
    name: step1
    script:
      - echo "Deploy service 1"
      - ./deploy service1
    condition:
      changesets:
        includePaths:
          # any changes under the service1 directory
          - "services/service1/**"
This seems like a perfect match, but the problem with this approach is that we must write a step for every service in the repo; if we add more services we will have to add more steps, which hurts maintainability and readability.
Is there any way to make a for loop with a parametrized step where the input parameter is the name of the service?
On the other hand, we could write a custom script that detects the changes and handles the deployment itself. Even though we prefer the Bitbucket-native approach, we are open to other options; what's the best way to do this in a script?
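For what it's worth, a rough sketch of that script-based approach, kept inside bitbucket-pipelines.yml, could look like the following; the paths and the ./deploy command are assumptions based on the structure above:
# Sketch: one generic step that detects which services changed and deploys only those.
# It diffs against the previous commit for simplicity; a real setup would likely diff
# against the target/main branch instead.
pipelines:
  default:
    - step:
        name: Deploy changed services
        script:
          - CHANGED=$(git diff --name-only HEAD~1 HEAD | grep '^services/' | cut -d/ -f2 | sort -u || true)
          - for svc in $CHANGED; do ./deploy "$svc"; done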
In our Azure DevOps Server 2019 we want to trigger a build pipeline on completion of another build pipeline. The triggered build should use the same source branch the triggering build used.
According to the documentation this does not work with classic builds or the classic trigger definition, but only in the YAML definition of the triggered build:
build.yaml:
# define triggering build as resource
resources:
  pipelines:
    - pipeline: ResourceName
      source: TriggeringBuildPipelineName
      trigger:
        branches:
          - '*'

# another ci build trigger
trigger:
  branches:
    include:
      - '*'
  paths:
    include:
      - SubFolder

pool:
  name: Default
When creating the pipeline like this, the trigger element under the pipeline resource gets underlined and the editor states that trigger is not expected inside a pipeline.
When saving the definition and trying to run it, it fails with this error:
/SubFolder/build.yaml (Line: 6, Col: 7): Unexpected value 'trigger'
(where "line 6" is the trigger line in the resources definition).
So my question is: How to correctly declare a trigger that starts a build pipeline on completion of another build pipeline, using the same source branch? Since the linked documentation actually explains this, the question rather is: what did I miss, why is trigger unexpected at this point?
Update: I just found this. So it seems one of the main features that they promised and documented as working, one of the main features we switched to DevOps for, is not even implemented yet. :(
Updates are made every few weeks to the cloud-hosted version, Azure DevOps Services. These updates are then rolled up and made available through quarterly updates to the on-premises Azure DevOps Server and TFS. So all features are released in Azure DevOps Services first.
A timeline of features released and those planned in upcoming releases can be found here: Azure DevOps Feature Timeline.
You could choose to use the cloud version, Azure DevOps Services, directly instead, or monitor the latest updates for Azure DevOps Server on the Feature Timeline above. Sorry for any inconvenience.
VSTS allows you to select which branches automatically trigger a CI build by specifying a branch pattern.
However, my unit tests use a real database, which causes a problem when more than one build is triggered, e.g. for master and feature-123, as they will clash on the database tests.
Is there a way of specifying that only one such build should run at a time? I don't want to move away from executing tests against a real database, as there are significant differences between an in-memory database and SQL Azure.
VSTS already serializes builds that are triggered by the same CI build definition.
Even though the CI build can be triggered by multiple branches, only one build runs at a time by default (unless you use additional parallel pipelines to run builds concurrently).
For example, if both the master branch and the feature-123 branch are pushed to the remote repo at the same time, the CI build definition will trigger two builds serially (not concurrently).
If you are using parallel pipelines and need to run the triggered builds serially, you should make sure only one agent is used for the CI builds. You can do it this way:
In your CI build definition -> Options tab -> add demands to specify which agent you want to use for the CI build.
Assume the default agent pool contains three agents named default1, default2, and default3.
If you need to specify the default2 agent to run the CI build, you can add a demand such as agent.name equals default2.
Now, even if multiple branches are pushed at the same time, the builds will be triggered one by one, since only one agent is available for the CI build.
If you use a YAML pipeline, you can use a deployment job instead of a regular job.
With a deployment job you select a named environment you want to deploy to.
You can configure the environment in Azure DevOps under Pipelines -> Environments, and you can choose to add an Exclusive Lock.
Then only one run can use the environment at a time, which serializes your runs.
Unfortunately, if you have multiple runs waiting for the environment (because one run currently has it locked), when the environment becomes unlocked only the latest run will continue. All the others waiting for the lock will be cancelled.
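A minimal sketch of such a deployment job follows; the environment name and the test step are placeholders, and the Exclusive Lock itself is added on the environment in the UI, not in the YAML:
# Sketch: a deployment job targeting an environment with an Exclusive Lock check,
# so only one run can use the shared database at a time.
jobs:
  - deployment: integration_tests
    displayName: 'Integration tests against the shared database'
    environment: 'shared-test-db'   # Exclusive Lock configured under Pipelines -> Environments
    pool:
      vmImage: 'ubuntu-latest'
    strategy:
      runOnce:
        deploy:
          steps:
            - script: echo "run the database tests here"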
If you want to do it via a .yml or .yaml file, you can do the following:
- phase: Build
  queue:
    name: <Agent pool name>
    demands:
      - agent.name -equals <agent name from agent pool>
  steps:
    - task: <taskname>
      displayName: 'some display name'
      inputs:
        value: '<input variable based on type of task>'
        variableName: '<input variable name>'