Say I have the following cloudbuild.yaml in my repository, and it's referenced by a manual trigger during setup (i.e. the trigger looks in the repo for cloudbuild.yaml):
# repo/main/cloudbuild.yaml
steps:
- id: 'Safe step'
  name: 'gcr.io/cloud-builders/git'
  entrypoint: '/bin/bash'
  args: ['-c', 'echo "hello world"']
Assume that the intention is that users can manually run the trigger as they please. However, the Cloud Build service account is also high-powered and has access to certain permissions that can be destructive; this is required for other manual triggers that need to be approved.
Can a user create a branch, edit the cloudbuild.yaml to do something destructive, push the branch up, and then, when they go to run the trigger, simply reference their branch instead, thereby bypassing the control of having to edit the trigger?
# repo/branch-xyz/cloudbuild.yaml
steps:
- id: 'Safe step'
  name: 'gcr.io/cloud-builders/git'
  entrypoint: '/bin/bash'
  args: ['do', 'something', 'destructive']
Posting as a community wiki as this is based on @bhito's comment:
It depends on how the trigger is configured.
From what you're describing, it looks like the trigger is matching all branches and the service account being used is Cloud Build's default one. With that setup, what you're describing is correct: the trigger will execute whatever the cloudbuild.yaml on the referenced branch defines.
You should be able to limit that behaviour by filtering the trigger by branch, by limiting the service account's permissions or using a custom one, or by combining both. With a combination of branch filtering and custom service accounts you get fine-grained access control.
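As a rough illustration of that combination (the project, repo, and service-account names below are placeholders, and the exact field layout should be checked against your gcloud version), a trigger definition imported with gcloud builds triggers import might look like this:

# trigger.yaml -- illustrative sketch only; org, repo, project and service-account names are placeholders
name: safe-manual-trigger
filename: cloudbuild.yaml
github:
  owner: my-org
  name: my-repo
  push:
    branch: ^main$   # only builds from main; a pushed branch-xyz no longer matches
serviceAccount: projects/my-project/serviceAccounts/low-priv-builder@my-project.iam.gserviceaccount.com

With the branch regex pinned and a low-privilege service account attached, pushing a modified cloudbuild.yaml to another branch no longer gives the user a way to run it with the high-powered default account.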
I don't know why GitHub secrets are really called secrets, because they can be printed out by any person in the organization with push access: they create a branch, use the trick below to print the secret, and then delete the branch. With a snap of the fingers, any employee can take out the secrets.
If there is an optimal solution, kindly guide me on how to permanently secure my GitHub Actions secrets.
steps:
  - name: 'Checkout'
    uses: actions/checkout@master
  - name: 'Print secrets'
    run: |
      echo ${{ secrets.SOME_SECRET }} | sed 's/./& /g'
First off, GitHub has an article on Security hardening for actions, which contains some general recommendations.
In general, you want to distinguish between public and internal/private repositories.
Most of my following answer is on internal/private repositories, but if you're concerned about public repositories: Keep in mind that actions are not run on PRs from third parties until they are approved.
For internal/private repositories, you're going to have to trust your colleagues with some secrets. Going through the hassle of guarding all secrets to the extent that they can't be "extracted" by a colleague is probably not worth the effort. And at that point, you probably also have to ask yourself what damage a malicious employee could do even without knowing these secrets (perhaps they have inside knowledge of your business, or they work in IT and could take your services offline, etc.). So you're going to have to trust them to some extent.
Some measures to limit the damage a malicious colleague could do:
Environment Secrets
You can create a secret for an environment and protect the environment.
For example, assume you want to prevent colleagues from taking your production secrets and deploying from their computers instead of going through GitHub Actions.
You could create a job like the following:
jobs:
  prod:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - run: ./deploy.sh --keys ${{ secrets.PROD_SECRET }}
Steps:
Create the environment production and set up protection rules
Configure the secret PROD_SECRET as an environment secret for production
If you really want to be sure nobody does something you don't like, you can set yourself as a reviewer and then manually approve every deployment
Codeowners
You could use CODEOWNERS to protect the files in .github/workflows (more about codeowners).
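For example, a minimal CODEOWNERS entry (the team name here is a placeholder, not something from the question) that routes workflow changes to a dedicated review team could look like this:

# .github/CODEOWNERS -- hypothetical team name
/.github/workflows/ @my-org/release-admins

Combined with a branch protection rule that requires code-owner review, changes to the workflow files then need explicit approval before they can be merged.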
OIDC and reusable workflows
If you're deploying to some cloud environment, you should probably use OpenID Connect. That, combined with reusable workflows, can give you an additional layer of security: You can specify which workflow is allowed to deploy to production.
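As a sketch (the repository path and workflow file name below are assumptions, not something from the question), the calling workflow only requests an OIDC token and delegates the actual deployment to a reusable workflow; your cloud provider's trust policy can then be restricted to tokens whose job_workflow_ref claim points at that reusable workflow:

# caller workflow -- illustrative only; the reusable workflow path is a placeholder
jobs:
  deploy:
    permissions:
      id-token: write   # lets the deployment job request an OIDC token
      contents: read
    uses: my-org/pipelines/.github/workflows/deploy-prod.yml@main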
@rethab's answer is great; I'll just add the answer I got from GitHub Support after I contacted them about a similar issue.
Thank you for writing to GitHub Support.
Please note that it is not expected behavior that GitHub will redact every possible obfuscation of the secrets from the logs.
https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#using-secrets
To help prevent accidental disclosure, GitHub uses a mechanism that attempts to redact any secrets that appear in run logs. This redaction looks for exact matches of any configured secrets, as well as common encodings of the values, such as Base64. However, because there are multiple ways a secret value can be transformed, this redaction is not guaranteed.
https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#exfiltrating-data-from-a-runner
To help prevent accidental secret disclosure, GitHub Actions automatically redact secrets printed to the log, but this is not a true security boundary because secrets can be intentionally sent to the log. For example, obfuscated secrets can be exfiltrated using echo ${SOME_SECRET:0:4}; echo ${SOME_SECRET:4:200};
Notice that in this case, what is being printed in the logs is NOT the secret that is stored in the repository but an obfuscation of it. Additionally, any user with Write access will be able to access secrets without needing to print them in the logs. They can run arbitrary commands in the workflows to send the secrets over HTTP or even store the secrets as workflow artifacts and download the artifacts.
You can read more about security hardening for GitHub Actions below:
https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions
I am stuck in the following situation: we have an Azure DevOps organisation with different projects. The goal is to store all pipelines and releases in the "Operations" project.
The benefit of this is to keep the individual teams away from dealing with creating pipelines or handling secrets (.netrc, container registry, etc.).
However, there seems to be no way for a build validation on a pull request in Project A to trigger a pipeline in the Operations project. Build validations of certain branches can only trigger pipelines inside the project itself.
So in short: a PR in Project A should trigger a pipeline in the "Operations" project.
Is there a workaround for this, or is this feature going to be implemented in the near future?
This thread looked promising, suggesting it might be implemented. But it has been quiet since.
I initially asked if you're using YAML pipelines because I have a similar setup designed for the same purpose, to keep YAML pipelines separate from the codebase.
I've heavily leveraged YAML templates to accomplish this. They're the key to keeping the bulk of the logic out of the source code repos. However, you do still need a very lightweight pipeline within the source code itself.
Here's how I'd recommend setting up your pipelines:
Create a "Pipelines" repo within your "Operations" project. Use this repo to store the stages of your pipelines.
Within the repos containing the source code of the projects you'll be deploying, create a YAML file that extends from a template within your "Operations/Pipelines" repo. It will look something like this:
name: CI-Example-Root-Pipeline-$(Date:yyyyMMdd-HHmmss)
resources:
  repositories:
    - repository: Templates
      type: git
      name: Operations/Pipelines

trigger:
  - master

extends:
  template: deploy-web-app-1.yml@Templates
  parameters:
    message: "Hello World"
Create a template within your "Operations/Pipelines" repo that holds all stages for your app deployment. Here's an example:
parameters:
  message: ""

stages:
  - stage: output_message_stage
    displayName: "Output Message Stage"
    jobs:
      - job: output_message_job
        displayName: "Output Message Job"
        pool:
          vmImage: "ubuntu-latest"
        steps:
          - powershell: Write-Host "${{ parameters.message }}"
With this configuration, you can control all of your pipelines within your Operations/Pipelines repo, outside of the individual projects that will be consuming them. You can restrict access to it so that only authorized team members are able to create or modify pipelines.
Optionally, you can add an environment check that requires certain environments to inherit from your templates, which will stop deployments that have been modified to not use the pipelines you've created.
Microsoft has published a good primer on using YAML templates for security that will elaborate on some of the other strategies you can use as well:
https://learn.microsoft.com/en-us/azure/devops/pipelines/security/templates?view=azure-devops
We have a serverless framework project with different microservices in a monorepo. The structure of the project looks like:
project-root/
|- services/
|  |- service1/
|  |  |- handler.py
|  |  |- serverless.yml
|  |- service2/
|  |  ...
|  |- serviceN/
|  |  ...
|- bitbucket-pipelines.yml
As you can see, each service has its own deployment file (serverless.yml).
We want to deploy only the services that have been modified in a push/merge and, as we are using bitbucket-pipelines, we prefer to use its features to get this goal.
After doing a little research, we've found the changesets property that can condition a step on the changed files inside a directory. So, for each service we could add to our bitbucket-pipelines.yml something like:
- step:
    name: step1
    script:
      - echo "Deploy service 1"
      - ./deploy service1
    condition:
      changesets:
        includePaths:
          # any file under the service1 directory
          - "service1/**"
This seems like a perfect match, but the problem with this approach is that we must write a step for every service we have in the repo; if we add more services, we will have to add more steps, which affects maintainability and readability.
Is there any way to make a for loop with a parametrized step where the input parameter is the name of the service?
On the other hand, we could make a custom script that handles the condition and the deployment by detecting the changes itself. Even though we prefer the Bitbucket-native approach, we are open to other options; what's the best way to do this in a script?
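For what it's worth, a rough sketch of the script-based variant could be a single step that loops over the service directories. This assumes every service lives under services/ as in the tree above, reuses the ./deploy script from the example, and diffs only against the previous commit, which may not cover multi-commit pushes:

# bitbucket-pipelines.yml -- illustrative sketch, not a drop-in solution
- step:
    name: deploy-changed-services
    script:
      - |
        for dir in services/*/; do
          svc=$(basename "$dir")
          if ! git diff --quiet HEAD~1 -- "$dir"; then
            echo "Deploying $svc"
            ./deploy "$svc"
          fi
        done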
In our Azure DevOps Server 2019 we want to trigger a build pipeline on completion of another build pipeline. The triggered build should use the same source branch the triggering build used.
According to the documentation this does not work with classic builds or the classic trigger definition, but in the YAML definition for the triggered build:
build.yaml:
# define triggering build as resource
resources:
  pipelines:
    - pipeline: ResourceName
      source: TriggeringBuildPipelineName
      trigger:
        branches:
          - '*'

# another ci build trigger
trigger:
  branches:
    include:
      - '*'
  paths:
    include:
      - SubFolder

pool:
  name: Default
When creating the pipeline like this, the trigger element under the pipeline resource gets underlined and the editor states that trigger is not expected inside a pipeline.
When saving the definition and trying to run it, it fails with this error:
/SubFolder/build.yaml (Line: 6, Col: 7): Unexpected value 'trigger'
(where "line 6" is the trigger line in the resources definition).
So my question is: How to correctly declare a trigger that starts a build pipeline on completion of another build pipeline, using the same source branch? Since the linked documentation actually explains this, the question rather is: what did I miss, why is trigger unexpected at this point?
Update: I just found this. So it seems one of the main features that they promised to have and documented as working, one of the main features we switched to DevOps for, is not even implemented yet. :(
Updates are made every few weeks to the cloud-hosted version, Azure DevOps Services. These updates are then rolled up and made available through quarterly updates to the on-premises Azure DevOps Server and TFS, so all features are released in Azure DevOps Services first.
A timeline of features released and those planned for future releases can be found here: Azure DevOps Feature Timeline.
You could choose to use the cloud version, Azure DevOps Services, directly instead, or monitor the Feature Timeline above for the latest updates coming to Azure DevOps Server. Sorry for any inconvenience.
I am currently in the process of creating an integration-tests pipeline for a few projects we have that require the presence of an Oracle database to work. In order to do so, I have gone through the process of creating a dockerized pre-built Oracle database using the instructions in this document:
https://github.com/oracle/docker-images/tree/master/OracleDatabase/SingleInstance/samples/prebuiltdb
I have successfully built the image and I'm able to verify that it indeed works correctly. I have pushed the image in question to one of our custom Docker repositories and I am also able to successfully fetch it from the context of the runner.
My main problem is that when the application attempts to connect to the database it fails with a connection refused error, as if the database is not running (mind you, I'm running the runner locally in order to test this). My questions are the following:
1. When using a custom image, what is the name under which the runner exposes it? For example, the documentation states that when I use mysql:latest then the exposed service name would be mysql. Is this the case for custom images as well? Should I name it with an alias?
2. Do I need to expose ports/bridge Docker networks in order to get this to work correctly? My reasoning behind the failure leads me to believe that the image that runs the application is not able to properly communicate with the Oracle service.
For reference my gitlab-ci.yml for the job in question is the following:
integration_test:
  stage: test
  before_script:
    - echo 127.0.0.1 inttests.myapp.com >> /etc/hosts
  services:
    - <repository>/devops/fts-ora-inttests-db:latest
  script:
    - ./gradlew -x:test integration-test:test
  cache:
    key: "$CI_COMMIT_REF_NAME"
    paths:
      - build
      - .gradle
  only:
    - master
    - develop
    - merge_requests
    - tags
  except:
    - api
Can anyone please lend a hand in getting this to work correctly?
I recently configured a gitlab-ci.yml to use an Oracle docker image as a service and used our custom docker repository to fetch the image.
I'll answer your questions one by one below:
1.a When using a custom image, what is the name under which the runner exposes it? For example, the documentation states that when I use mysql:latest then the exposed service name would be mysql. Is this the case for custom images as well?
By default, the GitLab runner will deduce the name of the service based on the following convention [1]:
The default aliases for the service’s hostname are created from its image name following these rules:
Everything after the colon (:) is stripped.
Slash (/) is replaced with double underscores (__) and the primary alias is created.
Slash (/) is replaced with a single dash (-) and the secondary alias is created (requires GitLab Runner v1.1.0 or higher).
So in your case, since the service name you used is <repository>/devops/fts-ora-inttests-db:latest, the GitLab runner will by default generate two (2) aliases:
<repository>__devops__fts-ora-inttests-db
<repository>-devops-fts-ora-inttests-db
In order to connect to your oracle database service, you need to refer to the alias in your configuration file or source code.
e.g.
database.url=jdbc:oracle:thin:@<repository>__devops__fts-ora-inttests-db:1521:xe
# or
database.url=jdbc:oracle:thin:@<repository>-devops-fts-ora-inttests-db:1521:xe
1.b Should I name it with an alias?
In my opinion, you should just declare an alias for the service in order to keep the hostname simple; the runner will automatically use that alias and you can refer to it in the same way.
e.g.
# .gitlab-ci.yml
services:
  - name: <repository>/devops/fts-ora-inttests-db:latest
    alias: oracle-db

# my-app.properties
database.url=jdbc:oracle:thin:@oracle-db:1521:xe
2. Do I need to expose ports/bridge Docker networks in order to get this to work correctly?
Yes, the Oracle DB Docker image used for the service must have ports 1521 and 5500 exposed (declared in the Dockerfile) in order for you to access it.
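For reference, the relevant part of such an image's Dockerfile is typically just the port declarations; the exact lines depend on the Oracle base image you built from:

# Dockerfile excerpt -- listener and EM Express ports
EXPOSE 1521 5500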
Sources:
[1] GitLab Documentation: Accessing the services