Share (parameterized) cloudbuild.yaml between multiple GitHub projects? - google-cloud-build

I've managed to trigger a Google Cloud Build on every commit to GitHub successfully. However, we have many different source code repositories (projects) on GitHub, that all use Maven and Spring Boot, and I would like all of these projects to use the same cloudbuild.yaml (or a shared template). This way we don't need to duplicate the cloudbuild.yaml in all projects (it'll be essentially the same in most projects).
For example, let's just take two different projects on GitHub, A and B. Their cloudbuild.yaml files could look something like this (but much more complex in our actual projects):
Project A:
steps:
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'test' ]
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'package', '-Dmaven.test.skip=true' ]
  - name: gcr.io/cloud-builders/docker
    args: [ "build", "-t", "europe-west1-docker.pkg.dev/projectname/repo/project-a", "--build-arg=JAR_FILE=target/project-a.jar", "." ]
images: [ "europe-west1-docker.pkg.dev/projectname/repo/project-a" ]
Project B:
steps:
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'test' ]
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'package', '-Dmaven.test.skip=true' ]
  - name: gcr.io/cloud-builders/docker
    args: [ "build", "-t", "europe-west1-docker.pkg.dev/projectname/repo/project-b", "--build-arg=JAR_FILE=target/project-b.jar", "." ]
images: [ "europe-west1-docker.pkg.dev/projectname/repo/project-b" ]
The only thing that is different is the jar file and image name, the steps are the same. Now imagine having hundreds of such projects, it can become a maintenance nightmare if we need to change or add a build step for each project.
A better approach, in my mind, would be to have a template file that could be shared:
steps:
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'test' ]
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'package', '-Dmaven.test.skip=true' ]
  - name: gcr.io/cloud-builders/docker
    args: [ "build", "-t", "europe-west1-docker.pkg.dev/projectname/repo/${PROJECT_NAME}", "--build-arg=JAR_FILE=target/${PROJECT_NAME}.jar", "." ]
images: [ "europe-west1-docker.pkg.dev/projectname/repo/${PROJECT_NAME}" ]
It would then be great if such a template file could be uploaded to GCS and then reused in the cloudbuild.yaml file in each project:
Project A:
steps:
  import:
    gcs: gs://my-build-bucket/cloudbuild-template.yaml
    parameters:
      PROJECT_NAME: project-a
Project B:
steps:
  import:
    gcs: gs://my-build-bucket/cloudbuild-template.yaml
    parameters:
      PROJECT_NAME: project-b
Does such a thing exist for Google Cloud Build? How do you import/reuse build steps across different builds as described above? What's the recommended way to achieve this?

I contacted Google Cloud support and they told me that this is not currently available. They are aware of the issue and it's something they're working on (no ETA on when it will be available).
Their recommendation, in the meantime, is to use Tekton.

You have two readily available solutions on Google Cloud Build:
- encapsulate these build procedures into one or more reusable Docker images
- make use of gcloud builds submit
Whichever you decide on, you will eventually end up with something like this in your project build configurations:
- name: gcr.io/my-project/maven-spring
  args: ['$REPO_NAME', 'project-a']
Some rationale
I don't believe these are the only two viable options you have to approach the problem of "reusable build code", but I think they play nicely with GCB and other related services you might already have provisioned.
With reusable Docker images, the benefit is that the images run in the same execution context as your build configuration, so you can imagine this is akin to the experience you already have with using existing gcr.io/cloud-builders or community images.
With gcloud builds submit, the benefit is that you can run an entirely separate build in its own execution context, and either block until it completes or run these asynchronously.
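As a hedged sketch of what the second option could look like: the per-project config fetches the shared template from the bucket suggested in the question and launches a separate build from it, passing the per-project value as a user-defined substitution (the _PROJECT_NAME name and template filename are assumptions, and the build service account needs permission to start builds):

steps:
  # fetch the shared template (hypothetical bucket/object from the question)
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', 'gs://my-build-bucket/cloudbuild-template.yaml', 'cloudbuild-template.yaml']
  # launch a separate build from the template; this blocks until the child
  # build finishes, or add '--async' to fire and forget
  - name: gcr.io/cloud-builders/gcloud
    args: ['builds', 'submit', '--config=cloudbuild-template.yaml', '--substitutions=_PROJECT_NAME=project-a', '.']

The template itself would then reference ${_PROJECT_NAME} instead of a hard-coded project name.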
The approach of building a custom Docker image that performs these common tasks against the sources available on /workspace is the more straightforward of the two.
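To make that concrete, here is a sketch of what each project's cloudbuild.yaml could shrink to, assuming a gcr.io/my-project/maven-spring builder image (hypothetical) whose entrypoint runs the shared Maven steps against the sources in /workspace:

steps:
  # shared builder image encapsulating 'mvn test' and 'mvn package'
  - name: gcr.io/my-project/maven-spring
    args: ['$REPO_NAME', 'project-a']
  # only the genuinely project-specific step stays inline
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'europe-west1-docker.pkg.dev/projectname/repo/project-a', '--build-arg=JAR_FILE=target/project-a.jar', '.']
images: ['europe-west1-docker.pkg.dev/projectname/repo/project-a']

A change to the common Maven steps then means rebuilding the maven-spring image once, instead of editing hundreds of cloudbuild.yaml files.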
If asynchronous builds seem like the right option, you can publish a Docker image based on gcr.io/cloud-builders/gcloud to a shared Artifact/Container Registry, with all the templates contained within. On execution, sources are available on the automatically mounted /workspace path, and you can add some nice touches like validating the Docker run arguments.

Related

What is the correct way of using secrets in strategy-matrix pattern in GitHub Actions?

I am using GitHub Actions for one of my projects. I have a use case where I need to deploy to 2 different environments. As the number of domains may grow, I want to deploy to all of them at once parametrically.
Part of my job that fails:
jobs:
  build:
    strategy:
      matrix:
        domain: [['main', 'books-v1'], ['old-main', 'books-v2']]
The above part works perfectly, but when I try to add new variants from the secrets, the workflow doesn't work. See the snippet below:
jobs:
  build:
    strategy:
      matrix:
        domain: [['main', 'books-v1', ${{ secrets.URL_V1 }}], ['old-main', 'books-v2', ${{ secrets.URL_V2 }}]]
I checked the GitHub Actions docs and searched available examples on GitHub for existing solutions, but so far I haven't found a similar use case.
Is there a way to make it work like that? What are alternatives to my approach that will work?
GitHub Actions failure message:
You have an error in your yaml syntax on line XYZ
At the YAML level, single quotes around ${{ secrets... }} should fix the syntax error.
But, according to the Context availability, the secrets context is not allowed under strategy. The allowed contexts are:
jobs.<job_id>.strategy: github, needs, vars, inputs
You can make use of the vars context for your use case.
Apart from that, linting your workflow with https://rhysd.github.io/actionlint/ would make it much faster to identify potential issues.
UPDATE (by Dmytro Chasovskyi)
Here is an example with the vars context:
With a variable DOMAINS having this config:
{
  "v1": {
    "url": "http://localhost:80/api/v1"
  },
  "v2": {
    "url": "http://localhost:80/api/v2"
  }
}
the workflow will be:
jobs:
  build:
    strategy:
      matrix:
        domain: [['main', 'books-v1', '${{ fromJSON(vars.DOMAINS).v1.url }}'], ['old-main', 'books-v2', '${{ fromJSON(vars.DOMAINS).v2.url }}']]
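A step can then read the matrix entries by index, along these lines (a sketch; deploy.sh is a hypothetical script, and the indices follow the array layout above):

steps:
  - name: Deploy
    run: ./deploy.sh --branch '${{ matrix.domain[0] }}' --service '${{ matrix.domain[1] }}' --url '${{ matrix.domain[2] }}'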

How to reference another yml file from the main github action yaml file?

I'm defining a GitHub Actions script that references another YAML file, hoping to organise the configuration in a tidier way.
Here is my job file, named deploy.yml, in the path ./.github/workflows/, where the first . is the root folder of my project.
....
jobs:
  UnitTest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: ./.github/workflows/unittest.yml
In the same ./.github/workflows/ folder, I created another file called unittest.yml as below:
name: "UnitTest"
description: "Perform Unit Test"
runs:
# using: "composite"
- name: Dependency
run: |
echo "Dependency setup commands go here"
- name: UnitTest
run: make test.unit
However, when I tried to test the script locally using act with the command act --secret-file .secrets --container-architecture linux/amd64, I received the following error:
[Deploy/UnitTest] ✅ Success - Main actions/checkout@v3
[Deploy/UnitTest] ⭐ Run Main ./.github/workflows/unittest.yml
[Deploy/UnitTest] ❌ Failure - Main ./.github/workflows/unittest.yml
[Deploy/UnitTest] file does not exist
[Deploy/UnitTest] 🏁 Job failed
I have tried putting just the file name unittest.yml, ./unittest.yml, or myrepo_name/.github/workflows/unittest.yml, and putting the file into a subfolder as step 2 of this document illustrates, but no luck.
Based on examples of runs for composite actions, I would imagine this should work.
Would anyone please advise?
P.S. You might have noticed the commented-out line using: "composite" in unittest.yml. If I uncomment that line, I receive the error:
Error: yaml: line 3: did not find expected key
Composite actions are not referenced by a YAML file, but by a folder. In that folder, you are expected to have an action.yml describing the action.
This is why you're getting the error with using: composite: you're defining a workflow (because it's in ./.github/workflows), but you are using action syntax.
I would advise this folder structure:
.github/
|-- workflows/
|   |-- deploy.yml
unittest-action/
|-- action.yml
With this structure, you should be able to reference the action with
- uses: actions/checkout@v3
- uses: ./unittest-action
Please see the docs for more information.
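For reference, a minimal unittest-action/action.yml could look like the sketch below, reusing the steps from the question; note that composite actions require an explicit shell on each run step:

name: "UnitTest"
description: "Perform Unit Test"
runs:
  using: "composite"
  steps:
    - name: Dependency
      shell: bash
      run: |
        echo "Dependency setup commands go here"
    - name: UnitTest
      shell: bash
      run: make test.unit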
Depending on your use-case and setup, you might also want to consider reusable workflows.
You can define a reusable workflow in your .github/workflows directory like so:
# unittest.yml
on: workflow_call
jobs:
  deploy:
    # ...
and then call it like so:
jobs:
  UnitTest:
    uses: ./.github/workflows/unittest.yml
Note how the reusable workflow is an entire job. This means you can't do the checkout from the outside and then just run the unit test in the reusable job; the reusable workflow (unittest.yml) needs to do the checkout first.
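Concretely, the reusable workflow would then look something like this sketch (the job name and runner are illustrative):

# .github/workflows/unittest.yml
on: workflow_call
jobs:
  unittest:
    runs-on: ubuntu-latest
    steps:
      # the reusable workflow has to check out the sources itself
      - uses: actions/checkout@v3
      - name: UnitTest
        run: make test.unit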
Which one to pick?
Here's a blog post summarising some of the differences between composite actions and reusable workflows, like:
- reusable workflows can contain several jobs, composite actions only contain steps
- reusable workflows have better support for using secrets
- composite actions can be nested, but as of Jul '22, reusable workflows can't call other reusable workflows

How can I set gradle/test to work on the same docker environment where other CircleCi jobs are running

I have a CircleCI workflow that has 2 jobs. The second job (gradle/test) depends on the first one creating some files for it.
The problem is that the first job runs inside a Docker container, and the second job (gradle/test) does not. Hence, gradle/test fails since it cannot find the files the first job created. How can I set up gradle/test to work in the same space?
Here is a code of the workflow:
version: 2.1
orbs:
  gradle: circleci/gradle@2.2.0
executors:
  daml-executor:
    docker:
      - image: cimg/openjdk:11.0-node
...
workflows:
  checkout-build-test:
    jobs:
      - daml_test:
          daml_sdk_version: "2.2.0"
          context: refapps
      - gradle/test:
          app_src_directory: prototype
          executor: daml-executor
          requires:
            - daml_test
Can anyone help me configure gradle/test correctly?
CircleCI has a mechanism to share artifacts between jobs called "workspace" (well, they have multiple ones, but workspace is what you want here).
Concretely, you would add this at the end of your daml_test job definition, as an additional step:
- persist_to_workspace:
    root: /path/to/folder
    paths:
      - "*"
and that would add all the files from /path/to/folder to the workspace. On the other side, you can "mount" the workspace in your gradle/test job by adding something like this before the step where you need the files:
- attach_workspace:
    at: /whatever/mountpoint
I like to use /tmp/workspace for the path on both sides, but that's just personal preference.
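Putting it together with the workflow from the question, a hedged sketch (orb jobs accept pre-steps, which is one way to attach the workspace before gradle/test runs; /tmp/workspace is just the mount point I prefer):

workflows:
  checkout-build-test:
    jobs:
      - daml_test:
          daml_sdk_version: "2.2.0"
          context: refapps
      - gradle/test:
          app_src_directory: prototype
          executor: daml-executor
          requires:
            - daml_test
          # attach the workspace persisted by daml_test before the orb's steps run
          pre-steps:
            - attach_workspace:
                at: /tmp/workspace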

Bamboo specs YAML issue

I'm trying to build a Bamboo Specs YAML, but I'm running into some weird errors whose messages are not helping much. I followed the documentation here, but it's still not working.
I have Bamboo 7.2.4 and I'm trying to create a stage:
version: 2
stages:
  - run tests:
      jobs:
        - Test
Test:
  tasks:
    - script: whatever
When running this I get
Bamboo YAML import failed: Document structure is incorrect: Tests: Property is required.
No clue what that means nor why it's happening.
Bamboo YAML specs are very difficult to troubleshoot; this is one disadvantage of YAML specs compared to Java specs. It looks like you are missing some key and essential properties in your example code. Can you reformat it as below and see?
First, manually create a project (or use an existing project), but make sure to update the project key by replacing <MYKEY> below:
---
version: 2
plan:
  project-key: <MYKEY>
  key: MYPLN
  name: My Plan
stages:
  - run tests:
      jobs:
        - Test
Test:
  key: JB1
  tasks:
    - script:
        - echo 'My Plan'

How to configure caching for custom base image for Bitbucket Pipelines

I have a Bitbucket Pipeline that uses a custom Docker image as a base, pulled from ECR. I'm also using this image to build dockerized Go apps in the first step with make commands. I want to cache the Go modules that are downloaded during the make build process. But the examples I've read all use Go base images to make caching work. How can I activate caching while using a base image other than the Go image itself? The relevant parts of my pipeline are below; the Go cache doesn't seem to work.
image:
  name: <ECR Image>
  aws:
    access-key: $AWS_ACCESS_KEY_ID
    secret-key: $AWS_SECRET_ACCESS_KEY
definitions:
  caches:
    go: $GOPATH/pkg
pipelines:
  tags:
    '*-beta*':
      - step:
          name: "Image Build & Push"
          services:
            - docker
          caches:
            - go
          script:
            - export ENVIRONMENT=beta
            - echo "Environment is ${ENVIRONMENT}"
            - export DOCKER_IMAGE_BUILDER="${BITBUCKET_REPO_SLUG}:builder"
            - make clean
            - make build BUILD_VER=${BITBUCKET_TAG}.${BITBUCKET_BUILD_NUMBER} APP_NAME=${BITBUCKET_REPO_SLUG} DOCKER_IMAGE_BUILDER=${DOCKER_IMAGE_BUILDER}
            - make test
