What is the correct way of using secrets in the strategy-matrix pattern in GitHub Actions?

I am using GitHub Actions for one of my projects. I have a use case where I need to deploy to 2 different environments. As the number of domains may grow, I want to deploy to all of them at once parametrically.
Part of my job that fails:
jobs:
  build:
    strategy:
      matrix:
        domain: [['main', 'books-v1'], ['old-main', 'books-v2']]
The above part works perfectly, but if I need to add new variants from the secrets, the workflow doesn't work. See the snippet below:
jobs:
  build:
    strategy:
      matrix:
        domain: [['main', 'books-v1', ${{ secrets.URL_V1 }}], ['old-main', 'books-v2', ${{ secrets.URL_V2 }}]]
I checked the GitHub Actions docs and searched available examples on GitHub for existing solutions. So far, I haven't found a similar use case.
Is there a way to make it work like that? What are alternatives to my approach that will work?
GitHub Actions failure message:
You have an error in your yaml syntax on line XYZ

At the YAML level, single quotes around ${{ secrets... }} should fix the syntax error.
But, according to the Context availability documentation, the secrets context is not allowed under strategy. The allowed contexts are:
jobs.<job_id>.strategy: github, needs, vars, inputs
You can make use of the vars context for your use case.
Apart from that, linting your workflow with https://rhysd.github.io/actionlint/ is a much faster way to identify potential issues.
UPDATE (by Dmytro Chasovskyi)
Here is an example with the vars context:
With a variable DOMAINS having this config:
{
  "v1": {
    "url": "http://localhost:80/api/v1"
  },
  "v2": {
    "url": "http://localhost:80/api/v2"
  }
}
the workflow will be:
jobs:
  build:
    strategy:
      matrix:
        domain: [['main', 'books-v1', '${{ fromJSON(vars.DOMAINS).v1.url }}'], ['old-main', 'books-v2', '${{ fromJSON(vars.DOMAINS).v2.url }}']]
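If the values need to stay in secrets rather than vars, a common workaround (a minimal sketch, not part of the original answer) is to keep only a short key in the matrix and resolve the secret inside the job's steps, where the secrets context is allowed. The secret names URL_V1 and URL_V2 are taken from the question:

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        domain: [['main', 'books-v1', 'V1'], ['old-main', 'books-v2', 'V2']]
    steps:
      - name: Deploy
        env:
          # secrets is available at the step level; look the URL up by the matrix key
          DEPLOY_URL: ${{ secrets[format('URL_{0}', matrix.domain[2])] }}
        run: echo "Deploying ${{ matrix.domain[1] }} to $DEPLOY_URL"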

Related

Share (parameterized) cloudbuild.yaml between multiple GitHub projects?

I've managed to trigger a Google Cloud Build on every commit to GitHub successfully. However, we have many different source code repositories (projects) on GitHub that all use Maven and Spring Boot, and I would like all of these projects to use the same cloudbuild.yaml (or a shared template). This way we don't need to duplicate the cloudbuild.yaml in all projects (it'll be essentially the same in most projects).
For example, let's just take two different projects on GitHub, A and B. Their cloudbuild.yaml files could look something like this (but much more complex in our actual projects):
Project A:
steps:
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'test' ]
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'package', '-Dmaven.test.skip=true' ]
  - name: gcr.io/cloud-builders/docker
    args: [ "build", "-t", "europe-west1-docker.pkg.dev/projectname/repo/project-a", "--build-arg=JAR_FILE=target/project-a.jar", "." ]
images: [ "europe-west1-docker.pkg.dev/projectname/repo/project-a" ]
Project B:
steps:
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'test' ]
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'package', '-Dmaven.test.skip=true' ]
  - name: gcr.io/cloud-builders/docker
    args: [ "build", "-t", "europe-west1-docker.pkg.dev/projectname/repo/project-b", "--build-arg=JAR_FILE=target/project-b.jar", "." ]
images: [ "europe-west1-docker.pkg.dev/projectname/repo/project-b" ]
The only things that differ are the jar file and image name; the steps are the same. Now imagine having hundreds of such projects: it can become a maintenance nightmare if we need to change or add a build step in each project.
A better approach, in my mind, would be to have a template file that could be shared:
steps:
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'test' ]
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'package', '-Dmaven.test.skip=true' ]
  - name: gcr.io/cloud-builders/docker
    args: [ "build", "-t", "europe-west1-docker.pkg.dev/projectname/repo/${PROJECT_NAME}", "--build-arg=JAR_FILE=target/${PROJECT_NAME}.jar", "." ]
images: [ "europe-west1-docker.pkg.dev/projectname/repo/${PROJECT_NAME}" ]
It would then be great if such a template file could be uploaded to GCS and then reused in the cloudbuild.yaml file in each project:
Project A:
steps:
  import:
    gcs: gs://my-build-bucket/cloudbuild-template.yaml
    parameters:
      PROJECT_NAME: project-a
Project B:
steps:
  import:
    gcs: gs://my-build-bucket/cloudbuild-template.yaml
    parameters:
      PROJECT_NAME: project-b
Does such a thing exist for Google Cloud Build? How do you import/reuse build steps in different builds as I described above? What's the recommended way to achieve this?
I contacted Google Cloud support and they told me that this is not currently available. They are aware of the issue and it's something that they're working on (no eta on when it's going to be available).
Their recommendation, in the meantime, is to use Tekton.
You have two readily available solutions on Google Cloud Build:
encapsulate these build procedures into one or more reusable Docker images
make use of gcloud builds submit
Eventually, whichever you decide, you will end up with something like this in your project build configurations:
- name: gcr.io/my-project/maven-spring
  args: ['$REPO_NAME', 'project-a']
Some rationale
I don't believe these are the only two viable options you have to approach the problem of "reusable build code", but I think they play nicely with GCB and other related services you might already have provisioned.
With reusable Docker images, the benefit is that the images run in the same execution context as your build configuration, so you can imagine this is akin to the experience you already have with using existing gcr.io/cloud-builders or community images.
With gcloud builds submit, the benefit is that you can run an entirely separate build in its own execution context, and either block until it completes or run these builds asynchronously.
The approach of building a custom Docker image that performs these common tasks against the sources available in /workspace is a bit more straightforward.
If asynchronous builds seem like the right option, you can publish a Docker image based on gcr.io/cloud-builders/gcloud to a shared Artifact/Container Registry, with all the templates contained within. On execution, the sources will be available on the automatically mounted /workspace path, and you can add some nicer touches such as validating the arguments the image is run with.
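For illustration, here is a minimal sketch of the gcloud builds submit variant. It assumes the shared template is available in the build workspace as cloudbuild-template.yaml and accepts a _PROJECT_NAME substitution (both names are hypothetical, and the invoking service account needs permission to start builds):

steps:
  - name: gcr.io/cloud-builders/gcloud
    args:
      - builds
      - submit
      - --config=cloudbuild-template.yaml
      - --substitutions=_PROJECT_NAME=project-a
      - --async   # remove this flag to block until the child build finishes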

Where should caching occur in a GitHub Action?

What is the correct placement of caching in a GitHub Action? Specifically, is it correct to place it before or after setting up tools with another action?
For example, if I'm using something like haskell/actions/setup, should my use of actions/cache precede or follow it? Put another way: if setup installs updated components on a future run of my action, will the corresponding parts of the cache be invalidated?
The cache action should be placed before any step that consumes or creates that cache. This step is responsible for:
defining cache parameters.
restoring the cache, if it was cached in the past.
GitHub Actions will then run a "Post *" step after all the steps, which will store the cache for future calls.
See the example workflow from the documentation.
For example, consider this sample workflow:
name: Caching Test
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Enable Cache
        id: cache-action
        uses: actions/cache@v2
        with:
          path: cache-folder
          key: ${{ runner.os }}-cache-key
      - name: Use or generate the cache
        if: steps.cache-action.outputs.cache-hit != 'true'
        run: mkdir cache-folder && touch cache-folder/hello
      - name: Verify we have our cached file
        run: ls cache-folder
On the first run, the cache is not found, so the "Use or generate the cache" step creates the folder and the "Post Enable Cache" step saves it. On the second run, the cache is restored and the generation step is skipped.
GitHub will not invalidate the cache. Instead, it is the responsibility of the developer to ensure that the cache key is unique to the content it represents.
One common way to do this is to set the cache key so that it contains a hash of a file that lives in the repository, so that changes to this file yield a different cache key. A good example of such behavior is when you have lock files that list all your repository's dependencies (requirements.txt for Python, Gemfile.lock for Ruby, etc.).
This is achieved by a syntax similar to this:
key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }}
as described in the Creating a cache key section of the documentation.
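As a concrete illustration, here is a hedged sketch of such a cache step for a Haskell project like the one in the question; the cache path, key prefix, and lock-file pattern are assumptions to adapt to your toolchain:

steps:
  - uses: actions/checkout@v3
  # Restore dependencies before the setup/build steps that use them
  - uses: actions/cache@v3
    with:
      path: ~/.stack                     # assumed cache location (Stack root)
      key: ${{ runner.os }}-stack-${{ hashFiles('**/stack.yaml.lock') }}
      restore-keys: |
        ${{ runner.os }}-stack-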

PROJECT_ID env and Secret Manager Access

I would like to use Secret Manager to store a credential for our Artifactory within a Cloud Build step. I have it working using a build similar to:
steps:
  - name: 'busybox:glibc'
    entrypoint: 'sh'
    args: ['-c', 'env']
    secretEnv: ['SECRET_VALUE']
availableSecrets:
  secretManager:
    - versionName: "projects/PROJECT_ID/secrets/TEST-SECRET/versions/1"
      env: 'SECRET_VALUE'
All great, no problems - I then try and slightly improve it to:
steps:
  - name: 'busybox:glibc'
    entrypoint: 'sh'
    args: ['-c', 'env']
    secretEnv: ['SECRET_VALUE']
availableSecrets:
  secretManager:
    - versionName: "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
      env: 'SECRET_VALUE'
But then it throws the error:
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: failed to get secret name from secret version "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
I have been able to add a trigger-level env var (SECRET_MANAGER_PROJECT_ID), and that works fine. The only issue is that, because it is a trigger env var, it is not available on rebuild, which breaks a lot of things.
Does anyone know how to get the PROJECT_ID of a Secret Manager from within CloudBuild without using a Trigger Param?
For now, it's not possible to set a dynamic value in the secret field. I have already provided this feedback directly to the Google Cloud PM; it has been taken into account, but I don't have more info to share, especially about availability.
EDIT 1
(January 22). Thanks to Seza443's comment, I tested again and it now works with the automatically populated variables (PROJECT_ID and PROJECT_NUMBER), as well as with customer-defined substitution variables!
It appears that Cloud Build now allows for the use of substitution variables within the availableSecrets field of a build configuration.
From Google Cloud's documentation on using secrets:
After all the build steps, add an availableSecrets field to specify the secret version and environment variables to use for your secret. You can include substitution variables in the value of the secretVersion field. You can specify more than one secret in a build.
I was able to use the $PROJECT_ID variable in my own build configuration like so:
...
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/api-key/versions/latest
      env: API_KEY
Granted, there appears to be (at least at present) some discrepancy between the documentation quoted above and the recommended configuration file schema. In the documentation they refer to secretVersion, but that appears to have changed to versionName. In either case, it seems to work properly.
Use the $PROJECT_NUMBER instead.
https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values#using_default_substitutions
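For reference, a minimal sketch of that variant, reusing the TEST-SECRET name from the question (an assumption-based example following the default-substitutions documentation linked above, not a tested configuration):

availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_NUMBER/secrets/TEST-SECRET/versions/latest
      env: 'SECRET_VALUE'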

circleci v2 config - how do we filter by owner in a workflow?

In the circleci version 1 config, there was the option to specify owner as an option in a deployment. An example from the circleci docs ( https://circleci.com/docs/1.0/configuration/ ) with owner: circleci being the key line:
deployment:
  master:
    branch: master
    owner: circleci
    commands:
      - ./deploy_master.sh
In version 2 of the config, there is the ability to use filters and tags to specify which branches are built, but I have yet to find (in the docs, or on the interwebs) anything that gives me the same capability.
What I'm trying to achieve is run build and test steps on forks, but only run the deploy steps if the repository owner is the main repo. Quite often people fork using the same branch name - in this case master - so having a build fail due to an inability to deploy is counter-intuitive, especially as I would like to use a protected branch in git and only merge commits based on a successful build in a pull request.
I realise we could move to only running builds based on tags being present, but nothing is stopping somebody with a fork also creating a tag in their fork, which puts us back at square one.
Is anybody aware of how to specify the owner of a repo in the version 2 config?
An example from the version 2 config document ( https://circleci.com/docs/2.0/workflows/ ) in case it helps jog somebody's memory:
workflows:
  version: 2
  un-tagged-build:
    jobs:
      - build:
          filters:
            tags:
              ignore: /^v.*/
  tagged-build:
    jobs:
      - build:
          filters:
            branches:
              ignore: /.*/
            tags:
              only: /^v.*/
disclaimer: Developer Evangelist at CircleCI
That feature is not available on CircleCI 2.0. You can request it here.
As an alternative, you might be able to check for the branch name, say master, together with the CIRCLE_PR_NUMBER environment variable. If that variable has any value, the build is for a forked pull request and you shouldn't deploy.
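A minimal sketch of such a guard inside a deploy job (the job layout and deploy script are hypothetical; only CIRCLE_PR_NUMBER comes from CircleCI itself):

version: 2
jobs:
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run:
          name: Deploy only when not building a forked PR
          command: |
            # CIRCLE_PR_NUMBER is only set for builds of forked pull requests
            if [ -n "${CIRCLE_PR_NUMBER}" ]; then
              echo "Forked pull request detected, skipping deploy."
              exit 0
            fi
            ./deploy_master.sh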

Concat variable names in GitLab

We use a GitLab project in a team. Each developer has their own Kubernetes cluster in the cloud and their own branch within GitLab. We use GitLab CI to automatically build new containers and deploy them to our Kubernetes clusters.
At the moment we have a .gitlab-ci.yml that looks something like this:
variables:
  USERNAME: USERNAME
  CI_K8S_PROJECT: ${USERNAME_CI_K8S_PROJECT}
  REGISTRY_JSON_KEY_FILE: ${USERNAME_REGISTRY_JSON_KEY_FILE}
  [...]

stages:
  - build
  - deploy
  - remove

build-zeppelin:
  stage: build
  image: docker:latest
  variables:
    image_name: "zeppelin"
  only:
    - ${USERNAME}#Gitlab-Repo
  tags:
    - cloudrunner
  script:
    - docker login -u _json_key -p "${REGISTRY_JSON_KEY_FILE?}" https://eu.gcr.io
    - image_name_fqdn="eu.gcr.io/${CI_K8S_PROJECT?}/${image_name?}:latest"
    - docker build -t ${image_name_fqdn?} .
    - docker push ${image_name_fqdn?}
    - echo "Your new image is '${image_name_fqdn?}'. Have fun!"
[...]
So at the beginning we reference the important information using a USERNAME prefix. This works quite well but is problematic, since we need to correct the prefixes after every pull request from another user.
So we are searching for a way to keep the .gitlab-ci.yml file the same for every developer while still referencing some GitLab variables that differ per developer.
Things we thought about that don't seem to work:
Use multiple yml files and import them into each other => not supported.
Try to combine GitLab environment variables as a prefix:
CI_K8S_PROJECT: ${${GITLAB_USER_ID}_CI_K8S_PROJECT}
or
INDIVIDUAL_CI_K8S_PROJECT: ${GITLAB_USER_ID}_CI_K8S_PROJECT
CI_K8S_PROJECT: ${INDIVIDUAL_CI_K8S_PROJECT}
We found a solution using indirect expansion (bash feature):
before_script:
  - variableName=${GITLAB_USER_ID}_CI_K8S_PROJECT
  - export wantedValue=${!variableName}
But we also recognised that our setup was somewhat flawed: it does not make sense to have a branch per user and prefixed variables, since this leads to problems such as the above, as well as security concerns, because all variables are accessible to all users.
It is way easier if each user forks the root project and simply creates a merge request for new features. This way there is no renaming/prefixing of variables or branches necessary at all.
The solution from #nik works only in bash. For sh, this works:
before_script:
  - variableName=...
  - export wantedValue=$( eval echo \$$variableName )
Something like this works (on 15.0.5-ee):
variables:
  IMAGE_NAME: "test-$CI_PROJECT_NAME"
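For completeness, a small sketch of how such a concatenated variable could then be consumed in a job; the job name, stage, and build command are hypothetical:

variables:
  IMAGE_NAME: "test-$CI_PROJECT_NAME"

build-image:
  stage: build
  image: docker:latest
  script:
    # IMAGE_NAME has already been expanded to test-<project name> at this point
    - docker build -t "$IMAGE_NAME:latest" .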
