GitHub Actions: decode base64 string on Windows

I have a publishing job in GitHub Actions.
It uses a certificate which is stored in base64 format in the repo secrets.
I need to decode this certificate and store it on disk on a windows-latest machine.
My workflow looks like this:
name: publish
on: [ push ]
jobs:
  build:
    runs-on: windows-latest
    steps:
      - name: checkout
        uses: actions/checkout@v2
      - run: echo "${{ secrets.WINDOWS_CERT }}" | base64 --decode > $HOME/certificate.pfx
      - run: cat $HOME/certificate.pfx
When I run it I get this error:
Run echo "***" | base64 --decode > $HOME/certificate.pfx
echo "***" | base64 --decode > $HOME/certificate.pfx
shell: C:\Program Files\PowerShell\7\pwsh.EXE -command ". '{0}'"
/usr/bin/base64: invalid input
##[error]Process completed with exit code 1.
How do I properly decode base64 encoded secrets on Windows machines?
Thanks

I ran into two problems with PowerShell (the windows-latest default shell) on GitHub Actions:
Maybe due to newlines, but indeed that invalid input error appeared. I needed to pass -i (or --ignore-garbage) to base64 -d to silence it.
But then the output still didn't match the original binary (as verified by dir, wc and md5sum), because in PowerShell the > operator defaults to writing UTF-16, which will mess up your binary. See
https://www.johndcook.com/blog/2008/08/25/powershell-output-redirection-unicode-or-ascii/ for a possible workaround using Out-File directly.
But in the end, I just changed to shell: bash for the base64 decode operation. Everything works properly there.
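For completeness, a minimal sketch of both approaches (untested): the bash step mirrors the question's command, and the pwsh step decodes natively, sidestepping both the newline issue and the UTF-16 redirection problem (it assumes the secret is a plain base64 string; [Convert]::FromBase64String ignores embedded whitespace):
      - name: Decode certificate (bash)
        shell: bash
        run: echo "${{ secrets.WINDOWS_CERT }}" | base64 --decode > "$HOME/certificate.pfx"
      - name: Decode certificate (PowerShell)
        shell: pwsh
        run: |
          # decode the base64 secret to raw bytes and write them directly to disk,
          # avoiding text redirection entirely
          $bytes = [Convert]::FromBase64String("${{ secrets.WINDOWS_CERT }}")
          [IO.File]::WriteAllBytes("$HOME/certificate.pfx", $bytes)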

Related

Passing secrets as output between jobs in a Github workflow

I am trying to pass a JWT token between jobs but something prevents it from being passed correctly. According to the docs, if I want to pass variables between jobs I need to use outputs, as explained here. What I am doing is the following:
name: CI
on:
  pull_request:
    branches:
      - main
jobs:
  get-service-url:
    ...does something not interesting to us...
  get-auth-token:
    runs-on: ubuntu-latest
    outputs:
      API_TOKEN: ${{ steps.getauthtoken.outputs.API_TOKEN }}
    steps:
      - name: Get Token
        id: getauthtoken
        run: |
          API_TOKEN=<there is a full JWT token here>
          echo -n "API_TOKEN=$API_TOKEN" >> $GITHUB_OUTPUT
  use-token:
    runs-on: ubuntu-latest
    needs: [get-service-url,get-auth-token]
    name: Run Tests
    steps:
      - uses: actions/checkout@v3
      - name: Run tests
        run: |
          newman run ${{ github.workspace }}/tests/collections/my_collection.json --env-var "service_url=${{needs.get-service-url.outputs.service_URL}}" --env-var "auth_token=${{needs.get-auth-token.outputs.API_TOKEN}}"
So, during a run, in my output I see:
Run newman run /home/runner/work/my-repo/my-repo/tests/collections/my_collection.json --env-var "service_url=https://test.net" --env-var "auth_token="
At first I thought there was something wrong in passing the token itself between jobs. Hence I tried to put in a dummy token and export it in the output. In my get-auth-token job, the call to output it became:
echo -n "API_TOKEN=test" >> $GITHUB_OUTPUT
and in the log I saw it there:
--env-var "auth_token=test"
so the way I am passing it between jobs is fine. Moreover, the token is there and is correct, because I hard-coded one to simplify my tests. Indeed, if in my get-auth-token job I echo $API_TOKEN, I see *** in the logs, which tells me GitHub is correctly obfuscating it.
I then tried not passing it between jobs. I created the same token, hardcoded, right before the newman run command, referenced it in newman run directly and tada! The log now is:
Run newman run /home/runner/work/my-repo/my-repo/tests/collections/my_collection.json --env-var "service_url=https://test.net" --env-var "auth_token=***"
So the token is there! But I need it to come from another job. Something prevents the token from being passed between jobs and I don't know how to get around that.
I found a trick to make this happen. It consists of temporarily "obfuscating" the secret from GitHub's eyes: GitHub skips a job output when it detects a secret in its value, so the output only survives if the secret is disguised first, e.g. by base64-encoding it.
In the job where I retrieve the secret, I encode it and export it to GITHUB_OUTPUT:
API_TOKEN_BASE64=`echo -n <my_secret> | base64 -w 0`
echo -n "API_TOKEN=$API_TOKEN_BASE64" >> $GITHUB_OUTPUT
In the job where I need the secret, I decode it (and use it where needed):
API_TOKEN=`echo -n ${{needs.get-auth-token.outputs.API_TOKEN}} | base64 --decode`
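Pieced together, a minimal sketch of the two jobs (untested; job and step names follow the question, with the token source elided as before):
get-auth-token:
  runs-on: ubuntu-latest
  outputs:
    API_TOKEN: ${{ steps.getauthtoken.outputs.API_TOKEN }}
  steps:
    - name: Get Token
      id: getauthtoken
      run: |
        API_TOKEN=<there is a full JWT token here>
        # encode so GitHub's secret masking does not drop the job output
        API_TOKEN_BASE64=$(echo -n "$API_TOKEN" | base64 -w 0)
        echo -n "API_TOKEN=$API_TOKEN_BASE64" >> $GITHUB_OUTPUT
use-token:
  runs-on: ubuntu-latest
  needs: [get-auth-token]
  steps:
    - name: Run tests
      run: |
        # decode back to the real token before use
        API_TOKEN=$(echo -n "${{ needs.get-auth-token.outputs.API_TOKEN }}" | base64 --decode)
        newman run tests/collections/my_collection.json --env-var "auth_token=$API_TOKEN"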

Pre-commit: /dev/tty blocks all output text defined in hook (e.g. through echo) before entering user input

I'm trying to create my own hook (defined in terraform_plan.sh; see terraform_plan.sh and .pre-commit-config.yaml below) that requires user input to determine whether the hook succeeds or fails (this hook is about some checking by the user before committing). To enable user input, I added exec < /dev/tty, following How do I prompt the user from within a commit-msg hook?.
The snippet (terraform_plan.sh) looks like this:
#!/bin/sh
location=$(pwd)
echo "location: ${location}"
cd ./tf_build/
project_id=$(gcloud config get-value project)
credentials=$(gcloud secrets versions access latest --secret="application-default-credentials")
echo "PROJECT_ID: ${project_id}"
echo "CREDENTIALS: ${credentials}"
terraform plan -var "PROJECT_ID=${project_id}" -var "APPLICATION_DEFAULT_CREDENTIALS=${credentials}"
exec < /dev/tty
read -p "Do yu agree this plan? (Y/N): " answer
echo "answer: ${answer}"
# for testing purpose, assign 0 directly
exit 0
I expect the prompt Do you agree this plan? (Y/N) to appear before I enter my answer. But actually nothing is shown and it just hangs there waiting for input.
(.venv) ➜ ✗ git commit -m "test"
sqlfluff-lint........................................(no files to check)Skipped
sqlfluff-fix.........................................(no files to check)Skipped
black................................................(no files to check)Skipped
isort................................................(no files to check)Skipped
docformatter.........................................(no files to check)Skipped
flake8...............................................(no files to check)Skipped
blackdoc.............................................(no files to check)Skipped
Terraform plan...........................................................
Only after I give the input "Y" do all the output strings defined in this hook (e.g. output through echo, terraform plan) come out.
Terraform plan...........................................................Y
Passed
- hook id: terraform_plan
- duration: 222.33s
location: [remove due to privacy]
PROJECT_ID: [remove due to privacy]
CREDENTIALS: [remove due to privacy]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  # google_bigquery_table.bar will be created
  + resource "google_bigquery_table" "bar" {
      + creation_time       = (known after apply)
      + dataset_id          = "default_dataset"
      + deletion_protection = true
      ...
I also tried read -p "Do you agree this plan? (Y/N): " answer < /dev/tty, but got the same issue.
Here is the relevant part of my .pre-commit-config.yaml config file:
repos:
  - repo: local
    hooks:
      # there are other hooks above, removed to keep this easy to read
      - id: terraform_plan
        name: Terraform plan
        entry: hooks/terraform_plan.sh
        language: script
        verbose: true
        files: (\.tf|\.tfvars)$
        exclude: \.terraform\/.*$
  - repo: ../pre-commit-terraform
    rev: d7e049d0b72eebcb09b719bb296589a47f4fa806
    hooks:
      - id: terraform_fmt
      - id: terraform_tflint
      - id: terraform_validate
        args: [--args=-no-color]
So far I do not know what the root cause is or how to solve it. Any help or suggestions would be appreciated. Thanks.
pre-commit intentionally does not allow interactivity so there is no working solution within the framework. you can sidestep the framework and utilize a legacy shell script (.git/hooks/pre-commit.legacy) but I would also not recommend that:
terraform plan is also kind of a poor choice for a pre-commit hook -- especially if you are blocking the user to confirm it. they are likely to be either surprised by such a hook or fatigued by it and will ignore the output (mash yes) or ignore the entire process (SKIP / --no-verify).
disclaimer: I wrote pre-commit
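For reference, a minimal sketch of that (not recommended) sidestep: when pre-commit install finds a pre-existing hook, it keeps it as .git/hooks/pre-commit.legacy and runs it alongside the framework's hooks, outside pre-commit's output capture. The script contents here are hypothetical:
#!/bin/sh
# .git/hooks/pre-commit.legacy -- runs outside pre-commit's output capture,
# so it can talk to the terminal directly
exec < /dev/tty > /dev/tty
terraform plan
read -p "Do you agree this plan? (Y/N): " answer
[ "$answer" = "Y" ] || exit 1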

Using GitHub cache action with multiple cache paths?

I'm trying to use the official GitHub cache action (https://github.com/actions/cache) to cache some binary files to speed up some of my workflows; however, I've been unable to get it working when specifying multiple cache paths.
Here's a simple, working test I've set up using a single cache path:
There is one action for writing the cache, and one for reading it (both executed in separate workflows, but on the same repository and branch).
The write-action is executed first, creates a file "subdir/a.txt", and then caches it with the "actions/cache@v2" action:
# Test with single path
- name: Create file
  shell: bash
  run: |
    mkdir subdir
    cd subdir
    printf '%s' "Lorem ipsum" >> a.txt
- name: Write cache (Single path)
  uses: actions/cache@v2
  with:
    path: "D:/a/cache_test/cache_test/**/*.txt"
    key: test-cache-single-path
The read-action retrieves the cache, prints a list of all files in the directory recursively to confirm it has restored the file from the cache, and then prints the contents of the cached txt-file:
- name: Get cached file
  uses: actions/cache@v2
  id: get-cache
  with:
    path: "D:/a/cache_test/cache_test/**/*.txt"
    key: test-cache-single-path
- name: Print files
  shell: bash
  run: |
    echo "Cache hit: ${{steps.get-cache.outputs.cache-hit}}"
    cd "D:/a/cache_test/cache_test"
    ls -R
    cat "D:/a/cache_test/cache_test/subdir/a.txt"
This works without any issues.
Now, the description of the cache action contains an example for specifying multiple cache paths:
- uses: actions/cache@v2
  with:
    path: |
      path/to/dependencies
      some/other/dependencies
    key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }}
But when I try that for my example actions, it fails to work.
In the new write-action, I create two files, "subdir/a.txt" and "subdir/b.md", and then cache them by specifying two paths:
# Test with multiple paths
- name: Create files
  shell: bash
  run: |
    mkdir subdir
    cd subdir
    printf '%s' "Lorem ipsum" >> a.txt
    printf '%s' "dolor sit amet" >> b.md
- name: Write cache (Multi path)
  uses: actions/cache@v2
  with:
    path: |
      "D:/a/cache_test/cache_test/**/*.txt"
      "D:/a/cache_test/cache_test/**/*.md"
    key: test-cache-multi-path
The new read-action is the same as the old one, but also specifies both paths:
# Read cache
- name: Get cached file
  uses: actions/cache@v2
  id: get-cache
  with:
    path: |
      "D:/a/cache_test/cache_test/**/*.txt"
      "D:/a/cache_test/cache_test/**/*.md"
    key: test-cache-multi-path
- name: Print files
  shell: bash
  run: |
    echo "Cache hit: ${{steps.get-cache.outputs.cache-hit}}"
    cd "D:/a/cache_test/cache_test"
    ls -R
    cat "D:/a/cache_test/cache_test/subdir/a.txt"
    cat "D:/a/cache_test/cache_test/subdir/b.md"
This time I still get the confirmation that the cache has been read:
Cache restored successfully
Cache restored from key: test-cache-multi-path
Cache hit: true
However "ls -R" does not list the files, and the "cat" commands fail because the files do not exist.
Where is my error? What is the proper way of specifying multiple paths with the cache action?
I was able to make it work with a few modifications:
use relative paths instead of absolute
use a hash of the content for the key
It looks like with at least bash the absolute paths look like this:
/d/a/so-foobar-cache/so-foobar-cache/cache_test/cache_test/subdir
where so-foobar-cache is the name of the repository.
(As a side note, inside a YAML block scalar (path: |) quotes are not stripped, so the quoted patterns in the failing example are most likely passed to the action verbatim, quotes included, and match nothing.)
.github/workflows/foobar.yml
name: Store and Fetch cached files
on: [push]
jobs:
  store:
    runs-on: windows-2019
    steps:
      - name: Create files
        shell: bash
        id: store
        run: |
          mkdir -p 'cache_test/cache_test/subdir'
          cd 'cache_test/cache_test/subdir'
          echo pwd $(pwd)
          printf '%s' "Lorem ipsum" >> a.txt
          printf '%s' "dolor sit amet" >> b.md
          cat a.txt b.md
      - name: Store in cache
        uses: actions/cache@v2
        with:
          path: |
            cache_test/cache_test/**/*.txt
            cache_test/cache_test/**/*.md
          key: multiple-files-${{ hashFiles('cache_test/cache_test/**') }}
      - name: Print files (A)
        shell: bash
        run: |
          echo "Cache hit: ${{steps.store.outputs.cache-hit}}"
          find cache_test/cache_test/subdir
          cat cache_test/cache_test/subdir/a.txt
          cat cache_test/cache_test/subdir/b.md
  fetch:
    runs-on: windows-2019
    needs: store
    steps:
      - name: Restore
        uses: actions/cache@v2
        with:
          path: |
            cache_test/cache_test/**/*.txt
            cache_test/cache_test/**/*.md
          key: multiple-files-${{ hashFiles('cache_test/cache_test/**') }}
          restore-keys: |
            multiple-files-${{ hashFiles('cache_test/cache_test/**') }}
            multiple-files-
      - name: Print files (B)
        shell: bash
        run: |
          find cache_test -type f | xargs -t grep -e.
Log
$ gh run view 1446486801
✓ master Store and Fetch cached files · 1446486801
Triggered via push about 3 minutes ago
JOBS
✓ store in 5s (ID 4171907768)
✓ fetch in 10s (ID 4171909690)
First job
$ gh run view 1446486801 --log --job=4171907768 | grep -e Create -e Store -e Print
store Create files 2021-11-10T22:59:32.1396931Z ##[group]Run mkdir -p 'cache_test/cache_test/subdir'
store Create files 2021-11-10T22:59:32.1398025Z mkdir -p 'cache_test/cache_test/subdir'
store Create files 2021-11-10T22:59:32.1398695Z cd 'cache_test/cache_test/subdir'
store Create files 2021-11-10T22:59:32.1399360Z echo pwd $(pwd)
store Create files 2021-11-10T22:59:32.1399936Z printf '%s' "Lorem ipsum" >> a.txt
store Create files 2021-11-10T22:59:32.1400672Z printf '%s' "dolor sit amet" >> b.md
store Create files 2021-11-10T22:59:32.1401231Z cat a.txt b.md
store Create files 2021-11-10T22:59:32.1623649Z shell: C:\Program Files\Git\bin\bash.EXE --noprofile --norc -e -o pipefail {0}
store Create files 2021-11-10T22:59:32.1626211Z ##[endgroup]
store Create files 2021-11-10T22:59:32.9569082Z pwd /d/a/so-foobar-cache/so-foobar-cache/cache_test/cache_test/subdir
store Create files 2021-11-10T22:59:32.9607728Z Lorem ipsumdolor sit amet
store Store in cache 2021-11-10T22:59:33.9705422Z ##[group]Run actions/cache@v2
store Store in cache 2021-11-10T22:59:33.9706196Z with:
store Store in cache 2021-11-10T22:59:33.9706815Z path: cache_test/cache_test/**/*.txt
store Store in cache cache_test/cache_test/**/*.md
store Store in cache
store Store in cache 2021-11-10T22:59:33.9708499Z key: multiple-files-25c0e6413e23766a3681413625169cee1ca3a7cd2186cc1b1df5370fb43bce55
store Store in cache 2021-11-10T22:59:33.9709961Z ##[endgroup]
store Store in cache 2021-11-10T22:59:35.1757943Z Received 260 of 260 (100.0%), 0.0 MBs/sec
store Store in cache 2021-11-10T22:59:35.1761565Z Cache Size: ~0 MB (260 B)
store Store in cache 2021-11-10T22:59:35.1781110Z [command]C:\Windows\System32\tar.exe -z -xf D:/a/_temp/653f7664-e139-4930-9710-e56942f9fa47/cache.tgz -P -C D:/a/so-foobar-cache/so-foobar-cache
store Store in cache 2021-11-10T22:59:35.2069751Z Cache restored successfully
store Store in cache 2021-11-10T22:59:35.2737840Z Cache restored from key: multiple-files-25c0e6413e23766a3681413625169cee1ca3a7cd2186cc1b1df5370fb43bce55
store Print files (A) 2021-11-10T22:59:35.3087596Z ##[group]Run echo "Cache hit: "
store Print files (A) 2021-11-10T22:59:35.3088324Z echo "Cache hit: "
store Print files (A) 2021-11-10T22:59:35.3088983Z find cache_test/cache_test/subdir
store Print files (A) 2021-11-10T22:59:35.3089571Z cat cache_test/cache_test/subdir/a.txt
store Print files (A) 2021-11-10T22:59:35.3090176Z cat cache_test/cache_test/subdir/b.md
store Print files (A) 2021-11-10T22:59:35.3104465Z shell: C:\Program Files\Git\bin\bash.EXE --noprofile --norc -e -o pipefail {0}
store Print files (A) 2021-11-10T22:59:35.3106449Z ##[endgroup]
store Print files (A) 2021-11-10T22:59:35.3494703Z Cache hit:
store Print files (A) 2021-11-10T22:59:35.4456032Z cache_test/cache_test/subdir
store Print files (A) 2021-11-10T22:59:35.4456852Z cache_test/cache_test/subdir/a.txt
store Print files (A) 2021-11-10T22:59:35.4459226Z cache_test/cache_test/subdir/b.md
store Print files (A) 2021-11-10T22:59:35.4875011Z Lorem ipsumdolor sit amet
store Post Store in cache 2021-11-10T22:59:35.6109511Z Post job cleanup.
store Post Store in cache 2021-11-10T22:59:35.7899690Z Cache hit occurred on the primary key multiple-files-25c0e6413e23766a3681413625169cee1ca3a7cd2186cc1b1df5370fb43bce55, not saving cache.
Second job
$ gh run view 1446486801 --log --job=4171909690 | grep -e Restore -e Print
fetch Restore 2021-11-10T22:59:50.8498516Z ##[group]Run actions/cache@v2
fetch Restore 2021-11-10T22:59:50.8499346Z with:
fetch Restore 2021-11-10T22:59:50.8499883Z path: cache_test/cache_test/**/*.txt
fetch Restore cache_test/cache_test/**/*.md
fetch Restore
fetch Restore 2021-11-10T22:59:50.8500449Z key: multiple-files-
fetch Restore 2021-11-10T22:59:50.8501079Z restore-keys: multiple-files-
fetch Restore multiple-files-
fetch Restore
fetch Restore 2021-11-10T22:59:50.8501644Z ##[endgroup]
fetch Restore 2021-11-10T22:59:53.1143793Z Received 257 of 257 (100.0%), 0.0 MBs/sec
fetch Restore 2021-11-10T22:59:53.1145450Z Cache Size: ~0 MB (257 B)
fetch Restore 2021-11-10T22:59:53.1163664Z [command]C:\Windows\System32\tar.exe -z -xf D:/a/_temp/30b0dc24-b25f-4713-b3d3-cecee7116785/cache.tgz -P -C D:/a/so-foobar-cache/so-foobar-cache
fetch Restore 2021-11-10T22:59:53.1784328Z Cache restored successfully
fetch Restore 2021-11-10T22:59:53.5197756Z Cache restored from key: multiple-files-
fetch Print files (B) 2021-11-10T22:59:53.5483939Z ##[group]Run find cache_test -type f | xargs -t grep -e.
fetch Print files (B) 2021-11-10T22:59:53.5484730Z find cache_test -type f | xargs -t grep -e.
fetch Print files (B) 2021-11-10T22:59:53.5498140Z shell: C:\Program Files\Git\bin\bash.EXE --noprofile --norc -e -o pipefail {0}
fetch Print files (B) 2021-11-10T22:59:53.5498674Z ##[endgroup]
fetch Print files (B) 2021-11-10T22:59:55.8119800Z grep -e. cache_test/cache_test/subdir/a.txt cache_test/cache_test/subdir/b.md
fetch Print files (B) 2021-11-10T22:59:56.1777887Z cache_test/cache_test/subdir/a.txt:Lorem ipsum
fetch Print files (B) 2021-11-10T22:59:56.1784138Z cache_test/cache_test/subdir/b.md:dolor sit amet
fetch Post Restore 2021-11-10T22:59:56.3890391Z Post job cleanup.
fetch Post Restore 2021-11-10T22:59:56.5481739Z Cache hit occurred on the primary key multiple-files-, not saving cache.
Came here to see if I could cache multiple binary files. I see there is a separate workflow for pushing the cache and another one for retrieving it. We had a different use case, where we needed to install certain dependencies. Sharing the same here.
Use case
Your workflow needs gcc and python3 to run (the dependencies can be any others as well).
You have a script to install the dependencies, ./install-dependencies.sh, and you provide the appropriate env to the script, like ENV_INSTALL_PYTHON=true or ENV_INSTALL_GCC=true.
Points to be noted
./install-dependencies.sh takes care of installing the dependencies in the path ~/bin and produces the executable binaries in the same path. It also ensures that the $PATH environment variable is updated with the new binary paths.
Instead of duplicating the check-cache-and-install steps twice (as we have 2 binaries now), we are able to do it only once. So even if we had a requirement of installing 50 binaries, we could still do it in only two steps, like this.
The cache key name python-gcc-cache-key can be anything, but ensure that it is unique.
The third step, name: install python, gcc, takes care of creating the key with the name python-gcc-cache-key if it was not found, even though we have not mentioned this key name anywhere in this step.
The first step is where you check out your repository containing your ./install-dependencies.sh script (a hypothetical sketch of that script follows the workflow below).
Workflow
name: Install dependencies
on: [push]
jobs:
  install_dependencies:
    runs-on: ubuntu-latest
    name: Install python, gcc
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      ## python, gcc installation
      # Check if python, gcc are present in the worker cache
      - name: python, gcc cache
        id: python-gcc-cache
        uses: actions/cache@v2
        with:
          path: |
            ~/bin/python
            ~/bin/gcc
          key: python-gcc-cache-key
      # Install python, gcc if they were not found in the cache
      - name: install python, gcc
        if: steps.python-gcc-cache.outputs.cache-hit != 'true'
        working-directory: .github/workflows
        env:
          ENV_INSTALL_PYTHON: true
          ENV_INSTALL_GCC: true
        run: |
          ./install-dependencies.sh
      - name: validate python, gcc
        working-directory: .github/workflows
        run: |
          ENV_INSTALL_BINARY_DIRECTORY_LINUX="$HOME/bin"
          export PATH="$ENV_INSTALL_BINARY_DIRECTORY_LINUX:$PATH"
          python3 --version
          gcc --version
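For reference, a hypothetical outline of install-dependencies.sh under the assumptions above (the env var names match the workflow; the actual install commands are placeholders):
#!/bin/sh
# hypothetical outline: installs the requested tools into ~/bin so the
# cache step above can pick them up on the next run
set -e
BIN_DIR="$HOME/bin"
mkdir -p "$BIN_DIR"
if [ "$ENV_INSTALL_PYTHON" = "true" ]; then
  echo "installing python3 into $BIN_DIR"
  # placeholder: download or build a python3 binary into $BIN_DIR
fi
if [ "$ENV_INSTALL_GCC" = "true" ]; then
  echo "installing gcc into $BIN_DIR"
  # placeholder: download or build a gcc binary into $BIN_DIR
fi
# make the new binaries visible to subsequent commands
export PATH="$BIN_DIR:$PATH"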
Benefits
It will depend on what binaries you are trying to install.
For us, the time saved was nearly 50 s every time there was a cache hit.

Getting the value of a variable in an Azure pipeline

enviornment: 'dev'
acr-login: $(enviornment)-acr-login
acr-secret: $(enviornment)-acr-secret
dev-acr-login and dev-acr-secret are secrets stored in Key Vault for the ACR login and ACR secret.
In the pipeline, I get the secrets with this task:
- task: AzureKeyVault@1
  inputs:
    azureSubscription: $(connection)
    KeyVaultName: $(keyVaultName)
    SecretsFilter: '*'
This task will create task variables with the names 'dev-acr-login' and 'dev-acr-secret'.
Now if I want to log in to Docker, I am not able to do that.
The following code works and I am able to log in to ACR:
- bash: |
    echo $(dev-acr-secret) | docker login \
      $(acrName) \
      -u $(dev-acr-login) \
      --password-stdin
  displayName: 'docker login'
The following does not work. Is there a way I can use the variable names $(acr-login) and $(acr-secret) rather than the actual keys from Key Vault?
- bash: |
    echo $(echo $(acr-secret)) | docker login \
      $(acrRegistryServerFullName) \
      -u $(echo $(acr-login)) \
      --password-stdin
  displayName: 'docker login'
You could pass them as environment variables:
- bash: |
    echo $(echo $ACR_SECRET) | ...
  displayName: docker login
  env:
    ACR_SECRET: $(acr-secret)
But what is the purpose, as opposed to just echoing the password values as you said works in the other example? As long as the task is creating secure variables, they will be protected in logs. You'd need to do that anyway, since they would otherwise show up in diagnostic logs if someone enabled diagnostics, which anyone can do.
An example to do that:
- bash: |
    echo "##vso[task.setvariable variable=acr-login;issecret=true;]$ACR_SECRET"
  env:
    ACR_SECRET: $($(acr-secret)) # Should expand recursively
See Define variables : Set secret variables for more information and examples.
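Putting the pieces together, a minimal sketch of the environment-variable approach (untested; connection, keyVaultName and acrName are the variable names used in the question):
- task: AzureKeyVault@1
  inputs:
    azureSubscription: $(connection)
    KeyVaultName: $(keyVaultName)
    SecretsFilter: '*'
- bash: |
    # the secret arrives via env:, so it is never expanded inline into the script
    echo "$ACR_SECRET" | docker login "$ACR_NAME" -u "$ACR_LOGIN" --password-stdin
  displayName: docker login
  env:
    ACR_NAME: $(acrName)
    ACR_LOGIN: $(dev-acr-login)
    ACR_SECRET: $(dev-acr-secret)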

Run Google Cloud Build Command for Each Result of Array

I have an Nx workspace with multiple Angular apps included. When master is updated in my GitHub repo, I want a build to kick off. That part is easy enough with GCB's triggers. But what I want to happen is to run this command:
npm run affected:apps
on the trigger, and build a Docker image and push it to Google Container Registry for each affected app. My cloudbuild.yaml file looks like this so far:
steps:
  - name: 'gcr.io/cloud-builders/git'
    args: ['fetch', '--unshallow']
  - name: node:10.15.1
    entrypoint: npm
    args: ['run affected:apps --base=origin/master --head=HEAD']
That command returns a result like this:
> project-name@0.0.0 affected:apps /Users/username/projects/project-folder
> nx affected:apps
Note: Nx defaulted to --base=master --head=HEAD
my-app-1
I'm not sure what to do with Google Cloud with that result. With a node script, I could do the following to print out an array of affected apps:
const { exec } = require('child_process');

function getApps() {
  exec('npm run affected:apps', (err, out) => {
    if (err) {
      console.log(null);
    } else {
      const lines = out.split('\n');
      const apps = lines[lines.length - 2].split(' ');
      console.log(JSON.stringify(apps));
    }
  });
}

getApps();
That returns an array of the affected apps, and null on error. Still, even with that, I'm not sure what I would do for the next step in Google Cloud Build. With the results of either the command or that script, ideally I'd be able to run a docker build command, like this:
docker build --file ./:loop variable:/Dockerfile
where :loop variable: is the name of an affected app. I'd like to do that for each value in the array, and not do anything if, for some reason, the command returns no affected apps.
Any ideas on how to use Google Cloud Build with Nx Workspaces? Or, if you've just got Google Cloud Build experience and know what my next step should be, that'd be great.
Continuing @chinoche's comment, here is an example of how you could save the list of affected apps to an affected.txt file:
- name: 'gcr.io/cloud-builders/npm'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      IFS=' ' read -a apps <<< $(npx nx affected:apps --base=origin/master~1 --plain)
      for app in "${apps[@]}"
      do
        echo $app
      done >> affected.txt
The next step could read the file and call any other commands, e.g. create a Docker image:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      while IFS= read -r project; do
        docker build -t gcr.io/$PROJECT_ID/$project -f <path-to>/Dockerfile .
      done < affected.txt
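Since the question also wants the images pushed to Google Container Registry, a similar step could follow (a sketch, assuming the tags used in the build step above):
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      while IFS= read -r project; do
        docker push gcr.io/$PROJECT_ID/$project
      done < affected.txt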
One of the tricks might be to create a separate cloudbuild.yaml file for each project and then trigger a new Cloud Build process for each affected project. That allows a completely different build process for each project:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      while IFS= read -r project; do
        gcloud builds submit --config ./<path-to>/$project/project-specific-cloudbuild.yaml .
      done < affected.txt
If you are able to get the affected apps with the node script, I'd suggest writing a file with the affected apps in a Cloud Build custom step. The file will be written to the /workspace directory and will be available to any other custom step executed later; with this you should be able to run the docker build command. A minimal sketch of that idea follows.
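Here the question's script is adapted into a hypothetical get-apps.js that writes the list for later steps (the file name and the --plain flag are assumptions):
// get-apps.js -- hypothetical adaptation of the script above
const { exec } = require('child_process');
const fs = require('fs');

exec('npx nx affected:apps --base=origin/master --head=HEAD --plain', (err, out) => {
  if (err) throw err;
  // --plain prints the affected app names separated by whitespace
  const apps = out.trim().split(/\s+/).filter(Boolean);
  fs.writeFileSync('/workspace/affected.txt', apps.join('\n'));
});
It would be invoked from a custom build step so that later steps can read the file:
- name: node:10.15.1
  entrypoint: node
  args: ['get-apps.js']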
