Using the GitHub cache action with multiple cache paths? - Windows

I'm trying to use the official GitHub cache action (https://github.com/actions/cache) to cache some binary files and speed up some of my workflows. However, I've been unable to get it working when specifying multiple cache paths.
Here's a simple, working test I've set up using a single cache path:
There is one action for writing the cache, and one for reading it (both executed in separate workflows, but on the same repository and branch).
The write-action is executed first; it creates a file "subdir/a.txt" and then caches it with the "actions/cache@v2" action:
# Test with single path
- name: Create file
  shell: bash
  run: |
    mkdir subdir
    cd subdir
    printf '%s' "Lorem ipsum" >> a.txt
- name: Write cache (Single path)
  uses: actions/cache@v2
  with:
    path: "D:/a/cache_test/cache_test/**/*.txt"
    key: test-cache-single-path
The read-action retrieves the cache, prints a list of all files in the directory recursively to confirm it has restored the file from the cache, and then prints the contents of the cached txt-file:
- name: Get cached file
  uses: actions/cache@v2
  id: get-cache
  with:
    path: "D:/a/cache_test/cache_test/**/*.txt"
    key: test-cache-single-path
- name: Print files
  shell: bash
  run: |
    echo "Cache hit: ${{ steps.get-cache.outputs.cache-hit }}"
    cd "D:/a/cache_test/cache_test"
    ls -R
    cat "D:/a/cache_test/cache_test/subdir/a.txt"
This works without any issues.
Now, the description of the cache action contains an example for specifying multiple cache paths:
- uses: actions/cache@v2
  with:
    path: |
      path/to/dependencies
      some/other/dependencies
    key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }}
But when I try that for my example actions, it fails to work.
In the new write-action, I create two files, "subdir/a.txt" and "subdir/b.md", and then cache them by specifying two paths:
# Test with multiple paths
- name: Create files
  shell: bash
  run: |
    mkdir subdir
    cd subdir
    printf '%s' "Lorem ipsum" >> a.txt
    printf '%s' "dolor sit amet" >> b.md
- name: Write cache (Multi path)
  uses: actions/cache@v2
  with:
    path: |
      "D:/a/cache_test/cache_test/**/*.txt"
      "D:/a/cache_test/cache_test/**/*.md"
    key: test-cache-multi-path
The new read-action is the same as the old one, but also specifies both paths:
# Read cache
- name: Get cached file
  uses: actions/cache@v2
  id: get-cache
  with:
    path: |
      "D:/a/cache_test/cache_test/**/*.txt"
      "D:/a/cache_test/cache_test/**/*.md"
    key: test-cache-multi-path
- name: Print files
  shell: bash
  run: |
    echo "Cache hit: ${{ steps.get-cache.outputs.cache-hit }}"
    cd "D:/a/cache_test/cache_test"
    ls -R
    cat "D:/a/cache_test/cache_test/subdir/a.txt"
    cat "D:/a/cache_test/cache_test/subdir/b.md"
This time I still get the confirmation that the cache has been read:
Cache restored successfully
Cache restored from key: test-cache-multi-path
Cache hit: true
However "ls -R" does not list the files, and the "cat" commands fail because the files do not exist.
Where is my error? What is the proper way of specifying multiple paths with the cache action?

I was able to make it work with a few modifications:
use relative paths instead of absolute paths
use a hash of the content for the key
(Two things worth noting: inside a "|" block scalar, the quotes around each path become literal parts of the glob pattern, which may be part of why the quoted absolute paths in the question never matched anything. Also, as the fetch log below shows, hashFiles(...) evaluates to an empty string before the files exist, so the restore falls back to the "multiple-files-" restore key.)
At least with bash, the absolute paths look like this:
/d/a/so-foobar-cache/so-foobar-cache/cache_test/cache_test/subdir
where so-foobar-cache is the name of the repository.
.github/workflows/foobar.yml
name: Store and Fetch cached files
on: [push]
jobs:
  store:
    runs-on: windows-2019
    steps:
      - name: Create files
        shell: bash
        id: store
        run: |
          mkdir -p 'cache_test/cache_test/subdir'
          cd 'cache_test/cache_test/subdir'
          echo pwd $(pwd)
          printf '%s' "Lorem ipsum" >> a.txt
          printf '%s' "dolor sit amet" >> b.md
          cat a.txt b.md
      - name: Store in cache
        uses: actions/cache@v2
        with:
          path: |
            cache_test/cache_test/**/*.txt
            cache_test/cache_test/**/*.md
          key: multiple-files-${{ hashFiles('cache_test/cache_test/**') }}
      - name: Print files (A)
        shell: bash
        run: |
          echo "Cache hit: ${{ steps.store.outputs.cache-hit }}"
          find cache_test/cache_test/subdir
          cat cache_test/cache_test/subdir/a.txt
          cat cache_test/cache_test/subdir/b.md
  fetch:
    runs-on: windows-2019
    needs: store
    steps:
      - name: Restore
        uses: actions/cache@v2
        with:
          path: |
            cache_test/cache_test/**/*.txt
            cache_test/cache_test/**/*.md
          key: multiple-files-${{ hashFiles('cache_test/cache_test/**') }}
          restore-keys: |
            multiple-files-${{ hashFiles('cache_test/cache_test/**') }}
            multiple-files-
      - name: Print files (B)
        shell: bash
        run: |
          find cache_test -type f | xargs -t grep -e.
Log
$ gh run view 1446486801
✓ master Store and Fetch cached files · 1446486801
Triggered via push about 3 minutes ago
JOBS
✓ store in 5s (ID 4171907768)
✓ fetch in 10s (ID 4171909690)
First job
$ gh run view 1446486801 --log --job=4171907768 | grep -e Create -e Store -e Print
store Create files 2021-11-10T22:59:32.1396931Z ##[group]Run mkdir -p 'cache_test/cache_test/subdir'
store Create files 2021-11-10T22:59:32.1398025Z mkdir -p 'cache_test/cache_test/subdir'
store Create files 2021-11-10T22:59:32.1398695Z cd 'cache_test/cache_test/subdir'
store Create files 2021-11-10T22:59:32.1399360Z echo pwd $(pwd)
store Create files 2021-11-10T22:59:32.1399936Z printf '%s' "Lorem ipsum" >> a.txt
store Create files 2021-11-10T22:59:32.1400672Z printf '%s' "dolor sit amet" >> b.md
store Create files 2021-11-10T22:59:32.1401231Z cat a.txt b.md
store Create files 2021-11-10T22:59:32.1623649Z shell: C:\Program Files\Git\bin\bash.EXE --noprofile --norc -e -o pipefail {0}
store Create files 2021-11-10T22:59:32.1626211Z ##[endgroup]
store Create files 2021-11-10T22:59:32.9569082Z pwd /d/a/so-foobar-cache/so-foobar-cache/cache_test/cache_test/subdir
store Create files 2021-11-10T22:59:32.9607728Z Lorem ipsumdolor sit amet
store Store in cache 2021-11-10T22:59:33.9705422Z ##[group]Run actions/cache@v2
store Store in cache 2021-11-10T22:59:33.9706196Z with:
store Store in cache 2021-11-10T22:59:33.9706815Z path: cache_test/cache_test/**/*.txt
store Store in cache cache_test/cache_test/**/*.md
store Store in cache
store Store in cache 2021-11-10T22:59:33.9708499Z key: multiple-files-25c0e6413e23766a3681413625169cee1ca3a7cd2186cc1b1df5370fb43bce55
store Store in cache 2021-11-10T22:59:33.9709961Z ##[endgroup]
store Store in cache 2021-11-10T22:59:35.1757943Z Received 260 of 260 (100.0%), 0.0 MBs/sec
store Store in cache 2021-11-10T22:59:35.1761565Z Cache Size: ~0 MB (260 B)
store Store in cache 2021-11-10T22:59:35.1781110Z [command]C:\Windows\System32\tar.exe -z -xf D:/a/_temp/653f7664-e139-4930-9710-e56942f9fa47/cache.tgz -P -C D:/a/so-foobar-cache/so-foobar-cache
store Store in cache 2021-11-10T22:59:35.2069751Z Cache restored successfully
store Store in cache 2021-11-10T22:59:35.2737840Z Cache restored from key: multiple-files-25c0e6413e23766a3681413625169cee1ca3a7cd2186cc1b1df5370fb43bce55
store Print files (A) 2021-11-10T22:59:35.3087596Z ##[group]Run echo "Cache hit: "
store Print files (A) 2021-11-10T22:59:35.3088324Z echo "Cache hit: "
store Print files (A) 2021-11-10T22:59:35.3088983Z find cache_test/cache_test/subdir
store Print files (A) 2021-11-10T22:59:35.3089571Z cat cache_test/cache_test/subdir/a.txt
store Print files (A) 2021-11-10T22:59:35.3090176Z cat cache_test/cache_test/subdir/b.md
store Print files (A) 2021-11-10T22:59:35.3104465Z shell: C:\Program Files\Git\bin\bash.EXE --noprofile --norc -e -o pipefail {0}
store Print files (A) 2021-11-10T22:59:35.3106449Z ##[endgroup]
store Print files (A) 2021-11-10T22:59:35.3494703Z Cache hit:
store Print files (A) 2021-11-10T22:59:35.4456032Z cache_test/cache_test/subdir
store Print files (A) 2021-11-10T22:59:35.4456852Z cache_test/cache_test/subdir/a.txt
store Print files (A) 2021-11-10T22:59:35.4459226Z cache_test/cache_test/subdir/b.md
store Print files (A) 2021-11-10T22:59:35.4875011Z Lorem ipsumdolor sit amet
store Post Store in cache 2021-11-10T22:59:35.6109511Z Post job cleanup.
store Post Store in cache 2021-11-10T22:59:35.7899690Z Cache hit occurred on the primary key multiple-files-25c0e6413e23766a3681413625169cee1ca3a7cd2186cc1b1df5370fb43bce55, not saving cache.
Second job
$ gh run view 1446486801 --log --job=4171909690 | grep -e Restore -e Print
fetch Restore 2021-11-10T22:59:50.8498516Z ##[group]Run actions/cache@v2
fetch Restore 2021-11-10T22:59:50.8499346Z with:
fetch Restore 2021-11-10T22:59:50.8499883Z path: cache_test/cache_test/**/*.txt
fetch Restore cache_test/cache_test/**/*.md
fetch Restore
fetch Restore 2021-11-10T22:59:50.8500449Z key: multiple-files-
fetch Restore 2021-11-10T22:59:50.8501079Z restore-keys: multiple-files-
fetch Restore multiple-files-
fetch Restore
fetch Restore 2021-11-10T22:59:50.8501644Z ##[endgroup]
fetch Restore 2021-11-10T22:59:53.1143793Z Received 257 of 257 (100.0%), 0.0 MBs/sec
fetch Restore 2021-11-10T22:59:53.1145450Z Cache Size: ~0 MB (257 B)
fetch Restore 2021-11-10T22:59:53.1163664Z [command]C:\Windows\System32\tar.exe -z -xf D:/a/_temp/30b0dc24-b25f-4713-b3d3-cecee7116785/cache.tgz -P -C D:/a/so-foobar-cache/so-foobar-cache
fetch Restore 2021-11-10T22:59:53.1784328Z Cache restored successfully
fetch Restore 2021-11-10T22:59:53.5197756Z Cache restored from key: multiple-files-
fetch Print files (B) 2021-11-10T22:59:53.5483939Z ##[group]Run find cache_test -type f | xargs -t grep -e.
fetch Print files (B) 2021-11-10T22:59:53.5484730Z find cache_test -type f | xargs -t grep -e.
fetch Print files (B) 2021-11-10T22:59:53.5498140Z shell: C:\Program Files\Git\bin\bash.EXE --noprofile --norc -e -o pipefail {0}
fetch Print files (B) 2021-11-10T22:59:53.5498674Z ##[endgroup]
fetch Print files (B) 2021-11-10T22:59:55.8119800Z grep -e. cache_test/cache_test/subdir/a.txt cache_test/cache_test/subdir/b.md
fetch Print files (B) 2021-11-10T22:59:56.1777887Z cache_test/cache_test/subdir/a.txt:Lorem ipsum
fetch Print files (B) 2021-11-10T22:59:56.1784138Z cache_test/cache_test/subdir/b.md:dolor sit amet
fetch Post Restore 2021-11-10T22:59:56.3890391Z Post job cleanup.
fetch Post Restore 2021-11-10T22:59:56.5481739Z Cache hit occurred on the primary key multiple-files-, not saving cache.

I came here to see if I could cache multiple binary files. I see the examples above use a separate job for pushing the cache and another one for retrieving it. We had a different use case, where we need to install certain dependencies. Sharing it here.
Use case
Your workflow needs gcc and python3 to run (the dependencies can be any others as well).
You have a script to install dependencies, ./install-dependencies.sh, and you pass appropriate env variables to the script, like ENV_INSTALL_PYTHON=true or ENV_INSTALL_GCC=true.
Points to be noted
./install-dependencies.sh takes care of installing the dependencies in the path ~/bin and produces the executable binaries in the same path. It also ensures that the $PATH environment variable is updated with the new binary paths. (A sketch of such a script follows the workflow below.)
Instead of duplicating the "check cache, then install" steps once per binary (we have 2 binaries now), we do it only once. So even if we had to install 50 binaries, we could still do it in just two steps like this.
The cache key name python-gcc-cache-key can be anything, but make sure it is unique.
The third step, "install python, gcc", takes care of creating the cache under the key python-gcc-cache-key if it was not found, even though that key name is not mentioned anywhere in this step (the cache action saves it automatically in its post-job hook).
The first step is where you check out the repository containing your ./install-dependencies.sh script.
Workflow
name: Install dependencies
on: [push]
jobs:
  install_dependencies:
    runs-on: ubuntu-latest
    name: Install python, gcc
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      ## python, gcc installation
      # Check if python, gcc are present in the worker cache
      - name: python, gcc cache
        id: python-gcc-cache
        uses: actions/cache@v2
        with:
          path: |
            ~/bin/python
            ~/bin/gcc
          key: python-gcc-cache-key
      # Install python, gcc if they were not found in the cache
      - name: install python, gcc
        if: steps.python-gcc-cache.outputs.cache-hit != 'true'
        working-directory: .github/workflows
        env:
          ENV_INSTALL_PYTHON: true
          ENV_INSTALL_GCC: true
        run: |
          ./install-dependencies.sh
      - name: validate python, gcc
        working-directory: .github/workflows
        run: |
          ENV_INSTALL_BINARY_DIRECTORY_LINUX="$HOME/bin"
          export PATH="$ENV_INSTALL_BINARY_DIRECTORY_LINUX:$PATH"
          python3 --version
          gcc --version
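For reference, here is a minimal sketch of what install-dependencies.sh could look like. Only the env flag names from above are taken from the answer; the install commands themselves are placeholders, not the original script:

#!/usr/bin/env bash
# Hypothetical install-dependencies.sh: installs the requested binaries
# into ~/bin so that the cache step above can pick them up.
set -euo pipefail

BIN_DIR="$HOME/bin"
mkdir -p "$BIN_DIR"

if [ "${ENV_INSTALL_PYTHON:-false}" = "true" ] && [ ! -x "$BIN_DIR/python" ]; then
  # Placeholder: fetch or build python and drop the binary into ~/bin.
  cp "$(command -v python3)" "$BIN_DIR/python"
fi

if [ "${ENV_INSTALL_GCC:-false}" = "true" ] && [ ! -x "$BIN_DIR/gcc" ]; then
  # Placeholder: fetch or build gcc and drop the binary into ~/bin.
  cp "$(command -v gcc)" "$BIN_DIR/gcc"
fi

# Make the new binaries visible for the remainder of this step.
export PATH="$BIN_DIR:$PATH"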
Benefits
How much you save will depend on what binaries you are trying to install.
For us the saved time was nearly 50 seconds every time there was a cache hit.

Related

Pre-commit - /dev/tty blocks all output text defined in the hook (e.g. through echo) before entering user input

I'm trying to create my own hook (defined in terraform_plan.sh; see terraform_plan.sh and .pre-commit-config.yaml below) that requires user input to determine whether the hook succeeds or fails (the hook asks the user to check something before committing). To activate the user input function, I added exec < /dev/tty according to How do I prompt the user from within a commit-msg hook?.
The script looks like this (terraform_plan.sh):
#!/bin/sh
location=$(pwd)
echo "location: ${location}"
cd ./tf_build/
project_id=$(gcloud config get-value project)
credentials=$(gcloud secrets versions access latest --secret="application-default-credentials")
echo "PROJECT_ID: ${project_id}"
echo "CREDENTIALS: ${credentials}"
terraform plan -var "PROJECT_ID=${project_id}" -var "APPLICATION_DEFAULT_CREDENTIALS=${credentials}"
exec < /dev/tty
read -p "Do yu agree this plan? (Y/N): " answer
echo "answer: ${answer}"
# for testing purpose, assign 0 directly
exit 0
I expect the prompt Do you agree this plan? (Y/N) to appear before I can enter my answer. But actually nothing is shown and it just hangs there waiting for the input.
(.venv) ➜ ✗ git commit -m "test"
sqlfluff-lint........................................(no files to check)Skipped
sqlfluff-fix.........................................(no files to check)Skipped
black................................................(no files to check)Skipped
isort................................................(no files to check)Skipped
docformatter.........................................(no files to check)Skipped
flake8...............................................(no files to check)Skipped
blackdoc.............................................(no files to check)Skipped
Terraform plan...........................................................
Only after I give the input "Y" do all the output strings defined in this hook (e.g. output through echo, terraform plan) come out.
Terraform plan...........................................................Y
Passed
- hook id: terraform_plan
- duration: 222.33s
location: [removed due to privacy]
PROJECT_ID: [removed due to privacy]
CREDENTIALS: [removed due to privacy]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# google_bigquery_table.bar will be created
+ resource "google_bigquery_table" "bar" {
+ creation_time = (known after apply)
+ dataset_id = "default_dataset"
+ deletion_protection = true
...
I also tried read -p "Do you agree this plan? (Y/N): " answer < /dev/tty, but got the same issue.
Here is the relevant part of my .pre-commit-config.yaml config file.
repos:
  - repo: local
    hooks:
      # there are other hooks above, removed to keep this easy to read
      - id: terraform_plan
        name: Terraform plan
        entry: hooks/terraform_plan.sh
        language: script
        verbose: true
        files: (\.tf|\.tfvars)$
        exclude: \.terraform\/.*$
  - repo: ../pre-commit-terraform
    rev: d7e049d0b72eebcb09b719bb296589a47f4fa806
    hooks:
      - id: terraform_fmt
      - id: terraform_tflint
      - id: terraform_validate
        args: [--args=-no-color]
So far I do not know what the root cause is or how to solve it. Any help or suggestions would be appreciated. Thanks.
pre-commit intentionally does not allow interactivity, so there is no working solution within the framework. you can sidestep the framework and use a legacy shell script (.git/hooks/pre-commit.legacy) but I would not recommend that either:
terraform plan is also kind of a poor choice for a pre-commit hook -- especially if you are blocking the user to confirm it. they are likely to be either surprised by such a hook or fatigued by it, and will ignore the output (mash yes) or skip the entire process (SKIP / --no-verify).
disclaimer: I wrote pre-commit

github-actions decode base64 string on windows

I have a publishing job in GitHub Actions.
It uses a certificate which is stored in base64 format in the repo secrets.
I need to decode this certificate and store it on disk on a windows-latest machine.
My workflow looks like this:
name: publish
on: [ push ]
jobs:
  build:
    runs-on: windows-latest
    steps:
      - name: checkout
        uses: actions/checkout@v2
      - run: echo "${{ secrets.WINDOWS_CERT }}" | base64 --decode > $HOME/certificate.pfx
      - run: cat $HOME/certificate.pfx
When I run it, I get this error:
Run echo "***" | base64 --decode > $HOME/certificate.pfx
echo "***" | base64 --decode > $HOME/certificate.pfx
shell: C:\Program Files\PowerShell\7\pwsh.EXE -command ". '{0}'"
/usr/bin/base64: invalid input
##[error]Process completed with exit code 1.
How do I properly decode base64 encoded secrets on Windows machines?
Thanks
I ran into two problems in PowerShell (the windows-latest default shell) on GitHub Actions:
Maybe due to newlines, that invalid input error indeed appeared. I needed to pass -i (or --ignore-garbage) to base64 -d to silence it.
But even then, the output still didn't match the original binary (as verified with dir, wc and md5sum), because in PowerShell the > operator defaults to writing UTF-16, which will mess up your binary. See https://www.johndcook.com/blog/2008/08/25/powershell-output-redirection-unicode-or-ascii/ for a possible workaround using Out-File directly.
But in the end, I just changed to shell: bash for the base64 decode operation. Everything works promptly there.
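For example, the failing step from the question could become the following (a sketch, assuming the same secret name and target path):

# Decode in bash to avoid PowerShell's UTF-16 redirection behavior.
- run: echo "${{ secrets.WINDOWS_CERT }}" | base64 --decode > "$HOME/certificate.pfx"
  shell: bash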

Is this the correct way to write if..else statement in cloudbuild.yaml file?

I am trying to deploy a cloud function using cloudbuild.yaml. It works fine if I don't use any conditional statement, but I am facing an error when I execute my cloudbuild.yaml file with an if conditional statement. What is the correct way to write it? Below is my code:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    id: deploy
    args:
      - '-c'
      - 'if [ $BRANCH_NAME != "xoxoxoxox" ]
        then
        [
          'functions', 'deploy', 'groups',
          '--region=us-central1',
          '--source=.',
          '--trigger-http',
          '--runtime=nodejs8',
          '--entry-point=App',
          '--allow-unauthenticated',
          '--service-account=xoxoxoxox@appspot.gserviceaccount.com'
        ]
        fi'
    dir: 'API/groups'
Where am I doing it wrong ?
From the GitHub page https://github.com/GoogleCloudPlatform/cloud-sdk-docker, the entrypoint is not set to gcloud, so you cannot specify the arguments like that.
Good practice for specifying directories is to start with /workspace.
Also, the right way to write the step is:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    id: deploy
    dir: '/workspace/API/groups'
    entrypoint: bash
    args:
      - '-c'
      - |
        if [ "$BRANCH_NAME" != "xoxoxoxox" ]
        then
          gcloud functions deploy groups \
            --region=us-central1 \
            --source=. \
            --trigger-http \
            --runtime=nodejs8 \
            --entry-point=App \
            --allow-unauthenticated \
            --service-account=xoxoxoxox@appspot.gserviceaccount.com
        fi
I'm not sure you can do this.
In my case, I use the branch selector in the Cloud Build trigger to select which branch (or tag) I want to build from a pattern.
I wanted to delete the latest version of each service only if there were more than two previous versions. This was my solution.
args:
  - "-c"
  - |
    if [[ $(gcloud app versions list --format="value(version.id)" --service=MY-SERVICE | wc -l) -ge 2 ]];
    then
      gcloud app versions list --format="value(version.id)" --sort-by="~version.createTime" --service=MY-SERVICE | tail -n -1 | xargs gcloud app versions delete --service=MY-SERVICE --quiet;
    fi

Gitlab cross-project artifact

I have 2 separate GitLab projects. I've looked through the documentation for 2 days now, but am still struggling to achieve what I'm trying to do.
I have Project A, which generates the documentation for the whole project.
Project B is a Gitlab Pages project.
My gitlab-ci.yml file for Project A has a job like this:
build_docs:
  stage: deploy
  artifacts:
    # Create archive with name of [Current Job - Current Tag]
    name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
    paths:
      - documentation/build/dokka/
  script:
    - ./gradlew assemble
    - ls $CI_PROJECT_DIR/documentation/build
    - echo "Job Name = $CI_JOB_NAME"
    - echo "Project Dir = $CI_PROJECT_DIR"
    - echo "Docs trigger key = $DOCS_TRIGGER_KEY"
    - echo "Test Unprotected Unmasked Trigger = $TEST_TRIGGER"
    - echo "Job Token = $CI_JOB_TOKEN"
    - echo "Job ID = $CI_JOB_ID"
    - echo "Triggering [Documentation Pipeline]; Artifact from ACL -> Documentation"
    - "curl -X POST -F token=${CI_JOB_TOKEN} -F ref=master https://gitlab.duethealth.com/api/v4/projects/538/trigger/pipeline"
This job triggers the following job in Project B:
get-artifacts:
  stage: get-artifacts
  script:
    - echo "I have been triggered!!"
    - echo "$CI_JOB_TOKEN"
    - echo "$CI_JOB_NAME"
    - echo "$CI_PROJECT_DIR"
    - ls $CI_PROJECT_DIR
    # List artifacts generated from the acl project
    - 'curl --globoff --header "PRIVATE-TOKEN: abc1234" "https://gitlab.duethealth.com/api/v4/projects/492/jobs"'
    # This should take artifacts from ACL and output them into --output filename
    - 'curl --location --output artifacts.zip --header "JOB-TOKEN: $CI_JOB_TOKEN" "https://gitlab.duethealth.com/api/v4/android-projects/492/jobs/63426/artifacts"'
    # - unzip build_docs-feature-inf-297-upload-kdoc-doc-mod-test.zip
    - ls $CI_PROJECT_DIR
    - file $CI_PROJECT_DIR/artifacts.zip
    - ls
  only:
    variables:
      - $CI_PIPELINE_SOURCE == "pipeline"
  tags:
    - pages
Now, in the job logs of Project A, the artifacts are uploaded successfully and I see a size of ~50000.
In the logs of Project B, after
# List artifacts generated from the acl project
I DO see the zip file artifact.
However, it seems that my curl request to GET a job's artifacts is somehow incorrect. Two things stand out in the output:
1.) The download size is much smaller than the upload: we upload artifacts of size ~50000 but then download those same artifacts at a size of ~1000.
2.) The file the artifacts should be output to is not a zip file, even though it has the .zip file extension.
It seems to me like it never actually fetches the artifacts and instead just creates some object named artifacts.zip which is not even a zip file, and I'm assuming the file size I'm seeing is just the size of the empty artifacts.zip.
Any insight would be greatly appreciated.
My problem was with the URL; after using the correct URL the problem was fixed.
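For comparison, the corrected download request presumably looked like this, with "projects" instead of "android-projects" in the path (the project and job IDs are the ones from the question):

# GitLab API: GET /projects/:id/jobs/:job_id/artifacts
curl --location --output artifacts.zip \
     --header "JOB-TOKEN: $CI_JOB_TOKEN" \
     "https://gitlab.duethealth.com/api/v4/projects/492/jobs/63426/artifacts"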

Run Google Cloud Build Command for Each Result of Array

I have an Nx workspace with multiple Angular apps included. When master is updated in my GitHub repo, I want a build to kick off. That part is easy enough with GCB's triggers. But what I want to happen is to run this command:
npm run affected:apps
on the trigger, and build a Docker image and push it to Google Container Registry for each affected app. My cloudbuild.yaml file looks like this so far:
steps:
  - name: 'gcr.io/cloud-builders/git'
    args: ['fetch', '--unshallow']
  - name: node:10.15.1
    entrypoint: npm
    args: ['run affected:apps --base=origin/master --head=HEAD']
That command returns a result like this:
> project-name#0.0.0 affected:apps /Users/username/projects/project-folder
> nx affected:apps
Note: Nx defaulted to --base=master --head=HEAD
my-app-1
I'm not sure what to do in Google Cloud Build with that result. With a node script, I could do the following to print out an array of affected apps:
const { exec } = require('child_process');

function getApps() {
  exec('npm run affected:apps', (err, out) => {
    if (err) {
      console.log(null);
    } else {
      const lines = out.split('\n');
      const apps = lines[lines.length - 2].split(' ');
      console.log(JSON.stringify(apps));
    }
  });
}

getApps();
That returns an array of the affected apps, and null on error. Still, even with that, I'm not sure what I would do for the next step in Google Cloud Build. With the results of either the command or that script, ideally I'd be able to run a docker build command like this:
docker build --file ./:loop variable:/Dockerfile
where :loop variable: is the name of an affected app. I'd like to do that for each value in the array, and do nothing if, for some reason, the command returns no affected apps.
Any ideas on how to use Google Cloud Build with Nx workspaces? Or, if you've just got Google Cloud Build experience and know what my next step should be, that'd be great.
Continuing @chinoche's comment, here is an example of how you could save the list of affected apps to an affected.txt file:
- name: 'gcr.io/cloud-builders/npm'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      IFS=' ' read -a apps <<< $(npx nx affected:apps --base=origin/master~1 --plain)
      for app in "${apps[@]}"
      do
        echo $app
      done >> affected.txt
The next step could read the file and run any other commands, e.g. build a Docker image:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      while IFS= read -r project; do
        docker build -t gcr.io/$PROJECT_ID/$project -f <path-to>/Dockerfile .
      done < affected.txt
One trick might be to create a separate cloudbuild.yaml file for each project and then trigger a new Cloud Build process for each affected project. That allows a completely different build process for each project.
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      while IFS= read -r project; do
        gcloud builds submit --config ./<path-to>/$project/project-specific-cloudbuild.yaml .
      done < affected.txt
If you are able to get the affected apps with the node script, I'd suggest writing a file with the affected apps in a Cloud Build custom step. The file will be written to the "/workspace" directory and will be available to any other custom step executed later; with this you should be able to run the docker build command. A sketch of such a step follows.
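A minimal sketch of what that writer step could look like, reusing the --plain flag and the affected.txt convention from the answer above (the node image tag is an assumption):

- name: node:10.15.1
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      # Write the affected apps, one per line, to /workspace so that
      # later build steps can read them (/workspace is shared between steps).
      npx nx affected:apps --base=origin/master --head=HEAD --plain \
        | tr ' ' '\n' > /workspace/affected.txt
      cat /workspace/affected.txt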
