Using GitHub Actions cache to avoid installing Composer vendor - yaml

I have a project which is using GitHub Actions to build and deploy code. The project is a WordPress theme based on Sage (it doesn't really matter, but FYI). It uses node_modules and Composer vendor packages.
My deployment workflow is composed of 3 jobs:
installing node_modules and building resources (JS, CSS, etc.). I'm using a cache for node_modules, and it creates a /public folder which is uploaded as an artifact
installing the Composer vendor folder. I'm using a cache on vendors, which is faster.
deploying through rsync
I'm getting the /public folder in the 3rd job because it's an artifact, but I'm struggling to deploy the /vendor folder, because it doesn't exist in the 3rd job. I could use artifacts to send /vendor from the 2nd to the 3rd job, but it's a bit slow.
Do you know if there is a way to use the cache to get the /vendor folder in the 3rd job? Do I need to re-run composer install in the last job in order to get this /vendor folder? If so, my 2nd job is a bit useless, isn't it?
Thank you for your help!

Caches can be used across multiple jobs as long as they're in the same workflow - you won't need to run composer install in both steps. Artifacts are only needed if you need to use the folder/files outside of your workflow.
As long as the first job will always run, you can use the same cache key for both steps and make the assumption that the cache will always exist during the third job.
Something like this would work for you. In a "normal" run (the workflow hasn't already been run for this github.sha), the first cache step will miss and composer install will run. Once job-two is finished, everything in /vendor will be cached with github.sha as the cache key. job-three will pick that /vendor folder up from the cache so you can use it in the rest of the steps.
jobs:
  job-two:
    runs-on: ubuntu-latest   # runner type; adjust as needed
    steps:
      - name: Cache vendor folder
        uses: actions/cache@v3
        with:
          path: /vendor
          key: ${{ github.sha }}
      - name: Build vendor folder
        run: composer install
  job-three:
    runs-on: ubuntu-latest
    steps:
      - name: Retrieve cached vendor folder
        uses: actions/cache@v3
        with:
          path: /vendor
          key: ${{ github.sha }}
Hard to get it perfect without seeing your workflow file - you might have to tweak some of the paths.
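One detail worth making explicit, as a minimal sketch assuming the job names above: if job-three could start before job-two has finished, the cache restore would miss, so you may want to declare the ordering with needs:

jobs:
  job-three:
    needs: job-two           # don't start until job-two has populated the cache
    runs-on: ubuntu-latest
    steps:
      - name: Retrieve cached vendor folder
        uses: actions/cache@v3
        with:
          path: /vendor
          key: ${{ github.sha }}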

Related

Gatsby doesn't update theme when modified in node_modules

I'm developing my own Gatsby.js theme (actually I've forked and modified another theme, then created a new npm package for it). When I change any theme file in node_modules, for example footer.js, I don't see any changes while running gatsby develop until I delete the cache with gatsby clean. In the past (2 years ago), when I was making the first changes to my npm module, everything was updating as it should. I must also mention that I've cleaned node_modules and updated all dependencies with yarn to the latest available versions.
For example, I'm making this change:
<p className="text-lead"><b>Last modified</b> {lastUpdate}</p>
to
<p className="text-lead"><b>Last change</b> {lastUpdate}</p>
Then gatsby develop detects change:
success onPreExtractQueries - 0.004s
success extract queries from components - 0.128s
success write out requires - 0.003s
success Re-building development bundle - 0.198s
success Writing page-data.json and slice-data.json files to public directory - 0.014s - 0/1 73.40/s
But I see no change in the browser window until I run gatsby clean.
Here's part of my gatsby-config.js at root project folder:
...
plugins: [
  {
    resolve: "@arturthemaslov/gatsby-theme-intro-maslov",
    options: {
      basePath: pathPrefix,
      contentPath: "content/",
      showThemeLogo: false,
      theme: "gh-inspired",
    },
  },
...
Also, I've noticed this warning when running gatsby develop:
warn The @arturthemaslov/gatsby-theme-intro-maslov plugin has generated no Gatsby nodes. Do you need it? This could also suggest the plugin is misconfigured.
Maybe that has something to do with this problem? I've also tried ghosting parts of the theme plugin by putting theme files into the root src folder, but no luck.
The reason why it isn't working is that you shouldn't be modifying anything in the node_modules directory, and when you:
I must also mention that I've cleaned node_modules and updated all
dependencies with yarn to the latest available versions.
you just reverted or updated every dependency within the node_modules directory. If you updated to the latest versions, you need to go through every dependency and check for conflicts, which you likely have.
Do note that you're also using a theme built against Gatsby "^2.20.12", while Gatsby is now on version "^5.2.0".
You also mentioned in the comments that you've updated the npm package while the repository source code is a few years old. I don't think this is a good approach; you should look at building a release and using the GitHub Action NPM Publish.
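As a minimal sketch of that release route (the workflow file name, Node version, and NPM_TOKEN secret are assumptions; the dedicated npm-publish action mentioned above would work similarly):

# Hypothetical .github/workflows/publish.yml in the theme repository
name: Publish theme to npm
on:
  release:
    types: [published]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
          registry-url: https://registry.npmjs.org
      - run: yarn install --frozen-lockfile
      - run: npm publish --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}   # assumes an NPM_TOKEN repository secret

The consuming site would then depend on the published version instead of hand-edited files in node_modules.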

GitLab working directory not clean when using cache with GIT_STRATEGY: none

I have a GitLab pipeline set up that has a package step to do a Maven build during the tag event, and a release step to upload the jar to the GitLab generic package registry using curl and the GitLab release-cli.
What I'm expecting to happen is a cache of the .m2 directory being loaded into the package step to allow mvn clean package to do its thing, and then only the created jar and test results being archived.
The release step should begin clean: no git clone, no cache, and only the jar and test results.
Instead, 'find .' shows the release step contains everything, including:
the Git directory (.git)
the full checked-out repository
the .m2 cache
target (fully built, as the package step produced)
From the GitLab cache documentation (https://docs.gitlab.com/ee/ci/caching/):
Archive: the 'dependencies' keyword controls which job fetches the artifacts
Disable cache: use 'cache: []'
Why is GitLab putting so much content into the release job? The release job fails at times because it's finding multiple jar files from previous tags (i.e. the clean and the archiving are holding past versions).
gitlab-ci.yml
variables:
  MAVEN_CLI_OPTS: "-s $CI_PROJECT_DIR/.m2/settings.xml"
  MAVEN_VERSION_PLUGIN_VERSION: 2.11.0
  MAVEN_ARTIFACT_NAME: test-component
  GIT_CLEAN_FLAGS: -ffd
  PACKAGE_REGISTRY_URL: "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/${MAVEN_ARTIFACT_NAME}"

cache:
  key: primary
  paths:
    - .m2/repository

stages:
  - package
  - release

package:
  stage: package
  image: maven:latest
  script:
    - mvn ${MAVEN_CLI_OPTS} clean package
  artifacts:
    paths:
      - target/*.jar
      - target/surefire-reports
  only:
    - tags
    - merge_requests
    - branches
  except:
    - main

release:
  stage: release
  image: alpine:latest
  cache: []
  variables:
    GIT_STRATEGY: none
  dependencies:
    - package
  script:
    - |
      apk add curl gitlab-release-cli
      find .
      JAR_NAME=`basename target/${MAVEN_ARTIFACT_NAME}-${CI_COMMIT_TAG}.jar`
      'curl --header "JOB-TOKEN: ${CI_JOB_TOKEN}" --upload-file target/${JAR_NAME} ${PACKAGE_REGISTRY_URL}/${CI_COMMIT_TAG}/${JAR_NAME}'
      release-cli create --name "Release $CI_COMMIT_TAG" --description "$TAG_MESSAGE" --tag-name ${CI_COMMIT_TAG} --assets-link "{\"name\":\"jar\",\"url\":\"${PACKAGE_REGISTRY_URL}/${CI_COMMIT_TAG}/${JAR_NAME}\"}"
  only:
    - tags
See the GitLab docs on GIT_STRATEGY:
A Git strategy of none also re-uses the local working copy, but skips all Git operations normally done by GitLab. GitLab Runner pre-clone scripts are also skipped, if present. This strategy could mean you need to add fetch and checkout commands to your .gitlab-ci.yml script.
It can be used for jobs that operate exclusively on artifacts, like a deployment job. Git repository data may be present, but it’s likely out of date. You should only rely on files brought into the local working copy from cache or artifacts.
So the GitLab documentation is pretty clear that you should always expect the git repository data to be present. When you want to work exclusively with artifacts, you can create a new temporary directory and reference the path to the artifacts explicitly rather than relying on a totally clean working directory.
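As a minimal sketch of that idea (the scratch directory name and jar naming are assumptions built from the variables in the question):

release:
  stage: release
  image: alpine:latest
  cache: []
  variables:
    GIT_STRATEGY: none
  dependencies:
    - package
  script:
    - apk add curl
    - mkdir -p /tmp/release-work
    # copy only this tag's jar, ignoring whatever else is lying around in the working copy
    - cp target/${MAVEN_ARTIFACT_NAME}-${CI_COMMIT_TAG}.jar /tmp/release-work/
    - curl --header "JOB-TOKEN: ${CI_JOB_TOKEN}" --upload-file "/tmp/release-work/${MAVEN_ARTIFACT_NAME}-${CI_COMMIT_TAG}.jar" "${PACKAGE_REGISTRY_URL}/${CI_COMMIT_TAG}/${MAVEN_ARTIFACT_NAME}-${CI_COMMIT_TAG}.jar"
  only:
    - tags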

How to `yarn run start` providing custom modules location

Here is my problem: I constructed a Dockerfile launching yarn install from a folder where a package.json and yarn.lock are present (they have been taken from the project I have to set up yarn dependencies for; this project sits on a disconnected, offline server).
Then I ran bash in the container image, extracted the created node_modules folder, and put it into the disconnected server, at the project's root folder.
But then, when I launched yarn start from the root folder, it said that it cannot find rescripts, despite the fact that the @rescripts folder is present in node_modules.
I tried NODE_PATH=./node_modules yarn start without any success.
Thanks a lot for your help.
Regards
I think I got what I want with:
https://yarnpkg.com/blog/2016/11/24/offline-mirror/
I created a cache folder with all the tar.gz dependencies downloaded.
But if I remove node_modules and yarn.lock, and run yarn install --offline --cache-folder npm-packages-offline-cache/, I get an error saying it could not find the proper dependency in the cache folder. It's like it cannot recognize any tar.gz inside. Any help will be welcome.
Regards
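For reference, the offline-mirror setup from the linked post relies on a .yarnrc at the project root; a minimal sketch, assuming the npm-packages-offline-cache/ folder name used above:

# .yarnrc (Yarn v1) - the mirror is populated on a connected machine with a normal
# `yarn install`, then the folder and the resulting yarn.lock are copied to the offline server together
yarn-offline-mirror "./npm-packages-offline-cache"
yarn-offline-mirror-pruning true

On the offline machine, yarn install --offline should then resolve every tarball from that folder, as long as yarn.lock was generated against the same mirror.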

Google Cloud Container Builder - Build Docker container from Go source with vendored dependencies

Background
Related question: Google Container Builder: How to install govendor dependencies during build step?
I am trying to use Google Cloud Container Builder to automate the building of my Docker containers using Build Triggers.
My code is in Go, and I have a vendor folder (checked in to Git) in my project root which contains all of my Go dependencies.
My project has four binaries that need to be Dockerized, structured as follows:
vendor/
  ...
program1/
  program1.go
  main/
    main.go
    Dockerfile
program2/
  program2.go
  main/
    main.go
    Dockerfile
...
Each program's Dockerfile is simple:
FROM alpine
ADD main main
ENTRYPOINT ["/main"]
I have a Build Trigger set up to track my master branch. The trigger runs the following build request (cloudbuild.yaml), which makes use of the open sourced Docker build step:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/program1:0.1.15-$SHORT_SHA', '.']
  dir: 'program1/main'
... (repeated for each program)
(images, tags omitted)
To summarize, my current build process is as follows:
Edit code.
Build each Go executable using go build. The executable is called main, and is saved in the programX/main/ directory, alongside main.go.
Commit and push code (since the main executables are tracked by Git) to my master branch.
Build Trigger makes four Docker images using the main file built in step 1.
Goal
I would like to eliminate Step 1 from my build process, so that I no longer need to compile my executables locally, and do not need to track my main executables in Git.
In sum, here is my ideal process:
Edit code, commit, push to remote.
Build Trigger compiles all four programs, builds all four images.
Relax :)
Attempted solution
I used the open source Go build step, as follows:
cloudbuild.yaml: (updated)
steps:
- id: 'build-program1'
  name: 'gcr.io/cloud-builders/go'
  args: ['build', '-a', '-installsuffix', 'cgo', '-ldflags', '''-w''', '-o', 'main', './main.go']
  env: ['PROJECT_ROOT=/workspace', 'CGO_ENABLED=0', 'GOOS=linux']
  dir: 'program1/main'
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/program1:0.1.15-$SHORT_SHA', '.']
  dir: 'program1/main'
  waitFor: ['build-program1']
... (repeated for each program)
(images, tags omitted)
After trying various combinations of setting PROJECT_ROOT and GOPATH in the env field of the build-programX steps, I kept getting the same error for every single package used in my project (the file path varies):
cannot find package "github.com/acoshift/go-firebase-admin" in any of
Step #0 - "build-program1": /usr/local/go/src/github.com/acoshift/go-firebase-admin (from $GOROOT)
Step #0 - "build-program1": /workspace/auth/main/gopath/src/github.com/acoshift/go-firebase-admin (from $GOPATH)
It's not even looking for a vendor directory?
What next?
My guess is that one of the following is true:
I am not specifying the GOPATH/PROJECT_ROOT correctly in the build request file. But if so, what is the correct setting?
My project is not structured correctly.
What I am trying to do is impossible :(
I need to make a custom build step, somehow.
The version of Go used is old - but how can I check this?
I can find no examples online of what I am trying to achieve, and I find GCP's documentation on this subject quite lacking.
Any help would be greatly appreciated!
The issue is with #1: PROJECT_ROOT refers to the desired import path of your binaries. For example, if in program1/main/main.go you import "github.com/foo/bar/program1" to get the package defined in program1/program1.go, you'd set PROJECT_ROOT=github.com/foo/bar.
Fixed the problem (but not exactly sure how...), thanks to these changes:
Set PROJECT_ROOT to my_root, such that the code I want to compile sits at my_root/program1/main/main.go (thanks to John Asmuth for his answer: https://stackoverflow.com/a/46526875/6905609)
Remove the dirs field for the Go build step
Set the -o flag to ./program1/main/main, and the final build arg to ./program1/main/main.go
Previously I was cding into the program1/main directory during the build step, and for some reason go build was looking for packages within my_root/program1/main instead of my_root. Weird!
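Putting those changes together, a minimal sketch of the working step (my_root is a placeholder for the real import root):

# Hypothetical cloudbuild.yaml step after the fix: no dir field, PROJECT_ROOT set to
# the import root, and both the -o output path and the source path relative to the repo root.
steps:
- id: 'build-program1'
  name: 'gcr.io/cloud-builders/go'
  env: ['PROJECT_ROOT=my_root', 'CGO_ENABLED=0', 'GOOS=linux']
  args: ['build', '-a', '-installsuffix', 'cgo', '-ldflags', '-w', '-o', './program1/main/main', './program1/main/main.go']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/program1:0.1.15-$SHORT_SHA', '.']
  dir: 'program1/main'
  waitFor: ['build-program1']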

Gitlab-CI how to use artifacts in different pipeline

Currently, I have two main projects.
1) A Vue project, which contains webviews for iOS and Android, websites, and the renderer for our Electron app; they share components and APIs.
2) An Electron project, which builds the desktop app for Windows, darwin, and Linux.
I would like to automate our building and releasing process. My current setup:
before_script:
  - apt-get update
  - apt-get install zip unzip
  - rm -rf vue-project
  - git clone vue-project
  - cd vue-project
  - git checkout dev
  - git pull
  - sed -i "/\b\(areaCode\|inline-svg-loader\)\b/d" ./packages/devtool/package.json
  - yarn install
  - ln -s vue-project/packages/desktop/ web
  - npm install

build_darwin:
  stage: build
  script:
    - npm run package -- darwin --deploy
  cache:
    paths:
      - vue-project/node_modules
      - node_modules
which basically, before bundling the Electron project, clones vue-project, installs dependencies, and bundles the Electron renderer; when that's finished, I'm running the package step.
I would like to separate these two jobs from each other. Is there any way I could use artifacts from a different project's GitLab CI pipelines?
Any help would be appreciated.
GitLab has an API for doing a lot of tricks:
curl --header "PRIVATE-TOKEN:YOURPRIVATETOKEN" "https://gitlab.example.com/api/v4/projects/1/jobs/artifacts/master/download?job=test"
And to download it as a file:
curl --header "PRIVATE-TOKEN:YOURPRIVATETOKEN" -o artifacts.zip "http://gitlab.example.net/api/v4/projects/<projectnumber>/jobs/artifacts/master/download?job=build_desktop"
Gitlab can certainly support this. To accomplish this, follow these steps:
ARTIFACT GENERATION
In your Vue Project, modify your job(s) of interest to store artifacts relevant to the Electron project. Each job's artifacts are defined using GitLab Job Artifacts notation, are uploaded to GitLab at job completion, and are stored associated with your Project, Branch, and Job.
Note: Branch is often overlooked, and matters when you want to retrieve your artifacts, more on this later.
Illustrating:
Vue Project .gitlab-ci.yml
stages:
  - stage1
  - ...

vue-job1:
  stage: stage1
  script:
    - echo "vue-job1 artifact1" > ./artifact1
    - echo "vue-job1 artifact2" > ./artifact2
  artifacts:
    when: always
    paths:
      - ./artifact1
      - ./artifact2
    expire_in: 90 days

vue-job2:
  stage: stage1
  script:
    # error, would overwrite job1's artifacts since the working
    # directory is a global space shared by all pipeline jobs
    # - echo "vue-job2 artifact1" > ./artifact1
    - echo "vue-job2 artifact1" > ./artifact3
  artifacts:
    when: always
    paths:
      - ./artifact3
    expire_in: 90 days
The artifacts generated above are written to the working directory, which is a clone of your project's repo. So be careful with filename conflicts. To be safe, put your artifacts in a subdirectory (eg: cat "foo" > ./subdir/artifact) and reference them in paths the same way (paths: - ./subdir/artifact). You can use 'ls' in your script to view the working directory.
When your job completes, you can confirm the artifacts stored in GitLab by using the GitLab UI. View the job output, and use the Browse button under Job Artifacts on the right panel.
ARTIFACT RETRIEVAL
In your Electron Project, modify your job(s) of interest to retrieve artifacts stored in the Vue Project using the Gitlab Job Artifacts API and curl. In order to access the Vue artifacts, you will need the Vue Project, Branch, and Job that the artifacts were created under.
Project: For Project, use the Project ID displayed in the Gitlab UI Project Details screen.
Branch: Usually master, but depends on the branch your pipeline executes against. Although this is not relevant for your problem, if you are generating and consuming artifacts across executions of the same pipeline, use the Gitlab variable $CI_COMMIT_BRANCH for the branch.
Job: Generally the Job Name that generated the artifacts for your Project. But if you need artifacts produced by a specific Job, then use the Job Number and the corresponding retrieval API.
Illustrating:
Electron Project .gitlab-ci.yml
stages:
  - stage1
  - ...

electron-job1:
  stage: stage1
  script:
    - curl -o ./artifact1 -H "PRIVATE-TOKEN:$TOKEN" https://gitlab.example.com/api/v4/projects/$VUE_PROJECT_ID/jobs/artifacts/$BRANCH/raw/artifact1?job=vue-job1
    - curl -o ./artifact2 -H "PRIVATE-TOKEN:$TOKEN" https://gitlab.example.com/api/v4/projects/$VUE_PROJECT_ID/jobs/artifacts/$BRANCH/raw/artifact2?job=vue-job1
    - curl -o ./artifact3 -H "PRIVATE-TOKEN:$TOKEN" https://gitlab.example.com/api/v4/projects/$VUE_PROJECT_ID/jobs/artifacts/$BRANCH/raw/artifact3?job=vue-job2
This script retrieves artifacts individually to the working directory of your Electron Project. There are also options for retrieving all artifacts for your job at once as a zip archive.
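For example (a sketch using the same placeholder variables as above), the download endpoint returns everything a job uploaded as a single zip archive:

    # Hypothetical one-shot retrieval of all of vue-job1's artifacts as a zip
    - curl -o vue-artifacts.zip -H "PRIVATE-TOKEN:$TOKEN" "https://gitlab.example.com/api/v4/projects/$VUE_PROJECT_ID/jobs/artifacts/$BRANCH/download?job=vue-job1"
    - unzip -o vue-artifacts.zip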
MISCELLANEOUS
Although this is not part of the problem posed, it is worth noting that you can also use artifacts within the lifespan of a single pipeline execution to pass information between jobs, and to pass information across pipeline executions within the same project.
With recent versions of GitLab, this can be achieved simply by using the multi-project pipeline feature (starting a pipeline in one project then triggers a build in the other project): see the documentation.
Or you can use the "needs:project" mechanism, which allows one job to download artifacts from pipelines in other projects (see the documentation: "Use needs:project to download artifacts from up to five jobs in other pipelines").
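A minimal sketch of the needs:project form (the project path group/vue-project and the ref are placeholders; vue-job1 is the producing job from the earlier example):

# Hypothetical job in the Electron project's .gitlab-ci.yml
electron-build:
  stage: stage1
  needs:
    - project: group/vue-project
      job: vue-job1
      ref: master
      artifacts: true
  script:
    - ls ./artifact1 ./artifact2   # vue-job1's artifacts are downloaded into the working directory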
