I'm using Yarn Workspaces in my repository, and AWS CodeBuild to build my packages. When a build starts, CodeBuild takes about 60 seconds to install all packages, and I'd like to avoid this delay by caching the node_modules folder.
When I add:
cache:
  paths:
    - 'node_modules/**/*'
to my buildspec file and enable LOCAL_CUSTOM_CACHE, I receive this error:
error An unexpected error occurred: "EEXIST: file already exists, mkdir '/codebuild/output/src637134264/src/git-codecommit.us-east-2.amazonaws.com/v1/repos/MY_REPOSITORY/node_modules/@packages/configs'".
Is there a way to avoid this error by configuring AWS CodeBuild or Yarn?
My buildspec file:
version: 0.2

phases:
  install:
    commands:
      - npm install -g yarn
      - git config --global credential.helper '!aws codecommit credential-helper $@'
      - git config --global credential.UseHttpPath true
      - yarn
  pre_build:
    commands:
      - git rev-parse HEAD
      - git pull origin master
  build:
    commands:
      - yarn run build
      - yarn run deploy
  post_build:
    commands:
      - echo 'Finished.'

cache:
  paths:
    - 'node_modules/**/*'
Thank you!
Update 1:
The folder /codebuild/output/src637134264/src/git-codecommit.us-east-2.amazonaws.com/v1/repos/MY_REPOSITORY/node_modules/@packages/configs was being created by Yarn, via the yarn command in the install phase. This folder corresponds to one of my repository packages, called @packages/configs. When I run yarn on my computer, Yarn creates folders that link my packages, as described here. This is what my node_modules structure looks like on my computer:
node_modules/
|-- ...
|-- @packages/
|   |-- configs/
|   |-- myPackageA/
|   |-- myPackageB/
|-- ...
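A workaround I am considering, if I want to keep using the local cache, is to delete the stale @packages folder from the restored cache before Yarn runs, so it can recreate the links itself. This is a sketch only; I have not verified it, and the folder name is taken from the error above:

version: 0.2

phases:
  install:
    commands:
      - npm install -g yarn
      # node_modules/@packages only holds the workspace links that Yarn
      # recreates on every install, so dropping it from the restored cache
      # should let Yarn run without hitting EEXIST (untested assumption).
      - rm -rf node_modules/@packages
      - yarn

cache:
  paths:
    - 'node_modules/**/*'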
I was having the exact same issue ("EEXIST: file already exists, mkdir"). I ended up using the S3 cache and it worked pretty well. Note: for some reason the first upload to S3 took way too long (about 10 minutes); the following ones were fine.
Before:
[5/5] Building fresh packages...
--
Done in 60.28s.
After:
[5/5] Building fresh packages...
--
Done in 6.64s.
If you already have your project configured, you can edit the cache settings by going to Project -> Edit -> Artifacts -> Additional configuration.
My buildspec.yml is as follows:
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 14
  build:
    commands:
      - yarn config set cache-folder /root/.yarn-cache
      - yarn install --frozen-lockfile
      - ...other build commands go here

cache:
  paths:
    - '/root/.yarn-cache/**/*'
    - 'node_modules/**/*'
    # This third entry is only if you're using monorepos (under the packages folder)
    # - 'packages/**/node_modules/**/*'
If you use NPM you'd do something similar, with slightly different commands:
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 14
  build:
    commands:
      - npm config -g set prefer-offline true
      - npm config -g set cache /root/.npm
      - npm ci
      - ...other build commands go here

cache:
  paths:
    - '/root/.npm/**/*'
    - 'node_modules/**/*'
    # This third entry is only if you're using monorepos (under the packages folder)
    # - 'packages/**/node_modules/**/*'
Kudos to: https://mechanicalrock.github.io/2019/02/03/monorepos-aws-codebuild.html
The problem:
Whenever I try to make a request to a Laravel API hosted with Laravel Vapor, I get a 502.
When I view the error logs in CloudWatch, the 502 appears to be returned because there is either a permissions problem with the bootstrap/cache folder, or the folder is not present for some reason after deployment.
What I did to try to fix this:
I ensured the folder permissions were correct (the cache folder has 755, and the .gitignore 644).
I ensured there was a .gitignore ignoring the entire contents of the cache folder.
I ran composer dump-autoload and composer update.
I tried deploying from the command line and from GitHub Actions.
None of the above yielded any results, and the support team tried their best but could not identify the issue either.
The solution??
So, one of my teammates noticed that we only started experiencing this issue after adding the vapor-ui package. I removed it completely from our projects, redeployed, and no more error. The API responds as expected now.
My question is, why would installing vapor-ui cause this issue?
I know that the vapor-ui project is itself a Laravel project, so could it be that there is no bootstrap/cache folder in the vapor-ui project?
Here are snippets of my vapor.yml file before removing the vapor-ui:
id: 1234
name: notreal
environments:
  production:
    domain: notreal.notreal.com
    memory: 1024
    cli-memory: 512
    runtime: docker
    network: vapor-network-123
    build:
      - 'composer install --no-dev'
      - 'php artisan vapor-ui:install'
      - 'npm ci && npm run prod && rm -rf node_modules'
  staging:
    domain: sta-notreal.notreal.com
    memory: 1024
    cli-memory: 512
    runtime: docker
    network: vapor-network-1647335449
    database: notreal-sta
    build:
      - 'composer install'
      - 'php artisan vapor-ui:install'
      - 'npm ci && npm run dev && rm -rf node_modules'
  development:
    domain: dev-notreal.notreal.com
    memory: 1024
    cli-memory: 512
    runtime: docker
    network: vapor-network-123
    database: notreal-dev
    build:
      - 'composer install'
      - 'php artisan vapor-ui:install'
      - 'npm ci && npm run prod && rm -rf node_modules'
I don't think you need this build step:
php artisan vapor-ui:install
You run it locally once, and everything is already in place when you deploy.
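For example, the production environment from the vapor.yml above would simply drop that line (a sketch; apply the same change to staging and development):

production:
  domain: notreal.notreal.com
  memory: 1024
  cli-memory: 512
  runtime: docker
  network: vapor-network-123
  build:
    # vapor-ui:install removed; run it locally and commit the published assets
    - 'composer install --no-dev'
    - 'npm ci && npm run prod && rm -rf node_modules'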
I am trying to deploy my Go 1.14 microservices application to Google's App Engine flexible environment. I read that there are issues with finding GOROOT and with obtaining the correct dependencies.
I wanted to use the flexible environment because I needed port forwarding: since my domain name is used to serve the actual application, I wanted my microservices to run on port 8081.
I followed the instructions from this link:
https://blog.cubieserver.de/2019/go-modules-with-app-engine-flexible/
I tried option 3.
This is my gitlab-ci.yaml configuration file:
# .gitlab-ci.yaml
stages:
  - build
  - deploy

build_server:
  stage: build
  image: golang:1.14
  script:
    - go mod vendor
    - go install farmcycle.us/user/farmcycle
  artifacts:
    paths:
      - vendor/

deploy_app_engine:
  stage: deploy
  image: google/cloud-sdk:270.0.0
  script:
    - echo $SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
    - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
    - gcloud --quiet --project $PROJECT_ID app deploy app.yaml
  after_script:
    - rm /tmp/$CI_PIPELINE_ID.json
This is my app.yaml configuration file:
runtime: go
env: flex

network:
  forwarded_ports:
    - 8081/tcp
When I deployed this using the GitLab CI pipeline, the Build stage passed, but the Deploy stage failed.
Running with gitlab-runner 13.4.1 (e95f89a0)
on docker-auto-scale 72989761
Preparing the "docker+machine" executor
Preparing environment
00:03
Getting source from Git repository
00:04
Downloading artifacts
00:02
Executing "step_script" stage of the job script
00:03
$ echo $SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
$ gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
Activated service account credentials for: [farmcycle-hk1996@appspot.gserviceaccount.com]
$ gcloud --quiet --project $PROJECT_ID app deploy app.yaml
ERROR: (gcloud.app.deploy) Staging command [/usr/lib/google-cloud-sdk/platform/google_appengine/go-app-stager /builds/JLiu1272/farmcycle-backend/app.yaml /builds/JLiu1272/farmcycle-backend /tmp/tmprH6xQd/tmpSIeACq] failed with return code [1].
------------------------------------ STDOUT ------------------------------------
------------------------------------ STDERR ------------------------------------
2020/10/10 20:48:27 staging for go1.11
2020/10/10 20:48:27 Staging Flex app: failed analyzing /builds/JLiu1272/farmcycle-backend: cannot find package "farmcycle.us/user/farmcycle/api" in any of:
($GOROOT not set)
/root/go/src/farmcycle.us/user/farmcycle/api (from $GOPATH)
GOPATH: /root/go
--------------------------------------------------------------------------------
Running after_script
00:01
Running after script...
$ rm /tmp/$CI_PIPELINE_ID.json
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
This was the error. Honestly, I am not really sure what this error means or how to fix it.
Surprisingly, even with the latest Go runtime (runtime: go1.15), Go modules appear not to be used. See golang-docker.
However, Flex builds your app into a container regardless of runtime, so in this case it may be better to use a custom runtime and build your own Dockerfile:
runtime: custom
env: flex
Then you get to use, for example, Go 1.15 and Go modules (without vendoring) and whatever else you'd like. For a simple main.go that uses modules, something like:
FROM golang:1.15 as build
ARG PROJECT="flex"
WORKDIR /${PROJECT}
COPY go.mod .
RUN go mod download
COPY main.go .
RUN GOOS=linux \
go build -a -installsuffix cgo \
-o /bin/server \
.
FROM gcr.io/distroless/base-debian10
COPY --from=build /bin/server /
USER 999
ENV PORT=8080
EXPOSE ${PORT}
ENTRYPOINT ["/server"]
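With a custom runtime, app.yaml only points App Engine at the Dockerfile. Assuming you keep the port-forwarding requirement from the question, it might look like the sketch below (the Dockerfile has to sit next to app.yaml):

runtime: custom
env: flex

# Carried over from the question's app.yaml; drop it if you don't need the extra port.
network:
  forwarded_ports:
    - 8081/tcp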
This ought to be possible with Google's recently announced support for buildpacks but I've not tried it.
I am working on performance tuning for a GitLab pipeline using the cache.
This is a Node.js project that uses npm for dependency management. I have put the node_modules folder into the cache for use in subsequent stages with the following setting:
build:
  stage: build
  only:
    - develop
  script:
    - npm install
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - node_modules/
Can I make the cache available to pipelines triggered later, or is the cache only accessible within a single pipeline?
If it can be accessed across multiple pipelines, can I rebuild the node_modules cache only when package.json changes?
First, put the cache at the global level. This makes sure that the jobs share the same cache.
Second, you can use cache:key:files (introduced in GitLab 12.5) to recreate the cache only when package.json changes.
cache:
  key:
    files:
      - package.json
  paths:
    - node_modules/

build:
  stage: build
  only:
    - develop
  script:
    - npm install
Further information:
https://docs.gitlab.com/ee/ci/yaml/#cachekeyfiles
Additional hints:
You might want to key on package-lock.json instead of package.json.
I recommend reading the cache mismatch chapter in the documentation to make sure you don't run into common problems where the cache might not be restored.
Instead of simply running npm install, you can also skip this step when the node_modules folder was restored from the cache. The following bash addition will only run npm install if the node_modules folder doesn't exist:
build:
  stage: build
  only:
    - develop
  script:
    - if [ ! -d "node_modules" ]; then npm install; fi
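Putting these hints together, a combined sketch (keying the cache on package-lock.json, as suggested above) could look like this:

# Global cache, recreated only when package-lock.json changes
cache:
  key:
    files:
      - package-lock.json
  paths:
    - node_modules/

build:
  stage: build
  only:
    - develop
  script:
    # Skip the install entirely when node_modules was restored from the cache
    - if [ ! -d "node_modules" ]; then npm install; fi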
I'm trying to set up a workflow in CircleCI for my React project.
What I want to achieve is to have one job that builds the app and another that deploys the master branch to Firebase Hosting.
This is what I have so far, after several attempts at the configuration:
witmy: &witmy
  docker:
    - image: circleci/node:7.10

version: 2
jobs:
  build:
    <<: *witmy
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "package.json" }}
            - v1-dependencies-
      - run: yarn install
      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
      - run:
          name: Build app in production mode
          command: |
            yarn build
      - persist_to_workspace:
          root: .
  deploy:
    <<: *witmy
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Deploy Master to Firebase
          command: ./node_modules/.bin/firebase deploy --token=MY_TOKEN

workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
The build job always succeeds, but the deploy job fails with this error:
#!/bin/bash -eo pipefail
./node_modules/.bin/firebase deploy --token=MYTOKEN
/bin/bash: ./node_modules/.bin/firebase: No such file or directory
Exited with code 1
So, what I understand is that the deploy job is not running in the same place the build was, right?
I'm not sure how to fix that. I've read some examples they provide and tried several things, but it doesn't work. I've also read the documentation but I think it's not very clear how to configure everything... maybe I'm too dumb.
I hope you guys can help me out on this one.
Cheers!!
EDITED TO ADD MY CURRENT CONFIG USING WORKSPACES
I've added workspaces... but I'm still not able to get it working; after a lot of tries I'm getting this error:
Persisting to Workspace
The specified paths did not match any files in /home/circleci/project
Also, it's a real pain to commit and push every single change to the config file to CircleCI when I want to test it... :/
Thanks!
disclaimer: I'm a CircleCI Developer Advocate
Each job runs in its own Docker container (or VM), so the problem here is that nothing from node_modules exists in your deploy job. There are two ways to solve this:
Install Firebase and anything else you might need, on the fly, just like you do in the build job.
Utilize CircleCI Workspaces to carry over your node_modules directory from the build job to the deploy job.
In my opinion, option 2 is likely your best bet because it's more efficient.
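For option 2, a minimal sketch of the steps that change (the rest of the config stays as in the question; the build output folder name is an assumption, adjust it to whatever yarn build produces):

  build:
    <<: *witmy
    steps:
      # ...checkout, cache restore/save, yarn install, yarn build as before...
      - persist_to_workspace:
          root: .
          paths:
            - node_modules
            - build   # assumed build output folder; change to your build dir

  deploy:
    <<: *witmy
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Deploy Master to Firebase
          command: ./node_modules/.bin/firebase deploy --token=MY_TOKEN

The error "The specified paths did not match any files" comes from persist_to_workspace having nothing listed under paths, so listing the directories you actually need is the key part.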
I am facing an issue where cached files are not used in project builds. In my case, I want to download Composer dependencies in the build stage and then add them to the final project folder after all other stages succeed. I thought that if you set the cache attribute in the .gitlab-ci.yml file, it would be shared and reused in other stages as well. But this sometimes works and sometimes doesn't.
GitLab version is 9.5.4.
Here is my .gitlab-ci.yml file:
image: ponk/debian:jessie-ssh

variables:
  WEBSERVER: "user@example.com"
  WEBSERVER_DEPLOY_DIR: "/domains/example.com/web-presentation/deploy/"
  WEBSERVER_CDN_DIR: "/domains/example.com/web-presentation/cdn/"
  TEST_VENDOR: '[ "$(ls -A ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}/vendor)" ]'

cache:
  key: $CI_PIPELINE_ID
  untracked: true
  paths:
    - vendor/

before_script:

stages:
  - build
  - tests
  - deploy
  - post-deploy

Build sources:
  image: ponk/php5.6
  stage: build
  script:
    # Install composer dependencies
    - composer -n install --no-progress
  only:
    - tags
    - staging

Deploy to Webserver:
  stage: deploy
  script:
    - echo "DEPLOYING TO ... ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}"
    - ssh $WEBSERVER mkdir -p ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}
    - rsync -rzha app bin vendor www .htaccess ${WEBSERVER}:${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}
    - ssh $WEBSERVER '${TEST_VENDOR} && echo "vendor is not empty, build seems ok" || exit 1'
    - ssh $WEBSERVER [ -f ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}/vendor/autoload.php ] && echo "vendor/autoload.php exists, build seems ok" || exit 1
    - echo "DEPLOYED"
  only:
    - tags
    - staging

Post Deploy Link PRODUCTION to Webserver:
  stage: post-deploy
  script:
    - echo "BINDING PRODUCTION"
    - ssh $WEBSERVER unlink ${WEBSERVER_DEPLOY_DIR}production-latest || true
    - ssh $WEBSERVER ln -s ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA} ${WEBSERVER_DEPLOY_DIR}production-latest
    - echo "BOUNDED $CI_COMMIT_SHA -> production-latest"
    - ssh $WEBSERVER sudo service php5.6-fpm reload
  environment:
    name: production
    url: http://www.example.com
  only:
    - tags

Post Deploy Link STAGING to Webserver:
  stage: post-deploy
  script:
    - echo "BINDING STAGING"
    - ssh $WEBSERVER unlink ${WEBSERVER_DEPLOY_DIR}staging-latest || true
    - ssh $WEBSERVER ln -s ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA} ${WEBSERVER_DEPLOY_DIR}staging-latest
    - echo "BOUNDED ${CI_COMMIT_SHA} -> staging-latest"
    - ssh $WEBSERVER sudo service php5.6-fpm reload
  environment:
    name: staging
    url: http://staging.example.com
  only:
    - staging
The GitLab documentation says: cache is used to specify a list of files and directories which should be cached between jobs.
From what I understand, I've set up the cache correctly: untracked is set to true, the paths include the vendor folder, and the key is set to the pipeline ID, which should be the same across the other stages as well.
I've seen some setups that used artifacts, but unless you use it together with dependencies, it shouldn't have any effect.
I don't know what I'm doing wrong. I need to download the Composer dependencies first, so I can copy them via rsync in the next stage. Do you have any ideas or solutions? Thanks.
Artifacts should be used to permanently make available any files you may need at the end of a pipeline, for example generated binaries, files required by the next stage of the pipeline, coverage reports, or maybe even a disk image. Cache, on the other hand, should be used to speed up the build process. For example, if you are compiling a C/C++ binary, the first build usually takes a long time, but subsequent builds are faster because they don't start from scratch; if you used the cache to store the compiler's temporary files, you would speed up compilation across different pipelines.
So, to answer your question: you should use artifacts, because you need to run composer in every pipeline but want to pass the resulting files on to the next job. You do not need to explicitly define dependencies in your .gitlab-ci.yml, because when they are not defined, each job pulls all the artifacts from all previous jobs. Cache should also work, but it is less reliable and better suited to setups where it speeds things up without being a hard requirement.
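As a sketch, reusing the job name and paths from the .gitlab-ci.yml above, the global cache block would go away and the build job would publish vendor/ as an artifact instead (expire_in is optional; pick whatever retention suits you):

Build sources:
  image: ponk/php5.6
  stage: build
  script:
    # Install composer dependencies and hand them to later jobs as an artifact
    - composer -n install --no-progress
  artifacts:
    paths:
      - vendor/
    expire_in: 1 week
  only:
    - tags
    - staging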