I am new to CI/CD. I have created a basic React application using create-react-app and added the configuration below for CircleCI. It works fine in CircleCI without issues, but there is a lot of redundant code: the same steps are repeated in multiple places. I want to refactor this config file following best practices.
version: 2.1
orbs:
  node: circleci/node@4.7.0
jobs:
  build:
    docker:
      - image: cimg/node:17.2.0
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: npm
      - run:
          command: npm run build
          name: Build app
      - persist_to_workspace:
          root: ~/project
          paths:
            - .
  test:
    docker:
      - image: cimg/node:17.2.0
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: npm
      - run:
          command: npm run test
          name: Test app
      - persist_to_workspace:
          root: ~/project
          paths:
            - .
  eslint:
    docker:
      - image: cimg/node:17.2.0
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: npm
      - run:
          command: npm run lint
          name: Lint app
      - persist_to_workspace:
          root: ~/project
          paths:
            - .
workflows:
  on_commit:
    jobs:
      - build
      - test
      - eslint
I can see you are installing packages in multiple jobs. Take a look at the save_cache and restore_cache options.
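A minimal sketch of that idea, using a reusable 2.1 command so every job shares one cached install step. The command name, cache key, and the switch to npm ci are my own choices, not from the question, and note the node orb's install-packages step already does some caching of its own; the remaining duplication is the repeated checkout/install boilerplate:

commands:
  install_deps:
    steps:
      - checkout
      - restore_cache:
          keys:
            - deps-{{ checksum "package-lock.json" }}
      - run:
          name: Install dependencies
          command: npm ci
      - save_cache:
          key: deps-{{ checksum "package-lock.json" }}
          paths:
            - node_modules

jobs:
  build:
    docker:
      - image: cimg/node:17.2.0
    steps:
      - install_deps
      - run:
          name: Build app
          command: npm run build

The test and eslint jobs would call install_deps the same way, leaving only their own run step to differ.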
I am getting the error "Not uploading cache {name of branch} due to policy" in my GitLab runner. My .yaml file looks like this:
stages:
  - test
  - staging
  - publish
  - deploy

# cache using branch name
# https://gitlab.com/help/ci/caching/index.md
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull

before_script:
  - yarn install --cache-folder .yarn

test:
  stage: test
  image: node:14
  script:
    - yarn install
    - yarn test

pages:
  stage: staging
  image: alekzonder/puppeteer
  except:
    - master
  script:
    - yarn install
    - yarn compile
    - yarn build

publish:
  stage: publish
  image: alekzonder/puppeteer
  when: manual
  script:
    - yarn install
    - yarn compile
    - yarn build
  artifacts:
    paths:
      - dist

deploy:
  image: google/cloud-sdk:latest
  stage: deploy
  needs:
    - publish
  script:
    - gcloud auth activate-service-account --account ${GCP_SERVICE_ACCOUNT} --key-file ${GOOGLE_APPLICATION_CREDENTIALS}
    - gsutil -u test rsync -r -d dist/ gs://test.ca
I was wondering why it always fails to upload, and thereby fails to extract, the cache. Any tips or corrections welcome.
You have the following set:
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull
This sets a pipeline-global default saying you only ever want to pull from the cache (policy: pull).
You'll want to read https://docs.gitlab.com/ee/ci/yaml/#cachepolicy
If you omit the policy: key, the default is pull-push, which will get your cache uploading.
I tend to structure my pipelines a little differently than yours, though. I typically have a "prep" stage, and run the yarn install once there:
"Installing Dependencies":
image: node:lts
stage: Prep
cache:
paths:
- .yarn
- node_modules/
policy: pull-push
script:
yarn install
...
Note: you can then keep your global policy of 'pull', since this one job overrides it with pull-push.
You can then remove the yarn install from all other tasks, as the cache will be restored before each of them.
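For illustration, a downstream job might then look like this (the job name and test command are hypothetical; it relies on the global pull policy restoring .yarn and node_modules/ before the script runs):

test:
  image: node:lts
  stage: test
  script:
    # no yarn install needed; node_modules/ comes from the pulled cache
    - yarn test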
I'm writing my first Bitbucket pipeline in YAML and was wondering how to fail the pipeline when ESLint reports errors, and have it pass only when ESLint reports none.
This is how my pipeline looks so far:
image: node:14.15.1

definitions:
  caches:
    yarn: ~/.yarn

pipelines:
  branches:
    dev:
      - step:
          name: Package Install
          caches:
            - yarn
          script:
            - cd web_app
            - yarn install
      - step:
          name: lint
          caches:
            - yarn
          script:
            - cd web_app
            - yarn install
            - yarn run lint
      - step:
          name: build
          caches:
            - yarn
          script:
            - cd web_app
            - yarn install
            - unset CI
            - yarn build
Will appreciate any help!
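One note that may help here: Bitbucket Pipelines fails a step as soon as any script command exits with a non-zero status, and ESLint exits non-zero when it finds errors, so yarn run lint should already fail the lint step on errors. To fail on warnings as well, ESLint's --max-warnings flag can be passed through; a sketch against the lint step above (the flag value is illustrative):

- step:
    name: lint
    caches:
      - yarn
    script:
      - cd web_app
      - yarn install
      # exits non-zero, failing the step, on any error or warning
      - yarn run lint --max-warnings=0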
I have created a simple React app to practice GitLab's CI/CD pipeline. I have three jobs in the pipeline: first test the app, then build it, then deploy it to an AWS S3 bucket. The test passes and the production build runs, but when it reaches the deploy stage I get this error: The user-provided path build does not exist. I don't know how to pass paths between jobs in GitLab's CI/CD pipeline.
This is my .gitlab-ci.yml setup:
image: 'node:12'

stages:
  - test
  - build
  - deploy

test:
  stage: test
  script:
    - yarn install
    - yarn run test

variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  AWS_REGION: $AWS_REGION
  S3_BUCKET_NAME: $S3_BUCKET_NAME

build:
  stage: build
  only:
    - master
  script:
    - npm install
    - npm run build

deploy:
  stage: deploy
  only:
    - master
  image: python:latest
  script:
    - pip install awscli
    - aws s3 cp build/ s3://$S3_BUCKET_NAME/ --recursive --include "*"
If the build/ folder is created as part of the build stage, then it should be passed as an artifact to the deploy stage, and deploy should reference the build stage using dependencies:
build:
  stage: build
  only:
    - master
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - build/

deploy:
  stage: deploy
  only:
    - master
  image: python:latest
  dependencies:
    - build
  script:
    - pip install awscli
    - aws s3 cp build/ s3://$S3_BUCKET_NAME/ --recursive --include "*"
I have built a "deploy review" stage and deploy job in my yaml file with the following code:
deploy review:
  stage: deploy review
  only:
    - merge_requests
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh
  script:
    - npm install -g surge
    - surge --project ./public --domain https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh
When I check the pipeline on my GitLab account, I see the commit on my review branch, but I do not see the "deploy review" job running. I see the "test artifact" and "test website" jobs running.
The link to the gitlab project is https://gitlab.com/syed.r.abdullah/my-static-website/tree/review
I took the following steps:
Added "deploy review" to the yaml file
Created a new branch "review" locally
Added the change to the yaml file in review branch
Committed the change
Pushed the change to gitlab using git push -u origin review
Visited my pipelines and saw review pipeline in failed state
The jobs inside the review pipeline are "test artifact" and "test website", not "deploy review"
image: node

variables:
  STAGING_DOMAIN: crazymonk84-staging.surge.sh
  PRODUCTION_DOMAIN: crazymonk84.surge.sh

stages:
  - build
  - test
  - deploy review
  - deploy staging
  - deploy production
  - production tests
  - cache

cache:
  key: ${CI_COMMIT_REF_SLUG}
  policy: pull
  paths:
    - node_modules/

update cache:
  stage: cache
  script:
    - npm install
  only:
    - schedules
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    policy: push
    paths:
      - node_modules/

build website:
  stage: build
  only:
    - master
    - merge_requests
  except:
    - schedules
  script:
    - echo $CI_COMMIT_SHORT_SHA
    - npm install -g gatsby-cli
    - npm i xstate@4.6.4
    - gatsby build
    - sed -i "s/%%VERSION%%/$CI_COMMIT_SHORT_SHA/" ./public/index.html
  artifacts:
    paths:
      - ./public

test website:
  stage: test
  except:
    - schedules
  script:
    - npm install -g gatsby-cli
    - npm i xstate@4.6.4
    - gatsby serve &
    - sleep 3
    - curl "http://localhost:9000" | tac | tac | grep -q "Gatsby"

test artifact:
  image: alpine
  stage: test
  except:
    - schedules
  script:
    - grep -q "Gatsby" ./public/index.html
  cache: {}

deploy review:
  stage: deploy review
  only:
    - merge_requests
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh
  script:
    - npm install -g surge
    - surge --project ./public --domain https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh

deploy staging:
  stage: deploy staging
  environment:
    name: staging
    url: http://$STAGING_DOMAIN
  only:
    - master
  except:
    - schedules
  script:
    - npm install --global surge
    - surge --project ./public --domain $STAGING_DOMAIN
  cache: {}

deploy production:
  stage: deploy production
  environment:
    name: production
    url: http://$PRODUCTION_DOMAIN
  only:
    - master
  when: manual
  allow_failure: false
  except:
    - schedules
  script:
    - npm install --global surge
    - surge --project ./public --domain $PRODUCTION_DOMAIN
  cache: {}

production tests:
  image: alpine
  stage: production tests
  only:
    - master
  except:
    - schedules
  script:
    - apk add --no-cache curl
    - curl -s "https://$PRODUCTION_DOMAIN" | grep -q "Hi people"
    - curl -s "https://$PRODUCTION_DOMAIN" | grep -q "$CI_COMMIT_SHORT_SHA"
  cache: {}
I am expecting to see "deploy review" as the only job in the pipeline. However, I see "test artifact" and "test website." What can I do to fix the issue? Thanks.
I found one solution:
Add the following to the build website and deploy review stages:
only:
  - master
  - merge_requests
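Applied to the two jobs in question, that looks roughly like this (abbreviated; all other keys stay as they were):

build website:
  stage: build
  only:
    - master
    - merge_requests
  # ...rest of the job unchanged

deploy review:
  stage: deploy review
  only:
    - master
    - merge_requests
  # ...rest of the job unchanged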
I expect you watched a course by Valentin Despa. I've just come across the same problem. I wonder if he published any solution to the issue.
Basically, I'm not positive that I'm correct, but if we open this link, an explanation can be found:
https://docs.gitlab.com/ee/ci/pipelines/merge_request_pipelines.html
only:
  - merge_requests
runs the deploy review stage in detached mode. We don't have access to our environment variables, since they are protected. What I did was go to Settings -> CI/CD -> Variables and remove the tick from the protected option for both variables.
Then, if you run the pipeline again, you'll notice it throws an error on ./public.
Play his "Dynamic environments" video and notice at 4:49 that there are three green pipelines. You have only one (your pipeline is detached), meaning the build website stage hasn't run. That's why you see the error relating to ./public: your pipeline knows nothing about Gatsby. We need to install Gatsby first and then build.
deploy review:
  stage: deploy review
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://yourdomain-$CI_ENVIRONMENT_SLUG.surge.sh
  only:
    - merge_requests
  script:
    - npm install --silent
    - npm install -g gatsby-cli
    - gatsby build
    - npm install --global surge
    - surge --project ./public --domain yourdomain-$CI_ENVIRONMENT_SLUG.surge.sh
  artifacts:
    paths:
      - ./public
Hi everyone. I need some help with some issues I am facing while configuring CircleCI for my Angular project.
The config.yml I am using for the build and deploy process is detailed below. Currently I have decided to create a separate job for each environment, and each one includes the build and the deploy. The problem with this approach is that I am repeating myself, and I can't find the correct way to deploy an artifact built in a previous job of the same workflow.
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8-browsers
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
  deploy_prod:
    docker:
      - image: circleci/node:8-browsers
    environment:
      - FIREBASE_TOKEN: "1/AFF2414141ASdASDAKDA4141421sxscq"
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
      - run:
          command: ./node_modules/.bin/firebase use default
      - deploy:
          command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_TOKEN
  deploy_qa:
    docker:
      - image: circleci/node:8-browsers
    environment:
      - FIREBASE_TOKEN: "1/AFF2414141ASdASDAKDA4141421sxscq"
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
      - run:
          command: ./node_modules/.bin/firebase use qa
      - deploy:
          command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_TOKEN
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build:
          filters:
            branches:
              only:
                - master
              ignore:
                - /feat-.*/
      - deploy_prod:
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
      - deploy_qa:
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
I understand that each job runs in its own container, so this prevents me from working in the same workspace across jobs.
Q: How can I use the same docker image for different jobs in the same workflow?
I included store_artifacts thinking it could help me, but from what I have read, artifacts are only accessible through the dashboard or the API.
Q: Am I able to recover an artifact on a job that requires a different job that stored the artifact?
I know that I am repeating myself. My goal is to have a single build job that is required by one deploy job per environment, selected by tag name, so that my deploy_{env} jobs contain mainly the firebase commands.
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build:
          filters:
            branches:
              only:
                - master
              ignore:
                - /feat-.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
      - deploy_prod:
          requires:
            - build
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
      - deploy_qa:
          requires:
            - build
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
Q: What are the recommended steps (best practices) for this solution?
Q: How can I use the same docker image for different jobs in the same workflow?
There might be two methods of going about this:
1.) Docker Layer Caching: https://circleci.com/docs/2.0/docker-layer-caching/
2.) Caching the .tar file: https://circleci.com/blog/how-to-build-a-docker-image-on-circleci-2-0/
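For the first option, a sketch of what enabling Docker Layer Caching looks like inside a job's steps (this assumes the job builds images through a remote Docker engine, which isn't the case in the config above, and DLC availability depends on your CircleCI plan):

steps:
  - checkout
  - setup_remote_docker:
      docker_layer_caching: true
  # docker build steps after this point reuse previously built layers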
Q: Am I able to recover an artifact on a job that requires a different job that stored the artifact?
The persist_to_workspace and attach_workspace keys should be helpful here: https://circleci.com/docs/2.0/workflows/#using-workspaces-to-share-data-among-jobs
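A rough sketch of that approach, adapted to the job names from the question (assumptions: dist/ is the build output, and node_modules is persisted too so the deploy job can run the local firebase binary without reinstalling):

build:
  docker:
    - image: circleci/node:8-browsers
  steps:
    - checkout
    # ...restore cache, npm install, npm run build:prod as before...
    - persist_to_workspace:
        root: .
        paths:
          - dist
          - node_modules

deploy_prod:
  docker:
    - image: circleci/node:8-browsers
  steps:
    - checkout
    - attach_workspace:
        at: .
    # dist/ and node_modules from the build job are now available here
    - run: ./node_modules/.bin/firebase use default
    - run: ./node_modules/.bin/firebase deploy --token=$FIREBASE_TOKEN

With requires: - build on the deploy jobs (as in the workflow above), each tag build then compiles once and deploys the already-built artifact.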
Q: What are the recommended steps (best practices) for this solution?
Not sure here! Whatever works the fastest and cleanest for you. :)