I have configured my Cypress test suite to run in a CircleCI pipeline. The issue I am facing is that whenever I push my git branch or accept a pull request on the branch connected to the pipeline, the tests start running automatically. I don't want that; I'd prefer to run the pipeline manually from the CircleCI dashboard. Can someone guide me on how to fix this? My CircleCI YAML file is attached below.
version: 2
jobs:
  - request-testing:
      type: approval
  build:
    docker:
      - image: cypress/base:14.16.0
    environment:
      ## this enables colors in the output
      TERM: xterm
    working_directory: ~/app
    parallelism: 4
    resource_class: large
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-deps-{{ .Branch }}-{{ checksum "package.json" }}
            - v1-deps-{{ .Branch }}
            - v1-deps
      - run:
          name: Install Dependencies
          command: npm ci
      - save_cache:
          key: v1-deps-{{ .Branch }}-{{ checksum "package.json" }}
          # cache NPM modules and the folder with the Cypress binary
          paths:
            - ~/.npm
            - ~/.cache
      #run: $(npm bin)/cypress run
      - run: $(npm bin)/cypress run --parallel --record --key 4
The simple way would be to edit the CircleCI webhook in your VCS repository settings, and unselect both the "Pushes" and "Pull requests" events.
This way there won't be any webhook delivery sent to CircleCI for these types of events, and therefore no build will be triggered.
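Alternatively, if you want to keep the webhook and gate the run inside CircleCI itself, the type: approval job you already sketched belongs in a workflows section rather than under jobs. A minimal sketch (note that a pipeline is still created on each push, but build stays on hold until you approve request-testing from the dashboard):
workflows:
  version: 2
  manual-run:
    jobs:
      - request-testing:
          type: approval
      - build:
          requires:
            - request-testing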
I'm configuring CircleCI to try and cache dependencies so I don't have to run yarn install on every single commit.
This is what my config.yml file looks like:
version: 2.1
jobs:
  build-and-test-frontend:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - restore_cache:
          name: Restore Yarn Package Cache
          keys:
            - yarn-packages-{{ checksum "yarn.lock" }}
      - run:
          working_directory: ./frontend/tests
          name: Run jest tests
          command: |
            yarn install --frozen-lockfile --cache-folder ~/.
            yarn test
      - save_cache:
          name: Save Yarn Package Cache
          key: yarn-packages-{{ checksum "yarn.lock" }}
          paths:
            - ~/.cache/yarn
workflows:
  sample:
    jobs:
      - build-and-test-frontend
But when either restore_cache or save_cache attempts to run, I get the following error:
error computing cache key: template: cacheKey:1:17: executing "cacheKey" at <checksum "yarn.lock">: error calling checksum: open /home/circleci/project/yarn.lock: no such file or directory
I'm brand new to using CircleCI so I'm not sure how to interpret this. What can I do to fix this?
EDIT:
This is the structure of my directory:
--project_root
 |
 |--frontend
    |-node_modules/
    |-public/
    |-src/
    |-tests/
    |-package.json
    |-yarn.lock
It's hard for me to give a great answer since I can't see the files in your repo, but the config you have now suggests that your yarn.lock file is not in the root of the repo but rather in ./frontend/tests.
If that's where it is and that's where you want to keep it, then I'd suggest moving the working_directory key from the step level to the job level. It will then apply to every step, including the caching steps, and they should find the file they are looking for.
Update:
Thanks for the repo tree. According to that you likely want to have your config like this:
version: 2.1
workflows:
  sample:
    jobs:
      - build-and-test-frontend
jobs:
  build-and-test-frontend:
    docker:
      - image: cimg/node:14.17
    working_directory: ./frontend
    steps:
      - checkout
      - restore_cache:
          name: Restore Yarn Package Cache
          keys:
            - yarn-packages-{{ checksum "yarn.lock" }}
      - run:
          name: Run jest tests
          command: |
            yarn install --frozen-lockfile --cache-folder ~/.
            yarn test
      - save_cache:
          name: Save Yarn Package Cache
          key: yarn-packages-{{ checksum "yarn.lock" }}
          paths:
            - ~/.cache/yarn
You'll notice a few things here:
1.) I moved workflows to the top. That's just a personal stylistic choice, but I believe it helps keep your config readable as it grows.
2.) I moved working_directory to the job level instead of the step it was on.
3.) I set working_directory to the frontend directory. Most filepaths on CircleCI will be relative to the working_directory. Since that's where yarn.lock is, that's where I set it.
4.) I changed the image from circleci/node:14 to cimg/node:14.17. The images in the circleci namespace are deprecated. Going forward, you'll want to use the newer CircleCI images, which are in the cimg namespace.
I am getting the error `Not uploading cache {name of branch} due to policy` in my GitLab runner. My .yaml file looks like this:
stages:
  - test
  - staging
  - publish
  - deploy

# cache using branch name
# https://gitlab.com/help/ci/caching/index.md
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull

before_script:
  - yarn install --cache-folder .yarn

test:
  stage: test
  image: node:14
  script:
    - yarn install
    - yarn test

pages:
  stage: staging
  image: alekzonder/puppeteer
  except:
    - master
  script:
    - yarn install
    - yarn compile
    - yarn build

publish:
  stage: publish
  image: alekzonder/puppeteer
  when: manual
  script:
    - yarn install
    - yarn compile
    - yarn build
  artifacts:
    paths:
      - dist

deploy:
  image: google/cloud-sdk:latest
  stage: deploy
  needs:
    - publish
  script:
    - gcloud auth activate-service-account --account ${GCP_SERVICE_ACCOUNT} --key-file ${GOOGLE_APPLICATION_CREDENTIALS}
    - gsutil -u test rsync -r -d dist/ gs://test.ca
I was wondering why it always fails to upload, and thereby fails to extract, the cache. Any tips or corrections are welcome.
You have the following set:
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull
This sets a pipeline-global default that you only want to pull from the cache (policy: pull).
You'll want to read https://docs.gitlab.com/ee/ci/yaml/#cachepolicy
If you omit the policy: key entirely, the default is pull-push, which will get your cache uploading.
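In other words, the smallest fix to the config above is to drop the policy line from the global cache block, something like:
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  # no `policy:` line - defaults to pull-push, so jobs both restore and upload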
I tend to have my pipelines structured a little differently than yours, though. I typically have a "prep" stage that I define, and then run the yarn install once there:
"Installing Dependencies":
  image: node:lts
  stage: Prep
  cache:
    paths:
      - .yarn
      - node_modules/
    policy: pull-push
  script:
    - yarn install
...
Note: you can then leave your global policy of 'pull', since this one job has an override to pull-push.
Then you can remove the yarn install from all other tasks, as the cache will be restored.
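For example, the test job from the pipeline above could then shrink to something like this (a sketch; it assumes the global before_script install is dropped too, since the cache already provides the dependencies):
test:
  stage: test
  image: node:14
  # .yarn and node_modules/ are restored by the global `policy: pull` cache
  script:
    - yarn test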
Hello there, and thank you for reading my question; it's my first one here.
I have been working with CI/CD pipelines for a year now, and I think they are pretty nice and convenient for developing websites and the like. But in the last months I have had more and more problems creating fast, efficient and smart pipelines without redundant dependency installs or similar. I want to use as few compute resources as possible while still having fast builds, and I want to parallelize steps and use their artifacts in another, final step. Take the following GitHub Actions workflow as an example.
My goal with this workflow is simply to build a VueJS single page app and deploy it to the IBM Cloud. For that I need to install the npm dependencies and build the Vue app, and also install the IBM Cloud CLI. After these two steps are finished, the built app should be pushed to the IBM Cloud.
I could simply run all steps sequentially, like this:
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2
      - name: Use Node.js 10.X
        uses: actions/setup-node@v1
        with:
          node-version: '10.x'
      - name: Cache Node Modules
        uses: actions/cache@v2
        env:
          cache-name: cache-node-modules
        with:
          # npm cache files are stored in `~/.npm` on Linux/macOS
          path: ~/.npm
          key: ${{ runner.os }}-build-${{ env.cache-name }}-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-build-${{ env.cache-name }}-
            ${{ runner.os }}-build-
            ${{ runner.os }}-
      - name: Install Dependencies
        run: npm ci
      - name: Build Page
        run: npm run build
      - name: Install IBM Cloud CLI
        run: curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
        shell: bash
      - name: Install Cloud Foundry CLI
        run: ibmcloud cf install
        shell: bash
      - name: Authenticate with IBM Cloud CLI
        run: ibmcloud login --apikey "${{ secrets.IBM_CLOUD_API_KEY }}" --no-region -g Default
        shell: bash
      - name: Target a Cloud Foundry org and space
        run: ibmcloud target --cf-api "${{ secrets.IBM_CLOUD_CF_API }}" -o "${{ secrets.IBM_CLOUD_CF_ORG }}" -s "${{ secrets.IBM_CLOUD_CF_SPACE }}"
        shell: bash
      - name: Deploy to Cloud Foundry
        run: ibmcloud cf push
        shell: bash
But in my opinion this is very ugly and can be improved. So I tried to split the job into three parts: build, predeploy and deploy. The build job installs the dependencies and builds the Vue app. The predeploy job installs the IBM Cloud CLI. These two jobs don't depend on each other, so they can be parallelized. But the last job, deploy, depends on both, so I added the needs: [build, predeploy] value to it. That gives the following workflow:
### This will not work!
name: Deploy
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2
      - name: Use Node.js 10.X
        uses: actions/setup-node@v1
        with:
          node-version: '10.x'
      - name: Cache Node Modules
        uses: actions/cache@v2
        env:
          cache-name: cache-node-modules
        with:
          # npm cache files are stored in `~/.npm` on Linux/macOS
          path: ~/.npm
          key: ${{ runner.os }}-build-${{ env.cache-name }}-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-build-${{ env.cache-name }}-
            ${{ runner.os }}-build-
            ${{ runner.os }}-
      - name: Install Dependencies
        run: npm ci
      - name: Build Page
        run: npm run build
  predeploy:
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
    steps:
      - name: Install IBM Cloud CLI
        run: curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
      - name: Install Cloud Foundry CLI
        run: ibmcloud cf install
      - name: Authenticate with IBM Cloud CLI
        run: ibmcloud login --apikey "${{ secrets.IBM_CLOUD_API_KEY }}" --no-region -g Default
      - name: Target a Cloud Foundry org and space
        run: ibmcloud target --cf-api "${{ secrets.IBM_CLOUD_CF_API }}" -o "${{ secrets.IBM_CLOUD_CF_ORG }}" -s "${{ secrets.IBM_CLOUD_CF_SPACE }}"
  deploy:
    needs: [build, predeploy]
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to Cloud Foundry
        # Error: 'ibmcloud: command not found'
        run: ibmcloud cf push
        shell: bash
On the GUI, the workflow looks like this:
[![My GitHub Workflow on the GUI][1]][1]
But this workflow will error, since the last job doesn't share the same environment as the other jobs. I am aware that I could use the upload/download artifact feature of GitHub Actions, but that seems to me like using a lot of resources. I don't want to spend a lot of resources on my pipeline, and I don't need a lot of different virtual environments or build matrices. (I know they are very good for large projects, but they seem a little overkill for my little site.)
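For reference, the artifact hand-off mentioned above would look roughly like this (the step names are my own; actions/upload-artifact and actions/download-artifact do the transfer between runners):
# in the build job, after `npm run build`:
      - name: Upload build output
        uses: actions/upload-artifact@v2
        with:
          name: dist
          path: dist
# in the deploy job, before `ibmcloud cf push`:
      - name: Download build output
        uses: actions/download-artifact@v2
        with:
          name: dist
          path: dist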
So here are my two final questions:
Why is parallelism in CI/CD often complicated and not straightforward?
How can I improve my current pipeline with parallelism and without redundant executions?
I'd be glad for any helpful advice or link. Thank you. :)
[1]: https://i.stack.imgur.com/qEqLs.png
I think your original workflow was already pretty efficient. As you mentioned, different jobs are executed on different runners, and sometimes the additional complexity and effort put into the synchronization/logic between jobs outweighs the benefits of parallelism. In your case I don't think it would make much sense to run your jobs in parallel.
For your first question, I don't think it's an issue specific to CI/CD pipelines. I am getting a bit out of scope here, but you have similar issues in any code that does work in parallel, or for that matter in any work at all that is done in parallel anywhere. Be it factories, teams, code or CI pipelines: as soon as the work is split up, there has to be some sort of mechanism to manage the allocation of work and track its progress, which makes it more complex.
Why GitHub workflows might seem less straightforward than other systems is perhaps a better question, and I think it comes down to how long they have been around. Workflows are a pretty recent addition to GitHub, and as new features are progressively added they get easier and easier to work with.
Regarding other optimizations for your workflow, I would recommend trying to avoid redoing the same work every time the workflow runs if it's not needed. You already do this with the cache action for npm. But you could, for example, build a Docker image, or even better an action, with your IBM CLI in it and remove the predeploy job entirely, simply having:
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2
      - name: Use Node.js 10.X
        uses: actions/setup-node@v1
        with:
          node-version: '10.x'
      - name: Cache Node Modules
        uses: actions/cache@v2
        env:
          cache-name: cache-node-modules
        with:
          # npm cache files are stored in `~/.npm` on Linux/macOS
          path: ~/.npm
          key: ${{ runner.os }}-build-${{ env.cache-name }}-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-build-${{ env.cache-name }}-
            ${{ runner.os }}-build-
            ${{ runner.os }}-
      - name: Install Dependencies
        run: npm ci
      - name: Build Page
        run: npm run build
      - name: Deploy to Cloud Foundry
        # placeholder reference to the custom action suggested above
        uses: my-org/my-action@v1
        with:
          api-key: ${{ secrets.IBM_CLOUD_API_KEY }}
          cf-api: ${{ secrets.IBM_CLOUD_CF_API }}
          cf-org: ${{ secrets.IBM_CLOUD_CF_ORG }}
          cf-space: ${{ secrets.IBM_CLOUD_CF_SPACE }}
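Such an action could be, for instance, a composite action wrapping the IBM Cloud CLI steps from the predeploy job. A rough, hypothetical sketch of its action.yml (the input names simply mirror the placeholder usage above):
# action.yml of a hypothetical custom deploy action
name: 'Deploy to Cloud Foundry'
description: 'Installs the IBM Cloud CLI and pushes the app (sketch)'
inputs:
  api-key:
    description: 'IBM Cloud API key'
    required: true
  cf-api:
    description: 'Cloud Foundry API endpoint'
    required: true
  cf-org:
    description: 'Cloud Foundry organization'
    required: true
  cf-space:
    description: 'Cloud Foundry space'
    required: true
runs:
  using: 'composite'
  steps:
    - run: curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
      shell: bash
    - run: ibmcloud cf install
      shell: bash
    - run: ibmcloud login --apikey "${{ inputs.api-key }}" --no-region -g Default
      shell: bash
    - run: ibmcloud target --cf-api "${{ inputs.cf-api }}" -o "${{ inputs.cf-org }}" -s "${{ inputs.cf-space }}"
      shell: bash
    - run: ibmcloud cf push
      shell: bash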
New to the world of CircleCI and cannot seem to get anything apart from the first job to run.
I've tried all sorts of things, from removing line breaks to renaming the job to "test" to swapping the order of the first and second jobs, but nothing works.
Is there something I need to change in the project config such as defining jobs ahead of time?
.circleci/config.yml
version: 2.0
jobs:
  build:
    docker:
      # specify the version you desire here
      - image: circleci/ruby:2.5.0-node-browsers
      # Specify service dependencies here if necessary
      # CircleCI maintains a library of pre-built images
      # documented at https://circleci.com/docs/2.0/circleci-images/
      # - image: circleci/postgres:9.4
    working_directory: ~/repo
    steps:
      - checkout
      # Download and cache dependencies
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "Gemfile.lock" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run:
          name: install bundler
          command: gem install bundler
      - run:
          name: install dependencies
          command: |
            bundle install --jobs=4 --retry=3 --path vendor/bundle
      - save_cache:
          paths:
            - ./vendor/bundle
          key: v1-dependencies-{{ checksum "Gemfile.lock" }}
  precompile_assets:
    machine: true
    working_directory: ~/repo
    steps:
      - run:
          name: Precompile assets for public folder
          command: rails assets:precompile
Disclaimer: Developer Evangelist at CircleCI.
You're missing the Workflows section of the config, the part that tells CircleCI which jobs are available and how to run them:
https://circleci.com/docs/2.0/workflows/
Also, the precompile_assets job will fail because it doesn't have any files from your repo. You'd need a checkout step, or to utilize workspaces (also covered in the link above), in order to provide it with files.
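For the config above, a minimal sketch of that section could look like this (running the jobs in sequence, rather than in parallel, is an assumption):
workflows:
  version: 2
  build_and_precompile:
    jobs:
      - build
      - precompile_assets:
          requires:
            - build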
Hello everyone. I need some help with some issues that I am facing while configuring CircleCI for my Angular project.
The config.yml that I am using for the build and deploy process is detailed below. Currently I have decided to use separate jobs for each environment, and each one includes the build and the deploy. The problem with this approach is that I am repeating myself, and I can't find the correct way to deploy an artifact built in a previous job of the same workflow.
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8-browsers
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
  deploy_prod:
    docker:
      - image: circleci/node:8-browsers
    environment:
      - FIREBASE_TOKEN: "1/AFF2414141ASdASDAKDA4141421sxscq"
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
      - run:
          command: ./node_modules/.bin/firebase use default
      - deploy:
          command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_TOKEN
  deploy_qa:
    docker:
      - image: circleci/node:8-browsers
    environment:
      - FIREBASE_TOKEN: "1/AFF2414141ASdASDAKDA4141421sxscq"
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
      - run:
          command: ./node_modules/.bin/firebase use qa
      - deploy:
          command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_TOKEN
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build:
          filters:
            branches:
              only:
                - master
              ignore:
                - /feat-.*/
      - deploy_prod:
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
      - deploy_qa:
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
I understand that each job is using a different docker image, so this prevents me from working on the same workspace.
Q: How can I use the same docker image for different jobs in the same workflow?
I included store_artifacts thinking it could help me, but what I read about it is that it only works for access through the dashboard or the API.
Q: Am I able to recover an artifact in a job that requires a different job that stored the artifact?
I know that I am repeating myself; my goal is to have a build job required by a deploy job per environment, depending on the tag's name. So my deploy_{env} jobs are mainly the firebase commands.
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build:
          filters:
            branches:
              only:
                - master
              ignore:
                - /feat-.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
      - deploy_prod:
          requires:
            - build
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
      - deploy_qa:
          requires:
            - build
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
Q: What are the recommended steps (best practices) for this solution?
Q: How can I use the same docker image for different jobs in the same workflow?
There might be two methods of going about this:
1.) Docker Layer Caching: https://circleci.com/docs/2.0/docker-layer-caching/
2.) Caching the .tar file: https://circleci.com/blog/how-to-build-a-docker-image-on-circleci-2-0/
Q: Am I able to recover an artifact on a job that requires a different job that stored the artifact?
The persist_to_workspace and attach_workspace keys should be helpful here: https://circleci.com/docs/2.0/workflows/#using-workspaces-to-share-data-among-jobs
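For example, the build job above could persist its dist folder and the deploy jobs could attach it in place of their install-and-rebuild steps. A minimal sketch (the dist path comes from your config; using the project root as the workspace root is an assumption):
# in the build job, after `npm run build:prod`:
- persist_to_workspace:
    root: .
    paths:
      - dist
# in deploy_prod / deploy_qa, before the firebase commands:
- attach_workspace:
    at: .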
Q: What are the recommended steps (best practices) for this solution?
Not sure here! Whatever works the fastest and cleanest for you. :)