How to fail my Bitbucket pipeline when ESLint logs errors - YAML

I'm writing my first Bitbucket pipeline in YAML and was wondering how to make the pipeline fail when the ESLint step reports any errors, and pass only when ESLint logs no errors.
This is how my pipeline looks so far:
image: node:14.15.1
definitions:
  caches:
    yarn: ~/.yarn
pipelines:
  branches:
    dev:
      - step:
          name: Package Install
          caches:
            - yarn
          script:
            - cd web_app
            - yarn install
      - step:
          name: lint
          caches:
            - yarn
          script:
            - cd web_app
            - yarn install
            - yarn run lint
      - step:
          name: build
          caches:
            - yarn
          script:
            - cd web_app
            - yarn install
            - unset CI
            - yarn build
Will appreciate any help!
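Bitbucket Pipelines fails a step as soon as any script command exits with a non-zero status, and eslint exits non-zero whenever it finds at least one error, so yarn run lint should already fail the lint step on errors and pass when none are logged. As a sketch, assuming the package's lint script invokes eslint directly, adding --max-warnings=0 also fails the run when only warnings are reported:

- step:
    name: lint
    caches:
      - yarn
    script:
      - cd web_app
      - yarn install
      # fails the step if eslint reports any error, or any warning at all
      - yarn run lint --max-warnings=0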

Related

Refactor CircleCI config.yml file for ReactJS

I am new to CI/CD. I have created a basic React application using create-react-app and added the configuration below for CircleCI. It works fine in CircleCI without issues, but there is a lot of redundant code: the same steps are repeated in multiple places. I want to refactor this config file following best practices.
version: 2.1
orbs:
  node: circleci/node@4.7.0
jobs:
  build:
    docker:
      - image: cimg/node:17.2.0
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: npm
      - run:
          command: npm run build
          name: Build app
      - persist_to_workspace:
          root: ~/project
          paths:
            - .
  test:
    docker:
      - image: cimg/node:17.2.0
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: npm
      - run:
          command: npm run test
          name: Test app
      - persist_to_workspace:
          root: ~/project
          paths:
            - .
  eslint:
    docker:
      - image: cimg/node:17.2.0
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: npm
      - run:
          command: npm run lint
          name: Lint app
      - persist_to_workspace:
          root: ~/project
          paths:
            - .
workflows:
  on_commit:
    jobs:
      - build
      - test
      - eslint
I can see you are installing packages in multiple jobs. You can look into the save_cache and restore_cache options.
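A rough sketch of what that could look like in one of the jobs, assuming a package-lock.json at the project root (the cache key name is illustrative; ~/.npm is npm's cache directory):

build:
  docker:
    - image: cimg/node:17.2.0
  steps:
    - checkout
    # restore the dependency cache keyed on the lockfile
    - restore_cache:
        keys:
          - deps-{{ checksum "package-lock.json" }}
    - run:
        command: npm ci
        name: Install dependencies
    # save the npm cache directory for later runs
    - save_cache:
        key: deps-{{ checksum "package-lock.json" }}
        paths:
          - ~/.npm
    - run:
        command: npm run build
        name: Build app

The repeated checkout/install/persist steps can also be factored into a reusable commands: entry (a CircleCI 2.1 feature), so each job only lists what differs.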

GitLab cache not uploading due to policy

I am getting the error Not uploading cache {name of branch} due to policy in my GitLab runner. My YAML file looks like this:
stages:
  - test
  - staging
  - publish
  - deploy
# cache using branch name
# https://gitlab.com/help/ci/caching/index.md
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull
before_script:
  - yarn install --cache-folder .yarn
test:
  stage: test
  image: node:14
  script:
    - yarn install
    - yarn test
pages:
  stage: staging
  image: alekzonder/puppeteer
  except:
    - master
  script:
    - yarn install
    - yarn compile
    - yarn build
publish:
  stage: publish
  image: alekzonder/puppeteer
  when: manual
  script:
    - yarn install
    - yarn compile
    - yarn build
  artifacts:
    paths:
      - dist
deploy:
  image: google/cloud-sdk:latest
  stage: deploy
  needs:
    - publish
  script:
    - gcloud auth activate-service-account --account ${GCP_SERVICE_ACCOUNT} --key-file ${GOOGLE_APPLICATION_CREDENTIALS}
    - gsutil -u test rsync -r -d dist/ gs://test.ca
I was wondering why it always fails to upload, and thereby fails to extract the cache. Any tips or corrections welcome.
You have the following set:
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull
This sets a pipeline-global default that you only want to pull from the cache (policy: pull).
You'll want to read https://docs.gitlab.com/ee/ci/yaml/#cachepolicy
If you omit the policy: piece, the default is to pull-push (which will get your cache uploading).
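Given that, the smallest fix is to drop the policy: line from the global cache block and keep everything else as it is:

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  # no policy: line, so the default pull-push applies and the cache uploads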
I tend to have my pipelines structured a little differently than yours, though. I typically define a "prep" step and run the yarn install once there:
"Installing Dependencies":
image: node:lts
stage: Prep
cache:
paths:
- .yarn
- node_modules/
policy: pull-push
script:
yarn install
...
Note: Then you can leave your global policy of 'pull', since this one job will have an override to pull-push.
Then, you can remove the yarn install on all other tasks, as the cache will be restored.
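For instance, a later job can then rely on the restored cache alone (the job name and script here are illustrative):

"Unit Tests":
  image: node:lts
  stage: Test
  # no yarn install needed: node_modules/ is restored from the cache
  script:
    - yarn test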

GitLab CI/CD: The user-provided path build/ does not exist

I have created a simple React app for practicing GitLab's CI/CD pipeline. I have three jobs in the pipeline: first test the app, then build it, then deploy it to an AWS S3 bucket. The test passes and the production build runs, but when it reaches the deploy stage I get this error: The user-provided path build does not exist. I don't know how to make the path available in GitLab's CI/CD pipeline.
This is my .gitlab-ci.yml file:
image: 'node:12'
stages:
  - test
  - build
  - deploy
test:
  stage: test
  script:
    - yarn install
    - yarn run test
variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  AWS_REGION: $AWS_REGION
  S3_BUCKET_NAME: $S3_BUCKET_NAME
build:
  stage: build
  only:
    - master
  script:
    - npm install
    - npm run build
deploy:
  stage: deploy
  only:
    - master
  image: python:latest
  script:
    - pip install awscli
    - aws s3 cp build/ s3://$S3_BUCKET_NAME/ --recursive --include "*"
If the build/ folder is created as part of the build stage, then it should be passed as an artifact to the deploy stage, and deploy should reference the build stage using dependencies:
build:
  stage: build
  only:
    - master
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - build/
deploy:
  stage: deploy
  only:
    - master
  image: python:latest
  dependencies:
    - build
  script:
    - pip install awscli
    - aws s3 cp build/ s3://$S3_BUCKET_NAME/ --recursive --include "*"

How to fix a dynamic environment in GitLab?

I have built a "deploy review" stage and a deploy job in my YAML file with the following code:
deploy review:
  stage: deploy review
  only:
    - merge_requests
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh
  script:
    - npm install -g surge
    - surge --project ./public --domain https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh
When I check the pipeline on my GitLab account, I see the commit on my review branch, but I do not see the "deploy review" job running. I see the "test artifact" and "test website" jobs running.
The link to the gitlab project is https://gitlab.com/syed.r.abdullah/my-static-website/tree/review
I took the following steps:
Added "deploy review" to the yaml file
Created a new branch "review" locally
Added the change to the yaml file in review branch
Committed the change
Pushed the change to gitlab using git push -u origin review
Visited my pipelines and saw review pipeline in failed state
Jobs, inside the review pipeline are "test artifact" and "test website", not "deploy review"
image: node
variables:
  STAGING_DOMAIN: crazymonk84-staging.surge.sh
  PRODUCTION_DOMAIN: crazymonk84.surge.sh
stages:
  - build
  - test
  - deploy review
  - deploy staging
  - deploy production
  - production tests
  - cache
cache:
  key: ${CI_COMMIT_REF_SLUG}
  policy: pull
  paths:
    - node_modules/
update cache:
  stage: cache
  script:
    - npm install
  only:
    - schedules
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    policy: push
    paths:
      - node_modules/
build website:
  stage: build
  only:
    - master
    - merge_requests
  except:
    - schedules
  script:
    - echo $CI_COMMIT_SHORT_SHA
    - npm install -g gatsby-cli
    - npm i xstate@4.6.4
    - gatsby build
    - sed -i "s/%%VERSION%%/$CI_COMMIT_SHORT_SHA/" ./public/index.html
  artifacts:
    paths:
      - ./public
test website:
  stage: test
  except:
    - schedules
  script:
    - npm install -g gatsby-cli
    - npm i xstate@4.6.4
    - gatsby serve &
    - sleep 3
    - curl "http://localhost:9000" | tac | tac | grep -q "Gatsby"
test artifact:
  image: alpine
  stage: test
  except:
    - schedules
  script:
    - grep -q "Gatsby" ./public/index.html
  cache: {}
deploy review:
  stage: deploy review
  only:
    - merge_requests
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh
  script:
    - npm install -g surge
    - surge --project ./public --domain https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh
deploy staging:
  stage: deploy staging
  environment:
    name: staging
    url: http://$STAGING_DOMAIN
  only:
    - master
  except:
    - schedules
  script:
    - npm install --global surge
    - surge --project ./public --domain $STAGING_DOMAIN
  cache: {}
deploy production:
  stage: deploy production
  environment:
    name: production
    url: http://$PRODUCTION_DOMAIN
  only:
    - master
  when: manual
  allow_failure: false
  except:
    - schedules
  script:
    - npm install --global surge
    - surge --project ./public --domain $PRODUCTION_DOMAIN
  cache: {}
production tests:
  image: alpine
  stage: production tests
  only:
    - master
  except:
    - schedules
  script:
    - apk add --no-cache curl
    - curl -s "https://$PRODUCTION_DOMAIN" | grep -q "Hi people"
    - curl -s "https://$PRODUCTION_DOMAIN" | grep -q "$CI_COMMIT_SHORT_SHA"
  cache: {}
I am expecting to see "deploy review" as the only job in the pipeline. However, I see "test artifact" and "test website." What can I do to fix the issue? Thanks.
I found one solution:
Add the following to the build website and deploy review stages:
only:
  - master
  - merge_requests
I expect you watched a course by Valentin Despa. I've just come across the same problem. I wonder if he published any solution to the issue.
Basically I'm not positive if I'm correct, but...
if we open the link, an explanation can be found there:
https://docs.gitlab.com/ee/ci/pipelines/merge_request_pipelines.html
only:
  - merge_requests
runs the deploy review stage in a detached pipeline, and we don't have access to our environment variables since they are protected. What I did was go to Settings -> CI/CD -> Variables and untick the "protected" option for both variables.
Then, if you run the pipeline again you'll notice it'll throw an error on ./public.
Play his video on dynamic environments and notice at 4:49 there are three green pipelines. You have only one (your pipeline is detached), meaning the build website stage hasn't run. That's why you'll see the error relating to ./public: your pipeline knows nothing about Gatsby. We need to install Gatsby first and then build it.
deploy review:
  stage: deploy review
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://yourdomain-$CI_ENVIRONMENT_SLUG.surge.sh
  only:
    - merge_requests
  script:
    - npm install --silent
    - npm install -g gatsby-cli
    - gatsby build
    - npm install --global surge
    - surge --project ./public --domain yourdomain-$CI_ENVIRONMENT_SLUG.surge.sh
  artifacts:
    paths:
      - ./public

GitLab executing next stage even if one fails (with dependency)

I'm setting up a GitLab pipeline with multiple stages and at least two jobs in each stage. How do I conditionally allow the next stage to continue even if one job in the first stage failed (and the whole stage is therefore marked as failed)? For example: I want to prepare, build, and test, and I want to do this on a Windows and a Linux runner. If my Linux runner fails during preparation but my Windows runner succeeds, then the next stage should start without building the Linux package (which already failed), but the Windows build should start.
My intention is that if one system fails, at least the second is able to continue.
I added dependencies, and I thought this would solve my problem: if the "build windows" job depends on "prepare windows", then it shouldn't matter whether "prepare linux" failed. But this isn't the case :/
image: node:10.6.0
stages:
  - prepare
  - build
  - test
prepare windows:
  stage: prepare
  tags:
    - windows
  script:
    - npm i
    - git submodule foreach 'npm i'
prepare linux:
  stage: prepare
  tags:
    - linux
  script:
    - npm i
    - git submodule foreach 'npm i'
build windows:
  stage: build
  tags:
    - windows
  script:
    - npm run build-PWA
  dependencies:
    - prepare windows
build linux:
  stage: build
  tags:
    - linux
  script:
    - npm run build-PWA
  dependencies:
    - prepare linux
unit windows:
  stage: test
  tags:
    - windows
  script:
    - npm run test
  dependencies:
    - build windows
  artifacts:
    paths:
      - dist/
      - package.json
    expire_in: 5 days
unit linux:
  stage: test
  tags:
    - linux
  script:
    - npm run test
  dependencies:
    - build linux
  artifacts:
    paths:
      - dist/
      - package.json
    expire_in: 5 days
See the allow_failure option:
allow_failure allows a job to fail without impacting the rest of the CI suite.
Example:
job1:
  stage: test
  script:
    - execute_script_that_will_fail
  allow_failure: true
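Applied to the pipeline in the question, that would mean marking the prepare jobs with allow_failure: true so that a failure on one platform doesn't block the next stage for the other, e.g.:

prepare linux:
  stage: prepare
  tags:
    - linux
  # a failure here no longer blocks the build stage from starting
  allow_failure: true
  script:
    - npm i
    - git submodule foreach 'npm i'

Note that allow_failure lets the whole next stage start, so build linux will still be scheduled and will most likely fail on its own. On newer GitLab versions, the needs: keyword can express this kind of per-job dependency directly, skipping jobs whose needed job failed.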
