CircleCI save output for dependent jobs in workflow - continuous-integration

I have two jobs, B dependent on A, and I need to use A's output as input for the next job.
version: 2
jobs:
  A:
    docker:
      - image: xxx
    environment:
      MAKEFLAGS: "-i"
      JVM_OPTS: -Xmx3200m
    steps:
      - run: git submodule update --init
      - run:
          name: build A
          command: cd platform/android/ && ant
  B:
    docker:
      - image: yyy
    environment:
      MAKEFLAGS: "-i"
      JVM_OPTS: -Xmx3200m
    steps:
      name: build B
      command: ./gradlew assembleDebug
workflows:
  version: 2
  tests:
    jobs:
      - A
      - B:
          requires:
            - A
The output of job A in folder ./build/output needs to be saved and used in job B.
How do I achieve this?

disclaimer: I'm a CircleCI Developer Advocate
You would use CircleCI Workspaces.
version: 2
jobs:
  A:
    docker:
      - image: xxx
    environment:
      MAKEFLAGS: "-i"
      JVM_OPTS: -Xmx3200m
    steps:
      - run: git submodule update --init
      - run:
          name: build A
          command: cd platform/android/ && ant
      - persist_to_workspace:
          root: build/
          paths:
            - output
  B:
    docker:
      - image: yyy
    environment:
      MAKEFLAGS: "-i"
      JVM_OPTS: -Xmx3200m
    steps:
      - attach_workspace:
          at: build/
      - run:
          name: build B
          command: ./gradlew assembleDebug
workflows:
  version: 2
  tests:
    jobs:
      - A
      - B:
          requires:
            - A
Also keep in mind, your B job's steps have some YAML issues: name and command need to be nested under a - run: step, as shown in the corrected snippet above.

Related

Refactor circleci config.yml file for ReactJs

I am new to CI/CD. I have created a basic React application using create-react-app and added the configuration below for CircleCI. It works fine in CircleCI without issues, but there is a lot of redundant code, e.g. the same steps are used in multiple places. I want to refactor this config file following best practices.
version: 2.1
orbs:
  node: circleci/node@4.7.0
jobs:
  build:
    docker:
      - image: cimg/node:17.2.0
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: npm
      - run:
          command: npm run build
          name: Build app
      - persist_to_workspace:
          root: ~/project
          paths:
            - .
  test:
    docker:
      - image: cimg/node:17.2.0
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: npm
      - run:
          command: npm run test
          name: Test app
      - persist_to_workspace:
          root: ~/project
          paths:
            - .
  eslint:
    docker:
      - image: cimg/node:17.2.0
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: npm
      - run:
          command: npm run lint
          name: Lint app
      - persist_to_workspace:
          root: ~/project
          paths:
            - .
workflows:
  on_commit:
    jobs:
      - build
      - test
      - eslint
I can see you are installing packages in multiple jobs. You can look into the save_cache and restore_cache options; a sketch follows below.
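For example, a minimal sketch of how restore_cache and save_cache are typically combined in one of these jobs (the cache key, the ~/.npm path, and the switch to npm ci are assumptions for illustration, not taken from your config):
steps:
  - checkout
  # Restore a previously saved npm cache, keyed on the lockfile (key name is an assumption)
  - restore_cache:
      keys:
        - npm-deps-{{ checksum "package-lock.json" }}
  - run: npm ci
  # Save the npm cache so later runs with the same lockfile can reuse it
  - save_cache:
      key: npm-deps-{{ checksum "package-lock.json" }}
      paths:
        - ~/.npm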

gitlab Cypress generate allure report

I have my .gitlab-ci.yml file as follows:
image: cypress/base:14.16.0
stages:
  - test
test:
  stage: test
  script:
    - npm install
    - npm run scripts
where the scripts npm script runs cypress run --spec cypress/integration/UI/myScript.feature.
When I add another command after the scripts step to generate the Allure report, the GitLab pipeline throws an error that the Java home path is not set:
ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH
So I updated my script to something like this:
image: cypress/base:14.16.0
stages:
  - test
  - allure
test:
  stage: test
  script:
    - npm install
    - npm run clean:allure
    - npm run scripts
allure_report:
  stage: allure
  when: always
  image: timbru31/java-node
  dependencies:
    - test
  script:
    - npm install
    - npm run generate-allure-report
  artifacts:
    when: always
    paths:
      - cypress/reportsAllure/allure-report/
      - cypress/reportsAllure/allure-results/
where generate-allure-report is --> allure generate cypress/reportsAllure/allure-results --clean -o cypress/reportsAllure/allure-report
But here empty reports are generated. Does anyone know which artifacts I need to pass from the first stage to the next in order to generate the Allure report?
This works for me, but I'm using the default folder locations, so you'd need to change those (the same goes for the artifacts paths) and append -o folder/allure-report where appropriate:
stages:
  - test
  - allure
  - deploy
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules
.download_history: &download_history
  after_script:
    - apt-get install -y unzip
    - mkdir backup && cd backup || true
    - "curl --location --output report.zip --request GET \"https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/jobs/artifacts/master/download?job=pages\" --header \"Authorization: Bearer ${CI_DEPLOY_TOKEN}\" || true"
    - (unzip report.zip) || true
    - cd ../
    - (cp -r backup/public/history/ allure-results/history/) || true
.test_template: &test_template
  image:
    name: cypress/included:7.5.0
    entrypoint: [""]
  stage: test
  variables:
    CY_RUN_ID: ${CI_JOB_ID}
  script:
    - export CYPRESS_VIDEO=false
    - npm install
    - ./node_modules/.bin/cypress run --headless --env allure=true
  artifacts:
    when: always
    paths:
      - allure-results/
smoke:
  <<: *test_template
  <<: *download_history
allure_report:
  stage: allure
  when: always
  image:
    name: ubuntu:latest
    entrypoint: [""]
  dependencies:
    - smoke
  variables:
    DEBIAN_FRONTEND: noninteractive
    TZ: Europe/London
  before_script:
    - apt-get update
    - apt-get install -y default-jdk wget unzip
    - mkdir /work/
    - wget https://github.com/allure-framework/allure2/releases/download/2.13.8/allure-2.13.8.zip -P /work/
    - unzip /work/allure-2.13.8.zip -d /work/
  script:
    - /work/allure-2.13.8/bin/allure generate allure-results --clean -o allure-report
  artifacts:
    when: always
    paths:
      - allure-report/
      - allure-results/
  only:
    - master
pages:
  stage: deploy
  when: always
  dependencies:
    - allure_report
  script:
    - mv allure-report/ public/
  artifacts:
    paths:
      - public
    expire_in: 30 days
  only:
    - master

COPY failed: no source files were specified - How do I have to use the artifacts?

With mvn package in the maven-build job I create a folder (named "target") with the correct subfolders and files. When I execute it in my development environment, I can go on with the docker-build stage. In GitLab I get the error COPY failed: no source files were specified. This happens at step 3/7 in my Dockerfile.
Why is the file not known in the docker-build stage even though I create an artifact?
My .gitlab-ci.yml:
image: maven:latest
stages:
  - build
  - run
cache:
  paths:
    - .m2/repository
maven-build:
  stage: build
  script: mvn package -s .m2/settings.xml
  artifacts:
    paths:
      - target/
docker-build:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - docker build . -t generic_test
run:
  stage: run
  script:
    - docker run generictest
My Dockerfile:
FROM selenium/standalone-firefox
WORKDIR /app
COPY target/*.jar app.jar
COPY *.json .
ENV http_proxy=http://10.127.255.25:8080
ENV https_proxy=http://10.127.255.25:8080
ENTRYPOINT java -jar app.jar /usr/bin/geckodriver
When the target folder was already in GitLab and I didn't have to create it first with mvn package, it worked. Here is the code that worked before (and yes, I do have to create it; I can't leave it in the repository):
stages:
  - build
docker-build:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - echo docker build . -t dockertest
    - echo docker run dockertest
I got it. By default, all artifacts from all previous stages are passed (see the documentation), but a job in the same stage doesn't know about the artifact. I had to create two different stages.
I no longer use stage: build twice; I created a third stage.
image: maven:latest
stages:
  - docker-build
  - maven-build
  - run
cache:
  paths:
    - .m2/repository
maven-build:
  stage: docker-build
  script:
    - mvn package -s .m2/settings.xml
    - dir
    - cd target
    - dir
  artifacts:
    paths:
      - target/
docker-build:
  image: docker:latest
  stage: maven-build
  services:
    - docker:dind
  script:
    - ls
    - docker build . -t generictest
run:
  image: docker:latest
  stage: run
  services:
    - docker:dind
  script:
    - docker run generictest

Errors in CircleCI config.yml

I am new to CircleCI.
My requirement is to make sure that a build is triggered and executed on a particular branch (which contains a few automation scenarios). I am getting errors from CircleCI when pushing the config.yml (shown below) to Bitbucket:
Config does not conform to schema: {:workflows {:nightly {:jobs missing-required-key}}}
The .yml file is as follows:
version: 2
jobs:
  test_exec:
    docker:
      - image: maven:3.3-jdk-8
    steps:
      - checkout
      - run:
          name: Run test via maven
          command: mvn -Dtest=Runner test
workflows:
  version: 2
  nightly:
    triggers:
      - schedule:
          cron: "18 23 * * *"
          filters:
            branches:
              only:
                - AT-HomePage_Filters
Could anyone help me fix this issue?
Try the following:
version: 2
jobs:
  test_exec:
    docker:
      - image: maven:3.3-jdk-8
    steps:
      - checkout
      - run:
          name: Run test via maven
          command: mvn -Dtest=Runner test
workflows:
  version: 2
  nightly:
    triggers:
      - schedule:
          cron: "18 23 * * *"
          filters:
            branches:
              only:
                - AT-HomePage_Filters
    jobs:
      - test_exec

CircleCI: Create Workflow with separate jobs. One build and different Deploys per environment

Everyone, I need some help with some issues I am facing while configuring CircleCI for my Angular project.
The config.yml that I am using for the build and deploy process is detailed below. Currently I have decided to define a separate job for each environment, and each one includes the build and the deploy. The problem with this approach is that I am repeating myself, and I can't find the correct way to deploy an artifact built in a previous job of the same workflow.
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8-browsers
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
  deploy_prod:
    docker:
      - image: circleci/node:8-browsers
    environment:
      - FIREBASE_TOKEN: "1/AFF2414141ASdASDAKDA4141421sxscq"
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
      - run:
          command: ./node_modules/.bin/firebase use default
      - deploy:
          command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_TOKEN
  deploy_qa:
    docker:
      - image: circleci/node:8-browsers
    environment:
      - FIREBASE_TOKEN: "1/AFF2414141ASdASDAKDA4141421sxscq"
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
      - run:
          command: ./node_modules/.bin/firebase use qa
      - deploy:
          command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_TOKEN
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build:
          filters:
            branches:
              only:
                - master
              ignore:
                - /feat-.*/
      - deploy_prod:
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
      - deploy_qa:
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
I understand that each job runs in its own Docker executor, so this prevents me from working in the same workspace.
Q: How can I use the same docker image for different jobs in the same workflow?
I included store_artifacts thinking it could help me, but from what I read it is only intended for access through the dashboard or the API.
Q: Am I able to recover an artifact in a job that requires a different job that stored the artifact?
I know that I am repeating myself; my goal is to have a build job required by a deploy job per environment, depending on the tag name. So my deploy_{env} jobs are mainly the firebase commands.
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build:
          filters:
            branches:
              only:
                - master
              ignore:
                - /feat-.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
      - deploy_prod:
          requires:
            - build
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
      - deploy_qa:
          requires:
            - build
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
Q: What are the recommended steps (best practices) for this solution?
Q: How can I use the same docker image for different jobs in the same workflow?
There might be two methods of going about this (a sketch of the first option follows below):
1. Docker Layer Caching: https://circleci.com/docs/2.0/docker-layer-caching/
2. Caching the .tar file: https://circleci.com/blog/how-to-build-a-docker-image-on-circleci-2-0/
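For the first option, a minimal sketch of enabling Docker Layer Caching in a job that builds an image with docker build (the executor image, job name, and tag here are placeholders; note that DLC only helps when your jobs actually build Docker images, which isn't quite the case in the config above):
build_image:
  docker:
    - image: cimg/base:stable   # placeholder executor image
  steps:
    - checkout
    # Reuse image layers produced by earlier runs of this job
    - setup_remote_docker:
        docker_layer_caching: true
    - run: docker build -t myapp:latest .   # placeholder image tag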
Q: Am I able to recover an artifact on a job that requires a different job that stored the artifact?
The persist_to_workspace and attach_workspace keys should be helpful here: https://circleci.com/docs/2.0/workflows/#using-workspaces-to-share-data-among-jobs
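As a minimal sketch, assuming the dist/ output from the build job in your config (the deploy steps are illustrative, not a definitive setup): persist dist/ once in build, then attach it in each deploy job instead of rebuilding:
build:
  docker:
    - image: circleci/node:8-browsers
  steps:
    - checkout
    - run: npm install
    - run: npm run build:prod
    # Make dist/ available to downstream jobs in this workflow
    - persist_to_workspace:
        root: .
        paths:
          - dist
deploy_qa:
  docker:
    - image: circleci/node:8-browsers
  steps:
    - checkout
    # Restores dist/ exactly as the build job produced it
    - attach_workspace:
        at: .
    - run: npm install
    - run: ./node_modules/.bin/firebase use qa
    - deploy:
        command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_TOKEN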
Q: What are the recommended steps (best practices) for this solution?
Not sure here! Whatever works the fastest and cleanest for you. :)
