Errors in CircleCI config.yml - job-scheduling

I am new to CircleCI.
My requirement is to make sure that a build is triggered and executed on a particular branch (which contains a few automation scenarios). I am getting the following error from CircleCI when pushing the config.yml (shown below) to Bitbucket:
Config does not conform to schema: {:workflows {:nightly {:jobs missing-required-key}}}
The .yml file is as follows:
version: 2
jobs:
  test_exec:
    docker:
      - image: maven:3.3-jdk-8
    steps:
      - checkout
      - run:
          name: Run test via maven
          command: mvn -Dtest=Runner test
workflows:
  version: 2
  nightly:
    triggers:
      - schedule:
          cron: "18 23 * * *"
          filters:
            branches:
              only:
                - AT-HomePage_Filters
Could anyone help me fix this issue?

Try the following. The schema error says your nightly workflow is missing its required jobs key, so declare which job(s) the workflow should run:
version: 2
jobs:
  test_exec:
    docker:
      - image: maven:3.3-jdk-8
    steps:
      - checkout
      - run:
          name: Run test via maven
          command: mvn -Dtest=Runner test
workflows:
  version: 2
  nightly:
    triggers:
      - schedule:
          cron: "18 23 * * *"
          filters:
            branches:
              only:
                - AT-HomePage_Filters
    jobs:
      - test_exec

Related

Refactor circleci config.yml file for ReactJs

I am new to CI/CD. I have created a basic React application using create-react-app and added the configuration below for CircleCI. It works fine in CircleCI without issues, but there is a lot of redundant code: the same steps are used in multiple places. I want to refactor this config file following best practices.
version: 2.1
orbs:
  node: circleci/node@4.7.0
jobs:
  build:
    docker:
      - image: cimg/node:17.2.0
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: npm
      - run:
          command: npm run build
          name: Build app
      - persist_to_workspace:
          root: ~/project
          paths:
            - .
  test:
    docker:
      - image: cimg/node:17.2.0
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: npm
      - run:
          command: npm run test
          name: Test app
      - persist_to_workspace:
          root: ~/project
          paths:
            - .
  eslint:
    docker:
      - image: cimg/node:17.2.0
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: npm
      - run:
          command: npm run lint
          name: Lint app
      - persist_to_workspace:
          root: ~/project
          paths:
            - .
workflows:
  on_commit:
    jobs:
      - build
      - test
      - eslint
I can see you are installing packages in multiple jobs. You can look into the save_cache and restore_cache options.
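As a sketch of what that could look like (assuming npm and a package-lock.json at the project root; note that the node/install-packages orb step already does its own caching, so this mainly matters if you drop the orb):

```yaml
steps:
  - checkout
  # Restore a previously saved npm cache keyed on the lockfile.
  - restore_cache:
      keys:
        - npm-deps-{{ checksum "package-lock.json" }}
  - run:
      name: Install dependencies
      command: npm ci
  # Save the npm cache directory for the next run with the same lockfile.
  - save_cache:
      key: npm-deps-{{ checksum "package-lock.json" }}
      paths:
        - ~/.npm
```

Any job using the same cache key then reuses the downloaded packages instead of fetching them again.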

Azure Spring Gradle: files in src/test/resources not accessible

I'm building a Spring Boot app using Gradle. Integration tests (Spock) need to access the code/src/test/resources/docker-compose.yml file to prepare a Testcontainers container:
static DockerComposeContainer postgresContainer = new DockerComposeContainer(
        ResourceUtils.getFile("classpath:docker-compose.yml"))
The git file structure is:
- code
  - src
    - main
    - test
      - resources
        - docker-compose.yml
This works fine on my local machine, but once I run it in an Azure pipeline, it gives:
No such file or directory: '/__w/1/s/code/build/resources/test/docker-compose.yml'
My pipeline YAML is below. I use an Ubuntu container with Java 17, as I need to build with 17 but Azure's latest is 11 (maybe this plays a role in the error I get?).
trigger: none
stages:
  - stage: Test
    displayName: Test Stage
    jobs:
      - job: Test
        pool:
          vmImage: 'ubuntu-22.04'
        container: gradle:7.6.0-jdk17
        variables:
          - name: JAVA_HOME_11_X64
            value: /opt/java/openjdk
        displayName: Test
        steps:
          - script: java --version
            displayName: Java version
          - script: |
              echo "Build reason is: $(Build.Reason)"
            displayName: Build reason
          - checkout: self
            clean: true
          - task: Gradle@2
            displayName: 'Gradle Build'
            enabled: True
            inputs:
              javaHomeSelection: 'path'
              jdkDirectory: '/opt/java/openjdk'
              wrapperScript: code/gradlew
              cwd: code
              tasks: clean build
              publishJUnitResults: true
              jdkVersionOption: 1.17
Thanks for the help!
I've solved it with a workaround: I realized that I don't need the container with JDK 17 that was causing the problem (it could not access files on the host machine, of course).
The truth is that Azure silently supports JDK 17 via the directive jdkVersionOption: 1.17.
But once someone needs to use a container to build the code and access repository files that are not on the classpath, the problem will arise again.
trigger: none
stages:
  - stage: Test
    displayName: Test Stage
    jobs:
      - job: Test
        pool:
          vmImage: 'ubuntu-22.04'
        displayName: Test
        steps:
          - script: java --version
            displayName: Java version
          - script: |
              echo "Build reason is: $(Build.Reason)"
            displayName: Build reason
          - checkout: self
            clean: true
          - task: Gradle@2
            displayName: 'Gradle Build'
            enabled: True
            inputs:
              wrapperScript: server/code/gradlew
              cwd: server/code
              tasks: test
              publishJUnitResults: true
              jdkVersionOption: 1.17
Please follow the Azure Pipelines issue for more details.

CircleCI save output for dependent jobs in workflow

I have two jobs, B dependent on A, and I need to use A's output as input for my next job.
version: 2
jobs:
  A:
    docker:
      - image: xxx
    environment:
      MAKEFLAGS: "-i"
      JVM_OPTS: -Xmx3200m
    steps:
      - run: git submodule update --init
      - run:
          name: build A
          command: cd platform/android/ && ant
  B:
    docker:
      - image: yyy
    environment:
      MAKEFLAGS: "-i"
      JVM_OPTS: -Xmx3200m
    steps:
      name: build B
      command: ./gradlew assembleDebug
workflows:
  version: 2
  tests:
    jobs:
      - A
      - B:
          requires:
            - A
The output of job A, in the folder ./build/output, needs to be saved and used in job B.
How do I achieve this?
disclaimer: I'm a CircleCI Developer Advocate
You would use CircleCI Workspaces.
version: 2
jobs:
  A:
    docker:
      - image: xxx
    environment:
      MAKEFLAGS: "-i"
      JVM_OPTS: -Xmx3200m
    steps:
      - run: git submodule update --init
      - run:
          name: build A
          command: cd platform/android/ && ant
      - persist_to_workspace:
          root: build/
          paths:
            - output
  B:
    docker:
      - image: yyy
    environment:
      MAKEFLAGS: "-i"
      JVM_OPTS: -Xmx3200m
    steps:
      - attach_workspace:
          at: build/
      - run:
          name: build B
          command: ./gradlew assembleDebug
workflows:
  version: 2
  tests:
    jobs:
      - A
      - B:
          requires:
            - A
Also keep in mind that your B job, as posted, has some YAML issues: its name/command pair needs to be nested under a - run: step.

Tag release not built with CircleCI

I am using CircleCI to build a project. Everything is running fine, except that my tags are not built when pushed to GitHub.
I don't understand why. I have reduced my whole configuration to a minimal config file with the same logic:
version: 2
jobs:
  my_dummy_job_nightly:
    working_directory: ~/build
    docker:
      - image: docker:git
    steps:
      - checkout
      - setup_remote_docker:
          reusable: true
          exclusive: true
      - run:
          name: NIGHTLY BUILD
          command: |
            apk add --update py-pip
            python -m pip install --upgrade pip
  my_dummy_job_deploy:
    working_directory: ~/build
    docker:
      - image: docker:git
    steps:
      - checkout
      - setup_remote_docker:
          reusable: true
          exclusive: true
      - run:
          name: RELEASE BUILD
          command: |
            apk add --update py-pip
            python -m pip install --upgrade pip
###################################################################################
# CircleCI WORKFLOWS                                                              #
###################################################################################
workflows:
  version: 2
  build-and-deploy:
    jobs:
      ###########################################################################
      # NIGHTLY BUILDS                                                          #
      ###########################################################################
      - my_dummy_job_nightly:
          filters:
            tags:
              ignore: /.*/
            branches:
              only: master
      ###########################################################################
      # TAGS BUILDS                                                             #
      ###########################################################################
      - hold:
          type: approval
          filters:
            tags:
              only: /.*/
            branches:
              ignore: /.*/
      - my_dummy_job_deploy:
          requires:
            - hold
          filters:
            tags:
              only: /.*/
            branches:
              ignore: /.*/
I don't understand why the tags don't build; the regex should let everything through.
I have finally found the issue. It has nothing to do with the configuration: the CircleCI interface does not show tag builds in the Workflows view, and thus the approval operation blocks the whole process.
To access the workflow and approve the deployment, you have to click on the build and then on its workflow. Once on the workflow, it is possible to approve the process.
The only solution I have found to make the build appear is to create a dummy, otherwise useless step in the build process that runs before the approval:
version: 2
jobs:
  init_tag_build:
    working_directory: ~/build
    docker:
      - image: docker:git
    steps:
      - checkout
      - setup_remote_docker:
          reusable: true
          exclusive: true
      - run:
          name: Launch Build OP
          command: |
            echo "start tag workflow"
  my_deploy_job:
    working_directory: ~/build
    docker:
      - image: docker:git
    steps:
      - checkout
      - setup_remote_docker:
          reusable: true
          exclusive: true
      - run:
          name: DEPLOY BUILD
          command: |
            echo "do the deploy work"
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - init_tag_build:
          filters:
            tags:
              only: /.*/
            branches:
              ignore: /.*/
      - hold:
          type: approval
          requires:
            - init_tag_build
          filters:
            tags:
              only: /.*/
            branches:
              ignore: /.*/
      - my_deploy_job:
          requires:
            - hold
          filters:
            tags:
              only: /.*/
            branches:
              ignore: /.*/
TL;DR
In the YAML you ignore every branch. Remove the following part:
branches:
  ignore: /.*/
You probably meant to build only when tags are pushed, but you also ignored all branches. If you want to build for every branch as well as tags, remove those lines. If you want to build for some branch (e.g. dev) along with tags, add branches: only: dev instead.
The connection between the two specifiers is AND rather than OR. There is a discussion on the CircleCI forum about adding a feature to change it to OR.
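For example, the "dev branch plus tags" variant described above would look roughly like this (a sketch of the one job's entry in the workflow only):

```yaml
- my_dummy_job_deploy:
    requires:
      - hold
    filters:
      tags:
        only: /.*/
      branches:
        only: dev  # run for dev pushes as well as tags, instead of ignoring all branches
```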

CircleCI: Create Workflow with separate jobs. One build and different Deploys per environment

Hi everyone. I need some help with some issues I am facing configuring CircleCI for my Angular project.
The config.yml I am using for the build and deploy process is detailed below. Currently I have separate jobs for each environment, and each one includes both the build and the deploy. The problem with this approach is that I am repeating myself, and I can't find the correct way to deploy an artifact built in a previous job of the same workflow.
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8-browsers
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
  deploy_prod:
    docker:
      - image: circleci/node:8-browsers
    environment:
      - FIREBASE_TOKEN: "1/AFF2414141ASdASDAKDA4141421sxscq"
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
      - run:
          command: ./node_modules/.bin/firebase use default
      - deploy:
          command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_TOKEN
  deploy_qa:
    docker:
      - image: circleci/node:8-browsers
    environment:
      - FIREBASE_TOKEN: "1/AFF2414141ASdASDAKDA4141421sxscq"
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
      - run:
          command: ./node_modules/.bin/firebase use qa
      - deploy:
          command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_TOKEN
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build:
          filters:
            branches:
              only:
                - master
              ignore:
                - /feat-.*/
      - deploy_prod:
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
      - deploy_qa:
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
I understand that each job runs in its own container, so this prevents me from working in the same workspace.
Q: How can I use the same Docker image for different jobs in the same workflow?
I included store_artifacts thinking it could help me, but from what I read it only works for access through the dashboard or the API.
Q: Am I able to recover an artifact in a job that requires a different job that stored the artifact?
I know that I am repeating myself; my goal is to have one build job required by a deploy job per environment, depending on the tag name, so that my deploy_{env} jobs are mainly the Firebase commands.
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build:
          filters:
            branches:
              only:
                - master
              ignore:
                - /feat-.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
      - deploy_prod:
          requires:
            - build
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
      - deploy_qa:
          requires:
            - build
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
Q: What are the recommended steps (best practices) for this solution?
Q: How can I use the same docker image for different jobs in the same workflow?
There might be two methods of going about this:
1.) Docker Layer Caching: https://circleci.com/docs/2.0/docker-layer-caching/
2.) Caching the .tar file: https://circleci.com/blog/how-to-build-a-docker-image-on-circleci-2-0/
Q: Am I able to recover an artifact on a job that requires a different job that stored the artifact?
The persist_to_workspace and attach_workspace keys should be helpful here: https://circleci.com/docs/2.0/workflows/#using-workspaces-to-share-data-among-jobs
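Applied to the config above, a minimal sketch (assuming the Angular build output lands in dist/) would persist the build output once and attach it in the deploy jobs instead of rebuilding:

```yaml
# In the build job, after "npm run build:prod":
- persist_to_workspace:
    root: .
    paths:
      - dist

# In deploy_prod / deploy_qa, in place of the install and build steps:
- attach_workspace:
    at: .
```

The deploy jobs then only need the Firebase CLI and the attached dist/ folder.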
Q: What are the recommended steps (best practices) for this solution?
Not sure here! Whatever works the fastest and cleanest for you. :)
