How to run Docker in CircleCI orbs? - Spring

CircleCI introduced orbs in 2.1, and I am trying to add a CircleCI config to my sample project.
But in my test code I have used Testcontainers to simplify the dependency configuration of my integration tests.
When I commit my code, the CircleCI run fails with:
org.testcontainers.containers.ContainerLaunchException: Container startup failed
Caused by: org.testcontainers.containers.ContainerFetchException: Can't get Docker image: RemoteDockerImage(imageName=mongo:4.0.10, imagePullPolicy=DefaultPullPolicy())
Caused by: java.lang.IllegalStateException: Could not find a valid Docker environment. Please see logs and check configuration
My CircleCI config:
version: 2.1
orbs:
  maven: circleci/maven@1.0.1
  codecov: codecov/codecov@1.1.0
jobs:
  codecov:
    machine:
      image: ubuntu-1604:201903-01
    steps:
      - codecov/upload
workflows:
  build:
    jobs:
      - maven/test:
          command: "-q verify -Pcoverage"
      - codecov:
          requires:
            - maven/test

I got it running myself.
The Maven orb provides reusable jobs and commands, but by default it uses a JDK executor and does not provide a Docker runtime.
My solution was to give up the reusable job and reuse some commands from the Maven orb in my own job.
version: 2.1
orbs:
  maven: circleci/maven@1.0.1
  codecov: codecov/codecov@1.1.0
executors:
  docker-mongo:
    docker:
      - image: circleci/openjdk:14-jdk-buster
      - image: circleci/mongo:latest
jobs:
  build:
    executor: docker-mongo
    steps:
      - checkout
      - maven/with_cache:
          steps:
            - run: mvn -q test verify -Pcoverage
      - maven/process_test_results
      - codecov/upload:
          when: on_success
workflows:
  build:
    jobs:
      - build
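An alternative, if you would rather keep Testcontainers managing MongoDB itself instead of running it as a sidecar container, is to run the whole build on a machine executor, which provides a real Docker daemon (the same ubuntu-1604 image the codecov job above uses). This is only a sketch; note that the JDK preinstalled on that machine image may differ from what your build expects.

```yaml
version: 2.1
jobs:
  build:
    machine:
      # machine executor: a full VM with a working Docker daemon,
      # so Testcontainers can start its own containers
      image: ubuntu-1604:201903-01
    steps:
      - checkout
      - run: mvn -q verify -Pcoverage
workflows:
  build:
    jobs:
      - build
```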

Related

Can't get CircleCI to post to Slack upon test failure

I'm getting no parsing errors and I believe my config.yml is correct; I just can't seem to get it to post to Slack. I have all the background config correct, such as the environment variables, and the context part refers to our organization settings. It's just not posting upon failure. Is there something I'm doing wrong?
version: 2.1
orbs:
  cypress: cypress-io/cypress@1
  slack: circleci/slack@4.5.0
workflows:
  version: 2
  test:
    jobs:
      - cypress/run
jobs:
  steps:
    executors:
      with-chrome:
        docker:
          - image: 'cypress/browsers:node14.16.0-chrome90-ff88'
    description: Runs cypress tests
    steps:
      - checkout
      - run:
          name: Run all cypress tests
          command: npx cypress run
    context: slack-context
    - slack/notify:
        event: fail
        template: basic_fail_1
It appears you're using the CircleCI Cypress orb, and more specifically the cypress/run job defined in that orb. So I'm not sure what the part
jobs:
  steps:
    executors:
      with-chrome:
        docker:
          - image: 'cypress/browsers:node14.16.0-chrome90-ff88'
    description: Runs cypress tests
    steps:
      - checkout
      - run:
          name: Run all cypress tests
          command: npx cypress run
    context: slack-context
    - slack/notify:
        event: fail
        template: basic_fail_1
is for.
The way your config looks, that part will never be executed. Your build will simply be the cypress/run job running with all the default parameter values.
Furthermore, contexts need to be referenced within the workflows section (https://circleci.com/docs/2.0/contexts/).
I would suggest the below configuration:
version: 2.1
orbs:
  cypress: cypress-io/cypress@1.29.0
  slack: circleci/slack@4.10.1
executors:
  with-chrome:
    docker:
      - image: 'cypress/browsers:node14.16.0-chrome90-ff88'
workflows:
  version: 2
  test:
    jobs:
      - cypress/run:
          executor: with-chrome
          browser: chrome
          context: slack-context
          post-steps:
            - slack/notify:
                event: fail
                template: basic_fail_1

Can someone advise me a good practice for my Azure DevOps deployment

Good afternoon,
I am building a CI pipeline in Azure DevOps, which is new ground for me. I managed to add the build tasks and steps that I wanted, although there are still some issues, which I explain below.
Issue #1
I misunderstood the meaning of the latest tag. I thought it would automatically pull the latest/newest version of the image from the specified Docker Hub repository.
Currently my Docker build task looks like this:
- task: Docker@2
  displayName: 'Build Docker image'
  inputs:
    repository: '<my_repo_name>'
    command: 'build'
    Dockerfile: '**/Dockerfile'
    tags: $(Build.BuildId)
This pipeline YAML is to deploy to my production VPS which I added under Pipelines -> Environments.
Here is the deployment step of the pipeline:
- deployment: VMDeploy
  displayName: 'Deployment to VPS'
  pool:
    vmImage: 'Ubuntu-20.04'
  environment:
    name: CDB_VPS
    resourceName: <my_resource_name>
    resourceType: VirtualMachine
  strategy:
    runOnce:
      deploy:
        steps:
          - script: docker pull <my_repo_name>:latest
          - script: docker stop $(docker ps -aq)
          - script: docker run -p 8085:8085 <my_repo_name>:latest
Issue #2
I do not get any errors in the pipeline while running it, but I am wondering if this is good practice. This way it will always run the latest version. I also don't think this is how I should deploy.
Issue #3
The deployment block gets executed before the build and push block is finished. For extra information I will post the entire YAML file below.
trigger:
  - master
jobs:
  - job: Build
    displayName: 'Build Maven project and Docker build'
    steps:
      - task: replacetokens@3
        displayName: 'Replace tokens'
        inputs:
          targetFiles: |
            **/application.properties
      - task: Maven@3
        displayName: 'Build Maven project'
        inputs:
          mavenPomFile: 'pom.xml'
          goals: 'package'
          jdkVersionOption: 11
          publishJUnitResults: true
      - task: Docker@2
        displayName: 'Build Docker image'
        inputs:
          repository: '<my_repo_name>'
          command: 'build'
          Dockerfile: '**/Dockerfile'
          tags: $(Build.BuildId)
      - task: Docker@2
        displayName: 'Push Docker image to Docker hub'
        inputs:
          containerRegistry: 'Dockerhub connection'
          repository: '<my_repo_name>'
          command: 'push'
          Dockerfile: '**/Dockerfile'
          tags: $(Build.BuildId)
  - deployment: VMDeploy
    displayName: 'Deployment to VPS'
    pool:
      vmImage: 'Ubuntu-20.04'
    environment:
      name: CDB_VPS
      resourceName: <my_vps_resource_name>
      resourceType: VirtualMachine
    strategy:
      runOnce:
        deploy:
          steps:
            - script: docker pull <my_repo_name>:latest
            - script: docker stop $(docker ps -aq)
            - script: docker run -p 8085:8085 <my_repo_name>:latest
If you want to deploy a specific image, replace latest with $(Build.BuildId):
steps:
  - script: docker pull <my_repo_name>:$(Build.BuildId)
  - script: docker stop $(docker ps -aq)
  - script: docker run -p 8085:8085 <my_repo_name>:$(Build.BuildId)
And if you want VMDeploy to wait for Build, add dependsOn:
- deployment: VMDeploy
  dependsOn: Build
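Putting both suggestions together, the deployment block from the question could look like the sketch below. The repository name and resource name are the question's placeholders; -d is an added assumption so the docker run step returns instead of blocking the job.

```yaml
- deployment: VMDeploy
  dependsOn: Build          # wait until the build-and-push job has finished
  displayName: 'Deployment to VPS'
  pool:
    vmImage: 'Ubuntu-20.04'
  environment:
    name: CDB_VPS
    resourceName: <my_vps_resource_name>
    resourceType: VirtualMachine
  strategy:
    runOnce:
      deploy:
        steps:
          # pull the exact image produced by this run, not :latest
          - script: docker pull <my_repo_name>:$(Build.BuildId)
          - script: docker stop $(docker ps -aq)
          # -d detaches the container so the pipeline step completes
          - script: docker run -d -p 8085:8085 <my_repo_name>:$(Build.BuildId)
```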
Issue #1
The tags input in the Docker task means: a list of tags on separate lines. These tags are used in the build, push and buildAndPush commands, and we can then see the tag on the resulting Docker image.
Issue #2
We can check the latest deployment in Docker and in the Azure DevOps pipeline log to make sure it always runs the latest version.
Issue #3
You could check Krzysztof Madej's answer.

GitLab Shared runner, deploy spring boot microservice app to custom server

Before I start, let me tell you that I'm a newbie to the GitLab CI file :)
I'm looking to automate deployments of a Spring Boot microservice app to a custom server (a Namecheap VPS).
(I'm using the GitLab shared runner.)
I have used the JHipster ci-cd generator to create the .gitlab-ci.yml file. With that, I have the build, package and release stages working.
I can even see the image built in the GitLab Container Registry.
What's left is the deployment. I know that I have to use a Docker container to deploy it, but I don't know how.
I'm stuck on deploying the image to my VPS.
(As it's my first microservice, my VPS is still new, with only Java installed.)
Here is my .gitlab-ci.yml file:
#image: jhipster/jhipster:v6.9.0
image: openjdk:11-jdk
cache:
  key: "$CI_COMMIT_REF_NAME"
  paths:
    - .maven/
stages:
  - check
  - build
  # - test
  # - analyze
  - package
  - release
  - deploy
before_script:
  - chmod +x mvnw
  - git update-index --chmod=+x mvnw
  - export NG_CLI_ANALYTICS="false"
  - export MAVEN_USER_HOME=`pwd`/.maven
nohttp:
  stage: check
  script:
    - ./mvnw -ntp checkstyle:check -Dmaven.repo.local=$MAVEN_USER_HOME
maven-compile:
  stage: build
  script:
    - ./mvnw -ntp compile -P-webpack -Dmaven.repo.local=$MAVEN_USER_HOME
  artifacts:
    paths:
      - target/classes/
      - target/generated-sources/
    expire_in: 1 day
#maven-test:
#  stage: test
#  script:
#    - ./mvnw -ntp verify -P-webpack -Dmaven.repo.local=$MAVEN_USER_HOME
#  artifacts:
#    reports:
#      junit: target/test-results/**/TEST-*.xml
#    paths:
#      - target/test-results
#      - target/jacoco
#    expire_in: 1 day
maven-package:
  stage: package
  script:
    - ./mvnw -ntp verify -Pprod -DskipTests -Dmaven.repo.local=$MAVEN_USER_HOME
  artifacts:
    paths:
      - target/*.jar
      - target/classes
    expire_in: 1 day
# Uncomment the following line to use GitLab's container registry. You need to adapt the REGISTRY_URL in case you are not using gitlab.com
docker-push:
  stage: release
  variables:
    REGISTRY_URL: registry.gitlab.com
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHA
  dependencies:
    - maven-package
  script:
    - ./mvnw -ntp compile jib:build -Pprod -Djib.to.image=$IMAGE_TAG -Djib.to.auth.username=gitlab-ci-token -Djib.to.auth.password=$CI_BUILD_TOKEN -Dmaven.repo.local=$MAVEN_USER_HOME
docker-deploy:
  image: docker:stable-git
  stage: deploy
  script:
  when: manual
  only:
    - master
Thanks for your help :)
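One possible way to fill in the empty docker-deploy job, as a sketch rather than a definitive answer: SSH into the VPS and run the freshly pushed image there. It assumes Docker is installed on the VPS (the question notes only Java is installed, so Docker would have to be set up first), and that SSH_PRIVATE_KEY, VPS_USER and VPS_HOST are defined as GitLab CI/CD variables; myapp is a placeholder container name.

```yaml
docker-deploy:
  image: docker:stable-git
  stage: deploy
  variables:
    # same tag the docker-push job built with jib
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHA
  before_script:
    # assumes SSH_PRIVATE_KEY holds a key authorized on the VPS
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan "$VPS_HOST" >> ~/.ssh/known_hosts
  script:
    # log in to the GitLab registry from the VPS, then pull and (re)start the container
    - ssh "$VPS_USER@$VPS_HOST" "docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com"
    - ssh "$VPS_USER@$VPS_HOST" "docker pull $IMAGE_TAG"
    - ssh "$VPS_USER@$VPS_HOST" "docker rm -f myapp || true"
    - ssh "$VPS_USER@$VPS_HOST" "docker run -d --name myapp -p 8080:8080 $IMAGE_TAG"
  when: manual
  only:
    - master
```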

Azure Build Pipeline Not Mapping Files to Docker Volume

I am in the process of creating our Azure DevOps Continuous Deployment pipeline.
One of the steps is to apply a database migration to our environment via scripts from source control. I'm leveraging Docker to avoid needing to install the migration tool (Liquibase) on the agent:
- stage: "ReleaseDev"
  jobs:
    - deployment: "Database Migration (Development)"
      pool:
        name: "Some Servers"
      environment: "Development - Some Environment"
      strategy:
        runOnce:
          deploy:
            steps:
              - bash: |
                  docker run --rm -v "$(Build.SourcesDirectory)/db/Internal:/liquibase/changelog" liquibase/liquibase --url="jdbc:sqlserver://xxx.company.com;Database=SomeTestDatabase;" --changeLogFile=/liquibase/changelog/liquibaseChangeLog.json --username=dbo_liquibase --password=$DEV_LIQUIBASE_PASSWORD update
                env:
                  DEV_LIQUIBASE_PASSWORD: $(dev-liquibase-password)
However, it doesn't appear to find the liquibaseChangeLog.json file in the volume mapped into the container:
========================== Starting Command Output ===========================
##[debug]which '/bin/bash'
##[debug]found: '/bin/bash'
##[debug]/bin/bash arg: --noprofile
##[debug]/bin/bash arg: --norc
##[debug]/bin/bash arg: /home/azure/azure1/agent/_work/_temp/b865f905-04d6-4f31-8c9b-74a312d47670.sh
##[debug]exec tool: /bin/bash
##[debug]arguments:
##[debug] --noprofile
##[debug] --norc
##[debug] /home/azure/azure1/agent/_work/_temp/b865f905-04d6-4f31-8c9b-74a312d47670.sh
/bin/bash --noprofile --norc /home/azure/azure1/agent/_work/_temp/b865f905-04d6-4f31-8c9b-74a312d47670.sh
Liquibase Community 3.8.9 by Datical
Unexpected error running Liquibase: /liquibase/changelog/liquibaseChangeLog.json does not exist
For more information, please use the --logLevel flag
##[debug]Exit code 255 received from tool '/bin/bash'
##[debug]STDIO streams have closed for tool '/bin/bash'
##[error]Bash exited with code '255'.
##[debug]Processed: ##vso[task.issue type=error;]Bash exited with code '255'.
##[debug]task result: Failed
##[debug]Processed: ##vso[task.complete result=Failed;done=true;]
Finishing: Bash
I've done a very similar thing in our CI pipeline for branches, but there the script executed within a Docker-Compose task and not a standalone Bash script, so I'm confused about what's different in this case.
Looking for some advice for a poor Windows developer :)
EDIT: Leo's suggestion below enabled me to come up with this final working solution. His comments are the principle; this is the practice.
stages:
  - stage: Build
    jobs:
      - job: "BuildJob"
        variables:
          solution: "**/*.sln"
          buildPlatform: "any cpu"
          buildConfiguration: "Release"
        pool:
          name: "xxx Build Servers"
        steps:
          - task: PublishPipelineArtifact@1
            displayName: "Publish Pipeline Artifact - DB Migrations"
            inputs:
              targetPath: "db"
              artifact: "db_migrations"
  - stage: "ReleaseDev"
    jobs:
      - deployment: "Development_DbMigration"
        pool:
          name: "xxx Docker Hosts"
        environment: "Development - Web Farm"
        strategy:
          runOnce:
            deploy:
              steps:
                - task: DownloadPipelineArtifact@2
                  displayName: "Download Pipeline Artifact - DB Migrations"
                  inputs:
                    artifactName: 'db_migrations'
                    targetPath: '$(build.artifactstagingdirectory)/db'
                - bash: |
                    docker run --rm -v "$(build.artifactstagingdirectory)/db/Internal:/liquibase/changelog" liquibase/liquibase --url="jdbc:sqlserver://dev.xxx.com;Database=SomeDatabase;" --changeLogFile=/liquibase/changelog/liquibaseChangeLog.json --username=dbo_SomeDatabase --password=$DEV_LIQUIBASE_PASSWORD update
                  env:
                    DEV_LIQUIBASE_PASSWORD: $(dev-liquibase-password)
Your comment is critical, and you are very close to your answer based on it.
If you only add the - stage: "ReleaseDev" to your release pipeline, you will get that issue.
In order to also support release pipelines (CD) in YAML, MS offers a unified YAML experience, so you can configure each of your pipelines to do CI, CD, or CI and CD together.
Besides, MS also provides different built-in tasks for build/deployment, like Checkout for a build stage and Download Artifact for a deployment stage.
So, if we only add ReleaseDev to the pipeline without a build stage, it will be missing the built-in Checkout task. That is the reason why the directory $(Build.SourcesDirectory) is empty.
To resolve this issue, we just need a build stage with a simple task:
stages:
  - stage: Build
    jobs:
      - job: Build
        displayName: Build
        pool:
          name: MyPrivateAgent
        steps:
          - script: |
              echo $(Build.SourcesDirectory)
  - stage: "ReleaseDev"
    jobs:
      - deployment: "Database Migration (Development)"
        pool:
          name: "Some Servers"
Now we can get the source code from the repo.
Note: If you have multiple agents in parallel, you may also need to pay attention to whether the build and deploy run on the same agent; if not, you need to upload and download the artifacts manually. Check this document for some more details.
Hope this helps.

Running tasks only when merging to master

I use the following config, which works as expected: it runs the commands on each PR and on merges to master. Now I want to add some integration tests which should run only when merging to master; all the PRs should keep running the config below as before. The nuance here is that for the integration tests I need another Docker image and a different run command (which should execute only when merging to master). Is it possible to do this with CircleCI?
# Golang CircleCI 2.0 configuration file
version: 2
jobs:
  build:
    docker:
      # specify the version
      - image: circleci/golang:1.11
    working_directory: /go/src/sbr
    steps:
      - checkout
      - run: go version
      - run: go env
      - run: go get -v -t -d ./...
      - run: go test -v ./...
I tried to add another job with a different Docker image under the existing one, but I got an error.
Update:
version: 2
jobs:
  build:
    docker:
      - image: circleci/golang:1.11
    working_directory: /go/src/sbr
    steps:
      - checkout
      - run: go version
      - run: go env
      - run: go get -v -t -d ./...
      - run: go test -v ./...
  test-integration:
    docker:
      - image: other-image
workflows:
  version: 2
  builds:
    jobs:
      - build
  integration-test:
    jobs:
      - test-integration:
          requires:
            - build
          filters:
            branches:
              only: master
The issue is that I get an error when adding the requires to the second workflow:
requires:
  - build
I want the build job to run as a prerequisite before test-integration. What am I doing wrong?
The error is:
requires job "build" but "build" is not part of this workflow.
# At least one job in the workflow must have no dependencies.
# The following jobs are unreachable: integration
#
# -------
# Don't rerun this job. Rerunning will have no effect.
false
Your configuration has a single job named build and no workflows. It sounds like what you want is to run a second job for integration tests, and to have the second job only run when the branch is master. To accomplish both of those you would use a workflow with two jobs.
See https://circleci.com/docs/2.0/configuration-reference/#workflows
An example of what that might look like:
jobs:
  build:
    docker:
      - image: circleci/golang:1.11
    ...
  test-integration:
    docker:
      - image: other-image
    ...
workflows:
  version: 2
  workflow-name:
    jobs:
      - build
      - test-integration:
          requires:
            - build
          filters:
            branches:
              only: master
