I'm trying to combine two examples from the Google Cloud Build documentation:
this example, where a container is built using a yaml file,
and this example, where a persistent volume is used.
My scenario is simple: the first step in my build produces some assets I'd like to bundle into a Docker image built in a subsequent step. However, since the COPY command is relative to the build context of the image, I'm unable to determine how to reference the assets from the first step inside the Dockerfile used for the second step.
There are potentially multiple ways to solve this problem, but the easiest way I've found is to use the docker cloud builder and run an sh script (since Bash is not included in the docker image).
In this example, we build on the examples from the question, producing an asset file in the first build step, then using it inside the Dockerfile of the second step.
cloudbuild.yaml
steps:
- name: 'ubuntu'
  volumes:
  - name: 'vol1'
    path: '/persistent_volume'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    echo "Hello, world!" > /persistent_volume/file
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'sh'
  volumes:
  - name: 'vol1'
    path: '/persistent_volume'
  args: ['docker-build.sh']
  env:
  - 'PROJECT=$PROJECT_ID'
images:
- 'gcr.io/$PROJECT_ID/quickstart-image'
docker-build.sh
#!/bin/sh
# copy the assets produced in the first step into the Docker build context
cp -R /persistent_volume .
docker build -t gcr.io/$PROJECT/quickstart-image .
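For completeness, here is a minimal sketch of what the second step's Dockerfile could then look like (the base image and target path are my assumptions; the asset location follows the cp above):

FROM alpine
# docker-build.sh copied the assets into ./persistent_volume,
# so COPY can now reference them relative to the build context
COPY persistent_volume/file /app/file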
Google Cloud docs use the first code block but I'm wondering why they don't use the second one. As far as I can tell they achieve the same result. Is there any practical difference?
# config 1
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/project-id/project-name', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/project-id/project-name']
# Deploy container image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'project-name', '--image', 'gcr.io/project-id/project-name', '--region', 'us-central1']
images: ['gcr.io/project-id/project-name']
# config 2
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['builds', 'submit', '--region', 'us-central1', '--tag', 'gcr.io/project-id/project-name', '.']
# Deploy container image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'project-name', '--image', 'gcr.io/project-id/project-name', '--region', 'us-central1']
I run this with gcloud builds submit --config cloudbuild.yaml
In the second config, you call a Cloud Build from inside a Cloud Build, which means you pay for the docker build/push process twice.
The timeline in fact looks like this:
Cloud Build 1
  Cloud Build 2
    Docker build
    Docker push
  Deploy on Cloud Run
In addition, the number of concurrent builds is limited, and with config 2 you use twice as much quota.
And the result is the same (it should be slightly faster with config 1 because you don't have a second Cloud Build to spin up).
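For intuition, the gcloud builds submit --tag call in config 2 spawns that second, nested build, whose steps are roughly equivalent to the following (a sketch of what --tag generates, not its literal contents):

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/project-id/project-name', '.']
images: ['gcr.io/project-id/project-name']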
I am building a Concourse pipeline for a Spring Boot service. The pipeline has two jobs (test, package-build-and-push).
The first job (test) has a single task (mvn-test-task) that is responsible for running the Maven tests.
The second job has two tasks (mvn-package-task, build-image-task) and will only run after the first job (test) has finished. The first task (mvn-package-task) does a Maven package. Once it is finished, it copies the complete project folder, including the generated target folder, to the directory "concourse-demo-repo-out". This folder then becomes the input of the next task (build-image-task), which uses it to generate a Docker image. The output is a Docker image placed in the directory "image".
I now need to push this image to an Amazon ECR repository. For this I am using a resource of the type docker-image (https://github.com/concourse/docker-image-resource). The problem that I am facing now is:
How do I tag the image that was created and placed inside the image directory?
How do I tell this docker-image resource type that I already have an image created, so that it uses it, tags it, and then pushes it to ECR?
My pipeline looks like this
resources:
  - name: concourse-demo-repo
    type: git
    icon: github
    source:
      branch: main
      uri: https://github.xyz.com/gzt/concourse-demo.git
      username: <my-username>
      password: <my-token>
  - name: ecr-docker-reg
    type: docker-image
    icon: docker
    source:
      aws_access_key_id: <my-aws-access-key>
      aws_secret_access_key: <my-aws-secret-key>
      repository: <my-aws-ecr-repo>

jobs:
  - name: test
    public: true
    plan:
      - get: concourse-demo-repo
        trigger: true
      - task: mvn-test-task
        file: concourse-demo-repo/ci/tasks/maven-test.yml
  - name: package-build-and-push
    public: true
    serial: true
    plan:
      - get: concourse-demo-repo
        trigger: true
        passed: [test]
      - task: mvn-package-task
        file: concourse-demo-repo/ci/tasks/maven-package.yml
      - task: build-image-task
        privileged: true # oci-build-task must run in a privileged container
        file: concourse-demo-repo/ci/tasks/build-image.yml
      - put: ecr-docker-reg
        params:
          load: image
The maven package task and script
---
platform: linux

image_resource:
  type: docker-image
  source:
    repository: maven

inputs:
  - name: concourse-demo-repo

run:
  path: /bin/sh
  args: ["./concourse-demo-repo/ci/scripts/maven-package.sh"]

outputs:
  - name: concourse-demo-repo-out
#!/bin/bash
set -e
cd concourse-demo-repo
mvn clean package
#cp -R ./target ../concourse-demo-repo-out
cp -a * ../concourse-demo-repo-out
The build image task
---
platform: linux

image_resource:
  type: registry-image
  source:
    repository: concourse/oci-build-task

inputs:
  - name: concourse-demo-repo-out

outputs:
  - name: image

params:
  CONTEXT: concourse-demo-repo-out

run:
  path: build
So finally, when I run the pipeline, everything works fine: I am able to build a Docker image that is placed in the image directory, but I am not able to use that image as part of "put: ecr-docker-reg".
The error that I get while running the pipeline is:
selected worker: ed5d4164f835
waiting for docker to come up...
open image/image: no such file or directory
Consider using the registry-image resource type instead. Use this example for the image resource definition, where you specify the desired tag.
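A minimal sketch of what that resource definition could look like here (credential placeholders carried over from the question; the tag value is an assumption):

resources:
  - name: ecr-docker-reg
    type: registry-image
    icon: docker
    source:
      aws_access_key_id: <my-aws-access-key>
      aws_secret_access_key: <my-aws-secret-key>
      repository: <my-aws-ecr-repo>
      tag: latest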
Your build-image-task is analogous to the build task from the example.
Finally, the put will look something like:
- put: ecr-docker-reg
  params:
    image: image/image  # <--- but make sure this file exists, read on...
Given that Concourse complained about open image/image: no such file or directory, verify that the filename is what you expect it to be (with the oci-build-task it is typically image.tar, i.e. image/image.tar). Hijack the failed task and look inside the image output directory:
$ fly -t my-concourse i -j mypipeline/package-build-and-push
(select the build-image-task from the list)
# ls -al image/
(actual filename here)
So I know that there are a lot of tutorials on these topics, both Docker and Maven, but I'm having some confusion combining them all together.
I created a multi-module Maven project with 2 modules, 2 Spring applications; let's call them application 1 and application 2.
Starting each of them via IntelliJ IDEA's green "run" button works fine; now I'd like to automate things and run via Docker.
I have Dockerfiles that look like this in both cases
(in both modules it's the same, only the JAR name differs):
FROM adoptopenjdk:11-jre-hotspot
MAINTAINER *my name here lol*
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait /wait
RUN chmod +x /wait
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} /application1-0.0.1-SNAPSHOT-jar-with-dependencies.jar
# run the wait helper (it blocks until WAIT_HOSTS are reachable), then start the app;
# note that a shell-form CMD next to an exec-form ENTRYPOINT would never run /wait
CMD /wait && java -jar /application1-0.0.1-SNAPSHOT-jar-with-dependencies.jar
I also have docker-compose:
version: '2.1'

services:
  application1:
    container_name: app1
    build:
      context: ../app1
    image: docker.io/myname/app1:latest
    hostname: app1
    ports:
      - "8080:8080"
    networks:
      - spring-cloud-network-app1
  application2:
    container_name: app2
    build:
      context: ../app2
    depends_on:
      application1:
        condition: service_started
    links:
      - application1
    image: docker.io/myname/app2:latest
    environment:
      WAIT_HOSTS: application1:8080
    ports:
      - "8070:8070"
    networks:
      - spring-cloud-network-app2

networks:
  spring-cloud-network-app1:
    driver: bridge
  spring-cloud-network-app2:
    driver: bridge
What I do currently is:
I run maven package for each module and receive files like "application1(-2)-0.0.1-SNAPSHOT-jar-with-dependencies.jar" in both target folders.
"docker build -t springio/app1 ."
"docker-compose up --build"
And it works, but I feel like I'm doing some extra steps.
How can I set up the project so that I ONLY have to run docker-compose?
(after each time I change things in the code)
Again, I know it's a quite simple thing but I kinda lost the logic.
Thanks!
P.S.
Ah, and about the "...docker-compose-wait/releases/download/2.9.0/wait /wait":
it's important that the apps start one after another; I tried different solutions, but unfortunately none work as well as I'd like. I guess I'll leave it as is.
So, again, if anyone ever wonders how to do the things I asked: you need a multi-stage build Dockerfile.
It'll look like this:
#
# Build stage
#
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package
#
# Package stage
#
FROM openjdk:11-jre-slim
COPY --from=build /home/app/target/demo-0.0.1-SNAPSHOT.jar /usr/local/lib/demo.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/lib/demo.jar"]
What it does: it first builds the jar file in the build stage, copies it into the package stage, and eventually runs it.
That allows you to run your app in Docker by running only docker-compose.
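To tie it together, here is a minimal sketch of a matching compose service (assuming the multi-stage Dockerfile above sits in each module directory; names are carried over from the question). With this, docker-compose up --build rebuilds the jar and the image in one go:

version: '2.1'
services:
  application1:
    build:
      context: ../app1
    image: docker.io/myname/app1:latest
    ports:
      - "8080:8080"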
Given a spring boot app that writes files to /var/lib/app/files.
I create a Docker image with the Gradle task:
./gradlew bootBuildImage --imageName=app:latest
Then, I want to use it in docker-compose:
version: '3.5'
services:
  app:
    image: app:latest
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc.
volumes:
  app-storage:
This will fail because the folder is created during docker-compose up and is owned by root; the app therefore has no write access to it.
The quick fix is to run the image as root by specifying user: root:
version: '3.5'
services:
  app:
    image: app:latest
    user: root # <------------ required
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc.
volumes:
  app-storage:
This works fine, but I do not want to run it as root, so I wonder how to achieve that. Normally I would create a Dockerfile that creates the desired folder with the correct ownership and write permissions. But as far as I know, buildpacks do not use a custom Dockerfile, and hence bootBuildImage would not use it - correct? How can we create writable volumes then?
By inspecting the image I found that the buildpack uses /cnb/lifecycle/launcher to launch the application. Hence I was able to customize the docker command and fix the owner of the specific folder before launch:
version: '3.5'
services:
  app:
    image: app:latest
    # enable the app to write to the storage folder (docker will create it as root by default)
    user: root
    command: "/bin/sh -c 'chown 1000:1000 /var/lib/app/files && /cnb/lifecycle/launcher'"
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc.
volumes:
  app-storage:
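For reference, the launcher entrypoint mentioned above can be found by inspecting the image, e.g. with something like:

docker inspect --format '{{json .Config.Entrypoint}}' app:latest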
Still, this is not very nice, because it is not straightforward (and hence my future self will need to spend time understanding it again), and it is also very limited in its extensibility.
Update 30.10.2020 - Spring Boot 2.3
We ended up creating another Dockerfile/layer so that we do not need to hassle with this in the docker-compose file:
# The base_image should hold a reference to the image created by ./gradlew bootBuildImage
ARG base_image
FROM ${base_image}
ENV APP_STORAGE_LOCAL_FOLDER_PATH /var/lib/app/files
USER root
RUN mkdir -p ${APP_STORAGE_LOCAL_FOLDER_PATH}
RUN chown ${CNB_USER_ID}:${CNB_GROUP_ID} ${APP_STORAGE_LOCAL_FOLDER_PATH}
USER ${CNB_USER_ID}:${CNB_GROUP_ID}
ENTRYPOINT /cnb/lifecycle/launcher
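Building this layer could then look as follows (the output tag is an assumption; app:latest is the image produced by bootBuildImage):

docker build --build-arg base_image=app:latest -t app:writable .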
Update 25.11.2020 - Spring Boot 2.4
Note that the above Dockerfile will result in this error:
ERROR: failed to launch: determine start command: when there is no default process a command is required
The reason is that the default entrypoint of the paketo builder changed. Changing the entrypoint from /cnb/lifecycle/launcher to the new one fixes it:
ENTRYPOINT /cnb/process/web
See also this question: ERROR: failed to launch: determine start command: when there is no default process a command is required
I'm using Google Cloud Build to run CI for my Nx workspace. Here's the cloudbuild.yaml file:
steps:
- name: 'gcr.io/cloud-builders/docker'
  id: Test_Affected_Projects
  entrypoint: 'sh'
  args: [
    '-c',
    'docker build --build-arg NPM_TOKEN=$$NPM_TOKEN --file ./test/Dockerfile.test-runner -t mha-test-runner .'
  ]
  secretEnv: ['NPM_TOKEN']
# Remove the docker image
secrets:
- kmsKeyName: /path/to/key
  secretEnv:
    NPM_TOKEN: some_key_value
(There are currently two steps, but I removed the second for brevity. The second step just removes the created docker image.)
Now the command inside the Docker image runs all the tests for the Nx workspace. The thing is, Nx has a great command that tests only the affected libraries. But for that command to run, the git history of the project needs to be available.
I've tried to get the git history into the Cloud Build context, but I haven't been able to get it working. This is the step I added to try to get everything working:
steps:
- name: 'gcr.io/cloud-builders/git'
  args: ['fetch', '--unshallow']
- name: 'gcr.io/cloud-builders/docker'
  id: Test_Affected_Projects
  entrypoint: 'sh'
  args: [
    '-c',
    'docker build --build-arg NPM_TOKEN=$$NPM_TOKEN --file ./test/Dockerfile.test-runner -t mha-test-runner .'
  ]
  secretEnv: ['NPM_TOKEN']
# Remove the docker image
secrets:
- kmsKeyName: /path/to/key
  secretEnv:
    NPM_TOKEN: some_key_value
That new first step, which should fetch the git history, fails: the error message says that it's not a git repo.
My question is: how can I get the git history in the cloud build context so that I can use it with different commands in the build/testing process?
I think the reason this isn't working is that you need to store the GitHub credentials in the Cloud Build environment.
I believe this guide can help: it will allow you to do so, and then you will be able to call git fetch --unshallow as you already have.
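As a rough sketch of what that guide boils down to (the secret name and key setup are assumptions; in the guide the SSH key comes from Secret Manager or KMS), the fetch step would be preceded by a step that installs the key:

steps:
# write the SSH key and known_hosts so that later git steps can authenticate
- name: 'gcr.io/cloud-builders/git'
  secretEnv: ['SSH_KEY']
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    echo "$$SSH_KEY" >> /root/.ssh/id_rsa
    chmod 400 /root/.ssh/id_rsa
    ssh-keyscan -t rsa github.com > /root/.ssh/known_hosts
  volumes:
  - name: 'ssh'
    path: /root/.ssh
# with credentials in place, the history can be fetched
- name: 'gcr.io/cloud-builders/git'
  args: ['fetch', '--unshallow']
  volumes:
  - name: 'ssh'
    path: /root/.ssh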