How to mount docker volume into my docker project using compose? - maven

I have a Maven project and I'm running my Maven builds inside Docker. The problem is that it downloads all of the Maven dependencies every time I run it and does not cache any of those downloads.
I found some workarounds for that, where you mount your local .m2 folder into the Docker container. But this makes the builds depend on the local setup. What I would like to do is create a long-lived volume and mount that volume to the .m2 folder inside the container. That way, when I run the Docker build a second time, it will not download everything again, and it will not depend on my local environment.
How can I do this with docker-compose?

Without knowing your exact configuration, I would use something like this...
version: "2"
services:
  maven:
    image: whatever
    volumes:
      - m2-repo:/home/foo/.m2/repository
volumes:
  m2-repo:
This will create a data volume called m2-repo that is mapped to /home/foo/.m2/repository (adjust the path as necessary). The data volume will survive up/down/start/stop of the Docker Compose project.
You can delete the volume by running something like docker-compose down -v, which will destroy containers and volumes.
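For example (a sketch only; it assumes the whatever image actually contains Maven and that your project source is mounted or baked in as its working directory), you could run the build through that service so later runs reuse the cached repository:
docker-compose run --rm maven mvn -B package
The first run populates the m2-repo volume; subsequent runs resolve dependencies from it instead of downloading them again.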

Related

Is there a way to automate the creation of Docker Image?

I needed to create a Docker image of a Spring Boot application, and I achieved that by creating a Dockerfile and building it into an image. Then, I used "docker run" to bring up a container. This container is used for all the activities for which my application was written.
My problem, however, is that the JAR file that I have used needs constant changes, and that requires me to rebuild the Docker image every time. Furthermore, I need to take the contents of the earlier running Docker container and transfer them into a container created from the newly built image.
I know this whole process can be written as a shell script and executed every time I have changes to my JAR file. But is there any tool I can use to somehow automate it in a simple manner?
Here is my Dockerfile:
FROM java:8
WORKDIR /app
ADD ./SuperApi ./SuperApi
ADD ./config ./config
ADD ./Resources ./Resources
EXPOSE 8000
CMD java -jar SuperApi/SomeName.jar --spring.config.location=SuperApi/application.properties
If you have a JAR file that you need to copy into an otherwise static Docker image, you can use a bind mount to save needing to rebuild repeatedly. This allows for directories to be shared from the host into the container.
Say your project directory (the build location where the JAR file is located) on the host machine is /home/vishwas/projects/my_project, and you need to have the contents placed at /opt/my_project inside the container. When starting the container from the command line, use the -v flag:
docker run -v /home/vishwas/projects/my_project:/opt/my_project [...]
Changes made to files under /home/vishwas/projects/my_project locally will be visible immediately inside the container [1], so there is no need to rebuild (and probably no need to restart) the container.
If using docker-compose, this can be expressed using a volumes stanza under the services listing for that container:
volumes:
  - type: bind
    source: /home/vishwas/projects/my_project
    target: /opt/my_project
This works for development, but later on, it's likely you'll want to bundle the JAR file into the image instead of sharing it from the host system (so it can be placed into production). When that time comes, just re-build the image with a COPY directive in the Dockerfile (note that the source path is relative to the Docker build context, not an absolute host path):
COPY my_project /opt/my_project
[1]: Worth noting that it will default to read/write, so the container will also be able to modify your project files. To mount as read-only, use: docker run -v /home/vishwas/projects/my_project:/opt/my_project:ro
You are looking for Docker Compose.
With Compose you can build the image and start the container with a single command.
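As a sketch only (the service name, port mapping, and bind-mount path are assumptions based on the Dockerfile above, not part of the original answer), a compose file for this application could look like:
version: '2'
services:
  superapi:
    build: .
    ports:
      - "8000:8000"
    volumes:
      # bind mount so a freshly built JAR is picked up without rebuilding the image
      - ./SuperApi:/app/SuperApi
Running docker-compose up --build then rebuilds the image and restarts the container in one step.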

Build docker image without docker installed

Is it somehow possible to build images without having Docker installed? On the Maven build of my project I'd like to produce a Docker image, but I don't want to force others to install Docker on their machines.
I can think of using a VirtualBox image with Docker installed, but that is a rather heavy solution. Is there some way to build the image with a Maven plugin only, some Go code, or an already prepared VirtualBox image for exactly this purpose?
It boils down to the question of how to use Docker without forcing users to install anything, either just for the build or even for running Docker images.
UPDATE
There are some, not really up-to-date, Maven plugins for virtual machine provisioning with Vagrant or with VirtualBox. I have also found an article about building Docker images without Docker using Bazel.
So far I see two options: either I can somehow build the images only, or I can run some VM with a Docker daemon inside (which could be used not only for builds, but also for integration tests).
We can create a Docker image without Docker being installed.
Jib Maven and Gradle Plugins
Google has an open source tool called Jib that is relatively new, but quite interesting for a number of reasons. Probably the most interesting thing is that you don't need docker to run it - it builds the image using the same standard output as you get from docker build but doesn't use docker unless you ask it to - so it works in environments where docker is not installed (not uncommon in build servers). You also don't need a Dockerfile (it would be ignored anyway), or anything in your pom.xml to get an image built in Maven (Gradle would require you to at least install the plugin in build.gradle).
Another interesting feature of Jib is that it is opinionated about layers, and it optimizes them in a slightly different way than the multi-layer Dockerfile created above. Just like in the fat jar, Jib separates local application resources from dependencies, but it goes a step further and also puts snapshot dependencies into a separate layer, since they are more likely to change. There are configuration options for customizing the layout further.
Please refer to this link: https://cloud.google.com/blog/products/gcp/introducing-jib-build-java-docker-images-better
For an example with Spring Boot, refer to https://spring.io/blog/2018/11/08/spring-boot-in-a-container
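As a minimal sketch (the plugin version and target image name are placeholders, not from the original answer), enabling Jib in a Maven build looks roughly like this in pom.xml:
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.0</version>
  <configuration>
    <to>
      <!-- placeholder registry/image name -->
      <image>registry.example.com/my-app</image>
    </to>
  </configuration>
</plugin>
You can then run mvn compile jib:build to build and push the image without a local Docker daemon, or mvn compile jib:dockerBuild to build into a local daemon when one is available.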
Have a look at the following tools:
Fabric8-maven-plugin - http://maven.fabric8.io/ - good maven integration, uses a remote docker (openshift) cluster for the builds.
Buildah - https://github.com/containers/buildah - builds without a docker daemon but does have other prerequisites.
Fabric8-maven-plugin
The fabric8-maven-plugin brings your Java applications on to Kubernetes and OpenShift. It provides a tight integration into Maven and benefits from the build configuration already provided. This plugin focus on two tasks: Building Docker images and creating Kubernetes and OpenShift resource descriptors.
fabric8-maven-plugin seems particularly appropriate if you have a Kubernetes / OpenShift cluster available. It uses the OpenShift APIs to build and optionally deploy an image directly to your cluster.
I was able to build and deploy their zero-config spring-boot example extremely quickly, no Dockerfile necessary, just write your application code and it takes care of all the boilerplate.
Assuming you have the basic setup to connect to OpenShift from your desktop already, it will package up the project .jar in a container and start it on OpenShift. The minimum Maven configuration is to add the plugin to your pom.xml build/plugins section:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.5.41</version>
</plugin>
then build+deploy using
$ mvn fabric8:deploy
If you require more control and prefer to manage your own Dockerfile, it can handle this too; this is shown in samples/secret-config.
Buildah
Buildah is a tool that facilitates building Open Container Initiative (OCI) container images. The package provides a command line tool that can be used to:
create a working container, either from scratch or using an image as a starting point
create an image, either from a working container or via the instructions in a Dockerfile
images can be built in either the OCI image format or the traditional upstream docker image format
mount a working container's root filesystem for manipulation
unmount a working container's root filesystem
use the updated contents of a container's root filesystem as a filesystem layer to create a new image
delete a working container or an image
rename a local container
I don't want to force others to install docker on their machines.
If by "without Docker installed" you mean without having to install Docker locally on every machine running the build, you can leverage the Docker Engine API, which allows you to call a Docker daemon from a distant host.
The Docker Engine API is a RESTful API accessed by an HTTP client such as wget or curl, or the HTTP library which is part of most modern programming languages.
For example, the Fabric8 Docker Maven Plugin does just that using the DOCKER_HOST parameter. You'll need a recent Docker version and you'll have to configure at least one Docker daemon properly so it can securely accept remote requests (there are a lot of resources on this subject, such as the official doc, here or here). From then on, your Docker build can be done remotely without having to install Docker locally.
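A rough sketch (the host name, port, and use of TLS are assumptions, and it assumes the fabric8 docker-maven-plugin is already configured in your pom.xml so the docker:build goal is available):
# point the build at a remote Docker daemon instead of a local one
export DOCKER_HOST=tcp://build-docker.example.com:2376
export DOCKER_TLS_VERIFY=1   # only if the remote daemon is secured with TLS
mvn clean package docker:build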
Google has released Kaniko for this purpose. It should be run as a container, whether in Kubernetes, Docker or gVisor.
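A minimal sketch of running the Kaniko executor as a plain container (the registry name is a placeholder, and the paths assume the Dockerfile sits at the root of the mounted build context); in CI systems such as Kubernetes or GitLab the same executor image runs without any local Docker daemon:
docker run --rm \
  -v "$PWD":/workspace \
  -v "$HOME/.docker/config.json":/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile /workspace/Dockerfile \
  --context dir:///workspace \
  --destination registry.example.com/my-app:latest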
I was running into the same problems, and I did not find any solution, so I developed odagrun, a runner for GitLab with an integrated registry API that can update Docker Hub, MicroBadger, etc.
It is open source and has an MIT license.
It is ideal for creating a Docker image on the fly, without the need for a Docker daemon, a root account, or any base image at all (image: scratch will do). It is currently still in development, but I use it every day.
Requirements
project repository on Gitlab
an OpenShift cluster (an OpenShift Online Starter cluster will do for most medium/small projects)
An extract showing how the Docker image for this project was created:
# create and push image to ImageStream:
build_rootfs:
  image: centos
  stage: build-image
  dependencies:
    - build
  before_script:
    - mkdir -pv rootfs
    - cp -v output/oc-* rootfs/
    - mkdir -pv rootfs/etc/pki/tls/certs
    - mkdir -pv rootfs/bin-runner
    - cp -v /etc/pki/tls/certs/ca-bundle.crt rootfs/etc/pki/tls/certs/ca-bundle.crt
    - chmod -Rv 777 rootfs
  tags:
    - oc-runner-shared
  script:
    - registry_push --rootfs --name=test-$CI_PIPELINE_ID --ISR --config

docker-compose.vs.release.yml Volume binding on VS 2017

This is a docker-compose file for a .NET Core web application project.
I am trying to understand what these lines mean.
What does ~/clrdbg:/clrdbg:ro mean?
When I create files, they are stored in the root of my project folder as well. Aren't they supposed to be stored in container volumes?
How do I map volumes properly and delete the contents of these volumes?
version: '2'
services:
  is.mvcclient:
    build:
      args:
        source: ${DOCKER_BUILD_SOURCE}
    volumes:
      - ~/clrdbg:/clrdbg:ro
    entrypoint: tail -f /dev/null
    labels:
      - "com.microsoft.visualstudio.targetoperatingsystem=linux"
~/clrdbg:/clrdbg:ro basically means that the local folder ~/clrdbg will be available in the container under /clrdbg, and local changes will also be reflected in the container without the need to rebuild the image. ro means that it is read-only, so the container can't change the files in that folder.
Your volume is mounted to a host folder (in this case I assume your project's root). As mentioned in the previous point, in that case changes in the local filesystem are reflected in the container.
First you have to get your project into the container, so I guess you can COPY/ADD it to the container on image build. After that you have to do something along the lines of:
services:
  is.mvcclient:
    volumes:
      - data-volume:/clrdbg
volumes:
  data-volume:
By doing that, all the changes to the files in the container will only be reflected in those files, not the local ones. Of course, that goes both ways - changes to local files won't be reflected in the container files.

Laradock (container) files on Windows

I installed docker toolbox 1.11.2 and Laradock v.2 cloned from GitHub.
Everything seems to work except the laradock_workspace_1 container. When it is generated, it does not create files on the host machine (Windows 7 64-bit). In the docker-compose.yml I have tried playing with the volumes as suggested here:
### Laravel Application Code Container ######################
volumes_source:
  build: ./volumes/application
  volumes:
    - ../:/var/www/laravel
If I change the last line to ../.. and then run docker-compose up and docker exec -it laradock_workspace_1 ls, I can see that it is traversing the folders on the host machine. I just don't see any files.
My goal here is to make the actual Laravel code external so I can edit them on the host machine and use git.
I can use the Kitematic app to make the changes I want, but they seem to be lost if I do a docker-compose down (and I get errors about things still being in use).
I'm new to docker so any help is appreciated.
First, make sure your docker-machine is running. If it is, then follow the steps below:
Open up the VirtualBox GUI, right-click your docker VM, select Settings, then go to Shared Folders.
Change the c\users share to whatever folder your code lies in.
This will mount your desired folder to /c/Users in the docker-machine vm.
After this, change the docker-compose.yml in the laradock folder to this:
### Laravel Application Code Container ######################
volumes_source:
  build: ./volumes/application
  volumes:
    - /c/Users/pomodoro.xyz/code:/var/www/laravel
The logic behind this is that, since we are running Docker in a VM, the docker-compose command looks for the folder in the VM, not on the Windows machine. That's why we have provided the VM path in the docker-compose file.

Simple docker deployment tactics

Hey guys, so I've spent the past few days really digging into Docker and I've learned a ton. I'm getting to the point where I'd like to deploy to a DigitalOcean droplet, but I'm starting to wonder about the strategy of building/deploying an image.
I have a perfect Dev setup where I've created a file volume tied to my app.
docker run -d -p 80:3000 --name pug_web -v $DIR/app:/Development test_web
I'd hate to have to run the app in production out of the /Development folder, where I'm actually building the app. This is a Node.js/Express app and I'd love to concat/minify/etc. into a local dist folder and add that build folder to a new dist-ready image.
I guess what I'm asking is: A) can I have different Dockerfiles, one for Dev and one for Dist? If not, B) can I have if statements in my Dockerfiles that would do something like: if ENV == 'dist', add /dist, etc.?
I'm struggling to figure out how to move this from a local Dev environment to a tightened-up, production-ready image without any conditionals.
I do both.
My Dockerfile checks out the code for the application from Git. During development I mount a volume over the top of this folder with the version of the code I'm working on. When I'm ready to deploy to production, I just check into Git and re-build the image.
I also have a script that is executed from the ENTRYPOINT command. The script looks at the environment variable "ENV" and if it is set to "DEV" it will start my development server with debugging turned on, otherwise it will launch the production version of the server.
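A rough sketch of such an entrypoint script (the server commands are placeholders for whatever your Node.js app actually uses):
#!/bin/sh
# entrypoint.sh - choose the server mode based on the ENV variable
if [ "$ENV" = "DEV" ]; then
    # development server with debugging turned on
    exec npm run dev
else
    # production build
    exec node dist/server.js
fi
In the Dockerfile this would be wired up with something like ENTRYPOINT ["/entrypoint.sh"].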
Alternatively, you can avoid using Docker in development and instead have a Dockerfile at the root of your repo. You can then use your CI server (in our case Jenkins, but Docker Hub also allows for automated build repositories that can do this for you, if you're a small team or don't have access to a dedicated build server) to build the image from that Dockerfile.
Then you can just pull the image and run it on your production box.
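As a sketch (the registry and image names are placeholders), the build-then-deploy flow might look like:
# on the build/CI machine
docker build -t myregistry.example.com/pug_web:latest .
docker push myregistry.example.com/pug_web:latest
# on the production box
docker pull myregistry.example.com/pug_web:latest
docker run -d -p 80:3000 --name pug_web myregistry.example.com/pug_web:latest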

Resources