This is a docker-compose file for a .NET Core web application project.
I am trying to understand what these lines mean.
What does ~/clrdbg:/clrdbg:ro mean?
When I create files, they are stored in the root of my project folder as well. Aren't they supposed to be stored in container volumes?
How do I map volumes properly, and how do I delete the contents of these volumes?
version: '2'
services:
  is.mvcclient:
    build:
      args:
        source: ${DOCKER_BUILD_SOURCE}
    volumes:
      - ~/clrdbg:/clrdbg:ro
    entrypoint: tail -f /dev/null
    labels:
      - "com.microsoft.visualstudio.targetoperatingsystem=linux"
~/clrdbg:/clrdbg:ro means that the local folder ~/clrdbg will be available inside the container at /clrdbg, and local changes will be reflected in the container without the need to rebuild the image. The :ro suffix means the mount is read-only, so the container can't change the files in that folder.
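For comparison, the same mount can be expressed with plain docker run; a minimal sketch (the image name is an assumption, since the compose file builds it):

docker run -v ~/clrdbg:/clrdbg:ro is.mvcclient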
Your volume is mounted to a host folder (in this case, I assume your project's root). As mentioned in the previous point, changes in the local filesystem are therefore reflected in the container.
First you have to get your project into the container, so you can COPY/ADD it into the image at build time. After that, you need something along the lines of:
services:
  is.mvcclient:
    volumes:
      - data-volume:/clrdbg
volumes:
  data-volume:
By doing that, changes to the files in the container are stored only in the named volume, not in your local files. Of course, that goes both ways: changes to local files won't be reflected in the container.
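To answer the clean-up part of the question: a named volume persists until you remove it explicitly. A short sketch of the relevant commands (the myproject_ prefix depends on your compose project name, so it is an assumption here):

docker-compose down -v                  # remove containers AND named volumes in one step
# or, more selectively:
docker volume ls                        # list volumes; compose prefixes them with the project name
docker volume rm myproject_data-volume  # remove just this volume ("myproject" is an assumption)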
I have a large set of read-only configuration files (around 4k of them) that are used by a microservice to process some XML files, and that are supposed to be read via Apache Commons Configuration.
These files are of the following types:
Properties files
XML
DTD
XFO
XSLT
Five of these files need some environment variables substituted into their content, such as a third-party software location, or different service URLs depending on the environment the files are deployed in.
Now, I need to make these files available to 4 microservices at run time.
I'm using the fabric8.io docker-maven-plugin with a Dockerfile for image generation, and Kubernetes, Helm, a Jenkinsfile, and ArgoCD for the Spring Boot microservices' CI/CD.
The two challenges I'm facing are how to substitute the variables inside these static files, and how to make the files available to each pod.
I have three solutions in mind, but I would like to know the best/optimal 12-factor approach to this problem.
Solution 1: deploy the files in a separate pod and let the other pods access a volume mount that it provides.
Solution 2: add the files to each microservice image during the Docker image build.
Solution 3: add the files via an additional container in each microservice pod.
You could upload these files to a Kubernetes ConfigMap.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config-haproxy
data:
  haproxy.cfg: "complete file contents"
A ConfigMap can contain an entire file, and you can mount that file into a directory in the pod:
volumeMounts:
  - mountPath: /usr/local/etc/haproxy
    name: config
volumes:
  - name: config
    configMap:
      name: env-config-haproxy
      items:
        - key: haproxy.cfg
          path: haproxy.cfg
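As an alternative to writing the manifest by hand, the ConfigMap can be generated from the files on disk; a sketch, assuming the config files live in a local conf/ directory:

# generate or refresh the ConfigMap from files (the directory name is an assumption)
kubectl create configmap env-config-haproxy --from-file=conf/ --dry-run=client -o yaml | kubectl apply -f -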
There is another solution: mount the files as a volume (backed by cloud storage) and substitute values on access within the service; that is the solution I'd go with by default.
Solution 1, and especially Solution 3, add a lot of complexity. Solution 2 may be a good choice too; however, to pick the best option you really need to answer another question: how do the config files and environment substitutions change with respect to the application container versions?
That is, when you change the files, should all of the services get new versions?
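For the substitution challenge specifically, one common pattern is to ship the files as templates and render them at container startup. A minimal entrypoint sketch using envsubst from GNU gettext; the paths and variable names here are assumptions:

#!/bin/sh
# render only the listed variables into the config, then hand off to the service
envsubst '${THIRD_PARTY_HOME} ${SERVICE_URL}' < /templates/service.xml.tmpl > /config/service.xml
exec java -jar /app/service.jar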
I needed to create a Docker image of a Spring Boot application, and I achieved that by creating a Dockerfile and building it into an image. Then I used docker run to bring up a container. This container is used for all the activities for which my application was written.
My problem, however, is that the JAR file I use needs constant changes, and that requires me to rebuild the Docker image every time. Furthermore, I need to take the contents of the previously running container and transfer them into a container created from the newly built image.
I know this whole process can be written as a shell script and executed every time my JAR file changes. But is there any tool I can use to automate it in a simple manner?
Here is my Dockerfile:
FROM java:8
WORKDIR /app
ADD ./SuperApi ./SuperApi
ADD ./config ./config
ADD ./Resources ./Resources
EXPOSE 8000
CMD java -jar SuperApi/SomeName.jar --spring.config.location=SuperApi/application.properties
If you have a JAR file that you need to copy into an otherwise static Docker image, you can use a bind mount to avoid rebuilding repeatedly. A bind mount shares a directory from the host into the container.
Say your project directory (the build location where the JAR file is located) on the host machine is /home/vishwas/projects/my_project, and you need to have the contents placed at /opt/my_project inside the container. When starting the container from the command line, use the -v flag:
docker run -v /home/vishwas/projects/my_project:/opt/my_project [...]
Changes made to files under /home/vishwas/projects/my_project locally will be visible immediately inside the container [1], so no need to rebuild (and probably no need to restart) the container.
If using docker-compose, this can be expressed using a volumes stanza under the services listing for that container:
volumes:
  - type: bind
    source: /home/vishwas/projects/my_project
    target: /opt/my_project
This works for development, but later on you'll likely want to bundle the JAR file into the image instead of sharing it from the host system (so it can be shipped to production). When that time comes, re-build the image with a COPY directive added to the Dockerfile. Note that COPY sources are resolved relative to the build context rather than as absolute host paths, so assuming you build from /home/vishwas/projects:
COPY my_project /opt/my_project
[1]: Worth noting that it will default to read/write, so the container will also be able to modify your project files. To mount as read-only, use: docker run -v /home/vishwas/projects/my_project:/opt/my_project:ro
You are looking for Docker Compose.
You can build and start containers with a single command using Compose.
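A minimal sketch of what that could look like for the Dockerfile above; the service name and the bind mount are assumptions:

version: "3"
services:
  superapi:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - ./SuperApi:/app/SuperApi

With this in place, docker-compose up --build rebuilds the image and starts the container in a single command, and the bind mount means JAR changes are visible without any rebuild at all.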
Consider the following YAML in my docker-compose.yml file (file format version 3.7) that sets up volume mounting using the short-form syntax, as specified in the docs:
volumes:
  - ./logging:/var/log/cron
This maps the relative path logging on my host machine to the /var/log/cron folder inside the container. When I run docker-compose up, if the logging folder doesn't exist on my host machine, Docker creates it. All good there.
Now, if I change the above to long-form syntax:
volumes:
  - type: bind
    source: ./logging
    target: /var/log/cron
Now when I run docker-compose up, it DOES NOT create the logging folder if it doesn't exist on my host machine. I get:
Cannot create container for service app: b'Mount denied:\nThe source path "C:/Users/riptusk331/logging"\ndoesn\'t exist and is not known to Docker'
Does anyone know why the short-form syntax creates the host path when it doesn't exist, but the long form does not and instead gives an error?
I'm using Docker Desktop for Windows.
I have downloaded a PostgreSQL Docker image, and at the moment I am editing some config files. The problem I have is that whenever I edit the config files and commit the container (save it as a new image), it never saves anything. The image is still the same as the one I downloaded.
Image I am using:
https://hub.docker.com/_/postgres/
I believe this is the relevant Dockerfile:
https://github.com/docker-library/postgres/blob/a00e979002aaa80840d58a5f8cc541342e06788f/9.6/Dockerfile
This is what I did:
1. Run the postgresql docker container
2. Enter the terminal of the container. docker exec -i -t {id of container} /bin/bash
3. Edit some config files.
4. Exit the container.
5. Commit the changes by using docker commit {containerid} {new name}
6. Stop the old container and start the new one.
The new container is created, but if I start it from the new image and check the config files I edited, my changes are not there. No changes were committed.
What am I doing wrong here?
The Dockerfile contains a volume declaration:
https://github.com/docker-library/postgres/blob/a00e979002aaa80840d58a5f8cc541342e06788f/9.6/Dockerfile#L52
VOLUME /var/lib/postgresql/data
Any file edits under this path will not be saved by a Docker image commit. These files are deliberately excluded because they hold your container's state. Images, on the other hand, are designed to create new containers, so VOLUME is the mechanism that keeps state separate from images.
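You can verify which paths an image declares as volumes without reading its Dockerfile; for example (the image tag is an assumption):

docker image inspect --format '{{ .Config.Volumes }}' postgres:9.6
# prints something like: map[/var/lib/postgresql/data:{}]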
It would appear that you're attempting to use Docker images as a mechanism for database backup and recovery. This is ill-advised, as Docker's layered filesystem is less performant than the native filesystem typically exposed to a volume.
As Mark rightly points out, your data is left behind because of the volume definition, which should not be altered for general production use.
If you have a legitimate reason to keep the data inside the produced image, you can move the Postgres data out of the volume path by adding the following to your Dockerfile:
ENV PGDATA /var/lib/postgresql/my_data
RUN mkdir -p $PGDATA
I've been using this technique to produce database images for testing, to speed up the feedback loop.
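For context, a complete minimal Dockerfile for this technique might look like the following (the base image tag is an assumption):

FROM postgres:9.6
# point Postgres at a path that is NOT declared as a VOLUME,
# so data written there is captured by docker commit
ENV PGDATA /var/lib/postgresql/my_data
RUN mkdir -p $PGDATA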
I have a Maven project and I'm running my Maven builds inside Docker. The problem is that the build downloads all of the Maven dependencies every time I run it and does not cache any of them.
I found some workarounds where you mount your local .m2 folder into the Docker container, but that makes the build depend on the local setup. What I would like to do instead is create a long-lived volume and mount it at the .m2 folder inside the container. That way, when I run the Docker build a second time, it will not download everything again, and the build will not depend on the environment.
How can I do this with docker-compose?
Without knowing your exact configuration, I would use something like this...
version: "2"
services:
maven:
image: whatever
volumes:
- m2-repo:/home/foo/.m2/repository
volumes:
m2-repo:
This will create a named data volume called m2-repo, mapped to /home/foo/.m2/repository (adjust the path as necessary). The data volume will survive up/down/start/stop of the Docker Compose project.
You can delete the volume by running docker-compose down -v, which will destroy the containers and named volumes.
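To confirm the cache works, run the build twice; the second run should resolve dependencies from the volume instead of the network (the mvn goal is an assumption about your project):

docker-compose run --rm maven mvn package   # first run downloads into m2-repo
docker-compose run --rm maven mvn package   # second run reads from the cached repository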