I needed to create a Docker image of a Spring Boot application, and I achieved that by writing a Dockerfile and building it into an image. Then I used "docker run" to bring up a container. This container is used for all the activities for which my application was written.
My problem, however, is that the JAR file I use needs constant changes, and that requires me to rebuild the Docker image every time. Furthermore, I need to take the contents of the previously running container and transfer them into a container created from the newly built image.
I know this whole process can be written as a shell script and executed every time my JAR file changes. But is there any tool I can use to automate it in a simple manner?
Here is my Dockerfile:
FROM java:8
WORKDIR /app
ADD ./SuperApi ./SuperApi
ADD ./config ./config
ADD ./Resources ./Resources
EXPOSE 8000
CMD java -jar SuperApi/SomeName.jar --spring.config.location=SuperApi/application.properties
If you have a JAR file that you need to copy into an otherwise static Docker image, you can use a bind mount to avoid rebuilding the image repeatedly. This allows directories to be shared from the host into the container.
Say your project directory (the build location where the JAR file is located) on the host machine is /home/vishwas/projects/my_project, and you need to have the contents placed at /opt/my_project inside the container. When starting the container from the command line, use the -v flag:
docker run -v /home/vishwas/projects/my_project:/opt/my_project [...]
Changes made to files under /home/vishwas/projects/my_project locally will be visible immediately inside the container [1], so no need to rebuild (and probably no need to restart) the container.
If using docker-compose, this can be expressed using a volumes stanza under the services listing for that container:
volumes:
  - type: bind
    source: /home/vishwas/projects/my_project
    target: /opt/my_project
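For context, a complete docker-compose.yml using that stanza might look like this (a minimal sketch; the service name and image tag are assumptions for illustration):
version: "3.2"
services:
  superapi:
    image: superapi:latest
    volumes:
      - type: bind
        source: /home/vishwas/projects/my_project
        target: /opt/my_project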
This works for development, but later on you'll likely want to bundle the JAR file into the image instead of sharing it from the host system (so it can be placed into production). When that time comes, re-build the image after adding a COPY directive to the Dockerfile. Note that the source path is resolved relative to the build context, not the host filesystem root, so the project files must live inside the context:
COPY ./my_project /opt/my_project
[1]: Worth noting that the mount will default to read/write, so the container will also be able to modify your project files. To mount as read-only, use: docker run -v /home/vishwas/projects/my_project:/opt/my_project:ro
You are looking for Docker Compose.
You can build and start containers with a single command using Compose.
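For example, a minimal sketch of a docker-compose.yml for your case (the service name and published port are assumptions based on your Dockerfile):
services:
  superapi:
    build: .
    ports:
      - 8000:8000
After changing the JAR, a single command rebuilds the image and replaces the running container:
docker-compose up --build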
Related
I'm studying Docker.
I understand docker-compose to be a tool that conveniently runs multiple containers from one script.
First, since a Dockerfile only handles one container, is it correct to think that Docker Compose is backwards compatible with the Dockerfile?
I thought Docker Compose could cover everything, but I have seen Docker Compose and Dockerfiles used together.
Let's take Spring Boot as an example.
Can I use only one docker-compose file to run the db container required for the application, build the application, check the port being used, and run the JAR file?
Or do I have to separate the Dockerfiles and their roles, and use the two together?
When working with Docker, there are two concepts: Image and Container.
Images are like mini operating systems stored in a file which is built specifically with our application in mind. Think of it like a custom operating system which is sitting on your hard disk when your computer is switched off.
Containers are running instances of your image.
Imagine you had a shared hard disk (or even CD/DVD if you are old school) which had an operating system which can run on multiple machines. The files on the disk are the "image", and those files running on a machine are a "container".
This is how Docker works, you have files on the machine which are known as the image, and running instances of those files are referred to as the container. Images can also be uploaded and shared for other users to download and run on their machine too.
This brings us to Docker vs Docker Compose.
Docker is the underlying technology which manages (creates, stores or shares) images, and runs containers.
We provide the Dockerfile to tell Docker how to create our images. For example, we say: start from the Python 3 base image, then install these requirements, then create these folders, then switch to this user, etc... (I'm oversimplifying the actual steps, but this is just to explain the point).
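A minimal sketch of such a Dockerfile might look like this (the file names and the user are assumptions for illustration):
# Start from the Python 3 base image
FROM python:3
# Install the requirements
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
# Create the folders the app needs and copy in the code
RUN mkdir -p /app
COPY ./app /app
WORKDIR /app
# Switch to a non-root user
RUN adduser --disabled-password --gecos "" appuser
USER appuser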
Once we have done that, we can create an image using Docker by running "docker build .". If you run that, Docker will execute each step in our Dockerfile and store the result as an image on the system.
Once the image is built, you can run it manually using something like this:
docker run <IMAGE_ID>
However, if you need to set up volumes and ports, you need to run it like this:
docker run -v /path/to/vol:/path/to/vol -p 8000:8000 <IMAGE_ID>
Often applications need multiple images to run. For example, you might have an application and a database, and you may also need to setup networks and shared volumes between them.
So you would need to write the above commands, with the appropriate configurations and IDs, for each container you want to run, every time you want to start your service...
As you might expect, this could become complex, tedious and difficult to manage very quickly...
This is where Docker Compose comes in...
Docker Compose is a tool used to define how Docker runs our images, in a YAML file which can be stored with our code and reused.
So, if we need to run our app image as a container, share port 8000 and map a volume, we can do this:
services:
  app:
    build:
      context: .
    ports:
      - 8000:8000
    volumes:
      - ./app:/app
Then, every time we need to start our app, we just run docker-compose up, and Docker Compose will handle all the complex docker commands behind the scenes.
So basically, the purpose of Docker Compose is to configure how our running services should work together to serve our application.
When we run docker-compose build, Docker Compose will run all the necessary docker build commands, to build all images needed for our project and tag them appropriately to keep track of them in the system.
In summary, Docker is the underlying technology used to create images and run them as containers, and Docker Compose is a tool that configures how Docker should run multiple containers to serve our application.
I hope this makes sense?
I suggest you go deeper into what Docker Compose is by reading Difference between Docker Compose Vs Dockerfile.
I quote part of what is explained in the above article:
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image whereas Docker Compose is a tool for defining and running multi-container Docker applications.
Docker Compose defines the services that make up your app in docker-compose.yml so they can be run together in an isolated environment. It gets an app running with one command: just run docker-compose up. Docker Compose uses the Dockerfile if you add the build command to your project's docker-compose.yml. Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use Compose to assemble the images using the build command.
As for your question: Docker Compose is flexible enough that you can fragment your composition logic into multiple YAML files, and combine them via the docker-compose command line as you need.
Here an example:
# Build the Docker infrastructure
docker-compose \
  -f network.yaml \
  -f database.yaml \
  -f application.yaml \
  build

# Run the application
docker-compose \
  -f application.yaml \
  up
To answer your question about the Spring Boot application: you can build the complete application through Compose alone, as long as you know the sequence and the dependencies. But then the question arises: are you using the power of Compose properly? @Antonio Petricca has already given a well-described answer for Compose.
Regarding the Compose file vs. Dockerfile question: it depends on how you write your Compose file; technically, the two are different.
So, in short:
1) Compose and Dockerfile are two different things.
2) Compose can use multiple modular files, even Dockerfiles, and that is why it is so popular; you can break the logic into multiple modules, then use Compose to build it. It also helps you debug and iterate faster.
Hope this answers your doubt.
I am trying to containerize my Go application. I am using Docker to do this. I have a fully executable application running on my system. To run it in a container, I have created a Dockerfile.
FROM golang:1.7
EXPOSE "portno"
I have kept my Dockerfile very simple because I already have an executable file on my system. Please help me with what contents I should add to get the Go app running. I am not able to run the Go app, as many of the contents are not getting copied into the container.
You need to add your executable file to your container using the ADD command:
ADD ./app /go/bin/app
And then you need to tell docker that it should be executed as the main container process:
CMD ["/go/bin/app"]
Note that it may be better to build your application from the source code inside your container. It can be done when you build your docker image.
As an example, see this article for more information: http://thenewstack.io/dockerize-go-applications/
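For instance, a rough sketch of building from source inside the image (GOPATH-style layout, since golang:1.7 predates Go modules; the paths and port are assumptions):
FROM golang:1.7
# Copy the source tree into the GOPATH workspace
COPY . /go/src/app
WORKDIR /go/src/app
# Compile the binary inside the image
RUN go build -o /go/bin/app .
EXPOSE 8080
CMD ["/go/bin/app"]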
I have a Maven project, and I run my Maven builds inside Docker. The problem is that it downloads all of the Maven dependencies every time I run it and does not cache any of those downloads.
I found some workarounds for that, where you mount your local .m2 folder into the Docker container. But this makes the builds depend on the local setup. What I would like to do is create a long-lived named volume and mount that volume onto the .m2 folder inside Docker. That way, when I run the Docker build for the second time, it will not download everything again, and it will not be dependent on the environment.
How can I do this with docker-compose?
Without knowing your exact configuration, I would use something like this...
version: "2"
services:
maven:
image: whatever
volumes:
- m2-repo:/home/foo/.m2/repository
volumes:
m2-repo:
This will create a data volume called m2-repo that is mapped to the /home/foo/.m2/repository (adjust path as necessary). The data volume will survive up/down/start/stop of the Docker Compose project.
You can delete the volume by running something like docker-compose down -v, which will destroy containers and volumes.
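As a rough usage sketch (assuming the service's image has Maven installed and your project source is available inside the container):
# First run downloads dependencies into the m2-repo volume
docker-compose run --rm maven mvn package
# Later runs reuse the cached repository instead of re-downloading
docker-compose run --rm maven mvn package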
I have my Docker container images in different directories, and I would like to specify the path of the directory in the docker run command. There is a method to change this path by editing the '-g' option in the configuration file, but it requires restarting the Docker daemon. Is there any way to specify the Docker image path in the docker run command itself?
Docker must have knowledge of not just your image's physical location, but its complete tree, because a Docker image is made up of layers, where each layer is built by one Dockerfile command.
Hence, you should let Docker register/know all the images from the directory where the images are present. Moreover, if you have physically copied these images from another machine, they will not work unless they are registered/tagged within the Docker engine.
The short answer to your question is NO, it is not possible.
The Docker engine itself should manage the images. You could replicate what the Docker engine does by editing the configuration files it maintains internally, because all of them are plain text, but it is definitely not worth your time, and you are better off letting Docker manage the images itself.
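If the underlying goal is to store or move images outside Docker's managed directory, the supported route is to export and re-import them; for example (the image name and paths here are placeholders):
# Export an image to a tarball in any directory you like
docker save -o /some/dir/myimage.tar myimage:latest
# Re-import it on the same or another machine
docker load -i /some/dir/myimage.tar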
Trying to fix errors and debug problems with my application that is split over several containers, I frequently edit files in containers:
either I am totally lazy, install nano in the container, and edit the file directly, or
I docker cp the file out of the container, edit it, copy it back and restart the container
Those are intermediate steps before turning the changes into new content for the container build, which takes a lot longer than doing the above (which of course is only intermediate fiddling around).
Now I frequently break the starting program of the container, which in the breaking cases is either a node script or a python webserver script, both typically fail from syntax errors.
Is there any way to save those containers? Since they do not start, I cannot docker exec into them, and thus they are lost to me. I then go the rm/rmi/build/run route after fixing the offending file in the build input.
How can I either edit files in a stopped container, or cp them in or start a shell in a stopped container - anything that allows me to fix this container?
(It seems a bit like working on a remote computer and breaking the networking configuration - connection is lost "forever" this way and one has to use a fallback, if that exists.)
How to edit Docker container files from the host? looks relevant but is outdated.
I had a problem with a container which wouldn't start due to a bad config change I made.
I was able to copy the file out of the stopped container and edit it. Something like:
docker cp docker_web_1:/etc/apache2/sites-enabled/apache2.conf .
(correct the file)
docker cp apache2.conf docker_web_1:/etc/apache2/sites-enabled/apache2.conf
Answering my own question... still hoping for a better answer from a more knowledgeable person!
There are 2 possibilities.
1) Editing the file system on the host directly. This is somewhat dangerous and has a chance of completely breaking the container, and possibly other data, depending on what goes wrong.
2) Changing the startup script to something that never fails, like starting bash, doing the fixes/edits, and then changing the startup program back to the desired one (like node or whatever it was before).
More details:
1) Using
docker ps
to find the running containers or
docker ps -a
to find all containers (including stopped ones) and
docker inspect (containername)
look for the "Id", one of the first values.
This is the part that contains implementation detail and might change, be aware that you may lose your container this way.
Go to
/var/lib/docker/aufs/diff/9bc343a9..(long container id)/
and there you will find all the files that have changed relative to the image the container is based on. You can overwrite, add, or edit files.
Again, I would not recommend this.
2) As is described at https://stackoverflow.com/a/32353134/586754 you can find the configuration json config.json at a path like
/var/lib/docker/containers/9bc343a99..(long container id)/config.json
There you can change the args from, e.g., "nodejs app.js" to "/bin/bash". Now restart the Docker service and start the container (you should see that it now correctly starts up). You should use
docker start -i (containername)
to make sure it does not quit straight away. You can now work with the container and/or later attach with
docker exec -ti (containername) /bin/bash
Also, docker cp is rather useful for copying files that were edited outside of the container.
Also, one should only fall back to those measures if the container is more or less "lost" anyway, so any change would be an improvement.
You can edit the container file system directly, but I don't know if it is a good idea.
First you need to find the path of the directory which is used as the runtime root for the container.
Run docker container inspect id/name.
Look for the key UpperDir in the JSON output.
That is your directory.
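A quicker way to pull out just that path is the inspect format option (a sketch assuming the overlay2 storage driver; the field layout differs for other drivers):
docker container inspect --format '{{ .GraphDriver.Data.UpperDir }}' id/name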
If you are trying to restart a stopped container and need to alter it because of a misconfiguration, but the container isn't starting, you can do the following, which works using the "docker cp" command (similar to the previous suggestion). This procedure lets you remove files and make any other changes needed. With luck you can skip a lot of the steps below.
1. Use docker inspect to find the entrypoint (named Path in some versions); see the sketch after this list.
2. Create a clone of the container using docker run <image>.
3. Enter the clone using docker exec -ti <clone> bash (if it's a *nix container).
4. Locate the entrypoint file by looking through the clone to find its path.
5. Copy the old entrypoint script out using docker cp <clone>:<path-to-entrypoint> ./
6. Modify the entrypoint script, or create a new one that cannot fail, for instance:
#!/bin/bash
tail -f /etc/hosts
7. Ensure the script has execution rights.
8. Replace the old entrypoint using docker cp ./<entrypoint-script> <container>:<path-to-entrypoint>
9. Start the old container using docker start <container>.
10. Redo steps 6-9 until the container starts.
11. Fix the issues in the container.
12. Restore the entrypoint if needed, redoing steps 6-9 as required.
13. Remove the clone if needed.
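For step 1, one way to read the current entrypoint and its arguments is Docker's inspect formatting (a sketch; field names can vary between Docker versions):
docker inspect --format '{{ .Path }} {{ .Args }}' <container>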