Curious about the use of docker-compose and dockerfile - spring

I'm studying Docker.
docker-compose is known as a tool that conveniently runs multiple containers from one script.
First, since a Dockerfile only handles one container, is it correct to think that Docker Compose is backwards compatible with the Dockerfile?
I thought Docker Compose could cover everything, but I have seen Docker Compose and Dockerfiles used together.
Let's take Spring Boot as an example.
Can I use only one docker-compose file to run the db container required for the application, build the application, check the port being used, and run the jar file?
Or do I have to separate the roles of the Dockerfile and docker-compose and use the two together?

When working with Docker, there are two concepts: Image and Container.
Images are like mini operating systems stored in a file which is built specifically with our application in mind. Think of it like a custom operating system which is sitting on your hard disk when your computer is switched off.
Containers are running instances of your image.
Imagine you had a shared hard disk (or even CD/DVD if you are old school) which had an operating system which can run on multiple machines. The files on the disk are the "image", and those files running on a machine are a "container".
This is how Docker works, you have files on the machine which are known as the image, and running instances of those files are referred to as the container. Images can also be uploaded and shared for other users to download and run on their machine too.
This brings us to Docker vs Docker Compose.
Docker is the underlying technology which manages (creates, stores or shares) images, and runs containers.
We provide the Dockerfile to tell Docker how to create our images. For example, we say: start from the Python 3 base image, then install these requirements, then create these folders, then switch to this user, etc... (I'm oversimplifying the actual steps, but this is just to explain the point).
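A minimal Dockerfile along those lines might look like this (the requirements file, folder and user names are illustrative assumptions):

# Start from the Python 3 base image
FROM python:3

# Install the requirements (assumed to live in requirements.txt)
COPY requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt

# Create the folders the app expects
RUN mkdir -p /app /vol/static

# Switch to a non-root user
RUN useradd -m appuser
USER appuser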
Once we have done that, we can create an image using Docker by running docker build . (the trailing dot tells Docker to look for the Dockerfile in the current directory). If you run that, Docker will execute each step in our Dockerfile and store the result as an image on the system.
Once the image is built, you can run it manually using something like this:
docker run <IMAGE_ID>
However, if you need to set up volumes or ports, you need to run it like this:
docker run -v /path/to/vol:/path/to/vol -p 8000:8000 <IMAGE_ID>
Often applications need multiple images to run. For example, you might have an application and a database, and you may also need to set up networks and shared volumes between them.
So you would need to write the above commands with the appropriate configurations and IDs for each container you want to run, every time you want to start your service...
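For example, running just an app and a database by hand might look something like this (the network name, database image and password are assumptions):

docker network create myapp-net
docker run -d --network myapp-net --name db -e POSTGRES_PASSWORD=secret postgres
docker run -d --network myapp-net --name app -v /path/to/vol:/path/to/vol -p 8000:8000 <IMAGE_ID>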
As you might expect, this could become complex, tedious and difficult to manage very quickly...
This is where Docker Compose comes in...
Docker Compose is a tool used to define how Docker runs our images, in a YAML file which can be stored with our code and reused.
So, if we need to run our app image as a container, share port 8000 and map a volume, we can do this:
services:
  app:
    build:
      context: .
    ports:
      - 8000:8000
    volumes:
      - ./app:/app
Then, every time we need to start our app, we just run docker-compose up, and Docker Compose will handle all the complex docker commands behind the scenes.
So basically, the purpose of Docker Compose is to configure how our running services should work together to serve our application.
When we run docker-compose build, Docker Compose will run all the necessary docker build commands to build all images needed for our project and tag them appropriately to keep track of them in the system.
In summary, Docker is the underlying technology used to create images and run them as containers, and Docker Compose is a tool that configures how Docker should run multiple containers to serve our application.
I hope this makes sense?

I suggest you go deeper into what Docker Compose is by reading Difference between Docker Compose Vs Dockerfile.
I quote part of what is explained in the above article:
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image whereas Docker Compose is a tool for defining and running multi-container Docker applications.
Docker Compose defines the services that make up your app in docker-compose.yml so they can be run together in an isolated environment. It gets an app running in one command by just running docker-compose up. Docker Compose uses the Dockerfile if you add the build command to your project's docker-compose.yml. Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use Compose to assemble the images using the build command.
About your question: Docker Compose is so flexible that you can fragment your composition logic into multiple YAML files, and combine them on the docker-compose command line as you need.
Here an example:
# Build the docker infrastructure
docker-compose \
  -f network.yaml \
  -f database.yaml \
  -f application.yaml \
  build

# Run the application
docker-compose \
  -f application.yaml \
  up
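To give an idea, one of those fragments, say database.yaml, might look roughly like this (the image and credentials are assumptions, not taken from the article):

services:
  database:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data: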

To answer your question about the Spring Boot application: you can build the complete application through Compose alone, as long as you know the sequence and the dependencies, but then the question arises: are you using the power of Compose properly? @Antonio Petricca has already given a well-described answer for the Compose side.
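As a rough sketch of the usual split for a Spring Boot app, the Dockerfile builds and packages the jar (the base images, port and the Maven wrapper are assumptions):

# Build the jar in a JDK image, then copy it into a slim JRE image
FROM eclipse-temurin:17-jdk AS build
WORKDIR /workspace
COPY . .
RUN ./mvnw package -DskipTests

FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /workspace/target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]

And a docker-compose.yml wires that image to its database (image and credentials are again assumptions):

services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: demo
  app:
    build: .
    ports:
      - 8080:8080
    depends_on:
      - db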
Regarding the Compose file vs Dockerfile question: it depends on how you write your Compose file, but technically the two are different.
So, in short:
1- Compose and Dockerfile are two different things
2- Compose can use multiple modular files, even a Dockerfile, and that is why it is so popular; you can break the logic into multiple modules and then use Compose to build it. It also helps you debug and iterate faster.
Hope this answers your doubt.

Related

Running tests with Visual Studio Docker compose support

I have added Docker Compose to my project. When I debug the project it loads the Docker Compose file. In the override yml I have specified a PostgreSQL image and volume so it automatically brings up the development database. This is great because you can clone the repo and not have to install any local software apart from Docker.
The only thing that is not good is running tests. When I run tests it doesn't bring up the database container, it just executes the code inside the test project. So the tester has to manually start the database container.
I feel like I am probably doing something wrong. Is there a better way to make the tests work with the visual studio docker compose support so it brings up the database automatically?
I thought about running the tests inside the docker file but I think that might get in the way of development. What is a good approach here?
I would not recommend running tests inside your Dockerfile. This will complicate your development process as you have said.
In terms of the database, you can just run it outside of docker-compose so that it is always running in the background. Just remove the postgres config from your docker-compose.yml and run postgres with docker run ... instead. This way it will always be running until you stop it with docker stop ...
docker run -v /tmp/pgdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=<PASSWORD> -d postgres

Can I create single Dockerfile from laradock?

I was instructed to create a single Dockerfile in the root of the project, but also got a tip to use Laradock as a starting point.
How can I do this? The only way I know so far to create a Docker environment is to run it with the docker-compose command.
No, Dockerfiles describe single containers (services) by design. Laradock provides a docker-compose file that references multiple Dockerfiles. However, you could create a smaller docker-compose file that only starts the containers you need (let's say a webserver with PHP, a database server and Redis).
Laradock ships with far too many containers in its docker-compose file, which is why the tutorial tells you to specify which containers you want to run.
docker-compose up -d nginx mysql
But if you specify a minimal docker-compose.yml, you can just type
docker-compose up -d
without any additional arguments
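Such a trimmed-down docker-compose.yml might look roughly like this (the service names and images are assumptions, not Laradock's actual file, and nginx would still need a config pointing at php-fpm):

services:
  nginx:
    image: nginx:alpine
    ports:
      - 80:80
  php:
    image: php:8.2-fpm
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: secret
  redis:
    image: redis:alpine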
Yes, you could add all the required services to a single container, but that would go against what you are trying to achieve by using Docker.

Adding dot config and debugging utilities to a docker instance

I've got a project where a Flask server is run as a docker service via docker-compose (other elements like other API servers, the DB, are modeled as separate services in Docker Compose).
In my dev flow there are times when it's useful for me to drop into a bash shell (via docker exec -it <container_id> bash) and do some debugging: poking around at the files in there, taking some logs and writing quick scripts to do some transformations on them, etc. In these scenarios it would be useful to have things like my bashrc, bash_profile, and the various scripts I rely on for this sort of thing available inside the Docker container.
Is there an easy way to package these things and inject them into a (running) container? I'd prefer to not have these various debug things in the main Dockerfile which is shared.
You could make a Dockerfile.debug which uses the actual Dockerfile's image as its base, then copy your bash files into it.
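A minimal sketch of such a Dockerfile.debug, assuming the main image is tagged myapp:latest and the dotfiles sit next to it:

FROM myapp:latest

# Add personal dotfiles and helper scripts on top of the real image
COPY .bashrc /root/.bashrc
COPY debug-scripts/ /usr/local/bin/

You would then build it with docker build -f Dockerfile.debug -t myapp:debug . and exec into a container started from myapp:debug instead.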
Alternatively, locate the relevant container directory in /var/lib/docker and just put the files there (on the host). A trick to find the correct onion slice is to exec into the container, do a touch hello.txt, and then just find that file on the host.
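That trick might look something like this (assuming root access to /var/lib/docker on the host):

docker exec -it <container_id> touch /hello.txt
sudo find /var/lib/docker -name hello.txt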

Can I use a DockerFile as a script?

We would like to leverage the excellent catalogue of DockerFiles on DockerHub, but the team is not in a position to use Docker.
Is there any way to run a DockerFile as if it were a shell script against a machine?
For example, if I chose to run the Docker container ruby:2.4.1-jessie against a server running only Debian Jessie, I'd expect it to ignore the FROM directive but be able to set the environment from ENV and run the RUN commands from this Dockerfile: Github docker-library/ruby:2.4.1-jessie
A Dockerfile assumes it is executed in an empty container or on an image on which it builds (using FROM). The knowledge about the environment (specifically the file system and all the installed software) is important, and running something similar outside of Docker might have side effects, because files end up in places where no files are expected.
I wouldn't recommend it.

Met "/bin/bash: no such file" when building docker image from scratch

I'm trying to create my own Docker image on an Ubuntu 14 system.
My Dockerfile is like the following:
FROM scratch
RUN /bin/bash -c 'echo "hello"'
I get this error message when I run docker build .:
exec: "/bin/sh": stat /bin/sh: no such file or directory
I guess it is because /bin/sh doesn't exist in the base image "scratch". How should I solve this problem?
Docker is basically a containerising tool that helps to build systems and bring them up and running in a flash, without a lot of resource utilisation compared to virtual machines.
A Docker image is basically layered. If you read a Dockerfile, each command in that file leads to the creation of a new layer, and the final layer is what your container actually is after all the commands in the Dockerfile have been executed.
The images available on Docker Hub are specially optimised for this sort of environment and are very easy to set up and build. If you are building a container right from scratch, i.e. without any base image, then what you basically have is an empty container. An empty container does not understand what /bin/bash actually is, and hence it won't work for you.
The Docker container does not use any specifics from your underlying OS. Multiple docker containers will make use of the same underlying kernel in an effective manner. That's it. Nothing else.
( There is however a concept of volumes wherein the container shares a specific volume on the local underlying system )
So in case you want to use /bin/bash, you need a base image which sets up the nitty-gritty behind that command for your container, and then you can successfully execute it.
However, it is recommended that you use official Docker images, for say Ubuntu, and then install your custom stuff on top of them. The official images come right from the makers and are highly optimised for this environment.
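A small sketch of that approach (the installed package is just a placeholder):

FROM ubuntu:14.04

# Install custom tooling on top of the official base image
RUN apt-get update && apt-get install -y curl

# /bin/bash exists in this image, so this now works
RUN /bin/bash -c 'echo "hello"'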
The base image scratch does not contain /bin/bash (or any shell at all). So you should change it to:
FROM ubuntu:14.04
RUN /bin/sh -c 'echo "hello"'
