I have added Docker Compose support to my project. When I debug the project it loads the Docker Compose file. In the override yml I have specified a PostgreSQL image and volume so it automatically brings up the development database. This is great because you can clone the repo and not have to install any local software apart from Docker.
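For reference, the database part of my docker-compose.override.yml looks roughly like this (the service name, password and volume name are just illustrative):

services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: dev_password
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata: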
The only thing that is not good is running tests. When I run tests it doesn't bring up the database container; it just executes the code inside the test project. So the tester has to start the database container manually.
I feel like I am probably doing something wrong. Is there a better way to make the tests work with the Visual Studio Docker Compose support so that it brings up the database automatically?
I thought about running the tests inside the Dockerfile, but I think that might get in the way of development. What is a good approach here?
I would not recommend running tests inside your Dockerfile. This will complicate your development process as you have said.
In terms of the database, you can just run it outside of docker-compose so that it is always running in the background. Just remove the postgres config from your docker-compose.yml and run postgres with docker run ... instead. This way it will always be running until you stop it with docker stop ...
docker run --name dev-postgres -p 5432:5432 -v /tmp/pgdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=<PASSWORD> -d postgres
I'm studying Docker.
As I understand it, docker-compose's role is to conveniently run multiple containers from a single file.
First, since a Dockerfile only handles one container, is it correct to think that Docker Compose is backwards compatible with the Dockerfile?
I thought Docker Compose could cover everything, but I have seen Docker Compose and Dockerfiles used together.
Let's take Spring Boot as an example.
Can I use only one docker-compose file to run the db container required for the application, build the application, check the port being used, and run the jar file?
Or do I have to separate the Dockerfiles by role and use the two together?
When working with Docker, there are two concepts: Image and Container.
Images are like mini operating systems stored in a file which is built specifically with our application in mind. Think of it like a custom operating system which is sitting on your hard disk when your computer is switched off.
Containers are running instances of your image.
Imagine you had a shared hard disk (or even CD/DVD if you are old school) which had an operating system which can run on multiple machines. The files on the disk are the "image", and those files running on a machine are a "container".
This is how Docker works, you have files on the machine which are known as the image, and running instances of those files are referred to as the container. Images can also be uploaded and shared for other users to download and run on their machine too.
This brings us to Docker vs Docker Compose.
Docker is the underlying technology which manages (creates, stores or shares) images, and runs containers.
We provide the Dockerfile to tell Docker how to create our image. For example, we say: start from the Python 3 base image, then install these requirements, then create these folders, then switch to this user, etc... (I'm oversimplifying the actual steps, but this is just to explain the point).
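As a rough sketch of what that might look like (the file names, user and start command are made up for illustration):

# Start from the Python 3 base image
FROM python:3

# Install the requirements
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt

# Create the app folder and copy the code in
RUN mkdir /app
COPY ./app /app
WORKDIR /app

# Switch to a non-root user
RUN adduser --disabled-password --gecos "" appuser
USER appuser

# Placeholder start command
CMD ["python", "app.py"]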
Once we have done that, we can create an image using Docker by running docker build .. If you run that, Docker will execute each step in our Dockerfile and store the result as an image on the system.
Once the image is built, you can run it manually using something like this:
docker run <IMAGE_ID>
However, if you need to set up volumes or publish ports, you need to run it like this:
docker run -v /path/to/vol:/path/to/vol -p 8000:8000 <IMAGE_ID>
Often applications need multiple images to run. For example, you might have an application and a database, and you may also need to setup networks and shared volumes between them.
So you would need to write the above commands with the appropriate configuration and IDs for each container you want to run, every time you want to start your service...
As you might expect, this could become complex, tedious and difficult to manage very quickly...
This is where Docker Compose comes in...
Docker Compose is a tool used to define how Docker runs our images, in a YAML file which can be stored with our code and reused.
So, if we need to run our app image as a container, share port 8000 and map a volume, we can do this:
services:
  app:
    build:
      context: .
    ports:
      - 8000:8000
    volumes:
      - ./app:/app
Then, every time we need to start our app, we just run docker-compose up, and Docker Compose will handle all the complex docker commands behind the scenes.
So basically, the purpose of Docker Compose is to configure how our running services should work together to serve our application.
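For example, extending the compose file above with a database (the database image, password and volume name here are placeholders, just to sketch the idea):

services:
  app:
    build:
      context: .
    ports:
      - 8000:8000
    volumes:
      - ./app:/app
    depends_on:
      - db                    # start the database before the app
  db:
    image: postgres           # placeholder database image
    environment:
      POSTGRES_PASSWORD: changeme
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:

Docker Compose creates a shared network for these services automatically, so the app can reach the database simply at the hostname db.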
When we run docker-compose build, Docker Compose will run all the necessary docker build commands, to build all images needed for our project and tag them appropriately to keep track of them in the system.
In summary, Docker is the underlying technology used to create images and run them as containers, and Docker Compose is a tool that configures how Docker should run multiple containers to serve our application.
I hope this makes sense?
I suggest you go deeper into what Docker Compose is by reading Difference between Docker Compose Vs Dockerfile.
I quote part of what is explained in the above article:
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image whereas Docker Compose is a tool for defining and running multi-container Docker applications.
Docker Compose defines the services that make up your app in docker-compose.yml so they can be run together in an isolated environment. It gets an app running in one command by just running docker-compose up. Docker Compose uses the Dockerfile if you add the build command to your project's docker-compose.yml. Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use Compose to assemble the images using the build command.
As for your question, Docker Compose is flexible enough that you can split your composition logic across multiple YAML files and combine them on the docker-compose command line as you need.
Here is an example:
# Build the docker infrastructure
docker-compose \
  -f network.yaml \
  -f database.yaml \
  -f application.yaml \
  build

# Run the application
docker-compose \
  -f application.yaml \
  up
To answer your question about the Spring Boot application: you can build the complete application through Compose alone, as long as you know the sequence and dependencies, but then the question arises whether you are using the power of Compose properly. @Antonio Petricca has already given a well-described answer for Compose.
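For example, a rough sketch of a Compose file for a Spring Boot app plus its database could look like this (image names, ports and credentials are placeholders, and the app service still relies on the project's Dockerfile to package the jar):

services:
  db:
    image: mysql:8              # placeholder database image
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: appdb
  app:
    build:
      context: .                # built from the project's Dockerfile
    ports:
      - 8080:8080
    depends_on:
      - db                      # start the database first

Keep in mind that depends_on only controls the start order; waiting for the database to actually be ready still has to be handled by the application or a healthcheck.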
Regarding the compose file and Dockerfile question: it depends on how you wrote your compose file; technically, both are different.
So, in short:
1- Compose and Dockerfile are two different things.
2- Compose can use multiple modular files, even a Dockerfile, and that is why it is so popular: you can break the logic into multiple modules and then use Compose to build it. This also helps you debug and iterate faster.
Hope this answers your doubt.
I've got Laravel Sail, which as I understand it is a few containers (mysql, redis, laravel, ...). Is there an easy way to just pack up the whole thing to e.g. Docker Hub and easily download it on the production server, so that when I update it on localhost and run docker push, I can just run docker pull on the server and everything (like new commands in the Dockerfile, apt install steps, etc.) will be updated and work exactly how it worked on localhost?
I read the documentation, but I cannot figure out how Docker works and how to easily change the project location (e.g. I'm working on the project at work, sometimes at home, and it would be much easier to run docker push when I need to build the source code and deploy it).
I'm keeping the source code on GitHub, and it's working for dev servers, but to deploy something I have to check all dependencies, the Dockerfile, the .env file and other things to make it work in production.
Thanks for the help!
You can use the existing docker-compose.yml and just run docker-compose up -d on production to start all containers. Just be sure to, for example, disable Xdebug in production, as it slows down every request.
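If you do want the push/pull workflow you described, one rough approach (assuming each service in your docker-compose.yml has an image: entry pointing at your Docker Hub repository) is:

# on your development machine
docker-compose build
docker-compose push        # pushes the images named under image: to Docker Hub

# on the production server
docker-compose pull        # fetch the updated images
docker-compose up -d       # recreate the containers from the new images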
We would like to leverage the excellent catalogue of Dockerfiles on Docker Hub, but the team is not in a position to use Docker.
Is there any way to run a Dockerfile as if it were a shell script against a machine?
For example, if I chose to run the Docker container ruby:2.4.1-jessie against a server running only Debian Jessie, I'd expect it to ignore the FROM directive but be able to set the environment from ENV and run the RUN commands from this Dockerfile: GitHub docker-library/ruby:2.4.1-jessie
A Dockerfile assumes it is executed in an empty container or on an image it builds upon (using FROM). Knowledge of the environment (specifically the file system and all the installed software) is important, and running something similar outside of Docker might have side effects, because files end up in places where no files are expected.
I wouldn't recommend it.
I have multiple projects that I need to switch in between on a regular basis. The projects are setup via docker-compose, yet some need external containers to be available.
So in order to run docker-compose up -d in a project, I have to switch to a different directory first and start some basic service containers there (shared instances of mysql, redis, and the like).
I do not want to run all the containers in parallel, and for some it is not possible as they listen on the same port.
What I also find annoying is that certain containers need a script to be run inside them in order to function properly in development, and I find myself repeating the same commands over and over again just to switch to a project.
I think this can be automated, I am just unsure how to tackle this problem.
How can I manage to quickly switch the docker environments? My goal is to just have a one-liner.
My current workflow involves desk.
For each project, I have initialized a desk via:
desk edit project_a
and there I run all the steps that I would have done manually, e.g.:
ponysay "INIT PROJECT A"
docker stop $(docker ps -a -q) # stopping all the running containers
cd ~/src/docker-compose/basic-services
docker-compose up -d
cd ~/src/project_a
docker-compose up -d
docker exec -it project_a_container_name /var/www/project_a/docker/scripts/dev-init.sh
and I switch between the environments via:
desk . project_a
desk . project_b
and switching projects now has become quite easy.
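If you'd rather not install desk, a plain shell function in your ~/.bashrc can give you a similar one-liner (the paths and container names below are just placeholders based on the example above):

switch_project() {
  docker stop $(docker ps -a -q)                            # stop all running containers
  (cd ~/src/docker-compose/basic-services && docker-compose up -d)
  (cd ~/src/"$1" && docker-compose up -d)
  docker exec -it "$1"_container_name /var/www/"$1"/docker/scripts/dev-init.sh
}

# usage: switch_project project_a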
Hey guys, so I've spent the past few days really digging into Docker and I've learned a ton. I'm getting to the point where I'd like to deploy to a DigitalOcean droplet, but I'm starting to wonder about the strategy of building/deploying an image.
I have a perfect Dev setup where I've created a file volume tied to my app.
docker run -d -p 80:3000 --name pug_web -v $DIR/app:/Development test_web
I'd hate to have to run the app in production out of the /Development folder, where I'm actually building the app. This is a nodejs/express app and I'd love to concat/minify/etc. into a local dist folder and add that build folder to a new dist-ready image.
I guess what I'm asking is: A) can I have different Dockerfiles, one for Dev and one for Dist? If not, B) can I have if statements in my Dockerfiles that would do something like... if ENV == 'dist' add /dist... etc.
I'm struggling to figure out how to move this from a Dev environment locally to a tightened up production ready image without any conditionals.
I do both.
My Dockerfile checks out the code for the application from Git. During development I mount a volume over the top of this folder with the version of the code I'm working on. When I'm ready to deploy to production, I just check into Git and re-build the image.
I also have a script that is executed from the ENTRYPOINT command. The script looks at the environment variable "ENV" and if it is set to "DEV" it will start my development server with debugging turned on, otherwise it will launch the production version of the server.
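As a simplified sketch (the start commands here are placeholders for whatever launches your server), the entrypoint script could look something like this, with the Dockerfile pointing at it via ENTRYPOINT:

#!/bin/sh
# entrypoint.sh - choose how to start the app based on the ENV variable
if [ "$ENV" = "DEV" ]; then
    echo "Starting development server with debugging enabled"
    exec npm run dev      # placeholder: e.g. nodemon with the debugger attached
else
    echo "Starting production server"
    exec npm start        # placeholder: production start command
fi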
Alternatively, you can avoid using Docker in development, and instead have a Dockerfile at the root of your repo. You can then use your CI server (in our case Jenkins, but Docker Hub also allows for automated build repositories that can do that for you, if you're a small team or don't have access to a dedicated build server) to build the image.
Then you can just pull the image and run it on your production box.