I've got a project where a Flask server is run as a docker service via docker-compose (other elements like other API servers, the DB, are modeled as separate services in Docker Compose).
In my dev flow there are times when it's useful for me to drop into a bash shell (via docker exec -it <container_id> bash) and do some debugging: poking around at the files in there, grabbing some logs and writing quick scripts to transform them, etc. In these scenarios it would be useful to have things like my bashrc, bash_profile, and the various scripts I rely on for this sort of work available inside the docker container.
Is there an easy way to package these things and inject them into a (running) container? I'd prefer to not have these various debug things in the main Dockerfile which is shared.
You could make a Dockerfile.debug which uses the image built from your actual Dockerfile as its base, and copy your bash files into that.
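A minimal sketch of what that could look like, assuming the main image is tagged myapp:latest and your dotfiles sit next to the Dockerfile:
# Dockerfile.debug
FROM myapp:latest
COPY .bashrc /root/.bashrc
COPY debug-scripts/ /root/debug-scripts/
You could then build it with docker build -f Dockerfile.debug -t myapp:debug . and run that tag while debugging, keeping the shared Dockerfile untouched.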
Alternatively, locate the relevant container directory under /var/lib/docker and just put the files there (on the host). A trick to find the correct onion slice (filesystem layer) is to exec into the container, do a touch hello.txt, and then find that file on the host.
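A rough sketch of that trick (the container name is an assumption):
docker exec -it mycontainer touch /tmp/hello.txt
# then, on the host, search Docker's storage area for the marker file
sudo find /var/lib/docker -name hello.txt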
I'm studying Docker.
As I understand it, docker-compose is a tool that conveniently runs multiple containers from one file.
First, since a Dockerfile only handles one container, is it correct to think of Docker Compose as backwards compatible with (a superset of) the Dockerfile?
I thought docker compose could cover everything, but I saw docker compose and Dockerfiles used together.
Let's take spring boot as an example.
Can I use only a single docker-compose file to run the db container required for the application, build the application, check the port being used, and run the jar file?
Or do I have to keep the Dockerfile and its role separate, and use the two together?
When working with Docker, there are two concepts: Image and Container.
Images are like mini operating systems stored in a file which is built specifically with our application in mind. Think of it like a custom operating system which is sitting on your hard disk when your computer is switched off.
Containers are running instances of your image.
Imagine you had a shared hard disk (or even CD/DVD if you are old school) which had an operating system which can run on multiple machines. The files on the disk are the "image", and those files running on a machine are a "container".
This is how Docker works, you have files on the machine which are known as the image, and running instances of those files are referred to as the container. Images can also be uploaded and shared for other users to download and run on their machine too.
This brings us to Docker vs Docker Compose.
Docker is the underlying technology which manages (creates, stores or shares) images, and runs containers.
We provide the Dockerfile to tell Docker how to create our images. For example, we say: start from the Python 3 base image, then install these requirements, then create these folders, then switch to this user, etc... (I'm oversimplifying the actual steps, but this is just to explain the point).
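A minimal sketch of such a Dockerfile (the file names and user name are illustrative, not the actual steps of any particular project):
FROM python:3
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir -p /app
COPY . /app
WORKDIR /app
RUN useradd -m appuser
USER appuser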
Once we have done that, we can create an image using Docker, by running docker build .. If you run that, Docker will execute each step in our Dockerfile and store the result as an image on the system.
Once the image is built, you can run it manually using something like this:
docker run <IMAGE_ID>
However, if you need to set up volumes or publish ports, you need to run it like this:
docker run -v /path/to/vol:/path/to/vol -p 8000:8000 <IMAGE_ID>
Often applications need multiple images to run. For example, you might have an application and a database, and you may also need to setup networks and shared volumes between them.
So you would need to write the above commands with the appropriate configuration and IDs for each container you want to run, every time you want to start your service...
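For example, starting an app plus a database by hand might look roughly like this (the network and volume names, the postgres image and the port are assumptions):
docker network create myapp-net
docker run -d --name db --network myapp-net -v dbdata:/var/lib/postgresql/data postgres:15
docker run -d --name app --network myapp-net -p 8000:8000 -v /path/to/code:/app <IMAGE_ID>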
As you might expect, this could become complex, tedious and difficult to manage very quickly...
This is where Docker Compose comes in...
Docker Compose is a tool used to define how Docker runs our images, in a YAML file which can be stored with our code and reused.
So, if we need to run our app image as a container, share port 8000 and map a volume, we can do this:
services:
  app:
    build:
      context: .
    ports:
      - 8000:8000
    volumes:
      - ./app:/app
Then, every time we need to start our app, we just run docker-compose up, and Docker Compose will handle all the complex docker commands behind the scenes.
So basically, the purpose of Docker Compose is to configure how our running services should work together to serve our application.
When we run docker-compose build, Docker Compose will run all the necessary docker build commands, to build all images needed for our project and tag them appropriately to keep track of them in the system.
In summary, Docker is the underlying technology used to create images and run them as containers, and Docker Compose is a tool that configures how Docker should run multiple containers to serve our application.
I hope this makes sense?
I suggest you go deeper into what Docker Compose is by reading Difference between Docker Compose Vs Dockerfile.
I report part of what is explained in the above article:
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image whereas Docker Compose is a tool for defining and running multi-container Docker applications.
Docker Compose defines the services that make up your app in docker-compose.yml so they can be run together in an isolated environment. It gets an app running in one command by just running docker-compose up. Docker Compose uses the Dockerfile if you add the build command to your project’s docker-compose.yml. Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use Compose to assemble the images using the build command.
As for your question: Docker Compose is flexible enough that you can fragment your composition logic across multiple YAML files, and combine them on the docker-compose command line as you need.
Here is an example:
# Build the docker infrastructure
docker-compose \
  -f network.yaml \
  -f database.yaml \
  -f application.yaml \
  build

# Run the application
docker-compose \
  -f application.yaml \
  up
To answer your question about the Spring Boot application: you can build the complete application through Compose alone, as long as you know the sequence and the dependencies, but then the question arises: are you using the power of Compose properly? @Antonio Petricca has already given a well-described answer about Compose.
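As a rough sketch of what Compose alone can look like for a Spring Boot app plus its database (the image, ports and build context are assumptions, not a definitive setup):
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
  app:
    build:
      context: .          # builds the jar image from the project's Dockerfile
    ports:
      - 8080:8080
    depends_on:
      - db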
Regarding the Compose file and Dockerfile question: it depends on how you write your Compose file; technically, the two are different.
So, in short:
1- Compose and Dockerfile are two different things.
2- Compose can use multiple modular files, including Dockerfiles, and that is why it is so popular; you can break the logic into multiple modules and then use Compose to build it. It also helps you debug and iterate faster.
Hope this answers your doubt.
I am sorry for taking up your time.
I have a local docker setup and I want to copy files from my local host to my container.
But the thing is that I need a command that I can use WHILE i am inside the container.
To explain the situation further: I executed "docker exec -it CONTAINERNAME bash" to enter my container,
and now I am on /var/www/html
and I need to find a way to copy a file/folder from my local environment into that container.
Reason: I am currently writing a dockerfile which automates the process of setting things up. I need that very specific command because a Dockerfile RUN-command can only be executed while inside the container.
What I tried:
"docker cp" is a good command to use when I am outside the container but it doesn't work while in the container.
"DOCKERFILE COPY" might do the trick but I need a general shell command to double check if it really does what it is supposed to do. I must be able to reproduce the same process of my Dockerfile via manually executing the commands one by one.
Once again, I apologize for my inability to solve this problem by myself. My inexperience has caused me nothing but trouble.
Edit: I am using a Win10 64bit OS with dual monitor setup and a lefthanded mouse. My keyboard, albeit old, should possess all the necessary keys to replicate any essential keyboard-shortcuts if required. All my drivers are installed and updated.
When you build an image you need to put into it everything required for your container to work normally. You shouldn't copy files from the host once your image is built. You could use volumes as a common storage for both the host and the container, but I don't think this is your case.
Since it is not totally clear what you are doing, I'd suggest preparing all the data you need and putting it within the Docker build context, then building the image. You may also find docker-compose useful since, at the least, it lets you separately define the build context and the path to your Dockerfile if needed.
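For instance, a Compose service can declare its build context, Dockerfile and a shared volume explicitly (the paths and names here are assumptions):
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile
    volumes:
      - ./shared-data:/var/www/html/data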
We would like to leverage the excellent catalogue of Dockerfiles on Docker Hub, but the team is not in a position to use Docker.
Is there any way to run a Dockerfile as if it were a shell script against a machine?
For example, if I chose to run the Docker image ruby:2.4.1-jessie against a server running only Debian Jessie, I'd expect it to ignore the FROM directive but be able to set the environment from ENV and run the RUN commands from this Dockerfile: Github docker-library/ruby:2.4.1-jessie
A Dockerfile assumes it is executed in an empty container or on top of the image it builds on (specified via FROM). Knowledge of that environment (specifically the file system and all the installed software) is important, and running something similar outside of Docker might have side effects, because files end up in places where no files are expected.
I wouldn't recommend it.
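If you still wanted to approximate it by hand, the ENV and RUN lines of a Dockerfile translate roughly into shell like this (a sketch only, with hypothetical values; it ignores FROM, build args, users and workdirs, so all the caveats above apply):
# a Dockerfile line such as:  ENV FOO=bar
export FOO=bar
# a Dockerfile line such as:  RUN apt-get update && apt-get install -y somepackage
apt-get update && apt-get install -y somepackage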
My question relates to best practices on how to run a script when docker-compose up is invoked.
Currently I'm sharing a volume between host and container to allow for the script changes to be visible to both host and container.
It is similar to a watcher script polling for changes to a configuration file. The script has to act on the host when changes occur, according to predefined rules.
How could I start this script from the docker-compose up step, or even from the Dockerfile of the service, so that whenever the container goes up, the "watcher" can pick up any changes being made and written to that file?
The container in question will always run over a Debian / Ubuntu OS and should be architecture independent, meaning it should be able to run on ARM as well.
I wish to run a script on the Host, not inside the container. I need the Host to change its network interface configuration so it can easily adapt to any environment; it is the HOST that needs to change, I repeat. This should be seamless to the user, and easily editable from a web interface running inside a CONTAINER, to adapt to new environments.
I currently do this with a script running on the host from crontab. I just wish to know the best practices and see examples of how to run a script on the HOST from INSIDE a CONTAINER, so that the deployment can be as easy for the installing operator as just running docker-compose up.
I just wish to know the best practices and examples of how to run a script on the HOST from INSIDE a CONTAINER, so that the deployment can be as easy for the installing operator as just running docker-compose up
It seems that there is no best practice that can be applied to your case. A workaround proposed here: How to run shell script on host from docker container? is to use a client/server trick.
The host should run a small server (choose a port and specify a request type that you should be waiting for)
The container, after it starts, should send this request to that server
The host should then run the script / trigger the changes you want
This is something that might have serious security issues, so use at your own risk.
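A minimal sketch of that trick, assuming a hypothetical port 9999 and a hypothetical script update-network.sh; host.docker.internal resolves to the host on Docker Desktop, while on Linux you may need to map it via extra_hosts (host-gateway) or use the docker0 gateway IP, and the exact nc flags vary between netcat variants:
# on the host: wait for a ping from the container, then run the script
while true; do nc -l 9999 >/dev/null; /usr/local/bin/update-network.sh; done

# inside the container: trigger the host after startup
echo go | nc -w 1 host.docker.internal 9999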
The script needs to run continuously in the foreground.
In your Dockerfile use the CMD directive and define the script as the parameter.
When using the CLI, use docker run -d IMAGE SCRIPT
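For example, in the image's Dockerfile (watcher.sh is a hypothetical name for your long-running script):
# copy the watcher script into the image and run it as the container's main process
COPY watcher.sh /usr/local/bin/watcher.sh
CMD ["/bin/bash", "/usr/local/bin/watcher.sh"]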
You can create an alias for docker-compose up. Put something like this in ~/.bash_aliases (in Ubuntu):
alias up="docker-compose up -d && ~/your_script.sh"
I'm not sure if running scripts on the host from a container is possible, but if it's possible, it's a severe security flaw. Containers should be isolated, that's the point of using containers.
Trying to fix errors and debug problems with my application that is split over several containers, I frequently edit files in containers:
either I am totally lazy and install nano and edit directly in container or
I docker cp the file out of the container, edit it, copy it back and restart the container
Those are intermediate steps before arriving at new content for the container build, which takes a lot longer than doing the above (which of course is only intermediate fiddling around).
Now I frequently break the starting program of the container, which in the breaking cases is either a node script or a python webserver script; both typically fail from syntax errors.
Is there any way to save those containers? Since they do not start, I cannot docker exec into them, and thus they are lost to me. I then go the rm/rmi/build/run route after fixing the offending file in the build input.
How can I either edit files in a stopped container, or cp them in or start a shell in a stopped container - anything that allows me to fix this container?
(It seems a bit like working on a remote computer and breaking the networking configuration - connection is lost "forever" this way and one has to use a fallback, if that exists.)
How to edit Docker container files from the host? looks relevant but is outdated.
I had a problem with a container which wouldn't start due to a bad config change I made.
I was able to copy the file out of the stopped container and edit it. Something like:
docker cp docker_web_1:/etc/apache2/sites-enabled/apache2.conf .
(correct the file)
docker cp apache2.conf docker_web_1:/etc/apache2/sites-enabled/apache2.conf
Answering my own question... still hoping for a better answer from a more knowledgeable person!!
There are 2 possibilities.
1) Editing the file system on the host directly. This is somewhat dangerous and has a chance of completely breaking the container, and possibly other data, depending on what goes wrong.
2) Changing the startup script to something that never fails like starting a bash, doing the fixes/edits and then changing the startup program again to the desired one (like node or whatever it was before).
More details:
1) Using
docker ps
to find the running containers or
docker ps -a
to find all containers (including stopped ones) and
docker inspect (containername)
look for the "Id", one of the first values.
This is the part that contains implementation detail and might change, be aware that you may lose your container this way.
Go to
/var/lib/docker/aufs/diff/9bc343a9..(long container id)/
and there you will find all files that have changed relative to the image the container is based upon. You can overwrite files, add or edit files.
Again, I would not recommend this.
2) As is described at https://stackoverflow.com/a/32353134/586754 you can find the configuration json config.json at a path like
/var/lib/docker/containers/9bc343a99..(long container id)/config.json
There you can change the args from e. g. "nodejs app.js" to "/bin/bash". Now restart the docker service and start the container (you should see that it now correctly starts up). You should use
docker start -i (containername)
to make sure it does not quit straight away. You can now work with the container and/or later attach with
docker exec -ti (containername) /bin/bash
Also, docker cp is rather useful for copying files that were edited outside of the container.
Also, one should only fall back to those measures if the container is more or less "lost" anyway, so any change would be an improvement.
You can edit the container's file system directly, but I don't know if it is a good idea.
First you need to find the path of the directory which is used as the runtime root for the container.
Run docker container inspect id/name.
Look for the key UpperDir in JSON output.
That is your directory.
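A short sketch, assuming the overlay2 storage driver and a container named web:
docker container inspect --format '{{ .GraphDriver.Data.UpperDir }}' web
# prints something like /var/lib/docker/overlay2/<hash>/diff -- you can edit files there on the host, at your own risk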
If you are trying to restart a stopped container that needs to be altered because of a misconfiguration, but it isn't starting, you can do the following, which works using the "docker cp" command (similar to the previous suggestion). This procedure lets you remove files and make any other changes needed. With luck you can skip a lot of the steps below.
1. Use docker inspect to find the entrypoint (named Path in some versions of the output)
2. Create a clone of the container's image using docker run
3. Enter the clone using docker exec -ti <clone> bash (if it is a *nix container)
4. Locate the entrypoint file by looking through the clone
5. Copy the old entrypoint script out using docker cp <clone>:<path-to-entrypoint> ./
6. Modify or create a new entrypoint script, for instance
#!/bin/bash
tail -f /etc/hosts
7. Ensure the script has execution rights
8. Replace the old entrypoint using docker cp ./<entrypoint> <broken-container>:<path-to-entrypoint>
9. Start the old container using docker start <broken-container>
10. Redo steps 6-9 until the container starts
11. Fix issues in the container
12. Restore the entrypoint if needed and redo steps 6-9 as required
13. Remove the clone if needed
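A concrete sketch of those steps, using hypothetical names (myimage, broken_app) and an assumed entrypoint path of /entrypoint.sh:
docker inspect --format '{{ .Path }} {{ .Args }}' broken_app          # step 1: see what it runs
docker run -d --name clone --entrypoint tail myimage -f /dev/null     # step 2: a clone that never exits
docker exec -ti clone bash                                            # steps 3-4: look around for the entrypoint
docker cp clone:/entrypoint.sh ./entrypoint.sh                        # step 5
# edit ./entrypoint.sh locally (steps 6-7), then:
chmod +x entrypoint.sh
docker cp ./entrypoint.sh broken_app:/entrypoint.sh                   # step 8
docker start -a broken_app                                            # step 9
docker rm -f clone                                                    # step 13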