I'm trying to build an image for my app, starting FROM ruby:2.2.1; my app folder is about 200 MB compressed.
I'm getting a "Your disk is full" error when running bundle install. It also takes too long to create the compressed build context. However, running df on /var/ shows more than 1 TB available, so free disk space isn't really what bothers me.
My question is: can I ignore everything using an * in .dockerignore and then add my root project folder as a volume using docker-compose? Does this sound like a good idea?
I've also thought about:
Moving the Dockerfile to a subfolder (but I don't think I can add a parent folder as a volume using docker-compose).
Doing a git clone in the Dockerfile, but since I already have the files on my computer this sounds like a redundant step.
Should I just figure out how to give the Docker container more disk space? Even then, I still don't like how long it takes to create the context.
Note, your question doesn't match your title or first half of your post, I'll answer what you've asked.
My question is: can I ignore everything using an * in .dockerignore and then add my root project folder as a volume using docker-compose? Does this sound like a good idea?
You can add your project with a volume in docker-compose, but you lose much of the portability: your image will be incomplete if anyone else tries to use it without your volume data. You also lose the ability to do any compilation steps at build time, and you may increase your container startup time as it pulls in dependencies. Lastly, if you run out of space on build, there's a good chance you'll run out of space at runtime, unless your volume data makes up a significant portion of your container size.
If I ignore a file in .dockerignore, can I use COPY on that file from the Dockerfile?
No, you can't use COPY or ADD on any file that's excluded from the build context sent to the Docker daemon via .dockerignore.
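A middle ground worth considering (a sketch; the re-included paths here are hypothetical and should match what your Dockerfile actually COPYs) is a whitelist-style .dockerignore: exclude everything with *, then re-include only what the build needs with ! exceptions. This keeps the context small without giving up COPY entirely:

```
# .dockerignore: exclude everything from the build context...
*
# ...then re-include only the files the Dockerfile needs
!Gemfile
!Gemfile.lock
!app/
!config/
```

Exception patterns are evaluated in order, so later ! lines win over the earlier *.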
Related
I have just committed and saved a MySQL container to an image. This MySQL container was created in Portainer a long time ago, from the main upstream MySQL image, through the Portainer web interface with a few mouse clicks.
The point of the image was to take it to another server with all the history, metadata, and such. I also saved the volume with the MySQL data.
I managed to replicate perfectly the same environment on the new server.
But now I'm a bit concerned, as I can't find a way to update the "base" MySQL image.
To be clear, I did not build any image with any Dockerfile. The process was exactly as I stated before, through Portainer using MySQL mainstream image from Docker Hub.
So, is there any way to update the MySQL part of my container? I believe there should be, given Docker's whole layered-image philosophy.
Thanks for your time and help
You can't update the base image underneath an existing image, no matter how you created it. You need to start over from the updated base image and re-run whatever commands you originally ran to create the image. The standard docker build system will do this all for you, given a straightforward text description of what image you start FROM and what commands you need to RUN.
In the particular case of the Docker Hub database images, there's actually fairly little you can do with a derived image. These images are generally set up so that it's impossible to create a derived image with preloaded data; data is always in a volume, possibly an anonymous volume that gets automatically created, and that can't be committed. You can add files to /docker-entrypoint-initdb.d that will be executed the first time the database starts up, but once you have data in a volume, these won't be considered.
You might try running the volume you have against an unmodified current mysql image:
docker run -d -p 3306:3306 -v mysql_data:/var/lib/mysql mysql:8
If you do need to rebuild the custom image, I'd strongly encourage you to do it by writing a Dockerfile to regenerate the image, and to check that Dockerfile into source control. Then when you need to update the base image (security issues happen!) you can just re-run docker build. Avoid docker commit, since it leads to an unreproducible image and exactly the sort of question you're asking.
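As a sketch of what that Dockerfile could look like (the schema.sql file name here is hypothetical), a derived MySQL image that seeds its schema via the /docker-entrypoint-initdb.d mechanism mentioned above might be:

```dockerfile
# A minimal derived MySQL image; schema.sql is a hypothetical init script
FROM mysql:8
# Scripts in this directory run only on the FIRST startup,
# when the data volume is still empty
COPY schema.sql /docker-entrypoint-initdb.d/
```

Remember that once the volume contains data, these init scripts are skipped, so all durable state still lives in the volume, not the image.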
Is there some difference between creating an image using a Dockerfile vs. creating an image from a container (e.g., run a container from the same base as the Dockerfile, transfer installers to the container, run them from the command line, and then create an image from the container)?
At least I found out that installing the VC Runtime from the Windows base Docker container does not work :(
If you create an image using a Dockerfile, it's all but trivial to update the image by checking it out from source control, updating the tag on a base image or docker pulling a newer version of it, and re-running docker build.
If you create an image by running docker commit, and you discover in a year that there's a critical security vulnerability in the base Linux distribution and you need to stop using it immediately, you need to remember what it was you did a year ago to build the image and exactly what steps you did to repeat them, and you'd better hope you do them exactly the same way again. Oh, if only you had written down in a text file what base image you started FROM, what files you had to COPY in, and then what commands you need to RUN to set up the application in the image...
In short, writing a Dockerfile, committing it to source control, and running docker build is almost always vastly better practice than running docker commit. You can set up a continuous-integration system to rebuild the image whenever your source code changes; when there is that security vulnerability it's trivial to bump the FROM line to a newer base image and rebuild.
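That "text file" is exactly what a Dockerfile is; a minimal sketch for a generic application (every name and path here is hypothetical) might look like:

```dockerfile
# Dockerfile: records the base image you started FROM, the files
# you COPY in, and the commands you RUN, so the image can always
# be rebuilt with a single `docker build .`
FROM ubuntu:20.04
COPY app/ /opt/app/
RUN /opt/app/install.sh
CMD ["/opt/app/run.sh"]
```

When the base image needs a security update, the only change is the FROM line, followed by a rebuild.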
Assuming the project was backed up with the following script:
https://gist.githubusercontent.com/pirate/265e19a8a768a48cf12834ec87fb0eed/raw/64145b8275a081e0c3082365bb1a5835c8b01b3c/docker-compose-backup.sh
and I have a compressed tar archive with a full backup, is there any "one-liner" way to successfully restore and run the project on a clean machine?
The standard “oops I lost all of my containers” Docker restore script should be roughly
# Get a copy of the repository with docker-compose.yml
git clone git@github.com:...
# Unpack a backup specifically of the bind-mounted
# data directories
tar xzf data-backup.tar.gz
# Recreate all of the containers from scratch
docker-compose up -d --build
This requires making sure all of the data in your application is stored somewhere outside individual Docker containers. In a Docker Compose setup, that means using volumes: directives to store the data somewhere durable. A typical practice is to store as much data as you can in databases, and to have no persistent data at all in non-database containers. If you’re worried about losing the entire /var/lib/docker tree, then prefer bind mounts to named volumes, and use whatever backup solution you normally use to back up the corresponding host directories.
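For example, a Compose file following that pattern might bind-mount the database's data directory to a host path (the service names and paths here are hypothetical):

```yaml
version: "3"
services:
  db:
    image: mysql:8
    volumes:
      # Bind mount: the data lives in ./data/mysql on the host,
      # where any ordinary backup tool can reach it
      - ./data/mysql:/var/lib/mysql
  app:
    build: .
    # No volumes: the application container keeps no persistent state
```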
The script you show tries to preserve a number of things that just don’t need to be backed up:
If you’re preserving the database container’s data directory in a bind-mounted host directory, you don’t need to separately take a database-level backup (though it doesn’t hurt)
docker inspect is an extremely low-level diagnostic tool and it’s usually not useful to run it; there’s nothing you can restore from it
You don’t need to docker save the images because they’re in an external Docker registry (Docker Hub, AWS ECR, ...), and regardless you’ve checked their Dockerfiles into source control and can rebuild them
You don’t need to docker export individual containers because they don’t keep mutable data, and you should be able to destroy and recreate them routinely anyway
The one genuinely useful thing it does is take advantage of reasonably well-known Docker internals to back up the contents of named volumes. Manually accessing files in /var/lib/docker isn’t usually a best practice, and the actual format of the files there isn’t guaranteed. The Docker documentation discusses backing up and restoring named volumes in a more portable way (but this is a place where I find bind mounts much more convenient).
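The portable approach the documentation describes is to run a throwaway container that mounts the named volume alongside a bind-mounted host directory and tars the data across. As a command sketch (the volume name mysql_data and archive name are hypothetical):

```shell
# Back up the named volume "mysql_data" to ./mysql_data.tar.gz
docker run --rm -v mysql_data:/data -v "$(pwd)":/backup \
  busybox tar czf /backup/mysql_data.tar.gz -C /data .

# Restore it into a (new, empty) named volume on another machine
docker run --rm -v mysql_data:/data -v "$(pwd)":/backup \
  busybox tar xzf /backup/mysql_data.tar.gz -C /data
```

This never touches /var/lib/docker directly, so it doesn't depend on Docker's internal storage layout.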
Is there a way to obtain the docker parent image tree for a given image? I know
docker history IMG_NAME will provide an image ID for the current image you're working with, but everything else is missing. I've read this was removed in v1.10 for security reasons, but it seems like a larger concern not to be able to verify the tree of images that a final image was created from.
The other thing I've found is docker save IMG_NAME -o TAR_OUTPUT.tar, which will let you view all of the files in each layer, but that seems pretty tedious.
How can I be sure that the only things modified in a given image for a piece of software are the installation and configuration of the software itself? It seems that being able to see the changes in the Dockerfiles used to generate each successive image would be an easy way to verify this.
Apart from what has been said by chintan thakar, you will probably have to iterate several times.
An example should clarify this
Suppose you want to dig into an image, and the Dockerfile to create this image starts with
FROM wordpress
so you go to
https://hub.docker.com/_/wordpress/
have a look at the Dockerfile, and you notice that
https://github.com/docker-library/wordpress/blob/0a5405cca8daf0338cf32dc7be26f4df5405cfb6/php5.6/apache/Dockerfile
starts with
FROM php:5.6-apache
so you go to the PHP 5.6 Dockerfile at
https://github.com/docker-library/php/blob/eadc27f12cfec58e270f8e37cd1b4ae9abcbb4eb/5.6/apache/Dockerfile
and you find the Dockerfile starts with
FROM debian:jessie
so you go to the Dockerfile of Debian jessie at
https://github.com/debuerreotype/docker-debian-artifacts/blob/af5a0043a929e0c87f7610da93bfe599ac40f29b/jessie/Dockerfile
and notice that this image is built like this
FROM scratch
ADD rootfs.tar.xz /
CMD ["bash"]
So you will need to do this if you want to see where all the files come from.
If a security issue is announced, you will also need to do the same walk in order to know whether you are affected or not.
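There is no built-in command that prints this whole FROM chain, but a couple of commands help confirm what you find at each step. As a command sketch (shown here against the wordpress image from the example above):

```shell
# Show the layers and the Dockerfile steps that created them
# (CREATED BY is truncated by default; --no-trunc shows full commands)
docker history --no-trunc wordpress

# Show the image's layer digests, which you can compare against
# the layers of the suspected parent image
docker inspect --format '{{json .RootFS.Layers}}' wordpress
```

If every layer of the suspected parent appears as a prefix of the child's layer list, the child was built on top of it.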
I am using Docker for my deployment and as it stands I use Docker-Compose (.yml file) to launch ~6 containers simultaneously. Each image within the Compose file is locally found (no internet connection within deployment environment).
As it stands the steps my deployment takes are as follows:
Run docker-compose up (launches 6 containers from local images such as image1:latest, image2:latest, etc. using the images with the "latest" tag)
When exited/stopped, I have 6 stopped containers. Manually restart each of the six stopped containers (docker start xxx)
Manually commit each re-started container (docker commit xxx)
Manually re-tag each of the previous generation images incrementally (image1:latest -> image1:version1, image1:version2, etc.) and manually delete the image containing the "latest" tag
Manually tag each of the committed containers (which are now images) with the "latest" tag (image1:latest)
This process requires a lot of user involvement, and our deployment requires that the user's involvement be limited to running the docker-compose up command and then shutting down/stopping Docker Compose.
The required end goal is to have a script, or Docker, take care of these steps by itself and end up with different generations of images (image1:version1, image1:version2, image1:latest, etc.).
So, my question is, how would I go about creating a script (or have Docker do it) where the script (or Docker) can autonomously:
Restart the stopped containers upon stopping/exiting of Docker-Compose
Commit the restarted containers
Re-tag the previous images with latest tags to an incremented version# (image1:version1, image1:version2, etc.) then delete the previous image1:latest image
Tag the newly committed restarted containers (which are now images) with the "latest" tag
This is a rather lengthy and intensive question to answer, but I would appreciate any help with any of the steps required to accomplish my task. Thank you.
The watchtower project tries to address this.
https://github.com/CenturyLinkLabs/watchtower
It auto restarts a running container when a base image is updated.
It is also intelligent, so, for example, when it needs to restart a container that is linked to other containers, it does so without destroying the links.
I've never tried it but worth a shot!
Let us know how it goes. I'm going to favourite this question, as it sounds like a great idea.
PS If watchtower proves a pain and you try to do this manually then ...
docker inspect
is your friend, since it gives you loads of info about containers and images, allowing you to determine their current status.
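If you do script the manual flow yourself, the only fiddly part is computing the next incremental versionN tag from the tags that already exist. A sketch of that piece (the helper name next_version is my own invention; the docker commands in the comment show how it would plug into the commit/retag loop):

```shell
#!/bin/sh
# Given the existing tags for one image (one tag per line, e.g. the
# output of: docker image ls --format '{{.Tag}}' image1),
# print the next incremental "versionN" tag.
next_version() {
  highest=$(printf '%s\n' "$1" \
    | sed -n 's/^version\([0-9][0-9]*\)$/\1/p' \
    | sort -n | tail -n 1)
  [ -z "$highest" ] && highest=0   # no versionN tags yet: start at version1
  echo "version$((highest + 1))"
}

# Sketch of the per-container loop it would slot into:
#   new_tag=$(next_version "$existing_tags")
#   docker commit "$container" "image1:$new_tag"
#   docker tag "image1:$new_tag" image1:latest
```

The sed pattern only matches exact versionN tags, so latest and any other tags are ignored when computing the next number.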