Script to clone/snapshot Docker Containers including their Data? - shell

I would like to clone a dockerized application including all its data, which uses three containers in this example: 1) a web application container such as a CMS, 2) a database container and 3) a data-volume container (using docker volumes).
With docker-compose, I can easily create identical instances of these containers with just the initial data. But what if I want to clone a set of running containers on the same server, including all their accumulated data, in a similar way to cloning a KVM guest? With KVM I would suspend or shut down the VM, clone it with something like virt-clone, and then start the cloned guest, which has all the same data as the original.
One use case would be to create a clone/snapshot of a running development web-server before making major changes or before installing new versions of plugins.
With Docker, this does not seem to be so straightforward, as data is not automatically copied together with its container. Ideally I would like to do something simple like docker-compose clone and end up with a second set of containers identical to the first, including all their data. Neither Docker nor docker-compose provides a clone command (as of version 1.8), thus I would need to consider various approaches, like backing up & restoring the data/database or using a third party tool like Flocker.
Related to this is the question on how to do something similar to KVM snapshots of a dockerized app, with the ability to easily return to a previous state. Preferably the cloning, snapshotting and reverting should be possible with minimal downtime.
What would be the preferred Docker way of accomplishing these things?
Edit: Based on the first answer, I will make my question a little more specific in order to hopefully arrive at programmatic steps to be able to do something like docker-compose-clone and docker-compose-snapshot using a bash or python script. Cloning the content of the docker volumes seems to be the key to this, as the containers themselves are basically cloned each time I run docker-compose on the same yaml file.
Generally my full-clone script would need to
duplicate the directory containing the docker-compose file
temporarily stop the containers
create (but not necessarily run) the second set of containers
determine the data-volumes to be duplicated
backup these data-volumes
restore the data-volumes into the cloned data container
start the second set of containers
Would this be the correct way to go about it and how should I implement this? I'm especially not sure on how to do step 4 (determine the data-volumes to be duplicated) in a script, as the command docker volume ls will only be available in Docker 1.9.
How could I do something similar to KVM snapshots using this approach? (possibly using COW filesystem features from ZFS, which my Docker install is already using).

With docker you would keep all of your state in volumes. Your containers can be recreated from images as long as they re-use the same volumes (either from the host or a data-volume container).
I'm not aware of an easy way to export volumes from a data-volume container. I know that the Docker 1.9 release is going to add some top-level APIs for interacting with volumes, but I'm not sure if export will be available immediately.
If you're using a host volume, you could manage the state externally from docker.
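For the host-volume case, cloning the state can then be as simple as stopping the containers and copying the directory. A minimal sketch (the clone_host_volume name and the example paths are mine, not from any tool):

```shell
#!/bin/sh
# Copy one host-volume directory to a new location, preserving
# permissions and timestamps. Run this while the containers are stopped.
clone_host_volume() {
    src="$1"    # existing host directory mounted into the original container
    dst="$2"    # directory to be mounted into the cloned container
    mkdir -p "$dst"
    cp -a "$src/." "$dst/"
}

# usage (hypothetical paths):
# docker-compose stop
# clone_host_volume /opt/app/data /opt/app-clone/data
# docker-compose start
```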

Currently, I'm using the following script to clone a dockerized CMS web-application Concrete5.7, based on the approach outlined above. It creates a second set of identical containers using docker-compose, then it backs up just the data from the data volumes, and restores it to the data containers in the second set.
This could serve as an example for developing a more generalised script:
#!/bin/bash
set -e
# This script will clone a set of containers including all its data
# the docker-compose.yml is in the PROJ_ORIG directory
# - do not use capital letters or underscores for clone suffix,
# as docker-compose will modify or remove these
PROJ_ORIG="c5app"
PROJ_CLONE="${PROJ_ORIG}003"
# 1. duplicate the directory containing the docker-compose file
cd /opt/docker/compose/concrete5.7/
cp -Rv ${PROJ_ORIG}/ ${PROJ_CLONE}/
# 2. temporarily stop the containers
cd ${PROJ_ORIG}
docker-compose stop
# 3. create, run and stop the second set of containers
# (docker-compose does not have a create command)
cd ../${PROJ_CLONE}
docker-compose up -d
docker-compose stop
# 4. determine the data-volumes to be duplicated
# a) examine which containers are designated data containers
# b) then use docker inspect to determine the relevant directories
# c) store destination directories & process them for backup and clone
#
# In this application we use two data containers
# (here we used DATA as part of the name):
# $ docker-compose ps | grep DATA
# c5app_DB-DATA_1 /true Exit 0
# c5app_WEB-DATA_1 /true Exit 0
#
# $ docker inspect ${PROJ_ORIG}_WEB-DATA_1 | grep Destination
# "Destination": "/var/www/html",
# "Destination": "/etc/apache2",
#
# $ docker inspect ${PROJ_ORIG}_DB-DATA_1 | grep Destination
# "Destination": "/var/lib/mysql",
# these still need to be determined manually from examining
# the docker-compose.yml or using the commands in 4.
DATA_SUF1="_WEB-DATA_1"
VOL1_1="/etc/apache2"
VOL1_2="/var/www/html"
DATA_SUF2="_DB-DATA_1"
VOL2_1="/var/lib/mysql"
# 5. Backup Data:
docker run --rm --volumes-from ${PROJ_ORIG}${DATA_SUF1} -v ${PWD}:/clone debian tar -cpzf /clone/clone${DATA_SUF1}.tar.gz ${VOL1_1} ${VOL1_2}
docker run --rm --volumes-from ${PROJ_ORIG}${DATA_SUF2} -v ${PWD}:/clone debian tar -cpzf /clone/clone${DATA_SUF2}.tar.gz ${VOL2_1}
# 6. Clone Data:
# existing files in volumes need to be deleted before restoring,
# as the installation may have created additional files during initial run,
# which do not get overwritten during restore
docker run --rm --volumes-from ${PROJ_CLONE}${DATA_SUF1} -v ${PWD}:/clone debian bash -c "rm -rf ${VOL1_1}/* ${VOL1_2}/* && tar -xpf /clone/clone${DATA_SUF1}.tar.gz"
docker run --rm --volumes-from ${PROJ_CLONE}${DATA_SUF2} -v ${PWD}:/clone debian bash -c "rm -rf ${VOL2_1}/* && tar -xpf /clone/clone${DATA_SUF2}.tar.gz"
# 7. Start Cloned Containers:
docker-compose start
# 8. Remove tar archives
rm -v clone${DATA_SUF1}.tar.gz
rm -v clone${DATA_SUF2}.tar.gz
It's been tested and works, but it still has the following limitations:
the data-volumes to be duplicated need to be determined manually and
the script needs to be modified, depending on the number of data-containers and data-volumes
there is no snap-shot/restore capability
I welcome any suggestions for improvements (especially step 4.). Or, if someone would come up with a different, better approach I would accept that as an answer instead.
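One way to chip away at step 4 without docker volume ls: scrape the mount destinations out of docker inspect instead of hard-coding them. A rough sketch (the function name is mine; it assumes the Docker 1.8-era inspect output shown in the comments above):

```shell
#!/bin/sh
# Print one volume destination per line for the given container,
# by grepping the "Destination" entries out of `docker inspect`.
data_volumes() {
    docker inspect "$1" \
        | grep '"Destination"' \
        | sed -e 's/.*"Destination": "//' -e 's/".*//'
}

# usage (with the data containers from this script):
# VOLS1=$(data_volumes "${PROJ_ORIG}_WEB-DATA_1")
# VOLS2=$(data_volumes "${PROJ_ORIG}_DB-DATA_1")
```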
The application used in this example, together with the docker-compose.yml file can be found here.

On Windows, there is a port of docker's open source container project available from Windocks that does what you need. There are two options:
Smaller sized databases are copied into containers via an Add database command specified while building the image. After that every container built from that receives the database automatically.
For large databases, there is a cloning functionality. The databases are cloned during the creation of containers and the clones are done in seconds even for terabyte size DBs. Deleting a container also removes the clone automatically. Right now its only available for SQL Server though.
See here for more details on the database adding and cloning.

Related

copy bash command history (recursive search commands) into Docker container

I have a container which I am using interactively (docker run -it). In it, I have to run a pretty common set of commands, though not always in a set order, hence I cannot just run a script.
Thus, I would like for a way to have my commands in recursive search (Ctrl+R) be available in the Docker container.
Any idea how I can do this?
Let's mount the history file into the container from the host, so its contents survive the container's death.
# In some directory
touch bash_history
docker run -v "$(pwd)/bash_history":/root/.bash_history:Z -it fedora /bin/bash
Note that -v needs an absolute path, hence $(pwd). I would also recommend keeping a separate bash history from the one you use on the host, for safety reasons.
I found helpful info in these questions:
Docker and .bash_history
Docker: preserve command history
https://superuser.com/questions/1158739/prompt-command-to-reload-from-bash-history
They use docker volume mounts, however, which means that commands run in the container affect the local (host PC) history, which I do not want.
It seems I will have to copy ~/.bash_history from the host into the container, which will make the history work 'one-way'.
UPDATE: Working:
COPY your_command_script.sh some_folder/my_history
ENV HISTFILE myroot/my_history
ENV PROMPT_COMMAND="history -a; history -r"
Explanation:
copy command script into a file in container
tell the shell to look at a different file for history
reload the history file
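Put together, the whole idea fits in a few Dockerfile lines (a sketch; the file name is the asker's placeholder, and note that PROMPT_COMMAND has to be set with ENV, not RUN, or it will not survive into the running container):

```dockerfile
FROM fedora
# seed the container's history from a file shipped in the build context
COPY my_history /root/my_history
# point bash at that file instead of ~/.bash_history
ENV HISTFILE=/root/my_history
# append and reload on every prompt so new commands persist within the container
ENV PROMPT_COMMAND="history -a; history -r"
```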

running a bash client at docker creation to set an environment variable

From examples I've seen one can set environment variables in docker-compose.yml like so:
services:
  postgres:
    image: my_node_app
    ports:
      - 8080:8080
    environment:
      APP_PASSWORD: mypassword
...
For security reasons, my use case requires me to fetch the password from a server that we have a bash client for:
#!/bin/bash
get_credential <server> <dev-environment> <role> <key>
In docker documentation, I found this, which says that I can pass in shell environment variable values to docker compose. So I can run the bash client to grab the passwords in my starting shell that creates the docker instances. However, that requires me to have my bash client outside docker and inside my maven project.
Another way to do this would be to run/cmd/entrypoint a bash script that can set environment variable for the docker instance. Since my docker image runs node.js, currently my Dockerfile is like this:
FROM node:4-slim
MAINTAINER myself
# ... do Dockerfile stuff
# TRIAL #1: run a bash script to set the environment varable --- UNSUCCESSFUL!
COPY set_en_var.sh /
RUN chmod +x /set_en_var.sh
RUN /bin/bash /set_en_var.sh
# original entry point
#ENTRYPOINT ["node", "mynodeapp.js", "configuration.js"]
# TRIAL #2: use a bash script as entrypoint that sets
# the environment variable and runs my node app . --- UNSUCCESSFUL TOO!
ENTRYPOINT ["/entrypoint.sh"]
Here is the code for entrypoint.sh:
. mybashclient.sh
cred_str=$(get_credential <server> <dev-environment> <role> <key>)
export APP_PASSWORD=( $cred_str )
# run the original entrypoint command
node mynodeapp.js configuration.js
And here is code for my set_en_var.sh:
. mybashclient.sh
cred_str=$(get_credential <server> <dev-environment> <role> <key>)
export APP_PASSWORD=( $cred_str )
So 2 questions:
Which is a better choice, having my bash client for password live inside docker or outside docker?
If I were to have it inside docker, how can I use cmd/run/entrypoint to achieve this?
Which is a better choice, having my bash client for password live inside docker or outside docker?
Always have it inside. You don't want dependencies on the host OS; avoid that situation as much as possible.
If I were to have it inside docker, how can I use cmd/run/entrypoint to achieve this?
Consider the below line of code you used
RUN /bin/bash /set_en_var.sh
This won't work at all, because it makes no change to the container itself. You just run a bash process that sets some environment variables; then that bash exits, and nothing in the image has changed. A Dockerfile build only keeps the filesystem changes each command makes, and in your case nothing outlives that bash session.
Next, your approach of doing this at build time is also not justified. If you bake the credentials into the image, you defeat the purpose of having a command that fetches the latest credentials. Suppose you change the password: that would require you to rebuild the image (had it worked in the first place).
Now, your entrypoint.sh approach is the right one and it should work; you should just check what is going wrong with it. Also echo cred_str while testing, to make sure you are getting the right credential details back from the command.
Last you should change the line
node mynodeapp.js configuration.js
to
exec node mynodeapp.js configuration.js
This makes sure that your node process becomes PID 1.
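Putting those fixes together, a corrected entrypoint.sh might look like the heredoc below. The values myserver, mydevenv, myrole, and mykey are hypothetical stand-ins for the question's <server> <dev-environment> <role> <key> placeholders, spelled out only so the file can be syntax-checked:

```shell
#!/bin/sh
# Write a sketch of the corrected entrypoint.sh and syntax-check it.
cat > entrypoint.sh <<'EOF'
#!/bin/bash
set -e
. /mybashclient.sh   # provides get_credential
APP_PASSWORD="$(get_credential myserver mydevenv myrole mykey)"
export APP_PASSWORD
# exec so that node replaces the shell and runs as PID 1
exec node mynodeapp.js configuration.js
EOF
bash -n entrypoint.sh && echo "entrypoint.sh: syntax OK"
```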

How to restore a mongo Docker container on the Mac

I removed my mongo container
docker rm myMongoDB
Did I lose all my data, or I can restore it? If so, how?
When I try to run another container from the image
docker run -p 27017:27017 -d mongo --name myMongo2
it won't run and its STATUS says Exited (2) 8 seconds ago.
The official mongo image on Docker Hub (https://hub.docker.com/_/mongo/) defines two volumes to store data in the Dockerfile. If you did not explicitly specify a -v / --volume option when running the container, Docker created anonymous (unnamed) volumes for those, and those volumes may still be around. It may be a bit difficult to find which volumes were last used by the container, because they don't have a name.
To list all volumes that are still present on the docker host, use;
docker volume ls
Which should give you something like this;
DRIVER VOLUME NAME
local 9142c58ad5ac6d6e40ccd84096605f5393bf44ab7b5fe51edfa23cd1f8e13e4b
local 4ac8e34c11ac7955b9b79af10c113b870edd0869889d1005ee17e98e7c6c05f1
local da0b4a7a00c4b60c492599dabe1dbc501113ae4b2dd1811527384a5dc26cab13
local 81a40483ae00d72dcfa2117b3ae40f3fe79038544253e60b85a8d0efc8f3d139
To see what's in a volume, you can attach it to a temporary container, and check what's in there. For example;
docker run -it -v 81a40483ae00d72dcfa2117b3ae40f3fe79038544253e60b85a8d0efc8f3d139:/volume-data ubuntu
That will start an interactive shell in a new ubuntu container, with the volume 81a40483ae00d72dcfa2117b3ae40f3fe79038544253e60b85a8d0efc8f3d139 mounted at /volume-data/ inside the container.
You can then go into that directory, and check if it's the volume you're looking for:
root@08c11a34ed44:/# cd /volume-data/
root@08c11a34ed44:/volume-data# ls -la
Once you have identified which volumes to use (according to the Dockerfile, the mongo image uses two), you can start a new mongo container and mount those volumes:
docker run -d --name mymongo \
-v 4ac8e34c11ac7955b9b79af10c113b870edd0869889d1005ee17e98e7c6c05f1:/data/db/ \
-v da0b4a7a00c4b60c492599dabe1dbc501113ae4b2dd1811527384a5dc26cab13:/data/configdb/ \
mongo
I really suggest you read the Where to Store Data section in the documentation for the mongo image on Docker Hub, to prevent losing your data.
NOTE
I also noted that your last command puts --name myMongo2 after the image name; options must come before mongo (the image name), or they are passed as arguments to the image's entrypoint instead, which is why the container exits immediately.

convert Dockerfile to Bash script

Is there any easy way to convert a Dockerfile to a Bash script, in order to install all the software on a real OS? The reason is that I cannot change the Docker container, and I would like to change a few things afterwards if they don't work out.
In short - no.
By parsing the Dockerfile with a tool such as dockerfile-parse you could run the individual RUN commands, but this would not replicate the Dockerfile's output.
You would have to be running the same version of the same OS.
The ADD and COPY commands affect the filesystem, which is in its own namespace. Running these outside of the container could potentially break your host system. Your host will also have files in places that the container image would not.
VOLUME mounts will also affect the filesytem.
The FROM image (which may in turn be descended from other images) may have other applications installed.
Writing Dockerfiles can be a slow process if there is a large installation or download step. To mitigate that, try adding new packages as a new RUN command (to take advantage of the cache) and add features incrementally, only optimising/compressing the layers when the functionality is complete.
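As an example of that incremental style, keeping slow, rarely-changing steps in their own early RUN lines lets the build cache absorb them while you iterate on later layers (the package names here are just placeholders):

```dockerfile
FROM ubuntu:trusty
# slow, stable steps first: cached across rebuilds
RUN apt-get update && apt-get install -y build-essential
# new features added as separate RUN lines while iterating
RUN apt-get install -y curl
# application files last: changing them only invalidates this final layer
COPY . /app
```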
You may also want to use something like ServerSpec to get a TDD approach to your container images and prevent regressions during development.
Best practice docs here, gotchas and the original article.
Basically you can make a copy of a Docker container's file system using “docker export”, which you can then write to a loop device:
docker build -t <YOUR-IMAGE> ...
docker create --name=<YOUR-CONTAINER> <YOUR-IMAGE>
dd if=/dev/zero of=disk.img bs=1 count=0 seek=1G
mkfs.ext2 -F disk.img
sudo mount -o loop disk.img /mnt
docker export <YOUR-CONTAINER> | sudo tar x -C /mnt
sudo umount /mnt
Convert a Docker container to a raw file system image.
More info here:
http://mr.gy/blog/build-vm-image-with-docker.html
You can of course convert a Dockerfile to Bash script commands. It's just a matter of determining what the translation means. All Docker installs apply changes to a "file system layer", which means all changes can be implemented in a real OS.
An example of this process is here:
https://github.com/thatkevin/dockerfile-to-shell-script
It is an example of how you would do the translation.
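In the same spirit, a naive first pass at the translation can be done with standard tools. This sketch (function name is mine) handles only single-line RUN instructions and none of the caveats from the answer above:

```shell
#!/bin/sh
# Print the shell commands from a Dockerfile's single-line RUN instructions.
dockerfile_runs() {
    grep '^RUN ' "$1" | sed 's/^RUN //'
}

# usage: dockerfile_runs Dockerfile > install.sh
```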
You can install an application inside a Dockerfile like this:
FROM <base>
RUN apt-get update -y
RUN apt-get install <some application> -y

How to mount a host directory in a Docker container

I am trying to mount a host directory into a Docker container so that any updates done on the host is reflected into the Docker containers.
Where am I doing something wrong. Here is what I did:
kishore$ cat Dockerfile
FROM ubuntu:trusty
RUN apt-get update
RUN apt-get -y install git curl vim
CMD ["/bin/bash"]
WORKDIR /test_container
VOLUME ["/test_container"]
kishore$ tree
.
├── Dockerfile
└── main_folder
├── tfile1.txt
├── tfile2.txt
├── tfile3.txt
└── tfile4.txt
1 directory, 5 files
kishore$ pwd
/Users/kishore/tdock
kishore$ docker build --tag=k3_s3:latest .
Uploading context 7.168 kB
Uploading context
Step 0 : FROM ubuntu:trusty
---> 99ec81b80c55
Step 1 : RUN apt-get update
---> Using cache
---> 1c7282005040
Step 2 : RUN apt-get -y install git curl vim
---> Using cache
---> aed48634e300
Step 3 : CMD ["/bin/bash"]
---> Running in d081b576878d
---> 65db8df48595
Step 4 : WORKDIR /test_container
---> Running in 5b8d2ccd719d
---> 250369b30e1f
Step 5 : VOLUME ["/test_container"]
---> Running in 72ca332d9809
---> 163deb2b1bc5
Successfully built 163deb2b1bc5
Removing intermediate container b8bfcb071441
Removing intermediate container d081b576878d
Removing intermediate container 5b8d2ccd719d
Removing intermediate container 72ca332d9809
kishore$ docker run -d -v /Users/kishore/main_folder:/test_container k3_s3:latest
c9f9a7e09c54ee1c2cc966f15c963b4af320b5203b8c46689033c1ab8872a0ea
kishore$ docker run -i -t k3_s3:latest /bin/bash
root@0f17e2313a46:/test_container# ls -al
total 8
drwx------ 2 root root 4096 Apr 29 05:15 .
drwxr-xr-x 66 root root 4096 Apr 29 05:15 ..
root@0f17e2313a46:/test_container# exit
exit
kishore$ docker -v
Docker version 0.9.1, build 867b2a9
I don't know how to check the boot2docker version
Questions, issues facing:
How do I need to link the main_folder to the test_container folder present inside the docker container?
I need to make this automatically. How do I to do that without really using the run -d -v command?
What happens if the boot2docker crashes? Where are the Docker files stored (apart from Dockerfile)?
There are a couple of ways you can do this. The simplest is to use the Dockerfile ADD command, like so:
ADD . /path/inside/docker/container
However, any changes made to this directory on the host after building the image will not show up in the container. This is because when building an image, Docker copies the directory into a tar archive as part of the build context, and its contents become part of the image permanently.
The second way to do this is the way you attempted, which is to mount a volume. Due to trying to be as portable as possible you cannot map a host directory to a docker container directory within a dockerfile, because the host directory can change depending on which machine you are running on. To map a host directory to a docker container directory you need to use the -v flag when using docker run, e.g.,:
# Run a container using the `alpine` image, mount the `/tmp`
# directory from your host into the `/container/directory`
# directory in your container, and run the `ls` command to
# show the contents of that directory.
docker run \
-v /tmp:/container/directory \
alpine \
ls /container/directory
The user of this question was using Docker version 0.9.1, build 867b2a9; I will give you an answer for Docker version >= 17.06.
What you want, keep local directory synchronized within container directory, is accomplished by mounting the volume with type bind. This will bind the source (your system) and the target (at the docker container) directories. It's almost the same as mounting a directory on linux.
According to the Docker documentation, the appropriate flag to use is now --mount instead of -v. Here is its documentation:
--mount: Consists of multiple key-value pairs, separated by commas. Each key/value pair takes the form of a <key>=<value> tuple. The --mount syntax is more verbose than -v or --volume, but the order of the keys is not significant, and the value of the flag is easier to understand.
The type of the mount, which can be bind, volume, or tmpfs. (We are going to use bind)
The source of the mount. For bind mounts, this is the path to the file or directory on the Docker daemon host. May be specified as source or src.
The destination takes as its value the path where the file or directory will be mounted in the container. May be specified as destination, dst, or target.
So, to mount the current directory (source) into /test_container (target) we are going to use:
docker run -it --mount src="$(pwd)",target=/test_container,type=bind k3_s3
If these mount parameters contain spaces, you must put quotes around them. When I know they don't, I use `pwd` instead:
docker run -it --mount src=`pwd`,target=/test_container,type=bind k3_s3
You will also have to deal with file permission, see this article.
you can use -v option from cli, this facility is not available via Dockerfile
docker run -t -i -v <host_dir>:<container_dir> ubuntu /bin/bash
where host_dir is the directory from host which you want to mount.
you don't need to worry about directory of container if it doesn't exist docker will create it.
If you do any changes in host_dir from host machine (under root privilege) it will be visible to container and vice versa.
2 successive mounts:
I guess many people here might be using boot2docker; the reason you don't see anything is that you are mounting a directory from the boot2docker VM, not from your host.
You basically need 2 successive mounts:
the first one to mount a directory from your host into the boot2docker VM
the second to mount that directory from boot2docker into your container, like this:
1) Mount local system on boot2docker
sudo mount -t vboxsf hostfolder /boot2dockerfolder
2) Mount boot2docker file on linux container
docker run -v /boot2dockerfolder:/root/containerfolder -i -t imagename
Then when you ls inside the containerfolder you will see the content of your hostfolder.
Is it possible that you use Docker on OS X via boot2docker or something similar?
I've had the same experience: the command is correct, but nothing (sensible) is mounted in the container anyway.
As it turns out - it's already explained in the docker documentation. When you type docker run -v /var/logs/on/host:/var/logs/in/container ... then /var/logs/on/host is actually mapped from the boot2docker VM-image, not your Mac.
You'll have to pipe the shared folder through your VM to your actual host (the Mac in my case).
For those who wants to mount a folder in current directory:
docker run -d --name some-container -v ${PWD}/folder:/var/folder ubuntu
I'm just experimenting with getting my SailsJS app running inside a Docker container to keep my physical machine clean.
I'm using the following command to mount my SailsJS/NodeJS application under /app:
cd my_source_code_folder
docker run -it -p 1337:1337 -v $(pwd):/app my_docker/image_with_nodejs_etc
[UPDATE] As of ~June 2017, Docker for Mac takes care of all the annoying parts of this where you have to mess with VirtualBox. It lets you map basically everything on your local host using the /private prefix. More info here. [/UPDATE]
All the current answers talk about Boot2docker. Since that's now deprecated in favor of docker-machine, this works for docker-machine:
First, ssh into the docker-machine vm and create the folder we'll be mapping to:
docker-machine ssh $MACHINE_NAME "sudo mkdir -p \"$VOL_DIR\""
Now share the folder to VirtualBox:
WORKDIR=$(basename "$VOL_DIR")
vboxmanage sharedfolder add "$MACHINE_NAME" --name "$WORKDIR" --hostpath "$VOL_DIR" --transient
Finally, ssh into the docker-machine again and mount the folder we just shared:
docker-machine ssh $MACHINE_NAME "sudo mount -t vboxsf -o uid=\"$U\",gid=\"$G\" \"$WORKDIR\" \"$VOL_DIR\""
Note: for UID and GID you can basically use whatever integers as long as they're not already taken.
This is tested as of docker-machine 0.4.1 and docker 1.8.3 on OS X El Capitan.
Using command-line :
docker run -it --name <WHATEVER> -p <LOCAL_PORT>:<CONTAINER_PORT> -v <LOCAL_PATH>:<CONTAINER_PATH> -d <IMAGE>:<TAG>
Using docker-compose.yaml :
version: '2'
services:
  cms:
    image: <IMAGE>:<TAG>
    ports:
      - <LOCAL_PORT>:<CONTAINER_PORT>
    volumes:
      - <LOCAL_PATH>:<CONTAINER_PATH>
Assume :
IMAGE: k3_s3
TAG: latest
LOCAL_PORT: 8080
CONTAINER_PORT: 8080
LOCAL_PATH: /volume-to-mount
CONTAINER_PATH: /mnt
Examples :
First create /volume-to-mount. (Skip if it exists.)
$ mkdir -p /volume-to-mount
docker-compose -f docker-compose.yaml up -d
version: '2'
services:
  cms:
    image: ghost-cms:latest
    ports:
      - 8080:8080
    volumes:
      - /volume-to-mount:/mnt
Verify your container :
docker exec -it CONTAINER_ID ls -la /mnt
docker run -v /host/directory:/container/directory -t IMAGE-NAME /bin/bash
docker run -v /root/shareData:/home/shareData -t kylemanna/openvpn /bin/bash
On my system I've corrected the answer from nhjk; it works flawlessly when you add the -t flag.
On Mac OS, to mount a folder /Users/<name>/projects/ on your mac at the root of your container:
docker run -it -v /Users/<name>/projects/:/projects <container_name> bash
ls /projects
If the host is Windows 10, then use backslashes for the host path instead of forward slashes:
docker run -it -p 12001:80 -v c:\Users\C\Desktop\dockerStorage:/root/sketches <your-image-here>
Make sure the host drive is shared (C in this case). In my case I got a prompt asking for share permission after running the command above.
For Windows 10 users, it is important to have the mount point inside the C:/Users/ directory. I tried for hours to get this to work. This post helped, but it was not obvious at first, as the solution for Windows 10 is a comment on an accepted answer. This is how I did it:
docker run -it -p 12001:80 -v //c/Users/C/Desktop/dockerStorage:/root/sketches \
<your-image-here> /bin/bash
Then to test it, you can do echo TEST > hostTest.txt inside your image. You should be able to see this new file in the local host folder at C:/Users/C/Desktop/dockerStorage/.
As of Docker 18-CE, you can use docker run -v /src/path:/container/path to do 2-way binding of a host folder.
There is a major catch here though if you're working with Windows 10/WSL and have Docker-CE for Windows as your host and then docker-ce client tools in WSL. WSL knows about the entire / filesystem while your Windows host only knows about your drives. Inside WSL, you can use /mnt/c/projectpath, but if you try to docker run -v ${PWD}:/projectpath, you will find in the host that /projectpath/ is empty because on the host /mnt means nothing.
If you work from /c/projectpath, though, and THEN do docker run -v ${PWD}:/projectpath, you WILL find that in the container /projectpath reflects /c/projectpath in real time. There are no errors or any other way to detect this issue other than seeing empty mounts inside your guest.
You must also be sure to "share the drive" in the Docker for Windows settings.
Jul 2015 update - boot2docker now supports direct mounting. You can use -v /var/logs/on/host:/var/logs/in/container directly from your Mac prompt, without double mounting
I've been having the same issue.
My command line looked like this:
docker run --rm -i --name $NAME -v `pwd`:/sources:z $NAME
The problem was with `pwd`. So I changed that to $(pwd):
docker run --rm -i --name $NAME -v $(pwd):/sources:z $NAME
How do I link the main_folder to the test_container folder present inside the docker container?
Your command below is correct, unless you're on a Mac using boot2docker (depending on future updates), in which case you may find the folder empty. See mattes' answer for a tutorial on correcting this.
docker run -d -v /Users/kishore/main_folder:/test_container k3_s3:latest
I need to make this run automatically, how to do that without really
using the run -d -v command.
You can't really get away from using these commands; they are intrinsic to the way Docker works. You would be best off putting them into a shell script to save writing them out repeatedly.
What happens if boot2docker crashes? Where are the docker files stored?
If you manage to use the -v arg and reference your host machine then the files will be safe on your host.
If you've used 'docker build -t myimage .' with a Dockerfile then your files will be baked into the image.
Your Docker images, I believe, are stored in the boot2docker VM. I found this out when my images disappeared after I deleted the VM from VirtualBox. (Note: I don't know how VirtualBox works, so the images might still be hidden somewhere else, just not visible to Docker.)
Had the same problem. Found this in the docker documentation:
Note: The host directory is, by its nature, host-dependent. For this reason, you can’t mount a host directory from Dockerfile, the VOLUME instruction does not support passing a host-dir, because built images should be portable. A host directory wouldn’t be available on all potential hosts.
So, mounting a read/write host directory is only possible with the -v parameter in the docker run command, as the other answers point out correctly.
I found that any directory lying under a system directory like /var, /usr, /etc could not be mounted in the container.
The directory should be in user space. The -v switch instructs the docker daemon to mount a local directory into the container, for example:
docker run -t -d -v /{local}/{path}:/{container}/{path} --name {container_name} {imagename}
Here's an example with a Windows path:
docker run -P -it --name organizr --mount src="/c/Users/MyUserName/AppData/Roaming/DockerConfigs/Organizr",dst=/config,type=bind organizrtools/organizr-v2:latest
As a side note, during all of this hair pulling, having to wrestle with figuring out, and retyping paths over and over and over again, I decided to whip up a small AutoHotkey script to convert a Windows path to a "Docker Windows" formatted path. This way all I have to do is copy any Windows path that I want to use as a mount point to the clipboard, press the "Apps Key" on the keyboard, and it'll format it into a path format that Docker appreciates.
For example:
Copy this to your clipboard:
C:\Users\My PC\AppData\Roaming\DockerConfigs\Organizr
press the Apps Key while the cursor is where you want it on the command-line, and it'll paste this there:
"/c/Users/My PC/AppData/Roaming/DockerConfigs/Organizr"
Saves a lot of time for me. Here it is for anyone else who may find it useful.
; --------------------------------------------------------------------------------------------------------------
;
; Docker Utility: Convert a Windows Formatted Path to a Docker Formatted Path
; Useful for (example) when mounting Windows volumes via the command-line.
;
; By: J. Scott Elblein
; Version: 1.0
; Date: 2/5/2019
;
; Usage: Cut or Copy the Windows formatted path to the clipboard, press the AppsKey on your keyboard
; (usually right next to the Windows Key), it'll format it into a 'docker path' and enter it
; into the active window. Easy example usage would be to copy your intended volume path via
; Explorer, place the cursor after the "-v" in your Docker command, press the Apps Key and
; then it'll place the formatted path onto the line for you.
;
; TODO:: I may or may not add anything to this depending on needs. Some ideas are:
;
; - Add a tray menu with the ability to do some things, like just replace the unformatted path
; on the clipboard with the formatted one rather than enter it automatically.
; - Add 'smarter' handling so that it first confirms that the clipboard text is even a path in
; the first place. (would need to be able to handle Win + Mac + Linux)
; - Add command-line handling so the script doesn't need to always be in the tray, you could
; just pass the Windows path to the script, have it format it, then paste and close.
; Also, could have it just check for a path on the clipboard upon script startup, if found
; do its job, then exit the script.
; - Add an 'all-in-one' action, to copy the selected Windows path, and then output the result.
; - Whatever else comes to mind.
;
; --------------------------------------------------------------------------------------------------------------
#NoEnv
SendMode Input
SetWorkingDir %A_ScriptDir%
AppsKey::
; Create a new var, store the current clipboard contents (should be a Windows path)
NewStr := Clipboard
; Rip out the first 2 chars (should be a drive letter and colon) & convert the letter to lowercase
; NOTE: I could probably replace the following 3 lines with a regexreplace, but atm I'm lazy and in a rush.
tmpVar := SubStr(NewStr, 1, 2)
StringLower, tmpVar, tmpVar
; Replace the uppercase drive letter and colon with the lowercase drive letter and colon
NewStr := StrReplace(NewStr, SubStr(NewStr, 1, 2), tmpVar)
; Replace backslashes with forward slashes
NewStr := StrReplace(NewStr, "\", "/")
; Replace all colons with nothing
NewStr := StrReplace(NewStr, ":", "")
; Remove the last char if it's a trailing forward slash
NewStr := RegExReplace(NewStr, "/$")
; Append a leading forward slash if not already there
if (RegExMatch(NewStr, "^/") = 0)
    NewStr := "/" . NewStr
; If there are any spaces in the path ... wrap in double quotes
if (RegExMatch(NewStr, " ") > 0)
    NewStr := """" . NewStr . """"
; Send the result to the active window
SendInput % NewStr
return
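For those without AutoHotkey (e.g. in Git Bash or WSL), roughly the same conversion can be sketched as a small shell function. This is an illustration only; `winpath_to_docker` is a hypothetical helper name, not part of Docker:

```shell
# Sketch: convert a Windows path to the "Docker Windows" form,
# mirroring the AHK script above: lowercase the drive letter, drop
# the colon, flip backslashes to forward slashes, trim a trailing slash.
# Unlike the AHK script, quoting paths with spaces is left to the caller.
winpath_to_docker() {
  p="$1"
  drive=$(printf '%s' "$p" | cut -c1 | tr '[:upper:]' '[:lower:]')  # drive letter, lowercased
  rest=$(printf '%s' "$p" | cut -c3- | tr '\\' '/')                 # drop "X:", flip slashes
  rest="${rest%/}"                                                  # strip trailing slash, if any
  printf '/%s%s\n' "$drive" "$rest"
}
```

For example, `winpath_to_docker 'C:\Users\My PC\AppData\Roaming\DockerConfigs\Organizr'` prints `/c/Users/My PC/AppData/Roaming/DockerConfigs/Organizr`.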
To get this working in Windows 10 I had to open the Docker Settings window from the system tray and go to the Shared Drives section.
I then checked the box next to C. Docker asked for my desktop credentials to gain authorisation to write to my Users folder.
Then I ran the docker container following examples above and also the example on that settings page, attaching to /data in the container.
docker run -v c:/Users/<user.name>/Desktop/dockerStorage:/data -other -options
boot2docker together with VirtualBox Guest Additions
How to mount /Users into boot2docker
https://medium.com/boot2docker-lightweight-linux-for-docker/boot2docker-together-with-virtualbox-guest-additions-da1e3ab2465c
tl;dr Build your own custom boot2docker.iso with VirtualBox Guest Additions (see link) or download http://static.dockerfiles.io/boot2docker-v1.0.1-virtualbox-guest-additions-v4.3.12.iso and save it to ~/.boot2docker/boot2docker.iso.
Note that in Windows you'll have to provide the absolute path.
Host: Windows 10
Container: Tensorflow Notebook
The following worked for me.
docker run -t -i -v D:/projects/:/home/chankeypathak/work -p 8888:8888 jupyter/tensorflow-notebook /bin/bash
I had the same issue; I was trying to mount the C:\Users\ folder in Docker.
This is how I did it from the Docker Toolbox command line:
$ docker run -it --name <containername> -v /c/Users:/myVolData <imagename>
You can also do this with Portainer web application for a different visual experience.
First pull the Portainer image:
docker pull portainer/portainer
Then create a volume for Portainer:
docker volume create portainer_data
Also create a Portainer container:
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
You will be able to access the web app with your browser at this URL: "http://localhost:9000". At the first login, you will be prompted to set your Portainer admin credentials.
In the web app, follow these menus and buttons: (Container > Add container > Fill settings > Deploy Container)
I had trouble creating a "mount" volume with Portainer, and I realized I had to select "bind" when creating my container's volume. With those volume binding settings, the container was created with a mounted volume bound to the host.
P.S.: I'm using Docker 19.03.5 and Portainer 1.23.1.
I had the same requirement to mount a host directory from a container, and I used the volume mount command. But during testing I noticed that it creates files inside the container too; after some digging, I found that they are just symbolic links, and the actual file system used is from the host machine.
Quoting from the Official Website:
Make sure you don’t have any previous getting-started containers running.
Run the following command from the app directory.
x86-64 Mac or Linux device:
docker run -dp 3000:3000 \
-w /app -v "$(pwd):/app" \
node:12-alpine \
sh -c "yarn install && yarn run dev"
Windows (PowerShell):
docker run -dp 3000:3000 `
-w /app -v "$(pwd):/app" `
node:12-alpine `
sh -c "yarn install && yarn run dev"
Apple silicon Mac or another ARM64 device:
docker run -dp 3000:3000 \
-w /app -v "$(pwd):/app" \
node:12-alpine \
sh -c "apk add --no-cache python2 g++ make && yarn install && yarn run dev"
Explaining:
-dp 3000:3000 - same as before. Run in detached (background) mode and create a port mapping
-w /app - sets the “working directory” or the current directory that the command will run from
-v "$(pwd):/app" - bind mount the current directory from the host into the /app directory in the container
node:12-alpine - the image to use. Note that this is the base image for our app from the Dockerfile.
sh -c "yarn install && yarn run dev" - the command.
We’re starting a shell using sh (alpine doesn’t have bash) and running yarn install to install all dependencies and then running yarn run dev. If we look in the package.json, we’ll see that the dev script is starting nodemon.