Mounting local volumes to docker container - macos

Edited for clarity
I run the following docker run command:
docker run -d -P -v /users/username/app:/app contname
This results in the following when I inspect the container:
"HostConfig": {
"Binds": [
"/users/username/app:/app"
],
"Volumes": {
"/app": "/users/username/app",
"/app": "/mnt/sda1/var/lib/docker/vfs/dir/214a16c3678f93cbadb7e7b7d56b5f26b66a34c6d9bb89ade23b16e386a12212"
},
But when I ssh into the container, I can see that /app is empty.
Is my assumption correct that the files from my host machine should be there?

Boot2docker automatically mounts your home directory into the VirtualBox boot2docker VM, not into the container. You still need to add -v /users/username/app:/app to your docker run command.
When you add a VOLUME instruction to your Dockerfile, you are declaring a volume to be created by the container. That volume is stored under /var/lib/docker/volumes on the host VM, and such volumes can only be shared by passing --volumes-from to the docker run command. When you pass the -v switch, you create a volume backed by a specific location on the host file system.
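To make the distinction concrete, here is a minimal sketch (the extra image names are placeholders, not taken from the question):
# Volume declared in the image (the Dockerfile contains: VOLUME /app).
# Docker creates it under /var/lib/docker/volumes in the VM; another container can share it with --volumes-from:
docker run -d --name appdata contname
docker run -d --volumes-from appdata some-other-image
# Bind mount with -v: the container's /app is backed by the given path inside the boot2docker VM:
docker run -d -P -v /users/username/app:/app contname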

Related

Haven't been able to bind mount a network drive on Docker for Windows

I'm trying to bind mount a network drive from my host (Windows 10) to a local folder in a container but, regrettably, I haven't been able to achieve it.
In Linux I've had no issues whatsoever.
Any help would be greatly appreciated :)
docker run -v G:/:mnt/shared ubuntu /bin/bash
For a local drive, use the following command:
docker run -v c:/your-directory/:/mnt/shared ubuntu /bin/bash
For a network drive, you need to create a Docker volume that points to the network share.
The command would be as follows:
docker volume create --driver local --opt type=cifs --opt device=//networkdrive-ip/Folder --opt o=user=yourusername,domain=yourdomain,password=yourpassword mydockervolume
docker run -v mydockervolume:/data alpine ls /data
Reference is here: How to Add Network Drive as Volume in Docker on Windows
I managed to do it in a different way.
Here is the answer: Docker add network drive as volume on windows

Case sensitive host volume mount in docker for windows

I am running a Linux docker container on Windows 10. I need my host to have access to the data that my container generates, and I need the data to persist if I update the container's image.
I created a folder on the host (on an NTFS-formatted drive) and, in the Docker settings, shared that drive with Docker. I then created the container with the host directory mounted (using the -v option on the docker run command).
The problem is that Docker creates a CIFS mount to my shared drive on the host, and the CIFS protocol does not appear to be case sensitive. I create two files:
/data/Test
/data/test
But only one file is generated. I have set up the kernel to support case-sensitive files; for example, if I mount the same folder inside Cygwin bash, I can create those two files without any problem. I think the problem lies in the CIFS implementation.
My current thoughts on solving this issue:
Use Cygwin to create an NFS server on the host and mount the NFS volume from within the Linux container. I am not sure how I can automate this process, though.
Create another Linux container running a Samba server and create a volume on that container:
docker run -d -v /data --name dbstore a-samba-server
Then use that volume in my container:
docker run -d --volumes-from dbstore --name my-container my-container-image
Then I would share /data from the Samba server and map that share on my host.
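As a rough sketch of that last mapping step (the share name, container address, and user are assumptions about how the Samba server would be configured):
REM on the Windows host: map the container's Samba share to a drive letter, prompting for the password
net use Z: \\<samba-container-ip>\data /user:smbuser *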
Both solutions seem quite cumbersome, and I would like to know if there is any way to solve this directly with the CIFS share that Docker natively creates.

Disk management (cleanup) in a stopped docker container

I'm running an Elasticsearch docker container on my virtual machine and recently got an Elasticsearch failure: the container just stopped. The reason is that my SSD is out of space.
I could easily clean up my indexes, but the real issue is that I cannot start the container to do so. It stops right after starting, with no chance to get in via the web UI or bash to free up space.
How can I clean up the disk of a stopped container that I can't start?
Assuming you're using the official elasticsearch image, the Elasticsearch data directory will be a volume (mind the VOLUME /usr/share/elasticsearch/data statement in that image's Dockerfile).
You can now start another container, mounting your original container's volumes using the --volumes-from option to perform whatever cleanup tasks you deem necessary:
docker run --rm -it \
--volumes-from=<original-elasticsearch-container> \
ubuntu:latest \
/bin/bash
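Once inside that throwaway container, the original container's data is available at its usual mount point, so you can check what is using space and delete what you no longer need. The exact layout under the data directory depends on the Elasticsearch version, so treat these paths as an illustration:
du -sh /usr/share/elasticsearch/data/nodes/0/indices/*
rm -rf /usr/share/elasticsearch/data/nodes/0/indices/<index-to-remove>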
If that should fail, you can also run docker inspect on your Elasticsearch container and find the volume's directory in the host filesystem (assuming you're using the default local volume driver). Look for a Mounts section in the JSON output:
"Mounts": [
{
"Name": "<volume-id>",
"Source": "/var/lib/docker/volumes/<volume-id>/_data",
"Destination": "/usr/share/elasticsearch/data",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
The "Source" property will describe the volume's location on your host filesystem. When the container is started, this directory is simply bindmounted into the container's mount namespace; any changes you make in this directory on the host will be reflected in the container when it is started.

How to access the docker VM (MobyLinux) filesystem from windows shell?

Is there a way to log into the host VM's shell, similar to how we can easily enter a running container's bash with:
docker exec -it <container-name> bash
I accidentally broke a crucial file in one container, so it can no longer start. Unfortunately, that container stored its data inside itself. The only solutions I found involve navigating to the Docker daemon's files on the host; however, I'm running the Docker VM on Windows and cannot access the files inside the VM (MobyLinuxVM).
I'm using Docker for Windows, version 1.12.3-beta30.1 (8711)
Hack your way in
Run a container with full root access to the MobyLinuxVM and no seccomp profile (so you can mount things):
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
https://forums.docker.com/t/how-can-i-ssh-into-the-betas-mobylinuxvm/10991/6
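With the VM's root filesystem bind-mounted at /host as above, you can then poke around in the daemon's files from that shell; for example (these are simply the paths the local volume driver normally uses, so treat them as assumptions):
ls /host/var/lib/docker/volumes      # volume data as stored inside the VM
ls /host/var/lib/docker/containers   # per-container configuration and logs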

Can't mount HOST folder into Amazon Docker Container?

I'm using an EC2 instance to run Docker. From my local machine (OS X), I use docker-machine to create containers and volumes. However, mounting a local folder into any container doesn't work.
docker create -v /data --name data-only-container ubuntu /bin/true
docker run -it --volumes-from data-only-container -v $(pwd)/data:/backup ubuntu bash
With the first command I create a data-only container, and with the second I get into a container that should have both the data-only container's volumes and the folder I'm trying to mount. However, when I access it, /backup is empty.
What am I doing wrong?
EDIT:
I'm trying to mount a host folder in order to restore backed-up data from my PC to the container. What would be a different approach in that case?
Should I try Flocker?
A host volume mounted with -v /path/to/dir:/container/mnt mounts a directory from the docker host inside the container. When you run this command on your OSX system, the $(pwd)/data will reference a directory on your local machine that doesn't exist on the docker host, the EC2 instance. If you log into your EC2 instance, you'll likely find the $(pwd)/data directory created there and empty.
If you want to mount folders from your OSX system into a docker container, you'll need to run Docker on the OSX system itself.
Edit: To answer the added question of how to move data up to your container in the cloud, there are often ways to move your data to the cloud provider itself, outside of Docker, and then include it directly inside the container. For a Docker-only approach, you can do something like:
tar -cC /source . | \
docker run --rm -i -v app-data:/target busybox \
/bin/sh -c "tar -xC /target"
This will upload your data with tar over a pipe into a named volume on your docker host. You can then include the named "app-data" volume in any other containers. If you need to do this multiple times with larger data sets, creating an rsync container would be more efficient.
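As a follow-up to the tar pipe above, attaching the populated named volume to another container is then just a matter of mounting it by name (the ubuntu image here is only an example):
docker run -it -v app-data:/backup ubuntu bash
# the uploaded files are now visible under /backup inside this container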
