I'm trying to bind mount a network drive from my host (Windows 10) to a local folder in a container but, regrettably, I haven't been able to achieve it.
In Linux I've had no issues whatsoever.
Any help would be greatly appreciated :)
docker run -v G:/:mnt/shared ubuntu /bin/bash
For a local drive, use the following command:
docker run -v c:/your-directory/:/mnt/shared ubuntu /bin/bash
For a network drive, you need to create a Docker volume pointing at that network drive.
The command would be as follows:
docker volume create --driver local --opt type=cifs --opt device=//networkdrive-ip/Folder --opt o=user=yourusername,domain=yourdomain,password=yourpassword mydockervolume
docker run -v mydockervolume:/data alpine ls /data
Reference is here: How to Add Network Drive as Volume in Docker on Windows
I managed to do it in a different way.
Here is the answer: Docker add network drive as volume on windows
I have Docker installed on my Windows OS. Here is the volumes field of my docker-compose.yml:
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
I just can't figure out how the /var/run/docker.sock:/var/run/docker.sock path works on Windows, since there is no /var/run/ anywhere in my Windows file system where I could find docker.sock. So how does this volume binding work at all?
The /var/run/docker.sock file on Docker for Mac and Docker for Windows (when running Linux containers) lives inside the VM that Docker uses to run those containers. Volume mounts happen from inside that VM to the containers running in it. This is also why you can end up with an empty directory if you volume-mount a directory that you have not shared with the embedded VM.
You cannot see this file directly from the Windows environment (at least not that I'm aware of), though you can mount it into a container and see it that way.
For more details on how this VM is created, you can see the LinuxKit project: https://github.com/linuxkit/linuxkit
We're working to create a standard "data science" image in Docker in order to help our team maintain a consistent environment. In order for this to be useful for us, we need the containers to have read/write access to our company's network. How can I mount a network drive to a docker container?
Here's what I've tried using the rocker/rstudio image from Docker Hub:
This works:
docker run -d -p 8787:8787 -v //c/users/{insert user}:/home/rstudio/foobar rocker/rstudio
This does not work (where P is the mapped location of the network drive):
docker run -d -p 8787:8787 -v //p:/home/rstudio/foobar rocker/rstudio
This also does not work:
docker run -d -p 8787:8787 -v //10.1.11.###/projects:/home/rstudio/foobar rocker/rstudio
Any suggestions?
I'm relatively new to Docker, so please let me know if I'm not being totally clear.
I know this is relatively old, but for the sake of others, here is what usually works for me. In our case we use a Windows file server, so we use cifs-utils to map the drive. I assume the instructions below can be adapted to NFS or anything else as well.
First, run the container in privileged mode so that you can mount remote folders inside the container (the --dns flag might not be required):
docker run --dns <company dns ip> -p 8000:80 --privileged -it <container name and tag>
Now (assuming CentOS with CIFS support and running as root in the container), hop into the container and run:
install cifs-utils if not installed yet
yum -y install cifs-utils
create the local dir to be mapped
mkdir /mnt/my-mounted-folder
prepare a file with username and credentials
echo "username=<username-with-access-to-shared-drive>" > ~/.smbcredentials
echo "password=<password>" >> ~/.smbcredentials
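Since this file holds a plaintext password, a slightly safer variant (a sketch, assuming you're running as root so ~ is /root) writes both lines at once and restricts the file's permissions:

```shell
# Write both credential lines in one step; the quoted heredoc keeps
# the <placeholders> literal until you substitute real values.
cat > ~/.smbcredentials <<'EOF'
username=<username-with-access-to-shared-drive>
password=<password>
EOF
# Restrict access: the file contains a plaintext password.
chmod 600 ~/.smbcredentials
```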
map the remote folder
mount <remote-shared-folder> <my-local-mounted-folder> -t cifs -o iocharset=utf8,credentials=/root/.smbcredentials,file_mode=0777,dir_mode=0777,uid=1000,gid=1000,cache=strict
now you should have access
hope this helps..
I'll share my solution. I have a Synology NAS whose shared folder uses the SMB protocol.
I managed to connect it in the following way. The most important thing was to specify version 1.0 (vers=1.0). It didn't work without it! I spent two days trying to solve this.
version: "3"
services:
  redis:
    image: redis
    restart: always
    container_name: 'redis'
    command: redis-server
    ports:
      - '6379:6379'
    environment:
      TZ: "Europe/Moscow"
  celery:
    build:
      context: .
      dockerfile: celery.dockerfile
    container_name: 'celery'
    command: celery --broker redis://redis:6379 --result-backend redis://redis:6379 --app worker.celery_worker worker --loglevel info
    privileged: true
    environment:
      TZ: "Europe/Moscow"
    volumes:
      - .:/code
      - nas:/mnt/nas
    links:
      - redis
    depends_on:
      - redis
volumes:
  nas:
    driver: local
    driver_opts:
      type: cifs
      o: username=user,password=pass,vers=1.0
      device: "//192.168.10.10/main"
I searched for a solution over the last few days and finally got one working.
I'm running the Docker container on an Ubuntu virtual machine and mapping a folder from another host on the same network that runs Windows 10. The operating system the container runs on shouldn't matter, though, because the mapping is done from the container itself, so this solution should work on any OS.
Let's code.
First you should create the volume
docker volume create \
  --driver local \
  --opt type=cifs \
  --opt device=//<network-device-ip-folder> \
  --opt o=user=<your-user>,password=<your-pw> \
  <volume-name>
And then you have to run a container from an image
docker run \
  --name <desired-container-name> \
  -v <volume-name>:/<path-inside-container> \
  <image-name>
After this, the container is running with the volume assigned to it, mapped to the network folder. If you create a file in either of these folders, it will automatically be replicated to the other.
In case someone wants to get this running from docker-compose, I'll leave this here:
services:
  <image-name>:
    build:
      context: .
    container_name: <desired-container-name>
    volumes:
      - <volume-name>:/<path-inside-container>
    ...
volumes:
  <volume-name>:
    driver: local
    driver_opts:
      type: cifs
      device: //<network-device-ip-folder>
      o: "user=<your-user>,password=<your-pw>"
Hope I can help
Adding to the solution by @Александр Рублев: the trick that solved this for me was reconfiguring the Synology NAS to accept the SMB version used by Docker. In my case I had to enable SMBv3.
I know this is old, but I found it when looking for something similar and see that it's still receiving comments from others who, like me, find it.
I have figured out how to get this to work for a similar situation that took me a while to figure out.
The answers here are missing some key information that I'll include, possibly because it wasn't available at the time.
The CIFS storage is, I believe, only for when you are connecting to a Windows system, as I do not believe it is used by Linux at all unless that system is emulating a Windows environment.
The same thing can be done with NFS, which is less secure but supported by almost everything.
You can create an NFS volume in a similar way to the CIFS one, just with a few changes. I'll list both so they can be seen side by side.
When using NFS on WSL2, you first need to install the NFS service into the Linux host OS. I believe CIFS requires a similar step, most likely the cifs-utils mentioned by @LevHaikin, but as I don't use it I'm not certain. In my case the host OS is Ubuntu, but you should be able to find your system's equivalent of the nfs-common (or cifs-utils, if that's correct) installation:
sudo apt update
sudo apt install nfs-common
That's it. That will install the service so NFS works on Docker (It took me forever to realize that was the problem since it doesn't seem to be mentioned as needed anywhere)
If using NFS, you need to have set NFS permissions for the NFS folder on the network device. In my case this is done at the parent folder, with the mount then pointing to a folder inside it; that's fine. (In my case, the NAS that is my server mounts to #IP#/volume1/folder. Within the NAS I never see the volume1 part in the directory structure, but the full path to the shared folder is shown on the settings page when I set the NFS permissions. I'm not including the volume1 part here, as your system will likely be different.) You want the FULL PATH after the IP (use the numeric IP, NOT the hostname), according to your NFS share, whatever it may be.
If using a CIFS device, the same is true, just for CIFS permissions.
The nolock option is often needed, but may not be on your system. It just disables the ability to "lock" files.
The soft option means that if the system cannot connect to the mount directory, it will not hang. If you need it to work only when the mount is there, you can change this to hard instead.
The rw option is for Read/Write; ro would be Read Only.
As I don't personally use the CIFS volume, the options set are just the ones in the examples I found; whether they are necessary for you will need to be looked into.
The username & password are required & must be included for CIFS.
uid & gid are Linux user & group settings & should be set, I believe, to what your container needs, as Windows doesn't use them to my knowledge.
file_mode=0777 & dir_mode=0777 are Linux Read/Write permissions, essentially like chmod 0777, giving anything that can access the file Read/Write/Execute permissions (more info: link #4 below) & this should also be for the Docker container, not the CIFS host.
noexec has to do with execution permissions, but I don't think it actually functions here; it was included in most examples I found. nosuid limits the ability to access files that belong to a specific user ID & shouldn't need to be removed unless you know you need it to be; as it's a protection, I'd recommend leaving it if possible. nosetuids means it won't set UID & GID for newly created files. nodev means no access to/creation of devices on the mount point. vers=1.0 is, I think, a fallback for compatibility; I personally would not include it unless there is a problem or it doesn't work without it.
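The file_mode/dir_mode values behave like ordinary chmod octal permissions, so you can sanity-check what a given mode means locally (a quick illustration on a scratch file, unrelated to any particular mount):

```shell
# Create a scratch file and give it the same 0777 mode used in the
# CIFS options above: read/write/execute for owner, group, and world.
demo_file=$(mktemp)
chmod 0777 "$demo_file"
stat -c '%a' "$demo_file"   # prints: 777
```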
In these examples I'm mounting //NET.WORK.DRIVE.IP/folder/on/addr/device to a volume named "my-docker-volume" in Read/Write mode. The CIFS volume is using the user supercool with password noboDyCanGue55
NFS from the CLI
docker volume create --driver local --opt type=nfs --opt o=addr=NET.WORK.DRIVE.IP,nolock,rw,soft --opt device=:/folder/on/addr/device my-docker-volume
CIFS from the CLI (may not work if Docker is installed on a system other than Windows; it will only connect to an IP on a Windows system)
docker volume create --driver local --opt type=cifs --opt o=user=supercool,password=noboDyCanGue55,rw --opt device=//NET.WORK.DRIVE.IP/folder/on/addr/device my-docker-volume
This can also be done within Docker Compose or Portainer.
When you do it there, you will need to add a volumes: section at the bottom of the compose file, with no indent, on the same level as services:
In this example I am mounting the volumes:
my-nfs-volume from //10.11.12.13/folder/on/NFS/device in Read/Write mode, mounted in the container at /nfs
my-cifs-volume from //10.11.12.14/folder/on/CIFS/device, authenticating as user supercool with password noboDyCanGue55, in Read/Write mode, mounted in the container at /cifs
version: '3'
services:
  great-container:
    image: imso/awesome/youknow:latest
    container_name: totally_awesome
    environment:
      - PUID=1000
      - PGID=1000
    ports:
      - 1234:5432
    volumes:
      - my-nfs-volume:/nfs
      - my-cifs-volume:/cifs
volumes:
  my-nfs-volume:
    name: my-nfs-volume
    driver_opts:
      type: "nfs"
      o: "addr=10.11.12.13,nolock,rw,soft"
      device: ":/folder/on/NFS/device"
  my-cifs-volume:
    driver_opts:
      type: "cifs"
      o: "username=supercool,password=noboDyCanGue55,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,noexec,nosuid,nosetuids,nodev,vers=1.0"
      device: "//10.11.12.14/folder/on/CIFS/device/"
More details can be found here:
https://docs.docker.com/engine/reference/commandline/volume_create/
https://www.thegeekdiary.com/common-nfs-mount-options-in-linux/
https://web.mit.edu/rhel-doc/5/RHEL-5-manual/Deployment_Guide-en-US/s1-nfs-client-config-options.html
https://www.maketecheasier.com/file-permissions-what-does-chmod-777-means/
I'm using an EC2 instance to run Docker. From my local machine running OSX, I'm using docker-machine to create containers and volumes. However, when I try to mount a local folder into any container, it doesn't work.
docker create -v /data --name data-only-container ubuntu /bin/true
docker run -it --volumes-from data-only-container -v $(pwd)/data:/backup ubuntu bash
With the first command I create a data-only container, and with the second I start a container that should have the data-only container's volumes plus the one I'm trying to mount. However, when I access it, the /backup folder is empty.
What am I doing wrong?
EDIT:
I'm trying to mount a host folder in order to restore backed-up data from my PC into the container. In that case, what would be a different approach?
Shall I try to use Flocker?
A host volume mounted with -v /path/to/dir:/container/mnt mounts a directory from the docker host inside the container. When you run this command on your OSX system, the $(pwd)/data will reference a directory on your local machine that doesn't exist on the docker host, the EC2 instance. If you log into your EC2 instance, you'll likely find the $(pwd)/data directory created there and empty.
If you want to mount folders from your OSX system into a docker container, you'll need to run Docker on the OSX system itself.
Edit: To answer the added question of how to move data up to your container in the cloud, there are often ways to move your data to the cloud provider itself, outside of docker, and then include it directly inside the container. To do a docker only approach, you can do something like:
tar -cC /source . | \
docker run --rm -i -v app-data:/target busybox \
/bin/sh -c "tar -xC /target"
This will upload your data with tar over a pipe into a named volume on your docker host. You can then include the named "app-data" volume in any other containers. If you need to do this multiple times with larger data sets, creating an rsync container would be more efficient.
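The tar-over-a-pipe copy can be tried locally without Docker to see the mechanics (a sketch using temporary directories in place of the source directory and the named volume):

```shell
# Pack the source directory to stdout and unpack it into the target,
# mirroring the docker-run pipe above but purely on the local host.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "hello" > "$src/file.txt"
tar -cC "$src" . | tar -xC "$dst"
cat "$dst/file.txt"   # prints: hello
```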
Background
CoreOS-Kubernetes has a project for multi-node on Vagrant:
https://github.com/coreos/coreos-kubernetes
https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
They have a custom cloud config for the etcd node, but none for the worker node. For those, the Vagrant file references shell scripts, which contain some cloud config but mostly Kubernetes yaml:
https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/worker-install.sh
Objective
I'm trying to mount a NFS directory onto the coreOS worker nodes, for use in a Kubernetes pod. From what I read about Kubernetes in docs and tutorials, I want to mount on the node first as a persistent volume, like this on docker:
http://www.emergingafrican.com/2015/02/enabling-docker-volumes-and-kubernetes.html
I saw some posts that said mounting in the pod itself can be buggy, and want to avoid it by mounting on coreOS worker node first:
Kubernetes NFS volume mount fail with exit status 32
If mounting right in the pod is the standard way, just let me know and I'll do that.
Question
Are there options for customizing the cloud config for the worker node? I'm about to start hacking on that shell script, but thought I should check first. I looked through the docs but couldn't find any.
This is the coreOS cloud config I'm trying to add to the Vagrant file:
https://coreos.com/os/docs/latest/mounting-storage.html#mounting-nfs-exports
No NFS mount on coreOS is needed. Kubernetes will do it for you right in the pod:
http://kubernetes.io/v1.1/examples/nfs/README.html
Check out the nfs-busybox replication controller:
http://kubernetes.io/v1.1/examples/nfs/nfs-busybox-rc.yaml
I ran this and got it to write files to the server, which helped me debug the application. Note that even though the NFS mounts do not show up when you SSH into the Kubernetes node and run docker run -it <image> /bin/bash, they are mounted in the Kubernetes pod. That's where most of my misunderstanding occurred. I guess you have to add the mount parameters to the command when doing it manually.
Additionally, my application, gogs, stored its config files in /data. To get it to work, I first mounted the NFS share to /mnt. Then, as in the Kubernetes nfs-busybox example, I created a command to copy all folders in /data to /mnt. In the replication controller YAML, under the container node, I put a command:
command:
  - sh
  - -c
  - 'sleep 300; cp -a /data /mnt'
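For a periodic copy rather than a one-shot, the nfs-busybox example wraps the command in a full shell loop; adapted here as a sketch (the 300-second interval is arbitrary):

```yaml
command:
  - sh
  - -c
  # Repeat forever: wait, then sync /data into the NFS mount at /mnt.
  - 'while true; do sleep 300; cp -a /data /mnt; done'
```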
This gave me enough time to run the initial config of my app. Then I just waited until the sleep time was up and the files were copied over.
I then changed my mount point to /data, and now the app starts right where it left off when the pod restarts. Coupled with an external MySQL server, so far it looks stateless.