Docker: Mount volume from Windows host

I use Docker 1.12 on Windows, since I can't use the Hyper-V feature required by the newer "native" version - so I have my Quickstart Terminal and communicate with the Docker host via the underlying VirtualBox VM.
Now I have a problem: I need to mount a local folder into a container. This worked from within the docker-machine by adding
--volume="`pwd`:/root/data"
to the docker run command, but it does not work when I launch the same command from my Windows Quickstart Terminal (even though the pwd command works correctly there).
I tried to find the Windows-specific syntax for the directory and tested several format combinations, but no luck. Can anyone help me out on how to correctly specify a Windows folder (e.g. C:\Users\alexander.ruehl) for the volume parameter?

You can use a relative path for your volume: --volume="./mydata:/root/data"
Also make sure that you have given Docker read/write permission for it.
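With Docker Toolbox, only C:\Users is shared into the VirtualBox VM by default, and the host path is written in the /c/... form (the extra leading slash stops the MSYS terminal from rewriting the path). A hedged example, assuming the data sits under the user folder from the question and using a hypothetical image name (my-image):
docker run --rm -it --volume="//c/Users/alexander.ruehl/data:/root/data" my-image ls /root/data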

Related

How to change the location of docker installation? [duplicate]

I've just upgraded to Windows 10 Home May 2020, activated WSL2, and installed Docker Desktop.
WSL2 must be installed on my system disk, which is a small SSD. I don't want to fill it with Docker images. How do I change the Docker images path? I'd like to use a path on my big Windows filesystem.
The image location is somewhat confusing. I believe it is in /mnt/wsl/docker-desktop-data/.
How do I change the directory of Docker images inside WSL2? Can I change the Docker configuration to select a path inside /mnt/d, or mount a path from /mnt/d over the Docker data dirs?
The WSL 2 docker-desktop-data vm disk image would normally reside in:
%USERPROFILE%\AppData\Local\Docker\wsl\data\ext4.vhdx
Follow the steps below to relocate it to another drive/directory, with all existing Docker data preserved (tested against Docker Desktop 2.3.0.4 (46911), and it continued to work after updating to 3.1.0 (51484)):
First, shut down Docker Desktop by right-clicking the Docker Desktop tray icon and selecting Quit Docker Desktop.
Then, open your command prompt:
wsl --list -v
You should see output like the following; make sure the STATE for both distributions is Stopped (if not, run wsl --shutdown):
NAME                   STATE           VERSION
* docker-desktop       Stopped         2
  docker-desktop-data  Stopped         2
Export docker-desktop-data into a file
wsl --export docker-desktop-data "D:\Docker\wsl\data\docker-desktop-data.tar"
Unregister docker-desktop-data from WSL. Note that after this your ext4.vhdx file is automatically removed (so back it up first if you have important existing images/containers):
wsl --unregister docker-desktop-data
Import docker-desktop-data back into WSL; the ext4.vhdx will now reside on the new drive/directory:
wsl --import docker-desktop-data "D:\Docker\wsl\data" "D:\Docker\wsl\data\docker-desktop-data.tar" --version 2
Start Docker Desktop again and it should work.
You may delete the D:\Docker\wsl\data\docker-desktop-data.tar file (NOT the ext4.vhdx file) once you have verified that everything looks good.
Stop Docker Desktop
Relocate Docker folder from C:\Users\xxx\AppData\Local\Docker to new path
Make sure C:\Users\xxx\AppData\Local\Docker is no longer there
Open a cmd in administrator mode
Run the following command in the cmd window to create a junction with the appropriate from and to paths:
mklink /j "C:\Users\xxx\AppData\Local\Docker" "path to where you relocated your docker folder"
Restart Docker Desktop
Edit: re-registering docker-desktop would now set the default docker-desktop-data location back to the C drive, so we should only unregister docker-desktop-data, as in the accepted answer.
You can do
wsl --unregister docker-desktop-data
wsl --import docker-desktop-data D:\wsl\docker-desktop-data "C:\Program Files\Docker\Docker\resources\wsl\wsl-data.tar" --version=2
The tar file is the one shipped with the Docker installation, and the path before it is your new destination.
This always works, while move-wsl or LxRunOffline didn't work for me on the fast ring. Sometimes you have to uninstall/reinstall Docker first.
Extending @Attila Badi's answer: give the same treatment to the C:\ProgramData\Docker folder, which seems to be used for WSL / Windows containers. Even after moving the Docker data folders, you would still be left with a boot-drive ProgramData\Docker folder of massive proportions - especially if you are unable or unwilling to clean the images. You cannot migrate it or move it once Docker is installed. Using the Docker engine's advanced settings works in Linux container mode but not in Windows mode (and vice versa), and it has trouble starting.
Steps I followed:
Uninstall Docker. I know... Make sure you have saved what you need.
Create the primary space-eating Docker folders in a location where you have a lot of space, e.g.:
D:\Data\Docker\ProgramData_Docker &
D:\Data\Docker\AppData_Local_Docker
Create linked folders, by running the below in a command window in administrator mode:
mklink /j "C:\Users\xxx\AppData\Local\Docker" "D:\Data\Docker\ProgramData_Docker"
mklink /j "C:\ProgramData\Docker" "D:\Data\Docker\AppData_Local_Docker"
Install Docker.
You should be able to merrily pull Windows Server images without clogging up your boot drive.
UPDATE:
Trying to symlink the C:\ProgramData\Docker folder may result in a security error, depending on the originally installed version.
The release notes for 4.13.0 refer to this feature, which may be a possible workaround (thanks to @bhagerty and @Oly for the trail):
start /w "" "Docker Desktop Installer.exe" install --installation-dir=G:\Docker
(Source: ungureanuovidiu # https://forums.docker.com/t/docker-installation-directory/32773/17 )
For me, Docker wouldn't start with a junction.
So I used a plain directory symbolic link instead:
Docker stopped.
Folder "wsl" moved to another location on drive B:.
RUben@AD-RUBEN C:\Users\RUben\AppData\Local\Docker
$ mklink /D wsl "B:\dev\wsl"
symbolic link created for wsl <<===>> B:\dev\wsl
Containers and images are ready to use.
A nice tool:
DDoSolitary/LxRunOffline: A full-featured utility for managing Windows Subsystem for Linux (WSL)
https://github.com/DDoSolitary/LxRunOffline
LxRunOffline.exe move Move a distribution to a new directory.
Options:
-n arg Name of the distribution
-d arg The directory to move the distribution to.
for example:
quit docker desktop, then:
wsl --shutdown
LxRunOffline.exe move -n docker-desktop-data -d D:\vm\dockerdesktop\wsl\data
I found this tool from pxlrbt on GitHub. It uses the standard wsl import/export and is pretty safe. I just moved my docker-desktop-data distro to a different drive and it works well.
The best option is to update the registry. Follow the steps below:
Shut down WSL with the command wsl --shutdown.
Move the entire %USERPROFILE%\AppData\Local\Docker directory to a different drive, for example D:\Docker.
Go to the Registry Editor location Computer\HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Lxss.
Find the registry entry whose BasePath is set to %USERPROFILE%\AppData\Local\Docker\wsl\data. Update it to D:\Docker\wsl\data.
Find another registry entry whose BasePath is set to %USERPROFILE%\AppData\Local\Docker\wsl\distro. Update it to D:\Docker\wsl\distro.
Restart wsl using: wsl -d Ubuntu.
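A hedged sketch of the same edit from a command prompt (the GUID key name is a placeholder; pick the key whose BasePath the query shows pointing at the old Docker location):
reg query "HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Lxss" /s /v BasePath
reg add "HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Lxss\{your-distro-guid}" /v BasePath /t REG_SZ /d "D:\Docker\wsl\data" /f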
On Windows 10 Home, Docker Desktop creates the VM under the C:\Users\xxx\AppData\Local\Docker directory, and it is this VM that contains the downloaded Docker images. If you want to change the VM location from C: to a different directory, you can do so by creating a junction on Windows (prior to the Docker Desktop installation) using a command like the one below:
mklink /j "C:\Users\xxx\AppData\Local\Docker" "D:\Users\xxx\AppData\Local\Docker"
Note that prior to executing the command the target directory structure should already exist, while the C:\Users\xxx\AppData\Local\Docker directory should be deleted if it already exists, otherwise the command could fail. Now install Docker Desktop on Windows 10 Home and voila, you can see things inside the "D:\Users\xxx\AppData\Local\Docker" directory, namely the Docker VM hard disk image file that is going to contain all the downloaded Docker images.
If you are using a small SSD, you may also want to relocate the WSL swap file location:
https://learn.microsoft.com/en-us/windows/wsl/wsl-config
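A hedged sketch of the relevant %UserProfile%\.wslconfig entries (the size and path here are assumptions; see the linked docs for the full option list):
[wsl2]
swap=8GB
swapFile=D:\\wsl\\swap.vhdx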

Cannot create file with Jupyter Notebook running in Docker in Windows 10

I have installed Docker Toolbox for Windows v18.09 on Windows 10 Version 10.0.19041 Build 19041 and am trying to run a Docker container for Jupyter Notebook with PySpark.
I am using Windows PowerShell to execute docker commands.
docker run hello-world works fine, so I can assume that Docker has been installed correctly. In fact, I had to go down to Toolbox v18.09 before I could get hello-world to work.
I use the following command to run the PySpark container:
docker run -it --rm -p 8888:8888 --volume=//C/Users/prith/pydev://home/jovyan/work jupyter/pyspark-notebook
I run this from the C:/Users/prith/pydev directory, which is mapped to the work directory of the container. The double slash (//) is required because I am working on Windows. The notebook shows up at http://192.168.99.100:8888 as expected and I can log in with the token.
The problem starts when I try to create a new notebook or even a text file: I get a permission denied error. Evidently the container cannot write to 'some' directory. I have used the Windows filesystem properties to give Everyone all privileges on this particular directory and have also run PowerShell in Administrator mode (to simulate Ubuntu sudo), but nothing works.
Interestingly, I can write into the directory above the work directory in the container, but then I cannot access those files from Windows, because only /home/jovyan/work is mapped to my local Windows directory.
What do I want? I want to create Jupyter notebooks in the container and save them in Windows.
I know all this works like a charm on Linux/Ubuntu, but unfortunately I am stuck with Windows 10. Please help.
It looks like you forgot to add the directory you're trying to mount under FILE SHARING.
Right-click on the Docker icon (in the system tray) -> Settings -> Resources -> FILE SHARING.
Then add your local directory.
Finally, if it still doesn't work, try to mount the volume with --volume="C:\Users\prith\pydev":/home/jovyan/work
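Putting that together with the original command, a hedged sketch of the full docker run as this answer suggests (with Docker Toolbox you may still need the //c/Users/... path form shown in the next answer):
docker run -it --rm -p 8888:8888 --volume="C:\Users\prith\pydev":/home/jovyan/work jupyter/pyspark-notebook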
This command seems to work:
docker run -it --rm -p 8888:8888 --volume='/c/Users/Public/PyDev'://home/jovyan/work jupyter/pyspark-notebook start-notebook.sh --NotebookApp.token=''

the command "docker-machine ls " list nothing on my mac

I installed the latest stable Docker for Mac and started Docker directly, without a VirtualBox. I know that it must have started a virtual machine, so I used "docker-machine ls" to find the default machine, but it lists nothing. How can I find the virtual machine? My OS version is 10.10.5.
PS:
In fact, I didn't create any virtual machines, but I do run my spring-boot app on the "alpine-oraclejdk8" image, so does that mean I am actually using Docker? The reason I want to find the virtual machine is that I used "nsenter" to enter the container to debug my app's log, but it doesn't work (the author of "nsenter" said that I need to enter the virtual machine first). So my confusion is: how is Docker running if I can't find the virtual machine on my Mac?
Docker for Mac does not use docker-machine. The app that runs and gives you the little whale icon in the top menu bar runs its own virtual machine. This virtual machine uses HyperKit, a project that uses xhyve, which is a port of bhyve to the macOS Darwin kernel.
This will not create any entries that make docker-machine aware of the VM.
Rather than using nsenter to enter your container, you should use the docker exec command instead. The advantage of docker exec is that it works without having to first ssh to wherever Docker is running.
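For example, a hedged sketch (the container name is a placeholder for whatever docker ps shows):
docker ps
docker exec -it <container_name> sh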
Because you need to create it.
Run the command
docker-machine create vm1
And you'll have your machine.
To redirect your Docker client to that specific machine, use this command:
eval $(docker-machine env vm1)
where 'vm1' is the same name that you used to create the machine. You can have a number of docker machines running at the same time, using various backends like VirtualBox or AWS.
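A hedged example of choosing the backend explicitly (the machine names here are just examples; the amazonec2 driver additionally needs AWS credentials configured):
docker-machine create --driver virtualbox vm1
docker-machine create --driver amazonec2 aws-vm1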

Setting up docker on the mac: Cannot connect via localhost

I'm just setting up Docker on my Mac. The installation worked and I got some containers running (following the getting-started guide), but now I want to connect to localhost with my browser to view a web app.
For that I'm following this guide: https://docs.docker.com/engine/userguide/containers/usingdocker/
In the last section it is said that you simply point your browser to localhost:XXXXX, where XXXXX is the port that you found out using the command
docker ps -l
First problem: nothing happens here. The browser shows an empty page (ERR_CONNECTION_REFUSED).
Furthermore, the guide explains that on a Mac you can check your IP address via the command:
docker-machine ip your_vm_name
Here the second problem appears: this command results in an error message:
Host does not exist: "your_vm_name"
So my questions are:
How to set up the virtual machine (or "your_vm_name" respectively)?
Does it have to do anything with the vhosts file on my Mac OS?
Is there maybe a conflict with MAMP (which I'm also using sometimes)?
Thanks in advance!
And thanks to GianArb for the very fast answer! That solution works as well.
Just to contribute to the community: I found out by myself that the solution was too simple to be true.
Instead of your-vm-name, use default (obviously the default host that is set up by Docker), so I just used:
docker-machine ip default
and then I got the right IP.
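Putting the two commands together, a hedged example (the host port shown is just an assumption; use whatever docker ps -l actually reports):
docker-machine ip default    # e.g. 192.168.99.100
docker ps -l                 # note the published port, e.g. 0.0.0.0:32768->80/tcp
Then browse to http://192.168.99.100:32768.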
Hello, the process to start using Docker on a Mac with docker-machine is:
Create a new docker machine on VirtualBox. You can use a name like "your_vm_name" or just "default"; with "default" you don't even have to pass the name, because it is the keyword docker-machine falls back to when you don't specify anything.
The question here is why docker-machine ip your_vm_name doesn't return the right IP. Can you paste the result of this command:
echo $DOCKER_HOST
Usually it's 192.168.99.100
Thanks a lot.
Current situation for macOS:
If there is no real need for a VirtualBox machine, you can just remove it.
Docker can start Linux containers on macOS without any VirtualBox machines.
Without any VirtualBox machines, all exposed ports are available on localhost:*.
Remove docker machine
docker-machine ls
docker-machine stop default
docker-machine rm default
Make sure that you don't have the command eval $(docker-machine env ...) in your ~/.bashrc or ~/.zshrc.
Otherwise nothing will work; you will see the error Error: No machine name(s) specified and no "default" machine exists.
With the default configuration there is no need for any env variables like $DOCKER_*.
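A hedged way to check for and clear them in the current shell (these are the variables that docker-machine env normally sets):
env | grep DOCKER
unset DOCKER_HOST DOCKER_TLS_VERIFY DOCKER_CERT_PATH DOCKER_MACHINE_NAME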
In my case, only the Docker installed by downloading the .dmg image from the official site works without any problems. Versions installed via brew didn't work out of the box.
PS: tested on macOS 10.13 and 10.14.

Developing on Windows -> Deploying on a Virtual Machine?

Is there an easy way to integrate with VirtualBox such that I could develop under the host (Windows) and deploy and run scripts via a mounted folder in a guest Linux system?
I'm looking to develop for Linux under Windows, kind of.
You can use VirtualBox's Shared Folders feature to let your Ubuntu virtual machine mount a directory of your Windows host. However, you're likely to run into some impedance mismatches, like different line endings. I hope that is the least of your worries.
You might want to check out Vagrant: http://vagrantup.com/
It provides a nice and easy way to create a VM from a template in VirtualBox, and it will automatically mount the project folder in the guest VM. The config can also easily be included in your project so others can use it.
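A hedged quick start, assuming Vagrant and VirtualBox are installed (the box name is just an example):
vagrant init hashicorp/bionic64
vagrant up
vagrant ssh
vagrant up boots the VM and mounts the project folder at /vagrant inside the guest, so you can run your scripts from there.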
I develop in PHP, using Debian as the guest OS and Win7 as the host OS.
You can make the shared folder mount automatically like this (see the sketch after these steps):
Create a new file in /etc/init.d/ named mnt_win_sf, then edit it.
It must have the same info header as /etc/init.d/apache2, and it needs just one line of command:
mount -t vboxsf share_folder_name mount_point
We also need to execute this script before apache2, so edit /etc/init.d/apache2 and add mnt_win_sf to its Required-Start line.
Update them with:
sudo update-rc.d mnt_win_sf defaults
sudo update-rc.d apache2 defaults
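A hedged sketch of what the mnt_win_sf script could look like (the LSB header fields mirror a typical Debian init script; share_folder_name and mount_point are the placeholders from above):
#!/bin/sh
### BEGIN INIT INFO
# Provides:          mnt_win_sf
# Required-Start:    $local_fs
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Mount the VirtualBox shared folder
### END INIT INFO
mount -t vboxsf share_folder_name mount_point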
