Populating a volume using a container does not work in Docker on Windows

I'm following the instructions on this official Docker page under "Populate a volume using a container" in an attempt to create a new volume populated with the existing files of a newly launched container. I ran the following command, expecting the existing files and folders under C:\Data in the container to be available in the volume:
docker run -it --name=test -v C:\Data dataimage/test1:version1
A new volume appears to be created successfully. However, navigating to the C:\Data folder in the container shows that it is completely empty. If I run the above command without the -v option instead, then I can see the original files at the same location.
Is this a fully supported feature in Docker on Windows? If so, could someone please shed some light on what I may be doing wrong?
I am using Docker Engine v19.03.8, and my host OS is Windows Server 2019.

Try this:
docker run -it --name=test -v '/c/Data:/data' dataimage/test1:version1
That should sync the C:\Data folder on your Windows host with the /data folder in the container. If /data isn't the folder name you want in the container, change it as needed.
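If what you're after is the "Populate a volume using a container" behavior from the docs (a named volume rather than a host bind mount), a sketch along these lines may be worth trying; the volume name data-vol and container name test2 are assumptions, and whether Windows containers pre-populate a named volume with the image's existing content can depend on the Docker Engine version:
docker volume create data-vol
docker run -it --name=test2 -v data-vol:C:\Data dataimage/test1:version1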

Related

Mount a volume in a host directory

I am running an application with a Dockerfile that I made.
At first I run my image with this command:
docker run -it -p 8501:8501 99aa9d3b7cc1
Everything works fine, but I was expecting to see a file in a specific folder of my app directory, which is the expected behaviour. When running with Docker, it seems the application cannot write to my host directory.
Then I tried to mount a volume with this command
docker 99aa9d3b7cc1:/output .
I got this error: docker: invalid reference format.
Which is the right way to persist the data that the application generates?
Use docker bind mounts.
e.g.
-v "$(pwd)"/volume:/output
The files created in /output in the container will be accessible in the volume folder relative to where the docker command has been run.
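Putting that together with the original run command, something like this should work (a sketch assuming the image ID from the question and a volume folder under the current directory):
docker run -it -p 8501:8501 -v "$(pwd)"/volume:/output 99aa9d3b7cc1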

docker mounting folder not working for Windows 10 with WSL2

I have tried this example out on a Windows 10 box which has WSL2 installed and integrated with the latest Docker version. After following the steps in the example and downloading the code in the Linux subsystem, I am able to build the image and run the container. The website is also reachable when I browse to it in a browser running on Windows 10. However, when I create a file or folder in the container, the change doesn't show up in the host filesystem, which in this case is the Linux subsystem. Similarly, a file created in the host Linux subsystem is not visible in the container's CLI when I use the ls command.
I ran this command to confirm that the folder has been mounted, where 44711fc95366 is my container ID:
docker inspect -f "{{ .Mounts }}" 44711fc95366
This gives an output like so:
[{bind /home/userlab1/my-proj/getting-started/app /usr/src/app true rprivate}]
If the mount point expressed above is correct, I should be able to create a file or folder in host subsystem on the path /home/userlab1/my-proj/getting-started/app and be able to see it at the /usr/src/app path in the container, correct?
The Docker image has been built and run from the Linux subsystem command line like so:
docker run -it -v ~/my-proj/getting-started/app:/usr/src/app -p 3001:3000 --name cntr-linux-todo img-todo:in-linux
While the application runs, files updated in the container are not reflected on the website served from the container, nor is a new file/folder created in the container visible in the host subsystem, and vice versa. What am I missing?
As you are using the Windows version of Docker, it cannot see files/folders from WSL.
You can move ~/my-proj into C:\Users\user20358 and mount from there:
-v 'C:\Users\user20358\my-proj\getting-started\app:/usr/src/app'
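The full run command would then look roughly like this (a sketch reusing the image name, port, and mount target from the question; the container name cntr-win-todo is arbitrary, and the Windows path assumes the project was moved as suggested above):
docker run -it -v 'C:\Users\user20358\my-proj\getting-started\app:/usr/src/app' -p 3001:3000 --name cntr-win-todo img-todo:in-linux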

How to access /var/lib/docker in windows 10 docker desktop?

Installed Docker Desktop for Windows 10
Used PowerShell to run Docker containers (Ubuntu)
Now, I want to browse to /var/lib/docker, and from there to overlay2, to check the layers, the /diff folder, etc.
If I access the /var/lib/docker folder, PowerShell complains that this folder does not exist.
Another piece of info: I have already checked the disk image location mapped for Docker Desktop. It is a .vhdx file.
I was not able to open it with Oracle VirtualBox; it says the file is not a supported version.
I tried opening it in Hyper-V Manager, and the VM is listed: DockerDesktopVM.
But my objective is to SSH in and browse the /var/lib/docker folders.
(This is for the WSL2 case; it is my answer to a similar question.)
Docker images are managed by Docker's own VM. The path /var/lib/docker given by "docker info" is relative to Docker's host file system, not your container's file system; the mount points are different for them. You can view Docker's host file system in either of the following ways:
You can mount the host file system to a container directory. For example:
docker run -v /:/data -it ubuntu /bin/bash
This command runs a shell in the Ubuntu Docker image, mounting Docker's file system to the /data directory. There you can find a complete file system under /data, including /data/var/lib/docker. If you want, you can run "chroot /data" at the shell prompt to get a better view.
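As a concrete illustration of that chroot step (a sketch; run these inside the container started above, and treat <layer-id> as a placeholder for one of the directories you find there):
chroot /data
ls /var/lib/docker/overlay2
ls /var/lib/docker/overlay2/<layer-id>/diff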
When Docker is enabled for your distribution in WSL2, you can always check your containers in your distribution's /mnt directory. Docker has mounted everything for you:
/mnt/wsl/docker-desktop-data/data/docker
If you are seasoned enough, you may find the actual location of the virtual disk holding all this data in your Windows directory.
C:\Users\your_name\AppData\Local\Docker\wsl\data\
Or probably just for fun:
\\wsl$\Ubuntu\mnt\wsl\docker-desktop-data\data\docker
Unfortunately I haven't tried to dive into them.
As stated on this page of the Docker forums, you can run a plain Debian Docker image with a shell and change its namespace to the Docker host.
The terminal command you need to run is:
>> docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -i sh
As I understand it, after running the Debian image interactively (the -it option), you call nsenter with the specified parameters to switch the namespace to the host machine. After this your container effectively becomes the Docker host and you can view all its files.
After this command you can access the Docker images simply by running:
>> ls /var/lib/docker/
In the left pane of Windows File Explorer, you can find all your computer's drives. If you have installed WSL (when you set up Docker), you will see a Linux penguin icon.
Select the docker-desktop-data directory and, inside it, the data directory. Within the data directory you will find the docker directory and the volumes generated by the docker run ... -v command.
A shortcut would be: cd \\wsl.localhost\docker-desktop-data\data

How to access shared volumes on Docker for Mac

I've reviewed the documentation here:
https://docs.docker.com/docker-for-mac/install/#install-and-run-docker-for-mac
It doesn't say anything about boot2docker, although some other questions along these lines talk about this:
Mount volume to Docker image on OSX
So the question is – the Docker for Mac application provides File Sharing via Preferences -> File Sharing; how does one make use of these shared folders from the docker image (for example if one ssh's into the docker image)? When I say how, I don't mean "what are the use-cases", I mean "please show me an example of how to access a shared folder from the command line of the running container".
Ideally I'm trying to create a scenario similar to Vagrant's synced folders, whereby I can edit files in my host environment, independently of the Docker image, and have them updated automatically in the Docker image on save.
UPDATE:
To be clear, the reason for asking this question is that I couldn't get the -v docker option to work. E.g.
docker run -v /Users/geoidesic/Documents/projects/arc/mysite/djangocms_demo:/home/djangocms/djangocms/djangocms_demo -d -p 8001:8000 --name test_shared_volumes bluszcz/djangocms
With the above command the container immediately stops, so if I run docker ps the list of running containers is empty.
However, if I run the container without the -v command, then it stays running as expected:
docker run -d -p 8001:8000 --name test_shared_volumes bluszcz/djangocms
Updated:
Well, if you want to share a file or directory between host and container, you're going to use Docker's bind mounts.
For example, if I want to share my host's /etc/resolv.conf to my container, I do the following:
docker run -v /etc/resolv.conf:/etc/resolv.conf <IMAGE>
Here the -v ... part tells Docker to mount the host's /etc/resolv.conf into the container, and whenever I edit this file, the changes will be immediately visible to the container.
In Linux, you can use this way to share almost any of your host files to containers. Unfortunately, this is not the case for Mac. As I mentioned in my old answer, by default you can only share /Users/, /Volumes/, /private/, and /tmp directly.
On my Mac, say I want to share the /data directory with a container. I run the command below:
docker run -it --rm -v /data:/data busybox sh
Then it pops up an unhappy error:
docker: Error response from daemon: Mounts denied:
The path /data
is not shared from OS X and is not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
So you see, this is where File Sharing comes up.
Then comes my answers to your questions:
File Sharing does not by itself provide a ready-to-use way to do the sharing you experienced with Vagrant;
To share a file/folder between host and container, use Docker's bind mounts.
Hope that helps.
Old answer:
File Sharing is used by Docker's bind-mount feature. By default, you can bind-mount files in /Users/, /Volumes/, /private/, and /tmp directly. For other paths, you need to add them to Preferences -> File Sharing first.
Use cases for bind mounts:
Persisting data generated by the running container, so that you can back up or migrate data.
Sharing data among multiple running containers.
Sharing host configuration files with containers.
Sharing source code between host and containers, to make debugging easier.
Note: For cases #1 and #2, consider using volumes instead of bind mounts.
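For example, case #1 with a named volume could look like this (a sketch; the volume name app-data and the container path /var/lib/app-data are assumptions, and <IMAGE> is a placeholder as above):
docker volume create app-data
docker run -d -v app-data:/var/lib/app-data <IMAGE>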

share windows folder (other than c/Users/) with docker container (using docker windows client)

Using the Docker client, is there a way to share a folder in Windows with a Docker container without having to first share the folder via the VirtualBox VM?
I have understood the need for a double slash from this and this.
I ran the following command from the Docker client for Windows:
docker run -it -v //F/devfolder:/development/windev <imagename> <cmdname>
but when I did an ls on /development/windev, it turned out to be empty.
I did not have any problem when I tried mounting the c/Users/username folder via the following command:
docker run -it -v //c/Users/username/desktop:/development/windev <image> <command>
and the windev folder listed the contents as I would expect.
I tried sharing F/devFolder via the VirtualBox GUI and gave full access, but the contents of the folder are still not listed.
[I am not using boot2docker but docker-machine]
Is it not possible to share any folder other than the c/Users/ folder? If it is possible, is there anything else I need to do to ensure that I can see the contents of the mounted folder?
Not only do you have to mount it in VirtualBox, but you also have to instruct your boot2docker TinyCore session that you want that folder visible (once you have done a docker-machine ssh yourMachine):
mount -t vboxsf -o uid=1000,gid=50 your-other-share-name /some/mount/location
I know that you are using docker-machine, and not boot2docker, yet docker-machine is still using a boot2docker.iso VM image based on TinyCore, so this command still applies.
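Putting the pieces together, the workflow would look roughly like this (a sketch: the share name devfolder and the mount point /f/devfolder are assumptions that mirror how /c/Users shows up in the VM, and <imagename> <cmdname> are the placeholders from the question):
docker-machine ssh yourMachine
sudo mkdir -p /f/devfolder
sudo mount -t vboxsf -o uid=1000,gid=50 devfolder /f/devfolder
exit
docker run -it -v //f/devfolder:/development/windev <imagename> <cmdname>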
