How to access a file on the Mac from a Docker container - macos

I am running Robot Framework (Selenium based) tests inside a Docker container, but I need to access files outside the Docker container (on the Mac).
I have tried providing an absolute path on the Mac, but Docker resolves it against its own root filesystem inside the container.
I found the links below for Windows, but not for Mac.
Docker - accessing files inside container from host
Access file of windows machine from docker container

One approach is to copy your files into the docker container at creation time, but if your files are updated by another service on the host and the container needs to see those updates too, just mount them like below.
docker run -d --name your-container -v /path/to/files/:/path/inside/container containername:version
This way the files on the host machine are mounted into the docker container and the user inside the container can access them.
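For the original Mac use case, a sketch of what that could look like for a folder of Robot Framework test data (the host path, container path, and image name here are hypothetical):
docker run -d --name robot-tests -v /Users/yourname/robot-tests:/opt/robot-tests your-robot-image:latest
Inside the container, the test suites can then reference /opt/robot-tests like any local path.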

Related

docker mounting folder not working for Windows 10 with WSL2

I have tried this example out on a Windows 10 box which has WSL2 installed and integrated with the latest Docker version. After following the steps in the example and downloading the code in the Linux subsystem, I am able to build the image and run the container. The website is also available when I browse to it in a browser running on Windows 10.
However, when I create a file or folder in the container, the same doesn't show up in the host filesystem, which in this case is the Linux subsystem. Similarly, a file created in the host Linux subsystem is not seen in the container's CLI when I use the ls command.
I ran this command to confirm that the folder has been mounted, where 44711fc95366 is my container id:
docker inspect -f "{{ .Mounts }}" 44711fc95366
This gives an output like so:
[{bind /home/userlab1/my-proj/getting-started/app /usr/src/app true rprivate}]
If the mount point expressed above is correct, I should be able to create a file or folder in host subsystem on the path /home/userlab1/my-proj/getting-started/app and be able to see it at the /usr/src/app path in the container, correct?
The docker image has been created and run from the linux subsystem command line like so:
docker run -it -v ~/my-proj/getting-started/app:/usr/src/app -p 3001:3000 --name cntr-linux-todo img-todo:in-linux
While the application runs, files updated in the container aren't reflected on the website that is running from the container, nor is a new file/folder created in the container seen in the host subsystem, and vice versa. What am I missing?
As you are using the Windows version of Docker, it cannot see files/folders inside WSL.
You can move ~/my-proj into C:\Users\user20358 and mount it from there:
-v 'C:\Users\user20358\my-proj\getting-started\app:/usr/src/app'
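Putting that together with the flags from the original run, a sketch of the full command (the container name here is just an example):
docker run -it -v 'C:\Users\user20358\my-proj\getting-started\app:/usr/src/app' -p 3001:3000 --name cntr-win-todo img-todo:in-linux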

How to copy files from docker container to host in windows containers

I am learning docker. In my image build process I create a file called "appsettings.json" in a folder called "Config". I want this file to be editable by the user outside the container. The final goal is that a user can stop the container, make changes to the settings file, and start the container again with the new settings file.
I am using windows containers on a windows 10 host. I created a new volume first:
docker volume create myvolume
After that I tried to start my container
docker run -v myvolume:C:/app/Config
However, it seems that the -v argument deletes all content in the Config folder. I was aware that bind mounts override folders in the container with folders on the host, but I thought this named volume would copy the appsettings file to the host.
What I could do is create the volume first, start the container, and copy the file from within the container into the volume, but this seems like annoying overkill.
Is there any easier way or best practice to make files which are produced as part of the build process visible to the host file system?
You can't mount files from your host machine into volumes. As Pandey Amit already mentioned in the comments, you have to mount the Config directory of your host machine directly into the container. To achieve this in Windows 10, you have to grant access to your filesystem in the Docker settings first.
Open Docker Dashboard -> Settings -> Resources -> FILE SHARING and add the directory that should be mounted into your Docker containers (FYI: all subdirectories are included).
I recommend restarting Docker itself so it picks up the new settings.
Once docker has restarted, run your container:
docker run -v C:/app/Config:/path/to/your/app/Config <image-name>
This will mount the content of C:\app\Config into the container, and you should then be able to modify the content of appsettings.json even without restarting your container (though this depends on the architecture of your application and whether it supports a live reload of appsettings.json).
UPDATE: you cannot mount files from your docker image onto a host machine. If you want, you could instruct the user to create the file manually on the host machine and then follow the steps described above. But if the only thing you want to achieve is to allow the user to override application settings, I would recommend working with environment variables. For example, run the container with a setting overridden from the host machine:
docker run -e "foo=bar" <image-name>
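If the app is ASP.NET Core (a guess based on the appsettings.json name), settings from appsettings.json can usually be overridden through environment variables that use a double underscore as the section separator; the setting name below is just an example:
docker run -e "Logging__LogLevel__Default=Warning" <image-name>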

Populating a volume using a container does not work in Docker on Windows

I'm following the instructions on this Docker official page under "Populate a volume using a container" in an attempt to create a new volume populated with existing files in a newly launched container. I ran the following command, expecting the existing files and folders under C:\Data in the container to be available under the volume:
docker run -it --name=test -v C:\Data dataimage/test1:version1
A new volume appears to be created successfully. However, navigating to the C:\Data folder in the container shows that it is completely empty. If I run the above command without the -v option instead, I can see the original files at the same location.
Is this a fully supported feature in Docker on Windows? If so, could someone please shed a light on what I may be doing wrong?
I am using Docker Engine v19.03.8, and my host OS is Windows Server 2019.
Try this:
docker run -it --name=test -v '/c/Data:/data' dataimage/test1:version1
That should sync the C:\Data folder on your Windows host with the /data folder in the container. If /data isn't the folder name you want in the container, change it as needed.
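To double-check what actually got mounted and whether the volume was populated, the inspect trick from the WSL2 answer above works here too (test is the container name from the question):
docker inspect -f "{{ .Mounts }}" test
docker volume ls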

Pass docker directory as a maven parameter

I have a Maven goal that requires the server home folder as a parameter. On my local machine I just do:
mvn test -Dserverhome=/Users/foo/MyServer
On the test machine, the server is inside a docker container. How do I point to my server directory that is inside a docker container?
You need to mount your host folder as a data volume
docker run -d -P --name aname -v /Users/foo/MyServer:/myserver yourImage
That way, your maven command can always be (within the container)
mvn test -Dserverhome=/myserver
Because you trust that, at runtime, /myserver will have been associated with the right host folder.
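Assuming Maven is installed inside the image (an assumption; aname is the container name from the run command above), the goal can then be triggered from the host with something like:
docker exec aname mvn test -Dserverhome=/myserver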
Note that if you are using docker on Mac or Windows, /Users is already mounted (by VirtualBox and by boot2docker tinycore Linux), so you would not even need to declare the data volume.

Accessing Docker container files from Windows

How can I access a Docker container's folders and files from Windows File Explorer?
If you are running Docker Desktop on Windows, Docker containers don't run natively on the local filesystem, but instead inside a Hyper-V virtual machine or via WSL2.
Hyper-V (legacy)
In theory, if you were to stop the Hyper-V VM, you could open up the vhdx and, if you had the right filesystem drivers, mount it and see the files inside. This is not possible while the virtual machine is running. By default, the OS that runs for Linux container mode is named "Docker Desktop", but runs BusyBox.
The file could be found here:
C:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx
WSL2 (modern)
With WSL2, things are slightly different, but not much. You are still effectively working with a virtual environment.
One of the nice advantages of WSL, however, is that you can actually browse this file system natively with Windows Explorer.
By browsing to \\wsl$ you will be able to see the file systems of any distributions you have, including docker-desktop.
The docker filesystems on my machine seem to live in:
\\wsl$\docker-desktop-data\version-pack-data\community\docker\overlay2
However, the overlay 'merged' view, which shows the original file system with your changes, doesn't seem to work via Windows Explorer and gives you a blank window. You can, however, still see the 'diff' folder, which contains your changes.
You can open a terminal to either of these instances by using the wsl command from PowerShell.
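For example (a sketch; wsl -l -v lists the distributions that are actually installed, and docker-desktop is the name Docker Desktop registers):
wsl -l -v
wsl -d docker-desktop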
Access via Docker
If you wanted to have a look at this Docker OS and filesystem, one way would be to spin up a container that has access to the OS root, something like:
docker run -it --mount type=bind,source=/,target=/host ubuntu /bin/bash
This should drop you into an Ubuntu docker container with a Bash terminal, which has the root of the Hyper-V VM (/) mounted on the path '/host'. Looking inside, you will find the BusyBox filesystem of the virtual machine that is running Docker, and all the containers.
Due to how docker runs, you will be able to access the filesystems of each container. If you are using the overlay2 filesystem for your containers, you would likely find the filesystem layers here for each container:
/host/var/lib/docker/overlay2
If you want to browse these files in Windows Explorer, you should be able to configure a Samba export of this folder that is accessible from the host machine while the container is running.
If the goal, however, is to be able to browse/edit files on the local OS and have them update inside the container, normally the easiest way to do this is to mount a local directory into the container. This can be done similarly to the example above, but you first need to go into the Docker Desktop settings and enable sharing of the drive into the host virtual machine, and then provide the volume argument when you spin up the container.
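As a rough sketch, once the drive is shared in the Docker Desktop settings, a local project folder can be mounted like this (the paths are hypothetical):
docker run -it -v C:\Users\me\my-project:/project ubuntu /bin/bash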
If you are using WSL2, there are a few more options available to you, as you can keep your projects inside the WSL layer, while interacting with them from the host OS or via docker. Best practice for this is still in flux, so I'm going to avoid giving direct advice here.
Another related question's reply answers this: https://stackoverflow.com/a/64418064/1115220
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\
I'll give a WordPress app as an example by showing a sample of the docker-compose.yaml file. In order to have project files from the docker container shown in Windows, you'll need to use ports and volumes.
Notice the volumes and ports:
Port 8000 on the local machine maps to port 80 within the container.
As for the volume, the current directory (./) on Windows maps to the container's files at /var/www/html.
wordpress:
  depends_on:
    - db
  image: wordpress:latest
  volumes: ['./:/var/www/html']
  ports:
    - "8000:80"
  restart: always
  environment:
    WORDPRESS_DB_HOST: db
    WORDPRESS_DB_USER: wordpress
    WORDPRESS_DB_PASSWORD: wordpress
    WORDPRESS_DB_NAME: wordpress
When running Windows containers on Docker Desktop for Windows, I was able to see all image files here:
C:\ProgramData\Docker\windowsfilter
(requires admin rights to access, and it would be unwise to delete/modify anything there)
Further, with the WizTree tool, it's easy to see the real sizes of each image layer and even find which specific files contribute to a layer's size.
You should use a volume mount, which you can specify in your docker run ... command. The syntax is as follows:
-v /host/directory:/container/directory
An example:
docker run -it -v C:\Users\thomas\Desktop:/root/home --name my_container image1
This would allow the container to write files to /root/home and have them appear on the user thomas' desktop.
