I cannot mount a local folder as a Docker volume in docker-compose, so it is not accessible to the docker-compose run command.
Here is the repo on GitHub: https://github.com/up1/demo-k6-docker
When I follow the README and run docker-compose run k6 run scripts/sample.js, it gives me the following error every time:
WARN[0000] The moduleSpecifier "scripts/sample.js" has no scheme but we will try to resolve it as remote module. This will be deprecated in the future and all remote modules will need to explicitly use "https" as scheme.
ERRO[0000] The moduleSpecifier "scripts/sample.js" couldn't be found on local disk. Make sure that you've specified the right path to the file. If you're running k6 using the Docker image make sure you have mounted the local directory (-v /local/path/:/inside/docker/path) containing your script and modules so that they're accessible by k6 from inside of the container, see https://k6.io/docs/using-k6/modules#using-local-modules-with-docker. Additionally it was tried to be loaded as remote module by prepending "https://" to it, which also didn't work. Remote resolution error: "Get "https://scripts/sample.js": dial tcp: lookup scripts on 127.0.0.11:53: no such host"
Tried:
specifically sharing the folder in the Docker app settings window,
different GitHub repos,
different Mac laptops,
different setups (Dockerfile COPY vs. the -v option on docker run),
looking for similar questions,
and the docs: https://k6.io/docs/using-k6/modules#using-local-modules-with-docker
I would really appreciate some help; I've been banging my head against the wall with this for a couple of days.
Try this:
docker-compose run k6 run //scripts//sample.js
I'm running Docker Desktop version 3.1.0 on Windows 10 Pro. As far as I understand, the doubled slashes keep the MSYS/Git Bash shell on Windows from rewriting the POSIX-style container path into a Windows path before it reaches the container.
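For the mount itself to work, the docker-compose.yml has to bind the local scripts folder into the container. I haven't verified the exact file in that repo, but the relevant fragment presumably looks something like this (the service name, image, and paths are assumptions):

# hypothetical docker-compose.yml fragment
services:
  k6:
    image: loadimpact/k6:latest
    volumes:
      - ./scripts:/scripts

With a mount like that in place, the script must be addressed by its in-container path (/scripts/sample.js) rather than a relative host path; the doubled slashes in the command above additionally protect that path from Git Bash's rewriting.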
I am learning Docker. In my image build process I create a file called "appsettings.json" in a folder called "Config". I want this file to be editable by the user outside the container. The end goal is that a user can stop the container, make changes to the settings file, and start the container again with the new settings.
I am using windows containers on a windows 10 host. I created a new volume first:
docker volume create myvolume
After that I tried to start my container:
docker run -v myvolume:C:/app/Config <image-name>
However, it seems that the -v argument deletes all content in the Config folder. I was aware that bind mounts override folders in the container with folders on the host, but I thought a named volume would copy the appsettings file to the host.
What I could do is create the volume first, start the container, and copy the file from within the container into the volume, but this seems like annoying overkill.
Is there an easier way, or a best practice, to make files that are produced as part of the build process visible to the host file system?
You can't mount files from your host machine into volumes. As Pandey Amit already mentioned in the comments, you have to mount the Config directory of your host machine directly into the container. To achieve this on Windows 10, you first have to grant access to your filesystem in the Docker settings.
Open the Docker Dashboard -> Settings -> Resources -> File Sharing, and add the directory that should be mountable in your Docker containers (FYI: all subdirectories are included).
I recommend restarting Docker itself so that it picks up the new settings.
Once docker has restarted, run your container:
docker run -v C:/app/Config:/path/to/your/app/Config <image-name>
This will mount the content of C:\app\Config into the container, and you should now be able to modify the content of appsettings.json without even restarting your container (whether the change takes effect live depends on the architecture of your application, i.e. whether it supports a live reload of appsettings.json).
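If you manage the container with docker-compose instead of docker run, the same bind mount would look roughly like this (the service name and the in-container path are placeholders):

# hypothetical docker-compose.yml fragment with the same bind mount
services:
  app:
    image: <image-name>
    volumes:
      - C:/app/Config:/path/to/your/app/Config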
UPDATE: You cannot mount files from your Docker image onto a host machine. If you want, you could instruct the user to create the file manually on the host machine and then follow the steps described above. But if the only thing you want to achieve is to allow the user to override application settings, I would recommend working with environment variables. For example, run the container with a setting overridden from the host machine:
docker run -e "foo=bar" <image-name>
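Since appsettings.json suggests an ASP.NET Core application, it is worth noting that .NET's configuration system maps environment variables onto nested JSON keys using __ as the separator. The key below is a typical default, but your actual keys may differ:

# overrides the Logging:LogLevel:Default key from appsettings.json
docker run -e "Logging__LogLevel__Default=Warning" <image-name>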
I initially configured my Docker setup for Docker for Windows. Everything worked great. I'm using docker-compose to define 3 containers, each of which has a volume mapped from ./src (path on host) to /src/ (path on container).
I recently found out that the production server might run Windows 10 Home, which doesn't support Docker for Windows. So my thinking is that I should revert to Docker Toolbox to be prepared for that scenario.
So I uninstalled Docker for Windows and installed Docker Toolbox. I can build my images with docker-compose build just fine, but now when I run docker-compose up -d, 2 of my containers immediately crash because the /src/ directory never gets mounted.
I can verify that the volumes are not getting mounted by running docker exec -it ng01 bash and seeing that the volume directory exists but is empty. Two of my co-workers can reproduce this issue on their Windows machines with Docker Toolbox.
Does anyone know why this is happening, or how to get around it? I've been looking at a bunch of similar SO posts, but the various solutions have gotten me nowhere. I would appreciate some guidance.
Here is my docker-compose file.
I have my source code in src/.
I have my Dockerfiles in docker/
Here is hotloader.Dockerfile.
Here is web.Dockerfile. I don't think they are the issue, but I might as well share them anyway.
Thank you in advance!
Docker Toolbox for Windows works by setting up a VirtualBox VM named default. Running any docker command forwards that command to the VM (Windows Machine → Virtual Machine → Docker).
To mount local Windows folders as Docker volumes, those folders first need to be shared and mounted on the VM that is running Docker.
By default, C:\Users is shared, so mounting volumes from that location will work without any configuration.
So you can either move your project to this already-shared location (C:\Users), or you can follow the steps in this document: https://headsigned.com/posts/mounting-docker-volumes-with-docker-toolbox-for-windows/
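Roughly, the steps from that document boil down to sharing the folder with the VirtualBox VM and mounting it inside; the share name d/projects and host path D:\projects below are just examples:

# stop the Toolbox VM and share a host folder with it
docker-machine stop default
VBoxManage sharedfolder add default --name "d/projects" --hostpath "D:\projects" --automount
docker-machine start default
# mount the share inside the VM so docker can bind-mount from it
docker-machine ssh default "sudo mkdir -p /d/projects && sudo mount -t vboxsf d/projects /d/projects"

Note that a vboxsf mount done this way does not survive a VM restart unless you also add it to the VM's boot scripts.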
Hope this helps! :)
I'm trying to mount a network folder into a Docker container on Windows 10 with the following syntax. Using UNC paths does not work. I'm running it under Hyper-V and the stable version of Docker.
docker run -v \\some\windows\network\path:/some/local/container
Previously I was using Docker Toolbox, and I could map a network share to an internal folder with VirtualBox. I've tried adding the network share as a drive, but it doesn't show up as an available drive under the settings panel.
Currently I'm using mklink to mirror a local folder to the network folder, but I'd like to not depend on this as a solution.
Do this with Windows-based containers
Go to the Microsoft documentation: https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/persistent-storage#smb-mounts.
There you'll find information about how to mount a network drive as a volume in a Windows container.
Do this with Linux-based containers
This is currently (as of 2019-11-13) not possible. BUT you can use a plugin: https://github.com/ContainX/docker-volume-netshare
I didn't use it, so I have no experience with it. I just found it during my research and wanted to add it as a potential solution.
Recommended solution
While researching this topic, I came to think that you should probably mount the drive from within the container. You can pass the required credentials either via a file or via parameters.
Example for credentials as file
You would need to install the package cifs-utils in the container, add
COPY ./.smbcredentials /.smbcredentials
to the Dockerfile, and then run the following command after the container has started:
sudo mount -t cifs -o file_mode=0600,dir_mode=0755,credentials=/.smbcredentials //192.168.1.XXX/share /mnt
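For completeness, here is what the surrounding pieces could look like; the base image and the credential values are made up:

# .smbcredentials - standard mount.cifs credentials file format
username=myuser
password=mypassword
domain=MYDOMAIN

# hypothetical Dockerfile fragment installing the CIFS tools
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y cifs-utils
COPY ./.smbcredentials /.smbcredentials

Keep in mind that mounting a filesystem from inside a container normally requires extra privileges, e.g. starting it with docker run --cap-add SYS_ADMIN (or --privileged).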
Potential duplicate
There was another Stack Overflow thread on this topic here:
Docker add network drive as volume on windows
The answer provided there (https://stackoverflow.com/a/57510166/12338776) didn't work for me though.
How can I access a Docker container's folders and files from Windows File Explorer?
If you are running Docker Desktop on Windows, Docker containers don't run natively on the local filesystem, but instead inside a Hyper-V virtual machine or via WSL2.
Hyper-V (legacy)
In theory, if you were to stop the Hyper-V VM, you could open up the vhdx and, if you had the right filesystem drivers, mount it and see the files inside. This is not possible while the virtual machine is running. By default, the OS that runs in Linux container mode is named "Docker Desktop", but it runs BusyBox.
The file could be found here:
C:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx
WSL2 (modern)
With WSL, things are slightly different, but not by much. You are still effectively working with a virtual environment.
One of the nice advantages of WSL, however, is that you can actually browse this file system natively with Windows Explorer.
By browsing to \\wsl$ you will be able to see the file systems of any distributions you have, including docker-desktop.
The docker filesystems on my machine seem to live in:
\\wsl$\docker-desktop-data\version-pack-data\community\docker\overlay2
However, the overlay 'merged' view, which shows the original file system with your changes applied, doesn't seem to work via Windows Explorer and gives you a blank window. You can still see the 'diff' folder, though, which contains your changes.
You can open a terminal to either of these instances by using the wsl command from PowerShell.
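For example (wsl --list shows the exact distribution names registered on your machine):

# list registered distributions, then open a shell in the docker-desktop VM
wsl --list
wsl -d docker-desktop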
Access via Docker
If you wanted to have a look at this Docker OS and filesystem, one way would be to spin up a container that has access to the OS at the root, something like:
docker run -it --mount type=bind,source=/,target=/host ubuntu /bin/bash
This should drop you into an Ubuntu Docker container with a Bash terminal, where the root of the Hyper-V VM (/) is mounted at the path /host. Looking inside, you will find the BusyBox filesystem of the virtual machine that is running Docker, including all the containers.
Because of how Docker runs, you will be able to access the filesystems of each container. If you are using the overlay2 storage driver for your containers, you will likely find the filesystem layers for each container here:
/host/var/lib/docker/overlay2
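For example, from inside the container started above:

# list the overlay2 layer directories of all containers
ls /host/var/lib/docker/overlay2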
If you want to browse these files in Windows Explorer, you should be able to configure a Samba export of this folder that is accessible from the host machine while the container is running.
If the goal, however, is to browse/edit files on the local OS and have them update inside the container, the easiest way is normally to mount a local directory into the container. This can be done much like the example above, but you first need to go into the Docker Desktop settings and enable sharing of the drive with the host virtual machine, and then provide the volume argument when you spin up the container.
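A minimal sketch of that, assuming the files live at C:\Users\me\project (the path and image are placeholders):

# bind-mount a local Windows folder into a Linux container
docker run -it -v C:\Users\me\project:/project ubuntu /bin/bash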
If you are using WSL2, there are a few more options available to you, as you can keep your projects inside the WSL layer, while interacting with them from the host OS or via docker. Best practice for this is still in flux, so I'm going to avoid giving direct advice here.
Another related question's reply answers this: https://stackoverflow.com/a/64418064/1115220
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\
I'll use a WordPress app as an example, showing a sample of the docker-compose.yaml file. In order to have project files from the Docker container show up in Windows, you'll need to use ports and volumes.
Notice the volumes and ports:
port 8000 on the local machine maps to port 80 within the container;
as for the volume, ./ (the current directory on Windows) maps to the container's application files.
wordpress:
  depends_on:
    - db
  image: wordpress:latest
  volumes: ['./:/var/www/html']
  ports:
    - "8000:80"
  restart: always
  environment:
    WORDPRESS_DB_HOST: db
    WORDPRESS_DB_USER: wordpress
    WORDPRESS_DB_PASSWORD: wordpress
    WORDPRESS_DB_NAME: wordpress
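With that in place (plus the db service, which I've omitted here), starting the stack makes WordPress reachable on port 8000 and its files editable from the current Windows directory:

# start the stack, then browse to http://localhost:8000
docker-compose up -d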
When running Windows containers on Windows Docker Desktop, I was able to see all image files here:
C:\ProgramData\Docker\windowsfilter
(requires admin rights to access, and it would be unwise to delete/modify anything there)
Furthermore, with the WizTree tool, it's easy to see the real size of each image layer and even find which specific files contribute to a layer's size.
You should use a volume mount. In your docker run ... command, you can specify one with the -v flag. The syntax is as follows:
-v /host/directory:/container/directory
An example:
docker run -it -v C:\Users\thomas\Desktop:/root/home --name my_container image1
This would allow the container to write files to /root/home and have them appear on the user thomas's desktop.
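One caveat worth noting: if the Windows path contains spaces, wrap the whole mapping in quotes (the path here is hypothetical):

docker run -it -v "C:\Users\thomas\My Files:/root/home" --name my_container image1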