Docker mount volume error: no such file or directory (Windows)

I am trying to set up an EMQX broker by deploying it with Docker. One of my constraints is to do this on Windows. To be able to use TLS/SSL authentication there must be a place to put certs in the container, so I'd like to mount a volume.
I have tried several ways and read a myriad of comments, but I cannot make it work consistently. I always run into a "no such file or directory" message.
More interestingly, I once got it to work and saved the .yml file right after, but the next time I ran docker-compose up with that YAML ("YAML that worked once" below), I received the same usual message ("Resulting error message").
Path where the certs reside -> c:\Users\danha\Desktop\certs
Lines in question (please see the entire YAML below):
volumes:
  - vol-emqx-conf://C//Users//danha//Desktop//certs

volumes:
  vol-emqx-conf:
    driver_opts:
      type: none
      device: /Users/danha/Desktop/certs
      o: bind
YAML that worked once:
version: '3.4'
services:
  emqx:
    image: emqx/emqx:4.3.10-alpine-arm32v7
    container_name: "emqx"
    hostname: "emqx"
    restart: always
    environment:
      EMQX_NAME: lms_emqx
      EMQX_HOST: 127.0.0.1
      EMQX_ALLOW_ANONYMOUS: "false"
      EMQX_LOADED_PLUGINS: "emqx_auth_mnesia"
      EMQX_LOADED_MODULES: "emqx_mod_topic_metrics"
    volumes:
      - vol-emqx-conf://C//Users//danha//Desktop//certs
    labels:
      NAME: "emqx"
    ports:
      - 18083:18083
      - 1883:1883
      - 8081:8081
volumes:
  vol-emqx-conf:
    driver_opts:
      type: none
      device: //C//Users//danha//Desktop//certs
      o: bind
Resulting error message
C:\Users\danha\Desktop\dc>docker-compose up
Creating network "dc_default" with the default driver
Creating volume "dc_vol-emqx-conf" with default driver
Creating emqx ... error
ERROR: for emqx Cannot start service emqx: error while mounting volume '/var/lib/docker/volumes/dc_vol-emqx-conf/_data': failed to mount local volume: mount \\c\Users\danha\Desktop\certs:/var/lib/docker/volumes/dc_vol-emqx-conf/_data, flags: 0x1000: no such file or directory
ERROR: for emqx Cannot start service emqx: error while mounting volume '/var/lib/docker/volumes/dc_vol-emqx-conf/_data': failed to mount local volume: mount \\c\Users\danha\Desktop\certs:/var/lib/docker/volumes/dc_vol-emqx-conf/_data, flags: 0x1000: no such file or directory
ERROR: Encountered errors while bringing up the project.
I have also played around with forward and back slashes, but these did not bring any success. In the end I entered a path which produced an error message most closely resembling the correct path:
YAML omitting C: from the beginning of the path:
version: '3.4'
services:
  emqx:
    image: emqx/emqx:4.3.10-alpine-arm32v7
    container_name: "emqx"
    hostname: "emqx"
    restart: always
    environment:
      EMQX_NAME: lms_emqx
      EMQX_HOST: 127.0.0.1
      EMQX_ALLOW_ANONYMOUS: "false"
      EMQX_LOADED_PLUGINS: "emqx_auth_mnesia"
      EMQX_LOADED_MODULES: "emqx_mod_topic_metrics"
    volumes:
      - vol-emqx-conf:/Users/danha/Desktop/certs
    labels:
      NAME: "emqx"
    ports:
      - 18083:18083
      - 1883:1883
      - 8081:8081
volumes:
  vol-emqx-conf:
    driver_opts:
      type: none
      device: /Users/danha/Desktop/certs
      o: bind
Resulting error message
C:\Users\danha\Desktop\dc>docker-compose up
Creating volume "dc_vol-emqx-conf" with default driver
Creating emqx ... error
ERROR: for emqx Cannot start service emqx: error while mounting volume '/var/lib/docker/volumes/dc_vol-emqx-conf/_data': failed to mount local volume: mount C:\Users\danha\Desktop\certs:/var/lib/docker/volumes/dc_vol-emqx-conf/_data, flags: 0x1000: no such file or directory
ERROR: for emqx Cannot start service emqx: error while mounting volume '/var/lib/docker/volumes/dc_vol-emqx-conf/_data': failed to mount local volume: mount C:\Users\danha\Desktop\certs:/var/lib/docker/volumes/dc_vol-emqx-conf/_data, flags: 0x1000: no such file or directory
ERROR: Encountered errors while bringing up the project.
That also got me thinking that this issue might be related to access rights and file sharing between Windows and WSL2 (CMD was run in admin mode too), however I could not find any answer further down that line that helped.
This is probably a pretty newbie question, but any help would be greatly appreciated.
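For what it's worth, on Docker Desktop for Windows a host directory is usually attached with the short bind-mount syntax rather than a named volume backed by driver_opts, and Docker Desktop accepts the Windows drive path directly. A minimal sketch, using the cert path from the question; the container-side path /opt/emqx/etc/certs is an assumption on my part, not something stated in the question:

```yaml
version: '3.4'
services:
  emqx:
    image: emqx/emqx:4.3.10-alpine-arm32v7
    volumes:
      # Short syntax: <host path>:<container path>.
      # No top-level volumes: block is needed for a plain bind mount.
      - C:/Users/danha/Desktop/certs:/opt/emqx/etc/certs
```

If this still fails with "no such file or directory", checking that the C: drive (or the user folder) is shared in Docker Desktop's file-sharing settings is usually the next step.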

Related

How to mount a NFS file share inside a Windows Docker Container

I am running Docker on Windows Server 2022.
I need to start a Windows container (it must be a Windows container, because I am porting a .NET Framework app that cannot run on Linux).
My Windows container needs to write uploaded files to a network share.
I have read lots of discussions, but I could only find examples for Linux containers.
Currently I am trying a Docker Compose file. This is the compose file:
version: "3.9"
services:
  web:
    image: misterpanel
    ports:
      - "80:80"
    volumes:
      - misterpanel_images:/inetpub/wwwroot/Img
volumes:
  misterpanel_images:
    driver: local
    driver_opts:
      type: cifs
      o: username=<myusername>,password=<mypassword>,rw,domain=<mydomain>
      device: "\\\\<server-ip-address>\\<files-share>"
The misterpanel image was created FROM mcr.microsoft.com/dotnet/framework/aspnet.
When running docker-compose up, I am getting the following error:
PS C:\Users\Administrator\docker> docker-compose up
Creating volume "docker_misterpanel_images" with local driver
ERROR: create docker_misterpanel_images: options are not supported on this platform
I have also tried to map the network drive at the host and then mount a bind when starting the container, like this:
docker run -p 80:80 --dns 8.8.8.8 --mount type=bind,source=z:\,target=c:\inetpub\wwwroot\Img misterpanel
But then I get this error (even with the Z drive working at the host):
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: z:.
I have tried all the possible syntaxes for the source (\Z\ and //Z/ for example).
I could only mount binds in the Windows container to the local C: drive.
Has anyone ever mounted a bind to a file share inside a Windows container?
Any help will be appreciated.
Thanks,
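One approach worth trying here, sketched under the assumption that the share details from the question are placeholders: Windows Server supports mapping an SMB share globally on the host with New-SmbGlobalMapping, which (unlike a per-session net use mapping) is visible to containers, after which the mapped drive can be bind-mounted. This is not taken from the question; server address, share name, credentials, and the G: drive letter below are all hypothetical:

```powershell
# On the Windows Server 2022 host, in an elevated PowerShell session:
# map the share globally so containers can see the drive
$creds = Get-Credential -UserName '<mydomain>\<myusername>' -Message 'Share credentials'
New-SmbGlobalMapping -RemotePath '\\<server-ip-address>\<files-share>' -Credential $creds -LocalPath G:

# then bind-mount the globally mapped drive into the Windows container
docker run -p 80:80 --mount type=bind,source=G:\,target=C:\inetpub\wwwroot\Img misterpanel
```

The earlier error ("bind source path does not exist: z:") is consistent with a per-user drive mapping that the Docker daemon, running as a different account, cannot see; the global mapping is meant to work around exactly that.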

docker-compose pull Error: "error creating temporary lease: read-only file system"

I'm trying to run docker-compose pull but I get some errors that I don't know what to do with.
My docker-compose.yaml file:
version: '3'
services:
  strapi:
    image: strapi/strapi
    environment:
      DATABASE_CLIENT: postgres
      DATABASE_NAME: strapi
      DATABASE_HOST: postgres
      DATABASE_PORT: 5432
      DATABASE_USERNAME: strapi
      DATABASE_PASSWORD: strapi
    volumes:
      - ./app:/srv/app
    ports:
      - '1337:1337'
    depends_on:
      - postgres
  postgres:
    image: postgres
    environment:
      POSTGRES_DB: strapi
      POSTGRES_USER: strapi
      POSTGRES_PASSWORD: strapi
    volumes:
      - ./data:/var/lib/postgresql/data
The error message:
Pulling postgres ... error
Pulling strapi ... error
ERROR: for strapi error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown
ERROR: for postgres error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown
ERROR: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown
error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown
I tried a multitude of things, so YMMV, but here are all the steps that ultimately got it working.
I am using Windows 10 with the WSL2 backend on Ubuntu, so again YMMV as I see macOS is tagged. This is one of the few questions I found related to mine, so I thought it would be valuable.
Steps for success:
Update WSL (wsl --update -- unrelated to the GitHub issue below)
stop Docker Desktop
stop WSL (wsl --shutdown)
unregister the docker-desktop distro (which contains binaries, but no data)
wsl --unregister docker-desktop
restart Docker Desktop (try running as admin)
Enable use of docker compose V2 (settings -> general -> Use Docker Compose V2)
Associated GitHub issue link
Extra Info:
I ended up using V2 of docker compose when it worked... it works either way now that the image has pulled properly, though.
I unsuccessfully restarted, reinstalled, and factory reset Docker Desktop many times.

Windows 10 bind mounts in docker-compose not working

I'm using docker-compose to manage a multi-container application. One of those containers needs access to the contents of a directory on the host.
This seems simple according to the various sources of documentation on Docker and docker-compose, but I'm struggling to get it working.
event_processor:
  environment:
    - COMPOSE_CONVERT_WINDOWS_PATHS=1
  build: ./Docker/event_processor
  ports:
    - "15672:15672"
  entrypoint: python -u /src/event_processor/event_processor.py
  networks:
    - app_network
  volumes:
    - C/path/to/interesting/directory:/interesting_directory
Running this I get the error message:
ERROR: Named volume
"C/path/to/interesting/directory:/interesting_directory:rw" is used in
service "event_processor" but no declaration was found in the
volumes section.
I understand from the docs that a top-level declaration is only necessary if data is to be shared between containers, which isn't the case here.
The docs for docker-compose I linked above have an example which seems to do exactly what I need:
version: "3.2"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - type: volume
        source: mydata
        target: /data
        volume:
          nocopy: true
      - type: bind
        source: ./static
        target: /opt/app/static
networks:
  webnet:
volumes:
  mydata:
However when I try, I get errors about the syntax:
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it
should be a string
So I tried to play along:
volumes:
  - type: "bind"
    source: "C/path/to/interesting/directory"
    target: "/interesting_directory"
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it should be a string
So again the same error.
I tried the following too:
volumes:
  - type=bind, source=C/path/to/interesting/directory,destination=/interesting_directory
No error, but attaching to the running container, I see the following two folders:
type=bind, source=C
So it seems that I am able to create a number of volumes from one string (though the forward slashes are cutting the string up in this case), but I am not mapping it to the host directory.
I've read the docs but I think I'm missing something.
Can someone post an example of mounting a Windows directory from a host to a Linux container, so that the existing contents of the Windows dir are available from the container?
OK so there were multiple issues here:
1.
I had
version: '3'
at the top of my docker-compose.yml. The long syntax described here wasn't implemented until 3.4 so I stopped receiving the bizarre syntax error when I updated this to:
version: '3.6'
2.
I use my docker account on two Windows PCs. Following a hint from another Stack Overflow post, I reset Docker to the factory settings. I had to give Docker the computer username and password, with the notice that this was necessary to access the contents of the local filesystem. At this point I remembered doing this on another PC, so I'm not sure whether the credentials were correct on this one. With the correct credentials for the current PC, I was able to bind-mount the volume with the expected results as follows:
version: '3.6'
event_processor:
  environment:
    - COMPOSE_CONVERT_WINDOWS_PATHS=1
  build: ./Docker/event_processor
  ports:
    - "15672:15672"
  entrypoint: python -u /src/event_processor/event_processor.py
  networks:
    - app_network
  volumes:
    - type: bind
      source: c:/path/to/interesting/directory
      target: /interesting_directory
Now it works as expected. I'm not sure if it was the factory reset or the updated credentials that fixed it. I'll find out tomorrow when I use another PC and update.
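As a side note (my own addition, not part of the answer above), once Docker has the filesystem credentials, the same bind mount can usually also be written in the short syntax, which works in older Compose file versions too. A sketch, reusing the paths from the answer:

```yaml
services:
  event_processor:
    volumes:
      # Short-syntax equivalent of the long-form bind mount:
      # <host path>:<container path>
      - c:/path/to/interesting/directory:/interesting_directory
```

The long syntax is mainly useful when you need the extra options (read-only, nocopy, and so on); for a plain bind mount the two forms are interchangeable.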

hostname in docker-compose.yml fails to be recognized on Mac (but works on Linux)

I am using the docker-compose 'recipe' below to bring up a container that runs a component of the Storm stream processing framework. I am finding that on Macs, when I enter the container (once it is up and running, via docker exec -t -i <container-id> bash) and do ping storm-supervisor, I get the error 'unknown host'. However, when I run the same docker-compose script on Linux, the host is recognized and ping succeeds.
The failure to resolve the host leads to problems with the Storm component, but what that component is doing can be ignored for this question. I'm pretty sure that if I figured out how to get the Mac's docker-compose behavior to match Linux's, I would have no problem.
I think I am experiencing the issue mentioned in this post:
https://forums.docker.com/t/docker-compose-not-setting-hostname-when-network-mode-host/16728
version: '2'
services:
  supervisor:
    image: sunside/storm-supervisor
    container_name: storm-supervisor
    hostname: storm-supervisor
    network_mode: host
    ports:
      - "8000:8000"
    environment:
      - "LOCAL_HOSTNAME=localhost"
      - "NIMBUS_ADDRESS=localhost"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DRPCI_PORT=49773"
      - "ZOOKEEPER_ADDRESS=localhost"
      - "ZOOKEEPER_PORT=2181"
Thanks in advance for any leads or tips!
"network_mode: host" does not work well on Docker for Mac. I experienced the same issue when I had a few of my containers in a bridge network and the others in the host network.
However, you can move all your containers to a custom bridge network; that solved it for me.
You can edit your docker-compose.yml file to use a custom bridge network:
version: '2'
services:
  supervisor:
    image: sunside/storm-supervisor
    container_name: storm-supervisor
    hostname: storm-supervisor
    ports:
      - "8000:8000"
    environment:
      - "LOCAL_HOSTNAME=localhost"
      - "NIMBUS_ADDRESS=localhost"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DRPCI_PORT=49773"
      - "ZOOKEEPER_ADDRESS=localhost"
      - "ZOOKEEPER_PORT=2181"
    networks:
      - storm
networks:
  storm:
    external: true
Also, execute the command below to create the custom network:
docker network create storm
You can verify it with:
docker network ls
Hope it helped.
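An alternative worth noting (my addition, not part of the answer above): if the network does not need to be shared with containers outside this compose project, you can drop external: true and let Compose create and own the network itself, which removes the manual docker network create step:

```yaml
networks:
  storm:
    driver: bridge   # Compose creates this network on `docker-compose up`
```

With external: true, Compose refuses to start if the network does not already exist; without it, the network is created and removed along with the project.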

Mount volume not working in docker

I am new to Docker. I am using Windows 10, installed Docker on my machine, and am working with Docker via PowerShell.
My problem is that I can't copy my files via the docker-compose.yml file.
My file looks like this:
version: '2'
services:
  maven:
    image: maven:3.3.3-jdk-8
    volumes:
      - ~/.m2:/root/.m2
      - /d/projects/test/:/code
    working_dir: /code
    links:
      - mongodb
    entrypoint: /code/wait-for-it.sh mongodb:27017 -- mvn clean install
    environment:
      - mongodb_hosts=mongodb
  mongodb:
    image: mongo:3.2.4
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092"
In this test project I'm using Maven and I have a lot of files in it. But it gives an error like:
ERROR: for maven Cannot start service maven: oci runtime error: exec:
"/code/wait-for-it.sh": stat /code/wait-for-it.sh: no such file or
directory ERROR: Encountered errors while bringing up the project.
I have also shared my local drive in the Docker settings, but the mount problem is still there.
Please help me; thanks in advance.
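A sketch of the kind of change that often helps in this situation (this is an assumption based on Docker for Windows conventions, not something stated in the question): the /d/projects/test/ path style comes from Docker Toolbox; with Docker for Windows, the drive-letter form plus a shared D: drive in the settings is usually what works:

```yaml
version: '2'
services:
  maven:
    image: maven:3.3.3-jdk-8
    volumes:
      # Drive-letter host path, accepted by Docker for Windows
      # (D: must be shared in Docker's settings).
      - D:/projects/test:/code
    working_dir: /code
```

If the mount is empty inside the container, the entrypoint script /code/wait-for-it.sh will not be found, which matches the stat error above.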
