Windows 10 bind mounts in docker-compose not working

I'm using docker-compose to manage a multi-container application. One of those containers needs access to the contents of a directory on the host.
This seems simple according to the various sources of documentation on docker and docker-compose, but I'm struggling to get it working.
event_processor:
  environment:
    - COMPOSE_CONVERT_WINDOWS_PATHS=1
  build: ./Docker/event_processor
  ports:
    - "15672:15672"
  entrypoint: python -u /src/event_processor/event_processor.py
  networks:
    - app_network
  volumes:
    - C/path/to/interesting/directory:/interesting_directory
Running this I get the error message:
ERROR: Named volume
"C/path/to/interesting/directory:/interesting_directory:rw" is used in
service "event_processor" but no declaration was found in the
volumes section.
I understand from the docs that a top-level declaration is only necessary if data is to be shared between containers, which isn't the case here.
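For contrast, my understanding of the distinction is: named volumes need a top-level entry, while host-path bind mounts should not. A minimal sketch (paths are illustrative):
services:
  some_service:
    volumes:
      - mydata:/data                      # named volume: declared in the top-level volumes key
      - c:/some/host/dir:/container/dir   # host-path bind mount: no top-level entry needed
volumes:
  mydata: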
The docs for docker-compose I linked above have an example which seems to do exactly what I need:
version: "3.2"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - type: volume
        source: mydata
        target: /data
        volume:
          nocopy: true
      - type: bind
        source: ./static
        target: /opt/app/static
networks:
  webnet:
volumes:
  mydata:
However when I try, I get errors about the syntax:
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it
should be a string
So I tried to play along:
volumes:
  - type: "bind"
    source: "C/path/to/interesting/directory"
    target: "/interesting_directory"
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it should be a string
So again the same error.
I tried the following too:
volumes:
  - type=bind, source=C/path/to/interesting/directory,destination=/interesting_directory
No error, but attaching to the running container, I see the following two folders:
type=bind, source=C
So it seems I am able to create several volumes from one string (though the forward slashes cut the string short in this case), but I am not mapping it to the host directory.
I've read the docs but I think I'm missing something.
Can someone post an example of mounting a Windows directory from the host into a Linux container, so that the existing contents of the Windows directory are available from the container?

OK so there were multiple issues here:
1. I had
version: '3'
at the top of my docker-compose.yml. The long syntax described here wasn't implemented until 3.4, so I stopped receiving the bizarre syntax error when I updated this to:
version: '3.6'
2. I use my Docker account on two Windows PCs. Following a hint from another Stack Overflow post, I reset Docker to the factory settings. I had to give Docker the computer's username and password, with a notice that this was necessary to access the contents of the local filesystem. At this point I remembered doing this on another PC, so I'm not sure whether the credentials were correct on this one. With the correct credentials for the current PC, I was able to bind-mount the volume with the expected results as follows:
version: '3.6'
services:
  event_processor:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    build: ./Docker/event_processor
    ports:
      - "15672:15672"
    entrypoint: python -u /src/event_processor/event_processor.py
    networks:
      - app_network
    volumes:
      - type: bind
        source: c:/path/to/interesting/directory
        target: /interesting_directory
Now it works as expected. I'm not sure if it was the factory reset or the updated credentials that fixed it. I'll find out tomorrow when I use another PC and update.
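For reference, the equivalent short-syntax mount should also work once the filesystem-sharing credentials are set up, and it doesn't require compose file version 3.4+ (a sketch I haven't verified separately):
    volumes:
      - c:/path/to/interesting/directory:/interesting_directory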

Related

Docker mount volume error no such file or directory (Windows)

I am trying to set up an EMQx broker by deploying it with Docker. One of my constraints is to do this on Windows. To be able to use TLS/SSL authentication, there must be a place to put certs in the container, so I'd like to mount a volume.
I have tried several approaches and read a myriad of comments, but I cannot make it work consistently. I always bump into the "no such file or directory" message.
More interestingly, I once got it to work and saved the .yml file right after, but the next time I ran docker-compose up with that YAML ("YAML that worked once" below), I received the same usual message ("Resulting error message" below).
Path where the certs reside -> c:\Users\danha\Desktop\certs
Lines in question (please see the entire YAML below):
volumes:
  - vol-emqx-conf://C//Users//danha//Desktop//certs

volumes:
  vol-emqx-conf:
    driver_opts:
      type: none
      device: /Users/danha/Desktop/certs
      o: bind
YAML that worked once:
version: '3.4'
services:
  emqx:
    image: emqx/emqx:4.3.10-alpine-arm32v7
    container_name: "emqx"
    hostname: "emqx"
    restart: always
    environment:
      EMQX_NAME: lms_emqx
      EMQX_HOST: 127.0.0.1
      EMQX_ALLOW_ANONYMOUS: "false"
      EMQX_LOADED_PLUGINS: "emqx_auth_mnesia"
      EMQX_LOADED_MODULES: "emqx_mod_topic_metrics"
    volumes:
      - vol-emqx-conf://C//Users//danha//Desktop//certs
    labels:
      NAME: "emqx"
    ports:
      - 18083:18083
      - 1883:1883
      - 8081:8081
volumes:
  vol-emqx-conf:
    driver_opts:
      type: none
      device: //C//Users//danha//Desktop//certs
      o: bind
Resulting error message
C:\Users\danha\Desktop\dc>docker-compose up
Creating network "dc_default" with the default driver
Creating volume "dc_vol-emqx-conf" with default driver
Creating emqx ... error
ERROR: for emqx Cannot start service emqx: error while mounting volume '/var/lib/docker/volumes/dc_vol-emqx-conf/_data': failed to mount local volume: mount \\c\Users\danha\Desktop\certs:/var/lib/docker/volumes/dc_vol-emqx-conf/_data, flags: 0x1000: no such file or directory
ERROR: for emqx Cannot start service emqx: error while mounting volume '/var/lib/docker/volumes/dc_vol-emqx-conf/_data': failed to mount local volume: mount \\c\Users\danha\Desktop\certs:/var/lib/docker/volumes/dc_vol-emqx-conf/_data, flags: 0x1000: no such file or directory
ERROR: Encountered errors while bringing up the project.
I have also played around with forward and back slashes, but without success. In the end I entered a path that produced an error message most closely resembling the correct path:
YAML omitting C: from the beginning of the path:
version: '3.4'
services:
  emqx:
    image: emqx/emqx:4.3.10-alpine-arm32v7
    container_name: "emqx"
    hostname: "emqx"
    restart: always
    environment:
      EMQX_NAME: lms_emqx
      EMQX_HOST: 127.0.0.1
      EMQX_ALLOW_ANONYMOUS: "false"
      EMQX_LOADED_PLUGINS: "emqx_auth_mnesia"
      EMQX_LOADED_MODULES: "emqx_mod_topic_metrics"
    volumes:
      - vol-emqx-conf:/Users/danha/Desktop/certs
    labels:
      NAME: "emqx"
    ports:
      - 18083:18083
      - 1883:1883
      - 8081:8081
volumes:
  vol-emqx-conf:
    driver_opts:
      type: none
      device: /Users/danha/Desktop/certs
      o: bind
Resulting error message
C:\Users\danha\Desktop\dc>docker-compose up
Creating volume "dc_vol-emqx-conf" with default driver
Creating emqx ... error
ERROR: for emqx Cannot start service emqx: error while mounting volume '/var/lib/docker/volumes/dc_vol-emqx-conf/_data': failed to mount local volume: mount C:\Users\danha\Desktop\certs:/var/lib/docker/volumes/dc_vol-emqx-conf/_data, flags: 0x1000: no such file or directory
ERROR: for emqx Cannot start service emqx: error while mounting volume '/var/lib/docker/volumes/dc_vol-emqx-conf/_data': failed to mount local volume: mount C:\Users\danha\Desktop\certs:/var/lib/docker/volumes/dc_vol-emqx-conf/_data, flags: 0x1000: no such file or directory
ERROR: Encountered errors while bringing up the project.
That also got me thinking that this issue might be related to access rights and file sharing between Windows and WSL2 (CMD was run in admin mode too), but I couldn't find any answer further down the line that would have helped.
This is probably a pretty newbie question, but any help would be greatly appreciated.

Testing a container against DynamoDB-Local

I wanted to test a container locally before pushing it to AWS ECS.
I ran unit tests against a docker-compose stack including a dynamodb-local container, using a Go (aws-sdk-go-v2) endpoint resolver with http://localhost:8000 as the URL.
So I wanted to build and test the container locally, and realised I needed to attach it to the default network created by docker-compose. I struggled with this a bit, so I built a stripped-down trial. I created an endpoint resolver with a URL of http://dynamo-local:8000 (the container is named dynamo-local in docker-compose) and attached it to the default network within docker run.
Now that all works: I can perform the various table operations successfully. But one of the things that confuses me is that if I run the AWS CLI:
aws --endpoint-url=http://localhost:8000 dynamodb list-tables
then the output shows that no tables exist, when there is definitely a table. I had naively assumed that since I can access port 8000 of the same container through different endpoints, I should be able to access the same resources. Wrong.
Obviously a gap in my education. What am I missing? I need to expand the trial into a proper test of the full app, so it's important to me that I understand what is going on here.
Is there a way I can use the aws cli to access the table?
docker-compose file:
version: '3.5'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack_test
    ports:
      - '4566:4566'
    environment:
      - SERVICES=s3,sns,sqs, lambda
      - DEBUG=1
      - DATA_DIR=
    volumes:
      - './.AWSServices:/tmp/AWSServices'
      - '/var/run/docker.sock:/var/run/docker.sock'
  nginx:
    build:
      context: .
      dockerfile: Dockerfile
    image: chanonry/urlfiles-nginx:latest
    container_name: nginx
    ports:
      - '8080:80'
  dynamodb:
    image: amazon/dynamodb-local:1.13.6
    container_name: dynamo-local
    ports:
      - '8000:8000'
networks:
  default:
    name: test-net

Running Sonarqube with docker-compose using bind mount volumes

I’m trying to run Sonarqube in a Docker container on a CentOS 7 server using docker-compose. Everything works as expected using named volumes, as configured in this docker-compose.yml file:
version: "3"
services:
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"
    networks:
      - sonarnet
    environment:
      - sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled_plugins:/opt/sonarqube/lib/bundled-plugins
  db:
    image: postgres
    networks:
      - sonarnet
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
networks:
  sonarnet:
    driver: bridge
volumes:
  sonarqube_conf:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_bundled_plugins:
  postgresql:
  postgresql_data:
However, my /var/lib/docker/volumes directory is not large enough to house the named volumes. So, I changed the docker-compose.yml file to use bind mount volumes as shown below.
version: "3"
services:
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"
    networks:
      - sonarnet
    environment:
      - sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
    volumes:
      - /data/sonarqube/conf:/opt/sonarqube/conf
      - /data/sonarqube/data:/opt/sonarqube/data
      - /data/sonarqube/extensions:/opt/sonarqube/extensions
      - /data/sonarqube/bundled_plugins:/opt/sonarqube/lib/bundled-plugins
  db:
    image: postgres
    networks:
      - sonarnet
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - /data/postgresql:/var/lib/postgresql
      - /data/postgresql_data:/var/lib/postgresql/data
networks:
  sonarnet:
    driver: bridge
However, after running docker-compose up -d, the app starts up but none of the bind-mount volumes are written to. As a result, the Sonarqube plugins are not loaded and the sonar PostgreSQL database is not initialized. I thought it might be an SELinux issue, but I temporarily disabled it with no success. I’m unsure what to look at next.
I think my answer from "How to persist configuration & analytics across container invocations in Sonarqube docker image" would help you as well.
For good measure I have also pasted it in here:
.....
Notice the SONARQUBE_HOME line in the Dockerfile for the docker-sonarqube image. We can control this environment variable.
When using docker run, simply do:
docker run -d \
  ...
  ...
  -e SONARQUBE_HOME=/sonarqube-data \
  -v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data
This will make Sonarqube create the conf, data, and other folders and store data there as needed.
Or, with Kubernetes, in your deployment YAML file do:
...
...
env:
  - name: SONARQUBE_HOME
    value: /sonarqube-data
...
...
volumeMounts:
  - name: app-volume
    mountPath: /sonarqube-data
And the name in the volumeMounts property points to a volume in the volumes section of the Kubernetes deployment YAML file.
This again will make Sonarqube use the /sonarqube-data mountPath to create the extensions, conf, and other folders, and save data there.
And voilà, your Sonarqube data is persisted.
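Since the question uses docker-compose, the same idea translated into a compose service would look roughly like this (a sketch, not tested against the asker's exact setup; the host path is illustrative):
services:
  sonarqube:
    image: sonarqube
    environment:
      # point SONARQUBE_HOME at a directory backed by a bind mount with enough space
      - SONARQUBE_HOME=/sonarqube-data
    volumes:
      - /data/sonarqube:/sonarqube-data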
I hope this will help others.
N.B. Notice that the YAML and Docker run examples are not exhaustive. They focus on the issue of persisting Sonarqube data.
Try it out BobC and let me know.
Have a great day.
I hope the setup below will help you with a single command.
Create a new docker-compose file named docker-compose.yaml:
version: "3"
services:
  sonarqube:
    image: sonarqube:8.2-community
    depends_on:
      - db
    ports:
      - "9000:9000"
    networks:
      - sonarqubenet
    environment:
      SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonarqube
      SONAR_JDBC_USERNAME: sonar
      SONAR_JDBC_PASSWORD: sonar
    volumes:
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_logs:/opt/sonarqube/logs
      - sonarqube_temp:/opt/sonarqube/temp
    restart: on-failure
    container_name: sonarqube
  db:
    image: postgres
    networks:
      - sonarqubenet
    environment:
      POSTGRES_USER: sonar
      POSTGRES_PASSWORD: sonar
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
    restart: on-failure
    container_name: postgresql
networks:
  sonarqubenet:
    driver: bridge
volumes:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_logs:
  sonarqube_temp:
  postgresql:
  postgresql_data:
Then, execute the commands:
$ docker-compose up -d
$ docker container ps
Sounds like the container is running and, as you mentioned, Sonarqube starts up. When it starts, is it showing that it's using the in-memory H2 database? After running docker-compose up -d, use docker logs -f <container_name> to see what's happening on Sonarqube startup.
To simplify viewing your logs with a known name, I suggest you also add a container name to your Sonarqube service. For example, container_name: sonarqube.
Also, while I know the plan is to deprecate the use of environment variables for the username, password and JDBC connection, I've had better luck in docker-compose using environment variables rather than the corresponding property values. For the connection string, try SONARQUBE_JDBC_URL: jdbc:postgresql://db/sonar, without specifying the default Postgres port.
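In compose form that suggestion would look something like the following (a sketch; adjust the service and database names to your own file):
  sonarqube:
    container_name: sonarqube
    environment:
      SONARQUBE_JDBC_URL: jdbc:postgresql://db/sonar
      SONARQUBE_JDBC_USERNAME: sonar
      SONARQUBE_JDBC_PASSWORD: sonar
Then docker logs -f sonarqube will show whether it connects to PostgreSQL or falls back to the embedded H2 database.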

Bitnami Magento site always point to port 80 for any links

I am new to this area. I have a docker-compose.yml file which starts Magento & MariaDB Docker containers. Here is the script:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - 'mariadb_data:/bitnami/mariadb'
  magento:
    image: 'bitnami/magento:latest'
    environment:
      - ENVIRONMENT=Test3
    ports:
      - '89:80' # for Test3
    volumes:
      - 'magento_data:/bitnami/magento'
      - 'apache_data:/bitnami/apache'
      - 'php_data:/bitnami/php'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  magento_data:
    driver: local
  apache_data:
    driver: local
  php_data:
    driver: local
I tried to use http://127.0.0.1:89 for the site, and it did work at the beginning (i.e. I could open the site at http://127.0.0.1:89). However, when I view the page source I find that the style/JS links still point to the http://127.0.0.1 (port 80) one. Also, I couldn't access other pages like http://127.0.0.1:89/admin.
Then I googled; for example, some posts mention I need to change the base_url value in the "core_config_data" table, which I did (https://magento.stackexchange.com/questions/39752/how-do-i-fix-my-base-urls-so-i-can-access-my-magento-site). I also cleared the var/cache folder on both the Magento & MariaDB containers, but the result is still the same. (I didn't find the var/session folder which that link mentions; maybe the Bitnami setup differs a little from others.)
So what could I try now? Also, is there any way I could set base_url with the correct port in MariaDB at the very beginning, in my docker-compose.yml file?
P.S. Everything works fine if using default port 80.
Thanks a lot!
You can indicate the port where Apache should be listening in the docker-compose.yml file in this way:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - 'mariadb_data:/bitnami/mariadb'
  magento:
    image: 'bitnami/magento:latest'
    ports:
      - '89:89'
      - '443:443'
    environment:
      - APACHE_HTTP_PORT=89
    volumes:
      - 'magento_data:/bitnami/magento'
      - 'php_data:/bitnami/php'
      - 'apache_data:/bitnami/apache'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  magento_data:
    driver: local
  apache_data:
    driver: local
  php_data:
    driver: local
Please note the use of the APACHE_HTTP_PORT environment variable on the Magento container. Also, note that the port forwarding should be 89:89 in this case.
Take into account that this change should be performed when you launch the containers for the first time. That means that if you already have volumes, this method won't work because your configuration will be restored from those volumes. So, ensure that you don't have any volumes. You can check by executing
docker volume ls
and checking that there isn't any volume named
local DATE_apache_data
local DATE_magento_data
local DATE_mariadb_data
Alternatively, you can delete the volumes by executing:
docker-compose down -v

hostname in docker-compose.yml fails to be recognized on Mac (but works on Linux)

I am using the docker-compose 'recipe' below to bring up a container that runs a component of the Storm stream processing framework. I am finding that on Macs, when I enter the container (once it is up and running, via docker exec -t -i <container-id> bash) and run ping storm-supervisor, I get the error 'unknown host'. However, when I run the same docker-compose script on Linux, the host is recognized and ping succeeds.
The failure to resolve the host leads to problems with the Storm component, but what that component is doing can be ignored for this question. I'm pretty sure that if I figured out how to get the Mac docker-compose behavior to match Linux's, then I would have no problem.
I think I am experiencing the issue mentioned in this post:
https://forums.docker.com/t/docker-compose-not-setting-hostname-when-network-mode-host/16728
version: '2'
services:
  supervisor:
    image: sunside/storm-supervisor
    container_name: storm-supervisor
    hostname: storm-supervisor
    network_mode: host
    ports:
      - "8000:8000"
    environment:
      - "LOCAL_HOSTNAME=localhost"
      - "NIMBUS_ADDRESS=localhost"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DRPCI_PORT=49773"
      - "ZOOKEEPER_ADDRESS=localhost"
      - "ZOOKEEPER_PORT=2181"
Thanks in advance for any leads or tips!
"network_mode: host" will not work well on docker mac. I experienced the same issue where I had few of my containers in bridge network and the others in host network.
However, you can move all your containers to a custom bridge network. It solved for me.
You can edit your docker-compose.yml file to have a custom bridge network.
version: '2'
services:
  supervisor:
    image: sunside/storm-supervisor
    container_name: storm-supervisor
    hostname: storm-supervisor
    ports:
      - "8000:8000"
    environment:
      - "LOCAL_HOSTNAME=localhost"
      - "NIMBUS_ADDRESS=localhost"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DRPCI_PORT=49773"
      - "ZOOKEEPER_ADDRESS=localhost"
      - "ZOOKEEPER_PORT=2181"
    networks:
      - storm
networks:
  storm:
    external: true
Also, execute the below command to create the custom network.
docker network create storm
You can verify it by
docker network ls
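Alternatively, if you don't need the network to outlive the compose project, you can drop external: true and let docker-compose create the bridge network itself (a sketch; compose will then name it <project>_storm):
networks:
  storm:
    driver: bridge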
Hope it helped.
