I'm dockerizing a bunch of Windows apps in Windows Containers.
All my apps require the same mappings; here's a short snippet of my config:
version: '3.9'
services:
  shell0:
    build:
      target: myimage
      context: .
    image: 'salimfadhley/myimage:latest'
    entrypoint: c:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe
    working_dir: "c:\\"
    volumes:
      - type: "bind"
        source: "x:"
        target: "x:"
volumes: # THIS BIT DOESN'T WORK!
  xdrive:
    source: "x:"
"xdrive" is a network drive share used by all of my applications. Every single process needs access do "xdrve", that's why I'm bind-mounting this into each service.
I'm doing this by repeating the configuration for every single service in this Docker Compose file. There's going to be quite a few of them. It's going to make my docker-compose file very repetitive.
Is there a way to define the "xdrive" just once, for example in the global "volumes" section? I'd like to be able to do something like this per-service:
...service
  volumes:
    - xdrive: "x:"
Can it be done? What is the syntax to define a bind-mount globally?
You can solve this with YAML anchors and aliases:
version: "3.5"
services:
one:
image: busybox
command: ls /foo
volumes:
- &volume-foo
type: bind
source: .
target: /foo
two:
image: busybox
command: ls /foo
volumes:
- *volume-foo
&volume-foo is an anchor, *volume-foo is an alias. An alias repeats what's been declared after the corresponding anchor, in this case a single object of the array. After parsing it will look like this:
version: "3.5"
services:
one:
image: busybox
command: "ls /foo"
volumes:
-
source: "."
target: /foo
type: bind
two:
image: busybox
command: "ls /foo"
volumes:
-
source: "."
target: /foo
type: bind
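If you want the shared definition to live in exactly one place at the top of the file (closer to the "global" declaration the question asks for), Compose file format 3.4+ also allows top-level extension fields (keys prefixed with x-). Compose ignores them, but they can carry the anchor. A minimal sketch of that variant; the x-shared-volume key name is arbitrary:

version: "3.5"

# Extension field: ignored by Compose, but a single place to declare the shared mount.
x-shared-volume: &volume-foo
  type: bind
  source: .
  target: /foo

services:
  one:
    image: busybox
    command: ls /foo
    volumes:
      - *volume-foo
  two:
    image: busybox
    command: ls /foo
    volumes:
      - *volume-foo

Each service still lists the alias, but the definition itself is written only once.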
Related
I'm new to Vite; I ran into it when upgrading a project from Laravel 8 to 9.
I'm building a Docker setup for a Laravel 9 project that uses Vite. The problem: I can't reach the Vite-served resources from outside the Docker containers. They only work from inside the containers.
Is there any advice? Thanks.
This is my docker-compose file:
version: "3.9"
services:
nginx:
image: nginx:1.23-alpine
ports:
- 80:80
mem_limit: "512M"
volumes:
- type: bind
source: ./api
target: /usr/share/nginx/html/api
- type: bind
source: ./docker/nginx/dev/default.conf
target: /etc/nginx/conf.d/default.conf
php:
platform: linux/amd64
build:
context: .
dockerfile: ./docker/php/dev/Dockerfile
mem_limit: "512M"
volumes:
- type: bind
source: ./api
target: /usr/share/nginx/html/api
oracle:
platform: linux/amd64
image: container-registry.oracle.com/database/express:21.3.0-xe
ports:
- 1521:1521
# - 5500:5500
volumes:
- type: volume
source: oracle
target: /opt/oracle/oradata
volumes:
oracle:
I figured out the issue: by default, Vite's dev server does not expose itself to the network.
My solution is:
Edit package.json:
"scripts": {
"dev": "vite --host",
"build": "vite build"
}
Expose port 5173 in the docker-compose.yml file:
php:
  platform: linux/amd64
  build:
    context: .
    dockerfile: ./docker/php/dev/Dockerfile
  mem_limit: "512M"
  ports:
    - 5173:5173
  volumes:
    - type: bind
      source: ./api
      target: /usr/share/nginx/html/api
There are downsides to named volumes, as @DavidMaze has pointed out: since you can't access the contents of a named volume from outside of Docker, they're harder to back up and manage, and a poor match for tasks like injecting config files and reviewing logs.
Consider changing all of your volume types to bind, as sketched below.
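For example, the oracle service's named volume could be replaced with a bind mount roughly like this (the ./oradata host path is only an illustration, and the Oracle image needs write access to whichever directory you pick):

services:
  oracle:
    platform: linux/amd64
    image: container-registry.oracle.com/database/express:21.3.0-xe
    ports:
      - 1521:1521
    volumes:
      - type: bind
        source: ./oradata              # hypothetical host directory for the data files
        target: /opt/oracle/oradata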
I am building a setup for local development with dockerized apps. My company has 3 different domains, and each domain has one docker-compose file with 5 services (15 projects in total).
If a user of my project wants to deploy only 1 service of their own domain and/or 2 services from the other domains' projects, I have to comment out the services in the other docker-compose files that should not be deployed.
So my question is: how can I comment out a block of a docker-compose file with a bash script? I want to select the lines by their context. For example, in the file below I want to comment out the ap2-php-fpm section. I can't rely on a one-off workaround because more projects are coming, so I have to drive this from a bash script.
Demonstration
version: '3.3'
services:
  app-php-fpm:
    container_name: app
    build:
      context: ${src}/
    volumes:
      - $path:path
    networks:
      general-nt:
        aliases:
          - app
    expose:
      - "9000"
  ap2-php-fpm:
    container_name: app
    build:
      context: ${src}/
    volumes:
      - $path:path
    networks:
      general-nt:
        aliases:
          - app
    expose:
      - "9000"
networks:
  general-nt:
    external: true
I want to turn that file into the following with a bash script:
version: '3.3'
services:
  app-php-fpm:
    container_name: app
    build:
      context: ${src}/
    volumes:
      - $path:path
    networks:
      general-nt:
        aliases:
          - app
    expose:
      - "9000"
#  ap2-php-fpm:
#    container_name: app
#    build:
#      context: ${src}/
#    volumes:
#      - $path:path
#    networks:
#      general-nt:
#        aliases:
#          - app
#    expose:
#      - "9000"
networks:
  general-nt:
    external: true
For many practical purposes, it may be enough to run docker-compose up with specific service names. If you run
docker-compose up -d app-php-fpm
it will start the service(s) named on the command line, plus anything they depends_on:, but nothing else. That avoids the need to comment out parts of the YAML file, and you can otherwise interact with the containers normally.
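If one of your services does need a companion to come up with it, declaring that relationship keeps the single-service invocation working; a minimal sketch, where the database service is purely hypothetical:

services:
  app-php-fpm:
    container_name: app
    build:
      context: ${src}/
    depends_on:
      - database        # also started by `docker-compose up -d app-php-fpm`
  database:             # hypothetical companion service
    image: postgres:15

Services that are neither named on the command line nor reachable through depends_on: are simply left alone.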
There is a service that uses MongoDB, but when I restart the computer or the Docker machine, no data is persisted in the database.
docker-compose.yml:
version: "3"
Services:
...
mongodb:
restart: always
image: mongo:latest
environment:
- MONGO_DATA_DIR=/dockerdata/db
volumes:
- ./dockerdata/db:/data/db
ports:
- 27017:27017
command: mongod
I tried storing the database on the host, but that didn't help either:
docker-compose.yml:
version: "3"
Services:
...
mongodb:
restart: always
image: mongo:latest
environment:
- MONGO_DATA_DIR=/c/users/frol/mongodata/db
volumes:
- /c/users/frol/mongodata/db:/data/db
ports:
- 27017:27017
command: mongod
If I use a named volume instead, Docker reports an error:
ERROR: for test_mongodb_1  Cannot create container for service mongodb: failed to mount local volume: mount /c/users/frol/mongodata/db:/mnt/sda1/var/lib/docker/volumes/test_mongodata/_data, flags: 0x1000: no such file or directory
docker-compose.yml:
version: "3"
services:
...
mongodb:
restart: always
image: mongo:latest
environment:
- MONGO_DATA_DIR=/c/users/frol/mongodata/db
volumes:
- mongodata:/data/db
ports:
- 27017:27017
command: mongod
volumes:
mongodata:
driver: local
driver_opts:
type: none
device: /c/users/frol/mongodata/db
o: bind
Host: Windows 8.1 with Docker Toolbox 19.03.1 installed.
Help me, please, I'm a novice. How do I make sure that the database data isn't lost?
Your first attempt would work if you just fix a simple typo in your compose file:
version: "3"
services:
...
mongodb:
restart: always
image: mongo:latest
environment:
- MONGO_DATA_DIR=/data/db # changed
volumes:
- ./dockerdata/db:/data/db
ports:
- 27017:27017
command: mongod
But, since /data/db is the default value of MONGO_DATA_DIR, setting it is pretty redundant.
But I'd prefer to use a named volume; that way the data still persists, but I don't have to see the "ugly" database storage folder:
version: "3"
services:
...
mongodb:
restart: always
image: mongo:latest
volumes:
- mongodata:/data/db
ports:
- 27017:27017
command: mongod
volumes:
mongodata:
Don't set $MONGO_DATA_DIR; leave it at its default of /data/db.
services:
  mongodb:
    restart: always
    image: mongo:latest
    # No need to specifically set $MONGO_DATA_DIR
    volumes:
      - ./dockerdata/db:/data/db
    ports:
      - 27017:27017
    # No need to override command:
Docker containers have a separate filesystem space from the host filesystem. A typical setup for most databases is to have the database storage in a fixed location inside the container; for MongoDB that's the /data/db directory. You can mount a named volume or filesystem path there, but the code inside the container doesn't know the difference.
If you do set environment variables like $MONGO_DATA_DIR, they need to reflect paths inside the container; they can't directly specify host-system paths. (@ruohola's answer works because it changes the container-filesystem path of the bind mount to match the container-filesystem path in the environment variable; the host ./dockerdata and container /dockerdata paths are totally unrelated.)
As you are defining the data directory explicitly, you need to mount the volume at that same directory to persist the data:
version: "3"
services:
...
mongodb:
restart: always
image: mongo:latest
environment:
- MONGO_DATA_DIR=/data/db #data directory
volumes:
- ./dockerdata/db:/data/db #same data directory which you defined above
ports:
- 27017:27017
command: mongod
I'm trying to start a multi-container application for codeceptjs using docker-compose. On Linux the docker-compose YAML file works fine, but on Windows it fails, complaining that the "volume name is too short". Why does docker-compose complain on Windows?
Here's the yml file content:
version: '3.7'
services:
  hub:
    image: selenium/hub:latest
    [...]
  chrome:
    image: selenium/node-chrome:latest
    volumes:
      - /dev/shm:/dev/shm
    environment:
      [...]
    networks:
      test_network:
        ipv4_address: 10.2.0.3
  test-acceptance:
    image: test/codeceptjs
    [...]
    volumes:
      - $WORKSPACE:/tests
      - node_modules:/node_modules
    networks:
      test_network:
        ipv4_address: 10.2.0.5
volumes:
  node_modules:
networks:
  test_network:
    driver: bridge
    ipam:
      driver: default
      config:
        -
          subnet: 10.2.0.0/24
Maybe it's just a typo but the offending values are probably here:
volumes:
  node_modules:
You need to put something after the colon.
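A minimal sketch of what that could look like, assuming you just want a plain local volume (driver: local is one possible value; an explicit empty mapping {} would also put "something after the colon"):

volumes:
  node_modules:
    driver: local    # or: node_modules: {}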
I'm using docker-compose to manage a multi-container application. One of those containers needs access to the contents of a directory on the host.
This seems simple according to the various sources of documentation on Docker and docker-compose, but I'm struggling to get it working.
event_processor:
  environment:
    - COMPOSE_CONVERT_WINDOWS_PATHS=1
  build: ./Docker/event_processor
  ports:
    - "15672:15672"
  entrypoint: python -u /src/event_processor/event_processor.py
  networks:
    - app_network
  volumes:
    - C/path/to/interesting/directory:/interesting_directory
Running this I get the error message:
ERROR: Named volume "C/path/to/interesting/directory:/interesting_directory:rw" is used in service "event_processor" but no declaration was found in the volumes section.
I understand from the docs that a top-level declaration is only necessary if data is to be shared between containers, which isn't the case here.
The docs for docker-compose I linked above have an example which seems to do exactly what I need:
version: "3.2"
services:
web:
image: nginx:alpine
ports:
- "80:80"
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
networks:
webnet:
volumes:
mydata:
However, when I try it, I get errors about the syntax:
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it
should be a string
So I tried to play along:
volumes:
  - type: "bind"
    source: "C/path/to/interesting/directory"
    target: "/interesting_directory"
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it should be a string
So again the same error.
I tried the following too:
volumes:
  - type=bind, source=C/path/to/interesting/directory,destination=/interesting_directory
No error, but when I attach to the running container, I see the following two folders:
type=bind, source=C
So it seems I'm able to create a number of volumes from one string (though the forward slashes are cutting the string short in this case), but I'm not mapping it to the host directory.
I've read the docs but I think I'm missing something.
Can someone post an example of mounting a Windows directory from the host into a Linux container so that the existing contents of the Windows directory are available from the container?
OK so there were multiple issues here:
1.
I had
version: '3'
at the top of my docker-compose.yml. The long syntax described here wasn't implemented until 3.4 so I stopped receiving the bizarre syntax error when I updated this to:
version: '3.6'
2.
I use my Docker account on 2 Windows PCs. Following a hint from another Stack Overflow post, I reset Docker to the factory settings. I had to give Docker the computer's username and password, with the notice that this was necessary to access the contents of the local filesystem. At this point I remembered doing this on another PC, so I'm not sure whether the credentials were correct on this one. With the correct credentials for the current PC, I was able to bind-mount the volume with the expected results as follows:
version: '3.6'
event_processor:
  environment:
    - COMPOSE_CONVERT_WINDOWS_PATHS=1
  build: ./Docker/event_processor
  ports:
    - "15672:15672"
  entrypoint: python -u /src/event_processor/event_processor.py
  networks:
    - app_network
  volumes:
    - type: bind
      source: c:/path/to/interesting/directory
      target: /interesting_directory
Now it works as expected. I'm not sure if it was the factory reset or the updated credentials that fixed it. I'll find out tomorrow when I use another PC and update.
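For reference, the equivalent mount in the short syntax (which is what COMPOSE_CONVERT_WINDOWS_PATHS=1 is meant to help with) would look roughly like this; I haven't verified it on every Compose version, since the drive-letter colon has historically confused the short form:

event_processor:
  environment:
    - COMPOSE_CONVERT_WINDOWS_PATHS=1
  volumes:
    - c:/path/to/interesting/directory:/interesting_directory   # host path : container path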