I am trying to define a reusable block in a docker-compose.yml file in a way that the reusable block definition itself is NOT included in the final (evaluated) YAML.
I know how to define a reusable block with this syntax:
services:
  default: &default
    image: some/image
  dashboard:
    <<: *default
    command: run dashboard
    ports: ["3000:3000"]
But, the above also creates an entry named default under services, which I would like to avoid. In other words, I need the final YAML result to only include dashboard under the services property.
Is this possible with YAML? I was unable to find any reference that discusses this structure clearly enough.
Intuitively, I have tried some variations like the one below, but that did not work either.
services:
  &default:
    image: some/image
  dashboard:
    <<: *default
    command: run dashboard
    ports: ["3000:3000"]
This is not possible in YAML 1.2 (or any earlier version). The reasoning behind this is that YAML was designed to be a serialization language, not a configuration language.
The anchor/alias construct is nice for serializing cyclic data structures. It was never intended to be used for declaring variables that will be used in multiple places. So currently, the only way to create a reusable structure which can be used in multiple places is to define the structure at the first place where it is used. For example:
services:
  dashboard:
    <<: &default
      image: some/image
    command: run dashboard
    ports: ["3000:3000"]
  some_other_service:
    <<: *default
    other_props: ...
Also, be aware that the merge key << is not part of the YAML spec and is only defined as an additional feature for YAML 1.1. It is not defined for YAML 1.2 and will be explicitly deprecated in the upcoming YAML 1.3.
We (as in: the people currently working on YAML 1.3) are aware of this missing feature and plan to provide a better solution with YAML 1.3.
Docker Compose file format 3.4 adds support for extension fields: top-level keys starting with x- that are ignored by Docker Compose and the Docker engine.
For example:
version: '3.4'

x-default: &default
  image: some/image

services:
  dashboard:
    <<: *default
    command: run dashboard
    ports: ["3000:3000"]
Source: “Don’t Repeat Yourself with Anchors, Aliases and Extensions in Docker Compose Files” by King Chung Huang https://link.medium.com/N5DFdiC3F0
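As a side note, a quick way to inspect the evaluated YAML (assuming docker-compose is available on your machine) is docker-compose config, which prints the resolved configuration with anchors and merge keys already expanded:
docker-compose -f docker-compose.yml config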
Related
Actually, I have already checked some questions like this one. What I do not understand is: if I change my docker-compose.yml and add a profile to it, should I leave the Dockerfile without a profile?
For example, my docker-compose file:
backend:
  container_name: backend
  image: backend
  build: ./backend
  restart: always
  deploy:
    restart_policy:
      condition: on-failure
      max_attempts: 15
  ports:
    - '8080:8080'
  environment:
    - MYSQL_ROOT_PASSWORD=DbPass3008
    - MYSQL_PASSWORD=DbPass3008
    - MYSQL_USER=DbUser
    - MYSQL_DATABASE=db
  depends_on:
    - mysql
And I will add:
environment:
  - "SPRING_PROFILES_ACTIVE=test"
As far as I understand, I need to create three different compose files and run them with the -f parameter for different environments, like:
docker-compose -f docker-compose-local/test/prod up -d
But my question is that my Dockerfile already specifies the profile:
FROM openjdk:17-oracle
ADD ./target/backend-0.0.1-SNAPSHOT.jar backend.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar", "-Dspring.profiles.active=TEST", "backend.jar"]
So how should I change this Dockerfile? Even if I create 3-4 different compose files, they all use the same Dockerfile. Should I create different Dockerfiles too (which seems ridiculous)? What is the correct way?
There's no need to add a java -Dspring.profiles.active=... command-line option; Spring will recognize the runtime SPRING_PROFILES_ACTIVE environment variable on its own. That means all of your environments can use the same image (which is generally a good practice).
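With that in place, the profile flag can simply be dropped from the image; a minimal sketch based on the Dockerfile in the question:
FROM openjdk:17-oracle
ADD ./target/backend-0.0.1-SNAPSHOT.jar backend.jar
EXPOSE 8080
# no -Dspring.profiles.active here; the active profile comes from the
# SPRING_PROFILES_ACTIVE environment variable at runtime
ENTRYPOINT ["java", "-jar", "backend.jar"]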
Compose can also expand host environment variables in some contexts, so you may be able to use a single Compose file with environment-variable references:
version: '3.8'
services:
  backend:
    environment:
      - SPRING_PROFILES_ACTIVE=${ENVIRONMENT:-dev}
ENVIRONMENT=test docker-compose up -d
I tend to discourage putting environment-specific settings in a src/main/resources/*.yml file, since it means you need to recompile the application jar file whenever you deploy to a new environment. Another possibility is to set most Spring properties as environment variables, and then use multiple Compose files to include environment-specific settings. The one downside here is that you need multiple docker-compose -f options and you need to repeat them on every docker-compose invocation.
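As a rough sketch of that multi-file approach (the file name docker-compose.test.yml is just an example), a small per-environment override could look like:
# docker-compose.test.yml (hypothetical override file)
version: '3.8'
services:
  backend:
    environment:
      - SPRING_PROFILES_ACTIVE=test
Then both files are passed on each invocation, and Compose merges them:
docker-compose -f docker-compose.yml -f docker-compose.test.yml up -d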
I recently came across this and was wondering what &django means
version: '2'
services:
  django: &django
I can't see anything in the docs related to this.
These are a YAML feature called anchors, and are not particular to Docker Compose. I would suggest you have a look at the URL below for more details:
https://learnxinyminutes.com/docs/yaml/
Follow the section EXTRA YAML FEATURES
YAML also has a handy feature called 'anchors', which let you easily duplicate
content across your document. Both of these keys will have the same value:
anchored_content: &anchor_name This string will appear as the value of two keys.
other_anchor: *anchor_name
Anchors can be used to duplicate/inherit properties
base: &base
  name: Everyone has same name

foo: &foo
  <<: *base
  age: 10

bar: &bar
  <<: *base
  age: 20
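Once the aliases and the << merge keys are resolved, foo and bar are equivalent to writing the shared properties out in full:
foo:
  name: Everyone has same name
  age: 10
bar:
  name: Everyone has same name
  age: 20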
To complement Tarun's answer, & identifies an anchor and * is an alias referring back to that anchor. It is described as follows in the YAML specification:
In the representation graph, a node may appear in more than one
collection. When serializing such data, the first occurrence of the
node is identified by an anchor. Each subsequent occurrence is
serialized as an alias node which refers back to this anchor.
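As a small, Compose-agnostic illustration of that wording: if the same mapping appears in several collections, its first occurrence carries the anchor and each later occurrence becomes an alias:
defaults: &shared
  retries: 3
first_group:
  - *shared
second_group:
  - *shared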
Sidenote:
For those who want to start using anchors in their docker-compose files, there is a more powerful way to make reusable anchors by using docker-compose YAML extension fields.
version: "3.4"

# x-docker-data is an extension and when docker-compose
# parses the YAML, it will not do anything with it
x-docker-data: &docker-file-info
  build:
    context: .
    dockerfile: Dockerfile

services:
  some_service_a:
    <<: *docker-file-info
    restart: on-failure
    ports:
      - 8080:9090
  some_service_b:
    <<: *docker-file-info
    restart: on-failure
    ports:
      - 8080:9595
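One caveat worth keeping in mind (shown here as a hedged sketch with a hypothetical some_service_c): the << merge key only merges the top level of the anchored mapping. If a service defines its own build:, the whole build mapping from the anchor is replaced rather than deep-merged:
services:
  some_service_c:
    <<: *docker-file-info
    # this build mapping replaces the anchor's build entirely,
    # including context; keys are not merged one level down
    build:
      dockerfile: Dockerfile.other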
While I'm configuring my YAML, it shows the error below:
version:'3.9'
services:
Web:
image:nginx
database:
image:redis
ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
in ".\docker-compose.yml", line 2, column 9
YAML requires a space after the colon of a mapping key:
version: '3.9'
services:
Web:
image: nginx
database:
image: redis
If that space is missing, YAML reads version:'3.9' as a single scalar that continues on the next line. On the next line, there is a space after the :, but you are now in a multiline scalar, and multiline scalars do not allow implicit mapping keys. This is what the error message is trying to tell you.
You also need to fix the indentation to have a proper docker compose file:
version: '3.9'
services:
  Web:
    image: nginx
  database:
    image: redis
I wrote a script that creates a local development environment using a docker-compose.yml file.
When running the script, I want to use a YAML linter to validate that the file is valid YAML before bringing the environment up, and to do that I'm using the yamllint command.
In this docker-compose.yml file, there is more than one service that "depends_on" another service, but when I run yamllint, it returns the following error:
47:5 error duplication of key "depends_on" in mapping (key-duplicates)
This is not a real error, but since the lint is part of the script run, I cannot rely on its exit code: yamllint counts this as an error while in reality it is not.
An example portion of the docker-compose.yml file:
microservice-one:
  image: ms-one:feature-local_development_env
  environment:
    NODE_ENV: 'development'
    NPM_TOKEN: 'SECRET'
  ports:
    - "3013:3000"
  depends_on:
    - redis-cluster

microservice-two:
  image: ms-two:feature-local_development_env
  environment:
    NODE_ENV: 'development'
    NPM_TOKEN: 'SECRET'
  ports:
    - "3014:3000"
  depends_on:
    - redis-cluster

networks:
  default:
Is there any other command-line YAML linter that will not count more than one "depends_on" as an error?
I found my answer and thought I'd share it with whoever gets here.
So the solution is to override yamllint's default configuration by creating a specific yamllint configuration file.
In my case, the file looks like so:
extends: default

rules:
  key-duplicates: disable
Then, I'm running the command like so:
yamllint -c config_file docker-compose.yml
More options can be found in yamllint's official documentation.
If you need only syntax errors and nothing else, the command below can be used:
yamllint -d "{rules:{}}"
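In a script, the lint step can then gate the rest of the run on its exit code; a minimal sketch, assuming the configuration above is saved as .yamllint.yml:
yamllint -c .yamllint.yml docker-compose.yml && docker-compose up -d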
I'm using docker-compose to manage a multi-container application. One of those containers needs access to the contents of a directory on the host.
This seems simple according to the various sources of documentation on Docker and docker-compose, but I'm struggling to get it working.
event_processor:
  environment:
    - COMPOSE_CONVERT_WINDOWS_PATHS=1
  build: ./Docker/event_processor
  ports:
    - "15672:15672"
  entrypoint: python -u /src/event_processor/event_processor.py
  networks:
    - app_network
  volumes:
    - C/path/to/interesting/directory:/interesting_directory
Running this I get the error message:
ERROR: Named volume
"C/path/to/interesting/directory:/interesting_directory:rw" is used in
service "event_processor" but no declaration was found in the
volumes section.
I understand from the docs that a top-level declaration is only necessary if data is to be shared between containers, which isn't the case here.
The docs for docker-compose I linked above have an example which seems to do exactly what I need:
version: "3.2"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - type: volume
        source: mydata
        target: /data
        volume:
          nocopy: true
      - type: bind
        source: ./static
        target: /opt/app/static
networks:
  webnet:
volumes:
  mydata:
However when I try, I get errors about the syntax:
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it
should be a string
So I tried to play along:
volumes:
  - type: "bind"
    source: "C/path/to/interesting/directory"
    target: "/interesting_directory"
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it should be a string
So again the same error.
I tried the following too:
volumes:
  - type=bind, source=C/path/to/interesting/directory,destination=/interesting_directory
No error, but attaching to the running container, I see the following two folders:
type=bind, source=C
So it seems that I am able to create a number of volumes with 1 string (though the forward slashes are cutting the string in this case) but I am not mapping it to the host directory.
I've read the docs but I think I'm missing something.
Can someone post an example of mounting a Windows directory from the host into a Linux container so that the existing contents of the Windows directory are available from the container?
OK so there were multiple issues here:
1.
I had
version: '3'
at the top of my docker-compose.yml. The long syntax described here wasn't implemented until 3.4 so I stopped receiving the bizarre syntax error when I updated this to:
version: '3.6'
2.
I use my Docker account on two Windows PCs. Following a hint from another Stack Overflow post, I reset Docker to the factory settings. I had to give Docker the computer username and password, with the notice that this was necessary to access the contents of the local filesystem. At this point I remembered doing this on another PC, so I'm not sure whether the credentials were correct on this one. With the correct credentials for the current PC, I was able to bind-mount the volume with the expected results as follows:
version: '3.6'

services:
  event_processor:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    build: ./Docker/event_processor
    ports:
      - "15672:15672"
    entrypoint: python -u /src/event_processor/event_processor.py
    networks:
      - app_network
    volumes:
      - type: bind
        source: c:/path/to/interesting/directory
        target: /interesting_directory
Now it works as expected. I'm not sure if it was the factory reset or the updated credentials that fixed it. I'll find out tomorrow when I use another PC and update.
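As a quick sanity check from inside the running container (service name taken from the compose file above), listing the target directory should show the host files:
docker-compose exec event_processor ls /interesting_directory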