Here is my docker-compose.yml file:
version:'2':
services:
  redis:
    image: redis
    environment:
      - HOST='localhost'
      - PORT=6379
    ports:
      -"0.0.0.0:${PORT}:6379"
I get this error on running docker-compose up:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Invalid service name 'services' - only [a-zA-Z0-9\._\-] characters are allowed
Unsupported config option for services: 'redis'
There are multiple problems with your file. The one causing the syntax error is that you have an extra colon on the first line:
version:'2':
That way you define the scalar string key version:'2' with a value of null. Since you are therefore not defining the version of the docker-compose file, the rest of the file (which is written for version 2) fails. This is best resolved by removing the trailing colon and adding a space after version:
In addition, your ports definition is incorrect: the value should be a sequence/list, but you again specify a scalar string -"0.0.0.0:${PORT}:6379" because there is no space after the initial dash.
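For comparison, this is roughly how a YAML parser sees the relevant lines (an illustrative sketch, not exact parser output):

version:'2':                 # one plain-scalar key "version:'2'" with a null value, so no version is set
version: '2'                 # key "version" with the string value '2'

ports:
  -"0.0.0.0:${PORT}:6379"    # the value of ports is the single scalar string -"0.0.0.0:${PORT}:6379"
ports:
  - "0.0.0.0:${PORT}:6379"   # with the space after the dash, the value is a one-element list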
Change your docker-compose.yml file to:
version: '2'
#           ^ no colon here
#       ^ space here
services:
  redis:
    image: redis
    environment:
      - HOST='localhost'
      - PORT=6379
    ports:
      - "0.0.0.0:${PORT}:6379"
#      ^ extra space here
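Once you have made those changes, you can check that the file parses cleanly before bringing anything up; docker-compose ships a command that validates the file and prints the resolved configuration (or the first error it finds):

docker-compose config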
Just remove the last ":" character from the string version:'2': (and add a space after version:, as explained above).
After that your docker-compose.yml should look like:
version: '2'
services:
  redis:
    image: redis
    environment:
      - HOST='localhost'
      - PORT=6379
    ports:
      - "0.0.0.0:${PORT}:6379"
I'm trying to use Docker Compose on Microsoft Windows to create a stack for Seafile.
The error message after creating is:
Deployment error
failed to deploy a stack: Named volume “C:/Users/Administrator/Docker/Volumes/Seafile/Mysql:/var/lib/mysql:rw” is used in service “db” but no declaration was found in the volumes section. : exit status 1
Here's my problematic docker-compose.yaml file:
version: '2'
services:
  db:
    image: mariadb:10.5
    container_name: seafile-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=db_dev  # Required, sets the root password of the MySQL service.
      - MYSQL_LOG_CONSOLE=true
    volumes:
      - C:/Users/Administrator/Docker/Volumes/Seafile/Mysql:/var/lib/mysql  # Required, specifies the path to the MySQL data persistent store.
    networks:
      - seafile-net
  memcached:
    image: memcached:1.5.6
    container_name: seafile-memcached
    entrypoint: memcached -m 256
    networks:
      - seafile-net
  seafile:
    image: seafileltd/seafile-mc:latest
    container_name: seafile
    ports:
      - "9000:80"
      # - "443:443"  # If https is enabled, uncomment this line.
    volumes:
      - C:/Users/Administrator/Docker/Volumes/Seafile/Seafile:/shared  # Required, specifies the path to the Seafile data persistent store.
    environment:
      - DB_HOST=db
      - DB_ROOT_PASSWD=db_dev  # Required, the value should be the root password of the MySQL service.
      - TIME_ZONE=Etc/UTC  # Optional, default is UTC. Should be uncommented and set to your local time zone.
      - SEAFILE_ADMIN_EMAIL=me@example.com  # Specifies the Seafile admin user, default is 'me@example.com'.
      - SEAFILE_ADMIN_PASSWORD=asecret  # Specifies the Seafile admin password, default is 'asecret'.
      - SEAFILE_SERVER_LETSENCRYPT=false  # Whether to use https or not.
      - SEAFILE_SERVER_HOSTNAME=docs.seafile.com  # Specifies your host name if https is enabled.
    depends_on:
      - db
      - memcached
    networks:
      - seafile-net
networks:
  seafile-net:
If you see the error "no declaration was found in the volumes section", you are probably not declaring the volumes in the top-level volumes section.
The error message can cause confusion. Here's how to do it correctly:
...
services:
  ...
    volumes:
      - a:/path1
      - b:/path2
  ...
volumes:
  a:
  b:
...
I know this may seem somewhat scattered, and Docker could have handled it differently in another universe, but in the current version this is how it works: the top-level volumes section declares the volumes, while the services section just uses them.
Let me know if this was your problem.
More info:
https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose
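Applied to the Seafile file above, one option is to declare named volumes at the top level and reference them from the db and seafile services; another is to keep the Windows host paths but write them as long-syntax bind mounts so they are not mistaken for named volumes. A sketch of the named-volume variant, with all other settings omitted (the volume names seafile-mysql-data and seafile-data are placeholders I made up):

version: '2'
services:
  db:
    image: mariadb:10.5
    volumes:
      - seafile-mysql-data:/var/lib/mysql
  seafile:
    image: seafileltd/seafile-mc:latest
    volumes:
      - seafile-data:/shared
volumes:
  seafile-mysql-data:
  seafile-data: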
I have set COMPOSE_PROJECT_NAME=xxx in the .env file in order to customise the project name. But when I use the following:
docker-compose up --abort-on-container-exit --scale influx=0 kafka=0
It throws this error:
ERROR: No such service: kafka=0
If I provide only one service then it scales up/down without any error, but it breaks when I provide multiple services. Is there a way to fix this?
Here's the structure of the docker-compose.yml file:
version: '3'
services:
  influx:
    image: xxx
    volumes:
      - xxx
    ports:
      - xxx
  kafka:
    image: xxx
    volumes:
      - xxx
    ports:
      - xxx
The syntax is docker-compose up [options] [containers...], and one of the options is --scale, which takes exactly one SERVICE=NUM argument (repeat the flag for each service you want to scale). You can, for example:
docker-compose up --scale influx=1 --scale kafka=2 kafka influx
Here --scale is the option, influx=1 and kafka=2 are its arguments, and kafka influx are the containers (just omit them to bring up all of them).
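Applied to the command from the question, that would presumably become:

docker-compose up --abort-on-container-exit --scale influx=0 --scale kafka=0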
I'm using a Docker Compose file for an ELK setup and using the latest version (above 7) for Kibana. I set the xpack.encryptedSavedObjects.encryptionKey parameter in kibana.yml so that I can use the alerts and actions feature, but even after that I'm not able to create an alert. Can anyone help me please?
I generated a 32-character encryption key using the Python uuid module.
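For example, a one-liner along these lines yields a 32-character hex string (just one way to do it):

python3 -c "import uuid; print(uuid.uuid4().hex)"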
According to https://github.com/elastic/kibana/issues/57773 the environment variable XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY was missing from the Kibana config. The fix was merged in Feb 2020 and is now working.
The encryption key XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY has to be 32 characters or longer. https://www.elastic.co/guide/en/kibana/current/using-kibana-with-security.html
A working configuration could look like this:
...
kibana:
  depends_on:
    - elasticsearch
  image: docker.elastic.co/kibana/kibana:8.0.0-rc2
  container_name: kibana
  environment:
    - ...
    - SERVER_PUBLICBASEURL=https://kibana.stackoverflow.com/
    - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=a7a6311933d3503b89bc2dbc36572c33a6c10925682e591bffcab6911c06786d
    - ...
...
I have tried using the environment variable in my docker-compose.yml file as
kib01:
  image: docker.elastic.co/kibana/kibana:${VERSION}
  container_name: kib01
  depends_on: {"es01": {"condition": "service_healthy"}}
  ports:
    - 5601:5601
  environment:
    SERVERNAME: localhost
    ELASTICSEARCH_URL: https://es01:9200
    ELASTICSEARCH_HOSTS: https://es01:9200
    XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: "743787217A45432B462D4A614EF35266"
  volumes:
    - /var/elasticsearch/config/certs:$CERTS_DIR
  networks:
    - elastic
We changed the setting xpack.encryptedSavedObjects.encryptionKey into its environment-variable form XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY by replacing . with _ and converting to all caps.
Maybe there is a problem with mounting the file; I opted for environment variables in my docker-compose file instead.
services:
  kibana:
    ...
    environment:
      ...
      XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: abcd...
While configuring my YAML I get the error below:
version:'3.9'
services:
Web:
image:nginx
database:
image:redis
ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
in ".\docker-compose.yml", line 2, column 9
YAML requires a space after the colon of a mapping key:
version: '3.9'
services:
Web:
image: nginx
database:
image: redis
If that space is missing, YAML reads version:'3.9' as a single scalar that continues on the next line. On that next line there is a colon that could introduce a mapping value, but you are now inside a multiline scalar, and multiline scalars are not allowed as implicit mapping keys. That is what the error message is trying to tell you.
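To make the multiline-scalar behaviour concrete, here is a throwaway YAML document (not compose syntax) showing how plain scalars fold across lines:

version:'3.9'
services and more text

This parses as the single string "version:'3.9' services and more text". As soon as the second line instead ends in a colon, YAML would need it to act as a mapping key, which a multiline scalar cannot do, and the scanner reports "mapping values are not allowed here".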
You also need to fix the indentation to have a proper docker compose file:
version: '3.9'
services:
  Web:
    image: nginx
  database:
    image: redis
I'm using docker-compose to manage a multi-container application. One of those containers needs access to the contents of a directory on the host.
This seems simple according to the various sources of documentation on Docker and docker-compose, but I'm struggling to get it working.
event_processor:
  environment:
    - COMPOSE_CONVERT_WINDOWS_PATHS=1
  build: ./Docker/event_processor
  ports:
    - "15672:15672"
  entrypoint: python -u /src/event_processor/event_processor.py
  networks:
    - app_network
  volumes:
    - C/path/to/interesting/directory:/interesting_directory
Running this I get the error message:
ERROR: Named volume
"C/path/to/interesting/directory:/interesting_directory:rw" is used in
service "event_processor" but no declaration was found in the
volumes section.
I understand from the docs that a top-level declaration is only necessary if data is to be shared between containers, which isn't the case here.
The docs for docker-compose I linked above have an example which seems to do exactly what I need:
version: "3.2"
services:
web:
image: nginx:alpine
ports:
- "80:80"
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
networks:
webnet:
volumes:
mydata:
However when I try, I get errors about the syntax:
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it
should be a string
So I tried to play along:
volumes:
  - type: "bind"
    source: "C/path/to/interesting/directory"
    target: "/interesting_directory"
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it should be a string
So again the same error.
I tried the following too:
volumes:
  - type=bind, source=C/path/to/interesting/directory,destination=/interesting_directory
No error, but attaching to the running container, I see the following two folders:
type=bind, source=C
So it seems I am able to create a number of volumes from one string (though the forward slashes are cutting the string up in this case), but I am not mapping it to the host directory.
I've read the docs but I think I'm missing something.
Can someone post an example of mounting a Windows directory from a host to a Linux container so that the existing contents of the Windows dir are available from the container?
OK so there were multiple issues here:
1.
I had
version: '3'
at the top of my docker-compose.yml. The long syntax described here wasn't implemented until 3.4, so I stopped receiving the bizarre syntax error when I updated this to:
version: '3.6'
2.
I use my Docker account on two Windows PCs. Following a hint from another Stack Overflow post, I reset Docker to the factory settings. I had to give Docker the computer username and password, with the notice that this was necessary to access the contents of the local filesystem. At this point I remembered doing this on another PC, so I'm not sure whether the credentials were correct on this one. With the correct credentials for the current PC, I was able to bind-mount the volume with the expected results as follows:
version: '3.6'
services:
  event_processor:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    build: ./Docker/event_processor
    ports:
      - "15672:15672"
    entrypoint: python -u /src/event_processor/event_processor.py
    networks:
      - app_network
    volumes:
      - type: bind
        source: c:/path/to/interesting/directory
        target: /interesting_directory
Now it works as expected. I'm not sure if it was the factory reset or the updated credentials that fixed it. I'll find out tomorrow when I use another PC and update.