Unable to send mail with Sylius Swiftmailer from localhost docker-mailserver - swiftmailer

I am trying to set up a local mailserver and send mail in Sylius using Swiftmailer. Here is my swiftmailer.yaml config:
swiftmailer:
    transport: 'smtp'
    auth_mode: login
    username: 'test@dibdrop.dev'
    password: 'test'
    disable_delivery: false
And my docker-compose.yml for docker-mailserver:
services:
  mailserver:
    image: docker.io/mailserver/docker-mailserver:latest
    container_name: mailserver
    # If the FQDN for your mail-server is only two labels (eg: example.com),
    # you can assign this entirely to `hostname` and remove `domainname`.
    hostname: mail
    domainname: dibdrop.dev
    env_file: mailserver.env
    # More information about the mail-server ports:
    # https://docker-mailserver.github.io/docker-mailserver/edge/config/security/understanding-the-ports/
    # To avoid conflicts with yaml base-60 float, DO NOT remove the quotation marks.
    ports:
      - "25:25"    # SMTP  (explicit TLS => STARTTLS)
      - "143:143"  # IMAP4 (explicit TLS => STARTTLS)
      - "465:465"  # ESMTP (implicit TLS)
      - "587:587"  # ESMTP (explicit TLS => STARTTLS)
      - "993:993"  # IMAP4 (implicit TLS)
    volumes:
      - ./docker-data/dms/mail-data/:/var/mail/
      - ./docker-data/dms/mail-state/:/var/mail-state/
      - ./docker-data/dms/mail-logs/:/var/log/mail/
      - ./docker-data/dms/config/:/tmp/docker-mailserver/
      - /etc/localtime:/etc/localtime:ro
    restart: always
    stop_grace_period: 1m
    cap_add:
      - NET_ADMIN
    healthcheck:
      test: "ss --listening --tcp | grep -P 'LISTEN.+:smtp' || exit 1"
      timeout: 3s
      retries: 0
I can connect to the mailserver without a problem using 'telnet smtp.localhost 25', but when I try to send via Sylius the output is:
Connection could not be established with host localhost :stream_socket_client(): Unable to connect to localhost:25 (Address not available)
I have also tried setting the host to 'smtp.localhost' instead of 'localhost', but it didn't change anything.
I'd appreciate any comments that help me understand better how mail servers work and why it's not working in my situation.
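Worth noting: "Address not available" points at routing from wherever PHP runs, not at the mailserver itself, and the swiftmailer.yaml above sets no host, so it presumably falls back to localhost. If Sylius itself runs inside a container, localhost there is the Sylius container, not the machine publishing port 25. A minimal sketch of the mailer config for that case, assuming the app shares a compose network with the mailserver; the host value and port choice here are assumptions, not the poster's config:

    # config/packages/swiftmailer.yaml -- a sketch, not a verified config
    swiftmailer:
        transport: 'smtp'
        host: 'mailserver'   # compose service name (assumed), instead of localhost
        port: 587            # the STARTTLS submission port published above
        auth_mode: login
        username: 'test@dibdrop.dev'
        password: 'test'
        disable_delivery: false

If the app runs directly on the host instead, localhost:25 should work only as long as port 25 is really bound on the host, which the successful telnet test suggests it is.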

Related

org.apache.kafka.common.config.ConfigException: Invalid url in bootstrap.servers: "kafka:29092"

I am trying to get a Kafka producer to produce messages inside a docker-compose stack. When I run the producer on localhost with env variables pointing at the port forwarded to the kafka container, it sends successfully. When I containerize the producer application, it reads the values from the environment correctly, but it still throws:
ConfigException: Invalid url in bootstrap.servers: "kafka:29092"
I have tried to follow the Confluent example and the Spring example for setting up zookeeper and kafka in compose. Here's what I have:
# docker-compose.yml
version: '3'
networks:
  integrations:
    ipam:
      driver: default
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    hostname: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ALLOW_ANONYMOUS_LOGIN: "yes"
      ZOO_ADMINSERVER_ENABLED: "true"
    healthcheck:
      test: [ "CMD", "nc", "-zv", "127.0.0.1", "2181" ]
      interval: 120s
      timeout: 10s
      retries: 5
    ports:
      - 2181:2181
    networks:
      integrations:
  kafka:
    image: confluentinc/cp-kafka:latest
    hostname: kafka
    depends_on:
      - zookeeper
    restart: on-failure
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    healthcheck:
      test: [ "CMD", "nc", "-zv", "127.0.0.1", "9092" ]
      interval: 120s
      timeout: 10s
      retries: 5
    networks:
      integrations:
  producer:
    hostname: producer
    build:
      context: https://github.com/bry-git/SampleKafkaProducer.git#main
      dockerfile: Dockerfile
    depends_on:
      - zookeeper
      - kafka
    ports:
      - 8081:8081
    environment:
      - BOOTSTRAP_SERVER="kafka:29092"
      - KAFKA_TOPIC="primary"
    command: ["gradle", "bootRun"]
    networks:
      integrations:
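Worth noting: with the list form of environment, compose takes everything after the = literally, so the quotation marks become part of the value, which matches the ProducerConfig dump further down where bootstrap.servers shows the quotes inside the brackets. A sketch of the same entries with the quotes dropped (untested here):

    environment:
      BOOTSTRAP_SERVER: kafka:29092   # mapping form; the value is now bare kafka:29092
      KAFKA_TOPIC: primary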
When I run the Kafka producer Spring application from localhost, outside of Docker, it connects to the broker as expected with my run config (screenshot of the run configuration omitted).
This is my config for Spring:
# application.yml
server:
  port: 8088
spring:
  kafka:
    bootstrap-servers: ${BOOTSTRAP_SERVER}
    topic:
      name: ${KAFKA_TOPIC}
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
This is my Dockerfile:
FROM gradle:jdk11
COPY . /SampleKafkaProducer
WORKDIR /SampleKafkaProducer
ARG BOOTSTRAP_SERVER
ENV BOOTSTRAP_SERVER=${BOOTSTRAP_SERVER}
ARG KAFKA_TOPIC
ENV KAFKA_TOPIC=${KAFKA_TOPIC}
EXPOSE 8088
I know I have successfully gotten those values to the producer, because when I check the docker logs I see Kafka printing its config from inside the container when the whole stack is up.
# output from integration_producer_1
2023-01-31 09:04:30.001 INFO 208 --- [ scheduling-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
producer_1 | acks = -1
producer_1 | batch.size = 16384
producer_1 | bootstrap.servers = ["kafka:29092"]
producer_1 | buffer.memory = 33554432
producer_1 | client.dns.lookup = use_all_dns_ips
producer_1 | client.id = producer-121
producer_1 | compression.type = none
producer_1 | connections.max.idle.ms = 540000
producer_1 | delivery.timeout.ms = 120000
...
producer_1 | security.protocol = PLAINTEXT
producer_1 | security.providers = null
producer_1 | send.buffer.bytes = 131072
I have connected successfully with Offset Explorer, and I have exec'd into the producer container while it is running and connected to the other containers by their docker network hostnames:
root@producer:/SampleKafkaProducer# hostname
producer
root@producer:/SampleKafkaProducer# nc -zv kafka 9092
Connection to kafka (192.168.80.3) 9092 port [tcp/*] succeeded!
root@producer:/SampleKafkaProducer# nc -zv kafka 29092
Connection to kafka (192.168.80.3) 29092 port [tcp/*] succeeded!
root@producer:/SampleKafkaProducer# nc -zv zookeeper 2181
Connection to zookeeper (192.168.80.2) 2181 port [tcp/*] succeeded!
root@producer:/SampleKafkaProducer#
I have tried swapping PLAINTEXT and PLAINTEXT_HOST, as well as their port mappings, in every combination in my compose file. I have no clue why this doesn't work. I figured it was some sort of allowed-domain issue that only works when I'm external and target localhost from IntelliJ.
I have also tried assigning IPv4 addresses to each container via the network block in compose.

Kibana error: Unable to retrieve version information from Elasticsearch nodes. socket hang up

I am trying to deploy elasticsearch and kibana to kubernetes using this chart, and I am getting this error inside the kibana container; therefore the ingress returns a 503 error and the container is never ready.
Error:
[2022-11-08T12:30:53.321+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 10.112.130.148:42748, Remote: 10.96.237.95:9200
IP address 10.96.237.95 is a valid elasticsearch service address, and the port is right.
When I curl elasticsearch from inside the kibana container, it successfully returns a response.
Am I missing something in my configuration?
Chart version: 7.17.3
Values for elasticsearch chart:
clusterName: "elasticsearch"
nodeGroup: "master"
createCert: false
roles:
  master: "true"
  data: "true"
  ingest: "true"
  ml: "true"
  transform: "true"
  remote_cluster_client: "true"
protocol: https
replicas: 2
sysctlVmMaxMapCount: 262144
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 90
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 10
imageTag: "7.17.3"
extraEnvs:
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elasticsearch-creds
        key: password
  - name: ELASTIC_USERNAME
    valueFrom:
      secretKeyRef:
        name: elasticsearch-creds
        key: username
clusterHealthCheckParams: "wait_for_status=green&timeout=20s"
antiAffinity: "soft"
resources:
  requests:
    cpu: "100m"
    memory: "1Gi"
  limits:
    cpu: "1000m"
    memory: "1Gi"
esJavaOpts: "-Xms512m -Xmx512m"
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 30Gi
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.client_authentication: required
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/elasticsearch/config/certs
Values for kibana chart:
elasticSearchHosts: "https://elasticsearch-master:9200"
extraEnvs:
  - name: ELASTICSEARCH_USERNAME
    valueFrom:
      secretKeyRef:
        name: elasticsearch-creds
        key: username
  - name: ELASTICSEARCH_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elasticsearch-creds
        key: password
  - name: KIBANA_ENCRYPTION_KEY
    valueFrom:
      secretKeyRef:
        name: encryption-key
        key: encryption_key
kibanaConfig:
  kibana.yml: |
    server.ssl:
      enabled: true
      key: /usr/share/kibana/config/certs/elastic-certificate.pem
      certificate: /usr/share/kibana/config/certs/elastic-certificate.pem
    xpack.security.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
    elasticsearch.ssl:
      certificateAuthorities: /usr/share/kibana/config/certs/elastic-certificate.pem
      verificationMode: certificate
protocol: https
secretMounts:
  - name: elastic-certificate-pem
    secretName: elastic-certificate-pem
    path: /usr/share/kibana/config/certs
imageTag: "7.17.3"
ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-issuer
    kubernetes.io/ingress.allow-http: 'false'
  paths:
    - path: /
      pathType: Prefix
      backend:
        serviceName: kibana
        servicePort: 5601
  hosts:
    - host: mydomain.com
      paths:
        - path: /
          pathType: Prefix
          backend:
            serviceName: kibana
            servicePort: 5601
  tls:
    - hosts:
        - mydomain.com
      secretName: mydomain.com
UPD: tried it with another image version (8.4.1); nothing changed, I am getting the same error. By the way, Logstash is successfully shipping logs to this elasticsearch instance, so I think the problem is in Kibana.
Figured it out. It was a complete pain in the ass. I hope these tips will help others:
xpack.security.http.ssl.enabled should be set to false. I can't find another way around it, but if you do, I'd be glad to hear any advice. As I see it, you don't need security on the HTTP layer, since kibana connects to elastic via the transport layer (correct me if I am wrong). Therefore xpack.security.transport.ssl.enabled should still be set to true, but xpack.security.http.ssl.enabled should be set to false. (Don't forget to change the protocol field for the readinessProbe to http, and also change the protocol for elasticsearch in the kibana chart to http.)
The ELASTIC_USERNAME env variable is pointless in the elasticsearch chart; only the password is used, and the user is always elastic.
ELASTICSEARCH_USERNAME in the kibana chart should actually be set to the kibana_system user, with the corresponding password for that user.
You need to provide the self-signed CA for Elasticsearch to Kibana in kibana.yml:
elasticsearch.ssl.certificateAuthorities: "/path/cert.ca"
You can test by setting
elasticsearch.ssl.verificationMode: "none"
but that is not recommended for production.
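To make the first and third tips concrete, a rough sketch of the adjusted kibana chart values, assuming the elasticsearch HTTP layer is now plain http (this is an illustration of the tips above, not a verified config; the kibana-creds secret name is assumed):

    elasticSearchHosts: "http://elasticsearch-master:9200"  # http now that HTTP-layer TLS is off
    protocol: http
    extraEnvs:
      # kibana_system user, not the elastic superuser
      - name: ELASTICSEARCH_USERNAME
        valueFrom:
          secretKeyRef:
            name: kibana-creds   # assumed secret holding kibana_system credentials
            key: username
      - name: ELASTICSEARCH_PASSWORD
        valueFrom:
          secretKeyRef:
            name: kibana-creds
            key: password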

Issues with Traefik v2.0 to use self signed certificate

I'm trying to set up docker with traefik to use a self-signed certificate on localhost.
I am developing on my local machine and I want to use docker with traefik. The problem I'm having is that I can't get the self-signed certificate to work with my setup. I need someone to point me in the right direction!
The certificate shown in the browser is always TRAEFIK DEFAULT CERT, or I get a 404 page not found when I enter my domain.
My docker-compose.yaml
version: "3.7"
services:
mariadb:
image: wodby/mariadb:$MARIADB_TAG
container_name: "${PROJECT_NAME}_mariadb"
stop_grace_period: 30s
environment:
MYSQL_ROOT_PASSWORD: $DB_ROOT_PASSWORD
MYSQL_DATABASE: $DB_NAME
MYSQL_USER: $DB_USER
MYSQL_PASSWORD: $DB_PASSWORD
ports:
- 3306:3306
volumes:
# - ./mariadb-init:/docker-entrypoint-initdb.d # Place init .sql file(s) here.
- mysql:/var/lib/mysql # I want to manage volumes manually.
php:
image: wodby/wordpress-php:$PHP_TAG
container_name: "${PROJECT_NAME}_php"
environment:
PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
DB_HOST: $DB_HOST
DB_USER: $DB_USER
DB_PASSWORD: $DB_PASSWORD
DB_NAME: $DB_NAME
PHP_FPM_USER: wodby
PHP_FPM_GROUP: wodby
## Read instructions at https://wodby.com/docs/stacks/wordpress/local#xdebug
# PHP_XDEBUG: 1
# PHP_XDEBUG_DEFAULT_ENABLE: 1
# PHP_XDEBUG_REMOTE_CONNECT_BACK: 0
# PHP_IDE_CONFIG: serverName=my-ide
# PHP_XDEBUG_IDEKEY: "my-ide"
# PHP_XDEBUG_REMOTE_HOST: 172.17.0.1 # Linux
# PHP_XDEBUG_REMOTE_HOST: 10.254.254.254 # macOS
# PHP_XDEBUG_REMOTE_HOST: 10.0.75.1 # Windows
volumes:
# - ./app:/var/www/html
## For macOS users (https://wodby.com/docs/stacks/wordpress/local#docker-for-mac)
- ./app:/var/www/html:cached # User-guided caching
# - docker-sync:/var/www/html # Docker-sync
## For XHProf and Xdebug profiler traces
# - files:/mnt/files
nginx:
image: wodby/nginx:$NGINX_TAG
container_name: "${PROJECT_NAME}_nginx"
depends_on:
- php
environment:
NGINX_STATIC_OPEN_FILE_CACHE: "off"
NGINX_ERROR_LOG_LEVEL: debug
NGINX_BACKEND_HOST: php
NGINX_VHOST_PRESET: wordpress
#NGINX_SERVER_ROOT: /var/www/html/subdir
volumes:
# - ./app:/var/www/html
# Options for macOS users (https://wodby.com/docs/stacks/wordpress/local#docker-for-mac)
- ./app:/var/www/html:cached # User-guided caching
# - docker-sync:/var/www/html # Docker-sync
labels:
- "traefik.http.routers.${PROJECT_NAME}_nginx.rule=Host(`${PROJECT_BASE_URL}`)"
- "traefik.http.routers.${PROJECT_NAME}_nginx.tls=true"
# - "traefik.http.routers.${PROJECT_NAME}_nginx.tls.certResolver=${PROJECT_BASE_URL}"
mailhog:
image: mailhog/mailhog
container_name: "${PROJECT_NAME}_mailhog"
labels:
- "traefik.http.services.${PROJECT_NAME}_mailhog.loadbalancer.server.port=8025"
-"traefik.http.routers.${PROJECT_NAME}_mailhog.rule=Host(`mailhog.${PROJECT_BASE_URL}`)"
  portainer:
    image: portainer/portainer
    container_name: "${PROJECT_NAME}_portainer"
    command: --no-auth -H unix:///var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      - "traefik.http.routers.${PROJECT_NAME}_portainer.rule=Host(`portainer.${PROJECT_BASE_URL}`)"
  traefik:
    image: traefik:v2.0
    container_name: "${PROJECT_NAME}_traefik"
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080" # Dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik:/etc/traefik
      - ./certs:/certs
volumes:
  mysql:
  ## Docker-sync for macOS users
  # docker-sync:
  #   external: true
  ## For Xdebug profiler
  # files:
My traefik.yml
providers:
  file:
    filename: "/etc/traefik/config.yml"
  docker:
    endpoint: "unix:///var/run/docker.sock"
api:
  insecure: true
entryPoints:
  web:
    address: ":80"
  web-secure:
    address: ":443"
And my config.yml (I understand it that the TLS config has to be in a separate file!?):
tls:
  certificates:
    - certFile: /certs/domain.test.crt
    - certKey: /certs/domain.test.key
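As an aside (an observation, not from the original post): in Traefik v2 dynamic configuration, the certificate and key belong to the same list entry, and the key name is keyFile rather than certKey, so the block above would more likely need to look like this sketch:

    tls:
      certificates:
        - certFile: /certs/domain.test.crt
          keyFile: /certs/domain.test.key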
I have been battling with this for a bit now and I seem to have found the combination that gets it working. Note: you do not need to have your TLS config in a separate file.
[providers]
  [providers.file]
    # This file
    filename = "/etc/traefik/traefik.toml"

[tls.stores.default.defaultCertificate]
  certFile = "/certs/mycert.crt"
  keyFile = "/certs/mycert.key"
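If you prefer to keep a separate YAML file for the dynamic configuration instead (as in the question's config.yml), the equivalent block would presumably be (same paths assumed):

    # config.yml -- YAML flavour of the TOML above
    tls:
      stores:
        default:
          defaultCertificate:
            certFile: /certs/mycert.crt
            keyFile: /certs/mycert.key

Either way, the defaultCertificate store is what should replace the generated TRAEFIK DEFAULT CERT in the browser.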
I have now solved it. My final docker-compose.yml looks like this:
Many thanks to @fffnite
version: "3.7"
services:
mariadb:
image: wodby/mariadb:$MARIADB_TAG
container_name: "${PROJECT_NAME}_mariadb"
stop_grace_period: 30s
environment:
MYSQL_ROOT_PASSWORD: $DB_ROOT_PASSWORD
MYSQL_DATABASE: $DB_NAME
MYSQL_USER: $DB_USER
MYSQL_PASSWORD: $DB_PASSWORD
ports:
- 3306:3306
volumes:
# - ./mariadb-init:/docker-entrypoint-initdb.d # Place init .sql file(s) here.
- mysql:/var/lib/mysql # I want to manage volumes manually.
php:
image: wodby/wordpress-php:$PHP_TAG
container_name: "${PROJECT_NAME}_php"
environment:
PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
DB_HOST: $DB_HOST
DB_USER: $DB_USER
DB_PASSWORD: $DB_PASSWORD
DB_NAME: $DB_NAME
PHP_FPM_USER: wodby
PHP_FPM_GROUP: wodby
## Read instructions at https://wodby.com/docs/stacks/wordpress/local#xdebug
# PHP_XDEBUG: 1
# PHP_XDEBUG_DEFAULT_ENABLE: 1
# PHP_XDEBUG_REMOTE_CONNECT_BACK: 0
# PHP_IDE_CONFIG: serverName=my-ide
# PHP_XDEBUG_IDEKEY: "my-ide"
# PHP_XDEBUG_REMOTE_HOST: 172.17.0.1 # Linux
# PHP_XDEBUG_REMOTE_HOST: 10.254.254.254 # macOS
# PHP_XDEBUG_REMOTE_HOST: 10.0.75.1 # Windows
volumes:
# - ./app:/var/www/html
## For macOS users (https://wodby.com/docs/stacks/wordpress/local#docker-for-mac)
- ./app:/var/www/html:cached # User-guided caching
# - docker-sync:/var/www/html # Docker-sync
## For XHProf and Xdebug profiler traces
# - files:/mnt/files
nginx:
image: wodby/nginx:$NGINX_TAG
container_name: "${PROJECT_NAME}_nginx"
depends_on:
- php
environment:
NGINX_STATIC_OPEN_FILE_CACHE: "off"
NGINX_ERROR_LOG_LEVEL: debug
NGINX_BACKEND_HOST: php
NGINX_VHOST_PRESET: wordpress
#NGINX_SERVER_ROOT: /var/www/html/subdir
volumes:
# - ./app:/var/www/html
# Options for macOS users (https://wodby.com/docs/stacks/wordpress/local#docker-for-mac)
- ./app:/var/www/html:cached # User-guided caching
# - docker-sync:/var/www/html # Docker-sync
labels:
- "traefik.http.routers.${PROJECT_NAME}_nginx.rule=Host(`${PROJECT_BASE_URL}`)"
- "traefik.http.routers.${PROJECT_NAME}_nginx.entrypoints=web"
- "traefik.http.middlewares.${PROJECT_NAME}_https_nginx.redirectscheme.scheme=https"
- "traefik.http.routers.${PROJECT_NAME}_https_nginx.rule=Host(`${PROJECT_BASE_URL}`)"
- "traefik.http.routers.${PROJECT_NAME}_https_nginx.entrypoints=web-secure"
- "traefik.http.routers.${PROJECT_NAME}_https_nginx.tls=true"
mailhog:
image: mailhog/mailhog
container_name: "${PROJECT_NAME}_mailhog"
labels:
- "traefik.http.services.${PROJECT_NAME}_mailhog.loadbalancer.server.port=8025"
- "traefik.http.routers.${PROJECT_NAME}_mailhog.rule=Host(`mailhog.${PROJECT_BASE_URL}`)"
portainer:
image: portainer/portainer
container_name: "${PROJECT_NAME}_portainer"
command: --no-auth -H unix:///var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
labels:
- "traefik.http.routers.${PROJECT_NAME}_portainer.rule=Host(`portainer.${PROJECT_BASE_URL}`)"
traefik:
image: traefik:v2.0
container_name: "${PROJECT_NAME}_traefik"
ports:
- "80:80"
- "443:443"
- "8080:8080" # Dashboard
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./traefik:/etc/traefik
- ./certs:/certs
volumes:
mysql:
## Docker-sync for macOS users
# docker-sync:
# external: true
## For Xdebug profiler
# files:
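One caveat worth noting about this final file (an observation, not the poster's): the redirectscheme middleware is declared but never attached to a router, and in Traefik v2 a middleware only takes effect once a router references it, along these lines:

      - "traefik.http.routers.${PROJECT_NAME}_nginx.middlewares=${PROJECT_NAME}_https_nginx"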

Envoy and statsd Error: node 'id' and 'cluster' id are required

I am trying to configure a stats sink to collect stats in statsd.
I have configured the envoy.yaml as follows:
admin:
  access_log_path: /logs/envoy_access.log
  address:
    socket_address:
      protocol: TCP
      address: 0.0.0.0
      port_value: 8001
stats_sinks:
  - name: envoy.statsd
    config:
      tcp_cluster_name: statsd-exporter
static_resources:
  ...
  clusters:
    - name: app
      connect_timeout: 0.25s
      type: strict_dns
      lb_policy: round_robin
      hosts:
        - socket_address:
            address: {{appName}}
            port_value: {{appPort}}
    - name: statsd-exporter
      connect_timeout: 0.25s
      type: strict_dns
      lb_policy: round_robin
      hosts:
        - socket_address:
            address: statsd_exporter
            port_value: 9125
statsd is built as a container within the same docker network.
When I run the docker containers with Envoy and statsd, Envoy shows the following error:
proxy_1 | [2019-05-06 04:50:38.006][27][info][main] [source/server/server.cc:516] exiting
proxy_1 | tcp statsd: node 'id' and 'cluster' are required. Set it either in 'node'
config or via --service-node and --service-cluster options.
template-starter-windows_proxy_1 exited with code 1
How do I resolve this error?
Update
I was able to resolve the error by setting the --service-cluster and --service-node parameters on the envoy command:
envoy -c /etc/envoy/envoy.yaml --service-cluster 'front-envoy' --service-node 'front-envoy'
I am not sure why using the statsd sink would require these parameters to be set, and the Envoy documentation does not mention this.
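For reference, the same identifiers can also live in the bootstrap config itself rather than on the command line; a minimal sketch, carrying over the names from the command above:

    # top of envoy.yaml -- equivalent of --service-node / --service-cluster
    node:
      id: front-envoy
      cluster: front-envoy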

Traefik Let's Encrypt ACME Route53 for multiple domains

I have Traefik configured to issue Let's Encrypt wildcard certificates with DNS-01 challenge.
I have the variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, AWS_HOSTED_ZONE_ID in the env file, for *.domain1.com (domain1.com). This AWS_HOSTED_ZONE_ID is related to domain1.com only.
I need to add new domain domain2.com also hosted in Route53, so Traefik can issue certificates for both *.domain1.com and *.domain2.com.
How can I have Traefik issue Let's Encrypt certificates for multiple Route53 domains?
Next is my traefik.yml file:
version: "3.6"
services:
traefik:
image: traefik
env_file: /mnt/ceph/traefik/env
command:
- "--debug=true"
- "--logLevel=DEBUG"
- "--api"
- "--entrypoints=Name:http Address::80 Redirect.EntryPoint:https"
- "--entrypoints=Name:https Address::443 Compress:true TLS"
- "--defaultentrypoints=http,https"
- "--acme"
- "--acme.storage=acme.json"
- "--acme.acmeLogging=true"
- "--acme.entryPoint=https"
- "--acme.email=email#domain1.com"
#- "--acme.caServer=https://acme-staging-v02.api.letsencrypt.org/directory"
- "--acme.caServer=https://acme-v02.api.letsencrypt.org/directory"
- "--acme.dnsChallenge.provider=route53"
- "--acme.dnsChallenge.delayBeforeCheck=0"
- "--acme.domains=*.domain1.com,domain1.com"
- "--docker"
- "--docker.domain=domain1.com"
- "--docker.exposedByDefault=false"
- "--docker.swarmMode"
- "--docker.watch"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /mnt/ceph/traefik/acme.json:/acme.json
networks:
- backend
- webgateway
ports:
- target: 80
published: 80
mode: host
- target: 443
published: 443
mode: host
- target: 8080
published: 8080
mode: host
deploy:
mode: global
placement:
constraints:
- node.role == manager
update_config:
parallelism: 2
failure_action: rollback
order: start-first
#delay: 5s
restart_policy:
condition: on-failure
labels:
- "traefik.enable=true"
- "traefik.backend=dashboard"
- "traefik.port=8080"
- "traefik.frontend.rule=Host:dashboard.domain1.com"
networks:
backend:
name: traefik_backend
driver: overlay
external: true
webgateway:
driver: overlay
Thank you in advance!!
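One possible direction (a sketch, not from the original post): in Traefik 1.x the --acme.domains flag can be repeated, one entry per certificate, and lego's route53 provider can look up the hosted zone per domain through the AWS API when AWS_HOSTED_ZONE_ID is not pinned in the env file. So something like the following may work, provided the credentials can modify both zones:

      - "--acme.domains=*.domain1.com,domain1.com"
      - "--acme.domains=*.domain2.com,domain2.com"
      # with AWS_HOSTED_ZONE_ID removed from the env file so the zone
      # can be resolved per domain via the Route53 API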
