I have Traefik configured to issue Let's Encrypt wildcard certificates using the DNS-01 challenge.
The env file contains the variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION and AWS_HOSTED_ZONE_ID for *.domain1.com (domain1.com). That AWS_HOSTED_ZONE_ID refers to the domain1.com zone only.
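For reference, the env file currently looks roughly like this (all values redacted, so this is only an approximation based on the variable names above):
# all values redacted; AWS_HOSTED_ZONE_ID is the hosted zone of domain1.com only
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=...
AWS_HOSTED_ZONE_ID=...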
I need to add a new domain, domain2.com, which is also hosted in Route53, so that Traefik can issue certificates for both *.domain1.com and *.domain2.com.
How can I have Traefik issue Let's Encrypt certificates for multiple Route53 domains?
Below is my traefik.yml file:
version: "3.6"
services:
traefik:
image: traefik
env_file: /mnt/ceph/traefik/env
command:
- "--debug=true"
- "--logLevel=DEBUG"
- "--api"
- "--entrypoints=Name:http Address::80 Redirect.EntryPoint:https"
- "--entrypoints=Name:https Address::443 Compress:true TLS"
- "--defaultentrypoints=http,https"
- "--acme"
- "--acme.storage=acme.json"
- "--acme.acmeLogging=true"
- "--acme.entryPoint=https"
- "--acme.email=email#domain1.com"
#- "--acme.caServer=https://acme-staging-v02.api.letsencrypt.org/directory"
- "--acme.caServer=https://acme-v02.api.letsencrypt.org/directory"
- "--acme.dnsChallenge.provider=route53"
- "--acme.dnsChallenge.delayBeforeCheck=0"
- "--acme.domains=*.domain1.com,domain1.com"
- "--docker"
- "--docker.domain=domain1.com"
- "--docker.exposedByDefault=false"
- "--docker.swarmMode"
- "--docker.watch"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /mnt/ceph/traefik/acme.json:/acme.json
networks:
- backend
- webgateway
ports:
- target: 80
published: 80
mode: host
- target: 443
published: 443
mode: host
- target: 8080
published: 8080
mode: host
deploy:
mode: global
placement:
constraints:
- node.role == manager
update_config:
parallelism: 2
failure_action: rollback
order: start-first
#delay: 5s
restart_policy:
condition: on-failure
labels:
- "traefik.enable=true"
- "traefik.backend=dashboard"
- "traefik.port=8080"
- "traefik.frontend.rule=Host:dashboard.domain1.com"
networks:
backend:
name: traefik_backend
driver: overlay
external: true
webgateway:
driver: overlay
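What I am effectively trying to get to is something like this (only a sketch of my intent, not tested; I am not sure whether the single AWS_HOSTED_ZONE_ID variable can cover a second zone, or whether it has to be dropped so the zone is looked up per domain):
- "--acme.domains=*.domain1.com,domain1.com"
- "--acme.domains=*.domain2.com,domain2.com"
Both domains are hosted in Route53.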
Thank you in advance!!
Related
I configured a Redis master-slave-sentinel environment using docker-compose, and I completed and tested it.
Then, in my Spring Boot application, I wrote the Redis sentinel config code.
Below is the Spring config class:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.RedisSentinelConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.repository.configuration.EnableRedisRepositories;

@Configuration
@EnableRedisRepositories
public class RedisConfig {

    @Bean
    public RedisConnectionFactory redisConnectionFactory() {
        // Sentinel master set name and the sentinel addresses as published on the host
        RedisSentinelConfiguration redisSentinelConfiguration = new RedisSentinelConfiguration()
                .master("redis-master")
                .sentinel("localhost", 26379)
                .sentinel("localhost", 26380)
                .sentinel("localhost", 26381);
        LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(redisSentinelConfiguration);
        return lettuceConnectionFactory;
    }
}
Then I ran the application, and a problem occurred:
(screenshot of the error)
I think it has something to do with the docker-compose network config.
I would like to know why this happens. If you know, please let me know.
Below is my Redis master-slave-sentinel docker-compose.yml:
version: '3.9'
services:
redis-master:
hostname: redis-master
container_name: redis-master
image: bitnami/redis:6.2.6
environment:
- REDIS_REPLICATION_MODE=master
- ALLOW_EMPTY_PASSWORD=yes
volumes:
- ./master/backup:/freelog/backup
networks:
- net-redis
ports:
- 6379:6379
# slave1 : bitnami/redis:6.2.6
redis-slave-1:
hostname: redis-slave-1
container_name: redis-slave-1
image: bitnami/redis:6.2.6
environment:
- REDIS_REPLICATION_MODE=slave
- REDIS_MASTER_HOST=redis-master
- ALLOW_EMPTY_PASSWORD=yes
ports:
- 6480:6379
volumes:
- ./slave1/backup:/freelog/backup
networks:
- net-redis
depends_on:
- redis-master
# slave2 : bitnami/redis:6.2.6
redis-slave-2:
hostname: redis-slave-2
container_name: redis-slave-2
image: bitnami/redis:6.2.6
environment:
- REDIS_REPLICATION_MODE=slave
- REDIS_MASTER_HOST=redis-master
- ALLOW_EMPTY_PASSWORD=yes
ports:
- 6481:6379
networks:
- net-redis
volumes:
- ./slave2/backup:/freelog/backup
depends_on:
- redis-master
- redis-slave-1
# slave3 : bitnami/redis:6.2.6
redis-slave-3:
hostname: redis-slave-3
container_name: redis-slave-3
image: bitnami/redis:6.2.6
environment:
- REDIS_REPLICATION_MODE=slave
- REDIS_MASTER_HOST=redis-master
- ALLOW_EMPTY_PASSWORD=yes
ports:
- 6482:6379
volumes:
- ./slave3/backup:/freelog/backup
networks:
- net-redis
depends_on:
- redis-master
- redis-slave-2
# sentinel1 : bitnami/redis-sentinel:6.2.6
redis-sentinel-1:
hostname: redis-sentinel-1
container_name: redis-sentinel-1
image: bitnami/redis-sentinel:6.2.6
environment:
- REDIS_SENTINEL_DOWN_AFTER_MILLISECONDS=3000
- REDIS_SENTINEL_FAILOVER_TIMEOUT=60000
- REDIS_MASTER_HOST=redis-master
- REDIS_MASTER_PORT_NUMBER=6379
- REDIS_MASTER_SET=master-name
- REDIS_SENTINEL_QUORUM=2
# - REDIS_SENTINEL_PASSWORD=170anwkd!
depends_on:
- redis-master
- redis-slave-1
- redis-slave-2
- redis-slave-3
ports:
- 26379:26379
networks:
- net-redis
volumes:
- ./sentinel1/backup:/freelog/backup
redis-sentinel-2:
hostname: redis-sentinel-2
container_name: redis-sentinel-2
image: bitnami/redis-sentinel:6.2.6
environment:
- REDIS_SENTINEL_DOWN_AFTER_MILLISECONDS=3000
- REDIS_SENTINEL_FAILOVER_TIMEOUT=60000
- REDIS_MASTER_HOST=redis-master
- REDIS_MASTER_PORT_NUMBER=6379
- REDIS_MASTER_SET=master-name
- REDIS_SENTINEL_QUORUM=2
# - REDIS_SENTINEL_PASSWORD=170anwkd!
depends_on:
- redis-master
- redis-slave-1
- redis-slave-2
- redis-slave-3
ports:
- 26380:26379
networks:
- net-redis
volumes:
- ./sentinel2/backup:/freelog/backup
# sentinel3 : bitnami/redis-sentinel:6.2.6
redis-sentinel-3:
hostname: redis-sentinel-3
container_name: redis-sentinel-3
image: bitnami/redis-sentinel:6.2.6
environment:
- REDIS_SENTINEL_DOWN_AFTER_MILLISECONDS=3000
- REDIS_SENTINEL_FAILOVER_TIMEOUT=60000
- REDIS_MASTER_HOST=redis-master
- REDIS_MASTER_PORT_NUMBER=6379
- REDIS_MASTER_SET=master-name
- REDIS_SENTINEL_QUORUM=2
# - REDIS_SENTINEL_PASSWORD=170anwkd!
depends_on:
- redis-master
- redis-slave-1
- redis-slave-2
- redis-slave-3
ports:
- 26381:26379
networks:
- net-redis
volumes:
- ./sentinel3/backup:/freelog/backup
networks:
net-redis:
driver: bridge
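In case it is useful, this is roughly how I would query one of the sentinels from the host to see what it announces (a sketch only, assuming redis-cli is installed on the Docker host; master-name is the REDIS_MASTER_SET value above):
redis-cli -h 127.0.0.1 -p 26379 sentinel get-master-addr-by-name master-name
I suspect this returns a container-internal address rather than one reachable from the host, but I have not confirmed it.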
I wrote a docker-compose.yml that worked fine on Linux, but I have to port it to Windows. The main problem is the paths: when I replace /<path> with c:/<path>, I get this error:
Error response from daemon: invalid mount config for type "volume":
invalid mount path: 'C://' mount path must be absolute
Here is the original file from Linux:
docker-compose.yml
version: '2.5.1'
networks:
selenoid:
external:
name: selenoid
services:
selenoid:
networks:
selenoid: null
image: 'aerokube/selenoid:latest'
container_name: 'selenoid'
volumes:
- '/home/rolf/.aerokube/selenoid:/etc/selenoid'
- '/var/run/docker.sock:/var/run/docker.sock'
command: ['-conf', '/etc/selenoid/browsers.json', '-container-network', 'selenoid']
ports:
- '4444:4444'
mysql_db:
networks:
selenoid: null
image: 'percona:latest'
container_name: 'mysql_db'
environment:
MYSQL_ROOT_PASSWORD: admin
MYSQL_DATABASE: DB_MYAPP
MYSQL_USER: test_qa
MYSQL_PASSWORD: qa_test
ports:
- '3306:3306'
volumes:
- '/home/rolf/final_project/mysql/myapp_db:/docker-entrypoint-initdb.d'
healthcheck:
test: ['CMD', 'mysql', '-uroot', '-padmin', '-h0.0.0.0', '-P3306']
timeout: 2s
retries: 15
mock:
networks:
selenoid: null
image: 'vk_api:latest'
container_name: 'mock'
ports:
- '9000:9000'
healthcheck:
test: ['CMD', 'curl', '-f', 'http://0.0.0.0:9000/status']
timeout: 2s
retries: 15
myapp:
networks:
selenoid: null
image: 'myapp'
container_name: 'myapp'
ports:
- '9999:9999'
links:
- 'mock:mock'
- 'mysql_db:mysql_db'
volumes:
- /home/ilia/final_project:/config_dir
entrypoint: "/app/myapp --config=/config_dir/myapp.conf"
depends_on:
selenoid:
condition: service_started
mysql_db:
condition: service_healthy
mock:
condition: service_healthy
I tried to write -v //c/<path> like I saw in some Stack Overflow questions, but that didn't work for me.
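For reference, the path forms usually suggested for Docker Desktop on Windows look roughly like this (only a sketch, assuming the files live under C:\Users\rolf; behaviour differs between Docker/Compose versions):
volumes:
  - 'C:\Users\rolf\.aerokube\selenoid:/etc/selenoid'    # Windows-style absolute path
# or, alternatively:
#  - '/c/Users/rolf/.aerokube/selenoid:/etc/selenoid'   # POSIX-style drive path
Setting the environment variable COMPOSE_CONVERT_WINDOWS_PATHS=1 before running docker-compose is also commonly recommended.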
I'm using docker-compose to deploy Rasa. I'm facing an issue: when I try to retrieve the sender_id in custom actions, it always comes back as "default".
Please find my docker-compose below:
version: "3.4"
x-database-credentials: &database-credentials
DB_HOST: "db"
DB_PORT: "5432"
DB_USER: "${DB_USER:-admin}"
DB_PASSWORD: "${DB_PASSWORD}"
DB_LOGIN_DB: "${DB_LOGIN_DB:-rasa}"
x-rabbitmq-credentials: &rabbitmq-credentials
RABBITMQ_HOST: "rabbit"
RABBITMQ_USERNAME: "user"
RABBITMQ_PASSWORD: ${RABBITMQ_PASSWORD}
x-redis-credentials: &redis-credentials
REDIS_HOST: "redis"
REDIS_PORT: "6379"
REDIS_PASSWORD: ${REDIS_PASSWORD}
REDIS_DB: "1"
x-duckling-credentials: &duckling-credentials
RASA_DUCKLING_HTTP_URL: "http://duckling:8000"
services:
rasa-production:
restart: always
image: "rasa/rasa:${RASA_VERSION}-full"
ports:
- "5006:5005"
volumes:
- ./:/app
command:
- run
- --cors
- "*"
environment:
<<: *database-credentials
<<: *redis-credentials
<<: *rabbitmq-credentials
DB_DATABASE: "${DB_DATABASE:-rasa}"
RABBITMQ_QUEUE: "rasa_production_events"
RASA_TELEMETRY_ENABLED: ${RASA_TELEMETRY_ENABLED:-true}
depends_on:
- app
- rabbit
- redis
- db
app:
restart: always
build: actions/.
volumes:
- ./actions:/app/actions
expose:
- "5055"
environment:
SERVICE_BASE_URL: "${SERVICE_BASE_URL}"
RASA_SDK_VERSION: "${RASA_SDK_VERSION}"
depends_on:
- redis
scheduler:
restart: always
build: scheduler/.
environment:
SERVICE_BASE_URL: "${SERVICE_BASE_URL}"
duckling:
restart: always
image: "rasa/duckling:0.1.6.3"
expose:
- "8000"
command: ["duckling-example-exe", "--no-access-log", "--no-error-log"]
# revers_proxy:
# image: nginx
# ports:
# - 80:80
# - 443:443
# volumes:
# - ./config/nginx/:/etc/nginx/conf.d/
# depends_on:
# - rasa-production
# - app
mongo:
image: mongo:4.2.0
ports:
- 27017:27017
# revers_proxy:
# image: nginx
# ports:
# - 5006:5006
# volumes:
# - ./config/nginx/defaul.conf:/etc/nginx/conf.d/default.conf
# depends_on:
# - rasa-production
# - app
redis:
restart: always
image: "bitnami/redis:6.0.8"
environment:
ALLOW_EMPTY_PASSWORD: "yes"
REDIS_PASSWORD: ${REDIS_PASSWORD}
expose:
- "6379"
redisapp:
restart: always
image: "bitnami/redis:6.0.8"
environment:
ALLOW_EMPTY_PASSWORD: "yes"
expose:
- "6379"
rabbit:
restart: always
image: "bitnami/rabbitmq:3.8.9"
environment:
RABBITMQ_HOST: "rabbit"
RABBITMQ_USERNAME: "user"
RABBITMQ_PASSWORD: ${RABBITMQ_PASSWORD}
RABBITMQ_DISK_FREE_LIMIT: "{mem_relative, 0.1}"
expose:
- "5672"
db:
restart: always
image: "bitnami/postgresql:11.9.0"
expose:
- "5432"
environment:
POSTGRESQL_USERNAME: "${DB_USER:-admin}"
POSTGRESQL_PASSWORD: "${DB_PASSWORD}"
POSTGRESQL_DATABASE: "${DB_DATABASE:-rasa}"
volumes:
- ./db:/bitnami/postgresql
My endpoints.yml file:
tracker_store:
type: sql
dialect: "postgresql"
url: ${DB_HOST}
port: ${DB_PORT}
username: ${DB_USER}
password: ${DB_PASSWORD}
db: ${DB_DATABASE}
login_db: ${DB_LOGIN_DB}
lock_store:
type: "redis"
url: ${REDIS_HOST}
port: ${REDIS_PORT}
password: ${REDIS_PASSWORD}
db: ${REDIS_DB}
event_broker:
type: "pika"
url: ${RABBITMQ_HOST}
username: ${RABBITMQ_USERNAME}
password: ${RABBITMQ_PASSWORD}
queue: rasa_production_events
Rasa Version : 2.1.0
Rasa SDK Version : 2.1.1
Rasa X Version : None
Python Version : 3.8.5
Operating System : Linux-5.4.0-48-generic-x86_64-with-glibc2.29
Python Path : /usr/bin/python3
Any help overcoming this issue would be appreciated.
This looks like the bug described in this GitHub issue: https://github.com/RasaHQ/rasa/issues/7338
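For context, outside of that bug the sender_id seen in custom actions is simply whatever the client sends to the input channel. With the REST channel that would look something like this (a sketch; user-123 is a made-up sender value, and 5006 is the host port mapped in the compose file above):
curl -s -X POST http://localhost:5006/webhooks/rest/webhook \
  -H "Content-Type: application/json" \
  -d '{"sender": "user-123", "message": "hello"}'
If no sender is supplied, Rasa typically falls back to "default".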
I'm trying to set up Docker with Traefik to use a self-signed certificate on localhost.
I am developing on my local machine and I want to use Docker with Traefik. The problem is that I can't get a self-signed certificate to work with my setup. I need someone to point me in the right direction!
The certificate shown in the browser is always TRAEFIK DEFAULT CERT, or I get a 404 page not found when I enter my domain.
My docker-compose.yaml
version: "3.7"
services:
mariadb:
image: wodby/mariadb:$MARIADB_TAG
container_name: "${PROJECT_NAME}_mariadb"
stop_grace_period: 30s
environment:
MYSQL_ROOT_PASSWORD: $DB_ROOT_PASSWORD
MYSQL_DATABASE: $DB_NAME
MYSQL_USER: $DB_USER
MYSQL_PASSWORD: $DB_PASSWORD
ports:
- 3306:3306
volumes:
# - ./mariadb-init:/docker-entrypoint-initdb.d # Place init .sql file(s) here.
- mysql:/var/lib/mysql # I want to manage volumes manually.
php:
image: wodby/wordpress-php:$PHP_TAG
container_name: "${PROJECT_NAME}_php"
environment:
PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
DB_HOST: $DB_HOST
DB_USER: $DB_USER
DB_PASSWORD: $DB_PASSWORD
DB_NAME: $DB_NAME
PHP_FPM_USER: wodby
PHP_FPM_GROUP: wodby
## Read instructions at https://wodby.com/docs/stacks/wordpress/local#xdebug
# PHP_XDEBUG: 1
# PHP_XDEBUG_DEFAULT_ENABLE: 1
# PHP_XDEBUG_REMOTE_CONNECT_BACK: 0
# PHP_IDE_CONFIG: serverName=my-ide
# PHP_XDEBUG_IDEKEY: "my-ide"
# PHP_XDEBUG_REMOTE_HOST: 172.17.0.1 # Linux
# PHP_XDEBUG_REMOTE_HOST: 10.254.254.254 # macOS
# PHP_XDEBUG_REMOTE_HOST: 10.0.75.1 # Windows
volumes:
# - ./app:/var/www/html
## For macOS users (https://wodby.com/docs/stacks/wordpress/local#docker-for-mac)
- ./app:/var/www/html:cached # User-guided caching
# - docker-sync:/var/www/html # Docker-sync
## For XHProf and Xdebug profiler traces
# - files:/mnt/files
nginx:
image: wodby/nginx:$NGINX_TAG
container_name: "${PROJECT_NAME}_nginx"
depends_on:
- php
environment:
NGINX_STATIC_OPEN_FILE_CACHE: "off"
NGINX_ERROR_LOG_LEVEL: debug
NGINX_BACKEND_HOST: php
NGINX_VHOST_PRESET: wordpress
#NGINX_SERVER_ROOT: /var/www/html/subdir
volumes:
# - ./app:/var/www/html
# Options for macOS users (https://wodby.com/docs/stacks/wordpress/local#docker-for-mac)
- ./app:/var/www/html:cached # User-guided caching
# - docker-sync:/var/www/html # Docker-sync
labels:
- "traefik.http.routers.${PROJECT_NAME}_nginx.rule=Host(`${PROJECT_BASE_URL}`)"
- "traefik.http.routers.${PROJECT_NAME}_nginx.tls=true"
# - "traefik.http.routers.${PROJECT_NAME}_nginx.tls.certResolver=${PROJECT_BASE_URL}"
mailhog:
image: mailhog/mailhog
container_name: "${PROJECT_NAME}_mailhog"
labels:
- "traefik.http.services.${PROJECT_NAME}_mailhog.loadbalancer.server.port=8025"
-"traefik.http.routers.${PROJECT_NAME}_mailhog.rule=Host(`mailhog.${PROJECT_BASE_URL}`)"
portainer:
image: portainer/portainer
container_name: "${PROJECT_NAME}_portainer"
command: --no-auth -H unix:///var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
labels:
- "traefik.http.routers.${PROJECT_NAME}_portainer.rule=Host(`portainer.${PROJECT_BASE_URL}`)"
traefik:
image: traefik:v2.0
container_name: "${PROJECT_NAME}_traefik"
ports:
- "80:80"
- "443:443"
- "8080:8080" # Dashboard
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./traefik:/etc/traefik
- ./certs:/certs
volumes:
mysql:
## Docker-sync for macOS users
# docker-sync:
# external: true
## For Xdebug profiler
# files:
My traefik.yml
providers:
file:
filename: "/etc/traefik/config.yml"
docker:
endpoint: "unix:///var/run/docker.sock"
api:
insecure: true
entryPoints:
web:
address: ":80"
web-secure:
address: ":443"
And my config.yml (my understanding is that the TLS config has to be in a separate file!?)
tls:
certificates:
- certFile: /certs/domain.test.crt
- certKey: /certs/domain.test.key
I have been battling with this for a bit now and I seem to have found the combination that gets it working. Note: you do not need to have your TLS config in a separate file.
[providers]
  [providers.file]
    # point the file provider at this same file
    filename = "/etc/traefik/traefik.toml"

[tls.stores.default.defaultCertificate]
  certFile = "/certs/mycert.crt"
  keyFile = "/certs/mycert.key"
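For anyone using YAML rather than TOML, the equivalent default-certificate block (placed in the dynamic configuration file, e.g. the config.yml from the question) would look roughly like this:
tls:
  stores:
    default:
      defaultCertificate:
        certFile: /certs/domain.test.crt
        keyFile: /certs/domain.test.key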
I have now solved it. My final docker-compose.yml looks like this.
Many thanks to @fffnite
version: "3.7"
services:
mariadb:
image: wodby/mariadb:$MARIADB_TAG
container_name: "${PROJECT_NAME}_mariadb"
stop_grace_period: 30s
environment:
MYSQL_ROOT_PASSWORD: $DB_ROOT_PASSWORD
MYSQL_DATABASE: $DB_NAME
MYSQL_USER: $DB_USER
MYSQL_PASSWORD: $DB_PASSWORD
ports:
- 3306:3306
volumes:
# - ./mariadb-init:/docker-entrypoint-initdb.d # Place init .sql file(s) here.
- mysql:/var/lib/mysql # I want to manage volumes manually.
php:
image: wodby/wordpress-php:$PHP_TAG
container_name: "${PROJECT_NAME}_php"
environment:
PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
DB_HOST: $DB_HOST
DB_USER: $DB_USER
DB_PASSWORD: $DB_PASSWORD
DB_NAME: $DB_NAME
PHP_FPM_USER: wodby
PHP_FPM_GROUP: wodby
## Read instructions at https://wodby.com/docs/stacks/wordpress/local#xdebug
# PHP_XDEBUG: 1
# PHP_XDEBUG_DEFAULT_ENABLE: 1
# PHP_XDEBUG_REMOTE_CONNECT_BACK: 0
# PHP_IDE_CONFIG: serverName=my-ide
# PHP_XDEBUG_IDEKEY: "my-ide"
# PHP_XDEBUG_REMOTE_HOST: 172.17.0.1 # Linux
# PHP_XDEBUG_REMOTE_HOST: 10.254.254.254 # macOS
# PHP_XDEBUG_REMOTE_HOST: 10.0.75.1 # Windows
volumes:
# - ./app:/var/www/html
## For macOS users (https://wodby.com/docs/stacks/wordpress/local#docker-for-mac)
- ./app:/var/www/html:cached # User-guided caching
# - docker-sync:/var/www/html # Docker-sync
## For XHProf and Xdebug profiler traces
# - files:/mnt/files
nginx:
image: wodby/nginx:$NGINX_TAG
container_name: "${PROJECT_NAME}_nginx"
depends_on:
- php
environment:
NGINX_STATIC_OPEN_FILE_CACHE: "off"
NGINX_ERROR_LOG_LEVEL: debug
NGINX_BACKEND_HOST: php
NGINX_VHOST_PRESET: wordpress
#NGINX_SERVER_ROOT: /var/www/html/subdir
volumes:
# - ./app:/var/www/html
# Options for macOS users (https://wodby.com/docs/stacks/wordpress/local#docker-for-mac)
- ./app:/var/www/html:cached # User-guided caching
# - docker-sync:/var/www/html # Docker-sync
labels:
- "traefik.http.routers.${PROJECT_NAME}_nginx.rule=Host(`${PROJECT_BASE_URL}`)"
- "traefik.http.routers.${PROJECT_NAME}_nginx.entrypoints=web"
- "traefik.http.middlewares.${PROJECT_NAME}_https_nginx.redirectscheme.scheme=https"
- "traefik.http.routers.${PROJECT_NAME}_https_nginx.rule=Host(`${PROJECT_BASE_URL}`)"
- "traefik.http.routers.${PROJECT_NAME}_https_nginx.entrypoints=web-secure"
- "traefik.http.routers.${PROJECT_NAME}_https_nginx.tls=true"
mailhog:
image: mailhog/mailhog
container_name: "${PROJECT_NAME}_mailhog"
labels:
- "traefik.http.services.${PROJECT_NAME}_mailhog.loadbalancer.server.port=8025"
- "traefik.http.routers.${PROJECT_NAME}_mailhog.rule=Host(`mailhog.${PROJECT_BASE_URL}`)"
portainer:
image: portainer/portainer
container_name: "${PROJECT_NAME}_portainer"
command: --no-auth -H unix:///var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
labels:
- "traefik.http.routers.${PROJECT_NAME}_portainer.rule=Host(`portainer.${PROJECT_BASE_URL}`)"
traefik:
image: traefik:v2.0
container_name: "${PROJECT_NAME}_traefik"
ports:
- "80:80"
- "443:443"
- "8080:8080" # Dashboard
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./traefik:/etc/traefik
- ./certs:/certs
volumes:
mysql:
## Docker-sync for macOS users
# docker-sync:
# external: true
## For Xdebug profiler
# files:
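For completeness, the self-signed certificate pair referenced under ./certs can be generated with something like this (a sketch; adjust the file names and CN/SAN to whatever your config.yml references, and note that -addext needs OpenSSL 1.1.1 or newer):
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout certs/domain.test.key -out certs/domain.test.crt \
  -subj "/CN=domain.test" \
  -addext "subjectAltName=DNS:domain.test,DNS:*.domain.test"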
When I open the URL http://0.0.0.0:9200/ it works.
However, I get the following error when I try to save or retrieve data:
ConnectionError(<urllib3.connection.HTTPConnection object at
0x7f234afdacc0>: Failed to establish a new connection: [Errno 111]
Connection refused) caused by:
NewConnectionError(<urllib3.connection.HTTPConnection object at
0x7f234afdacc0>: Failed to establish a new connection: [Errno 111]
Connection refused)
My docker-compose.yml:
version: "2"
services:
redis:
image: redis:latest
rabbit:
image: rabbitmq:latest
ports:
- "5672:5672"
- "15672:15672"
mysql:
image: mysql:5.7.22
environment:
MYSQL_DATABASE: db
MYSQL_ROOT_PASSWORD: 'db'
ports:
- 3306
phpmyadmin:
image: nazarpc/phpmyadmin
environment:
MYSQL_USERNAME: db
ports:
- "0.0.0.0:8081:80"
links:
- mysql:mysql
celery_worker:
build:
context: .
command: bash -c "sleep 3 && celery -A wk worker -l debug"
volumes:
- /log:/log
- /tmp:/tmp
- ./wk:/wk
env_file:
- ./envs/development.env
environment:
- C_FORCE_ROOT=true
- BROKER_URL=amqp://guest:guest@rabbit//
working_dir: /wk
links:
- mysql:mysql
- rabbit:rabbit
- redis:redis
celery_worker_refactor:
build:
context: .
command: bash -c "sleep 10 && celery -A wk worker -l error -Ofair -Q refactor"
volumes:
- /log:/log
- /tmp:/tmp
- ./wk:/wk
env_file:
- ./envs/development.env
environment:
- C_FORCE_ROOT=true
- BROKER_URL=amqp://guest:guest@rabbit//
working_dir: /wk
links:
- mysql:mysql
- rabbit:rabbit
- redis:redis
celery_beat:
build:
context: .
command: bash -c "rm -f /tmp/celerybeat.pid && sleep 3 && celery -A wk beat -l debug -s /log/celerybeat --pidfile=/tmp/celerybeat.pid"
volumes:
- /log:/log
- /tmp:/tmp
- ./wk:/wk
env_file:
- ./envs/development.env
environment:
- C_FORCE_ROOT=true
- BROKER_URL=amqp://guest:guest@rabbit//
working_dir: /wk
links:
- mysql:mysql
- rabbit:rabbit
- redis:redis
ssh_server:
build:
context: .
dockerfile: Dockerfile-ssh
command: /usr/sbin/sshd -D
environment:
DEBUG: 'True'
env_file:
- ./envs/development.env
environment:
- BROKER_URL=amqp://guest:guest@rabbit//
volumes:
- ./wk:/wk
ports:
- "2222:22"
links:
- mysql:mysql
- redis:redis
- rabbit:rabbit
elasticsearch:
image: elasticsearch
environment:
- cluster.name=docker-cluster
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- xpack.security.enabled=false
volumes:
- esdata:/usr/share/elasticsearch/data
ports:
- "9200:9200"
- "9300:9300"
kibana:
image: kibana
ports:
- 5601:5601
web:
build:
context: .
command: bash -c "sleep 3 && python manage.py runserver 0.0.0.0:8000"
privileged: true
env_file:
- ./envs/development.env
environment:
- BROKER_URL=amqp://guest:guest@rabbit//
volumes:
- /log:/log
- /tmp:/tmp
- ./wk:/wk
depends_on:
- elasticsearch
links:
- mysql:mysql
- redis:redis
- rabbit:rabbit
- elasticsearch:elasticsearch
ports:
- "0.0.0.0:8000:8000"
volumes:
esdata:
driver: local
I tried to create a network between web and elasticsearch; same result.
When I call http://0.0.0.0:9200/ I get a JSON response from the server.
My Haystack config in settings.py
HAYSTACK_CONNECTIONS = {
'default': {
'ENGINE':'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
'URL': '0.0.0.0:9200/',
'INDEX_NAME': 'haystack'
}
}
HAYSTACK_SIGNAL_PROCESSOR = 'haystack.signals.RealtimeSignalProcessor'
You should use the service name elasticsearch instead of 0.0.0.0:
HAYSTACK_CONNECTIONS = {
'default': {
'ENGINE':'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
>>> 'URL': 'elasticsearch:9200',
'INDEX_NAME': 'haystack'
}
}
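To check that the service name resolves from inside the web container, something like this should return the same JSON you see in the browser (a sketch, assuming curl is available in the web image; otherwise use python with requests):
docker-compose exec web curl -s http://elasticsearch:9200/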