Failed to establish a new connection, Docker-Compose Django + Haystack + Mysql + ElasticSearch - elasticsearch

When I open the URL http://0.0.0.0:9200/ it works.
But I get the following error whenever I try to save or retrieve data:
ConnectionError(<urllib3.connection.HTTPConnection object at
0x7f234afdacc0>: Failed to establish a new connection: [Errno 111]
Connection refused) caused by:
NewConnectionError(<urllib3.connection.HTTPConnection object at
0x7f234afdacc0>: Failed to establish a new connection: [Errno 111]
Connection refused)
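The [Errno 111] refusal simply means nothing was listening at the address the client dialed (here, 0.0.0.0:9200 as seen from inside the web container). A minimal sketch reproducing the same errno by dialing a loopback port that has no listener:

```python
import errno
import socket

def try_connect(host, port, timeout=1.0):
    """Attempt a TCP connection; return the OS errno, or 0 on success."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port))

# Grab a port that is momentarily free, close the listener, then dial it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()

result = try_connect("127.0.0.1", free_port)
# On Linux this is errno 111 (ECONNREFUSED), the same failure urllib3 wraps.
assert result == errno.ECONNREFUSED
```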
My docker-compose.yml:
version: "2"
services:
  redis:
    image: redis:latest
  rabbit:
    image: rabbitmq:latest
    ports:
      - "5672:5672"
      - "15672:15672"
  mysql:
    image: mysql:5.7.22
    environment:
      MYSQL_DATABASE: db
      MYSQL_ROOT_PASSWORD: 'db'
    ports:
      - 3306
  phpmyadmin:
    image: nazarpc/phpmyadmin
    environment:
      MYSQL_USERNAME: db
    ports:
      - "0.0.0.0:8081:80"
    links:
      - mysql:mysql
  celery_worker:
    build:
      context: .
    command: bash -c "sleep 3 && celery -A wk worker -l debug"
    volumes:
      - /log:/log
      - /tmp:/tmp
      - ./wk:/wk
    env_file:
      - ./envs/development.env
    environment:
      - C_FORCE_ROOT=true
      - BROKER_URL=amqp://guest:guest@rabbit//
    working_dir: /wk
    links:
      - mysql:mysql
      - rabbit:rabbit
      - redis:redis
  celery_worker_refactor:
    build:
      context: .
    command: bash -c "sleep 10 && celery -A wk worker -l error -Ofair -Q refactor"
    volumes:
      - /log:/log
      - /tmp:/tmp
      - ./wk:/wk
    env_file:
      - ./envs/development.env
    environment:
      - C_FORCE_ROOT=true
      - BROKER_URL=amqp://guest:guest@rabbit//
    working_dir: /wk
    links:
      - mysql:mysql
      - rabbit:rabbit
      - redis:redis
  celery_beat:
    build:
      context: .
    command: bash -c "rm -f /tmp/celerybeat.pid && sleep 3 && celery -A wk beat -l debug -s /log/celerybeat --pidfile=/tmp/celerybeat.pid"
    volumes:
      - /log:/log
      - /tmp:/tmp
      - ./wk:/wk
    env_file:
      - ./envs/development.env
    environment:
      - C_FORCE_ROOT=true
      - BROKER_URL=amqp://guest:guest@rabbit//
    working_dir: /wk
    links:
      - mysql:mysql
      - rabbit:rabbit
      - redis:redis
  ssh_server:
    build:
      context: .
      dockerfile: Dockerfile-ssh
    command: /usr/sbin/sshd -D
    env_file:
      - ./envs/development.env
    environment:
      - DEBUG=True
      - BROKER_URL=amqp://guest:guest@rabbit//
    volumes:
      - ./wk:/wk
    ports:
      - "2222:22"
    links:
      - mysql:mysql
      - redis:redis
      - rabbit:rabbit
  elasticsearch:
    image: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
  kibana:
    image: kibana
    ports:
      - 5601:5601
  web:
    build:
      context: .
    command: bash -c "sleep 3 && python manage.py runserver 0.0.0.0:8000"
    privileged: true
    env_file:
      - ./envs/development.env
    environment:
      - BROKER_URL=amqp://guest:guest@rabbit//
    volumes:
      - /log:/log
      - /tmp:/tmp
      - ./wk:/wk
    depends_on:
      - elasticsearch
    links:
      - mysql:mysql
      - redis:redis
      - rabbit:rabbit
      - elasticsearch:elasticsearch
    ports:
      - "0.0.0.0:8000:8000"
volumes:
  esdata:
    driver: local
I tried to create a network between web and elasticsearch; same result.
When I call http://0.0.0.0:9200/ from the host, I get a JSON response from the server.
My Haystack config in settings.py:
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': '0.0.0.0:9200/',
        'INDEX_NAME': 'haystack',
    }
}
HAYSTACK_SIGNAL_PROCESSOR = 'haystack.signals.RealtimeSignalProcessor'

You should use the service name elasticsearch instead of 0.0.0.0:
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'elasticsearch:9200',
        'INDEX_NAME': 'haystack',
    }
}
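Inside the Compose network the service name resolves through Docker's embedded DNS, while 0.0.0.0:9200 only works from the host because of the published port. One way to keep both cases working is to make the URL configurable; a sketch, where the `ELASTICSEARCH_URL` variable name and the `haystack_url` helper are assumptions of this example, not part of the original project:

```python
import os

def haystack_url(default="elasticsearch:9200"):
    """Resolve the Haystack backend URL from the environment.

    Falls back to the Compose service name, which Docker's embedded DNS
    resolves inside the project network. ELASTICSEARCH_URL is a
    hypothetical variable name chosen for this sketch.
    """
    return os.environ.get("ELASTICSEARCH_URL", default)

HAYSTACK_CONNECTIONS = {
    "default": {
        "ENGINE": "haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine",
        "URL": haystack_url(),
        "INDEX_NAME": "haystack",
    }
}
```

With this, local runs outside Docker can export `ELASTICSEARCH_URL=localhost:9200` without touching settings.py.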

Related

In redis master-slave-sentinel configuration and Spring Boot application, RedisConnectionException occurred

I configured a redis master-slave-sentinel environment using docker-compose, completed it, and tested it successfully.
But in my Spring Boot application, which uses the sentinel config class below, a RedisConnectionException occurs.
@Configuration
@EnableRedisRepositories
public class RedisConfig {

    @Bean
    public RedisConnectionFactory redisConnectionFactory() {
        RedisSentinelConfiguration redisSentinelConfiguration = new RedisSentinelConfiguration()
                .master("redis-master")
                .sentinel("localhost", 26379)
                .sentinel("localhost", 26380)
                .sentinel("localhost", 26381);
        return new LettuceConnectionFactory(redisSentinelConfiguration);
    }
}
When I run the application, the problem occurs (the original post included a screenshot of the exception).
I think it is related to the docker-compose network configuration, but I don't understand why.
If you know, please let me know.
Below is the redis master-slave-sentinel docker-compose.yml:
version: '3.9'
services:
  redis-master:
    hostname: redis-master
    container_name: redis-master
    image: bitnami/redis:6.2.6
    environment:
      - REDIS_REPLICATION_MODE=master
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - ./master/backup:/freelog/backup
    networks:
      - net-redis
    ports:
      - 6379:6379
  # slave1 : bitnami/redis:6.2.6
  redis-slave-1:
    hostname: redis-slave-1
    container_name: redis-slave-1
    image: bitnami/redis:6.2.6
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - 6480:6379
    volumes:
      - ./slave1/backup:/freelog/backup
    networks:
      - net-redis
    depends_on:
      - redis-master
  # slave2 : bitnami/redis:6.2.6
  redis-slave-2:
    hostname: redis-slave-2
    container_name: redis-slave-2
    image: bitnami/redis:6.2.6
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - 6481:6379
    networks:
      - net-redis
    volumes:
      - ./slave2/backup:/freelog/backup
    depends_on:
      - redis-master
      - redis-slave-1
  # slave3 : bitnami/redis:6.2.6
  redis-slave-3:
    hostname: redis-slave-3
    container_name: redis-slave-3
    image: bitnami/redis:6.2.6
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - 6482:6379
    volumes:
      - ./slave3/backup:/freelog/backup
    networks:
      - net-redis
    depends_on:
      - redis-master
      - redis-slave-2
  # sentinel1 : bitnami/redis-sentinel:6.2.6
  redis-sentinel-1:
    hostname: redis-sentinel-1
    container_name: redis-sentinel-1
    image: bitnami/redis-sentinel:6.2.6
    environment:
      - REDIS_SENTINEL_DOWN_AFTER_MILLISECONDS=3000
      - REDIS_SENTINEL_FAILOVER_TIMEOUT=60000
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_SET=master-name
      - REDIS_SENTINEL_QUORUM=2
      # - REDIS_SENTINEL_PASSWORD=170anwkd!
    depends_on:
      - redis-master
      - redis-slave-1
      - redis-slave-2
      - redis-slave-3
    ports:
      - 26379:26379
    networks:
      - net-redis
    volumes:
      - ./sentinel1/backup:/freelog/backup
  redis-sentinel-2:
    hostname: redis-sentinel-2
    container_name: redis-sentinel-2
    image: bitnami/redis-sentinel:6.2.6
    environment:
      - REDIS_SENTINEL_DOWN_AFTER_MILLISECONDS=3000
      - REDIS_SENTINEL_FAILOVER_TIMEOUT=60000
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_SET=master-name
      - REDIS_SENTINEL_QUORUM=2
      # - REDIS_SENTINEL_PASSWORD=170anwkd!
    depends_on:
      - redis-master
      - redis-slave-1
      - redis-slave-2
      - redis-slave-3
    ports:
      - 26380:26379
    networks:
      - net-redis
    volumes:
      - ./sentinel2/backup:/freelog/backup
  # sentinel3 : bitnami/redis-sentinel:6.2.6
  redis-sentinel-3:
    hostname: redis-sentinel-3
    container_name: redis-sentinel-3
    image: bitnami/redis-sentinel:6.2.6
    environment:
      - REDIS_SENTINEL_DOWN_AFTER_MILLISECONDS=3000
      - REDIS_SENTINEL_FAILOVER_TIMEOUT=60000
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_SET=master-name
      - REDIS_SENTINEL_QUORUM=2
      # - REDIS_SENTINEL_PASSWORD=170anwkd!
    depends_on:
      - redis-master
      - redis-slave-1
      - redis-slave-2
      - redis-slave-3
    ports:
      - 26381:26379
    networks:
      - net-redis
    volumes:
      - ./sentinel3/backup:/freelog/backup
networks:
  net-redis:
    driver: bridge

Run docker-compose without access to environment variables

Is it possible to run a docker-compose.yml file without having access to the environment variables (on Windows)? I have Docker Desktop installed successfully. Below is the docker-compose.yml.
---
version: '3.8'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-server:5.5.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  schema-registry:
    image: confluentinc/cp-schema-registry:5.5.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
  connect:
    image: cnfldemos/cp-server-connect-datagen:0.3.2-5.5.0
    hostname: connect
    container_name: connect
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      # CLASSPATH required due to CC-2422
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.5.1.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
  control-center:
    image: confluentinc/cp-enterprise-control-center:5.5.1
    hostname: control-center
    container_name: control-center
    depends_on:
      - zookeeper
      - broker
      - schema-registry
      - connect
      - ksqldb-server
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      CONTROL_CENTER_CONNECT_CLUSTER: 'connect:8083'
      CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://ksqldb-server:8088"
      CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021
  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:5.5.1
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - connect
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "broker:29092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      KSQL_KSQL_CONNECT_URL: "http://connect:8083"
  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:5.5.1
    container_name: ksqldb-cli
    depends_on:
      - broker
      - connect
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
  ksql-datagen:
    image: confluentinc/ksqldb-examples:5.5.1
    hostname: ksql-datagen
    container_name: ksql-datagen
    depends_on:
      - ksqldb-server
      - broker
      - schema-registry
      - connect
    command: "bash -c 'echo Waiting for Kafka to be ready... &&
      cub kafka-ready -b broker:29092 1 40 &&
      echo Waiting for Confluent Schema Registry to be ready... &&
      cub sr-ready schema-registry 8081 40 &&
      echo Waiting a few seconds for topic creation to finish... &&
      sleep 11 &&
      tail -f /dev/null'"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      STREAMS_BOOTSTRAP_SERVERS: broker:29092
      STREAMS_SCHEMA_REGISTRY_HOST: schema-registry
      STREAMS_SCHEMA_REGISTRY_PORT: 8081
  rest-proxy:
    image: confluentinc/cp-kafka-rest:5.5.1
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'

Why does the connect-datagen container of my confluent kafka docker-compose disconnect after a couple of minutes?

I am trying to run a bare-minimum Confluent Community example on Docker for Windows (Toolbox), using the example given here:
https://docs.confluent.io/current/quickstart/cos-docker-quickstart.html
All components seem to start, but the connect container fails after a couple of minutes.
This is my docker-compose.yml:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-server:6.1.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  schema-registry:
    image: confluentinc/cp-schema-registry:6.1.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
  connect:
    image: cnfldemos/cp-server-connect-datagen:0.4.0-6.1.0
    hostname: connect
    container_name: connect
    depends_on:
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      # CLASSPATH required due to CC-2422
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-6.1.1.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
  control-center:
    image: confluentinc/cp-enterprise-control-center:6.1.1
    hostname: control-center
    container_name: control-center
    depends_on:
      - broker
      - schema-registry
      - connect
      - ksqldb-server
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_CONNECT_CLUSTER: 'connect:8083'
      CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://ksqldb-server:8088"
      CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021
  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:6.1.1
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - connect
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "broker:29092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      KSQL_KSQL_CONNECT_URL: "http://connect:8083"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'
  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:6.1.1
    container_name: ksqldb-cli
    depends_on:
      - broker
      - connect
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
  ksql-datagen:
    image: confluentinc/ksqldb-examples:6.1.1
    hostname: ksql-datagen
    container_name: ksql-datagen
    depends_on:
      - ksqldb-server
      - broker
      - schema-registry
      - connect
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
      cub kafka-ready -b broker:29092 1 40 && \
      echo Waiting for Confluent Schema Registry to be ready... && \
      cub sr-ready schema-registry 8081 40 && \
      echo Waiting a few seconds for topic creation to finish... && \
      sleep 11 && \
      tail -f /dev/null'"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      STREAMS_BOOTSTRAP_SERVERS: broker:29092
      STREAMS_SCHEMA_REGISTRY_HOST: schema-registry
      STREAMS_SCHEMA_REGISTRY_PORT: 8081
  rest-proxy:
    image: confluentinc/cp-kafka-rest:6.1.1
    depends_on:
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
Can you help me?
I had the same problem and found a solution.
Look at the Docker container status: exited 137. That exit code means the container ran out of memory, so you must allocate at least 6 GB of memory to Docker.
You can also check the Prerequisites section of the documentation:
https://docs.confluent.io/platform/current/quickstart/cos-docker-quickstart.html
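Exit status 137 is 128 plus the signal number, i.e. SIGKILL (signal 9), which is what the kernel's OOM killer delivers. A small sketch decoding container exit statuses the same way:

```python
import signal

def decode_exit_status(status):
    """Decode a container exit status: codes above 128 mean 128 + signal number."""
    if status > 128:
        return f"killed by signal {signal.Signals(status - 128).name}"
    return f"exited normally with code {status}"

# 137 = 128 + 9 (SIGKILL), the classic out-of-memory kill;
# 143 = 128 + 15 (SIGTERM), an ordinary `docker stop`.
assert decode_exit_status(137) == "killed by signal SIGKILL"
```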

RASA: Sender_id always returned as “default”

I'm using docker-compose to deploy Rasa. When I try to retrieve the sender_id in custom actions, it is always returned as "default".
Please find my docker-compose below:
version: "3.4"
x-database-credentials: &database-credentials
  DB_HOST: "db"
  DB_PORT: "5432"
  DB_USER: "${DB_USER:-admin}"
  DB_PASSWORD: "${DB_PASSWORD}"
  DB_LOGIN_DB: "${DB_LOGIN_DB:-rasa}"
x-rabbitmq-credentials: &rabbitmq-credentials
  RABBITMQ_HOST: "rabbit"
  RABBITMQ_USERNAME: "user"
  RABBITMQ_PASSWORD: ${RABBITMQ_PASSWORD}
x-redis-credentials: &redis-credentials
  REDIS_HOST: "redis"
  REDIS_PORT: "6379"
  REDIS_PASSWORD: ${REDIS_PASSWORD}
  REDIS_DB: "1"
x-duckling-credentials: &duckling-credentials
  RASA_DUCKLING_HTTP_URL: "http://duckling:8000"
services:
  rasa-production:
    restart: always
    image: "rasa/rasa:${RASA_VERSION}-full"
    ports:
      - "5006:5005"
    volumes:
      - ./:/app
    command:
      - run
      - --cors
      - "*"
    environment:
      <<: *database-credentials
      <<: *redis-credentials
      <<: *rabbitmq-credentials
      DB_DATABASE: "${DB_DATABASE:-rasa}"
      RABBITMQ_QUEUE: "rasa_production_events"
      RASA_TELEMETRY_ENABLED: ${RASA_TELEMETRY_ENABLED:-true}
    depends_on:
      - app
      - rabbit
      - redis
      - db
  app:
    restart: always
    build: actions/.
    volumes:
      - ./actions:/app/actions
    expose:
      - "5055"
    environment:
      SERVICE_BASE_URL: "${SERVICE_BASE_URL}"
      RASA_SDK_VERSION: "${RASA_SDK_VERSION}"
    depends_on:
      - redis
  scheduler:
    restart: always
    build: scheduler/.
    environment:
      SERVICE_BASE_URL: "${SERVICE_BASE_URL}"
  duckling:
    restart: always
    image: "rasa/duckling:0.1.6.3"
    expose:
      - "8000"
    command: ["duckling-example-exe", "--no-access-log", "--no-error-log"]
  # revers_proxy:
  #   image: nginx
  #   ports:
  #     - 80:80
  #     - 443:443
  #   volumes:
  #     - ./config/nginx/:/etc/nginx/conf.d/
  #   depends_on:
  #     - rasa-production
  #     - app
  mongo:
    image: mongo:4.2.0
    ports:
      - 27017:27017
  # revers_proxy:
  #   image: nginx
  #   ports:
  #     - 5006:5006
  #   volumes:
  #     - ./config/nginx/defaul.conf:/etc/nginx/conf.d/default.conf
  #   depends_on:
  #     - rasa-production
  #     - app
  redis:
    restart: always
    image: "bitnami/redis:6.0.8"
    environment:
      ALLOW_EMPTY_PASSWORD: "yes"
      REDIS_PASSWORD: ${REDIS_PASSWORD}
    expose:
      - "6379"
  redisapp:
    restart: always
    image: "bitnami/redis:6.0.8"
    environment:
      ALLOW_EMPTY_PASSWORD: "yes"
    expose:
      - "6379"
  rabbit:
    restart: always
    image: "bitnami/rabbitmq:3.8.9"
    environment:
      RABBITMQ_HOST: "rabbit"
      RABBITMQ_USERNAME: "user"
      RABBITMQ_PASSWORD: ${RABBITMQ_PASSWORD}
      RABBITMQ_DISK_FREE_LIMIT: "{mem_relative, 0.1}"
    expose:
      - "5672"
  db:
    restart: always
    image: "bitnami/postgresql:11.9.0"
    expose:
      - "5432"
    environment:
      POSTGRESQL_USERNAME: "${DB_USER:-admin}"
      POSTGRESQL_PASSWORD: "${DB_PASSWORD}"
      POSTGRESQL_DATABASE: "${DB_DATABASE:-rasa}"
    volumes:
      - ./db:/bitnami/postgresql
My endpoints.yml file:
tracker_store:
  type: sql
  dialect: "postgresql"
  url: ${DB_HOST}
  port: ${DB_PORT}
  username: ${DB_USER}
  password: ${DB_PASSWORD}
  db: ${DB_DATABASE}
  login_db: ${DB_LOGIN_DB}
lock_store:
  type: "redis"
  url: ${REDIS_HOST}
  port: ${REDIS_PORT}
  password: ${REDIS_PASSWORD}
  db: ${REDIS_DB}
event_broker:
  type: "pika"
  url: ${RABBITMQ_HOST}
  username: ${RABBITMQ_USERNAME}
  password: ${RABBITMQ_PASSWORD}
  queue: rasa_production_events
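The ${VAR} placeholders in endpoints.yml are expanded from the container's environment when Rasa loads the file. A rough sketch of that kind of substitution, using Python's string.Template rather than Rasa's actual loader (the `expand_endpoint` helper is an assumption of this example):

```python
import os
from string import Template

def expand_endpoint(value, env=None):
    """Substitute ${VAR} placeholders from an environment mapping.

    This mimics, but is not, Rasa's own config interpolation: each
    ${VAR} in an endpoints.yml value is replaced by the environment
    variable of the same name.
    """
    return Template(value).substitute(env or os.environ)

# With DB_HOST=db set in the container, url: ${DB_HOST} becomes "db".
assert expand_endpoint("${DB_HOST}", {"DB_HOST": "db"}) == "db"
```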
Rasa Version : 2.1.0
Rasa SDK Version : 2.1.1
Rasa X Version : None
Python Version : 3.8.5
Operating System : Linux-5.4.0-48-generic-x86_64-with-glibc2.29
Python Path : /usr/bin/python3
Any help to overcome this issue would be appreciated.
Looks like the bug described in this issue: https://github.com/RasaHQ/rasa/issues/7338

elasticsearch on docker - snapshot and restore - access_denied_exception

I created an Elasticsearch cluster following the article Running the Elastic Stack on Docker.
After Elasticsearch runs, I need to create snapshots and restore them to back up my data.
I modified my elastic-docker-tls.yml file:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.license.self_generated.type=basic
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=$CERTS_DIR/es01/es01.key
      - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.http.ssl.certificate=$CERTS_DIR/es01/es01.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es01/es01.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es01/es01.key
      - ELASTIC_PASSWORD=$ELASTIC_PASSWORD
      - path.repo=/usr/share/elasticsearch/backup
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
      - databak:/usr/share/elasticsearch/backup
      - certs:$CERTS_DIR
    ports:
      - 9200:9200
    networks:
      - elastic
    healthcheck:
      test: curl --cacert $CERTS_DIR/ca/ca.crt -s https://localhost:9200 >/dev/null; if [[ $$? == 52 ]]; then echo 0; else echo 1; fi
      interval: 30s
      timeout: 10s
      retries: 5
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.license.self_generated.type=basic
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=$CERTS_DIR/es02/es02.key
      - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.http.ssl.certificate=$CERTS_DIR/es02/es02.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es02/es02.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es02/es02.key
      - path.repo=/usr/share/elasticsearch/backup
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
      - databak:/usr/share/elasticsearch/backup
      - certs:$CERTS_DIR
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.license.self_generated.type=basic
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=$CERTS_DIR/es03/es03.key
      - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.http.ssl.certificate=$CERTS_DIR/es02/es02.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es03/es03.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es03/es03.key
      - path.repo=/usr/share/elasticsearch/backup
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
      - databak:/usr/share/elasticsearch/backup
      - certs:$CERTS_DIR
    networks:
      - elastic
  kib01:
    image: docker.elastic.co/kibana/kibana:${VERSION}
    container_name: kib01
    depends_on: {"es01": {"condition": "service_healthy"}}
    ports:
      - 5601:5601
    environment:
      SERVERNAME: localhost
      ELASTICSEARCH_URL: https://es01:9200
      ELASTICSEARCH_HOSTS: https://es01:9200
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: $ELASTIC_PASSWORD
      ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES: $CERTS_DIR/ca/ca.crt
      SERVER_SSL_ENABLED: "true"
      SERVER_SSL_KEY: $CERTS_DIR/kib01/kib01.key
      SERVER_SSL_CERTIFICATE: $CERTS_DIR/kib01/kib01.crt
    volumes:
      - certs:$CERTS_DIR
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
  databak:
    driver: local
  certs:
    driver: local
networks:
  elastic:
    driver: bridge
After that, I registered a snapshot repository:
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/usr/share/elasticsearch/backup/my_backup"
  }
}
But, I get the following error message:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_exception",
        "reason" : "[my_backup] cannot create blob store"
      }
    ],
    "type" : "repository_exception",
    "reason" : "[my_backup] cannot create blob store",
    "caused_by" : {
      "type" : "access_denied_exception",
      "reason" : "/usr/share/elasticsearch/backup/my_backup"
    }
  },
  "status" : 500
}
I have searched Google for solutions for two days with no luck. Can someone help me? Thank you very much!
You can change the ownership of the backup directory in the Docker volume to the elasticsearch user.
First run ls -l inside the container to show the ownership and permissions of the Elasticsearch directories, then run:
chown -R elasticsearch /usr/share/elasticsearch/backup
For Elasticsearch deployed on Kubernetes, the way to do this is by adding init containers in the Helm chart's values.yaml:
extraInitContainers: |
  - name: file-permissions
    image: busybox:1.28
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
  - name: create-backup-directory
    image: busybox:1.28
    command: ['mkdir', '-p', '/usr/share/elasticsearch/data/backup']
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
extraEnvs:
  - name: path.repo
    value: /usr/share/elasticsearch/data/backup
This will create a folder called backup in /usr/share/elasticsearch/data directory.
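The access_denied_exception boils down to the elasticsearch process user (uid 1000 in the official image) not being able to write to the path.repo directory. A hedged pre-flight sketch of that check (the `repo_writable_by` helper is an assumption of this example, not part of Elasticsearch):

```python
import os
import stat

def repo_writable_by(path, uid, gid):
    """Check whether a snapshot repo directory is writable by a given uid/gid.

    Mirrors the condition behind Elasticsearch's access_denied_exception:
    the process user must be able to write to path.repo.
    """
    st = os.stat(path)
    mode = st.st_mode
    if st.st_uid == uid:
        return bool(mode & stat.S_IWUSR)  # owner write bit
    if st.st_gid == gid:
        return bool(mode & stat.S_IWGRP)  # group write bit
    return bool(mode & stat.S_IWOTH)      # other write bit
```

Running this against the mounted backup directory with uid/gid 1000 before registering the repository shows whether a chown is still needed.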