I'm struggling to write a Docker Compose file that creates a Redis Cluster. I saw that there is a Redis Cluster image from Bitnami and tried it, but my Spring Boot app cannot connect to it.
As another approach, I created two Redis instances in a master-replica setup, and I can connect to those. Now I'm trying to create six Redis instances and then form a Redis Cluster with three masters and three replicas using the following command:
redis-cli --cluster create 127.0.0.1:6380 127.0.0.1:6381 \
127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 127.0.0.1:6385 --cluster-replicas 1
But when I execute the command, it fails with:
Could not connect to Redis at 127.0.0.1:6380: Connection refused
Below is my current docker-compose.yaml:
version: '3.8'
services:
  redis-node-0:
    image: redis:latest
    container_name: redis-0
    ports:
      - "6380:6379"
    command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 5000"]
    volumes:
      - redis-cluster_data-0:/redis/data
  redis-node-1:
    image: redis:latest
    container_name: redis-1
    ports:
      - "6381:6379"
    command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 5000"]
    volumes:
      - redis-cluster_data-1:/redis/data
  redis-node-2:
    image: redis:latest
    container_name: redis-2
    ports:
      - "6382:6379"
    command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 5000"]
    volumes:
      - redis-cluster_data-2:/redis/data
  redis-node-3:
    image: redis:latest
    container_name: redis-3
    ports:
      - "6383:6379"
    command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 5000"]
    volumes:
      - redis-cluster_data-3:/redis/data
  redis-node-4:
    image: redis:latest
    container_name: redis-4
    ports:
      - "6384:6379"
    command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 5000"]
    volumes:
      - redis-cluster_data-4:/redis/data
  redis-node-5:
    image: redis:latest
    container_name: redis-5
    ports:
      - "6385:6379"
    command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 5000"]
    volumes:
      - redis-cluster_data-5:/redis/data
networks:
  default:
    name: overlay
volumes:
  redis-cluster_data-0:
    driver: local
  redis-cluster_data-1:
    driver: local
  redis-cluster_data-2:
    driver: local
  redis-cluster_data-3:
    driver: local
  redis-cluster_data-4:
    driver: local
  redis-cluster_data-5:
    driver: local
I'm totally new to both Docker and Redis and still learning, so any help would be really appreciated. Thanks in advance.
The way to do this is not obvious, because Redis Cluster doesn't play well with Docker bridge networking. The simplest way to set up a single-node cluster is to cheat and bind it to 127.0.0.1:
version: '3.8'
services:
  redis-single-node-cluster:
    image: docker.io/bitnami/redis-cluster:7.0
    environment:
      - 'ALLOW_EMPTY_PASSWORD=yes'
      - 'REDIS_CLUSTER_REPLICAS=0'
      - 'REDIS_NODES=127.0.0.1 127.0.0.1 127.0.0.1'
      - 'REDIS_CLUSTER_CREATOR=yes'
      - 'REDIS_CLUSTER_DYNAMIC_IPS=no'
      - 'REDIS_CLUSTER_ANNOUNCE_IP=127.0.0.1'
    ports:
      - '6379:6379'
You can then connect to it with this Spring Boot config:
spring:
  redis:
    cluster:
      nodes: [localhost:6379]
    ssl: false
If you want to set up multiple nodes, you'll have to create a custom network and assign static IPs, as in the sketch below. You'll probably also have to set network_mode: host.
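For illustration, here is a minimal, untested sketch of what a multi-node variant could look like, reusing the Bitnami image and the environment variables shown above; the redis-net name, the 172.28.0.0/16 subnet, and the addresses are arbitrary placeholders:

version: '3.8'
services:
  redis-node-0:
    image: docker.io/bitnami/redis-cluster:7.0
    environment:
      - 'ALLOW_EMPTY_PASSWORD=yes'
      - 'REDIS_NODES=172.28.0.10 172.28.0.11 172.28.0.12'
      - 'REDIS_CLUSTER_DYNAMIC_IPS=no'
      - 'REDIS_CLUSTER_ANNOUNCE_IP=172.28.0.10'   # each node announces its own static IP
    networks:
      redis-net:
        ipv4_address: 172.28.0.10
  redis-node-1:
    image: docker.io/bitnami/redis-cluster:7.0
    environment:
      - 'ALLOW_EMPTY_PASSWORD=yes'
      - 'REDIS_NODES=172.28.0.10 172.28.0.11 172.28.0.12'
      - 'REDIS_CLUSTER_DYNAMIC_IPS=no'
      - 'REDIS_CLUSTER_ANNOUNCE_IP=172.28.0.11'
    networks:
      redis-net:
        ipv4_address: 172.28.0.11
  redis-node-2:
    image: docker.io/bitnami/redis-cluster:7.0
    environment:
      - 'ALLOW_EMPTY_PASSWORD=yes'
      - 'REDIS_NODES=172.28.0.10 172.28.0.11 172.28.0.12'
      - 'REDIS_CLUSTER_REPLICAS=0'
      - 'REDIS_CLUSTER_CREATOR=yes'               # this node runs the cluster-create step
      - 'REDIS_CLUSTER_DYNAMIC_IPS=no'
      - 'REDIS_CLUSTER_ANNOUNCE_IP=172.28.0.12'
    networks:
      redis-net:
        ipv4_address: 172.28.0.12
networks:
  redis-net:
    ipam:
      config:
        - subnet: 172.28.0.0/16

A client on the host still can't reach the announced 172.28.x.x addresses through published ports, which is why anything running outside this compose network tends to need network_mode: host (or to join the same network).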
Related
I'm deploying on an AWS Ubuntu instance (a VM) using this yml:
version: "3.7"
services:
cassandra:
container_name: cassandra
image: cassandra:3.11
restart: unless-stopped
hostname: cassandra
environment:
- MAX_HEAP_SIZE=1G
- HEAP_NEWSIZE=1G
- CASSANDRA_CLUSTER_NAME=thp
volumes:
- ./cassandra/data:/var/lib/cassandra/data
networks:
- Hive
elasticsearch:
container_name: elasticsearch
image: elasticsearch:7.11.1
environment:
- http.host=0.0.0.0
- discovery.type=single-node
- cluster.name=hive
- script.allowed_types= inline
- thread_pool.search.queue_size=100000
- thread_pool.write.queue_size=10000
- gateway.recover_after_nodes=1
- xpack.security.enabled=false
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms256m -Xmx256m
ulimits:
nofile:
soft: 65536
hard: 65536
volumes:
- ./elasticsearch/data:/usr/share/elasticsearch/data
- ./elasticsearch/logs:/usr/share/elasticsearch/logs
networks:
- Hive
cortex:
container_name: cortex
image: thehiveproject/cortex:latest
depends_on:
- elasticsearch
environment:
- 'JOB_DIRECTORY=/opt/cortex/jobs'
ports:
- '0.0.0.0:9001:9001'
volumes:
- ./cortex/application.conf:/etc/cortex/application.conf
- '/var/run/docker.sock:/var/run/docker.sock'
- ./cortex/log/:/var/log/cortex
- /tmp:/tmp
#- ./cortex/Cortex-Analyzers:/opt/cortex/analyzers
#- .cortex/Cortex-Analyzers/analyzers.json:/opt/cortex/analyzers/analyzers.json
privileged: true
networks:
- Hive
thehive:
container_name: thehive
image: 'thehiveproject/thehive4:latest'
restart: unless-stopped
depends_on:
- cassandra
ports:
- '0.0.0.0:9000:9000'
volumes:
- ./thehive/application.conf:/etc/thehive/application.conf
- ./thehive/data:/opt/thp/thehive/data
- ./thehive/index:/opt/thp/thehive/index
command:
--cortex-port 9001
--cortex-keys ${CORTEX_KEY}
networks:
- Hive
networks:
Hive:
driver: bridge
plus two additional application.conf files for thehive and cortex. The problem I have is that when I list the containers with docker ps or docker compose ps, I can see that thehive and cortex are on 0.0.0.0:9000 and 0.0.0.0:9001 respectively, but elasticsearch only shows 9200/tcp, 9300/tcp. How can I access the web interface of ES locally? I can't figure this out. Using netstat I can't find port 9200 or 9300 listening anywhere.
Elasticsearch does not natively come with a web interface; it exposes a REST API that third-party interfaces can interact with.
One of the most popular tools for visualizing or viewing data in the Elastic Stack is Kibana, which interfaces with Elasticsearch. See this link for more details: https://www.elastic.co/kibana/
ES API Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/rest-apis.html
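As for why nothing is listening locally: in the compose file above, the elasticsearch service never publishes its ports to the host (it has no ports: section), which is why docker ps shows only the internal 9200/tcp, 9300/tcp and netstat finds nothing. If you want to reach the REST API from the host, a mapping along these lines should do it (an untested sketch):

  elasticsearch:
    # ...existing configuration...
    ports:
      - '9200:9200'

After recreating the container, curl http://localhost:9200 should return the cluster info JSON.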
I have a docker-compose file with Kafka, Zookeeper, and a Spring Boot application.
When I run the entire file, everything works fine.
When I run it without my Spring Boot application (in order to debug the app via IntelliJ), the app cannot connect to Kafka and doesn't work properly.
My docker-compose file:
version: "3.5" services: # Install Zookeeper. zookeeper:
container_name: zookeeper
image: debezium/zookeeper:1.2
networks:
- mynetwork
ports:
- 2181:2181
- 2888:2888
- 3888:3888 # Install Kafka. kafka:
container_name: kafka
image: debezium/kafka:1.2
depends_on:
- zookeeper
ports:
- 9092:9092
- 29092:29092
networks:
- mynetwork
extra_hosts:
- "host.docker.internal:host-gateway"
environment:
- ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_LISTENER_SECURITY_PROTOCOL_MAP= INTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT
- KAFKA_ADVERTISED_LISTENERS= INTERNAL://kafka:9092,EXTERNAL_SAME_HOST://localhost:29092
- KAFKA_LISTENERS= EXTERNAL_SAME_HOST://:29092,INTERNAL://:9092
- KAFKA_INTER_BROKER_LISTENER_NAME= PLAINTEXT # Install Postgres. postgres:
container_name: postgres
image: debezium/postgres:12
volumes:
- ./sql/init.sql:/docker-entrypoint-initdb.d/init.sql
ports:
- 5432:5432
networks:
- mynetwork
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=postgres kafka-ui:
container_name: kafka-ui
image: provectuslabs/kafka-ui:0.2.1
ports:
- 8080:8080
networks:
- mynetwork
environment:
- KAFKA_CLUSTERS_0_NAME=local
- KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092 #Deploy a Consumer. consumer:
build:
context: .
container_name: pledge-consumer
environment:
- SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/postgres
ports:
- 8101:8080
networks:
- mynetwork
image: isber/ssm-pledgeservice:v1
depends_on:
- zookeeper
- kafka
- postgres
networks: mynetwork:
external: true
In the application I tried:
spring.kafka.bootstrap-servers=kafka:9092
This works when I run it via Docker, but not from IntelliJ.
I also tried the following when running with IntelliJ:
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.bootstrap-servers=localhost:29092
I found the problem: the image I used,
image: debezium/kafka:1.2
had a problem and didn't read any of the environment parameters I added.
I upgraded to:
image: debezium/kafka:1.4
and now everything works.
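For reference, once the broker actually reads those variables, the advertised listeners in the compose file imply two different bootstrap addresses depending on where the client runs (a sketch, assuming the setup above is otherwise unchanged):

# Inside the compose network (the containerized consumer):
spring.kafka.bootstrap-servers=kafka:9092
# On the host, e.g. when debugging from IntelliJ (the EXTERNAL_SAME_HOST listener):
spring.kafka.bootstrap-servers=localhost:29092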
Is it possible to use a hostname for a custom service? Currently, I have the following:
Redis service: docker-compose.redis.yml
version: '3.6'
services:
  redis:
    container_name: ddev-${DDEV_SITENAME}-redis
    image: redis:latest
    restart: always
    ports:
      - 6379
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
      com.ddev.app-url: $DDEV_URL
    environment:
      - VIRTUAL_HOST=$DDEV_HOSTNAME
      - HTTP_EXPOSE=6379
    volumes: []
  web:
    links:
      - redis:$DDEV_HOSTNAME
Redis Commander Service: docker-compose.commander.yml
version: '3.6'
services:
  redis:
    container_name: ddev-${DDEV_SITENAME}-commander
    image: rediscommander/redis-commander:latest
    restart: always
    ports:
      - 8081
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
      com.ddev.app-url: $DDEV_URL
    environment:
      - VIRTUAL_HOST=$DDEV_HOSTNAME
      - HTTP_EXPOSE=8081
      - REDIS_HOSTS=local:redis:6379
    volumes: []
  web:
    links:
      - commander:$DDEV_HOSTNAME
At the moment I can access Redis Commander from the outside at <project-name>.ddev.local:8081/.
What I want to achieve, if possible, is to access Redis Commander from a custom hostname or subdomain like commander.<project-name>.ddev.local or commander.local.
After a bit of research and a lot of help from Randy Fay, we were able to accomplish it. We had to run the following:
$ sudo ddev hostname commander.local 127.0.0.1
The Redis Commander service file (docker-compose.commander.yml) had to be updated to:
version: '3.6'
services:
  commander:
    container_name: ddev-${DDEV_SITENAME}-commander
    image: rediscommander/redis-commander:latest
    restart: always
    ports:
      - 8081
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
      com.ddev.app-url: $DDEV_URL
    environment:
      - VIRTUAL_HOST=commander.local
      - HTTP_EXPOSE=80
      - REDIS_HOSTS=local:redis:6379
    volumes: []
  web:
    links:
      - commander:$DDEV_HOSTNAME
      - commander:commander.local
for it to work.
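To verify (a sketch, assuming the ddev router picks up the new VIRTUAL_HOST after a restart):

$ ddev restart
$ curl -I http://commander.local/

The second command should get a response from Redis Commander rather than a DNS or connection error.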
I am running a Postgres database generated by the docker-compose file below on Windows. Before running docker-compose up --build, I created a Docker volume with docker volume create --name postgresdata --driver local. The latter is done to avoid mounting a Windows folder into Postgres.
However, when I run docker-compose down followed by docker-compose up --build, the database is empty, which I would not have expected. Any ideas or suggestions?
This is the docker-compose.yml file I am using:
version: '3.0'
services:
  db:
    image: postgres:latest
    restart: always
    ports:
      - 5432:5432
    env_file:
      - env_file
    volumes:
      - postgresdata
    networks:
      - db1
  market_data:
    build: .
    environment:
      PYTHONUNBUFFERED: 'true'
    stdin_open: true
    tty: true
    links:
      - db:db
    container_name: market_data_container
    volumes:
      - '.:/market_data'
    depends_on:
      - db
    networks:
      - db1
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
    networks:
      - db1
    depends_on:
      - db
volumes:
  market_data:
  postgresdata:
    external: true
networks:
  db1:
    driver: bridge
Postgres already uses a volume internally to persist data, but that anonymous volume does not survive docker-compose down followed by up --build. You declared a named volume in your compose file, but you don't mount it correctly.
version: '3.0'
services:
  db:
    image: postgres:latest
    restart: always
    ports:
      - 5432:5432
    env_file:
      - env_file
    volumes:
      - postgresdata:/var/lib/postgresql/data
    networks:
      - db1
Add the default path for Postgres data to your volume: postgresdata:/var/lib/postgresql/data. This should fix it.
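A quick way to sanity-check the fix (a sketch, assuming the default postgres superuser; your env_file may define different credentials):

$ docker-compose up --build -d
$ docker-compose exec db psql -U postgres -c 'CREATE TABLE persistence_test (id int);'
$ docker-compose down
$ docker-compose up -d
$ docker-compose exec db psql -U postgres -c '\dt'

The last command should still list persistence_test, since the data now lives in the named postgresdata volume.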
I'm using an application hosted in a Docker container.
This application executes bash scripts / instructions to send mails.
I made another container which runs Postfix as an SMTP relay.
I want to send mails from my application container, via a bash script, using my Postfix container as a relay.
I tried to connect via SSH from my application container to my Postfix container, but that doesn't seem to work.
How can I make it so that a script executed in my application container can use my Postfix relay, while not allowing anything outside the Docker network (or, even better, only allowing certain containers) to send mails through this relay?
EDIT 1: docker-compose files
Application docker-compose:
version: "3.4"
volumes:
[...]
services:
application:
restart: always
build: ./application
depends_on:
- mariadb
container_name: application
volumes:
[...]
ports:
- "80:80"
- "443:443"
- "5669:5669"
deploy:
restart_policy:
window: 300s
links:
- mariadb
external_links:
- smtp-server
mariadb:
restart: always
image: mariadb
command: mysqld --sql-mode=ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
container_name: application-mariadb
volumes:
[...]
environment:
MYSQL_RANDOM_ROOT_PASSWORD: "yes"
deploy:
restart_policy:
window: 300s
Here's my docker-compose file for my SMTP server:
version: "3.4"
services:
postfix:
restart: always
build: ./postfix
container_name: smtp-server
deploy:
restart_policy:
window: 300s
(A quick response; I'm taking ten minutes to clear my mind from work, so I hope it serves you.)
Are you using docker-compose? Could you give an example of your YML file? (A little more context.)
[You cannot connect to a container via SSH unless it runs an SSH daemon, e.g. under "supervisor", which I do not recommend at all.]
From what I see, you only need to create private networks; you could use this:
https://docs.docker.com/compose/networking/
To hide everything, I also recommend using a load balancer / reverse proxy like Traefik (if things need to be reachable on port 80 or 443 in the clear).
That way you expose only one or two ports (80 and 443, for example) and everything else is protected by your reverse proxy.
Notice how I separate the networks so that each container gets only the access it needs:
bash has access to db and smtp
db has access to neither smtp nor nginx
nginx has access to bash
nginx has access to the proxy network, to expose 80 and 443
no container other than nginx is exposed to the outside
--
version: "3"
services:
bash:
####### use hostname "smtp" as SMTP server
image: bash
depends_on:
- db
networks:
- smtp_internal_network
- internal_network
- data_network
volumes:
- ../html:/var/www/html
restart: always
db:
image: percona:5.7
# ports: # for debug connections and querys
# - 3306:3306
volumes:
- ../db_data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
networks:
- data_network
restart: always
smtp:
image: mwader/postfix-relay
environment:
- POSTFIX_myhostname=smtp.domain.tld
networks:
- smtp_internal_network
restart: always
nginx:
image: nginx
volumes:
- ../html:/var/www/html
networks:
- external_network
- internal_network
labels:
- "traefik.backend=nginx_${COMPOSE_PROJECT_NAME}"
- "traefik.port=80"
- "traefik.frontend.rule=Host:${FRONTEND_RULE}"
- "traefik.frontend.passHostHeader=true"
- "traefik.enable=true"
- "traefik.docker.network=traefik_proxy"
restart: always
depends_on:
- db
- bash
networks:
external_network:
external:
name: traefik_proxy
internal_network:
driver: bridge
smtp_internal_network:
driver: bridge
data_network:
driver: bridge
Edit:
version: "3"
volumes:
[...]
services:
####### use hostname "smtp" as SMTP server in your application
application:
restart: always
build: ./application
depends_on:
- mariadb
volumes:
[...]
ports:
- "80:80"
- "443:443"
- "5669:5669"
deploy:
restart_policy:
window: 300s
networks:
- smtp_external_network
- data_network
mariadb:
restart: always
image: mariadb
command: mysqld --sql-mode=ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
networks:
- data_network
volumes:
[...]
environment:
MYSQL_RANDOM_ROOT_PASSWORD: "yes"
deploy:
restart_policy:
window: 300s
networks:
smtp_external_network:
external:
name: [ReplaceForFolderParentNameOfSmtpYmlWithoutSquareBrackets]_smtp
data_network:
driver: bridge
--- (in your other file)
services:
  smtp:
    restart: always
    build: ./postfix
    networks:
      - smtp
    deploy:
      restart_policy:
        window: 300s
networks:
  smtp:
    driver: bridge
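Note on startup order: the application file declares the SMTP network as external, so the SMTP stack must be started first for that network to exist. Assuming the two files live in folders named smtp and application (hypothetical paths), something like:

$ docker-compose -f smtp/docker-compose.yml up -d          # creates the <folder>_smtp network
$ docker-compose -f application/docker-compose.yml up -d   # joins it as an external network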