Docker postgres container loses data that should be stored in volume - Windows

I am running a Postgres database generated by the docker-compose file below on Windows. Before running docker-compose up --build, I created a Docker volume with docker volume create --name postgresdata --driver local. The latter is done to avoid mounting a Windows folder into Postgres.
However, when I run docker-compose down followed by docker-compose up --build, the database is empty, which I would not have expected. Any ideas or suggestions?
This is the docker-compose.yml file I am using:
version: '3.0'
services:
  db:
    image: postgres:latest
    restart: always
    ports:
      - 5432:5432
    env_file:
      - env_file
    volumes:
      - postgresdata
    networks:
      - db1
  market_data:
    build: .
    environment:
      PYTHONUNBUFFERED: 'true'
    stdin_open: true
    tty: true
    links:
      - db:db
    container_name: market_data_container
    volumes:
      - '.:/market_data'
    depends_on:
      - db
    networks:
      - db1
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
    networks:
      - db1
    depends_on:
      - db
volumes:
  market_data:
  postgresdata:
    external: true
networks:
  db1:
    driver: bridge

The postgres image already declares an anonymous volume for its data directory, but docker-compose down removes the container, and the next docker-compose up creates a new container with a fresh anonymous volume, so the data appears lost. You are using a named volume in your compose file, but you don't mount it correctly.
version: '3.0'
services:
  db:
    image: postgres:latest
    restart: always
    ports:
      - 5432:5432
    env_file:
      - env_file
    volumes:
      - postgresdata:/var/lib/postgresql/data
    networks:
      - db1
Mount the named volume at the default path for Postgres data: postgresdata:/var/lib/postgresql/data. This should fix it.
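As a quick sanity check, a minimal sketch (assuming the compose file above and the pre-created postgresdata volume):

# Create the named volume once (note the "create" subcommand):
docker volume create --name postgresdata --driver local

# Bring the stack up, write some data, then recycle the containers:
docker-compose up -d --build
docker-compose down    # removes containers and networks, keeps named volumes
docker-compose up -d

# The volume should survive and still hold the cluster files:
docker volume inspect postgresdata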

Related

Redis Cluster Docker Compose

I'm struggling to create a Docker Compose file that creates a Redis Cluster. I saw that there is a Redis Cluster image from Bitnami; I tried it, but my Spring Boot app cannot connect to it.
I tried another approach: creating 2 Redis instances as master and slave, and I can connect to those. Now I'm trying to create 6 Redis instances and then form a Redis Cluster with 3 masters and 3 slaves with the following command:
redis-cli --cluster create 127.0.0.1:6380 127.0.0.1:6381 \
127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 127.0.0.1:6385 --cluster-replicas 1
But when I executed the command, it said:
Could not connect to Redis at 127.0.0.1:6380: Connection refused
Below is my current docker-compose.yaml:
version: '3.8'
services:
  redis-node-0:
    image: redis:latest
    container_name: redis-0
    ports:
      - "6380:6379"
    command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 5000"]
    volumes:
      - redis-cluster_data-0:/redis/data
  redis-node-1:
    image: redis:latest
    container_name: redis-1
    ports:
      - "6381:6379"
    command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 5000"]
    volumes:
      - redis-cluster_data-1:/redis/data
  redis-node-2:
    image: redis:latest
    container_name: redis-2
    ports:
      - "6382:6379"
    command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 5000"]
    volumes:
      - redis-cluster_data-2:/redis/data
  redis-node-3:
    image: redis:latest
    container_name: redis-3
    ports:
      - "6383:6379"
    command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 5000"]
    volumes:
      - redis-cluster_data-3:/redis/data
  redis-node-4:
    image: redis:latest
    container_name: redis-4
    ports:
      - "6384:6379"
    command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 5000"]
    volumes:
      - redis-cluster_data-4:/redis/data
  redis-node-5:
    image: redis:latest
    container_name: redis-5
    ports:
      - "6385:6379"
    command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 5000"]
    volumes:
      - redis-cluster_data-5:/redis/data
networks:
  default:
    name: overlay
volumes:
  redis-cluster_data-0:
    driver: local
  redis-cluster_data-1:
    driver: local
  redis-cluster_data-2:
    driver: local
  redis-cluster_data-3:
    driver: local
  redis-cluster_data-4:
    driver: local
  redis-cluster_data-5:
    driver: local
I'm totally new to both Docker and Redis and still learning, so any help would be really appreciated. Thanks in advance.
The way to do this is not obvious, because Redis Cluster doesn't work easily with Docker bridge networking. The simplest way to set up a single-node cluster is to cheat and bind it to 127.0.0.1:
version: '3.8'
services:
  redis-single-node-cluster:
    image: docker.io/bitnami/redis-cluster:7.0
    environment:
      - 'ALLOW_EMPTY_PASSWORD=yes'
      - 'REDIS_CLUSTER_REPLICAS=0'
      - 'REDIS_NODES=127.0.0.1 127.0.0.1 127.0.0.1'
      - 'REDIS_CLUSTER_CREATOR=yes'
      - 'REDIS_CLUSTER_DYNAMIC_IPS=no'
      - 'REDIS_CLUSTER_ANNOUNCE_IP=127.0.0.1'
    ports:
      - '6379:6379'
You can then connect to it with this Spring Boot config:
spring:
  redis:
    cluster:
      nodes: [localhost:6379]
    ssl: false
If you want to set up multiple nodes, you'll have to create a custom network and assign static IPs, as in the sketch below. You'll probably also have to set network_mode: host.
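For illustration, a minimal sketch of the static-IP approach (the network name, subnet and addresses here are assumptions, not taken from the thread; repeat the pattern for the remaining nodes):

version: '3.8'
services:
  redis-node-0:
    image: redis:latest
    command: ["redis-server", "--cluster-enabled", "yes", "--cluster-node-timeout", "5000"]
    networks:
      redis-net:
        ipv4_address: 172.28.0.10   # assumed address on the custom subnet
  redis-node-1:
    image: redis:latest
    command: ["redis-server", "--cluster-enabled", "yes", "--cluster-node-timeout", "5000"]
    networks:
      redis-net:
        ipv4_address: 172.28.0.11
networks:
  redis-net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/16     # assumed; pick a free subnet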

Using same postgres container for a spring project database and a keycloak database

I am trying to run three dockerized services:
Spring-boot app
Keycloak for authentication
Postgres as database
I would like both the Spring Boot app and Keycloak to use the same Postgres container as their database, but I couldn't find a way to make it work. My docker-compose.yml is as follows:
version: '3.7'
services:
  db:
    image: 'postgres:13.1-alpine'
    container_name: db
    ports:
      - "5432:5432"
    volumes:
      - ./app_data:/var/lib/postgresql/data_app
      - ./keycloak_data:/var/lib/postgresql/data_keycloak
      - ../docker-postgresql-multiple-databases:/docker-entrypoint-initdb.d
    environment:
      POSTGRES_MULTIPLE_DATABASES: keycloak, app_user
      POSTGRES_PASSWORD: password
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready" ]
      interval: 10s
      timeout: 5s
      retries: 5
  keycloak:
    image: jboss/keycloak:14.0.0
    container_name: keycloak
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
      - DB_VENDOR=postgres
      - DB_ADDR=postgres
      - DB_USER=keycloak
      - DB_PASSWORD=password
      - JDBC_PARAMS=useSSL=false
    ports:
      - "8080:8080"
    depends_on:
      - db
    healthcheck:
      test: "curl -f http://localhost:8080/auth || exit 1"
      start_period: 20s
  app:
    image: 'app.postgre:latest'
    build:
      context: .
    container_name: app
    depends_on:
      - db
      - keycloak
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/app
      - SPRING_DATASOURCE_USERNAME=app_user
      - SPRING_DATASOURCE_PASSWORD=password
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
volumes:
  app_data:
  postgres_data:
(Note: I tried using the script from https://github.com/mrts/docker-postgresql-multiple-databases to set up the needed databases by hand, but even so it still fails. I also tried doing without this script, but that failed as well.)
I managed to write one docker-compose file that runs the Spring app and the database together, and another that runs Keycloak and the database together, but when I try to bring all three up together it fails.
I have a very similar setup with Postgres, Keycloak, pgAdmin and a Golang API service. The skeleton of my docker-compose.yml is below; give it a try (I omitted some parts for simplicity), it is working for me. I think the important parts here are networks and links, and also setting up multiple databases (as you already do). I use db as the hostname of the Postgres server, for example when I connect to it via pgAdmin.
services:
  db:
    build:
      context: .
      dockerfile: ./Dockerfile.db
    volumes:
    networks:
      - mynetwork
    restart: unless-stopped
    ports:
      - ${POSTGRES_PORT}:5432
    environment:
      - POSTGRES_MULTIPLE_DATABASES=${POSTGRES_MULTIPLE_DATABASES}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    healthcheck:
  pgadmin:
    image: dpage/pgadmin4
    restart: unless-stopped
    environment:
    volumes:
    ports:
    networks:
      - mynetwork
    depends_on:
      - db
  api:
    build:
      context: .
      dockerfile: ./Dockerfile.api
    ports:
    environment:
      - POSTGRES_HOST=${POSTGRES_HOST}
      - POSTGRES_PORT=${POSTGRES_PORT}
    volumes:
    networks:
      - mynetwork
    depends_on:
      - db
    links:
      - db
    restart: unless-stopped
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    environment:
      - DB_VENDOR=${KEYCLOAK_DB_VENDOR}
      - DB_ADDR=${KEYCLOAK_DB_ADDR}
      - DB_DATABASE=${KEYCLOAK_DB_DATABASE}
      - DB_USER=${KEYCLOAK_DB_USER}
      - DB_SCHEMA=${KEYCLOAK_DB_SCHEMA}
      - DB_PASSWORD=${KEYCLOAK_DB_PASSWORD}
      - KEYCLOAK_USER=${KEYCLOAK_USER}
      - KEYCLOAK_PASSWORD=${KEYCLOAK_PASSWORD}
    ports:
      - ${KEYCLOAK_PORT}:8080
    depends_on:
      - db
    networks:
      - mynetwork
    links:
      - db
    restart: unless-stopped
volumes:
networks:
  mynetwork:
And some important values from my .env:
POSTGRES_MULTIPLE_DATABASES=mydb,keycloak
POSTGRES_USER=
POSTGRES_PASSWORD=
POSTGRES_HOST=db
POSTGRES_PORT=5432
KEYCLOAK_PORT=8084
KEYCLOAK_DB_VENDOR=POSTGRES
KEYCLOAK_DB_ADDR=db
KEYCLOAK_DB_DATABASE=keycloak
KEYCLOAK_DB_USER=
KEYCLOAK_DB_SCHEMA=public
KEYCLOAK_DB_PASSWORD=
KEYCLOAK_USER=
KEYCLOAK_PASSWORD=
My Dockerfile.db is like this, you don't need the localedef part (I need it for Hungarian localization):
FROM postgres:latest
RUN localedef -i hu_HU -c -f UTF-8 -A /usr/share/locale/locale.alias hu_HU.UTF-8
COPY docker-postgresql-multiple-databases.sh /docker-entrypoint-initdb.d/
And docker-postgresql-multiple-databases.sh contains:
#!/bin/bash

set -e
set -u

function create_user_and_database() {
    local database=$1
    echo "  Creating user and database '$database'"
    psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
        CREATE USER $database;
        CREATE DATABASE $database;
        GRANT ALL PRIVILEGES ON DATABASE $database TO $database;
EOSQL
}

if [ -n "$POSTGRES_MULTIPLE_DATABASES" ]; then
    echo "Multiple database creation requested: $POSTGRES_MULTIPLE_DATABASES"
    for db in $(echo $POSTGRES_MULTIPLE_DATABASES | tr ',' ' '); do
        create_user_and_database $db
    done
    echo "Multiple databases created"
fi
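As a quick check, a sketch (the service name db and the database names follow the compose file and .env above) to confirm the init script created the databases:

# List databases inside the running db container; "mydb" and "keycloak"
# should both appear in the output:
docker-compose exec db sh -c 'psql -U "$POSTGRES_USER" -l'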

Data isn't persisted in the database when using MongoDB with Docker volumes?

There is a service that uses MongoDB, but when I restart the computer or the Docker machine, no data is persisted in the database.
docker-compose.yml:
version: "3"
Services:
...
mongodb:
restart: always
image: mongo:latest
environment:
- MONGO_DATA_DIR=/dockerdata/db
volumes:
- ./dockerdata/db:/data/db
ports:
- 27017:27017
command: mongod
I tried storing the database on the host, but that didn't help either:
docker-compose.yml:
version: "3"
services:
  ...
  mongodb:
    restart: always
    image: mongo:latest
    environment:
      - MONGO_DATA_DIR=/c/users/frol/mongodata/db
    volumes:
      - /c/users/frol/mongodata/db:/data/db
    ports:
      - 27017:27017
    command: mongod
If I make a named volume instead, Docker reports an error:
ERROR: for test_mongodb_1  Cannot create container for service mongodb: failed to mount local volume: mount /c/users/frol/mongodata/db:/mnt/sda1/var/lib/docker/volumes/test_mongodata/_data, flags: 0x1000: no such file or directory
docker-compose.yml:
version: "3"
services:
  ...
  mongodb:
    restart: always
    image: mongo:latest
    environment:
      - MONGO_DATA_DIR=/c/users/frol/mongodata/db
    volumes:
      - mongodata:/data/db
    ports:
      - 27017:27017
    command: mongod
volumes:
  mongodata:
    driver: local
    driver_opts:
      type: none
      device: /c/users/frol/mongodata/db
      o: bind
Host: Windows 8.1 with Docker Toolbox 19.03.1 installed.
Please help me; I'm a novice. How do I make sure that the database data isn't lost?
Your first attempt would work if you just fixed a simple typo in your compose file:
version: "3"
services:
...
mongodb:
restart: always
image: mongo:latest
environment:
- MONGO_DATA_DIR=/data/db # changed
volumes:
- ./dockerdata/db:/data/db
ports:
- 27017:27017
command: mongod
But, since /data/db is the default value of MONGO_DATA_DIR, setting it is pretty redundant.
But I'd prefer to use a named volume; that way the data persists, and I don't have to see the "ugly" database storage folder:
version: "3"
services:
...
mongodb:
restart: always
image: mongo:latest
volumes:
- mongodata:/data/db
ports:
- 27017:27017
command: mongod
volumes:
mongodata:
Don't set $MONGO_DATA_DIR; leave it at its default of /data/db.
services:
  mongodb:
    restart: always
    image: mongo:latest
    # No need to specifically set $MONGO_DATA_DIR
    volumes:
      - ./dockerdata/db:/data/db
    ports:
      - 27017:27017
    # No need to override command:
Docker containers have a separate filesystem space from the host filesystem. A typical setup for most databases is to have the database storage in a fixed location inside the container; for MongoDB that's the /data/db directory. You can mount a named volume or filesystem path there, but the code inside the container doesn't know the difference.
If you do set environment variables like $MONGO_DATA_DIR, they need to reflect paths inside the container; they can't directly specify host-system paths. (#ruohola's answer works because it changes the container-filesystem path of the bind mount to match the container-filesystem path in the environment variable; the host ./dockerdata and container /dockerdata paths are totally unrelated.)
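To make the container/host distinction concrete, a minimal sketch (paths are illustrative):

# The path left of the colon is on the host; the path right of it is
# inside the container. MongoDB only ever sees /data/db:
docker run -v "$PWD/dockerdata/db:/data/db" mongo:latest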
As you are defining the data dir explicitly, you need to mount the volume at that same directory to persist the data:
version: "3"
services:
...
mongodb:
restart: always
image: mongo:latest
environment:
- MONGO_DATA_DIR=/data/db #data directory
volumes:
- ./dockerdata/db:/data/db #same data directory which you defined above
ports:
- 27017:27017
command: mongod

Docker ENTRYPOINT bash script is executed over and over again

Everything in my Docker container initialisation goes well, except when I run an ENTRYPOINT script at the end of my Dockerfile with
# ...
ENTRYPOINT ["bash", "./shell_scripts/init.sh"]
which consists of
#!/bin/bash
echo "Init app..."
composer update
composer dump-autoload
php artisan migrate
and when I run docker-compose up --build, it keeps running the script over and over again.
docker-compose.yml
version: '3.7'
services:
  mysql_db:
    image: mysql:8.0.13
    container_name: mysql_8.0.13
    command: --default-authentication-plugin=mysql_native_password
    restart: unless-stopped
    tty: true
    ports:
      - 3307:3306
    environment:
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
      MYSQL_ROOT_PASSWORD: mypass
    networks:
      - app-network
  app_n_php:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: app_php_7.3-rc-fpm
    volumes:
      - type: bind
        source: ./app
        target: /var/www/app
    restart: unless-stopped
    tty: true
    ports:
      - 8001:8000
    depends_on:
      - mysql_db
    environment:
      SERVICE_NAME: app_n_php
      SERVICE_TAGS: dev
networks:
  app-network:
    driver: bridge
Any idea what is going on?
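One plausible cause (an assumption; the thread doesn't confirm it): the service has restart: unless-stopped, so the container restarts as soon as the ENTRYPOINT script exits, which re-runs it. Ending the script with a long-running foreground process keeps the container alive; a sketch (the serve command here is illustrative):

#!/bin/bash
echo "Init app..."
composer update
composer dump-autoload
php artisan migrate

# Hand the process over to a foreground server so the container
# does not exit and trigger the restart policy:
exec php artisan serve --host=0.0.0.0 --port=8000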

Docker compose not using external named volume on Mac

Though I mount data on an external volume, upon docker-compose down I am losing my data. I am using docker-compose version 1.14.0 (build c7bdf9e) and Docker version 17.06.1-ce (build 874a737).
# docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    volumes:
      - data:/var/lib/postgres/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
volumes:
  data:
    external: true
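A likely culprit, consistent with the first answer above (an observation, not confirmed in this thread): the volume is mounted at /var/lib/postgres/data, but the postgres image keeps its data in /var/lib/postgresql/data, so the named volume never receives the database files and the data lands in an anonymous volume that is lost across docker-compose down. A sketch of the corrected mount:

services:
  db:
    image: postgres
    volumes:
      # "postgresql", not "postgres": the image's actual data directory
      - data:/var/lib/postgresql/data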
