docker-compose: how to use minio in- and outside of the docker network - laravel

I have the following docker-compose.yml to run a local environment for my Laravel App.
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: .docker/php/Dockerfile
    ports:
      - 80:80
      - 443:443
    volumes:
      - .:/var/www:delegated
    environment:
      AWS_ACCESS_KEY_ID: minio_access_key
      AWS_SECRET_ACCESS_KEY: minio_secret_key
      AWS_BUCKET: Bucket
      AWS_ENDPOINT: http://s3:9000
    links:
      - database
      - s3
  database:
    image: mariadb:10.3
    ports:
      - 63306:3306
    environment:
      MYSQL_ROOT_PASSWORD: secret
  s3:
    image: minio/minio
    ports:
      - "9000:9000"
    volumes:
      - ./storage/minio:/data
    environment:
      MINIO_ACCESS_KEY: minio_access_key
      MINIO_SECRET_KEY: minio_secret_key
    command: server /data
As you can see, I use minio as AWS-S3-compatible storage. This works very well, but when I generate a URL for a file (Storage::disk('s3')->url('some-file.txt')) I obviously get a URL like http://s3:9000/Bucket/some-file.txt, which does not work outside of the Docker network.
I've already tried setting AWS_ENDPOINT to http://127.0.0.1:9000, but then Laravel can't connect to the MinIO server...
Is there a way to configure Docker / Laravel / Minio to generate URLs which are accessible both inside and outside of the Docker network?
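For reference, recent Laravel versions also let the generated URL differ from the API endpoint: the default config/filesystems.php wires a url option on the s3 disk to an AWS_URL env variable, and Storage::url() uses it when present. A hedged sketch (assumes such a Laravel version; untested here):
# .env sketch: the SDK talks to the container, generated URLs point at the host
AWS_ENDPOINT=http://s3:9000
AWS_URL=http://localhost:9000/Bucket   # include the bucket for path-style URLs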

How about binding the address? (not tested)
...
s3:
  image: minio/minio
  ports:
    - "9000:9000"
  volumes:
    - ./storage/minio:/data
  environment:
    MINIO_ACCESS_KEY: minio_access_key
    MINIO_SECRET_KEY: minio_secret_key
  command: server --address 0.0.0.0:9000 /data

I expanded on the solutions in this question to create a solution that works for me both on localhost and on a server with a resolvable DNS name.
The localhost solution is essentially the one described above.
Create a localhost host mapping (sudo alone does not apply to a shell redirect, hence tee):
echo "127.0.0.1 my-minio-localhost-alias" | sudo tee -a /etc/hosts
Set HOSTNAME, using 'my-minio-localhost-alias' for localhost
export HOSTNAME=my-minio-localhost-alias
Create hello.txt
Hello from Minio!
Create docker-compose.yml
This compose file contains the following containers:
minio: the MinIO service
minio-mc: command-line tool used to initialize content
s3-client: command-line tool used to generate presigned URLs
version: '3.7'
networks:
  mynet:
services:
  minio:
    container_name: minio
    image: minio/minio
    ports:
      - published: 9000
        target: 9000
    command: server /data
    networks:
      mynet:
        aliases:
          # For localhost access, add the following to your /etc/hosts
          #   127.0.0.1 my-minio-localhost-alias
          # When accessing the minio container on a server with a resolvable
          # DNS name, set HOSTNAME to that name instead.
          - ${HOSTNAME}
  # When initializing the minio container for the first time, you will need
  # to create an initial bucket named my-bucket.
  minio-mc:
    container_name: minio-mc
    image: minio/mc
    depends_on:
      - minio
    volumes:
      - "./hello.txt:/tmp/hello.txt"
    networks:
      mynet:
  s3-client:
    container_name: s3-client
    image: amazon/aws-cli
    environment:
      AWS_ACCESS_KEY_ID: minioadmin
      AWS_SECRET_ACCESS_KEY: minioadmin
    depends_on:
      - minio
    networks:
      mynet:
Start the minio container
docker-compose up -d minio
Create a bucket in minio and load a file (the minio/mc image's entrypoint is already mc, so the subcommand comes first)
docker-compose run minio-mc config host add docker http://minio:9000 minioadmin minioadmin
docker-compose run minio-mc mb docker/my-bucket
docker-compose run minio-mc cp /tmp/hello.txt docker/my-bucket/hello.txt
Create a presigned URL that is accessible inside AND outside of the docker network
docker-compose run s3-client --endpoint-url http://${HOSTNAME}:9000 s3 presign s3://my-bucket/hello.txt

Since you are mapping port 9000 on the host to that service, you should be able to access it via s3:9000 if you simply add s3 to your hosts file (/etc/hosts on Mac/Linux).
Add 127.0.0.1 s3 to your hosts file and you should be able to access the s3 container from your host machine at http://s3:9000/path/to/file.
This means you can use the s3 hostname both inside and outside the Docker network.
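For the Laravel side, the matching .env might then look like this (a sketch; AWS_USE_PATH_STYLE_ENDPOINT maps to the use_path_style_endpoint option of the s3 disk in recent Laravel versions):
AWS_ACCESS_KEY_ID=minio_access_key
AWS_SECRET_ACCESS_KEY=minio_secret_key
AWS_BUCKET=Bucket
AWS_ENDPOINT=http://s3:9000
AWS_USE_PATH_STYLE_ENDPOINT=true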

I didn't find a complete setup of MinIO using docker-compose anywhere, so here it is:
version: '2.4'
services:
  s3:
    image: minio/minio:latest
    ports:
      - "9000:9000"
      - "9099:9099"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - storage-minio:/data
    command: server --address ":9099" --console-address ":9000" /data
    restart: always # necessary since it's failing to start sometimes
volumes:
  storage-minio:
    external: true
In the command section, --address sets the S3 API address (here :9099) and --console-address sets the address of the web console (here :9000). Use the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD values to sign in to the console.
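A quick way to check which port is which (a sketch; /minio/health/live is MinIO's liveness endpoint, also used in the healthcheck below):
curl -f http://localhost:9099/minio/health/live   # the API answers the health probe
# http://localhost:9000 in a browser opens the web console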

Adding the "s3" alias to my local hosts file did not do the trick, but explicitly binding the ports to 127.0.0.1 worked like a charm:
s3:
  image: minio/minio:RELEASE.2022-02-05T04-40-59Z
  restart: "unless-stopped"
  volumes:
    - s3data:/data
  environment:
    MINIO_ROOT_USER: minio
    MINIO_ROOT_PASSWORD: minio123
  # Allow all incoming hosts to access the server by using 0.0.0.0
  command: server --address 0.0.0.0:9000 --console-address ":9001" /data
  ports:
    # Bind explicitly to 127.0.0.1
    - "127.0.0.1:9000:9000"
    - "9001:9001"
  healthcheck:
    test: ["CMD", "curl", "-f", "http://127.0.0.1:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3

For those who are looking for an S3 integration test against a MinIO object server, especially for a Java implementation:
docker-compose file:
version: '3.7'
services:
  minio-service:
    image: quay.io/minio/minio
    command: minio server /data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
The actual IntegrationTest class:
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import org.junit.jupiter.api.*;
import org.testcontainers.containers.DockerComposeContainer;

import java.io.File;

@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class MinioIntegrationTest {

    private static final DockerComposeContainer minioContainer =
            new DockerComposeContainer<>(new File("src/test/resources/docker-compose.yml"))
                    .withExposedService("minio-service", 9000);
    private static final String MINIO_ENDPOINT = "http://localhost:9000";
    private static final String ACCESS_KEY = "minio";
    private static final String SECRET_KEY = "minio123";

    private AmazonS3 s3Client;

    @BeforeAll
    void setupMinio() {
        minioContainer.start();
        initializeS3Client();
    }

    @AfterAll
    void closeMinio() {
        minioContainer.close();
    }

    private void initializeS3Client() {
        String name = Regions.US_EAST_1.getName();
        AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration(MINIO_ENDPOINT, name);
        s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY)))
                .withEndpointConfiguration(endpoint)
                .withPathStyleAccessEnabled(true)
                .build();
    }

    @Test
    void shouldReturnActualContentBasedOnBucketName() throws Exception {
        String bucketName = "test-bucket";
        String key = "s3-test";
        String content = "Minio Integration test";
        s3Client.createBucket(bucketName);
        s3Client.putObject(bucketName, key, content);
        S3Object object = s3Client.getObject(bucketName, key);
        byte[] actualContent = new byte[22];
        object.getObjectContent().read(actualContent);
        Assertions.assertEquals(content, new String(actualContent));
    }
}
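To run this test, the classpath needs Testcontainers and the AWS SDK v1; a Gradle sketch (version numbers are illustrative assumptions):
dependencies {
    // Testcontainers provides DockerComposeContainer
    testImplementation 'org.testcontainers:testcontainers:1.17.6'
    testImplementation 'org.junit.jupiter:junit-jupiter:5.9.2'
    // AWS SDK v1 client used by the test
    implementation 'com.amazonaws:aws-java-sdk-s3:1.12.400'
}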

Related

How to get hostname from one container to another using docker compose?

I have two Docker containers: one backend, and the other db (postgres). They are linked. How do I use the backend's HOST environment variable in the golang docker container?
From my understanding, both containers have their own IP addresses. I cannot use 'localhost' in the golang container because postgres isn't on localhost, but in an isolated container.
version: "3.7"
services:
backend:
image: golang:1.16
build: ./
working_dir: /app
volumes:
- ./backend/:/app
environment:
HOST: db
command: go run main.go
ports:
- 8080:8080
depends_on:
- db
db:
image: postgres
restart: always
environment:
POSTGRES_USER: gorm
POSTGRES_PASSWORD: gorm
POSTGRES_DB: gorm
ports:
- 9920:9920
I've tried researching how to access this variable, as well as checking Docker tutorials/documentation, but haven't found a solution.
Docker Compose does DNS resolution, so you should be able to reach your database by service name.
Remove:
  environment:
    HOST: db
Correct the postgres port to 5432:
  db:
    ...
    ports:
      - 5432:5432
You should then be able to connect like so:
db := pg.Connect(&pg.Options{
    Addr:     "db:5432",
    User:     "gorm",
    Database: "gorm",
    Password: "gorm",
})
As for environment variables, you can declare and access them like this:
backend:
  environment:
    POSTGRES_USER: gorm
    ...
os.Getenv("POSTGRES_USER")
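Putting it together, a minimal sketch of the backend's main.go (assumes the go-pg v10 client implied by the pg.Connect snippet above; the SELECT 1 check is illustrative):
package main

import (
	"fmt"
	"os"

	"github.com/go-pg/pg/v10"
)

func main() {
	// Read the host from the environment, falling back to the compose service name.
	host := os.Getenv("HOST")
	if host == "" {
		host = "db"
	}
	db := pg.Connect(&pg.Options{
		Addr:     host + ":5432",
		User:     "gorm",
		Password: "gorm",
		Database: "gorm",
	})
	defer db.Close()

	// Simple connectivity check.
	var n int
	if _, err := db.QueryOne(pg.Scan(&n), "SELECT 1"); err != nil {
		panic(err)
	}
	fmt.Println("connected to postgres:", n == 1)
}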
Docker Compose will create a network for your containers where they can communicate and reach each other.
You can make a simple change to your compose file by adding a container_name to your services, which makes sure they get the same name each time.
version: "3.7"
services:
backend:
image: golang:1.16
build: ./
container_name: backend
working_dir: /app
volumes:
- ./backend/:/app
command: go run main.go
ports:
- "8080:8080"
depends_on:
- db
db:
image: postgres
restart: always
container_name: db
environment:
POSTGRES_USER: gorm
POSTGRES_PASSWORD: gorm
POSTGRES_DB: gorm
ports:
- "9920:5432"
The default port for Postgres is 5432, so I mapped host port 9920 to it. Note that the published port only matters from the host: another container on the compose network reaches Postgres on the service port, so the backend container should use db:5432, while a client on the host machine would use localhost:9920.

Connect service container to db container

I'm new to Docker and have started to play with it on a small project of mine.
I have dockerized the service itself with the following Dockerfile:
FROM adoptopenjdk:11-jdk-hotspot AS DEPENDENCIES_BUILD_IMAGE
ENV APP_HOME=/usr/app/
WORKDIR $APP_HOME
COPY build.gradle settings.gradle gradlew $APP_HOME
COPY gradle $APP_HOME/gradle
# Warm the Gradle dependency cache; this first build is allowed to fail
RUN ./gradlew build || true
COPY . .
RUN ./gradlew build

FROM adoptopenjdk/openjdk11:jdk-11.0.7_10-alpine AS FINAL
ENV JAR_TEMPLATE=myapp-0.0.1-SNAPSHOT.jar
ENV ARTIFACT_NAME=myapp.jar
ENV APP_HOME=/usr/app
WORKDIR $APP_HOME
COPY --from=DEPENDENCIES_BUILD_IMAGE $APP_HOME/build/libs/$JAR_TEMPLATE .
RUN mv $JAR_TEMPLATE $ARTIFACT_NAME
EXPOSE 8080
CMD ["java", "-jar", "myapp.jar"]
Side note: I know there's a problem in that I'm always copying 0.0.1-SNAPSHOT, but I'm not sure how to solve it at the moment (one common workaround is sketched below).
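The workaround (a hedged sketch, untested here): copy the built jar by wildcard so the version number never appears in the final stage:
# Sketch: wildcard copy decouples the image from the Gradle version number;
# works as long as exactly one jar is produced in build/libs
COPY --from=DEPENDENCIES_BUILD_IMAGE /usr/app/build/libs/*.jar ./myapp.jar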
After that, I wanted to connect my service to a Postgres DB with docker-compose using this configuration:
version: '3'
services:
  backend:
    build: .
    container_name: myapp
    ports:
      - "8080:8080"
    links:
      - "db"
    depends_on:
      - db
    networks:
      - backend
  db:
    restart: unless-stopped
    image: postgres:10
    container_name: myapp-db
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=myapp
    ports:
      - 5436:5436
    networks:
      - backend
networks:
  backend:
After that I updated my application.properties file to point the DB connection at the other container, as follows:
spring.flyway.url=jdbc:postgresql://db:5436/myapp
spring.flyway.user=postgres
spring.flyway.password=secret
spring.flyway.baseline-on-migrate=true
spring.datasource.url=jdbc:postgresql://db:5436/myapp
spring.datasource.username=postgres
spring.datasource.password=secret
spring.datasource.driverClassName=org.postgresql.Driver
Now I had 2 problems:
1. While I assumed that build: . would rebuild my image on every docker-compose up if something changed, in practice I saw that this is not the case.
2. When the backend service starts, Flyway (a DB migration library) tries to connect to the database and cannot resolve the connection.
I've seen online that the usage of links is deprecated and that I should use networks instead, but neither seems to work - what am I missing?
There were 2 problems with my configuration. The first one: the internal port of Postgres was configured as 5436, while the default port of the image is 5432 (I've updated both of them to 5432).
The second one: in order to point the service at the DB, I added the following environment variables to the service image:
environment: # Pass environment variables to the service
  SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/myapp
  SPRING_DATASOURCE_USERNAME: postgres
  SPRING_DATASOURCE_PASSWORD: secret
  SPRING_FLYWAY_URL: jdbc:postgresql://db:5432/myapp
  SPRING_FLYWAY_USER: postgres
  SPRING_FLYWAY_PASSWORD: secret
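These take precedence over application.properties thanks to Spring Boot's relaxed binding, which maps upper-case, underscore-separated environment variables onto property names, for example:
SPRING_DATASOURCE_URL  ->  spring.datasource.url
SPRING_FLYWAY_URL      ->  spring.flyway.url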
So my current working configuration is this:
version: '3.8'
services:
  backend:
    build: .
    container_name: app-service
    ports:
      - "8080:8080"
    depends_on:
      - db
    environment: # Pass environment variables to the service
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/myapp
      SPRING_DATASOURCE_USERNAME: postgres
      SPRING_DATASOURCE_PASSWORD: secret
      SPRING_FLYWAY_URL: jdbc:postgresql://db:5432/myapp
      SPRING_FLYWAY_USER: postgres
      SPRING_FLYWAY_PASSWORD: secret
  db:
    restart: unless-stopped
    image: postgres:10
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=secret
    volumes:
      - myapp_data:/var/lib/postgresql/data
    ports:
      - 5432:5432
volumes:
  myapp_data:

Access to local database denied through docker container

I am having a problem connecting my compiled Spring-Boot app to the database that I have running on another container on my server.
I have tried different configurations, changing from localhost to the IP address of my server for the connection. I also double-checked that the credentials match by logging in via Adminer. Finally, I rebuilt the compose and image files several times to ensure that I have all the latest versions.
Compose file:
version: '3.1'
services:
  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: mypassword
      MYSQL_DATABASE: marketingappdb
    ports:
      - "3306:3306"
    expose:
      - 3306
    volumes:
      - ./mariadbvolume:/var/lib/mariadb
    networks:
      - marketingapp
  adminer:
    image: adminer
    restart: always
    ports:
      - "8086:8080"
    expose:
      - 8086
    depends_on:
      - db
    networks:
      - marketingapp
  springserver:
    image: marketingapp
    restart: always
    ports:
      - "8091:8091"
    expose:
      - 8091
    depends_on:
      - db
    networks:
      - marketingapp
networks:
  marketingapp:
Spring Server Image:
FROM openjdk:latest
COPY /marketing-app-final.jar .
EXPOSE 8091
ENTRYPOINT ["java", "-jar", "marketing-app-final.jar"]
Application properties for Spring:
server.port = 8091
spring.datasource.url=jdbc:mariadb://0.0.0.0:3306/marketingappdb
spring.datasource.username=root
spring.datasource.password=mypassword
spring.datasource.driver-class-name=org.mariadb.jdbc.Driver
spring.jpa.hibernate.ddl-auto=update
I can connect from my PC to the remote database using the same app configuration (obviously replacing localhost with the IP) and don't see why I shouldn't be able to do the same from the actual server. Thanks in advance for any help!
Use the Docker DNS name to connect your Spring app to mariadb:
jdbc:mariadb://db:3306/marketingappdb
Just a few other hints: you don't need to expose port 3306, since you already bind it to 3306 on the host (and if you only use it from within the Docker services, you don't need to bind/expose it at all). Also, MariaDB's persistent storage lives in /var/lib/mysql, not /var/lib/mariadb.
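Applied to the files above, the relevant corrections might look like this (a sketch):
# docker-compose.yml - mount the volume at MariaDB's actual data directory
  db:
    volumes:
      - ./mariadbvolume:/var/lib/mysql
# application.properties - address the db service by name
spring.datasource.url=jdbc:mariadb://db:3306/marketingappdb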

How to fix `kafka: client has run out of available brokers to talk to (Is your cluster reachable?)` error

I am developing an application which reads a message off of an SQS queue, does some stuff with that data, and publishes the result to a Kafka topic. In order to test locally, I'd like to set up a Kafka image in my Docker build. I am currently able to spin up the aws-cli, localstack, and my app's containers locally using docker-compose. Separately, I am able to spin up Kafka and ZooKeeper without a problem as well. I am unable to get my application to communicate with Kafka.
I've tried using two separate compose files, and also fiddled with the networks. Finally, I've referenced https://rmoff.net/2018/08/02/kafka-listeners-explained/.
Here is my docker-compose file:
version: '3.7'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack
    env_file: .env
    ports:
      # Localstack endpoints for various APIs. Format is localhost:container
      - '4563-4584:4563-4584'
      - '8080:8080'
    environment:
      - SERVICES=sns:4575,sqs:4576
      - DATA_DIR=/tmp/localstack/data
    volumes:
      # store data locally in 'localstack' folder
      - './localstack:/tmp/localstack'
    networks:
      - my_network
  aws:
    image: mesosphere/aws-cli
    container_name: aws-cli
    # copy local JSON_DATA folder contents into aws-cli container's app folder
    #volumes:
    #  - ./JSON_DATA:/app
    env_file: .env
    # bash entrypoint needed for multiple commands
    entrypoint: /bin/sh -c
    command: >
      " sleep 10;
      aws --endpoint-url=http://localstack:4576 sqs create-queue --queue-name input_queue;
      aws --endpoint-url=http://localstack:4575 sns create-topic --name input_topic;
      aws --endpoint-url=http://localstack:4575 sns subscribe --topic-arn arn:aws:sns:us-east-2:123456789012:example_topic --protocol sqs --notification-endpoint http://localhost:4576/queue/input_queue; "
    networks:
      - my_network
    depends_on:
      - localstack
  my_app:
    build: .
    image: my_app
    container_name: my_app
    env_file: .env
    ports:
      - '9000:9000'
    networks:
      - my_network
    depends_on:
      - localstack
      - aws
  zookeeper:
    image: confluentinc/cp-zookeeper:5.0.0
    container_name: zookeeper
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    networks:
      - my_network
  kafka:
    image: confluentinc/cp-kafka:5.0.0
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    environment:
      # For more details see https://rmoff.net/2018/08/02/kafka-listeners-explained/
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: INSIDE://localhost:9092
      KAFKA_LISTENERS: INSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_CREATE_TOPICS: "output_topic:2:2"
    networks:
      - my_network
networks:
  my_network:
I would hope to see no errors as a result of publishing to this topic. Instead, I'm getting:
kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
Any ideas what I may be doing wrong? Thank you for your help.
You've made the broker resolvable only within the Kafka container itself (or from your host to the container) by setting the listeners to localhost.
If you want another Docker service to be able to reach that container, you'll have to add <some protocol>://kafka:<some port> to the advertised listeners, and make the listeners bind to something other than localhost,
where that protocol is also added to KAFKA_LISTENER_SECURITY_PROTOCOL_MAP.
FWIW, that blog post should cover all those bases.
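Applied to the compose file above, the kafka service's environment might look like this (a sketch following the linked blog post; the INSIDE/OUTSIDE names and port 29092 are illustrative, not tested here):
  kafka:
    image: confluentinc/cp-kafka:5.0.0
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # INSIDE is for other compose services, OUTSIDE for clients on the host
      KAFKA_LISTENERS: INSIDE://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:29092,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    networks:
      - my_network
my_app would then use kafka:29092 as its broker address, while a client on the host would use localhost:9092.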

Connecting Spring Cloud Applications in Docker Container

I am attempting to host a Spring Cloud application in Docker containers. The underlying exception is as follows:
search_1 | Caused by: java.lang.IllegalStateException: Invalid URL: config:8888
I understand the reason is the URL I specified for my config server:
spring.application.name=inventory-client
#spring.cloud.config.uri=http://localhost:8888
spring.cloud.config.uri=config:8888
On my development machine, I am able to use localhost. However, based on a past question (relating to connecting to my database), I learned that localhost is not appropriate in containers. For my database, I was able to use the following:
spring.jpa.hibernate.ddl-auto=none
spring.jpa.show-sql=false
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.datasource.url=jdbc:postgresql://db:5432/leisurely_diversion
#spring.datasource.url=jdbc:postgresql://localhost:5000/leisurely_diversion
spring.datasource.driver-class-name=org.postgresql.Driver
but this obviously did not work as expected for the configuration server.
My docker-compose file:
# Use postgres/example user/password credentials
version: '3.2'
services:
  db:
    image: postgres
    ports:
      - 5000:5432
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - type: volume
        source: psql_data
        target: /var/lib/postgresql/data
    networks:
      - app
    restart: always
  config:
    image: kellymarchewa/config_server
    networks:
      - app
    volumes:
      - /root/.ssh:/root/.ssh
    restart: always
  search:
    image: kellymarchewa/search_api
    networks:
      - app
    restart: always
    ports:
      - 8082:8082
    depends_on:
      - db
      - config
      - inventory
  inventory:
    image: kellymarchewa/inventory_api
    depends_on:
      - db
      - config
    ports:
      - 8081:8081
    networks:
      - app
    restart: always
volumes:
  psql_data:
networks:
  app:
Both services are running on the same user-defined network; how do I allow the services to find the configuration service?
Thanks.
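The exception itself suggests the likely fix (a hedged sketch, not a tested answer): spring.cloud.config.uri must be a full URL including the scheme, so keep the compose service name but add http://:
spring.application.name=inventory-client
spring.cloud.config.uri=http://config:8888
On the shared user-defined network, the config hostname then resolves to the config-server container, just as db does for the database.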
