How to fix `kafka: client has run out of available brokers to talk to (Is your cluster reachable?)` error - go

I am developing an application that reads a message off an SQS queue, processes the data, and publishes the result to a Kafka topic. To test locally, I'd like to add a Kafka image to my Docker build. I can currently spin up the aws-cli, localstack, and my app's containers locally using docker-compose. Separately, I can also spin up Kafka and Zookeeper without a problem. However, I am unable to get my application to communicate with Kafka.
I've tried using two separate compose files, and also fiddled with the networks. Finally, I've referenced: https://rmoff.net/2018/08/02/kafka-listeners-explained/.
Here is my docker-compose file:
version: '3.7'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack
    env_file: .env
    ports:
      # Localstack endpoints for various APIs. Format is localhost:container
      - '4563-4584:4563-4584'
      - '8080:8080'
    environment:
      - SERVICES=sns:4575,sqs:4576
      - DATA_DIR=/tmp/localstack/data
    volumes:
      # store data locally in 'localstack' folder
      - './localstack:/tmp/localstack'
    networks:
      - my_network
  aws:
    image: mesosphere/aws-cli
    container_name: aws-cli
    # copy local JSON_DATA folder contents into aws-cli container's app folder
    #volumes:
    #  - ./JSON_DATA:/app
    env_file: .env
    # bash entrypoint needed for multiple commands
    entrypoint: /bin/sh -c
    command: >
      " sleep 10;
      aws --endpoint-url=http://localstack:4576 sqs create-queue --queue-name input_queue;
      aws --endpoint-url=http://localstack:4575 sns create-topic --name input_topic;
      aws --endpoint-url=http://localstack:4575 sns subscribe --topic-arn arn:aws:sns:us-east-2:123456789012:example_topic --protocol sqs --notification-endpoint http://localhost:4576/queue/input_queue; "
    networks:
      - my_network
    depends_on:
      - localstack
  my_app:
    build: .
    image: my_app
    container_name: my_app
    env_file: .env
    ports:
      - '9000:9000'
    networks:
      - my_network
    depends_on:
      - localstack
      - aws
  zookeeper:
    image: confluentinc/cp-zookeeper:5.0.0
    container_name: zookeeper
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    networks:
      - my_network
  kafka:
    image: confluentinc/cp-kafka:5.0.0
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    environment:
      # For more details, see https://rmoff.net/2018/08/02/kafka-listeners-explained/
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: INSIDE://localhost:9092
      KAFKA_LISTENERS: INSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_CREATE_TOPICS: "output_topic:2:2"
    networks:
      - my_network
networks:
  my_network:
I would hope to see no errors as a result of publishing to this topic. Instead, I'm getting:
kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
Any ideas what I may be doing wrong? Thank you for your help.

By setting the listeners only to localhost, you've made the broker resolvable only from within the Kafka container itself (or from your host to the container).
If you want another Docker service to be able to reach that container, you'll have to add <some protocol>://kafka:<some port> to the advertised listeners, and bind the listeners to something other than localhost, where the protocol is also added to KAFKA_LISTENER_SECURITY_PROTOCOL_MAP.
FWIW, that blog post should cover all those bases.
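As a minimal sketch of the Kafka service's environment (the listener names and the 29092 internal port are illustrative choices, not from your file; adjust to your setup):

  kafka:
    image: confluentinc/cp-kafka:5.0.0
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # bind on all interfaces inside the container
      KAFKA_LISTENERS: INSIDE://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092
      # other compose services reach the broker as kafka:29092; the host uses localhost:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:29092,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    networks:
      - my_network

With that layout, your app container would use kafka:29092 as its bootstrap address, while tools on your host machine would keep using localhost:9092.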

Related

No resolvable bootstrap urls given in bootstrap.servers on maven install

I am building a jar file for my Spring Boot microservice, but on mvn install I am getting the error below.
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:89) ~[kafka-clients-3.1.1.jar:na]
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48) ~[kafka-clients-3.1.1.jar:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:730) ~[kafka-clients-3.1.1.jar:na]
My docker-compose file for the application is:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    restart: always
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 2181:2181
    networks:
      - app-network
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    restart: always
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - app-network
  mongodb:
    image: mongo:latest
    restart: always
    container_name: mongodb
    networks:
      - app-network
    ports:
      - 27017:27017
  matching-engine-helper:
    image: matching-engine-helper:latest
    restart: always
    container_name: "matching-engine-helper"
    networks:
      - app-network
    ports:
      - 9192:9192
    depends_on:
      - mongodb
      - zookeeper
      - kafka
  matching-engine-core:
    image: matching-engine-core:latest
    restart: always
    container_name: "matching-engine-core"
    networks:
      - app-network
    ports:
      - 9191:9191
    depends_on:
      - matching-engine-helper
      - zookeeper
      - kafka
      - mongodb
networks:
  app-network:
    driver: bridge
and the application.yml is
spring:
  data:
    mongodb:
      database: matching-engine-mongodb
      host: mongodb
      port: 27017
  kafka:
    bootstrap-servers: kafka:9092
server:
  port: 9192
Let me know if I need some environment variable configuration here. It runs fine in my local environment on localhost, but not in the Docker containers, since I am unable to build a jar out of it.
Assuming mvn install is running on your host, then your host doesn't know how to resolve kafka as a DNS name running in a container. If that's what you wanted, then you should rename your file to application-docker.yml and use Spring profiles to properly override any defaults when you actually do run your code in a container. In your case, you would need to have localhost:29092 for Kafka, and similarly use localhost:27017 for Mongo in your default application.yml. You can also use environment variables for those two properties rather than hard-code them.
Or, if mvn install is running in a Docker layer itself, then that is completely isolated from other Docker networks. You can pass the --network flag to docker build, though.
Assuming you don't want to mvn install -DskipTests, ideally your tests should not rely on external services to be running. If you want to run a Kafka unit test, which is failing to connect, then you should either mock that, or use Spring Kafka's EmbeddedKafka for integration tests.
Worth mentioning that Spring Boot has a Maven plugin for building Docker images, and it doesn't need a JAR to do so.
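As a sketch of that profile layout (file names follow the answer; activating the docker profile via SPRING_PROFILES_ACTIVE=docker in the container is an assumption about how you run it):

# application.yml -- defaults, used on the host for mvn install / local runs
spring:
  data:
    mongodb:
      host: localhost
      port: 27017
  kafka:
    bootstrap-servers: localhost:29092

# application-docker.yml -- overrides applied inside the compose network
spring:
  data:
    mongodb:
      host: mongodb
  kafka:
    bootstrap-servers: kafka:9092

Note that for localhost:29092 to be reachable from the host, the compose file above would also need to publish port 29092 on the kafka service.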

Facing error response from daemon-Windows

I am trying to run Apache Kafka on Windows using Docker, and my docker-compose.yml is as follows:
version: "3"
services:
spark:
image: jupyter/pyspark-notebook
ports:
- "9092:9092"
- "4010-4109:4010-4109"
volumes:
- ./notebooks:/home/jovyan/work/notebooks/
zookeeper:
image: 'bitnami/zookeeper:latest'
container_name: zookeeper
ports:
- '2181:2181'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: 'bitnami/kafka:latest'
container_name: kakfa
ports:
- '9092:9092'
environment:
- KAFKA_BROKER_ID=1
- KAFKA_LISTENERS=PLAINTEXT://:9092
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
depends_on:
- zookeeper
When I execute the command
docker-compose -f docker-compose.yml up
I get an error: Error response from daemon: driver failed programming external connectivity on endpoint kafka-spark-1 (452eae1760b7860e3924c0e630943f825a809272760c8aa8bbb2f58ab2865377): Bind for 0.0.0.0:9092 failed: port is already allocated
I have tried net stop winnat and net start winnat, but unfortunately that didn't work.
I would appreciate any kind of help!
Spark isn't running Kafka
Remove the ports here
image: jupyter/pyspark-notebook
ports:
  - "9092:9092"
Also, change the advertised listener variable for Kafka to use the proper hostname; otherwise Spark will not be able to reach it:
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
Then you can also remove the ports for the Kafka container, since you wouldn't have access from the host unless you add external listeners.
You may also be interested in an example notebook I use to test PySpark with Kafka.
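Putting that advice together, a sketch of the adjusted services (service names are taken from the question; the Spark service keeps only its own ports):

  spark:
    image: jupyter/pyspark-notebook
    ports:
      - "4010-4109:4010-4109"
    volumes:
      - ./notebooks:/home/jovyan/work/notebooks/
  kafka:
    image: 'bitnami/kafka:latest'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      # advertise the compose service name so other containers can resolve it
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper

Clients inside the compose network (including the Spark notebooks) would then bootstrap from kafka:9092.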

I want to communicate my client and kafka broker with docker compose

My client, Kafka, and Zookeeper are in the same network. I am trying to connect from the client to Kafka with SERVICE_NAME:PORT, but
driver-service-container | 2022-07-24 09:00:05.076 WARN 1 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
I get an error.
I know that containers in the same network can easily communicate with each other using the service name, but I don't understand why it doesn't work here.
The name of my client trying to communicate with Kafka is
driver-service
I looked through these resources but according to them my method should work:
Connect to Kafka running in Docker
My Python/Java/Spring/Go/Whatever Client Won’t Connect to My Apache
Kafka Cluster in Docker/AWS/My Brother’s Laptop. Please Help!
driver-service GitHub repository
My docker-compose file:
version: '3'
services:
  gateway-server:
    image: gateway-server-image
    container_name: gateway-server-container
    ports:
      - '5555:5555'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
      - PASSENGER_SERVICE_URL=172.24.2.4:4444
      - DRIVER_SERVICE_URL=172.24.2.5:3333
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.6
  driver-service:
    image: driver-service-image
    container_name: driver-service-container
    ports:
      - '3333:3333'
    environment:
      - NOTIFICATION_SERVICE_URL=172.24.2.3:8888
      - PAYMENT_SERVICE_URL=172.24.2.2:7777
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
      - KAFKA_GROUP_ID=driver-group-id
      - KAFKA_BOOTSTRAP_SERVERS=broker:29092
      - kafka.consumer.group.id=driver-group-id
      - kafka.consumer.enable.auto.commit=true
      - kafka.consumer.auto.commit.interval.ms=1000
      - kafka.consumer.auto.offset.reset=earliest
      - kafka.consumer.max.poll.records=1
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.5
  passenger-service:
    image: passenger-service-image
    container_name: passenger-service-container
    ports:
      - '4444:4444'
    environment:
      - PAYMENT_SERVICE_URL=172.24.2.2:7777
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.4
  notification-service:
    image: notification-service-image
    container_name: notification-service-container
    ports:
      - '8888:8888'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.3
  payment-service:
    image: payment-service-image
    container_name: payment-service-container
    ports:
      - '7777:7777'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.2
  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.1
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - microservicesNetwork
  broker:
    image: confluentinc/cp-kafka:7.0.1
    container_name: broker
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      GROUP_ID: driver-group-id
      KAFKA_CREATE_TOPICS: "product"
    networks:
      - microservicesNetwork
  kafka-ui:
    image: provectuslabs/kafka-ui
    container_name: kafka-ui
    ports:
      - "8080:8080"
    restart: always
    environment:
      - KAFKA_CLUSTERS_0_NAME=broker
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=broker:29092
      - KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181
      - KAFKA_CLUSTERS_0_READONLY=true
    networks:
      - microservicesNetwork
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    platform: linux/x86_64
    environment:
      - discovery.type=single-node
      - max_open_files=65536
      - max_content_length_in_bytes=100000000
      - transport.host=elasticsearch
    volumes:
      - $HOME/app:/var/app
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - microservicesNetwork
  postgresql:
    image: postgres:11.1-alpine
    platform: linux/x86_64
    container_name: postgresql
    volumes:
      - ./postgresql/:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=123456
      - POSTGRES_USER=postgres
      - POSTGRES_DB=cqrs_db
    ports:
      - "5432:5432"
    networks:
      - microservicesNetwork
networks:
  microservicesNetwork:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.24.2.0/16
          gateway: 172.24.2.1
application.prod.properties ->
#datasource
spring.datasource.url=jdbc:h2:mem:db_driver
spring.datasource.username=root
spring.datasource.password=1234
spring.datasource.driver-class-name=org.h2.Driver
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
#need spring-security config.
spring.h2.console.enabled=false
spring.h2.console.path=/h2-console
spring.jpa.show-sql=true
service.security.secure-key-username=${SECURE_KEY_USERNAME}
service.security.secure-key-password=${SECURE_KEY_PASSWORD}
payment.service.url=${PAYMENT_SERVICE_URL}
notification.service.url=${NOTIFICATION_SERVICE_URL}
#kafka configs
kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}
kafka.group.id =${KAFKA_GROUP_ID}
spring.cache.cache-names=driver
spring.jackson.serialization.fail-on-empty-beans= false
spring.http.multipart.max-file-size=10MB
spring.http.multipart.max-request-size=11MB
If the error says localhost/127.0.0.1:9092, then your environment variable isn't being used.
In the startup logs from the container, look at the AdminClientConfig or ConsumerConfig sections, and you'll see the real bootstrap address that's being used.
KAFKA_BOOTSTRAP_SERVERS=broker:29092 is correct based on your KAFKA_ADVERTISED_LISTENERS.
But in your properties, it's unclear how this is used without seeing your config class:
kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}
If you read the Spring Kafka documentation closely, you'll see it needs to be spring.kafka.bootstrap-servers in order to be wired in automatically.
Side note: all those kafka.consumer.* attributes would need to be set as JVM properties, not container environment variables.
Also, Docker services should be configured to communicate with each other by service names, not assigned IP addresses.
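For example, if you rely on Spring Boot's auto-configuration, relaxed binding maps the environment variables below to spring.kafka.bootstrap-servers and spring.kafka.consumer.group-id, so the driver-service environment could be reduced to something like this (a sketch, replacing the custom KAFKA_* variables from the question):

  driver-service:
    image: driver-service-image
    environment:
      # picked up automatically by Spring Boot as spring.kafka.bootstrap-servers
      - SPRING_KAFKA_BOOTSTRAP_SERVERS=broker:29092
      # picked up as spring.kafka.consumer.group-id
      - SPRING_KAFKA_CONSUMER_GROUP_ID=driver-group-id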
problem solved 😊
If I run driver-service on my local computer, it connects via localhost:9092, but when driver-service and Kafka are in the same Docker network, it needs to connect via KAFKA_IP:29092 (the service name can be used instead of KAFKA_IP). Kafka expects us to configure a listener for each of these different network environments (Source).

When I ran driver-service on my local computer, it could communicate with Kafka, but the two could not communicate inside the same Docker network. In other words, driver-service was not using the Kafka connection address I had defined in the application.prod.properties file that it was supposed to use while running in Docker.

The problem was in my Spring Kafka integration. I was trying to give my client application the Kafka address via the kafka.bootstrap.servers key in my properties file, reading and assigning that key's value in a KafkaBean class, but the client never saw it and kept trying to connect to localhost:9092.

First, I set the active profile in my Dockerfile with ENTRYPOINT ["java", "-Dspring.profiles.active=prod", "-jar", "driver-service-0.0.2-SNAPSHOT.jar"] so that application.prod.properties is used when running in the Docker environment. Then, as stated in the Spring Kafka documentation (Source), if we use the key spring.kafka.bootstrap-servers instead of kafka.bootstrap.servers, Spring can automatically detect which address to use to connect to Kafka. I only had to give the producer the Kafka address as well, using the @Value annotation, so that driver-service and Kafka could communicate seamlessly in the Docker network 😇
Thank you very much @OneCricketeer and @Svend for your help.
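For reference, the two pieces of that fix, as described above (jar name and profile copied from the description):

# Dockerfile: run the jar with the prod profile active inside the container
ENTRYPOINT ["java", "-Dspring.profiles.active=prod", "-jar", "driver-service-0.0.2-SNAPSHOT.jar"]

# application.prod.properties: use the key that Spring Boot auto-configures from
spring.kafka.bootstrap-servers=${KAFKA_BOOTSTRAP_SERVERS}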

Docker kafka on MacOS M1 Issues stuck on configuring

I use macOS Big Sur 11.2.3 on an M1 Mac, but my Kafka is not running well and I cannot create/list topics. I don't know whether it's because of the OS or not, but the Kafka log only shows this:
docker-compose logs
list the topics logs
here's my docker compose:
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
    networks:
      - kafka_net
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    ports:
      - 9092:9092
    expose:
      - 29092
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LISTENERS: INSIDE://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:29092,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 1
    restart: always
    networks:
      - kafka_net
networks:
  kafka_net:
    driver: bridge
I think Kafka is not running yet, so I cannot list/create topics. Do you have any idea about this? I've already searched for possible causes, but I still haven't solved the problem. Thanks.
To start up Kafka in Docker on a Mac M1, I do the following:
Clone the Kafka repo and build it on my MacBook Pro M1 (./gradlew clean releaseTarGz).
Create a docker-compose project dir with the following structure:
root dir
  kafka_m1
    kafka (unzipped Kafka distribution)
    Dockerfile
  docker-compose.yml
Configure the server.properties file (root dir/kafka_m1/kafka/config/server.properties). Main params:
zookeeper.connect=zookeeper:2181
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://127.0.0.1:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT
Kafka's Dockerfile (openjdk:15.0.2-jdk is built for M1):
FROM openjdk:15.0.2-jdk
WORKDIR /
COPY . /
EXPOSE 9092
EXPOSE 8092
EXPOSE 9092
EXPOSE 10092
EXPOSE 11092
EXPOSE 12092
ENTRYPOINT [ "/kafka/bin/kafka-server-start.sh", "/kafka/config/server.properties" ]
docker-compose.yml (I chose zookeeper:3.7.0 because it has an ARM build):
version: "2"
services:
zookeeper:
image: zookeeper:3.7.0
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
kafka:
build: ./kafka_m1
ports:
- "127.0.0.1:9092:9092"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Start docker-compose, building the kafka_m1 image.
It's a quick recipe for running Kafka locally in Docker.
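For example, from the root dir (standard compose flags; adjust as needed):

docker-compose up --build -d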
Just add platform to your docker-compose.yml file:
platform: linux/amd64
services:
  zookeeper:
    image: wurstmeister/zookeeper
    platform: linux/amd64
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
    networks:
      - kafka_net
  ...
If the above option doesn't help, then you should try the following:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
The docker-compose.yaml file is OK; you just have to execute the following commands inside the kafka container:
kafka-topics.sh --zookeeper zookeeper:2181 --topic test --create --partitions 3 --replication-factor 1
kafka-topics.sh --zookeeper zookeeper:2181 --list
BTW: you can enter the kafka container using the command docker exec -it CONTAINER_ID bash
Let me provide the docker-compose file that I spent a whole day creating.
The docker-compose file below builds containers for Zookeeper, Kafka, and the kafka-manager CMAK:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    restart: always
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    restart: always
    ports:
      - "9092:9092"
      - "1099:1099"
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: kafka
      JMX_PORT: 1099
  kafka_manager:
    image: hlebalbau/kafka-manager:stable
    restart: always
    ports:
      - "9000:9000"
    environment:
      ZK_HOSTS: zookeeper:2181
    command: -Dpidfile.path=/dev/null
docker-compose up --build -d
Then, to manage the cluster through CMAK, you need to execute the following:
docker exec -it zookeeper bash
./bin/zkCli.sh
create /kafka-manager/mutex ""
create /kafka-manager/mutex/locks ""
create /kafka-manager/mutex/leases ""

Docker-compose for production running laravel with nginx on azure

I have an app that is working, but I am having problems making it run on Azure.
I have the following docker-compose:
version: "3.6"
services:
nginx:
image: nginx:alpine
volumes:
- ./:/var/www/
- ./setup/azure/nginx/conf.d/:/etc/nginx/template
environment:
PORT: ${PORT}
command: /bin/sh -c "envsubst '$${PORT}' < /etc/nginx/template/nginx.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
networks:
- mynet
depends_on:
- app
- worker
app:
image: myimage:latest
build:
context: .
dockerfile: ./setup/azure/Dockerfile
restart: unless-stopped
tty: true
expose:
- 9000
volumes:
- uploads:/var/www/simple/public/uploads
- logos:/var/www/simple/public/logos
networks:
- mynet
worker:
image: my_image:latest
command: bash -c "/usr/local/bin/php artisan queue:work --timeout=0"
depends_on:
- app
networks:
- mynet
volumes:
uploads:
logos:
networks:
mynet:
I am unsure whether the volumes on nginx are OK; I think that perhaps I should create a new Dockerfile to copy the files. However, this would increase the size of the project a lot.
When using App Services on Azure, the deployment is assigned a random port, which is why I have the envsubst instruction in the command. I'd appreciate any other suggestions to make this project run on Azure.
I'm assuming you're trying to persist the storage in your app to a volume. Check out this doc issue. Now I don't think you need
volumes:
  - ./:/var/www/
  - ./setup/azure/nginx/conf.d/:/etc/nginx/template
but for
volumes:
  - uploads:/var/www/simple/public/uploads
  - logos:/var/www/simple/public/logos
you can create a storage account, mount it to your Linux app plan (it's not available for Windows app plans yet), and mount the relative path /var/www/simple/public/uploads to the file path of the storage container.
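A rough sketch of that mount with the Azure CLI (the resource group, app, storage account, and share names below are placeholders, and the exact flags may vary by CLI version):

# attach an Azure Files share to the Linux App Service at the uploads path
az webapp config storage-account add \
  --resource-group my-rg \
  --name my-laravel-app \
  --custom-id uploads \
  --storage-type AzureFiles \
  --account-name mystorageaccount \
  --share-name uploads \
  --access-key "<storage-account-key>" \
  --mount-path /var/www/simple/public/uploads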
