I have a Java Spring web application that connects to Postgres. The connection string to the database is: spring.datasource.url=jdbc:postgresql://postgres:5432/postgres
There is a compose file that brings up the web application and the database:
version: "3"
services:
postgres:
networks:
- backend
image: postgres
ports:
- "5432:5432"
volumes:
- db-data:/var/lib/postgresql/data
worker1:
networks:
- backend
image: scripter51/worker
ports:
- "8082:8082"
deploy:
mode: replicated
replicas: 2
placement:
constraints: [node.role == worker]
networks:
backend:
volumes:
db-data:
The services are published on the machine with the command docker stack deploy --compose-file comp.yml test.
Problem: if the database and the web application run on the same machine, everything works; if they run on different machines, the application cannot find the database by its service name.
I have not been able to solve this problem.
I tried to create a network between the host and the virtual machine using Docker's own tooling, but apparently that does not work.
Related
I just encountered a problem. I am dockerizing a Spring Boot application with MySQL as the database; it works perfectly in a local setup. But when I try to dockerize the application using docker-compose, the MySQL container works fine and is accessible from my workbench, but my application cannot reach it and throws a communication link failure.
This is the compose file I am using:
version: "3.8"
services:
mysqldb:
image: mysql:5.7
restart:unless-stopped
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=baskartest
ports:
- 3307:3306
volumes:
- db:/var/lib/mysql
app:
depends_on:
- mysqldb
build: ./bezkoder-app
restart:on-failure
env_file: ./.env
ports:
- 8084:8080
environment:
SPRING_APPLICATION_JSON: '{
"spring.datasource.url" : "jdbc:mysql://mysqldb:3306/baskartest?useSSL=false",
"spring.datasource.username" : "root",
"spring.datasource.password" : "root",
"spring.jpa.properties.hibernate.dialect" : "org.hibernate.dialect.MySQL5InnoDBDialect",
"spring.jpa.hibernate.ddl-auto" : "update"
}'
volumes:
- .m2:/root/.m2
stdin_open: true
tty: true
MySQL works fine, but my app service is not able to communicate with it; when it tries to reach MySQL it fails with the communication link failure.
Any help would be appreciated!
I have made small changes in the docker-compose.yml file; please use this one. It works fine.
version: "3.8"
services:
mysqldb:
image: mysql:5.7
container_name: MYSQL_DB
restart: unless-stopped
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=baskartest
ports:
- 3307:3306
app:
build: ./bezkoder-app
restart: on-failure
ports:
- 8084:8080
environment:
- SPRING_DATASOURCE_URL=jdbc:mysql://MYSQL_DB:3306/baskartest
- SPRING_DATASOURCE_USERNAME=root
- SPRING_DATASOURCE_PASSWORD=root
- SPRING_JPA_HIBERNATE_DDL_AUTO=update
depends_on:
- mysqldb
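For context, Spring Boot's relaxed binding maps the SPRING_DATASOURCE_URL, SPRING_DATASOURCE_USERNAME and SPRING_DATASOURCE_PASSWORD environment variables onto spring.datasource.url, spring.datasource.username and spring.datasource.password, so the values in the compose file above override whatever is in application.properties. A minimal sketch to confirm which URL the application actually resolved at startup (the class name and log message are illustrative, not part of the original project):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

// Logs the JDBC URL the application actually resolved, so you can confirm
// that the container environment variable (not localhost) is being used.
@Component
public class DatasourceUrlLogger implements CommandLineRunner {

    @Value("${spring.datasource.url}")
    private String datasourceUrl;

    @Override
    public void run(String... args) {
        System.out.println("Resolved spring.datasource.url = " + datasourceUrl);
    }
}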
In my short experience with Docker and containers communicating with each other, I've found that localhost configuration URLs don't work, because one container can't reach another container using just localhost:[PORT].
A lazy way to fix this is to work out a static IP address of the machine the containers are running on, and then use this (local) IP address instead of localhost to define the database endpoint.
In my situation with Redis I'm doing it like this:
return new LettuceConnectionFactory(new RedisStandaloneConfiguration("192.168.1.201", 6379));
In your situation, you might want to use a Datasource URL like this:
- SPRING_DATASOURCE_URL=jdbc:mysql://[STATIC_IP_OF_MACHINE]:3306/baskartest
*e.g. ...192.168.1.100:3306/baskartest...*
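If you prefer not to hard-code the address, the same connection factory can read the host and port from properties, so either the static IP or the compose service name can be supplied through the environment. This is only a sketch under that assumption; the redis.host and redis.port property names are mine, not from the answer above:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
public class RedisConfig {

    // Defaults to localhost for local runs; in docker-compose set redis.host
    // (or the REDIS_HOST environment variable) to the Redis service name or IP.
    @Value("${redis.host:localhost}")
    private String redisHost;

    @Value("${redis.port:6379}")
    private int redisPort;

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        return new LettuceConnectionFactory(new RedisStandaloneConfiguration(redisHost, redisPort));
    }
}

With this in place, setting REDIS_HOST to the compose service name in the container environment has the same effect as the hard-coded IP above, without rebuilding the image.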
There are a client, Kafka, and ZooKeeper in the same network. I am trying to connect from the client to Kafka using SERVICE_NAME:PORT, but I get this error:
driver-service-container | 2022-07-24 09:00:05.076 WARN 1 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
I know that containers in the same network can easily communicate with each other using the service name, but I don't understand why it doesn't work here.
The name of my client trying to communicate with Kafka is driver-service.
I looked through these resources but according to them my method should work:
Connect to Kafka running in Docker
My Python/Java/Spring/Go/Whatever Client Won’t Connect to My Apache
Kafka Cluster in Docker/AWS/My Brother’s Laptop. Please Help!
driver-service GitHub repository
My docker-compose file:
version: '3'
services:
  gateway-server:
    image: gateway-server-image
    container_name: gateway-server-container
    ports:
      - '5555:5555'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
      - PASSENGER_SERVICE_URL=172.24.2.4:4444
      - DRIVER_SERVICE_URL=172.24.2.5:3333
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.6
  driver-service:
    image: driver-service-image
    container_name: driver-service-container
    ports:
      - '3333:3333'
    environment:
      - NOTIFICATION_SERVICE_URL=172.24.2.3:8888
      - PAYMENT_SERVICE_URL=172.24.2.2:7777
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
      - KAFKA_GROUP_ID=driver-group-id
      - KAFKA_BOOTSTRAP_SERVERS=broker:29092
      - kafka.consumer.group.id=driver-group-id
      - kafka.consumer.enable.auto.commit=true
      - kafka.consumer.auto.commit.interval.ms=1000
      - kafka.consumer.auto.offset.reset=earliest
      - kafka.consumer.max.poll.records=1
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.5
  passenger-service:
    image: passenger-service-image
    container_name: passenger-service-container
    ports:
      - '4444:4444'
    environment:
      - PAYMENT_SERVICE_URL=172.24.2.2:7777
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.4
  notification-service:
    image: notification-service-image
    container_name: notification-service-container
    ports:
      - '8888:8888'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.3
  payment-service:
    image: payment-service-image
    container_name: payment-service-container
    ports:
      - '7777:7777'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.2
  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.1
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - microservicesNetwork
  broker:
    image: confluentinc/cp-kafka:7.0.1
    container_name: broker
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      GROUP_ID: driver-group-id
      KAFKA_CREATE_TOPICS: "product"
    networks:
      - microservicesNetwork
  kafka-ui:
    image: provectuslabs/kafka-ui
    container_name: kafka-ui
    ports:
      - "8080:8080"
    restart: always
    environment:
      - KAFKA_CLUSTERS_0_NAME=broker
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=broker:29092
      - KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181
      - KAFKA_CLUSTERS_0_READONLY=true
    networks:
      - microservicesNetwork
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    platform: linux/x86_64
    environment:
      - discovery.type=single-node
      - max_open_files=65536
      - max_content_length_in_bytes=100000000
      - transport.host= elasticsearch
    volumes:
      - $HOME/app:/var/app
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - microservicesNetwork
  postgresql:
    image: postgres:11.1-alpine
    platform: linux/x86_64
    container_name: postgresql
    volumes:
      - ./postgresql/:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=123456
      - POSTGRES_USER=postgres
      - POSTGRES_DB=cqrs_db
    ports:
      - "5432:5432"
    networks:
      - microservicesNetwork
networks:
  microservicesNetwork:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.24.2.0/16
          gateway: 172.24.2.1
application.prod.properties ->
#datasource
spring.datasource.url=jdbc:h2:mem:db_driver
spring.datasource.username=root
spring.datasource.password=1234
spring.datasource.driver-class-name=org.h2.Driver
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
#need spring-security config.
spring.h2.console.enabled=false
spring.h2.console.path=/h2-console
spring.jpa.show-sql=true
service.security.secure-key-username=${SECURE_KEY_USERNAME}
service.security.secure-key-password=${SECURE_KEY_PASSWORD}
payment.service.url=${PAYMENT_SERVICE_URL}
notification.service.url=${NOTIFICATION_SERVICE_URL}
#kafka configs
kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}
kafka.group.id =${KAFKA_GROUP_ID}
spring.cache.cache-names=driver
spring.jackson.serialization.fail-on-empty-beans= false
spring.http.multipart.max-file-size=10MB
spring.http.multipart.max-request-size=11MB
If the error says localhost/127.0.0.1:9092, then your environment variable isn't being used.
In the startup logs from the container, look at the AdminClientConfig or ConsumerConfig sections, and you'll see the real bootstrap address that is used.
KAFKA_BOOTSTRAP_SERVERS=broker:29092 is correct based on your KAFKA_ADVERTISED_LISTENERS.
But, in your properties, it's unclear how this is used without seeing your config class:
kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}
If you read the Spring Kafka documentation closely, you'll see it needs to be spring.kafka.bootstrap-servers in order to be wired in automatically.
Sidenote: All those kafka.consumer. attributes would need to be set as JVM properties, not container environment variables.
Also, Docker services should be configured to communicate with each other by service names, not assigned IP addresses
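As a minimal illustration of that last point, assuming spring-kafka is on the classpath and spring.kafka.bootstrap-servers (or the SPRING_KAFKA_BOOTSTRAP_SERVERS environment variable) is set, Spring Boot auto-configures the consumer factory, so a listener needs no hand-built wiring of the broker address. The topic product and group driver-group-id below are taken from the compose file above; the class name is illustrative:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// With spring.kafka.bootstrap-servers set, Spring Boot auto-configures the
// consumer factory; no @Value wiring of the broker address is needed here.
@Component
public class DriverEventListener {

    @KafkaListener(topics = "product", groupId = "driver-group-id")
    public void onMessage(String message) {
        System.out.println("Received: " + message);
    }
}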
problem solved 😊
If I run driver-service on my local computer, it does connect via localhost:9092, but if driver-service and Kafka are in the same Docker network, it needs to connect via KAFKA_IP:29092 (the service name can be used instead of KAFKA_IP); Kafka expects us to configure it for these different network environments (source). When I ran my driver-service application on my local computer, Kafka and driver-service could communicate, but they could not communicate inside the same Docker network. That is, driver-service was not using the Kafka connection address that I had defined in the application.prod.properties file my application should use while running in Docker. The problem was in my Spring Kafka integration: I was trying to give my client application the Kafka address with the kafka.bootstrap.servers key in my properties file, defining that key there and reading its value in a KafkaBean class, but the client did not see it and persistently tried to connect to localhost:9092. First, I set my active profile in my Dockerfile with ENTRYPOINT ["java", "-Dspring.profiles.active=prod", "-jar", "driver-service-0.0.2-SNAPSHOT.jar"] so that application.prod.properties is used when running in the Docker environment. Then, if we use the key spring.kafka.bootstrap-servers instead of kafka.bootstrap.servers, as stated in the Spring Kafka documentation (source), Spring can automatically detect which address to use to connect to Kafka. I just had to give the producer the Kafka address as well, using the @Value annotation, so that driver-service and Kafka could communicate seamlessly in the Docker network 😇
Thank you very much, @OneCricketeer and @Svend, for your help.
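For completeness, here is a rough sketch of the kind of producer configuration described above, reading the broker address from spring.kafka.bootstrap-servers with @Value; the class and bean names are illustrative, not the actual driver-service code:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    // Resolved from application.prod.properties / the container environment,
    // e.g. broker:29092 inside the Docker network.
    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}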
I have a Spring Boot microservices project with three microservices to test the saga pattern for distributed transaction management.
When I run Axon Server locally with java -jar axonserver.jar and spring boot microservices with mvn spring-boot:run, everything is ok and I can see all microservices in Axon Server dashboard.
I have added a Dockerfile for the microservices and a docker-compose.yml to the project to run everything with docker-compose. Here is my docker-compose.yml file:
version: '3.8'
services:
  axonserver:
    image: axoniq/axonserver
    hostname: axonserver
    container_name: axonserver
    volumes:
      - type: bind
        source: ./data
        target: /data
      - type: bind
        source: ./events
        target: /eventdata
      - type: bind
        source: ./config
        target: /config
        read_only: true
    ports:
      - '8024:8024'
      - '8124:8124'
      - '8224:8224'
    networks:
      - axon-demo
  order-service:
    container_name: "order-service"
    build:
      context: ./order-service
    ports:
      - "8080:8080"
    depends_on:
      - axonserver
    networks:
      - axon-demo
  payment-service:
    container_name: "payment-service"
    build:
      context: ./payment-service
    ports:
      - "8081:8081"
    depends_on:
      - axonserver
    networks:
      - axon-demo
  shipping-service:
    container_name: "shipping-service"
    build:
      context: ./shipping-service
    ports:
      - "8082:8082"
    depends_on:
      - axonserver
    networks:
      - axon-demo
networks:
  axon-demo:
    driver: bridge
I also added the Axon Server address to the application.properties of all microservices as below:
axon.axonserver.servers=axonserver:8124
After running the docker-compose up --build command, the microservices are unable to connect to the Axon Server and I get this error:
order-service | 2021-07-10 15:01:01.199 WARN 1 --- [rverConnector-0] o.a.a.c.AxonServerConnectionManager : Connecting to AxonServer node localhost:8124 failed: UNAVAILABLE: io exception
My question is: why are the microservices looking for Axon Server at localhost:8124? That is obviously wrong and goes against their configuration in application.properties:
axon.axonserver.servers=axonserver:8124
Here axonserver is the container name of the Axon Server.
As discussed in the comments, the problem is in your Dockerfile, where you are overriding spring.config.location. By removing that from the Dockerfile, the problem should be fixed.
Your properties file contains the following:
axon.axonserver.servers=${AXONSERVER_HOST:axonserver:8124}
Please try it without the ":8124":
axon.axonserver.servers=${AXONSERVER_HOST:axonserver}
I think Spring Boot is confused by the double ":". Port 8124 is the default for Axon Server connections, so you can leave it out without problems.
Two microservices are deployed on AWS inside containers. I have a scenario where my microservice-A has to communicate with microservice-B. But when I tried http://localhost:8082/url, it didn't work. Unfortunately, I had to use the public URLs of my microservices, and because of the public URLs, performance is slow.
Can anyone please help me, so that the microservices are able to communicate locally inside the Docker containers?
All you need is a Docker network for this. I have achieved it using docker-compose. In the following example I define a network back-tier, and both services belong to it. After this, your application can access your DB by its service name, e.g. http://database:27017 (a minimal client sketch follows the compose file below).
version: '3'
networks:
  back-tier:
services:
  database:
    build: ./Database
    networks:
      - back-tier
    ports:
      - "27017:27017"
  backend:
    build: ./Backend
    networks:
      - back-tier
    ports:
      - "8080:8080"
    depends_on:
      - database
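To illustrate what "access your DB by its service name" looks like from inside the backend container, here is a minimal sketch assuming the database on port 27017 is MongoDB and using the MongoDB sync Java driver; the database name mydb is a placeholder:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;

public class DatabaseClient {
    public static void main(String[] args) {
        // "database" is the compose service name; inside the back-tier network
        // Docker's DNS resolves it to the database container's IP address.
        try (MongoClient client = MongoClients.create("mongodb://database:27017")) {
            MongoDatabase db = client.getDatabase("mydb");
            for (String name : db.listCollectionNames()) {
                System.out.println("collection: " + name);
            }
        }
    }
}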
I am attempting to host a Spring Cloud application in Docker containers. The underlying exception is as follows:
search_1 | Caused by: java.lang.IllegalStateException: Invalid URL: config:8888
I understand the reason is the URL specified for my config server:
spring.application.name=inventory-client
#spring.cloud.config.uri=http://localhost:8888
spring.cloud.config.uri=config:8888
On my development machine, I am able to use localhost. However, based on a past question (relating to connecting to my database), I learned that localhost is not appropriate in containers. For my database, I was able to use the following:
spring.jpa.hibernate.ddl-auto=none
spring.jpa.show-sql=false
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.datasource.url=jdbc:postgresql://db:5432/leisurely_diversion
#spring.datasource.url=jdbc:postgresql://localhost:5000/leisurely_diversion
spring.datasource.driver-class-name=org.postgresql.Driver
but this obviously did not work as expected for the configuration server.
My docker-compose file:
# Use postgres/example user/password credentials
version: '3.2'
services:
  db:
    image: postgres
    ports:
      - 5000:5432
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - type: volume
        source: psql_data
        target: /var/lib/postgresql/data
    networks:
      - app
    restart: always
  config:
    image: kellymarchewa/config_server
    networks:
      - app
    volumes:
      - /root/.ssh:/root/.ssh
    restart: always
  search:
    image: kellymarchewa/search_api
    networks:
      - app
    restart: always
    ports:
      - 8082:8082
    depends_on:
      - db
      - config
      - inventory
  inventory:
    image: kellymarchewa/inventory_api
    depends_on:
      - db
      - config
    ports:
      - 8081:8081
    networks:
      - app
    restart: always
volumes:
  psql_data:
networks:
  app:
Both services are running on the same user-defined network; how do I allow the services to find the configuration service?
Thanks.