Spring Boot Eureka with Docker gets connection refused

I have a multi-service platform working locally. It uses Spring Boot 2.7, Spring Cloud Netflix Eureka, and Feign for inter-service calls. Now that everything starts up via Docker Compose, the services fail when trying to call each other, even though I see all services registered on the Eureka dashboard:
When I receive a call in the API-GATEWAY, it calls the IDENTITY-SERVICE.
The caller looks like this:
The main class has:
@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients("com.xxx.apigateway.service.feign")
public class ApiGatewayApplication {
...
}
The Feign client that calls the other service is:
@FeignClient(name = "identity-service")
public interface IdentityServiceClient {

    @RequestMapping(method = RequestMethod.GET, value = "/api/{scannedId}")
    ApiKey getApiKey(@PathVariable String scannedId);
}
And the application.properties file (basically the same for all services) is:
server.port=8081
eureka.client.serviceUrl.defaultZone=http://discovery-service:8010/eureka
eureka.client.register-with-eureka=true
eureka.instance.instance-id=${spring.application.name}:${spring.application.instance_id:${random.value}}
eureka.instance.hostname=${HOST_NAME:localhost}
spring.config.import=optional:configserver:http://config-service:8888
spring.cloud.config.uri=http://config-service:8888
spring.cloud.config.name=config-service
spring.cloud.config.discovery.enabled = false
spring.main.web-application-type=reactive
Here is part of the docker-compose file:
discovery-service:
  container_name: discovery-service
  image: discovery-service
  pull_policy: never
  environment:
    - SPRING_PROFILES_ACTIVE=docker
  expose:
    - "8010"
  ports:
    - "8010:8010"
  restart: unless-stopped
  networks:
    - mynet
api-gateway:
  container_name: api-gateway
  image: api-gateway
  pull_policy: never
  environment:
    - SPRING_PROFILES_ACTIVE=docker
  expose:
    - "8081"
  ports:
    - "8081:8081"
  restart: unless-stopped
  depends_on:
    - config-service
    - discovery-service
identity-service:
  container_name: identity-service
  image: identity-service
  pull_policy: never
  environment:
    - SPRING_PROFILES_ACTIVE=docker
  restart: unless-stopped
  depends_on:
    - config-service
    - discovery-service
    - api-gateway
And the error thrown by the API-GATEWAY when trying to make the call to the IDENTITY-SERVICE is:
api-gateway | 2022-09-28 15:10:00.889 DEBUG 1 --- [nio-8081-exec-1] u.f.m.a.s.feign.IdentityServiceClient : [IdentityServiceClient#getAllApiKeys] ---> GET http://identity-service/api/all HTTP/1.1
api-gateway | 2022-09-28 15:10:00.889 DEBUG 1 --- [nio-8081-exec-1] u.f.m.a.s.feign.IdentityServiceClient : [IdentityServiceClient#getAllApiKeys] ---> END HTTP (0-byte body)
api-gateway | 2022-09-28 15:10:00.939 WARN 1 --- [nio-8081-exec-1] o.s.c.l.core.RoundRobinLoadBalancer : No servers available for service: identity-service
api-gateway | 2022-09-28 15:10:00.942 DEBUG 1 --- [nio-8081-exec-1] u.f.m.a.s.feign.IdentityServiceClient : [IdentityServiceClient#getAllApiKeys] <--- HTTP/1.1 503 (51ms)
api-gateway | 2022-09-28 15:10:00.942 DEBUG 1 --- [nio-8081-exec-1] u.f.m.a.s.feign.IdentityServiceClient : [IdentityServiceClient#getAllApiKeys]
api-gateway | 2022-09-28 15:10:00.942 DEBUG 1 --- [nio-8081-exec-1] u.f.m.a.s.feign.IdentityServiceClient : [IdentityServiceClient#getAllApiKeys] Load balancer does not contain an instance for the service identity-service
api-gateway | 2022-09-28 15:10:00.942 DEBUG 1 --- [nio-8081-exec-1] u.f.m.a.s.feign.IdentityServiceClient : [IdentityServiceClient#getAllApiKeys] <--- END HTTP (75-byte body)
api-gateway | 2022-09-28 15:10:00.942 ERROR 1 --- [nio-8081-exec-1] u.f.m.a.service.feign.FeignErrorDecoder : decode exception method: IdentityServiceClient#getAllApiKeys()
api-gateway | 2022-09-28 15:10:00.942 ERROR 1 --- [nio-8081-exec-1] u.f.m.a.service.feign.FeignErrorDecoder : Feign error: HTTP/1.1 503
api-gateway |
api-gateway | feign.Response$ByteArrayBody#7d9a6870
api-gateway | 2022-09-28 15:10:00.942 ERROR 1 --- [nio-8081-exec-1] u.f.m.a.service.feign.FeignErrorDecoder : Throwing exception: null
...
api-gateway | feign.RetryableException: Connection refused executing GET http://identity-service/api/all
api-gateway | at feign.FeignException.errorExecuting(FeignException.java:268) ~[feign-core-11.8.jar:na]
api-gateway | Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
api-gateway | Error has been observed at the following site(s):
api-gateway | *__checkpoint ⇢ org.springframework.web.filter.reactive.ServerWebExchangeContextFilter [DefaultWebFilterChain]
api-gateway | *__checkpoint ⇢ org.springframework.cloud.gateway.filter.WeightCalculatorWebFilter [DefaultWebFilterChain]
api-gateway | *__checkpoint ⇢ HTTP GET "/providers/b691eda6-f021-4534-8b4d-aaa5a27584d" [ExceptionHandlingWebHandler]
api-gateway | Original Stack Trace:
api-gateway | at feign.FeignException.errorExecuting(FeignException.java:268) ~[feign-core-11.8.jar:na]
...
api-gateway | Caused by: java.net.ConnectException: Connection refused
Since this works outside of Docker Compose, I assume that is the culprit, but I don't know for sure, nor how to fix it.

I faced the same problem before.
Adding this:
eureka.instance.prefer-ip-address=true
and removing this:
# eureka.instance.hostname=${HOST_NAME:localhost}
worked for me.
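For completeness, here is a sketch of what the Docker-profile client properties could look like with that change applied. This assumes the compose service name discovery-service from the question; treat it as a starting point, not a drop-in file:

```properties
# application-docker.properties (sketch)
# point clients at the discovery container by its compose service name
eureka.client.serviceUrl.defaultZone=http://discovery-service:8010/eureka
# register the container's IP so peers don't try to resolve a host-only name
eureka.instance.prefer-ip-address=true
eureka.client.register-with-eureka=true
```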

Related

Spring Boot docker-compose Redis: Cannot get Jedis connection

This may seem like a duplicate of docker-compose with springboot and redis and docker-compose redis connection issue, but the solutions proposed there do not work in my case.
I am using Spring Boot, and my services are unable to connect to the redis service.
Below is my docker compose file:
version: "3.7"
services:
  ...
  users:
    build: ./users
    ports:
      - "8081:8081"
    networks:
      - jiji-microservices-network
    depends_on:
      - registry
      - gateway
      - redis_cache
      - postgresDB
      - rabbitmq
    links:
      - redis_cache
      - postgresDB
      - rabbitmq
    environment:
      # SPRING_CACHE_TYPE: redis
      # SPRING_REDIS_HOST: redis_cache
      # SPRING_REDIS_PORT: 6379
      # SPRING_REDIS_PASSWORD:
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgresDB:5432/jiji_users
      SPRING_DATASOURCE_USERNAME: postgres
      SPRING_DATASOURCE_PASSWORD: postgres
      SPRING_JPA_HIBERNATE_DDL_AUTO: update
      EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://registry:8090/eureka
  postgresDB:
    image: postgres
    restart: unless-stopped
    ports:
      - "5432:5432"
    networks:
      - jiji-microservices-network
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: jiji_users
  redis_cache:
    image: redis:latest
    restart: on-failure
    command: ["redis-server","--bind","redis_cache","--port","6379"]
    ports:
      - "6379:6379"
    networks:
      - jiji-microservices-network
  rabbitmq:
    image: rabbitmq:management
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - jiji-microservices-network
networks:
  jiji-microservices-network:
    driver: bridge
Here below is my application.yml file:
...
  cache:
    type: redis
  redis:
    host: redis_cache
    port: 6379
    # cache-null-values: true
    time-to-live: 2592000 # 30 days
...
The error message I am getting:
users_1 | 2022-03-28 02:53:40.417 INFO 1 --- [nio-8081-exec-4] c.o.u.s.CustomAuthorizationFilter : /api/v1/users/getCode
users_1 | Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
docker ps shows that all the containers are up and running and that redis is listening on port 6379.
PS: It works fine without docker.
The issue was related to my RedisConfiguration. Fixed by adding the redis host and port to the JedisConnectionFactory bean.
@Configuration
@Slf4j
public class RedisConfiguration {

    @Value("${spring.redis.host}")
    private String REDIS_HOST;

    @Value("${spring.redis.port}")
    private Integer REDIS_PORT;

    @Bean
    public JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration config = new RedisStandaloneConfiguration(REDIS_HOST, REDIS_PORT);
        return new JedisConnectionFactory(config);
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate() {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setEnableTransactionSupport(true);
        template.setConnectionFactory(jedisConnectionFactory());
        return template;
    }
}
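A configuration-only alternative worth noting (an assumption on my part, not tested against this exact setup): the compose file above comments out the Redis environment variables, so the app falls back to its packaged defaults. Uncommenting them lets Spring Boot's relaxed binding map them onto spring.redis.*, which should make the stock auto-configured connection factory work without a custom bean:

```yaml
# hypothetical excerpt of the users service in docker-compose.yml
environment:
  SPRING_REDIS_HOST: redis_cache   # compose service name, resolvable on the shared network
  SPRING_REDIS_PORT: "6379"
```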

Kafka SASL_PLAINTEXT / SCRAM authentication fails in Spring Boot consumer

I am trying Kafka authentication with SASL_PLAINTEXT / SCRAM, but authentication fails in Spring Boot.
When I switch to SASL_PLAINTEXT / PLAIN it works, but with SCRAM authentication fails for both SHA-512 and SHA-256.
I have tried many different things, but nothing works.
How can I fix it?
broker log
broker1 | [2020-12-31 02:57:37,831] INFO [SocketServer brokerId=1] Failed authentication with /172.29.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512) (org.apache.kafka.common.network.Selector)
broker2 | [2020-12-31 02:57:37,891] INFO [SocketServer brokerId=2] Failed authentication with /172.29.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512) (org.apache.kafka.common.network.Selector)
Spring boot log
2020-12-31 11:57:37.438 INFO 82416 --- [ restartedMain] o.a.k.c.s.authenticator.AbstractLogin : Successfully logged in.
2020-12-31 11:57:37.497 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.6.0
2020-12-31 11:57:37.499 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 62abe01bee039651
2020-12-31 11:57:37.499 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1609383457495
2020-12-31 11:57:37.502 INFO 82416 --- [ restartedMain] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Subscribed to topic(s): test
2020-12-31 11:57:37.508 INFO 82416 --- [ restartedMain] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService
2020-12-31 11:57:37.528 INFO 82416 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-12-31 11:57:37.546 INFO 82416 --- [ restartedMain] i.m.k.p.KafkaProducerScramApplication : Started KafkaProducerScramApplication in 2.325 seconds (JVM running for 3.263)
2020-12-31 11:57:37.833 INFO 82416 --- [ntainer#0-0-C-1] o.apache.kafka.common.network.Selector : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Failed authentication with localhost/127.0.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512)
2020-12-31 11:57:37.836 ERROR 82416 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Connection to node -1 (localhost/127.0.0.1:9091) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512
2020-12-31 11:57:37.837 WARN 82416 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Bootstrap broker localhost:9091 (id: -1 rack: null) disconnected
2020-12-31 11:57:37.842 ERROR 82416 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Consumer exception
java.lang.IllegalStateException: This error handler cannot process 'org.apache.kafka.common.errors.SaslAuthenticationException's; no record information is available
at org.springframework.kafka.listener.SeekUtils.seekOrRecover(SeekUtils.java:151) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.SeekToCurrentErrorHandler.handle(SeekToCurrentErrorHandler.java:113) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.handleConsumerException(KafkaMessageListenerContainer.java:1425) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1122) ~[spring-kafka-2.6.4.jar:2.6.4]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512
my docker-compose.yml
...
...
zookeeper3:
  image: confluentinc/cp-zookeeper:6.0.1
  hostname: zookeeper3
  container_name: zookeeper3
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper3:2183
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://zookeeper:2183
    ZOOKEEPER_CLIENT_PORT: 2183
    ZOOKEEPER_TICK_TIME: 2000
    ZOOKEEPER_SERVER_ID: 3
    KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/secrets/sasl/zookeeper_jaas.conf \
      -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider \
      -Dzookeeper.authProvider.2=org.apache.zookeeper.server.auth.DigestAuthenticationProvider \
      -Dquorum.auth.enableSasl=true \
      -Dquorum.auth.learnerRequireSasl=true \
      -Dquorum.auth.serverRequireSasl=true \
      -Dquorum.auth.learner.saslLoginContext=QuorumLearner \
      -Dquorum.auth.server.saslLoginContext=QuorumServer \
      -Dquorum.cnxn.threads.size=20 \
      -DrequireClientAuthScheme=sasl"
  volumes:
    - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
broker1:
  image: confluentinc/cp-kafka:6.0.1
  hostname: broker1
  container_name: broker1
  depends_on:
    - zookeeper1
    - zookeeper2
    - zookeeper3
  ports:
    - "9091:9091"
    - "9101:9101"
    - "29091:29091"
  expose:
    - "29090"
  environment:
    KAFKA_OPTS: "-Dzookeeper.sasl.client=true -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf"
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper1:2181,zookeeper2:2182,zookeeper3:2183'
    KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,SASL_PLAINHOST:SASL_PLAINTEXT
    KAFKA_LISTENERS: INSIDE://:29090,OUTSIDE://:29091,SASL_PLAINHOST://:9091
    KAFKA_ADVERTISED_LISTENERS: INSIDE://broker1:29090,OUTSIDE://localhost:29091,SASL_PLAINHOST://localhost:9091
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_JMX_PORT: 9101
    KAFKA_JMX_HOSTNAME: localhost
    KAFKA_SECURITY_INTER_BROKER_PROTOCAL: SASL_PLAINTEXT
    KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAINTEXT
  volumes:
    - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
broker2:
  image: confluentinc/cp-kafka:6.0.1
  hostname: broker2
  container_name: broker2
  depends_on:
    - zookeeper1
    - zookeeper2
    - zookeeper3
  ports:
    - "9092:9092"
    - "9102:9102"
    - "29092:29092"
  expose:
    - "29090"
  environment:
    KAFKA_OPTS: "-Dzookeeper.sasl.client=true -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf"
    KAFKA_BROKER_ID: 2
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper1:2181,zookeeper2:2182,zookeeper3:2183'
    KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,SASL_PLAINHOST:SASL_PLAINTEXT
    KAFKA_LISTENERS: INSIDE://:29090,OUTSIDE://:29092,SASL_PLAINHOST://:9092
    KAFKA_ADVERTISED_LISTENERS: INSIDE://broker2:29090,OUTSIDE://localhost:29092,SASL_PLAINHOST://localhost:9092
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_JMX_PORT: 9102
    KAFKA_JMX_HOSTNAME: localhost
    KAFKA_SECURITY_INTER_BROKER_PROTOCAL: SASL_PLAINTEXT
    KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAINTEXT
  volumes:
    - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="password"
user_admin="password"
user_client="password";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="password";
};
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="client"
password="password";
};
zookeeper_jaas.conf
Server {
org.apache.kafka.common.security.plain.PlainLoginModule required
user_admin="password";
};
QuorumServer {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_admin="password";
};
QuorumLearner {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="admin"
password="password";
};
ConsumerConfig.java
private static final String BOOTSTRAP_ADDRESS = "localhost:9091,localhost:9092";
private static final String JAAS_TEMPLATE = "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"%s\" password=\"%s\";";

public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    String jaasCfg = String.format(JAAS_TEMPLATE, "client", "password");
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_ADDRESS);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "1000");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "Test-Consumer");
    props.put("sasl.jaas.config", jaasCfg);
    props.put("sasl.mechanism", "SCRAM-SHA-512");
    props.put("security.protocol", "SASL_PLAINTEXT");
    return props;
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
Solved.
The problem was that I hadn't added the user credentials in Zookeeper.
Adding this service fixed it:
zookeeper-add-kafka-users:
  image: confluentinc/cp-kafka:6.0.1
  container_name: "zookeeper-add-kafka-users"
  depends_on:
    - zookeeper1
    - zookeeper2
    - zookeeper3
  command: "bash -c 'echo Waiting for Zookeeper to be ready... && \
    cub zk-ready zookeeper1:2181 120 && \
    cub zk-ready zookeeper2:2182 120 && \
    cub zk-ready zookeeper3:2183 120 && \
    kafka-configs --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name admin && \
    kafka-configs --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name client '"
  environment:
    KAFKA_BROKER_ID: ignored
    KAFKA_ZOOKEEPER_CONNECT: ignored
    KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf
  volumes:
    - /home/mysend/dev/docker/kafka/sasl:/etc/kafka/secrets/sasl
If you are not using Docker, you can run the equivalent command directly:
bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin

spring cloud unable to route to internal application

I am using Spring Cloud Gateway to route to an application but am unable to do so. Below are the details.
Gateway configuration
port: 8080
debug: true
spring:
  application:
    name: cloud-gateway
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
          lower-case-senstive: true
      routes:
        - id: product-composite
          uri: localhost:7000
          predicates:
            - Path: /product-composite/**
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
server:
  port: 8761
  waitTimeInMsWhenSyncEmpty: 0
  response-cache-update-interval: 5000
management.endpoints.web.exposure.include: '*'
Product Composite Service (the one I am trying to call)
port: 7000
spring:
  application:
    name: product-composite
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
    initialInstanceInfoReplicationIntervalSeconds: 5
    registryFetchIntervalSeconds: 5
    register-with-eureka: true
    fetch-registery: true
  instance:
    leaseRenewalIntervalInSeconds: 5
    leaseExpirationDurationInSeconds: 5
Path
@RestController
public class ProductCompositeServiceController {

    @GetMapping
    public String getString() {
        return "Hello grom product CompositeController";
    }
}
Can you help? If needed, I can provide more details.
Note: if product-composite-service is called without Eureka I get a response, but with Eureka I get:
404 Resource not found
Edit
Below is the stacktrace I am getting so far:
2020-06-24 22:22:05.252 DEBUG 38876 --- [or-http-epoll-2] o.s.w.r.handler.SimpleUrlHandlerMapping : [63939523-1] Mapped to ResourceWebHandler ["classpath:/META-INF/resources/", "classpath:/resources/", "classpath:/static/", "classpath:/public/"]
2020-06-24 22:22:05.255 DEBUG 38876 --- [or-http-epoll-2] o.s.w.r.resource.ResourceWebHandler : [63939523-1] Resource not found
2020-06-24 22:22:05.273 DEBUG 38876 --- [or-http-epoll-2] a.w.r.e.AbstractErrorWebExceptionHandler : [63939523-1] Resolved [ResponseStatusException: 404 NOT_FOUND] for HTTP GET /product-composite/
2020-06-24 22:22:05.281 DEBUG 38876 --- [or-http-epoll-2] o.s.http.codec.json.Jackson2JsonEncoder : [63939523-1] Encoding [{timestamp=Wed Jun 24 22:22:05 IST 2020, path=/product-composite/, status=404, error=Not Found, mess (truncated)...]
2020-06-24 22:22:05.299 DEBUG 38876 --- [or-http-epoll-2] o.s.w.s.adapter.HttpWebHandlerAdapter : [63939523-1] Completed 404 NOT_FOUND
I also tried to add the path as:
routes:
  - id: product-composite
    uri: localhost:7000
    predicates:
      - Path: /product-composite/**
    filters:
      - RewritePath: /product-composite/(?<segment>.*), /$\{segment}
You don't have a path in your controller. Try:
@GetMapping("/product-composite")

Docker-compose, spring app + mongoDB on non-default port

I have a problem with connecting to mongodb from my app.
Here is the docker-compose file:
version: "3"
services:
  olx-crawler:
    container_name: olx-crawler
    image: myimage:v1
    ports:
      - "8099:8099"
    depends_on:
      - olx-mongo
    environment:
      SPRING_DATA_MONGODB_HOST: olx-mongo
  olx-mongo:
    container_name: olx-mongo
    image: mongo
    ports:
      - "27777:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: biafra
      MONGO_INITDB_ROOT_PASSWORD: password
And here is my application.yaml:
spring:
  data:
    mongodb:
      host: localhost
      port: 27777
      username: biafra
      password: password
      authentication-database: admin
logging:
  level:
    org.springframework.data.mongodb.core.MongoTemplate: DEBUG
server:
  port: 8099
Now, I have done a similar project before (docker-compose -> Spring app + MongoDB) and it worked correctly, but that was with the default mongo port 27017.
And I know you have to use the mongo container name instead of localhost; that is what
SPRING_DATA_MONGODB_HOST: olx-mongo
is for: it replaces "localhost" from application.yml with olx-mongo, as you can see in the app logs:
Exception in monitor thread while connecting to server olx-mongo:27777
Here are some logs:
olx-mongo | 2020-04-15T18:00:15.170+0000 I SHARDING [LogicalSessionCacheRefresh] Marking collection config.system.sessions as collection version: <unsharded>
olx-mongo | 2020-04-15T18:00:15.174+0000 I SHARDING [LogicalSessionCacheReap] Marking collection config.transactions as collection version: <unsharded>
olx-mongo | 2020-04-15T18:00:15.175+0000 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock
olx-mongo | 2020-04-15T18:00:15.175+0000 I NETWORK [listener] Listening on 0.0.0.0
olx-mongo | 2020-04-15T18:00:15.175+0000 I NETWORK [listener] waiting for connections on port 27017
olx-crawler | 2020-04-15 18:00:15.436 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data MongoDB repositories in DEFAULT mode.
olx-crawler | 2020-04-15 18:00:15.486 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 45ms. Found 1 MongoDB repository interfaces.
olx-mongo | 2020-04-15T18:00:16.000+0000 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
olx-crawler | 2020-04-15 18:00:16.037 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8099 (http)
olx-crawler | 2020-04-15 18:00:16.050 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
olx-crawler | 2020-04-15 18:00:16.052 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.33]
olx-crawler | 2020-04-15 18:00:16.116 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
olx-crawler | 2020-04-15 18:00:16.117 INFO 1 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 1487 ms
olx-crawler | 2020-04-15 18:00:16.468 INFO 1 --- [ main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[olx-mongo:27777], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize
=500}
olx-crawler | 2020-04-15 18:00:16.469 INFO 1 --- [ main] org.mongodb.driver.cluster : Adding discovered server olx-mongo:27777 to client view of cluster
olx-crawler | 2020-04-15 18:00:16.550 INFO 1 --- [olx-mongo:27777] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server olx-mongo:27777
olx-crawler |
olx-crawler | com.mongodb.MongoSocketOpenException: Exception opening socket
olx-crawler | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-3.11.2.jar!/:na]
olx-crawler | at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[mongodb-driver-core-3.11.2.jar!/:na]
olx-crawler | at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) ~[mongodb-driver-core-3.11.2.jar!/:na]
olx-crawler | at java.base/java.lang.Thread.run(Thread.java:844) [na:na]
olx-crawler | Caused by: java.net.ConnectException: Connection refused (Connection refused)
olx-crawler | at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:na]
olx-crawler | at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:400) ~[na:na]
olx-crawler | at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:243) ~[na:na]
olx-crawler | at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:225) ~[na:na]
olx-crawler | at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:402) ~[na:na]
olx-crawler | at java.base/java.net.Socket.connect(Socket.java:591) ~[na:na]
olx-crawler | at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64) ~[mongodb-driver-core-3.11.2.jar!/:na]
olx-crawler | at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongodb-driver-core-3.11.2.jar!/:na]
olx-crawler | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongodb-driver-core-3.11.2.jar!/:na]
olx-crawler | ... 3 common frames omitted
olx-crawler |
olx-crawler | 2020-04-15 18:00:17.096 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
olx-crawler | 2020-04-15 18:00:17.229 INFO 1 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService 'taskScheduler'
olx-crawler | 2020-04-15 18:00:17.306 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8099 (http) with context path ''
olx-crawler | 2020-04-15 18:00:17.315 INFO 1 --- [ main] c.e.olxcrawler.OlxCrawlerApplication : Started OlxCrawlerApplication in 3.944 seconds (JVM running for 4.977)
Any help?
Well, you wrote:
And i know you have to use mongo container name instead of localhost, but it still does not work.
but you have
spring:
  data:
    mongodb:
      host: localhost
      port: 27777
The problem is that with this config you cannot connect to mongo from within the Spring Boot container. The published port is only for the "outside world" of the container; for example, you can connect through it from a Spring Boot application running locally, outside Docker.
To connect to mongo from within the dockerized Spring Boot app, change the host to olx-mongo and the port to 27017.
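Putting that together, a Docker-profile application.yaml for the crawler might look roughly like this (a sketch: olx-mongo and 27017 come from the compose file above, everything else is copied from the original config):

```yaml
spring:
  data:
    mongodb:
      host: olx-mongo  # compose service name, resolved by Docker's internal DNS
      port: 27017      # container-internal port, not the published 27777
      username: biafra
      password: password
      authentication-database: admin
```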

How to use eureka.client.service-url: property of netflix eureka in spring cloud

I am trying to add a Eureka client to one of my microservices, but I am unable to figure out how to use the service-url property.
I am using the Greenwich.SR1 version of spring-cloud.
Below is my application.yml
spring:
  application:
    name: stock-service
server:
  port: 9901
eureka:
  instance:
    hostname: localhost
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url: http://${eureka.instance.hostname}:9902/eureka/
I searched, but everywhere I find only the old way, which is not supported in this version:
Old Way:
eureka: # tells about the Eureka server details and its refresh time
  instance:
    leaseRenewalIntervalInSeconds: 1
    leaseExpirationDurationInSeconds: 2
  client:
    serviceUrl:
      defaultZone: http://127.0.0.1:8761/eureka/
Could someone help here?
Finally, I found the configuration:
spring:
  application:
    name: stock-service
server:
  port: 9901
eureka:
  instance:
    hostname: localhost
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      default-zone: http://localhost:9902/eureka
I just tried this configuration in Spring Cloud Hoxton.SR4, and it doesn't work.
Then I found the correct way (at least for me):
spring:
  application:
    name: hello-world-server
server:
  port: 8010
eureka:
  client:
    service-url:
      defaultZone: http://localhost:9001/eureka/
You should see logs like these after starting the client application:
2020-05-02 16:39:21.914 INFO 27104 --- [ main] c.n.d.DiscoveryClient : Discovery Client initialized at timestamp 1588408761914 with initial instances count: 0
2020-05-02 16:39:21.915 INFO 27104 --- [ main] o.s.c.n.e.s.EurekaServiceRegistry : Registering application HELLO-WORLD-SERVER with eureka with status UP
2020-05-02 16:39:21.915 INFO 27104 --- [ main] c.n.d.DiscoveryClient : Saw local status change event StatusChangeEvent [timestamp=1588408761915, current=UP, previous=STARTING]
2020-05-02 16:39:21.916 INFO 27104 --- [nfoReplicator-0] c.n.d.DiscoveryClient : DiscoveryClient_HELLO-WORLD-SERVER/tumbleweed:hello-world-server:8010: registering service...
And the Server side:
It works!
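The underlying gotcha, as documented for Spring Cloud Netflix: eureka.client.service-url binds to a Map<String, String>, and Spring's relaxed binding does not apply to map keys, so the zone key must be spelled defaultZone in camelCase exactly:

```yaml
eureka:
  client:
    service-url:        # the property path itself tolerates relaxed binding
      defaultZone: http://localhost:9001/eureka/  # map key: camelCase required
      # default-zone: ...   # would be treated as a different, unknown zone key
```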
