Spring Kafka receive and forward another broker - spring-boot

I'm using spring-kafka to receive and send messages.
What I want to do is read messages from Kafka broker X, enrich them, and send them to another Kafka broker Y.
Here are my beans and configuration.
@Bean
public ProducerFactory<String, AuditLog> forwarderKafkaProducerFactory() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9093");
    configs.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configs.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return new DefaultKafkaProducerFactory<>(configs);
}

@Bean
public KafkaTemplate<String, AuditLog> forwarderKafkaClient() {
    return new KafkaTemplate<>(forwarderKafkaProducerFactory());
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, AuditLog>> kafkaListenerContainerFactoryV2() {
    ConcurrentKafkaListenerContainerFactory<String, AuditLog> factory = new ConcurrentKafkaListenerContainerFactory<>();
    Map<String, Object> configs = new HashMap<>();
    configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
    configs.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    configs.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    configs.put(ConsumerConfig.GROUP_ID_CONFIG, "receiver-sender");
    configs.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    ConsumerFactory<String, AuditLog> consumerFactory = new DefaultKafkaConsumerFactory<>(
            configs, new StringDeserializer(), new JsonDeserializer<>(AuditLog.class));
    factory.setConsumerFactory(consumerFactory);
    return factory;
}
As you can see, one broker runs on 9092 and the other on 9093.
The receive/forward logic looks like this:
@KafkaListener(topics = "audit_log", containerFactory = "kafkaListenerContainerFactoryV2")
public void listen(@Payload AuditLog payload) {
    log.info("Audit log [{}]", payload);
    if (!payload.isForwarded()) {
        String key = "some-value";
        String system = String.join("/", key, payload.getSystem());
        payload.setSystem(system);
        payload.setForwarded(true);
        template.send("audit_log", key, payload);
    }
}
The template used in the snippet above is configured correctly; I can confirm that by inspecting ((DefaultKafkaProducerFactory) template.getProducerFactory()).getConfigurationProperties().
With this configuration I can receive messages from 9092, but I can't send to 9093: the template always sends to 9092.
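For reference, a minimal sketch of that check; the @Qualifier and @PostConstruct wiring is an assumption about how the template gets injected (the bean name forwarderKafkaClient comes from the configuration above):

@Autowired
@Qualifier("forwarderKafkaClient")
private KafkaTemplate<String, AuditLog> template;

@PostConstruct
void logForwarderBootstrapServers() {
    // should print 127.0.0.1:9093 if the forwarder factory is the one actually wired in
    Object servers = ((DefaultKafkaProducerFactory<String, AuditLog>) template.getProducerFactory())
            .getConfigurationProperties()
            .get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG);
    log.info("Forwarder bootstrap servers: {}", servers);
}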
Thanks.

The problem was that my Kafka setup was incorrect.
I use wurstmeister/kafka in my current environment.
My Docker Compose file looks like this:
zookeeper:
  image: wurstmeister/zookeeper:3.4.6
kafka:
  image: wurstmeister/kafka:2.12-2.5.0
  ports:
    - "9092:9092"
  environment:
    KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
    KAFKA_ADVERTISED_HOST_NAME: "127.0.0.1"
  depends_on:
    - zookeeper
Then I just added a second Kafka broker:
zookeeper2:
  image: wurstmeister/zookeeper:3.4.6
kafka2:
  image: wurstmeister/kafka:2.12-2.5.0
  ports:
    - "9093:9092"
  environment:
    KAFKA_ZOOKEEPER_CONNECT: "zookeeper2:2181"
    KAFKA_ADVERTISED_HOST_NAME: "127.0.0.1"
  depends_on:
    - zookeeper2
But the image maintainers warn about exactly this (I only noticed the warning while writing this answer):
modify the KAFKA_ADVERTISED_HOST_NAME in docker-compose.yml to match your docker host IP (Note: Do not use localhost or 127.0.0.1 as the host ip if you want to run multiple brokers.)
With the setup above both brokers advertise themselves as 127.0.0.1:9092 (the advertised port defaults to the internal 9092), so a client that bootstraps against 9093 is redirected to 9092 for producing, which is exactly why the template "always sent to 9092".
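For completeness, a sketch of how the second wurstmeister broker might have been adjusted instead, assuming (as the image's README describes) that KAFKA_-prefixed environment variables are mapped onto broker properties; I switched images rather than verifying this:

kafka2:
  image: wurstmeister/kafka:2.12-2.5.0
  ports:
    - "9093:9093"
  environment:
    KAFKA_ZOOKEEPER_CONNECT: "zookeeper2:2181"
    # listen on and advertise a distinct port so the two brokers no longer collide
    KAFKA_LISTENERS: "PLAINTEXT://:9093"
    KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://127.0.0.1:9093"
  depends_on:
    - zookeeper2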
Instead, I tried the Bitnami Zookeeper/Kafka images:
version: "2"
services:
zookeeper1:
image: docker.io/bitnami/zookeeper:3.7
ports:
- "2001:2181"
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka1:
image: docker.io/bitnami/kafka:3
ports:
- "9201:9201"
environment:
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper1:2181
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
- KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9201
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka1:9092,EXTERNAL://localhost:9201
- KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT
depends_on:
- zookeeper1
zookeeper2:
image: docker.io/bitnami/zookeeper:3.7
ports:
- "2002:2181"
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka2:
image: docker.io/bitnami/kafka:3
ports:
- "9202:9202"
environment:
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper2:2181
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
- KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9202
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka2:9092,EXTERNAL://localhost:9202
- KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT
depends_on:
- zookeeper2
The code in question runs correctly with this setup.
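One thing worth noting (my observation, not stated in the original answer): with this compose file the brokers are reachable from the Docker host through their EXTERNAL listeners, so, assuming the Spring Boot app keeps running on the host, the bootstrap servers in the beans above would point at those ports rather than 9092/9093, for example:

configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9201"); // broker 1, EXTERNAL listener
configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9202"); // broker 2, EXTERNAL listener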

Related

Cloud stream not able to track the status for down stream failures

I have written the following code to leverage the Spring Cloud Stream functional approach to consume events from RabbitMQ and publish them to Kafka. I am able to achieve the primary goal, with a caveat: if the Kafka broker goes down for any reason while the application is running, I get log messages saying the broker is down, but at the same time I want to stop consuming events from RabbitMQ, or route those messages to an exchange or DLQ topic, until the broker comes back up. I have seen many suggestions to set the producer property sync: true, but in my case that is not helping. A lot of people also mention @ServiceActivator(inputChannel = "error-topic") for handling failures on the target channel, but that method is never invoked either. In short, I don't want to lose messages received from RabbitMQ while Kafka is down for any reason.
application.yml
management:
  health:
    binders:
      enabled: true
    kafka:
      enabled: true
server:
  port: 8081
spring:
  rabbitmq:
    publisher-confirms: true
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      properties:
        max.block.ms: 100
    admin:
      fail-fast: true
  cloud:
    function:
      definition: handle
    stream:
      bindingRetryInterval: 30
      rabbit:
        bindings:
          handle-in-0:
            consumer:
              bindingRoutingKey: MyRoutingKey
              exchangeType: topic
              requeueRejected: true
              acknowledgeMode: AUTO
              # ackMode: MANUAL
              # acknowledge-mode: MANUAL
              # republishToDlq: false
      kafka:
        binder:
          considerDownWhenAnyPartitionHasNoLeader: true
          producer:
            properties:
              max.block.ms: 100
          brokers:
            - localhost
      bindings:
        handle-in-0:
          destination: test_queue
          binder: rabbit
          group: queue
        handle-out-0:
          destination: mytopic
          producer:
            sync: true
            errorChannelEnabled: true
          binder: kafka
      binders:
        error:
          destination: myerror
        rabbit:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                username: guest
                password: guest
                virtual-host: rahul_host
        kafka:
          type: kafka
json:
  cuttoff:
    size:
      limit: 1000
CloudStreamConfig.java
@Configuration
public class CloudStreamConfig {

    private static final Logger log = LoggerFactory.getLogger(CloudStreamConfig.class);

    @Autowired
    ChunkService chunkService;

    @Bean
    public Function<Message<RmaValues>, Collection<Message<RmaValues>>> handle() {
        return rmaValue -> {
            log.info("processor runs : message received with request id : {}", rmaValue.getPayload().getRequestId());
            ArrayList<Message<RmaValues>> msgList = new ArrayList<Message<RmaValues>>();
            try {
                List<RmaValues> dividedJson = chunkService.getDividedJson(rmaValue.getPayload());
                for (RmaValues rmaValues : dividedJson) {
                    msgList.add(MessageBuilder.withPayload(rmaValues).build());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
            Channel channel = rmaValue.getHeaders().get(AmqpHeaders.CHANNEL, Channel.class);
            Long deliveryTag = rmaValue.getHeaders().get(AmqpHeaders.DELIVERY_TAG, Long.class);
            // try {
            //     channel.basicAck(deliveryTag, false);
            // } catch (IOException e) {
            //     e.printStackTrace();
            // }
            return msgList;
        };
    }

    @ServiceActivator(inputChannel = "error-topic")
    public void errorHandler(ErrorMessage em) {
        log.info("---------------------------------------got error message over errorChannel: {}", em);
        if (null != em.getPayload() && em.getPayload() instanceof KafkaSendFailureException) {
            KafkaSendFailureException kafkaSendFailureException = (KafkaSendFailureException) em.getPayload();
            if (kafkaSendFailureException.getRecord() != null && kafkaSendFailureException.getRecord().value() != null
                    && kafkaSendFailureException.getRecord().value() instanceof byte[]) {
                log.warn("error channel message. Payload {}", new String((byte[]) (kafkaSendFailureException.getRecord().value())));
            }
        }
    }
}
KafkaProducerConfiguration.java
@Configuration
public class KafkaProducerConfiguration {

    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate(producerFactory());
    }
}
RmModelOutputIngestionApplication.java
@SpringBootApplication(scanBasePackages = "com.abb.rm")
public class RmModelOutputIngestionApplication {

    private static final Logger LOGGER = LogManager.getLogger(RmModelOutputIngestionApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(RmModelOutputIngestionApplication.class, args);
    }

    @Bean("objectMapper")
    public ObjectMapper objectMapper() {
        ObjectMapper mapper = new ObjectMapper();
        LOGGER.info("Returning object mapper...");
        return mapper;
    }
}
First, it seems like you are creating a lot of unnecessary code. Why do you have an ObjectMapper? Why a KafkaTemplate? Why a ProducerFactory? These are all already provided for you.
You really only need one function and possibly an error handler, depending on the error-handling strategy you select, which brings me to the error-handling topic. There are three primary ways of handling errors. Here is the link to the doc explaining them all and providing samples. Please read through that and modify your app accordingly, and if something doesn't work or is unclear feel free to follow up.
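For illustration, a minimal sketch of the pared-down functional style described above; the RmaValues and ChunkService types come from the question, everything else is an assumption about how the reduced app could look (imports omitted as elsewhere in this post):

@SpringBootApplication
public class RmModelOutputIngestionApplication {

    public static void main(String[] args) {
        SpringApplication.run(RmModelOutputIngestionApplication.class, args);
    }

    // The only application code the binders need: consume from handle-in-0 (RabbitMQ)
    // and publish the split messages to handle-out-0 (Kafka).
    @Bean
    public Function<Message<RmaValues>, Collection<Message<RmaValues>>> handle(ChunkService chunkService) {
        return incoming -> chunkService.getDividedJson(incoming.getPayload()).stream()
                .map(value -> MessageBuilder.withPayload(value).build())
                .collect(Collectors.toList());
    }
}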

Spring Boot app cannot connect to Redis Replica in Docker

I am experiencing a weird issue with Redis connectivity in Docker.
I have a simple Spring Boot application with a master-replica configuration, as well as a docker-compose config which I use to start a Redis master and a Redis replica.
If I start Redis via docker-compose and the Spring Boot app as a plain Java process outside of Docker, everything works fine: it can successfully connect to both the master and the replica via localhost.
If I launch the Spring Boot app as a Docker container alongside the Redis containers, it can successfully connect to the master but not the replica. That means I can write to and read from the master node, but when I try to read from the replica I get the following error:
redis-sample-app_1 | Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: redis-replica-a/172.31.0.2:7001
redis-sample-app_1 | Caused by: java.net.ConnectException: Connection refused
redis-sample-app_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_302]
In the redis.conf I've changed the following to bind it to all network interfaces:
bind * -::*
protected-mode no
docker-compose.yml
version: "3"
services:
redis-master:
image: redis:alpine
command: redis-server --include /usr/local/etc/redis/redis.conf
volumes:
- ./conf/redis-master.conf:/usr/local/etc/redis/redis.conf
ports:
- "6379:6379"
redis-replica-a:
image: redis:alpine
command: redis-server --include /usr/local/etc/redis/redis.conf
volumes:
- ./conf/redis-replica.conf:/usr/local/etc/redis/redis.conf
ports:
- "7001:6379"
redis-sample-app:
image: docker.io/library/redis-sample:0.0.1-SNAPSHOT
environment:
- SPRING_REDIS_HOST=redis-master
- SPRING_REDIS_PORT=6379
- SPRING_REDIS_REPLICAS=redis-replica-a:7001
ports:
- "9080:8080"
depends_on:
- redis-master
- redis-replica-a
application.yml
spring:
  redis:
    port: 6379
    host: localhost
    replicas: localhost:7001
RedisConfig.java
@Configuration
class RedisConfig {

    private static final Logger LOG = LoggerFactory.getLogger(RedisConfig.class);

    @Value("${spring.redis.replicas:}")
    private String replicasProperty;

    private final RedisProperties redisProperties;

    public RedisConfig(RedisProperties redisProperties) {
        this.redisProperties = redisProperties;
    }

    @Bean
    public StringRedisTemplate masterReplicaRedisTemplate(LettuceConnectionFactory connectionFactory) {
        return new StringRedisTemplate(connectionFactory);
    }

    @Bean
    public LettuceConnectionFactory masterReplicaLettuceConnectionFactory(LettuceClientConfiguration lettuceConfig) {
        LOG.info("Master: {}:{}", redisProperties.getHost(), redisProperties.getPort());
        LOG.info("Replica property: {}", replicasProperty);
        RedisStaticMasterReplicaConfiguration configuration =
                new RedisStaticMasterReplicaConfiguration(redisProperties.getHost(), redisProperties.getPort());
        if (StringUtils.hasText(replicasProperty)) {
            List<RedisURI> replicas = Arrays.stream(this.replicasProperty.split(","))
                    .map(this::toRedisURI)
                    .collect(Collectors.toList());
            LOG.info("Replica nodes: {}", replicas);
            replicas.forEach(replica -> configuration.addNode(replica.getHost(), replica.getPort()));
        }
        return new LettuceConnectionFactory(configuration, lettuceConfig);
    }

    @Scope("prototype")
    @Bean(destroyMethod = "shutdown")
    ClientResources clientResources() {
        return DefaultClientResources.create();
    }

    @Scope("prototype")
    @Bean
    LettuceClientConfiguration lettuceConfig(ClientResources dcr) {
        ClientOptions options = ClientOptions.builder()
                .timeoutOptions(TimeoutOptions.builder().fixedTimeout(Duration.of(5, ChronoUnit.SECONDS)).build())
                .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
                .autoReconnect(true)
                .build();
        return LettuceClientConfiguration.builder()
                .readFrom(ReadFrom.REPLICA_PREFERRED)
                .clientOptions(options)
                .clientResources(dcr)
                .build();
    }

    @Bean
    StringRedisTemplate redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        return new StringRedisTemplate(redisConnectionFactory);
    }

    private RedisURI toRedisURI(String url) {
        String[] split = url.split(":");
        String host = split[0];
        int port;
        if (split.length > 1) {
            port = Integer.parseInt(split[1]);
        } else {
            port = 6379;
        }
        return RedisURI.create(host, port);
    }
}
Please advise how to continue with troubleshooting.
When running everything (the Redis master, the replica and the Spring app) inside the Docker network, you should be using port 6379 instead of 7001.
Port 7001 is for connecting to the replica from outside the container; here you are connecting from container to container, so the container port applies.
So change your environment variable to:
SPRING_REDIS_REPLICAS=redis-replica-a:6379
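In other words, a sketch of the corrected service definition from the compose file above:

redis-sample-app:
  image: docker.io/library/redis-sample:0.0.1-SNAPSHOT
  environment:
    - SPRING_REDIS_HOST=redis-master
    - SPRING_REDIS_PORT=6379
    - SPRING_REDIS_REPLICAS=redis-replica-a:6379   # container port, not the published 7001
  ports:
    - "9080:8080"
  depends_on:
    - redis-master
    - redis-replica-a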

Spring boot is not able to read docker-compose environmental variables

I am having an annoying problem here. Let me explain...
I am trying to read some environment variables from my docker-compose in my application.properties.
Although I am creating the environment variables properly in the containers (I can see them when I inspect the containers), they are not being read by the Spring Boot application.
It is as if these docker-compose environment variables are not accessible.
As a test, I am trying to read an environment variable in my Spring application, and it results in null:
@Slf4j
@SpringBootApplication(exclude = R2dbcAutoConfiguration.class)
public class AppDriver {

    public static void main(String[] args) {
        System.out.println("container variables >>>>> " + System.getenv("MASTER_URL"));
        SpringApplication.run(AppDriver.class, args);
    }
}
The log is:
container variables >>>>> null
Below is my docker-compose (I do not have any Spring Boot service in the docker-compose for now):
version: "3.5"
services:
tenant1:
image: postgres
ports:
- "5432:5432"
restart: always
environment:
POSTGRES_PASSWORD: password
POSTGRES_DB: tenant1
POSTGRES_USER: user
volumes:
- ./data/tenant1:/var/lib/postgresql
mysql:
image: mysql:5.7
ports:
- "3306:3306"
environment:
MASTER_URL: r2dbc:mysql://user:password#localhost/master
MYSQL_ROOT_PASSWORD: mysecret
MYSQL_USER: user
MYSQL_PASSWORD: password
MYSQL_DATABASE: master
My Application.properties:
logging.level.com.example=DEBUG
logging.level.io.r2dbc=DEBUG
master.url = ${MASTER_URL}
And finally, my Java File:
@Configuration
//@ConfigurationProperties(prefix = "master")
@EnableR2dbcRepositories(entityOperationsRef = "masterEntityTemplate")
public class MasterConfig {

    @Value("${master.url}")
    private String url;

    @Bean
    @Qualifier(value = "masterConnectionFactory")
    public ConnectionFactory masterConnectionFactory() {
        System.out.println(">>>>>>>>>>>" + url);
        return ConnectionFactories.get(url);
        //.get("r2dbc:mysql://user:password@localhost/master");
    }
}
This is the final error I am getting:
Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'MASTER_URL' in value "${MASTER_URL}"
Does anyone know how to solve this problem?
Thanks a lot.
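No answer is shown here, but note that MASTER_URL above is defined only on the mysql container, so it is not visible to a Spring Boot process started outside that container; the variable has to be present in the environment of whatever actually runs the JVM. As a purely illustrative stopgap, Spring's placeholder default syntax would at least let the context start while that wiring is sorted out:

# hypothetical illustration: fall back to a default when MASTER_URL is absent from the JVM's environment
master.url=${MASTER_URL:r2dbc:mysql://user:password@localhost/master}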

Connecting to multiple messaging systems in spring cloud stream application using configuration class

I have a requirement to read all the spring.cloud.stream.kafka.binder properties from a cloud configuration service. I was able to move the bindings to a separate configuration class by defining a BindingServiceProperties bean, and I was also able to move one Kafka system's properties to a configuration class by defining a KafkaBinderConfigurationProperties bean. However, if both Kafka servers' properties are moved into binder configuration, the application is unable to identify the binders and their respective bindings. I need help defining multiple binders in a configuration class and attaching each binder to its respective bindings.
application.yml
spring:
  cloud:
    stream:
      defaultBinder: kafka1
      binders:
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder:
              brokers: localhost:9095
              autoCreateTopics: true
              configuration:
                auto.offset.reset: latest
        kafka2:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder:
              brokers: localhost:9096
              autoCreateTopics: true
              configuration:
                auto.offset.reset: latest
Using this application.yml configuration I'm able to attach each binder to its respective bindings.
BinderConfiguration
@Primary
@Bean
public BindingServiceProperties bindingServiceProperties() {
    BindingServiceProperties bindingServiceProperties = new BindingServiceProperties();
    BindingProperties input1BindingProps = getInput1BindingProperties();
    BindingProperties input2BindingProps = getInput2BindingProperties();
    Map<String, BindingProperties> bindingProperties = new HashMap<>();
    bindingProperties.put(StreamBindings.INPUT_1, input1BindingProps);
    bindingProperties.put(StreamBindings.INPUT_2, input2BindingProps);
    bindingServiceProperties.setBindings(bindingProperties);
    return bindingServiceProperties;
}

private BindingProperties getInput1BindingProperties() {
    ConsumerProperties consumerProperties = new ConsumerProperties();
    consumerProperties.setMaxAttempts(1);
    consumerProperties.setDefaultRetryable(false);
    BindingProperties props = new BindingProperties();
    props.setDestination("test1");
    props.setContentType(MediaType.APPLICATION_JSON_VALUE);
    props.setGroup("test-group");
    props.setConsumer(consumerProperties);
    props.setBinder("kafka1");
    return props;
}

private BindingProperties getInput2BindingProperties() {
    ConsumerProperties consumerProperties = new ConsumerProperties();
    consumerProperties.setMaxAttempts(1);
    consumerProperties.setDefaultRetryable(false);
    BindingProperties props = new BindingProperties();
    props.setDestination("test2");
    props.setContentType(MediaType.APPLICATION_JSON_VALUE);
    props.setGroup("test-group");
    props.setConsumer(consumerProperties);
    props.setBinder("kafka2");
    return props;
}
KafkaBinderConfigurationProperties
@Bean
public KafkaBinderConfigurationProperties getKafka1BinderProps() {
    KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
            new KafkaBinderConfigurationProperties(new KafkaProperties());
    String brokers = "localhost:9095";
    kafkaBinderConfigurationProperties.setBrokers(brokers);
    kafkaBinderConfigurationProperties.setDefaultBrokerPort("9095");
    Map<String, String> configuration = new HashMap<>();
    configuration.put("auto.offset.reset", "latest");
    kafkaBinderConfigurationProperties.setConfiguration(configuration);
    return kafkaBinderConfigurationProperties;
}

@Bean
public KafkaBinderConfigurationProperties getKafka2BinderProps() {
    KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
            new KafkaBinderConfigurationProperties(new KafkaProperties());
    String brokers = "localhost:9096";
    kafkaBinderConfigurationProperties.setBrokers(brokers);
    kafkaBinderConfigurationProperties.setDefaultBrokerPort("9096");
    Map<String, String> configuration = new HashMap<>();
    configuration.put("auto.offset.reset", "latest");
    kafkaBinderConfigurationProperties.setConfiguration(configuration);
    return kafkaBinderConfigurationProperties;
}
The application works fine with the properties defined in application.yml, i.e. without moving the binder configuration into a separate configuration class.
Complete code can be found in this repository.
Any help is appreciated.

Netflix Ribbon throws No instances available for MY-MICROSERVICE exception

My application uses Eureka and Ribbon. I'm trying to get two microservices to talk to each other. Below is my method of concern.
@Autowired
@LoadBalanced
private RestTemplate client;

@Autowired
private DiscoveryClient dClient;

public String getServices() {
    List<String> services = dClient.getServices();
    List<ServiceInstance> serviceInstances = new ArrayList<>();
    List<String> serviceHosts = new ArrayList<>();
    for (String service : services) {
        serviceInstances.addAll(dClient.getInstances(service));
    }
    for (ServiceInstance service : serviceInstances) {
        serviceHosts.add(service.getHost());
    }
    // throws No instances available exception here
    try {
        System.out.println(this.client.getForObject("http://MY-MICROSERVICE/rest/hello", String.class, new HashMap<String, String>()));
    } catch (Exception e) {
        e.printStackTrace();
    }
    return serviceHosts.toString();
}
The method returns an array of two hostnames (IPs), so DiscoveryClient is able to see the instances of the two services registered with Eureka. But RestTemplate, or more precisely Ribbon, throws an IllegalStateException: No instances available exception.
DynamicServerListLoadBalancer for client MY-MICROSERVICE initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=MY-MICROSERVICE,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:org.springframework.cloud.netflix.ribbon.eureka.DomainExtractingServerList#23edc38f
java.lang.IllegalStateException: No instances available for MY-MICROSERVICE
at org.springframework.cloud.netflix.ribbon.RibbonLoadBalancerClient.execute(RibbonLoadBalancerClient.java:119)
at org.springframework.cloud.netflix.ribbon.RibbonLoadBalancerClient.execute(RibbonLoadBalancerClient.java:99)
at org.springframework.cloud.client.loadbalancer.LoadBalancerInterceptor.intercept(LoadBalancerInterceptor.java:58)
Even the Eureka dashboard shows two services registered. I feel the problem is specifically with Ribbon. Here's my config file.
spring.application.name="my-microservice"
logging.level.org.springframework.boot.autoconfigure.logging=INFO
spring.devtools.restart.enabled=true
spring.devtools.add-properties=true
server.ribbon.eureka.enabled=true
eureka.client.serviceUrl.defaultZone = http://localhost:8761/eureka/
The other microservice also has the same configs except for a different name. What's the problem here?
Solved. I was using application.yml with the Eureka server and application.properties with the client. Once I converted everything to YAML, it all works fine.
spring:
  application:
    name: "my-microservice"
  devtools:
    restart:
      enabled: true
    add-properties: true
logging:
  level:
    org.springframework.boot.autoconfigure.logging: INFO
eureka:
  client:
    serviceUrl:
      defaultZone: "http://localhost:8761/eureka/"
This is the yml file for both apps which only differ by the application name.
