Found duplicate key in application.yml - spring-boot

I have been working on a Kafka stream project. I am new to YAML files; until now I have been using application.properties. Basically I want to use Kafka properties as well as Springfox properties. Here is my
application.yml
spring:
  cloud:
    function:
      definition: consumer;producer
    stream:
      kafka:
        bindings:
          producer-out-0:
            producer:
              configuration:
                value.serializer: <packagename>
          consumer-in-0:
            consumer:
              configuration:
                value.deserializer: <packagename>
        binder:
          brokers: localhost:9092
      bindings:
        producer-out-0:
          destination: <topic name>
          producer:
            useNativeEncoding: true # Enables using the custom serializer
        consumer-in-0:
          destination: <topic name>
          consumer:
            use-native-decoding: true # Enables using the custom deserializer
spring:
  mvc:
    pathmatch:
      matching-strategy: ANT_PATH_MATCHER
It shows
in 'reader', line 1, column 1:
    spring:
    ^
found duplicate key spring
in 'reader', line 31, column 1:
    spring:
    ^
I tried adding mvc: in between and other things, but nothing seems to work. I don't know how to make the spring key the parent of mvc: pathmatch: without ruining the configuration above.

spring is a duplicate! Your YAML should look like this:
spring:
  cloud:
    function:
      definition: consumer;producer
    stream:
      kafka:
        bindings:
          producer-out-0:
            producer:
              configuration:
                value.serializer: <packagename>
          consumer-in-0:
            consumer:
              configuration:
                value.deserializer: <packagename>
        binder:
          brokers: localhost:9092
      bindings:
        producer-out-0:
          destination: <topic name>
          producer:
            useNativeEncoding: true # Enables using the custom serializer
        consumer-in-0:
          destination: <topic name>
          consumer:
            use-native-decoding: true # Enables using the custom deserializer
  mvc:
    pathmatch:
      matching-strategy: ANT_PATH_MATCHER
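If you would rather keep the two blocks visually separate, a rough alternative (assuming Spring Boot 2.4 or later, which processes multi-document YAML files in order and merges them) is to split the file with a --- document separator instead of repeating the top-level key:
spring:
  cloud:
    # ... the stream/kafka configuration from above ...
---
spring:
  mvc:
    pathmatch:
      matching-strategy: ANT_PATH_MATCHER
Either way, each YAML document may contain a given mapping key only once.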

Related

Spring Cloud Kafka Stream: Publishing to DLQ is failing with Avro

I'm unable to publish to the DLQ topic while using ErrorHandlingDeserializer to handle errors in combination with Avro. Below is the error while publishing.
Topic TOPIC_DLT not present in metadata after 60000 ms.
ERROR KafkaConsumerDestination{consumerDestinationName='TOPIC', partitions=6, dlqName='TOPIC_DLT'}.container-0-C-1 o.s.i.h.LoggingHandler:250 - org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.cloud.stream.function.FunctionConfiguration$FunctionToDestinationBinder$1#49abe531]; nested exception is java.lang.RuntimeException: failed, failedMessage=GenericMessage
And here is the application.yml
spring:
  cloud:
    stream:
      bindings:
        process-in-0:
          destination: TOPIC
          group: groupID
      kafka:
        binder:
          brokers:
            - xxx:9092
          configuration:
            security.protocol: SASL_SSL
            sasl.mechanism: PLAIN
          jaas:
            loginModule: org.apache.kafka.common.security.plain.PlainLoginModule
            options:
              username: username
              password: pwd
          consumer-properties:
            key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
            value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            spring.deserializer.value.delegate.class: io.confluent.kafka.serializers.KafkaAvroDeserializer
          producer-properties:
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
            value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
        bindings:
          process-in-0:
            consumer:
              configuration:
                basic.auth.credentials.source: USER_INFO
                schema.registry.url: registryUrl
                schema.registry.basic.auth.user.info: user:pwd
                security.protocol: SASL_SSL
                sasl.mechanism: PLAIN
              max-attempts: 1
              dlqProducerProperties:
                configuration:
                  basic.auth.credentials.source: USER_INFO
                  schema.registry.url: registryUrl
                  schema.registry.basic.auth.user.info: user:pwd
                  key.serializer: org.apache.kafka.common.serialization.StringSerializer
                  value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
              deserializationExceptionHandler: sendToDlq
              ackEachRecord: true
              enableDlq: true
              dlqName: TOPIC_DLT
              autoCommitOnError: true
              autoCommitOffset: true
I'm using the following dependencies:
spring-cloud-dependencies - 2021.0.1
spring-boot-starter-parent - 2.6.3
spring-cloud-stream-binder-kafka
kafka-schema-registry-client - 5.3.0
kafka-avro-serializer - 5.3.0
I'm not sure what exactly I'm missing.
After going through a lot of documentation, I found out that for Spring to post to the DLQ, the original topic and the DLT topic need the same number of partitions. If that isn't possible, we need to set dlqPartitions to 1 or provide a DlqPartitionFunction bean manually. With dlqPartitions: 1, all messages go to partition 0.
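Roughly, the two options look like this (the binding name process-in-0 and TOPIC_DLT come from my config above; double-check the exact property placement against the Kafka binder docs):
spring:
  cloud:
    stream:
      kafka:
        bindings:
          process-in-0:
            consumer:
              enableDlq: true
              dlqName: TOPIC_DLT
              dlqPartitions: 1   # route every dead letter to partition 0
Or, instead of dlqPartitions, supply the partition-selection logic yourself as a bean (DlqPartitionFunction comes from org.springframework.cloud.stream.binder.kafka):
@Bean
public DlqPartitionFunction dlqPartitionFunction() {
    // always write the failed record to partition 0 of the DLT
    return (group, record, throwable) -> 0;
}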

Spring Cloud Stream connect to multiple hosts for single binder (RabbitMQ)

We are using Spring Cloud Stream to listen to multiple RabbitMQ queues, specifically with the Spring Cloud Function (SCF) model, since the spring-cloud-stream-reactive module is deprecated in favor of native support via the Spring Cloud Function programming model.
While there was a single node/host everything worked fine (application.yml snippet shared below), but the moment we try to connect to multiple nodes it fails.
Can someone guide us on how to connect to multiple hosts, or point to a related sample in the Spring Cloud documentation?
The following configuration is working as expected:
spring:
  cloud:
    stream:
      function:
        definition: function1;function2;function3
      bindings:
        function1-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit
        function2-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit
        function3-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit
      rabbit:
        bindings:
          function1-in-0:
            consumer:
              bindingRoutingKey: routing.key.1
          function2-in-0:
            consumer:
              bindingRoutingKey: routing.key.2
          function3-in-0:
            consumer:
              bindingRoutingKey: routing.key.3
        binder:
          nodes: address1
Basically, it needs to be something like the following:
spring:
  cloud:
    stream:
      function:
        definition: function1;function2;function3
      bindings:
        function1-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit1
        function2-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit2
        function3-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit3
      binder:
        rabbit1:
          function1-in-0:
            consumer:
              bindingRoutingKey: routing.key.1
          binder:
            nodes: address1
        rabbit2:
          function2-in-0:
            consumer:
              bindingRoutingKey: routing.key.2
          binder:
            nodes: address2
        rabbit3:
          function3-in-0:
            consumer:
              bindingRoutingKey: routing.key.3
          binder:
            nodes: address3
Even with just the following addition,
binders:
  rabbit1:
    type: rabbit
    environment:
      spring.spring.cloud.stream.kafka:
        binder:
          nodes: localhost
I am getting this error:
o.s.boot.SpringApplication : Application run failed
org.springframework.context.ApplicationContextException: Failed to start bean 'inputBindingLifecycle'; nested exception is java.lang.IllegalStateException: Unknown binder configuration: rabbit
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:181) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356) ~[spring-context-5.3.8.jar:5.3.8]
at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[na:na]
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:935) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:434) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:338) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1343) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1332) ~[spring-boot-2.5.2.jar:2.5.2]
at com.gap.pem.Application.main(Application.java:14) ~[main/:na]
Caused by: java.lang.IllegalStateException: Unknown binder configuration: rabbit
at org.springframework.util.Assert.state(Assert.java:76) ~[spring-core-5.3.8.jar:5.3.8]
at org.springframework.cloud.stream.binder.DefaultBinderFactory.getBinderInstance(DefaultBinderFactory.java:255) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binder.DefaultBinderFactory.doGetBinder(DefaultBinderFactory.java:224) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binder.DefaultBinderFactory.getBinder(DefaultBinderFactory.java:152) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.BindingService.getBinder(BindingService.java:386) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.BindingService.bindConsumer(BindingService.java:103) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.AbstractBindableProxyFactory.createAndBindInputs(AbstractBindableProxyFactory.java:118) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.InputBindingLifecycle.doStartWithBindable(InputBindingLifecycle.java:58) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at java.base/java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608) ~[na:na]
at org.springframework.cloud.stream.binding.AbstractBindingLifecycle.start(AbstractBindingLifecycle.java:57) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.InputBindingLifecycle.start(InputBindingLifecycle.java:34) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178) ~[spring-context-5.3.8.jar:5.3.8]
... 14 common frames omitted
Process finished with exit code 1
We have the following dependencies available:
implementation 'org.springframework.cloud:spring-cloud-stream'
implementation 'org.springframework.cloud:spring-cloud-stream-binder-kafka'
implementation 'org.springframework.cloud:spring-cloud-stream-binder-rabbit'
I struggled a lot getting this to run, especially with routing keys. In the end my solution was a bit different than shown here. In hopes that it will help someone in the future:
spring:
  cloud:
    stream:
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: some.server.com:5672,some.other.server.com:5672
                username: someUserName
                password: someUserPassword
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: some.rabbit2.server.us:5672,someother.rabbit2.server.de:5672
                username: secondRabbitUserName
                password: secondRabbitPassword
      function:
        definition: someReceiver;anotherRec
      rabbit:
        bindings:
          someReceiver-in-0:
            consumer:
              auto-bind-dlq: true
              republishToDlq: true
              bindingRoutingKey: some.routing.key.#
          anotherRec-in-0:
            consumer:
              bindingRoutingKey: finished.#
          firstAction-out-0:
            producer:
              routingKeyExpression: "'switch.'+headers.someValue+'.'+headers.someOtherValue"
              bindingRoutingKey: switch.#
          userNotification-out-0:
            producer:
              routingKeyExpression: "'switch.someKeyExpression'"
              bindingRoutingKey: switch.#
      bindings:
        anotherRec-in-0:
          destination: reciving.exchange.name
          group: some-queue-name-v7
          binder: rabbit1
        firstAction-out-0:
          destination: some.exhange.name.1
          binder: rabbit1
        someReceiver-in-0:
          destination: another.exchange.name.1
          group: queueName
          binder: rabbit2
        userNotification-out-0:
          destination: the.exchange.name
          binder: rabbit2
Adding the binders config for both rabbit1 and rabbit2 resolved the issue.
Below is the sample config which I tried and with which I was able to consume messages successfully:
spring:
  cloud:
    stream:
      function:
        definition: processFirstConsumer;processSecondConsumer
      bindings:
        processFirstConsumer-in-0:
          group: allocation
          destination: userMessage1
          binder: rabbit1
        processSecondConsumer-in-0:
          group: allocation
          destination: userMessage2
          binder: rabbit2
      binder:
        rabbit1:
          processFirstConsumer-in-0:
            consumer:
              bindingRoutingKey: routing.key.1
          binder:
            nodes: address1
        rabbit2:
          processSecondConsumer-in-0:
            consumer:
              bindingRoutingKey: routing.key.2
          binder:
            nodes: address2
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                username: guest
                password: guest
                virtual-host: /
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                username: guest
                password: guest
                virtual-host: /

Spring-cloud kafka stream schema registry

I am trying to transform an input Avro message from an input topic using functional programming (and Spring Cloud Stream), and publish a new message on an output topic.
Here is my transform function:
@Bean
public Function<KStream<String, Data>, KStream<String, Double>> evenNumberSquareProcessor() {
    return kStream -> kStream.transform(() -> new CustomProcessor(STORE_NAME), STORE_NAME);
}
The CustomProcessor is a class that implements the "Transformer" interface.
I have tried the transformation with non-Avro input and it works fine.
My difficulty is how to declare the schema registry in the application.yaml file or in the Spring application.
I have tried a lot of different configurations (it seems difficult to find the right documentation) and each time the application doesn't find the settings for schema.registry.url. I get the following error:
Error creating bean with name 'kafkaStreamsFunctionProcessorInvoker':
Invocation of init method failed; nested exception is
java.lang.IllegalStateException:
org.apache.kafka.common.config.ConfigException: Missing required
configuration "schema.registry.url" which has no default value.
Here is my application.yml file:
spring:
  cloud:
    stream:
      function:
        definition: evenNumberSquareProcessor
      bindings:
        evenNumberSquareProcessor-in-0:
          destination: input
          content-type: application/*+avro
          group: group-1
        evenNumberSquareProcessor-out-0:
          destination: output
      kafka:
        binder:
          brokers: my-cluster-kafka-bootstrap.kafka:9092
          consumer-properties:
            value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            schema.registry.url: http://localhost:8081
I have tried this configuration too:
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            brokers: my-cluster-kafka-bootstrap.kafka:9092
            configuration:
              schema.registry.url: http://localhost:8081
              default.value.serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
          bindings:
            evenNumberSquareProcessor-in-0:
              consumer:
                destination: input
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
            evenNumberSquareProcessor-out-0:
              destination: output
My Spring Boot application is declared this way, with the activation of the schema registry client:
@EnableSchemaRegistryClient
@SpringBootApplication
public class TransformApplication {
    public static void main(String[] args) {
        SpringApplication.run(TransformApplication.class, args);
    }
}
Thanks for any help you could bring to me.
Regards
CG
Configure the schema registry under configuration; then it is available to all bindings. By the way, the Avro serde goes under bindings for the specific channel; if you want a default, use the default.value.serde property. Your serde might be the wrong one, too.
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            brokers: localhost:9092
            configuration:
              schema.registry.url: http://localhost:8081
              default.value.serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
          bindings:
            process-in-0:
              consumer:
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
Don't use @EnableSchemaRegistryClient. Enable the schema registry on the Avro serde instead. In this example, I am using the Data bean from your definition. Try to follow this example:
@Service
public class CustomSerdes extends Serdes {

    private final static Map<String, String> serdeConfig = Stream.of(
            new AbstractMap.SimpleEntry<>(SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081"))
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));

    public static Serde<Data> DataAvro() {
        final Serde<Data> dataAvroSerde = new SpecificAvroSerde<>();
        // point the serde at the schema registry; isKey = false because it serializes values
        dataAvroSerde.configure(serdeConfig, false);
        return dataAvroSerde;
    }
}
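One possible place to use such a serde, since your CustomProcessor uses a state store, is when declaring that store; recent versions of the Kafka Streams binder detect StoreBuilder beans and add them to the topology. A rough sketch (StoreBuilder, Stores and Serdes come from the Kafka Streams / Kafka clients APIs):
@Bean
public StoreBuilder<KeyValueStore<String, Data>> customStore() {
    // persistent key-value store keyed by String, with schema-registry-aware Avro values
    return Stores.keyValueStoreBuilder(
            Stores.persistentKeyValueStore(STORE_NAME),
            Serdes.String(),
            CustomSerdes.DataAvro());
}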

Spring Kafka + Json + Schema Registry Confluent

Confluent Schema Registry currently supports JSON Schema.
Does Spring Kafka provide support for JSON Schema?
Avro with Spring Kafka works well using this config:
spring:
  kafka:
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
    consumer:
      group-id: template-cg
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
    properties:
      schema.registry.url: "http://localhost:8081"
But how do I configure Spring Kafka to use JSON Schema with the Confluent Schema Registry?
OK, as Gary pointed out, this is not a Spring problem. You need to configure Kafka like this:
spring:
  kafka:
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: io.confluent.kafka.serializers.json.KafkaJsonSchemaSerializer
    consumer:
      group-id: template-cg
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: io.confluent.kafka.serializers.json.KafkaJsonSchemaDeserializer
    properties:
      schema.registry.url: http://localhost:8081
Dependencies (Gradle):
repositories {
    mavenCentral()
    maven {
        url "https://packages.confluent.io/maven/"
    }
    maven {
        url "https://jitpack.io"
    }
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter'
    implementation 'org.springframework.kafka:spring-kafka'
    implementation 'io.confluent:kafka-json-schema-serializer:6.1.0'
    implementation 'com.github.everit-org.json-schema:org.everit.json.schema:1.12.2'
}
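With that configuration, producing plain POJOs goes through the normal Spring Kafka API. A rough sketch (OrderEvent and the orders topic are just example names, not from the question):
// hypothetical payload class; any Jackson-serializable POJO works
public class OrderEvent {
    public String id;
    public double amount;
}

@Service
public class OrderProducer {

    private final KafkaTemplate<String, OrderEvent> template;

    public OrderProducer(KafkaTemplate<String, OrderEvent> template) {
        this.template = template;
    }

    public void send(OrderEvent event) {
        // KafkaJsonSchemaSerializer derives a JSON Schema from the POJO and
        // registers it with the schema registry on first use
        template.send("orders", event.id, event);
    }
}
On the consumer side you will most likely also need json.value.type under spring.kafka.consumer.properties so that KafkaJsonSchemaDeserializer knows which class to deserialize into instead of returning a generic structure.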

Spring Cloud Stream Kafka Multiple Binding

I am using the Spring Cloud Stream Kafka binder to consume messages from Kafka. I am able to make my sample work with a single Kafka binder as below:
spring:
  cloud:
    stream:
      kafka:
        binder:
          consumer-properties: {enable.auto.commit: true}
          auto-create-topics: false
          brokers: <broker url>
      bindings:
        consumer:
          destination: some-topic
          group: testconsumergroup
          consumer:
            concurrency: 1
            valueSerde: JsonSerde
        producer:
          destination: some-other-topic
          producer:
            valueSerde: JsonSerde
Note that both bindings point to the same Kafka broker here. However, I have a situation where I need to publish to a topic in one Kafka cluster and also consume from a topic in a different Kafka cluster. How should I change my configuration to be able to bind to different Kafka clusters?
I tried something like this:
spring:
  cloud:
    stream:
      binders:
        defaultbinder:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster1-brokers>
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster2-brokers>
      bindings:
        consumer:
          binder: kafka1
          destination: some-topic
          group: testconsumergroup
          consumer:
            concurrency: 1
            valueSerde: JsonSerde
        producer:
          binder: defaultbinder
          destination: some-topic
          producer:
            valueSerde: JsonSerde
      kafka:
        binder:
          consumer-properties: {enable.auto.commit: true}
          auto-create-topics: false
          brokers: <cluster1-brokers>
and
spring:
  cloud:
    stream:
      binders:
        defaultbinder:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster1-brokers>
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster2-brokers>
      kafka:
        bindings:
          consumer:
            binder: kafka1
            destination: some-topic
            group: testconsumergroup
            consumer:
              concurrency: 1
              valueSerde: JsonSerde
          producer:
            binder: defaultbinder
            destination: some-topic
            producer:
              valueSerde: JsonSerde
        binder:
          consumer-properties: {enable.auto.commit: true}
          auto-create-topics: false
          brokers: <cluster1-brokers>
But neither of them seems to work.
The first configuration seems to be invalid.
For the second configuration I get the error below:
Caused by: java.lang.IllegalStateException: A default binder has been requested, but there is more than one binder available for 'org.springframework.cloud.stream.messaging.DirectWithAttributesChannel' : kafka1,defaultbinder, and no default binder has been set.
I am using the dependency 'org.springframework.cloud:spring-cloud-starter-stream-kafka:3.0.1.RELEASE' and Spring Boot 2.2.6
Please let me know how to configure multiple bindings for Kafka using Spring Cloud Stream
Update
I tried the configuration below:
spring:
  cloud:
    stream:
      binders:
        kafka2:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder.brokers: <cluster2-brokers>
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder.brokers: <cluster1-brokers>
      bindings:
        consumer:
          destination: <some-topic>
          binder: kafka1
          group: testconsumergroup
          content-type: application/json
          nativeEncoding: true
          consumer:
            concurrency: 1
            valueSerde: JsonSerde
        producer:
          destination: some-topic
          binder: kafka2
          contentType: application/json
          nativeEncoding: true
          producer:
            valueSerde: JsonSerde
The MessageStreams interface and the EventHub binding configuration are as follows:
public interface MessageStreams {

    String PRODUCER = "producer";
    String CONSUMER = "consumer";

    @Output(PRODUCER)
    MessageChannel producerChannel();

    @Input(CONSUMER)
    SubscribableChannel consumerChannel();
}

@EnableBinding(MessageStreams.class)
public class EventHubStreamsConfiguration {
}
My producer class looks like this:
@Component
@Slf4j
public class EventPublisher {

    private final MessageStreams messageStreams;

    public EventPublisher(MessageStreams messageStreams) {
        this.messageStreams = messageStreams;
    }

    public boolean publish(CustomMessage event) {
        MessageChannel messageChannel = getChannel();
        MessageBuilder messageBuilder = MessageBuilder.withPayload(event);
        boolean messageSent = messageChannel.send(messageBuilder.build());
        return messageSent;
    }

    protected MessageChannel getChannel() {
        return messageStreams.producerChannel();
    }
}
And the consumer class looks like this:
@Component
@Slf4j
public class EventHandler {

    private final MessageStreams messageStreams;

    public EventHandler(MessageStreams messageStreams) {
        this.messageStreams = messageStreams;
    }

    @StreamListener(MessageStreams.CONSUMER)
    public void handleEvent(Message<CustomMessage> message) throws Exception {
        // process the event
    }

    @Override
    @ServiceActivator(inputChannel = "some-topic.testconsumergroup.errors")
    protected void handleError(ErrorMessage errorMessage) throws Exception {
        // handle error
    }
}
I am getting the error below while trying to publish and consume messages from my test:
Dispatcher has no subscribers for channel 'application.producer'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload=byte[104], headers={contentType=application/json, timestamp=1593517340422}]
Am I missing anything? With a single cluster I am able to publish and consume messages; the issue only happens with multiple cluster bindings.
